> Also Bell experiments have proven the indeterminacy which you say is absurd. No theory of local hidden variables can describe quantum mechanics.
You say Bell’s theorem disproves realism, but then you immediately follow it up by saying it disproved local realism. Do you see how those are not the same statement? It never even crossed Bell’s mind to deny reality. He believed the conclusion of his own theorem is simply that nature is not local.
(Technically, anything explained non-locally can also be explained non-temporally instead, so it is more accurate methinks to say spatiotemporal realism is ruled out. I am not as big of a fan of thinking about it non-temporally but there are some respectable people like Avshalom Elitzur who do. Thinking about it non-locally is far more intuitive.)
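To make the Bell-experiment claim concrete, here is a minimal numerical check (my own sketch in numpy, not from the original discussion; the measurement angles are the standard CHSH choices). The singlet state’s correlations give |S| = 2√2 ≈ 2.83, above the bound of 2 that any local hidden variable theory must obey:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin measurement along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Quantum correlation <A(a) x B(b)> in the singlet state."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# Standard CHSH angle choices
a, ap, b, bp = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # ≈ 2.828, i.e. 2*sqrt(2)
```

No assignment of pre-existing local values can make |S| exceed 2; that gap is what the Bell experiments measure.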
Also, again, this is not about indeterminacy and determinacy, but about indefiniteness and definiteness, i.e. anti-realism vs realism. These are not the same things. To say something is indeterminate is merely to say it is random. To say something is indefinite is to say it doesn’t even have a value at all. Definiteness is also sometimes called realism because it amounts to object permanence: the idea that systems still possess observable properties even when they are not being directly observed in the moment.
> He’s asking where the line is between this indeterminacy and determinacy. At what scale do things move from quantum to “real” and why?
You could in principle make this non-realism make sense if you imposed some sort of well-defined physical conditions as to when particles take on real values. Bell described this as a kind of “flash” ontology because you would not have continuous definite values but “flashes” of definite values under certain conditions. But it turns out that you cannot do this without contradicting the mathematics of quantum mechanics.
These are called physical collapse models, like GRW theory. These transitions are non-reversible, even though all evolution operators in quantum mechanics are reversible. So, in principle, if you rigorously define what conditions cause this transition, you could conduct an experiment where you set up those conditions and then try to reverse it. Orthodox quantum theory and the physical collapse model would make different predictions at that point.
These models never end up being local, anyways.
The reason I say value indefiniteness is absurd as a way to interpret quantum mechanics is because it is not necessitated by the mathematics at all, and if you believe it:
1. It devolves into solipsism if you do not rigorously define a mathematical criterion as to when definite values arise, because then nothing has real values outside of you directly looking at it.
2. If you do rigorously define a criterion, then it is no longer quantum mechanics but an alternative theoretical model.
So, either it devolves into solipsism, or it is a different theory to begin with.
Bell was fine with #2 as long as people were honest about that being what they were doing. He wrote an article “Against ‘Measurement’” where he criticized the vagueness of people who claim there is a transition “at measurement” but then do not even rigorously define what qualifies as a “measurement.” He wrote positively of GRW theory in his paper “Are there Quantum Jumps?” precisely because they do give a rigorous mathematical definition of how this process takes place.
But Bell also didn’t particularly believe there was any reason to accept value indefiniteness to begin with. You can just interpret quantum mechanics as a kind of stochastic mechanics, one with non-local features, where the evolution is random but particles still have definite values at all times. The same year he published his famous theorem in the 1964 paper “On the Einstein Podolsky Rosen Paradox,” he also wrote the paper “On the Problem of Hidden Variables,” debunking von Neumann’s proof that you supposedly cannot interpret quantum mechanics in value definite terms. He also wrote a paper “Beables for Quantum Field Theory” where he shows QFT can be represented as a stochastic theory, and a paper “On the Impossible Pilot Wave” where he promoted pilot wave theory, not necessarily because he believed it, but because he saw it as a counterexample to all the supposed “proofs” that quantum mechanics cannot be interpreted as a value definite theory.
My point isn’t about randomness/indeterminacy. It is about “indefiniteness,” the claim that things have no values until you look. This either devolves into solipsism or into a theory which is not quantum mechanics. It is far simpler to just say the systems have values when you’re not looking, and you just don’t know what they are, because the random evolution of the system prevents you from tracking them. It is sort of like hitting a fork in the road: if I take either the left or right path, and you don’t know which, you wouldn’t conclude I didn’t take a path at all until you look. You would conclude that you just don’t know which it is, and maybe assign probabilities to the options. The fact that the probability distribution doesn’t contain a definite value does not demonstrate that the real world doesn’t contain a definite value, and believing there is none unnecessarily over-complicates things. And definite ≠ deterministic. Maybe the path taken is truly random, but there is a path taken.
> Not to be the 🤓 but just so we’re clear, the point of Schrödinger’s cat was to illustrate that you can’t know a quantum state until you measure it. Basically just saying “probability exists.”
That wasn’t Schrödinger’s point at all.
Schrödinger was responding to people in Bohr and von Neumann’s camp who claim that particles described mathematically by a superposition of states literally have no real observables in the real world at all. It is not just that they are random or probabilistic, but people in the “anti-realist” camp argue that they effectively no longer even exist anymore when they are described mathematically by a superposition of states. This position is sometimes called value indefiniteness.
Schrödinger was criticizing this position by pointing out that you cannot separate your beliefs about the microworld from the macroworld, because macroscopic objects like cats are also made up of particles and should follow the same rules. Hence, he puts forward a thought experiment whereby a cat would also be described mathematically in a superposition of states.
If you think a superposition of states means it no longer has real definite properties in the real world, then the cat wouldn’t have real definite properties in the real world until you open the box. Schrödinger’s point was that this is such an obvious absurdity that we should reject value indefiniteness for individual particles as well.
You say:
> The reason it’s a big deal is that this probability is a real property. One that is supposed to be only one of two states. But instead it isn’t really in a state at all until you measure it, and that’s weird.
But that is exactly the point Schrödinger was criticizing, not supporting.
Value indefiniteness / anti-realism ultimately amounts to solipsism because if particles lack real, definite, observable properties in the real world when you are not looking at them, other people are also made up of particles, so other people wouldn’t have real, definite, observable properties in the real world when you are not looking at them.
He was trying to illustrate that this position reduces to an absurdity and so we should not believe in that position.
The point is that instead of assuming it is in one state or the other, you can and often should think of both possibilities at once. This is what makes quantum computing useful.
If you perform a polar decomposition on the quantum state, you are left with a probability vector and a phase vector. The probability vector is the same kind of probability vector you use in classical probabilistic computing. Its update rule in quantum computing differs only by an additional non-linear term that depends upon the phase vector.
The “advantage” comes from the phase vector. For N qubits, there are 2^N phases. A system of 300 qubits would have 2^300 phases, which is far greater than the number of atoms in the observable universe. A single logic gate can thus manipulate far more states of the system at once, because it can manipulate these phases, and the stochastic dynamics of the bits depend upon the phases. So you can not only manipulate the phases to do calculations but, if you are clever, write the algorithm in such a way that its effect on the probability distribution allows you to read off the results from the probability distribution.
The phase vector does not contain anything probabilistic, so it contains nothing that looks like the qubit being in two places at once. That is contained in the probability vector, but there is no better reason to interpret a probability distribution as the system being in two places at once in quantum mechanics than there is in classical mechanics. The advantage comes from the phases: they can influence the stochastic perturbations of the bits, and thus the probability distribution.
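The polar decomposition described above can be sketched in a few lines (a toy illustration I’m adding; the particular state is made up):

```python
import numpy as np

# A single-qubit state as a complex amplitude vector (normalized)
psi = np.array([1 + 1j, 1 - 1j]) / 2

# Polar decomposition: split each amplitude into magnitude and phase
p = np.abs(psi) ** 2      # probability vector, same kind as in classical probabilistic computing
phases = np.angle(psi)    # phase vector, the extra quantum degree of freedom

# Recompose back into a single complex-valued vector in Cartesian form
psi_again = np.sqrt(p) * np.exp(1j * phases)

print(p)                            # [0.5, 0.5]
print(np.allclose(psi, psi_again))  # True: the decomposition is lossless
```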
> So you simply apply operations that increase or decrease the chances of certain outcomes and repeat until the answer you want has an incredibly high probability and the rest are nearly zero. Then you measure your qubit, collapsing the wave function, with a high probability that collapse will give you the answer you wanted.
Again, perform a polar decomposition on the quantum state, break it apart into the probability vector and a phase vector. Then, apply a Bayesian knowledge update using Bayes’ theorem to the probability vector, exactly the way you’d do it in classical probabilistic computing. Then, simply undo the polar decomposition, i.e. recompose it back into a single complex-valued vector in Cartesian form.
What you find is that this is mathematically equivalent to the collapse of the wavefunction. The so-called “collapse of the wavefunction” is literally just a Bayesian knowledge update on the degree of freedom of the quantum state associated with the probability distribution of the bits.
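As a sanity check of that claim, here is a sketch (my own, with a made-up state) comparing the textbook projection-and-renormalize collapse with a Bayesian update on the probability vector followed by recomposition:

```python
import numpy as np

psi = np.array([1 + 1j, 1 - 1j]) / 2   # qubit state, 50/50 over |0> and |1>

# Polar decomposition into probability vector and phase vector
p = np.abs(psi) ** 2
phases = np.angle(psi)

# Suppose outcome 0 is observed. Bayes' theorem on the probability vector:
# P(k | outcome 0) = P(outcome 0 | k) P(k) / P(outcome 0), likelihood = [1, 0]
likelihood = np.array([1.0, 0.0])
p_post = likelihood * p / (likelihood @ p)   # -> [1, 0]

# Undo the polar decomposition: recompose into Cartesian form
psi_post = np.sqrt(p_post) * np.exp(1j * phases)

# Standard textbook collapse: project onto |0> and renormalize
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
psi_collapse = P0 @ psi / np.linalg.norm(P0 @ psi)

print(np.allclose(psi_post, psi_collapse))  # True
```

For this state the two procedures give the same post-measurement vector, which is the equivalence being claimed.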
> It’s less like “the cat is both alive and dead” and more that “the terms ‘alive’ and ‘dead’ do not apply to the cat till you open the box”
Sure, but that position reduces to solipsism, because then you don’t exist with a definite value until I look at you, either. But clearly you are thinking definite thoughts when I’m not looking, right?
bunchberry@lemmy.world to Not The Onion@lemmy.world • Data Finds Republicans are Obsessed with Searching for Transgender Porn (English)
11 · 13 days ago

> I think you’re conflating mathematical and philosophical realness and then Principle of Explosion-ing your way into hating on physicists.
Waa waa boo hoo. You can cry about me criticizing crackpot quantum mysticism by saying “stop hatin’ bro 😢😢😢😢” but that doesn’t magically make your crackpot mysticism justifiable. You have the right to have incoherent mystical beliefs, but I also have the right to criticize them. If you don’t want to be criticized then don’t post them on a public forum.
> I think you’re conflating mathematical and philosophical realness and then Principle of Explosion-ing your way into hating on physicists. Quantum indefinite interpretations still result in the same mathematical predictions about observations.
Did you read what I wrote at all? This is a criticism about the crackpot anti-realist claims. Yes, you can argue that objective reality doesn’t exist, that all that exists is what you are directly observing in the direct moment of the observation and nothing exists outside of your direct gaze, and that you have a mathematical model for predicting what will show up in your direct gaze, and that this model makes the right predictions.
If that is just your own personal belief, I’d think you’re crazy, but whatever. If, however, you start lying and claiming that this is somehow implied by the linear algebra, that quantum mechanics somehow “proves” your solipsistic crackpottery, then I am going to call you out on being a crackpot quantum mystic. If you don’t want to be criticized then don’t spread your quantum mysticism on a public forum.
> so all your talk about MW saying your memory is a lie is just obvious bullshit.
Because you don’t understand the mathematics, so you don’t understand what I am talking about. You have a layman’s interpretation of MW that you got from YouTube videos, which paints it as just saying that different classical worlds occur in different parallel branches of a multiverse. In your mind, you think what MW is claiming is that if a photon has a 50%/50% chance of being reflected/transmitted at a beam splitter, then the world splits into two classical branches where in one the observer measures the photon having been reflected and in the other they measure the photon having been transmitted.
You think what I am saying is absurd because you get all your info from YouTube videos and don’t even understand what is seriously being advocated by these crackpots as you don’t actually read the academic literature on the subject. No, what they are claiming is indeed far more absurd, which is that the photon does neither of those things, it takes no real trajectories at all in 3D space in any sense, it doesn’t even exist as a distinct object in the world.
“Thus in our interpretation of the Everett theory there is no association of the particular present with any particular past. And the essential claim is that this does not matter at all. For we have no access to the past. We have only our ‘memories’ and ‘records’. But these memories and records are in fact present phenomena. The instantaneous configuration of the xs can include clusters which are markings in notebooks, or in computer memories, or in human memories. These memories can be of the initial conditions in experiments, among other things, and of the results of those experiments. The theory should account for the present correlations between these present phenomena. And in this respect we have seen it to agree with ordinary quantum mechanics, in so far as the latter is unambiguous.” … “Everett’s replacement of the past by memories is a radical solipsism—extending to the temporal dimension the replacement of everything outside my head by my impressions, of ordinary solipsism or positivism. Solipsism cannot be refuted. But if such a theory were taken seriously it would hardly be possible to take anything else seriously. So much for the social implications. It is always interesting to find that solipsists and positivists, when they have children, have life insurance.”
— John Bell, “Quantum Mechanics for Cosmologists”
MW is even more crackpot nonsense than typical anti-realist claims, because at least the solipsist believes in what they can observe in the moment. You simply cannot derive what is empirically observed from MW because it has no connection at all to the real world, and so it only reflects one’s ignorance on this subject to claim that MW actually has a formula for making empirical predictions. They simply do not.
MW is anti-realist not just in the properties you are not observing, but even in the properties you observe, and just claims reality is literally a mathematical function, like a Platonic realm but rather than all mathematics it is just one function ψ(x,t). We obviously cannot observe pure mathematical functions. You need something in the mathematical model, some mathematical symbol, that refers to something that we can empirically observe, usually called an observable, yet there are no observables in MW so there is no possibility of actually making an empirical prediction with it.
“The gigantic, universal ψ wave that contains all the possible worlds is like Hegel’s dark night in which all cows are black: it does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world. The Many Worlds interpretation does not explain them clearly. It is not enough to know the ψ wave and Schrödinger’s equation in order to define and use quantum theory: we need to specify an algebra of observables, otherwise we cannot calculate anything and there is no relation with the phenomena of our experience. The role of this algebra of observables, which is extremely clear in other interpretations, is not at all clear in the Many Worlds interpretation.”
— Carlo Rovelli, “Helgoland”
Even the crackpot solipsist’s views are more coherent than the crackpot Many Worlder’s.
Tim Maudlin has a good lecture on this fact I will link below. I’d also recommend his paper “Can the World be Only Wavefunction?”
https://www.youtube.com/watch?v=us7gbWWPUsA
Again, my criticism is not solely that these views are obviously crackpot mystical nonsense (they are). The problem with quantum mystics is not just that they are mystics, but that they pretend quantum mechanics bolsters their mystical claims. Nothing in the linear algebra of the model comes close to having the hint of an air of implying these things. If you want to believe that personally, go ahead, but stop pretending these crank views are in any way backed by physics.
The rampant spread of quantum mysticism in academic circles is a problem because these physicists who buy into it don’t always keep to themselves, many go to the media and start trying to deceive the public that solipsism is somehow proved by physics. Some even manage to get peer-reviewed papers published in academic journals claiming objective reality doesn’t exist, which then crackpot idealists like Bernardo Kastrup latch onto to “prove” we all live in a grand “cosmic consciousness” because they have an academic paper and real physicists backing their views.
When even the physics departments are becoming overrun with crackpot mystics then we have a serious problem, because the public trusts these people. I hold them to a higher standard than I would hold a random charlatan like Deepak Chopra, whom I don’t expect to tell the truth anyways. It bothers me much more when I see physicists like Chris Ferrie publishing Medium articles where he claims quantum mechanics “denies reality,” or Mithuna Yoganathan deliberately lying about the mathematics with claims repeatedly debunked in the academic literature to push the nonsense that the mathematics proves there is a multiverse “if you just take it seriously,” than it does some random Twitter user saying some quantum mystical nonsense. These people exploit their credentials to push their own mystical mumbo jumbo views.
bunchberry@lemmy.world to Not The Onion@lemmy.world • Data Finds Republicans are Obsessed with Searching for Transgender Porn (English)
11 · 14 days ago

You should generally dismiss what physicists in academia say about metaphysics, because crackpot quantum mysticism is rampantly popular and so you rarely get anything coherent from them.
I would recommend you check out my article here. Most academics in the physics departments believe in a property called “value indefiniteness” which amounts to crackpot solipsism based on poorly reasoned arguments that obviously cannot possibly be correct because Louis de Broglie presented a counterexample decades before these crackpot arguments were even made.
This is a strange phenomenon that the physicist John Bell points out in his paper “On the Impossible Pilot Wave.” The “pilot wave” theory is a model which is mathematically equivalent to standard quantum mechanics yet is value definite, and was first presented by de Broglie at the Solvay conference in 1927. Yet, despite this, academics from John von Neumann to Richard Feynman would go on to publish “impossibility theorems” trying to prove value definiteness is impossible, even though they all had a counterexample sitting in their lap.
Bell would then go on to publish several papers showing where the flaws in all their arguments are, but it had no impact on academia, and solipsism remains the overwhelmingly dominant position. Indeed “value indefiniteness” really is just a renaming of solipsism to make it sound less ridiculous. It literally means that particles have no values when you’re not looking at them, and since macroscopic objects, even other human beings, are made up of particles, it naturally applies to them as well: value indefiniteness = other people don’t exist if you’re not looking at them.
Many Worlds arose from this same crackpot delusion of physicists who recognize that solipsism is kinda silly but don’t want to give up value indefiniteness… which is literally solipsism. So they try to find a middle ground between solipsism and solipsism, and their views just end up incoherent.
Bell points out in his paper “Quantum Mechanics for Cosmologists” that Many Worlds is still basically just solipsism, but with a lot of extra baggage that confuses people as to what is even being argued, so it is not so obvious that it is. A lot of laymen falsely think Many Worlds is just the claim that there are many classical worlds. If I go to measure a photon in a superposition of both possible paths, then they think it means there will be a classical world where I perceive it on one path and another classical world where I perceive it on another path.
No, Many Worlds is even more incoherent, because no one perceives anything on any path at all. There are simply no objects which travel through 3D space within the interpretation. Consider that you walk from your living room to your bedroom, and you remember clearly that you did that. Since Many Worlds is still value indefinite, there does not exist any definite trajectories in 3D space, and so your memory has to be a complete lie. That didn’t happen. Indeed, no matter how strongly you feel that there is a computer/phone screen in front of you right now, in Many Worlds, that also must be a lie, because no objects exist in 3D space so there cannot be an object with a definite value in front of you right now.
This is what Bell saw as so absurd about it. Everything we perceive and believe we have perceived has to be largely disconnected from the real world, almost as if we’re living in a fake simulation, a brain in a vat, that is entirely disconnected from what is “actually going on.” Many Worlds is more batshit idiotic than you are led to believe from YouTube videos. It does not follow from the science at all, but follows from the crackpot quantum mysticism of “value indefiniteness,” which has no basis in the mathematics at all. Even many of the believers in academia admit that no one knows how to actually derive what we actually perceive from the interpretation.
Indeed, to some extent, it has always been both necessary and proper for man, in his thinking, to divide things up, and to separate them, so as to reduce his problems to manageable proportions; for evidently, if in our practical technical work we tried to deal with the whole of reality all at once, we would be swamped…However, when this mode of thought is applied more broadly…then man ceases to regard the resulting divisions as merely useful or convenient and begins to see and experience himself and his world as actually constituted of separately existent fragments…fragmentation is continually being brought about by the almost universal habit of taking the content of our thought for ‘a description of the world as it is’. Or we could say that, in this habit, our thought is regarded as in direct correspondence with objective reality. Since our thought is pervaded with differences and distinctions, it follows that such a habit leads us to look on these as real divisions, so that the world is then seen and experienced as actually broken up into fragments.
— David Bohm, “Wholeness and the Implicate Order”
bunchberry@lemmy.world to Ask Lemmy@lemmy.world • What is something about how people view or use technology that needs to die?
1 · 2 months ago

“Why” implies an underlying ontology. Maybe there is something underneath it, but that’s as far down as it goes, as far as we currently know. If we don’t at least tentatively accept that our current most fundamental theories describe the fundamental ontology of nature, at least as far as we currently know, then we can never believe anything about nature at all, because it would be an infinite regress. Every time we discover a new theory we can ask “well, why does it work like that?” and so it would be impossible to actually believe anything about nature.
bunchberry@lemmy.world to Ask Lemmy@lemmy.world • What is something about how people view or use technology that needs to die?
2 · 2 months ago

What is the distinction you are making between knowing the math and understanding it?
bunchberry@lemmy.world to Technology@lemmy.world • Quantum teleportation demonstrated over existing fiber networks — Deutsche Telekom’s T‑Labs used commercially available Qunnect hardware for the demo, claims 90% average accuracy (English)
2 · 2 months ago

There are nonlocal effects in quantum mechanics, but I am not sure I would consider quantum teleportation to be one of them. Quantum teleportation may look at first glance to be nonlocal, but it can be trivially fit to local hidden variable models, such as Spekkens’ toy model, which makes it at least seem to me to belong in the class of local algorithms.
You have to remember that what is being “transferred” is a statistical description, not something physically tangible, and only observable in a large sample size (an ensemble). Hence, it would be strange to think that the qubit is holding a register of its entire quantum state and that this register is disappearing and reappearing on another qubit. The total information in the quantum state only exists in an ensemble.
In an individual run of the experiment, clearly, the joint measurement of 2 bits of information and its transmission over a classical channel is not transmitting the entire quantum state, but the quantum state is not something that exists in an individual run of the experiment anyways. The total information transmitted over an ensemble is much greater and would provide sufficient information to move the statistical description of one of the qubits to another entirely locally.
The complete quantum state is transmitted through the classical channel over the whole ensemble, and not in an individual run of the experiment. Hence, it can be replicated in a local model. It only looks like more than 2 bits of data is moving from one qubit to the other if you treat the quantum state as if it actually is a real physical property of a single qubit, because obviously that is not something that can be specified with 2 bits of information, but an ensemble can indeed encode a continuous distribution.
This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.
— Dmitry Blokhintsev
Here’s a trivially simple analogy. We describe a system in a statistical distribution of a single bit with [a; b] where a is the probability of 0 and b is the probability of 1. This is a continuous distribution and thus cannot be specified with just 1 bit of information. But we set up a protocol where I measure this bit and send you the bit’s value, and then you set your own bit to match what you received. The statistics on your bit now will also be guaranteed to be [a; b]. How is it that we transmitted a continuous statistical description that cannot be specified in just 1 bit with only 1 bit of information? Because we didn’t. In every single individual trial, we are always just transmitting 1 single bit. The statistical descriptions refer to an ensemble, and so you have to consider the amount of information actually transmitted over the ensemble.
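The analogy can be simulated directly (a toy sketch; the distribution [0.3; 0.7] and the trial count are made up):

```python
import random

random.seed(0)
a, b = 0.3, 0.7   # statistical description [a; b] of the sender's bit

received = []
for _ in range(100_000):
    bit = 0 if random.random() < a else 1  # sender measures: one definite bit value
    received.append(bit)                   # exactly 1 bit crosses the channel; receiver copies it

freq0 = received.count(0) / len(received)
print(freq0)  # close to 0.3: the receiver's ensemble reproduces [a; b]
```

No single trial ever carried more than one bit; the continuous description [a; b] only shows up across the ensemble.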
A qubit’s quantum state has 2 degrees of freedom, as it can be specified on the Bloch sphere with just two angles (a polar and an azimuthal angle). The amount of data transmitted over the classical channel is 2 bits. Over an ensemble, those 2 bits become 2 continuous values, and thus the classical channel over an ensemble carries exactly the degrees of freedom needed to describe the complete quantum state of a single qubit.
bunchberry@lemmy.world to Technology@lemmy.world • I’m a Computing Dummy Who Tried Quantum Coding. Here’s What Happened (English)
23 · 2 months ago

I got interested in quantum computing as a way to combat quantum mysticism. Quantum mystics love to use quantum mechanics to justify their mystical claims, like quantum immortality, quantum consciousness, quantum healing, etc. Some mystics use quantum mechanics to “prove” things like we all live inside of a big “cosmic consciousness” and there is no objective reality, and they often reference papers published in the actual academic literature.
These papers on quantum foundations are almost universally framed in terms of a quantum circuit, because this is the domain of quantum information science; they give you a logical argument that there is something “weird” about quantum mechanics’ logical structure, as shown in things like Bell’s theorem, the Frauchiger-Renner paradox, the Elitzur-Vaidman paradox, etc.
If a person claims something mystical and sends you a paper, and you can’t understand the paper, how are you supposed to respond? But you can use quantum computing as a tool to help you learn quantum information science so that you can eventually parse the paper, and then you can know how to rebut their mystical claims. But without actually studying the mathematics you will be at a loss.
You have to put some effort into understanding the mathematics. If you just go vaguely off of what you see in YouTube videos then you’re not going to understand what is actually being talked about. You can go through for example IBM’s courses on the basics of quantum computing and read a textbook on quantum computing and it gives you the foundations in quantum information science needed to actually parse the logical arguments in these papers and what they are really trying to say.
bunchberry@lemmy.world to Technology@lemmy.world • America Isn’t Ready for What AI Will Do to Jobs (English)
4 · 2 months ago

Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they cleverly found a way to use the space more effectively, they would claim it was as if they had packed more transistors into the same area and call it a smaller-nanometer process node, even though they quite literally did not shrink the transistor size or increase the number of transistors on the die.
This actually started to happen around 2015. These clever tricks were always exaggerated because there isn’t an objective metric to say that a particular trick on a 20nm node really gets you performance equivalent to 14nm node, so it gave you huge leeway for exaggeration. In reality, actual performance gains drastically have started to slow down since then, and the cracks have really started to show when you look at the 5000 series GPUs from Nvidia.
The 5090 is only super powerful because the die size is larger so it fits more transistors on the die, not because they actually fit more per nanometer. If you account for the die size, it’s actually even less efficient than the 4090 and significantly less efficient than the 3090. In order to pretend there have been upgrades, Nvidia has been releasing software for the GPUs for AI frame rendering and artificially locking the AI software behind the newer series GPUs. The program Lossless Scaling proves that you can in theory run AI frame rendering on any GPU, even ones from over a decade ago, and that Nvidia’s locking of it behind a specific GPU is not hardware limitation but them trying to make up for lack of actual improvements in the GPU die.
Chip improvements have drastically slowed for over a decade now, and the industry just keeps trying to paper it over.
the world is run by PDF files
bunchberry@lemmy.world to Ask Lemmy@lemmy.world • Do you think intergalactic travel will ever be possible?
6 points · 2 months ago

Speed of light limitation. Andromeda is 2.5 million light years away. Even if someone debunked special relativity and found you could go faster than light, you would be moving so fast relative to cosmic dust particles that they would destroy the ship. So, either way, you cannot practically go faster than the speed of light.
The only way we could have intergalactic travel is a one-way trip: humanity here on Earth would be long gone by the time the ship reached its destination, so we could never know whether it succeeded.
There is a lot of confusion because physicists changed the meaning of “locality” since the EPR paper to refer to relativistic locality (sending information faster than light), which was not what Einstein was on about. Einstein’s locality is probably most succinctly summarized as follows:
- ∀x: Var(Pr(S’|S)) = Var(Pr(S’|S ∪ {x})), where x ∉ S
In this case, assume a bunch of particles are interacting, and S is the state of a system of interacting particles prior to the interaction, and S’ is the state of the system of interacting particles after the interaction. We then want to look at the variance (statistical spread) of the probability distribution of S’ preconditioned on S, that is to say, a prediction of the state of the system after the interaction given complete knowledge of the state of the system prior to interaction, and then compare that to the variance of another prediction where we precondition both on S and x, where x is the state of something outside of the system of interacting particles.
If a theory is local, then the two should always be equal for any possible value of x. This is because the outcome of a local interaction should only be determined by everything participating in the local interaction, that is to say, S, so preconditioning on complete knowledge of the initial states of everything participating in the interaction should give you sufficient knowledge to predict the outcome of the interaction, that is to say, S’, to the best that is physically possible.
If you can include something outside of the interaction, that is to say, x, and it can improve your prediction further, then it must be nonlocal because it contains irreducible dependence upon something not involved in the interaction.
The point of the EPR paper is that if you don’t assume hidden variables, then this definition of locality is broken. Two entangled particles are said to be ontologically in a superposition of states, meaning that having complete knowledge of their states prior to the measurement interaction can only predict them both with a distribution of 50%/50%, but if you precondition on knowledge of an observer’s measurement far away, then you can improve your prediction of your measurement of your local particle to 100% certainty, which violates this locality condition.
This is still local in the classical case where the only reason you could improve your prediction is because you were ignorant of the initial state of the particle to begin with, so you never preconditioned on the complete initial state of the system to begin with. Hence, adding hidden variables would, supposedly, restore this notion of locality, which we can call causal locality as opposed to relativistic locality.
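The classical hidden-variable picture can be sketched numerically (a toy Python illustration I am assuming for concreteness, not anything from the EPR paper itself): with a shared hidden variable, conditioning on the distant outcome only helps when your knowledge of the initial state S was incomplete.

```python
import random

random.seed(0)
N = 100_000
# Shared hidden variable: each particle's measurement outcome simply reveals it.
lams = [random.randint(0, 1) for _ in range(N)]
alice = list(lams)  # Alice's outcome equals the hidden variable
bob = list(lams)    # Bob's outcome is perfectly correlated with Alice's

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Incomplete knowledge of S (hidden variable unknown): Alice's outcome looks
# like a fair coin, and conditioning on Bob's distant outcome x removes all
# the spread -- which is why EPR looks nonlocal without hidden variables.
print(variance(alice))                                # ~0.25
print(variance([a - b for a, b in zip(alice, bob)]))  # 0.0: x fully predicts a

# Complete knowledge of S (hidden variable included): the prediction is
# already exact, so the distant outcome x cannot improve it further.
print(variance([a - l for a, l in zip(alice, lams)])) # 0.0
```

The residual variance given the hidden variable is already zero, so the locality condition holds; without the hidden variable, the distant outcome genuinely improves the prediction.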
What Bell’s theorem proves is that adding hidden variables does not restore causal locality. This is because, as he proves, in quantum mechanics, the state of an individual particle in a collection of entangled particles can have dependence upon the configuration of a collection of measurement devices, even though it only ever interacts with an individual measurement device. That means this violation of causal locality is intrinsic to the mathematics of the theory and is not something that just arises due to a lack of hidden variables.
Even worse, as Bell says, adding hidden variables appears to make it “grossly nonlocal,” which by that he meant it violates relativistic locality as well. At least without introducing something like superdeterminism or retrocausality.
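For concreteness, the standard CHSH form of Bell’s argument can be checked in a few lines (a sketch using the textbook singlet-state correlation E(a, b) = −cos(a − b); the angle choices are the usual optimal ones, assumed here for illustration):

```python
import math

def E(a, b):
    # Quantum correlation for the singlet state: E(a, b) = -cos(a - b)
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2               # Alice's two measurement angles
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two measurement angles

# CHSH combination; any local hidden-variable theory is bounded by 2.
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2.828..., i.e. 2*sqrt(2) > 2
```

The value 2√2 exceeds the bound of 2 that any local hidden-variable model must satisfy, which is the quantitative content of the claim above.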
Many Worlds is a rather bizarre interpretation.
1) Even the creator of MWI, Hugh Everett, agreed that the wavefunction is relative and wrote a paper on that, but then he also claims there is a “universal” wavefunction. That makes about as much sense as saying there is a “universal velocity” in Galilean relativity. There is never a mathematical justification for how there can possibly be a universal wavefunction. It is just asserted that there is. It does not fall out of QM naturally, a theory which only deals with relative wavefunctions.
This paper shows some technical arguments for the impossibility of a universal wavefunction:
2) The EPR paper proves that the statistical predictions of QM violate causal locality (although not relativistic locality), and MWI proponents claim they can get around this by assuming that the statistical predictions, given by the Born rule, are just a subjective illusion. But this makes no sense. A subjective illusion still arises somehow, it still needs a physical explanation, and any attempt to give a physical explanation must necessarily reproduce Born rule probabilities, which as Einstein already proved, violate causal locality. Some try to redefine locality to be in terms of relativistic locality (no-communication), but even Copenhagen is local in that sense!
These papers show how interpretations like MWI simply cannot be compatible with causal locality:
3) MWI proponents also forget that nobody on earth has ever seen a wavefunction. The wavefunction is just a mathematical tool used to predict the behavior of particles with definite values. The Born rule wasn’t added for fun. Einstein had lamented how, if you evolve a radioactive atom according to the Schrodinger equation, it never at any point evolves into anything that looks like decay or no-decay. The evolved wavefunction is very different from anything we have actually ever observed, and you can only tie it back to what we observe with the Born rule, which then converts the wavefunction into a probability distribution over decay or no-decay.
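The role of the Born rule can be sketched with made-up numbers (illustrative amplitudes, not a real decay calculation): the evolved state is a list of complex amplitudes that looks like neither outcome, and only squaring their magnitudes produces the probability distribution over outcomes we actually observe.

```python
import cmath

# Hypothetical complex amplitudes after some Schrodinger evolution;
# the phases and magnitudes here are arbitrary illustrative choices.
amp_no_decay = cmath.exp(1j * 0.3) * (3 / 5)
amp_decay = cmath.exp(1j * 1.1) * (4 / 5)

# The wavefunction itself resembles neither "decayed" nor "not decayed".
# The Born rule converts amplitudes into probabilities: P = |amplitude|^2.
p_no_decay = abs(amp_no_decay) ** 2   # 0.36
p_decay = abs(amp_decay) ** 2         # 0.64
print(p_no_decay + p_decay)           # ~1.0 (normalized)
```

Throw out that last step and nothing in the formalism connects to observation, which is the point being made above.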
If you throw out the Born rule, then you are thus left with a mathematical description of the universe which has no relationship to anything we ever observe or can ever observe. This lecture below explains this problem in more detail:
bunchberry@lemmy.world to Technology@lemmy.world • Jeff Bezos said the quiet part out loud — hopes that you'll give up your PC to rent one from the cloud (English)
2 points · 3 months ago

It’s… literally the opposite. The giant AI models with trillions of parameters are not something you can run without spending many thousands of dollars, and quantum computers cost millions. These are definitely not services that are going to fall into the hands of everyday people. At best you get small AI models.
bunchberry@lemmy.world to Technology@lemmy.world • How we get to 1 nanometer chips and beyond (English)
1 point · 3 months ago

The reason quantum computers are theoretically faster is the non-separable nature of quantum systems.
Imagine you have a classical computer where some logic gates flip bits randomly, and multi-bit logic gates can flip them randomly but in a correlated way. These kinds of computers exist; they are called probabilistic computers, and you can represent all the bits using a vector and the logic gates using matrices called stochastic matrices.
The vector is necessarily non-separable, meaning you cannot get the right predictions if you describe the statistics of the computer with a separate vector assigned to each p-bit; you must assign a single vector to all p-bits taken together. This is because the statistics can become correlated with each other, i.e. the statistics of one p-bit depend upon another, and thus if you describe them using separate vectors you will lose information about the correlations between the p-bits.
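A minimal numerical sketch of that non-separability, assuming two perfectly correlated p-bits: the product of the per-bit marginal descriptions gets the joint statistics wrong.

```python
# Joint distribution of two perfectly correlated p-bits: always equal.
joint = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

# Marginal description of each p-bit alone: each looks like a fair coin.
p0 = sum(p for (a, _), p in joint.items() if a == 1)  # Pr(bit0 = 1) = 0.5
p1 = sum(p for (_, b), p in joint.items() if b == 1)  # Pr(bit1 = 1) = 0.5

# Separate per-bit vectors predict Pr(0, 1) = 0.25, but the true joint
# distribution says that outcome never occurs: the correlation is lost.
print(p0 * p1)        # 0.25
print(joint[(0, 1)])  # 0.0
```

The single joint vector carries information (the correlation) that no collection of per-bit vectors can reproduce.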
The p-bit vector grows in complexity exponentially as you add more p-bits to the system (complexity = 2^N where N is the number of p-bits), even though the total states of all the p-bits only grows linearly (complexity = 2N). The reason for this is purely an epistemic one. The physical system only grows in complexity linearly, but because we are ignorant of the actual state of the system (2N), we have to consider all possible configurations of the system (2^N) over an infinite number of experiments.
The exponential complexity arises from considering what physicists call an “ensemble” of individual systems. We are not considering the state of the physical system as it currently exists right now (which only has a complexity of 2N) precisely because we do not know the values of the p-bits, but we are instead considering a statistical distribution which represents repeating the same experiment an infinite number of times and distributing the results, and in such an ensemble the system would take every possible path and thus the ensemble has far more complexity (2^N).
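The counting itself is easy to check (a trivial sketch of the 2N-versus-2^N growth described above): one actual configuration of the machine stays small, while the ensemble description blows up.

```python
# One physical configuration of N p-bits is just N bits (the author's "2N"
# counts complexity linearly), while the ensemble (epistemic) description
# must track 2**N probabilities, one per possible configuration.
for N in (2, 10, 20, 30):
    print(N, 2 * N, 2 ** N)
```

Already at N = 30 the ensemble description needs over a billion entries while the physical state is only 30 bits.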
This is a classical computer with p-bits. What about a quantum computer with q-bits? It turns out that you can represent all of quantum mechanics simply by allowing probability theory to have negative numbers. If you introduce negative numbers, you get what are called quasi-probabilities, and this is enough to reproduce the logic of quantum mechanics.
You can imagine that quantum computers consist of q-bits that can be either 0 or 1 and logic gates that randomly flip their states, but rather than representing the q-bit in terms of the probability of being 0 or 1, you can represent the qubit with four numbers, the first two associated with its probability of being 0 (summing them together gives you the real probability of 0) and the second two associated with its probability of being 1 (summing them together gives you the real probability of 1).
Like normal probability theory, the numbers have to all add up to 1, being 100%, but because you have two numbers assigned to each state, you can have some quasi-probabilities be negative while the whole thing still adds up to 100%. (Note: we use two numbers instead of one to describe each state with quasi-probabilities because otherwise the introduction of negative numbers would break L1 normalization, which is a crucial feature to probability theory.)
Indeed, with that simple modification, the rest of the theory just becomes normal probability theory, and you can do everything you would normally do in normal classical probability theory, such as build probability trees and whatever to predict the behavior of the system.
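As a toy illustration (an assumed numerical example, not the actual two-number qubit representation described above), here is a quasi-stochastic update: a matrix with negative entries whose columns each sum to 1, so the total stays normalized even though individual quasi-probabilities go negative.

```python
# A "quasi-stochastic" matrix: columns sum to 1, but entries may be negative.
M = [[1.5, -0.5],
     [-0.5, 1.5]]

p = [1.0, 0.0]  # ordinary probability vector: definitely in state 0

# Apply the matrix exactly as in classical probability theory.
q = [sum(M[i][j] * p[j] for j in range(2)) for i in range(2)]
print(q)       # [1.5, -0.5]: individual entries go negative...
print(sum(q))  # 1.0: ...while normalization (the L1 constraint) is preserved
```

This is the sense in which quantum theory behaves like “probability theory with negative numbers”: the update rules are the familiar ones, only the positivity constraint on individual entries is relaxed.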
However, this is where it gets interesting.
As we said before, the exponential complexity of classical probability is assumed to be merely epistemic, because we are considering an ensemble of systems even though the physical system in reality only has linear complexity. Yet, it is possible to prove that the exponential complexity of a quasi-probabilistic system cannot be treated as epistemic. There is no classical system with linear complexity where an ensemble of that system will give you quasi-probabilistic behavior.
As you add more q-bits to a quantum computer, its complexity grows exponentially in a way that is irreducible to linear complexity. In order for a classical computer to keep up, every time an additional q-bit is added, if you want to simulate it on a classical computer, you have to increase the number of bits in a way that grows exponentially. Even after 300 q-bits, that means the complexity would be 2^N = 2^300, which means the number of bits you would need to simulate it would exceed the number of atoms in the observable universe.
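That order-of-magnitude claim is easy to verify directly (using the common ~10^80 estimate for atoms in the observable universe):

```python
# Simulating 300 q-bits classically requires tracking on the order of
# 2**300 amplitudes; compare against ~10**80 atoms in the observable
# universe (a standard order-of-magnitude estimate).
amplitudes = 2 ** 300
atoms = 10 ** 80
print(amplitudes > atoms)    # True
print(len(str(amplitudes)))  # 91 digits, i.e. 2**300 is roughly 2e90
```

So even a universe-sized classical memory falls short by ten orders of magnitude.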
This is what I mean by quantum systems being inherently “non-separable.” You cannot take an exponentially complex quantum system and imagine it as separable into an ensemble of many individual linearly complex systems. Even if it turns out that quantum mechanics is not fundamental and there are deeper deterministic dynamics, the deeper deterministic dynamics must still have exponential complexity for the physical state of the system.
In practice, this increase in complexity does not mean you can always solve problems faster. The system might be more complex, but it requires clever algorithms to figure out how to actually translate that into problem solving, and currently there are only a handful of known algorithms you can significantly speed up with quantum computers.
For reference: https://arxiv.org/abs/0711.4770
Many have worked out in the USA’s favor, though not necessarily in the favor of the people there. Take Chile: the US got a pro-US puppet in there, even though he was a mass-murdering fascist.
The US also sanctions countries it doesn’t control. Note that sanctions do not just mean “we won’t trade with you” but also “we won’t trade with anyone who trades with you,” meaning every country has to pick between the US market and the sanctioned country’s market, and since the US is the biggest economy in the world they all obviously pick the US, so sanctions practically have the effect of an economic blockade on the country. General Electric, for example, was fined millions for selling water purifiers to Cuba to help them get clean drinking water.
Hence, sometimes the economies do indeed do better after the coup, and some people use this as “proof” that the totalitarian fascists were better. In Chile, for example, the economy did improve when Pinochet took power, but Pinochet actually kept in place many of the nationalizations that occurred under Allende, including the nationalization of the copper mines. It’s obvious that the country did better economically because the US restored normal economic relations.
But it’s always a convenient excuse and it works on a lot of people. Just sanction a country into poverty, then blame the government for the poverty, then use that as an excuse to install your own government, then lift the sanctions and claim the economic improvement is because of the new government.
The US nationalists who decry the poverty in these countries and use it to call for regime change will never never never never agree to lift the sanctions first to check if that will help them get out of poverty before intervening. They will always come up with a million and one excuses as to why we can’t do that and we must try invading them first right now.
Nope. Quantum mechanics is just a statistical theory. Anything beyond that is a delusion from crackpot quantum mystics, and sadly views like yours are popular even among academics, because quantum woo permeates non-academic and academic circles alike.