Decoherence Saves Reality in a World of Relative Facts

How Decoherence Solves the Measurement Problem

bunchberry
29 min read · Aug 21, 2024

Quantum mechanics is a probabilistic theory that differs drastically from classical probability theory. These differences show up in the form of interference effects. These interference effects, on their surface, seem to imply a catastrophic breakdown in the notion of an objective reality independent of the observer due to what will be referred to here as the interference paradox. This interference paradox gives rise to what is known as the measurement problem.

Yet, as will be shown, decoherence solves this paradox and makes quantum mechanics consistent as a theory of the natural world. Indeed, despite frequent claims that decoherence fails to solve the measurement problem, it will also be shown here that decoherence is sufficient to solve the measurement problem when quantum mechanics is taken at face value.

What are Interference Effects?

The main difference between quantum and classical probability theory is that in classical probability, every event has a probability of occurring that can be represented as a positive real number between 0 and 1. In quantum mechanics, however, probability distributions are given by a list of complex-valued probability amplitudes. This means these “probabilities” can indeed be between 0 and 1, but they can also be between -1 and 0, and even between -𝒾 and 𝒾.

How do we interpret the probability of an event that is negative or even imaginary? You can visualize a complex number not on a number line but on a plane known as the complex plane. A plane is two-dimensional, and thus the complex number can be plotted as a point where the x-coordinate is determined by its real component and the y-coordinate by its imaginary component.

The likelihood of an event occurring is related to the distance the complex value is from the origin (its distance from zero). The two complex numbers 0.5 and 0.5𝒾 are equidistant from the origin and thus have an equal likelihood of occurring. Of the two complex numbers -1 and 0.5, the former is further from the origin and thus represents a greater likelihood of an event occurring.

https://mathhints.com/trigonometry/trig-form-of-a-complex-number/
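If you want to check this for yourself, here is a minimal Python sketch (not part of the toy model that follows, just an illustration) that prints the distance of each of these numbers from the origin. Note that Python writes the imaginary unit as j rather than 𝒾.

# Distance from the origin (magnitude) of a few complex numbers,
# using Python's built-in complex type.
for z in [0.5, 0.5j, -1]:
    print(z, "-> distance from origin:", abs(z))

# 0.5 and 0.5j are equidistant from the origin (0.5 each),
# while -1 sits further out (distance 1.0).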

Note that the way we are going to discuss probabilities in relation to quantum mechanics is not exactly how they are used in reality, but it is a simplification that makes it easy to follow along with for anyone with an understanding of basic concepts like multiplication and addition.

With that out of the way, consider a classical (non-quantum) machine into which you place a coin; it flips that coin so that whether it comes out heads or tails is random, but it only flips the coin if the coin is entered into the machine with heads facing up. Now, let’s say, we flip the coin before putting it into the machine. What is the probability distribution for the machine’s output?

We can say our input is Pr(H)=50% and Pr(T)=50% for the probabilities of heads and tails respectively, since we flip the coin before putting it into the machine. Next, we can consider the logic of our machine. It can be expressed by the two statements below. As we said, if the input is heads, it flips the coin; if the input is tails, it does nothing to it, and thus the coin deterministically leaves the machine with tails facing up.

if H then Pr(H)=50% and Pr(T)=50%
if T then Pr(H)=0% and Pr(T)=100%

To compute the probability distribution for the output, we just need to take each possible output and weight it by the possible inputs and then sum the two together. Keep in mind our input is Pr(H)=50% and Pr(T)=50% as we flip the coin ourselves before putting it into the machine.

In the case where it is heads, the output is Pr(H)=50% and Pr(T)=50%. If we weight this by the input Pr(H), then the output would be Pr(H)=25% and Pr(T)=25%. In the case where it is tails, the output is Pr(H)=0% and Pr(T)=100%. If we weight this by the input Pr(T), then the output would be Pr(H)=0% and Pr(T)=50%. If we add the two together, we get our final probability distribution of Pr(H)=25% and Pr(T)=75%.
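The same bookkeeping can be written as a short weighted-sum calculation. Here is a minimal sketch in Python, with the machine written as a matrix whose columns are the two “if” rules above:

import numpy as np

p_in = np.array([0.5, 0.5])        # [Pr(H), Pr(T)]: we flipped the coin ourselves

# Columns are the machine's rules: the output if the input is H, and if it is T.
machine = np.array([[0.5, 0.0],    # Pr(H out | H in)   Pr(H out | T in)
                    [0.5, 1.0]])   # Pr(T out | H in)   Pr(T out | T in)

p_out = machine @ p_in             # weight each rule by the input and sum
print(p_out)                       # [0.25 0.75] -> Pr(H)=25%, Pr(T)=75%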

Let’s consider a second machine. This machine applies the same distribution regardless of the input: whether heads or tails, it will flip the coin.

if H then Pr(H)=50% and Pr(T)=50%
if T then Pr(H)=50% and Pr(T)=50%

What if we don’t flip the coin initially but just input the coin with heads definitely facing up? We do not even have to do the calculation as it is rather obvious the output would be Pr(H)=50% and Pr(T)=50% as the machine would flip the coin.

What if we chained two of these machines together so that it flips the coin then immediately feeds it into a second machine which also flips the coin? Again, the outcome is obvious, we don’t need to even do the calculation. If you flip a coin twice, the final probability distribution is still going to be Pr(H)=50% and Pr(T)=50%.
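If you do want to verify this numerically, the same matrix bookkeeping confirms it (again, just an illustrative sketch):

import numpy as np

fair_flip = np.array([[0.5, 0.5],      # this machine flips the coin regardless of input
                      [0.5, 0.5]])
heads_in = np.array([1.0, 0.0])        # coin inserted with heads facing up

print(fair_flip @ heads_in)                # [0.5 0.5] after one machine
print(fair_flip @ fair_flip @ heads_in)    # [0.5 0.5] after two chained machines: no change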

However, consider we have a quantum coin flipper that allows for negative probability amplitudes. In such a case, we could construct a machine that has the behavior shown below. Notice how there is a negative sign on the probability for tails if and only if the input is tails.

if H then Pr(H)=50% and Pr(T)=50%
if T then Pr(H)=50% and Pr(T)=-50%

Now, for this machine, we do the same thing by putting in a coin that has heads facing up. This can be expressed by the probability distribution Pr(H)=100% and Pr(T)=0%.

In the case where it is heads, the output is Pr(H)=50% and Pr(T)=50%. The contribution from the input for heads is 100%, and so if we scale this by 100%, we get Pr(H)=50% and Pr(T)=50%.

In the case where it is tails, the output is Pr(H)=50% and Pr(T)=-50%. The contribution from the input for tails is 0%, and so if we scale this by 0%, we get Pr(H)=0% and Pr(T)=0%.

If we add these two distributions together, we get a total final distribution of Pr(H)=50% and Pr(T)=50%.
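Numerically, this first pass looks like the sketch below (following the simplified percentages used here; in real quantum mechanics the entries would be amplitudes of ±1/√2, but the shape of the calculation is the same):

import numpy as np

quantum_flip = np.array([[0.5,  0.5],     # columns are the two "if" rules above
                         [0.5, -0.5]])
heads_in = np.array([1.0, 0.0])           # Pr(H)=100%, Pr(T)=0%

print(quantum_flip @ heads_in)            # [0.5 0.5] -> same output as the classical machine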

What is interesting about this? So far, this looks identical to our classical coin flipper. The output is exactly the same. However, it gets rather interesting if we chain two of these quantum coin flipping machines together. We take the output distribution from the first machine of Pr(H)=50% and Pr(T)=50% and directly pass it into a second machine.

In the case where it is heads, the output is Pr(H)=50% and Pr(T)=50%. The contribution from the input for heads is 50%, and so if we scale this by 50%, we get Pr(H)=25% and Pr(T)=25%.

In the case where it is tails, the output is Pr(H)=50% and Pr(T)=-50%. The contribution from the input for tails is 50%, and so if we scale this by 50%, we get Pr(H)=25% and Pr(T)=-25%.

If we add these two distributions together, we get a total final distribution of Pr(H)=50% and Pr(T)=0%. If it doesn’t add up to 1, we have to normalize it, and so in this case we get Pr(H)=100% and Pr(T)=0%.

Recall that with the classical coin flipper, flipping the coin twice had the same effect as flipping it once. Yet, in the quantum coin flipper, flipping it twice returns it to its initial state which was heads facing up. If we began with tails facing up rather than heads, then we would end with Pr(H)=0% and Pr(T)=100%. You can carry out the calculation beginning from tails instead if you wish and you will see this is the result.
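Here is the chained calculation carried out numerically (a sketch using the same simplified percentages), starting from both heads and tails:

import numpy as np

quantum_flip = np.array([[0.5,  0.5],
                         [0.5, -0.5]])

for start in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):    # heads up, then tails up
    out = quantum_flip @ quantum_flip @ start
    out = out / out.sum()             # normalize, as described above
    print(start, "->", out)           # [1 0] -> [1 0],  [0 1] -> [0 1]

# Note: the normalization step is only an artifact of the simplified +-50% values;
# with the exact amplitudes +-1/sqrt(2), the two machines compose to the identity
# and the output already sums to 100%.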

This is only possible if the second machine has some way to derive the initial state from the input it receives from the output of the first machine. This is possible because the output of the first machine is Pr(H)=50% and Pr(T)=50% if the input is heads, but the output of the first machine is actually Pr(H)=50% and Pr(T)=-50% if the input is tails. Notice the negative sign in the latter case. Both these probabilities have the same distance from the origin, so if we measured them in this state, they would appear exactly the same. However, if we do not measure them and pass them into a second machine, that second machine could distinguish between the two based on one of the probability distributions having a negative sign and the other one not having one. This is an example of interference and does not exist in classical probability theory.

Why is it called interference? Look at the case where we passed the output from the first machine into the second machine. The two possible inputs contributed two positive 25% values for heads, which constructively interfered to produce a final output of 50% for heads, later normalized to 100%. However, in the case of tails, the two possible inputs contributed a positive 25% and a negative 25% value for tails, which destructively interfered to produce 0% on the output.

Indeed, these kinds of interference effects are observed in real-world experiments. A mirror reflects light; a window allows light to pass through. There is a device known as a beam splitter which is halfway in between: half the light is reflected and half is passed through. If you fire a laser beam at it, half the laser light will reflect off of it and the other half will pass through it, splitting the light into two beams, each with half the intensity of the original. You can then place two mirrors on the two possible paths to reflect the light back into another beam splitter.

You can put a coating on the beam splitters such that when a beam of light reflects off of one, it picks up a slight shift in its phase. When the two beams of light come together at the second beam splitter, they again have two paths where they can be reflected or pass through. The phase offsets lead to the peaks of the waves lining up so that they constructively interfere on one of the possible paths, causing the two half-intensity light beams to become one full-intensity light beam, and destructively interfere on the other possible path, causing them to cancel out. This leads the beams at the second beam splitter to rejoin on one output and cancel out on the other.

The Mach–Zehnder interferometer. Interference effects cause the final outputs to be 100% / 0%.

This occurs even if you only pass in a single photon of light at a time. Logically, you can think of each beam splitter as the quantum machine we presented before where the outcome of the coin flip — heads or tails — is just whether or not the photon reflects off of or passes through the beam splitter.
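For completeness, here is a minimal sketch of the interferometer written with the exact amplitudes ±1/√2 rather than the simplified ±50%; the likelihood of each outcome is the squared distance of its amplitude from the origin (a squaring rule we will come back to later with the Born rule):

import numpy as np

# One beam splitter as an amplitude matrix: the two entries of the state
# vector are the amplitudes for the two paths / output ports.
BS = np.array([[1,  1],
               [1, -1]]) / np.sqrt(2)

photon_in = np.array([1.0, 0.0])       # photon enters on one input port

amps_out = BS @ BS @ photon_in         # first splitter, then second splitter
print(np.abs(amps_out) ** 2)           # [1. 0.] -> every photon exits the same port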

The Interference Paradox

Now since we have introduced interference, what is the interference paradox? If the outcome of an experiment is deterministic, then it always, in reality, has a 100% probability of occurring. Any other probability would be due to our own ignorance. We say a coin in the air only has a 50% chance of being heads and a 50% chance of being tails just because we are ignorant of it and thus cannot predict its outcome. If the coin lands and we see the outcome, then we know for certain it is 100% for whatever outcome we saw: for example, if we saw heads, we can say with certainty that it is 100% heads and 0% tails.

If the entire universe is deterministic, then the probability distribution for any real outcome is always 100%, and the probabilities we assign are, again, due to our ignorance. Yet, the probability distribution of 100% is a positive real value. If the outcome for everything is 100%, then there can be no interference effects, because nothing could ever have a negative or imaginary term. So, we might say, maybe the universe is just fundamentally probabilistic. However, it is not so simple.

Even if the universe is fundamentally probabilistic, we still update our probability to 100% whenever we observe the outcome of an experiment. If I flip a quantum coin that has a 50% chance of being heads and a -50% chance of being tails, and it lands on, let’s say, tails, I am going to update my probability distribution to be 100% tails. Again, 100% has no negative or imaginary components, so any potential for interference from that quantum coin disappears.

Indeed, we see this in reality. Let’s go back to the Mach–Zehnder interferometer. Imagine that you are curious as to how the interference effects work, so you place two whichway detectors on the two possible paths in between the two beam splitters to see the direction the photon is taking. If you do this, then whatever outcome you observe, you have to update its probability to 100%, which does not have any negative or imaginary components, and thus, it could not interfere with itself. What you would expect, then, is that when the photons reach the second beam splitter, the interference effects are lost.

The Mach–Zehnder interferometer with whichway detectors. Interference effects are lost by the presence of these detectors and thus the final output is 50% / 50%.

Indeed, this is actually what is seen in experiments. If you measure the values of particles midway through an experiment, then they lose their ability to interfere with themselves. Interference effects disappear and they behave more like what you would expect classically. The output of the experiment above is exactly the same as when we chained our two classical coin flipping machines together.
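The same sketch as above, modified so that the path is observed between the two splitters (this models only the bookkeeping of the calculation, not any particular detector hardware): once each path’s likelihood is updated to an ordinary positive probability, we add probabilities rather than amplitudes from that point on, and the cancellation never happens.

import numpy as np

BS = np.array([[1,  1],
               [1, -1]]) / np.sqrt(2)
photon_in = np.array([1.0, 0.0])

# Which-way detectors between the splitters: square the amplitudes there,
# turning them into ordinary probabilities with no sign left to cancel.
probs_mid = np.abs(BS @ photon_in) ** 2       # [0.5 0.5] at the detectors
probs_out = (np.abs(BS) ** 2) @ probs_mid     # the second splitter now acts classically
print(probs_out)                              # [0.5 0.5] -> the interference is gone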

To measure something just means to interact with it, and to interact with it requires that the particle really does have a definite, deterministic value at the moment of interaction. You cannot interact with something that has no value; that doesn’t make sense. Thus, so far, it actually makes perfectly logical sense that interacting with the particle halfway through the experiment would destroy its ability to interfere with itself.

So, if this makes logical sense, how does it lead to paradox?

Consider that two observers, Alice and Bob, are playing around with the Mach–Zehnder interferometer. Alice sets up measuring devices to measure the whichway information of the particles, but Bob never looks at the whichway information, although he is aware Alice measured it. Bob thus assigns the path the photons take a probability distribution of 50% and -50%, like with our quantum coin machine where there is a negative sign.

Alice, however, assigns the path of all the photons with 100% for the paths she actually observed. There is a fact about the path of the photon from her perspective: it is definite and determined. For Bob, it is not. Since we said that quantum mechanics has to be fundamentally random in order to preserve interference effects, there really isn’t a fact about the photons’ paths from Bob’s perspective, but there is for Alice. In relational quantum mechanics, this is referred to as the relativity of facts, as certain information may simply not exist yet from certain frames of reference.

Indeed, the relativity of facts is inevitable if we take quantum mechanics at face value. It is not an additional assumption but a necessary consequence of the theory. You cannot say that when Alice measures the particle’s position it takes on a definite value and so Bob’s probabilistic description is only due to his lack of knowledge of what Alice measured. Recall that the photon also interacted with a beam splitter. To say Alice’s measurement causes the probability distribution to “collapse” into a definite value for all observers is to suggest something special that differentiates between measurements and interactions in general where only the former category can cause this “collapse.” You would need to then use this distinction to demarcate between Alice’s measurement and the photon’s interaction with the beam splitter.

However, if measurement is some sort of special interaction, then you need a theory of measurement. This is sometimes referred to as the measurement problem as quantum mechanics does not contain any such theory of measurement; it does not define what a “measurement” actually constitutes. A theory of measurement that would demarcate a precise point when interference effects disappear is an objective collapse model. These models are not quantum mechanics but new theories which make new predictions. Recall that, again, the presence or absence of interference effects can produce different observable outcomes.

Any dividing line you draw between what interactions can create interference effects and what cannot (or, in some cases, how long these effects can persist) inherently leads to limitations on the scalability of interference effects. Avoiding the relativity of facts thus requires not only additional assumptions; trying to avoid it also changes the predictions of quantum mechanics, so you are no longer even working with the same theory. The relativity of facts is just a feature of the formalism of the theory and is unavoidable without modifying it.

There currently exists no evidence for limitations on the scalability of interference effects. There is no evidence for objective collapse theories. They are largely speculative. If we take quantum mechanics at face value, then no interaction is more special than any other, whether particles are interacting with beam splitters or with a person measuring them. It remains probabilistic all the way up until it is either measured directly, or measured indirectly by measuring something that measured it. From Bob’s perspective, the path a particular photon takes is fundamentally undetermined until he either measures it himself or asks Alice for her measurement results.

Here is the paradox: From Alice’s perspective, she would predict no interference effects on the output. She assigned all the photons 100% probability for a particular path where there are no negative signs involved. She would expect the photons at the second beam splitter to be reflected or pass through with a probability distribution of 50%/50%. Bob, however, would predict interference effects would cause the distribution to be 100%/0%.

Indeed, Bob would not just describe the photons as still having a probabilistic distribution of 50% and -50%, but he would also predict that if we were to ask Alice what she observed for a particular photon, there would be a 50% chance Alice would say the photon was reflected by the first beam splitter and a -50% chance she would say she saw it pass through. He would also have to say that not only could the photons still exhibit interference effects, but so could Alice herself, a conclusion Alice would not draw.

Alice’s and Bob’s assigned probability distribution, Bob’s being a dotted line as it is inferred.

This seems to imply a catastrophic breakdown in our notion of an objective reality as experiments based on these facts would produce different outcomes: Alice and Bob would see different “worlds” so to speak and there would be no consistent reality.

There is another form this paradox can take. Let’s say you have a quantum computer that relies on interference effects. If you measure the particles constantly, they would always have definite values of 100% without any negative or imaginary components, and thus could never interfere with themselves, and so you could never have interference effects. You thus have to not make any measurements until the full computation is complete. All is good.

However, every time you apply a logic gate in a quantum computer, the particles inside are undergoing interactions. For there to be interactions, there must be definite values for those interactions, at least from the frame of reference of those particles. This alone is not paradoxical if we consider facts to be relative, as it makes perfect sense that, relative to the observer on the outside, there is no fact about the system until they interact with it, but from the reference frame of the particles on the inside, there is a fact as to its outcome.

The paradox comes in when we consider that if there is a fact to the outcome of the reference frame of the particles on the inside, then from the inside, there would be no interference effects, and thus the quantum computation could not occur, while there would indeed be interference effects from the point of view on the outside and quantum computation could occur.

Decoherence Preserves Reality

Whenever we observe something, we typically don’t include ourselves in what we say we saw. We don’t say, “I saw a tree and myself.” You can’t typically see yourself except in a mirror, and you usually are not in front of a mirror. When we describe things, we typically only talk about what we see and leave ourselves out. “I saw a tree.” This is the first-person point of view.

In the paradox with Alice, Bob, and the photons, Alice’s point of view on the photons is from the first-person. Alice observes the photons. When Bob considers what Alice will tell him if he asks about her measurements, he is considering the system from a third-person perspective. He would be a third party to both Alice and the photons, contemplating them from the outside. He knows, from the outside, that Alice interacted with the photons and assigns probabilities to both Alice and the photons.

The term “interaction” has third-person connotations. From the first-person, Alice’s interaction with the photons is an observation. When we observe something, we are just speaking about interacting with it from the first-person. When we are in the third-person, we can describe two systems we are observing from the outside as an interaction. Alice interacted with the photons from Bob’s perspective; Alice observed the photons from her perspective. Really, the two words ultimately mean the same thing; only a switch in perspective is involved.

This is important to keep in mind: to observe something is to interact with it as described from a first-person perspective, and to interact with something is to observe it as described from a third-person perspective by an outside observer. What the interference paradox shows is a seeming contradiction between a system when described from the first-person and when described from the third-person. You seem to get different answers as to whether or not there can be interference effects depending on the reference frame you choose when describing the same system.

Decoherence allows us to reconcile this contradiction. What is decoherence? Decoherence is simply a mathematical fact regarding entanglement. Recall how we said Bob would know Alice has a 50% chance of telling him she saw a particular photon be reflected and a -50% chance of telling him that she observed it pass through the first beam splitter. These are the same exact probabilities that he would assign to the photons as well: he would expect a 50% probability for the photons to be reflected and a -50% chance of them passing through the first beam splitter.

Indeed, not only are the probabilities the same, but they should align. We would not expect a photon to be reflected by the beam splitter and Alice to tell Bob it passed through. Bob would expect Alice to report to him whatever she saw if he asks. Therefore, it is not true that these probability distributions apply to Alice and the photons separately; they apply only collectively. Collectively, Bob would say the only two possible outcomes are 50% that both the photon is reflected and Alice reports that it was reflected, and -50% that both the photon passes through and Alice says it passed through. When two systems can only be described collectively like this, they are said to be entangled with one another.

Bob’s probability distribution should be applied to Alice and the particle together as an entangled system.
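Written out explicitly (a sketch using exact amplitudes in place of the simplified ±50%), Bob’s collective description assigns nonzero values only to the two matched combinations:

import numpy as np

# Basis order for the joint (photon, Alice) system:
#   (reflected, "saw reflected"), (reflected, "saw passed"),
#   (passed,    "saw reflected"), (passed,    "saw passed")
joint = np.array([1/np.sqrt(2), 0.0, 0.0, -1/np.sqrt(2)])

print(np.abs(joint) ** 2)    # [0.5 0.  0.  0.5] -> only the matched outcomes can occur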

Whenever two systems interact in such a way where one system, in a sense, imprints its value onto another, they become entangled. Alice’s interaction with the photon imprints the photon’s state onto her measuring device, which she observes, which in turn imprints that state onto her brain. We can also talk about this for single particles: you can imagine two electrons X and Y where X interacts with Y such that the spin value of Y becomes the same as the spin value of X. Whenever two systems interact in such a way, they become entangled with each other. Entanglement is a third-person phenomenon. Bob describes Alice and the photon as entangled, but Alice would not describe herself as entangled with the photon.
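For the two-electron example, here is a minimal sketch. It models the imprinting interaction as a controlled-NOT gate, which is one standard way to write “Y copies X’s value” and is an assumption about the specific interaction rather than something required by the argument:

import numpy as np

# Basis order: |X down, Y down>, |X down, Y up>, |X up, Y down>, |X up, Y up>.
# The controlled-NOT flips Y exactly when X is "up", so Y ends up copying X.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

x = np.array([1, 1]) / np.sqrt(2)    # X in an equal superposition of down and up
y = np.array([1, 0])                 # Y definitely down

before = np.kron(x, y)               # independent (product) state before the interaction
after = CNOT @ before
print(after)                         # [0.707 0 0 0.707] -> X and Y are now entangled:
                                     # "both down" or "both up", nothing in between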

Now, what does decoherence tell us? What decoherence shows is that Bob is correct to describe Alice and the photons as an entangled system where their probabilities are described collectively. Describing them collectively is correct. However, where the error in Bob’s reasoning comes in is also assigning these probabilities individually to Alice and the photons. The probability distribution of 50% and -50% only applies to Alice and the photons taken collectively as an entangled system. It cannot be applied to any of the two individually.

To correctly know the state of a subsystem in an entangled system, Bob has to apply a mathematical operation where he traces out (ignores) part of the system not under consideration. In this case, he describes Alice and the photons collectively with a probability distribution of 50% and -50%. However, if Bob wants to consider the behavior of the photon on its own and how it will actually interact with the second beam splitter, he has to trace out Alice. When he traces out Alice, he finds that the probability distribution is not 50% and -50% for the path the photon takes, but 50% and 50%. He gets only positive real-valued probabilities: the interference effects are lost.

When Bob considers Alice and the particle collectively, there are still interference effects, but when he considers them individually in isolation, there are no interference effects.
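Here is the partial-trace calculation itself, as a sketch (exact amplitudes, with the sign convention used in the text; summing over Alice’s index with np.trace plays the role of “ignoring” her):

import numpy as np

BS = np.array([[1,  1],
               [1, -1]]) / np.sqrt(2)

# The photon considered alone, with the 50% / -50% assignment from the text.
photon_alone = np.array([1, -1]) / np.sqrt(2)
rho_alone = np.outer(photon_alone, photon_alone)
print(rho_alone)             # the off-diagonal -0.5 terms are what allow interference

# The photon entangled with Alice, basis order (photon path, Alice's record).
joint = np.array([1/np.sqrt(2), 0.0, 0.0, -1/np.sqrt(2)])
rho_joint = np.outer(joint, joint).reshape(2, 2, 2, 2)

# Tracing out Alice: sum over her index and keep only the photon's.
rho_photon = np.trace(rho_joint, axis1=1, axis2=3)
print(rho_photon)            # [[0.5 0.] [0. 0.5]] -> the off-diagonals, and with them
                             # the interference terms, are gone

# Send each version through the second beam splitter and read off the outputs:
print(np.diag(BS @ rho_alone  @ BS.T))    # [0. 1.]   -> interference (one definite port)
print(np.diag(BS @ rho_photon @ BS.T))    # [0.5 0.5] -> no interference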

What hasn’t happened here is that Bob assigns a definitive outcome for the photons. It is still probabilistic. Indeed, facts are still relative. From Alice’s perspective, there is a definitive outcome. From Bob’s perspective, there is not. Yet, what has happened is that Bob will no longer make a different prediction as to how the photon will behave when it interacts with the second beam splitter. He would predict the same probabilistic distribution of photons without interference effects as Alice. Alice and the photons aren’t collectively going through the beam splitter, only the photons are, so he only has to consider the photons on their own, whereby the interference effects disappear when he traces out Alice. Alice, when she describes what she observes, typically does not include herself in her own observation, and thus implicitly “traces out” herself.

When he considers the system together, he would also expect even further consistency between himself and Alice. Again, the 50% and -50% distributions do not apply to Alice and the photons individually, but collectively. They describe the whole system at once. Thus, he would always expect Alice’s and the photon’s behavior to be aligned: he would not predict that it’s possible, for example, for a particular photon to be reflected while Alice tells him it passed through (assuming she’s telling the truth and her equipment is fully functional). He would expect how Alice reports on what she sees about the particle to match what he would see if he looked at it himself. He would expect their two realities to coincide.

Indeed, decoherence allows us to extend this to all interactions without having to give a special role to that of “measurement” or a conscious human observer. Whenever any two physical systems interact — whether human beings or even fundamental particles — in such a way that one leaves an imprint of its state onto the other, we can think of the latter system as the affected system and the former as the affecting system.

If we assume the reference frame of the affected system, we would say that the affected system observed the affecting system. If we added a third reference frame from the outside looking onto the two systems, we would say that the affected system and the affecting system interacted with each other such that they became entangled with one another.

Whenever entanglement is formed, we know relative facts formed between the two systems. There is a relative fact as to the value of the affecting system from the reference frame of the affected system, but there is no fact about it from the third reference frame that is not a party to those interactions. From the perspective of the affected system, because it assigns a relative fact to the affecting system, it would give it a probability distribution of 100%, and thus would not expect it to exhibit interference effects. From the perspective of the third-person reference frame on the outside, it would describe the affecting system and the affected system as having become entangled with each other. When it traces out the affected system, it too finds that the affecting system would not exhibit interference effects.

There is always a definite value for the outcome of an interaction from some reference frame, but there is never a definite value for all outcomes of all interactions for a particular reference frame. For Schrödinger’s cat, from its reference frame, it is either dead or alive, but not both. For the outside observer, until they open the box and interact with the system, there is no fact as to the outcome from their frame of reference. It is not that the cat is in two states at once: it has no state at all.

In the case of quantum computers, it would indeed be true that every single particle interaction in the computer has a definite value, from the reference frame of the particles subject to the interaction, and therefore those particles could not exhibit interference effects. However, remember that no single reference frame is privy to all information. For any given particle, there is only a fact as to the outcome of its immediate interaction with another particle. Particles outside of its immediate interactions would, from its reference frame, still be entangled without definite values.

If somehow particles could speak, every individual particle would be privy to a definite outcome for the particles it immediately interacts with and would therefore describe those particles as no longer being able to exhibit interference effects. However, that particle would describe potentially every other particle in the quantum computer as entangled with one another. This would be true for every particle: they would all say that different particles have definite values and different particles are entangled with each other.

The fact that no single particle is privy to all the information in the system allows there to be consistency. Every particle can “observe” the particle it immediately interacts with as having a definite value without running into contradiction or destroying interference, because this definite value only exists within that particle’s reference frame and not within all the other ones.

From a single particle’s perspective in a quantum computer (black), most other qubits have no definite value relative to it (white), except for qubits it immediately interacts with or whose values it can infer through a chain of interactions (grey). These chains are limited due to the uncertainty principle. The length of the chains depends upon the nature of the specific program.

Indeed, even the programmer on the outside would describe the whole quantum computer as being in an entangled state that can exhibit interference effects until they observe it, but if they do a partial trace to isolate the state of a single particle in an interaction, they would also find that that individual particle, in that interaction, cannot exhibit interference effects. The interference effects they expect belong only to the whole system taken together collectively.

Due to the fact that quantum mechanics is fundamentally probabilistic, whether or not there is a definitive outcome depends upon reference frame. The relativity of facts only suggests that whether or not a fact is indeed a fact depends upon reference frame; it does not imply that the specific values of those facts can ever differ. They are held consistent due to decoherence and entanglement as shown here. If two observers look at the same object and communicate results, they will always tell each other that they perceived the same thing (assuming they are not lying and their equipment is not dysfunctional).

In Galilean relativity, for example, two observers can observe the same system yet assign it different velocities. Consider if Alice and Bob observe the velocity of the same object and report it to Charlie without communicating with one another. If they report different values, this is not necessarily confusing to Charlie if Charlie understands the relativity of velocity and is aware of the specifics of the separate frames of reference Alice and Bob made their observations from.

Indeed, if Bob sees Alice making a measurement on a system but cannot see the measurement outcome itself, he would report to Charlie that the two are entangled without a definite outcome, while Alice would report to Charlie a definite value for her measurement result. Charlie, again, would not be confused by this if he understands their different frames of reference. The relativity of facts does not lead to any sort of catastrophic breakdown in reality as long as, whenever a fact is reported, it is specified relative to what.

Decoherence additionally explains why quantum probabilities converge to classical probabilities on large scales, that is to say, why interference effects are rarely observable outside of the laboratory. When particles interact with the environment, they can become entangled with it. If you trace out the environment in the same way as tracing out Alice, this leaves you with particles that cannot exhibit interference effects. Hence, it is rather difficult to scale up quantum effects, as their interactions with the environment lead to the loss of interference effects unless you can take into account the whole environment. But the environment will subsequently interact with its environment, and that environment with its environment, ad infinitum. Quantum effects therefore quickly get diluted into the environment and are not practically recoverable.
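A rough sense of how fast this dilution happens: every environment particle that becomes correlated with the two branches multiplies the interference term of the system’s reduced state by the overlap of the corresponding environment states. The sketch below assumes an arbitrary overlap of 0.9 per particle, just to illustrate the exponential decay:

# Toy illustration of environmental decoherence: the interference (off-diagonal)
# term shrinks by a fixed factor for every environment particle the system
# becomes correlated with. The 0.9 overlap per particle is an arbitrary assumption.
overlap_per_particle = 0.9
for n in [1, 10, 100, 1000]:
    coherence = overlap_per_particle ** n
    print(f"{n:5d} environment particles -> remaining interference term: {coherence:.2e}")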

Is there still a measurement problem?

The probabilistic nature of quantum mechanics inherently implies the relativity of facts when taken at face value. If I flip a fundamentally random coin and I see the outcome but you do not, then the fact has a definite outcome from my perspective but not from your own.

People who still insist there is a measurement problem do so on philosophical grounds. There is nothing presented thus far that leads to any sort of mathematical inconsistency. The belief that decoherence is not sufficient to solve the measurement problem is not due to a real problem with quantum mechanics as a theory, but more so due to a distaste for the theory’s implications.

For example, if you dislike the notion that facts are relative, you may try to constrain it: facts are only relative at the scale of fundamental particles, while on large scales the world is deterministic. At some point, there is a sudden transition from the quantum world to the classical world. This is ultimately what objective collapse models are about. The interference effects “collapse” into a definite state at some point. Objective collapse models, again, are alternative theories to quantum mechanics and make different predictions, and there is currently no evidence for them.

You could also attack this notion by questioning the very premise that nature is actually probabilistic. This would require adding hidden variables to quantum mechanics which would render it incompatible with special relativity (due to Bell’s theorem). Our best theory of the natural world is not quantum mechanics but quantum field theory, which unifies quantum mechanics and special relativity. Ergo, adding hidden variables forces you to create a brand new theory which cannot reproduce exactly the predictions of quantum field theory and there is no evidence for such an alternative theory.

A third approach would be to argue that the probability distributions themselves have some sort of ontological reality. When we assign a distribution like 50% and -50% to something, we are not just making a statistical prediction, but actually describing some sort of physical entity that is 50% this way and -50% that way. When you make such a probabilistic prediction but then later perceive only a single outcome, in reality, the universe branched off into two yous, where one of them is this way and measured this outcome, and the other is that way and measured that outcome.

This is known as the Many-Worlds Interpretation. Whether or not there exists a fact as to the particular outcome of an interaction is not something relative to a particular frame of reference, but instead the answer is always “yes.” There is always a definitive answer because all outcomes always exist. It would therefore make quantum mechanics into a deterministic theory. If you predict a probability of observing two possible outcomes, well, two possible outcomes actually occur, it’s just that this means you will also split off into two copies of yourself that see different outcomes.

This notion requires adding additional mathematical complexity to quantum mechanics because, if you make this assumption, it ceases to be clear why these complex-valued probability amplitudes relate to probabilities through their distance from the origin on the complex plane (more specifically, the distance squared). Quantum mechanics just takes this as a postulate known as the Born rule. There is no explanation given other than it being just a law of nature.
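Concretely, the Born rule is just this squaring step. A small example with arbitrary amplitudes:

import numpy as np

amplitudes = np.array([0.6, 0.8j])        # an arbitrary two-outcome state
probabilities = np.abs(amplitudes) ** 2   # squared distance from the origin
print(probabilities)                      # [0.36 0.64] -> they sum to 1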

By describing these complex-valued probability amplitudes as a physical evolving entity, it becomes more complicated than that. You have to give an underlying “story” as to how the evolution of this entity actually produces the Born rule. The MWI has an issue that is somewhat different from objective collapse or hidden variable models. The latter two can easily put forward falsifiable theories; their problem is that, at worst, those theories are falsified, or, at best, there is currently no practical way to test them. For MWI, the problem is different: it simply cannot be experimentally verified at all, as this underlying story is usually construed specifically to reproduce the Born rule exactly. It makes no new predictions and thus is not falsifiable.

Now, one might say, if it makes the same predictions, why is this a problem? The problem is that the moment you give an underlying story for a law of nature, then someone else could in principle give a different underlying story that also predicts the same law, and someone else could give yet another one that also makes the same predictions. This is the problem MWI faces. There is not one Many-Worlds Interpretation, but many Many-Worlds Interpretations, and there currently seems to be no possibility to distinguish between them.

Some have tried to introduce such a possibility by modifying MWI such that it is falsifiable by making slightly different predictions than traditional quantum mechanics. However, the moment you do this, it runs into the same problem as objective collapse models and hidden variable models. That is to say, it either becomes falsified, or we lack the tools to even attempt to test it, and ultimately becomes an entirely new speculative theory which is not the same as quantum mechanics and does not make the same predictions.

The truth is, decoherence really does solve the measurement problem if we take quantum mechanics at face value without changing its predictions, without adding additional mathematical complexity, and without introducing some additional underlying story. Objective collapse models, the Many-Worlds Interpretation, and hidden variable theories are not needed. It makes quantum mechanics a consistent theory from which all nature can be interpreted in a consistent way without the introduction of a fundamental role for the measuring apparatus (excluding gravity).

I speculate a large reason for the distaste of taking quantum mechanics at face value is its probabilistic nature. Both the Many-Worlds Interpretation and hidden variable models restore absolute determinism. Objective collapse models confine that nondeterminism to small scales and therefore leave our macroscopic assumptions largely intact. For example, a few times I have heard something along the lines of, “decoherence explains why quantum probabilities converge to classical probabilities, but it is still probabilistic, therefore, it does not solve the measurement problem.” The implication is that “solving” the measurement problem implies restoring absolute determinism. There is no scientific reason to restore absolute determinism. Only a (weak) philosophical one.

What should be acknowledged, however, is that decoherence achieves more than this. It allows one to treat “measurement” as merely a subset of a much broader notion of “interaction” and “observation” (which are the same thing, just with a switch of perspective involved) which every particle is always doing with every other particle, and to then still have a consistent picture of how physical reality behaves without running into contradictions. That is to say, it really does solve the measurement problem. It does not get rid of the probabilistic nature of quantum mechanics; that is to say, it does not explain why one outcome is observed rather than another. Yet, this does not need an explanation unless you have a philosophical attachment to absolute determinism.

People who do not acknowledge this role of decoherence are really missing out on one of the most interesting facts about quantum theory: entanglement exists to prevent two observers from seeing different realities. Without entanglement, different points of view would become inconsistent with one another due to the interference paradox described here. Entanglement and decoherence allow all points of view to be consistent without running into contradictions with one another. If you get rid of “measurement” and replace it with “interaction,” which any two generic systems can engage in, you never run into any contradictions, precisely because any interaction viewed from the first-person is viewed as an observation, and any interaction viewed from the outside is viewed as entanglement. In both cases, you would predict the systems involved in the interaction would cease to exhibit interference effects, but for different reasons: in the former case, because of the update of the wave function, and in the latter case, due to decoherence.

The desire to mathematically modify quantum mechanics to add hidden variables, objective collapse, or even an underlying story for the Born rule in the form of a branching multiverse, are only necessary if you have a distaste for the notion of the probabilistic nature of quantum mechanics which inevitably leads one into the notion of the relativity of facts. If you are willing to accept the probabilistic nature of it at face value, then quantum mechanics is a perfectly consistent theory of the natural world when combined with decoherence.

(Note: I wrote this up briefly in one sitting, it is not well proof-read, so sorry if there are a lot of typos.)
