Three arguments on the measurement problem

When talking to folks about the quantum measurement problem, and its potential partial resolution by solving the set selection problem, I’ve recently been deploying three nonstandard arguments. To a large extent, these are dialectical strategies rather than unique arguments per se. That is, they are notable to me mostly because they avoid getting bogged down in some common conceptual dispute, not necessarily because they demonstrate something that doesn’t formally follow from traditional arguments. At least two of these seem new to me, in the sense that I don’t remember anyone else using them, but I strongly suspect that I’ve just appropriated them from elsewhere and forgotten. Citations to prior art are highly appreciated.

Passive quantum mechanics

There are good reasons to believe that, at the most abstract level, the practice of science doesn’t require a notion of active experiment. Rather, a completely passive observer could still in principle derive all fundamental physical theories simply by sitting around and watching. Science, at this level, is about explaining as many observations as possible starting from assumptions that are as minimal as possible. Abstractly, we frame science as a compression algorithm that tries to find the program with the smallest Kolmogorov complexity that reproduces the observed data.
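(To be a bit more precise about this standard formalization, which is not special to quantum mechanics: the Kolmogorov complexity of a data string $x$ is the length of the shortest program producing it on a fixed universal machine $U$, $K(x) = \min\{|p| \,:\, U(p) = x\}$, and the passive scientist is, roughly, searching for a short program $p$ that reproduces their entire observation record.)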

Active experiments are of course useful for at least two important reasons: (1) they gather strong evidence for causality by feeding a source of randomness into a system to test a causal model, and (2) they produce sources of data that are directly correlated with systems of interest rather than relying on highly indirect (and perhaps computationally intractable) correlations. But ultimately these are practical considerations, and an inert but extraordinarily intelligent observer could in principle derive general relativity, quantum mechanics, and field theory.[1] Call this passive science.

Now consider the measurement problem within passive science. What does quantum mechanics predict? How, even in principle, would I go from the basic theory to my continuous stream of passive observations? Classically this is doable, but it doesn’t appear to be so using the orthodox procedure of textbook quantum mechanics. Where and when are the measurements happening when I look outside and see the trees, apparently localized in phase space? What is the measurement basis?

The rigorous answer to this question, of course, would solve the set selection problem, the primary axe I grind.

The purpose of this dialectical strategy is to avoid disputes about whether quantum mechanics can get away with only predicting the outcomes of experiments. It is reasonable, of course, that the theory need not (and maybe should not) have concrete things to say about things which have no observable consequences. But this strategy highlights the fact that orthodox quantum mechanics only predicts the results of active experiments, which form a proper subset of observations.

Preferred observables

The previous strategy is useful for folks who take a modern Copenhagen interpretation, one which perhaps makes reference to decoherence but which still avoids discussing a wavefunction of the universe. (The measurement process, therefore, must take a preferred role.) In more speculative areas of physics like cosmology, there are a lot more folks who are quite comfortable talking about a wavefunction of the universe but who think all the conceptual problems have been solved — maybe after waving their hands at the decoherence literature. To them, I often just ask: What are the preferred observables, and why?

The Hilbert space of the universe is very big, and likewise for the algebra of operators on it. As far as quantum mechanics is concerned, all these observables are on an equal footing. But we can point to some very special observables (e.g., the position and momentum of macroscopic objects) that seem to be objective, easy to measure, and to follow classical equations of motion; in the terminology of Gell-Mann and Hartle, they are part of the quasiclassical realm. Yet if the only fundamental mathematical structure of the universe is the wavefunction and its unitary evolution, what singles out the average position and momentum of certain collections of particles?
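To give a schematic (and obviously model-dependent) example of what I mean by special: for a macroscopic body made of $N \sim 10^{23}$ particles, the quasiclassical variables are coarse-grained collective observables like the center of mass, $X_{\mathrm{cm}} = N^{-1} \sum_{i=1}^N x_i$, perhaps smeared over some finite resolution. But nothing in the bare formalism distinguishes $X_{\mathrm{cm}}$ from its image $U X_{\mathrm{cm}} U^\dagger$ under an arbitrary unitary $U$, even though the latter will generically look wildly nonclassical.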

Importantly, decoherence theory gives some models in which the position and momentum of certain degrees of freedom do have a preferred status for that particular model, but it does so only by assuming a particular separation of the universe into system and environment, and it’s not clear (a) why this particular split is special or (b) how long it should last (it can’t be eternal). So if you don’t point to decoherence theory, you can’t explain why the position and momentum of big objects are important, but if you do point to it, you are forced to conclude the project is incomplete. (And you can probably guess what I think would constitute a completion…)
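For concreteness, here is the flavor of result I have in mind, quoted schematically from the standard collisional decoherence analysis of Joos and Zeh: for a massive object scattering long-wavelength environmental particles, the reduced density matrix in the position basis decays as $\rho_S(x,x';t) \approx \rho_S(x,x';0)\, e^{-\Lambda t (x-x')^2}$, where $\Lambda$ is a localization rate set by the scattering. Position is preferred here only because the interaction Hamiltonian was assumed to depend on the system’s position, and only after the system-environment split was fixed by hand.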

Analogy with initial conditions

This last argument is intended as a reply to folks who are prudently suspicious of all this quantum foundations philosophizing if, as is likely, a solution to the set selection problem doesn’t make testable predictions that couldn’t be made in orthodox quantum mechanics with an insightful choice of basis.[2] A (the?!) key conceptual advance in the history of physics was drawing the distinction between dynamical laws and initial conditions. This is the sort of advance that goes almost completely unnoticed by modern practitioners, yet is crucial and nonobvious, and was much more philosophy than physics. Indeed, this distinction was about what science needs to explain, and what can be put aside as happenstance of history.

When Kepler and co. were monitoring the motion of the planets, the raw data, at an abstract level, were just a sequence of positions (or angles) at different moments in time. It is simply not a priori obvious that the initial data somehow need no explanation while the correlations between initial and final data must be explained completely (for physics to be “done”).[3] (Maybe rather than drawing a relationship between the period and semi-major axis of each planet’s orbit, it would have been better to draw a relationship between the semi-major axes of different planets.)
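For concreteness: Kepler’s third law, $T^2 \propto a^3$, relates the period $T$ and semi-major axis $a$ within a single orbit, i.e., it constrains the correlations in one planet’s trajectory. A hypothetical law relating the semi-major axes of different planets, say a simple rule connecting successive values $a_{n+1}$ and $a_n$ in the spirit of the later Titius–Bode rule, would instead be an attempt to explain the initial conditions themselves.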

Very importantly, it is also possible in the future that physics will be able to account (at least partially) for the initial conditions of the universe. (See, for instance, Feynman’s musings.) The initial state might be inferred with confidence from elegance principles, and indeed the initial state for the simplest inflationary universe model can be just the ground state for the inflaton field in a particular preferred choice of coordinates. Although the real universe requires a quantum description, it seems reasonable that the field of physics in a classical universe could first infer and describe the dynamical laws, and then later infer and describe the initial conditions.

But this is closely analogous to the distinction between the quantum unitary evolution of the wavefunction and the preferred consistent set. Indeed, the unitary evolution is simply the expression of the dynamical laws in the case of quantum mechanics, and the branches of the wavefunction specify the choice of possible initial states at the start of any quantum evolution. And, while we might be satisfied for a while only knowing the unitary evolution (at least as long as we could intuit the branch structure necessary to make predictions), it seems clear that identifying the branch structure and describing it from first principles would be highly desirable.
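Schematically, and glossing over the very ambiguities at issue: a preferred branch decomposition would express the wavefunction as $|\Psi(t)\rangle = \sum_i c_i |\psi_i(t)\rangle$ with $\langle \psi_i | \psi_j \rangle \approx \delta_{ij}$, where each branch $|\psi_i\rangle$ is quasiclassical and the decomposition is stable in the sense that branches do not recohere under further evolution. Identifying this decomposition from first principles, rather than by intuition, is just the set selection problem again.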

Note that the nondeterministic (one-to-many) nature of the branching process means this analogy isn’t perfect. There is a large discrepancy in our ability to explain initial conditions (hard) versus identify dynamical laws (easier), and classically we are able to make complete, nondisturbing measurements in order to infer initial states. These facts justify the classical physicist’s almost complete focus on the latter. On the other hand, the inherent vagueness in the textbook description of the measurement postulates cries out for explanation.

EDIT 2016-8-20: Below is an alternative draft, somewhat longer, of the same argument. Unless you’re obsessed with this stuff, skip it.

Eugene Wigner:[4]

The sharp distinction between Initial Conditions and Laws of Nature was initiated by Isaac Newton and I consider this to be one of his most important, if not the most important, accomplishment. Before Newton there was no sharp separation between the two concepts. Kepler, to whom we owe the three precise laws of planetary motion, tried to explain also the size of the planetary orbits, and their periods. After Newton’s time the sharp separation of initial conditions and laws of nature was taken for granted and rarely even mentioned. Of course, the first ones are quite arbitrary and their properties are hardly parts of physics while the recognition of the latter ones are the prime purpose of our science. Whether the sharp separation of the two will stay with us permanently is, of course, as uncertain as is all future development but this question will be further discussed later. Perhaps it should be mentioned here that the permanency of the validity of our deterministic laws of nature became questionable as a result of the realization, due initially to D. Zeh, that the states of macroscopic bodies are always under the influence of their environment; in our world they can not be kept separated from it.

When you observe classical evolution, it’s key to separate the regularities you need to explain from the contingent parts of the data that you don’t. Wigner has suggested that the key insight of Newton and contemporaries was not calculus or the laws of mechanics, but rather the distinction between laws and initial conditions. This distinction, which seems obvious now but historically was anything but, was critical because of the completely different amounts of regularity in each: the laws are highly regular and can be precisely and succinctly specified, while the initial conditions that we find out there in the real world are heterogeneous. In the case of Kepler/Newton, it was critical to cleanly separate out the initial conditions (which they had no hope of explaining) from the correlation between initial and final data (which they could explain).[5] Without this distinction, the whole enterprise becomes very muddy, and it’s hard to tell when you can stop postulating new “laws”, and when there is a risk of laws conflicting or being redundant. The precise version of this is the question of the well-posedness of dynamical equations.
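The canonical example of a well-posed split is Newton’s second law, $\ddot{x} = F(x)/m$: a second-order differential equation whose solution (for suitably nice $F$) exists and is unique once the initial data $(x(0), \dot{x}(0))$ are specified. The law fixes the correlations along the whole trajectory, the two free constants are the contingent part, and there is no ambiguity about where one job ends and the other begins.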

Currently, quantum mechanics is applied by inserting the measurement basis or, in the case of consistent histories, the consistent set. The implicit claim is either (1) that physics isn’t responsible for explaining where this comes from at all or (2) that we can appeal to existing notions of classicality to intuit the answer, without ever deriving classicality. So, like Kepler/Newton, this is an argument about what inputs to the theory need to be explained. (The analogy works best when you think about “initial data” for Kepler/Newton not in particular as exact measurement outcomes, but just more generally as the input we feed into the mathematical machinery.) Of course, Newton pointed out the input we shouldn’t try to explain, while here we argue that textbook quantum mechanics is not explaining enough. Nonetheless, this is the same type of question in the philosophy of science.
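For readers who want the formalism, in standard consistent-histories notation: a history $\alpha$ is a time-ordered string of projectors with class operator $C_\alpha = P^{(n)}_{\alpha_n}(t_n) \cdots P^{(1)}_{\alpha_1}(t_1)$, and a set of histories is consistent (in the medium-decoherence sense) when the decoherence functional is approximately diagonal, $D(\alpha,\beta) = \mathrm{Tr}[C_\alpha \rho\, C_\beta^\dagger] \approx \delta_{\alpha\beta}\, p_\alpha$. The formalism tells you when the $p_\alpha$ can consistently be interpreted as probabilities, but a vast number of mutually incompatible sets satisfy the condition; the set itself must still be fed in by hand.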

Most theories of physics require the practitioner to provide some input specifying something like the current state of affairs (initial conditions) or the nature of the experiment. Even for theories that make universal predictions (e.g., the power spectrum of the cosmic microwave background) which are the same for all observers, it is still generally necessary for the observer to take into account local details, which they intuit, to connect the universal prediction to their personal experience. (If I close my eyes, or cannot directly perceive microwaves, I do not expect to see the CMB without equipment. For the theory to make predictions, it must at least be augmented with a theory of the microwave equipment.)

Here we draw the distinction between that sort of input on the one hand, and, on the other hand, the sort of input that answers the question: What sorts of outcomes are possible at all? We argue that this distinction is critical, as quantum mechanics has (unacceptably) avoided providing the latter input by masking it as the (reasonable) failure to provide the former.

Classically, the answer to the question is provided by the phase space, e.g., the position and momentum of each particle. But we certainly don’t reject quantum mechanics simply because it does not specify the microscopic world completely at this level of detail. Instead, QM is lacking because it fails to specify even the coarsest outcomes or observations; it says nothing about concrete observations without additional input. This is essentially equivalent to Dowker and Kent’s observation that bare QM, or QM augmented with the consistent histories formalism, cannot predict quasiclassicality, i.e., that there will be objects localized in phase space following classical trajectories. The necessary additional input, to obtain this prediction, is some combination of decoherence theory, thermodynamics, and related ideas.

Footnotes

  1. Of course, there may be RG-reasons to think that scales decouple, and that to a good approximation the large-scale dynamics are compatible with lots of possible small-scale dynamics. But this is premised on there not being any natural processes that amplify small scale data to large scales; we are large-scale creatures, and clearly our instruments do allow us to learn about what’s really going on at small scales from time to time without breaking the laws of physics. There doesn’t seem to be any fundamental reason why rare natural amplification processes (and natural experiments) can’t suffice.
  2. Note that I do expect it to make some new predictions for practical reasons, e.g., by speeding up numerical simulations in previously intractable systems.
  3. Of course, in a strictly deterministic system like the planets on timescales shorter than their Lyapunov exponents, it’s equivalent to say that the final conditions do not need to be explained, but the correlation between initial and final conditions (and hence, the initial conditions) must be.
  4. Note that by “law of nature”, Wigner appears here to refer specifically to the dynamical equations that link initial data to final data, which we will simply call “dynamical laws” for clarity. In classical mechanics, these are unambiguously the equations of motion. In quantum mechanics, there can be disagreement about whether the initial wavefunction, which shares many properties of a probability distribution, should be considered “initial data”. We don’t need to take a position on this issue, but refer the interested reader to the literature on psi-epistemic versus psi-ontic conceptions of the wavefunction (see, e.g., Pusey et al. and Spekkens).
  5. Note that for the planets, it’s perfectly valid to summarize the unexplainable contingent data as the final data, or any other set of numbers that uniquely specifies the orbit (which has no time labels) from the trajectory (which does). The fact that initial conditions are generally “initial” is kind of a red herring, and is connected with side issues like the arrow of time and the validity of after-the-fact explanations.