How to think about Quantum Mechanics—Part 5: Superpositions and entanglement are relative concepts

[Other parts in this series: 1,2,3,4,5,6,7,8.]

People often talk about “creating entanglement” or “creating a superposition” in the laboratory, and quite rightly think about superpositions and entanglement as resources for things like quantum-enhanced measurements and quantum computing.

However, it’s often not made explicit that a superposition is only defined relative to a particular preferred basis for a Hilbert space. A superposition \vert \psi \rangle = \vert 1 \rangle + \vert 2 \rangle is implicitly a superposition relative to the preferred basis \{\vert 1 \rangle, \vert 2 \rangle\}. Schrödinger’s cat is a superposition relative to the preferred basis \{\vert \mathrm{Alive} \rangle, \vert \mathrm{Dead} \rangle\}. Without there being something special about these bases, the state \vert \psi \rangle is no more or less a superposition than \vert 1 \rangle and \vert 2 \rangle individually. Indeed, for a spin-1/2 system there is a mapping between bases for the Hilbert space and vector directions in real space (as well illustrated by the Bloch sphere); unless one specifies a preferred direction in real space to break rotational symmetry, there is no useful sense of putting that spin in a superposition.
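To see this concretely, here is a toy numerical sketch (just the two-dimensional linear algebra, written out in numpy for definiteness; the basis labels and the little helper function are conventions I am choosing for the example):

```python
import numpy as np

# z-basis states of a spin-1/2; the labels are arbitrary until something in the
# dynamics (or a chosen direction in real space) singles this basis out.
up_z, down_z = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# |psi> = (|0> + |1>)/sqrt(2): a superposition *relative to the z-basis*.
psi = (up_z + down_z) / np.sqrt(2)

# The same vector is a basis state of the x-basis {|+>, |->}, relative to
# which |0> and |1> are themselves the superpositions.
plus_x = (up_z + down_z) / np.sqrt(2)
minus_x = (up_z - down_z) / np.sqrt(2)

def amplitudes(state, basis):
    """Expansion coefficients of `state` in an orthonormal `basis`."""
    return np.round([b.conj() @ state for b in basis], 3)

print(amplitudes(psi, [up_z, down_z]))      # [0.707 0.707] -> superposition in z
print(amplitudes(psi, [plus_x, minus_x]))   # [1. 0.]       -> basis state in x
print(amplitudes(up_z, [plus_x, minus_x]))  # [0.707 0.707] -> now |0> is the superposition
```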

Likewise, entanglement is only defined relative to a particular tensor decomposition of the Hilbert space into subsystems, \mathcal{H} = \mathcal{A} \otimes \mathcal{B}. For any given (possibly mixed) state of \mathcal{H}, it’s always possible to write down an alternate decomposition \mathcal{H} = \mathcal{X} \otimes \mathcal{Y} relative to which the state has no entanglement.
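Again, a toy numerical sketch makes the point (the Bell basis and the entropy function below are illustrative choices, not anything canonical): the same four-dimensional vector is maximally entangled relative to the usual qubit split \mathcal{A} \otimes \mathcal{B}, but is a product state relative to an alternate factorization \mathcal{X} \otimes \mathcal{Y} whose product basis is declared to be the Bell basis.

```python
import numpy as np

def entanglement_entropy(state, dA=2, dB=2):
    """Entropy (in bits) of the reduced state of factor A, for a pure state on a dA*dB space."""
    s = np.linalg.svd(state.reshape(dA, dB), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Bell state (|00> + |11>)/sqrt(2) in the computational basis |00>,|01>,|10>,|11>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(entanglement_entropy(bell))  # ~1.0 bit: maximally entangled for H = A (x) B

# Alternate decomposition H = X (x) Y: declare the four Bell states to be the
# product basis |xy> of the new factors X and Y.
bell_basis = np.array([
    [1, 0, 0,  1],   # |Phi+>  ->  |00>_XY
    [1, 0, 0, -1],   # |Phi->  ->  |01>_XY
    [0, 1, 1,  0],   # |Psi+>  ->  |10>_XY
    [0, 1, -1, 0],   # |Psi->  ->  |11>_XY
]) / np.sqrt(2)

# Components of the very same vector relative to the new product basis:
state_in_XY = bell_basis.conj() @ bell
print(entanglement_entropy(state_in_XY))  # 0.0: a product state of X and Y
```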

So where do these preferred bases and subsystem structure come from? Why is it so useful to talk about these things as resources when their very existence seems to be dependent on our mathematical formalism? Generally it is because these preferred structures are determined by certain aspects of the dynamics out in the real world (as encoded in the Hamiltonian) that make certain physical operations possible and others completely infeasible.

The most common preferred bases arise from the ubiquitous phenomenon of decoherence, when certain orthogonal states of a system are approximately preserved under an interaction with the environment, while superpositions relative to those preferred states are quickly destroyed. For example, the overcomplete basis of wavepacket states is selected by environments that decohere the system through local interactions (like the scattering of photons and air molecules); wavepackets are easy to prepare, but superpositions of states widely separated in phase space are a scarce resource. Likewise, the most common grounding of a preferred subsystem tensor structure is spatial separation. Most interactions are local in space, so performing the sorts of operations and measurements that would create and reveal entanglement — i.e., non-local ones — is practically very difficult.
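Here is a cartoon of that selection process, as a toy pure-dephasing sketch (the model, couplings, and environment size are made up for illustration): the \sigma_z eigenstates of a qubit are left untouched by the environment, while the coherence between them is rapidly and quasi-irreversibly suppressed.

```python
import numpy as np

# Toy pure-dephasing model: one system qubit coupled to N environment qubits via
# H_int = sigma_z (x) sum_k g_k sigma_z^(k), each environment qubit starting in |+>.
# The sigma_z eigenstates |0>, |1> are left unchanged (the "pointer" states), while
# the coherence between them is multiplied by the factor r(t) = prod_k cos(2 g_k t).
rng = np.random.default_rng(0)
g = rng.uniform(0.5, 1.5, size=40)   # made-up couplings to 40 environment qubits

def coherence(t):
    """|rho_01(t)| for the system prepared in (|0> + |1>)/sqrt(2)."""
    return 0.5 * abs(np.prod(np.cos(2 * g * t)))

for t in [0.0, 0.1, 0.5, 2.0]:
    print(t, coherence(t))
# The off-diagonal element plunges from 0.5 toward ~0, while the populations
# <0|rho|0> = <1|rho|1> = 1/2 stay exactly constant: this environment singles
# out the sigma_z basis as the preferred one.
```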

To summarize: Superpositions are relative to bases. Entanglements are relative to subsystems.

All of this becomes especially important when we move back and forth between the first- and second-quantized pictures. Indeed, the superposition of an electron in two locations x_1 and x_2 in a first-quantized picture,

(1)   \begin{align*} \vert x_1 \rangle + \vert x_2 \rangle, \end{align*}

looks like an entangled state of the electron field in a second-quantized picture:

(2)   \begin{align*} \vert 1 \rangle_{x_1} \vert 0\rangle_{x_2} + \vert 0 \rangle_{x_1} \vert 1\rangle_{x_2}. \end{align*}

More than one article has been published in certain flashy magazines by capitalizing on this confusion.
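For what it's worth, the mode entanglement in Eq. (2) is easy to quantify in a toy calculation (the normalization and mode ordering below are conventions I am choosing for the example):

```python
import numpy as np

# Second-quantized description of Eq. (2): two field modes at x1 and x2, each with
# occupation 0 or 1, basis ordered |n_x1, n_x2> = |00>, |01>, |10>, |11>.
state = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|1>_x1 |0>_x2 + |0>_x1 |1>_x2)/sqrt(2)

# Entanglement entropy between the two modes (treating the modes as the subsystems):
sing = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
p = sing**2
print(float(-(p * np.log2(p)).sum()))   # 1.0 bit across the mode split

# The first-quantized description of the same physical state is Eq. (1),
# (|x1> + |x2>)/sqrt(2): a single particle in a superposition of two positions,
# with no subsystem decomposition in sight and hence nothing to call "entangled".
```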

See here and here [Edit: also here and here] for related discussion.


17 Comments

  1. Hi Jess,

    This series of your posts is excellent, I enjoyed it a lot! Keep it up! :-)

    I also have one comment. Since you are emphasizing the details about these things, just to remind you — grounding the existence of a preferred basis in the locality of interactions carries a hidden assumption that gravity is not quantized. In particular, if the gravitational field is in a quantum superposition of two “classical” states (i.e. states describing two different well-defined spacetime backgrounds), then the notion of locality cannot be introduced, and all interactions are explicitly nonlocal.

    So if you imagine that you have a full theory of quantum gravity, the preferred basis can only be reached as follows: first you perform a suitable measurement of the gravitational field, collapsing the total wavefunction into a state where its gravitational part describes a well-defined classical background geometry. From there, going into the semiclassical approximation (where gravity is treated classically and matter is quantized) you can introduce the concept of points of spacetime, with respect to which you can define locality. Then you write an effective Hamiltonian of interacting matter fields on this background geometry, and choose a basis in which the interactions are local (a “coordinate” basis). Finally, you can then postulate that this basis is the preferred basis.

    But in the full theory of QG the notion of locality cannot be introduced, so postulating the choice of a preferred basis based on locality doesn’t make sense. So one can either postulate it in some other way (without invoking locality), or one can be satisfied with an incomplete semiclassical explanation for the preferred basis.

    Best, :-)
    Marko

    • I’m pretty satisfied with an approximate preferred basis rather than an exact one, so the possibility that locality is only well-defined far from the Planck scale doesn’t bother me. Branching is a smooth continuous process (both in time and space) which is never exactly complete, so I don’t expect that basis to be exact.

      • Ok, then we basically agree. :-) But I just wanted to emphasize that point, because there are some vocal MWI proponents out there (cough/Sean Carroll/cough…) who ignore the QG aspect of the story, and claim that decoherence plus the existence of a preferred basis solves the measurement problem (such that the collapse postulate becomes unnecessary). So I often find it important to note that their argument comes from circular logic, since one cannot have locality (and therefore a preferred basis) until the gravitational part of the wavefunction collapses into a classical state… Etc…

        Best, :-)
        Marko

        • Well, I think folks interested in branch structure are as justified ignoring how locality emerges from a QG substrate as ornithologists are justified in ignoring biochemistry. Yes, it’s true that birds don’t function without ATP, and it’s likewise true that any branch structure derived from locality is implicitly dependent on a QG story. But I think these things can probably be cleanly separated for separate study. If it turns out that this is wrong and QG is actually very relevant to understanding the quantum-classical transition, then it will require something much more surprising than simply losing a good sense of locality near the Planck scale.

          • Hmm… I don’t understand why you keep referring to the Planck scale. The superpositions of gravitational fields can be as macroscopic as any other superpositions. If you agree that an electron can be in a superposed state of being “here” and “one meter to the right”, then you must agree the same can hold for C60, a chair, a star, a black hole… And if you look at the superposition of two Schwarzschild geometries that are, say, 1AU apart, the resulting spacetime will be nonlocal on an astrophysical scale, rather than the Planck scale.

            So the way I see it, one first needs to resolve the Q2C transition, use it to obtain a classical spacetime geometry, and only then can one speak of locality, and of a preferred basis induced by that locality.

            A lot of people seem to think that QG is something confined exclusively to Planck scale. This isn’t true — QG is about providing a quantum description of spacetime, which is as big as it can get (all the way up to cosmological scales). What is Planck-scale related is the dynamics of gravity (the form of the Hamiltonian, if you will), but things like locality, superpositions, etc. are relevant on virtually all scales, just like they are for matter fields.

            The moral of the story is that one must not rely on properties of spacetime structure (like locality) to solve the Q2C transition, but only the other way around.

            Best, :-)
            Marko

            • > The superpositions of gravitational fields can be as macroscopic as any other superpositions

              Any argument based on the bigness of superpositions is unconvincing. Cells are put in large superpositions, but we don’t need to understand the quantum-classical transition to do biology.

              > the resulting spacetime will be nonlocal on an astrophysical scale, rather than the Planck scale.

              No. Superpositions of gravitational fields do not destroy the notion of locality. They distort *distances* but they do not break the *topology*. A preferred basis would be based on locality but not a preferred distance.

              Much more troubling are spacetimes that aren’t globally hyperbolic, and especially closed timelike curves. But no one has any idea whatsoever how to handle quantum mechanics in the case of CTCs. The entire conceptual formalism breaks down. Luckily we don’t have any good evidence they exist.

              It’s certainly possible that confusion over the quantum-classical transition is just a signal that all of quantum mechanics needs to be thrown out and, say, CTCs exist. But then there’s no use in studying the quantum-classical transition anyways. Likewise, the failure of atoms to radiatively collapse was a signal that classical mechanics was flawed, and no detailed study of classical electromagnetism could save it.

              > A lot of people seem to think that QG is something confined exclusively to Planck scale. This isn’t true — QG is about providing a quantum description of spacetime, which is as big as it can get (all the way up to cosmological scales).

              By such logic, studying anything would likely require understanding QG first.

              It’s conceivable that understanding how the brain functions will directly call upon quantum mechanics, a la Penrose, but I find it much more likely that brain functioning will be understood using only traditional chemistry and biology, and that these can be cleanly separated from the quantum-classical transition. Likewise, it’s conceivable that “one first needs to resolve the Q2C transition, use it to obtain a classical spacetime geometry, and only then can one speak of locality” but this would be highly surprising to me. Much more likely, and more congruent with the history of science, would be for the quantum-classical transition to be grounded in something like locality, and for locality (if it indeed becomes ill-defined near the Planck scale) to have a cleanly separable explanation.

              • > Cells are put in large superpositions, but we don’t need to understand the quantum-classical transition to do biology. […] By such logic, studying anything would likely require understanding QG first.

                Well, formally speaking, yes it would. As Carl Sagan said, in order to make an apple pie, you first need to create a Universe. The fact that we can sidestep studying QG when we do biology or such is contingent on the assumption that there exists a Q2C mechanism which can be invoked to make a transition to the semiclassical picture of gravity, after which one invokes the weakness of the Earth-bound gravitational interaction (compared to other interactions) to approximate it away from further consideration of the subject of study. For the same reason one can ignore QM when doing biology. But from a formal POV, one needs to start from a fundamental “theory of everything” when discussing any natural phenomenon at all, or at the very least be explicit about the assumptions under which a full-blown ToE is not relevant for the phenomenon being studied.

                > […] CTCs […] brain functioning […] Penrose […]

                Agreed. :-)

                > Superpositions of gravitational fields do not destroy the notion of locality. They distort *distances* but they do not break the *topology*. A preferred basis would be based on locality but not a preferred distance.

                Umm, no, this is the crux of the problem. IIUC, the choice of the preferred basis is the one in which the interactions in the Hamiltonian are local, i.e. pointlike. This means that the EOM is a local differential equation. And this means that the unknown function and its derivatives in this equation are evaluated only at a single point, and its infinitesimal neighborhood. But the latter crucially depends on the spacetime metric — what is infinitesimally close in one spacetime metric may be at finite distance in another. This means that the concept of locality is metric-dependent.

                Even in ordinary language, “local” means “being close by”, in the sense of the distance between points.

                OTOH, topology on its own doesn’t distinguish “infinitesimally close” from “finitely close” from “far far away”. It just tells you how (finite-sized) pieces of the manifold are connected to each other. That’s why topological structure is often referred to as “global structure”. It has little (if anything) to do with locality.

                Best, :-)
                Marko

                • Glad we agree with regard to most of this.

                  > IIUC, the choice of the preferred basis is the one in which the interactions in the Hamiltonian are local, i.e. pointlike. This means that the EOM is a local differential equation. And this means that the unknown function and its derivatives in this equation are evaluated only at a single point, and its infinitesimal neighborhood.

                  Yes, exactly.

                  > what is infinitesimally close in one spacetime metric may be at finite distance in another. This means that the concept of locality is metric-dependent.

                  No way! Non-singular metrics do not assign zero distance to paths between distinct points. That’s why, for globally hyperbolic spacetimes, you can always treat gravity as just another field propagating on a flat spacetime background (albeit one that mysteriously obeys the equivalence principle). If you want, you can always imagine my discussion of deriving the preferred basis from locality as operating in this conceptual framework.

                  The places where the metric becomes singular (in an essential way; not just a coordinate trick) are exactly those at which the field curvature diverges and Planck scale effects become important. Two points that are separated by a finite distance in one metric can’t be infinitesimally close (in other words: arbitrarily close) in another metric without introducing Planckian effects.

                  • > Non-singular metrics do not assign zero distance to paths between distinct points.

                    I’m not sure I understand what you mean here. The distance between two distinct points on a lightcone is precisely zero. Minkowski metric is nonsingular, but not positive-definite, so anything goes. But I think this is off-topic, so…

                    More importantly,

                    > you can always treat gravity as just another field propagating on a flat spacetime background […] you can always imagine my discussion of deriving the preferred basis from locality as operating in this conceptual framework.

                    Oh, now I see what you’re getting at. This is why I always considered Feynman’s book on gravity to be a great disservice to theoretical physics — no, one cannot consider gravity just as a spin-two field in flat spacetime. There’s more to gravity than just gravitons (otherwise string theory folks would never bother with branes, dualities, holography, etc).

                    In the language of gravitons, the argument against locality goes as follows. Suppose you split the metric as
                    g = eta + h, and rewrite the Einstein-Hilbert action as a field theory for h. Aside from the fact that it is nonrenormalizable, it features infinitely many (local) interaction vertices. When you quantize that, you need to add infinitely many counterterm vertices to remove all divergences. Let’s assume you do this (according to some recipe, since renormalization doesn’t work). And then we get to the crunch point — there is _absolutely_no_guarantee_ that the resulting infinite series will sum up into a local (effective) action. In fact, given nonrenormalizability, that would be nothing short of a miracle. The effective action typically remains local only for polynomial field theories (finite number of interaction vertices), like the Standard Model and such. But for gravity, already the classical action is nonpolynomial, so the effective action is basically guaranteed to be nonlocal (it may even be nonanalytic, rendering the whole perturbation calculus invalid, but let’s not get into that…).
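                    Schematically (suppressing indices and numerical factors), the expansion looks like

                    \begin{align*} S_{\mathrm{EH}}[\eta + \kappa h] \sim \int d^4x \left[ (\partial h)^2 + \kappa\, h\, (\partial h)^2 + \kappa^2\, h^2 (\partial h)^2 + \cdots \right], \qquad \kappa \sim \sqrt{G}, \end{align*}

                    i.e. one new local vertex at every order in \kappa, with no guarantee that the corresponding tower of counterterms resums into anything local.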

                    In other words, the picture of gravitons living in flat spacetime cannot be considered a conceptual framework, but only an approximation (one which could work when you have scattering of finitely many gravitons evaluated to a finite loop order). If that were the whole story, perturbative string theory from the ’80s would be a complete solution for the QG problem. :-) But I’m afraid perturbation theory is not enough.

                    Best, :-)
                    Marko

                    • >> > Non-singular metrics do not assign zero distance to paths between distinct points.

                      > I’m not sure I understand what you mean here. The distance between two distinct points on a lightcone is precisely zero. Minkowski metric is nonsingular, but not positive-definite, so anything goes.

                      I was just thinking in terms of spatial distances, but I can see how that was ambiguous in context. If things are globally hyperbolic then there are many possible foliations but none of them assign zero spatial distance between distinct points.

                      > There’s more to gravity than just gravitons…the effective action is basically guaranteed to be nonlocal…perturbation theory is not enough…

                      Again, none of this stuff kicks in until near the Planck scale. (If you think you disagree, then where do we look in the universe to see such effects? Why aren’t branes being experimentally tested?)

                      • (sorry for the late reply, real life interfered :-) )

                        > Again, none of this stuff kicks in until near the Planck scale. (If you think you disagree, then where do we look in the universe to see such effects? Why aren’t branes being experimentally tested?)

                        We seem to be talking past each other. The way I see it, the macroscopic nonlocal effects of QG are not visible experimentally precisely for the same reason why Schrodinger’s cat is not visible experimentally in the “alive+dead” state. IOW, it’s due to Q2C transition, and has nothing to do with the Planck scale.

                        The issue is the following — one explains the cat problem by invoking decoherence with environment, which requires a preferred basis, which is argued to exist based on locality of interactions, which assumes that spacetime has a classical geometry. One could attempt to make the same argument for QG, again invoking decoherence with “environment” (see below), which again requires a preferred basis, which now cannot be argued to exist based on locality, since classical spacetime geometry is precisely the thing that needs to be produced by decoherence.

                        So my thesis is — if the decoherence argument is to go through for Q2C transition problem, the existence of preferred basis must be established on something other than locality.

                        As a separate issue, the “environment” in QG is a tricky concept to introduce, since QG should arguably be able to describe cosmology, and the universe (with everything in it) is by definition an isolated system, so it has no environment. But that’s a different problem, independent of the preferred basis issue. :-)

                        Finally, the issue of superpositions of gravitational fields has as much in common with the Planck length as the Schrodinger’s cat issue has with the Higgs mass — that is, very little (if any).

                        Hopefully I’m making myself more clear now. :-)

                        • Right, so I believe this is what we were saying earlier. I think the QC transition can be cleanly separated from GR by taking locality as a given (to be solved later), just like the study of birds can be cleanly separated from biochemistry by taking cell biology as a given (to be solved later). You think that the QC transition depends critically on GR, just like the anomalous stability of atoms is a signal that the underlying framework of classical mechanics is broken (and cannot be salvaged with a more detailed study of classical electromagnetism).

                          I think the partial success of the decoherence program in understanding the QC transition is good evidence that this separation of regimes can be done, just like the success of ornithology in explaining beak shapes is good evidence that one can probably explain migratory patterns without a detailed understanding of biochemistry. It would surprise me if decoherence can derive branch structure from locality (my goal, which looks possible) but both of the following nonetheless hold: (1) a different mechanism than decoherence is necessary to derive locality and branch structure from the underlying quantum gravity and (2) this mechanism invalidates (or is somehow not compatible with) the derivation of branch structure from locality with decoherence. It seems to me you need both (1) and (2) in order to say that a derivation of branch structure from locality is somehow incomplete or unsatisfying. If only (1) holds, then we would just say that decoherence explains the QC transition conditional on a separate derivation of locality with quantum gravity.

                          > As a separate issue, the “environment” in QG is a tricky concept to introduce, since QG should arguably be able to describe cosmology, and the universe (with everything in it) is by definition an isolated system, so it has no environment.

                          I think it’s actually better to think in terms of “preferred/macroscopic/amplified degrees of freedom” and “unpreferred/microscopic/unrecorded degrees of freedom”, rather than system and environment. In principle, systems do not need to be spatially separated from their environments, nor do they need to be eternal. So it might be, in the case of quantum gravity and cosmology, that the natural separation is between (say) matter degrees of freedom and spacetime degrees of freedom.

                          • Hi Jess,

                            > I think the QC transition can be cleanly separated from GR by taking locality as a given (to be solved later)

                            This is the main issue. And after talking to several people, it appears to me that there is a widespread lack of appreciation for the fact that locality is contingent on the classicality of the gravitational field. So a colleague of mine and I have decided to write a draft paper about the whole issue. I am hoping to put it on the arXiv in a couple of months, and I’ll make sure to inform you as soon as it’s online. Then we will be able to discuss all this more efficiently. :-)

                            > I think it’s actually better to think in terms of “preferred/macroscopic/amplified degrees of freedom” and “unpreferred/microscopic/unrecorded degrees of freedom”, rather than system and environment.

                            Actually, this is something I’m hoping for as well, since it would provide a more solid framework for quantum cosmology, as opposed to the traditional system-and-environment paradigm. That said, I’m not completely certain about the details of how the split between preferred and unpreferred degrees of freedom would actually work, but I’m hopeful it can be done. :-)

                            Best, :-)
                            Marko

  2. Hi Dr. Riedel.

    I’m a physics student from Argentina (you may notice my poor English), studying some QM interpretations at the moment, and these posts are great, thank you.
    I would like to ask you one pretty basic question on decoherence:

    Concerning the meaning of “world” in the many-worlds interpretation David Wallace says: “These worlds are not part of the fundamental ontology of quantum theory – instead, they are to be understood as structures, or patterns, emergent from the underlying theory, through the dynamical process of decoherence.”

    I don’t understand how decoherence would “work” without any notion of measure (something like Born’s rule). Another way to put it: can I see decoherence without the density matrix formalism?

    Finally, what do you think about disproving MWI using conservation arguments (energy, mass, etc…), if there’s such a thing?

    Thank you!

    • Hi Luciano,

      Sorry for the very late reply. I let this slip below the fold in my inbox, and wasn’t reminded until just today.

      “Worlds” can be described rigorously in the consistent histories formalism (which can handle the relativistic case very naturally). It does not require a notion of density matrices or Born’s rule, needing only a pure wavefunction and projectors. (One needs a sense of orthogonality, like an orthogonal projector, and hence an inner product, but this is necessary for identifying the importance of unitary evolution anyways.) See the work of Griffiths, Gell-Mann & Hartle, Finkelstein, and Halliwell. For an introduction, along with cites to those authors and an explicit connection to decoherence, see the introductory sections of our recent work, arXiv:1312.0331.
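      For concreteness, the version of the consistency condition I have in mind here is just the mutual orthogonality of the branches: for class operators C_\alpha = P^{(N)}_{\alpha_N} \cdots P^{(1)}_{\alpha_1} acting on a pure state \vert \psi \rangle,

      \begin{align*} \langle \psi \vert C_\beta^\dagger C_\alpha \vert \psi \rangle = 0 \qquad (\alpha \neq \beta), \end{align*}

      so that the branch weights \Vert C_\alpha \vert \psi \rangle \Vert^2 add without interference.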

      However, in order to derive a world-branching process from first principles, one always needs to assume a distinction between system and environment. (A different choice of system will generally not produce a branching into distinct worlds.) Equivalently, one needs to already know what “the” preferred classical degrees of freedom ought to be; only then can one derive their classical (stochastic) behavior. The project of deriving the world-branching process without such strong assumptions put in by hand has been called “the set selection problem” by Adrian Kent, since it boils down to identifying a preferred set of consistent histories. See Dowker and Kent’s description of the problem here, Kent’s recent approach here, and discussion by me here and here. I think there is good reason to believe we can derive the appearance of worlds based only on fundamental principles like locality.

      I’ll answer your question about conservation laws in a second comment.

    • No, I don’t think one can disprove MWI using conservation arguments. There are actually two completely different objections that people sometimes make, which have to be addressed separately. (I am adapting the following text from an email I wrote previously. Soon I will turn it into a blog post.)

      First possible objection: “If the universe splits into two branches, doesn’t the total amount of energy have to double?” This is the question Frank Wilczek appears to be addressing at the end of these notes.

      I think this question can only be asked by someone who believes that many worlds is an interpretation that is just like Copenhagen (including, in particular, the idea that measurement events are different than normal unitary evolution) except that it simply declares that new worlds are created following measurements. But this is a misunderstanding of many worlds. MWI dispenses with collapse or any sort of departure from unitary evolution. The wavefunction just evolves along, maintaining its energy distributions, and energy doesn’t double when you mathematically identify a decomposition of the wavefunction into two orthogonal components.

      Second possible objection: “If the universe starts out with some finite spread in energy, what happens if it then ‘branches’ into multiple worlds, some of which have vector support outside that energy spread?” Or, another phrasing: “What happens if the basis in which the universe decoheres doesn’t commute with the energy basis? Is it then possible to create energy, at least in some branches?” The answer is “no”, but it’s not obvious.

      The argument is as follows: We describe a sequence of historical events in a quantum universe using a set of consistent histories, i.e. time-ordered strings of Heisenberg-picture projectors C_alpha = P_{alpha_N} …P_{alpha_1} . For a pure state |psi> of the universe, the condition of consistency is equivalent to the orthogonality of the branches, which are defined by |psi_alpha> = C_alpha|psi>. Because each branch must be orthogonal to all the other ones, they define a basis (on some subspace, at least). They all sum up to the global wavefunction |psi>, and the norm of each branch is given by projecting |psi> onto the relevant basis vector. Now, if we end up, after many branching events, with a branch with an exact amount of energy (i.e. it’s an energy eigenstate, which might be lying in a degenerate subspace of a given energy), then we can see that the norm of this vector (and hence the probability associated with the branch) must be zero unless the energy lies in the support of the original global state |psi>.
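      Spelled out, with \vert \psi_\alpha \rangle = C_\alpha \vert \psi \rangle as above: orthogonality of the branches gives

      \begin{align*} \Vert \psi_\alpha \Vert^2 = \langle \psi_\alpha \vert \psi_\alpha \rangle = \big\langle \psi_\alpha \big\vert \textstyle\sum_\beta \psi_\beta \big\rangle = \langle \psi_\alpha \vert \psi \rangle, \end{align*}

      so if a branch is proportional to an energy eigenstate, \vert \psi_\alpha \rangle = c \vert E \rangle with H \vert E \rangle = E \vert E \rangle, then \Vert \psi_\alpha \Vert^2 = c^* \langle E \vert \psi \rangle, which vanishes whenever E lies outside the energy support of \vert \psi \rangle.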

      Of course, it might be that we have branches at a given time that aren’t energy eigenstates. In this case it’s hard to even say what you mean by energy conservation. The branch isn’t an eigenstate, so its energy is undefined. But if it later decoheres into branches with specific energy, then this energy must lie in the support of |psi>.

      As it turns out, Hartle et al. have a paper that discusses this in pretty good detail:

      James B. Hartle, Raymond Laflamme, and Donald Marolf.
      “Conservation laws in the quantum mechanics of closed systems.”
      Phys. Rev. D 51, 7007 (1995).

      Their argument in Sec. 2 (that they attribute to Griffiths) is equivalent to the one I have just given. (Everything after that about gauge charges isn’t strictly necessary for this question, but may be of interest.)
