How to think about Quantum Mechanics—Part 5: Superpositions and entanglement are relative concepts

[Other parts in this series: .]

People often talk about “creating entanglement” or “creating a superposition” in the laboratory, and quite rightly think about superpositions and entanglement as resources for things like quantum-enhanced measurements and quantum computing.

However, it’s often not made explicit that a superposition is only defined relative to a particular preferred basis for a Hilbert space. A superposition \vert \psi \rangle = (\vert 1 \rangle + \vert 2 \rangle)/\sqrt{2} is implicitly a superposition relative to the preferred basis \{\vert 1 \rangle, \vert 2 \rangle\}. Schrödinger’s cat is a superposition relative to the preferred basis \{\vert \mathrm{Alive} \rangle, \vert \mathrm{Dead} \rangle\}. Without there being something special about these bases, the state \vert \psi \rangle is no more or less a superposition than \vert 1 \rangle and \vert 2 \rangle individually. Indeed, for a spin-1/2 system there is a mapping between bases for the Hilbert space and vector directions in real space (as is nicely illustrated by the Bloch sphere); unless one specifies a preferred direction in real space to break rotational symmetry, there is no useful sense in which one can speak of putting that spin in a superposition.
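Here is a minimal numerical illustration (my own sketch, with the qubit basis \{\vert 1 \rangle, \vert 2 \rangle\} standing in for the preferred basis):

import numpy as np

ket1 = np.array([1, 0], dtype=complex)    # |1>
ket2 = np.array([0, 1], dtype=complex)    # |2>
psi = (ket1 + ket2) / np.sqrt(2)          # a "superposition" relative to {|1>, |2>}

# Amplitudes relative to the original basis: two equal components.
print(abs(np.vdot(ket1, psi)), abs(np.vdot(ket2, psi)))    # 0.707... 0.707...

# The same state relative to the rotated basis {|+>, |->}: a single component,
# i.e., not a superposition at all relative to this basis.
plus = (ket1 + ket2) / np.sqrt(2)
minus = (ket1 - ket2) / np.sqrt(2)
print(abs(np.vdot(plus, psi)), abs(np.vdot(minus, psi)))   # 1.0 0.0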

Likewise, entanglement is only defined relative to a particular tensor decomposition of the Hilbert space into subsystems, \mathcal{H} = \mathcal{A} \otimes \mathcal{B}. For any given (possibly mixed) state of \mathcal{H}, it’s always possible to write down an alternate decomposition \mathcal{H} = \mathcal{X} \otimes \mathcal{Y} relative to which the state has no entanglement.
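The same game can be played numerically for entanglement (again my own sketch): a Bell state is maximally entangled relative to the standard split \mathcal{H} = \mathcal{A} \otimes \mathcal{B} of two qubits, but unentangled relative to the split obtained by rotating the subsystems with an entangling unitary.

import numpy as np

def entropy_of_left_half(psi):
    # Von Neumann entropy (in bits) of the reduced state of the first factor.
    m = psi.reshape(2, 2)                  # amplitudes psi[a, b] for the split A (x) B
    rho_A = m @ m.conj().T                 # partial trace over B
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log2(evals))

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(entropy_of_left_half(bell))          # 1.0: maximally entangled relative to A (x) B

# An entangling unitary: CNOT . (H (x) I) maps |00> to the Bell state, so in the
# coordinates defined by the columns of U the same state is the product state |00>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
U = CNOT @ np.kron(H, np.eye(2))
print(entropy_of_left_half(U.conj().T @ bell))   # 0.0: unentangled relative to X (x) Y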

So where do these preferred bases and subsystem decompositions come from? Why is it so useful to talk about these things as resources when their very existence seems to depend on our mathematical formalism? Generally it is because these preferred structures are determined by certain aspects of the dynamics out in the real world (as encoded in the Hamiltonian) that make certain physical operations possible and others completely infeasible.

The most common preferred bases arise from the ubiquitous phenomenon of decoherence, in which certain orthogonal states of a system are approximately preserved under an interaction with the environment, while superpositions relative to those preferred states are quickly destroyed.… [continue reading]

Comments on Pikovski et al.’s time dilation decoherence

Folks have been asking about the new Nature Physics article by Pikovski et al., “Universal decoherence due to gravitational time dilation”. Here are some comments:

  • I think their calculation is probably correct for the model they are considering. One could imagine placing the object in a superposition of two different locations in an electric (rather than gravitational) field, and in this case we really would expect the internal degrees of freedom to evolve in two distinct ways. Any observer who was “part of the superposition” wouldn’t be able to tell locally whether their clock was ticking fast or slow, but it can be determined by bringing both clocks back together and comparing them.
  • It’s possible the center of mass (COM) gets shifted a bit, but you can avoid this complication by just assuming that the superposition separation L is much bigger than the size of the object R, and that the gravitational field varies negligibly over both of those length scales.
  • Their model is a little weird, as hinted at by their observation that they get “Gaussian decoherence”, \sim \exp(-T^2), rather than exponential, \sim \exp(-T). The reason is that their evolution isn’t Markovian, as it is for any environment (like scattered or emitted photons) composed of small parts that interact for a bit of time and then leave. Rather, the COM is becoming more and more entangled with each of the internal degrees of freedom as time goes on. (A numerical toy version of this is sketched just after this list.)
  • Because they don’t emit any radiation, their “environment” (the internal DOF) is finite dimensional, and so you will eventually get recoherence. This isn’t a problem for Avogadro’s number of particles, since the recurrence time becomes astronomically long.
  • This only decoheres superpositions in the direction of the gravitational gradient, so it’s not particularly relevant for why things look classical above any given scale.
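Here is the minimal toy model promised above (my own sketch, not the paper’s calculation): N two-level internal degrees of freedom, each contributing an independent relative phase between the two branches of the superposition. The product of the per-DOF overlaps decays as a Gaussian at early times, and shows partial revivals at late times because the “environment” is finite dimensional.

import numpy as np

rng = np.random.default_rng(0)
N = 8                                  # internal DOFs (tiny compared to Avogadro's number)
eps = rng.normal(1.0, 0.3, size=N)     # per-DOF frequency difference between the branches

T = np.linspace(0, 50, 5000)
# A two-level internal DOF in an even mixture of its levels suppresses the COM
# coherence by |(1 + exp(i*eps*T))/2| = |cos(eps*T/2)|; independent DOFs multiply.
D = np.prod(np.abs(np.cos(np.outer(T, eps) / 2)), axis=1)

gauss = np.exp(-np.sum(eps**2) * T**2 / 8)   # short-time expansion: Gaussian, not exponential
print(np.max(np.abs(D - gauss)[T < 0.5]))    # early times: D tracks the Gaussian
print(D[T > 5].max())                        # late times: partial revivals (recoherence),
                                             # pushed to enormous times as N grows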
[continue reading]

Some intuition about decoherence of macroscopic variables

[This is a vague post intended to give some intuition about how particular toy models of decoherence fit into the much hairier question of why the macroscopic world appears classical.]

A spatial superposition of a large object is a common model to explain the importance of decoherence in understanding the macroscopic classical world. If you take a rock and put it in a coherent superposition of two locations separated by a macroscopic distance, you find that the initial pure state of the rock is very, very, very quickly decohered into an incoherent mixture of the two positions by the combined effect of things like stray thermal photons, gas molecules, or even the cosmic microwave background.
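For orientation, the standard scattering analyses (e.g., Joos and Zeh) give a master equation whose decoherence part, in the position basis and in the long-wavelength limit, takes the schematic form

\[ \partial_t \rho(x,x') = - \Lambda (x-x')^2 \rho(x,x'), \]

ignoring the self-Hamiltonian. Spatial coherence over a separation \Delta x = x - x' thus decays at a rate \Lambda \Delta x^2 (saturating once \Delta x exceeds the environmental wavelength), with the scattering constant \Lambda set by the flux and cross-section of the scatterers; its enormous size for thermal photons and gas molecules is what makes the decay so fast.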

Formally, the thing you are superposing is the center-of-mass (COM) variable of the rock. For simplicity one typically considers the internal state of the rock (i.e., all its degrees of freedom besides the COM) to be in a (possibly mixed) quantum state that is uncorrelated with the COM. This toy model then explains (with caveats) why the COM can be treated as a “classical variable”, but it doesn’t immediately explain why the rock as a whole can be considered classical. One might ask: what would that mean, anyway? Certainly, parts of the rock still have quantum aspects (e.g., its spectroscopic properties). For Schrödinger’s cat, how is the decoherence of its COM related to the fact that the cat, considered holistically, is either dead or alive but not both?

Consider a macroscopic object with Avogadro’s number of particles, N, which means it would be described classically in microscopic detail by 3N variables parameterizing configuration space in three dimensions. (Ignore spin.) We know at least two things immediately about the corresponding quantum system:
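Just to emphasize the size mismatch with a crude count (truncating each continuous variable to two effective levels, purely for the sake of arithmetic): the corresponding Hilbert space would have dimension 2^{3N}, a number whose exponent is itself of order Avogadro’s number, while the classical description needs only the 3N \sim 10^{24} real variables.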

(1) Decoherence with the external environment prevents the system from exploring the entire Hilbert space associated with the 3N continuous degrees of freedom.… [continue reading]

Standard quantum limit for diffusion

I just posted my newest paper: “Decoherence from classically undetectable sources: A standard quantum limit for diffusion” (arXiv:1504.03250). [Edit: Now published as PRA 92, 010101(R) (2015).] The basic idea is to prove a standard quantum limit (SQL) that shows that some particles can be detected through the anomalous decoherence they induce even though they cannot be detected with any classical experiment. Hopefully, this is more evidence that people should think of big spatial superpositions as sensitive detectors, not just neat curiosities.

Here’s the abstract:

In the pursuit of speculative new particles, forces, and dimensions with vanishingly small influence on normal matter, understanding the ultimate physical limits of experimental sensitivity is essential. Here, I show that quantum decoherence offers a window into otherwise inaccessible realms. There is a standard quantum limit for diffusion that restricts some entanglement-generating phenomena, like soft collisions with new particle species, from having appreciable classical influence on normal matter. Such phenomena are classically undetectable but can be revealed by the anomalous decoherence they induce on non-classical superpositions with long-range coherence in phase space. This gives strong, novel motivation for the construction of matter interferometers and other experimental sources of large superpositions, which recently have seen rapid progress. Decoherence is always at least second order in the coupling strength, so such searches are best suited for soft, but not weak, interactions.
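To unpack the last sentence of the abstract: schematically, if the two branches of the superposition kick the environment with conditional Hamiltonians g h_1 and g h_2, the surviving coherence is the overlap

\begin{align*} \langle E_1 \vert E_2 \rangle = \langle E_0 \vert e^{i g h_1 t} e^{-i g h_2 t} \vert E_0 \rangle = 1 + i g t \langle \Delta \rangle - \frac{g^2 t^2}{2} \langle \Delta^2 \rangle + O(g^3), \end{align*}

with \Delta \equiv h_1 - h_2 (taking [h_1, h_2] = 0 for simplicity). The O(g) term is a pure phase, so the suppression of the magnitude, i.e., the decoherence, is 1 - \vert \langle E_1 \vert E_2 \rangle \vert \approx (g^2 t^2/2)(\langle \Delta^2 \rangle - \langle \Delta \rangle^2), which starts at second order in the coupling g.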

Here’s Figure 2:


Standard quantum limit for forces and momentum diffusion. A test mass is initially placed in a minimal uncertainty wavepacket with a Wigner distribution W(x,p) over phase space (top) that contains the bulk of its mass within a 2\sigma-contour of a Gaussian distribution (dashed black line).
[continue reading]

How to think about Quantum Mechanics—Part 4: Quantum indeterminism as an anomaly

[Other parts in this series: .]

I am firmly of the view…that all the sciences are compatible and that detailed links can be, and are being, forged between them. But of course the links are subtle… a mathematical aspect of theory reduction that I regard as central, but which cannot be captured by the purely verbal arguments commonly employed in philosophical discussions of reduction. My contention here will be that many difficulties associated with reduction arise because they involve singular limits…. What nonclassical phenomena emerge as h → 0? This sounds like nonsense, and indeed if the limit were not singular the answer would be: no such phenomena.

Michael Berry

One of the great crimes against humanity occurs each year in introductory quantum mechanics courses when students are introduced to an \hbar \to 0 limit, sometimes decorated with words involving “the correspondence principle”. The problem isn’t with the content per se, but with the suggestion that this somehow gives a satisfying answer to why quantum mechanics looks like classical mechanics on large scales.

Sometimes this limit takes the form of a path integral, where the transition amplitude for a particle to move from position x_1 to x_2 in a time T is

(1)   \begin{align*} A_{x_1 \to x_2} &= \langle x_2 \vert e^{-i H T/\hbar} \vert x_1 \rangle \\ &\propto \int_{x_1,x_2} \mathcal{D}[x(t)] e^{i S[x(t),x'(t)]/\hbar} = \int_{x_1,x_2} \mathcal{D}[x(t)] e^{i \int_0^T \mathrm{d}t L(x(t),x'(t))/\hbar} \end{align*}

where \int_{x_1,x_2} \mathcal{D}[x(t)] is the integral over all paths from x_1 to x_2, and S[x(t),x'(t)]= \int_0^T \mathrm{d}t L(x(t),x'(t)) is the action for that path (L being the Lagrangian corresponding to the Hamiltonian H). As \hbar \to 0, the phase containing the action spins wildly, so the contributions average to zero for all paths not in the immediate vicinity of the classical path that makes the action stationary.
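A quick numerical caricature of this stationary-phase mechanism (my own sketch, with a single variable x standing in for the whole space of paths):

import numpy as np

x = np.linspace(-5, 5, 200001)    # stand-in for the space of paths
S = (x - 1.0)**2                  # toy "action", stationary at the classical value x = 1

for hbar in [1.0, 0.1, 0.01]:
    total = (x[1] - x[0]) * np.exp(1j * S / hbar).sum()
    # Oscillations cancel except within ~sqrt(hbar) of the stationary point,
    # so the magnitude of the total shrinks like sqrt(pi * hbar).
    print(hbar, abs(total), np.sqrt(np.pi * hbar))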

Other times this takes the form of Ehrenfest’s theorem, which shows that the expectation values of functions of position and momentum follow the classical equations of motion.… [continue reading]
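For reference, the standard statement of the theorem (with the usual caveat that makes the limit subtle) is

\begin{align*} \frac{\mathrm{d}}{\mathrm{d}t}\langle x \rangle = \frac{\langle p \rangle}{m}, \qquad \frac{\mathrm{d}}{\mathrm{d}t}\langle p \rangle = -\langle V'(x) \rangle, \end{align*}

which reproduces Newton’s equations for the means only insofar as \langle V'(x) \rangle \approx V'(\langle x \rangle), i.e., only for sufficiently localized wavepackets.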

Decoherence detection and micromechanical resonators

In this post I want to lay out why I am a bit pessimistic about using quantum micromechanical resonators, usually of the optomechanical variety, for decoherence detection. I will need to rely on some simple ideas from 3-4 papers I have “in the pipeline” (read: partially written TeX files) that seek to make precise the sense in which decoherence detection allows us to detect classically undetectable phenomena, and to figure out exactly what sort of phenomena we should apply it to. So this post will sound vague without that supporting material. Hopefully it will still be useful, at least for the author.

The overarching idea is that decoherence detection is only particularly useful when the experimental probe can be placed in a superposition with respect to its natural pointer basis. Recall that the pointer basis is the basis in which the density matrix of the probe is normally restricted to be approximately diagonal by the interaction with the natural environment. Classically detectable phenomena are those which cause transitions within the pointer basis, i.e., driving the system from one pointer state to another. Classically undetectable phenomena are those which cause pure decoherence with respect to this basis, i.e., they add a decoherence factor to off-diagonal terms in this basis, but preserve on-diagonal terms.
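In density-matrix language, the distinction looks like this (a minimal sketch of my own, for a toy two-state pointer basis):

import numpy as np

rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)   # coherent superposition of two pointer states

def dephase(rho, d):
    # Classically undetectable: damp off-diagonal terms, preserve the diagonal.
    out = rho.copy()
    out[0, 1] *= d
    out[1, 0] *= d
    return out

def transition(rho, p):
    # Classically detectable: move population between pointer states (bit flip).
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return (1 - p) * rho + p * (X @ rho @ X)

print(np.diag(dephase(rho, 0.1)).real)        # [0.5 0.5]: diagonal untouched
print(np.diag(transition(np.diag([1.0, 0.0]).astype(complex), 0.1)).real)
                                              # [0.9 0.1]: populations shift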

The thing that makes this tricky to think about is that the pointer basis is overcomplete for most physically interesting situations, in particular for any continuous degree of freedom like the position of a molecule or a silicon nanoparticle. It’s impossible to perfectly localize a particle, and the part of the Hamiltonian that fights you on this, p^2/2m, causes a smearing effect that leads to the overcompleteness.… [continue reading]

Records decomposition talk

Last month Scott Aaronson was kind enough to invite me out to MIT to give a seminar to the quantum information group. I presented a small uniqueness theorem which I think is an important intermediate result on the way to solving the set selection problem (or, equivalently, to obtaining an algorithm for breaking the wavefunction of the universe up into branches). I’m not sure when I’ll have a chance to write this up formally, so for now I’m just making the slides available here.


Scott’s a fantastic, thoughtful host, and I got a lot of great questions from the audience. Thanks to everyone there for having me.… [continue reading]

Ambiguity and a catalog of the actions

I had to brush up on my Hamilton-Jacobi mechanics to referee a paper. I’d like to share, from this Physics.StackExchange answer, Qmechanic’s clear catalog of the conceptually distinct functions all called “the action” in classical mechanics, taking care to specify their functional dependence:

At least three different quantities in physics are customarily called an action and denoted with the letter S.

  1. The (off-shell) action

    (1)   \[S[q]~:=~ \int_{t_i}^{t_f}\! dt \ L(q(t),\dot{q}(t),t)\]

    is a functional of the full position curve/path q^i:[t_i,t_f] \to \mathbb{R} for all times t in the interval [t_i,t_f]. See also this question. (Here the words on-shell and off-shell refer to whether the equations of motion (eom) are satisfied or not.)

  2. If the variational problem (1) with well-posed boundary conditions, e.g. Dirichlet boundary conditions

    (2)   \[ q(t_i)~=~q_i\quad\text{and}\quad q(t_f)~=~q_f,\]

    has a unique extremal/classical path q_{\rm cl}^i:[t_i,t_f] \to \mathbb{R}, it makes sense to define an on-shell action

    (3)   \[ S(q_f,t_f;q_i,t_i) ~:=~ S[q_{\rm cl}],\]

    which is a function of the boundary values. See e.g. MTW Section 21.1.

  3. Hamilton’s principal function S(q,\alpha, t) in the Hamilton-Jacobi equation is a function of the position coordinates q^i, integration constants \alpha_i, and time t, see e.g. H. Goldstein, Classical Mechanics, chapter 10.
    The total time derivative

    (4)   \[ \frac{dS}{dt}~=~ \dot{q}^i \frac{\partial S}{\partial q^i}+ \frac{\partial S}{\partial t}\]

    is equal to the Lagrangian L on-shell, as explained here. As a consequence, Hamilton’s principal function S(q,\alpha, t) can be interpreted as an action on-shell.
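To make these distinctions concrete, here is a worked example of my own (not part of the quoted answer) for the free particle with L = \frac{1}{2} m \dot{q}^2. The off-shell action S[q] assigns a number to every path. The unique classical path between the boundary values is the straight line, giving the on-shell action

\[ S(q_f,t_f;q_i,t_i) ~=~ \frac{m (q_f - q_i)^2}{2(t_f - t_i)}. \]

Meanwhile, separating the Hamilton-Jacobi equation \partial_t S + (\partial_q S)^2/2m = 0 gives Hamilton’s principal function

\[ S(q,\alpha,t) ~=~ \alpha q - \frac{\alpha^2 t}{2m}, \]

where the integration constant \alpha is just the conserved momentum. The quantities agree on-shell: substituting \alpha = m(q_f - q_i)/(t_f - t_i), the difference S(q_f,\alpha,t_f) - S(q_i,\alpha,t_i) reproduces the on-shell action above.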

These sorts of distinctions are constantly swept under the rug in classical mechanics courses and textbooks (even good books like Goldstein). This leads to serious confusion on the part of the student and, more insidiously, it leads the student to think that this sort of confusion is normal. Ambiguity is baked into the notation! This is a special case of what I conjecture is a common phenomenon in physics:

  • Original researcher thinks deeply, discovers a theory, and writes it down.
[continue reading]