Comments on Myrvold’s Taj Mahal

Last week I saw an excellent talk by philosopher Wayne Myrvold.

The Reeh-Schlieder theorem says, roughly, that, in any reasonable quantum field theory, for any bounded region of spacetime R, any state can be approximated arbitrarily closely by operating on the vacuum state (or any state of bounded energy) with operators formed by smearing polynomials in the field operators with functions having support in R. This strikes many as counterintuitive, and Reinhard Werner has glossed the theorem as saying that “By acting on the vacuum with suitable operations in a terrestrial laboratory, an experimenter can create the Taj Mahal on (or even behind) the Moon!” This talk has two parts. First, I hope to convince listeners that the theorem is not counterintuitive, and that it follows immediately from facts that are already familiar fare to anyone who has digested the opening chapters of any standard introductory textbook of QFT. In the second, I will discuss what we can learn from the theorem about how relativistic causality is implemented in quantum field theories.
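For concreteness, here is the standard formal statement I have in mind (my paraphrase, not a quote from the talk), where \mathcal{A}(R) denotes the algebra of observables localized in the region R and \vert \Omega \rangle is the vacuum: for every state \vert \psi \rangle in the Hilbert space and every \epsilon > 0, there is some A \in \mathcal{A}(R) with \Vert A \vert \Omega \rangle - \vert \psi \rangle \Vert < \epsilon. In other words, the vacuum is a cyclic vector for the local algebra of any open bounded region, no matter how small.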

(Download MP4 video here.)

The topic was well-defined, and of reasonable scope. The theorem is easily and commonly misunderstood. And Wayne’s talk served to dissolve the confusion around it, by unpacking the theorem into a handful of pieces so that you could quickly see where the rub was. I would that all philosophy of physics were so well done.

Here are the key points as I saw them:

  • The vacuum state in QFTs, even non-interacting ones, is entangled over arbitrary distances (albeit by exponentially small amounts). You can think of this as any two space-like separated regions of spacetime sharing extremely diluted Bell pairs (sketched below).
[continue reading]
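To put a (very schematic, non-rigorous) equation behind the “diluted Bell pairs” picture in the bullet above: for two space-like separated regions A and B a distance d apart, the vacuum of a field with mass m looks roughly like

\vert \Omega \rangle \approx \sqrt{1-\epsilon^2}\, \vert a \rangle_A \vert b \rangle_B + \epsilon\, \vert a' \rangle_A \vert b' \rangle_B, \qquad \epsilon \sim e^{-m d},

with the primed states orthogonal to the unprimed ones. The entanglement is exponentially small but never exactly zero, and it is this residual entanglement that the local operators in the Reeh–Schlieder theorem exploit.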

How fast do macroscopic wavefunctions branch?

Over at PhysicsOverflow, Daniel Ranard asked a question that’s near and dear to my heart:

How deterministic are large open quantum systems (e.g. with humans)?

Consider some large system modeled as an open quantum system — say, a person in a room, where the walls of the room interact in a boring way with some environment. Begin with a pure initial state describing some comprehensible configuration. (Maybe the person is sitting down.) Generically, the system will be in a highly mixed state after some time. Both normal human experience and the study of decoherence suggest that this state will be a mixture of orthogonal pure states that describe classical-like configurations. Call these configurations branches.

How much does a pure state of the system branch over human time scales? There will soon be many (many) orthogonal branches with distinct microscopic details. But to what extent will probabilities be spread over macroscopically (and noticeably) different branches?

I answered the question over there as best I could. Below, I’ll reproduce my answer and indulge in slightly more detail and speculation.

This question is central to my research interests, in the sense that completing that research would necessarily let me give a precise, unambiguous answer. So I can only give an imprecise, hand-wavy one. I’ll write down the punchline, then work backwards.

Punchline

The instantaneous rate of branching, as measured in entropy/time (e.g., bits/s), is given by the sum of all positive Lyapunov exponents for all non-thermalized degrees of freedom.

Most of the vagueness in this claim comes from defining/identifying the degrees of freedom that have thermalized, and from dealing with cases of partial/incomplete thermalization; these problems already exist classically.
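Written out schematically (this is just a restatement of the punchline, leaning on the classical identity of Pesin relating the Kolmogorov–Sinai entropy rate to the Lyapunov spectrum):

dS_{\mathrm{branch}}/dt \sim \sum_{i\,:\,\lambda_i > 0} \lambda_i,

where the sum runs over the positive Lyapunov exponents \lambda_i of the not-yet-thermalized macroscopic variables. This gives a rate in nats per unit time; divide by \ln 2 to get bits/s.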

Elaboration

The original question postulates that the macroscopic system starts in a quantum state corresponding to some comprehensible classical configuration, i.e.,… [continue reading]

Loophole-free Bell violations

The most profound discovery of science appears to be confirmed with essentially no wiggle room. The group led by Ronald Hanson at the Delft University of Technology in the Netherlands has reported a loophole-free observation of Bell violations. Links:

I hope Matt Leifer is right and they give a Nobel Prize for this work.

EDIT Nov 12: Two other groups, who were clearly in a very close race, have just posted their loophole-free experiments: arXiv:1511.03189 and arXiv:1511.03190. (H/t Peter Morgan. Also, note the sequential numbers.) Delft’s group published as soon as they had sufficient statistics to reasonably exclude local realism, but the two runners-up have collected gratifyingly larger samples, so their p-values are more like 1 in 10 million.… [continue reading]
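For reference (my addition, not something from the papers): in the CHSH form of Bell’s inequality, as used by the Delft group, any local hidden-variable theory obeys

S = E(a,b) + E(a,b') + E(a',b) - E(a',b') \le 2,

where the E(\cdot,\cdot) are correlators for pairs of measurement settings, while quantum mechanics allows S up to 2\sqrt{2} \approx 2.83. A loophole-free experiment measures S > 2 with the detection and locality loopholes closed simultaneously.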

How to think about Quantum Mechanics—Part 6: Energy conservation and wavefunction branches

[Other parts in this series: 1,2,3,4,5,6,7,8.]

In discussions of the many-worlds interpretation (MWI) and the process of wavefunction branching, folks sometimes ask whether the branching process conflicts with conservation laws like the conservation of energy. (There are some related questions scattered around the web, not specifically addressing branching or MWI, and none of them get answered particularly well.) There are actually two completely different objections that people sometimes make, which have to be addressed separately.

First possible objection: “If the universe splits into two branches, doesn’t the total amount of energy have to double?” This is the question Frank Wilczek appears to be addressing at the end of these notes.

I think this question can only be asked by someone who believes that many worlds is an interpretation that is just like Copenhagen (including, in particular, the idea that measurement events are different than normal unitary evolution) except that it simply declares that new worlds are created following measurements. But this is a misunderstanding of many worlds. MWI dispenses with collapse or any sort of departure from unitary evolution. The wavefunction just evolves along, maintaining its energy distributions, and energy doesn’t double when you mathematically identify a decomposition of the wavefunction into two orthogonal components.
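Here is a one-line version of that argument (a sketch, using the usual idealization of exactly orthogonal branches): write the post-branching state as \vert \psi \rangle = \sqrt{p_1} \vert \psi_1 \rangle + \sqrt{p_2} \vert \psi_2 \rangle. Then

\langle \psi \vert H \vert \psi \rangle = p_1 \langle \psi_1 \vert H \vert \psi_1 \rangle + p_2 \langle \psi_2 \vert H \vert \psi_2 \rangle + 2 \sqrt{p_1 p_2}\, \mathrm{Re}\, \langle \psi_1 \vert H \vert \psi_2 \rangle,

and the cross term is utterly negligible for well-decohered branches, since they differ in the states of a macroscopic number of environmental degrees of freedom that a local Hamiltonian cannot connect. The expected energy is a weighted average over branches, not a sum over them, so nothing doubles when a branch decomposition is identified.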

Second possible objection: “If the universe starts out with some finite spread in energy, what happens if it then ‘branches’ into multiple worlds, some of which overlap with energy eigenstates outside that energy spread?” Or, another phrasing: “What happens if the basis in which the universe decoheres doesn’t commute with the energy basis? Is it then possible to create energy, at least in some branches?”… [continue reading]

Integrating with functional derivatives

I saw a neat talk at Perimeter a couple weeks ago on new integration techniques:

Speaker: Achim Kempf from University of Waterloo.
Title: “How to integrate by differentiating: new methods for QFTs and gravity”.

Abstract: I present a simple new all-purpose integration technique. It is quick to use, applies to functions as well as distributions and it is often easier than contour integration. (And it is not Feynman’s method). It also yields new quick ways to evaluate Fourier and Laplace transforms. The new methods express integration in terms of differentiation. Applied to QFT, the new methods can be used to express functional integration, i.e., path integrals, in terms of functional differentiation. This naturally yields the weak and strong coupling expansions as well as a host of other expansions that may be of use in quantum field theory, e.g., in the context of heat traces.

(Many talks hosted on PIRSA have a link to the mp4 file so you can directly download it. This talk does not, but you can right-click here and select “save as” to get the f4v file. This file format can be watched with VLC player; you can find it for any talk hosted by PIRSA by viewing the page source and searching the text for “.f4v”. There are many nice things about learning physics from videos, one of which is the ability to easily speed up the playback and skip around. In VLC player, playback speed can be incremented in 10% steps by pressing the left and right square brackets, ‘[’ and ‘]’.)

The technique is based on the familiar trick of extracting a functional derivative inside a path integral and using integration by parts.… [continue reading]
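My rough reconstruction of the basic identity behind the technique (as I understood it; details may differ from Kempf’s presentation): using the Fourier representation \delta(y) = \frac{1}{2\pi} \int dx\, e^{i x y} and the fact that -i \partial_y e^{i x y} = x\, e^{i x y}, one finds

\int_{-\infty}^{\infty} f(x)\, dx = 2\pi \lim_{y \to 0} f(-i \partial_y)\, \delta(y),

so an ordinary integral is traded for derivatives acting on a delta function. In the path-integral setting the analogous move trades functional integration over field configurations for functional differentiation with respect to sources.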

How to think about Quantum Mechanics—Part 5: Superpositions and entanglement are relative concepts

[Other parts in this series: 1,2,3,4,5,6,7,8.]

People often talk about “creating entanglement” or “creating a superposition” in the laboratory, and quite rightly think about superpositions and entanglement as resources for things like quantum-enhanced measurements and quantum computing.

However, it’s often not made explicit that a superposition is only defined relative to a particular preferred basis for a Hilbert space. A superposition \vert \psi \rangle = \vert 1 \rangle + \vert 2 \rangle is implicitly a superposition relative to the preferred basis \{\vert 1 \rangle, \vert 2 \rangle\}. Schrödinger’s cat is a superposition relative to the preferred basis \{\vert \mathrm{Alive} \rangle, \vert \mathrm{Dead} \rangle\}. Without there being something special about these bases, the state \vert \psi \rangle is no more or less a superposition than \vert 1 \rangle and \vert 2 \rangle individually. Indeed, for a spin-1/2 system there is a mapping between bases for the Hilbert space and vector directions in real space (as well illustrated by the Bloch sphere); unless one specifies a preferred direction in real space to break rotational symmetry, there is no useful sense of putting that spin in a superposition.
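A minimal spin-1/2 example of this relativity to basis: the state \vert +_x \rangle = (\vert \uparrow_z \rangle + \vert \downarrow_z \rangle)/\sqrt{2} is a superposition relative to the z-basis \{\vert \uparrow_z \rangle, \vert \downarrow_z \rangle\}, but it is itself a basis element of the x-basis \{\vert +_x \rangle, \vert -_x \rangle\}, relative to which \vert \uparrow_z \rangle = (\vert +_x \rangle + \vert -_x \rangle)/\sqrt{2} is the superposition. Neither description is privileged until something physical (a field direction, a measurement apparatus) breaks the rotational symmetry.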

Likewise, entanglement is only defined relative to a particular tensor decomposition of the Hilbert space into subsystems, \mathcal{H} = \mathcal{A} \otimes \mathcal{B}. For any given (possibly mixed) state of \mathcal{H}, it’s always possible to write down an alternate decomposition \mathcal{H} = \mathcal{X} \otimes \mathcal{Y} relative to which the state has no entanglement.
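A sketch of how this works for a pure state: take two qubits in the Bell state \vert \Phi \rangle = (\vert 00 \rangle + \vert 11 \rangle)/\sqrt{2}, which is maximally entangled relative to \mathcal{H} = \mathcal{A} \otimes \mathcal{B}. Now pick any orthonormal basis \{\vert e_{ij} \rangle\} of the four-dimensional Hilbert space with \vert e_{00} \rangle = \vert \Phi \rangle, and define new subsystems by the identification \vert e_{ij} \rangle \leftrightarrow \vert i \rangle_{\mathcal{X}} \otimes \vert j \rangle_{\mathcal{Y}}. Relative to \mathcal{H} = \mathcal{X} \otimes \mathcal{Y}, the very same state is the product \vert 0 \rangle_{\mathcal{X}} \otimes \vert 0 \rangle_{\mathcal{Y}}, with no entanglement at all. (Mixed states can be handled similarly by aligning the new product basis with the state’s eigenbasis.)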

So where do these preferred bases and subsystem structure come from? Why is it so useful to talk about these things as resources when their very existence seems to be dependent on our mathematical formalism? Generally it is because these preferred structures are determined by certain aspects of the dynamics out in the real world (as encoded in the Hamiltonian) that make certain physical operations possible and others completely infeasible.… [continue reading]

Comments on Pikovski et al.’s time dilation decoherence

Folks have been asking about the new Nature Physics article by Pikovski et al., “Universal decoherence due to gravitational time dilation”. Here are some comments:

  • I think their calculation is probably correct for the model they are considering. One could imagine that they were placing their object in a superposition of two different locations in an electric (rather than gravitational) field, and in this case we really would expect the internal degrees of freedom to evolve in two distinct ways. Any observer who was “part of the superposition” wouldn’t be able to tell locally whether their clock was ticking fast or slow, but it can be determined by bringing both clocks back together and comparing them.
  • It’s possible the center of mass (COM) gets shifted a bit, but you can avoid this complication by just assuming that the superposition separation L is much bigger than the size of the object R, and that the curvature of the gravitational field is very small compared to both.
  • Their model is a little weird, as hinted at by their observation that they get “Gaussian decoherence”, \sim \exp(-T^2), rather than exponential decoherence, \sim \exp(-T). The reason is that their evolution isn’t Markovian, as it would be for an environment (like scattered or emitted photons) composed of small parts that each interact for a little while and then leave. Rather, the COM becomes more and more entangled with each of the internal degrees of freedom as time goes on. (See the sketch after this list.)
  • Because they don’t emit any radiation, their “environment” (the internal DOF) is finite dimensional, and so you will eventually get recoherence. This isn’t a problem in practice for Avogadro’s number of particles, since the recurrence time is astronomically long.
  • This only decoheres superpositions in the direction of the gravitational gradient, so it’s not particularly relevant for why things look classical above any given scale.
[continue reading]
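Here is the sketch promised in the third bullet of why static, non-Markovian dephasing gives Gaussian rather than exponential decay (a generic argument, not Pikovski et al.’s specific calculation): suppose the COM superposition makes each of N internal degrees of freedom accumulate a slightly different relative frequency shift \delta\omega_i between the two branches, with zero mean and variance \sigma^2. The coherence then carries a factor

\left\vert \prod_{i=1}^{N} \langle e^{i\, \delta\omega_i t} \rangle \right\vert \approx e^{-N \sigma^2 t^2 / 2},

which is Gaussian in t because the relative phases grow without interruption, rather than being generated by environment subsystems that scatter once and leave (the Markovian case, which yields \sim e^{-\Gamma t}).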

Some intuition about decoherence of macroscopic variables

[This is a vague post intended to give some intuition about how particular toy models of decoherence fit in to the much hairier question of why the macroscopic world appears classical.]

A spatial superposition of a large object is a common model to explain the importance of decoherence in understanding the macroscopic classical world. If you take a rock and put it in a coherent superposition of two locations separated by a macroscopic distance, you find that the initial pure state of the rock is very, very, very quickly decohered into an incoherent mixture of the two positions by the combined effect of things like stray thermal photons, gas molecules, or even the cosmic microwave background.
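The standard quantitative statement behind “very, very, very quickly” is (if I recall the Joos–Zeh scattering result correctly, in the limit where the environmental wavelengths are long compared to the superposition separation) a master-equation term of the form

\frac{\partial}{\partial t} \rho(x, x') \Big\vert_{\mathrm{scatt}} = -\Lambda\, (x - x')^2\, \rho(x, x'),

so the off-diagonal elements of the COM density matrix decay as e^{-\Lambda (x-x')^2 t}, where the localization rate \Lambda is set by the flux and wavelength of the scatterers. For a dust-grain-sized object bombarded by air molecules or thermal photons, \Lambda is so large that macroscopic separations decohere on timescales far shorter than anything we can resolve.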

Formally, the thing you are superposing is the center-of-mass (COM) variable of the rock. For simplicity one typically considers the internal state of the rock (i.e., all its degrees of freedom besides the COM) to be in a (possibly mixed) quantum state that is uncorrelated with the COM. This toy model then explains (with caveats) why the COM can be treated as a “classical variable”, but it doesn’t immediately explain why the rock as a whole can be considered classical. One might ask: what would that mean, anyways? Certainly, parts of the rock still have quantum aspects (e.g., its spectroscopic properties). For Schrödinger’s cat, how is the decoherence of its COM related to the fact that the cat, considered holistically, is either dead or alive but not both?

Consider a macroscopic object with Avogadro’s number of particles N, which means it would be described classically in microscopic detail by 3N variables parameterizing configuration space in three dimensions. (Ignore spin.)… [continue reading]