My talk on ideal quantum Brownian motion

I have blogged before about the conceptual importance of ideal, symplectic covariant quantum Brownian motion (QBM). In short: QBM is to open quantum systems as the harmonic oscillator is to closed quantum systems. Like the harmonic oscillator, (a) QBM is universal because it’s the leading-order behavior of a Taylor series expansion; (b) QBM evolution has a very intuitive interpretation in terms of wavepackets evolving under classical flow; and (c) QBM is exactly solvable.

If that sounds like a diatribe up your alley, then you are in luck. I recently ranted about it here at PI. It’s just a summary of the literature; there are no new results. As always, I recommend downloading the raw video file so you can run it at arbitrary speed.


Abstract: In the study of closed quantum systems, the simple harmonic oscillator is ubiquitous because all smooth potentials look quadratic locally, and exhaustively understanding it is very valuable because it is exactly solvable. Although not widely appreciated, Markovian quantum Brownian motion (QBM) plays almost exactly the same role in the study of open quantum systems. QBM is ubiquitous because it arises from only the Markov assumption and linear Lindblad operators, and it likewise has an elegant and transparent exact solution. QBM is often introduced with specific non-Markovian models like Caldeira-Leggett, but this makes it very difficult to see which phenomena are universal and which are idiosyncratic to the model. Like frictionless classical mechanics or nonrenormalizable field theories, the exact Markov property is aphysical, but handling this subtlety is a small price to pay for the extreme generality.
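To be concrete about what “linear Lindblad operators” means, here is the schematic form I have in mind (my own shorthand, not necessarily the notation used in the talk): a Lindblad master equation whose Hamiltonian is quadratic and whose Lindblad operators are linear in the phase-space variables z = (x_1, p_1, \ldots, x_N, p_N),

\begin{align*} \partial_t \rho = -\frac{i}{\hbar}[H, \rho] + \sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2}\{ L_k^\dagger L_k, \rho \} \right), \qquad H = \frac{1}{2} \sum_{a,b} h_{ab}\, z_a z_b, \qquad L_k = \sum_a c_{ka}\, z_a, \end{align*}

where the coefficients h_{ab} and c_{ka} are placeholders for whatever the microscopic physics dictates. Because the Hamiltonian is quadratic and the Lindblad operators are linear, the dynamics of the first and second moments of z close on themselves, which is where the exact solvability comes from.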
[continue reading]

Redundant consistency

I’m happy to announce the recent publication of a paper by Mike, Wojciech, and myself.

The Objective Past of a Quantum Universe: Redundant Records of Consistent Histories
C. Jess Riedel, Wojciech H. Zurek, and Michael Zwolak
Motivated by the advances of quantum Darwinism and recognizing the role played by redundancy in identifying the small subset of quantum states with resilience characteristic of objective classical reality, we explore the implications of redundant records for consistent histories. The consistent histories formalism is a tool for describing sequences of events taking place in an evolving closed quantum system. A set of histories is consistent when one can reason about them using Boolean logic, i.e., when probabilities of sequences of events that define histories are additive. However, the vast majority of the sets of histories that are merely consistent are flagrantly nonclassical in other respects. This embarras de richesses (known as the set selection problem) suggests that one must go beyond consistency to identify how the classical past arises in our quantum universe. The key intuition we follow is that the records of events that define the familiar objective past are inscribed in many distinct systems, e.g., subsystems of the environment, and are accessible locally in space and time to observers. We identify histories that are not just consistent but redundantly consistent using the partial-trace condition introduced by Finkelstein as a bridge between histories and decoherence. The existence of redundant records is a sufficient condition for redundant consistency. It selects, from the multitude of the alternative sets of consistent histories, a small subset endowed with redundant records characteristic of the objective classical past. The information about an objective history of the past is then simultaneously within reach of many, who can independently reconstruct it and arrive at compatible conclusions in the present.
[continue reading]

KS entropy generated by entanglement-breaking quantum Brownian motion

A new paper of mine (PRA 93, 012107 (2016), arXiv:1507.04083) just came out. The main theorem of the paper is not deep, but I think it’s a clarifying result within a formalism that is deep: ideal quantum Brownian motion (QBM) in symplectic generality. In this blog post, I’ll refresh you on ideal QBM, quote my abstract, explain the main result, and then — going beyond the paper — show how it’s related to the Kolmogorov-Sinai entropy and the speed at which macroscopic wavefunctions branch.

Ideal QBM

If you Google around for “quantum Brownian motion”, you’ll come across a bunch of definitions that have quirky features, and aren’t obviously related to each other. This is a shame. As I explained in an earlier blog post, ideal QBM is the generalization of the harmonic oscillator to open quantum systems. If you think harmonic oscillators are important, and you think decoherence is important, then you should understand ideal QBM.

Harmonic oscillators are ubiquitous in the world because all smooth potentials look quadratic locally. Exhaustively understanding harmonic oscillators is very valuable because they are exactly solvable in addition to being ubiquitous. In an almost identical way, all quantum Markovian degrees of freedom look locally like ideal QBM, and their completely positive (CP) dynamics can be solved exactly.

To get true generality, both harmonic oscillators and ideal QBM should be expressed in manifestly symplectic covariant form. Just like for Lorentz covariance, a dynamical equation that exhibits manifest symplectic covariance takes the same form under linear symplectic transformations on phase space. At a microscopic level, all physics is symplectic covariant (and Lorentz covariant), so this better hold.… [continue reading]

My talk on dark matter decoherence detection

I gave a talk recently on Itay’s and my latest results for detecting dark matter through the decoherence it induces in matter interferometers.

Quantum superpositions of matter are unusually sensitive to decoherence by tiny momentum transfers, in a way that can be made precise with a new diffusion standard quantum limit. Upcoming matter interferometers will produce unprecedented spatial superpositions of over a million nucleons. What sorts of dark matter scattering events could be seen in these experiments as anomalous decoherence? We show that it is extremely weak but medium-range interactions between matter and dark matter that would be most visible, such as scattering through a Yukawa potential. We construct toy models for these interactions, discuss existing constraints, and delineate the expected sensitivity of forthcoming experiments. In particular, the OTIMA interferometer under development at the University of Vienna will directly probe many orders of magnitude of parameter space, and the proposed MAQRO satellite experiment would be vastly more sensitive yet. This is a multidisciplinary talk that will be accessible to a non-specialized audience.
[Download MP4]

Relevant paper on the diffusion SQL is here: arXiv:1504.03250. The main dark matter paper is still a work in progress.

Footnotes


  1. If you ever have problems finding the direct download link for videos on PI’s website (they are sometimes missing), this Firefox extension seems to do the trick.
[continue reading]

Comments on Myrvold’s Taj Mahal

Last week I saw an excellent talk by philosopher Wayne Myrvold.

The Reeh-Schlieder theorem says, roughly, that, in any reasonable quantum field theory, for any bounded region of spacetime R, any state can be approximated arbitrarily closely by operating on the vacuum state (or any state of bounded energy) with operators formed by smearing polynomials in the field operators with functions having support in R. This strikes many as counterintuitive, and Reinhard Werner has glossed the theorem as saying that “By acting on the vacuum with suitable operations in a terrestrial laboratory, an experimenter can create the Taj Mahal on (or even behind) the Moon!” This talk has two parts. First, I hope to convince listeners that the theorem is not counterintuitive, and that it follows immediately from facts that are already familiar fare to anyone who has digested the opening chapters of any standard introductory textbook of QFT. In the second, I will discuss what we can learn from the theorem about how relativistic causality is implemented in quantum field theories.
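For orientation, the formal statement (in my rough paraphrase and notation, not necessarily Wayne’s) is that for any open bounded region R, the states obtained by acting on the vacuum \vert \Omega \rangle with operators from the local algebra \mathcal{A}(R) are dense in the Hilbert space:

\begin{align*} \overline{ \{\, A \vert \Omega \rangle : A \in \mathcal{A}(R) \,\} } = \mathcal{H}. \end{align*}

Dense does not mean operationally easy to reach: as I understand it, actually implementing the operators needed to conjure a lunar Taj Mahal amounts to post-selecting on fantastically improbable outcomes, which is part of why the theorem is less magical than Werner’s gloss makes it sound.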

(Download MP4 video here.)

The topic was well-defined, and of reasonable scope. The theorem is easily and commonly misunderstood. And Wayne’s talk served to dissolve the confusion around it, by unpacking the theorem into a handful of pieces so that you could quickly see where the rub was. I would that all philosophy of physics were so well done.

Here are the key points as I saw them:

  • The vacuum state in QFTs, even non-interacting ones, is entangled over arbitrary distances (albeit by exponentially small amounts). You can think of this as every two space-like separated regions of spacetime sharing extremely diluted Bell pairs.
[continue reading]

How fast do macroscopic wavefunctions branch?

Over at PhysicsOverflow, Daniel Ranard asked a question that’s near and dear to my heart:

How deterministic are large open quantum systems (e.g. with humans)?

Consider some large system modeled as an open quantum system — say, a person in a room, where the walls of the room interact in a boring way with some environment. Begin with a pure initial state describing some comprehensible configuration. (Maybe the person is sitting down.) Generically, the system will be in a highly mixed state after some time. Both normal human experience and the study of decoherence suggest that this state will be a mixture of orthogonal pure states that describe classical-like configurations. Call these configurations branches.

How much does a pure state of the system branch over human time scales? There will soon be many (many) orthogonal branches with distinct microscopic details. But to what extent will probabilities be spread over macroscopically (and noticeably) different branches?

I answered the question over there as best I could. Below, I’ll reproduce my answer and indulge in slightly more detail and speculation.

This question is central to my research interests, in the sense that completing that research would necessarily let me give a precise, unambiguous answer. So I can only give an imprecise, hand-wavy one. I’ll write down the punchline, then work backwards.

Punchline

The instantaneous rate of branching, as measured in entropy/time (e.g., bits/s), is given by the sum of all positive Lyapunov exponents for all non-thermalized degrees of freedom.

Most of the vagueness in this claim comes from defining/identifying degrees of freedom that have thermalized, and dealing with cases of partial/incomplete thermalization; these problems already exist classically.
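As a classical toy illustration of the punchline (my own sketch, not part of the PhysicsOverflow answer): by Pesin’s identity, the Kolmogorov-Sinai entropy rate equals the sum of the positive Lyapunov exponents, and both are easy to estimate numerically for a simple chaotic map. Here I use the logistic map at r = 4, whose exact exponent is ln 2, standing in for a single chaotic, non-thermalized degree of freedom:

import numpy as np

# Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x), then
# report the Kolmogorov-Sinai entropy rate via Pesin's identity:
# h_KS = sum of positive Lyapunov exponents (this map has only one).
r = 4.0           # fully chaotic regime; exact Lyapunov exponent is ln(2)
x = 0.1234        # arbitrary initial condition
n_steps = 100_000

log_sum = 0.0
for _ in range(n_steps):
    log_sum += np.log(abs(r * (1.0 - 2.0 * x)))  # log|f'(x)| at the current point
    x = r * x * (1.0 - x)                        # iterate the map

lyapunov = log_sum / n_steps                 # nats per step
h_ks = max(lyapunov, 0.0) / np.log(2.0)      # bits per step
print(f"Lyapunov exponent ~ {lyapunov:.3f} nats/step (exact: ln 2 ~ 0.693)")
print(f"KS entropy rate   ~ {h_ks:.3f} bits/step")

For a real macroscopic system you would sum the positive exponents over all chaotic, non-thermalized degrees of freedom and convert map steps to seconds; identifying which degrees of freedom count is exactly where the vagueness above creeps in.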

Elaboration

The original question postulates that the macroscopic system starts in a quantum state corresponding to some comprehensible classical configuration, i.e., the system is initially in a quantum state whose Wigner function is localized around some classical point in phase space.… [continue reading]

Loophole-free Bell violations

The most profound discovery of science appears to be confirmed with essentially no wiggle room. The group led by Ronald Hanson at the Delft University of Technology in the Netherlands has reported a loophole-free observation of Bell violations. Links:

I hope Matt Leifer is right and they give a Nobel Prize for this work.

EDIT Nov 12: Two other groups, who were clearly in a very close race, have just posted their loophole-free experiments: arXiv:1511.03189 and arXiv:1511.03190. (H/t Peter Morgan. Also, note the sequential numbers.) Delft’s group published as soon as they had sufficient statistics to reasonably exclude local realism, but the two runners-up have collected gratifyingly larger samples, so their p-values are more like 1 in 10 million.… [continue reading]

How to think about Quantum Mechanics—Part 6: Energy conservation and wavefunction branches

[Other parts in this series: 1,2,3,4,5,6,7.]

In discussions of the many-worlds interpretation (MWI) and the process of wavefunction branching, folks sometimes ask whether the branching process conflicts with conservation laws like the conservation of energy. (There are related questions floating around the web that don’t address branching or MWI; none of them get answered particularly well.) There are actually two completely different objections that people sometimes make, which have to be addressed separately.

First possible objection: “If the universe splits into two branches, doesn’t the total amount of energy have to double?” This is the question Frank Wilczek appears to be addressing at the end of these notes.

I think this question can only be asked by someone who believes that many worlds is an interpretation that is just like Copenhagen (including, in particular, the idea that measurement events are different than normal unitary evolution) except that it simply declares that new worlds are created following measurements. But this is a misunderstanding of many worlds. MWI dispenses with collapse or any sort of departure from unitary evolution. The wavefunction just evolves along, maintaining its energy distributions, and energy doesn’t double when you mathematically identify a decomposition of the wavefunction into two orthogonal components.
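A one-line calculation (my own sketch of this standard point) makes it explicit. Decompose the wavefunction into orthogonal branches and take the energy expectation value:

\begin{align*} \vert \Psi \rangle = \alpha \vert \Psi_1 \rangle + \beta \vert \Psi_2 \rangle, \qquad \langle \Psi \vert H \vert \Psi \rangle = \vert \alpha \vert^2 \langle \Psi_1 \vert H \vert \Psi_1 \rangle + \vert \beta \vert^2 \langle \Psi_2 \vert H \vert \Psi_2 \rangle + 2\, \mathrm{Re}\!\left[ \alpha^* \beta\, \langle \Psi_1 \vert H \vert \Psi_2 \rangle \right]. \end{align*}

The branches enter with weights \vert \alpha \vert^2 + \vert \beta \vert^2 = 1, and unitary evolution preserves \langle H \rangle, so nothing doubles; identifying the decomposition doesn’t change the state, and the cross term is negligible once the branches are well decohered.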

Second possible objection: “If the universe starts out with some finite spread in energy, what happens if it then ‘branches’ into multiple worlds, some of which overlap with energy eigenstates outside that energy spread?” Or, another phrasing: “What happens if the basis in which the universe decoheres doesn’t commute with energy basis? Is it then possible to create energy, at least in some branches?” The answer is “no”, but it’s not obvious.… [continue reading]

Integrating with functional derivatives

I saw a neat talk at Perimeter a couple weeks ago on new integration techniques:

Speaker: Achim Kempf from University of Waterloo.
Title: “How to integrate by differentiating: new methods for QFTs and gravity”.

Abstract: I present a simple new all-purpose integration technique. It is quick to use, applies to functions as well as distributions and it is often easier than contour integration. (And it is not Feynman’s method). It also yields new quick ways to evaluate Fourier and Laplace transforms. The new methods express integration in terms of differentiation. Applied to QFT, the new methods can be used to express functional integration, i.e., path integrals, in terms of functional differentiation. This naturally yields the weak and strong coupling expansions as well as a host of other expansions that may be of use in quantum field theory, e.g., in the context of heat traces.
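If I remember the central identity correctly (treat the exact form here as my paraphrase rather than a quote from the paper), in one dimension it reads

\begin{align*} \int_{-\infty}^{\infty} \mathrm{d}x\, f(x) = \lim_{y \to 0} 2\pi\, f(-i \partial_y)\, \delta(y), \end{align*}

which you can check by writing \delta(y) = \frac{1}{2\pi} \int \mathrm{d}x\, e^{i x y}: the operator f(-i\partial_y) reinserts f(x) under the integral, and taking y \to 0 leaves \int \mathrm{d}x\, f(x). The path-integral version promotes x to a field, the derivative \partial_y to functional derivatives with respect to a source, and \delta(y) to a delta functional.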

(Many talks hosted on PIRSA have a link to the mp4 file so you can directly download it. This talk does not, but you can right-click here and select “save as” to get the f4v file, which can be watched with VLC player. You can find the f4v link for any talk hosted by PIRSA by viewing the page source and searching the text for “.f4v”. There are many nice things about learning physics from videos, one of which is the ability to easily increase the playback speed and skip around. In VLC player, playback speed can be incremented in 10% steps by pressing the left and right square brackets, ‘[‘ and ‘]’.)

The technique is based on the familiar trick of extracting a functional derivative inside a path integral and using integration by parts.… [continue reading]

How to think about Quantum Mechanics—Part 5: Superpositions and entanglement are relative concepts

[Other parts in this series: 1,2,3,4,5,6,7.]

People often talk about “creating entanglement” or “creating a superposition” in the laboratory, and quite rightly think about superpositions and entanglement as resources for things like quantum-enhanced measurements and quantum computing.

However, it’s often not made explicit that a superposition is only defined relative to a particular preferred basis for a Hilbert space. A superposition \vert \psi \rangle = \vert 1 \rangle + \vert 2 \rangle is implicitly a superposition relative to the preferred basis \{\vert 1 \rangle, \vert 2 \rangle\}. Schrödinger’s cat is a superposition relative to the preferred basis \{\vert \mathrm{Alive} \rangle, \vert \mathrm{Dead} \rangle\}. Without there being something special about these bases, the state \vert \psi \rangle is no more or less a superposition than \vert 1 \rangle and \vert 2 \rangle individually. Indeed, for a spin-1/2 system there is a mapping between bases for the Hilbert space and vector directions in real space (as well illustrated by the Bloch sphere); unless one specifies a preferred direction in real space to break rotational symmetry, there is no useful sense of putting that spin in a superposition.
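A trivial example of my own makes the point: define \vert \pm \rangle = (\vert 1 \rangle \pm \vert 2 \rangle)/\sqrt{2}. Then the “superposition” \vert \psi \rangle = \vert 1 \rangle + \vert 2 \rangle is just \sqrt{2}\, \vert + \rangle, a basis state of \{\vert + \rangle, \vert - \rangle\}, while the “non-superposition” \vert 1 \rangle = (\vert + \rangle + \vert - \rangle)/\sqrt{2} is an equal-weight superposition in that basis. Which description applies depends entirely on which basis is singled out.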

Likewise, entanglement is only defined relative to a particular tensor decomposition of the Hilbert space into subsystems, \mathcal{H} = \mathcal{A} \otimes \mathcal{B}. For any given (possibly mixed) state of \mathcal{H}, it’s always possible to write down an alternate decomposition \mathcal{H} = \mathcal{X} \otimes \mathcal{Y} relative to which the state has no entanglement.

So where do these preferred bases and subsystem structure come from? Why is it so useful to talk about these things as resources when their very existence seems to be dependent on our mathematical formalism? Generally it is because these preferred structures are determined by certain aspects of the dynamics out in the real world (as encoded in the Hamiltonian) that make certain physical operations possible and others completely infeasible.… [continue reading]

Comments on Pikovski et al.’s time dilation decoherence

Folks have been asking about the new Nature Physics article by Pikovski et al., “Universal decoherence due to gravitational time dilation”. Here are some comments:

  • I think their calculation is probably correct for the model they are considering. One could imagine that they were placing their object in a superposition of two different locations in an electric (rather than gravitational) field, and in this case we really would expect the internal degrees of freedom to evolve in two distinct ways. Any observer who was “part of the superposition” wouldn’t be able to tell locally whether their clock was ticking fast or slow, but it can be determined by bringing both clocks back together and comparing them.
  • It’s possible the center of mass (COM) gets shifted a bit, but you can avoid this complication by just assuming that the superposition separation L is much bigger than the size of the object R, and that the curvature of the gravitational field is very small compared to both.
  • Their model is a little weird, as hinted at by their observation that they get “Gaussian decoherence”, \sim \exp(-T^2), rather than exponential, \sim \exp(-T). The reason is that their evolution isn’t Markovian, as it would be for an environment (like scattered or emitted photons) composed of small parts that each interact for a bit of time and then leave. Rather, the COM is becoming more and more entangled with each of the internal degrees of freedom as time goes on; see the sketch after this list.
  • Because they don’t emit any radiation, their “environment” (the internal DOF) is finite dimensional, and so you will eventually get recoherence. This isn’t a problem for Avogadro’s number of particles.
  • This only decoheres superpositions in the direction of the gravitational gradient, so it’s not particularly relevant for why things look classical above any given scale.
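Here is the back-of-the-envelope version of where the Gaussian comes from (my own rough reconstruction, so don’t trust the numerical factors): the two arms of the superposition, separated by height \Delta x in a field with acceleration g, accumulate a relative phase E\, g\, \Delta x\, t / (\hbar c^2) for internal energy E, so the COM coherence is just the characteristic function of the internal energy distribution,

\begin{align*} D(t) = \left\vert \mathrm{Tr}\!\left[ \rho_{\mathrm{int}}\, e^{-i H_{\mathrm{int}}\, g\, \Delta x\, t / (\hbar c^2)} \right] \right\vert \approx \exp\!\left[ -\frac{(\delta E)^2\, g^2\, (\Delta x)^2\, t^2}{2 \hbar^2 c^4} \right], \end{align*}

where \delta E is the spread in internal energy. The internal degrees of freedom “remember” everything, so the exponent grows like t^2; a Markovian environment would give an exponent linear in t.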
[continue reading]

Some intuition about decoherence of macroscopic variables

[This is a vague post intended to give some intuition about how particular toy models of decoherence fit in to the much hairier question of why the macroscopic world appears classical.]

A spatial superposition of a large object is a common model to explain the importance of decoherence in understanding the macroscopic classical world. If you take a rock and put it in a coherent superposition of two locations separated by a macroscopic distance, you find that the initial pure state of the rock is very, very, very quickly decohered into an incoherent mixture of the two positions by the combined effect of things like stray thermal photons, gas molecules, or even the cosmic microwave background.
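The standard toy estimate behind that claim (a sketch of the usual Joos–Zeh-type scattering analysis, not tied to any particular environment) is a master equation for the rock’s center-of-mass position that, when the separation is small compared to the typical wavelength of the scatterers, damps spatial coherence at a rate growing with the square of the separation:

\begin{align*} \partial_t \rho(x, x') \approx -\Lambda\, (x - x')^2\, \rho(x, x') \quad \Longrightarrow \quad \rho(x, x', t) \approx \rho(x, x', 0)\, e^{-\Lambda (x - x')^2 t}, \end{align*}

where \Lambda is a localization rate set by the flux and momenta of the environmental particles. In the opposite, short-wavelength regime the rate simply saturates at the total scattering rate; either way, for anything rock-sized the resulting decoherence times are absurdly short for macroscopic separations.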

Formally, the thing you are superposing is the center-of-mass (COM) variable of the rock. For simplicity one typically considers the internal state of the rock (i.e., all its degrees of freedom besides the COM) to be in a (possibly mixed) quantum state that is uncorrelated with the COM. This toy model then explains (with caveats) why the COM can be treated as a “classical variable”, but it doesn’t immediately explain why the rock as a whole can be considered classical. One might ask: what would that mean, anyway? Certainly, parts of the rock still have quantum aspects (e.g., its spectroscopic properties). For Schrödinger’s cat, how is the decoherence of its COM related to the fact that the cat, considered holistically, is either dead or alive but not both?

Consider a macroscopic object with Avogadro’s number of particles N, which means it would be described classically in microscopic detail by 3N variables parameterizing configuration space in three dimensions. (Ignore spin.) We know at least two things immediately about the corresponding quantum system:

(1) Decoherence with the external environment prevents the system from exploring the entire Hilbert space associated with the 3N continuous degrees of freedom.… [continue reading]

Standard quantum limit for diffusion

I just posted my newest paper: “Decoherence from classically undetectable sources: A standard quantum limit for diffusion” (arXiv:1504.03250). [Edit: Now published as PRA 92, 010101(R) (2015).] The basic idea is to prove a standard quantum limit (SQL) that shows that some particles can be detected through the anomalous decoherence they induce even though they cannot be detected with any classical experiment. Hopefully, this is more evidence that people should think of big spatial superpositions as sensitive detectors, not just neat curiosities.

Here’s the abstract:

In the pursuit of speculative new particles, forces, and dimensions with vanishingly small influence on normal matter, understanding the ultimate physical limits of experimental sensitivity is essential. Here, I show that quantum decoherence offers a window into otherwise inaccessible realms. There is a standard quantum limit for diffusion that restricts some entanglement-generating phenomena, like soft collisions with new particle species, from having appreciable classical influence on normal matter. Such phenomena are classically undetectable but can be revealed by the anomalous decoherence they induce on non-classical superpositions with long-range coherence in phase space. This gives strong, novel motivation for the construction of matter interferometers and other experimental sources of large superpositions, which recently have seen rapid progress. Decoherence is always at least second order in the coupling strength, so such searches are best suited for soft, but not weak, interactions.
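The last sentence of the abstract carries the key mechanism, and a minimal schematic version (my notation here, not the paper’s) goes like this: a soft scatterer that kicks the test mass by \pm q with equal probability exerts no mean force, \langle \Delta p \rangle = 0, so its classical influence vanishes at first order in the coupling g; but it still produces momentum diffusion \langle \Delta p^2 \rangle = D t with D \propto g^2, and that diffusion suppresses the coherence of a spatial superposition as

\begin{align*} \rho(x, x') \;\to\; \rho(x, x')\, e^{-D\, (x - x')^2\, t / \hbar^2} \end{align*}

(up to factor-of-two conventions). A sufficiently large superposition x - x' can therefore reveal a coupling that no classical force measurement would see.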

Here’s Figure 2:


Standard quantum limit for forces and momentum diffusion. A test mass is initially placed in a minimal uncertainty wavepacket with a Wigner distribution W(x,p) over phase space (top) that contains the bulk of its mass within a 2\sigma-contour of a Gaussian distribution (dashed black line).
[continue reading]

How to think about Quantum Mechanics—Part 4: Quantum indeterminism as an anomaly

[Other parts in this series: 1,2,3,4,5,6,7.]

I am firmly of the view…that all the sciences are compatible and that detailed links can be, and are being, forged between them. But of course the links are subtle… a mathematical aspect of theory reduction that I regard as central, but which cannot be captured by the purely verbal arguments commonly employed in philosophical discussions of reduction. My contention here will be that many difficulties associated with reduction arise because they involve singular limits….What nonclassical phenomena emerge as h → 0? This sounds like nonsense, and indeed if the limit were not singular the answer would be: no such phenomena. (Michael Berry)

One of the great crimes against humanity occurs each year in introductory quantum mechanics courses when students are introduced to an \hbar \to 0 limit, sometimes decorated with words involving “the correspondence principle”. The problem isn’t with the content per se, but with the suggestion that this somehow gives a satisfying answer to why quantum mechanics looks like classical mechanics on large scales.

Sometimes this limit takes the form of a path integral, where the transition amplitude for a particle to move from position x_1 to x_2 in a time T is

(1)   \begin{align*} A_{x_1 \to x_2} &= \langle x_2 \vert e^{-i H T/\hbar} \vert x_1 \rangle \\ &\propto \int_{x_1,x_2} \mathcal{D}[x(t)] e^{i S[x(t),x'(t)]/\hbar} = \int_{x_1,x_2} \mathcal{D}[x(t)] e^{i \int_0^T \mathrm{d}t L(x(t),x'(t))/\hbar} \end{align*}

where \int_{x_1,x_2} \mathcal{D}[x(t)] is the integral over all paths from x_1 to x_2, and S[x(t),x'(t)]= \int_0^T \mathrm{d}t L(x(t),x'(t)) is the action for that path (L being the Lagrangian corresponding to the Hamiltonian H). As \hbar \to 0, the exponent containing the action spins wildly and averages to zero for all paths not in the immediate vicinity of the classical path that makes the action stationary.

Other times this takes the form of Ehrenfest’s theorem, which shows that the expectation values of functions of position and momentum follow the classical equations of motion.… [continue reading]

Decoherence detection and micromechanical resonators

In this post I want to lay out why I am a bit pessimistic about using quantum micromechanical resonators, usually of the optomechanical variety, for decoherence detection. I will need to rely on some simple ideas from 3-4 papers I have “in the pipeline” (read: partially written TeX files) that seek to make precise the sense in which decoherence detection allows us to detect classically undetectable phenomena, and to figure out exactly what sort of phenomena we should apply it to. So this post will sound vague without that supporting material. Hopefully it will still be useful, at least for the author.

The overarching idea is that decoherence detection is only particularly useful when the experimental probe can be placed in a superposition with respect to a probe’s natural pointer basis. Recall that the pointer basis is the basis in which the density matrix of the probe is normally restricted to be approximately diagonal by the interaction with the natural environment. Classically detectable phenomena are those which cause transitions within the pointer basis, i.e. driving the system from one pointer state to another. Classically undetectable phenomena are those which cause pure decoherence with respect to this basis, i.e. they add a decoherence factor to off-diagonal terms in this basis, but preserve on-diagonal terms.
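In symbols (my own schematic, pretending for the moment that the pointer basis is a discrete orthonormal basis \{ \vert i \rangle \}, which the next paragraph explains is not the realistic case): writing \rho = \sum_{ij} \rho_{ij} \vert i \rangle \langle j \vert, classically detectable phenomena change the populations \rho_{ii}, while classically undetectable phenomena leave the populations fixed and merely damp the coherences,

\begin{align*} \rho_{ij} \;\to\; e^{-\Gamma_{ij} t}\, \rho_{ij} \quad (i \neq j), \qquad \rho_{ii} \;\to\; \rho_{ii}, \end{align*}

which is invisible unless the probe is actually prepared in a superposition of distinct pointer states.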

The thing that makes this tricky to think about is that the pointer basis is overcomplete for most physically interesting situations, in particular for any continuous degree of freedom like the position of a molecule or a silicon nanoparticle. It’s impossible to perfectly localize a particle, and the part of the Hamiltonian that fights you on this, p^2/2m, causes a smearing effect that leads to the overcompleteness.… [continue reading]