Loophole-free Bell violations

The most profound discovery of science appears to be confirmed with essentially no wiggle room. The group led by Ronald Hanson at the Delft University of Technology in the Netherlands has reported a loophole-free observation of Bell violations. Links:

I hope Matt Leifer is right and they give a Nobel Prize for this work.

EDIT Nov 12: Two other groups, who were clearly in a very close race, have just posted their loophole-free experiments: arXiv:1511.03189 and arXiv:1511.03190. (H/t Peter Morgan. Also, note the sequential numbers.) Delft’s group published as soon as they had sufficient statistics to reasonably exclude local realism, but the two runners-up have collected gratifyingly larger samples, so their p-values are more like 1 in 10 million.… [continue reading]

How to think about Quantum Mechanics—Part 6: Energy conservation and wavefunction branches

[Other parts in this series: 1,2,3,4,5,6.]

In discussions of the many-worlds interpretation (MWI) and the process of wavefunction branching, folks sometimes ask whether the branching process conflicts with conservation laws like the conservation of energy. (Here are some related questions from around the web, not addressing branching or MWI; none of them get answered particularly well.) There are actually two completely different objections that people sometimes make, which have to be addressed separately.

First possible objection: “If the universe splits into two branches, doesn’t the total amount of energy have to double?” This is the question Frank Wilczek appears to be addressing at the end of these notes.

I think this question can only be asked by someone who believes that many worlds is an interpretation that is just like Copenhagen (including, in particular, the idea that measurement events are different from normal unitary evolution) except that it simply declares that new worlds are created following measurements. But this is a misunderstanding of many worlds. MWI dispenses with collapse or any sort of departure from unitary evolution. The wavefunction just evolves along, maintaining its energy distributions, and energy doesn’t double when you mathematically identify a decomposition of the wavefunction into two orthogonal components.… [continue reading]

Integrating with functional derivatives

I saw a neat talk at Perimeter a couple weeks ago on new integration techniques:

Speaker: Achim Kempf from University of Waterloo.
Title: “How to integrate by differentiating: new methods for QFTs and gravity”.

Abstract: I present a simple new all-purpose integration technique. It is quick to use, applies to functions as well as distributions and it is often easier than contour integration. (And it is not Feynman’s method). It also yields new quick ways to evaluate Fourier and Laplace transforms. The new methods express integration in terms of differentiation. Applied to QFT, the new methods can be used to express functional integration, i.e., path integrals, in terms of functional differentiation. This naturally yields the weak and strong coupling expansions as well as a host of other expansions that may be of use in quantum field theory, e.g., in the context of heat traces.
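The abstract’s central claim, that integration can be traded for differentiation, can be illustrated with an elementary identity (this is my example, not necessarily Kempf’s actual method): for a polynomial f, the Laplace transform \int_0^\infty f(x)e^{-sx}\,\mathrm{d}x equals f(-d/ds) applied to 1/s, the transform of the constant function 1. A quick sympy check:

```python
import sympy as sp

s, x = sp.symbols('s x', positive=True)

# Illustrative identity (in the spirit of the abstract, not necessarily
# Kempf's method): for a polynomial f, the Laplace transform
#   L[f](s) = integral_0^oo f(x) exp(-s*x) dx
# equals f(-d/ds) applied to L[1](s) = 1/s, i.e., the integral is
# re-expressed purely in terms of differentiation.
f = x**3 + 2*x  # an arbitrary polynomial

# Direct integration:
direct = sp.integrate(f * sp.exp(-s*x), (x, 0, sp.oo))

# Same transform by differentiating: replace each power x^n by (-d/ds)^n
by_diff = sum(c * (-1)**n * sp.diff(1/s, s, n)
              for n, c in enumerate(sp.Poly(f, x).all_coeffs()[::-1]))

assert sp.simplify(direct - by_diff) == 0
```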

(Many talks hosted on PIRSA have a link to the mp4 file so you can directly download it. This talk does not, but you can right-click here and select “save as” to get the f4v file, a format that can be watched with VLC player. You can find the f4v link for any talk hosted by PIRSA by viewing the page source and searching the text for “.f4v”.)… [continue reading]

How to think about Quantum Mechanics—Part 5: Superpositions and entanglement are relative concepts

[Other parts in this series: 1,2,3,4,5,6.]

People often talk about “creating entanglement” or “creating a superposition” in the laboratory, and quite rightly think about superpositions and entanglement as resources for things like quantum-enhanced measurements and quantum computing.

However, it’s often not made explicit that a superposition is only defined relative to a particular preferred basis for a Hilbert space. A superposition \vert \psi \rangle = \vert 1 \rangle + \vert 2 \rangle is implicitly a superposition relative to the preferred basis \{\vert 1 \rangle, \vert 2 \rangle\}. Schrödinger’s cat is a superposition relative to the preferred basis \{\vert \mathrm{Alive} \rangle, \vert \mathrm{Dead} \rangle\}. Without there being something special about these bases, the state \vert \psi \rangle is no more or less a superposition than \vert 1 \rangle and \vert 2 \rangle individually. Indeed, for a spin-1/2 system there is a mapping between bases for the Hilbert space and vector directions in real space (as is well illustrated by the Bloch sphere); unless one specifies a preferred direction in real space to break rotational symmetry, there is no useful sense of putting that spin in a superposition.
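This basis-relativity is easy to see numerically. A minimal sketch (the state and basis labels are mine, chosen for illustration): the same spin-1/2 vector has two equal components in the z basis but is itself a basis vector of the x basis.

```python
import numpy as np

# A spin-1/2 state that is a "superposition" in the z basis...
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (up + down) / np.sqrt(2)

# ...is itself a basis vector of the x basis {|+>, |->}:
plus  = (up + down) / np.sqrt(2)
minus = (up - down) / np.sqrt(2)

amps_z = np.array([up @ psi, down @ psi])      # two equal components
amps_x = np.array([plus @ psi, minus @ psi])   # a single component

assert np.allclose(np.abs(amps_z)**2, [0.5, 0.5])
assert np.allclose(np.abs(amps_x)**2, [1.0, 0.0])
```

Whether psi “is a superposition” is thus entirely a statement about the basis, not about the vector itself.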

Likewise, entanglement is only defined relative to a particular tensor decomposition of the Hilbert space into subsystems, \mathcal{H} = \mathcal{A} \otimes \mathcal{B}. For any given (possibly mixed) state of \mathcal{H}, it’s always possible to write down an alternate decomposition \mathcal{H} = \mathcal{X} \otimes \mathcal{Y} relative to which the state has no entanglement.… [continue reading]
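For pure states this is easy to demonstrate concretely (the mixed-state case is subtler; the particular unitary below is my choice for illustration): a Bell state has Schmidt rank 2 across the standard qubit split, but relative to subsystems defined by a globally rotated basis, the same vector is a product state.

```python
import numpy as np

# Bell state in H = C^2 (x) C^2: entangled across the standard qubit split.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
schmidt = np.linalg.svd(bell.reshape(2, 2), compute_uv=False)
assert np.allclose(schmidt, [1/np.sqrt(2), 1/np.sqrt(2)])  # Schmidt rank 2

# A global unitary change of basis defines an alternate tensor
# decomposition H = X (x) Y.  Here U = CNOT.(H (x) I) maps the product
# basis to the Bell basis, so relative to the U-rotated subsystems the
# same vector has Schmidt rank 1, i.e., no entanglement.
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
U = CNOT @ np.kron(H2, np.eye(2))

schmidt_new = np.linalg.svd((U.conj().T @ bell).reshape(2, 2),
                            compute_uv=False)
assert np.allclose(schmidt_new, [1, 0])  # product state in the new decomposition
```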

Comments on Pikovski et al.’s time dilation decoherence

Folks have been asking about the new Nature Physics article by Pikovski et al., “Universal decoherence due to gravitational time dilation”. Here are some comments:

  • I think their calculation is probably correct for the model they are considering. One could imagine that they were placing their object in a superposition of two different locations in an electric (rather than gravitational) field, and in this case we really would expect the internal degrees of freedom to evolve in two distinct ways. Any observer who was “part of the superposition” wouldn’t be able to tell locally whether their clock was ticking fast or slow, but it can be determined by bringing both clocks back together and comparing them.
  • It’s possible the center of mass (COM) gets shifted a bit, but you can avoid this complication by just assuming that the superposition separation L is much bigger than the size of the object R, and that the curvature of the gravitational field is very small compared to both.
  • Their model is a little weird, as hinted at by their observation that they get “Gaussian decoherence”, \sim \exp(-T^2), rather than exponential, \sim \exp(-T). The reason is that their evolution isn’t Markovian, as it would be for an environment (like scattered or emitted photons) composed of small parts that interact for a bit of time and then leave.
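The mechanism behind the Gaussian decay can be sketched generically (this is not their actual calculation; the Gaussian internal-energy distribution and the units are my assumptions): in a clock-dephasing model, the coherence between the two branches is the characteristic function of the internal-energy distribution evaluated at the accumulated proper-time difference, so a Gaussian energy spread yields \sim \exp(-T^2) decay rather than the Markovian \sim \exp(-T).

```python
import numpy as np

# Visibility between two branches dephased by an internal "clock":
#   V(tau) = | < exp(i E tau) > |   (hbar = 1, illustrative units),
# the characteristic function of the internal-energy distribution.
# A Gaussian energy spread gives Gaussian decay V = exp(-sigma_E^2 tau^2 / 2),
# not the exponential decay typical of Markovian environments.
rng = np.random.default_rng(0)
sigma_E = 1.0
E = rng.normal(0.0, sigma_E, 200_000)   # sampled internal energies

taus = np.linspace(0, 3, 31)
V = np.array([np.abs(np.mean(np.exp(1j * E * t))) for t in taus])

assert np.allclose(V, np.exp(-sigma_E**2 * taus**2 / 2), atol=0.02)
```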
[continue reading]

Some intuition about decoherence of macroscopic variables

[This is a vague post intended to give some intuition about how particular toy models of decoherence fit in to the much hairier question of why the macroscopic world appears classical.]

A spatial superposition of a large object is a common model to explain the importance of decoherence in understanding the macroscopic classical world. If you take a rock and put it in a coherent superposition of two locations separated by a macroscopic distance, you find that the initial pure state of the rock is very, very, very quickly decohered into an incoherent mixture of the two positions by the combined effect of things like stray thermal photons, gas molecules, or even the cosmic microwave background.

Formally, the thing you are superposing is the center-of-mass (COM) variable of the rock. For simplicity one typically considers the internal state of the rock (i.e., all its degrees of freedom besides the COM) to be in a (possibly mixed) quantum state that is uncorrelated with the COM. This toy model then explains (with caveats) why the COM can be treated as a “classical variable”, but it doesn’t immediately explain why the rock as a whole can be considered classical. One might ask: what would that mean, anyways?… [continue reading]

Standard quantum limit for diffusion

I just posted my newest paper: “Decoherence from classically undetectable sources: A standard quantum limit for diffusion” (arXiv:1504.03250). [Edit: Now published as PRA 92, 010101(R) (2015).] The basic idea is to prove a standard quantum limit (SQL) that shows that some particles can be detected through the anomalous decoherence they induce even though they cannot be detected with any classical experiment. Hopefully, this is more evidence that people should think of big spatial superpositions as sensitive detectors, not just neat curiosities.

Here’s the abstract:

In the pursuit of speculative new particles, forces, and dimensions with vanishingly small influence on normal matter, understanding the ultimate physical limits of experimental sensitivity is essential. Here, I show that quantum decoherence offers a window into otherwise inaccessible realms. There is a standard quantum limit for diffusion that restricts some entanglement-generating phenomena, like soft collisions with new particle species, from having appreciable classical influence on normal matter. Such phenomena are classically undetectable but can be revealed by the anomalous decoherence they induce on non-classical superpositions with long-range coherence in phase space. This gives strong, novel motivation for the construction of matter interferometers and other experimental sources of large superpositions, which recently have seen rapid progress.

[continue reading]

How to think about Quantum Mechanics—Part 4: Quantum indeterminism as an anomaly

[Other parts in this series: 1,2,3,4,5,6.]

I am firmly of the view…that all the sciences are compatible and that detailed links can be, and are being, forged between them. But of course the links are subtle… a mathematical aspect of theory reduction that I regard as central, but which cannot be captured by the purely verbal arguments commonly employed in philosophical discussions of reduction. My contention here will be that many difficulties associated with reduction arise because they involve singular limits…. What nonclassical phenomena emerge as h → 0? This sounds like nonsense, and indeed if the limit were not singular the answer would be: no such phenomena.

Michael Berry

One of the great crimes against humanity occurs each year in introductory quantum mechanics courses when students are introduced to an \hbar \to 0 limit, sometimes decorated with words involving “the correspondence principle”. The problem isn’t with the content per se, but with the suggestion that this somehow gives a satisfying answer to why quantum mechanics looks like classical mechanics on large scales.

Sometimes this limit takes the form of a path integral, where the transition amplitude for a particle to move from position x_1 to x_2 in a time T is

(1)   \begin{align*} A_{x_1 \to x_2} &= \langle x_2 \vert e^{-i H T/\hbar} \vert x_1 \rangle \\ &\propto \int_{x_1,x_2} \mathcal{D}[x(t)] e^{-i S[x(t),x'(t)]/\hbar} = \int_{x_1,x_2} \mathcal{D}[x(t)] e^{-i \int_0^T \mathrm{d}t L(x(t),x'(t))/\hbar} \end{align*}

where \int_{x_1,x_2} \mathcal{D}[x(t)] is the integral over all paths from x_1 to x_2, and S[x(t),x'(t)]= \int_0^T \mathrm{d}t L(x(t),x'(t)) is the action for that path (L being the Lagrangian corresponding to the Hamiltonian H).… [continue reading]
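The way the \hbar \to 0 limit picks out the classical (stationary-action) path can be sketched numerically via stationary phase (the action S(x) = x^2/2 and the Gaussian envelope below are my illustrative choices, not from the text): contributions away from the stationary point of S cancel through rapid oscillation, and the integral approaches the stationary-phase value.

```python
import numpy as np

# Stationary-phase sketch of the hbar -> 0 limit: for an oscillatory
# integral I(hbar) = Int g(x) exp(i S(x) / hbar) dx, contributions away
# from the stationary point of S cancel, and for small hbar the integral
# approaches g(x0) * sqrt(2*pi*i*hbar / S''(x0)).  Here S(x) = x^2/2
# (stationary point x0 = 0) and g is a Gaussian envelope ensuring
# convergence; both choices are purely illustrative.
hbar = 0.05
dx = 2e-4
x = np.arange(-8, 8, dx)
g = np.exp(-x**2 / 2)
I = np.sum(g * np.exp(1j * x**2 / (2 * hbar))) * dx   # Riemann sum

stationary_phase = np.sqrt(2j * np.pi * hbar)         # g(0) = 1, S''(0) = 1
assert abs(I / stationary_phase - 1) < 0.1
```

Shrinking hbar further drives the ratio toward 1, which is the quantitative content of the stationary-phase (and, in the path integral, the classical-limit) heuristic.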

Decoherence detection and micromechanical resonators

In this post I want to lay out why I am a bit pessimistic about using quantum micromechanical resonators, usually of the optomechanical variety, for decoherence detection. I will need to rely on some simple ideas from 3-4 papers I have “in the pipeline” (read: partially written TeX files) that seek to make precise the sense in which decoherence detection allows us to detect classically undetectable phenomena, and to figure out exactly what sort of phenomena we should apply it to. So this post will sound vague without that supporting material. Hopefully it will still be useful, at least for the author.

The overarching idea is that decoherence detection is only particularly useful when the experimental probe can be placed in a superposition with respect to a probe’s natural pointer basis. Recall that the pointer basis is the basis in which the density matrix of the probe is normally restricted to be approximately diagonal by the interaction with the natural environment. Classically detectable phenomena are those which cause transitions within the pointer basis, i.e. driving the system from one pointer state to another. Classically undetectable phenomena are those which cause pure decoherence with respect to this basis, i.e. they add a decoherence factor to off-diagonal terms in this basis, but preserve on-diagonal terms.… [continue reading]
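A toy example of the distinction (the numbers are illustrative, not from any specific model): pure decoherence in the pointer basis multiplies the coherences by a decay factor while leaving the populations, and hence everything classically observable about the probe, untouched.

```python
import numpy as np

# Toy 2-state probe described in its pointer basis.  "Classically
# detectable" phenomena drive transitions between pointer states (change
# the diagonal); "classically undetectable" ones only multiply the
# off-diagonal coherences by a decoherence factor.
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)  # equal superposition of pointer states

def pure_decoherence(rho, factor):
    """Suppress coherences in the pointer basis, preserving populations."""
    out = rho.copy()
    out[0, 1] *= factor
    out[1, 0] *= factor
    return out

rho_after = pure_decoherence(rho, 0.1)
assert np.allclose(np.diag(rho_after), np.diag(rho))   # populations untouched
assert abs(rho_after[0, 1]) < abs(rho[0, 1])           # coherence suppressed
```

Detecting the second kind of phenomenon therefore requires preparing the probe in a superposition with respect to its pointer basis, which is the point made above.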

Records decomposition talk

Last month Scott Aaronson was kind enough to invite me out to MIT to give a seminar to the quantum information group. I presented a small uniqueness theorem which I think is an important intermediary result on the way to solving the set selection problem (or, equivalently, to obtaining an algorithm for breaking the wavefunction of the universe up into branches). I’m not sure when I’ll have a chance to write this up formally, so for now I’m just making the slides available here.


Scott’s a fantastic, thoughtful host, and I got a lot of great questions from the audience. Thanks to everyone there for having me.… [continue reading]

Ambiguity and a catalog of the actions

I had to brush up on my Hamilton-Jacobi mechanics to referee a paper. I’d like to share, from this Physics.StackExchange answer, Qmechanic’s clear catalog of the conceptually distinct functions all called “the action” in classical mechanics, taking care to specify their functional dependence:

At least three different quantities in physics are customarily called an action and denoted with the letter S.

  1. The (off-shell) action

    (1)   \[S[q]~:=~ \int_{t_i}^{t_f}\! dt \ L(q(t),\dot{q}(t),t)\]

    is a functional of the full position curve/path q^i:[t_i,t_f] \to \mathbb{R} for all times t in the interval [t_i,t_f]. See also this question. (Here the words on-shell and off-shell refer to whether the equations of motion (eom) are satisfied or not.)

  2. If the variational problem (1) with well-posed boundary conditions, e.g. Dirichlet boundary conditions

    (2)   \[ q(t_i)~=~q_i\quad\text{and}\quad q(t_f)~=~q_f,\]

    has a unique extremal/classical path q_{\rm cl}^i:[t_i,t_f] \to \mathbb{R}, it makes sense to define an on-shell action

    (3)   \[ S(q_f;t_f;q_i,t_i) ~:=~ S[q_{\rm cl}],\]

    which is a function of the boundary values. See e.g. MTW Section 21.1.

  3. Hamilton’s principal function S(q,\alpha, t) in the Hamilton-Jacobi equation is a function of the position coordinates q^i, integration constants \alpha_i, and time t, see e.g. H. Goldstein, Classical Mechanics, chapter 10.
    The total time derivative

    (4)   \[ \frac{dS}{dt}~=~ \dot{q}^i \frac{\partial S}{\partial q^i}+ \frac{\partial S}{\partial t}\]

    is equal to the Lagrangian L on-shell, as explained here. As a consequence, Hamilton’s principal function S(q,\alpha, t) can be interpreted as an action on-shell.
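A worked example tying items 2 and 3 together (the free particle is my choice, not part of Qmechanic’s answer): the free-particle on-shell action solves the Hamilton-Jacobi equation, as a quick sympy check confirms.

```python
import sympy as sp

# For a free particle, the on-shell action of item 2 is
#   S(q_f, t_f; q_i, t_i) = m (q_f - q_i)^2 / (2 (t_f - t_i)),
# and it solves the Hamilton-Jacobi equation
#   dS/dt_f + H(q_f, dS/dq_f) = 0   with   H = p^2 / (2m),
# illustrating how the on-shell action doubles as a principal function.
m, qf, qi, tf, ti = sp.symbols('m q_f q_i t_f t_i')
S = m * (qf - qi)**2 / (2 * (tf - ti))

hj = sp.diff(S, tf) + sp.diff(S, qf)**2 / (2 * m)
assert sp.simplify(hj) == 0
```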

[continue reading]

Approach to equilibrium in a pure-state universe

(This post is vague, and sheer speculation.)

Following a great conversation with Miles Stoudenmire here at PI, I went back and read a paper I forgot about: “Entanglement and the foundations of statistical mechanics” by Popescu et al. (S. Popescu, A. Short, and A. Winter, Nature Physics 2, 754–758 (2006) [Free PDF]). This is one of those papers that has a great simple idea, where you’re not sure if it’s profound or trivial, and whether it’s well known or it’s novel. (They cite references 3-6 as “Significant results along similar lines”; let me know if you’ve read any of these and think they’re more useful.) Anyways, here’s some background on how I think about this.

If a pure quantum state \vert \psi \rangle is drawn at random (according to the Haar measure) from a d_S d_E-dimensional vector space \mathcal{H}, then the entanglement entropy

    \[S(\rho_S) = -\mathrm{Tr}[\rho_S \mathrm{log}_2 \rho_S], \qquad \rho_S = \mathrm{Tr}_E[\vert \psi \rangle \langle \psi \vert]\]

across a tensor decomposition into system \mathcal{S} and environment \mathcal{E} is highly likely to be almost the maximum

    \[S_{\mathrm{max}} = \mathrm{log}_2(\mathrm{min}(d_S,d_E)) \,\, \mathrm{bits},\]

for any such choice of decomposition \mathcal{H} = \mathcal{S} \otimes \mathcal{E}. More precisely, if we fix d_S/d_E and let d_S\to \infty, then the fraction of the Haar volume of states that have entanglement entropy more than an exponentially small (in d_S) amount away from the maximum is suppressed exponentially (in d_S).… [continue reading]
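This concentration is easy to verify numerically (the dimensions below are my choice): a normalized vector of i.i.d. complex Gaussians is Haar-distributed, and its entanglement entropy lands within a few percent of the maximum when d_E \gg d_S.

```python
import numpy as np

# A normalized vector of i.i.d. complex Gaussians is Haar-distributed.
# For d_E >> d_S, its entanglement entropy across the S/E split is very
# close to the maximum log2(d_S) bits.  (Dimensions are illustrative.)
rng = np.random.default_rng(1)
d_S, d_E = 8, 512
psi = rng.normal(size=(d_S, d_E)) + 1j * rng.normal(size=(d_S, d_E))
psi /= np.linalg.norm(psi)

# Squared singular values of the d_S x d_E amplitude matrix are the
# Schmidt probabilities, which give the entanglement entropy directly.
p = np.linalg.svd(psi, compute_uv=False)**2
entropy = -np.sum(p * np.log2(p))

assert entropy > 0.95 * np.log2(d_S)   # within a few percent of maximal
```

The small deficit from the maximum is itself well characterized (it shrinks like d_S/d_E), which is the quantitative version of the "highly likely to be almost the maximum" statement above.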

Undetected photon imaging

Lemos et al. have a relatively recent letter in Nature (G. Lemos, V. Borish, G. Cole, S. Ramelow, R. Lapkiewicz, and A. Zeilinger, “Quantum imaging with undetected photons”, Nature 512, 409 (2014) [arXiv:1401.4318]) where they describe a method of imaging with undetected photons. (An experiment with the same essential quantum features was performed way back in 1991 by Zou et al. (X. Y. Zou, L. J. Wang, and L. Mandel, “Induced coherence and indistinguishability in optical interference”, Phys. Rev. Lett. 67, 318 (1991) [PDF]), but Lemos et al. have emphasized its implications for imaging.) The idea is conceptually related to decoherence detection, and I want to map one onto the other to flesh out the connection. Their figure 1 gives a schematic of the experiment, and is copied below.

Figure 1 from Lemos et al.: ''Schematic of the experiment. Laser light (green) splits at beam splitter BS1 into modes a and b. Beam a pumps nonlinear crystal NL1, where collinear down-conversion may produce a pair of photons of different wavelengths called signal (yellow) and idler (red). After passing through the object O, the idler reflects at dichroic mirror D2 to align with the idler produced in NL2, such that the final emerging idler f does not contain any information about which crystal produced the photon pair.
[continue reading]

Quantum Brownian motion: Definition

In this post I’m going to give a clean definition of idealized quantum Brownian motion and give a few entry points into the literature surrounding its abstract formulation. A follow-up post will give an interpretation to the components in the corresponding dynamical equation, and some discussion of how the model can be generalized to take into account the ways the idealization may break down in the real world.

I needed to learn this background for a paper I am working on, and I was motivated to compile it here because the idiosyncratic results returned by Google searches, and especially this MathOverflow question (which I’ve answered), made it clear that a bird’s eye view is not easy to find. All of the material below is available in the work of other authors, but not logically developed in the way I would prefer.


Quantum Brownian motion (QBM) is a prototypical and idealized case of a quantum system \mathcal{S}, consisting of a continuous degree of freedom, that is interacting with a large multi-partite environment \mathcal{E}, in general leading to varying degrees of dissipation, dispersion, and decoherence of the system. Intuitively, the distinguishing characteristic of QBM is Markovian dynamics induced by the cumulative effect of an environment with many independent, individually weak, and (crucially) “phase-space local” components.… [continue reading]
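As a preview of the dynamical equation treated in the follow-up, here is a sketch of just the decoherence part in the familiar high-temperature (Caldeira-Leggett) regime; the parameters and the toy initial state are my illustrative choices.

```python
import numpy as np

# Decoherence part of high-temperature QBM (Caldeira-Leggett regime;
# parameters illustrative).  In the position basis, pure decoherence
# acts as
#   rho(x, x', t) = rho(x, x', 0) * exp(-D * (x - x')**2 * t),
# so widely separated superpositions lose coherence fastest while the
# diagonal (x = x') is untouched -- "phase-space local" damping.
D, t = 1.0, 0.5
x = np.linspace(-5, 5, 201)
X, Xp = np.meshgrid(x, x, indexing='ij')

rho0 = np.exp(-(X**2 + Xp**2) / 2)            # toy initial density matrix
rho_t = rho0 * np.exp(-D * (X - Xp)**2 * t)

assert np.allclose(np.diag(rho_t), np.diag(rho0))      # populations preserved
far = rho_t[0, -1] / rho0[0, -1]     # coherence between x = -5 and x' = +5
near = rho_t[100, 101] / rho0[100, 101]  # nearly diagonal element
assert far < near < 1                # damping grows with separation
```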

In what sense is the Wigner function a quasiprobability distribution?

For the umpteenth time I have read a paper introducing the Wigner function essentially like this:

The Wigner representation of a quantum state \rho is a real-valued function on phase space (actually, they usually use a more confusing definition; see my post on the intuitive definition of the Wigner function) defined with \hbar=1 as

(1)   \begin{align*} W_\rho(x,p) \equiv \frac{1}{2\pi} \int \! \mathrm{d}\Delta x \, e^{-i p \Delta x} \langle x+\Delta x /2 \vert \rho \vert x-\Delta x /2 \rangle. \end{align*}

It’s sort of like a probability distribution because the marginals reproduce the probabilities for position and momentum measurements:

(2)   \begin{align*} P(x) \equiv \langle x \vert \rho \vert x \rangle = \int \! \mathrm{d}p \, W_\rho(x,p) \end{align*}


(3)   \begin{align*} P(p) \equiv  \langle p\vert \rho \vert p \rangle = \int \! \mathrm{d}x \, W_\rho(x,p). \end{align*}

But the reason it’s not a real probability distribution is that it can be negative.

The fact that W_\rho(x,p) can be negative is obviously a reason you can’t think about it as a true PDF, but the marginals property is a terribly weak justification for thinking about W_\rho as a “quasi-PDF”. There are all sorts of functions one could write down that would have this same property but wouldn’t encode much information about actual phase space structure, e.g., the Jigner function (“Jess” + “Wigner” = “Jigner”. Ha!) J_\rho(x,p) \equiv P(x)P(p) = \langle x \vert \rho \vert x \rangle \langle p \vert \rho \vert p \rangle, which tells us nothing whatsoever about how position relates to momentum.
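To make the negativity concrete, one can evaluate the defining integral numerically (with \hbar = 1 and a 1/2\pi normalization so the marginals come out right; the harmonic-oscillator states are my examples): the ground state’s Wigner function is everywhere positive, with W(0,0) = 1/\pi, but the first excited state is already negative at the origin, W(0,0) = -1/\pi.

```python
import numpy as np

# Numerical evaluation of the defining integral (hbar = 1, with a
# 1/(2*pi) prefactor so the marginals come out normalized):
#   W(x,p) = (1/2pi) Int dDx exp(-i p Dx) psi(x + Dx/2) psi*(x - Dx/2)
step = 1e-3
d = np.arange(-12, 12, step)   # integration variable Delta-x

def wigner_point(psi, x, p):
    integrand = np.exp(-1j * p * d) * psi(x + d/2) * np.conj(psi(x - d/2))
    return np.real(np.sum(integrand) * step) / (2 * np.pi)

# Harmonic-oscillator ground and first excited states (unit units):
psi0 = lambda x: np.pi**-0.25 * np.exp(-x**2 / 2)
psi1 = lambda x: np.sqrt(2) * np.pi**-0.25 * x * np.exp(-x**2 / 2)

assert abs(wigner_point(psi0, 0, 0) - 1/np.pi) < 1e-3  # positive peak
assert abs(wigner_point(psi1, 0, 0) + 1/np.pi) < 1e-3  # negative at origin
```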

Here is the real reason you should think the Wigner function W_\rho is almost, but not quite, a phase-space PDF for a state \rho:

  1. Consider an arbitrary length scale \sigma_x, which determines a corresponding momentum scale \sigma_p = 1/(2\sigma_x) and a corresponding set (not just a set of states, actually, but a Parseval tight frame)
[continue reading]