Follow up on contextuality and non-locality

This is a follow up on my earlier post on contextuality and non-locality. As far as I can tell, Spekkens' paper is the gold standard for how to think about contextuality in the messy real world. In particular, since the idea of “equivalent” measurements is key, we might never be able to establish that we are making “the same” measurement from one experiment to the next; there could always be small microscopic differences for which we are unable to account. However, Spekkens' idea of forming equivalence classes from measurement protocols that always produce the same results is very natural. It isolates, as much as possible, the inherent ugliness of a contextual model that gives different ontological descriptions for measurements that somehow always seem to give identical results.

I also learned an extremely important thing in my background reading. Apparently John Bell discovered contextuality a few years before Kochen and Specker (KS). (This is according to Mermin's RMP on contextuality and locality; I haven't gone back and read Bell's papers to make sure he really did describe something equivalent to the KS theorem.) More importantly, Bell's theorem on locality grew out of this discovery; the theorem is just a special case of contextuality where “the context” is a space-like separated measurement.

So now I think I can get behind Spekkens' idea that contextuality is more important than non-locality per se. It seems very plausible to me that the general idea of contextuality is driving at the key thing that's weird about quantum mechanics (QM) and that — if QM is one day more clearly explained by a successor theory — we will find that the non-local special case of contextuality isn't particularly different from local versions.… [continue reading]

Wigner function = Fourier transform + Coordinate rotation

[Follow-up post: In what sense is the Wigner function a quasiprobability distribution?]

I’ve never liked how people introduce the Wigner function (aka the Wigner quasi-probability distribution). Usually, they just write down a definition like

(1)   \begin{align*} W(x,p) = \frac{1}{\pi \hbar} \int \mathrm{d}y \rho(x+y, x-y) e^{-2 i p y/\hbar} \end{align*}

and say that it’s the “closest phase-space representation” of a quantum state. One immediately wonders: What’s with the weird factor of 2, and what the heck is y? Usually, the only justification given for the probability interpretation is that integrating over one of the variables recovers the probability distribution for the other (if it were measured):

(2)   \begin{align*} \int \! \mathrm{d}p \, W(x,p) = \rho(x,x) , \\ \int \! \mathrm{d}x \, W(x,p) = \hat{\rho}(p,p) , \end{align*}

where \hat{\rho}(p,p') is just the density matrix in the momentum basis. But of course, that doesn't really tell us why we should think of W(x,p) as having anything to do with the (rough) value of x conditional on a (rough) value of p.
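(In case it's not obvious, the first identity in (2) follows in one line from the definition (1), using \int \! \mathrm{d}p \, e^{-2 i p y/\hbar} = \pi \hbar \, \delta(y):

\begin{align*} \int \! \mathrm{d}p \, W(x,p) = \frac{1}{\pi \hbar} \int \! \mathrm{d}y \, \rho(x+y,x-y) \, \pi \hbar \, \delta(y) = \rho(x,x) . \end{align*}

The momentum-space identity follows the same way after writing \rho in the momentum basis.)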

Well now I have a much better idea of what the Wigner function actually is and how to interpret it. We start by writing it down in sane variables (and suppress \hbar):

(3)   \begin{align*} W(\bar{x},\bar{p}) = \frac{1}{2 \pi} \int \! \mathrm{d}\Delta x \,\rho \left(\bar{x}+\frac{\Delta x}{2}, \bar{x}-\frac{\Delta x}{2} \right) e^{-i \bar{p} \Delta x}. \end{align*}

So the first step in the interpretation is to consider the function

(4)   \begin{align*} M(\bar{x},\Delta x) \equiv  \rho \left(\bar{x}+\frac{\Delta x}{2}, \bar{x}-\frac{\Delta x}{2} \right) , \end{align*}

which appears in the integrand. This is just the (position-space) density matrix in rotated coordinates \bar{x} \equiv (x+x')/2 and \Delta x \equiv x-x'. There is a strong sense in which the off-diagonal terms of the density matrix represent the quantum coherence of the state between different positions, so \Delta x indexes how far this coherence extends; large values of \Delta x indicate large spatial coherence. On the other hand, \bar{x} indexes how far down the diagonal of the density matrix we move; it's the average position of the two points between which the off-diagonal terms of the density matrix measure coherence. (See the figure below.)


The function M(\bar{x},\Delta x) is just the position-space density matrix \rho(x,x') expressed in the rotated coordinates (\bar{x},\Delta x) = ((x+x')/2,\, x-x').
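For the numerically inclined, here is a minimal sketch (mine, not from the original post; the Gaussian test state and grid sizes are arbitrary choices) that builds W exactly as in (3) and (4), by slicing the density matrix along anti-diagonals and Fourier transforming over the separation, and then checks the first marginal identity of (2):

```python
import numpy as np

# A Gaussian pure state on a grid (the width sigma and grid sizes are arbitrary choices).
N, L, sigma = 256, 20.0, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
rho = np.outer(psi, psi.conj())          # position-space density matrix rho(x, x')

# Step 1, eq. (4): rotate coordinates. For each xbar = x[i], collect the anti-diagonal
# slice M(xbar, Dx) = rho(xbar + Dx/2, xbar - Dx/2); the separation Dx runs over
# multiples of 2*dx so that both arguments stay on the grid.
M = np.zeros((N, N), dtype=complex)      # rows: xbar, columns: Dx (Dx = 0 at column N//2)
for i in range(N):
    for j in range(N):
        k = j - N // 2                   # Dx = 2 * k * dx
        if 0 <= i + k < N and 0 <= i - k < N:
            M[i, j] = rho[i + k, i - k]

# Step 2, eq. (3): Fourier transform over the separation Dx (with hbar = 1).
p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=2 * dx))
W = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(M, axes=1), axis=1), axes=1)
W = np.real(W) * (2 * dx) / (2 * np.pi)

# Check eq. (2): integrating out p recovers the position distribution rho(x, x).
dp = p[1] - p[0]
print(np.allclose(W.sum(axis=1) * dp, np.real(np.diag(rho))))   # True
```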
[continue reading]

Wavepacket spreading produces force sensitivity

I'm still trying to decide if I understand this correctly, but it looks like coherent wavepacket spreading is sufficient to produce states of a test mass that are highly sensitive to weak forces. The Wigner function of a coherent wavepacket is sheared horizontally in phase space (see hand-drawn figure). A force that perturbs it with even a small momentum shift will then produce an orthogonal state of the test mass.


The Gaussian wavepacket of a test mass (left) will be sheared horizontally in phase space by the free-particle evolution governed by H=p^2/2m. A small vertical (i.e. momentum) shift by a weak force can then produce an orthogonal state of the test mass, while it would not for the unsheared state. However, discriminating between the shifted and unshifted wavepackets requires a momentum-like measurement; position measurements would not suffice.
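To put a number on this (a back-of-the-envelope addition of mine, not in the original post): for a state with position spread \sigma_x, the overlap with its momentum-kicked copy is just the characteristic function of the position distribution, which for a Gaussian wavepacket is

\begin{align*} \left\vert \langle \psi \vert e^{i \, \delta p \, \hat{x} / \hbar} \vert \psi \rangle \right\vert = \exp\!\left( - \frac{\delta p^2 \, \sigma_x^2}{2 \hbar^2} \right) , \end{align*}

so once the shear has grown the spatial width to \sigma_x(t) \approx \sigma_p t / m, a kick as small as \delta p \sim \hbar / \sigma_x(t) is enough to make the kicked and unkicked states nearly orthogonal.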

Of course, we could simply start with a wavepacket with a very wide spatial width and narrow momentum width. Back when this was being discussed by Caves and others in the '80s, they recognized that these states would have such sensitivity. However, they pointed out, this couldn't really be exploited because of the difficulty in making true momentum measurements. Rather, we usually measure momentum indirectly by allowing the normal free-particle (H=p^2/2m) evolution to carry the state to different points in space, and then measuring position. But this doesn't work under the condition in which we're interested: when the time between measurements is limited. (The original motivation was detecting gravitational waves, which transmit zero net momentum when averaged over the time interval on which the wave interacts with the test mass; the only way to notice the wave is to measure it in the act, since the momentum transfer can be finite for intermediate times.) … [continue reading]

Comments on Gell-Mann & Hartle’s latest

Back in December Gell-Mann and Hartle (G&H) posted their latest paper on consistent histories, “Adaptive Coarse Graining, Environment, Strong Decoherence, and Quasiclassical Realms”. Here are my thoughts.

The discussion of adaptive coarse graining was brief and very much in agreement with previous work.

G&H then give a name and formal description to the idea, long part of the intuitive lore, of a history being defined by the values taken by a particular variable over many time steps. (This might be the position of an object, which is being recorded to some accuracy by an environment that decoheres it.) The key idea is that all the Schrödinger-picture projectors P^{k}_{\alpha_k} at different times t_k defining the history commute:

(1)   \begin{align*} [P^{k}_{\alpha_k},P^{k'}_{\alpha_{k'}}]=0 \quad \forall k,k' \end{align*}

This they call the narrative condition. From it, one is able to define a smallest set of maximal projectors Q_i (which they call a common framework) that obey either Q_i \le P^{k}_{\alpha_k} or Q_i  P^{k}_{\alpha_k} = P^{k}_{\alpha_k} Q_i = 0 for each P^{k}_{\alpha_k}. For instance, if the P's are projectors onto spatial volumes of position, then the Q's are just the minimal partition of position space such that the region associated with each Q_i is fully contained in the regions corresponding to some of the P^{k}_{\alpha_k}, and is completely disjoint from the regions corresponding to the others.


For time steps k=1,\ldots,4, the history projectors at a given time project into subspaces which disjointly span Hilbert space. The narrative condition states that all history projectors commute, which means we can think of them as projecting onto disjoint subsets forming a partition of the range of some variable (e.g. position). The common framework is just the set of smaller projectors that also partitions the range of the variable and which obey Q \le P or QP = PQ = 0 for each P and Q.
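Here is a small sketch of the common-framework construction for the spatial-volume example above (my own toy code, with made-up interval data): when every history projector corresponds to an interval of position, the Q's are obtained by merging the breakpoints of all the time steps' partitions, i.e. taking their common refinement.

```python
# Each time step k supplies a partition of the position range into intervals,
# specified here by its interior breakpoints on [0, 10]. (Made-up example data.)
lo, hi = 0.0, 10.0
partitions = [
    [3.0, 7.0],        # time step 1: [0,3), [3,7), [7,10]
    [5.0],             # time step 2: [0,5), [5,10]
    [2.0, 5.0, 8.0],   # time step 3: [0,2), [2,5), [5,8), [8,10]
]

# The common framework: merge all breakpoints. Each resulting cell Q lies inside
# exactly one cell of every time step's partition, which is the interval analogue
# of "Q <= P or QP = PQ = 0" holding for every P.
cuts = sorted({lo, hi, *(b for part in partitions for b in part)})
Q = list(zip(cuts[:-1], cuts[1:]))
print(Q)   # [(0.0, 2.0), (2.0, 3.0), (3.0, 5.0), (5.0, 7.0), (7.0, 8.0), (8.0, 10.0)]
```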
[continue reading]

Contextuality versus nonlocality

I wanted to understand Rob Spekkens' self-described lonely view that the contextual aspect of quantum mechanics is more important than the non-local aspect. Although I like to think I know a thing or two about the foundations of quantum mechanics, I'm embarrassingly unfamiliar with the discussion surrounding contextuality. 90% of my understanding comes from this famous explanation by David Bacon at his old blog. (Non-experts should definitely take the time to read that nice little post.) What follows are my thoughts before diving into the literature.

I find the map-territory distinction very important for thinking about this. Bell's theorem isn't a theorem about quantum mechanics (QM) per se; it's a theorem about locally realistic theories. It says that if the universe satisfies certain very reasonable assumptions, then it will behave in a certain manner. We observe that it doesn't behave in this manner, and therefore the universe doesn't satisfy those assumptions. The only reason that QM comes into it is that QM correctly predicts the misbehavior, whereas classical mechanics does not (since classical mechanics satisfies the assumptions).
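To make the “misbehavior” concrete, here is a tiny numerical check (my addition, not part of the original post): the CHSH combination of correlators is bounded by 2 in any locally realistic theory, but the singlet state pushes it to 2\sqrt{2}.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlator <A(a) B(b)> in the singlet state; analytically equals -cos(a - b)."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# Standard CHSH settings
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2), above the local-realism bound of 2
```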

Now, if you're comfortable writing down a unitarily evolving density matrix of macroscopic systems, then the mechanism by which QM is able to misbehave is actually fairly transparent. Write down an initial state, evolve it, and behold: the wavefunction is a sum of branches of macroscopically distinct outcomes with the appropriate statistics (assuming the Born rule). The importance of Bell's theorem is not that it shows that QM is weird; it's that it shows that the universe is weird. After all, we knew that the QM formalism violated all sorts of our intuitions: entanglement, Heisenberg uncertainty, wave-particle duality, etc.; we didn't need Bell's theorem to tell us QM was strange.… [continue reading]

Consistency conditions in consistent histories

[This is akin to a living review, which may improve from time to time. Last edited 2015-4-27.]

This post will summarize the various consistency conditions that can be found discussed in the consistent histories literature. Most of the conditions have gone by different names under different authors (and sometimes even under the same author), so I’ll try to give all the aliases I know; just hover over the footnote markers.

There is an overarching schism in the choice of terminology in the literature between the terms “consistent” and “decoherent”. Most authors, including Gell-Mann and Hartle, now use the term “decoherent” very loosely and no longer employ “consistent” as an official label for any particular condition (or for the formalism as a whole). Zurek and I believe this is a significant loss in terminology, and we are stubbornly resisting it. In our recent arXiv offering, our rant was thus:

…we emphasize that decoherence is a dynamical physical process predicated on a distinction between system and environment, whereas consistency is a static property of a set of histories, a Hamiltonian, and an initial state. For a given decohering quantum system, there is generally a preferred basis of pointer states [1, 8]. In contrast, the mere requirement of consistency does not distinguish a preferred set of histories which describe classical behavior from any of the many sets with no physical interpretation.

(See also the first footnote on page 3347 of “Classical Equations for Quantum Systems” by Gell-Mann and Hartle, which agrees with the importance of this conceptual distinction.) Since Gell-Mann and Hartle did many of the investigations of consistency conditions, some conditions have only appeared in the literature using their terminology (like “medium-strong decoherence”).… [continue reading]

Direct versus indirect measurements

andrelaszlo on HackerNews asked how someone could draw a reasonable distinction between “direct” and “indirect” measurements in science. Below is how I answered. This is old hat to many folks and, needless to say, none of this is original to me.

There’s a good philosophy of science argument to be made that there’s no precise and discrete distinction between direct and indirect measurement. In our model of the universe, there are always multiple physical steps that link the phenomena under investigation to our conscious perception. Therefore, any conclusions we draw from a perception are conditional on our confidence in the entire causal chain performing reliably (e.g. a gravitational wave induces a B-mode in the CMB, which propagates as a photon to our detectors, which heats up a transition-edge sensor, which increases the resistivity of the circuit, which flips a bit in the flash memory, which is read out to a monitor, which emits photons to our eye, which change the nerves firing in our brain). “Direct” measurements, then, are just ones that rely on a small number of reliable inferences, while “indirect” measurements rely on a large number of less reliable inferences.

Nonetheless, in practice there is a rather clear distinction which declares “direct” measurements to be those that take place locally (in space) using well-characterized equipment that we can (importantly) manipulate, and which is conditional only on physical laws which are very strongly established. All other measurements are called “indirect”, generally because they are observational (i.e. no manipulation of the experimental parameters), are conditional on tenuous ideas (e.g. naturalness arguments as indirect evidence for supersymmetry), and/or involve intermediary systems that are not well understood (e.g.

[continue reading]

Comments on Hotta’s Quantum Energy Teleportation

[This is a “literature impression”.]

Masahiro Hotta has a series of papers about what he calls “quantum energy teleportation (QET)”, modeled after the well-known notion of quantum teleportation (of information). Although it sounds like crazy crackpot stuff, and the papers contain the red-flag term “zero-point energy”, the basic physics of Hotta's work is sound. But it doesn't appear to have important consequences for energy transmission.

The idea is to exploit the fact that the ground state of the vacuum in QFT is, in principle, entangled over arbitrary distances. In a toy Alice and Bob model with respective systems A and B, you assume a Hamiltonian for which the ground state is unique and entangled. Then, Alice makes a local measurement on her system A. Neither of the two conditional global states for the joint AB system — conditional on the outcome of the measurement — is an eigenstate of the Hamiltonian, and therefore the average energy must increase for the joint system. The source of this energy is the device Alice used to make the measurement. Now, if Bob were to independently make a measurement of his system, he would find that energy would also necessarily flow from his device into the joint system; this follows from the symmetry of the problem. But if he waits for Alice to transmit to him the outcome of her result, it turns out that he can apply a local unitary to his B system and a subsequent local measurement that leads to a net average energy flow to his equipment. The fact that he must wait for the outcome of Alice's measurement, which travels no faster than the speed of light, is what gives this the flavor of teleportation.… [continue reading]
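Here is a minimal numerical sketch of that logic, in the spirit of Hotta's two-qubit minimal model (my own toy version: the couplings h and k, Alice's choice of a \sigma_x measurement, and Bob's \sigma_y rotation are illustrative choices, and I've dropped the constant energy offsets Hotta includes, since only energy differences matter):

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A two-qubit Hamiltonian with an entangled, unique ground state.
h, k = 1.0, 0.6
H = h * np.kron(sz, I2) + h * np.kron(I2, sz) + 2 * k * np.kron(sx, sx)
evals, evecs = np.linalg.eigh(H)
E0, g = evals[0], evecs[:, 0]
rho0 = np.outer(g, g.conj())

# Alice projectively measures sigma_x on her qubit. Neither conditional state is
# the ground state, so the mean energy must go up; that energy comes from her device.
outcomes = []
for s in (+1, -1):
    P = np.kron((I2 + s * sx) / 2, I2)
    prob = np.real(np.trace(P @ rho0))
    outcomes.append((prob, P @ rho0 @ P / prob))
E_meas = sum(prob * np.real(np.trace(H @ r)) for prob, r in outcomes)

# After learning Alice's outcome, Bob applies an outcome-dependent rotation about
# sigma_y on his qubit; grid-search the angle that minimizes the mean energy for
# each outcome. The drop below E_meas is the average energy his device can pull
# out, and it should come out positive here, consistent with Hotta's analysis.
def U_bob(theta):
    return np.kron(I2, np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sy)

thetas = np.linspace(-np.pi, np.pi, 1441)
E_final = sum(
    prob * min(np.real(np.trace(H @ U_bob(t) @ r @ U_bob(t).conj().T)) for t in thetas)
    for prob, r in outcomes
)

print("ground-state energy:          ", E0)
print("mean energy after A measures: ", E_meas)      # > E0
print("mean energy after B's unitary:", E_final)
print("energy extracted by Bob:      ", E_meas - E_final)
```

Note that without Alice's message, Bob can do nothing useful: the ground state is the global energy minimum, so no unitary (local or otherwise) can lower its mean energy.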

Literature impressions

I have often been frustrated by the inefficiency of reading through the physics literature. One problem is that physicists are sometimes bad teachers and are usually bad writers, and so it can take a long time of reading a paper before you even figure out what the author is trying to say. This gets worse when you look at papers that aren’t in your immediate physics niche, because then the author will probably use assumptions, mathematical techniques, and terminology you aren’t familiar with. If you had infinite time, you could spend days reading every paper that looks reasonably interesting, but you don’t. A preferred technique is to ask your colleagues to explain it to you, because they are more likely to speak your language and (unlike a paper) can answer your questions when you come up against a confusion. But generally your colleagues haven’t read it; they want you to read it so you can explain it to them. I spend a lot of time reading papers that end up being uninteresting, but it’s worth it for the occasional gems. And it seems clear that there is a lot of duplicated work being done sorting through the chaff.

So on the one hand we have a lengthy, fixed document from a single, often unfamiliar perspective (i.e. the actual paper in a different field) and on the other hand we have a breathing human being in your own field who will patiently explain things to you. An intermediate solution would be to have a few people in different fields read the paper and then translate the key parts into their field’s language, which could then be passed around.… [continue reading]

Cosmology meets philanthropy

[This was originally posted at the Quantum Pontiff.]

People sometimes ask me how my research will help society.  This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say… tenuously.  And of course, this is a fair question from the layman; tax dollars support most of our work.

I generally take the attitude of former Fermilab director Robert R. Wilson.  During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR.  He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”

Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”.  As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e. the difference between our current low-entropy state and the bleak maximal entropy state that lies in our far future.  And just about everything we might value is ultimately powered by it.  As we speak (or blog), the stupendously vast majority of negentropy usage is directed toward rather uninspiring ends, like illuminating distant planets no one will ever see.

These resources can probably be put to better use.  … [continue reading]

Decoherence Detection FAQ—Part 1: Dark matter

[Updated 2016-7-2]

I’ve submitted my papers (long and short arXiv versions) on detecting classically undetectable new particles through decoherence. The short version introduces the basic idea and states the main implications for dark matter and gravitons. The long version covers the dark matter case in depth. Abstract for the short version:

Detecting Classically Undetectable Particles through Quantum Decoherence

Some hypothetical particles are considered essentially undetectable because they are far too light and slow-moving to transfer appreciable energy or momentum to the normal matter that composes a detector. I propose instead directly detecting such feeble particles, like sub-MeV dark matter or even gravitons, through their uniquely distinguishable decoherent effects on quantum devices like matter interferometers. More generally, decoherence can reveal phenomena that have arbitrarily little classical influence on normal matter, giving new motivation for the pursuit of macroscopic superpositions.

This is figure 1:

Decoherence detection with a Mach-Zehnder interferometer. System \mathcal{N} is placed in a coherent superposition of spatially displaced wavepackets \vert N_{L} \rangle and \vert N_{R} \rangle that each travel a separate path and then are recombined. In the absence of system \mathcal{D}, the interferometer is tuned so that \mathcal{N} will be detected at the bright port with near unit probability, and at the dim port with near vanishing probability. However, if system \mathcal{D} scatters off \mathcal{N}, these two paths can decohere and \mathcal{N} will be detected at the dim port 50% of the time.
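For reference, here is where the 50% comes from (a standard two-line calculation, not part of the quoted caption; I'll write \vert D_0 \rangle, \vert D_L \rangle, and \vert D_R \rangle for the initial and conditional states of \mathcal{D}). Scattering correlates the two paths with \mathcal{D}, and the dim-port probability is controlled by the overlap of the conditional states:

\begin{align*} \frac{\vert N_{L} \rangle + \vert N_{R} \rangle}{\sqrt{2}} \otimes \vert D_0 \rangle \;\to\; \frac{\vert N_{L} \rangle \vert D_L \rangle + \vert N_{R} \rangle \vert D_R \rangle}{\sqrt{2}}, \qquad P_{\mathrm{dim}} = \frac{1 - \mathrm{Re} \langle D_L \vert D_R \rangle}{2}, \end{align*}

which runs from 0 when there is no scattering (\langle D_L \vert D_R \rangle = 1) to the 50% quoted above when the paths are completely decohered (\langle D_L \vert D_R \rangle = 0).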

Below are some FAQs I have received.

Won’t there always be momentum transfer in any nontrivial scattering?

For any nontrivial scattering of two particles, there must be some momentum transfer.  But the momentum transfer can be arbitrarily small by simply making the mass of the dark particle as tiny as desired (while keeping its velocity fixed).  … [continue reading]
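To see this kinematically (my illustrative arithmetic, not from the original answer): a particle of mass m_\chi and velocity v that bounces elastically off a much heavier target can transfer at most

\begin{align*} q_{\mathrm{max}} \approx 2 m_\chi v , \end{align*}

which goes to zero as m_\chi \to 0 at fixed v, even though the conditional states of the scattered particle can remain distinguishable and therefore fully decohering.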

Follow-up questions on the set-selection problem

Physics StackExchange user QuestionAnswers asked the question “Is the preferred basis problem solved?“, and I reproduced my “answer” (read: discussion) in a post last week.  He had some thoughtful follow-up questions, and (with his permission) I am going to answer them here. His questions are in bold, with minor punctuation changes.

How serious would you consider what you call the “Kent set-selection” problem?

If a set of CHs could be shown to be impossible to find, then this would break QM without necessarily telling us how to correct it. (Similar problems exist with the breakdown of gravity at the Planck scale.) Although I worry about this, I think it's unlikely, and most people think it's very unlikely. If a set can be found, but no principle can be found to prefer it, I would consider QM to be correct but incomplete. It would kinda be like if big bang nucleosynthesis had not been discovered to explain the primordial abundances of the elements.

And what did Zurek think of it, did he agree that it’s a substantial problem?

I think Wojciech believes a set of consistent histories (CHs) corresponding to the branch structure could be found, but that no one will find a satisfying beautiful principle within the CH framework which singles out the preferred set from the many, many other sets. He believes the concept of redundant records (see “quantum Darwinism”) is key, and that a set of CHs could be found after the fact, but that this is probably not important. I am actually leaving for NM on Friday to work with him on a joint paper exploring the connection between redundancy and histories.… [continue reading]

Macro superpositions of the metric

Now I would like to apply the reasoning of the last post to the case of verifying macroscopic superpositions of the metric.  It’s been 4 years since I’ve touched GR, so I’m going to rely heavily on E&M concepts and pray I don’t miss any key changes in the translation to gravity.

In the two-slit experiment with light, we don't take the visibility of interference fringes as evidence of quantum mechanics when there are many photons.  This is because the observations are compatible with a classical field description. We could interfere gravitational waves in a two-slit setup, and this would also have a purely classical explanation.

But in this post I’m not concentrating on evidence for pure quantum mechanics (i.e. a Bell-like argument grounded in locality), or evidence of the discrete nature of gravitons. Rather, I am interested in superpositions of two macroscopically distinct states of the metric as might be produced by a superposition of a large mass in two widely-separated positions.  Now, we can only call a quantum state a (proper) superposition by first identifying a preferred basis that it can be a superposition with respect to.  For now, I will wave my hands and say that the preferred states of the metric are just those metric states produced by the preferred states of matter, where the preferred states of matter are wavepackets of macroscopic amounts of mass localized in phase space (e.g. L/R).  Likewise, the conjugate basis states (e.g. L+R/L-R) are proper superpositions in the preferred basis, and these two bases do not commute.

There are two very distinct ways to produce a superposition with different states of the metric: (1) a coherent superposition of just gravitational radiation. (Note that we expect to produce this superposition by moving a macroscopic amount of matter into a superposition of two distinct position or momentum states.) … [continue reading]

Verifying superpositions

Suppose we are given an ensemble of systems which are believed to contain coherent superpositions of the metric. How would we confirm this?

Well, in order to verify that an arbitrary system is in a coherent superposition, which is always relative to a preferred basis, it's well known that we need to make measurements with respect to (at least?) two non-commuting bases. If we can make measurement M, we expect it to be possible to make measurement M' = RM for some symmetry R.

I consider essentially two types of Hilbert spaces: the infinite-dimensional space associated with position, and the finite-dimensional space associated with spin. They have a very different relationship with the fundamental symmetries of spacetime.

For spin, an arbitrary rotation in space is represented by a unitary which can produce proper superpositions. Rotating 90 degrees about the y axis takes a z-up eigenstate to an equal superposition of z-up and z-down. The rotation takes one basis to another with which it does not commute.

In contrast, for position, the unitary representing spatial translation is essentially just a permutation on the space of position eigenstates. It does not produce superpositions from non-superpositions with respect to this basis.
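A toy numerical illustration of this contrast (mine, with an arbitrary 5-site discretization of position):

```python
import numpy as np

# Spin-1/2: a 90-degree rotation about the y axis maps the sigma_z-up eigenstate
# to an equal superposition of up and down in the z basis.
theta = np.pi / 2
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
up = np.array([1.0, 0.0])
print(Ry @ up)           # ~[0.707, 0.707]: a proper superposition of z-up and z-down

# Position (discretized to 5 sites): translation by one site is just a permutation
# of the position eigenstates, so it never turns a position eigenstate into a
# superposition of position eigenstates.
d = 5
T = np.roll(np.eye(d), 1, axis=0)        # cyclic shift-by-one operator
e2 = np.zeros(d); e2[2] = 1.0            # position eigenstate at site 2
print(T @ e2)            # [0, 0, 0, 1, 0]: still a single position eigenstate
```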

You might think things are different when you consider more realistic measurements with respect to the over-complete basis of wavepackets. (Not surprisingly, the issue is one of preferred basis!) If you imagine the wavepackets as discretely tiling space, it's tempting to think that translating a single wavepacket by a half-integer number of tile spacings will yield an approximate superposition of two wavepackets. But the wavepackets are of course not discrete, and a POVM measurement of “fuzzy” position (for any degree of fuzziness σ) is invariant under spatial translations.… [continue reading]

Kent’s set-selection problem

Unfortunately, physicists and philosophers disagree on what exactly the preferred basis problem is, what would constitute a solution, and how this relates to (or subsumes) “the measurement problem” more generally. In my opinion, the most general version of the preferred basis problem was best articulated by Adrian Kent and Fay Dowker near the end of their 1996 article “On the Consistent Histories Approach to Quantum Mechanics” in the Journal of Statistical Physics. Unfortunately, this article is long, so I will try to quickly summarize the idea.

Kent and Dowker analyzed the question of whether the consistent histories formalism provided a satisfactory and complete account of quantum mechanics (QM). Contrary to what is often said, consistent histories and many-worlds need not be opposing interpretations of quantum mechanics. (Of course, some consistent historians make ontological claims about how the histories are “real”, whereas the many-worlders might say that the wavefunction is more “real”; in this sense they are contradictory. Personally, I think this is purely a matter of taste.) Instead, consistent histories is a good mathematical framework for rigorously identifying the branch structure of the wavefunction of the universe. (Note that although many-worlders may not consider the consistent histories formalism the only possible way to mathematically identify branch structure, I believe most would agree that if, in the future, some branch structure were identified using a completely different formalism, it could be described at least approximately by the consistent histories formalism.  Consistent histories may not be perfect, but it's unlikely that the ideas are totally wrong.) Most many-worlders would agree that unambiguously describing this branch structure would be very nice (although they might disagree on whether this is “necessary” for QM to be a complete theory).… [continue reading]