Wavepacket spreading produces force sensitivity

I’m still trying to decide if I understand this correctly, but it looks like coherent wavepacket spreading is sufficient to produce states of a test mass that are highly sensitive to weak forces. The Wigner function of a coherent wavepacket is sheared horizontally in phase space (see hand-drawn figure). A force that perturbs it only slightly, imparting a small momentum shift, can then produce an orthogonal state of the test mass.


The Gaussian wavepacket of a test mass (left) will be sheared horizontally in phase space by the free-particle evolution governed by H=p^2/2m. A small vertical (i.e. momentum) shift by a weak force can then produce an orthogonal state of the test mass, while it would not for the unsheared state. However, discriminating between the shifted and unshifted wavepackets requires a momentum-like measurement; position measurements would not suffice.
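
To be explicit about the shear (this is just the standard free-particle result, not something specific to the figure): under H=p^2/2m the Wigner function is carried along the classical trajectories,

\begin{align*} W_t(x,p) = W_0(x - pt/m,\, p), \end{align*}

which is exactly a horizontal shear of phase space.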

Of course, we could simply start with a wavepacket with a very wide spatial width and narrow momentum width. Back when this was being discussed by Caves and others in the ’80s, they recognized that these states would have such sensitivity. However, they pointed out, this couldn’t really be exploited because of the difficulty in making true momentum measurements. Rather, we usually measure momentum indirectly by allowing the normal free-particle (H=p^2/2m) evolution to carry the state to different points in space, and then measuring position. But this doesn’t work under the condition in which we’re interested: when the time between measurements is limited. The original motivation was for detecting gravitational waves, which transmit zero net momentum when averaged over the time interval over which the wave interacts with the test mass. The only way to notice the wave is to measure it in the act, since the momentum transfer can be finite for intermediate times.… [continue reading]
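
To put a number on the sensitivity (this is my own back-of-the-envelope, not taken from the original discussion): a weak impulsive force that imparts momentum \delta p multiplies the wavefunction by e^{i \delta p x/\hbar}, so for a Gaussian wavepacket with spatial variance \sigma_x^2 the overlap with the unkicked state is

\begin{align*} \left|\langle \psi | e^{i \delta p \hat{x}/\hbar} | \psi \rangle\right| = e^{-\delta p^2 \sigma_x^2 / 2\hbar^2}. \end{align*}

The kick needed to produce a nearly orthogonal state therefore scales like \hbar/\sigma_x(t), and free spreading only helps: for an initially uncorrelated Gaussian, \sigma_x(t)^2 = \sigma_x(0)^2 + \sigma_p^2 t^2/m^2. The catch, as noted above, is that telling the kicked packet apart from the unkicked one requires a momentum-like measurement.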

Comments on Gell-Mann & Hartle’s latest

Back in December Gell-Mann and Hartle (G&H) posted their latest paper on consistent histories, “Adaptive Coarse Graining, Environment, Strong Decoherence, and Quasiclassical Realms”. Here are my thoughts.

The discussion of adaptive coarse graining was brief and very much in agreement with previous work.

G&H then give a name and formal description to the idea, long part of the intuitive lore, of a history being defined by the values taken by a particular variable over many time steps. (This might be the position of an object, which is being recorded to some accuracy by an environment that decoheres it.) The key idea is that all the Schrödinger-picture projectors P^{k}_{\alpha_k} at different times t_k defining the history commute:

(1)   \begin{align*} [P^{k}_{\alpha_k},P^{k'}_{\alpha_{k'}}]=0 \quad \forall k,k' \end{align*}

This they call the narrative condition. From it, one is able to define a smallest set of maximal projectors Q_i (which they call a common framework) that obey either Q_i \le P^{k}_{\alpha_k} or Q_i P^{k}_{\alpha_k} = P^{k}_{\alpha_k} Q_i = 0 for all P^{k}_{\alpha_k}. For instance, if the P’s are projectors onto spatial volumes, then the Q’s are just the minimal partition of position space such that the region associated with each Q_i is fully contained in the regions corresponding to some of the P^{k}_{\alpha_k}, and is completely disjoint from the regions corresponding to the others.
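
Concretely, when all the P’s are diagonal in the same variable (say, a discretized position coordinate), the Q’s are just the cells of the common refinement of the corresponding partitions. Here is a minimal sketch of that construction (my own illustration, not from the paper), with each projector represented simply as the set of position bins it projects onto:

```python
def common_framework(projector_bins, all_bins):
    """Given each history projector P as a set of position bins, return the
    coarsest partition {Q_i} of all_bins such that every Q_i is either fully
    contained in or completely disjoint from every P."""
    cells = [frozenset(all_bins)]
    for P in map(frozenset, projector_bins):
        new_cells = []
        for cell in cells:
            # Split each existing cell into its parts inside and outside P.
            new_cells.extend(c for c in (cell & P, cell - P) if c)
        cells = new_cells
    return cells

# Example: three history projectors onto overlapping ranges of bins 0..9.
Ps = [{0, 1, 2, 3}, {2, 3, 4, 5, 6}, {5, 6, 7, 8, 9}]
for Q in common_framework(Ps, range(10)):
    print(sorted(Q))
```

Running this on the example prints the cells {0,1}, {2,3}, {4}, {5,6}, {7,8,9}: each one is contained in or disjoint from every P, and no coarser partition has that property.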


For time steps k=1,\ldots,4, the history projectors at a given time project onto mutually orthogonal subspaces that together span the Hilbert space. The narrative condition states that all history projectors commute, which means we can think of them as projecting onto disjoint subsets forming a partition of the range of some variable (e.g. position). The common framework is just the set of smaller projectors that also partitions the range of the variable and which obey Q \le P or QP = PQ = 0 for each P and Q.
[continue reading]

Contextuality versus nonlocality

I wanted to understand Rob Spekkens’ self-described lonely view that the contextual aspect of quantum mechanics is more important than the non-local aspect. Although I like to think I know a thing or two about the foundations of quantum mechanics, I’m embarrassingly unfamiliar with the discussion surrounding contextuality. 90% of my understanding comes from this famous explanation by David Bacon at his old blog. (Non-experts should definitely take the time to read that nice little post.) What follows are my thoughts before diving into the literature.

I find the map-territory distinction very important for thinking about this. Bell’s theorem isn’t a theorem about quantum mechanics (QM) per se; it’s a theorem about locally realistic theories. It says that if the universe satisfies certain very reasonable assumptions, then it will behave in a certain manner. We observe that it doesn’t behave in this manner, therefore the universe doesn’t satisfy those assumptions. The only reason that QM comes into it is that QM correctly predicts the misbehavior, whereas classical mechanics does not (since classical mechanics satisfies the assumptions).

Now, if you’re comfortable writing down a unitarily evolving density matrix of macroscopic systems, then the mechanism by which QM is able to misbehave is actually fairly transparent. Write down an initial state, evolve it, and behold: the wavefunction is a sum of branches of macroscopically distinct outcomes with the appropriate statistics (assuming the Born rule). The importance of Bell’s Theorem is not that it shows that QM is weird, it’s that it shows that the universe is weird. After all, we knew that the QM formalism violated all sorts of our intuitions: entanglement, Heisenberg uncertainty, wave-particle duality, etc.; we didn’t need Bell’s theorem to tell us QM was strange.… [continue reading]
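
To see that transparency in action (this numerical example is mine, not from the post): the CHSH correlator for a singlet state, computed straight from the formalism, comes out to 2\sqrt{2}, beating the bound of 2 that every locally realistic theory must obey.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlator <A(a) B(b)> in the singlet state."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# Standard CHSH angle choices.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))  # ~2.828 = 2*sqrt(2), above the locally realistic bound of 2
```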

Consistency conditions in consistent histories

[This is akin to a living review, which may improve from time to time. Last edited 2015-4-27.]

This post will summarize the various consistency conditions that can be found discussed in the consistent histories literature. Most of the conditions have gone by different names under different authors (and sometimes even under the same author), so I’ll try to give all the aliases I know; just hover over the footnote markers.
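
For orientation (this is my own gloss, not a quote from the post), all of these conditions are constraints on the decoherence functional built from the class operators of the histories. With Heisenberg-picture projectors and an initial state \rho,

\begin{align*} C_\alpha = P^{n}_{\alpha_n}(t_n) \cdots P^{1}_{\alpha_1}(t_1), \qquad D(\alpha,\alpha') = \mathrm{Tr}\big[C_\alpha\, \rho\, C_{\alpha'}^\dagger\big], \end{align*}

and the most commonly used condition, usually called medium decoherence, demands D(\alpha,\alpha') = 0 for all \alpha \neq \alpha'; the weaker conditions relax this, e.g. by requiring only the real part to vanish.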

There is an overarching schism in the choice of terminology in the literature between the terms “consistent” and “decoherent”. Most authors, including Gell-Mann and Hartle, now use the term “decoherent” very loosely and no longer employ “consistent” as an official label for any particular condition (or for the formalism as a whole). Zurek and I believe this is a significant loss in terminology, and we are stubbornly resisting it. In our recent arXiv offering, our rant was thus:

…we emphasize that decoherence is a dynamical physical process predicated on a distinction between system and environment, whereas consistency is a static property of a set of histories, a Hamiltonian, and an initial state. For a given decohering quantum system, there is generally a preferred basis of pointer states [1, 8]. In contrast, the mere requirement of consistency does not distinguish a preferred set of histories which describe classical behavior from any of the many sets with no physical interpretation.

(See also the first footnote on page 3347 of “Classical Equations for Quantum Systems” by Gell-Mann and Hartle, which agrees with the importance of this conceptual distinction.) Since Gell-Mann and Hartle did many of the investigations of consistency conditions, some conditions have only appeared in the literature using their terminology (like “medium-strong decoherence”).… [continue reading]

Direct versus indirect measurements

andrelaszlo on HackerNews asked how someone could draw a reasonable distinction between “direct” and “indirect” measurements in science. Below is how I answered. This is old hat to many folks and, needless to say, none of this is original to me.

There’s a good philosophy of science argument to be made that there’s no precise and discrete distinction between direct and indirect measurement. In our model of the universe, there are always multiple physical steps that link the phenomena under investigation to our conscious perception. Therefore, any conclusions we draw from a perception are conditional on our confidence in the entire causal chain performing reliably (e.g. a gravitational wave induces a B-mode in the CMB, which propagates as a photon to our detectors, which heats up a transition-edge sensor, which increases the resistivity of the circuit, which flips a bit in the flash memory, which is read out to a monitor, which emits photons to our eye, which change the nerves firing in our brain). “Direct” measurements, then, are just ones that rely on a small number of reliable inferences, while “indirect” measurements rely on a large number of less reliable inferences.

Nonetheless, in practice there is a rather clear distinction which declares “direct” measurements to be those that take place locally (in space) using well-characterized equipment that we can (importantly) manipulate, and which is conditional only on physical laws which are very strongly established. All other measurements are called “indirect”, generally because they are observational (i.e. no manipulation of the experimental parameters), are conditional on tenuous ideas (i.e. naturalness arguments as indirect evidence for supersymmetry), and/or involve intermediary systems that are not well understood (e.g.… [continue reading]

Comments on Hotta’s Quantum Energy Teleportation

[This is a “literature impression”.]

Masahiro Hotta has a series of papers about what he calls “quantum energy teleportation (QET)”, modeled after the well-known notion of quantum teleportation (of information). Although it sounds like crazy crackpot stuff, and the papers contain the red-flag term “zero-point energy”, the basic physics of Hotta’s work is sound. But it doesn’t appear to have important consequences for energy transmission.

The idea is to exploit the fact that the ground state of the vacuum in QFT is, in principle, entangled over arbitrary distances. In a toy Alice and Bob model with respective systems A and B, you assume a Hamiltonian for which the ground state is unique and entangled. Then, Alice makes a local measurement on her system A. Neither of the two conditional global states for the joint AB system — conditional on the outcome of the measurement — is an eigenstate of the Hamiltonian, so the average energy of the joint system must increase. The source of this energy is the device Alice used to make the measurement. Now, if Bob were to independently make a measurement of his system, he would find that energy would also necessarily flow from his device into the joint system; this follows from the symmetry of the problem. But if he waits for Alice to transmit to him the outcome of her measurement, it turns out that he can apply a local unitary to his B system, followed by a local measurement, that leads to a net average energy flow into his equipment. The fact that he must wait for the outcome of Alice’s measurement, which travels no faster than the speed of light, is what gives this the flavor of teleportation.… [continue reading]
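
To make the bookkeeping concrete, here is a toy numerical sketch in the spirit of Hotta’s minimal two-qubit model. The Hamiltonian, the parameter values, and the restriction of Bob’s conditional operation to a single rotation are my own choices for illustration, not taken from the papers; the point is only to show where the energy comes from and goes.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Two-qubit Hamiltonian (A = first factor, B = second) with a unique,
# entangled ground state.  h and k are arbitrary illustrative parameters.
h, k = 1.0, 0.5
H = h * (np.kron(sz, I2) + np.kron(I2, sz)) + 2 * k * np.kron(sx, sx)

evals, evecs = np.linalg.eigh(H)
ground, E0 = evecs[:, 0], evals[0]

def energy(psi):
    return np.real(psi.conj() @ H @ psi)

results = []
for mu in (+1, -1):
    # Alice projectively measures sigma_x on A and obtains outcome mu.
    P = np.kron((I2 + mu * sx) / 2, I2)
    unnorm = P @ ground
    prob = np.real(unnorm.conj() @ unnorm)
    psi_mu = unnorm / np.sqrt(prob)

    # Bob, told mu, applies a local rotation exp(-i*theta*mu*sigma_y) on B;
    # grid-search theta for the largest energy decrease (energy he extracts).
    best = 0.0
    for theta in np.linspace(0, np.pi, 2001):
        U = np.kron(I2, np.cos(theta) * I2 - 1j * mu * np.sin(theta) * sy)
        best = max(best, energy(psi_mu) - energy(U @ psi_mu))
    results.append((prob, energy(psi_mu), best))

E_A = sum(p * e for p, e, _ in results) - E0   # energy injected by Alice's device
E_B = sum(p * b for p, _, b in results)        # average energy Bob can pull back out
print(f"injected at A: {E_A:.3f}   extractable at B: {E_B:.3f}")
```

Since no final state can have less energy than the ground state, whatever Bob extracts on average is bounded by what Alice’s device injected; the classical message is what lets him recover part of it locally.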

Literature impressions

I have often been frustrated by the inefficiency of reading through the physics literature. One problem is that physicists are sometimes bad teachers and are usually bad writers, so it can take a long time reading a paper before you even figure out what the author is trying to say. This gets worse when you look at papers that aren’t in your immediate physics niche, because then the author will probably use assumptions, mathematical techniques, and terminology you aren’t familiar with. If you had infinite time, you could spend days reading every paper that looks reasonably interesting, but you don’t. A preferred technique is to ask your colleagues to explain it to you, because they are more likely to speak your language and (unlike a paper) can answer your questions when you come up against a confusion. But generally your colleagues haven’t read it; they want you to read it so you can explain it to them. I spend a lot of time reading papers that end up being uninteresting, but it’s worth it for the occasional gems. And it seems clear that there is a lot of duplicated work being done sorting through the chaff.

So on the one hand we have a lengthy, fixed document from a single, often unfamiliar perspective (i.e. the actual paper in a different field) and on the other hand we have a breathing human being in your own field who will patiently explain things to you. An intermediate solution would be to have a few people in different fields read the paper and then translate the key parts into their field’s language, which could then be passed around.… [continue reading]

Cosmology meets philanthropy

[This was originally posted at the Quantum Pontiff.]

People sometimes ask me how my research will help society.  This question is familiar to physicists, especially those of us whose research is connected to every-day life only… shall we say… tenuously.  And of course, this is a fair question from the layman; tax dollars support most of our work.

I generally take the attitude of former Fermilab director Robert R. Wilson.  During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR.  He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”

Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”.  As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e. the difference between our current low-entropy state and the bleak maximal entropy state that lies in our far future.  And just about everything we might value is ultimately powered by it.  As we speak (or blog), the stupendously vast majority of negentropy usage is directed toward rather uninspiring ends, like illuminating distant planets no one will ever see.

These resources can probably be put to better use.  … [continue reading]