[This is akin to a living review, which may improve from time to time. Last edited 2015-4-27.]
This post will summarize the various consistency conditions that can be found discussed in the consistent histories literature. Most of the conditions have gone by different names under different authors (and sometimes even under the same author), so I’ll try to give all the aliases I know; just hover over the footnote markers.
There is an overarching schism in the choice of terminology in the literature between the terms “consistent” and “decoherent”. Most authors, including Gell-Mann and Hartle, now use the term “decoherent” very loosely and no longer employ “consistent” as an official label for any particular condition (or for the formalism as a whole). Zurek and I believe this is a significant loss in terminology, and we are stubbornly resisting it. In our recent arXiv offering, our rant was thus:
…we emphasize that decoherence is a dynamical physical process predicated on a distinction between system and environment, whereas consistency is a static property of a set of histories, a Hamiltonian, and an initial state. For a given decohering quantum system, there is generally a preferred basis of pointer states [1, 8]. In contrast, the mere requirement of consistency does not distinguish a preferred set of histories which describe classical behavior from any of the many sets with no physical interpretation.
… [continue reading]
andrelaszlo on HackerNews asked how someone could draw a reasonable distinction between “direct” and “indirect” measurements in science. Below is how I answered. This is old hat to many folks and, needless to say, none of this is original to me.
There’s a good philosophy of science argument to be made that there’s no precise and discrete distinction between direct and indirect measurement. In our model of the universe, there are always multiple physical steps that link the phenomena under investigation to our conscious perception. Therefore, any conclusions we draw from a perception are conditional on our confidence in the entire causal chain performing reliably (e.g. a gravitational wave induces a B-mode in the CMB, which propagates as a photon to our detectors, which heats up a transition-edge sensor, which increases the resistivity of the circuit, which flips a bit in the flash memory, which is read out to a monitor, which emits photons to our eye, which change the nerves firing in our brain). “Direct” measurements, then, are just ones that rely on a small number of reliable inferences, while “indirect” measurements rely on a large number of less reliable inferences.
Nonetheless, in practice there is a rather clear distinction which declares “direct” measurements to be those that take place locally (in space) using well-characterized equipment that we can (importantly) manipulate, and which is conditional only on physical laws which are very strongly established.
… [continue reading]
[This is a “literature impression”.]
Masahiro Hotta has a series of papers about what he calls “quantum energy teleportation (QET)”, modeled after the well-known notion of quantum teleportation (of information). Although it sounds like crazy crackpot stuff, and the papers contain the red-flag term “zero-point energy”, the basic physics of Hotta’s work is sound. But it doesn’t appear to have important consequences for energy transmission.
The idea is to exploit the fact that the ground state of the vacuum in QFT is, in principle, entangled over arbitrary distances. In a toy model where Alice and Bob each hold a local system, you assume a Hamiltonian whose ground state is unique and entangled. Then, Alice makes a local measurement on her system. Neither of the two conditional global states for the joint system — conditional on the outcome of the measurement — is an eigenstate of the Hamiltonian, and therefore the average energy must increase for the joint system. The source of this energy is the device Alice used to make the measurement. Now, if Bob were to independently make a measurement of his system, he would find that energy would also necessarily flow from his device into the joint system; this follows from the symmetry of the problem.… [continue reading]
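The energy-increase step can be checked numerically in a minimal toy model of my own (two qubits, not Hotta’s field-theory setup): take the Hamiltonian H = X⊗X + Z⊗Z, whose unique ground state is the entangled singlet with energy −2, and let Alice measure Z on her qubit.

```python
import numpy as np

# Toy two-qubit illustration (mine, not Hotta's QFT setup):
# H = X(x)X + Z(x)Z has the entangled singlet as its unique ground state.
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

H = np.kron(X, X) + np.kron(Z, Z)
evals, evecs = np.linalg.eigh(H)
E0, ground = evals[0], evecs[:, 0]      # ground energy -2, singlet state

# Alice measures Z on her qubit: projectors |0><0| (x) I and |1><1| (x) I.
results = []
for outcome in (0, 1):
    P = np.zeros((2, 2)); P[outcome, outcome] = 1.
    psi = np.kron(P, I2) @ ground
    prob = psi @ psi                    # Born-rule probability of this outcome
    psi /= np.sqrt(prob)
    dE = psi @ H @ psi - E0             # energy injected by the measurement
    results.append((prob, dE))

print(results)  # each outcome: probability 1/2, average energy raised by 1
```

Neither conditional state is an eigenstate of H, so the average energy rises for both outcomes, with the measuring device footing the bill — exactly the mechanism the excerpt describes.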
I have often been frustrated by the inefficiency of reading through the physics literature. One problem is that physicists are sometimes bad teachers and are usually bad writers, and so it can take a long time of reading a paper before you even figure out what the author is trying to say. This gets worse when you look at papers that aren’t in your immediate physics niche, because then the author will probably use assumptions, mathematical techniques, and terminology you aren’t familiar with. If you had infinite time, you could spend days reading every paper that looks reasonably interesting, but you don’t. A preferred technique is to ask your colleagues to explain it to you, because they are more likely to speak your language and (unlike a paper) can answer your questions when you come up against a confusion. But generally your colleagues haven’t read it; they want you to read it so you can explain it to them. I spend a lot of time reading papers that end up being uninteresting, but it’s worth it for the occasional gems. And it seems clear that there is a lot of duplicated work being done sorting through the chaff.
So on the one hand we have a lengthy, fixed document from a single, often unfamiliar perspective (i.e.… [continue reading]
[This was originally posted at the Quantum Pontiff.]
People sometimes ask me how my research will help society. This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say… tenuously. And of course, this is a fair question from the layman; tax dollars support most of our work.
I generally take the attitude of former Fermilab director Robert R. Wilson. During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR. He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”
Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”. As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e.… [continue reading]
I’ve submitted my papers (long and short arXiv versions) on detecting classically undetectable new particles through decoherence. The short version introduces the basic idea and states the main implications for dark matter and gravitons. The long version covers the dark matter case in depth. Abstract for the short version:
Detecting Classically Undetectable Particles through Quantum Decoherence
Some hypothetical particles are considered essentially undetectable because they are far too light and slow-moving to transfer appreciable energy or momentum to the normal matter that composes a detector. I propose instead directly detecting such feeble particles, like sub-MeV dark matter or even gravitons, through their uniquely distinguishable decoherent effects on quantum devices like matter interferometers. More generally, decoherence can reveal phenomena that have arbitrarily little classical influence on normal matter, giving new motivation for the pursuit of macroscopic superpositions.
This is figure 1:
Decoherence detection with a Mach-Zehnder interferometer. The probe system is placed in a coherent superposition of spatially displaced wavepackets that each travel a separate path and then are recombined. In the absence of the other system, the interferometer is tuned so that the probe will be detected at the bright port with near unit probability, and at the dim port with near vanishing probability.
… [continue reading]
Physics StackExchange user QuestionAnswers asked the question “Is the preferred basis problem solved?”, and I reproduced my “answer” (read: discussion) in a post last week. He had some thoughtful follow-up questions, and (with his permission) I am going to answer them here. His questions are in bold, with minor punctuation changes.
How serious would you consider what you call the “Kent set-selection” problem?
If a set of CHs could be shown to be impossible to find, then this would break QM without necessarily telling us how to correct it. (Similar problems exist with the breakdown of gravity at the Planck scale.) Although I worry about this, I think it’s unlikely and most people think it’s very unlikely. If a set can be found, but no principle can be found to prefer it, I would consider QM to be correct but incomplete. It would kinda be like if big bang nucleosynthesis had not been discovered to explain the primordial abundances of the elements.
And what did Zurek think of it, did he agree that it’s a substantial problem?
I think Wojciech believes a set of consistent histories (CHs) corresponding to the branch structure could be found, but that no one will find a satisfying beautiful principle within the CH framework which singles out the preferred set from the many, many other sets.… [continue reading]
Now I would like to apply the reasoning of the last post to the case of verifying macroscopic superpositions of the metric. It’s been 4 years since I’ve touched GR, so I’m going to rely heavily on E&M concepts and pray I don’t miss any key changes in the translation to gravity.
In the two-slit experiment with light, we don’t take the visibility of interference fringes as evidence of quantum mechanics when there are many photons. This is because the observations are compatible with a classical field description. We could interfere gravitational waves in a two-slit set up, and this would also have a purely classical explanation.
But in this post I’m not concentrating on evidence for pure quantum mechanics (i.e. a Bell-like argument grounded in locality), or evidence of the discrete nature of gravitons. Rather, I am interested in superpositions of two macroscopically distinct states of the metric as might be produced by a superposition of a large mass in two widely-separated positions. Now, we can only call a quantum state a (proper) superposition by first identifying a preferred basis that it can be a superposition with respect to. For now, I will wave my hands and say that the preferred states of the metric are just those metric states produced by the preferred states of matter, where the preferred states of matter are wavepackets of macroscopic amounts of mass localized in phase space (e.g.… [continue reading]
Suppose we are given an ensemble of systems which are believed to contain a coherent superposition of the metric. How would we confirm this?
Well, in order to verify that an arbitrary system is in a coherent superposition, which is always relative to a preferred basis, it’s well known that we need to make measurements with respect to (at least?) two non-commuting bases. If we can make a measurement M, we expect it to be possible to make a measurement M′ = RM for some symmetry R.
I consider essentially two types of Hilbert spaces: the infinite-dimensional space associated with position, and the finite-dimensional space associated with spin. They have a very different relationship with the fundamental symmetries of spacetime.
For spin, an arbitrary rotation in space is represented by a unitary which can produce proper superpositions. Rotating 90 degrees about the y axis takes a z-up eigenstate to an equal superposition of z-up and z-down. The rotation takes one basis to another with which it does not commute.
In contrast, for position, the unitary representing spatial translation is essentially just a permutation on the space of position eigenstates. It does not produce superpositions from non-superpositions with respect to this basis.
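The contrast between the two cases can be made concrete in a small numerical sketch of my own (discretizing position to a handful of sites for illustration): a 90° spin rotation turns a basis state into an equal superposition, while a one-site translation merely permutes position eigenstates.

```python
import numpy as np

# Spin: rotating 90 degrees about y maps the z-up eigenstate to an equal
# superposition of z-up and z-down.
theta = np.pi / 2
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
z_up = np.array([1., 0.])
rotated = Ry @ z_up                  # amplitudes (1/sqrt2, 1/sqrt2)

# Position (toy discretization to d sites): translation by one site is a
# cyclic permutation of position eigenstates, so a basis state stays a
# basis state -- no superposition is created.
d = 5
T = np.roll(np.eye(d), 1, axis=0)    # |x> -> |x+1 mod d|
x0 = np.zeros(d); x0[0] = 1.
translated = T @ x0                  # still a single position eigenstate
```

Both operations are unitaries representing spacetime symmetries, but only the rotation fails to commute with the measured basis, which is why it can generate proper superpositions while translation cannot.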
You might think things are different when you consider more realistic measurements with respect to the over-complete basis of wavepackets.… [continue reading]
Unfortunately, physicists and philosophers disagree on what exactly the preferred basis problem is, what would constitute a solution, and how this relates to (or subsumes) “the measurement problem” more generally. In my opinion, the most general version of the preferred basis problem was best articulated by Adrian Kent and Fay Dowker near the end of their 1996 article “On the Consistent Histories Approach to Quantum Mechanics” in the Journal of Statistical Physics. Unfortunately, this article is long, so I will try to quickly summarize the idea.
Kent and Dowker analyzed the question of whether the consistent histories formalism provides a satisfactory and complete account of quantum mechanics (QM). Contrary to what is often said, consistent histories and many-worlds need not be opposing interpretations of quantum mechanics. (Of course, some consistent historians make ontological claims about how the histories are “real”, whereas the many-worlders might say that the wavefunction is more “real”. In this sense they are contradictory. Personally, I think this is purely a matter of taste.) Instead, consistent histories is a good mathematical framework for rigorously identifying the branch structure of the wavefunction of the universe. (Note that although many-worlders may not consider the consistent histories formalism the only possible way to mathematically identify branch structure, I believe most would agree that if, in the future, some branch structure were identified using a completely different formalism, it could be described at least approximately by the consistent histories formalism.) … [continue reading]