Comments on Weingarten’s preferred branch

A senior colleague asked me for thoughts on this paper describing a single-preferred-branch flavor of quantum mechanics, and I thought I’d copy them here. Tl;dr: I did not find an important new idea in it, but this paper nicely illustrates the appeal of Finkelstein’s partial-trace decoherence and the ambiguity inherent in connecting a many-worlds wavefunction to our direct observations.

We start by assuming that a precise wavefunction branch structure has been specified. The idea, basically, is to randomly draw a branch at late times according to the Born probability, then to evolve it backwards in time to the beginning of the universe and take that as your initial condition. The main motivating observation is that, if we assume that all branch splittings are defined by a projective decomposition of some subsystem (‘the system’) which is recorded faithfully elsewhere (‘the environment’), then the lone preferred branch — time-evolving by itself — is an eigenstate of each of the projectors defining the splits. In a sense, Weingarten lays claim to ordered consistency [arXiv:gr-qc/9607073] by assuming partial-trace decoherence [arXiv:gr-qc/9301004]. (Note on terminology: what Finkelstein called “partial-trace decoherence” is really a specialized form of consistency, i.e., a mathematical criterion for sets of consistent histories, that captures some, but not all, of the properties of the physical and dynamical process of decoherence. That’s why I’ve called it “partial-trace consistency” here and here.) In this way, the macrostates stay the same as in normal quantum mechanics, but the microstates secretly conspire to confine the universe to a single branch.
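In symbols (my notation, not necessarily Weingarten’s): if the splitting at time t_n is defined by orthogonal projectors \{P^{(n)}_i\} whose outcomes are faithfully recorded in the environment, then the single branch drawn at late times and evolved backwards satisfies

```latex
% Sketch in my own notation (not necessarily Weingarten's): the lone
% preferred branch is an eigenstate of each splitting projector.
P^{(n)}_{i_n} \, |\psi_\star(t_n)\rangle \;=\; |\psi_\star(t_n)\rangle
\qquad \text{for the realized outcome } i_n \text{ at each splitting time } t_n .
```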

I put proposals like this in the same category as Bohmian mechanics. They take as assumptions the initial state and unitary evolution of the universe, along with the conventional decoherence/amplification story that argues for (but never fully specifies from first principles) a fuzzy, time-dependent decomposition of the wavefunction into branches.… [continue reading]

Comments on Cotler, Penington, & Ranard

One way to think about the relevance of decoherence theory to measurement in quantum mechanics is that it reduces the preferred basis problem to the preferred subsystem problem; merely specifying the system of interest (by delineating it from its environment or measuring apparatus) is enough, in important special cases, to derive the measurement basis. But this immediately prompts the question: what are the preferred systems? I spent some time in grad school with my advisor trying to see if I could identify a preferred system just by looking at a large many-body Hamiltonian, but never got anything worth writing up.

I’m pleased to report that Cotler, Penington, and Ranard have tackled a closely related problem, and made a lot more progress:

The paper has a nice, logical layout and is clearly written. It also has an illuminating discussion of the purpose of nets of observables (which appear often in the algebraic QFT literature) as a way to define “physical” states and “local” observables when you have no access to a tensor decomposition into local regions.

For me personally, a key implication is that if I’m right in suggesting that we can uniquely identify the branches (and subsystems) just from the notion of locality, then this paper means we can probably reconstruct the branches just from the spectrum of the Hamiltonian.

Below are a couple other comments.

Uniqueness of locality, not spectrum fundamentality

The proper conclusion to draw from this paper is that if a quantum system can be interpreted in terms of spatially local interactions, this interpretation is probably unique. It is tempting, but I think mistaken, to also conclude that the spectrum of the Hamiltonian is more fundamental than notions of locality.… [continue reading]

Comments on Bousso’s communication bound

Bousso has a recent paper bounding the maximum information that can be sent by a signal from first principles in QFT:

Here’s his first figure:

This all stems from vacuum entanglement, an oft-neglected aspect of QFT that Bousso doesn’t emphasize in the paper as the key ingredient. (I thank Scott Aaronson for first pointing this out.) The gradient term in the Hamiltonian for QFTs means that the value of the field at two nearby locations is always entangled. In particular, the values of \phi(x) and \phi(x+\Delta x) are sometimes considered independent degrees of freedom but, for a state with bounded energy, they can’t actually take arbitrarily different values as \Delta x becomes small, or else the gradient contribution to the Hamiltonian would violate the energy bound. Technically this entanglement exists over arbitrary distances, but it is exponentially suppressed on scales larger than the Compton wavelength of the field. For massless fields (infinite Compton wavelength), the entanglement is long range, but the amount you can actually measure is suppressed exponentially on a scale given by the length of your measuring apparatus.
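A back-of-the-envelope version of that gradient argument (my sketch, not taken from Bousso’s paper): suppose the field jumps by \delta\phi across an interface of area A and thickness \Delta x. The gradient energy of the jump then bounds its size:

```latex
% Energy cost of a field jump \delta\phi over thickness \Delta x (area A):
\int d^3x \, \frac{(\nabla\phi)^2}{2}
\;\approx\; \frac{A\,\Delta x}{2} \left( \frac{\delta\phi}{\Delta x} \right)^2
\;=\; \frac{A\,(\delta\phi)^2}{2\,\Delta x}
\;\le\; E
\quad\Longrightarrow\quad
(\delta\phi)^2 \;\le\; \frac{2 E \,\Delta x}{A} ,
```

so nearby field values are forced to agree as \Delta x \to 0; in a bounded-energy state they cannot be independent degrees of freedom.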

In this case Bob’s measuring apparatus has effective size c \Delta t, which of course Bousso just calls \Delta t. (You can tell this is a HEP theorist playing with some recently-learned quantum information because he sets c=1 but leaves \hbar explicit. 😀) (It may actually be of size L = c \Delta t or, like a radio antenna, it may effectively be this size by integrating the measurement over a time long enough for a wave of that length to pass by.) Such a device is necessarily noisy when trying to measure modes whose wavelength is longer than this scale. So Alice can only communicate to Bob with high fidelity through excitations of energy at least \hbar/\Delta t.… [continue reading]
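The chain of scales in that last step is just (my paraphrase):

```latex
% Modes Bob can resolve without excessive noise have wavelength at most ~ c \Delta t:
\lambda \;\lesssim\; c\,\Delta t
\quad\Longrightarrow\quad
\omega \;\sim\; \frac{2\pi c}{\lambda} \;\gtrsim\; \frac{2\pi}{\Delta t}
\quad\Longrightarrow\quad
E \;=\; \hbar\,\omega \;\gtrsim\; \frac{\hbar}{\Delta t} .
```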

Comments on an essay by Wigner

[PSA: Happy 4th of July. Juno arrives at Jupiter tonight!]

This is short and worth reading:

This essay has no formal abstract; the above is the second paragraph, which I find to be profound. Here is the PDF. The essay shares the same name and much of the material with Wigner’s 1963 Nobel lecture [PDF]. (The Nobel lecture has a nice bit contrasting invariance principles with covariance principles, and dynamical invariance principles with geometrical invariance principles.)

Some comments:

  • It is very satisfying to see Wigner — the titan — highlight the deep importance of the seminal work by the grandfather of my field, Dieter Zeh. Likewise for his comments on Bell:

    As to the J.S. Bell inequalities, I consider them truly important, inasmuch as they prove that in the case considered by him, one cannot define a non-negative probability function which describes the state of his system in the classical sense, i.e., gives nonnegative probabilities for all possible events….

    This is a very interesting and very important observation and it is truly surprising that it has not been made before. Perhaps some of those truly interested in the epistemology of quantum mechanics took it for granted but they did not demonstrate it.

  • I like the hierarchy of regularity that Wigner draws: data ➢ laws ➢ symmetries. Symmetries are strong restrictions on, but do not determine, laws in the same way that laws are strong restrictions on, but do not determine, data.
  • It is interesting that Wigner tried to embed relativistic restrictions into the description of initial states:

    Let me mention, finally, one effect which the theory of relativity should have introduced into the description of the initial conditions and perhaps also into the description of all states.

[continue reading]

Comments on Hanson’s The Age of Em

One of the main sources of hubris among physicists is that we think we can communicate essential ideas faster and more exactly than many others. (This isn’t just a choice of compact terminology or an ability to recall shared knowledge. It also has to do with a responsive throttling of the level of detail to match the listener’s ability to follow, and quick questions which allow the listener to home in on things they don’t understand. This leads to a sense of frustration when talking to others who use different methods. Of course this sensation isn’t overwhelming evidence that our methods actually are better and function as described above, just that they are different. But come on.) Robin Hanson’s Age of Em is an incredible written example of efficient transfer of (admittedly speculative) insights. I highly recommend it.

In places where I am trained to expect writers to insert fluff and repeat themselves — without actually clarifying — Hanson states his case concisely once, then plows through to new topics. There are several times where I think he leaps without sufficient justifications (at least given my level of background knowledge), but there is a stunning lack of fluff. The ideas are jammed in edgewise.

Academic papers usually have two reasons that they must be read slowly: explicit unpacking of complex subjects, and convoluted language. Hanson’s book is a great example of something that must be read slowly because of the former with no hint of the latter. Although he freely calls on economics concepts that non-economists might have to look up, his language is always incredibly direct and clear. Hanson is an academic Hemingway.

Most of what I might have said on the book’s substance was very quickly eclipsed by other reviews, so you should just read Bryan Caplan, Richard Jones, or Scott Alexander, along with some replies by Hanson.… [continue reading]

Comments on Rosaler’s “Reduction as an A Posteriori Relation”

In a previous post of abstracts, I mentioned philosopher Josh Rosaler’s attempt to clarify the distinction between empirical and formal notions of “theoretical reduction”. Reduction is just the idea that one theory reduces to another in some limit, like special relativity reduces to Galilean kinematics in the limit of small velocities. (Confusingly, philosophers use a reversed convention; they say that Galilean mechanics reduces to special relativity.) Formal reduction is when this takes the form of some mathematical limiting procedure (e.g., v/c \to 0), whereas empirical reduction is an explanatory statement about observations (e.g., “special relativity can explain the empirical usefulness of Galilean kinematics”).
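As a toy illustration of formal reduction (my own example, not Rosaler’s): relativistic velocity addition reduces to the Galilean rule when u v / c^2 \to 0.

```python
# Toy illustration (my example, not from Rosaler): formal reduction as a
# mathematical limit. Relativistic velocity addition reduces to Galilean
# addition when u*v/c^2 -> 0.

C = 299_792_458.0  # speed of light in m/s

def lorentz_add(u, v, c=C):
    """Relativistic composition of collinear velocities u and v."""
    return (u + v) / (1.0 + u * v / c**2)

def galilean_add(u, v):
    """Galilean composition: just the sum."""
    return u + v

# At everyday speeds the two rules agree essentially exactly...
u, v = 30.0, 50.0  # m/s
assert abs(lorentz_add(u, v) - galilean_add(u, v)) < 1e-9

# ...but at relativistic speeds they differ drastically, and the
# relativistic result never exceeds c.
u, v = 0.9 * C, 0.9 * C
assert galilean_add(u, v) > C
assert lorentz_add(u, v) < C
```

Of course, this captures only the formal limit; Rosaler’s point is that such a limit, by itself, is not yet an explanation of why the Galilean rule is empirically adequate.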

Rosaler’s criticism, which I mostly agree with, is that folks often conflate these two. Usually this isn’t a serious problem since the holes can be patched up on the fly by a competent physicist, but sometimes it leads to serious trouble. The most egregious case, and the one that got me interested in all this, is the quantum-classical transition, and in particular the serious insufficiency of existing \hbar \to 0 limits to explain the appearance of macroscopic classicality. In particular, even though this limiting procedure recovers the classical equations of motion, it fails spectacularly to recover the state space. (There are multiple quantum states that have the same classical analog as \hbar \to 0, and there are quantum states that have no classical analog as \hbar \to 0.)
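A standard harmonic-oscillator illustration of both failures (my example, not Rosaler’s): coherent states and squeezed states, with phase-space spreads of order \sqrt{\hbar}, all collapse onto the same classical point as \hbar \to 0, while a superposition of two macroscopically distinct coherent states corresponds to no single classical point at all.

```latex
% Many-to-one: for any squeezing r, these all localize to the same
% classical phase-space point (x_\alpha, p_\alpha) as \hbar \to 0:
|\alpha\rangle , \qquad S(r)\,|\alpha\rangle .
% None-to-one: a macroscopic superposition has no classical analog:
\frac{1}{\sqrt{2}} \bigl( |\alpha\rangle + |{-\alpha}\rangle \bigr) .
```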

In this post I’m going to comment on Rosaler’s recent elaboration of this idea (I thank him for discussing this topic with me and, full disclosure, we’re drafting a paper about set selection together):

I was tempted to interpret the thesis of this essay like this:

The only useful notion of theory reduction is necessarily intertwined with empirical facts about the domain of applicability.… [continue reading]

Comments on Stern, journals, and incentives

David L. Stern on changing incentives in science by getting rid of journals:

Instead, I believe, we will do better to rely simply on the scientific process itself. Over time, good science is replicated, elevated, and established as most likely true; bad science may be unreplicated, flaws may be noted, and it usually is quietly dismissed as untrue. This process may take considerable time—sometimes years, sometimes decades. But, usually, the most egregious papers are detected quickly by experts as most likely garbage. This self-correcting aspect of science often does not involve explicit written documentation of a paper’s flaws. The community simply decides that these papers are unhelpful and the field moves in a different direction.

In sum, we should stop worrying about peer review….

The real question that people seem to be struggling with is “How will we judge the quality of the science if it is not peer reviewed and published in a journal that I ‘respect’?” Of course, the answer is obvious. Read the papers! But here is where we come to the crux of the incentive problem. Currently, scientists are rewarded for publishing in “top” journals, on the assumption that these journals publish only great science. Since this assumption is demonstrably false, and since journal publishing involves many evils that are discussed at length in other posts, a better solution is to cut journals out of the incentive structure altogether.

(H/t Tyler Cowen.)

I think this would make the situation worse, not better, in bringing new ideas to the table. For all of its flaws, peer review has the benefit that any (not obviously terrible) paper gets a somewhat careful reading by a couple of experts.… [continue reading]

Comments on Myrvold’s Taj Mahal

Last week I saw an excellent talk by philosopher Wayne Myrvold.

(Download MP4 video here.)

The topic was well-defined, and of reasonable scope. The theorem is easily and commonly misunderstood. And Wayne’s talk served to dissolve the confusion around it, by unpacking the theorem into a handful of pieces so that you could quickly see where the rub was. I would that all philosophy of physics were so well done.

Here are the key points as I saw them:

  • The vacuum state in QFTs, even non-interacting ones, is entangled over arbitrary distances (albeit by exponentially small amounts). You can think of this as every two space-like separated regions of spacetime sharing extremely diluted Bell pairs.
  • Likewise, by virtue of its non-local nature, the vacuum has non-zero (but stupendously tiny) overlap with all localized states. If you were able to perform a “Taj Mahal measurement” on a region R, which asks the yes-or-no question “Is there a Taj Mahal in R?”, you would always have a non-zero (but stupendously tiny) chance of getting “Yes” and finding a Taj Mahal.
  • This non-locality arises directly from requiring the exact spectral condition, i.e., that the Hamiltonian is bounded from below. This is because the spectral condition is a global statement about modes in spacetime. It asserts that allowed states have overlap only with the positive part of the mass shell.
  • This is very analogous to the way that analytic functions are determined by their behavior in an arbitrarily small open patch of the complex plane.
  • This theorem says that some local operator, when acting on the vacuum, can produce the Taj Mahal in a distant, space-like separated region of spacetime.
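For reference, the result being unpacked here is the Reeh–Schlieder theorem, which says the vacuum is a cyclic vector for every local algebra of observables:

```latex
% Reeh–Schlieder: operators localized in any open region R, acting on the
% vacuum, generate a dense subset of the full Hilbert space.
\overline{ \{\, A\,|0\rangle \;:\; A \in \mathcal{A}(R) \,\} } \;=\; \mathcal{H}
\qquad \text{for any open region } R ,
```

so some A localized in R takes |0\rangle arbitrarily close to a state with a Taj Mahal far from R. The catch is that such an A is far from unitary, which is why this does not permit superluminal signalling.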
[continue reading]