Comments on Weingarten’s preferred branch

A senior colleague asked me for thoughts on this paper describing a single-preferred-branch flavor of quantum mechanics, and I thought I’d copy them here. Tl;dr: I did not find an important new idea in it, but this paper nicely illustrates the appeal of Finkelstein’s partial-trace decoherence and the ambiguity inherent in connecting a many-worlds wavefunction to our direct observations.

We propose a method for finding an initial state vector which by ordinary Hamiltonian time evolution follows a single branch of many-worlds quantum mechanics. The resulting deterministic system appears to exhibit random behavior as a result of the successive emergence over time of information present in the initial state but not previously observed.

We start by assuming that a precise wavefunction branch structure has been specified. The idea, basically, is to randomly draw a branch at late times according to the Born probability, then to evolve it backwards in time to the beginning of the universe and take that as your initial condition. The main motivating observation is that, if we assume that all branch splittings are defined by a projective decomposition of some subsystem (‘the system’) which is recorded faithfully elsewhere (‘the environment’), then the lone preferred branch — time-evolving by itself — is an eigenstate of each of the projectors defining the splits. In a sense, Weingarten lays claim to ordered consistency [arxiv:gr-qc/9607073] by assuming partial-trace decoherence. (Note on terminology: what Finkelstein called “partial-trace decoherence” is really a specialized form of consistency, i.e., a mathematical criterion for sets of consistent histories, that captures some, but not all, of the properties of the physical and dynamical process of decoherence.) [continue reading]
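To make the backward-evolution trick concrete, here is a minimal numpy sketch of a toy model of my own devising (a single system qubit recorded by two environment qubits via CNOTs, nothing like the setting Weingarten actually works in): drawing the |111⟩ branch at the final time and evolving it backwards yields an initial state whose forward evolution is an eigenstate of the recording projector at every splitting time.

```python
import numpy as np

# Toy model (my own illustration, not Weingarten's construction):
# one system qubit is split by a Hadamard, then recorded by two
# environment qubits via successive CNOTs.
I2 = np.eye(2)
Had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
P1 = np.array([[0, 0], [0, 1]])  # projector onto system |1>

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """CNOT on n qubits, big-endian bit ordering."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

U1 = cnot(0, 1) @ kron(Had, I2, I2)  # split + first recording
U2 = cnot(0, 2)                       # second recording
U = U2 @ U1

# Forward evolution from |000> gives the two-branch state (|000>+|111>)/sqrt(2).
psi0 = np.zeros(8); psi0[0] = 1.0
final = U @ psi0

# Draw the |111> branch (Born weight 1/2) and evolve it backwards.
branch = np.zeros(8); branch[7] = 1.0
psi0_branch = U.conj().T @ branch

# The new initial state, evolved forward, is an eigenstate of the
# recording projector on the system at *both* intermediate times.
for Ustep in (U1, U2 @ U1):
    psi_t = Ustep @ psi0_branch
    proj = kron(P1, I2, I2)
    print(np.allclose(proj @ psi_t, psi_t))  # True: a lone branch, no split
```

The point of the sketch is just the motivating observation above: the backward-evolved branch never splits, so it deterministically "follows" one history of the original many-worlds wavefunction.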

Comments on Cotler, Penington, & Ranard

One way to think about the relevance of decoherence theory to measurement in quantum mechanics is that it reduces the preferred basis problem to the preferred subsystem problem; merely specifying the system of interest (by delineating it from its environment or measuring apparatus) is enough, in important special cases, to derive the measurement basis. But this immediately prompts the question: what are the preferred systems? I spent some time in grad school with my advisor trying to see if I could identify a preferred system just by looking at a large many-body Hamiltonian, but never got anything worth writing up.

I’m pleased to report that Cotler, Penington, and Ranard have tackled a closely related problem, and made a lot more progress:

Locality from the Spectrum
Jordan S. Cotler, Geoffrey R. Penington, Daniel H. Ranard
Essential to the description of a quantum system are its local degrees of freedom, which enable the interpretation of subsystems and dynamics in the Hilbert space. While a choice of local tensor factorization of the Hilbert space is often implicit in the writing of a Hamiltonian or Lagrangian, the identification of local tensor factors is not intrinsic to the Hilbert space itself. Instead, the only basis-invariant data of a Hamiltonian is its spectrum, which does not manifestly determine the local structure. This ambiguity is highlighted by the existence of dualities, in which the same energy spectrum may describe two systems with very different local degrees of freedom. We argue that in fact, the energy spectrum alone almost always encodes a unique description of local degrees of freedom when such a description exists, allowing one to explicitly identify local subsystems and how they interact.
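The basis-invariance of the spectrum that the abstract mentions is easy to demonstrate numerically. The following sketch (my own illustration, using a small transverse-field Ising chain) exhibits two Hamiltonians with identical spectra, one manifestly local and one scrambled by a Haar-random unitary; recovering the local description from the spectrum alone is exactly the problem the paper addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

# A local Hamiltonian: transverse-field Ising chain on 3 qubits.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

H_local = (kron(Z, Z, I) + kron(I, Z, Z)
           + kron(X, I, I) + kron(I, X, I) + kron(I, I, X))

# Conjugating by a random unitary preserves the spectrum exactly but
# destroys any manifest locality in the computational tensor factors.
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
Q, _ = np.linalg.qr(A)
H_scrambled = Q @ H_local @ Q.conj().T

print(np.allclose(np.sort(np.linalg.eigvalsh(H_local)),
                  np.sort(np.linalg.eigvalsh(H_scrambled))))  # True
```

Cotler, Penington, and Ranard's claim is then that, generically, the spectrum nonetheless determines an essentially unique tensor factorization in which the Hamiltonian is local, whenever such a factorization exists.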
[continue reading]

Comments on Bousso’s communication bound

Bousso has a recent paper bounding the maximum information that can be sent by a signal from first principles in QFT:

I derive a universal upper bound on the capacity of any communication channel between two distant systems. The Holevo quantity, and hence the mutual information, is at most of order E\Delta t/\hbar, where E is the average energy of the signal, and \Delta t is the amount of time for which detectors operate. The bound does not depend on the size or mass of the emitting and receiving systems, nor on the nature of the signal. No restrictions on preparing and processing the signal are imposed. As an example, I consider the encoding of information in the transverse or angular position of a signal emitted and received by systems of arbitrarily large cross-section. In the limit of a large message space, quantum effects become important even if individual signals are classical, and the bound is upheld.

Here’s his first figure:

This all stems from vacuum entanglement, an oft-neglected aspect of QFT that Bousso doesn’t emphasize in the paper as the key ingredient. (I thank Scott Aaronson for first pointing this out.) The gradient term in the Hamiltonian for QFTs means that the values of the field at two nearby locations are always entangled. In particular, the values of \phi(x) and \phi(x+\Delta x) are sometimes considered independent degrees of freedom but, for a state with bounded energy, they can’t actually take arbitrarily different values as \Delta x becomes small, or else the gradient contribution to the Hamiltonian would violate the energy bound. Technically this entanglement exists over arbitrary distances, but it is exponentially suppressed on scales larger than the Compton wavelength of the field. … [continue reading]
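Here is a rough numerical illustration of that claim, using the standard lattice discretization of a free massive scalar in 1D (my own toy numbers, with lattice spacing and \hbar set to 1): the ground-state correlations between field values at two sites fall off quickly beyond the Compton wavelength \sim 1/m.

```python
import numpy as np

# Discretized free scalar field in 1D:
# H = (1/2) sum_i pi_i^2 + (1/2) phi^T K phi, with
# K_ij = (m^2 + 2) delta_ij - delta_{|i-j|,1}  (nearest-neighbor gradient term).
def vacuum_correlations(n_sites, mass):
    K = ((mass**2 + 2.0) * np.eye(n_sites)
         - np.eye(n_sites, k=1) - np.eye(n_sites, k=-1))
    w2, V = np.linalg.eigh(K)
    # Ground-state two-point function <phi_i phi_j> = (K^{-1/2})_ij / 2.
    return (V @ np.diag(w2 ** -0.5) @ V.T) / 2.0

C = vacuum_correlations(200, mass=1.0)
mid = 100
# Correlations (hence entanglement) decay with separation, roughly
# exponentially beyond the Compton wavelength ~ 1/m:
for dx in (1, 5, 10):
    print(dx, C[mid, mid + dx])
```

Setting `mass` closer to zero (a lighter field) makes the tail correspondingly longer-ranged, consistent with the Compton-wavelength suppression described above.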

Comments on an essay by Wigner

[PSA: Happy 4th of July. Juno arrives at Jupiter tonight!]

This is short and worth reading:

The sharp distinction between Initial Conditions and Laws of Nature was initiated by Isaac Newton and I consider this to be one of his most important, if not the most important, accomplishment. Before Newton there was no sharp separation between the two concepts. Kepler, to whom we owe the three precise laws of planetary motion, tried to explain also the size of the planetary orbits, and their periods. After Newton's time the sharp separation of initial conditions and laws of nature was taken for granted and rarely even mentioned. Of course, the first ones are quite arbitrary and their properties are hardly parts of physics while the recognition of the latter ones are the prime purpose of our science. Whether the sharp separation of the two will stay with us permanently is, of course, as uncertain as is all future development but this question will be further discussed later. Perhaps it should be mentioned here that the permanency of the validity of our deterministic laws of nature became questionable as a result of the realization, due initially to D. Zeh, that the states of macroscopic bodies are always under the influence of their environment; in our world they can not be kept separated from it.

This essay has no formal abstract; the above is the second paragraph, which I find profound. Here is the PDF. The essay shares the same name and much of its material with Wigner’s 1963 Nobel lecture [PDF]. (The Nobel lecture has a nice bit contrasting invariance principles with covariance principles, and dynamical invariance principles with geometrical invariance principles.) [continue reading]

Comments on Hanson’s The Age of Em

One of the main sources of hubris among physicists is that we think we can communicate essential ideas faster and more exactly than many others. (This isn’t just a choice of compact terminology or an ability to recall shared knowledge. It also has to do with a responsive throttling of the level of detail to match the listener’s ability to follow, and quick questions that allow the listener to home in on things they don’t understand. This leads to a sense of frustration when talking to others who use different methods. Of course, this sensation isn’t overwhelming evidence that our methods actually are better and function as described above, just that they are different. But come on.) Robin Hanson’s Age of Em is an incredible written example of efficient transfer of (admittedly speculative) insights. I highly recommend it.

In places where I am trained to expect writers to insert fluff and repeat themselves — without actually clarifying — Hanson states his case concisely once, then plows through to new topics. There are several places where I think he leaps without sufficient justification (at least given my level of background knowledge), but there is a stunning lack of fluff. The ideas are jammed in edgewise.

Academic papers usually must be read slowly for two reasons: explicit unpacking of complex subjects, and convoluted language. Hanson’s book is a great example of something that must be read slowly because of the former, with no hint of the latter. Although he freely calls on economics concepts that non-economists might have to look up, his language is always incredibly direct and clear. Hanson is an academic Hemingway. … [continue reading]

Comments on Rosaler’s “Reduction as an A Posteriori Relation”

In a previous post of abstracts, I mentioned philosopher Josh Rosaler’s attempt to clarify the distinction between empirical and formal notions of “theoretical reduction”. Reduction is just the idea that one theory reduces to another in some limit, like special relativity reduces to Galilean kinematics in the limit of small velocities. (Confusingly, philosophers use the reversed convention; they would say that Galilean kinematics reduces to special relativity.) Formal reduction is when this takes the form of some mathematical limiting procedure (e.g., v/c \to 0), whereas empirical reduction is an explanatory statement about observations (e.g., “special relativity explains the empirical usefulness of Galilean kinematics”).
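As a trivial worked example of formal reduction (my own, with unit mass and c = 1): the relativistic kinetic energy approaches the Galilean expression as the dimensionless parameter v/c goes to zero.

```python
import numpy as np

c = 1.0
m = 1.0

def kinetic_relativistic(v):
    # E = m c^2 (gamma - 1), with gamma the Lorentz factor.
    gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
    return m * c**2 * (gamma - 1.0)

def kinetic_galilean(v):
    return 0.5 * m * v**2

# Formal reduction: as v/c -> 0 the ratio of the two predictions -> 1.
# (The leading correction is 3v^2/4c^2, from the Taylor expansion of gamma.)
for v in (0.5, 0.1, 0.01):
    print(v, kinetic_relativistic(v) / kinetic_galilean(v))
```

Note that this mathematical limit, by itself, says nothing about *why* Galilean kinematics was empirically useful for centuries; that explanatory claim is the separate, empirical notion of reduction.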

Rosaler’s criticism, which I mostly agree with, is that folks often conflate these two. Usually this isn’t a serious problem, since the holes can be patched up on the fly by a competent physicist, but sometimes it leads to serious trouble. The most egregious case, and the one that got me interested in all this, is the quantum-classical transition, and in particular the serious insufficiency of existing \hbar \to 0 limits to explain the appearance of macroscopic classicality. Even though this limiting procedure recovers the classical equations of motion, it fails spectacularly to recover the state space. (There are multiple quantum states that have the same classical analog as \hbar \to 0, and there are quantum states that have no classical analog as \hbar \to 0.)

In this post I’m going to comment on Rosaler’s recent elaboration of this idea (I thank him for discussing this topic with me and, full disclosure, we’re drafting a paper about set selection together):

Reduction between theories in physics is often approached as an a priori relation in the sense that reduction is often taken to depend only on a comparison of the mathematical structures of two theories.
[continue reading]

Comments on Stern, journals, and incentives

David L. Stern on changing incentives in science by getting rid of journals:

Instead, I believe, we will do better to rely simply on the scientific process itself. Over time, good science is replicated, elevated, and established as most likely true; bad science may be unreplicated, flaws may be noted, and it usually is quietly dismissed as untrue. This process may take considerable time—sometimes years, sometimes decades. But, usually, the most egregious papers are detected quickly by experts as most likely garbage. This self-correcting aspect of science often does not involve explicit written documentation of a paper’s flaws. The community simply decides that these papers are unhelpful and the field moves in a different direction.

In sum, we should stop worrying about peer review….

The real question that people seem to be struggling with is “How will we judge the quality of the science if it is not peer reviewed and published in a journal that I ‘respect’?” Of course, the answer is obvious. Read the papers! But here is where we come to the crux of the incentive problem. Currently, scientists are rewarded for publishing in “top” journals, on the assumption that these journals publish only great science. Since this assumption is demonstrably false, and since journal publishing involves many evils that are discussed at length in other posts, a better solution is to cut journals out of the incentive structure altogether.

(H/t Tyler Cowen.)

I think this would make the situation worse, not better, in bringing new ideas to the table. For all of its flaws, peer review has the benefit that any (not obviously terrible) paper gets a somewhat careful reading by a couple of experts.… [continue reading]

Comments on Myrvold’s Taj Mahal

Last week I saw an excellent talk by philosopher Wayne Myrvold.

The Reeh-Schlieder theorem says, roughly, that, in any reasonable quantum field theory, for any bounded region of spacetime R, any state can be approximated arbitrarily closely by operating on the vacuum state (or any state of bounded energy) with operators formed by smearing polynomials in the field operators with functions having support in R. This strikes many as counterintuitive, and Reinhard Werner has glossed the theorem as saying that “By acting on the vacuum with suitable operations in a terrestrial laboratory, an experimenter can create the Taj Mahal on (or even behind) the Moon!” This talk has two parts. First, I hope to convince listeners that the theorem is not counterintuitive, and that it follows immediately from facts that are already familiar fare to anyone who has digested the opening chapters of any standard introductory textbook of QFT. In the second, I will discuss what we can learn from the theorem about how relativistic causality is implemented in quantum field theories.

(Download MP4 video here.)

The topic was well-defined, and of reasonable scope. The theorem is easily and commonly misunderstood. And Wayne’s talk served to dissolve the confusion around it, by unpacking the theorem into a handful of pieces so that you could quickly see where the rub was. I would that all philosophy of physics were so well done.

Here are the key points as I saw them:

  • The vacuum state in QFTs, even non-interacting ones, is entangled over arbitrary distances (albeit by exponentially small amounts). You can think of this as every pair of space-like separated regions of spacetime sharing extremely diluted Bell pairs.
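This bullet can be illustrated with the simplest possible model: two coupled harmonic oscillators, where the coupling stands in for the gradient term of a field theory. The sketch below is my own, using the standard Gaussian-state entropy formula (with \hbar = 1, so a pure reduced state has symplectic eigenvalue 1/2).

```python
import numpy as np

# Ground state of two coupled harmonic oscillators:
# H = (p1^2 + p2^2)/2 + (1/2) x^T K x,  K = [[w^2, k], [k, w^2]].
# Any nonzero coupling k entangles the two, so the reduced state
# of either oscillator is mixed.
def entanglement_entropy(w, k):
    K = np.array([[w**2, k], [k, w**2]])
    w2, V = np.linalg.eigh(K)
    X = V @ np.diag(w2 ** -0.5) @ V.T / 2.0   # <x_i x_j> in the ground state
    P = V @ np.diag(w2 ** 0.5) @ V.T / 2.0    # <p_i p_j> in the ground state
    nu = np.sqrt(X[0, 0] * P[0, 0])           # symplectic eigenvalue, reduced state
    if nu <= 0.5 + 1e-12:
        return 0.0                             # pure reduced state: no entanglement
    return ((nu + 0.5) * np.log(nu + 0.5)
            - (nu - 0.5) * np.log(nu - 0.5))

print(entanglement_entropy(1.0, 0.0))   # 0.0: uncoupled oscillators
print(entanglement_entropy(1.0, 0.5))   # positive: "vacuum" entanglement
```

A chain of such oscillators is just the lattice-discretized free field, which is why the QFT vacuum inherits this entanglement between neighboring (and, in suppressed form, distant) regions.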
[continue reading]

Comments on Pikovski et al.’s time dilation decoherence

Folks have been asking about the new Nature Physics article by Pikovski et al., “Universal decoherence due to gravitational time dilation”. Here are some comments:

  • I think their calculation is probably correct for the model they are considering. One could imagine placing their object in a superposition of two different locations in an electric (rather than gravitational) field, and in this case we really would expect the internal degrees of freedom to evolve in two distinct ways. Any observer who was “part of the superposition” wouldn’t be able to tell locally whether their clock was ticking fast or slow, but it can be determined by bringing both clocks back together and comparing them.
  • It’s possible the center of mass (COM) gets shifted a bit, but you can avoid this complication by just assuming that the superposition separation L is much bigger than the size of the object R, and that the curvature of the gravitational field is very small compared to both.
  • Their model is a little weird, as hinted at by their observation that they get “Gaussian decoherence”, \sim \exp(-T^2), rather than exponential decoherence, \sim \exp(-T). The reason is that their evolution isn’t Markovian, as it would be for an environment (like scattered or emitted photons) composed of small parts that each interact for a bit of time and then leave. Rather, the COM becomes more and more entangled with each of the internal degrees of freedom as time goes on.
  • Because they don’t emit any radiation, their “environment” (the internal DOF) is finite-dimensional, and so you will eventually get recoherence. This isn’t a problem for Avogadro’s number of particles.
  • This only decoheres superpositions in the direction of the gravitational gradient, so it’s not particularly relevant for why things look classical above any given scale.
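A toy model of my own makes the Gaussian-vs-exponential point in the bullets above concrete: treat each internal degree of freedom as a little two-level clock whose energy gap differs slightly between the two COM branches. The branch visibility is then a product of cosines, which at short times is Gaussian in t, precisely because the COM stays entangled with the same internal modes for all time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each internal degree of freedom is a clock in superposition
# (|0>+|1>)/sqrt(2); between the two COM branches its energy gap differs
# by a small time-dilation shift delta_k.  The overlap of the two branch
# states of clock k after time t is |cos(delta_k t / 2)|.
N = 1000
delta = rng.normal(0.0, 1e-2, size=N)   # random per-mode frequency shifts

def visibility(t):
    return np.prod(np.abs(np.cos(delta * t / 2.0)))

# Short-time behavior: V(t) ~ exp(-(t/tau)^2), Gaussian rather than
# exponential -- a signature of the non-Markovian internal "environment".
tau2 = 8.0 / np.sum(delta**2)
for t in (1.0, 2.0, 4.0):
    print(t, visibility(t), np.exp(-t**2 / tau2))
```

(This is only meant to mimic the structure of the decoherence factor, not Pikovski et al.'s actual thermal-state calculation; the per-mode shifts and their distribution are assumptions of the sketch.)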
[continue reading]

Comments on Tegmark’s ‘Consciousness as a State of Matter’

[Edit: Scott Aaronson has posted on his blog with extensive criticism of Integrated Information Theory, which motivated Tegmark’s paper.]

Max Tegmark’s recent paper entitled “Consciousness as a State of Matter” has been making the rounds. See especially Sabine Hossenfelder’s critique on her blog, which agrees in several places with what I say below.

Tegmark’s paper didn’t convince me that there’s anything new here with regards to the big questions of consciousness. (In fairness, I haven’t read the work of neuroscientist Giulio Tononi that motivated Tegmark’s claims.) However, I was interested in what he has to say about the proper way to define subsystems in a quantum universe (i.e., to “carve reality at its joints”) and how this relates to the quantum-classical transition. There is a sense in which the modern understanding of decoherence simplifies the vague question “How does (the appearance of) a classical world emerge in a quantum universe?” to the slightly-less-vague question “What are the preferred subsystems of the universe, and how do they change with time?”. Tegmark describes essentially this as the “quantum factorization problem” on page 3. (My preferred formulation is as the “set-selection problem” of Dowker and Kent. Note that this is a separate problem from the origin of probability in quantum mechanics. The problem of probability as described by Weinberg: “The difficulty is not that quantum mechanics is probabilistic—that is something we apparently just have to live with. The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics.” H/t Steve Hsu.)

Therefore, my comments are going to focus on the “object-level” calculations of the paper, and I won’t have much to say about the implications for consciousness except at the very end.… [continue reading]

Comments on Gell-Mann & Hartle’s latest

Back in December Gell-Mann and Hartle (G&H) posted their latest paper on consistent histories, “Adaptive Coarse Graining, Environment, Strong Decoherence, and Quasiclassical Realms”. Here are my thoughts.

The discussion of adaptive coarse graining was brief and very much in agreement with previous work.

G&H then give a name and formal description to an idea that has long been part of the intuitive lore: a history being defined by the values taken by a particular variable over many time steps. (This might be the position of an object, which is being recorded to some accuracy by an environment that decoheres it.) The key idea is that all the Schrödinger-picture projectors P^{k}_{\alpha_k} at different times t_k defining the history commute:

(1)   \begin{align*} [P^{k}_{\alpha_k},P^{k'}_{\alpha_{k'}}]=0 \quad \forall k,k' \end{align*}

This they call the narrative condition. From it, one is able to define a smallest set of maximal projectors Q_i (which they call a common framework) that obey either Q_i \le P^{k}_{\alpha_k} or Q_i P^{k}_{\alpha_k} = P^{k}_{\alpha_k} Q_i = 0 for all P^{k}_{\alpha_k}. For instance, if the P‘s project onto spatial volumes of position, then the Q‘s are just the minimal partition of position space such that the region associated with each Q_i is fully contained in the regions corresponding to some of the P^{k}_{\alpha_k}, and is completely disjoint from the regions corresponding to the others.

For time steps k=1,\ldots,4, the history projectors at a given time project onto subspaces which disjointly span Hilbert space. The narrative condition states that all history projectors commute, which means we can think of them as projecting onto disjoint subsets forming a partition of the range of some variable (e.g., position). The common framework is just the set of smaller projectors that also partitions the range of the variable and which obey Q \le P or QP = PQ = 0 for each P and Q.
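When all the projectors are coarse-grainings of a single discrete variable, the common framework is easy to compute directly: the Q_i are just the atoms of the partition generated by all the P-ranges. Here is a small sketch of my own, with hypothetical coarse-grainings of a ten-valued position variable.

```python
# Histories defined by coarse-grainings of one discrete variable
# (say, position x in {0, ..., 9}).  Each projector P^k_alpha corresponds
# to a subset of values; the narrative condition holds automatically
# because all such projectors commute.
grainings = [
    [{0, 1, 2, 3, 4}, {5, 6, 7, 8, 9}],      # time step k = 1
    [{0, 1, 2}, {3, 4, 5, 6}, {7, 8, 9}],    # time step k = 2
]

# The common framework Q_i: atoms of the partition generated by all the
# P-ranges.  Each atom collects the values with a fixed membership
# pattern, so every Q is contained in or disjoint from every P.
def common_framework(grainings, values=range(10)):
    atoms = {}
    for x in values:
        signature = tuple(
            next(i for i, cell in enumerate(g) if x in cell)
            for g in grainings
        )
        atoms.setdefault(signature, set()).add(x)
    return sorted(atoms.values(), key=min)

for Q in common_framework(grainings):
    print(sorted(Q))  # [0, 1, 2], [3, 4], [5, 6], [7, 8, 9]
```

Each printed atom corresponds to one Q_i: it sits inside exactly one cell of every coarse-graining, which is the Q \le P or QP = PQ = 0 condition in set language.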
[continue reading]

Comments on Hotta’s Quantum Energy Teleportation

[This is a “literature impression”.]

Masahiro Hotta has a series of papers about what he calls “quantum energy teleportation” (QET), modeled after the well-known notion of quantum teleportation (of information). Although it sounds like crazy crackpot stuff, and the papers contain the red-flag term “zero-point energy”, the basic physics of Hotta’s work is sound. But it doesn’t appear to have important consequences for energy transmission.

The idea is to exploit the fact that the vacuum state in QFT is, in principle, entangled over arbitrary distances. In a toy Alice and Bob model with respective systems A and B, you assume a Hamiltonian for which the ground state is unique and entangled. Then Alice makes a local measurement on her system A. Neither of the two conditional global states for the joint AB system — conditional on the outcome of the measurement — is an eigenstate of the Hamiltonian, and therefore the average energy must increase for the joint system. The source of this energy is the device Alice used to make the measurement. Now, if Bob were to independently make a measurement of his system, he would find that energy would also necessarily flow from his device into the joint system; this follows from the symmetry of the problem. But if he waits for Alice to transmit to him the outcome of her measurement, it turns out that he can apply a local unitary to his B system, followed by a local measurement, that leads to a net average energy flow into his equipment. The fact that he must wait for the outcome of Alice’s measurement, which travels no faster than the speed of light, is what gives this the flavor of teleportation. … [continue reading]
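Here is a numerical sketch of the flavor of the protocol in a two-qubit toy model of my own (not Hotta's actual minimal QET Hamiltonian): Alice's local measurement raises the average energy above the ground-state value, and Bob's outcome-conditioned local rotation recovers some of it using only the classically communicated outcome.

```python
import numpy as np

# Two-qubit toy model with a unique entangled ground state.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

g = 1.0
H = -(np.kron(Z, I) + np.kron(I, Z)) - g * np.kron(X, X)

evals, evecs = np.linalg.eigh(H)
psi = evecs[:, 0]                      # entangled ground state
E0 = evals[0]

def energy(state):                     # works on unnormalized branches:
    return np.real(np.vdot(state, H @ state))

# Alice's local X measurement injects energy on average.
projs = [np.kron((I + s * X) / 2, I) for s in (+1, -1)]
branches = [P @ psi for P in projs]
probs = [np.real(np.vdot(b, b)) for b in branches]
E_meas = sum(energy(b) for b in branches)   # = sum_s p_s <E>_s

# Bob applies an outcome-conditioned rotation exp(-i s theta Y_B),
# optimized over theta, using only the classical bit s from Alice.
thetas = np.linspace(-np.pi, np.pi, 721)
def energy_after_bob(theta):
    total = 0.0
    for s, b in zip((+1, -1), branches):
        U = np.kron(I, np.cos(s * theta) * I - 1j * np.sin(s * theta) * Y)
        total += energy(U @ b)
    return total

E_bob = min(energy_after_bob(th) for th in thetas)
print(E0, E_meas, E_bob)   # E0 < E_bob < E_meas
```

The ordering in the final comment is the whole story: Bob's conditional operation lowers the average energy below what Alice's measurement left behind (the extracted difference goes into his equipment), but never below the global ground-state energy. Without Alice's bit (theta = 0 for both outcomes), Bob extracts nothing.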