Comments on “Longtermist Institutional Reform” by John & MacAskill

Tyler John & William MacAskill have recently released a preprint of their paper “Longtermist Institutional Reform” [PDF]. The paper is set to appear in an EA-motivated collection “The Long View” (working title), from Natalie Cargill and Effective Giving.

Here is the abstract:

There is a vast number of people who will live in the centuries and millennia to come. In all probability, future generations will outnumber us by thousands or millions to one; of all the people who we might affect with our actions, the overwhelming majority are yet to come. In the aggregate, their interests matter enormously. So anything we can do to steer the future of civilization onto a better trajectory, making the world a better place for those generations who are still to come, is of tremendous moral importance. Political science tells us that the practices of most governments are at stark odds with longtermism. In addition to the ordinary causes of human short-termism, which are substantial, politics brings unique challenges of coordination, polarization, short-term institutional incentives, and more. Despite the relatively grim picture of political time horizons offered by political science, the problems of political short-termism are neither necessary nor inevitable. In principle, the State could serve as a powerful tool for positively shaping the long-term future. In this chapter, we make some suggestions about how we should best undertake this project. We begin by explaining the root causes of political short-termism. Then, we propose and defend four institutional reforms that we think would be promising ways to increase the time horizons of governments: 1) government research institutions and archivists; 2) posterity impact assessments; 3) futures assemblies; and 4) legislative houses for future generations.

[continue reading]

Quotes from Curtright et al.’s history of quantum mechanics in phase space

Curtright et al. have a monograph on the phase-space formulation of quantum mechanics. I recommend reading their historical introduction.

A Concise Treatise on Quantum Mechanics in Phase Space
Thomas L. Curtright, David B. Fairlie, and Cosmas K. Zachos
Wigner’s quasi-probability distribution function in phase-space is a special (Weyl–Wigner) representation of the density matrix. It has been useful in describing transport in quantum optics, nuclear physics, quantum computing, decoherence, and chaos. It is also of importance in signal processing, and the mathematics of algebraic deformation. A remarkable aspect of its internal logic, pioneered by Groenewold and Moyal, has only emerged in the last quarter-century: It furnishes a third, alternative, formulation of quantum mechanics, independent of the conventional Hilbert space or path integral formulations. In this logically complete and self-standing formulation, one need not choose sides between coordinate or momentum space. It works in full phase-space, accommodating the uncertainty principle; and it offers unique insights into the classical limit of quantum theory: The variables (observables) in this formulation are c-number functions in phase space instead of operators, with the same interpretation as their classical counterparts, but are composed together in novel algebraic ways.

Here are some quotes. First, the phase-space formulation should be placed on equal footing with the Hilbert-space and path-integral formulations:

When Feynman first unlocked the secrets of the path integral formalism and presented them to the world, he was publicly rebuked: “It was obvious”, Bohr said, “that such trajectories violated the uncertainty principle”.

However, in this case, Bohr was wrong. Today path integrals are universally recognized and widely used as an alternative framework to describe quantum behavior, equivalent to although conceptually distinct from the usual Hilbert space framework, and therefore completely in accord with Heisenberg’s uncertainty principle…

Similarly, many physicists hold the conviction that classical-valued position and momentum variables should not be simultaneously employed in any meaningful formula expressing quantum behavior, simply because this would also seem to violate the uncertainty principle… However, they too are wrong.
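The Wigner function is exactly such a formula: a c-number function of x and p carrying the full content of the quantum state. As a concrete anchor, here is a minimal numerical sketch (mine, not from the monograph) that evaluates the Wigner transform for the harmonic-oscillator ground state and checks that integrating out p recovers the position density; the units, grid ranges, and tolerance are arbitrary choices.

import numpy as np

# Minimal sketch (not from the monograph): the Wigner transform
#   W(x, p) = (1 / (pi * hbar)) * Integral dy psi*(x + y) psi(x - y) exp(2ipy / hbar)
# evaluated for the harmonic-oscillator ground state in units hbar = m = omega = 1.
# Integrating W over p should recover the position density |psi(x)|^2.

hbar = 1.0
x = np.linspace(-5, 5, 201)
p = np.linspace(-5, 5, 201)
y = np.linspace(-5, 5, 801)
dy, dp = y[1] - y[0], p[1] - p[0]

def psi(q):
    """Harmonic-oscillator ground-state wavefunction."""
    return np.pi ** -0.25 * np.exp(-q ** 2 / 2)

# W[i, j] = W(x_i, p_j), by direct quadrature of the Wigner transform.
W = np.empty((len(x), len(p)))
for i, xi in enumerate(x):
    integrand = psi(xi + y) * psi(xi - y)         # real for this state
    phases = np.exp(2j * np.outer(p, y) / hbar)   # shape (len(p), len(y))
    W[i, :] = np.real(phases @ integrand) * dy / (np.pi * hbar)

# The p-marginal of W matches |psi(x)|^2 up to discretization error.
print(np.allclose(W.sum(axis=1) * dp, np.abs(psi(x)) ** 2, atol=1e-3))  # True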

[continue reading]

Ground-state cooling by Delic et al. and the potential for dark matter detection

The implacable Aspelmeyer group in Vienna announced a gnarly achievement in November (recently published):

Cooling of a levitated nanoparticle to the motional quantum ground state
Uroš Delić, Manuel Reisenbauer, Kahan Dare, David Grass, Vladan Vuletić, Nikolai Kiesel, Markus Aspelmeyer
We report quantum ground state cooling of a levitated nanoparticle in a room temperature environment. Using coherent scattering into an optical cavity we cool the center of mass motion of a 143 nm diameter silica particle by more than 7 orders of magnitude to n_x = 0.43 \pm 0.03 phonons along the cavity axis, corresponding to a temperature of 12 μK. We infer a heating rate of \Gamma_x/2\pi = 21\pm 3 kHz, which results in a coherence time of 7.6 μs – or 15 coherent oscillations – while the particle is optically trapped at a pressure of 10^{-6} mbar. The inferred optomechanical coupling rate of g_x/2\pi = 71 kHz places the system well into the regime of strong cooperativity (C \approx 5). We expect that a combination of ultra-high vacuum with free-fall dynamics will allow to further expand the spatio-temporal coherence of such nanoparticles by several orders of magnitude, thereby opening up new opportunities for macroscopic quantum experiments.
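To connect a few of the quoted numbers, here is a quick back-of-the-envelope check (my own arithmetic, not from the paper); the trap frequency below is inferred from the quoted occupation and temperature rather than taken from the abstract.

import numpy as np

# Consistency check of the quoted numbers (my arithmetic, not from the paper).
hbar = 1.054571817e-34       # J s
kB = 1.380649e-23            # J / K

n_x = 0.43                   # quoted phonon occupation along the cavity axis
T = 12e-6                    # quoted effective temperature, K
Gamma_x = 2 * np.pi * 21e3   # quoted heating rate, rad / s

# Invert the Bose-Einstein occupation n = 1 / (exp(hbar * omega / (kB * T)) - 1):
omega_x = kB * T * np.log(1 + 1 / n_x) / hbar
print(f"implied trap frequency: {omega_x / (2 * np.pi) / 1e3:.0f} kHz")  # ~300 kHz

# Absorbing one phonon per 1 / Gamma_x corresponds to the quoted coherence time:
tau = 1 / Gamma_x
print(f"coherence time: {tau * 1e6:.1f} us")                             # 7.6 us

# omega_x * tau is about 14, comparable to the quoted '15 coherent oscillations'.
print(f"omega_x * tau = {omega_x * tau:.0f}")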

Ground-state cooling of nanoparticles in laser traps is a very important milestone on the way to producing large spatial superpositions of matter, and I have a long-standing obsession with the possibility of using such superpositions to probe for the existence of new particles and forces like dark matter. In this post, I put this milestone in a bit of context and then toss up a speculative plot for the estimated dark-matter sensitivity of a follow-up to Delić et al.’s device.

One way to organize the quantum states of a single continuous degree of freedom, like the center-of-mass position of a nanoparticle, is by their sensitivity to displacements in phase space.… [continue reading]

Comments on Weingarten’s preferred branch

A senior colleague asked me for thoughts on this paper describing a single-preferred-branch flavor of quantum mechanics, and I thought I’d copy them here. Tl;dr: I did not find an important new idea in it, but this paper nicely illustrates the appeal of Finkelstein’s partial-trace decoherence and the ambiguity inherent in connecting a many-worlds wavefunction to our direct observations.


We propose a method for finding an initial state vector which by ordinary Hamiltonian time evolution follows a single branch of many-worlds quantum mechanics. The resulting deterministic system appears to exhibit random behavior as a result of the successive emergence over time of information present in the initial state but not previously observed.

We start by assuming that a precise wavefunction branch structure has been specified. The idea, basically, is to randomly draw a branch at late times according to the Born probability, then to evolve it backwards in time to the beginning of the universe and take that as your initial condition. The main motivating observation is that, if we assume that all branch splittings are defined by a projective decomposition of some subsystem (‘the system’) which is recorded faithfully elsewhere (‘the environment’), then the lone preferred branch — time-evolving by itself — is an eigenstate of each of the projectors defining the splits. In a sense, Weingarten lays claim to ordered consistency [arxiv:gr-qc/9607073] by assuming partial-trace decoherence. (Note on terminology: what Finkelstein called “partial-trace decoherence” is really a specialized form of consistency, i.e., a mathematical criterion for sets of consistent histories, that captures some, but not all, of the properties of the physical and dynamical process of decoherence.) [continue reading]
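To make the motivating observation concrete, here is a toy two-qubit sketch (my own illustration, not Weingarten’s construction): a system qubit is recorded by an environment qubit via a CNOT, one branch of the resulting state is drawn with Born weight and evolved backwards, and the forward evolution of that initial state then follows the single branch, which is an eigenstate of the recording projector.

import numpy as np

# Toy illustration (mine, not Weingarten's construction). A qubit "system" S is
# recorded by a qubit "environment" E via a CNOT; branches of the final state
# are labeled by projectors on S. Drawing one branch with Born weight and
# evolving it backwards yields an initial state whose ordinary forward
# evolution follows that single branch.

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Pre-measurement state (a|0> + b|1>)_S (x) |0>_E, then the recording interaction.
a, b = np.sqrt(0.3), np.sqrt(0.7)
psi_final = CNOT @ np.kron(a * ket0 + b * ket1, ket0)   # a|00> + b|11>: two branches

# Branches and their Born weights.
branches = [np.kron(ket0, ket0), np.kron(ket1, ket1)]
weights = [abs(a) ** 2, abs(b) ** 2]

# Draw a branch at the "late time" and evolve it backwards to get a new initial state.
i = np.random.choice(2, p=weights)
phi_init = CNOT.conj().T @ branches[i]

# Forward evolution reproduces the chosen branch, which is an eigenstate of the
# projector (|i><i| on S) tensored with the identity on E.
phi_final = CNOT @ phi_init
ket_i = [ket0, ket1][i]
P_i = np.kron(np.outer(ket_i, ket_i), np.eye(2))
print(np.allclose(phi_final, branches[i]), np.allclose(P_i @ phi_final, phi_final))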

Comments on Cotler, Penington, & Ranard

One way to think about the relevance of decoherence theory to measurement in quantum mechanics is that it reduces the preferred basis problem to the preferred subsystem problem; merely specifying the system of interest (by delineating it from its environment or measuring apparatus) is enough, in important special cases, to derive the measurement basis. But this immediately prompts the question: what are the preferred systems? I spent some time in grad school with my advisor trying to see if I could identify a preferred system just by looking at a large many-body Hamiltonian, but never got anything worth writing up.

I’m pleased to report that Cotler, Penington, and Ranard have tackled a closely related problem, and made a lot more progress:

Locality from the Spectrum
Jordan S. Cotler, Geoffrey R. Penington, Daniel H. Ranard
Essential to the description of a quantum system are its local degrees of freedom, which enable the interpretation of subsystems and dynamics in the Hilbert space. While a choice of local tensor factorization of the Hilbert space is often implicit in the writing of a Hamiltonian or Lagrangian, the identification of local tensor factors is not intrinsic to the Hilbert space itself. Instead, the only basis-invariant data of a Hamiltonian is its spectrum, which does not manifestly determine the local structure. This ambiguity is highlighted by the existence of dualities, in which the same energy spectrum may describe two systems with very different local degrees of freedom. We argue that in fact, the energy spectrum alone almost always encodes a unique description of local degrees of freedom when such a description exists, allowing one to explicitly identify local subsystems and how they interact.
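As a small numerical illustration of the ambiguity they start from (my own sketch, not from the paper), a 2-local spin-chain Hamiltonian and its conjugate by a generic non-local unitary have identical spectra, so the spectrum does not manifestly hand you the tensor factorization in which the dynamics looks local.

import numpy as np

# Sketch (mine, not from the paper): two Hamiltonians with the same spectrum,
# only one of which is local in the given tensor factorization of three qubits.

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Transverse-field Ising chain on 3 qubits: nearest-neighbour ZZ plus X fields.
H_local = (kron(Z, Z, I2) + kron(I2, Z, Z)
           + kron(X, I2, I2) + kron(I2, X, I2) + kron(I2, I2, X))

# Conjugating by a random unitary preserves the spectrum but scrambles locality:
# H_scrambled generically has support on 3-local operators.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
U, _ = np.linalg.qr(A)
H_scrambled = U @ H_local @ U.conj().T

print(np.allclose(np.linalg.eigvalsh(H_local),
                  np.linalg.eigvalsh(H_scrambled)))   # True: identical spectra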
[continue reading]

Comments on Bousso’s communication bound

Bousso has a recent paper that bounds, from first principles in QFT, the maximum information that can be sent by a signal:

I derive a universal upper bound on the capacity of any communication channel between two distant systems. The Holevo quantity, and hence the mutual information, is at most of order E\Delta t/\hbar, where E is the average energy of the signal, and \Delta t is the amount of time for which detectors operate. The bound does not depend on the size or mass of the emitting and receiving systems, nor on the nature of the signal. No restrictions on preparing and processing the signal are imposed. As an example, I consider the encoding of information in the transverse or angular position of a signal emitted and received by systems of arbitrarily large cross-section. In the limit of a large message space, quantum effects become important even if individual signals are classical, and the bound is upheld.
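To get a feel for the scale of the bound, here is a rough plug-in of numbers (my own choice of signal, not an example from the paper); the O(1) prefactor implicit in “of order E\Delta t/\hbar” is ignored.

import numpy as np

# Order-of-magnitude evaluation of the quoted bound (my numbers; O(1) factor ignored).
hbar = 1.054571817e-34   # J s

E = 1e-18        # average signal energy, J (roughly 1 nW collected over 1 ns)
Delta_t = 1e-9   # time for which the detectors operate, s

nats = E * Delta_t / hbar
print(f"Holevo bound ~ {nats:.1e} nats ~ {nats / np.log(2):.1e} bits")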

Here’s his first figure:

[Figure 1 from the paper]

This all stems from vacuum entanglement, an oft-neglected aspect of QFT that Bousso doesn’t emphasize in the paper as the key ingredient. (I thank Scott Aaronson for first pointing this out.) The gradient term in the Hamiltonian for QFTs means that the value of the field at two nearby locations is always entangled. In particular, the values of \phi(x) and \phi(x+\Delta x) are sometimes considered independent degrees of freedom but, for a state with bounded energy, they can’t actually take arbitrarily different values as \Delta x becomes small, or else the gradient contribution to the Hamiltonian would violate the energy bound. Technically this entanglement exists over arbitrary distances, but it is exponentially suppressed on scales larger than the Compton wavelength of the field.… [continue reading]

Comments on an essay by Wigner

[PSA: Happy 4th of July. Juno arrives at Jupiter tonight!]

This is short and worth reading:

The sharp distinction between Initial Conditions and Laws of Nature was initiated by Isaac Newton and I consider this to be one of his most important, if not the most important, accomplishment. Before Newton there was no sharp separation between the two concepts. Kepler, to whom we owe the three precise laws of planetary motion, tried to explain also the size of the planetary orbits, and their periods. After Newton's time the sharp separation of initial conditions and laws of nature was taken for granted and rarely even mentioned. Of course, the first ones are quite arbitrary and their properties are hardly parts of physics while the recognition of the latter ones are the prime purpose of our science. Whether the sharp separation of the two will stay with us permanently is, of course, as uncertain as is all future development but this question will be further discussed later. Perhaps it should be mentioned here that the permanency of the validity of our deterministic laws of nature became questionable as a result of the realization, due initially to D. Zeh, that the states of macroscopic bodies are always under the influence of their environment; in our world they can not be kept separated from it.

This essay has no formal abstract; the above is the second paragraph, which I find to be profound. Here is the PDF. The essay shares the same name and much of the material with Wigner’s 1963 Nobel lecture [PDF]. (The Nobel lecture has a nice bit contrasting invariance principles with covariance principles, and dynamical invariance principles with geometrical invariance principles.) [continue reading]

Comments on Hanson’s The Age of Em

One of the main sources of hubris among physicists is that we think we can communicate essential ideas faster and more exactly than many others. (This isn’t just a choice of compact terminology or an ability to recall shared knowledge. It also has to do with a responsive throttling of the level of detail to match the listener’s ability to follow, and quick questions which allow the listener to home in on things they don’t understand. This leads to a sense of frustration when talking to others who use different methods. Of course this sensation isn’t overwhelming evidence that our methods actually are better and function as described above, just that they are different. But come on.) Robin Hanson’s Age of Em is an incredible written example of efficient transfer of (admittedly speculative) insights. I highly recommend it.

In places where I am trained to expect writers to insert fluff and repeat themselves — without actually clarifying — Hanson states his case concisely once, then plows through to new topics. There are several places where I think he leaps without sufficient justification (at least given my level of background knowledge), but there is a stunning lack of fluff. The ideas are jammed in edgewise.



Academic papers usually have two reasons that they must be read slowly: explicit unpacking of complex subjects, and convoluted language. Hanson’s book is a great example of something that must be read slowly because of the former with no hint of the latter. Although he freely calls on economics concepts that non-economists might have to look up, his language is always incredibly direct and clear. Hanson is an academic Hemingway.

Most of what I might have said on the book’s substance was very quickly eclipsed by other reviews, so you should just read Bryan Caplan, Richard Jones, or Scott Alexander, along with some replies by Hanson.… [continue reading]