Comments on “Longtermist Institutional Reform” by John & MacAskill

Tyler John & William MacAskill have recently released a preprint of their paper “Longtermist Institutional Reform” [PDF]. The paper is set to appear in an EA-motivated collection “The Long View” (working title), from Natalie Cargill and Effective Giving.

Here is the abstract:

There is a vast number of people who will live in the centuries and millennia to come. In all probability, future generations will outnumber us by thousands or millions to one; of all the people who we might affect with our actions, the overwhelming majority are yet to come. In the aggregate, their interests matter enormously. So anything we can do to steer the future of civilization onto a better trajectory, making the world a better place for those generations who are still to come, is of tremendous moral importance. Political science tells us that the practices of most governments are at stark odds with longtermism. In addition to the ordinary causes of human short-termism, which are substantial, politics brings unique challenges of coordination, polarization, short-term institutional incentives, and more. Despite the relatively grim picture of political time horizons offered by political science, the problems of political short-termism are neither necessary nor inevitable. In principle, the State could serve as a powerful tool for positively shaping the long-term future. In this chapter, we make some suggestions about how we should best undertake this project. We begin by explaining the root causes of political short-termism. Then, we propose and defend four institutional reforms that we think would be promising ways to increase the time horizons of governments: 1) government research institutions and archivists; 2) posterity impact assessments; 3) futures assemblies; and 4) legislative houses for future generations.

[continue reading]

How to think about Quantum Mechanics—Part 8: The quantum-classical limit as music

[Other parts in this series: 1,2,3,4,5,6,7,8.]

On microscopic scales, sound is air pressure f(t) fluctuating in time t. Taking the Fourier transform of f(t) gives the frequency distribution \hat{f}(\omega), but only in an eternal sense: it applies to the entire time interval t\in (-\infty,\infty).

Yet on macroscopic scales, sound is described as having a frequency distribution as a function of time, i.e., a note has both a pitch and a duration. There are many formalisms for describing this (e.g., wavelets), but a well-known limitation is that the frequency \omega of a note is only well-defined up to an uncertainty that is inversely proportional to its duration \Delta t.
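To make the tradeoff concrete, here is a minimal numerical sketch (my own illustration, not from the original post): a 440 Hz tone with a Gaussian envelope of duration \Delta t has a spectral peak whose width shrinks like 1/\Delta t.

```python
import numpy as np

# A sketch (mine, not from the post): a 440 Hz tone with a Gaussian envelope of
# width dt has a spectral peak whose width scales roughly like 1/dt.
rate = 44100.0                            # samples per second
t = np.arange(-4.0, 4.0, 1.0 / rate)      # a long time grid, in seconds

def spectral_width(dt, freq=440.0):
    f = np.exp(-t**2 / (2 * dt**2)) * np.sin(2 * np.pi * freq * t)  # the note f(t)
    power = np.abs(np.fft.rfft(f))**2             # |f_hat(nu)|^2
    nu = np.fft.rfftfreq(len(t), 1.0 / rate)      # frequencies in Hz
    mean = np.sum(nu * power) / np.sum(power)
    return np.sqrt(np.sum((nu - mean)**2 * power) / np.sum(power))

for dt in (0.005, 0.05, 0.5):                     # note durations in seconds
    print(f"dt = {dt:5.3f} s  ->  frequency spread ~ {spectral_width(dt):7.2f} Hz")
```

The printed spread shrinks in proportion to 1/\Delta t, consistent with the inverse proportionality described above.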

At the mathematical level, a given wavefunction \psi(x) is almost exactly analogous: macroscopically a particle seems to have a well-defined position and momentum, but microscopically there is only the wavefunction \psi. The mapping of the analogy is \{t,\omega,f\} \to \{x,p,\psi\}. (I am of course not the first to emphasize this analogy. For instance, while writing this post I found “Uncertainty principles in Fourier analysis” by de Bruijn (via Folland’s book), who calls the Wigner function of an audio signal f(t) the “musical score” of f.) Wavefunctions can of course be complex, but we can restrict ourselves to a real-valued wavefunction without any trouble; we are not worrying about the dynamics of wavefunctions, so you can pretend the Hamiltonian vanishes if you like.
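The same computation on the quantum side of the mapping (again a sketch of my own, with \hbar set to 1): a real-valued Gaussian wavefunction \psi(x) has position and momentum spreads whose product sits at the lower bound \hbar/2.

```python
import numpy as np

# A companion sketch (mine, not from the post): position and momentum spreads of a
# real Gaussian wavefunction psi(x), with hbar = 1.  Their product is hbar/2.
hbar = 1.0
x = np.linspace(-50, 50, 2**14)
dx = x[1] - x[0]

sigma = 2.0
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)            # normalize

prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

phi = np.fft.fftshift(np.fft.fft(psi)) * dx             # momentum-space wavefunction (up to a phase)
p = 2 * np.pi * hbar * np.fft.fftshift(np.fft.fftfreq(len(x), dx))
dp = p[1] - p[0]
prob_p = np.abs(phi)**2
prob_p /= np.sum(prob_p) * dp                           # normalize
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(delta_x * delta_p, hbar / 2)                      # approximately equal for a Gaussian
```

Under the mapping, (\Delta t, \Delta\omega) becomes (\Delta x, \Delta p/\hbar), so the bandwidth-duration tradeoff for notes and the Heisenberg bound for wavepackets are the same Fourier fact.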

In order to get the acoustic analog of Planck’s constant \hbar, it helps to imagine going back to a time when the pitch of a note was measured with a unit that did not have a known connection to absolute frequency, i.e.,… [continue reading]

How shocking are rare past events?

This post describes variations on a thought experiment involving the anthropic principle. The variations were developed through discussion with Andreas Albrecht, Charles Bennett, Leonid Levin, and Andrew Arrasmith at a conference at the Niels Bohr Institute in Copenhagen in October of 2019. I have not yet finished reading Bostrom’s “Anthropic Bias”, so I don’t know where it fits into his framework. I expect it is subsumed by existing discussion there, and I would appreciate pointers.

The point is to consider a few thought experiments that share many of the same important features, but about which we have very different intuitions, and to identify whether there are any substantive differences that can be used to justify those intuitions.

I will use the term “shocked” (in the sense of “I was shocked to see Bob levitate off the ground”) to refer to the situation where we have made observations that are extremely unlikely to be generated by our implicit background model of the world, such that good reasoners would likely reject the model and start entertaining previously disfavored alternative models like “we’re all brains in a vat”, the Matrix, etc. In particular, to be shocked is not supposed to be merely a description of human psychology, but rather is a normative claim about how good scientific reasoners should behave.

Here are the three scenarios:

Scenario 1: Through advances in geology, paleontology, theoretical biology, and quantum computer simulation of chemistry, we get very strong theoretical evidence that intelligent life appears with high likelihood following abiogenesis events, but that abiogenesis itself is very rare: there is one expected abiogenesis event per 10^22 stars per Hubble time.
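For scale (a back-of-the-envelope aside of my own, not part of the scenario), the commonly quoted figure of roughly 10^22 to 10^24 stars in the observable universe would imply only of order one to a hundred expected abiogenesis events per Hubble time:

```python
# Back-of-the-envelope (my own aside): expected abiogenesis events per Hubble time
# in the observable universe, given the rate stipulated in Scenario 1 and the
# commonly quoted ~10^22 to 10^24 stars in the observable universe.
rate_per_star = 1e-22                      # events per star per Hubble time (Scenario 1)
for n_stars in (1e22, 1e23, 1e24):
    print(f"{n_stars:.0e} stars -> {rate_per_star * n_stars:.0f} expected events")
```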
[continue reading]

FAQ about experimental quantum Darwinism

I am briefly stirring from my blog-hibernation (this blog will resume at full force sometime in the future, but not just yet) to present a collection of frequently asked questions about experiments seeking to investigate quantum Darwinism (QD). Most of the questions were asked by (or evolved from questions asked by) Philip Ball while we corresponded regarding his recent article “Quantum Darwinism, an Idea to Explain Objective Reality, Passes First Tests” for Quanta magazine, which I recommend you check out.


Who is trying to see quantum Darwinism in experiments?

I am aware of two papers out of a group from Arizona State in 2010 (here and here) and three papers from separate groups last year (arXiv: 1803.01913, 1808.07388, 1809.10456). I haven’t looked at them all carefully so I can’t vouch for them, but I think the more recent papers would be the closest thing to a “test” of QD.

What are the experiments doing to put QD to the test?

These teams construct a kind of “synthetic environment” from just a few qubits, and then interrogate them to discover the information that they contain about the quantum system to which they are coupled.

What do you think of experimental tests of QD in general?

Considered as a strictly mathematical phenomenon, QD is the dynamical creation of certain kinds of correlations between certain systems and their environments under certain conditions. These experiments directly confirm that, if such conditions are created, the expected correlations are obtained.
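For concreteness, here is a toy numpy sketch of that correlation structure (my own illustration, not the protocol of any of the experiments above): a system qubit whose state is copied onto a few environment qubits by CNOT-type interactions shares exactly H(S) bits of mutual information with every small environment fragment, the redundant classical records characteristic of QD, and the full 2H(S) bits only with the environment as a whole.

```python
import numpy as np

# Toy sketch (mine, not any group's actual protocol): a system qubit S copied onto
# m environment qubits by CNOT-type interactions ends up in the branching state
# a|0>|00...0> + b|1>|11...1>.  The mutual information I(S:F) between S and an
# environment fragment F then shows the QD signature: H(S) bits for any small
# fragment (redundant records), and 2*H(S) only for the entire environment.

def entropy(rho):
    """Von Neumann entropy in bits."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log2(vals)))

def reduced(rho, keep, n):
    """Reduced density matrix of the qubits listed in `keep` (n qubits total)."""
    rho = rho.reshape([2] * (2 * n))
    nleft = n
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + nleft)   # trace out qubit q
        nleft -= 1
    d = 2 ** len(keep)
    return rho.reshape(d, d)

m = 4                                   # environment qubits
n = m + 1                               # qubit 0 is the system S
a, b = np.sqrt(0.3), np.sqrt(0.7)       # system amplitudes

psi = np.zeros(2 ** n)
psi[0] = a                              # |0>_S |0000>_E
psi[-1] = b                             # |1>_S |1111>_E
rho = np.outer(psi, psi)                # global pure state

H_S = entropy(reduced(rho, [0], n))
for k in range(1, m + 1):
    frag = list(range(1, 1 + k))        # fragment = first k environment qubits
    I = H_S + entropy(reduced(rho, frag, n)) - entropy(reduced(rho, [0] + frag, n))
    print(f"fragment of {k} qubit(s): I(S:F) = {I:.3f} bits   [H(S) = {H_S:.3f}]")
```

The plateau at H(S) for small fragments is the kind of redundancy signature these experiments probe; in this idealized model it is exact.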

The experiments are, unfortunately, not likely to offer much insight or many opportunities for surprise; the result can be predicted with very high confidence long in advance.… [continue reading]

Comments on Weingarten’s preferred branch

A senior colleague asked me for thoughts on this paper describing a single-preferred-branch flavor of quantum mechanics, and I thought I’d copy them here. Tl;dr: I did not find an important new idea in it, but this paper nicely illustrates the appeal of Finkelstein’s partial-trace decoherence and the ambiguity inherent in connecting a many-worlds wavefunction to our direct observations.


We propose a method for finding an initial state vector which by ordinary Hamiltonian time evolution follows a single branch of many-worlds quantum mechanics. The resulting deterministic system appears to exhibit random behavior as a result of the successive emergence over time of information present in the initial state but not previously observed.

We start by assuming that a precise wavefunction branch structure has been specified. The idea, basically, is to randomly draw a branch at late times according to the Born probability, then to evolve it backwards in time to the beginning of the universe and take that as your initial condition. The main motivating observation is that, if we assume that all branch splittings are defined by a projective decomposition of some subsystem (‘the system’) which is recorded faithfully elsewhere (‘the environment’), then the lone preferred branch — time-evolving by itself — is an eigenstate of each of the projectors defining the splits. In a sense, Weingarten lays claim to ordered consistency [arxiv:gr-qc/9607073] by assuming partial-trace decoherence (note on terminology: what Finkelstein called “partial-trace decoherence” is really a specialized form of consistency, i.e., a mathematical criterion for sets of consistent histories, that captures some, but not all, of the properties of the physical and dynamical process of decoherence)… [continue reading]
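To make the construction concrete, here is a two-qubit toy sketch of how I read the procedure (my own code, not Weingarten's): pick a branch at the final time with Born probability, then evolve it backwards to define the special initial state.

```python
import numpy as np

# Two-qubit toy sketch of the procedure as I read it (my code, not Weingarten's):
# evolve forward, pick one branch at the final time with Born probability, then
# evolve that branch backwards to obtain a special initial state whose ordinary
# forward evolution follows the single branch.
rng = np.random.default_rng(0)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
U = CNOT @ np.kron(H, np.eye(2))     # put the system in superposition, then record it in the environment

psi0 = np.array([1, 0, 0, 0], dtype=complex)   # |0>_S |0>_E
psiT = U @ psi0                                # (|00> + |11>)/sqrt(2): two branches

# Branch projectors defined by the environment record |0>_E or |1>_E
projectors = [np.kron(np.eye(2), np.outer(e, e)) for e in np.eye(2)]

weights = np.array([np.linalg.norm(P @ psiT) ** 2 for P in projectors])
weights /= weights.sum()
i = rng.choice(len(projectors), p=weights)     # draw a branch with Born probability
branch = projectors[i] @ psiT
branch /= np.linalg.norm(branch)

phi0 = U.conj().T @ branch                     # back-evolve the chosen branch to t = 0
print(np.allclose(projectors[i] @ (U @ phi0), U @ phi0))   # True: forward evolution stays on one branch
print(phi0)                                    # the "special" initial condition
```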

Weinberg on the measurement problem

In his new article in the NY Review of Books, the titan Steven Weinberg expresses more sympathy for the importance of the measurement problem in quantum mechanics. The article has nothing new for folks well-versed in quantum foundations, but Weinberg demonstrates a command of the existing arguments and considerations. The lengthy excerpts below characterize what I think are the most important aspects of his view.

Many physicists came to think that the reaction of Einstein and Feynman and others to the unfamiliar aspects of quantum mechanics had been overblown. This used to be my view. After all, Newton’s theories too had been unpalatable to many of his contemporaries…Evidently it is a mistake to demand too strictly that new physical theories should fit some preconceived philosophical standard.

In quantum mechanics the state of a system is not described by giving the position and velocity of every particle and the values and rates of change of various fields, as in classical physics. Instead, the state of any system at any moment is described by a wave function, essentially a list of numbers, one number for every possible configuration of the system….What is so terrible about that? Certainly, it was a tragic mistake for Einstein and Schrödinger to step away from using quantum mechanics, isolating themselves in their later lives from the exciting progress made by others. Even so, I’m not as sure as I once was about the future of quantum mechanics. It is a bad sign that those physicists today who are most comfortable with quantum mechanics do not agree with one another about what it all means. The dispute arises chiefly regarding the nature of measurement in quantum mechanics…

The introduction of probability into the principles of physics was disturbing to past physicists, but the trouble with quantum mechanics is not that it involves probabilities.

[continue reading]

Three arguments on the measurement problem

When talking to folks about the quantum measurement problem, and its potential partial resolution by solving the set selection problem, I’ve recently been deploying three nonstandard arguments. To a large extent, these are dialectic strategies rather than unique arguments per se. That is, they are notable for me mostly because they avoid getting bogged down in some common conceptual dispute, not necessarily because they demonstrate something that doesn’t formally follow from traditional arguments. At least two of these seem new to me, in the sense that I don’t remember anyone else using them, but I strongly suspect that I’ve just appropriated them from elsewhere and forgotten. Citations to prior art are highly appreciated.

Passive quantum mechanics

There are good reasons to believe that, at the most abstract level, the practice of science doesn’t require a notion of active experiment. Rather, a completely passive observer could still in principle derive all fundamental physical theories simply by sitting around and watching. Science, at this level, is about explaining as many observations as possible starting from assumptions that are as minimal as possible. Abstractly, we can frame science as a compression algorithm that tries to find the program with the smallest Kolmogorov complexity that reproduces the observed data.
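As a toy illustration of that framing (my own, with zlib standing in for the uncomputable Kolmogorov complexity), data produced by a simple law compresses far better than patternless data; finding the regularity is finding the short description.

```python
import zlib
import numpy as np

# Toy illustration (mine): zlib-compressed length as a crude stand-in for
# Kolmogorov complexity.  Observations generated by a simple law (a repeating
# quantized waveform) compress far better than patternless observations.
rng = np.random.default_rng(0)

period = np.round(100 * np.sin(np.linspace(0, 2 * np.pi, 628, endpoint=False))).astype(np.int8)
lawful = np.tile(period, 16)                                          # data obeying a simple "law"
random = rng.integers(-100, 101, size=lawful.size).astype(np.int8)    # patternless data

for name, data in (("lawful", lawful), ("random", random)):
    compressed = len(zlib.compress(data.tobytes(), 9))
    print(f"{name:7s}: {data.nbytes} bytes raw -> {compressed} bytes compressed")
```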

Active experiments are of course useful for at least two important reasons: (1) They gather strong evidence for causality by feeding a source of randomness into a system to test a causal model, and (2) they produce sources of data that are directly correlated with systems of interest rather than relying on highly indirect (and perhaps computationally intractable) correlations. But ultimately these are practical considerations, and an inert but extraordinarily intelligent observer could in principle derive general relativity, quantum mechanics, and field theory (of course, there may be RG-reasons to think that scales decouple, and that to a good approximation the large-scale dynamics are compatible with lots of possible small-scale dynamics)… [continue reading]

Comments on an essay by Wigner

[PSA: Happy 4th of July. Juno arrives at Jupiter tonight!]

This is short and worth reading:

The sharp distinction between Initial Conditions and Laws of Nature was initiated by Isaac Newton and I consider this to be one of his most important, if not the most important, accomplishment. Before Newton there was no sharp separation between the two concepts. Kepler, to whom we owe the three precise laws of planetary motion, tried to explain also the size of the planetary orbits, and their periods. After Newton's time the sharp separation of initial conditions and laws of nature was taken for granted and rarely even mentioned. Of course, the first ones are quite arbitrary and their properties are hardly parts of physics while the recognition of the latter ones are the prime purpose of our science. Whether the sharp separation of the two will stay with us permanently is, of course, as uncertain as is all future development but this question will be further discussed later. Perhaps it should be mentioned here that the permanency of the validity of our deterministic laws of nature became questionable as a result of the realization, due initially to D. Zeh, that the states of macroscopic bodies are always under the influence of their environment; in our world they can not be kept separated from it.

This essay has no formal abstract; the above is the second paragraph, which I find to be profound. Here is the PDF. The essay shares the same name and much of the material with Wigner’s 1963 Nobel lecture [PDF]. (The Nobel lecture has a nice bit contrasting invariance principles with covariance principles, and dynamical invariance principles with geometrical invariance principles.) … [continue reading]