Gravitational transmission of quantum information by Carney et al.

Carney, Müller, and Taylor have a tantalizing paper on how the quantum nature of gravity might be confirmed even though we are quite far from being able to directly create and measure superpositions of gravitationally appreciable amounts of matter (hereafter: “massive superpositions”), and of course very far from being able to probe the Planck scale where quantum gravity effects dominate. More precisely, the idea is to demonstrate (under certain assumptions) that the gravitational field can be used to transmit quantum information from one system to another, in the sense that the effective quantum channel is not entanglement breaking.
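For reference, “entanglement breaking” here carries its standard meaning; nothing in this definition is specific to their protocol. A channel \Phi is entanglement breaking if extending it by the identity on any ancilla always destroys entanglement, which by the Horodecki-Shor-Ruskai theorem is equivalent to \Phi having a measure-and-prepare form:

    \begin{align*} (\Phi \otimes \mathcal{I})(\rho_{SA}) \text{ is separable for all } \rho_{SA} \quad \Longleftrightarrow \quad \Phi(\rho) = \sum_k \mathrm{Tr}[E_k \rho] \, \sigma_k \end{align*}

for some POVM \{E_k\} and fixed states \{\sigma_k\}. The experiment is designed to certify that the effective gravitational channel is not of this form.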

We suggest a test of a central prediction of perturbatively quantized general relativity: the coherent communication of quantum information between massive objects through gravity. To do this, we introduce the concept of interactive quantum information sensing, a protocol tailored to the verification of dynamical entanglement generation between a pair of systems. Concretely, we propose to monitor the periodic wavefunction collapse and revival in an atomic interferometer which is gravitationally coupled to a mechanical oscillator. We prove a theorem which shows that, under the assumption of time-translation invariance, this collapse and revival is possible if and only if the gravitational interaction forms an entangling channel. Remarkably, as this approach improves at moderate temperatures and relies primarily upon atomic coherence, our numerical estimates indicate feasibility with current devices.
[Edit: See also the November 2021 errata.]

Although I’m not sure they would phrase it this way, the key idea for me was that merely protecting massive superpositions from decoherence is actually not that hard; sufficient isolation can be achieved in lots of systems.… [continue reading]

Comments on Baldijao et al.’s GPT-generalized quantum Darwinism

This nice recent paper considers the “generalized probabilistic theory” operational framework, of which classical and quantum theories are special cases, and asks what sorts of theories admit quantum Darwinism-like dynamics. It is closely related to my interest in finding a satisfying theory of classical measurement.

Quantum Darwinism and the spreading of classical information in non-classical theories
Roberto D. Baldijão, Marius Krumm, Andrew J. P. Garner, and Markus P. Müller
Quantum Darwinism posits that the emergence of a classical reality relies on the spreading of classical information from a quantum system to many parts of its environment. But what are the essential physical principles of quantum theory that make this mechanism possible? We address this question by formulating the simplest instance of Darwinism – CNOT-like fan-out interactions – in a class of probabilistic theories that contain classical and quantum theory as special cases. We determine necessary and sufficient conditions for any theory to admit such interactions. We find that every non-classical theory that admits this spreading of classical information must have both entangled states and entangled measurements. Furthermore, we show that Spekkens’ toy theory admits this form of Darwinism, and so do all probabilistic theories that satisfy principles like strong symmetry, or contain a certain type of decoherence processes. Our result suggests the counterintuitive general principle that in the presence of local non-classicality, a classical world can only emerge if this non-classicality can be “amplified” to a form of entanglement.
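To make the “CNOT-like fan-out interactions” concrete before diving in, here is a minimal sketch of my own in plain numpy (not code from the paper): a fan-out of CNOTs from one system qubit onto two environment qubits copies the classical basis states but turns a superposition into system-environment entanglement.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P0 = np.outer(ket0, ket0)   # |0><0| projector on the control (system) qubit
P1 = np.outer(ket1, ket1)   # |1><1| projector on the control (system) qubit

def kron(*factors):
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

# CNOTs from the system qubit onto each of two environment qubits
cnot_env1 = kron(P0, I, I) + kron(P1, X, I)
cnot_env2 = kron(P0, I, I) + kron(P1, I, X)
fanout = cnot_env2 @ cnot_env1   # the "CNOT-like fan-out" interaction

for name, system in [("|0>", ket0), ("|1>", ket1), ("|+>", plus)]:
    psi_in = kron(system, ket0, ket0)   # environment qubits start in |00>
    psi_out = fanout @ psi_in
    print(name, "->", np.round(psi_out, 3))

# |0> -> |000> and |1> -> |111>: the classical bit is redundantly copied.
# |+> -> (|000> + |111>)/sqrt(2): a superposition is not cloned; instead the
# interaction creates system-environment entanglement, which is the branching
# structure quantum Darwinism exploits.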

After the intro, the authors give self-contained background information on the two key prerequisites: quantum Darwinism and generalized probabilistic theories (GPTs). The former is an admirable brief summary of what are, to me, the core and extremely simple features of quantum Darwinism.… [continue reading]

Distinguish between straight research and scientific opinion?

Summary: Maybe we should start distinguishing “straight research” from more opinionated scientific work and encourage industrial research labs to commit to protecting the former as a realistic, limited version of academic freedom in the private for-profit sector.

It seems clear enough to me that, within the field of journalism, the distinction between opinion pieces and “straight reporting” is both meaningful and valuable to draw. Both sorts of work should be pursued vigorously, even by the same journalists at the same time, but they should be distinguished (e.g., by being placed in different sections of a newspaper, or being explicitly labeled “opinion”, etc.) and held to different standards. [In my opinion it’s unfortunate that this distinction has been partially eroded in recent years and that some thoughtful people have even argued it’s meaningless and should be dropped. That’s not the subject of this blog post, though.] This is true even though there is of course a continuum between these categories, and it’s infeasible to precisely quantify the axis. (That said, I’d like to see more serious philosophical attempts to identify actionable principles for drawing this distinction more reliably and transparently.)

It’s easy for idealistic outsiders to get the impression that all of respectable scientific research is analogous to straight reporting rather than opinion, but just about any researcher will tell you that some articles are closer than others to the opinion category; that’s not to say they are bad or unscientific, just that such articles go further in the direction of speculative interpretation and selective highlighting of certain pieces of evidence, and are often motivated by normative claims (“this area is a more fruitful research avenue than my colleagues believe”, “this evidence implies the government should adopt a certain policy”, etc.).… [continue reading]

Moyal bracket with manifest (affine) symplectic covariance

Moyal’s equation for a Wigner function W of a quantum system with (Wigner-transformed) Hamiltonian H is \partial_t W = \{ H,W \}_\hbar, where the Moyal bracket \{\cdot,\cdot\}_\hbar is a binary operator on the space of functions over phase space. Unfortunately, it is often written down mysteriously as

(1)   \begin{align*} \{ A,B \}_\hbar = \frac{2}{\hbar} A \sin \left( \frac{\hbar}{2} \left(\overleftarrow{\partial}_x\overrightarrow{\partial}_p - \overleftarrow{\partial}_p\overrightarrow{\partial}_x \right) \right) B, \end{align*}

where the arrows over partial derivatives tell you which way they act, i.e., C (\overleftarrow{\partial}_x \overrightarrow{\partial}_p ) D = (\partial_x C)(\partial_p D). This only becomes slightly less weird when you use the equivalent formula \{ A,B \}_\hbar = (A \star B - B\star A)/(i\hbar), where “\star” is the Moyal star product given by

(2)   \begin{align*} A \star B =  A \exp \left( \frac{i\hbar}{2} \left(\overleftarrow{\partial}_x\overrightarrow{\partial}_p - \overleftarrow{\partial}_p\overrightarrow{\partial}_x \right) \right) B. \end{align*}
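As a sanity check on Eq. (2), here is a small sketch of my own (not from any particular reference) that implements the star product order by order in \hbar for polynomial A and B using sympy, and confirms that the resulting Moyal bracket reduces to the Poisson bracket at leading order.

import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)

def d(expr, var, n):
    """Apply d/dvar n times."""
    for _ in range(n):
        expr = sp.diff(expr, var)
    return expr

def star(A, B, order=4):
    """Moyal star product A * B, truncated at hbar^order.
    The bidifferential operator in Eq. (2) expands by the binomial theorem as
    sum_k C(n,k) (-1)^(n-k) (d_x^k d_p^(n-k) A)(d_p^k d_x^(n-k) B)."""
    total = 0
    for n in range(order + 1):
        term = sum(sp.binomial(n, k) * (-1)**(n - k)
                   * d(d(A, x, k), p, n - k)
                   * d(d(B, p, k), x, n - k)
                   for k in range(n + 1))
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

def moyal_bracket(A, B, order=4):
    return sp.simplify((star(A, B, order) - star(B, A, order)) / (sp.I * hbar))

def poisson_bracket(A, B):
    return sp.diff(A, x) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, x)

# Example: a quartic Hamiltonian and a simple polynomial standing in for W.
H = p**2 / 2 + x**4
W = x**2 * p**3
mb = moyal_bracket(H, W)

print(sp.simplify(mb.subs(hbar, 0) - poisson_bracket(H, W)))  # 0: hbar -> 0 limit
print(sp.expand(mb - poisson_bracket(H, W)))                  # correction is O(hbar^2)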

The star product has the crucial feature that \widehat{A \star B} = \widehat{A}\widehat{B}, where we use a hat to denote the Weyl transform (i.e., the inverse of the Wigner transform taking density matrices to Wigner functions), which takes a scalar function over phase space to an operator over our Hilbert space. The star product also has some nice integral representations, which can be found in books like Curtright, Fairlie, & Zachos [the complete 88-page PDF is here], but none of them help me understand the Moyal equation.

A key problem is that both of these expressions obscure the (affine) symplectic symmetry of phase space and of the dynamical equations. Although I wouldn’t call it beautiful, we can re-write the star product as

(3)   \begin{align*} A \star B =  A \exp \left( \frac{i\hbar}{2} \overleftarrow{\partial}_a\overrightarrow{\partial}^a \right) B, \end{align*}

where a=x,p is a symplectic index using the Einstein summation convention, and where symplectic indices are raised and lowered using the symplectic form just as for Weyl spinors: v_a = \epsilon_{ab}v^b and w^a = \epsilon^{ab}w_b, where \epsilon is the antisymmetric symplectic form with \epsilon^{xp} = +1 = \epsilon_{px}, and where upper (lower) indices denote symplectic vectors (co-vectors).
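Unpacking these conventions (a short step not written out above, but following directly from the definitions): raising the index on the right-acting derivative gives \overrightarrow{\partial}^x = \epsilon^{xb}\overrightarrow{\partial}_b = \overrightarrow{\partial}_p and \overrightarrow{\partial}^p = \epsilon^{pb}\overrightarrow{\partial}_b = -\overrightarrow{\partial}_x, so that

    \begin{align*} \overleftarrow{\partial}_a\overrightarrow{\partial}^a = \overleftarrow{\partial}_x\overrightarrow{\partial}^x + \overleftarrow{\partial}_p\overrightarrow{\partial}^p = \overleftarrow{\partial}_x\overrightarrow{\partial}_p - \overleftarrow{\partial}_p\overrightarrow{\partial}_x, \end{align*}

which is exactly the bilinear operator appearing in Eqs. (1) and (2).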

With this, we can expand the Moyal equation as

    \begin{align*} \{ H,W \}_\hbar &= \frac{2}{\hbar} H \sin \left( \frac{\hbar}{2} \overleftarrow{\partial}_a\overrightarrow{\partial}^a \right) W \\ &= \sum_{n=0}^\infty \frac{(-\hbar^2/4)^n}{(2n+1)!} \left(\partial_{a_1}\cdots \partial_{a_{2n+1}} H\right)\left(\partial^{a_1}\cdots \partial^{a_{2n+1}} W\right) \\ &= (\partial_a H)(\partial^a W) - \frac{\hbar^2}{24}(\partial_a\partial_b\partial_c H)(\partial^a \partial^b \partial^c W) \\ &\qquad \qquad + \frac{\hbar^4}{1920}(\partial_a\partial_b\partial_c \partial_d \partial_e H)(\partial^a \partial^b \partial^c \partial^d \partial^e W) - \cdots  \end{align*}

where we can see in hideous explicitness that it’s a series in the even powers of \hbar and the odd derivatives of the Hamiltonian H and the Wigner function W.… [continue reading]

Quantum computing timelines

[Jaime Sevilla is a Computer Science PhD Student at the University of Aberdeen. In this guest post, he describes our recent forecasting work on quantum computing. – Jess Riedel]

In Short: We attempt to forecast when quantum computers will be able to crack the common cryptographic scheme RSA2048, and develop a model that assigns less than 5% probability to this capability being reached before 2039. A preprint is available at arXiv:2009.05045.

Advanced quantum computing comes with some new applications as well as a few risks, most notably threatening the foundations of modern online security.

In light of the recent experimental crossing of the “quantum supremacy” milestone, it is of great interest to estimate when devices capable of attacking typical encrypted communication will be constructed, and whether the development of communication protocols that are secure against quantum computers is progressing at an adequate pace.  

Beyond its intrinsic interest, quantum computing is also fertile ground for quantified forecasting. Exercises in forecasting technological progress have generally been sparse — with some notable exceptions — but such forecasting is of great importance: technological progress dictates a large part of human progress.

To date, most systematic predictions about development timelines for quantum computing have been based on expert surveys, in part because quantitative data about realistic architectures has been limited to a small number of idiosyncratic prototypes. However, in the last few years the number of devices has been rapidly increasing, and it is now possible to squint through the fog of research and make some tentative extrapolations. We emphasize that our quantitative model should be considered to at most augment, not replace, expert predictions. Indeed, as we discuss in our preprint, this early data is noisy, and we must make strong assumptions to say anything concrete.… [continue reading]

Comments on “Longtermist Institutional Reform” by John & MacAskill

Tyler John & William MacAskill have recently released a preprint of their paper “Longtermist Institutional Reform” [PDF]. The paper is set to appear in an EA-motivated collection “The Long View” (working title), from Natalie Cargill and Effective Giving.

Here is the abstract:

There is a vast number of people who will live in the centuries and millennia to come. In all probability, future generations will outnumber us by thousands or millions to one; of all the people who we might affect with our actions, the overwhelming majority are yet to come. In the aggregate, their interests matter enormously. So anything we can do to steer the future of civilization onto a better trajectory, making the world a better place for those generations who are still to come, is of tremendous moral importance. Political science tells us that the practices of most governments are at stark odds with longtermism. In addition to the ordinary causes of human short-termism, which are substantial, politics brings unique challenges of coordination, polarization, short-term institutional incentives, and more. Despite the relatively grim picture of political time horizons offered by political science, the problems of political short-termism are neither necessary nor inevitable. In principle, the State could serve as a powerful tool for positively shaping the long-term future. In this chapter, we make some suggestions about how we should best undertake this project. We begin by explaining the root causes of political short-termism. Then, we propose and defend four institutional reforms that we think would be promising ways to increase the time horizons of governments: 1) government research institutions and archivists; 2) posterity impact assessments; 3) futures assemblies; and 4) legislative houses for future generations.

[continue reading]

Living bibliography for the problem of defining wavefunction branches

[Last updated: Nov 27, 2021.]

This post is (a seed of) a bibliography covering the primordial research area that goes by some of the following names:

Although the way this problem tends to be formalized varies with context, I don’t think we have confidence in any of the formalizations. The different versions are very tightly related, so that a solution in one context is likely to give, or at least strongly point toward, solutions for the others.

As a time-saving device, I will mostly just quote a few paragraphs from existing papers that review the literature, along with the relevant part of their list of references. Currently I am drawing on five papers: Carroll & Singh [arXiv:2005.12938]; Riedel, Zurek, & Zwolak [arXiv:1312.0331]; Weingarten [arXiv:2105.04545]; Kent [arXiv:1311.0249]; and Zampeli, Pavlou, & Wallden [arXiv:2205.15893].

I hope to update this from time to time, and perhaps turn it into a proper review article of its own one day. If you have a recommendation for this bibliography (either a single citation, or a paper I should quote), please do let me know.

Carroll & Singh

From “Quantum Mereology: Factorizing Hilbert Space into Subsystems with Quasi-Classical Dynamics” [arXiv:2005.12938]:

While this question has not frequently been addressed in the literature on quantum foundations and emergence of classicality, a few works have highlighted its importance and made attempts to understand it better.

[continue reading]

How to think about Quantum Mechanics—Part 8: The quantum-classical limit as music

[Other parts in this series: 1,2,3,4,5,6,7,8.]

On microscopic scales, sound is air pressure f(t) fluctuating in time t. Taking the Fourier transform of f(t) gives the frequency distribution \hat{f}(\omega), but only in an eternal way, applying to the entire time interval t\in (-\infty,\infty).

Yet on macroscopic scales, sound is described as having a frequency distribution as a function of time, i.e., a note has both a pitch and a duration. There are many formalisms for describing this (e.g., wavelets), but a well-known limitation is that the frequency \omega of a note is only well-defined up to an uncertainty that is inversely proportional to its duration \Delta t.
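As a quick numerical illustration of that tradeoff (my own sketch, not part of the original argument; the 440 Hz carrier and durations are arbitrary choices): for a Gaussian tone burst, the RMS duration and RMS spectral width computed from |f(t)|^2 and |\hat{f}(\omega)|^2 multiply to roughly 1/2, so halving the duration doubles the pitch uncertainty.

import numpy as np

def widths(sigma_t, omega0=2 * np.pi * 440.0):
    """RMS duration and RMS spectral width of a Gaussian tone burst
    f(t) = exp(-t^2 / (4 sigma_t^2)) * cos(omega0 * t)."""
    t = np.linspace(-1.0, 1.0, 2**16)
    f = np.exp(-t**2 / (4 * sigma_t**2)) * np.cos(omega0 * t)

    # RMS duration, using the normalized intensity |f(t)|^2 as a weight
    wt = np.abs(f)**2
    wt /= wt.sum()
    mean_t = (wt * t).sum()
    delta_t = np.sqrt((wt * t**2).sum() - mean_t**2)

    # RMS width of the positive-frequency power spectrum |f_hat(omega)|^2
    omega = 2 * np.pi * np.fft.rfftfreq(len(t), d=t[1] - t[0])
    spec = np.abs(np.fft.rfft(f))**2
    spec /= spec.sum()
    mean_omega = (spec * omega).sum()
    delta_omega = np.sqrt((spec * omega**2).sum() - mean_omega**2)
    return delta_t, delta_omega

for sigma_t in [0.005, 0.01, 0.02, 0.04]:
    dt, dw = widths(sigma_t)
    print(f"sigma_t = {sigma_t:.3f} s: dt = {dt:.4f} s, dw = {dw:7.1f} rad/s, dt*dw = {dt*dw:.3f}")

# Each product dt*dw comes out close to 0.5 (a Gaussian saturates the
# time-frequency uncertainty bound): shorter notes have proportionally
# less well-defined pitch.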

At the mathematical level, a given wavefunction \psi(x) is almost exactly analogous: macroscopically a particle seems to have a well-defined position and momentum, but microscopically there is only the wavefunction \psi. The mapping of the analogy is \{t,\omega,f\} \to \{x,p,\psi\}. [I am of course not the first to emphasize this analogy. For instance, while writing this post I found “Uncertainty principles in Fourier analysis” by de Bruijn (via Folland’s book), who calls the Wigner function of an audio signal f(t) the “musical score” of f.] Wavefunctions can of course be complex, but we can restrict ourselves to a real-valued wavefunction without any trouble; we are not worrying about the dynamics of wavefunctions, so you can pretend the Hamiltonian vanishes if you like.

In order to get the acoustic analog of Planck’s constant \hbar, it helps to imagine going back to a time when the pitch of a note was measured with a unit that did not have a known connection to absolute frequency, i.e.,… [continue reading]