Comments on Ollivier’s “Emergence of Objectivity for Quantum Many-Body Systems”

Harold Ollivier has put out a nice paper generalizing my best result:

We examine the emergence of objectivity for quantum many-body systems in a setting without an environment to decohere the system’s state, but where observers can only access small fragments of the whole system. We extend the result of Riedel (2017) to the case where the system is in a mixed state, measurements are performed through POVMs, and imprints of the outcomes are imperfect. We introduce a new condition on states and measurements to recover full classicality for any number of observers. We further show that evolutions of quantum many-body systems can be expected to yield states that satisfy this condition whenever the corresponding measurement outcomes are redundant.
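
For orientation, here is the sort of statement being generalized, in rough schematic notation of my own rather than Ollivier’s: if two observables $\Omega_1$ and $\Omega_2$ are each recorded redundantly, i.e., each imprinted on many disjoint microscopic fragments of the state $|\psi\rangle$, then they are compatible on that state,

\[ [\Omega_1, \Omega_2]\, |\psi\rangle \approx 0, \]

with no appeal to a preferred system-environment split. Ollivier’s extension allows mixed states, POVMs, and imperfect records.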

Ollivier does a good job of summarizing why there is an urgent need to find a way to identify objectively classical variables in a many-body system without leaning on a preferred system-environment tensor decomposition. He also concisely describes the main results of my paper in somewhat different language, so some of you may find his version nicer to read.

A minor quibble: although this is of course a matter of taste, I disagree that the Shor code example was the “core of the main result” of my paper. In my opinion, the key idea was that there was a sensible way of defining redundancy at all in a way that allowed for proving statements about compatibility without recourse to a preferred non-microscopic tensor structure. The Shor code example is more important for showing the limits of what redundancy can tell you (which is saturated in a weak sense). [continue reading]

Weingarten’s branches from quantum complexity

Don Weingarten’s new attack [2105.04545] on the problem of defining wavefunction branches is the most important paper on this topic in several years — and hence, by my strange twisted lights, one of the most important recent papers in physics. (I previously blogged about earlier work by Weingarten on a related topic; this new paper directly addresses my previous concerns.) Ultimately I think there are significant barriers to the success of this approach, but these may be surmountable. Regardless, the paper makes tons of progress in understanding the advantages and drawbacks of a definition of branches based on quantum complexity.

Here’s the abstract:

Beginning with the Everett-DeWitt many-worlds interpretation of quantum mechanics, there have been a series of proposals for how the state vector of a quantum system might split at any instant into orthogonal branches, each of which exhibits approximately classical behavior. Here we propose a decomposition of a state vector into branches by finding the minimum of a measure of the mean squared quantum complexity of the branches in the branch decomposition. In a non-relativistic formulation of this proposal, branching occurs repeatedly over time, with each branch splitting successively into further sub-branches among which the branch followed by the real world is chosen randomly according to the Born rule. In a Lorentz covariant version, the real world is a single random draw from the set of branches at asymptotically late time, restored to finite time by sequentially retracing the set of branching events implied by the late time choice. The complexity measure depends on a parameter b with units of volume which sets the boundary between quantum and classical behavior.
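
To state the proposal schematically, in my own notation rather than the paper’s: pick the orthogonal decomposition of the state that minimizes the weighted mean squared complexity of its branches,

\[ \min_{\{p_i,\,|\phi_i\rangle\}} \sum_i p_i\, C_b(|\phi_i\rangle)^2 \quad \text{subject to} \quad |\psi\rangle = \sum_i \sqrt{p_i}\, |\phi_i\rangle, \qquad \langle \phi_i | \phi_j \rangle = \delta_{ij}, \]

where $C_b$ is a measure of quantum complexity depending on the volume parameter $b$ mentioned in the abstract. The choice of $C_b$, and how this minimization behaves under time evolution, is where the real work happens.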
[continue reading]

Gravitational transmission of quantum information by Carney et al.

Carney, Müller, and Taylor have a tantalizing paper on how the quantum nature of gravity might be confirmed even though we are quite far from being able to directly create and measure superpositions of gravitationally appreciable amounts of matter (hereafter: “massive superpositions”), and of course very far from being able to probe the Planck scale where quantum gravity effects dominate. More precisely, the idea is to demonstrate (granting some assumptions) that the gravitational field can be used to transmit quantum information from one system to another in the sense that the effective quantum channel is not entanglement breaking.
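
To recall the definition being invoked: a channel $\Phi$ is entanglement breaking if and only if it can be written in measure-and-prepare form,

\[ \Phi(\rho) = \sum_k \mathrm{Tr}[E_k \rho]\, \sigma_k, \]

for some POVM $\{E_k\}$ and states $\sigma_k$; equivalently, $(\Phi \otimes \mathcal{I})(\rho)$ is separable for every input $\rho$. Showing that the effective gravitational channel is not of this form is what would certify that gravity can transmit quantum information.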

We suggest a test of a central prediction of perturbatively quantized general relativity: the coherent communication of quantum information between massive objects through gravity. To do this, we introduce the concept of interactive quantum information sensing, a protocol tailored to the verification of dynamical entanglement generation between a pair of systems. Concretely, we propose to monitor the periodic wavefunction collapse and revival in an atomic interferometer which is gravitationally coupled to a mechanical oscillator. We prove a theorem which shows that, under the assumption of time-translation invariance, this collapse and revival is possible if and only if the gravitational interaction forms an entangling channel. Remarkably, as this approach improves at moderate temperatures and relies primarily upon atomic coherence, our numerical estimates indicate feasibility with current devices.
[Edit: See also the November 2021 errata.]

Although I’m not sure they would phrase it this way, the key idea for me was that merely protecting massive superpositions from decoherence is actually not that hard; sufficient isolation can be achieved in lots of systems.… [continue reading]

Comments on Baldijao et al.’s GPT-generalized quantum Darwinism

This nice recent paper considers the “general probabilistic theory” operational framework, of which classical and quantum theories are special cases, and asks what sorts of theories admit quantum Darwinism-like dynamics. It is closely related to my interest in finding a satisfying theory of classical measurement.

Quantum Darwinism and the spreading of classical information in non-classical theories
Roberto D. Baldijão, Marius Krumm, Andrew J. P. Garner, and Markus P. Müller
Quantum Darwinism posits that the emergence of a classical reality relies on the spreading of classical information from a quantum system to many parts of its environment. But what are the essential physical principles of quantum theory that make this mechanism possible? We address this question by formulating the simplest instance of Darwinism – CNOT-like fan-out interactions – in a class of probabilistic theories that contain classical and quantum theory as special cases. We determine necessary and sufficient conditions for any theory to admit such interactions. We find that every non-classical theory that admits this spreading of classical information must have both entangled states and entangled measurements. Furthermore, we show that Spekkens’ toy theory admits this form of Darwinism, and so do all probabilistic theories that satisfy principles like strong symmetry, or contain a certain type of decoherence processes. Our result suggests the counterintuitive general principle that in the presence of local non-classicality, a classical world can only emerge if this non-classicality can be “amplified” to a form of entanglement.
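
In quantum theory, the “CNOT-like fan-out interactions” of the abstract are (as I read it) the familiar branching unitaries of quantum Darwinism, which copy a pointer basis $\{|i\rangle\}$ into $N$ environment fragments,

\[ |i\rangle_S \otimes |0\rangle_{E_1} \cdots |0\rangle_{E_N} \;\longmapsto\; |i\rangle_S \otimes |i\rangle_{E_1} \cdots |i\rangle_{E_N}, \]

extended linearly to superpositions; the question is which GPTs admit an analogous operation.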

After the intro, the authors give self-contained background information on the two key prerequisites: quantum Darwinism and generalized probabilistic theories (GPTs). The former is an admirable brief summary of what are, to me, the core and extremely simple features of quantum Darwinism.… [continue reading]

Comments on “Longtermist Institutional Reform” by John & MacAskill

Tyler John & William MacAskill have recently released a preprint of their paper “Longtermist Institutional Reform” [PDF]. The paper is set to appear in an EA-motivated collection “The Long View” (working title), from Natalie Cargill and Effective Giving.

Here is the abstract:

There is a vast number of people who will live in the centuries and millennia to come. In all probability, future generations will outnumber us by thousands or millions to one; of all the people who we might affect with our actions, the overwhelming majority are yet to come. In the aggregate, their interests matter enormously. So anything we can do to steer the future of civilization onto a better trajectory, making the world a better place for those generations who are still to come, is of tremendous moral importance. Political science tells us that the practices of most governments are at stark odds with longtermism. In addition to the ordinary causes of human short-termism, which are substantial, politics brings unique challenges of coordination, polarization, short-term institutional incentives, and more. Despite the relatively grim picture of political time horizons offered by political science, the problems of political short-termism are neither necessary nor inevitable. In principle, the State could serve as a powerful tool for positively shaping the long-term future. In this chapter, we make some suggestions about how we should best undertake this project. We begin by explaining the root causes of political short-termism. Then, we propose and defend four institutional reforms that we think would be promising ways to increase the time horizons of governments: 1) government research institutions and archivists; 2) posterity impact assessments; 3) futures assemblies; and 4) legislative houses for future generations.

[continue reading]

Quotes from Curtright et al.’s history of quantum mechanics in phase space

Curtright et al. have a monograph on the phase-space formulation of quantum mechanics. I recommend reading their historical introduction.

A Concise Treatise on Quantum Mechanics in Phase Space
Thomas L. Curtright, David B. Fairlie, and Cosmas K. Zachos
Wigner’s quasi-probability distribution function in phase-space is a special (Weyl–Wigner) representation of the density matrix. It has been useful in describing transport in quantum optics, nuclear physics, quantum computing, decoherence, and chaos. It is also of importance in signal processing, and the mathematics of algebraic deformation. A remarkable aspect of its internal logic, pioneered by Groenewold and Moyal, has only emerged in the last quarter-century: It furnishes a third, alternative, formulation of quantum mechanics, independent of the conventional Hilbert space or path integral formulations. In this logically complete and self-standing formulation, one need not choose sides between coordinate or momentum space. It works in full phase-space, accommodating the uncertainty principle; and it offers unique insights into the classical limit of quantum theory: The variables (observables) in this formulation are c-number functions in phase space instead of operators, with the same interpretation as their classical counterparts, but are composed together in novel algebraic ways.
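
For readers who haven’t seen this formulation, the “novel algebraic ways” are the Groenewold-Moyal star product. The Wigner function of a density matrix $\rho$ is

\[ W(x,p) = \frac{1}{\pi\hbar} \int \mathrm{d}y\, \langle x+y | \rho | x-y \rangle\, e^{-2ipy/\hbar}, \]

and observables (c-number functions on phase space) are composed with

\[ (f \star g)(x,p) = f(x,p)\, \exp\!\left[ \frac{i\hbar}{2} \left( \overleftarrow{\partial}_x \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_x \right) \right] g(x,p), \]

and the Moyal bracket $\{f,g\}_\star = (f\star g - g\star f)/(i\hbar)$ reduces to the Poisson bracket as $\hbar \to 0$.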

Here are some quotes. First, the phase-space formulation should be placed on equal footing with the Hilbert-space and path-integral formulations:

When Feynman first unlocked the secrets of the path integral formalism and presented them to the world, he was publicly rebuked: “It was obvious”, Bohr said, “that such trajectories violated the uncertainty principle”.

However, in this case, Bohr was wrong. Today path integrals are universally recognized and widely used as an alternative framework to describe quantum behavior, equivalent to although conceptually distinct from the usual Hilbert space framework, and therefore completely in accord with Heisenberg’s uncertainty principle…

Similarly, many physicists hold the conviction that classical-valued position and momentum variables should not be simultaneously employed in any meaningful formula expressing quantum behavior, simply because this would also seem to violate the uncertainty principle… However, they too are wrong.

[continue reading]

Ground-state cooling by Delic et al. and the potential for dark matter detection

The implacable Aspelmeyer group in Vienna announced a gnarly achievement in November (recently published):

Cooling of a levitated nanoparticle to the motional quantum ground state
Uroš Delić, Manuel Reisenbauer, Kahan Dare, David Grass, Vladan Vuletić, Nikolai Kiesel, Markus Aspelmeyer
We report quantum ground state cooling of a levitated nanoparticle in a room temperature environment. Using coherent scattering into an optical cavity we cool the center of mass motion of a 143 nm diameter silica particle by more than 7 orders of magnitude to $n_x = 0.43 \pm 0.03$ phonons along the cavity axis, corresponding to a temperature of 12 μK. We infer a heating rate of $\Gamma_x/2\pi = 21 \pm 3$ kHz, which results in a coherence time of 7.6 μs – or 15 coherent oscillations – while the particle is optically trapped at a pressure of $10^{-6}$ mbar. The inferred optomechanical coupling rate of $g_x/2\pi = 71$ kHz places the system well into the regime of strong cooperativity ($C \approx 5$). We expect that a combination of ultra-high vacuum with free-fall dynamics will allow to further expand the spatio-temporal coherence of such nanoparticles by several orders of magnitude, thereby opening up new opportunities for macroscopic quantum experiments.
[EDIT: The same group has more recently achieved ground-state cooling with real-time control feedback.]
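
(A quick sanity check on the quoted numbers, assuming the coherence time is simply the inverse heating rate: $1/\Gamma_x = 1/(2\pi \times 21\ \mathrm{kHz}) \approx 7.6\ \mu\mathrm{s}$, matching the value in the abstract.)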

Ground-state cooling of nanoparticles in laser traps is a very important milestone on the way to producing large spatial superpositions of matter, and I have a long-standing obsession with the possibility of using such superpositions to probe for the existence of new particles and forces like dark matter. In this post, I put this milestone in a bit of context and then toss up a speculative plot for the estimated dark-matter sensitivity of a follow-up to Delić et al.’s device.

One way to organize the quantum states of a single continuous degree of freedom, like the center-of-mass position of a nanoparticle, is by their sensitivity to displacements in phase space.… [continue reading]

Tishby on physics and deep learning

Having heard Geoffrey Hinton’s somewhat dismissive account of the contribution by physicists to machine learning in his online MOOC, I found it interesting to listen to one of those physicists, Naftali Tishby, here at PI:


The Information Theory of Deep Neural Networks: The statistical physics aspects
Naftali Tishby
Abstract:

The surprising success of learning with deep neural networks poses two fundamental challenges: understanding why these networks work so well and what this success tells us about the nature of intelligence and our biological brain. Our recent Information Theory of Deep Learning shows that large deep networks achieve the optimal tradeoff between training size and accuracy, and that this optimality is achieved through the noise in the learning process.

In this talk, I will focus on the statistical physics aspects of our theory and the interaction between the stochastic dynamics of the training algorithm (Stochastic Gradient Descent) and the phase structure of the Information Bottleneck problem. Specifically, I will describe the connections between the phase transition and the final location and representation of the hidden layers, and the role of these phase transitions in determining the weights of the network.

Based partly on joint works with Ravid Shwartz-Ziv, Noga Zaslavsky, and Shlomi Agmon.


(See also Steve Hsu’s discussion of a similar talk Tishby gave in Berlin, plus other notes on history.)
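
For reference, the Information Bottleneck objective at the center of Tishby’s story is, in standard notation (not necessarily the talk’s), to find a compressed representation $T$ of the input $X$ that trades off compression against prediction of the label $Y$:

\[ \min_{p(t|x)} \; I(X;T) - \beta\, I(T;Y), \]

with the claim being that the hidden layers of a trained deep network end up near points on this optimal tradeoff curve.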

I was familiar with the general concept of over-fitting, but I hadn’t realized you could talk about it quantitatively by looking at the mutual information between the output of a network and all the information in the training data that isn’t the target label.… [continue reading]