FAQ about experimental quantum Darwinism

I am briefly stirring from my blog-hibernation (this blog will resume at full force sometime in the future, but not just yet) to present a collection of frequently asked questions about experiments seeking to investigate quantum Darwinism (QD). Most of the questions were asked by (or evolved from questions asked by) Philip Ball while we corresponded regarding his recent article “Quantum Darwinism, an Idea to Explain Objective Reality, Passes First Tests” for Quanta magazine, which I recommend you check out.


Who is trying to see quantum Darwinism in experiments?

I am aware of two papers out of a group from Arizona State in 2010 (here and here) and three papers from separate groups last year (arXiv: 1803.01913, 1808.07388, 1809.10456). I haven’t looked at them all carefully so I can’t vouch for them, but I think the more recent papers would be the closest thing to a “test” of QD.

What are the experiments doing to put QD to the test?

These teams construct a kind of “synthetic environment” from just a few qubits, and then interrogate them to discover the information that they contain about the quantum system to which they are coupled.

What do you think of experimental tests of QD in general?

Considered as a strictly mathematical phenomenon, QD is the dynamical creation of certain kinds of correlations between certain systems and their environments under certain conditions. These experiments directly confirm that, if such conditions are created, the expected correlations are obtained.
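
To make that correlation structure concrete, here is a minimal numerical sketch (my own illustration, not code from any of the experiments): a single system qubit in superposition is copied into a few environment qubits by CNOTs, and the quantum mutual information between the system and environment fragments of increasing size is computed. The plateau at one bit, reached already for the smallest fragments, is the redundancy signature characteristic of QD.

```python
import numpy as np

N_ENV = 4  # size of the toy "synthetic environment" (an arbitrary small choice)

def partial_trace(rho, keep, n):
    """Reduced density matrix of the qubits listed in `keep`, from an n-qubit rho."""
    rho = rho.reshape([2] * (2 * n))
    for i in sorted(set(range(n)) - set(keep), reverse=True):
        k = rho.ndim // 2  # current number of subsystems
        rho = np.trace(rho, axis1=i, axis2=i + k)
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

# System qubit in (|0> + |1>)/sqrt(2); a CNOT onto each environment qubit leaves the
# GHZ-like state (|00...0> + |11...1>)/sqrt(2), i.e. every environment qubit holds a record.
psi = np.zeros(2 ** (1 + N_ENV))
psi[0] = psi[-1] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

for f in range(1, N_ENV + 1):
    frag = list(range(1, 1 + f))  # the first f environment qubits
    I = (entropy(partial_trace(rho, [0], 1 + N_ENV))
         + entropy(partial_trace(rho, frag, 1 + N_ENV))
         - entropy(partial_trace(rho, [0] + frag, 1 + N_ENV)))
    print(f"fragment of {f} qubit(s): I(S:F) = {I:.2f} bits")
# I(S:F) plateaus at 1 bit (the classical information about the system); only the
# full environment yields the extra, quantum, bit.
```

Roughly speaking, the experiments reconstruct this same plateau, with the CNOTs replaced by the actual system–environment coupling and with imperfect records.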

The experiments are, unfortunately, not likely to offer much insight or many opportunities for surprise; the result can be predicted with very high confidence long in advance.… [continue reading]

Tishby on physics and deep learning

Having heard Geoffrey Hinton’s somewhat dismissive account of the contribution by physicists to machine learning in his online MOOC, I found it interesting to listen to one of those physicists, Naftali Tishby, here at PI:


The Information Theory of Deep Neural Networks: The statistical physics aspects
Naftali Tishby
Abstract:

The surprising success of learning with deep neural networks poses two fundamental challenges: understanding why these networks work so well and what this success tells us about the nature of intelligence and our biological brain. Our recent Information Theory of Deep Learning shows that large deep networks achieve the optimal tradeoff between training size and accuracy, and that this optimality is achieved through the noise in the learning process.

In this talk, I will focus on the statistical physics aspects of our theory and the interaction between the stochastic dynamics of the training algorithm (Stochastic Gradient Descent) and the phase structure of the Information Bottleneck problem. Specifically, I will describe the connections between the phase transition and the final location and representation of the hidden layers, and the role of these phase transitions in determining the weights of the network.

Based partly on joint works with Ravid Shwartz-Ziv, Noga Zaslavsky, and Shlomi Agmon.


(See also Steve Hsu’s discussion of a similar talk Tishby gave in Berlin, plus other notes on history.)

I was familiar with the general concept of over-fitting, but I hadn’t realized you could talk about it quantitatively by looking at the mutual information between the output of a network and all the information in the training data that isn’t the target label.… [continue reading]
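
As a rough illustration of that quantity (a toy sketch of my own, not something from the talk): the mutual information between a scalar hidden representation and a non-label feature of the inputs can be crudely estimated by binning, similar in spirit to the binned estimates used in the information-bottleneck analyses. A representation that memorizes label-irrelevant detail keeps this quantity high; one that compresses it away drives it toward zero.

```python
import numpy as np

def mutual_information(x, t, bins=30):
    """Plug-in estimate (in bits) of I(X;T) for scalar samples, via a 2D histogram."""
    joint, _, _ = np.histogram2d(x, t, bins=bins)
    p_xt = joint / joint.sum()
    p_x = p_xt.sum(axis=1, keepdims=True)
    p_t = p_xt.sum(axis=0, keepdims=True)
    nz = p_xt > 0
    return float(np.sum(p_xt[nz] * np.log2(p_xt[nz] / (p_x @ p_t)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=20000)                       # stand-in for non-label structure in the data
t_memorize = x + 0.1 * rng.normal(size=20000)    # representation that clings to that structure
t_compress = 0.1 * x + rng.normal(size=20000)    # representation that has mostly discarded it

print(mutual_information(x, t_memorize))   # large: carrying label-irrelevant detail
print(mutual_information(x, t_compress))   # small: the detail has been squeezed out
```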

Comments on Weingarten’s preferred branch

[Added 2022-March-13: Weingarten has a new paper, discussed by me here, that mostly supersedes the content of this post. In the new approach, the preferred branch decomposition is to be generated using a modification of Nielsen’s measure of quantum circuit complexity.]

A senior colleague asked me for thoughts on this paper describing a single-preferred-branch flavor of quantum mechanics, and I thought I’d copy them here. Tl;dr: I did not find an important new idea in it, but this paper nicely illustrates the appeal of Finkelstein’s partial-trace decoherence and the ambiguity inherent in connecting a many-worlds wavefunction to our direct observations.


We propose a method for finding an initial state vector which by ordinary Hamiltonian time evolution follows a single branch of many-worlds quantum mechanics. The resulting deterministic system appears to exhibit random behavior as a result of the successive emergence over time of information present in the initial state but not previously observed.

We start by assuming that a precise wavefunction branch structure has been specified. The idea, basically, is to randomly draw a branch at late times according to the Born probability, then to evolve it backwards in time to the beginning of the universe and take that as your initial condition. The main motivating observation is that, if we assume that all branch splittings are defined by a projective decomposition of some subsystem (‘the system’) which is recorded faithfully elsewhere (‘the environment’), then the lone preferred branch — time-evolving by itself — is an eigenstate of each of the projectors defining the splits. In a sense, Weingarten lays claim to ordered consistency [arxiv:gr-qc/9607073] by assuming partial-trace decoherence. (Note on terminology: What Finkelstein called “partial-trace decoherence” is really a specialized form of consistency (i.e.,… [continue reading]
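
In symbols (my paraphrase, not Weingarten's notation): write the late-time wavefunction as a sum of branches,

\[
|\Psi(t_f)\rangle = \sum_i |\psi_i(t_f)\rangle, \qquad p_i = \langle \psi_i(t_f)|\psi_i(t_f)\rangle .
\]

Draw a branch i with probability p_i and take as the initial condition of the universe

\[
|\Phi(0)\rangle \propto U^\dagger(t_f)\,|\psi_i(t_f)\rangle ,
\]

so that ordinary unitary evolution carries |\Phi(0)\rangle along the single branch |\psi_i(t)\rangle/\sqrt{p_i}. If each splitting is defined by a projector P_\alpha on the system whose outcome is faithfully recorded in the environment, then P_{\alpha}|\psi_i(t)\rangle = |\psi_i(t)\rangle for the outcome \alpha realized on branch i, which is the sense in which the lone preferred branch is an eigenstate of each projector defining the splits.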

Weinberg on the measurement problem

In his new article in the NY Review of Books, the titan Steven Weinberg expresses more sympathy for the importance of the measurement problem in quantum mechanics. The article has nothing new for folks well-versed in quantum foundations, but Weinberg demonstrates a command of the existing arguments and considerations. The lengthy excerpts below characterize what I think are the most important aspects of his view.

Many physicists came to think that the reaction of Einstein and Feynman and others to the unfamiliar aspects of quantum mechanics had been overblown. This used to be my view. After all, Newton’s theories too had been unpalatable to many of his contemporaries…Evidently it is a mistake to demand too strictly that new physical theories should fit some preconceived philosophical standard.

In quantum mechanics the state of a system is not described by giving the position and velocity of every particle and the values and rates of change of various fields, as in classical physics. Instead, the state of any system at any moment is described by a wave function, essentially a list of numbers, one number for every possible configuration of the system….What is so terrible about that? Certainly, it was a tragic mistake for Einstein and Schrödinger to step away from using quantum mechanics, isolating themselves in their later lives from the exciting progress made by others. Even so, I’m not as sure as I once was about the future of quantum mechanics. It is a bad sign that those physicists today who are most comfortable with quantum mechanics do not agree with one another about what it all means. The dispute arises chiefly regarding the nature of measurement in quantum mechanics…

The introduction of probability into the principles of physics was disturbing to past physicists, but the trouble with quantum mechanics is not that it involves probabilities.

[continue reading]

Three arguments on the measurement problem

When talking to folks about the quantum measurement problem, and its potential partial resolution by solving the set selection problem, I’ve recently been deploying three nonstandard arguments. To a large extent, these are dialectic strategies rather than unique arguments per se. That is, they are notable for me mostly because they avoid getting bogged down in some common conceptual dispute, not necessarily because they demonstrate something that doesn’t formally follow from traditional arguments. At least two of these seem new to me, in the sense that I don’t remember anyone else using them, but I strongly suspect that I’ve just appropriated them from elsewhere and forgotten. Citations to prior art are highly appreciated.

Passive quantum mechanics

There are good reasons to believe that, at the most abstract level, the practice of science doesn’t require a notion of active experiment. Rather, a completely passive observer could still in principle derive all fundamental physical theories simply by sitting around and watching. Science, at this level, is about explaining as many observations as possible starting from assumptions that are as minimal as possible. Abstractly, we frame science as a compression algorithm that tries to find the program with the smallest Kolmogorov complexity that reproduces the observed data.
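
As a toy illustration of the compression framing (entirely my own, and only a crude proxy, since Kolmogorov complexity itself is uncomputable): a two-part code charges each candidate “theory” for stating its law plus for encoding whatever the law fails to account for, with a general-purpose compressor standing in for the ideal description length.

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000)
law = 3.0 * np.sin(0.1 * t)
data = np.round(law + 0.1 * rng.normal(size=t.size), 2)  # the passive observer's "observations"

def description_length(law_text, residuals):
    """Two-part code length in bytes: state the law, then encode what it misses."""
    return (len(zlib.compress(law_text.encode()))
            + len(zlib.compress(np.round(residuals, 2).tobytes())))

dl_no_law = description_length("data are arbitrary", data)                      # explains nothing
dl_law    = description_length("x(t) = 3 sin(0.1 t) + small noise", data - law)

print(dl_no_law, dl_law)   # the law should win: the structure it captures compresses away
```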

Active experiments are of course useful for at least two important reasons: (1) They gather strong evidence for causality by feeding a source of randomness into a system to test a causal model, and (2) they produce sources of data that are directly correlated with systems of interest rather than relying on highly indirect (and perhaps computationally intractable) correlations. But ultimately these are practical considerations, and an inert but extraordinarily intelligent observer could in principle derive general relativity, quantum mechanics, and field theory. (Of course, there may be RG-reasons to think that scales decouple, and that to a good approximation the large-scale dynamics are compatible with lots of possible small-scale dynamics.)… [continue reading]

Comments on an essay by Wigner

[PSA: Happy 4th of July. Juno arrives at Jupiter tonight!]

This is short and worth reading:

The sharp distinction between Initial Conditions and Laws of Nature was initiated by Isaac Newton and I consider this to be one of his most important, if not the most important, accomplishment. Before Newton there was no sharp separation between the two concepts. Kepler, to whom we owe the three precise laws of planetary motion, tried to explain also the size of the planetary orbits, and their periods. After Newton's time the sharp separation of initial conditions and laws of nature was taken for granted and rarely even mentioned. Of course, the first ones are quite arbitrary and their properties are hardly parts of physics while the recognition of the latter ones are the prime purpose of our science. Whether the sharp separation of the two will stay with us permanently is, of course, as uncertain as is all future development but this question will be further discussed later. Perhaps it should be mentioned here that the permanency of the validity of our deterministic laws of nature became questionable as a result of the realization, due initially to D. Zeh, that the states of macroscopic bodies are always under the influence of their environment; in our world they can not be kept separated from it.

This essay has no formal abstract; the above is the second paragraph, which I find to be profound. Here is the PDF. The essay shares the same name and much of the material with Wigner’s 1963 Nobel lecture [PDF]. (The Nobel lecture has a nice bit contrasting invariance principles with covariance principles, and dynamical invariance principles with geometrical invariance principles.) [continue reading]

Comments on Rosaler’s “Reduction as an A Posteriori Relation”

In a previous post of abstracts, I mentioned philosopher Josh Rosaler’s attempt to clarify the distinction between empirical and formal notions of “theoretical reduction”. Reduction is just the idea that one theory reduces to another in some limit, like special relativity reduces to Galilean kinematics in the limit of small velocities. (Confusingly, philosophers use a reversed convention; they say that Galilean mechanics reduces to special relativity.) Formal reduction is when this takes the form of some mathematical limiting procedure (e.g., v/c \to 0), whereas empirical reduction is an explanatory statement about observations (e.g., “special relativity explains the empirical usefulness of Galilean kinematics”).
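
As a concrete instance of a formal limiting procedure (a standard textbook example, not drawn from Rosaler): taking c \to \infty at fixed velocity v (equivalently v/c \to 0), the Lorentz transformation degenerates into the Galilean one,

\[
x' = \gamma\,(x - vt) \;\to\; x - vt, \qquad
t' = \gamma\,(t - vx/c^2) \;\to\; t, \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \;\to\; 1 .
\]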

Rosaler’s criticism, which I mostly agree with, is that folks often conflate these two. Usually this isn’t a serious problem since the holes can be patched up on the fly by a competent physicist, but sometimes it leads to serious trouble. The most egregious case, and the one that got me interested in all this, is the quantum-classical transition, and in particular the serious insufficiency of existing \hbar \to 0 limits to explain the appearance of macroscopic classicality. In particular, even though this limiting procedure recovers the classical equations of motion, it fails spectacularly to recover the state space. (There are multiple quantum states that have the same classical analog as \hbar \to 0, and there are quantum states that have no classical analog as \hbar \to 0.)
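
A standard illustration of that parenthetical point (my example, not Rosaler's): for a harmonic oscillator, the coherent-state superposition and the corresponding mixture,

\[
|\psi\rangle \propto |\alpha\rangle + |{-\alpha}\rangle
\qquad \text{vs.} \qquad
\rho = \tfrac{1}{2}\big(|\alpha\rangle\langle\alpha| + |{-\alpha}\rangle\langle{-\alpha}|\big),
\]

have the same classical description in the \hbar \to 0 limit (two localized lumps in phase space), because the interference fringes that distinguish them oscillate on a scale set by \hbar and wash out; and those fringes themselves have no classical counterpart.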

In this post I’m going to comment on Rosaler’s recent elaboration of this idea (I thank him for discussing this topic with me and, full disclosure, we’re drafting a paper about set selection together):

Reduction between theories in physics is often approached as an a priori relation in the sense that reduction is often taken to depend only on a comparison of the mathematical structures of two theories.
[continue reading]

Comments on Myrvold’s Taj Mahal

Last week I saw an excellent talk by philosopher Wayne Myrvold.

The Reeh-Schlieder theorem says, roughly, that, in any reasonable quantum field theory, for any bounded region of spacetime R, any state can be approximated arbitrarily closely by operating on the vacuum state (or any state of bounded energy) with operators formed by smearing polynomials in the field operators with functions having support in R. This strikes many as counterintuitive, and Reinhard Werner has glossed the theorem as saying that “By acting on the vacuum with suitable operations in a terrestrial laboratory, an experimenter can create the Taj Mahal on (or even behind) the Moon!” This talk has two parts. First, I hope to convince listeners that the theorem is not counterintuitive, and that it follows immediately from facts that are already familiar fare to anyone who has digested the opening chapters of any standard introductory textbook of QFT. In the second, I will discuss what we can learn from the theorem about how relativistic causality is implemented in quantum field theories.

(Download MP4 video here.)

The topic was well-defined, and of reasonable scope. The theorem is easily and commonly misunderstood. And Wayne’s talk served to dissolve the confusion around it, by unpacking the theorem into a handful of pieces so that you could quickly see where the rub was. I would that all philosophy of physics were so well done.
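
For reference, the formal statement (my paraphrase, not Wayne's wording): for any open spacetime region O with local operator algebra \mathcal{A}(O), the vacuum |\Omega\rangle is cyclic,

\[
\overline{\mathrm{span}}\,\{\, A|\Omega\rangle \;:\; A \in \mathcal{A}(O) \,\} = \mathcal{H},
\]

and likewise for any state of bounded energy in place of the vacuum. The word “dense” is doing the work: as is usually emphasized, the operators needed to approximate a faraway Taj-Mahal state are generally not unitary, so this is not a deterministic operation an experimenter could simply perform.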

Here are the key points as I saw them:

  • The vacuum state in QFTs, even non-interacting ones, is entangled over arbitrary distances (albeit by exponentially small amounts). You can think of this as every two space-like separated regions of spacetime sharing extremely diluted Bell pairs.
[continue reading]