Living bibliography for the problem of defining wavefunction branches

[Last updated: Nov 27, 2021.]

This post is (a seed of) a bibliography covering the primordial research area that goes by several names, chief among them the problem of defining wavefunction branches.

Although the way this problem tends to be formalized varies with context, I don’t think we have confidence in any of the formalizations. The different versions are very tightly related, so that a solution in one context is likely to give, or at least strongly point toward, solutions for the others.

As a time-saving device, I will mostly just quote a few paragraphs from existing papers that review the literature, along with the relevant part of their list of references. Currently I am drawing on five papers: Carroll & Singh [arXiv:2005.12938]; Riedel, Zurek, & Zwolak [arXiv:1312.0331]; Weingarten [arXiv:2105.04545]; Kent [arXiv:1311.0249]; and Zampeli, Pavlou, & Wallden [arXiv:2205.15893].

I hope to update this from time to time, and perhaps turn it into a proper review article of its own one day. If you have a recommendation for this bibliography (either a single citation, or a paper I should quote), please do let me know.

Carroll & Singh

From “Quantum Mereology: Factorizing Hilbert Space into Subsystems with Quasi-Classical Dynamics” [arXiv:2005.12938]:

While this question has not frequently been addressed in the literature on quantum foundations and emergence of classicality, a few works have highlighted its importance and made attempts to understand it better.

[continue reading]

How to think about Quantum Mechanics—Part 8: The quantum-classical limit as music

[Other parts in this series: 1,2,3,4,5,6,7,8.]

On microscopic scales, sound is air pressure f(t) fluctuating in time t. Taking the Fourier transform of f(t) gives the frequency distribution \hat{f}(\omega), but in an eternal way, applying to the entire time interval t\in(-\infty,\infty).

Yet on macroscopic scales, sound is described as having a frequency distribution as a function of time, i.e., a note has both a pitch and a duration. There are many formalisms for describing this (e.g., wavelets), but a well-known limitation is that the frequency \omega of a note is only well-defined up to an uncertainty that is inversely proportional to its duration \Delta t.
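To make the tradeoff concrete, here is a quick numerical check (a toy sketch in Python; the note parameters are arbitrary choices of mine): a note with a Gaussian envelope of duration ~\sigma has a spectral peak of width ~1/\sigma, and the product of the two RMS widths lands at the Gaussian minimum-uncertainty value of 1/2.

```python
import numpy as np

# A note of duration ~sigma: Gaussian envelope times a 440 Hz carrier.
sigma = 0.1                    # note duration (s); toy value
omega0 = 2 * np.pi * 440.0     # pitch (rad/s); toy value
t = np.linspace(-2, 2, 2**16)
f = np.exp(-t**2 / (2 * sigma**2)) * np.cos(omega0 * t)

def rms_width(x, weight):
    m = np.sum(x * weight) / np.sum(weight)
    return np.sqrt(np.sum((x - m)**2 * weight) / np.sum(weight))

dt = rms_width(t, f**2)                    # duration of the note
power = np.abs(np.fft.rfft(f))**2          # positive-frequency power spectrum
omega = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
domega = rms_width(omega, power)           # width of the spectral peak

print(dt * domega)   # ~0.5: frequency uncertainty inversely proportional to duration
```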

At the mathematical level, a given wavefunction \psi(x) is almost exactly analogous: macroscopically a particle seems to have a well-defined position and momentum, but microscopically there is only the wavefunction \psi. The mapping of the analogy is \{t,\omega,f\} \to \{x,p,\psi\}. (I am of course not the first to emphasize this analogy. For instance, while writing this post I found “Uncertainty principles in Fourier analysis” by de Bruijn (via Folland’s book), who calls the Wigner function of an audio signal f(t) the “musical score” of f.) Wavefunctions can of course be complex, but we can restrict ourselves to a real-valued wavefunction without any trouble; we are not worrying about the dynamics of wavefunctions, so you can pretend the Hamiltonian vanishes if you like.

In order to get the acoustic analog of Planck’s constant \hbar, it helps to imagine going back to a time when the pitch of a note was measured with a unit that did not have a known connection to absolute frequency, i.e.,… [continue reading]

A checkable Lindbladian condition

Summary

Physicists often define a Lindbladian superoperator as one whose action on an operator B can be written as

(1)   \begin{align*} \mathcal{L}[B] = -i [H,B] + \sum_i \left[ L_i B L_i^\dagger - \frac{1}{2}\left(L_i^\dagger L_i B + B L_i^\dagger L_i\right)\right], \end{align*}

for some operator H with positive anti-Hermitian part, H-H^\dagger \ge 0, and some set of operators \{L_i\}. But how does one efficiently check if a given superoperator is Lindbladian? In this post I give an “elementary” proof of a less well-known characterization of Lindbladians:

A superoperator \mathcal{L} generates completely positive dynamics e^{t\mathcal{L}}, and hence is Lindbladian, if and only if \mathcal{P} \mathcal{L}^{\mathrm{PT}} \mathcal{P} \ge 0, i.e.,

    \[\mathrm{Tr}\left[B^\dagger (\mathcal{P} \mathcal{L}^{\mathrm{PT}} \mathcal{P})[B]\right] \ge 0\]

for all B. Here “\mathrm{PT}” denotes a partial transpose, \mathcal{P} = \mathcal{I} - \mathcal{I}^{\mathrm{PT}}/N = \mathcal{P}^2 is the “superprojector” that removes an operator’s trace, \mathcal{I} is the identity superoperator, and N is the dimension of the space upon which the operators act.

Thus, we can efficiently check if an arbitrary superoperator \mathcal{L} is Lindbladian by diagonalizing \mathcal{P}\mathcal{L}^{\mathrm{PT}} \mathcal{P} and seeing if all the eigenvalues are non-negative.
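To make the recipe concrete, here is a minimal numerical sketch in Python (numpy only). Two caveats: it represents superoperators as N^2 \times N^2 matrices in the column-stacking convention, and it uses the Choi (reshuffle) matrix to stand in for the partial transpose \mathcal{L}^{\mathrm{PT}}, so the index conventions may differ from those used below; also, H is taken Hermitian for simplicity. Treat it as an illustration, not a definitive implementation.

```python
import numpy as np

N = 4                              # Hilbert-space dimension (arbitrary)
rng = np.random.default_rng(0)

def lindbladian_matrix(H, Ls):
    """Matrix of Eq. (1) acting on column-stacked operators,
    using vec(A B C) = (C^T (x) A) vec(B)."""
    n = H.shape[0]
    I = np.eye(n)
    M = -1j * (np.kron(I, H) - np.kron(H.T, I))           # -i[H, B]
    for L in Ls:
        LdL = L.conj().T @ L
        M += np.kron(L.conj(), L)                         # L B L^dag
        M -= 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I))  # -(1/2){L^dag L, B}
    return M

def choi(M, n):
    """Reshuffle a superoperator matrix into its Choi matrix, which here
    stands in for the partial transpose (index conventions may differ)."""
    T = M.reshape(n, n, n, n, order='F')
    return T.transpose(2, 0, 3, 1).reshape(n * n, n * n)

# A random Lindbladian: Hermitian H (for simplicity) and two jump operators.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2
Ls = [rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)) for _ in range(2)]
M = lindbladian_matrix(H, Ls)

# Superprojector P[B] = B - Tr[B] I/N, i.e. 1 - |vec(I)><vec(I)|/N on vec'd operators.
v = np.eye(N).reshape(-1, order='F')
P = np.eye(N * N) - np.outer(v, v) / N

print(np.linalg.eigvalsh(P @ choi(M, N) @ P).min())   # >= 0 (up to rounding): Lindbladian
print(np.linalg.eigvalsh(P @ choi(-M, N) @ P).min())  # < 0: -L generically is not
```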

A quick note on terminology

The terms superoperator, completely positive (CP), trace preserving (TP), and Lindbladian are defined below in Appendix A in case you aren’t already familiar with them.

Confusingly, the standard practice is to say a superoperator \mathcal{S} is “positive” when it is positivity preserving: B \ge 0 \Rightarrow \mathcal{S}[B]\ge 0. This condition is logically independent of the property of a superoperator being “positive” in the traditional sense of being a positive operator, i.e., \langle B,\mathcal{S}[B]\rangle \ge 0 for all operators (matrices) B, where

    \[\langle B,C\rangle  \equiv \mathrm{Tr}[B^\dagger C]  = \sum_{n=1}^N \sum_{n'=1}^N   B^\dagger_{nn'} C_{n'n}\]

is the Hilbert-Schmidt inner product on the space of N\times N matrices. We will refer frequently to this latter condition, so for clarity we call it op-positivity, and denote it with the traditional notation \mathcal{S}\ge 0.
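A concrete illustration of that independence (a minimal check in Python): the transpose map \mathcal{T}[B] = B^{\mathrm{T}} is positivity preserving, since transposition leaves eigenvalues unchanged, yet it is not op-positive.

```python
import numpy as np

# The transpose map T[B] = B^T preserves positivity (transposition leaves
# eigenvalues unchanged) but is not op-positive: for antisymmetric B we get
# <B, T[B]> = Tr[B^dag B^T] = -Tr[B^dag B] < 0.
B = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.trace(B.conj().T @ B.T))   # -2.0, so T is not op-positive
```

Conversely, the rank-one superoperator B \mapsto X\,\mathrm{Tr}[X^\dagger B] is op-positive for any fixed matrix X, since \langle B,\mathcal{S}[B]\rangle = |\mathrm{Tr}[X^\dagger B]|^2 \ge 0, but it generally fails to preserve positivity when X is not Hermitian.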

Intro

It is reasonably well known by physicists that Lindbladian superoperators, Eq. (1), generate CP time evolution of density matrices, i.e., e^{t\mathcal{L}}[\rho] = \sum_{b=0}^\infty t^b \mathcal{L}^b[\rho]/b! is completely positive when t\ge 0 and \mathcal{L} satisfies Eq.… [continue reading]

Quotes from Curtright et al.’s history of quantum mechanics in phase space

Curtright et al. have a monograph on the phase-space formulation of quantum mechanics. I recommend reading their historical introduction.

A Concise Treatise on Quantum Mechanics in Phase Space
Thomas L. Curtright, David B. Fairlie, and Cosmas K. Zachos
Wigner’s quasi-probability distribution function in phase-space is a special (Weyl–Wigner) representation of the density matrix. It has been useful in describing transport in quantum optics, nuclear physics, quantum computing, decoherence, and chaos. It is also of importance in signal processing, and the mathematics of algebraic deformation. A remarkable aspect of its internal logic, pioneered by Groenewold and Moyal, has only emerged in the last quarter-century: It furnishes a third, alternative, formulation of quantum mechanics, independent of the conventional Hilbert space or path integral formulations. In this logically complete and self-standing formulation, one need not choose sides between coordinate or momentum space. It works in full phase-space, accommodating the uncertainty principle; and it offers unique insights into the classical limit of quantum theory: The variables (observables) in this formulation are c-number functions in phase space instead of operators, with the same interpretation as their classical counterparts, but are composed together in novel algebraic ways.

Here are some quotes. First, the phase-space formulation should be placed on equal footing with the Hilbert-space and path-integral formulations:

When Feynman first unlocked the secrets of the path integral formalism and presented them to the world, he was publicly rebuked: “It was obvious”, Bohr said, “that such trajectories violated the uncertainty principle”.

However, in this case, Bohr was wrong. Today path integrals are universally recognized and widely used as an alternative framework to describe quantum behavior, equivalent to although conceptually distinct from the usual Hilbert space framework, and therefore completely in accord with Heisenberg’s uncertainty principle…

Similarly, many physicists hold the conviction that classical-valued position and momentum variables should not be simultaneously employed in any meaningful formula expressing quantum behavior, simply because this would also seem to violate the uncertainty principle… However, they too are wrong.

[continue reading]

Ground-state cooling by Delic et al. and the potential for dark matter detection

The implacable Aspelmeyer group in Vienna announced a gnarly achievement in November (recently published):

Cooling of a levitated nanoparticle to the motional quantum ground state
Uroš Delić, Manuel Reisenbauer, Kahan Dare, David Grass, Vladan Vuletić, Nikolai Kiesel, Markus Aspelmeyer
We report quantum ground state cooling of a levitated nanoparticle in a room temperature environment. Using coherent scattering into an optical cavity we cool the center of mass motion of a 143 nm diameter silica particle by more than 7 orders of magnitude to n_x = 0.43 \pm 0.03 phonons along the cavity axis, corresponding to a temperature of 12 μK. We infer a heating rate of \Gamma_x/2\pi = 21\pm 3 kHz, which results in a coherence time of 7.6 μs – or 15 coherent oscillations – while the particle is optically trapped at a pressure of 10^{-6} mbar. The inferred optomechanical coupling rate of g_x/2\pi = 71 kHz places the system well into the regime of strong cooperativity (C \approx 5). We expect that a combination of ultra-high vacuum with free-fall dynamics will allow to further expand the spatio-temporal coherence of such nanoparticles by several orders of magnitude, thereby opening up new opportunities for macroscopic quantum experiments.
[EDIT: The same group has more recently achieved ground-state cooling with real-time control feedback.]
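As a quick sanity check on the quoted numbers, the effective temperature follows from inverting the Bose–Einstein occupation. The mechanical frequency is not given in the abstract, so the value below (\Omega_x/2\pi \approx 305 kHz) is my assumption; with it, n_x = 0.43 indeed corresponds to roughly 12 μK.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

n_bar = 0.43                  # reported phonon occupation along the cavity axis
omega = 2 * np.pi * 305e3     # mechanical frequency (rad/s); ASSUMED, not in the abstract

# Invert the thermal occupation n = 1 / (exp(hbar*omega / (kB*T)) - 1) for T
T = hbar * omega / (kB * np.log(1 + 1 / n_bar))
print(T * 1e6)   # ~12 microkelvin, consistent with the abstract
```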

Ground-state cooling of nanoparticles in laser traps is a very important milestone on the way to producing large spatial superpositions of matter, and I have a long-standing obsession with the possibility of using such superpositions to probe for the existence of new particles and forces like dark matter. In this post, I put this milestone in a bit of context and then toss up a speculative plot for the estimated dark-matter sensitivity of a follow-up to Delić et al.’s device.

One way to organize the quantum states of a single continuous degree of freedom, like the center-of-mass position of a nanoparticle, is by their sensitivity to displacements in phase space.… [continue reading]

The interpretation of free energy as bit-erasure capacity

Our paper discussed in the previous blog post might prompt this question: Is there still a way to use Landauer’s principle to convert the free energy of a system to its bit erasure capacity? The answer is “yes”, which we can demonstrate with a simple argument.


Summary: The correct measure of bit-erasure capacity N for an isolated system is the negentropy, the difference between the system’s current entropy and the entropy it would have if allowed to thermalize with its current internal energy. The correct measure of erasure capacity for a constant-volume system with free access to a bath at constant temperature T is the Helmholtz free energy A (divided by kT, per Landauer’s principle), provided that the additive constant of the free energy is set such that the free energy vanishes when the system thermalizes to temperature T. That is,

    \[N = \frac{A}{kT} = \frac{U-U_0}{kT} - (S - S_0),\]

where U_0 and S_0 are the internal energy and entropy of the system if it were at temperature T. The system’s negentropy lower bounds this capacity, and this bound is saturated when U = U_0.
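As a toy check of these expressions (a sketch with k = 1 and entropy measured in nats, so no factor of \ln 2 appears), take a single two-level system with level splitting \epsilon whose current state is the pure ground state:

```python
import numpy as np

kT, eps = 1.0, 1.0     # bath temperature (k = 1) and level splitting (toy values)

# Thermal (Gibbs) state of the two-level system at the bath temperature
p = np.exp(-eps / kT) / (1 + np.exp(-eps / kT))    # excited-state population
U0 = eps * p                                       # thermal internal energy
S0 = -(p * np.log(p) + (1 - p) * np.log(1 - p))    # thermal entropy (nats)

# Current state: the pure ground state, so U = 0 and S = 0.
U, S = 0.0, 0.0
capacity = (U - U0) / kT - (S - S0)    # N = A/kT with the stated zero point
print(capacity)    # ~0.31 nats > 0
```

Since thermalizing at fixed energy U = 0 returns the ground state itself, the negentropy of this state is zero, and the capacity strictly exceeds that lower bound because U \neq U_0.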


Traditionally, the Helmholtz free energy of a system is defined as \tilde{A} = U - kTS, where U and S are the internal energy and entropy of the system and T is the constant temperature of an external infinite bath with which the system can exchange energy. (Here, there is a factor of Boltzmann’s constant k in front of TS because I am measuring the (absolute) entropy S in dimensionless bits rather than in units of energy per temperature. That way we can write things like N = S_0 - S.) (I will suppress the “Helmholtz” modifier henceforth; when the system’s pressure rather than volume is constant, my conclusion below holds for the Gibbs free energy if the obvious modifications are made.)… [continue reading]

On computational aestivation

People often say to me “Jess, all this work you do on the foundations of quantum mechanics is fine as far as it goes, but it’s so conventional and safe. When are you finally going to do something unusual and take some career risks?” I’m now pleased to say I have a topic to bring up in such situations: the thermodynamic incentives of powerful civilizations in the far future who seek to perform massive computations. Anders Sandberg, Stuart Armstrong, and Milan M. Ćirković previously argued for a surprising connection between Landauer’s principle and the Fermi paradox, which Charles Bennett, Robin Hanson, and I have now critiqued. Our comment appeared today in the new issue of Foundations of Physics:

Comment on 'The aestivation hypothesis for resolving Fermi's paradox'
Charles H. Bennett, Robin Hanson, C. Jess Riedel
In their article [arXiv:1705.03394], 'That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox', Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer's principle implies that a civilization can in principle perform far more (~10^{30} times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting. Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error.
[continue reading]

FAQ about experimental quantum Darwinism

I am briefly stirring from my blog-hibernation (this blog will resume at full force sometime in the future, but not just yet) to present a collection of frequently asked questions about experiments seeking to investigate quantum Darwinism (QD). Most of the questions were asked by (or evolved from questions asked by) Philip Ball while we corresponded regarding his recent article “Quantum Darwinism, an Idea to Explain Objective Reality, Passes First Tests” for Quanta magazine, which I recommend you check out.


Who is trying to see quantum Darwinism in experiments?

I am aware of two papers out of a group from Arizona State in 2010 (here and here) and three papers from separate groups last year (arXiv: 1803.01913, 1808.07388, 1809.10456). I haven’t looked at them all carefully so I can’t vouch for them, but I think the more recent papers would be the closest thing to a “test” of QD.

What are the experiments doing to put QD to the test?

These teams construct a kind of “synthetic environment” from just a few qubits, and then interrogate them to discover the information that they contain about the quantum system to which they are coupled.

What do you think of experimental tests of QD in general?

Considered as a strictly mathematical phenomenon, QD is the dynamical creation of certain kinds of correlations between certain systems and their environments under certain conditions. These experiments directly confirm that, if such conditions are created, the expected correlations are obtained.

The experiments are, unfortunately, not likely to offer many insights or opportunities for surprise; the result can be predicted with very high confidence long in advance.… [continue reading]