How to think about Quantum Mechanics—Part 8: The quantum-classical limit as music

[Other parts in this series: 1,2,3,4,5,6,7,8.]

On microscopic scales, sound is air pressure f(t) fluctuating in time t. Taking the Fourier transform of f(t) gives the frequency distribution \hat{f}(\omega), but only in an eternal sense: it applies to the entire time interval t\in (-\infty,\infty).

Yet on macroscopic scales, sound is described as having a frequency distribution as a function of time, i.e., a note has both a pitch and a duration. There are many formalisms for describing this (e.g., wavelets), but a well-known limitation is that the frequency \omega of a note is only well-defined up to an uncertainty that is inversely proportional to its duration \Delta t.
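This tradeoff is easy to see numerically. The following sketch (my own illustration, with arbitrary parameters) builds a Gaussian-windowed tone of duration scale \sigma, takes its Fourier transform, and checks that the product of the RMS duration and RMS spectral width sits at the minimum value of 1/2 that a Gaussian envelope saturates:

```python
import numpy as np

# Gaussian-windowed complex tone: a "note" of finite duration.
sigma, omega0 = 1.0, 20.0  # duration scale and pitch (rad/s); illustrative values
t = np.linspace(-20, 20, 4096, endpoint=False)
dt = t[1] - t[0]
f = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * omega0 * t)

def rms_width(x, weight):
    """Root-mean-square width of the distribution `weight` over coordinate x."""
    p = weight / weight.sum()
    mean = (x * p).sum()
    return np.sqrt(((x - mean)**2 * p).sum())

# Duration from |f|^2, spectral width from |f_hat|^2.
F = np.fft.fft(f)
omega = 2 * np.pi * np.fft.fftfreq(len(t), d=dt)
dt_width = rms_width(t, np.abs(f)**2)
dw_width = rms_width(omega, np.abs(F)**2)

print(dt_width * dw_width)  # ~0.5: a Gaussian saturates the bound Δt·Δω ≥ 1/2
```

Shrinking `sigma` (a shorter note) fattens the spectral lobe in exact proportion, which is the pitch-duration uncertainty described above.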

At the mathematical level, a given wavefunction \psi(x) is almost exactly analogous: macroscopically a particle seems to have a well-defined position and momentum, but microscopically there is only the wavefunction \psi. The mapping of the analogy is \{t,\omega,f\} \to \{x,p,\psi\}. (I am of course not the first to emphasize this analogy. For instance, while writing this post I found “Uncertainty principles in Fourier analysis” by de Bruijn, via Folland’s book, who calls the Wigner function of an audio signal f(t) the “musical score” of f.) Wavefunctions can of course be complex, but we can restrict ourselves to a real-valued wavefunction without any trouble; we are not worrying about the dynamics of wavefunctions, so you can pretend the Hamiltonian vanishes if you like.

In order to get the acoustic analog of Planck’s constant \hbar, it helps to imagine going back to a time when the pitch of a note was measured with a unit that did not have a known connection to absolute frequency, i.e.,… [continue reading]

A checkable Lindbladian condition

Summary

Physicists often define a Lindbladian superoperator as one whose action on an operator B can be written as

(1)   \begin{align*} \mathcal{L}[B] = -i [H,B] + \sum_i \left[ L_i B L_i^\dagger - \frac{1}{2}\left(L_i^\dagger L_i B + B L_i^\dagger L_i\right)\right], \end{align*}

for some operator H whose anti-Hermitian part is positive semidefinite, i(H-H^\dagger) \ge 0, and some set of operators \{L_i\}. But how does one efficiently check if a given superoperator is Lindbladian? In this post I give an “elementary” proof of a less well-known characterization of Lindbladians:

A superoperator \mathcal{L} generates completely positive dynamics e^{t\mathcal{L}}, and hence is Lindbladian, if and only if \mathcal{P} \mathcal{L}^{\mathrm{PT}} \mathcal{P} \ge 0, i.e.,

    \[\mathrm{Tr}\left[B^\dagger (\mathcal{P} \mathcal{L}^{\mathrm{PT}} \mathcal{P})[B]\right] \ge 0\]

for all B. Here “\mathrm{PT}” denotes a partial transpose, \mathcal{P} = \mathcal{I} - \mathcal{I}^{\mathrm{PT}}/N = \mathcal{P}^2 is the “superprojector” that removes an operator’s trace, \mathcal{I} is the identity superoperator, and N is the dimension of the space upon which the operators act.

Thus, we can efficiently check whether an arbitrary superoperator \mathcal{L} is Lindbladian by diagonalizing \mathcal{P}\mathcal{L}^{\mathrm{PT}} \mathcal{P} and seeing if all the eigenvalues are non-negative.
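The check is a few lines of linear algebra. Here is a minimal NumPy sketch (my own illustration, not code from the post), using row-stacking vectorization vec(AXB) = (A \otimes B^T) vec(X), so the partial transpose \mathcal{L}^{\mathrm{PT}} is the usual Choi-matrix reshuffle of the superoperator matrix:

```python
import numpy as np

def lindblad_superop(H, Ls):
    """Matrix of Eq. (1) acting on row-stacked operators:
    vec(A X B) = kron(A, B.T) vec(X)."""
    N = H.shape[0]
    I = np.eye(N)
    S = -1j * (np.kron(H, I) - np.kron(I, H.T))  # -i[H, .]
    for L in Ls:
        M = L.conj().T @ L
        # L . L^dag - (1/2){L^dag L, .}
        S += np.kron(L, L.conj()) - 0.5 * (np.kron(M, I) + np.kron(I, M.T))
    return S

def is_lindbladian(S, tol=1e-9):
    """Check P S^PT P >= 0, where S^PT is the partial transpose (Choi
    reshuffle) of the superoperator and P removes the trace component."""
    N = int(np.sqrt(S.shape[0]))
    # Partial transpose: J[(i,a),(j,b)] = S[(a,b),(i,j)]
    J = S.reshape(N, N, N, N).transpose(2, 0, 3, 1).reshape(N**2, N**2)
    v = np.eye(N).reshape(-1)               # vec(identity)
    P = np.eye(N**2) - np.outer(v, v) / N   # superprojector removing the trace
    return np.linalg.eigvalsh(P @ J @ P).min() >= -tol

# Qubit example: H = sigma_z, jump operator sigma_minus (amplitude damping).
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)

S_good = lindblad_superop(sz, [sm])
# Same superoperator with the dissipator's sign flipped: not Lindbladian.
S_bad = S_good - 2 * lindblad_superop(np.zeros((2, 2), dtype=complex), [sm])

print(is_lindbladian(S_good))  # True
print(is_lindbladian(S_bad))   # False
```

The vectorization convention is a choice of mine; any consistent convention gives the same eigenvalues of \mathcal{P}\mathcal{L}^{\mathrm{PT}}\mathcal{P}.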

A quick note on terminology

The terms superoperator, completely positive (CP), trace preserving (TP), and Lindbladian are defined below in Appendix A in case you aren’t already familiar with them.

Confusingly, the standard practice is to say a superoperator \mathcal{S} is “positive” when it is positivity preserving: B \ge 0 \Rightarrow \mathcal{S}[B]\ge 0. This condition is logically independent from the property of a superoperator being “positive” in the traditional sense of being a positive operator, i.e., \langle B,\mathcal{S}[B]\rangle  \ge 0 for all operators (matrices) B, where

    \[\langle B,C\rangle  \equiv \mathrm{Tr}[B^\dagger C]  = \sum_{n=1}^N \sum_{n'=1}^N   B^\dagger_{nn'} C_{n'n}\]

is the Hilbert-Schmidt inner product on the space of N\times N matrices. We will refer frequently to this latter condition, so for clarity we call it op-positivity, and denote it with the traditional notation \mathcal{S}\ge 0.
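The logical independence of the two notions is worth a concrete example (mine, not from the post): the transpose map is positivity preserving, yet it fails op-positivity because it has eigenvalue -1 on antisymmetric matrices.

```python
import numpy as np

# The transpose map T[B] = B^T on 2x2 matrices, as a matrix acting on
# row-stacked vec(B): vec(B^T)[(a,b)] = vec(B)[(b,a)].
N = 2
T = np.zeros((N**2, N**2))
for a in range(N):
    for b in range(N):
        T[a * N + b, b * N + a] = 1.0

# Positivity preserving: the transpose of any PSD matrix is PSD.
rng = np.random.default_rng(0)
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
B = X @ X.conj().T  # a random PSD matrix
assert np.linalg.eigvalsh(B.T).min() >= -1e-12

# ...but not op-positive: as an operator on Hilbert-Schmidt space,
# T has a negative eigenvalue (it acts as -1 on antisymmetric matrices).
print(np.linalg.eigvalsh(T).min())  # ≈ -1
```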

Intro

It is reasonably well known by physicists that Lindbladian superoperators, Eq. (1), generate CP time evolution of density matrices, i.e., e^{t\mathcal{L}}[\rho] = \sum_{b=0}^\infty t^b \mathcal{L}^b[\rho]/b! is completely positive when t\ge 0 and \mathcal{L} satisfies Eq.… [continue reading]

Review of “Lifecycle Investing”

Summary

In this post I review the 2010 book “Lifecycle Investing” by Ian Ayres and Barry Nalebuff. (Amazon link here; no commission received.) They argue that a large subset of investors should adopt a (currently) unconventional strategy: One’s future retirement contributions should effectively be treated as bonds in one’s retirement portfolio that cannot be efficiently sold; therefore, early in life one should balance these low-volatility assets by gaining exposure to volatile high-return equities that will generically exceed 100% of one’s liquid retirement assets, necessitating some form of borrowing.
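The accounting behind that claim fits in a few lines. The numbers below are hypothetical (mine, not from the book), but they show how treating future contributions as bonds pushes the equity target past 100% of liquid assets:

```python
# Toy illustration of the book's core accounting (hypothetical numbers):
# treat the present value of future retirement contributions as an
# illiquid bond-like asset in the total portfolio.
liquid_savings = 50_000            # current retirement account
pv_future_contributions = 450_000  # discounted future savings, treated as bonds
target_stock_fraction = 0.5        # desired stock share of *total* wealth

total_wealth = liquid_savings + pv_future_contributions
target_stock = target_stock_fraction * total_wealth

leverage = target_stock / liquid_savings
print(target_stock)  # 250000.0
print(leverage)      # 5.0 -> far more equity than liquid assets can cover unleveraged
```

Since the implied leverage here (5x) is unattainable in practice, the authors' actual recommendation caps borrowing at modest levels (e.g. via margin or options); the point is the direction of the adjustment, not the extreme ratio.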

“Lifecycle Investing” was recommended to me by a friend who said the book “is extremely worth reading…like learning about index funds for the first time…Like worth paying 1% of your lifetime income to read if that was needed to get access to the ideas…potentially a lot more”. Ayres and Nalebuff lived up to this recommendation. Eventually, I expect the basic ideas, which are simple, to become so widespread and obvious that it will be hard to remember that it required an insight.

In part, what makes the main argument so compelling is that, as shown in the next section, it is closely related to an elegant explanation for something we all knew to be true — you should increase the bond-stock ratio of your portfolio as you get older — yet previously had bad justifications for. It also gives new actionable, non-obvious, and potentially very important advice (buy equities on margin when young) that is appropriately tempered by real-world frictions. And, most importantly, it means I personally feel less bad about already being nearly 100% in stocks when I picked up the book.

My main concerns, which are shared by other reviewers and which are only partially addressed by the authors, are:

  • Future income streams might be more like stocks than bonds for the large majority of people.
[continue reading]

COVID Watch and privacy

[Tina White is a friend of mine and co-founder of COVID Watch, a promising app for improving contact tracing for the coronavirus while preserving privacy. I commissioned Tom Higgins to write this post in order to bring attention to this important project and put it in context of related efforts. -Jess Riedel]

Countries around the world have been developing mobile phone apps to alert people to potential exposure to COVID-19. There are two main mechanisms in use:

  1. Monitoring a user’s location, comparing it to an external (typically, government) source of information about infections, and notifying the user if they are entering, or previously entered, a high-risk area.
  2. Detecting when two users come in close proximity to each other and then, if one user later reports having been infected, notifying the second user and/or the government.
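The privacy-preserving version of the second mechanism can be sketched in a few lines. This is a hypothetical, highly simplified protocol of my own for illustration — it omits the cryptographic details of real designs like COVID Watch's — but it captures the key idea: phones exchange random tokens, and exposure matching happens on-device.

```python
import secrets

def new_token() -> str:
    """Random ephemeral token broadcast over bluetooth; carries no identity
    or location information."""
    return secrets.token_hex(16)

class Phone:
    def __init__(self):
        self.sent = []      # tokens this phone broadcast
        self.heard = set()  # tokens received from nearby phones

    def broadcast(self):
        t = new_token()
        self.sent.append(t)
        return t

    def receive(self, token):
        self.heard.add(token)

    def check_exposure(self, published_tokens):
        # Matching happens locally: the server never learns who met whom.
        return bool(self.heard & set(published_tokens))

alice, bob, carol = Phone(), Phone(), Phone()
bob.receive(alice.broadcast())  # Alice and Bob were in proximity
# Carol never met Alice.

# Alice tests positive and uploads only her own sent tokens.
published = alice.sent
print(bob.check_exposure(published))    # True
print(carol.check_exposure(published))  # False
```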

The first mechanism generally uses the phone’s location data, which is largely inferred from GPS. (In urban areas, GPS is rather inaccurate, and is importantly augmented with location information inferred from WiFi signal-strength maps.) The second method can also be accomplished with GPS, by simply measuring the distance between users, but it can instead be accomplished with phone-to-phone bluetooth connections. (A precursor to smartphone-based contact tracing can be found in the FluPhone app, which was developed in the University of Cambridge Computer Laboratory in 2011; see BBC coverage. Contact tracing was provided over bluetooth, and cases of the flu were voluntarily reported by users so that those with whom they had come into contact would be alerted. Despite media coverage, less than one percent of Cambridge residents downloaded the app, whether due to a lack of concern over the flu or concerns over privacy.) [continue reading]

How shocking are rare past events?

This post describes variations on a thought experiment involving the anthropic principle. The variations were developed through discussion with Andreas Albrecht, Charles Bennett, Leonid Levin, and Andrew Arrasmith at a conference at the Niels Bohr Institute in Copenhagen in October of 2019. I have not yet finished reading Bostrom’s “Anthropic Bias”, so I don’t know where it fits into his framework. I expect it is subsumed by his existing discussion, and I would appreciate pointers.

The point is to consider a few thought experiments that share many of the same important features, but for which we have very different intuitions, and to identify whether there are any substantive differences that can be used to justify these intuitions.

I will use the term “shocked” (in the sense of “I was shocked to see Bob levitate off the ground”) to refer to the situation where we have made observations that are extremely unlikely to be generated by our implicit background model of the world, such that good reasoners would likely reject the model and start entertaining previously disfavored alternative models like “we’re all brains in a vat”, the Matrix, etc. In particular, to be shocked is not supposed to be merely a description of human psychology, but rather is a normative claim about how good scientific reasoners should behave.
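That normative claim can be phrased as ordinary Bayesian bookkeeping. The numbers below are hypothetical (my gloss on the definition, not from the post), but they show how an observation that is sufficiently unlikely under the background model flips the posterior odds toward even a strongly disfavored alternative:

```python
# Bayesian gloss on being "shocked" (illustrative, hypothetical numbers).
prior_background = 1 - 1e-6  # background model of the world
prior_weird = 1e-6           # e.g. brains-in-a-vat-type alternatives

p_obs_given_background = 1e-12  # observation extremely unlikely under background
p_obs_given_weird = 1e-3        # the alternative accommodates it easily

posterior_odds_weird = (prior_weird * p_obs_given_weird) / (
    prior_background * p_obs_given_background)

# Odds of ~1000:1 in favor of the previously disfavored model:
print(posterior_odds_weird > 1)  # True
```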

Here are the three scenarios:

Scenario 1: Through advances in geology, paleontology, theoretical biology, and quantum computer simulation of chemistry, we get very strong theoretical evidence that intelligent life appears with high likelihood following abiogenesis events, but that abiogenesis itself is very rare: there is one expected abiogenesis event per 10^{22} stars per Hubble time.
[continue reading]

Quotes from Curtright et al.’s history of quantum mechanics in phase space

Curtright et al. have a monograph on the phase-space formulation of quantum mechanics. I recommend reading their historical introduction.

A Concise Treatise on Quantum Mechanics in Phase Space
Thomas L. Curtright, David B. Fairlie, and Cosmas K. Zachos
Wigner’s quasi-probability distribution function in phase-space is a special (Weyl–Wigner) representation of the density matrix. It has been useful in describing transport in quantum optics, nuclear physics, quantum computing, decoherence, and chaos. It is also of importance in signal processing, and the mathematics of algebraic deformation. A remarkable aspect of its internal logic, pioneered by Groenewold and Moyal, has only emerged in the last quarter-century: It furnishes a third, alternative, formulation of quantum mechanics, independent of the conventional Hilbert space or path integral formulations. In this logically complete and self-standing formulation, one need not choose sides between coordinate or momentum space. It works in full phase-space, accommodating the uncertainty principle; and it offers unique insights into the classical limit of quantum theory: The variables (observables) in this formulation are c-number functions in phase space instead of operators, with the same interpretation as their classical counterparts, but are composed together in novel algebraic ways.

Here are some quotes. First, the phase-space formulation should be placed on equal footing with the Hilbert-space and path-integral formulations:

When Feynman first unlocked the secrets of the path integral formalism and presented them to the world, he was publicly rebuked: “It was obvious”, Bohr said, “that such trajectories violated the uncertainty principle”.

However, in this case, Bohr was wrong. Today path integrals are universally recognized and widely used as an alternative framework to describe quantum behavior, equivalent to although conceptually distinct from the usual Hilbert space framework, and therefore completely in accord with Heisenberg’s uncertainty principle…

Similarly, many physicists hold the conviction that classical-valued position and momentum variables should not be simultaneously employed in any meaningful formula expressing quantum behavior, simply because this would also seem to violate the uncertainty principle…However, they too are wrong.

[continue reading]

Ground-state cooling by Delic et al. and the potential for dark matter detection

The implacable Aspelmeyer group in Vienna announced a gnarly achievement in November (recently published):

Cooling of a levitated nanoparticle to the motional quantum ground state
Uroš Delić, Manuel Reisenbauer, Kahan Dare, David Grass, Vladan Vuletić, Nikolai Kiesel, Markus Aspelmeyer
We report quantum ground state cooling of a levitated nanoparticle in a room temperature environment. Using coherent scattering into an optical cavity we cool the center of mass motion of a 143 nm diameter silica particle by more than 7 orders of magnitude to n_x = 0.43 \pm 0.03 phonons along the cavity axis, corresponding to a temperature of 12 μK. We infer a heating rate of \Gamma_x/2\pi = 21\pm 3 kHz, which results in a coherence time of 7.6 μs – or 15 coherent oscillations – while the particle is optically trapped at a pressure of 10^{-6} mbar. The inferred optomechanical coupling rate of g_x/2\pi = 71 kHz places the system well into the regime of strong cooperativity (C \approx 5). We expect that a combination of ultra-high vacuum with free-fall dynamics will allow to further expand the spatio-temporal coherence of such nanoparticles by several orders of magnitude, thereby opening up new opportunities for macroscopic quantum experiments.

Ground-state cooling of nanoparticles in laser traps is a very important milestone on the way to producing large spatial superpositions of matter, and I have a long-standing obsession with the possibility of using such superpositions to probe for the existence of new particles and forces like dark matter. In this post, I put this milestone in a bit of context and then toss up a speculative plot for the estimated dark-matter sensitivity of a follow-up to Delić et al.’s device.
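The quoted numbers hang together under standard formulas; here is my own back-of-envelope cross-check (the trap frequency is inferred, not quoted in the abstract):

```python
import numpy as np

# Consistency checks on the quoted numbers, using standard constants.
hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J/K

# Coherence time from the heating rate Gamma_x/2pi = 21 kHz:
Gamma_x = 2 * np.pi * 21e3
tau = 1 / Gamma_x
print(tau)  # ~7.6e-6 s, matching the quoted 7.6 us

# Trap frequency implied by n_x = 0.43 phonons at T = 12 uK, inverting the
# Bose-Einstein occupation n = 1/(exp(hbar*omega/(kB*T)) - 1):
n, T = 0.43, 12e-6
omega = kB * T / hbar * np.log(1 + 1 / n)
print(omega / (2 * np.pi))  # ~3e5 Hz, i.e. a trap frequency of a few hundred kHz
```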

One way to organize the quantum states of a single continuous degree of freedom, like the center-of-mass position of a nanoparticle, is by their sensitivity to displacements in phase space.… [continue reading]

The interpretation of free energy as bit-erasure capacity

Our paper discussed in the previous blog post might prompt this question: Is there still a way to use Landauer’s principle to convert the free energy of a system to its bit erasure capacity? The answer is “yes”, which we can demonstrate with a simple argument.


Summary: The correct measure of bit-erasure capacity N for an isolated system is the negentropy, the difference between the system’s current entropy and the entropy it would have if allowed to thermalize with its current internal energy. The correct measure of erasure capacity for a constant-volume system with free access to a bath at constant temperature T is the Helmholtz free energy A (divided by kT, per Landauer’s principle), provided that the additive constant of the free energy is set such that the free energy vanishes when the system thermalizes to temperature T. That is,

    \[N = \frac{A}{kT} = \frac{U-U_0}{kT} - (S - S_0),\]

where U_0 and S_0 are the internal energy and entropy of the system if it were at temperature T. The system’s negentropy lower bounds this capacity, and this bound is saturated when U = U_0.
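A minimal numerical example (my illustration) of the saturated case: for systems with degenerate energy levels, U = U_0 automatically, so the capacity reduces to the negentropy S_0 - S in bits.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy of a probability distribution, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# M two-level systems with degenerate energy levels, so U = U_0.
M = 100
# Each system is in a known biased state (0 with probability 0.9):
S = M * entropy_bits([0.9, 0.1])
# After thermalizing, degenerate levels give the uniform distribution:
S0 = M * entropy_bits([0.5, 0.5])

N = S0 - S  # erasure capacity in bits, since the energy term vanishes
print(N)    # ~53 bits for the 100 systems
```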


Traditionally, the Helmholtz free energy of a system is defined as \tilde{A} = U - kTS, where U and S are the internal energy and entropy of the system and T is the constant temperature of an external infinite bath with which the system can exchange energy. (Here, there is a factor of Boltzmann’s constant k in front of TS because I am measuring the (absolute) entropy S in dimensionless bits rather than in units of energy per temperature. That way we can write things like N = S_0 - S.) I will suppress the “Helmholtz” modifier henceforth; when the system’s pressure rather than volume is constant, my conclusion below holds for the Gibbs free energy if the obvious modifications are made.… [continue reading]