A checkable Lindbladian condition

Summary

Physicists often define a Lindbladian superoperator as one whose action on an operator B can be written as

(1)   \begin{align*} \mathcal{L}[B] = -i [H,B] + \sum_i \left[ L_i B L_i^\dagger - \frac{1}{2}\left(L_i^\dagger L_i B + B L_i^\dagger L_i\right)\right], \end{align*}

for some Hermitian operator H = H^\dagger and some set of operators \{L_i\}. But how does one efficiently check if a given superoperator is Lindbladian? In this post I give an “elementary” proof of a less well-known characterization of Lindbladians:

A superoperator \mathcal{L} generates completely positive dynamics e^{t\mathcal{L}}, and hence is Lindbladian, if and only if \mathcal{P} \mathcal{L}^{\mathrm{PT}} \mathcal{P} \ge 0, i.e.,

    \[\mathrm{Tr}\left[B^\dagger (\mathcal{P} \mathcal{L}^{\mathrm{PT}} \mathcal{P})[B]\right] \ge 0\]

for all B. Here “\mathrm{PT}” denotes a partial transpose, \mathcal{P} = \mathcal{I} - \mathcal{I}^{\mathrm{PT}}/N = \mathcal{P}^2 is the “superprojector” that removes an operator’s trace, \mathcal{I} is the identity superoperator, and N is the dimension of the space upon which the operators act.

Thus, we can efficiently check whether an arbitrary superoperator \mathcal{L} is Lindbladian by diagonalizing \mathcal{P}\mathcal{L}^{\mathrm{PT}} \mathcal{P} and checking that all the eigenvalues are nonnegative.
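To make this concrete, here is a minimal numpy sketch (conventions and function names are my own, not from any reference implementation). It builds the matrix of Eq. (1) acting on row-major vectorized operators, applies the partial transpose as an index reshuffle, sandwiches with \mathcal{P}, and checks the spectrum:

```python
import numpy as np

def lindblad_superop(H, Ls):
    """Matrix of Eq. (1) acting on row-major vec(B),
    using vec(A B C) = (A kron C^T) vec(B)."""
    N = H.shape[0]
    I = np.eye(N)
    M = -1j * (np.kron(H, I) - np.kron(I, H.T))   # -i[H, .]
    for L in Ls:
        LdL = L.conj().T @ L
        M += np.kron(L, L.conj())                 # L . L^dagger
        M -= 0.5 * (np.kron(LdL, I) + np.kron(I, LdL.T))  # -(1/2){L^dagger L, .}
    return M

def partial_transpose(M, N):
    """Index reshuffle M[(k,l),(i,j)] -> M[(k,i),(l,j)] of an N^2 x N^2 matrix."""
    return M.reshape(N, N, N, N).transpose(0, 2, 1, 3).reshape(N * N, N * N)

def is_lindbladian(M, N, tol=1e-9):
    """Check P M^PT P >= 0, with P projecting out the vec(identity) direction."""
    omega = np.eye(N).reshape(N * N) / np.sqrt(N)  # normalized vec(I)
    P = np.eye(N * N) - np.outer(omega, omega)
    PCP = P @ partial_transpose(M, N) @ P
    return np.linalg.eigvalsh(PCP).min() > -tol

rng = np.random.default_rng(0)
N = 3
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = A + A.conj().T                      # Hermitian
L1 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

M_good = lindblad_superop(H, [L1])      # a genuine Lindbladian
M_ham = lindblad_superop(H, [])         # pure Hamiltonian part
M_bad = 2 * M_ham - M_good              # dissipator with flipped sign
print(is_lindbladian(M_good, N), is_lindbladian(M_ham, N), is_lindbladian(M_bad, N))
# True True False
```

Note that flipping the sign of the dissipative part while keeping the Hamiltonian part yields a superoperator that fails the test, as expected: the Hamiltonian part is annihilated by the projection, so the condition is sensitive only to the dissipator. (This checks only the stated complete-positivity condition; trace preservation is a separate linear constraint.)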

A quick note on terminology

The terms superoperator, completely positive (CP), trace preserving (TP), and Lindbladian are defined below in Appendix A in case you aren’t already familiar with them.

Confusingly, the standard practice is to say a superoperator \mathcal{S} is “positive” when it is positivity preserving: B \ge 0 \Rightarrow \mathcal{S}[B]\ge 0. This condition is logically independent of the property of a superoperator being “positive” in the traditional sense of being a positive operator, i.e., \langle B,\mathcal{S}[B]\rangle  \ge 0 for all operators (matrices) B, where

    \[\langle B,C\rangle  \equiv \mathrm{Tr}[B^\dagger C]  = \sum_{n=1}^N \sum_{n'=1}^N   B^\dagger_{nn'} C_{n'n}\]

is the Hilbert-Schmidt inner product on the space of N\times N matrices. We will refer frequently to this latter condition, so for clarity we call it op-positivity, and denote it with the traditional notation \mathcal{S}\ge 0.
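The logical independence is easy to exhibit with 2×2 examples (a sketch with my own helper names): the transpose map preserves positivity (it leaves eigenvalues unchanged) but is not op-positive, while a rank-one map built from a non-Hermitian A is op-positive but not positivity preserving.

```python
import numpy as np

def hs_inner(B, C):
    """Hilbert-Schmidt inner product <B, C> = Tr[B^dagger C]."""
    return np.trace(B.conj().T @ C)

# The transpose map is positivity preserving but not op-positive:
B = np.array([[0, 1], [-1, 0]], dtype=complex)
print(hs_inner(B, B.T).real)           # -2.0 < 0, so transpose is not op-positive

# Conversely, S[B] = <A, B> A satisfies <B, S[B]> = |<A, B>|^2 >= 0
# (op-positive), but it is not positivity preserving:
A = np.array([[0, 1], [0, 0]], dtype=complex)
S = lambda B: hs_inner(A, B) * A
rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)  # rho = |+><+| >= 0
out = S(rho)                           # = 0.5 * A, not even Hermitian
print(np.allclose(out, out.conj().T))  # False, so S[rho] is not >= 0
```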

Intro

It is reasonably well known by physicists that Lindbladian superoperators, Eq. (1), generate CP time evolution of density matrices, i.e., e^{t\mathcal{L}}[\rho] = \sum_{b=0}^\infty t^b \mathcal{L}^b[\rho]/b! is completely positive when t\ge 0 and \mathcal{L} satisfies Eq.… [continue reading]

Quotes from Curtright et al.’s history of quantum mechanics in phase space

Curtright et al. have a monograph on the phase-space formulation of quantum mechanics. I recommend reading their historical introduction.

A Concise Treatise on Quantum Mechanics in Phase Space
Thomas L. Curtright, David B. Fairlie, and Cosmas K. Zachos
Wigner’s quasi-probability distribution function in phase-space is a special (Weyl–Wigner) representation of the density matrix. It has been useful in describing transport in quantum optics, nuclear physics, quantum computing, decoherence, and chaos. It is also of importance in signal processing, and the mathematics of algebraic deformation. A remarkable aspect of its internal logic, pioneered by Groenewold and Moyal, has only emerged in the last quarter-century: It furnishes a third, alternative, formulation of quantum mechanics, independent of the conventional Hilbert space or path integral formulations. In this logically complete and self-standing formulation, one need not choose sides between coordinate or momentum space. It works in full phase-space, accommodating the uncertainty principle; and it offers unique insights into the classical limit of quantum theory: The variables (observables) in this formulation are c-number functions in phase space instead of operators, with the same interpretation as their classical counterparts, but are composed together in novel algebraic ways.

Here are some quotes. First, the phase-space formulation should be placed on equal footing with the Hilbert-space and path-integral formulations:

When Feynman first unlocked the secrets of the path integral formalism and presented them to the world, he was publicly rebuked: “It was obvious”, Bohr said, “that such trajectories violated the uncertainty principle”.

However, in this case, Bohr was wrong. Today path integrals are universally recognized and widely used as an alternative framework to describe quantum behavior, equivalent to although conceptually distinct from the usual Hilbert space framework, and therefore completely in accord with Heisenberg’s uncertainty principle…

Similarly, many physicists hold the conviction that classical-valued position and momentum variables should not be simultaneously employed in any meaningful formula expressing quantum behavior, simply because this would also seem to violate the uncertainty principle… However, they too are wrong.

[continue reading]

Ground-state cooling by Delic et al. and the potential for dark matter detection

The implacable Aspelmeyer group in Vienna announced a gnarly achievement in November (recently published):

Cooling of a levitated nanoparticle to the motional quantum ground state
Uroš Delić, Manuel Reisenbauer, Kahan Dare, David Grass, Vladan Vuletić, Nikolai Kiesel, Markus Aspelmeyer
We report quantum ground state cooling of a levitated nanoparticle in a room temperature environment. Using coherent scattering into an optical cavity we cool the center of mass motion of a 143 nm diameter silica particle by more than 7 orders of magnitude to n_x = 0.43 \pm 0.03 phonons along the cavity axis, corresponding to a temperature of 12 μK. We infer a heating rate of \Gamma_x/2\pi = 21\pm 3 kHz, which results in a coherence time of 7.6 μs – or 15 coherent oscillations – while the particle is optically trapped at a pressure of 10^{-6} mbar. The inferred optomechanical coupling rate of g_x/2\pi = 71 kHz places the system well into the regime of strong cooperativity (C \approx 5). We expect that a combination of ultra-high vacuum with free-fall dynamics will allow to further expand the spatio-temporal coherence of such nanoparticles by several orders of magnitude, thereby opening up new opportunities for macroscopic quantum experiments.

Ground-state cooling of nanoparticles in laser traps is a very important milestone on the way to producing large spatial superpositions of matter, and I have a long-standing obsession with the possibility of using such superpositions to probe for the existence of new particles and forces like dark matter. In this post, I put this milestone in a bit of context and then toss up a speculative plot for the estimated dark-matter sensitivity of a follow-up to Delić et al.’s device.

One way to organize the quantum states of a single continuous degree of freedom, like the center-of-mass position of a nanoparticle, is by their sensitivity to displacements in phase space.… [continue reading]

The interpretation of free energy as bit-erasure capacity

Our paper discussed in the previous blog post might prompt this question: Is there still a way to use Landauer’s principle to convert the free energy of a system to its bit erasure capacity? The answer is “yes”, which we can demonstrate with a simple argument.


Summary: The correct measure of bit-erasure capacity N for an isolated system is the negentropy, the difference between the system’s current entropy and the entropy it would have if allowed to thermalize with its current internal energy. The correct measure of erasure capacity for a constant-volume system with free access to a bath at constant temperature T is the Helmholtz free energy A (divided by kT, per Landauer’s principle), provided that the additive constant of the free energy is set such that the free energy vanishes when the system thermalizes to temperature T. That is,

    \[N = \frac{A}{kT} = \frac{U-U_0}{kT} - (S - S_0),\]

where U_0 and S_0 are the internal energy and entropy of the system if it were at temperature T. The system’s negentropy lower bounds this capacity, and this bound is saturated when U = U_0.
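As a sanity check on this formula, here is a small Python sketch (my own toy example) for a two-level system with unit gap, taking k = 1 and measuring entropy in nats rather than bits for simplicity:

```python
import numpy as np

def thermal(beta, eps=1.0):
    """Energy U and entropy S (nats) of a two-level system {0, eps}
    at inverse temperature beta."""
    p = np.exp(-beta * eps) / (1 + np.exp(-beta * eps))  # excited-state population
    U = p * eps
    S = -p * np.log(p) - (1 - p) * np.log(1 - p)
    return U, S

T = 1.0                      # bath temperature, with k = 1
U0, S0 = thermal(1.0 / T)    # equilibrium values at the bath temperature

def capacity(U, S, T=1.0):
    """N = (U - U0)/kT - (S - S0) with k = 1 and S in nats."""
    return (U - U0) / T - (S - S0)

# For system states that are thermal at some other temperature T', the
# entropy equals its equilibrium value at energy U, so the negentropy
# bound reads N >= 0, with equality exactly when U = U0 (i.e., T' = T):
for Tp in [0.5, 1.0, 2.0]:
    U, S = thermal(1.0 / Tp)
    print(f"T'={Tp}: N = {capacity(U, S):.4f}")
```

In this toy model N is just the relative entropy between the system’s state and the bath-temperature thermal state, which vanishes at U = U_0 and is strictly positive otherwise.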


Traditionally, the Helmholtz free energy of a system is defined as \tilde{A} = U - kTS, where U and S are the internal energy and entropy of the system and T is the constant temperature of an external infinite bath with which the system can exchange energy. (Here, there is a factor of Boltzmann’s constant k in front of TS because I am measuring the (absolute) entropy S in dimensionless bits rather than in units of energy per temperature; that way we can write things like N = S_0 - S.) (I will suppress the “Helmholtz” modifier henceforth; when the system’s pressure rather than volume is constant, my conclusion below holds for the Gibbs free energy if the obvious modifications are made.)… [continue reading]

On computational aestivation

People often say to me “Jess, all this work you do on the foundations of quantum mechanics is fine as far as it goes, but it’s so conventional and safe. When are you finally going to do something unusual and take some career risks?” I’m now pleased to say I have a topic to bring up in such situations: the thermodynamic incentives of powerful civilizations in the far future who seek to perform massive computations. Anders Sandberg, Stuart Armstrong, and Milan M. Ćirković previously argued for a surprising connection between Landauer’s principle and the Fermi paradox, which Charles Bennett, Robin Hanson, and I have now critiqued. Our comment appeared today in the new issue of Foundations of Physics:

Comment on 'The aestivation hypothesis for resolving Fermi's paradox'
Charles H. Bennett, Robin Hanson, C. Jess Riedel
In their article [arXiv:1705.03394], 'That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox', Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer's principle implies that a civilization can in principle perform far more (~10^{30} times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting. Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error.
[continue reading]

FAQ about experimental quantum Darwinism

I am briefly stirring from my blog-hibernation (this blog will resume at full force sometime in the future, but not just yet) to present a collection of frequently asked questions about experiments seeking to investigate quantum Darwinism (QD). Most of the questions were asked by (or evolved from questions asked by) Philip Ball while we corresponded regarding his recent article “Quantum Darwinism, an Idea to Explain Objective Reality, Passes First Tests” for Quanta magazine, which I recommend you check out.


Who is trying to see quantum Darwinism in experiments?

I am aware of two papers out of a group from Arizona State in 2010 (here and here) and three papers from separate groups last year (arXiv: 1803.01913, 1808.07388, 1809.10456). I haven’t looked at them all carefully so I can’t vouch for them, but I think the more recent papers would be the closest thing to a “test” of QD.

What are the experiments doing to put QD to the test?

These teams construct a kind of “synthetic environment” from just a few qubits, and then interrogate them to discover the information that they contain about the quantum system to which they are coupled.

What do you think of experimental tests of QD in general?

Considered as a strictly mathematical phenomenon, QD is the dynamical creation of certain kinds of correlations between certain systems and their environments under certain conditions. These experiments directly confirm that, if such conditions are created, the expected correlations are obtained.

The experiments are, unfortunately, not likely to offer many insights or opportunities for surprise; the result can be predicted with very high confidence long in advance.… [continue reading]

Branches as hidden nodes in a neural net

I had been vaguely aware that there was an important connection between tensor network representations of quantum many-body states (e.g., matrix product states) and artificial neural nets, but it didn’t really click together until I saw Roger Melko’s nice talk on Friday about his recent paper with Torlai et al. (There is a title card about “resurgence” from Francesco Di Renzo’s talk at the beginning of the video, which you can ignore; it is just a mistake in KITP’s video system.):

[Download MP4]   [Other options]


In particular, he sketched the essential equivalence between matrix product states (MPS) and restricted Boltzmann machines (RBM) (this is discussed in detail by Chen et al.; see also good intuition and a helpful physicist-statistician dictionary from Lin and Tegmark) before showing how he and collaborators could train efficient RBM representations of the states of the transverse-field Ising and XXZ models with a small number of local measurements from the true state.
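For concreteness, here is a minimal sketch of the RBM wavefunction ansatz itself (my own toy code with real weights, not the trained models from the paper): the sum over hidden spins factorizes into a product of cosh factors, which is what makes RBM amplitudes cheap to evaluate.

```python
import numpy as np
from itertools import product

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM amplitude psi(s) = sum_h exp(a.s + b.h + s.W.h)
    for spins s, h in {-1, +1}; the hidden sum factorizes."""
    theta = b + s @ W                 # effective field on each hidden unit
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

def rbm_amplitude_bruteforce(s, a, b, W):
    """Same quantity by explicit sum over hidden configurations (for checking)."""
    total = 0.0
    for h in product([-1, 1], repeat=len(b)):
        h = np.array(h)
        total += np.exp(a @ s + b @ h + s @ W @ h)
    return total

rng = np.random.default_rng(1)
n_v, n_h = 4, 3
a, b = rng.normal(size=n_v), rng.normal(size=n_h)
W = rng.normal(size=(n_v, n_h))
s = np.array([1, -1, 1, 1])
print(np.isclose(rbm_amplitude(s, a, b, W),
                 rbm_amplitude_bruteforce(s, a, b, W)))  # True
```

(Representing generic quantum states requires complex weights; the factorization goes through identically.)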

As you’ve heard me belabor ad nauseam, I think identifying and defining branches is the key outstanding task inhibiting progress in resolving the measurement problem. I had already been thinking of branches as a sort of “global” tensor in an MPS, i.e., there would be a single index (bond) that would label the branches and serve to efficiently encode a pure state with long-range entanglement due to the amplification that defines a physical measurement process. (More generally, you can imagine branching events with effects that haven’t propagated outside of some region, such as the light-cone or Lieb-Robinson bound, and you might even make a hand-wavy connection to entanglement renormalization.)… [continue reading]

Models of decoherence and branching

[This is akin to a living review, which will hopefully improve from time to time. Last edited 2020-4-8.]

This post will collect some models of decoherence and branching. We don’t have a rigorous definition of branches yet but I crudely define models of branching to be models of decoherence (I take decoherence to mean a model with dynamics taking the form U \approx \sum_i \vert S_i\rangle\langle S_i |\otimes U^{\mathcal{E}}_i for some tensor decomposition \mathcal{H} = \mathcal{S} \otimes \mathcal{E}, where \{\vert S_i\rangle\} is an (approximately) stable orthonormal basis independent of initial state, and where \mathrm{Tr}[ U^{\mathcal{E}}_i \rho^{\mathcal{E}}_0 U^{\mathcal{E}\dagger}_j ] \approx 0 for times t \gtrsim t_D and i \neq j, where \rho^{\mathcal{E}}_0 is the initial state of \mathcal{E} and t_D is some characteristic time scale) which additionally feature some combination of amplification, irreversibility, redundant records, and/or outcomes with an intuitive macroscopic interpretation.
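As a minimal instance of this definition (a toy example of my own, not one of the models collected below), consider a CNOT-style interaction in which a single environment qubit records the system’s pointer basis:

```python
import numpy as np

# Pointer basis |S_0>, |S_1|; one environment qubit starting in |0>.
# U = |S_0><S_0| (x) U0 + |S_1><S_1| (x) U1 with U0 = I, U1 = X (a CNOT),
# so the environment records which pointer state the system is in.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)
U = np.kron(P0, I2) + np.kron(P1, X)

rho_E0 = np.diag([1, 0]).astype(complex)   # environment starts in |0><0|

# Record condition: Tr[U_i rho_E0 U_j^dagger] ~ 0 for i != j
overlap = np.trace(I2 @ rho_E0 @ X.conj().T)
print(abs(overlap))                        # 0.0

# Consequence: system coherences are destroyed. Start with |+> (x) |0>:
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi = U @ np.kron(plus, np.array([1, 0], dtype=complex))
rho = np.outer(psi, psi.conj())
rho_S = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out environment
print(np.round(rho_S.real, 3))             # diag(0.5, 0.5): off-diagonals gone
```

Here the overlap vanishes exactly after a single step, so the “decoherence time” t_D is one gate; realistic models differ mainly in how this overlap decays and how redundantly the record is copied.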

(Note in particular that I am not just listing models for which you can mathematically take a classical limit (\hbar\to 0 or N\to\infty) and recover the classical equations of motion; Yaffe has a pleasingly general approach to that task [1], but I’ve previously sketched why that’s an incomplete explanation for classicality.)

I have the following desiderata for models, which tend to be in tension with computational tractability:

  • physically realistic
  • symmetric (e.g., translationally)
  • no ad-hoc system-environment distinction
  • Ehrenfest evolution along classical phase-space trajectories (at least on Lyapunov timescales)

Regarding that last one: we would like to recover “classical behavior” in the sense of classical Hamiltonian flow, which (presumably) means continuous degrees of freedom.… [continue reading]