Comments on Baldijão et al.’s GPT-generalized quantum Darwinism

This nice recent paper considers the “generalized probabilistic theory” (GPT) operational framework, of which classical and quantum theories are special cases, and asks what sorts of theories admit quantum-Darwinism-like dynamics. It is closely related to my interest in finding a satisfying theory of classical measurement.

Quantum Darwinism and the spreading of classical information in non-classical theories
Roberto D. Baldijão, Marius Krumm, Andrew J. P. Garner, and Markus P. Müller
Quantum Darwinism posits that the emergence of a classical reality relies on the spreading of classical information from a quantum system to many parts of its environment. But what are the essential physical principles of quantum theory that make this mechanism possible? We address this question by formulating the simplest instance of Darwinism – CNOT-like fan-out interactions – in a class of probabilistic theories that contain classical and quantum theory as special cases. We determine necessary and sufficient conditions for any theory to admit such interactions. We find that every non-classical theory that admits this spreading of classical information must have both entangled states and entangled measurements. Furthermore, we show that Spekkens’ toy theory admits this form of Darwinism, and so do all probabilistic theories that satisfy principles like strong symmetry, or contain a certain type of decoherence processes. Our result suggests the counterintuitive general principle that in the presence of local non-classicality, a classical world can only emerge if this non-classicality can be “amplified” to a form of entanglement.

After the intro, the authors give self-contained background on the two key prerequisites: quantum Darwinism and generalized probabilistic theories (GPTs). The former is an admirably brief summary of what are, to me, the core and extremely simple features of quantum Darwinism. Both this and the GPT summary can be skipped by familiar readers, but I recommend reading definitions 1-4 in the latter part of the GPT subsection, plus the “Summary of Assumptions”.
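In case it’s useful, here is the one-equation quantum version of that core mechanism (my own summary notation, not the paper’s). The fan-out unitary, a cascade of CNOTs, copies the pointer basis $\{|i\rangle\}$ of the system $S$ into $n$ environment fragments,

$U_{\mathrm{fan\text{-}out}}\, |i\rangle_S \otimes |0\rangle_E^{\otimes n} = |i\rangle_S \otimes |i\rangle_E^{\otimes n},$

so that a general system state branches,

$(\alpha|0\rangle + \beta|1\rangle)_S \otimes |0\rangle_E^{\otimes n} \;\mapsto\; \alpha\, |0\rangle_S |0\rangle_E^{\otimes n} + \beta\, |1\rangle_S |1\rangle_E^{\otimes n},$

leaving each environment fragment with a redundant record of the pointer observable. This CNOT-like structure is exactly what the authors generalize to GPTs.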

The main results seem to be:

  • Baldijão et al.’s Darwinism-related definitions 5 and 6 are equivalent in quantum theory but not in all GPTs. In particular:

    …one needs to consider the possibility of [a classical-information-spreading dynamic] that preserves the statistics of [measurements on the system] S but still changes the state of S, even if S is prepared in one of the [(generalized) pointer states]. This is impossible in quantum theory… However, many GPT systems (such as gbits [18]) violate the analogous operational condition… Thus, definition 5 captures the essential features for ideal Darwinism on the operational level, while definition 6 further requests classical features from the frame states themselves.

  • Entanglement generation is a necessary feature of any GPT generalization of a reversible quantum Darwinism process (a small numerical check of the quantum case appears after this list):

    In [quantum theory], the fan-out gate [classical-information-spreading dynamic] can create entanglement whenever the system is not initialized to a pointer state… entanglement-creation is a necessary property of any generalized ideal Darwinism process.

    The reversible qualifier is key here, as this statement would otherwise seem to exclude the possibility of Darwinism in classical models; we of course know it’s possible to copy classical information in classical models with irreversible Markovian classical dynamics. Thus

    In particular, this rules out Darwinism in boxworld [18] (a theory containing the aforementioned gbits) or any dichotomic maximally nonlocal theory. For these specific examples, one could also infer this from Refs. [40, 44], but here we have shown it without having to determine the complete structure of the reversible transformations.

  • GPT models that admit “decoherence maps” — a GPT generalization (defined by Richens et al.) of what we in the trade sometimes call “pure decoherence” in quantum theory — necessarily admit the ideal GPT generalization of quantum Darwinism.
  • Spekkens’ toy model is capable of hosting the GPT-generalized version of quantum Darwinism defined by Baldijão et al. This is notable as a proof of concept that quantum Darwinism requires neither full quantum theory, nor decoherence maps, nor even universal reversible classical computation (which I gather isn’t possible in Spekkens’ toy model, although I don’t understand this well).
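
To make the entanglement-creation result concrete, here is a small numerical sketch of the quantum case (my own illustration, not code from the paper): the fan-out applied to a pointer state leaves the system unentangled with the environment, while applied to the non-pointer state $|+\rangle$ it produces a GHZ state and maximal system-environment entanglement.

    import numpy as np

    # Fan-out unitary on one system qubit plus n_env environment qubits:
    # a cascade of CNOTs copying the system's pointer bit into each fragment.
    def cnot_fanout(n_env):
        dim = 2 ** (n_env + 1)
        U = np.zeros((dim, dim))
        for x in range(dim):
            bits = [(x >> k) & 1 for k in reversed(range(n_env + 1))]
            s = bits[0]                              # system qubit (most significant)
            out = [s] + [b ^ s for b in bits[1:]]    # flip each env bit iff s == 1
            U[int("".join(map(str, out)), 2), x] = 1.0
        return U

    # Von Neumann entropy (in bits) of the system's reduced state.
    def system_entropy(psi, n_env):
        M = psi.reshape(2, 2 ** n_env)               # rows: system, columns: environment
        p = np.linalg.eigvalsh(M @ M.conj().T)       # eigenvalues of rho_S = M M^dagger
        p = p[p > 1e-12]
        return float(-(p * np.log2(p)).sum())

    n_env = 3
    U = cnot_fanout(n_env)
    env0 = np.zeros(2 ** n_env)
    env0[0] = 1.0                                    # environment starts in |00...0>

    for name, state in [("pointer |0>    ", np.array([1.0, 0.0])),
                        ("non-pointer |+>", np.array([1.0, 1.0]) / np.sqrt(2))]:
        out = U @ np.kron(state, env0)               # system is the leading qubit
        print(name, "-> system entropy:", round(system_entropy(out, n_env), 3))

    # pointer |0>     -> system entropy: 0.0  (no entanglement created)
    # non-pointer |+> -> system entropy: 1.0  (GHZ state; maximal S-E entanglement)

(In classical probability theory every pure state is a pointer state, so the analogous fan-out never needs to create entanglement; this is why the theorem only constrains non-classical theories.)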

Philosophically and aesthetically I like the idea of GPTs as an operational framework for thinking about the foundations of quantum mechanics, although we should all be quite skeptical that GPTs besides quantum theory — either more or less expansive — will be found to describe any fundamental physics (even though I think it’s quite plausible that quantum theory will eventually be superseded by something). This is because, among other things, GPTs treat space and time very differently, and especially because they take time asymmetry as fundamental rather than emergent or a consequence of initial conditions.

The practical downside of GPTs is that there’s been a whole industry of papers exploring GPTs that are neither classical nor quantum and that don’t maintain contact with anything that could describe the real world; it’s too much fun to play with the math. This paper is a welcome exception, as it helps clarify which theories could lead to the appearance of classicality, at least insofar as the latter is identified with quantum Darwinism.
