Living bibliography for the problem of defining wavefunction branches

This post is (a seed of) a bibliography covering the primordial research area that goes by some of the following names:

• defining the branches of the wavefunction
• the preferred basis problem
• the set selection problem
• identifying the quasiclassical domain(s)

Although the way this problem tends to be formalized varies with context, I don’t think we have confidence in any of the formalizations. The different versions are very tightly related, so that a solution in one context is likely to give, or at least strongly point toward, solutions for the others.

As a time-saving device, I will just quote a few paragraphs from existing papers that review the literature, along with the relevant part of their list of references. I hope to update this from time to time, and perhaps turn it into a proper review article of its own one day. If you have a recommendation for this bibliography (either a single citation, or a paper I should quote), please do let me know.

Carroll & Singh

From “Quantum Mereology: Factorizing Hilbert Space into Subsystems with Quasi-Classical Dynamics”, arXiv:2005.12938:

While this question has not frequently been addressed in the literature on quantum foundations and emergence of classicality, a few works have highlighted its importance and made attempts to understand it better. Brun and Hartle [2] studied the emergence of preferred coarse-grained classical variables in a chain of quantum harmonic oscillators. Efforts to address the closely related question of identifying a classical set of histories (also known as the “Set Selection” problem) in the Decoherent Histories formalism [3–7, 10] have also been undertaken. Tegmark [9] has approached the problem from the perspective of the information-processing ability of subsystems, and Piazza [8] focuses on emergence of spatially local subsystem structure in a field-theoretic context. Hamiltonian-induced factorizations of Hilbert space which exhibit k-local dynamics have also been studied by Cotler et al. [14]. The idea that tensor product structures and virtual subsystems can be identified with algebras of observables was originally introduced by Zanardi et al. in [15, 16] and was further extended by Kabernik, Pollack, and Singh [17] to induce more general structures in Hilbert space. In a series of papers (e.g. [18–21]; see also [22]) Castagnino, Lombardi, and collaborators have developed the self-induced decoherence (SID) program, which conceptualizes decoherence as a dynamical process which identifies the classical variables by inspection of the Hamiltonian, without the need to explicitly identify a set of environment degrees of freedom. Similar physical motivations but different mathematical methods have led Kofler and Brukner [23] to study the emergence of classicality under restriction to coarse-grained measurements.

Selected references

[1] S. M. Carroll and A. Singh, “Mad-Dog Everettianism: Quantum Mechanics at Its Most Minimal,” arXiv:1801.08132 [quant-ph].
[2] T. A. Brun and J. B. Hartle, “Classical dynamics of the quantum harmonic chain,” Physical Review D 60 no. 12, (1999) 123503.
[3] M. Gell-Mann and J. Hartle, “Alternative decohering histories in quantum mechanics,” arXiv preprint arXiv:1905.05859 (2019).
[4] F. Dowker and A. Kent, “On the consistent histories approach to quantum mechanics,” Journal of Statistical Physics 82 no. 5-6, (1996) 1575–1646.
[5] A. Kent, “Quantum histories,” Physica Scripta 1998 no. T76, (1998) 78.
[6] C. Jess Riedel, W. H. Zurek, and M. Zwolak, “The rise and fall of redundancy in decoherence and quantum Darwinism,” New Journal of Physics 14 no. 8, (Aug, 2012) 083010, arXiv:1205.3197 [quant-ph].
[7] R. B. Griffiths, “Consistent histories and the interpretation of quantum mechanics,” J. Statist. Phys. 36 (1984) 219.
[8] F. Piazza, “Glimmers of a pre-geometric perspective,” Found. Phys. 40 (2010) 239–266, arXiv:hep-th/0506124 [hep-th].
[9] M. Tegmark, “Consciousness as a state of matter,” Chaos, Solitons & Fractals 76 (2015) 238–270.
[10] J. P. Paz and W. H. Zurek, “Environment-induced decoherence, classicality, and consistency of quantum histories,” Physical Review D 48 no. 6, (1993) 2728.
[11] N. Bao, S. M. Carroll, and A. Singh, “The Hilbert Space of Quantum Gravity Is Locally Finite-Dimensional,” arXiv:1704.00066 [hep-th].
[12] T. Banks, “Quantum Mechanics and Cosmology.” Talk given at the festschrift for L. Susskind, Stanford University, May 2000.
[13] W. Fischler, “Taking de Sitter Seriously.” Talk given at Role of Scaling Laws in Physics and Biology (Celebrating the 60th Birthday of Geoffrey West), Santa Fe, Dec. 2000.
[14] J. S. Cotler, G. R. Penington, and D. H. Ranard, “Locality from the spectrum,” Communications in Mathematical Physics 368 no. 3, (2019) 1267–1296.
[15] P. Zanardi, “Virtual quantum subsystems,” Phys. Rev. Lett. 87 (2001) 077901, arXiv:quant-ph/0103030 [quant-ph].
[16] P. Zanardi, D. A. Lidar, and S. Lloyd, “Quantum tensor product structures are observable induced,” Phys. Rev. Lett. 92 (2004) 060402, arXiv:quant-ph/0308043 [quant-ph].
[17] O. Kabernik, J. Pollack, and A. Singh, “Quantum State Reduction: Generalized Bipartitions from Algebras of Observables,” Phys. Rev. A 101 no. 3, (2020) 032303, arXiv:1909.12851 [quant-ph].
[18] M. Castagnino and O. Lombardi, “Self-induced decoherence: a new approach,” Studies in the History and Philosophy of Modern Physics 35 no. 1, (Jan, 2004) 73–107.
[19] M. Castagnino, S. Fortin, O. Lombardi, and R. Laura, “A general theoretical framework for decoherence in open and closed systems,” Class. Quant. Grav. 25 (2008) 154002, arXiv:0907.1337 [quant-ph].
[20] O. Lombardi, S. Fortin, and M. Castagnino, “The problem of identifying the system and the environment in the phenomenon of decoherence,” in EPSA Philosophy of Science: Amsterdam 2009, H. W. de Regt, S. Hartmann, and S. Okasha, eds., pp. 161–174. Springer Netherlands, Dordrecht, 2012.
[21] S. Fortin, O. Lombardi, and M. Castagnino, “Decoherence: A Closed-System Approach,” Brazilian Journal of Physics 44 no. 1, (Feb, 2014) 138–153, arXiv:1402.3525 [quant-ph].
[22] M. Schlosshauer, “Self-induced decoherence approach: Strong limitations on its validity in a simple spin bath model and on its general physical relevance,” Phys. Rev. A 72 no. 1, (Jul, 2005) 012109, arXiv:quant-ph/0501138 [quant-ph].
[23] J. Kofler and C. Brukner, “Classical World Arising out of Quantum Physics under the Restriction of Coarse-Grained Measurements,” Phys. Rev. Lett. 99 no. 18, (Nov, 2007) 180403, arXiv:quant-ph/0609079 [quant-ph].

Riedel, Zurek, & Zwolak

From “The Objective past of a quantum universe: Redundant records of consistent histories”, arXiv:1312.0331:

“Into what mixture does the wavepacket collapse?” This is the preferred basis problem in quantum mechanics [1]. It launched the study of decoherence [2, 3], a process central to the modern view of the quantum-classical transition [4–9]. The preferred basis problem has been solved exactly for so-called pure decoherence [1, 10]. In this case, a well-defined pointer basis [1] emerges whose origins can be traced back to the interaction Hamiltonian between the quantum system $\mathcal{S}$ and its environment $\mathcal{E}$ [1, 2, 4]. An approximate pointer basis exists for many other situations (see, e.g., Refs. [11–17]).

The consistent (or decoherent) histories framework [18–21] was originally introduced by Griffiths. It has evolved into a mathematical formalism for applying quantum mechanics to completely closed systems, up to and including the whole universe. It has been argued that quantum mechanics within this framework would be a fully satisfactory physical theory only if it were supplemented with an unambiguous mechanism for identifying a preferred set of histories corresponding, at the least, to the perceptions of observers [22–29] (but see counterarguments [30–35]). This would address the Everettian [36] question: “What are the branches in the wavefunction of the Universe?” This defines the set selection problem, the global analog to the preferred basis problem.

It is natural to demand that such a set of histories satisfy the mathematical requirement of consistency, i.e., that their probabilities are additive. The set selection problem still looms large, however, as almost all consistent sets bear no resemblance to the classical reality we perceive [37–39]. Classical reasoning can only be done relative to a single consistent set [20, 31, 32]; simultaneous reasoning from different sets leads to contradictions [22–24, 40, 41]. A preferred set would allow one to unambiguously compute probabilities¹ for all observations from first principles, that is, from (1) a wavefunction of the Universe and (2) a Hamiltonian describing the interactions.

To agree with our expectations, a preferred set would describe macroscopic systems via coarse-grained variables that approximately obey classical equations of motion, thereby constituting a “quasiclassical domain” [14, 23, 24, 40, 49, 50]. Various principles for its identification have been explored, both within the consistent histories formalism [15, 26, 39, 49, 51–56] and outside it [57–61]. None have gathered broad support.


¹ We take Born’s rule for granted, putting aside the question of whether it should be derived from other principles [9, 36, 42–48] or simply assumed. That issue is independent of (and cleanly separated from) the topic of this paper.
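To unpack two bits of jargon from the excerpt above (standard definitions; this is my gloss, not part of the quote): in pure decoherence, the interaction Hamiltonian is diagonal in some system basis $\{|s_i\rangle\}$, so the joint state evolves as $|s_i\rangle|E_0\rangle \to |s_i\rangle|E_i(t)\rangle$ and the off-diagonal elements of the reduced density matrix are suppressed by the overlap factors $\langle E_j(t)|E_i(t)\rangle$; the $|s_i\rangle$ are the pointer basis. The consistency condition, meanwhile, is usually phrased in terms of the decoherence functional: for an initial state $\rho$ and histories $\alpha = (\alpha_1, \ldots, \alpha_n)$ built from Heisenberg-picture projectors, one defines

$C_\alpha = P^{(n)}_{\alpha_n}(t_n) \cdots P^{(1)}_{\alpha_1}(t_1), \qquad D(\alpha, \alpha') = \mathrm{Tr}\!\left[ C_\alpha \rho\, C_{\alpha'}^\dagger \right],$

and the probabilities $p(\alpha) = D(\alpha, \alpha)$ are additive exactly when $\mathrm{Re}\, D(\alpha, \alpha') = 0$ for all $\alpha \neq \alpha'$ (the stronger condition $D(\alpha, \alpha') = 0$ for $\alpha \neq \alpha'$ is usually called decoherence).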

Selected references

[1] W. H. Zurek, Phys. Rev. D 24, 1516 (1981).
[2] W. H. Zurek, Phys. Rev. D 26, 1862 (1982).
[3] E. Joos and H. D. Zeh, Zeitschrift für Physik B Condensed Matter 59, 223 (1985).
[4] H. D. Zeh, Foundations of Physics 3, 109 (1973).
[5] W. H. Zurek, Physics Today 44, 36 (1991).
[6] W. H. Zurek, Rev. Mod. Phys. 75, 715 (2003).
[7] E. Joos, H. D. Zeh, C. Kiefer, D. Giulini, J. Kupsch, and I.-O. Stamatescu, Decoherence and the Appearance of a Classical World in Quantum Theory, 2nd ed. (Springer-Verlag, Berlin, 2003).
[8] M. Schlosshauer, Decoherence and the Quantum-to-Classical Transition (Springer-Verlag, Berlin, 2008); in Handbook of Quantum Information, edited by M. Aspelmeyer, T. Calarco, and J. Eisert (Springer, Berlin/Heidelberg, 2014).
[9] W. H. Zurek, Physics Today 67, 44 (2014).
[10] M. Zwolak, C. J. Riedel, and W. H. Zurek, Physical Review Letters 112, 140406 (2014).
[11] J. R. Anglin and W. H. Zurek, Physical Review D 53, 7327 (1996); D. A. R. Dalvit, J. Dziarmaga, and W. H. Zurek, Physical Review A 72, 062101 (2005).
[12] O. Kübler and H. D. Zeh, Annals of Physics 76, 405 (1973).
[13] W. H. Zurek, S. Habib, and J. P. Paz, Phys. Rev. Lett. 70, 1187 (1993).
[14] M. Gell-Mann and J. B. Hartle, Phys. Rev. D 47, 3345 (1993).
[15] M. Gell-Mann and J. B. Hartle, Phys. Rev. A 76, 022104 (2007).
[16] J. J. Halliwell, Phys. Rev. D 58, 105015 (1998).
[17] J. Paz and W. H. Zurek, Phys. Rev. Lett. 82, 5181 (1999).
[18] R. B. Griffiths, Journal of Statistical Physics 36, 219 (1984).
[19] R. Omnès, The Interpretation of Quantum Mechanics (Princeton University Press, Princeton, NJ, 1994).
[20] R. B. Griffiths, Consistent Quantum Theory (Cambridge University Press, Cambridge, UK, 2002).
[21] J. J. Halliwell, in Fundamental Problems in Quantum Theory, Vol. 775, edited by D. Greenberger and A. Zeilinger (Blackwell Publishing Ltd, 1995), arXiv:gr-qc/9407040.
[22] F. Dowker and A. Kent, Phys. Rev. Lett. 75, 3038 (1995).
[23] F. Dowker and A. Kent, Journal of Statistical Physics 82, 1575 (1996).
[24] A. Kent, Phys. Rev. A 54, 4670 (1996).
[25] A. Kent, Phys. Rev. Lett. 78, 2874 (1997).
[26] A. Kent and J. McElwaine, Phys. Rev. A 55, 1703 (1997).
[27] A. Kent, in Bohmian Mechanics and Quantum Theory: An Appraisal, edited by J. T. Cushing, A. Fine, and S. Goldstein (Kluwer Academic Press, Dordrecht, 1996), arXiv:quant-ph/9511032.
[28] E. Okon and D. Sudarsky, Stud. Hist. Philos. Sci. B 48, Part A, 7 (2014).
[29] E. Okon and D. Sudarsky, arXiv:1504.03231 (2015).
[30] R. B. Griffiths and J. B. Hartle, Physical Review Letters 81, 1981 (1998).
[31] R. B. Griffiths, Physical Review A 57, 1604 (1998).
[32] R. B. Griffiths, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 44, 93 (2013).

Footnotes

  1. Relatedly, I have another blog post that reviews the consistency conditions in consistent histories.

12 Comments

  1. I know I’m a broken record on this, but “An algebraic approach to Koopman classical mechanics” (arXiv:1901.00526, DOI there for Annals of Physics 2020) is relevant to this because Section 7.1 shows how to reconcile the Heisenberg picture of measurement (that at the time of measurement wavepacket collapse is necessary) with the Bohr picture of measurement (that a measurement constrains future measurements) by deriving an elementary identity, $\mathrm{Tr}[\hat A\hat X\hat\rho_A]=\mathrm{Tr}[\hat A\hat X_A\hat\rho]$, where $\hat\rho_A$ presents collapse as linear projection of an operator to the commutant of $\hat A$ (this is known as the Lüders transformer in the measurement theory literature), whereas $\hat X_A$ requires (indeed, it enforces) that subsequent joint measurements must commute. What I like a lot about this is that it makes (mathematical) sense of Bohr’s Delphic comments. It’s also not prescriptive, in that it allows us our own choice of whether to work in the Heisenberg or the Bohr picture of quantum measurement, just as we can work either in the Heisenberg or the Schrödinger picture of quantum evolution.

    “Classical states, quantum field measurement” (arXiv:1709.06711, DOI there for Physica Scripta 2019) is relevant to the question of coarse-graining because it shows that for the free quantized complex KG field and for the free quantized EM field there is a free random field of commuting observables that is arbitrarily fine-grained over all of Minkowski space-time (that is, the maximal commuting subalgebra of such free fields is a random field everywhere, not just over a space-like hyperplane).

    Some people find these two papers quite promising for our understanding of QM/QFT; however, others don’t.

    • I’m pretty confident your first paragraph does not address the problems discussed in the post. The problem from the post is about an account of a measuring process that derives, rather than presupposes, a measuring apparatus (e.g., as defined by a tensor decomposition), a preferred set of observables, or a preferred basis. It appears you are just leaving the measured observable $\hat{A}$ unexplained.

      Likewise, your second paragraph is unlikely to be addressing the problem because we know there are multiple possible choices of sets of commuting observables that are arbitrarily fine-grained but do not commute with each other, and we are left with the problem of which to choose. Indeed, in finite dimensions such a set of commuting observables is equivalent to a choice of basis, and there are many possible choices of basis that are incompatible with each other. (A minimal numerical sketch appears at the end of this comment.)

      If you disagree, you should be able to explain your proposed solution in the non-relativistic (and, ideally, finite-dimensional) context, where I think it will be more clear where the issue lies.
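      To make this concrete, here is a minimal numpy sketch (a toy single-qubit illustration; the matrices and labels are mine, not from any of the papers under discussion). The eigenbases of the Pauli operators Z and X are each a perfectly good maximal commuting choice, and nothing internal to the quantum formalism picks one over the other:

          import numpy as np

          # Two single-qubit observables whose eigenbases are both
          # "maximal commuting" choices, yet incompatible with each other.
          Z = np.array([[1, 0], [0, -1]], dtype=complex)
          X = np.array([[0, 1], [1, 0]], dtype=complex)

          # Each eigendecomposition defines a candidate preferred basis.
          _, z_basis = np.linalg.eigh(Z)  # columns are Z eigenvectors
          _, x_basis = np.linalg.eigh(X)  # columns are X eigenvectors

          # The two choices are incompatible: the observables don't commute,
          print(np.allclose(Z @ X, X @ Z))  # False
          # and neither do projectors drawn from the two bases.
          P_z = np.outer(z_basis[:, 0], z_basis[:, 0].conj())
          P_x = np.outer(x_basis[:, 0], x_basis[:, 0].conj())
          print(np.allclose(P_z @ P_x, P_x @ P_z))  # False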

  2. > The set selection problem still looms large, however, as almost all consistent sets bear no resemblance to the classical reality we perceive

    What about an average over all consistent sets?

    • How would one define an average of a consistent set?

      (Remember that, at any fixed time t, each consistent set defines an orthonormal basis, possibly coarse-grained into a set of orthogonal projectors. So even if we put aside the problem of the time structure of the consistent set, defining an average of sets should inherit all the problems of defining an average of bases; see the sketch below.)
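      To see the obstruction concretely, here is a throwaway numpy sketch (a toy example of mine, not from the literature): averaging a projector from each of two incompatible bases gives an operator that is not idempotent, hence not a projector, so the “average” does not define a basis at all.

          import numpy as np

          # A projector from the Z eigenbasis and one from the X eigenbasis
          # of a single qubit.
          P_z = np.array([[1.0, 0.0], [0.0, 0.0]])  # |0><0|
          P_x = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|

          # Their average fails the projector condition P^2 = P.
          P_avg = 0.5 * (P_z + P_x)
          print(np.allclose(P_avg @ P_avg, P_avg))  # False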

      • I’m not sure how, but it seems like it could be necessary. If we assume that each consistent set is a valid framework for looking at the world, then it would be arbitrary to just pick one and ignore the others.

        Taking the classical approximation as a model for decision making, at least one could compute a single value for each set as the expected payout of one decision. I don’t know how you could weigh the payouts for all sets; perhaps sets with lower Kolmogorov complexity are rare and naturally easier to reason with.

        • > If we assume that each consistent set is a valid framework for looking at the world, then it would be arbitrary to just pick one and ignore the others.

          I agree we don’t yet have a precise principle for picking only one set of consistent histories, but we surely have good and non-arbitrary (albeit imprecise) reasons for focusing on a very small subset of them: they describe the actual reality we observe! (Likewise, we could imagine all sorts of other weird but internally self-consistent modifications of physics, but we reject these because they do not, in fact, describe the real world.)

          > at least one could compute a single value for each set as the expected payout of one decision

          Well, first, personally I do not think it makes sense to treat bets/decisions as more fundamental than probabilities. Adrian Kent articulates well why this doesn’t make sense in his contribution to the book “Many Worlds?”.

          But even putting that aside, you’d face many problems doing what you’re suggesting. Most consistent sets would not feature a projector corresponding to any given decision, and the ones that do would all give the same answer. (So the average wouldn’t accomplish anything.) And you’d also still not have explained from first principles what the real decisions are, as opposed to the decisions you didn’t take. If you declared a preferred set of decisions without justification, you’d just be smuggling in a set of projectors to act as your preferred basis, begging the question.

          • > we surely have good and non-arbitrary (albeit imprecise) reasons for focusing on a very small subset of them: they describe the actual reality we observe

            If there is one set which fully describes our reality, that would be interesting. Since it isn’t a physical division of the universe, but more of a mental model, it would suggest either that it’s not possible for one agent to consider multiple sets or that set has some unique survival advantage for us, which is why we don’t consider others.

            > you’d face many problems doing what you’re suggesting. Most consistent sets would not feature a projector corresponding to any given decision, and the ones that do would all give the same answer. (So the average wouldn’t accomplish anything.)

            Interesting – I didn’t know that decisions could be represented by projectors or that all sets would either agree or give no preference.

            I don’t see how these are problems; it actually makes it easier. For any decision, an agent could ignore all irrelevant sets, and if they found any relevant one, they could stop looking, knowing all others would give equal payouts.

            > And you’d also still not have explained from first principles what the real decisions are, as opposed to the decisions you didn’t take.

            True; a complete theory would explain that too. I was assuming free will: the ability to at least choose the question.

            • > If there is one set which fully describes our reality, that would be interesting. Since it isn’t a physical division of the universe, but more of a mental model,

              If you’re not a wavefunction realist/physicalist, the branch structure describes the difference between things that did vs. did not occur. So although it’s not a division into multiple physical pieces, it identifies what is physical in the first place.

              If you are a wavefunction realist/physicalist, then it’s a real physical division.

              > it would suggest either that it’s not possible for one agent to consider multiple sets or that set has some unique survival advantage for us, which is why we don’t consider others.

              You can’t even define things like “survival” without choosing a preferred consistent set. Asking “did this agent survive?” in most consistent sets is as unanswerable as asking which slit an electron went through in a two-slit experiment.

              > I didn’t know that decisions could be represented by projectors

              Any physical thing we could measure even in principle (the location of an electron, whether a nerve fired, whether a button was pressed) corresponds to a projector.

              > I don’t see how these are problems; it actually makes it easier. For any decision, an agent could ignore all irrelevant sets and if they found any relevant one, they could stop looking, knowing all others would give equal payouts

              If you’ve already picked your decision, corresponding to a set of projectors, then you’ve implicitly picked a consistent set. And then of course you can compute a probability. This is just the statement that quantum mechanics is *prior to* (i.e., more fundamental than) decision theory on the ladder of reductionism. But the hard part is to identify the decisions/set from first principles without just sticking them in by hand.

              > > And you’d also still not have explained from first principles what the real decisions are, as opposed to the decisions you didn’t take.

              > True; a complete theory would explain that too. I was assuming free will: the ability to at least choose the question.

              No, no, I’m not saying that the actual choice made (yes vs. no) would or should be identified by a set-selection principle. The outcome is still probabilistic (and potentially a place for something like free will to hide, if one is sympathetic to that sort of approach). I’m saying that the choice of set determines the question that the decision answers: “Should I press this button?” vs. “Should I exist as a spatially delocalized superposition in both the Milky Way and Andromeda?”.

              • I think I see what you’re saying now: that there is one classical framework imposed by nature and that’s why it doesn’t make sense to consider some sort of average of consistent sets.

                I agree that identifying the principles that lead to the selection of the consistent set we are using is the hard part. It has a lot of nice properties, like, as you mention, the ability to express our survival.

                My view is different though. I see that agents develop their own framework and can choose the questions they’ll ask or the decisions they’ll consider: the free-will assumption. When an experimenter measures spin, for example, he can choose any direction; it’s constrained only by logic, not by a set of predetermined values.

                An example of how this might work at a higher level is how one can view the world as lines of competing genes using individuals or as individuals using genes – the primary objects are different.

                > If you are wavefunction realists/physicalist, then it’s a real physical division.

                I would say what’s physical is the state of the universe and its evolution. There are many ways one can describe it and divide it; all the consistent ways are valid, but some are less useful than others. Whatever choice is made won’t affect its evolution as a whole (except in our little part, if the choice isn’t random).

                • > I see that agents develop their own framework and can choose the questions they’ll ask or the decisions they’ll consider: the free-will assumption. When an experimenter measures spin, for example, he can choose any direction; it’s constrained only by logic, not by a set of predetermined values.

                  But consider how this actually works in real life: when the experimenter selects a direction to measure a spin, he physically moves a macroscopic object through a series of *classical* states, e.g., the orientation of a big magnet, the turn of a dial, or certain button presses on a keyboard. These classical states already form a preferred basis, as selected by decoherence. Within that classical basis, the actual choice (e.g., 37° vs. 119° angle spin measurement) can be traced back, if you want, to individual nerves firing or not firing, but these also take place in a preferred basis selected by decoherence.

                  • The choice of angle can at least clearly be chosen by a quantum source. I’m not convinced the other details, like the timing and orientation of the magnet, are not also quantum in origin, just by accident. Take for example this analysis of coin flips:

                    https://arxiv.org/abs/1212.0953

                    It seems like classical states are always just an approximation and decoherence is almost never perfect.

                    • I’m very familiar with the ideas in Andy’s paper and it doesn’t conflict with my claims. That the choice of angle can ultimately be traced back to a quantum event does not mean the experimentalist selected the angle by putting a macroscopic dial in a coherent quantum superposition, i.e., that he could have avoided the classically preferred angle basis (rather than an incompatible basis formed of superpositions of angles). Yes, of course if you look at the wavefunction of the universe it will contain two branches, one where the experimentalist made one choice and one where he made the other, but there is still a preferred classical basis for that branch structure.

                      > It seems like classical states are always just an approximation and decoherence is almost never perfect.

                      If you choose projectors with a simple mathematical description, this is true. But we don’t actually expect the projectors reflecting the real, physical experiments we perform to be perfectly representable in this way. We can take the actual projectors to give perfect decoherence and, indeed, this is necessary if we want them to describe probabilities that actually sum to 1 exactly. The difference between these projectors and the ones with a simple mathematical description is exponentially small.
