Over at PhysicsOverflow, Daniel Ranard asked a question that’s near and dear to my heart:
How deterministic are large open quantum systems (e.g. with humans)?
Consider some large system modeled as an open quantum system — say, a person in a room, where the walls of the room interact in a boring way with some environment. Begin with a pure initial state describing some comprehensible configuration. (Maybe the person is sitting down.) Generically, the system will be in a highly mixed state after some time. Both normal human experience and the study of decoherence suggest that this state will be a mixture of orthogonal pure states that describe classical-like configurations. Call these configurations branches.
How much does a pure state of the system branch over human time scales? There will soon be many (many) orthogonal branches with distinct microscopic details. But to what extent will probabilities be spread over macroscopically (and noticeably) different branches?
I answered the question over there as best I could. Below, I’ll reproduce my answer and indulge in slightly more detail and speculation.
This question is central to my research interests, in the sense that completing that research would necessarily let me give a precise, unambiguous answer. So I can only give an imprecise, hand-wavy one. I’ll write down the punchline, then work backwards.
The instantaneous rate of branching, as measured in entropy/time (e.g., bits/s), is given by the sum of all positive Lyapunov exponents for all non-thermalized degrees of freedom.
Most of the vagueness in this claim comes from defining/identifying the degrees of freedom that have thermalized, and dealing with cases of partial/incomplete thermalization; these problems exist classically.
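For intuition about the classical ingredient of this claim, here's a minimal numerical sketch (a standard exercise, not anything specific to branching): estimating the positive Lyapunov exponent of a toy chaotic system and reading it as an entropy-production rate. I use the logistic map at r = 4, where the exact exponent is ln 2, i.e. one bit per time step; the initial condition and step counts are arbitrary choices.

```python
import numpy as np

# Toy illustration: estimate the Lyapunov exponent of the logistic map
# x -> r*x*(1-x) at r = 4, where the exact value is ln(2) -- one bit of
# entropy produced per time step.
def lyapunov_logistic(r=4.0, x0=0.3, n_steps=100_000, burn_in=1_000):
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    log_sum = 0.0
    for _ in range(n_steps):
        # |f'(x)| = |r*(1 - 2x)| is the local stretching factor
        log_sum += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return log_sum / n_steps          # nats per step

lam = lyapunov_logistic()
bits_per_step = lam / np.log(2)       # positive exponent -> branching rate
print(f"lambda = {lam:.3f} nats/step = {bits_per_step:.3f} bits/step")
```

The sum of all such positive exponents, over the non-thermalized degrees of freedom, is the claimed branching rate.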
The original question postulates that the macroscopic system starts in a quantum state corresponding to some comprehensible classical configuration, i.e., the system is initially in a quantum state whose Wigner function is localized around some classical point in phase space.… [continue reading]
[This is a vague post intended to give some intuition about how particular toy models of decoherence fit in to the much hairier question of why the macroscopic world appears classical.]
A spatial superposition of a large object is a common model to explain the importance of decoherence in understanding the macroscopic classical world. If you take a rock and put it in a coherent superposition of two locations separated by a macroscopic distance, you find that the initial pure state of the rock is very, very, very quickly decohered into an incoherent mixture of the two positions by the combined effect of things like stray thermal photons, gas molecules, or even the cosmic microwave background.
Formally, the thing you are superposing is the center-of-mass (COM) variable of the rock. For simplicity one typically considers the internal state of the rock (i.e., all its degrees of freedom besides the COM) to be in a (possibly mixed) quantum state that is uncorrelated with the COM. This toy model then explains (with caveats) why the COM can be treated as a “classical variable”, but it doesn’t immediately explain why the rock as a whole can be considered classical. One might ask: what would that mean, anyways? Certainly, parts of the rock still have quantum aspects (e.g., its spectroscopic properties). For Schrödinger’s cat, how is the decoherence of its COM related to the fact that the cat, considered holistically, is either dead or alive but not both?
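The COM toy model is easy to sketch numerically. In the long-wavelength scattering limit (Joos–Zeh), the off-diagonal element of the COM density matrix for a superposition of two positions separated by dx decays as exp(−Λ·dx²·t). The localization rate Λ, the separation, and the timescale below are illustrative placeholders, not realistic values (for a dust grain in air, Λ is astronomically larger):

```python
import numpy as np

# Hedged toy model of COM decoherence. Lambda_loc is a made-up
# localization rate chosen only to illustrate the scaling.
Lambda_loc = 1e20      # localization rate, 1/(m^2 s)  -- illustrative only
dx = 1e-3              # 1 mm separation between the two branches
t = 1e-12              # one picosecond

# Pure equal superposition of the two positions, as a 2x2 density matrix
rho = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])

decay = np.exp(-Lambda_loc * dx**2 * t)   # suppression of coherence
rho_t = rho.copy()
rho_t[0, 1] *= decay
rho_t[1, 0] *= decay

purity = np.trace(rho_t @ rho_t)          # 1 for pure, 1/2 for fully mixed
print(f"coherence factor = {decay:.3e}, purity Tr(rho^2) = {purity:.3f}")
```

With these (made-up) numbers the coherence factor is already negligible after a picosecond, and the purity drops to 1/2: an incoherent mixture of the two positions, which is the sense in which the COM becomes a “classical variable”.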
Consider a macroscopic object with Avogadro’s number of particles N, which means it would be described classically in microscopic detail by 3N variables parameterizing configuration space in three dimensions. (Ignore spin.) We know at least two things immediately about the corresponding quantum system:
(1) Decoherence with the external environment prevents the system from exploring the entire Hilbert space associated with the 3N continuous degrees of freedom.… [continue reading]
Last month Scott Aaronson was kind enough to invite me out to MIT to give a seminar to the quantum information group. I presented a small uniqueness theorem which I think is an important intermediary result on the way to solving the set selection problem (or, equivalently, to obtaining an algorithm for breaking the wavefunction of the universe up into branches). I’m not sure when I’ll have a chance to write this up formally, so for now I’m just making the slides available here.
Scott’s a fantastic, thoughtful host, and I got a lot of great questions from the audience. Thanks to everyone there for having me.… [continue reading]
(This post is vague, and sheer speculation.)
Following a great conversation with Miles Stoudenmire here at PI, I went back and read a paper I forgot about: “Entanglement and the foundations of statistical mechanics” by Popescu et al. [S. Popescu, A. Short, and A. Winter, Nature Physics 2, 754–758 (2006); free PDF available]. This is one of those papers that has a great simple idea, where you’re not sure if it’s profound or trivial, and whether it’s well known or novel. (They cite references 3-6 as “Significant results along similar lines”; let me know if you’ve read any of these and think they’re more useful.) Anyways, here’s some background on how I think about this.
If a pure quantum state $\vert \psi \rangle$ is drawn at random (according to the Haar measure) from a $D$-dimensional vector space $\mathcal{H}$, then the entanglement entropy

$S(\rho_{\mathcal{S}}) = - \mathrm{Tr}[\rho_{\mathcal{S}} \ln \rho_{\mathcal{S}}], \qquad \rho_{\mathcal{S}} = \mathrm{Tr}_{\mathcal{E}} \vert \psi \rangle \langle \psi \vert$

across a tensor decomposition $\mathcal{H} = \mathcal{S} \otimes \mathcal{E}$ into system and environment is highly likely to be almost the maximum

$S_{\mathrm{max}} = \ln \min(\dim \mathcal{S}, \dim \mathcal{E})$

for any such choice of decomposition $\mathcal{S} \otimes \mathcal{E}$. More precisely, if we fix the ratio $\dim \mathcal{S} / \dim \mathcal{E}$ and let $D \to \infty$, then the fraction of the Haar volume of states that have entanglement entropy more than an exponentially small (in $D$) amount away from the maximum is suppressed exponentially (in $D$). This was known as Page’s conjecture [D. Page, “Average entropy of a subsystem”], and was later proved [S. Foong and S. Kanno, “Proof of Page’s conjecture on the average entropy of a subsystem”; J. Sánchez-Ruiz, “Simple proof of Page’s conjecture on the average entropy of a subsystem”]; it is a straightforward consequence of the concentration of measure phenomenon.… [continue reading]
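This concentration is easy to see numerically. The sketch below samples Haar-random pure states (using the standard fact that a normalized complex Gaussian matrix is Haar-distributed, with the Schmidt spectrum given by its singular values) and compares the mean entanglement entropy to the approximate form of Page’s average, $\ln m - m/(2n)$ for system dimension $m$ and environment dimension $n \geq m$; the particular dimensions and sample count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_entanglement_entropy(dim_s, dim_e, n_samples=200):
    """Mean entanglement entropy (in nats) of Haar-random pure states."""
    ents = []
    for _ in range(n_samples):
        # A normalized complex Gaussian matrix is a Haar-random pure state
        # on the tensor product of a dim_s and a dim_e dimensional space.
        psi = rng.normal(size=(dim_s, dim_e)) + 1j * rng.normal(size=(dim_s, dim_e))
        psi /= np.linalg.norm(psi)
        # Squared singular values = eigenvalues of the reduced state.
        p = np.linalg.svd(psi, compute_uv=False) ** 2
        p = p[p > 1e-15]
        ents.append(-np.sum(p * np.log(p)))
    return np.mean(ents)

m, n = 8, 64                       # system and environment dimensions
mean_S = haar_entanglement_entropy(m, n)
page = np.log(m) - m / (2 * n)     # Page's average, approximate form
print(f"mean entropy = {mean_S:.4f}, Page estimate = {page:.4f}, max = {np.log(m):.4f}")
```

Already at these tiny dimensions the mean sits within a fraction of a percent of the maximum $\ln m$, and the sample-to-sample spread is minute; this is concentration of measure in action.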
Physics StackExchange user QuestionAnswers asked the question “Is the preferred basis problem solved?”, and I reproduced my “answer” (read: discussion) in a post last week. He had some thoughtful follow-up questions, and (with his permission) I am going to answer them here. His questions are in bold, with minor punctuation changes.
How serious would you consider what you call the “Kent set-selection” problem?
If a set of CHs could be shown to be impossible to find, then this would break QM without necessarily telling us how to correct it. (Similar problems exist with the breakdown of gravity at the Planck scale.) Although I worry about this, I think it’s unlikely, and most people think it’s very unlikely. If a set can be found, but no principle can be found to prefer it, I would consider QM to be correct but incomplete. It would kinda be like if big bang nucleosynthesis had not been discovered to explain the primordial abundances of the elements.
And what did Zurek think of it, did he agree that it’s a substantial problem?
I think Wojciech believes a set of consistent histories (CHs) corresponding to the branch structure could be found, but that no one will find a satisfying beautiful principle within the CH framework which singles out the preferred set from the many, many other sets. He believes the concept of redundant records (see “quantum Darwinism”) is key, and that a set of CHs could be found after the fact, but that this is probably not important. I am actually leaving for NM on Friday to work with him on a joint paper exploring the connection between redundancy and histories.… [continue reading]
Unfortunately, physicists and philosophers disagree on what exactly the preferred basis problem is, what would constitute a solution, and how it relates to (or subsumes) “the measurement problem” more generally. In my opinion, the most general version of the preferred basis problem was best articulated by Adrian Kent and Fay Dowker near the end of their 1996 article “On the Consistent Histories Approach to Quantum Mechanics” in the Journal of Statistical Physics. That article is long, so I will try to quickly summarize the idea.
Kent and Dowker analyzed the question of whether the consistent histories formalism provides a satisfactory and complete account of quantum mechanics (QM). Contrary to what is often said, consistent histories and many-worlds need not be opposing interpretations of quantum mechanics. [Of course, some consistent historians make ontological claims about how the histories are “real”, whereas the many-worlders might say that the wavefunction is more “real”. In this sense they are contradictory. Personally, I think this is purely a matter of taste.] Instead, consistent histories is a good mathematical framework for rigorously identifying the branch structure of the wavefunction of the universe. [Note that although many-worlders may not consider the consistent histories formalism the only possible way to mathematically identify branch structure, I believe most would agree that if, in the future, some branch structure were identified using a completely different formalism, it could be described at least approximately by the consistent histories formalism. Consistent histories may not be perfect, but it’s unlikely that the ideas are totally wrong.] Most many-worlders would agree that unambiguously describing this branch structure would be very nice (although they might disagree on whether this is “necessary” for QM to be a complete theory).… [continue reading]