Approach to equilibrium in a pure-state universe

(This post is vague, and sheer speculation.)

Following a great conversation with Miles Stoudenmire here at PI, I went back and read a paper I had forgotten about: “Entanglement and the foundations of statistical mechanics” by Popescu et al.[1] This is one of those papers built on a great, simple idea, where you’re not sure whether it’s profound or trivial, and whether it’s well known or novel. (They cite their references 3-6 as “Significant results along similar lines”; let me know if you’ve read any of these and think they’re more useful.) Anyways, here’s some background on how I think about this.

If a pure quantum state \vert \psi \rangle is drawn at random (according to the Haar measure) from a d_S d_E-dimensional vector space \mathcal{H}, then the entanglement entropy

    \[S(\rho_S) = -\mathrm{Tr}[\rho_S \log \rho_S], \qquad \rho_S = \mathrm{Tr}_E[\vert \psi \rangle \langle \psi \vert]\]

across a tensor decomposition into system \mathcal{S} and environment \mathcal{E} is highly likely to be almost the maximum

    \[S_{\mathrm{max}} = \log_2(\min(d_S,d_E)) \,\, \mathrm{bits},\]

for any such choice of decomposition \mathcal{H} = \mathcal{S} \otimes \mathcal{E}. More precisely, if we fix the ratio d_S/d_E and let d_S\to \infty, then the fraction of the Haar volume of states whose entanglement entropy lies more than an exponentially small (in d_S) amount below the maximum is itself exponentially suppressed (in d_S). This was known as Page’s conjecture,[2] and was later proved;[3][4] it is a straightforward consequence of the concentration of measure phenomenon.
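To make this concrete, here’s a quick numerical sketch (mine, not from any of the papers above): sample Haar-random pure states with NumPy, compute the entanglement entropy from the Schmidt coefficients, and check that it hugs the maximum. The dimensions d_S = 4 and d_E = 64 are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(d):
    """Haar-random pure state: a normalized complex Gaussian vector."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def entanglement_entropy_bits(psi, d_S, d_E):
    """S(rho_S) = -Tr[rho_S log2 rho_S], computed from Schmidt coefficients."""
    M = psi.reshape(d_S, d_E)                    # |psi> = sum_ij M_ij |i>_S |j>_E
    p = np.linalg.svd(M, compute_uv=False) ** 2  # eigenvalues of rho_S
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

d_S, d_E = 4, 64
S_avg = np.mean([entanglement_entropy_bits(haar_state(d_S * d_E), d_S, d_E)
                 for _ in range(50)])
S_max = np.log2(min(d_S, d_E))  # 2 bits
print(S_avg, S_max)
```

With d_E much larger than d_S, the average entropy falls short of the maximum by only d_S/(2 d_E \ln 2) ≈ 0.05 bits, Page’s result.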

Now, for any given Hermitian operator H on the vector space acting as a Hamiltonian, we can assign an expected value of the energy \langle H \rangle_\psi = \langle \psi \vert H \vert \psi \rangle to a given vector \vert \psi \rangle. The Haar measure, then, can be considered a Gibbs probability distribution p_\psi \propto e^{-\beta\langle H \rangle_\psi} in the infinite temperature limit (\beta \to 0), i.e. all states are equally likely. The Page conjecture is then exactly what you’d expect if a subsystem of a global pure state is at infinite temperature — maximally mixed — given its size. Stating it in this way prompts the question of whether there is a way to extend this statement about typical entanglement entropies to cases where the probability distribution is one for finite temperature.

Popescu et al. provide the affirmative answer. The main statement, eq. (2) in their article, is that for arbitrarily small \epsilon there exist

    \[\eta = \epsilon +\frac{1}{2}\sqrt{\frac{d_S}{d_E^{\mathrm{eff}}}} \quad , \quad \eta' = 4e^{-C d_R \epsilon^2}\]

such that

    \[\frac{V[\{ \vert \phi\rangle \in \mathcal{H}_R \vert D(\rho_S(\phi),\Omega_S)\ge\eta\}]}{V[\{ \vert \phi\rangle \in \mathcal{H}_R\}]} \le \eta'.\]

Here, D is the trace distance, \mathcal{H}_R is the subspace of the global Hilbert space satisfying some constraint (like having a certain total energy), \rho_S is the reduced state of the system, d_S and d_R are the dimensions of the system and the constrained subspace, C is a positive constant, and

    \[d_E^{\mathrm{eff}} = \frac{1}{\mathrm{Tr} \Omega_E^2} \ge \frac{d_R}{d_S}\]

is the effective size of the environment. In these expressions, \Omega_S = \langle \rho_S \rangle_{\mathcal{H}_R} and \Omega_E = \langle \rho_E \rangle_{\mathcal{H}_R} are the Haar-average reduced states conditional on the constraint.
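As a sanity check, here is a toy model of my own (not the paper’s): stand in for the constraint with a fixed Haar-random subspace \mathcal{H}_R, estimate \Omega_S by Monte Carlo, and verify that the typical trace distance D(\rho_S, \Omega_S) is small compared to the crude bound \frac{1}{2}\sqrt{d_S^2/d_R} obtained from d_E^{\mathrm{eff}} \ge d_R/d_S.

```python
import numpy as np

rng = np.random.default_rng(1)
d_S, d_E, d_R = 2, 128, 64  # system, environment, and constrained-subspace dims

# Stand in for "the constraint" with a fixed Haar-random d_R-dimensional
# subspace H_R of H_S (x) H_E, specified by an isometry V: C^{d_R} -> C^{d_S d_E}.
A = rng.normal(size=(d_S * d_E, d_R)) + 1j * rng.normal(size=(d_S * d_E, d_R))
V, _ = np.linalg.qr(A)

def sample_rho_S():
    """Reduced system state of a Haar-random pure state drawn from H_R."""
    c = rng.normal(size=d_R) + 1j * rng.normal(size=d_R)
    psi = V @ (c / np.linalg.norm(c))
    M = psi.reshape(d_S, d_E)
    return M @ M.conj().T

def trace_distance(a, b):
    """D(a, b) = (1/2) ||a - b||_1 for Hermitian matrices."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

samples = [sample_rho_S() for _ in range(300)]
Omega_S = np.mean(samples, axis=0)         # Monte Carlo stand-in for <rho_S>_{H_R}
D_avg = np.mean([trace_distance(r, Omega_S) for r in samples])
eta_crude = 0.5 * np.sqrt(d_S ** 2 / d_R)  # uses the bound d_E_eff >= d_R / d_S
print(D_avg, eta_crude)
```

A random subspace is of course a cartoon of an energy shell, but it captures the kinematic content of the theorem: almost every state consistent with the constraint looks locally like \Omega_S.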

So given a mere energy constraint on the global state of the universe, any given subsystem is, with high probability according to the (constraint-conditional) Haar measure, in a thermal state. But of course, we see plenty of subsystems in the universe around us that aren’t in thermal states (thank goodness). With caveats, we conclude that the global state is not random subject to some simple energy constraint; rather, the universe apparently had a very low-entropy initial condition that is a lot more restrictive than a fixed total energy. So the question is, what does this result tell us about the not-fully-thermalized state out there in the real world? Popescu et al.:

However, because almost all states of the universe are such that the system is thermalized, we anticipate that most evolutions will quickly carry any initial state to a thermal state. Furthermore, as information about the system will tend to leak into the environment over time, we might expect that their entanglement, and hence entropy, will increase. It is conceivable that this is the mechanism behind the second law of thermodynamics.

And they conclude:

In future work, we hope to go beyond the kinematic viewpoint presented here to address the dynamics of thermalization. In particular, we will investigate under what conditions the state of the universe will evolve into (and spend almost all of its later time in) the large region of its Hilbert space in which its subsystems are thermalized. Some results in this direction have already been obtained^{19–22}, and we hope that the new results and techniques introduced in this paper will lead to further exciting advances in this area.

How exactly one describes this thermalization process is a big open question. My strong intuition is that this is inextricably tied up with the set-selection problem (i.e. the non-metaphysical part of the measurement problem); solving one will solve the other. Here is my thinking.

One way to describe the set-selection problem is that we need an objective principle for selecting the “branches” in the wavefunction of the universe. These branches are to be identified with distinct, classical, macroscopic outcomes. Any variable (e.g. the strength of a macroscopic magnetic field, or the position of the Earth’s center of mass) that you might reasonably describe as “classical” needs to be objective in this sense: multiple observers can all infer the variable’s value and come to an agreement on it. This necessarily means that the variable must be recorded in multiple spatial locations, at the very least in the brains of the observers. In other words, if the universe is described by a global wavefunction, then different wavefunction components corresponding to different classical events (e.g. the magnetic field is turned on or not) must be “locally orthogonal” in the sense that you can distinguish the outcomes by making a measurement only at the locations of the records. In yet other words, the subsystems storing those records must be correlated; if the atom has decayed then the cat is dead and the box is cold and the experimenter sees the dead cat or the atom has not decayed and the cat is alive and the box is warm and the experimenter sees a live cat. (All of these systems are correlated.)

This seems obvious and trivial, but in fact the above theorem shows that this does not hold for the exponentially vast bulk of Hilbert space. Given a simple energy constraint on the global state, or no constraint at all as in the special case of Page’s conjecture, the Haar-measure-typical local states of any subsystem containing the two record locations will be thermal with extremely high likelihood, and hence the records (assuming they are sufficiently far apart to not be interacting through the Hamiltonian) will be completely uncorrelated. Thus the presence of records is an extremely strong constraint.
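A toy illustration of this point (my construction, not anyone’s published example): compare the quantum mutual information between two “record” qubits inside a Haar-random global state against a GHZ-like branching state, where the records are perfectly correlated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10  # total qubits; qubits 0 and 1 play the role of two distant record sites

def rho_two_qubits(psi):
    """Reduced state of qubits 0 and 1 from an n-qubit pure state."""
    M = psi.reshape(4, 2 ** (n - 2))
    return M @ M.conj().T

def entropy_bits(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def mutual_info_bits(rho_AB):
    """I(A:B) = S(A) + S(B) - S(AB) for a two-qubit state rho_AB."""
    T = rho_AB.reshape(2, 2, 2, 2)          # indices (a, b, a', b')
    rho_A = np.trace(T, axis1=1, axis2=3)   # trace out qubit B
    rho_B = np.trace(T, axis1=0, axis2=2)   # trace out qubit A
    return entropy_bits(rho_A) + entropy_bits(rho_B) - entropy_bits(rho_AB)

# Haar-random global state: the two "records" are nearly maximally mixed
# and nearly uncorrelated.
v = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
I_haar = mutual_info_bits(rho_two_qubits(v / np.linalg.norm(v)))

# GHZ-like branching state (|00...0> + |11...1>)/sqrt(2): the records agree.
psi_ghz = np.zeros(2 ** n, dtype=complex)
psi_ghz[0] = psi_ghz[-1] = 1 / np.sqrt(2)
I_ghz = mutual_info_bits(rho_two_qubits(psi_ghz))

print(I_haar, I_ghz)
```

The branching state carries a full bit of (classical) correlation between the records, while the typical state carries essentially none, which is the sense in which the presence of records is an atypicality constraint.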

What’s more surprising is that this constraint of redundant records, which I have so far argued only as a necessary condition for classicality, may actually turn out to be sufficient in a sense I endeavour to make precise. A big clue is that, given a notion of subsystems, the wavefunction can be decomposed into a unique sum of locally orthogonal branches, at least if we make the unrealistic requirement that the branches are locally orthogonal at every subsystem (i.e. records everywhere). If, as seems to be the case, the number of quantum fluctuations recorded as classical outcomes increases as time goes on, with records that persist into the future, then the number of branches should be ever increasing.
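Here’s a small sketch (with a hypothetical two-branch state I cooked up for illustration) of how such a decomposition can be recovered from the wavefunction alone when records exist at every site: when the branch weights are distinct, the local record basis is just the eigenbasis of each site’s reduced density matrix, and projecting onto it peels off the branches.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3  # sites; two branches, each recorded at every site

# A random orthonormal "record basis" at each site (columns of a unitary).
bases = []
for _ in range(n):
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, _ = np.linalg.qr(A)
    bases.append(Q)

def product(vecs):
    out = vecs[0]
    for v in vecs[1:]:
        out = np.kron(out, v)
    return out

# Two locally orthogonal branches with distinct weights.
p = [0.7, 0.3]
branches = [product([B[:, i] for B in bases]) for i in range(2)]
psi = np.sqrt(p[0]) * branches[0] + np.sqrt(p[1]) * branches[1]

# Recover the branches from psi alone: diagonalize site 0's reduced density
# matrix, then project psi onto each of its eigenvectors at site 0.
M = psi.reshape(2, 2 ** (n - 1))
rho0 = M @ M.conj().T
_, U = np.linalg.eigh(rho0)
recovered = [np.kron(np.outer(U[:, i], U[:, i].conj()),
                     np.eye(2 ** (n - 1))) @ psi for i in range(2)]

# Each recovered (unnormalized) branch matches one original up to phase.
overlaps = [max(abs(np.vdot(b, r)) / np.linalg.norm(r) for b in branches)
            for r in recovered]
print(overlaps)
```

The uniqueness of the decomposition shows up here as the fact that the local eigenbases (and hence the branches) are fixed by psi itself, with no extra choices.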

Of course, most possible states of the universe have no such non-trivial decomposition because they contain no redundant records. And we expect the state of our universe to approach a thermal state in the distant future: heat death. Therefore, the number of branches will increase for some time, and then eventually will have to decrease. I expect the process of our universe thermalizing is exactly the process of the branches decreasing in number and, in a way that is still very mysterious to me, “recombining”.


The somber face of a man pondering the heat death of the universe.

Consistent with the fact that thermalization is a process taking place over a significant time interval, I wager that the recombination of the branches will be gradual. This is closely connected to the problem of relaxing the nonphysical idealization I made in the arXiv paper above of requiring that branches be locally orthogonal everywhere. Rather, my intuition is that branches become “classical” when they are locally recorded in so many places that their records are effectively thermodynamically un-erasable. But this un-erasability isn’t eternal, for the simple reason that the number of records we need to keep branches orthogonal keeps growing as the branches proliferate, while the size of the universe available for storing them stays the same. Thermalization, I conjecture, sets in on exactly the time scale at which we start running out of microscopic degrees of freedom into which to dump the entropy generated by quantum fluctuations. This fuzziness and For-All-Practical-Purposes-ness of the branch structure is consistent with the fact that we think our macroscopic bodily movements are completely classical, even while we accept that, in principle, grad students could be sent through a two-slit experiment; it’s merely extremely unlikely.

That, in a furiously vague nutshell, is how I think we’ll describe the process of thermalization of a pure-state universe and solve the set-selection problem with the same stroke.

Footnotes


  1. S. Popescu, A. Short, and A. Winter, “Entanglement and the foundations of statistical mechanics”, Nature Physics 2, 754–758 (2006) [Free PDF].
  2. D. Page, “Average entropy of a subsystem”, Phys. Rev. Lett. 71, 1291 (1993).
  3. S. Foong and S. Kanno, “Proof of Page’s conjecture on the average entropy of a subsystem”, Phys. Rev. Lett. 72, 1148 (1994).
  4. J. Sánchez-Ruiz, “Simple proof of Page’s conjecture on the average entropy of a subsystem”, Phys. Rev. E 52, 5653 (1995).

One Comment

  1. Reminds me of Lev Schulman’s book, “Time’s Arrows and Quantum Measurement”. It’s an effort to square deterministic evolution equations with apparently random quantum measurement outcomes, by observing that, with the right a priori phase relationships for mixed states, you can get pure states out of a measurement apparatus through the regular evolution equations. You get rid of wavefunction collapse entirely, but the price you pay is that you require an enormous conspiracy in the initial conditions of the universal wavefunction to anticipate every measurement process that will ever actually take place, which you can do, because the universe is both quantum and completely deterministic.

    The commonality with the present analysis is the low-entropy initial condition, of course, but also, if I may say so, the speculative character.
