(I know you’re just trying to recall a standard story for the reader here rather than offer an authoritative treatment, but I think this is sufficiently off the mark that you risk misleading people.)

Thanks very much for your comments, and sorry for the big delay in getting back to you. Also, sorry if my response runs long; I think these issues are both important and subtle, so I tend to belabor things.

> 1. …the intrinsically approximate character of the many-worlds + decoherence characterization of branching makes its branches, by themselves, implausible candidates for the substance of reality. The idea is to find something else with precise evolution rules that can be written in underneath to which the macro-reality of many-worlds + decoherence can then be viewed as an approximation.

But if we assume that branches are understood approximately [1] and we just want to pick something that is precise, why not just arbitrarily choose a precise branch structure? I interpret your proposal to be defined by a two-step process: First, pick a precise branch structure arbitrarily from the set of all branch structures that are compatible with the range of ambiguity inherent in the smooth process of decoherence in the wavefunction of conventional quantum mechanics (henceforth “the traditional wavefunction”). Second, pick one of those branches and evolve it backward in time to t=0, then declare that to be the real world. So why not just stop after the first step?

You might retort that under your ontological hypothesis (that the only fundamental object is the preferred branch) the traditional wavefunction with non-realized branches is just a human construction. And indeed, we can’t rule this out. But my response is that, until we can write down a preferred-branch theory *without* reference to the traditional wavefunction, we ought also to consider the alternative ontology of precise branches. These do have precise evolution rules, which personally I prefer because the inelegance is transparent.

It’s true that the evolution of a single preferred branch is smooth (in time) compared to the weird discrete-time nature of branching, which you emphasize in the good new paragraph from the updated version of your paper (“One piece of the formulation of quantum mechanics we now propose remains approximate, but another has become exact….”). But this smoothness is achieved by massive microscopic conspiracy. I would characterize this as merely obscuring the inelegance using an implicit definition that draws on an assumed branch structure with discrete times. Note that this criticism would not apply if you had an alternate way of defining the smoothly-evolving preferred branch.

> 2….The environment bits are not additional degrees of freedom in the same class as Bohm trajectories.

When I said “…the bit string *b* used to specify the preferred branch … is an equivalently inelegant structure”, I definitely didn’t mean to suggest that the environmental bits were additional dynamical degrees of freedom like the Bohm particle. My point is just that, from an *information-theoretic* point of view, fully specifying your theory requires writing down a big chunk of entropy (i.e., *some* way of identifying the preferred branch from all others) which is not present in normal quantum mechanics.

Since I understand your proposal to be about repackaging for elegance rather than increasing the observational explanatory power of the theory, this is not a big deal.

> 3….First, for many (most?) systems, if you require consistent histories criteria to be fulfilled exactly, you will have no takers at all, so no macro-reality.

It’s true that a set of histories specified using mathematically-simple-to-define projectors is unlikely to be exactly consistent, but there will exist an exactly consistent set of histories that is close enough to be observationally indistinguishable. (See J. N. McElwaine, PRA 53, 2021 (1996), especially the first three paragraphs in Sec. II and references therein.) Relying on this sort of existence argument without actually specifying the consistent set is definitely a flaw, but it applies equally well to a preferred-branch theory.
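To make the “approximately but not exactly consistent” point concrete, here is a minimal numpy sketch. Everything in it (the qubit state, the projectors, the trivial time evolution) is my own arbitrary toy choice, not from any paper: it computes the decoherence functional for two-time histories and shows that the diagonal probabilities sum to one while the off-diagonal terms are generically nonzero, i.e., a mathematically simple set of histories fails exact consistency.

```python
import numpy as np

# Toy check of the (medium) consistency condition D(a, b) = 0 for a != b,
# for class operators built from projectors at two times acting on a qubit.
# All choices here are illustrative, not taken from the discussion above.

rng = np.random.default_rng(0)

# Random initial pure state of a qubit
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Projectors onto the z-basis at time t1, and onto a rotated basis at t2
P = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
theta = 0.3
v0 = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
v1 = np.array([-np.sin(theta), np.cos(theta)], dtype=complex)
Q = [np.outer(v0, v0.conj()), np.outer(v1, v1.conj())]

# Class operators C_(i,j) = Q_j P_i (trivial time evolution, for simplicity)
histories = [(i, j) for i in range(2) for j in range(2)]
C = {h: Q[h[1]] @ P[h[0]] for h in histories}

# Decoherence functional D(a, b) = Tr[ C_a rho C_b^dagger ]
D = {(a, b): np.trace(C[a] @ rho @ C[b].conj().T) for a in histories for b in histories}

# Diagonal terms sum to 1 (probabilities), but off-diagonal terms are
# generically nonzero: this set of histories is not exactly consistent.
total_prob = sum(D[(a, a)].real for a in histories)
max_offdiag = max(abs(D[(a, b)]) for a in histories for b in histories if a != b)
print(total_prob)   # ~1.0
print(max_offdiag)  # generically > 0
```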

>…Second, there are sets of 4 propositions such that every pair is consistent, but a particular subset of 3 of them can not be consistent.

Yes, without further conditions on the propositions (represented mathematically by projectors) that go into histories, it’s impossible to uniquely identify any preferred set of histories or, indeed, any single true proposition. Another pathology (related to the one you mention) is *contrary inferences*: given an initial state and some observed final data (e.g., the outcome of an experiment), there will exist two incompatible sets of consistent histories such that $P$ is true with certainty in one set and $Q$ is true with certainty in the other, where $P$ and $Q$ are *commuting* contrary propositions: $PQ = QP = 0$ but $P + Q \neq 1$.
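For what it’s worth, the bare definition of contrary (as opposed to contradictory) propositions is easy to exhibit numerically. This snippet is my own trivial choice of projectors, not Kent’s actual contrary-inference construction: it shows commuting projectors that are mutually exclusive but not exhaustive, so the failure of one does not imply the other.

```python
import numpy as np

# Minimal illustration of "contrary" propositions: commuting projectors
# P, Q with PQ = QP = 0 but P + Q != identity, so "not P" does not
# imply Q. (A toy example of the definition only, not Kent's construction.)

dim = 3
P = np.zeros((dim, dim)); P[0, 0] = 1.0  # proposition "state is |0>"
Q = np.zeros((dim, dim)); Q[1, 1] = 1.0  # proposition "state is |1>"

mutually_exclusive = np.allclose(P @ Q, 0) and np.allclose(Q @ P, 0)
exhaustive = np.allclose(P + Q, np.eye(dim))
print(mutually_exclusive, exhaustive)  # True False: contrary, not contradictory
```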

For exactly these reasons, I agree with folks like Kent, Dowker, Bassi, Ghirardi, Okon, and Sudarsky that there is a set-selection problem, i.e., that consistent histories needs to be augmented with a criterion for picking out (at least approximately) a preferred set. (This is equivalent to a full precise specification of branch structure, and is basically the maximal generalization of the decoherence program.) I would characterize consistent histories as a language for making classical-logic statements about wavefunction branches rather than a complete theory.

> 4…Whether records of the branch structure of the universe persist into thermalization depends on how the records are encoded.

I predict that if you try to track records through the period of thermalization, you will either find that they dissolve or you will be forced to distort your definition of records until it becomes meaningless. In particular, records will become completely delocalized, so that measuring the “environment” (or the “system”) would require a joint measurement of the entire universe. I am *extremely* interested in how to usefully and mathematically generalize the concept of records to one that makes sense at late times, so please let me know if you disagree.

> … the idea is that the observable branch structure of the universe is not primary. It is supposed to be an approximate macro feature of an underlying ensemble of initial states. So if it becomes harder to notice at some late time in history, it’s unclear to me why that’s a problem.

It’s worrying because you’re privileging (without explanation) some indeterminate intermediate time period that lies between now and heat death. Here’s what I mean:

If we were to look around at the observationally accessible macro features at noon today, the simplest (or otherwise most likely) possible quantum state of the universe consistent with those features would be the traditional wavefunction, which is known to branch, i.e., to develop superpositions of distinct macro features later in time.

You are suggesting that instead we should consider a very different state which is consistent with current macro features but which also evolves through a sequence of states that are each consistent with individual macro configurations (i.e., no macro superposition) at their respective times. The *way* you implicitly construct this preferred state is by assuming we can distinguish the orthonormal set of macro-feature eigenstates at some final time — i.e., that the branch structure is at least approximately understood and defined for the traditional wavefunction — and then just choosing to privilege one branch.

The problem is that there is no final time just before thermalization. And if you pick a time long before thermalization, then you’ll get macroscopic superpositions *following* that time in your preferred branch.

I claim that branches in the traditional wavefunction (which are inferred through decoherence theory) will start to *smoothly* dissolve into each other as heat death approaches, so that each is a joint eigenstate of *fewer and fewer* macro observables. Similarly, I conjecture that if you tried to go beyond decoherence theory and define a more fine-grained branch structure for the traditional wavefunction, you’d find either that (A) it was unstable from one time step to the next or (B) you had to simply fix macroscopically interpretable branches arbitrarily at some single preferred time prior to heat death and then evolve the branches forward in time without caring about the fact that they didn’t retain any recognizable records or other macro interpretation.

Best,

Jess

[1] P.S.: Since it is my personal passion project, let me emphasize that no one yet has a general method for obtaining, given a candidate wavefunction of the universe (or even of just a large many-body system), the branch decomposition in Eq. (5), *or even an approximation thereto*. Rather, all we have are a collection of toy models where the decomposition is obvious/intuitive, and we extrapolate that it’s possible to find Eq. (5) for the wavefunction of the universe up to an error that is not detectable “for all practical purposes” (FAPP). (This is in contrast to the Bohmian approach, where a simple principle is declared that exactly specifies the ensemble of possibilities, i.e., the probability distribution for the Bohm particle position.) This is mostly a separate issue from my main critique of your paper, so I am assuming for the sake of discussion that a well-defined procedure for finding Eq. (5) exists up to a small error.

1. I agree that the fuzziness of many-worlds + decoherence is simply repackaged. But that repacking is actually the point of the whole thing. The paper starts with the hypothesis that the intrinsically approximate character of the many-worlds + decoherence characterization of branching makes its branches, by themselves, implausible candidates for the substance of reality. The idea is to find something else with precise evolution rules that can be written in underneath to which the macro-reality of many-worlds + decoherence can then be viewed as an approximation.

2. The environment bits are not additional degrees of freedom in the same class as Bohm trajectories. They were meant, in a particular model, to be re-labelings of degrees of freedom already present in the system. In any case, an updated version of the paper was posted a while back without the bit vectors.

3. Two problems with consistent histories from my point of view. First, for many (most?) systems, if you require consistent histories criteria to be fulfilled exactly, you will have no takers at all, so no macro-reality. Second, there are sets of 4 propositions such that every pair is consistent, but a particular subset of 3 of them can not be consistent.

4. Two responses also to your comment that eventual thermalization of the universe is a problem for the proposal. Whether records of the branch structure of the universe persist into thermalization depends on how the records are encoded. Also, the idea is that the observable branch structure of the universe is not primary. It is supposed to be an approximate macro feature of an underlying ensemble of initial states. So if it becomes harder to notice at some late time in history, it’s unclear to me why that’s a problem.

Best,

Don

I think the most promising model would be a 1D chain of spins with some sort of local interactions. See especially the Brun-Halliwell model [arXiv:quant-ph/9601004]. The idea is that hydrodynamic variables (i.e., locally conserved densities) are excellent candidates for variables that would be redundantly recorded, and would follow quasiclassical trajectories on short timescales.

I don’t know anyone who has tried to do a numerical simulation to find branches based on spatial entanglement structure (rather than by imposing a system-environment distinction by hand). This is something I’ve wanted to do for a while. Email me if you want to discuss more: jessriedel@gmail.com

My technical experience here doesn’t go beyond some simple Mathematica models, but I imagine this could only feasibly be done for a reasonably large spin chain by using a tensor network (in this case a matrix-product state).
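For a very small chain one can dodge tensor networks entirely and just evolve the exact state vector. Here is a rough Python sketch, with parameters that are my own illustrative guesses rather than anything from the literature: a nearest-neighbor transverse-field Ising chain, evolved by exact diagonalization, with the entanglement entropy across the middle cut tracked at the end — the sort of spatial entanglement structure one would want to feed into a branch-finding algorithm.

```python
import numpy as np
from functools import reduce

# Toy exact simulation of N spins under a nearest-neighbor transverse-field
# Ising Hamiltonian, tracking entanglement entropy across the middle cut.
# All parameters (N, coupling, field, time) are illustrative guesses.

N = 8
I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def op_at(op, site):
    """Tensor `op` at `site` with identities on all other sites."""
    return reduce(np.kron, [op if i == site else I2 for i in range(N)])

H = sum(op_at(Z, i) @ op_at(Z, i + 1) for i in range(N - 1))  # ZZ coupling
H = H + 0.9 * sum(op_at(X, i) for i in range(N))              # transverse field

# Evolve the all-up product state for a short time
evals, evecs = np.linalg.eigh(H)
psi0 = np.zeros(2 ** N); psi0[0] = 1.0
t = 2.0
psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

# Entanglement entropy across the middle cut, from the Schmidt spectrum
M = psi_t.reshape(2 ** (N // 2), 2 ** (N // 2))
s = np.linalg.svd(M, compute_uv=False)
p = s ** 2
S = -np.sum(p * np.log(p + 1e-30))
print(S)  # zero for the initial product state; nonzero after evolution
```

The same bookkeeping (Schmidt spectrum across a spatial cut) is what an MPS would give you site-by-site, just without the exponential cost.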

A quantum cellular automaton would be very interesting. One drawback would (I think) be the absence of quasi-classical evolution, which would be ideal for convincingly deriving the appearance of classical trajectories.

I would be interested in trying to run a simulation of a simple system with some of the features that seem important to branching in the real world, such as locality. A QCA, for instance. Seeing such a system evolve, and hopefully form branch-like structures, might be helpful for trying to figure out what ‘branches’ really are (although of course we could only do it for a handful of particles/sites). Has anybody tried to do anything like this? Do you know what sort of software/methods could be used to implement such a simulation?

(1) Strategically, we want to start with the accepted recipe for measurement. As shown by Zurek (discussed here), measurements are really about *amplification*, and the best way I know how to formalize amplification is in terms of *copies of information*. Although I’m very interested in finding another formalization involving time (e.g., divergent information flow?), the simplest mathematical criterion seems to involve just identifying correlated information at some fixed time after the measurement.

(2) Intuitively, if you hand me the macroscopic wavefunction describing Schrödinger’s cat, it seems to me that we can identify the branches without any reference to the historical evolution that generated that state. All I need to do is just look at the entanglement structure of a single time-slice and it’s obvious.

That said, it’s conceivable a toy model with the following features could be found: On a single late-time slice, there are two incompatible candidate branch decompositions with equivalent spatial entanglement structure, but earlier time evolution unambiguously picks out one branch decomposition as the “correct” one. (This might be based on one decomposition having the BFGUTE property.)

I have spent some time trying to find an additional criterion — grounded in the Hamiltonian, the lattice spacing, or the time evolution — that would unambiguously pick out a unique decomposition (unlike in the paper), but everything I tried was ugly/ad-hoc. Obviously, you can just declare a criterion, but I’d like something that was as compelling as the idea (used in the paper) that anything that deserves to be called a measurement must make at least three copies.

I guess my question might be: is it necessarily the case that the branch decomposition can be derived from the tensor product structure / entanglement alone? It seems that more structure is needed, as in the preferred length scale in the paper. The time evolution operator is a natural source of such structure, e.g., if the qubits were fixed in some grid such that only neighbours can influence each other, that would be reflected in the Hamiltonian. So then the ‘correct’ branch decomposition would be a function of the Hamiltonian/additional structure, not just the state itself. Is this basically correct?

Our *expectation* (or hope) is that the number of branches increases monotonically in time and that, furthermore, the branches at an earlier time, when evolved forward, are just a coarse-graining of the branches at a later time. Here, “coarse-graining” reflects the fact that individual branches may subdivide (e.g., when a measurement is made), but if you add the sub-branches together you should recover the parent branches (suitably time-evolved). More precisely, we expect that if the branch decomposition is $\{|\phi_i\rangle\}$ at time $t_1$, and is $\{|\psi_j\rangle\}$ at some later time $t_2 > t_1$, then $|\phi_i\rangle = \sum_{j \in S_i} |\psi_j\rangle$ (up to time evolution), where the $S_i$ form a partition of the set of indices $\{j\}$, i.e., $\cup_i S_i = \{j\}$ and $S_i \cap S_{i'} = \emptyset$ for $i \neq i'$. Let us call this property “branch fine-graining under time evolution” (BFGUTE).
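To illustrate the bookkeeping in BFGUTE — earlier branches equal sums of later sub-branches over a partition of indices — here is a trivial numerical sketch with made-up (unnormalized) branch vectors. It checks nothing physical, only that the coarse-graining construction recovers the total state:

```python
import numpy as np

# Toy check of the BFGUTE bookkeeping: parent branches phi_i are sums of
# sub-branches psi_j over a partition {S_i} of the index set. Vectors here
# are arbitrary made-up vectors, purely to exercise the arithmetic.

rng = np.random.default_rng(1)
dim = 16

# Later-time branches psi_0..psi_4, and a partition of {0,...,4}
psi = [rng.normal(size=dim) for _ in range(5)]
partition = {0: [0, 1], 1: [2, 3, 4]}

# Earlier-time branches are coarse-grainings: phi_i = sum_{j in S_i} psi_j
phi = {i: sum(psi[j] for j in S) for i, S in partition.items()}

# Adding all sub-branches recovers the sum of parent branches (the full state)
total_from_phi = sum(phi.values())
total_from_psi = sum(psi)
print(np.allclose(total_from_phi, total_from_psi))  # True
```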

BFGUTE is merely an unproven desideratum. Indeed, one way to describe the way in which our understanding of quantum mechanics is incomplete is that we have never proven BFGUTE, which is basically the statement that we haven’t proven the Copenhagen and Everett interpretations equivalent. The first step in proving BFGUTE is to find a precise definition of wavefunction branches. Although it would be very nice if we could find a definition of branches that automatically implied BFGUTE, a little thought shows that the BFGUTE property is not enough, on its own, to define branches.

I’m not sure if that answered your questions, but maybe it clarified things enough that you could re-ask?

I’ll settle for a surprisingly interesting list of random links.

> Wrong: Observables are “represented” (?) by Hermitian operators.
>
> Right: Measurements necessarily amplify, and therefore (!) are associated with an orthogonal basis. This is the Schmidt basis of the entangled joint state of the measuring apparatus and the measured system.
>
> More: Wojciech H. Zurek, Phys. Rev. A 76, 052110 (2007), [arXiv:quant-ph/0703160]. Also: [arXiv:1212.3245].
>
> Implication: Observables can be associated with normal, not just Hermitian, operators.

Furthermore, the idea that normal operators are observables follows almost immediately from Wojciech’s 2007 paper (which Hu et al. cite). That paper was one of the reasons I sought him out as my advisor, physically moving from California to New Mexico, and I probably just picked the idea up from him during discussion while I was there.

Of course, the idea itself is much more important than priority. I’d be very glad if their publication makes this idea common knowledge taught in introductory quantum mechanics courses. But I’m not holding my breath…

The Schmidt states can be spatially delocalized while the pointer states are not because the eigenstates of a density matrix are not continuous with the distance between density matrices. That is, two density matrices can be arbitrarily close together while having eigenstates that are always very different. For instance, the maximally mixed state of a qubit can be written

$\rho = \frac{1}{2}\left(|a\rangle\langle a| + |b\rangle\langle b|\right) = \frac{1}{2}\left(|c\rangle\langle c| + |d\rangle\langle d|\right)$

with $\{|a\rangle, |b\rangle\}$ and $\{|c\rangle, |d\rangle\}$ orthonormal, and $|c\rangle, |d\rangle = (|a\rangle \pm |b\rangle)/\sqrt{2}$. Then we can define the perturbations

$\rho_1(\epsilon) = \rho + \epsilon\left(|a\rangle\langle a| - |b\rangle\langle b|\right), \qquad \rho_2(\epsilon) = \rho + \epsilon\left(|c\rangle\langle c| - |d\rangle\langle d|\right).$

The states $\rho_1(\epsilon)$ and $\rho_2(\epsilon)$ are arbitrarily close for small $\epsilon$, but their eigenbases are always maximally different ($\{|a\rangle, |b\rangle\}$ and $\{|c\rangle, |d\rangle\}$, respectively).
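This instability is easy to verify numerically. The following sketch perturbs the maximally mixed qubit state in the z- and x-bases and checks that the two resulting density matrices are within O(ε) of each other in trace distance while their eigenbases have overlap 1/2:

```python
import numpy as np

# Numerical check of the instability described above: two qubit density
# matrices arbitrarily close in trace distance whose eigenbases differ
# maximally (z-basis vs. x-basis).

eps = 1e-6
a = np.array([1.0, 0.0]); b = np.array([0.0, 1.0])   # z-basis
c = (a + b) / np.sqrt(2); d = (a - b) / np.sqrt(2)   # x-basis

rho = 0.5 * np.eye(2)                                # maximally mixed
rho1 = rho + eps * (np.outer(a, a) - np.outer(b, b))
rho2 = rho + eps * (np.outer(c, c) - np.outer(d, d))

# Trace distance is O(eps)...
dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho1 - rho2)))
print(dist)  # ~1.4e-6 (= eps * sqrt(2))

# ...but the eigenbases are the z-basis and x-basis, overlap |<a|c>|^2 = 1/2
_, v1 = np.linalg.eigh(rho1)
_, v2 = np.linalg.eigh(rho2)
overlap = np.abs(v1[:, 0] @ v2[:, 0]) ** 2
print(overlap)  # ~0.5, no matter how small eps is
```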

I’m not sure if the Gaussian case discussed by Page also exploits this instability of the Schmidt basis near points of degeneracy, or if the effect is different. I went back to his paper, but it looked non-trivial to fill in the procedure he sketches on pages 7 and 8.

I think anyone interested in reading about ETH is likely to understand that linearity is implicitly used in obtaining a superposition of eigenstates, just as addition and multiplication are as well. So I don’t think leaving out linearity is a problem. It now looks like a clear and well written motivation section. Thanks for doing that!

Your initial remarks in this blog certainly brought up an interesting point about the reason for a distinction between classical and quantum systems in the context of “thermalization”. Hopefully someone googling will find this whole rather Socratic discussion useful.

Josh

On your prodding, I have now restored that section on the ETH Wikipedia page with the changes I think are appropriate: Eigenstate thermalization hypothesis | Motivation. I did not mention linearity explicitly but, following your lead, I did write down the quantum time evolution of expectation values to clearly demonstrate the persistence of memory about initial conditions, implicitly using aspects of quantum evolution that include linearity. (I implicitly used time-translation invariance too, but likewise did not emphasize this since it is also a property of many chaotic classical systems.) I would welcome additional language discussing how the singular behavior of eigenfunctions of the classical Liouville operator is connected to chaotic evolution, but currently I don’t know enough about it.
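For readers following along: the persistence of memory can be seen in a few lines of numerics. This sketch (random Hamiltonian, observable, and initial state — my own illustrative choices) evaluates ⟨A(t)⟩ = Σ_{m,n} c_m* c_n e^{i(E_m−E_n)t} A_mn and checks that its long-time average matches the “diagonal ensemble” Σ_n |c_n|² A_nn, which manifestly depends on the initial |c_n|²:

```python
import numpy as np

# Long-time average of <A(t)> vs. the diagonal ensemble, for a random
# symmetric Hamiltonian and observable. The nonzero overlap with the
# diagonal ensemble is the "memory of initial conditions": it depends
# on the initial-state coefficients |c_n|^2 in the energy basis.

rng = np.random.default_rng(42)
dim = 50

M = rng.normal(size=(dim, dim)); H = (M + M.T) / 2  # random Hamiltonian
A = rng.normal(size=(dim, dim)); A = (A + A.T) / 2  # random observable

E, V = np.linalg.eigh(H)
psi0 = rng.normal(size=dim); psi0 /= np.linalg.norm(psi0)
c = V.T @ psi0           # initial-state coefficients in the energy basis
A_eig = V.T @ A @ V      # observable in the energy basis

def expect(t):
    psi_t = np.exp(-1j * E * t) * c
    return np.real(psi_t.conj() @ A_eig @ psi_t)

# Long-time average vs. diagonal ensemble
times = np.linspace(100, 1100, 2000)
time_avg = np.mean([expect(t) for t in times])
diag_ens = np.sum(np.abs(c) ** 2 * np.diag(A_eig))
print(time_avg, diag_ens)  # agree up to small fluctuations
```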

Thank you again for the stimulating discussion.

Jess

Thanks for considering what I wrote so carefully. I get the impression that we’re now pretty much on the same page, and that it’s becoming a matter of the loose and perhaps faulty wording people have been using in their linearity argument. I think what you raised was an important question that would be in the minds of many readers upon considering the linearity argument that you discussed at such great length. I agree that what was written in the ETH Wikipedia page, which you deleted, concerning this topic could’ve been better written. But instead of gutting it completely, why don’t you edit it to make it an argument that you find reasonable?

There’s way too much stuff in our exchange to condense it to a few sentences, including background information in statistical mechanics and the fine-grained ergodic theorem (or whatever else people might want to call it). But I’ve cut and pasted, from my previous response here, what I think are the main points that, in my mind, should be added into any cogent discussion of the role of linearity in quantum mechanics and why this doesn’t hold in classical mechanics although the latter is, in some sense, just as linear:

“This fact, which is relevant to your criticism of all those who use linearity to purportedly explain why understanding quantum thermalization is difficult, is really a natural consequence of (1) time translational invariance, (2) unitarity, (3) linearity, and (4) not having batshit-crazy behavior of energy eigenfunctions like you get in classical mechanics.”

I think that you would serve the community well if you were to reintroduce the linearity discussion into that ETH entry, but do it in a way that doesn’t seem so bogus, and perhaps uses some less colloquial verbiage.

Is that something that you think you’d be willing to write, or are you still unconvinced about the utility of such a discussion? I find having intuitive motivations for physics results to be very helpful, and this is a much more subtle point than most people would be equipped to explain; but I think that, with your broad understanding, you should be able to do so very admirably.

Best wishes,

Josh
