I think the most promising model would be a 1D chain of spins with some sort of local interactions. See especially the Brun-Halliwell model [arXiv:quant-ph/9601004]. The idea is that hydrodynamic variables (i.e., locally conserved densities) are excellent candidates for variables that would be redundantly recorded, and would follow quasiclassical trajectories on short timescales.

I don’t know anyone who has tried to do a numerical simulation to find branches based on spatial entanglement structure (rather than by imposing a system-environment distinction by hand). This is something I’ve wanted to do for a while. Email me if you want to discuss more: jessriedel@gmail.com

My technical experience here doesn’t go beyond some simple Mathematica models, but I imagine this could only feasibly be done for a reasonably large spin chain by using a tensor network (in this case, a matrix-product state).
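As a feasibility check, here is a minimal sketch (function and variable names are mine, not from any particular package) of the kind of diagnostic one would compute when looking for spatial entanglement structure: the bipartite entanglement entropy across every cut of an exactly-represented chain. An MPS code would read these off from the Schmidt spectra at each bond instead of doing dense SVDs, which is what makes larger chains feasible.

```python
import numpy as np

def cut_entropies(psi, n_sites, d=2):
    """Entanglement entropy (in bits) across each bipartite cut of a chain."""
    out = []
    for cut in range(1, n_sites):
        # Schmidt coefficients across the cut are the singular values
        # of the state reshaped into a (left block) x (right block) matrix.
        s = np.linalg.svd(psi.reshape(d**cut, d**(n_sites - cut)),
                          compute_uv=False)
        p = s**2
        p = p[p > 1e-12]
        out.append(float(-np.sum(p * np.log2(p))))
    return out

# Example: a 4-site GHZ state carries 1 bit of entanglement across every cut.
n = 4
ghz = np.zeros(2**n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(cut_entropies(ghz, n))  # approximately [1.0, 1.0, 1.0]
```

Exact state vectors like this top out around 20–30 spins; past that, the MPS bond dimension is the controlling resource.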

A quantum cellular automaton would be very interesting. One drawback, I think, is that there wouldn’t be quasiclassical evolution, which would be ideal for convincingly deriving the appearance of classical trajectories.

]]>I would be interested in trying to run a simulation of a simple system with some of the features that seem important to branching in the real world, such as locality; a QCA, for instance. Seeing such a system evolve, and hopefully form branch-like structures, might be helpful for trying to figure out what ‘branches’ really are (although of course we could only do it for a handful of particles/sites). Has anybody tried to do anything like this? Do you know what sort of software/methods could be used to implement such a simulation?

]]>(1) Strategically, we want to start with the accepted recipe for measurement. As shown by Zurek (discussed here), measurements are really about *amplification*, and the best way I know how to formalize amplification is in terms of *copies of information*. Although I’m very interested in finding another formalization involving time (e.g., divergent information flow?), the simplest mathematical criterion seems to involve just identifying correlated information at some fixed time after the measurement.
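To make the “copies of information” idea concrete, here is a toy sketch (system sizes and all names are my own choices, purely for illustration): a system qubit measured by three environment qubits, GHZ-style. Each single environment qubit then shares the system’s full classical bit, which is the redundancy signature of amplification.

```python
import numpy as np

def entropy(r):
    """Von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(r)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def reduced(rho, keep, n):
    """Partial trace of an n-qubit density matrix, keeping qubits in `keep`."""
    t = rho.reshape([2] * (2 * n))
    m = n
    # Trace out unwanted qubits in descending order so axis indices stay valid.
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=q + m)
        m -= 1
    k = len(keep)
    return t.reshape(2**k, 2**k)

# System qubit 0 amplified by environment qubits 1..3 (a GHZ state).
n = 4
psi = np.zeros(2**n)
psi[0] = psi[-1] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

# Each lone environment qubit already holds the system's full 1 bit:
for k in range(1, n):
    mutual = (entropy(reduced(rho, [0], n)) + entropy(reduced(rho, [k], n))
              - entropy(reduced(rho, [0, k], n)))
    print(f"I(system : env qubit {k}) = {mutual:.3f} bits")
```

Each mutual information comes out to 1 bit, so the record is maximally redundant; a state with no amplification (e.g., a product state) would give zero for every fragment.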

(2) Intuitively, if you hand me the macroscopic wavefunction describing Schrödinger’s cat, it seems to me that we can identify the branches without any reference to the historical evolution that generated that state. All I need to do is just look at the entanglement structure of a single time-slice and it’s obvious.

That said, it’s conceivable a toy model with the following features could be found: On a single late-time slice, there are two incompatible candidate branch decompositions with equivalent spatial entanglement structure, but earlier time evolution unambiguously picks out one branch decomposition as the “correct” one. (This might be based on one decomposition having the BFGUTE property.)

I have spent some time trying to find an additional criterion — grounded in the Hamiltonian, the lattice spacing, or the time evolution — that would unambiguously pick out a unique decomposition (unlike in the paper), but everything I tried was ugly/ad-hoc. Obviously, you can just declare a criterion, but I’d like something that was as compelling as the idea (used in the paper) that anything that deserves to be called a measurement must make at least three copies.

]]>I guess my question might be, is it necessarily the case that the branch decomposition can be derived from the tensor product structure/entanglement alone? It seems that more structure is needed, as in the preferred length scale in the paper. The time evolution operator is a natural source of such structure, e.g., if the qubits were fixed in some grid such that only neighbours can influence each other, that would be reflected in the Hamiltonian. So then the ‘correct’ branch decomposition would be a function of the Hamiltonian/additional structure, not just the state itself. Is this basically correct?

]]>Our *expectation* (or hope) is that the number of branches increases monotonically in time and that, furthermore, the branches at an earlier time, when evolved forward, are just a coarse-graining of the branches at a later time. Here, “coarse-graining” reflects the fact that individual branches may subdivide (e.g., when a measurement is made), but if you add the sub-branches together you should recover the parent branches (suitably time-evolved). More precisely, we expect that if the branch decomposition is $|\psi(t_1)\rangle = \sum_{i=1}^{M} |\psi_i\rangle$ at time $t_1$, and is $|\psi(t_2)\rangle = \sum_{j=1}^{N} |\phi_j\rangle$ at some later time $t_2$, then $e^{-iH(t_2-t_1)}|\psi_i\rangle = \sum_{j \in S_i} |\phi_j\rangle$, where the $S_i$ form a partition of the set $\{1,\ldots,N\}$, i.e., $\bigcup_i S_i = \{1,\ldots,N\}$ and $S_i \cap S_{i'} = \emptyset$ for $i \neq i'$. Let us call this property “branch fine-graining under time evolution” (BFGUTE)

BFGUTE is merely an unproven desideratum. Indeed, one way to describe the way in which our understanding of quantum mechanics is incomplete is that we have never proven BFGUTE, which is basically the statement that we haven’t proven the Copenhagen and Everett interpretations equivalent. The first step in proving BFGUTE is to find a precise definition of wavefunction branches. Although it would be very nice if we could find a definition of branches that automatically implied BFGUTE, a little thought shows that the BFGUTE property is not enough, on its own, to define branches.
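The bookkeeping in the definition is at least easy to sanity-check numerically. In the toy below, the branch vectors and the partition are declared by hand (purely for illustration; in a real model they would come from the dynamics and a branch criterion), and we verify that summing the child branches over each cell of the partition recovers the time-evolved parent:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Stand-in for the time evolution e^{-iH(t2 - t1)}: a random unitary.
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# Two parent branches at the earlier time t1.
parents = [rng.normal(size=d) + 1j * rng.normal(size=d) for _ in range(2)]

# Child branches at t2, constructed so each parent splits in two.
children = [0.4 * (U @ parents[0]), 0.6 * (U @ parents[0]),
            0.7 * (U @ parents[1]), 0.3 * (U @ parents[1])]
partition = [[0, 1], [2, 3]]  # the cells S_i, partitioning the child labels

# BFGUTE bookkeeping: each evolved parent equals the sum over its cell.
for psi_i, S_i in zip(parents, partition):
    assert np.allclose(U @ psi_i, sum(children[j] for j in S_i))
```

Of course this only checks the coarse-graining arithmetic; the hard part is deriving the children from a branch definition rather than declaring them.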

I’m not sure if that answered your questions, but maybe it clarified things enough that you could re-ask?

]]>I’ll settle for a surprisingly interesting list of random links

> Wrong: Observables are “represented” (?) by Hermitian operators.
>
> Right: Measurements necessarily amplify, and therefore (!) are associated
> with an orthogonal basis. This is the Schmidt basis of the entangled
> joint state of the measuring apparatus and the measured system.
>
> More: Wojciech H. Zurek, Phys. Rev. A 76, 052110 (2007),
> [arXiv:quant-ph/0703160]. Also: [arXiv:1212.3245].
>
> Implication: Observables can be associated with normal, not just
> Hermitian, operators.

Furthermore, the idea that normal operators are observables follows almost immediately from Wojciech’s 2007 paper (which Hu et al. cite). That paper was one of the reasons I sought him out as my advisor, physically moving from California to New Mexico, and I probably just picked the idea up from him during discussion while I was there.
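The mathematical point is easy to check numerically: a normal operator (one that commutes with its adjoint) is unitarily diagonalizable, so it defines an orthogonal measurement basis even when its eigenvalues are complex. A quick sketch, using a cyclic shift as the example:

```python
import numpy as np

# Cyclic shift on three levels: unitary, hence normal, but not Hermitian.
S = np.roll(np.eye(3), 1, axis=0)

assert np.allclose(S @ S.conj().T, S.conj().T @ S)   # normal
assert not np.allclose(S, S.conj().T)                # not Hermitian

vals, vecs = np.linalg.eig(S)
# Eigenvalues are the complex cube roots of unity...
assert np.allclose(vals**3, 1)
# ...yet the eigenbasis is orthonormal, so it can serve as the amplified
# (Schmidt/pointer) basis of a measurement; the complex eigenvalues are
# just labels for the outcomes.
assert np.allclose(vecs.conj().T @ vecs, np.eye(3))
```

The only thing a Hermitian operator adds is that the outcome labels happen to be real, which is irrelevant to the amplification story.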

Of course, the idea itself is much more important than priority. I’d be very glad if their publication makes this idea common knowledge taught in introductory quantum mechanics courses. But I’m not holding my breath…

]]>The Schmidt states can be spatially delocalized while the pointer states are not because the eigenstates of a density matrix are not continuous with the distance between density matrices. That is, two density matrices can be arbitrarily close together while having eigenstates that are always very different. For instance, the maximally mixed state of the qubit can be written

$\rho = \frac{1}{2}\left(|a\rangle\langle a| + |b\rangle\langle b|\right) = \frac{1}{2}\left(|c\rangle\langle c| + |d\rangle\langle d|\right)$

with $\{|a\rangle, |b\rangle\}$ and $\{|c\rangle, |d\rangle\}$ orthonormal, and $|c\rangle, |d\rangle = (|a\rangle \pm |b\rangle)/\sqrt{2}$. Then we can define the perturbations

$\rho_\epsilon^{(1)} = \rho + \epsilon\left(|a\rangle\langle a| - |b\rangle\langle b|\right), \qquad \rho_\epsilon^{(2)} = \rho + \epsilon\left(|c\rangle\langle c| - |d\rangle\langle d|\right).$

The states $\rho_\epsilon^{(1)}$ and $\rho_\epsilon^{(2)}$ are arbitrarily close for small $\epsilon$, but their eigenbases are always maximally different ($\{|a\rangle, |b\rangle\}$ and $\{|c\rangle, |d\rangle\}$, respectively).
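The discontinuity is easy to see numerically; here is a small sketch of the perturbation argument, with the two bases chosen as the computational and Hadamard bases:

```python
import numpy as np

a = np.array([1.0, 0.0]); b = np.array([0.0, 1.0])
c = (a + b) / np.sqrt(2);  d = (a - b) / np.sqrt(2)
rho = np.eye(2) / 2  # maximally mixed qubit state

for eps in [1e-2, 1e-4, 1e-6]:
    rho1 = rho + eps * (np.outer(a, a) - np.outer(b, b))
    rho2 = rho + eps * (np.outer(c, c) - np.outer(d, d))
    distance = np.linalg.norm(rho1 - rho2)   # shrinks like 2*eps
    _, v1 = np.linalg.eigh(rho1)
    _, v2 = np.linalg.eigh(rho2)
    overlap = abs(v1[:, 0] @ v2[:, 0])**2    # stays at 1/2, however small eps
    print(f"eps={eps:.0e}  ||rho1-rho2||={distance:.1e}  overlap={overlap:.3f}")
```

The distance between the two density matrices goes to zero with $\epsilon$, but the squared overlap between corresponding eigenvectors is pinned at 1/2, the maximally unbiased value.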

I’m not sure if the Gaussian case discussed by Page also exploits this instability of the Schmidt basis near points of degeneracy, or if the effect is different. I went back to his paper, but it looked non-trivial to fill in the procedure he sketches on pages 7 and 8.

]]>I think anyone interested in reading about ETH is likely to understand that linearity is implicitly used in obtaining a superposition of eigenstates, just as addition and multiplication are as well. So I don’t think leaving out linearity is a problem. It now looks like a clear and well-written motivation section. Thanks for doing that!

Your initial remarks in this blog certainly brought up an interesting point about the reason for a distinction between classical and quantum systems in the context of “thermalization”. Hopefully someone googling will find this whole rather Socratic discussion useful.

Josh

]]>On your prodding, I have now restored that section on the ETH Wikipedia page with the changes I think are appropriate: Eigenstate thermalization hypothesis | Motivation. I did not mention linearity explicitly but, following your lead, I did write down the quantum time evolution of expectation values to clearly demonstrate the persistence of memory about initial conditions, implicitly using aspects of quantum evolution that include linearity. (I implicitly used time-translation invariance too, but likewise did not emphasize this since it is also a property of many chaotic classical systems.) I would welcome additional language discussing how the singular behavior of eigenfunctions of the classical Liouville operator is connected to chaotic evolution, but currently I don’t know enough about it.
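For concreteness, the time evolution of expectation values I wrote down is presumably of the standard form (my notation here, not necessarily the wiki page’s): expanding the initial state in energy eigenstates, $|\psi(0)\rangle = \sum_n c_n |n\rangle$, one has

```latex
\langle A(t) \rangle
  = \sum_{m,n} c_m^{*} c_n \, e^{i (E_m - E_n) t/\hbar} \, A_{mn},
\qquad A_{mn} \equiv \langle m | A | n \rangle ,
```

whose long-time average (for non-degenerate gaps) is the diagonal ensemble $\sum_n |c_n|^2 A_{nn}$. The dependence of the $|c_n|^2$ on the initial conditions is exactly the persistence of memory at issue, and ETH is the hypothesis about the matrix elements $A_{mn}$ that nonetheless makes this agree with the microcanonical value.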

Thank you again for the stimulating discussion.

Jess

]]>Thanks for considering what I wrote so carefully. I get the impression that we’re pretty much on the same page, and it’s now becoming a matter of the loose and perhaps faulty wording people have been using in their linearity argument. I think what you raised was an important question that would be in the minds of many readers upon considering the linearity argument that you discussed at such great length. I agree that what was written in the ETH Wikipedia page, which you deleted, concerning this topic could’ve been better written. But instead of gutting it completely, why don’t you edit it to make it an argument that you find reasonable?

There’s way too much stuff in our exchange to condense it to a few sentences, including background information in statistical mechanics and the fine-grained ergodic theorem (or whatever else people might want to call it). But I’ve cut and pasted, from my previous response here, what I think are the main points that, in my mind, should be added into any cogent discussion of the role of linearity in quantum mechanics and why this doesn’t hold in classical mechanics, although the latter is, in some sense, just as linear:

“This fact, which is relevant to your criticism of all those who use linearity to purportedly explain why understanding quantum thermalization is difficult, is really a natural consequence of (1) time translational invariance (2) unitarity, (3) linearity and: (4) not having batshit crazy behavior of energy eigenfunctions like you get in classical mechanics: ”

I think that you would serve the community well if you were to reintroduce the linearity discussion into that ETH entry, but do it in a way that doesn’t seem so bogus, and perhaps uses some less colloquial verbiage.

Is that something that you think you’d be willing to write, or are you still unconvinced about the utility of such a discussion? I find having intuitive motivations for physics results to be very helpful, and this is a much more subtle point than most people would be equipped to explain; but I think that, with your broad understanding, you should be able to do so very admirably.

Best wishes,

Josh

]]>Still, I think the very next sentence of mine still holds: “*…‘some classical tests of chaos/thermalization become ill-defined in quantum mechanics’ rather than ‘some classical tests give incorrect answers in quantum mechanics’.*” Classical systems with a small number of particles (like N=2 Sinai billiards) are just not analogous to quantum systems with a small number of dimensions, so it’s not surprising that they behave differently. Likewise, to go back to my analogy with fluids, it would not be useful to emphasize that the number of atoms is conserved in the atomic theory but the number of eddies is not preserved in the continuum theory (because eddies are not the continuum analog of atoms).

You wrote “*The only way of making the quantum case the same as the classical one, is to only consider the limit of very large energies, the semi-classical limit. However you’re left with no quantum effects in this case…*”. My claim isn’t that we should only compare thermalization of classical systems to the limit of quantum systems; it’s that the mathematical objects employed in the quantum thermalization *criteria* must limit to the objects employed in the classical criteria.

Thanks for walking me through this. I have broken my response into two parts. Immediately below I address your rebuttal on behalf of the authors I criticize (in the main post) for statements about linearity in quantum chaos. In a separate comment, I address your critique of my claims (outside the main post) about how criteria for thermalization behave in the limit.

Jess

—

You wrote “*So why does the linearity argument fail in the classical case? That’s where I pointed out what seemed to be the underlying pathology that prevented the classical case from working… this fine grained ergodic theorem, which is the basis for the linearity argument of these authors, would apparently not apply in the classical case… So I can’t see then why authors using the theorem in the quantum case should be worried at all about the classical limit*”. Here by the “linearity argument”, I take you to mean the argument that the linearity of a system’s dynamical equation implies that there can be no notion of sensitive dependence on initial conditions, and hence a need for other criteria of chaos. Thus, I take you to be arguing that since [linearity]+[something else] prevents sensitive dependence on initial conditions, and since quantum mechanics has [something else] but classical mechanics does not, it’s reasonable for these authors to assert, at the level of precision appropriate for an introduction, that [linearity] is (part of) the reason quantum and classical chaos must be treated differently, especially if, later in the paper, they give the details about [something else]. Indeed, in a previous comment you said “*You want to toss out linearity because by itself, it doesn’t actually explain ‘thermalization’ or the absence thereof, in quantum systems. But there are lots of examples where one piece of a puzzle by itself isn’t enough to explain much, but combined with other information can be quite informative… Linearity could easily be step 1 of an explanation. Then we’d need a step 2 and 3 to actually say something that’s not totally stupid. Or maybe you’re right and step 1 is totally bogus. What I’m saying is that you can’t definitively prove that step 1 won’t be part of the explanation.*”

But your argument works when you replace [linearity] with *any* property that quantum and classical mechanics share (e.g., the fact that both have a single universal time coordinate) so long as it is used *somewhere* in the author’s eventual exact explanation! This is not a good reason for saying, even at an imprecise level, that [linearity] is a key difference. More abstractly, if X and Y apply to case 1, and X and Z apply to case 2, it is not reasonable to assert “Basically, X is the reason case 1 has property P but case 2 does not” even if one uses X+Y to prove property P in case 1. For something to be responsible — even in part — for a difference between two cases, it needs to vary between the two cases; it’s not sufficient for it to just be used in the derivation of the difference.

Are the Schmidt states spatially delocalized because (a) the Schmidt states haven’t yet approached close enough to the pointer states (in which case, why would you say the particle is already decohered?), or is it because (b) the pointer states themselves aren’t localized in position (but then, in which sense does the decoherence take place wrt the position basis)?

If the answer is (a), and the whole point is, as you say, “all sorts of things can happen to the system that depart from its idealized dynamics,” in what sense is the particle actually decohered in the position basis during these non-ideal dynamics?
