A senior colleague asked me for thoughts on this paper describing a single-preferred-branch flavor of quantum mechanics, and I thought I’d copy them here. Tl;dr: I did not find an important new idea in it, but this paper nicely illustrates the appeal of Finkelstein’s partial-trace decoherence and the ambiguity inherent in connecting a many-worlds wavefunction to our direct observations.
We start by assuming that a precise wavefunction branch structure has been specified. The idea, basically, is to randomly draw a branch at late times according to the Born probability, then to evolve it backwards in time to the beginning of the universe and take that as your initial condition. The main motivating observation is that, if we assume that all branch splittings are defined by a projective decomposition of some subsystem (‘the system’) which is recorded faithfully elsewhere (‘the environment’), then the lone preferred branch — time-evolving by itself — is an eigenstate of each of the projectors defining the splits. In a sense, Weingarten lays claim to ordered consistency [arXiv:gr-qc/9607073] by assuming partial-trace decoherence[a] [arXiv:gr-qc/9301004]. In this way, the macrostates stay the same as in normal quantum mechanics, but the microstates secretly conspire to confine the universe to a single branch.
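To make the mechanics concrete, here is a minimal toy sketch in Python (my own construction, not from the paper): a qubit "system" is copied onto a qubit "environment" by a CNOT, one branch of the final state is drawn with Born probability, and that branch is evolved backward with U† to serve as the alternative initial condition. Forward evolution of that initial condition then reproduces the lone branch, which is a +1 eigenstate of the projector defining the split.

```python
import numpy as np

# Basis states and a measurement-like unitary (CNOT: environment records the system)
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
kron = np.kron
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # CNOT, system qubit = control

# Initial condition of "normal" quantum mechanics: system in superposition, environment ready
a, b = 0.6, 0.8                              # amplitudes (|a|^2 + |b|^2 = 1)
psi0 = kron(a * ket0 + b * ket1, ket0)
psi_final = U @ psi0                         # = a|00> + b|11>, i.e. two branches

# Branches defined by the projective split of the system, recorded in the environment
branches = [kron(ket0, ket0), kron(ket1, ket1)]
weights = [abs(branch @ psi_final) ** 2 for branch in branches]    # Born weights

# Draw one branch at late times according to the Born rule...
i = np.random.choice(len(branches), p=np.array(weights) / sum(weights))
chosen = branches[i]

# ...and evolve it backward to t=0 to get the alternative ("preferred") initial condition
psi0_preferred = U.conj().T @ chosen

# Forward evolution of the preferred initial condition reproduces the lone branch,
# which is a +1 eigenstate of the projector defining the split.
P_i = kron(np.outer([ket0, ket1][i], [ket0, ket1][i]), np.eye(2))
forward = U @ psi0_preferred
print(np.allclose(forward, chosen))          # True
print(np.allclose(P_i @ forward, forward))   # True: eigenstate of the split projector
```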
I put proposals like this in the same category as Bohmian mechanics. They take as assumptions the initial state and unitary evolution of the universe, along with the conventional decoherence/amplification story that argues for (but never fully specifies from first principles) a fuzzy, time-dependent decomposition of the wavefunction into branches. To this they add something that “picks out” one of the branches as preferred. By Bell’s theorem, the added thing has to have at least one very unattractive quality (e.g., non-locality, superdeterminism, etc.), and then the game is to try to convince oneself that something else about it makes it attractive enough to choose on aesthetic grounds over normal quantum mechanics.[b]
Weingarten is refreshingly clear about this and correctly characterizes his proposal as a hidden variable theory. I’d say its virtue is that, on its face, it doesn’t introduce new mathematical objects like the Bohm particle. However, if we try to quantify theory elegance with something like algorithmic complexity, then the bit string b used to specify the preferred branch (which is necessary to write down the complete theory, unlike in normal quantum mechanics) is an equivalently inelegant structure.
Weingarten argues (in the paragraph beginning “Each |Ψ(h_j,t)⟩ may be viewed…”) that this proposal solves most of the fuzziness problems associated with the decoherence story, but I’d say it just repackages them. You need to help yourself to a precise choice of splitting events (when exactly they happen, etc.) to even define the ensemble of branches |Ψ(h_j,t)⟩, but if you assume you already have that precision, then what’s the problem? Why not just declare that the set of branches is nothing but an ensemble of potential outcomes, exactly one of which is chosen at random (according to the Born probability), thereby reducing quantum mechanics to a classical non-local stochastic theory?
Perhaps Weingarten’s issue is that Many-Worlders like David Wallace often embrace a fuzzy/emergent nature of branches, likening them to the fuzzy/emergent nature of a tiger, and refuse to specify an arbitrary precise definition. But then it seems Weingarten would be happy with a consistent histories interpretation, where the branches are specified precisely with projectors…whose precision, insofar as it exceeds the fuzziness inherent to the decoherence story, is just picked arbitrarily.
Indeed, despite the marked inelegance of Bohmian mechanics, it has at least one advantage over Weingarten’s proposal: the Bohm particle path is automatically precise and this requires only an initial random sample from a well-defined probability distribution (to be compared with an arbitrary and still unspecified choice of branch structure). This means that, if we accept the Bohm story, branches can be fuzzy for the same reason that we’re OK with tigers being fuzzy in a universe where we understand atomic physics precisely.
Finally, note that, in the far future, this single-branch theory shares a problem with all theories that take branches as fundamental: eventually, the universe will thermalize and branch structure must break down.
EDIT (2021-5-18): Weingarten has recently posted a substantial expansion of his work: arXiv:2105.04545. The preferred branch decomposition is to be generated using a modification of Nielsen’s measure of quantum circuit complexity.
Footnotes
- [a] Note on terminology: What Finkelstein called “partial-trace decoherence” is really a specialized form of consistency (i.e., a mathematical criterion for sets of consistent histories) that captures some, but not all, of the properties of the physical and dynamical process of decoherence. That’s why I’ve called it “partial-trace consistency” here and here.
- [b] In addition to Bohmian mechanics, see important examples like Kent’s late-time photodetection [arXiv:1608.04805] and the “Many Interacting Worlds” [PRX 4, 041013 (2014)].
Hi Jess,
1. I agree that the fuzziness of many-worlds + decoherence is simply repackaged. But that repackaging is actually the point of the whole thing. The paper starts with the hypothesis that the intrinsically approximate character of the many-worlds + decoherence characterization of branching makes its branches, by themselves, implausible candidates for the substance of reality. The idea is to find something else with precise evolution rules that can be written in underneath to which the macro-reality of many-worlds + decoherence can then be viewed as an approximation.
2. The environment bits are not additional degrees of freedom in the same class as Bohm trajectories. They were meant, in a particular model, to be relabelings of degrees of freedom already present in the system. In any case, an updated version of the paper was posted a while back without the bit vectors.
3. Two problems with consistent histories from my point of view. First, for many (most?) systems, if you require consistent histories criteria to be fulfilled exactly, you will have no takers at all, so no macro-reality. Second, there are sets of 4 propositions such that every pair is consistent, but a particular subset of 3 of them cannot be consistent.
4. Two responses also to your comment that eventual thermalization of the universe is a problem for the proposal. Whether records of the branch structure of the universe persist into thermalization depends on how the records are encoded. Also, the idea is that the observable branch structure of the universe is not primary. It is supposed to be an approximate macro feature of an underlying ensemble of initial states. So if it becomes harder to notice at some late time in history, it’s unclear to me why that’s a problem.
Best,
Don
Hi Don,
Thanks very much for your comments, and sorry for the big delay in getting back to you. Also, sorry if my response runs long; I think these issues are both important and subtle, so I tend to belabor things.
[EDIT 17-Dec-2017: the four distinct issues have been split up into four separate comments below to make the conversation easier to read.]
Best,
Jess
Hi Jess,
My half of our new round. Following in your footsteps, have taken a while to write this, and given up on terse. :) I begin with a digression:
The 90 year debate over the relation between the mathematics of microscopic quantum mechanics and macroscopic events is to some degree similar to the 100 year debate over the formulation of the self-interaction of a point (or at least very small) charge in classical electrodynamics. Many, many proposed solutions along the way. Some intended to be as much as possible satisfactory according to first principles, some only “for all practical purposes”.
Along with attempts to dodge the problem such as S-matrix theory which rested in part on some philosophical contortions. From which no clear consensus emerged for about a century. The debate was not settled by logical argument. Rather the eventual solution to the problem came only with new physics, quantum field theory, and even then only after the embedding of quantum electrodynamics in some larger asymptotically free field theory. One of the useful intermediate results of the self-interaction debate was the utility of imposing a short distance cutoff in a logically consistent way, which I think (but am not totally sure) was the subject of a paper by Dirac, “Classical Theory of Radiating Electrons”, Proc. Roy. Soc. A 167, 148-169 (1938). I have a vague recollection that Feynman somewhere said his invention of renormalized perturbation theory was influenced by Dirac’s cutoff version of classical electrodynamics. Another useful product of course was to keep alive the view that there was actually a problem to be solved.
My guess is that the relation between micro quantum mechanics and macro reality will eventually be settled not by logical argument but by some new piece of physics. And when that comes along the debate will (mostly) end. A critical and amazing difference between the present debate and the self-energy debate, however, is that while self-energy clearly had to do with something missing at extremely short distance, the puzzle for quantum mechanics appears to entail pieces at length scales with which we are by now entirely familiar. How can there possibly be a puzzle about what happens between atomic length scales and macroscopic length scales? That’s really strange. The strangeness of which I suspect might be part of the reason a large number of physicists still believe that, in some way or other, there really isn’t a problem at all.
Well, of course, lots of people might disagree with this characterization of the puzzle. Actually, for pretty much anything that can be said on the subject I think there is someone who would disagree. Makes it fun.
Best,
Don
> 1. …the intrinsically approximate character of the many-worlds + decoherence characterization of branching makes its branches, by themselves, implausible candidates for the substance of reality. The idea is to find something else with precise evolution rules that can be written in underneath to which the macro-reality of many-worlds + decoherence can then be viewed as an approximation.
But if we assume that branches are understood approximately [1] and we just want to pick something that is precise, why not just arbitrarily choose a precise branch structure? I interpret your proposal to be defined by a two-step process: First, pick a precise branch structure arbitrarily from the set of all branch structures that are compatible with the range of ambiguity inherent in the smooth process of decoherence in the wavefunction of conventional quantum mechanics (henceforth “the traditional wavefunction”). Second, pick one of those branches and evolve it backward in time to t=0, then declare that to be the real world. So why not just stop after the first step?
You might retort that under your ontological hypothesis (that the only fundamental object is the preferred branch) the traditional wavefunction with non-realized branches is just a human construction. And indeed, we can’t rule this out. But my response is that, until we can write down a preferred-branch theory without reference to the traditional wavefunction, we ought also to consider the alternative ontology of precise branches. These do have precise evolution rules, which personally I prefer because the inelegance is transparent.
It’s true that the evolution of a single preferred branch is smooth (in time) compared to the weird discrete-time nature of branching, which you emphasize in the good new paragraph from the updated version of your paper (“One piece of the formulation of quantum mechanics we now propose remains approximate, but another has become exact….”), but this is achieved by massive microscopic conspiracy. I would characterize this as merely obscuring the inelegance using an implicit definition that draws on an assumed branch structure with discrete times. Note that this criticism would not apply if you had an alternate way of defining the smoothly-evolving preferred branch.
[1] P.S.: Since it is my personal passion project, let me emphasize that no one yet has a general method for obtaining, given a candidate wavefunction of the universe (or even of just a large many-body system), the branch decomposition in Eq. (5), |\Psi(t)\rangle = \sum_h |\Psi(h,t)\rangle, or even an approximation thereto. Rather, all we have are a collection of toy models where the decomposition is obvious/intuitive, and we extrapolate that it’s possible to find Eq. (5) for the wavefunction of the universe up to an error that is not detectable “for all practical purposes” (FAPP). (This is in contrast to the Bohmian approach, where a simple principle is declared that exactly specifies the ensemble of possibilities, i.e., the probability distribution for the Bohm particle position.) This is mostly a separate issue from my main critique of your paper, so I am assuming for the sake of discussion that a well-defined procedure for finding Eq. (5) exists up to a small error.
1)
One more try at my view of what’s in the paper. I think there are three pieces. The first proves a technical result, the relation between state ensembles and the branching arising from environmentally induced decoherence supplemented by some additional outside parameters to make the branch definitions precise. The second piece proposes the possible utility of this result as a step toward finding an adequate (at least to me) ontological base for quantum mechanics. And the third piece is the qualification that the proposal is intended as a step and not an end result.
Said in more detail:
Construction of branches from environmentally induced decoherence, in all versions I’ve seen, is intrinsically approximate and requires the addition of parameters from outside the theory to make the approximations precise. The problem with the branches defined by parameters from outside the theory is not that they are inelegant. The problem to me is that they look like unacceptable candidates for quantum mechanics’ fundamental elements of reality. The theory is missing something. The branches do not stand on their own. You need something from outside the theory to get them. This problem has some qualitative similarity to the difficulty with Bohr’s original version of the theory. There, you need to put in a measurement apparatus to get something real out of the microscopic world. Its reality does not stand on its own. No micro reality. The branches of environmentally induced decoherence have a better status than this. But not absolutely, totally better. To me, they look like approximate descriptions of some other underlying real objects.
So what could the underlying real objects be? The initial (final) states are intended as possible candidates. What is shown in the paper is that it is possible to construct an initial (final) state ensemble which will give a time trajectory to which environmentally induced decoherence branches defined by any particular set of externally imposed parameters are a macroscopic approximation.
But somewhat changing my position from what I said in reply to your first set of comments, I now think that the initial (final) states need not be simply repackaging of the external parameters. I think it is possible, for any particular branch, to find an initial (final) state independent of at least some of these parameters even though the dependence is present in a step by step branch constructed by environmentally induced decoherence.
In particular, by working from the long term records which occur after entanglement with the environment has proceeded to a limit, the final state might be independent of parameters determining the timing of the onset of branch formation. If you then go back and find the time trajectory for a final state and from that try to reconstruct a step by step macroscopic view of the branching process, the macroscopic reconstruction will depend on restoring some arbitrary external parameters to define branch point timing. But they will be missing from the state trajectory and from the final state.
Your counter argument, I would guess, is that as a consequence of eventual thermalization there is no long term limit. My counter to your counter is that I think there are models which do not totally thermalize and which support the picture I’ve just described.
I may try to come up with a detailed model to support this point. But don’t want to delay the present response any further.
In any case, to make a long story short, the proposal is a research program, a proposed candidate for the elements of reality missing from accounts I have seen of environmentally induced decoherence. To find an OK ontological base for quantum mechanics, I don’t think environmentally induced decoherence in the forms I’ve seen is an adequate endpoint.
Incidentally, I do not view the final state ensembles as micro tuned or contrived. They are simply the final state sliced up into parts to expose its branch structure. The final state becomes a probability distribution on branches rather than on “observables” according to the Born rule.
(1)
> Construction of branches from environmentally induced decoherence… is intrinsically approximate and requires the addition of parameters from outside the theory to make the approximations precise.
Agreed.
> The problem with the branches defined by parameters from outside the theory is not that they are inelegant. The problem to me is that they look like unacceptable candidates for quantum mechanics’ fundamental elements of reality.
Hmmm. Insofar as we’re engaging in the dirty business of choosing between ontologies which are experimentally indistinguishable by construction, it seems to me that elegance, broadly construed, is the only sort of criterion we can use (which would include Occam’s razor as a special case). It might be that I’m using that term too broadly and our only disagreement is on semantics.
> The theory is missing something… You need something from outside the theory to get them. This problem has some qualitative similarity to the difficulty with Bohr’s original version of the theory. There, you need to put in a measurement apparatus to get something real out of the microscopic world. Its reality does not stand on its own. No micro reality. The branches of environmentally induced decoherence have a better status than this. But not absolutely, totally better.
Yes, strongly agree. I believe the situation in environmentally induced decoherence is analogous. Instead of “the measuring apparatus” (represented mathematically by a preferred measurement basis), we require that something outside the theory define what “the system” is — more precisely, the system-environment decomposition, as represented mathematically by a tensor-product structure.
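To illustrate how much hangs on that choice, here is a small numpy sketch of my own (not from either paper): one and the same global state is maximally entangled under one way of splitting a four-dimensional Hilbert space into "system" and "environment" factors, and a product state under another, so the decoherence story cannot even get started until the split is supplied.

```python
import numpy as np

def entanglement_entropy(psi, perm):
    """Entropy of the 'system' when the 4-dim Hilbert space is split into
    2x2 factors after relabeling the global basis vectors by `perm`."""
    M = psi[list(perm)].reshape(2, 2)        # amplitudes as a system-by-environment matrix
    s = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum()) + 0.0

# One and the same global state, written in a fixed 4-dim basis e0..e3
psi = np.array([1., 0., 0., 1.]) / np.sqrt(2)

# Factorization A: pair (e0,e1) vs (e2,e3) as system/environment -> Bell state
print(entanglement_entropy(psi, (0, 1, 2, 3)))   # 1.0 bit: maximally entangled

# Factorization B: a different pairing of the same basis vectors -> product state
print(entanglement_entropy(psi, (0, 3, 2, 1)))   # 0.0 bits: no entanglement at all
```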
> What is shown in the paper is that it is possible to construct an initial (final) state ensemble which will give a time trajectory to which environmentally induced decoherence branches defined by any particular set of externally imposed parameters are a macroscopic approximation.
My key disagreement is that I think your demonstration in the paper is nonconstructive. You show that such an ensemble exists, but you don’t (yet!) have a way to write it down without first passing through the decoherence story as an intermediary.
> … I now think that the initial (final) states need not be simply repackaging of the external parameters. I think it is possible, for any particular branch, to find an initial (final) state independent of at least some of these parameters even though the dependence is present in a step by step branch constructed by environmentally induced decoherence….In particular, by working from the long term records which occur after entanglement with the environment has proceeded to a limit, the final state might be independent of parameters determining the timing of the onset of branch formation.
You have now described an approach that is so close to my own that I cannot resist linking you to my work. Just as you say, the idea is to ignore the detail of branch formation and simply identify branches by the long-term (and, I argue, redundant) records that characterize them. Intuitively, redundant records are a necessary feature of branches, so one of the first things you might want to know is the extent to which such records are sufficient. There’s a uniqueness theorem that shows that large redundancy might get you almost all the way there: arXiv:1608.05377.
A very appealing property of this approach is that it drops the reliance on a preferred tensor decomposition between system and environment. Instead, it relies only on the tensor structure associated with spatial locality. (So nothing inserted from outside the theory.) And as you suggest, once you have such branches defined you can pick one and backward-evolve it to early times.
> Your counter argument, I would guess, is that as a consequence of eventual thermalization there is no long term limit.
Right.
> My counter to your counter is that I think there are models which do not totally thermalize and which support the picture I’ve just described….I may try to come up with a detailed model to support this point.
I’d very much be interested in this. I’m definitely confused about how to think about thermalization in closed quantum systems, and I think most condensed matter physicists would agree something is missing.
> 2….The environment bits are not additional degrees of freedom in the same class as Bohm trajectories.
When I said “…the bit string b used to specify the preferred branch … is an equivalently inelegant structure”, I definitely didn’t mean to suggest that the environmental bits were additional dynamical degrees of freedom like the Bohm particle. My point is just that, from an information-theoretic point of view, fully specifying your theory requires writing down a big chunk of entropy (i.e., some way of identifying the preferred branch from all others) which is not present in normal quantum mechanics.
Since I understand your proposal to be about repackaging for elegance rather than increasing the observational explanatory power of the theory, this is not a big deal.
2)
Not sure I understand your point about adding entropy. There is an ensemble of possible final states. I just reach my hand into a box and pull out one of them. I am not specifying in advance which one I am going to get. Just taking whatever comes up. No more of an addition of entropy than for any ordinary state pulled out of a thermal ensemble.
Though actually, I don’t view the preferred branch as preferred. Possibly should also make that clearer in some future edition of the paper. I try to take an Everett view of the whole thing. To the extent that I am up for the consequential nihilism. I’m guessing you may have read the story of Everett’s life?
(2)
> There is an ensemble of possible final states. I just reach my hand into a box and pull out one of them.
Ahh, OK I see. Yes, then I definitely misinterpreted the paper. It might be worthwhile emphasizing in a future version that the (in)determinism properties are very similar to Bohmian mechanics: there is a single random draw at t=0 followed by formally deterministic evolution that appears indeterministic to macroscopic observers.
> To the extent that I am up for the consequential nihilism. I’m guessing you may have read the story of Everett’s life?
I have, or at least everything on the Wikipedia page. Another reason not to be a many-worlds-er!
> 3….First, for many (most?) systems, if you require consistent histories criteria to be fulfilled exactly, you will have no takers at all, so no macro-reality.
It’s true that a set of histories specified using mathematically-simple-to-define projectors is unlikely to be exactly consistent, but there will exist an exactly consistent set of histories that is close enough to be observationally indistinguishable. (See J. N. McElwaine, PRA 53, 2021 (1996), especially the first three paragraphs in Sec. II and references therein.) Relying on this sort of existence argument without actually specifying the consistent set is definitely a flaw, but it applies equally well to a preferred-branch theory.
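For concreteness, here is a toy numpy sketch of mine (not McElwaine's construction) of the generic situation: two-time histories built from simple system projectors are only approximately consistent when the environment's record of the earlier outcome is imperfect, with the off-diagonal terms of the decoherence functional shrinking as the record improves and vanishing only in the idealized limit.

```python
import numpy as np

I2 = np.eye(2)
P = [np.diag([1., 0.]), np.diag([0., 1.])]          # simple z-projectors on the system

def record_unitary(theta):
    """System-environment coupling: if the system is |1>, rotate the
    environment qubit by theta (theta = pi gives a perfect record)."""
    R = np.array([[np.cos(theta/2), -np.sin(theta/2)],
                  [np.sin(theta/2),  np.cos(theta/2)]])
    return np.kron(P[0], I2) + np.kron(P[1], R)

def decoherence_functional(theta, alpha=0.7):
    """D[(a,b),(a',b')] for two-time histories: z-outcome a before the
    coupling, x-outcome b after it. Initial state: a generic system qubit
    (cos alpha, sin alpha) with the environment ready in |0>."""
    psi0 = np.kron([np.cos(alpha), np.sin(alpha)], [1., 0.])
    rho0 = np.outer(psi0, psi0)
    U = record_unitary(theta)
    plus, minus = np.array([1., 1.]) / np.sqrt(2), np.array([1., -1.]) / np.sqrt(2)
    Px = [np.outer(plus, plus), np.outer(minus, minus)]
    C = [np.kron(Px[b], I2) @ U @ np.kron(P[a], I2)   # class operators C_{ab}
         for a in (0, 1) for b in (0, 1)]
    return np.array([[np.trace(Ci @ rho0 @ Cj.conj().T) for Cj in C] for Ci in C])

for theta in (0.3 * np.pi, 0.9 * np.pi, np.pi):
    D = decoherence_functional(theta)
    off = D - np.diag(np.diag(D))
    print(f"theta = {theta/np.pi:.1f} pi   max off-diagonal |D| = {np.max(np.abs(off)):.3f}")
# Only at theta = pi (a perfect record) is the set exactly consistent;
# for imperfect records it is merely approximately consistent.
```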
>…Second, there are sets of 4 propositions such that every pair is consistent, but a particular subset of 3 of them cannot be consistent.
Yes, without further conditions on the propositions (represented mathematically by projectors) that go into histories, it’s impossible to uniquely identify any preferred set of histories or, indeed, any single true proposition. Another pathology (related to the one you mention) is contrary inferences: given an initial state and some observed final data (e.g., the outcome of an experiment), there will exist two incompatible sets of consistent histories such that P is true with certainty in one set, Q is true with certainty in the other, where P and Q are commuting contrary propositions: PQ = 0 = QP.
For exactly these reasons, I agree with folks like Kent, Dowker, Bassi, Ghirardi, Okon, and Sudarsky that there is a set-selection problem, i.e., consistent histories needs to be augmented with a criterion for picking out (at least approximately) a preferred set. (This is equivalent to a full precise specification of branch structure, and is basically the maximal generalization of the decoherence program.) I would characterize consistent histories as a language for making classical-logic statements about wavefunction branches rather than a complete theory.
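For readers who want to see the contrary-inference pathology explicitly, here is a minimal numpy sketch along the lines of the standard three-box example (my own illustration, not something discussed above): conditioned on the same initial state and final data, one exactly consistent set assigns probability 1 to the proposition P1 while an incompatible set assigns probability 1 to the contrary proposition P2.

```python
import numpy as np

# Three-box setup: trivial dynamics, an initial state, and observed final data
psi = np.array([1., 1., 1.]) / np.sqrt(3)       # initial state
phi = np.array([1., 1., -1.]) / np.sqrt(3)      # post-selected final outcome
F = np.outer(phi, phi)                          # projector on the final data

def conditional_prob(P):
    """Probability that proposition P held at the intermediate time, computed
    from the decoherence functional of the set {P, 1-P} conditioned on F."""
    rho = np.outer(psi, psi)
    C = [F @ P, F @ (np.eye(3) - P)]            # class operators
    D = np.array([[np.trace(Ci @ rho @ Cj.conj().T) for Cj in C] for Ci in C])
    assert abs(D[0, 1]) < 1e-12                 # the set is (exactly) consistent
    return D[0, 0] / (D[0, 0] + D[1, 1])

P1 = np.diag([1., 0., 0.])
P2 = np.diag([0., 1., 0.])
print(conditional_prob(P1))     # 1.0  -- "the particle was certainly in box 1"
print(conditional_prob(P2))     # 1.0  -- "the particle was certainly in box 2"
print(np.allclose(P1 @ P2, 0))  # True: P1 and P2 are contrary propositions
```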
3)
Consistent histories:
Ugh. A sore point. I think I am actually by some measure the inventor of the whole thing. Though you’d never know it from reading the literature. I had a preprint out on a version of it in 1973 when I was at the Bohr Institute. I had thought about it at first for propositions on multiple time slices, but then realized there was a potential problem with showing consistency of the whole scheme. Also, I didn’t particularly like the multiple time slice version. It seemed to me contrived. So I boiled it down to a two time slice version, which I thought was transparently self-consistent. Then I realized that even that version was not. Well, in 1973 the physics world largely believed there was no problem in the foundations of quantum mechanics to begin with. So having found what I thought was a solution to the problem and then discovering that it wasn’t seemed not particularly newsworthy. So I withdrew the article, which was scheduled to go into Nuovo Cimento. Actually getting it accepted anywhere had been quite a battle. A long story. I still have a few copies of the preprint. I’ve thought of scanning and posting it on the History and Philosophy section of arXiv. But given that I think the whole thing is a dead end, I have not.
The version of consistent histories which I worked on in 1973, and which I was assuming in my comment last time around, as you say, uses only simple-to-define projection operators. But for more complicated projections, tailored to be exactly consistent, you wind up with other problems.
The problem for the 1973 paper I did not state exactly correctly in my last posting. Was trying to be concise. Stated correctly, though still in somewhat condensed summary, the 1973 paper and the problem it leads to go as follows.
The paper gives an approximate condition, determined by some small number ε, which if fulfilled by a pair of propositions P_1 at some time t_1 and P_2 at some time t_2, then leads to the claim that if P_1 happens to be true it is highly probable that P_2 is true, and vice-versa.
The problem I found is for a system built out of a large number of copies of some template single system and a corresponding large number of copies of template propositions on the template system. Then from this you form composite propositions which assert the approximate fraction of the systems on which the template proposition happens to be true. I took a template system consisting of two spins in a total angular momentum 0 state and 4 template propositions measuring these spins in various different directions at 4 different times. The final result, from which the proposition P_0 at t_0 drops out, is that for template propositions P_1 at t_1, P_2 at t_2 and P_3 at t_3 the averages across the large set of system copies of products of 1’s and 0’s for occurrences of true and false are given by certain combinations of the unit vectors a, b, c along which the spins are measured. For the right set of a, b, c these relations violate the requirement that averages of products of 1’s and 0’s must all be nonnegative.
OK. Enough of that. In any case, looks like we agree that consistent histories does not solve the ontological problem.
(3)
I would very much like to read your 1973 preprint! Please re-consider uploading it. I am really interested in understanding the germination of these ideas. What got you thinking about it? Were you at all motivated by (or aware of) the work by H. Dieter Zeh around that time? [“On the interpretation of measurement in quantum theory”, Found. Phys. 1, 69 (1970); Kubler & Zeh, “Dynamics of quantum correlations”, Annals of Physics, 76, 405 (1973); Zeh, “Toward a quantum theory of observation” Found. Phys. 3, 109 (1973)]
There is something mystifying about the fact that, for instance, all the basic math and experimental observations necessary to produce the theory of environmental decoherence were available shortly after the birth of quantum mechanics, but it wasn’t actually realized until the 70’s and 80’s.
> 4…Whether records of the branch structure of the universe persist into thermalization depends on how the records are encoded.
I predict that if you try to track records through the period of thermalization, you will either find they dissolve or you will be forced to distort your definition of records until it becomes meaningless. In particular, records will become completely delocalized, so that measuring the “environment” (or the “system”) would require a joint measurement of the entire universe. I am extremely interested in how to usefully and mathematically generalize the concept of records to one that makes sense at late times, so please let me know if you disagree.
> … the idea is that the observable branch structure of the universe is not primary. It is supposed to be an approximate macro feature of an underlying ensemble of initial states. So if it becomes harder to notice at some late time in history, it’s unclear to me why that’s a problem.
It’s worrying because you’re privileging (without explanation) some indeterminate intermediate time period that lies between now and heat death. Here’s what I mean:
If we were to look around at the observationally accessible macro features at noon today, the simplest (or otherwise most likely) possible quantum state of the universe consistent with those features would be the traditional wavefunction which is known to branch, i.e., to develop superpositions of distinct macro features later in time.
You are suggesting that instead we should consider a very different state which is consistent with current macro features but which also evolves through a sequence of states that are each consistent with individual macro configurations (i.e., no macro superposition) at their respective times. The way you implicitly construct this preferred state is by assuming we can distinguish the orthonormal set of macro-feature eigenstates at some final time — i.e., that the branch structure is at least approximately understood and defined for the traditional wavefunction — and then just choosing to privilege one branch.
The problem is that there is no final time just before thermalization. And if you pick a time long before thermalization, then you’ll get macroscopic superpositions following that time in your preferred branch.
I claim branches in the traditional wavefunction (which are inferred through decoherence theory) will start to smoothly dissolve into each other as heat death approaches, so that each is a joint eigenstate of fewer and fewer macro observables. Similarly, I conjecture that if you tried to go beyond decoherence theory and define a more fine-grained branch structure for the traditional wavefunction, you’d find either that (A) it was unstable from one time step to the next or (B) you had to simply fix macroscopically interpretable branches arbitrarily at some single preferred time prior to heat death and then evolve the branches forward in time without caring about the fact that they didn’t retain any recognizable records or other macro interpretation.
4)
As far as eventual thermalization of everything, I pretty much agree with your list of possible issues. But whether they turn into flat-out obstacles or just risks to be avoided depends on what comes next. Will get there when I get there. Or maybe not…
The derivation of branching we agree remains an incomplete project. The accounts with which I am familiar are models rather than general solutions. My equation (5) is a hypothesis abstracted from the collection of models. That dependence is specifically advertised in the introduction to the paper. But for sure I am also signed up for trying to do something about branching. Part of trying to get a better derivation of the initial (final) state ensembles.
(4)
> The derivation of branching we agree remains an incomplete project… But for sure I am also signed up for trying to do something about branching.
Excellent. My non-humble opinion is that it’s the most important open problem in physics!
Also, as a separate issue, note that the “branch decomposition ambiguity” you describe in Eq. (2) is not quite right. The branches you identify are essentially determined by the Schmidt decomposition (equivalently, the singular-value decomposition), and this decomposition is only ambiguous when the Schmidt coefficients in the state are exactly equal, which is a set of (Haar-)measure zero in Hilbert space. Otherwise, the decomposition is unique. The real issue with this story, which is partially solved by decoherence and redundant records (aka “quantum Darwinism”), is that the basis determined by the Schmidt decomposition is highly unstable and does not generically correspond to macroscopic outcomes. See D. Page, arXiv:1108.2709 and my related comments.
(I know you’re just trying to recall a standard story for the reader here rather than offer an authoritative treatment, but I think this is sufficiently off the mark that you risk misleading people.)
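As a quick numerical illustration of the measure-zero ambiguity (my own sketch): for a Bell-like state with exactly equal Schmidt coefficients, rotating the would-be Schmidt basis on both sides gives another equally valid decomposition, whereas for unequal coefficients the Schmidt vectors are pinned down up to phases.

```python
import numpy as np

def rotation(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)           # equal Schmidt coefficients
tilted = np.array([np.cos(0.5), 0., 0., np.sin(0.5)])    # unequal Schmidt coefficients

R = rotation(0.37)   # an arbitrary rotation of the would-be Schmidt basis

# Rotating the Schmidt basis on both sides leaves the degenerate state unchanged,
# so its branch decomposition is genuinely ambiguous...
print(np.allclose(np.kron(R, R.conj()) @ bell, bell))        # True
# ...but it changes the non-degenerate state, whose Schmidt basis is unique.
print(np.allclose(np.kron(R, R.conj()) @ tilted, tilted))    # False

# Equivalently: the singular values of the reshaped amplitudes tell you which case you are in.
print(np.linalg.svd(bell.reshape(2, 2), compute_uv=False))   # [0.707, 0.707]  degenerate
print(np.linalg.svd(tilted.reshape(2, 2), compute_uv=False)) # [0.878, 0.479]  non-degenerate
```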
Finally, I think you are right that my proposed account of the preferred basis problem in equation (2) is a bit misleading. But replacing a mixing angle of π/4 with some arbitrary θ solves the problem only after you buy into the ideology of environmentally induced decoherence. Which is not supposed to show up until the next paragraph of the paper. In the next edition will use arbitrary θ in equation (1), then replace (2) with a four term expansion

\begin{align*}
&(1/2)[\cos(\theta) + \sin(\theta)](|s_1\rangle + |s_2\rangle)(|m_1\rangle + |m_2\rangle) +\\
&(1/2)[\cos(\theta) - \sin(\theta)](|s_1\rangle + |s_2\rangle)(|m_1\rangle - |m_2\rangle) +\\
&(1/2)[\cos(\theta) - \sin(\theta)](|s_1\rangle - |s_2\rangle)(|m_1\rangle + |m_2\rangle) +\\
&(1/2)[\cos(\theta) + \sin(\theta)](|s_1\rangle - |s_2\rangle)(|m_1\rangle - |m_2\rangle)
\end{align*}
Four universes, none with definite meter positions. Don’t want to get into a discussion of the Page reference at that point. Too far afield.
(5)
Unfortunately, I still don’t think this is a great framing of the problem that the story of environmentally induced decoherence purports to solve. The equation (1) you have written above is indeed an orthogonal decomposition of the state \cos(\theta)|s_1\rangle|m_1\rangle + \sin(\theta)|s_2\rangle|m_2\rangle, but I don’t think it satisfies a key requirement of the (pre-environment-decoherence) “relative state” interpretation: that different outcomes correspond to states of the measuring device, and of the measured system, that are distinguishable, and in particular not strictly proportional to each other. (For instance, two of the pseudo-branches in the decomposition have meter states proportional to each other.) In other words, I think Everett would have rejected the branch decomposition (1), and I think he could have done so for a principled reason.
Of course, it is difficult to get a good account of the pre-decoherence solution since there wasn’t even an expert consensus on what the measurement problem itself was (if it even was a problem at all). Ideally, you might consider summarizing Everett’s argument as the “canonical” explanation. Less exhaustively, you could just point out that:
(a) it is not clear when we are supposed to define the branches. If we choose a time period in the middle of measurement, before it has completed, then the Schmidt decomposition will be in a very different basis (see the toy sketch after this list).
(b) the branch decomposition is dependent on what we define to be the measurement device in the sense that it would be different if we considered some other subset of atoms in the universe to be the correct measuring device.
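To illustrate point (a), here is a toy numpy sketch of my own (a qubit partially copied onto a meter qubit): halfway through the measurement interaction, the Schmidt basis of the system is rotated substantially away from the pointer basis it settles into once the measurement has completed.

```python
import numpy as np

def system_schmidt_basis(beta, t):
    """System qubit cos(beta)|0> + sin(beta)|1> measured by a meter qubit via a
    partial CNOT of strength t (t=1 is a completed measurement). Returns the
    Schmidt basis of the system, i.e. the eigenvectors of its reduced state."""
    c, s = np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)
    meter_if_1 = np.array([c, s])                      # imperfect record of |1>
    psi = np.cos(beta) * np.kron([1., 0.], [1., 0.]) \
        + np.sin(beta) * np.kron([0., 1.], meter_if_1)
    rho_sys = psi.reshape(2, 2) @ psi.reshape(2, 2).T  # reduced density matrix
    _, vecs = np.linalg.eigh(rho_sys)
    return vecs                                        # columns = Schmidt vectors

beta = 0.6   # a generic (non-degenerate) superposition
print(system_schmidt_basis(beta, t=1.0))   # ~ computational basis: the pointer basis
print(system_schmidt_basis(beta, t=0.5))   # substantially rotated mid-measurement
```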
I do not think Everett could have solved these problems in a principled way.
(Happy to discuss this further by email if it’s useful to you.)