Back in December Gell-Mann and Hartle (G&H) posted their latest paper on consistent histories, “Adaptive Coarse Graining, Environment, Strong Decoherence, and Quasiclassical Realms”. Here are my thoughts.

The discussion of adaptive coarse graining was brief and very much in agreement with previous work.

G&H then give a name and formal description to the idea, long part of the intuitive lore, of a history being defined by the values taken by a particular variable over many time steps. (This might be the position of an object, which is being recorded to some accuracy by an environment that decoheres it.) The key idea is that all the Schrödinger-picture projectors at different times defining the history commute:

$$\big[ P^k_{\alpha_k},\, P^{k'}_{\alpha_{k'}} \big] = 0 \qquad \text{for all } k, k', \alpha_k, \alpha_{k'}. \tag{1}$$

This they call the *narrative condition*. From it, one is able to define a smallest set of maximal projectors $\{\bar{P}_\beta\}$ (which they call a *common framework*) that obey either $P \bar{P}_\beta = \bar{P}_\beta$ or $P \bar{P}_\beta = 0$ for all history projectors $P$. For instance, if the history projectors are onto spatial volumes of position, then the $\bar{P}_\beta$'s are just the minimal partition of position space such that the region associated with each $\bar{P}_\beta$ is fully contained in the regions corresponding to some of the history projectors, and is completely disjoint from the regions corresponding to the others.

For time steps $k = 1, \ldots, N$, the history projectors $P^k_{\alpha_k}$ at a given time project onto subspaces which disjointly span Hilbert space. The narrative condition states that all history projectors commute, which means we can think of them as projecting onto disjoint subsets forming a partition of the range of some variable (e.g. position). The common framework is just the set of smaller projectors $\{\bar{P}_\beta\}$ that also partitions the range of the variable and which obey $P^k_{\alpha_k} \bar{P}_\beta = \bar{P}_\beta$ or $P^k_{\alpha_k} \bar{P}_\beta = 0$ for each $k$, $\alpha_k$, and $\beta$.
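Concretely, when the history projectors are all diagonal in a single variable, the common framework is just the common refinement of the corresponding partitions of that variable's range. Here is a minimal sketch of that construction (my own illustration, not code from the paper), with each coarse-graining represented as a list of index sets:

```python
from itertools import product

def common_refinement(partitions):
    """Minimal common refinement of several partitions of the same index set.

    Each partition is a list of disjoint sets covering the same range; the
    cells of the refinement are the nonempty intersections of one cell drawn
    from each partition. These cells play the role of the maximal 'common
    framework' projectors: each is either fully contained in or fully
    disjoint from every cell of every input partition.
    """
    cells = [set.intersection(*combo) for combo in product(*partitions)]
    return [c for c in cells if c]

# Two coarse-grainings of a 6-point position grid (two sets of projectors)
p1 = [{0, 1, 2}, {3, 4, 5}]        # "left half" vs "right half"
p2 = [{0, 1}, {2, 3}, {4, 5}]      # finer, misaligned partition

framework = common_refinement([p1, p2])

# Each framework cell is contained in, or disjoint from, every original cell
for cell in framework:
    for part in (p1, p2):
        for block in part:
            assert cell <= block or not (cell & block)

print(sorted(sorted(c) for c in framework))
# → [[0, 1], [2], [3], [4, 5]]
```

Note that the refinement is "minimal" in the sense that merging any two of its cells would break the contained-or-disjoint property with respect to at least one of the original coarse-grainings.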

Next, they discuss the idea of defining a system-environment tensor factorization by taking the subspaces selected by the projectors and labeling them with the history labels. (See previous work by Hartle and Brun in the appendix of this.) Since the number of branches is much smaller than the size of the Hilbert space, the projectors which select them may in general be of dimension much larger than unity. In principle, this should allow one to use, for each fixed history label, a second variable to label an arbitrary basis of the corresponding subspace. In the case of infinite Hilbert space dimension, or when the dimensions are finite with the right values, we can use the history label to represent the system and the second variable to represent the environment. There is a lot of ambiguity here, but in principle we should be able to kludge together a sort of system-environment tensor factorization. The extra Hilbert space contained in the environment factor then becomes the sink into which new records about the evolution of the system are thrown, using the idea of “nested records” as described by G&H in earlier work on “strong decoherence”. In particular, the class of operators $\mathcal{M}$ which distinguish G&H’s strong decoherence condition^{ a }

$$\langle \Psi | C_\beta^\dagger M C_\alpha | \Psi \rangle = 0 \qquad \text{for all } \alpha \neq \beta \text{ and all } M \in \mathcal{M} \tag{2}$$

from their medium decoherence condition^{ b } are in this case just defined to be the complete algebra of operators acting *only* on the system, i.e., operators of the form $M \otimes I$ with $M$ acting on the system factor alone. The intuition here is that since the records of the history are stored in the environment, you can operate on the system’s part of the branch vectors however you want and you can’t increase the overlap from zero. Finally, G&H connect this to the idea of the permanence of the past (or rather, the existence of *present* records about the past) and reformulate the findings of Finkelstein to show that we can calculate the expectation value of observables in a natural “branch-by-branch” way.

Now, the system-environment factorization—when constructed in the above abstract manner—will change in time and need have nothing to do with our intuitive, physical notions of systems and environments. It’s also completely dependent on the choice of projectors used to select out the branch vectors of a history, rather than just the branches themselves. In that sense, the tensor structure must be hidden inside the projectors. As there are many more projectors than there are system-environment tensor factorizations, this is an *increase* in the amount of ambiguity in the formalism, which I dislike.

In contrast, I have been recently warming to the idea that only the branch vectors themselves should be the important things, so that what we really care about are all the equivalence classes of histories with the same branches. This is a wavefunction-realism sort of idea, as opposed to history-realism, but it’s really motivated by this question: what could possibly be the physically significant difference between two descriptions of the universe given by two different sets of histories that select out the *same* branches? If each branch is a different macroscopic outcome, how could we ever distinguish between these descriptions? If I’m right, then Gell-Mann and Hartle are heading in the wrong direction here.

### Footnotes


- ^{ a } Note that I am glossing over some important points about when records form, and when we expect the strong decoherence condition to hold.
- ^{ b } We would call medium decoherence just “consistency”. It is recovered from strong decoherence when the set contains only the identity operator.
