# Branching theories vs. collapse theories

This post explains the relationship between (objective) collapse theories and theories of wavefunction branching. The formalizations are mathematically very simple, but it’s surprisingly easy to get confused about observational consequences unless they are laid out explicitly.

(Below I work in the non-relativistic case.)

#### Branching: An augmentation

In its most general form, a branching theory is some time-dependent orthogonal[1] decomposition of the wavefunction: $|\psi\rangle = \sum_i |\phi_i(t)\rangle$, where $\{|\phi_i(t)\rangle\}_i$ is some time-dependent set of orthogonal vectors. I’ve expressed this in the Heisenberg picture, but the Schrödinger picture wavefunction and branches are obtained in the usual (non-branch-dependent) way by evolution with the overall unitary: $|\psi(t)\rangle = U(t)|\psi\rangle$ and $|\phi_i(t)\rangle_{\mathrm{S}} = U(t)|\phi_i(t)\rangle$.

We generally expect the branches to fine-grain in time. That is, for any two times $t_1 < t_2$, it must be possible to partition the branches at the later time into subsets of child branches, each labeled by a parent branch at the earlier time, so that each subset of children sums up to its corresponding earlier-time parent: $|\phi_i(t_1)\rangle = \sum_{j \in C_i} |\phi_j(t_2)\rangle$ for all $i$, where the index sets $C_i$ are disjoint: $C_i \cap C_{i'} = \emptyset$ for $i \neq i'$. By the orthogonality, a child will be a member of the subset corresponding to a parent if and only if the overlap of the child and the parent is non-zero. In other words, a branching theory fine-grains in time if the elements of $\{|\phi_i(t_1)\rangle\}$ and $\{|\phi_j(t_2)\rangle\}$ are formed by taking partitions $P_1$ and $P_2$ of the same set of orthogonal vectors, where $P_2$ is a refinement of $P_1$, and vector-summing each subset of the respective partition. These relationships naturally induce a directed rooted tree graph and a set of consistent histories formed by projectors onto the branches.

*Figure: A graph of branches fine-graining in time, most generally in a branch-dependent way.*
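As a toy illustration (not part of any specific proposal), the fine-graining condition can be checked numerically for explicit branch vectors. The branch sets and amplitudes below are illustrative assumptions, and the membership test uses the overlap criterion from above:

```python
# Toy check of the fine-graining condition for branches represented as
# explicit orthogonal vectors in C^4. Branch vectors are illustrative.
import numpy as np

def children_of(parent, later_branches, tol=1e-12):
    """A child belongs to a parent iff their overlap is non-zero (by orthogonality)."""
    return [c for c in later_branches if abs(np.vdot(parent, c)) > tol]

def fine_grains(earlier, later, tol=1e-10):
    """True iff each earlier branch is the vector sum of its overlapping later branches."""
    return all(
        np.allclose(parent, sum(children_of(parent, later)), atol=tol)
        for parent in earlier
    )

e0, e1, e2, e3 = np.eye(4)
# Earlier time: two orthogonal branches.
earlier = [0.6 * e0 + 0.8 * e1, 0.5 * e2 + 0.5 * e3]
# Later time: the first branch has split into two orthogonal children.
later = [0.6 * e0, 0.8 * e1, 0.5 * e2 + 0.5 * e3]

print(fine_grains(earlier, later))  # True: the later set refines the earlier one
```

Note that replacing the later set with the bare basis vectors $\{e_0, e_1, e_2, e_3\}$ would fail the check, since the children would no longer vector-sum to their parents.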

It’s hard to interpret a branching theory that does not fine-grain in time since it loses the clear Markov structure and hence is no longer consistent with[2] a Copenhagen description. Indeed, fine-graining in time can reasonably be considered a necessary condition for a theory to be worth calling a theory of wavefunction branching. (What would we make of a proposal where the putative branch decomposition flips from $\{|\phi_i\rangle\}$ to $\{|\phi'_j\rangle\}$ and back again? If we were observers in such a universe living on the branch $|\phi_1\rangle$ initially, what would our expectations be for the future?) Nonetheless, we mention non-fine-graining branching theories because we want to discuss a family of theories indexed by a parameter $\lambda$ such that the theories correctly fine-grain in time only for some values of $\lambda$ (e.g., $\lambda < \lambda_c$).

Because the overall wavefunction evolves normally, a branching theory is an “augmentation” (rather than modification) of quantum theory, somewhat like Bohmian mechanics.

#### Collapse: A modification

In its most general form, a collapse theory is some (possibly time-dependent) rule for mapping an evolving wavefunction to a classical probability distribution over new wavefunctions. If the probability distribution is trivial (all probability on a single state) over some time interval, then the evolution is effectively unitary during that interval. When the probability distribution is non-trivial, we end up with a time-dependent weighted ensemble of wavefunctions whose mixedness is strictly increasing in time. This baked-in time asymmetry is a characteristic feature of collapse theories.

If the ensemble has finite (i.e., non-infinitesimal) entropy at a discrete set of times, we get a stroboscopic collapse model. (It cannot sensibly have finite entropy continuously in time because the total entropy would explode.) More elegantly, the probability distribution could have infinitesimal entropy during each infinitesimal time interval, leading to time-continuous dynamics for the ensemble. In such theories, single trajectories from the ensemble can still have discrete jumps in Hilbert space (e.g., GRW), but if the probability distribution is furthermore always localized to an infinitesimal neighborhood around the current state then it merely diffuses in Hilbert space (e.g., CSL).
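A minimal sketch of a single stroboscopic collapse event may help. Here the collapse rule, the two-outcome basis, and the collapse time are all illustrative assumptions: every ensemble member is replaced by a Born-weighted mixture over a fixed basis, and the ensemble’s entropy jumps by a finite amount:

```python
# Sketch of one stroboscopic collapse event: a weighted ensemble of
# wavefunctions becomes strictly more mixed. The basis is an assumption.
import numpy as np

def collapse_step(ensemble, basis):
    """Map an ensemble [(state, weight), ...] through one collapse event."""
    new = {}
    for state, w in ensemble:
        for b in basis:
            p = abs(np.vdot(b, state)) ** 2  # Born weight of this outcome
            if p > 1e-12:
                new[tuple(b)] = new.get(tuple(b), 0.0) + w * p
    return [(np.array(k), v) for k, v in new.items()]

def entropy(ensemble):
    ws = np.array([w for _, w in ensemble])
    return float(-(ws * np.log(ws)).sum())

plus = np.array([1.0, 1.0]) / np.sqrt(2)
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

ens = [(plus, 1.0)]              # trivial distribution: effectively unitary
ens = collapse_step(ens, basis)  # non-trivial collapse: mixedness jumps
print(entropy(ens))              # ln 2 for a 50/50 collapse
```

Repeating such steps at a discrete set of times gives a strictly increasing ensemble entropy, the baked-in time asymmetry described above.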

If the entropy of the collapse distribution depends on the state, then different parts of the ensemble are becoming more mixed at different rates, e.g., different diffusion coefficients at different points in Hilbert space.

Collapse theories are generically a modification (rather than an augmentation) of quantum theory in the sense that a superposition of the states in the ensemble is not equal to the corresponding un-collapsed state evolving according to traditional quantum mechanics. In particular, there are experiments that can clearly distinguish a particular collapse theory from unmodified quantum theory, e.g., superposition experiments and anomalous-heating experiments.
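The distinguishability via superposition experiments can be made concrete with a toy qubit calculation (my illustration, not a specific proposal): recombining a superposition with a Hadamard gate produces interference that any prior collapse in the computational basis destroys.

```python
# Toy superposition experiment: unitary evolution vs. a computational-basis
# collapse give different measurement statistics. Setup is illustrative.
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard
plus = H @ np.array([1.0, 0.0])                       # (|0> + |1>)/sqrt(2)

# Unmodified quantum theory: evolve the superposition unitarily, then measure.
p0_unitary = abs((H @ plus)[0]) ** 2   # perfect interference: probability 1

# Collapse theory: the superposition first collapses to |0> or |1> (Born rule),
# and each collapsed state is then evolved and measured.
p0_collapse = sum(
    abs(plus[i]) ** 2 * abs((H @ np.eye(2)[i])[0]) ** 2 for i in range(2)
)                                      # interference term gone: probability 1/2

print(p0_unitary, p0_collapse)
```

The two predicted outcome probabilities (1 vs. 1/2) differ observably, which is the sense in which collapse is a modification rather than an augmentation.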

#### Equivalent branching and collapse theories

Any branching theory that fine-grains in time can be used to define a stroboscopic collapse theory by simply going to every discrete[3] time where (in the branching theory) the number of branches increases and instead asserting that (in the collapse theory) each parent state collapses onto its corresponding children according to a probability distribution proportional to the norm squared of the children. For instance, the collapse version of Weingarten’s proposal would postulate that as soon as the wavefunction decomposition that minimizes the net complexity becomes non-trivial (more than one branch), collapse happens according to the Born probability: one of the two branches is selected (or “reified”) and the original wavefunction is thrown out. Then the process repeats, so the selected branch evolves unitarily until the next collapse.
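The branching-to-collapse construction amounts to Born-rule sampling among a parent’s children. A minimal sketch, with illustrative branch vectors of my own choosing:

```python
# Sketch of the branching-to-collapse rule: select one child branch with
# probability proportional to its squared norm, renormalize it, and discard
# the rest. Branch vectors are illustrative.
import numpy as np

def collapse_onto_children(children, rng):
    """Born-rule selection among the children of a single parent branch."""
    norms2 = np.array([np.vdot(c, c).real for c in children])
    probs = norms2 / norms2.sum()            # proportional to norm squared
    i = rng.choice(len(children), p=probs)
    return children[i] / np.sqrt(norms2[i])  # the reified, renormalized branch

rng = np.random.default_rng(0)
e0, e1 = np.eye(2)
children = [0.6 * e0, 0.8 * e1]  # parent branch = 0.6|0> + 0.8|1>

counts = {0: 0, 1: 0}
for _ in range(10000):
    picked = collapse_onto_children(children, rng)
    counts[int(abs(picked[1]) > 0.5)] += 1

# Empirical frequencies approach the Born weights 0.36 and 0.64.
print(counts[0] / 10000, counts[1] / 10000)
```

After each such selection, the chosen branch evolves unitarily until the next branching event, exactly as in the stroboscopic picture above.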

Collapse theories that are defined in this way from fine-graining branching theories are not observationally distinguishable from those branching theories. In particular, there is never interference between branches at later times.

On the other hand, for a branching theory that does not fine-grain in time, an ambiguity arises: it is much less obvious what the natural corresponding collapse theory is. A crude one we can construct is to simply take the evolution rule until the first branching event, declare the state to have collapsed, and then start over with each branch individually. This procedure achieves Markovianity by brute force, and ignores all aspects of the branching theory after the first branching event.

#### Equivalent branching theories and consistent histories

Any branching theory that fine-grains in time can also be used to define a set of consistent histories. The simplest way to do this is to consider every point in time when the number of branches changes and associate with each such time a complete set of orthogonal projectors onto the various branches at that moment.[4] The set of histories is then the set of all possible time-ordered products of projectors, one selected at each time.

The most general notion of consistent histories does not respect an arrow of time, so a generic set of consistent histories does not define a branching theory that fine-grains in time. But it’s straightforward to impose a fine-graining consistency condition so that any set of consistent histories with this property defines a branching theory that fine-grains in time.

Given the dubious interpretation of branches that do not fine-grain in time, I have not thought much about their relationship to consistent histories.

#### Distinguishing branching and collapse theories

Suppose we are given a family of branching theories indexed by some parameter $\lambda$ that controls the rate of branching in a way I won’t make precise, except to say that the theory correctly fine-grains only for $\lambda < \lambda_c$.

• If $\lambda < \lambda_c$, the branching theory has a sensible interpretation and is indistinguishable from the corresponding collapse theory (and also from the corresponding set of consistent histories).
• If $\lambda \ge \lambda_c$, the branching theory has no sensible interpretation and the corresponding collapse theory (in the sense above) is experimentally distinguishable from quantum theory, and indeed we expect $\lambda$ for the collapse theory to be experimentally measurable.

Thus $\lambda$ would have the very unusual feature of being empirically measurable in some range ($\lambda \ge \lambda_c$) and not measurable in some other range ($\lambda < \lambda_c$).

If we want to use a branching theory to better understand/define/augment quantum theory without modifying it (my approach), it must correctly fine-grain, i.e., $\lambda < \lambda_c$. Importantly, this is a property that can in principle be checked mathematically — no empirical test necessary. But this is only a necessary condition.

A further (sufficient?) condition is that the branches defined by the branching theory map onto what has been called “the quasiclassical domain”. This is only understood intuitively/vaguely, but one characterization is that branches would be eigenstates of “macroscopic observables” (perhaps macroscopic hydrodynamic observables, which include the approximate position and momentum of the center of mass of rigid objects as a special case). This might alternatively be characterized as “observational equivalence to the Copenhagen interpretation”, since distinct measurement outcomes would presumably all be eigenstates of corresponding macroscopic observables.

If one knew how to precisely define these preferred observables, this also would be something that could be checked purely mathematically, but we don’t know how to do this. My paper on the uniqueness of branches from redundancy tries to overcome this by considering a mathematical property that intuitively seems necessary: that macroscopic observables are redundantly recorded. Unfortunately, I couldn’t avoid introducing a preferred scale like Weingarten and, implicitly, Taylor & McCulloch.

[I thank Adam Brown, Adrian Kent, and Don Weingarten for conversation that informed this post.]

### Footnotes

(↵ returns to text)

1. Some people have discussed non-orthogonal branches, but this breaks the straightforward probabilistic interpretation of the branches using the Born rule. This can be repaired, but generally only by introducing additional structure or principles that, in my experience, usually turn the theory into something more like a collapse theory, which is what I’m trying to contrast with here.
2. By “consistent with”, I’m not trying to say such a branching theory is the same as the Copenhagen interpretation, or even that it’s observationally indistinguishable. Indeed, an arbitrary branching theory that fine-grains in time need not describe branches that correspond to the intuitive notion of measurement outcomes.
3. Most evidence suggests that, at an intuitive level, branches form smoothly in time, and that a formalization in terms of a discrete set of times when the integer number of branches increases is artificial. However, I haven’t seen any formalizations of time-continuous branching that have sensible probabilistic interpretations. If found, these would presumably correspond to the time-continuous collapse theories described above.
4. For completeness, one can also include the projector onto the subspace orthogonal to all branches at that time.