*[Other parts in this series: 1, 2, 3, 4, 5, 6, 7.]*

In discussions of the many-worlds interpretation (MWI) and the process of wavefunction branching, folks sometimes ask whether the branching process conflicts with conservation laws like the conservation of energy.^{ a } There are actually two completely different objections that people sometimes make, which have to be addressed separately.

**First possible objection**: “If the universe splits into two branches, doesn’t the total amount of energy have to double?” This is the question Frank Wilczek appears to be addressing at the end of these notes.

I think this question can only be asked by someone who believes that many worlds is an interpretation that is just like Copenhagen (including, in particular, the idea that measurement events are different than normal unitary evolution) except that it simply declares that new worlds are created following measurements. But this is a misunderstanding of many worlds. MWI dispenses with collapse or any sort of departure from unitary evolution. The wavefunction just evolves along, maintaining its energy distributions, and energy doesn’t double when you mathematically identify a decomposition of the wavefunction into two orthogonal components.

**Second possible objection**: “If the universe starts out with some finite spread in energy, what happens if it then ‘branches’ into multiple worlds, some of which overlap with energy eigenstates outside that energy spread?” Or, another phrasing: “What happens if the basis in which the universe decoheres doesn’t commute with the energy basis? Is it then possible to create energy, at least in some branches?” The answer is “no”, but it’s not obvious.

The argument is as follows: We describe a sequence of historical events in a quantum universe using a set of consistent histories, i.e., time-ordered strings of Heisenberg-picture projectors $C_\alpha = P^{(n)}_{\alpha_n}(t_n) \cdots P^{(1)}_{\alpha_1}(t_1)$. For a pure state of the universe $|\psi\rangle$, the condition of consistency is equivalent to the orthogonality of the branches, which are defined by $|\psi_\alpha\rangle = C_\alpha |\psi\rangle$. Because each branch must be orthogonal to all the other ones, they define a basis (on some subspace, at least). They all sum up to the global wavefunction, $\sum_\alpha |\psi_\alpha\rangle = |\psi\rangle$, and the norm of each branch is given by projecting $|\psi\rangle$ onto the relevant basis vector. Now, if we end up, after many branching events, with a branch with an exact amount of energy (i.e., it’s an energy eigenstate, which might lie in a degenerate subspace of a given energy), then we can see that the norm of this vector (and hence the probability associated with the branch) must be zero unless the vector lies in the subspace spanned by the energy eigenstates overlapping with the original global state $|\psi\rangle$.

Of course, it might be that we have branches at a given time that aren’t energy eigenstates. In this case it’s hard to even say what you mean by energy conservation. The branch isn’t an eigenstate, so its energy is undefined. But *if* it later decoheres into branches with specific energy, then this energy must lie in the support of the energy spectrum of $|\psi\rangle$.
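The key step above can be checked in a toy model. The sketch below (my own illustration, not from the Hartle et al. paper; the state and basis are invented for the example) shows why a hypothetical branch that is an energy eigenstate outside the support of the global state must carry zero weight: since the branches are mutually orthogonal, that branch’s norm is just the projection of $|\psi\rangle$ onto the corresponding eigenvector.

```python
import numpy as np

# Toy "universe": 4 energy eigenstates |E0>..|E3>.
# The global state has support only on {E0, E1}.
psi = np.array([1, 1, 0, 0], dtype=complex) / np.sqrt(2)

# Suppose repeated branching produces mutually orthogonal branches that
# sum to |psi>, and that one branch is (proportional to) the energy
# eigenstate |E3>, which lies OUTSIDE the support of |psi>.
e3 = np.array([0, 0, 0, 1], dtype=complex)

# Orthogonality of the branches means the squared norm (= branch weight)
# of that branch is |<E3|psi>|^2, the projection of psi onto that vector.
weight = abs(np.vdot(e3, psi)) ** 2
print(weight)  # 0.0: the "extra energy" branch has zero probability
```

So a branch with energy outside the original spread isn’t forbidden by fiat; it simply gets assigned vanishing weight by the inner-product structure.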

As it turns out, Hartle et al. have a paper that discusses this in pretty good detail:

James B. Hartle, Raymond Laflamme, and Donald Marolf.

“Conservation laws in the quantum mechanics of closed systems.”

Phys. Rev. D 51, 7007 (1995).

Their argument in Sec. 2 (which they attribute to Griffiths) is equivalent to the one given above^{ b }.

*[I thank Elliot Nelson and Luciano Combi for discussion leading to this post.]*

Hello, thank you so much for the very thoughtful reply. I think this is an issue that’s not addressed by the Everettians, or at least I don’t know any written material about it.

1) Regarding the first objection:

You say “But this is a misunderstanding of many worlds. MWI dispenses with collapse or any sort of departure from unitary evolution. The wavefunction just evolves along, maintaining its energy distributions”. Yes, definitely it’s very hard for me to overcome some notions about orthodox QM and CM. I’m picturing this situation: I’ve got a system in a superposition of energy eigenstates |E1>+|E2>, and I perform a measurement and obtain, say, E1. My other self in another branch of the multiverse obtains E2. Maybe there’s a conflict over the meaning of energy in QM and CM or my reasoning is wrong, but I don’t understand why I can’t say that now the energy of the two branches is E1+E2. (I’ve read Wilczek’s arguments but I still have problems coping with this.) Perhaps “branches” is just a quantum mechanical concept and I’m thinking of them in a classical way.

2) About the second objection:

The results presented by Hartle et al. involve probabilities; do you think these results apply just as easily to MWI? This is another of my concerns about assigning independent existence to all these branches: the “branch weights” are meaningless unless they are 0. Of course, you may interpret them as probabilities, but in a purely realistic formulation of the theory (without any subjective component) I don’t see how. I’m quite fond of the idea that probabilities are an objective feature of the world, though I know this is controversial.

Finally, do you think that an axiomatic approach to MWI is possible? Something in the spirit of Mario Bunge’s philosophy of physics, like these works:

http://arxiv.org/pdf/quant-ph/9510020.pdf

http://arxiv.org/pdf/quant-ph/9510019.pdf

Thank you again!

> Maybe there’s a conflict over the meaning of energy in QM and CM or my reasoning is wrong, but I don’t understand why I can’t say that now the energy of the two branches is E1+E2

Before the measurement, you calculate the expectation value of the energy as p1\*E1+p2\*E2, where p1 and p2 are the respective probabilities. You should do this after the measurement too. Indeed, under something like a von Neumann measurement scheme, all the wavefunction of the universe does is correlate the measuring apparatus with the measurement outcome.

If you don’t want to use probabilities then…what did you think the energy was before the measurement? It wasn’t E1, it wasn’t E2, and it definitely wasn’t E1+E2.
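To make the point concrete, here is a minimal numerical sketch (my own toy model, not from the discussion above): a system qubit with energies E1 and E2 is coupled to an apparatus qubit by a CNOT-style von Neumann measurement interaction. The interaction is unitary, so the expectation value of the energy is exactly p1\*E1+p2\*E2 both before and after the “measurement”; nothing doubles.

```python
import numpy as np

# System qubit with H_sys = diag(E1, E2); apparatus carries no energy.
E1, E2 = 1.0, 3.0
H = np.kron(np.diag([E1, E2]), np.eye(2))  # total H = H_sys (x) I_apparatus

# Pre-measurement state: (|E1> + |E2>)/sqrt(2) (x) |ready>,
# so p1 = p2 = 1/2.
psi0 = np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, 0])).astype(complex)

# Von Neumann measurement interaction: a CNOT that correlates the
# apparatus with the system's energy eigenstate. It is unitary (and here
# even commutes with H), so it cannot change <H>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
psi1 = CNOT @ psi0

before = np.vdot(psi0, H @ psi0).real
after = np.vdot(psi1, H @ psi1).real
print(before, after)  # both 2.0, i.e. 0.5*E1 + 0.5*E2
```

After the interaction the state is an entangled superposition of “apparatus saw E1” and “apparatus saw E2” branches, yet the global expectation value is untouched.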

> Of course, you may interpret them as probabilities, but in a pure realistic formulation of the theory (without any subjective component) I don’t see how.

Let me ask you this: assume for the sake of argument that the world really was described by a single pure wavefunction of the universe, and our brains really were encoded as subsystems within it. If that’s true, what would we expect to see following a measurement? Surely we would only see one outcome or the other; there are no observers in the wavefunction in the state of seeing both. But if we could see either one, how could we be confident which it would be?

It’s worth comparing this to an intrinsically non-deterministic classical theory, e.g., a probabilistic cellular automaton. In both this case and the case of a unitarily evolving wavefunction of the universe, we have just a single observer before the branching event and multiple descendent observers afterwards. In the classical case, we just declare that things are probabilistic and assign the probabilities by fiat. Is this really more compelling than QM, where we can identify the probabilities as the unique ones satisfying certain desiderata? What if I modified the cellular automaton so that, instead of declaring probabilities by fiat, I just encoded them in some sort of geometric object? Is this suddenly less compelling than fiat? I find them equally displeasing (but begrudgingly acceptable).

Once you accept that a pure, unitarily (and so deterministically) evolving wavefunction leads to *stochastic* observations for creatures inside the universe, then there are dozens of arguments that show that the Born rule is the only one that makes the slightest bit of sense for assigning probabilities.

Alternatively, you can consider that at any given time we have made exactly *one* observation: we have observed that we are on this particular branch, which includes the outcomes of all previous measurements. Insofar as a probabilistic theory is falsifiable, it predicts that we will *not* find ourselves on a so-called maverick branch. It’s not too hard to show that the Born rule is the simplest rule for assigning probabilities to branches such that our branch isn’t maverick.

For some views that partially disagree with mine, I again highly recommend Kent: here and here.

I only skimmed the first arXiv paper (9510020), but I don’t really get it. There are a million different axiomatizations of QM, and it looks like I’d have to read the several papers by Bunge that they cite as a way to motivate what they’re doing. Without more info, I’m not inclined to spend the time.

Fair enough, I appreciate the answer. I mentioned those papers because it seems to me that many of the philosophical assumptions used by the neo-Everettians are pretty vague, and also I think that the best way to present a theory or an interpretation is axiomatically. Even though MWI claims to be just unitary QM, I think semantic postulates are in order, that is, some postulates saying what the symbols present in the theory are.