One way to think about the relevance of decoherence theory to measurement in quantum mechanics is that it reduces the *preferred basis problem* to the *preferred subsystem problem*; merely specifying the system of interest (by delineating it from its environment or measuring apparatus) is enough, in important special cases, to derive the measurement basis. But this immediately prompts the question: what are the preferred systems? I spent some time in grad school with my advisor trying to see if I could identify a preferred system just by looking at a large many-body Hamiltonian, but never got anything worth writing up.
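To make the decoherence side of this concrete, here is a minimal toy sketch of my own (not from any paper discussed here): a system qubit in superposition interacts with a single environment qubit through a CNOT-like coupling, and tracing out the environment leaves the system diagonal in the σ_z "pointer" basis singled out by the interaction. Once the system–environment split is specified, the measurement basis falls out.

```python
import numpy as np

# Toy pointer-basis illustration: the interaction (a CNOT with the system
# as control) copies the system's z-basis information into the environment.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# System starts in (|0> + |1>)/sqrt(2); environment starts in |0>.
system = (ket0 + ket1) / np.sqrt(2)
env = ket0

# CNOT on the joint basis |s e>: |s>|e> -> |s>|e XOR s>.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

joint = cnot @ np.kron(system, env)  # (|00> + |11>)/sqrt(2)

# Reduced density matrix of the system: trace out the environment index.
rho_joint = np.outer(joint, joint)
rho_sys = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_sys)  # off-diagonal coherences vanish: diagonal in the z basis
```

The choice of CNOT here is of course an assumption; the point is only that the interaction, together with the system/environment split, picks the basis.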

I’m pleased to report that Cotler, Penington, and Ranard have tackled a closely related problem, and made a lot more progress.

The paper has a nice, logical layout and is clearly written. It also has an illuminating discussion of the purpose of *nets* of observables (which appear often in the algebraic QFT literature) as a way to define “physical” states and “local” observables when you have no access to a tensor decomposition into local regions.

For me personally, a key implication is that if I’m right in suggesting that we can uniquely identify the branches (and subsystems) just from the notion of locality, then this paper means we can probably reconstruct the branches just from the spectrum of the Hamiltonian.

Below are a couple other comments.

##### Uniqueness of locality, not spectrum fundamentality

The proper conclusion to draw from this paper is that if a quantum system can be interpreted in terms of spatially local interactions, this interpretation is probably unique. It is tempting, but I think mistaken, to also conclude that the spectrum of the Hamiltonian is more fundamental than notions of locality. Let me explain.

I am a big fan of trying to condense our physical knowledge down to the smallest number of elegant axioms, and I think that sort of work should be ongoing.^{ a } Of course, axiom counting and elegance assessment are subjective. One way to try and formalize this is with algorithmic complexity (although there is still plenty of hand waving). We consider two physical theories *(observationally) equivalent* if they make the same experimental predictions and, when multiple equivalent theories are available, we consider the theory described by the shortest algorithm (as measured in bits) to be preferred.^{ b }

Cotler et al. don’t address it, but it’s natural to wonder whether we should conclude from their work that the spectrum gives a more fundamental theoretical description of a quantum many-body system in the above sense of algorithmic complexity. But I think it probably does *not* offer improved compression for the simple reason that specifying a Hamiltonian by its spectrum, rather than a lattice with a notion of locality, will require more bits. The reason isn’t surprising, and basically follows from their discussion: under plausible quantification, the abstract space of possible local lattices is smaller than the space of possible spectra. Most Hamiltonian spectra do not correspond to local theories, which of course is closely related to the main idea that there generically aren’t multiple local theories corresponding to the same spectrum. This is especially true for a symmetric lattice, which, even if arbitrarily large, is specified by just a compact list of symmetries plus maybe a small number of additional parameters.
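Here is a crude back-of-envelope version of that parameter count, under my own (hypothetical) quantification: compare the real parameters needed to specify a translation-invariant nearest-neighbor qubit chain against the real numbers needed to list its full spectrum.

```python
# Back-of-envelope counting sketch (my own illustration, not from the
# paper). A translation-invariant nearest-neighbor Hamiltonian on a chain
# is fixed by a single two-site term, independent of the chain length.

def local_params(n_sites, local_dim=2):
    # One Hermitian matrix acting on two sites: (d^2 x d^2) Hermitian,
    # hence (d^2)^2 real parameters -- independent of n_sites.
    d = local_dim
    return (d ** 2) ** 2

def spectrum_params(n_sites, local_dim=2):
    # The spectrum is a list of d^n real eigenvalues.
    return local_dim ** n_sites

for n in (4, 8, 16):
    print(n, local_params(n), spectrum_params(n))
# The local description stays at 16 numbers while the spectrum grows
# exponentially, so a generic spectrum cannot be compressed into it.
```

This ignores constant factors and the cost of describing the lattice itself, but it captures why most spectra cannot correspond to any local theory.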

Now, one of the motivations of this work is that the spectrum of an operator does seem somehow more fundamental than its representation in any particular basis. Although this is true in a certain sense, and I’m not completely sure how to think about all this, it’s worth remembering that the notion of locality itself is more strongly grounded in observational evidence than the spectrum of the Hamiltonian. This is just the statement that we do not directly measure the spectrum, but rather infer it from a bunch of experimental observations that are all interpreted through locality. If we discovered that locality broke down, it would not be by measuring the spectrum directly.

So instead, the tentative conclusion is that, under the assumptions taken in the paper, the notion of locality is *objective*. That is, locality isn’t subjective/arbitrary like the inertial frame of a particular observer, or a choice of gauge. This is very relevant in the context of holographic approaches to quantum gravity, where there may be incompatible notions of locality. In that context, the appearance of conventional spatial locality is an approximation that breaks down in extreme regimes, evading the assumptions of this paper. (See Cao, Carroll, & Michalakis for a complementary approach.)

##### A caveat on genericness

Many of Cotler et al.’s claims are about *generic* local Hamiltonians in the sense that they do not apply to some measure-zero sets in the space of local Hamiltonians. Such claims always need to be considered with care. Recall the following: if we take the space of all pure states on a lattice with finite spacing, and then let the lattice spacing go to zero, we find that the *physical* states with bounded energy (according to any smooth field-theoretic Hamiltonian) form a (rather small) measure-zero subspace of all states. In other words, in this naive construction of a continuous field theory, the generic pure state has a discontinuous spatial derivative and divergent/undefined energy, and all the physical states (and physical Hamiltonians) we care about are *not* generic.
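A minimal numerical sketch of this point (my own construction): discretize a field on a 1D lattice and compare the gradient energy, ∑(Δf)²/dx, of a smooth configuration against a generic random one as the lattice is refined. The smooth state's energy converges; the generic state's diverges with the number of sites.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_energy(f, dx):
    # Discretized \int |f'(x)|^2 dx for a field sampled on a 1D lattice.
    return np.sum(np.diff(f) ** 2) / dx

for n in (100, 1000, 10000):
    dx = 1.0 / n
    x = np.linspace(0.0, 1.0, n)
    smooth = np.sin(2 * np.pi * x)        # a "physical" configuration
    generic = rng.standard_normal(n)      # a typical random configuration
    # smooth energy approaches 2*pi^2; generic energy grows like n^2
    print(n, gradient_energy(smooth, dx), gradient_energy(generic, dx))
```

This is only a classical caricature of the quantum statement, but it shows in what sense "almost all" lattice configurations become unphysical in the continuum limit.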

Indeed, there really is no well-defined notion of a spatially local tensor-product structure for a continuous field theory.^{ c } This isn’t just the statement that you can’t define the Hilbert space as “tensor-product integral” of Hilbert spaces attached to each infinitesimal point in space. You can’t even break the state space up as a tensor product of two spatially disjoint regions. This means that extending their analysis to field theory will be nontrivial.

Cotler et al. acknowledge some of this briefly at the end of section 6, but the issue is distinct from the problems posed by continuous spectra that they emphasize most. [The tensor-product structure breaks down as we remove the short-distance (UV) cutoff but, so long as there is still a long-distance (IR) cutoff, the spectrum remains discrete, though unbounded.] They discuss nets of observables as a replacement for tensor-product structures, which have long been used to deal with these sorts of issues, but it’s not clear whether this will actually be successful for the task of defining locality from the spectrum of the Hamiltonian.

Of course, there are *always* hairy issues with taking the continuum limit, and one can make two retorts:

- If anything, the parameter-counting argument gets more severe in the continuum limit; so shouldn’t we suspect that the core intuitive idea — that locality is, more or less, uniquely defined by the Hamiltonian — will survive for a continuous QFT, modulo details?
- All we know is that the world is approximately described by an effective field theory, and there’s no overwhelming reason to think things are truly continuous at the most fundamental level; so isn’t it valuable to know that locality is unique in a spatially discretized theory?

I think the answer to both of these questions is “yes, probably”. But here is my best guess at how things could go wrong: It could be that, at any finite lattice spacing, generic Hamiltonians admit at most a single notion of locality, but that they admit multiple incompatible *approximate* notions of locality. Even if these are bad approximations for some coarse lattice spacing, it could be that in the continuum limit they become arbitrarily good approximations.

It goes without saying that these arm-chair worries don’t detract from the value of Cotler et al.’s result. One always handles the exact case before doing an epsilon-delta treatment.

*[I thank the authors for discussion that significantly clarified my thinking.]*

### Footnotes

(↵ returns to text)

- For instance, I celebrate using *amplification* to identify observables with Hermitian (or normal!) operators rather than simply postulating this. The general constructive approach in the introductory chapter of Weinberg’s QFT text is also excellent on this front (although much could be improved).↵
- There are objections to this approach. It’s not at all obvious what language such an algorithm would be written in, leading to a constant-factor ambiguity in the size of the program. The choice of language is intertwined with questions about the form that fundamental physical axioms are allowed to take. The subjective assessment of elegance is at least partly driven by the extent to which physical axioms can be matched up to sensory experience and our intuition.↵
- They put it clearly: “…a subspace of a space with an explicit TPS [tensor-product structure] will not inherit the TPS in any natural way”.↵
