Generalizing wavefunction branches to indistinguishable subspaces

[This post describes ideas generated in discussion with Markus Hauru, Curt von Keyserlingk, and Daniel Ranard.]

An original dream of defining branches based on redundant records (aka redundant classical information, aka GHZ-like correlations) was that it would be possible to decompose the wavefunction of an evolving non-integrable quantum system at each point in time into macroscopically distinguishable branches that individually had bounded amounts of long-range entanglement (i.e., could be efficiently expressed as a matrix product state) even though the amount of long-range entanglement for the overall state diverges in time. If one could numerically perform such a decomposition, and if the branches only “fine-grain in time”, then one could classically sample from the branches to accurately estimate local observables even if the number of branches increases exponentially in time (which we expect them to do).

However, we now think that only a fairly small fraction of all long range entanglement can be attributed to redundantly recorded branches. Thus, even if we found and efficiently handled all such classical information using a decomposition into a number of branches that was increasing exponentially in time (polynomial branch entropy), most branches would nevertheless still have an entanglement entropy across any spatial partition that grew ~linearly in time (i.e., exponentially increasing bond dimension in the MPS representation) until saturating.

In this post I’ll first write down a simple model that suggests the need to generalize the idea of branches in order to account for most long-range entanglement. Then I will give some related reasons to think that this generalized structure will take the form not of a preferred basis, but rather preferred subspaces and subsystems, and together these will combine into a preferred “branch algebra”.… [continue reading]

What logical structure for branches?

[This post describes ideas generated in discussion with Markus Hauru, Curt von Keyserlingk, and Daniel Ranard.]

Taylor & McCulloch have a tantalizing paper about which I’ll have much to say in the future. However, for now I want to discuss the idea of the “compatibility” of branch decompositions, which is raised in their appendix. In particular, the differences between their approach and mine prompted me to think more about how we could narrow down what sorts of logical axioms for branches could be identified even before we pin down a physical definition. [Footnote: This is “logic” in the same sense of identifying sets of propositions in consistent histories that comport with the axioms of a classical probability space, before discussing any questions of physics.] Indeed, as I will discuss below, the desire for compatibility raises the hope that some natural axioms for branches might enable the construction of a preferred decomposition of the Hilbert space into branching subspaces, and that this might be done independently of the particular overall wavefunction. However, the axioms that I write down prove to be insufficient for this task.

Logical branch axioms

Suppose we have a binary relation “\perp\hspace{-1.1 em}\triangle” on the vectors in a (finite-dimensional) Hilbert space that indicates that two vectors (states), when superposed, should be considered to live on distinct branches. I will adopt the convention that “z = v\perp\hspace{-1.1 em}\triangle w” is interpreted to assert that z=v+w and that the branch relation v\perp\hspace{-1.1 em}\triangle w holds. [Footnote: This doesn’t constrain us because if we just want to assert the binary relation without asserting equality of the sum to a third vector, we write v\perp\hspace{-1.1 em}\triangle w without setting it equal to anything, and if we just want addition without asserting the relation, we write v+w=z.] … [continue reading]

Branching theories vs. collapse theories

This post explains the relationship between (objective) collapse theories and theories of wavefunction branching. The formalizations are mathematically very simple, but it’s surprisingly easy to get confused about observational consequences unless they are laid out explicitly.

(Below I work in the non-relativistic case.)

Branching: An augmentation

In its most general form, a branching theory is some time-dependent orthogonal decomposition of the wavefunction: \psi= \sum_{\phi\in B(t)} \phi where B(t) is some time-dependent set of orthogonal vectors. [Footnote: Some people have discussed non-orthogonal branches, but this breaks the straightforward probabilistic interpretation of the branches using the Born rule. This can be repaired, but generally only by introducing additional structure or principles that, in my experience, usually turn the theory into something more like a collapse theory, which is what I’m trying to contrast with here.] I’ve expressed this in the Heisenberg picture, but the Schrödinger picture wavefunction and branches are obtained in the usual (non-branch-dependent) way by evolution with the overall unitary: \psi(t)=U_t \psi and \phi(t)=U_t \phi.

We generally expect the branches to fine-grain in time. That is, for any two times t and t'>t, it must be possible to partition the branches B(t') at the later time into subsets B(t',\phi) of child branches, each labeled by a parent branch \phi at the earlier time, so that each subset of children sums up to its corresponding earlier-time parent: \phi = \sum_{\phi' \in B(t',\phi)} \phi' for all \phi\in B(t) where B(t') = \bigcup_{\phi\in B(t)} B(t',\phi) and B(t',\phi)\cap B(t',\tilde\phi) = \emptyset for \phi\neq\tilde\phi. By orthogonality, a child \phi' will be a member of the subset B(t',\phi) corresponding to a parent \phi if and only if the overlap of the child and the parent is non-zero. In other words, a branching theory fine-grains in time if the elements of B(t) and B(t') are formed by taking partitions P(t) and P(t') of the same set of orthogonal vectors, where P(t') is a refinement of P(t), and vector-summing each subset of the respective partition.… [continue reading]
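As a concrete illustration, here is a minimal numerical sketch (my own toy example, not from the post) of the refinement condition: given orthogonal child branches and parent branches formed by summing disjoint subsets of them, each child overlaps exactly one parent, so the partition into subsets B(t',\phi) can be recovered just from overlaps.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy example: orthonormal "child" branches, and "parent" branches built as
# disjoint sums of children (the coarse-graining being undone).
dim = 6
q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
children = [q[:, i] for i in range(4)]   # orthogonal later-time branches
parts = [[0, 1], [2, 3]]                 # which children sum to which parent
parents = [sum(children[i] for i in p) for p in parts]

def fine_grain_partition(parents, children, tol=1e-10):
    """Assign each child to the unique parent it overlaps (orthogonality
    guarantees uniqueness), then check each parent is the sum of its children."""
    buckets = {j: [] for j in range(len(parents))}
    for c in children:
        hits = [j for j, p in enumerate(parents) if abs(np.vdot(p, c)) > tol]
        assert len(hits) == 1, "child overlaps multiple parents: not a fine-graining"
        buckets[hits[0]].append(c)
    for j, p in enumerate(parents):
        assert np.allclose(p, sum(buckets[j])), "children do not sum to parent"
    return buckets

buckets = fine_grain_partition(parents, children)
```

This only checks the overlap criterion stated in the post; it is not a branch-finding algorithm.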

Comments on Ollivier’s “Emergence of Objectivity for Quantum Many-Body Systems”

Harold Ollivier has put out a nice paper generalizing my best result:

We examine the emergence of objectivity for quantum many-body systems in a setting without an environment to decohere the system’s state, but where observers can only access small fragments of the whole system. We extend the result of Riedel (2017) to the case where the system is in a mixed state, measurements are performed through POVMs, and imprints of the outcomes are imperfect. We introduce a new condition on states and measurements to recover full classicality for any number of observers. We further show that evolutions of quantum many-body systems can be expected to yield states that satisfy this condition whenever the corresponding measurement outcomes are redundant.

Ollivier does a good job of summarizing why there is an urgent need to find a way to identify objectively classical variables in a many-body system without leaning on a preferred system-environment tensor decomposition. He also concisely describes the main results of my paper in somewhat different language, so some of you may find his version nicer to read. [Footnote: A minor quibble: Although this is of course a matter of taste, I disagree that the Shor code example was the “core of the main result” of my paper. In my opinion, the key idea was that there was a sensible way of defining redundancy at all in a way that allowed for proving statements about compatibility without recourse to a preferred non-microscopic tensor structure. The Shor-code example is more important for showing the limits of what redundancy can tell you (which is saturated in a weak sense).] … [continue reading]

Compact precise definition of a transformer function

Although I’ve been repeatedly advised it’s not a good social strategy, a glorious way to start a research paper is with specific, righteous criticism of your anonymous colleagues: [Footnote: For readability, I have dropped the citations and section references from these quotes without marking the ellipses.]

Transformers are deep feed-forward artificial neural networks with a (self)attention mechanism. They have been tremendously successful in natural language processing tasks and other domains. Since their inception 5 years ago, many variants have been suggested. Descriptions are usually graphical, verbal, partial, or incremental. Despite their popularity, it seems no pseudocode has ever been published for any variant. Contrast this to other fields of computer science, even to “cousin” discipline reinforcement learning.

So begin Phuong & Hutter in a great, rant-filled paper that “covers what Transformers are, how they are trained, what they’re used for, their key architectural components, tokenization, and a preview of practical considerations, and the most prominent models.” As an exercise, in this post I’ll dig into the first item by writing down an even more compact definition of a transformer than theirs, in the form of a mathematical function rather than pseudocode, while avoiding the ambiguities rampant in the rest of the literature. I will consider only what a single forward pass of a transformer does, considered as a map from token sequences to probability distributions over the token vocabulary. I do not try to explain the transformer, nor do I address other important aspects like motivation, training, and computational cost.

(This post also draws on a nice introduction by Turner. If you are interested in understanding and interpretation, you might check out — in descending order of sophistication — Elhage et al.[continue reading]
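In that spirit, here is a hedged sketch (my own, not Phuong & Hutter’s notation) of such a forward pass: a minimal single-layer, single-head, attention-only decoder with random weights, mapping a token sequence to a next-token distribution. All dimensions and weight names are illustrative; a real transformer adds layer norm, MLP blocks, and many layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy dimensions: vocabulary V, model width d, context length T.
# Weights are random; a trained model would supply them.
V, d, T = 11, 8, 5
W_e = rng.normal(size=(V, d)) * 0.1   # token embedding
W_p = rng.normal(size=(T, d)) * 0.1   # learned positional embedding
W_q = rng.normal(size=(d, d)) * 0.1   # query projection
W_k = rng.normal(size=(d, d)) * 0.1   # key projection
W_v = rng.normal(size=(d, d)) * 0.1   # value projection
W_o = rng.normal(size=(d, d)) * 0.1   # attention output projection
W_u = rng.normal(size=(d, V)) * 0.1   # unembedding

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def transformer(tokens):
    """Single-layer, single-head, attention-only decoder:
    token sequence -> probability distribution over the next token."""
    t = len(tokens)
    x = W_e[tokens] + W_p[:t]                  # (t, d) residual stream
    q, k, v = x @ W_q, x @ W_k, x @ W_v        # queries, keys, values
    scores = q @ k.T / np.sqrt(d)              # (t, t) attention scores
    mask = np.triu(np.ones((t, t), dtype=bool), k=1)
    scores[mask] = -np.inf                     # causal mask: no attending to the future
    x = x + softmax(scores) @ v @ W_o          # attention output added to residual
    return softmax(x[-1] @ W_u)                # distribution from the last position

p = transformer([3, 1, 4, 1])
```

The point of the exercise in the post is exactly that this map can be stated as one mathematical function; the code above is just that function made executable.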

Unital dynamics are mixedness increasing

After years of not having an intuitive interpretation of the unital condition on CP maps, I recently learned a beautiful one: unitality means the dynamics never decreases the state’s mixedness, in the sense of the majorization partial order.

Consider the Lindblad dynamics generated by a set of Lindblad operators L_k, corresponding to the Lindbladian

(1)   \begin{align*} \mathcal{L}[\rho] = \sum_k\left(L_k\rho L_k^\dagger - \{L_k^\dagger L_k,\rho\}/2\right) \end{align*}

and the resulting quantum dynamical semigroup \Phi_t[\rho] = e^{t\mathcal{L}}[\rho]. Let

(2)   \begin{align*} S_\alpha[\rho] = \frac{\ln\left(\mathrm{Tr}[\rho^\alpha]\right)}{1-\alpha}, \qquad \alpha\ge 0 \end{align*}

be the Renyi entropies, with S_{\mathrm{vN}}[\rho]:=\lim_{\alpha\to 1} S_\alpha[\rho] = -\mathrm{Tr}[\rho\ln\rho] the von Neumann entropy. Finally, let \prec denote the majorization partial order on density matrices: \rho\prec\rho' exactly when \mathrm{spec}[\rho]\prec\mathrm{spec}[\rho'], i.e., exactly when \sum_{i=1}^r \lambda_i \le \sum_{i=1}^r \lambda_i^\prime for all r, where \lambda_i and \lambda_i^\prime are the respective eigenvalues in decreasing order. (In words: \rho\prec\rho' means \rho is more mixed than \rho'.) Then the following conditions are equivalent: [Footnote: None of this depends on the dynamics being Lindbladian. If you drop the first condition and drop the “t” subscript, so that \Phi is just some arbitrary (potentially non-divisible) CP map, the remaining conditions are all equivalent.]

  • \mathcal{L}[I]=0
  • \Phi_t[I]=I: “\Phi_t is a unital map (for all t)”
  • \frac{\mathrm{d}}{\mathrm{d}t}S_\alpha[\Phi_t[\rho]] \ge 0 for all \rho, t, and \alpha: “All Renyi entropies are non-decreasing”
  • \Phi_t[\rho]\prec\rho for all t: “\Phi_t is mixedness non-decreasing”
  • \Phi_t[\rho] = \sum_j p_j U^{(t)}_j\rho U^{(t)\dagger}_j for all t and some unitaries U^{(t)}_j and probabilities p_j.

The non-trivial equivalences above are proved in Sec. 8.3 of Wolf, “Quantum Channels and Operations: Guided Tour”. [Footnote: See also “On the universal constraints for relaxation rates for quantum dynamical semigroup” by Chruscinski et al [2011.10159] for further interesting discussion.]

Note that having all Hermitian Lindblad operators (L_k = L_k^\dagger) implies, but is not implied by, the above conditions. Indeed, the condition of Lindblad operator Hermiticity (or, more generally, normality) is not preserved under the unitary gauge freedom L_k\to L_k^\prime = \sum_j u_{kj} L_j (which leaves the Lindbladian \mathcal{L} invariant for unitary u). … [continue reading]
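The last condition in the list is easy to play with numerically. Here is a minimal sketch (my own toy code, not from the post) that builds a random mixture-of-unitaries channel on a qutrit, i.e., a channel of the form in the final condition, and checks two of the other conditions it implies: the output is majorized by the input (\Phi[\rho]\prec\rho) and the Renyi-2 entropy does not decrease.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_unitary(d):
    # Approximately Haar-random unitary via QR of a complex Gaussian matrix,
    # with the usual phase fix on the diagonal of R.
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def channel(rho, us, ps):
    # Mixture-of-unitaries channel: Phi[rho] = sum_j p_j U_j rho U_j^dagger.
    # Such channels are unital: Phi[I] = I.
    return sum(p * u @ rho @ u.conj().T for p, u in zip(ps, us))

def renyi(rho, alpha):
    # Renyi entropy S_alpha[rho] = ln(Tr[rho^alpha]) / (1 - alpha)
    lam = np.clip(np.linalg.eigvalsh(rho), 0, None)
    return np.log(np.sum(lam ** alpha)) / (1 - alpha)

def majorizes(rho1, rho2):
    # True iff rho2 < rho1 in the majorization order (rho2 is more mixed):
    # partial sums of decreasing eigenvalues of rho2 never exceed those of rho1.
    a = np.sort(np.linalg.eigvalsh(rho1))[::-1].cumsum()
    b = np.sort(np.linalg.eigvalsh(rho2))[::-1].cumsum()
    return bool(np.all(b <= a + 1e-9))

d = 3
us = [rand_unitary(d) for _ in range(4)]
ps = rng.dirichlet(np.ones(4))
a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = a @ a.conj().T
rho /= np.trace(rho).real            # random mixed state
sigma = channel(rho, us, ps)         # apply the unital channel
```

Note that in dimension d > 2 mixture-of-unitaries channels are a strict subset of unital channels, so this sketch only probes one direction of the equivalences.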

AI goalpost moving is not unreasonable

[Summary: Constantly evolving tests for what counts as worryingly powerful AI are mostly a consequence of how hard it is to design tests that will identify the real-world power of future automated systems. I argue that Alan Turing in 1950 could not reliably distinguish a typical human from an appropriately fine-tuned GPT-4, yet our current automated systems still cannot produce growth above historic trends.]

What does the phenomenon of “moving the goalposts” for what counts as AI tell us about AI?

It’s often said that people repeatedly revising their definition of AI, often in response to previous AI tests being passed, is evidence that people are denying/afraid of reality, and want to put their head in the sand or whatever. There’s some truth to that, but that’s a comment about humans and I think it’s overstated.

Closer to what I want to talk about is the idea that AI is continuously redefined to mean “whatever humans can do that hasn’t been automated yet”, often taken to be evidence that AI is not a “natural” kind out there in the world, but rather just a category relative to current tech. There’s also truth to this, but it’s not exactly what I’m interested in.

To me, it is startling that (I claim) we have systems today that would likely pass the Turing test if administered by Alan Turing, but that have negligible impact on a global scale. More specifically, consider fine-tuning GPT-4 to mimic a typical human who lacks encyclopedic knowledge of the contents of the internet. Suppose that it’s mimicking a human with average intelligence whose occupation has no overlap with Alan Turing’s expertise.… [continue reading]

Notable reviews of arguments for AGI ruin

Here’s a collection of reviews of the arguments that artificial general intelligence represents an existential risk to humanity. They vary greatly in length and style. I may update this from time to time.

  • Here, from my perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely resembling the current pathway, or any other pathway we can easily jump to.
  • This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire -- especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. Second, I formulate and evaluate a more specific six-premise argument that creating agents of this kind will lead to existential catastrophe by 2070. On this argument, by 2070: (1) it will become possible and financially feasible to build relevantly powerful and agentic AI systems; (2) there will be strong incentives to do so; (3) it will be much harder to build aligned (and relevantly powerful/agentic) AI systems than to build misaligned (and relevantly powerful/agentic) AI systems that are still superficially attractive to deploy; (4) some such misaligned systems will seek power over humans in high-impact ways; (5) this problem will scale to the full disempowerment of humanity; and (6) such disempowerment will constitute an existential catastrophe.
[continue reading]