Abstracts for July 2017

  • It is well known that, despite the misleading imagery conjured by the name, entanglement in a multipartite system cannot be understood in terms of pairwise entanglement of the parts. Indeed, there are only N(N-1)/2 pairs among N systems, but the number of qualitatively distinct types of entanglement scales exponentially in N. A good way to think about this is to recognize that a quantum state of a multipartite system is, in terms of parameters, much more akin to a classical probability distribution than a classical state. When we ask about the information stored in a probability distribution, there are lots and lots of “types” of information, and correlations can be much more complex than just knowing all the pairwise correlations. (“It’s not just that A knows something about B, it’s that A knows something about B conditional on a state of C, and that information can only be unlocked by knowing information from either D or E, depending on the state of F…”).

    However, Gaussian distributions (both quantum and classical) are described by a number of parameters that grows only quadratically with the number of variables. The pairwise correlations really do tell you everything there is to know about the quantum state or classical distribution. The above paper makes me wonder to what extent we can understand multipartite Gaussian entanglement in terms of pairs of modes. The authors have shown that this works at a single level: entanglement across a bipartition can be decomposed into modewise entangled pairs. But since this doesn’t work for mixed states, it’s not clear how to proceed in understanding the remaining entanglement within a partition. My intuition is that there is a canonical decomposition of the Gaussian state that, in some sense, lays bare all the multipartite entanglement it has in any possible partitioning, in much the same way that the eigendecomposition of a matrix exposes its inner workings.
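    To make the parameter-counting contrast concrete, here is a minimal Python sketch (my own illustration, not anything from the paper; the choice N = 10 is arbitrary): a generic distribution over N binary variables needs exponentially many parameters, while a Gaussian over N real variables is completely fixed by single-variable and pairwise statistics.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    N = 10  # number of variables (arbitrary)

    # A generic distribution over N binary variables needs 2^N - 1 independent
    # probabilities, with correspondingly complex possible correlations...
    generic_params = 2**N - 1

    # ...whereas an N-variable Gaussian is pinned down by its mean vector and
    # symmetric covariance matrix: N + N(N+1)/2 numbers, every one of them a
    # single-variable or pairwise statistic.
    gaussian_params = N + N * (N + 1) // 2

    print(generic_params, gaussian_params)  # 1023 vs. 65

    # Concretely: the means and pairwise covariances alone determine the state.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((N, N))
    cov = A @ A.T  # any positive-definite matrix is a valid covariance
    dist = multivariate_normal(mean=np.zeros(N), cov=cov)  # nothing else needed
    samples = dist.rvs(size=5)  # the full distribution is now in hand
    ```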

[continue reading]

Legendre transform

The way that most physicists teach and talk about partial differential equations is horrible, and has surprisingly big costs for the typical understanding of the foundations of the field even among professionals. The chief victims are students of thermodynamics and analytical mechanics, and I’ve mentioned before that the preface of Sussman and Wisdom’s Structure and Interpretation of Classical Mechanics is a good starting point for thinking about these issues. As a pointed example, in this blog post I’ll look at how badly the Legendre transform is taught in standard textbooks (I was pleased to note, as this essay went to press, that my choice of Landau, Goldstein, and Arnold was confirmed as the “standard” suggestions by the top Google results) and compare it to how it could be taught. In a subsequent post, I’ll use this as a springboard for complaining about the way we record and transmit physics knowledge.

Before we begin: turn away from the screen and see if you can remember what the Legendre transform accomplishes mathematically in classical mechanics. (If not, can you remember the definition? I couldn’t, a month ago.) I don’t just mean that the Legendre transform converts the Lagrangian into the Hamiltonian and vice versa, but rather: what key mathematical/geometric property does the Legendre transform have, compared to the cornucopia of other function transforms, that allows it to connect these two conceptually distinct formulations of mechanics?

(Analogously, the question “What is useful about the Fourier transform for understanding translationally invariant systems?” can be answered by something like “Translationally invariant operations in the spatial domain correspond to multiplication in the Fourier domain” or “The Fourier transform is a change of basis, within the vector space of functions, using translationally invariant basis elements, i.e., the Fourier modes”.)
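For reference, and at the risk of spoiling the exercise: the standard convex-analysis definition (textbook material, nothing specific to the posts discussed here) is

$$f^*(p) \;=\; \sup_x \big[\, p\,x - f(x) \,\big],$$

where, for a smooth and strictly convex $f$, the supremum is attained at the unique point satisfying $p = f'(x)$. The transform trades the coordinate $x$ for the slope $p$, and applying it twice recovers $f$, so no information is lost. The mechanics application is the special case $H(q,p) = p\,\dot{q} - L(q,\dot{q})$ with $p = \partial L/\partial \dot{q}$.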

The status quo

Let’s turn to the canonical text by Goldstein for an example of how the Legendre transform is usually introduced.… [continue reading]

Links for May 2017

  • Methane hydrates will be the new shale gas. There is perhaps an order of magnitude more methane worldwide in hydrates than in shale deposits, but it’s harder to extract. “…it’s thought that only by 2025 at the earliest we might be able to look at realistic commercial options.”
  • Sperm whales have no (external) teeth on their upper jaw, which instead features holes into which the teeth on their narrow lower jaw fit.
  • Surprising and heartening to me: GiveWell finds that distributing antiretroviral therapy drugs to HIV-positive patients (presumably in developing countries) is potentially cost-effective compared to their top recommendations.
  • Relatedly: the general flow of genetic information is DNA → RNA → protein. At a crude level, viruses are classified as either RNA viruses or DNA viruses depending on what sort of genetic material they carry. Generally, as parasites dependent on the host cell machinery, this determines where in the protein construction process they inject their payload. However, retroviruses (like HIV) are RNA viruses that bring along their own reverse transcriptase enzyme that, once inside the cell, converts their payload back into DNA and then grafts it into the host’s genome (which is then copied as part of the host cell’s lifecycle). Once this happens, it is very difficult to tell which cells have been infected and very difficult to root out the infection.
  • Claims about what makes Amazon’s vertical integration different:

    I remember reading about the common pitfalls of vertically integrated companies when I was in school. While there are usually some compelling cost savings to be had from vertical integration (either through insourcing services or acquiring suppliers/customers), the increased margins typically evaporate over time as the “supplier” gets complacent with a captive, internal “customer.”

    There are great examples of this in the automotive industry, where automakers have gone through alternating periods of supplier acquisitions and subsequent divestitures as component costs skyrocketed.

[continue reading]

Toward relativistic branches of the wavefunction

I prepared the following extended abstract for the Spacetime and Information Workshop as part of my continuing mission to corrupt physicists while they are still young and impressionable. I reproduce it here for your reading pleasure.


Finding a precise definition of branches in the wavefunction of closed many-body systems is crucial to conceptual clarity in the foundations of quantum mechanics. Toward this goal, we propose amplification, which can be quantified, as the key feature characterizing anthropocentric measurement; this immediately and naturally extends to non-anthropocentric amplification, such as the ubiquitous case of classically chaotic degrees of freedom decohering. Amplification can be formalized as the production of redundant records distributed over spatially disjoint regions, a certain form of multi-partite entanglement in the pure quantum state of a large closed system. If this definition can be made rigorous and shown to be unique, it is then possible to ask many compelling questions about how branches form and evolve.

A recent result shows that branch decompositions are highly constrained just by this requirement that they exhibit redundant local records. The set of all redundantly recorded observables induces a preferred decomposition into simultaneous eigenstates unless their records are highly extended and delicately overlapping, as exemplified by the Shor error-correcting code. A maximum length scale for records is enough to guarantee uniqueness. However, this result is grounded in a preferred tensor decomposition into independent microscopic subsystems associated with spatial locality. This structure breaks down in a relativistic setting on scales smaller than the Compton wavelength of the relevant field. Indeed, a key insight from algebraic quantum field theory is that finite-energy states are never exact eigenstates of local operators, and hence never have exact records that are spatially disjoint, although they can approximate this arbitrarily well on large scales.… [continue reading]
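As a toy illustration of redundant local records (my own minimal sketch, not the formalism of the abstract above): in the GHZ-like state (|00…0⟩ + |11…1⟩)/√2 on n qubits, any single qubit looks maximally mixed on its own, yet any pair is perfectly classically correlated, so each qubit redundantly records the same branch label.

```python
import numpy as np

n = 6                               # number of qubits (arbitrary)
psi = np.zeros(2**n)
psi[0] = psi[-1] = 1 / np.sqrt(2)   # (|00...0> + |11...1>)/sqrt(2)

def rdm1(psi, k, n):
    """Reduced density matrix of qubit k, with all other qubits traced out."""
    t = psi.reshape(2**k, 2, 2**(n - k - 1))
    return np.einsum('aib,ajb->ij', t, t.conj())

def rdm2(psi, n):
    """Reduced density matrix of qubits 0 and 1."""
    t = psi.reshape(2, 2, 2**(n - 2))
    return np.einsum('ijb,klb->ijkl', t, t.conj()).reshape(4, 4)

print(rdm1(psi, 3, n))  # diag(1/2, 1/2): one qubit alone looks random...
print(rdm2(psi, n))     # ...but any pair is perfectly correlated,
                        # diag(1/2, 0, 0, 1/2): every qubit carries the
                        # same classical record of the branch label.
```

(Real records are of course approximate and spread over many degrees of freedom; the point of the abstract is precisely that the exact, sharply localized version above does not survive in a relativistic setting.)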

Links for April 2017

  • Why does a processor need billions of transistors if it’s only ever executing a few dozen instructions per clock cycle?
  • Nuclear submarines as refuges from global catastrophes.
  • “Elite Law Firms Cash in on Market Knowledge”:

    …corporate transactions such as mergers and acquisitions or financings are characterized by several salient facts that lack a complete theoretical account. First, they are almost universally negotiated through agents. Transactional lawyers do not simply translate the parties’ bargain into legally enforceable language; rather, they are actively involved in proposing and bargaining over the transaction terms. Second, they are negotiated in stages, often with the price terms set first by the parties, followed by negotiations primarily among lawyers over the remaining non-price terms. Third, while the transaction terms tend to be tailored to the individual parties, in negotiations the parties frequently resort to claims that specific terms are (or are not) “market.” Fourth, the legal advisory market for such transactions is highly concentrated, with a half-dozen firms holding a majority of the market share.

    [Our] claim is that, for complex transactions experiencing either sustained innovation in terms or rapidly changing market conditions, (1) the parties will maximize their expected surplus by investing in market information about transaction terms, even under relatively competitive conditions, and (2) such market information can effectively be purchased by hiring law firms that hold a significant market share for a particular type of transaction.

    …The considerable complexity of corporate transaction terms creates an information problem: One or both parties may simply be unaware of the complete set of surplus-increasing terms for the transaction, and of their respective outside options should negotiations break down. This problem is distinct from the classic problem of valuation uncertainty.

[continue reading]

Branches and matrix-product states

I’m happy to use this bully pulpit to advertise that the following paper has been deemed “probably not terrible”, i.e., published.

Here’s the figure (the editor tried to convince me that this figure appeared on the cover for purely aesthetic reasons, and that this does not mean my letter is the best thing in the issue… but I know better!) and caption:

It is my highly unusual opinion that identifying a definition for the branches in the wavefunction is the most conceptually important problem in physics. The reasoning is straightforward: (1) quantum mechanics is the most profound thing we know about the universe, (2) the measurement process is at the heart of the weirdness, and (3) the critical roadblock to analysis is a definition of what we’re talking about. (Each step is of course highly disputed, and I won’t defend the reasoning here.) In my biased opinion, the paper represents the closest anyone has yet gotten to giving a mathematically precise definition.

On the last page of the paper, I speculate on the possibility that branch finding may have practical (!) applications for speeding up numerical simulations of quantum many-body systems using matrix-product states (MPS), or tensor networks in general. The rough idea is this: Generic quantum systems are exponentially hard to simulate, but classical systems (even stochastic ones) are not. A definition of branches would identify which degrees of freedom of a quantum system could be accurately simulated classically, and when. Although classical computational transitions are understood in certain special cases, our macroscopic observations of the real world strongly suggest that all systems we study admit classical descriptions on large enough scales. (Note that whether certain degrees of freedom admit a classical effective description is a computational question.) [continue reading]
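As a rough illustration of the speculation (my own toy sketch, assuming for simplicity that the branches are exact product states, which the real proposal does not require): a superposition of k product-state branches has Schmidt rank at most k across every cut, so tracking the branches separately, each with a classical weight, avoids the bond-dimension growth that makes generic MPS simulation expensive.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 8, 2, 2   # sites, local dimension, number of "branches" (all arbitrary)

def random_product_state(n, d):
    """A random normalized product state on n sites."""
    psi = np.ones(1, dtype=complex)
    for _ in range(n):
        v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        psi = np.kron(psi, v / np.linalg.norm(v))
    return psi

# A superposition of k product-state branches...
psi = sum(random_product_state(n, d) for _ in range(k))
psi /= np.linalg.norm(psi)

# ...has Schmidt rank at most k across the middle cut, independent of n,
# so an MPS for the full state needs bond dimension <= k, while each
# branch on its own needs only bond dimension 1.
s = np.linalg.svd(psi.reshape(d**(n // 2), -1), compute_uv=False)
print(np.sum(s > 1e-10))  # -> 2
```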

Comments on Cotler, Penington, & Ranard

One way to think about the relevance of decoherence theory to measurement in quantum mechanics is that it reduces the preferred basis problem to the preferred subsystem problem; merely specifying the system of interest (by delineating it from its environment or measuring apparatus) is enough, in important special cases, to derive the measurement basis. But this immediately prompts the question: what are the preferred systems? I spent some time in grad school with my advisor trying to see if I could identify a preferred system just by looking at a large many-body Hamiltonian, but never got anything worth writing up.

I’m pleased to report that Cotler, Penington, and Ranard have tackled a closely related problem, and made a lot more progress:

The paper has a nice, logical layout and is clearly written. It also has an illuminating discussion of the purpose of nets of observables (which appear often in the algebraic QFT literature) as a way to define “physical” states and “local” observables when you have no access to a tensor decomposition into local regions.

For me personally, a key implication is that if I’m right in suggesting that we can uniquely identify the branches (and subsystems) just from the notion of locality, then this paper means we can probably reconstruct the branches just from the spectrum of the Hamiltonian.

Below are a couple other comments.

Uniqueness of locality, not spectrum fundamentality

The proper conclusion to draw from this paper is that if a quantum system can be interpreted in terms of spatially local interactions, this interpretation is probably unique. It is tempting, but I think mistaken, to also conclude that the spectrum of the Hamiltonian is more fundamental than notions of locality.… [continue reading]

Links for March 2017

[continue reading]