Moyal’s equation for a Wigner function $W(x,p)$ of a quantum system with (Wigner-transformed) Hamiltonian $H(x,p)$ is
$$\partial_t W = \{\{H, W\}\},$$
where the Moyal bracket $\{\{\cdot,\cdot\}\}$ is a binary operator on the space of functions over phase space. Unfortunately, it is often written down mysteriously as
$$\{\{H, W\}\} = \frac{2}{\hbar}\, H \sin\!\left[\frac{\hbar}{2}\left(\overleftarrow{\partial}_x \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_x\right)\right] W,$$
where the arrows over partial derivatives tell you which way they act, i.e., $f\, \overleftarrow{\partial}_x \overrightarrow{\partial}_p\, g = (\partial_x f)(\partial_p g)$. This only becomes slightly less weird when you use the equivalent formula $\{\{H, W\}\} = (H \star W - W \star H)/(i\hbar)$, where “$\star$” is the Moyal star product given by
$$f \star g = f \exp\!\left[\frac{i\hbar}{2}\left(\overleftarrow{\partial}_x \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_x\right)\right] g.$$
The star product has the crucial feature that $\widehat{f \star g} = \hat{f}\hat{g}$, where we use a hat to denote the Weyl transform (i.e., the inverse of the Wigner transform taking density matrices to Wigner functions), which takes a scalar function over phase space to an operator on our Hilbert space. The star product also has some nice integral representations, which can be found in books like the one by Curtright, Fairlie, & Zachos (the complete 88-page PDF is here), but none of them help me understand the Moyal equation.
A key problem is that both of these expressions neglect the (affine) symplectic symmetry of phase space and of the dynamical equations. Although I wouldn’t call it beautiful, we can re-write the star product as
$$f \star g = f \exp\!\left(\frac{i\hbar}{2}\, \overleftarrow{\partial}_a \overrightarrow{\partial}^a\right) g,$$
where $a$ is a symplectic index running over the two phase-space directions, using the Einstein summation convention, and where symplectic indices are raised and lowered using the symplectic form just as for Weyl spinors: $v^a = \epsilon^{ab} v_b$ and $v_a = v^b \epsilon_{ba}$, where $\epsilon^{ab}$ is the antisymmetric symplectic form with $\epsilon^{xp} = +1$, and where upper (lower) indices denote symplectic vectors (co-vectors).
With this, we can expand the Moyal equation as
$$\partial_t W = \{\{H, W\}\} = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} \left(\frac{\hbar}{2}\right)^{2n} \left(\partial_{a_1} \cdots \partial_{a_{2n+1}} H\right)\left(\partial^{a_1} \cdots \partial^{a_{2n+1}} W\right),$$
where we can see in hideous explicitness that it’s a series in the even powers of $\hbar$ and the odd derivatives of the Hamiltonian $H$ and the Wigner function $W$.… [continue reading]
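Because the star-product series terminates on polynomials, these formulas are easy to check numerically. Here is a minimal sketch (my own helper names; units where $\hbar = 1$) that implements the star product and Moyal bracket for polynomials in $x$ and $p$, and confirms that the Moyal bracket of quadratics reduces to the Poisson bracket:

```python
from math import comb, factorial

HBAR = 1.0  # units where hbar = 1 (an arbitrary choice for illustration)

# A polynomial on phase space is a dict {(i, j): c}, meaning the sum of c * x^i * p^j.

def dx(f):
    """Partial derivative with respect to x."""
    return {(i - 1, j): c * i for (i, j), c in f.items() if i > 0}

def dp(f):
    """Partial derivative with respect to p."""
    return {(i, j - 1): c * j for (i, j), c in f.items() if j > 0}

def iterate(deriv, f, n):
    """Apply a derivative operator n times."""
    for _ in range(n):
        f = deriv(f)
    return f

def mul(f, g):
    """Ordinary (commutative) pointwise product of two polynomials."""
    out = {}
    for (i1, j1), c1 in f.items():
        for (i2, j2), c2 in g.items():
            key = (i1 + i2, j1 + j2)
            out[key] = out.get(key, 0) + c1 * c2
    return out

def star(f, g, order=8):
    """Moyal star product f * exp[(i hbar/2)(left-dx right-dp - left-dp right-dx)] * g.
    The series terminates for polynomials once `order` exceeds their degree."""
    out = {}
    for n in range(order + 1):
        pref = (1j * HBAR / 2) ** n / factorial(n)
        for k in range(n + 1):
            # binomial expansion of the bidirectional derivative raised to the n-th power
            term = mul(iterate(dx, iterate(dp, f, k), n - k),
                       iterate(dp, iterate(dx, g, k), n - k))
            for key, c in term.items():
                out[key] = out.get(key, 0) + pref * comb(n, k) * (-1) ** k * c
    return {k: c for k, c in out.items() if abs(c) > 1e-12}

def moyal(f, g):
    """Moyal bracket (f*g - g*f)/(i hbar)."""
    fg, gf = star(f, g), star(g, f)
    out = {k: (fg.get(k, 0) - gf.get(k, 0)) / (1j * HBAR)
           for k in set(fg) | set(gf)}
    return {k: c for k, c in out.items() if abs(c) > 1e-12}

x, p = {(1, 0): 1}, {(0, 1): 1}
print(moyal(x, p))                        # {{x, p}} = 1, the canonical bracket
print(star(x, p))                         # x * p = xp + i hbar / 2
print(moyal({(2, 0): 1}, {(0, 2): 1}))    # {{x^2, p^2}} = 4xp, same as the Poisson bracket
```

For polynomials of degree at most two, every bracket computed this way agrees with the classical Poisson bracket, since the $n \geq 1$ terms of the series involve third and higher derivatives.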
Tyler John & William MacAskill have recently released a preprint of their paper “Longtermist Institutional Reform” [PDF]. The paper is set to appear in an EA-motivated collection “The Long View” (working title), from Natalie Cargill and Effective Giving.
Here is the abstract:
There is a vast number of people who will live in the centuries and millennia to come. In all probability, future generations will outnumber us by thousands or millions to one; of all the people who we might affect with our actions, the overwhelming majority are yet to come. In the aggregate, their interests matter enormously. So anything we can do to steer the future of civilization onto a better trajectory, making the world a better place for those generations who are still to come, is of tremendous moral importance. Political science tells us that the practices of most governments are at stark odds with longtermism. In addition to the ordinary causes of human short-termism, which are substantial, politics brings unique challenges of coordination, polarization, short-term institutional incentives, and more. Despite the relatively grim picture of political time horizons offered by political science, the problems of political short-termism are neither necessary nor inevitable. In principle, the State could serve as a powerful tool for positively shaping the long-term future. In this chapter, we make some suggestions about how we should best undertake this project. We begin by explaining the root causes of political short-termism. Then, we propose and defend four institutional reforms that we think would be promising ways to increase the time horizons of governments: 1) government research institutions and archivists; 2) posterity impact assessments; 3) futures assemblies; and 4) legislative houses for future generations.
… [continue reading]
This post is (a seed of) a bibliography covering the primordial research area that goes by some of the following names:
Although the way this problem tends to be formalized varies with context, I don’t think we have confidence in any of the formalizations. The different versions are very tightly related, so that a solution in one context is likely to give, or at least strongly point toward, solutions for the others.
As a time-saving device, I will just quote a few paragraphs from existing papers that review the literature, along with the relevant part of their list of references. I hope to update this from time to time, and perhaps turn it into a proper review article of its own one day. If you have a recommendation for this bibliography (either a single citation, or a paper I should quote), please do let me know.
Carroll & Singh
From “Quantum Mereology: Factorizing Hilbert Space into Subsystems with Quasi-Classical Dynamics”, arXiv:2005.12938:
While this question has not frequently been addressed in the literature on quantum foundations and emergence of classicality, a few works have highlighted its importance and made attempts to understand it better. Brun and Hartle  studied the emergence of preferred coarse-grained classical variables in a chain of quantum harmonic oscillators. Efforts to address the closely related question of identifying classical set of histories (also known as the “Set Selection” problem) in the Decoherent Histories formalism [3–7, 10] have also been undertaken.
… [continue reading]
[Other parts in this series: 1,2,3,4,5,6,7,8.]
On microscopic scales, sound is air pressure $f(t)$ fluctuating in time $t$. Taking the Fourier transform of $f(t)$ gives the frequency distribution $\tilde{f}(\omega)$, but in an eternal way, applying to the entire time interval $t \in (-\infty, \infty)$.
Yet on macroscopic scales, sound is described as having a frequency distribution as a function of time, i.e., a note has both a pitch and a duration. There are many formalisms for describing this (e.g., wavelets), but a well-known limitation is that the frequency of a note is only well-defined up to an uncertainty $\Delta\omega \gtrsim 1/\Delta t$ that is inversely proportional to its duration $\Delta t$.
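This tradeoff is easy to see numerically. The sketch below (the function name and all parameters are my own illustrative choices) builds a “note” as a sinusoid with a Gaussian envelope, computes its power spectrum by brute-force discrete Fourier transform, and confirms that halving the note’s duration doubles its spread in frequency:

```python
import cmath
import math

def spectrum_spread(sigma, n=256, f0=0.25):
    """Standard deviation (in frequency bins) of the power spectrum of a 'note':
    a sinusoid at f0 cycles/sample with a Gaussian envelope of width sigma samples."""
    xs = [math.cos(2 * math.pi * f0 * t) * math.exp(-(t - n / 2) ** 2 / (2 * sigma ** 2))
          for t in range(n)]
    # brute-force DFT power spectrum, keeping positive frequencies only
    power = [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                     for t, x in enumerate(xs))) ** 2
             for k in range(n // 2)]
    total = sum(power)
    mean = sum(k * w for k, w in enumerate(power)) / total
    var = sum((k - mean) ** 2 * w for k, w in enumerate(power)) / total
    return math.sqrt(var)

short_note, long_note = spectrum_spread(sigma=8), spectrum_spread(sigma=16)
print(short_note / long_note)  # close to 2: half the duration, twice the frequency spread
```

Gaussian envelopes are the natural choice here because they saturate the time–frequency uncertainty bound; a rectangular envelope would show the same qualitative inverse scaling but with heavy sinc tails.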
At the mathematical level, a given wavefunction $\psi(x)$ is almost exactly analogous: macroscopically a particle seems to have a well-defined position and momentum, but microscopically there is only the wavefunction $\psi(x)$. The mapping of the analogy (I am of course not the first to emphasize this analogy; for instance, while writing this post I found “Uncertainty principles in Fourier analysis” by de Bruijn, via Folland’s book, who calls the Wigner function of an audio signal $f(t)$ the “musical score” of $f$) is $f(t) \leftrightarrow \psi(x)$, i.e., $t \leftrightarrow x$ and $\omega \leftrightarrow p/\hbar$. Wavefunctions can of course be complex, but we can restrict ourselves to a real-valued wavefunction without any trouble; we are not worrying about the dynamics of wavefunctions, so you can pretend the Hamiltonian vanishes if you like.
In order to get the acoustic analog of Planck’s constant $\hbar$, it helps to imagine going back to a time when the pitch of a note was measured with a unit that did not have a known connection to absolute frequency, i.e.,… [continue reading]
Physicists often define a Lindbladian superoperator $\mathcal{L}$ as one whose action on an operator $\rho$ can be written as
$$\mathcal{L}[\rho] = -i\left(H\rho - \rho H^\dagger\right) + \sum_l L_l \rho L_l^\dagger$$
for some operator $H$ with positive anti-Hermitian part, $i(H - H^\dagger) \geq 0$, and some set of operators $\{L_l\}$. But how does one efficiently check whether a given superoperator is Lindbladian? In this post I give an “elementary” proof of a less well-known characterization of Lindbladians:
Thus, we can efficiently check if an arbitrary superoperator is Lindbladian by diagonalizing and seeing if all the eigenvalues are positive.
A quick note on terminology
The terms superoperator, completely positive (CP), trace preserving (TP), and Lindbladian are defined below in Appendix A in case you aren’t already familiar with them.
Confusingly, the standard practice is to say a superoperator $\mathcal{S}$ is “positive” when it is positivity preserving: $\mathcal{S}[\rho] \geq 0$ whenever $\rho \geq 0$. This condition is logically independent from the property of a superoperator being “positive” in the traditional sense of being a positive operator, i.e., $\langle A, \mathcal{S}[A] \rangle \geq 0$ for all operators (matrices) $A$, where
$$\langle A, B \rangle \equiv \mathrm{Tr}\left[A^\dagger B\right]$$
is the Hilbert-Schmidt inner product on the space of matrices. We will refer frequently to this latter condition, so for clarity we call it op-positivity, and denote it with the traditional notation $\mathcal{S} \geq 0$.
It is reasonably well known by physicists that Lindbladian superoperators, Eq. (1), generate CP time evolution of density matrices, i.e., $e^{t\mathcal{L}}$ is completely positive when $t \geq 0$ and $\mathcal{L}$ satisfies Eq.… [continue reading]
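As a quick numerical sanity check on the definition, here is a qubit-sized sketch (the Hamiltonian, jump operator, and state are my own illustrative choices) that applies the standard diagonal GKSL form of the generator to a state and confirms two properties any Lindbladian must have: its output is traceless (so the evolution it generates preserves the trace) and it maps Hermitian operators to Hermitian operators.

```python
# Minimal qubit-sized check of the Lindblad generator in diagonal (GKSL) form:
# L[rho] = -i[H, rho] + sum_l ( L_l rho L_l^dag - (1/2){L_l^dag L_l, rho} ).
# All matrices are 2x2 nested lists of complex numbers.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(2)] for i in range(2)]

def scale(c, m):
    return [[c * m[i][j] for j in range(2)] for i in range(2)]

def dag(m):
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def trace(m):
    return m[0][0] + m[1][1]

def lindblad(rho, h, jumps):
    """Apply the GKSL generator with Hamiltonian h and jump operators `jumps` to rho."""
    out = scale(-1j, mat_add(mat_mul(h, rho), scale(-1, mat_mul(rho, h))))  # -i[H, rho]
    for l in jumps:
        ldl = mat_mul(dag(l), l)
        out = mat_add(out,
                      mat_mul(mat_mul(l, rho), dag(l)),                       # L rho L^dag
                      scale(-0.5, mat_add(mat_mul(ldl, rho), mat_mul(rho, ldl))))  # anticommutator
    return out

H = [[1, 0], [0, -1]]            # Pauli Z (illustrative Hamiltonian)
SIGMA_MINUS = [[0, 0], [1, 0]]   # a single decay-type jump operator
rho = [[0.75, 0.1 + 0.2j], [0.1 - 0.2j, 0.25]]  # some valid density matrix

drho = lindblad(rho, H, [SIGMA_MINUS])
print(abs(trace(drho)))                       # ~0: trace is preserved
print(drho[0][1] == drho[1][0].conjugate())   # Hermiticity is preserved
```

Trace preservation follows because $\mathrm{Tr}(L\rho L^\dagger) = \mathrm{Tr}(L^\dagger L \rho)$ cancels the anticommutator term, and the commutator is traceless; this is exactly the TP property referenced above.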
In this post I review the 2010 book “Lifecycle Investing” by Ian Ayres and Barry Nalebuff. (Amazon link here; no commission received.) They argue that a large subset of investors should adopt a (currently) unconventional strategy: One’s future retirement contributions should effectively be treated as bonds in one’s retirement portfolio that cannot be efficiently sold; therefore, early in life one should balance these low-volatility assets by gaining exposure to volatile high-return equities that will generically exceed 100% of one’s liquid retirement assets, necessitating some form of borrowing.
“Lifecycle Investing” was recommended to me by a friend who said the book “is extremely worth reading…like learning about index funds for the first time…Like worth paying 1% of your lifetime income to read if that was needed to get access to the ideas…potentially a lot more”. Ayres and Nalebuff lived up to this recommendation. Eventually, I expect the basic ideas, which are simple, to become so widespread and obvious that it will be hard to remember that they ever required insight.
In part, what makes the main argument so compelling is that (as shown in the next section), it is closely related to an elegant explanation for something we all knew to be true — you should increase the bond-stock ratio of your portfolio as you get older — yet previously had bad justifications for. It also gives new actionable, non-obvious, and potentially very important advice (buy equities on margin when young) that is appropriately tempered by real-world frictions. And, most importantly, it means I personally feel less bad about already being nearly 100% in stocks when I picked up the book.
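To make the core idea concrete, here is a toy sketch (not the authors’ exact prescription; the dollar amounts, the 60% share, and the 2:1 leverage cap are my own illustrative assumptions). It treats the present value of future retirement contributions as a bond-like holding and sizes equity exposure as a fixed fraction of *total* (liquid plus future) retirement wealth, which early in life can demand more equities than one has in liquid savings:

```python
def target_equity(liquid, pv_future_contributions, equity_share=0.6, max_leverage=2.0):
    """Dollars of equity to hold: a fixed share of total retirement wealth
    (liquid assets + present value of future contributions), capped by the
    maximum leverage allowed on liquid assets."""
    total_wealth = liquid + pv_future_contributions
    desired = equity_share * total_wealth
    return min(desired, max_leverage * liquid)

# Early career: small liquid savings, large future contributions.
print(target_equity(liquid=50_000, pv_future_contributions=450_000))
# The 2:1 cap binds at 100000.0 -- hold twice one's liquid assets in stock, on margin.

# Late career: future contributions nearly exhausted.
print(target_equity(liquid=900_000, pv_future_contributions=50_000))
# Unlevered, about 570,000 -- roughly 63% of liquid assets in stock.
```

Note how the rule automatically reproduces the conventional wisdom of the preceding paragraph: as future contributions are converted into liquid savings over a career, the implied stock fraction of the liquid portfolio falls.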
My main concerns, which are shared by other reviewers and which are only partially addressed by the authors, are:
Future income streams might be more like stocks than bonds for the large majority of people.
… [continue reading]
This post describes variations on a thought experiment involving the anthropic principle. The variations were developed through discussion with Andreas Albrecht, Charles Bennett, Leonid Levin, and Andrew Arrasmith at a conference at the Niels Bohr Institute in Copenhagen in October of 2019. I have not yet finished reading Bostrom’s “Anthropic Bias”, so I don’t know where it fits into his framework. I expect it is subsumed by existing discussion there, and I would appreciate pointers.
The point is to consider a few thought experiments that share many of the same important features, but for which we have very different intuitions, and to identify whether there are any substantive differences that can be used to justify these intuitions.
I will use the term “shocked” (in the sense of “I was shocked to see Bob levitate off the ground”) to refer to the situation where we have made observations that are extremely unlikely to be generated by our implicit background model of the world, such that good reasoners would likely reject the model and start entertaining previously disfavored alternative models like “we’re all brains in a vat”, the Matrix, etc. In particular, to be shocked is not supposed to be merely a description of human psychology, but rather is a normative claim about how good scientific reasoners should behave.
Here are the three scenarios:
: Through advances in geology, paleontology, theoretical biology, and quantum computer simulation of chemistry, we get very strong theoretical evidence that intelligent life appears with high likelihood following abiogenesis events, but that abiogenesis itself is very rare: there is one expected abiogenesis event per $10^{22}$ stars per Hubble time.
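For scale: taking the scenario’s rate at face value, and assuming something like $10^{22}$–$10^{24}$ stars in the observable universe (a standard rough estimate that is my own addition, not part of the scenario), a quick Poisson calculation shows how sharply the expected number of abiogenesis events depends on the star count:

```python
import math

RATE = 1e-22  # expected abiogenesis events per star per Hubble time (from the scenario)

for stars in (1e22, 1e23, 1e24):  # rough bounds on stars in the observable universe
    lam = RATE * stars                   # Poisson mean: expected events per Hubble time
    p_at_least_one = 1 - math.exp(-lam)  # probability of at least one event
    print(f"{stars:.0e} stars: expect {lam:g} event(s), P(>=1) = {p_at_least_one:.3f}")
```

At the low end the expected number of events is exactly one, so observing even a single abiogenesis event (ourselves) is unsurprising, which is what makes the scenario's anthropic reasoning delicate.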
… [continue reading]
Curtright et al. have a monograph on the phase-space formulation of quantum mechanics. I recommend reading their historical introduction.
Wigner’s quasi-probability distribution function in phase-space is a special (Weyl–Wigner) representation of the density matrix. It has been useful in describing transport in quantum optics, nuclear physics, quantum computing, decoherence, and chaos. It is also of importance in signal processing, and the mathematics of algebraic deformation. A remarkable aspect of its internal logic, pioneered by Groenewold and Moyal, has only emerged in the last quarter-century: It furnishes a third, alternative, formulation of quantum mechanics, independent of the conventional Hilbert space or path integral formulations. In this logically complete and self-standing formulation, one need not choose sides between coordinate or momentum space. It works in full phase-space, accommodating the uncertainty principle; and it offers unique insights into the classical limit of quantum theory: The variables (observables) in this formulation are c-number functions in phase space instead of operators, with the same interpretation as their classical counterparts, but are composed together in novel algebraic ways.
Here are some quotes. First, the phase-space formulation should be placed on equal footing with the Hilbert-space and path-integral formulations:
When Feynman first unlocked the secrets of the path integral formalism and presented them to the world, he was publicly rebuked: “It was obvious”, Bohr said, “that such trajectories violated the uncertainty principle”.
However, in this case, Bohr was wrong. Today path integrals are universally recognized and widely used as an alternative framework to describe quantum behavior, equivalent to although conceptually distinct from the usual Hilbert space framework, and therefore completely in accord with Heisenberg’s uncertainty principle…
Similarly, many physicists hold the conviction that classical-valued position and momentum variables should not be simultaneously employed in any meaningful formula expressing quantum behavior, simply because this would also seem to violate the uncertainty principle…However, they too are wrong.
… [continue reading]