Here is a table of proposals for creating enormous superpositions of matter. Importantly, all of them describe superpositions whose spatial extent is comparable to or larger than the size of the object itself. Many are quite speculative. I’d like to keep this table updated, so send me references if you think they should be included.
| Proposal | Refs | Material | Radius (nm) | Mass (Da) | Δx (nm) | t (ms) | Rate (Hz) |
|---|---|---|---|---|---|---|---|
| KDTL | [1-3] | Oligoporphyrin[a] | ∼1 | 2.7 × 10⁴ | 266 | 1.24 | 10,000 |
| OTIMA | [4-6] | Gold (Au) | 5 | 6.6 × 10⁶ | 79 | 94 | 600 |
| Bateman et al. | | Silicon (Si) | 5.5 | 1.1 × 10⁶ | 150 | 140 | 0.5 |
| Geraci et al. | | Silica (SiO2) | 6.5 | 1.6 × 10⁶ | 250 | 250 | 0.5 |
| Wan et al. | | Diamond (C) | 95 | 7.5 × 10⁹ | 100 | 0.05 | 1 |
| MAQRO | [10-13] | Silica (SiO2) | 120 | 1.1 × 10¹⁰ | 100 | 100,000 | 0.01 |
| Pino et al. | | Niobium (Nb) | 1,000 | 2.2 × 10¹³ | 290 | 450 | 0.1 |
| Stickler et al. | | | | | | | |

[a] To achieve their highest masses, the KDTL interferometer has superposed molecules of functionalized oligoporphyrin, a family of organic molecules composed of C, H, F, N, S, and Zn with molecular weights ranging from ~19,000 Da to ~29,000 Da. (The units here are Daltons, also known as atomic mass units (amu), i.e., the number of protons and neutrons.) The distribution is peaked around 27,000 Da.
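As a rough sanity check on the masses in the table, one can recompute them from the quoted sizes and bulk densities, treating each object as a uniform sphere. This is only a sketch under my own assumptions: that the size column is a radius in nanometers, and that textbook bulk densities apply.

```python
import math

AMU_G = 1.66054e-24  # grams per dalton

def sphere_mass_da(radius_nm, density_g_cm3):
    """Mass in daltons of a uniform sphere of the given radius and bulk density."""
    r_cm = radius_nm * 1e-7
    volume_cm3 = (4.0 / 3.0) * math.pi * r_cm**3
    return volume_cm3 * density_g_cm3 / AMU_G

# (label, radius in nm, assumed bulk density in g/cm^3, mass quoted in the table)
rows = [
    ("Nb (Pino et al.)",     1000, 8.57, 2.2e13),
    ("Diamond (Wan et al.)",   95, 3.51, 7.5e9),
    ("SiO2 (MAQRO)",          120, 2.20, 1.1e10),
]
for name, r, rho, table_mass in rows:
    m = sphere_mass_da(r, rho)
    print(f"{name}: computed {m:.2g} Da, table {table_mass:.2g} Da")
```

The computed values agree with the table to within a few tens of percent, which is all one should expect given rounded radii and material-dependent densities.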
… [continue reading]
[This topic is way outside my expertise. Just thinking out loud.]
Here is Google’s new language model PaLM having a think:
Alex Tabarrok writes:
It seems obvious that the computer is reasoning. It certainly isn’t simply remembering. It is reasoning and at a pretty high level! To say that the computer doesn’t “understand” seems little better than a statement of religious faith or speciesism…
It’s true that AI is just a set of electronic neurons none of which “understand” but my neurons don’t understand anything either. It’s the system that understands. The Chinese room understands in any objective evaluation and the fact that it fails on some subjective impression of what it is or isn’t like to be an AI or a person is a failure of imagination not an argument…
These arguments aren’t new but Searle’s thought experiment was first posed at a time when the output from AI looked stilted, limited, mechanical. It was easy to imagine that there was a difference in kind. Now the output from AI looks fluid, general, human. It’s harder to imagine there is a difference in kind.
Tabarrok uses an illustration of Searle’s Chinese room featuring a giant look-up table.
But as Scott Aaronson has emphasized [PDF], a machine that simply maps inputs to outputs by consulting a giant look-up table should not be considered “thinking” (although it could be considered to “know”). First, such a look-up table would be beyond astronomically large for any interesting AI task and hence physically infeasible to implement in the real universe. But more importantly, the fact that something is being looked up rather than computed undermines the idea that the system understands or is reasoning.… [continue reading]
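Aaronson's infeasibility point is easy to make quantitative. Even for short exchanges, the number of possible inputs dwarfs any physical resource. The alphabet size, prompt length, and cosmological figures below are illustrative choices of mine, not numbers from Aaronson's notes:

```python
import math

ALPHABET = 128       # printable ASCII, an illustrative choice
PROMPT_CHARS = 500   # a modest conversational prompt

# log10 of the number of table entries needed to cover every possible prompt
log10_entries = PROMPT_CHARS * math.log10(ALPHABET)
print(f"look-up table entries needed: ~10^{log10_entries:.0f}")

# Rough cosmological scales for comparison
print("atoms in the observable universe: ~10^80")
print("holographic bound on bits in the observable universe: ~10^122")
```

A table with ~10^1000 entries cannot be written down in a universe with ~10^80 atoms, no matter how the entries are encoded.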
In many derivations of the Lindblad equation, the authors say something like “There is a gauge freedom[a] in our choice of Lindblad (“jump”) operators that we can use to make those operators traceless for convenience”. However, the nature of this freedom and convenience is often obscure to non-experts.

[a] A gauge freedom of the Lindblad equation means a transformation we can apply to both the Lindblad operators and (possibly) the system’s self-Hamiltonian without changing the reduced dynamics.
While reading Hayden & Sorce’s nice recent paper [arXiv:2108.08316] motivating the choice of traceless Lindblad operators, I noticed for the first time that the trace-ful parts of Lindblad operators are just the contributions to the Hamiltonian part of the reduced dynamics that arise at first order in the system-environment interaction. In contrast, the so-called “Lamb shift” Hamiltonian is second order.
Consider a system-environment decomposition $\mathcal{S}\otimes\mathcal{E}$ of Hilbert space with a global Hamiltonian $H = H_S + H_E + \lambda H_I$, where $H_S$, $H_E$, and $H_I$ are the system’s self-Hamiltonian, the environment’s self-Hamiltonian, and the interaction, respectively. Here, we have (without loss of generality) decomposed the interaction Hamiltonian into a sum of tensor products, $H_I = \sum_k A_k \otimes B_k$, of Hilbert-Schmidt-orthogonal sets of operators $\{A_k\}$ and $\{B_k\}$, with $\lambda$ a real parameter that controls the strength of the interaction.
This Hamiltonian decomposition is not unique in the sense that we can always[b] send $H_S \to H_S + \lambda \Delta$ and $H_I \to H_I - \Delta \otimes I_E$, where $\Delta$ is any Hermitian operator acting only on the system.

[b] There is also a similar freedom with the environment in the sense that we can send $H_E \to H_E + \lambda \Delta'$ and $H_I \to H_I - I_S \otimes \Delta'$ for any Hermitian $\Delta'$ acting only on the environment.

When reading popular derivations of the Lindblad equation
like in the textbook by Breuer & Petruccione, one could be forgiven[c] for thinking that this freedom is eliminated by the necessity of satisfying the assumption that $\mathrm{Tr}_E[H_I \rho_E] = 0$ (where $\rho_E$ is the environment’s initial state), which is crucially deployed in the “microscopic” derivation of the Lindblad operators and the effective system Hamiltonian from the global dynamics generated by $H$.… [continue reading]

[c] Specifically, I have forgiven myself for doing this…
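The gauge freedom in the Hamiltonian decomposition (shifting a Hermitian system operator from the interaction term into the system's self-Hamiltonian) is easy to verify numerically: the global Hamiltonian, and hence all dynamics, is unchanged. A minimal numpy sketch, with arbitrary dimensions, operators, and coupling of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
dS, dE = 3, 4  # system and environment dimensions (arbitrary)

def rand_herm(d):
    """A random Hermitian matrix."""
    m = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (m + m.conj().T) / 2

HS, HE = rand_herm(dS), rand_herm(dE)
A, B = rand_herm(dS), rand_herm(dE)  # one term A ⊗ B of the interaction
lam = 0.1                            # interaction strength
IS, IE = np.eye(dS), np.eye(dE)

def total(H_sys, H_env, H_int):
    """Global Hamiltonian H = H_S + H_E + lam * H_I on the joint space."""
    return np.kron(H_sys, IE) + np.kron(IS, H_env) + lam * H_int

H = total(HS, HE, np.kron(A, B))

# Gauge shift: H_S -> H_S + lam * Delta, H_I -> H_I - Delta ⊗ I_E
Delta = rand_herm(dS)
H_shifted = total(HS + lam * Delta, HE, np.kron(A, B) - np.kron(Delta, IE))

print(np.allclose(H, H_shifted))  # True: the global Hamiltonian is unchanged
```

The check is pure algebra: the added term $\lambda\,\Delta \otimes I_E$ cancels between the two pieces, so nothing observable can depend on how it is apportioned.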
Don Weingarten’s new[a] attack [2105.04545] on the problem of defining wavefunction branches is the most important paper on this topic in several years — and hence, by my strange twisted lights, one of the most important recent papers in physics. Ultimately I think there are significant barriers to the success of this approach, but these may be surmountable. Regardless, the paper makes tons of progress in understanding the advantages and drawbacks of a definition of branches based on quantum complexity.

[a] I previously blogged about earlier work by Weingarten on a related topic. This new paper directly addresses my previous concerns.
Here’s the abstract:
Beginning with the Everett-DeWitt many-worlds interpretation of quantum mechanics, there have been a series of proposals for how the state vector of a quantum system might split at any instant into orthogonal branches, each of which exhibits approximately classical behavior. Here we propose a decomposition of a state vector into branches by finding the minimum of a measure of the mean squared quantum complexity of the branches in the branch decomposition. In a non-relativistic formulation of this proposal, branching occurs repeatedly over time, with each branch splitting successively into further sub-branches among which the branch followed by the real world is chosen randomly according to the Born rule. In a Lorentz covariant version, the real world is a single random draw from the set of branches at asymptotically late time, restored to finite time by sequentially retracing the set of branching events implied by the late time choice. The complexity measure depends on a parameter b with units of volume which sets the boundary between quantum and classical behavior.
… [continue reading]
In this post, I derive an identity showing the sense in which information about coherence over long distances in phase space for a quantum state $\rho$ is encoded in its quasicharacteristic function $\chi_\rho$, the (symplectic) Fourier transform of its Wigner function. In particular I show

$$\langle \alpha | \rho | \beta \rangle = \frac{e^{i\sigma(\beta,\alpha)/2}}{2\pi} \left[ \left( e^{-i\sigma(\xi,\mu)} \chi_G(\xi) \right) * \chi_\rho \right](\beta - \alpha)$$

where $|\alpha\rangle$ and $|\beta\rangle$ are coherent states, $\mu = (\alpha+\beta)/2$ is the mean phase space position of the two states, “$*$” denotes the convolution, and $\chi_G(\xi) = e^{-|\xi|^2/4}$ is the (Gaussian) quasicharacteristic function of the ground state of the harmonic oscillator.
The quasicharacteristic function $\chi_\rho$ for a quantum state $\rho$ of a single degree of freedom is defined as

$$\chi_\rho(\xi) = \mathrm{Tr}[\rho\, \mathcal{D}(\xi)] = \langle \mathcal{D}(\xi)^\dagger, \rho \rangle$$

where $\mathcal{D}(\xi) = e^{i\sigma(\xi, R)}$ is the Weyl phase-space displacement operator, $\xi = (\xi_x, \xi_p)$ are coordinates on “reciprocal” (i.e., Fourier transformed) phase space, $R = (X, P)$ is the phase-space location operator, $X$ and $P$ are the position and momentum operators, “$\langle \cdot, \cdot \rangle$” denotes the Hilbert-Schmidt inner product on operators, $\langle A, B \rangle = \mathrm{Tr}[A^\dagger B]$, and “$\sigma$” denotes the symplectic form, $\sigma(\xi, \xi') = \xi_x \xi'_p - \xi_p \xi'_x$. (Throughout this post I use the notation established in Sec. 2 of my recent paper with Felipe Hernández.) It has variously been called the quantum characteristic function, the chord function, the Wigner characteristic function, the Weyl function, and the moment-generating function. It is the quantum analog of the classical characteristic function.
Importantly, the quasicharacteristic function obeys $\chi_\rho(0) = 1$ and $|\chi_\rho(\xi)| \le 1$, just like the classical characteristic function, and provides a definition of the Wigner function where the linear symplectic symmetry of phase space is manifest:

$$W_\rho(\alpha) = (2\pi)^{-2} \int \mathrm{d}^2\xi\; e^{i\sigma(\alpha,\xi)}\, \chi_\rho(\xi) = (2\pi)^{-1} \int \mathrm{d}\Delta\; e^{-ip\Delta}\, \psi(x + \Delta/2)\, \psi^*(x - \Delta/2)$$

where $\alpha = (x, p)$ is the phase-space coordinate and $\psi(x)$ is the position-space representation of the (pure) quantum state. The first equality says that $W_\rho$ and $\chi_\rho$ are related by the symplectic Fourier transform. (This just means the inner product “$\alpha \cdot \xi$” in the regular Fourier transform is replaced with the symplectic form “$\sigma(\alpha, \xi)$”, and has the simple effect of exchanging the reciprocal variables, $(\xi_x, \xi_p) \to (\xi_p, -\xi_x)$, simplifying many expressions.)… [continue reading]
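The basic properties of the quasicharacteristic function are cheap to check numerically in a truncated Fock basis. Below I build the displacement operator from truncated position and momentum operators and verify normalization, boundedness, and the Gaussian form of the ground state's quasicharacteristic function. The truncation size and test points are arbitrary choices of mine; I take $\hbar = 1$ and use the conventions sketched in this post, which may differ from the paper's by signs and phases.

```python
import numpy as np

N = 60  # Fock-space truncation (large enough for small |xi|)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
X = (a + a.conj().T) / np.sqrt(2)
P = (a - a.conj().T) / (1j * np.sqrt(2))

def displacement(xi_x, xi_p):
    """D(xi) = exp(i (xi_x P - xi_p X)), via eigendecomposition of the generator."""
    G = xi_x * P - xi_p * X                  # Hermitian generator
    vals, vecs = np.linalg.eigh(G)
    return vecs @ np.diag(np.exp(1j * vals)) @ vecs.conj().T

def chi(rho, xi_x, xi_p):
    """Quasicharacteristic function chi_rho(xi) = Tr[rho D(xi)]."""
    return np.trace(rho @ displacement(xi_x, xi_p))

# A random (truncated) density matrix
rng = np.random.default_rng(1)
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
rho = M @ M.conj().T
rho /= np.trace(rho)

print(abs(chi(rho, 0.0, 0.0)))               # ≈ 1 (normalization)
print(abs(chi(rho, 0.7, -0.3)) <= 1 + 1e-9)  # boundedness, |chi| <= 1

# Ground state: chi_G should be the Gaussian exp(-|xi|^2 / 4)
rho0 = np.zeros((N, N)); rho0[0, 0] = 1.0
xi = (0.9, 0.4)
print(abs(chi(rho0, *xi) - np.exp(-(xi[0]**2 + xi[1]**2) / 4)))  # ≈ 0
```

Boundedness follows because the truncated displacement operator is exactly unitary (its generator is Hermitian), so $|\mathrm{Tr}[\rho\, \mathcal{D}(\xi)]| \le 1$ for any density matrix.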
Carney, Müller, and Taylor have a tantalizing paper on how the quantum nature of gravity might be confirmed even though we are quite far from being able to directly create and measure superpositions of gravitationally appreciable amounts of matter (hereafter: “massive superpositions”), and of course very far from being able to probe the Planck scale where quantum gravity effects dominate. More precisely, the idea is to demonstrate (assuming assumptions) that the gravitational field can be used to transmit quantum information from one system to another in the sense that the effective quantum channel is not entanglement breaking.
We suggest a test of a central prediction of perturbatively quantized general relativity: the coherent communication of quantum information between massive objects through gravity. To do this, we introduce the concept of interactive quantum information sensing, a protocol tailored to the verification of dynamical entanglement generation between a pair of systems. Concretely, we propose to monitor the periodic wavefunction collapse and revival in an atomic interferometer which is gravitationally coupled to a mechanical oscillator. We prove a theorem which shows that, under the assumption of time-translation invariance, this collapse and revival is possible if and only if the gravitational interaction forms an entangling channel. Remarkably, as this approach improves at moderate temperatures and relies primarily upon atomic coherence, our numerical estimates indicate feasibility with current devices.
[Edit: See also the November 2021 errata.]
Although I’m not sure they would phrase it this way, the key idea for me was that merely protecting massive superpositions from decoherence is actually not that hard; sufficient isolation can be achieved in lots of systems.… [continue reading]
This nice recent paper considers the “general probabilistic theory” operational framework, of which classical and quantum theories are special cases, and asks what sorts of theories admit quantum Darwinism-like dynamics. It is closely related to my interest in finding a satisfying theory of classical measurement.
Quantum Darwinism posits that the emergence of a classical reality relies on the spreading of classical information from a quantum system to many parts of its environment. But what are the essential physical principles of quantum theory that make this mechanism possible? We address this question by formulating the simplest instance of Darwinism – CNOT-like fan-out interactions – in a class of probabilistic theories that contain classical and quantum theory as special cases. We determine necessary and sufficient conditions for any theory to admit such interactions. We find that every non-classical theory that admits this spreading of classical information must have both entangled states and entangled measurements. Furthermore, we show that Spekkens’ toy theory admits this form of Darwinism, and so do all probabilistic theories that satisfy principles like strong symmetry, or contain a certain type of decoherence processes. Our result suggests the counterintuitive general principle that in the presence of local non-classicality, a classical world can only emerge if this non-classicality can be “amplified” to a form of entanglement.
After the intro, the authors give self-contained background information on the two key prerequisites: quantum Darwinism and generalized probabilistic theories (GPTs). The former is an admirable brief summary of what are, to me, the core and extremely simple features of quantum Darwinism.… [continue reading]
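The "CNOT-like fan-out" at the heart of the paper's setup is simple to exhibit in the quantum case: the system qubit's classical bit is copied into several environment qubits, so each environment fragment can read out the bit, while the system's coherence is destroyed. (It is the bit, not the state, that is copied; no-cloning is not violated.) A minimal numpy sketch, with the state amplitudes and fragment count being arbitrary choices of mine:

```python
import numpy as np

# System qubit in superposition a|0> + b|1>; two environment qubits start in |0>.
a, b = np.sqrt(0.3), np.sqrt(0.7)

# CNOT fan-out from the system onto each environment qubit
# yields the GHZ-like state a|000> + b|111>.
state = np.zeros(8, dtype=complex)
state[0b000] = a
state[0b111] = b
rho = np.outer(state, state.conj()).reshape([2] * 6)  # axes (s, e1, e2, s', e1', e2')

rho_sys = np.einsum('aijbij->ab', rho)  # trace out both environment qubits
rho_e1  = np.einsum('iajibj->ab', rho)  # reduced state of one environment fragment

print(np.round(rho_sys, 3))  # diag(0.3, 0.7): the system's coherence is gone
print(np.round(rho_e1, 3))   # diag(0.3, 0.7): the fragment holds the same classical bit
```

Both reduced states are the same diagonal mixture: every fragment (and the system itself) agrees on the Born-rule probabilities for the bit, which is exactly the redundant classical record quantum Darwinism is about.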
Summary: Maybe we should start distinguishing “straight research” from more opinionated scientific work and encourage industrial research labs to commit to protecting the former as a realistic, limited version of academic freedom in the private for-profit sector.
It seems clear enough to me that, within the field of journalism, the distinction between opinion pieces and “straight reporting” is both meaningful and valuable to draw. Both sorts of works should be pursued vigorously, even by the same journalists at the same time, but they should be distinguished (e.g., by being placed in different sections of a newspaper, or being explicitly labeled “opinion”) and held to different standards.[a] This is true even though there is of course a continuum between these categories, and it’s infeasible to precisely quantify where a piece falls on the axis. (That said, I’d like to see more serious philosophical attempts to identify actionable principles for drawing this distinction more reliably and transparently.)

[a] In my opinion it’s unfortunate that this distinction has been partially eroded in recent years and that some thoughtful people have even argued it’s meaningless and should be dropped. That’s not the subject of this blog post, though.
It’s easy for idealistic outsiders to get the impression that all of respectable scientific research is analogous to straight reporting rather than opinion, but just about any researcher will tell you that some articles are closer than others to the opinion category. That’s not to say such work is bad or unscientific, just that these articles go further in the direction of speculative interpretation and selective highlighting of certain pieces of evidence, and are often motivated by normative claims (“this area is a more fruitful research avenue than my colleagues believe”, “this evidence implies the government should adopt a certain policy”, etc.).… [continue reading]