After years of not having an intuitive interpretation of what the unital condition on CP maps means, I recently learned a beautiful one: unitality means the dynamics never decreases the state’s mixedness, in the sense of the majorization partial order.
Consider the Lindblad dynamics generated by a set of Lindblad operators $\{L_k\}$, corresponding to the Lindbladian

$$\mathcal{L}[\rho] = -i[H,\rho] + \sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2}\{L_k^\dagger L_k, \rho\} \right),$$

and the resulting quantum dynamical semigroup $\Phi_t = e^{t\mathcal{L}}$. Let

$$S_\alpha(\rho) = \frac{1}{1-\alpha} \ln \mathrm{Tr}[\rho^\alpha]$$

be the Renyi entropies, with $S_1 = \lim_{\alpha \to 1} S_\alpha$ the von Neumann entropy. Finally, let $\succ$ denote the majorization partial order on density matrices: $\rho \succ \sigma$ exactly when $\sum_{i=1}^n \lambda_i \ge \sum_{i=1}^n \mu_i$ for all $n$, where $\{\lambda_i\}$ and $\{\mu_i\}$ are the respective eigenvalues of $\rho$ and $\sigma$ in decreasing order. (In words: $\rho \succ \sigma$ means $\sigma$ is more mixed than $\rho$.) Then the following conditions are equivalent. (None of this depends on the dynamics being Lindbladian: if you drop the first condition and drop the “$t$” subscript, so that $\Phi$ is just some arbitrary, potentially non-divisible, CP map, the remaining conditions are all equivalent.)
- $\mathcal{L}[I] = 0$: “$\Phi_t$ is a unital map ($\Phi_t[I] = I$ for all $t$)”
- $\frac{\mathrm{d}}{\mathrm{d}t} S_\alpha(\Phi_t[\rho]) \ge 0$ for all $\rho$, $t$, and $\alpha$: “All Renyi entropies are non-decreasing”
- $\rho \succ \Phi_t[\rho]$ for all $\rho$ and $t$: “$\Phi_t$ is mixedness non-decreasing”
- $\Phi_t[\rho] = \sum_j p_j U_j \rho U_j^\dagger$ for all $\rho$ and $t$, for some unitaries $U_j$ and probabilities $p_j$ (which may depend on $\rho$ and $t$).
The non-trivial equivalences above are proved in Sec. 8.3 of Wolf, “Quantum Channels and Operations: Guided Tour”.
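To make the equivalences concrete, here is a small numerical sketch (mine, not Wolf’s): apply a randomly generated mixture-of-unitaries channel, which is manifestly unital, to a random state, and verify that the input majorizes the output and that the Renyi entropies do not decrease.

```python
# Numerical sketch: a mixture-of-unitaries (hence unital) channel makes its
# output majorized by its input and never decreases the Renyi entropies.
import numpy as np

rng = np.random.default_rng(0)
d = 4

def random_unitary(n):
    # QR decomposition of a complex Ginibre matrix gives a Haar-random unitary.
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

def random_density_matrix(n):
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def renyi(rho, alpha):
    lam = np.maximum(np.linalg.eigvalsh(rho), 1e-15)
    return np.log(np.sum(lam ** alpha)) / (1 - alpha)

def majorizes(rho, sigma):
    # rho ≻ sigma: partial sums of decreasing eigenvalues of rho dominate.
    lam = np.sort(np.linalg.eigvalsh(rho))[::-1]
    mu = np.sort(np.linalg.eigvalsh(sigma))[::-1]
    return bool(np.all(np.cumsum(lam) >= np.cumsum(mu) - 1e-12))

rho = random_density_matrix(d)
p = rng.dirichlet(np.ones(3))                       # probabilities p_j
us = [random_unitary(d) for _ in range(3)]          # unitaries U_j
sigma = sum(pj * u @ rho @ u.conj().T for pj, u in zip(p, us))

assert majorizes(rho, sigma)                        # input majorizes output
for alpha in (0.5, 2, 3):
    assert renyi(sigma, alpha) >= renyi(rho, alpha) - 1e-12
```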
Note that having all Hermitian Lindblad operators ($L_k = L_k^\dagger$) implies, but is not implied by, the above conditions. Indeed, the condition of Lindblad operator Hermiticity (or, more generally, normality) is not preserved under the unitary gauge freedom $L_k \to \sum_j u_{kj} L_j$ (which leaves the Lindbladian invariant for any unitary matrix $u$). I was curious whether unital dynamics can always be expressed in terms of Hermitian Lindblad operators, but based on some quick numerics it looks like this is not the case.… [continue reading]
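For readers who want to poke at this numerically themselves, note that $\mathcal{L}[I] = \sum_k [L_k, L_k^\dagger]$, so unitality of the Lindblad dynamics is equivalent to $\sum_k [L_k, L_k^\dagger] = 0$. Here is a quick sketch (my own, using this criterion) confirming that Hermitian Lindblad operators give unital dynamics, and that the unitary gauge mixing preserves unitality while destroying Hermiticity:

```python
# Unitality criterion: L[I] = sum_k (L_k L_k† - L_k† L_k) = sum_k [L_k, L_k†],
# so the dynamics is unital iff this sum of commutators vanishes.
import numpy as np

rng = np.random.default_rng(1)
d = 3

def comm_sum(Ls):
    return sum(L @ L.conj().T - L.conj().T @ L for L in Ls)

# Two random Hermitian Lindblad operators.
Ls = []
for _ in range(2):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Ls.append((g + g.conj().T) / 2)

assert np.allclose(comm_sum(Ls), 0)           # Hermitian => unital

# Gauge-mix with a random 2x2 unitary u: L_k -> sum_j u_kj L_j.
q, r = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
u = q * (np.diag(r) / np.abs(np.diag(r)))
Ls_mixed = [sum(u[k, j] * Ls[j] for j in range(2)) for k in range(2)]

assert np.allclose(comm_sum(Ls_mixed), 0)     # unitality preserved by the gauge
assert not all(np.allclose(L, L.conj().T) for L in Ls_mixed)  # Hermiticity lost
```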
[Summary: The constantly evolving tests for what counts as worryingly powerful AI are mostly a consequence of how hard it is to design tests that will identify the real-world power of future automated systems. I argue that Alan Turing in 1950 could not reliably distinguish a typical human from an appropriately fine-tuned GPT-4, yet all our current automated systems cannot produce growth above historic trends.]
What does the phenomenon of “moving the goalposts” for what counts as AI tell us about AI?
It’s often said that people repeatedly revising their definition of AI, often in response to previous AI tests being passed, is evidence that people are denying/afraid of reality, and want to put their head in the sand or whatever. There’s some truth to that, but that’s a comment about humans and I think it’s overstated.
Closer to what I want to talk about is the idea that AI is continuously redefined to mean “whatever humans can do that hasn’t been automated yet”, which is often taken to be evidence that AI is not a “natural” kind out there in the world, but rather just a category relative to current tech. There’s also truth to this, but it’s not exactly what I’m interested in.
To me, it is startling that (I claim) we have systems today that would likely pass the Turing test if administered by Alan Turing, but that have negligible impact on a global scale. More specifically, consider fine-tuning GPT-4 to mimic a typical human who lacks encyclopedic knowledge of the contents of the internet. Suppose that it’s mimicking a human with average intelligence whose occupation has no overlap with Alan Turing’s expertise.… [continue reading]
Here’s a collection of reviews of the arguments that artificial general intelligence represents an existential risk to humanity. They vary greatly in length and style. I may update this from time to time.
Here, from my perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely resembling the current pathway, or any other pathway we can easily jump to.
This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire -- especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. Second, I formulate and evaluate a more specific six-premise argument that creating agents of this kind will lead to existential catastrophe by 2070. On this argument, by 2070: (1) it will become possible and financially feasible to build relevantly powerful and agentic AI systems; (2) there will be strong incentives to do so; (3) it will be much harder to build aligned (and relevantly powerful/agentic) AI systems than to build misaligned (and relevantly powerful/agentic) AI systems that are still superficially attractive to deploy; (4) some such misaligned systems will seek power over humans in high-impact ways; (5) this problem will scale to the full disempowerment of humanity; and (6) such disempowerment will constitute an existential catastrophe.
… [continue reading]
Here is a table of proposals for creating enormous superpositions of matter. Importantly, all of them describe superpositions whose spatial extent is comparable to or larger than the size of the object itself. Many are quite speculative. I’d like to keep this table updated, so send me references if you think they should be included.
| Proposal | Refs. | Material | Size (nm) | Mass (Da) | … | … | … |
|---|---|---|---|---|---|---|---|
| KDTL | [1-3] | Oligoporphyrin | ∼1 | 2.7 × 10^4 | 266 | 1.24 | 10,000 |
| OTIMA | [4-6] | Gold (Au) | 5 | 66 × 10 | 79 | 94 | 600 |
| Bateman et al. | | Silicon (Si) | 5.5 | 1.1 × 10^6 | 150 | 140 | 0.5 |
| Geraci et al. | | Silica (SiO2) | 6.5 | 1.6 × 10^6 | 250 | 250 | 0.5 |
| Wan et al. | | Diamond (C) | 95 | 7.5 × 10^9 | 100 | 0.05 | 1 |
| MAQRO | [10-13] | Silica (SiO2) | 120 | 110 × 10 | 100 | 100,000 | 0.01 |
| Pino et al. | | Niobium (Nb) | 1,000 | 2.2 × 10^13 | 290 | 450 | 0.1 |
| Stickler et al. |

Note on the KDTL entry: to achieve their highest masses, the KDTL interferometer has superposed molecules of functionalized oligoporphyrin, a family of organic molecules composed of C, H, F, N, S, and Zn with molecular weights ranging from ~19,000 Da to ~29,000 Da. (The units here are Daltons, also known as atomic mass units (amu), i.e., the number of protons and neutrons.) The distribution is peaked around 27,000 Da.
… [continue reading]
[This topic is way outside my expertise. Just thinking out loud.]
Here is Google’s new language model PaLM having a think:
Alex Tabarrok writes:
It seems obvious that the computer is reasoning. It certainly isn’t simply remembering. It is reasoning and at a pretty high level! To say that the computer doesn’t “understand” seems little better than a statement of religious faith or speciesism…
It’s true that AI is just a set of electronic neurons none of which “understand” but my neurons don’t understand anything either. It’s the system that understands. The Chinese room understands in any objective evaluation and the fact that it fails on some subjective impression of what it is or isn’t like to be an AI or a person is a failure of imagination not an argument…
These arguments aren’t new but Searle’s thought experiment was first posed at a time when the output from AI looked stilted, limited, mechanical. It was easy to imagine that there was a difference in kind. Now the output from AI looks fluid, general, human. It’s harder to imagine there is a difference in kind.
Tabarrok uses an illustration of Searle’s Chinese room featuring a giant look-up table:
But as Scott Aaronson has emphasized [PDF], a machine that simply maps inputs to outputs by consulting a giant look-up table should not be considered “thinking” (although it could be considered to “know”). First, such a look-up table would be beyond astronomically large for any interesting AI task and hence physically infeasible to implement in the real universe. But more importantly, the fact that something is being looked up rather than computed undermines the idea that the system understands or is reasoning.… [continue reading]
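For a rough sense of scale (my numbers, purely illustrative, not Aaronson’s): even a conversation history of a few thousand characters over a modest alphabet already requires a lookup table with astronomically more entries than there are atoms in the observable universe (~10^80).

```python
# Back-of-envelope: number of possible conversation histories the lookup
# table must cover is alphabet^length; compare with ~10^80 atoms.
from math import log10

alphabet = 64          # letters, digits, punctuation (assumed)
history_chars = 3000   # a few pages of dialogue (assumed)

entries_log10 = history_chars * log10(alphabet)   # log10 of 64^3000
print(f"lookup table needs ~10^{entries_log10:.0f} entries")
assert entries_log10 > 1000   # dwarfs 10^80 by an enormous margin
```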
In many derivations of the Lindblad equation, the authors say something like “There is a gauge freedom in our choice of Lindblad (“jump”) operators that we can use to make those operators traceless for convenience”. (A gauge freedom of the Lindblad equation means a transformation we can apply to both the Lindblad operators and (possibly) the system’s self-Hamiltonian without changing the reduced dynamics.) However, the nature of this freedom and convenience is often obscure to non-experts.
While reading Hayden & Sorce’s nice recent paper [arXiv:2108.08316] motivating the choice of traceless Lindblad operators, I noticed for the first time that the trace-ful parts of Lindblad operators are just the contributions to the Hamiltonian part of the reduced dynamics that arise at first order in the system-environment interaction. In contrast, the so-called “Lamb shift” Hamiltonian is second order.
Consider a system-environment decomposition $\mathcal{S} \otimes \mathcal{E}$ of Hilbert space with a global Hamiltonian $H = H_{\mathcal{S}} + H_{\mathcal{E}} + \lambda H_{\mathcal{I}}$, where $H_{\mathcal{S}}$, $H_{\mathcal{E}}$, and $H_{\mathcal{I}}$ are the system’s self-Hamiltonian, the environment’s self-Hamiltonian, and the interaction, respectively. Here, we have (without loss of generality) decomposed the interaction Hamiltonian into a sum of tensor products, $H_{\mathcal{I}} = \sum_\alpha A_\alpha \otimes B_\alpha$, of Hilbert-Schmidt-orthogonal sets of operators $\{A_\alpha\}$ and $\{B_\alpha\}$ acting on the system and environment respectively, with $\lambda$ a real parameter that controls the strength of the interaction.
This Hamiltonian decomposition is not unique in the sense that we can always send $H_{\mathcal{S}} \to H_{\mathcal{S}} - \lambda X$ and $H_{\mathcal{I}} \to H_{\mathcal{I}} + X \otimes I_{\mathcal{E}}$, where $X$ is any Hermitian operator acting only on the system. (There is also a similar freedom with the environment, in the sense that we can send $H_{\mathcal{E}} \to H_{\mathcal{E}} - \lambda Y$ and $H_{\mathcal{I}} \to H_{\mathcal{I}} + I_{\mathcal{S}} \otimes Y$ for Hermitian $Y$ acting only on the environment.) When reading popular derivations of the Lindblad equation
like in the textbook by Breuer & Petruccione, one could be forgiven (specifically, I have forgiven myself for doing this…) for thinking that this freedom is eliminated by the necessity of satisfying the assumption that $\mathrm{Tr}_{\mathcal{E}}[H_{\mathcal{I}} \rho_{\mathcal{E}}] = 0$, which is crucially deployed in the “microscopic” derivation of the Lindblad equation’s operators from the global dynamics generated by $H$.… [continue reading]
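As a concrete instance of the gauge freedom mentioned above (a standard result, sketched here in my own notation): shifting a single Lindblad operator by a c-number, $L \to L + c$, leaves the full Lindbladian unchanged provided the Hamiltonian absorbs $H \to H + (c^* L - c L^\dagger)/2i$. This is exactly the freedom used to make the Lindblad operators traceless. A quick numerical check:

```python
# Verify numerically: the generator -i[H, rho] + L rho L† - (1/2){L†L, rho}
# is invariant under L -> L + c,  H -> H + (c* L - c L†)/(2i).
import numpy as np

rng = np.random.default_rng(2)
d = 3

def dagger(A):
    return A.conj().T

def lindbladian(H, L, rho):
    anti = dagger(L) @ L @ rho + rho @ dagger(L) @ L
    return -1j * (H @ rho - rho @ H) + L @ rho @ dagger(L) - anti / 2

g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (g + dagger(g)) / 2                       # random Hermitian Hamiltonian
L = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
g2 = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = g2 @ dagger(g2)
rho /= np.trace(rho).real                     # random density matrix

c = 0.7 - 0.3j                                # arbitrary c-number shift
L2 = L + c * np.eye(d)
H2 = H + (np.conj(c) * L - c * dagger(L)) / 2j

assert np.allclose(H2, dagger(H2))            # compensating term is Hermitian
assert np.allclose(lindbladian(H, L, rho), lindbladian(H2, L2, rho))
```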
Don Weingarten’s new attack [2105.04545] on the problem of defining wavefunction branches (I previously blogged about earlier work by Weingarten on a related topic; this new paper directly addresses my previous concerns) is the most important paper on this topic in several years — and hence, by my strange twisted lights, one of the most important recent papers in physics. Ultimately I think there are significant barriers to the success of this approach, but these may be surmountable. Regardless, the paper makes tons of progress in understanding the advantages and drawbacks of a definition of branches based on quantum complexity.
Here’s the abstract:
Beginning with the Everett-DeWitt many-worlds interpretation of quantum mechanics, there have been a series of proposals for how the state vector of a quantum system might split at any instant into orthogonal branches, each of which exhibits approximately classical behavior. Here we propose a decomposition of a state vector into branches by finding the minimum of a measure of the mean squared quantum complexity of the branches in the branch decomposition. In a non-relativistic formulation of this proposal, branching occurs repeatedly over time, with each branch splitting successively into further sub-branches among which the branch followed by the real world is chosen randomly according to the Born rule. In a Lorentz covariant version, the real world is a single random draw from the set of branches at asymptotically late time, restored to finite time by sequentially retracing the set of branching events implied by the late time choice. The complexity measure depends on a parameter b with units of volume which sets the boundary between quantum and classical behavior.
… [continue reading]
In this post, I derive an identity showing the sense in which information about coherence over long distances in phase space for a quantum state $\rho$ is encoded in its quasicharacteristic function $\mathcal{C}_\rho$, the (symplectic) Fourier transform of its Wigner function. In particular I show
where $|\alpha\rangle$ and $|\beta\rangle$ are coherent states, $\bar{\alpha}$ is the mean phase space position of the two states, “$*$” denotes the convolution, and $\mathcal{C}_0$ is the (Gaussian) quasicharacteristic function of the ground state of the harmonic oscillator.
The quasicharacteristic function $\mathcal{C}_\rho$ for a quantum state $\rho$ of a single degree of freedom is defined as

$$\mathcal{C}_\rho(\xi) = \langle W_\xi, \rho \rangle = \mathrm{Tr}[W_\xi^\dagger \rho],$$

where $W_\xi = e^{i \xi \wedge \hat{R}}$ is the Weyl phase-space displacement operator, $\xi = (\xi_x, \xi_p)$ are coordinates on “reciprocal” (i.e., Fourier transformed) phase space, $\hat{R} = (\hat{X}, \hat{P})$ is the phase-space location operator, $\hat{X}$ and $\hat{P}$ are the position and momentum operators, “$\langle \cdot, \cdot \rangle$” denotes the Hilbert-Schmidt inner product on operators, $\langle A, B \rangle = \mathrm{Tr}[A^\dagger B]$, and “$\wedge$” denotes the symplectic form, $\xi \wedge \alpha = \xi_p \alpha_x - \xi_x \alpha_p$. (Throughout this post I use the notation established in Sec. 2 of my recent paper with Felipe Hernández.) It has variously been called the quantum characteristic function, the chord function, the Wigner characteristic function, the Weyl function, and the moment-generating function. It is the quantum analog of the classical characteristic function.
Importantly, the quasicharacteristic function obeys $\mathcal{C}_\rho(0) = 1$ and $|\mathcal{C}_\rho(\xi)| \le 1$, just like the classical characteristic function, and provides a definition of the Wigner function where the linear symplectic symmetry of phase space is manifest:

$$\mathcal{W}_\rho(\alpha) = \int \frac{\mathrm{d}^2\xi}{(2\pi)^2} \, e^{i \xi \wedge \alpha} \, \mathcal{C}_\rho(\xi) = \int \frac{\mathrm{d}\Delta x}{2\pi} \, e^{-i p \Delta x} \, \psi(x + \Delta x/2) \, \psi^*(x - \Delta x/2),$$

where $\alpha = (x, p)$ is the phase-space coordinate and $\psi$ is the position-space representation of the quantum state. The first equality says that $\mathcal{W}_\rho$ and $\mathcal{C}_\rho$ are related by the symplectic Fourier transform. (This just means the inner product “$\xi \cdot \alpha$” in the regular Fourier transform is replaced with the symplectic form “$\xi \wedge \alpha$”, which has the simple effect of exchanging the reciprocal variables, $(\xi_x, \xi_p) \to (\xi_p, -\xi_x)$, simplifying many expressions.)… [continue reading]
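These properties are easy to check numerically. Here is a small sketch (my own conventions for the displacement operator, in a truncated Fock space): for the harmonic-oscillator ground state, the quasicharacteristic function equals 1 at the origin, is bounded by 1 in magnitude, and comes out Gaussian, $e^{-|\xi|^2/4}$ in units with $\hbar = m = \omega = 1$.

```python
# Compute C(xi) = Tr[rho W_xi] with W_xi = exp(i(xi_p X - xi_x P)) in a
# truncated Fock basis, and check C(0) = 1, |C| <= 1, and the Gaussian
# form exp(-|xi|^2/4) for the harmonic-oscillator ground state.
import numpy as np
from scipy.linalg import expm

N = 60                                     # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
X = (a + a.conj().T) / np.sqrt(2)
P = (a - a.conj().T) / (1j * np.sqrt(2))

rho = np.zeros((N, N), dtype=complex)
rho[0, 0] = 1.0                            # ground state |0><0|

def C(xi_x, xi_p):
    W = expm(1j * (xi_p * X - xi_x * P))   # Weyl displacement operator
    return np.trace(rho @ W)

assert np.isclose(C(0, 0), 1)
for xi in [(0.5, 0.0), (1.0, -0.7), (0.3, 1.2)]:
    c = C(*xi)
    assert abs(c) <= 1 + 1e-9
    assert np.isclose(c, np.exp(-(xi[0]**2 + xi[1]**2) / 4), atol=1e-6)
```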