Harold Ollivier has put out a nice paper generalizing my best result:
We examine the emergence of objectivity for quantum many-body systems in a setting without an environment to decohere the system’s state, but where observers can only access small fragments of the whole system. We extend the result of Reidel (2017) to the case where the system is in a mixed state, measurements are performed through POVMs, and imprints of the outcomes are imperfect. We introduce a new condition on states and measurements to recover full classicality for any number of observers. We further show that evolutions of quantum many-body systems can be expected to yield states that satisfy this condition whenever the corresponding measurement outcomes are redundant.
Ollivier does a good job of summarizing why there is an urgent need to find a way to identify objectively classical variables in a many-body system without leaning on a preferred system-environment tensor decomposition. He also concisely describes the main results of my paper in somewhat different language, so some of you may find his version nicer to read. [A minor quibble: although this is of course a matter of taste, I disagree that the Shor-code example was the “core of the main result” of my paper. In my opinion, the key idea was that there was a sensible way of defining redundancy at all that allowed for proving statements about compatibility without recourse to a preferred non-microscopic tensor structure. The Shor-code example is more important for showing the limits of what redundancy can tell you (which is saturated in a weak sense).] … [continue reading]
Although I’ve been repeatedly advised it’s not a good social strategy, a glorious way to start a research paper is with specific, righteous criticism of your anonymous colleagues: [For readability, I have dropped the citations and section references from these quotes without marking the ellipses.]
Transformers are deep feed-forward artificial neural networks with a (self)attention mechanism. They have been tremendously successful in natural language processing tasks and other domains. Since their inception 5 years ago, many variants have been suggested. Descriptions are usually graphical, verbal, partial, or incremental. Despite their popularity, it seems no pseudocode has ever been published for any variant. Contrast this to other fields of computer science, even to “cousin” discipline reinforcement learning.
So begin Phuong & Hutter in a great, rant-filled paper that “covers what Transformers are, how they are trained, what they’re used for, their key architectural components, tokenization, and a preview of practical considerations, and the most prominent models.” As an exercise, in this post I’ll dig into the first item by writing down an even more compact definition of a transformer than theirs, in the form of a mathematical function rather than pseudocode, while avoiding the ambiguities rampant in the rest of the literature. I will consider only what a single forward pass of a transformer does, considered as a map from token sequences to probability distributions over the token vocabulary. I do not try to explain the transformer, nor do I address other important aspects like motivation, training, and computational cost.
(This post also draws on a nice introduction by Turner. If you are interested in understanding and interpretation, you might check out — in descending order of sophistication — Elhage et al.… [continue reading]
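To make the “transformer as a mathematical function” idea concrete, here is a minimal NumPy sketch of a single forward pass of a decoder-only transformer: a map from a token sequence to a distribution over the next token. This is my own illustration, not Phuong & Hutter’s pseudocode; it uses single-head attention and pre-norm residual blocks, and all weight names and sizes are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def transformer(tokens, params):
    """Map a list of token ids to a probability distribution over the vocabulary."""
    W_e, W_p, blocks, W_u = params
    T = len(tokens)
    x = W_e[tokens] + W_p[:T]                        # (T, d): token + position embeddings
    mask = np.triu(np.full((T, T), -np.inf), k=1)    # causal mask: no attending to the future
    for (W_q, W_k, W_v, W_o, W_1, b_1, W_2, b_2) in blocks:
        # single-head self-attention (multi-head omitted for brevity)
        h = layer_norm(x)
        q, k, v = h @ W_q, h @ W_k, h @ W_v
        a = softmax(q @ k.T / np.sqrt(q.shape[-1]) + mask)
        x = x + (a @ v) @ W_o
        # position-wise MLP
        h = layer_norm(x)
        x = x + np.maximum(0, h @ W_1 + b_1) @ W_2 + b_2
    logits = layer_norm(x) @ W_u
    return softmax(logits[-1])                       # distribution over the next token

# tiny random instance, just to show the shapes
rng = np.random.default_rng(0)
V, d, d_ff, n_layers = 11, 8, 16, 2
shapes = [(d, d), (d, d), (d, d), (d, d), (d, d_ff), (d_ff,), (d_ff, d), (d,)]
blocks = [tuple(rng.normal(0, 0.1, s) for s in shapes) for _ in range(n_layers)]
params = (rng.normal(0, 0.1, (V, d)),   # W_e: token embeddings
          rng.normal(0, 0.1, (32, d)),  # W_p: position embeddings
          blocks,
          rng.normal(0, 0.1, (d, V)))  # W_u: unembedding
p = transformer([3, 1, 4, 1, 5], params)
```

The point of the function-first view: everything above is a fixed deterministic map once `params` is fixed, and `p` is a genuine probability vector over the vocabulary.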
After years of not having an intuitive interpretation of the unital condition on CP maps, I recently learned a beautiful one: unitality means the dynamics never decreases the state’s mixedness, in the sense of the majorization partial order.
Consider the Lindblad dynamics generated by a set of Lindblad operators $\{L_k\}$, corresponding to the Lindbladian

$$\mathcal{L}[\rho] = -i[H,\rho] + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\{L_k^\dagger L_k, \rho\} \right)$$

and the resulting quantum dynamical semigroup $\rho_t = e^{t\mathcal{L}}[\rho_0]$. Let

$$S_\alpha(\rho) = \frac{1}{1-\alpha}\ln \mathrm{Tr}[\rho^\alpha]$$

be the Renyi entropies, with $S_1(\rho) = -\mathrm{Tr}[\rho\ln\rho]$ the von Neumann entropy (the limit $\alpha\to 1$). Finally, let $\succ$ denote the majorization partial order on density matrices: $\rho \succ \sigma$ exactly when $\sum_{i=1}^n \lambda^{(\rho)}_i \ge \sum_{i=1}^n \lambda^{(\sigma)}_i$ for all $n$, where $\lambda^{(\rho)}_i$ and $\lambda^{(\sigma)}_i$ are the respective eigenvalues in decreasing order. (In words: $\rho \succ \sigma$ means $\sigma$ is more mixed than $\rho$.) Then the following conditions are equivalent: [None of this depends on the dynamics being Lindbladian. If you drop the first condition and drop the “$t$” subscript, so that $\rho_t$ is just the output of some arbitrary (potentially non-divisible) CP map applied to $\rho_0$, the remaining conditions are all equivalent.]
- $e^{t\mathcal{L}}[I] = I$: “$e^{t\mathcal{L}}$ is a unital map (for all $t$)”
- $\partial_t S_\alpha(\rho_t) \ge 0$ for all $\alpha$, $t$, and $\rho_0$: “All Renyi entropies are non-decreasing”
- $\rho_t \prec \rho_0$ for all $t$ and $\rho_0$: “$e^{t\mathcal{L}}$ is mixedness non-decreasing”
- $\rho_t = \sum_j p_j U_j \rho_0 U_j^\dagger$ for all $t$ and $\rho_0$, for some unitaries $U_j$ and probabilities $p_j$.
The non-trivial equivalences above are proved in Sec. 8.3 of Wolf, “Quantum Channels and Operations: Guided Tour“. [See also “On the universal constraints for relaxation rates for quantum dynamical semigroup” by Chruscinski et al (arXiv:2011.10159) for further interesting discussion.]
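For a concrete sanity check of these equivalences, here is a small NumPy experiment (my own illustration, not from Wolf): a random mixture of unitaries is a unital channel, so its output should be majorized by its input and have all Renyi entropies at least as large.

```python
import numpy as np

def random_unitary(d, rng):
    """Haar-ish random unitary via QR decomposition with a phase fix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def renyi(rho, alpha):
    """Renyi entropy S_alpha; alpha=1 gives the von Neumann entropy."""
    lam = np.clip(np.linalg.eigvalsh(rho), 0, 1)
    lam = lam[lam > 1e-12]
    if abs(alpha - 1) < 1e-9:
        return -np.sum(lam * np.log(lam))
    return np.log(np.sum(lam ** alpha)) / (1 - alpha)

def majorizes(rho, sigma):
    """True if rho ≻ sigma: partial sums of sorted eigenvalues dominate."""
    a = np.sort(np.linalg.eigvalsh(rho))[::-1].cumsum()
    b = np.sort(np.linalg.eigvalsh(sigma))[::-1].cumsum()
    return bool(np.all(a >= b - 1e-10))

rng = np.random.default_rng(1)
d = 4
# random mixed state
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = z @ z.conj().T
rho /= np.trace(rho).real
# unital channel: probabilistic mixture of unitaries
ps = rng.dirichlet(np.ones(3))
Us = [random_unitary(d, rng) for _ in range(3)]
out = sum(p * U @ rho @ U.conj().T for p, U in zip(ps, Us))
```

Running the checks below confirms the claimed ordering for a few values of $\alpha$; of course a numerical spot-check is no substitute for the proof in Wolf’s notes.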
Note that having all Hermitian Lindblad operators ($L_k = L_k^\dagger$) implies, but is not implied by, the above conditions. Indeed, the condition of Lindblad-operator Hermiticity (or, more generally, normality) is not preserved under the unitary gauge freedom $L_k \to L_k' = \sum_j u_{kj} L_j$ (which leaves the Lindbladian invariant for unitary $u$).… [continue reading]
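A quick numerical illustration of this last point (my own sketch): mixing two Hermitian Lindblad operators with a complex unitary matrix leaves the dissipator invariant but destroys Hermiticity.

```python
import numpy as np

def dissipator(rho, Ls):
    """Dissipative part of the Lindbladian: sum_k L rho L† - (1/2){L†L, rho}."""
    out = np.zeros_like(rho)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

rng = np.random.default_rng(2)
d = 3
# two Hermitian Lindblad operators
Ls = []
for _ in range(2):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Ls.append((z + z.conj().T) / 2)
# random test state
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = z @ z.conj().T
rho /= np.trace(rho).real
# a genuinely complex 2x2 unitary mixing the two operators
theta = 0.7
u = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])
Ls_new = [sum(u[k, j] * Ls[j] for j in range(2)) for k in range(2)]
```

Because the columns of a unitary matrix are orthonormal, the sums $\sum_k L_k' \rho L_k'^\dagger$ and $\sum_k L_k'^\dagger L_k'$ collapse back to the original ones, while each $L_k'$ acquires an anti-Hermitian part.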
[Summary: The constant evolution of tests for what counts as worryingly powerful AI is mostly a consequence of how hard it is to design tests that will identify the real-world power of future automated systems. I argue that Alan Turing in 1950 could not reliably distinguish a typical human from an appropriately fine-tuned GPT-4, yet all our current automated systems cannot produce growth above historic trends.]
What does the phenomenon of “moving the goalposts” for what counts as AI tell us about AI?
It’s often said that people repeatedly revising their definition of AI, often in response to previous AI tests being passed, is evidence that people are denying/afraid of reality, and want to put their head in the sand or whatever. There’s some truth to that, but that’s a comment about humans and I think it’s overstated.
Closer to what I want to talk about is the idea that AI is continuously redefined to mean “whatever humans can do that hasn’t been automated yet”, often taken to be evidence that AI is not a “natural” kind out there in the world, but rather just a category relative to current tech. There’s also truth to this, but it’s not exactly what I’m interested in.
To me, it is startling that (I claim) we have systems today that would likely pass the Turing test if administered by Alan Turing, but that have negligible impact on a global scale. More specifically, consider fine-tuning GPT-4 to mimic a typical human who lacks encyclopedic knowledge of the contents of the internet. Suppose that it’s mimicking a human with average intelligence whose occupation has no overlap with Alan Turing’s expertise.… [continue reading]
Here’s a collection of reviews of the arguments that artificial general intelligence represents an existential risk to humanity. They vary greatly in length and style. I may update this from time to time.
Here, from my perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely resembling the current pathway, or any other pathway we can easily jump to.
This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire -- especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. Second, I formulate and evaluate a more specific six-premise argument that creating agents of this kind will lead to existential catastrophe by 2070. On this argument, by 2070: (1) it will become possible and financially feasible to build relevantly powerful and agentic AI systems; (2) there will be strong incentives to do so; (3) it will be much harder to build aligned (and relevantly powerful/agentic) AI systems than to build misaligned (and relevantly powerful/agentic) AI systems that are still superficially attractive to deploy; (4) some such misaligned systems will seek power over humans in high-impact ways; (5) this problem will scale to the full disempowerment of humanity; and (6) such disempowerment will constitute an existential catastrophe.
… [continue reading]
Here is a table of proposals for creating enormous superpositions of matter. Importantly, all of them describe superpositions whose spatial extent is comparable to or larger than the size of the object itself. Many are quite speculative. I’d like to keep this table updated, so send me references if you think they should be included.
| Proposal | Refs | Material | Radius (nm) | Mass (Da) | Δx (nm) | t (ms) | T (K) |
|---|---|---|---|---|---|---|---|
| KDTL | [1-3] | Oligoporphyrin | ∼1 | 2.7 × 10⁴ | 266 | 1.24 | 10,000 |
| OTIMA | [4-6] | Gold (Au) | 5 | 6.6 × 10⁶ | 79 | 94 | 600 |
| Bateman et al. | | Silicon (Si) | 5.5 | 1.1 × 10⁶ | 150 | 140 | 0.5 |
| Geraci et al. | | Silica (SiO2) | 6.5 | 1.6 × 10⁶ | 250 | 250 | 0.5 |
| Wan et al. | | Diamond (C) | 95 | 7.5 × 10⁹ | 100 | 0.05 | 1 |
| MAQRO | [10-13] | Silica (SiO2) | 120 | 1.1 × 10¹⁰ | 100 | 100,000 | 0.01 |
| Pino et al. | | Niobium (Nb) | 1,000 | 2.2 × 10¹³ | 290 | 450 | 0.1 |
| Stickler et al. | | | | | | | |

[To achieve their highest masses, the KDTL interferometer has superposed molecules of functionalized oligoporphyrin, a family of organic molecules composed of C, H, F, N, S, and Zn with molecular weights ranging from ~19,000 Da to ~29,000 Da. (The units here are Daltons, also known as atomic mass units (amu), i.e., the number of protons and neutrons.) The distribution is peaked around 27,000 Da.]
… [continue reading]
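Reading the table’s fourth and fifth columns as radius (in nm) and mass (in Da), the masses can be sanity-checked from bulk densities. Here is a small Python check; the bulk-density values are my own standard reference numbers, and the comparison masses are from the rows above.

```python
import math

DALTON_IN_GRAMS = 1.66054e-24  # grams per dalton

def sphere_mass_da(radius_nm, density_g_cm3):
    """Mass in daltons of a homogeneous sphere (radius in nm, bulk density in g/cm^3)."""
    volume_cm3 = (4 / 3) * math.pi * (radius_nm * 1e-7) ** 3
    return volume_cm3 * density_g_cm3 / DALTON_IN_GRAMS

# bulk densities (g/cm^3): Si 2.33, diamond 3.51, Nb 8.57
for name, r_nm, rho, table_mass in [
    ("Si (Bateman et al.)", 5.5, 2.33, 1.1e6),
    ("Diamond (Wan et al.)", 95, 3.51, 7.5e9),
    ("Nb (Pino et al.)", 1000, 8.57, 2.2e13),
]:
    print(f"{name}: computed {sphere_mass_da(r_nm, rho):.1e} Da, table {table_mass:.1e} Da")
```

The computed masses agree with the tabulated ones to within tens of percent, which is what one would expect given rounding of the quoted radii.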
[This topic is way outside my expertise. Just thinking out loud.]
Here is Google’s new language model PaLM having a think:
Alex Tabarrok writes:
It seems obvious that the computer is reasoning. It certainly isn’t simply remembering. It is reasoning and at a pretty high level! To say that the computer doesn’t “understand” seems little better than a statement of religious faith or speciesism…
It’s true that AI is just a set of electronic neurons none of which “understand” but my neurons don’t understand anything either. It’s the system that understands. The Chinese room understands in any objective evaluation and the fact that it fails on some subjective impression of what it is or isn’t like to be an AI or a person is a failure of imagination not an argument…
These arguments aren’t new but Searle’s thought experiment was first posed at a time when the output from AI looked stilted, limited, mechanical. It was easy to imagine that there was a difference in kind. Now the output from AI looks fluid, general, human. It’s harder to imagine there is a difference in kind.
Tabarrok uses an illustration of Searle’s Chinese room featuring a giant look-up table:
But as Scott Aaronson has emphasized [PDF], a machine that simply maps inputs to outputs by consulting a giant look-up table should not be considered “thinking” (although it could be considered to “know”). First, such a look-up table would be beyond astronomically large for any interesting AI task and hence physically infeasible to implement in the real universe. But more importantly, the fact that something is being looked up rather than computed undermines the idea that the system understands or is reasoning.… [continue reading]
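The “beyond astronomically large” claim is easy to make quantitative with a back-of-the-envelope count; the vocabulary and prompt sizes below are illustrative assumptions of mine, not numbers from Aaronson.

```python
import math

# Illustrative, GPT-scale numbers: a 50,000-token vocabulary
# and prompts of 1,000 tokens.
vocab_size, prompt_len = 50_000, 1_000

# The table needs one row per distinct prompt: vocab_size ** prompt_len rows.
# Work in log10 to avoid a number with thousands of digits.
log10_rows = prompt_len * math.log10(vocab_size)
print(f"A complete look-up table would need ~10^{log10_rows:.0f} rows,")
print("dwarfing the ~10^80 atoms in the observable universe.")
```

With these numbers the table has on the order of $10^{4700}$ rows, so no physically realizable system can implement the Chinese room as a literal look-up table.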
In many derivations of the Lindblad equation, the authors say something like “There is a gauge freedom [a gauge freedom of the Lindblad equation means a transformation we can apply to both the Lindblad operators and (possibly) the system’s self-Hamiltonian without changing the reduced dynamics] in our choice of Lindblad (“jump”) operators that we can use to make those operators traceless for convenience”. However, the nature of this freedom and convenience is often obscure to non-experts.
While reading Hayden & Sorce’s nice recent paper [arXiv:2108.08316] motivating the choice of traceless Lindblad operators, I noticed for the first time that the trace-ful parts of Lindblad operators are just the contributions to the Hamiltonian part of the reduced dynamics that arise at first order in the system-environment interaction. In contrast, the so-called “Lamb shift” Hamiltonian is second order.
Consider a system-environment decomposition $\mathcal{H} = \mathcal{H}_S \otimes \mathcal{H}_E$ of Hilbert space with a global Hamiltonian $H = H_S + H_E + \lambda H_I$, where $H_S$, $H_E$, and $H_I$ are the system’s self-Hamiltonian, the environment’s self-Hamiltonian, and the interaction, respectively. Here, we have (without loss of generality) decomposed the interaction Hamiltonian as $H_I = \sum_\alpha A_\alpha \otimes B_\alpha$ into tensor products of Hilbert-Schmidt-orthogonal sets of operators $\{A_\alpha\}$ and $\{B_\alpha\}$, with $\lambda$ a real parameter that controls the strength of the interaction.
This Hamiltonian decomposition is not unique in the sense that we can always send $H_S \to H_S + \lambda h$ and $H_I \to H_I - h \otimes I_E$, where $h$ is any Hermitian operator acting only on the system. [There is also a similar freedom with the environment in the sense that we can send $H_E \to H_E + \lambda g$ and $H_I \to H_I - I_S \otimes g$ for Hermitian $g$ acting only on the environment.] When reading popular derivations of the Lindblad equation
like in the textbook by Breuer & Petruccione, one could be forgiven [specifically, I have forgiven myself for doing this…] for thinking that this freedom is eliminated by the necessity of satisfying the assumption $\mathrm{Tr}_E[H_I(I_S\otimes\rho_E)] = 0$, which is crucially deployed in the “microscopic” derivation of the Lindblad operators from the global dynamics generated by $H$.… [continue reading]
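The basic claim (trace-ful parts of the Lindblad operators can be absorbed into the Hamiltonian part of the reduced dynamics) can be checked numerically. Here is a small NumPy sketch, my own construction rather than Hayden & Sorce’s: shifting each Lindblad operator by a scalar, with the standard compensating Hamiltonian shift $H \to H + \frac{i}{2}\sum_k (c_k^* L_k - c_k L_k^\dagger)$, leaves the Lindbladian unchanged.

```python
import numpy as np

def lindbladian(rho, H, Ls):
    """Full Lindbladian: -i[H, rho] + sum_k L rho L† - (1/2){L†L, rho}."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

rng = np.random.default_rng(3)
d = 3
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (z + z.conj().T) / 2                                    # random Hermitian Hamiltonian
Ls = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(2)]
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = z @ z.conj().T
rho /= np.trace(rho).real                                   # random test state

# strip the trace-ful part of each L_k and absorb it into the Hamiltonian
cs = [np.trace(L) / d for L in Ls]
Ls2 = [L - c * np.eye(d) for L, c in zip(Ls, cs)]           # now traceless
H2 = H + sum(0.5j * (np.conj(c) * L2 - c * L2.conj().T) for c, L2 in zip(cs, Ls2))
```

The compensating term is Hermitian (it is $i/2$ times an anti-Hermitian combination), so `H2` is a legitimate Hamiltonian, and the two generators agree on any state.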