We report quantum ground state cooling of a levitated nanoparticle in a room temperature environment. Using coherent scattering into an optical cavity we cool the center of mass motion of a nm diameter silica particle by more than orders of magnitude to phonons along the cavity axis, corresponding to a temperature of μK. We infer a heating rate of kHz, which results in a coherence time of μs – or coherent oscillations – while the particle is optically trapped at a pressure of mbar. The inferred optomechanical coupling rate of kHz places the system well into the regime of strong cooperativity (). We expect that a combination of ultra-high vacuum with free-fall dynamics will allow to further expand the spatio-temporal coherence of such nanoparticles by several orders of magnitude, thereby opening up new opportunities for macroscopic quantum experiments.
[EDIT: The same group has more recently achieved ground-state cooling with real-time control feedback.]
Ground-state cooling of nanoparticles in laser traps is a very important milestone on the way to producing large spatial superpositions of matter, and I have a long-standing obsession with the possibility of using such superpositions to probe for the existence of new particles and forces like dark matter. In this post, I put this milestone in a bit of context and then toss up a speculative plot for the estimated dark-matter sensitivity of a follow-up to Delić et al.’s device.
One way to organize the quantum states of a single continuous degree of freedom, like the center-of-mass position of a nanoparticle, is by their sensitivity to displacements in phase space.… [continue reading]
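(A quick gloss on what "sensitivity to displacements" can mean, using standard quantum-optics notation that is my own choice here rather than anything from the full post: one convenient measure is how fast the overlap
\[ \chi_\psi(\alpha) = \langle \psi | \hat{D}(\alpha) | \psi \rangle, \qquad \hat{D}(\alpha) = e^{\alpha \hat{a}^\dagger - \alpha^* \hat{a}}, \]
falls off with the displacement $\alpha$. For a coherent state the magnitude is $e^{-|\alpha|^2/2}$, which decays only on the scale of the ground-state width, whereas for a superposition of two coherent states separated by $\Delta$ the overlap also develops fringes of width $\sim 1/\Delta$, so larger superpositions are disturbed by correspondingly smaller displacements.)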
Our paper discussed in the previous blog post might prompt this question: Is there still a way to use Landauer’s principle to convert the free energy of a system to its bit erasure capacity? The answer is “yes”, which we can demonstrate with a simple argument.
Summary: The correct measure of bit-erasure capacity $N$ for an isolated system is the negentropy, the difference between the system’s current entropy and the entropy it would have if allowed to thermalize with its current internal energy. The correct measure of erasure capacity for a constant-volume system with free access to a bath at constant temperature $T$ is the Helmholtz free energy (divided by $k_\mathrm{B}T\ln 2$, per Landauer’s principle), provided that the additive constant of the free energy is set such that the free energy vanishes when the system thermalizes to temperature $T$. That is,
\[ N = \frac{F}{k_\mathrm{B}T\ln 2}, \qquad F = (E - E_T) - k_\mathrm{B}T\ln 2\,(S - S_T), \]
where $E_T$ and $S_T$ are the internal energy and entropy of the system if it were at temperature $T$. The system’s negentropy lower bounds this capacity, and this bound is saturated when $E = E_T$.
Traditionally, the Helmholtz free energy of a system is defined as $F = E - k_\mathrm{B}T S\ln 2$, where $E$ and $S$ are the internal energy and entropy of the system and $T$ is the constant temperature of an external infinite bath with which the system can exchange energy. (Here, there is a factor of Boltzmann’s constant in front of $S$ because I am measuring the (absolute) entropy in dimensionless bits rather than in units of energy per temperature. That way we can write things like $N = F/(k_\mathrm{B}T\ln 2)$.) (I will suppress the “Helmholtz” modifier henceforth; when the system’s pressure rather than volume is constant, my conclusion below holds for the Gibbs free energy if the obvious modifications are made.)… [continue reading]
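To make the bookkeeping concrete, here is a toy numerical check (my own example, with my own choices of $n$, $\epsilon$, and $f$, not from the post): $n$ two-level systems with gap $\epsilon$, prepared in a pure state with a fraction $f$ of them excited. Both the free-energy capacity and the negentropy have closed forms, and the former always bounds the latter, with equality when the energy equals its thermal value $E_T$.

```python
import numpy as np

# Toy check (my own example, not from the post): n two-level systems with gap
# eps, prepared in a pure state (S = 0 bits) with a fraction f of them excited.
# Compare the free-energy erasure capacity N = F/(kT ln2) with the negentropy.

kB = 1.380649e-23  # J/K

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def capacities(n, eps, T, f):
    """Return (free-energy capacity, negentropy), both in bits."""
    pT = 1.0 / (1.0 + np.exp(eps / (kB * T)))   # thermal excitation probability
    E, S = n * eps * f, 0.0                     # pure state: zero entropy
    E_T, S_T = n * eps * pT, n * h2(pT)         # values if thermalized at T
    N_free = (E - E_T) / (kB * T * np.log(2)) - (S - S_T)   # F / (kT ln 2)
    negentropy = n * h2(f) - S                  # thermalize at fixed energy E
    return N_free, negentropy

n, eps, T = 1000, 1.0e-21, 300.0
pT = 1.0 / (1.0 + np.exp(eps / (kB * T)))
for f in [0.1, 0.3, pT, 0.9]:
    N_free, negent = capacities(n, eps, T, f)
    print(f"f = {f:.3f}:  capacity = {N_free:8.1f} bits,  negentropy = {negent:8.1f} bits")
# The capacity always exceeds the negentropy, with equality at f = pT (i.e., E = E_T).
```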
People often say to me “Jess, all this work you do on the foundations of quantum mechanics is fine as far as it goes, but it’s so conventional and safe. When are you finally going to do something unusual and take some career risks?” I’m now pleased to say I have a topic to bring up in such situations: the thermodynamic incentives of powerful civilizations in the far future who seek to perform massive computations. Anders Sandberg, Stuart Armstrong, and Milan M. Ćirković previously argued for a surprising connection between Landauer’s principle and the Fermi paradox, which Charles Bennett, Robin Hanson, and I have now critiqued. Our comment appeared today in the new issue of Foundations of Physics:
In their article [arXiv:1705.03394], 'That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox', Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer's principle implies that a civilization can in principle perform far more (~10^30 times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting. Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error.
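For orientation, here is the back-of-the-envelope arithmetic behind that factor (my own rough numbers, not taken from either paper): Landauer's principle prices an erasure at $k_\mathrm{B}T\ln 2$ of free energy, so a fixed energy budget buys a number of erasures proportional to $1/T$. Comparing today's cosmic background temperature to the far-future de Sitter horizon temperature (with $H$ the asymptotic Hubble rate) gives
\[ \frac{T_{\mathrm{CMB}}}{T_{\mathrm{dS}}} \approx \frac{2.7\ \mathrm{K}}{\hbar H/(2\pi k_\mathrm{B})} \approx \frac{2.7\ \mathrm{K}}{\sim 3\times 10^{-30}\ \mathrm{K}} \sim 10^{30}, \]
which is roughly the advantage Sandberg et al. attribute to waiting, and which our comment argues is unnecessary because today's non-equilibrium reservoirs can absorb the entropy instead.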
I am briefly stirring from my blog-hibernation (this blog will resume at full force sometime in the future, but not just yet) to present a collection of frequently asked questions about experiments seeking to investigate quantum Darwinism (QD). Most of the questions were asked by (or evolved from questions asked by) Philip Ball while we corresponded regarding his recent article “Quantum Darwinism, an Idea to Explain Objective Reality, Passes First Tests” for Quanta magazine, which I recommend you check out.
Who is trying to see quantum Darwinism in experiments?
I am aware of two papers out of a group from Arizona State in 2010 (here and here) and three papers from separate groups last year (arXiv: 1803.01913, 1808.07388, 1809.10456). I haven’t looked at them all carefully so I can’t vouch for them, but I think the more recent papers would be the closest thing to a “test” of QD.
What are the experiments doing to put QD to the test?
These teams construct a kind of “synthetic environment” from just a few qubits, and then interrogate those qubits to discover the information they contain about the quantum system to which they are coupled.
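To give a cartoon of what interrogating the environment amounts to (my own minimal sketch, not any particular group's protocol): prepare a system qubit in superposition, copy its pointer basis into three environment qubits, and compute how much a fragment of the environment knows about the system. The hallmark of quantum Darwinism is that the mutual information plateaus at the classical value (1 bit here) already for small fragments.

```python
import numpy as np
from itertools import combinations

def ptrace(rho, keep, dims):
    """Reduced density matrix on the subsystems listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(list(dims) * 2)
    cur = list(dims)
    for i in reversed(range(n)):
        if i in keep:
            continue
        rho = np.trace(rho, axis1=i, axis2=i + len(cur))
        cur.pop(i)
    d = int(np.prod([dims[i] for i in sorted(keep)]))
    return rho.reshape(d, d)

def entropy(rho):
    """Von Neumann entropy in bits."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

# Branching state (|0>_S|000>_E + |1>_S|111>_E)/sqrt(2): the system's z-basis
# is redundantly recorded in three environment qubits.
n_env = 3
dims = [2] * (1 + n_env)
psi = np.zeros(2 ** (1 + n_env))
psi[0], psi[-1] = 1 / np.sqrt(2), 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

for k in range(1, n_env + 1):
    infos = []
    for frag in combinations(range(1, n_env + 1), k):
        rho_S = ptrace(rho, [0], dims)
        rho_F = ptrace(rho, list(frag), dims)
        rho_SF = ptrace(rho, [0] + list(frag), dims)
        infos.append(entropy(rho_S) + entropy(rho_F) - entropy(rho_SF))
    print(f"fragment size {k}: I(S:F) = {np.mean(infos):.3f} bits")
# Output: 1.000 bits for fragments of size 1 and 2 (the classical plateau), and
# 2.000 bits only for the whole environment, where the phase becomes accessible.
```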
What do you think of experimental tests of QD in general?
Considered as a strictly mathematical phenomenon, QD is the dynamical creation of certain kinds of correlations between certain systems and their environments under certain conditions. These experiments directly confirm that, if such conditions are created, the expected correlations are obtained.
The experiments are, unfortunately, not likely to offer much insight or many opportunities for surprise; the result can be predicted with very high confidence long in advance.… [continue reading]
Having heard Geoffrey Hinton’s somewhat dismissive account of the contribution by physicists to machine learning in his online MOOC, I found it interesting to listen to one of those physicists, Naftali Tishby, here at PI:
The surprising success of learning with deep neural networks poses two fundamental challenges: understanding why these networks work so well and what this success tells us about the nature of intelligence and our biological brain. Our recent Information Theory of Deep Learning shows that large deep networks achieve the optimal tradeoff between training size and accuracy, and that this optimality is achieved through the noise in the learning process.
In this talk, I will focus on the statistical physics aspects of our theory and the interaction between the stochastic dynamics of the training algorithm (Stochastic Gradient Descent) and the phase structure of the Information Bottleneck problem. Specifically, I will describe the connections between the phase transition and the final location and representation of the hidden layers, and the role of these phase transitions in determining the weights of the network.
Based partly on joint works with Ravid Shwartz-Ziv, Noga Zaslavsky, and Shlomi Agmon.
I was familiar with the general concept of over-fitting, but I hadn’t realized you could talk about it quantitatively by looking at the mutual information between the output of a network and all the information in the training data that isn’t the target label.… [continue reading]
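For concreteness, here is the kind of estimate involved, in miniature (my own toy with made-up data, not the actual Shwartz-Ziv and Tishby pipeline): discretize a "layer" T and count empirical joint frequencies to estimate I(T;X) and I(T;Y). A layer that memorizes input details beyond the label shows up as excess I(T;X) at the same I(T;Y).

```python
import numpy as np

rng = np.random.default_rng(0)

def discrete_mi(a, b):
    """Empirical mutual information (in bits) between two discrete sequences."""
    _, a_idx = np.unique(a, return_inverse=True)
    _, b_idx = np.unique(b, return_inverse=True)
    joint = np.zeros((a_idx.max() + 1, b_idx.max() + 1))
    for i, j in zip(a_idx, b_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(pa, pb)[nz])))

# Toy data: X is a 6-bit input, Y is the parity of its first two bits.
n = 20000
X = rng.integers(0, 2, size=(n, 6))
Y = X[:, 0] ^ X[:, 1]
X_id = X.dot(1 << np.arange(6))          # integer label for each input pattern

# Two hypothetical "hidden layers": one keeps only what is needed for Y, the
# other also memorizes extra input bits.
T_compressed = Y.copy()
T_memorizing = X_id % 16                 # retains the 4 lowest input bits

for name, T in [("compressed", T_compressed), ("memorizing", T_memorizing)]:
    print(f"{name:>11}: I(T;X) = {discrete_mi(T, X_id):.2f} bits, "
          f"I(T;Y) = {discrete_mi(T, Y):.2f} bits")
# Both layers carry the same ~1 bit about the label, but the memorizing layer
# retains ~3 extra bits about the input -- the quantity tracked by the
# information-bottleneck picture of over-fitting.
```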
In particular, he sketched the essential equivalence between matrix product states (MPS) and restricted Boltzmann machines (RBM) (this is discussed in detail by Chen et al.; see also good intuition and a helpful physicist-statistician dictionary from Lin and Tegmark) before showing how he and collaborators could train efficient RBM representations of the states of the transverse-field Ising and XXZ models with a small number of local measurements from the true state.
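For readers who haven't seen it, the RBM wavefunction ansatz itself is compact enough to write down. The following is a generic sketch of the standard parametrization (in the style of Carleo and Troyer), which is my own illustration rather than the specific trained models from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM amplitude psi(s) for a spin configuration s in {-1,+1}^n.

    psi(s) = exp(sum_i a_i s_i) * prod_j 2*cosh(b_j + sum_i W_ij s_i);
    the hidden units have been traced out analytically, and complex parameters
    let the ansatz carry phases as well as magnitudes.
    """
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + s @ W))

# A random, untrained ansatz on 8 visible spins with 16 hidden units.
n_vis, n_hid = 8, 16
a = 0.01 * (rng.normal(size=n_vis) + 1j * rng.normal(size=n_vis))
b = 0.01 * (rng.normal(size=n_hid) + 1j * rng.normal(size=n_hid))
W = 0.01 * (rng.normal(size=(n_vis, n_hid)) + 1j * rng.normal(size=(n_vis, n_hid)))

s = rng.choice([-1, 1], size=n_vis)
print("psi(s) =", rbm_amplitude(s, a, b, W))
# Training (e.g., minimizing the energy of a transverse-field Ising Hamiltonian,
# or fitting local measurement data) tunes (a, b, W); the parameter count grows
# only polynomially with system size, much like an MPS of fixed bond dimension.
```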
[This is akin to a living review, which will hopefully improve from time to time. Last edited 2020-4-8.]
This post will collect some models of decoherence and branching. We don’t have a rigorous definition of branches yet but I crudely define models of branching to be models of decoherence (I take decoherence to mean a model with dynamics taking the form $|s_i\rangle_{\mathcal{S}}|e\rangle_{\mathcal{E}} \to |s_i\rangle_{\mathcal{S}}|e_i\rangle_{\mathcal{E}}$ for some tensor decomposition $\mathcal{H} = \mathcal{S}\otimes\mathcal{E}$, where $\{|s_i\rangle\}$ is an (approximately) stable orthonormal basis independent of initial state, and where $\langle e_i(t)|e_j(t)\rangle \approx 0$ for times $t \gg t_{\mathrm{D}}$ and $i \neq j$, where $|e\rangle$ is the initial state of $\mathcal{E}$ and $t_{\mathrm{D}}$ is some characteristic time scale) which additionally feature some combination of amplification, irreversibility, redundant records, and/or outcomes with an intuitive macroscopic interpretation.
(Note in particular that I am not just listing models for which you can mathematically take a classical limit ($\hbar \to 0$ or $N \to \infty$) and recover the classical equations of motion; Yaffe has a pleasingly general approach to that task, but I’ve previously sketched why that’s an incomplete explanation for classicality.)
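Before the desiderata and the list itself, here is a minimal numerical illustration of the decoherence condition above (my own toy pure-dephasing model, with made-up couplings, not one of the models collected in this post): a system qubit coupled to $n$ environment qubits through $\sigma_z\otimes\sigma_z$ interactions, so the two conditional environment states become nearly orthogonal and the system's off-diagonal matrix element is suppressed accordingly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pure-dephasing spin-bath model: a system qubit coupled to n environment
# qubits via H = sum_k g_k sigma_z^(S) x sigma_z^(k).  Starting from
# |+>_S (x) |+>^n, the two conditional environment states |e_0(t)>, |e_1(t)>
# acquire opposite phases on each bath qubit, and the system's off-diagonal
# element is multiplied by their overlap:
#     |<e_0(t)|e_1(t)>| = prod_k |cos(2 g_k t)|.
n = 50
g = rng.uniform(0.5, 1.5, size=n)                  # random coupling strengths
t = np.linspace(0.0, 5.0, 201)
decoherence_factor = np.prod(np.abs(np.cos(2.0 * np.outer(t, g))), axis=1)

for ti, d in zip(t[::40], decoherence_factor[::40]):
    print(f"t = {ti:4.2f}   |rho_01(t) / rho_01(0)| = {d:.3e}")
# The overlap decays rapidly (roughly Gaussian at early times) and then stays
# near zero apart from recurrences that become exponentially rare as n grows --
# the effective irreversibility that the branching models below rely on.
```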
I have the following desiderata for models, which tend to be in tension with computational tractability:
[Added 2022-March-13: Weingarten has a new paper, discussed by me here, that mostly supersedes the content of this post. In the new approach, the preferred branch decomposition is to be generated using a modification of Nielsen’s measure of quantum circuit complexity.]
A senior colleague asked me for thoughts on this paper describing a single-preferred-branch flavor of quantum mechanics, and I thought I’d copy them here. Tl;dr: I did not find an important new idea in it, but this paper nicely illustrates the appeal of Finkelstein’s partial-trace decoherence and the ambiguity inherent in connecting a many-worlds wavefunction to our direct observations.
We propose a method for finding an initial state vector which by ordinary Hamiltonian time evolution follows a single branch of many-worlds quantum mechanics. The resulting deterministic system appears to exhibit random behavior as a result of the successive emergence over time of information present in the initial state but not previously observed.
We start by assuming that a precise wavefunction branch structure has been specified. The idea, basically, is to randomly draw a branch at late times according to the Born probability, then to evolve it backwards in time to the beginning of the universe and take that as your initial condition. The main motivating observation is that, if we assume that all branch splittings are defined by a projective decomposition of some subsystem (‘the system’) which is recorded faithfully elsewhere (‘the environment’), then the lone preferred branch — time-evolving by itself — is an eigenstate of each of the projectors defining the splits. In a sense, Weingarten lays claim to ordered consistency [arxiv:gr-qc/9607073] by assuming partial-trace decoherence. (Note on terminology: What Finkelstein called “partial-trace decoherence” is really a specialized form of consistency (i.e.,… [continue reading]
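To make the draw-and-back-evolve recipe concrete, here is a cartoon in code (my own toy, not Weingarten's construction): a system qubit whose z-basis is copied into two record qubits by successive CNOTs, a branch drawn at the final time with the Born probability, and that branch back-evolved to give an initial condition whose forward evolution never splits.

```python
import numpy as np

rng = np.random.default_rng(3)

def cnot(control, target, n):
    """CNOT on n qubits (qubit 0 is the most significant bit) as a matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1.0
    return U

# Qubits: S (system), R1, R2 (records). Two "branching events" copy S's z-basis.
n = 3
U1, U2 = cnot(0, 1, n), cnot(0, 2, n)
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
psi0 = np.zeros(8)
psi0[0b000], psi0[0b100] = alpha, beta        # (alpha|0> + beta|1>)_S |00>
psi_final = U2 @ U1 @ psi0                    # = alpha|000> + beta|111>

# Draw a branch at the final time with the Born probability for S's z-basis...
mask = np.zeros(8)
mask[4:] = 1.0                                # projector onto S = 1
p1 = float(np.sum(np.abs(psi_final) ** 2 * mask))
branch = 1 if rng.random() < p1 else 0
psi_branch = psi_final * (mask if branch == 1 else 1.0 - mask)
psi_branch = psi_branch / np.linalg.norm(psi_branch)

# ...back-evolve it to get the special initial condition...
psi_init = U1.T @ U2.T @ psi_branch           # CNOTs are real and self-inverse

# ...and check that its forward evolution never splits:
for step, state in enumerate([psi_init, U1 @ psi_init, U2 @ U1 @ psi_init]):
    p = float(np.sum(np.abs(state) ** 2 * mask))
    print(f"step {step}: P(S = 1) = {p:.2f}")
# P(S = 1) is exactly 0 or 1 at every step: the back-evolved branch is an
# eigenstate of each recording projector, so its history looks deterministic,
# with the apparent randomness pushed into the choice of initial condition.
```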