Abstracts for May-June 2016

Lots of matter interference experiments this time, because they are awesome.

  • Quantum Interference of a Microsphere
    H. Pino, J. Prat-Camps, K. Sinha, B. P. Venkatesh, and O. Romero-Isart
    We propose and analyze an all-magnetic scheme to perform a Young’s double slit experiment with a micron-sized superconducting sphere of mass 10^{13} amu. We show that its center of mass could be prepared in a spatial quantum superposition state with an extent of the order of half a micrometer. The scheme is based on magnetically levitating the sphere above a superconducting chip and letting it skate through a static magnetic potential landscape where it interacts for short intervals with quantum circuits. In this way a protocol for fast quantum interferometry is passively implemented. Such a table-top earth-based quantum experiment would operate in a parameter regime where gravitational energy scales become relevant. In particular we show that the faint parameter-free gravitationally-induced decoherence collapse model, proposed by Diósi and Penrose, could be unambiguously falsified.


    An extremely exciting and ambitious proposal. I have no ability to assess the technical feasibility, and my prior is that this is too hard, but the authors are solid. Their formalism and thinking are very clean, and hence quite abstracted away from the nitty-gritty of the experiment.

  • Do the laws of quantum physics still hold for macroscopic objects -- this is at the heart of Schrödinger’s cat paradox -- or do gravitation or yet unknown effects set a limit for massive particles? What is the fundamental relation between quantum physics and gravity? Ground-based experiments addressing these questions may soon face limitations due to limited free-fall times and the quality of vacuum and microgravity.
[continue reading]

Comments on Hanson’s The Age of Em

One of the main sources of hubris among physicists is that we think we can communicate essential ideas faster and more exactly than many others. This isn’t just a choice of compact terminology or ability to recall shared knowledge. It also has to do with a responsive throttling of the level of detail to match the listener’s ability to follow, and quick questions which allow the listener to home in on things they don’t understand. This leads to a sense of frustration when talking to others who use different methods. Of course this sensation isn’t overwhelming evidence that our methods actually are better and function as described above, just that they are different. But come on. Robin Hanson’s Age of Em is an incredible written example of efficient transfer of (admittedly speculative) insights. I highly recommend it.

In places where I am trained to expect writers to insert fluff and repeat themselves — without actually clarifying — Hanson states his case concisely once, then plows through to new topics. There are several times where I think he leaps without sufficient justification (at least given my level of background knowledge), but there is a stunning lack of fluff. The ideas are jammed in edgewise.



Academic papers usually must be read slowly for one of two reasons: explicit unpacking of complex subjects, or convoluted language. Hanson’s book is a great example of something that must be read slowly because of the former, with no hint of the latter. Although he freely calls on economics concepts that non-economists might have to look up, his language is always incredibly direct and clear. Hanson is an academic Hemingway.

Most of what I might have said on the book’s substance was very quickly eclipsed by other reviews, so you should just read Bryan Caplan, Richard Jones, or Scott Alexander, along with some replies by Hanson.… [continue reading]

My talk on ideal quantum Brownian motion

I have blogged before about the conceptual importance of ideal, symplectic covariant quantum Brownian motion (QBM). In short: QBM is to open quantum systems as the harmonic oscillator is to closed quantum systems. Like the harmonic oscillator, (a) QBM is universal because it’s the leading-order behavior of a Taylor series expansion; (b) QBM evolution has a very intuitive interpretation in terms of wavepackets evolving under classical flow; and (c) QBM is exactly solvable.
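
To make (a)-(c) concrete, here is a minimal sketch in my own notation (see the talk or the QBM literature for the careful statement): Markovian QBM is just the Lindblad equation

\partial_t \rho = -\frac{i}{\hbar}[H,\rho] + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\{L_k^\dagger L_k, \rho\} \right)

with H at most quadratic in x and p, and each L_k = a_k x + b_k p linear in them. In the Wigner representation this becomes a Fokker-Planck-type equation (Liouville flow plus linear friction plus constant diffusion), which is where the intuitive wavepacket picture in (b) comes from.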

If that sounds like a diatribe up your alley, then you are in luck. I recently ranted about it here at PI. It’s just a summary of the literature; there are no new results. As always, I recommend downloading the raw video file so you can run it at arbitrary speed.


Abstract: In the study of closed quantum systems, the simple harmonic oscillator is ubiquitous because all smooth potentials look quadratic locally, and exhaustively understanding it is very valuable because it is exactly solvable. Although not widely appreciated, Markovian quantum Brownian motion (QBM) plays almost exactly the same role in the study of open quantum systems. QBM is ubiquitous because it arises from only the Markov assumption and linear Lindblad operators, and it likewise has an elegant and transparent exact solution. QBM is often introduced with specific non-Markovian models like Caldeira-Leggett, but this makes it very difficult to see which phenomena are universal and which are idiosyncratic to the model. Like frictionless classical mechanics or nonrenormalizable field theories, the exact Markov property is aphysical, but handling this subtlety is a small price to pay for the extreme generality.
[continue reading]

Bullshit in science

Francisco Azuaje (emphasis mine):

According to American philosopher Harry Frankfurt (here’s Frankfurt’s popular essay [PDF]), a key difference between liars and bullshitters is that the former tend to accept that they are not telling the truth, while the latter simply do not care whether something is true or not.

Bullshitters strive to maximize personal gain through a continuing distortion of reality. If something is true and can be manipulated to achieve their selfish objectives, then good. If something is not true, who cares? All the same. These attributes make bullshitting worse than lying.

Furthermore, according to Frankfurt, it is the bullshitter’s capacity to get away with bullshitting so easily that makes them particularly dangerous. Individuals in prominent positions of authority may be punished for lying, especially if lying has serious damaging consequences. Professional and casual bullshitters at all levels of influence typically operate with freedom. Regardless of their roles in society, their exposure is not necessarily accompanied by negative legal or intellectual consequences, at least for the bullshitter…

Researchers may also be guilty of bullshitting by omission. This is the case when they do not openly challenge bullshitting positions, either in the public or academic settings. Scientists frequently wrongly assume that the public always has knowledge of well-established scientific facts. Moreover, scientists sometimes over-estimate the moderating role of the media or their capacity to differentiate facts from falsehood, and solid from weaker evidence.

Bullshitting happens. But very often it is a byproduct of indifference. Indifference frequently masking a fear of appearing confrontational to peers and funders. Depending on where you are or with whom you work, frontal bullshit fighting may not be good for career advancement.

[continue reading]

Comments on Rosaler’s “Reduction as an A Posteriori Relation”

In a previous post of abstracts, I mentioned philosopher Josh Rosaler’s attempt to clarify the distinction between empirical and formal notions of “theoretical reduction”. Reduction is just the idea that one theory reduces to another in some limit, like special relativity reducing to Galilean kinematics in the limit of small velocities. (Confusingly, philosophers use a reversed convention; they would say that Galilean mechanics reduces to special relativity.) Formal reduction is when this takes the form of some mathematical limiting procedure (e.g., v/c \to 0), whereas empirical reduction is an explanatory statement about observations (e.g., “special relativity can explain the empirical usefulness of Galilean kinematics”).
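
To use the standard textbook illustration (my example, not Rosaler’s): the relativistic velocity-addition rule

u' = \frac{u + v}{1 + uv/c^2}

formally reduces to the Galilean rule u' = u + v as uv/c^2 \to 0. The corresponding empirical reduction is the separate, explanatory claim that this limit is why Galilean kinematics worked so well for every slow-moving object anyone had measured.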

Rosaler’s criticism, which I mostly agree with, is that folks often conflate these two. Usually this isn’t a serious problem, since the holes can be patched up on the fly by a competent physicist, but sometimes it leads to serious trouble. The most egregious case, and the one that got me interested in all this, is the quantum-classical transition, and in particular the serious insufficiency of existing \hbar \to 0 limits to explain the appearance of macroscopic classicality. Even though this limiting procedure recovers the classical equations of motion, it fails spectacularly to recover the state space. (There are multiple quantum states that have the same classical analog as \hbar \to 0, and there are quantum states that have no classical analog as \hbar \to 0.)
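
A quick illustration of the first failure (again my example, not Rosaler’s): the family of superpositions

\psi_\phi(x) \propto e^{-(x-a)^2/4\sigma^2} + e^{i\phi} e^{-(x+a)^2/4\sigma^2}

all share the same \hbar \to 0 limit, namely a two-lump classical probability distribution, because the Wigner-function interference fringes that record the relative phase \phi oscillate ever faster and wash out. Physically distinct quantum states thus get mapped to a single classical state, so the limit cannot be inverted to recover the quantum state space.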

In this post I’m going to comment on Rosaler’s recent elaboration of this idea (I thank him for discussing this topic with me and, full disclosure, we’re drafting a paper about set selection together):

Reduction between theories in physics is often approached as an a priori relation in the sense that reduction is often taken to depend only on a comparison of the mathematical structures of two theories.
[continue reading]

Links for May 2016

  • The Peacock Spider (Maratus speciosus):

    If you haven’t long ago seen the BBC Earth bit on the birds of paradise, check it out.
  • If you use Zotero and iOS, then check out PaperShip. I have two or three minor complaints, but on the whole it is very high quality.
  • The New Mexico whiptail is like a mule in that it’s a hybrid of two species, but unlike the mule it can reproduce, by semi-cloning:

    The New Mexico whiptail (Cnemidophorus neomexicanus) is a female species of lizard found in the southern United States in New Mexico and Arizona, and in northern Mexico in Chihuahua. It is the official state reptile of New Mexico. It is one of many lizard species known to be parthenogenic. Individuals of the species can be created either through the hybridization of the little striped whiptail (C. inornatus) and the western whiptail (C. tigris), or through the parthenogenic reproduction of an adult New Mexico whiptail.

    The hybridization of these species prevents healthy males from forming whereas males do exist in both parent species (see Sexual differentiation). Parthenogenesis allows the resulting all-female population to reproduce and thus evolve into a unique species capable of reproduction. This combination of interspecific hybridization and parthenogenesis exists as a reproductive strategy in several species of whiptail lizard within the Cnemidophorus genus to which the New Mexico whiptail belongs.

    And in the extremely unlikely event that you don’t already know what parthenogenesis is…

    Parthenogenesis… is a natural form of asexual reproduction in which growth and development of embryos occur without fertilization. In animals, parthenogenesis means development of an embryo from an unfertilized egg cell and is a component process of apomixis.

[continue reading]

Links for April 2016

  • Paul Christiano has bet me $500 at even odds that a self-driving car can be reliably hailed by a member of the general public in at least 10 North American cities by July 2023.

    Details: At least 8 cities must be outside the San Francisco Bay Area. The car must be available on at least 50% of days, i.e., not confined to very narrow weather or traffic situations. The car must be self-delivering, in the sense that it drives itself to the user, but not necessarily fully self-driving, in the sense that the user might need to drive it to the destination. (It’s easy to imagine tech and regulatory scenarios where self-driven cars are limited to speeds that are unacceptably slow during transportation of passengers, like the ~20 mph that Google’s car usually does, but are sufficient for getting to the hailing passenger if the density is high enough.) Carl Shulman will adjudicate any edge cases.

    I ascribe a 45% chance that a self-delivering car reaches this threshold, and 38% chance that a fully self-driving car does.

    Here’s a list of optimistic predictions for self-driving car timelines, which notably doesn’t mention the recent Google pessimism.

  • People I know build great stuff!

    My brother Will is an electrical engineer at Apple. He has been heavily involved in improving Apple display technology for the past two years, especially the True Tone feature and especially with the iPad Pro 9.7. Well, the reviews from the experts are in:

    The Absolute Color Accuracy of the iPad Pro 9.7 is Truly Impressive as shown in these Figures. It is the most color accurate display that we have ever measured. It is visually indistinguishable from perfect, and is very likely considerably better than any mobile display, monitor, TV or UHD TV that you have.

[continue reading]

Abstracts for March-April 2016

  • Unruh effect without trans-horizon entanglement
    Carlo Rovelli and Matteo Smerlak
    We estimate the transition rates of a uniformly accelerated Unruh-DeWitt detector coupled to a quantum field with reflecting conditions on a boundary plane (a “mirror”). We find that these are essentially indistinguishable from the usual Unruh rates, viz. that the Unruh effect persists in the presence of the mirror. This shows that the Unruh effect (thermality of detector rates) is not merely a consequence of the entanglement between left and right Rindler quanta in the Minkowski vacuum. Since in this setup the state of the field in the Rindler wedge is pure, we argue furthermore that the relevant entropy in the Unruh effect cannot be the von Neumann entanglement entropy. We suggest, as an alternative, that it is the Shannon entropy associated with Heisenberg uncertainty.

    See also the related works by Gooding and Unruh, which connect to Pikovski et al. (blogged here).

  • What is the Entropy in Entropic Gravity?
    Sean M. Carroll and Grant N. Remmen
    We investigate theories in which gravity arises as a consequence of entropy. We distinguish between two approaches to this idea: holographic gravity, in which Einstein's equation arises from keeping entropy stationary in equilibrium under variations of the geometry and quantum state of a small region, and thermodynamic gravity, in which Einstein's equation emerges as a local equation of state from constraints on the area of a dynamical lightsheet in a fixed spacetime background. Examining holographic gravity, we argue that its underlying assumptions can be justified in part using recent results on the form of the modular energy in quantum field theory. For thermodynamic gravity, on the other hand, we find that it is difficult to formulate a self-consistent definition of the entropy, which represents an obstacle for this approach.
[continue reading]

Redundant consistency

I’m happy to announce the recent publication of a paper by Mike, Wojciech, and myself.

The Objective Past of a Quantum Universe: Redundant Records of Consistent Histories
C. Jess Riedel, Wojciech H. Zurek, and Michael Zwolak
Motivated by the advances of quantum Darwinism and recognizing the role played by redundancy in identifying the small subset of quantum states with resilience characteristic of objective classical reality, we explore the implications of redundant records for consistent histories. The consistent histories formalism is a tool for describing sequences of events taking place in an evolving closed quantum system. A set of histories is consistent when one can reason about them using Boolean logic, i.e., when probabilities of sequences of events that define histories are additive. However, the vast majority of the sets of histories that are merely consistent are flagrantly nonclassical in other respects. This embarras de richesses (known as the set selection problem) suggests that one must go beyond consistency to identify how the classical past arises in our quantum universe. The key intuition we follow is that the records of events that define the familiar objective past are inscribed in many distinct systems, e.g., subsystems of the environment, and are accessible locally in space and time to observers. We identify histories that are not just consistent but redundantly consistent using the partial-trace condition introduced by Finkelstein as a bridge between histories and decoherence. The existence of redundant records is a sufficient condition for redundant consistency. It selects, from the multitude of the alternative sets of consistent histories, a small subset endowed with redundant records characteristic of the objective classical past. The information about an objective history of the past is then simultaneously within reach of many, who can independently reconstruct it and arrive at compatible conclusions in the present.
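
For readers who want the equations, the standard formalism (nothing specific to our paper) goes like this: a history \alpha is a time-ordered string of projectors with class operator C_\alpha = P^{(n)}_{\alpha_n}(t_n) \cdots P^{(1)}_{\alpha_1}(t_1), and the decoherence functional is D(\alpha,\beta) = \mathrm{Tr}[C_\alpha \rho C_\beta^\dagger]. A set of histories is consistent when \mathrm{Re}\, D(\alpha,\beta) = 0 for all \alpha \neq \beta, which is exactly the condition for the candidate probabilities p(\alpha) = D(\alpha,\alpha) to be additive under coarse-graining.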
[continue reading]

ArXiv and Zotero surveys

Quick note: the arXiv is administering a survey of user opinion on potential future changes, many of which were discussed previously on this blog. It can be reached by clicking the banner on the top of the arXiv homepage. I encourage you to take the survey if you haven’t already. (Doubly so if you agree with me…)

Likewise, Zotero is administering a somewhat shorter survey about what sorts of folks use Zotero and what they do with it.

To the question “Do you have suggestions for any of the above-mentioned new services, or any other new services you would like to see in arXiv?”, I responded:

I think the most important thing for the arXiv to do would be to “nudge” authors toward releasing their work with a copyleft, e.g., Creative Commons – Attribution. (Or at least stop nudging them toward the minimal arXiv license, as is done now in the submission process.) For instance, make it clear to authors that if they publish in various open access journals then they should release the arXiv post under a similarly permissive license. Also, make it easier for authors to make the license more permissive at a later date, once they know where they are publishing. So long as there is informed consent, anything that would increase the number of papers which can be built on (not just distributed) would be an improvement.

I would also like the arXiv to think about allowing for more fine-grained contribution tracking in the long term. I predict that collaboratively written documents will become much more common, and for this it will be necessary to produce a record of who changes what, like GitHub, with greater detail than merely the list of authors.

[continue reading]

Links for March 2016

  • With AlphaGo’s victory, Carl Shulman won his $100 bet with me (announced before the match here). For hindsight, here is a bit more evidence that AlphaGo’s win isn’t that shocking — perhaps even right on schedule — and therefore shouldn’t cause you to update much on overall AI progress:

    Comment from mjn:

    Fwiw, the point where the Go curve massively changes slope is when Monte-Carlo Tree Search (MCTS) began to be used in its modern form. I think that’s been an underreported part of AlphaGo’s success: deep networks get the lion’s share of the press, but AlphaGo is a hybrid deep-learning / MCTS system, and MCTS is arguably the most important of the algorithmic breakthroughs that led to computer Go being able to reach expert human level strength.

    (HN discussion.) John Langford concurs on the importance of MCTS.

  • Also: Ken Jennings welcomes Lee Sedol to the Human Loser Club. And: Do the Go prodigies of Asia have a future? (H/t Tyler Cowen.) These articles basically write themselves.
  • Also from Tyler: It was only a matter of time before Facebook began to hire reporters. And: “Will all of economic growth be absorbed into life extension?”:

    Some technologies save lives—new vaccines, new surgical techniques, safer highways. Others threaten lives—pollution, nuclear accidents, global warming, and the rapid global transmission of disease. How is growth theory altered when technologies involve life and death instead of just higher consumption? This paper shows that taking life into account has first-order consequences. Under standard preferences, the value of life may rise faster than consumption, leading society to value safety over consumption growth. As a result, the optimal rate of consumption growth may be substantially lower than what is feasible, in some cases falling all the way to zero.

[continue reading]

PhysWell

Question: What sort of physics — if any — should be funded on the margin right now by someone trying to maximize positive impact for society, perhaps over the very long term?

First, it’s useful to separate the field into fundamental physics and non-fundamental physics, where the former is concerned with discovering new fundamental laws of the universe (particle physics, high-energy theory, cosmology, some astrophysics) and the latter applies accepted laws to understand physical systems (condensed matter, material physics, quantum information and control, plasma physics, nuclear physics, fluid dynamics, biophysics, atomic/molecular/optical physics, geophysics). (Some folks like David Nelson dispute the importance/usefulness of this distinction: PDF. In my opinion, he is correct, but only about the most boring part of fundamental physics, which has unfortunately dominated most of those subfields. More speculative research, such as the validity (!!!) of quantum mechanics, is undeniably of a different character from the investigation of low-energy field theories. But that point isn’t important for the present topic.)

That distinction made, let’s dive in.

Non-fundamental physics

Let’s first list some places where non-fundamental physics might have a social impact:

  1. condensed matter and material science discoveries that give high-temperature superconductors, stronger/lighter/better-insulating/better-conducting materials, higher density batteries, new computing architectures, better solar cells;
  2. quantum information discoveries that make quantum computers more useful than we currently think they will be, especially a killer app for quantum simulations;
  3. plasma physics discoveries that make fusion power doable, or fission power cheaper;
  4. quantum device technologies that allow for more precise measurements;
  5. climate physics (vague; added 2016-Dec-20);
  6. biophysics discoveries (vague);
  7. nanotech discoveries (vague).
Fusion

In my mostly uninformed opinion, only fusion power (#3) could be among the most valuable causes in the world, plausibly scoring very highly on importance, tractability, and neglectedness — with the notable caveat that measurable progress would necessitate an investment of billions rather than millions of dollars.… [continue reading]

Links for February 2016

Just in the nick of time…

  • Eliezer Yudkowsky has a large Facebook thread resulting in many public bets on the Lee Sedol vs DeepMind’s AlphaGo match.

    In particular, I have bet Carl Shulman $100 at even odds that Sedol will win. (For the record, my confidence is low, and if I win it will be mostly luck.) The match, taking place March 9-15, will be streamed live on YouTube.

    Relatedly, here is an excellent (if slightly long-winded) discussion of why the apparent jump in AI Go ability may be partially attributable to a purposeful application of additional computing power and researcher Go-specific expertise, rather than purely a large jump in domain-general AI power.

  • SciHub has been in the news recently, and I guess they decided to upgrade their appearance.
  • Victorian Humor.
  • Want a postdoc doing theoretical physics, machine learning, and genomics? You’re in luck.
  • Luke Muehlhauser has a good quote from Bill Gates on AI timelines.
  • “Assortative Mating—A Missing Piece in the Jigsaw of Psychiatric Genetics”.

    Why are psychiatric disorders so highly heritable when they are associated with reduced fecundity? Why are some psychiatric disorders so much more highly heritable than others? Why is there so much genetic comorbidity across psychiatric disorders?

    Although you can see assortative mating for physical traits, like height and weight, with your own eyes, the correlation between spouses is only approximately 0.20 for these traits. For personality, assortative mating is even lower at approximately 0.10. In contrast, Nordsletten and colleagues1 find an amazing amount of assortative mating within psychiatric disorders. Spouse tetrachoric correlations are greater than 0.40 for attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and schizophrenia.

[continue reading]

Abstracts for February 2016

  • Non-Markovianity hinders Quantum Darwinism
    Fernando Galve, Roberta Zambrini, and Sabrina Maniscalco
    We investigate Quantum Darwinism and the emergence of a classical world from the quantum one in connection with the spectral properties of the environment. We use a microscopic model of quantum environment in which, by changing a simple system parameter, we can modify the information back flow from environment into the system, and therefore its non-Markovian character. We show that the presence of memory effects hinders the emergence of classical objective reality, linking these two apparently unrelated concepts via a unique dynamical feature related to decoherence factors.

    Galve and collaborators recognize that the recent Nat. Comm. by Brandao et al. is not as universal as it is sometimes interpreted, because the records that are proved to exist can be trivial (no info). So Galve et al. correctly emphasize that Darwinism is dependent on the particular dynamics found in our universe, and that the effectiveness of record production is in principle an open question.

    Their main model is a harmonic oscillator in an oscillator bath (with bilinear spatial couplings, as usual) and with a spectral density that is concentrated as a hump in some finite window. (See the black line with grey shading in Fig. 3.) They then vary the system’s frequency with respect to this window. Outside the window, the system and environment decouple and nothing happens. Inside the window, there is good production of records and Darwinism. At the edges of the window, there is non-Markovianity as information about the system leaks into the environment but then flows back into the system from time to time. They measure non-Markovianity as the time during which the fidelity between the system’s state at two different times is going up (rather than down monotonically, as it must for completely positive dynamics).
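
    For reference, the generic bilinear-coupling Hamiltonian behind this kind of model (my summary of the standard setup, not copied from their paper) is

    H = \frac{p^2}{2M} + \frac{1}{2} M \Omega^2 x^2 + \sum_k \left( \frac{p_k^2}{2 m_k} + \frac{1}{2} m_k \omega_k^2 x_k^2 \right) + x \sum_k c_k x_k,

    and the bath’s influence on the system is summarized by the spectral density J(\omega) \propto \sum_k (c_k^2 / m_k \omega_k) \delta(\omega - \omega_k) (prefactor conventions vary). A spectral density concentrated in a finite window just means the couplings c_k are chosen so that J(\omega) is appreciable only for frequencies inside that window.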

[continue reading]

Comments on Stern, journals, and incentives

David L. Stern on changing incentives in science by getting rid of journals:

Instead, I believe, we will do better to rely simply on the scientific process itself. Over time, good science is replicated, elevated, and established as most likely true; bad science may be unreplicated, flaws may be noted, and it usually is quietly dismissed as untrue. This process may take considerable time—sometimes years, sometimes decades. But, usually, the most egregious papers are detected quickly by experts as most likely garbage. This self-correcting aspect of science often does not involve explicit written documentation of a paper’s flaws. The community simply decides that these papers are unhelpful and the field moves in a different direction.

In sum, we should stop worrying about peer review….

The real question that people seem to be struggling with is “How will we judge the quality of the science if it is not peer reviewed and published in a journal that I ‘respect’?” Of course, the answer is obvious. Read the papers! But here is where we come to the crux of the incentive problem. Currently, scientists are rewarded for publishing in “top” journals, on the assumption that these journals publish only great science. Since this assumption is demonstrably false, and since journal publishing involves many evils that are discussed at length in other posts, a better solution is to cut journals out of the incentive structure altogether.

(H/t Tyler Cowen.)

I think this would make the situation worse, not better, in bringing new ideas to the table. For all of its flaws, peer review has the benefit that any (not obviously terrible) paper gets a somewhat careful reading by a couple of experts.… [continue reading]