Links for March 2016

  • With AlphaGo’s victory, Carl Shulman won his $100 bet with me (announced before the match here). With hindsight, here is a bit more evidence that AlphaGo’s win isn’t that shocking — perhaps even right on schedule — and therefore shouldn’t cause you to update much on overall AI progress:

    Comment from mjn:

    Fwiw, the point where the Go curve massively changes slope is when Monte-Carlo Tree Search (MCTS) began to be used in its modern form. I think that’s been an underreported part of AlphaGo’s success: deep networks get the lion’s share of the press, but AlphaGo is a hybrid deep-learning / MCTS system, and MCTS is arguably the most important of the algorithmic breakthroughs that led to computer Go being able to reach expert human level strength.

    (HN discussion.) John Langford concurs on the importance of MCTS.

  • Also: Ken Jennings welcomes Lee Sedol to the Human Loser Club. And: Do the Go prodigies of Asia have a future? (H/t Tyler Cowen.) These articles basically write themselves.
  • Also from Tyler: It was only a matter of time before Facebook began to hire reporters. And: “Will all of economic growth be absorbed into life extension?”:

    Some technologies save lives—new vaccines, new surgical techniques, safer highways. Others threaten lives—pollution, nuclear accidents, global warming, and the rapid global transmission of disease. How is growth theory altered when technologies involve life and death instead of just higher consumption? This paper shows that taking life into account has first-order consequences. Under standard preferences, the value of life may rise faster than consumption, leading society to value safety over consumption growth. As a result, the optimal rate of consumption growth may be substantially lower than what is feasible, in some cases falling all the way to zero.
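Returning to the MCTS point in the first item: here is a minimal, generic UCT-style Monte-Carlo Tree Search sketch on a toy take-away game. This is my own illustration of the algorithm's four phases and assumes nothing about AlphaGo's actual implementation.

```python
import math
import random

random.seed(0)  # for reproducibility of this toy example

# Toy game: players alternately take 1-3 stones from a pile; taking the
# last stone wins. State: (stones_left, player_to_move).
def moves(state):
    n, _ = state
    return [m for m in (1, 2, 3) if m <= n]

def step(state, m):
    n, p = state
    return (n - m, 1 - p)

def winner(state):
    n, p = state
    return (1 - p) if n == 0 else None  # the player who just moved wins

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}          # move -> Node
        self.visits, self.wins = 0, 0.0

def uct_select(node, c=1.4):
    # UCT: balance exploitation (win rate) against exploration (visit counts).
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(state):
    # Simulation: play uniformly random moves to the end; return the winner.
    while winner(state) is None:
        state = step(state, random.choice(moves(state)))
    return winner(state)

def mcts(root_state, iters=2000):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCT.
        while node.children and len(node.children) == len(moves(node.state)):
            node = uct_select(node)
        # 2. Expansion: add one unexplored child, if the node is non-terminal.
        untried = [m for m in moves(node.state) if m not in node.children]
        if untried and winner(node.state) is None:
            m = random.choice(untried)
            node.children[m] = Node(step(node.state, m), node)
            node = node.children[m]
        # 3. Simulation from the new node (or read off a terminal result).
        w = winner(node.state)
        w = rollout(node.state) if w is None else w
        # 4. Backpropagation: credit a win to whoever moved into each node.
        while node is not None:
            node.visits += 1
            if node.parent is not None and node.parent.state[1] == w:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts((10, 0)))  # usually 2: the move that leaves a multiple of 4
```

The four phases (selection, expansion, simulation, backpropagation) are the same ones AlphaGo's search uses; AlphaGo augments them with policy and value networks in place of uniform rollouts and raw visit counts.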

[continue reading]

PhysWell

Question: What sort of physics — if any — should be funded on the margin right now by someone trying to maximize positive impact for society, perhaps over the very long term?

First, it’s useful to separate the field into fundamental physics and non-fundamental physics, where the former is concerned with discovering new fundamental laws of the universe (particle physics, high-energy theory, cosmology, some astrophysics) and the latter applies accepted laws to understand physical systems (condensed matter, materials physics, quantum information and control, plasma physics, nuclear physics, fluid dynamics, biophysics, atomic/molecular/optical physics, geophysics).

(Footnote: Some folks like David Nelson dispute the importance/usefulness of this distinction: PDF. In my opinion, he is correct, but only about the most boring part of fundamental physics (which has unfortunately dominated most of those subfields). More speculative research, such as the validity (!!!) of quantum mechanics, is undeniably of a different character from the investigation of low-energy field theories. But that point isn’t important for the present topic.)

That distinction made, let’s dive in.

Non-fundamental physics

Let’s first list some places where non-fundamental physics might have a social impact:

  1. condensed matter and material science discoveries that give high-temperature superconductors, stronger/lighter/better-insulating/better-conducting materials, higher density batteries, new computing architectures, better solar cells;
  2. quantum information discoveries that make quantum computers more useful than we currently think they will be, especially a killer app for quantum simulations;
  3. plasma physics discoveries that make fusion power doable, or fission power cheaper;
  4. quantum device technologies that allow for more precise measurements;
  5. climate physics (vague; added 2016-Dec-20);
  6. biophysics discoveries (vague);
  7. nanotech discoveries (vague).

Fusion

In my mostly uninformed opinion, only fusion power (#3) could be among the most valuable causes in the world, plausibly scoring very highly on importance, tractability, and neglectedness — with the notable caveat that the measurable progress would necessitate an investment of billions rather than millions of dollars.… [continue reading]

Links for February 2016

Just in the nick of time…

  • Eliezer Yudkowsky has a large Facebook thread resulting in many public bets on the Lee Sedol vs DeepMind’s AlphaGo match.

    In particular, I have bet Carl Shulman $100 at even odds that Sedol will win. (For the record, my confidence is low, and if I win it will be mostly luck.) The match, taking place March 9-15, will be streamed live on YouTube.

    Relatedly, here is an excellent (if slightly long-winded) discussion of why the apparent jump in AI Go ability may be partially attributable to a purposeful application of additional computing power and Go-specific researcher expertise, rather than purely a large jump in domain-general AI power.

  • SciHub has been in the news recently, and I guess they decided to upgrade their appearance.
  • Victorian Humor.
  • Want a postdoc doing theoretical physics, machine learning, and genomics? You’re in luck.
  • Luke Muehlhauser has a good quote from Bill Gates on AI timelines.
  • “Assortative Mating—A Missing Piece in the Jigsaw of Psychiatric Genetics”.

    Why are psychiatric disorders so highly heritable when they are associated with reduced fecundity? Why are some psychiatric disorders so much more highly heritable than others? Why is there so much genetic comorbidity across psychiatric disorders?

    Although you can see assortative mating for physical traits, like height and weight, with your own eyes, the correlation between spouses is only approximately 0.20 for these traits. For personality, assortative mating is even lower at approximately 0.10. In contrast, Nordsletten and colleagues find an amazing amount of assortative mating within psychiatric disorders. Spouse tetrachoric correlations are greater than 0.40 for attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and schizophrenia.

[continue reading]

Abstracts for February 2016

  • Galve and collaborators recognize that the recent Nat. Comm. by Brandao et al. is not as universal as it is sometimes interpreted, because the records that are proven to exist can be trivial (containing no information). So Galve et al. correctly emphasize that Darwinism depends on the particular dynamics found in our universe, and the effectiveness of record production is in principle an open question.

    Their main model is a harmonic oscillator in an oscillator bath (with bilinear spatial couplings, as usual) and with a spectral density that is concentrated as a hump in some finite window. (See black line with grey shading in Fig 3.) They then vary the system’s frequency with respect to this window. Outside the window, the system and environment decouple and nothing happens. Inside the window, there is good production of records and Darwinism. At the edges of the window, there is non-Markovianity as information about the system leaks into the environment but then flows back into the system from time to time. They measure non-Markovianity by the periods when the fidelity between the system’s states at two different times increases (rather than decreasing monotonically, as it must under CP-divisible, i.e. Markovian, dynamics).

  • Although this little paper has several non sequiturs suggesting it was assembled like Frankenstein’s monster (Z_i is the Z Pauli operator/error for the ith qubit; don’t worry the first time he mentions Shor’s code without explaining it; etc.), it’s actually a very nice little introduction. Gottesman introduces several key ideas very quickly and logically. Good for beginners like me. See also “Operator quantum error correction” (arXiv:quant-ph/0504189) by Kribs et al.

  • (H/t Martin Ganahl.)

    Another excellent introduction, this time to matrix product states and the density-matrix renormalization group, albeit as part of a much larger review.
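The fidelity/backflow diagnostic from the Galve et al. item above can be illustrated numerically. Below is a generic BLP-style witness on a toy dephasing model of my own; it is not their oscillator model or their exact measure, just the underlying idea that distinguishability growing in time signals non-Markovianity.

```python
import numpy as np

# Qubit under pure dephasing with a non-monotonic coherence factor f(t),
# a toy stand-in for a structured environment (assumption: not Galve et al.'s model).
def f(t):
    return np.exp(-0.3 * t) * np.cos(2.0 * t)

def rho(t, sign):
    # Initial states |+> and |->; dephasing scales the off-diagonals by f(t).
    c = sign * 0.5 * f(t)
    return np.array([[0.5, c], [c, 0.5]])

def trace_distance(a, b):
    # D(a, b) = (1/2) * sum of singular values of (a - b)
    return 0.5 * np.sum(np.linalg.svd(a - b, compute_uv=False))

ts = np.linspace(0.0, 6.0, 601)
D = np.array([trace_distance(rho(t, +1), rho(t, -1)) for t in ts])

# BLP-style witness: sum up every interval where distinguishability grows,
# i.e. where information flows back from the environment.
backflow = np.sum(np.maximum(np.diff(D), 0.0))
print(f"total backflow: {backflow:.3f}")  # > 0 here, so non-Markovian
```

Any interval where the trace distance between the two evolved states grows implies information returning from the environment, which is impossible for CP-divisible (Markovian) dynamics.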

[continue reading]

Comments on Stern, journals, and incentives

David L. Stern on changing incentives in science by getting rid of journals:

Instead, I believe, we will do better to rely simply on the scientific process itself. Over time, good science is replicated, elevated, and established as most likely true; bad science may be unreplicated, flaws may be noted, and it usually is quietly dismissed as untrue. This process may take considerable time—sometimes years, sometimes decades. But, usually, the most egregious papers are detected quickly by experts as most likely garbage. This self-correcting aspect of science often does not involve explicit written documentation of a paper’s flaws. The community simply decides that these papers are unhelpful and the field moves in a different direction.

In sum, we should stop worrying about peer review….

The real question that people seem to be struggling with is “How will we judge the quality of the science if it is not peer reviewed and published in a journal that I ‘respect’?” Of course, the answer is obvious. Read the papers! But here is where we come to the crux of the incentive problem. Currently, scientists are rewarded for publishing in “top” journals, on the assumption that these journals publish only great science. Since this assumption is demonstrably false, and since journal publishing involves many evils that are discussed at length in other posts, a better solution is to cut journals out of the incentive structure altogether.

(H/t Tyler Cowen.)

I think this would make the situation worse, not better, in bringing new ideas to the table. For all of its flaws, peer review has the benefit that any (not obviously terrible) paper gets a somewhat careful reading by a couple of experts.… [continue reading]

KS entropy generated by entanglement-breaking quantum Brownian motion

A new paper of mine (PRA 93, 012107 (2016), arXiv:1507.04083) just came out. The main theorem of the paper is not deep, but I think it’s a clarifying result within a formalism that is deep: ideal quantum Brownian motion (QBM) in symplectic generality. In this blog post, I’ll refresh you on ideal QBM, quote my abstract, explain the main result, and then — going beyond the paper — show how it’s related to the Kolmogorov-Sinai entropy and the speed at which macroscopic wavefunctions branch.

Ideal QBM

If you Google around for “quantum Brownian motion”, you’ll come across a bunch of definitions that have quirky features, and aren’t obviously related to each other. This is a shame. As I explained in an earlier blog post, ideal QBM is the generalization of the harmonic oscillator to open quantum systems. If you think harmonic oscillators are important, and you think decoherence is important, then you should understand ideal QBM.

Harmonic oscillators are ubiquitous in the world because all smooth potentials look quadratic locally. Exhaustively understanding harmonic oscillators is very valuable because they are exactly solvable in addition to being ubiquitous. In an almost identical way, all quantum Markovian degrees of freedom look locally like ideal QBM, and their completely positive (CP) dynamics can be solved exactly.
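The "quadratic locally" claim is just the second-order Taylor expansion of a smooth potential about a minimum x_0, where the linear term vanishes:

```latex
V(x) \;\approx\; V(x_0) + \underbrace{V'(x_0)}_{=\,0}(x - x_0) + \tfrac{1}{2} V''(x_0)\,(x - x_0)^2 ,
\qquad \omega = \sqrt{V''(x_0)/m} ,
```

so near any smooth minimum the dynamics are those of a harmonic oscillator with frequency ω.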

To get true generality, both harmonic oscillators and ideal QBM should be expressed in manifestly symplectic covariant form. Just like for Lorentz covariance, a dynamical equation that exhibits manifest symplectic covariance takes the same form under linear symplectic transformations on phase space. At a microscopic level, all physics is symplectic covariant (and Lorentz covariant), so this had better hold.… [continue reading]
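Concretely, for a phase-space vector z = (x, p), Hamilton's equations and the symplectic condition read

```latex
\dot z = \Omega \, \partial_z H , \qquad
\Omega = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} , \qquad
z \to S z \ \ \text{with} \ \ S^\mathrm{T} \Omega S = \Omega ,
```

and the equations of motion take the same form in the new variables, which is what manifest symplectic covariance means here.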

Links for January 2016

  • Mechanistic insight into schizophrenia?
  • Wide-ranging (and starry-eyed) discussion on HackerNews about what startups can do to make the world a better place.
  • All six naked-eye-visible planets in one wide-angle image.

    (Source.) You can see the current configuration of the solar system here.
  • Holden Karnofsky argues persuasively that selection bias implies that we should have fewer but higher-quality studies than we would want in a hypothetical world with ideal, unbiased researchers.

    Chris Blattman worries that there is too much of a tendency toward large, expensive, perfectionist studies, writing:

    …each study is like a lamp post. We might want to have a few smaller lamp posts illuminating our path, rather than the world’s largest and most awesome lamp post illuminating just one spot. I worried that our striving for perfect, overachieving studies could make our world darker on average.

    My feeling – shared by most of the staff I’ve discussed this with – is that the trend toward “perfect, overachieving studies” is a good thing…

    Bottom line. Under the status quo, I get very little value out of literatures that have large numbers of flawed studies – because I tend to suspect the flaws of running in the same direction. On a given research question, I tend to base my view on the very best, most expensive, most “perfectionist” studies, because I expect these studies to be the most fair and the most scrutinized, and I think focusing on them leaves me in better position than trying to understand all the subtleties of a large number of flawed studies.

    If there were more diversity of research methods, I’d worry less about pervasive and correlated selection bias.

[continue reading]

Abstracts for January 2016

  • “…which suppress off-diagonal components of the reduced density matrix, leaving a diagonal mixture of different classical configurations. Gravitational nonlinearities thus provide a minimal mechanism for generating classical stochastic perturbations from inflation. We identify the time when decoherence occurs, which is delayed after horizon crossing due to the weak coupling, and find that Hubble-scale modes act as the decohering environment. We also comment on the observational relevance of decoherence and its relation to the squeezing of the quantum state.”
  • Right now I am wondering whether this can be connected to the Crooks fluctuation theorem. (The Jarzynski equality is a special case of Crooks, and the second law is a special case of Jarzynski.) It also makes me want to spend more time reading Fleming’s imposing thesis, which is essentially a textbook on open quantum systems.

  • Look at how amazing this nested mirror is from Fig 1:

    That’s a real optical photograph! And they think that they will cool it to the ground state in the future.

  • H/t John Preskill, who advises you to take this seriously despite the many problems with previous attempts at finding quantum computation in the brain. See, e.g., Tegmark’s “The importance of quantum decoherence in brain processes”.
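For reference, the hierarchy mentioned in the Crooks item above: the Crooks fluctuation theorem relates forward and reverse work distributions; averaging it over the forward distribution gives the Jarzynski equality; and Jensen's inequality then yields the second law:

```latex
\frac{P_F(+W)}{P_R(-W)} = e^{\beta (W - \Delta F)}
\;\Longrightarrow\;
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
\;\Longrightarrow\;
\langle W \rangle \ge \Delta F .
```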

[continue reading]