ArXiv and Zotero surveys

Quick note: the arXiv is administering a survey of user opinion on potential future changes, many of which were discussed previously on this blog. It can be reached by clicking the banner at the top of the arXiv homepage. I encourage you to take the survey if you haven’t already. (Doubly so if you agree with me…)

Likewise, Zotero is administering a somewhat shorter survey about what sorts of folks use Zotero and what they do with it.

To the question “Do you have suggestions for any of the above-mentioned new services, or any other new services you would like to see in arXiv?”, I responded:

I think the most important thing for the arXiv to do would be to “nudge” authors toward releasing their work with a copyleft, e.g., Creative Commons – Attribution. (Or at least stop nudging them toward the minimal arXiv license, as is done now in the submission process.) For instance, make it clear to authors that if they publish in various open access journals, they should release the arXiv post under a similarly permissive license. Also, make it easier for authors to make the license more permissive at a later date once they know where they are publishing. So long as there is informed consent, anything that would increase the number of papers which can be built on (not just distributed) would be an improvement.

I would also like the arXiv to think about allowing for more fine-grained contribution tracking in the long term. I predict that collaboratively written documents will become much more common, and for this it will be necessary to produce a record of who changes what, like GitHub, with greater detail than merely the list of authors.

[continue reading]

Links for March 2016

  • With AlphaGo’s victory, Carl Shulman won his $100 bet with me (announced before the match here). In hindsight, here is a bit more evidence that AlphaGo’s win isn’t that shocking — perhaps even right on schedule — and therefore shouldn’t cause you to update much on overall AI progress:

    Comment from mjn:

    Fwiw, the point where the Go curve massively changes slope is when Monte-Carlo Tree Search (MCTS) began to be used in its modern form. I think that’s been an underreported part of AlphaGo’s success: deep networks get the lion’s share of the press, but AlphaGo is a hybrid deep-learning / MCTS system, and MCTS is arguably the most important of the algorithmic breakthroughs that led to computer Go being able to reach expert human level strength.

    (HN discussion.) John Langford concurs on the importance of MCTS.

  • Also: Ken Jennings welcomes Lee Sedol to the Human Loser Club. And: Do the Go prodigies of Asia have a future? (H/t Tyler Cowen.) These articles basically write themselves.
  • Also from Tyler: It was only a matter of time before Facebook began to hire reporters. And: “Will all of economic growth be absorbed into life extension?” (a toy illustration of the mechanism follows the abstract):

    Some technologies save lives—new vaccines, new surgical techniques, safer highways. Others threaten lives—pollution, nuclear accidents, global warming, and the rapid global transmission of disease. How is growth theory altered when technologies involve life and death instead of just higher consumption? This paper shows that taking life into account has first-order consequences. Under standard preferences, the value of life may rise faster than consumption, leading society to value safety over consumption growth. As a result, the optimal rate of consumption growth may be substantially lower than what is feasible, in some cases falling all the way to zero.
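
The mechanism is easy to see in a toy version of the model. (This is my own illustration under a standard log-utility assumption, not an excerpt from the paper.)

```latex
% Flow utility from a year of life at consumption level c, with \bar{u} a constant:
u(c) = \bar{u} + \log c , \qquad u'(c) = \tfrac{1}{c} .
% Value of an extra year of life, converted into consumption units:
v(c) \equiv \frac{u(c)}{u'(c)} = c \, (\bar{u} + \log c) .
```

The marginal utility of consumption falls as we get richer, but the consumption-equivalent value of a life-year v(c) grows faster than c itself, so a planner becomes ever more willing to trade consumption growth for mortality reduction. That is the sense in which growth can be “absorbed into life extension.”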

[continue reading]

PhysWell

Question: What sort of physics — if any — should be funded on the margin right now by someone trying to maximize positive impact for society, perhaps over the very long term?

First, it’s useful to separate the field into fundamental physics and non-fundamental physics, where the former is concerned with discovering new fundamental laws of the universe (particle physics, high-energy theory, cosmology, some astrophysics) and the latter applies accepted laws to understand physical systems (condensed matter, material physics, quantum information and control, plasma physics, nuclear physics, fluid dynamics, biophysics, atomic/molecular/optical physics, geophysics). [Some folks, like David Nelson, dispute the importance/usefulness of this distinction: PDF. In my opinion, he is correct, but only about the most boring part of fundamental physics (which has unfortunately dominated most of those subfields). More speculative research, such as the validity (!!!) of quantum mechanics, is undeniably of a different character from the investigation of low-energy field theories. But that point isn’t important for the present topic.]

That distinction made, let’s dive in.

Non-fundamental physics

Let’s first list some places where non-fundamental physics might have a social impact:

  1. condensed matter and material science discoveries that give high-temperature superconductors, stronger/lighter/better-insulating/better-conducting materials, higher density batteries, new computing architectures, better solar cells;
  2. quantum information discoveries that make quantum computers more useful than we currently think they will be, especially a killer app for quantum simulations;
  3. plasma physics discoveries that make fusion power doable, or fission power cheaper;
  4. quantum device technologies that allow for more precise measurements;
  5. climate physics (vague); [added 2016-Dec-20]
  6. biophysics discoveries (vague);
  7. nanotech discoveries (vague).

Fusion

In my mostly uninformed opinion, only fusion power (#3) could be among the most valuable causes in the world, plausibly scoring very highly on importance, tractability, and neglectedness — with the notable caveat that the measurable progress would necessitate an investment of billions rather than millions of dollars.… [continue reading]

Links for February 2016

Just in the nick of time…

  • Eliezer Yudkowsky has a large Facebook thread resulting in many public bets on the Lee Sedol vs DeepMind’s AlphaGo match.

    In particular, I have bet Carl Shulman $100 at even odds that Sedol will win. (For the record, my confidence is low, and if I win it will be mostly luck.) The match, taking place March 9-15, will be streamed live on YouTube.

    Relatedly, here is an excellent (if slightly long-winded) discussion of why the apparent jump in AI Go ability may be partially attributable to a purposeful application of additional computing power and researcher Go-specific expertise, rather than purely a large jump in domain-general AI power.

  • SciHub has been in the news recently, and I guess they decided to upgrade their appearance.
  • Victorian Humor.
  • Want a postdoc doing theoretical physics, machine learning, and genomics? You’re in luck.
  • Luke Muehlhauser has a good quote from Bill Gates on AI timelines.
  • “Assortative Mating—A Missing Piece in the Jigsaw of Psychiatric Genetics”. (A toy illustration of the tetrachoric correlation follows the excerpt below.)

    Why are psychiatric disorders so highly heritable when they are associated with reduced fecundity? Why are some psychiatric disorders so much more highly heritable than others? Why is there so much genetic comorbidity across psychiatric disorders?

    Although you can see assortative mating for physical traits, like height and weight, with your own eyes, the correlation between spouses is only approximately 0.20 for these traits. For personality, assortative mating is even lower at approximately 0.10. In contrast, Nordsletten and colleagues1 find an amazing amount of assortative mating within psychiatric disorders. Spouse tetrachoric correlations are greater than 0.40 for attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and schizophrenia.
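
For intuition about those numbers: a tetrachoric correlation treats each yes/no diagnosis as the thresholding of a latent standard normal variable and reports the correlation between the two latent variables that reproduces the observed 2×2 table of couples. Below is a minimal sketch of that calculation. The spousal-diagnosis counts are invented purely for illustration (they are not Nordsletten et al.’s data); the point is just that a seemingly small joint-diagnosis cell can correspond to a large latent correlation.

```python
# Back out a tetrachoric correlation from a hypothetical 2x2 table of
# spousal diagnoses. Counts are invented for illustration only.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

# Rows: wife undiagnosed/diagnosed; columns: husband undiagnosed/diagnosed.
table = np.array([[9160., 360.],
                  [ 380., 100.]])
n = table.sum()

# Marginal prevalences fix the thresholds on the two latent normal variables.
p_wife = table[1, :].sum() / n
p_husband = table[:, 1].sum() / n
t_w = norm.ppf(1 - p_wife)
t_h = norm.ppf(1 - p_husband)

p_both = table[1, 1] / n  # observed fraction of couples with both diagnosed

def prob_both(rho):
    """P(X > t_w, Y > t_h) for a standard bivariate normal with correlation rho."""
    bvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    # Upper-tail probability via inclusion-exclusion on the joint CDF.
    return 1.0 - norm.cdf(t_w) - norm.cdf(t_h) + bvn.cdf(np.array([t_w, t_h]))

# The tetrachoric correlation is the rho that reproduces the observed joint cell.
rho_tet = brentq(lambda r: prob_both(r) - p_both, -0.99, 0.99)
print(f"independence would predict P(both) = {p_wife * p_husband:.4f}")
print(f"observed P(both) = {p_both:.4f}, tetrachoric correlation = {rho_tet:.2f}")
```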

[continue reading]

Abstracts for February 2016

  • Non-Markovianity hinders Quantum Darwinism
    Fernando Galve, Roberta Zambrini, and Sabrina Maniscalco
    We investigate Quantum Darwinism and the emergence of a classical world from the quantum one in connection with the spectral properties of the environment. We use a microscopic model of quantum environment in which, by changing a simple system parameter, we can modify the information back flow from environment into the system, and therefore its non-Markovian character. We show that the presence of memory effects hinders the emergence of classical objective reality, linking these two apparently unrelated concepts via a unique dynamical feature related to decoherence factors.

    Galve and collaborators recognize that the recent Nat. Comm. paper by Brandao et al. is not as universal as it is sometimes interpreted, because the records that are proved to exist can be trivial (carrying no information). So Galve et al. correctly emphasize that Darwinism depends on the particular dynamics found in our universe, and that the effectiveness of record production is in principle an open question.

    Their main model is a harmonic oscillator in an oscillator bath (with bilinear spatial couplings, as usual) and with a spectral density that is concentrated as a hump in some finite window. (See black line with grey shading in Fig 3.) They then vary the system’s frequency with respect to this window. Outside the window, the system and environment decouple and nothing happens. Inside the window, there is good production of records and Darwinism. At the edges of the window, there is non-Markovianity as information about the system leaks into the environment but then flows back into the system from time to time. They measure non-Markovianity as the time when the fidelity between the system’s state at two different times is going up (rather than down monotonically, as it must for completely positive dynamics). (A toy illustration of this backflow criterion is sketched below.)
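
Here is a toy numerical version of that criterion (mine, not the authors’, and in a much simpler setting: pure dephasing of a single qubit by a zero-temperature oscillator bath rather than their oscillator-in-an-oscillator-bath model). The qubit coherence plays the role of their fidelity: a narrow spectral hump produces coherence revivals (information flowing back into the system), while a broad hump gives monotonic decay.

```python
# Toy model: pure dephasing of a qubit by a zero-temperature boson bath with
# spectral density J(w). The exact decoherence function is
#   Gamma(t) = integral dw J(w) (1 - cos(w t)) / w^2 ,
# and the qubit coherence decays as |r(t)| = exp(-Gamma(t)). Any interval on
# which |r(t)| increases signals information backflow (non-Markovianity).
import numpy as np

def spectral_hump(w, w0, sigma):
    """Spectral density concentrated in a hump of width sigma around w0."""
    return np.exp(-(w - w0) ** 2 / (2.0 * sigma ** 2))

def coherence(t, w0, sigma, wmax=10.0, nw=4000):
    w = np.linspace(1e-3, wmax, nw)
    J = spectral_hump(w, w0, sigma)
    integrand = J[None, :] * (1.0 - np.cos(np.outer(t, w))) / w[None, :] ** 2
    gamma = integrand.sum(axis=1) * (w[1] - w[0])  # Riemann sum over frequency
    return np.exp(-gamma)

def total_backflow(r):
    """Summed increases of the coherence: a crude non-Markovianity measure."""
    dr = np.diff(r)
    return dr[dr > 0.0].sum()

t = np.linspace(0.0, 30.0, 1200)
r_narrow = coherence(t, w0=1.0, sigma=0.05)  # structured (narrow) environment
r_broad = coherence(t, w0=1.0, sigma=1.0)    # broad environment

print(f"backflow, narrow hump: {total_backflow(r_narrow):.3f}")  # sizable revivals
print(f"backflow, broad hump:  {total_backflow(r_broad):.3e}")   # essentially none
```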

[continue reading]

Comments on Stern, journals, and incentives

David L. Stern on changing incentives in science by getting rid of journals:

Instead, I believe, we will do better to rely simply on the scientific process itself. Over time, good science is replicated, elevated, and established as most likely true; bad science may be unreplicated, flaws may be noted, and it usually is quietly dismissed as untrue. This process may take considerable time—sometimes years, sometimes decades. But, usually, the most egregious papers are detected quickly by experts as most likely garbage. This self-correcting aspect of science often does not involve explicit written documentation of a paper’s flaws. The community simply decides that these papers are unhelpful and the field moves in a different direction.

In sum, we should stop worrying about peer review….

The real question that people seem to be struggling with is “How will we judge the quality of the science if it is not peer reviewed and published in a journal that I ‘respect’?” Of course, the answer is obvious. Read the papers! But here is where we come to the crux of the incentive problem. Currently, scientists are rewarded for publishing in “top” journals, on the assumption that these journals publish only great science. Since this assumption is demonstrably false, and since journal publishing involves many evils that are discussed at length in other posts, a better solution is to cut journals out of the incentive structure altogether.

(H/t Tyler Cowen.)

I think this would make the situation worse, not better, in bringing new ideas to the table. For all of its flaws, peer review has the benefit that any (not obviously terrible) paper gets a somewhat careful reading by a couple of experts.… [continue reading]

KS entropy generated by entanglement-breaking quantum Brownian motion

A new paper of mine (PRA 93, 012107 (2016), arXiv:1507.04083) just came out. The main theorem of the paper is not deep, but I think it’s a clarifying result within a formalism that is deep: ideal quantum Brownian motion (QBM) in symplectic generality. In this blog post, I’ll refresh you on ideal QBM, quote my abstract, explain the main result, and then — going beyond the paper — show how it’s related to the Kolmogorov-Sinai entropy and the speed at which macroscopic wavefunctions branch.

Ideal QBM

If you Google around for “quantum Brownian motion”, you’ll come across a bunch of definitions that have quirky features, and aren’t obviously related to each other. This is a shame. As I explained in an earlier blog post, ideal QBM is the generalization of the harmonic oscillator to open quantum systems. If you think harmonic oscillators are important, and you think decoherence is important, then you should understand ideal QBM.

Harmonic oscillators are ubiquitous in the world because all smooth potentials look quadratic locally. Exhaustively understanding harmonic oscillators is very valuable because they are exactly solvable in addition to being ubiquitous. In an almost identical way, all quantum Markovian degrees of freedom look locally like ideal QBM, and their completely positive (CP) dynamics can be solved exactly.
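
Concretely, and just as standard background rather than anything specific to this paper: for Gaussian states, “solved exactly” means that the first and second moments of the phase-space vector z = (x, p) obey closed linear equations.

```latex
% Mean <z> and covariance matrix \sigma_{ij} = <\{z_i - <z_i>, z_j - <z_j>\}>/2
% of the phase-space vector z = (x, p). Under ideal QBM they evolve as
\frac{d}{dt}\langle z \rangle = A \langle z \rangle , \qquad
\frac{d}{dt}\sigma = A \sigma + \sigma A^{\mathsf{T}} + D ,
% where A is a drift matrix (Hamiltonian flow plus damping) and D is a
% positive-semidefinite diffusion matrix. Under a linear symplectic change of
% variables z -> S z, these objects transform as
\sigma \to S \sigma S^{\mathsf{T}} , \qquad
A \to S A S^{-1} , \qquad
D \to S D S^{\mathsf{T}} ,
% and the equations take exactly the same form in the new variables.
```

That form-invariance under S is the manifest symplectic covariance discussed next.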

To get true generality, both harmonic oscillators and ideal QBM should be expressed in manifestly symplectic covariant form. Just like for Lorentz covariance, a dynamical equation that exhibits manifest symplectic covariance takes the same form under linear symplectic transformations on phase space. At a microscopic level, all physics is symplectic covariant (and Lorentz covariant), so this better hold.… [continue reading]

Links for January 2016

  • Mechanistic insight into schizophrenia?
  • Wide-ranging (and starry-eyed) discussion on HackerNews about what startups can do to make the world a better place.
  • All six naked-eye-visible planets in one wide-angle image.

    (Source.) You can see the current configuration of the solar system here.
  • Holden Karnofsky argues persuasively that selection bias implies we should have fewer, higher-quality studies than we would want in a hypothetical world with ideal, unbiased researchers.

    Chris Blattman worries that there is too much of a tendency toward large, expensive, perfectionist studies, writing:

    …each study is like a lamp post. We might want to have a few smaller lamp posts illuminating our path, rather than the world’s largest and most awesome lamp post illuminating just one spot. I worried that our striving for perfect, overachieving studies could make our world darker on average.

    My feeling – shared by most of the staff I’ve discussed this with – is that the trend toward “perfect, overachieving studies” is a good thing…

    Bottom line. Under the status quo, I get very little value out of literatures that have large numbers of flawed studies – because I tend to suspect the flaws of running in the same direction. On a given research question, I tend to base my view on the very best, most expensive, most “perfectionist” studies, because I expect these studies to be the most fair and the most scrutinized, and I think focusing on them leaves me in better position than trying to understand all the subtleties of a large number of flawed studies.

    If there were more diversity of research methods, I’d worry less about pervasive and correlated selection bias.

[continue reading]

Abstracts for January 2016

  • We study the inflationary quantum-to-classical transition for the adiabatic curvature perturbation \zeta due to quantum decoherence, focusing on the role played by squeezed-limit mode couplings. We evolve the quantum state \Psi in the Schrodinger picture, for a generic cubic coupling to additional environment degrees of freedom. Focusing on the case of minimal gravitational interactions, we find the evolution of the reduced density matrix for a given long-wavelength fluctuation by tracing out the other (mostly shorter wavelength) modes of \zeta as an environment. We show that inflation produces phase oscillations in the wave functional \Psi[\zeta(x)], which suppress off-diagonal components of the reduced density matrix, leaving a diagonal mixture of different classical configurations. Gravitational nonlinearities thus provide a minimal mechanism for generating classical stochastic perturbations from inflation. We identify the time when decoherence occurs, which is delayed after horizon crossing due to the weak coupling, and find that Hubble-scale modes act as the decohering environment. We also comment on the observational relevance of decoherence and its relation to the squeezing of the quantum state.
  • The fluctuation-dissipation relation is usually formulated for a system interacting with a heat bath at finite temperature, and often in the context of linear response theory, where only small deviations from the mean are considered. We show that for an open quantum system interacting with a nonequilibrium environment, where temperature is no longer a valid notion, a fluctuation-dissipation inequality exists. Instead of being proportional, quantum fluctuations are bounded below by quantum dissipation, whereas classically the fluctuations vanish at zero temperature. The lower bound of this inequality is exactly satisfied by (zero-temperature) quantum noise and is in accord with the Heisenberg uncertainty principle, in both its microscopic origins and its influence upon systems.
[continue reading]

Inaccessible Wikipedia science articles as inclusionism

I thought this criticism by Ars Technica of the woeful state of Wikipedia’s science articles was mostly off the mark. (HN Comments.) The author framed it as a conflict between laymen and specialists, claiming that scientific articles are targeted at specialists at the expense of laymen, with lots of jargon, etc. I eagerly agree with him that there are lots of terrible science articles, and that some technical articles could use better context and introductory bits. But I think this is largely a problem of not having enough skilled science writers rather than a purposeful choice between laymen and specialists. Due to the curse of knowledge, the specialists literally do not understand what is and isn’t accessible to laymen; they see through the jargon like the Matrix. And the laymen do not get in their gut how many true technical dependencies there really are, that unless you understand topics X and Y, topic Z is pretty much useless. They assume that all this jargon is used by the specialists either because they are too lazy to translate, or because they are purposefully constructing barriers to entry. I empathize with skilled science writers (who are unfortunately rare), because their best articles often go unnoticed as both laymen and scientists read them and shrug “Yeah, that’s pretty clear. Was that really so hard?”.

The examples used in the editorial, like “Rabi oscillation“, are the sort of scientifically deep topics, with many dependencies, that one will never be able to write more than a few layman-accessible sentences about. If you don’t know what a Hilbert space is, there’s just not that much to say about Rabi oscillations.… [continue reading]

Links for December 2015

[continue reading]

PI accepting 2016 master’s student applications

Perimeter Institute runs a pretty great and unusual 1-year master’s program called Perimeter Scholars International. (PSI…ha!) If you’re in your last year as an undergrad, I strongly advise you (seriously) to consider applying. Your choice of grad school is 80% of the selection power determining your thesis topic, and that topic places very strong constraints on your entire academic career. The more your choice is informed by actual physics knowledge (rather than the apparent impressiveness of professors and institutions), the better. An additional year at a new institution taking classes with new teachers can really help.

(Older academics can advertise this to students by printing out this poster.)

Here’s the blurb:

Each year, Canada’s Perimeter Institute for Theoretical Physics recruits approximately 30 exceptional science graduates for an immersive, 10-month physics boot camp: Perimeter Scholars International (PSI). This unique Master’s program seeks not only students with stellar undergraduate physics track records, but also those with diverse backgrounds, collaborative spirit, creativity, and other attributes that will set them apart as future innovators.

Features of the program include:

  • All student costs (tuition and living) are covered, removing financial and/or geographical barriers to entry.
  • Students learn from world-leading theoretical physicists – resident Perimeter researchers and visiting scientists – within the inspiring environment of Perimeter Institute.
  • Collaboration is valued over competition; deep understanding and creativity are valued over rote learning and examination.
  • PSI recruits worldwide: 85 percent of students come from outside of Canada.
  • PSI takes calculated risks, seeking extraordinary talent who may have non-traditional academic backgrounds but have demonstrated exceptional scientific aptitude.

PSI is now accepting applications for the class of 2016/17. Applications are due by February 1, 2016.

[continue reading]

Robert Zubrin’s reasoning on space exploration

[Just shooting from the hip here, for fun.]

I think we should send humans to Mars, but I don’t really think it’s possible to justify it as an instrumental means of achieving other more concrete goals. (I just take it as an intrinsic goal.) But here is Robert Zubrin making the best instrumental case I’ve heard.

My biggest criticism is that not finding evidence of life on Mars does not imply life is extraordinarily rare, because there are other options besides easy-starting life (with the great filter somewhere after) and extremely-hard-starting life. If you think it’s possible that there’s a filter strong enough to prevent single-cell life from developing interstellar travel [I’m skeptical. When it comes to estimating extremely unlikely events, with multiple independent unlikely steps that all need to happen quickly, the development of the first replicator seems to require vastly more steps than relatively simple things like sexual reproduction. The only thing that makes me uncertain is the possibility that there are extremely simple replicators that resemble nothing like minimal cells, and that there is a relatively natural progression to minimal cells that simply isn’t large enough to leave fossils. I would love to update on this if you know something I’m not thinking of.], then it’s still very possible that single-cell life is hard enough to start that we wouldn’t expect to find it on Mars, yet is still relatively common in the galaxy. Indeed, it’s easy (assuming a strong late filter) to imagine that life is easy to start with liquid water plus X, where X is a relatively common planetary condition that just has never existed on Mars.
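
To make that concrete, here is a toy Bayesian update with completely made-up numbers. Nothing below is an estimate I would defend; it just illustrates why a null result on Mars mostly penalizes the “abiogenesis is easy” hypothesis rather than singling out “life is extraordinarily rare”.

```python
# Toy Bayes update: what does finding no evidence of past life on Mars do to
# three coarse hypotheses about how hard abiogenesis is? All numbers invented.
likelihood = {
    # P(no life ever arose on Mars | hypothesis)
    "abiogenesis easy (the filter comes later)": 0.05,
    "abiogenesis hard enough to skip Mars, but common galaxy-wide": 0.90,
    "abiogenesis extraordinarily hard (life is rare everywhere)": 0.999,
}
prior = {h: 1.0 / len(likelihood) for h in likelihood}  # flat prior over the three

evidence = sum(prior[h] * p for h, p in likelihood.items())
posterior = {h: prior[h] * p / evidence for h, p in likelihood.items()}

for h, post in posterior.items():
    print(f"{post:.2f}  {h}")
# Roughly 0.03 / 0.46 / 0.51: the null result crushes "easy" but barely
# separates "hard enough to skip Mars" from "extraordinarily hard".
```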

That said, Zubrin’s argument is better than I was expecting, and I agree that getting a quasi-definitive answer on whether there was ever life on Mars — even with my strong prior against it — is probably the best new evidence we are likely to collect for a very long time with regard to the prevalence of life in the universe.… [continue reading]

My talk on dark matter decoherence detection

I gave a talk recently on Itay’s and my latest results for detecting dark matter through the decoherence it induces in matter interferometers.

Quantum superpositions of matter are unusually sensitive to decoherence by tiny momentum transfers, in a way that can be made precise with a new diffusion standard quantum limit. Upcoming matter interferometers will produce unprecedented spatial superpositions of over a million nucleons. What sorts of dark matter scattering events could be seen in these experiments as anomalous decoherence? We show that it is extremely weak but medium-range interactions between matter and dark matter that would be most visible, such as scattering through a Yukawa potential. We construct toy models for these interactions, discuss existing constraints, and delineate the expected sensitivity of forthcoming experiments. In particular, the OTIMA interferometer being developed at the University of Vienna will directly probe many orders of magnitude of parameter space, and the proposed MAQRO satellite experiment would be vastly more sensitive still. This is a multidisciplinary talk that will be accessible to a non-specialized audience.
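
For background on the first sentence of the abstract: the unusual sensitivity to tiny momentum transfers is the standard long-wavelength (Joos-Zeh-type) collisional decoherence scaling, which is textbook material rather than anything new in this work.

```latex
% In the long-wavelength limit (each scattering event transfers momentum q
% with q |x - x'| << hbar), the position-space density matrix obeys
\partial_t \rho(x, x') = -\Lambda \, (x - x')^{2} \, \rho(x, x')
\quad\Longrightarrow\quad
\rho(x, x'; t) = e^{-\Lambda (x - x')^{2} t} \, \rho(x, x'; 0)
% (decoherence term only; the Hamiltonian part is suppressed). A coherent
% superposition of spatial extent \Delta x therefore decays at rate
% \Lambda (\Delta x)^2, with \Lambda set by the flux, cross section, and
% typical momentum transfer of the scatterers, so large superpositions are
% quadratically more sensitive to the same weak scattering.
```
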
[Download MP4]

Relevant paper on the diffusion SQL is here: arXiv:1504.03250. The main dark matter paper is still a work in progress.

Footnotes

(↵ returns to text)

  1. If you ever have problems finding the direct download link for videos on PI’s website (they are sometimes missing), this Firefox extension seems to do the trick.
[continue reading]

Abstracts for October-November 2015

  • High energy particle colliders have been in the forefront of particle physics for more than three decades. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). A number of the next generation collider facilities have been proposed and are currently under consideration for the medium and far-future of accelerator-based high energy physics. In this paper we offer a uniform approach to evaluation of various accelerators based on the feasibility of their energy reach, performance potential and cost range.
    (H/t Sabine.) The contraction has been happening for quite some time:

    maximum c.o.m. energy has drastically slowed down since the early 1990’s and the lepton colliders even went backwards in energy to study rare processes…Moreover, the number of the colliding beam facilities in operation has dropped from 9 two decades ago to 5 now…

  • Conditions for Quantum Violation of Macroscopic Realism
    Johannes Kofler and Časlav Brukner
    Why do we not experience a violation of macroscopic realism in everyday life? Normally, no violation can be seen either because of decoherence or the restriction of coarse-grained measurements, transforming the time evolution of any quantum state into a classical time evolution of a statistical mixture. We find the sufficient condition for these classical evolutions for spin systems under coarse-grained measurements. However, there exist ‘‘nonclassical’’ Hamiltonians whose time evolution cannot be understood classically, although at every instant of time the quantum state appears as a classical mixture. We suggest that such Hamiltonians are unlikely to be realized in nature because of their high computational complexity.
[continue reading]