In a previous post of abstracts, I mentioned philosopher Josh Rosaler’s attempt to clarify the distinction between empirical and formal notions of “theoretical reduction”. Reduction is just the idea that one theory reduces to another in some limit, like special relativity reduces to Galilean kinematics in the limit of small velocities. (Confusingly, philosophers use a reversed convention; they say that Galilean mechanics reduces to special relativity.) Formal reduction is when this takes the form of some mathematical limiting procedure (e.g., taking $v/c \to 0$), whereas empirical reduction is an explanatory statement about observations (e.g., “special relativity explains the empirical usefulness of Galilean kinematics”).
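To make the formal notion concrete, here is a minimal illustration of the kind of limit involved (my own example, not taken from Rosaler’s paper): the relativistic velocity-addition rule goes over to the Galilean one when velocities are small compared to $c$,

$$ u' \;=\; \frac{u + v}{1 + uv/c^2} \;\longrightarrow\; u + v \qquad \text{as } uv/c^2 \to 0. $$

The corresponding empirical reduction would be the further claim that this limit is why Galilean kinematics worked so well for slow-moving objects in the first place.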
Rosaler’s criticism, which I mostly agree with, is that folks often conflate these two. Usually this isn’t a serious problem since the holes can be patched up on the fly by a competent physicist, but sometimes it leads to serious trouble. The most egregious case, and the one that got me interested in all this, is the quantum-classical transition, and in particular the serious insufficiency of existing limits to explain the appearance of macroscopic classicality: even though such limiting procedures recover the classical equations of motion, they fail spectacularly to recover the state space.… [continue reading]
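As a rough gloss (mine, not a claim from Rosaler’s paper) on the “equations of motion” half of that sentence: Ehrenfest’s theorem gives classical-looking equations for expectation values,

$$ \frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m}, \qquad \frac{d}{dt}\langle \hat{p} \rangle = -\big\langle V'(\hat{x}) \big\rangle, $$

but $\langle V'(\hat{x}) \rangle \approx V'(\langle \hat{x} \rangle)$ only for narrowly peaked states, and nothing here rules out macroscopic superpositions; that mismatch between the quantum and classical state spaces is one facet of the problem.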
I’m happy to announce the recent publication of a paper by Mike, Wojciech, and me.
Motivated by the advances of quantum Darwinism and recognizing the role played by redundancy in identifying the small subset of quantum states with resilience characteristic of objective classical reality, we explore the implications of redundant records for consistent histories. The consistent histories formalism is a tool for describing sequences of events taking place in an evolving closed quantum system. A set of histories is consistent when one can reason about them using Boolean logic, i.e., when probabilities of sequences of events that define histories are additive. However, the vast majority of the sets of histories that are merely consistent are flagrantly nonclassical in other respects. This embarras de richesses (known as the set selection problem) suggests that one must go beyond consistency to identify how the classical past arises in our quantum universe. The key intuition we follow is that the records of events that define the familiar objective past are inscribed in many distinct systems, e.g., subsystems of the environment, and are accessible locally in space and time to observers.
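For readers who want the additivity condition above spelled out, the standard consistency (decoherence) condition from the general consistent-histories literature (a textbook statement, not a quote from our paper) is that the off-diagonal terms of the decoherence functional vanish:

$$ D(\alpha, \beta) \;=\; \mathrm{Tr}\!\left[ C_\alpha \, \rho \, C^\dagger_\beta \right], \qquad C_\alpha = P^{(n)}_{\alpha_n}(t_n) \cdots P^{(1)}_{\alpha_1}(t_1), $$

with $\mathrm{Re}\, D(\alpha, \beta) = 0$ for $\alpha \neq \beta$, so that the candidate probabilities $p(\alpha) = D(\alpha, \alpha)$ add correctly under coarse-graining of the histories.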
… [continue reading]
Quick note: the arXiv is administering a survey of user opinion on potential future changes, many of which were discussed previously on this blog. It can be reached by clicking the banner on the top of the arXiv homepage. I encourage you to take the survey if you haven’t already. (Doubly so if you agree with me…)
Likewise, Zotero is administering a somewhat shorter survey about what sorts of folks use Zotero and what they do with it.
To the question “Do you have suggestions for any of the above-mentioned new services, or any other new services you would like to see in arXiv?”, I responded:
I think the most important thing for the arXiv to do would be to “nudge” authors toward releasing their work with a copyleft, e.g., Creative Commons – Attribution. (Or at least stop nudging them toward the minimal arXiv license, as is done now in the submission process.) For instance, make it clear to authors that if they publish in various open access journals they should release the arXiv post under a similarly permissive license. Also, make it easier for authors to make the license more permissive at a later date, once they know where they are publishing.
… [continue reading]
Question: What sort of physics — if any — should be funded on the margin right now by someone trying to maximize positive impact for society, perhaps over the very long term?
First, it’s useful to separate the field into fundamental physics and non-fundamental physics, where the former is concerned with discovering new fundamental laws of the universe (particle physics, high-energy theory, cosmology, some astrophysics) and the latter applies accepted laws to understand physical systems (condensed matter, material physics, quantum information and control, plasma physics, nuclear physics, fluid dynamics, biophysics, atomic/molecular/optical physics, geophysics). (Some folks like David Nelson dispute the importance/usefulness of this distinction: PDF. In my opinion, he is correct, but only about the most boring part of fundamental physics (which has unfortunately dominated most of those subfields). More speculative research, such as the validity (!!!) of quantum mechanics, is undeniably of a different character from the investigation of low-energy field theories. But that point isn’t important for the present topic.)
That distinction made, let’s dive in.
Let’s first list some places where non-fundamental physics might have a social impact:
condensed matter and material science discoveries that give high-temperature superconductors, stronger/lighter/better-insulating/better-conducting materials, higher density batteries, new computing architectures, better solar cells;
quantum information discoveries that make quantum computers more useful than we currently think they will be, especially a killer app for quantum simulations;
plasma physics discoveries that make fusion power doable, or fission power cheaper;
quantum device technologies that allow for more precise measurements;
climate physics (vague) (added 2016-Dec-20);
… [continue reading]
Just in the nick of time…
Eliezer Yudkowsky has a large Facebook thread resulting in many public bets on the Lee Sedol vs DeepMind’s AlphaGo match.
In particular, I have bet Carl Shulman $100 at even odds that Sedol will win. (For the record, my confidence is low, and if I win it will be mostly luck.) The match, taking place March 9-15, will be streamed live on YouTube.
Relatedly, here is an excellent (if slightly long-winded) discussion of why the apparent jump in AI Go ability may be partially attributable to a purposeful application of additional computing power and Go-specific researcher expertise, rather than purely a large jump in domain-general AI power.
SciHub has been in the news recently, and I guess they decided to upgrade their appearance.
Want a postdoc doing theoretical physics, machine learning, and genomics? You’re in luck.
Luke Muehlhauser has a good quote from Bill Gates on AI timelines.
“Assortative Mating—A Missing Piece in the Jigsaw of Psychiatric Genetics”.
Why are psychiatric disorders so highly heritable when they are associated with reduced fecundity? Why are some psychiatric disorders so much more highly heritable than others? Why is there so much genetic comorbidity across psychiatric disorders?
… [continue reading]
David L. Stern on changing incentives in science by getting rid of journals:
Instead, I believe, we will do better to rely simply on the scientific process itself. Over time, good science is replicated, elevated, and established as most likely true; bad science may be unreplicated, flaws may be noted, and it usually is quietly dismissed as untrue. This process may take considerable time—sometimes years, sometimes decades. But, usually, the most egregious papers are detected quickly by experts as most likely garbage. This self-correcting aspect of science often does not involve explicit written documentation of a paper’s flaws. The community simply decides that these papers are unhelpful and the field moves in a different direction.
In sum, we should stop worrying about peer review….
The real question that people seem to be struggling with is “How will we judge the quality of the science if it is not peer reviewed and published in a journal that I ‘respect’?” Of course, the answer is obvious. Read the papers! But here is where we come to the crux of the incentive problem. Currently, scientists are rewarded for publishing in “top” journals, on the assumption that these journals publish only great science. Since this assumption is demonstrably false, and since journal publishing involves many evils that are discussed at length in other posts, a better solution is to cut journals out of the incentive structure altogether.
… [continue reading]
A new paper of mine (PRA 93, 012107 (2016), arXiv:1507.04083) just came out. The main theorem of the paper is not deep, but I think it’s a clarifying result within a formalism that is deep: ideal quantum Brownian motion (QBM) in symplectic generality. In this blog post, I’ll refresh you on ideal QBM, quote my abstract, explain the main result, and then — going beyond the paper — show how it’s related to the Kolmogorov-Sinai entropy and the speed at which macroscopic wavefunctions branch.
If you Google around for “quantum Brownian motion”, you’ll come across a bunch of definitions that have quirky features and aren’t obviously related to each other. This is a shame. As I explained in an earlier blog post, ideal QBM is the generalization of the harmonic oscillator to open quantum systems. If you think harmonic oscillators are important, and you think decoherence is important, then you should understand ideal QBM.
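For orientation, here is one common special case of a QBM master equation: the high-temperature Caldeira-Leggett form, quoted from the standard literature (not from the paper, which works in much greater symplectic generality):

$$ \partial_t \rho \;=\; -\frac{i}{\hbar}[\hat{H}, \rho] \;-\; \frac{i \gamma}{\hbar}\,[\hat{x}, \{\hat{p}, \rho\}] \;-\; \frac{2 m \gamma k_{\mathrm{B}} T}{\hbar^2}\,[\hat{x}, [\hat{x}, \rho]], $$

where $\gamma$ is the damping rate and $T$ the bath temperature. The first term is ordinary unitary evolution, the second is dissipation, and the third decoheres superpositions of distinct positions; strictly speaking, this particular form needs a small correction term to be completely positive.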
Harmonic oscillators are ubiquitous in the world because all smooth potentials look quadratic locally. Exhaustively understanding harmonic oscillators is very valuable because they are exactly solvable in addition to being ubiquitous. In an almost identical way, all quantum Markovian degrees of freedom look locally like ideal QBM, and their completely positive (CP) dynamics can be solved exactly.… [continue reading]
I thought this criticism by Ars Technica of the woeful state of Wikipedia’s science articles was mostly off the mark. (HN Comments.) The author framed it as a conflict between laymen and specialists, claiming that scientific articles are targeted at specialists at the expense of laymen, with lots of jargon, etc. I eagerly agree with him that there are lots of terrible science articles, and that some technical articles could use better context and introductory bits. But I think this is largely a problem of not having enough skilled science writers rather than a purposeful choice between laymen and specialists. Due to the curse of knowledge, the specialists literally do not understand what is and isn’t accessible to laymen; they see through the jargon like the matrix. And the laymen do not get in their gut how many true technical dependencies there really are, that unless you understand topics X and Y, topic Z is pretty much useless. They assume that all this jargon is used by the specialists either because they are too lazy to translate, or are purposefully constructing barriers to entry. I empathize with skilled science writers (who are unfortunately rare), because their best articles often go unnoticed as both laymen and scientists read them and shrug “Yea, that’s pretty clear.… [continue reading]