KS entropy generated by entanglement-breaking quantum Brownian motion

A new paper of mine (PRA 93, 012107 (2016), arXiv:1507.04083) just came out. The main theorem of the paper is not deep, but I think it’s a clarifying result within a formalism that is deep: ideal quantum Brownian motion (QBM) in symplectic generality. In this blog post, I’ll refresh you on ideal QBM, quote my abstract, explain the main result, and then — going beyond the paper — show how it’s related to the Kolmogorov-Sinai entropy and the speed at which macroscopic wavefunctions branch.

Ideal QBM

If you Google around for “quantum Brownian motion”, you’ll come across a bunch of definitions that have quirky features and aren’t obviously related to each other. This is a shame. As I explained in an earlier blog post, ideal QBM is the generalization of the harmonic oscillator to open quantum systems. If you think harmonic oscillators are important, and you think decoherence is important, then you should understand ideal QBM.

Harmonic oscillators are ubiquitous in the world because all smooth potentials look quadratic locally. Exhaustively understanding harmonic oscillators is very valuable because they are exactly solvable in addition to being ubiquitous. In an almost identical way, all quantum Markovian degrees of freedom look locally like ideal QBM, and their completely positive (CP) dynamics can be solved exactly.

To get true generality, both harmonic oscillators and ideal QBM should be expressed in manifestly symplectic covariant form. Just like for Lorentz covariance, a dynamical equation that exhibits manifest symplectic covariance takes the same form under linear symplectic transformations on phase space. At a microscopic level, all physics is symplectic covariant (and Lorentz covariant), so this better hold.… [continue reading]
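As a quick numerical aside (nothing deep, and not in the paper): the covariance statement is easy to check directly. Below is a minimal Python sketch, with arbitrary made-up names, that builds a linear symplectic transformation S = exp(ΩH) from an arbitrary symmetric matrix H (i.e., a quadratic Hamiltonian) and verifies SᵀΩS = Ω, which is the defining property that lets a dynamical equation written in terms of Ω keep the same form after the change of phase-space variables.

```python
# Minimal sketch (illustrative only): a linear symplectic transformation on
# 2N-dimensional phase space preserves the symplectic form Omega.
import numpy as np
from scipy.linalg import expm

N = 2  # number of degrees of freedom; coordinates ordered (x1..xN, p1..pN)
I = np.eye(N)
Omega = np.block([[np.zeros((N, N)), I],
                  [-I, np.zeros((N, N))]])   # symplectic form

rng = np.random.default_rng(0)
A = rng.standard_normal((2 * N, 2 * N))
H = (A + A.T) / 2          # arbitrary symmetric matrix = quadratic Hamiltonian
S = expm(Omega @ H)        # linear symplectic transformation generated by H

# Symplectic covariance: S^T Omega S = Omega, so equations written in terms
# of Omega take the same form in the new phase-space coordinates.
print(np.allclose(S.T @ Omega @ S, Omega))   # True
```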

Links for January 2016

  • Mechanistic insight into schizophrenia?
  • Wide-ranging (and starry-eyed) discussion on HackerNews about what startups can do to make the world a better place.
  • All six naked-eye-visible planets in one wide-angle image.

    (Source.) You can see the current configuration of the solar system here.
  • Holden Karnofsky argues persuasively that selection bias implies we should have fewer, higher-quality studies than we would in a hypothetical world with ideal, unbiased researchers.

    Chris Blattman worries that there is too much of a tendency toward large, expensive, perfectionist studies, writing:

     

    …each study is like a lamp post. We might want to have a few smaller lamp posts illuminating our path, rather than the world’s largest and most awesome lamp post illuminating just one spot. I worried that our striving for perfect, overachieving studies could make our world darker on average.

    My feeling – shared by most of the staff I’ve discussed this with – is that the trend toward “perfect, overachieving studies” is a good thing…

    Bottom line. Under the status quo, I get very little value out of literatures that have large numbers of flawed studies – because I tend to suspect the flaws of running in the same direction. On a given research question, I tend to base my view on the very best, most expensive, most “perfectionist” studies, because I expect these studies to be the most fair and the most scrutinized, and I think focusing on them leaves me in better position than trying to understand all the subtleties of a large number of flawed studies.

    If there were more diversity of research methods, I’d worry less about pervasive and correlated selection bias.

[continue reading]

Abstracts for January 2016

  • We study the inflationary quantum-to-classical transition for the adiabatic curvature perturbation \zeta due to quantum decoherence, focusing on the role played by squeezed-limit mode couplings. We evolve the quantum state \Psi in the Schrodinger picture, for a generic cubic coupling to additional environment degrees of freedom. Focusing on the case of minimal gravitational interactions, we find the evolution of the reduced density matrix for a given long-wavelength fluctuation by tracing out the other (mostly shorter wavelength) modes of \zeta as an environment. We show that inflation produces phase oscillations in the wave functional \Psi[\zeta(x)], which suppress off-diagonal components of the reduced density matrix, leaving a diagonal mixture of different classical configurations. Gravitational nonlinearities thus provide a minimal mechanism for generating classical stochastic perturbations from inflation. We identify the time when decoherence occurs, which is delayed after horizon crossing due to the weak coupling, and find that Hubble-scale modes act as the decohering environment. We also comment on the observational relevance of decoherence and its relation to the squeezing of the quantum state.
  • The fluctuation-dissipation relation is usually formulated for a system interacting with a heat bath at finite temperature, and often in the context of linear response theory, where only small deviations from the mean are considered. We show that for an open quantum system interacting with a nonequilibrium environment, where temperature is no longer a valid notion, a fluctuation-dissipation inequality exists. Instead of being proportional, quantum fluctuations are bounded below by quantum dissipation, whereas classically the fluctuations vanish at zero temperature. The lower bound of this inequality is exactly satisfied by (zero-temperature) quantum noise and is in accord with the Heisenberg uncertainty principle, in both its microscopic origins and its influence upon systems.
[continue reading]

Inaccessible Wikipedia science articles as inclusionism

I thought this criticism by Ars Technica of the woeful state of Wikipedia’s science articles was mostly off the mark. (HN Comments.) The author framed it as a conflict between laymen and specialists, claiming that scientific articles are targeted at specialists at the expense of laymen, with lots of jargon, etc. I readily agree with him that there are lots of terrible science articles, and that some technical articles could use better context and introductory bits. But I think this is largely a problem of not having enough skilled science writers rather than a purposeful choice between laymen and specialists. Due to the curse of knowledge, the specialists literally do not understand what is and isn’t accessible to laymen; they see through the jargon like the Matrix. And the laymen do not get in their gut how many true technical dependencies there really are: that unless you understand topics X and Y, topic Z is pretty much useless. They assume that all this jargon is used by the specialists either because they are too lazy to translate, or because they are purposefully constructing barriers to entry. I empathize with skilled science writers (who are unfortunately rare), because their best articles often go unnoticed as both laymen and scientists read them and shrug: “Yeah, that’s pretty clear. Was that really so hard?”

The examples used in the editorial, like “Rabi oscillation”, are the sort of scientifically deep topics, with many dependencies, that one will never be able to write more than a few layman-accessible sentences about. If you don’t know what a Hilbert space is, there’s just not that much to say about Rabi oscillations.… [continue reading]

Links for December 2015

[continue reading]

PI accepting 2016 master’s student applications

Perimeter Institute runs a pretty great and unusual 1-year master’s program called Perimeter Scholars International (PSI…ha!). If you’re in your last year as an undergrad, I strongly advise you (seriously) to consider applying. Your choice of grad school is 80% of the selection power determining your thesis topic, and that topic places very strong constraints on your entire academic career. The more your choice is informed by actual physics knowledge (rather than the apparent impressiveness of professors and institutions), the better. An additional year at a new institution, taking classes with new teachers, can really help.

(Older academics can advertise this to students by printing out this poster.)

Here’s the blurb:

Each year, Canada’s Perimeter Institute for Theoretical Physics recruits approximately 30 exceptional science graduates for an immersive, 10-month physics boot camp: Perimeter Scholars International (PSI). This unique Master’s program seeks not only students with stellar undergraduate physics track records, but also those with diverse backgrounds, collaborative spirit, creativity, and other attributes that will set them apart as future innovators.

Features of the program include:

  • All student costs (tuition and living) are covered, removing financial and/or geographical barriers to entry.
  • Students learn from world-leading theoretical physicists – resident Perimeter researchers and visiting scientists – within the inspiring environment of Perimeter Institute.
  • Collaboration is valued over competition; deep understanding and creativity are valued over rote learning and examination.
  • PSI recruits worldwide: 85 percent of students come from outside of Canada.
  • PSI takes calculated risks, seeking extraordinary talent who may have non-traditional academic backgrounds but have demonstrated exceptional scientific aptitude.

PSI is now accepting applications for the class of 2016/17. Applications are due by February 1, 2016.

[continue reading]

Robert Zubrin’s reasoning on space exploration

[Just shooting from the hip here, for fun.]

I think we should send humans to Mars, but I don’t really think it’s possible to justify it as an instrumental means of achieving other more concrete goals. (I just take it as an intrinsic goal.) But here is Robert Zubrin making the best instrumental case I’ve heard.

My biggest criticism is that not finding evidence of life on Mars does not imply life is extraordinarily rare, because there are other options besides easy-starting life (with the great filter somewhere after) and extremely-hard-starting life. If you think it’s possible that there’s a filter strong enough to prevent single-cell life from developing interstellar travel (I’m skeptical: when it comes to estimating extremely unlikely events, with multiple independent unlikely steps that all need to happen quickly, the development of the first replicator seems to require vastly more steps than relatively simple things like sexual reproduction. The only thing that makes me uncertain is the possibility that there are extremely simple replicators that resemble nothing like minimal cells, and that there is a relatively natural progression to minimal cells that simply isn’t large enough to leave fossils. I would love to update on this if you know something I’m not thinking of.), then it’s still very possible that single-cell life is hard enough to start that we wouldn’t expect to find it on Mars, yet still relatively common in the galaxy. Indeed, it’s easy (assuming a strong late filter) to imagine that life is easy to start with liquid water plus X, where X is a relatively common planetary condition that just has never existed on Mars.

That said, Zubrin’s argument is better than I was expecting, and I agree that getting a quasi-definitive answer on whether there was ever life on Mars — even with my strong prior against it — is probably the best new evidence we are likely to collect for a very long time with regard to the prevalence of life in the universe.… [continue reading]

My talk on dark matter decoherence detection

I gave a talk recently on Itay’s and my latest results for detecting dark matter through the decoherence it induces in matter interferometers.

Quantum superpositions of matter are unusually sensitive to decoherence by tiny momentum transfers, in a way that can be made precise with a new diffusion standard quantum limit. Upcoming matter interferometers will produce unprecedented spatial superpositions of over a million nucleons. What sorts of dark matter scattering events could be seen in these experiments as anomalous decoherence? We show that it is extremely weak but medium-range interactions between matter and dark matter that would be most visible, such as scattering through a Yukawa potential. We construct toy models for these interactions, discuss existing constraints, and delineate the expected sensitivity of forthcoming experiments. In particular, the OTIMA interferometer under development at the University of Vienna will directly probe many orders of magnitude of parameter space, and the proposed MAQRO satellite experiment would be vastly more sensitive yet. This is a multidisciplinary talk that will be accessible to a non-specialized audience.
[Download MP4]

Relevant paper on the diffusion SQL is here: arXiv:1504.03250. The main dark matter paper is still a work in progress.

Footnotes


  1. If you ever have problems finding the direct download link for videos on PI’s website (they are sometimes missing), this Firefox extension seems to do the trick.
[continue reading]

Abstracts for October-November 2015

  • High energy particle colliders have been in the forefront of particle physics for more than three decades. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). A number of the next generation collider facilities have been proposed and are currently under consideration for the medium and far-future of accelerator-based high energy physics. In this paper we offer a uniform approach to evaluation of various accelerators based on the feasibility of their energy reach, performance potential and cost range.
    (H/t Sabine.) The contraction has been happening for quite some time:

    maximum c.o.m. energy has drastically slowed down since the early 1990’s and the lepton colliders even went backwards in energy to study rare processes…Moreover, the number of the colliding beam facilities in operation has dropped from 9 two decades ago to 5 now…

  • Conditions for Quantum Violation of Macroscopic Realism
    Johannes Kofler and Časlav Brukner
    Why do we not experience a violation of macroscopic realism in everyday life? Normally, no violation can be seen either because of decoherence or the restriction of coarse-grained measurements, transforming the time evolution of any quantum state into a classical time evolution of a statistical mixture. We find the sufficient condition for these classical evolutions for spin systems under coarse-grained measurements. However, there exist “nonclassical” Hamiltonians whose time evolution cannot be understood classically, although at every instant of time the quantum state appears as a classical mixture. We suggest that such Hamiltonians are unlikely to be realized in nature because of their high computational complexity.
[continue reading]

Links for November 2015

[continue reading]

Comments on Myrvold’s Taj Mahal

Last week I saw an excellent talk by philosopher Wayne Myrvold.

The Reeh-Schlieder theorem says, roughly, that, in any reasonable quantum field theory, for any bounded region of spacetime R, any state can be approximated arbitrarily closely by operating on the vacuum state (or any state of bounded energy) with operators formed by smearing polynomials in the field operators with functions having support in R. This strikes many as counterintuitive, and Reinhard Werner has glossed the theorem as saying that “By acting on the vacuum with suitable operations in a terrestrial laboratory, an experimenter can create the Taj Mahal on (or even behind) the Moon!” This talk has two parts. First, I hope to convince listeners that the theorem is not counterintuitive, and that it follows immediately from facts that are already familiar fare to anyone who has digested the opening chapters of any standard introductory textbook of QFT. In the second, I will discuss what we can learn from the theorem about how relativistic causality is implemented in quantum field theories.

(Download MP4 video here.)

The topic was well-defined, and of reasonable scope. The theorem is easily and commonly misunderstood. And Wayne’s talk served to dissolve the confusion around it, by unpacking the theorem into a handful of pieces so that you could quickly see where the rub was. I would that all philosophy of physics were so well done.

Here are the key points as I saw them:

  • The vacuum state in QFTs, even non-interacting ones, is entangled over arbitrary distances (albeit by exponentially small amounts). You can think of this as every two space-like separated regions of spacetime sharing extremely diluted Bell pairs.
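    To get a quantitative feel for “extremely diluted” (a toy calculation of mine, not something from Wayne’s talk): a weakly entangled pair of qubits |ψ⟩ = cos θ |00⟩ + sin θ |11⟩ carries an entanglement entropy that falls off very fast as the amplitude sin θ on the second branch shrinks. A minimal Python sketch:

```python
# Toy illustration: entanglement entropy, in bits, of a "diluted Bell pair"
# |psi> = cos(theta)|00> + sin(theta)|11>.
import numpy as np

def entanglement_entropy_bits(theta):
    # The Schmidt coefficients are cos(theta) and sin(theta), so the reduced
    # state of either qubit has eigenvalues cos^2(theta) and sin^2(theta).
    p = np.sin(theta) ** 2
    probs = np.array([p, 1 - p])
    probs = probs[probs > 0]                # avoid log(0)
    return -np.sum(probs * np.log2(probs))

for theta in [np.pi / 4, 1e-1, 1e-3, 1e-6]:  # pi/4 = maximally entangled
    print(f"theta = {theta:g}: {entanglement_entropy_bits(theta):.3e} bits")
```

    For θ = π/4 this is a full Bell pair (1 bit), while for θ = 10⁻³ it is already down to roughly 2×10⁻⁵ bits.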
[continue reading]

China to lead particle physics

China will build the successor to the LHC.

Note that the China Daily article above incorrectly suggests that they will build a 50-70 km circular electron-positron accelerator at ~100 TeV CoM. In fact, the project comes in two phases inside the same tunnel: first a 250 GeV electron-positron ‘precision’ machine, the Circular Electron-Positron Collider (CEPC), followed by an upgrade to a 70 TeV proton-proton ‘discovery’ machine, the Super Proton-Proton Collider (SPPC). (Note that the 250 GeV electron-positron collisions will produce only one Higgs, and the fact that the CoM energy is double the Higgs mass is a coincidence. See slides 9-16 here for some of the processes that will be studied.) The current timeline for operations, which will inevitably be pushed back, projects that data taking will start in 2028 and 2042, respectively. (H/t Graeme Smith.)

The existence of this accelerator has lots of interesting implications for accelerators in the Western hemisphere. For instance, the International Linear Collider (ILC) was planning on using a ‘push-pull’ configuration where they would alternate beam time between two devices (by keeping them on huge rolling platforms!). The idea is that having two completely separate and competing detectors is critical for maintaining objectivity in a world where you only have a single accelerator. Since the ILC is linear, there is only one interaction region (unlike for the more common circular accelerators). So to use two detectors, you need to be able to swap them in and out! But this becomes largely unnecessary if the CEPC exists to keep the ILC honest.

I think this is a bad development for physics because I am pessimistic about particle accelerators telling us something truly deep and novel about the universe, at least in the next century.… [continue reading]

Links for October 2015

  • More well-deserved praise for the Stanford Encyclopedia of Philosophy. Lots to be learned from how the SEP was created. A key chicken-or-egg problem:

    …several SEP authors and editors…said that the encyclopedia is used frequently both as a reference and as a teaching tool. This means that philosophers are some of the SEP’s core readers, and they can alert authors or subject editors to incorrect or insufficient entries.

    Stanford does pay most of the operating costs. But the SEP has a paid staff of only three—Zalta, Nodelman, and Allen—plus five other Stanford employees who spend 20% of their time on technical support. Neither the authors, nor the dozens of subject editors, get so much as a dime for their troubles.

    To pay running expenses not covered by Stanford, the team obtained nearly $2 million in grants over the first 15 years. But they wanted something more sustainable… The SEP asks academic libraries to make a one-time contribution [that now provides around a third of the budget]. That doesn’t get them access to the SEP, since it’s already freely accessible, but they enjoy some extra “member benefits,” like the ability to use their own branding on a version of the encyclopedia, and to save the full archives.

    Moreover, their money goes into an SEP endowment, managed by the same company that takes care of Stanford University’s endowment of over $20 billion. If the SEP ever shuts down, Stanford promises to give the libraries that contributed to SEP all their money back, with interest. “It became a no-risk investment for the libraries, and it’s a way for them to invest in open access,” says Zalta.

    Libraries were enthusiastic. The SEP was able to raise over $2 million from the long list of contributors, and Stanford added $1 million to the library endowment.

[continue reading]

How fast do macroscopic wavefunctions branch?

Over at PhysicsOverflow, Daniel Ranard asked a question that’s near and dear to my heart:

How deterministic are large open quantum systems (e.g. with humans)?

Consider some large system modeled as an open quantum system — say, a person in a room, where the walls of the room interact in a boring way with some environment. Begin with a pure initial state describing some comprehensible configuration. (Maybe the person is sitting down.) Generically, the system will be in a highly mixed state after some time. Both normal human experience and the study of decoherence suggest that this state will be a mixture of orthogonal pure states that describe classical-like configurations. Call these configurations branches.

How much does a pure state of the system branch over human time scales? There will soon be many (many) orthogonal branches with distinct microscopic details. But to what extent will probabilities be spread over macroscopically (and noticeably) different branches?

I answered the question over there as best I could. Below, I’ll reproduce my answer and indulge in slightly more detail and speculation.

This question is central to my research interests, in the sense that completing that research would necessarily let me give a precise, unambiguous answer. So I can only give an imprecise, hand-wavy one. I’ll write down the punchline, then work backwards.

Punchline

The instantaneous rate of branching, as measured in entropy/time (e.g., bits/s), is given by the sum of all positive Lyapunov exponents for all non-thermalized degrees of freedom.

Most of the vagueness in this claim comes from defining/identifying the degrees of freedom that have thermalized, and dealing with cases of partial/incomplete thermalization; these problems exist classically.
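As a purely classical toy illustration of the quantity in the punchline (not a quantum calculation, just the simplest chaotic example I know): the KS entropy of a one-dimensional chaotic map is its single positive Lyapunov exponent, which for the logistic map at r = 4 is ln 2 nats, i.e., exactly 1 bit per iteration. A minimal numerical sketch in Python:

```python
# Illustrative sketch: estimate the positive Lyapunov exponent of the logistic
# map x -> r x (1 - x) at r = 4, and hence its KS entropy in bits per
# iteration. (Classical toy example only.)
import numpy as np

r = 4.0
x = 0.3                          # arbitrary initial condition in (0, 1)
n_transient, n_sample = 1000, 100_000

for _ in range(n_transient):     # discard the transient
    x = r * x * (1 - x)

log_sum = 0.0
for _ in range(n_sample):
    x = r * x * (1 - x)
    log_sum += np.log(abs(r * (1 - 2 * x)))   # log |f'(x)| along the orbit

lyapunov_nats = log_sum / n_sample
print(lyapunov_nats / np.log(2), "bits per iteration")   # ~1.0, since the exact value is ln 2
```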

Elaboration

The original question postulates that the macroscopic system starts in a quantum state corresponding to some comprehensible classical configuration, i.e., the system is initially in a quantum state whose Wigner function is localized around some classical point in phase space.… [continue reading]

Links for September 2015

  • Chris Blattman on the Center for Global Development’s endorsement of cash transfers. (Report.)
  • Here’s to several decades of grinding out a couple of decimal places to parameterize a charged Higgs:

    Three years ago the BaBar collaboration at SLAC measured the branching ratios for B-meson decay to produce either a muon or a tau. For two slightly different decays, they found 2σ or greater deviations from the democratic standard-model expectation. Now the LHCb collaboration at CERN has confirmed the BaBar result for one of the decays. In a preprint, the Belle group at KEK in Japan has also announced results that show a similar though less strong deviation from the standard model. The figure below (from the Heavy Flavor Averaging Group) shows the branching ratios (R) measured by the groups for the two decays, denoted D and D*, along with the standard-model prediction. Taken together, the groups’ measurements have struck a 3.9-σ blow to the principle of lepton democracy. If they hold up, the standard model will have to be modified—perhaps by the addition of a new charged Higgs boson, whose interactions would depend on mass.

    Importantly, this is a combination of several experiments rather than easily attributable to a systematic mistake in one.

  • Advanced LIGO turns on after completing upgrade. From now on, LIGO will be able to notify any of the 75 astronomical observatories around the world that have agreed to, at a moment’s notice, point their telescopes to the sky in search of light signals corresponding to possible gravitational wave detections.
  • New data on great filter from density of habitable planets.

    these new results offer little support for the scenario where we have a good chance of growing out into the universe and meeting other aliens before a billion years have passed.

[continue reading]