Integrating with functional derivatives

I saw a neat talk at Perimeter a couple weeks ago on new integration techniques:

Speaker: Achim Kempf from University of Waterloo.
Title: “How to integrate by differentiating: new methods for QFTs and gravity”.

Abstract: I present a simple new all-purpose integration technique. It is quick to use, applies to functions as well as distributions and it is often easier than contour integration. (And it is not Feynman’s method). It also yields new quick ways to evaluate Fourier and Laplace transforms. The new methods express integration in terms of differentiation. Applied to QFT, the new methods can be used to express functional integration, i.e., path integrals, in terms of functional differentiation. This naturally yields the weak and strong coupling expansions as well as a host of other expansions that may be of use in quantum field theory, e.g., in the context of heat traces.

(Many talks hosted on PIRSA have a link to the mp4 file so you can directly download it. This talk does not, but you can right-click here and select “save as” to get the f4v file. This file format can be watched with VLC player. You can find the f4v link for any talk hosted by PIRSA by viewing the page source and searching the text for “.f4v”. There are many nice things about learning physics from videos, one of which is the ability to easily adjust the playback speed and skip around. In VLC player, playback speed can be incremented in 10% steps by pressing the left and right square bracket keys, ‘[’ and ‘]’.)

The technique is based on the familiar trick of extracting a functional derivative inside a path integral and using integration by parts.… [continue reading]
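For concreteness, here is the standard source-term version of that trick (a sketch in my own notation, not necessarily the formulation Kempf uses in the talk): any functional of the field under the path integral can be pulled outside as a functional derivative with respect to a source J,

\begin{align*} \int \mathcal{D}\phi \, F[\phi]\, e^{iS_0[\phi] + i\int J\phi} = F\!\left[-i\frac{\delta}{\delta J}\right] \int \mathcal{D}\phi \, e^{iS_0[\phi] + i\int J\phi} = F\!\left[-i\frac{\delta}{\delta J}\right] Z_0[J], \end{align*}

with J set to zero at the end. The field dependence has been traded for derivatives acting on the free generating functional Z_0[J], which is the sense in which one “integrates by differentiating”.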

Comments on Pikovski et al.’s time dilation decoherence

Folks have been asking about the new Nature Physics article by Pikovski et al., “Universal decoherence due to gravitational time dilation”. Here are some comments:

  • I think their calculation is probably correct for the model they are considering. One could imagine that they were placing their object in a superposition of two different locations in an electric (rather than gravitational) field, and in this case we really would expect the internal degrees of freedom to evolve in two distinct ways. Any observer who was “part of the superposition” wouldn’t be able to tell locally whether their clock was ticking fast or slow, but it can be determined by bringing both clocks back together and comparing them.
  • It’s possible the center of mass (COM) gets shifted a bit, but you can avoid this complication by just assuming that the superposition separation L is much bigger than the size of the object R, and that the curvature of the gravitational field is very small compared to both.
  • Their model is a little weird, as hinted at by their observation that they get “Gaussian decoherence”, \sim \exp(-T^2), rather than exponential, \sim \exp(-T). The reason is that their evolution isn’t Markovian, as it would be for an environment (like scattered or emitted photons) composed of small parts that interact for a bit of time and then leave. Rather, the COM is becoming more and more entangled with each of the internal degrees of freedom as time goes on. (A heuristic sketch of why the decay is Gaussian appears just after this list.)
  • Because they don’t emit any radiation, their “environment” (the internal DOF) is finite dimensional, and so you will eventually get recoherence. This isn’t a problem in practice for Avogadro’s number of particles, since the recurrence time is astronomically long.
  • This only decoheres superpositions in the direction of the gravitational gradient, so it’s not particularly relevant for why things look classical above any given scale.
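Here is the heuristic I have in mind for the Gaussian decay (my paraphrase of the mechanism, with \Delta E standing for the spread in internal energy; these are not the authors’ exact expressions). A superposition of two heights separated by \Delta x in a gravitational acceleration g accumulates a relative proper-time lag \Delta\tau \approx g \Delta x \, t / c^2, so the overlap of the two internal states is suppressed by roughly

\begin{align*} \left| \left\langle e^{-i H_{\rm int} \Delta\tau / \hbar} \right\rangle \right| \approx \exp\!\left[ - \frac{(\Delta E)^2 (\Delta\tau)^2}{2\hbar^2} \right] = \exp\!\left[ - \frac{(\Delta E)^2 g^2 \Delta x^2 \, t^2}{2 \hbar^2 c^4} \right], \end{align*}

which is Gaussian in time precisely because the same internal degrees of freedom stay entangled with the COM forever rather than being carried away, as they would be in a Markovian model.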
[continue reading]

Undetected photon imaging

Lemos et al. have a relatively recent letter in Nature [G. Lemos, V. Borish, G. Cole, S. Ramelow, R. Lapkiewicz, and A. Zeilinger, “Quantum imaging with undetected photons”, Nature 512, 409 (2014), arXiv:1401.4318] where they describe a method of imaging with undetected photons. (An experiment with the same essential quantum features was performed way back in 1991 by Zou et al. [X. Y. Zou, L. J. Wang, and L. Mandel, “Induced coherence and indistinguishability in optical interference”, Phys. Rev. Lett. 67, 318 (1991), PDF], but Lemos et al. have emphasized its implications for imaging.) The idea is conceptually related to decoherence detection, and I want to map one onto the other to flesh out the connection. Their figure 1 gives a schematic of the experiment, and is copied below.


Figure 1 from Lemos et al.: “Schematic of the experiment. Laser light (green) splits at beam splitter BS1 into modes a and b. Beam a pumps nonlinear crystal NL1, where collinear down-conversion may produce a pair of photons of different wavelengths called signal (yellow) and idler (red). After passing through the object O, the idler reflects at dichroic mirror D2 to align with the idler produced in NL2, such that the final emerging idler f does not contain any information about which crystal produced the photon pair. Therefore, signals c and e combined at beam splitter BS2 interfere. Consequently, signal beams g and h reveal idler transmission properties of object O.”
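Here is the low-gain state I find helpful to keep in mind when reading the figure (my own schematic, with T = |T| e^{i\gamma} the amplitude transmission of the object O and |1\rangle_\ell an unmonitored loss mode; this is not notation from the letter). Keeping only the leading pair-creation terms,

\begin{align*} |\psi\rangle \propto |{\rm vac}\rangle + \epsilon \left[ |1\rangle_c \left( T |1\rangle_f + \sqrt{1-|T|^2}\, |1\rangle_\ell \right) + e^{i\phi} |1\rangle_e |1\rangle_f \right], \end{align*}

so when the signals c and e are combined at BS2, only the two terms sharing the common idler mode f interfere: the fringe visibility at g and h is proportional to |T| and the fringe phase shifts by \gamma, even though the detected (signal) photons never touched the object.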

The first two paragraphs of the letter contain all the meat, encrypted and condensed into an opaque nugget of the kind that Nature loves; it stands as a good example of the lamentable way many quantum experimental articles are written.… [continue reading]

A dark matter model for decoherence detection

[Added 2015-1-30: The paper is now in print and has appeared in the popular press.]

One criticism I’ve had to address when proselytizing the indisputable charms of using decoherence detection methods to look at low-mass dark matter (DM) is this: I’ve never produced a concrete model that would be tested. My analysis (arXiv:1212.3061) addressed the possibility of using matter interferometry to rule out a large class of dark matter models characterized by a certain range for the DM mass and the nucleon-scattering cross section. However, I never constructed an explicit model as a representative of this class to demonstrate in detail that it was compatible with all existing observational evidence. This is a large and complicated task, and not something I could accomplish on my own.
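For the uninitiated, the back-of-the-envelope logic behind this class of models goes roughly as follows (my own rough statement of the regime, not a number taken from either paper). Once the superposition separation exceeds the dark matter de Broglie wavelength, each scattering event carries essentially complete which-path information, so the decoherence rate saturates at the scattering rate,

\begin{align*} \Gamma_{\rm dec} \approx n_{\rm DM} \, \sigma \, \bar{v} = \frac{\rho_{\rm DM}}{m_{\rm DM}} \, \sigma \, \bar{v}, \end{align*}

where \rho_{\rm DM} \approx 0.3\ {\rm GeV/cm^3} is the local dark matter density, m_{\rm DM} the candidate mass, \sigma the relevant cross section for scattering off the superposed object, and \bar{v} \sim 10^{-3} c a typical galactic velocity. Low-mass candidates have correspondingly large number densities n_{\rm DM} = \rho_{\rm DM}/m_{\rm DM}, which helps offset small cross sections.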

I tried hard to find an existing model in the literature that met my requirements, but without luck. So I had to argue (with referees and with others) that this was properly beyond the scope of my work, and that the idea was interesting enough to warrant publication without a model. This ultimately was successful, but it was an uphill battle. Among other things, I pointed out that new experimental concepts can inspire theoretical work, so it is important that they be disseminated.

I’m thrilled to say this paid off in spades. Bateman, McHardy, Merle, Morris, and Ulbricht have posted their new pre-print “On the Existence of Low-Mass Dark Matter and its Direct Detection” (arXiv:1405.5536). Here is the abstract:

Dark Matter (DM) is an elusive form of matter which has been postulated to explain astronomical observations through its gravitational effects on stars and galaxies, gravitational lensing of light around these, and through its imprint on the Cosmic Microwave Background (CMB).

[continue reading]

Comments on Tegmark’s ‘Consciousness as a State of Matter’

[Edit: Scott Aaronson has posted on his blog with extensive criticism of Integrated Information Theory, which motivated Tegmark’s paper.]

Max Tegmark’s recent paper entitled “Consciousness as a State of Matter” has been making the rounds. See especially Sabine Hossenfelder’s critique on her blog that agrees in several places with what I say below.

Tegmark’s paper didn’t convince me that there’s anything new here with regard to the big questions of consciousness. (In fairness, I haven’t read the work of neuroscientist Giulio Tononi that motivated Tegmark’s claims). However, I was interested in what he has to say about the proper way to define subsystems in a quantum universe (i.e. to “carve reality at its joints”) and how this relates to the quantum-classical transition. There is a sense in which the modern understanding of decoherence simplifies the vague question “How does (the appearance of) a classical world emerge in a quantum universe?” to the slightly-less-vague question “What are the preferred subsystems of the universe, and how do they change with time?”. Tegmark describes essentially this as the “quantum factorization problem” on page 3. (My preferred formulation is as the “set-selection problem” of Dowker and Kent. Note that this is a separate problem from the origin of probability in quantum mechanics, i.e. the problem described by Weinberg: “The difficulty is not that quantum mechanics is probabilistic—that is something we apparently just have to live with. The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics.” HT Steve Hsu.)
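A toy example of why the choice of factorization matters (my own illustration, not one from Tegmark’s paper): whether a state is entangled at all depends on the tensor product structure. The vector

\begin{align*} |\Phi^+\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}} \end{align*}

is maximally entangled with respect to the two-qubit factorization defined by the product basis \{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}, but if we define new subsystems by declaring the four Bell states to be a product basis (i.e. by conjugating with the unitary that maps Bell states to product states), the very same vector is a product state of the new “qubits”. Decoherence, quasiclassical variables, and (as I understand it) Tononi-style integrated information are all defined relative to such a decomposition into subsystems, which is why the factorization problem isn’t trivial.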

Therefore, my comments are going to focus on the “object-level” calculations of the paper, and I won’t have much to say about the implications for consciousness except at the very end.… [continue reading]

New review of decoherence by Schlosshauer

Max Schlosshauer has a new review of decoherence and how it relates to understanding the quantum-classical transition. The abstract is:

I give a pedagogical overview of decoherence and its role in providing a dynamical account of the quantum-to-classical transition. The formalism and concepts of decoherence theory are reviewed, followed by a survey of master equations and decoherence models. I also discuss methods for mitigating decoherence in quantum information processing and describe selected experimental investigations of decoherence processes.

I found it very concise and clear for its impressive breadth, and it has extensive cites to the literature. (As you may suspect, he cites me and my collaborators generously!) I think this will become one of the go-to introductions to decoherence, and I highly recommend it to beginners.

Other introductory material includes Schlosshauer’s textbook and RMP (quant-ph/0312059), Zurek’s RMP (quant-ph/0105127) and Physics Today article, and the textbook by Joos et al.… [continue reading]

Comments on Gell-Mann & Hartle’s latest

Back in December Gell-Mann and Hartle (G&H) posted their latest paper on consistent histories, “Adaptive Coarse Graining, Environment, Strong Decoherence, and Quasiclassical Realms”. Here are my thoughts.

The discussion of adaptive coarse graining was brief and very much in agreement with previous work.

G&H then give a name and formal description to the idea, long part of the intuitive lore, of a history being defined by the values taken by a particular variable over many time steps. (This might be the position of an object, which is being recorded to some accuracy by an environment that decoheres it.) The key idea is that all the Schrödinger-picture projectors P^{k}_{\alpha_k} at different times t_k defining the history commute:

\begin{align*} [P^{k}_{\alpha_k}, P^{k'}_{\alpha_{k'}}] = 0 \quad \forall\, k, k' \tag{1} \end{align*}

This they call the narrative condition. From it, one is able to define a smallest set of maximal projectors Q_i (which they call a common framework) that obey either Q_i \le P^{k}_{\alpha_k} or Q_i P^{k}_{\alpha_k} = P^{k}_{\alpha_k} Q_i = 0 for all P^{k}_{\alpha_k}. For instance, if the P’s are projectors onto spatial volumes, then the Q’s are just the minimal partition of position space such that the region associated with each Q_i is fully contained in the regions corresponding to some of the P^{k}_{\alpha_k}, and is completely disjoint from the regions corresponding to the others.


For time steps k=1,\ldots,4, the history projectors at a given time project onto subspaces which disjointly span Hilbert space. The narrative condition states that all history projectors commute, which means we can think of them as projecting onto disjoint subsets forming a partition of the range of some variable (e.g. position). The common framework is just the set of smaller projectors that also partitions the range of the variable and which obey Q \le P or QP = PQ = 0 for each P and Q.
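To make the common framework concrete for the position example, here is a tiny sketch (my own toy code, assuming every history projector projects onto an interval of a single position variable; nothing here is from the paper): the Q’s are just the common refinement of the interval partitions used at the different times.

# Toy illustration: when every history projector is onto an interval of one
# position variable, the common framework is the common refinement of the
# interval partitions used at the different time steps.

def common_framework(partitions):
    """Each partition is a sorted list of boundary points; the intervals between
    consecutive boundaries are the ranges of the P projectors at that time step.
    Returns the boundaries of the minimal common refinement: each refined
    interval (a Q) lies inside exactly one P-interval of every time step and is
    disjoint from all the others."""
    return sorted(set(b for p in partitions for b in p))

# Position recorded to 2-unit accuracy at one time and 3-unit accuracy at another:
P1 = [0, 2, 4, 6]   # intervals [0,2), [2,4), [4,6)
P2 = [0, 3, 6]      # intervals [0,3), [3,6)
print(common_framework([P1, P2]))   # [0, 2, 3, 4, 6], i.e. Q's: [0,2), [2,3), [3,4), [4,6)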
[continue reading]

Comments on Hotta’s Quantum Energy Teleportation

[This is a “literature impression”.]

Masahiro Hotta has a series of papers about what he calls “quantum energy teleportation” (QET), modeled after the well-known notion of quantum teleportation (of information). Although it sounds like crazy crackpot stuff, and the papers contain the red-flag term “zero-point energy”, the basic physics of Hotta’s work is sound. But it doesn’t appear to have important consequences for energy transmission.

The idea is to exploit the fact that the ground state of the vacuum in QFT is, in principle, entangled over arbitrary distances. In a toy Alice and Bob model with respective systems A and B, you assume a Hamiltonian for which the ground state is unique and entangled. Then, Alice makes a local measurement on her system A. Neither of the two conditional global states for the joint AB system — conditional on the outcome of the measurement — is an eigenstate of the Hamiltonian (in particular, neither is the ground state), so the average energy of the joint system must increase. The source of this energy is the device Alice used to make the measurement. Now, if Bob were to independently make a measurement of his system, he would find that energy would also necessarily flow from his device into the joint system; this follows from the symmetry of the problem. But if he waits for Alice to transmit to him the outcome of her measurement, it turns out that he can apply a local unitary to his B system, followed by a local measurement, that leads to a net average energy flow into his equipment. The fact that he must wait for the outcome of Alice’s measurement, which travels no faster than the speed of light, is what gives this the flavor of teleportation.… [continue reading]
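In equations, the energy bookkeeping above reads roughly as follows (a schematic paraphrase in my own notation, not Hotta’s). Take the ground-state energy to be zero, \langle g | H | g \rangle = 0. Alice’s measurement with outcome \mu (probability p_\mu) leaves the post-measurement state |\psi_\mu\rangle \neq |g\rangle, so the average injected energy E_A = \sum_\mu p_\mu \langle \psi_\mu | H | \psi_\mu \rangle is strictly positive and is paid by her apparatus. Once Bob learns \mu and applies his local unitary U_B(\mu), the average change in the system’s energy is

\begin{align*} \Delta E = \sum_\mu p_\mu \, \langle \psi_\mu | \, U_B(\mu)^\dagger H \, U_B(\mu) - H \, | \psi_\mu \rangle, \end{align*}

and the content of QET is that the U_B(\mu) can be chosen so that \Delta E < 0, with the difference flowing into Bob’s equipment, whereas (as noted above) a measurement Bob makes without waiting for Alice’s outcome would only inject energy from his own device.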