My talk on dark matter decoherence detection

I gave a talk recently on Itay’s and my latest results for detecting dark matter through the decoherence it induces in matter interferometers.

Quantum superpositions of matter are unusually sensitive to decoherence by tiny momentum transfers, in a way that can be made precise with a new diffusion standard quantum limit. Upcoming matter interferometers will produce unprecedented spatial superpositions of over a million nucleons. What sorts of dark matter scattering events could be seen in these experiments as anomalous decoherence? We show that it is extremely weak but medium-range interactions between matter and dark matter that would be most visible, such as scattering through a Yukawa potential. We construct toy models for these interactions, discuss existing constraints, and delineate the expected sensitivity of forthcoming experiments. In particular, the OTIMA interferometer under development at the University of Vienna will directly probe many orders of magnitude of parameter space, and the proposed MAQRO satellite experiment would be vastly more sensitive still. This is a multidisciplinary talk that will be accessible to a non-specialized audience.
[Download MP4]

Relevant paper on the diffusion SQL is here: arXiv:1504.03250. The main dark matter paper is still a work in progress.

Footnotes

(↵ returns to text)

  1. If you ever have problems finding the direct download link for videos on PI’s website (they are sometimes missing), this Firefox extension seems to do the trick.
[continue reading]

Standard quantum limit for diffusion

I just posted my newest paper: “Decoherence from classically undetectable sources: A standard quantum limit for diffusion” (arXiv:1504.03250). [Edit: Now published as PRA 92, 010101(R) (2015).] The basic idea is to prove a standard quantum limit (SQL) that shows that some particles can be detected through the anomalous decoherence they induce even though they cannot be detected with any classical experiment. Hopefully, this is more evidence that people should think of big spatial superpositions as sensitive detectors, not just neat curiosities.

Here’s the abstract:

In the pursuit of speculative new particles, forces, and dimensions with vanishingly small influence on normal matter, understanding the ultimate physical limits of experimental sensitivity is essential. Here, I show that quantum decoherence offers a window into otherwise inaccessible realms. There is a standard quantum limit for diffusion that restricts some entanglement-generating phenomena, like soft collisions with new particle species, from having appreciable classical influence on normal matter. Such phenomena are classically undetectable but can be revealed by the anomalous decoherence they induce on non-classical superpositions with long-range coherence in phase space. This gives strong, novel motivation for the construction of matter interferometers and other experimental sources of large superpositions, which recently have seen rapid progress. Decoherence is always at least second order in the coupling strength, so such searches are best suited for soft, but not weak, interactions.

Here’s Figure 2:


Standard quantum limit for forces and momentum diffusion. A test mass is initially placed in a minimal uncertainty wavepacket with a Wigner distribution W(x,p) over phase space (top) that contains the bulk of its mass within a 2\sigma-contour of a Gaussian distribution (dashed black line).
[continue reading]

Decoherence detection and micromechanical resonators

In this post I want to lay out why I am a bit pessimistic about using quantum micromechanical resonators, usually of the optomechanical variety, for decoherence detection. I will need to rely on some simple ideas from 3-4 papers I have “in the pipeline” (read: partially written TeX files) that seek to make precise the sense in which decoherence detection allows us to detect classically undetectable phenomena, and to figure out exactly what sort of phenomena we should apply it to. So this post will sound vague without that supporting material. Hopefully it will still be useful, at least for the author.

The overarching idea is that decoherence detection is only particularly useful when the experimental probe can be placed in a superposition with respect to its natural pointer basis. Recall that the pointer basis is the basis in which the density matrix of the probe is normally restricted to be approximately diagonal by the interaction with the natural environment. Classically detectable phenomena are those which cause transitions within the pointer basis, i.e. driving the system from one pointer state to another. Classically undetectable phenomena are those which cause pure decoherence with respect to this basis, i.e. they add a decoherence factor to off-diagonal terms in this basis, but preserve on-diagonal terms.
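To caricature this distinction with a minimal example (the two-dimensional pointer basis {|L⟩, |R⟩} below is my own toy choice, purely for illustration): pure decoherence damps the off-diagonal coherences while leaving the populations that classical experiments see untouched, whereas a classically detectable phenomenon changes the populations themselves.

```python
import numpy as np

# Pointer basis {|L>, |R>}; start in an equal superposition with full coherence.
rho = 0.5 * np.array([[1, 1],
                      [1, 1]], dtype=complex)

def pure_decoherence(rho, gamma):
    """Classically undetectable phenomenon: multiplies the off-diagonal
    (coherence) terms by a decoherence factor gamma, preserving the
    diagonal populations that classical measurements see."""
    out = rho.copy()
    out[0, 1] *= gamma
    out[1, 0] *= np.conj(gamma)
    return out

def pointer_transition(rho, p):
    """Classically detectable phenomenon: incoherently drives the system
    between pointer states (here a flip |L> <-> |R> with probability p),
    changing the populations themselves."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return (1 - p) * rho + p * (X @ rho @ X)

print(pure_decoherence(rho, 0.1))                       # diagonal untouched
print(pointer_transition(np.diag([1., 0.]) + 0j, 0.3))  # diagonal changes
```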

The thing that makes this tricky to think about is that the pointer basis is overcomplete for most physically interesting situations, in particular for any continuous degree of freedom like the position of a molecule or a silicon nanoparticle. It’s impossible to perfectly localize a particle, and the part of the Hamiltonian that fights you on this, p^2/2m, causes a smearing effect that leads to the overcompleteness.… [continue reading]
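For orientation, here are two standard facts about wavepacket (coherent) states, not results from those in-progress papers: the pointer states \vert x,p \rangle resolve the identity yet are far from orthogonal,

\begin{align*} \int \frac{dx \, dp}{2\pi\hbar} \vert x,p \rangle \langle x,p \vert = I, \qquad \vert \langle x,p \vert x',p' \rangle \vert^2 = \exp\left[ -\frac{(x-x')^2}{4\sigma_x^2} - \frac{(p-p')^2}{4\sigma_p^2} \right], \end{align*}

so nearby pointer states overlap substantially and the expansion of a state in pointer states is not unique.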

Undetected photon imaging

Lemos et al. have a relatively recent letter in Nature (G. Lemos, V. Borish, G. Cole, S. Ramelow, R. Lapkiewicz, and A. Zeilinger, “Quantum imaging with undetected photons”, Nature 512, 409 (2014) [arXiv:1401.4318]) where they describe a method of imaging with undetected photons. (An experiment with the same essential quantum features was performed way back in 1991 by Zou et al. (X. Y. Zou, L. J. Wang, and L. Mandel, “Induced coherence and indistinguishability in optical interference”, Phys. Rev. Lett. 67, 318 (1991) [PDF]), but Lemos et al. have emphasized its implications for imaging.) The idea is conceptually related to decoherence detection, and I want to map one onto the other to flesh out the connection. Their figure 1 gives a schematic of the experiment, and is copied below.


Figure 1 from Lemos et al.: ''Schematic of the experiment. Laser light (green) splits at beam splitter BS1 into modes a and b. Beam a pumps nonlinear crystal NL1, where collinear down-conversion may produce a pair of photons of different wavelengths called signal (yellow) and idler (red). After passing through the object O, the idler reflects at dichroic mirror D2 to align with the idler produced in NL2, such that the final emerging idler f does not contain any information about which crystal produced the photon pair. Therefore, signals c and e combined at beam splitter BS2 interfere. Consequently, signal beams g and h reveal idler transmission properties of object O.''
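To make the logic concrete, here is a toy calculation of my own (low-gain regime, ideal alignment assumed, not taken from the letter): if the object transmits the idler with amplitude T, the two signal paths stay coherent only to the extent the final idlers are indistinguishable, so the fringe visibility at the signal output equals |T|.

```python
import numpy as np

def signal_intensity(phi, T):
    """Normalized intensity at output g versus interferometer phase phi,
    for idler transmission amplitude T through the object O (toy model):
    the which-crystal coherence of the signal is set by the idler overlap."""
    return 0.5 * (1 + np.abs(T) * np.cos(phi + np.angle(T)))

phis = np.linspace(0, 2 * np.pi, 201)
for T in [1.0, 0.5 * np.exp(1j * 0.7), 0.0]:
    I = signal_intensity(phis, T)
    vis = (I.max() - I.min()) / (I.max() + I.min())
    print(f"|T| = {np.abs(T):.1f}: visibility = {vis:.2f}")  # visibility tracks |T|
```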

The first two paragraphs of the letter contain all the meat, encrypted and condensed into an opaque nugget of the kind that Nature loves; it stands as a good example of the lamentable way many quantum experimental articles are written.… [continue reading]

A dark matter model for decoherence detection

[Added 2015-1-30: The paper is now in print and has appeared in the popular press.]

One criticism I’ve had to address when proselytizing the indisputable charms of using decoherence detection methods to look at low-mass dark matter (DM) is this: I’ve never produced a concrete model that would be tested. My analysis (arXiv:1212.3061) addressed the possibility of using matter interferometry to rule out a large class of dark matter models characterized by a certain range for the DM mass and the nucleon-scattering cross section. However, I never constructed an explicit model as a representative of this class to demonstrate in detail that it was compatible with all existing observational evidence. This is a large and complicated task, and not something I could accomplish on my own.

I tried hard to find an existing model in the literature that met my requirements, but without luck. So I had to argue (with referees and with others) that this was properly beyond the scope of my work, and that the idea was interesting enough to warrant publication without a model. This ultimately was successful, but it was an uphill battle. Among other things, I pointed out that new experimental concepts can inspire theoretical work, so it is important that they be disseminated.

I’m thrilled to say this paid off in spades. Bateman, McHardy, Merle, Morris, and Ulbricht have posted their new pre-print “On the Existence of Low-Mass Dark Matter and its Direct Detection” (arXiv:1405.5536). Here is the abstract:

Dark Matter (DM) is an elusive form of matter which has been postulated to explain astronomical observations through its gravitational effects on stars and galaxies, gravitational lensing of light around these, and through its imprint on the Cosmic Microwave Background (CMB).

[continue reading]

Entanglement never at first order

When two initially uncorrelated quantum systems interact through a weak coupling, no entanglement is generated at first order in the coupling constant. This is a useful fact that is very easy to prove, but I haven’t seen it pointed out anywhere, although I assume someone has. I’d love a reference if you have one.

Suppose two systems \mathcal{A} and \mathcal{B} evolve under U = \exp(- i H t) where the Hamiltonian coupling them is of the form

(1)   \begin{align*} H=H_A + H_B + \epsilon H_I, \end{align*}

with H_A = H_A \otimes I_B and H_B = I_A \otimes H_B as usual. We’ll show that when the systems start out uncorrelated, \vert \psi^0 \rangle = \vert \psi_A^0 \rangle \otimes \vert \psi_B^0 \rangle, they remain unentangled (and therefore, since the global state is pure, uncorrelated) to first order in \epsilon. First, note that local unitaries cannot change the entanglement, so without loss of generality we can consider the modified unitary

(2)   \begin{align*} U' = e^{+i H_A t} e^{+i H_B t} e^{-i H t} \end{align*}

which peels off the unimportant local evolution of \mathcal{A} and \mathcal{B}. Then the Baker–Campbell–Hausdorff formula (in its Zassenhaus form) gives

(3)   \begin{align*} U' = e^{+i H_A t} e^{+i H_B t} e^{-i (H_A + H_B) t} e^{-i \epsilon H_I t}  e^{Z_2} e^{Z_3} \cdots \end{align*}

where the first few Z's are given by

(4)   \begin{align*} Z_2 &= \frac{(-i t)^2}{2} [H_A+H_B,\epsilon H_I] \\ Z_3 &= \frac{(-i t)^3}{12} \Big( [H_A+H_B,[H_A+H_B,\epsilon H_I]]-  [\epsilon H_I,[H_A+H_B,\epsilon H_I]] \Big) \\ Z_4 &= \cdots. \end{align*}

The key feature here is that every commutator in each of the Z's contains at least one copy of \epsilon H_I, i.e. all the Z's are at least first order in \epsilon. That allows us to write

(5)   \begin{align*} U' = e^{-i \epsilon H'_I t} \big(1 + O(\epsilon^2) \big) \end{align*}

for some new H'_I that is independent of \epsilon. Then we just note that a general Hamiltonian cannot produce entanglement at first order:

(6)   \begin{align*} \rho_A &= \mathrm{Tr}_B \left[ U' \vert \psi^0 \rangle \langle \psi^0 \vert {U'}^\dagger \right] \\ &=  \vert \psi'_A \rangle \langle \psi'_A \vert + O(\epsilon^2) \end{align*}

where

(7)   \begin{align*} \vert \psi'_A \rangle &= \left( I - i \epsilon t \langle \psi^0_B  \vert H_I' \vert  \psi^0_B \rangle \right) \vert \psi^0_A \rangle . \end{align*}
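As a sanity check, here is a minimal numerical sketch (with random qubit Hamiltonians of my own choosing): if entanglement appeared at first order, the linear entropy 1 - \mathrm{Tr}[\rho_A^2] would scale as \epsilon^2; instead it scales as \epsilon^4, confirming that the entangled component of the state is O(\epsilon^2).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(d):
    """Random Hermitian matrix, playing the role of H_A, H_B, or H_I."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

d = 2
H0 = np.kron(rand_herm(d), np.eye(d)) + np.kron(np.eye(d), rand_herm(d))
HI = rand_herm(d * d)
psi0 = np.kron([1, 0], [1, 0]).astype(complex)  # uncorrelated initial state

for eps in [1e-1, 1e-2, 1e-3]:
    psi = expm(-1j * (H0 + eps * HI)) @ psi0  # evolve for t = 1
    M = psi.reshape(d, d)
    rhoA = M @ M.conj().T                     # partial trace over B
    # Linear entropy 1 - Tr[rho_A^2] vanishes iff rho_A is pure (state unentangled)
    print(f"eps = {eps:.0e}: 1 - Tr[rho_A^2] = {1 - np.trace(rhoA @ rhoA).real:.2e}")
# Each decade decrease in eps drops the linear entropy by ~4 decades: O(eps^4).
```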

This is potentially a very important (negative) result when considering decoherence detection of very weakly coupled particles. If the coupling is so small that terms beyond first order are negligible (e.g. relic neutrinos), then there is no hope of being sensitive to any decoherence.

Of course, non-entangling (unitary) effects may be important. Another way to say this result is: Two weakly coupled systems act only unitarily on each other to first order in the coupling constant.… [continue reading]

Wavepacket spreading produces force sensitivity

I’m still trying to decide if I understand this correctly, but it looks like coherent wavepacket spreading is sufficient to produce states of a test mass that are highly sensitive to weak forces. The Wigner function of a coherent wavepacket is sheared horizontally in phase space (see hand-drawn figure). A weak force that perturbs it with only a small momentum shift will nonetheless produce an orthogonal state of the test mass.


The Gaussian wavepacket of a test mass (left) will be sheared horizontally in phase space by the free-particle evolution governed by H=p^2/2m. A small vertical (i.e. momentum) shift by a weak force can then produce an orthogonal state of the test mass, while it would not for the unsheared state. However, discriminating between the shifted and unshifted wavepackets requires a momentum-like measurement; position measurements would not suffice.
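As a back-of-the-envelope check (all numbers below are my own illustrative choices, not from any paper): for a pure Gaussian wavepacket, the overlap with its momentum-kicked copy is the characteristic function \exp[-\delta p^2 \sigma_x(t)^2/2\hbar^2], and free evolution grows \sigma_x(t), so a fixed weak kick eventually produces a nearly orthogonal state.

```python
import numpy as np

hbar = 1.055e-34   # J s
m = 1e-18          # kg, roughly a large-nanoparticle test mass (illustrative)
sigma0 = 1e-9      # m, initial wavepacket width (illustrative)
dp = 1e-27         # kg m/s, momentum kick from a weak force (illustrative)

def overlap(t):
    """|<psi| exp(i dp x / hbar) |psi>| for a pure Gaussian wavepacket.
    Free-particle shearing grows the spatial variance:
    sigma_x(t)^2 = sigma0^2 + (hbar t / (2 m sigma0))^2."""
    sigma_x2 = sigma0**2 + (hbar * t / (2 * m * sigma0))**2
    return np.exp(-dp**2 * sigma_x2 / (2 * hbar**2))

for t in [0.0, 0.3, 1.0, 3.0, 10.0]:  # seconds
    print(f"t = {t:4.1f} s: overlap = {overlap(t):.3f}")
# The kicked and unkicked states become distinguishable as the packet shears.
```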

Of course, we could simply start with a wavepacket with a very wide spatial width and narrow momentum width. Back when this was being discussed by Caves and others in the ’80s, they recognized that these states would have such sensitivity. However, they pointed out, this couldn’t really be exploited because of the difficulty in making true momentum measurements. Rather, we usually measure momentum indirectly by allowing the normal free-particle (H=p^2/2m) evolution to carry the state to different points in space, and then measuring position. But this doesn’t work under the condition in which we’re interested: when the time between measurements is limited. (The original motivation was detecting gravitational waves, which transmit zero net momentum when averaged over the time interval on which the wave interacts with the test mass. The only way to notice the wave is to measure it in the act, since the momentum transfer can be finite for intermediate times.) … [continue reading]

Decoherence Detection FAQ—Part 1: Dark matter

[Updated 2016-7-2]

I’ve submitted my papers (long and short arXiv versions) on detecting classically undetectable new particles through decoherence. The short version introduces the basic idea and states the main implications for dark matter and gravitons. The long version covers the dark matter case in depth. Abstract for the short version:

Detecting Classically Undetectable Particles through Quantum Decoherence

Some hypothetical particles are considered essentially undetectable because they are far too light and slow-moving to transfer appreciable energy or momentum to the normal matter that composes a detector. I propose instead directly detecting such feeble particles, like sub-MeV dark matter or even gravitons, through their uniquely distinguishable decoherent effects on quantum devices like matter interferometers. More generally, decoherence can reveal phenomena that have arbitrarily little classical influence on normal matter, giving new motivation for the pursuit of macroscopic superpositions.

This is figure 1:

Decoherence detection with a Mach-Zehnder interferometer. System \mathcal{N} is placed in a coherent superposition of spatially displaced wavepackets \vert N_{L} \rangle and \vert N_{R} \rangle that each travel a separate path and then are recombined. In the absence of system \mathcal{D}, the interferometer is tuned so that \mathcal{N} will be detected at the bright port with near unit probability, and at the dim port with near vanishing probability. However, if system \mathcal{D} scatters off \mathcal{N}, these two paths can decohere and \mathcal{N} will be detected at the dim port 50% of the time.
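The caption’s 50% figure follows from a one-line calculation: if the two arms become correlated with environment states whose overlap is γ = ⟨E_L|E_R⟩, the dim-port probability is (1 − Re γ)/2. A minimal sketch:

```python
import numpy as np

def dim_port_probability(gamma):
    """Probability that N exits the dim port of a balanced Mach-Zehnder when
    the which-path environment states overlap as gamma = <E_L|E_R>.
    gamma = 1: full coherence, dim port dark; gamma = 0: full decoherence, 50/50."""
    return 0.5 * (1 - np.real(gamma))

for g in [1.0, 0.5, 0.0]:
    print(f"gamma = {g}: P(dim) = {dim_port_probability(g):.2f}")
```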

Below are some FAQs I have received.

Won’t there always be momentum transfer in any nontrivial scattering?

For any nontrivial scattering of two particles, there must be some momentum transfer. But the momentum transfer can be made arbitrarily small simply by making the mass of the dark particle as tiny as desired (while keeping its velocity fixed).  … [continue reading]
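To put rough numbers on this (the mass, velocity, and size values below are my own illustrative choices, not figures from the papers): a keV-mass particle at galactic speeds transfers a momentum q whose associated length ħ/q is around the 100 nm scale of large matter-interferometer superpositions, while the recoil energy it deposits is hopelessly below any classical detector threshold.

```python
# Illustrative orders of magnitude for a light dark-matter particle elastically
# scattering off a much heavier (~10^6 amu) superposed test mass.
eV = 1.602e-19          # J
c = 3.0e8               # m/s
hbar = 1.055e-34        # J s
amu = 1.66e-27          # kg

m_dm = 1e3 * eV / c**2  # ~keV-mass dark matter particle (assumed)
v = 2.3e5               # m/s, typical galactic virial velocity (assumed)
M = 1e6 * amu           # mass of the superposed test particle (assumed)

q = 2 * m_dm * v        # maximum momentum transfer when m_dm << M
dE = q**2 / (2 * M)     # recoil energy deposited in the test mass

print(f"q      = {q:.2e} kg m/s")
print(f"hbar/q = {hbar / q * 1e9:.0f} nm  (separations beyond this fully decohere)")
print(f"recoil = {dE / eV:.1e} eV (far below any classical detection threshold)")
```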