A dark matter model for decoherence detection

[Added 2015-1-30: The paper is now in print and has appeared in the popular press.]

One criticism I’ve had to address when proselytizing the indisputable charms of using decoherence detection methods to look at low-mass dark matter (DM) is this: I’ve never produced a concrete model that would be tested. My analysis (arXiv:1212.3061) addressed the possibility of using matter interferometry to rule out a large class of dark matter models characterized by a certain range for the DM mass and the nucleon-scattering cross section. However, I never constructed an explicit model as a representative of this class to demonstrate in detail that it was compatible with all existing observational evidence. This is a large and complicated task, and not something I could accomplish on my own.

I tried hard to find an existing model in the literature that met my requirements, but without luck. So I had to argue (with referees and with others) that this was properly beyond the scope of my work, and that the idea was interesting enough to warrant publication without a model. This ultimately was successful, but it was an uphill battle. Among other things, I pointed out that new experimental concepts can inspire theoretical work, so it is important that they be disseminated.

I’m thrilled to say this paid off in spades. Bateman, McHardy, Merle, Morris, and Ulbricht have posted their new pre-print “On the Existence of Low-Mass Dark Matter and its Direct Detection” (arXiv:1405.5536). Here is the abstract:

Dark Matter (DM) is an elusive form of matter which has been postulated to explain astronomical observations through its gravitational effects on stars and galaxies, gravitational lensing of light around these, and through its imprint on the Cosmic Microwave Background (CMB). This indirect evidence implies that DM accounts for as much as 84.5% of all matter in our Universe, yet it has so far evaded all attempts at direct detection, leaving such confirmation and the consequent discovery of its nature as one of the biggest challenges in modern physics. Here we present a novel form of low-mass DM \chi that would have been missed by all experiments so far. While its large interaction strength might at first seem unlikely, neither constraints from particle physics nor cosmological/astronomical observations are sufficient to rule out this type of DM, and it motivates our proposal for direct detection by optomechanics technology which should soon be within reach, namely, through the precise position measurement of a levitated mesoscopic particle which will be perturbed by elastic collisions with \chi particles. We show that a recently proposed nanoparticle matter-wave interferometer, originally conceived for tests of the quantum superposition principle, is sensitive to these collisions, too.

I am sadly not fit to evaluate the astrophysical aspects of their paper, but I look forward to it withstanding criticism.

These are parts (a) and (b) of Bateman et al.'s Fig. 1. They consider a scalar DM candidate \chi with m_\chi \approx 100\, \mathrm{eV} and an elastic scattering cross-section \sigma \approx 5 \cdot 10^{-27}\, \mathrm{cm}^2. Part (a) corresponds to elastic scattering and part (b) to annihilation to photons.
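For a sense of scale, here is a back-of-envelope estimate of the per-nucleon scattering rate implied by those numbers. The mass and cross-section are the values quoted above; the local halo density (0.3 GeV/cm³) and wind speed (~230 km/s) are standard assumed values, not taken from the paper.

```python
# Back-of-envelope scattering-rate estimate for the Bateman et al. candidate.
# Mass and cross-section are quoted from the paper; the local DM density and
# wind speed are standard halo-model values assumed for illustration.

m_chi_eV = 100.0       # DM mass [eV]
sigma_cm2 = 5e-27      # per-nucleon elastic cross-section [cm^2]
rho_eV_cm3 = 0.3e9     # local DM density, 0.3 GeV/cm^3 expressed in eV/cm^3
v_cm_s = 2.3e7         # DM wind speed, ~230 km/s in cm/s

n = rho_eV_cm3 / m_chi_eV                  # number density [cm^-3]
rate_per_nucleon = n * sigma_cm2 * v_cm_s  # naive single-nucleon rate [s^-1]

print(f"number density ~ {n:.1e} cm^-3")
print(f"scattering rate ~ {rate_per_nucleon:.1e} per nucleon per second")
```

The per-nucleon rate is tiny, which is why the proposals below involve mesoscopic targets with many nucleons; the paper's actual rate calculation is of course more careful than this naive n·σ·v estimate.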

It’s important to note that, in addition to decoherence detection with matter interferometers, they also discuss the possibility of detecting this dark matter candidate using classical methods, to wit, a satellite-based target-mass experiment:

Given the possibility of a measurable effect upon nanometre-sized particles, and the uncertainty about whether particles will penetrate the Earth’s atmosphere, we propose a space-based experiment, as illustrated in FIG. 3. Particle radii in the range 10\, \mathrm{nm}\le r \le 1 \mu\mathrm{m} are expected to show accelerations a \gtrsim 0.1 \, \mu \mathrm{m}/\mathrm{s}^2, with possibly much higher values and a rich size dependent structure. Recently, 140\, \mathrm{nm} particles have been held in vacuum in a 120 kHz harmonic trap provided by a tight laser focus and feedback ‘cooled’ to reduce the uncertainty in both their position (<1 \,\mathrm{nm}) and velocity (500 \,\mu \mathrm{m}/\mathrm{s}) [6]. For a thermal state, the velocity uncertainty is the product of trap frequency and position uncertainty and, in ultra-high vacuum where gas collisions are negligible, one may decrease the trap frequency considerably; for a 10\, \mathrm{kHz} trap frequency, we expect a velocity uncertainty below 50 \,\mu \mathrm{m}/\mathrm{s}. After several minutes of free flight under these conditions, the positional uncertainty will be sub-millimetre while acceleration from collisions with particles will give a millimetre-sized displacement.
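The rough numbers in that quote can be checked directly: the net displacement from the steady DM wind is \frac{1}{2} a t^2, and the velocity spread scales down linearly with trap frequency from the measured 500 μm/s at 120 kHz. The acceleration and frequencies are from the quote; taking "several minutes" as 300 s is my assumption.

```python
# Sanity-check of the quoted space-based proposal's numbers.
# Acceleration, trap frequencies, and the measured velocity spread are from
# the quote; the 300 s flight time stands in for "several minutes".

a = 0.1e-6   # DM-induced acceleration [m/s^2]
t = 300.0    # free-flight time [s]

displacement = 0.5 * a * t**2  # net drift from the steady DM wind [m]

sigma_v_measured = 500e-6                      # measured spread at 120 kHz [m/s]
sigma_v = sigma_v_measured * (10e3 / 120e3)    # rescaled to a 10 kHz trap [m/s]

print(f"DM-wind displacement ~ {displacement*1e3:.1f} mm")  # millimetre-sized
print(f"velocity spread ~ {sigma_v*1e6:.0f} um/s")          # below 50 um/s
```

Both come out consistent with the quote: a millimetre-scale displacement and a velocity spread below 50 μm/s.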

This conflicts with the notion that this dark matter is “classically undetectable” (which is embarrassing to my title). The reason is that, for test masses that are quantum systems localized in phase space, the property of a force being classically undetectable is always defined with respect to a given time scale. Given infinite time, it’s known that arbitrarily small forces can in principle be detected so long as the force doesn’t average to zero.[1] If you have unlimited time, you can just prepare hugely wide wavepackets and let them drift apart extremely slowly. The standard quantum limit was originally discussed in the context of gravitational waves, for which the time-averaged displacement and momentum transfer are zero; therefore there is a natural time scale to use. Dark matter has no such time scale because there is a non-zero wind, with collisions leading to a Brownian walk with a finite drift speed. Now, the long drift times discussed by Bateman et al. (several minutes) necessitate a space mission, but of course so does the matter interferometry proposal (MAQRO) that would be sensitive to this DM model.
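The infinite-time claim can be made concrete with a standard free-mass estimate (my sketch, up to order-unity factors, not from the paper). A constant force F displaces a free mass m by

```latex
x_F(t) = \frac{F t^2}{2m},
```

while the standard-quantum-limit position spread over the same interval grows only as \Delta x_{\mathrm{SQL}}(t) \sim \sqrt{\hbar t / m}. Setting x_F(t) = \Delta x_{\mathrm{SQL}}(t) gives the minimum detectable force

```latex
F_{\min}(t) \sim \frac{2m\, \Delta x_{\mathrm{SQL}}(t)}{t^2} \sim \frac{2\sqrt{\hbar m}}{t^{3/2}} \; \longrightarrow \; 0 \quad (t \to \infty),
```

so any fixed non-averaging force becomes detectable for long enough integration times.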

There is in principle always a regime for which quantum techniques outperform any classical test.[2] But there will also be cases (like this model) where both techniques may be viable and we need to look at the details to determine which is the more promising experiment.
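For this particular candidate, the de Broglie wavelength sets the superposition size above which decoherence detection is at full strength. Here the mass is the paper's quoted 100 eV; the ~230 km/s wind speed and the eV-to-kg conversion are standard assumed values.

```python
# de Broglie wavelength of the 100 eV candidate at the galactic wind speed.
# Mass is from the paper; the wind speed is a standard assumed halo value.

h = 6.626e-34                # Planck constant [J s]
eV_per_c2_in_kg = 1.783e-36  # 1 eV/c^2 expressed in kg

m = 100.0 * eV_per_c2_in_kg  # DM mass [kg]
v = 2.3e5                    # DM wind speed [m/s]

lam = h / (m * v)            # de Broglie wavelength [m]
print(f"de Broglie wavelength ~ {lam*1e6:.0f} um")
```

Roughly speaking, superpositions larger than this ~16 μm scale are in the regime where each scattering event decoheres the superposition about as much as it ever can.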

[Edited 2014-6-24]



  1. Caves PRL 1985, Giovannetti et al. Science 2004.
  2. Fix a maximum time and then take the negligible-momentum-transfer limit, increasing the size of the superpositions to always be larger than the de Broglie wavelength of the particle causing the decoherence.
