Ground-state cooling by Delic et al. and the potential for dark matter detection

The implacable Aspelmeyer group in Vienna announced a gnarly achievement in November (recently published):

Cooling of a levitated nanoparticle to the motional quantum ground state
Uroš Delić, Manuel Reisenbauer, Kahan Dare, David Grass, Vladan Vuletić, Nikolai Kiesel, Markus Aspelmeyer
We report quantum ground state cooling of a levitated nanoparticle in a room temperature environment. Using coherent scattering into an optical cavity we cool the center of mass motion of a 143 nm diameter silica particle by more than 7 orders of magnitude to n_x = 0.43 \pm 0.03 phonons along the cavity axis, corresponding to a temperature of 12 μK. We infer a heating rate of \Gamma_x/2\pi = 21\pm 3 kHz, which results in a coherence time of 7.6 μs – or 15 coherent oscillations – while the particle is optically trapped at a pressure of 10^{-6} mbar. The inferred optomechanical coupling rate of g_x/2\pi = 71 kHz places the system well into the regime of strong cooperativity (C \approx 5). We expect that a combination of ultra-high vacuum with free-fall dynamics will allow to further expand the spatio-temporal coherence of such nanoparticles by several orders of magnitude, thereby opening up new opportunities for macroscopic quantum experiments.

Ground-state cooling of nanoparticles in laser traps is a very important milestone on the way to producing large spatial superpositions of matter, and I have a long-standing obsession with the possibility of using such superpositions to probe for the existence of new particles and forces like dark matter. In this post, I put this milestone in a bit of context and then toss up a speculative plot for the estimated dark-matter sensitivity of a follow-up to Delić et al.’s device.

One way to organize the quantum states of a single continuous degree of freedom, like the center-of-mass position of a nanoparticle, is by their sensitivity to displacements in phase space. This can be formalized as the fidelity between a state \rho and its displacement \tilde{\rho } = D(\Delta \alpha) \rho D(\Delta \alpha)^\dagger,

    \[F(\rho, \tilde\rho) = \left|\left|\sqrt{\rho}\sqrt{\tilde{\rho}}\right|\right|_{\mathrm{tr}}^2 = \left(\mathrm{Tr} \left|\sqrt{\rho}\sqrt{\tilde{\rho}}\right|\right)^2,\]

where \Delta \alpha = (\Delta x, \Delta p) has displacement components \Delta x and \Delta p in space and momentum, and where D(\Delta \alpha) = \exp[i\sqrt{2}(\Delta p \hat{x} - \Delta x \hat{p})] is the displacement operator. (The fidelity reduces to the squared overlap |\langle \psi | \tilde\psi\rangle|^2 when the states are pure.) If the displaced state is highly distinguishable from (has low fidelity with) the undisplaced state, then there are no quantum limitations[1] on distinguishing the two potential outcomes. This might mean, e.g., detecting the momentum transfer from a scattering dark-matter particle. States that are hot (large mixedness, smeared over phase space) have low sensitivity to displacements, and sensitivity goes up as the state is cooled, localizing it toward a known location and momentum. However, the sensitivity saturates at a fixed finite value at zero temperature, when the Wigner function has irreducible area O(\hbar) in phase space.
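To make this concrete, here is a small numerical sketch in a truncated Fock basis (the occupation numbers and displacement size are arbitrary illustrative choices of mine) showing that a hotter thermal state retains higher fidelity with its displaced copy, i.e., is less sensitive to the displacement:

```python
import numpy as np
from scipy.linalg import expm

N = 60  # Fock-space truncation (ample for the small amplitudes used here)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator

def thermal_state(nbar):
    """Thermal (Gibbs) state with mean phonon number nbar."""
    p = (nbar / (1.0 + nbar)) ** np.arange(N) / (1.0 + nbar)
    return np.diag(p / p.sum())

def displaced(rho, alpha):
    """D(alpha) rho D(alpha)^dagger with D(alpha) = exp(alpha a^dag - alpha* a)."""
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)
    return D @ rho @ D.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr|sqrt(rho) sqrt(sigma)|)^2, via eigendecompositions."""
    w, V = np.linalg.eigh(rho)
    sr = (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T  # sqrt(rho)
    lam = np.linalg.eigvalsh(sr @ sigma @ sr)
    return np.sum(np.sqrt(np.clip(lam, 0, None))) ** 2

alpha = 0.5  # arbitrary small phase-space displacement
F_cold = fidelity(thermal_state(0.05), displaced(thermal_state(0.05), alpha))
F_hot = fidelity(thermal_state(2.0), displaced(thermal_state(2.0), alpha))
# Closed form for displaced thermal states: F = exp(-|alpha|^2 / (2 nbar + 1)),
# so the hot state (F_hot ~ 0.95) is less sensitive than the cold one (F_cold ~ 0.80).
```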

To increase sensitivity beyond this limit (the standard quantum limit, SQL), we need to move to non-classical states. One possibility is squeezing, producing increased sensitivity in one direction (e.g., position) at the expense of decreased sensitivity in the other (e.g., momentum). Another class of possibilities are “cat states”, i.e., a coherent superposition of two states which are individually roughly classical (localized in phase space) but are distant from each other in phase space. Squeezing or superposing states lets one keep increasing the displacement sensitivity as far as one’s equipment can manage. In a restricted sense, squeezed and superposed states have a “negative effective temperature” with regards to displacement sensitivity. Ground state cooling is a crucial step on the road from a hot messy state to an exquisitely sensitive quantum superposition. Here’s a cartoon I’ve posted previously:

Simple representation of the Wigner function for some quantum states of a continuous degree of freedom. Green (purple) represents areas where the Wigner function is significantly positive (negative). The black dotted circle represents the minimal uncertainty wavepacket associated with the ground state, with phase space area O(\hbar). The circular states are Gibbs states for some well-defined entropy, and with decreasing size for lower temperature. Other states are "negative effective temperature", being more sensitive to displacements than the zero-temperature ground state. The squashed ellipse is a squeezed state, while the states with two (four) green circles represent two-way (four-way) superpositions (i.e., cat states). Superpositions over larger spatial distances are associated with higher-frequency momentum-space structure in the patch of the Wigner function between the two components, making them more sensitive to small momentum transfers.
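The enhanced displacement sensitivity of cat states can also be checked numerically. Below is a sketch (again in a truncated Fock basis, with arbitrary illustrative amplitudes of my choosing) comparing how a small momentum kick degrades the fidelity of a ground state versus a two-way cat state:

```python
import numpy as np
from scipy.linalg import expm

N = 80  # Fock-space truncation (ample for the amplitudes used here)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
adag = a.conj().T

def coherent(alpha):
    """Coherent state |alpha> = D(alpha)|0>."""
    return expm(alpha * adag - np.conj(alpha) * a)[:, 0]

# Two-way cat state (|alpha> + |-alpha>)/norm, with an arbitrary amplitude
alpha = 2.0
cat = coherent(alpha) + coherent(-alpha)
cat = cat / np.linalg.norm(cat)

ground = np.zeros(N)
ground[0] = 1.0

# Small momentum kick: phase-space displacement by i*delta
delta = 0.2
D = expm(1j * delta * adag - np.conj(1j * delta) * a)

F_ground = abs(ground.conj() @ D @ ground) ** 2  # = exp(-delta^2) ~ 0.96
F_cat = abs(cat.conj() @ D @ cat) ** 2  # ~ exp(-delta^2) cos^2(2 alpha delta) ~ 0.47
```

The cat state's fidelity falls off roughly as 1 - (4\alpha^2 + 1)\delta^2 for small kicks, versus 1 - \delta^2 for the ground state: the high-frequency interference fringes in its Wigner function are what buy the extra sensitivity.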

Joyously, Delić et al. not only report cooling a nanoparticle to its ground state, they also ambitiously claim that producing and verifying a spatial superposition of the nanoparticle over length scales similar to its radius may be achieved with some relatively straightforward modifications.[2] First, here are parameters from the (super-impressive) completed experiment:

  • Nanoparticle radius: 71 nm
  • Nanoparticle mass: 2\cdot 10^{9} amu.
  • Frequency of particle motion in trap: 305 kHz
  • Spatial width of the ground-state wavefunction in trap: 3.1 pm.
  • Residual phonons: 0.43 \pm 0.03 (effective temp.: 12\ \mu\mathrm{K})
  • Coherence time: 7.6\ \mu s (\sim \! 15 oscillations)
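As a sanity check, these numbers can be recomputed from one another (a rough sketch using standard constants; the small mismatch with the quoted 3.1 pm presumably comes from rounding in the quoted mass and frequency):

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
kB = 1.380649e-23        # J/K
amu = 1.66053906660e-27  # kg

m = 2e9 * amu              # quoted nanoparticle mass
omega = 2 * np.pi * 305e3  # quoted trap frequency along the cavity axis
nbar = 0.43                # quoted residual phonon occupation

# Ground-state (zero-point) position spread: x_zpf = sqrt(hbar / (2 m omega))
x_zpf = np.sqrt(hbar / (2 * m * omega))  # ~2.9 pm, vs the quoted 3.1 pm

# Effective temperature from the Bose-Einstein occupation nbar = 1/(exp(hbar w / kB T) - 1)
T_eff = hbar * omega / (kB * np.log(1 + 1 / nbar))  # ~12 microkelvin, as quoted

print(f"x_zpf = {x_zpf * 1e12:.1f} pm, T_eff = {T_eff * 1e6:.1f} uK")
```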

Now their delicious speculation:

We expect that a combination of ultra-high vacuum with free-fall dynamics will allow to further expand the spatio-temporal coherence of such nanoparticles by several orders of magnitude, thereby opening up new opportunities for macroscopic quantum experiments….What conditions are required to achieve an expansion of the wavepacket until it reaches the size of the nanosphere itself?…Given the expansion of the undisturbed wavepacket…we require an expansion time of …12 ms, demanding a decoherence rate below 84 Hz. This is achievable by a reduction of the pressure by at least a factor of 5\times 10^4 to below 2 \times 10^{-11} mbar. However, at these pressures blackbody radiation of the internally hot particle becomes relevant. To further reduce decoherence to the desired level (below the gas scattering contribution) requires cryogenic temperatures (below 130K) for both the internal particle temperature and the environment. This could be achieved either by combining a cryogenic (ultra-high) vacuum environment with laser refrigeration of the nanoparticle or with low-absorption materials.
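Their quoted 12 ms expansion time is easy to reproduce with a back-of-envelope estimate (a sketch assuming ballistic spreading of a minimum-uncertainty wavepacket after the trap is shut off; the exact figure depends on their precise parameters):

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
amu = 1.66053906660e-27  # kg

m = 2e9 * amu              # quoted nanoparticle mass
omega = 2 * np.pi * 305e3  # quoted trap frequency
R = 71e-9                  # nanoparticle radius = target wavepacket size

x_zpf = np.sqrt(hbar / (2 * m * omega))  # initial wavepacket width, ~2.9 pm
# The zero-point momentum spread p_zpf = hbar / (2 x_zpf) makes the free
# wavepacket spread ballistically, sigma(t) ~ (p_zpf / m) t for t >> 1/omega.
# Time to grow from ~3 pm to the sphere radius R:
t_expand = 2 * m * x_zpf * R / hbar
print(f"{t_expand * 1e3:.0f} ms")  # ~13 ms, consistent with the quoted 12 ms
```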

Assuming they can do this, we are looking at a spatial superposition with something in the neighborhood of these properties:

  • Nanoparticle radius: 71 nm
  • Nanoparticle mass: 2\cdot 10^{9} amu.
  • Spatial extent of superposition: 100 nm.
  • Lifetime of superposition: 10 ms.

This would be a truly mammoth amount of matter to superpose, beating the current world record — also in Vienna! — by some five orders of magnitude.

It’s known that, in a way that can be made precise, big superpositions are sensitive to very small momentum transfers that are otherwise undetectable. In our PRD, Itay Yavin and I looked at some simple models of dark matter to see if any would be identifiable by recently proposed experiments pushing the bounds of superposition size. Two of the proposed experiments would be highly sensitive to a range of parameter space, but would require many years of technical advances to achieve. (One of them was to operate in space, at a cost of hundreds of millions of euros.) The remaining, nearer-term experiments could not be sensitive to dark matter except under quite optimistic assumptions in a narrow region of parameter space. In particular, the improved limits on new light scalar mediators from estimated plasma mixing effects in stellar cores by Hardy and Lasenby probably rule out all models that these more tractable experiments might have been sensitive to.

The hypothetical interaction between the dark matter and matter we considered looks like this:

Dark matter \psi of mass m_{\mathrm{DM}} scatters off a nucleon N through the exchange of a mediator \phi of mass m_{\mathrm{med}}. The coupling of the mediator to normal matter and dark matter is \alpha_{\mathrm{M}} \ll 10^{-20} and \alpha_{\mathrm{DM}} \sim 1, respectively.

Emboldened by Delić et al., let’s rashly modify one of the sensitivity plots from our paper to get a sense for what we could do with the huge superpositions they suggest are achievable. The solid green curve in the figure below delineates the fraction of the allowed parameter space where dark matter would induce detectable decoherence in such an experiment.

Allowed parameter space and potential sensitivity of superposition experiments for the dark-matter model and methods discussed in ArXiv:1609.04145. Grey regions are excluded by 5th-force experiments (diagonal boundary) and analysis of stellar cooling (horizontal boundary). The three colored lines bound regions of sensitivity for three experiments: The superconducting nanosphere "skatepark" of Pino et al. (red), the MAQRO satellite proposal of Kaltenbaek and collaborators (blue), and the speculative superposition extending the recent results of Delić et al. discussed here (green). Solid (dashed) lines denote the reach of decoherence (coherent phase shift). See our paper for more details.

Pretty rad. Producing superpositions of the kind suggested by Delić et al. is likely the most tractable path to begin probing dark matter through decoherence. Of course, there are many caveats:

  • The modifications to the plot were very rough-and-ready, so I might have made a mistake.
  • This is an ad-hoc theory of dark matter, involving at least two new massive species contributing different amounts (~10% and ~90%) to the observed Milky Way density of 0.4 \mathrm{GeV}/\mathrm{cm}^3, plus a new light scalar mediator to couple them to normal matter. The model was constructed specifically so that it would cause large amounts of decoherence while evading constraints from previous observations.
  • The section of parameter space probed is quite small (for now). You can see from the plot above that it’s roughly only an order of magnitude in the coupling constant \alpha_M and mediator mass m_{\mathrm{med}}. Based on eyeballing the other plots in my paper with Itay, it’s probably only an order of magnitude in the dark matter mass m_{\mathrm{DM}} too.
  • The above plot assumes an increase in sensitivity by about a factor of 160 (same as the MAQRO experiment in blue) from integrating over O(10^4) shots (compared to running the experiment only once). This could be done in a few hours but, assuming no additional heroic measures are taken, such a technique is only effective down to the noise floor imposed by the “sidereal decoherence background” — the fraction of the uncontrolled decoherence background from conventional sources that has a period of one sidereal day.[3] If that fraction is, say, 1 in 20 rather than 1 in 160, the sensitivity will be correspondingly reduced. However, since I know of no reason for uncontrolled decoherence from conventional sources to track the sidereal day, I think this number (or something even more aggressive) is reasonable.[4]
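(For intuition on the timescale involved: the solar and sidereal modulation frequencies differ by about one cycle per year, so cleanly resolving them requires data spanning roughly that beat period. A quick check:

```python
# Beat period between solar-day and sidereal-day modulation
solar_day = 86400.0        # s
sidereal_day = 86164.0905  # s (23 h 56 min 4 s)
beat_period = 1.0 / (1.0 / sidereal_day - 1.0 / solar_day)
print(beat_period / 86400)  # ~365 days of data to resolve one full beat
```

This is why distinguishing the two requires measurements at different times of year.)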

The primary reason to be optimistic about future experiments is the strong R^6 scaling of the sensitivity with nanoparticle radius R due to the coherent scattering enhancement.[5] In contrast, the primary sources of decoherence (collisions with ambient gas molecules and emission of blackbody radiation) scale much more slowly with the radius (\sim\! R^2 and \sim\! R^3, respectively).
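These scaling exponents imply a rapidly improving signal-to-background ratio as particles get bigger; a toy sketch (just the power laws from the text, nothing model-specific):

```python
def relative_reach(r_ratio, background="gas"):
    """Toy scaling of signal-to-background with nanoparticle radius.
    Dark-matter decoherence rate ~ R^6 (coherent enhancement: N^2 with N ~ R^3);
    gas-collision decoherence ~ R^2; blackbody decoherence ~ R^3."""
    bg_exponent = {"gas": 2, "blackbody": 3}[background]
    return r_ratio ** 6 / r_ratio ** bg_exponent

# Doubling the radius improves signal-to-background 16x (gas-limited)
# or 8x (blackbody-limited):
print(relative_reach(2.0), relative_reach(2.0, "blackbody"))
```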

[I thank Robert Lasenby for discussion.]

Footnotes


  1. Added: Here, “quantum limitations” is just shorthand for the fundamental limits on our ability to distinguish two quantum states which are each individually localized around distinct points in phase space, but are close enough to have substantial overlap/fidelity. This is to be contrasted with the analogous classical case where points in phase space separated by arbitrarily small distances are perfectly distinguishable given sufficiently accurate measuring equipment.
  2. Other mechanical modes have been put in their motional ground state before, but these tend to be very difficult to extend to superpositions over large distances. Relative to these, laser-trapped nanoparticles tend to have a simpler path: shut off the laser, allow the nanoparticle wavepacket to expand under free-fall conditions, hit it with something like a (higher-frequency, beam-shaped) laser to simulate a double-slit, wait some more, and then observe it.
  3. Here, the fraction depends on how long you’re willing to wait to distinguish small frequency differences. You need to take measurements at different times of year to clearly distinguish between decoherence fluctuating with the solar day (24 hours) vs. the sidereal day (23 hours, 56 min, 4 sec). “Heroic measures” would be things like operating the device at multiple latitudes; see our paper for details.
  4. Once a sidereal decoherence signal is observed, there are several possible methods for identifying whether it has galactic origins, which we discuss in the paper.
  5. For much of the parameter space we are considering, the wavelength associated with the momentum transfer during the matter-dark-matter scattering event is much longer than the size of the nanoparticle, so the dark matter can’t resolve the different nucleons that compose it. The reflected dark-matter waves from each nucleon add together coherently, leading to a cross section that scales quadratically with the number of nucleons in the superposed nanoparticle target.
