Carney, Müller, and Taylor have a tantalizing paper on how the quantum nature of gravity might be confirmed even though we are quite far from being able to directly create and measure superpositions of gravitationally appreciable amounts of matter (hereafter: “massive superpositions”), and of course very far from being able to probe the Planck scale where quantum gravity effects dominate. More precisely, the idea is to demonstrate (modulo some assumptions) that the gravitational field can be used to transmit quantum information from one system to another in the sense that the effective quantum channel is not entanglement breaking.
Although I’m not sure they would phrase it this way, the key idea for me was that merely protecting massive superpositions from decoherence is actually not that hard; sufficient isolation can be achieved in lots of systems. Rather, much like quantum computing, the challenge is to achieve this level of protection while simultaneously having sufficient control to create and measure superpositions.
Carney et al. observe that you do not need to be able to implement a Hadamard-like gate (i.e., a gate that takes a state in the preferred quasi-classical basis^{a } to superpositions thereof) on the massive system in order to demonstrate that it’s storing quantum information. You just need to be able to implement a controlled unitary on it, in any basis, that is controlled by a second (smaller) quantum system that you do have more complete control over. More specifically, they suggest starting with the control system in a superposition of two nearby “gravitational eigenstates” $|0\rangle$ and $|1\rangle$, allowing this state to become entangled with an initial state $|\psi_i\rangle$ of a massive oscillator (decohering both the oscillator and the control system), and then witnessing recoherence (revival) of the control system as the oscillator disentangles into a final state $|\psi_f\rangle$. (By “gravitational eigenstates”, I just mean states of the oscillator that, to a good approximation, source a quasiclassical state of the gravitational field; in this case, it’s something like a wavepacket that’s well localized in space, rather than being in a superposition of widely separated positions that would have distinctly different corresponding gravitational fields.) For this, all that needs to be achieved is evolution of the form
$$(|0\rangle + |1\rangle)\,|\psi_i\rangle \;\to\; |0\rangle|\psi_0(t)\rangle + |1\rangle|\psi_1(t)\rangle \;\to\; (|0\rangle + |1\rangle)\,|\psi_f\rangle \qquad (1)$$
where $\langle \psi_1(t)|\psi_0(t)\rangle \approx 0$ at intermediate times (or at least $|\langle \psi_1(t)|\psi_0(t)\rangle| < 1$ for partial decoherence), with normalization factors suppressed. Importantly, at no time does the massive oscillator need to be brought into a coherent superposition like $|\psi_0\rangle + |\psi_1\rangle$ of gravitational eigenstates. Furthermore, it doesn’t even matter whether $|\psi_i\rangle$ and $|\psi_f\rangle$ are the same. If you can implement this evolution and witness the revival, and you can convince yourself that the control system couldn’t have been entangling with anything else, then you have shown that the gravitational field is transmitting quantum information.
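To make Eq. (1) concrete, here is a toy numerical sketch (mine, not the authors’ model, and with arbitrary illustrative parameters): treat the controlled unitary as a qubit-state-dependent displacement of the oscillator’s trap equilibrium, with the oscillator starting in a coherent state. The two conditional paths separate and re-merge once per period, so the overlap that sets the control qubit’s coherence dips and then revives.

```python
import numpy as np

w = 1.0              # trap frequency (arbitrary units)
beta = 1.3 + 0.4j    # initial coherent state of the oscillator
k0, k1 = 0.0, 1.0    # qubit-state-dependent shifts of the trap equilibrium

def amp(b, k, t):
    """Coherent amplitude at time t for harmonic motion about equilibrium k."""
    return k + (b - k) * np.exp(-1j * w * t)

def visibility(t):
    """|<psi_1(t)|psi_0(t)>|, which sets the control qubit's fringe contrast."""
    a0, a1 = amp(beta, k0, t), amp(beta, k1, t)
    return abs(np.exp(-0.5*abs(a0)**2 - 0.5*abs(a1)**2 + np.conj(a1)*a0))

print(visibility(np.pi / w))      # mid-period: paths distinguishable, ~0.135
print(visibility(2 * np.pi / w))  # one full period: paths re-merge, revival to 1
```

The revival at one full period is exact here precisely because the toy evolution is perfectly harmonic.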
The general idea of leveraging quantum control of a small system to gain partial quantum control of a large one is not itself new, but the authors go on to show that
- this can work even if the massive oscillator is quite hot (mixed) so long as it’s well isolated during the key phase of the experiment and it is confined to a trap with extremely low anharmonicity;
- the size of the effect can be made linear (rather than quadratic) in the weak gravitational coupling, at least if an initial entangling operation can be performed with a stronger-than-gravity coupling; and
- the necessary experimental parameters, using laser-trapped atoms for the control system, are sorta within reach.
The first part is unintuitive, but you can basically read it off from Eq. (1): the decoherence and recoherence can still happen even if the massive oscillator starts and ends in mixtures of different states $|\psi_i\rangle$ and $|\psi_f\rangle$, just as long as the disentangling happens at the same moment in time (to very high accuracy) for all members of the ensemble. (That’s where the strong harmonicity assumption comes in.) Furthermore, the initial and final ensembles don’t have to be the same. In particular, the contraction of the state of the oscillator by, say, one quantum over the course of a single period (e.g., from mean occupation $\bar{n}$ to $\bar{n}-1$) doesn’t prevent you from having a pretty clean revival. (The visibility will only go down insofar as the contraction is so strong that the oscillator states are piling up on top of each other near the ground state, preventing full disentanglement.)
Instead, the only thing you’re really worrying about is isolation, i.e., the extent to which you can prevent the two paths of the oscillator (conditional on the control system being in $|0\rangle$ or $|1\rangle$, respectively) from getting decohered by the larger environment.
Concerns
Here are some concerns that I haven’t fleshed out yet:
- Presumably to claim that we’ve demonstrated that the gravitational field is quantum mechanical, we need an alternate theory to test it against. However, it’s famously difficult to write down a non-hideous self-consistent semiclassical theory of gravity (e.g., where the field is sourced by the quantum expectation value of the mass distribution). Relatedly: how exactly is proving a theorem about the form of a quantum channel, acting on the joint atom and oscillator systems that are both treated as quantum, supposed to prove a point about gravity being fundamentally quantum? In other words, I think the authors need to say more about why separable channels, and only separable channels, are ones where we can think of gravity as semiclassical.^{b } Now, you can still justify the experiment along the lines of “Given our crappy ability to imagine radically different alternative theories of gravity, we should confirm as many of the striking and qualitatively distinct features of our current best theory as possible”. (In this case, the feature is “gravity can transmit quantum info”.) But it does dampen the enthusiasm a bit.
- The key fundamental property that makes this all work is the fact that the oscillator is perfectly harmonic. The authors consider damping (finite $Q$ factor), but this is not the same as having non-zero cubic or quartic terms in the oscillator potential, or having an oscillator frequency that varies slightly in time. In particular, in their model, everything gets better for larger temperatures of the oscillator, so they need to explain what the limiting factor is on just heating the oscillator up arbitrarily to get a better signal.^{c } They do promise that “…a detailed study of such systematic effects will be the subject of a separate publication…” so I guess we will have to wait.
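To illustrate the anharmonicity worry (my toy model again, not the authors’ promised systematic study): give the toy oscillator a Kerr-like amplitude-dependent frequency, so different members of a hot ensemble revive at slightly different times. The revival then degrades, and in this model the degradation worsens with temperature, which is one candidate for the limiting factor on heating.

```python
import numpy as np

rng = np.random.default_rng(1)
w0, k0, k1 = 1.0, 0.0, 1.0   # nominal frequency; qubit-dependent equilibrium shifts
nbar = 10.0                  # hot oscillator
betas = (rng.normal(size=4000) + 1j*rng.normal(size=4000)) * np.sqrt(nbar/2)

def revival_coherence(eps):
    # Kerr-like anharmonicity: frequency depends on excitation |beta|^2,
    # so hotter ensemble members drift further off the nominal revival time.
    w = w0 * (1 + eps * abs(betas)**2)
    t = 2 * np.pi / w0                    # nominal revival time
    a0 = k0 + (betas - k0) * np.exp(-1j * w * t)
    a1 = k1 + (betas - k1) * np.exp(-1j * w * t)
    ov = np.exp(-0.5*abs(a0)**2 - 0.5*abs(a1)**2 + np.conj(a1)*a0)
    return abs(np.mean(ov))

print(revival_coherence(0.0))   # perfectly harmonic: full revival
print(revival_coherence(1e-2))  # slight anharmonicity: revival badly degraded
```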
Decoherence detection?
Naturally, I was very interested to know whether I could shoe-horn this idea into my hobby horse: decoherence detection. Unfortunately, it looks on first blush like the ideas can’t be combined. Here’s why.
The standard QBM parameters used by Carney et al. for the open-system dynamics of the oscillator are the mean excitation number $\bar{n}$ (were the oscillator allowed to thermalize to the bath) and the dissipation coefficient $\gamma$. My preferred parameters are the decoherence-and-diffusion matrix $D$ and $\gamma$ (described in detail here), and in the high-temperature regime they are related by $D \propto \gamma\bar{n}$ (where my parameters are more general in the sense that they allow for the decoherence-and-diffusion matrix to be not proportional to the identity $I$).
Anomalous pure^{d } decoherence (e.g., from collisional decoherence like dark matter (DM), or from objective collapse models like Diosi-Penrose) is the case of the simultaneous limits $\gamma \to 0$ and $\bar{n} \to \infty$ while holding the product $\gamma\bar{n}$ (and hence $D$) constant. This is the sense in which idealized collisional decoherence looks like an infinite-temperature bath, and it’s a natural model to consider for DM because 1 MeV virialized DM is at ~6000 Kelvin. (Once the DM mass is below 10 keV, then the infinite-temp approximation breaks down. Also, for usual collisional decoherence, $D$ is not actually proportional to the identity, but I don’t think it matters much for what I’m going to say…) To get the complete reduced dynamics for the oscillator, you would basically just add this pure decoherence to the other conventional sources of noise (which generally are dissipative, $\gamma > 0$).
This means that when you have an oscillator with conventional sources of noise and you add anomalous decoherence, you expect to raise the equilibrium temperature of the oscillator, and hence raise the thermalized occupation number $\bar{n}$, but you do not change the dissipation $\gamma$. Generally the experimentally measurable quantities are $\bar{n}$ and $\gamma$, and the bare diffusion matrix for conventional sources alone is inaccessible.
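In these (my) conventions, and under the assumed high-temperature relation $D \approx \gamma\bar{n}$, the bookkeeping is simple enough to spell out: anomalous decoherence contributes diffusion without dissipation, so the equilibrium occupation rises while the measured dissipation stays fixed. All numbers below are arbitrary illustrations.

```python
# Toy bookkeeping under the assumed high-temperature relation D ~ gamma * nbar.
gamma_conv = 1e-2   # conventional dissipation rate (arbitrary units)
nbar_conv = 5.0     # occupation the conventional bath alone would thermalize to
D_conv = gamma_conv * nbar_conv   # conventional diffusion strength

# Anomalous pure decoherence: the gamma -> 0, nbar -> infinity limit at fixed
# diffusion, so it contributes to D but adds no dissipation.
D_anom = 3e-2

# Adding the anomalous diffusion raises the equilibrium occupation...
nbar_total = (D_conv + D_anom) / gamma_conv
# ...but leaves the measured dissipation unchanged.
gamma_total = gamma_conv

print(nbar_total)    # hotter equilibrium (8.0 in these units)
print(gamma_total)   # unchanged
```

So measuring $\bar{n}$ and $\gamma$ alone cannot tell you which portion of the diffusion is conventional and which is anomalous, which is exactly the inaccessibility claimed above.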
Unfortunately, the protocol of Carney et al. doesn’t really change this. All it can detect is the total strength of $D$. You could try to distinguish anomalous decoherence from conventional sources during the protocol by using the various tricks I’ve talked about (shielding the experiment from DM, looking for sidereal variations, etc.), but it would be a hell of a lot easier to just use those tricks while simply measuring the equilibrium temperature of the oscillator; no quantum mechanics required.^{e }
This also fits with my interpretation of superpositions as “negative temperature detectors” (see first figure in this blog post). Superpositions are a useful way to get increased sensitivity when you’ve already maxed out the amount of sensitivity you can get from cooling your target (because you’ve hit the ground state). But the whole point of the Carney et al. protocol is that it doesn’t care what the temperature of the massive oscillator is.
A bit disappointing, but I will keep thinking about variations on this…
Footnotes
(↵ returns to text)
- This is analogous to the logical basis in quantum computing. For massive particles this is an overcomplete basis of wavepackets, so mathematically it’s really a frame.↵
- Something along these lines, at least in the non-relativistic limit, appears to be argued in this paper by one of the authors (Taylor), but I haven’t read it or thought deeply about the issue.↵
- At one point the authors do impose an upper bound on the oscillator temperature, but I think that’s just the region where their approximation is accurate, not where increased heat stops helping.↵
- Here, the “pure” in “pure decoherence” is a non-widely-used term to mean that the system is being decohered in a way that has the minimal possible impact on the quasiclassical dynamics. If it’s being decohered in an orthogonal basis, it means that only the off-diagonal terms are being affected and the on-diagonal terms are unchanged by the source of the decoherence. In this particular case, where the oscillator is being decohered in the overcomplete basis of wavepackets, there is a corresponding quasiclassical effect of increased diffusion, which cannot be avoided due to the uncertainty principle, but still there is no “unnecessary” change to the quasiclassical dynamics, i.e., no dissipation from the dark matter.↵
- I think this isn’t-useful-for-decoherence-detection conclusion still holds even if the DM is inducing measurable dissipation. Again, you would just have an easier time trying to measure that dissipation classically.↵