Undetected photon imaging

Lemos et al. have a relatively recent letter in Nature [G. Lemos, V. Borish, G. Cole, S. Ramelow, R. Lapkiewicz, and A. Zeilinger, "Quantum imaging with undetected photons", Nature 512, 409 (2014), arXiv:1401.4318] where they describe a method of imaging with undetected photons. (An experiment with the same essential quantum features was performed way back in 1991 by Zou et al. [X. Y. Zou, L. J. Wang, and L. Mandel, "Induced coherence and indistinguishability in optical interference", Phys. Rev. Lett. 67, 318 (1991)], but Lemos et al. have emphasized its implications for imaging.) The idea is conceptually related to decoherence detection, and I want to map one onto the other to flesh out the connection. Their figure 1 gives a schematic of the experiment, and is copied below.


Figure 1 from Lemos et al.: ''Schematic of the experiment. Laser light (green) splits at beam splitter BS1 into modes a and b. Beam a pumps nonlinear crystal NL1, where collinear down-conversion may produce a pair of photons of different wavelengths called signal (yellow) and idler (red). After passing through the object O, the idler reflects at dichroic mirror D2 to align with the idler produced in NL2, such that the final emerging idler f does not contain any information about which crystal produced the photon pair. Therefore, signals c and e combined at beam splitter BS2 interfere. Consequently, signal beams g and h reveal idler transmission properties of object O.''

The first two paragraphs of the letter contain all the meat, encrypted and condensed into an opaque nugget of the kind that Nature loves; it stands as a good example of the lamentable way many quantum experimental articles are written.… [continue reading]

Quantum Brownian motion: Definition

In this post I’m going to give a clean definition of idealized quantum Brownian motion and give a few entry points into the literature surrounding its abstract formulation. A follow-up post will give an interpretation to the components in the corresponding dynamical equation, and some discussion of how the model can be generalized to take into account the ways the idealization may break down in the real world.

I needed to learn this background for a paper I am working on, and I was motivated to compile it here because the idiosyncratic results returned by Google searches, and especially this MathOverflow question (which I’ve answered), made it clear that a bird’s eye view is not easy to find. All of the material below is available in the work of other authors, but not logically developed in the way I would prefer.

Preliminaries

Quantum Brownian motion (QBM) is a prototypical and idealized case of a quantum system \mathcal{S}, consisting of a continuous degree of freedom, that is interacting with a large multi-partite environment \mathcal{E}, in general leading to varying degrees of dissipation, dispersion, and decoherence of the system. Intuitively, the distinguishing characteristic of QBM is Markovian dynamics induced by the cumulative effect of an environment with many independent, individually weak, and (crucially) “phase-space local” components. We will define QBM as a particular class of ways that a density matrix may evolve, which may be realized (or approximately realized) by many possible system-environment models. There is a more-or-less precise sense in which QBM is the simplest quantum model capable of reproducing classical Brownian motion in a \hbar \to 0 limit.
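
Concretely (previewing the definition to come; this is just the standard Lindblad form, with the QBM-specific restrictions spelled out below), the dynamics will be of the form

    \[\partial_t \rho = -i[H,\rho] + \sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2}\left\{ L_k^\dagger L_k , \rho \right\} \right),\]

where the Hamiltonian H is at most quadratic in the canonical operators x and p, and each Lindblad operator L_k is a (complex) linear combination of x and p.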

In words to be explained: QBM is a class of possible dynamics for an open, quantum, continuous degree of freedom in which the evolution is specified by a quadratic Hamiltonian and linear Lindblad operators.… [continue reading]

In what sense is the Wigner function a quasiprobability distribution?

For the umpteenth time I have read a paper introducing the Wigner function essentially like this:

The Wigner representation of a quantum state \rho is a real-valued function on phase space. (Actually, they usually use a more confusing definition; see my post on the intuitive definition of the Wigner function.) It is defined (with \hbar=1) as

(1)   \begin{align*} W_\rho(x,p) \equiv \frac{1}{2\pi}\int \! \mathrm{d}\Delta x \, e^{i p \Delta x} \langle x+\Delta x /2 \vert \rho \vert x-\Delta x /2 \rangle. \end{align*}

It’s sort of like a probability distribution because the marginals reproduce the probabilities for position and momentum measurements:

(2)   \begin{align*} P(x) \equiv \langle x \vert \rho \vert x \rangle = \int \! \mathrm{d}p \, W_\rho(x,p) \end{align*}

and

(3)   \begin{align*} P(p) \equiv  \langle p\vert \rho \vert p \rangle = \int \! \mathrm{d}x \, W_\rho(x,p). \end{align*}

But the reason it’s not a real probability distribution is that it can be negative.

The fact that W_\rho(x,p) can be negative is obviously a reason you can’t think about it as a true PDF, but the marginals property is a terribly weak justification for thinking about W_\rho as a “quasi-PDF”. There are all sorts of functions one could write down that would have this same property but wouldn’t encode much information about actual phase space structure, e.g., the Jigner function (“Jess” + “Wigner” = “Jigner”. Ha!)

    \[J_\rho(x,p) \equiv P(x)P(p) = \langle x \vert \rho \vert x \rangle \langle p \vert \rho \vert p \rangle,\]

which tells us nothing whatsoever about how position relates to momentum.
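
To make this concrete, here is a minimal numerical sketch (plain numpy; the correlated Gaussian wavefunction and the grid parameters are arbitrary choices for illustration) checking that both W_\rho and J_\rho reproduce the position and momentum marginals, even though J_\rho discards all correlation between x and p:

```python
import numpy as np

# Grid and state (arbitrary choices, purely for illustration).
N = 256
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]
p = np.linspace(-5, 5, N)
dp = p[1] - p[0]

# A correlated Gaussian pure state psi(x) ~ exp(-(1+i) x^2 / 2); the imaginary
# part of the exponent correlates position and momentum.
psi = np.exp(-(1 + 1j) * x**2 / 2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def shifted(f, m):
    """f shifted by m grid points (i.e. f(x + m*dx)), zero-padded at the edges."""
    out = np.zeros_like(f)
    if m >= 0:
        out[:N - m] = f[m:]
    else:
        out[-m:] = f[:N + m]
    return out

# Wigner function of Eq. (1), rewritten with u = Delta_x / 2:
# W(x,p) = (1/pi) * Integral du e^{2ipu} psi(x+u) psi*(x-u).
W = np.zeros((N, N))
for j, pj in enumerate(p):
    acc = np.zeros(N, dtype=complex)
    for m in range(-N // 2, N // 2):
        u = m * dx
        acc += np.exp(2j * pj * u) * shifted(psi, m) * np.conj(shifted(psi, -m))
    W[:, j] = (acc * dx / np.pi).real

# Position and momentum distributions computed directly from psi.
P_x = np.abs(psi)**2
psi_p = np.array([np.sum(psi * np.exp(-1j * pj * x)) * dx for pj in p]) / np.sqrt(2 * np.pi)
P_p = np.abs(psi_p)**2

# Both marginals of W reproduce P(x) and P(p) up to discretization error ...
print(np.max(np.abs(W.sum(axis=1) * dp - P_x)))
print(np.max(np.abs(W.sum(axis=0) * dx - P_p)))

# ... and so do the marginals of the "Jigner" function J(x,p) = P(x) P(p),
# which nonetheless carries no information about x-p correlations.
J = np.outer(P_x, P_p)
print(np.max(np.abs(J.sum(axis=1) * dp - P_x)))
print(np.max(np.abs(J.sum(axis=0) * dx - P_p)))
```

For a Gaussian state like this one, W happens to be non-negative, but running the same code on a superposition of two displaced Gaussians would show the negative regions that disqualify W_\rho as a true PDF; the Jigner function, by contrast, is non-negative by construction and is blind to any such interference structure.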

Here is the real reason you should think the Wigner function W_\rho is almost, but not quite, a phase-space PDF for a state \rho:

  1. Consider an arbitrary length scale \sigma_x, which determines a corresponding momentum scale \sigma_p = 1/2\sigma_x and a corresponding set of coherent states \{ \vert \alpha \rangle \}. (Not just a set of states, actually, but a Parseval tight frame. They have a characteristic spatial and momentum width \sigma_x and \sigma_p, and are indexed by \alpha = (x,p) as it ranges over phase space.)
  2. If a measurement is performed on \rho with the POVM of coherent states \{ \vert \alpha \rangle \langle \alpha \vert \}, then the probability of obtaining outcome \alpha is given by the Husimi Q function representation of \rho:

    (4)   \begin{align*} Q_\rho(\alpha) = \langle \alpha \vert \rho \vert \alpha \rangle. \end{align*}

  3. If \rho can be constructed as a mixture of the coherent states \{ \vert \alpha \rangle \}, then… (Of course, the P function cannot always be defined, and sometimes it can be defined but only if it takes negative values.)
[continue reading]
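
To make the coherent-state POVM and Eq. (4) concrete, here is a minimal numpy sketch (truncated Fock basis; the example state, the truncation dimension, and the convention \alpha = (x + ip)/\sqrt{2} are arbitrary choices for illustration):

```python
import numpy as np
from math import factorial

D = 40  # Fock-space truncation (arbitrary)

def coherent(alpha, dim=D):
    """|alpha> = e^{-|alpha|^2/2} * sum_n alpha^n / sqrt(n!) |n>, truncated."""
    n = np.arange(dim)
    vec = alpha**n / np.sqrt(np.array([float(factorial(k)) for k in n]))
    return np.exp(-abs(alpha)**2 / 2) * vec

# Example state: the single-photon Fock state |1><1|.
rho = np.zeros((D, D), dtype=complex)
rho[1, 1] = 1.0

# Husimi Q function Q(alpha) = <alpha|rho|alpha> on a phase-space grid,
# using the convention alpha = (x + i p) / sqrt(2).
xs = np.linspace(-3, 3, 61)
ps = np.linspace(-3, 3, 61)
Q = np.zeros((len(xs), len(ps)))
for i, xv in enumerate(xs):
    for j, pv in enumerate(ps):
        a = coherent((xv + 1j * pv) / np.sqrt(2))
        Q[i, j] = np.real(np.conj(a) @ rho @ a)

# Q is non-negative everywhere because it is literally an outcome probability
# for the coherent-state measurement, unlike the Wigner function of |1>,
# which is negative at the origin.
print(Q.min() >= 0, Q.max())
```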

Planck, BICEP2, dust, and science news

The Planck Collaboration has released a paper describing the dust polarization in the CMB for the patch of sky used recently by BICEP2 to announce evidence for primordial gravitational waves. Things look bleak for BICEP2’s claims. See Peter Woit, Sean Carroll, Quanta, Nature, and the New York Times.

In the comments, Peter Woit criticizes the asymmetric way the whole story is likely to be reported:

I think it’s completely accurate at this point to say that BICEP2 has provided zero evidence for primordial gravitational waves, instead is seeing pretty much exactly the expected dust signal.

This may change in the future, based on Planck data, new BICEP2 data, and a joint analysis of the two data sets (although seeing a significant signal this way doesn’t appear very likely), but that’s a separate issue. I don’t think it’s fair to use this possibility to try and evade the implications of the bad science that BICEP2 has done, promoted by press conference, and gotten on the front pages of prominent newspapers and magazines.

This is a perfectly good example of normal science: a group makes claims, they are checked and found to be incorrect. What’s not normal is a massive publicity campaign for an incorrect result, and the open question is what those responsible will now do to inform the public of what has happened. “Science communicators” often are very interested in communicating over-hyped news of a supposed great advance in science, much less interested in explaining that this was a mistake. Some questions about what happens next:

1. Will the New York Times match their front page story “Space Ripples Reveal Big Bang’s Smoking Gun” with a new front page story “Sorry, these guys had it completely wrong?”

[continue reading]

Links for September 2014

  • In discussions about the dangers of increasing the prevalence of antibiotic-resistant bacteria by treating farm animals with antibiotics, it’s a common (and understandable) misconception that antibiotics serve the same purpose in animals as in people: to prevent disease. In fact, antibiotics serve mainly as a way to increase animal growth. We know that this arises from the effect on bacteria (and not, say, from the effect of the antibiotic molecule on the animal’s cells), but it is not because antibiotics are reducing visible illness among animals:

    Studies conducted in germ free animals have shown that the actions of these AGP [antimicrobial growth promoters] substances are mediated through their antibacterial activity. There are four hypotheses to explain their effect (Butaye et al., 2003). These include: 1) antibiotics decrease the toxins produced by the bacteria; 2) nutrients may be protected against bacterial destruction; 3) increase in the absorption of nutrients due to a thinning of the intestinal wall; and 4) reduction in the incidence of sub clinical infections. However, no study has pinpointed the exact mechanism by which the AGP work in the animal intestine. [More.]

  • You’ve probably noticed that your brain will try to reconcile contradictory visual info. Showing different images to each eye will cause someone to essentially see only one or the other at a time (although it will switch back and forth). Various other optical illusions bring out the brain’s attempts to solve visual puzzles. But did you know the brain jointly reconciles visual info with audio info? Behold, the McGurk effect:

  • The much-hyped nanopore technique for DNA sequencing is starting to mature. Eventually this should dramatically lower the cost and difficulty of DNA sequencing in the field, but the technology is still buggy.

[continue reading]

State-independent consistent sets

In May, Losada and Laura wrote a paper [M. Losada and R. Laura, Annals of Physics 344, 263 (2014)] pointing out the equivalence between two conditions on a set of “elementary histories”, i.e. fine-grained histories. (Gell-Mann and Hartle usually use the term “fine-grained set of histories” to refer to a set generated by the finest possible partitioning of histories in the path integral, i.e. a point in space for every point in time, but this is overly specific. As far as the consistent histories framework is concerned, the key mathematical property that defines a fine-grained set is that it’s an exhaustive and exclusive set where each history is constructed by choosing exactly one projector from a fixed orthogonal resolution of the identity at each time.) Let the elementary histories \alpha = (a_1, \dots, a_N) be defined by projective decompositions of the identity P^{(i)}_{a_i}(t_i) at time steps t_i (i=1,\ldots,N), so that

(1)   \begin{align*} P^{(i)}_a &= (P^{(i)}_a)^\dagger \quad \forall i,a \\ P^{(i)}_a P^{(i)}_b &= \delta_{a,b} P^{(i)}_a \quad \forall i,a,b\\ \sum_{a} P^{(i)}_a (t_i) &= I \quad  \forall i \\ C_\alpha &= P^{(N)}_{a_N} (t_N) \cdots P^{(1)}_{a_1} (t_1) \\ I &= \sum_\alpha C_\alpha = \sum_{a_1}\cdots \sum_{a_N} C_\alpha \\ \end{align*}

where C_\alpha are the class operators. Then Losada and Laura showed that the following two conditions are equivalent

  1. The set is consistent for any state: D(\alpha,\beta) = \mathrm{Tr}[C_\alpha \rho C_\beta^\dagger] = 0 \quad \forall \alpha \neq \beta, \forall \rho. (“Medium decoherent” in Gell-Mann and Hartle’s terminology. Note also that Losada and Laura actually work with the obsolete condition of “weak decoherence”, but this turns out to be an unimportant difference. For a summary of these sorts of consistency conditions, see my round-up.)
  2. The Heisenberg-picture projectors at all times commute: [P^{(i)}_{a} (t_i),P^{(j)}_{b} (t_j)]=0 \quad \forall i,j,a,b.
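
To see the theorem in action, here is a minimal numpy sketch (a two-time, single-qubit toy example of my own choosing, not from Losada and Laura) that builds the class operators and evaluates the decoherence functional for a random state:

```python
import numpy as np

# Two-time, single-qubit toy example (arbitrary; for illustration only).
P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1|

def heisenberg(P, U):
    """Heisenberg-picture projector U^dagger P U."""
    return U.conj().T @ P @ U

def class_ops(proj_t1, proj_t2):
    """Class operators C_alpha = P^{(2)}_{a_2}(t_2) P^{(1)}_{a_1}(t_1)."""
    return {(a1, a2): proj_t2[a2] @ proj_t1[a1]
            for a1 in (0, 1) for a2 in (0, 1)}

def max_offdiag_D(C, rho):
    """Largest |D(alpha,beta)| = |Tr[C_alpha rho C_beta^dagger]| over alpha != beta."""
    return max(abs(np.trace(C[a] @ rho @ C[b].conj().T))
               for a in C for b in C if a != b)

# A random density matrix, to probe state-independence.
A = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rho = A @ A.conj().T
rho /= np.trace(rho)

# Case 1: trivial evolution between t_1 and t_2, so the Heisenberg-picture
# projectors commute (condition 2 holds); D vanishes for every rho.
C_comm = class_ops({0: P0, 1: P1}, {0: P0, 1: P1})
print(max_offdiag_D(C_comm, rho))      # ~0

# Case 2: a Hadamard rotation between the two times; the projectors no
# longer commute, and D(alpha, beta) is generically nonzero.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
C_noncomm = class_ops({0: P0, 1: P1}, {0: heisenberg(P0, H), 1: heisenberg(P1, H)})
print(max_offdiag_D(C_noncomm, rho))   # generically nonzero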

However, this is not as general as one would like because assuming the set of histories is elementary is very restrictive. (It excludes branch-dependent sets, sets with inhomogeneous histories, and many more types of sets that we would like to work with.) Luckily, their proof can be extended a bit.

Let’s forget that we have any projectors P^{(i)}_{a} and just consider a consistent set \{ C_\alpha \}.… [continue reading]

How to think about Quantum Mechanics—Part 3: The pointer and Schmidt bases

[Other parts in this series: 1,2,3,4,5,6,7,8.]

A common mistake made by folks newly exposed to the concept of decoherence is to conflate the Schmidt basis with the pointer basis induced by decoherence.

[Refresher on the Schmidt decomposition]
Given any two quantum systems \mathcal{S} and \mathcal{E} and a pure joint state \vert \psi (t) \rangle \in \mathcal{H} = \mathcal{S} \otimes \mathcal{E}, there always exists a Schmidt decomposition of the form

(1)   \begin{align*} \vert \psi (t) \rangle = \sum_k c_k \vert S_k (t) \rangle \vert E_k (t) \rangle \end{align*}

where \vert S_k (t) \rangle and \vert E_k (t) \rangle are local orthonormal Schmidt bases on \mathcal{S} and \mathcal{E}, respectively.

Now, any state in such a joint Hilbert space can be expressed as \vert \psi \rangle = \sum_{i,j} d_{i,j} \vert S_i \rangle \vert E_j \rangle for arbitrary fixed orthonormal bases \vert S_i \rangle and \vert E_j \rangle. What makes the Schmidt decomposition non-trivial is that it has only a single index k rather than two indices i and j. (In particular, this means that the Schmidt decomposition contains at most \mathrm{min}(\mathrm{dim}\,\mathcal{S},\mathrm{dim}\,\mathcal{E}) non-vanishing terms, even if \mathrm{dim}\,\mathcal{E} \gg \mathrm{dim}\,\mathcal{S}.) The price paid is that the Schmidt bases, \vert S_k \rangle and \vert E_k \rangle, depend on the state \vert \psi \rangle.

When the values \vert c_k \vert in the Schmidt decomposition are non-degenerate, the local bases are unique up to a phase. As \vert \psi (t) \rangle evolves in time, this decomposition is defined for each time t. The bases \vert S_k (t) \rangle and \vert E_k (t) \rangle evolve along with it, and can be considered to be a property of the state \vert \psi (t) \rangle. In fact, they correspond to the eigenvectors of the respective reduced density matrices of \mathcal{S} and \mathcal{E}.
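
For concreteness, here is a minimal numpy sketch (random state, arbitrary dimensions) showing that the Schmidt decomposition is just the singular value decomposition of the coefficient matrix of \vert \psi \rangle, and that the Schmidt bases diagonalize the reduced density matrices:

```python
import numpy as np

# Arbitrary dimensions for the system S and a (much larger) environment E.
dS, dE = 3, 20

# Random pure joint state |psi>, stored as its dS x dE coefficient matrix
# psi_{ij} = <S_i|<E_j|psi> in some fixed product basis.
psi = np.random.randn(dS, dE) + 1j * np.random.randn(dS, dE)
psi /= np.linalg.norm(psi)

# The Schmidt decomposition is the singular value decomposition of the
# coefficient matrix: psi_{ij} = sum_k c_k (S_k)_i (E_k)_j.
U, c, Vh = np.linalg.svd(psi, full_matrices=False)
S_basis = U.T    # row k holds the components of the Schmidt vector |S_k>
E_basis = Vh     # row k holds the components of the Schmidt vector |E_k>

# Only min(dS, dE) = 3 Schmidt coefficients, even though dE = 20.
print(len(c))

# The Schmidt bases diagonalize the reduced density matrices, with
# eigenvalues |c_k|^2.
rho_S = psi @ psi.conj().T
rho_E = psi.T @ psi.conj()
print(np.allclose(S_basis.conj() @ rho_S @ S_basis.T, np.diag(c**2)))
print(np.allclose(E_basis.conj() @ rho_E @ E_basis.T, np.diag(c**2)))

# Reconstruct |psi> from the Schmidt form as a consistency check.
psi_rebuilt = sum(c[k] * np.outer(S_basis[k], E_basis[k]) for k in range(len(c)))
print(np.allclose(psi_rebuilt, psi))
```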

In the ideal case of so-called pure decoherence, the environment \mathcal{E} begins in an initial state \vert E_0 \rangle and is coupled to the system \mathcal{S} through a unitary of the form

(2)   \begin{align*} U(t) = \sum_k \vert S_k \rangle \langle S_k \vert \otimes U^{\mathcal{E}}_k(t) \end{align*}

with \langle E_k(t) \vert E_l(t) \rangle \to \delta_{k,l} as t \to \infty, where U^{\mathcal{E}}_k(t) is a conditional unitary on \mathcal{E} and \vert E_k(t) \rangle \equiv U^{\mathcal{E}}_k(t) \vert E_0 \rangle. The elements of the density matrix \rho of the system evolve as \rho_{k,l}(t) = \langle E_l(t) \vert E_k(t) \rangle \rho_{k,l}(0), i.e.… [continue reading]
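
To illustrate pure decoherence numerically, here is a small toy simulation (my own illustrative construction, not taken from the post: a qubit coupled to a single dE-dimensional environment through randomly chosen conditional Hamiltonians):

```python
import numpy as np
from scipy.linalg import expm

np.random.seed(0)

# Toy pure-decoherence model: a qubit with pointer states |0>, |1> coupled to
# a dE-dimensional environment via U(t) = sum_k |S_k><S_k| (x) U_k(t), with
# U_k(t) = exp(-i H_k t) for independently chosen random Hamiltonians H_k.
dE = 200

def random_hermitian(d):
    A = np.random.randn(d, d) + 1j * np.random.randn(d, d)
    return (A + A.conj().T) / np.sqrt(2 * d)   # scaled so the dynamics are O(1)

H = [random_hermitian(dE), random_hermitian(dE)]
E0 = np.zeros(dE, dtype=complex)
E0[0] = 1.0                        # initial environment state |E_0>
c = np.array([1, 1]) / np.sqrt(2)  # system starts in (|0> + |1>)/sqrt(2)

for t in [0.0, 1.0, 5.0, 20.0]:
    # Conditional environment states |E_k(t)> = U_k(t)|E_0>
    Ek = np.array([expm(-1j * Hk * t) @ E0 for Hk in H])
    # Global state |Psi(t)> = sum_k c_k |S_k>|E_k(t)>, stored as a 2 x dE matrix
    Psi = c[:, None] * Ek
    rho_S = Psi @ Psi.conj().T     # reduced density matrix of the system
    overlap = np.vdot(Ek[0], Ek[1])
    # The pointer-basis populations stay at 1/2, while the coherence |rho_01|
    # is suppressed by the overlap |<E_0(t)|E_1(t)>| relative to its initial 1/2.
    print(f"t={t:5.1f}  |<E0|E1>|={abs(overlap):.3f}  "
          f"|rho_01|={abs(rho_S[0, 1]):.3f}  rho_00={rho_S[0, 0].real:.3f}")
```

The pointer-basis populations never change, while the coherence is suppressed by the factor \vert \langle E_0(t) \vert E_1(t) \rangle \vert, which generically becomes small once the conditional environment states have evolved apart.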

Links for August 2014

  • Jester (Adam Falkowski) on physics breakthroughs:

    This year’s discoveries follow the well-known 5-stage Kübler-Ross pattern: 1) announcement, 2) excitement, 3) debunking, 4) confusion, 5) depression. While BICEP is approaching the end of the cycle, the sterile neutrino dark matter signal reported earlier this year is now entering stage 3.

  • The ultimate bounds on possible nuclides are more-or-less known from first principles.
  • UPower Technologies is a nuclear power start-up backed by Y-Combinator.
  • It is not often appreciated that “[s]mallpox eradication saved more than twice the number of people 20th century world peace would have achieved.” Malaria eradication would be much harder, but the current prospects are encouraging. Relatedly, the method for producing live but attenuated viruses is super neat:

    Attenuated vaccines can be made in several different ways. Some of the most common methods involve passing the disease-causing virus through a series of cell cultures or animal embryos (typically chick embryos). Using chick embryos as an example, the virus is grown in different embryos in a series. With each passage, the virus becomes better at replicating in chick cells, but loses its ability to replicate in human cells. A virus targeted for use in a vaccine may be grown through—“passaged” through—upwards of 200 different embryos or cell cultures. Eventually, the attenuated virus will be unable to replicate well (or at all) in human cells, and can be used in a vaccine. All of the methods that involve passing a virus through a non-human host produce a version of the virus that can still be recognized by the human immune system, but cannot replicate well in a human host.

    When the resulting vaccine virus is given to a human, it will be unable to replicate enough to cause illness, but will still provoke an immune response that can protect against future infection.

[continue reading]