(This post is vague, and sheer speculation.)
Following a great conversation with Miles Stoudenmire here at PI, I went back and read a paper I forgot about: “Entanglement and the foundations of statistical mechanics” by Popescu et al. [S. Popescu, A. Short, and A. Winter, Nature Physics 2, 754–758 (2006); free PDF available]. This is one of those papers that has a great simple idea, where you’re not sure if it’s profound or trivial, and whether it’s well known or novel. (They cite references 3-6 as “Significant results along similar lines”; let me know if you’ve read any of these and think they’re more useful.) Anyways, here’s some background on how I think about this.
If a pure quantum state $|\psi\rangle$ is drawn at random (according to the Haar measure) from a $D$-dimensional vector space $\mathcal{H}$, then the entanglement entropy

$S(\rho_{\mathcal{S}}) = -\mathrm{Tr}[\rho_{\mathcal{S}} \ln \rho_{\mathcal{S}}], \qquad \rho_{\mathcal{S}} = \mathrm{Tr}_{\mathcal{E}} |\psi\rangle\langle\psi|,$

across a tensor decomposition into system $\mathcal{S}$ and environment $\mathcal{E}$ is highly likely to be almost the maximum

$S_{\max} = \ln D_{\mathcal{S}}$

for any such choice of decomposition $\mathcal{H} = \mathcal{S} \otimes \mathcal{E}$ (taking $D_{\mathcal{S}} \le D_{\mathcal{E}}$). More precisely, if we fix $D_{\mathcal{S}}$ and let $D_{\mathcal{E}} \to \infty$, then the fraction of the Haar volume of states that have entanglement entropy more than an exponentially small (in the number of environment degrees of freedom) amount away from the maximum is suppressed exponentially (in the number of environment degrees of freedom). This was known as Page’s conjecture [D. Page, “Average entropy of a subsystem”], and was later proved [S. Foong and S. Kanno, “Proof of Page’s conjecture on the average entropy of a subsystem”; J. Sánchez-Ruiz, “Simple proof of Page’s conjecture on the average entropy of a subsystem”]; it is a straightforward consequence of the concentration of measure phenomenon.… [continue reading]
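This concentration is easy to see numerically. Here’s a quick sketch in Python (the dimensions and sample count are arbitrary choices of mine): sample Haar-random states, compute the entanglement entropy across a fixed decomposition, and compare against the maximum $\ln D_{\mathcal{S}}$ and Page’s average $\ln D_{\mathcal{S}} - D_{\mathcal{S}}/(2 D_{\mathcal{E}})$.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_entanglement_entropy(d_s, d_e):
    """Entanglement entropy (in nats) of a Haar-random pure state
    across a d_s x d_e tensor decomposition."""
    # A normalized vector of i.i.d. complex Gaussians is Haar-distributed.
    psi = rng.normal(size=(d_s, d_e)) + 1j * rng.normal(size=(d_s, d_e))
    psi /= np.linalg.norm(psi)
    # Squared singular values = eigenvalues of the reduced density matrix.
    p = np.linalg.svd(psi, compute_uv=False) ** 2
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

d_s, d_e = 8, 512
samples = [haar_entanglement_entropy(d_s, d_e) for _ in range(50)]
s_max = np.log(d_s)                  # maximum possible entropy
s_page = s_max - d_s / (2 * d_e)     # Page's average (for d_e >> d_s)
print(np.mean(samples), s_page, s_max)
```

Every sample comes out within a hair of $\ln 8 \approx 2.079$, and the spread shrinks rapidly as $D_{\mathcal{E}}$ grows.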
Lemos et al. have a relatively recent letter [G. Lemos, V. Borish, G. Cole, S. Ramelow, R. Lapkiewicz, and A. Zeilinger, “Quantum imaging with undetected photons”, Nature 512, 409 (2014), arXiv:1401.4318] in Nature where they describe a method of imaging with undetected photons. (An experiment with the same essential quantum features was performed by Zou et al. [X. Y. Zou, L. J. Wang, and L. Mandel, “Induced coherence and indistinguishability in optical interference”, Phys. Rev. Lett. 67, 318 (1991)] way back in 1991, but Lemos et al. have emphasized its implications for imaging.) The idea is conceptually related to decoherence detection, and I want to map one onto the other to flesh out the connection. Their figure 1 gives a schematic of the experiment, and is copied below.
Figure 1 from Lemos et al.: ''Schematic of the experiment. Laser light (green) splits at beam splitter BS1 into modes a and b. Beam a pumps nonlinear crystal NL1, where collinear down-conversion may produce a pair of photons of different wavelengths called signal (yellow) and idler (red). After passing through the object O, the idler reflects at dichroic mirror D2 to align with the idler produced in NL2, such that the final emerging idler f does not contain any information about which crystal produced the photon pair. Therefore, signals c and e combined at beam splitter BS2 interfere. Consequently, signal beams g and h reveal idler transmission properties of object O.''
The first two paragraphs of the letter contain all the meat, encrypted and condensed into an opaque nugget of the kind that Nature loves; it stands as a good example of the lamentable way many quantum experimental articles are written.… [continue reading]
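To see the quantum mechanics behind this in miniature, here is a toy calculation of my own (not from either paper): a “which-crystal” qubit for the signal photon tensored with a two-dimensional idler space $\{|f\rangle, |\mathrm{loss}\rangle\}$. Giving the object an amplitude transmission $T$ for the idler, the interference visibility of the detected signal beams comes out to $|T|$, even though no detected photon ever touched the object.

```python
import numpy as np

def signal_visibility(T):
    """Visibility of signal interference when the undetected idler
    passes an object with (complex) amplitude transmission T."""
    f, loss = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # idler states
    c, e = np.array([1.0, 0.0]), np.array([0.0, 1.0])     # which-crystal signal states
    # Idler from NL1 after the object: transmitted part aligned with NL2's idler.
    idler1 = T * f + np.sqrt(1 - abs(T) ** 2) * loss
    psi = (np.kron(c, idler1) + np.kron(e, f)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    # Reduced state of the signal: trace out the idler.
    rho_signal = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    return 2 * abs(rho_signal[0, 1])   # fringe visibility at BS2

print(signal_visibility(1.0), signal_visibility(0.5), signal_visibility(0.0))
# visibility equals |T|
```

The loss of visibility for $|T| < 1$ is exactly decoherence of the signal: the undetected idler carries partial which-path information.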
In this post I’m going to give a clean definition of idealized quantum Brownian motion and give a few entry points into the literature surrounding its abstract formulation. A follow-up post will give an interpretation to the components in the corresponding dynamical equation, and some discussion of how the model can be generalized to take into account the ways the idealization may break down in the real world.
I needed to learn this background for a paper I am working on, and I was motivated to compile it here because the idiosyncratic results returned by Google searches, and especially this MathOverflow question (which I’ve answered), made it clear that a bird’s eye view is not easy to find. All of the material below is available in the work of other authors, but not logically developed in the way I would prefer.
Quantum Brownian motion (QBM) is a prototypical and idealized case of a quantum system $\mathcal{S}$, consisting of a continuous degree of freedom, that is interacting with a large multi-partite environment $\mathcal{E}$, in general leading to varying degrees of dissipation, dispersion, and decoherence of the system. Intuitively, the distinguishing characteristic of QBM is Markovian dynamics induced by the cumulative effect of an environment with many independent, individually weak, and (crucially) “phase-space local” components. We will define QBM as a particular class of ways that a density matrix may evolve, which may be realized (or approximately realized) by many possible system-environment models. There is a more-or-less precise sense in which QBM is the simplest quantum model capable of reproducing classical Brownian motion in a limit.
In words to be explained: QBM is a class of possible dynamics for an open, quantum, continuous degree of freedom in which the evolution is specified by a quadratic Hamiltonian and linear Lindblad operators.… [continue reading]
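To make “quadratic Hamiltonian and linear Lindblad operators” concrete before the follow-up post, here is the simplest hypothetical instance I know: a damped harmonic oscillator with $H = \omega a^\dagger a$ and a single Lindblad operator $L = \sqrt{\gamma}\, a$ (linear in $x$ and $p$, since $a \propto x + ip$). A crude Euler integration on a truncated Fock space reproduces the expected exponential energy decay:

```python
import numpy as np

N = 20                                     # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
n_op = a.conj().T @ a
H = 1.0 * n_op                             # quadratic Hamiltonian, omega = 1
gamma = 0.5
L = np.sqrt(gamma) * a                     # linear Lindblad operator

def lindblad_rhs(rho):
    """d rho / dt in Lindblad form (hbar = 1)."""
    H_part = -1j * (H @ rho - rho @ H)
    LdL = L.conj().T @ L
    D_part = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return H_part + D_part

rho = np.zeros((N, N), complex)
rho[5, 5] = 1.0                            # start in the Fock state |5>

dt, steps = 1e-3, 4000                     # crude Euler evolution to t = 4
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

n_mean = np.trace(n_op @ rho).real
print(n_mean, 5 * np.exp(-gamma * dt * steps))  # numeric vs analytic <n> = 5 e^{-gamma t}
```

This is only the zero-temperature corner of QBM; the full model’s Lindblad operators are general linear combinations of $x$ and $p$, but the algebraic structure (quadratic $H$, linear $L$) is the same.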
For the umpteenth time I have read a paper introducing the Wigner function essentially like this:
The Wigner representation $W(x,p)$ of a quantum state $\psi$ is a real-valued function on phase space defined [Actually, they usually use a more confusing definition. See my post on the intuitive definition of the Wigner function.] (with $\hbar = 1$) as

$W(x,p) = \frac{1}{\pi} \int_{-\infty}^{\infty} \mathrm{d}y\, \psi^*(x+y)\, \psi(x-y)\, e^{2 i p y}.$
It’s sort of like a probability distribution because the marginals reproduce the probabilities for position and momentum measurements:

$\int \mathrm{d}p\, W(x,p) = |\psi(x)|^2, \qquad \int \mathrm{d}x\, W(x,p) = |\hat{\psi}(p)|^2.$
But the reason it’s not a real probability distribution is that it can be negative.

The fact that $W(x,p)$ can be negative is obviously a reason you can’t think about it as a true PDF, but the marginals property is a terribly weak justification for thinking about $W(x,p)$ as a “quasi-PDF”. There are all sorts of functions one could write down that would have this same property but wouldn’t encode much information about actual phase space structure, e.g., the Jigner [“Jess” + “Wigner” = “Jigner”. Ha!] function $J(x,p) = |\psi(x)|^2\, |\hat{\psi}(p)|^2$, which tells us nothing whatsoever about how position relates to momentum.
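Both properties are easy to check numerically. Here’s a rough sketch of mine (with $\hbar = 1$, an arbitrary “cat” superposition of two Gaussians, and ad hoc grids) that builds $W$ by direct integration, confirms the position marginal, and exhibits the negativity:

```python
import numpy as np

# "Cat" state: normalized superposition of unit-width Gaussians at x = +/- 2
norm = np.sqrt(2 * np.sqrt(np.pi) * (1 + np.exp(-4)))
cat = lambda x: (np.exp(-(x - 2) ** 2 / 2) + np.exp(-(x + 2) ** 2 / 2)) / norm

xs = np.linspace(-6, 6, 121)
ps = np.linspace(-4, 4, 161)
ys = np.linspace(-10, 10, 1001)
dy, dp = ys[1] - ys[0], ps[1] - ps[0]

# W(x,p) = (1/pi) * Integral dy psi*(x+y) psi(x-y) exp(2ipy), with hbar = 1
kernel = np.exp(2j * np.outer(ps, ys))            # shape (len(ps), len(ys))
W = np.array([(kernel @ (cat(x + ys) * cat(x - ys))).real * dy / np.pi
              for x in xs])                       # shape (len(xs), len(ps))

marg_x = W.sum(axis=1) * dp                       # integrate out momentum
print(np.max(np.abs(marg_x - cat(xs) ** 2)))      # marginal matches |psi(x)|^2
print(W.min())                                    # ...but W itself goes negative
```

The negativity shows up in the interference fringes between the two Gaussian lobes, exactly where a classical phase-space picture breaks down.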
Here is the real reason you should think the Wigner function is almost, but not quite, a phase-space PDF for a state $\psi$:

Consider an arbitrary length scale $\sigma_x$, which determines a corresponding momentum scale $\sigma_p = \hbar/\sigma_x$ and a corresponding set [Not just a set of states, actually, but a Parseval tight frame. They have a characteristic spatial width $\sigma_x$ and momentum width $\sigma_p$, and are indexed by $\alpha = (x,p)$ as it ranges over phase space.] of coherent states $\{ |\alpha\rangle \}$.

If a measurement is performed on $\psi$ with the POVM of coherent states $\{ |\alpha\rangle\langle\alpha| \}$, then the probability of obtaining outcome $\alpha$ is given by the Husimi Q function representation of $\psi$:

$Q(\alpha) = \frac{1}{2\pi\hbar} \langle \alpha | \psi \rangle \langle \psi | \alpha \rangle.$

If $\psi$ can be constructed as a mixture of the coherent states $\{ |\alpha\rangle \}$, then [Of course, the P function cannot always be defined, and sometimes it can be defined but only if it takes negative values.]
… [continue reading]
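For contrast with the Wigner function’s negativity, here is a sketch of my own ($\hbar = \sigma_x = 1$, arbitrary grids) showing that the Husimi Q function of a cat state is a genuine probability density: it is manifestly non-negative and integrates to one, because it is literally the outcome distribution of the coherent-state POVM.

```python
import numpy as np

# Position grid and a normalized "cat" state (Gaussians at x = +/- 2)
xs = np.linspace(-12, 12, 2401)
dx = xs[1] - xs[0]
psi = np.exp(-(xs - 2) ** 2 / 2) + np.exp(-(xs + 2) ** 2 / 2)
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

def husimi(x0, p0):
    """Q(x0,p0) = |<alpha|psi>|^2 / (2 pi): outcome density of the
    coherent-state POVM, with hbar = 1 and unit-width coherent states."""
    alpha = np.pi ** -0.25 * np.exp(-(xs - x0) ** 2 / 2 + 1j * p0 * xs)
    overlap = np.sum(np.conj(alpha) * psi) * dx
    return abs(overlap) ** 2 / (2 * np.pi)

x0s = np.linspace(-8, 8, 81)
p0s = np.linspace(-6, 6, 61)
Q = np.array([[husimi(x0, p0) for p0 in p0s] for x0 in x0s])

print(Q.min())                                           # never negative
print(Q.sum() * (x0s[1] - x0s[0]) * (p0s[1] - p0s[0]))   # integrates to ~1
```

The price of positivity is smoothing: the Q function washes out structure finer than $(\sigma_x, \sigma_p)$.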
The Planck Collaboration has released a paper describing the dust polarization in the CMB for the patch of sky used recently by BICEP2 to announce evidence for primordial gravitational waves. Things look bleak for BICEP2’s claims. See Peter Woit, Sean Carroll, Quanta, Nature, and the New York Times.
In the comments, Peter Woit criticizes the asymmetric way the whole story is likely to be reported:
I think it’s completely accurate at this point to say that BICEP2 has provided zero evidence for primordial gravitational waves, instead is seeing pretty much exactly the expected dust signal.
This may change in the future, based on Planck data, new BICEP2 data, and a joint analysis of the two data sets (although seeing a significant signal this way doesn’t appear very likely), but that’s a separate issue. I don’t think it’s fair to use this possibility to try and evade the implications of the bad science that BICEP2 has done, promoted by press conference, and gotten on the front pages of prominent newspapers and magazines.
This is a perfectly good example of normal science: a group makes claims, they are checked and found to be incorrect. What’s not normal is a massive publicity campaign for an incorrect result, and the open question is what those responsible will now do to inform the public of what has happened. “Science communicators” often are very interested in communicating over-hyped news of a supposed great advance in science, much less interested in explaining that this was a mistake. Some questions about what happens next:
1. Will the New York Times match their front page story “Space Ripples Reveal Big Bang’s Smoking Gun” with a new front page story “Sorry, these guys had it completely wrong?” Or will they bury it in the specialized “Science” section tomorrow with some sort of mealy-mouthed headline like the BBC’s today that BICEP just “underestimated” a problem?
… [continue reading]
In May, Losada and Laura wrote a paper [M. Losada and R. Laura, Annals of Physics 344, 263 (2014)] pointing out the equivalence between two conditions on a set of “elementary histories” (i.e. fine-grained histories [Gell-Mann and Hartle usually use the term “fine-grained set of histories” to refer to a set generated by the finest possible partitioning of histories in the path integral (i.e. a point in space for every point in time), but this is overly specific. As far as the consistent histories framework is concerned, the key mathematical property that defines a fine-grained set is that it’s an exhaustive and exclusive set where each history is constructed by choosing exactly one projector from a fixed orthogonal resolution of the identity at each time.]). Let the elementary histories $\alpha = (\alpha_1, \ldots, \alpha_N)$ be defined by projective decompositions of the identity at time steps $t_i$ ($i = 1, \ldots, N$), so that

$C_\alpha = P^{(N)}_{\alpha_N}(t_N) \cdots P^{(1)}_{\alpha_1}(t_1),$

where $C_\alpha$ are the class operators. Then Losada and Laura showed that the following two conditions are equivalent:
The set is consistent for any state $\rho$: $\mathrm{Tr}[C_\alpha \rho C_\beta^\dagger] = 0$ for all $\alpha \neq \beta$. [“Medium decoherent” in Gell-Mann and Hartle’s terminology. Also note that Losada and Laura actually work with the obsolete condition of “weak decoherence”, but this turns out to be an unimportant difference. For a summary of these sorts of consistency conditions, see my round-up.]
The Heisenberg-picture projectors at all times commute: $[P^{(i)}_{\alpha_i}(t_i), P^{(j)}_{\beta_j}(t_j)] = 0$ for all $i$, $j$, $\alpha_i$, $\beta_j$.
However, this is not as general as one would like because assuming the set of histories is elementary is very restrictive. (It excludes branch-dependent sets, sets with inhomogeneous histories, and many more types of sets that we would like to work with.) Luckily, their proof can be extended a bit.
Let’s forget that we have any projectors and just consider a consistent set of class operators $\{C_\alpha\}$.… [continue reading]
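The equivalence is easy to poke at numerically in the smallest possible case. Here is a toy example of my own: two-time histories of a qubit built from $Z$-basis projectors, with either a rotation (Heisenberg projectors at the two times do not commute) or a phase gate (they do) in between. The off-diagonal decoherence-functional entries vanish for every state exactly in the commuting case:

```python
import numpy as np

P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # Z-basis projectors
projs = [P0, P1]

def max_interference(U, n_states=20):
    """Largest |Tr[C_a rho C_b^dag]| over pairs a != b and a handful of
    random pure states, for two-time Z-basis histories with evolution U
    between the times (class operators C_{(a,b)} = P_b U P_a)."""
    rng = np.random.default_rng(1)
    Cs = [Pb @ U @ Pa for Pa in projs for Pb in projs]
    worst = 0.0
    for _ in range(n_states):
        v = rng.normal(size=2) + 1j * rng.normal(size=2)
        rho = np.outer(v, v.conj()) / np.real(v.conj() @ v)
        for i in range(4):
            for j in range(i):
                worst = max(worst, abs(np.trace(Cs[i] @ rho @ Cs[j].conj().T)))
    return worst

theta = np.pi / 3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
phase_gate = np.diag([1.0, 1j])        # commutes with the Z projectors

print(max_interference(rotation))      # nonzero: not consistent for all states
print(max_interference(phase_gate))    # zero: consistent for every state
```

Only a small random sample of states is checked, but in the non-commuting case a single counterexample state already suffices to break all-state consistency.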
[Other parts in this series: 1,2,3,4,5,6,7.]
A common mistake made by folks newly exposed to the concept of decoherence is to conflate the Schmidt basis with the pointer basis induced by decoherence.… [continue reading]
Here are Raphael Boleslavsky and Christopher Cotton discussing their model of grade deflation in selective undergraduate programs:
Grade inflation is widely viewed as detrimental, compromising the quality of education and reducing the information content of student transcripts for employers. This column argues that there may be benefits to allowing grade inflation when universities’ investment decisions are taken into account. With grade inflation, student transcripts convey less information, so employers rely less on transcripts and more on universities’ reputations. This incentivises universities to make costly investments to improve the quality of their education and the average ability of their graduates. [Link. h/t Ben Kuhn.]
I’ve only read the column rather than the full paper, but it sounds like their model simply posits that “schools can undertake costly investments to improve the quality of education that they provide, increasing the average ability of graduates”.
But if you believe folks like Bryan Caplan, then you think colleges add very little value. (Even if you think the best schools do add more value than worse schools, it doesn’t at all follow that this can be increased in a positive-sum way by additional investment. It could be that all the value-added is from being around other smart students, who can only be drawn away from other schools.) Under Boleslavsky and Cotton’s model, schools are only incentivized to increase the quality of their exiting graduates, and this seems much easier to accomplish by doing better advertising to prospective students than by actually investing more in the students that matriculate.
Princeton took significant steps to curb grade inflation, with some success. However, they now look to be relaxing the only part of the policy that had teeth.… [continue reading]
[Other parts in this series: 1,2,3,4,5,6,7.]
Although it is possible to use the term “vacuum fluctuations” in a consistent manner, referring to well-defined phenomena, people are usually way too sloppy. Most physicists never think clearly about quantum measurements, so the term is widely misunderstood and should be avoided if possible. Maybe the most dangerous result of this is the confident, unexplained use of this term by experienced physicists talking to students; it has the awful effect of giving these students the impression that their inevitable confusion is normal and not indicative of deep misunderstanding. [“Professor, where do the wiggles in the cosmic microwave background come from?” “Quantum fluctuations.” “Oh, um…OK.” Yudkowsky has usefully called this a “curiosity-stopper”, although I’m sure there’s another term for this used by philosophers of science.]
Here is everything you need to know:
A measurement is specified by a basis, not by an observable. (If you demand to think in terms of observables, just replace “measurement basis” with “eigenbasis of the measured observable” in everything that follows.)
Real-life processes amplify microscopic phenomena to macroscopic scales all the time, thereby effectively performing a quantum measurement. (This includes inducing the implied wave-function collapse.) These do not need to involve a physicist in a lab, but the basis being measured must be an orthogonal one [W. H. Zurek, Phys. Rev. A 76, 052110 (2007), arXiv:quant-ph/0703160].
“Quantum fluctuations” are when any measurement (whether involving a human or not) is made in a basis which doesn’t commute with the initial state of the system.
A “vacuum fluctuation” is when the ground state of a system is measured in a basis that does not include the ground state; it’s merely a special case of a quantum fluctuation.
… [continue reading]
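The whole idea fits in a few lines of code. Take a qubit in the ground state of (say) $H = -\sigma_z$ and measure it repeatedly in the $\sigma_x$ eigenbasis, which does not contain the ground state; a toy sketch of mine:

```python
import numpy as np

rng = np.random.default_rng(42)

ground = np.array([1.0, 0.0])                    # ground state of H = -sigma_z
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2),    # |+>, |->: a measurement basis
           np.array([1.0, -1.0]) / np.sqrt(2)]   # that does NOT contain |ground>

probs = [abs(b @ ground) ** 2 for b in x_basis]  # Born rule: [0.5, 0.5]

# The state is always the same, yet repeated measurements "fluctuate":
outcomes = rng.choice([+1, -1], size=10_000, p=probs)
print(probs, outcomes.mean())   # mean ~ 0, with spread ~ 1/sqrt(10000)
```

Nothing is jiggling: the “fluctuations” appear only because the measurement basis doesn’t commute with (the projector onto) the state, exactly as in the vacuum case.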
The two sides in the potentially massive class-action lawsuit by silicon-valley engineers against Google, Apple, and other big tech companies reached an agreement, but that settlement was rejected by the judge. New York Times:
After the plaintiffs’ lawyers took their 25 percent cut, the settlement would have given about $4,000 to every member of the class.
Judge Koh said that she believed the case was stronger than that, and that the plaintiffs’ lawyers were taking the easy way out by settling. The evidence against the defendants was compelling, she said.
(Original court order.)
I would like to be able to explain this by understanding the economic/sociological motivations of the lawyers. People often complain about a huge chunk of the money going to the class-action lawyers who are too eager to settle, but the traditional argument is that a fixed percentage structure (rather than an hourly or flat rate) gives the lawyers the proper incentive to pursue the interests of the class by tying their compensation directly to the legal award. So this should lead to maximizing the award to the plaintiffs.
My best guess, doubtlessly considered by many others, is this: Lawyers, like most people, are risk averse for sufficiently large amounts of money. (They would rather have $10 million for sure than a 50% chance at $50 million.) On the other hand, the legal award will be distributed over many more plaintiffs. Since it will be much smaller per person, the plaintiffs are significantly less risk averse. So the lawyers settle even though it’s not in the best interests of the plaintiffs.
This suggests the following speculative solution for correctly aligning the incentives of the lawyers and the class action plaintiffs: Ensure that the person with the final decision-making power for the plaintiff legal team receives a percentage of the award that is small enough for that person’s utility function to be roughly as linear as the plaintiffs’.… [continue reading]
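Here’s a toy expected-utility version of the argument (all dollar figures, the class size, and the log-utility form are hypothetical choices of mine, not facts of the case). The same gamble has a certainty equivalent below the sure settlement for the lawyers but far above it for each individual plaintiff:

```python
import numpy as np

def certainty_equivalent(payoffs, probs, wealth):
    """Sure payment giving the same expected log-utility as the gamble,
    for an agent with baseline wealth `wealth`."""
    expected_u = np.dot(probs, np.log(wealth + np.asarray(payoffs)))
    return np.exp(expected_u) - wealth

# Hypothetical numbers: settle for $40M gross, or go to trial for a
# 50% shot at $200M gross. Lawyers take 25%; 7,500 plaintiffs split the rest.
lawyer_sure = 0.25 * 40e6                      # $10M fee from settling
lawyer_ce = certainty_equivalent([0.25 * 200e6, 0.0], [0.5, 0.5], wealth=1e6)

plaintiff_sure = 0.75 * 40e6 / 7500            # $4,000 from settling
plaintiff_ce = certainty_equivalent([0.75 * 200e6 / 7500, 0.0], [0.5, 0.5],
                                    wealth=1e6)

print(lawyer_ce < lawyer_sure)        # True: the lawyers prefer to settle
print(plaintiff_ce > plaintiff_sure)  # True: the plaintiffs should prefer trial
```

Because each plaintiff’s stake is tiny relative to their wealth, their utility is nearly linear over the gamble, so their certainty equivalent sits close to the expected value; the lawyers’ does not.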
The Master equation in Lindblad form (aka the Lindblad equation) is the most general possible evolution of an open quantum system that is Markovian and time-homogeneous. Markovian means that the way in which the density matrix evolves is determined completely by the current density matrix. This is the assumption that there are no memory effects, i.e. that the environment does not store information about earlier states of the system that can influence the system in the future. [Here’s an example of a memory effect: An atom immersed in an electromagnetic field can be in one of two states, excited or ground. If it is in the excited state then, during a given time interval, it has a certain probability of decaying to the ground state by emitting a photon. If it is in the ground state then it also has a chance of becoming excited by the ambient field. The situation where the atom is in a space of essentially infinite size would be Markovian, because the emitted photon (which embodies a record of the atom’s previous state of excitement) would travel away from the atom never to interact with it again. The atom might still become excited because of the ambient field, but its chance of doing so isn’t influenced by its previous state. But if the atom is in a container with reflecting walls, then the photon might be reflected back towards the atom, changing the probability that it becomes excited during a later period.] Time-homogeneous just means that the rule for stochastically evolving the system from one time to the next is the same for all times.
Given an arbitrary orthonormal basis $\{L_i\}$ of the space of operators on the $N$-dimensional Hilbert space of the system (according to the Hilbert–Schmidt inner product $\langle A, B \rangle = \mathrm{Tr}[A^\dagger B]$), the Lindblad equation takes the following form:

$\frac{\mathrm{d}\rho}{\mathrm{d}t} = -\frac{i}{\hbar}[H, \rho] + \sum_i h_i \left( L_i \rho L_i^\dagger - \frac{1}{2} L_i^\dagger L_i \rho - \frac{1}{2} \rho L_i^\dagger L_i \right)$

with $h_i \ge 0$.… [continue reading]
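As a sanity check on this form (a sketch of mine with $\hbar = 1$, the rates $h_i$ absorbed into the $L_i$, and arbitrary random operators), one can verify numerically that the generator preserves what makes $\rho$ a density matrix — unit trace and positivity — for any Hamiltonian and any Lindblad operators:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2                  # random Hermitian Hamiltonian
Ls = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(2)]

def lindblad_rhs(rho):
    """d rho/dt = -i[H, rho] + sum_i (L_i rho L_i^dag - {L_i^dag L_i, rho}/2)."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

v = rng.normal(size=d) + 1j * rng.normal(size=d)
rho = np.outer(v, v.conj()) / np.real(v.conj() @ v)   # random pure initial state

dt = 1e-4
for _ in range(20000):                    # crude Euler evolution to t = 2
    rho = rho + dt * lindblad_rhs(rho)

print(np.trace(rho).real)                 # trace stays 1
print(np.linalg.eigvalsh(rho).min())      # eigenvalues stay (numerically) >= 0
```

Dropping any of the three dissipator terms breaks one of these properties, which is a quick way to convince yourself the form is rigid.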
[This post was originally “Part 1” of my HTTAQM series. However, it’s old, haphazardly written, and not a good starting point. Therefore, I’ve removed it from that series, which now begins with “Measurements are about bases”. Other parts are here: 1,2,3,4,5,6,7. I hope to re-write this post in the future.]
It’s often remarked that the Aharonov–Bohm (AB) effect says something profound about the “reality” of potentials in quantum mechanics. In one version of the relevant experiment, charged particles are made to travel coherently along two alternate paths, such as in a Mach-Zehnder interferometer. At the experimenter’s discretion, an external electromagnetic potential (either vector or scalar) can be applied so that the two paths are at different potentials yet still experience zero magnetic and electric field. The paths are recombined, and the size of the potential difference determines the phase of the interference pattern. The effect is often interpreted as a demonstration that the electromagnetic potential is physically “real”, rather than just a useful mathematical concept.
The magnetic Aharonov–Bohm effect. The wavepacket of an electron approaches from the left and is split coherently over two paths, L and R. The red solenoid in between contains magnetic flux $\Phi$. The region outside the solenoid has zero field, but the vector potential is non-zero there, and its line integral differs along the two paths, with $\oint \vec{A} \cdot \mathrm{d}\vec{x} = \Phi$ around the closed loop. The relative phase between the L and R wavepackets is given by $\Delta\phi = e\Phi/\hbar$.
However, Vaidman recently pointed out that this is a mistaken interpretation which is an artifact of the semi-classical approximation used to describe the AB effect. Although it is true that the superposed test charges experience zero field, it turns out that the source charges creating that macroscopic potential do experience a non-zero field, and that the strength of this field is dependent on which path is taken by the test charges.… [continue reading]
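Whatever one’s interpretation, the observable fringe shift depends only on the enclosed flux. A toy calculation (SI units; the sign convention and the idealized balanced two-path form are my simplifications):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C (electron charge magnitude)

def detection_prob(flux):
    """Probability at one output port of a balanced two-path interferometer,
    P = cos^2(delta/2), with Aharonov-Bohm phase delta = e * Phi / hbar."""
    delta = e * flux / hbar
    return np.cos(delta / 2) ** 2

flux_quantum = 2 * np.pi * hbar / e   # h/e: flux that shifts the phase by 2*pi
print(detection_prob(0.0))                # 1: all electrons exit this port
print(detection_prob(flux_quantum / 2))   # ~0: half a flux quantum flips ports
print(detection_prob(flux_quantum))       # 1 again: fringes are periodic in Phi
```

The $h/e$ periodicity is what makes the effect measurable even though no electron ever passes through a region of non-zero field.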
[Added 2015-1-30: The paper is now in print and has appeared in the popular press.]
One criticism I’ve had to address when proselytizing the indisputable charms of using decoherence detection methods to look at low-mass dark matter (DM) is this: I’ve never produced a concrete model that would be tested. My analysis (arXiv:1212.3061) addressed the possibility of using matter interferometry to rule out a large class of dark matter models characterized by a certain range for the DM mass and the nucleon-scattering cross section. However, I never constructed an explicit model as a representative of this class to demonstrate in detail that it was compatible with all existing observational evidence. This is a large and complicated task, and not something I could accomplish on my own.
I tried hard to find an existing model in the literature that met my requirements, but without luck. So I had to argue (with referees and with others) that this was properly beyond the scope of my work, and that the idea was interesting enough to warrant publication without a model. This ultimately was successful, but it was an uphill battle. Among other things, I pointed out that new experimental concepts can inspire theoretical work, so it is important that they be disseminated.
I’m thrilled to say this paid off in spades. Bateman, McHardy, Merle, Morris, and Ulbricht have posted their new pre-print “On the Existence of Low-Mass Dark Matter and its Direct Detection” (arXiv:1405.5536). Here is the abstract:
Dark Matter (DM) is an elusive form of matter which has been postulated to explain astronomical observations through its gravitational effects on stars and galaxies, gravitational lensing of light around these, and through its imprint on the Cosmic Microwave Background (CMB).
… [continue reading]