Wavepacket spreading produces force sensitivity

I’m still trying to decide if I understand this correctly, but it looks like coherent wavepacket spreading is sufficient to produce states of a test mass that are highly sensitive to weak forces. The Wigner function of a coherent wavepacket is sheared horizontally in phase space (see hand-drawn figure). A force that perturbs it with even a small momentum shift will still produce an orthogonal state of the test mass.


The Gaussian wavepacket of a test mass (left) will be sheared horizontally in phase space by the free-particle evolution governed by H=p^2/2m. A small vertical (i.e. momentum) shift by a weak force can then produce an orthogonal state of the test mass, while it would not for the unsheared state. However, discriminating between the shifted and unshifted wavepackets requires a momentum-like measurement; position measurements would not suffice.
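
A minimal back-of-the-envelope version of this claim (my own arithmetic, not something in the figure): the overlap between any pure state and its momentum-displaced copy depends only on the spatial probability distribution,

\begin{align*} \left\vert \langle \psi \vert e^{i\, \delta p\, \hat{x}/\hbar} \vert \psi \rangle \right\vert = \left\vert \int \mathrm{d}x\, \vert \psi(x) \vert^2 e^{i\, \delta p\, x/\hbar} \right\vert = e^{-\delta p^2 \sigma_x^2 / 2\hbar^2} \end{align*}

for a Gaussian with position variance \sigma_x^2. Free-particle evolution leaves the momentum distribution alone but grows \sigma_x(t), so an ever smaller kick \delta p \lesssim \hbar/\sigma_x(t) is enough to make the kicked and unkicked packets nearly orthogonal.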

Of course, we could simply start with a wavepacket with a very wide spatial width and narrow momentum width. Back when this was being discussed by Caves and others in the ’80s, they recognized that these states would have such sensitivity. However, they pointed out, this couldn’t really be exploited because of the difficulty in making true momentum measurements. Rather, we usually measure momentum indirectly by letting the normal free-particle (H=p^2/2m) evolution carry the state to different points in space, and then measuring position.… [continue reading]

Comments on Gell-Mann & Hartle’s latest

Back in December Gell-Mann and Hartle (G&H) posted their latest paper on consistent histories, “Adaptive Coarse Graining, Environment, Strong Decoherence, and Quasiclassical Realms”. Here are my thoughts.

The discussion of adaptive coarse graining was brief and very much in agreement with previous work.

G&H then give a name and formal description to the idea, long part of the intuitive lore, of a history being defined by the values taken by a particular variable over many time steps. (This might be the position of an object, which is being recorded to some accuracy by an environment that decoheres it.) The key idea is that all the Schrödinger-picture projectors P^{k}_{\alpha_k} at different times t_k defining the history commute:

(1)   \begin{align*} [P^{k}_{\alpha_k},P^{k'}_{\alpha_{k'}}]=0 \quad \forall k,k' \end{align*}

This they call the narrative condition. From it, one is able to define a smallest set of maximal projectors Q_i (which they call a common framework) that obey either Q_i \le P^{k}_{\alpha_k} or Q_i P^{k}_{\alpha_k} = P^{k}_{\alpha_k} Q_i = 0 for all P^{k}_{\alpha_k}. For instance, if the P’s project onto volumes of position space, then the Q’s are just the minimal partition of position space such that the region associated with each Q_i is fully contained in the regions corresponding to some of the P^{k}_{\alpha_k}, and is completely disjoint from the regions corresponding to the others.… [continue reading]
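
A toy illustration of the Q construction (my own example, not from the paper): suppose one history projector P^{1} is onto positions in the interval [0,2] and another, P^{2}, is onto [1,3]. The common framework is then Q_1 onto [0,1), Q_2 onto [1,2], Q_3 onto (2,3], plus Q_4 onto everything outside [0,3]. Each Q_i is either contained in or orthogonal to each P (e.g. Q_1 \le P^{1} while Q_1 P^{2} = 0), and the original projectors are recovered as sums: P^{1} = Q_1 + Q_2 and P^{2} = Q_2 + Q_3.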

Contextuality versus nonlocality

I wanted to understand Rob Spekkens’ self-described lonely view that the contextual aspect of quantum mechanics is more important than the non-local aspect. Although I like to think I know a thing or two about the foundations of quantum mechanics, I’m embarrassingly unfamiliar with the discussion surrounding contextuality. 90% of my understanding comes from this famous explanation by David Bacon at his old blog. (Non-experts should definitely take the time to read that nice little post.) What follows are my thoughts before diving into the literature.

I find the map-territory distinction very important for thinking about this. Bell’s theorem isn’t a theorem about quantum mechanics (QM) per se, it’s a theorem about locally realistic theories. It says that if the universe satisfies certain very reasonable assumptions, then it will behave in a certain manner. We observe that it doesn’t behave in this manner, therefore the universe doesn’t satisfy those assumptions. The only reason that QM comes into it is that QM correctly predicts the misbehavior, whereas classical mechanics does not (since classical mechanics satisfies the assumptions).
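
For concreteness, the canonical instance of the misbehavior is the CHSH inequality (standard textbook material, nothing specific to Spekkens or Bacon): if the outcomes a_0, a_1, b_0, b_1 = \pm 1 are all fixed in advance by local hidden variables, then every run satisfies a_0(b_0+b_1) + a_1(b_0-b_1) = \pm 2, so

\begin{align*} \langle a_0 b_0 \rangle + \langle a_0 b_1 \rangle + \langle a_1 b_0 \rangle - \langle a_1 b_1 \rangle \le 2 , \end{align*}

while measurements on a spin singlet at suitably chosen angles give 2\sqrt{2}, which is what experiments observe.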

Now, if you’re comfortable writing down a unitarily evolving density matrix of macroscopic systems, then the mechanism by which QM is able to misbehave is actually fairly transparent.… [continue reading]

Consistency conditions in consistent histories

[This is akin to a living review, which may improve from time to time. Last edited 2015-4-27.]

This post will summarize the various consistency conditions that can be found discussed in the consistent histories literature. Most of the conditions have gone by different names under different authors (and sometimes even under the same author), so I’ll try to give all the aliases I know; just hover over the footnote markers.

There is an overarching schism in the choice of terminology in the literature between the terms “consistent” and “decoherent”. Most authors, including Gell-Mann and Hartle, now use the term “decoherent” very loosely and no longer employ “consistent” as an official label for any particular condition (or for the formalism as a whole). Zurek and I believe this is a significant loss in terminology, and we are stubbornly resisting it. In our recent arXiv offering, our rant was thus:

…we emphasize that decoherence is a dynamical physical process predicated on a distinction between system and environment, whereas consistency is a static property of a set of histories, a Hamiltonian, and an initial state. For a given decohering quantum system, there is generally a preferred basis of pointer states [1, 8]. In contrast, the mere requirement of consistency does not distinguish a preferred set of histories which describe classical behavior from any of the many sets with no physical interpretation.
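
For orientation, here are the standard definitions (textbook material, not a quotation from either paper): with Heisenberg-picture class operators C_\alpha = P^{n}_{\alpha_n}(t_n) \cdots P^{1}_{\alpha_1}(t_1), the decoherence functional is

\begin{align*} D(\alpha,\beta) = \mathrm{Tr}\left[ C_\alpha \rho\, C_\beta^\dagger \right] , \end{align*}

and the two most commonly invoked conditions are weak consistency, \mathrm{Re}\, D(\alpha,\beta) = 0 for \alpha \neq \beta, and medium decoherence, D(\alpha,\beta) = 0 for \alpha \neq \beta. Either one ensures that the diagonal elements D(\alpha,\alpha) obey the classical probability sum rules under coarse graining.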

[continue reading]

Direct versus indirect measurements

andrelaszlo on HackerNews asked how someone could draw a reasonable distinction between “direct” and “indirect” measurements in science. Below is how I answered. This is old hat to many folks and, needless to say, none of this is original to me.

There’s a good philosophy of science argument to be made that there’s no precise and discrete distinction between direct and indirect measurement. In our model of the universe, there are always multiple physical steps that link the phenomena under investigation to our conscious perception. Therefore, any conclusions we draw from a perception are conditional on our confidence in the entire causal chain performing reliably (e.g. a gravitational wave induces a B-mode in the CMB, which propagates as a photon to our detectors, which heats up a transition-edge sensor, which increases the resistivity of the circuit, which flips a bit in the flash memory, which is read out to a monitor, which emits photons to our eye, which change the nerves firing in our brain). “Direct” measurements, then, are just ones that rely on a small number of reliable inferences, while “indirect” measurements rely on a large number of less reliable inferences.

Nonetheless, in practice there is a rather clear distinction which declares “direct” measurements to be those that take place locally (in space) using well-characterized equipment that we can (importantly) manipulate, and which is conditional only on physical laws which are very strongly established.

[continue reading]

Comments on Hotta’s Quantum Energy Teleportation

[This is a “literature impression”.]

Masahiro Hotta has a series of papers about what he calls “quantum energy teleportation (QET)”, modeled after the well-known notion of quantum teleportation (of information). Although it sounds like crazy crackpot stuff, and the papers contain the red-flag term “zero-point energy”, the basic physics of Hotta’s work is sound. But the results don’t appear to have important consequences for energy transmission.

The idea is to exploit the fact that the ground state of the vacuum in QFT is, in principle, entangled over arbitrary distances. In a toy Alice and Bob model with respective systems A and B, you assume a Hamiltonian for which the ground state is unique and entangled. Then, Alice makes a local measurement on her system A. Neither of the two conditional global states for the joint AB system — conditional on the outcome of the measurement — is an eigenstate of the Hamiltonian, and therefore the average energy must increase for the joint system. The source of this energy is the device Alice used to make the measurement. Now, if Bob were to independently make a measurement of his system, he would find that energy would also necessarily flow from his device into the joint system; this follows from the symmetry of the problem.… [continue reading]
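
A compressed version of the bookkeeping (my paraphrase of the standard argument, not Hotta’s notation): let \vert g \rangle be the unique entangled ground state, H \vert g \rangle = E_0 \vert g \rangle, and let Alice’s measurement be described by operators M_k acting only on A with \sum_k M_k^\dagger M_k = 1. The average post-measurement energy is

\begin{align*} \langle H \rangle_{\mathrm{post}} = \sum_k \mathrm{Tr}\left[ H\, M_k \vert g \rangle \langle g \vert M_k^\dagger \right] \ge E_0 , \end{align*}

with strict inequality whenever some M_k \vert g \rangle is not proportional to \vert g \rangle, because E_0 is the minimum of \langle H \rangle over all states. That excess energy has to come from Alice’s measuring device.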

Literature impressions

I have often been frustrated by the inefficiency of reading through the physics literature. One problem is that physicists are sometimes bad teachers and are usually bad writers, and so it can take a long time of reading a paper before you even figure out what the author is trying to say. This gets worse when you look at papers that aren’t in your immediate physics niche, because then the author will probably use assumptions, mathematical techniques, and terminology you aren’t familiar with. If you had infinite time, you could spend days reading every paper that looks reasonably interesting, but you don’t. A preferred technique is to ask your colleagues to explain it to you, because they are more likely to speak your language and (unlike a paper) can answer your questions when you come up against a confusion. But generally your colleagues haven’t read it; they want you to read it so you can explain it to them. I spend a lot of time reading papers that end up being uninteresting, but it’s worth it for the occasional gems. And it seems clear that there is a lot of duplicated work being done sorting through the chaff.

So on the one hand we have a lengthy, fixed document from a single, often unfamiliar perspective (i.e.… [continue reading]

Hanson-ism: Travel isn’t about intellectual exposure

I often hear very smart and impressive people say that others (especially Americans) who don’t travel much have too narrow a view of the world. They haven’t been exposed to different perspectives because they haven’t traveled much. They focus on small differences of opinion within their own sphere while remaining ignorant of larger differences abroad.

Now, I think that there is a grain of truth to this, maybe even with the direction of causality pointing in the correct way. And I think it’s plausible that it really does affect Americans more than folks of similar means in Europe. (Of course, here I would say the root cause is mostly economic rather than cultural; America’s size gives it a greater degree of self-sufficiency in a way that means its citizens have fewer reasons to travel. This is similar to the fact that it’s much less profitable for the average American to become fluent in a second language than for a typical European, even a Briton. I think it’s obvious that if you could magically break up the American states into 15 separate nations, each with a different language, you’d get a complete reversal of these effects almost immediately.) But it’s vastly overstated because of the status boost to people saying it.… [continue reading]

Impact discrepancies persist under uncertainty

[Tomasik has updated his essay to address some of these issues]

Brian Tomasik’s website, utilitarian-essays.com, contains many thoughtful pieces he has written over the years from the perspective of a utilitarian who is concerned deeply with wild animal suffering. His work has been a great resource for what is now called the effective altruism community, and I have a lot of respect for his unflinching acceptance and exploration of our large obligations conditional on the moral importance of all animals.

I want to briefly take issue with a small but important part of Brian’s recent essay “Charity cost effectiveness in an uncertain world”. He discusses the difficult problem facing consequentialists who care about the future, especially the far future, on account of how difficult it is to predict the many varied flow-through effects of our actions. In several places, he suggests that this uncertainty will tend to wash out the enormous differences in effectiveness attributed to various charities (and highlighted by effective altruists) when measured by direct impact (e.g. lives saved per dollar).

…When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing…

…For example, insofar as a charity encourages cooperation, philosophical reflection, and meta-thinking about how to best reduce suffering in the future — even if only by accident — it has valuable flow-through effects, and it’s unlikely these can be beaten by many orders of magnitude by something else…

…I don’t expect some charities to be astronomically better than others…

Although I agree on the importance of the uncertain implications of flow-through effects, I disagree with the suggestion that this should generally be expected to even out differences in effectiveness.… [continue reading]

Citation indices do not avoid subjectivity

Peter Higgs used his recent celebrity to criticize the current academic job system: “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” In this context, it was argued to me that using citation count, publication count, or some other related index during the hiring process for academics is a necessary evil. In particular, single academic job openings are often deluged with dozens or hundreds of applications, and there needs to be some method of narrowing down the search to a manageable number of applicants. Furthermore, it has been said, it’s important that this method be objective rather than subjective.

I don’t think it makes sense at all to describe citation indices as less subjective measures than individual judgement calls. They just push the subjectivity from a small group (the hiring committee) to a larger group (the physics community); the decision to publish and cite is always held by human beings. Contrast this to an objective measure of how fast someone is: their 100m dash time. The subjectivity of asking a judge to guess how fast a runner appears to be going as he runs by, and the possible sources of error due to varying height or gait, are not much fixed by asking many judges and taking an “objective” vote tally.… [continue reading]

Cosmology meets philanthropy

[This was originally posted at the Quantum Pontiff.]

People sometimes ask me how my research will help society.  This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say… tenuously.  And of course, this is a fair question from the layman; tax dollars support most of our work.

I generally take the attitude of former Fermilab director Robert R. Wilson.  During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR.  He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”

Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”.  As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e.… [continue reading]

Decoherence Detection FAQ—Part 1: Dark matter

[Updated 2016-7-2]

I’ve submitted my papers (long and short arXiv versions) on detecting classically undetectable new particles through decoherence. The short version introduces the basic idea and states the main implications for dark matter and gravitons. The long version covers the dark matter case in depth. Abstract for the short version:

Detecting Classically Undetectable Particles through Quantum Decoherence

Some hypothetical particles are considered essentially undetectable because they are far too light and slow-moving to transfer appreciable energy or momentum to the normal matter that composes a detector. I propose instead directly detecting such feeble particles, like sub-MeV dark matter or even gravitons, through their uniquely distinguishable decoherent effects on quantum devices like matter interferometers. More generally, decoherence can reveal phenomena that have arbitrarily little classical influence on normal matter, giving new motivation for the pursuit of macroscopic superpositions.

This is figure 1:

Decoherence detection with a Mach-Zehnder interferometer. System \mathcal{N} is placed in a coherent superposition of spatially displaced wavepackets \vert N_{L} \rangle and \vert N_{R} \rangle that each travel a separate path and then are recombined. In the absence of system \mathcal{E}, the interferometer is tuned so that \mathcal{N} will be detected at the bright port with near unit probability, and at the dim port with near vanishing probability.
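
A compressed version of the relevant arithmetic (standard interferometry, with notation matching the caption but phase conventions chosen by me): if \mathcal{E} becomes correlated with the arm, the joint state just before recombination is (\vert N_{L} \rangle \vert E_L \rangle + \vert N_{R} \rangle \vert E_R \rangle)/\sqrt{2}, and the probability of a click at the nominally dark dim port is

\begin{align*} P_{\mathrm{dim}} = \frac{1 - \mathrm{Re}\, \langle E_L \vert E_R \rangle}{2} , \end{align*}

so any which-path information recorded by \mathcal{E}, i.e. any decoherence with \vert \langle E_L \vert E_R \rangle \vert < 1, shows up as excess dim-port counts even if \mathcal{E} never exchanges appreciable energy or momentum with \mathcal{N}.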
[continue reading]

Happier livestock through genetic bundling

Carl Shulman posted on OvercomingBias about an easier way to produce animals that suffer less: selective breeding.  In contrast to a lot of the pie-in-the-sky talk about genetically engineering animals to feel less pain, selective breeding is a proven and relatively cheap method that can produce animals with traits that increase a complicated weighted sum of many parameters.  As Shulman points out, the improvements are impressive, and breeding methods are likely to work just as well for reducing suffering as increasing milk output (although these goals may conflict in the same animal).

So suppose an animal-welfare organization is able to raise the resources necessary to run such a breeding program.  They immediately run up against the problem of how to induce large-scale farming operations to use their new breed of less-suffering chickens.  Indeed, in the comments to Shulman’s post, Gaverick Matheny pointed out that an example of a welfare-enhanced breed exists but is rarely used because it is less productive.

It’s true that there should be some low-hanging welfare fruit that has negligible effect on farm profits.  But even these are unlikely to be adopted due to economic frictions.  (Why would most farmers risk changing to an untested breed produced by an organization generally antagonistic toward them?)  So how can an animal-welfare organization induce adoption of their preferred breed?  … [continue reading]

PRISM and the exclusionary rule

I distinctly remember thinking in my high school Gov’t class that the exclusionary rule was weird.  Basically, the idea is that the primary mechanism for enforcing the 4th Amendment’s protection against unjustified searches is that evidence collected in violation of this Amendment cannot be used to convict someone.  But this is weird because (a) it can lead to setting free people guilty of egregious crimes because of minute privacy violations and (b) it offers zero protection against privacy violations by the government for other purposes, such as convicting third parties. I always thought it was kind of missing the point.

(There doesn’t seem to be a good pure check against privacy violations to be found in the court system.  Right now, you can apparently sue the federal government through the Federal Tort Claims Act for privacy violations, but only if the government agrees.  Similar situations exist with the states.)

Now, as it turns out, problem (b) is front-and-center in the debate over FISC’s powers.  It’s true that normal criminal courts grant warrants in a non-adversarial setting, just like FISC does.  But tptacek and dragonwriter point out on HackerNews that this is defensible because there is an adversary when the warrant is actually executed, and the exclusionary rule can be used to rebuff unjustified warrants.… [continue reading]

Discriminating smartness

It seems to me that I can accurately determine which of two people is smarter by just listening to them talk if at least one person is less smart than I am, but that this is very difficult or impossible if both people are much smarter than me. When both people are smarter than me, I fall back on crude heuristics for inferring intelligence. I look for which person seems more confident, answers more quickly, and corrects the other person more often. This, of course, is a very flawed method because I can be fooled into thinking that people who project unjustified confidence are smarter than timid but brilliant people.

In the intermediate case, when I am only slightly dumber than at least one party, the problem is reduced. I am better able to detect over-confidence, often because I can understand what’s going on when the timid smart person catches the over-confident person making mistakes (even if I couldn’t have caught them myself).

(To an extent, this may all be true when you replace “smarter” with “more skilled in domain X”.)

This suggests that candidate voting systems (whether for governments or otherwise) should have more “levels”. If we all want to elect the best person, where “bestness” is hard to identify by most of us mediocre participants, we would do better by identifying which of our neighbors are smarter than us, and then electing them to make decisions for us (possibly continuing into a hierarchy of committees).… [continue reading]