Comments on Hotta’s Quantum Energy Teleportation

[This is a “literature impression”.]

Masahiro Hotta has a series of papers about what he calls “quantum energy teleportation” (QET), modeled after the well-known notion of quantum teleportation (of information). Although it sounds like crazy crackpot stuff, and the papers contain the red-flag term “zero-point energy”, the basic physics of Hotta’s work is sound. But it doesn’t appear to have important consequences for energy transmission.

The idea is to exploit the fact that the ground state of the vacuum in QFT is, in principle, entangled over arbitrary distances. In a toy Alice and Bob model with respective systems A and B, you assume a Hamiltonian whose ground state is unique and entangled. Then, Alice makes a local measurement on her system A. Neither of the two conditional global states for the joint AB system — conditional on the outcome of the measurement — is an eigenstate of the Hamiltonian, and since the ground state is unique, the average energy of the joint system must increase. The source of this energy is the device Alice used to make the measurement. Now, if Bob were to independently make a measurement of his system, he would find that energy would also necessarily flow from his device into the joint system; this follows from the symmetry of the problem. But if he waits for Alice to transmit the outcome of her measurement, it turns out that he can apply a local unitary to his system B, followed by a local measurement, such that on average there is a net flow of energy into his equipment. The fact that he must wait for Alice’s outcome, which travels no faster than the speed of light, is what gives this the flavor of teleportation.… [continue reading]
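To make the structure of the protocol concrete, here is a minimal numerical sketch in Python (NumPy). The two-qubit Hamiltonian and the couplings h and k below are made up for illustration and are not Hotta’s specific model; the point is just that Alice’s measurement raises the joint system’s average energy above the ground energy, and Bob, once told her outcome, can pick a local rotation that lowers it again, with the difference flowing into his apparatus.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A generic two-qubit Hamiltonian with a unique, entangled ground state
# (illustrative couplings; NOT Hotta's exact construction)
h, k = 1.0, 0.6
H = h * np.kron(sz, I2) + h * np.kron(I2, sz) + 2 * k * np.kron(sx, sx)

energy = lambda psi: np.real(psi.conj() @ H @ psi)

evals, evecs = np.linalg.eigh(H)
E0, psi0 = evals[0], evecs[:, 0]          # unique entangled ground state

# Alice projectively measures sigma_x on her qubit (outcomes mu = +/- 1)
outcomes = []
for mu in (+1, -1):
    P = np.kron((I2 + mu * sx) / 2, I2)   # projector onto outcome mu
    amp = P @ psi0
    prob = np.real(amp.conj() @ amp)
    outcomes.append((mu, prob, amp / np.sqrt(prob)))

E_after_A = sum(p * energy(psi) for _, p, psi in outcomes)

# Bob, told mu, applies a local rotation exp(-i*theta*sigma_y) to his qubit,
# choosing theta to minimize the remaining energy.  The optimal theta depends
# on mu, which is why he needs Alice's classical message.
thetas = np.linspace(-np.pi, np.pi, 2001)
E_after_B = 0.0
for mu, p, psi in outcomes:
    rotated = [np.kron(I2, np.cos(t) * I2 - 1j * np.sin(t) * sy) @ psi for t in thetas]
    E_after_B += p * min(energy(v) for v in rotated)

print(f"ground-state energy            : {E0:+.4f}")
print(f"after Alice's measurement      : {E_after_A:+.4f}  (energy injected by her device)")
print(f"after Bob's conditional gate   : {E_after_B:+.4f}  (still above the ground energy)")
print(f"average energy extracted by Bob: {E_after_A - E_after_B:.4f}")
```

In this toy example the energy-minimizing rotation angle differs between the two outcomes, which is why Bob needs Alice’s classical message, and the final energy remains above the ground-state energy, so no energy is created from nothing; it is only moved around.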

Literature impressions

I have often been frustrated by the inefficiency of reading through the physics literature. One problem is that physicists are sometimes bad teachers and are usually bad writers, so it can take a long time reading a paper before you even figure out what the author is trying to say. This gets worse when you look at papers that aren’t in your immediate physics niche, because then the author will probably use assumptions, mathematical techniques, and terminology you aren’t familiar with. If you had infinite time, you could spend days reading every paper that looks reasonably interesting, but you don’t. A preferred technique is to ask your colleagues to explain it to you, because they are more likely to speak your language and (unlike a paper) can answer your questions when you run into a point of confusion. But generally your colleagues haven’t read it; they want you to read it so you can explain it to them. I spend a lot of time reading papers that end up being uninteresting, but it’s worth it for the occasional gems. And it seems clear that there is a lot of duplicated work being done sorting through the chaff.

So on the one hand we have a lengthy, fixed document from a single, often unfamiliar perspective (i.e. the actual paper in a different field) and on the other hand we have a breathing human being in your own field who will patiently explain things to you. An intermediate solution would be to have a few people in different fields read the paper and then translate the key parts into their field’s language, which could then be passed around.… [continue reading]

Hanson-ism: Travel isn’t about intellectual exposure

I often hear very smart and impressive people say that others (especially Americans) who don’t travel much have too narrow a view of the world. They haven’t been exposed to different perspectives because they haven’t traveled much. They focus on small differences of opinion within their own sphere while remaining ignorant of larger differences abroad.

Now, I think that there is a grain of truth to this, maybe even with the direction of causality pointing in the correct way. And I think it’s plausible that it really does affect Americans more than folks of similar means in Europe. (Of course, here I would say the root cause is mostly economic rather than cultural; America’s size gives it a greater degree of self-sufficiency, which means its citizens have fewer reasons to travel. This is similar to the fact that it’s much less profitable for the average American to become fluent in a second language than for a typical European, even a Brit. I think it’s obvious that if you could magically break up the American states into 15 separate nations, each with a different language, you’d get a complete reversal of these effects almost immediately.) But the claim is vastly overstated because of the status boost to people saying it.

The same people who claim that foreign travel is very important for intellectual exposure almost never emphasize reading foreign writing. Perhaps in the past one had to travel thousands of miles to really get exposed to the brilliant writers and artists who huddled in Parisian cafes, but this is no longer true in the age of the internet. (And maybe it hasn’t been true since the printing press.)… [continue reading]

Impact discrepancies persist under uncertainty

[Tomasik has updated his essay to address some of these issues]

Brian Tomasik’s website, utilitarian-essays.com, contains many thoughtful pieces he has written over the years from the perspective of a utilitarian who is deeply concerned with wild animal suffering. His work has been a great resource for what is now called the effective altruism community, and I have a lot of respect for his unflinching acceptance and exploration of our large obligations conditional on the moral importance of all animals.

I want to briefly take issue with a small but important part of Brian’s recent essay “Charity cost effectiveness in an uncertain world”. He discusses the difficult problem facing consequentialists who care about the future, especially the far future, on account of how difficult it is to predict the many varied flow-through effects of our actions. In several places, he suggests that this uncertainty will tend to wash out the enormous differences in effectiveness attributed to various charities (and highlighted by effective altruists) when measured by direct impact (e.g. lives saved per dollar).

…When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing…

…For example, insofar as a charity encourages cooperation, philosophical reflection, and meta-thinking about how to best reduce suffering in the future — even if only by accident — it has valuable flow-through effects, and it’s unlikely these can be beaten by many orders of magnitude by something else…

…I don’t expect some charities to be astronomically better than others…

Although I agree on the importance of the uncertain implications of flow-through effects, I disagree with the suggestion that this should generally be expected to even out differences in effectiveness.… [continue reading]

Citation indices do not avoid subjectivity

Peter Higgs used his recent celebrity to criticize the current academic job system: “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” In this context, it was argued to me that using citation count, publication count, or some other related index during the hiring process for academics is a necessary evil. In particular, single academic job openings are often deluged with dozens or hundreds of applications, and there needs to be some method of narrowing down the search to a manageable number of applicants. Furthermore, it has been said, it’s important that this method be objective rather than subjective.

I don’t think it makes sense at all to describe citation indices as less subjective measures than individual judgement calls. They just push the subjectivity from a small group (the hiring committee) to a larger group (the physics community); the decision to publish and cite is always made by human beings. Contrast this with an objective measure of how fast someone is: their 100m dash time. The subjectivity of asking a judge to guess how fast a runner is going as he runs by, with its possible sources of error due to varying height or gait, is not much reduced by asking many judges and taking an “objective” vote tally.

Of course, if the hiring committee doesn’t have the time or expertise to evaluate the work done by a job applicant, then what a citation index effectively does is farm out that evaluative work to the greater physics community. And that can be OK if you are clear that that’s what you’re doing.… [continue reading]

Cosmology meets philanthropy

[This was originally posted at the Quantum Pontiff.]

People sometimes ask me how my research will help society.  This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say… tenuously.  And of course, this is a fair question from the layman; tax dollars support most of our work.

I generally take the attitude of former Fermilab director Robert R. Wilson.  During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR.  He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”

Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”.  As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e. the difference between our current low-entropy state and the bleak maximal entropy state that lies in our far future.  And just about everything we might value is ultimately powered by it.  As we speak (or blog), the stupendously vast majority of negentropy usage is directed toward rather uninspiring ends, like illuminating distant planets no one will ever see.

These resources can probably be put to better use.  … [continue reading]

Decoherence Detection FAQ—Part 1: Dark matter

[Updated 2016-7-2]

I’ve submitted my papers (long and short arXiv versions) on detecting classically undetectable new particles through decoherence. The short version introduces the basic idea and states the main implications for dark matter and gravitons. The long version covers the dark matter case in depth. Abstract for the short version:

Detecting Classically Undetectable Particles through Quantum Decoherence

Some hypothetical particles are considered essentially undetectable because they are far too light and slow-moving to transfer appreciable energy or momentum to the normal matter that composes a detector. I propose instead directly detecting such feeble particles, like sub-MeV dark matter or even gravitons, through their uniquely distinguishable decoherent effects on quantum devices like matter interferometers. More generally, decoherence can reveal phenomena that have arbitrarily little classical influence on normal matter, giving new motivation for the pursuit of macroscopic superpositions.

This is figure 1:

Decoherence detection with a Mach-Zehnder interferometer. System \mathcal{N} is placed in a coherent superposition of spatially displaced wavepackets \vert N_{L} \rangle and \vert N_{R} \rangle that each travel a separate path and then are recombined. In the absence of system \mathcal{E}, the interferometer is tuned so that \mathcal{N} will be detected at the bright port with near unit probability, and at the dim port with near vanishing probability. However, if system \mathcal{E} scatters off \mathcal{N}, these two paths can decohere and \mathcal{N} will be detected at the dim port 50% of the time.
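As a quick gloss on the caption (in my own notation, not lifted from the paper): for a balanced interferometer tuned so that the dim port is dark when the arms stay coherent, the dim-port probability is controlled entirely by the overlap \gamma = \langle E_{L} \vert E_{R} \rangle between the environment states correlated with the two wavepackets.

```python
import numpy as np

def port_probabilities(gamma):
    """Detection probabilities at the two output ports of a balanced
    Mach-Zehnder interferometer tuned so the dim port is dark when the
    arms stay coherent.  gamma is the overlap <E_L|E_R> of the environment
    states correlated with the left and right wavepackets (1 = no
    which-path record, 0 = perfect record)."""
    p_bright = (1 + np.real(gamma)) / 2
    p_dim = (1 - np.real(gamma)) / 2
    return p_bright, p_dim

for gamma in (1.0, 0.5, 0.0):
    print(f"gamma = {gamma:.1f}  ->  (bright, dim) = {port_probabilities(gamma)}")
```

With \gamma = 1 the dim port stays dark, while with \gamma = 0 (a perfect which-path record in the environment) the dim port fires 50% of the time, matching the caption.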

Below are some FAQs I have received.

Won’t there always be momentum transfer in any nontrivial scattering?

For any nontrivial scattering of two particles, there must be some momentum transfer.  But the momentum transfer can be made arbitrarily small simply by making the mass of the dark particle as tiny as desired (while keeping its velocity fixed).  … [continue reading]
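To see why an arbitrarily feeble kick can still decohere the superposition, here is a back-of-envelope gloss (my own simplification, not an excerpt from the paper). For a single scattering event that transfers momentum \mathbf{q}, the environment states correlated with the two arms differ essentially by a phase, so their overlap is roughly

\langle E_{L} \vert E_{R} \rangle \approx \langle e^{i \mathbf{q} \cdot \Delta \mathbf{x} / \hbar} \rangle_{\mathbf{q}},

an average of a pure phase over the distribution of momentum transfers, where \Delta \mathbf{x} is the spatial separation between the two wavepackets. This overlap can be driven far from unity (lighting up the dim port) once \vert \mathbf{q} \vert \Delta x / \hbar \gtrsim 1, even when \vert \mathbf{q} \vert is far too small for the recoil of the detector to be noticed classically.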

Happier livestock through genetic bundling

Carl Shulman posted on OvercomingBias about an easier way to produce animals that suffer less: selective breeding.  In contrast to a lot of the pie-in-the-sky talk about genetically engineering animals to feel less pain, selective breeding is a proven and relatively cheap method that can produce animals with traits that increase a complicated weighted sum of many parameters.  As Shulman points out, the improvements are impressive, and breeding methods are likely to work just as well for reducing suffering as for increasing milk output (although these goals may conflict in the same animal).

So suppose an animal-welfare organization is able to raise the resources necessary to run such a breeding program.  They immediately run up against the problem of how to induce large-scale farming operations to use their new breed of less-suffering chickens.  Indeed, in the comments to Shulman’s post, Gaverick Matheny pointed out that an example of a welfare-enhanced breed exists but is rarely used because it is less productive.

It’s true that there should be some low-hanging welfare fruit that has negligible effect on farm profits.  But even these changes are unlikely to be adopted due to economic frictions.  (Why would most farmers risk changing to an untested breed produced by an organization generally antagonistic toward them?)  So how can an animal-welfare organization induce adoption of their preferred breed?  My proposal is to bundle their welfare-enhancing modifications with productivity-enhancing modifications that are not already exploited because of inefficiencies in the market for livestock breeds.

The incentives which exist for businesses to invest in selective breeding almost certainly do not lead to the maximum total value.  Breeder businesses are only able to capture some of the value that accrues from their animal improvements.  … [continue reading]