Argument against EA warmth risk

When I’m trying to persuade someone that people ought to concentrate on effectiveness when choosing which charities to fund, I sometimes hear the worry that this sort of emphasis on cold calculation risks destroying the crucial human warmth and emotion that should surround charitable giving. It’s tempting to dismiss this sort of worry out of hand, but it’s much more constructive to address it head on. (I also think it gestures at a real aspect of “EA culture”, although the direction of causality is unclear; it could just be that EA ideas are particularly attractive to us cold, unfeeling robots.) This situation happened to me today, and I struggled to give a short and accessible response. I came up with the following argument later, so I’m posting it here.

It’s often noticed that many of the best surgeons treat their patients like broken machines to be fixed, and lack any sort of bedside manner. Surgeons are also well known for their gallows humor, which has been thought to be a coping mechanism for dealing with death and with the unnatural act of cutting open a living human body. Should we be worried that surgery dehumanizes the surgeon? Well, yes, this is a somewhat valid concern, and one that is even being addressed (with mixed results).

But in context this is only a very mild concern. The overwhelmingly most important thing is that the surgery is actually performed, and that it is done well. If someone said “I don’t think we should have doctors perform surgery because of the potential for it to take the human warmth out of medicine”, you’d rightly call them crazy! No one wants to die from treatable appendicitis, no matter how comforting and heartfelt the doctors are.… [continue reading]

State of EA organizations

Below is a small document I prepared to summarize the current slate of organizations that are strongly related to effective altruism. (There is a bit of arbitrariness about what to include. Some organizations are strongly aligned with EA principles even if they do not endorse that name or the full philosophy. I am not including cause-level organizations for mainstream causes, e.g. developing-world health like Innovations for Poverty Action. I am including all existential-risk organizations, since these are so unusual and potentially important for EA.) A Google doc is available here, which can be exported to many other formats. If I made a mistake, please comment below or email me. Please feel free to take this document and build on it, especially if you would like to expand the highlights section.

By Category

Charity Evaluation

Existential risk

Meta

Former names

“Effective Fundraising” → Charity Science (Greatest Good Foundation)
“Singularity Institute” → Machine Intelligence Research Institute
“Effective Animal Activism” → Animal Charity Evaluators

Organizational relationships

FHI and CEA share office space at Oxford.  CEA is essentially an umbrella organization.  It contains GWWC and 80k, and formerly contained TLYCS and ACE.  Now the latter two organizations operate independently.

MIRI and CFAR currently share office space in Berkeley and collaborate from time to time.… [continue reading]

New review of decoherence by Schlosshauer

Max Schlosshauer has a new review of decoherence and how it relates to understanding the quantum-classical transition. The abstract is:

I give a pedagogical overview of decoherence and its role in providing a dynamical account of the quantum-to-classical transition. The formalism and concepts of decoherence theory are reviewed, followed by a survey of master equations and decoherence models. I also discuss methods for mitigating decoherence in quantum information processing and describe selected experimental investigations of decoherence processes.

I found it very concise and clear for its impressive breadth, and it has extensive cites to the literature. (As you may suspect, he cites me and my collaborators generously!) I think this will become one of the go-to introductions to decoherence, and I highly recommend it to beginners.

Other introductory material includes Schlosshauer’s textbook and RMP (quant-ph/0312059), Zurek’s RMP (quant-ph/0105127) and Physics Today article, and the textbook by Joos et al.… [continue reading]

Entanglement never at first order

When two initially uncorrelated quantum systems interact through a weak coupling, no entanglement is generated at first order in the coupling constant. This is a useful and very easy-to-prove fact that I haven’t seen pointed out anywhere, although I assume someone has. I’d love a citation if you have one.

Suppose two systems \mathcal{A} and \mathcal{B} evolve under U = \exp(- i H t) where the Hamiltonian coupling them is of the form

(1)   \begin{align*} H=H_A + H_B + \epsilon H_I, \end{align*}

with H_A = H_A \otimes I_B and H_B = I_A \otimes H_B as usual. We’ll show that when the systems start out uncorrelated, \vert \psi^0 \rangle = \vert \psi_A^0 \rangle \otimes \vert \psi_B^0 \rangle, they remain unentangled (and therefore, since the global state is pure, uncorrelated) to first order in \epsilon. First, note that local unitaries cannot change the entanglement, so without loss of generality we can consider the modified unitary

(2)   \begin{align*} U' = e^{+i H_A t} e^{+i H_B t} e^{-i H t} \end{align*}

which peels off the unimportant local evolution of \mathcal{A} and \mathcal{B}. Then the Baker–Campbell–Hausdorff formula gives

(3)   \begin{align*} U' = e^{+i H_A t} e^{+i H_B t} e^{-i (H_A + H_B) t} e^{-i \epsilon H_I t}  e^{Z_2} e^{Z_3} \cdots \end{align*}

where the first few Z’s are given by

(4)   \begin{align*} Z_2 &= \frac{(-i t)^2}{2} [H_A+H_B,\epsilon H_I] \\ Z_3 &= \frac{(-i t)^3}{12} \Big( [H_A+H_B,[H_A+H_B,\epsilon H_I]]-  [\epsilon H_I,[H_A+H_B,\epsilon H_I]] \Big) \\ Z_4 &= \cdots. \end{align*}

The key feature here is that every commutator in each of the Z’s contains at least one copy of \epsilon H_I, i.e. all the Z’s are at least first order in \epsilon. That allows us to write

(5)   \begin{align*} U' = e^{-i \epsilon H'_I t} \big(1 + O(\epsilon^2) \big) \end{align*}

for some new H'_I that is independent of \epsilon. Then we simply note that a general Hamiltonian cannot produce entanglement to first order:

(6)   \begin{align*} \rho_A &= \mathrm{Tr}_B \left[ U' \vert \psi^0 \rangle \langle \psi^0 \vert {U'}^\dagger \right] \\ &=  \vert \psi'_A \rangle \langle \psi'_A \vert + O(\epsilon^2) \end{align*}

where

(7)   \begin{align*} \vert \psi'_A \rangle &= \left( I - i \epsilon t \langle \psi^0_B  \vert H_I' \vert  \psi^0_B \rangle \right) \vert \psi^0_A \rangle . \end{align*}

This is potentially a very important (negative) result when considering decoherence detection of very weakly coupled particles. If the coupling is so small that terms beyond first order are negligible (e.g. relic neutrinos), then there is no hope of being sensitive to any decoherence.
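As a quick numerical sanity check (my own sketch, not drawn from any reference): for a pair of qubits with a generic random coupling, the entanglement generated from a product state should scale as \epsilon^2, so halving \epsilon should cut the linear entropy of the reduced state by roughly a factor of four. A minimal Python sketch, assuming NumPy and SciPy:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(d):
    # random Hermitian matrix, used as a generic Hamiltonian term
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

dA = dB = 2
H_A = np.kron(rand_herm(dA), np.eye(dB))
H_B = np.kron(np.eye(dA), rand_herm(dB))
H_I = rand_herm(dA * dB)          # generic interaction coupling A and B
psi0 = np.kron([1, 0], [1, 0])    # initial product state |psi_A> x |psi_B>
t = 1.0

def linear_entropy(eps):
    # 1 - Tr[rho_A^2]; zero exactly when the (pure) global state is a product state
    psi = expm(-1j * t * (H_A + H_B + eps * H_I)) @ psi0
    rho_A = np.einsum('ij,kj->ik', psi.reshape(dA, dB), psi.conj().reshape(dA, dB))
    return 1 - np.real(np.trace(rho_A @ rho_A))

for eps in [1e-2, 5e-3, 2.5e-3]:
    print(eps, linear_entropy(eps))
# successive ratios should approach 4, i.e. the entanglement is O(epsilon^2)

This only checks the scaling for one random instance, of course; the operator argument above is what establishes it in general.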

Of course, non-entangling (unitary) effects may be important. Another way to state this result: two weakly coupled systems act on each other only unitarily to first order in the coupling constant.… [continue reading]

Follow up on contextuality and non-locality

This is a follow up on my earlier post on contextuality and non-locality. As far as I can tell, Spekkens’ paper is the gold standard for how to think about contextuality in the messy real world. In particular, since the idea of “equivalent” measurements is key, we might never be able to establish that we are making “the same” measurement from one experiment to the next; there could always be small microscopic differences for which we are unable to account. However, Spekkens’ idea of forming equivalence classes from measurement protocols that always produce the same results is very natural. It isolates, as much as possible, the inherent ugliness of a contextual model that gives different ontological descriptions for measurements that somehow always seem to give identical results.

I also learned an extremely important thing in my background reading. Apparently John Bell discovered contextuality a few years before Kochen and Specker (KS). (This is according to Mermin’s RMP on contextuality and locality; I haven’t gone back and read Bell’s papers to make sure he really did describe something equivalent to the KS theorem.) More importantly, Bell’s theorem on locality grew out of this discovery; the theorem is just a special case of contextuality where “the context” is a space-like separated measurement.

So now I think I can get behind Spekkens’ idea that contextuality is more important than non-locality per se. It seems very plausible to me that the general idea of contextuality is driving at the key thing that’s weird about quantum mechanics (QM) and that — if QM is one day more clearly explained by a successor theory — we will find that the non-local special case of contextuality isn’t particularly different from local versions.… [continue reading]

Wigner function = Fourier transform + Coordinate rotation

[Follow-up post: In what sense is the Wigner function a quasiprobability distribution?]

I’ve never liked how people introduce the Wigner function (aka the Wigner quasi-probability distribution). Usually, they just write down a definition like

(1)   \begin{align*} W(x,p) = \frac{1}{\pi \hbar} \int \mathrm{d}y \rho(x+y, x-y) e^{-2 i p y/\hbar} \end{align*}

and say that it’s the “closest phase-space representation” of a quantum state. One immediately wonders: What’s with the weird factor of 2, and what the heck is y? Usually, the only justification given for the probability interpretation is that integrating over one of the variables recovers the probability distribution for the other (if it were measured):

(2)   \begin{align*} \int \! \mathrm{d}p \, W(x,p) = \rho(x,x) , \\ \int \! \mathrm{d}x \, W(x,p) = \hat{\rho}(p,p) , \end{align*}

where \hat{\rho}(p,p') is just the density matrix in the momentum basis. But of course, that doesn’t really tell us why we should think of W(x,p) as having anything to do with the (rough) value of x conditional on a (rough) value of p.

Well now I have a much better idea of what the Wigner function actually is and how to interpret it. We start by writing it down in sane variables (and suppress \hbar):

(3)   \begin{align*} W(\bar{x},\bar{p}) = \frac{1}{2 \pi} \int \! \mathrm{d}\Delta x \,\rho \left(\bar{x}+\frac{\Delta x}{2}, \bar{x}-\frac{\Delta x}{2} \right) e^{-i \bar{p} \Delta x}. \end{align*}

So the first step in the interpretation is to consider the function

(4)   \begin{align*} M(\bar{x},\Delta x) \equiv  \rho \left(\bar{x}+\frac{\Delta x}{2}, \bar{x}-\frac{\Delta x}{2} \right) , \end{align*}

which appears in the integrand. This is just the (position-space) density matrix in rotated coordinates \bar{x} \equiv (x+x')/2 and \Delta x \equiv x-x'. There is a strong sense in which the off-diagonal terms of the density matrix represent the quantum coherence of the state between different positions, so \Delta x indexes how far this coherence extends; large values of \Delta x indicate large spatial coherence. On the other hand, \bar{x} indexes how far down the diagonal of the density matrix we move; it’s the average position of the two points between which the off-diagonal terms of the density matrix measure coherence. (See the figure below.)


The function M(\bar{x},\Delta x) is just the position-space density matrix \rho(x,x') rotated in new coordinates: (\bar{x},\Delta x) = ((x+x')/2,x-x').
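To make the two-step picture concrete, here is a minimal numerical sketch (my own, with \hbar = 1 and a pure Gaussian wavepacket of width \sigma as the example state, so \rho(x,x') = \psi(x)\psi^*(x')): build the rotated density matrix M(\bar{x},\Delta x), Fourier transform over \Delta x, and compare against the known Gaussian Wigner function W(x,p) = e^{-x^2/\sigma^2 - \sigma^2 p^2}/\pi.

import numpy as np

sigma = 1.0

def psi(x):
    # Gaussian wavefunction of width sigma (e.g. a harmonic-oscillator ground state)
    return (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

def wigner(xbar, pbar, dxs):
    # step 1: rotate coordinates -- M(xbar, dx) = rho(xbar + dx/2, xbar - dx/2)
    M = psi(xbar + dxs / 2) * np.conj(psi(xbar - dxs / 2))
    # step 2: Fourier transform over the coherence variable dx (simple Riemann sum)
    step = dxs[1] - dxs[0]
    return np.real(np.sum(M * np.exp(-1j * pbar * dxs)) * step) / (2 * np.pi)

dxs = np.linspace(-20, 20, 4001)
print(wigner(0.0, 0.0, dxs), 1 / np.pi)                    # both ~0.3183
print(wigner(1.0, 0.5, dxs), np.exp(-1.0 - 0.25) / np.pi)  # agree to numerical precision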
[continue reading]

Wavepacket spreading produces force sensitivity

I’m still trying to decide if I understand this correctly, but it looks like coherent wavepacket spreading is sufficient to produce states of a test mass that are highly sensitive to weak forces. The Wigner function of a coherent wavepacket is sheared horizontally in phase space (see hand-drawn figure). A weak force that perturbs it with only a small momentum shift can then produce an orthogonal state of the test mass.


The Gaussian wavepacket of a test mass (left) will be sheared horizontally in phase space by the free-particle evolution governed by H=p^2/2m. A small vertical (i.e. momentum) shift by a weak force can then produce an orthogonal state of the test mass, while it would not for the unsheared state. However, discriminating between the shifted and unshifted wavepackets requires a momentum-like measurement; position measurements would not suffice.
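To put a formula to this (my own summary of the standard Gaussian result, with \hbar = 1, not taken from the original references): for an initially minimum-uncertainty Gaussian wavepacket with no position-momentum correlation, the overlap with its momentum-kicked copy is

\begin{align*} \big\vert \langle \psi_t \vert e^{i \delta p \, \hat{x}} \vert \psi_t \rangle \big\vert = e^{-\delta p^2 \sigma_x^2(t)/2}, \qquad \sigma_x^2(t) = \sigma_x^2(0) + \frac{t^2 \sigma_p^2}{m^2}, \end{align*}

so the same weak kick \delta p drives the state ever closer to orthogonality as free evolution shears the wavepacket and \sigma_x(t) grows. As noted in the caption, actually distinguishing the kicked and unkicked states still requires something like a momentum measurement.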

Of course, we could simply start with a wavepacket with a very wide spatial width and narrow momentum width. Back when this was being discussed by Caves and others in the ’80s, they recognized that these states would have such sensitivity. However, they pointed out, this couldn’t really be exploited because of the difficulty in making true momentum measurements. Rather, we usually measure momentum indirectly by allowing the normal free-particle (H=p^2/2m) evolution to carry the state to different points in space, and then measuring position. But this doesn’t work under the condition in which we’re interested: when the time between measurements is limited. (The original motivation was detecting gravitational waves, which transmit zero net momentum when averaged over the time interval on which the wave interacts with the test mass. The only way to notice the wave is to measure it in the act, since the momentum transfer can be finite for intermediate times.) [continue reading]

Comments on Gell-Mann & Hartle’s latest

Back in December Gell-Mann and Hartle (G&H) posted their latest paper on consistent histories, “Adaptive Coarse Graining, Environment, Strong Decoherence, and Quasiclassical Realms”. Here are my thoughts.

The discussion of adaptive coarse graining was brief and very much in agreement with previous work.

G&H then give a name and formal description to the idea, long part of the intuitive lore, of a history being defined by the values taken by a particular variable over many time steps. (This might be the position of an object, which is being recorded to some accuracy by an environment that decoheres it.) The key idea is that all the Schrödinger-picture projectors P^{k}_{\alpha_k} at different times t_k defining the history commute:

(1)   \begin{align*} [P^{k}_{\alpha_k},P^{k'}_{\alpha_{k'}}]=0 \quad \forall k,k' \end{align*}

This they call the narrative condition. From it, one is able to define a smallest set of maximal projectors Q_i (which they call a common framework) that obey either Q_i \le P^{k}_{\alpha_k} or Q_i  P^{k}_{\alpha_k} = P^{k}_{\alpha_k} Q_i = 0 for all P^{k}_{\alpha_k}. For instance, if the P’s are onto spatial volumes of position, then the Q’s are just the minimal partition of position space such that the region associated with each Q_i is fully contained in the regions corresponding to some of the P^{k}_{\alpha_k}, and is completely disjoint from the regions corresponding to the others.
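As a toy illustration (mine, not from the paper): suppose the variable is one-dimensional position and the history projectors at two times are

\begin{align*} P^{1}_{1} = \int_{x \le 0} \! \mathrm{d}x \, \vert x \rangle \langle x \vert , \quad P^{1}_{2} = \int_{x > 0} \! \mathrm{d}x \, \vert x \rangle \langle x \vert , \quad P^{2}_{1} = \int_{x \le 1} \! \mathrm{d}x \, \vert x \rangle \langle x \vert , \quad P^{2}_{2} = \int_{x > 1} \! \mathrm{d}x \, \vert x \rangle \langle x \vert . \end{align*}

These all commute, and the common framework consists of the three projectors onto x \le 0, 0 < x \le 1, and x > 1; each Q_i is contained in exactly one projector from each time and is orthogonal to the rest.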


For time steps k=1,\ldots,4, the history projectors at a given time project into subspaces which disjointly span Hilbert space. The narrative condition states that all history projectors commute, which means we can think of them as projecting onto disjoint subsets forming a partition of the range of some variable (e.g. position). The common framework is just the set of smaller projectors that also partitions the range of the variable and which obey Q \le P or QP = PQ = 0 for each P and Q.
[continue reading]

Contextuality versus nonlocality

I wanted to understand Rob Spekkens’ self-described lonely view that the contextual aspect of quantum mechanics is more important than the non-local aspect. Although I like to think I know a thing or two about the foundations of quantum mechanics, I’m embarrassingly unfamiliar with the discussion surrounding contextuality. 90% of my understanding comes from this famous explanation by David Bacon at his old blog. (Non-experts should definitely take the time to read that nice little post.) What follows are my thoughts before diving into the literature.

I find the map-territory distinction very important for thinking about this. Bell’s theorem isn’t a theorem about quantum mechanics (QM) per se; it’s a theorem about locally realistic theories. It says that if the universe satisfies certain very reasonable assumptions, then it will behave in a certain manner. We observe that it doesn’t behave in this manner, therefore the universe doesn’t satisfy those assumptions. The only reason that QM comes into it is that QM correctly predicts the misbehavior, whereas classical mechanics does not (since classical mechanics satisfies the assumptions).

Now, if you’re comfortable writing down a unitarily evolving density matrix of macroscopic systems, then the mechanism by which QM is able to misbehave is actually fairly transparent. Write down an initial state, evolve it, and behold: the wavefunction is a sum of branches of macroscopically distinct outcomes with the appropriate statistics (assuming the Born rule). The importance of Bell’s Theorem is not that it shows that QM is weird, it’s that it shows that the universe is weird. After all, we knew that the QM formalism violated all sorts of our intuitions: entanglement, Heisenberg uncertainty, wave-particle duality, etc.; we didn’t need Bell’s theorem to tell us QM was strange.… [continue reading]

Consistency conditions in consistent histories

[This is akin to a living review, which may improve from time to time. Last edited 2015-4-27.]

This post will summarize the various consistency conditions that can be found discussed in the consistent histories literature. Most of the conditions have gone by different names under different authors (and sometimes even under the same author), so I’ll try to give all the aliases I know; just hover over the footnote markers.
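For orientation (these are the standard definitions, not tied to any one author’s terminology): the conditions below all constrain the decoherence functional built from the class operators of a set of histories,

\begin{align*} D(\alpha, \alpha') = \mathrm{Tr}\big[ C_\alpha \rho C_{\alpha'}^\dagger \big], \qquad C_\alpha = P^{n}_{\alpha_n}(t_n) \cdots P^{1}_{\alpha_1}(t_1), \end{align*}

where the P^{k}_{\alpha_k}(t_k) are Heisenberg-picture projectors and \rho is the initial state. The weakest common requirement is that the real part of D(\alpha,\alpha') vanish for \alpha \ne \alpha' (so that probabilities add correctly under coarse graining), while stronger conditions demand that D(\alpha,\alpha') itself, or still finer-grained objects, vanish.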

There is an overarching schism in the choice of terminology in the literature between the terms “consistent” and “decoherent”. Most authors, including Gell-Mann and Hartle, now use the term “decoherent” very loosely and no longer employ “consistent” as an official label for any particular condition (or for the formalism as a whole). Zurek and I believe this is a significant loss in terminology, and we are stubbornly resisting it. In our recent arXiv offering, our rant was thus:

…we emphasize that decoherence is a dynamical physical process predicated on a distinction between system and environment, whereas consistency is a static property of a set of histories, a Hamiltonian, and an initial state. For a given decohering quantum system, there is generally a preferred basis of pointer states [1, 8]. In contrast, the mere requirement of consistency does not distinguish a preferred set of histories which describe classical behavior from any of the many sets with no physical interpretation.

(See also the first footnote on page 3347 of “Classical Equations for Quantum Systems” by Gell-Mann and Hartle, which agrees with the importance of this conceptual distinction.) Since Gell-Mann and Hartle did many of the investigations of consistency conditions, some conditions have only appeared in the literature using their terminology (like “medium-strong decoherence”).… [continue reading]

Direct versus indirect measurements

andrelaszlo on HackerNews asked how someone could draw a reasonable distinction between “direct” and “indirect” measurements in science. Below is how I answered. This is old hat to many folks and, needless to say, none of this is original to me.

There’s a good philosophy of science argument to be made that there’s no precise and discrete distinction between direct and indirect measurement. In our model of the universe, there are always multiple physical steps that link the phenomena under investigation to our conscious perception. Therefore, any conclusions we draw from a perception are conditional on our confidence in the entire causal chain performing reliably (e.g. a gravitational wave induces a B-mode in the CMB, which propagates as a photon to our detectors, which heats up a transition-edge sensor, which increases the resistivity of the circuit, which flips a bit in the flash memory, which is read out to a monitor, which emits photons to our eye, which change the nerves firing in our brain). “Direct” measurements, then, are just ones that rely on a small number of reliable inferences, while “indirect” measurements rely on a large number of less reliable inferences.

Nonetheless, in practice there is a rather clear distinction which declares “direct” measurements to be those that take place locally (in space) using well-characterized equipment that we can (importantly) manipulate, and which is conditional only on physical laws which are very strongly established. All other measurements are called “indirect”, generally because they are observational (i.e. no manipulation of the experimental parameters), are conditional on tenuous ideas (i.e. naturalness arguments as indirect evidence for supersymmetry), and/or involve intermediary systems that are not well understood (e.g.

[continue reading]

Comments on Hotta’s Quantum Energy Teleportation

[This is a “literature impression”.]

Masahiro Hotta has a series of papers about what he calls “quantum energy teleportation (QET)”, modeled after the well-known notion of quantum teleportation (of information). Although it sounds like crazy crackpot stuff, and the papers contain the red-flag term “zero-point energy”, the basic physics of Hotta’s work is sound. But it doesn’t appear to have important consequences for energy transmission.

The idea is to exploit the fact that the ground state of the vacuum in QFT is, in principle, entangled over arbitrary distances. In a toy Alice and Bob model with respective systems A and B, you assume a Hamiltonian for which the ground state is unique and entangled. Then, Alice makes a local measurement on her system A. Neither of the two conditional global states for the joint AB system — conditional on the outcome of the measurement — is an eigenstate of the Hamiltonian, and therefore the average energy of the joint system must increase. The source of this energy is the device Alice used to make the measurement. Now, if Bob were to independently make a measurement of his system, he would find that energy would also necessarily flow from his device into the joint system; this follows from the symmetry of the problem. But if he waits for Alice to transmit the outcome of her measurement, it turns out that he can apply a local unitary to his B system and a subsequent local measurement that leads to a net average energy flow to his equipment. The fact that he must wait for the outcome of Alice’s measurement, which travels no faster than the speed of light, is what gives this the flavor of teleportation.… [continue reading]

Literature impressions

I have often been frustrated by the inefficiency of reading through the physics literature. One problem is that physicists are sometimes bad teachers and are usually bad writers, and so it can take a long time reading a paper before you even figure out what the author is trying to say. This gets worse when you look at papers that aren’t in your immediate physics niche, because then the author will probably use assumptions, mathematical techniques, and terminology you aren’t familiar with. If you had infinite time, you could spend days reading every paper that looks reasonably interesting, but you don’t. A preferred technique is to ask your colleagues to explain it to you, because they are more likely to speak your language and (unlike a paper) can answer your questions when you come up against a confusion. But generally your colleagues haven’t read it; they want you to read it so you can explain it to them. I spend a lot of time reading papers that end up being uninteresting, but it’s worth it for the occasional gems. And it seems clear that there is a lot of duplicated work being done sorting through the chaff.

So on the one hand we have a lengthy, fixed document from a single, often unfamiliar perspective (i.e. the actual paper in a different field) and on the other hand we have a breathing human being in your own field who will patiently explain things to you. An intermediate solution would be to have a few people in different fields read the paper and then translate the key parts into their field’s language, which could then be passed around.… [continue reading]

Hanson-ism: Travel isn’t about intellectual exposure

I often hear very smart and impressive people say that others (especially Americans) who don’t travel much have too narrow a view of the world. They haven’t been exposed to different perspectives because they haven’t traveled much. They focus on small differences of opinion within their own sphere while remaining ignorant of larger differences abroad.

Now, I think that there is a grain of truth to this, maybe even with the direction of causality pointing in the correct way. And I think it’s plausible that it really does affect Americans more than folks of similar means in Europe. (Of course, here I would say the root cause is mostly economic rather than cultural; America’s size gives it a greater degree of self-sufficiency, which means its citizens have fewer reasons to travel. This is similar to the fact that it’s much less profitable for the average American to become fluent in a second language than for a typical European, even a Briton. I think it’s obvious that if you could magically break up the American states into 15 separate nations, each with a different language, you’d get a complete reversal of these effects almost immediately.) But it’s vastly overstated because of the status boost to the people saying it.

The same people who claim that foreign travel is very important for intellectual exposure almost never emphasize reading foreign writing. Perhaps in the past one had to travel thousands of miles to really get exposed to the brilliant writers and artists who huddled in Parisian cafes, but this is no longer true in the age of the internet. (And maybe it hasn’t been true since the printing press.) Today, one can be exposed to vastly more—and more detailed—views by reading foreign journals, newspapers, blogs, and books.… [continue reading]

Impact discrepancies persist under uncertainty

[Tomasik has updated his essay to address some of these issues]

Brian Tomasik’s website, utilitarian-essays.com, contains many thoughtful pieces he has written over the years from the perspective of a utilitarian who is concerned deeply with wild animal suffering. His work has been a great resource for what is now called the effective altruism community, and I have a lot of respect for his unflinching acceptance and exploration of our large obligations conditional on the moral importance of all animals.

I want to briefly take issue with a small but important part of Brian’s recent essay “Charity cost effectiveness in an uncertain world”. He discusses the difficult problem facing consequentialists who care about the future, especially the far future, on account of how difficult it is to predict the many varied flow-through effects of our actions. In several places, he suggests that this uncertainty will tend to wash out the enormous differences in effectiveness attributed to various charities (and highlighted by effective altruists) when measured by direct impact (e.g. lives saved per dollar).

…When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing…

…For example, insofar as a charity encourages cooperation, philosophical reflection, and meta-thinking about how to best reduce suffering in the future — even if only by accident — it has valuable flow-through effects, and it’s unlikely these can be beaten by many orders of magnitude by something else…

…I don’t expect some charities to be astronomically better than others…

Although I agree on the importance of the uncertain implications of flow-through effects, I disagree with the suggestion that this should generally be expected to even out differences in effectiveness.… [continue reading]