Argument against EA warmth risk

When I’m trying to persuade someone that people ought to concentrate on effectiveness when choosing which charities to fund, I sometimes hear the worry that this sort of emphasis on cold calculation risks destroying the crucial human warmth and emotion that should surround charitable giving. It’s tempting to dismiss this sort of worry out of hand, but it’s much more constructive to address it head on. (I also think it gestures at a real aspect of “EA culture”, although the direction of causality is unclear. It could just be that EA ideas are particularly attractive to us cold unfeeling robots.) This situation happened to me today, and I struggled for a short and accessible response. I came up with the following argument later, so I’m posting it here.

It’s often noticed that many of the best surgeons treat their patients like a broken machine to be fixed, and lack any sort of bedside manner. Surgeons are also well known for their gallows humor, which has been thought to be a coping mechanism to deal with death and with the unnatural act of cutting open a living human body. Should we be worried that surgery dehumanizes the surgeon? Well, yes, this is a somewhat valid concern, which is even being addressed (with mixed results).

But in context this is only a very mild concern. The overwhelmingly most important thing is that the surgery is actually performed, and that it is done well. If someone said “I don’t think we should have doctors perform surgery because of the potential for it to take the human warmth out of medicine”, you’d rightly call them crazy! No one wants to die from treatable appendicitis, no matter how warm and heartfelt the doctors are.… [continue reading]

State of EA organizations

Below is a small document I prepared to summarize the current slate of organizations that are strongly related to effective altruism. (There is a bit of arbitrariness about what to include. There are some organizations that are strongly aligned with EA principles even if they do not endorse that name or the full philosophy. I am not including cause-level organizations for mainstream causes, e.g. developing-world health like Innovations for Poverty Action, but I am including all existential risk organizations, since these are so unusual and potentially important for EA.) A Google doc is available here, which can be exported to many other formats. If I made a mistake, please comment below or email me. Please feel free to take this document and build on it, especially if you would like to expand the highlights section.

By Category

Charity Evaluation

Existential risk

Meta

Former names

“Effective Fundraising” → Charity Science (Greatest Good Foundation)
“Singularity Institute” → Machine Intelligence Research Institute
“Effective Animal Activism” → Animal Charity Evaluators

Organizational relationships

FHI and CEA share office space at Oxford.  CEA is essentially an umbrella organization.  It contains GWWC and 80k, and formerly contained TLYCS and ACE.  Now the latter two organizations operate independently.

MIRI and CFAR currently share office space in Berkeley and collaborate from time to time.… [continue reading]

New review of decoherence by Schlosshauer

Max Schlosshauer has a new review of decoherence and how it relates to understanding the quantum-classical transition. The abstract is:

I give a pedagogical overview of decoherence and its role in providing a dynamical account of the quantum-to-classical transition. The formalism and concepts of decoherence theory are reviewed, followed by a survey of master equations and decoherence models. I also discuss methods for mitigating decoherence in quantum information processing and describe selected experimental investigations of decoherence processes.

I found it very concise and clear for its impressive breadth, and it has extensive cites to the literature. (As you may suspect, he cites me and my collaborators generously!) I think this will become one of the go-to introductions to decoherence, and I highly recommend it to beginners.

Other introductory material includes Schlosshauer’s textbook and RMP (quant-ph/0312059), Zurek’s RMP (quant-ph/0105127) and Physics Today article, and the textbook by Joos et al.… [continue reading]

Entanglement never at first order

When two initially uncorrelated quantum systems interact through a weak coupling, no entanglement is generated at first order in the coupling constant. This is a useful fact that is very easy to prove, but I haven’t seen it pointed out anywhere, although I assume someone has. I’d love a citation if you have one.

Suppose two systems \mathcal{A} and \mathcal{B} evolve under U = \exp(- i H t) where the Hamiltonian coupling them is of the form

(1)   \begin{align*} H=H_A + H_B + \epsilon H_I, \end{align*}

with H_A = H_A \otimes I_B and H_B = I_A \otimes H_B as usual. We’ll show that when the systems start out uncorrelated, \vert \psi^0 \rangle = \vert \psi_A^0 \rangle \otimes \vert \psi_B^0 \rangle, they remain unentangled (and therefore, since the global state is pure, uncorrelated) to first order in \epsilon. First, note that local unitaries cannot change the entanglement, so without loss of generality we can consider the modified unitary

(2)   \begin{align*} U' = e^{+i H_A t} e^{+i H_B t} e^{-i H t} \end{align*}

which peels off the unimportant local evolution of \mathcal{A} and \mathcal{B}. Then the Baker–Campbell–Hausdorff formula gives

(3)   \begin{align*} U' = e^{+i H_A t} e^{+i H_B t} e^{-i (H_A + H_B) t} e^{-i \epsilon H_I t}  e^{Z_2} e^{Z_3} \cdots \end{align*}

where the first few Z's are given by

(4)   \begin{align*} Z_2 &= \frac{(-i t)^2}{2} [H_A+H_B,\epsilon H_I] \\ Z_3 &= \frac{(-i t)^3}{12} \Big( [H_A+H_B,[H_A+H_B,\epsilon H_I]]-  [\epsilon H_I,[H_A+H_B,\epsilon H_I]] \Big) \\ Z_4 &= \cdots. \end{align*}

The key feature here is that every commutator in each of the Z's contains at least one copy of \epsilon H_I, i.e. all the Z's are at least first order in \epsilon. That allows us to write

(5)   \begin{align*} U' = e^{-i \epsilon H'_I t} \big(1 + O(\epsilon^2) \big) \end{align*}

for some new H'_I that is independent of \epsilon. Then we just note that a general Hamiltonian cannot produce entanglement to first order:

(6)   \begin{align*} \rho_A &= \mathrm{Tr}_B \left[ U' \vert \psi^0 \rangle \langle \psi^0 \vert {U'}^\dagger \right] \\ &=  \vert \psi'_A \rangle \langle \psi'_A \vert + O(\epsilon^2) \end{align*}

where

(7)   \begin{align*} \vert \psi'_A \rangle &= \left( I - i \epsilon t \langle \psi^0_B  \vert H_I' \vert  \psi^0_B \rangle \right) \vert \psi^0_A \rangle . \end{align*}
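As a sanity check, here is a minimal numerical sketch on two qubits (random Hermitian operators and arbitrary parameters, chosen just for illustration): the linear entropy 1 - \mathrm{Tr}[\rho_A^2] of the reduced state should fall off as \epsilon^2, confirming that no entanglement appears at first order.

import numpy as np
from scipy.linalg import expm

# Toy check: with a product initial state and weak coupling eps*H_I, the
# mixedness of the reduced state (linear entropy) should scale as eps^2.
rng = np.random.default_rng(0)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

HA = np.kron(rand_herm(2), np.eye(2))   # H_A acting only on system A
HB = np.kron(np.eye(2), rand_herm(2))   # H_B acting only on system B
HI = rand_herm(4)                       # generic interaction H_I

psi0 = np.kron([1.0, 0.0], [1.0, 0.0]).astype(complex)  # uncorrelated |0>|0>
t = 1.0

for eps in [1e-2, 1e-3, 1e-4]:
    psi = expm(-1j * (HA + HB + eps * HI) * t) @ psi0
    M = psi.reshape(2, 2)               # amplitudes M[a, b] = <a, b|psi>
    rhoA = M @ M.conj().T               # reduced density matrix of system A
    print(eps, 1 - np.trace(rhoA @ rhoA).real)  # drops ~100x per step: O(eps^2)

The hundred-fold drop in linear entropy for each factor-of-ten decrease in \epsilon is the advertised second-order scaling.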

This is potentially a very important (negative) result when considering decoherence detection of very weakly coupled particles. If the coupling is so small that terms beyond first order are negligible (e.g. relic neutrinos), then there is no hope of being sensitive to any decoherence.

Of course, non-entangling (unitary) effects may be important. Another way to say this result is: Two weakly coupled systems act only unitarily on each other to first order in the coupling constant.… [continue reading]

Follow up on contextuality and non-locality

This is a follow-up on my earlier post on contextuality and non-locality. As far as I can tell, Spekkens’s paper is the gold standard for how to think about contextuality in the messy real world. In particular, since the idea of “equivalent” measurements is key, we might never be able to establish that we are making “the same” measurement from one experiment to the next; there could always be small microscopic differences for which we are unable to account. However, Spekkens’s idea of forming equivalence classes from measurement protocols that always produce the same results is very natural. It isolates, as much as possible, the inherent ugliness of a contextual model that gives different ontological descriptions for measurements that somehow always seem to give identical results.

I also learned an extremely important thing in my background reading. Apparently John Bell discovered contextuality a few years before Kochen and Specker (KS). (This is according to Mermin’s RMP on contextuality and locality. I haven’t gone back and read Bell’s papers to make sure he really did describe something equivalent to the KS theorem.) More importantly, Bell’s theorem on locality grew out of this discovery; the theorem is just a special case of contextuality where “the context” is a space-like separated measurement.

So now I think I can get behind Spekkens’s idea that contextuality is more important than non-locality, especially non-locality per se. It seems very plausible to me that the general idea of contextuality is driving at the key thing that’s weird about quantum mechanics (QM) and that — if QM is one day more clearly explained by a successor theory — we will find that the non-local special case of contextuality isn’t particularly different from local versions.… [continue reading]

Wigner function = Fourier transform + Coordinate rotation

[Follow-up post: In what sense is the Wigner function a quasiprobability distribution?]

I’ve never liked how people introduce the Wigner function (aka the Wigner quasi-probability distribution). Usually, they just write down a definition like

(1)   \begin{align*} W(x,p) = \frac{1}{\pi \hbar} \int \mathrm{d}y \rho(x+y, x-y) e^{-2 i p y/\hbar} \end{align*}

and say that it’s the “closest phase-space representation” of a quantum state. One immediately wonders: What’s with the weird factor of 2, and what the heck is y? Usually, the only justification given for the probability interpretation is that integrating over one of the variables recovers the probability distribution for the other (if it were measured):

(2)   \begin{align*} \int \! \mathrm{d}p \, W(x,p) = \rho(x,x) , \\ \int \! \mathrm{d}x \, W(x,p) = \hat{\rho}(p,p) , \end{align*}

where \hat{\rho}(p,p') is just the density matrix in the momentum basis. But of course, that doesn’t really tell us why we should think of W(x,p) as having anything to do with the (rough) value of x conditional on a (rough) value of p.

Well now I have a much better idea of what the Wigner function actually is and how to interpret it. We start by writing it down in sane variables (and suppress \hbar):

(3)   \begin{align*} W(\bar{x},\bar{p}) = \frac{1}{2 \pi} \int \! \mathrm{d}\Delta x \,\rho \left(\bar{x}+\frac{\Delta x}{2}, \bar{x}-\frac{\Delta x}{2} \right) e^{-i \bar{p} \Delta x}. \end{align*}

So the first step in the interpretation is to consider the function

(4)   \begin{align*} M(\bar{x},\Delta x) \equiv  \rho \left(\bar{x}+\frac{\Delta x}{2}, \bar{x}-\frac{\Delta x}{2} \right) , \end{align*}

which appears in the integrand. This is just the (position-space) density matrix in rotated coordinates \bar{x} \equiv (x+x')/2 and \Delta x = x-x'. There is a strong sense in which the off-diagonal terms of the density matrix represent the quantum coherence of the state between different positions, so \Delta x indexes how far this coherence extends; large values of \Delta x indicate large spatial coherence. On the other hand, \bar{x} indexes how far down the diagonal of the density matrix we move; it’s the average position of the two points between which the off-diagonal terms of the density matrix measures coherence. (See the figure below.)… [continue reading]
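To see this rotate-then-Fourier-transform picture in action, here is a small numerical sketch (with \hbar = 1, an arbitrary Gaussian wavepacket, and an arbitrary grid): build \rho(x,x'), read it off along the rotated coordinates (\bar{x}, \Delta x), and Fourier transform in \Delta x. Integrating the result over \bar{p} then recovers the position distribution \rho(x,x), as in Eq. (2).

import numpy as np

N, L = 256, 30.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Gaussian wavepacket centered at x = 1 with mean momentum 2 (arbitrary choices)
psi = np.exp(-(x - 1.0) ** 2 / 2) * np.exp(2.0j * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
rho = np.outer(psi, psi.conj())                        # rho(x, x')

W = np.empty((N, N))
for i in range(N):                                     # x_bar = x[i]
    # anti-diagonal slice M(x_bar, Delta_x) = rho(x_bar + Delta_x/2, x_bar - Delta_x/2),
    # in FFT ordering (Delta_x = 2*k*dx, wrapping periodically; fine for localized states)
    m = np.array([rho[(i + k) % N, (i - k) % N] for k in range(N)])
    W[i] = np.fft.fft(m).real * dx / np.pi             # W(x_bar, p) on the FFT momentum grid

p = 2 * np.pi * np.fft.fftfreq(N, d=2 * dx)            # momentum grid (FFT ordering)
dp = 2 * np.pi / (N * 2 * dx)

# Marginal check: integrating W over p recovers the position distribution rho(x, x).
print(np.max(np.abs(W.sum(axis=1) * dp - np.abs(psi) ** 2)))  # ~1e-16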

Wavepacket spreading produces force sensitivity

I’m still trying to decide if I understand this correctly, but it looks like coherent wavepacket spreading is sufficient to produce states of a test-mass that are highly sensitive to weak forces. The Wigner function of a coherent wavepacket is sheared horizontally in phase space (see hand-drawn figure). A force that perturbs it slightly with a small momentum shift will still produce an orthogonal state of the test mass.


The Gaussian wavepacket of a test mass (left) will be sheared horizontally in phase space by the free-particle evolution governed by H=p^2/2m. A small vertical (i.e. momentum) shift by a weak force can then produce an orthogonal state of the test mass, while it would not for the unsheared state. However, discriminating between the shifted and unshifted wavepackets requires a momentum-like measurement; position measurements would not suffice.

Of course, we could simply start with a wavepacket with a very wide spatial width and narrow momentum width. Back when this was being discussed by Caves and others in the ’80s, they recognized that these states would have such sensitivity. However, they pointed out, this couldn’t really be exploited because of the difficulty in making true momentum measurements. Rather, we usually measure momentum indirectly by allowing the normal free-particle (H=p^2/2m) evolution to carry the state to different points in space, and then measuring position. But this doesn’t work under the condition in which we’re interested: when the time between measurements is limited. (The original motivation was detecting gravitational waves, which transmit zero net momentum when averaged over the time interval on which the wave interacts with the test mass. The only way to notice the wave is to measure it in the act, since the momentum transfer can be finite for intermediate times.)… [continue reading]
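As a rough numerical illustration of this shearing argument (with \hbar = 1, unit mass, and made-up parameters): freely evolve a Gaussian wavepacket, apply a small momentum kick \delta p, and watch the overlap between the kicked and unkicked packets drop toward zero as the packet spreads.

import numpy as np

N, L = 2048, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])       # momentum grid (FFT ordering)

sigma0, delta_p, m = 1.0, 0.2, 1.0                     # arbitrary initial width, kick, and mass
psi0 = np.exp(-x ** 2 / (4 * sigma0 ** 2)).astype(complex)
psi0 /= np.linalg.norm(psi0)

for t in [0.0, 5.0, 20.0, 80.0]:
    # free evolution under H = p^2 / (2m), applied in momentum space
    psi_t = np.fft.ifft(np.exp(-1j * p ** 2 * t / (2 * m)) * np.fft.fft(psi0))
    # overlap of the kicked packet exp(i*delta_p*x)|psi_t> with the unkicked one;
    # for a Gaussian this is exp(-delta_p^2 * sigma_x(t)^2 / 2), decaying as the packet spreads
    print(t, abs(np.vdot(psi_t, np.exp(1j * delta_p * x) * psi_t)))

The same small kick that is nearly invisible at t = 0 produces a nearly orthogonal state once the packet has sheared, though, as noted in the caption above, actually discriminating the two requires a momentum-like measurement.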

Comments on Gell-Mann & Hartle’s latest

Back in December Gell-Mann and Hartle (G&H) posted their latest paper on consistent histories, “Adaptive Coarse Graining, Environment, Strong Decoherence, and Quasiclassical Realms”. Here are my thoughts.

The discussion of adaptive coarse graining was brief and very much in agreement with previous work.

G&H then give a name and formal description to the idea, long part of the intuitive lore, of a history being defined by the values taken by a particular variable over many time steps. (This might be the position of an object, which is being recorded to some accuracy by an environment that decoheres it.) The key idea is that all the Schrödinger-picture projectors P^{k}_{\alpha_k} at different times t_k defining the history commute:

(1)   \begin{align*} [P^{k}_{\alpha_k},P^{k'}_{\alpha_{k'}}]=0 \quad \forall k,k' \end{align*}

This they call the narrative condition. From it, one is able to define a smallest set of maximal projectors Q_i (which they call a common framework) that obey either Q_i \le P^{k}_{\alpha_k} or Q_i  P^{k}_{\alpha_k} = P^{k}_{\alpha_k} Q_i = 0 for all P^{k}_{\alpha_k}. For instance, if the P's are projectors onto spatial volumes of position, then the Q's are just the minimal partition of position space such that the region associated with each Q_i is fully contained in the regions corresponding to some of the P^{k}_{\alpha_k}, and is completely disjoint from the regions corresponding to the others.
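In that special case, where each P^{k}_{\alpha_k} projects onto a region of a discretized position space, the common framework is just the common refinement of the partitions used at the various time steps. A toy sketch with made-up partitions of a 12-point grid:

from itertools import product

# Hypothetical "position" partitions at two time steps
P1 = [set(range(0, 6)), set(range(6, 12))]
P2 = [set(range(0, 4)), set(range(4, 9)), set(range(9, 12))]

# The common framework: the coarsest partition whose every cell lies entirely
# inside or entirely outside each P region (nonempty pairwise intersections).
Q = [a & b for a, b in product(P1, P2) if a & b]
print(Q)  # [{0, 1, 2, 3}, {4, 5}, {6, 7, 8}, {9, 10, 11}]

# Sanity check: the Q's cover the grid and each lies inside one cell of every P.
assert set().union(*Q) == set(range(12))
assert all(any(q <= a for a in Pk) for Pk in (P1, P2) for q in Q)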


For time steps k=1,\ldots,4, the history projectors at a given time project onto subspaces which disjointly span Hilbert space. The narrative condition states that all history projectors commute, which means we can think of them as projecting onto disjoint subsets forming a partition of the range of some variable (e.g. position). The common framework is just the set of smaller projectors that also partitions the range of the variable and which obey Q \le P or QP = PQ = 0 for each P and Q.
[continue reading]