Links for September 2014

  • In discussions about the dangers of increasing the prevalence of antibiotic-resistant bacteria by treating farm animals with antibiotics, it’s a common (and understandable) misconception that antibiotics serve the same purpose for animals as for people: to prevent disease. In fact, antibiotics serve mainly as a way to increase animal growth. We know that this arises from the effect on bacteria (and not, say, from a direct effect of the antibiotic molecule on the animal’s cells), but it is not because antibiotics are reducing visible illness among the animals:

    Studies conducted in germ free animals have shown that the actions of these AGP [antimicrobial growth promoters] substances are mediated through their antibacterial activity. There are four hypotheses to explain their effect (Butaye et al., 2003). These include: 1) antibiotics decrease the toxins produced by the bacteria; 2) nutrients may be protected against bacterial destruction; 3) increase in the absorption of nutrients due to a thinning of the intestinal wall; and 4) reduction in the incidence of sub clinical infections. However, no study has pinpointed the exact mechanism by which the AGP work in the animal intestine. [More.]

  • You’ve probably noticed that your brain will try to reconcile contradictory visual info. Showing a different image to each eye causes someone to essentially see only one or the other at a time (although perception switches back and forth). Various other optical illusions bring out the brain’s attempts to solve visual puzzles. But did you know the brain jointly reconciles visual info with audio info? Behold, the McGurk effect:

  • The much-hyped nanopore technique for DNA sequencing is starting to mature. Eventually this should dramatically lower the cost and difficulty of DNA sequencing in the field, but the technology is still buggy.

[continue reading]

State-independent consistent sets

In May, Losada and Laura wrote a paper (M. Losada and R. Laura, Annals of Physics 344, 263 (2014)) pointing out the equivalence between two conditions on a set of “elementary histories” (i.e. fine-grained histories [Footnote: Gell-Mann and Hartle usually use the term “fine-grained set of histories” to refer to a set generated by the finest possible partitioning of histories in the path integral (i.e. a point in space for every point in time), but this is overly specific. As far as the consistent histories framework is concerned, the key mathematical property that defines a fine-grained set is that it’s an exhaustive and exclusive set where each history is constructed by choosing exactly one projector from a fixed orthogonal resolution of the identity at each time.]). Let the elementary histories \alpha = (a_1, \dots, a_N) be defined by projective decompositions of the identity P^{(i)}_{a_i}(t_i) at time steps t_i (i=1,\ldots,N), so that

(1)   \begin{align*} P^{(i)}_a &= (P^{(i)}_a)^\dagger \quad \forall i,a \\ P^{(i)}_a P^{(i)}_b &= \delta_{a,b} P^{(i)}_a \quad \forall i,a,b\\ \sum_{a} P^{(i)}_a (t_i) &= I \quad  \forall i \\ C_\alpha &= P^{(N)}_{a_N} (t_N) \cdots P^{(1)}_{a_1} (t_1) \\ I &= \sum_\alpha C_\alpha = \sum_{a_1}\cdots \sum_{a_N} C_\alpha \\ \end{align*}

where C_\alpha are the class operators. Losada and Laura then showed that the following two conditions are equivalent (a toy numerical check follows the list):

  1. The set is consistent [Footnote: “Medium decoherent” in Gell-Mann and Hartle’s terminology. Also note that Losada and Laura actually work with the obsolete condition of “weak decoherence”, but this turns out to be an unimportant difference. For a summary of these sorts of consistency conditions, see my round-up.] for any state: D(\alpha,\beta) = \mathrm{Tr}[C_\alpha \rho C_\beta^\dagger] = 0 \quad \forall \alpha \neq \beta, \forall \rho.
  2. The Heisenberg-picture projectors at all times commute: [P^{(i)}_{a} (t_i),P^{(j)}_{b} (t_j)]=0 \quad \forall i,j,a,b.
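
As a quick sanity check of this equivalence (my own toy sketch, not from the paper), consider a two-time elementary set in d = 4 dimensions whose projectors are all diagonal in a single fixed basis, with trivial dynamics, so that condition 2 holds by construction. The decoherence functional then vanishes off the diagonal for a randomly chosen density matrix, as condition 1 requires. All dimensions and choices below are arbitrary illustrative ones.

```python
# Toy check: commuting (Heisenberg-picture) projectors imply a vanishing
# decoherence functional D(alpha, beta) = Tr[C_alpha rho C_beta^dagger]
# for alpha != beta, here tested with a randomly chosen rho.
import itertools
import numpy as np

d = 4
rng = np.random.default_rng(0)

# Two projective decompositions of the identity, both diagonal in the same
# basis (and trivial dynamics), so all projectors commute: condition 2.
P1 = [np.diag([1, 1, 0, 0]).astype(complex),              # time t_1: rank-2 projectors
      np.diag([0, 0, 1, 1]).astype(complex)]
P2 = [np.diag(row).astype(complex) for row in np.eye(d)]  # time t_2: rank-1 projectors

# Class operators C_alpha = P2_{a_2} P1_{a_1}
histories = list(itertools.product(range(len(P1)), range(len(P2))))
C = {(a1, a2): P2[a2] @ P1[a1] for (a1, a2) in histories}

# Random density matrix
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = X @ X.conj().T
rho /= np.trace(rho)

worst = max(abs(np.trace(C[a] @ rho @ C[b].conj().T))
            for a in histories for b in histories if a != b)
print("largest |D(alpha, beta)| over alpha != beta:", worst)  # ~ 1e-16
```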

However, this is not as general as one would like because assuming the set of histories is elementary is very restrictive. (It excludes branch-dependent sets, sets with inhomogeneous histories, and many more types of sets that we would like to work with.) Luckily, their proof can be extended a bit.

Let’s forget that we have any projectors P^{(i)}_{a} and just consider a consistent set \{ C_\alpha \}.… [continue reading]

Links for August 2014

  • Jester (Adam Falkowski) on physics breakthroughs:

    This year’s discoveries follow the well-known 5-stage Kübler-Ross pattern: 1) announcement, 2) excitement, 3) debunking, 4) confusion, 5) depression. While BICEP is approaching the end of the cycle, the sterile neutrino dark matter signal reported earlier this year is now entering stage 3.

  • The ultimate bounds on possible nuclides are more-or-less known from first principles.
  • UPower Technologies is a nuclear power start-up backed by Y-Combinator.
  • It is not often appreciated that “[s]mallpox eradication saved more than twice the number of people 20th century world peace would have achieved.” Malaria eradication would be much harder, but the current prospects are encouraging. Relatedly, the method for producing live but attenuated viruses is super neat:

    Attenuated vaccines can be made in several different ways. Some of the most common methods involve passing the disease-causing virus through a series of cell cultures or animal embryos (typically chick embryos). Using chick embryos as an example, the virus is grown in different embryos in a series. With each passage, the virus becomes better at replicating in chick cells, but loses its ability to replicate in human cells. A virus targeted for use in a vaccine may be grown through—“passaged” through—upwards of 200 different embryos or cell cultures. Eventually, the attenuated virus will be unable to replicate well (or at all) in human cells, and can be used in a vaccine. All of the methods that involve passing a virus through a non-human host produce a version of the virus that can still be recognized by the human immune system, but cannot replicate well in a human host.

    When the resulting vaccine virus is given to a human, it will be unable to replicate enough to cause illness, but will still provoke an immune response that can protect against future infection.

[continue reading]

Grade inflation and college investment incentives

Here are Raphael Boleslavsky and Christopher Cotton discussing their model of grade inflation at selective undergraduate programs:

Grade inflation is widely viewed as detrimental, compromising the quality of education and reducing the information content of student transcripts for employers. This column argues that there may be benefits to allowing grade inflation when universities’ investment decisions are taken into account. With grade inflation, student transcripts convey less information, so employers rely less on transcripts and more on universities’ reputations. This incentivises universities to make costly investments to improve the quality of their education and the average ability of their graduates. [Link. h/t Ben Kuhn.]

I’ve only read the column rather than the full paper, but it sounds like their model simply posits that “schools can undertake costly investments to improve the quality of education that they provide, increasing the average ability of graduates”.

But if you believe folks like Bryan Caplan, then you think colleges add very little value. (Even if you think the best schools do add more value than worse schools, it doesn’t at all follow that this can be increased in a positive-sum way by additional investment. It could be that all the value-added is from being around other smart students, who can only be drawn away from other schools.) Under Boleslavsky and Cotton’s model, schools are only incentivized to increase the quality of their exiting graduates, and this seems much easier to accomplish by doing better advertising to prospective students than by actually investing more in the students that matriculate.

Princeton took significant steps to curb grade inflation, with some success. However, they now look to be relaxing the only part of the policy that had teeth.… [continue reading]

How to think about Quantum Mechanics—Part 2: Vacuum fluctuations

[Other parts in this series: 1,2,3,4,5,6,7.]

Although it is possible to use the term “vacuum fluctuations” in a consistent manner, referring to well-defined phenomena, people are usually way too sloppy. Most physicists never think clearly about quantum measurements, so the term is widely misunderstood and should be avoided if possible. Maybe the most dangerous result of this is the confident, unexplained use of this term by experienced physicists talking to students; it has the awful effect of giving these students the impression that their inevitable confusion is normal and not indicative of deep misunderstanding. [Footnote: “Professor, where do the wiggles in the cosmic microwave background come from?” “Quantum fluctuations.” “Oh, um…OK.” (Yudkowsky has usefully called this a “curiosity-stopper”, although I’m sure there’s another term for this used by philosophers of science.)]

Here is everything you need to know:

  1. A measurement is specified by a basis, not by an observable. (If you demand to think in terms of observables, just replace “measurement basis” with “eigenbasis of the measured observable” in everything that follows.)
  2. Real-life processes amplify microscopic phenomena to macroscopic scales all the time, thereby effectively performing a quantum measurement (this includes inducing the implied wave-function collapse). These do not need to involve a physicist in a lab, but the basis being measured must be an orthogonal one. [W. H. Zurek, Phys. Rev. A 76, 052110 (2007), arXiv:quant-ph/0703160]
  3. “Quantum fluctuations” are when any measurement (whether involving a human or not) is made in a basis whose projectors do not all commute with the initial state of the system.
  4. A “vacuum fluctuation” is when the ground state of a system is measured in a basis that does not include the ground state; it’s merely a special case of a quantum fluctuation (illustrated numerically below).
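
To make item 4 concrete, here is a minimal numerical sketch (my own toy example, with \hbar = m = \omega = 1 and an arbitrary Fock-space cutoff): measuring the harmonic-oscillator ground state in the (discretized) position basis yields a spread of outcomes with variance 1/2, even though the state itself is unique and stationary. That spread is all that “vacuum fluctuation” refers to.

```python
# Ground state of a harmonic oscillator, "measured" in the position basis:
# the Born distribution over outcomes has mean 0 and variance 1/2.
import numpy as np

N = 60                                    # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
x = (a + a.T) / np.sqrt(2)                # position operator

xvals, xvecs = np.linalg.eigh(x)          # approximate position eigenbasis
ground = np.zeros(N)
ground[0] = 1.0                           # ground state |0> in the Fock basis

probs = np.abs(xvecs.T @ ground) ** 2     # Born probabilities for each outcome
mean = probs @ xvals
var = probs @ (xvals - mean) ** 2
print(f"<x> = {mean:.3f}, Var(x) = {var:.3f}   (exact: 0 and 0.5)")
```
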
[continue reading]

Risk aversion of class-action lawyers

The two sides in the potentially massive class-action lawsuit by Silicon Valley engineers against Google, Apple, and other big tech companies reached an agreement, but that settlement was rejected by the judge. From the New York Times:

After the plaintiffs’ lawyers took their 25 percent cut, the settlement would have given about $4,000 to every member of the class.

Judge Koh said that she believed the case was stronger than that, and that the plaintiffs’ lawyers were taking the easy way out by settling. The evidence against the defendants was compelling, she said.

(Original court order.)

I would like to be able to explain this by understanding the economic/sociological motivations of the lawyers. People often complain about a huge chunk of the money going to the class-action lawyers who are too eager to settle, but the traditional argument is that a fixed percentage structure (rather than an hourly or flat rate) gives the lawyers the proper incentive to pursue the interests of the class by tying their compensation directly to the legal award. So this should lead to maximizing the award to the plaintiffs.

My best guess, doubtlessly considered by many others, is this: Lawyers, like most people, are risk averse for sufficiently large amounts of money. (They would rather have $10 million for sure than a 50% chance at $50 million.) On the other hand, the legal award will be distributed over many more plaintiffs. Since it will be much smaller per person, the plaintiffs are significantly less risk averse. So the lawyers settle even though it’s not in the best interests of the plaintiffs.
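
Here is a back-of-the-envelope illustration of that asymmetry. The $4,000 figure and the $10M-vs-$50M comparison come from the text above; the baseline wealth levels, the $20,000 gamble, and the choice of log utility as a stand-in for risk aversion are all made up for illustration.

```python
# Concave (log) utility: the firm prefers the sure settlement, while an
# individual plaintiff with a much smaller stake prefers to gamble.
from math import log

def prefers_sure_thing(wealth, sure, gamble, p):
    """True if E[u] of the sure payout beats a gamble paying `gamble` with prob p."""
    u_sure = log(wealth + sure)
    u_gamble = p * log(wealth + gamble) + (1 - p) * log(wealth)
    return u_sure > u_gamble

# Law firm: $10M fee for sure vs. a 50% shot at a $50M fee.
print(prefers_sure_thing(wealth=1e6, sure=10e6, gamble=50e6, p=0.5))   # True

# Individual plaintiff: $4,000 for sure vs. a 50% shot at $20,000.
print(prefers_sure_thing(wealth=50e3, sure=4e3, gamble=20e3, p=0.5))   # False
```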

This suggests the following speculative solution for correctly aligning the incentives of the lawyers and the class action plaintiffs: Ensure that the person with the final decision-making power for the plaintiff legal team receives a percentage of the award that is small enough for that person’s utility function to be roughly as linear as the plaintiffs’.… [continue reading]

Lindblad Equation is differential form of CP map

The Master equation in Lindblad form (aka the Lindblad equation) describes the most general possible evolution of an open quantum system that is Markovian and time-homogeneous. Markovian means that the way in which the density matrix evolves is determined completely by the current density matrix. This is the assumption that there are no memory effects, i.e. that the environment does not store information about earlier states of the system that can influence the system in the future. [Footnote: Here’s an example of a memory effect: An atom immersed in an electromagnetic field can be in one of two states, excited or ground. If it is in an excited state then, during a time interval, it has a certain probability of decaying to the ground state by emitting a photon. If it is in the ground state then it also has a chance of becoming excited by the ambient field. The situation where the atom is in a space of essentially infinite size would be Markovian, because the emitted photon (which embodies a record of the atom’s previous state of excitement) would travel away from the atom never to interact with it again. It might still become excited because of the ambient field, but its chance of doing so isn’t influenced by its previous state. But if the atom is in a container with reflecting walls, then the photon might be reflected back towards the atom, changing the probability that it becomes excited during a later period.] Time-homogeneous just means that the rule for stochastically evolving the system from one time to the next is the same for all times.

Given an arbitrary orthonormal basis L_n (n = 1, \ldots, N^2-1) of the space of traceless operators on the N-dimensional Hilbert space of the system (orthonormal according to the Hilbert-Schmidt inner product \langle A,B \rangle = \mathrm{Tr}[A^\dagger B]), the Lindblad equation takes the following form:

(1)   \begin{align*} \frac{\mathrm{d}}{\mathrm{d}t} \rho=- i[H,\rho]+\sum_{n,m = 1}^{N^2-1} h_{n,m}\left(L_n\rho L_m^\dagger-\frac{1}{2}\left(\rho L_m^\dagger L_n + L_m^\dagger L_n\rho\right)\right) , \end{align*}

with \hbar=1.… [continue reading]
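
As a concrete special case, here is a minimal numerical sketch of Eq. (1) for a single qubit with one jump operator (my own toy example, written in the diagonalized form where the matrix h_{n,m} has a single non-zero entry, absorbed into the jump operator). Euler integration is crude, but it is enough to see that the trace is preserved and that the excited-state population decays exponentially.

```python
# Amplitude damping of a qubit via the Lindblad equation (hbar = 1):
# H = (omega/2) sigma_z, single jump operator L = sqrt(gamma) |g><e|.
import numpy as np

omega, gamma, dt, steps = 1.0, 0.2, 1e-3, 20000
H = 0.5 * omega * np.diag([1.0, -1.0]).astype(complex)
L = np.sqrt(gamma) * np.array([[0, 0], [1, 0]], dtype=complex)  # |g><e|

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad equation for this H and single L."""
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in the excited state
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)            # forward-Euler step

t = steps * dt
print("excited population:", rho[0, 0].real, " vs  exp(-gamma t):", np.exp(-gamma * t))
print("trace:", np.trace(rho).real)               # stays 1 to machine precision
```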

Potentials and the Aharonov–Bohm effect

[This post was originally “Part 1” of my HTTAQM series. However, it’s old, haphazardly written, and not a good starting point. Therefore, I’ve removed it from that series, which now begins with “Measurements are about bases”. Other parts are here: 1,2,3,4,5,6,7. I hope to re-write this post in the future.]

It’s often remarked that the Aharonov–Bohm (AB) effect says something profound about the “reality” of potentials in quantum mechanics. In one version of the relevant experiment, charged particles are made to travel coherently along two alternate paths, such as in a Mach-Zehnder interferometer. At the experimenter’s discretion, an external electromagnetic potential (either vector or scalar) can be applied so that the two paths are at different potentials yet still experience zero magnetic and electric field. The paths are recombined, and the size of the potential difference determines the phase of the interference pattern. The effect is often interpreted as a demonstration that the electromagnetic potential is physically “real”, rather than just a useful mathematical concept.


The magnetic Aharonov–Bohm effect. The wavepacket of an electron approaches from the left and is split coherently over two paths, L and R. The red solenoid in between contains magnetic flux \Phi. The region outside the solenoid has zero field, but the line integral of the vector potential differs between the two paths (its circulation around the closed loop formed by L and R equals the enclosed flux \Phi). The relative phase between the L and R wavepackets is given by \Theta = e \Phi/c \hbar.
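
As a quick aside on magnitudes (my own, not from the original post): in SI units the relative phase is \Theta = e\Phi/\hbar, i.e. 2\pi per flux quantum \Phi_0 = h/e of enclosed flux. The flux value in the snippet below is made up.

```python
# AB phase for a given enclosed flux, in radians and in flux quanta.
from scipy.constants import e, h, hbar

Phi = 2.3e-15                  # enclosed magnetic flux in webers (made up)
theta = e * Phi / hbar         # relative phase between the two paths (SI units)
print(f"Theta = {theta:.3f} rad = {Phi / (h / e):.3f} flux quanta (x 2*pi)")
```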

However, Vaidman recently pointed out that this is a mistaken interpretation which is an artifact of the semi-classical approximation used to describe the AB effect. Although it is true that the superposed test charges experience zero field, it turns out that the source charges creating that macroscopic potential do experience a non-zero field, and that the strength of this field is dependent on which path is taken by the test charges.… [continue reading]

A dark matter model for decoherence detection

[Added 2015-1-30: The paper is now in print and has appeared in the popular press.]

One criticism I’ve had to address when proselytizing the indisputable charms of using decoherence detection methods to look at low-mass dark matter (DM) is this: I’ve never produced a concrete model that would be tested. My analysis (arXiv:1212.3061) addressed the possibility of using matter interferometry to rule out a large class of dark matter models characterized by a certain range for the DM mass and the nucleon-scattering cross section. However, I never constructed an explicit model as a representative of this class to demonstrate in detail that it was compatible with all existing observational evidence. This is a large and complicated task, and not something I could accomplish on my own.

I tried hard to find an existing model in the literature that met my requirements, but without luck. So I had to argue (with referees and with others) that this was properly beyond the scope of my work, and that the idea was interesting enough to warrant publication without a model. This ultimately was successful, but it was an uphill battle. Among other things, I pointed out that new experimental concepts can inspire theoretical work, so it is important that they be disseminated.

I’m thrilled to say this paid off in spades. Bateman, McHardy, Merle, Morris, and Ulbricht have posted their new pre-print “On the Existence of Low-Mass Dark Matter and its Direct Detection” (arXiv:1405.5536). Here is the abstract:

Dark Matter (DM) is an elusive form of matter which has been postulated to explain astronomical observations through its gravitational effects on stars and galaxies, gravitational lensing of light around these, and through its imprint on the Cosmic Microwave Background (CMB).

[continue reading]

Diagonal operators in the coherent state basis

I asked a question back in November on Physics.StackExchange, but it didn’t attract any interest. I started thinking about it again recently and figured out a good solution. The question and answer are explained below. [Footnote: I posted the answer on Physics.SE too since they encourage the answering of one’s own question. How lonely is that?!?]

Q: Is there a good notion of a “diagonal” operator with respect to the overcomplete basis of coherent states?
A: Yes. The operators that are “coherent-state diagonal” are those that have a smooth Glauber–Sudarshan P transform.

The primary motivation for this question is to get a clean mathematical condition for diagonality (presumably with a notion of “approximately diagonal”) for the density matrix of a system with a continuous degree of freedom undergoing decoherence. More generally, one might like to know the intuitive sense in which X, P, and X+P are all approximately diagonal in the basis of wavepackets, but RXR^\dagger is not, where R is the unitary operator which maps

(1)   \begin{align*} \vert x \rangle \to (\vert x \rangle + \mathrm{sign}(x) \vert - x \rangle) / \sqrt{2}. \end{align*}

(This operator creates a Schrodinger’s cat state by reflecting about x=0.)

For two different coherent states \vert \alpha \rangle and \vert \beta \rangle, we want to require an approximately diagonal operator A to satisfy \langle \alpha \vert A \vert \beta \rangle \approx 0, but we only want to do this if \langle \alpha \vert \beta \rangle \approx 0. For \langle \alpha \vert \beta \rangle \approx 1, we sensibly expect \langle \alpha \vert A \vert \beta \rangle to be within the eigenspectrum of A.
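
Here is a numerical illustration of that requirement (my own sketch in a truncated Fock basis, with arbitrary parameter choices): the quadrature X = (a + a^\dagger)/\sqrt{2} behaves as an approximately diagonal operator should, while the parity operator, a close cousin of the reflection R above, does not.

```python
# Matrix elements between two nearly orthogonal coherent states |alpha>, |beta>:
# tiny for X (coherent-state "diagonal"), order unity for parity (not diagonal).
from math import factorial
import numpy as np

N = 80                                     # Fock-space cutoff (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
X = (a + a.T) / np.sqrt(2)                 # position quadrature
parity = np.diag((-1.0) ** np.arange(N))   # (-1)^(a^dag a)

def coherent(alpha):
    """Coherent state |alpha> in the truncated Fock basis."""
    n = np.arange(N)
    norms = np.array([float(factorial(k)) for k in range(N)])
    return (np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(norms)).astype(complex)

al, be = coherent(3.0), coherent(-2.0)     # nearly orthogonal pair
print("|<a|b>|        =", abs(al.conj() @ be))           # ~ 4e-6, essentially zero
print("|<a|X|b>|      =", abs(al.conj() @ X @ be))       # ~ 3e-6: also essentially zero
print("|<a|Parity|b>| =", abs(al.conj() @ parity @ be))  # ~ 0.6: order unity
```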

One might consider the negativity of the Wigner-Weyl transform [Footnote: Case has a pleasingly gentle introduction.] of the density matrix (i.e. the Wigner phase-space quasi-probability distribution aka the Wigner function) as a sign of quantum coherence, since it is known that coherent superpositions (which are clearly not diagonal in the coherent state basis) have negative oscillations that mark the superposition, and also that these oscillations are destroyed by decoherence.… [continue reading]

Comments on Tegmark’s ‘Consciousness as a State of Matter’

[Edit: Scott Aaronson has posted on his blog with extensive criticism of Integrated Information Theory, which motivated Tegmark’s paper.]

Max Tegmark’s recent paper entitled “Consciousness as a State of Matter” has been making the rounds. See especially Sabine Hossenfelder’s critique on her blog that agrees in several places with what I say below.

Tegmark’s paper didn’t convince me that there’s anything new here with regards to the big questions of consciousness. (In fairness, I haven’t read the work of neuroscientist Giulio Tononi that motivated Tegmark’s claims.) However, I was interested in what he has to say about the proper way to define subsystems in a quantum universe (i.e. to “carve reality at its joints”) and how this relates to the quantum-classical transition. There is a sense in which the modern understanding of decoherence simplifies the vague question “How does (the appearance of) a classical world emerge in a quantum universe?” to the slightly-less-vague question “What are the preferred subsystems of the universe, and how do they change with time?”. Tegmark describes essentially this as the “quantum factorization problem” on page 3. (My preferred formulation is as the “set-selection problem” by Dowker and Kent. Note that this is a separate problem from the origin of probability in quantum mechanics. [Footnote: The problem of probability as described by Weinberg: “The difficulty is not that quantum mechanics is probabilistic—that is something we apparently just have to live with. The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics.” HT Steve Hsu.])

Therefore, my comments are going to focus on the “object-level” calculations of the paper, and I won’t have much to say about the implications for consciousness except at the very end.… [continue reading]

Argument against EA warmth risk

When I’m trying to persuade someone that people ought to concentrate on effectiveness when choosing which charities to fund, I sometimes hear the worry that this sort of emphasis on cold calculation risks destroying the crucial human warmth and emotion that should surround charitable giving. It’s tempting to dismiss this sort of worry out of hand, but it’s much more constructive to address it head on. [Footnote: I also think it gestures at a real aspect of “EA culture”, although the direction of causality is unclear. It could just be that EA ideas are particularly attractive to us cold unfeeling robots.] This situation happened to me today, and I struggled for a short and accessible response. I came up with the following argument later, so I’m posting it here.

It’s often noticed that many of the best surgeons treat their patients like a broken machine to be fixed, and lack any sort of bedside manner. Surgeons are also well known for their gallows humor, which has been thought to be a coping mechanism to deal with death and with the unnatural act of cutting open a living human body. Should we be worried that surgery dehumanizes the surgeon? Well, yes, this is a somewhat valid concern, which is even being addressed (with mixed results).

But in context this is only a very mild concern. The overwhelmingly most important thing is that the surgery is actually performed, and that it is done well. If someone said “I don’t think we should have doctors perform surgery because of the potential for it to take the human warmth out of medicine”, you’d rightly call them crazy! No one wants to die from a treatable appendicitis, no matter how comforting the warm and heartfelt doctors are.… [continue reading]

State of EA organizations

Below is a small document I prepared to summarize the current slate of organizations that are strongly related to effective altruism. [Footnote: There is a bit of arbitrariness about what to include. There are some organizations that are strongly aligned with EA principles even if they do not endorse that name or the full philosophy. I am not including cause-level organizations for mainstream causes, e.g. developing-world health like Innovations for Poverty Action. I am including all existential risk organizations, since these are so unusual and potentially important for EA.] (A Google doc is available here, which can be exported to many other formats.) If I made a mistake please comment below or email me. Please feel free to take this document and build on it, especially if you would like to expand the highlights section.

By Category

Charity Evaluation

Existential risk

Meta

Former names

“Effective Fundraising” → Charity Science (Greatest Good Foundation)
“Singularity Institute” → Machine Intelligence Research Institute
“Effective Animal Activism” → Animal Charity Evaluators

Organizational relationships

FHI and CEA share office space at Oxford.  CEA is essentially an umbrella organization.  It contains GWWC and 80k, and formerly contained TLYCS and ACE.  Now the latter two organizations operate independently.

MIRI and CFAR currently share office space in Berkeley and collaborate from time to time.… [continue reading]

New review of decoherence by Schlosshauer

Max Schlosshauer has a new review of decoherence and how it relates to understanding the quantum-classical transition. The abstract is:

I give a pedagogical overview of decoherence and its role in providing a dynamical account of the quantum-to-classical transition. The formalism and concepts of decoherence theory are reviewed, followed by a survey of master equations and decoherence models. I also discuss methods for mitigating decoherence in quantum information processing and describe selected experimental investigations of decoherence processes.

I found it very concise and clear for its impressive breadth, and it has extensive cites to the literature. (As you may suspect, he cites me and my collaborators generously!) I think this will become one of the go-to introductions to decoherence, and I highly recommend it to beginners.

Other introductory material includes Schlosshauer’s textbook and RMP (quant-ph/0312059), Zurek’s RMP (quant-ph/0105127) and Physics Today article, and the textbook by Joos et al.… [continue reading]