Citation indices do not avoid subjectivity

Peter Higgs used his recent celebrity to criticize the current academic job system: “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” In this context, it was argued to me that using citation count, publication count, or some other related index during the hiring process for academics is a necessary evil. In particular, single academic job openings are often deluged with dozens or hundreds of applications, and there needs to be some method of narrowing down the search to a manageable number of applicants. Furthermore, it has been said, it’s important that this method be objective rather than subjective.

I don’t think it makes sense at all to describe citation indices as less subjective measures than individual judgement calls. They just push the subjectivity from a small group (the hiring committee) to a larger group (the physics community); the decision to publish and cite is always held by human beings. Contrast this to an objective measure of how fast someone is: their 100m dash time. The subjectivity of asking a judge to guess how fast a runner appears to be going as he runs by, and the possible sources of error due to varying height or gait, are not much fixed by asking many judges and taking an “objective” vote tally.

Of course, if the hiring committee doesn’t have the time or expertise to evaluate the work done by a job applicant, then what a citation index does effectively do is farm out that evaluative work to the greater physics community. And that can be OK if you are clear that that’s what you’re doing.… [continue reading]

Cosmology meets philanthropy

[This was originally posted at the Quantum Pontiff.]

People sometimes ask me how my research will help society.  This question is familiar to physicists, especially those of us whose research is connected to everyday life only… shall we say… tenuously.  And of course, this is a fair question from the layman; tax dollars support most of our work.

I generally take the attitude of former Fermilab director Robert R. Wilson.  During his testimony before the Joint Committee on Atomic Energy in the US Congress, he was asked how discoveries from the proposed accelerator would contribute to national security during a time of intense Cold War competition with the USSR.  He famously replied “this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.”

Still, it turns out there are philosophers of practical ethics who think a few of the academic questions physicists study could have tremendous moral implications, and in fact might drive key decisions we all make each day. Oxford philosopher Nick Bostrom has in particular written about the idea of “astronomical waste”.  As is well known to physicists, the universe has a finite, ever-dwindling supply of negentropy, i.e. the difference between our current low-entropy state and the bleak maximal entropy state that lies in our far future.  And just about everything we might value is ultimately powered by it.  As we speak (or blog), the stupendously vast majority of negentropy usage is directed toward rather uninspiring ends, like illuminating distant planets no one will ever see.

These resources can probably be put to better use.  … [continue reading]

Decoherence Detection FAQ—Part 1: Dark matter

[Updated 2016-7-2]

I’ve submitted my papers (long and short arXiv versions) on detecting classically undetectable new particles through decoherence. The short version introduces the basic idea and states the main implications for dark matter and gravitons. The long version covers the dark matter case in depth. Abstract for the short version:

Detecting Classically Undetectable Particles through Quantum Decoherence

Some hypothetical particles are considered essentially undetectable because they are far too light and slow-moving to transfer appreciable energy or momentum to the normal matter that composes a detector. I propose instead directly detecting such feeble particles, like sub-MeV dark matter or even gravitons, through their uniquely distinguishable decoherent effects on quantum devices like matter interferometers. More generally, decoherence can reveal phenomena that have arbitrarily little classical influence on normal matter, giving new motivation for the pursuit of macroscopic superpositions.

This is figure 1:

Decoherence detection with a Mach-Zehnder interferometer. System \mathcal{N} is placed in a coherent superposition of spatially displaced wavepackets \vert N_{L} \rangle and \vert N_{R} \rangle that each travel a separate path and then are recombined. In the absence of system \mathcal{E}, the interferometer is tuned so that \mathcal{N} will be detected at the bright port with near-unit probability, and at the dim port with near-vanishing probability. However, if system \mathcal{E} scatters off \mathcal{N}, these two paths can decohere and \mathcal{N} will be detected at the dim port 50% of the time.
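The arithmetic behind the figure caption can be sketched in a toy model (this is my illustrative numerics, not code from the papers). If the joint state after the environment particle interacts with the arms is (\vert N_{L} \rangle \vert E_{L} \rangle + \vert N_{R} \rangle \vert E_{R} \rangle)/\sqrt{2}, then the dim-port probability is controlled by the overlap \langle E_L \vert E_R \rangle:

```python
import numpy as np

# Toy model of the Mach-Zehnder setup (illustrative only; names are mine).
# If the joint system-environment state is (|N_L>|E_L> + |N_R>|E_R>)/sqrt(2),
# then the probability of detecting N at the dim port is
#     P_dim = (1 - Re<E_L|E_R>) / 2.

def dim_port_probability(overlap):
    """Dim-port detection probability given the environment overlap <E_L|E_R>."""
    return 0.5 * (1.0 - np.real(overlap))

# No scattering: the environment states are identical, so full coherence.
print(dim_port_probability(1.0))   # 0.0 -> bright port with unit probability
# Full decoherence: orthogonal environment states.
print(dim_port_probability(0.0))   # 0.5 -> dim port 50% of the time
```

Partial decoherence interpolates smoothly: an overlap of 0.5 gives a dim-port probability of 25%.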

Below are some FAQs I have received.

Won’t there always be momentum transfer in any nontrivial scattering?

For any nontrivial scattering of two particles, there must be some momentum transfer.  But the momentum transfer can be arbitrarily small by simply making the mass of the dark particle as tiny as desired (while keeping its velocity fixed).  … [continue reading]

Happier livestock through genetic bundling

Carl Shulman posted on OvercomingBias about an easier way to produce animals that suffer less: selective breeding.  In contrast to a lot of the pie-in-the-sky talk about genetically engineering animals to feel less pain, selective breeding is a proven and relatively cheap method that can produce animals with traits that increase a complicated weighted sum of many parameters.  As Shulman points out, the improvements are impressive, and breeding methods are likely to work just as well for reducing suffering as for increasing milk output (although these goals may conflict in the same animal).

So suppose an animal-welfare organization is able to raise the resources necessary to run such a breeding program.  They immediately run up against the problem of how to induce large-scale farming operations to use their new breed of less-suffering chickens.  Indeed, in the comments to Shulman’s post, Gaverick Matheny pointed out that an example of a welfare-enhanced breed exists but is rarely used because it is less productive.

It’s true that there should be some low-hanging welfare fruit that has negligible effect on farm profits.  But even these are unlikely to be adopted due to economic frictions.  (Why would most farmers risk changing to an untested breed produced by an organization generally antagonistic toward them?)  So how can an animal-welfare organization induce adoption of their preferred breed?  My proposal is to bundle their welfare-enhancing modifications with productivity-enhancing modifications that are not already exploited because of inefficiencies in the market for livestock breeds.

The incentives which exist for businesses to invest in selective breeding almost certainly do not lead to the maximum total value.  Breeder businesses are only able to capture some of the value that accrues from their animal improvements.  … [continue reading]

PRISM and the exclusionary rule

I distinctly remember thinking in my high school Gov’t class that the exclusionary rule was weird.  Basically, the idea is that the primary mechanism for enforcing the 4th Amendment’s protection against unjustified searches is that evidence collected in violation of this Amendment cannot be used to convict someone.  But this is weird because (a) it can lead to setting free people guilty of egregious crimes because of minute privacy violations and (b) it offers zero protection against privacy violations by the government for other purposes, such as convicting third parties. I always thought it was kind of missing the point.

(There doesn’t seem to be a good pure check against privacy violations to be found in the court system.  Right now, you can apparently sue the federal government through the Federal Tort Claims Act for privacy violations, but only if the government agrees.  Similar situations exist with the states.)

Now, as it turns out, problem (b) is front-and-center in the debate over FISC’s powers.  It’s true that normal criminal courts grant warrants in a non-adversarial setting, just like FISC does.  But tptacek and dragonwriter point out on HackerNews that this is defensible because there is an adversary when the warrant is actually executed, and the exclusionary rule can be used to rebuff unjustified warrants.

On the other hand, there is no defendant to challenge the warrant in the case of mass surveillance of the public.  Anyone charged with a crime as a result of this surveillance cannot claim the exclusionary rule, and people whose privacy was violated cannot (almost assuredly) get compensation.  This second effect is even more pronounced when the government uses the information, not to convict anyone of a crime, but to pursue extra-national goals like hunting terrorists.  … [continue reading]

Discriminating smartness

It seems to me that I can accurately determine which of two people is smarter by just listening to them talk if at least one person is less smart than I am, but that this is very difficult or impossible if both people are much smarter than me. When both people are smarter than me, I fall back on crude heuristics for inferring intelligence. I look for which person seems more confident, answers more quickly, and corrects the other person more often. This, of course, is a very flawed method because I can be fooled into thinking that people who project unjustified confidence are smarter than timid but brilliant people.

In the intermediate case, when I am only slightly dumber than at least one party, the problem is reduced. I am better able to detect over-confidence, often because I can understand what’s going on when the timid smart person catches the over-confident person making mistakes (even if I couldn’t have caught them myself).

(To an extent, this may all be true when you replace “smarter” with “more skilled in domain X”.)

This suggests that candidate voting systems (whether for governments or otherwise) should have more “levels”. If we all want to elect the best person, where “bestness” is hard to identify by most of us mediocre participants, we would do better by identifying which of our neighbors are smarter than us, and then electing them to make decisions for us (possibly continuing into a hierarchy of committees). This is an argument for having federal senators chosen by state legislatures.

Of course, there are many problems with additional levels, e.g. it is difficult to align incentives across just two levels.… [continue reading]

Follow-up questions on the set-selection problem

Physics StackExchange user QuestionAnswers asked the question “Is the preferred basis problem solved?”, and I reproduced my “answer” (read: discussion) in a post last week.  He had some thoughtful follow-up questions, and (with his permission) I am going to answer them here. His questions are in bold, with minor punctuation changes.

How serious would you consider what you call the “Kent set-selection” problem?

If a set of CHs could be shown to be impossible to find, then this would break QM without necessarily telling us how to correct it. (Similar problems exist with the breakdown of gravity at the Planck scale.) Although I worry about this, I think it’s unlikely and most people think it’s very unlikely. If a set can be found, but no principle can be found to prefer it, I would consider QM to be correct but incomplete. It would kinda be like if big bang nucleosynthesis had not been discovered to explain the primordial abundances of the elements.

And what did Zurek think of it, did he agree that it’s a substantial problem?

I think Wojciech believes a set of consistent histories (CHs) corresponding to the branch structure could be found, but that no one will find a satisfying beautiful principle within the CH framework which singles out the preferred set from the many, many other sets. He believes the concept of redundant records (see “quantum Darwinism”) is key, and that a set of CHs could be found after the fact, but that this is probably not important. I am actually leaving for NM on Friday to work with him on a joint paper exploring the connection between redundancy and histories.… [continue reading]

Macro superpositions of the metric

Now I would like to apply the reasoning of the last post to the case of verifying macroscopic superpositions of the metric.  It’s been 4 years since I’ve touched GR, so I’m going to rely heavily on E&M concepts and pray I don’t miss any key changes in the translation to gravity.

In the two-slit experiment with light, we don’t take the visibility of interference fringes as evidence of quantum mechanics when there are many photons.  This is because the observations are compatible with a classical field description. We could interfere gravitational waves in a two-slit set up, and this would also have a purely classical explanation.

But in this post I’m not concentrating on evidence for pure quantum mechanics (i.e. a Bell-like argument grounded in locality), or evidence of the discrete nature of gravitons. Rather, I am interested in superpositions of two macroscopically distinct states of the metric as might be produced by a superposition of a large mass in two widely-separated positions.  Now, we can only call a quantum state a (proper) superposition by first identifying a preferred basis that it can be a superposition with respect to.  For now, I will wave my hands and say that the preferred states of the metric are just those metric states produced by the preferred states of matter, where the preferred states of matter are wavepackets of macroscopic amounts of mass localized in phase space (e.g. L/R).  Likewise, the conjugate basis states (e.g. L+R/L-R) are proper superpositions in the preferred basis, and these two bases do not commute.

There are two very distinct ways to produce a superposition with different states of the metric: (1) a coherent superposition of just gravitational radiation. Note that we expect to produce this superposition by moving a macroscopic amount of matter into a superposition of two distinct position or momentum states.  [continue reading]

Verifying superpositions

Suppose we are given an ensemble of systems which are believed to contain a coherent superposition of the metric. How would we confirm this?

Well, in order to verify that an arbitrary system is in a coherent superposition, which is always relative to a preferred basis, it’s well known that we need to make measurements with respect to (at least?) two non-commuting bases. If we can make measurement M, we expect it to be possible to make measurement M′ = RM for some symmetry R.

I consider essentially two types of Hilbert spaces: the infinite-dimensional space associated with position, and the finite-dimensional space associated with spin. They have a very different relationship with the fundamental symmetries of spacetime.

For spin, an arbitrary rotation in space is represented by a unitary which can produce proper superpositions. Rotating 90 degrees about the y axis takes a z-up eigenstate to an equal superposition of z-up and z-down. The rotation takes one basis to another with which it does not commute.

In contrast, for position, the unitary representing spatial translation is essentially just a permutation on the space of position eigenstates. It does not produce superpositions from non-superpositions with respect to this basis.
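This contrast is easy to check numerically (a minimal sketch of my own, with a discretized position space standing in for the continuum):

```python
import numpy as np

# A 90-degree rotation about the y axis maps the z-up spin eigenstate to an
# equal superposition of z-up and z-down: a proper superposition in the z basis.
theta = np.pi / 2
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
z_up = np.array([1.0, 0.0])
print(Ry @ z_up)   # ~ [0.707, 0.707]

# By contrast, on a discretized position space, spatial translation is just a
# cyclic permutation of the position eigenstates: it maps basis states to
# basis states and never creates superpositions in that basis.
T = np.roll(np.eye(5), 1, axis=0)   # shift-by-one-site operator on 5 sites
x0 = np.zeros(5); x0[0] = 1.0       # position eigenstate |x=0>
print(T @ x0)                       # |x=1>: still a single basis state
```

The rotated spin state has equal weight on both z eigenstates, while the translated position eigenstate remains a single eigenstate; this is the sense in which rotation fails to commute with the spin basis but translation merely permutes the position basis.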

You might think things are different when you consider more realistic measurements with respect to the over-complete basis of wavepackets. (Not surprisingly, the issue is one of preferred basis!) If you imagine the wavepackets as discretely tiling space, it’s tempting to think that translating a single wavepacket a half-integer number of tile spacing will yield an approximate superposition of two wavepackets. But the wavepackets are of course not discrete, and a POVM measurement of “fuzzy” position (for any degree of fuzziness σ) is invariant under spatial translations.… [continue reading]

Kent’s set-selection problem

Unfortunately, physicists and philosophers disagree on what exactly the preferred basis problem is, what would constitute a solution, and how this relates to (or subsumes) “the measurement problem” more generally. In my opinion, the most general version of the preferred basis problem was best articulated by Adrian Kent and Fay Dowker near the end of their 1996 article “On the Consistent Histories Approach to Quantum Mechanics” in the Journal of Statistical Physics. That article is long, so I will try to quickly summarize the idea.

Kent and Dowker analyzed the question of whether the consistent histories formalism provides a satisfactory and complete account of quantum mechanics (QM). Contrary to what is often said, consistent histories and many-worlds need not be opposing interpretations of quantum mechanics. (Of course, some consistent historians make ontological claims about how the histories are “real”, whereas the many-world’ers might say that the wavefunction is more “real”. In this sense they are contradictory. Personally, I think this is purely a matter of taste.) Instead, consistent histories is a good mathematical framework for rigorously identifying the branch structure of the wavefunction of the universe. (Note that although many-worlders may not consider the consistent histories formalism the only possible way to mathematically identify branch structure, I believe most would agree that if, in the future, some branch structure were identified using a completely different formalism, it could be described at least approximately by the consistent histories formalism.  Consistent histories may not be perfect, but it’s unlikely that the ideas are totally wrong.) Most many-world’ers would agree that unambiguously describing this branch structure would be very nice (although they might disagree on whether this is “necessary” for QM to be a complete theory).… [continue reading]