Comments on Stern, journals, and incentives

David L. Stern on changing incentives in science by getting rid of journals:

Instead, I believe, we will do better to rely simply on the scientific process itself. Over time, good science is replicated, elevated, and established as most likely true; bad science may be unreplicated, flaws may be noted, and it usually is quietly dismissed as untrue. This process may take considerable time—sometimes years, sometimes decades. But, usually, the most egregious papers are detected quickly by experts as most likely garbage. This self-correcting aspect of science often does not involve explicit written documentation of a paper’s flaws. The community simply decides that these papers are unhelpful and the field moves in a different direction.

In sum, we should stop worrying about peer review….

The real question that people seem to be struggling with is “How will we judge the quality of the science if it is not peer reviewed and published in a journal that I ‘respect’?” Of course, the answer is obvious. Read the papers! But here is where we come to the crux of the incentive problem. Currently, scientists are rewarded for publishing in “top” journals, on the assumption that these journals publish only great science. Since this assumption is demonstrably false, and since journal publishing involves many evils that are discussed at length in other posts, a better solution is to cut journals out of the incentive structure altogether.

(H/t Tyler Cowen.)

I think this would make the situation worse, not better, for bringing new ideas to the table. For all of its flaws, peer review has the benefit that any (not obviously terrible) paper gets a somewhat careful reading by a couple of experts. Furthermore, once they have done this, they publicly disseminate a few very valuable bits of information: this paper is not a complete waste of time. That makes the reading by other experts more efficient.

When you stop disseminating such information, I expect two things to happen: (1) Academics will be even less likely to read anything outside their narrow niche. (2) Hiring committees will rely more on soft signals which are even easier to game than journal publications (e.g., reputation, confidence, impressive-looking mathematics, raw number of pre-prints).

Saying “let’s just let the normal scientific process operate” is close to saying “I feel comfortable evaluating the small number of papers in my tiny niche field, and I don’t intend to try to evaluate anything outside of that”. That’s because evaluating something outside your niche is hard work, and you should be scrambling for as much help as you can get.

Remember, when it comes to identifying good new ideas in the chaff of terrible research, the valuable commodity is the attention and careful consideration of an expert. Any change that impedes the careful rationing of that resource, or that squanders the fruits of that resource[1] (e.g., badges of approval), is very likely making things worse.

(That said, I am very sympathetic to the argument that institutionalizing worse measures of scientific quality may be preferable if they distort incentives less. I just don’t see how getting rid of traditional journals, or an equivalent “badge of approval” system like arXiv overlay journals, obviously changes incentives much for the better.)


Stern continues:

There are likely to be many ways to improve the structure of review processes.

First, we should eliminate CVs from packages….A long list of Nature articles in one CV versus a short list of trade journals in a second CV will almost certainly lead most reviewers to favor the first applicant, even if they haven’t read a single word of any of the articles. That is precisely the bias we want to eliminate….

Second, applicants should submit several papers with their package and these papers should give no indication of the journals they may have been published in or the authors’ names. It would probably be useful to indicate how many authors were involved in the study and whether the applicant is a major contributor to the work.

Third, applicants should write a short summary of each submitted paper in plain language, so that a broader community of scientists can understand the major results and implications of the work.

This hints at, but doesn’t come close to addressing, the key scarcity: the time and attention of the hiring committee. Committees don’t rely on counting Nature articles because they think the Nature reviewers are smarter than them — everyone considers themselves the smartest — they do it because reading and understanding papers from dozens (or hundreds!) of applicants is an enormous burden. There are conceivable ways to fix this (e.g., radical suggestions like replacing the hiring committees at many institutions with a cross-institution evaluation and hiring board), but just taking away the information from the hiring committee members does nothing except force them to rely on even noisier signals. (“Hmm, a 4-page paper. I wonder if this was a PRL…”)


If we don’t believe that we can judge scientists based on their science, and in the absence of the peripherals—journal names, CV junk, and unhelpful letters—then we are in a dire state indeed.




  1. This is why I am also interested in any change to the referee process that makes referee reports more widely available, at the minimum by having them follow the paper from journal to journal, up to making them publicly available. Needless to say, this is a tricky thing to change without breaking things.
