Megajournals unbundling assessments of correctness and importance

Can the judgement of scientific correctness and importance be separated in journal publishing? Progress in this direction is being made by megajournals (a misleading name) that assess only correctness, leaving impact evaluation to other post-publication metrics. The first link suggests that such journals may have saturated the market, but that result is driven overwhelmingly by PLOS ONE; the other megajournals look like they are still growing. (H/t Tyler Cowen.)

Although I am generally for the “unbundling” of the various roles played by the journal, I think this particular unbundling could actually have bad results. There is currently a stupendous amount of academic writing being produced, and only a tiny fraction of it can be read carefully by thoughtful people. Folks are fighting for the attention of their colleagues, and most papers are not worth it. Right now, if you think you have a good result you can submit it to a high-impact journal, and there is at least a chance that the editor will send it out for review and that at least two reasonably qualified referees will be forced to read it. If they decide your paper is important, it gets published in a way that marks its importance.

But consider the alternate universe, where everything correct just goes up on the arXiv and ex post facto certifications are applied to work that someone important later decides is super interesting. In this case, an article is not guaranteed to get any qualified readers at all. Rather, new articles will be read or not read based on some combination of author prestige, abstract salesmanship, and the amplification of initial random noise.[1] I believe this leads to even more entrenchment of established academics and stronger incentives for misleading abstracts.
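
The amplification mechanism in footnote 1 is essentially a rich-get-richer process, and a toy simulation makes it concrete. The sketch below is purely illustrative (all parameters are invented, and it is not from the original post): papers start out identical, but early random reads produce citations, which make those same papers more likely to be read again.

```python
import random

# Toy rich-get-richer model: papers with identical merit, where the chance
# of being read grows with the citations a paper has already accumulated.
# All numbers here are made up for illustration.
N_PAPERS = 100      # identical papers competing for attention
N_READS = 5000      # total reading events in the community
BASE_WEIGHT = 1.0   # baseline chance of being read, independent of citations
CITE_PROB = 0.3     # fraction of reads that turn into citations

random.seed(0)
citations = [0] * N_PAPERS

for _ in range(N_READS):
    # A paper's probability of being read grows with its existing citations.
    weights = [BASE_WEIGHT + c for c in citations]
    paper = random.choices(range(N_PAPERS), weights=weights)[0]
    if random.random() < CITE_PROB:
        citations[paper] += 1

citations.sort(reverse=True)
share = sum(citations[:N_PAPERS // 10]) / sum(citations)
print(f"Top 10% of (identical) papers capture {share:.0%} of citations")
```

In a typical run the top tenth of papers ends up with a disproportionate share of the citations even though every paper is identical by construction, which is the sense in which initial noise gets amplified.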

The scarce resource in academia is thoughtful expert assessment.[2] It’s those few minutes or hours when a person well-versed in the subject sits down, reads a paper, and passes explicit judgement on its importance. Essentially all other methods of ascertaining importance (citations, journal impact factor, recommendations from your colleagues) are just signals meant to noisily track such judgements without adding additional information. In a sense, megajournals are wasteful because the expert assessment that is unavoidably created when referees read a paper is lost when those referees judge only its technical correctness.

You might think that referees have to be a tiny minority of the thoughtful readers that a work has over its lifetime, but that is often wrong. This can be inferred from the fact that academics typically cite hundreds of articles per year in their writing but complain about having even a handful of referee assignments over the same period. Most citations are cursory, while reading and thinking about a paper long enough to pass judgement on it is taxing — even when the only penalty for a bad anonymous referee report is an annoyed editor![3]

There are alternative systems that preserve the impact-evaluation function (contrary to the stated motivation of many megajournals) while still not incentivizing the wasteful practice of authors iteratively submitting manuscripts to, and getting rejected from, a string of journals in descending order of prestige. Megajournals could start attaching markers to particularly important papers, perhaps with several degrees of gradation. A traditional framing for this would be to break the megajournal into several separately named journals (e.g., MegaJournal and MegaJournal Awesome) with a unified referee process. All technically correct papers would get published in one of them, and each paper would only need to go through a single set of referees, who could communicate more than a single bit of information (i.e., “publish at prestige level X” rather than just “publish / don’t publish”).[4]

Footnotes


  1. To explain the latter: In any given set of papers with indistinguishable external features, some will get read and others won’t by chance. Papers that get read can get cited, causing more people to read them.
  2. Now this could be different for different fields. It could be in some fields that technical correctness is very subtle and hard to judge, whereas the results — if they are correct — speak for themselves. But math and theoretical physics papers are more like art, where technical correctness is assumed (or relatively easy to check) but the value of the results varies very widely, and requires significant effort and expertise to judge.
  3. An alternate theory is that checking the correctness of an article is laborious but necessary for referees, whereas the correctness of already-published articles can arguably be taken on faith and their importance read off transparently. At least within physics you can be disabused of this notion by looking at a selection of referee reports and how often they fail to catch technical errors.
  4. This would also preserve the convenient practice of being able to quickly estimate the importance of a cited work, as it is listed in a bibliography, by reading off the journal name, while maintaining the fiction that journal names are only there to enable one to locate the work. In contrast, other post-publication assessment tools (e.g., citation counts) require relatively more work for the reader to obtain.