[Other posts in this series: 1, 3, 4.]
As a follow-up to my GitWikXiv post on making the academic paper more collaborative, I’d like to quickly lay out two important distinctions to anchor further discussion.
Revision vs. attribution vs. evaluation
Any system for allowing hundreds of academics to collaborate on new works needs to track and incentivize who contributes what. But it’s key to keep the three components below separate conceptually (and perhaps structurally); a toy sketch of that separation follows the list.
- Revisions are the bare data necessary to reconstruct the evolution of a document through time. This is the well-trodden ground of version control software like Git and hosting platforms like GitHub.
- Attribution is the assigning of credit. At a minimum this includes tagging individual revisions with the name/ID of the revisor(s). But more generally it includes the sort of information found in footnotes (“I thank J. Smith for alerting me to this possibility”), acknowledgements (“We are grateful to J. Doe for discussion”), and author-contribution statements (“A. Atkins ran the experiment; B. Bonkers analyzed the data”).
- Evaluation of the revisions assesses how much they are worth. This can be expressed as an upvote (as on StackExchange), as a number of citations or another bibliometric like the h-index, or as publication in a certain venue like Nature.[a]
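To make the separation concrete, here is a minimal sketch of what keeping the three concerns structurally separate might look like. It is only an illustration under my own assumptions, not a description of any existing system; the record names and fields are hypothetical, and the layers are linked by nothing more than a shared revision ID.

```python
# Hypothetical sketch: three separate records, one per concern, keyed by revision ID.
from dataclasses import dataclass, field


@dataclass
class Revision:
    """Bare data needed to reconstruct the document's history (the Git layer)."""
    rev_id: str
    parent_id: str | None
    diff: str  # textual change relative to the parent revision


@dataclass
class Attribution:
    """Who gets credit for a revision, beyond a bare committer field."""
    rev_id: str
    revisors: list[str]
    acknowledgements: list[str] = field(default_factory=list)


@dataclass
class Evaluation:
    """How much a revision is judged to be worth (votes, citations, venue)."""
    rev_id: str
    upvotes: int = 0
    citations: int = 0
    venue: str | None = None


# The three records share only the revision ID, so any one layer can be
# replaced (e.g. a new evaluation metric) without touching the others.
rev = Revision(rev_id="r42", parent_id="r41", diff="+ corrected derivation in Sec. 3")
att = Attribution(rev_id="r42", revisors=["A. Atkins"], acknowledgements=["J. Smith"])
ev = Evaluation(rev_id="r42", upvotes=3)
```

The design point is simply that the evaluation (or attribution) layer can be swapped out without rewriting the revision history beneath it.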
Note that for small groups I do not think detailed tracking and incentives within the group are good. Human beings are complicated, and there is a reason that groups with a handful of members tend to operate internally with informal incentive mechanisms (compliments, thanks, guilt, etc.) while committing to share external rewards evenly or through some predetermined split.[b]
To better understand how humans respond at intermediate scales (between small groups and a complete free-for-all), it would probably be beneficial to look at large experimental collaborations, such as those in particle physics, where a complicated power and incentive structure is in place that appears to function remarkably well. Although I have heard members of these collaborations complain about other individuals, I have almost never heard them criticize the basic structure in the way that people criticize the journal publishing process or hiring committees.
Collaborative tools vs. central repository vs. discussion forums
- Collaborative tools allow people to work together on a document, either privately or in public. Surprisingly, it looks like this problem is on track to being solved. Important examples: Authorea, ShareLaTeX, Overleaf (formerly WriteLaTeX), and Fidus Writer.
- Central repositories serve as the hub to which anyone who wishes to engage with the academic consensus links up. Centrality alone does not necessarily mean that the information there is authoritative or represents the consensus position; Wikipedia pages do represent a consensus, but posting to the arXiv certainly does not imply acceptance. However, posting to the arXiv is sufficient to claim scientific priority in physics, because of its widely recognized central character, in a way that posting on a university’s website is not. (Posting on the arXiv is probably not sufficient to claim priority in biology, because the arXiv has not achieved the status of central hub there.) Centrality is extremely important for the universal character of collaboration, since it means there is one place to go if you want to affect the consensus, and any two parties who disagree are forced to meet. Important examples: arXiv, Wikipedia, PubMed, and (aspirationally) Knowen.
- Discussion forums allow discussion of work at varying levels of formality, ranging from water-cooler chat and blog comments to published comments and replies. These forums might be central (e.g., the bug tracker for a particular software project) or they might not be (e.g., blogs). Important examples: SciRate, ThinkLab, and PubPeer.
Footnotes
a. In general I am against most evaluation metrics. I actually think that these metrics correlate pretty strongly with academic accomplishment, all else being equal, but I think all else is very much not equal, and the metrics become gamed as soon as you attach incentives to them. For instance, the number of times an actor is mentioned on Twitter probably correlates pretty strongly with how good an actor they are, but it drastically underrates Broadway actors compared to movie actors, or niche art-film actors compared to Adam Sandler. This is OK if you want to estimate the total amount of discussion that an actor generates, but in physics (and hopefully most of academia!) the goal is not the discussion. Any incentivized metric that tracks the amount of discussion is going to have bad effects.
b. That’s why, at least among theorists, one often has a brief serious conversation before anyone else is brought on board to write a paper. Once that person joins, the credit for the paper is irrevocably split N+1 ways instead of N ways.
I agree with footnote a. Discussion is both useful and fun, but mere discussion is not the goal of scientific research.
Yea! I’ve been thinking more about this recently. People justify citation metrics by appealing to the idea that papers are the output of scientific research, but most papers are just formalized discussion rather than the true fruits. I think one can think about this as an incorrect *weighting* function for the importance of a given chunk of science. A vastly better way to weight it, at least conceptually, is by how high up it would appear in the hierarchy of Knowen or Scientifica. This naturally and correctly penalizes extremely niche research that generates lots of discussion but leads nowhere.