China will build the successor to the LHC.
Note that the China Daily article above incorrectly suggests that they will build a 50-70 km circular electron-positron accelerator at ~100 TeV CoM. In fact, the project comes in two phases inside the same tunnel: first a 250 GeV electron-positron ‘precision’ machine, the Circular Electron-Positron Collider (CEPC), followed by an upgrade to a 70 TeV proton-proton ‘discovery’ machine, the Super Proton-Proton Collider (SPPC). (Footnote: the 250 GeV electron-positron collisions will produce only one Higgs, and the fact that the CoM energy is double the Higgs mass is a coincidence. See slides 9-16 here for some of the processes that will be studied.) The current timeline for operations, which will inevitably be pushed back, projects that data taking will start in 2028 and 2042, respectively. (H/t Graeme Smith.)
The existence of this accelerator has lots of interesting implications for accelerators in the Western hemisphere. For instance, the International Linear Collider (ILC) was planning on using a ‘push-pull’ configuration where they would alternate beam time between two detectors (by keeping them on huge rolling platforms!). The idea is that having two completely separate and competing detectors is critical for maintaining objectivity in a world where you only have a single accelerator. Since the ILC is linear, there is only one interaction region (unlike in a typical circular accelerator). So to use two detectors, you need to be able to swap them in and out! But this becomes largely unnecessary if the CEPC exists to keep the ILC honest.
I think this is a bad development for physics because I am pessimistic about particle accelerators telling us something truly deep and novel about the universe, at least in the next century.… [continue reading]
Perimeter Institute is now accepting applications for 3- and 5-year postdoc positions to start Fall 2016. After having been here a year, I can tell you that PI is amazing. This is the greatest place for fundamental physics research in the world. Stop working on problems that someone else would do anyway and come tackle the big questions with me!
Here is the poster, and here is the blurb:
Perimeter Institute for Theoretical Physics invites applications for postdoctoral positions from new and recent PhDs working in fundamental theoretical physics. Our areas of strength include classical gravity, condensed matter theory, cosmology, particle physics, mathematical physics, quantum fields and strings, quantum foundations, quantum information, and quantum gravity. We also encourage applications from scientists whose work falls in more than one of these categories. Our postdoctoral positions are normally for a period of three years. Outstanding candidates may also be considered for a senior postdoctoral position with a five-year term.
Perimeter Institute offers a dynamic, multi-disciplinary environment with maximum research freedom and opportunity to collaborate within and across fields. Our postdoctoral positions are intended for highly original and intellectually adventurous young theorists. Perimeter offers comprehensive support including a generous research and travel fund, opportunities to invite visiting collaborators, and help in organizing workshops and conferences. A unique mentoring system gives early-career scientists the feedback and support they need to flourish as independent researchers.
The Institute offers an exceptional research environment and is currently staffed with 40 full-time and part-time faculty members, 42 Distinguished Visiting Research Chairs, 55 Postdoctoral Researchers, 47 Graduate Students, and 28 exceptional master’s-level students participating in Perimeter Scholars International. Perimeter also hosts hundreds of visitors and conference participants throughout the academic year.
… [continue reading]
The arXiv admin board is considering adding more options for linking to material related to a submission. Some examples: blog posts, news items, video lectures, scientific video, software, lecture slides, simulations, follow-up articles, author’s personal website. What else might be useful?
Here is a mockup of what things could look like (link to HTML):
… [continue reading]
Can the judgement of scientific correctness and importance be separated in journal publishing? Progress in this direction is being made by megajournals (a misleading name) that assess only correctness, leaving impact evaluation to other post-publication metrics. The first link suggests that such journals may have saturated the market, but actually this result is overwhelmingly dominated by PLOS ONE, and the other megajournals look like they are still growing. (H/t Tyler Cowen.)
Although I am generally for the “unbundling” of the various roles played by the journal, I think this actually could have bad results. There currently is a stupendous amount of academic writing being produced, and only a tiny fraction of it can be read carefully by thoughtful people. Folks are fighting for the attention of their colleagues, and most papers are not worth it. Right now, if you think you have a good result you can submit to a high-impact journal, and there is at least a chance that the editor will send it out for review, and at least two reasonably qualified referees will be forced to read it. If they decide your paper is important, it gets published in a way that marks its importance.
But consider the alternate universe, where everything correct just goes up on the arXiv, and ex post facto certifications are applied to work that someone important later decides is super interesting. In this case, an article is not guaranteed to get any qualified readers at all. Rather, new articles will be read or not read based on some combination of author prestige, abstract salesmanship, and the amplification of initial random noise. (Footnote: to explain the latter, in any given set of papers with indistinguishable external features, some will get read and others won’t by chance.)… [continue reading]
[Other posts in this series: 1,2,3.]
I now have a more concrete idea of some of the pie-in-the-sky changes I would like to see in academic publishing in the long term. I envision three pillars:
“Scientifica”: a linked, universally collaborative document that takes the reader from the most basic introductory concepts to the forefront of research. (Footnote, edit 2016-4-22: I am embarrassed that I did not make it clear when this was initially posted that the Scientifica idealization is mostly a product of Godfrey Miller. Hopefully he didn’t notice…) Imagine a Wikipedia for all of science, maintained by researchers. Knowen and Scholarpedia are early prototypes, although I believe a somewhat stronger consensus mechanism akin to particle physics collaborations will be necessary.
ArXiv++: a central repository of articles that enables universal collaboration through unrestricted forking of papers. This could arise by equipping the arXiv with an open attribution standard and moving toward a copyleft norm (see below).
Discussion overlay: There is a massive need for quick, low-threshold commentary on articles, although I have fewer concrete things to say about this at the moment. For the time being, imagine that each arXiv article accumulated nested comments (or other annotations) that the reader could choose to view or suppress, and which could be added to with the click of a button. (Footnote: nested comments are just comments that allow comment-specific replies, organized in a hierarchy; see here for a visual example.)
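As a purely illustrative sketch of what a nested, suppressible comment tree amounts to as a data structure (the class and function names here are my own invention, not any existing overlay’s API):

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    replies: list = field(default_factory=list)  # child Comments, nested arbitrarily deep

def render(comment, depth=0, suppress=frozenset()):
    """Flatten the tree into indented lines, hiding suppressed authors' subtrees."""
    if comment.author in suppress:
        return []
    lines = ["  " * depth + f"{comment.author}: {comment.text}"]
    for reply in comment.replies:
        lines.extend(render(reply, depth + 1, suppress))
    return lines

# A tiny thread: a question, an answer, and a comment-specific reply.
root = Comment("A", "Is eq. (3) right?",
               [Comment("B", "Yes, see the appendix.",
                        [Comment("A", "Thanks!")])])
for line in render(root):
    print(line)
```

The `suppress` argument is the “choose to view or suppress” knob: passing `suppress=frozenset({"B"})` hides B’s comment and everything nested under it.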
The conceptual flow here is that bleeding-edge research is documented on the arXiv, is discussed on the overlay, and — when it has been hashed out through consensus — it is folded into Scientifica.… [continue reading]
[Other posts in this series: 1,2,4.]
My GitWikXiv post on making the academic paper universally collaborative got a lot of good comments. In particular, I recommend reading Ivar Martin, who sees a future of academic writing that is very different from what we have now.
Along a slightly more conventional route, the folks working on Authorea made a good case that they have several of the components that are needed to allow universal collaboration, and they seem to have a bit of traction. (Footnote: more generally, the comments on the post gave me the impression that lots of people are working on tools, but not many people are working on open standards. This isn’t surprising, since software tools are a lot easier to develop by a handful of people. It may be that a lot of the social/cultural obstacles (in contrast to technical ones), which we all seem to agree are the most difficult, aren’t actually mental problems so much as coordination problems. In other words, it might not have anything to do with old researchers being set in their ways so much as tragedy-of-the-commons-type obstacles. So maybe there should be more focus on open standards like ORCID, smart citations, data accessibility, and an attribution standard like I discuss here.) I was asked what it would take to solve the remaining problems by my lights, and I sketched a hypothetical way to let Authorea (which is a for-profit company) interface with the arXiv to enable universal collaboration with proper attribution. The key step would be the introduction of an open attribution file standard that could be agreed upon by the academic community, and especially by the arXiv advisory board.… [continue reading]
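To make the idea concrete, here is a purely hypothetical sketch of what one record in such an attribution file might contain. No such arXiv standard exists; every field name below is my invention, the arXiv identifiers are made up, and the ORCID iD is ORCID’s own documented example value:

```python
import json

# Hypothetical machine-readable attribution record for a forked revision
# of a paper. Fields: provenance (what was forked from what), per-revision
# credit tagged by contributor ID, and free-text acknowledgements.
attribution = {
    "paper": "arXiv:1501.00001v3",
    "forked_from": "arXiv:1501.00001v2",
    "revisions": [
        {"id": "r1",
         "contributor": "https://orcid.org/0000-0002-1825-0097",
         "summary": "Simplified the proof in Section 2"},
    ],
    "acknowledgements": ["We thank J. Doe for discussion."],
}
print(json.dumps(attribution, indent=2))
```

The point of an open standard like this would be that any tool (Authorea, the arXiv, or anything else) could emit and consume the same format, so credit survives forking across platforms.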
[Other posts in this series: 1,3,4.]
In a follow-up to my GitWikXiv post on making the academic paper more collaborative, I’d like to quickly lay out two important distinctions as a way to anchor further discussion.
Revision vs. attribution vs. evaluation
Any system for allowing hundreds of academics to collaborate on new works needs to track and incentivize who contributes what. But it’s key to keep these parts separate conceptually (and perhaps structurally).
Revisions are the bare data necessary to reconstruct the evolution of a document through time. This is the well trodden ground of revision control software like GitHub.
Attribution is the assigning of credit. At the minimum this includes tagging individual revisions with the name/ID of the revisor(s). But more generally it includes the sort of information that can be found in footnotes (“I thank J. Smith for alerting me to this possibility”), acknowledgements (“We are grateful to J. Doe for discussion”), and author contributions statements (“A. Atkins ran the experiment; B. Bonkers analyzed the data”).
Evaluation of the revisions is done to assess how much they are worth. This can be expressed as an upvote (as on StackExchange), as a number of citations or another bibliometric like the h-index, or as being published in a certain venue like Nature. (Footnote: in general I am against most evaluation metrics. I actually think that these metrics correlate pretty strongly with academic accomplishment, all else being equal, but I think all else is very not equal, and that the metrics become gamed as soon as you attach incentives to them. For instance, the number of times an actor is mentioned on Twitter probably correlates pretty strongly with how good an actor they are, but it drastically underrates Broadway actors compared to movie actors, or niche art-film actors compared to Adam Sandler.)
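Since the h-index comes up here as an example bibliometric, a quick sketch of how it is computed: the h-index is the largest h such that the author has at least h papers with at least h citations each.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# The metric responds very differently to many modest papers versus one
# blockbuster, which is part of why any single number can mislead.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([100, 2, 1]))       # -> 2
```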
… [continue reading]
[Other posts in this series: 2,3,4.]
I had the chance to have dinner tonight with Paul Ginsparg of arXiv fame, and he graciously gave me some feedback on a very speculative idea that I’ve been kicking around: augmenting — or even replacing — the current academic article model with collaborative documents.
Even after years of mulling it over, my thoughts on this aren’t fully formed. But I thought I’d share my thinking, however incomplete, after incorporating Paul’s commentary while it is still fresh in my memory. First, let me start with some of the motivating problems as I see them:
People still reference papers from 40 years ago for key calculations (not just for historical interest or for apportioning credit). These papers often have such poor typesetting that they are hard to read, lack machine-readable text, have no URL links, etc.
Getting oriented on a topic often requires reading a dozen or more scattered papers with varying notation, where the key advances (as judged with hindsight) are mixed in with material that is much less important.
More specifically, papers sometimes have a small crucial idea that is buried in tangential details having to do with that particular author’s use for the idea, even if the idea has grown way beyond the author.
Some authors could contribute the key idea, but others could contribute clarity of thought, or make connections to other fields. In general these people may not know each other, or be able to easily collaborate.
There aren’t enough good review articles. (Footnote: when the marginal cost of producing a textbook is near zero, the fact that no one gets proper credit for writing good textbooks isn’t so bad, simply because you only need one or two good ones and the audience is huge.)
… [continue reading]