Perimeter Institute runs a pretty great and unusual 1-year master’s program called Perimeter Scholars International. If you’re in your last year as an undergrad, I strongly advise you (seriously) to consider applying. Your choice of grad school is 80% of the selection power determining your thesis topic, and that topic places very strong constraints on your entire academic career. The more your choice is informed by actual physics knowledge (rather than the apparent impressiveness of professors and institutions), the better. An additional year at a new institution taking classes with new teachers can really help.
(Older academics can advertise this to students by printing out this poster.)
Here’s the blurb:
Each year, Canada’s Perimeter Institute for Theoretical Physics recruits approximately 30 exceptional science graduates for an immersive, 10-month physics boot camp: Perimeter Scholars International (PSI). This unique Master’s program seeks not only students with stellar undergraduate physics track records, but also those with diverse backgrounds, collaborative spirit, creativity, and other attributes that will set them apart as future innovators.
Features of the program include:
- All student costs (tuition and living) are covered, removing financial and/or geographical barriers to entry
- Students learn from world-leading theoretical physicists – resident Perimeter researchers and visiting scientists – within the inspiring environment of Perimeter Institute.
- Collaboration is valued over competition; deep understanding and creativity are valued over rote learning and examination.
- PSI recruits worldwide: 85 percent of students come from outside of Canada.
- PSI takes calculated risks, seeking extraordinary talent who may have non-traditional academic backgrounds but have demonstrated exceptional scientific aptitude.
PSI is now accepting applications for the class of 2016/17. Applications are due by February 1, 2016.
… [continue reading]
China will build the successor to the LHC.
Note that the China Daily article above incorrectly suggests that they will build a 50-70km circular electron-positron accelerator at ~100 TeV CoM. In fact, the project comes in two phases inside the same tunnel: first a 250 GeV electron-positron ‘precision’ machine, the Circular Electron-Positron Collider (CEPC), followed by an upgrade to a 70 TeV proton-proton ‘discovery’ machine, the Super Proton-Proton Collider (SPPC). The current timeline for operations, which will inevitably be pushed back, projects that data taking will start in 2028 and 2042, respectively. (H/t Graeme Smith.)
The existence of this accelerator has lots of interesting implications for accelerators in the Western hemisphere. For instance, the International Linear Collider (ILC) was planning on using a ‘push-pull’ configuration where they would alternate beam time between two devices (by keeping them on huge rolling platforms!). The idea is that having two completely separate and competing detectors is critical for maintaining objectivity in a world where you only have a single accelerator. Since the ILC is linear, there is only one interaction region (unlike at the typical circular accelerator). So to use two detectors, you need to be able to swap them in and out! But this becomes largely unnecessary if CEPC exists to keep the ILC honest.
I think this is a bad development for physics because I am pessimistic about particle accelerators telling us something truly deep and novel about the universe, at least in the next century.… [continue reading]
Perimeter Institute is now accepting applications for 3- and 5-year postdoc positions to start Fall 2016. After having been here a year, I can tell you that PI is amazing. This is the greatest place for fundamental physics research in the world. Stop working on problems that someone else would do anyway and come tackle the big questions with me!
Here is the poster, and here is the blurb:
Perimeter Institute for Theoretical Physics invites applications for postdoctoral positions from new and recent PhDs working in fundamental theoretical physics. Our areas of strength include classical gravity, condensed matter theory, cosmology, particle physics, mathematical physics, quantum fields and strings, quantum foundations, quantum information, and quantum gravity. We also encourage applications from scientists whose work falls in more than one of these categories. Our postdoctoral positions are normally for a period of three years. Outstanding candidates may also be considered for a senior postdoctoral position with a five-year term.
Perimeter Institute offers a dynamic, multi-disciplinary environment with maximum research freedom and opportunity to collaborate within and across fields. Our postdoctoral positions are intended for highly original and intellectually adventurous young theorists. Perimeter offers comprehensive support including a generous research and travel fund, opportunities to invite visiting collaborators, and help in organizing workshops and conferences. A unique mentoring system gives early-career scientists the feedback and support they need to flourish as independent researchers.
The Institute offers an exceptional research environment and is currently staffed with 40 full-time and part-time faculty members, 42 Distinguished Visiting Research Chairs, 55 Postdoctoral Researchers, 47 Graduate Students, and 28 exceptional master’s-level students participating in Perimeter Scholars International. Perimeter also hosts hundreds of visitors and conference participants throughout the academic year.
… [continue reading]
The arXiv admin board is considering adding more options for linking to material related to a submission. Some examples: blog posts, news items, video lectures, scientific video, software, lecture slides, simulations, follow-up articles, author’s personal website. What else might be useful?
Here is a mockup of what things could look like (link to HTML):
… [continue reading]
Can the judgement of scientific correctness and importance be separated in journal publishing? Progress in this direction is being made by megajournals (a misleading name) that assess only correctness, leaving impact evaluation to other post-publication metrics. The first link suggests that such journals may have saturated the market, but actually this result is overwhelmingly dominated by PLOS ONE, and the other megajournals look like they are still growing. (H/t Tyler Cowen.)
Although I am generally for the “unbundling” of the various roles played by the journal, I think this actually could have bad results. There currently is a stupendous amount of academic writing being produced, and only a tiny fraction of it can be read carefully by thoughtful people. Folks are fighting for the attention of their colleagues, and most papers are not worth it. Right now, if you think you have a good result you can submit to a high-impact journal, and there is at least a chance that the editor will send it out for review, and at least two reasonably qualified referees will be forced to read it. If they decide your paper is important, it gets published in a way that marks its importance.
But consider the alternate universe, where everything correct just goes up on the arXiv, and ex post facto certifications are applied to work that someone important later decides is super interesting. In this case, an article is not guaranteed to get any qualified readers at all. Rather, new articles will be read or not read based on some combination of author prestige, abstract salesmanship, and the amplification of initial random noise.… [continue reading]
[Other posts in this series: 1,2,3.]
I now have a more concrete idea of some of the pie-in-the-sky changes I would like to see in academic publishing in the long term. I envision three pillars:
- “Scientifica”: a linked, universally collaborative document that takes the reader from the most basic introductory concepts to the forefront of research. Imagine a Wikipedia for all of science, maintained by researchers. Knowen and Scholarpedia are early prototypes, although I believe a somewhat stronger consensus mechanism akin to particle physics collaborations will be necessary.
- ArXiv++: a central repository of articles that enables universal collaboration through unrestricted forking of papers. This could arise by equipping the arXiv with an open attribution standard and moving toward a copyleft norm (see below).
- Discussion overlay: There is a massive need for quick, low-threshold commentary on articles, although I have fewer concrete things to say about this at the moment. For the time being, imagine that each arXiv article accumulated nested comments (or other annotations) that the reader could choose to view or suppress, and which could be added to with the click of a button.
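To make the overlay idea concrete, here is a minimal sketch of what such nested, suppressible annotations could look like as a data structure. All names here (`Comment`, `visible`, the sample thread) are hypothetical illustrations, not an existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    """One annotation on an article; replies nest recursively."""
    author: str
    body: str
    suppressed: bool = False          # reader-side toggle to hide a thread
    replies: list["Comment"] = field(default_factory=list)

def visible(comments):
    """Flatten the tree, skipping any comment (with its replies) that the reader suppressed."""
    out = []
    for c in comments:
        if c.suppressed:
            continue
        out.append(c)
        out.extend(visible(c.replies))
    return out

# A toy thread on a hypothetical article
thread = [
    Comment("alice", "Eq. (3) seems to drop a factor of 2.",
            replies=[Comment("bob", "Agreed; fixed in v2.")]),
    Comment("carol", "Off-topic remark.", suppressed=True),
]
print([c.body for c in visible(thread)])
```

The point of the recursive `replies` field is that suppressing one comment silently hides its whole subthread, which is the view/suppress behavior described above.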
The conceptual flow here is that bleeding-edge research is documented on the arXiv, discussed on the overlay, and, once it has been hashed out through consensus, folded into Scientifica.… [continue reading]
[Other posts in this series: 1,2,4.]
My GitWikXiv post on making the academic paper universally collaborative got a lot of good comments. In particular, I recommend reading Ivar Martin, who sees a future of academic writing that is very different from what we have now.
Along a slightly more conventional route, the folks working on Authorea made a good case that they have several of the components that are needed to allow universal collaboration, and they seem to have a bit of traction. I was asked what it would take to solve the remaining problems by my lights, and I sketched a hypothetical way to let Authorea (which is a for-profit company) interface with the arXiv to enable universal collaboration with proper attribution. The key step would be the introduction of an open attribution file standard that could be agreed upon by the academic community, and especially by the arXiv advisory board.… [continue reading]
[Other posts in this series: 1,3,4.]
In a follow-up to my GitWikXiv post on making the academic paper more collaborative, I’d like to quickly lay out two important distinctions as a way to anchor further discussion.
Revision vs. attribution vs. evaluation
Any system for allowing hundreds of academics to collaborate on new works needs to track and incentivize who contributes what. But it’s key to keep these parts separate conceptually (and perhaps structurally).
- Revisions are the bare data necessary to reconstruct the evolution of a document through time. This is the well-trodden ground of revision control software like Git and hosting services like GitHub.
- Attribution is the assigning of credit. At the minimum this includes tagging individual revisions with the name/ID of the revisor(s). But more generally it includes the sort of information that can be found in footnotes (“I thank J. Smith for alerting me to this possibility”), acknowledgements (“We are grateful to J. Doe for discussion”), and author contributions statements (“A. Atkins ran the experiment; B. Bonkers analyzed the data”).
- Evaluation of the revisions is done to assess how much they are worth. This can be expressed as an upvote (as on StackExchange), as a citation count or another bibliometric such as the h-index, or as publication in a certain venue like Nature.
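The structural separation urged above can be sketched in a few lines: three record types that reference the same revision ID but live in independent structures, so credit and worth can be assigned and revised without rewriting the revision history itself. This is a hypothetical illustration; all the class and field names are invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Revision:
    """Bare data: who changed what (the git-commit layer)."""
    rev_id: str
    author_ids: list[str]       # minimal attribution: tag each revision
    diff: str

@dataclass
class Attribution:
    """Richer credit, kept separate from the revision data itself."""
    rev_id: str
    note: str                   # e.g. "We are grateful to J. Doe for discussion"

@dataclass
class Evaluation:
    """Worth of a revision, assessed independently (votes, citations, venue)."""
    rev_id: str
    upvotes: int = 0
    citations: int = 0

# The three records point at the same revision but can evolve independently.
rev = Revision("r1", ["atkins"], "+ran the experiment")
att = Attribution("r1", "B. Bonkers analyzed the data")
ev  = Evaluation("r1", upvotes=3)
assert rev.rev_id == att.rev_id == ev.rev_id
```

Keeping the three tables decoupled means, for example, that an acknowledgement can be added or an upvote cast years later without touching the immutable revision record.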
… [continue reading]