Distinguish between straight research and scientific opinion?

Summary: Maybe we should start distinguishing “straight research” from more opinionated scientific work and encourage industrial research labs to commit to protecting the former as a realistic, limited version of academic freedom in the private for-profit sector.

It seems clear enough to me that, within the field of journalism, the distinction between opinion pieces and “straight reporting” is both meaningful and valuable to draw. Both sorts of work should be pursued vigorously, even by the same journalists at the same time, but they should be distinguished (e.g., by being placed in different sections of a newspaper, or being explicitly labeled “opinion”) and held to different standards.[a] This is true even though there is of course a continuum between these categories, and it’s infeasible to precisely quantify the axis. (That said, I’d like to see more serious philosophical attempts to identify actionable principles for drawing this distinction more reliably and transparently.)

[a] In my opinion it’s unfortunate that this distinction has been partially eroded in recent years and that some thoughtful people have even argued it’s meaningless and should be dropped. That’s not the subject of this blog post, though.

It’s easy for idealistic outsiders to get the impression that all respectable scientific research is analogous to straight reporting rather than opinion, but just about any researcher will tell you that some articles are closer than others to the opinion category. That’s not to say such work is bad or unscientific, just that those articles go further in the direction of speculative interpretation and selective highlighting of certain pieces of evidence, and are often motivated by normative claims (“this area is a more fruitful research avenue than my colleagues believe”, “this evidence implies the government should adopt a certain policy”, etc.).… [continue reading]

Talk on Collaborative Pedagogical Documents

We recently hosted a conference at Perimeter Institute on “Open Science”. Video from all the talks is available here. I spoke[a] on the importance of “knowledge ratchets”, i.e., pedagogical documents (textbooks, monographs, and review papers) that allow for continuous improvement by anyone. After starting off with my new favorite example of how basic physics textbooks, and physicists, are egregiously uninformed about central elementary things, I ranted about how important it is to allow people who are not the original author to contribute easily to the documents composing our educational pipeline (broadly construed to include the training of researchers on recent developments).

[a] It might be more accurate to say that I occasionally mumbled something intelligible in between long stretches of the words “um” and “ah”. Luckily, you can watch the video at high speed by using a browser plugin like Video Speed Controller for Chrome. Unfortunately, I don’t know a simple way to embed playback speed controls directly into the HTML rather than forcing you to install a plugin or download the video and watch it with a player featuring such controls.

Collaborative Knowledge Ratchets and Fermat's Library
Jess Riedel and Luis Batalha

(I forgot to put on the microphone for the first minute and a half; the sound quality improves after that.)

Luckily, when I wanted to illustrate the idea of in-PDF commenting on articles that generated feedback for the authors, I didn’t have to just use mock-ups. Luis Batalha from Fermat’s Library took the mic for the second half of the talk to show off their Chrome plugin “Librarian” and talk about their strategy for gaining users.… [continue reading]

Research debt

Chris Olah coins the term “research debt” to discuss a bundle of related destructive phenomena in research communities:

  • Poor Exposition – Often, there is no good explanation of important ideas and one has to struggle to understand them. This problem is so pervasive that we take it for granted and don’t appreciate how much better things could be.
  • Undigested Ideas – Most ideas start off rough and hard to understand. They become radically easier as we polish them, developing the right analogies, language, and ways of thinking.
  • Bad abstractions and notation – Abstractions and notation are the user interface of research, shaping how we think and communicate. Unfortunately, we often get stuck with the first formalisms to develop even when they’re bad. For example, an object with extra electrons is negative, and pi is wrong.
  • Noise – Being a researcher is like standing in the middle of a construction site. Countless papers scream for your attention and there’s no easy way to filter or summarize them. We think noise is the main way experts experience research debt.

Shout it from the rooftops (my emphasis):

It’s worth being clear that research debt isn’t just about ideas not being explained well. It’s a lack of digesting ideas – or, at least, a lack of the public version of ideas being digested. It’s a communal messiness of thought.

Developing good abstractions, notations, visualizations, and so forth, is improving the user interfaces for ideas. This helps both with understanding ideas for the first time and with thinking clearly about them. Conversely, if we can’t explain an idea well, that’s often a sign that we don’t understand it as well as we could…

Distillation is also hard.

[continue reading]

Sank argues for a SciRate issue tracker

SciRate is the best location I know of for public discussion and feedback on academic papers, and is an impressive open-source achievement by Aram Harrow and collaborators. Right now it has the most traction in the field of quantum information,[a] but it could stand to become more popular, and to expand into other fields.

[a] Quantum info leading the way, as usual…

My colleague and good friend Dan Sank proposes a small but important tweak for SciRate: issue tracking, à la GitHub.

Issues in Scirate?

Scirate enables us to express comments/opinions on published works. Another very useful kind of feedback for research papers is issues. By “issue” I mean exactly the kind of thing I’m writing right now: a description of

  1. a problem with the work which can be definitively fixed, or
  2. a possible improvement to that product.

This differs from comments, which are just statements of opinion that don’t require any reaction from the author. We all know that issues are essential in developing software, and based on a recent experience where I used GitHub to host development of a research paper with three coauthors and more than a dozen group members providing feedback, I think that issues should also be used for research papers.

It might be nice to attach an issue tracker to Scirate, or at least have Scirate give links to an external issue tracker attached to each paper.

Why not just use a public github repo and get the issue tracker for free?

Making a GitHub repo public makes everything public, including any sensitive information such as comments about particular works/people. Having written a paper using GitHub, I can imagine the authors would not want to make that repo public before going through the entire issue history making sure nobody said anything embarrassing/demeaning/etc.

[continue reading]

Bullshit in science

Francisco Azuaje (emphasis mine):

According to American philosopher Harry Frankfurt,[a] a key difference between liars and bullshitters is that the former tend to accept that they are not telling the truth, while the latter simply do not care whether something is true or not.

[a] Here’s Frankfurt’s popular essay [PDF].

Bullshitters strive to maximize personal gain through a continuing distortion of reality. If something is true and can be manipulated to achieve their selfish objectives, then good. If something is not true, who cares? All the same. These attributes make bullshitting worse than lying.

Furthermore, according to Frankfurt, it is the bullshitter’s capacity to get away with bullshitting so easily that makes them particularly dangerous. Individuals in prominent positions of authority may be punished for lying, especially if lying has serious damaging consequences. Professional and casual bullshitters at all levels of influence typically operate with freedom. Regardless of their roles in society, their exposure is not necessarily accompanied by negative legal or intellectual consequences, at least for the bullshitter…

Researchers may also be guilty of bullshitting by omission. This is the case when they do not openly challenge bullshitting positions, either in the public or academic settings. Scientists frequently wrongly assume that the public always has knowledge of well-established scientific facts. Moreover, scientists sometimes over-estimate the moderating role of the media or their capacity to differentiate facts from falsehood, and solid from weaker evidence.

Bullshitting happens. But very often it is a byproduct of indifference. Indifference frequently masking a fear of appearing confrontational to peers and funders. Depending on where you are or with whom you work, frontal bullshit fighting may not be good for career advancement.

[continue reading]

ArXiv and Zotero surveys

Quick note: the arXiv is administering a survey of user opinion on potential future changes, many of which were discussed previously on this blog. It can be reached by clicking the banner on the top of the arXiv homepage. I encourage you to take the survey if you haven’t already. (Doubly so if you agree with me…)

Likewise, Zotero is administering a somewhat shorter survey about what sorts of folks use Zotero and what they do with it.

To the question “Do you have suggestions for any of the above-mentioned new services, or any other new services you would like to see in arXiv?”, I responded:

I think the most important thing for the arXiv to do would be to “nudge” authors toward releasing their work under a permissive license, e.g., Creative Commons Attribution (CC BY). (Or at least stop nudging them toward the minimal arXiv license, as is done now in the submission process.) For instance, make it clear to authors that if they publish in various open-access journals, they should release the arXiv post under a similarly permissive license. Also, make it easier for authors to switch to a more permissive license at a later date once they know where they are publishing. So long as there is informed consent, anything that would increase the number of papers that can be built on (not just distributed) would be an improvement.

I would also like the arXiv to think about allowing for more fine-grained contribution tracking in the long term. I predict that collaboratively written documents will become much more common, and for this it will be necessary to produce a record of who changes what, like GitHub, with greater detail than merely the list of authors.
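The kind of fine-grained record described above could be as simple as a per-edit attribution log. Here is a minimal sketch in Python; the class and field names are my own invention for illustration, not an actual arXiv or GitHub data model:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Change:
    """One recorded edit to a collaboratively written document."""
    author: str
    timestamp: datetime
    section: str
    summary: str

class ContributionLog:
    """Per-edit attribution: finer-grained than a mere list of authors."""

    def __init__(self):
        self._changes: list[Change] = []

    def record(self, author: str, section: str, summary: str) -> None:
        """Append one attributed edit with a UTC timestamp."""
        self._changes.append(
            Change(author, datetime.now(timezone.utc), section, summary)
        )

    def credit(self) -> dict[str, int]:
        """Count recorded edits per contributor."""
        counts: dict[str, int] = {}
        for change in self._changes:
            counts[change.author] = counts.get(change.author, 0) + 1
        return counts

# Hypothetical usage: two people collaboratively revise a review article.
log = ContributionLog()
log.record("Alice", "Introduction", "drafted opening paragraphs")
log.record("Bob", "Methods", "fixed sign error in a derivation")
log.record("Alice", "Methods", "clarified notation")
print(log.credit())  # {'Alice': 2, 'Bob': 1}
```

A real system would of course attribute individual hunks of text (as `git blame` does) rather than whole sections, but even this coarse record is strictly more informative than an author list.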

[continue reading]


Question: What sort of physics — if any — should be funded on the margin right now by someone trying to maximize positive impact for society, perhaps over the very long term?

First, it’s useful to separate the field into fundamental physics and non-fundamental physics, where the former is concerned with discovering new fundamental laws of the universe (particle physics, high-energy theory, cosmology, some astrophysics) and the latter applies accepted laws to understand physical systems (condensed matter, materials physics, quantum information and control, plasma physics, nuclear physics, fluid dynamics, biophysics, atomic/molecular/optical physics, geophysics).[a]

[a] Some folks like David Nelson dispute the importance/usefulness of this distinction: PDF. In my opinion, he is correct, but only about the most boring part of fundamental physics (which has unfortunately dominated most of those subfields). More speculative research, such as on the validity (!!!) of quantum mechanics, is undeniably of a different character from the investigation of low-energy field theories. But that point isn’t important for the present topic.

That distinction made, let’s dive in.

Non-fundamental physics

Let’s first list some places where non-fundamental physics might have a social impact:

  1. condensed matter and material science discoveries that give high-temperature superconductors, stronger/lighter/better-insulating/better-conducting materials, higher density batteries, new computing architectures, better solar cells;
  2. quantum information discoveries that make quantum computers more useful than we currently think they will be, especially a killer app for quantum simulations;
  3. plasma physics discoveries that make fusion power doable, or fission power cheaper;
  4. quantum device technologies that allow for more precise measurements;
  5. climate physics (vague);[b]
  6. biophysics discoveries (vague);
  7. nanotech discoveries (vague).

[b] Added 2016-Dec-20.

In my mostly uninformed opinion, only fusion power (#3) could be among the most valuable causes in the world, plausibly scoring very highly on importance, tractability, and neglectedness, with the notable caveat that making measurable progress would require an investment of billions rather than millions of dollars.… [continue reading]

Comments on Stern, journals, and incentives

David L. Stern on changing incentives in science by getting rid of journals:

Instead, I believe, we will do better to rely simply on the scientific process itself. Over time, good science is replicated, elevated, and established as most likely true; bad science may be unreplicated, flaws may be noted, and it usually is quietly dismissed as untrue. This process may take considerable time—sometimes years, sometimes decades. But, usually, the most egregious papers are detected quickly by experts as most likely garbage. This self-correcting aspect of science often does not involve explicit written documentation of a paper’s flaws. The community simply decides that these papers are unhelpful and the field moves in a different direction.

In sum, we should stop worrying about peer review….

The real question that people seem to be struggling with is “How will we judge the quality of the science if it is not peer reviewed and published in a journal that I ‘respect’?” Of course, the answer is obvious. Read the papers! But here is where we come to the crux of the incentive problem. Currently, scientists are rewarded for publishing in “top” journals, on the assumption that these journals publish only great science. Since this assumption is demonstrably false, and since journal publishing involves many evils that are discussed at length in other posts, a better solution is to cut journals out of the incentive structure altogether.

(H/t Tyler Cowen.)

I think this would make the situation worse, not better, in bringing new ideas to the table. For all of its flaws, peer review has the benefit that any (not obviously terrible) paper gets a somewhat careful reading by a couple of experts.… [continue reading]