More well-deserved praise for the Stanford Encyclopedia of Philosophy. Lots to be learned from how the SEP was created. A key chicken-or-egg problem:
…several SEP authors and editors…said that the encyclopedia is used frequently both as a reference and as a teaching tool. This means that philosophers are some of the SEP’s core readers, and they can alert authors or subject editors to incorrect or insufficient entries.
Stanford does pay most of the operating costs. But the SEP has a paid staff of only three—Zalta, Nodelman, and Allen—plus five other Stanford employees who spend 20% of their time on technical support. Neither the authors, nor the dozens of subject editors, get so much as a dime for their troubles.
To pay running expenses not covered by Stanford, the team obtained nearly $2 million in grants over the first 15 years. But they wanted something more sustainable… The SEP asks academic libraries to make a one-time contribution [that now provides around a third of the budget]. That doesn’t get them access to the SEP, since it’s already freely accessible, but they enjoy some extra “member benefits,” like the ability to use their own branding on a version of the encyclopedia, and to save the full archives.
Moreover, their money goes into an SEP endowment, managed by the same company that takes care of Stanford University’s endowment of over $20 billion. If the SEP ever shuts down, Stanford promises to give the libraries that contributed to SEP all their money back, with interest. “It became a no-risk investment for the libraries, and it’s a way for them to invest in open access,” says Zalta.
Libraries were enthusiastic. The SEP was able to raise over $2 million from the long list of contributors, and Stanford added $1 million to the library endowment. The university also provides 60% of SEP’s budget—not much to ask from such a rich institution. The remaining 10% comes from a “friends of the SEP” program, which for $5, $10, or $25 a year lets individual users download nicely formatted PDFs of the articles, good for printing or archiving for personal use.
Although it would be a much bigger undertaking, this is what’s needed for physics.
Acceleration provided by a cable-drawn chassis for virtual reality
(Is ‘virtual reality’ a dated term?)
More intense than Giving What We Can is the Radical Givers Pledge:
To become a Radical Giver, you must donate 33% of your pre-tax income (less education costs, debt payments, and catastrophic expenses)…We only recommend you do this if you’re making at least 50% more than the median income in your country (around 60K USD in the United States)…To count toward your 33%, you must make donations toward maximally effective causes…You’re free to donate to other charitable causes, like your local church or your favorite nonprofit; we don’t discourage that in any way. But they will not count toward your 33%…All members will be entered into a Radical Givers community member list, which is publicly available online.
- Arguments for stillbirths being counted as a key humanitarian index in developing countries. (H/t Hauke Hillebrandt.)
- Coverage in Forbes about recent study claiming evidence against the linear no-threshold model for radiation damage in organisms. Seems hard to believe this hasn’t been carried out before. This is a politically charged topic making sensible research hard, and I’m interested in it for that meta reason.
Highest-resolution shot of Pluto downlinked from New Horizons (8000×8000, 67.5 MB):
- New observation of a relativistic black-hole binary, with relevance to the outstanding final-parsec problem. I thought this popular article was notable for its excellent scientific journalism. No clichés about Einstein, no sensationalism, and I love the sidebar.
- The difference between weak typing and dynamic typing in programming languages.
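As a quick gloss on the distinction (my own example, not from the linked piece): dynamic typing means types are checked at run time and attach to values rather than variables, while weak typing means the language silently coerces between types. Python is dynamically but strongly typed:

```python
# Dynamic typing: the same variable can hold values of different
# types, and type errors surface only at run time.
x = 1
x = "one"  # fine: the type travels with the value, not the variable

# Strong typing: Python refuses to silently coerce str + int,
# whereas a weakly typed language (e.g. JavaScript) would
# happily produce "11" here.
try:
    "1" + 1
except TypeError as e:
    print("strongly typed:", e)
```

So "weak vs. strong" and "dynamic vs. static" are independent axes: C is statically but fairly weakly typed, Python dynamically but strongly typed.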
- Ben Kuhn: Rah autocomplete.
- Bending subway train from the inside.
How misbehavior in school pays off for some kids:
Together with two co-authors, I have studied how classroom misbehavior relates to both educational attainment and labor market performance. Surprisingly, we find evidence that some non-cognitive skills that manifest as childhood misbehavior in the classroom (and are predictive of lower schooling attainment) are also predictive of higher earnings later in life….It has been long established in the psychology literature that the survey information collected from teachers on classroom behavior is statistically well summarized by two underlying factors, each reflecting a different non-cognitive skill. One of the factors captures anxious, aggressive, or restless outwardly expressed (thus: externalizing) behaviors. The second embodies withdrawn, inhibited (thus: internalizing) behaviors. …If we simply summarize all misbehavior, as some previous research has done, we find that misbehavior lowers schooling attainment and also lowers earnings. This is the basis for the widespread view that childhood misbehavior has a detrimental impact on all economically relevant outcomes. …However, when we recognize that misbehavior in the classroom can be reflective of two very different non-cognitive skills—externalizing and internalizing behaviors—a much more nuanced story emerges. Both of these characteristics are associated with lower schooling attainment. However, whereas internalizing behaviors, like being unforthcoming, depressive or withdrawn, predict lower earnings, externalizing behaviors, such as aggression, predict higher earnings.
(H/t Robin Hanson.)
- From Tyler Cowen: Chinese view of what China and the US agreed to.
- CRISPR advances.
- “Gates funded Terrapower inks deal with China’s CNNC to build fast reactor.”
- The ineffable Gwern on how to slow down Moore’s law.
- Downward spiral of the Human Brain Project.
- Can’t tell if this recent kerfuffle about bacteria eating styrofoam is important. (Original paper.) The real question is whether bags will still cost 5 cents at the grocery store.
Enough with the Trolley problem, already:
The morbid focus on the trolley problem creates, to some irony, a meta-trolley problem. If people (especially lawyers advising companies or lawmakers) start expressing the view that “we can’t deploy this technology until we have a satisfactory answer to this quandary” then they face the reality that if the technology is indeed life-saving, then people will die through their advised inaction who could have been saved, in order to be sure to save the right people in very rare, complex situations. Of course, the problem itself speaks mostly about the difference between “failure to save” and “overt action” to our views of the ethics of harm.
It turns out the problem has a simple answer which is highly likely to be the one taken. In almost every situation of this sort, the law already specifies who has the right of way, and who doesn’t. The vehicles will be programmed to follow the law, which means that when presented with a choice of hitting something in their right-of-way and hitting something else outside the right-of-way, the car will obey the law and stay in its right-of-way. The law says this, even if it’s 3 people jaywalking vs. one in the oncoming lane. If people don’t like the law, they should follow the process to change it. This sort of question is actually one of the rare ones where it makes sense for policymakers, not vendors to decide the answer.
(H/t Carl Shulman.)
Advanced LIGO just came online, and significant further improvements are planned in the future, under the name LIGO-3.
…it seems possible to upgrade the aLIGO instruments gaining a broadband sensitivity improvement by a factor of 3-5 (roughly equivalent to increasing the event rate by a factor 25-100).
(The event rate scales with the volume enclosed by a sphere with a maximum sensitivity radius.) My understanding is that Advanced LIGO already utilizes squeezed vacuum injection to get slightly past the standard quantum limit, but that LIGO-3 will do better by using frequency-dependent squeezing.
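The quoted numbers follow from the cubic scaling: a detector sensitive out to radius $R$ surveys a volume proportional to $R^3$, so a 3–5× sensitivity gain multiplies the event rate by $3^3$–$5^3$. A minimal sketch of the arithmetic:

```python
# Event rate scales with the surveyed volume, i.e. with the cube
# of the maximum sensitivity radius (assuming sources are
# distributed uniformly in space).
def event_rate_gain(sensitivity_gain):
    return sensitivity_gain ** 3

for s in (3, 5):
    print(f"{s}x sensitivity -> {event_rate_gain(s)}x event rate")
# 3x sensitivity -> 27x event rate
# 5x sensitivity -> 125x event rate
```

which matches the quoted "factor of 25-100" after rounding.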
- Real-time lightning map.
I wish this plot had axes, or a link to the actual data: US-Soviet submarine noise arms race…
(Allegedly based on “Cold War Submarines” by N. Polmar.)
- Visualization of Longevity and Mortality.
- Nice to see the XKCD radiation chart on Creative Commons. Interesting that the EPA has explicitly different maximum dose limitations for radiation workers protecting valuable property and protecting life.
IBM Watson’s still got it?:
…a team at the company’s Thomas J. Watson Research Center said it has found a new way to make transistors from parallel rows of carbon nanotubes. The advance is based on a new way to connect ultrathin metal wires to the nanotubes that will make it possible to continue shrinking the width of the wires without increasing electrical resistance. One of the principal challenges facing chip makers is that resistance and heat increase as wires become smaller, and that limits the speed of chips, which contain transistors. The advance would make it possible, probably sometime after the beginning of the next decade, to shrink the contact point between the two materials to just 40 atoms in width, the researchers said. Three years later, the number will shrink to just 28 atoms, they predicted.
The article in Science:
Moving beyond the limits of silicon transistors requires both a high-performance channel and high-quality electrical contacts. Carbon nanotubes provide high-performance channels below 10 nanometers, but as with silicon, the increase in contact resistance with decreasing size becomes a major performance roadblock. We report a single-walled carbon nanotube (SWNT) transistor technology with an end-bonded contact scheme that leads to size-independent contact resistance to overcome the scaling limits of conventional side-bonded or planar contact schemes. A high-performance SWNT transistor was fabricated with a sub–10-nanometer contact length, showing a device resistance below 36 kilohms and on-current above 15 microampere per tube. The p-type end-bonded contact, formed through the reaction of molybdenum with the SWNT to form carbide, also exhibited no Schottky barrier. This strategy promises high-performance SWNT transistors, enabling future ultimately scaled device technologies.
- Why Science Needs Metaphysics.
- Going beyond just interpreting everything as faces: Deep Neural Networks convert photographs into the style of impressionist painters.
- GiveWell is hiring summer research interns. I know many folks who work there, and I find them very impressive on an intellectual level (as well as, obviously, in terms of the work they produce).
- Words of caution from young UCL researchers on Modafinil following that rather enthusiastic meta-analysis. See also Gwern.
- Software bugginess is an economic tradeoff. (This isn’t the first place I’ve seen this pointed out, but he does a fine job.)
“What Has Been Learned from the Deworming Replications: A Nonpartisan View”
A heated discussion on the value of mass deworming campaigns followed the release by a team of epidemiologists (Aiken et al, 2015; Davey et al 2015) of a replication analysis of an influential study on the educational benefits of deworming in Western Kenya (Miguel and Kremer 2004). Despite strong critiques of a seminal paper, and many rounds of responses, neither side appeared to change their views much on the quality of the evidence in the paper. Here I go back to the data and the arguments on both sides. My conclusion is that the replication has raised (or in some cases, highlighted) important questions both over the strength of evidence for spillovers and for the strength of the direct effects of deworming on school attendance – at least insofar as these pass through a worms mechanism. There should have been learning here. I point to structural factors that may contribute to the polarization of this discussion, make it hard for authors to acknowledge errors, and inhibit learning from this kind of replication exercise.
- The most extreme sexual dimorphism in the animal kingdom is found in the blanket octopus, which beats even the anglerfish.
…the Global Commission for the Certification of Poliomyelitis Eradication (GCC) today concluded that wild poliovirus type 2 (WPV2) has been eradicated worldwide. The GCC reached its conclusion after reviewing formal documentation submitted by Member States, global poliovirus laboratory network and surveillance systems. The last detected WPV2 dates to 1999, from Aligarh, northern India….This announcement marks a major landmark in the global efforts to eradicate all three wild poliovirus serotypes: WPV1, WPV2 and WPV3. WPV3 has not been detected globally since November 2012 (in Nigeria); the only remaining endemic WPV1 strains are now restricted to Pakistan and Afghanistan.
This is why the phase-out is important:
OPV contains attenuated (weakened) polioviruses. On extremely rare occasions, use of OPV can result in cases of polio due to vaccine-associated paralytic polio (VAPP) and circulating vaccine-derived polioviruses (cVDPVs). For this reason, the global eradication of polio requires the eventual cessation of all OPV. With WPV2 transmission already having been successfully interrupted, the only type 2 poliovirus which still, on very rare occasions, causes paralysis is the type 2 serotype component in trivalent OPV. The continued use of this vaccine component is therefore inconsistent with the goal of eliminating all paralytic polio disease.
“By the time I started my search [in 1969] over 240,000 compounds had been screened in the US and China without any positive results,” she told the magazine. But, she added: “The work was the top priority, so I was certainly willing to sacrifice my personal life.”…And the work was absolutely painstaking. Along with three assistants, she reviewed thousands of traditional Chinese remedies, testing them in mice….[The recipe for obtaining the component, later called artemisinin, was] written more than 1600 years ago in a text appositely titled “Emergency Prescriptions Kept Up One’s Sleeve”
- Four more postdoc jobs are available at the Centre for the Study of Existential Risk at Cambridge.
- Reddit thread for people with terminal illnesses.
Y Combinator Research:
startups aren’t ideal for some kinds of innovation—for example, work that requires a very long time horizon, seeks to answer very open-ended questions, or develops technology that shouldn’t be owned by any one company….We think research institutions can be better than they are today. So we’re starting a new research lab, which we’re calling YC Research, to work on some of these areas….We’re going to start YCR with one group (which we should be ready to announce in a month or two) and if that goes well, we’ll add others….YCR is a non-profit….the researchers will be able to freely collaborate with people in other institutions…To start off, I’m [Sam Altman] going to personally donate $10 million…YCR researchers will be full-time YC employees…We’ll especially welcome outsiders working on slightly heretical ideas…The researchers will have full access to YC and the YC network.
We talked to Alan at great length in the process of putting this together. He is the most insightful person I’ve ever met on how to structure an organization for great research.
This is very relevant to the apparent decline of corporate basic R&D.
One phenomenon that took me a while to appreciate in academia is the degree to which time, effort, and expertise are used as a scarce resource for bargaining, defense, threats, and long-term investment. A non-trivial fraction of emails between academics is spent trying to extract work out of colleagues while avoiding work that doesn’t further one’s own ends. At best, this sort of negotiation leads to an efficient allocation of effort, through compromise and comparative advantage. At worst, it’s used to defend bad research programs through obfuscation. Most of the time it’s somewhere in the middle.
The wrangling behind the ABC conjecture appears to be an example of this phenomenon at large scale.
- The Stephen Hawking AMA on Reddit is heavy on x-risk/AI questions. Hawking takes pains to address common misconceptions about these risks, largely in agreement with EA x-risk institutions.
This, I think, is why economists talk about “theory vs. data”, whereas you almost never hear lab scientists frame it as a conflict. In econ policy-making or policy-recommending, you’re often left with a choice of A) extending a local empirical result with a simple linear theory and hoping it holds, or B) buying into a complicated nonlinear theory that sounds plausible but which hasn’t really been tested in the relevant domain. That choice is really what the “theory vs. data” argument is all about.
Phenomenon: patent protection incentivizing the commercialization of tech rather than the invention of tech. Usually appears in context of orphan (i.e., off-patent) drugs, where “commercialization” refers to regulatory approval. But here’s an HN commentator referring to this outside pharmaceuticals (and, apparently, outside regulation in general):
> Maybe I’m just disregarding a lot of patent law, but wouldn’t it be a better plan to just have all NASA patents open to anyone?
I think they’re learning from university tech transfer programs. A number of universities tried the exact approach you describe. What they found is that companies weren’t interested in the ideas because of the substantial risk of someone out-developing them. A risk significantly increased by a lack of exclusivity. By patenting them and offering transfer of the patent, they got much better uptake. So this is a case of ideals trading off against reality and results.
In the regulated case, why not grant mini-patents to the first company to get clearance for a drug? Five years from FDA approval? Or perhaps a mandatory licensing setup?
Air shows are serious business:
Since the Blue Angels began performing in 1946, 26 pilots have been killed. There have been a total of 262 Blue Angels which makes the fatality rate around 10%.
- A self-consistent time-travel schematic mechanism?
A model of Telephone: Information distortion increases exponentially in a reporting chain (e.g., between scientists and the public).
In particular, it is essentially impossible to transmit extraordinary claims reliably through more than a couple of links. PDF. (H/t Carl Shulman.)
Some data about the usefulness of letters of recommendation. In particular, it looks like letters could be replaced by just asking the recommender a few specific questions, like an explicit ranking of the subject relative to their peers, since such tidbits are pretty much the only pieces with predictive power.
See also the secret code.
- New article on Philip Tetlock and The Good Judgement Project in The Chronicle of Higher Education. And high praise from Bryan Caplan.
- This happened a while ago, but I just now found out from my officemate that Library Genesis (alternate) has returned. However, there is no “scientific articles” search anymore. This search apparently still worked for a while via direct URL access, but it’s not working right now. More info.
Kilogram finally redefined in terms of fundamental constants:
The breakthrough comes in time for the kilogram to be included in a broader redefinition of units — including the ampere, mole and kelvin — scheduled for 2018. And this week, the International Committee for Weights and Measures (CIPM) will meet in Paris to thrash out the next steps….The kilogram is the only SI unit still based on a physical object. Although experiments that could define it in terms of fundamental constants were described in the 1970s, only in the past year have teams using two completely different methods achieved results that are both precise enough, and in sufficient agreement, to topple the physical definition. Redefinition will not make the kilogram more precise, but it will make it more stable. A physical object can lose or gain atoms over time, or be destroyed, but constants remain the same. And a definition based on constants would, at least in theory, allow the exact kilogram measure to be available to someone anywhere on the planet, rather than just those who can access the safe in France…
This got me a little worried though:
The CIPM’s committee on mass recommends that three independent measurements of Planck’s constant agree, and that two of them use different methods….Reaching agreement proved difficult….That still left the result from NIST as an outlier… “We brought in a whole new research team, we went over every component, went through every system,” he says. They never found the cause for the disagreement, but in late 2014 the NIST team achieved a match with the other two, who in the meantime had shrunk their relative uncertainties to within the required levels.
- Somewhat related.
- Charities GiveWell would like to see.
“Great 5 minute explanation of who is fighting who in Syria and why”:
(H/t Rob Wiblin.)
- Solar Probe Plus is a space mission to travel to within 9 solar radii of the Sun, into the outer reaches of the solar atmosphere. It will be the fastest man-made object ever (by a factor of 3).
At HN, John Nagle’s comment on Stanford’s relationship to startups:
Stanford is really an investment fund that runs a school on the side for the tax break. This started in 1991, when Stanford spun off their endowment management as the Stanford Management Company. SMC’s headquarters was on Sand Hill Road, across from all the VCs. This ended up putting Stanford into venture capital in a big way. This was new. Before that, universities tended to put their endowments into passive investments – real estate, stocks, and bonds. Investing in startups worked out very well for Stanford. Stanford had pre-IPO stock in Cisco, Yahoo, Google… They have money in various VC funds. Buying into YCombinator is consistent with that investing approach. As SMC became more powerful, executives from SMC started moving into positions in the university itself. SMC moved its HQ onto the main campus. Not clear where this will end; we’ll have to see who replaces Hennessy as president.
- “I find it funny that the framers thought it necessary to specify that congress was allowed to institute copyrights and patents, when if they hadn’t done so, and we decided to institute copyright today, no one would bat an eye at justifying it through the commerce clause.” – Mike Blume. In terms of Schelling points, I wonder if it would be better to pass a constitutional amendment that explicitly legitimized all the laws that have been squeezed in under the commerce clause, but that also said “…but seriously, no more stuff like that — we mean it“.
- David Divincenzo on quantum computing, cultural transformation at IBM Research, and German vs US science. (H/t Graeme Smith.)
“Wikipedia is significantly amplifying the impact of Open Access publications”:
When you edit Wikipedia to include a claim, you are required to substantiate that edit by referencing a reliable source. According to a recent study, the single biggest predictor of a journal’s appearance in Wikipedia is its impact factor. One of the exciting findings, writes Eamon Duede, is that it appears Wikipedia editors are putting a premium on open access content. When given a choice between journals of similar impact factors, editors are significantly more likely to select the “open access” option.
- The forthcoming 21 Bitcoin chip.
foreXiv by C. Jess Riedel is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.