The Future of Life Institute (FLI) is a group of people, mostly academics along with a few notable celebrities, who share a concern about existential risk, especially risks from technologies expected to emerge in the coming decades. Prominent physicists Anthony Aguirre, Alan Guth, Stephen Hawking, Saul Perlmutter, Max Tegmark, and Frank Wilczek serve on its advisory board. I attended their public launch event at MIT in May (a panel discussion), and I am lucky to be acquainted with a few of the members and supporters. Although not a self-described Effective Altruist (EA) organization, FLI overlaps significantly in philosophy, methods, and personnel with other EA groups.
One of the chief risks FLI is concerned with is the long-term safe development of artificial intelligence. Oxford philosopher Nick Bostrom has a new book out on this topic, Superintelligence: Paths, Dangers, Strategies, which seems to have convinced nerd hero Elon Musk that AI risk is a valid concern. Yesterday, Musk made a $10 million donation to FLI to fund grants for researchers in this area.
This is a big deal for those who think this sort of work is hugely underinvested in. It’s also good fodder for journalists who like to write about killer robots. I expect the quality of the public discussion about this to be…low.