It’s the largest global engagement strategy since the Marshall Plan — only…like 40× as large in real dollars.
Here’s a slightly hokey 6-minute introduction from Vox (“7 out of the 10 biggest construction firms in the world are now Chinese”):
(H/t Malcolm Ocean.)
Relatedly, here’s diplomat Kishore Mahbubani on the potential for conflicts between the US and China (45 minutes of lecture and 45 minutes of questions): Interestingly, I’ve found when increasing video playback speed that YouTube on Chrome has fewer skips and clips that impede intelligibility than VLC does playing back the file (at the same speed). Does anyone know why? Or can anyone recommend an alternative to VLC (or a new VLC plugin)?a
(H/t Julia Peng.) Some of the important/interesting claims: (1) The Chinese people are largely accepting of authoritarianism and generally believe that their long history makes democracy less suitable there. (2) The Chinese economic rise has been meteoric, demonstrating that economic liberalism can be pretty cleanly separated from political liberalism. (3) The US ought to submit to more multilateralism and international rule-of-law now in order to establish norms that will constrain China later. (4) China likes the US’s strong military involvement with Japan because, without the promise of the US’s umbrella, Japan might become a nuclear power itself.
Having heard Geoffrey Hinton’s somewhat dismissive account of the contribution by physicists to machine learning in his online MOOC, it was interesting to listen to one of those physicists, Naftali Tishby, here at PI:
The surprising success of learning with deep neural networks poses two fundamental challenges: understanding why these networks work so well and what this success tells us about the nature of intelligence and our biological brain. Our recent Information Theory of Deep Learning shows that large deep networks achieve the optimal tradeoff between training size and accuracy, and that this optimality is achieved through the noise in the learning process.
In this talk, I will focus on the statistical physics aspects of our theory and the interaction between the stochastic dynamics of the training algorithm (Stochastic Gradient Descent) and the phase structure of the Information Bottleneck problem. Specifically, I will describe the connections between the phase transition and the final location and representation of the hidden layers, and the role of these phase transitions in determining the weights of the network.
Based partly on joint works with Ravid Shwartz-Ziv, Noga Zaslavsky, and Shlomi Agmon.
I was familiar with the general concept of over-fitting, but I hadn’t realized you could talk about it quantitatively by looking at the mutual information between the output of a network and all the information in the training data that isn’t the target label.… [continue reading]
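To make that quantitative notion concrete, here is a toy sketch of my own (not Tishby's actual estimator; the function name and the tiny dataset are invented for illustration): the mutual information between a learned representation and a nuisance feature of the input that is independent of the label. A representation that keeps only label-relevant information shares no bits with the nuisance feature, while one that memorizes the inputs does.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired discrete samples, via empirical counts."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy data: a nuisance input feature x that is independent of the label y.
x = [0, 1, 0, 1, 0, 1, 0, 1]              # nuisance info (not the target label)
y = [0, 0, 1, 1, 0, 0, 1, 1]              # target labels
t_good = y[:]                             # representation keeping only label info
t_memo = [2 * a + b for a, b in zip(x, y)]  # representation that also memorizes x

print(mutual_information(t_good, x))  # → 0.0 (no nuisance info retained)
print(mutual_information(t_memo, x))  # → 1.0 (one bit of memorization)
```

In a real network one would bin continuous hidden-layer activations before counting, which is where the estimation subtleties in this literature come from.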
A lot of people sound worried that new and improving techniques for creating very convincing videos of anyone saying and doing anything will lead to widespread misinformation and even a breakdown of trust in society.
I’m not very worried. Two hundred years ago, essentially all communication, other than in-person conversation, was done through written word, which is easy to fake and impersonate. In particular, legal contracts were (and are) typeset, and so are trivially fakeable. But although there were (and are) cases of fraud and deception through forged documents, society has straightforward mechanisms for correctly attributing such communication to individuals. Note, for instance, that multi-billion-dollar contracts between companies are written in text, and we have never felt it necessary to record difficult-to-fake videos of the CEOs reciting them.
The 20th century was roughly a technological Goldilocks period where technology existed to capture images and video but not to fake them. Images, of course, have been fakeable at modest cost for many years. Even in 1992, Michael Crichton’s Rising Sun used high-tech fraudulent security footage as a realistic plot point in then-current day. Although we may see some transition costs as folks are tricked into believing fraudulent videos because the ease of faking them has not yet entered the conventional wisdom, eventually people will learn that video can’t be trusted much more than the written word.Which is to say, most of the time you can trust both text and video because most people aren’t trying to defraud you, but extra confirmatory steps are taken for important cases.a This will not be catastrophic because our trust networks are not critically dependent on faithful videos and images.… [continue reading]
We recently hosted a conference at Perimeter Institute on “Open Science”. Video from all the talks is available here. I spokeIt might be more accurate to say that I occasionally mumbled something intelligible in between long stretches of the words “um” and “ah”. Luckily, you can watch the video at high speed by using a browser plugin like Video Speed Controller for Chrome. Unfortunately, I don’t know a simple way to embed playback speed controls directly into the HTML rather than forcing you to install a plugin or download the video and watch it with a player featuring such controls.a on the importance of “knowledge ratchets”, i.e., pedagogical documents (textbooks, monographs, and review papers) that allow for continuous improvement by anyone. After starting off with my new favorite example of how basic physics textbooks, and physicists, are egregiously uninformed about central elementary things, I ranted about how important it is to allow for people who are not the original author to contribute easily to the documents composing our educational pipeline (broadly construed to include the training of researchers on recent developments).
(I forgot to put on the microphone for the first minute and a half; the sound quality improves after that.)
Luckily, when I wanted to illustrate the idea of in-PDF commenting on articles that generated feedback for the authors, I didn’t have to just use mock-ups. Luis Batalha from Fermat’s Library took the mic for the second half of the talk to show off their Chrome plugin “Librarian” and talk about their strategy for gaining users.… [continue reading]
Extrapolating my current trajectory, I will combine more and more links posts into larger and larger multi-month collections until eventually I release one giant list for all time and shut down the blog.Just kidding. I will get back to actual, non-link blogging before too long…a
We socially unskilled people tend to prefer things to be out in the open and clear, where we can read them and understand them and react, at least at some very basic level. That’s who I am. I am a nerdy person. So personally, I prefer things to be more out in the open where I can have some idea what the heck’s going on, and I will notice them.
But I think that has given me some advantage in being a social scientist, in that when you’re really socially skilled and you move about in the social world, you just intuitively do all the right things, and you don’t think explicitly about it.
Daniel Bernstein is the author of qmail. Bernstein created qmail because he was fed up with all of the security vulnerabilities in sendmail. Ten years after the launch of qmail 1.0, and at a time when more than a million of the Internet’s SMTP servers ran either qmail or netqmail, only four known bugs had been found in the qmail 1.0 releases, and no security issues. This paper lays out the principles which made this possible.
Project Excalibur was a Lawrence Livermore National Laboratory (LLNL) research program to develop [a space-based] x-ray laser as a ballistic missile defense (BMD). The concept involved packing large numbers of expendable x-ray lasers around a nuclear device [on an orbiting satellite]. When the device detonated, the x-rays released by the bomb would be focused by the lasers, each of which would be aimed at a target missile. In space, the lack of atmosphere to block the x-rays allowed for attacks over thousands of kilometers.
Jeff Kaufman reports on the excellent news that Charity Navigator is beginning the slow push to accounting for effectiveness! GiveWell deserves tremendous credit for instigating this long ago.
Useful, basic arguments for and against whether cryptocurrencies (and tokens) are good for anything.
Given the high-profile book reviews that are probably forthcoming from places like the Wall Street Journal, I thank Robin for taking the time to engage with the little guys!
I’ll follow Robin’s lead and switch to first names.
Some say we should have been more academic and detailed, while others say we should have been more accessible and less detailed….Count Jess as someone who wanted a longer book.
It’s true that I’d have preferred a longer book with more details, but I think I gestured at ways Kevin and Robin could hold length constant while increasing convincingness. And there are ways of keeping the book accessible while augmenting the rigor (e.g., endnotes), although of course they are more work.
Yes for each motive one can distinguish both a degree of consciousness and also a degree of current vs past adaptation. But these topics were not essential for our main thesis, making credible claims on them takes a lot more evidence and argument, and we already had trouble with trying to cover too much material for one book.
I was mostly happy with how the authors handled the degree of consciousness. However, I think the current- vs past-adaptation distinction is very important for designing institutions, which Kevin and Robin correctly list as one of the main applications of the book’s material. For instance, should the arXiv host comments on papers, and how should they be implemented to avoid pissing contests?… [continue reading]
Drawing on a large academic literature in topics like sociology, behavioral economics, anthropology, and psychology, and especially the (generalized) theory of signalling, Robin Hanson has assembled a large toolbox for systematically understanding hypocrisy, i.e., the ways in which people’s actions systematically and selfishly deviate from their verbalized explanations. Although he would be the first to admit that many of these ideas have been discovered and rediscovered repeatedly over centuries (or millennia) with varying degrees of clarity, and although there is much I am not convinced by, I find the general framework deeply insightful, and his presentation to be more clear, analytical, and descriptive (rather than disruptively normative) than other accounts. Most of this I have gathered from his renowned blogging at Overcoming Bias, but I have always wished for a more concise (and high status!) form factor that I could point others to. At long last, Hanson and his co-author Kevin Simler have written a nice book that largely satisfies me: The Elephant in the Brain (Amazon). I highly recommend it.
The reason I title these sorts of blog posts “Comments on…” is so I can present some disorganized responses to a work without feeling like I need to build a coherent thesis or pass overall judgment. I do not summarize the book below, so this post will mostly be useful for people who have read it. (You might think of this as a one-sided book club discussion rather than a book review.) But since I will naturally tend to focus on the places where I disagree with the authors, let me emphasize: the basic ideas of this book strike me as profound and probably mostly true.… [continue reading]
The primary use for my iPad is reading and annotating papers. Its new secondary use is as a whiteboard during Skype. WebWhiteboard and AWW App both facilitate public whiteboards without needing a login/signup, and work pretty well with your browser on iPad. WebWhiteboard has a limited and dated interface, but is fairly reliable. AWW App has a more modern interface, but seems to have slow/unreliable servers. Then there are a ton of options that require signup, but I don’t know whether any are worth using.
By way of Eric Rogstad and Tyler Cowen is this new-to-me idea: In the same way that, theoretically, the value of fiat currency is set by a given demand for a medium of exchange, the “fundamental” value of a bitcoin might be determined by a given demand for stores of value.
I’ve only used it for a week, but it’s the best PDF reader I’ve experienced for reading academic articles. It’s snappy and reminds me of Chrome when it first came out. Draggable tabs. Split view. Plays well with Zotero. Can easily add native PDF annotation and search through the existing ones. (And it saves annotations fast when you close the file.Competitors either do this slowly or, like Skim, use a non-native annotation format that can’t be read by other PDF readers.a ) The UI for “find” displays a lot of info intuitively.Edit 2018-1-12: And you can search for any unicode character! Really useful for searching a document for math.b Everything is just nicely designed. I haven’t yet run into a limitation on the free version, but it’s worth upgrading to Pro to support the developer (only $20).
Beware that this is the first version following a big re-write of PDF Reader X, and it’s not completely stable. I’ve gotten it to crash a few times, but the developer has been very responsive to feedback and I’d wager on the stability improving soon. (Edit 2018-1-7: After upgrading to the new version, 3.0.20A, a couple weeks ago, I haven’t experienced any crashes. Looks stable.)
I’m advertising Guru because I think the current selection of PDF readers for academic reading is pretty bad.I have no connection to the developer.c I strongly prefer Guru (instability and all) over these other PDF readers on macOS that I have tried: FoxIt, Preview, Adobe Acrobat Reader, and Skim.… [continue reading]
For several months, Fermat’s Library has offered a Chrome extension called Librarian for browsing PDFs on the arXiv that automatically parses references to clickable journal links and bibtex entries. Very recently they added the ability to publicly comment, visible to anyone else running Librarian. Should be lower friction than commenting on (also excellent) SciRate.
Three weeks into his new job as Arizona’s governor, Doug Ducey made a move that won over Silicon Valley and paved the way for his state to become a driverless car utopia.
It was January 2015 and the Phoenix area was about to host the Super Bowl. Mr. Ducey learned that a local regulator was planning a sting on Lyft and Uber drivers to shut down the ride-hailing services for operating illegally. Mr. Ducey, a Republican who was the former chief executive of the ice cream chain Cold Stone Creamery, was furious.
“It was the exact opposite message we should have been sending,” Mr. Ducey said in an interview. “We needed our message to Uber, Lyft and other entrepreneurs in Silicon Valley to be that Arizona was open to new ideas.” If the state had a slogan, he added, it would include the words “open for business.”
Mr. Ducey fired the regulator who hatched the idea of going after ride-hailing drivers and shut down the entire agency, the Department of Weights and Measures. By April 2015, Arizona had legalized ride-sharing.
In particular, he sketched the essential equivalence between matrix product states (MPS) and restricted Boltzmann machinesThis is discussed in detail by Chen et al. See also good intuition and a helpful physicist-statistician dictionary from Lin and Tegmark.b (RBM) before showing how he and collaborators could train an efficient RBM representation of the states of the transverse-field Ising and XXZ models with a small number of local measurements from the true state.
“There are now five different lines of observational evidence pointing to the existence of Planet Nine,” Konstantin Batygin, a planetary astrophysicist at the California Institute of Technology (Caltech) in Pasadena, said….
…a study that examined the elliptical orbits of six known objects in the Kuiper Belt…all of those Kuiper Belt objects have elliptical orbits that point in the same direction and are tilted about 30 degrees “downward” compared to the plane in which the eight official planets circle the sun…
Using computer simulations of the solar system with a Planet Nine…there should be even more objects tilted a whopping 90 degrees with respect to the solar plane.
[This is akin to a living review, which will hopefully improve from time to time. Last edited 2017-11-26.]
This post will collect some models of decoherence and branching. We don’t have a rigorous definition of branches yet, but I crudely define models of branching to be models of decoherenceI take decoherence to mean a model with dynamics taking the form $|S_i\rangle|E\rangle \to |S_i\rangle|E_i(t)\rangle$ for some tensor decomposition $\mathcal{H} = \mathcal{S} \otimes \mathcal{E}$, where $\{|S_i\rangle\}$ is an (approximately) stable orthonormal basis independent of initial state, and where $\langle E_i(t)|E_j(t)\rangle \approx \delta_{ij}$ for times $t \gg t_d$ and $i \neq j$, where $|E\rangle$ is the initial state of $\mathcal{E}$ and $t_d$ is some characteristic time scale.a which additionally feature some combination of amplification, irreversibility, redundant records, and/or outcomes with an intuitive macroscopic interpretation. I have the following desiderata for models, which tend to be in tension with computational tractability:
Regarding that last one: we would like to recover “classical behavior” in the sense of classical Hamiltonian flow, which (presumably) means continuous degrees of freedom.In principle you could have discrete degrees of freedom that limit, as $N \to \infty$, to some sort of discrete classical system, but most people find this unsatisfying.b Branching only becomes unambiguous in some large-$N$ limit, so it seems satisfying models are necessarily messy and difficult to numerically simulate.… [continue reading]
A senior colleague asked me for thoughts on this paper describing a single-preferred-branch flavor of quantum mechanics, and I thought I’d copy them here. Tl;dr: I did not find an important new idea in it, but this paper nicely illustrates the appeal of Finkelstein’s partial-trace decoherence and the ambiguity inherent in connecting a many-worlds wavefunction to our direct observations.
We propose a method for finding an initial state vector which by ordinary Hamiltonian time evolution follows a single branch of many-worlds quantum mechanics. The resulting deterministic system appears to exhibit random behavior as a result of the successive emergence over time of information present in the initial state but not previously observed.
We start by assuming that a precise wavefunction branch structure has been specified. The idea, basically, is to randomly draw a branch at late times according to the Born probability, then to evolve it backwards in time to the beginning of the universe and take that as your initial condition. The main motivating observation is that, if we assume that all branch splittings are defined by a projective decomposition of some subsystem (‘the system’) which is recorded faithfully elsewhere (‘the environment’), then the lone preferred branch — time-evolving by itself — is an eigenstate of each of the projectors defining the splits. In a sense, Weingarten lays claim to ordered consistency [arxiv:gr-qc/9607073] by assuming partial-trace decoherenceNote on terminology: What Finkelstein called “partial-trace decoherence” is really a specialized form of consistency (i.e., a mathematical criterion for sets of consistent histories) that captures some, but not all, of the properties of the physical and dynamical process of decoherence.… [continue reading]
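The first step of that construction, drawing one branch from a late-time decomposition with Born weights, can be sketched in a few lines (the function name and the toy two-branch example are my own illustration, not Weingarten's code):

```python
import random

def born_sample(branches, rng=random):
    """branches: list of (amplitude, label) pairs from a late-time branch
    decomposition.  Returns one branch, drawn with probability proportional
    to |amplitude|^2 (the Born rule); amplitudes need not be normalized."""
    weights = [abs(amp) ** 2 for amp, _ in branches]
    return rng.choices(branches, weights=weights, k=1)[0]

# Two branches with Born probabilities 1/3 and 2/3:
branches = [(0.5773502691896258, "spin up"), (0.816496580927726, "spin down")]
rng = random.Random(0)
counts = {"spin up": 0, "spin down": 0}
for _ in range(30000):
    counts[born_sample(branches, rng)[1]] += 1
print(counts["spin down"] / 30000)  # ~ 2/3
```

In the proposal, the single branch drawn this way is then evolved backwards with $U^\dagger(t)$ to define the initial condition of the universe, so the apparent randomness is baked into the initial state rather than the dynamics.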