I was privileged to receive a reply from Robin Hanson on my critique of his largely excellent book The Elephant in the Brain with co-author Kevin Simler. I think in several cases he rebutted something other than what I argued, but I encourage you to read it and judge for yourself.
Given the high-profile book reviews that are probably forthcoming from places like the Wall Street Journal, I thank Robin for taking the time to engage with the little guys!
I’ll follow Robin’s lead and switch to first names.
Some say we should have been more academic and detailed, while others say we should have been more accessible and less detailed…. Count Jess as someone who wanted a longer book.
It’s true that I’d have preferred a longer book with more details, but I think I gestured at ways Kevin and Robin could hold length constant while increasing convincingness. And there are ways of keeping the book accessible while augmenting the rigor (e.g., endnotes), although of course they are more work.
Yes for each motive one can distinguish both a degree of consciousness and also a degree of current vs past adaptation. But these topics were not essential for our main thesis, making credible claims on them takes a lot more evidence and argument, and we already had trouble with trying to cover too much material for one book.
I was mostly happy with how the authors handled the degree of consciousness. However, I think the current- vs past-adaptation distinction is very important for designing institutions, which Kevin and Robin correctly list as one of the main applications of the book’s material. For instance, should the arXiv host comments on papers, and how should they be implemented to avoid pissing contests? This hinges critically on the extent to which academics want to win arguments for their own sake (which might have been adaptive in the ancestral environment) vs. strategically wanting to publicly dominate other academics.
On several remaining points I tried to argue a modest view, that Kevin and Robin’s claims were too sweeping and their stories too just-so-y to justify the confidence they express, but Robin seems to keep interpreting me as arguing against his conclusions and for the conventional wisdom. Instead, I am trying to point out that some evidence is ambiguous and there is plenty of room for theories not yet considered:
Yes, people use prestige to guide their choice of treatment, and they often neglect careful analytical signals, but we can’t confidently conclude they value signaling their own prestige more than their own life.
Sure, given any goal and any behavior, one can invoke an error theory to explain that behavior as a mistaken attempt to achieve that goal. The problem is that according to the error theory these deviations should be random. Thus theories that can explain the behavior more systematically can get stronger evidential support. Our book tries to offer such systematic theories.
It seems clear that there are both error effects and signaling effects going on, so just noting the sign of the total bias and confidently attributing each puzzle to hidden motives is a mistake. Just because our marginal dollar of healthcare buys negligible benefit does not mean that the customer buying the bogus energy vitamins in 7-11 is signaling something to the clerk.
Yes, one can make such excuses. But as a long-time academic I’ll say that the theory that our apparent focus on impressiveness is all really a complex clever plan to maximize long term research progress just doesn’t pass the laugh test.
In fact, I am not making such excuses. I was saying that I could have explained the opposite (counter-factual) data with the sort of arguments Robin has deployed in the past, e.g., subjective criteria are used to hide bias. This is evidence that his argument in this case is weak, not that the conclusion is wrong.
Obviously most academics will agree that refereeing is biased, politicized, and highly imperfect, but would a majority of academics agree with Robin’s specific theory of referee motivations (and in particular that it is best summarized as a focus on impressiveness)? Or even with the relative amount of influence these hidden motives have over praiseworthy ones, based on their personal experience of the refereeing process? I bet not.
Rather, other academics will probably, like Robin, report an impression of the process that fits with their personal theory of academic dysfunction, e.g., unconditional defense of the status quo, or bias against women.
Isn’t it really obvious that overall people pay a lot of attention to how others will interpret their clothing choices?
Yes of course! But the evidence for the particular, detailed signaling story Kevin and Robin are trying to tell is weak. In particular, I wager they cannot actually make good predictions about future fashion developments using their theory…but that they’d be able to explain most possible outcomes after the fact!
We see a reasonably strong consensus in the literature that it is very hard to design brains to block all possible paths by which conscious motives can leak.
First, I would like to know whether this claim can be cited, or whether it just reflects the authors’ overall impressions. I’m honestly curious what sort of evidence could lead to any sort of strong conclusions about the difficulty of brain design. In particular, why is it hard to prevent conscious motives from leaking but not unconscious ones?
Second, Robin still hasn’t addressed the apparent tension of this argument with his and Kevin’s self-esteem explanation that I raised in my initial post. Is there a consensus in the literature that it’s not very hard to design a brain with self-esteem that does not require self-deception?
Edit: In his recent post, Robin says
In my view, the key problem is that, to experts in each area, no modest amount of evidence seems sufficient support for claims that sound to them so surprising and extraordinary. Our story isn’t the usual one that people tell, after all. It is only by seeing that substantial if not overwhelming evidence is available for similar claims covering a great many areas of life that each claim can become plausible enough that modest evidence can make these conclusions believable…I expect that experts in each policy area X will be much more skeptical about our claims on X than about our claims on the other areas.
Let me be clear that I do not think Kevin and Robin’s claims are extraordinary and thus requiring overwhelming evidence. Rather, I think their claims are specific, representing a relatively small region of theory space, and that in places they do not give sufficient evidence to distinguish their theory from the other, equally a priori plausible options. Furthermore, I am more sympathetic to such theories in my own field than in others (although this is suspiciously self-serving, since I am unconventional within my field).
Edit: Fixed stray bulletpoints.