Comments on Simler and Hanson

Drawing on a large academic literature in topics like sociology, behavioral economics, anthropology, and psychology, and especially the (generalized) theory of signaling, Robin Hanson has assembled a large toolbox for systematically understanding hypocrisy, i.e., the ways in which people’s actions systematically and selfishly deviate from their verbalized explanations. Although he would be the first to admit that many of these ideas have been discovered and rediscovered repeatedly over centuries (or millennia) with varying degrees of clarity, and although there is much I am not convinced by, I find the general framework deeply insightful, and his presentation clearer, more analytical, and more descriptive (rather than disruptively normative) than other accounts. Most of this I have gathered from his renowned blogging at Overcoming Bias, but I have always wished for a more concise (and high status!) form factor that I could point others to. At long last, Hanson and his co-author Kevin Simler have written a nice book that largely satisfies me: The Elephant in the Brain (Amazon). I highly recommend it.

The reason I title these sorts of blog posts “Comments on…” is so I can present some disorganized responses to a work without feeling like I need to build a coherent thesis or pass overall judgment. I do not summarize the book below, so this post will mostly be useful for people who have read it. (You might think of this as a one-sided book club discussion rather than a book review.) But since I will naturally tend to focus on the places where I disagree with the authors, let me emphasize: the basic ideas of this book strike me as profound and probably mostly true. The authors pack a huge number of observations and strong claims into a small space, and the relatively limited criticisms I am able to muster below indicate just how much they have accomplished.

Without further ado…

Neat ideas

These things were new to me (though not necessarily invented by the authors):

  • The Freudian story about self-deception — that it’s a self-defense mechanism — has become the common sense view for polite society, even as Freud himself has been undermined in popular opinion. In other words, the reputation of bad ideas can long outlast that of the creator.
  • Speculatively, deadly weapons may have been key to creating the political incentives that drive the intelligence arms race. Before weapons, fights happen in close quarters and are rarely deadly. Individual strength is key, even when attacking by surprise, and it is difficult (without ranged weapons) for a group to attack a single strong individual without some members of the group putting themselves at risk. With weapons, (1) planning and surprise are more effective, incentivizing long-term planning, and (2) coalitions are more powerful, so that dominance becomes grounded in politics.
  • The key distinction between norms and other apparent rule-following (as arises in, say, game theory) is this: norms imply that third parties will intervene to enforce them. In this specific sense, norms are defined by the universalization of rules, and they require meta-norms about punishing norm violations. Sort of obvious in retrospect, but important to get clear for (positive) discussion of norms in animals, where you can’t draw on intuitive notions of morality.
  • Gossip can be thought of as a perversion of a pro-social mechanism for norm enforcement. That is: First norms develop, including the norm that third parties reveal and punish norm violations. Then this mechanism gets co-opted as a weapon against one’s adversaries (i.e., gossip). Then new norms arise to suppress gossip.

Conscious, unconscious, dysfunction, and ground truth

I like that Hanson and Simler emphasize up front that many of the selfish motives they discuss are unconscious, and I think Hanson’s reasons for not doing this more often in his other writing are unconvincing.[1] Lots of signaling discussions get derailed by people confusing this.

I wish the book had attempted to operationally define consciousness in this context. A crude way you might do it is by taking consciousness to be “that which is accessible through introspection”, where we can get an approximate measure of this by offering payment or other rewards to people who are able to describe their motivations in a way that actually explains their behavior. (Fully unconscious motives would be cases where people can’t manage to adequately explain their actions despite overwhelming incentives to do so.) It’s true that, in many circumstances, it’s difficult to unambiguously distinguish between conscious and unconscious effects, and there are varying degrees of conscious awareness. Furthermore, it’s certainly reasonable to study hidden motives — cases where our revealed preferences do not match our verbalized preferences — without taking a position on the degree to which these motives are conscious. But the conscious/unconscious distinction is another important thing to understand, and indeed it may reveal some evidence (see next section) against hidden-motive theories.

Unfortunately, when ascribing hidden motives, the authors mostly ignore[2] the further distinction between (a) the evolutionary goal of an adaptation in the ancestral environment and (b) the execution of that adaptation. This distinction is often even clearer than the conscious/unconscious one, and it is directly relevant for generating new predictions.[3] For instance, people (especially men) plausibly like to win arguments to signal intelligence or social dominance, but when they spend hours arguing anonymously on the internet,[4] they aren’t gaining status or prestige. They are just executing an instinct to win arguments that may have been adaptive in the ancestral environment but misfires in the modern era.[5] I emphasize: these are real, empirically relevant differences, and we confuse things when we ascribe hidden high-level motives to simple dysfunction.

I think Simler and Hanson are making this mistake, at least in part, when they eagerly attribute so many ineffective or dangerous medical treatments to the desire to demonstrate care. Yes, people use prestige to guide their choice of treatment, and they often neglect careful analytical signals, but we can’t confidently conclude they value signaling their own prestige more than their own life.[6] Indeed, insofar as we are going to approximate humans as agents with consistent goals at all, “what they do to preserve their life when it’s on the line” is the closest thing we have to ground truth for motives.

I was surprised that the authors would argue that England’s King Charles II, who willingly endured horrifically painful and pointless treatments for his ultimately fatal disease, did so because

receiving these treatments was proof that he had the best doctors in the kingdom…And by agreeing to the especially painful treatments, he demonstrated that he was resolved to get well by any means necessary — which would have inspired confidence among his subjects…

Is that really more plausible than him simply being mistaken about medical effectiveness?

This is a little like taking Genghis Khan and putting him in a modern automobile. Khan’s best tools for getting vehicles to move were to yell “giddy up” and to whip, and he might try these to get the car to drive. But it’s silly to argue that if he really wanted to move he’d turn the ignition and work the gas and steering wheel, and that therefore he must just enjoy whipping horses.

This isn’t to say that the hidden desire to appear caring doesn’t drive billions of dollars of waste in medicine; it just means people also make honest mistakes while pursuing conventional goals. Surely, the counter-signaling explanations championed elsewhere in the book are at least this convoluted (and often true).

Just-so-ness

In evolutionary ‹blank›ology, there is always the worry that we are being seduced by a just-so story. Barring exceptional circumstances where there is quantitative genetic data showing signs of selective pressure on genotypes that have been separately linked to phenotypes, I don’t know a better method of avoiding this trap than the standard Ockhamian technique: favoring stories that explain many observations while assuming only simple biological mechanisms (e.g., a small number of single-nucleotide polymorphisms). Another check is just to ask “If adaptation A putatively arose for goal G, could G have been accomplished with simpler adaptations B, C, or D?”. It’s true that this line of thinking is often frustrated by the difficulty of estimating simplicity in biology, since complex behavior often arises from simple mechanisms and vice versa. Still, I think Simler and Hanson could have fruitfully put much more effort in this direction.

For instance, why are so many of these selfish human motivations kept unconscious in the first place, rather than simply being conscious and well hidden? Unconscious motives cannot easily leverage consciously known facts and conscious (system 2?) reasoning. Why don’t we simply tell bald-faced lies more often?

Simler and Hanson acknowledge this objection, but their explanations do not convince me:

First of all, lying is cognitively demanding. Huckleberry Finn, for example, struggled to keep his stories straight, and was eventually caught in a number of lies. And it’s even harder when we’re being grilled and expected to produce answers quickly. As Mark Twain may have said elsewhere, “If you tell the truth you don’t have to remember anything.”

First, note that humans have in fact developed some pretty good computational machinery for spinning lies and keeping track of their dependencies. There’s plenty of conscious lying done on a daily basis (mostly white lies) that would be damaging to have revealed, yet revelation happens only very rarely.[7] More importantly, though, computational demands don’t explain why we believe the story of our false motivations. Suppose we were literally computationally limited to building and remembering only one model of past internal feelings and motivations detailed enough to withstand probing inquiry from outsiders, which would explain why the brain only performs the computations necessary to generate the noble-sounding internal experiences. Even then, the individual would still be better off remembering that the model is false. The putative computational limitations would just mean we would not also have a detailed model of the selfish motivations accessible to introspection (as in fact we don’t for our unconscious motivations, by definition).

The authors continue:

Beyond the cognitive demands, lying is also difficult because we have to overcome our fear of getting caught…The point is, our minds aren’t as “private” as we like to imagine. Other people have partial visibility into what we’re thinking. Faced with the translucency of our own minds, then, self-deception is often the most robust way to mislead others.

The obvious rebuttal here is that we should simply have evolved not to have those leaky, difficult-to-suppress emotional responses, or those responses should not have evolved in the first place.[8] And indeed, earlier in the book Simler and Hanson reject an almost identical defense of the Freudian theories that self-deception is about reducing anxiety or protecting one’s self-esteem:

Why would Nature, by way of evolution, design our brains this way? Information is the lifeblood of the human brain; ignoring or distorting it isn’t something to be undertaken lightly. If the goal is to preserve self-esteem, there’s a more efficient way to go about it: simply make the brain’s self-esteem mechanism stronger, more robust to threatening information. Similarly, if the goal is to reduce anxiety, the straightforward solution is to design the brain to feel less anxiety for a given amount of stress.

In contrast, using self-deception to preserve self-esteem or reduce anxiety is a sloppy hack, and ultimately self-defeating. It would be like trying to warm yourself during winter by aiming a blow-dryer at the thermostat. The temperature reading will rise, but it won’t reflect a properly-heated house, and it won’t stop you from shivering.

Likewise, there are more efficient ways to avoid leaking our selfish motivations to outsiders: just stop the leaking process! Reduce translucency, turn down the primal fear of being caught, disable facial tics, etc.

Now, it could certainly turn out that there was more selective pressure toward self-deceit (rather than improved lying) for complicated reasons involving how the brain is structured and the set of available genomes which were “nearby”. But this sort of defense could be deployed just as well to self-esteem preservation as to deceiving others about our motivations. (Yes, self-esteem is internal to the brain, but all the paths that leak information to outsiders are also contained in the brain.) To simultaneously argue for the signaling explanations but against the Freudian explanations would require additional evidence, which may exist but which the authors do not clearly exhibit.

Until we know enough about brain structure and development to understand why self-deception is deployed in lieu of better lying, I think this objection remains a serious one to always keep in mind for these sorts of theories.

Laughter more than a signal of play

Although their chapter on laughter highlights many interesting and subtle properties of the behavior, I do not think the authors live up to this claim:

In this chapter we’re going to demystify laughter — to “crack the code” and explain it as clearly as possible. (It turns out there’s a very crisp, satisfying answer.)

It’s unfortunately difficult to tell what this answer is intended to be, even after[9] reading the chapter, but my best guess is this, which drives the later discussion:

Laughter is a play-signal.

The authors present some evidence that laughter-like sounds are used by many species of great apes to signal play and to distinguish it from seriousness, and they argue persuasively that it is sometimes used in this way by humans as well. However, although I agree laughter is a social signal of some sort, and its deep evolutionary origins may be about signaling play among apes, and it may be used sometimes by humans to signal play, I can’t see why we should think it is primarily about this, even in the ancestral environment. (We use our mouths to hold things sometimes when our hands are full, but this is definitely not the primary purpose of our mouths.)

The problems with the primarily-play-signaling theory become especially clear if we attempt to define play vs. seriousness non-circularly, i.e., not just as the presence or absence of laughter, but as (say) the presence or absence of physical danger, or of important zero-sum stakes. Yes, we don’t laugh when there is danger, but we don’t sleep, mate, eat, or flatter then either. Further, laughter does not occur very often in competitive but physically safe games. (You can say that this is because games are “serious” insofar as they are about signaling athletic ability or whatever, but this just becomes circular; laughter also appears in social situations where speakers are trying to show off who is more witty, with serious social consequences.)

Other observations that don’t really seem to fit with play-signaling: (1) Laughter is used by women to signal their romantic interest in men, and more generally by anyone to signal that they like and approve of someone else. (2) Laughter is mostly restricted to situations in which expectations are violated (a property observed in the literature reviewed by the authors), but there are other times we need to signal play, and instead we use smiling, relaxed body language, etc. (3) Laughter is used to indicate we can distinguish jokes from non-jokes quickly, signaling intelligence.

Based on the evidence in the chapter, signaling play seems to be only one use of laughter, and I’d say the authors (along with everyone else) are still very far from a complete and crisp theory of the behavior.

Alternate interpretations

Throughout the book, the authors collect some observations and argue that the cynical theories better explain the data than conventional wisdom. Although I agree in many cases, I dispute others, and I think the authors are too eager to interpret everything in terms of hidden motives rather than, e.g., cognitive limitations.

In the past, Hanson has often complained that “apologists” who defend various behaviors as non-hypocritical must invoke new excuses for each case, and that the consistent pattern of hypocritical behavior makes the general cynical theory more robust than any individual case. This is not surprising, he says, because plausible deniability in any particular case is essential for evading punishment.

However, this argument doesn’t work well if the cynical theory is also invoking new assumptions in each case. Here are some additional examples where the cynic’s explanation seems just as strained to me as the apologist’s:

  • Regarding the academic journal review process, the authors write:

    But in the long experience of one of us (Robin), the judgments of referees in these cases typically focus on whether a submission makes the author seem impressive. That is, referees pay great attention to spit and polish, i.e., whether a paper covers every possible ambiguity and detail. They show a distinct preference for papers that demonstrate a command for difficult methods. And referees almost never discuss a work’s long-term potential for substantial social benefit.

    But wouldn’t judgment of long-term benefit be highly subjective (and idiosyncratic), whereas technical mastery is relatively objective? In the past, Robin has complained about subjective criteria as a hidden tool for gate-keepers to (say) help their friends and punish their rivals.

    Furthermore, ensuring “spit and polish” — that each individual detail is precise and correct — strikes me as something that may hinder authors but aids the literature as a whole. Shooting from the hip allows authors to more quickly build grand impressive theories, but careful attention to detail and solid foundations allow others to build on existing work more reliably, benefiting the field.

  • Regarding mental avoidance, the authors write:

    Now think about the time you mistreated your significant other, or when you were caught stealing as a child, or when you botched a big presentation at work. Feel the pang of shame? That’s your brain telling you not to dwell on that particular information. Flinch away, hide from it, pretend it’s not there. Punish those neural pathways, so the information stays as discreet as possible.

    However, rumination is a common psychological phenomenon in which people spend unusually (and sometimes destructively) large amounts of time thinking about times when they were caught behaving badly or were embarrassed (among other things). More commonly this doesn’t rise to the level of disease, but it is experienced from time to time by almost everyone. Yes, we often flinch away from remembering our misdeeds, but it’s not clear to me that flinching is actually the more frequent response, or what rumination’s functional purpose would then be.

  • Regarding clothing choice, the authors write:

    Blue jeans, for example, are a symbol of egalitarian values, in part because denim is a cheap, durable, low-maintenance fabric that makes wealth and class distinctions harder to detect.

    So conspicuous consumption signals wealth, but not doing this signals egalitarian values. What would the authors not interpret as signaling?

    I get it, social dynamics are hard and plausible deniability is central to a lot of signaling, so we shouldn’t be surprised that there are many ambiguous layers, counter-signaling, etc. But we should also admit this allows a signaling explanation for most isolated observations, and we should be more modest and careful about our claims.

Perhaps in these cases I am too used to the conventional explanations and do not fully appreciate how strained they are, or how parsimonious the authors’ preferred cynical explanations are. But even if so, the authors should spend more time drawing this out in each specific example,[10] perhaps by quantifying simplicity, or by proposing novel experiments for which apologists would actually make contrary predictions.

Instead, I think we often have a situation where the cynics have one lens through which they see the world and the apologists have another. Both worldviews seem simpler and more clarifying to their adherents.[11] To convert folks from one worldview to another, more work needs to be done estimating the ratio of assumptions made to data explained. In particular, I tentatively assert that gathering more evidence for a smaller number of very convincing examples is more valuable than Hanson’s call for applying this framework to more examples.[12]

Other critiques

Here are some other critical notes I made as I was reading:

  • A few chapters, especially the one discussing religion, were very thin on data. They read more like a fun introduction to thinking about social systems functionally — rather than the more popular moralizing approaches — with little attempt to show that any particular functional theory had much explanatory power. These accounts strongly risk being just-so stories, and I don’t think there’s any alternative but to be a lot more systematic.
  • The authors consider a thought experiment in which humans are rendered oblivious to one another’s possessions, and they argue that consumer products would become more uniform since they would no longer be used to signal individual differences. But some people would claim that in this obliviated world there would be more quirkiness and difference as the social pressure to conform fades. (Surely the authors agree there are times we give up weird habits and products because they are weird.) I don’t think they give us enough data or theory to predict which effect will be bigger, although in either case one can blame hidden motives.
  • The authors complain about how theories that celebrate human goodness tend to propagate more easily than cynical ones. I don’t know if that’s true generally, but I am pretty certain it’s not true in academia. In my experience, cynicism is a way of asserting status and projecting worldliness. I am a cynic, especially about human motives, but I think it’s important to beware the seductive feeling you get when you believe you are uncovering deep truths about the world that nobody else is smart enough or brave enough to discover. I wish the authors acknowledged this.
  • I would have preferred if the summary descriptions of things like the egalitarian norms in forager bands were more concrete. I understand they can’t give us a complete understanding of decades of anthropology research, but I worry a lot about the degree to which some of these high-level descriptions of social phenomena are interpretation-laden.
  • I’m honestly unsure whether the sheepskin effect is evidence of conformity signaling. Insofar as continuously surviving school takes conscientiousness and conformity, it’s not clear why the effect is super-linear in time, with a bump at the degree date. If anything, I’d expect time in school to yield diminishing marginal information about the conscientiousness and conformity of the student. Yes, staying in school for 4 years rather than 1 suggests more conscientiousness and conformity, but staying in school indefinitely does not tell you an arbitrary amount about how conscientious and conforming a student will be on the job; eventually you just conclude the person likes school. And yes, insofar as schoolwork is bunched up near the end (in the form of final tests and projects), it’s possible that it requires a burst of conscientiousness and/or conformity to make it through the end. But it’s just as possible that the actual learning is also bunched up in this way. I was pleased to see that Noah Smith recently raised a similar issue, with Bryan Caplan responding.

Other thoughts

  • Effective Altruism (EA) is given a surprisingly generous treatment, without the typical Hansonian warning that EAs may be just signaling on another level. The entire chapter on charity is a good summary of the signaling critique that EA effectively makes of traditional charity, but the arguments will be familiar to EAs who read Overcoming Bias.
  • This would be a great book to experiment with collaborative annotating. I would like to see someone “fill in” some of the general claims with greater detail. Even the claims that cite a journal article would often benefit from a layman summary and a critical eye on the interpretation. (Then again, this system gives the authors incentives to be lazy…)
  • One aspect of unconscious selfish motives that I wish I knew more about is the amount of computational resources devoted to pursuing those selfish goals, and the interaction with the system-1/system-2 distinction.

Edit: Robin has responded, with my replies here. See also Bryan Caplan’s critique.

Footnotes


  1. Presumably, this is more Simler speaking than Hanson.
  2. This distinction is mentioned in a footnote at the end of the chapter on religion, but it is not appropriately raised as an alternate possible explanation of apparently selfish behavior.
  3. A cynical explanation is that the authors are more interested in exposing hypocrisy because it’s satisfying to tear others down and show off how self-aware they are, and less interested in understanding the details of these mechanisms.
  4. Guilty.
  5. Of course, there seem to be cases where we are much more in tune with the actual social impact, and distinguishing these cases could yield interesting insights. For example, are nerds with worse social skills more likely to blindly execute low-level adaptations like winning arguments, while more socially savvy people learn that this doesn’t actually win over the people they care about?
  6. The more I learn about the extent of scientific illiteracy, the more amazed I am that more people don’t succumb to charlatans peddling bogus medicine. I think a robust system of prestige is largely what enables this.
  7. This is especially true for internal mental experiences which need not, in principle, be checkable against any external verifiable facts.
  8. Sociopaths exist, and my understanding is that they have reduced responses of this type. Sociopathy probably isn’t fitness enhancing, but it’s also a proof-of-concept that stopping emotional leaks is not that difficult in principle.
  9. On a stylistic note, I’m not a fan of the authors keeping the reader in suspense regarding some of their sub-claims, presumably for narrative reasons. For instance, we don’t find out that “laughter is a play-signal” until halfway through the chapter although we are teased much earlier.
  10. Gathering many strained examples does not help if they each invoke different assumptions.
  11. There’s probably a similarity with Marxist views of how society is organized.
  12. Note that I’m not saying there’s anything wrong with continuing to trawl for more examples, and I gladly acknowledge the key role of tentative exploration and triangulation from weak evidence. But the health of the field, and especially the ability of its insights to be efficiently transmitted to people who have not spent a decade immersed in the literature, requires that the foundations be reinforced.