[Tomasik has updated his essay to address some of these issues]
Brian Tomasik’s website, utilitarian-essays.com, contains many thoughtful pieces he has written over the years from the perspective of a utilitarian who is deeply concerned with wild animal suffering. His work has been a great resource for what is now called the effective altruism community, and I have a lot of respect for his unflinching acceptance and exploration of our large obligations conditional on the moral importance of all animals.
I want to briefly take issue with a small but important part of Brian’s recent essay “Charity cost effectiveness in an uncertain world“. He discusses the difficult problem facing consequentialists who care about the future, especially the far future, on account of how difficult it is to predict the many and varied flow-through effects of our actions. In several places, he suggests that this uncertainty will tend to wash out the enormous differences in effectiveness attributed to various charities (and highlighted by effective altruists) when they are measured by direct impact (e.g. lives saved per dollar).
…When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing…
…For example, insofar as a charity encourages cooperation, philosophical reflection, and meta-thinking about how to best reduce suffering in the future — even if only by accident — it has valuable flow-through effects, and it’s unlikely these can be beaten by many orders of magnitude by something else…
…I don’t expect some charities to be astronomically better than others…
Although I agree on the importance of the uncertain implications of flow-through effects, I disagree with the suggestion that this uncertainty should generally be expected to even out differences in effectiveness. Here’s a toy example of how this reasoning can fail: Consider two charities, X and Y, that have exactly the same cost and take exactly the same actions. To ensure linearity, suppose their action is some small contribution to a continuous parameter, e.g. scrubbing a tiny percentage of CO2 from the atmosphere, or giving money to poor Kenyans when many other and larger charities already have similar projects. By symmetry, X and Y have the same expected impact on the future. Now imagine that charity X starts burning 99% of its money and using the remaining 1% to accomplish just 1% of the effect of charity Y.
Obviously, charity X is now two orders of magnitude less efficient than charity Y in its direct, object-level impact. But is this difference smoothed out when we consider the uncertainty of flow-through effects more broadly? Of course not. There is still a gap of a factor of 100. Unless you think the sign of the expected effect of the action changes, you should not think the effectiveness of Y relative to X becomes any smaller upon consideration of flow-through effects. And even if you have some uncertainty about the sign, you must still assign an expected utility to a unit of action, and you will still see the same proportional discrepancy between X and Y. (If the sign is negative, you contribute to neither.)
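To make the toy example concrete, here is a minimal sketch. The impact-per-unit figure is an arbitrary assumption standing in for all the uncertain flow-through effects; the point is that under linearity the factor-of-100 gap is independent of it.

```python
# Toy model: expected total impact (direct + flow-through) is assumed
# linear in the amount of object-level action a charity performs.

def expected_total_impact(units_of_action, impact_per_unit):
    """Expected impact, direct plus flow-through, under linearity."""
    return units_of_action * impact_per_unit

IMPACT_PER_UNIT = 7.3  # arbitrary; bundles all uncertain flow-through effects

# Charity Y converts its whole budget into action; X burns 99% of its budget.
impact_Y = expected_total_impact(1.00, IMPACT_PER_UNIT)
impact_X = expected_total_impact(0.01, IMPACT_PER_UNIT)

ratio = impact_Y / impact_X
print(ratio)  # the ratio is 100 no matter what IMPACT_PER_UNIT is
```

Changing `IMPACT_PER_UNIT` rescales both charities equally, so no amount of uncertainty about its value moves the ratio.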
Of course, as Tomasik suggests in the second quoted sentence, tangential effects of a charity on the people who work for or interact with it may be very important and could even out the expected impacts. But they could also magnify them. You simply cannot escape the necessity of choosing a measure (in the sense of measure theory) over which otherwise indistinguishable charities have equal expected impact. The measure could be “number of dollars spent” (which is the measure I think Tomasik is implicitly using), but it could just as well be “number of people employed”, “number of charities” (defined by legal incorporation), “amount of news coverage”, “number of DALYs saved”, “number of schools built”, “amount of time the charity exists”, or “number of donors”.
Assuming a general tendency for uncertainty to smooth out differences in expectation is basically the same two-envelope/scope-insensitivity mistake as assuming that two charities you’ve never heard of, but whose very different methods are described to you, have similar expected impact by default. In other words: How much more (or less) impact would saving 3000 sea lions out of millions have compared to teaching one child to swim? What if I had asked you about 300 sea lions and 10 children? Since you cannot consistently answer “they have equal expected impact, given my ignorance” to both questions — even when you know nothing about swimming or sea lions — it’s clear that mere uncertainty cannot lead to a universal smoothing.
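The inconsistency can be checked with a little algebra. In this hypothetical sketch, expected impact is taken to be linear in the number of sea lions (s) and children (c); answering “equal” to both questions then forces a contradiction:

```python
from fractions import Fraction

# Answering "equal in expected impact, given my ignorance" to both
# questions, with linear expected impact, gives two equations:
#   3000 * s == 1 * c     (3000 sea lions ~  1 child taught to swim)
#    300 * s == 10 * c    ( 300 sea lions ~ 10 children)
# From the first, s = c/3000. Substituting into the left side of the second:
s_in_units_of_c = Fraction(1, 3000)   # s = c/3000, from the first equation
lhs = 300 * s_in_units_of_c           # = c/10
rhs = Fraction(10)                    # = 10c, demanded by the second equation

print(lhs == rhs)  # False: the two "equal by ignorance" answers conflict
```

The two answers disagree by a factor of 100, so at most one of them can reflect a coherent prior.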
Lastly, I want to draw out one other statement by Tomasik about research:
But it’s not clear that directly studying future suffering is many orders of magnitude more important. The tools and insights developed in one science tend to transfer to others. And in general, it’s important to pursue a diversity of projects, to discover things you never knew you never knew. Given the choice between 1000 papers on future suffering and 1000 on dung beetles, versus 1001 papers on future suffering and 0 on dung beetles, I would choose the former.
Here, it seems to me that he’s not properly distinguishing between two ideas. I think he is arguing against the claim that some types of research are thousands of times more important than others. A precise statement of that claim is that the marginal impact of a research dollar in some fields is thousands of times larger than in others. But the example Tomasik gives is really about the diminishing returns of research, and about how this might smooth out initially vastly different marginal impacts. Now, in agreement with his earlier reference to broad market efficiency, we do expect a smart and flexible allocation of resources toward research avenues with sharply diminishing marginal returns to even out the marginal value of investment in various fields over time. I certainly think this happens.
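As a toy illustration of that equalizing dynamic (the marginal-value curves and the factor-of-1000 starting gap are purely hypothetical assumptions, not estimates of any real field), an efficient funder facing diminishing returns drives initially very different marginal values together:

```python
# Hypothetical fields whose marginal value per dollar diminishes as
# cumulative funding grows: mv_i(n) = base_i / (n + 1).
# The base values start a factor of 1000 apart (an illustrative assumption).
bases = {"field_A": 1000.0, "field_B": 1.0}
funded = {f: 0 for f in bases}

def marginal_value(field):
    return bases[field] / (funded[field] + 1)

# An efficient funder repeatedly gives the next dollar to whichever
# field currently has the highest marginal value.
for _ in range(5000):
    best = max(bases, key=marginal_value)
    funded[best] += 1

ratio = marginal_value("field_A") / marginal_value("field_B")
print(funded, ratio)  # marginal values end up nearly equal
```

The efficient allocation closes the thousandfold gap in marginal value; the argument in the next paragraph is that real research markets fail to allocate this way.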
But this only works insofar as the research market is efficient, and there are good reasons to believe that (a) new discoveries act as shocks, which can dramatically change (i.e. by many orders of magnitude) the marginal value of different fields and (b) the research market has inefficiencies resulting in large and long-lasting discrepancies in marginal value between fields. (Reasons for (b) include the fact that scientists and funding agencies often are beset by biases, are self-interested, are slow to adapt, and don’t think about the far future.) Therefore, it’s very likely that dramatic differences in marginal value of research fields currently exist and will persist into the future. In particular, I suspect insights about the potential importance of artificial intelligence have not been appreciated by most practicing academics and their funders, so I think it’s certainly possible (but by no means obvious) that the marginal research dollar in this field could be orders of magnitude more important than most others.
Eliezer Yudkowsky provides this pithy summary:
I think it’s kinda disingenuous to use the title ‘Charities Don’t Differ Astronomically’ to defend the thesis that some charities can be a billion times more effective than another, but are unlikely to be 10^30 times more effective than a positive-but-useless charity because the second charity, assuming it is positive, probably has a greater-than-1-in-10^30 accidental and unintentional effect on promoting the goals of the first charity, unless of course that effect is negative. ‘Charities Differ In Impact By A Factor of Billions, Not Decillions, Due To Bad Charities’ Unintentional Flow-Through Effects on Astronomical Stakes, Which May Be Negative’ would be a more accurate title here. More importantly that doesn’t refute any of the main policy points in the astronomical-stakes argument.