Catastrophe: Risk and Response
Richard A. Posner
As someone who works in an area focused on improbable risks (though not nearly as dire as those discussed in this book), I was very interested in the topic. There were a few interesting ideas in this book, although I had to persevere through Posner’s quasi-Talebian self-importance and needless swipes at things he doesn’t like. Off the top of my head, I recall him basically dismissing the entire genre of science fiction as unworthy (although he feels OK about it as long as he can use the fancier name of “speculative fiction”), and taking repeated digs at disaster movies for featuring “plucky racial- and gender-diverse groups” who save the world, as though the idea of anyone other than a group of white guys saving the world were risible. Honestly, I almost put the book down because of this, and I wouldn’t blame others who did. Perhaps worst of all, because it is not only boorish but also illustrates Posner’s substantive blind spots, is a passage (I meant to mark it but can’t find it now) where he says that we should devote more resources to science education and fewer to fields such as sociology, ethnic studies, and literature; parenthetically, he allows that we should still fund foreign language education, because that could help us prevent bioterrorism! This was laughable to me: as though sociology, the study of colonialism and oppression, and of course most of all history were completely irrelevant to understanding and responding to terrorism; as though other people were machines you should learn to talk at but needn’t understand.
The book’s overall message, once you push past all of that garbage, is fairly straightforward: we face some highly improbable risks, such as an asteroid strike, that could have enormous negative consequences, and thus may have high costs even when converted to an expectation and discounted to present value; yet public policy tends to pay fairly little attention to such risks, and thus we may be under-investing in preventative measures. Posner’s main point is to argue for wider application of cost-benefit analysis to policy, particularly in assessing and combating catastrophic risks. He focuses on a small subset of those risks: asteroid strikes, particle accelerator disasters that would essentially create a black hole, bioterrorism, and climate change. So far, so credible. I think his argument is particularly compelling in the case of asteroid strikes, where probabilities and severities are reasonably knowable (if not currently known), and the costs basically consist of fiscal outlays to develop better detection mechanisms and research defensive technology.
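To make the expected-cost logic concrete, here is a back-of-the-envelope sketch in Python. Every figure here (the annual probability, the catastrophe cost, the discount rate, the horizon) is an invented illustration for the sake of the arithmetic, not a number taken from the book:

```python
# Hypothetical numbers, for illustration only: a catastrophe with a
# one-in-a-million annual probability but a $100 trillion cost still
# carries a nontrivial expected annual cost.
annual_probability = 1e-6   # assumed chance of the event in any given year
catastrophe_cost = 100e12   # assumed total cost in dollars if it occurs
discount_rate = 0.03        # assumed annual discount rate
horizon_years = 100         # assumed planning horizon

expected_annual_cost = annual_probability * catastrophe_cost  # $100 million/yr

# Present value of that expected annual cost stream over the horizon,
# using the standard annuity formula.
present_value = (
    expected_annual_cost
    * (1 - (1 + discount_rate) ** -horizon_years)
    / discount_rate
)
print(f"Expected annual cost: ${expected_annual_cost:,.0f}")
print(f"Present value over {horizon_years} years: ${present_value:,.0f}")
```

Even a one-in-a-million annual risk carries an expected cost of $100 million per year on these assumptions, which is the sense in which a tiny probability times an enormous loss can still justify real spending.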
But on the other hand, we have the risk of bioterrorism: in Posner’s analysis, primarily the risk of an attack that would wipe out a large proportion of the species (with, say, a genetically engineered super-smallpox). This is a case where the probabilities and severities are extremely difficult to estimate, and the costs of preventive measures even more so. Prominently, Posner discusses curbs on civil liberties, “enhanced interrogation” techniques, and restrictions on non-Americans studying certain subjects at U.S. universities. To me, this quickly gets into territory where cost-benefit analysis is simply no longer a practical tool, because the probabilities and marginal policy impacts are almost impossible to estimate. Posner largely handwaves this practical question. One of the most difficult challenges of extreme and hard-to-quantify risks, in my view, is that, almost by definition, it is even more difficult to estimate the marginal impact of any countermeasure. In fact, even though Posner mentions the marginal vs. total impact issue, he completely borks it with one of his key concepts. He proposes a variety of quasi-cost-benefit methods to be used in cases where some costs or benefits are difficult to quantify. One of these is “inverse cost-benefit analysis,” which entails dividing total spending on a risk by the estimated cost of the catastrophe’s occurrence to back out an approximate probability of the catastrophe (and then comparing that with independent probability estimates to see whether we are spending too much or too little). What Posner misses is that this only works if you further assume that the defensive spending has reduced the probability to zero, and moreover it only works in one direction, since it can be rational to spend less than the expected cost of a catastrophe: either if countermeasures are extremely cost-effective, or if they would not be effective at all.
Therefore, this technique might provide a clue that spending was too high, but can never reliably tell you that spending is too low. (To take a silly example, note that federal spending to combat the Rapture is zero. This doesn’t tell you that the government thinks there is a zero probability of this happening, but only that no countermeasures are known.)
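The mechanics of inverse cost-benefit analysis, and the one-directional limitation described above, can be sketched as follows; all figures are hypothetical:

```python
def implied_probability(annual_spending, catastrophe_cost):
    """Posner-style inverse cost-benefit analysis: treat current spending
    as if it equaled the expected cost of the catastrophe, and back out
    the probability that the spending level implicitly assumes."""
    return annual_spending / catastrophe_cost

# Hypothetical figures: $500M/yr spent against a $10 trillion catastrophe.
p_implied = implied_probability(500e6, 10e12)  # 5e-5, i.e. 1 in 20,000

# The inference is only one-directional.  Suppose independent experts
# estimate the true probability at 1 in 1,000:
p_expert = 1e-3
if p_implied < p_expert:
    # Spending *may* be too low -- but not necessarily: it can also be
    # rational to spend less than expected cost if countermeasures are
    # extremely cheap, or if no effective countermeasure exists at all
    # (federal anti-Rapture spending is zero for the latter reason).
    verdict = "inconclusive: under-spending, cheap fixes, or no known fix"
else:
    verdict = "possible over-spending signal"
print(p_implied, verdict)
```

The division itself is trivial; the point is that a low implied probability is ambiguous, because low (or zero) spending is equally consistent with “no effective countermeasure exists,” as in the Rapture example.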
I was hoping that Posner would address some related philosophical questions, such as whether the total extinction of the human race should carry some additional negative utility beyond the sum of all the individual lives lost, or whether it is rational for people’s implied life valuations to change with the order of magnitude of the probability of the relevant risk. But he really doesn’t get into any of this; I think he is just not that type of thinker, unfortunately.
At any rate, as I said, I do believe in the value of cost-benefit analysis for some more tractable cases, such as asteroid strikes. But I want to close with one more critique, which was actually my biggest issue with the book. I think there is an unspoken but notable class angle to any attempt to focus more attention on catastrophic risks. Surveying the wide range of meliorative expenditures available to us, one finds that the vast majority would naturally focus heavily on the worst-off humans. Thus, for example, the charitable research organization GiveWell (of which I am a big fan), which focuses on cost-effectiveness assessment of charities, recommends that its donors give money to charities distributing anti-malaria bednets and schistosomiasis de-worming medication in Africa, on the grounds that these are the most cost-effective known ways (for individual donors, anyway) of combating human suffering. There is a pretty air-tight economic logic to this: it is obviously cheaper to “buy utility” for poor people, and generally speaking everything is cheaper in poor countries, so a given amount of money goes further. To me, the “catastrophic risks” angle has the effect, whether intentional or otherwise, of potentially re-focusing meliorative spending on things that equally benefit rich people in rich countries. (In fact, potentially more so, since life-valuation techniques are likely to imply that the life of a rich person is worth more than the life of a poor person: another philosophical issue Posner does not address.) The “name of the game” in catastrophic-risk studies is to come up with a disaster so costly that, even multiplied by its tiny probability, it yields an expected cost greater than the immediate challenges facing poor people. (Thank god this book was written before the current wave of AI-risk scaremongering, which allows negative utility to be increased almost without limit.
See for example Maciej Ceglowski’s excellent presentation “The Idea That Eats Smart People.”) Posner waves a hand at this general concern by saying that spending more on one risk doesn’t necessarily mean spending less on others—we can always reduce farm subsidies or whatever—but of course we do have a limited budget of time, money, and attention. In general, rich people can spend an almost unlimited amount of money insuring themselves (and their heirs) against ever more improbable risks, and to my mind this book is sort of that mindset shifted to a policy context. I don’t mean to imply that Posner wrote the book with this intention at all, but I do think there is a risk (ha) of being seduced by this particular flavor of policy analysis for that underlying reason.