- We can't call it a failure to predict. I think few people in the rationalist community would have argued that a pandemic could NOT happen, before NYE 2019. It's a failure to react, even once we saw THIS non-hypothetical pandemic coming. Am I missing people who were sounding the alarm? If not, it seems rationalists are no better at spotting information important to survival than anyone else.
(Side lesson: most cognitive skills are not as generalizable as we would like to think. Being good at thinking critically about software does not necessarily mean you're good at thinking critically about epidemiology. I suspect this is because understanding the relevant variables mostly comes down to memorized, instinctive System 1 associations and weightings built from experience.)
- Very few people saw this coming - "this" meaning "a pandemic concrete enough that we must plan for it". Including rationalists. Including superforecasters. People in epidemiology knew it was possible, but it's hard to weight their claims of danger above those of any other profession whose success depends on predicting low-probability, high-consequence events (they're always thinking about pandemics, appropriately); Bill Gates and a few other smart people outside the epidemiology world tried to raise consciousness about the possibility prior to this particular event. Was there a way to pull their signal out above all the other constantly-broadcast jeremiads at the time? And it wasn't like an earthquake, where one second it isn't there and the next it is, and so far as we know there's no way to spot it early; this had been visible since December, and the large majority of us in the US, including rationalists, did not care much until early March. This was in no way a black swan. We knew it could happen, it had happened several times before, and we had weeks of growing warnings. It was a white swan, walking slowly toward us from the horizon, just like the last few white swans did.
[Added later: Nassim Nicholas Taleb uses exactly the same language in this Bloomberg interview. And read more here about why it was so hard to raise the alarm.
- Most depressingly, all this occurred after we (in the rationalist community, and in parts of the psychology, media, and data world) had spent years pointing out the failures of predictors and explicitly trying to improve. It's depressing because it raises the question of what else we're missing, and indeed whether we can ever NOT miss things like this. Again: not even a failure to predict. A failure to react. Why? Denial? Fear of social censure by others not on board? Bounded rationality, i.e. most of us are too stupid to extract important signals and extrapolate?
- As a result, I am now particularly concerned about the likelihood of Carrington events and nuclear war - see here and here for near-misses (never mind their intentional use, which is also possible - indeed, that's why they were built and why they continue to be maintained). The 1983 event is particularly chilling and came down to the career-risking, intuitive, principled judgment of ONE MAN. Petrov should be a name repeated with reverence around the world, since arguably it's because of him that there still IS a world. Our overconfidence that it can't happen builds on the same time scale as our forgetting of the Asian flu of 1957-58, which resulted in school closures and an economic downturn, though not on the scale we're seeing with COVID-19.
- We have never seen runaway AI. We have seen nuclear weapons used in war. I wouldn't argue against the possibility of a hard AI takeoff, but you canNOT argue against the possibility of nuclear weapons used in war, because it has already happened once. Interestingly, of all the stupid denialisms out there, I have never run into Hiroshima-Nagasaki denialists.
Another white swan on the horizon that rationalists should spend more time stopping.
Here's Scott Alexander's review of the book by Toby Ord, which besides AI lists pandemics and nuclear war among existential risks. Before you're too thrilled that he gives lower numbers for nuclear war than for AI, note that those numbers are for the TOTAL EXTINCTION OF THE HUMAN RACE, not the chance of a nuclear war happening at all. There's a lot of space between "extinct" and "many of the people you love will die and all of you will suffer horribly", just like there's space between "okay" and "needs intubation" with COVID-19 - so don't think mild to moderate means okay.] Yet another time we survived by dumb luck:
...even when people seem to care about distant risks, it can feel like a half-hearted effort. During a Berkeley meeting of the Manhattan Project, Edward Teller brought up the basic idea behind the hydrogen bomb. You would use a nuclear bomb to ignite a self-sustaining fusion reaction in some other substance, which would produce a bigger explosion than the nuke itself. The scientists got to work figuring out what substances could support such reactions, and found that they couldn’t rule out nitrogen-14. The air is 79% nitrogen-14. If a nuclear bomb produced nitrogen-14 fusion, it would ignite the atmosphere and turn the Earth into a miniature sun, killing everyone. They hurriedly convened a task force to work on the problem, and it reported back that neither nitrogen-14 nor a second candidate isotope, lithium-7, could support a self-sustaining fusion reaction.
They seem to have been moderately confident in these calculations. But there was enough uncertainty that, when the Trinity test produced a brighter fireball than expected, Manhattan Project administrator James Conant was “overcome with dread”, believing that atmospheric ignition had happened after all and the Earth had only seconds left. And later, the US detonated a bomb whose fuel was contaminated with lithium-7, the explosion was much bigger than expected, and some bystanders were killed. It turned out atomic bombs could initiate lithium-7 fusion after all! [my emphasis] As Ord puts it, “of the two major thermonuclear calculations made that summer at Berkeley, they got one right and one wrong”. This doesn’t really seem like the kind of crazy anecdote you could tell in a civilization that was taking existential risk seriously enough.
[Added still later: depressing results showing that cognitive biases are extremely difficult to avoid even with explicit, high-stakes incentives.]