Are you saying we should panic?
We’re saying government officials should take the problem seriously.
We don’t see how panicking would help the situation. Panicking isn’t how society survived the threat of fascism during World War II, nor the threat of nuclear annihilation during the Cold War.
Preventing machine superintelligence from coming into existence is everyone’s problem. In Chapter 13, we discuss the next steps we think the world should take to avert the danger. Suffice it to say, this problem will require coordination, cooperation, level heads, and mature communication.
Acts of extreme panic don’t yield good results.
Sometimes people ask how we could possibly be earnest in what we say if we haven’t, for example, started attacking AI researchers. The answer is that violent outbursts would make things worse. (If you’re the kind of naive utilitarian who thinks they would help, you should probably just back off from attempting consequentialist reasoning entirely and stick to following deontological rules, as we’ve argued before.)
We aren’t hardline pacifists who think that a nation should never go to war, no matter the cause, on the grounds that it risks lives. Some things are worth risking lives for. But there’s a world of difference between “I’m not a hardline pacifist,” and “I think violent mayhem is a sensible way to ensure that the world handles this complicated issue of technological proliferation well.”
Usually these terrible suggestions are brought up by someone who doesn’t actually believe that AI is on the brink of killing us, and who hasn’t tried really looking at the world through that lens. It doesn’t seem to occur to them to ask whether acts of lawless violence would actually help. (Despite our efforts to spell it out repeatedly, as in the addenda to one of Yudkowsky’s essays.)
We aren’t telepaths, but it seems to us that this kind of AI disaster skeptic may view violence as a form of personal expression, as though expressing extreme feelings in extreme ways will cause the world to hand you what you want.
The world doesn’t work like that. We don’t live in a world where everyone has the option to sell their soul for success in their endeavors, and where the reason most people don’t is that they haven’t found an endeavor worth their soul. Terrorism is not a magic “I win!” button that people refrain from pressing only out of a conviction that it wouldn’t be right. The Unabomber did not succeed in reversing the industrialization of society.
You can still shred your soul with acts of hatred or violence, but all you’ll get in exchange is a more broken world. A world where the discourse is that much more poisoned, and where the feats of international coordination needed to actually solve this problem are now that much more difficult to achieve. A terrible act of desperation will not grant you terrible power as part of some Faustian bargain. You can try your very best to sell your soul, but the devil isn’t buying.
Notes
[1] argued before: See, e.g., Yudkowsky’s answer to Q3 in his LessWrong post about how to inch humanity closer to survival even when the situation looks grim.