Are you just pessimistic?

We’re optimistic about many things, but superintelligence isn’t like most things.

We would consider ourselves much more optimistic and gung-ho than the average person about nuclear power, geothermal power, genetic engineering, neuroengineering, biotech, nanotech, pharmaceutical development, and many other technologies.*

We expect that we’re at least somewhat less worried than most people about the risk of nuclear war, worst-case climate change scenarios, and many other potential risks and disasters. We think humanity is broadly on a good trajectory, and that if we avoid wiping ourselves out, the future is likely (though not certain) to be wonderful for everyone, with social and technological progress making things better and better over time.

We are also more optimistic than many about human nature. We believe in the goodness of humanity and in the potential for that goodness to deepen and grow if we survive to become more of who we wish to be. We mostly aren’t afraid of humanity ending up in a bleak or dystopian future, so long as we don’t build an AI that keeps us from having a future at all.

Our concern about smarter-than-human AI is not driven by generic cynicism or pessimism. Smarter-than-human AI is different from the technologies that came before it.

Other technologies don’t think for themselves, or plot ways to escape, or build even more powerful technology. Smarter-than-human AI is a special case.

We view our worries about AI as generalizing to very few other things, because very few things are remotely this dangerous.

And even in the case of superintelligence, which poses a uniquely large threat and a huge challenge for the international community, we think there’s hope for the future to go well. We think humanity has the ability to hit the brakes on AI development, and that this could be enough to set us on a positive trajectory. We even think that (with a lot more time) humanity could put itself in a good position to build superintelligence safely.

But in order to get there, we first need to face up to the reality of the situation.

The point is the arguments, not the dire-sounding stories.

We provided a long list of ways that hopeful scenarios like “superintelligence is fascinated with humans” would probably go wrong in real life. Reading a list like that, we imagine that some readers might have a response like this:

The AI optimists have all these hopeful-sounding stories. You have all these scary-sounding stories. Everyone acknowledges, though, that the future is hard to predict. So, hearing all these stories, I feel like I should have a medium-sized probability of AI catastrophe, not an extreme probability in either direction.

But you don’t say, “There are scary stories, and there are also hopeful stories, so we can’t be sure what’s going to happen, and we should ban superintelligence just to be on the safe side.” You say that the hopeful stories are cherry-picked and unlikely, and that your own stories should get more weight. Why?

The short answer is: You can’t make good predictions about the future by just counting up all of the gloomy tales and all of the happy tales and weighing them like marbles on a scale. Thinking through different scenarios can sometimes be helpful, but not in quite that fashion.

To illustrate the general point: Imagine that someone says, “Two hundred years from now, there will be exactly eight whales in existence, and they will all be purple.”

Humans have wild imaginations. Someone could fill a book with hundreds of stories of how it came to pass that the whale population shrank to exactly eight members, all of them purple. Someone else could fill a book with hundreds of stories in which there aren’t exactly eight whales. You can’t make accurate predictions by saying, “Well, both sides have plausible-sounding stories, so surely the truth is somewhere in the middle.”

To figure out which stories to believe, you’ve got to look at the actual arguments. In the case of the purple whales, the argument is essentially that the outcome is too narrow and specific, and won’t be achieved unless the dominant forces steering the world are trying to achieve it. We can say much the same about superintelligent AI producing good, human-compatible outcomes.

Someone who was tasked with dispelling the “eight purple whale” stories one by one would wind up caught in a fairly repetitive loop of saying: “No, that’s overly specific; there are a bunch of other ways the future could go that would not lead exactly there; to imagine that it goes exactly that way is wishful thinking.”

This is more or less the role we authors find ourselves in with regard to the AI situation: Humans can tell all sorts of stories where everything goes fine, but those all ultimately involve imagining that the future follows a single narrow pathway when in fact there are a bunch of other ways for the future to go. This is why we keep repeating that humans aren’t the most efficient solution to almost any problem and that AIs won’t care about us even a little.

If Anyone Builds It, Everyone Dies does not just rattle off a bunch of gloomy stories and thereby conclude that AI is dangerous. In the book, we lay out an argument — an argument that is, in some ways, fairly simple: Researchers are trying to build AIs that are far smarter than any human. At some point, they’re likely to succeed. Current methods give humans very little ability to pick what sort of future the AIs steer toward. There are many different directions they could go, and most directions aren’t good.

The reason we’re rattling off all the counterarguments isn’t to overwhelm you with pessimism (even if you’re the sort of person who reads the online resources end-to-end). It’s that we actually get asked all these different questions, over and over, and it’s nice to have a repository of responses somewhere. You don’t need to read them all; the answers all echo each other anyway.

What matters is the arguments themselves, not someone’s bias toward optimism or pessimism, and not the number of stories someone can trot out.

* When we say we are more optimistic than average (about one technology or another), we mean that we actually believe the technology is more promising than the average person believes. Dispositionally, we see ourselves neither as optimists nor as pessimists, but as realists trying to navigate a complicated world. We are not trying to find a rosy picture to put our faith in, and we are not trying to find a dour picture to fuel our cynicism; we are simply trying to believe the truth. We believe this is the correct disposition when faced with high-stakes decisions.

To be clear: If the best you can do is say “I don’t know, there are some happy tales and some gloomy tales, maybe it’s fifty-fifty as to whether superintelligence would kill us or not,” that’s way more than sufficient to justify an aggressive international response, even if you aren’t quite as worried as we personally are. But it also matters that people understand the problem, because otherwise the policy response is unlikely to be well-targeted and effective. And if you’re just roughly comparing the number of good-sounding stories to the number of bad-sounding stories, then you aren’t engaging with the arguments on either side, which is what would actually build understanding.