Isn’t the danger from smarter-than-human AI a distraction from other issues?
The world is, unfortunately, big enough for multiple issues.
Nuclear war and bioterrorism are real threats. Unfortunately, machine superintelligence is also a real threat. The world is big and troubled enough for all three.*
The threat from superintelligence is unlike many other threats that humanity faces, and it seems uniquely pressing. One distinguishing feature is that a significant fraction of the world’s economy is being spent to make AI more and more capable. In contrast, although biosecurity is a serious issue, investors aren’t pouring tens of billions of dollars into creating superviruses, and supervirus engineers aren’t pulling salaries of millions or tens of millions (or sometimes even hundreds of millions) of dollars per year.
The world is putting effort into making nuclear power, but nuclear power plants are a pretty different technology from nuclear weapons. We don’t live in a world where private companies are scrambling to build larger and larger nuclear weapons with huge amounts of investment and talent. If we did, there’d be a much greater risk of nuclear war.
AI is also a trickier situation because it provides great wealth and power right up until it crosses some critical threshold, at which point it kills everyone. And nobody knows where that threshold is.
Imagine nuclear power plants got more and more profitable as the uranium they used was more and more enriched, but at some unknown enrichment threshold they blew up and ignited the atmosphere, killing everyone. Now imagine that half a dozen companies were enriching uranium as fast as they could, each saying they’d rather be a participant than a spectator. That’s a little like what humanity is doing with artificial superintelligence.†
The danger from artificial superintelligence is urgent. Corporations are rushing to build this technology. We don’t know how long it will take them to succeed, but it seems to us that a child born in the U.S.A. today is more likely to die from AI than to graduate from high school. We think that you, the reader, are likely to die from it in your lifetime, perhaps in the next few years. The whole world is at stake.
We aren’t saying that other issues should be ignored. We’re saying that this issue must be dealt with.
* See also our extended discussion (following Chapter 13) on making an inclusive coalition.