But what about the benefits of smarter-than-human AI?
Rushing ahead destroys those benefits.
We’re optimistic about how wonderful superintelligence could be, if it were steering the world toward wonderful ends. We’d personally consider it a great tragedy if humanity never created smarter-than-human minds.
But superintelligence alignment doesn’t come free. If we rush to try to reap those benefits, we get nothing, and worse than nothing.
I (Yudkowsky) spent several years as an accelerationist myself, hoping to create AI as quickly as possible, before recognizing that AI alignment didn’t come free. And both of us authors dream of wonderful transhumanist futures. But we don’t get there by racing ahead on superintelligence.
The choice isn’t between gambling on the benefits of AI now (no matter how small the chance) versus never accessing those benefits. The true choice is between rushing recklessly ahead and killing everyone, versus taking the time to do the job properly.*
“Now or never” is a false dichotomy.
* Some people argue that we must take the gamble now, for the shot at saving dying humans from their natural deaths by aging. Human bodies are formidably complicated, but with enough scientific progress, we could solve many of the maladies that we take for granted today — such as cancer, heart disease, and the varied diseases of aging. Smarter-than-human AI could get us there far faster. Delaying superintelligence literally costs lives.
Or, well, it would cost lives, if it weren’t for the fact that superintelligence kills exactly the same people.
In fact, sick and dying people today very likely have a better chance of survival if humanity backs off from the brink:
- Biomedical research and the hunt for treatments and cures can proceed in the absence of superintelligence. Gene therapy, cancer vaccines, and other new approaches hold enormous promise that researchers are only just beginning to tap.
- Narrowly focused AI technology can even help accelerate this effort, without any need to put the whole human endeavor in jeopardy by building toward smarter-than-human general AI.
- Brain preservation methods can keep a person’s brain intact even after their heart stops pumping, until medical science advances to the point of being able to revive them and restore their health. The sort of AI that could offer immortality could almost surely also restore somebody from an appropriately preserved brain.
(More quietly, a subset of these people will tell you that they are in it for their own personal immortality, and that they’re willing to risk the lives of every adult and child on the planet even for a small chance that they and their loved ones can achieve it. This strikes us as mustache-twirling villainy. To these villains, our recommendation is the same as it is to the altruists: Sign up for brain preservation. It gives you better odds than a rogue superintelligence would, and you also get to avoid putting every human alive in grave peril in your quest for immortality! Win-win.)
Even if we only cared about the welfare of the sick and dying, rolling the dice on some combination of these methods seems like a better option than rolling the dice on building vastly superhuman AI and hoping that it likes us. (And that it likes us in just the right ways.) The dice for superhuman AI are dramatically loaded against us.
But also: To the best of our knowledge, nobody has actually asked the sick and dying if they want to put their families and countrymen in severe danger in order to roll the dice on a possible superintelligence-derived cure. And the families and countrymen in question certainly haven’t been asked if they consent to having their lives put on the line for this mad science experiment.
We don’t have to gamble all of our lives on this option, when many other options exist.
We implore anyone concerned for the welfare of people today to accelerate the above methods instead, while steering as wide a berth as possible around anything that could move us even incrementally closer to artificial superintelligence.
If you simply don’t believe that a rogue superintelligence would kill us, that’s one thing. But to accept that it would likely kill us all and then say we have to take the gamble anyway is madness. There are other options for resolving the problems of the modern world. By analogy: If living in a high-altitude environment makes you uncomfortable, that’s no excuse for jumping off a cliff. Find a different pathway to the bottom of the mountain.