Will AI find us useful to keep around?
Happy, healthy, free people are rarely the most efficient solution to any given problem.
Once you’re a superintelligence, almost no problems benefit from including humans in the mix.
If you’re building a power plant or designing an experiment, humans will just slow you down.
We’ve already seen this begin to happen in narrow domains like chess. When AIs team up with humans, they play better than a solo human but worse than a solo AI. When doctors combine their knowledge with AI to diagnose patients, they often do worse than the AI operating alone.
Some argue that a diversity of perspectives is naturally helpful and that human input will therefore be valuable in many domains. But even if we assume that this is true for superintelligences, humans aren’t the best possible way to produce diverse advice. A superintelligence could do better by designing a wide range of AI minds, which could be far more diverse than humans (and a lot more energy-efficient to run).
Humans are useful for many things, but they’re rarely the best solution to any of them. The idea that an AI could never find a better option seems to stem from a lack of imagination, plus perhaps some wishful thinking.
A common issue we see is that people don’t think things through from the AI’s perspective.
They aren’t asking, “What does this thing want, and how can it get more of that, cheaply and efficiently?” and then discovering that human-desirable outcomes just happen to be the best possible way for the AI to get what it wants.*
Instead, people are starting with a pleasant-feeling outcome (such as a world where AIs keep us around), and then coming up with post-hoc stories about why an AI might want those outcomes too.
Doing this tends to create a false sense of optimism, because you’re putting all your creativity and mental energy into coming up with stories where the AI does exactly what humans want — and putting none of that creativity, energy, or attention into considering the vastly larger number of scenarios where the AI does one of a million other things instead.
There are far more scenarios where AI does literally anything else than scenarios where it builds a flourishing human civilization in particular. There are far more reasons pushing AI to not preserve humanity than reasons pushing AI to preserve it. For an AI to bother keeping humanity around, we would need to be the best way for it to achieve some preference it possesses. And, realistically, for almost any preference you can imagine, we are not.
For more on these topics, see the relevant extended discussion below.
* For more on this, see the extended discussion on Taking the AI’s Perspective.