Won’t AI find us fascinating or historically important?
If AI values “fascination,” it probably has better options.
The story here is similar to the story for filial love:
- By default, a superintelligence probably wouldn’t value “fascination” or “interestingness.” Chess AIs don’t win at chess by feeling emotions like “dedication” or “drive to win.” These emotions are important in human chess players, but AIs can do the same work in different ways. By the same token, a superintelligence would probably do the useful work of learning about the world, testing hypotheses, etc., without using “curiosity” or “fascination” to do it.
- An AI wouldn’t necessarily be cold and logical, but if it has its own messy pile of urges and instincts, these probably look radically different from the human pile.
- Even if the AI winds up with something like an “interestingness” drive, and even if humans are interesting to the AI in some sense, there are inevitably going to be ways to use our matter and energy that are far more “interesting.”
- A superintelligent AI might build other minds in order to study them or interact with them. But for almost any particular arrangement of values, the most fascinating possible minds to study wouldn’t be humans. For more on this, see “Humans Are Almost Never the Most Efficient Solution.”
- If the AI did view something at all like humans as the most interesting or fascinating thing possible, the outcome would likely be horrible. See the discussion in Chapter 4.
It isn’t literally impossible for a superintelligence to value everything needed for humans to flourish, and value it just right. But there is an enormous space of possibilities outside of this one. Humans don’t usually think about the rest of the possibility space, because normally we have no reason to; normally, we don’t interact with truly alien optimizers optimizing toward strange ends.*
We have never encountered anything quite like artificial intelligence before, and many normal intuitions about how people behave simply won’t apply to superintelligences.
If AI valued us as historical relics, this would be horrible too.
It’s very unlikely that an AI would care specifically about preserving its history, and care specifically about keeping humans alive to that end. But even if the AI did care about preserving its history for one reason or another, that wouldn’t mean it keeps us alive and well.
Perhaps it preserves our brains in amber (or records how our atoms used to be arranged in some digital file), and keeps us as a record of how Earth once was. That doesn’t sound like a great outcome to us.
We mostly expect artificial superintelligence to just kill us — but only mostly. We can’t rule out that the AI would keep records of us for one reason or another, and there are some exotic scenarios where emulations of humans get run in a controlled setting every once in a while.† Those endings are mostly not happy ones.
* For a case study where humanity did interact with an alien optimizer of a sort, see the beetle study in the extended discussion on taking the AI’s perspective.
† The most plausible story we know of where humanity gets to keep living in the wake of AI is: Perhaps an AI keeps records of the humans that once lived, and perhaps it sends probes out in all directions to harvest the energy of all the stars it can reach, and perhaps somewhere out there in the depths of space it meets distant alien life forms, defended by their own superintelligence. Perhaps some of those distant civilizations are interested in buying a copy of the record of Earth, for one reason or another. Perhaps those aliens run digital copies of humans for their own alien purposes. Then those digitized humans in the alien zoo can, if they like, debate whether or not it was technically true that “everyone died.”
We do not consider this sort of wild possibility to be a happy one.