Why did you tell a story with only one AI as smart as Sable?
In part because it’s realistic.
AlphaGo (the first AI to beat a professional human player at Go) was basically alone in its class when it was released. ChatGPT was basically alone in its class when it was released.
AI experts will sometimes talk about how various other competitors weren’t that far behind.*
But similar systems can have dramatically different effects. A nuclear chain reaction that produces 0.98 neutrons per neutron is very similar, in some sense, to a nuclear chain reaction that produces 1.02 neutrons per neutron; but the former peters out and the latter explodes. Chimpanzee brains are in some sense very similar to human brains, but they have very different impacts on the world.
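To make the divergence concrete, here is a minimal numeric sketch of the multiplication arithmetic (our illustration; the 200-generation count is an arbitrary choice): a reaction producing k new neutrons per neutron scales as k^n after n generations, so 0.98 decays toward zero while 1.02 grows without bound.

```python
# A minimal sketch (an illustration, not from the text) of the
# chain-reaction arithmetic: with k new neutrons per neutron, the
# neutron population after n generations scales as k**n.
for k in (0.98, 1.02):
    population = 1.0
    for _ in range(200):
        population *= k
    print(f"k = {k}: population after 200 generations ≈ {population:.3f}")

# Prints roughly:
#   k = 0.98: population after 200 generations ≈ 0.018  (peters out)
#   k = 1.02: population after 200 generations ≈ 52.485 (and still growing)
```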
And in real-life AI development, OpenAI actually produced a useful chatbot before everyone else. A bunch of other players were working on somewhat similar AIs; a bunch of them caught up later. But one AI crossed the qualitative boundary first, ahead of the pack.
There seems to be some important boundary that humanity crossed and chimpanzees didn’t, a boundary which let us build a technological civilization while they hang around in trees. Our best guess is that there’s a similar boundary (or possibly several such boundaries) somewhere between modern AIs and AIs whose thinking “comes together” well enough for them to develop their own varied technologies.†
Our argument doesn’t require that there be a qualitative gap for machines, the way there was for biological life. Maybe there won’t be! We could have written an alternative story where there wasn’t. But we wrote the story this way because our best guess is that there is such a gap.
In part because it’s easier to write.
Maybe there will turn out to be no qualitative gap between the LLMs of today and artificial superintelligence. Maybe many competing AI companies will slowly improve their AIs in lockstep. Maybe, for some reason, there is no collection of skills and abilities that allows one AI to take off ahead of the pack, the way that humans took off from the rest of the animals. It’s not our best guess, but it’s possible, for all we know.
But a story like that would be harder to write, and would be full of unnecessary details about factions of AIs and their internal politics. We expect it would be rather distracting. We also expect that it doesn’t matter all that much for the later stages of the story. It doesn’t really matter whether it’s one AI or a collection of AIs executing some plan to empower themselves at the expense of humanity.
See also our discussion of how coordinating AIs won’t leave anything behind for humans (unless one of them already cares about us).
* For one such analysis, see “How Far Behind Are Open Models?”, a report by Cottier et al. of Epoch AI.
† We’re not saying there was a sharp discontinuity in the evolution of primates; human society diverged from chimpanzee society slowly at first, and then quickly. We’re saying that there’s a large and extremely important gap between the kinds of things humans can do today and the kinds of things chimpanzees can do, regardless of how gradual or smooth the transition was at the time. See also our discussion about cognitive thresholds in the supplement to Chapter 1.