If the story started later, would the world be better prepared?
We can hope so.
Extra time can be significant, but only if humanity uses it to change its course.
In Part III, we’ll turn to the question of how terrifyingly unprepared humanity is for superintelligence, and what large changes would be necessary to prevent the kinds of bad outcomes depicted in the story of Sable.
There are various ways that the world could get a little more secure against rogue artificial superintelligences. Governments around the world could require that all DNA synthesis laboratories verify that they aren’t synthesizing anything known to be dangerous. Earth could undertake a great effort to radically improve the cybersecurity of the internet, in ways that would make it harder for AIs to hide code in some dark corner.
But realistically, even a massive effort here probably wouldn’t help much against an adversarial smarter-than-human AI. And the herculean effort required to win a little more security on this front shouldn’t be confused with the efforts along these lines that humanity is currently undertaking, which are far smaller, far easier to achieve, and wholly ineffective for this purpose.
In the case of DNA synthesis: Even if U.S. regulators required that U.S. DNA synthesizers avoid synthesizing dangerous material,* would a lab anywhere else in the world synthesize suspicious DNA for a high enough price? And would the restrictions on DNA synthesis be a simple blacklist that ruled out known viruses (like smallpox), or would it involve some more intelligent analysis? How hard would it be for a sufficiently smart AI to subvert such an analysis?
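To make the blacklist worry concrete: here is a minimal hypothetical sketch (in Python, with made-up placeholder sequences rather than real pathogen data) of what an exact-match screen looks like, and how a single synonymous base change slips past it.

```python
# Hypothetical sketch of a "simple blacklist" DNA-synthesis screen.
# The sequence below is an illustrative placeholder, not a real pathogen.

BLACKLIST = {
    "ATGACCGGTTAA",  # stand-in for a fragment of a known dangerous genome
}

def naive_screen(order: str) -> bool:
    """Flag an order only if it exactly matches a blacklisted sequence."""
    return order.upper() in BLACKLIST

# An exact-match blacklist is trivially evaded: swapping the codon ACC for
# ACA (both encode threonine) yields a sequence that builds the same
# protein but no longer matches the list.
print(naive_screen("ATGACCGGTTAA"))  # True: caught by the blacklist
print(naive_screen("ATGACAGGTTAA"))  # False: one base changed, slips through
```

Real screening systems go beyond exact matching to fuzzier homology searches, but the same arms race recurs one level up: any fixed analysis is a target that a sufficiently smart adversary can probe and route around.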
Or, when it comes to cybersecurity: Many leading tech companies might attempt to use AI to harden their own computer networks against attacks. Meanwhile, the U.S. telephone network is easily hackable in ways that let foreign spies listen in on the calls of U.S. officials, and U.S. regulators struggle to close the holes. Dumb AIs could find and patch a bunch of superficial problems with the world’s cybersecurity, but the problems run pretty deep. Artificial intelligence smart enough to overhaul the whole internet to the point where a superintelligence couldn’t find a gap would almost surely be dangerous in its own right.
And even if Earth could lock down the internet and its DNA synthesis laboratories, that wouldn’t actually change the story in the long run. A superintelligence that has any channel to affect the world for good also has a channel to affect the world for ill.
A rogue superintelligence would just find some other channel that wasn’t locked down, such as starting its own cult or religion. Or purchasing robots and steering them to build its own secret wet lab where it can do all the DNA synthesis it needs. Or, perhaps most likely of all, the AI would find a channel that we can’t anticipate today, because the AI is a superintelligence and we are not.
Hardening the entire world against the most obvious and foreseeable AI attack vectors would be the most difficult thing humanity has ever done. It would take an incredible, long-term, concerted effort. And it almost surely wouldn’t work.
The window of time when we can stop a rogue superintelligence, realistically, is before it gets created.
* Some such regulations exist, but as of mid-2025, they are not comprehensive and are still in their infancy. For some discussion, see the PennState framework.