Wouldn’t some nations reject a ban?
Not if they understand the threat.
We are talking about a technology that would kill everyone on the planet. If a country seriously understood the issue, and seriously understood how far every group on the planet is from being able to make an AI follow the intent of its operators even after it transitions into a superintelligence, then that country would have no incentive to rush ahead. It, too, would desperately wish to sign onto a treaty and help enforce it, out of fear for the lives of its own people.
Even a nation like North Korea, which flouted international law to develop its own nuclear weapons, has not used those weapons against its enemies, because its leaders understand that there are no winners in a nuclear holocaust. Nations and their leaders sometimes engage in brinkmanship or war, but they don't actively pursue their own destruction.
People who imagine that some foreign nation would defect from the treaty are, we think, imagining a nation whose leaders simply don't understand the threat. They're imagining a scenario where AI has a 95 percent chance of conferring great wealth and power on its creator, and a 5 percent chance of killing everyone. In that case, sure — some nation-state might be reckless enough to try it. And perhaps some nation-state will believe that that's how the odds look.
But that is not the situation the theory and evidence imply. As we've argued extensively throughout the book, the theory and evidence suggest that this technology would straightforwardly be global suicide. No one is remotely close to being able to harness machine superintelligence for humanity's benefit. If most of the world understood that, rogue nations would have much less reason to violate a treaty. They don't want to die either.
And even if some hypothetical rogue nation has a leader who truly does not understand the threat posed by ASI, if that nation is surrounded by an international alliance of world powers that do appreciate the threat, those powers can intervene and shift the incentive landscape for the rogue state.
If (for example) the leaders of the United States, China, Russia, Germany, Japan, and the United Kingdom all genuinely believe that their own survival depends on no one building a superintelligence, and they are crystal clear in their communication that they will treat any attempt to build a superintelligence as a threat to their lives and livelihoods, and that they stand ready to react in self-defense, then — well, even a world leader who disagrees probably wouldn't want to try their luck against that coalition.
AI development is not a race to military dominance; it is a race to suicide. We think that if world leaders understand this — if they expect that they and their children would die from it — then they will sincerely stick to a treaty, and sincerely help enforce it.
The argument is not actually that hard to follow: creating machines smarter than all of humanity combined is liable to send the world off a cliff. It is not that hard to see how little humanity understands about the intelligent machines we're building, once you pause long enough to genuinely ask the question. We think the real question is whether world leaders will come to believe these facts. But if they do, we do not think it's unrealistic to stop this suicide race.
A treaty would require real monitoring and enforcement.
Even if most nations understood that if anyone builds it, everyone dies, some might not, and might be reckless enough to press ahead with building machine superintelligence anyway.
Monitoring is necessary. Enforcement is necessary. Nuclear, biological, and chemical weapons treaties provide some precedent for ways to verify compliance. We can and should render efforts to circumvent such a treaty difficult and costly.
An international ban on frontier AI will need to be strictly enforced. If any nation-state is determined to press ahead in the face of international pressure, then the use of military force by signatory nations may be required.
This is not ideal! Every effort should be made, in advance, to make it clear that force would be used in such situations, precisely so as to avoid the miscalculations that would make using force in reality necessary. But if there is any cause that could justify limited military action — or even war, if a non-compliant nation chooses to escalate — saving the human race ought to qualify.
This method has worked before.
It has been over eighty years since the development of the atomic bomb, and humanity has done a pretty good job of managing nuclear proliferation. There has been no large-scale nuclear war, contrary to many experts' predictions in the wake of World War II.
In June of 2025, the U.S. government even carried out a limited strike on Iran in an attempt to disrupt its ability to create nuclear weapons. This sort of treaty and enforcement regime has precedent in the world order.
If we could buy ourselves eighty years before the development of ASI, that might well be enough.