Workable Plans Will Involve Telling AI Companies “No”
We would caution people with influence in governments against making a plan that involves sitting down and negotiating with AI companies.
If you’re new to this topic and want to vet the labs or their arguments yourself, then we encourage you to check out some of their public blog posts and see if you find them compelling.*
But if you’re working on finding solutions to the issues discussed in If Anyone Builds It, Everyone Dies and you have a plan that requires OpenAI CEO Sam Altman to say “Yes” to it, we worry that you’re trying to solve the wrong problem in the first place.
The right plans are probably ones that the heads of AI companies will vociferously object to. Furthermore, Sam Altman doesn’t have the power to save the world: If he tried to shut down OpenAI tomorrow, the rest of the company and Microsoft would resist, and might well replace him with someone who prefers to keep the money flowing.
If OpenAI did shut down, then Anthropic, or Google DeepMind, or Meta, or DeepSeek, or some other company or nation, would destroy the world in its stead. Sam Altman might make things worse if he tried; he has little power to make things better.
We’d like to be wrong about this, but the broad picture we’ve gotten, both from public reports and private interactions, is that the executives at top AI companies (as of 2025) don’t seem like the kind of rule-abiding or honest people with whom it’s all that feasible to make deals.†
It seems to us like what needs to happen now is a globally coordinated halt in the race to superintelligence. For that, policymakers will likely need input from people who are experts in making AI chips, building datacenters, and monitoring the compliance of foreign actors. People who are experts in growing more and more capable AIs? They’re competent managers, sure, but they shouldn’t be getting veto power over any of the efforts to shut their own work down.
If, for any reason, the AI companies get a vote in what happens next, that sounds to us like something has gone wrong. Is the plan that Earth makes to avoid dying to superintelligence the sort of plan that fails if Sam Altman or the head of Google or the people behind DeepSeek say “No”? Then it is no plan at all.
If AI companies retain the authority to choose to destroy the world — if that decision is somehow still in their hands — then the world ends on full automatic. There must be a step in the plan that strips AI companies of their unfettered power to build doomsday devices.
* In our experience, these writings tend to be heavy on spin and short on substance, often quietly swapping between contradictory claims based on what’s fashionable or politically convenient in the moment. We don’t come away with the sense that these are honest and transparent descriptions even of the lab heads’ actual perspectives, which makes them less useful than reading up on dissenting views from others. But that’s our own take; if you’re coming to this issue with fresh eyes and want to assess for yourself whether other parties have good counter-arguments that we haven’t addressed here, then you shouldn’t necessarily take our word for it about who the best sources are.
† If it turns out that you do need a lab leader’s input for something, and you’re asking for our advice, we’d say that the least bad option is probably Demis Hassabis. Among the leading lab heads with whom at least one of us has engaged — which, as of 2025, is all of them — Hassabis is the only one we’ve seen consistently stick to his word in dealings, and he has seemed to make fewer destructive decisions.
That said, this is a low-confidence recommendation, and a purely relative one. In absolute terms, anyone who hasn’t started a company with a substantial probability of destroying the world is starting with a large credibility advantage over the lab heads. We’ve certainly heard stories from people who said they were scared enough of Hassabis that they had no choice but to start their own frontier AI companies to beat him to the punch; those people may know something we don’t.
Our headline recommendation to policymakers on this count is therefore: If you’re convinced of the danger, don’t give lab heads any sway.
Talk to independent researchers, or business leaders with no horse in the race, or external scientists with a track record of saying and doing reasonable things in this space. Don’t put yourself in a position to be burned by people whose main distinguishing feature is that they lie to the public and put people in danger.