Isn’t it smarter to avoid talking about extinction?
The time has passed for playing political games.
Some have argued that people concerned about the race to build superintelligence should hide their views and instead talk about AI-caused job loss, or the problem of ChatGPT-enabled bioterrorists, or how much water it takes to cool the computers inside datacenters.* We think that this approach is too clever by half and is likely to backfire. Indeed, we have already seen it backfire on various occasions.
The four main problems we see with this approach are:
It’s not honest, and people are good at spotting dishonesty and game-playing.
Even if you’re an unusually good liar, arguments about issues that you think are secondary are likely to end up looking “off” in various ways. They won’t quite seem to make sense, for the same reasons that you think those issues are actually secondary. The more you share your sanitized arguments, the more likely it is that people will conclude that you’re either confused about this issue or not being fully honest about what you actually think. And in either case, you won’t seem like a promising person to ally with or treat as a reliable information source.
It’s probably unnecessary. In our experience, honest and direct conversation about superintelligence gets a far better reception than trying to redirect to other issues, such as AI deepfakes. Since mid-2023, and with increasing frequency, I (Soares) have spoken with a variety of elected officials. I have sat in dinners where people “concerned about AI” brought up the possibility of AI-assisted terrorists, and a sitting elected official replied that his own fears were more urgent and dire: he worries about recursively self-improving AIs that might yield a superintelligence that wipes us completely off the map, and that could be created inside of three years.
Folks up to and including elected officials in the U.S. Congress are open to taking this issue seriously and looking for ways to address it.† This issue may seem more niche and controversial than it actually is, because there hasn’t been a proper national or international conversation about it yet, as of the publication of this book. But we’ve had many a frank DC conversation on this topic go encouragingly well.‡
Responding to those other issues doesn’t address the superintelligence issue. AI companies are racing to build superintelligence. If they get there, everyone dies. The solutions that make sense for this problem are quite different from the solutions that make sense for dealing with AI-generated deepfakes, or even with AI-enabled bioterrorism.
There isn’t zero overlap, and we can potentially build more support for tackling smarter-than-human AI by emphasizing ways that different issues overlap. But it’s extremely unlikely that the world will stumble into an adequate response to an issue as complicated as superintelligence without orienting to the actual issue.
Time is plausibly short. It’s unlikely that we have time to slowly ease people into considering this risk over many years, starting with simpler and more familiar issues and then climbing a ladder up to superintelligence. If we don’t mobilize an effort to respond to this problem quickly, it’s plausible that we won’t get a chance to respond at all.
This is not to say that job loss, bioterrorism, etc., aren’t real issues in their own right. It’s just that society isn’t going to actually put a stop to the reckless suicide race if it doesn’t know that there’s a reckless suicide race happening.
We have spent years watching friends and acquaintances in the policy space shop around problems like ChatGPT-enabled bioterrorism. It doesn’t seem to have amounted to anything that will actually prevent the creation of machine superintelligence, as far as we can tell.
We are nerds to our bones, well out of our comfort zones in writing a popular book at all. We don’t claim to have expertise on effective politicking. But it does seem to us that humanity has reached the limit of which problems it can navigate with discourse consisting of carefully couched, strategically chosen, non-“alarmist” arguments.
At some point, as human beings, we have to start talking about the looming threat. Policy needs to be grounded in the actual realities of the situation, not in safe-seeming messaging.
The heads of AI labs say we could see AIs that outperform human researchers in the next one to four years. We dearly hope that they are wrong, but we do not, with all our expertise, know them to be wrong. Policymakers don’t know them to be wrong. Humanity is simply not responding appropriately to the challenge before us. If the alarm is not sounded now, then when?
And: In the time since we first drafted the paragraph above, the strategy we advocate seems to be paying off more and more, as you can see in our list of what politicians have been saying about superintelligence over the summer of 2025. The time seems ripe for a real discussion of the impending danger from artificial superintelligence.
* Which is not, in fact, very much, contra widespread reporting.
† Although various DC officials have agreed with our concerns about superintelligence, they lack the power to solve this issue unless far more officials in the U.S. and in other nations get involved. Early conversations have been promising, but there’s a lot of work still to be done.
‡ A related promising sign, visible to us as we put the finishing touches on these online resources, is that a number of national security professionals and former DC officials have expressed positive reactions to advance reader copies of If Anyone Builds It, Everyone Dies. Examples include:
From Ben Bernanke, Nobel laureate and former Chairman of the Federal Reserve: “A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended.”
From Jon Wolfsthal, Director of Global Risk at the Federation of American Scientists and former Special Assistant to the President for National Security Affairs: “A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action.”
From Lieutenant General John N.T. “Jack” Shanahan (USAF, Ret.), Inaugural Director, Department of Defense Joint AI Center: “While I’m skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI’s exponential pace of change there’s no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration.”
From Fiona Hill, former Senior Director on the White House National Security Council: “A serious book in every respect. In Yudkowsky and Soares’s chilling analysis, a super-empowered AI will have no need for humanity and ample capacity to eliminate us. If Anyone Builds It, Everyone Dies is an eloquent and urgent plea for us to step back from the brink of self-annihilation.”
From R.P. Eddy, former Director on the White House National Security Council: “This is our warning. Read today. Circulate tomorrow. Demand the guardrails. I’ll keep betting on humanity, but first we must wake up.”
From Suzanne Spaulding, former Under Secretary for the Department of Homeland Security: “The authors raise an incredibly serious issue that merits — really demands — our attention.”
From Emma Sky, Senior Fellow at the Yale Jackson School of Global Affairs and former Political Advisor to the Commanding General of U.S. Forces Iraq: “In If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and Nate Soares deliver a stark and urgent warning: humanity is racing toward the creation of superintelligent AI without the safeguards to survive it. With credibility, clarity and conviction, they argue that advanced AI systems, if misaligned even slightly, could spell the end of human civilization. This provocative book challenges technologists, policymakers, and citizens alike to confront the existential risks of artificial intelligence before it’s too late. An appeal for awareness and a call for caution, this is essential reading for anyone who cares about the future.”