"A sober but highly readable book on the very real risks of AI."
"The most important book of the decade."
"The authors raise an incredibly serious issue that merits – really demands – our attention."
"The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it!"
Superhuman AI threatens human extinction. But it's not too late to change course.
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter — Eliezer Yudkowsky and Nate Soares — have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us — and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
September 16, 2025
"Essential reading for policymakers, journalists, researchers, and the general public."
"Brilliant…Shows how we can and should prevent superhuman AI from killing us all."
"While I'm skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI's exponential pace of change there's no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration."
"Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous."
PRE-ORDER
Pre-Order Bonuses
Pre-order to access two exclusive virtual events with the authors:
Q&A with Eliezer Yudkowsky and Nate Soares – Sept 4, 2025
Join Eliezer Yudkowsky and Nate Soares, the authors of If Anyone Builds It, Everyone Dies, as they answer questions from the audience.
Hosted virtually over Zoom on Thursday, September 4, at noon PDT | 3pm EDT
[Past] Tim Urban and Nate Soares Chat Q&A – Aug 10, 2025
Join Tim Urban (Wait But Why) and Nate Soares as they chat about AI and answer questions from the audience about Nate and Eliezer's forthcoming book, If Anyone Builds It, Everyone Dies.
This event has passed, but people who pre-order and submit the form can still download a recording.
To gain access, submit proof of purchase via this form prior to either event.
FROM THE BOOK
If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case, in the hope of rallying enough key decision-makers and regular people to take AI seriously. The default outcome is lethal, but the situation is not hopeless; machine superintelligence doesn't exist yet, and its creation can yet be prevented.