Published by Little, Brown and Company - September 16, 2025
The scramble to create superhuman AI has put us on the path to extinction — but it's not too late to change course, as two of the field's earliest researchers explain in this clarion call for humanity.
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter — Eliezer Yudkowsky and Nate Soares — have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research concludes that sufficiently smart AIs will develop goals of their own that put them in conflict with us — and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
Endorsements
"The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it!"
"A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action."
"The authors raise an incredibly serious issue that merits — really demands — our attention."
FROM THE BOOK:
If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case, in the hope of rallying enough key decision-makers and regular people to take AI seriously. The default outcome is lethal, but the situation is not hopeless; machine superintelligence doesn't exist yet, and its creation can yet be prevented.
More Endorsements
"This is our warning. Read today. Circulate tomorrow. Demand the guardrails. I'll keep betting on humanity, but first we must wake up."
"A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended."
"Essential reading for policymakers, journalists, researchers, and the general public."
"Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous."
"A sober but highly readable book on the very real risks of AI."
"While I'm skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI's exponential pace of change there's no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration."
"The most important book of the decade."
"This book offers brilliant insights into history's most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all."