[Cover image: Earth with an ominous red glow]

IF ANYONE BUILDS IT, EVERYONE DIES

Eliezer Yudkowsky & Nate Soares

Pre-order Now

Published by Little, Brown and Company - September 16, 2025

The scramble to create superhuman AI has put us on the path to extinction — but it's not too late to change course, as two of the field's earliest researchers explain in this clarion call for humanity.

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.

For decades, two signatories of that letter — Eliezer Yudkowsky and Nate Soares — have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us — and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.

How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.

The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.


Endorsements

"The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it!"
— Stephen Fry, actor and writer
"A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action."
— Jon Wolfsthal, former Special Assistant to the President for National Security Affairs
"The authors raise an incredibly serious issue that merits — really demands — our attention."
— Suzanne Spaulding, former Under Secretary for the Department of Homeland Security

FROM THE BOOK:

If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case, in the hope of rallying enough key decision-makers and regular people to take AI seriously. The default outcome is lethal, but the situation is not hopeless; machine superintelligence doesn't exist yet, and its creation can yet be prevented.



More Endorsements

"This is our warning. Read today. Circulate tomorrow. Demand the guardrails. I'll keep betting on humanity, but first we must wake up."
— R.P. Eddy, former Director, White House, National Security Council
"A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended."
— Ben Bernanke, Nobel laureate and former Chairman of the Federal Reserve
"Essential reading for policymakers, journalists, researchers, and the general public."
— Bart Selman, Professor of Computer Science, Cornell University
"Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous."
— Emmett Shear, former interim CEO of OpenAI
"A sober but highly readable book on the very real risks of AI."
— Bruce Schneier, leading computer security expert and Lecturer, Harvard Kennedy School
"While I'm skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI's exponential pace of change there's no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration."
— Lieutenant General John N.T. "Jack" Shanahan (USAF, Ret.),
Inaugural Director, Department of Defense Joint AI Center
"The most important book of the decade."
— Max Tegmark, Professor of Physics, MIT
"This book offers brilliant insights into history's most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all."
— George Church, Founding Core Faculty, Wyss Institute, Harvard University

About the Authors

Eliezer Yudkowsky

Eliezer Yudkowsky is a founding researcher of the field of AI alignment and the co-founder of the Machine Intelligence Research Institute. With influential work spanning more than twenty years, Yudkowsky has played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine's 2023 list of the 100 Most Influential People in AI, and has been discussed or interviewed in The New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, The Washington Post, and elsewhere.

Nate Soares

Nate Soares is the President of the Machine Intelligence Research Institute. He has been working in the field for over a decade, following earlier positions at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.

For media inquiries, email media@intelligence.org

