IF ANYONE BUILDS IT, EVERYONE DIES

Eliezer Yudkowsky
& Nate Soares

Pre-order Now
Get exclusive content
"A sober but highly readable book on the very real risks of AI."
Bruce Schneier
leading computer security expert and Lecturer, Harvard Kennedy School
"The most important book of the decade."
Max Tegmark
Professor of Physics, MIT
"The authors raise an incredibly serious issue that merits – really demands – our attention."
Suzanne Spaulding
former Under Secretary at the Department of Homeland Security
"The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it!"
Stephen Fry
actor and writer

Superhuman AI threatens human extinction. But it's not too late to change course.

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.

For decades, two signatories of that letter — Eliezer Yudkowsky and Nate Soares — have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us — and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.

How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.

The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.

Published by Little, Brown and Company,
September 16, 2025
"Essential reading for policymakers, journalists, researchers, and the general public."
Bart Selman
Professor of Computer Science, Cornell University
"Brilliant…Shows how we can and should prevent superhuman AI from killing us all."
George Church
Founding Core Faculty, Wyss Institute, Harvard University
"While I'm skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI's exponential pace of change there's no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration."
Lieutenant General John N.T. "Jack" Shanahan (USAF, Ret.)
Inaugural Director, Department of Defense Joint AI Center
"Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous."
Emmett Shear
former interim CEO of OpenAI


Pre-Order Bonuses

Pre-order to access two exclusive virtual events with the authors:

Q&A with Eliezer Yudkowsky and Nate Soares – Sept 4, 2025

@ESYudkowsky | @So8res

Join Eliezer Yudkowsky and Nate Soares, the authors of If Anyone Builds It, Everyone Dies, as they answer questions from the audience.

Hosted virtually over Zoom on Thursday, September 4 at noon PT | 3pm ET

[Past] Tim Urban and Nate Soares Chat Q&A – Aug 10, 2025

@waitbutwhy | @So8res

Join Tim Urban (Wait But Why) and Nate Soares as they chat about AI and answer questions from the audience about Nate and Eliezer's forthcoming book, If Anyone Builds It, Everyone Dies.

This event has passed, but people who pre-order and submit the form can still download a recording of it.

To access, submit proof of purchase via this form.

FROM THE BOOK

If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case, in the hope of rallying enough key decision-makers and regular people to take AI seriously. The default outcome is lethal, but the situation is not hopeless; machine superintelligence doesn't exist yet, and its creation can yet be prevented.

ABOUT THE AUTHORS

Eliezer Yudkowsky

Eliezer Yudkowsky is a founding researcher of the field of AI alignment and the co-founder of the Machine Intelligence Research Institute. With influential work spanning more than twenty years, Yudkowsky has played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine's 2023 list of the 100 Most Influential People in AI, and has been discussed or interviewed in The New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, the Washington Post, and elsewhere.

Nate Soares

Nate Soares is the President of the Machine Intelligence Research Institute. He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.

For media inquiries, email media@intelligence.org.
