Praise for If Anyone Builds It, Everyone Dies

"A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action."
— Jon Wolfsthal, former Special Assistant to the President for National Security Affairs
"If Anyone Builds It, Everyone Dies isn't just a wake-up call; it's a fire alarm ringing with clarity and urgency. Yudkowsky and Soares pull no punches: unchecked superhuman AI poses an existential threat. It's a sobering reminder that humanity's future depends on what we do right now."
— Mark Ruffalo, actor
"A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended."
— Ben Bernanke, Nobel laureate and former Chairman of the Federal Reserve
"The most important book of the decade."
— Max Tegmark, Professor of Physics, MIT
"A sober but highly readable book on the very real risks of AI."
— Bruce Schneier, leading computer security expert and Lecturer, Harvard Kennedy School
"The authors raise an incredibly serious issue that merits — really demands — our attention."
— Suzanne Spaulding, former Under Secretary for the Department of Homeland Security
"If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can."
— Tim Urban, writer, Wait But Why
"Essential reading for policymakers, journalists, researchers, and the general public. A masterfully written and groundbreaking text."
— Bart Selman, Professor of Computer Science, Cornell University
"Brilliant…Shows how we can and should prevent superhuman AI from killing us all."
— George Church, Founding Core Faculty, Wyss Institute, Harvard University
"Everyone should read this book."
— Daniel Kokotajlo, OpenAI whistleblower and lead author, AI 2027
"While I'm skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI's exponential pace of change there's no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration."
— Lieutenant General John N.T. "Jack" Shanahan (USAF, Ret.),
Inaugural Director, Department of Defense Joint AI Center
"Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous."
— Emmett Shear, former interim CEO of OpenAI
"This is our warning. Read today. Circulate tomorrow. Demand the guardrails. I'll keep betting on humanity, but first we must wake up."
— R.P. Eddy, former Director, White House, National Security Council