Media Kit

Logos, book covers, author bios, quotes, and background images for use in articles, reviews, and coverage of If Anyone Builds It, Everyone Dies.

Eliezer Yudkowsky is a founding researcher of the field of AI alignment and the co-founder of the Machine Intelligence Research Institute. With influential work spanning more than twenty years, Yudkowsky has played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine's 2023 list of the 100 Most Influential People in AI, and has been discussed or interviewed in The New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, The Washington Post, and elsewhere.

Nate Soares is the President of the Machine Intelligence Research Institute. He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.

[Asset gallery: US and UK hardcover book covers, 3D cover renders, author photos of Eliezer Yudkowsky and Nate Soares, and promotional graphics]

Quotes

A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action.
Jon Wolfsthal, former Special Assistant to the President for National Security Affairs
The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it!
Stephen Fry, actor and writer
The authors raise an incredibly serious issue that merits—really demands—our attention. You don’t have to agree with the predictions or prescriptions in this book, nor do you have to be tech or AI savvy, to find it fascinating, accessible, and thought-provoking.
Suzanne Spaulding, former Under Secretary for the Department of Homeland Security
If Anyone Builds It, Everyone Dies isn't just a wake-up call; it's a fire alarm ringing with clarity and urgency.
Mark Ruffalo, actor
Exploring these possibilities helps surface critical risks and questions we cannot collectively afford to overlook.
Yoshua Bengio, Turing Award winner and scientific director of MILA
This is our warning. Read today. Circulate tomorrow. Demand the guardrails. I'll keep betting on humanity, but first we must wake up.
R.P. Eddy, former Director, White House, National Security Council
A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended.
Ben Bernanke, Nobel laureate and former Chairman of the Federal Reserve
Essential reading for policymakers, journalists, researchers, and the general public.
Bart Selman, Professor of Computer Science, Cornell University
Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.
Emmett Shear, former interim CEO of OpenAI
A sober but highly readable book on the very real risks of AI.
Bruce Schneier, leading computer security expert
While I'm skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI's exponential pace of change, there's no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration.
Lieutenant General John N.T. "Jack" Shanahan (USAF, Ret.), Inaugural Director, Department of Defense Joint AI Center
The most important book of the decade.
Max Tegmark, Professor of Physics, MIT
This book offers brilliant insights into history's most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all.
George Church, Founding Core Faculty, Wyss Institute, Harvard University
The best no-nonsense, simple explanation of the AI risk problem I've ever read.
Yishan Wong, former CEO of Reddit
Humans are lucky to have Soares and Yudkowsky in our corner, reminding us not to waste the brief window of time that we have to make decisions about our future.
Grimes, musician
An Artificial Intelligence war cannot be won and must never be fought.
Dr Brooke Taylor, CEO, Defending Our Country, LLC

Usage & License

Unless otherwise noted, all assets on this page (photos, covers, backgrounds, and related files) are dedicated to the public domain under the CC0 1.0 Public Domain Dedication. You may copy, modify, distribute, and use them — including for commercial purposes — without permission or attribution. If attribution is convenient, please credit If Anyone Builds It, Everyone Dies or link to this page. Third‑party trademarks, book covers, and logos remain the property of their respective owners and are not covered by this dedication.