"The current level of policy discourse over artificial general intelligence (AGI) is dangerously low. If AGI leads to human annihilation — and the authors make a compelling case it almost certainly will — then the imagined benefits of building AGI mean nothing. The incentives for major companies to keep pushing ahead to build superintelligence will win the day unless governments around the world recognize the risks and start to take collective and effective action."
"A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended."
"This book offers brilliant insights into history's most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all. Yudkowsky and Soares's memorable storytelling about past disaster precedents (e.g., the inventor of two environmental nightmares: tetra-ethyl-lead gasoline and Freon) highlights why top thinkers so often don't see the catastrophes they create."
"A sober but highly readable book on the very real risks of AI. Both skeptics and believers need to understand the authors' arguments, and work to ensure that our AI future is more beneficial than harmful."
"Everyone should read this book."
"The authors raise an incredibly serious issue that merits — really demands — our attention. You don't have to agree with the prediction or prescriptions in this book, nor do you have to be tech or AI savvy, to find it fascinating, accessible, and thought-provoking."
"If Anyone Builds It, Everyone Dies isn't just a wake-up call; it's a fire alarm ringing with clarity and urgency. Yudkowsky and Soares pull no punches: unchecked superhuman AI poses an existential threat. It's a sobering reminder that humanity's future depends on what we do right now."
"If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can."
"Humans are lucky to have Nate Soares and Eliezer Yudkowsky because they can actually write. As in, you will feel actual emotions when you read this book. We are currently living in the last period of history where we are the dominant species. We have a brief window of time to make decisions about our future in light of this fact. Sometimes I get distracted and forget about this reality, until I bump into the work of these folks and am re-reminded that I am being a fool to dedicate my life to anything besides this question."
"This is our warning. Read today. Circulate tomorrow. Demand the guardrails. I'll keep betting on humanity, but first we must wake up."
"This is the best no-nonsense, simple explanation of the AI risk problem I've ever read."