Are you suggesting that ChatGPT could kill us all?
No. The worry is about forthcoming advances in AI.
Part of why you’re reading this book now is that developments like ChatGPT have brought AI into the news. The world is now beginning to discuss AI progress and the way that AI impacts society. This presents a natural opportunity to discuss smarter-than-human AI, and how the current situation is not looking good.
We, the authors, have been working in this field for a long time. Recent AI progress informs our views, but our worries weren’t sparked by ChatGPT, nor by earlier large language models. For decades, we have been doing technical research to try to ensure that smarter-than-human AI goes well (Soares since 2013, Yudkowsky since 2001). But we’ve recently seen evidence that this may be a conversation the world is ready to have. And it’s a conversation that we plausibly need to have now, or the world may lose its window of opportunity to respond.
The field of AI is progressing, and eventually (we don’t know when) it’s going to progress to the point where researchers build AI that is smarter than we are. That is the explicit goal of all of the leading AI companies:
We are now confident we know how to build AGI [artificial general intelligence] as we have traditionally understood it. […] We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else.
(Sam Altman, CEO of OpenAI, January 2025)
I think [powerful AI] could come as early as 2026. […] By powerful AI, I have in mind an AI model […] with the following properties: In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields — biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
(Dario Amodei, CEO of Anthropic, October 2024)
Overall, we are focused on building full general intelligence. All of the opportunities that I’ve discussed today are downstream of delivering general intelligence and doing so efficiently.
(Mark Zuckerberg, CEO of Meta)
I think in the next five to ten years, there will be maybe a 50 percent chance that we’ll have what we’d define as AGI.
(Demis Hassabis, CEO of Google DeepMind)
Wes: So, Demis, are you trying to cause an intelligence explosion?
Demis: No, not an uncontrolled one…
They are putting their money where their mouths are. Microsoft, Amazon, and Google each announced plans to spend between $75 billion and $100 billion on AI datacenters in 2025. The startup xAI acquired the social media site X.com in a deal that valued xAI at $80 billion, about twice the valuation of X itself, shortly before raising $10 billion to fund a massive datacenter and further develop its AI, Grok. OpenAI has announced the $500 billion Project Stargate, in partnership with SoftBank, Oracle, and others.
Meta CEO Mark Zuckerberg has said that Meta expects to spend $65 billion on AI infrastructure this year, and “hundreds of billions” on AI projects in the coming years. Meta has already invested $14.3 billion in Scale AI and hired its CEO to run the new Meta Superintelligence Labs, in the process poaching[1] over a dozen top researchers from rival labs with offers as high as $200 million for a single researcher.
None of this means that smarter-than-human AI is just around the corner. But it does mean that all of the major companies are trying as hard as they can to build it, and that AIs like ChatGPT are the result of this research program. These companies aren’t setting out to make chatbots. They’re setting out to make superintelligences, and chatbots are a pit stop along the way.
Our own view, after decades of trying to better understand this issue and think seriously about future developments, is that there isn’t a principled barrier to researchers achieving a breakthrough tomorrow and succeeding in building smarter-than-human AI.
We don’t know whether that threshold will in fact be crossed in the near future, or whether it’s still a decade or more away. History shows that timing new technologies is far harder than predicting that a technology will eventually be developed at all. But we believe the evidence of danger is vastly greater than what would be needed to justify an aggressive international response today. That argument, of course, is the one we sketch out over the course of the book.
Notes
[1] poaching: From Bloomberg, July 2025: “Meta CEO Mark Zuckerberg has now successfully hired more than ten OpenAI researchers, as well as top researchers and engineers from Anthropic, Google and other startups.”