Isn’t ChatGPT already a general intelligence?
You could call it that if you’d like.
ChatGPT and its ilk are more general than the AIs that came before them. They can do a little math, write some poetry, and write some code. ChatGPT can’t always do these things well (as of August 2025), but it can do a whole lot of different things.
It’s a reasonable guess that GPT-5 is still less general at reasoning than a human child. It can recite from more textbooks, sure. But it has plausibly memorized a vastly greater volume of shallow patterns, whereas a child plausibly brings deeper mental gears to bear on comparable tasks (with better results in some cases, and worse results in others).
If we authors were forced to compare the two, we’d say that ChatGPT feels dumber than a human in some deep, general sense, and not only because (as we write this sentence in July 2025) chatbots have limited episodic memories.
There are at least some people who’d snap back, “What do you mean? ChatGPT can talk; it can have deep emotional conversations with me; it can solve advanced math problems and write code, which lots of humans can’t. Who’s to say it’s dumber than a human?” That was not a conversation we were faced with ten years ago, which says something about how much progress has occurred since then.
The world has currently moved perhaps halfway from “AIs are clearly dumber than humans” to “It depends on what you ask the AI to do.”
Maybe what it takes to cross the remaining distance is just a little bit more scale, like how human brains seem broadly similar to chimpanzee brains, but three to four times larger. Or maybe the architecture underlying ChatGPT is too shallow to support the “spark” of generality.
Maybe there’s some important component of general intelligence that modern AI algorithms just can’t handle, and modern AIs make up for it by applying massive amounts of practice and memorization to the sorts of tasks that can be solved by brute practice. In that case, maybe all it takes is one brilliant (and also incredibly stupid) algorithmic invention to fix that deficit, and AIs will be able to understand most things a human can understand, and learn from experience about as efficiently as a human. (While still being able to read and memorize the entire internet.) Or maybe it will take four more algorithmic breakthroughs. Nobody knows, as discussed in Chapter 2.
There are many different things one might mean by “general intelligence.”
By “AIs are now generally intelligent,” someone might mean the AIs have acquired whatever poorly-understood combination of abilities caused all hell to break loose in the form of human civilization.
Or they might mean that AI has at least advanced to the point that people now vociferously argue about whether humans or AIs are truly smarter.
Or they might have in mind a time when people have stopped arguing, because it’s clear that the AIs are deeply and generally smarter than any human. Or a time when people have stopped arguing, because there is no one left to argue; humanity has pushed too far, and AI has brought all of our arguments and endeavors to their end.
There wasn’t an exact day and time when you could say that AIs “started playing human-level chess.” But by the time chess AIs could crush the human world champion, that time had passed.
All of which is to say: The answer to “Is ChatGPT generally intelligent?” could be either yes or no, depending on what exactly you mean by the question. (Which says quite a lot about AI progress over the last few years! Deep Blue was clearly quite narrow.)
Superintelligence is a more important distinction.
Since there are several different things “human-level intelligence” could reasonably mean, we’ll usually avoid using that terminology ourselves, except as a reference point when talking about superhuman AI. This is likewise why we usually avoid saying “artificial general intelligence.” If we need to talk about one of those ideas, we’ll spell it out in more detail.
We will use terms like “smarter-than-human AI,” “superhuman AI,” or “superintelligence,” which assume some kind of human reference point:
- By “smarter-than-human AI” or “superhuman AI” (here and in the book), we mean AI that has whatever “spark of generality” separates humans from chimps, and that is clearly better overall than the smartest individual humans at solving problems and figuring out what’s true.

  Superhuman AI might only be mildly smarter than top humans, and there may be a few tasks where top humans still do better. But we’ll assume, here and in the book, that “smarter-than-human AI” at least means that a fair comparison across a wide range of difficult tasks would have the AI do better than the most competent humans.
- By “superintelligent AI” or “artificial superintelligence” (ASI), meanwhile, we mean superhuman AI that vastly outstrips human intelligence. We’ll assume that individual humans and real-world groups of humans are completely unable to compete with superintelligent AI in any practically important domain, for reasons discussed in Chapter 6.
The book will mostly use the terms “superhuman” and “superintelligent” interchangeably. The distinction becomes more relevant in Part II, where we describe an AI takeover scenario in which AIs start off weakly smarter-than-human but not superintelligent. This helps illustrate that superintelligence is plausibly overkill: AI may be superintelligent soon, but it doesn’t need to be that smart in order to cause human extinction.
These are very rough definitions, but they’re good enough for the purposes of this book.
This isn’t a book that proposes a complex theory of intelligence, and then deduces some esoteric implications of the theory that portend disaster. Instead, we’ll be operating at a pretty basic level, with claims like:
- At some point, AI will probably fully achieve whatever it is that lets humans (and not chimpanzees) build rockets and centrifuges and cities.
- At some point, AI will surpass humans.
- Powerful AIs will probably have their own goals that they stubbornly pursue, because stubbornly pursuing goals is useful for a wide range of tasks (and, e.g., humans evolved goals for this very reason).
Claims like those, whether right or wrong, don’t depend on us having special insight into all the inner workings of intelligence. We can see the truck barreling toward us, even without appealing to a complicated model of the truck’s internals. Or so we’ll argue.
And simple arguments like these don’t hinge on whether or not ChatGPT is “really” human-level, or “really” a general intelligence. It does what it does. Future AIs will do more things better. The rest of the book discusses where that path leads.