Is “general intelligence” a meaningful concept?

Yes.

The peregrine falcon can dive through the air at 240 miles per hour. A sperm whale can dive more than a mile below the ocean’s surface. A falcon would drown in the sea, and a whale would splat if it tried to fly; but somehow humans have managed to both fly faster and dive deeper than either creature while inside metal shells of our own design.

Our ancestral environment did not include the deep ocean, nor were our forebears selected for their ability to soar. We managed these things and many others, not through special instincts, but by the sheer versatility of our minds.

Our ancestors were, somehow, selected to be good at solving problems, broadly construed, despite rarely facing an engineering challenge more complicated than building a spear.

Do humans possess a perfect ability to solve problems? No, obviously not. Humans can’t learn to play chess as well as the best chess-playing AIs, at least within the time limits of the game; superhuman levels of chess performance are demonstrably possible, and humans can’t reach them unaided. Our intelligence is not universal, in the sense that we can’t learn to do everything that is physically doable using our brains alone. Whatever “generality” humans have, it isn’t that. Nevertheless, there’s something immensely more general about a human’s ability to learn and solve new problems, compared to the learning and problem-solving ability of a narrow chess AI like Deep Blue.

But generality isn’t all-or-nothing. It admits of degrees.

Deep Blue was not very general in its ability to steer anything other than a chessboard. It could find winning chess moves, but it could not steer a car to the store and buy milk, let alone discover the laws of gravity and design a moon rocket. Deep Blue couldn’t even play other board games, be they simpler games like checkers, or harder games like Go.

By contrast, consider AlphaGo, the AI that finally conquered Go. Go didn’t fall to the kind of algorithm that conquered chess, but the algorithms behind AlphaGo turned out to generalize: a successor, AlphaZero, went on to break previous records in chess, and a further variant, MuZero, matched that performance while also excelling at Atari video games on the side. These new algorithms still couldn’t fetch milk from the store, mind you, but they were more general.

Some methods of intelligence, it turns out, are much more general than others.

But we’re even further from pinning down “generality” than “intelligence.”

It’s easy to say that humans are more general than fruit flies. But how does generality work?

We don’t know. There isn’t yet a mature formal theory of “generality.” We can wave our hands and say that an intelligence is “more general” to the extent that it’s able to predict and steer in a wider range of environments, despite a wider range of complicated challenges. But we can’t give you a way of quantifying challenges and environments that makes this a formal definition.

Does this sound unsatisfying? We’re unsatisfied too. We very much wish humanity would accumulate a better understanding of general intelligence before attempting to build generally intelligent machines. This might improve the dire technical situation we’ll describe in Chapters 10 and 11.

While we don’t have a formal description of the phenomenon, we can nevertheless deduce a few facts about generality by observing the world around us.

We know that humans are not born with the innate knowledge and skill to build skyscrapers and moon rockets, because our distant ancestors never had to work with skyscrapers and moon rockets in a way that could encode that knowledge into our genes. Rather, those abilities come from our power to learn about domains that we weren’t born understanding.

To assess generality, don’t ask how much something knows. Ask how much it learns.

There is some sense in which humans are more powerful learners than mice. It’s not that mice can’t learn at all — for instance, they can learn to navigate a maze. But humans can learn more complicated and weirder stuff than mice can, and we can string our pieces of knowledge together more effectively.

How does this work, exactly? What do we have that mice don’t?

Consider two people who are learning how to navigate a new town after a move.

Alice memorizes whatever routes she needs to know. To get from her house to the hardware store, she takes a left on Third Street, a left at the second stoplight, and then goes two more blocks and takes a right into the parking lot. She separately memorizes the route to the grocery store, and the route to her office.

Meanwhile, Beth studies and internalizes a map of the town.

Alice may do well in her routine life, but if she ever has to drive somewhere new without directions, she’s in trouble. In contrast, Beth has to spend more time planning her routes, but she’s much more flexible.

Alice may well be faster on the specific routes she memorized, but Beth will be better at driving everywhere else. Beth will also have an advantage in other tasks, like finding a route that minimizes traffic during rush hour, or even designing a street layout for another town.
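To make the contrast concrete, here is a minimal sketch in Python. Everything in it (the toy town, the place names, the routes) is invented for illustration: Alice’s knowledge is a lookup table of memorized trips, while Beth’s is a map, a graph of which places connect to which, over which a generic breadth-first search can plan any trip.

```python
# A toy illustration (all names and routes invented): Alice as a lookup table
# of memorized routes, Beth as a map plus a general-purpose search.
from collections import deque

# Alice: memorized routes only. Fast on these trips, useless everywhere else.
alice_routes = {
    ("home", "hardware_store"): ["home", "third_st", "second_stoplight", "hardware_store"],
    ("home", "grocery_store"): ["home", "main_st", "grocery_store"],
    ("home", "office"): ["home", "main_st", "office"],
}

# Beth: an internalized map of the town, recording which places connect to which.
town_map = {
    "home": ["third_st", "main_st"],
    "third_st": ["home", "second_stoplight"],
    "second_stoplight": ["third_st", "hardware_store", "park"],
    "hardware_store": ["second_stoplight"],
    "main_st": ["home", "grocery_store", "office"],
    "grocery_store": ["main_st"],
    "office": ["main_st"],
    "park": ["second_stoplight"],
}

def beth_route(start, goal):
    """Plan a route with breadth-first search: slower per trip, but works for any pair."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in town_map[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # no route exists on the map

# A trip Alice never memorized: grocery store to the park.
print(alice_routes.get(("grocery_store", "park")))  # None: not in her table
print(beth_route("grocery_store", "park"))
# ['grocery_store', 'main_st', 'home', 'third_st', 'second_stoplight', 'park']
```

The table answers only the questions it was built for; the map, combined with a general search procedure, handles trips nobody planned for in advance.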

There seem to be types of learning that are less like memorizing driving routes and more like internalizing a map. There seem to be mental gears that can be reused and adapted to lots of different scenarios. There seem to be types of thinking that run deep.

We’ll have more to say about this topic in Chapter 3.

Notes

[1] not universal: A formal definition of “universal intelligence” was put forth by Legg and Hutter in 2007.
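For the curious, and as we understand their formulation, Legg and Hutter’s measure scores an agent π by summing its expected performance over all computable environments, with simpler environments weighted more heavily:

$$
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
$$

Here E is the set of computable environments, K(μ) is the Kolmogorov complexity of the environment μ, and the V term is the expected total reward the agent π earns in μ. The theoretical maximizer of this measure (Hutter’s AIXI) is not computable, which is one way of making precise the point that no physically realizable mind, human or machine, is fully universal.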
