Is “intelligence” a simple scalar quantity?

No. But there are levels of capability that AI hasn't yet reached.

We’ve sometimes heard it suggested that the idea of superintelligence assumes that “intelligence” is a simple, one-dimensional quantity. Pour more AI research in, get more “intelligence” out — as though intelligence were less like a machine, and more like a fluid that you can just keep pumping out of the ground.

We agree with the underlying critique: Intelligence isn’t a simple scalar quantity. It may not always be straightforward to build smarter AIs by just throwing more computing hardware at the problem (although sometimes it will be, if the last decade is any indication). Greater intelligence may not always translate directly into greater power. The world is complicated, and capabilities can run into bottlenecks and plateaus.

But as we noted in Chapter 1, the existence of complications, limits, and bottlenecks doesn't mean that AI will conveniently hit a wall close to the human capability range. And as discussed in the book, biological brains have limitations of their own that AI need not share.

Human intelligence has many limitations, and yet it put us on the moon. Animal intelligence is not a single scalar quantity, and yet humans blow chimpanzees out of the water. For all that intelligence is complicated, there is a clear and qualitative gap between us and chimpanzees.

Artificial superintelligences could have limitations and complications as well, while still being able to blow humans out of the water. A qualitative gap could still open up between them and us, if researchers and engineers keep racing to create AIs that are ever-more capable.

Notes

[1] heard it suggested: For an example of such a critique, see Ernest Davis's paper "Ethical Guidelines for a Superintelligence."
