Can we use past progress to extrapolate when we’ll build smarter-than-human AI?

We don’t have a good enough understanding of intelligence for that.

One class of successful predictions involves taking a straight line on a graph, one that has been steady for many years, and predicting that the straight line continues for at least another year or two.

This doesn’t always work. Trend lines sometimes change. But it often works reasonably well; it is a case where people make successful predictions in practice.

The great trouble with this method is that often what we really want to know is not “how high up will this line on the graph be by 2027?” but rather, “What happens, qualitatively, if this line keeps going up?” What height of the line corresponds to important real-world outcomes?

And in the case of AI, we just don’t know. It’s easy enough to pick some measure of artificial intelligence that forms a straight line on a graph (such as “perplexity,” a measure of how well a model predicts text) and project that line outwards. But nobody knows what future level of “perplexity” corresponds to which level of qualitative cognitive ability. People can’t predict that in advance; they’ve just got to run the AI and find out.
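To make the mechanics concrete, here is a minimal sketch of what this kind of extrapolation amounts to. It is not from the book, and the compute budgets, perplexity numbers, and straight-line fit below are invented purely for illustration:

```python
# Toy illustration: fit a straight line to a metric and extrapolate it.
# All numbers are made up; the point is that the projection says nothing
# about which qualitative abilities appear at a given metric value.
import numpy as np

# Hypothetical training-compute budgets (FLOP) and measured perplexities.
compute = np.array([1e21, 1e22, 1e23, 1e24])
perplexity = np.array([12.0, 9.5, 7.8, 6.4])

# Scaling trends of this kind are roughly straight lines in log-log space.
slope, intercept = np.polyfit(np.log10(compute), np.log10(perplexity), 1)

# Extrapolate the fitted line to a larger, hypothetical compute budget.
future_compute = 1e27
projected = 10 ** (slope * np.log10(future_compute) + intercept)
print(f"Projected perplexity at {future_compute:.0e} FLOP: {projected:.2f}")

# The extrapolation itself is easy. What it cannot tell you is which
# qualitative capabilities, if any, show up at that perplexity.
```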

Nobody knows where the “now it has the ability to kill everyone” line falls on that graph. All they can do is run the AI and find out. So extrapolating the straight line on the graph doesn’t help us. (And that’s even before the graph is rendered irrelevant by algorithmic progress.)

For that reason, we don’t spend time in the book extrapolating lines on graphs to predict exactly when somebody will throw 10^27 floating-point operations at training an AI, or what consequences this would have. That’s a hard call. The book focuses on what seem to us to be the easy calls. This is a narrow range of topics, and our ability to make a small number of important predictions in that narrow domain doesn’t justify making arbitrary prognostications about the future.
