Won’t AIs be limited by their ability to design and run experiments?

Intelligence lets you learn more from experiments and run faster, more informative, more parallelized experiments.

A civilization of motivated minds that think a thousand times faster than humanity wouldn’t necessarily be able to produce technological outputs a thousand times faster than humans do.

By analogy: If you spend three hours grocery shopping, and two of those hours are spent commuting to and from the grocery store on horseback, then a car that’s ten times faster than the horse can speed up your shopping trip — but not by a factor of ten. Eventually, the fixed hour spent picking out items at the store comes to dominate the total trip time.
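This is the same structure as Amdahl’s law from parallel computing: a step that can’t be sped up caps the overall speedup, no matter how fast everything else gets. A minimal sketch in Python, using the grocery-trip numbers from the analogy above:

```python
def speedup(fixed_fraction: float, factor: float) -> float:
    """Overall speedup when only the non-fixed part gets `factor` times faster."""
    return 1.0 / (fixed_fraction + (1.0 - fixed_fraction) / factor)

# Grocery trip: 1 of 3 hours in the store (can't be sped up),
# 2 of 3 hours commuting (sped up by the car).
print(speedup(fixed_fraction=1 / 3, factor=10))   # 2.5 -- a 10x car buys only 2.5x
print(speedup(fixed_fraction=1 / 3, factor=1e9))  # ~3.0 -- the in-store hour caps it
```

However fast the car, the trip never gets more than three times faster, because the hour in the store is untouched.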

Even a civilization full of incredibly intelligent reasoners must occasionally wait for experimental results to come back. If your thoughts are sufficiently fast, then the bottleneck is likely to become how quickly you can act in the world, how quickly you can take in information, and how long your plans take to play out.

But it’s not as bad as the grocery store analogy might lead you to believe, because the ability to think trades off against the need for experimental results:

  • Often, you can just think more and think better, and obviate the need for a test, because you realize that previous observations contain the answer already. Compare the ability of modern AIs to learn how to pilot robots, sometimes using pure simulation.
  • Sometimes you can think harder until you find a similarly reliable but faster test.
  • Sometimes you can perform lots of faster but less reliable tests in parallel, yielding similarly reliable results at a higher speed. (A sketch of this tradeoff follows the list below.)
  • Sometimes you can perform many complicated tests at once, such that the data is complex and hard to interpret — which is a fine tradeoff if the cognition it takes to untangle the results is cheaper (from the perspective of an extremely fast-thinking mind) than running the tests one at a time.
  • Sometimes you can find a way to build other devices that perform the experiments much faster. For example, instead of sending many different requests to a biolab to have them synthesize drugs, can you find a way to send one request to a biolab, which will result in it synthesizing a single bacterium that contains the genetic code to produce all of the drugs you wish to synthesize? Similarly, can you create a bacterium that is sensitive to radio signals and will respond quickly to instructions from a fast-running AI — far more quickly than the excruciatingly slow humans running back and forth according to your instructions?
  • And sometimes you can simply take your top ten best guesses, figure out what you would do in each of those cases, build a complicated device that will work no matter which way reality actually turns out to be, and skip the tests entirely.
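To make the parallel-testing point concrete: if each fast test errs independently, a majority vote over many of them drives the combined error rate down rapidly. Here is a minimal sketch in Python (the error rates and test counts are illustrative assumptions, not figures from the book):

```python
import math

def majority_error(per_test_error: float, n_tests: int) -> float:
    """Probability that a majority vote over n independent noisy tests is wrong."""
    needed = n_tests // 2 + 1  # number of wrong results that would swing the vote
    return sum(
        math.comb(n_tests, k)
        * per_test_error**k
        * (1 - per_test_error) ** (n_tests - k)
        for k in range(needed, n_tests + 1)
    )

# One slow, careful test that errs 1% of the time, versus fifteen fast,
# sloppy tests run in parallel that each err 20% of the time:
print(majority_error(0.20, 15))  # ~0.004 -- the parallel batch is *more* reliable
```

Fifteen sloppy tests that are each wrong a fifth of the time, voting together, end up wrong less often than one careful test that is wrong one time in a hundred, provided their errors really are independent.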

A civilization full of copies of Steve Jobs, Marie Curie, John von Neumann, and some of the world’s greatest engineers and programmers — if they were running at 10,000 times our speed — would notice that the key bottleneck was waiting on experimental results, and they could work on reducing that bottleneck.

The history of the Human Genome Project is a good example of what it looks like when intelligent humans continually notice and work on the bottlenecks in a massive research project. What was expected to take fifteen years and cost $3 billion finished two years early and $300 million under budget; most of the genome was sequenced in the final two years using improved methods and equipment.

As for humans, so for AI. An intelligent reasoner doesn’t have to sit there idle while it waits for subjective years for slow tests to crawl to completion. A superhuman reasoner considers alternative pathways, and is adept at finding them — that’s what intelligence is all about.

For a little practical evidence in this regard, consider how humans handle software versus space probes. Making changes to a software product is cheap and rapid, and software engineers have a tendency to experiment constantly, to produce software that doesn’t quite work yet and then fix it where it’s most broken.

By contrast, experimentation is very expensive on space probes — so humans spend a lot of time getting the space probe exactly right and cramming as many experiments into it as they possibly can. They put lots of effort into giving the space probes general experimental machinery that can be controlled from afar, so that if they come up with a new idea for an experiment, they don’t need to design and launch a whole new spacecraft.

A sufficiently smart reasoner, moreover, also has the option of just figuring out how reality is without needing so many dang experiments. Sometimes the data you already have is enough, if you’re smart enough to interpret it.

As a case study: It took eight years for Einstein’s theory of general relativity to be empirically tested on new data. The test was conducted by Frank Watson Dyson and Arthur Stanley Eddington, who photographed stars appearing near the sun during a total solar eclipse and measured the degree to which their light bent around the sun; they found that the deflection accorded with Einstein’s prediction rather than Newton’s.

But that eight-year wait didn’t block any real scientific progress.

[Image: The 29 May 1919 solar eclipse]

One reason for this is that Einstein’s theory was clearly correct: It was already validated on data such as the precession of Mercury’s perihelion — inaccurately predicted by Newton’s theory and accurately predicted by Einstein’s. Human scientists didn’t count this prediction as a win because the data had been collected before Einstein proposed his theory. But “only new observations count” is the sort of crutch that a civilization needs when it has serious issues with hindsight bias, confirmation bias, and scientists cheating to inflate the evidence for their hypotheses. None of these is a necessary feature of good reasoning. And indeed, careful thinkers were able to figure out whether Einstein’s theory was correct well before the Eddington experiment, using the evidence already available to them.

Additionally, there were faster methods of testing the theory — such as building telescopes and observing (the effects of) black holes, as predicted by Einstein’s theory — which presumably could have been done in less than eight years by a sufficiently fast-thinking and competent civilization. Or if you already had space flight capabilities, you could test the clocks on satellites in less than a day. To assume that Einstein’s theory required eight years to test would be to radically underestimate the power of intelligence.
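For a sense of why a single day of satellite data would suffice: the relativistic offset predicted for a clock at GPS altitude works out to roughly 38 microseconds per day, which is enormous compared to atomic-clock precision. A back-of-the-envelope check in Python (standard physical constants; the weak-field formulas are textbook approximations, not anything from the book):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # Earth's mass, kg
c = 2.998e8        # speed of light, m/s
R_EARTH = 6.371e6  # Earth's radius, m
R_ORBIT = 2.66e7   # GPS orbital radius, m (about 20,200 km altitude)

# Gravitational term: the satellite clock sits higher in Earth's potential,
# so it runs fast relative to a ground clock.
gravitational = (G * M / c**2) * (1 / R_EARTH - 1 / R_ORBIT)

# Velocity term: the satellite is moving, so its clock runs slow.
v = math.sqrt(G * M / R_ORBIT)  # circular orbital speed, about 3.9 km/s
kinematic = -0.5 * v**2 / c**2

drift = (gravitational + kinematic) * 86_400  # fractional rate * seconds per day
print(drift * 1e6, "microseconds per day")    # roughly +38
```

A clock error of tens of microseconds per day is trivially measurable, so one day in orbit settles the question.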

When humanity finally got around to building GPS satellites, the satellites were programmed with two different clocks — one that used Einstein’s theory, and one that didn’t. This was a strange choice, given how well-confirmed Einstein’s theory was by that point. But it illustrates that in many cases, a civilization can just take both branches when it’s uncertain about a theory. And it underscores that when experiments and failures are expensive (as they are with satellites), it’s often much cheaper to build things in ways that don’t rely too heavily on any one theory.

And as we point out in the book, Einstein (when compared to Newton and Kepler and Brahe before him) is also an example of how smart people can deduce much more than you might expect from very limited observations. Einstein is impressive not just for figuring out the theory of relativity, but for doing it from so little data.

So while the need for experimental data may indeed constrain how quickly AI can take various actions, this constraint is likely to be a lot weaker than it may intuitively seem.
