Isn’t it important to race ahead because of “hardware overhang”?

That would be suicidal, because we’re too far from an alignment solution.

Over the past decade or so, some people concerned about the dangers of AI argued that it might be good to advance AI as quickly as possible. The idea was that the smartest AIs would then require nearly all of the world’s computing hardware to run. No individual breakthrough would suddenly unleash thousands of powerful AIs thinking thousands of times faster than any human.

So long as humanity was always using a sizable fraction of its computing power to run the smartest AIs, change would at least happen gradually, giving humanity time to adapt. There would be no “hardware overhang” — no moment where AI capabilities suddenly leap ahead because the world has been waiting to deploy a large backlog of computing hardware on AI. Or so the argument went.

We think this is a pretty bad argument. One problem with it is that intelligence looks like it’s probably subject to threshold effects.

The transition from chimpanzee-level intelligence to human-level intelligence wasn’t “discontinuous” in any sense; it was all quite gradual from humanity’s point of view. But it still went quite quickly from an evolutionary perspective. And the transition from pre-industrial to post-industrial civilization went even faster. None of it was gradual enough for other animals to adapt in any meaningful way.

As an example, an AI that requires a significant fraction of the world’s computing power to run might be smart enough to discover new AI algorithms and new computer chip designs, which could quite quickly lead to a thousand smarter-than-human AIs thinking thousands of times faster than humanity. (Remember: A modern datacenter requires as much electricity to run as a small city, whereas a human requires as much electricity to run as a large light bulb. There’s a lot of room for AI efficiency to improve.)
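To put rough numbers on that parenthetical, here is a back-of-the-envelope sketch. Both figures are ballpark assumptions of ours, not from the text: we take a large modern datacenter to draw on the order of 100 megawatts, and note [1] below puts a human’s total power consumption at roughly 100 watts.

```python
# Back-of-the-envelope comparison of power draw: datacenter vs. human.
# Both figures are rough assumptions for illustration only.
DATACENTER_WATTS = 100e6  # ~100 MW, plausible for a large modern datacenter
HUMAN_WATTS = 100         # total human energy expenditure (see note [1])

ratio = DATACENTER_WATTS / HUMAN_WATTS
print(f"Datacenter / human power ratio: ~{ratio:,.0f}x")  # ~1,000,000x
```

Even if only a fraction of that million-fold gap can be closed by better algorithms and chips, it suggests enormous headroom for AI efficiency.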

Or, if the bottleneck is computing power to build AIs rather than computing power to run them, we can expect an overhang of hardware once the training process is done, with all that compute freed up to run large numbers of fast-thinking AIs.

Even if intelligence weren’t subject to threshold effects, we’re skeptical of the idea that continually hitting humanity with smarter and smarter AIs as fast as possible (even if none of them is quite smart enough to kill us) is a great way to help humanity develop the engineering discipline required to build robustly friendly AIs.

The problem is that AIs are grown rather than crafted, and nobody is anywhere close to figuring out how to grow AIs that robustly care about anything their designers want them to.

That problem is not solved by growing more AIs at the earliest moment it’s possible to grow them. The idea is practically a non sequitur. See also some of Soares’s old writing on how AI alignment requires serial effort.

The non sequitur was nevertheless picked up by OpenAI CEO Sam Altman, who in 2023 gave it as his excuse for OpenAI to rush ahead as fast as possible.

This excuse was then revealed to be hollow when that same Sam Altman rushed to build dramatically more computing hardware.

We think this is a decent case study in how executives at AI companies will latch onto whatever argument they think might fly to excuse racing ahead. We think that most such arguments can be dismissed on their merits, and we recommend against putting any extra stock in an argument because an AI corporate executive has made it.

Notes

[1] large light bulb: McMurray et al.’s paper gives an average basal metabolic rate (the minimum resting energy consumption) of about 0.863 kilocalories per hour per kilogram, which works out to about 1 watt per kilogram, or roughly 60-80 watts for an adult. Basal metabolism is only about 60-80% of total energy expenditure; including physical activity, a human’s total power consumption is about 100 watts.
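As a quick sanity check on that conversion (the 0.863 figure is from the cited paper; the 60 kg and 80 kg body masses are illustrative assumptions):

```python
# Convert basal metabolic rate from kcal/hr/kg into watts.
KCAL_TO_JOULES = 4184     # joules per kilocalorie
SECONDS_PER_HOUR = 3600

bmr_kcal_per_hr_per_kg = 0.863  # average BMR from McMurray et al.
watts_per_kg = bmr_kcal_per_hr_per_kg * KCAL_TO_JOULES / SECONDS_PER_HOUR
print(f"BMR: {watts_per_kg:.2f} W/kg")  # ~1.00 W/kg

for mass_kg in (60, 80):  # illustrative adult body masses
    print(f"{mass_kg} kg person: ~{watts_per_kg * mass_kg:.0f} W at rest")
# -> ~60 W and ~80 W: comparable to a large incandescent light bulb
```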
