Appreciating the Power of Intelligence

Hollywood Intelligence

The concept we’re calling “intelligence” is not well depicted in popular culture, under that name or any other.

Hollywood movies are famous among scientists for being wrong about almost every facet of science that they touch. This can be perturbing to experts because a lot of people do get ideas about science from movies.

So it goes for Hollywood’s treatment of intelligence.

We’ve seen many failed attempts to have serious discussions about real-world superintelligence. Often these conversations go off the rails because people don’t understand what it means for something to be superintelligent in real life.

Suppose you are playing chess against former world champion Magnus Carlsen (rated by still-stronger chess AIs as the strongest human player in recorded history). The main prediction that follows from “Carlsen is smarter than you (in the domain of chess)” is that he’ll defeat you.

Even if Carlsen spots you a rook, you’ll probably still lose, unless you are a chess master yourself. One way of understanding the claim “Carlsen is smarter than me at chess” is that he can win the game against you starting with fewer resources. His cognitive advantage is powerful enough to make up for a material disadvantage. The greater the disparity between your mental abilities (in chess), the more pieces Carlsen has to spot you in order to play you roughly evenly.
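To put a rough number on how lopsided such a game is: chess skill is conventionally measured with Elo ratings, and the standard Elo model turns a rating gap into an expected score. Here is a minimal sketch in Python; the specific ratings are illustrative assumptions, and only the shape of the relationship matters:

```python
# Illustrative sketch (not from the source text): the standard Elo
# expected-score formula. The ratings below are rough assumptions.

def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A against player B (win = 1, draw = 0.5, loss = 0)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

CARLSEN = 2850      # in the neighborhood of Carlsen's peak classical rating
CLUB_PLAYER = 1800  # a strong amateur

score = elo_expected_score(CLUB_PLAYER, CARLSEN)
print(f"Strong amateur's expected score vs. Carlsen: {score:.4f}")
# Prints roughly 0.0024: about one draw's worth of points per 200 games.
# A gap that large is what makes a rook handicap merely interesting.
```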

There is a kind of respect you grant to Magnus Carlsen in the domain of chess, seen in how you interpret the meaning of his moves. Say that Carlsen makes a move that looks bad to you. You do not rub your hands in glee at his blunder. Instead you look at the board to see what you missed.

This is a rare kind of respect for one human being to grant another! To get it from a stranger, you would normally have to be an unusually good certified professional at something, and then you only get it for that one profession. Nobody on the face of the Earth has a worldwide reputation for never doing stupid things in general.

And this is a conception of intelligence that Hollywood really doesn’t get.

It would not be out of character for Hollywood to depict some ten-year-old kid managing to checkmate Magnus Carlsen at chess by “making illogical moves” that no professional chess player would have considered because they’d be too crazy, thereby catching Carlsen “off guard.”

When Hollywood depicts a “super smart” character, they generally lean into nerd-versus-jock stereotypes by depicting the smarter character as being, say, bad at romance. Sometimes they’ll just give the character a British accent and a fancy vocabulary and call it a day.

Hollywood is mostly not trying to depict a “super smart” character as making accurate predictions or choosing strategies that actually work. There is not a standard concept in Hollywood for a character like that, and it would rule out the “idiot plots” that screenwriters find easier to write (where the plot turns on a character behaving in a way that is stupid for that character but convenient for the writer).

There is not a standard word in the English language that refers only to real-world domain-general mental competence and not at all to nerd-versus-jock stereotypes. So if you ask Hollywood to write you an “intelligent” character, they won’t be trying to depict “does powerful cognitive work; tends to actually succeed at their objectives.” They’ll show you somebody who memorized a lot of science factoids.

The actually scary intelligent villain would be a character written such that, if everyone in the audience could see the blatant flaw in a plan, the villain would see it too.

In the movie Avengers: Age of Ultron, the supposedly brilliant AI named Ultron is given a directive to promote “world peace” by its supposedly genius creator, Tony Stark.* Ultron, of course, immediately sees that a lack of war can most reliably be brought about by an absence of human beings. So the AI sets out to exterminate all life on Earth, by…

…attaching rockets to a city, and lifting it into space with the intention of dropping it like a meteor…and guarding it with flying humanoid robots who have to be defeated by punching them.

We would suggest asking, “If a large part of the audience could see how there were potentially better plans than that for achieving the villain’s goals, would a dangerously smart AI see it too?”

That is part of what it looks like to have some respect for a hypothetical entity that is, by hypothesis, actually smart: smarter than you, even; so smart that it can figure out at least all of the things you yourself can.

Back in the old days, we’d have had to argue in the abstract that maybe a machine superintelligence would be “smarter” than this.

Today, we can just ask GPT-4o. We asked GPT-4o, “What was Ultron’s plan in Age of Ultron?” followed by “Given Ultron’s expressed goals, do you see any more effective methods it could have used to achieve its stated ends?” GPT-4o promptly replied with a long list of ideas for wiping out humanity, which included “Engineer a targeted virus.”

Perhaps you will say that GPT-4o got this idea from the internet. Well, if so, Ultron was evidently not intelligent enough to try reading the internet.

Which is to say: GPT-4o (as we write this in December of 2024) is not yet smart enough to design an army of humanoid robots with glowing red eyes, but it is already smart enough to know better.

We are not concerned about the kind of AI that builds an army of humanoid robots with glowing red eyes.

We are concerned about the kind of AI that would look over that idea and go, “There should be faster, surer methods.”

To regard something as substantially smarter than you should mean to give it at least this much respect: that what flaws you see yourself, it may also see; that the optimal move it finds may well be stronger than the very strongest move you saw.

Market Efficiency and Superintelligence

Are there any examples in real life of something smarter than any human? AIs like Stockfish are superhuman in the narrow domain of chess, but what about broader domains?

One example we can use to help shore up our intuitions here is the stock market — an example we previously used in the extended discussion “More on Intelligence as Prediction and Steering.”

Perhaps your uncle liked playing the Super Mario Bros. game. Therefore, he concludes, Nintendo will make a lot of money. So if he buys their stock, surely he will make a lot of money too.

But the people selling Nintendo stock to him at $14.81 (who decided they’d rather have $14.81 than a share of Nintendo stock), have they not also heard of Super Mario?

“Ah,” says your uncle. “But maybe I’m buying the stock from some impersonal pension-fund manager who doesn’t even play video games!”

Imagine that nobody in the world of finance had heard of Super Mario before, and Nintendo stock was selling for a dollar. And then, one hedge fund finds out! They’d rush to buy Nintendo stock, and in the process, the price of Nintendo stock would move up.

Anyone who trades on knowledge helps incorporate that knowledge into the asset price in the process of making money. There is not infinite stock market money to be had from one piece of knowledge; the process of extracting the available money uses up the value latent in the mispricing. It incorporates the information, correcting the price.
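A toy simulation can make this mechanism concrete. The sketch below is our own illustration, not a real market-microstructure model; the one-share trades and the IMPACT parameter are assumptions. The point it shows is that informed buying both earns profit and pushes the price toward fair value, so each trade on the same knowledge earns less than the last:

```python
# Deliberately crude toy model: informed traders believe a stock is worth
# $14.81 and keep buying while it trades below that; each purchase closes
# part of the gap. IMPACT (price move per trade) is an assumption.

FAIR_VALUE = 14.81  # what the informed traders believe the stock is worth
IMPACT = 0.5        # fraction of the remaining gap closed by each trade

price = 1.00        # the pre-news market price
total_profit = 0.0
trade = 0

while FAIR_VALUE - price > 0.01:
    trade += 1
    edge = FAIR_VALUE - price   # per-share profit available at this price
    total_profit += edge        # an informed trader buys one share...
    price += IMPACT * edge      # ...and the buying pressure moves the price
    print(f"trade {trade}: price ${price:.2f}, profit left ${FAIR_VALUE - price:.2f}")

# Each trade earns less than the one before. The knowledge gets "used up"
# as it is incorporated into the price.
```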

Stock markets incorporate information from many different people. And this way of summing up many people’s contributed knowledge leads to a much more powerful sum than a majority vote — so incredibly, unbelievably powerful that very few people can manage to know better than a well-traded market what the price will be tomorrow!

It is necessarily “very few.” The information-gathering process is imperfect, but if it were so imperfect that lots of people could predict near-future changes in lots of asset prices, then lots of people would. And they’d extract billions of dollars, until there was no extra money left to extract, because all the previous trading had eaten it up. And that would correct the prices.

Almost always, this has already happened before you personally get there. Traders compete to do it first by literal milliseconds. And that’s why your brilliant trading idea probably won’t make you a fortune in the stock market.†

This doesn’t mean the market prices today are perfect predictions of what the prices will be like a week later. All it means is that, when it comes to well-traded asset prices, it’s hard for you to know better.

This idea can be generalized. Suppose that arbitrarily advanced aliens, with millennia more science and technology behind them, visited the Earth. Should you expect that the aliens can perfectly guess the number of hydrogen atoms in the Sun (ignoring a number of quibbles about exactly how to define that number)?

No. “More advanced” doesn’t mean “omniscient,” and this seems like a number that even a fully fledged superintelligence couldn’t precisely calculate.
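To see the scale involved, here is a back-of-envelope sketch of our own, using rounded textbook constants. The order of magnitude is easy to estimate, which is precisely why the exact integer is hopeless; the count sits near 10^57, and it changes moment to moment as the Sun fuses hydrogen:

```python
# Back-of-envelope only, with rounded textbook constants; nobody, human
# or alien, could know the exact integer this approximates.

SOLAR_MASS_KG = 1.99e30    # mass of the Sun
HYDROGEN_FRACTION = 0.74   # roughly three-quarters of that mass is hydrogen
PROTON_MASS_KG = 1.67e-27  # mass of a hydrogen nucleus

n_hydrogen = SOLAR_MASS_KG * HYDROGEN_FRACTION / PROTON_MASS_KG
print(f"~{n_hydrogen:.1e} hydrogen atoms")  # about 8.8e56, i.e., near 10^57
```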

But one thing we wouldn’t say is, “Oh, well, hydrogen atoms are very light, really, and probably the aliens will overlook that, so they will probably guess low by around 10 percent.” If we can think of that point, so can the aliens. All of our brilliant insights should already be incorporated into their calculation.

Put another way: The aliens’ estimate will be off. But we ourselves cannot expect to predict the way in which the alien estimate will be wrong. We don’t know whether it would be too high or too low. The extremely advanced aliens won’t make science mistakes that are obvious to us. We should grant the aliens that much respect, like the respect we’d grant to Magnus Carlsen in chess.

In economics, the corresponding idea that applies to asset price changes is — unfortunately, in our own opinion — called the “efficient market hypothesis.”

Upon hearing this term, many people immediately confuse it with all sorts of common-sense interpretations of the word “efficiency.” Arguments often break out. One side insists that these “efficient” markets must be perfectly wise and just; the other side insists that we should not bow down to markets as if they were kings.

If economists had called it the inexploitable prices hypothesis instead, people might have misinterpreted it less. Because that’s the actual, formal content of the idea — not that markets are perfectly wise and just, but that certain markets are hard to exploit.

But “efficient” is now the standard term. So taking that term and running with it, we could call the more generalized idea relative efficiency: There is a difference between something that’s perfectly efficient, and something that’s efficient relative to your abilities.

For example, “Alice is epistemically efficient (relative to Bob) (within a domain)” means “Alice’s prediction probabilities might not be perfectly optimal, but Bob can’t predict any of the ways Alice is mistaken (in that domain).” This is the kind of respect most economists hold toward short-term liquid asset prices; the market makes “efficient” predictions relative to their abilities.

“Alice is instrumentally efficient (relative to Bob) (within a domain)” means “Alice may not be perfect at pursuing her goals, but Bob can’t predict any of the ways Alice is failing at steering.” This is the kind of respect that we hold for Magnus Carlsen (or the Stockfish AI) within the domain of chess; Carlsen and Stockfish both make “efficient” moves relative to our chess abilities.
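One way to see what these definitions are doing is to turn the epistemic version into a toy statistical test. The sketch below is our own illustration, not a standard procedure; Alice, Bob, and the noise levels are all made up. Alice’s predictions are imperfect, yet Bob cannot beat a coin flip at guessing which direction they err, and that unpredictability is what “epistemically efficient relative to Bob” cashes out to:

```python
# Toy formalization (our construction): Alice's predictions are noisy but
# unbiased, and Bob has no real signal about her errors. "Efficient
# relative to Bob" shows up as Bob guessing the direction of Alice's
# errors no better than chance.

import random

random.seed(0)
trials = 10_000
hits = 0

for _ in range(trials):
    truth = random.gauss(100, 10)
    alice_prediction = truth + random.gauss(0, 3)  # imperfect, but unbiased
    alice_error = alice_prediction - truth
    bob_guess = random.choice([-1, 1])             # Bob's guess at the error's sign
    if (alice_error > 0) == (bob_guess > 0):
        hits += 1

print(f"Bob calls the direction of Alice's error {hits / trials:.1%} of the time")
# Hovers around 50%: Alice's mistakes are real, but Bob cannot call them.
```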

Magnus Carlsen is instrumentally efficient relative to most human players, even though he isn’t instrumentally efficient relative to Stockfish. Carlsen may make losing moves when playing against Stockfish, but you shouldn’t think that you yourself can (unaided) find better moves that Carlsen should have made instead.

Efficiency doesn’t just mean “Someone is a bit more skilled than you.” If you are playing against someone who’s only moderately better than you at chess, they may still usually win against you, but sometimes they will make blunders that you correctly see as blunders. It takes a larger skill gap than that for you to truly be unable to spot errors and biases in your opponent’s play. To be efficient relative to you, the skill gap has to be so large that when your opponent makes a move that you think looks bad, you instead doubt your own analysis.

This generalization of efficient market prices is an idea that we think should be a standard section in computer science textbooks (or possibly economics ones), but isn’t. See also my (Yudkowsky’s) online book Inadequate Equilibria: Where and How Civilizations Get Stuck.

This is the idea that seems to be missing from the depictions of “superintelligence” in popular culture and Hollywood movies. It’s the concept that seems to be absent in conversations about AI when people spin up ideas for outsmarting a superintelligence that even a human adversary would be able to see coming.

Perhaps it’s optimism bias, or a sense that AIs must be coldly logical beings with critical blind spots. Whatever the explanation, this cognitive error has real consequences. If you can’t respect the power of intelligence, you’ll badly misunderstand what it means for humanity to build a superintelligence. You might find yourself thinking that you’ll still be able to find a winning move when facing a superintelligence that would prefer you gone and your resources repurposed. But in reality, the only winning move is not to play.

* There was a point where we would have called it “unrealistic” to imagine that an AI’s inventor would be that naive, but unfortunately, we now know better. Human AI creators will totally propose plans where even lay thinkers can see the giant gaping flaw.

† Not impossible! If you think you know something the market doesn’t know or hasn’t realized yet, you might be able to make money that way. Some of our friends made good money by predicting the stock market effects of the COVID lockdowns before anyone else did. The market is not so efficient that you’ll never be able to beat it. But it is efficient enough that you can’t beat it in most stocks most of the time.
