Will AIs have human-like emotions?

Probably not.

AIs don’t need to be human-like in order to be great at solving problems. They don’t even need to be human-like to be great at solving the problem “imitate humans.”

When an AI has been trained to mimic humans as closely as possible, anthropomorphizing the AI becomes more tempting. But it doesn’t become any more valid.

It would be foolish to say “This LLM is great at imitating humans, so I’m going to project all sorts of human characteristics onto it, including the characteristic of having wants.”

There’s a twin mistake, however, that we can call “mechanomorphism” — the fallacy of assuming that AIs, being made of mechanical parts, must have all of the stereotypical limitations of machines. This is the mistake behind assuming that AIs must be rigid and inflexible; or cold and unimaginative; or thoughtless and unreflective.

To predict the behavior of AI, we shouldn’t imagine that AIs will be motivated by human emotions, or animated by human goals for the future. But we also shouldn’t assume that AIs are runaway lawnmowers, blind and “automatic” in their behavior. AIs can be machines, and yet still be flexible, adaptive, and strategic.

In other words: Artificial intelligence can be genuinely intelligent, rather than having the stereotypical blind spots of an evil Hollywood robot.

AIs today are still quite limited, but they have improved rapidly in recent years.

At some point, we should expect AI to be “genuinely intelligent,” even if some important pieces are still missing today. And when we try to predict what such an AI would be like, we shouldn’t use the heuristic “it will be like a human” or the heuristic “it will be like normal machines.” As discussed in the book, a better frame for thinking about powerful AI is to ask:

What behavior is required for the AI to succeed?

If you’re playing chess against a powerful chess program, the best way to predict the AI’s next move isn’t to imagine it as a human opponent, and it isn’t to imagine it as an unthinking automaton. It’s to ask: “What kind of move would make the AI likeliest to win?”
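To make that heuristic concrete, here’s a minimal sketch in Python. The `legal_moves` and `evaluate` helpers are hypothetical stand-ins (no real engine is specified here); the point is only that the prediction is whichever move scores highest under the engine’s own objective, with no model of human psychology anywhere in the loop.

```python
def predict_engine_move(position, legal_moves, evaluate):
    """Predict the engine's next move: not by modeling a human opponent
    or an unthinking automaton, but by asking which legal move maximizes
    the engine's estimated chance of winning."""
    return max(legal_moves(position), key=lambda move: evaluate(position, move))

# Toy illustration with stand-in helpers; a real engine searches far deeper.
if __name__ == "__main__":
    toy_win_estimates = {"e4": 0.54, "d4": 0.53, "a3": 0.48}
    best = predict_engine_move(
        position=None,  # placeholder position object
        legal_moves=lambda pos: toy_win_estimates.keys(),
        evaluate=lambda pos, move: toy_win_estimates[move],
    )
    print(best)  # -> "e4", the move with the highest win estimate
```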

AI researchers’ aspiration is to create AIs that are like the best chess programs, but that try to “win” at complex and varied tasks in the real world.

And the reason such AIs will act like they want things isn’t that they’re just like humans, and humans want things.

It’s that want-like behavior helps with winning.
