But experts don’t all agree about the risks!

Lack of expert consensus is a sign of an immature technical field.

We’ve noted that many senior AI scientists think that this technology has a serious chance of killing all humans. For example, Nobel laureate Geoffrey Hinton, who played a large role in pioneering the modern approach to AI, has said that his independent personal assessment puts the odds of AI killing us all at greater than 50 percent. More than 300 AI scientists signed the 2023 Statement on AI Risk that we opened the book with.*

Other scientists, however, take the opposite view; well-known examples include Yann LeCun and Andrew Ng.

What’s to be made of this lack of scientific consensus?

Well, mostly, we recommend that you check out the different arguments made by the two sides (including our own arguments in the book) and assess them for yourself. We think the quality of argumentation mostly speaks for itself, and any attempts to explain why there’s persistent disagreement should be treated as an afterthought.

We note in passing, however, that this state of affairs isn’t any great mystery, in the wake of what we discussed in Chapters 11 and 12. The mere existence of widespread expert disagreement doesn’t establish the book’s thesis, of course. But it’s more congruent with the picture we’ve painted — that the field is in an early, alchemy-like state — than the opposing picture that AI is a mature field with strong technical foundations.

It’s definitely a bit strange for the field of AI to be so divided, even as it spins up powerful technologies. There was more consensus about other technological dangers. Roughly 100 of 100 scientists in the Manhattan Project would have said that global thermonuclear war presented a substantial risk of global catastrophe. In contrast, among the three scientists who received a Turing Award† for the research that more or less kicked off the modern AI revolution, two (Hinton and Bengio) are outspoken about the dangers of superintelligence, and one (LeCun) is outspokenly dismissive.

This level of disagreement about the operation of a machine isn’t normal among experts in a mature technical field. It’s a sign of technical immaturity.

In most technological fields, that immaturity is actually a sign of safety. Back when physicists were still arguing about the basic properties of matter, they weren’t anywhere near creating nuclear weapons. You could observe their disagreement and make an informed guess that they weren’t about to create a bomb that could level cities. Indeed, it’s not possible to create a nuclear bomb without scientists who understand the inner workings of the bomb in detail.

It would be a different situation if the physicists were still bickering about the basic operating principles of their field while creating larger and larger explosions.

Imagine a world in which physicists could somehow “grow” nuclear bombs without really understanding why or how the bombs operated. Now suppose that two-thirds of the most-decorated scientists said, “We did our best to figure out what’s going on. It looks like these devices might create excessive amounts of cancerous radiation that will kill lots of distant civilians, if we continue down this path. Please look at our arguments for why this is so dangerous, and stop racing ahead.” The remaining one-third responded, “That sounds ridiculous! There are always people predicting doom, and you can’t let them get in the way of progress.”

Well, that would indeed have been a different situation entirely.

Discord among scientists in that sort of scenario would not be especially comforting. Engineers probably shouldn’t be allowed to keep growing larger and larger explosives in a situation like that.

AI companies are succeeding at growing machines that are smarter and smarter, year after year. They don’t understand the inner workings of the devices they create. Many of the most eminent scientists in the field express grave concerns; others wave the concerns aside without articulating much in the way of counterargument. This is, at the very least, evidence that the field is immature. The lack of consensus is not evidence that things are fine; in a situation like this, it should be worrying.

How do you figure out whether those worries are real? How do you figure out who’s right between the people raising the alarm and the people trying to dismiss it? As always, you’ve just got to evaluate the arguments.

* More examples, including surveys showing that these views are widely shared in the field, can be found in our discussion of what AI experts say about catastrophe scenarios.

† Considered the highest honor in the field.
