Won’t AI differ from all the historical precedents?

Yes.

Some of the unique features of the AI alignment challenge will make it easier than, say, engineering a nuclear power plant. Other features will make it harder. On the whole, nuclear weapons and nuclear power plants seem dramatically simpler to manage than smarter-than-human AI.

People in the industry are quick to point out that AI itself can be asked to help with the challenge of aligning AI. We don’t think this will matter too much — roughly, because aligning a superintelligence is a difficult problem, and we don’t have a good way to evaluate solutions or measure progress. This means that AI would need to already be very capable, and very aligned, in order to help with this problem. We’ll discuss this idea more in Chapter 11.

Another way AI alignment could be easier than engineering nuclear power plants is that humans could have quite a high degree of control over how the AIs they build function. You can’t choose the physics that governs a nuclear reactor, but if humans were crafting AIs and they knew exactly what they were doing, then they could make a lot of choices about the AI’s cognitive dynamics. Though, of course, nobody is anywhere near that level of understanding in real life, as discussed in Chapter 2.

As for the ways AI is likely to be a harder challenge than the problems humanity has faced before, let's compare superintelligence to nuclear weapons. We think this comparison suggests that superintelligent AI is a far thornier problem, for a number of reasons:

  1. Nuclear weapons are not smarter than humanity.
  2. Nuclear weapons are not self-replicating.
  3. Nuclear weapons are not self-improving.
  4. Most realistic nuclear war scenarios do not involve humanity getting wiped out entirely; in all likelihood, there would be people left among the ruins to rebuild.
  5. Venture-backed companies aren’t out there scaling up global nuclear weapon stockpiles by a factor of ten every year.
  6. The science of nuclear weapons is pretty well understood. Engineers can calculate roughly how powerful a nuclear weapon will be before they build it, and they know exactly what concentration of fissile material is needed to set off the chain reaction that leads to a cataclysmic detonation.
  7. Nuclear weapons don’t make their own plans. If a country builds a nuclear weapon, then it owns the nuke. Its scientists don’t have to worry about the nuke getting vastly smarter than them and deciding it would rather not be owned.
  8. The world generally agrees that if nuclear weapons go off, they kill people. The physics community is not fractured into philosophical camps with strange stances such as, “If every individual has their own nuke, they won’t be at the mercy of bad people with nukes,” or, “Don’t worry, humans will just merge with the nuclear weapons,” or, “Nuclear war is inevitable, and therefore it is childish and silly to try to stop it.”
  9. Nuclear weapons are hard to replicate. There is no huge technological effort underway to build rentable technology that anyone can use to make nukes, and making one nuclear weapon in a lab doesn’t let you deploy 100,000 copies of that nuclear weapon a week later.
  10. Major world powers treat nuclear war as a real possibility and an unacceptable outcome. World leaders put real work into avoiding it; even the most selfish among them knows that a nuclear war could kill them and their family and ruin the places and possessions dearest to them. Citizens and voters don’t want a nuclear war. Humanity is as united against nuclear war as we have ever been united about anything.

Worse yet, as discussed in the book and in the extended discussion section below, humanity only gets one shot at getting superintelligence right. If a nuclear power plant explodes, other people in the world can learn from what happened and do better next time. If a superintelligence goes wrong, there may be no one left to learn the lesson and try again.

All of these features suggest that superintelligence poses an extraordinary challenge, and an extraordinarily novel one. There are analogies, but they only apply in narrow ways. There is no established playbook for ASI.
