Isn’t it smarter to rush ahead and make sure good guys have the lead?

No.

Modern AI techniques do not yield AIs that do what their operators intend (as discussed in Chapter 4). Solving this problem is the sort of thing that would typically take humanity quite a lot of trial and error, and we have no room for error here (as discussed in Chapter 10).

Moreover, the current crop of AI engineers is very far from being up to the task, as discussed in Chapter 11. Modern AI engineers sorely lack the scientific understanding it'd take to succeed at AI alignment. AI researchers aren't like the operators of the Chernobyl nuclear reactor: those operators were working with a device that was theoretically well understood, and they had careful safety manuals that they neglected in a way that led to catastrophe. There's no such thing as an AI safety manual built from a comprehensive understanding of the AI's internals and what conditions might cause things to go wrong. We're not even close to the Chernobyl level of competence here. And Chernobyl exploded.

AI researchers are flying blind and winging it, with almost no chance of success.

In that context, it doesn’t matter whether the “good guys” or the “bad guys” build superintelligence. The AI’s preferences aren’t sneezed onto it by whoever’s standing closest.

It doesn’t matter how well-intentioned they are, and how careful they say they’re being. It doesn’t matter who “wins” the race. If humanity races to artificial superintelligence, then we all die.

It’s not impossible to stop. It might not even be all that hard.

We’ll turn to this point in the final chapter of the book.

Things change. They especially change when there is a desperate, urgent, recognized need. The main impediment to stopping is that world leaders haven't yet recognized the danger. And the process of recognizing it has already begun.
