What if AI is developed only slowly, and it slowly integrates with society?
Even on that slower path, the outcome would very likely be catastrophic.
Our predictions are about endpoints, not pathways. We don’t know what will happen with AI between now and the point where it becomes truly dangerous.
For all we know, the danger point could arrive in six months, if it turns out that dumb AIs thinking for a really long time are pretty decent at doing their own AI research (in a way that kicks off a critical feedback loop). And for all we know, the field could stall out for six years while awaiting some crucial insight that then takes another six years to mature.
But in none of those scenarios does AI stop improving. And humans are not going to be able to self-improve fast enough to keep up with AIs indefinitely, for the reasons we outlined in the online supplement to Chapter 6.
Thus, even though that future might get interesting and weird, it would be a future where more and more power goes to AIs. Once any collection of those AIs gets into a position where it could take the planet’s resources for itself, that’s the point of no return. Either that collection of AIs contains a component that deeply cares about happy, healthy, free people, or the future goes poorly for us.