Can we adopt a wait-and-see approach?
No. We don’t know where the critical thresholds are.
There’s a decent chance that AI development will get out of control once AIs are smart enough to automate all of AI research. That could, in theory, happen quietly in a lab, with no loud precursor events and no warning shots to wake humanity up.
As we discussed previously, chimpanzee brains are very similar to human brains, except for being smaller by about a factor of four. There’s not an extra “be very smart” module inside human brains; there’s a smooth pathway between brains like theirs and brains like ours. It would be hard to tell, just by looking at the brains, where the line falls between “a society of these will remain a bunch of monkeys” and “a society of these will walk on the moon.” Primate brains crossed a critical threshold, and it wouldn’t have been obvious from the outside. Are there critical thresholds that AI will cross? Who knows! It’s not as though AI engineers can tell us; they can’t even predict the specific capabilities of their new AI systems before running them.
If humanity understood exactly how intelligence worked, and exactly how the behavior of AIs would change as their capabilities grow, it might be feasible to dance along the edge of the cliff. But right now, humanity is like someone sprinting toward a cliff-edge in the dark and fog, with the final fall some unknown distance away. We can’t just wait until we stumble over the edge to decide that we should have acted differently.
We’ll never be certain where those thresholds are. This means we’re forced to act before we’re certain, or die.