Isn’t this handing too much power to governments?
The power to ban dangerous technology is already vested in governments.
Banning AI research on the path to smarter-than-human AI wouldn’t do much to expand state power. Governments already legislate and regulate an enormous number of things. Restricting a single research program is potentially a big deal to the AI industry, but it’s a drop in the bucket to governments and to society, which are accustomed to state involvement in many parts of life, and which have precedent for banning dangerous technology such as chemical weapons.*
Banning one additional technology isn’t going to plunge the world into totalitarianism, any more than nuclear arms treaties did.
This isn’t to say that it’s no big deal to ban a technology. We don’t think the bar for state intervention should be low. Rather, we think that superintelligence easily clears any reasonable bar.
If humanity decided to put a stop to AI research and development today, the ban would not need to be particularly invasive. Creating a cutting-edge AI currently requires an extraordinary number of highly specialized computer chips drawing huge amounts of electrical power.
Maybe in ten years it will be possible to do meaningful AI development on a consumer laptop, if humanity allows further improvements to computer chips and further research into AI algorithms. But humanity does not need to let that happen. Governments limiting AI R&D need not be any more invasive in the average person’s life than governments controlling the proliferation of nuclear weapons technology, so long as the world wakes up to the situation we’re in and puts a stop to things now.
* Keep in mind that we advocate for treaties under which governments can’t build superintelligence, either. We aren’t calling for a powerful technology to be built by state actors instead of corporations; we’re calling for a lethally dangerous technology not to be built at all, at least in anything like today’s world.