Why a research ban? That seems extreme.

Because more breakthroughs might make it effectively impossible to stop people from making superintelligence.

In the book, we mentioned how a single paper in 2017 kicked off the entire LLM revolution by describing an algorithm that made it practical to train useful AIs on specialized commercial hardware.

If powerful AI ever became trainable on widely available consumer hardware, measures to prevent superintelligence would need to become far more onerous, and would fail that much sooner.

That’s why research into even more powerful and efficient AI algorithms is also a lethal poison for humanity.

This is very bad news, and not what we wish were true. But it seems to be the situation we’re in.

No law can prevent current AI scientists from thinking about more efficient algorithms in the privacy of their own minds. Perhaps some people would start underground networks for sharing research results. Some people in the AI industry already proudly declare that humanity should die and be replaced by AIs; they might do their best to push forward, no matter what anyone else says.

But AI research would slow down a lot if it were illegal, and all the more so if it were widely understood that this really is the kind of research that is liable to get us all killed. It would slow down immensely if underground networks of that sort were tracked down and shut down with the same conviction used to stop people who try to enrich uranium in their garage, because the real-world dangers were taken seriously.

Most people do not try to do extremely illegal things that will make international law enforcement and intelligence agencies genuinely upset. Making it illegal to publish clever new AI algorithms would deter perhaps 99.9 percent of people and nearly all corporations. The remaining 0.1 percent could then be handled by local, national, and international police and intelligence agencies, and certainly wouldn't enjoy anything like the current level of academic funding.

It would be a very different world from the current world, where it’s totally legal to run the most dangerous mad science experiments in history and giant corporations put billions of dollars into the endeavor.

We don’t know how many more breakthroughs it will take before the AIs are smart enough to do AI research and build even smarter AIs. It could be one breakthrough. It could be five. But better algorithms are just as deadly as better hardware. They are two horses drawing the same cart off a cliff.
