Can a technology really be stopped?

Many technologies are banned or heavily regulated.

Nuclear fission is the classic example of a regulated technology. Private companies are not allowed to enrich uranium without government oversight, no matter how useful the cheap energy would be.

In fact, humanity is pretty good at regulating and slowing down all sorts of other technology. The U.S. strictly regulates new drugs and medical devices, housing construction, nuclear power generation, television and radio programming, accounting practices, childcare, pest control, agriculture, and dozens of other industries. Every single state requires a licensing exam for hair styling and nail care. Most of them require one for massage therapists.

We happen to be of the opinion that humanity regulates technology far too much, in many cases. For example, it seems to us that the U.S. Food and Drug Administration is killing far more people (by slowing down or preventing the creation of life-saving drugs, through onerous requirements) than it is saving (by preventing the release of dangerous drugs). It seems to us that the price of housing is far too high, in part because of legal zoning restrictions on what can be built and where. It seems to us that the U.S. essentially destroyed its own nuclear power industry by way of onerous regulations. And seriously, hair stylists?

Humanity absolutely has the ability to impede technological progress. It would be truly tragic and absurd if we used that ability on medicine, housing, and energy, and neglected to use it on one of the rare technologies that would actually kill us all if created.

A ban can be narrowly targeted.

A ban on advanced AI R&D doesn’t need to affect the average person. It doesn’t even need to take away modern chatbots or shut down the self-driving car industry.

Most people are not purchasing dozens of top-of-the-line AI GPUs and housing them in their garages. Most people aren’t running huge datacenters. Most people won’t even feel the effects of a ban on AI research and development. It’s just that ChatGPT wouldn’t change quite so often.

Humanity wouldn’t even need to stop using all the current AI tools. ChatGPT wouldn’t have to go away; we could keep figuring out how to integrate it into our lives and our economy. That would still be more change than the world used to see across entire generations. We would miss out on new AI developments (of the sort that would land as AI gets smarter but not yet smart enough to kill everyone), but society is mostly not clamoring for those developments.

And we would get to live. We would get to see our children live.

Developments that people are clamoring for, such as new and life-saving medical technology, seem possible to pursue without also pursuing superintelligence. We are in favor of carve-outs for medical AI, so long as the work proceeds under adequate oversight and steers clear of dangerous generality.

Governments working to avoid the creation of a rogue superintelligence would have to ensure that AI chips were not being used to develop more capable AIs. Which AI activities and services could be allowed to continue would therefore depend on what verification mechanisms were available to confirm that dangerous AI development was not happening. Better verification mechanisms could decrease the cost of halting AI development by allowing a wider set of activities to continue.

Another step that could potentially help on the margin is installing kill-switches in AI chips, and establishing monitoring and emergency shutdown protocols for any large datacenters in use.* Nuclear reactors are designed such that they can be rapidly shut down in an emergency. If you agree that superintelligence poses an extinction-level threat, then it seems obvious that AI chips and datacenters should be designed to make it easy for regulators to rapidly shut them off.

The point is not to burn all technology because we hate technology. The point is to avoid going further down the road that ends with human extinction.

A big part of the problem is that people don’t understand the looming threat of artificial superintelligence.

In our experience, the people arguing that humanity can’t stop the race to superintelligence are simply failing to understand the point that, if anyone builds it, everyone dies.

“But AI offers great benefits!” — no, actually; you can’t make use of any of the power of superintelligence if it just kills everyone. If humanity wants to reap the benefits offered by superintelligence, then humanity needs to find some way to navigate the transition to superintelligence that doesn’t kill everyone as a side effect.

“But nuclear power plants are scary because they’re associated with atomic bombs that leveled cities, whereas AI is associated with benign tools like ChatGPT!” — True, at least for now. If humanity never manages to understand that artificial superintelligence built using anything remotely like modern methods would just kill everyone, then people might not put a stop to it. But the obstacle there isn’t that humanity never manages to control or stymie budding technologies (such as nuclear weapons or nuclear power); the obstacle there is that people don’t understand the threat.

Hence this book. As we discuss in the final chapter, humanity is capable of quite a lot when enough people understand the nature of the problem.

* Note that installing kill-switches in AI chips and setting up protocols for shutting down datacenters pretty clearly doesn’t solve the problem on its own, given that we may get no warning shots, and may not respond effectively to any warning shots we do get. But it’s a relatively cheap step that is fully possible to take, and one that could help in marginal cases where the risk is already nearly negligible and a little more safety margin would still be welcome.

If society really fears that halting AI development will slow the world down too much, we recommend speeding the world up somewhere else. Let people build more nuclear power plants. Let biochemists do more experiments, not on deadly viruses but on making people healthier and stronger and smarter.

(Of course, society writ large does not actually clamor for mad science, so much as resist change from the status quo. But to the people who say “we can’t stop AI because it’s important for civilization to make progress,” the correct answer is that there’s plenty of progress to be made elsewhere, with the sort of mad science that leaves behind survivors.)
