What Would It Take to Shut Down Global AI Development?

We aren’t experts in international law, and this is a formidably complicated topic that we expect to require a large amount of effort by domain experts. In the interest of getting the ball rolling quickly, however, we’ve worked with our technical governance team and outside advisors to assemble sketches of, and guesses at, some measures that could be effective.

We offer these in the spirit of encouraging conversation, debate, critique, and iteration. These first-draft ideas should in no way be treated as confident or authoritative.

As a first step, let’s walk through the constraints and shape of the problem we’re trying to solve, a topic that could easily take up a book of its own. The overall problem is preventing the development of machine superintelligence for decades to come. And because we don’t know where the critical thresholds are, that essentially amounts to stopping AI research and development entirely.

Current AI progress stems from a combination of creating better computer chips, using more chips for longer training runs, and improving AI algorithms. We’ll contend with each of these in turn, explaining the corresponding levers for halting progress toward artificial superintelligence.

Preventing the Creation of More and Better AI Chips

Increasing the capabilities of modern AIs takes an enormous investment of computing hardware and electrical power. As a result, it appears possible for modern state actors to identify and monitor all relevant facilities, and to prevent the emergence of new ones, with minimal impact on consumer hardware.

The supply chain for producing advanced AI chips is extremely concentrated. For some steps in the supply chain, there is only a single company in the world capable of filling that role, and these companies are largely in countries traditionally allied with the United States.

For example, only a few firms can fabricate advanced AI chips (primarily the Taiwanese company TSMC), and one of the key machines used to make high-end chips is produced only by the Dutch company ASML. This is the extreme ultraviolet lithography machine, which is the size of a school bus, weighs around 200 tons, and costs hundreds of millions of dollars.

This supply chain is the result of decades of innovation and investment, and replicating it is expected to be quite difficult — likely taking over a decade, even for technologically advanced countries.

The most advanced AI chips are also quite specialized, so tracking and monitoring them would have few spillover effects. NVIDIA’s H100 chip, one of the most common AI chips as of mid-2025, costs around $30,000 and is designed to run in a datacenter due to its cooling and power requirements. These chips are optimized for the numerical operations involved in training and running AIs, and they’re typically tens to thousands of times faster at AI workloads than standard consumer CPUs.*
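For a rough sense of that gap, here is a back-of-the-envelope comparison in Python. Both throughput numbers are approximate public figures rather than exact specifications, and the real ratio depends heavily on the workload:

```python
# Rough, order-of-magnitude comparison of AI throughput.
# Inputs are approximate public figures, not exact specifications.
h100_dense_bf16_tflops = 990   # NVIDIA H100 SXM, dense BF16 tensor-core throughput
desktop_cpu_fp32_tflops = 1.5  # a strong consumer CPU, all cores, vectorized FP32

ratio = h100_dense_bf16_tflops / desktop_cpu_fp32_tflops
print(f"one H100 ~= {ratio:.0f}x a consumer CPU at dense matrix math")
# -> on the order of hundreds; lower-precision modes and sparsity
#    push the gap into the thousands, while memory-bound workloads
#    can narrow it to tens.
```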

The concentration and complexity of the AI chip supply chain make halting advanced AI development easier than one might expect. Stopping the production of new AI chips would be simple. And because the production process is so complex and interconnected, fairly minimal monitoring of a small number of key suppliers would suffice to ensure that secret supply chains are not created.

Some of the same infrastructure is used to produce AI chips and other advanced computer chips (such as cell phone chips), but there are notable differences between these chips. If advanced AI chip production is shut down, it would be feasible to monitor and ensure that any ongoing chip production is only creating non-AI-specialized chips.

Pre-existing specialized AI chips could be monitored if they’re kept and used to run existing AIs, such as ChatGPT. Ensuring that such chips were only being used to run low-capability AIs (rather than for novel research and development) would be a challenge, but not an insurmountable one. Existing chip locations could be tracked and monitored, and there are various potential mechanisms that could be used to verify what those chips are being used for. This sort of monitoring requires physical access to chips (e.g., inspectors taking measurements in a datacenter). Remote access could be sufficient for verification if new chips were fabricated with improved security and designed with verification and monitoring in mind. As we discuss in the following section, the chip concentrations required to be dangerous (at the August 2025 level of AI algorithms) are so large that it wouldn’t be difficult for state actors to detect all such facilities and subject them to regular inspection.
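As one purely hypothetical illustration of what such a verification mechanism might look like, consider a chip with a secret key fused in at fabrication time, which periodically signs a report of its usage counters for inspectors to check. The sketch below is a toy: every name and field in it is invented, and a real scheme would need tamper-resistant hardware, freshness guarantees, and careful protocol design.

```python
import hmac
import hashlib
import json
import secrets

# Hypothetical: a per-chip key fused in at fabrication time and
# escrowed with the verification authority. All names are invented.
CHIP_KEY = secrets.token_bytes(32)

def chip_attest(counters: dict) -> dict:
    """Runs on the chip: sign a snapshot of usage counters."""
    payload = json.dumps(counters, sort_keys=True).encode()
    tag = hmac.new(CHIP_KEY, payload, hashlib.sha256).hexdigest()
    return {"counters": counters, "tag": tag}

def inspector_verify(report: dict, key: bytes) -> bool:
    """Runs remotely: check that the report is authentic."""
    payload = json.dumps(report["counters"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["tag"])

# A training cluster would show long active hours and many
# interconnected peers; an inference deployment would look different.
report = chip_attest({"hours_active": 712, "interconnect_peers": 3})
assert inspector_verify(report, CHIP_KEY)
```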

Preventing the Usage of More and Better AI Chips

Moving our attention now from the production of chips to the usage of chips: The current largest AI datacenters house hundreds of thousands of AI chips, which cost billions of dollars. To train one of the most powerful AIs today, these chips need to be used for months on end.

Each of these chips has a power consumption similar to that of the average American home, so a datacenter with hundreds of thousands of chips has a power usage comparable to that of a small city. Powering all these chips requires specialized electrical infrastructure, such as large transmission lines. These datacenters are also fairly large buildings with distinctive thermal signatures from continuously running and cooling large numbers of energy-intensive chips.
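To make the scale concrete, here is the rough arithmetic in a short Python sketch. The chip count, price, and power draw are round assumptions in line with the figures above, not exact values:

```python
# Back-of-the-envelope numbers for a frontier training cluster.
num_chips = 200_000        # "hundreds of thousands" of accelerators
chip_cost_usd = 30_000     # approximate H100 price
watts_per_chip = 1_000     # ~700 W chip plus cooling and overhead

hardware_cost = num_chips * chip_cost_usd
power_mw = num_chips * watts_per_chip / 1e6

print(f"hardware: ${hardware_cost / 1e9:.0f}B")  # ~$6B
print(f"power:    {power_mw:.0f} MW")            # ~200 MW
# An average U.S. home draws roughly 1.2 kW on average, so 200 MW
# is on the order of 150,000+ homes, i.e., a small city.
```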

Internally, these datacenters house their chips in server racks and have extensive cooling infrastructure to ensure the chips don’t overheat. If one went inside one of these buildings, it would be extremely clear that it was a datacenter. Its purpose could not be hidden from international monitors who come knocking, especially if those monitors check the chips in the datacenter and find that they’re AI-specialized chips.

Large datacenters and their related power infrastructure are so massive that they can be identified by orbiting satellites. This means that if governments wanted to locate current large datacenters, they would likely be able to do so with a high success rate, whether those datacenters are inside their borders or in other countries. Although the state of public knowledge is limited, this intervention alone could track down the majority of high-end AI chips.

States may attempt to hide their datacenters in the future to make it difficult to identify them with satellites. For example, states might attempt to conceal a datacenter in a mountain (like in the Cheyenne Mountain Complex, which houses NORAD) where it would not be visible from above. Even so, it would be difficult to hide the infrastructure required to run the datacenter.

The biggest factor favoring detection is that datacenters have very large electricity requirements. This power is usually provided via transmission lines, which are almost always above ground. It’s possible to bury transmission lines, but it’s much more expensive and time-consuming, and the construction effort to bury the transmission lines is also difficult to conceal.

So long as it continues to take more than 100,000 chips to train a cutting-edge AI, it looks quite possible for state actors to detect and monitor every relevant datacenter.

Preventing Algorithmic Progress

More efficient AI algorithms can reduce the computational resources needed to train an AI, or they can allow more capable AIs to be produced using a given amount of computational resources, or both.
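A toy calculation shows why this matters for any compute-based threshold. If we assume, for illustration only, that algorithmic efficiency doubles each year (a rate roughly in line with published estimates of recent progress), the compute needed to reach a fixed capability level halves each year:

```python
# Toy illustration: algorithmic progress lowers the compute needed
# for a fixed capability level. The 2x/year rate is an assumption
# used only for illustration.
baseline_flop = 1e26            # compute for some fixed capability today
efficiency_gain_per_year = 2.0

for year in range(6):
    needed = baseline_flop / efficiency_gain_per_year**year
    print(f"year {year}: ~{needed:.1e} FLOP for the same capability")
# After five years of doubling, the requirement has fallen ~32x,
# so far smaller chip stockpiles become dangerous.
```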

Algorithmic progress is primarily driven by research and engineering, and these currently depend on human skill and effort. The skills needed to improve AI algorithms are relatively rare, which explains the large salaries commanded by top researchers in the field.

Although these skills are rare today, it’s unclear how that might change as more researchers move into the field and more knowledge becomes public. Depending on how one wants to count the number of people with the requisite skills, the true number is likely in the hundreds or low thousands (e.g., based on the number of AI researchers and engineers at top AI companies). Conservative estimates could be much higher — for instance, there are tens of millions of software engineers in the world.

Legal and social interventions could likely dramatically slow algorithmic progress. Most people don’t want to break the law, especially when there are real consequences. If publishing certain AI research or performing various AI experiments were made illegal, on the grounds of the catastrophic risks posed by sufficiently capable AI, this would likely dissuade almost all potential AI research scientists, as we discussed previously. Governments could implement export controls that would make sharing or publishing such research illegal without an export license and government approval.

Social taboos would help, too. As precedent, we can look at the Asilomar Conference on Recombinant DNA in 1975, which resulted in a voluntary ban on certain biological experiments that were thought to pose undue risks. In theory, scientists could institute a voluntary ban on advancing AI capabilities. However, this would require these scientists to take seriously the danger from smarter-than-human AI, a departure from the status quo, where advancing AI capabilities is lauded in many circles. Given the myopic monetary incentives and the observed behavior of the labs to date, external legal restrictions seem extremely likely to be necessary, unless the culture of the field shifts dramatically (and in short order).

A critical component of making an imperfect ban effective may be something as obvious as ensuring that world leaders actually understand that they and their families will personally die if they continue to push forward, as we discussed previously. The likeliest noncompliance scenarios are ones in which governments see home-grown superintelligence as a strategic asset (or as a mirage distracting them from profitable new AI tools), rather than as a global suicide button. Governments are much less likely to run secret ASI research projects if they correctly see that this amounts to loading a gun, putting it to their head, and pulling the trigger.

Research bans wouldn’t stop everyone. Some prominent research scientists and tech executives have already said that destroying humanity is an acceptable price to pay for progress. But we should not let the perfect be the enemy of the good. Algorithmic advancements would at least go slower if such people were defunded and shunned by their peers, forcing them to do their lethal research outside the law and without collaboration with any of their more upstanding peers.

The Longer We Wait, the Harder It Gets

If AI chip production and distribution continue on their current trajectory, the challenge of ensuring that enough AI chips are centralized and monitored will only become more difficult. Even if states are not yet convinced of the risks, starting to internationally track AI chips today means that intervention may remain possible in the future. That window may close soon if governments do not move quickly.

If researchers are allowed to continue advancing the state of AI algorithms, smaller and smaller numbers of AI chips are likely to pose a serious threat. If and when AI systems become capable of automating parts of the AI R&D process, it could become especially difficult to control AI development. Such systems could be easily copied and distributed, and the hardware required to run them may not be significant. (The hardware requirements to run AI systems are much lower than those to train AI systems.)

Eventually, it may be impossible for the world’s governments to stop the development of superintelligent AI systems. We’re not there yet, but it gets harder every month. The plan we outline is premised on stopping AI development soon. There are other plans that don’t rely on this assumption, but they’re more difficult to implement, have higher costs to personal freedoms, and come with a greater chance of failure.

* They are sometimes used for other computation-intensive tasks, such as physics and weather simulations, but they are primarily used for AI. One quick method of estimating how many AI chips are used for non-AI activities is to look at the revenue over time of the main chip producer, NVIDIA. If we assume that the recent boom in demand for their datacenter GPUs stems almost entirely from AI uses — a reasonable assumption, given the enormous recent boom in the AI industry and the lack of any comparable trend in other fields that use these chips — we would conclude that AI accounts for the vast majority of AI chip use, as recent revenue growth dwarfs previous revenue. Preventing the fabrication of specialized AI chips need not have much effect on consumer hardware.
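A hedged sketch of that estimation method, using illustrative round figures that are roughly in line with NVIDIA’s public annual reports but are not exact values:

```python
# Estimate the AI-driven share of datacenter-GPU demand from revenue
# growth. Figures are illustrative approximations, not exact data.
baseline_dc_revenue_b = 15   # datacenter revenue before the AI boom (~FY2023)
current_dc_revenue_b = 115   # datacenter revenue after the boom (~FY2025)

ai_share = (current_dc_revenue_b - baseline_dc_revenue_b) / current_dc_revenue_b
print(f"AI-driven share of demand: ~{ai_share:.0%}")  # ~87%
# If the post-boom growth is almost entirely AI demand, the vast
# majority of these chips are being bought for AI.
```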

Another possible intervention, assuming the number of researchers driving AI algorithmic progress continues to be small (i.e., in the hundreds or thousands), would be to pay these researchers to direct their efforts toward non-AI uses or toward AI capabilities or alignment research that has negligible aggregate risk. There is precedent for this kind of intervention: in the 1990s, the U.S. government started an initiative to channel the work of former Soviet weapons scientists and technicians into productive, non-military endeavors.

Notes

[1] extremely concentrated: Thadani and Allen of the Center for Strategic & International Studies have analyzed the semiconductor supply chain.

[2] taking over a decade: According to analysis done by Saif M. Khan of the Center for Security and Emerging Technology (CSET).

[3] quite specialized: For a report on AI chips and how they differ from consumer hardware, see an analysis by Khan of CSET.

[4] power usage: For a report on AI infrastructure, see work done by the Institute For Progress.

[5] high-end AI chips: For some analysis of trends in the production and ownership of high-end AI chips, see the report by Pilz et al. on Trends in AI Supercomputers.

[6] difficult to conceal: It may be possible to generate power on-site, thus removing any conspicuous transmission lines. The current Cheyenne Mountain Complex uses diesel generators, and probably has the capacity to power around 10,000 of the most advanced AI chips. But running these chips continuously for a large training run would require constantly delivering fuel, which would be noticeable. Rough calculations show that these 10,000 chips would require about one tank truck of diesel every day. Even if there were enough local generation capacity to power 200,000 chips, this would require 20 tank trucks of diesel every day.
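Here is the rough arithmetic behind that estimate, with round assumed inputs (chip power, generator efficiency, and truck capacity are all approximations):

```python
# Rough fuel arithmetic for a hidden, generator-powered datacenter.
chips = 10_000
kw_per_chip = 0.7          # ~700 W for a top AI chip, excluding cooling
liters_per_kwh = 0.27      # typical large diesel-generator consumption
truck_capacity_l = 34_000  # a large ~9,000-gallon fuel tanker

kwh_per_day = chips * kw_per_chip * 24           # 168,000 kWh
liters_per_day = kwh_per_day * liters_per_kwh    # ~45,000 L
print(f"~{liters_per_day / truck_capacity_l:.1f} tanker trucks/day")  # ~1.3
# Cooling overhead pushes this higher. Either way, a 200,000-chip
# site would need a conspicuous daily convoy of fuel trucks.
```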

Datacenters could also be powered by nuclear power plants. Fortunately, many state actors already have practice and experience monitoring the creation of new nuclear power plants.

[7] Algorithmic progress: Examples of this kind of progress include FlashAttention, an algorithm that makes AI chips execute a certain set of mathematical operations more efficiently by taking advantage of details of AI chip design; Mixture-of-Experts, a change to the architecture of AIs that makes only a subset of their parameters get used on each input token (e.g., word); and GRPO, a method for fine-tuning AIs.
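To give a flavor of what "only a subset of their parameters" means, here is a minimal, illustrative Mixture-of-Experts routing sketch. The dimensions and initialization are toy choices, not any production architecture:

```python
import numpy as np

# Minimal Mixture-of-Experts routing: each token is sent to its
# top-k experts, so only a fraction of parameters is used per token.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: one token's activation vector, shape (d_model,)."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]   # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only top_k of the n_experts weight matrices are touched per token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

out = moe_layer(rng.standard_normal(d_model))
print(out.shape)  # (64,)
```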

[8] top AI companies: For some analysis of the scarcity of top AI researchers, see Sharon Goldman’s piece in Fortune.

[9] already said: For a review of what prominent researchers have said and where we think they’re going wrong, see our answer to Why don’t you care about the values of any entities other than humans?
