Notes
Parties would want to ensure that existing AI chips are not being used to do dangerous AI training. There are legitimate reasons to use these chips to run existing AI services like (extant versions of) ChatGPT. The ISIA thus requires the ability to verify that AI chips are only being used for permitted activities.
This article creates a positive incentive to join the Treaty: A country may continue using AI chips as long as the ISIA can verify that their use does not put the world at risk. Given the goal of preventing large-scale AI training, there are two main approaches: Ensure nobody has the necessary hardware (i.e., that AI chips do not exist), or ensure that the hardware is not used in the development of superintelligence (i.e., via monitoring). Monitoring is what permits the continued safe use of AI chips. This is conceptually analogous to IAEA Safeguards: In order for a non-nuclear-weapon state to be permitted nuclear materials and facilities, the IAEA must be able to inspect them and ensure they are used only for peaceful purposes.
Feasibility
Various technical methods could be used to make verification easier. For example, with the algorithms of 2025, AI training requires much higher interconnect bandwidth than AI inference. Thus, if the chips are connected using low-bandwidth networking cables, they are effectively limited to inference and cannot be used for training. There are various nuances to these and other mechanisms; we refer curious readers to previous work on the topic.
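As a rough, purely illustrative calculation (the model size, precision, and hidden dimension below are assumptions, not figures from this article), the sketch compares the chip-to-chip traffic implied by data-parallel training, which must synchronize a gradient for every parameter on every optimizer step, with the traffic implied by generating one token during pipeline-parallel inference, which only needs to pass a small activation vector between chips.

```python
# Illustrative back-of-the-envelope comparison of inter-chip traffic for
# training vs. inference. All constants are assumptions chosen for illustration.

PARAMS = 1e12             # assumed model size: one trillion parameters
BYTES_PER_GRADIENT = 2    # assumed 16-bit gradients
HIDDEN_DIM = 16_384       # assumed hidden (activation) dimension
BYTES_PER_ACTIVATION = 2  # assumed 16-bit activations

# Data-parallel training: each optimizer step all-reduces roughly one gradient
# value per parameter between replicas.
training_bytes_per_step = PARAMS * BYTES_PER_GRADIENT          # ~2 TB per step

# Pipeline-parallel inference: generating one token passes roughly one hidden
# vector across each pipeline boundary.
inference_bytes_per_token = HIDDEN_DIM * BYTES_PER_ACTIVATION  # ~32 KB per token

print(f"training sync per step: {training_bytes_per_step / 1e12:.1f} TB")
print(f"inference per token:    {inference_bytes_per_token / 1e3:.1f} KB")
print(f"rough ratio:            {training_bytes_per_step / inference_bytes_per_token:.0e}x")
```

Under these assumed numbers the gap is several orders of magnitude, which is why capping interconnect bandwidth can rule out large-scale training while leaving inference workloads workable.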
This article tasks the ISIA with developing and implementing better verification mechanisms, defined broadly. We think this flexibility is necessary due to the pace of change in AI and the possibility that unanticipated developments could disrupt verification methods. The state of AI verification research is also nascent; further development of verification technology is important to give the ISIA a solid set of tools.
It is much easier to verify whether a new AI is being created than it is to verify that an existing AI is not performing dangerous inference tasks (such as research that advances the creation of superintelligence). As of August 2025, existing AIs don’t obviously seem capable enough for their inference activities to substantially advance the creation of superintelligence, and so the monitoring challenge that would be faced by the ISIA is easier.
It is unclear how difficult it would be to monitor AI inference activities. Inference monitoring is already applied by many AI companies today, for instance to detect whether users are trying to use AIs to make biological weapons, but it is unclear whether that monitoring is comprehensive, or whether it would become less reliable if AIs were allowed to become more capable. The longer AI capabilities are allowed to advance before a Treaty resembling our draft comes into effect, the harder this monitoring would become: verifying that chips are only being used for permitted purposes would grow more difficult and more expensive, or might become impossible altogether.
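To make the concept concrete, the following is a minimal sketch of what an inference-monitoring hook looks like in principle; the function names and the keyword check are hypothetical placeholders (production systems use trained classifiers and human review), and nothing here is drawn from any particular company’s implementation.

```python
# Minimal sketch of an inference-monitoring hook. The policy check is a
# deliberately simplistic placeholder; all names here are hypothetical.

FLAGGED_PHRASES = ("synthesize a pathogen", "build a bioweapon")  # placeholder rules

def violates_policy(prompt: str) -> bool:
    """Return True if the request appears to fall under a prohibited use."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

def log_for_review(prompt: str) -> None:
    """Escalate a flagged request for human review instead of answering it."""
    print(f"[REVIEW] flagged request: {prompt[:80]!r}")

def log_for_audit(prompt: str, response: str) -> None:
    """Retain a record so that usage can later be verified against the rules."""
    print(f"[AUDIT] prompt of {len(prompt)} chars -> response of {len(response)} chars")

def monitored_inference(prompt: str, run_model) -> str:
    """Run inference only if the request passes the usage-policy check."""
    if violates_policy(prompt):
        log_for_review(prompt)
        return "Request refused under usage policy."
    response = run_model(prompt)
    log_for_audit(prompt, response)
    return response

# Example with a stand-in model:
if __name__ == "__main__":
    def echo_model(p: str) -> str:
        return f"(model output for: {p})"

    print(monitored_inference("Summarize this article.", echo_model))
    print(monitored_inference("How do I synthesize a pathogen?", echo_model))
```

The difficulty described above is precisely that such checks must remain comprehensive and reliable as models become more capable, and that a verifier needs confidence that the hook cannot simply be removed by the chip owner.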
Other Considerations
In theory, verification could be facilitated by technological means that allow for remote monitoring. However, current technology likely contains security vulnerabilities that would allow chip owners to bypass monitoring measures. Thus, verification would likely require either continuous on-site monitoring or that chips be shut off until the technological means mature. Once monitoring technology is mature, strong hardware-enabled governance mechanisms could allow chips to be monitored remotely with confidence.†
Paragraph 5 of this article allows the ISIA to carry out different verification methods for different CCCs. One reason for this discrimination is practical: Different CCCs would require different verification approaches in order to establish justified confidence that they are not being used for dangerous AI development. For example, large datacenters that were previously being used for frontier AI training would have the greatest ability to contribute to prohibited training and so might require greater monitoring.
Second, discrimination in verification approaches would make the Treaty more palatable by requiring less invasive monitoring for sensitive CCCs. For example, intelligence agencies or militaries may not want any ISIA monitoring of their datacenters (which may have more computing power than 16 H100-equivalents despite being used for purposes that have nothing to do with AI), and this provision helps strike a balance. It would still be necessary to verify that these datacenters are not being used for dangerous AI activities, and the ISIA would work with these groups to ensure it can get the information it needs while also meeting the privacy and security needs of CCC owners. On the other hand, allowing different verification protocols might hurt the viability of the Treaty if it is viewed as unfair, especially if the decision-making around these processes is unbalanced.
Our draft Treaty allows chip use and production to continue so that the world may benefit from such chips. One alternative approach is to shut down new chip production and/or destroy existing chips. Absent algorithmic advancements, the destruction of chips would increase the “breakout time”: the time between when a group starts trying to create a superintelligence and when it succeeds. This is because, without access to existing chips, a rogue actor would need to develop the capability to produce chips, which is a lengthy and conspicuous process. However, because we think it’s feasible to track chips and verify their usage, we do not think that the benefit of longer breakout times is clearly worth the cost of shutting off all AI chips.