Notes
Article V aims to centralize, into monitored facilities, all AI chip clusters (i.e., sets of interconnected chips above a small size) and the vast majority of AI chips. Monitoring itself is covered in Article VI, and prevention of proliferation is covered in Article VII.
Our draft specifies international verification of this centralization process so that all parties can confirm that all other parties have also centralized their chips. Verification of this type is likely to be straightforward for large AI datacenters, as intelligence agencies are likely to already know where these are. For smaller datacenters, the ISIA can provide oversight of domestic centralization processes as a confidence-building measure.
Chip centralization is an important first step toward restricting the development of artificial superintelligence. Centralizing chips in declared facilities enables further monitoring of how these chips are being used, or verification that they are powered off (if they are not safe to use). Centralization would also make it easier for parties to destroy these chips, as might become necessary under Article XII if a Party persists in violating the treaty.
We avoid recommending, in the treaty text, that CCCs be located away from population centers, despite their capacity for danger. We avoid this restriction both because (in the case of treaty violations) datacenters can likely be shut down without much collateral damage, and because modern datacenters are already routinely located near cities. That said, alternative treaties might prefer to prescribe treating AI datacenters as military facilities, given their potential to pose grave security threats.
Verifying Centralization
Most parties would not and should not blindly trust other parties to follow the rules, and would need some way to verify compliance. The centralization of AI chips into declared facilities makes it possible for ISIA inspections and monitoring to confirm the presence and activity of the chips.
Centralization might not be strictly necessary if there are other ways to monitor AI chips. Unfortunately, we think this is currently the only feasible option short of physically destroying all existing stockpiles of AI chips, given the limited security mechanisms in current chips today.
In the future, hardware-enabled governance mechanisms could be developed to enable remote governance of AI chips, so that chips don’t need to be centralized to declared locations. Aarne et al. (2024) provide estimates for the implementation time of some of these on-chip governance mechanisms. Their estimates cover the timeline to develop mechanisms that are robust against different adversaries. For concision, we will use their estimates for security in a covertly adversarial context where competent state actors may try to break the governance mechanisms but would face major consequences if caught. They estimate a development time of two to five years for ideal solutions, with less secure but potentially workable options available in just months.
Even though that report is over a year old, we are not aware of significant progress toward these mechanisms, and we think two to five additional years is the most relevant estimate from Aarne et al. That is to say: after a few years of research and development into chip security measures, it may become possible to confidently monitor chips without centralizing them, following some further lag time for new securely monitorable chips to be produced and/or for old chips to be retrofitted. Aarne et al. estimate that the first of these options might take four years, but we are optimistic that retrofitting could be done in one to two years if chips are already being tracked.
While centralization as discussed in Article V entails the physical concentration of covered chip clusters, it does not require that governments take ownership of chips. For large datacenters, the treaty permits the datacenter and its chips to remain where they are, under private ownership, so long as they receive monitoring and oversight from the domestic government and the ISIA. This monitoring would ensure that datacenters are engaged only in non-AI activities or permitted AI activities like running old models. For smaller chip aggregations, it may be necessary to physically move them into a larger datacenter, with their owner continuing to access the chips remotely; we do not consider this an overly onerous restriction given that it is already common in cloud computing arrangements.
Feasibility
It looks feasible to verifiably consolidate the majority of AI chips. The very largest AI datacenters, such as those with more than 100,000 H100-equivalents, are hard to hide. They are detectable from their physical footprint and power draw, and many of them are publicly reported on. In fact, it’s probably possible for intelligence services to track and locate datacenters as small as around 10,000 H100-equivalents. Locating smaller datacenters would involve domestic authorities using various powers in cooperation with ISIA inspectors.
States have a range of tools available for tracking down chips owned domestically. They can legally require reporting of all chip clusters larger than 16 H100s; they can use sales records and other financial information from chip distributors; they can interview technicians with expertise in datacenter construction; etc. If they suspect smuggling, obfuscation, or concealment of chips, they can employ law enforcement to investigate further. This process of domestic centralization would be overseen by ISIA inspectors to ensure thorough compliance.
Locating large datacenters could happen quickly, in days or weeks. Actually centralizing chips could take longer, as it might be necessary to build further datacenter capacity in the facilities that would become CCCs.
One significant challenge is providing justified confidence that a Party is not running a secret AI project with undeclared AI chips. ISIA verification of domestic chip centralization provides some assurance, but it may not suffice if a country purposefully undermines its own domestic centralization efforts. For further assurances against illegal AI projects, see the intelligence gathering and challenge inspections discussed in Article X.
On the Definition of CCCs
Our definition of CCC draws a line at 16 H100-equivalents. This threshold aims to meet a few criteria:
- Monitoring chip clusters larger than 16 H100s works well with the training FLOP thresholds in Article IV. Training with 16 H100s (FP8 precision, 50 percent utilization — realistic but optimistic parameters) would take 7.3 days to get to 1e22 FLOP, and 2 years to get to 1e24 FLOP. Therefore, it would be feasible for people to use undeclared chips to reach the bottom threshold, but it would be somewhat impractical for them to get to the prohibited training threshold.
- This threshold is plausibly sufficient for preventing the advancement of AI capabilities, when combined with bans on AI research in Article VIII. Article IV lays out training restrictions where large-scale training is prohibited and medium-scale training is allowed but subject to oversight. It is probably acceptable — that is, it probably poses minimal risk — to allow small-scale training, such as the amount that can be done on 16 H100s in a realistic time frame.
- This threshold has limited impact on hobbyists and consumers. Very few individuals own more than 16 H100s. In mid-2025, a set of 16 H100 chips costs around $500,000. This isn’t a threshold one would accidentally cross by having a few old gaming consoles lying around.
- Consolidating AI chips gets harder as the allowable quantity shrinks. Finding datacenters with 100,000 chips is easy; finding those with 10,000 is likely also relatively easy; with 1,000 it’s unclear; and below 100, it may start to become quite difficult. The 16 H100 threshold is likely to be challenging to enforce, and is picked partially because still lower thresholds would be increasingly infeasible.
- Despite potential enforcement challenges, it is possible that this definition would need to be revised and the threshold brought lower (e.g., 8 H100-equivalents). In our treaty, the ISIA would be tasked with assessing this definition and changing it as needed.
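The training-time arithmetic behind the first bullet can be checked directly. The sketch below assumes roughly 2e15 dense FP8 FLOP/s per H100, an approximate peak-spec figure that is our assumption rather than a treaty-defined value, together with the 50 percent utilization used above.

```python
# Rough check of the training-time arithmetic for a 16-H100 cluster.
# Assumes ~2e15 dense FP8 FLOP/s per H100 (approximate peak spec) and
# 50 percent utilization; both are assumptions, not figures from the treaty.

H100_FP8_FLOPS = 2e15  # dense FP8 throughput per chip, FLOP/s (assumed)
UTILIZATION = 0.5      # optimistic-but-realistic sustained utilization
CLUSTER_SIZE = 16      # chips in the threshold-sized cluster

def days_to_train(total_flop: float) -> float:
    """Days for the cluster to accumulate `total_flop` of training compute."""
    effective_rate = CLUSTER_SIZE * H100_FP8_FLOPS * UTILIZATION  # FLOP/s
    return total_flop / effective_rate / 86_400                   # seconds -> days

print(f"1e22 FLOP: {days_to_train(1e22):.1f} days")        # ~7.2 days
print(f"1e24 FLOP: {days_to_train(1e24) / 365:.1f} years")  # ~2.0 years
```

Under these assumptions the cluster sustains 1.6e16 FLOP/s, giving roughly a week to reach 1e22 FLOP and about two years to reach 1e24 FLOP, matching the figures quoted above.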
Other Considerations
This article calls for parties to avoid co-locating AI chips with non-ancillary non-AI chips. We suggest this because co-location might make verification of chip use (Article VII) more difficult. However, it is not strictly necessary, and it may not be desired: AI chips are currently often co-located with non-AI chips, and the inconvenience of changing this could outweigh the added difficulty of monitoring and verifying the AI chips in a datacenter that mixes the two.
There is some risk that private citizens could construct an unmonitored CCC from “loose” H100-equivalent chips. To combat this, the treaty holds that parties shall make “reasonable effort” to monitor chip sales (in excess of 1 H100-equivalent) and detect the formation of new CCCs. More stringent measures could be taken, such as requiring all such chips and sales to be formally registered and tracked. Our draft does not go to that length, both because we do not expect all that many “loose” H100-equivalent chips to be unaccounted-for after all chips in CCCs are cataloged, and because other mechanisms (such as the whistleblower protections in Article X) help with the detection of newly-formed CCCs.
Rather than immediately requiring small clusters (e.g., 100 H100s) to be centralized, the treaty could instead implement a staged approach. For example: in the first 10 days, all datacenters with more than 100,000 H100-equivalent chips must be centralized and declared; then, in the next 30 days, all datacenters with more than 10,000 H100-equivalent chips must be centralized and declared; etc. A tiered approach might better track international verification capacity as intelligence services ramp up their detection efforts.
One downside of a staged approach is that it might provide more opportunities for states to hide chips and establish secret datacenters. This approach nevertheless parallels how some previous international agreements have worked within the constraints of their verification and enforcement options. For instance, the 1963 Partial Test Ban Treaty did not ban underground testing of nuclear weapons, due to the difficulty of detecting such tests.