A Tentative Draft of a Treaty, with Annotations
Below, we provide annotated example draft language for the sort of treaty that could be implemented by major governments around the world, if they recognized the dangers from artificial superintelligence (ASI) and sought to prevent anyone from building ASI.1
We are not policymakers and we are not well-versed in international law. We present this as an illustrative example of some potentially valuable treaty provisions to have in view, using mechanisms tailored to the situation at hand and grounded in historical precedent.
This draft text covers many different mechanisms that we think would be required to prevent AI developers from seriously endangering humanity. In practice, we would expect different aspects to likely be covered by different treaties.2 And of course, in reality, the international community should carefully draft the whole treaty, subject to negotiation and review by relevant experts.
For each article in the example treaty below, we’ve provided a commentary section explaining why we made key decisions, and a section discussing some relevant precedent.
A real treaty would involve many details. We’ve included some example details, but most are relegated to “annexes” (which we do not flesh out in their entirety). Many of the quantities and numerical thresholds we use in our draft constitute our best guess, but they should still be treated only as guesses. Many of those numbers would require further study and revision before being finalized. These sorts of details plausibly wouldn’t be included in the treaty itself, analogous to how, in the case of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), specific details of inspections and so-called “safeguards” programs were decided between each country and the IAEA, rather than being included in the NPT itself. However, for clarity, we have kept our best-guess numbers directly in the treaty text, to help it feel more concrete.
[1] It might be that nation-states concerned about artificial superintelligence would prefer to take smaller steps first — e.g., steps that don’t shut down AI research and development just yet, but that keep the option open to shut down AI R&D in the future. We don’t recommend that course of action, because we think the situation is already clearly out-of-hand and we are not confident the situation will get much clearer before it’s too late. Nevertheless, the MIRI technical governance team is working on proposals for those scenarios, in case they are helpful. You can follow their work here.
[2] This is the case with nuclear weapons agreements, where separate treaties establish the IAEA (1956, by the Conference on the Statute of the International Atomic Energy Agency, hosted at the Headquarters of the United Nations), the NPT (1970, through negotiations in the United Nations Eighteen Nation Committee on Disarmament), and arms control agreements like the START treaty (1991, following nine years of intermittent negotiation between the U.S. and the Soviet Union).
Treaty on the Prevention of Artificial Superintelligence
Preamble
The States concluding this Treaty, hereinafter referred to as the Parties to the Treaty,
Alarmed by the prospect that the development of artificial superintelligence would lead to the deaths of all people and the end of all human endeavor,
Affirming the necessity of urgent, coordinated, and sustained international action to prevent the creation and deployment of artificial superintelligence under present conditions,
Convinced that measures to prevent the advancement of artificial intelligence capabilities will reduce the chance of human extinction,
Recognizing that the stability of this Treaty relies on the ability to verify the compliance of all Parties,
Recalling the precedent of prior arms control and nonproliferation agreements in addressing global security threats,
Undertaking to co-operate in facilitating the verification of artificial intelligence activities globally when they steer well clear of artificial superintelligence, and seeking to preserve access to the benefits of artificial intelligence systems even while avoiding dangers,
Have agreed as follows:
Article I

Each Party to this Treaty shall not develop, deploy, or seek to develop or deploy artificial superintelligence (“ASI”) by any means. Each Party shall prohibit and prevent all such development within its borders and jurisdictions, and, due to the uncertainty as to when further progress would produce ASI, shall not engage in or permit activities that materially advance toward ASI as described in this Treaty. Each Party shall assist, or not impede, reasonable measures by other Parties to dissuade and prevent such development by and within non-Party states and jurisdictions. Each Party shall implement and carry out all other obligations, measures, and verification arrangements set forth in this Treaty.
Where some classes of AI infrastructure and capabilities that stay far from ASI may be deemed acceptable, but only under conditions of international supervision, only Parties to the Treaty may carry out such activities, or own or operate AI chips and manufacturing capabilities that could potentially lead to the development of ASI if unsupervised. Non-Parties are denied such access for the safety of the Parties and of all life on Earth (Article V, Article VI, Article VII).
Parties commit to a dispute resolution process (Article XI) to minimize unnecessary Protective Actions (Article XII).
Precedent
Article I of the NPT, as in many treaties, states the high-level commitment parties are making — in this case, to not share their nuclear weapons or help others obtain them:
Each nuclear-weapon State Party to the Treaty undertakes not to transfer to any recipient whatsoever nuclear weapons or other nuclear explosive devices or control over such weapons or explosive devices directly, or indirectly; and not in any way to assist, encourage, or induce any non-nuclear-weapon State to manufacture or otherwise acquire nuclear weapons or other nuclear explosive devices, or control over such weapons or explosive devices.
The commitment summarized in Article I of our draft agreement is stronger than this because an ASI breakout by anyone, anywhere, cannot be allowed to happen even once.4 It would not be enough to not “assist, encourage, or induce” others to build it. We have therefore included a commitment to “assist, or not impede, reasonable measures” by parties to dissuade and prevent such development anywhere.
The NPT works to contain an existing threat (nuclear weapons), while our draft agreement is working to prevent a threat from existing at all (ASI). Precedent for preventing the development of dangerous new technology can be found in the Protocol on Blinding Laser Weapons, part of the Convention on Certain Conventional Weapons.5 Its Article I reads:
It is prohibited to employ laser weapons specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness to unenhanced vision, that is to the naked eye or to the eye with corrective eyesight devices. The High Contracting Parties shall not transfer such weapons to any State or non-State entity.
That language doesn’t try to keep anyone anywhere from ever testing or accidentally making such a system, however. Our agreement must be strong enough to prevent ASI from being made accidentally. Because it’s not clear where the point-of-no-return might be, our Article I includes a commitment to “not engage in or permit activities that materially advance toward ASI.”
[4] The NPT is generally credited with keeping the number of nuclear states lower than it might have been, but acquisitions by non-signatories (India, Pakistan, Israel) and former signatories (North Korea) have still occurred. Any non-signatory creating even a single ASI is comparable in danger to a mass thermonuclear exchange, and must be treated accordingly.
[5] The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, commonly called the CCW, entered into force in 1983. As of 2024, its 128 parties commit to protect combatants and non-combatants from unnecessary and egregious suffering by restricting various categories of weapons.
Article II

For the purposes of this Treaty:
- Artificial intelligence (AI) means a computational system that performs tasks requiring cognition, planning, learning, or taking actions in physical, social or cyber domains. This includes systems that perform tasks under varying and unpredictable conditions, or that can learn from experience and improve performance.
- Artificial superintelligence (ASI) is operationally defined as any AI with sufficiently superhuman cognitive performance that it could plan and successfully execute the destruction of humanity.
- For the purposes of this Treaty, AI development which is not explicitly authorized by the ISIA (Article III) and is in violation of the limits described in Article IV shall be assumed to have the aim of creating artificial superintelligence.
- Dangerous AI activities are those activities which substantially increase the risk of an artificial superintelligence being created, and are not limited to the final step of developing an ASI but also include precursor steps as laid out in this treaty. The full scope of dangerous AI activities is concretized by Articles IV through IX and may be elaborated and modified through the operation of the Treaty and the activities of the ISIA.
- Floating-point operations (FLOP) is the computational measure used to quantify the scale of training and post‑training, based on the number of mathematical operations performed. FLOP shall be counted as either the operations expressed as half-precision floating-point (FP16) equivalents or the total operations in the format actually used, whichever is higher.
- Training run means any computational process that optimizes an AI’s parameters (specifications of the propagation of information through a neural network, e.g., weights and biases) using gradient-based or other search/learning methods, including pre-training, fine-tuning, reinforcement learning, large-scale hyperparameter searches that update parameters, and iterative self-play or curriculum training.
- Pre-training means the training run by which an AI’s parameters are initially optimized using large-scale datasets to learn generalizable patterns or representations prior to any task- or domain-specific adaptation. It includes supervised, unsupervised, self-supervised, and reinforcement-based optimization when performed before such adaptation.
- Post-training means a training run executed after a model’s pre-training. In addition, any training performed on an AI created before this Treaty entered into force is considered post-training.
- Advanced computer chips are integrated circuits fabricated on processes at least as advanced as the 28 nanometer process node.
- AI chips mean specialized integrated circuits designed primarily for AI computations, including but not limited to training and inference operations for machine learning models [this would need to be defined more precisely in an Annex]. This includes GPUs, TPUs, NPUs, and other AI accelerators. This may also include hardware that was not originally designed for AI uses but can be effectively repurposed. AI chips are a subset of advanced computer chips.
- AI hardware means all computer hardware for training and running AIs. This includes AI chips, as well as networking equipment, power supplies, and cooling equipment.
- AI chip manufacturing equipment means equipment used to fabricate, test, assemble, or package AI chips, including but not limited to lithography, deposition, etch, metrology, test, and advanced-packaging equipment [a more complete list would need to be defined in an Annex].
- H100-equivalent means the unit of computing capacity (FLOP per second) equal to one NVIDIA H100 SXM accelerator, 990 TFLOP/s in FP16, or a Total Processing Performance (TPP) of 15,840, where TPP is calculated as TPP = 2 × non-sparse MacTOPS × (bit length of the multiply input).
- Covered chip cluster (CCC) means any set of AI chips or networked cluster with aggregate effective computing capacity greater than 16 H100-equivalents. A networked cluster refers to chips that either are physically co-located, have inter-node aggregate bandwidth — defined as the sum of bandwidth between distinct hosts/chassis — greater than 25 Gbit/s, or are networked to perform workloads together. The aggregate effective computing capacity of 16 H100 chips is 15,840 TFLOP/s, or 253,440 TPP, and is based on the sum of per-chip TPP. Examples of CCCs would include: the GB200 NVL72 server, three eight-way H100 HGX servers residing in the same building, CloudMatrix 384, a pod with 32 TPUv6e chips, every supercomputer.
- National Technical Means (NTM) includes satellite, aerial, cyber, signals, imagery (including thermal), and other remote-sensing capabilities employed by Parties for verification consistent with this Treaty.
- Chip-use verification means methods that provide insight into what activities are being run on particular computer chips in order to differentiate acceptable and prohibited activities.
- Methods used to create frontier models refers to the broad set of methods used in AI development. It includes but is not limited to AI architectures, optimizers, tokenizer methods, data curation, data generation, parallelism strategies, training algorithms (e.g., RL algorithms) and other training methods. This includes post-training but does not include methods that do not change the parameters of a trained model, such as prompting. New methods may be created in the future.
Notes
On Definitions of AI
The definition of AI used here (adapted from Senator Chuck Grassley’s AI Whistleblower Protection Act) is possibly too broad. Further refinement would help make it clear that the definition should not apply to obviously-safe computer systems such as spellcheck or image recognition systems.
If AI technology were never going to change from its modern form, in which development for a frontier Large Language Model requires highly specialized hardware and is easily distinguishable from other activities, it would be easier to craft a narrowly tailored definition. But AI is a moving target, and the definition of AI that is used must cover more than just LLMs. A treaty banning solely machine learning might encourage researchers to develop new AI paradigms that don’t technically meet the definitions, so that they can race ahead toward superintelligence. If a novel paradigm did emerge, especially one which is not as AI-chip-intensive as deep learning, then the treaty would likely need to be updated, and enforcement might become substantially more difficult.
On Definitions of Computing Capacity
We use H100-equivalent as the primary metric for computing capacity. In Article V, this is used to set the size of the largest allowed unmonitored chip cluster (16 H100-equivalents).6 Article IV defines thresholds in terms of the total operations used to train an AI, and so, by setting limits on unmonitored operations per second, this effectively would make it infeasibly slow to conduct an illegally large training run on unmonitored hardware.
We use H100-equivalents because the most relevant metric in various chip designs is how quickly they perform operations, and H100s serve as a fine and precedented measuring stick. Other chip metrics are important in AI training (such as high bandwidth memory), but overall, these matter less than the number of operations per second.
Our proposed definition of a covered chip cluster (CCC) is an attempt to satisfy several constraints: The bound should be high enough that ordinary people will not run afoul of the rules (i.e., 25 Gbit/s bandwidth between chassis is faster than non-datacenter internet connections; it is very rare and expensive for an individual to own more than 16 H100-equivalents). The bound must also be set low enough to prevent dangerous AI activities and to make subversion difficult (i.e., to make it difficult to do training distributed across multiple sub-CCC sets of chips). We discuss the tradeoffs more in the notes after Article V.
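To make the computing-capacity arithmetic concrete, here is a minimal illustrative sketch (in Python) of the TPP formula, the H100-equivalent unit, and the CCC test from Article II. The 495 non-sparse MacTOPS figure is implied by the 990 TFLOP/s FP16 rating in the definitions (each multiply-accumulate counts as two operations); the rest simply restates the stated thresholds, and is not treaty text.

```python
# Article II arithmetic: TPP per chip, H100-equivalents, and the CCC test.
# Illustrative only; H100 figures follow the Article II definition.

def tpp(non_sparse_mac_tops: float, bit_length: int) -> float:
    """Total Processing Performance: TPP = 2 x non-sparse MacTOPS x bit length."""
    return 2 * non_sparse_mac_tops * bit_length

H100_TPP = tpp(non_sparse_mac_tops=495, bit_length=16)  # 15,840 = one H100-equivalent
CCC_THRESHOLD_TPP = 16 * H100_TPP                        # 253,440

def is_covered_chip_cluster(per_chip_tpp: list[float]) -> bool:
    """True if aggregate capacity exceeds 16 H100-equivalents (sum of per-chip TPP)."""
    return sum(per_chip_tpp) > CCC_THRESHOLD_TPP

# An eight-way H100 HGX server alone is below the line; three such servers
# co-located in one building form a CCC (cf. the examples in the definition).
print(is_covered_chip_cluster([H100_TPP] * 8))   # False
print(is_covered_chip_cluster([H100_TPP] * 24))  # True
```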
AI chips are a subset of advanced computer chips, and there isn’t a bright line that distinguishes AI chips from non-AI chips. Instead of defining and relying on a distinction here, we use the overall computing capacity (in operations per second) of a cluster, as measured in H100-equivalents. If the chips could be configured for training or running AIs and are above the defined threshold, then the treaty requires that they be monitored.
Note that National Technical Means (NTM) may be deprecated as the official term by some governments. We use it in this treaty in the style of past arms control agreements for ease of comparison.
[6] This is twice the limit mentioned as a clearly-safe limit in the book. It is likely still safe for some time yet, and evaluating where the limits should be (and changing them over time) is the subject of Article III, Article V, and Article XIII.
Article III

- Treaty Parties hereby establish the International Superintelligence Agency (ISIA), to implement this Treaty and its provisions, including those for international verification of compliance with it, and to provide a forum for consultation and cooperation among Parties.
- There are hereby established as the organs of the ISIA: the Conference of the Parties, the Executive Council, and the Technical Secretariat.
- Conference of the Parties
- The Conference of the Parties comprises all Treaty Parties.
- The Conference of the Parties shall: Determine overall policy; adopt and oversee the budget; elect members of the Executive Council; consider compliance matters reported by the Executive Council; and adopt and revise Annexes upon Executive Council recommendation.
- It shall convene in regular session no less than annually, or at a more frequent rate as may be set by the Conference, in addition to special sessions as required. Each Party has one vote. Quorum is a majority of Parties.
- Executive Council
- The Executive Council shall have 15 members: (i) 5 designated seats for permanent members of the United Nations Security Council, and (ii) 10 elected seats distributed by equitable geographic representation. Details of this are elaborated in Annex A.
- Elected members serve two-year terms. Half of the seats are elected each year.
- The Executive Council shall: approve challenge inspections; recommend budget and policy to the Conference; appoint the Director-General; provide oversight of the Technical Secretariat and approve its recommendations.
- Decision making processes are as follows:
- The Executive Council elects the Chair and Vice Chair of the Executive Council.
- The Chair or Vice Chair can act as the presiding officer.
- Voting proceeds by One Member, One Vote.
- Votes to approve a challenge inspection under Article X require a majority.
- Votes to recall or appoint a Director-General require two-thirds majority.
- All other decisions require a majority.
- Quorum requires two-thirds of the Executive Council.
- Technical Secretariat and Director-General
- The Director-General of the Technical Secretariat shall be its head and chief administrative officer.
- The Director-General is appointed by the Executive Council for a four-year term, renewable once. The Executive Council can recall the Director-General.
- The Technical Secretariat shall at its outset include technical divisions for Chip Tracking and Manufacturing Safeguards, Chip Use Verification Safeguards, Research Controls, Information Consolidation, Technical Reviews, Administration and Finance, and Legal and Compliance. The Director-General can create and disband technical divisions.
- The Technical Secretariat, by means of the Director-General, proposes changes to technical definitions and safeguard protocols, as necessary to implement Article IV, Article V, Article VI, Article VII, Article VIII, Article IX, and Article X of this Treaty.
- Time-sensitive changes to FLOP thresholds (Article IV), the size of covered chip clusters (Article V), and the boundaries of restricted research (Article VIII) may be implemented by the Director-General immediately where inaction would pose a security risk. Such changes remain in effect for thirty days; to remain in effect beyond that period, they require approval from the Executive Council.
- The Executive Council shall make decisions on matters of substance as far as possible by consensus; the Director-General should make efforts to achieve consensus. If consensus is not possible at the end of 24 hours, a vote will be taken, and the Executive Council shall accept the changes if a majority of members present and voting vote to accept the changes, and shall reject them otherwise.
- The ISIA’s regular budget is funded by assessed contributions of Parties, using a scale derived from the UN assessment scale, subject to a floor and ceiling set by the Executive Council. Member states also have the option of making voluntary contributions for AI safety research related to alignment and interpretability, and for capacity-building activities of member states, including beneficial uses of safe AI, test bed development, good practices, information sharing, and the facilitation of cooperation and joint activities, loosely modeled on the IAEA network of Nuclear Security Support Centers.
Precedent
The three-body governing structure of our treaty’s International Superintelligence Agency (ISIA) is modeled after that of the OPCW,7 the body tasked with implementing the Chemical Weapons Convention (CWC). The names of these bodies are likewise borrowed from the OPCW. (An actual treaty may prefer alternate structures and names that serve the same functions; we provide precedent for some less centralized arrangements further below.)
The Executive Council established by our Paragraph 4, subparagraphs (a) and (d), emulates the IAEA’s Board of Governors. In designating five of fifteen Council seats for permanent members of the United Nations Security Council, we reflect that the five original Nuclear Weapons States of the NPT also happened to be the five permanent members of the UN Security Council; without their participation as central partners, the NPT would likely have foundered from the start.
Our provision for “10 elected seats distributed by equitable geographic representation” also echoes the IAEA Statute, which stipulates that the outgoing Board of Governors designate members including “the member most advanced in the technology of atomic energy including the production of source materials in each of” eight specified regions.
Taiwan complicates our treaty concept, given its delicate geopolitical situation and its status as the producer of most of the world’s AI chips. Fortunately, precedent provides guidance: Though Taiwan is not a party to the NPT, it has stated on multiple occasions that it considers itself bound by the principles of the NPT. Taiwan allows the IAEA to conduct inspections and apply safeguards to its nuclear facilities through a trilateral agreement with the United States and the IAEA. A similar arrangement could be worked out with regards to our treaty.
The decision-making processes of our treaty’s Executive Council have been modeled after the Board of Governors Rules and Procedures used by the International Atomic Energy Agency (IAEA), the main organization for the international governance of nuclear technology.8 Voting procedures likewise follow the Statute of the IAEA.
Precedent for less centralized (but still potentially effective) treaty implementation mechanisms can be found in other nuclear arms treaties. The Intermediate-Range Nuclear Forces (INF) Treaty and the Strategic Arms Reduction Treaties (START I, START II, and New START) place responsibility for implementation and verification on the individual parties; each party commits to procedures that allow the other to obtain reasonable assurance of compliance.
The “challenge inspections” in Paragraph 4(c) are modeled after the mechanism in Part X of the CWC; we will elaborate on this precedent with Article X.
Notes
As in other international bodies, the ISIA would be staffed by diplomats and technical experts from signatory countries. The purpose of the language above is to ensure that the ISIA is given authority to implement what the treaty requires and to update the treaty over time.
Our treaty prioritizes preventing the creation of superintelligence for as long as necessary. The ISIA centralizes the implementation of several key treaty functions toward this end, including maintaining the precise limits of permitted AI research, development, and deployment; being the primary verifier of treaty compliance; and consolidating confidential intelligence information from signatories. Critically, the cooperative operation of the ISIA builds necessary trust between signatories over time.
That said, this sort of approach comes with tradeoffs. A first tradeoff is that more centralization requires more trust between parties. Prospective signatories might not feel that it is politically viable to assign this level of authority to an international organization, or might not trust the organization to operate sufficiently independently of the controlling influence of its most powerful member(s).
An alternative arrangement could centralize only those few functions which must be centralized (such as maintaining and clarifying limits on AI research, development, and deployment), while allowing individual signatories to come to other arrangements for verifying and enforcing compliance.
A treaty like this would also face tradeoffs about how many parties to include. The text above would create a multilateral organization in which all states are invited to sign the treaty and participate in its execution. An alternative might be to start with only two major actors at the frontier of AI development, such as the U.S. and China. A narrow bilateral verification regime could meet each party’s needs while sacrificing the smallest amount of autonomy and transparency. Parties to a small treaty like that could then adopt a separate subsequent goal of bringing other states on board, until their security needs were met.
As the motive of this draft is to demonstrate what international controls could look like if leaders around the world recognized the pressing dangers, we illustrate a structure that would work in a scenario where a wide variety of parties recognize the common interest they have in joining a treaty such as this one.
As such, the structure of the proposed ISIA Executive Council includes all permanent UN Security Council members, and is modeled on the composition of the IAEA.
Given the status of TSMC as the preeminent AI chip manufacturer, any AI treaty must consider how to address Taiwan. As discussed in the Precedent section, we would encourage Taiwan to adhere to our treaty much as it adheres to the NPT without having signed it, through formal arrangements and/or declarations stating that Taiwan considers itself to be bound by the principles of this treaty and is open to on-site routine and/or challenge inspections.
This article describes a structure that puts significant power in the hands of the Technical Secretariat while giving oversight power to the Executive Council. One benefit to our draft structure is that it enables the technical body to carry out rapid decision-making and gives it a broad mandate to achieve its mission, albeit with any changes requiring approval from a simple majority of Executive Council members within 30 days in order to stay in effect.
While world leaders may be hesitant to delegate so much power to technical experts, technical experts may not trust geopolitical actors to resolve the thorny technical questions that would come up in implementing this treaty, and to be sufficiently adaptive to a changing technical landscape. But many other arrangements could also work.
One alternative approach would be to disaggregate further the responsibilities, definitions, and types of safeguards implemented by the ISIA (e.g., training FLOP thresholds, definition of CCC, definition of AI chip, whether a particular facility should be counted as a chip production facility, chip use verification protocols, defining Restricted Research, etc.) and establish different procedures for changing definitions according to how impactful the definition is, and according to how subject it might be to technological changes that demand rapid response.
[7] The Organisation for the Prohibition of Chemical Weapons (OPCW) conducts inspections, monitors the destruction of chemical weapons stockpiles, and assists in preparation for chemical weapons attacks, among various other functions critical to the Chemical Weapons Convention (CWC). The CWC entered force in 1997; its 193 parties work to effect and maintain a prohibition on the use, development, and proliferation of chemical weapons and their precursors, with some narrow exemptions.
[8] The IAEA was established in 1957, more than a decade before the NPT. The NPT was able to designate this pre-existing body to carry out some functions. In the case of artificial intelligence, no such international body exists yet, so our treaty must commit parties to creating one.
Article IV

- Each Party agrees to ban and prohibit AI training above the following thresholds: Any training run exceeding 1e24 FLOP or any post-training run exceeding 1e23 FLOP. Each Party agrees to not conduct training runs above these thresholds, and to not permit any entity within its jurisdiction to conduct training runs above these thresholds.
- The Technical Secretariat may modify these thresholds, in accordance with the process described in Article III.
- Each Party shall report any training run between 1e22 and 1e24 FLOP to the ISIA, prior to initiation. This applies for training runs conducted by the Party or any entity within its jurisdiction.
- This report must include, but is not limited to, all training code, and an estimate of the total FLOP to be used. The Party must provide ISIA staff supervised access to all data, with access logging appropriate to the data’s sensitivity, and protections against duplication or unauthorized disclosure. Failure to provide ISIA staff sufficient access to data is grounds for denying the training run, at the ISIA’s discretion. The ISIA may request any additional documentation relating to the training run. The ISIA will also pre-approve a set of small modifications that could be made to the training procedure during training. Any such changes will be reported to the ISIA when and if they are made.
- Nonresponse by the ISIA after 30 days constitutes approval; however, the ISIA may extend this time period by giving notice that it requires additional time to review. These extensions are not limited, but Parties may appeal excessive delays to the Director-General or the Executive Council.
- The ISIA may monitor such training runs, and the Party will provide checkpoints of the model to the ISIA upon request from the ISIA, including the final trained model [initial details for such monitoring would need to be described in an Annex].
- In the event that monitoring indicates worrisome AI capabilities or behaviors, the ISIA can issue an order to pause a training run or class of training runs until they deem it safe for the training run to proceed.
- The ISIA will maintain robust security practices. The ISIA will not share information about declared training runs unless it determines that the declared training violates the Treaty, in which case it will provide all Treaty Parties with sufficient information to determine whether a violation occurred.
- In the event that a Party discovers a training run above the designated thresholds, the Party must report this training run to the ISIA, and halt this training run (if it is ongoing). Such a training run may only resume with approval from the ISIA.
- Each Party, and entities within its jurisdiction, may conduct training runs of less than 1e22 FLOP without oversight or approval from the ISIA.
- The ISIA may authorize, upon a two-thirds majority vote of the Executive Council, specific carveouts for activities such as safety evaluations, self‑driving vehicles, medical technology, and other activities in fashions which are deemed safe by the Director-General. These carveouts may allow for training runs larger than 1e24 FLOP with ISIA oversight, or a presumption of approval from the ISIA for training runs between 1e22 and 1e24 FLOP.
Precedent
While the numerical values for thresholds specified in our agreement can and should be revisited when moving beyond the early draft stage, quantitative caps are common in international agreements, preempting disputes that would otherwise hinge on differing interpretations of qualitative language.
The 1974 Threshold Test Ban Treaty established a cap of 150 kilotons on underground nuclear tests performed by the U.S. and USSR.9 The purpose and effect of this treaty was to at least somewhat hinder further development of larger and more destructive “city buster” warheads. A relevant parallel to AI development is that, as of mid 2025, more general and capable — and therefore more hazardous — models take correspondingly larger training runs to create; our treaty specifies caps intended to prevent such AIs from being intentionally developed, but also to reap the essential (if non-parallel) benefit of reducing the risk of an unforeseen capabilities threshold being accidentally and irretrievably crossed.
The training limit we have suggested as a starting point is low enough that some AI models trained today would exceed it; we see this as prudent in expectation of advances that make newer models more capable per unit of training (discussed with Article VIII). Arms reduction agreements provide precedent for thresholds set below the current maximum level. The 1922 Washington (Naval) Treaty set warship displacement limits that required the U.S. and other naval powers to scrap dozens of capital ships.10 In Article II of the 1991 START treaty,11 the U.S. and the Soviet Union (and later, the Russian Federation) agreed to limits on the sizes of their nuclear stockpiles and delivery systems that required them to phase out more than four thousand warheads each.
Precedent for quantitative thresholds that limit breakout potential will be discussed with Article V.
Notes
In recent years, advances in AI have followed first and foremost from an increase in computational resources poured into AI training. Restricting these resources, and restricting algorithmic progress research (described in Article VIII), would dramatically reduce the risk that superintelligence could be created in the near term.
The restrictions in our draft are based on the number of computational operations used, as this is relatively easy to define and measure. The performance of existing state-of-the-art AI informs amounts of computing hardware that appear safe, at least when using AI algorithms from mid-2025.
We would prefer to limit training based on the capabilities of the trained system. But no one has the technical ability to confidently predict what a new AI will or won’t be able to do before it is trained. Computational resources are an available proxy.
The hard prohibition at 1e24 FLOP for training is below the amount used to train models near the state of the art as of August 2025 (such as DeepSeek-V3, trained with 3e24 FLOP). We suggest this threshold because it is below the level at which we expect AIs to be dangerous (given current algorithms), and because it provides some breathing room and a buffer against algorithmic progress.
The prohibition of post-training over 1e23 is meant to apply to the post-training of AIs created prior to the treaty entering into force. Many of these AIs will have been trained using more than 1e24 FLOP; as of mid-2025, there are between 50 and 100 such models. Given that the weights of many such AIs will have been openly released, it is not feasible to prevent people from using them, but it is feasible to prevent large modifications to them via post-training.
A 1e22 FLOP training run on 16 H100s would take around one week. This is computing-intensive enough that hobbyists would not accidentally break the threshold by training small and allowed models. AIs trained at the 1e22 scale (with current algorithms) have so far appeared to be innocuous, though that situation would need to be monitored, as it could change as a result of unprevented AI research progress. Requiring reporting but not approval for training runs in the 1e22 to 1e24 FLOP band seems to us to strike a balance, allowing people to achieve some of the benefits from training AIs in a way that presently looks safe, while retaining the ability to prevent the creation of larger, potentially dangerous AIs.
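As a rough check on these numbers, here is a back-of-the-envelope sketch of why the 1e22 and 1e24 FLOP thresholds translate to roughly one week and roughly two years on a 16-H100 cluster. The FP8 peak rate and utilization are our assumptions (matching the "FP8 precision, 50 percent utilization" parameters discussed under Article V), not figures from the treaty text.

```python
# Wall-clock time to reach the Article IV thresholds on the largest unmonitored cluster.
# Assumptions (ours): ~1.979e15 FLOP/s dense FP8 peak per H100, 50% sustained utilization.

H100_FP8_PEAK = 1.979e15   # FLOP per second per chip (assumed peak)
CHIPS = 16                 # largest cluster that is not a CCC under Article V
UTILIZATION = 0.5

rate = CHIPS * H100_FP8_PEAK * UTILIZATION   # ~1.6e16 FLOP/s sustained

for label, flop in [("reporting threshold", 1e22), ("prohibition threshold", 1e24)]:
    days = flop / rate / 86_400
    print(f"{label}: {flop:.0e} FLOP -> {days:.1f} days")

# reporting threshold: 1e+22 FLOP -> ~7.3 days (the "around one week" above)
# prohibition threshold: 1e+24 FLOP -> ~731 days (roughly two years)
```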
The ISIA’s monitoring of reported training runs would also allow it to stay (somewhat) up to date with any algorithmic progress that happens in spite of the bans, and Article XIII provides for ISIA evaluations of models trained in this band. This would help keep the ISIA apprised of trends in AI development and adjust the FLOP thresholds as merited.
ISIA staff are permitted access to training data used in monitored training runs, subject to a variety of restrictions. The intent of the restrictions is to guarantee that logging and other oversight methods are used to prevent unauthorized disclosure of sensitive contents in the training data, including but not limited to personal identifying information, personal health information, classified data, trade secrets, and banking data subject to secrecy laws.
[9] The U.S. and USSR had already agreed to stop other kinds of nuclear weapons tests in 1963 with the Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water, commonly called the Limited Test Ban Treaty (LTBT) or Test Ban Treaty.
[10] The Treaty Between the British Empire, France, Italy, Japan, and the United States of America for the Limitation of Naval Armament (the Washington Naval Treaty) lists ships to be scrapped by name in a table (Section II).
[11] The Strategic Arms Reduction Treaty was signed in 1991 and entered into force in 1994. Signatories were each barred from deploying more than 6,000 nuclear warheads on a total of 1,600 deployed intercontinental ballistic missiles, submarine-launched ballistic missiles, and heavy bombers.
Article V

- Each Party shall ensure that within its jurisdiction, all covered chip clusters (CCCs), as defined in Article II (i.e., a set of chips with capacity greater than 16 H100-equivalents) [note that 16 H100s collectively cost around $500,000 in 2025 and these are rarely owned by individuals], are located in facilities declared to the ISIA, and that these AI chips are subject to monitoring by the ISIA.
- Parties shall aim to avoid co-locating AI chips with non-ancillary non-AI computer hardware in these declared facilities.
- These facilities shall be accessible to physical inspection. This may include, for instance, that verification teams can reach any CCC from at least one airport with scheduled international service within 12 hours.
- Parties shall not house AI chips in so many different locations that it is infeasible for the ISIA to monitor all locations. If requested by the ISIA, Parties must further consolidate their AI chips into fewer monitored facilities.
- Unmonitored AI chips that are not part of a CCC (i.e., that have capacity less than 16 H100‑equivalents) may remain outside of ISIA‑declared facilities, provided that such stockpiles are not aggregated or networked to meet the CCC definition, are not rotated among sites to defeat monitoring, and are not used for prohibited training. Parties will make reasonable efforts to monitor the sale and aggregation of AI chips to ensure that any newly created CCCs are detected and monitored.
- Within 120 days of the Treaty entering into force, each Party shall locate, inventory, and consolidate all CCCs into facilities declared to the ISIA. Parties shall not disaggregate, conceal, or otherwise reassign chips to evade this requirement or to cause a set of chips which would have been classified as a CCC to no longer be classified as a CCC.
- The ISIA shall monitor the domestic consolidation process, including through on‑site inspections, document and inventory verification, accompaniment of domestic authorities during transfers and inspection, and information sharing with Parties under Article X. The ISIA may require chain‑of‑custody records for transfers and may conduct challenge inspections as described in Article X. Parties shall provide timely access to relevant facilities, transport hubs, and records. Whistleblower protections and incentives under Article X apply to the consolidation process, and the ISIA shall maintain protected reporting channels.
- Within 120 days of the Treaty entering into force, Parties shall submit to the ISIA a register of their CCCs. The register must include the location, type, quantity, serial or other unique identifiers where available, and associated interconnects of all AI chips in the CCCs. Each Party shall provide the ISIA with an updated and accurate register no later than every 90 days.
- Parties shall provide the ISIA with advance notice of any planned transfer of AI chips, whether domestic or international, no less than 14 days before the planned transfer. No transfer shall proceed unless the ISIA is afforded the opportunity to observe the transfer. For international transfers, both the sending and receiving Parties shall coordinate with the ISIA on routing, custody, and receipt. Emergency transfers undertaken for safety or security reasons shall be notified as soon as practicable, with post‑facto verification.
- Broken, defective, surplus, or otherwise decommissioned AI chips shall continue to be treated as functional chips, until the ISIA certifies they are destroyed. Parties shall not destroy AI chips without ISIA oversight. Destruction or rendering permanently inoperable shall be conducted under ISIA oversight using ISIA‑approved methods and recorded in a destruction certificate [the details will need to be explained in an Annex]. Salvage or resale of components from such hardware is prohibited unless expressly authorized by the ISIA.
Precedent
Declaring assets of concern is often a first step in restrictive treaties. Parties to the 1922 Washington Naval Treaty provided inventories of capital ships and their tonnage, and committed to notify each other when replacing these vessels. The 1991 START I treaty included a classified Agreement on Exchange of Coordinates and Site Diagrams (in Article VIII), outlining the sharing of data on the location of all declared strategic arms. Article V, Paragraph 3 of our draft agreement requires parties to locate, inventory, and consolidate covered chip clusters within 120 days.
Consolidating assets to facilitate verification of compliance is often another step in restrictive treaties. Article III of START I forbade ICBMs from being co-located with space-launch facilities, easing monitoring. Paragraph 1.a of our Article V commits parties “to avoid co-locating AI chips with non-ancillary non-AI hardware” for the same reason.
History demonstrates that consolidation also limits breakout potential, by making it easier to strike offending asset concentrations in the event of a crisis of confidence. In the 2016 JCPOA12 (also known as the Iran nuclear deal), Iran agreed to keep its operational uranium enrichment centrifuges at just two designated sites (Natanz and Fordow), both of which were struck in June 2025 operations by Israel and the United States. This motivates a note accompanying our Article V in which we discuss whether parties should locate their covered chip clusters (CCCs) away from population centers.
Monitoring and inspections are common components of prior treaties in limited-trust contexts; we have consequently drafted provisions for this where appropriate, in Paragraphs 1, 4, 6, and 7 of this article. Some specific precedent for this:
- Verification of START I included hundreds of on-site inspections in the first few years.
- The CWC requires the declaration and inspection of all Chemical Weapons Production Facilities — there have been 97 declared — and the majority of these have been verifiably destroyed. (In requiring the declaration of existing facilities, these agreements also prohibit certain activities from occurring outside declared facilities, analogous to this article’s prohibition on unmonitored CCCs.)
- Over 700 declared nuclear facilities around the world are monitored by the IAEA as part of the NPT.
Similar to Paragraph 3 of this article, numerous arms control agreements require that parties not interfere with each other’s NTM in the context of treaty verification. Examples include SALT I,13 ABM,14 INF,15 and START I.
Precedent for parties restricting their domestic private sector industries to meet treaty commitments (as would need to be the case with AI) can be seen in U.S. legislation following its ratification of the CWC: The Chemical Weapons Convention Implementation Act of 1998 and Department of Commerce regulations ensured U.S. entities were in compliance. Similarly, the U.S. Congress amended the Clean Air Act following ratification of the Montreal Protocol to ban ozone-depleting substances.
Approaches to implementing chip centralization in the U.S. might run through the Fifth Amendment’s Takings Clause, under which the government can use its power of eminent domain to seize private property for public purposes, so long as it pays appropriate compensation.
Notes
Article V aims to centralize, into monitored facilities, all AI chip clusters (i.e., sets of interconnected chips above a small size) and the vast majority of AI chips. Monitoring itself is covered in Article VI, and prevention of proliferation is covered in Article VII.
Our draft specifies international verification of this centralization process so that all parties can confirm that all other parties have also centralized their chips. Verification of this type is likely to be straightforward for large AI datacenters, as intelligence agencies are likely to already know where these are. For smaller datacenters, the ISIA can provide oversight of domestic centralization processes as a confidence-building measure.
Chip centralization is an important first step to restricting the development of artificial superintelligence. Centralizing chips in declared facilities enables further monitoring for how these chips are being used, or verification that they are powered off (if they are not safe to use). Centralization would also make it easier for parties to destroy these chips, as might become necessary under Article XII, if a Party persists in violating the treaty.
We avoid recommending, in the treaty text, that CCCs be located away from population centers, despite their capacity for danger. We avoid this restriction both because (in the case of treaty violations) datacenters can likely be shut down without much collateral damage, and because modern datacenters are already regularly located near cities. That said, alternative treaties might prefer to prescribe treating AI datacenters as military facilities, given their potential to pose grave security threats.
Verifying Centralization
Most parties would not and should not blindly trust other parties to follow the rules, and would need some way to verify compliance. The centralization of AI chips into declared facilities makes it possible for ISIA inspections and monitoring to confirm the presence and activity of the chips.
Centralization might not be strictly necessary if there are other ways to monitor AI chips. Unfortunately, we think this is currently the only feasible option short of physically destroying all existing stockpiles of AI chips, given the limited security mechanisms in current chips today.
In the future, hardware-enabled governance mechanisms could be developed to enable remote governance of AI chips, so that chips don’t need to be centralized to declared locations. Aarne et al. (2024) provide estimates for the implementation time of some of these on-chip governance mechanisms. Their estimates cover the timeline to develop mechanisms that are robust against different adversaries. For concision, we will use their estimates for security in a covertly adversarial context where competent state actors may try to break the governance mechanisms but would face major consequences if caught. They estimate a development time of two to five years for ideal solutions, with less secure but potentially workable options available in just months.
Even though that report is over a year old, we are not aware of significant progress toward these mechanisms, and we think two to five additional years is the most relevant estimate from Aarne et al. Which is to say that, possibly, after a few years of research and development into chip security measures, it would be possible to confidently monitor chips without centralizing them, after some further lag time for new securely-monitorable chips to be produced, and/or for old chips to be retrofitted. Aarne et al. estimate that the first of these options might take four years, but we are optimistic that retrofitting could be done in one to two years if chips are already being tracked.
While centralization as discussed in Article V entails the physical concentration of covered chip clusters, it does not require that governments take ownership of chips. For large datacenters, the treaty permits the datacenter and its chips to remain where they are, under private ownership, so long as they receive monitoring and oversight from the domestic government and the ISIA. This monitoring would ensure that datacenters are engaged only in non-AI activities or permitted AI activities like running old models. For smaller chip aggregations, it may be necessary to physically move them into a larger datacenter, with their owner continuing to access the chips remotely; we do not consider this an overly onerous restriction given that it is already common in cloud computing arrangements.
Feasibility
It looks feasible to verifiably consolidate the majority of AI chips. The very largest AI datacenters, such as those with more than 100,000 H100-equivalents, are hard to hide. They are detectable from their physical footprint and power draw, and many of them are publicly reported on. In fact, it’s probably possible for intelligence services to track and locate datacenters as small as around 10,000 H100-equivalents. Locating smaller datacenters would involve domestic authorities using various powers in cooperation with ISIA inspectors.
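As a rough illustration of why such facilities are hard to hide, consider their power draw. The per-chip wattage and facility overhead (PUE) below are our assumptions, typical published figures rather than anything from the treaty text, and the estimate ignores CPUs, networking, and storage.

```python
# Back-of-the-envelope facility power for a given number of H100-equivalents.
# Assumed: ~700 W per H100-class accelerator, facility overhead (PUE) of ~1.3.

WATTS_PER_H100 = 700
PUE = 1.3

def facility_power_mw(h100_equivalents: int) -> float:
    return h100_equivalents * WATTS_PER_H100 * PUE / 1e6

print(facility_power_mw(100_000))  # ~91 MW: a grid connection that is hard to conceal
print(facility_power_mw(10_000))   # ~9 MW: still a substantial, traceable industrial load
```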
States have a range of tools available for tracking down chips owned domestically. They can legally require reporting of all chip clusters larger than 16 H100s; they can use sales records and other financial information from chip distributors; they can interview technicians with expertise in datacenter construction; etc. If they suspect smuggling, obfuscation, or concealment of chips, they can employ law enforcement to investigate further. This process of domestic centralization would be overseen by ISIA inspectors to ensure thorough compliance.
Locating large datacenters could happen quickly, in days or weeks. Actually centralizing chips could take longer, as it might be necessary to build further datacenter capacity in the facilities that would become CCCs.
One significant challenge is providing justified confidence that one Party is not doing a secret AI project with non-declared AI chips. ISIA verification of domestic chip centralization provides some assurance, but it may not be sufficient if some country could purposefully undermine domestic centralization efforts. For further assurances against illegal AI projects, see the intelligence gathering and challenge inspections discussed in Article X.
On the Definition of CCCs
Our definition of CCC draws a line at 16 H100-equivalents. This threshold aims to meet a few criteria:
- Monitoring chip clusters larger than 16 H100s works well with the training FLOP thresholds in Article IV. Training with 16 H100s (FP8 precision, 50 percent utilization — realistic but optimistic parameters) would take 7.3 days to get to 1e22 FLOP, and 2 years to get to 1e24 FLOP. Therefore, it would be feasible for people to use undeclared chips to reach the bottom threshold, but it would be somewhat impractical for them to get to the prohibited training threshold.
- This threshold is plausibly sufficient for preventing the advancement of AI capabilities, when combined with bans on AI research in Article VIII. Article IV lays out training restrictions where large-scale training is prohibited and medium-scale training is allowed but subject to oversight. It is probably acceptable — that is, it probably poses minimal risk — to allow small-scale training, such as the amount that can be done on 16 H100s in a realistic time frame.
- This threshold has limited impact on hobbyists and consumers. Very few individuals own more than 16 H100s. In mid-2025, a set of 16 H100 chips costs around $500,000. This isn’t a threshold one would accidentally cross by having a few old gaming consoles lying around.
- Consolidating AI chips gets harder as the allowable quantity shrinks. Finding datacenters with 100,000 chips is easy; finding those with 10,000 is likely also relatively easy; with 1,000 it’s unclear; and below 100, it may start to become quite difficult. The 16 H100 threshold is likely to be challenging, and is picked partially due to the increasing infeasibility of still lower thresholds.
- Despite potential enforcement challenges, it is possible that this definition would need to be revised and the threshold brought lower (e.g., 8 H100-equivalents). In our treaty, the ISIA would be tasked with assessing this definition and changing it as needed.
Other Considerations
This article calls for parties to avoid co-locating AI chips with non-ancillary non-AI chips. This is suggested because co-location might make verification of chip use (Article VII) more difficult. However, this is not strictly necessary, and it may not be desired. AI chips are currently often colocated with non-AI chips, and the inconvenience of changing this could outweigh the inconvenience of monitoring and verifying the AI chips in a datacenter that mixes AI chips with non-AI chips.
There is some risk that private citizens could construct an unmonitored CCC from “loose” H100-equivalent chips. To combat this, the treaty holds that parties shall make “reasonable effort” to monitor chip sales (in excess of 1 H100-equivalent) and detect the formation of new CCCs. More stringent measures could be taken, such as requiring all such chips and sales to be formally registered and tracked. Our draft does not go to that length, both because we do not expect all that many “loose” H100-equivalent chips to be unaccounted-for after all chips in CCCs are cataloged, and because other mechanisms (such as the whistleblower protections in Article X) help with the detection of newly-formed CCCs.
Rather than immediately requiring small clusters (e.g., 100 H100s) to be centralized, the treaty could instead implement a staged approach. For example: In the first 10 days all datacenters with more than 100,000 H100-equivalent chips must be centralized and declared, then in the next 30 days all datacenters with more than 10,000 H100-equivalent chips must be centralized and declared, etc. A tiered approach might better track international verification capacity as intelligence services ramp up their detection efforts.
One downside of a staged approach is that it might provide more opportunities for states to hide chips and establish secret datacenters. This approach nevertheless parallels how some previous international agreements have worked within the constraints of their verification and enforcement options. For instance, the 1963 Partial Test Ban Treaty did not ban underground testing of nuclear weapons, due to the difficulty in detecting such tests.
[12] The Joint Comprehensive Plan of Action was finalized in 2015 between the five permanent members of the United Nations Security Council, Germany, the European Union, and Iran. When it took effect in January of 2016, Iran gained sanctions relief and other provisions in exchange for accepting restrictions on its nuclear program.
[13] The Strategic Arms Limitation Talks (SALT) commenced in 1969 between the U.S. and USSR, producing the SALT I treaty, signed in 1972, which froze the number of strategic ballistic missile launchers and regulated the addition of new submarine-launched ballistic missiles, among other restrictions.
[14] The 1972 Anti-Ballistic Missile Treaty (ABM) grew out of the original SALT talks, and limited each party to two anti-ballistic missile complexes (later, just one) with restrictions on their armament and tracking capabilities.
[15] With the 1987 Intermediate-Range Nuclear Forces Treaty (INF), the U.S. and USSR agreed to ban most nuclear delivery systems with ranges in between those of battlefield and intercontinental systems. (Given the short warning time strikes from such systems would afford, they were seen more as destabilizing offensive systems than as defensive assets.)
- The ISIA will implement monitoring of AI chip production facilities and key inputs to chip production. This monitoring will ensure that all newly produced AI chips are immediately tracked and monitored until they are installed in declared CCCs and that unmonitored supply chains are not established.
- The ISIA will monitor AI chip production facilities determined to be producing or potentially producing AI chips and relevant hardware [the precise definitions of AI chip production facilities, AI chips, and relevant hardware would need to be further described in an Annex; the monitoring methods would also need to be described in an Annex].
- Monitoring of newly produced AI chips will include monitoring of production, sale, transfer, and installation. Monitoring of chip production will start with fabrication. The full set of activities includes fabrication of high-bandwidth memory (HBM), fabrication of logic chips, testing, packaging, and assembly [this set of activities would need to be specified in an Annex].
- For facilities where ISIA tracking and monitoring is not feasible or implemented, production of AI chips will be halted. Production of AI chips may continue when the ISIA declares that acceptable tracking and monitoring measures have been implemented.
- If a monitored chip production facility is decommissioned or repurposed, the ISIA will oversee that process, and, if done to the satisfaction of the ISIA, this ends the monitoring requirement.
- No Party shall sell or transfer AI chips or AI chip manufacturing equipment except as authorized and tracked by the ISIA.
- Sale or transfer of AI chips within or between Treaty Parties shall have a presumption of approval and be tracked by the ISIA.
- Sale or transfer of AI chip manufacturing equipment within or between Treaty Parties shall not have a presumption of approval. Approval for such transfer shall be based on an assessment of the risk of diversion or Treaty withdrawal of the receiving Party.
- Sale or transfer of AI chips and AI chip manufacturing equipment to non-Party States or entities outside a Party State shall have a presumption of denial.
- No Party shall sell or transfer non-AI advanced computer chips or non-AI advanced computer chip manufacturing equipment to non‑Party States or entities outside a Party State except as authorized and tracked by the ISIA.
- Sale or transfer of non-AI advanced computer chips or non-AI advanced computer chip manufacturing equipment within or between Treaty Parties is not restricted under this Article.
Precedent
Treaty provisions for monitoring production facilities are not new. Article XI of the 1987 INF Treaty allowed for thirteen years of inspections of designated facilities where intermediate-range nuclear delivery systems had previously been produced; Section VII of the accompanying inspection protocol permitted continuous perimeter and portal monitoring that could include weighing (and in some cases x-raying) any vehicle large enough to carry a relevant missile that left the facility.
Monitoring AI chip production is more complicated, due to the difficulty of discerning a chip’s function and capabilities from outward characteristics; this is why our Article VI stipulates that “relevant hardware would need to be further described in an Annex,” along with monitoring methods. But the experience of IAEA safeguards under the NPT shows that verification of a wide variety of production components and precursors across a supply chain is possible. One way the IAEA does this is by providing guidelines for the design of facilities to make them inspection friendly and reduce compliance costs.
Transfer embargoes on end-products, precursors, and production equipment (like the one suggested here on sale or transfer of AI chips and advanced computer chip manufacturing equipment to non-party states or entities) all have substantial precedent:
- In Article I of the NPT, each nuclear-weapon state commits “not to transfer to any recipient whatsoever nuclear weapons or other nuclear explosive devices.” In Article III, Paragraph 2, all parties also agree not to provide “source or special fissionable material” or equipment “especially designed or prepared for the processing, use or production of special fissionable material.”
- Article I of the CWC likewise commits parties to never “transfer, directly or indirectly, chemical weapons to anyone”; its Article VII requires them to subject listed precursors to specified “prohibitions on production, acquisition, retention, transfer, and use.”
- The Cold-War-era Coordinating Committee for Multilateral Export Controls (CoCom) established a coordinated set of export controls from Western Bloc countries to the Communist Bloc, covering nuclear-related materials, munitions, and dual-use industrial items such as semiconductors.
- The Nuclear Suppliers Group is a multilateral export control regime that restricts the supply of nuclear and nuclear-related technology that could be diverted to nuclear weapons programs.
- Especially relevant is the series of U.S. export controls that have focused on AI chips and advanced chip manufacturing equipment, covering dozens of countries in the last couple years.
Notes
The AI chip supply chain is narrow and specialized, making it feasible to monitor production. The vast majority of AI chips are designed by NVIDIA. The most advanced logic chips (the main processor) used in AI chips are almost all fabricated by TSMC — accounting for around 90 percent of market share. Most AI chips are fabricated on versions of TSMC’s five-nanometer process node, a node likely only supported by two or three manufacturing plants. EUV lithography machines, a critical component in advanced logic chip fabrication, are made exclusively by ASML. High-bandwidth memory (HBM), another key component of AI chips, is dominated by two or three companies. This narrow, specialized supply chain would be relatively easy to monitor and hard to clandestinely replicate.
Monitoring AI chip production would have relatively small spillover effects. While some of the same processes also produce other chips (e.g., smartphone chips), the chips themselves are easily differentiated. Chip design would change over time, but as a snapshot, current AI chips would be identifiable via their large high-bandwidth memory (HBM) capacity and specialized matrix-multiply components, among other factors.
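To make the “easily differentiated” point concrete, here is a minimal sketch of the sort of heuristic that verification tooling might apply to a chip’s published specifications. The field names and cutoff values are our own illustrative assumptions, not proposed treaty criteria; a real definition would live in an Annex.

```python
# Illustrative heuristic for flagging a chip specification as a likely AI
# accelerator. The cutoff values are placeholders, not treaty definitions.

def looks_like_ai_chip(spec: dict) -> bool:
    """spec keys (assumed): 'hbm_gb', 'matmul_tflops_16bit', 'interconnect_gbps'."""
    large_hbm = spec.get("hbm_gb", 0) >= 40                    # large on-package HBM stacks
    matrix_units = spec.get("matmul_tflops_16bit", 0) >= 100   # dedicated matrix-multiply engines
    fast_links = spec.get("interconnect_gbps", 0) >= 400       # chip-to-chip links sized for training
    # Any two of the three signals would warrant closer inspection.
    return sum([large_hbm, matrix_units, fast_links]) >= 2

print(looks_like_ai_chip({"hbm_gb": 80, "matmul_tflops_16bit": 990, "interconnect_gbps": 900}))  # H100-like: True
print(looks_like_ai_chip({"hbm_gb": 0, "matmul_tflops_16bit": 30, "interconnect_gbps": 64}))     # laptop SoC-like: False
```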
When it comes to monitoring the AI chip supply chain, based on existing bottlenecks, a good start might be to monitor HBM production, logic die fabrication, and subsequent steps (e.g., packaging, testing, server assembly), along with key inputs such as EUV lithography machines.
Our Article VI states that sales of AI chips within Party states will have a presumption of approval, but does not indicate this presumption for AI chip manufacturing equipment. Chip sales are likely to have a relatively short-term effect on AI development capacity, as the lifecycle of AI chips is typically only a few years. By contrast, chip manufacturing capacity could lead to significant chip production for many years to come, and it would be especially concerning if a country became a Party, built up an AI chip supply chain, and then withdrew from the treaty. Therefore, we suggest more conservative restrictions on chip manufacturing equipment than on chips themselves.
Paragraphs 4 and 5 of this article permit the sale of AI chips and chip manufacturing equipment to Treaty Parties but not to non-Party States or other entities. That is, parties accept risks from chip manufacturing and concentration, but only in cases where the chips are subject to monitoring. The ability to manufacture and possess chips without a protective response from other states thus emerges as a positive incentive to join the treaty.
On its own, this does not prevent non-Treaty Parties from accessing AI chips in Treaty Parties remotely (i.e., cloud computing, or Infrastructure-as-a-Service), but such chips would be under ISIA monitoring to ensure they are not being used in violation of Article IV.
Restrictions on non-parties could go further, if need be. For example, non-parties could be banned from remote access to AI chips (i.e., from renting AI chips in Treaty countries via the cloud) or from accessing AI models via APIs.
If monitoring chip production and preventing smuggling were infeasible, another approach would be to ban all production of new AI chips. This approach would run less of a risk of chips being diverted, but it has the cost of losing the value that these chips could have produced in non-research, non-development AI applications. It would still rely on some monitoring of chip production facilities — e.g., to ensure they are only producing non-AI chips or that they are decommissioned. Our treaty design allows chip production to continue due to the large benefits chip use might bring, with the dangers mitigated by monitoring (Article VII).
- Parties accept continuous on‑site verification of total chip usage by the ISIA at declared CCCs. The methods used for verification will be determined and updated by the Technical Secretariat, in accordance with the process described in Article III. These methods may include, but are not limited to:
- In-person inspectors
- Tamper-proof cameras
- Measurements of power, thermal, and networking characteristics
- On-chip hardware-enabled mechanisms, including retrofitted mechanisms
- Declaration of the workloads and operations of chips by the CCC operator
- Rerunning of declared workloads at an ISIA facility to confirm fidelity of declarations
- The aim of this verification will be to ensure chips are not being used for prohibited activities, such as large-scale AI training described in Article IV.
- In cases where the ISIA assesses that current verification methods cannot provide sufficient assurance that the AI hardware is not being used for prohibited activities, AI hardware must be powered off, and its non-operation continually verified by in-person inspectors or other ISIA-approved verification mechanisms.
- The ISIA may impose various restrictions on how chips can operate in order to ensure proper verification. These restrictions may include but are not limited to:
- Restrictions on the bandwidth and latency between different chips, or between chips and their datacenter network, in order to distinguish permitted inference from prohibited training.
- Restrictions on the rate of computation (FLOP/s) or memory bandwidth at which chips can operate, in order to distinguish permitted inference from prohibited training or other prohibited workloads.
- Restrictions on the numerical precision of chip operations, in order to differentiate AI from non-AI workloads.
- The ISIA will approach verification for different CCCs differently based on their likelihood of being used for AI activities and their sensitivity as relevant to national security.
- The ISIA will lead research and engineering to develop better technologies for chip use monitoring and verification. Parties will support these efforts [more details would be provided in an Annex].
Precedent
In our discussion of precedent for Article VI, we described the continuous monitoring of former intermediate-range missile production sites under the INF Treaty, which, while allowing for weighing and non-destructive scanning of vehicles leaving the facilities, did not allow inspectors inside the trucks or the sites themselves. Analogous perimeter monitoring of datacenters can provide some clues about operations from power draw, thermal emissions, and network bandwidth. But reasonable assurance that restricted AI operations are not occurring would likely require some combination of the elements we listed under Paragraph 1 of our Article VII, which includes tamper-proof cameras, on-chip hardware-enabled mechanisms, and in-person inspectors.
Such practices are already routine for the International Atomic Energy Agency, which is increasingly using around-the-clock surveillance technologies to supplement inspections:
Over a million pieces of encrypted safeguards data are collected by over 1400 surveillance cameras, and 400 radiation and other sensors around the world. More than 23 000 seals installed at nuclear facilities ensure containment of material and equipment.
One of the methods used under START I to verify compliance with missile performance characteristics was the sharing of almost all telemetry data transmitted from in-flight sensors during tests, as specified in the telemetry protocol, which also required parties to provide any playback equipment and data formatting information necessary to interpret it. Depending on the mix of verification methods adopted, an International Superintelligence Agency may use analogous methods, building on the light-touch monitoring of customer workloads that is already common practice among cloud computing providers.
Continuous government monitoring of private commercial facilities (as most datacenters are) also has plenty of precedent. The U.S. Nuclear Regulatory Commission, tasked with overseeing domestic nuclear reactor safety, places two resident inspectors in each U.S. commercial power plant, and U.S. meat producers cannot conduct slaughter operations if inspection personnel from the FSIS16 are not on site to oversee them.
Notes
Parties would want to ensure that existing AI chips are not being used to do dangerous AI training. There are legitimate reasons to use these chips to run existing AI services like (extant versions of) ChatGPT. The ISIA thus requires the ability to verify that AI chips are only being used for permitted activities.
This article creates a positive incentive to join the treaty: A country may continue using AI chips as long as supervision can verify that their use does not put the world at risk. Given the goal of preventing large-scale AI training, there are two main approaches: Ensure nobody has the necessary hardware (i.e., that AI chips do not exist), or ensure that the hardware is not used in the development of superintelligence (i.e. via monitoring). Monitoring is what permits the continued safe use of AI chips. This is conceptually analogous to IAEA Safeguards: In order for a non-nuclear weapon country to be permitted nuclear materials and facilities, it is necessary for the IAEA to inspect and ensure the use is only for peaceful purposes.
Feasibility
Various technical methods could be used to make verification easier. For example, using the algorithms of 2025, AI training requires much higher inter-chip bandwidth than AI inference. Thus, if the chips are connected using low-bandwidth networking cables, they are effectively limited such that they can engage in inference but not training. There are various nuances to these and other mechanisms; we refer curious readers to previous work on the topic.
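A back-of-envelope comparison illustrates the gap. In the simplest data-parallel regime, every optimizer step moves something like the full gradient between workers, while serving a model moves only small activations per token. The model size, step time, and serving figures below are illustrative assumptions, and techniques such as gradient compression or local-update methods can shrink the training number, which is part of why the draft restricts research into distributed and decentralized training (Article VIII).

```python
# Rough comparison of sustained inter-node bandwidth: data-parallel training
# vs. token-by-token inference. All parameters are illustrative assumptions.

GBIT = 1e9

def training_gbps(params, step_seconds, bytes_per_param=2):
    """Ring all-reduce traffic per worker is roughly 2x the gradient size per step."""
    grad_bytes = params * bytes_per_param
    return 2 * grad_bytes * 8 / step_seconds / GBIT

def inference_gbps(hidden_dim, tokens_per_second, bytes_per_activation=2):
    """Activations crossing a model-shard boundary while generating tokens."""
    return hidden_dim * bytes_per_activation * tokens_per_second * 8 / GBIT

# Hypothetical 70B-parameter model, one optimizer step every 10 seconds:
print(f"training : ~{training_gbps(70e9, 10):.0f} Gbit/s per worker")
# Hypothetical serving setup: hidden size 8192, 100 tokens per second:
print(f"inference: ~{inference_gbps(8192, 100):.3f} Gbit/s per link")
```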
This article tasks the ISIA with developing and implementing better verification mechanisms, defined broadly. We think this flexibility is necessary due to the pace of change in AI and the possibility that unanticipated developments could disrupt verification methods. The state of AI verification research is also nascent; further development of verification technology is important to give the ISIA a solid set of tools.
It is much easier to verify whether a new AI is being created than it is to verify that an existing AI is not performing dangerous inference tasks (such as research that advances the creation of superintelligence). As of August 2025, existing AIs don’t obviously seem capable enough for their inference activities to substantially advance the creation of superintelligence, and so the monitoring challenge that would be faced by the ISIA is easier.
It is unclear how difficult it would be to monitor AI inference activities. Inference monitoring is already applied by many AI companies today, for instance to detect if users are trying to use AIs to make biological weapons, but it is unclear whether that monitoring is comprehensive, and it is unclear whether it would get less reliable if AIs were allowed to become more capable. The longer that AI capabilities are allowed to advance before a Treaty resembling our draft comes into effect, the more difficult monitoring would become. Verification that chips are only being used for permitted purposes would become more difficult and more expensive, or might even become impossible.
Other Considerations
In theory, verification could be facilitated by technological means that allow for remote monitoring. However, current technology likely contains security vulnerabilities that would allow chip owners to bypass monitoring measures. Thus, verification would likely require either continuous on-site monitoring or that chips be shut off until the technological means mature. Once monitoring technology is mature, strong hardware-enabled governance mechanisms could allow chips to be monitored remotely with confidence.17
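As a flavor of what such hardware-enabled mechanisms aim at, the toy sketch below has a chip (or its secure firmware) sign periodic usage reports with a key provisioned at manufacture, so a remote verifier can tell whether a report was altered. This is a deliberately minimal illustration under assumed names, not a description of any existing hardware mechanism; real attestation designs are far more involved, and the bypass risk described above is precisely about an owner extracting or circumventing such keys.

```python
# Toy sketch of signed usage reporting from a chip to a remote verifier.
# Illustrative only; not a description of any existing hardware mechanism.
import hmac, hashlib, json

DEVICE_KEY = b"key-provisioned-at-manufacture"  # stand-in for a per-chip secret

def sign_report(report: dict) -> str:
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_report(report: dict, tag: str) -> bool:
    # The verifier holds (or can derive) the same per-chip key.
    return hmac.compare_digest(sign_report(report), tag)

report = {"chip_id": "0xA1B2", "hours": 24, "avg_tflops": 120, "peer_links_active": 0}
tag = sign_report(report)
print(verify_report(report, tag))                         # True: report is intact
print(verify_report({**report, "avg_tflops": 900}, tag))  # False: report was altered
```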
Paragraph 5 of this article allows the ISIA to carry out different verification methods for different CCCs. One reason for this discrimination is practical: Different CCCs would require different verification approaches in order to establish justified confidence that they are not being used for dangerous AI development. For example, large datacenters that were previously being used for frontier AI training would have the greatest ability to contribute to prohibited training and so might require greater monitoring.
Second, discrimination in verification approaches would make the treaty more palatable by requiring less invasive monitoring for sensitive CCCs. For example, intelligence agencies or militaries may not want any ISIA monitoring of their datacenters (which may have more computing power than 16 H100-equivalents despite being used for purposes that have nothing to do with AI), and this provision helps strike a balance. It would still be necessary to verify that these datacenters are not being used for dangerous AI activities, and the ISIA would work with these groups to ensure it can get the information it needs while also meeting the privacy and security needs of CCC owners. On the other hand, allowing different verification protocols might hurt the viability of the Treaty if it is viewed as unfair, especially if the decision making around these processes is unbalanced.
Our draft Treaty allows chip use and production to continue so that the world may benefit from such chips. One alternative approach is to shut down new chip production and/or destroy existing chips. Absent algorithmic advancements, the destruction of chips would increase the “breakout time” — the time it takes between when a group starts trying to create a superintelligence and the point at which they succeed. This is because, absent such advancements, a rogue actor would need to develop the capability to produce chips, which is a lengthy and conspicuous process. However, because we think it’s feasible to track chips and verify their usage, we do not think that the benefit of longer breakout times is clearly worth the cost of shutting off all AI chips.
[16] The Food Safety and Inspection Service (FSIS) is an agency of the U.S. Department of Agriculture formed in 1977.
[17] Another key consideration for chip use verification measures is security and privacy. Parties will want to ensure that the ISIA only has access to the information it needs for verification without also having access to sensitive data on the chips (such as military secrets or sensitive user data). Therefore, the verification methods used would need to be secure and narrowly scoped where possible.
- For the purpose of preventing specific research that advances the frontier of AI capabilities or undermines the ability of Parties to implement the measures in this Treaty, this Treaty designates research meeting any of the conditions below as Restricted Research:
- Improvements to the methods used to create frontier models, as defined in Article II, that would improve model capabilities or the efficiency of AI development, deployment, or use
- Distributed or decentralized training methods, or training methods optimized for use on widely available or consumer hardware
- Research into artificial intelligence paradigms beyond machine learning
- Advancements in the fabrication of AI-relevant chips or chip components
- Design of more performant or more efficient AI chips
- The ISIA’s Research Controls division shall classify all Restricted Research activities as either Controlled or Banned.
- Each Party shall monitor any Controlled Research activities within its jurisdiction, and take measures to ensure that all Controlled Research is monitored and made available to the Research Controls division for review and monitoring purposes.
- Each Party shall not conduct any Banned Research, and shall prohibit and prevent Banned Research by any entity within its jurisdiction.
- No Party shall assist, encourage, or share Banned Research, including by funding, procuring, hosting, supervising, teaching, publishing, providing controlled tools or chips, or facilitating collaboration.
- Each Party shall provide a representative to the ISIA’s Research Controls division, under the Technical Secretariat (established in Article III). This division has the following responsibilities:
- Interpret and clarify the categories of Restricted Research, and respond to questions as to the boundaries of Restricted Research, in response to new information, and in response to requests from researchers or organizations or Party members.
- Interpret and clarify the boundary between Controlled Research and Banned Research, and respond to questions as to this boundary, in response to new information, and in response to requests from researchers or organizations or Party members.
- Modify the definition of Restricted Research and its categories, in response to changing conditions, or in response to requests from researchers or organizations or Party Members.
- Modify the boundary between Controlled Research and Banned Research in response to changing conditions, or in response to requests from researchers or organizations or Party Members.
- The Technical Secretariat may modify the categories, boundaries, and definitions of Restricted Research in accordance with the process described in Article III.
Precedent
Pre-emptive restrictions on the dissemination of information related to dangerous technology find precedent in the Atomic Energy Act of 1946, still in force, which established information on certain topics as Restricted Data by default (the “born secret” doctrine); exclusions were at the discretion of the new Atomic Energy Commission created by this legislation:18
The term “restricted data” as used in this section means all data concerning the manufacture or utilization of atomic weapons, the production of fissionable material, or the use of fissionable material in the production of power, but shall not include any data which the Commission from time to time determines may be published without adversely affecting the common defense and security.
Unlike other types of government classification, Restricted Data can be created (deliberately or accidentally) by the private sector, a matter of unresolved constitutionality19 that highlights the need for a regulatory arm authorized and capable of making everyday decisions about the exact boundaries of Restricted Data. The National Nuclear Security Administration (NNSA) does this for nuclear secrets in the U.S. Under our Article VIII, Paragraph 5, the Research Controls division of the new ISIA would take on this role for restricted AI research. It would also fill other NNSA-analogous functions, outlined in our Article IX, by (1) maintaining relationships with researchers and organizations working on projects that approach the classification threshold, and (2) establishing secure infrastructure for reporting and containment of inadvertent discoveries.
There is also precedent for containing and controlling research in dangerous fields. In the final months of World War II, the U.K. and U.S. collaborated on the Alsos Mission to capture German nuclear scientists, gather information about German progress toward an atomic bomb, and prevent the USSR from obtaining these resources for its own nuclear program. Operation Overcast (later renamed Operation Paperclip) was a secret U.S. program to take German rocket engineers into U.S. employment after the war.
Containment of Restricted AI Research within Party states might run through existing regulatory frameworks. In the U.S., these include:
- The “deemed exports” concept in export control law, which obliges a U.S. entity to obtain an export license from the Bureau of Industry and Security20 before sharing controlled technologies with foreign persons by deeming such sharing as an export.
- The International Traffic in Arms Regulations (ITAR), a set of U.S. State Department regulations that control the export of military and some dual-use technologies. ITAR was used to prevent the broader development and use of cryptographic techniques by the private sector until 1996, as these were classified as a “defense article” on the United States Munitions List.
- The Invention Secrecy Act of 1951, which gives U.S. government agencies the power to impose “secrecy orders” on new patent applications with national security implications. Inventors can not only be denied patents, but legally prohibited from disclosing, publishing, or even using their inventions.21
Operation Overcast also provides precedent for controlling researchers by simply paying them well to act in the interest of the state. Additional precedent for such incentives is discussed with Article IX.
Notes
Banning several broad categories of research, when relevant know-how is already distributed in the private sector, presents a challenge. In our draft, research is restricted if it advances AI capabilities or performance, or if it endangers the verification scheme laid out in previous articles.
Some research must be banned to prevent AI capabilities from advancing, even when holding the amount of training FLOP constant. This ban would need to cover all research that might make AIs more efficient to train or that might increase the capabilities of AIs, often referred to as “algorithmic progress.” In current paradigms, this includes advances in the algorithms used in pre-training, post-training, and inference. As paradigms change, these distinctions may become less clear and new categories may arise.22 For this reason, the treaty uses the terms “development, deployment, or use.”
Previous algorithmic innovations, such as the development of the transformer architecture, demonstrate the potential for rapid advances in AI capabilities. Continued innovation could dramatically lower the amount of computational resources required for a given level of AI capability. As a feasibility argument, observe that modern AIs are much less data-efficient than human beings, which suggests that much more data-efficient algorithms can be found.
It is much harder to prevent the training of dangerous AIs when they can be trained with a small number of AI chips, or with many chips geographically dispersed in small clusters.
Separately, a ban must preclude research into new ways to manufacture untracked AI chips. Monitoring and verification of AI chips is feasible in large part because of the present complexity and centralization of advanced AI-relevant semiconductor manufacture.
Article VIII also bans research into the design of more performant or more efficient AI chips, which have historically become substantially more efficient year over year. A datacenter using more efficient AI chips would be easier to conceal, as these chips would use less electricity for the same or greater performance.
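Some rough arithmetic shows why efficiency matters for concealment. The run size, utilization, and overhead factors below are illustrative assumptions; the point is only that the average power signature of a covert training effort scales inversely with chip efficiency.

```python
# Rough power-signature arithmetic for a hypothetical covert training run.
# Every number here is an illustrative assumption.

SECONDS_PER_DAY = 86_400

def avg_power_mw(total_flop, days, flop_per_joule, utilization=0.4, overhead=1.3):
    """Average facility power (MW) needed to perform `total_flop` within `days`.
    `flop_per_joule` is chip efficiency; `overhead` covers cooling, networking, etc."""
    joules = total_flop / (flop_per_joule * utilization)
    return joules / (days * SECONDS_PER_DAY) * overhead / 1e6

RUN = 1e25  # FLOP, roughly the scale of a large training run circa 2025
# H100-class efficiency: ~1e15 FLOP/s at ~700 W, i.e. ~1.4e12 FLOP per joule
print(f"H100-class chips : ~{avg_power_mw(RUN, 90, 1.4e12):.1f} MW sustained for 90 days")
print(f"4x more efficient: ~{avg_power_mw(RUN, 90, 5.6e12):.1f} MW sustained for 90 days")
```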
The specific types of research that are restricted would need to be updated in response to changing conditions. One example of an activity the ISIA may later want to restrict is research into consumer hardware that can efficiently perform AI training activities, if such progress would pose a risk to verification.
Domestic efforts to restrict research could start by focusing on the publication and funding of research. Most researchers want to be law-abiding, gainfully employed citizens; steps that push dangerous AI research outside of accepted social norms would likely be impactful.
The diversity of restricted actions in Paragraph 3 addresses a need to ensure that if research activities are split between multiple jurisdictions, the Treaty still unambiguously holds each state responsible for prohibiting and preventing the individual activities. Paragraph 3 applies, for example, in the case where a company in one jurisdiction hires an employee in a second who remotely operates chips hosted in a third.
[18] The 1946 Atomic Energy Act was later augmented by the Atomic Energy Act of 1954 with the goal of allowing for a civilian nuclear industry, which required allowing some Restricted Data to be shared with private companies.
[19] The 1979 case of United States v. The Progressive, in which a newspaper intended to reveal the “secret” of the hydrogen bomb, might have given the U.S. Supreme Court an opportunity to rule on whether the “born secret” doctrine violates the First Amendment’s protections on speech, if the government hadn’t dropped the case as moot.
[20] An arm of the U.S. Department of Commerce.
[21] Hundreds of such orders have been placed on cryptography-related patents over the decades.
[22] For instance, the development of AlphaGo — a state-of-the-art AI in 2016 — doesn’t fit cleanly into the modern “pre-training, post-training, inference” paradigm.
- Each Party shall create or empower a domestic agency with the following responsibilities:
- Maintain awareness of and relationships with domestic researchers and organizations working on areas adjacent to Restricted Research, in order to communicate the categories of Restricted Research established in Article VIII.
- Impose penalties to deter domestic researchers and organizations from conducting Restricted Research. These penalties shall be proportionate to the severity of the violation and should be designed to act as a sufficient deterrent. Each Party shall enact or amend legal statutes as necessary to enable the imposition of these penalties.
- Establish secure infrastructure for reporting and containment of inadvertent discoveries meeting the conditions for Restricted Research. These reports will be shared with the Research Controls division.
- To aid in the international verification of research bans, the Research Controls division will develop and implement verification mechanisms.
- These mechanisms could include but are not limited to:
- ISIA interviews of researchers who have previously worked in Restricted Research topics, or are presently working in adjacent areas.
- Monitoring of the employment status and whereabouts of researchers who have previously worked in Restricted Research topics, or are presently working in adjacent areas.
- Maintaining embedded auditors from the ISIA in selected high-risk organizations (e.g., organizations whose work is difficult to distinguish from Restricted Research, or organizations that were previously AI research organizations).
- Parties will assist in the implementation of these verification mechanisms.
- The information gained through these verification mechanisms will be compiled into reports for the Executive Council, keeping as much sensitive information confidential as possible to protect the privacy and secrets of individuals and Parties.
Precedent
Existing agencies empowered to “maintain awareness of and relationships with domestic researchers and organizations” at risk of developing restricted information, as called for by our Article IX (1.a.), include the DOE and NNSA, discussed in the precedent section for Article VIII.
Precedent for “monitoring of the employment status and whereabouts of researchers” in high-risk fields, as we suggest in Paragraph 2.(a).(ii), can be found in the International Science and Technology Center (ISTC).23 Established in 1994, the ISTC was specifically created to reduce nuclear proliferation risks by keeping Soviet nuclear researchers gainfully employed in peaceful activities and connected to the international scientific community. The ISTC also shows the potential of incentives as a complement to penalties for keeping technical experts (who may find themselves unemployed as a result of this treaty) from engaging in Restricted Research.
To the extent that penalties may need to be severe to provide the deterrence indicated in our Article IX.1.(b), a template may be found in the Enforcement chapter (18) of the 1946 Atomic Energy Act, under which the unauthorized sharing of Restricted Data can be punished by death or imprisonment if the disclosures were made with treasonous intent.24
When developing secure ISIA “infrastructure for reporting and containing inadvertent discoveries of Restricted Research,” precedent and potentially usable templates may be found in the extensive DOE procedures for handling different categories of sensitive data. The DOE’s Occurrence Reporting and Processing System, as well as the Committee on National Security Systems’s25 instructions for classified information spillage, may also be of use.
Our treaty’s Research Controls division might look to existing practices by the IAEA when developing inspection protocols. Under the framework of the Model Additional Protocol approved in 1997 by the IAEA Board of Governors, states that have made comprehensive safeguards agreements26 allow complementary access inspections that look for undeclared nuclear material. As part of such visits, inspectors may interview operators, analogous to our proposal in Paragraph 2.(a).(i) of Article IX.
We also propose “maintaining embedded auditors from the ISIA in select high-risk organizations,” much the way DOE and NNSA field offices are physically located at contractor-operated national nuclear labs and production plants today.
To “protect the privacy and secrets of individuals and parties” when performing verifications, as required by this article’s Paragraph 2.(c), the ISIA Research Controls division might adapt compartmentalization practices of parties’ existing intelligence agencies and multilateral intelligence-sharing agreements. For example, under the “third party rule” or “originator control principle” understood to be commonplace in such arrangements, it is prohibited to disclose shared information to third parties (potentially even oversight bodies) without permission from the originating agency.
Notes
To help verify that there is no prohibited AI research happening, Article IX tasks parties with demarcating “areas adjacent to Restricted Research” and then establishing relationships with the researchers working in these adjacent areas. There are sufficiently few top AI researchers in the world that it may be feasible to track the activities of a significant fraction of them. The technical staff of top AI companies numbers on the order of 5,000 researchers, and it is commonly believed that a much smaller group is critical to frontier AI development, likely numbering in the hundreds.27 The number of attendees of top AI conferences is estimated to be about 70,000. States could interview researchers about their activities and offer asylum and financial incentives for any whistleblowers (see Article X).
While much about current AI development practices happens in the public view, we think legal restrictions would dramatically hamper the efforts of rogue actors to create superintelligent machines.
Monitoring could be extended to researchers and engineers involved in semiconductor design and manufacture if states are willing to incur the extra costs. A more affordable alternative might be to monitor semiconductor manufacturing companies rather than individuals, taking advantage of complex dependencies within the industry which ensure that small groups of rogue individuals would have trouble creating their own chip fabricators.
Parties may be concerned that other parties will violate domestic research bans and hide research efforts from foreign intelligence. Most likely, large efforts involving many researchers and AI-relevant chips would be noticed by a determined intelligence community. But smaller efforts, like developing alternative machine intelligence paradigms, might only involve a few researchers and commonly available hardware. Verifying a research ban is a complex and sensitive undertaking requiring ongoing effort and iteration. To facilitate that end, Article X (below) institutes a variety of tools to facilitate intelligence gathering and to protect whistleblowers.
[23] The International Science and Technology Center grew out of the 1991 Nunn-Lugar Cooperative Threat Reduction program, a U.S. initiative to secure and dismantle WMDs and their associated infrastructure in former Soviet states.
[24] Parties to our treaty may wish to explore expanding the concept of crimes against humanity (codified in the 1998 Rome Statute of the International Criminal Court) to cases where a researcher deliberately seeks to develop ASI at the expense of the people of Earth.
[25] The Committee on National Security Systems (CNSS) is a U.S. intergovernmental organization that sets security policies for government information systems.
[26] 144 States, as of June 2025.
[27] In a 2025 interview, David Luan, head of Amazon’s AGI research lab, estimated the number of people he would trust “with a giant dollar amount of compute” to develop a frontier model at “sub-150.”
- The independent information-gathering efforts of Parties are a key source of information for the ISIA. As such, the Information Consolidation division (Article III) will be ready to receive this information.
- The Information Consolidation division shall take precautions to protect commercial, industrial, security, and state secrets and other confidential information coming to its knowledge in the implementation of the Treaty, including the maintenance of secure, confidential, and, optionally anonymous reporting channels.
- For the purpose of providing assurance of compliance with the provisions of this Treaty, each Party shall use National Technical Means (NTM) of verification at its disposal in a manner consistent with generally recognized principles of international law.
- Each Party undertakes not to interfere with the National Technical Means of verification of other Parties operating in accordance with the above.
- Each Party undertakes not to use deliberate concealment measures which impede verification by national technical means of compliance with the provisions of this Treaty.
- Parties are encouraged, but not obligated, to cooperate in the effort to detect dangerous AI activities in non-Party countries. Parties are encouraged, but not obligated, to support the NTM of Parties directed at non-Parties, as relevant to this Treaty.
- A key source of information for the ISIA is individuals who provide evidence of dangerous AI activities to the ISIA. These individuals are subject to whistleblower protections.
- This Article establishes protections, incentives, and assistance for individuals (“Covered Whistleblowers”) who, in good faith, provide the ISIA or a Party with credible information concerning actual, attempted, or planned violations of this Treaty or other activities that pose a serious risk of human extinction, including concealed chips, undeclared datacenters, prohibited training or research, evasion of verification, or falsification of declarations. Covered Whistleblowers include employees, contractors, public officials, suppliers, researchers, and other persons with material information, as well as Associated Persons (family members and close associates) who assist or are at risk due to the disclosure.
- Parties shall prohibit and prevent retaliation against Covered Whistleblowers and Associated Persons, including but not limited to dismissal, demotion, blacklisting, loss of benefits, harassment, intimidation, threats, civil or criminal actions, visa cancellation, physical violence, imprisonment, restriction of movement, or other adverse measures. Any contractual terms (including non‑disclosure or non‑disparagement agreements) purporting to limit protected disclosures under this Treaty shall be void and unenforceable. Mistreatment of whistleblowers shall constitute a violation of this Treaty and be handled under Article XI, Paragraph 3.
- The ISIA shall maintain secure, confidential, and, optionally anonymous reporting channels. Parties shall establish domestic channels interoperable with the ISIA system. The ISIA and Parties shall protect the identity of Covered Whistleblowers and Associated Persons and disclose it only when strictly necessary and with protective measures in place. Unauthorized disclosure of protected identities shall constitute a violation of this Treaty and be handled under Article XI, Paragraph 3.
- Parties shall offer asylum or humanitarian protection to Covered Whistleblowers and their families, provide safe‑conduct travel documents, and coordinate secure transit.
- The ISIA may conduct challenge inspections of suspected sites upon credible information about dangerous AI activities.
- Parties may request for the ISIA to perform a challenge inspection. The Executive Council, either by request or because of the analysis provided by the Information Consolidation division, will consider the information at hand in order to request additional information, of Parties or non-Parties, or to propose a challenge inspection, or to decide that no further action is warranted.
- A challenge inspection requires approval by a majority of the Executive Council.
- Access to a suspected site must be granted by the nation in which the site is present within 24 hours of the ISIA calling for a challenge inspection. During this time, the site may be surveilled, and any people or vehicles leaving the site may be inspected by officials from a signatory Party or the ISIA.
- The challenge inspection will be conducted by a team of officials from the ISIA who are approved by both the Party being inspected and the Party that called for the inspection. The ISIA is responsible for working with Parties to maintain lists of approved inspectors for this purpose.
- Challenge inspections may be conducted in a given Party’s territory at most 20 times per year, and this limit can be changed by a majority vote of the Executive Council.
- Inspectors will take absolute care to protect the sensitive information of the inspected state, passing along to the Executive Council only what information is pertinent to the treaty.
Precedent
We previously discussed precedent for information consolidation with Article IX, where we cited the existence of intelligence agreements understood to include compartmentalization practices like the “third party rule.” Similar rules can be seen in the IAEA, as in INFCIRC/153 Part 1.5:
…the Agency shall take every precaution to protect commercial and industrial secrets and other confidential information coming to its knowledge in the implementation of the Agreement.
Staff are bound by confidentiality obligations and face criminal penalties for leaks. This matters because the IAEA has benefited from the intelligence disclosures of participating states, including satellite imagery and documents, as in the case of Iran’s undeclared enrichment activities. Similarly, the IAEA demanded a special inspection of North Korea’s undeclared plutonium production in response to provided intelligence.
Recognizing the indispensable role of national technical means (NTM — satellite imagery, signals collection, and other remote sensing) in verification of multilateral agreements, our draft agreement borrows language from the ABM treaty limiting anti-ballistic missile systems, in which “each Party shall use national technical means of verification” and “undertakes not to interfere with the national technical means of verification of the other Party.” Similar language can be found in Article XII of the 1987 Intermediate-Range Nuclear Forces Treaty, Article IV of the 1996 Comprehensive Nuclear-Test-Ban Treaty, and throughout the 2010 New START treaty.
As NTM would not be sufficient for detecting all dangerous violations in the case of ASI, we have borrowed features of the IAEA Safeguards framework that encourage internal reporting and provide channels for doing so. But these are hampered by a lack of explicit whistleblower protections; nothing in the NPT or these Safeguards would protect an informant from their government if it decides to retaliate unless that state has applicable domestic protections in place. The treaty-level provisions for whistleblower protection and asylum in our draft agreement are meant to address this shortcoming.
Recent EU legislation on AI has taken similar measures. The EU AI Act’s Recital 172 explicitly extends the Union’s existing general whistleblower protections to those reporting AI Act infringements.
The 1951 Refugee Convention provides a possible framework for granting asylum to informants, basing qualification on “well-founded fear of being persecuted,” though an amendment or supplemental agreement may be needed to ensure that AI whistleblowing is a legally qualifying cause of persecution.
Asylum for people with sensitive knowledge or expertise was routinely granted in the context of the Cold War and its aftermath. Section 7 of the CIA Act of 1949 provided for admission and permanent residence of up to a hundred defectors and their immediate families per fiscal year if deemed “in the interest of national security or essential to the furtherance of the national intelligence mission.” The Soviet Scientists Immigration Act of 1992 gave up to 750 visas to former Soviet and Baltic States scientists with “expertise in nuclear, chemical, biological or other high technology fields or who are working on nuclear, chemical, biological or other high-technology defense projects.”
The challenge inspections mechanism we lay out in Paragraph 3 of this article is modeled after that of Part IX of the CWC:
Each State Party has the right to request an on-site challenge inspection of any facility or location in the territory or in any other place under the jurisdiction or control of any other State Party for the sole purpose of clarifying and resolving any questions concerning possible non-compliance…
The CWC, along with other arms control treaties such as the INF Treaty and START I between the U.S. and USSR, combines NTM with challenge-like inspections to verify compliance.
Notes
Intelligence Gathering
We expect all parties would make ongoing efforts to independently determine whether any actor is conducting dangerous AI activities, out of interest in their own security. A range of state intelligence-gathering activities would supplement and validate the monitoring the ISIA conducts directly (as described in Articles IV through VII). Towards that end, an Information Consolidation division is vital, and must be trusted to receive information from all parties.
For the division to earn that trust, confidentiality is vital, and must be sufficiently robust to assure state intelligence services that the risks imposed on their intelligence methods are minimal and justified by the need to provide information to the ISIA. Avoiding the collection of sensitive information whenever possible, and keeping collected information in the strictest confidence, minimizes the risk of compromise.
Article X also addresses the surveillance of non-signatories, where the need for intelligence is strong.
Article X stops short of imposing an obligation to surveil. It would be unprecedented to mandate that the ISIA build a self-sufficient intelligence-gathering capability at the level required to give states assurance, and it seems unnecessary: the creation of superintelligence would pose a grave security threat, which means all parties are already strongly incentivized to surveil and monitor any actor with that capability. Thus, the ISIA relies on parties to provide key intelligence.
Whistleblower Protections
The overall effectiveness of this treaty relies on parties’ justified confidence that other parties are not undertaking prohibited AI activities. Even with National Technical Means and other intelligence gathering, it may be difficult for states to detect clandestine efforts to develop superintelligence. There are many domains in which it may not be feasible for states to gather intelligence on their rivals, such as efforts conducted inside military facilities. Whistleblowers can serve as an additional source of information, and the possibility of whistleblowing provides further deterrence against non-compliance.
Whistleblowers may be effective because individuals involved in secret treaty violations (e.g., clandestine training runs or AI research) may themselves be concerned about the danger from ASI. This article aims to make it safer and less costly for them to report violations, shifting the personal incentives away from silence and toward disclosure.
Whistleblowers could sound the alarm for violations of the treaty including:
- Article IV: Training runs that are unmonitored, exceed thresholds, or use prohibited distributed training methods.
- Article V: The existence of undeclared chip clusters, the failure to consolidate all covered hardware, or the diversion of chips to secret, unmonitored facilities.
- Article VI: Newly manufactured AI chips diverted away from monitoring, or created without mandated security features.
- Article VIII: Prohibited AI research.
Modifications to the whistleblower clauses could change their efficacy and political viability in various ways. For example, states could offer to financially compensate legitimate whistleblowers to provide additional incentives, but this may be seen as paying citizens to defect against their own countries.
Challenge Inspections
Challenge inspections are a critical function provided by the ISIA. Without the credible threat of detection, parties may fear that their rivals would attempt to cheat the treaty (despite the lose-lose nature of a race to superintelligence). Intelligence gathering is one method to combat apparent (illusory) incentives to defect.
- Any Party (“Concerned Party”) may raise concerns regarding the implementation of this treaty, including concerns about ambiguous situations or possible non-compliance by another Party (“Requested Party”). This includes misuse of Protective Actions (Article XII).
- The Concerned Party shall notify the Requested Party of their concern, while also sharing their concern with the Director-General and Executive Council. The Requested Party will acknowledge this notification within 36 hours, and provide clarification within 5 days.
- If the issue is not resolved, the Concerned Party may request that the Executive Council assist in adjudicating and clarifying the concern. This may include the Concerned Party requesting a challenge inspection in accordance with Article X.
- The Executive Council shall provide appropriate information in its possession relevant to such a concern.
- The Executive Council may task the Technical Secretariat to compile additional documentation, convene closed technical sessions, and recommend resolution measures.
- If the Executive Council determines there was a Treaty violation, it can take actions to prevent dangerous AI activities or reprimand the Requested Party. These actions may include:
- Require additional monitoring or restrictions on AI activities
- Require relinquishment of AI hardware
- Call for sanctions
- Recommend Parties take Protective Actions under Article XII
Precedent
Our Article XI Dispute Resolution procedures borrow from Articles IX, XII, and XIV of the Chemical Weapons Convention. Article IX of the CWC requires signatories to respond to requests for clarification “as soon as possible, but in any case not later than 10 days after the request.” Given how quickly digital developments can propagate, we chose a 5-day response deadline, but even this figure may need to be adjusted downward.
Our Paragraph 2 of this article is modeled after Article XIV of the CWC, which permits its Executive Council to “contribute to the settlement of a dispute by whatever means it deems appropriate, including offering its good offices, calling upon the States Parties to a dispute to start the settlement process of their choice and recommending a time-limit for any agreed procedure.” Parties are also encouraged to refer cases to the International Court of Justice as appropriate.
As in Paragraph 3 of our Article XI, the CWC’s Article XII empowers the Executive Council to recommend remedies, including sanctions, “in cases where serious damage to the object and purpose of this Convention may result from activities prohibited under this Convention.” To give force to those recommendations, the CWC’s Council is to “bring the issue, including relevant information and conclusions, to the attention of the United Nations General Assembly and the United Nations Security Council.” Recommendations by our treaty’s ISIA Executive Council may be similarly escalated.
Notes
The purpose of Article XI is to include a consultation and clarification process to resolve issues that arise between signatories.
Given the pace of AI innovation, determining violations on a reasonable timeline can be challenging. The role of the Executive Council is to adjudicate any concerns raised by any party to the treaty. The Technical Secretariat has the role of ensuring that the inspections are conducted by experts who have an understanding of cutting-edge AI technologies. The treaty uses an aggressive timeline (measured in hours and days) in the hopes that it is fast enough for parties to wait for rulings before taking Protective Actions (as described in Article XII, below), despite the rapid pace of technological change in the field of AI. That said, of course no treaty can prevent a party from taking protective actions that it deems necessary to ensure its own security.
- Recognizing that the development of ASI or other Dangerous AI Activities, as laid out in Articles IV through IX, would pose a threat to global security and to the lives of all people, it may be necessary for Parties to this Treaty to take drastic actions to prevent such development. The Parties recognize that development of artificial superintelligence (ASI), anywhere on earth, would be a threat to all Parties. Under Article 51 of the United Nations Charter and under longstanding precedent, states have a right to self-defence. Due to the scale and speed of ASI-related threats, self-defence may require pre-emptive actions to prevent the development of ASI.
- To prevent the development or deployment of ASI, this Article authorizes tailored Protective Actions. Where there is credible evidence that a State or other actor (whether a Party or a non‑Party) is conducting or imminently intends to conduct activities aimed at developing or deploying ASI in violation of Article I, Article IV, Article V, Article VI, Article VII, or Article VIII, a State Party may undertake Protective Actions that are necessary and proportionate to prevent those activities. In recognition of the harms and escalatory nature of Protective Actions, Protective Actions should be used as a last resort. Outside of emergencies and time-sensitive situations, Protective Actions shall be preceded by other approaches such as, but not limited to:
- Trade restrictions or economic sanctions
- Asset restrictions
- Visa bans
- Appeal to the UN Security Council for action
- Protective Actions may include measures such as cyber operations to sabotage AI development, interdiction or seizure of covered chip clusters, military actions to disable or destroy AI hardware, and physical disablement of specific facilities or assets directly enabling AI development.
- Parties shall minimize collateral harm, including to civilians and essential services, wherever practical, subject to mission requirements.
- Protective Actions shall be strictly limited to preventing ASI development or deployment and shall not be used as a pretext for territorial acquisition, regime change, resource extraction, or broader military objectives. Permanent occupation or annexation of territory is prohibited. Actions shall cease upon verification by the ISIA that the threat no longer exists.
- Each Protective Action shall be accompanied, at initiation or as soon as security permits, by a public Protective Action Statement that:
- Explains the protective purpose of the action;
- Identifies the specific AI‑enabling activities and assets targeted;
- States the conditions for cessation;
- Commits to cease operations once those conditions are met.
- Protective Actions shall terminate without delay upon any of the following:
- ISIA certification that the relevant activities have ceased.
- Verified surrender or destruction of covered chip clusters or ASI‑enabling assets, potentially including the establishment of sufficient safeguards to prevent Restricted Research activities.
- A determination by the acting Party, communicated to the ISIA, that the threat has abated.
- Parties shall not regard measured Protective Actions taken by another Party under this Article as provocative acts, and shall not undertake reprisals or sanctions on that basis. Parties agree that Protective Actions meeting the above requirements shall not be construed as an act of aggression or justification for the use of force.
- The Executive Council shall review each Protective Action for compliance with this Article and report to the Conference of the Parties. If the Executive Council finds that an action was not necessary, proportionate, or properly targeted, actions may be taken under Article XI, Paragraph 3.
Precedent
The idea that nation-states can take protective actions for their own security is a reality regardless of precedent, but one case of its codification into international law is Chapter VII of the United Nations Charter, which states that the Security Council may take military or non-military measures to maintain international peace and security, when necessary.
The concept of Protective Actions as they appear in the draft above is further grounded in historical precedents where states have acted, individually or collectively, to prevent the development of technologies deemed a threat to international security. These actions range from sanctions to cyber and military strikes.
The international effort to prevent Iran from developing nuclear weapons provides a clear, modern example. The UN Security Council has several times imposed sanctions on Iran due to its nuclear program, most of which were lifted after Iran agreed to limits on said program in the 2015 Joint Comprehensive Plan of Action.
The U.S. and Israel reportedly collaborated on Stuxnet, a highly sophisticated cyberweapon which destroyed many of Iran’s uranium enrichment centrifuges in 2010.
In June 2025, Israel launched airstrikes against many of Iran’s nuclear facilities, and this was followed by U.S. airstrikes nine days later which were partially aimed at disabling the Fordow Uranium Enrichment Plant.
Another historical precedent for Protective Actions is the international response to Iraq’s nuclear noncompliance in the 1990s. Following the 1991 Gulf War, the United Nations Special Commission (UNSCOM) was created to oversee the destruction of Iraq’s weapons of mass destruction. Non-compliance with the UNSCOM inspection regime eventually led to Operation Desert Fox in 1998, a bombing campaign aimed at degrading Iraq’s ability to produce WMDs.
Notes
A treaty to prevent the creation of artificial superintelligence might not need to be explicit about the need for Protective Actions against states undertaking ASI development, and could instead leave these dynamics implicit, as similar agreements often do. Our draft is explicit because this deterrence regime is core to the treaty's effectiveness, and clarity about the incentives strengthens it. This explicitness also allows us to include measures that may help prevent Protective Actions from being misused, including a more thorough description of when such Actions are acceptable.
As discussed elsewhere, once world leaders understand the threat from ASI, they will likely be willing to take action to stop rogue AI development, including limited military interventions. Military actions, such as narrowly targeted airstrikes, should always be treated as a last resort option to prevent the development of ASI, after all other diplomacy has failed. But it is important that they are available as a last resort, in order for the deterrence and compliance regime to hold even towards actors who wrongly perceive recklessly created artificial superintelligence as a technology that would be beneficial rather than destructive.
We stress that any use of force should be targeted at preventing ASI, and should stop once it is clear that the threat has been removed. Article XII aims to make it clear that signatories would not prevent reasonable Protective Actions taken by other parties, but these actions must also be reviewed to ensure that this article is not being abused.
- For AI models created via declared training or post‑training within the limits of Article IV, the ISIA may require evaluations and other tests. These tests will inform whether the thresholds set in Article IV, Article V, Article VII, and Article VIII need to be revised. The methods used for reviews will be determined by the ISIA and may be updated.
- Evaluations shall be conducted by ISIA officials at ISIA facilities or at monitored CCCs. Officials from Treaty Parties may be informed which tests are conducted, and the ISIA may provide a summary of the test results. Parties will not gain access to AI models they did not train, except when granted access by the model owner, and the ISIA will take steps to ensure the security of sensitive information.
- The ISIA may share detailed information with Parties or the public, if the Director-General deems that this may be necessary to reduce the chance of human extinction from advanced AI.
Precedent
The precedents for ISIA-mandated tests with oversight overlap with those for chip-use verification discussed under Article VII, with the missile telemetry-sharing protocol of START I being particularly relevant. The added component here in our Article XIII is the use of collected data to inform recommendations for potential threshold adjustments (which could take place under the precedented mechanisms we discuss under Article XIV).
Regarding the inherent tension between disclosures to the public (Paragraph 3) and the information consolidation provisions of our Article X, we note that the Statute of the IAEA’s Article VII confidentiality provision28 has not prevented it from publishing regular and detailed reports on major developments in its associated field and their implications for global security.
Notes
The purpose of Article XIII is to ensure the ISIA stays up to date with the state of the field of AI, in case capabilities continue to advance. For example, reviewing declared training would allow the ISIA to understand the level of AI capabilities that can be reached with different levels of training FLOP. Even with algorithmic research banned, there may be progress that cannot be effectively stopped, and the ISIA must keep track of it.
Additionally, the ISIA has reason to monitor progress in capabilities elicitation. For example, new prompting methods could be discovered that cause an old AI to perform much better on some critical evaluation metric.
We envision ISIA reviews that also involve capability evaluations to make sure AIs aren’t getting dangerously capable in specific domains. They could also look at the training data to ensure AIs aren’t being trained for specifically dangerous tasks (like automating AI research), or to test for unexpected AI behavior.
When reviews reveal shifts in the AI development landscape, those shifts could necessitate changes to thresholds relevant to Article IV and Article V, and changes to the definitions of Restricted Research in Article VIII, with those changes implemented according to the mechanisms in Article III.
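To make this kind of review slightly more concrete, below is a minimal illustrative sketch (in Python) of how a declared training run might be logged and compared against a compute threshold. The 6 × N × D approximation for training FLOP, the 1e24 threshold, and all names in the sketch are our own assumptions for illustration, not figures or mechanisms drawn from the treaty text.

```python
# Illustrative sketch only: checking a declared training run against a compute
# threshold. The 6 * N * D heuristic and the 1e24 FLOP limit are assumptions made
# for this example, not values specified by the treaty.

from dataclasses import dataclass

EXAMPLE_FLOP_THRESHOLD = 1e24  # hypothetical Article IV-style limit


@dataclass
class DeclaredRun:
    developer: str
    parameters: float       # model parameter count (N)
    training_tokens: float  # training tokens processed (D)

    def estimated_flop(self) -> float:
        """Rough training-compute estimate via the common ~6 * N * D heuristic."""
        return 6.0 * self.parameters * self.training_tokens

    def exceeds_threshold(self, threshold: float = EXAMPLE_FLOP_THRESHOLD) -> bool:
        return self.estimated_flop() > threshold


if __name__ == "__main__":
    run = DeclaredRun(developer="example-lab", parameters=70e9, training_tokens=2e12)
    print(f"{run.estimated_flop():.2e} FLOP")  # ~8.40e+23 FLOP
    print(run.exceeds_threshold())             # False under the 1e24 example limit
```

Pairing records like this with the capability evaluations described above would let the ISIA track how much capability a given compute budget buys over time, which is the information needed to judge whether thresholds should move.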
[28] VII.F states that “[...] subject to their responsibilities to the Agency, [the Director General and the staff] shall not disclose any industrial secret or other confidential information coming to their knowledge by reason of their official duties for the Agency”
- Any State Party may propose amendments to this Treaty. "Amendments" are revisions to the main body and Articles of the Treaty, including revisions to the purpose of any Article. Under Article III, the ISIA Technical Secretariat, with a majority vote of the Executive Council, may change specific definitions and implementation methods, such as those relevant to Article IV, Article V, Article VI, Article VII, Article VIII, Article IX, and Article X. Fundamental revisions to the purposes of those Articles, or to voting procedures, require an Amendment.
- Proposed amendments shall be submitted to the ISIA Director-General and circulated to all States Parties.
- For an amendment to be formally considered, one third or more of the States Parties must support its consideration.
- Amendments to the main body of the Treaty are not ratified until accepted by all States Parties (with no negative votes).
- If the Executive Council recommends to all States Parties that a proposal be adopted, the proposal shall be considered approved if no State Party rejects it within 90 days.
- Three years after the entry into force of this Treaty, a Conference of the Parties shall be held in Geneva, Switzerland, to review the operation of this Treaty with a view to assuring that the purposes of the Preamble and the provisions of the Treaty are being realized. At intervals of three years thereafter, Parties to the Treaty will convene further conferences with the same objective of reviewing the operation of the Treaty.
Precedent
The NPT has a rigid amendment process, requiring approval by “a majority of the votes of all the Parties to the Treaty.” This intentionally makes formal changes extremely difficult. Our treaty follows this precedent with the aim of fortifying the agreement against short-term pressures to relax thresholds or weaken provisions.
Hard-to-amend (and thus hard-to-weaken) treaties rely on other mechanisms for strengthening as needed. The NPT has never been amended, but has been adapted through the five-yearly Review Conference stipulated in Article VIII, where consensus agreements are made “with a view to assuring that the purposes of the Preamble and the provisions of the Treaty are being realised.”
Similarly, Article XII of the 1975 Biological Weapons Convention relies on its five-yearly Review Conferences to strengthen the treaty through non-binding Confidence-Building Measures, as formal amendments are rare. Our agreement stipulates review conferences every three years, as AI has been a field prone to rapid shifts; this interval may need to be shortened further.
Article XV of the Chemical Weapons Convention makes a distinction between amendments and administrative or technical changes, with less stringent approval provisions for the latter. Similar language could be added to our draft agreement to provide a level of flexibility in managing future developments in the field of AI.
Article XV of the Outer Space Treaty contains an amendment clause, but the treaty has never been formally amended; instead, new treaties have been negotiated to address emerging space issues. This could be another option for shoring up weaknesses that may become apparent in an AI treaty.
Notes
Article XIV sets out the process for making major revisions to the treaty structure. These revisions require substantial support from the parties, and there is a high bar for making them. By contrast, changes to the details of various categories and restrictions can be made much more easily and rapidly (subject to slower review), as described in Article III, and as is necessitated by the fast pace of change in the field of AI. A careful review process seems warranted given the gravity of the situation, and given the risk that overzealous actors could, if left unchecked, impose misguided restrictions that inconvenience the public for little-to-no benefit.
- The Treaty shall be of unlimited duration.
- Each Party shall in exercising its national sovereignty have the right to withdraw from the Treaty if it decides that extraordinary events, related to the subject matter of this Treaty, have jeopardized the supreme interests of its country. It shall give notice of such withdrawal to the ISIA 12 months in advance.
- During this 12-month period, the withdrawing state shall cooperate with ISIA efforts to certify that after withdrawal, the withdrawing state will be unable to develop, train, post-train, or deploy dangerous AI systems, including ASI or systems above the Treaty thresholds. Withdrawing states acknowledge that such cooperation aids the ISIA and Parties in avoiding the use of Article XII.
- In particular, the withdrawing state, under ISIA oversight, will remove all covered chip clusters and ASI-enabling assets (e.g., advanced computer chip manufacturing equipment) from its territory to ISIA-approved control or render them permanently inoperable (as described in Article V).
- Nothing in this Article limits the applicability of Article XII. A State that has withdrawn (and is therefore a non-Party) remains subject to Protective Actions if credible evidence indicates activities aimed at ASI development or deployment.
Precedent
It is common for treaties to lack expiration dates. The first paragraph of Article XVI of the CWC states “This Convention shall be of unlimited duration.”
Treaties of unlimited duration do not necessarily last forever.29 But they do typically provide a mechanism for withdrawal, usually with a required period of notice and other stipulations intended to let a party leave in a manner less concerning to the remaining parties. Article XVI of the CWC allows a party to withdraw "if it decides that extraordinary events, related to the subject-matter of this Convention, have jeopardized the supreme interests of its Country." The withdrawing country must give 90 days' notice. Article XVI of the Outer Space Treaty requires one year's notice for withdrawal.
Our treaty language expects 12 months' notice from withdrawing Parties, allowing ample time for assisting with the assurance-providing measures in Paragraph 3. Our intent with these measures (which go beyond what we readily find in the historical record of withdrawal provisions) is to reduce the potential need for Protective Actions against the withdrawing Party, as no Party or non-Party can be allowed to create ASI or weaken the world's ability to prevent its creation.
Historical precedent for a withdrawn party remaining subject to protective actions is found in United Nations Security Council Resolution 1718, which imposed sanctions against North Korea after its 2006 nuclear test, despite North Korea's earlier withdrawal from the NPT.
Notes
Given the dangers of ASI research and development, and the risk that one country's withdrawal from the treaty to race toward superintelligence would prompt others to follow, a treaty needs barriers to withdrawal.
In practice, this is challenging. North Korea, for example, withdrew from the NPT to continue its nuclear proliferation activities, even at the cost of UN Security Council resolutions and associated sanctions. The consequences did not prove sufficient to cause North Korea to stop its proliferation activities.
If nations wish to withdraw from the treaty, our wording makes it clear that, in the eyes of all parties, they forgo the right to AI infrastructure, and that they would be subject to Article XII Protective Actions if they engage in dangerous AI activities. Any further arrangements around the ASI issue (e.g., to avoid Protective Actions) would have to be negotiated separately by interested parties.
Parties concerned about withdrawals could include mechanisms to make withdrawal more difficult. For example, both U.S. and Chinese officials could agree to install mutual killswitches inside retained datacenters, allowing either party to permanently shut off the other’s datacenter. Alternatively, parties to the treaty could adopt a multilateral licensing regime in which all new AI chips must be fabricated with hardware locks that require approval from multiple parties to continue operation, so that if a country withdrew from the treaty, others could stop approving their licenses and incapacitate their chips. Another option involves moving key AI infrastructure into third-party countries where the infrastructure could be confiscated or destroyed if a party withdrew from the treaty. Our draft sticks to minimal deterrence methods, but many other methods are available (or could be made available with a little technological investment).
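To illustrate the multilateral licensing idea mentioned above, the sketch below (in Python) shows the basic decision rule such a hardware lock might enforce: a chip keeps operating only while it holds unexpired approvals from every required party. The party names, renewal window, and check itself are hypothetical assumptions for this sketch; a real mechanism would verify cryptographically signed approvals in tamper-resistant hardware rather than rely on a simple timestamp lookup.

```python
# Simplified sketch of a multi-party hardware-lock decision rule: operation is
# permitted only while every required party has issued a non-expired approval.
# Party names and the renewal window are illustrative assumptions; a real design
# would verify cryptographic signatures in tamper-resistant hardware.

import time

REQUIRED_APPROVERS = {"party_a", "party_b", "party_c"}  # hypothetical treaty parties
APPROVAL_VALIDITY_SECONDS = 30 * 24 * 3600              # e.g., approvals expire monthly


def chip_may_operate(approval_timestamps: dict[str, float],
                     now: float | None = None) -> bool:
    """Return True only if every required party's latest approval is still valid."""
    now = time.time() if now is None else now
    return all(
        party in approval_timestamps
        and now - approval_timestamps[party] < APPROVAL_VALIDITY_SECONDS
        for party in REQUIRED_APPROVERS
    )
```

The deterrent force of such a scheme comes from the fact that any single party withholding renewal is enough to halt the hardware once the old approval expires.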
Our draft treaty is focused narrowly on preventing the creation of rogue artificial superintelligence. A more sweeping treaty could also attempt to unify the parties around a particular positive vision of how AI development could eventually continue, and could end with an article that agrees to joint investment in that vision. Reaching that sort of agreement seems to us like an especially difficult additional step, and so our draft does not venture that sort of proposal. As we discussed elsewhere, people need not agree on those details to agree that the race to superintelligence should be stopped, and world leaders could unite around a treaty that resembles this draft even as they separately negotiate joint investments in positive paths forward, in whatever ways they see fit.
[29] Sometimes they are superseded by other treaties. This was the case for the 1947 General Agreement on Tariffs and Trade (GATT); it was superseded by the 1994 Marrakesh Agreement, which incorporated the rules from GATT but established the World Trade Organization (WTO) to replace GATT's institutional structure. Treaties of unlimited duration also sometimes end when parties withdraw in a manner that makes the treaty ineffective. For example, the U.S. and USSR initially agreed to the 1987 Intermediate-Range Nuclear Forces (INF) Treaty for an unlimited duration, but the U.S. withdrew in 2019 citing Russian non-compliance, and Russia announced in 2025 that it would no longer abide by the treaty.