A Tentative Draft of a Treaty, with Annotations
Below, we provide an annotated example of draft language for the sort of treaty that could be implemented by major governments around the world, if they recognized the dangers from artificial superintelligence (ASI) and sought to prevent anyone from building ASI.1
We are not policymakers and we are not well-versed in international law. We present this as an illustrative example of some potentially valuable treaty provisions to have in view, using mechanisms tailored to the situation at hand and grounded in historical precedent.
This draft text covers many different mechanisms that we think would be required to prevent AI developers from seriously endangering humanity. In practice, we would expect different aspects to be covered by different treaties.2 And of course, in reality, the international community should carefully draft the whole treaty, subject to negotiation and review by relevant experts.
For each article in the example treaty below, we’ve provided a commentary section explaining why we made key decisions, and a section discussing some relevant precedent.
A real treaty would involve many details. We’ve included some example details, but most are relegated to “annexes” (which we do not flesh out in their entirety). Many of the quantities and numerical thresholds we use in our draft constitute our best guess, but they should still be treated only as guesses. Many of those numbers would require further study and revision before being finalized. These sorts of details plausibly wouldn’t be included in the treaty itself, analogous to how, in the case of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), specific details of inspections and so-called “safeguards” programs were decided between each country and the IAEA, rather than being included in the NPT itself. However, for clarity, we have kept our best-guess numbers directly in the treaty text, to help it feel more concrete.
[1] It might be that nation-states concerned about artificial superintelligence would prefer to take smaller steps first — e.g., steps that don’t shut down AI research and development just yet, but that keep the option open to shut down AI R&D in the future. We don’t recommend that course of action, because we think the situation is already clearly out of hand and we are not confident the situation will get much clearer before it’s too late. Nevertheless, the MIRI technical governance team is working on proposals for those scenarios, in case they are helpful. You can follow their work here.
[2] This is the case with nuclear weapons agreements, where separate treaties establish the IAEA (1956, by the Conference on the Statute of the International Atomic Energy Agency, hosted at the Headquarters of the United Nations), the NPT (1970, through negotiations in the United Nations Eighteen Nation Committee on Disarmament), and arms control agreements like the START treaty (1991, following nine years of intermittent negotiation between the U.S. and the Soviet Union).
Preamble
Article I: Primary Purpose
Article II: Definitions
Article III: ISIA
Article IV: AI Training
Article V: Chip Consolidation
Article VI: AI Chip Production Monitoring
Article VII: Chip Use Verification
Article VIII: Restricted Research: AI Algorithms and Hardware
Article IX: Research Restriction Verification
Article X: Information Consolidation and Challenge Inspections
Article XI: Dispute Resolution
Article XII: Protective Actions
Article XIII: ISIA Reviews
Article XIV: Treaty Revision Process
Article XV: Withdrawal and Duration
Preamble
The States concluding this Treaty, hereinafter referred to as the Parties to the Treaty,
Alarmed by the prospect that the development of artificial superintelligence would lead to the deaths of all people and the end of all human endeavor,
Affirming the necessity of urgent, coordinated, and sustained international action to prevent the creation and deployment of artificial superintelligence under present conditions,
Convinced that measures to prevent the advancement of artificial intelligence capabilities will reduce the chance of human extinction,
Recognizing that the stability of this Treaty relies on the ability to verify the compliance of all Parties,
Recalling the precedent of prior arms control and nonproliferation agreements in addressing global security threats,
Undertaking to co-operate in facilitating the verification of artificial intelligence activities globally when they steer well clear of artificial superintelligence, and seeking to preserve access to the benefits of artificial intelligence systems even while avoiding dangers,
Have agreed as follows:
[3] The Treaty on the Non-Proliferation of Nuclear Weapons (commonly called the “Non-Proliferation Treaty”) entered into force in 1970 and was extended indefinitely in 1995. Known for its near-universal membership (191 parties), its preamble emphasizes the global hazard of weapons proliferation while affirming that the benefits of peaceful nuclear applications should be available to all parties.
Article I: Primary Purpose
Each Party to this Treaty shall not develop, deploy, or seek to develop or deploy artificial superintelligence (“ASI”) by any means. Each Party shall prohibit and prevent all such development within its borders and jurisdictions, and, due to the uncertainty as to when further progress would produce ASI, shall not engage in or permit activities that materially advance toward ASI as described in this Treaty. Each Party shall assist, or not impede, reasonable measures by other Parties to dissuade and prevent such development by and within non-Party states and jurisdictions. Each Party shall implement and carry out all other obligations, measures, and verification arrangements set forth in this Treaty.
Where some classes of AI infrastructure and capabilities that stay far short of ASI may be deemed acceptable, but only under conditions of international supervision, only Parties to the Treaty may carry out such activities, or own or operate AI chips and manufacturing capabilities that could potentially lead to the development of ASI if unsupervised. Non-Parties are denied such access for the safety of the Parties and of all life on Earth (Article V, Article VI, Article VII).
Parties commit to a dispute resolution process (Article XI) to minimize unnecessary Protective Actions (Article XII).
[4] The NPT is generally credited with keeping the number of nuclear states lower than it might have been, but acquisitions by non-signatories (India, Pakistan, Israel) and former signatories (North Korea) have still occurred. Any non-signatory creating even a single ASI is comparable in danger to a mass thermonuclear exchange, and must be treated accordingly.
[5] The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, commonly called the CCW, entered into force in 1983. As of 2024, its 128 parties commit to protect combatants and non-combatants from unnecessary and egregious suffering by restricting various categories of weapons.
Article II: Definitions
For the purposes of this Treaty:
- Artificial intelligence (AI) means a computational system that performs tasks requiring cognition, planning, learning, or taking actions in physical, social or cyber domains. This includes systems that perform tasks under varying and unpredictable conditions, or that can learn from experience and improve performance.
- Artificial superintelligence (ASI) is operationally defined as any AI with sufficiently superhuman cognitive performance that it could plan and successfully execute the destruction of humanity.
- For the purposes of this Treaty, AI development which is not explicitly authorized by the ISIA (Article III) and is in violation of the limits described in Article IV shall be assumed to have the aim of creating artificial superintelligence.
- Dangerous AI activities are those activities which substantially increase the risk of an artificial superintelligence being created, and are not limited to the final step of developing an ASI but also include precursor steps as laid out in this Treaty. The full scope of dangerous AI activities is concretized by Articles IV through IX and may be elaborated and modified through the operation of the Treaty and the activities of the ISIA.
- Floating-point operations (FLOP) is the computational measure used to quantify the scale of training and post‑training, based on the number of mathematical operations done. FLOP shall be counted as either the equivalent operations to the half-precision floating-point (FP16) format or the total operations (in the format used), whichever is higher.
- Training run means any computational process that optimizes an AI’s parameters (specifications of the propagation of information through a neural network, e.g., weights and biases) using gradient-based or other search/learning methods, including pre-training, fine-tuning, reinforcement learning, large-scale hyperparameter searches that update parameters, and iterative self-play or curriculum training.
- Pre-training means the training run by which an AI’s parameters are initially optimized using large-scale datasets to learn generalizable patterns or representations prior to any task- or domain-specific adaptation. It includes supervised, unsupervised, self-supervised, and reinforcement-based optimization when performed before such adaptation.
- Post-training means a training run executed after a model’s pre-training. In addition, any training performed on an AI created before this Treaty entered into force is considered post-training.
- Advanced computer chips are integrated circuits fabricated on processes at least as advanced as the 28 nanometer process node.
- AI chips mean specialized integrated circuits designed primarily for AI computations, including but not limited to training and inference operations for machine learning models [this would need to be defined more precisely in an Annex]. This includes GPUs, TPUs, NPUs, and other AI accelerators. This may also include hardware that was not originally designed for AI uses but can be effectively repurposed. AI chips are a subset of advanced computer chips.
- AI hardware means all computer hardware for training and running AIs. This includes AI chips, as well as networking equipment, power supplies, and cooling equipment.
- AI chip manufacturing equipment means equipment used to fabricate, test, assemble, or package AI chips, including but not limited to lithography, deposition, etch, metrology, test, and advanced-packaging equipment [a more complete list would need to be defined in an Annex].
- H100-equivalent means the unit of computing capacity (FLOP per second) equal to one NVIDIA H100 SXM accelerator, 990 TFLOP/s in FP16, or a Total Processing Performance (TPP) of 15,840, where TPP is calculated as TPP = 2 × non-sparse MacTOPS × (bit length of the multiply input).
- Covered chip cluster (CCC) means any set of AI chips or networked cluster with aggregate effective computing capacity greater than 16 H100-equivalents. A networked cluster refers to chips that either are physically co-located, have inter-node aggregate bandwidth — defined as the sum of bandwidth between distinct hosts/chassis — greater than 25 Gbit/s, or are networked to perform workloads together. The aggregate effective computing capacity of 16 H100 chips is 15,840 TFLOP/s, or 253,440 TPP, and is based on the sum of per-chip TPP. Examples of CCCs would include: the GB200 NVL72 server, three eight-way H100 HGX servers residing in the same building, CloudMatrix 384, a pod with 32 TPUv6e chips, every supercomputer.
- National Technical Means (NTM) includes satellite, aerial, cyber, signals, imagery (including thermal), and other remote-sensing capabilities employed by Parties for verification consistent with this Treaty.
- Chip-use verification means methods that provide insight into what activities are being run on particular computer chips in order to differentiate acceptable and prohibited activities.
- Methods used to create frontier models refers to the broad set of methods used in AI development. It includes but is not limited to AI architectures, optimizers, tokenizer methods, data curation, data generation, parallelism strategies, training algorithms (e.g., RL algorithms) and other training methods. This includes post-training but does not include methods that do not change the parameters of a trained model, such as prompting. New methods may be created in the future.
[6] This is twice the limit mentioned as a clearly-safe limit in the book. It is likely still safe for some time yet, and evaluating where the limits should be (and changing them over time) is the subject of Article III, Article V, and Article XIII.
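To make the compute arithmetic in these definitions concrete, here is a minimal sketch of how the TPP formula and the 16-H100-equivalent CCC threshold fit together. This is our illustration, not treaty text; the function names are our own, and only the H100 figures come from the draft above.

```python
# Sketch of the TPP / H100-equivalent arithmetic from the Article II definitions.
# TPP = 2 * non-sparse MacTOPS * (bit length of the multiply input).
# H100 SXM: 990 TFLOP/s FP16 dense = 495 non-sparse MacTOPS, so TPP = 2 * 495 * 16 = 15,840.

H100_TPP = 15_840          # TPP of one NVIDIA H100 SXM (FP16, dense)
CCC_THRESHOLD_H100_EQ = 16  # covered chip cluster threshold, in H100-equivalents

def tpp(non_sparse_mac_tops: float, bit_length: int) -> float:
    """Total Processing Performance per the Article II formula."""
    return 2 * non_sparse_mac_tops * bit_length

def h100_equivalents(total_tpp: float) -> float:
    """Aggregate capacity expressed in H100-equivalents."""
    return total_tpp / H100_TPP

def is_covered_chip_cluster(chip_tpps: list[float]) -> bool:
    """True if aggregate capacity exceeds 16 H100-equivalents (253,440 TPP)."""
    return h100_equivalents(sum(chip_tpps)) > CCC_THRESHOLD_H100_EQ

# One H100: 990 TFLOP/s FP16 dense -> 495 non-sparse MacTOPS at 16-bit inputs.
assert tpp(495, 16) == H100_TPP

# Sixteen H100s sit exactly at 253,440 TPP, which is not "greater than" the
# threshold under the definition's wording; a seventeenth chip tips it over.
print(is_covered_chip_cluster([H100_TPP] * 16))  # False
print(is_covered_chip_cluster([H100_TPP] * 17))  # True
```

Note that the strict "greater than" reading at the exact boundary is our interpretation of the definition's wording.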
Article III: ISIA
- Treaty Parties hereby establish the International Superintelligence Agency (ISIA), to implement this Treaty and its provisions, including those for international verification of compliance with it, and to provide a forum for consultation and cooperation among Parties.
- There are hereby established as the organs of the ISIA: the Conference of the Parties, the Executive Council, and the Technical Secretariat.
- Conference of the Parties
- The Conference of the Parties comprises all Treaty Parties.
- The Conference of the Parties shall: Determine overall policy; adopt and oversee the budget; elect members of the Executive Council; consider compliance matters reported by the Executive Council; and adopt and revise Annexes upon Executive Council recommendation.
- It shall convene in regular session no less than annually, or at a more frequent rate set by the Conference, in addition to special sessions as required. Each Party has one vote. Quorum is a majority of Parties.
- Executive Council
- The Executive Council shall have 15 members: (i) 5 designated seats for permanent members of the United Nations Security Council, and (ii) 10 elected seats distributed by equitable geographic representation. Details of this are elaborated in Annex A.
- Elected members serve two-year terms. Half of the seats are elected each year.
- The Executive Council shall: approve challenge inspections; recommend budget and policy to the Conference; appoint the Director-General; provide oversight of the Technical Secretariat and approve its recommendations.
- Decision making processes are as follows:
- The Executive Council elects the Chair and Vice Chair of the Executive Council.
- The Chair or Vice Chair can act as the presiding officer.
- Voting proceeds by One Member, One Vote.
- Votes to approve a challenge inspection under Article X require a majority.
- Votes to recall or appoint a Director-General require two-thirds majority.
- All other decisions require a majority.
- Quorum requires two-thirds of the Executive Council.
- Technical Secretariat and Director-General
- The Director-General of the Technical Secretariat shall be its head and chief administrative officer.
- The Director-General is appointed by the Executive Council for a four-year term, renewable once. The Executive Council can recall the Director-General.
- The Technical Secretariat shall at its outset include technical divisions for Chip Tracking and Manufacturing Safeguards, Chip Use Verification Safeguards, Research Controls, Information Consolidation, Technical Reviews, Administration and Finance, and Legal and Compliance. The Director-General can create and disband technical divisions.
- The Technical Secretariat, by means of the Director-General, proposes changes to technical definitions and safeguard protocols, as necessary to implement Article IV, Article V, Article VI, Article VII, Article VIII, Article IX, and Article X of this Treaty.
- Time-sensitive changes to FLOP thresholds (Article IV), the size of covered chip clusters (Article V), and the boundaries of restricted research (Article VIII) may be implemented by the Director-General immediately where inaction poses a security risk. Such changes remain in effect for thirty days. Past that, the changes need approval from the Executive Council to remain in effect.
- The Executive Council shall make decisions on matters of substance as far as possible by consensus; the Director-General should make efforts to achieve consensus. If consensus is not possible at the end of 24 hours, a vote will be taken, and the Executive Council shall accept the changes if a majority of members present and voting vote to accept the changes, and shall reject them otherwise.
- The ISIA’s regular budget is funded by assessed contributions of Parties, using a scale derived from the UN assessment scale, subject to a floor and ceiling set by the Executive Council. Member states also have the option of making voluntary contributions for AI safety research related to alignment and interpretability, and for capacity-building activities of member states, including beneficial uses of safe AI, test bed development, good practices, information sharing, and the facilitation of cooperation and joint activities, loosely modeled on the IAEA network of Nuclear Security Support Centers.
[7] The Organisation for the Prohibition of Chemical Weapons (OPCW) conducts inspections, monitors the destruction of chemical weapons stockpiles, and assists in preparation for chemical weapons attacks, among various other functions critical to the Chemical Weapons Convention (CWC). The CWC entered force in 1997; its 193 parties work to effect and maintain a prohibition on the use, development, and proliferation of chemical weapons and their precursors, with some narrow exemptions.
[8] The IAEA was established in 1957, more than a decade before the NPT. The NPT was able to designate this pre-existing body to carry out some functions. In the case of artificial intelligence, no such international body exists yet, so our treaty must commit parties to creating one.
Article IV: AI Training
- Each Party agrees to ban and prohibit AI training above the following thresholds: Any training run exceeding 1e24 FLOP or any post-training run exceeding 1e23 FLOP. Each Party agrees to not conduct training runs above these thresholds, and to not permit any entity within its jurisdiction to conduct training runs above these thresholds.
- The Technical Secretariat may modify these thresholds, in accordance with the process described in Article III.
- Each Party shall report any training run between 1e22 and 1e24 FLOP to the ISIA, prior to initiation. This applies to training runs conducted by the Party or any entity within its jurisdiction.
- This report must include, but is not limited to, all training code, and an estimate of the total FLOP to be used. The Party must provide ISIA staff supervised access to all data, with access logging appropriate to the data’s sensitivity, and protections against duplication or unauthorized disclosure. Failure to provide ISIA staff sufficient access to data is grounds for denying the training run, at the ISIA’s discretion. The ISIA may request any additional documentation relating to the training run. The ISIA will also pre-approve a set of small modifications that could be made to the training procedure during training. Any such changes will be reported to the ISIA when and if they are made.
- Nonresponse by the ISIA after 30 days constitutes approval; however, the ISIA may extend this time period by giving notice that it requires additional time to review. These extensions are not limited, but Parties may appeal excessive delays to the Director-General or the Executive Council.
- The ISIA may monitor such training runs, and the Party will provide checkpoints of the model to the ISIA upon request from the ISIA, including the final trained model [initial details for such monitoring would need to be described in an Annex].
- In the event that monitoring indicates worrisome AI capabilities or behaviors, the ISIA can issue an order to pause a training run or class of training runs until they deem it safe for the training run to proceed.
- The ISIA will maintain robust security practices. The ISIA will not share information about declared training runs unless it determines that the declared training violates the Treaty, in which case it will provide all Treaty Parties with sufficient information to determine whether a violation occurred.
- In the event that a Party discovers a training run above the designated thresholds, the Party must report this training run to the ISIA, and halt this training run (if it is ongoing). Such a training run may only resume with approval from the ISIA.
- Each Party, and entities within its jurisdiction, may conduct training runs of less than 1e22 FLOP without oversight or approval from the ISIA.
- The ISIA may authorize, upon a two-thirds majority vote of the Executive Council, specific carveouts for activities such as safety evaluations, self‑driving vehicles, medical technology, and other activities in fashions deemed safe by the Director-General. These carveouts may allow for training runs larger than 1e24 FLOP with ISIA oversight, or a presumption of approval from the ISIA for training runs between 1e22 and 1e24 FLOP.
[9] The U.S. and USSR had already agreed to stop other kinds of nuclear weapons tests in 1963 with the Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water, commonly called the Limited Test Ban Treaty (LTBT) or Test Ban Treaty.
[10] The Treaty Between the British Empire, France, Italy, Japan, and the United States of America for the Limitation of Naval Armament (the Washington Naval Treaty) lists ships to be scrapped by name in a table (Section II).
[11] The Strategic Arms Reduction Treaty was signed in 1991 and entered force in 1994. Signatories were each barred from deploying more than 6,000 nuclear warheads on a total of 1,600 intercontinental ballistic missiles and bombers.
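The thresholds in this Article reduce to a simple decision rule. The sketch below is our illustration, not treaty language; the function and category labels are our own, and the treatment of runs at exactly the boundary values follows our reading of "exceeding" and "between":

```python
# Illustrative classification of a training run under the Article IV thresholds:
#   - any training run exceeding 1e24 FLOP is prohibited;
#   - any post-training run exceeding 1e23 FLOP is prohibited;
#   - runs between 1e22 and 1e24 FLOP must be reported to the ISIA before initiation;
#   - runs below 1e22 FLOP need no ISIA oversight or approval.
# Per Article II, FLOP are counted as FP16-equivalent operations or total
# operations in the format used, whichever is higher.

BAN_TRAINING_FLOP = 1e24
BAN_POST_TRAINING_FLOP = 1e23
REPORTING_FLOOR_FLOP = 1e22

def classify_training_run(flop: float, post_training: bool) -> str:
    if flop > BAN_TRAINING_FLOP:
        return "prohibited"
    if post_training and flop > BAN_POST_TRAINING_FLOP:
        return "prohibited"
    if flop >= REPORTING_FLOOR_FLOP:
        return "report to ISIA before initiation"
    return "no ISIA oversight required"

print(classify_training_run(5e24, post_training=False))  # prohibited
print(classify_training_run(5e23, post_training=True))   # prohibited (post-training cap)
print(classify_training_run(5e23, post_training=False))  # report to ISIA before initiation
print(classify_training_run(1e21, post_training=False))  # no ISIA oversight required
```

The point of the rule's structure is that post-training inherits the general 1e24 ban but also triggers the stricter 1e23 cap, so a 5e23-FLOP run is reportable if it is pre-training but prohibited outright if it is post-training.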
Article V: Chip Consolidation
- Each Party shall ensure that within their jurisdiction, all covered chip clusters (CCCs), as defined in Article II (i.e., a set of chips with capacity greater than 16 H100-equivalents) [note that 16 H100s collectively cost around $500,000 in 2025 and these are rarely owned by individuals], are located in facilities declared to the ISIA, and that these AI chips are subject to monitoring by the ISIA.
- Parties shall aim to avoid co-locating AI chips with non-ancillary non-AI computer hardware in these declared facilities.
- These facilities shall be accessible to physical inspection. This may include, for instance, that verification teams can reach any CCC from at least one airport with scheduled international service within 12 hours.
- Parties shall not house AI chips in so many different locations that it is infeasible for the ISIA to monitor all locations. If requested by the ISIA, Parties must further consolidate their AI chips into fewer monitored facilities.
- Unmonitored AI chips that are not part of a CCC (i.e., that have capacity less than 16 H100‑equivalents) may remain outside of ISIA‑declared facilities, provided that such stockpiles are not aggregated or networked to meet the CCC definition, are not rotated among sites to defeat monitoring, and are not used for prohibited training. Parties will make reasonable efforts to monitor the sale and aggregation of AI chips to ensure that any newly created CCCs are detected and monitored.
- Within 120 days of the Treaty entering into force, each Party shall locate, inventory, and consolidate all CCCs into facilities declared to the ISIA. Parties shall not disaggregate, conceal, or otherwise reassign chips to evade this requirement or to cause a set of chips which would have been classified as a CCC to no longer be classified as a CCC.
- The ISIA shall monitor the domestic consolidation process, including through on‑site inspections, document and inventory verification, accompaniment of domestic authorities during transfers and inspection, and information sharing with Parties under Article X. The ISIA may require chain‑of‑custody records for transfers and may conduct challenge inspections as described in Article X. Parties shall provide timely access to relevant facilities, transport hubs, and records. Whistleblower protections and incentives under Article X apply to the consolidation process, and the ISIA shall maintain protected reporting channels.
- Within 120 days of the Treaty entering into force, Parties shall submit to the ISIA a register of their CCCs. The register must include the location, type, quantity, serial or other unique identifiers where available, and associated interconnects of all AI chips in the CCCs. Each Party shall provide the ISIA with an updated and accurate register no later than every 90 days.
- Parties shall provide the ISIA with advance notice of any planned transfer of AI chips, whether domestic or international, no less than 14 days before the planned transfer. No transfer shall proceed unless the ISIA is afforded the opportunity to observe the transfer. For international transfers, both the sending and receiving Parties shall coordinate with the ISIA on routing, custody, and receipt. Emergency transfers undertaken for safety or security reasons shall be notified as soon as practicable, with post‑facto verification.
- Broken, defective, surplus, or otherwise decommissioned AI chips shall continue to be treated as functional chips, until the ISIA certifies they are destroyed. Parties shall not destroy AI chips without ISIA oversight. Destruction or rendering permanently inoperable shall be conducted under ISIA oversight using ISIA‑approved methods and recorded in a destruction certificate [the details will need to be explained in an Annex]. Salvage or resale of components from such hardware is prohibited unless expressly authorized by the ISIA.
[12] The Joint Comprehensive Plan of Action was finalized in 2015 between the five permanent members of the United Nations Security Council, Germany, the European Union, and Iran. When it took effect in January of 2016, Iran gained sanctions relief and other provisions in exchange for accepting restrictions on its nuclear program.
[13] The Strategic Arms Limitation Talks (SALT) commenced in 1969 between the U.S. and USSR, producing the SALT I treaty, signed in 1972, which froze the number of strategic ballistic missile launchers and regulated the addition of new submarine-launched ballistic missiles, among other restrictions.
[14] The 1972 Anti-Ballistic Missile Treaty (ABM) grew out of the original SALT talks, and limited each party to two anti-ballistic complexes each (later, just one) with restrictions on their armament and tracking capabilities.
[15] With the 1987 Intermediate-Range Nuclear Forces Treaty (INF), the U.S. and USSR agreed to ban most nuclear delivery systems with ranges in between those of battlefield and intercontinental systems. (Given the short warning time strikes from such systems would afford, they were seen more as destabilizing offensive systems than as defensive assets.)
Article VI: AI Chip Production Monitoring
- The ISIA will implement monitoring of AI chip production facilities and key inputs to chip production. This monitoring will ensure that all newly produced AI chips are immediately tracked and monitored until they are installed in declared CCCs and that unmonitored supply chains are not established.
- The ISIA will monitor AI chip production facilities determined to be producing or potentially producing AI chips and relevant hardware [the precise definitions of AI chip production facilities, AI chips, and relevant hardware would need to be further described in an Annex; the monitoring methods would also need to be described in an Annex].
- Monitoring of newly produced AI chips will include monitoring of production, sale, transfer, and installation. Monitoring of chip production will start with fabrication. The full set of activities includes fabrication of high-bandwidth memory (HBM), fabrication of logic chips, testing, packaging, and assembly [this set of activities would need to be specified in an Annex].
- For facilities where ISIA tracking and monitoring is not feasible or implemented, production of AI chips will be halted. Production of AI chips may continue when the ISIA declares that acceptable tracking and monitoring measures have been implemented.
- If a monitored chip production facility is decommissioned or repurposed, the ISIA will oversee that process, and, if done to the satisfaction of the ISIA, this ends the monitoring requirement.
- No Party shall sell or transfer AI chips or AI chip manufacturing equipment except as authorized and tracked by the ISIA.
- Sale or transfer of AI chips within or between Treaty Parties shall have a presumption of approval and be tracked by the ISIA.
- Sale or transfer of AI chip manufacturing equipment within or between Treaty Parties shall not have a presumption of approval. Approval for such transfer shall be based on an assessment of the risk of diversion or Treaty withdrawal of the receiving Party.
- Sale or transfer of AI chips and AI chip manufacturing equipment to non-Party States or entities outside a Party State shall have a presumption of denial.
- No Party shall sell or transfer non-AI advanced computer chips or non-AI advanced computer chip manufacturing equipment to non‑Party States or entities outside a Party State except as authorized and tracked by the ISIA.
- Sale or transfer of non-AI advanced computer chips or non-AI advanced computer chip manufacturing equipment within or between Treaty Parties is not restricted under this Article.
Article VII: Chip Use Verification
- Parties accept continuous on‑site verification of total chip usage by the ISIA at declared CCCs. The methods used for verification will be determined and updated by the Technical Secretariat, in accordance with the process described in Article III. These methods may include, but are not limited to:
- In-person inspectors
- Tamper-proof cameras
- Measurements of power, thermal, and networking characteristics
- On-chip hardware-enabled mechanisms, including retrofitted mechanisms
- Declaration of the workloads and operations of chips by the CCC operator
- Rerunning of declared workloads at an ISIA facility to confirm fidelity of declarations
- The aim of this verification will be to ensure chips are not being used for prohibited activities, such as large-scale AI training described in Article IV.
- In cases where the ISIA assesses that current verification methods cannot provide sufficient assurance that the AI hardware is not being used for prohibited activities, AI hardware must be powered off, and its non-operation continually verified by in-person inspectors or other ISIA-approved verification mechanisms.
- The ISIA may impose various restrictions on how chips can operate in order to ensure proper verification. These restrictions may include but are not limited to:
- Restrictions on the bandwidth and latency between different chips, or between chips and their datacenter network, in order to distinguish permitted inference from prohibited training.
- Restrictions on the number or rate of FLOP/s or memory bandwidth at which chips can operate, in order to distinguish permitted inference from prohibited training or other prohibited workloads.
- Restrictions on the numerical precision of chip operations, in order to differentiate AI from non-AI workloads.
- The ISIA will approach verification for different CCCs differently based on their likelihood of being used for AI activities and their sensitivity as relevant to national security.
- The ISIA will lead research and engineering to develop better technologies for chip use monitoring and verification. Parties will support these efforts [more details would be provided in an Annex].
[16] The Food Safety and Inspection Service (FSIS) is an agency of the U.S. Department of Agriculture formed in 1977.
[17] Another key consideration for chip use verification measures is security and privacy. Parties will want to ensure that the ISIA only has access to the information it needs for verification, without also having access to sensitive data on the chips (such as military secrets or sensitive user data). Therefore, the verification methods used would need to be secured and narrowly scoped where possible.
Article VIII: Restricted Research: AI Algorithms and Hardware
- For the purpose of preventing specific research that advances the frontier of AI capabilities or undermines the ability of Parties to implement the measures in this Treaty, this Treaty designates research meeting any of the conditions below as Restricted Research:
- Improvements to the methods used to create frontier models, as defined in Article II, that would increase model capabilities or the efficiency of AI development, deployment, or use
- Distributed or decentralized training methods, or training methods optimized for use on widely available or consumer hardware
- Research into artificial intelligence paradigms beyond machine learning
- Advancements in the fabrication of AI-relevant chips or chip components
- Design of more performant or more efficient AI chips
- The ISIA’s Research Controls division shall classify all Restricted Research activities as either Controlled or Banned.
- Each Party shall monitor all Controlled Research activities within its jurisdiction, and take measures to ensure that such research is made available to the Research Controls division for review and monitoring purposes.
- Each Party shall not conduct any Banned Research, and shall prohibit and prevent Banned Research by any entity within its jurisdiction.
- No Party shall assist, encourage, or share Banned Research, including by funding, procuring, hosting, supervising, teaching, publishing, providing controlled tools or chips, or facilitating collaboration.
- Each Party shall provide a representative to the ISIA’s Research Controls division, under the Technical Secretariat (established in Article III). This division shall have the following responsibilities:
- Interpret and clarify the categories of Restricted Research, and respond to questions about the boundaries of Restricted Research, in response to new information or to requests from researchers, organizations, or Parties.
- Interpret and clarify the boundary between Controlled Research and Banned Research, and respond to questions about this boundary, in response to new information or to requests from researchers, organizations, or Parties.
- Modify the definition of Restricted Research and its categories, in response to changing conditions or to requests from researchers, organizations, or Parties.
- Modify the boundary between Controlled Research and Banned Research, in response to changing conditions or to requests from researchers, organizations, or Parties.
- The Technical Secretariat may modify the categories, boundaries, and definitions of Restricted Research in accordance with the process described in Article III.
[18] The 1946 Atomic Energy Act was later augmented by the Atomic Energy Act of 1954 with the goal of allowing for a civilian nuclear industry, which required allowing some Restricted Data to be shared with private companies.
[19] The 1979 case of United States v. The Progressive, in which a newspaper intended to reveal the “secret” of the hydrogen bomb, might have given the U.S. Supreme Court an opportunity to rule on whether the “born secret” doctrine violates the First Amendment’s protection of speech, had the government not dropped the case as moot.
[20] An arm of the U.S. Department of Commerce.
[21] Hundreds of such orders have been placed on cryptography-related patents over the decades.
[22] For instance, the development of AlphaGo — a state-of-the-art AI in 2016 — doesn’t fit cleanly into the modern “pre-training, post-training, inference” paradigm.
Article IX: Research Restriction Verification
- Each Party shall create or empower a domestic agency with the following responsibilities:
- Maintain awareness of and relationships with domestic researchers and organizations working on areas adjacent to Restricted Research, in order to communicate the categories of Restricted Research established in Article VIII.
- Impose penalties to deter domestic researchers and organizations from conducting Restricted Research. These penalties shall be proportionate to the severity of the violation and should be designed to act as a sufficient deterrent. Each Party shall enact or amend legal statutes as necessary to enable the imposition of these penalties.
- Establish secure infrastructure for reporting and containment of inadvertent discoveries meeting the conditions for Restricted Research. These reports will be shared with the Research Controls division.
- To aid in the international verification of research bans, the Research Controls division will develop and implement verification mechanisms.
- These mechanisms could include but are not limited to:
- ISIA interviews of researchers who have previously worked in Restricted Research topics, or are presently working in adjacent areas.
- Monitoring of the employment status and whereabouts of researchers who have previously worked in Restricted Research topics, or are presently working in adjacent areas.
- Maintaining embedded auditors from the ISIA in selected high-risk organizations (e.g., projects difficult to distinguish from Restricted Research, organizations that were previously AI research organizations).
- Parties will assist in the implementation of these verification mechanisms.
- The information gained through these verification mechanisms will be compiled into reports for the Executive Council, keeping as much sensitive information confidential as possible to protect the privacy and secrets of individuals and Parties.
[23] The International Science and Technology Center grew out of the 1991 Nunn-Lugar Cooperative Threat Reduction program, a U.S. initiative to secure and dismantle WMDs and their associated infrastructure in former Soviet states.
[24] Parties to our treaty may wish to explore expanding the concept of crimes against humanity (codified in the 1998 Rome Statute of the International Criminal Court) to cases where a researcher deliberately seeks to develop ASI at the expense of the people of Earth.
[25] The Committee on National Security Systems (CNSS) is a U.S. intergovernmental organization that sets security policies for government information systems.
[26] 144 States, as of June 2025.
[27] In a 2025 interview, David Luan, head of Amazon’s AGI research lab, estimated the number of people he would trust “with a giant dollar amount of compute” to develop a frontier model at “sub-150.”
Article X: Information Consolidation and Challenge Inspections
- A key source of information for the ISIA will be the independent information-gathering efforts of Parties. As such, the Information Consolidation division (Article III) will be ready to receive this information.
- The Information Consolidation division shall take precautions to protect commercial, industrial, security, and state secrets and other confidential information coming to its knowledge in the implementation of the Treaty, including by maintaining secure, confidential, and optionally anonymous reporting channels.
- For the purpose of providing assurance of compliance with the provisions of this Treaty, each Party shall use National Technical Means (NTM) of verification at its disposal in a manner consistent with generally recognized principles of international law.
- Each Party undertakes not to interfere with the National Technical Means of verification of other Parties operating in accordance with the above.
- Each Party undertakes not to use deliberate concealment measures which impede verification by national technical means of compliance with the provisions of this Treaty.
- Parties are encouraged, but not obligated, to cooperate in the effort to detect dangerous AI activities in non-Party countries, including by supporting the NTM of other Parties directed at non-Parties, as relevant to this Treaty.
- Another key source of information for the ISIA is individuals who provide evidence of dangerous AI activities to the ISIA. These individuals are subject to whistleblower protections.
- This Article establishes protections, incentives, and assistance for individuals (“Covered Whistleblowers”) who, in good faith, provide the ISIA or a Party with credible information concerning actual, attempted, or planned violations of this Treaty or other activities that pose a serious risk of human extinction, including concealed chips, undeclared datacenters, prohibited training or research, evasion of verification, or falsification of declarations. Covered Whistleblowers include employees, contractors, public officials, suppliers, researchers, and other persons with material information, as well as Associated Persons (family members and close associates) who assist or are at risk due to the disclosure.
- Parties shall prohibit and prevent retaliation against Covered Whistleblowers and Associated Persons, including but not limited to dismissal, demotion, blacklisting, loss of benefits, harassment, intimidation, threats, civil or criminal actions, visa cancellation, physical violence, imprisonment, restriction of movement, or other adverse measures. Any contractual terms (including non‑disclosure or non‑disparagement agreements) purporting to limit protected disclosures under this Treaty shall be void and unenforceable. Mistreatment of whistleblowers shall constitute a violation of this Treaty and be handled under Article XI, Paragraph 3.
- The ISIA shall maintain secure, confidential, and optionally anonymous reporting channels. Parties shall establish domestic channels interoperable with the ISIA system. The ISIA and Parties shall protect the identity of Covered Whistleblowers and Associated Persons and disclose it only when strictly necessary and with protective measures in place. Unauthorized disclosure of protected identities shall constitute a violation of this Treaty and be handled under Article XI, Paragraph 3.
- Parties shall offer asylum or humanitarian protection to Covered Whistleblowers and their families, provide safe‑conduct travel documents, and coordinate secure transit.
- The ISIA may conduct challenge inspections of suspected sites upon credible information about dangerous AI activities.
- Parties may request that the ISIA perform a challenge inspection. The Executive Council, either upon such a request or on the basis of analysis provided by the Information Consolidation division, will consider the information at hand in order to request additional information from Parties or non-Parties, to propose a challenge inspection, or to decide that no further action is warranted.
- A challenge inspection requires approval by a majority of the Executive Council.
- Access to a suspected site must be granted by the nation in whose territory the site is located within 24 hours of the ISIA calling for a challenge inspection. During this time, the site may be surveilled, and any people or vehicles leaving the site may be inspected by officials from a signatory Party or the ISIA.
- The challenge inspection will be conducted by a team of officials from the ISIA who are approved by both the Party being inspected and the Party that called for the inspection. The ISIA is responsible for working with Parties to maintain lists of approved inspectors for this purpose.
- Challenge inspections may be conducted in a given Party’s territory at most 20 times per year, and this limit can be changed by a majority vote of the Executive Council.
- Inspectors will take the utmost care to protect the sensitive information of the inspected state, passing along to the Executive Council only information pertinent to the Treaty.
Article XI: Dispute Resolution
- Any Party (“Concerned Party”) may raise concerns regarding the implementation of this Treaty, including concerns about ambiguous situations or possible non-compliance by another Party (“Requested Party”). This includes misuse of Protective Actions (Article XII).
- The Concerned Party shall notify the Requested Party of its concern, while also sharing the concern with the Director-General and Executive Council. The Requested Party shall acknowledge this notification within 36 hours and provide clarification within 5 days.
- If the issue is not resolved, the Concerned Party may request that the Executive Council assist in adjudicating and clarifying the concern. This may include the Concerned Party requesting a challenge inspection in accordance with Article X.
- The Executive Council shall provide appropriate information in its possession relevant to such a concern.
- The Executive Council may task the Technical Secretariat to compile additional documentation, convene closed technical sessions, and recommend resolution measures.
- If the Executive Council determines there was a Treaty violation, it can take actions to prevent dangerous AI activities or reprimand the Requested Party. These actions may include:
- Require additional monitoring or restrictions on AI activities
- Require relinquishment of AI hardware
- Call for sanctions
- Recommend Parties take Protective Actions under Article XII
Article XII: Protective Actions
- Recognizing that the development of ASI or other Dangerous AI Activities, as laid out in Articles IV through IX, would pose a threat to global security and to the lives of all people, it may be necessary for Parties to this Treaty to take drastic actions to prevent such development. The Parties recognize that the development of artificial superintelligence (ASI) anywhere on Earth would be a threat to all Parties. Under Article 51 of the United Nations Charter and longstanding precedent, states have a right to self-defence. Due to the scale and speed of ASI-related threats, self-defence may require pre-emptive action to prevent the development of ASI.
- To prevent the development or deployment of ASI, this Article authorizes tailored Protective Actions. Where there is credible evidence that a State or other actor (whether a Party or a non‑Party) is conducting or imminently intends to conduct activities aimed at developing or deploying ASI in violation of Article I, Article IV, Article V, Article VI, Article VII, or Article VIII, a State Party may undertake Protective Actions that are necessary and proportionate to prevent those activities. In recognition of the harms and escalatory nature of Protective Actions, they should be used only as a last resort. Outside of emergencies and time-sensitive situations, Protective Actions shall be preceded by other approaches such as, but not limited to:
- Trade restrictions or economic sanctions
- Asset restrictions
- Visa bans
- Appeal to the UN Security Council for action
- Protective Actions may include measures such as cyber operations to sabotage AI development, interdiction or seizure of covered chip clusters, military actions to disable or destroy AI hardware, and physical disablement of specific facilities or assets directly enabling AI development.
- Parties shall minimize collateral harm, including to civilians and essential services, wherever practical, subject to mission requirements.
- Protective Actions shall be strictly limited to preventing ASI development or deployment and shall not be used as a pretext for territorial acquisition, regime change, resource extraction, or broader military objectives. Permanent occupation or annexation of territory is prohibited. Action will cease upon verification by the ISIA that the threat no longer exists.
- Each Protective Action shall be accompanied, at initiation or as soon as security permits, by a public Protective Action Statement that:
- Explains the protective purpose of the action;
- Identifies the specific AI‑enabling activities and assets targeted;
- States the conditions for cessation;
- Commits to cease operations once those conditions are met.
- Protective Actions shall terminate without delay upon any of the following:
- ISIA certification that the relevant activities have ceased.
- Verified surrender or destruction of covered chip clusters or ASI‑enabling assets, potentially including the establishment of sufficient safeguards to prevent Restricted Research activities.
- A determination by the acting Party, communicated to the ISIA, that the threat has abated.
- Parties shall not regard measured Protective Actions taken by another Party under this Article as provocative acts, and shall not undertake reprisals or sanctions on that basis. Parties agree that Protective Actions meeting the above requirements shall not be construed as an act of aggression or justification for the use of force.
- The Executive Council shall review each Protective Action for compliance with this Article and report to the Conference of the Parties. If the Executive Council finds that an action was not necessary, proportionate, or properly targeted, actions may be taken under Article XI, Paragraph 3.
Article XIII: ISIA Reviews
- For AI models created via declared training or post‑training within the limits of Article IV, the ISIA may require evaluations and other tests. These tests will inform whether the thresholds set in Article IV, Article V, Article VII, and Article VIII need to be revised. The methods used for reviews will be determined by the ISIA and may be updated.
- Evaluations shall be conducted at ISIA facilities or monitored CCCs, by ISIA officials. Officials from Treaty Parties may be informed which tests are conducted, and the ISIA may provide a summary of the test results. Parties will not gain access to AI models they did not train, except when granted access by the model owner, and the ISIA will take steps to ensure the security of sensitive information.
- The ISIA may share detailed information with Parties or the public, if the Director-General deems that this may be necessary to reduce the chance of human extinction from advanced AI.
[28] Article VII.F of the IAEA Statute states that “[...] subject to their responsibilities to the Agency, [the Director General and the staff] shall not disclose any industrial secret or other confidential information coming to their knowledge by reason of their official duties for the Agency.”
Article XIV: Treaty Revision Process
- Any State Party may propose amendments to this Treaty. “Amendments” are revisions to the main body and Articles of the Treaty, including revisions to the purposes of the Articles. Under Article III, the ISIA Technical Secretariat, with a majority vote from the Executive Council, may change specific definitions and implementation methods, such as those relevant to Article IV, Article V, Article VI, Article VII, Article VIII, Article IX, and Article X. Fundamental revisions to the purposes of these Articles or to voting procedures require an Amendment.
- Such proposed amendments will be submitted to the ISIA Director-General and circulated to the State Parties.
- For an amendment to be formally considered, one third or more of the State Parties must support its consideration.
- Amendments to the main body of the treaty are not ratified until accepted by all State Parties (with no negative votes).
- If the Executive Council recommends to all States Parties that the proposal be adopted, the changes will be considered approved if no State Party rejects it within 90 days.
- Three years after the entry into force of this Treaty, a Conference of the Parties shall be held in Geneva, Switzerland, to review the operation of this Treaty with a view to assuring that the purposes of the Preamble and the provisions of the Treaty are being realized. At intervals of three years thereafter, Parties to the Treaty will convene further conferences with the same objective of reviewing the operation of the Treaty.
Article XV: Withdrawal and Duration
- The Treaty shall be of unlimited duration.
- Each Party shall in exercising its national sovereignty have the right to withdraw from the Treaty if it decides that extraordinary events, related to the subject matter of this Treaty, have jeopardized the supreme interests of its country. It shall give notice of such withdrawal to the ISIA 12 months in advance.
- During this 12-month period, the withdrawing state shall cooperate with ISIA efforts to certify that, after withdrawal, it will be unable to develop, train, post-train, or deploy dangerous AI systems, including ASI or systems above the Treaty thresholds. Withdrawing states acknowledge that such cooperation aids the ISIA and Parties in avoiding the use of Article XII.
- In particular, the withdrawing state, under ISIA oversight, will remove all covered chip clusters and ASI-enabling assets (e.g., advanced computer chip manufacturing equipment) from its territory to ISIA-approved control or render them permanently inoperable (as described in Article V).
- Nothing in this Article limits the applicability of Article XII. A State that has withdrawn (and is therefore a non-Party) remains subject to Protective Actions if credible evidence indicates activities aimed at ASI development or deployment.
[29] Sometimes they are superseded by other treaties. This was the case for the 1947 General Agreement on Tariffs and Trade (GATT); it was superseded by the 1994 Marrakesh agreement, which incorporated the rules from GATT but established the World Trade Organization (WTO) to replace GATT’s institutional structure. Treaties of unlimited duration also sometimes end when parties withdraw in a manner that makes the treaty ineffective. For example, the U.S. and USSR initially agreed to the 1987 Intermediate-Range Nuclear Forces (INF) Treaty for an unlimited duration, but the U.S. withdrew in 2019 citing Russian non-compliance, and Russia later announced it would no longer abide by the treaty in 2025.