Why the IAEA model may not be best for regulating artificial intelligence

By Ian J. Stewart | June 9, 2023

OpenAI, the company behind ChatGPT and an apparent advocate for strong regulation in the artificial intelligence space, recently suggested that an International Atomic Energy Agency-like model may be needed for AI regulation.

On the face of it, the IAEA might seem like a reasonable model for AI regulation. The IAEA system of safeguards has developed over time and provides confidence that safeguarded materials are not diverted to weapons-related end uses and that safeguarded nuclear facilities are not misused for weapons development. But the nuclear governance model is in fact not a good one for regulating artificial general intelligence.

While one might argue that both nuclear weapons and artificial intelligence have the potential to destroy the world, the modality and pathways of such a catastrophe are far less clear for AI than for nuclear technology. Many focus on the idea that an AI could somehow develop or take over military capabilities, including nuclear weapons (think Skynet), but credible paths through which AI could destroy the world have not yet been established. Work to identify such paths is underway and must continue.

Nonetheless, the imperative of addressing urgent global threats is what drove the evolution of nonproliferation and safeguards measures. A key lesson from the nuclear domain is therefore that consensus around the credibility and definition of a global challenge is necessary before states will take collective action to address it.

The evolution of nuclear regulation. The current mandate of the IAEA—assisting states in leveraging nuclear technology for peaceful purposes, developing safety and security standards, and verifying that states comply with their commitments not to build nuclear weapons—took several decades to evolve. Early efforts to manage the atom through international control (specifically the UN Atomic Energy Commission) were unsuccessful, and international nuclear governance instead developed in an ad-hoc way in response to repeated crises. This ad-hoc model can be traced directly to the failure of countries to cooperate in the 1940s and to the launch of the US Atoms for Peace initiative in the 1950s, which, because of Cold War geopolitics, proceeded without an associated effective system of control. The result was a system that needed to be repeatedly patched as gaps and failures became apparent through real-world cases of nuclear proliferation.

This cycle of cooperation—gap—proliferation—lessons learned—cooperation (and so on) has been an enduring feature of the nuclear age and is one that must be avoided if AI is to be effectively regulated.

There are clearly many important differences between the nuclear research space and the AI space, including the fact that nuclear material and technology have physical form whereas artificial intelligence is digital and thus largely intangible. There is another fundamental difference that will make it even more difficult to address the collective action problem: In the nuclear sphere, the principal actors are generally governments. This was certainly true in the early nuclear age, when initial efforts to devise systems of control were undertaken. Only governments could develop nuclear weapons, and, throughout much of the atomic age, the role of the commercial sector was limited.

AI is likely to have transformative implications for governments, the military, and the intelligence community. In the AI sphere, however, it seems clear that private enterprises are ahead of governments in most areas of machine intelligence and will likely remain so. This private-sector dominance produces strong lobbying forces that were largely absent from the early nuclear age. Efforts to forge corporate cooperation to address collective action problems in other domains—such as the environmental sphere and its so-called tragedy of the commons—have not been notably successful.

It is also not clear that AI can be safeguarded in a way that’s comparable to nuclear materials. Nuclear safeguards focus primarily on accounting for physical nuclear material (that is, fissile material). Without nuclear materials, you cannot make nuclear weapons. The IAEA uses this as its starting point, taking declarations from countries about their nuclear materials and then seeking verification—via material accountancy, inspection, trade analysis, and open-source information—to confirm the correctness and completeness of the state’s declarations.

In the AI space, the key elements of production are training data, the computing capacity to train the AI model, the trained model itself (typically a computer file, perhaps accompanied by separate files containing the model weights, or parameters, and other algorithmic data), and the services offered on top of it. While the United States has moved to restrict the export of computing capability to China, it is not clear that any of the elements used in creating AI technology are “safeguardable” in the way that nuclear material is, particularly given AI’s intangible (that is, non-physical) character. This is not to say that export controls and other measures should not be used to restrict the export of military-relevant AI capabilities. Indeed, given the nature of export controls, many specific military-relevant applications of AI are likely already controlled.

In practice, however, there is nothing to stop anyone—a state, a company, or even a non-state actor—from seeking to train a model. This possibility will only grow as models and model weights are leaked or released on an open-source basis and as high-performance compute capability spreads. For this reason, though it might be possible to constrain and monitor the capability to train high-performance large language models in the immediate future because of the extreme compute capacity required, the practicality of such an approach will decline over time.

The IAEA’s peaceful uses and safeguards mandates were the first to emerge and are generally thought of as the agency’s key missions. But it is worthwhile to also examine the IAEA’s mandate related to safety and security, which has emerged and grown over time. Here, the IAEA plays the role of a standards-coordinating body and independent reviewer. The IAEA undertakes International Physical Protection Advisory Service (IPPAS) missions, which assess the regulatory preparedness of countries preparing to embark on nuclear programs. In thinking about lessons for the AI space, it might be this setting of common standards and assessment by an independent third party that is most relevant. This model of governance is not unique to the IAEA. In fact, it is such a common model of international governance that it raises the question of why the IAEA would be an appropriate exemplar, rather than, for example, the Financial Action Task Force, which implements a similar mechanism in the financial sphere.

Identifying the existential threat AI might pose. It is promising to see leaders in the AI space working proactively to identify and mitigate the risks posed by the technology. Given the profound destructive potential of both nuclear technology and AI, it is right that the nuclear field be examined for lessons for the AI field. But the lessons one can take from the nuclear field are not all (or perhaps even generally) positive. The lack of clarity around AI’s specific threat pathway—how precisely it might pose an existential threat—is a challenge and a clear point of distinction between it and nuclear weapons.

It took decades to build an effective system of control for atomic energy even with a common view of the risk. If AI does pose a threat to the future of humanity, we cannot afford a similarly slow approach to devising controls on AI. This line of reasoning leads to one clear conclusion: An intense focus is needed on identifying possible pathways through which AI could threaten humanity. Only when specific pathways have been identified can one start to map out control approaches and determine whether there is a role for an organization similar to the International Atomic Energy Agency.

