AI and the A-bomb: What the analogy captures and misses

By Kevin Klyman, Raphael Piliero | September 9, 2024

Illustration by Thomas Gaulkin / Aha-Soft / depositphotos.com

When OpenAI released ChatGPT in the fall of 2022, generative AI went global, gaining one million users in days and 100 million in months. As the world began to grapple with AI’s significance, policymakers asked: Will artificial intelligence change the world or destroy it? Would AI democratize access to information, or would it be used to rapidly spread disinformation? In military hands, could it spawn “killer robots” that make wars easier to wage?

Technologists and bureaucrats scrambled to find ways to understand and forecast generative AI’s impact. What other revolutionary technological achievement combined the hope of human advancement with the lingering dangers of massive societal destruction? The obvious analogue was nuclear weapons. Within months, some of the leading scientists in machine learning signed a letter that claimed “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Elon Musk went further, stating AI poses a “significantly higher risk” than nuclear weapons. United Nations Secretary-General António Guterres proposed creating an International Atomic Energy Agency equivalent to promote the safe use of AI technology, while OpenAI CEO Sam Altman has suggested a Nuclear Regulatory Commission for AI, akin to the agency that regulates the operation of nuclear plants in the United States.

While the nuclear analogy offers some lessons for mitigating the risks of artificial intelligence systems, the differences between the two technologies are more significant than the similarities. Unlike nuclear weapons, AI development is diffuse, spanning dozens of firms and countries, making it far more challenging to limit the technology’s spread.

To address the unique challenges of widespread advanced AI systems, policymakers can start by studying attempts to craft controls in other domains with emerging or dangerous technologies, like cyberspace, outer space, and biological weapons. Instead of trying to lock down AI development, policymakers should focus primarily on crafting norms for responsible behavior. Specific measures might include major players committing to limit the use of AI in influence operations, requiring companies to conduct risk assessments, and holding countries responsible for the activities of their commercial entities, measures derived from successes in other areas where policymakers managed to contain the impact of dangerous technologies. Adversary nations can share safety-enhancing technologies for AI in ways similar to how the Americans and the Soviets shared expertise that restricted access to nuclear weapons.

More different than alike. For all the differences, one fortunate similarity exists for those hoping to control nuclear weapons and AI: The development of both relies on a defined technological process. Finite physical inputs are necessary to create both a nuclear explosion and a large language model. Creating a nuclear reaction rests on the enrichment of uranium through centrifuges, or the reprocessing of spent fuel to extract plutonium. Similarly, training and deploying an advanced AI model requires the use of advanced AI chips or graphics processing units (GPUs). Without these inputs, there can be no end product—no nuclear bombs or AI models. Just as few states possessed the means to enrich or reprocess nuclear fuel at an industrial scale, few actors currently have access to the compute required to build AI models, with the United States and China controlling more than two-thirds of the world’s supply.

But this is where the important similarities end. The sharp distinction between civil and military nuclear programs, which has no analogue for AI, makes nonproliferation efforts much easier. While nuclear technology is dual-use—nuclear reactions can create energy or explosions—there remain hard lines between peaceful and military use. Nuclear energy facilities can safely run nuclear fuel through reactors without enriching uranium to bomb-grade or reprocessing plutonium, the two processes unique to bomb-making that have no peaceful purpose. This is why the United States pursues nuclear energy arrangements with nearly 50 countries that aim to prevent proliferation, where counterparts receive nuclear fuel and run it through light-water reactors, but agree not to enrich or use heavy-water reactors. In contrast, no narrow capability or “mode” separates peaceful and harmful AI systems. The same system that helps scientists synthesize new drug compounds could also assist a terrorist in making neurotoxins, while a platform aiding businesses in quickly writing mass emails could just as easily generate disinformation for botnets.

Although steps can be taken to prevent the spread and transfer of nuclear technology to other actors, the digital nature of AI models makes any analogous restraint difficult. Aiding another country with the development of an indigenous nuclear program—or transferring already-made weapons—is no small task. The large movement of assets like reactors, fuel, centrifuges, and scientists across borders is relatively easy to notice. Compounding the difficulty is the relatively small number of nuclear states, with their capabilities under the microscope of intelligence agencies worldwide. Delivering a nuclear weapon requires mounting it on a missile or delivering it by airplane, options that export control regimes like the Missile Technology Control Regime limit. In contrast, once an AI model is developed and released openly, very little can be done to prevent its widespread use and proliferation. Common guardrails that are placed on AI systems are easy and inexpensive to remove by “fine-tuning” the model and providing it with additional dangerous data, meaning that even companies that invest heavily in safety and do not openly release their models can provide few assurances to national-security decision makers.

Magnifying this problem is the relative decentralization of the AI industry, which is spread not across a handful of mostly wealthy states but across the entire private sector. Even if governments enact harsh restrictions on AI, they lack a monopoly on the chips, talent, and scientific know-how needed to build advanced AI models. In 2023, 51 notable machine learning models were built by firms worldwide, while just two were built by governments. With investment in AI at an all-time high, the incentives to find loopholes in government controls have never been higher, as evidenced by Nvidia’s repeated circumvention of US semiconductor export controls.

Learning lessons. After inventing the ability to destroy all of humanity, the global community managed to prevent the use and limit the spread of the world’s deadliest weapons. Today’s policymakers can only hope to have a fraction of the success in controlling AI that their predecessors had in the nuclear age.

As the chart below shows, nuclear weapons and AI do have many similarities: Both present catastrophic risks, are dual-use technologies with peaceful and violent uses, and emerged against a backdrop of great power competition.

SIMILARITIES: Nuclear weapons and AI both …
… pose catastrophic risks.
… are dual use.
… have great-power edge.
… led to arms races.
… can cause mutual assured destruction.
… have military applications.
… pose risk of miscalculation.
… can cause revolution in warfighting.
… are rapidly evolving technologies.
… can be acquired by third parties.

DIFFERENCES: Only AI …
… is general purpose.
… isn’t highly regulated.
… has advantages that don’t plateau.
… is decentralized.
… is often openly available.
… has not yet caused mass deaths.
… is led by the private sector.
… is reinforced by emerging tech.
… can resemble human intelligence.
… is becoming much cheaper.


Policymakers seeking to draw on the nuclear analogy for AI should pursue dialogue with adversary nations and universal safety measures for dual-use technologies. Just as the United States and the Soviet Union succeeded in evading nuclear catastrophe through dialogue, so too can the United States and China avert the most catastrophic outcomes from AI. Though America and China do not have the absolute monopoly over AI that the United States and the Soviet Union had over nuclear weapons, they do have the pole position, with the vast majority of the world’s computing power, allowing the two powers to decide what controls should be placed on advanced AI systems.

Negotiations can also pave the way for the United States and China to share safety-enhancing technologies for AI. In the nuclear age, the Americans shared with the Soviet Union a technology known as permissive action links, which restricted access to nuclear weapons by cryptographically locking them. Former Deputy National Security Advisor Jason Matheny has said: “With our competitors, we need to find effectively the permissive action link for AI. That is a safety technology that you would want your competitors to use just as you’d want yourself to use it.”

Imagining alternatives. Nuclear weapons are not the only metaphor that policymakers can draw upon. Researchers at the University of Cambridge suggest 55 different analogies for AI, arguing that the unprecedented nature of the technology requires drawing insights from fields across science, policy, and law. Why not, for instance, draw lessons from highly regulated fields (like aviation, finance, and pharmaceuticals), regulation at the intersection of science and dangerous materials (like biosecurity), or emerging technologies where actors have negotiated international accords against the backdrop of great-power competition (like outer space and cyber)? No one analogy is complete, but this is precisely our point: Analysts should recognize that, in many critical respects, AI is not analogous to anything we have seen before.

One aspect of AI that can be controlled is compute: Only a small number of countries make GPUs powerful enough to train advanced AI models. Technological restrictions can limit who can build these models, as doing so is highly resource intensive. A regime for governing dangerous applications of AI might aim at controlling who can build cutting-edge models. But if and when those models are developed, preventing them from being capable of misuse (or from spreading) will be next to impossible due to the nature of software. Many models just one generation behind the cutting edge are freely available for download online. Policymakers should draw two lessons from alternative domains.

First, even when technologies cannot be contained, state behaviors can nonetheless be constrained. Consider cyber: The United States and China cannot stop one another from acquiring the capability to carry out cyberattacks, yet in 2015 Presidents Obama and Xi pledged not to use offensive cyber operations to steal intellectual property. The United States and China should issue a similar joint statement agreeing to limit the use of AI in influence operations.

Likewise, military capabilities on the seas and in outer space will always exist and pose threats to other countries. Agreements to limit their use offer helpful precedents for AI governance. For example, the United States-Soviet Incidents at Sea Agreement restrains military behaviors, such as operating in close proximity, rather than the underlying technologies that might enable such behaviors, making the agreement more durable as technologies evolve. The Incidents at Sea Agreement also requires that states notify the other side of dangerous maneuvers in advance, a mechanism that could be repurposed for notification of dangerous AI incidents or the start of large training runs. The Outer Space Treaty is another useful model. It allows states to develop powerful capabilities but limits how they are used: Countries are not allowed to position nuclear weapons or military bases in space. Moreover, countries are held responsible for the activities of their commercial entities, creating incentives for governments to compel responsible private-sector behavior in space. If the country behind a catastrophic AI system were held accountable by the international community, then nations like China that are “flirting with AI catastrophe” might be incentivized to behave differently.

Second, governments should require risk assessments by companies, similar to those done for dangerous biological research. In particular, governments should consider requiring companies that develop advanced AI systems for commercial use to conduct rigorous pre-deployment risk assessments in secure facilities. If scientists were to develop an AI system capable of enabling a massive cyberattack, they could be held to standards similar to those applied to researchers carrying out gain-of-function research on lethal pathogens. The Biological Weapons Convention also offers a model that may be feasible for international control of AI systems: While there is no global agency responsible for inspections and enforcement under the Convention, creating tiers for labs of concern and separating the powers of the agencies responsible for authorizing labs from those responsible for shutting them down may be important steps.

AI governance is as thorny as it is complex. While the desire to copy and paste from the nuclear playbook is understandable, it will not succeed in preventing the most dangerous applications of advanced AI models. Nuclear governance was a response to the nuclear annihilation of Hiroshima and Nagasaki. People must hope that policymakers will act to regulate AI without needing to experience a similar catastrophe.


