
Humans should teach AI how to avoid nuclear war—while they still can

By Cameron Vega, Eliana Johns | July 22, 2024

The systemic use of AI-enabled technology in nuclear strategy, threat prediction, and force planning could erode human skills and critical thinking over time—and even lure policymakers and nuclear planners into believing that a nuclear war can be won. (Image: Screenshot from the 1983 movie WarGames, Metro-Goldwyn-Mayer)

When considering the potentially catastrophic impacts of military applications of Artificial Intelligence (AI), a few deadly scenarios come to mind: autonomous killer robots, AI-assisted chemical or biological weapons development, and the 1983 movie WarGames.

The film features a self-aware, AI-enabled supercomputer that simulates a Soviet nuclear launch and convinces US nuclear forces to prepare a retaliatory strike. The crisis is only partly averted when the main (human) characters persuade US forces to wait for the supposed Soviet strike to hit before retaliating; the incoming attack turns out to be a simulation fabricated by the fully autonomous AI program. The computer then attempts to launch a real nuclear strike on the Soviets without human approval until it is hastily taught the concept of mutually assured destruction, after which it concludes that nuclear war is a no-win scenario: “Winner: none.”

US officials have stated that an AI system would never be given US nuclear launch codes or the ability to take control over US nuclear forces. However, AI-enabled technology will likely become increasingly integrated into nuclear targeting and command and control systems to support decision-making in the United States and other nuclear-armed countries. Because US policymakers and nuclear planners may use AI models in conducting analyses and anticipating scenarios that may ultimately influence the president’s decision to use nuclear weapons, the assumptions under which these AI-enabled systems operate require closer scrutiny.

Pathways for AI integration. The US Defense Department and Energy Department already employ machine learning and AI models to make calculation processes more efficient, including for analyzing and sorting imagery from reconnaissance satellites and improving nuclear warhead design and maintenance processes. The military is increasingly forward-leaning on AI-enabled systems. For instance, it initiated a program in 2023 called Stormbreaker that strives to create an AI-enabled system called “Joint Operational Planning Toolkit” that will incorporate “advanced data optimization capabilities, machine learning, and artificial intelligence to support planning, war gaming, mission analysis, and execution of all-domain, operational level course of action development.” While AI-enabled technology presents many benefits for security, it also brings significant risks and vulnerabilities.

One concern is that the systemic use of AI-enabled technology and an acceptance of AI-supported analysis could become a crutch for nuclear planners, eroding human skills and critical thinking over time. This is particularly relevant when considering applications for artificial intelligence in systems and processes such as wargames that influence analysis and decision-making. For example, NATO is already testing and preparing to launch an AI system designed to assist with operational military command and control and decision-making by combining an AI wargaming tool and machine learning algorithms. It is still unclear how this system will affect decision-making by the United States, the United Kingdom, and NATO’s Nuclear Planning Group concerning US nuclear weapons stationed in Europe. Even so, this type of AI-powered analytical tool would need to account for the escalation factors inherent to nuclear weapons and could be used to inform targeting and force structure analysis or to justify politically motivated strategies.

The role given to AI technology in nuclear strategy, threat prediction, and force planning reveals much about how nuclear-armed countries view nuclear weapons and nuclear use. Any AI model is programmed under certain assumptions and trained on selected data sets. This is also true of AI-enabled wargames and decision-support systems tasked with recommending courses of action for nuclear employment in any given scenario. Based on these assumptions and data sets alone, the AI system would have to assist human decision-makers and nuclear targeters in estimating whether the benefits of nuclear employment outweigh the costs and whether a nuclear war is winnable.

Do the benefits of nuclear use outweigh the costs? Baked into the law of armed conflict is a fundamental tension between any particular military action’s gains and costs. Though fiercely debated by historians, the common understanding of the US decision to drop two atomic bombs on Japan in 1945 demonstrates this tension: an expedited victory in East Asia in exchange for hundreds of thousands of Japanese casualties.

Understanding how an AI algorithm might weigh the benefits and costs of escalation depends on how it integrates the country’s nuclear policy and strategy. Several factors contribute to a country’s nuclear doctrine and targeting strategy—ranging from fear of the consequences of breaking the tradition of non-use of nuclear weapons, to concern about radioactive contamination of a coveted territory, to sheer deterrence in the face of possible nuclear retaliation by an adversary. While strategy itself is derived from political priorities, military capabilities, and perceived adversarial threats, nuclear targeting incorporates these factors as well as many others, including the physical vulnerability of targets, overflight routes, and the accuracy of delivery vehicles—all aspects to consider further when making decisions about force posture and nuclear use.

In the case of the United States, much remains classified about its nuclear decision-making and cost analysis. It is understood that, under guidance from the president, US nuclear war plans target the offensive nuclear capabilities of certain adversaries (both nuclear and non-nuclear armed) as well as the infrastructure, military resources, and political leadership critical to post-attack recovery. But while longstanding US policy has been to “not purposely threaten civilian populations or objects” and “not intentionally target civilian populations or targets in violation of [the law of armed conflict],” the United States has previously acknowledged that “substantial damage to residential structures and populations may nevertheless result from targeting that meets the above objectives.” This is in addition to the fact that the United States is the only country to have used nuclear weapons against civilians in war.

There is limited public information with which to infer how an AI-enabled system would be trained to consider the costs of nuclear detonation. Certainly, any plans for nuclear employment are determined by a combination of mathematical targeting calculations and subjective analysis of social, economic, and military costs and benefits. An AI-enabled system could improve some of these analyses in weighing certain military costs and benefits, but it could also be used to justify existing structures and policies or to further ingrain biases and risk acceptance into the system. These factors, along with the speed of operation and the inherent challenges of distinguishing between data sets and their origins, could also increase the risks of escalation—either deliberate or inadvertent.

Is a nuclear war “winnable”? Whether a nuclear war is winnable depends on what “winning” means. Policymakers and planners may define winning as merely the benefits of nuclear use outweighing the cost when all is said and done. When balancing costs and benefits, the benefits need only be one “point” higher for an AI-enabled system to deem the scenario a “win.”
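
To illustrate how brittle such a threshold can be, here is a deliberately naive sketch in Python of the kind of one-point decision rule described above. Everything in it is hypothetical: the scoring categories, the numbers, and the rule itself are placeholders for illustration, not real targeting data or the logic of any actual system.

```python
# Deliberately naive sketch of a one-point "win" threshold.
# All categories and scores are hypothetical placeholders.

def naive_verdict(benefits: dict[str, float], costs: dict[str, float]) -> str:
    """Declare a 'win' whenever total benefits exceed total costs by any margin."""
    net = sum(benefits.values()) - sum(costs.values())
    return "win" if net > 0 else "no win"

benefits = {"military_objective": 60, "deterrence_signal": 41}
costs_narrow = {"immediate_casualties": 100}                 # longer-term effects left out
costs_broad = {**costs_narrow, "climatic_and_economic": 80}  # longer-term effects included

print(naive_verdict(benefits, costs_narrow))  # "win" -- by exactly one point
print(naive_verdict(benefits, costs_broad))   # "no win"
```

The point is not the arithmetic but its sensitivity: the verdict flips on a single point, and it changes entirely depending on which costs the model was built to count in the first place.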

In this case, “winning” may be defined in terms of national interest without consideration of other threats. A Pyrrhic victory could jeopardize national survival immediately following nuclear use and still be scored as a win by the AI algorithm. And once a nuclear weapon has been used, an AI system might either recommend against any further nuclear use or, on the contrary, recommend using nuclear weapons on a broader scale to eliminate remaining threats or to preempt further nuclear strikes.

“Winning” a nuclear war could also be defined in much broader terms. The effects of nuclear weapons go beyond the immediate destruction within their blast radius; there would be significant societal implications from such a traumatic experience, including potential mass migration and economic catastrophe, in addition to dramatic climatic damage that could result in mass global starvation. Depending on how damage is calculated and how much weight is placed on long-term effects, an AI system may determine that a nuclear war itself is “unwinnable” or even “unbearable.”

Uncovering biases and assumptions. The question of costs and benefits is relatively uncontroversial in that all decision-making involves weighing the pros and cons of any military option. However, it is still unknown how an AI system will weigh these costs and benefits, especially given the difficulty of comprehensively modeling all the effects of nuclear weapon detonations. At the same time, the question of winning a nuclear war has long been a thorn in the side of nuclear strategists and scholars. All five nuclear-weapon states confirmed in 2022 that “a nuclear war cannot be won and must never be fought.” For them, planning to win a nuclear war would be considered inane and, therefore, would not require any AI assistance. However, deterrence messaging and discussion of AI applications for nuclear planning and decision-making illuminate the belief that the United States must be prepared to fight—and win—a nuclear war.

The use of AI-assisted nuclear decision-making has the potential to reveal and exacerbate the biases and beliefs of policymakers and strategists, including the oft-disputed idea that nuclear war can be won. AI-powered analysis incorporated into nuclear planning or decision-making processes would operate on assumptions about the capabilities of nuclear weapons as well as their estimated costs and benefits, in the same way that targeters and planners have done for generations. Some of these assumptions could concern missile performance, delivery accuracy, radiation effects, adversary responses, and whether nuclear arms control or disarmament is viable.

Not only are there risks of inherent bias in AI systems, but this technology can also be purposely designed with bias. Nuclear planners have historically underestimated the damage caused by nuclear weapons in their calculations, so an AI system fed that data to make recommendations could likewise systematically underestimate the costs of nuclear employment and inflate the number of weapons deemed necessary for targeting purposes. There is also a non-zero chance that nuclear planners could poison the data so that an AI program recommends certain weapons systems or strategies.
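
As a minimal sketch of that mechanism, consider the following Python illustration, in which every number is invented: if the historical damage estimates a model is fit to are systematically low, the model reproduces the underestimate and inflates the apparent weapons requirement.

```python
# Minimal illustration of bias propagation: a model fit to systematically
# low damage estimates inherits the bias. All numbers are invented.
import random

random.seed(0)

true_damage_per_unit = 1.0   # hypothetical "ground truth" damage per unit of yield
reporting_bias = 0.6         # historical estimates assumed to capture only 60 percent of it

# Hypothetical training set: (yield, reported damage) pairs with the bias baked in.
training = [(y, y * true_damage_per_unit * reporting_bias + random.gauss(0, 0.05))
            for y in (1, 2, 5, 10, 20)]

# Least-squares fit through the origin: the learned slope inherits the bias.
learned = sum(y * d for y, d in training) / sum(y * y for y, _ in training)

required_effect = 12.0       # hypothetical damage level planners believe they need
print(f"learned damage per unit of yield: {learned:.2f} (true value: {true_damage_per_unit})")
print(f"weapons 'needed' per the model: {required_effect / learned:.1f}; "
      f"with unbiased data: {required_effect / true_damage_per_unit:.1f}")
```

Nothing about the fitting procedure is at fault here; the bias enters through the data, which is exactly why it is hard to detect from the model’s outputs alone.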

During peacetime, recommendations based on analysis by AI-enabled systems could also be used to help justify budgets, capabilities, and force structures. For example, an AI model that is trained on certain assumptions and possibly underestimates nuclear damage and casualties may recommend increasing the number of deployed warheads, which will be legally permissible after New START—the US-Russian treaty that limits the two countries’ deployed long-range nuclear forces—expires in February 2026. The inherent trust placed in computers by their users is also likely to lend undue credibility to AI-supported recommendations, which policymakers and planners could use to veil their own preferences behind the supposed objectivity of a computer’s outputs.

Despite this heavy skepticism, advanced AI/machine learning models could still provide a means of sober calculation in crisis scenarios, where human decision-making is often clouded, rushed, or prone to fallacies. However, this requires that the system be fed accurate data, be shaped by frameworks that support good-faith analysis, and be used with an awareness of its limitations. Rigorous training on nuclear strategy for the “humans in the loop,” as well as on methods for interpreting AI-generated outputs—that is, accounting for all their limitations and embedded biases—could also help mitigate some of these risks. Finally, it is essential that governments practice and promote transparency concerning the integration of AI technology into their military systems and strategic processes, as well as the structures in place to prevent deception, cyberattacks, disinformation, and bias.

Human nature is nearly impossible to predict, and escalation is difficult to control. Moreover, there is arguably little evidence to support claims that any nuclear employment could control or de-escalate a conflict. Highlighting and addressing potential bias in AI-enabled systems is critical for uncovering assumptions that may deceive users into believing that a nuclear war can be won and for maintaining the well-established ethical principle that a nuclear war should never be fought.

Editor’s note: The views expressed in this article are those of the authors and do not necessarily represent the views of the US State Department.


3 Comments

Vello
4 months ago

How can we teach AI to avoid nuclear war when we have yet to figure it out ourselves? We can’t teach AI something we have not yet mastered. Given the opportunity, AI will independently figure out that the best way to avoid nuclear war is to eliminate humans, the ones most capable and most likely to cause nuclear war.

Odyssios Redux
4 months ago

I doubt we can teach anyone or anything to avoid nuclear war until we teach ourselves how to avoid it. It’s not so much artificial intelligence that terrifies me as natural stupidity, or fear by one group of another.
I lived through the Cuban Missile Crisis of 1962 as a teenager. During that time, the sky was not our friend: at any moment, it might simply turn into a small local star. That political generation actually learned a lot from such a near-death experience. That generation’s grandchildren have apparently forgotten it all.

Ben E.
4 months ago

I appreciate the reference to the movie WarGames. It is certainly appropriate. However, at 69 and having lived through the ’70s, I feel there is a movie from 1969, Colossus: The Forbin Project, which dovetails even more with the AI-nuclear weapons nexus.
To my observation, that movie applies perhaps even more relevantly. I’m puzzled as to why it is rarely mentioned in regard to the lessons of turning over too much control to artificial intelligence. (With a deeply honorable mention to Stanley Kubrick’s 2001: A Space Odyssey and its introduction of HAL.)
