
‘Artificial Escalation’: Imagining the future of nuclear risk


Imagine it’s 2032. The US and China are still rivals. To give their military commanders better intel and more time to make decisions, both powers have integrated artificial intelligence (AI) throughout their nuclear command, control, and communications (NC3) systems. But instead of buying time, events take an unexpected turn and spin out of control, with catastrophic results.

This is the story told in a new short film called Artificial Escalation, produced by Space Film & VFX for the Future of Life Institute. The plot may sound like science fiction (and the story is fictional), but the possibility of AI integration into weapons of mass destruction is now very real. Some experts say that the United States should build an NC3 system using AI “with predetermined response decisions, that detects, decides, and directs strategic forces.” The US is already envisioning this kind of integration in conventional command and control: the Joint All-Domain Command and Control effort proposes connecting sensors from all military services into a single network and using AI to identify targets and recommend the “optimal weapon.” But NC3-AI integration is a terrible idea.

The Stockholm International Peace Research Institute (SIPRI) explored key risks of AI integration into NC3, including: increased speed of warfare, accidental escalation, misperception of intentions and capabilities, erosion of human control, first-strike instability, the unpredictability of AI, the vulnerabilities of AI to adversary penetration, and arms race dynamics. The National Security Commission on AI cautioned that AI “will likely increase the pace and automation of warfare across the board, reducing the time and space available for de-escalatory measures.”

This accelerated pace of warfare would leave countries less time to signal their own capabilities and intentions, or to understand their opponents’ perspectives. That could lead to unintended conflict escalation, crisis instability, and even nuclear war.

As arms race dynamics push AI progress forward, prioritizing speed over safety, it is important to remember that in races toward mutual destruction, there is no winner. There is a point at which an arms race becomes a suicide race. The reasons not to integrate AI into comprehensive command, control, and communications systems are manifold:

Adversarial AI carries unpredictable escalation risk. Even if AI-NC3 systems are carefully tested and evaluated, they may be unpredictable by design. Two or more such systems interacting in a complex and adversarial environment can push each other to new extremes, greatly increasing the risk of accidental escalation. We have seen this before with the 2010 “flash crash” of the stock market, when interacting trading algorithms erased roughly a trillion dollars of market value in a matter of minutes. The military equivalent of those minutes would be catastrophic.
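To make that feedback dynamic concrete, here is a minimal toy simulation. It is not drawn from the film or from any real system; the response rule, threshold, and noise level are invented purely for illustration. Each side runs a simple automated policy that reacts to a noisy reading of the other side’s last action by responding slightly more forcefully once a threshold is crossed. Neither policy is aggressive on its own, but their interaction ratchets upward:

    # Toy model of two automated response policies amplifying each other.
    # All numbers are arbitrary, illustrative assumptions.
    import random

    def respond(perceived_threat: float, threshold: float = 1.0) -> float:
        """Respond only above a threshold, then slightly over-match the perceived threat."""
        if perceived_threat < threshold:
            return 0.0
        return 1.2 * perceived_threat  # over-correction: respond 20% above what was perceived

    def simulate(steps: int = 10, seed: int = 0) -> None:
        random.seed(seed)
        action_a = action_b = 0.9  # both sides start just below the response threshold
        for t in range(steps):
            # Each side misreads the other's action with a small amount of sensor noise.
            perceived_b = action_b + random.uniform(0.0, 0.3)
            perceived_a = action_a + random.uniform(0.0, 0.3)
            action_a, action_b = respond(perceived_b), respond(perceived_a)
            print(f"step {t}: A={action_a:.2f}  B={action_b:.2f}")

    if __name__ == "__main__":
        simulate()

Against a non-reacting opponent, neither rule would ever respond much above its threshold; once the two rules react to each other, a small misreading triggers a response, the response triggers a larger counter-response, and within a handful of steps both outputs grow far beyond anything either rule would produce alone. It is a crude analogue of how interacting automated systems can escalate faster than humans can intervene.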

No real training data. AI systems require large amounts of training data, whether real or simulated. But there is no real-world record of nuclear war to learn from, so training systems for nuclear conflict necessitates generating synthetic data from incomplete information, because the full extent of an adversary’s capabilities is unknown. This adds another element of dangerous unpredictability to the command and control mix.

Cyber vulnerabilities of networked systems. AI-integrated command, control, and communications systems would also be vulnerable to cyberattacks, hacking, and data poisoning. When all sensor data and systems are networked, failure can spread throughout the entire system. Each of these vulnerabilities must be considered across the systems of every nuclear nation, as the whole system is only as strong as its weakest link.

Epistemic uncertainty. Widespread use of AI to create misinformation is already clouding what is real and what is fake. The inability to discern truth is especially dangerous in the military context, and accurate information is particularly crucial to the stability of command and control systems. Historically, there have been channels of reliable, trustworthy communication between adversaries, even when there were also disinformation campaigns happening in the background. When we automate more and engage person-to-person less, those reliable channels dissipate and the risk of unnecessary escalation skyrockets.

Human deference to machines. If an algorithm makes a suggestion, people could defy it, but will they? When reliable communication channels shut down and the problem at hand is complex, it is natural to rely on computers and intelligent systems to provide the right answer. Defying a recommendation requires understanding the context and how the system reached its conclusion. Today, even the designers of AI systems often cannot fully explain how their systems arrive at a given output, so we should not expect end users in high-stress environments to untangle the complexity of an AI system’s choice and decide they know better.

Taken together, all of these factors serve to enfeeble humans and erode their control by promoting extreme deference to AI decision-making. Depictions of humans losing control of AI typically fall into two categories: rogue AI or malicious use. But there is a third way humans can lose control, and it’s the most realistic of all: Humans cede functional control to AI willingly, under the illusion that they still have it.

A commonly pitched panacea for keeping human control over AI is to maintain human involvement. In Artificial Escalation, humans are ostensibly involved in the decisions along the way. In practice, however, their humanity leads them to defer to the machine and lose control over the process. Simply having a human in the loop is not enough; countries and their militaries must ensure that humans retain meaningful control over high-stakes decisions.

Integrating AI into the critical functions of command, control, and communication is reckless. The world cannot afford to give up control over something as dangerous as weapons of mass destruction. As the United Nations Security Council prepares to meet tomorrow to discuss AI and nuclear risk, now is the time to set hard limits, strengthen trust and transparency, and ensure that the future remains in human hands.


