
As the US, China, and Russia build new nuclear weapons systems, how will AI be built in?

By Matt Field | December 20, 2019

A US Air Force commander simulates launching a nuclear weapon during a test. Credit: US Air Force/Staff Sgt. Christopher Ruano.

Researchers in the United States and elsewhere are paying a lot of attention to the prospect that in the coming years new nuclear weapons—and the infrastructure built to operate them—will include greater levels of artificial intelligence and automation. Earlier this month, three prominent US defense experts published a comprehensive analysis of how automation is already involved in nuclear command and control systems and of what could go wrong if countries implement even riskier forms of it.

The working paper “A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence” by the team of Michael Horowitz, Paul Scharre, and Alexander Velez-Green comes on the heels of other scholarly takes on the impact artificial intelligence (AI) will have on strategies around using nuclear weapons. All this research reflects the fact that militaries around the world are incorporating more artificial intelligence into non-nuclear weaponry—and that several countries are overhauling their nuclear weapons programs. “We wanted to better understand both the potentially stabilizing and destabilizing effects of automation on nuclear stability,” Scharre, a senior fellow at the Center for a New American Security, told the Bulletin.

“In particular, as we see nations modernize their nuclear arsenals, there is both a risk and an opportunity in how they use automation in their nuclear operations.”

The report notes that nuclear weapons systems already include some automated functionality: For example, warning systems automatically alert nuclear weapons operators of an attack. After the Cold War, Russian missiles were programmed to automatically retarget themselves to hit US targets if they were launched without a flight plan. For its part, the United States at one point designed its entire missile arsenal so that it could be retargeted in seconds from its peacetime default of flying into the ocean. Even these forms of automation are risky, as an accidental launch could “spark a nuclear war,” the report says. But some countries, the report warns, might resort to riskier types of automation.

Those risks could come from a variety of different sources. Countries could develop unmanned vehicles carrying nuclear weapons; with no one on board and responsible for deploying a nuclear weapon, the systems could be hacked or otherwise “slip out of control,” the authors say. In fact, the report notes, Russia is already reportedly developing an autonomous nuclear torpedo. Horowitz, a University of Pennsylvania political science professor, told the Bulletin that the weapon, called Poseidon or Status-6, could be the start of a trend, though it’s not yet clear how or if AI will be included. “While so much about it is uncertain, Russia’s willingness to explore the notion of a long-duration, underwater, uninhabited nuclear delivery vehicle in Status-6 shows that fear of conventional or nuclear inferiority could create some incentives to pursue greater autonomy,” Horowitz said.


Countries might also build more artificial intelligence into the so-called early warning systems that indicate whether a nuclear attack is underway, or insert more powerful AI into the strategic decision support systems they use to keep tabs on other militaries and nuclear forces. Even simple forms of automation in such systems have, in the past, exacerbated nuclear tensions. The report cites a famous 1983 incident in which a Soviet officer, Lt. Col. Stanislav Petrov, had to disregard automated audible and visual warnings that US nuclear missiles were inbound. Fortunately, Petrov chose not to trust what his systems were telling him, overcoming the powerful cognitive phenomenon known as automation bias.

Another problematic form of early automation was the Soviet strategic decision support system known as VYRAN, a computer program designed to warn Soviet leaders when the United States had achieved a level of military superiority that required Moscow to launch a nuclear attack. But Soviet intelligence agents were inputting information that often confirmed their pre-existing beliefs about US intentions. “This feedback loop amplified and intensified those perceived threats, rather than providing Soviet leaders with a clearer understanding of US intentions,” the report notes. There is evidence that countries including Russia and China are placing more emphasis on developing these sorts of computational models for analyzing threats.

A Trident II D5 missile test. The US military, along with others around the world, is upgrading its nuclear weapons systems. Credit: US Navy/Mass Communication Specialist 1st Class Ronald Gutridge.

Despite all these drawbacks, however, the report’s authors believe there could be reasons to implement more AI and automation in nuclear weapons systems. They note that artificial intelligence systems could process more data, giving officials in charge of nuclear weapons greater situational awareness. Automation could also be useful in communicating commands in “highly contested electromagnetic environments,” as the report dryly puts it—perhaps, say, during a war. But, the report says, “many of these ways that autonomous systems could increase the resiliency and accuracy of [nuclear command and control systems] are speculative.”


The countries most likely to take on the risks of incorporating greater levels of artificial intelligence and automation in their nuclear weapons systems are the ones that are less certain of their ability to retaliate after an attack on their nuclear arsenals. As the report notes, that’s because the consequences of missing signs of an actual incoming attack (a false negative) would be relatively lower for a country confident it could still strike back.

Horowitz believes that incorporating artificial intelligence in nuclear weapons systems themselves poses mostly low-probability risks. In fact, what concerns him most is how AI in non-nuclear military systems could affect nuclear weapons policy. “The risk I worry most about is how conventional military applications of AI, by increasing the speed of war, could place pressure on the early warning and launch doctrines of nuclear weapons states that fear decapitation in conventional war,” Horowitz told the Bulletin.

Or, as the report puts it, AI-induced time pressure could set off a chain of decision-making that, in the worst case, results in a country launching a pre-emptive nuclear attack. “Fear of losing quickly could create incentives for more rapid escalation to the nuclear level.”

The report predicts that there’s a pretty strong likelihood that more automation will “creep its way” into nuclear operations over time—especially as nations modernize their nuclear forces. The United States has already embarked on a multi-decade, trillion-dollar-plus plan to upgrade its nuclear forces; Russia and China are similarly modernizing theirs.

“What is interesting, though, is that both the United States and Russia—and the Soviet Union before that—have had elements of automation in their nuclear operations, early warning, command-and-control, and delivery systems for decades,” Scharre said. “So it is an issue worthy of deeper exploration.”

Maybe that’s even a bit of an understatement.
