
Will artificial intelligence undermine nuclear stability?

By Andrew J. Lohn, Edward Geist | April 30, 2018


Artificial intelligence and nuclear war have been science fiction clichés for decades. Today’s AI is impressive, to be sure, but it is specialized and remains a far cry from computers that become self-aware and turn against their creators. At the same time, popular culture does not do justice to the threats that modern AI actually presents, such as its potential to make nuclear war more likely even if it never exerts direct control over nuclear weapons.

Russian President Vladimir Putin recognized the military significance of AI when he declared in September 2017 that the country that leads in artificial intelligence will eventually rule the world. He may be the only leader to have put it so bluntly, but other world powers appear to be thinking similarly. Both China and the United States have announced ambitious efforts to harness AI for military applications, stoking fears of an incipient arms race.

In the same September speech, Putin said that AI comes with “colossal opportunities” as well as “threats that are difficult to predict.” The gravest of those threats may involve nuclear stability—as we describe in a new RAND publication that outlines a few of the ways in which stability could be strained.

Strategic stability exists when governments aren’t tempted to use nuclear threats or coercion against their adversaries. It involves more than just maintaining a credible ability to retaliate after an enemy attack. In addition to that deterrent, nuclear stability requires assurance and reassurance. When a nation extends a nuclear security guarantee to allies, the allies must be assured that nukes will be launched in their defense even if the nation extending the guarantee must put its own cities at risk. Adversaries need to be reassured that forces built up for deterrence and to protect allies will not be used without provocation. Deterrence, assurance, and reassurance are often at odds with each other, making nuclear stability difficult to maintain even when governments have no interest in attacking each other.

In a world where increasing numbers of rival states are nuclear-armed, the situation becomes almost unmanageable. In the 1970s, four of the five declared nuclear powers primarily targeted their weapons on the fifth, the Soviet Union (Beijing, after its 1969 border clashes with the Soviet Union, feared Moscow much more than Washington). It was a relatively simple stand-off: the Soviet Union on one side, its many adversaries on the other. Today, nine nuclear powers are entangled in overlapping strategic rivalries—including Israel, which has not declared the nuclear arsenal that it is widely believed to possess. While the United States, the United Kingdom, and France still worry about Russia, they also fret about an increasingly potent China. Beijing’s rivals include not just the United States and Russia but India as well. India fears China too, but primarily frets about Pakistan. And everyone is worried about North Korea.

In such a complex and dynamic environment, teams of strategists are required to navigate conflict situations—to identify options and understand their ramifications. Could AI make this job easier? With AI now beating human professionals in the ancient Chinese strategy game Go, as well as in games of bluffing such as poker, countries may be tempted to build machines that could “sit” at the table amid nuclear conflicts and act as strategists.

Artificially intelligent machines may prove to be less error-prone than humans in many contexts. But for tasks such as navigating conflict situations, that day remains far off. Much effort must be expended before machines can—or should—be relied on to perform consistently at the extraordinary task of helping the world avoid nuclear war. Recent research suggests that it is surprisingly simple to trick an AI system into reaching incorrect conclusions when an adversary controls some of its inputs, such as how a vehicle is painted before it is photographed.
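How simple? A toy sketch in Python can illustrate the underlying idea (everything here is hypothetical: a bare linear classifier with random weights, standing in for a real image recognizer). The attack mirrors the widely studied “fast gradient sign” technique: nudge every input feature a tiny, imperceptible amount in whichever direction moves the model’s score toward the wrong answer.

    import numpy as np

    # A toy linear "classifier": a positive score reads as one label, a
    # negative score as the other. The weights are random stand-ins, not
    # drawn from any real system.
    rng = np.random.default_rng(0)
    w = rng.normal(size=1000)   # hypothetical model weights
    x = rng.normal(size=1000)   # a hypothetical input the model classifies
    score = float(w @ x)
    print(f"original score: {score:+.2f}")

    # Fast-gradient-sign-style attack: shift every input feature by at
    # most epsilon in the direction that pushes the score toward the
    # opposite label. For a linear model that direction is simply
    # -sign(w), scaled by the sign of the current score.
    epsilon = (abs(score) + 1.0) / np.abs(w).sum()  # tiny per-feature budget
    x_adv = x - epsilon * np.sign(w) * np.sign(score)

    print(f"per-feature change: at most {epsilon:.4f}")
    print(f"adversarial score: {float(w @ x_adv):+.2f}")  # sign has flipped

Because thousands of tiny nudges add up, the per-feature change needed to flip the conclusion shrinks as inputs grow larger, which is why high-dimensional inputs such as photographs are especially easy to fool.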

But AI could undermine the foundations of nuclear stability through means other than providing advice to strategists. Sensors and cameras are increasing in number throughout the world; AI’s growing ability to make predictions based on information from these disparate sources may cause nations to worry that the missiles and submarines they depend upon for assured retaliation will become vulnerable. During the Cold War, the superpowers sought crippling “first-strike” capabilities, a perilous pursuit: each superpower became convinced that the other might launch a disarming strike against it. Since whoever struck first would gain a huge advantage by preventing retaliation, each faced pressure to preempt or to launch on ambiguous warning, greatly increasing the chances of accidental nuclear war. Such challenges are even more fraught in today’s world. More states are nuclear-armed—and AI technology might lend extra credibility to threats against nuclear retaliatory forces.

In the coming years, AI-enabled progress in tracking and targeting adversaries’ nuclear weapons could undermine the foundations of nuclear stability; that is, nations may question whether their missiles and submarines are vulnerable to a first strike. Will AI someday be able to guide strategy decisions about escalation or even launching nuclear weapons? Such capabilities are off in the distance for now, but the chance that they will eventually emerge is real—as is the need to understand, right now, how AI could reshape the world’s approach to nuclear stability.



