Strangelove redux: US experts propose having AI control nuclear weapons

By Matt Field, August 30, 2019

A US missile test. Photo via Wikimedia Commons. Public Domain.

Hypersonic missiles, stealthy cruise missiles, and weaponized artificial intelligence have so reduced the amount of time that decision makers in the United States would theoretically have to respond to a nuclear attack that, two military experts say, it’s time for a new US nuclear command, control, and communications system. Their solution? Give artificial intelligence control over the launch button.

In an article in War on the Rocks titled, ominously, “America Needs a ‘Dead Hand,’” US deterrence experts Adam Lowther and Curtis McGiffin propose a nuclear command, control, and communications setup with some eerie similarities to the Soviet system referenced in the title to their piece. The Dead Hand was a semiautomated system developed to launch the Soviet Union’s nuclear arsenal under certain conditions, including, particularly, the loss of national leaders who could do so on their own. Given the increasing time pressure Lowther and McGiffin say US nuclear decision makers are under, “[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

In case handing over the control of nuclear weapons to HAL 9000 sounds risky, the authors also put forward a few other solutions to the nuclear time-pressure problem: Bolster the United States’ ability to respond to a nuclear attack after the fact, that is, ensure a so-called second-strike capability; adopt a willingness to pre-emptively attack other countries based on warnings that they are preparing to attack the United States; or destabilize the country’s adversaries by fielding nukes near their borders, the idea here being that such a move would bring countries to the arms control negotiating table.

Still, the authors clearly favor an artificial intelligence-based solution.

“Nuclear deterrence creates stability and depends on an adversary’s perception that it cannot destroy the United States with a surprise attack, prevent a guaranteed retaliatory strike, or prevent the United States from effectively commanding and controlling its nuclear forces,” they write. “That perception begins with an assured ability to detect, decide, and direct a second strike. In this area, the balance is shifting away from the United States.”

History is replete with instances in which it seems, in retrospect, that nuclear war could have started were it not for some flesh-and-blood human refusing to begin Armageddon. Perhaps the most famous such hero was Stanislav Petrov, a Soviet lieutenant colonel, who was the officer on duty in charge of the Soviet Union’s missile-launch detection system when it registered five inbound missiles on Sept. 26, 1983. Petrov decided the signal was in error and reported it as a false alarm. It was. Whether an artificial intelligence would have reached the same decision is, at the least, uncertain.

One of the risks of incorporating more artificial intelligence into the nuclear command, control, and communications system involves the phenomenon known as automation bias. Studies have shown that people will trust what an automated system is telling them. In one study, pilots who told researchers that they wouldn’t trust an automated system that reported an engine fire unless there was corroborating evidence nonetheless did just that in simulations. (Furthermore, they told experimenters that there had in fact been corroborating information, when there hadn’t.)

University of Pennsylvania political science professor and Bulletin columnist Michael Horowitz, who researches military innovation, counts automation bias as a strike against building an artificial intelligence-based nuclear command, control, and communications system. “A risk in a world of automation bias is that the Petrov of the future doesn’t use his judgment,” he says, “or that there is no Petrov.”

The algorithms that power artificial intelligence systems are usually trained on huge datasets, which simply don’t exist when it comes to nuclear weapons launches. “There have not been nuclear missile attacks, country against country. And so, training an algorithm for early warning means that you’re relying entirely on simulated data,” Horowitz says. “I would say, based on the state-of-the-art in the development of algorithms, that generates some risks.”

Mostly, Horowitz thinks the United States wouldn’t develop an artificial intelligence-based command, control, and communications system because, even if there may be less time to react to an attack in this era than in earlier decades, the government is confident in the military’s second-strike capability. “As long as you have secure second-strike capabilities, you can probably absorb some of these variations in speed, because you always have the ability to retaliate,” he says.

Lowther and McGiffin point out that a second strike means there’s already been a first strike somewhere.

The “Doomsday Machine” in the movie Dr. Strangelove shares some similarities with a system the Soviet Union actually set up. Photo via Wikimedia Commons. Public Domain.

There is some precedent for the system proposed by the War on the Rocks authors, who have served in government or in the military in nuclear-weapons-related capacities. In the fictional world of Hollywood, that precedent was established in Stanley Kubrick’s nuclear satire Dr. Strangelove and called the “Doomsday Machine,” which author Eric Schlosser described this way for The New Yorker:

“The device would trigger itself, automatically, if the Soviet Union were attacked with nuclear weapons. It was meant to be the ultimate deterrent, a threat to destroy the world in order to prevent an American nuclear strike. But the failure of the Soviets to tell the United States about the contraption defeats its purpose and, at the end of the film, inadvertently causes a nuclear Armageddon. ‘The whole point of the Doomsday Machine is lost,’ Dr. Strangelove, the President’s science adviser, explains to the Soviet Ambassador, ‘if you keep it a secret!'”

About two decades later, satire became closer to reality with the advent of the Soviet Union’s semiautomated Dead Hand system, formally known as Perimeter. When that system perceived that the Soviet military hierarchy no longer existed and detected signs of a nuclear explosion, three officers deep in a bunker were to launch small command rockets that would fly across the country initiating the launch of all of the Soviet Union’s remaining missiles, in a sort of revenge-from-the-grave move. The system was intended to enhance deterrence. Some reports suggest it is still in place.

The possibility that taking humans out of the loop might lead to an accidental launch and unintended nuclear war is a main element in US Naval War College Prof. Tom Nichols’ harsh characterization of the Dead Hand system in a 2014 article in The National Interest: “Turns out the Soviet high command, in its pathetic and paranoid last years, was just that crazy.”

But Lowther and McGiffin say a hypothetical US system would be different from Dead Hand because “the system itself would determine the response based on its own assessment of the inbound threat.” That is to say, the US system would be better, because it wouldn’t necessarily wait for a nuclear detonation to launch a US attack.

