Hypersonic missiles, stealthy cruise missiles, and weaponized artificial intelligence have so reduced the amount of time that decision makers in the United States would theoretically have to respond to a nuclear attack that, two military experts say, it’s time for a new US nuclear command, control, and communications system. Their solution? Give artificial intelligence control over the launch button.
In an article in War on the Rocks titled, ominously, “America Needs a ‘Dead Hand,’” US deterrence experts Adam Lowther and Curtis McGiffin propose a nuclear command, control, and communications setup with some eerie similarities to the Soviet system referenced in the title of their piece. The Dead Hand was a semiautomated system developed to launch the Soviet Union’s nuclear arsenal under certain conditions, particularly the loss of the national leaders who could otherwise order a launch themselves. Given the increasing time pressure Lowther and McGiffin say US nuclear decision makers are under, “[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”
In case handing control of nuclear weapons over to HAL 9000 sounds risky, the authors also put forward a few other solutions to the nuclear time-pressure problem: Bolster the United States’ ability to respond to a nuclear attack after the fact, that is, ensure a so-called second-strike capability; adopt a willingness to pre-emptively attack other countries based on warnings that they are preparing to attack the United States; or destabilize the country’s adversaries by fielding nukes near their borders, the idea being that such a move would bring those countries to the arms control negotiating table.
Still, the authors appear to favor an artificial intelligence-based solution.
“Nuclear deterrence creates stability and depends on an adversary’s perception that it cannot destroy the United States with a surprise attack, prevent a guaranteed retaliatory strike, or prevent the United States from effectively commanding and controlling its nuclear forces,” they write. “That perception begins with an assured ability to detect, decide, and direct a second strike. In this area, the balance is shifting away from the United States.”
History is replete with instances in which it seems, in retrospect, that nuclear war could have started were it not for some flesh-and-blood human refusing to begin Armageddon. Perhaps the most famous such hero was Stanislav Petrov, a Soviet lieutenant colonel, who was the officer on duty in charge of the Soviet Union’s missile-launch detection system when it registered five inbound missiles on Sept. 26, 1983. Petrov decided the signal was in error and reported it as a false alarm. It was. Whether an artificial intelligence would have reached the same decision is, at the least, uncertain.
One of the risks of incorporating more artificial intelligence into the nuclear command, control, and communications system involves the phenomenon known as automation bias. Studies have shown that people tend to trust what an automated system tells them, even over their own judgment. In one study, pilots who told researchers they wouldn’t trust an automated warning of an engine fire unless there was corroborating evidence nonetheless acted on the warning alone in simulations. (Furthermore, they told experimenters that there had in fact been corroborating information when there hadn’t.)
University of Pennsylvania political science professor and Bulletin columnist Michael Horowitz, who researches military innovation, counts automation bias as a strike against building an artificial intelligence-based nuclear command, control, and communications system. “A risk in a world of automation bias is that the Petrov of the future doesn’t use his judgment,” he says, “or that there is no Petrov.”
The algorithms that power artificial intelligence systems are usually trained on huge datasets, which simply don’t exist when it comes to nuclear weapons launches. “There have not been nuclear missile attacks, country against country. And so, training an algorithm for early warning means that you’re relying entirely on simulated data,” Horowitz says. “I would say, based on the state-of-the-art in the development of algorithms, that generates some risks.”
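The data-scarcity point lends itself to a toy demonstration. Below is a minimal sketch, in Python, of the sim-to-real gap Horowitz describes. Everything in it is invented for illustration: the two abstract “sensor” features, the class distributions, and the drift between simulation and reality. It shows only how a model trained solely on simulated attacks can degrade when reality fails to match the simulation’s assumptions.

```python
# A minimal, purely illustrative sketch (not any real early-warning system):
# a classifier is trained only on simulated "attack vs. false alarm" sensor
# readings, then evaluated on data whose distribution has drifted from the
# simulation's assumptions. All feature names and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, attack_mean, noise):
    """Generate n fake false-alarm and n fake attack readings in 2 dimensions."""
    false_alarms = rng.normal(0.0, noise, size=(n, 2))
    attacks = rng.normal(attack_mean, noise, size=(n, 2))
    return np.vstack([false_alarms, attacks]), np.array([0] * n + [1] * n)

# Train entirely on simulated data, where attacks are assumed to look distinct.
X_sim, y_sim = make_data(1000, attack_mean=2.0, noise=1.0)
clf = LogisticRegression().fit(X_sim, y_sim)

# "Reality" departs from the simulation: fainter signatures, noisier sensors.
X_real, y_real = make_data(1000, attack_mean=1.0, noise=1.5)

print(f"accuracy on simulated data:      {clf.score(X_sim, y_sim):.2f}")
print(f"accuracy on drifted 'real' data: {clf.score(X_real, y_real):.2f}")
```

The specific numbers are beside the point; the structural problem is that, with no real attack data to test against, there is no way to measure how far the simulation diverges from the real world.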
Mostly, Horowitz thinks the United States wouldn’t develop an artificial intelligence-based command, control, and communications system because, even if there may be less time to react to an attack in this era than in earlier decades, the government is confident in the military’s second-strike capability. “As long as you have secure second-strike capabilities, you can probably absorb some of these variations in speed, because you always have the ability to retaliate,” he says.
Lowther and McGiffin point out that a second strike means there’s already been a first strike somewhere.
There is some precedent for the system proposed by the War on the Rocks authors, who have served in government or in the military in nuclear-weapons-related capacities. In the fictional world of Hollywood, that precedent was established in Stanley Kubrick’s nuclear satire Dr. Strangelove and called the “Doomsday Machine,” which author Eric Schlosser described this way for The New Yorker:
“The device would trigger itself, automatically, if the Soviet Union were attacked with nuclear weapons. It was meant to be the ultimate deterrent, a threat to destroy the world in order to prevent an American nuclear strike. But the failure of the Soviets to tell the United States about the contraption defeats its purpose and, at the end of the film, inadvertently causes a nuclear Armageddon. ‘The whole point of the Doomsday Machine is lost,’ Dr. Strangelove, the President’s science adviser, explains to the Soviet Ambassador, ‘if you keep it a secret!’”
About two decades later, satire became closer to reality with the advent of the Soviet Union’s semiautomated Dead Hand system, formally known as Perimeter. When that system perceived that the Soviet military hierarchy no longer existed and detected signs of a nuclear explosion, three officers deep in a bunker were to launch small command rockets that would fly across the country initiating the launch of all of the Soviet Union’s remaining missiles, in a sort of revenge-from-the-grave move. The system was intended to enhance deterrence. Some reports suggest it is still in place.
The possibility that taking humans out of the loop might lead to an accidental launch and unintended nuclear war underlies US Naval War College professor Tom Nichols’ harsh characterization of the Dead Hand system in a 2014 article in The National Interest: “Turns out the Soviet high command, in its pathetic and paranoid last years, was just that crazy.”
But Lowther and McGiffin say a hypothetical US system would be different from Dead Hand because “the system itself would determine the response based on its own assessment of the inbound threat.” That is to say, the US system would be better, because it wouldn’t necessarily wait for a nuclear detonation to launch a US attack.