When AI is in control, who’s to blame for military accidents?

By Julia Ciocca, Lauren Kahn | October 1, 2020


In April 2001, a US EP-3 turboprop on a routine surveillance flight over the South China Sea collided with a Chinese fighter jet that had been aggressively tailing the mission. The collision destroyed the Chinese plane and sent its pilot plummeting to the ocean below. The US spy plane, meanwhile, limped toward Hainan Island, its crew desperately trying to destroy sensitive materials before landing the crippled craft on a Chinese air strip. Ultimately, the Chinese and American governments each blamed the accident on the in-air actions of the other side's pilot.

Military accidents happen. They're an accepted aspect of international politics, and, as in the spy plane incident, political and military leaders often cite human error as a cause. Relations stabilize and both sides move on. The EP-3 crew and plane were eventually returned. But as militaries invest in new technologies and enter an era in which machines, not generals or pilots, take on more decision-making, a new risk has emerged: future accidents may not be resolved so neatly.

From surveillance systems to tank turrets to “glass battlefields” that show three-dimensional depictions of conflict areas, militaries are racing to incorporate artificial intelligence (AI) into weapons and other systems. But if an AI is making critical military decisions, governments could find it harder to smooth tensions after an accident like the EP-3 incident and to move on. If AI caused the accident, human error can’t be blamed.

Compared with other military accidents, an accident involving AI could be particularly risky: it could be difficult to determine whether an incident was deliberate, that is, whether it was even an accident at all. After all, the advantage of AI lies in automation, in reducing inefficiencies and requiring less human oversight. But what will these changes mean when something goes wrong? And what can governments and militaries do to ensure that such errors do not result in unintended escalation?

Using AI in weapons systems means human beings won't be making all the decisions they once made while flying planes, targeting munitions, and, to use an extreme example, firing weapons. When something goes wrong in this new, AI-driven era, as it inevitably will, the fact that an artificial intelligence algorithm made many of the decisions leading to an accident could shift blame from an individual user, whether a soldier or a military unit, to a much broader level of governance and decision-making, including agencies, the government as a whole, and even those who created the algorithms in the first place. What if the EP-3 or one of the Chinese jets had been piloted by AI rather than a person? The blame might have landed closer to the government that implemented and programmed the AI than to the pilots.


A similar dynamic has already played out with precision-guided munitions, whose greater accuracy has raised expectations so much that errors are harder to accept when they occur. After a US military plane dropped a precision-guided munition on a house in Baiji, Iraq, in 2006, killing the family that lived there, legal scholar Dakota S. Rudesill wrote: "American military reconnaissance and precision strike capabilities are so advanced that efforts to explain such tragic events as mechanical or human errors have sometimes been met with skepticism."

Like any technology, algorithms can fail, even after the initial testing phase, and an AI failure isn't always analogous to malfunctioning equipment or broken parts. Rather, accidents could develop because the AI makes decisions its developers couldn't have predicted. And because it's not always clear how an AI system arrives at its outputs, the technical causes of an accident may be unknown or difficult to decipher, even for the system's owner, and a government may struggle to convincingly explain what happened. Given the immature state of military artificial intelligence today, the victim of an AI accident likely wouldn't have a technical means of implicating AI as a cause.

Current AI applications are also notoriously brittle: they perform well under a narrow set of circumstances but can fail if the operating environment changes. Even narrow AI applications, as Matthijs M. Maas, a researcher affiliated with the University of Oxford's Future of Humanity Institute, notes, "often involve networked (tightly coupled, opaque) systems operating in complex or competitive environments" and are therefore "prone to normal accident-type failures" that have the potential to snowball.

Because algorithms are typically embedded in a variety of systems rather than constituting a physical system themselves, it is difficult to signal that AI was implicated in an accident. There are also significant strategic benefits to keeping AI algorithms secret, as nondisclosure creates barriers to intellectual property theft. Additionally, AI technology is progressing so rapidly that it is outstripping the constraints of the patent system. This discourages private industry, where most AI research takes place, from publicly disclosing or discussing its AI developments.

Many nuclear-armed countries have declared their determination to be world leaders in artificial intelligence, and it is perhaps in the nuclear realm that the prospect of AI accidents is most troubling. A 2020 report by the Stockholm International Peace Research Institute predicts that advances in autonomy and machine learning, the AI methodology whereby algorithms improve through exposure to training data, will have significant impacts on a "wide array of nuclear force-related capabilities, ranging from early warning to command and control and weapon delivery." AI accidents in these realms pose substantial risks, chief among them the unintended escalation of tensions between adversaries.


There are clear incentives for a government to avoid escalation after a military accident occurs; by definition, militaries don’t mean to cause whatever harm the accident inflicted. A critical precondition to de-escalation is the ability of a military to convince an adversary that the accident was indeed accidental, but doing so in the era of AI may prove difficult.

Policy makers can draw on prior accidents and historical examples to better understand how to approach AI accidents.

After the signing of the Helsinki Final Act in 1975, the United States and the Soviet Union implemented a series of measures to reduce the likelihood of a nuclear "worst case scenario" in the form of an accidental or surprise attack. The two countries were able to create a process for sharing information about "military forces, equipment, and defense planning," mitigating much of the uncertainty about the technologies each possessed.

The agreement built on an already established relationship between the two adversaries that emphasized communication through channels like the nuclear hotline to avoid accidental escalation. Likewise, the Incidents at Sea Agreement between the United States and the Soviet Union significantly decreased the potential for accidental US-Soviet escalation by reducing miscommunications and incidents between their naval vessels.

A similar toolkit could be developed for dealing with AI accidents and for getting potential adversaries on the same page about expectations for verification. Codes of conduct between AI-capable states could reduce miscommunication and miscalculation and provide channels through which other countries would be notified of training exercises, system trials, and accidents. These types of arrangements would not necessarily require any limits on capabilities; instead, they would emphasize communication and safety.

AI will likely significantly shape the future of warfare. However, its brittle, complicated, and currently inexplicable processes create the potential for accidents with unintended consequences that increase the risk of escalation. Planning for how to respond to AI accidents will be a crucial part of avoiding unintentional escalation when these incidents inevitably occur.

This article was made possible, in part, by a grant from the Air Force Office of Scientific Research and the Minerva Research Initiative under grant #FA9550-18-1-0194.

