Killer robots reconsidered: Could AI weapons actually cut collateral damage?

By Larry Lewis | January 10, 2020

The Sea Hunter, a prototype autonomous ship built by the US Defense Advanced Research Projects Agency. Militaries around the world are building greater autonomy into their weapons systems. Credit: DARPA.

The United States, Russia, and China are all signaling that artificial intelligence (AI) is a transformative technology that will be central to their national security strategies. And their militaries are already announcing plans to quickly move ahead with applications of AI. This has prompted some to rally behind an international ban on autonomous, AI-driven weapons. I get it. On the surface, who could object to quashing the idea of so-called killer robots?

Well, me for starters.

The problem with an autonomous weapons ban is that its proponents often rely on arguments that are inaccurate both about the nature of warfare and about the state of such technology. Activists and representatives from various countries have been meeting at the United Nations for six years now on the issue of lethal autonomous weapons. But before calling for society to ban such weapons, it behooves us to understand what we are really talking about: what the real risks are, and what potential benefits could be lost. In short, we need to talk about killer robots, so we can make an informed decision.

Unfortunately, for many people, the concept of autonomous weapons consists of Hollywood depictions of robots like the Terminator or RoboCop: uncontrolled or uncontrollable machines deciding to wreak havoc and kill innocents. But this picture does not represent the current state of AI technology. While artificial intelligence has proved powerful for applications in banking, in medicine, and in many other fields, these are narrow applications for solving very specific problems, such as identifying signs of a particular disease. Current AI does not make decisions in the sense that humans do. Many AI experts, such as the authors of Stanford University’s One Hundred Year Study on Artificial Intelligence, don’t think so-called general AI (the kind envisioned in science fiction, more akin to human intelligence and able to make decisions on its own) will be developed any time soon.

The proponents of a UN ban are in some respects raising a false alarm.

I should know. As a senior advisor for the State Department on civilian protection in the Obama administration, I was a member of the US delegation in the UN deliberations on lethal autonomous weapons systems.

As part of that delegation, I contributed to international debates on autonomous weapons issues in the context of the Convention on Certain Conventional Weapons, a UN forum that considers restrictions on the design and use of weapons in light of the requirements of international humanitarian law, i.e., the laws of war. Country representatives have met every year since 2014 to discuss the future possibility of autonomous systems that could use lethal force. And talk of killer robots aside, several nations have mentioned their interest in using artificial intelligence in weapons to better protect civilians. A so-called smart weapon, say a ground-launched, sensor-fused munition, could target enemy fighters more precisely and efficiently and deactivate itself if it does not detect the intended target, thereby reducing the risks inherent in more intensive attacks like a traditional air bombardment.

Activists hold a banner for the Campaign to Stop Killer Robots.
Many activists hope the United Nations enacts a ban on lethal autonomous weapons systems. Credit: Campaign to Stop Killer Robots (Creative Commons).

I’ve worked for over a decade to help reduce civilian casualties in conflict, an effort sorely needed given that most of those killed in war are civilians. I’ve looked, in great detail, at the possibility that automation in weapons systems could in fact protect civilians. Analyzing over 1,000 real-world incidents in which civilians were killed, I found that humans make mistakes (no surprise there) and that there are specific ways AI could be used to help avoid them. There were two general kinds of mistakes: either military personnel missed indicators that civilians were present, or civilians were mistaken for combatants and attacked in that belief. Based on these patterns of harm from real-world incidents, artificial intelligence could be used to help avert such mistakes.
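To make that concrete, here is a minimal, hypothetical sketch (in Python) of the first kind of fix: a decision aid that aggregates indicators of civilian presence so that a human operator is less likely to miss them. Every name, score, and threshold below is an illustrative assumption, not a description of any fielded system.

```python
# Hypothetical sketch: a decision aid that aggregates indicators of civilian
# presence and warns a human operator before an engagement decision is made.
# All names, scores, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str          # e.g., "structure resembles a clinic"
    confidence: float  # 0.0 to 1.0: how strongly this suggests civilians are present

def civilian_presence_warning(indicators: list[Indicator], threshold: float = 0.5) -> bool:
    """Return True if any single indicator, or the combined evidence, suggests
    civilians may be present. The decision stays with the human operator; the
    aid only surfaces evidence that might otherwise be missed."""
    if not indicators:
        return False
    strongest = max(i.confidence for i in indicators)
    # Combine evidence as the probability that at least one indicator is correct,
    # treating the indicators as independent (a simplifying assumption).
    combined = 1.0
    for i in indicators:
        combined *= 1.0 - i.confidence
    combined = 1.0 - combined
    return strongest >= threshold or combined >= threshold

# Two weak indicators that, taken together, warrant a warning to the operator.
observations = [Indicator("vehicles consistent with a market day", 0.35),
                Indicator("structure resembles a clinic", 0.40)]
print(civilian_presence_warning(observations))  # True: combined evidence is about 0.61
```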

Though the debate often focuses on autonomous weapons, there are in fact three kinds of possible applications for artificial intelligence in the military: optimization of automated processing (e.g., improving the signal-to-noise ratio in detection), decision aids (e.g., helping humans make sense of complex or vast sets of data), and autonomy (e.g., a system taking actions when certain conditions are met). While those calling for killer robots to be banned focus on autonomy, there are risks in all of these applications that should be understood and discussed.
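To give a rough sense of how these three categories differ in practice, the toy sketch below walks through a single sensor pipeline; its functions, scores, and thresholds are invented for illustration and do not correspond to any real military software.

```python
# Illustrative sketch of the three categories of military AI applications
# described above, using a toy sensor pipeline. All functions and thresholds
# are hypothetical assumptions for illustration only.
import statistics

def denoise(readings: list[float]) -> list[float]:
    """(1) Optimization of automated processing: improve the signal-to-noise
    ratio by discarding readings far from the median."""
    med = statistics.median(readings)
    return [r for r in readings if abs(r - med) < 2.0]

def rank_contacts(contacts: dict[str, float]) -> list[tuple[str, float]]:
    """(2) Decision aid: sort detected contacts by score so a human analyst
    reviews the most significant ones first. The machine takes no action."""
    return sorted(contacts.items(), key=lambda kv: kv[1], reverse=True)

def autonomous_response(score: float, rules_of_engagement_met: bool) -> str:
    """(3) Autonomy: the system acts on its own, but only when pre-set
    conditions are satisfied; otherwise it defers to a human."""
    if rules_of_engagement_met and score > 0.9:
        return "act"
    return "defer to human"

print(denoise([1.0, 1.2, 9.7, 0.9]))                             # [1.0, 1.2, 0.9]
print(rank_contacts({"contact A": 0.4, "contact B": 0.8}))       # contact B first
print(autonomous_response(0.95, rules_of_engagement_met=False))  # "defer to human"
```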

The risks fall into one of two basic categories: those associated with the intrinsic characteristics of AI (e.g., fairness and bias, unpredictability and lack of explainability, cybersecurity vulnerabilities, and susceptibility to tampering), and those associated with specific military applications of AI (e.g., using AI in lethal autonomous systems). Addressing these risks, especially those involving the intrinsic characteristics of AI, requires collaboration among the military, industry, and academia to identify and address areas of concern.

Like the Google employees who pushed the company to abandon work on a computer vision program for the Pentagon, many people are concerned about whether military applications of artificial intelligence will be fair or will encode bias. For example, will racial factors lead to some groups being more likely to be targeted by lethal force? Could detention decisions be influenced by unfair biases? For military personnel themselves, could promotion decisions incorporate and perpetuate historical biases regarding gender or race? Such concerns can be seen in another area where AI is already being used for security-related decisions: law enforcement.

While US municipalities and other governmental entities aren’t supposed to discriminate against groups of people, particularly on a racial basis, analyses such as the Department of Justice investigation of the Ferguson, Mo., Police Department illustrate that biases nonetheless persist. Law enforcement in the United States is not always fair.

A number of investigations have raised concerns that the AI-driven processes used by police or the courts (for instance, risk assessment programs that help determine whether defendants should be paroled) are biased or otherwise unfair. Many are concerned that the pervasive bias already present in the criminal justice system seeps into the data on which such programs are trained, so that AI tools learn and then reproduce that bias.

Academic researchers have been looking into how AI methods can serve as tools to better understand and address existing biases. For example, an AI system could pre-process input data to identify existing biases in processes and decisions. That could include identifying problematic practices (e.g., stop and frisk) as well as officers and judges whose decisions or arrests appear to be compromised by bias, and then reducing the weight that data from those practices or people carries in a risk assessment. There are also ways to adjust how AI tools are used to help ensure fairness: for example, handling cases from every group the same way cases are handled for a group believed not to be subject to bias. In this way, AI, so often believed to be hopelessly bound to bias, can in fact be a tool for identifying and correcting existing biases.
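A minimal sketch of what those two ideas, down-weighting data from flagged sources and checking that groups are treated consistently, might look like in code; the field names, the 0.25 weight, and the toy data are assumptions made for illustration, not features of any deployed risk-assessment tool.

```python
# Hypothetical sketch of the two ideas above: (a) down-weighting training records
# that come from practices or officials flagged as potentially biased, and
# (b) checking whether a risk-assessment tool labels groups at very different rates.
# The field names, the 0.25 weight, and the toy data are illustrative assumptions.

def reweight(records: list[dict], flagged_sources: set[str]) -> list[dict]:
    """Reduce the influence of data originating from flagged practices or people
    (e.g., a stop-and-frisk program) before a model is trained on it."""
    reweighted = []
    for record in records:
        weight = 0.25 if record["source"] in flagged_sources else 1.0
        reweighted.append({**record, "weight": weight})
    return reweighted

def high_risk_rate_gap(predictions: list[dict]) -> float:
    """Compare how often two groups are labeled 'high risk'; a large gap is a
    signal that the tool may be treating the groups inconsistently."""
    def rate(group: str) -> float:
        rows = [p for p in predictions if p["group"] == group]
        return sum(p["high_risk"] for p in rows) / len(rows)
    return abs(rate("A") - rate("B"))

training_data = [{"source": "stop_and_frisk_unit", "group": "A"},
                 {"source": "routine_patrol", "group": "B"}]
print(reweight(training_data, flagged_sources={"stop_and_frisk_unit"}))

predictions = [{"group": "A", "high_risk": True},  {"group": "A", "high_risk": False},
               {"group": "B", "high_risk": False}, {"group": "B", "high_risk": False}]
print(high_risk_rate_gap(predictions))  # 0.5: a gap this large would warrant a closer look
```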

A Google office building.
In a 2018 letter, Google employees cited fears of bias in AI to pressure the company to abandon an AI project to help the Pentagon analyze video taken by drones. Credit: The Pancake of Heaven! (Creative Commons).

Similarly, the Pentagon could analyze which applications of artificial intelligence are inherently unsafe or unreliable in a military setting. The Defense Department could then leverage expertise in academia and industry to better characterize and then mitigate these types of risks. This dialogue could allow society to better determine what is possible and what applications should be deemed unsafe for military use.

The US Defense Department’s 2018 AI strategy commits it to lead internationally in military ethics and AI safety, including by developing specific AI applications that would reduce the risk of civilian casualties. There’s no visible evidence yet of the Defense Department starting an initiative to meet this commitment, but other nations have begun practical work to develop such capabilities. For example, Australia plans to explore this technology to better identify medical facilities in conflict zones, a much-needed capability given the many attacks on such facilities in recent years.

The Pentagon has taken some steps to prioritize AI safety. For example, the Defense Advanced Research Projects Agency, also known as DARPA, has a program that aims to develop explainable AI. AI systems can make decisions or produce results even when the how and the why behind them are completely opaque to a human user. Steps to address this “black box” problem would be welcome, but they fall short of what is possible: a comprehensive approach to identifying and systematically addressing AI safety risks.
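For a sense of what explainability tooling tries to surface, here is a toy sketch of one common, model-agnostic technique, permutation importance, which asks which inputs a model actually relies on by scrambling them one at a time. It illustrates the general idea only; it is not a description of DARPA’s program.

```python
# One simple, model-agnostic way to probe a "black box": permutation importance.
# Scramble one input feature at a time and measure how much accuracy drops;
# the features whose scrambling hurts most are the ones the model relies on.
# This is an illustrative sketch of the general idea, not DARPA's program.

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features):
    base = accuracy(model, X, y)
    drops = []
    for j in range(n_features):
        col = [row[j] for row in X]
        col = col[1:] + col[:1]  # rotate the column: a simple, deterministic permutation
        X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
        drops.append(base - accuracy(model, X_perm, y))
    return drops

# Toy "black box" that, unknown to the user, only looks at feature 0.
model = lambda row: row[0] > 0.5
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.7], [0.1, 0.2]]
y = [True, False, True, False]
print(permutation_importance(model, X, y, n_features=2))  # [1.0, 0.0]
```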

When it comes to lethal autonomous weapons, some say the time for talking is over and it’s time to implement a ban. After all, the argument goes, the United Nations has been meeting since 2014 to talk about lethal autonomous weapons systems, and what has been accomplished? Actually, though, there has been progress: The international community has a much better idea of the key issues, including the requirement for compliance with international law and the importance of context in managing the human-machine relationship. And the UN Group of Governmental Experts has agreed to a number of principles and conclusions to help frame a collective understanding and approach. But more substantive talk is needed about the particulars, including the specific risks and benefits of autonomous weapons systems.

And there is time. In 2012, the Pentagon created a policy on autonomous weapons (Directive 3000.09) requiring a senior-level review before development of such a weapon could begin. Yet after eight years, not one senior-level review has been requested, a sign that the fielding, or even the development, of such capabilities is not imminent.

Artificial intelligence may make weapons systems, and the future of war, less risky for civilians than they are today. It is time to talk about that possibility.


