
If a killer robot were used, would we know?

By Zachary Kallenborn | June 4, 2021

Kargu drone test. A screenshot from the Turkish defense company STM's video about its Kargu drone. The drone has both autonomous and manual functionality, according to the company, and a recent UN report referred to a Kargu model as a lethal autonomous weapons system, saying it was used to attack retreating soldiers in Libya.

A recent UN report on Libya implies—but does not explicitly state—that a Turkish Kargu-2 drone was used to attack humans autonomously using the drone’s artificial intelligence capabilities. I wrote about the event in the Bulletin, and the story went viral. The New York Times, NPR, Axios, Gizmodo, and a solid couple dozen more outlets in at least 15 languages all covered it. The intensity of the response surprised some experts, who noted that weapons that operate autonomously have been around for years. Perhaps the significance of the Libyan incident is socially symbolic—an event that draws sudden public attention to an issue brewing for a long time—not a Sputnik moment, but a Kargu-2 moment.

But most of the attention ignored a very obvious question: How do we know the Kargu-2 was used autonomously? The vagueness of the UN report allows multiple interpretations, defense companies exaggerate their products' capabilities, and how best to define so-called lethal autonomous weapons is still hotly debated. The question has far-reaching implications beyond whether autonomous weapons were used in Libya. Groups like the Campaign to Stop Killer Robots seek comprehensive bans on autonomous weapons, yet a ban cannot be enforced unless some way exists to verify whether autonomous weapons have been used.

The reality is that verification would be extremely difficult. According to STM, the Kargu-2's manufacturer, the drone has both autonomous and manual modes. From the outside, an attack carried out in autonomous mode would generally look exactly the same as a manual-mode attack, except perhaps for imperceptible differences in decision-making speed. Compare that with verifying the use of chemical weapons. Chemical weapons agents can be detected with specialized sensors or through the identification of dedicated chemical weapons munitions. They cause recognizable symptoms in victims and can be identified in environmental samples. None of those methods are useful for autonomous weapons, especially if the weapon has both autonomous and remotely operated (or manual) modes.

The likely best way to verify autonomous weapons use is by inspecting the weapon itself. If investigators retrieve a weapon, they can study it with digital forensic techniques, looking for evidence in data logs, such as flight path data, photo or video processing records, received orders, or other indicators that the weapon allows autonomous control of firing decisions and that this capability was used. While such a forensic investigation could positively verify use, it could not prove the opposite: that an autonomous attack had not occurred. What if a seized weapon was used manually, but others in the same battle were used autonomously? Furthermore, investigators may have no access to the weapon at all: an autonomous gun turret like South Korea's SGR-A1 may stay in the control of the military that used it; a used weapon may never be recovered; the data on the weapon may be corrupted, spoofed, or deliberately wiped; or the weapon may be destroyed in the attack.
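
To make the idea concrete, here is a minimal sketch of what such a log review might look like, written in Python under invented assumptions: the log format, the field names (such as "event," "operator_id," and "target_acquired_by"), and the autonomy indicator are all hypothetical and do not describe the Kargu-2 or any real system.

import json

# All field names below are hypothetical, chosen only to illustrate the
# kind of evidence investigators might look for in a recovered weapon.
def find_autonomous_fire_events(log_path):
    """Scan a JSON-lines flight log for fire events that were not
    attributed to a human operator."""
    hits = []
    with open(log_path) as log_file:
        for line in log_file:
            entry = json.loads(line)
            if entry.get("event") != "fire":
                continue
            # A fire event with no operator identifier and with target
            # selection attributed to onboard software would be treated
            # as evidence of autonomous use.
            if (entry.get("operator_id") is None
                    and entry.get("target_acquired_by") == "onboard_classifier"):
                hits.append(entry)
    return hits

if __name__ == "__main__":
    for event in find_autonomous_fire_events("flight_log.jsonl"):
        print(event.get("timestamp"), event)

Even a perfect version of such a script would only ever establish positive evidence; as noted above, an empty result says nothing about other weapons used in the same battle or about logs that were wiped or spoofed.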

The next best, and highly situational, alternative is for the military that used the weapon to confirm that it was used autonomously. Claiming credit for an autonomous weapon could help a military show it is innovative, strong, and fielding cutting-edge technology. But as international norms and treaties around autonomous weapons grow and strengthen, admitting to autonomous weapons use will come with growing diplomatic, social, and political costs. A whistleblower might come forward, but that’s hardly reliable. Alternatively, intelligence sources, from intercepted communications to human intelligence assets, may reveal that orders were given to use a weapon autonomously. But that information almost certainly cannot be made public, because doing so might burn the intelligence source. (The UN report on Kargu-2 use in Libya cites a “confidential” source. Perhaps that’s an intelligence source, a military defector, or something else entirely, but it’s impossible to know, and therefore impossible to assess the source’s credibility.)

The other, even weaker, option is to conclusively rule out the possibility of human decision-making. A remotely operated weapon needs to communicate with a human operator, and those signals can be jammed to thwart remote operation. If the communication link between an operator and a weapon was completely and irrevocably severed (or does not appear to have been present at all) and the weapon continued to operate, investigators might conclude the weapon had been operating autonomously. But that conclusion assumes the jammers targeted the correct communications frequencies and were working correctly. It also assumes there was no delay between an order to fire and the weapon acting, and that the jammed weapon was not employing countermeasures, such as increasing signal strength, to overcome the jamming. Likewise, if investigators believed that no operator was within control range of a weapon, they might see that as evidence of an autonomous system. But this analysis would also be fraught with uncertainty. What if the operator had been in a camouflaged vehicle, or had left the search area before they could be found?

The good news is that verifying the use of the riskiest autonomous weapons may actually be quite easy. States are increasingly developing autonomous drone swarms, which pose global security risks akin to those of traditional weapons of mass destruction. For example, the U.S. Strategic Capabilities Office launched 103 Perdix drones out of three F/A-18 Super Hornets in October 2016, while India tested a 75-drone swarm during its recent Army Day demonstration and stated that it plans to build a swarm of 1,000 or more drones operating without human control. No human could plausibly have meaningful direct control over such a massive swarm. Verifying autonomous weapons use would be as simple as counting the number of deployed drones. (Finding an exact threshold at which human control becomes implausible is certainly a challenge, but basic intuition suggests such a threshold must exist.)

Researchers have also proposed new, technical means of verification. Systems might be designed to require and verify the presence of a human operator, or to keep unalterable data logs of autonomous use. Autonomous weapons might incorporate cryptographic techniques to prove that the weapon cannot initiate an attack without human authorization. Alternatively, critical subsystems, such as a weapon deployment system, could be made transparent to allow third-party inspection without providing access to more sensitive subsystems. But all of these methods require significant state acquiescence and trust, and they may not be successful in practice. If a state chooses not to incorporate such measures, any verification value disappears. (Of course, adopting such measures may demonstrate a state’s commitment to emerging autonomous weapons norms, which has its own value.)
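
As a purely illustrative sketch of the cryptographic idea, and not a description of any fielded or proposed system, fire-control software could refuse to release a weapon unless the command carries a valid message authentication code generated with a key held only on a human operator's console. The short Python example below uses the standard hmac library; every name and design detail in it is an assumption made for illustration.

import hashlib
import hmac
import os

# Hypothetical shared secret held by the operator's console and
# provisioned to the weapon before launch. A real design would more
# likely rely on asymmetric signatures and tamper-resistant hardware.
OPERATOR_KEY = os.urandom(32)

def operator_authorize(engagement_id: str, nonce: bytes) -> bytes:
    """Runs on the operator console: a human signs one specific engagement."""
    return hmac.new(OPERATOR_KEY, engagement_id.encode() + nonce,
                    hashlib.sha256).digest()

def weapon_may_fire(engagement_id: str, nonce: bytes, tag: bytes) -> bool:
    """Runs on the weapon: release is permitted only if the human-signed tag
    verifies for this exact engagement and nonce (blocking replayed orders)."""
    expected = hmac.new(OPERATOR_KEY, engagement_id.encode() + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Usage: the weapon issues a fresh nonce, the operator signs it, and only
# then does the fire-control check pass; a tag for a different nonce fails.
nonce = os.urandom(16)
tag = operator_authorize("engagement-042", nonce)
assert weapon_may_fire("engagement-042", nonce, tag)
assert not weapon_may_fire("engagement-042", os.urandom(16), tag)

An unalterable record of such authorizations, for example signed and timestamped log entries, is what would give later inspectors something to check, which is exactly why the scheme only has value if the state deploying the weapon agrees to build it in.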

Drawing all this back to the opening question of autonomous weapons use in Libya, any verification—positive or negative—will inherently come with considerable doubt. And war is inherently confusing and opaque. The simple fact is that a wide range of states, big and small, are developing autonomous weapons. The technology is not necessarily complicated: designing an autonomous weapon with facial recognition is simple enough that it could be a computer science class project. The challenge of verifying autonomous weapons use means the world may have seen the first use of “killer robots” without anyone actually knowing it. The dawn of autonomous warfare may be as simple as a completed homework assignment.



Comments
Reid Byers
3 years ago

If a robot has autonomous capability, it should be considered a killer robot no matter what mode it is operating in. Its use, in any mode, should be construed as sufficient to complete the offense.

Dr. Selim Yalvac
3 years ago

How can you logically compare an autonomous drone that may inadvertently kill a few civilians around a military target versus a 500 lb. cluster bomb that purposely kills hundreds of civilians? Which one is worse?


