Russia may have used a killer robot in Ukraine. Now what?

By Zachary Kallenborn | March 15, 2022

A screenshot of the loitering munition known in English as the KUB-BLA. Credit: Kalashnikov Group.

Based on pictures out of Ukraine showing a crumpled metallic airframe, open-source analysts of the conflict there say they have identified a new sort of Russian-made drone, one that the manufacturer says can select and strike targets either via uploaded coordinates or autonomously. When soldiers upload an image of a target to the Kalashnikov ZALA Aero KUB-BLA loitering munition, the system is capable of “real-time recognition and classification of detected objects” using artificial intelligence (AI), according to the Netherlands-based organization Pax for Peace (citing Jane’s International Defence Review). In other words, analysts appear to have spotted a killer robot on the battlefield.

The images of the weapon, apparently taken in the Podil neighborhood of Kyiv and uploaded to Telegram on March 12, do not indicate whether the KUB-BLA, manufactured by Kalashnikov Group of AK-47 fame, was used in its autonomous mode. The drone appears intact enough that digital forensics might be possible, but the challenges of verifying autonomous weapons use mean we may never know whether it was operating entirely autonomously. Likewise, whether this is Russia’s first use of AI-based autonomous weapons in conflict is also unclear: Some published analyses suggest the remains of a mystery drone found in Syria in 2019 were those of a KUB-BLA (though, again, the drone may not have used the autonomous function).

Nonetheless, assuming open-source analysts are right, the event illustrates well that autonomous weapons using artificial intelligence are here. And what’s more, the technology is proliferating fast. The KUB-BLA is not the first AI-based autonomous weapon to be used in combat. In 2020, during the conflict in Libya, a United Nations report said the Turkish Kargu-2 “hunted down and remotely engaged” logistics convoys and retreating forces. The Turkish government denied the Kargu-2 was used autonomously (and, again, it’s quite tough to know either way), but the Turkish Undersecretary for Defense and Industry acknowledged Turkey can field that capability.

Autonomous weapons have generated significant global concern. A January 22, 2019 Ipsos poll found that 61 percent of respondents across 26 countries oppose the use of lethal autonomous weapons. Thousands of artificial intelligence researchers have also signed a pledge by the Future of Life Institute against allowing machines to take human life. These concerns are well-justified. Current artificial intelligence is particularly brittle; it can be easily fooled or make mistakes. For example, a single pixel can convince an artificial intelligence that a stealth bomber is a dog. A complex, dynamic battlefield filled with smoke and debris makes correct target identification even harder, posing risk to both civilians and friendly soldiers. Even if no one is harmed, errors may simply prevent the system from achieving the military objective.

The open questions are: What will the global community do about autonomous weapons? What should it do?

In the first case, the answer is pretty clear: Almost certainly nothing. International norms around autonomous weapons are quite nascent, and large, powerful countries, including the United States, have pushed back against them. Even if there were broadly accepted norms, it’s not clear how much more could be done. Russia is already under harsh, punishing sanctions for its actions in Ukraine. The US Congress just approved a $13.6 billion Ukraine aid bill, which includes providing Javelin anti-tank and Stinger anti-aircraft missiles. The United States and its allies have also been clear that they have little appetite for direct military intervention in the conflict. Plus, how much can the global community really do without knowing for sure what happened? But Russia’s apparent use of the KUB-BLA does lend greater urgency to broader international discussions around autonomous weapons.

The state of autonomous weapons discussions. Last week, governments from around the world met in Geneva under the auspices of the United Nations Convention on Certain Conventional Weapons to discuss questions raised by autonomous weapons, including whether new binding treaties are needed. So far, arms control advocates have not succeeded in winning support for a binding treaty banning autonomous weapons. The convention’s process requires member states to reach consensus on any changes to the treaty. The United States, Russia, and Israel have significant concerns, and various others do not support a ban. The Convention on Certain Conventional Weapons process simply is not going anywhere.

Nonetheless, the discussions at the convention have had great value in clarifying options and positions. Delegates to the convention have previously identified four general approaches for addressing autonomous weapons: a legally binding instrument; a political declaration; strengthening the application of existing international humanitarian law; and the option of doing nothing. In addition, there’s likely a fifth possibility: Countries could, where applicable, raise the issue of autonomous weapons in discussions on other weapons treaties, like those addressing nuclear or chemical weapons.

A legally binding, comprehensive ban on autonomous weapons would represent the strongest possible measure. But the reality is that major military powers would never support this tack. The active protection and close-in weapon systems they use to defend military platforms from incoming missiles and other attacks are simply too valuable.

Advocates might have greater success if they rally around the position of the International Committee of the Red Cross, which offers an option for common ground. The organization’s position on autonomous weapons focuses on their risky aspects. It recommends banning unpredictable autonomous weapons and autonomous weapons that target human beings, along with various regulations on other sorts of “non-prohibited” autonomous weapons. The committee’s position would also likely exclude the autonomous weapons militaries depend on, like active protection systems and close-in weapon systems, from a potential ban.

Alternatively, governments could focus on better implementing existing international humanitarian law. They could develop a set of best practices, practical measures, and general information sharing to improve compliance with international humanitarian law for autonomous weapons. Developing best practices might include ensuring weapons undergo rigorous testing; developing military doctrine, training practices, and procedures to increase the accuracy of any weapons; or undertaking a legal review of weapons use. However, there is an underlying question of how well autonomous weapons can comply with existing humanitarian law. If failure is rampant and occurs even with best practices in place, then those measures are not enough. Conversely, if those measures do effectively—or even drastically—reduce the risk, then perhaps this approach is useful. Of course, error rates may vary between weapon systems: perhaps risk can be reduced reliably in some types of weapons, but not others. This may lead to future discussions about narrow bans, better informed by battlefield experience.

Or countries could just issue a political declaration about the necessity of human control. This might be the easiest approach because no one would be required to give up or alter their weapon systems. But that may also place countries in an awkward position: If human control is necessary, why have autonomous weapons? At the same time, advocates for a ban on the weapons might oppose a declaration with minimal effects on military activity. So, ironically, the seemingly easiest compromise actually might be the least likely.

Last, countries could simply ignore the growing tide of public opinion against autonomous weapons. This would let militaries keep, without apology, whatever autonomous systems they like, but it could also be a challenge in democratic societies, where public opinion has at least some effect.

Autonomous weapons and weapons of mass destruction treaties. Another possibility for placing some sort of guidelines around autonomous weapons, one that has garnered minimal attention in Geneva, would be to expand the debate to other international treaty discussions. Treaties around chemical, biological, radiological, and nuclear weapons might have applicability to autonomous weapons in certain contexts.

The Nuclear Non-Proliferation Treaty does not require states to maintain human control over decisions to use nuclear weapons. Incorporating a requirement for human control in some manner might actually win great-power support. And that is quite significant, because autonomous nuclear weapons are perhaps the riskiest autonomous weapons. An error could wipe out humanity. (Large autonomous drone swarms are another significant risk.) The congressionally authorized National Security Commission on Artificial Intelligence recommended the United States not allow AI to make decisions on firing nuclear weapons. Notably, TopWar, a Russian defense outlet, wrote in support of arms control negotiations on autonomous nuclear weapons on March 8, shortly after the current conflict started.

Advocates could also raise the topic in Chemical Weapons Convention or Biological Weapons Convention discussions that consider how potential risks might be limited. The more precise targeting that autonomous weapons offer is quite significant for chemical and biological weapons delivery. Part of why most countries have given up chemical and biological weapons is that their delivery is unreliable, which makes them less useful militarily. An errant wind might blow the agent away from the intended target and toward a friendly or neutral population. But artificial intelligence-aided delivery could change that, further weakening the existing norms around those weapons.

At minimum, states might consider whether and how to adopt export control measures to reduce the risk of algorithms and software designed for dispersal of pesticides or other chemicals falling into the hands of governments that have chemical and biological weapons. Other treaties may also be options for regulating autonomous technologies in some fashion.

Of course, establishing treaties and norms is only the first step. The next is figuring out an enforcement mechanism: What does the global community do if those treaties and norms are violated? The nature of the response will depend in large part on which option countries settle on, and how they do so. Arms control advocates might conclude that countries with large, powerful militaries will never support regulations on autonomous weapons and could therefore attempt to establish a comprehensive ban among whichever states are willing to sign on.

But if great powers do not support the norm, potential punishments like economic sanctions or military intervention won’t be meaningful. Certain punishments, like robust military intervention, might require a specific country such as the United States to carry them out and accept whatever risks may come. Conversely, if countries settle on a public declaration advocating human control or on strengthening the applicability of existing international law, nothing may happen until after a conflict, when the global community considers what war crimes, if any, may have been committed.

Once again, a potential AI-based autonomous weapon was used in combat. Once again, the details are murky. Once again, the question remains: What do we do now?


Comment from Daan Kayser (2 years ago):

Thank you for paying attention to this topic. In the PAX report mentioned in the article, we state that the weapon systems we analysed have a human approving an attack, and are therefore not autonomous weapons, let alone fully autonomous weapons (aka killer robots). Also with the KUB there is still a human operator approving an attack, so it is not an autonomous weapon per se, as it does not detect and attack targets based on sensor input, and a human still makes the attack decision. These weapons do show the trend towards increasing autonomy and a reducing role of… Read more »