Stopping killer robots and other future threats

By Seth Baum | February 22, 2015

Only twice in history have nations come together to ban a weapon before it was ever used. In 1868, the Great Powers agreed under the Saint Petersburg Declaration to ban exploding bullets, which, by spreading metal fragments inside a victim’s body, could cause more suffering than regular bullets. And the 1995 Protocol on Blinding Laser Weapons now has 104 signatories, which have agreed to ban the weapons on the grounds that they could cause excessive suffering to soldiers in the form of permanent blindness.

Today a group of non-governmental organizations is working to outlaw another yet-to-be-used device: the fully autonomous weapon, or killer robot. In 2012 the group formed the Campaign to Stop Killer Robots to push for a ban. Unlike the remotely piloted unmanned aerial vehicles in common use today, fully autonomous weapons are military robots designed to make strike decisions for themselves. Once deployed, they identify targets and attack them without any human permission. None currently exist, but China, Israel, Russia, the United Kingdom, and the United States are actively developing precursor technology, according to the campaign.

It’s important that the Campaign to Stop Killer Robots succeed, either at achieving an outright ban or at sparking debate that results in some other sensible and effective regulation. This is vital not just to prevent fully autonomous weapons from causing harm; an effective movement will also show us how to proactively ban other future military technologies.

Fully autonomous weapons are not unambiguously bad. They can reduce burdens on soldiers. Already, military robots are saving many service members’ lives, for example by neutralizing improvised explosive devices in Afghanistan and Iraq. The more capabilities military robots have, the more they can keep soldiers from harm. They may also be able to complete missions that soldiers and non-autonomous weapons cannot.

But the potential downsides are significant. Militaries might kill more if no individual has to bear the emotional burden of strike decisions. Governments might wage more wars if the cost to their soldiers were lower. Oppressive tyrants could turn fully autonomous weapons on their own people if human soldiers refused to obey. And the machines could malfunction—as all machines sometimes do—killing friend and foe alike.

Robots, moreover, could struggle to recognize unacceptable targets such as civilians and wounded combatants. The sort of advanced pattern recognition required to distinguish one person from another is relatively easy for humans but difficult to program into a machine. Computers have long outperformed humans at tasks like multiplication, but despite great effort, their capacity for face and voice recognition remains crude. Technology would have to overcome this problem for robots to reliably avoid killing the wrong people.

A government that deployed a weapon that struck civilians would violate international humanitarian law, and this prohibition serves as a basis for the anti-killer-robot campaign. The global humanitarian disarmament movement used similar arguments to achieve international bans on landmines and cluster munitions, and it is making progress toward a ban on nuclear weapons.

If the Campaign to Stop Killer Robots succeeds, it will achieve a rare feat. It is no surprise that weapons are rarely banned before they are ever used, because doing so requires proactive effort, whereas people tend to be reactive. When a vivid, visceral event occurs, people are especially motivated to act. Hence concern about global warming spiked after Hurricane Katrina devastated New Orleans in 2005, and concern about nuclear power plant safety spiked after the 2011 Fukushima disaster.

The successful humanitarian campaigns against landmines and cluster munitions drew very effectively on the testimony of the many victims maimed by these weapons. The current humanitarian campaign against nuclear weapons similarly relies on the hibakusha—the victims of the 1945 Hiroshima and Nagasaki bombings—and victims of nuclear test detonations. The victims’ presence and their stories bring the issue to life in a way that abstract statistics and legal arguments cannot. Today there are no victims of fully autonomous weapons, so the campaign must be proactive rather than reactive, relying on expectations of future harm.

Protection from the dangers that killer robots could cause is a worthy end in its own right. However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding after the fact may not be an option. The campaign against fully autonomous weapons is a test case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril.

Editor's note: The views presented here are the author’s alone, and not those of the Global Catastrophic Risk Institute.

