The authoritative guide to ensuring science and technology make life on Earth better, not worse.

The struggle to ban killer robots

Image: HMS Daring (D32) with its Phalanx close-in weapon system.

The Campaign to Stop Killer Robots was launched in April 2013 with the objective of achieving a ban on the development, production, and deployment of lethal autonomous weapons. The same month, Christof Heyns, the United Nations’ special rapporteur on extrajudicial, summary or arbitrary executions, called for a moratorium on the development and deployment of such weapons while an international commission considered the issue. Within a remarkably short period of time, the campaign has achieved significant traction. Every month, a flurry of media reports, international conferences, and policy events are dedicated to the issue. The campaign is succeeding at something very important: bringing politics to bear on what are, at the most basic level, sets of computer algorithms designed to accomplish particular military tasks.

From May 13 to 16, a meeting of UN experts in Geneva under the auspices of the Convention on Certain Conventional Weapons will discuss questions relating to emerging technologies in lethal autonomous weapon systems. At this stage, much debate over the scope and meaning of any future prohibition is inevitable. The campaign is still being shaped; if it is to succeed, a group of states must coalesce around a shared understanding of the problem and its solutions over the next couple of years. What is the way forward?

Most important, the Campaign to Stop Killer Robots needs to strike a balance between establishing a wide-ranging prohibition and pragmatically accommodating the interests of potential state supporters. Would-be signatories need to be reassured that they won't have to give up something they perceive to be militarily essential. If it is not possible to persuade states that a prohibition is needed, the campaign will most likely not find the support required to form a coalition and negotiate a successful treaty. To move into the next phase, the nongovernmental organizations that make up the campaign need to agree among themselves on a set of key issues.

A primary task will be to clarify what, exactly, should be subject to new laws and regulations, what type of rules (if any) should apply, how they should be implemented, and under whose oversight. So far, much attention has been given to whether lethal autonomous weapons are unlawful under international humanitarian law. The jury is still out on the complex issue of illegality, and the campaign must think strategically about the emphasis put on existing legal norms. If lethal autonomous weapons are not unlawful, but one wants them to be, a ban is needed. If they are unlawful, but one wants to end the discussion once and for all, a ban would still be useful. At the same time, agreement on international law alone does not resolve the matter.

One of the most contentious issues is likely to concern the threshold at which a weapon system is deemed to be “fully autonomous.” The minimum level that is set would determine which systems are banned and which are allowed to continue in operation. Setting the threshold of autonomy is going to involve significant debate, because machine decision-making exists on a continuum. A key task for the campaign will be to create consensus on this issue among both nongovernmental organizations and the states that would have to negotiate and then implement a ban.

Furthermore, the world must be convinced that a ban is realistic. Those who dispute the need for a ban often argue that it is too late because the technology is already in the pipeline; that a ban is unfeasible given the difficulty of defining automated and autonomous processes; or, in a version of the "regulate the use, not the technology" argument, that a ban is unnecessary. The campaign must address each objection.

Finally, there is the challenge of reaching a public that is already debating the use of artificial intelligence technology for civilian purposes. While the public imagination is most easily captured by fantasies of menacing-looking hardware, the problem with lethal autonomous weapons is one of decision-making and software development. The campaign needs to provide a convincing analysis of what distinguishes killer software from non-killer software, and find effective ways to communicate this distinction to governments and citizens worldwide.

In sum, the campaign must balance engagement in technical expert conversations with active participation in public debate. Identifying and arguing for broad ethical principles while keeping the objective narrow appears to be the most feasible strategy, along with insisting that the development of lethal autonomous weapons is not inevitable. Political choices and priorities will determine what kind of algorithms result.


