By Kerstin Vignard, April 26, 2018
Why UN discussions on the management of lethal autonomous weapons need greater participation by the scientific and research communities and representatives of the private sector. Statements of alarm are not enough.
On April 9, diplomatic discussions on the weaponization of increasingly autonomous technologies resumed at the United Nations in Geneva. These talks, conducted within the framework of the 1980 Convention on Certain Conventional Weapons, began in 2014 as a series of informal meetings; since 2017, they have convened in a more formal format known as a “group of governmental experts.”
All national delegations to the talks contribute arms control expertise—but only a few delegations are staffed with technical experts in artificial intelligence, robotics, or other relevant technical domains.
It is therefore significant that in August 2017, in an open letter to the convention, researchers in AI and robotics, as well as founders of companies in those fields, urged the convention’s high contracting parties “to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”
The open letter left no doubt that the signatories harbored grave concerns about the application of AI and robotics to future weapon systems: “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
The signatories specifically mentioned their sense of responsibility for sounding the alarm about autonomous weapons. Indeed, scientists and technologists have long played roles both in the creation of tools of violence and in advocating for limits on and controls over those tools. When scientists and technologists voice concerns today about the weaponization of artificial intelligence, they strongly evoke the actions of another group of eminent visionaries who, more than 60 years ago and in a similar historical context, warned of the dangers of nuclear weapons.
The Russell-Einstein Manifesto, issued in 1955, was a public declaration by prominent scientists at a time of heightened international tensions between East and West, as well as of growing interference in the internal affairs of states through both military and covert operations. It was also a period characterized by revolutionary scientific progress and technological innovation.
The manifesto had its origins in the concerns of the physicist Joseph Rotblat and the polymath Bertrand Russell about the effects of nuclear weapons, the risk of nuclear proliferation, and the future of humankind. That is, the authors wanted to sound the alarm about the potentially catastrophic consequences of the use of nuclear weapons and the existential risk posed to humanity by their retention. The manifesto’s 11 signatories included several of the world’s most prominent scientists. Ten were Nobel laureates, recognized for their contributions to physics, chemistry, physiology or medicine, literature, or peace.
Released in London, the manifesto garnered significant attention and served as the catalyst for the establishment of the Pugwash Conferences on Science and World Affairs, an organization whose activities were to serve as a “channel of communication between scientists, scholars, and individuals experienced in government, diplomacy, and the military for in-depth discussion and analysis of the problems and opportunities at the intersection of science and world affairs.” Although Pugwash received the Nobel Peace Prize in 1995 for its efforts toward nuclear disarmament, many would argue that the manifesto’s ultimate impact on nuclear disarmament was limited.
The manifesto’s most enduring phrase is its stark plea to “Remember your humanity, and forget the rest.” However, a different passage resonates deeply (despite its gendered 1950s language) in the context of today’s debates on international security, science, and technology:
Many warnings have been uttered by eminent men of science and by authorities in military strategy. None of them will say that the worst results are certain. What they do say is that these results are possible, and no one can be sure that they will not be realized. We have not yet found that the views of experts on this question depend in any degree upon their politics or prejudices. They depend only, so far as our researches have revealed, upon the extent of the particular expert’s knowledge. We have found that the men who know most are the most gloomy.
To be clear, not all members of the scientific community are convinced of the potential hazards of autonomous weapons—although those who are convinced have been the most vocal so far. Nor are members of the scientific community gloomy about the potential benefits of artificial intelligence or about technological innovation more generally. They are deeply invested in—and passionate about—ensuring that AI is beneficial to society and ultimately humanity. Indeed, several initiatives under way in the science and technology communities seek to realize the potential benefits of AI while limiting its hazards. These include the research agenda articulated in the 2015 Open Letter on Research Priorities for Robust and Beneficial Artificial Intelligence and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Other initiatives include, at the multilateral level, the 2017 AI for Good Global Summit, as well as the work of organizations such as OpenAI, the Partnership on AI, and many others.
Members of the scientific community, however, are the people most knowledgeable about the fundamental open questions in AI research that have implications for autonomous weapon systems. These include but are not limited to explainability (that is, the ability to understand how a system arrives at its output, instead of allowing it to operate as a “black box”); vulnerability to bias in training data and models; failures due to unanticipated operating environments; and fragility in the face of input or training data crafted by adversaries. Autonomy in weapon systems is far from the only realm in which these open questions pose vexing problems. But for autonomous weapons, the challenges are particularly acute.
Resolving these open questions will be integral to gaining societal acceptance of artificial intelligence in people’s daily lives—acceptance of AI-enabled improvements in medical diagnosis and treatment plans, vehicle safety, detection of fraud and other criminal activity, adaptive learning techniques for children, and greater equity in criminal justice systems, to name just a few benefits. And just as scientists and researchers are committed to making progress on these open questions, they are also the most qualified to inform others about what remains unknown or unresolved—and to learn from instances when AI delivers “surprising” or “unexpected” results, whether those results are positive or negative.
Today, scientists, technologists, and the research community more broadly are vital voices on the international security implications of science and technology, just as they were bellwethers on nuclear issues six decades ago. However, if policy makers are to leverage this crucial knowledge to reach sound decisions on issues as significant as weapon systems, they and the scientific community need to engage more productively with each other. For scientists to bring their technical knowledge to bear on the international policy discussion, it is not enough to issue open letters.
The scientific and research communities could speak with a voice distinct from that of others who work in the advocacy space. Legal scholars, for example, haven’t merely articulated concerns and proposed policy responses; they have detailed the particular questions that greater autonomy in weapon systems presents for international humanitarian law and international human rights law. Philosophers have weighed in on specific ethical approaches. Manifestos and warnings that the scientific and research communities issue from afar risk being dismissed as advocacy rather than treated as conclusions grounded in technical and scientific knowledge. Thus the scientific community might miss an opportunity to enrich the international conversation with a contribution that only it is qualified to make.
Since 2014, a number of nongovernmental experts—including scientists, engineers, and researchers—have been invited to address the convention. For example, in last year’s discussion, the chairman prioritized a technological “stock-taking” and invited a considerable number of scientific experts to address the meeting. However, aside from those invited to speak, few scientists have chosen to attend the meetings as observers. They should consider taking advantage of the opportunity. By the standards of arms control, the convention’s framework is refreshingly open to nongovernmental observers and offers the scientific community a heretofore underutilized opportunity to help shape international understanding and eventual regulation of autonomous weapon systems. Observers often have the chance to address the meetings as well as to organize side events. Prominent researchers and scientists active in the Campaign to Stop Killer Robots and the International Committee for Robot Arms Control have done just that. Even greater participation in the convention’s work by the scientific and research communities, professional technical societies, and representatives of the private sector would be highly desirable.
For their part, states participating in the group of governmental experts on lethal autonomous weapons should welcome more active participation by the scientific and technological communities, which can bring greater scientific rigor and clarity to the discussions and narrow the gap in technological understanding. Governments could draw upon national expertise by inviting nongovernmental scientific experts to be part of their delegations, as well as by encouraging more researchers and scientists to register as observers.
The scientific and research communities have an important contribution to make to the discussion on autonomous weapons. These women and men, with their extraordinary vision and technical skills, have created technologies that have changed every aspect of people’s lives. Now some of them are using their vision and substantive knowledge to caution about the potential risks and uncertainties of increased autonomy in weapon systems. It would be paradoxical if society lauded them for their ability to imagine and create the personal assistants that most people carry in their pockets and the autonomous cars that many hope to see on roadways—yet selectively discounted their expertise, experience, knowledge, and instincts when they urge caution regarding the weaponization of related technologies, or point out the need for further reflection, deeper understanding, or greater certainty.
The time has come to move beyond manifestos and open letters to a more productive engagement between policy makers and the scientific and technical community. Six decades after the Russell-Einstein Manifesto, scientific and technological leaders around the world are once again lending their voices to a debate over whether and how scientific innovations should be employed in weapon systems. They can offer important technical contributions to the international discussion about autonomy in weapon systems and about AI’s broader implications for international security. These contributions are relevant, necessary, and welcome.
The views expressed here are those of the author, and do not represent the views of UNIDIR, its sponsors, or the United Nations.