By Emma Hansen | August 27, 2014
Until now, the first use of any new military technology has been extrajudicial in the purest sense, because no existing law governed it. There has invariably been an interval during which a technologically inventive nation could operate with impunity.
The treaties adopted after World War I to govern aerial bombing, themselves reactive measures, were suddenly insufficient when the Manhattan Project reached completion. The nuclear bomb presented a series of difficult legal questions (when generations of citizens, yet unborn, are deeply affected by any nuclear detonation, how is one to apply the principle of proportionality?), but an overwhelming desire to end World War II precluded an impartial interpretation of international law.
Internationally-minded scientists who had worked on the bomb raised cries of lament, many of which are documented in the early issues of the Bulletin. More widespread, though, was an atmosphere of celebration: The war was over! With this celebration came acceptance of the strange new weapons that had brought a decisive end to the war and saved countless American lives, or so the story went. For the American public, the introduction of nuclear weapons was inextricably linked with Allied victory, making it difficult to view the weapons as immoral. Only later would revisionist historians cast doubt on the political impact of the Hiroshima and Nagasaki bombings; only later would international law be interpreted with greater sobriety. The war may have been over, but with its end came the menace of a weapon whose debilitating effects on global security have not faded.
Because technologically advanced countries typically enjoy political and economic power, nations inventing new weapons are often able to shape the relevant international legal framework. This trend has continued with the introduction of emerging technologies since 1945.
Lethal drones. The first use of lethal force by unmanned aerial vehicles, like the first use of nuclear weapons, occurred in a climate of crisis. Following 9/11, President George W. Bush ordered the CIA to defeat Osama bin Laden by any means necessary. Bush relied on the newly passed Authorization for Use of Military Force, which granted him sweeping powers and rendered almost any application of force acceptable by domestic legal standards. It was public knowledge that a CIA drone program was preparing to deploy Predator unmanned aerial vehicles equipped with antitank missiles, and that Al Qaeda posed an immediate threat to American security. The crisis demanded decisive action. American citizens probably did not anticipate that their government’s agencies would continue the drone strikes for more than a decade, or that those strikes would target people who could not be considered imminent threats to national security.
Nuclear stockpiles still pose an existential threat. American drone strikes have killed more than 2,000 people without a public record of due process. The ongoing debate about the morality of these emerging technologies has not stagnated, but familiarity with once-exotic weapons breeds acceptance, fundamentally reframing the debate. The legal safeguards on nuclear weapons and armed drones were put in place during times of crisis, and were therefore drawn loosely. The world is no longer engaged in a total war. Bin Laden is dead. Yet the international community has not stopped to tighten those safeguards and ensure that these weapons do not continue to cause crises of their own.
In his 2009 Nobel Peace Prize address, President Obama declared his conviction that the United States must be “a standard bearer in the conduct of war.” This is the same man who authorizes every unmanned aerial vehicle strike. Obama and all others in power have a responsibility to interpret the existing body of international law without seeking to exploit it. Civil society also has a responsibility: to pay attention to the development of new technologies, to consider carefully the consequences of their introduction, and to take action if necessary.
Killer robots. The introduction of another novel class of weapons is on the horizon as lethal autonomous weapons, or “killer robots,” leave the realm of science fiction. Robots have no innate revulsion against killing and cannot be programmed with any capacity for reconciliation. Killer robots would exacerbate the already senseless nature of warfare, and their widespread use would completely and irrevocably dehumanize it. Even nuclear warheads and attack drones have a human in the loop.
The 9/11 attacks built US domestic support for unmanned aerial vehicles, just as World War II did for nuclear weapons, although the destructive consequences of both weapons later became all too apparent. At this pivotal moment, though, as lethal autonomous robots totter on the threshold of use, no imminent threat exists anywhere to necessitate or justify their deployment.
Breaking the pattern of violence. In commemorating the centenary of World War I, the world remembers the beginning of a century of violence. Those conflicts were made all the more bitter by human ingenuity turned against humanity on an industrial scale, of which the aerial bombings beginning in 1915 are but one stark example.
Thus far, international law has stumbled along behind technological innovation. In the most significant cases, imminent threats have made it easier to stretch legal frameworks. It is encouraging, and unprecedented, that discourse about killer robots is happening at a time when the public faces no immediate, large-scale security threat. It is also necessary: the voices capable of stigmatizing and banning these inhumane weapons must speak before lethal autonomous robots are ever used. To stay silent would permit the use of a horribly indiscriminate weapon and would set a dangerous precedent for the introduction of ever more inhumane technologies.
Legislation trailing scientific progress is not a pattern that must or should continue. Lethal autonomous robots can be stigmatized and banned before they are ever used. Technological advance is itself amoral, though many of its applications are immoral, and for that reason it must be kept in check. Prohibiting lethal autonomous robots would not eliminate existing nuclear stockpiles, nor would it bring justice to the many innocent victims of drone strikes. It would simply be much-needed damage control. Consider it an experiment.