In general, the universal prohibition on biological weapons is widely supported, and there is healthy concern over how dual-use technologies—those with both beneficial and dangerous applications—might threaten it. The upcoming Eighth Review Conference of the Biological and Toxin Weapons Convention (BTWC) in November will tackle this concern, as have individual countries. The German Ethics Council issued a 2014 report recommending legal regulation of worrisome dual-use biological research, and in 2016, the US National Science Advisory Board for Biosecurity published recommendations on the risks and benefits of so-called “gain of function” research, which aims to identify how pathogens evolve by forcing genetic changes that sometimes make them more dangerous.
That said, most attention has concentrated on the dangers posed by microbiology and novel pathogens. This tight focus risks missing other trends in the life sciences that may threaten the norm against biochemical weapons. Advances in cognitive neuroscience, in particular, might have dangerous implications. While neuroscience, which studies the brain and central nervous system, has enormous potential for good, a subset of neuroscience research is dual-use, with the potential to be applied to developing incapacitating agents and interrogation tools. Despite their appeal in modern conflicts—particularly those in which enemies are embedded in civilian populations—such “nonlethal neuroweapons” are unlikely to be nonlethal in practice, and they run contrary to both the BTWC and the Chemical Weapons Convention (CWC). Two modest proposals could help prevent misuse of dual-use neuroscience: increasing national transparency about the aims of current neuroscience research, and making neuroscience the subject of science and technology review under the two international conventions.
Good Intentions. Neuroscience research promises huge benefits for the world. For countries with rapidly aging populations, advances in the field will be integral to managing and alleviating chronic pain, mental illness, and neurocognitive degenerative conditions such as dementia. Dementia alone was estimated to cost the United States as much as $215 billion in 2010, including opportunity costs for patients, families, and caregivers; researchers estimate this figure will surpass $1 trillion per year by 2040 unless new interventions are found.
At the same time, neuroscience presents a range of possible uses for countries seeking to improve national security, whether against conventional threats from other states or militant non-state groups. The US Defense Advanced Research Projects Agency (more commonly known as DARPA) is seeking advances in the science of behavior prediction and modification that would improve intelligence gathering and detection and confrontation of security threats.
A subset of neuroscience research, though, involves the development of pharmacological agents that would incapacitate targets and aid in interrogation. This research is typically dual-use, as many agents can be used as anesthetics and analgesics, or to help develop therapeutic tools for countering the effects of dementia. Interaction between brain scientists and state militaries is not new. For example, in the 1980s, the US Army explored alpha-2 adrenergic receptor agonists as incapacitating agents; these same drugs are also prescribed in lower doses to treat Tourette’s Syndrome.
Today, calmatives—agents that render individuals calm and compliant—are seen as potentially useful in riot control and counterinsurgency. In 2003, a National Research Council report noted “the theoretical possibility of peacefully incapacitating combatants/agitators, reducing the need for the violence that is frequently associated with many of the current methods.” A US multi-service document on how to deploy non-lethal weapons notes that using riot-control agents in war is prohibited by both the CWC and a 1975 US executive order, but contends that these agents may nonetheless be employed as “defensive means of warfare” for such tasks as controlling riots, dispersing civilians who are being used as shields, and conducting rescue missions. The most famous recent use of an incapacitating chemical agent occurred in 2002, when Russian special forces used an unidentified gas (later identified as a derivative of the anesthetic fentanyl) to end a hostage crisis in a Moscow theater.
It is easy enough to see why some military leaders find the idea of using nonlethal neuroweapons in armed conflict appealing. There is, in principle, a case for using them in “unconventional” conflict scenarios, such as hostage situations, when an enemy is using human shields, or when insurgents are occupying civilian buildings such as hospitals. The thought that a gas weapon could put everyone to sleep and allow belligerents to be apprehended is seductive. But principles must give way to facts when making applied ethical decisions. This hypothetical nonlethal neuroweapon is—to borrow from the philosopher Michael Davis—like a flying pig in a thought experiment: for the example to guide action, we would need to live in a universe where pigs can fly. Because we don’t live in that kind of universe, the example is invalid. We likewise don’t live in a universe where the ideal nonlethal weapon is possible, much less plausible.
Why? First, “lethality” is hard to define. Dose makes the poison, and in practice we know too little about the diffusion of gaseous weapons in combat environments to be able to state confidently that a weapon is nonlethal. An allergic reaction, hyperventilation, or simply being too old or too young could render a neuroweapon fatal, particularly for noncombatants.
Second, context matters. If neuroweapons were to be used in an area where an unconscious civilian could come to harm, “non-lethal” would lose its meaning. We can say that gently shoving a person is nonlethal, while still acknowledging that a gentle shove off a cliff is fatal, even if it isn’t the shove but the fall that ultimately kills. In the same way that precision munitions in the wrong context can cause significant civilian casualties, the ideal of non-lethality elides the reality. Terminology should not lull us into complacency.
Third, the history of chemical weapons suggests that commanders often vastly over-estimate their efficacy and under-estimate the confusion they wreak on battlefield management. Their tactical advantages over other options are limited at best, yet they have led to expensive and error-prone arms races—part of the reason US President Richard Nixon renounced the first use of chemical weapons and all methods of biological warfare in 1969.
Finally, it is not clear that the norms of armed conflict can accommodate the restraint needed to practice nonlethal methods. Nonlethal weapons of many kinds are used by law enforcement, but often to lethal effect. The threshold for the use of lethal force in armed conflict is much lower than in law enforcement, and not without reason—but this lower threshold could lead to more, not less, lethal use of nonlethal weapons.
The road to hell. International law strongly proscribes the use of neuroweapons, which target the brain and central nervous system and can be chemical, biological, or toxin-producing in nature. The CWC bans the production, acquisition, stockpiling, retention, and use of chemical weapons. This relatively straightforward prohibition is undercut, however, by the convention’s provision allowing chemical agents to be used for certain law enforcement activities, like riot control. The BTWC is more sweeping in its prohibition: It bans biological agents or toxins from being used as weapons, whatever their mode or method of production. But unlike the CWC, the BTWC has no mechanism for inspecting national facilities to determine whether the prohibition is being upheld.
Neuroweapons, thus, present a challenge for both the CWC, because of its limited scope, and the BTWC, because of its lack of an inspection mechanism. In light of these limitations, it would not be surprising to see governments turn to exotic incapacitating agents in the coming decade, in an attempt to strengthen their ability to respond to threats that are unconventional, mobile, or embedded within civilian populations. Some of these neuroweapon users may be autocratic regimes interested in repressing dissent or rebellion, but as we’ve seen, the militaries of rich, democratic nations also have an interest in using nonlethal chemical weapons in attempting to confront insurgents or terrorist groups.
Attempting to counter the threat of terrorism without harming noncombatants is arguably a well-placed intention, but it paves a road to hell by threatening to undermine almost half a century of work to keep the global community engaged in upholding the ban on biological weapons, and the hard-won (if incomplete) destruction of chemical weapons stockpiles around the world. Facilities that can create nonlethal biochemical agents aren’t too different from ones that can create lethal agents—and may be identical if the only difference between a lethal and nonlethal weapon is dose.
International agreements like the CWC and BTWC survive on trust. That trust is needed to maintain the prohibition against biological weapons in the face of a range of other threats, such as the proliferation of dual-use biotech, non-state actors interested in chemical and biological weapons, and naturally occurring pandemics. It would be a shame to break that fragile trust, and jeopardize potential gains, for the limited strategic benefits of nonlethal neuroweapons.
Moreover, subsuming neuroscience advances under the rubric of national security may derail the field from researching peaceful uses. The US BRAIN Project, a White House-backed research initiative launched in 2013, was hailed for the health advances it might bring about. A full third of the project’s 2016 funding, though, flows through DARPA. Although some basic neuroscience questions about maladies like post-traumatic stress disorder may be addressed by national security agencies, the central mission of DARPA and similar organizations remains a military one. Time and again, the story of the relationship between science and security is that when the two are mixed, the priorities of the latter dominate the trajectory of the former. There is no reason to believe things will be different this time.
Paving a better path. The upcoming Eighth Review Conference in Geneva will be an ideal venue to raise awareness about troubling technological developments in neuroscience, and enact policies to strengthen the norm against biochemical weapons.
One step in the right direction would be to require greater transparency among states that are party to the BTWC, by calling on them to disclose their research interests and programs in cognitive neuroscience. Given the undoubted benefits the field could bring to medicine and public health, there is little reason to keep these programs under a shroud of secrecy—by, for example, housing them in defense organizations. A commitment by members to transparency and openness in their neuroscience research should not be difficult to achieve.
Second, neuroscience and its associated subspecialties in pharmacology, cognitive science, and microbiology should be subject to detailed review for their capacity to produce technologies that run counter to the convention. There are already signs of support for such a measure. At the Preparatory Committee to the Eighth Review Conference, which concluded in August, participants spoke often about the need for a robust science and technology review process to strengthen the BTWC.
These two relatively modest measures are important steps towards resolving the dual-use dilemma neuroscience presents, and paving the way for further policy action at national and international levels.
The authors of this piece are Nicholas G. Evans and Jonathan D. Moreno. Evans is an assistant professor in the Department of Philosophy at the University of Massachusetts Lowell, where he conducts research at the intersection of infectious disease and national security. He is co-editor of the collection Ebola’s Message: Public Health and Medicine in the Twenty-First Century, published in 2016 by MIT Press. Moreno, the David and Lyn Silfen University Professor at the University of Pennsylvania, is a philosopher, historian, and, in the words of The American Journal of Bioethics, “the most interesting bioethicist of our time.” His most recent book was Impromptu Man: J.L. Moreno and the Origins of Psychodrama, Encounter Culture, and the Social Network (2014), and he is a senior fellow at the Center for American Progress.
The authors are supported by funding from the Greenwall Foundation for their research on dual-use neurotechnologies and international governance.