
The security impact of the neurosciences

By Jonathan Y. Huang, Margaret E. Kosal | June 20, 2008

In light of increased research into human brain function and cognition, some important questions have arisen: What are the implications of security-related applications of this research? What impact will this research and its applied technologies have globally? What international regimes could regulate the dual use or potential misuse of these technologies? Or will international regimes develop as a result?

To begin assessing the security and ethical impact of cognitive science research, it is best to examine the state of the three neuroscience applications that have generated the most concern: neuropharmacology, neural imaging, and brain-machine interactions.

Neuropharmacology

The field of neuropharmacology encompasses research into drugs that influence the chemical balance of the nervous system and the brain. Doctors use these advanced drugs to treat the effects of a range of diseases and disorders, from Parkinson’s disease to addiction, by maintaining or restoring the chemical interactions that underlie a person’s neurological activity. Neuropharmacology is also used in the development of anesthetics and analgesics (painkillers), which are among the prescription drugs most scrutinized by law enforcement. Some have undesired side effects, such as respiratory depression, a deleterious slowing of the breathing rate, so widening their use beyond traditional medical settings requires ethical and legal discussion.

Since before tobacco was first rationed to soldiers in wartime, militaries have considered and employed substances that affect soldiers’ brain chemistry. Today, worldwide military interest in neuropharmacology involves two main areas of research: the refinement of calmatives and the development of human enhancement drugs.1

The test case for the use of calmatives or other chemicals as a less-than-lethal means in military operations was the 2002 Moscow theater hostage crisis, in which Russian forces employed a fentanyl derivative to subdue Chechen terrorists who had taken several hundred civilians hostage. Overdoses of the calmative also caused many civilian casualties. Critics questioned not only whether the use of fentanyl against terrorists was ethical but also whether using the chemical agent violated the Chemical Weapons Convention (CWC). The use of calmative agents in warfare would challenge the CWC, and because they manipulate human consciousness, calmatives could also pose threats to fundamental human rights, including freedom of thought. The questions raised by the Moscow theater incident, however, have not stopped research into calmatives. In fact, researchers at the University of Goettingen in Germany have found that it is possible to prevent the breathing depression caused by drugs such as fentanyl, opening up further concern about the development of derivative drugs.2

The development and use of human enhancement drugs, especially for military purposes, causes even greater debate and speculation. The Defense Advanced Research Projects Agency (DARPA) program on Preventing Sleep Deprivation, for example, has invested millions of dollars in developing drugs that aim to “prevent the harmful effects of sleep deprivation” and increase “soldiers’ ability to function more safely and effectively despite the prolonged wakefulness inherent in current operations.” Interest in this program has kindled further research into performance-enhancement drugs that counter sleep deprivation, such as Ampakine CX717, which enhances attention span and alertness by binding to AMPA-type glutamate receptors in the brain and boosting the activity of the neurotransmitter glutamate, and modafinil, a non-amphetamine stimulant, or “wakefulness-promoting agent.” Other potential neuropharmaceutical applications include improving memory retention and treating post-traumatic stress disorder.

Neuroethicists are beginning to situate questions about constraining or enhancing soldiers’ cognitive abilities with drugs within larger bioethical debates.3 While some embrace the potential of these drugs and assert that controlling their use is best accomplished through self-regulation, others see the political inequalities such drugs could create, and the way they disrupt natural physiological processes and evolution, as reasons to worry about enhancement technologies.4

Neural Imaging

Neural imaging, or neuroimaging as it is sometimes called, has a long tradition in brain research. In the last 15 years, the advent of functional magnetic resonance imaging (fMRI), which allows for the scanning and monitoring of blood flow in the brain and other parts of the central nervous system, has advanced scientific understanding of brain activity, structure, and function.5 In medicine, this technology is used to detect tumors, blood clots, weakened or burst blood vessels, and indications of stroke risk.

This technology is also a powerful and illuminating tool for psychology and cognition studies. Scientists can now track active neural systems using fMRI and document the relationship between the brain’s chemistry, human behavior, and mental activities. Psychologists have used fMRI to study emotions, moral judgment, memory, and deception.6 Such capabilities have led to a range of potential applications. For example, DARPA has assessed the potential of detecting, transmitting, and reconstructing the neural activities of soldiers’ brains by including monitoring equipment in a small device, such as headgear. These technologies also have defensive applications, such as medical screening and “life preservation,” which would allow doctors to more effectively manage the mental stress, workload, and physiological status of soldiers.

Other uses of neural imaging have interested not only defense, but also homeland security and intelligence officials. Using fMRI, researchers at the University of Pennsylvania found that different groups of neurons are activated within the brain when a person lies as opposed to when they tell the truth. Thus, fMRI could contribute to a more effective and accurate lie-detection process than current technology allows.7 It is notable that current commercial applications of this technology have found the largest market among spouses suspicious of their partners.

The ethical arguments surrounding the application of neural imaging technology are complicated. On the one hand, the ability to map and understand the activities of the human brain could benefit individuals with spinal cord injuries or patients who are completely incapacitated. Furthermore, some observers argue that using neural imaging for lie detection would be more humane, more precise, and more accurate than traditional methods of interrogation. On the other hand, some ethicists warn that the next generation of neural imaging devices, such as those proposed to monitor public facilities for individuals with malicious intent, would raise serious privacy concerns.

Brain-Machine Interactions

Research into artificial intelligence spurred early interest in brain-machine interactions. Today, cutting-edge brain-machine applications primarily involve using the brain to control external devices, such as prosthetics and other implements that could restore movement or sensation to individuals with spinal cord or other paralytic injuries. The most common method of establishing a brain-computer interface, the external recording of electroencephalographic (EEG) signals, provides limited performance, but more invasive measures, such as implanting microelectrodes into the brain, may yield better results.8
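To make the decoding step concrete, the following is a minimal, purely illustrative sketch of an EEG-style brain-computer interface pipeline, written in Python with NumPy and scikit-learn. It is not drawn from any program described in this article: the signals are synthetic, and the channel count, frequency band, classifier choice, and command mapping are assumptions chosen only to show the general idea of turning recorded brain signals into a device command.

# Toy sketch of an EEG-based brain-computer interface (illustrative only).
# All signals are synthetic; parameters are assumptions, not a real system.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 250          # sampling rate in Hz (assumed)
N_CHANNELS = 8    # number of scalp electrodes (assumed)
TRIAL_SEC = 2     # length of one trial in seconds

def synth_trial(label: int) -> np.ndarray:
    """Generate one synthetic multi-channel trial; the class shifts 10 Hz power across channels."""
    t = np.arange(TRIAL_SEC * FS) / FS
    noise = rng.normal(0.0, 1.0, (N_CHANNELS, t.size))
    mu = np.sin(2 * np.pi * 10 * t)  # stand-in for a 10 Hz "mu rhythm"
    gain = np.linspace(0.2, 1.0, N_CHANNELS) if label else np.linspace(1.0, 0.2, N_CHANNELS)
    return noise + gain[:, None] * mu

def band_power(trial: np.ndarray, lo: float = 8.0, hi: float = 12.0) -> np.ndarray:
    """Average spectral power per channel in the lo-hi Hz band, computed with an FFT."""
    freqs = np.fft.rfftfreq(trial.shape[1], d=1 / FS)
    power = np.abs(np.fft.rfft(trial, axis=1)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[:, mask].mean(axis=1)

# Build a labeled data set of band-power feature vectors, one per trial.
labels = rng.integers(0, 2, size=200)
features = np.array([band_power(synth_trial(y)) for y in labels])

# Train a simple linear classifier and check it on held-out trials.
X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Decode a new trial into a (hypothetical) device command.
command = {0: "move cursor left", 1: "move cursor right"}
print("decoded command:", command[int(clf.predict([band_power(synth_trial(1))])[0])])

Real interfaces differ mainly in signal quality and scale; invasive microelectrode recordings carry far richer information than scalp EEG, which is one reason the more invasive approaches noted above may yield better performance.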

A second part of brain-machine research involves the external control of internal bodily functions. This avenue of research is more problematic, both technically and ethically. By stimulating the somatosensory cortex and medial forebrain bundle of a rat’s brain, scientists have been able to “control” the movement of the rat and direct it to navigate complicated terrain.9 Yet this capability raises the possibility that someday scientists may be able to negate independent thought, a defining characteristic of humans according to the Universal Declaration of Human Rights.

Advanced brain-machine research is of interest to defense and security programs for several reasons. The ability to control a machine directly with the human mind could enable the remote operation of a robot or unmanned vehicle in a dangerous or hostile environment.10 Such a capability would provide a substantial offensive advantage to armed forces. Brain-machine technology also manifests itself in the field of human augmentation, that is, enhancing human perceptual abilities by manipulating the inner workings of the brain, a feat that could be accomplished through the use of neural prostheses. Interested scientists have already begun discussing, at conferences and within federal funding agencies such as DARPA, the National Science Foundation, and the National Institutes of Health, the limits to which human augmentation should be supported by these developing brain-machine capabilities.

Advances in cognitive science research, as discussed above, are not explicitly governed by existing treaties. The Universal Declaration of Human Rights and the Chemical Weapons Convention, which addresses certain chemical agents, are the two international agreements most relevant to cognitive science research. And past successes in guarding against the misuse of certain technologies may not provide an adequate model for addressing this gap. Judging by the many questions these technologies raise, this vacuum in international governance may seem alarming.

University of British Columbia scholar Richard Price theorizes that the taboos on the use of chemical and nuclear weapons originated from the stigma of their immorality, which led to their general rejection throughout the international community. Following this logic, as discussions of the ethical and societal implications of neural and cognitive science technologies expand, the terms of the technologies’ acceptability will become clearer. If the ethics of these fields are consistently questioned throughout their development, then an international regime governing the use of these technologies will be more likely to emerge.

The potential of neurologically interactive technologies to change or enhance human capability, to invade the privacy of human thought, and to infringe on the independence of human minds raises fundamental ethical questions about the definition and meaning of being human, as well as the prospect that these technologies will contribute to new defensive and offensive weapons. As with the much-discussed dual-use conundrum of advanced biotechnology, almost all of the equipment and materials needed to develop dangerous applications of cognitive science have legitimate uses in a range of scientific and industrial settings.

Further research into how states across the globe are addressing the potential military use of neuroscience would provide interesting comparisons and a more complete assessment of the implications. Judging from the current literature on neuroethics, a norm concerning the ethical and social boundaries on the use of these technologies is slowly developing for non-security-related applications. If scientists expand this discussion, the development of an international security regime based on those emerging norms is likely.

1A well-publicized idea for adapting neuropharmaceuticals for military use was the “gay bomb.” The concept of using aphrodisiacs to influence enemy behavior on the battlefield illustrates widening interest in neuropharmaceuticals, despite the fact that this highly speculative internal proposal was rejected by the Air Force Research Laboratory.
2Jennifer Couzin, “A Sigh of Relief for Painkillers,” Science, vol. 301 (July 11, 2003), p. 150; Till Manzke, Ulf Guenther, Evgeni G. Ponimaskin, Miriam Haller, Mathias Dutschmann, Stephan Schwarzacher, and Diethelm W. Richter, “5-HT4(a) Receptors Avert Opioid-Induced Breathing Depression Without Loss of Analgesia,” Science, vol. 301 (July 11, 2003), pp. 226-29.
3For a discussion on how this question should be approached, see Erik Parens, “Creativity, Gratitude, and the Enhancement Debate,” in Neuroethics: Defining the Issues in Theory, Practice, and Policy, ed. Judy Illes (New York: Oxford University Press, 2006), pp. 75-86.
4Francis Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution (New York: Picador, 2003).
5For a discussion on the principles of MRI and fMRI as well as the stimulus-response activities observed using MRI technology, see Nikos K. Logothetis, “The Neural Basis of the Blood-Oxygen-Level-Dependent Functional Magnetic Resonance Imaging Signal,” Philosophical Transactions: Biological Sciences, vol. 357 (August 29, 2002), pp. 1003-37.
6Joshua D. Greene, R. Brian Sommerville, Leigh E. Nystrom, John M. Darley, and Jonathan D. Cohen, “An fMRI Investigation of Emotional Engagement in Moral Judgment,” Science, vol. 293 (September 14, 2001), pp. 2105-08; Xuchu Weng, Yu-Shin Ding, and Nora D. Volkow, “Imaging the Functioning Human Brain,” Proceedings of the National Academy of Sciences, vol. 96 (September 28, 1999), pp. 11073-74; D. D. Langleben, L. Schroeder, J. A. Maldjian, R. C. Gur, S. McDonald, J. D. Ragland, C. P. O’Brien, and A. R. Childress, “Brain Activity During Simulated Deception: An Event-Related Functional Magnetic Resonance Study,” NeuroImage, vol. 15 (January 2002), pp. 727-32.
7Jennifer Wild, “Brain-Imaging Ready to Detect Terrorists, Say Neuroscientists,” Nature, vol. 437 (September 22, 2005), p. 457.
8Stephen H. Scott, “Converting Thoughts Into Action,” Nature, vol. 442 (July 13, 2006), pp. 141-2.
9Sanjiv K. Talwar, Shaohua Xu, Emerson S. Hawley, Shennan A. Weiss, Karen A. Moxon, and John K. Chapin, “Rat Navigation Guided by Remote Control,” Nature, vol. 417 (May 2, 2002), pp. 37-8.
10Gregory T. Huang, “Mind-Machine Merger,” Technology Review, vol. 106 (May 2003), pp. 39-45.

