Ethics on the near-future battlefield

By Michael L. Gross | December 17, 2015

The US Army’s recent report “Visualizing the Tactical Ground Battlefield in the Year 2050” describes a number of future war scenarios that raise vexing ethical dilemmas. Among the many tactical developments envisioned by the authors, a group of experts brought together by the US Army Research Laboratory, three stand out as both plausible and fraught with moral challenges: augmented humans, directed-energy weapons, and autonomous killer robots. The first two technologies affect humans directly, and therefore present both military and medical ethical challenges. The third development, robots, would replace humans, and thus poses hard questions about implementing the law of war without any attending sense of justice.

Augmented humans. Drugs, brain-machine interfaces, neural prostheses, and genetic engineering are all technologies that may be used in the next few decades to enhance the fighting capability of soldiers, keep them alert, help them survive longer on less food, alleviate pain, and sharpen and strengthen their cognitive and physical capabilities. All raise serious ethical and bioethical difficulties.

Drugs and prosthetics are medical interventions. Their purpose is to save lives, alleviate suffering, or improve quality of life. When used for enhancement, however, they are no longer therapeutic. Soldiers designated for enhancement would not be sick. Rather, commanders would seek to improve a soldier’s war-fighting capabilities while reducing risk to life and limb. This raises several related questions.

First, should medical science serve the ends of war? This is not a new question—it first arose when the US Army recruited doctors to develop chemical and biological weapons during World War II. And while there may be good military reasons to have doctors help build bombs, the medical community has firmly rejected this role. Physicians are healers, not warriors; enhancing soldiers to kill undermines the integrity of medicine.

Another ethical difficulty speaks to the transformative effects of enhancements. Many pharmaceutical agents raise legitimate concerns about personality changes. For example, if soldiers use drugs to maximize cognitive prowess by reducing anxiety and eliminating fear, visions of power and grandeur may result. Some drugs, meanwhile, could block memories of battlefield events. Without memory, there is no remorse, and without remorse, there is no constraint.

Finally, we must consider the rights of soldiers designated for enhancement. Soldiers have no right to refuse standard medical treatments that keep them fit for duty. But must soldiers agree to enhancement? Unlike the sick and injured, soldiers slated for enhancement are already healthy; enhancement only makes them more fit. As a result, enhancement should require informed consent, together with the medical supervision necessary to oversee safety. And because the long-term effects of medical augmentation remain unknown, military authorities should make every effort to utilize nonmedical alternatives (such as body armor, armored transport, and improved weaponry) to improve troop performance.

Meeting these conditions, however, will be problematic. For one thing, informed consent is often difficult to obtain in a military hierarchy where “orders are orders.” For another, the medical effects of some enhancements won’t necessarily be known. Soldiers may not have sufficient information to make the informed decisions that medical ethics require. Nor can officers order service personnel to accept medical care that is not therapeutic. Weighing the benefits of enhancement demands that we take these reservations seriously. It may be that any technology that increases military efficiency and protects soldiers will carry the day. Military leaders must exercise caution, however, to ensure that service personnel do not abuse their enhancements and violate humanitarian law.

Directed-energy weapons. The Army report predicts that a variety of directed-energy weapons will be employed by 2050. It doesn’t delve deeply into specifics, but this category could include blinding lasers, electromagnetic radiation, and magnetic stimulation, all technologies within reach. None are designed to be lethal. Blinding lasers emit pulses of directed energy to permanently or temporarily blind and incapacitate combatants. International law now bans lasers that blind permanently, but laser “dazzlers” only cause temporary blindness and would allow troops to disarm and arrest assailants. Another directed-energy weapon is the US military’s Active Denial System, or ADS, which emits a 95 gigahertz energy beam that penetrates the top layer of the skin to create an intense burning sensation without tissue damage. Both blinding lasers and ADS-type weapons could be particularly helpful in battlefield conditions where armies confront mixed populations of civilians and guerrillas or terrorists who do not wear uniforms. Using either technology, soldiers could incapacitate combatants and noncombatants, then arrest and detain the former while freeing the latter uninjured.

Transcranial magnetic stimulation (TMS) could also be useful for targeting undifferentiated crowds, but rather than incapacitating people, it would direct an intense magnetic field to manipulate brain activity. Currently being studied as a treatment for depression, TMS might, for example, eventually be able to alter a person’s mood to transform hostility and hatred into trust and cooperation. Existing devices are small and require an operator to pass a coil directly over a person’s head, but future applications may allow for long-distance operation. So armed, a military force could painlessly and non-lethally alter an enemy’s state of mind and behavior to prevail in battle.

At first glance, these technologies evoke revulsion. But what exactly is the problem? First, in violation of its traditional role, medical science is developing weapons that inflict pain. The pain may be transitory, but it involves suffering nonetheless. Second, medicalized weapons undermine the human body in an especially insidious way. Most weapons kill or injure by inflicting blunt trauma or blood loss, but blinding lasers, the Active Denial System, and transcranial magnetic stimulation manipulate specific physiological systems rather than simply traumatize the human body. These weapons raise fears of injuries that defy medical care and of technologies that may eventually alter humans beyond all recognition. These peculiar features of some modern weapons have led the International Committee of the Red Cross to recommend a ban on weapons specifically designed to kill or injure by causing disease or a specific abnormal physiological state, such as being blinded or burnt. There is good reason to exercise extreme care as we move ahead with weapons that directly invade the body.

Transcranial magnetic stimulation offers especially compelling reasons for concern. Directed at the brain, it disrupts cognitive processes and temporarily alters essential human characteristics. Is this where military technology should be going? In addition to medicalizing warfare, neurological interventions raise the risk of dehumanization and infringements of “cognitive liberty”—the right to think for oneself, free of external constraints or mind control. Tied closely to the right of privacy, cognitive liberty should prohibit others from invading one's personal mind-space to either disrupt its processes or reveal its contents.

Whether the enemy’s right to cognitive liberty is inviolable or subject to the dictates of military necessity remains an open question. Drawing on our understanding that deprivations of physical liberty (such as incarceration) require due process, one may cogently argue that deprivations of cognitive liberty, if permissible at all, require a much higher bar. Fighting a war does not permit every use of force. This is a fundamental axiom of international humanitarian law. Although nonlethal, weapons that alter states of mind may go beyond the pale. At the very least, they require military and political authorities to closely monitor their use and as-yet-unknown effects.

Autonomous killer robots. The US Army report says that “deployed robots would be capable of operating in a variety of ‘control’ modes from total autonomy to active management by humans.” Consider the “total autonomy” mode. Turned loose on the battlefield, killer robots (those armed with lethal weaponry) could act individually or collectively. Programmed with a mission, they would be able to degrade or disable enemy forces using tactics consistent with the law of armed conflict and international humanitarian law.

Minimally, killer robots must understand and apply the law as they fulfill their mission. Is it possible to simply program them to do so? The law of armed conflict has a very salient ethical component. Since the 19th century, international jurists have understood that no law can cover every possible situation. This leaves two default rationales for decision-making: military necessity or some higher standard of conduct. Should an officer lacking clear guidance fall back on accomplishing her mission, or defer to moral principles? The answer is as clear now as it was in 1899, when delegates to the Hague Convention on the Laws and Customs of War declared:

“the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.”

So programming a killer robot to behave justly is considerably more difficult than uploading the corpus of international law. One must instill a sense of justice. Is this possible? One solution may be to establish rules of thumb and some element of oversight, but neither will be easy to implement. For example, the rule of proportionality demands that a field officer weigh the military advantage of attacking a military target against the harm that will befall enemy civilians as a result. It is an extraordinarily difficult decision because the elements of the equation—military advantage and civilian harm—are incommensurate. Death and injury measure civilian harm, but what measures military advantage? Compatriots’ lives saved, enemy resources degraded, deterrent credibility restored, or some combination of these factors? Human commanders have enough difficulty with this kind of decision. Can killer robots handle things any better?

Even if they could, there would still be political sensibilities to consider. For example, who counts as a “civilian”? After the 2008-2009 Gaza War between Israel and Palestinian forces, each side acknowledged that roughly 1,200 Palestinians lost their lives. But Israel claimed that 75 percent were combatants while Palestinians claimed that 75 percent were civilians. The difference turned on the contentious status of police officers and individuals working for the political wing of the organization Hamas. What is a killer robot to do? Technically, it is feasible to upload pictures of people in Hamas’ political wing and Gaza’s police force. But when ordinary soldiers are provided with such pictures (often in the form of a deck of cards), they are expected to exercise discretion when they consider whether to arrest, kill, or even ignore a suspect. Expecting killer robots to do the same does not seem feasible or desirable. The “laws of humanity” rest with humans, not robots. Just as we can arrest and try soldiers who violate the law and morality, it must be possible to arrest and try robots’ human supervisors who do the same. Full autonomy for robots is far from ideal. Responsibility for the conduct of war must eventually fall to human beings.

What technology can’t solve. Human augmentation, directed-energy weapons, and killer robots are all being developed with the aim of saving combatant and noncombatant lives. How well they succeed in this goal will depend on how well civilian and military operators navigate several straits.

First, leaders must be wary of the slippery slope. Augmenting soldiers may lead to enhancing police officers or de-enhancing criminals. Similarly, operators may utilize directed-energy weapons to torture rather than incapacitate their targets. Either technology could end up undermining civil liberties.

Second, operators and weapon designers must be aware of the pitfalls of force multiplication. This is particularly true in asymmetric warfare. Weapons designed to mitigate injury and loss of life may also intensify harm. How would a state armed with augmented soldiers, directed-energy weapons, and killer robots fight insurgents? Would it use its arsenal to incapacitate, subdue, and arrest guerrillas, or would it simply kill disabled militants? That’s what Russian security forces did in 2002, using a calmative gas to first incapacitate and then kill Chechen militants who took over a Moscow theater.

As we search for answers to these questions, we must remain wary of placing too much stock in technology. Contemporary armed conflict amply demonstrates how relatively weak guerrillas, insurgents, and terrorists find novel ways to overcome advanced technologies through such relatively low-tech tactics as suicide bombings, improvised explosive devices, human shields, hostage taking, and propaganda. There is little doubt that these tactics gain purchase because many state armies endeavor to embrace the “laws of humanity and the requirements of the public conscience,” and, as democracies, often choose to fight with one hand tied behind their backs. The emerging technologies that will accompany future warfare only sharpen this dilemma, particularly as asymmetric war intensifies and some inevitably ask whether killer robots lacking a sense of justice might not be such a bad thing after all. 


