
Cyberwarfare ethics, or how Facebook could accidentally make its engineers into targets

By Adam Henschke, Patrick Lin | August 25, 2014

Without clear rules for cyberwarfare, technology workers could find themselves fair game in enemy attacks and counterattacks. If they participate in military cyberoperations—intentionally or not—employees at Facebook, Google, Apple, Microsoft, Yahoo!, Sprint, AT&T, Vodafone, and many other companies may find themselves considered “civilians directly participating in hostilities” and therefore legitimate targets of war, according to the legal definitions of the Geneva Conventions and their Additional Protocols.

It may all seem like a minor issue of semantics, but definitions matter a lot. Depending on how you think the old laws of war fit with the new realities of cyberconflict, it may be legally possible that enemy rockets could one day rain down on the Googleplex, or even Mark Zuckerberg’s private home, because Gmail or Facebook servers were used in a cyberattack.

Intelligence agencies worldwide have already infiltrated popular technologies to spy on users, and the line between defense and industry has become increasingly blurred. In fact, the Pentagon announced its recent airstrikes in Iraq in a tweet. (A fuller version was posted on the Facebook page of the Department of Defense.) So it’s not absurd to think that governments could make even greater use of the same technologies and online services that everyday people use, but for a wide range of military purposes—not just for public relations. And once something is used for military purposes, where does it stop? When does it become a military target?

Cyberwar experts in Geneva. These were some of the many new puzzles discussed in a two-day expert workshop recently hosted at the International Committee of the Red Cross (ICRC) headquarters in Geneva, Switzerland. The gathering was organized by researchers from California Polytechnic State University (San Luis Obispo), Naval Postgraduate School, Western Michigan University, and the Centre for Applied Philosophy and Public Ethics (Australia). It explored the ethics of cyberwarfare and grappled with how cyberattacks could be responsibly conducted, given existing laws of war and ethical norms.

Workshop participants included about 30 philosophers, political scientists, technologists, activists, policy wonks, military officers, and other experts. They came from China, Australia, Finland, Norway, Israel, the United Kingdom, the United States, and other nations. To promote honest discussion and the sharing of information, the meeting was held under the “Chatham House Rule”—meaning that no statement or position could be attributed to a particular person.

In our meeting, we wanted to know: Is there something actually new going on in cyberwar that is different from the traditional wars fought purely in the physical realm? If not, then how can society apply existing ethical discussions about war to cyberwar? If there is something new, then how should we respond to that in law and ethics?

But the answers aren’t easy. Cyberwar is both new and old, and that poses unique challenges to ethics and law.

Can cyberattacks trigger war? The Charter of the United Nations permits a nation to go to war when it is threatened with the use of force or subjected to armed attack. “Force” usually means physical influence, as opposed to, say, economic policy; “armed” likewise usually means physical weapons such as bullets and biological weapons—but not inert things such as radio signals and insults. Given this understanding, could sending packets of code across borders ever amount to the kind of aggression that could trigger war?

It’s not just the zeroes and ones in a packet of code that matter—it’s also their effect on the targeted system, or the information they represent when extracted from another system. Cyberoperations can be conducted with a wide range of goals, from gathering intelligence to defacing a website or sabotaging a system. And not all of these would count as the use of force or armed attack, judging by the conventional, physical actions that produce the same results. For that reason, many commentators want to be careful to distinguish actual cyberattacks from less aggressive acts, such as data breaches, cyberespionage, and cybervandalism.

But something seems different about the cyber realm. Unlike physical espionage, for example, it is a very short hop in cyberspace from spying to sabotage—and sabotage is usually accepted as a reason for war. The same insertion of malicious code used to steal information could also enable someone to control, compromise, or even destroy the entire system; the targeted party might not be able to tell the difference until it was too late. This suggests that cyberespionage should be taken much more seriously and treated differently. Even if ordinary espionage cannot legally trigger an armed response, perhaps cyberespionage, and by extension other exploitations of system vulnerabilities, could?

A more foundational question can help answer this puzzle: Why are armed responses permitted in the first place? Ethically, there must be some significant harm or wrong to people, actual or threatened, for an armed or military response to be considered justified. According to “just-war theory”—the philosophical tradition underpinning much of the laws of war and international humanitarian law—war is so terrible that it ought to be a last resort, used only after all political, economic, and social pressures have failed. Thus, it is permitted as the lesser evil, if the alternative is to leave one’s nation undefended against serious attacks.

However, very few cyberattacks would directly kill people, though some theoretically could (for instance, by targeting pacemakers and causing heart attacks). More commonly, they would cause a lot of damage to physical infrastructures, such as gas pipelines and nuclear centrifuges; to institutions, such as stock markets; and potentially to people, at least psychologically. Could the severity of this damage ever be great enough to justify an armed response, as a physical attack on a fellow human could?

For example, compare the inconvenience caused when a nation’s banking websites go down for a week to the harm inflicted by an armed assault upon that same nation’s soldiers. While it seems obvious that a little inconvenience is vastly less important than a person’s life, what if everyone in a country of millions of people suffered that same inconvenience? And what if that inconvenience goes on for more than a week, or a month, or half a year? At what point does it become more than an inconvenience—could it then be more important than one person’s life? Obviously, if a nation’s banking system is continually disrupted for an extended time, its economy will suffer, and real physical suffering may then occur. Think of it: no money to buy food, get gasoline, or purchase medical supplies. How long could a barter system or a bank holiday last?

To complicate matters, the mere threat of economic harm—much less actual harm—has started wars before, such as when naval blockades physically threatened to disrupt economies and the material well-being of a population. How similar that scenario is to a cyberattack with serious economic implications is a matter of debate, as is the question of whether economic threats can really be a reason for war. For instance, international sanctions and embargoes can have the same effect as a naval blockade, yet those policy actions aren’t usually considered a valid pretext for a military response; the visceral, physical nature of a blockade seems to invite or provoke a physical response, rightly or wrongly.

Perhaps one way to explain the different kinds of value here is to look at the difference between harming and wronging. When I am harmed, I feel something like physical pain; when I am wronged, I may not suffer any physical pain, but I understand that someone has interfered with my rights in some way. Consider that, rather than being denied access to my bank online, I am no longer able to post to my favorite blogs or social networks. Again, the physical harm might be utterly minimal, but I might feel that my rights, such as the right to freedom of speech, are under assault. For some people, such wrongs are not only morally relevant but also deeply troubling; they threaten the basis for human flourishing and may require a military response.

Assuming that a nation has a just reason to attack at all—for instance, in self-defense against an actual attack, or possibly in a preemptive strike in anticipation of one—how should it conduct a cyberattack? Again, the just-war tradition can help guide an ethical response, through two key criteria: There must be some net-gain outcome, and noncombatants (generally civilians, but also wounded or surrendering soldiers, military chaplains, military doctors, and so on) should not be deliberately harmed. Both of these conditions are relevant to cyberwar.

Turning civilians into targets. The just-war tradition is deeply concerned with harm to civilians. The general premise is that civilians should never be targeted by the military—unless they are directly participating in hostilities, at which point they become (unlawful) combatants. To purposely target noncombatants would be indiscriminate, something that is considered wrong because noncombatants pose no threat. An armed response is justified primarily to defend against a threat, never for revenge or to annihilate a society. Indiscriminate attacks can turn even an otherwise-just war into an unjust one.

What does this mean for military cyberoperations? Most of the Internet is now powered by civilian-owned hardware and software, a long way from its military roots in the ARPANET of the 1960s. So, if a warring nation were to use privately run or supported assets to launch a cyberattack, those same assets would seem to be fair game for a counterattack. For example, if a nation’s military routed a cyberattack through Facebook’s servers, its adversary could hardly be blamed for defending against the attack by taking down Facebook’s servers.

There are other ways in which civilian-owned Internet tools could be in the cyber crossfire. If a military organization stored important information in Dropbox and communicated to its personnel through Twitter (for whatever reason), then both that cloud data storage facility and that microblogging service could become pieces of “dual-use” infrastructure and therefore strategic targets for an adversary.

Thus, a nation could make its own civilians into legitimate or attractive targets for attack, depending on how directly linked they are to hostilities. This is a dilemma, since so much of the cyber realm is owned, run, supported, and used by civilians. So, there are deep concerns that a country could irresponsibly put its own people at risk if it waged cyberwar through the Internet.

An analogy can be found in housing military personnel and equipment in civilian homes. Whether the purpose is to have a human shield or simply a place to hide, those homes are in very real danger of being attacked, even if the civilians occupying them are unaware of or uncooperative with the military. Likewise, co-opting civilian cyber assets is possibly illegal, because it puts civilians in the same precarious position. What’s more, in the United States it could run afoul of the Third Amendment to the Constitution, which says that it is unlawful for the state to quarter soldiers in private homes without the owners’ consent. To use Google’s servers or other civilian assets for military cyberoperations—with or without civilian knowledge or cooperation—may be contrary to the broader principle implied by the Third Amendment, to respect property rights and to insulate civilians from military operations and the associated risk.   

The problem may be more acute for well-intentioned industry researchers, such as those working on Google’s Project Zero team or at the Microsoft Security Response Center, who proactively identify and fix security vulnerabilities. Should those vulnerabilities have military relevance, those researchers may unwittingly find themselves in the middle of a cyberwar and become impediments to be removed.

Unpredictable effects may be disproportionate. Another consideration in the just-war tradition is the priority it places on proportionality: A military cannot target civilians, and if it is going to cause unintentional harm or “collateral damage,” that harm must be outweighed by the military gains sought. Because a military must assess proportionality, it cannot use a cyberweapon without some idea of what the military advantage is and what the effects would be. This includes the direct effects as well as secondary, tertiary, and other effects over time, including the risk of a cyberweapon proliferating onto civilian networks.
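To make that weighing concrete, here is a minimal sketch of proportionality as a probability-weighted comparison of anticipated effects. It is purely illustrative: the effect categories, probabilities, harm scores, and advantage threshold are all invented for this example, and no military's actual methodology is this simple.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    description: str
    probability: float    # estimated chance the effect occurs (0.0 to 1.0)
    civilian_harm: float  # estimated harm to civilians, arbitrary 0-100 scale

def expected_civilian_harm(effects: list[Effect]) -> float:
    """Probability-weighted sum of civilian harm across anticipated effects."""
    return sum(e.probability * e.civilian_harm for e in effects)

# Hypothetical direct, secondary, and tertiary effects of a proposed attack.
effects = [
    Effect("direct: targeted control software disabled", 0.9, 5),
    Effect("secondary: regional power disruption for 48 hours", 0.5, 30),
    Effect("tertiary: malware spreads onto hospital networks", 0.1, 80),
]

MILITARY_ADVANTAGE = 40.0  # invented value of the objective, same 0-100 scale

harm = expected_civilian_harm(effects)
print(f"Expected civilian harm: {harm:.1f}")          # 27.5
print(f"Proportionate? {harm < MILITARY_ADVANTAGE}")  # True, under these numbers
```

The difficulty the article identifies is precisely that, for an untested cyberweapon, the probabilities in any such model are largely unknowable, which is what makes the proportionality assessment so fraught.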

The problem, however, is that the effectiveness and scope of many cyberattacks are often unknown in advance, particularly if they have never been used before, such as an attack on a new or “zero-day” security vulnerability. Most cyberweapons fall into this knowledge gap, because they are effectively one-time-use weapons; once a cyberattack is discovered, security countermeasures rapidly develop that stop that cyberweapon from working again. This uncertainty about the likelihood of success encourages multiple, simultaneous cyberattacks, to make sure that at least one will work; if all of them work, the combined effect could be disproportionately devastating—the whole being more than the sum of its parts.

All war is deception, especially cyberwar. Would a cyberattack be ethical if a nation were able to launch one that does not use any civilian assets, does not propagate to civilian systems, and is fully predictable and proportionate in its effects?

The answer is: “It depends.” There are also rules about deception to consider. The typical cyberattack cannot take place without some level of deception. Whether it is a virus seeking to disrupt operations or some hacker stealing data, an attack generally has to pretend that it is a legitimate system or network action.

Some sorts of deception are permitted in military operations, such as ambushes, misinformation, and camouflage. But if combatants pretend to represent a protected group, such as noncombatant civilians, they commit an illegal act known as “perfidy.” This is akin to a soldier dressing up as a Red Cross worker to gain safe passage into an enemy camp and then kill its occupants from the inside. Such actions betray what little, fragile trust exists between adversaries, something the international community wants to preserve to limit the horrors of war. Thus, for a cyberattack to be just, a nation might be permitted to deceive its enemy, but prohibitions on perfidy would prevent it from using, say, a false email address purporting to come from the Red Cross to trick the recipient into installing a malware payload.

New warfare, new obligations? Cybertechnologies can deliver some truly novel capacities, and so there may be new features in cyberwar that exceed the just-war tradition. One unique feature of many cyberattacks stands out: reversibility, meaning that a cyberattack may cause a great deal of damage now, but when hostilities cease, that damage may be reversed. Is an attacking nation morally obliged, then, to create reversible cyberweapons whenever possible—for instance, weapons that do not destroy data but only encrypt it, with the possibility of decryption later?
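As a concrete illustration of reversibility, consider a minimal sketch using the open-source Python `cryptography` package: data that is encrypted rather than deleted remains fully recoverable once the key is handed over. This is a conceptual demonstration of the encrypt-now-decrypt-later idea only, not a description of any actual weapon.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# A "reversible" cyberattack could encrypt data instead of destroying it.
key = Fernet.generate_key()  # the attacker retains this key
cipher = Fernet(key)

original = b"records the targeted system needs to function"
ciphertext = cipher.encrypt(original)  # the owner can no longer use the data

# ... hostilities cease, and the key is released to the victim ...
restored = Fernet(key).decrypt(ciphertext)
assert restored == original  # function is fully restored, unlike with deletion
```

Deletion or physical destruction offers no analogous path back, which is why reversibility looks like a genuinely new moral variable.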

The ICRC counts loss of function as legally equivalent to damage to a physical object. But what this means for cyberwar needs to be further studied. It may be that if a cyberattack causes as much disruption as a physical attack, then an armed response is justified. But does the possible reversibility of the attack change the moral calculus of going to war?

Contrast this with the new possibility that a cyberattack, though reversible, might still set the stage for other, much greater harms: A loss-of-function attack on radars, to name just one scenario, would undermine a nation’s air defenses, thus allowing a devastating aerial attack. It seems strange to think that the targeted country would not view this cyberattack as extremely serious. In such a situation, is a preemptive or preventive counterattack (either by cyber or conventional means) justified? And how long before the potential physical attack may the counterattack occur? When the cyberintrusion is attempted, when the radar is disabled, just before the aerial assault, or at some other time? In such scenarios, a cyberattack seems to be novel, in that it harms nothing and no one permanently—unlike what conventional sabotage usually does—yet can still be a reasonable trigger for war.

In cyberwar, the timing and duration of an attack become especially important. A cyberattack that causes loss of function might be damaging for only a short period, but even a small set of inconveniences, if it persists, can become a much more significant harm, especially across a large population. Thus, time adds another layer of complexity: Big harms could be momentary; small harms can be unrelenting; and with enough time, violations of civilian rights may run through a whole country. Cyberwar’s impacts can be tiny or huge—or both at once.

All this seems to point to one of the truly novel aspects of cyber: Because the cyber domain is uniquely intertwined across civilian, military, government, and industry spheres, and the effects of cyberoperations can vary wildly, the world is confronting a new way to wage war and therefore a new set of ethical and policy challenges. The existing legal regime, with some stretching, may be able to account for many of these challenges, but perhaps not all.

The fog of cyberwar. This novelty, including new variants of old problems, shows something important: Without the necessary experience, a nation cannot know what it is doing in the cyber realm or even what it should be most worried about. Nations will overreact or underreact, especially because it is never easy to determine the perfectly appropriate response, given a full range of other plausible options. They are unsure of what to do, or how their responses will play out through time, and any confidence in unproven techniques seems to be unjustified. This inexperience carries with it deep moral concerns.

Part and parcel of the uncertainty of cyberwar is the so-called “attribution problem”: identifying who the attackers are and whether the attack was truly intentional. A nation might be able to quickly determine that crippling attacks on its banking system are originating from a particular country, but should it send in the military straight away? Before a military response, the attacked nation had better be sure that the other country truly is responsible for the cyberattacks and not, say, the unwitting host of a gang of criminals or terrorists merely staging an operation from that location. Governments need to be confident that the information and digital forensics they are acting upon can be trusted—a persistent challenge.

A second concern is that the mere use of a cyberweapon can reveal the weapon’s own techniques and the system vulnerabilities it exploits. Given this, it might be infeasible to test many cyberweapons “in the wild.” Consequently, a would-be attacker might not have enough information to reliably predict what a cyberweapon will do once released, even though the just-war tradition demands limits on harming civilians and on causing disproportionate damage to the enemy. If the expected damage from an attack is largely unknown, that uncertainty raises questions about whether anyone may reasonably consider using such weapons at all.

A third concern arises when a military tries to estimate its adversary’s response to a cyberattack, as well as its own counter-response. A great deal of modern military strategy is built upon anticipating what the enemy will do to counter various actions and reactions, and vice versa. But given that the cyber realm is largely untested, a nation cannot know what its adversaries will do, and its adversaries do not know what it will do. So far, there seems to be a common level of prudent self-interest, and military responses to cyberattacks have not been launched. But this prudence may eventually shift.

To prevent such escalation, it is important to recognize the role of political decisions in the world of cyberwarfare. Attribution, for example, is not merely a technical issue; the confidence level of a given attribution can have a heavy political element. If the political nature of cyberattacks and responses is not accounted for, then nations run the risk of misunderstanding adversaries and alienating allies.

In a sense, this shows that cyberwar is not all so very new: The common element is people. Diplomacy and statecraft can be applied, along with other leverage. The international community can build upon existing discussions of just-war theory and military ethics to better understand the moral issues related to violent conflict. Those discussions may need to be supplemented with considerations from other areas, such as economic policies and rules, policing models for international crime and terrorism, and even science policy that can suggest ways to deal with uncertainty.

A more diverse approach to military ethics. The final theme raised in our Geneva workshop dealt with understanding cyberwar beyond the narrow analytic frame of just-war theory, a largely Western school of thought. For example, few nations have wanted to adopt the findings of the Tallinn Manual, which was published in 2013 and presents the conclusions of a large group of Western legal experts on the international laws applicable to cyberwarfare.

Instead, each nation, naturally enough, will tend to understand cyberattacks and other new phenomena through the prism of its own particular culture, its own beliefs, its own academic disciplines, and so on. But given cyber’s global reach, the main players to be involved in any future cyberconflicts will likely include not only Europe, the United Kingdom, and the United States, but also China, Russia, India, Brazil, and other military powers. These countries and cultures will view cyberconflict in different ways that ought to be accounted for, in order to achieve a degree of international consensus.

Furthermore, many of the victims of cyberattack—as well as its accomplices—are increasingly likely to be large-scale, private entities playing on the world stage, rather than just nation-states. Accordingly, future planning needs to account for the Googles, the Microsofts, the Facebooks, the Twitters, and other big players appearing on the scene. A responsible nation needs to decide if it can justifiably use, say, Google services for its own military ends. (And Google will presumably seek to find a way to keep itself from becoming the mere puppet of some irresponsible nation seeking to co-opt it.)

These companies will need to carefully consider their roles, knowing that their actions might put their own workers at risk by making them “civilians directly participating in hostilities”—in other words, legitimate targets, okay to hit according to the legal framework of the Geneva Conventions (or at least logical military targets, whether legal or not). Policymakers also must consider whether these companies are entitled to act on their own: If they are the victims of a foreign cyberattack, are they morally or legally permitted to respond aggressively—especially if no state response seems forthcoming? What limits can a host government place on the actions of companies located or listed in its territory?

Finally, generational differences seem relevant here. Most, if not all, of our workshop experts were more than 30 years old and could easily remember a time when libraries held index cards and computers were not connected to each other. In many parts of the world, by contrast, teenagers have been born into a hyper-connected informational environment. This may affect their understanding not only of concepts such as privacy but also of ownership, law, order, and even citizenship.

Further, while being cut off from the Internet may seem like an inconvenience to older “digital immigrants” and casual users, perhaps it is a deep harm—even if only psychological—for these newer generations that are more digitally connected. If this is the case, then we need to be open to the possibility that even a purely informational cyberattack, causing no physical damage, may cause widespread misery and ought to be taken very seriously.

Ethics can add something important to the debates here. Two of the chief skills of a systemized ethical analysis are to recognize and explain the frameworks around a given discussion, and to be able to offer some critical judgments on them. To recognize the need to avoid deep suffering, to offer basic respect for others, to treat people fairly—these and similar concerns are fairly common across cultures. In this way, ethics can help us to recognize not only where the old is relevant, but also where it must give way to the new.

Editor's note: Some research connected to this article is supported by the US National Science Foundation, the International Committee of the Red Cross, the US Naval Academy’s Vice Adm. James B. Stockdale Center for Ethical Leadership, the University of Notre Dame’s Reilly Center for Science, Technology, and Values, and Charles Sturt University’s Centre for Applied Philosophy and Public Ethics. The statements expressed in the article are the authors’ and do not necessarily reflect the views of the aforementioned organizations.

