By Émile P. Torres, December 6, 2016
Consider a seemingly simple question: If the means were available, who exactly would destroy the world? There is surprisingly little discussion of this question within the nascent field of existential risk studies. But the issue of “agential risks” is critical: What sort of agent would either intentionally or accidentally cause an existential catastrophe?
An existential risk is any future event that would either permanently compromise our species’ potential for advancement or cause our extinction. Oxford philosopher Nick Bostrom coined the term in 2002, but the concept dates back to the end of World War II, when self-annihilation became a real possibility for the first time in human history.
In the past 15 years, the concept of an existential risk has received growing attention from scholars in a wide range of fields. And for good reason: An existential catastrophe could only happen once in our history. This raises the stakes immensely, and it means that a purely reactive approach won’t work. Humanity must anticipate such risks in order to avoid them.
So far, existential risk studies has focused mostly on the technologies—such as nuclear weapons and genetic engineering—that future agents could use to bring about a catastrophe. Scholars have said little about the types of agents who might actually deploy these technologies, either on purpose or by accident. This is a problematic gap in the literature, because agents matter just as much as, or perhaps even more than, potentially dangerous advanced technologies. Indeed, who the agents are could matter more than how many weapons of total destruction exist in the world.
Agents matter. To illustrate this point, consider the “two worlds” thought experiment: In world A, one finds many different kinds of weapons that are powerful enough to destroy the world, and virtually every citizen has access to them. Compare this with world B, in which there exists only a single weapon, and it is accessible to only one-fourth of the population. Which world would you rather live in? If you focus only on the technology, then world B is clearly safer.
Imagine, though, that world A is populated by peaceniks, while world B is populated by psychopaths. Now which world would you rather live in? Even though world A has more weapons, and greater access to them, world B is a riskier place to live. The moral is this: To accurately assess the overall level of risk, as some scholars have attempted to do, it’s important to consider both sides of the agent-tool coupling.
Studying agents might seem somewhat trivial, especially for those with a background in science and technology. Humans haven’t changed much in the past 30,000 years, and we’re unlikely to evolve new traits in the coming decades, whereas the technologies available to us have changed dramatically. This might make studying the latter seem far more important. Nevertheless, studying the human side of the equation can suggest new ways to mitigate risk.
Agents of terror. “Terrorists,” “rogue states,” “psychopaths,” “malicious actors,” and so on—these are frequently lumped together by existential risk scholars without further elaboration. When one takes a closer look, though, one discovers important and sometimes surprising differences between various types of agents. For example, most terrorists would be unlikely to intentionally cause an existential catastrophe. Why? Because the goals of most terrorists—who are typically motivated by nationalist, separatist, anarchist, Marxist, or other political ideologies—are predicated on the continued existence of the human species.
The Irish Republican Army, for example, would undercut its own goal of a united Ireland if it were to dismantle global society or annihilate humanity. Similarly, if the Islamic State were to use weapons of total destruction against its enemies, doing so would interfere with its vision for Muslim control of the Middle East.
The same could be said about most states. For example, North Korea’s leaders may harbor fantasies of world domination, and the regime could decide that launching nuclear missiles at the West would help achieve this goal. But insofar as North Korea is a rational actor, it is unlikely to initiate an all-out nuclear exchange, because this could produce a nuclear winter leading to global agricultural failures, which would negatively impact the regime’s ability to maintain control over large territories.
On the other hand, there are some types of agents that might only pose a danger after world-destroying technologies become widely available—but not otherwise. Consider the case of negative utilitarians. Individuals who subscribe to this view believe that the ultimate aim of moral conduct is to minimize the total suffering in the universe. As the Scottish philosopher R. N. Smart pointed out in a 1958 paper, the problem with this view is that it seems to call for the destruction of humanity. After all, if there are no humans around to suffer, there can be no human suffering. Negative utilitarianism—or at least some versions of it—suggests that the most ethical actor would be a “world-exploder.”
As powerful weapons become increasingly accessible to small groups and individuals, negative utilitarians could emerge as a threat to human survival. Other types of agents that could become major hazards in the future are apocalyptic terrorists (fanatics who believe that the world must be destroyed to be saved), future ecoterrorists (in particular, those who see human extinction as necessary to save the biosphere), idiosyncratic agents (individuals, such as school shooters, who simply want to kill as many people as possible before dying), and machine superintelligence.
Superintelligence has received considerable attention in the past few years, but it’s important for scholars and governments alike to recognize that there are human agents who could also bring about a catastrophe. Scholars should not succumb to the “hardware bias” that has so far led them to focus exclusively on superintelligent machines.
Agents of error. Above, I have focused mostly on agential terror. But there’s also the possibility of agential error—actors who might bring about a catastrophe by accident. In a world full of dangerous technologies, this is not as unlikely as it might seem. Civilization has already come extremely close to nuclear war as a result of instrument failures and misinterpreted data. There have also been numerous laboratory mistakes with real-world consequences. For example, the 1977 H1N1 “Russian flu” outbreak appears to have begun with a virus accidentally released from a laboratory.
To quantify the risk of agential error, imagine that Earth comes to sustain 10 billion people. Imagine further that a mere 500 individuals—that is, only 0.000005 percent of the population—have access to a “doomsday button” that, if pushed, would cause total destruction. If each of these individuals had a 1 percent chance of accidentally pushing this button per decade, civilization as a whole—with its 10 billion members—would have a 99 percent probability of collapsing per decade. In other words, doom would be almost certain. The point is that if future technologies become widely accessible, agential error could pose an even greater threat than terror.
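To see where the 99 percent figure comes from, note that the chance that none of the 500 individuals has an accident in a given decade is 0.99 raised to the 500th power, which is well under 1 percent. A minimal back-of-the-envelope sketch in Python, using only the hypothetical figures from this thought experiment, makes the arithmetic explicit:

```python
# Illustrative calculation for the doomsday-button thought experiment above.
# The numbers are hypothetical, not empirical estimates.

population = 10_000_000_000      # 10 billion people
holders = 500                    # individuals with access to the "button"
p_accident_per_decade = 0.01     # each holder's chance of an accidental push per decade

# Fraction of the population with access (0.000005 percent)
fraction_with_access = holders / population
print(f"Share of population with access: {fraction_with_access:.8%}")

# Probability that no holder pushes the button in a given decade
p_no_push = (1 - p_accident_per_decade) ** holders

# Probability that at least one accidental push occurs per decade (about 99.3 percent)
p_catastrophe = 1 - p_no_push
print(f"Chance of accidental catastrophe per decade: {p_catastrophe:.1%}")
```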
Risk mitigation strategies. The good news is that a focus on agential risks, and not just the technological tools that agents might use to cause a catastrophe, suggests additional ways to mitigate existential risk. If humanity manages to devise effective ways of creating a population like that of the peaceful world A mentioned above, then it wouldn’t matter much if advanced technologies were to become widely available. There is some evidence that the world is becoming, on average, less violent and more “moral,” although this is counterbalanced by other studies showing that, for example, apocalyptic terrorism and ecoterrorism could become significantly worse this century. “Moral bioenhancement”—the use of pharmaceuticals, genetic engineering, and other interventions to increase empathy, sympathetic concern, and a sense of fairness—has also been seriously discussed by philosophers such as Ingmar Persson and Julian Savulescu.
Creating a population like that in world A, though, might not improve our odds of surviving accidental catastrophes. There doesn’t appear to be any connection between moral character and error avoidance. But there are studies showing that general intelligence is correlated with fewer accidents, so perhaps moral bioenhancements could be supplemented with cognitive enhancements to reduce the threat of both agential terror and agential error.
Another possibility is to closely examine the link between different types of agents and different types of technologies. For example, a negative utilitarian is more likely to use a weapon that has a high probability of killing everyone (to eliminate suffering) than one that would kill only some people and leave survivors to suffer. Of all the advanced technologies on the horizon today, self-replicating nanobots probably offer the most reliable way of destroying our species. That is to say, this hypothetical weapon is likely to have all-or-nothing effects, and so could be particularly attractive to negative utilitarians.
In contrast, some eco-anarchists—such as Ted Kaczynski—don’t wish for human extinction, but they would like to see civilization collapse. As a result, these individuals may be unlikely to experiment with environment-eating nanobots, opting instead for designer pathogens or nuclear weapons that could decimate certain human populations and destroy the global economy. Analyzing potential agent-tool pairings could help government agencies focus their counter-terror efforts in more effective ways.
It’s important to understand every aspect of the unique risks facing our species this century. This is why scholars and governments should initiate programs to understand not only how advanced technologies could be misused and abused, but also what sorts of people, groups, or states might attempt to do so.