
How can we reduce the risk of human extinction?

In the early morning of September 10, the Large Hadron Collider will be tested for the first time amid concern that the device could create a black hole that will destroy the Earth. If you’re reading this afterwards, the Earth survived. Still, the event provides an opportunity to reflect on the possibility of human extinction. Since 1947, the Bulletin has maintained the Doomsday Clock, which “conveys how close humanity is to catastrophic destruction–the figurative midnight–and monitors the means humankind could use to obliterate itself.” The Clock may have been the first effort to educate the general public about the real possibility of human extinction.

Less publicly, there had been earlier speculations about humanity’s undoing. During the Manhattan Project, Robert Oppenheimer ordered a study to calculate whether a nuclear detonation would cause a self-propagating chain of nuclear reactions in the Earth’s atmosphere. The resulting report, “LA-602: Ignition of the Atmosphere with Nuclear Bombs,” may represent the first quantitative risk assessment of human extinction. LA-602 concluded that ignition was physically impossible, and nuclear development proceeded.

In 1950, physicist Leo Szilard renewed worries about human extinction after estimating that a sufficiently large number of nuclear weapons wrapped in cobalt would, when detonated, render the Earth’s surface uninhabitable for about five years (roughly the half-life of cobalt-60). Szilard’s fear that such a “doomsday device” might be developed inspired much of Herman Kahn’s 1960 treatise, On Thermonuclear War, as well as the premise of Stanley Kubrick’s 1964 film Dr. Strangelove. While such a device remains possible in principle, it would require vast amounts of cobalt, and there is no indication that such a weapon was ever built.
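Szilard’s time scale follows from simple decay arithmetic: the activity of cobalt-60 falls by half every 5.27 years, long enough to outlast any sheltering effort yet short enough to remain intensely radioactive. A minimal sketch of that arithmetic (the sample times are illustrative assumptions, not drawn from Szilard’s estimate):

```python
# Exponential decay of cobalt-60 activity (half-life ~5.27 years).
# A minimal sketch of the arithmetic behind Szilard's five-year
# figure; the sample times below are illustrative assumptions.

COBALT_60_HALF_LIFE_YEARS = 5.27

def fraction_remaining(years: float) -> float:
    """Fraction of the initial activity left after `years`."""
    return 0.5 ** (years / COBALT_60_HALF_LIFE_YEARS)

for years in (1, 5.27, 10, 25, 50):
    print(f"after {years:5.2f} years: {fraction_remaining(years):6.1%} of initial activity")
```

After a decade, roughly a quarter of the initial activity remains; after half a century, well under one percent.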

In 1983, discussion of human extinction re-emerged when Carl Sagan and others calculated that a global thermonuclear war could generate enough atmospheric debris to kill much of the planet’s plant life and, with it, humanity. While the “nuclear winter” theory fell out of favor in the 1990s, recent climate models suggest that the original calculations actually underestimated the catastrophic effects of thermonuclear war. Moreover, the modeling developed by Sagan and his collaborators underpinned later research showing that supervolcanic eruptions and asteroid or comet impacts could pose comparable extinction risks.

Despite these notable instances, in the 61 years since the Doomsday Clock’s creation, the risk of human extinction has received relatively scant scientific attention, with a bibliography filling perhaps one page. Maybe this is because human extinction seems to most of us impossible, inevitable, or, in either case, beyond our control. Still, it’s surprising that a topic of primary significance to humanity has provoked so little serious research.

One of the missions of the Future of Humanity Institute at Oxford University is to expand scholarly analysis of extinction risks by studying extinction-level hazards, their relative probabilities, and strategies for mitigation. In July 2008, the institute organized a meeting on these subjects, drawing experts from physics, biology, philosophy, economics, law, and public policy.

The facts are sobering. More than 99.9 percent of species that have ever existed on Earth have gone extinct. Over the long run, it seems likely that humanity will meet the same fate. In less than a billion years, the increased intensity of the Sun will initiate a wet greenhouse effect, even without any human interference, making Earth inhospitable to life. A couple of billion years later, Earth will be destroyed when it’s engulfed by our Sun as it expands into a red giant. If we colonize space, we could survive longer than our planet, but since mammalian species survive, on average, only two million years, we should consider ourselves very lucky if we make it to one billion.

Humanity could be extinguished as early as this century by succumbing to natural hazards, such as an extinction-level asteroid or comet impact, supervolcanic eruption, global methane-hydrate release, or nearby supernova or gamma-ray burst. (Perhaps the most probable of these hazards, supervolcanism, was discovered only in the last 25 years, suggesting that other natural hazards may remain unrecognized.) Fortunately, the probability of any one of these events killing off our species is very low: less than one in 100 million per year, given what we know about their past frequency. But as improbable as these events are, measures to reduce their probability can still be worthwhile. For instance, investments in asteroid detection and deflection technologies cost less, per life saved, than most investments in medicine. While an extinction-level asteroid impact is very unlikely, its improbability is outweighed by its potential death toll.
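The cost-effectiveness claim is easy to check with back-of-the-envelope expected-value arithmetic. The sketch below is illustrative only; every input is an assumption on our part, not a figure from this article:

```python
# Back-of-the-envelope expected-value arithmetic behind the
# "improbable but worthwhile" claim. All inputs are illustrative
# assumptions, not figures from the article.

annual_impact_probability = 1e-8   # extinction-level impact per year (assumed)
deaths_if_impact = 7e9             # rough present world population (assumed)
annual_program_cost = 4e6          # detection/deflection spending, USD/yr (assumed)
risk_reduction = 0.5               # fraction of the risk the program removes (assumed)

expected_lives_saved = annual_impact_probability * risk_reduction * deaths_if_impact
cost_per_life_saved = annual_program_cost / expected_lives_saved

print(f"expected lives saved per year: {expected_lives_saved:.0f}")
print(f"cost per (statistical) life saved: ${cost_per_life_saved:,.0f}")
```

On these assumptions the program saves a statistical life for roughly $100,000, below the cost of many accepted medical interventions; and the figure ignores future generations entirely, counting whom would lower it by orders of magnitude.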

The risks from anthropogenic hazards appear at present larger than those from natural ones. Although great progress has been made in reducing the number of nuclear weapons in the world, humanity is still threatened by the possibility of a global thermonuclear war and a resulting nuclear winter. We may face even greater risks from emerging technologies. Advances in synthetic biology might make it possible to engineer pathogens capable of extinction-level pandemics. The knowledge, equipment, and materials needed to engineer pathogens are more accessible than those needed to build nuclear weapons. And unlike other weapons, pathogens are self-replicating, allowing a small arsenal to become exponentially destructive. Pathogens have been implicated in the extinctions of many wild species. Although most pandemics “fade out” by reducing the density of susceptible populations, pathogens with wide host ranges in multiple species can reach even isolated individuals. The intentional or unintentional release of engineered pathogens with high transmissibility, latency, and lethality might be capable of causing human extinction. While such an event seems unlikely today, the likelihood may increase as biotechnologies continue to improve at a rate rivaling Moore’s Law.
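The “fade-out” dynamic can be made concrete with a toy epidemic model: in a standard SIR model, transmission slows as susceptibles are depleted, and the outbreak dies out while part of the population remains uninfected. An engineered or multi-host pathogen is worrying precisely because it need not respect this dynamic. A minimal sketch, with all parameters chosen for illustration:

```python
# Minimal discrete-time SIR epidemic model illustrating "fade-out":
# transmission slows as susceptibles are depleted, so the outbreak
# ends before infecting everyone. All parameters are illustrative.

def run_sir(population=1_000_000, r0=3.0, infectious_days=10, days=365):
    susceptible, infected, recovered = population - 1.0, 1.0, 0.0
    beta = r0 / infectious_days       # transmissions per infected per day
    gamma = 1.0 / infectious_days     # recoveries per infected per day
    for _ in range(days):
        new_infections = beta * infected * susceptible / population
        new_recoveries = gamma * infected
        susceptible -= new_infections
        infected += new_infections - new_recoveries
        recovered += new_recoveries
    return susceptible, infected, recovered

susceptible, _, _ = run_sir()
print(f"never infected: {susceptible / 1_000_000:.1%}")  # roughly 6% with R0 = 3
```

Classic final-size results give the same answer analytically: even with a basic reproduction number of 3, about 6 percent of a well-mixed single-host population escapes infection. Extinction-level lethality would require a pathogen that evades this dynamic.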

Farther out in time are technologies that remain theoretical but might be developed this century. Molecular nanotechnology could allow the creation of self-replicating machines capable of destroying the ecosystem. And advances in neuroscience and computation might enable improvements in cognition that accelerate the invention of new weapons. A survey at the Oxford conference found that concerns about human extinction were dominated by fears that new technologies would be misused. These emerging threats are especially challenging as they could become dangerous more quickly than past technologies, outpacing society’s ability to control them. As H.G. Wells noted, “Human history becomes more and more a race between education and catastrophe.”

Such remote risks may seem academic in a world plagued by immediate problems, such as global poverty, HIV, and climate change. But as intimidating as these problems are, they do not threaten human existence. In discussing the risk of nuclear winter, Carl Sagan emphasized the astronomical toll of human extinction:

A nuclear war imperils all of our descendants, for as long as there will be humans. Even if the population remains static, with an average lifetime of the order of 100 years, over a typical time period for the biological evolution of a successful species (roughly ten million years), we are talking about some 500 trillion people yet to come. By this criterion, the stakes are one million times greater for extinction than for the more modest nuclear wars that kill “only” hundreds of millions of people. There are many other possible measures of the potential loss–including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.
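Sagan’s 500 trillion is straightforward to reproduce. He did not state the population he assumed, but roughly 5 billion, the world population when he wrote, recovers his figure exactly:

```python
# Reproducing Sagan's arithmetic. The static population is an
# assumption on our part (~5 billion, the world population in the
# mid-1980s); the other inputs are as stated in the quotation.

population = 5e9              # assumed static population
lifetime_years = 100          # average lifetime, per Sagan
species_span_years = 10e6     # lifespan of a successful species, per Sagan

lifetimes = species_span_years / lifetime_years   # 100,000 successive lifetimes
future_people = population * lifetimes

print(f"people yet to come: {future_people:.0e}")  # 5e+14, i.e. 500 trillion
```

Dividing 500 trillion by the hundreds of millions killed in a large but survivable nuclear war yields the factor of one million he cites.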

There is a discontinuity between risks that threaten 10 percent or even 99 percent of humanity and those that threaten 100 percent. For disasters that kill less than the whole of humanity, there is a good chance the species could recover. If we value future human generations, then reducing extinction risks should dominate our considerations. Fortunately, most measures to reduce these risks also improve global security against a range of lesser catastrophes, and thus deserve support regardless of how much one worries about extinction. These measures include:

  • Removing nuclear weapons from hair-trigger alert and further reducing their numbers;
  • Placing safeguards on gene synthesis equipment to prevent synthesis of select pathogens;
  • Improving our ability to respond to infectious diseases, including rapid disease surveillance, diagnosis, and control, as well as accelerated drug development;
  • Funding research on asteroid detection and deflection, “hot spot” eruptions, methane hydrate deposits, and other catastrophic natural hazards;
  • Monitoring developments in key disruptive technologies, such as nanotechnology and computational neuroscience, and developing international policies to reduce the risk of catastrophic accidents.

Other measures to reduce extinction risks have less in common with strategies for improving global security generally. Since a species’ survivability is closely related to the extent of its range, perhaps the most effective means of reducing the risk of human extinction is to colonize space sooner, rather than later. Citing, in particular, the threat of new biological weapons, Stephen Hawking has said, “I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet.” Similarly, NASA Administrator Michael Griffin has noted, “The history of life on Earth is the history of extinction events, and human expansion into the Solar System is, in the end, fundamentally about the survival of the species.”

Probably cheaper than building refuges in space would be building them on Earth. Elaborate bunkers already exist for government leaders to survive nuclear war, and the Svalbard Global Seed Vault in Norway protects crop seeds from nuclear war, asteroid strikes, and climate change. Although Biosphere 2 may inspire giggles, functioning refuges that are self-sufficient, remote, and permanently occupied would help to safeguard against a range of hazards, both foreseeable and unforeseeable.

Perhaps least controversial, we should invest more in efforts to enumerate the risks to human survival and the means to mitigate them. We need more interdisciplinary research in quantitative risk assessment, probability theory, and technology forecasting. And we need to build a worldwide community of experts from various fields concerned about global catastrophic risks. Human extinction may, in the long run, be inevitable. But just as we work to secure a long life for individuals, even when our eventual death is assured, we should work to secure a long life for our species.

