By Rumtin Sepasspour | April 21, 2023
Every four years, the National Intelligence Council’s Global Trends report catalogs the global issues that the United States and its incoming presidential administration ought to be most concerned about. It’s not pleasant bedtime reading. But the most recent report, which projects out to 2040, went beyond the typical assessments of internal instability, interstate tensions, and international competition.
In a section on technology, the report contained a short but remarkable description of “existential risks” or “threats that could damage life on a global scale” and lead to human extinction and civilizational collapse. The report specifically cites runaway artificial intelligence, engineered pandemics, nanotechnology weapons, and nuclear war. Other threats include extreme climate change, geoengineering, ecological collapse, supervolcanoes, and near-Earth objects.
By bringing attention to this issue, the US intelligence community is doing what it’s designed to do—provide strategic insight to its leadership on trends, events, and risks in the global landscape. Until now, however, intelligence communities have paid almost no attention to existential risks. Their primary focus, justifiably, remains on conventional national security issues such as defense, counterterrorism, and counterespionage. But the COVID-19 pandemic revealed that other, potentially calamitous, threats are just as important—and the intelligence community should pay heed.
Given the scale and uncertainty of existential threats, intelligence is key. This is a point the Global Trends report itself makes: “Such low-probability, high-impact events are difficult to forecast and expensive to prepare for, but identifying potential risks and developing mitigation strategies in advance can provide some resilience to exogenous shocks.”
Intelligence communities are certainly not the only sectors of governments that could or should analyze existential threats. Nor would they be responsible for preventing the risks or building resilience to them. They can, however, play a critical role. Intelligence collection and analysis capability would help lead government efforts to detect, understand, and warn senior policymakers of these threats.
A new extreme. Until recently, end-of-the-world scenarios have been the domain of Hollywood films and centuries-old mythologies. In the past decade, however, the study of human extinction scenarios has taken a decidedly more contemporary and academic turn, with a small but integrated field of researchers—such as those at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, the Future of Humanity Institute at the University of Oxford, and the US-based Future of Life Institute—forming around this issue.
As Martin Rees, the United Kingdom’s Astronomer Royal and co-founder of the Centre for the Study of Existential Risk, stated: “Our Earth has existed for 45 million centuries. But this [century] is special. It’s the first when one species has the planet’s future in its hands.” The test of the first nuclear bomb on July 16, 1945, ushered in a new epoch of risk. But humanity’s impact on the world has only continued to grow and has reached a moment in time when the human species threatens itself.
An existential intelligence problem. Existential risk is a particularly challenging intelligence problem that requires special attention. The first challenge is scale: Existential risks impact human civilization and its future. At worst, they can lead to human extinction.
Nuclear winter is a prime example. The direct casualties from nuclear war would be extraordinary, potentially in the hundreds of millions. However, it is the aftermath that could be an existential threat. For example, a so-called “small” nuclear exchange of 100 weapons could send extraordinary amounts of aerosols into the stratosphere and lead to global cooling. Global temperatures could drop up to 8 degrees Celsius, disrupting crop growth and causing mass starvation, billions of deaths, and possibly even human extinction.
Effects of extreme climate change could wreak similar havoc. If average global temperatures rise more than expected—for example, above 6 degrees Celsius this century—it could trigger feedback loops and cascade effects such as the thawing of permafrost that would release catastrophic amounts of trapped carbon, further accelerating the planet’s warming.
Despite this, academic debate continues around whether extreme climate change technically poses an extinction risk. It is difficult to wipe out all of humanity this way. But global impacts such as crop failures, heat stress, and desertification that make parts of the world unlivable cannot be ruled out. Even if the climate catastrophe does not result in extinction, the scale of death and suffering would be incredibly high, and ultimately a threat to national security and prosperity.
The second challenge is uncertainty around how these risks unfold, how likely the scenarios are, and when the risks could occur. Such ambiguity makes these problems hard to analyze and devote policy resources towards. Intelligence communities, however, can help policymakers navigate this uncertainty. Using extreme climate change as an example again, there is a high amount of uncertainty around the tail risk—those rare events with potentially calamitous downsides. For example, despite being one of the most well-studied risks, scientists have little clarity on the likelihood or impact of very high global temperature increases. Three separate studies have concluded that the chance of catastrophic climate change is between 5 and 20 percent, depending on different emissions pathways.
What the world looks like with such increases in temperatures is equally uncertain. It is extremely difficult to assess humanity’s resilience to climate disruption, the dynamics of global ecological and social systems, and when and how feedback loops kick in. Climate change may be a reasonably well-understood risk. Extreme climate change, however, is not.
The final challenge is risk novelty: Many of these risks are only now emerging on the horizon. Although the risks of nuclear winter and climate change have been known for decades, the risks of catastrophic technology-based threats—artificial intelligence, synthetic biology, nanotechnology, geoengineering, and their interconnection with weapons of mass destruction—have yet to fully materialize.
For example, advances in synthetic biology could eventually make the modification of dangerous pathogens more accessible. This could mean that malicious actors could increasingly develop bioweapons due to the reduced education, training, cost, time, and equipment thresholds required to modify and employ pathogens.
Artificial intelligence also brings its own risks. Current AI systems pose risks of accidents, malicious use by terrorists and rogue states, and systemic risks, such as unstable escalation in a “flash war,” or a sudden military offensive.
The limits and speed of AI progress remain highly uncertain. Advanced forms of AI, near or beyond the level of human intelligence, could pose even more severe risks of accident or misuse. Before even reaching that point, however, artificial intelligence integrated into nuclear weapons systems could undermine nuclear stability and deterrence arrangements.
These risks will only continue to emerge and grow. So, policymakers must start thinking about them now. Intelligence can help assess the likelihood and impacts of the various risks. It can backcast potential pathways and scenarios. Backcasting is an analytical technique that outlines a potential future event and works backwards to identify the drivers, milestones, and decisions that would lead to that outcome. Based on this analysis, intelligence can support innovative approaches to reducing the risks.
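As a purely illustrative sketch of the backcasting idea described above (no real intelligence tool or tradecraft is implied, and the scenario milestones below are hypothetical examples invented for this illustration), a backcast can be modeled as a reverse walk from a posited end state through the chain of precursor milestones that would have to come first:

```python
# Illustrative sketch of backcasting: posit a future end state, then walk
# backwards through the milestones that would have to precede it.
# All scenario content here is hypothetical, for illustration only.

def backcast(end_state, precursors):
    """Return the milestone chain from earliest driver to end state.

    `precursors` maps each milestone to the milestone that directly
    enables it; None marks the earliest driver in the chain.
    """
    chain = [end_state]
    current = end_state
    while precursors.get(current) is not None:
        current = precursors[current]
        chain.append(current)
    # The backwards walk is reversed so the output reads forward in time.
    return list(reversed(chain))

# Hypothetical engineered-pandemic pathway, stated as milestones.
precursors = {
    "global pandemic": "pathogen release",
    "pathogen release": "pathogen synthesis by malicious actor",
    "pathogen synthesis by malicious actor": "cheap benchtop DNA synthesis",
    "cheap benchtop DNA synthesis": None,
}

print(backcast("global pandemic", precursors))  # earliest driver first
```

Each step in the returned chain is a point where analysts could look for observable indicators or where policy could intervene.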
Off the radar. Intelligence communities are not adequately investigating existential risk. If this situation continues, they are doomed to repeat the same tragic intelligence failures of the past, such as the fall of the Soviet Union, Saddam Hussein’s invasion of Kuwait, 9/11, the alleged presence of weapons of mass destruction in Iraq, the Arab Spring, Russia’s invasion of Ukraine in 2022, and COVID-19.
Previous intelligence failures have arisen from a perfect storm of problems: politicization, bureaucratic obstacles, poor evaluation of sources, faulty assumptions, cognitive biases, gaps in information, and poor interagency communication.
In the case of existential risk, three key gaps are stopping intelligence agencies from allocating effort that is commensurate with the risk: lack of responsibility, insufficient resources, and inadequate relationships.
Filling these gaps will be critical to avoiding an intelligence failure of existential proportions.
Responsibility. The heavily security-focused missions of intelligence communities mean that existential threats at a global level are typically ignored. Intelligence communities in democratic countries conduct their work based on the authority of legislation and policy. They deliver intelligence advice based on requirements from their customers, primarily political and military leaders.
Naturally, the focus of intelligence communities tends towards national security, defense, and foreign policy issues. For example, the United States’ 2019 National Intelligence Strategy lists its missions as strategic intelligence, military operations, cyber, counterterrorism, counterproliferation, and counterintelligence.
One level up, national security strategies provide a higher degree of guidance. Western countries, in particular, typically share a similar set of national security priorities: terrorism and other forms of extremism, domestic security, espionage, cyber threats, military threats, and sophisticated criminal elements. As a result, intelligence collection and analyses are targeted towards the most immediate or direct threats. Long-term or unconventional threats can often take a back seat.
Intelligence effort pushes beyond the traditional security and foreign policy focus when customers demand it. In recent years, for example, countries have begun adding other areas of concern—such as climate change or emerging technologies—more formally into their list of priorities. The United Kingdom’s 2021 Integrated Review, which sought a combined strategy for its security, defense, development, and foreign policy, listed tackling climate change and biodiversity loss as the government’s number one international priority. It also recognized that national resilience required understanding a diverse range of risks, including “low-probability, catastrophic-impact events.”
These risks, however, will continue to be neglected without explicit guidance, whether through a national security strategy, intelligence strategy and policy, or strict direction from policy customers.
Resources. Intelligence agencies in many Western countries are particularly well-funded, especially over the last 20 years due to terrorism threats. The United States intelligence apparatus alone works with a budget of more than $80 billion a year. According to one 2009 study, the United States represented about 65 percent of the global spending on intelligence. The same study found that a total of 1.1 million people work in intelligence around the world, with 172,000 in Russia and 144,000 in the United States.
There is no public reporting on the number of intelligence assets explicitly devoted to existential risks. Intelligence communities currently allocate substantial resources towards issues relating to general sources of existential risk, such as weapons of mass destruction and terrorism. But the dimension of these risks that puts the global population in peril is almost certainly not receiving appropriate levels of funding or resources.
Raw dollars and people are not the only issues. The investigation of existential threats requires high-caliber expertise from across the disciplinary spectrum. Intelligence analysis organizations—such as the Central Intelligence Agency and Defense Intelligence Agency in the United States, or the Joint Intelligence Organisation in the United Kingdom—are central to this effort. They have the ability to develop, maintain, and integrate the various expertise. They can set up new or different collection requirements to ensure that the intelligence being gathered can best inform the analysis. When provided the appropriate scope, the analysts should have the time and tradecraft to draw together all sources of intelligence to provide insight to senior policymakers.
Relationships. Given their areas of responsibility, intelligence communities have traditionally engaged with national security and foreign affairs establishments. Their customers are practitioners of foreign, defense, and national security policy, such as political leaders, members of the military, and diplomats. And collectors of intelligence—such as of human, signals, geospatial, open source, or technical intelligence—are almost entirely within the intelligence ecosystem.
But existential risk, and the modern world in which existential risk has arisen, does not sit neatly within these boundaries. The issue requires a different lens through which to view intelligence relationships, one that is more holistic and inclusive of non-traditional customers, partners, and those who collect information for intelligence agencies.
Intelligence agencies have in many instances failed to see stakeholders outside the traditional set as potential customers or partners. It has often taken crises to force new behavior and linkages. When a novel coronavirus emerged in early 2020, links between national intelligence and health communities had to develop extremely rapidly amid a growing crisis. As former Deputy Director of the CIA Michael Morell stated about COVID-19:
“[T]here are issues that are outside of the traditional national security framework, whether they be pandemics, whether they be the vulnerability of supply chains, whether it be climate change, whether it be any sort of non-traditional national security issue that in some way impacts our security, that the intelligence community should be focused on.”
Interagency coordination also remains a barrier to agile and innovative responses for complex threats. Intelligence communities in the West learned this hard lesson in the 2000s, after multiple strategic surprises and intelligence missteps, such as Al Qaeda’s attacks on 9/11. Following two decades of learning and reform, many intelligence communities are increasingly comfortable coordinating whole-of-government approaches to problems.
Existential risk will similarly test how intelligence communities manage relationships. The full scope of the topic cannot be understood or responded to without the help of non-traditional partners, such as domestic policy agencies, technology companies, critical infrastructure providers, research organizations and scientific experts. Building these bridges proactively will face cultural, bureaucratic, security-related, and legislative obstacles. Interagency coordination is particularly critical, given the sprawling and complex nature of the issue.
Existential risk as an intelligence priority. Humanity cannot afford for an existential catastrophe to be an intelligence failure. Given the proper support, intelligence communities, and particularly intelligence analysts, could play a crucial role in detecting, analyzing, and understanding threats of an existential nature.
In some ways, intelligence communities already are well-positioned to do so. Analysts focus on long-term and uncertain events, even if those events are not the highest priority over day-to-day intelligence requirements. Intelligence agencies are generally highly capable and well-resourced parts of government with experience in assessing complex and decentralized threats, especially from malicious groups. Some areas of potentially extreme risk, such as nuclear weapons and advanced technologies in the hands of adversaries, are already well within the purview.
Intelligence effort on existential threats does not require a huge investment. Several small but sensible steps will make intelligence an important enabler of governments’ efforts on existential risk. Any country’s intelligence community could take steps to further evaluate existential threats, though the United States is the most obvious leader on the issue given the scale and capability of its intelligence community. That these efforts are relatively inexpensive, especially in comparison to the overall risk, makes the proposition both prudent and low risk.
The first step is for existential threats to be acknowledged in policy documents—through legislation or strategic guidance—as explicitly within the responsibility of intelligence work. Agencies will not focus on these risks unless there is clear guidance from leadership.
For example, the United States’ efforts at studying unidentified flying objects started in 2007, when then-Senate Majority Leader Harry Reid pushed to secure appropriations funding specifically for such a program. The Advanced Aerospace Threat Identification Program, which has led to recently published footage of unusual aerial phenomena, started out as an intelligence community effort under the Defense Intelligence Agency, the Pentagon’s intelligence analysis arm. National security strategies and national intelligence strategies provide other opportunities to shape the mandate. In the United States, existential risk could be added to the National Intelligence Priorities Framework.
The next step is to devote specific resources towards analyzing and warning about existential threats and global catastrophes. At a minimum, technology analysts could be allocated towards extreme technological threats, such as engineered pandemics, runaway artificial intelligence, and highly advanced autonomous weapons. But a standing capability would be better. An extreme global threats warning team sitting within the central analytical agency, such as the National Intelligence Council, could work across the intelligence community to identify and track these risks.
Richard Clarke and R.P. Eddy, in their book on heeding warnings for catastrophes, recommend a “National Warning Office” in the Executive Office of the President, sitting outside the intelligence community to facilitate policy responses. Imagine how much better warned governments could have been about COVID-19 if they had had a team devoted to tracking extreme threats, beyond the United States’ National Center for Medical Intelligence. Perhaps intelligence communities could establish a mission around extreme global threats, with a mission manager that allocates the resources devoted to this mission, coordinates agencies around the topic, and presents a central point of responsibility for policymakers.
Intelligence communities should regularly issue reports on issues relating to existential threats. A key role that intelligence communities play is responding to needs that policymakers do not even know they have yet, rather than simply being responsive to the latest request. Intelligence communities could regularly flag these threats in traditional annual or strategic assessments. They could develop a global risk register with a long-term (say, 20-plus years) outlook.
Analytical agencies could produce regular assessments on national security implications of extreme risks. Extreme climate change, advanced artificial intelligence, engineered pandemics, and near-Earth objects are the most logical initial cases, but solar storms, speculative emerging technologies, and geoengineering could also have their own potential customer base. The pathway, triggers, and likelihood of a nuclear winter should also be a regularly updated assessment.
The final ingredient is increasing collaboration and relationships inside and outside government around existential risks. Consistent, formalized communication channels with scientific organizations and domestic agencies will be critical to closing internal knowledge and informational gaps. Liaisons with relevant background knowledge, similar to the FBI’s liaison officer at the Department of Health and Human Services, could provide technical expertise and crucial linkages with scientific communities. And intelligence cooperation arrangements, such as the Five Eyes—the intelligence alliance between the United States, the United Kingdom, Canada, Australia, and New Zealand—or the NATO intelligence enterprise, could look to develop a community of interest or collaboration around extreme global risks.
Academic and scientific input on existential risks could also massively scale up even a very small intelligence-led mission. As an analogy, in 2021, the United States intelligence community established a pair of outside panels to study the Havana syndrome, a mysterious ailment afflicting American spies and diplomats. Academic organizations and intelligence communities could even jointly host conferences and conduct long-term estimates. Relationships with academic institutions and the private sector will also be important for studying risks arising from accidents and failures of technological progress, such as synthetic biology and artificial intelligence.
Domestic agencies, such as those responsible for environment, industry, infrastructure, health, and emergency management, could be both potential customers and sources of intelligence. Fundamental challenges, such as security clearances, legal requirements, and suspicions of motives could take years to work through.
Intelligence success. Intelligence communities are on the frontline of the future. They wade through dark and murky territory, without a map, grasping at signals of what the future may hold. Existential risks might be unlikely and uncertain. They might be difficult to analyze and communicate. They might test the boundaries of plausibility. But that does not mean they should be ignored or avoided.
Indeed, it makes the role of intelligence communities even more important. Existential risk is one of the greatest policy challenges of the 21st century. By extension, it is one of the greatest intelligence challenges of the century. Existential risk could be a massive intelligence success.
(Editor’s note: Research for this article was supported by Krystal Ha, a researcher on existential and global catastrophic risk, who has worked for the Grattan Institute, the economic consultancy HoustonKemp, and the Victorian Department of Treasury and Finance in Australia.)