
Convergence: Artificial intelligence and the new and old weapons of mass destruction

By Emilia Javorsky, Hamza Chaudhry | August 18, 2023

Image: AI back-to-back heads. Credit: dStudio/Adobe Stock

Last October, Congresswoman Anna G. Eshoo issued an open letter to the national security advisor and the Office of Science and Technology Policy (OSTP) urging them to address the biosecurity risks posed by the use of artificial intelligence (AI) in both civilian and military applications. She wrote: “AI has important applications in biotechnology, healthcare, and pharmaceuticals, however, we should remain vigilant against the potential harm dual-use applications represent for the national security, economic security, and public health of the United States, in the same way we would with physical resources such as molecules or biologics.” At the UN Security Council’s historic first meeting on the impact of AI on peace and security in July, Secretary-General António Guterres echoed this concern, noting that the “interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics is deeply alarming.”

The field that explores how the dual-use nature of AI systems can amplify the dual-use nature of other technologies—including biological, chemical, nuclear, and cyber—has come to be known as convergence. Policy thought leaders have traditionally focused on examining the risks and benefits of distinct technologies in isolation, assuming limited interaction between threat areas. Artificial intelligence, however, is uniquely capable of being integrated with and amplifying the risks of other technologies. This demands a reevaluation of the standard policy approach and the creation of a typology of convergence risks that, broadly speaking, fall into two categories: convergence by technology and convergence by security environment.

Convergence by technology. The direct interactions between AI and developments in other technological domains create unique benefits and risks of their own. Examples of this type of convergence include the interaction of AI with biosecurity, chemical weapons, nuclear weapons, cybersecurity, and conventional weapons systems.

AI and biosecurity. In the context of evaluating the relative utility of risk assessment frameworks for mapping the convergence of AI and biosecurity risk, researchers John T. O’Brien and Cassidy Nelson define convergence as “the technological commingling between the life sciences and AI such that the power of their interaction is greater than the sum of their individual disciplines.” Their work surveyed potential interactions between domains that could considerably increase the risk of deliberate or accidental high-consequence biological events. This includes, for instance, AI-assisted identification of virulence factors in the in silico (via computer simulation) design of novel pathogens. Subsequent work highlighted the applications of deep learning in genomics, as well as cyber-vulnerabilities within repositories of high-risk biological data. Several recent articles in the Bulletin of the Atomic Scientists have identified additional ways in which developments in AI may be accelerating biological risks.

AI and chemical weapons. As part of a convergence initiative at the Swiss Federal Institute for Nuclear, Biological, and Chemical (NBC) Protection, a computational toxicology company was asked to investigate the potential dual-use risks of AI systems involved in drug discovery. The initiative demonstrated that these systems could generate thousands of novel chemical weapons. Most of these new compounds, as well as their key precursors, were not on any government watchlists due to their novelty. This development must be viewed in light of the advent of large language model-based artificial agents: agents that can learn to modify open-source drug discovery programs in similar ways, send emails and payments to custom manufacturers, and hire temporary workers to carry out compartmentalized tasks in the physical world.

AI and nuclear weapons. A growing body of research and advocacy has highlighted the potentially destabilizing consequences of AI integration into nuclear weapons command, control, and communications (NC3), which are illustrated in the Future of Life Institute film Artificial Escalation. A high-level discussion involving senior AI experts and government officials, hosted by the Arms Control Association in June, laid bare many of the security concerns arising from this integration. These concerns include an inability to verify and scrutinize AI decision making, a higher risk of the accidental use of autonomous weapons, and an increased likelihood of conflict escalation.


AI and cybersecurity. In the cyber domain, reports have pointed to the ways that artificial intelligence systems can make it easier for malevolent actors to develop more virulent and disruptive malware. They can also help adversaries automate attacks via novel zero-day exploits (previously unidentified vulnerabilities) targeting command and control, as well as phishing and ransomware. Autonomously initiated hacking is also expected to be a near-term emergent capability given the current trajectory of AI development.

AI in conventional weapons systems. A key feature of AI is that it enables a single actor to perform activities at scale and at machine speeds. It has been argued that applying this paradigm to AI integration into conventional weapons systems, such as antipersonnel drones and drone swarms, creates a new category of weapons with the potential for mass destruction. Further, the United States’ Joint All-Domain Command and Control initiative seeks to integrate all aspects of the conventional command and control structure into a single AI-powered network, which carries many risks, including one of accidental escalation.

Each of these examples explores interactions between AI systems and specific technologies, but the reality of this landscape is even more complex. For example, how do AI, cybersecurity and nuclear weapons command, control, and communications (NC3) all interact? How does one think about combinations of the above in conjunction with threats to critical infrastructure, such as hacking and disabling power grids or water treatment facilities? How does one evaluate the risks posed by advanced AI systems in connection with traditional security threats and other emerging technologies?

Convergence by security environment. Beyond these questions of direct interaction, it is also critical to consider how an environment in which widespread use of AI systems results in misinformation and increasing deference to technology affects the barriers to weapons of mass destruction (WMD) development and use. Convergence by security environment encompasses situations in which technology developments change the security environment as a whole, creating indirect effects that accentuate overall risks. The impacts here will likely be harder to investigate. Nonetheless, foreseeable examples abound.

One could imagine, for instance, that the development of AI systems that make it easier to craft disinformation and deepfakes could increase misperceptions on the international stage and reduce the possibility of successful attribution of biological incidents. Nuclear risk could also rise due to growing informational asymmetries and signaling failures. Runaway competitive dynamics between two nations on AI development could also push one actor to consider using a weapon of mass destruction or conducting a conventional attack on the other.

Beyond typology, another relevant macro question remains: whether to examine convergence risks holistically or to craft individual research spaces for each aspect of convergence. For instance, borrowing from the interdisciplinary project on biosecurity set up by the Stockholm International Peace Research Institute (SIPRI) titled “BIO Plus X,” there could be a large field of study titled “AI + X,” which evaluates the impact of AI systems on other technologies and weapons of mass destruction threats holistically, investigating common pathways and remedies. At the same time, the differences between each convergence pathway (such as AI and bio versus AI and nuclear) may be so significant that important nuances would be lost by examining these risks together. The likely answer is some mixture of both, but its makeup deserves careful consideration.

The different schools of thought on convergence. At face value, the different schools of thought on convergence broadly parallel the division of camps on technological progress. The techno-optimist framing would argue that AI systems would maximize the benefits of these technologies and could help minimize their risks. Such benefits include more robust nuclear command and control, speedier vaccine development, and more capable cyber defenses; the optimists could make the case that regulation would delay or impede these benefits.


Those with a safety mindset could weigh the concerns discussed in this article much more heavily, reasoning that unregulated AI developments are likely to lead to net reductions in international and national security.

A third camp, wedded to the status quo, would point to the lack of empirical research on both the benefits and downsides of convergence and cast doubt more generally on the transformative power of AI systems in either direction.

Given the rapid pace, scale, and ubiquity of AI development and deployment, it is imperative that experts start with a safety mindset. All these technologies have wide dual-use applications, and accelerated development could deliver both benefits and harms. It is already a question of great empirical difficulty to evaluate the benefit-risk balance of each of these technologies. This problem is further compounded by convergence and underscores the need for further research to quantify the upsides and downsides and to investigate frameworks that could accommodate this complexity.

As research is conducted, however, studies demonstrate that defensive technology is often disadvantaged compared to offensive technology in many high-risk arenas. For instance, a highly lethal pathogen can generally be developed faster than a vaccine against it. It is critical to investigate the balance at the intersection of each of these technologies, but the tendency for emerging defensive technology to lag emerging offensive technology requires that policymakers use the utmost caution in regard to convergence threats.

Policies to safeguard against convergence risks. In addition to further research, there is much that can be done in the policy realm to reduce convergence risks.

First, it is critical for the government to dedicate funding, through agencies such as the National Science Foundation, to improve our understanding of the risks from convergence. This should include exploration of technology convergence in specific domains and of security environment convergence, as well as more holistic investigation of the dynamics of threat convergence independent of the technological domain.

Second, Congress may consider a growing roster of policy recommendations already in the public sphere on mitigating risks from specific AI pathways. A recent report—the culmination of a meeting of high-level experts on AI-bio convergence, convened by the Helena organization in May—provides several recommendations. These include testing large language models for biological misuse, mandating DNA synthesis screening, and expanding biosecurity and biosafety guidance to include AI-enabled biology. A new bill in the Senate has built on policy recommendations to prevent the integration of advanced AI into nuclear weapons command, control, and communications (NC3) systems. The National Security Commission on AI released guidance on strategies to mitigate risks associated with AI in weapons systems, such as proliferation and escalation. As the risks from other convergence pathways are better understood, many more high-value policy recommendations are likely to emerge.

Importantly, many of these convergence pathways can likely be narrowed through common policy mechanisms that apply to advanced AI systems generally. For instance, a comprehensive approval process for the deployment of advanced artificial intelligence systems, including mandatory independent auditing and red teaming, could help prevent misuse and unintended consequences by withholding approval for systems that pose a high risk of convergence dangers. Legal liability frameworks to hold AI developers accountable for harms resulting from the systems they create also hold promise for incentivizing major labs to adequately test for and mitigate convergence risks by design. Finally, greater coordination and cooperation across companies and countries to establish common safeguards for AI development would likely reduce geopolitical tensions and dissuade relevant actors from rushing their AI technologies into military use.

As developments in all these technologies accelerate and nuclear tensions stand at a near-unprecedented high, investigating the convergence of old and new threats will become vital to upholding and advancing national and international security.

