The bioweapons convention: A new approach

November 24, 2015

The August 2015 meeting of state parties to the Biological Weapons Convention brought a welcome but little-noticed development: a document submitted by the United States encouraging fellow members to develop a common understanding of “tacit knowledge.” Such knowledge is arguably the key determinant of bioweapons development, yet past nonproliferation efforts have largely ignored it in favor of more tangible threats, such as the spread of materials, technologies, and, more recently, scientific information.

Tacit knowledge consists of unarticulated skills, know-how, or practices that cannot easily be put into words but are essential to the success of scientific endeavors. Often such skills amount to unarticulated “ways of doing things” particular to individual scientists (personal knowledge) or shared among teams (communal knowledge). Tacit knowledge grows in stable work environments free of disruption, and it decays when teams break up or practices go unused. In bioweapons development, tacit knowledge takes the form of the skills that scientists and technicians acquire in order to manipulate fragile, unpredictable microorganisms that mutate and are sensitive to their environment and handling conditions.

When it comes to nonproliferation efforts, tacit knowledge poses a threat if it matures in the wrong places, such as illicit bioweapons programs. However, it can also act as a bulwark against the replication of dangerous technologies outside their native programs, something that international monitors do not always take into account when assessing threats. Should state parties to the bioweapons convention truly embrace the concept and use it to assess potential threats, particularly those posed by new technologies, they would not only improve the overall efficacy of nonproliferation policies but also help mend longstanding differences over the implementation of two apparently contradictory treaty requirements: preventing the spread of technologies that can support bioweapons programs, and ensuring cooperation and exchanges of biotechnologies among member countries.

When it comes to preventing or degrading tacit knowledge in illicit programs, much depends on how long a given program has been around. For a budding program, where teams are still jelling and experimenting with various agents to acquire the necessary skills, the most appropriate policy is one that frustrates the accumulation of knowledge. International inspections and police activity, or even the threat of such disruptions, have proven effective at achieving this goal, as was the case with the programs run by Saddam Hussein’s Iraq and the Japanese terrorist group Aum Shinrikyo. For threats from established programs, such as the former Soviet bioweapons program, a more appropriate policy would focus on accelerating knowledge decay: breaking up teams and employing former scientists in areas that do not call on their tacit bioweapons skills would degrade both communal and personal knowledge.

No matter how long a given program has existed, however, refocusing nonproliferation efforts on tacit knowledge has two important implications for disabling illicit projects. First, effective nonproliferation requires a verification mechanism for the bioweapons convention, which could create the kinds of disruptions (inspections, for example) that frustrate or prevent the accumulation of tacit knowledge. Without the necessary knowledge, bioweapons work becomes extremely difficult, if not impossible, even with ample access to materials, technology, and equipment, as past state and terrorist bioweapons programs demonstrate. Second, the new focus could help address a major design flaw in current efforts to prevent knowledge proliferation. In former Soviet states, for example, nonproliferation efforts have consisted of funding research projects to keep scientists at their former facilities, preventing them from selling their skills to outside parties. Although the approach makes sense at first glance, viewed from the tacit knowledge angle it looks highly counterproductive: Former bioweapons scientists continue to work with old colleagues, and on bio-agents familiar from their former work. In other words, the current approach allows these scientists to maintain both communal and personal tacit knowledge. A more productive policy would help these scientists exit the bioweapons field by placing them in separate institutions and employing them in areas that do not require their bioweapons skills.

Illicit programs aside, tacit knowledge can also play an important role in assessing possible threats from new technologies. In recent years, several experiments raised alarm among government officials and security analysts because they seemed to suggest that mere access to technologies, materials, and information could allow malevolent actors to produce harmful agents. These include the synthesis of the poliovirus at the State University of New York at Stony Brook in 2002; the resurrection of the deadly 1918 flu virus in 2005; the creation of the first self-replicating synthetic cell by the Venter Institute in 2010; and an H5N1 experiment in 2011 that resulted in a virus that spreads more easily among mammals. In the final case, the controversy reached new heights when, in 2012, the National Science Advisory Board for Biosecurity requested a halt to the publication of scientific findings produced by a Dutch team working with funding from the US National Institutes of Health. Soon after, the Dutch government imposed export controls on the publication of the same scientific results, and the international scientific community agreed to observe a year-long moratorium on research with the virus.

Yet analyses of these experiments showed that the alarmist claims were unfounded. First, the experiments were far from easily replicable: The skills required to achieve such results were honed over long periods of time, sometimes decades, by individuals and teams of scientists with unique expertise, working in specific settings, whose tacit knowledge was not captured in the published articles describing their results. Replicating their work on the basis of scientific publications alone would therefore have been difficult even for individuals with the appropriate scientific knowledge, and probably impossible for individuals with limited scientific backgrounds, such as terrorists. Furthermore, in the case of the H5N1 experiment, additional information about experimental methods and findings, which ironically came to light in the press as a result of the controversy (not as a result of government probes), revealed that the mutated virus produced by the Dutch team was not as dangerous as originally announced by the lead author. (The virus became lethal only when introduced directly into animals’ nostrils.) The episode underscores the fact that, in the absence of a systematic analysis of the knowledge necessary to achieve successful results, government authorities can easily get caught in a whirlpool of alarmism that leads to inappropriate decisions.

Thus a new focus on tacit knowledge could help advance key mandates of the bioweapons convention, namely the assessment of new technologies, the improvement of national implementation, and the strengthening of cooperation among member states. Analyzing potentially dangerous experiments through the lens of tacit knowledge can limit instances in which security-minded governments block access to scientific data with potential health benefits. This requires that member countries first abandon the current approach of simply compiling lists of potentially dangerous scientific developments and imagining what is conceptually possible to do with them, in favor of a more rigorous investigation of what is actually achievable. To that end, they would need to ask the following questions: What type of knowledge was necessary to conduct an experiment? What part of this knowledge is widely available, and what part is specific to the individuals or team in the laboratory involved? And to what extent could others replicate the combination of tacit skills brought to bear in a specific experiment? Armed with an empirically sound analysis of the risks and benefits of scientific and technical developments, member countries will be in a better position to make appropriate decisions about sharing such advances. Ultimately this should promote more sharing among member countries, which will have a positive effect on overall implementation.

Some analysts believe that new technologies will eventually de-skill the field of biology because they automate processes and tasks that formerly required tacit skills, and that they will therefore permit states or terrorist groups to produce lethal agents without much expertise. Current evidence suggests otherwise. Even when new technologies eliminate the need for certain skills, they require their users to develop new skills to solve the problems that inevitably arise. For example, the PCR machine, which has been in use for about 30 years, still causes its users numerous problems. The machine automates the amplification of DNA samples, a task that was previously performed manually and required expertise. Yet users report that it automates only a portion of the process and still demands manual skills. In addition, the kits sold with the device, designed to facilitate various manipulations, in fact create new problems that users must solve through painstaking trial and error, or by calling on the skills of a community of experts. Similarly, recent research on DNA synthesis technologies, as well as so-called next-generation sequencing technologies, indicates that they are prone to errors that users can resolve only if they have the required skills and expertise. New technologies, then, are not the straightforward tools that untrained actors can use for malevolent purposes, as some experts claim. Evaluating the potential threat posed by a new technology or process requires a careful examination of the hidden contingencies associated with its use and of the expertise and hands-on skills required to overcome them.

By stating that tacit knowledge is a “risk modulator,” the US document submitted to the August meeting captures just what has been missing in threat assessments thus far: a more nuanced, empirical appraisal of possible threat scenarios. State parties generally agree that current threat assessments are insufficient, and that improved implementation requires a systematic, fact-based analysis. However, discussions around the US submission at the August meeting fell along the traditional fault line of compliance versus cooperation. It is important, therefore, that the United States continue to promote a common understanding of tacit knowledge among convention members and emphasize its benefits in achieving both compliance and cooperation.

