
How do we control dangerous biological research?

By Filippa Lentzos | April 12, 2018


No military wishes for an enemy with capabilities that match its own. Indeed, the US chairman of the Joint Chiefs of Staff has said he does not want American service members to ever have to face a fair fight. But how do you stay ahead of an adversary? The United States tries to maintain "overmatch" against any enemy by investing heavily in technological innovation, and today, a considerable part of that investment goes into the biological sciences.

At DARPA—the Defense Advanced Research Projects Agency, the US military’s research wing—the goal to “harness biology as technology” is one of four main areas of focus for its strategic investments. The biological sciences are expected to play a significant role in future conflicts and hybrid warfare, and techniques to sequence, synthesize, and manipulate genetic material feature prominently in DARPA efforts. While no countries have openly adopted synthetic biology techniques for offensive use, the US intelligence community says they pose a threat to national security, and a National Academy of Sciences committee, funded by the US Defense Department to systematically assess synthetic biology threats, said that “it is possible to imagine an almost limitless number of potential malevolent uses for synthetic biology.” The United States is clearly worried an adversary may be harnessing these methods, and is investing in defensive capabilities. “The same tools of synthetic biology that we’re concerned about as being capable of being used against us, we are also using in the laboratories to help develop countermeasures,” said Arthur T. Hopkins, acting assistant secretary of defense for nuclear, chemical, and biological defense programs, in Congressional testimony last year.

Russia, too, is concerned an adversary may be harnessing synthetic biology for offensive use. Back in 2012, President Vladimir Putin highlighted "genetic weapons" as a future threat, and last year he claimed that the US military was secretly collecting Russian biological material.

Washington, Moscow, and other governments say they are focused only on "defensive" biosecurity activities, but there is a fine line between "defensive" and "offensive" in this realm, and the alarming military focus on synthetic biology may cause people to wonder whether there is some way to control the weaponization of biology. In fact, the Biological Weapons Convention (BWC) was established back in 1972 to do just that. It has 180 states parties, including Russia and the United States, and it unequivocally prohibits the development of biological agents—whether naturally occurring, genetically modified, or chemically synthesized—for the purpose of deliberately causing disease, death, or disruption to the human body's functions.

The BWC, though, is not well equipped to deal with potential security implications of rapidly developing biological research, in part for reasons going back to when it was established. In the tech world, “research and development” are often mentioned in the same breath, but in fact they are distinct, and the convention only addresses one of them. It explicitly bans “development,” but is much vaguer when it comes to research activities. There is a reason for this. Those negotiating the treaty in the late 1960s and early 1970s were aware that some early-stage biological research could have multiple uses: that it might lead to positive breakthroughs for human health, or to defensive countermeasures, or to discoveries with significant potential for offensive misuse. In an effort to avoid having to determine exactly what kind of research would and would not be permitted under the treaty, the negotiators addressed only the post-research phase of discovery, that is, efforts to actually develop, manufacture, or acquire biological weapons. It is much harder to prohibit research than manufacturing, and negotiators did not want to get mired in discussions on how to identify and manage particular subsets of research. So they put the topic aside.

With current advances in biology, we can’t afford to avoid the topic any longer. It is high time the international community turn its focus to the security and governance of biological research. This is an urgent issue, because whenever a proof of concept, technological breakthrough, or scientific game changer is found to have unexpected military utility, it can significantly alter the balance of incentives and disincentives to comply with BWC obligations. The question is, how do we guard against experiments or lines of inquiry that lead some researchers to pursue the kind of edge that contravenes international norms and legal frameworks?

Both security policy and science policy play a role. Clearly, there is no one simple answer. Part of the work has to be done by governments and policymakers focused on international security, and should include strengthening norms against misuse and supporting humanitarian policies; modernizing the BWC to counter its growing irrelevance; increasing capacities to defend against and investigate allegations of misuse; and building transparency, confidence, and trust in biodefense programs. But there is likewise much work that should be done by governments and policymakers in terms of science policy to raise awareness of the security dimension of biological and life-science research, promote research integrity, foster a culture of responsibility, and develop sound accountability practices.

To accomplish any of this, we have to be able to both characterize and evaluate biological research with high misuse potential. This is exceptionally difficult to do, and continues to elude both the international community and national policymakers. The United States has come further than most countries in its deliberations, and in 2012 began implementing "Dual-Use Research of Concern" (DURC) policies after a challenging, decade-long process. These policies establish procedures for reviewing certain types of research involving certain high-consequence pathogens and toxins. Unfortunately, they contain significant weaknesses, many of which were highlighted by a recent experiment that synthesized horsepox virus from scratch, the details of which have gradually come out over the last few months.

The experiment was primarily a proof-of-concept study, carried out in 2016 by virologist David Evans’ team at the University of Alberta in Canada and funded by Tonix, a pharmaceutical company headquartered in New York City. Their aim was to demonstrate that it is possible to synthetically create horsepox virus in the lab, and by extension, in the longer-term, that it would be possible to develop a horsepox-based vaccine against smallpox that would be safer and more effective than contemporary vaccines. To do this, the research team obtained gene fragments through mail order from a DNA synthesis company, assembled the fragments into the sequence of the horsepox virus genome, and stitched them together. The resulting virus was then shown to be capable of infecting cells and reproducing.

Evans first discussed the experiment at a World Health Organization (WHO) meeting in November 2016. A Tonix press release came out in March 2017, a report of the WHO meeting was published in May 2017, and the journal Science brought the story to wider prominence in July 2017. A write-up of the study was rejected by two leading science journals before it was eventually published by the journal PLOS ONE in January 2018.

The security concerns raised by the experiment are fairly straightforward. Horsepox virus does not cause disease in humans and is not itself considered a dangerous virus; it is not believed to exist naturally anymore, and the only known samples are stored at the US Centers for Disease Control and Prevention (which, incidentally, would not give Evans' team permission to use them commercially). What classifies the experiment as "of concern," however, is that the proof of concept and methodology for synthetically constructing horsepox virus are equally applicable to horsepox's much more dangerous cousin: the variola virus, which causes smallpox. This highly contagious and lethal human disease was eradicated 40 years ago through an extensive global campaign. Existing strains of the variola virus are kept at two WHO high-security labs, and there are ongoing efforts to agree on their destruction and bid a final goodbye to the virus. The horsepox experiment is a step in the wrong direction, actively increasing the likelihood that smallpox could reemerge as a threat to global health security.

The horsepox experiment highlighted three weaknesses in the American DURC policies. First, horsepox virus is not listed as a pathogen requiring review, so the experiment did not have to be assessed by Evans' team or their institution for potential security concerns before it was carried out. Second, even if horsepox virus had been listed, the experiment would not have been covered by DURC policies, because review obligations apply only to US government-funded research, and the horsepox experiment was privately funded.

Yet, while both government and funder review failed, a third "line of defense" did function: publisher review. The PLOS Dual Use Research of Concern Committee reviewed the paper for security concerns and found that the benefits of publication outweighed the risks. Following publication, once the larger biosecurity community had access to the details of the case, a number of experts weighed in on the risk-benefit analysis, arguing that the PLOS committee got it wrong and that there was a weak scientific foundation and even weaker business case for the project. The expert assessments underscore the DURC policies' third weakness: They do not call for collective decision making. This leaves decisions about risky research vulnerable to what has been dubbed the "unilateralist's curse," a set of incentives under which research with high potential for misuse is more likely to be carried out when scientists act independently than when they decide as a group. Biotechnology researcher Gregory Lewis explains: "Imagine that 100 scientists are individually deciding whether it would be a good idea to synthesize horsepox. All of them act impartially and in good faith: They would only conduct this work if they really thought it was on balance good for humankind. Each of them independently weighs up the risks and benefits of synthesizing horsepox, decides whether it is wise to do so, and acts accordingly…if synthesis of horsepox is not to occur, all 100 scientists must independently decide not to pursue it; while if any of the scientists judges the benefits to outweigh the risks, he or she acts unilaterally to synthesize horsepox." The problem with the DURC policies is that decisions on pursuing potentially harmful research are left primarily to individual researchers and are therefore held hostage to the judgment of the most extreme outlier rather than based on a collectively negotiated group judgment.
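Lewis's thought experiment has a simple probabilistic core, and a minimal sketch can make it concrete. In the toy calculation below (my own illustration, not from the article, assuming each scientist errs toward proceeding with a probability of 5 percent), the chance that at least one of n independent deciders runs the experiment is 1 - (1 - p)^n, which climbs quickly with group size:

```python
# A minimal sketch (illustrative, not from the article) of the arithmetic
# behind the "unilateralist's curse." Assume each of n well-intentioned
# scientists, deciding independently, misjudges the risk-benefit balance
# in favor of proceeding with some small probability p. The experiment
# goes ahead unless ALL n of them decide against it.

def prob_someone_proceeds(n: int, p: float) -> float:
    """Probability that at least one of n independent deciders proceeds."""
    return 1.0 - (1.0 - p) ** n

for n in (1, 10, 100):
    print(f"n = {n:3d}: P(experiment is run) = {prob_someone_proceeds(n, 0.05):.2f}")

# With p = 0.05, this prints roughly:
# n =   1: P(experiment is run) = 0.05
# n =  10: P(experiment is run) = 0.40
# n = 100: P(experiment is run) = 0.99
```

A collective decision rule, by contrast, tracks the group's considered judgment rather than its most permissive outlier, which is precisely what the DURC policies fail to require.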

Risk-benefit analysis is the wrong approach to biosecurity review, and the horsepox situation is symptomatic of this larger problem with DURC policies: Their underlying framework weighs quantified risks against quantified benefits as though both were known with certainty. Yet the security and public health implications of developments in synthetic biology, and of novel biotechnologies more generally, are anything but certain; most often they are vague and unclear. It would be careless to wait for definitive proof of harm before taking protective action. Good security rests not on tallying up risks and benefits, but on managing uncertainty, ambiguity, and ignorance—sometimes even situations where we don't know what we don't know.
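To see how fragile such calculations are, consider a toy example (illustrative numbers of my own, not from the article): hold an experiment's estimated benefit fixed, let the probability of misuse range over a few orders of magnitude, a spread entirely plausible for a novel technology, and watch the "rational" verdict flip.

```python
# A toy sketch (assumed, illustrative numbers only): a risk-benefit verdict
# is only as stable as its inputs. For a novel technology, the probability
# of misuse is often unknown to within orders of magnitude -- and across
# that range, the verdict reverses.

BENEFIT = 1.0           # assumed expected benefit, in arbitrary units
HARM_IF_MISUSED = 1e4   # assumed harm if the method is misused, same units

for p_misuse in (1e-6, 1e-3, 1e-1):
    expected_harm = p_misuse * HARM_IF_MISUSED
    verdict = "proceed" if BENEFIT > expected_harm else "hold back"
    print(f"P(misuse) = {p_misuse:.0e}: expected harm = {expected_harm:7.2f} -> {verdict}")

# Prints roughly:
# P(misuse) = 1e-06: expected harm =    0.01 -> proceed
# P(misuse) = 1e-03: expected harm =   10.00 -> hold back
# P(misuse) = 1e-01: expected harm = 1000.00 -> hold back
```

When the inputs are that uncertain, the output is not a measurement but a guess dressed up as one, which is why governance built for ambiguity matters more than any single calculation.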

Security review of biological research requires a different logic. Those at risk should not be required to demonstrate that a given experiment or line of inquiry is potentially dangerous. Rather, the funders who support research, the scientists who conduct it, and the publishers who approve and communicate it should be required to prove the absence of danger. This is the notion behind the maxim “first do no harm” that is fundamental to doctors, and the precautionary principle that many regulatory bodies have applied to new areas of scientific research.

In essence, this principle recognizes that it may be better to do nothing than to risk causing more harm than good. Rather than leaving decisions that may affect society as a whole to an individual or a small group of like-minded peers, a regulatory framework controlling dangerous biological research should emphasize collective and transparent decision making. Such a framework should also encourage exploring alternatives to potentially harmful actions and setting goals that protect health and the environment. We need responsible research and innovation that continually works to align with the values, needs, and expectations of society.

Guarding against the deliberate misuse of biology is a tall order for the international community and national policymakers, but it is not an impossible task, given political will. We already have frameworks, concepts, and experiences to draw on, including the Biological Weapons Convention and US policies on Dual Use Research of Concern. We can build on these to reduce the security risks posed by the rapid evolution of biology.

