Because we all agreed early on, for one reason or another, that most genetic information should
remain freely accessible, this discussion turned quickly to the more general question of what
information to oversee, limit, or even prevent, and how to do it. This question has worried experts
for a while. And though the answer is neither straightforward nor simple, it's a necessary one.
As for what information should be overseen, limited, or prevented, most scientists would say
that higher-risk information deserves more intrusive oversight.
Scientists have compiled several lists of “risky” research. The often-quoted Fink report lists
seven classes of experiments of concern, including experiments that increase transmissibility of an
agent and enable the evasion of diagnostic and detection methods. The CISSM oversight proposal
mentioned earlier categorizes activities into extreme, moderate, or potential concern; activities
of extreme concern would include research with eradicated or BSL-4 agents. In 2002, Raymond
Zilinskas and Jonathan Tucker proposed a list of six types of "sensitive" activities, including
facilitating dissemination as a fine-particle aerosol and improving the stability of pathogens.
All of these lists are similar. Most mention synthesizing viruses as a problematic activity. And
all seem to take into account which agents would work effectively as bioweapons. Despite these
similarities, scientists and experts are not close to agreeing on what constitutes a risky
activity. After all, the pathogen is only one part of a bioweapon. The dissemination mechanism is
the second part, and it is often overlooked. Some of the participants in this roundtable,
particularly Jens Kuhn, have referenced “weaponization,” but we didn’t properly discuss the
subject. Discussions about risky weaponization activities address factors such as the open-air
release of agents, methods for aerosol immunization, or the use of viruses as transport vehicles.
Dissemination methods are important because, in most cases, they spell the difference between a
bioweapon attack with limited effects and a mass casualty attack.
Most existing and proposed oversight procedures focus on research before it begins, for good
reason. However, some research results are completely unexpected (see the legendary mousepox
research in Australia), so projects need to be reviewed after they're done as well. So far, however,
post-research oversight is left to individual scientists and is voluntary. Will this work? It's
unlikely. A soon-to-be published survey by the Research Group for Biological Arms Control at the
University of Hamburg shows that most major English-, Russian-, and Chinese-language life science
journals have not implemented security review procedures, despite calls for them to do so.
All existing and proposed oversight approaches work only when they are implemented widely,
ideally on a global scale. Whether pneumonic plague breaks out in Washington, Berlin, or Nairobi,
it will reach all corners of the globe sooner rather than later. Implementing good oversight in one
country will have a limited effect. But how do scientists go about making oversight international?
Many states place biosecurity very low on their national agendas; they have other problems to
address, such as hunger, AIDS, or corruption. Biosecurity oversight typically follows biotech
industry development, and although biotechnologies are rapidly expanding across the globe, most
sensitive research projects are still done in the Western world. So, for the moment, Western
states bear the biggest part of the responsibility to protect the public. They still can and should
influence the conduct of biological research worldwide to prohibit the hostile use of biology.