In interdisciplinary discussions there are usually disagreements over terminology. I noticed this firsthand at a January seminar I attended in Australia that brought together some 40 natural scientists, social scientists, philosophers, ethicists, and policy makers to discuss the role and limits of ethics with regard to the dual-use problem in the life sciences. The seminar, which was dubbed "Promoting Dual-Use Ethics" and consisted of two eight-hour days of discussion, was held at the Centre for Applied Philosophy and Public Ethics at the Australian National University in Canberra. On the first day we focused on the dual-use problem created by advances in the life and associated sciences; on the second day we examined what ethical analyses and ethicists could contribute to solving these dilemmas.
In the past, the term "dual-use" was applied mainly to military research that found useful civil applications. Over time, it flipped. Today the term refers to civil work that might find military or hostile applications. In Canberra, a discussion about how the widespread use of illegally produced botulinum toxin--a potent neurotoxin--in cosmetics could be exploited for hostile purposes raised the question of whether we should expand our definition of dual-use to include information, know-how, supply chains, and equipment. The "definition creep" associated with the term dual-use was well illustrated by a participant who asked whether it should be applied to a plant pathologist's work to find a better means to kill weeds or illicit drug-producing crops. Should such work be considered dual-use, since it could also be used to eradicate a country's food crops in war?
Scientists and ethicists view the problem of dual-use from many different--and sometimes conflicting--perspectives. My own view is that there are numerous fields of research and development that could give rise to concern. The seminar had papers from a range of disciplines, including synthetic biology, neuroscience, nanotechnology, and agrobiology. One can imagine each of these fields as a pipeline through which research ideas develop into projects, publications, and eventually, applications. Along the way toward application and products, there are points where control measures and ethical considerations might be applied, such as the implementation of reasonable codes of conduct, oversight, and educational measures.
Perhaps no field in the life sciences presents such cause for concern and hope for benefit as synthetic biology. Synthetic biology strongly suggests that in the future the life sciences will no longer be merely descriptive but more of an engineering discipline, allowing for the creation of artificial living organisms that use novel synthetic DNA base pairs, unnatural amino acids, and even entirely different genetic coding systems. Dual-use fears and the potential for unintended consequences of such a highly sophisticated science are clear; how to deal with them is less so.
One concept that has been quite popular in bioethical circles in recent years is the precautionary principle. The principle exists in varying levels of severity. The weakest version holds that precautionary action can be taken before potential risks are known with full certainty. A more severe version forces research proponents, rather than their critics, to shoulder the burden of proof that their work is safe. Finally, in the principle's strongest form, even the smallest risk of serious consequences is deemed sufficient to stop research. An interesting question raised by scientists and ethicists alike in Canberra was how these different precautionary principles might be applied to proposed synthetic biology developments. The ensuing debate brought us to discuss the applicability of Thomas Aquinas's doctrine of double effect.
The thirteenth-century theologian's doctrine argues that while it is not (usually) morally permissible for an individual to kill another human being, such an act in self-defense can have the good and intended effect of saving one's own life--although it can also have the bad effect of killing the attacker. A scientist deciding whether to pursue research that could be misused is in a seemingly similar position. There is the intended positive outcome of beneficial civil science and the possible negative outcome of future misuse. Although misuse isn't a direct or desired result of the researcher's work, the work could be seen as making a negative future outcome possible.
Is it therefore morally permissible for the researcher to proceed with a project if seen from the perspective of the doctrine of double effect? My understanding of the outcome of the discussion in Canberra was yes, but with a caveat: Given that an evil action could be enabled, the researcher also would have a moral obligation to help prevent such future misuse--e.g., by helping to strengthen safety controls and other regulatory curbs over research and later potential applications.
This view that it is necessary to look beyond an individual scientist's direct responsibility for his or her own work links with a strong undercurrent that I detected throughout the two-day meeting: that dealing with dual-use issues is certainly in part the responsibility of individual scientists, but other people and organizations have responsibilities as well. As the dual-use potential of the life sciences has only been recognized in recent years, scientists and scientific organizations must carefully examine their working norms, rules, and governance structures so that scientists, managers, publishers, and funders aren't left in impossible dual-use dilemmas. My own general conclusion from Canberra is that there is much to be gained from continued interaction with ethicists, who can be invaluable in such an effort--even if it is a struggle to grapple with their terminology and modes of analysis!