Bioethicists enter the dual-use debate

By Malcolm Dando | April 20, 2009

Well-informed scientists disagree about whether classic dual-use experiments, such as the genetic manipulation of mousepox and the sequencing and synthesis of the 1918 Spanish influenza virus, should have been carried out and/or published. Given this acrimony, an ethical analysis might help as the revolution in the life sciences continues apace. Bioethicists, who have not yet engaged much with the dual-use problem in the life sciences, are beginning to apply their expertise to these questions, and the early results suggest that easy answers are still lacking.

Two papers published last year offer insight into how bioethicists would approach dual-use life science research. Hans-Jörg Ehni of the Institute of Ethics and History of Medicine at the University of Tübingen in Germany wrote “Dual-Use and the Ethical Responsibility of Scientists” and Frida Kuhlau and colleagues at the Centre for Research Ethics and Bioethics at Uppsala University in Sweden published “Taking Due Care: Moral Obligations in Dual-Use Research.”

In his May 2008 article, Ehni wastes no time in pinning down his central interest in the subject: “The question here is not how far a scientist is responsible for the intended effects of his action, but how far he is responsible for the foreseen effects of his research, for their prevention, and also for the effort to predict certain effects. . . .” Scientists typically react to these questions with one of two basic attitudes. A scientist may feel that the freedom to carry out scientific work, however undesirable its consequences, is paramount; in this view, a scientist has no responsibility for the unintended dual-use applications of his or her work. On the other hand, a scientist may feel responsible for the possible future misuse of his or her work but be able to do very little about it.

Each position, according to Ehni, has philosophical underpinnings. Sociologist Niklas Luhmann argued that science is a social system concerned with a search for truth and that moral reflections about other issues from within the system cannot and should not influence that search. Other social systems, such as the judicial system, should engage in those reflections. In contrast, Ehni also discusses the work of the philosopher Hans Jonas, who argued that such an attitude could apply to cosmology but not to most science, as modern science is inevitably linked to technology and thus impacts the real world in ways for which scientists are accountable.

Ehni also introduces the concept of “complicity,” which has been used “to distinguish different degrees of contribution to the success of an action” in other bioethical debates. In direct causality, the actor makes a conscious contribution to an immoral act; in indirect causality, the actor causes or encourages another actor to commit it; and in causality by normative evaluation, the actor encourages others to be more tolerant of the immoral action. Finally, in expressive dissonance, the actor disagrees with the moral condemnation of the immoral action. According to this logic, even a scientist whose benignly intended results are misused bears a moral responsibility. Ehni uses a simple analogy to make his point: “If people are around who are searching for stones to throw through windows and the scientist should have known that, it may be considered negligent behavior with a low degree of indirect causality and, therefore, a certain corresponding moral responsibility.”

If the outcome of such an act is fatal, then that responsibility carries duties. For example, the responsibility to decide whether to stop research or publication of findings that could be misused, Ehni argues, cannot be put on an individual editor or scientist. In his view, “only a mixed authority which is constituted by the scientific community together with government bodies, but with the participation of scientists meeting their responsibilities so far as possible, can solve the problem.” In Ehni’s analysis, the individual scientist must be aware of the potential for dual use and must contribute his or her expertise to dealing with it. Other levels of control, from professional organizations and governments, must also be involved, he concludes, while acknowledging the difficulties that others have identified in organizing such a mixed oversight system.

Kuhlau and her colleagues take a different tack. They look at the scientific and policy communities’ attempts to formulate codes of conduct to deal with the dual-use problem and suggest that “the bioethical reasoning behind the obligations proposed in the codes has not been thoroughly investigated.” The researchers aim to “identify ethical dilemmas that might occur in dual use research and to analyze proposed moral obligations for life scientists.” Their paper suggests five criteria for what constitutes preventable harm and then assesses life scientists’ obligations against these criteria. The authors stress that their concern, like Ehni’s, is with the malign consequences of benignly intended research rather than with the deliberate misuse of the life sciences.

The first criterion they identify is general: “[R]esponsibility is determined by what the social role or position demands. The life science profession, according to this line of reasoning, has a collective responsibility for the potential harm caused by their research.” The second criterion is that any obligation demanded of scientists must be within their professional capacity and ability to enact; the third is that the harmful consequences of research must be reasonably foreseeable. This last criterion disavows willful ignorance: according to Kuhlau and her colleagues, scientists have an obligation to seek the knowledge necessary to consider the potentially harmful consequences of their work.

The fourth criterion is that a research project’s benefits should be weighed against its risks; the fifth is that scientists should consider whether the potential risks arise from knowledge or materials already more readily available from other sources. These last two criteria suggest that while certain research could present risks, other factors could diminish the scientist’s responsibility.

In assessing life scientists’ responsibilities, the authors consider whether scientists could be expected to prevent bioterrorism, but they clearly see this as an unreasonable demand: The “misapplication of peacefully intended research may cause moral distress among scientists; however, it is difficult to argue that researchers should be held morally accountable for harm caused by unforeseen acts of misuse. It is equally difficult to argue that they are responsible for preventing these acts.” While this is similar to Ehni’s general conclusion, Ehni appears to be somewhat more guarded and demanding of scientists (see his analogy of stones thrown through windows).

Kuhlau and her colleagues also argue that it is unreasonable to suggest that a scientist is duty-bound to engage in research that responds to potential bioterrorism, particularly as it is difficult to judge the extent of the bioterrorist threat. More generally, they emphasize how difficult it would be for scientists to consider all the possible negative implications of their work, noting, “many obstacles remain with respect to clarifying what is foreseeable and how to foresee potential misuses.” In regard to a possible duty not to publish or share sensitive information, they suggest, “this duty can perhaps better be formulated as a duty to consider whether to refrain from publishing or sharing sensitive information.”

They also have reservations about scientists overseeing and limiting access to dangerous materials. In their view, “evaluating recipients’ liability and potential harmful intentions to misuse material before sharing it cannot be performed by individual researchers and should not be promoted as a tool for assuming scientific responsibility.” They suggest a modified form of this duty: Do not “share sensitive material with individuals or organizations where reasonable grounds exist to suspect that sharing might lead to harm.” Finally, they regard it as reasonable to expect scientists to report activities of concern but stress that whistle-blowing is far from an easy choice for people to make.

Set against their criteria, many of the duties the researchers explore are reasonable, though not unconditionally so. Indeed, they argue that analyzing these obligations could uncover creative solutions as long as the inherent ethical dilemmas are acknowledged. My own view is that those in the arms control and life science communities should welcome these bioethical interventions, even if the arguments seem unfamiliar, as they may well help address the dual-use problem. I imagine that bioethicists’ contributions to the discussion will soon move beyond analyzing the dilemmas of individual scientists, editors, managers, and project funders. What lies ahead, I suspect, is a fruitful application of the bioethical toolkit to all points of practical intervention in the web of policies that aim to prevent the hostile misuse of the life sciences.

