‘Dual Use’—Multiple Meanings

The expression ‘dual-use technology’ was originally used to refer to technology that could be used for both civilian and military purposes. This was a non-normative use of the expression ‘dual use’. Conceived this way, “dual-use” technology was not necessarily considered to be problematic (except, perhaps, by those who think that military activity is inherently evil). From the perspective of policymakers, this kind of dual-use technology could sometimes be considered a good thing—i.e., a way of killing two birds with one stone by promoting a particular path of technological development. There are obvious economic advantages to developing technologies that will simultaneously meet both a country’s civilian and its military needs. Although dual-use technologies, as traditionally conceived, could thus in some contexts be considered desirable, they could also give cause for concern. A country might be reluctant, that is, to export dual-use technologies (as traditionally conceived) to adversary countries to which it would not ordinarily want to export weapons.

In recent years, the expression ‘dual use’ has most commonly been used in an explicitly normative fashion. The expression ‘dual use’ is now usually meant to refer to research, knowledge, technology, or materials that can be used for both beneficial and harmful purposes. Current debates about “dual-use” science and technology refer to science and technology that has legitimate uses (e.g., in medicine) but that might also be used by malevolent actors for nefarious purposes (e.g., in bioterrorism) [14]. Though the expression ‘dual use’ could be used to apply broadly to things that might be used for both good and bad purposes, when characterized this way almost everything could count as “dual use”. Machetes are used for farming, but they were also used in the Rwandan genocide as tools of murder. Paper has obvious good uses, but a piece of paper could also be used to (intentionally) set fire to a building. Contemporary debates about dual-use science and technology, however, have usually been more narrowly circumscribed. Most of the current debate has focused on research/knowledge/technologies/materials with implications for weapons, and usually weapons of mass destruction in particular—i.e., where the consequences of malevolent use would be especially severe.

There are thus at least three plausible definitions of ‘dual-use science and technology’:

1. that which has both civilian and military applications,

2. that which can be used for both beneficial/good and harmful/bad purposes, and

3. that which can be used for both beneficial/good and harmful/bad purposes—where the harmful/bad purposes involve weapons, and usually weapons of mass destruction in particular.

In what follows I will be operating with the third definition of ‘dual use’ (which falls at the intersection of the first two definitions), because this is the way ‘dual use’ is (implicitly) most commonly used in contemporary debates about the “dual use dilemma”.

The dual use dilemma arises, then, because it is sometimes the case that the very same scientific research that can be used to benefit humankind can also be used to cause grave harm (e.g., via biochemical weapons). The dual use phenomenon poses a dilemma for well-intentioned individual scientists who must decide what projects to work on. Responsible scientists want to generate scientific knowledge that will benefit humankind; but, at the same time, they presumably want to avoid generating scientific knowledge that would ultimately do more harm than good. The dual use phenomenon also poses a dilemma for policymakers, who aim to make policies that promote the same kinds of benefits and avoid the same kinds of harms—and who are responsible to citizens for making policies that do so.

The dual use dilemma is not altogether new. When atomic physicists made key discoveries regarding atomic fission and the chain reaction early in the 20th century, they realized that these discoveries might have beneficial applications in medicine and the generation of energy, but they also realized that the same discoveries might lead to the development of new, monstrously devastating weapons [18]. The production and use of the first atomic bombs revealed that these concerns were justified. Since the dawn of the 21st century, the dual-use potential of the life sciences has become especially salient. In many ways, the situation of the life sciences at present is similar to that of physics when atomic weapons first became possible. This is partly revealed by a recent unclassified CIA document titled “Our Darker Bioweapons Future”, which claims, among other things, that recent advances in biotechnology enable the production of “biological agents... worse than any disease known to man” [4]. It has also been recognized that nanotechnology and other converging technologies associated with the life sciences are increasingly raising dual use dangers. A recent US National Research Council (NRC) report on Globalization, Biosecurity, and the Future of the Life Sciences notes that

other fields not traditionally viewed as biotechnologies—such as materials science, information technology, and nanotechnology—are becoming integrated and synergistic with traditional biotechnologies in extraordinary ways enabling the development of previously unimaginable technological applications. It is undeniable that this new knowledge and these advancing technologies hold enormous potential to improve public health and agriculture, strengthen national economies, and close the development gap between resource-rich and resource-poor countries. However, as with all scientific revolutions, there is a potential dark side to the advancing power and global spread of these and other technologies. For millennia, every major new technology has been used for hostile purposes, and most experts believe it naive to think that the extraordinary growth in the life sciences and its associated technologies might not similarly be exploited for destructive purposes [15: 1–2].

The dual-use potential of nanotechnology is partly revealed by its association with (i.e., as a constituent technology of) “synthetic biology”, which is commonly considered to be a paradigm example of an emerging dual-use science/technology [17]. In what follows, I aim to show that those concerned with ethical issues associated with the dual-use potential of nanotechnology have much to learn from the recent history of dual-use research in the life sciences, and from the associated debates about the role of science codes of conduct in particular.

Lessons from Life Sciences

There are numerous reasons why the dual use dilemma has been a centre of attention in life science policy debates in recent years. One explanation is the heightened fear of bioterrorism following the events of 11 September 2001 and the subsequent anthrax attacks in the US. Recent developments in the biological sciences—and three controversial experiments in particular—have also drawn attention to the problem. Australian scientists, for example, genetically engineered the mousepox virus with the aim of developing an immunocontraceptive that would serve as a means of (rodent) pest control. To their surprise they discovered that they had accidentally produced a superstrain of mousepox that killed both mice that were naturally resistant to, and mice that had been vaccinated against, ordinary mousepox. They published their findings, along with a description of their materials and methods, in 2001 [7]. In a second study, American researchers at SUNY Stony Brook artificially synthesised a “live” polio virus from scratch [3]. Following the map of the polio RNA genome, which is published on the Internet, they stitched together corresponding DNA sequences purchased via mail order. The addition of the synthesised genome to “cell juice” containing key proteins resulted in a virus that paralysed and killed mice. They published their results, including a description of their materials and methods, in 2002.

Both of these studies have implications for smallpox in particular. The former study might enable production of vaccine-resistant smallpox (which is closely related to mousepox), and the latter might enable artificial synthesis of smallpox by those who would not have access to the natural virus. Smallpox, meanwhile, is one of the worst diseases known to humankind—and it usually tops lists of feared biological weapons agents. Because routine smallpox vaccination ended with the eradication of smallpox in 1980, a large proportion of the world population now lacks immunity to the disease. Modelling has shown that a smallpox attack (e.g., by bioterrorists) could lead to devastation comparable to that expected from (a series of) nuclear attack(s). The significance of the mousepox study partly relates to the fact that there is no known treatment for smallpox; vaccination is our only protection against it.

In a more recent study, the 1918 flu virus—which killed 20 to 100 million people—was reconstructed in 2005 via synthetic genomic techniques similar to those used in the polio study [21]. Again, the (US) researchers published their findings along with a description of their materials and methods. This study has important implications for medicine, especially given current concerns about pandemic influenza; but it may also facilitate bioterrorism.

Given the obvious implications for biological weapons making, critics complained that these studies, and others like them, should not have been conducted and/or that their results (and/or descriptions of materials and methods) should not have been published. Publishing such studies, according to critics, alerted would-be bioterrorists to new possible ways of making weapons and (worse) provided them with explicit instructions—“recipes”, “road-maps”, or “blueprints”—for doing so. Many in the scientific community, on the other hand, defended what was done. All of this research was conducted by well-intentioned researchers and, it was argued, there were good reasons for publication. Many claimed, for example, that publication was important to inform the scientific community about new dangers that we need to develop protections against. In response to objections that materials and methods should have been omitted from the published articles, it was argued that publishing descriptions of materials and methods is essential to scientific methodology (e.g., for purposes of replication and verification).

Whether or not these studies should have been published, we can imagine some that probably should not be. If a scientist (perhaps accidentally) discovered an easy way to produce a pathogen as deadly, contagious, and untreatable as smallpox, for example, then details about how to do so should presumably be kept out of the public domain—at least until appropriate protections are developed [19]. Hypothetical examples from nanotechnology and other areas of science and technology could also be imagined. If censorship of the life sciences or nanotechnology might sometimes be appropriate, then the important question arises of who should have ultimate decision-making authority. Should we rely on voluntary self-governance of scientists (perhaps guided by codes of conduct) in matters of censorship—or might censorship by government sometimes be legitimate? Government censorship of nuclear science with weapons implications has been the norm for decades [14].

Recent history of dual-use life science research is relevant to nanotechnology partly because synthetic biology lies at the intersection of biology and nanotechnology and partly because a majority of contemporary debate and policy making regarding dual-use research has thus far focused on the life sciences in particular, and especially the three studies described above.

In the aftermath of the mousepox and polio studies, for example, a number of important science journals announced that they would screen submitted articles for security implications and that they would modify, or refrain from publishing, studies in cases where harms would otherwise outweigh benefits [10]. In 2004 the NRC published the “Fink Report”—a landmark study of the dual use dilemma in the context of biotechnology [14]. Among other things, it called for voluntary self-governance of the life science community in matters of censorship, increased education of scientists about the dual use phenomenon, and the development of relevant codes of conduct for scientists.

Codes of Conduct—Roles

In the context of dual-use science and technology, be it biology or nanotechnology, there are numerous reasons why science codes of conduct might be important; and there are various roles they might be expected to play. Even the best codes of conduct, however, will have important limitations that must be kept in mind.

Raising Awareness of the Dual Use Phenomenon and Relevant Weapons Conventions

One of the primary roles envisioned for science codes of conduct in the context of dual-use science is that they would promote much-needed awareness-raising among scientists [20]. It has been shown, for example, that life scientists generally lack awareness of the ways in which their well-intentioned research might be abused by malevolent actors and, indeed, that they lack awareness of the dual use phenomenon in general. There is likewise remarkably little awareness, among life scientists, of the 1972 Biological and Toxin Weapons Convention (BTWC) and/or its implications [5]. Codes of conduct that draw attention both to (the importance of adherence to) BTWC prohibitions and to ways in which research permitted by the BTWC could be used (in prohibited ways) by malevolent actors are thus essential. Similar things can presumably be said about nanotechnology. Given what has been shown about the lack of awareness regarding these issues in the life sciences, it is not unlikely that those working in nanotechnology likewise lack awareness of the dual use phenomenon and of the details of what is prohibited by relevant biological and chemical weapons conventions.

Raising Awareness Regarding Social Responsibility

A related role for codes of conduct is to raise awareness regarding scientists’ social responsibilities more generally. This is especially important when we bear in mind the history of scientific culture [8]. At various times in history, to a greater or lesser degree, science has been characterized as neutral, apolitical, and/or value-free. Common ideas among scientists have been that science involves an impartial pursuit of knowledge and/or that (scientific) knowledge is inherently good [11]. Another idea, heard especially often in debates about social responsibility in the context of nuclear weapons, is that knowledge, technology, and other fruits of science are neither good nor bad—and that it is, rather, the uses to which they are put (by others) that are good or bad. Similar things (relevant to nanotechnology) have recently been said about chemistry: “There are no bad molecules, only evil human beings” [2]. If this were so, then some might be tempted to think that well-intentioned scientists should not be considered responsible for any bad consequences that result from knowledge gained or technologies made possible by scientific research. If scientists do not produce anything that is inherently bad, one might think that scientists are not responsible for any bad outcomes that result from their morally neutral scientific pursuits and products. Those who employed scientific knowledge in a malign manner—and/or policy makers who failed to prevent them from doing so—would be responsible for bad outcomes, and scientists would remain innocent.

The idea that scientists should be (fully) divorced from responsibility for the consequences of their well-intentioned research, however, is not all that tenable. If one foresees that one’s work is likely to be used in ways that cause more harm than good and proceeds regardless, then one will be implicated in the bad consequences that ensue. If I knowingly enable a malevolent actor to cause harm, then I am partly responsible for the harm that results. We should go further and say that scientists have a responsibility to consider the uses to which their work will be applied—and that they bear significant responsibility for bad outcomes that are foreseeable, whether or not they are actually foreseen by the scientists in question. The point here is that scientists (within reason) have a responsibility to be aware of, and/or to reflect on, the ways in which their work will be used. The failure to reflect—or to foresee the foreseeable—should be considered negligence. In the context of weapons of mass destruction, such negligence could cause grave harm.

Related ideas are eloquently stated in one of the earlier formal guidelines concerned with dual-use science in particular—the American Medical Association’s 2005 “Guidelines to Prevent the Malevolent Use of Biomedical Research”:

Biomedical research may generate knowledge with potential for both beneficial and harmful application. Before participating in research, physician-researchers should assess foreseeable ramifications of their research in an effort to balance the promise of benefit from biomedical innovation against potential harms from corrupt application of the findings.

In exceptional cases, assessment of the balance of future harms and benefits of research may preclude participation in the research; for instance, when the goals of research are antithetical to the foundations of the medical profession, as with the development of biological or chemical weapons [6].

I would argue that a similar statement can and should be adopted in codes of conduct for nanotechnology.

The importance of drawing attention to the weapons conventions, and their requirements, is often emphasised by those advocating scientific codes of conduct in the context of dual-use research. This is crucial, but it is also important to draw attention to responsibilities regarding well-intentioned dual-use research permitted by the weapons conventions. In addition to being familiar with, and acting in accordance with, international law, scientists should assess—and take (at least some) responsibility for—the harms and benefits of research that is legal. Neither the BTWC nor the 1993 Chemical Weapons Convention (CWC) was designed to address the dual use dilemma as defined earlier in this paper (i.e., according to the third, stipulated definition). As revealed by “general provisions clauses” in both the BTWC and CWC, the conventions’ prohibitions largely turn on the intentions of researchers and/or research programs. Well-intentioned dual-use research falls outside the domain of these two weapons conventions’ prohibitions. No one, so far as I am aware, has argued that the mousepox, polio, or flu studies contravened the biological or chemical weapons conventions. The point of critics was that these were dangerous experiments/publications—not that they were (already) prohibited ones. Even if these studies were (by hypothesis) too dangerous to be conducted or published, they were not illegal according to the relevant weapons conventions.

Winning Trust

An additional role for codes of conduct, from the perspective of the scientific community, lies in the fact that their adoption may help win public trust in the scientific enterprise. Consider the history of nuclear weapons, the current controversy surrounding the biotechnology revolution—especially things like cloning and genetically modified organisms—and growing concerns about the implications of nanotechnology. Despite the undeniably enormous benefits that scientific progress has made possible, the public is often (and sometimes rightly) fearful of scientific progress. Science, according to many, is driven by a technological imperative, where anything goes. People are afraid that, rather than being driven by what is best for society and humankind and/or the environment, science is driven by what is technically feasible—or, perhaps worse, by what will maximize industry profits. The public wants science to make the world a better place; and it funds scientific research with the expectation that it will do so. Commitment, via codes of conduct, to a value-oriented approach to science—i.e., science explicitly aimed at the promotion of human flourishing and the avoidance of harm—may be instrumental in alleviating public anxiety about science. Insofar as the public funds science, the public holds the scientific enterprise accountable, the expectation being that science will promote the good of society. This is a social contract. A code of conduct with suitable content would partly amount to a pledge by scientists to hold up their end of the bargain. If codes are convincing—and followed—then trust will be gained.

Avoiding Over-Regulation

A related role for science codes of conduct, and one that may be especially important from the perspective of the scientific community, is that endorsement of (appropriate) codes would partly amount to a statement that “we scientists will regulate our conduct, and these are the values and rules that we will live and work by.” If science is potentially dangerous and if the scientific community does not acknowledge this and proactively (e.g., via the adoption of codes of conduct) take measures to govern its own conduct in a manner that the public is comfortable with, then more governmental regulation can be expected. The autonomy of science—i.e., scientific freedom—is, within limits anyway, important. The Lysenko affair [9] in the former Soviet Union (which involved a thoroughgoing politicisation of biology), though an extreme example, illustrates why this is so.

My point here is not that implementation of self-regulatory measures (such as the adoption of codes) by the science community would or should make governmental regulation of dual-use science unnecessary. The point is that proactive self-governance via codes of conduct, among other things, may help avoid a situation of over-regulation. Some members of the synthetic biology community have explicitly advocated proactive self-regulation on the grounds that this would help avoid imposition of restrictions from above [12].

Though self-regulation via codes of conduct may reduce government interference with science, the government may, nevertheless, still have an important role to play in the regulation of dual-use science in particular (even if apt codes are adopted) for reasons I will explain later. For now I will merely suggest that completely autonomous self-governed science would likely be incompatible with democracy. Science affects society in innumerable ways, and so society should surely have at least some control over science [11].

“Process Benefits”

A final potentially beneficial role of science codes of conduct, according to Brian Rappert, relates to “process benefits” that accrue from deliberation about them or other activities related to their development and/or adoption. The perceived need for (and resultant activities surrounding) codes of conduct to address the dual use dilemma in the life sciences, according to Rappert, has led to a fruitful “furthering [of] communication, consultation, coordination, and collaboration between organisations” and/or key stakeholders that previously had relatively little to do with each other [16]. Another benefit that may arise from the development of codes of conduct is that sustained rigorous debate and deliberation about their content may lead to greater clarity regarding the ethical principles that should govern scientific activity.

Codes of Conduct—Limits

While codes of conduct may, as described above, play valuable roles in dealing with problems posed by dual-use science and technology, they also have limitations. Codes of conduct may be important, but they are not magic bullets. One common critique of codes of conduct points to challenges regarding their level of specificity. While people such as Joseph Rotblat (in the 1990s) have championed the idea of a “Hippocratic Oath for Scientists”—i.e., a universal code of conduct for scientists—others have argued that a universal science code of conduct would inevitably lack substance and/or be too general to be action guiding. One point is that any science code of conduct that would be generally acceptable to all sciences and scientists would presumably be one which merely listed uncontroversial commonsense precepts that conscientious people would seek to follow whether or not they were enshrined in codes of conduct. A second point is that any science code of conduct general enough to apply to all sciences may lack sufficient detail to clearly prescribe action in the specific contexts of particular sciences. A response to the second concern is that different sciences need their own codes of conduct; but a proliferation of potentially conflicting codes may then arise, raising questions about which has ultimate authority. In any case, according to critics, the more specific detail that is put into codes (for any particular science) to make them more clearly prescriptive about who should do what under what circumstances, the more controversial and less widely accepted they would be.

A second major critique of science codes of conduct holds that, unless they are enforced, codes of conduct will not be effective. The point here is simply that those who would do the kinds of things ruled out by codes of conduct are precisely the kinds of people that would not follow (voluntary) codes of conduct to begin with. To be truly effective, according to critics, codes of conduct must be enforced by sanctions.

Enforcement

In response to this last objection, it should be noted that there are various mechanisms by which science codes of conduct could be enforced in practice. While codes of conduct are often advocated as a form of “bottom-up” voluntary self-governance by scientists and/or the scientific community, this does not mean that no enforcement of science codes of conduct is possible. One idea is that the scientific community could itself enforce codes on its members. This would be possible, for example, via the professionalisation of the sciences [22]. If the sciences were professionalised (as medicine is), then adherence to codes of conduct could be required as a condition of official membership within, or licensing by, the professional society.

Enforcement of codes of conduct is also possible via funding bodies. One commonly proposed mechanism for addressing problems posed by dual-use research would involve including review of dual use dangers associated with proposed research as part of the research oversight process. For such a mechanism to work most effectively, we would want scientists to (reflect upon and) report any dual use issues raised by proposed research to the ethics and/or biosafety committee reviewing the research proposal. Codes of conduct could mandate that scientists do this, and funding bodies could make adherence to such a mandate a condition of funding eligibility. This is how clinical research ethics guidelines are currently enforced in countries like the US: rather than being required by law, adherence to clinical research ethics guidelines is a condition of (e.g., NIH) funding. Given the importance of funding to research, this kind of enforcement has proven to be highly effective.

Last but not least, science codes of conduct could also potentially be enforced by law. Governments, that is, could impose legislation requiring scientists to adhere to codes that, among other things, oblige them to (reflect upon and) report dual use issues arising with proposed research (and/or potential publications) to relevant institutional review committees. This could be part of a broader set of regulations that also mandate the establishment of appropriate committees and specify procedures to be followed by committees reviewing studies (and/or potential publications) posing dual use dangers [13]. Such regulations might require that review committees at the level of research institutions eventually refer studies (and/or potential publications) to a governmental body or a nongovernmental regulatory authority for “higher-level review” in the most difficult cases. Just as regulations might require individual scientists to refer studies to a local institutional review board, regulations could require local institutional review boards to refer them to higher-level review boards (e.g., with greater expertise and/or decision-making authority) under specified circumstances.

In response to the idea that at least some parts of science codes of conduct should be legally enforced, one might be tempted to think that if particular actions are required by law then their inclusion in codes of conduct would be redundant. This objection, like many of the other objections to codes of conduct considered above, overlooks the awareness-raising role of codes of conduct. Though the prohibitions of the weapons conventions are already written into law (in signatory countries), a benefit of including clauses regarding the weapons conventions in codes of conduct is that doing so would increase scientists’ awareness of what the law actually is.

Regulation

While codes of conduct are often advocated by those who favour voluntary self-governance of the scientific community, it is doubtful that (with or without codes of conduct) we should rely entirely on scientists to govern themselves in the context of dual-use science and technology. A consensus has emerged in debates over the dual use dilemma in the life sciences that we need to strike a balance between the promotion of scientific progress, on the one hand, and the promotion of security, on the other [14]. Both of these legitimate values are at stake, and neither should have absolute priority over the other. Heavy regulation of science may come at too high a cost in terms of scientific progress; but we need adequate governance/oversight for the protection of security.

Scientists are right to be worried about too much governmental interference with science. If decisions about what research is done and what papers are published are left in the hands of bureaucrats, then (based on what they do for a living) the decision makers will likely be biased in favour of security over science values, and they might often lack expertise for judging the scientific importance of research they might want to vet or papers they might want to censor [19].

Leaving decision making in the hands of scientists, on the other hand, poses similar risks. Scientists (based on what they do for a living) are likely to be biased in favour of science values over security values, and (lacking background in security studies) they will usually lack the expertise for assessing the security dangers of their research. The danger of relying on voluntary self-governance of scientists is perhaps best illustrated by further reflection on the mousepox study. This case reveals that scientists are sometimes systematically denied access to information required for assessment of the risks of research and/or publication [19]. The primary risk associated with the mousepox study is that the genetic engineering technique the researchers used might enable production of vaccine-resistant smallpox. For bioterrorists (or state-sponsored biological weapons programs) to employ this technique on smallpox, however, they would need to have access to the smallpox virus to begin with. Because all of the world’s remaining smallpox samples are officially supposed to be safe and secure at only two facilities worldwide (i.e., the Centers for Disease Control and Prevention in the US, and a similar facility in Russia), the primary danger is that there has been proliferation from the stockpiles of the former Soviet Union’s biological weapons program. Any detailed information about likely/possible smallpox proliferation, however, is classified [1], and scientists (lacking security clearance) would not have access to it. Assessing the dangers of publishing the mousepox study, therefore, is beyond the expertise of ordinary scientists.

If we want decision making that strikes a balance between the promotion of science and the protection of security—by decision makers who have adequate expertise for assessing the extent to which both kinds of values are threatened—it may thus be imprudent to allow scientists to govern themselves (via voluntary codes of conduct). The ultimate decision-making authority (in difficult cases) should embody sufficient expertise regarding both science and security, and should not be biased in favour of either science or security values. A mixed panel comprising scientists, security experts, ethicists, civilians, and members of government could arguably provide the right combination of expertise and values [19]. Retrospective analysis of the mousepox study reveals that we would have wanted the scientists involved to have their paper reviewed/approved by such a panel prior to publication. An enforceable code of conduct that required them to refer the paper to such a panel would have helped make that happen. A code of conduct would have played a role, but it would have been part of a broader regulatory oversight system.

Conclusion

Those concerned with ethical issues associated with the dual use potential of nanotechnology have much to learn from recent experience regarding dual-use life science research—which has been the focus of attention regarding dual-use science and technology in recent years. Science codes of conduct may play important roles in raising awareness about the dual use phenomenon, the requirements of weapons conventions, and the social responsibilities of scientists. Implementation of codes of conduct might also help win trust in the scientific enterprise and prevent over-regulation of science by government. Nevertheless, there are challenges regarding the level of generality/specificity that science codes of conduct should have; and unenforced codes of conduct may not effectively govern the behaviour of scientists. In response to criticisms that codes of conduct are ineffective unless they are enforced, it should be noted that there are various mechanisms by which codes of conduct can be enforced. Rather than being essentially associated with voluntary self-governance of scientists, (at least some parts of) codes of conduct should arguably be part of a broader regulatory oversight system.