“Forbidding science”: two words likely to strike fear in many scientists, dredging up past abuses such as Galileo’s persecution by the Catholic Church and Lysenko’s ideological suppression of most genetic research in Stalin’s Russia. Yet, as the enormous power of science continues to expand into areas that were once unthinkable, including the creation of new species of organisms, the development of ever-more powerful weapons technologies, and the possible redesign of the human body and brain, a growing chorus of voices is saying there are some places science should not go. Some of these concerns relate to the potential for intentional, malevolent misuse of a technology to cause harm; others relate to potential accidental releases or unintentional misapplications of a technology; and still others relate to potential adverse consequences of a technology’s intended beneficial use, including the enormous, and often contested, societal transformations it might bring. In addition, some are concerned that however beneficial a particular area of research may be, the nature of the research itself is problematic because of the research design and/or the kind of information the investigation is likely to produce. These concerns are spawning proposed research restrictions that limit how, where, and by whom some scientific research is conducted, that restrict the publication and dissemination of research findings, or that ban some research altogether.

The conduct of science has, of course, long been regulated. There are regulatory requirements for protecting human research subjects, for promoting the welfare of research animals, for restricting the use and possible accidental release of pathogens, and for properly disposing of potentially hazardous waste materials generated by some scientific research. Those restrictions on the conduct of science, while important and seemingly ever-expanding, are not the focus of this special issue. Rather, the focus is on the possible restriction of some types of scientific research based on their ends rather than their means. In other words, are there some types of research that should be prevented, restricted, or otherwise limited because of the potential implications of, or misuse that may result from, their findings, rather than because of the conduct of the research itself?

This primary question regarding whether some research should be forbidden raises a series of sub-questions. Which (if any) research should be restricted? What criteria should be used to make this determination? How should the research be controlled—prohibited outright or subjected to new restrictions or oversight? Who should decide which research is restricted? Legislatures? Existing regulatory agencies? Some new regulatory authority, perhaps at the international level? What should be the role of scientists and self-regulation in identifying and controlling such research? How should the public and public opinion be incorporated into this decision-making? Is there a constitutional right to conduct scientific research in controversial areas?

These questions were addressed at a 2-day conference held at Arizona State University (ASU) in January 2006 entitled “Forbidding Science? Balancing Freedom, Security, Innovation and Precaution.” The conference was organized by an interdisciplinary cluster of ASU academic units (the Center for the Study of Law, Science and Technology, the Biodesign Institute, the Consortium for Science, Policy and Outcomes, the Biology in Society program, and the Lincoln Center for Applied Ethics) in partnership with the American Association for the Advancement of Science (AAAS). Focusing on case studies that included nanotechnology, pathogen research, and human enhancement, the more than 300 conference participants energetically discussed and debated whether and how some science should be forbidden.

The one overriding conclusion evident from this discourse was that people, whether experts or members of the general public, have sharply divergent and deeply held opinions on the central question of whether some scientific research should be forbidden. Reflecting this diversity of opinion, this special issue of Science and Engineering Ethics presents six papers from the conference, each paired with a commentary, which approach the issue of “forbidding science” from different perspectives, disciplines, and objectives.

Some general themes and observations can be distilled from the diverse and provocative papers that follow. The term “forbidding science” is itself subject to different interpretations. The conference was deliberately entitled “Forbidding Science?,” phrased as a question to avoid prejudging whether some science should be restricted. In this context, “forbidding” operates as a verb: the act of imposing coercive, legalistic restrictions on which science may and may not be undertaken. As Leon Kass points out in his contribution to this volume, however, “forbidding” can also be read as an adjective, describing science that is repellent or abhorred (Kass 2009). This reading implicitly expands the focus beyond mandatory, legal restrictions on which science can be done to include social and other constraints (both within the scientific community and the broader public) that work to limit some lines of scientific inquiry without relying on legal prohibition. Thus, there are no laws against researching whether genetic differences between racial groups affect characteristics such as intelligence, yet many scientists would consider such research “forbidding” and decline to pursue it (Ceci and Williams 2009; Rose 2009). Indeed, there is evidence that most explicit or implicit restrictions on which science is undertaken are mediated through social, political, and moral forces rather than legal proscriptions (Kempner et al. 2005).

The choice between social and legal controls on scientific investigation raises many questions. Are non-binding social controls acceptable? If so, are they sufficient? To what extent can non-compliance with non-binding controls be tolerated? As technology becomes more powerful, and the potential consequences of any accidental or deliberate misuse of many scientific discoveries become increasingly dire, can society afford to rely solely on social forces and peer pressure to ensure that scientists do not undertake unacceptable research? As the late Howard Markey, former Chief Judge of the U.S. Court of Appeals for the Federal Circuit, stated: “Law is the only tool that society has to tame and channel science and technology” (Markey 1984). Moreover, as bioethicist Daniel Callahan has argued, in this era of increasing “legalism,” society may have reached a point where it is perceived as implicitly endorsing any activity it does not legally prohibit (Callahan 1996).

Conversely, it could be argued that the influence and role of law are diminishing in today’s changing world. The rigidity, ossification, and burden associated with legal regulation and regulatory rulemaking are encouraging the development of more informal, “soft law” approaches to the oversight of many new areas of science and technology (Abbott and Snidal 2009; Marchant et al. 2008). Also, the increasing internationalization of science and technology may be rendering obsolete the existing legal paradigm, which relies on regulations that apply only within particular political jurisdictions, a theme developed by Victoria Sutton in her commentary herein (Sutton 2009). As Ronald Atlas argues in his article, less legalistic measures, such as codes of conduct, policy statements of scientific organizations, and even more informal social norms that help to create a “culture of responsibility,” may travel across state and national boundaries much more effectively than legal instruments (Atlas 2009). Moreover, as James Weinstein notes in his contribution, there may be constitutional limits on legal restrictions of science in some countries (e.g., the United States), although these constitutional limits remain largely untested and uncertain (Weinstein 2009).

Whether implemented through legal restrictions or more informal measures, the threshold question is whether some types of research should not be undertaken because of the potential applications or implications of their results. (As noted above, some types of research should not be done because the conduct of the research, as opposed to its results, would be unethical.) On this question, opinions vary widely (as do the contributions to this volume), but there does seem to be a consensus that if such restrictions are imposed, they should be imposed cautiously, infrequently, and only in the worst cases. Thus, Leon Kass, who probably takes the strongest explicit position in this volume that there are questions science should not explore, at the same time acknowledges that society should not seek to prohibit all, or perhaps even any, research that is found to be “forbidding” (Kass 2009).

The question of whether some research should be prohibited is often framed in different ways. One approach is to address the question within the framework of a “right to research,” explored in the contribution by Mark Brown and David Guston (Brown and Guston 2009). The right to research, like many other rights, is often portrayed in counter-majoritarian terms, protecting scientists from a political process that seeks to infringe it. Brown and Guston re-orient the right to research to align it with, rather than set it against, the political process, arguing that the right is not absolute but contingent on the social value and societal impacts (both good and bad) of a given line of research. Robert Post reaches essentially the same conclusion through legal reasoning, suggesting that any constitutional protection for scientific research may not be uniform but would apply differentially based on the type and subject matter of the research being performed (Post 2009).

At the same time, if there is no absolute right to research, but rather only a preference or contingent right, how is it to be decided which research is protected? More pointedly, who makes that decision? Certainly scientists need to be at the table when such decisions are made, since they are the participants who will be most directly affected by any restrictions on their craft, and they bring unique perspectives on the costs and benefits of restricting a particular line of research or subfield of science. For example, scientists have unique insight into the serendipity of scientific research, in which both unexpected benefits and unanticipated risks can arise, greatly complicating any effort to predict prospectively the social value and implications of a line of scientific research. Scientists also bring an important and distinctive normative perspective on the value of openness and free inquiry in scientific research. According to Mark Frankel, scientists generally do not frame the issue of potential restrictions on research in terms of a “right to research” (Frankel 2009), but rather see the issue as one of scientific freedom that inherently incorporates some practical limits. Scientists often frame the problem as one of individual responsibility that should be addressed from within the scientific community, rather than as a question of restrictions imposed from outside the profession. Nevertheless, scientists and the research community, as a part of and not apart from society, consciously and unconsciously understand and frame their work within the context, values, and mores of the larger society.

Politically motivated attempts by some governmental actors, industry representatives, and public interest groups to restrict and distort science for partisan or self-interested purposes have contributed to a siege mentality in some quarters of the scientific establishment (Frankel 2009). It is perhaps not surprising, then, that some scientists appear defensive and insular when broader and more representative societal forces seek to influence and shape the direction of scientific inquiry. Scientists, as individuals and as a group, are often ineffective participants in deliberations on larger policy issues and in representing their interests and perspectives. Few scientists have received formal training in ethics, public policy, or the law. Moreover, scientists sometimes take positions that seem naive or overly simplistic on science policy issues with which they assume they are (and indeed should be) conversant, yet for which they have had little training or experience. Hand in hand with adequate training in communicating their work to those outside the scientific community (Garrett and Bird 2000), researchers need more explicit education in the ethical, legal, and social policy implications of their science.

Bioethics and the legal profession also have important roles to play in deliberations on forbidding science. Professionals in these fields bring specialized training with direct relevance to the oversight of science, and they are often the first to recognize and call attention to the profound ethical or risk implications raised by certain lines of scientific research. But like scientists, these professionals too are limited in their capacity to participate effectively in deliberations about the conduct and restriction of scientific research. As Jason Robert eloquently suggests, too many experts in these fields are willfully ignorant of the methods, challenges, goals, and norms of the scientific enterprise (Robert 2009). Emboldened by the occasional naiveté or ineffectiveness of scientists on policy questions, bioethicists and legal experts often assume the self-appointed role of “moral police” or “moral firefighter” (Robert 2009).

Finally, and most problematically in terms of implementation, the public also needs to be involved in these discussions in an informed and meaningful way, as Andrew Askland sets forth in his contribution (Askland 2009). Yet, increasingly occupied with the daily obligations of work, family, and chores, as well as the myriad entertainment, leisure, and hobby activities available, the average person seems to have less and less time to learn about and engage with the growing number of complex issues and controversies, scientific and otherwise, facing society. Rational ignorance is a sensible choice on many issues for most, if not all, citizens; it is simply not feasible to stay current on the many pressing issues competing for attention at any given time. Given this state of affairs, what is the best approach to ensure that the public is included in meaningful deliberation on whether and how some types of science might be restricted or steered in particular directions? Here, there is some light at the end of the tunnel, as a wide variety of “upstream” public engagement initiatives are being pursued, with modest but important progress. Indeed, a small indication of the effect and potential of such efforts is that over 150 non-academic members of the public took time out of their busy schedules to join approximately the same number of representatives from the academic world at a 2-day conference on “Forbidding Science?”

Yet interest is not enough. A number of challenges hinder informed and effective public participation in discussions of public policies regarding science, as well as of the role of science in public policy development (Bird 2003). While limited scientific literacy is the most widely noted challenge, a real appreciation of the evolving nature and uncertainty of science, and an awareness of the many assumptions, values, and value systems inherent in science and technology, are also key. Understanding these aspects of scientific research is essential to meaningful public involvement.

Beyond the procedural question of who participates in making the decision, perhaps the most challenging substantive question related to whether some science should be forbidden is the “dual use” problem: situations in which the same research could lead to both beneficial and detrimental applications. This problem arises most frequently in the context of research on potential pathogens, which is often intended to help prevent or treat disease but could also be appropriated by terrorists to develop more effective bioweapons. As Ronald Atlas describes in his contribution to this special issue, the dual use problem now applies to almost all life sciences research (Atlas 2009). It also applies to much research in nanotechnology, neuroscience, surveillance technologies, and information technology. And although it is described as the “dual” use problem, the reality is often more complex, as Askland argues, because any technology typically has a multitude of potential uses, not just two, and many of these applications cannot be anticipated (Askland 2009).

In a context not directly related to national security, a “dual use” problem of a different sort arises with respect to many biomedical technologies. Here, the alternative applications are not military versus civilian, but rather medical uses to treat people with disease versus enhancement applications for healthy people. This glass can be seen as half full or half empty, depending on one’s perspective. Nick Bostrom and Anders Sandberg, in advocating a right to cognitive enhancement, focus on the potential individual and societal benefits of enhancement technologies and perceive enhancement as a logical, and largely non-delineable, extension of treatment technologies (Bostrom and Sandberg 2009). Leon Kass, in contrast, while also seeing that treatment and enhancement applications are intertwined, seeks to draw a line that prevents the enhancement applications without sacrificing the treatment benefits of the same technology (Kass 2009).

If a decision is made to forbid some science at either the research or the application stage, the next question is how, or even whether, such an objective can be achieved. Gary Marchant and Lynda Pope lay out the obstacles to legal approaches to forbidding science, including the limited technical competence of many legal decision-makers, the potential for political mischief and manipulation, the difficulty of enforcing legal restrictions in a globalized economy, and legislative inertia (Marchant and Pope 2009). Yet, Patrick Taylor reminds us that “softer” self-regulatory alternatives, such as codes of conduct, suffer from their own weaknesses and limitations (Taylor 2009). Even proponents of declaring some areas of scientific research out of bounds, such as Leon Kass, concede that achieving such limitations will be difficult if not infeasible (Kass 2009). Given that the alternative, giving in to the technological imperative and allowing science to pursue any line of research that can be funded, may no longer be politically or socially viable, it is clear that much more work needs to be done. This volume represents only a beginning in addressing the problem of forbidding science.