Introduction

Contemporary discourse on limiting or banning scientific speech and research has been concerned primarily with the topic of dangerous knowledge (e.g., Intemann and de Melo-Martin 2008; Weinstein 2009; Kitcher 2011). Such knowledge would, for example, advance our human cloning capacities, enable private citizens to create biological weapons, violate copyright protections, or give trade secrets or confidential government information to terrorists. Scientific speech, construed as the communication of ideas whose expressive content is scientific information, could arguably fall under First Amendment protection (e.g., Volokh 2005; Brown and Guston 2009; Weinstein 2009). Specifically, ideas that might inform public policy, such as climate science research, food safety information, or medical technological advancements, might best be thought of as belonging in the marketplace of ideas. Yet, when we consider that much of ‘science’—and more to the point, scientific research—is action, we find ourselves in the space between the ‘thinking’ and the ‘doing’ of science. This difference might be better thought of as two ends of a continuum running between conversation and action. Still, there does seem to be a difference between discussing theoretical or abstract ideas and proposing research that is funded, conducted, and then discussed by scientists and the public.

When considering science’s place within liberal democratic societies, different kinds of dangers come to mind. The dangers I find most troubling are those that pose an immediate threat to the well-being of the most vulnerable or underprivileged groups of citizens, particularly when those dangers result from funded scientific research. In Science in a democratic society, Philip Kitcher claims that unidentified oppression is one of the primary challenges facing societies today, a problem that public knowledge produced by a well-ordered science could correct (2011). Moreover, in chapter eight of Science, truth, and democracy, Kitcher makes the case that scientists ought to restrain scientific inquiry when the outcomes of such research could harm already underprivileged or vulnerable populations.Footnote 1 Nonetheless, Kitcher also argues that even though such avenues of inquiry ought not to be pursued, “demanding a ban on inquiry … would be to take a further, illegitimate, step” (2001). He argues against banning unscrupulous research on the grounds that such explicit restrictions would exacerbate the underlying social problems, stating specifically that “the consequences of any type of official intervention are thus likely to be counterproductive” (2001). While Kitcher’s arguments are based on the ideal of a well-ordered science, such ideals run into challenges when confronted with the non-idealized circumstances under which global scientific inquiry takes place.

In what follows, I will critique Kitcher’s claim that officially restricting certain avenues of scientific inquiry could cause more harm than good. First, I will identify the dilemma Kitcher introduces as he explores the question of banning or limiting scientific inquiry. I will show that the question of officially or legally limiting scientific inquiry introduces a dilemma similar to the one Wendy Brown discusses in “Suffering rights as paradoxes” (2000). The dilemma, I argue, is not a true paradox but a question of a trade-off between fundamental values; curtailing certain research involves a moral compromise between violating the rights of scientists and the broader societal aim of eliminating unjust vulnerabilities. I then show that Kitcher’s argument in favor of dissuading inquiry through conventional standards set forth by the scientific community, or through sound philosophical argumentation, is problematic and falls prey to the same critique he offers in opposition to official bans. As such, I argue that there is just as much evidence, given the force of human rights concerns, to support limiting certain kinds of scientific inquiry, and that the conversation should bear on how we might better balance the scientist’s freedom to pursue research, particularly when that research happens in a global context, against the disparate treatment particular populations might experience in light of such research. I end by exploring whether such disparate impact justifies limiting research under international human rights standards generally and, within the context of the United States, under Title VII of the Civil Rights Act of 1964.

Kitcher’s Argument

Kitcher’s overall project in Science, truth, and democracy and in Science in a democratic society is to develop an account of scientific inquiry that carefully navigates between the picture of an idealized, well-ordered science and the picture of scientific inquiry motivated by the personal, self-interested, and under-informed concerns of particular individuals or groups within democratic society (2001, 2011). Kitcher wants to demonstrate that the truths about the world that scientists investigate also belong to the practical concerns of the society in which science is situated. One consequence of Kitcher’s connection between scientific investigations and social aims is that well-ordered science should respect and promote democratic values.Footnote 2

Toward this end, Kitcher tackles many of the challenges contemporary democracies face, such as the ethical limits of scientific inquiry. He ends the chapter on “Well-ordered science” by briefly noting that “[e]thical limits are imposed, even when the cost of the restrictions is that questions we hope to address become more difficult or even unanswerable” (2011, p. 131). Through his analysis of classic examples of unethical research, such as the Tuskegee experiments, scientific piracy, and the questionable use of sentient animals in research, Kitcher proposes that scientists should engage in community-based deliberation. This deliberative process is to be modeled on ideal conversations involving representative scientists and properly trained community members, and its aims are to identify areas in need of critical investigation, to decide whether and how to prioritize research, and, on rare occasions, to determine whether a particular scientist has an ethical obligation to pursue a particular line of research. Yet, in Science in a democratic society, Kitcher does not directly address whether particular lines of scientific research should be avoided. For such an analysis, we must return to Science, truth, and democracy.

In the chapter “Constraints on free inquiry,” Kitcher opens with examples of scientific inquiry that might lead to, or perpetuate, social inequalities, such as research into innate differences based on gender or race. Kitcher’s examination of the force and limits of free inquiry moves through the arguments most commonly used for and against unconstrained research. These arguments hinge on a more foundational debate between the value of freedom and the value of equality, and on how members of a liberal democracy can fairly distribute these two (sometimes) conflicting values.

Pitting Mill against those who would use the arguments from On liberty to support unconstrained scientific inquiry, Kitcher points out that Mill’s argument in favor of free expression extends only to the point at which free inquiry impedes the rights of others. Moreover, Kitcher treats political and epistemic asymmetries not only as an historical reality, but also as an ever-present risk when engaging in the sort of research that might bear on struggles to achieve social justice. Kitcher points to the fact that human history is littered with imbalances of information (epistemic asymmetry) and imbalances of political power, benefits, and legal protections (political asymmetry). I do not take this to be a contentious claim; indeed, alleviating such asymmetries is a goal of liberal democracies. Building from the recognition that “the impact of research is affected by both a political and epistemic asymmetry” (Kitcher 2001, p. 102), he argues for constraint on scientific inquiry, but not for external limitations or bans.

In On liberty, Mill argues that protecting the freedom of thought, expression, and action is paramount to the progress of a liberal democratic society. Extending this sentiment, under the domain of human liberty falls the freedom of scientists to explore, express, and publish their findings. As Mill states, “[t]he liberty of expressing and publishing opinions may seem to fall under a different principle, since it belongs to that part of the conduct of an individual which concerns other people; but, being almost of as much importance as the liberty of thought itself, and resting in great part on the same reasons, is practically inseparable from it” (1977/1859). These freedoms are bound only by the Harm Principle: “[t]he principle requires liberty of tastes and pursuits; of framing the plan of our life to suit our own character; of doing as we like, subject to such consequences as may follow: without impediment from our fellow-creatures, so long as what we do does not harm them …” (Mill 1977/1859, Chapter 1).Footnote 3

Scientists, as agents involved in inquiry into the structure of nature, must recognize the ways in which social values may hinder their ability to discern significant truths and may shape how such facts show up in the first place. Furthermore, Kitcher claims that when scientists are deciding on questions for inquiry, they have a duty to respect the goals of a democratic society by practicing science in a manner consistent with these broader societal goals (i.e., liberty, freedom, and equality). As such, scientists should be responsible for the research they pursue and have a duty to curtail harmful research. Kitcher claims that scientists qua moral agents have a duty to care for those who are already underprivileged, and this duty supersedes any duty the scientists qua scientists might have to discover facts about nature. He states, “far less controversial than any duty to seek the truth is the duty to care for those whose lives already go less well and to protect them against foreseeable occurrences that would further decrease their well-being” (Kitcher 2001).Footnote 4 In short, whenever questions of research come with the foreseeable consequence of harming the underprivileged, they should be avoided.

While Kitcher defends the moral limits of free scientific inquiry against claims to unconstrained research, he argues just as strongly that “the fact that we ought not to pursue a particular course of action doesn’t mean that there should be a publicly enforceable ban” (2001, p. 105). Moreover, Kitcher later commits to the stronger claim that sometimes a scientist qua member of a community might be forced to sacrifice her/his freedom of inquiry to pursue projects for the common good, although, he notes, this will be a rare exception (2011, p. 135). However, a duty to pursue a certain line of inquiry is not the same as curtailing or limiting another. In light of the harms certain research could cause, we might have a prima facie reason to consider some restriction on the sorts of questions scientific inquiry pursues. Moreover, considering the threat of widening the inequality gap between groups of citizens, restrictions on scientific research may be in line with promoting moral equality amongst citizens.

Kitcher, however, claims that both conditional and explicit limitations on scientific research would be detrimental to the scientific community and to the underprivileged population under investigation. In the first case, Kitcher argues that conditional limits on inquiry—those that exclude only scientific research that will clearly perpetuate or worsen the circumstances of the underprivileged—could all too quickly slide down a slippery slope, preventing valuable research in the good-faith attempt to curtail unscrupulous inquiry (2001).

In the second case, Kitcher claims that explicit bans of any type would be counterproductive at best and harmful at worst. In his words, “the ‘cure’ is worse than the ‘disease’” (2001, p. 105). Specifically, at the heart of Kitcher’s concern is that political inequalities would only be further exacerbated, lending support to those who might think “official ideology has stepped in to conceal an uncomfortable truth” (2001, p. 105). Kitcher proposes that scientists should practice responsible science by avoiding the sorts of questions that will inevitably bear on the political and epistemic asymmetries, but asserts that any sort of explicit censorship of inquiry or official intervention runs the risk of being counterproductive (2001).

Here Kitcher leaves us with what I would like to call the banning dilemma. If moral equality were widely accepted as a fact, there would be no need for a ban; when moral equality is not widely accepted as a fact, any ban would be seen as illegitimate and would only reify the underlying causes of this perceived natural difference (2001, p. 105). In the face of this dilemma, Kitcher sidesteps the deeper question of what to do with competing fundamental values (freedom vs. equality) and how these values play out through competing rights (the right of the scientist to free inquiry versus the right of the underprivileged to social justice, which necessitates limitations on scientific inquiry). Instead, Kitcher focuses his arguments on the need for scientists to regulate themselves in the face of the need to distribute both fundamental values (freedom and equality) fairly.

In his more recent work, Kitcher’s concern shifts to the interplay between the public perception of science and Science’s role in producing public knowledge. In this way, we are given a picture of two silos of activity: the scientists who pursue research and disseminate their findings on the one hand, and the public who receive that knowledge on the other. However, those outside the social institution of Science (i.e., the public) have a limited role in setting research agendas or weighing in on interpretations of findings, particularly when the lay public are operating in a distorted marketplace of ideas (2011). In the chapter on “Diversity and dissent” he states, “[q]uestions about which topics are worthy of attention … are exactly parallel to the issues about which lines of inquiry should be pursued … vulgar democracy would give the untutored majority sway in the determination of the course of research” (Kitcher 2011, p. 221). To involve the wider community in the conversations scientists might have about setting research agendas, Kitcher recommends that selected members of the public be tutored and brought into those conversations, serving as a kind of go-between for the scientists and the lay, under-informed public (2011). In this way, he carries forward his prior claim that scientists ought to be the ones to decide upon the community standards and best practices that guide research.

Although I agree with Kitcher that scientists and the scientific community ought to have standards by which they can self-regulate and move toward more socially responsible science, it seems that Kitcher has ignored a fundamental purpose of such explicit limitations: they act as protective measures for underprivileged groups. While I might also concede to Kitcher that, in an idealized world, the issue of bans would never surface, current scientific inquiry in our world is far from this ideal. Furthermore, while Kitcher takes the time to demonstrate the interrelations between scientific inquiry and social values, he shies away from the connection between social values and the rights that enhance or enforce such values (i.e., freedom and equality). The specification and protection of particular rights, while not the only means by which a democratic society ought to attend to systemic inequalities, does bring with it certain benefits. However, these benefits are not without a price. As Wendy Brown points out, rights that grant particular groups special protections may compound the vulnerabilities these groups already experience (2000).

The Banning Dilemma and the Dilemma of the Rights Discourses

In “Suffering rights as paradoxes” (2000), Brown discusses how the instantiation of suffering rights, those rights that seek to redress inequalities for marginalized groups, cannot deliver the equality and freedom that liberal democracies promise. It is important to note that Brown is not making a claim for or against rights themselves; rather, she seeks to explore the difficulties marginalized groups within liberal democracies continuously encounter in the pursuit of rights meant to fulfill the promises of freedom and equality. While Brown discusses several different examples, I will limit the discussion to laws that attempt to prevent or alleviate gender inequalities. These might include inequalities such as unequal pay practices or the deprivation of women’s abilities to protect and control their own bodies (e.g., reproductive freedom or the right not to be sexually harassed). Women, to varying degrees, can be said to be in a position of vulnerability as a direct result of these particular and intersecting forms of inequality.Footnote 5

To paraphrase Brown, the rights paradox functions by locking the group it seeks to protect into the identity defined by subordination. That is, specific rights identify the group to be protected, and through this identification inscribe the need to be protected on that identity. For example, if we specify women as a group that needs special protections, we are labeling them as being the kind of agents that require special protections. This, in turn, keeps the group so defined in an inegalitarian position by virtue of being identified as the subordinate group within conversations pertaining to rights. Yet, rights that eschew this specificity sustain the invisibility of inequality and may even enhance it (Brown 2000, p. 232). In sexual harassment law, for instance, we can see the paradox arise by asking questions such as, “What understanding of interconstitutive powers of gender and sexuality is lost when sex discrimination (as sex harassment) is cast as something that women can do to men? On the other hand, what presumption about women’s inherent subordination through sexuality is presumed if sexual harassment is understood as a site of gender discrimination only for women?” (2000, p. 233).

Brown is pointing out that in reframing the rights discourse—in this case harassment legislation—as gender-neutral, we cover over the power relations that perpetuate this inequality occurring at the intersection of gender and sexuality; women remain vulnerable. Yet, when the harassment legislation is posed in gendered terms, we reinscribe and consequently perpetuate the assumption that women are inherently subordinate to men; it is something inherent in a person’s sex or gender that places her in need of special protections. Through examining such questions, Brown shows that rights which groups bear (women, in this case) and which individuals exercise as members of that group tend to consolidate the regulative norms of race, gender, and sexuality (in this case heterosexuality), holding people in positions of vulnerability.

Vulnerability and Research

At the heart of Kitcher’s project is an explication of what a well-ordered science would aim for and how it would operate within a democratic society. In this way, Kitcher moves between the ideal of scientific inquiry within a democratic society and the social realities of practicing science in an imperfect world. What we find in Kitcher and Brown are two different approaches to grappling with the difficulties of balancing the fundamental values liberal democracies hold dear: liberty, freedom, and equality. On the one hand, we can cast Brown’s analysis as focusing on the particularities: what is it about being in a position of vulnerability that resists external attempts to secure liberty and equality for these populations? On the other hand, we can cast this aspect of Kitcher’s project as attempting to address vulnerabilities through self-regulation: by bringing liberty, freedom, and equality into harmony, will we not also mitigate the vulnerabilities of those whose lives go poorly? Through the lens of Brown’s analysis, we see that Kitcher is addressing whether external regulations lessen the inequalities that are the source of a group’s vulnerability. If they do not, as Kitcher claims, they are not worth the trade-off of limiting the scientist’s freedom of inquiry.

Kitcher states, “in a world where research into race differences in I.Q. is banned, the residues of belief in the inferiority of the members of certain races are reinforced by the idea that official ideology has stepped in to conceal an uncomfortable truth” (2001, p. 105). Stated another way, Kitcher’s dilemma holds that the harm done by implementing a ban or restriction on research would perpetuate or enhance the vulnerabilities the underprivileged population already experiences. This seems to be the “uncomfortable truth”: when vulnerability is cast as an essential trait, something that inheres within the individual, we cause more harm than good. In addition to limiting the advancement of scientific inquiry, explicit limitations do nothing to defeat inequality in society; they may even institutionalize the distinctions and, hence, compound vulnerabilities. However, this is not a paradox. Brown’s analysis aims to demonstrate that banning certain kinds of harmful interactions between persons (in Kitcher’s case, certain avenues of inquiry) involves a moral trade-off between the violation of the rights of scientists and broader societal considerations. While Brown is not making a positive case for rights legislation, she does acknowledge that such legislation, even as it serves to mitigate but never resolve the deeper social injustice, does soften the blows of inequality (2000, p. 231).Footnote 6

The Problem with Deterrence by Convention

Kitcher’s arguments will likely resonate with researchers inclined to self-regulate in the face of standards set by the scientific community. However, Kitcher does not give us a way to address the scientist who, while well-intentioned, may stumble upon data that might perpetuate harmful inequalities. Moreover, Kitcher fails to acknowledge the practical point that even if some people adhere to community standards, it does not follow that everyone who wishes to pursue such research will be prevented from doing so on these grounds alone. Indeed, his moral claim may not register with this group of researchers at all. If the response to this practical point is that we can curtail the avenues of research the scientific community deems socially disadvantageous through implicit measures, such as conventionally accepted standards or the scientific community frowning upon such research, the argument seems to fall prey to the same critique Kitcher levels against explicit restrictions.

If the argument against explicit restrictions or bans is that they would cause additional harm to already underprivileged or vulnerable populations by implicitly reinforcing their position of inequality in the minds of some people, why would conventional standards meant to serve as guidelines not provoke these same conspiracy theories in the minds of those few? If the harm caused by restrictions is going to come from the outrageous claims of a fewFootnote 7 who allege such limitations are “concealing an uncomfortable truth,” then why does Kitcher presume that these few would not make the same allegation against the guidelines he puts forth for deciding what avenues should not be pursued as possible questions of research? The only difference between an explicit limitation and an implicitly accepted conventional standard set by the scientific community seems to be that the explicit limitation carries with it certain protections for the vulnerable population.

Kitcher’s hesitation to recommend explicit restrictions might stem from a desire to constrain research internally, through the valuation of particular moral values (i.e., those commensurable with and supportive of liberal democratic societies). While I will grant that the ideal of responsible science may be valuable, much like the ideals of pure science or objectivity, this does not mean that we do not also need explicit standards and limitations to protect the vulnerable (e.g., the standards set on human-subjects research after the Belmont Report, or the Health Insurance Portability and Accountability Act (HIPAA)) until societies, and the researchers within them, have progressed past the point of needing such explicit restrictions and limitations on research. Nor does this imply that explicit limitations on research should be the only measure scientists, or societies, take to alleviate inequalities.

Some of the more persuasive criticismsFootnote 8 against legal limitations on research concern the potential negative effects on scientific progress, the rights of scientists to pursue knowledge through research, and the degree of correspondence between public policy and science.Footnote 9 Each of these criticisms focuses on a different aspect of science’s connection to the broader society. The first, potentially hampering scientific progress, focuses on what science can offer society (i.e., technological and/or epistemic advancement). The second focuses on the rights of scientists to pursue knowledge through research, and the degree to which research acts as a necessary precondition for scientific speech. And the third focuses on the degree to which public policy is directly or indirectly shaped by certain kinds of scientific research.

Limiting Science, Limiting Rights, and Democratic Society

One might think that the group most affected by any form of restriction on scientific research is the scientists themselves. From the point of view of scientists, it makes sense that other scientists ought to be the ones to develop limitations or restrictions on research. Thus, the implicitly accepted conventional standard might seem the better option. Additionally, conventional standards set by scientists would, arguably, run less risk of hindering the progression of ideas. Indeed, some theorists suggest that restrictions on certain kinds of research, specifically legislative restrictions, would hamper scientific progress. One such argument, set forth by Marchant and Pope, claims that “legal restrictions in the areas of scientific research, particularly in the form of legislation imposing … penalties, may be too blunt, inflexible, and permanent to deal effectively with rapidly developing scientific fields” (2009, p. 386).

This line of argumentation is faulty for two reasons. First, it presents ‘scientific research’ and ‘scientists’ as if they were monolithic categories, when in fact the category ‘research’ includes an array of phenomena and activities (Post 2009, p. 435). Regulating one vein of scientific research does not cut off lifeblood to the rest of the scientific corpus. For example, if a government strictly limits research dealing with smallpox, progress in the field of theoretical physics would, most likely, be unaffected.Footnote 10 Second, the recent history of science in America shows that scientific advancement and rapid technological progress, particularly in the field of biomedical science, have thrived under systemic and ever-heightening restrictions and government regulation of science, of certain avenues of research, and of the publication of research findings (Horner and Minifie 2011). The assumption that regulating some kinds of science will inevitably hinder scientific progress is a slippery-slope argument, and we should not treat it as a legitimate threat to scientific progress.

Turning to the second criticism against limiting scientific research, one could present compelling arguments as to why scientists qua individual citizens have a general right to the freedom of thought and expression. However, there is no necessary bridge between this general right and liberty of action. Expressing ideas does not entail a right to gather evidence through the actions associated with experimentation; the latter carries material consequences, beyond merely making a given subject the object of inquiry (Bayertz 2006).

One might approach the issue from another starting point and attempt to argue that specific kinds of conduct (i.e., experimental research) are a necessary precondition for specific kinds of protected speech acts (i.e., scientific speech). Consequently, the argument would continue, scientists have a right to research because research serves as a bridging right to the right of expression.Footnote 11 However, as Weinstein points out, “assuring the flow of information likely to enrich public discourse … is a concern instrumental to the proper functioning of democracy, not constitutive of it … [t]hus government interference with information flow would not infringe an individual right” (2004, p. 10). The first step in the argument that scientific research falls under Mill’s arguments for the free expression of thought is to demonstrate that the conduct involved in the research is expressive conduct. That is, the scientific conduct would need to convey a ‘particularized message’ similar to the sort of message one can infer from the burning of an American flag as a symbol of political dissent. This presents a few difficulties, not the least of which is that one would have to discern what message is being conveyed when observing most forms of research. Consider differing forms of experimentation, for instance animal experimentation, cell duplication, or bacteria generation: how would we phrase the expressive content of such actions?

Some might claim that attempting to conceive of scientific actions as a form of speech rights is like squaring the circle. Putting the point another way, Irwin states, “[the expressive conduct] arguments fail for the simple reason that scientific experimentation consists of the application, not the communication, of scientific ideas” (2005, p. 1498). Yet pulling apart the ideas expressed during research from the expression of ideas as a necessary precondition for research is no easy task. The process of research begins with the exchange of ideas, which come in a plethora of forms. Consider the array of ideas expressed before research begins, such as those contained in the background and significance or aims sections of grant applications, or in justifications for funding requests more generally. During the research process, reports are filled out, progress is recorded and reported, and findings are disclosed, all of which are combinations of actions fueled by and giving rise to ideas. At the end of a research cycle, the exchange of ideas continues through professional presentations, academic publications, and media exchanges, all of which clearly contain expressive content.

Even granting that scientific research, as scientific speech, contains expressive content, we still might think that there are good reasons to limit some forms of speech that fall under this category. For example, in a US context, not all forms of speech containing expressive content are granted First Amendment protection. Most notably, protection is withheld when the expressive content of the speech act could harm the public (e.g., yelling “fire” in a crowded theater) or counts as hate speech; in such cases citizens who would otherwise have the protected right to free speech are legally silenced or censured. The kinds of scientific inquiry under consideration deal in normative labels about people, not merely flat-footed empirical descriptions of the world (e.g., a particle physicist trying to find the Higgs boson). That is, even if the scientist engaging in research or scientific speech might otherwise be entitled to free expression, there may be times when these rights are overridden for the sake of ensuring the right of expression for all persons. Does this mean that such speech has no significance?

Even if it turned out that scientific research and scientific speech in toto are not to be considered expressive conduct, it does not follow that such speech would lose its functional role in liberal democracies. Turning again to the world we live in, whether we consider the influence of policy on the funding of scientific research or the ways in which scientific research, as an expression of the epistemic authority of science, shapes policy, the social institution of science has a symbiotic relationship with the society in which it is embedded. The enterprise of science in democratic societies is a public issue, one that has the potential to inform and shape public policy and the lives of people in that society, not only the lives and pursuits of individual scientists.Footnote 12 Scientists have accepted that some ethical values, such as the value of human life and human dignity, rightly constrain research.Footnote 13 Likewise, when scientific research would clearly interfere with some group’s rights to self-determination or self-governance, it is no longer an issue only of scientists practicing unsavory science or of scientists being denied their right to scientific speech; it is a human rights issue.

The danger here is not strictly an epistemic danger. That is, the danger is not limited to cognitive biases of the kind Kitcher discusses—the fact that people generally ascribe too much weight to evidence consistent with their beliefs and too little weight to evidence inconsistent with them (2001, p. 97). The danger extends outward from the research activities associated with such inquiries to affect public perceptions, perpetuate social inequalities, and potentially shape public policy in ways that would further exacerbate those inequalities. In his more recent work, Kitcher acknowledges that some scientific research might be the sort of thing that undermines equality, perpetuates the vulnerabilities of particular persons, or unduly coerces the “participation” of some groups (2011, p. 135). Within the context of civil rights discourse in the United States, such research results in a disparate impact on certain populations. The kinds of public limitations that might best protect against such threats to equality in the United States context would find their basis in civil rights law. We can identify the justification for limiting actions that cause a disparate impact and then extrapolate to discuss the degree to which such rights might align with more general internationally recognized human rights, particularly those identified in the United Nations’ International Covenant on Economic, Social and Cultural Rights (ICESCR) to promote “the general welfare within a democratic society” (United Nations 1966).

Disparate Impact, Title VII, and Scientific Research

Disparate impact holds that actions may be considered discriminatory (and illegal) if they have a “disproportionate adverse impact” on members of a minority group protected by the Civil Rights Act.Footnote 14 Historically, disparate impact has been applied specifically to employment practices in order to remove particular and systemic barriers to economic self-determination for groups of persons. In effect, Title VII restricts the hiring choices and policies of managers so that all groups, barring business necessity,Footnote 15 have equal access to employment and advancement opportunities. One role that business owners and hiring managers play is that of gatekeeper to social goods and economic opportunities. These social goods and economic opportunities act as necessary preconditions that enable people to exercise their right of self-determination. I suggest that science, and the scientists who carry out research as part of the social institution of science, function in a similar role. Thus, they ought to be subject to similar restrictions in their practices as gatekeepers.

Consider, for example, the manner in which biomedical researchers have acted as gatekeepers to experimental medicine and interventions. As discussed by Dresser (1992), it was not until December of 1990 that the NIH mandated the inclusion of women and minorities in study populations. This then-commonplace practice of excluding women and minorities was often not the result of disparate treatment, but of the application of policies that resulted in disparate impact. Such disparate impact has a ripple effect visible at multiple levels: who receives the benefits of experimental life-saving interventions or medications, the accuracy of population dosing standards, epistemic advancement across populations (instead of the “normal” standard representing one population), and whose health concerns are of interest to the research community. Institutional Review Boards (IRBs) also have restrictions that place the burden of justification on researchers who wish to exclude women or minorities from their study populations. This kind of restriction embodies the spirit of the business necessity exemption. When these restrictions were first implemented, they had to be justified as being worth the burden in virtue of the fact that scientific research was having a disparate impact on certain populations.

These sorts of examples could also be thought to show that the scientific community is both willing and able to self-regulate. Making a similar point, Kitcher notes that the kinds of ethical limitations to which IRBs are held are an example of limiting inquiry that approximates the ideal (2011, p. 135). However, the lack of insulation from social pressures and values gives us reason to think that IRBs do not approximate the ideal in the way Kitcher describes. IRBs are held to standards that reflect public values and broader societal values more generally, not simply those of the tutored public and the scientists doing research. Such values can be seen in the language of research proposals and consent documents (e.g., a Catholic institution may employ the term “unborn baby” instead of the more commonly used term “fetus”), as well as in the degree to which a local community may support or protest particular programs of research. Such examples indicate that an IRB is not nearly as insulated as the ideal deliberative structure Kitcher describes. Rather, IRBs, those who make up the population of scientists, and those who instantiate and implement government regulations are not isolated from one another.Footnote 16

Moreover, there is a distinction between protecting vulnerable populations and preventing disparate impact. In the first instance, protecting vulnerable populations entails a negative duty not to interfere with, or adversely impact, those whose lives are already poorly off. A researcher cannot target a vulnerable population unless there is compelling reason to do so. Disparate impact, by comparison, is broader in scope. Disparate impact is a form of unintentional discrimination that does not offer the same level of advancement, promotion, and so on to all groups. As such, disparate impact in business often entails positive obligations on the part of employers: systems must be put into place, training programs established, or mentoring opportunities provided to ensure equality of opportunity. Thus, shifting to a concept of disparate impact moves beyond protecting vulnerable populations during research to requiring positive actions on the part of researchers to ensure that the research process, as well as the reporting of findings, does not disparately impact historically marginalized populations.

In the context of the United States, extending legal measures to protect against disparate impact is a natural extension of the IRB charge of protecting vulnerable populations, an extension that recognizes that compounding existing vulnerability is not the only way in which science might harm a group. Harm can extend beyond those who are directly and immediately participating in research to impact vulnerable populations in the broader society. Thus, the impact of Science on those outside the society in which it is situated becomes increasingly important within the context of global research.

International Standards and Global Research

An eye toward disparate impact may serve as a useful guide in thinking about the scope and scale of the impact research could have, as well as the obligations of researchers given that impact. It is important to keep in mind that calculations of scope and scale are not only about how science will impact a particular democratic society; they should also include how research will impact populations broadly construed. One consequence of gauging scope and scale in a global research setting is that responsibility will be shared by researchers and governments. In a setting involving multiple countries and cooperation across nation-states, it is no longer only about what a particular researcher might intend or be responsible for; it is also about what kinds of rights the state has committed to protect.

Interestingly, while the United Nations’ Universal Declaration of Human Rights does not specify a right to scientific research (UN Declaration 1948), Article 15 of the International Covenant on Economic, Social and Cultural Rights (ICESCR) recognizes “the right of everyone to enjoy the benefits of scientific progress and its application and the freedom to perform scientific research” (ICESCR 1966). Furthermore, the ICESCR recognizes the importance of free inquiry and free expression as necessary preconditions for scientific progress (1966, Article 15, 3). As Audrey Chapman points out, however, “Scientists do not have carte blanche to proceed in areas where the process or outcomes of the research may be of harm to individuals or communities” (2009, p. 17). Recognizing the symbiotic relationship between science and society, Chapman goes on to discuss the findings of the National Bioethics Advisory Commission, pointing out that “limits on freedom of inquiry must be carefully set, must be justified and should be reevaluated on an ongoing basis” (2009, p. 18).

This is not to deny that there are plenty of instances of unwarranted and unjust limitations on inquiry. Chapman discusses a number of examples, particularly in the context of oppressive state environments, where scientific progress and the free exchange of ideas have suffered significantly as a result of undue government interference. Even within the United States, scientists are often presented with barriers when seeking to travel to Cuba for research or the dissemination of research (Chapman 2009, pp. 18–19). Rather, the point is that within the context of global research, there are recognized limitations on inquiry. Moreover, member states commit to positive actions aimed at ensuring that their citizens have equal access to the benefits of research and are protected against the use of science to violate their human rights. My claim is that researchers, and the nation-states that support them, have positive as well as negative obligations.

Degrees of Harm: From Well-Intended Science to Unsavory Science

Impact can come in many forms. Just as a researcher must develop special protections and protocols when conducting a study involving vulnerable populations, so too should researchers have to consider the social impact of their research and dissemination process. Disparate impact will entail greater or lesser degrees of responsibility, depending on the context and scope of the impact. To explore this more concretely, I will consider some contemporary cases ranging from less to more pernicious forms of potentially harmful research. When research might have a dual use, that dual use can introduce specific and foreseeable harm(s) to particular persons. It is not merely that research and technology might have unintended consequences; rather, the harm in the more worrisome cases should have been foreseeable (and thus preventable) given the context in which the research takes place.

First, let us consider the case of the well-intended researcher who begins a new project. As far as she can foresee, this project does not directly threaten the well-being of any particular vulnerable persons or populations. She begins to analyze her data some time later, only to discover that it could be interpreted, perhaps quite easily, to reify a particular stereotype that would keep a group in a subordinate position. To take a recent example, consider the manner in which the correlation between testes size and childcare involvement was reported. As reported by The Guardian, “[a]n anthropologist at Emory University in Atlanta, said that while her work revealed a correlation between testes size and parenting, it said nothing about the causes. She suspects the size of a man’s testes influences how involved he gets in childcare. But the reverse could be equally true: indulging in childcare may make a man’s testes shrink” (Sample 2013). Such careless reporting of the data may lend itself to reifying the perception that caring for one’s offspring is a “feminine” trait; so much so, in fact, that either men with smaller testes enjoy doing it, or regularly caring for his children shrinks a man’s testes.

To be clear, the way in which the data were reported may not be problematic in all societies at all times. Situational factors determine whether people in a particular segment of the population are in a position that threatens their well-being. Taken against the backdrop of a history of misogyny and sexism, and given that the United States has not reached gender parity in childcare, I would argue that the scientists gathering and analyzing the data for such a study have a greater responsibility to take care with how they report their data. Moreover, they have a duty to make a good-faith effort to ensure that their data are not misreported (e.g., the scientists respond if their work is taken up and misreported by a prominent news organization or radio talk-show host). This duty becomes particularly important when conducting research in a global context.

For example, consider the recent moratorium on research related to H5N1, a highly pathogenic form of influenza. As David Fidler reports, with a mortality rate of sixty percent, H5N1 is a continuing source of public health concern (2012). Nevertheless, research in the United States and the Netherlands faced continued and mounting criticism. The primary concerns centered on the possibility that the findings of the studies, and the influenza strains themselves, could be used by others to manufacture biological weapons (Fidler 2012). If one such strain found its way into the wrong hands, we could face a global crisis. Such an effect would not be an intended consequence on the researchers’ part, but it is a foreseeable one. Furthermore, while it would not necessarily affect any particular population, the impact on humankind more generally has the potential to be devastating. Employing Mill’s Harm Principle, we might be justified in limiting such scientific inquiry, even when the researchers have the best of intentions.

Finally, let us consider the example of the scientist who conducts what is known to be problematic or unsavory research. Kitcher represents the unsavory scientist as a thing of the past, painting a picture of the Nazi doctor or of those who coerced “patients” to participate in the Tuskegee experiments (2011, p. 135). Kitcher presents those who pursue potentially damaging research as belonging to a minority or to particular sub-communities: a small fringe of scientists who are either relics of a bygone era or outsiders to the mainstream community who continue to flout community standards. If science is always to have its ‘crazy uncles,’ is the proper response simply to ignore their existence? Is it enough that only certain communities within science accept the community standards or implicit restrictions as morally good and justified? What would be the potentially burdensome impact on the scientific community at large if there were external restrictions on certain avenues of unsavory scientific inquiry? If only a small minority of scientists actually pursue this kind of research, then what is the harm in bringing it to a full stop?

A deeper problem, which Kitcher seems to gloss over, is that this kind of research is in fact quite well funded, accepted as “good” science, and brought directly into public policy conversations. As one example, we can turn to the work of J.P. Rushton, a well-respected empirical psychologist, whose Charles Darwin Institute received almost $500,000 from the Pioneer Fund to study IQ differences between racial groups (Teo 2011, pp. 239–240). Rushton’s work, and that of others like him, has sought to show that “the public must accept the pragmatic reality that some groups will be overrepresented and other groups underrepresented in various socially valued outcomes” (Rushton and Jensen 2005). Carrying this into the realm of public policy, Rushton and Jensen specifically state that “the apparent failure of equal opportunity programs to enable all groups in society to perform equally scholastically or even to narrow the gap in the test [is because of] the true nature of individual and group differences, genetics, and evolutionary biology” (2005). Rushton and Jensen go further to claim that the public should be educated about inherent differences, the implication for public policy being that funds should no longer be spent on equal opportunity programs.

In the first example, the well-intended scientist should have been able to infer, given the history of gender disparity in the United States, the ways in which her research could be used to perpetuate entrenched gender norms that devalue or undervalue care-labor. This is quite different from the second example, the case of the moratorium on H5N1 research. In the second case, there is a general harm to persons the world over, while in the first case the foreseeable disparate impact affects a particular segment of the population (i.e., women). Finally, consider the harm from research of the sort performed by Rushton and Jensen. This, I take it, is the most troublesome kind of case. Here, disparate impact is immediate and foreseeable; the findings include statements about the efficacy of equal opportunity policies. In addition to being dangerous and harmful to historically marginalized populations, this kind of policy response seems unwarranted. If, for instance, we discovered alcoholism to be the result of a genetic anomaly, a natural fact, would this entail that we ought to defund social supports aimed at enabling those who suffer from the effects of alcoholism to live fulfilling lives?Footnote 17 Surely not; rather, it may entail that we should, in the name of social justice, redouble our efforts. In short, if this kind of science can still be publicly or privately funded,Footnote 18 carried out, and brought to bear in public policy debates, why should we deny legal redress to those who will actually and disparately feel its impact?

Conclusion

When addressing the question of placing limitations on certain types of research, we cannot limit the discussion to how we can fairly distribute fundamental values. It would be an illegitimate move to sever the connection between the valuing of freedom and equality and the ways those values play out in public policy and our other social institutions. The question we are left with is a practical one: which answer will cause more harm than good to the scientific community and to underprivileged groups—which, we must keep in mind, are not and should not be considered mutually exclusive?

I have argued that Kitcher was too quick to assume that implicit guidelines and internal community standards would be the best answer. Without explicit restrictions on inquiry, we cannot externally hold scientists accountable for continuing research, however well-intended, that they have cause to believe is harmful, or for failing to take a more explanatory role in the reporting and dissemination of such research. More worrisome, we enable those who might be inclined to pursue harmful hypotheses, and in doing so we keep avoidable hardships to vulnerable populations a very real risk. In addition, the fact that community standards are subject to the same conspiracy tactics as bans or explicit limitations, and as such fail to cause less harm than explicit restrictions, demonstrates that Kitcher’s recommendation falls prey to the critique he offers against bans and overt limitations. Even if such conspiracy theories cannot be laid to rest through argumentation, at least some of the harm to those suffering from political or epistemic asymmetry will be mitigated. In short, if the goal of curtailing certain avenues of inquiry through argumentation, implicit restrictions, or explicit restrictions on research is to mitigate harm and bring the institution of science in line with democratic social values, then explicit restriction might be the best way of doing so, even if it means limiting the freedoms of scientists.