Human bioenhancement, whether understood as increasing people’s welfare or as raising human functioning above a biologically, culturally, or statistically defined level of normality (Savulescu et al. 2011) by means of biomedical substances and practices (e.g. drugs, external devices, genetic selection), aims to give us more. That “more” is typically a level of functioning, capacity, or state of being that lies beyond the ordinary limits of what it means to be human and that could not otherwise be achieved without biomedical intervention. Human bioenhancement can also help us acquire functions, capacities, and states of being that nature did not grant us at all. In general, the language of enhancement is “more”—more abilities, more welfare, more stability, and more autonomy. Although more of some things, such as abilities, possessions, money, and time, is desirable on some occasions, there are instances when more is not conducive to a better life, giving us just cause to be suspicious of the enhancement model, specifically the model set forth by some proponents of moral bioenhancement.

Although not without controversy, discussion of moral bioenhancement in the field of ethics is the result of advances in neurotechnology, psychology, and neuroscience (among other areas), which have given us new information about the relationship between our brains and our moral behaviors (Persson and Savulescu 2012b). As we have come to learn how our brains give rise to our sense of morality and justice, and which parts are responsible, using biomedical practices to enhance our morality has become the next target of human ingenuity.

Human ingenuity paired with our increasingly complex lives is one way to explain the boom in experimentation and ethical discussion of biomedical human enhancement. According to some moral bioenhancement scholars, our moral progress has not kept up with the rapid pace of our technological progress, which has resulted in our having a much greater impact on the wellbeing of future generations and distant others (Persson and Savulescu 2013). Other humans, the environment, and non-human animals are often at the losing end of this mismatch. In other words, according to these scholars, humans are not very kind to those outside of their close circle of family and friends; but the hope is that biomedical intervention could make us better by improving the traits necessary to be more moral people, with empathy being one such trait.Footnote 1

We, however, are skeptical of this way of thinking. Focusing specifically on empathy, our first contention is that, given the ways in which empathy is susceptible to certain biases, there is some reason to believe that inducing a generalized increase in empathy, as would be the case with moral bioenhancement, would do little to address, let alone counteract, some of the biases that lead us to treat other people, the environment, and animals in immoral ways. Indeed, it may even undermine our responsiveness to principles of impartiality. Second, psychological research suggests that maintaining effective levels of empathy requires the regulation of empathic feeling, and we are therefore also concerned that moral bioenhancement could undermine our capacity for empathy regulation. In this paper, we give a detailed account of these concerns.

In Part I of this paper, we focus on a narrative given by Savulescu and Persson as an example of the body of literature that describes the need for moral bioenhancement. In response to this narrative, in Part II, we draw from psychological research to demonstrate how a biomedical increase in empathy can actually counter moral bioenhancement’s noble project of making our moral community more inclusive. We also survey some standard worries about the role of empathy in our ethical motivations and deliberations—e.g., its apparent partiality—which any defense of increasing empathy must address. In Part III, we respond to possible criticisms of our argument. We conclude that proponents of moral bioenhancement should consider the importance of empathy regulation for counteracting biases and for the empathizer’s ability to maintain effective levels of empathy if moral bioenhancement is to be a viable option for helping us to become more moral people.

1 Part I: Morality and Our Ability to Harm Everyone and Everything, Near and Far

In several articles, Ingmar Persson and Julian Savulescu, both well-known proponents of moral bioenhancement, argue that the need for moral bioenhancement stems from the evolution of human communities. As their argument goes, for most of human history we have lived in small communities with limited technology that enabled us to have a significant impact only on our immediate surroundings. As a result, our moral psychology adapted to fit life in these small communities, which required that we develop a concern only for the people around us and for the immediate future. Now, however, we live very different kinds of lives: we live in large communities and have technology that can affect people and environments that are geographically far from us, and can affect them well into the future. But our moral psychology has not evolved as our communities and worldly impact have evolved and continue to evolve. Persson and Savulescu argue that this poses a serious problem because human influence is contributing to environmental degradation and climate change. Additionally, advances in science and technology have created weapons of mass destruction capable of wiping out masses of people all at once (Persson and Savulescu 2008, 2011, 2012b, 2013), a threat that becomes all the more pressing because of our lack of moral evolution.

1.1 Moral Bioenhancement

Persson and Savulescu (2012b) argue that developing new technology will not solve these man-made problems, nor will traditional moral education alone. Rather, the appropriate response to humans’ stagnant moral character is to change ourselves through moral bioenhancement:

What is needed is an enhancement of the moral dispositions of their citizens, an extension of their moral concern beyond a small circle of personal acquaintances, including those existing further into the future. The expansion of our powers of action as the result of technological progress must be balanced by a moral enhancement on our part. (Persson and Savulescu 2012b, p. 400)

Persson and Savulescu give the following definition of moral enhancement:

To be morally enhanced is to have those dispositions which make it more likely that you will arrive at the correct judgement of what is right to do and more likely to act on that judgement. (p. 406)

Persson and Savulescu’s goal is to use bioenhancement to change the motivations for our actions (2012); that is, to use bioenhancement to overcome our moral psychological shortcomings. They do not, however, give us a particular moral theory for what ought to guide our motivations, but they do give us a sense of what is needed to help us make correct judgements in their discussion of moral enhancement and autonomy.

Persson and Savulescu’s (2012b) remarks about moral enhancement and autonomy occur in the context of a reply to an objection. In response to the objection that moral enhancement compromises individual freedom by prohibiting us from acting wrongly and forcing us to act correctly, Persson and Savulescu detail a thought experiment in which we have the ability to prevent harms before they are committed. We have this power thanks to the “God Machine,” a computer that monitors people’s thoughts and desires, and which can alter those thoughts and desires without letting us know that any changes have been made. The God Machine intervenes only to prevent extensive harm or gravely immoral behavior, not small acts of immorality like lying. It would intervene, however, if, say, a person considered murdering another person and it became evident that the person would indeed commit murder. In such a world, Persson and Savulescu argue, people are still somewhat free: they are free to choose moral thoughts and actions. They are unfree only with respect to immoral acts like murder and, as Persson and Savulescu argue, this kind of world would not be such a bad one to live in.

Nonetheless, by not giving people the freedom to fail, as Persson and Savulescu (2012b) argue, the God Machine does limit autonomy; it puts people under the control of someone or something other than themselves. But Persson and Savulescu make it clear that the God Machine is not a (biomedical) moral enhancement, because (biomedical) moral enhancements are interventions that do not compromise autonomy:

Moral enhancements which increase altruism, including empathetic imagination of the suffering and interests of others, coupled with sympathetic response to this, together with greater preparedness to sacrifice one’s own interests, greater willingness to cooperate, and better impulse control would not undermine freedom or autonomy. (Persson and Savulescu 2012b, p. 417)

Using the God Machine to show what (biomedical) moral enhancement is not is important to Persson and Savulescu’s account because it clarifies what they intend moral bioenhancement to entail, namely interventions that increase our sense of justice, altruism, and empathy. The God Machine is not a moral bioenhancement because it only prevents immoral behavior by curbing immoral desires (Persson and Savulescu 2012b). Unlike moral bioenhancement, the God Machine does not give us tools for moral deliberation by augmenting the faculties needed to be good moral deliberators.

Based on Persson and Savulescu’s description, moral bioenhancement is the act of biomedically intervening on our inadequate moral psychology so that our moral deliberations are more likely to take into consideration our ability to affect distant people and lands. To achieve this, according to Harris and Savulescu (2015), we have to question whether the amount of empathy that naturally occurs in the human population is appropriate for our human relationships. Savulescu holds that it is possible that nature and evolution got it wrong, giving us too much or too little empathy. He argues that we have to look to science and ethics to tell us what amount of empathy is best for the contemporary problems we face.

Rather than just “sticking our head in the sand like ostriches and saying, ‘Human beings, they’re good enough,’ or, ‘We shouldn’t use medicine or science to change human beings,’” Savulescu argues that we ought to let scientific exploration guide our moral bioenhancement pursuits (Harris and Savulescu 2015, p. 20). After this exploration, we may find that, when it comes to empathy, “maybe we need more. Maybe we need less” (Harris and Savulescu 2015, p. 12). It is evident from their writings that Savulescu and Persson see empathy and morality as connected. If scientific evidence were to back this intuition, they would advocate for moral bioenhancement with the aim of increasing our morality.

Despite its advocates’ admirable goals, in the remainder of this paper we identify two areas of concern about moral bioenhancement. In particular, we argue that the broad increase in empathy that advocates of moral bioenhancement call for (if it were determined that empathy does, in fact, increase altruistic tendencies) may actually undermine morality. Our first concern focuses on the social biases that tend to affect the ways in which we empathize; we contend that moral bioenhancement would not effectively address these biases. Second, focusing on the role of empathy regulation in the delivery of effective health care, we caution that bioenhancement may undermine our ability to regulate empathy in cases where an excess of empathy would be harmful rather than helpful. To be clear, we do not argue that empathy necessarily undermines morality. Instead, we present evidence about the importance of properly regulating empathy, and we contend that any proposal for moral bioenhancement ought to take these factors into serious consideration.

2 Part II: The Importance of Empathy Regulation for Moral Agency

2.1 Empathy and Its Susceptibility to Biases

Persson and Savulescu state that the moral enhancement they have in mind involves the “increase of empathetic imagination of the suffering and interests of others, coupled with sympathetic response to this” (2012b, p. 417). While they do not directly define empathy, their writings indicate that they view empathy, or empathic imagination, as a kind of perspective-taking that allows one to better understand the experience of others, as well as a congruent emotional response to others’ state of wellbeing. Persson and Savulescu view ‘normal,’ unenhanced empathy as something that is automatic or relatively easily evoked with respect to those who are near and familiar to us, but difficult to muster when it comes to distant and different others. They write that we:

[c]an only empathise with a few individuals based on their proximity and similarity to us, rather than, say, on the basis of their situations. So our ability to cooperate, applying our notions of fairness and justice, is limited to our circle, a small circle of family and friends. Strangers, or out-group members, in contrast, are generally mistrusted, their tragedies downplayed, and their offences magnified. (2012b, p. 2)

Persson and Savulescu’s concern is not that we lack the ability to empathize generally, or that we are incapable of fathoming what the experiences of others are like, but that our propensity to be moved by the plight of distant people is weak. We are deficient not in the capacity to understand the effects of our actions on others, but in moral motivation with respect to those who are not part of our intimate, inner circle (p. 3). As a result, we make errors when it comes to applying standards of fairness and justice.

There is reason to be skeptical of Persson and Savulescu’s idea that increasing empathy in general would improve our moral judgment. John Harris captures one such reason:

...what worries me is that we will increase our capacity and our willingness to help those near to us at the expense of those further away, and it seems to me that what we need in the global village that we now all inhabit is the power and the willingness to generalize our affections and concern for others right across the planet, right across the globe. (Harris and Savulescu 2015, p. 12)

In this statement Harris expresses the idea that more empathy will not necessarily mitigate the problem of proximity bias, our tendency to care about who and what is nearby while showing little concern for people and environments that are far away. Rather, we need to learn to do a better job of applying empathy, which may require not more empathy by way of intervention, but the skill of appropriately directing the empathy that we already possess. In a debate with Savulescu, Harris continues to express his concerns about the inadequacy of increasing empathy for overcoming proximity bias:

it is not enough to improve sympathy, the empathy, the caring behavior that we offer to our friends and neighbors, to our families and our workplace. We need more imagination, more awareness of how to generalize those very important feelings of sympathy, empathy, and cooperation. (Harris and Savulescu 2015, p. 11)

Harris suggests that qualities other than empathy—namely imagination and the ability to generalize moral emotions and behaviors, as well as “being better at knowing the good and understanding what is likely to conduce the good” (2011, p. 104)—are required to address the problem of proximity bias.

We emphasize the importance of getting the relationship between empathy and moral bioenhancement right because of just how important empathy is to our sense of morality. Empathy is thought to play a role in prosocial behavior, in moral motivation and deliberation, and in how we interact with others, including how much sympathy we show them (Decety 2010). But Harris’ statements suggest that more empathy is not what will make us more moral people; instead, we need to be better at employing the empathy we already have.

Harris’ statements highlight an important aspect of our moral psychology: we tend to be more empathetic towards people who are near to us. Empirical research also shows that we tend to be more empathetic towards people who are similar to us, including people who share our social standing, who look like us, and who hold the same beliefs as we do—in general, people with whom we can identify and relate (de Waal 2008; Maibom 2014). We are also more empathetic towards our friends than we are towards strangers (Maibom 2014; Meyer et al. 2013). It is easy to imagine a scenario in which a stranger is more deserving of our empathy, and of the actions that accompany it, such as offering aid, but in which we nonetheless direct our empathy towards our friends because of our fraternal relationship. These tendencies represent forms of in-group bias.

Another form of bias, which we believe proponents of biomedical enhancement should take into careful consideration, is racial bias, which has been shown to adversely affect moral judgment. For example, one study found that when asked to put themselves in a defendant’s shoes, white jurors felt less empathy for black defendants than for white defendants, and gave harsher legal punishments to black defendants than to white defendants, even when the defendants had committed similar crimes (Johnson and Simmons 2002).

How and with whom we empathize is affected by many factors, which can generally be categorized into (1) our attitudes towards subjects and (2) what we know about them (Maibom 2014). In other words, it is our relationships with people that influence how much empathy we tend to have for them. And it is for this very reason that whom we deem worthy of our empathy is typically inconsistent. Many of the concerns that we have about moral bioenhancement and empathy also apply to justice because, as the above-referenced study indicates, both empathy and our sense of justice rely on cognitive processes that are vulnerable to bias. Whether implicit or explicit, biases can lead us to make reason-defying moral decisions.

Proponents of bioenhancement assume that increasing empathy would expand our tendency to empathize beyond our small circle of family and friends. We have reason to doubt, however, that biomedically enhanced empathy would transform the underlying factors that result in the discriminatory ways in which people empathize. Instead, increasing empathy might exacerbate the effects of racial bias by intensifying the feelings associated with empathy while the prejudiced beliefs and attitudes that underlie discriminatory empathizing remain in place. It is not difficult to imagine someone who shows an abundance of kindness and compassion towards her own children, friends, and even many fellow citizens, but who is cold and cruel when it comes to certain social groups, even if she comes into contact with members of those groups on a daily basis. The problem is not that this person lacks the capacity for empathy; rather, she directs her empathy towards a limited group of people. Increasing her empathy may only intensify her partiality towards the group with whom she identifies, particularly if she sees other social groups as a threat to the wellbeing and interests of those whom she cares about.Footnote 2

For moral bioenhancement to live up to its promise of creating a more moral human populace, it must address how our capacity for empathy and our ability to deploy justice are affected by deeply ingrained social and cultural attitudes; racial, gendered, and speciesist prejudices; and a wide variety of other, often complicated, judgements and preferences.

2.2 The Role of Empathy Regulation in Sustaining Care for Others

Thus far, we have raised concerns that using biomedical resources to increase empathy will not correct the most important limitations of empathy and that, in fact, increasing empathy could lead us to treat certain people worse. For example, we have argued that a biomedical approach to increasing empathy will not remedy relative deficiencies in empathy towards members of stereotyped social groups. Relatedly, the augmentation of empathic dispositions may undermine the capacity to regulate empathy in ways that are conducive to morality, by changing how we interact with people who are experiencing extreme distress.

Human beings have empathy-regulating mechanisms that help to prevent us from experiencing degrees of empathy that would be excessively harmful (Hoffman 2000). When an empathizer enters into the mode of excessive empathy—what psychologist Martin Hoffman calls “empathic over-arousal”—the empathizer may react in a number of ways to shut down empathy so as to reduce the empathic distress. These responses are often involuntary. In the state of empathic over-arousal, vicarious painful feelings cause the empathizer’s attention to shift from the other person’s suffering to the empathizer’s own, personal empathic distress. That is, the empathizer begins by feeling bad for the other person who is suffering, but then turns her thoughts to her own pain, which has resulted from her feeling bad for the other person. This turning of attention away from the other person and onto oneself can have the effect of decreasing the empathizer’s feelings of empathy for the suffering other, which, in turn, lessens the empathizer’s own overall distress (p. 198). People also develop defense mechanisms that numb them to the suffering of others in order to reduce the intensity of empathic distress, such as emotionally “hardening” themselves (p. 203), dehumanizing (Vaes and Muratore 2013)Footnote 3 or otherwise emotionally distancing themselves from the person with whom they might empathize, and habituating themselves (p. 204) to witnessing certain forms of suffering in others.

One might worry that mechanisms that desensitize us to the suffering of others would have adverse moral effects, but these mechanisms can also make us better able to experience empathy at appropriate times, which can have positive moral effects. First, we are more capable of sustaining high levels of empathy when empathic emotions can be discharged through action. Hoffman finds that in cases of empathic over-arousal, the ability to help sufferers can lessen our susceptibility to compassion fatigue (p. 201). Health care providers, for example, may be better able to emotionally process the suffering they witness when they recognize that they are making a difference in the lives of those they care for. Second, in cases where there is a clear sense of obligation to respond to suffering because of factors like role expectations or love, levels of empathy that would otherwise produce over-arousal can continue to increase our motivation to help (p. 204). When empathy cannot be discharged in ways that alleviate the suffering of those with whom we empathize, it is suppressed. That is not to say that we lose the capacity to experience empathy for the person whom we are unable to help; rather, the intensity of the emotional distress we feel in response to their suffering subsides when we turn our attention away from it.

Empathy-regulating mechanisms balance the preservation of empathizers’ emotional wellbeing with their tendency towards prosocial action. Hoffman thus concludes: “Over-arousal may be empathy’s ultimate self-regulating, self-preserving mechanisms, which fits with the increasing evidence that the ability to regulate one’s emotions correlates positively with empathy and helping behavior” (p. 14). In effect, empathy-regulating mechanisms support our capacities and tendencies for moral action. The regulation of empathy ensures that, rather than quickly expending all of our emotional resources by empathizing too much or too often, we retain sensitivity to the suffering of others when it will be most conducive to morality. Our concern is that moral bioenhancement would likely disrupt the functioning of these empathy-regulating mechanisms by medically inducing an increase in empathy—in effect forcing empathy when it would normally, and often appropriately, be suppressed—and would therefore undercut their positive effects. If moral bioenhancement is to work, its proponents would have to consider how biomedical resources would interact with our natural inclination to regulate our empathetic responses.

In addition to undermining moral motivation, increasing empathy could also compromise the capacity to carry out helping behaviors. The reason, again, is that empathic management is more valuable to moral agency than merely increasing empathy. In the health professions, for example, the importance of empathy is widely recognized.Footnote 4 Empathy is a cognitive tool for improving medical judgment (Gleichgerrcht and Decety 2012, p. 248). It encourages communication between health care givers and patients, which in turn influences healing (Halpern 2001) and improves the quality of diagnosis (Gleichgerrcht and Decety 2012, p. 248). It also supports patient trust, which can lead to increased adherence to treatment recommendations and greater patient satisfaction (Duberstein et al. 2007; Epstein et al. 2007). Excessive levels of empathy, however, can undermine physicians’ ability to carry out their duties to patients. Gleichgerrcht and Decety (2012) argue that continuous exposure to patients in pain produces negative emotions in physicians that are resource-competing (p. 248). That is, the cognitive work that goes into processing empathic emotions draws attentional resources that might otherwise be directed at clinical task performance (Ellis and Ashbrook 1988). Compassion fatigue, resulting from frequent, ongoing emotional strain, may also interfere with caregivers’ capacity to carry out the tasks required to assist others (Decety and Lamm 2009), because it tends to produce symptoms like emotional exhaustion, detachment, and a low sense of accomplishment (Maslach et al. 1996). Thus, the skill of regulating empathy is vital to the efficacy of caregiving practice (Gleichgerrcht and Decety 2012; Sultan Haque and Waytz 2011). Far from calling for a simple increase in empathy, some researchers recommend institutional and individual strategies for decreasing empathy throughout the day, such as taking breaks from emotional stimuli, or intentionally framing certain caregiving activities in terms that objectify the patient when effective intervention might otherwise be inhibited by empathic distress (Gleichgerrcht and Decety 2012).

While the emotional demands on caregivers in the health professions are certainly unique, the need to regulate empathy applies beyond the medical context. Giving care to others—whether emotional or physical—is a part of human life, and we often depend on one another to meet our needs. Empathy no doubt has a central place in this part of our lives, but the wellbeing of caregivers and those for whom they care depends on the ability to draw appropriate boundaries in our practices of loving and caring. Similar to our concern that moral bioenhancement does not address the biases that tend to affect the ways in which we empathize, our concern here is that it may prevent people from regulating empathy in ways that are essential to caring well for others.

In addition to disrupting empathy-regulating mechanisms, biomedical enhancement may also undermine caregivers’ wellbeing, because some of the primary factors that undermine empathy and make empathy harmful to those who experience it are systemic and organizational rather than individual. Such factors cannot be biomedically addressed and, furthermore, increasing empathy in the face of such factors could do real harm to moral agents. For example, studies find that physician burnout, characterized by emotional exhaustion and the erosion of empathy, is caused by a variety of institutional factors, such as an over-emphasis in training on technical abilities (Montgomery 2014). Other notable factors include an over-reliance on computer-based diagnostic and therapeutic technology; time pressures; changes in the market-driven health care system; a cultural ethic of clinical neutrality and affective detachment in modern medicine, via the association of medicine with science rather than patient care; sleep deprivation; climates of intimidation and harassment; and a lack of role models in medical school (Hojat et al. 2009). Increasing empathy through biomedical means would not address these underlying causes of empathy erosion in physicians. It would not allow for greater collaboration among physicians, more meaningful relationships with patients, or increased patient involvement—approaches which would be likely to reduce physician burnout (Montgomery 2014). Ironically, biomedical intervention is all too consistent with the hyper-individualized and overly technologized approaches that erode empathy in physicians in the first place.

3 Part III: Counterarguments

Although our paper cautions against some foreseeable consequences of moral bioenhancement, here we consider a number of potential criticisms of our argument. For one, our argument focuses on empathy and its relationship to our moral deliberations, but other emotions are also involved in those deliberations. If the information we have about empathy leads us to be wary of increasing empathy, then, alternatively, we could enhance other emotions, such as benevolence or even pity. Levy et al. (2014) provide a response to this potential criticism which shows that if we are right to be cautious about biomedically enhancing empathy, we also have reason to be cautious about biomedically enhancing any other emotion that could affect our moral deliberations.

According to Levy et al. (2014), some people who take SSRIs (selective serotonin reuptake inhibitors, frequently used as antidepressants) or the drug propranolol (a beta blocker frequently used to treat hypertension) have been known to display prosocial behaviors. These drugs can make some people more moral by helping them to ignore irrelevant and dangerous emotions and biases, including racial biases, that would be detrimental to their decision-making abilities. According to Levy and colleagues, if moral agents have high levels of empathy, taking SSRIs will increase their aversion to causing harm, but whether this translates into desirable behavior depends on the situation they find themselves in and how the drugs affect them: “If an agent is likely to find herself in a situation in which the enforcement of norms by way of punishment is important socially, we may wish to discourage medically unnecessary SSRI use” (Levy et al. 2014, p. 11). The concern is whether, in such a situation, a medically induced aversion to causing harm (along with high levels of empathy) would jeopardize the individual’s wellbeing. We would have to ask whether non-medical (i.e. enhancement) use of mood- and cognition-altering drugs would make people too trusting or too cooperative, particularly in those situations in which they ought not to trust and ought not to cooperate.

This argument could also apply to other emotions. One could imagine very benevolent people who are given drugs to increase their benevolence and who then find themselves in situations in which too much benevolence leads them to take part in harmful activities, such as being too kind to people who have repeatedly displayed malicious behavior. When using biomedical substances to alter ourselves, and thereby the ways we morally deliberate, we also have to take into account the communities in which we live, the nature of the people who are a part of those communities, and how we view our relationship to the people we consider to be part of, and outside of, our community (Levy et al. 2014).

Another potential criticism of our view questions the relation between empathy and bias. In this paper we have argued that we should be concerned that moral bioenhancement could exacerbate our biases, particularly the biased ways in which we deploy empathy. One might argue that, even in spite of our biases, bioenhancement would make us more moral. That is, enhancing our treatment of members of our own race (or gender, social class, etc.) can be desirable; at the very minimum, moral bioenhancement can make us treat people we consider part of our “in-group” (how we decide who counts as a member of our in-group is often complicated, contextual, and inconsistent) better than we would if we were not morally bioenhanced. In this case, the members of our in-group would reap the benefits of our moral bioenhancement. However, even if empirical evidence suggests that moral bioenhancement would make us treat people like ourselves in more moral ways, we ought to be concerned that it could also worsen how we treat people we consider to be outside of our in-group.

A study conducted by De Dreu et al. (2011) is an example of such research. In this study, male participants were assigned to two groups: one group was given oxytocin, a synthetic drug that mimics the naturally occurring hormone connected to positive human emotion, while the second group served as the control group. When presented with different moral situations, participants in the oxytocin group were less willing to sacrifice members of their own racial group than members of a different racial group in order to save a group of racially unspecified individuals. Similarly, the control group was also more willing to sacrifice members of a group when those members were racially different from them. This suggests that oxytocin can enhance in-group favoritism but does not lessen our biases against members of the “out-group.”

Although research on oxytocin shows that moral bioenhancement can aid us in treating some people better, it also signals that moral bioenhancement could encourage us to treat people whom we already treat badly even worse. Similarly, Levy et al. (2014) note the social ills that can result from in-group favoritism and the immoral treatment of out-groups. Genocide, terrorism, and the unequal distribution of wealth and political power are all social ills that can result from such favoritism. We ought to be concerned that moral bioenhancement could also increase the likelihood and magnitude of these social ills.

4 Conclusion

In this paper we have raised concerns about a particular view held by some proponents of moral bioenhancement—that more empathy will likely make us more moral people. We have cautioned against this view by showing that, with respect to empathy—a capacity which we recognize is often morally valuable—simply adding more of it may in fact undermine moral agency. We find that increasing empathy would not be effective for addressing empathy’s vulnerability to the various biases that can undermine moral judgment. Enhancing empathy would also undercut our capacity to regulate empathy in ways that make it conducive to altruistic behavior. In both of these cases, a more fine-grained approach to cultivating, redirecting, expanding, and regulating empathy is called for, and such an approach is inconsistent with the more-is-better ethic that characterizes much of the pro-moral bioenhancement literature.

The moral bioenhancement project is a complex one with many unanswered questions. For instance, the plausibility of moral bioenhancement is still questionable, despite some minor advancements in the field. It is also questionable whether moral bioenhancement, even if feasible on a large scale, would be more appropriate and effective than moral enhancement that does not involve biomedical practices, such as moral education. We would also have to determine by what standards we would measure the success of moral bioenhancement and what theory of morality, if any, we would encourage people to adopt. Overall, if the moral bioenhancement project is to be a serious one, namely a project that indeed makes us better to our fellow humans, and possibly better to our physical environment and to the non-human animals with whom we share the planet, it must address these unanswered questions, as well as the questions we have raised about the regulation of empathy and the ways in which our biases influence our moral deliberations.