Marc Hauser puts forth the theory that humans have evolved a universal moral instinct, unconsciously propelling us to deliver judgments of right and wrong independent of gender, education, and religion. Combining his cutting-edge research with the latest findings in cognitive psychology, linguistics, neuroscience, evolutionary biology, economics, and anthropology, Hauser explores the startling implications of his provocative theory vis-à-vis contemporary bioethics, religion, the law, and our everyday lives.
The psychological and neurobiological processes underlying moral judgement have been the focus of many recent empirical studies [1–11]. Of central interest is whether emotions play a causal role in moral judgement, and, in parallel, how emotion-related areas of the brain contribute to moral judgement. Here we show that six patients with focal bilateral damage to the ventromedial prefrontal cortex (VMPC), a brain region necessary for the normal generation of emotions and, in particular, social emotions [12–14], produce an abnormally ‘utilitarian’ pattern of judgements on moral dilemmas that pit compelling considerations of aggregate welfare against highly emotionally aversive behaviours (for example, having to sacrifice one person’s life to save a number of other lives) [7,8]. In contrast, the VMPC patients’ judgements were normal in other classes of moral dilemmas. These findings indicate that, for a selective set of moral dilemmas, the VMPC is critical for normal judgements of right and wrong. The findings support a necessary role for emotion in the generation of those judgements.
Is moral judgment accomplished by intuition or conscious reasoning? An answer demands a detailed account of the moral principles in question. We investigated three principles that guide moral judgments: (a) Harm caused by action is worse than harm caused by omission, (b) harm intended as the means to a goal is worse than harm foreseen as the side effect of a goal, and (c) harm involving physical contact with the victim is worse than harm involving no physical contact. Asking whether these principles are invoked to explain moral judgments, we found that subjects generally appealed to the first and third principles in their justifications, but not to the second. This finding has significance for methods and theories of moral psychology: The moral principles used in judgment must be directly compared with those articulated in justification, and doing so shows that some moral principles are available to conscious reasoning whereas others are not.
To what extent do moral judgments depend on conscious reasoning from explicitly understood principles? We address this question by investigating one particular moral principle, the principle of the double effect. Using web-based technology, we collected a large data set on individuals' responses to a series of moral dilemmas, asking when harm to innocent others is permissible. Each moral dilemma presented a choice between action and inaction, both resulting in lives saved and lives lost. Results showed that: (1) patterns of moral judgments were consistent with the principle of double effect and showed little variation across differences in gender, age, educational level, ethnicity, religion or national affiliation (within the limited range of our sample population) and (2) a majority of subjects failed to provide justifications that could account for their judgments. These results indicate that the principle of the double effect may be operative in our moral judgments but not open to conscious introspection. We discuss these results in light of current psychological theories of moral cognition, emphasizing the need to consider the unconscious appraisal system that mentally represents the causal and intentional properties of human action.
When we judge an action as morally right or wrong, we rely on our capacity to infer the actor's mental states. Here, we test the hypothesis that the right temporoparietal junction (RTPJ), an area involved in mental state reasoning, is necessary for making moral judgments. In two experiments, we used transcranial magnetic stimulation (TMS) to disrupt neural activity in the RTPJ transiently before moral judgment and during moral judgment. In both experiments, TMS to the RTPJ led participants to rely less on the actor's mental states. A particularly striking effect occurred for attempted harms: Relative to TMS to a control site, TMS to the RTPJ caused participants to judge attempted harms as less morally forbidden and more morally permissible. Thus, interfering with activity in the RTPJ disrupts the capacity to use mental states in moral judgment, especially in the case of attempted harms.
Studies of normal individuals reveal an asymmetry in the folk concept of intentional action: an action is more likely to be thought of as intentional when it is morally bad than when it is morally good. One interpretation of these results comes from the hypothesis that emotion plays a critical mediating role in the relationship between an action’s moral status and its intentional status. According to this hypothesis, the negative emotional response triggered by a morally bad action drives the attribution of intent to the actor, or the judgment that the actor acted intentionally. We test this hypothesis by presenting cases of morally bad and morally good action to seven individuals with deficits in emotional processing resulting from damage to the ventromedial prefrontal cortex (VMPC). If normal emotional processing is necessary for the observed asymmetry, then individuals with VMPC lesions should show no asymmetry. Our results provide no support for this hypothesis: like normal individuals, those with VMPC lesions showed the same asymmetry, tending to judge that an action was intentional when it was morally bad but not when it was morally good. Based on this finding, we suggest that normal emotional processing is not responsible for the observed asymmetry of intentional attributions and thus does not mediate the relationship between an action’s moral status and its intentional status.
Is the basis of criminality an act that causes harm, or an act undertaken with the belief that one will cause harm? The present study takes a cognitive neuroscience approach to investigating how information about an agent’s beliefs and an action’s consequences contribute to moral judgment. We build on prior developmental evidence showing that these factors contribute differentially to the young child’s moral judgments coupled with neurobiological evidence suggesting a role for the right temporoparietal junction (RTPJ) in belief attribution. Participants read vignettes in a 2 × 2 design: protagonists produced either a negative or neutral outcome based on the belief that they were causing the negative outcome (“negative” belief) or the neutral outcome (“neutral” belief). The RTPJ showed significant activation above baseline for all four conditions but was modulated by an interaction between belief and outcome. Specifically, the RTPJ response was highest for cases of attempted harm, where protagonists were condemned for actions that they believed would cause harm to others, even though the harm did not occur. The results not only suggest a general role for belief attribution during moral judgment, but also add detail to our understanding of the interaction between these processes at both the neural and behavioral levels.
Anthropologists have provided rich field descriptions of the norms and conventions governing behavior and interactions in small-scale societies. Here, we add a further dimension to this work by presenting hypothetical moral dilemmas involving harm to a small-scale, agrarian Mayan population, with the specific goal of exploring the hypothesis that certain moral principles apply universally. We presented Mayan participants with moral dilemmas translated into their native language, Tseltal. Paralleling several studies carried out with educated subjects living in large-scale, developed nations, the Mayan participants judged harms caused as the means to a greater good as more forbidden than harms caused as a side-effect (i.e., side-effect bias). However, unlike these other populations living in large-scale societies, as well as a more educated and less rural Mayan comparison group, the target rural Mayan participants did not judge actions causing harm as worse than omissions (i.e., omission bias). A series of probes targeting the action-omission distinction suggests that the absence of an omission bias among the rural Mayan participants was not due to difficulties comprehending the dilemmas, using the judgment scale, or in attributing a greater causal role to actions over omissions. Thus, while the moral distinction between means and side-effect may be more universal, the moral distinction between actions and omissions appears to be open to greater cross-cultural variation. We discuss these results in light of issues concerning the role of biological constraints and cultural variation in moral decision-making, as well as the limitations of such experimental, cross-cultural research.
Inspired by the success of generative linguistics and transformational grammar, proponents of the linguistic analogy (LA) in moral psychology hypothesize that careful attention to folk-moral judgments is likely to reveal a small set of implicit rules and structures responsible for the ubiquitous and apparently unbounded capacity for making moral judgments. As a theoretical hypothesis, LA thus requires a rich description of the computational structures that underlie mature moral judgments, an account of the acquisition and development of these structures, and an analysis of those components of the moral system that are uniquely human and uniquely moral. In this paper we present the theoretical motivations for adopting LA in the study of moral cognition: (a) the distinction between competence and performance, (b) poverty of stimulus considerations, and (c) adopting the computational level as the proper level of analysis for the empirical study of moral judgment. With these motivations in hand, we review recent empirical findings that have been inspired by LA and which provide evidence for at least two predictions of LA: (a) the computational processes responsible for folk-moral judgment operate over structured representations of actions and events, as well as coding for features of agency and outcomes; and (b) folk-moral judgments are the output of a dedicated moral faculty and are largely immune to the effects of context. In addition, we highlight the complexity of the interfaces between the moral faculty and other cognitive systems external to it (e.g., number systems). We conclude by reviewing the potential utility of the theoretical and empirical tools of LA for future research in moral psychology.
The demise of behaviorism has made ethologists more willing to ascribe mental states to animals. However, a methodology that can avoid the charge of excessive anthropomorphism is needed. We describe a series of experiments that could help determine whether the behavior of nonhuman animals towards dead conspecifics is concept mediated. These experiments form the basis of a general point. The behavior of some animals is clearly guided by complex mental processes. The techniques developed by comparative psychologists and behavioral ecologists are able to provide us with the tools to critically evaluate hypotheses concerning the continuity between human minds and animal minds.
Moral judgments, whether delivered in ordinary experience or in the courtroom, depend on our ability to infer intentions. We forgive unintentional or accidental harms and condemn failed attempts to harm. Prior work demonstrates that patients with damage to the ventromedial prefrontal cortex (VMPC) deliver abnormal judgments in response to moral dilemmas and that these patients are especially impaired in triggering emotional responses to inferred or abstract events, as opposed to real or actual outcomes. We therefore predicted that VMPC patients would deliver abnormal moral judgments of harmful intentions in the absence of harmful outcomes, as in failed attempts to harm. This prediction was confirmed in the current study: VMPC patients judged attempted harms, including attempted murder, as more morally permissible relative to controls. These results highlight the critical role of the VMPC in processing harmful intent for moral judgment.
Developmental psychologists have long argued that the capacity to distinguish moral and conventional transgressions develops across cultures and emerges early in life. Children reliably treat moral transgressions as more wrong, more punishable, independent of structures of authority, and universally applicable. However, previous studies have not yet examined the role of these features in mature moral cognition. Using a battery of adult-appropriate cases (including vehicular and sexual assault, reckless behavior, and violations of etiquette and social contracts) we demonstrate that these features also distinguish moral from conventional transgressions in mature moral cognition. Each hypothesized moral transgression was treated as strongly and clearly immoral. However, our data suggest that although the majority of hypothesized conventional transgressions also form an obvious cluster, social conventions seem to lie along a continuum that stretches from mere matters of personal preference (e.g., getting tattoos or wearing black shoes with a brown belt) to transgressions that are treated as matters for legitimate social sanction (e.g., violating traffic laws or not paying your taxes). We use these findings to discuss issues of universality, domain-specificity, and the importance of using a well-studied set of moral scenarios to examine clinical populations and the underlying neural architecture of moral cognition.
Research on moral psychology has frequently appealed to three apparently consistent patterns: Males are more likely to engage in transgressions involving harm than females; educated people are likely to be more thorough in their moral deliberations because they have better resources for rationally navigating and evaluating complex information; political affiliations and religious ideologies are an important source of our moral principles. Here, we provide a test of how four factors ‐ gender, education, politics and religion ‐ affect intuitive moral judgments in unfamiliar situations. Using a large-scale sample of participants who voluntarily logged on to the internet-based Moral Sense Test, we analyzed responses to 145 unique moral and conventional scenarios that varied widely in content. Although each demographic or cultural factor sometimes yielded a statistically significant difference in the predicted direction, these differences were consistently associated with extremely small effect sizes. We conclude that gender, education, politics and religion are likely to be relatively insignificant for moral judgments of unfamiliar scenarios. We discuss these results in light of current debates concerning the mechanisms underlying our moral judgments and, especially, the idea that we share a universal moral sense that constrains the range of cross-cultural variation.
Altruistic self-sacrifice is rare, supererogatory, and not to be expected of any rational agent; but, the possibility of giving up one's life for the common good has played an important role in moral theorizing. For example, Judith Jarvis Thomson (2008) has argued in a recent paper that intuitions about altruistic self-sacrifice suggest that something has gone wrong in philosophical debates over the trolley problem. We begin by showing that her arguments face a series of significant philosophical objections; however, our project is as much constructive as critical. Building on Thomson's philosophical argument, we report the results of a study that was designed to examine commonsense intuitions about altruistic self-sacrifice. We find that a surprisingly high proportion of people judge that they should give up their lives to save a small number of unknown strangers. We also find that the willingness to engage in such altruistic self-sacrifice is predicted by a person's religious commitments. Finally, we show that folk-moral judgments are sensitive to agent-relative reasons in a way that diverges in important ways from Thomson's proposed intuitions about the trolley problem. With this in mind, we close with a discussion of the relative merits of folk intuitions and philosophical intuitions in constructing a viable moral theory.
The thesis we develop in this essay is that all humans are endowed with a moral faculty. The moral faculty enables us to produce moral judgments on the basis of the causes and consequences of actions. As an empirical research program, we follow the framework of modern linguistics. The spirit of the argument dates back at least to the economist Adam Smith (1759/1976), who argued for something akin to a moral grammar, and more recently, to the political philosopher John Rawls (1971). The logic of the argument, however, comes from Noam Chomsky’s thinking on language specifically and the nature of knowledge more generally (Chomsky, 1986, 1988, 2000; Saporta, 1978). If the nature of moral knowledge is comparable in some way to the nature of linguistic knowledge, as defended recently by Harman (1977), Dwyer (1999, 2004), and Mikhail (2000; in press), then what should we expect to find when we look at the anatomy of our moral faculty? Is there a grammar, and if so, how can the moral grammarian uncover its structure? Are we aware of our moral grammar, its method of operation, and its moment-to-moment functioning in our judgments? Is there a universal moral grammar that allows each child to build a particular moral grammar? Once acquired, are different moral grammars mutually incomprehensible in the same way that a native Chinese speaker finds a native Italian speaker incomprehensible? How does the child acquire a particular moral grammar, especially if her experiences are impoverished relative to the moral judgments she makes? Are there certain forms of brain damage that disrupt moral competence but leave other forms of reasoning intact? And how did this machinery evolve, and for what particular adaptive function? We will have more to say about many of these questions later on, and Hauser (2006) develops others. However, in order to flesh out the key ideas and particular empirical research paths, let us turn to some of the central questions in the study of our language faculty.
What are the brain and cognitive systems that allow humans to play baseball, compute square roots, cook soufflés, or navigate the Tokyo subways? It may seem that studies of human infants and of non-human animals will tell us little about these abilities, because only educated, enculturated human adults engage in organized games, formal mathematics, gourmet cooking, or map-reading. In this chapter, we argue against this seemingly sensible conclusion. When human adults exhibit complex, uniquely human, culture-specific skills, they draw on a set of psychological and neural mechanisms with two distinctive properties: they evolved before humanity and thus are shared with other animals, and they emerge early in human development and thus are common to infants, children, and adults. These core knowledge systems form the building blocks for uniquely human skills. Without them we wouldn’t be able to learn about different kinds of games, mathematics, cooking, or maps. To understand what is special about human intelligence, therefore, we must study both the core knowledge systems on which it rests and the mechanisms by which these systems are orchestrated to permit new kinds of concepts and cognitive processes. What is core knowledge? A wealth of research on non-human primates and on human infants suggests that a system of core knowledge is characterized by four properties (Hauser, 2000; Spelke, 2000). First, it is domain-specific: each system functions to represent particular kinds of entities such as conspecific agents, manipulable objects, places in the environmental layout, and numerosities. Second, it is task-specific: each system uses its representations to address specific questions about the world, such as “who is this?” [face recognition], “what does this do?” [categorization of artifacts], “where am I?” [spatial orientation], and “how many are here?” [enumeration]. Third, it is relatively encapsulated: each uses only a subset of the information delivered by an animal’s input systems and sends information only to a subset of the animal’s output systems.
The Argument from Disagreement (AD) (Mackie, 1977) depends upon empirical evidence for ‘fundamental’ moral disagreement (FMD) (Doris and Stich, 2005; Doris and Plakias, 2008). Research on the Southern ‘culture of honour’ (Nisbett and Cohen, 1996) has been presented as evidence for FMD between Northerners and Southerners within the US. We raise some doubts about the usefulness of such data in settling AD. We offer an alternative based on recent work in moral psychology that targets the potential universality of morally significant distinctions (e.g. means vs. side-effects, actions vs. omissions). More specifically, we argue that a recent study showing that a rural Mayan population fails to perceive as morally significant the distinction between actions and omissions provides a plausible case of FMD between Mayans and Westerners.
Means-based harms are frequently seen as forbidden, even when they lead to a greater good. But, are there mitigating factors? Results from five experiments show that judgments about means-based harms are modulated by: 1) Pareto considerations (was the harmed person made worse off?), 2) the directness of physical contact, and 3) the source of the threat (e.g. mechanical, human, or natural). Pareto harms are more permissible than non-Pareto harms, Pareto harms requiring direct physical contact are less permissible than those that do not, and harming someone who faces a mechanical threat is less permissible than harming someone who faces a non-mechanical threat. These results provide insight into the rich representational structure underlying folk-moral computations, including both the independent and interacting roles of the inevitability, directness and source of harm.
Cooperation is common across nonhuman animal taxa, from the hunting of large game in lions to the harvesting of building materials in ants. Theorists have proposed a number of models to explain the evolution of cooperative behavior. These ultimate explanations, however, rarely consider the proximate constraints on the implementation of cooperative behavior. Here we review several types of cooperation and propose a suite of cognitive abilities required for each type to evolve. We propose that several types of cooperation, though theoretically possible and functionally adaptive, have not evolved in some animal species because of cognitive constraints. We argue, therefore, that future modeling efforts and experimental investigations into the adaptive function of cooperation in animals must be grounded in a realistic assessment of the psychological ingredients required for cooperation. Such an approach can account for the puzzling distribution of cooperative behaviors across taxa, especially the seemingly unique occurrence of cooperation observed in our own species.
Donald Griffin has suggested that cognitive ethologists can use communication between non-human animals as a "window" into animal minds. Underlying this metaphor seems to be a conception of cognition as information processing and communication as information transfer from signaller to receiver. We examine various analyses of information and discuss how these analyses affect an ongoing debate among ethologists about whether the communicative signals of some animals should be interpreted as referential signals or whether emotional accounts of such signals are adequate. We discuss the food-calling behavior of a group of rhesus monkeys to develop these issues.