The psychological and neurobiological processes underlying moral judgement have been the focus of many recent empirical studies1–11. Of central interest is whether emotions play a causal role in moral judgement, and, in parallel, how emotion-related areas of the brain contribute to moral judgement. Here we show that six patients with focal bilateral damage to the ventromedial prefrontal cortex (VMPC), a brain region necessary for the normal generation of emotions and, in particular, social emotions12–14, produce an abnormally ‘utilitarian’ pattern of judgements on moral dilemmas that pit compelling considerations of aggregate welfare against highly emotionally aversive behaviours (for example, having to sacrifice one person’s life to save a number of other lives)7,8. In contrast, the VMPC patients’ judgements were normal in other classes of moral dilemmas. These findings indicate that, for a selective set of moral dilemmas, the VMPC is critical for normal judgements of right and wrong. The findings support a necessary role for emotion in the generation of those judgements.
We examined the effects of order of presentation on the moral judgments of professional philosophers and two comparison groups. All groups showed similar‐sized order effects on their judgments about hypothetical moral scenarios targeting the doctrine of the double effect, the action‐omission distinction, and the principle of moral luck. Philosophers' endorsements of related general moral principles were also substantially influenced by the order in which the hypothetical scenarios had previously been presented. Thus, philosophical expertise does not appear to enhance the stability of moral judgments against this presumably unwanted source of bias, even given familiar types of cases and principles.
Is moral judgment accomplished by intuition or conscious reasoning? An answer demands a detailed account of the moral principles in question. We investigated three principles that guide moral judgments: (a) Harm caused by action is worse than harm caused by omission, (b) harm intended as the means to a goal is worse than harm foreseen as the side effect of a goal, and (c) harm involving physical contact with the victim is worse than harm involving no physical contact. Asking whether these principles are invoked to explain moral judgments, we found that subjects generally appealed to the first and third principles in their justifications, but not to the second. This finding has significance for methods and theories of moral psychology: The moral principles used in judgment must be directly compared with those articulated in justification, and doing so shows that some moral principles are available to conscious reasoning whereas others are not.
Recent research in moral psychology has attempted to characterize patterns of moral judgments of actions in terms of the causal and intentional properties of those actions. The present study directly compares the roles of consequence, causation, belief and desire in determining moral judgments. Judgments of the wrongness or permissibility of action were found to rely principally on the mental states of an agent, while judgments of blame and punishment are found to rely jointly on mental states and the causal connection of an agent to a harmful consequence. Also, selectively for judgments of punishment and blame, people who attempt but fail to cause harm are judged more leniently if the harm occurs by independent means than if the harm does not occur at all. An account of these phenomena is proposed that distinguishes two processes of moral judgment: one which begins with harmful consequences and seeks a causally responsible agent, and the other which begins with an action and analyzes the mental states responsible for that action.
Responding to recent concerns about the reliability of the published literature in psychology and other disciplines, we formed the X-Phi Replicability Project to estimate the reproducibility of experimental philosophy. Drawing on a representative sample of 40 x-phi studies published between 2003 and 2015, we enlisted 20 research teams across 8 countries to conduct a high-quality replication of each study in order to compare the results to the original published findings. We found that x-phi studies – as represented in our sample – successfully replicated about 70% of the time. We discuss possible reasons for this relatively high replication rate in the field of experimental philosophy and offer suggestions for best research practices going forward.
We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.
To what extent do moral judgments depend on conscious reasoning from explicitly understood principles? We address this question by investigating one particular moral principle, the principle of the double effect. Using web-based technology, we collected a large data set on individuals' responses to a series of moral dilemmas, asking when harm to innocent others is permissible. Each moral dilemma presented a choice between action and inaction, both resulting in lives saved and lives lost. Results showed that: (1) patterns of moral judgments were consistent with the principle of double effect and showed little variation across differences in gender, age, educational level, ethnicity, religion or national affiliation (within the limited range of our sample population) and (2) a majority of subjects failed to provide justifications that could account for their judgments. These results indicate that the principle of the double effect may be operative in our moral judgments but not open to conscious introspection. We discuss these results in light of current psychological theories of moral cognition, emphasizing the need to consider the unconscious appraisal system that mentally represents the causal and intentional properties of human action.
Rationalization occurs when a person has performed an action and then concocts the beliefs and desires that would have made it rational. Then, people often adjust their own beliefs and desires to match the concocted ones. While many studies demonstrate rationalization, and a few theories describe its underlying cognitive mechanisms, we have little understanding of its function. Why is the mind designed to construct post hoc rationalizations of its behavior, and then to adopt them? This may accomplish an important task: transferring information between the different kinds of processes and representations that influence our behavior. Human decision making does not rely on a single process; it is influenced by reason, habit, instinct, norms, and so on. Several of these influences are not organized according to rational choice. Rationalization extracts implicit information – true beliefs and useful desires – from the influence of these non-rational systems on behavior. This is a useful fiction – fiction, because it imputes reason to non-rational psychological processes; useful, because it can improve subsequent reasoning. More generally, rationalization belongs to the broader class of representational exchange mechanisms, which transfer information between many different kinds of psychological representations that guide our behavior. Representational exchange enables us to represent any information in the manner best suited to the particular tasks that require it, balancing accuracy, efficiency, and flexibility in thought. The theory of representational exchange reveals connections between rationalization and theory of mind, inverse reinforcement learning, thought experiments, and reflective equilibrium.
Research on the capacity to understand others’ minds has tended to focus on representations of beliefs, which are widely taken to be among the most central and basic theory of mind representations. Representations of knowledge, by contrast, have received comparatively little attention and have often been understood as depending on prior representations of belief. After all, how could one represent someone as knowing something if one doesn't even represent them as believing it? Drawing on a wide range of methods across cognitive science, we ask whether belief or knowledge is the more basic kind of representation. The evidence indicates that non-human primates attribute knowledge but not belief, that knowledge representations arise earlier in human development than belief representations, that the capacity to represent knowledge may remain intact in patient populations even when belief representation is disrupted, that knowledge attributions are likely automatic, and that explicit knowledge attributions are made more quickly than equivalent belief attributions. Critically, the theory of mind representations uncovered by these various methods exhibit a set of signature features clearly indicative of knowledge: they are not modality-specific, they are factive, they are not just true belief, and they allow for representations of egocentric ignorance. We argue that these signature features elucidate the primary function of knowledge representation: facilitating learning from others about the external world. This suggests a new way of understanding theory of mind—one that is focused on understanding others’ minds in relation to the actual world, rather than independent from it.
What are the criteria people use when they judge that other people did something intentionally? This question has motivated a large and growing literature both in philosophy and in psychology. It has become a topic of particular concern to the nascent field of experimental philosophy, which uses empirical techniques to understand folk concepts. We present new data that hint at some of the underlying psychological complexities of folk ascriptions of intentional action and at distinctions both between diverse concepts and between associated mechanisms.
Ordinary people often make moral judgments that are consistent with philosophical principles and legal distinctions. For example, they judge killing as worse than letting die, and harm caused as a necessary means to a greater good as worse than harm caused as a side-effect (Cushman, Young, & Hauser, 2006). Are these patterns of judgment produced by mechanisms specific to the moral domain, or do they derive from other psychological domains? We show that the action/omission and means/side-effect distinctions affect nonmoral representations and provide evidence that their role in moral judgment is mediated by these nonmoral psychological representations. Specifically, the action/omission distinction affects moral judgment primarily via causal attribution, while the means/side-effect distinction affects moral judgment via intentional attribution. We suggest that many of the specific patterns evident in our moral judgments in fact derive from nonmoral psychological mechanisms, and especially from the processes of causal and intentional attribution.
An extensive body of research suggests that the distinction between doing and allowing plays a critical role in shaping moral appraisals. Here, we report evidence from a pair of experiments suggesting that the converse is also true: moral appraisals affect doing/allowing judgments. Specifically, morally bad behavior is more likely to be construed as actively ‘doing’ than as passively ‘allowing’. This finding adds to a growing list of folk concepts influenced by moral appraisal, including causation and intentional action. We therefore suggest that the present finding favors the view that moral appraisal plays a pervasive role in shaping diverse cognitive representations across multiple domains.
Studies of normal individuals reveal an asymmetry in the folk concept of intentional action: an action is more likely to be thought of as intentional when it is morally bad than when it is morally good. One interpretation of these results comes from the hypothesis that emotion plays a critical mediating role in the relationship between an action’s moral status and its intentional status. According to this hypothesis, the negative emotional response triggered by a morally bad action drives the attribution of intent to the actor, or the judgment that the actor acted intentionally. We test this hypothesis by presenting cases of morally bad and morally good action to seven individuals with deficits in emotional processing resulting from damage to the ventromedial prefrontal cortex (VMPC). If normal emotional processing is necessary for the observed asymmetry, then individuals with VMPC lesions should show no asymmetry. Our results provide no support for this hypothesis: like normal individuals, those with VMPC lesions showed the same asymmetry, tending to judge that an action was intentional when it was morally bad but not when it was morally good. Based on this finding, we suggest that normal emotional processing is not responsible for the observed asymmetry of intentional attributions and thus does not mediate the relationship between an action’s moral status and its intentional status.
Is the basis of criminality an act that causes harm, or an act undertaken with the belief that one will cause harm? The present study takes a cognitive neuroscience approach to investigating how information about an agent’s beliefs and an action’s consequences contribute to moral judgment. We build on prior developmental evidence showing that these factors contribute differentially to the young child’s moral judgments coupled with neurobiological evidence suggesting a role for the right temporoparietal junction (RTPJ) in belief attribution. Participants read vignettes in a 2 × 2 design: protagonists produced either a negative or neutral outcome based on the belief that they were causing the negative outcome (“negative” belief) or the neutral outcome (“neutral” belief). The RTPJ showed significant activation above baseline for all four conditions but was modulated by an interaction between belief and outcome. Specifically, the RTPJ response was highest for cases of attempted harm, where protagonists were condemned for actions that they believed would cause harm to others, even though the harm did not occur. The results not only suggest a general role for belief attribution during moral judgment, but also add detail to our understanding of the interaction between these processes at both the neural and behavioral levels.
How are our actions sorted into those that are intentional and those that are not? The philosophical and psychological literature on this topic is livelier now than ever, and we seek to make a contribution to it here. Our guiding question in this article is easy to state and hard to answer: How do various factors—specifically, features of vignettes—that contribute to majority folk judgments that an action is or is not intentional interact in producing the judgment? In pursuing this question we draw on a number of empirical studies, including some of our own, and we sketch some future studies that would shed light on our topic. We emphasize that the factors that concern us here are limited to features of stories to which subjects respond: examples include the value of the action asked about, the agent’s being indifferent to performing that action, and the agent’s seeking to perform it. We do not discuss underlying cognitive or emotional processes here, nor do we discuss whether respondents are making errors of any kind. (Both of these issues are discussed in Cushman and Mele [forthcoming].) 1. THREE KINDS OF ACTION In the present section we draw some distinctions that set the stage for our discussion of empirical results. Our actions have effects, and an agent’s bringing about such an effect is itself an action. For example, unbeknownst to Ann, her unlocking the door to her house frightened an intruder. That is, at least one effect of Ann’s unlocking her door was the intruder’s fright. Her bringing about this effect—that is, her frightening the intruder—is an action. Side-effect actions, as we understand this...
The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.
When solving problems, like making predictions or choices, people often “sample” possibilities into mind. Here, we consider whether there is structure to the kinds of thoughts people sample by default—that is, without an explicit goal. Across three experiments we found that what comes to mind by default are samples from a probability distribution that combines what people think is likely and what they think is good. Experiment 1 found that the first quantities that come to mind for everyday behaviors and events are quantities that combine what is average and ideal. Experiment 2 found, in a manipulated context, that the distribution of numbers that come to mind resembles the mathematical product of the presented statistical distribution and a (softmax-transformed) prescriptive distribution. Experiment 3 replicated these findings in a visual domain. These results provide insight into the process generating people’s conscious thoughts and invite new questions about the value of thinking about things that are both likely and good.
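As a rough illustration of the model described for Experiment 2, the sketch below computes a "comes to mind" distribution as the normalized product of a statistical distribution and a softmax transform of prescriptive (goodness) ratings. This is not the authors' code; the toy quantity, the numbers, and the `beta` parameter are assumptions made purely for illustration.

```python
import numpy as np

# Candidate quantities for a toy everyday behavior, e.g. "hours of TV watched
# per day" (the quantity and all numbers below are illustrative assumptions).
x = np.arange(0, 11)

# Assumed statistical distribution: how likely each quantity is in the world.
likely = np.exp(-0.5 * ((x - 4.0) / 2.0) ** 2)
likely /= likely.sum()

# Assumed prescriptive ratings: how good each quantity is judged to be
# (here the "ideal" is around 1 hour).
goodness = -np.abs(x - 1.0)

# Softmax-transform the prescriptive ratings into a distribution.
beta = 1.0                               # inverse temperature (assumed)
prescriptive = np.exp(beta * goodness)
prescriptive /= prescriptive.sum()

# Predicted "comes to mind" distribution: normalized product of the two.
comes_to_mind = likely * prescriptive
comes_to_mind /= comes_to_mind.sum()

print(np.round(comes_to_mind, 3))        # mass shifts toward values that are both likely and good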
Moral intuitions are strong, stable, immediate moral beliefs. Moral philosophers ask when they are justified. This question cannot be answered separately from a psychological question: How do moral intuitions arise? Their reliability depends upon their source. This chapter develops and argues for a new theory of how moral intuitions arise—that they arise through heuristic processes best understood as unconscious attribute substitutions. That is, when asked whether something has the attribute of moral wrongness, people unconsciously substitute a different question about a separate but related heuristic attribute (such as emotional impact). Evidence for this view is drawn from psychology and neuroscience, and competing views of moral heuristics are contrasted. It is argued that moral intuitions are not direct perceptions and, in many cases, are unreliable sources of evidence for moral claims.
We review several instances where cognitive research has identified distinct psychological mechanisms for moral judgment that yield conflicting answers to moral dilemmas. In each of these cases, the conflict between psychological mechanisms is paralleled by prominent philosophical debates between different moral theories. A parsimonious account of this data is that key claims supporting different moral theories ultimately derive from the psychological mechanisms that give rise to moral judgments. If this view is correct, it has some important implications for the practice of philosophy. We suggest several ways that moral philosophy and practical reasoning can proceed in the face of discordant theories grounded in diverse psychological mechanisms.
The doctrine of double effect (DDE) is a moral principle that distinguishes between harm we cause as a means to an end and harm that we cause as a side-effect. As a purely descriptive matter, it is well established that the DDE describes a consistent feature of human moral judgment. There are, however, several rival theories of its psychological cause. I review these theories and consider their advantages and disadvantages. Critically, most extant psychological theories of the DDE regard it as an accidental byproduct of cognitive architecture. This may provide philosophers with some reason to question its normative significance.
The thesis we develop in this essay is that all humans are endowed with a moral faculty. The moral faculty enables us to produce moral judgments on the basis of the causes and consequences of actions. As an empirical research program, we follow the framework of modern linguistics.1 The spirit of the argument dates back at least to the economist Adam Smith (1759/1976) who argued for something akin to a moral grammar, and more recently, to the political philosopher John Rawls (1971). The logic of the argument, however, comes from Noam Chomsky’s thinking on language specifically and the nature of knowledge more generally (Chomsky, 1986, 1988, 2000; Saporta, 1978). If the nature of moral knowledge is comparable in some way to the nature of linguistic knowledge, as defended recently by Harman (1977), Dwyer (1999, 2004), and Mikhail (2000; in press), then what should we expect to find when we look at the anatomy of our moral faculty? Is there a grammar, and if so, how can the moral grammarian uncover its structure? Are we aware of our moral grammar, its method of operation, and its moment-to-moment functioning in our judgments? Is there a universal moral grammar that allows each child to build a particular moral grammar? Once acquired, are different moral grammars mutually incomprehensible in the same way that a native Chinese speaker finds a native Italian speaker incomprehensible? How does the child acquire a particular moral grammar, especially if her experiences are impoverished relative to the moral judgments she makes? Are there certain forms of brain damage that disrupt moral competence but leave other forms of reasoning intact? And how did this machinery evolve, and for what particular adaptive function? We will have more to say about many of these questions later on, and Hauser (2006) develops others. However, in order to flesh out the key ideas and particular empirical research paths, let us turn to some of the central questions in the study of our language faculty...
Conservatives and liberals disagree sharply on matters of morality and public policy. We propose a novel account of the psychological basis of these differences. Specifically, we find that conservatives tend to emphasize the intrinsic value of actions during moral judgment, in part by mentally simulating themselves performing those actions, while liberals instead emphasize the value of the expected outcomes of the action. We then demonstrate that a structural emphasis on actions is linked to the condemnation of victimless crimes, a distinctive feature of conservative morality. Next, we find that the conservative and liberal structural approaches to moral judgment are associated with their corresponding patterns of reliance on distinct moral foundations. In addition, the structural approach uniquely predicts that conservatives will be more opposed to harm in circumstances like the well-known trolley problem, a result which we replicate. Finally, we show that the structural approaches of conservatives and liberals are partly linked to underlying cognitive styles. Collectively, these findings forge a link between two important yet previously independent lines of research in political psychology: cognitive style and moral foundations theory.
Humans have a strong sense of who should be punished, when, and how. Many features of these intuitions are consistent with a simple adaptive model: Punishment evolved as a mechanism to teach social partners how to behave in future interactions. Yet, it is clear that punishment as practiced in modern contexts transcends any biologically evolved mechanism; it also depends on cultural institutions including the criminal justice system and many smaller analogs in churches, corporations, clubs, classrooms, and so on. These institutions can be thought of as a kind of ‘exaptation’: a culturally evolved set of norms that exploits biologically evolved intuitions about when punishment is deserved in order to achieve cooperative benefits for social groups.
The commentaries suggest many important improvements to the target article. They clearly distinguish two varieties of rationalization – the traditional “motivated reasoning” model, and the proposed representational exchange model – and show that they have distinct functions and consequences. They describe how representational exchange occurs not only by post hoc rationalization but also by ex ante rationalization and other more dynamic processes. They argue that the social benefits of representational exchange are at least as important as its direct personal benefits. Finally, they construe our search for meaning, purpose, and narrative – both individually and collectively – as a variety of representational exchange. The result is a theory of rationalization as representational exchange both wider in scope and better defined in mechanism.
To understand the structure of moral emotions poses a difficult challenge. For instance, why do liberals and conservatives see some moral issues similarly, but others starkly differently? Or, why does punishment depend on accidental variation in the severity of a harmful outcome, while judgments of wrongfulness or character do not? To resolve the complex design of morality, it helps to think in functional terms. Whether through learning, cultural evolution or natural selection, moral emotions will tend to guide behavior adaptively in ordinary social situations. Thus, considering possible functions of morality can help us to comprehend its form.
People often engage in “offline simulation”, considering what would happen if they performed certain actions in the future, or had performed different actions in the past. Prior research shows that these simulations are biased towards actions a person considers to be good—i.e., likely to pay off. We ask whether, and why, this bias might be adaptive. Through computational experiments we compare five agents who differ only in the way they engage in offline simulation, across a variety of different environment types. Broadly speaking, our experiments reveal that simulating actions one already regards as good does in fact confer an advantage in downstream decision making, although this general pattern interacts with features of the environment in important ways. We contrast this bias with alternatives such as simulating actions whose outcomes are instead uncertain.
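To make the kind of computational comparison described above concrete, here is a minimal hypothetical sketch, not the authors' code: the bandit environment, the two policies, and all parameters are assumptions. Two agents learn action values from a few real interactions plus offline simulations drawn from a noisy internal model, and differ only in whether simulated actions are sampled in proportion to current value estimates ("value-biased") or uniformly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bandit environment with unknown true payoffs (illustrative assumption).
N_ACTIONS = 5
true_payoffs = rng.normal(0.0, 1.0, size=N_ACTIONS)

def run_agent(policy, n_real=20, n_sim=50):
    """Learn action values from a few real pulls plus offline simulations,
    where `policy` controls which actions get simulated."""
    est = np.zeros(N_ACTIONS)      # estimated action values
    counts = np.ones(N_ACTIONS)    # update counts (running-mean bookkeeping)
    # A noisy internal model of the environment, used only for offline simulation.
    model = true_payoffs + rng.normal(0.0, 0.5, size=N_ACTIONS)

    # A few real interactions with the environment.
    for _ in range(n_real):
        a = rng.integers(N_ACTIONS)
        sample = true_payoffs[a] + rng.normal(0.0, 1.0)
        est[a] += (sample - est[a]) / counts[a]
        counts[a] += 1

    # Offline simulation: choose which actions to "think about".
    for _ in range(n_sim):
        if policy == "value-biased":
            probs = np.exp(est - est.max())
            probs /= probs.sum()               # softmax: prefer currently good actions
            a = rng.choice(N_ACTIONS, p=probs)
        else:                                   # "uniform"
            a = rng.integers(N_ACTIONS)
        sample = model[a] + rng.normal(0.0, 0.5)
        est[a] += (sample - est[a]) / counts[a]
        counts[a] += 1

    # Downstream decision: commit to the action with the highest estimate.
    return true_payoffs[int(np.argmax(est))]

for policy in ("value-biased", "uniform"):
    payoff = np.mean([run_agent(policy) for _ in range(500)])
    print(f"{policy}: mean payoff of chosen action = {payoff:.3f}")
```

Averaging the downstream payoff over many runs gives a crude sense of whether value-biased simulation helps in this toy setting; the agents and environments in the paper are, of course, richer than this sketch.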
When making a moral judgment, people largely care about two factors: Who did it (causal responsibility), and did they intend to (intention)? Since Piaget's seminal studies, we have known that as children mature, they gradually place greater emphasis on intention, and less on mere bad outcomes, when making moral judgments. Today, we know that this developmental shift has several signature properties. Recently, it has been shown that when adults make moral judgments under cognitive load, they exhibit a pattern similar to young children; that is, their judgments become notably more outcome based. Here, we show that all of the same signature properties that accompany the outcome‐to‐intent shift in childhood characterize the “intent‐to‐outcome” shift obtained under cognitive load in adults. These findings hold important implications for current theories of moral judgment.
In Natural Justice Binmore offers a game-theoretic map to the landscape of human morality. Following a long tradition of such accounts, Binmore’s argument concerns the forces of biological and cultural evolution that have shaped our judgments about the appropriate distribution of resources. In this sense, Binmore focuses on the morality of outcomes. This is a valuable perspective to which we add a friendly amendment from our own research: moral judgments appear to depend on process just as much as outcome. What matters is not just that the butler is dead, but who killed him, how, and for what reason. Thus, a complete understanding of ‘natural justice’ will entail an account not only of evolutionary pressures, but also of the psychological mechanisms upon which they act.