When we judge an action as morally right or wrong, we rely on our capacity to infer the actor's mental states. Here, we test the hypothesis that the right temporoparietal junction (RTPJ), an area involved in mental state reasoning, is necessary for making moral judgments. In two experiments, we used transcranial magnetic stimulation (TMS) to transiently disrupt neural activity in the RTPJ before moral judgment and during moral judgment. In both experiments, TMS to the RTPJ led participants to rely less on the actor's mental states. A particularly striking effect occurred for attempted harms: Relative to TMS to a control site, TMS to the RTPJ caused participants to judge attempted harms as less morally forbidden and more morally permissible. Thus, interfering with activity in the RTPJ disrupts the capacity to use mental states in moral judgment, especially in the case of attempted harms.
Moral judgments, we expect, ought not to depend on luck. A person should be blamed only for actions and outcomes that were under the person’s control. Yet often, moral judgments appear to be influenced by luck. A father who leaves his child by the bath, after telling his child to stay put and believing that he will stay put, is judged to be morally blameworthy if the child drowns (an unlucky outcome), but not if his child stays put and doesn’t drown. Previous theories of moral luck suggest that this asymmetry reflects primarily the influence of unlucky outcomes on moral judgments. In the current study, we use behavioral methods and fMRI to test an alternative: these moral judgments largely reflect participants’ judgments of the agent’s beliefs. In “moral luck” scenarios, the unlucky agent also holds a false belief. Here, we show that moral luck depends more on false beliefs than bad outcomes. We also show that agents with false beliefs are judged as having less justified beliefs and are therefore judged as more morally blameworthy. The current study lends support to a rationalist account of moral luck: moral luck asymmetries are driven not primarily by outcome bias, but by mental state assessments we endorse as morally relevant, i.e. whether agents are justified in thinking that they won’t cause harm.
Language has been shown to play a key role in the development of a child’s theory of mind, but its role in adult belief reasoning remains unclear. One recent study used verbal and nonverbal interference during a false-belief task to show that accurate belief reasoning in adults necessarily requires language (Newton & de Villiers, 2007). The strength of this inference depends on the cognitive processes that are matched between the verbal and nonverbal interference tasks. Here, we matched the two interference tasks in terms of their effects on spatial working memory. We found equal success on false-belief reasoning during both verbal and nonverbal interference, suggesting that language is not specifically necessary for adult theory of mind.
Simulation theory accounts of mind-reading propose that the observer generates a mental state that matches the state of the target and then uses this state as the basis for an attribution of a similar state to the target. The key proposal is thus that mechanisms that are primarily used online, when a person experiences a kind of mental state, are then co-opted to run simulations of similar states in another person. Here I consider the neuroscientific evidence for this view. I argue that there is substantial evidence for co-opted mechanisms, leading from one individual’s mental state to a matching state in an observer, but there is no evidence that the output of these co-opted mechanisms serves as the basis for mental state attributions. There is also substantial evidence for attribution mechanisms that serve as the basis for mental state attributions, but there is no evidence that these mechanisms receive their input from co-opted mechanisms.
Moral judgment depends critically on theory of mind (ToM), reasoning about mental states such as beliefs and intentions. People assign blame for failed attempts to harm and offer forgiveness in the case of accidents. Here we use fMRI to investigate the role of ToM in moral judgment of harmful vs. helpful actions. Is ToM deployed differently for judgments of blame vs. praise? Participants evaluated agents who produced a harmful, helpful, or neutral outcome, based on a harmful, helpful, or neutral intention; participants made blame and praise judgments. In the right temporo-parietal junction, and, to a lesser extent, the left TPJ and medial prefrontal cortex, the neural response reflected an interaction between belief and outcome factors, for both blame and praise judgments: The response in these regions was highest when participants delivered a negative moral judgment, i.e., assigned blame or withheld praise, based solely on the agent's intent. These results show enhanced attention to mental states for negative moral verdicts based exclusively on mental state information.
Contemporary moral psychology has focused on the notion of a universal moral sense, robust to individual and cultural differences. Yet recent evidence has revealed individual differences in the psychological processes for moral judgment: controlled cognition, mental-state reasoning, and emotional responding. We discuss this evidence and its relation to cross-cultural diversity in morality.
In daily life, perceivers often need to predict and interpret the behavior of group agents, such as corporations and governments. Although research has investigated how perceivers reason about individual members of particular groups, less is known about how perceivers reason about group agents themselves. The present studies investigate how perceivers understand group agents by investigating the extent to which understanding the ‘mind’ of the group as a whole shares important properties and processes with understanding the minds of individuals. Experiment 1 demonstrates that perceivers are sometimes willing to attribute a mental state to a group as a whole even when they are not willing to attribute that mental state to any of the individual members of the group, suggesting that perceivers can reason about the beliefs and desires of group agents over and above those of their individual members. Experiment 2 demonstrates that the degree of activation in brain regions associated with attributing mental states to individuals—i.e., brain regions associated with mentalizing or theory-of-mind, including the medial prefrontal cortex (MPFC), temporo-parietal junction (TPJ), and precuneus—does not distinguish individual from group targets, either when reading statements about those targets' mental states (directed) or when attributing mental states implicitly in order to predict their behavior (spontaneous). Together, these results help to illuminate the processes that support understanding group agents themselves.
Although a second-person neuroscience has high ecological validity, the extent to which a second- versus third-person neuroscience approach fundamentally alters neural patterns of activation requires more careful investigation. Nonetheless, we are hopeful that this new avenue will prove fruitful in significantly advancing our understanding of typical and atypical social cognition.
What is the impact of science on philosophy? In “Experiments in Ethics”, Kwame Anthony Appiah addresses this question for morality and ethics. Appiah suggests that scientific results may undermine moral intuitions by undermining our confidence in the actual sources of our intuitions, or by invalidating our factual assumptions about the causes of human behavior. Appiah worries that scientific results showing situational influences on human behavior force us to abandon the intuition, formalized in virtue ethics, that what matters is “who you are on the inside”. In this review, we agree with Appiah that scientific results in one sense force, and in another sense do not force, us to abandon this intuition. We also propose that Appiah’s worry is due in part to an over-simplified conception of “internal causes”, shared widely among scientists and philosophers. By re-introducing the true richness of internal causes invoked in moral judgments, we hope to relax the tension between scientific results and moral intuitions. Ultimately, we propose that science can undermine and constrain but cannot affirm our commitment to specific moral intuitions.
This chapter presents the advantages of the use of functional regions of interest (fROIs), along with specific concerns about the approach, and provides a reference to work by Karl J. Friston on the subject. Functionally defined ROIs help to test hypotheses about the cognitive functions of particular regions of the brain. fROIs are useful for specifying brain locations and investigating separable components of the mind. The chapter provides an overview of common misconceptions about fROIs, related to assumptions of homogeneity, factorial designs versus independent localizers, the use of a summary measure, and the naming of fROIs.