Recent studies indicate that indicative conditionals like "If people wear masks, the spread of Covid-19 will be diminished" require a probabilistic dependency between their antecedents and consequents to be acceptable (Skovgaard-Olsen et al., 2016). But it is easy to slip from this claim to the thesis that indicative conditionals are acceptable only if this probabilistic dependency results from a causal relation between antecedent and consequent. According to Pearl (2009), understanding a causal relation involves multiple, hierarchically organized conceptual dimensions: prediction, intervention, and counterfactual dependence. In a series of experiments, we test the hypothesis that these conceptual dimensions are differentially encoded in indicative and counterfactual conditionals. If this hypothesis holds, then there are limits to how much of a causal relation is captured by indicative conditionals alone. Our results show that the acceptance of indicative and counterfactual conditionals can become dissociated, and that the acceptance of both is needed for accepting a causal relation between two co-occurring events. We critically discuss the implications of these findings for the hypothesis above and for recent debates at the intersection of the psychology of reasoning and causal judgment. Our findings are consistent with viewing indicative conditionals as answering predictive queries that require evidential relevance (even in the absence of direct causal relations), whereas counterfactual conditionals specifically target causal relevance. Finally, we discuss the implications of our results for the still unsolved question of how reasoners succeed in constructing causal models from verbal descriptions.
The authors challenge the reigning “causal power framework” as an explanation for whether a particular outcome was actually caused by a specific potential cause. They test a new measure of causal attribution in two experiments by embedding the measure within the Structure Induction model of Singular Causation (SISC, Stephan & Waldmann, 2016).
The past decade has seen a renewed interest in moral psychology. A unique feature of the present endeavor is its unprecedented interdisciplinarity. For the first time, cognitive, social, and developmental psychologists, neuroscientists, experimental philosophers, evolutionary biologists, and anthropologists collaborate to study the same or overlapping phenomena. This review focuses on moral judgments and is written from the perspective of cognitive psychologists interested in theories of the cognitive and affective processes underlying judgments in moral domains. The review will first present and discuss a variety of different theoretical and empirical approaches, including both behavioral and neuroscientific studies. We will then show how these theories can be applied to a selected number of specific research topics that have attracted particular interest in recent years, including the distinction between moral and conventional rules, moral dilemmas, the role of intention, and sacred/protected values. One overarching question we will address throughout the chapter is whether moral cognitions are distinct and special, or whether they can be subsumed under more domain-general mechanisms.
Causal queries about singular cases, which inquire whether specific events were causally connected, are prevalent in daily life and important in professional disciplines such as the law, medicine, or engineering. Because causal links cannot be directly observed, singular causation judgments require an assessment of whether a co-occurrence of two events c and e was causal or simply coincidental. How can this decision be made? Building on previous work by Cheng and Novick (2005) and Stephan and Waldmann (2018), we propose a computational model that combines information about the causal strengths of the potential causes with information about their temporal relations to derive answers to singular causation queries. The relative causal strengths of the potential cause factors are relevant because weak causes are more likely to fail to generate effects than strong causes. But even a strong cause factor does not necessarily need to be causal in a singular case because it could have been preempted by an alternative cause. We here show how information about causal strength and about two different temporal parameters, the potential causes' onset times and their causal latencies, can be formalized and integrated into a computational account of singular causation. Four experiments are presented in which we tested the validity of the model. The results showed that people integrate the different types of information as predicted by the new model.
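The interplay of causal strength, onset times, causal latencies, and preemption described in this abstract can be illustrated with a minimal Monte Carlo sketch. This is not the authors' published model; the function name, parameters, and sampling scheme are illustrative assumptions chosen to show why a strong cause can still fail to be the singular cause when an alternative cause fires earlier.

```python
import random

def p_singular_cause(w_c, w_a, onset_c, onset_a, lat_c, lat_a,
                     n=100_000, seed=0):
    """Illustrative sketch (not the published model): estimate the
    probability that cause c, rather than alternative cause a, actually
    generated effect e on an occasion where e occurred. Each cause
    succeeds with its causal strength (w_c, w_a); a successful cause
    delivers e at its onset time plus an exponentially distributed
    causal latency; the earlier arrival preempts the other cause."""
    rng = random.Random(seed)
    by_c = occurred = 0
    for _ in range(n):
        t_c = onset_c + rng.expovariate(1 / lat_c) if rng.random() < w_c else None
        t_a = onset_a + rng.expovariate(1 / lat_a) if rng.random() < w_a else None
        if t_c is None and t_a is None:
            continue  # e did not occur on this trial
        occurred += 1
        if t_c is not None and (t_a is None or t_c < t_a):
            by_c += 1  # c succeeded and arrived first
    return by_c / occurred
```

With equal strengths and identical timing the estimate sits near 0.5 by symmetry; giving the alternative cause an earlier onset (or a shorter latency) pulls the singular-causation judgment for c down, even though c's strength is unchanged.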
Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.
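Why error-source assumptions show up as Markov violations can be made concrete with a small simulation. The sketch below is a generic illustration under assumed parameters, not the authors' augmented model: in a common-cause structure C → E1, C → E2, a single error source shared by both causal links keeps the effects dependent given the cause, whereas independent error sources restore the Markov condition.

```python
import random

def markov_violation(shared_error, w=0.9, p_fail=0.3, n=200_000, seed=1):
    """Illustrative sketch (parameters hypothetical): common-cause model
    C -> E1, C -> E2, with C present on every trial. Each link succeeds
    with strength w unless disabled by an error source occurring with
    probability p_fail. Returns the size of the Markov violation:
    P(E2=1 | C=1, E1=1) - P(E2=1 | C=1, E1=0)."""
    rng = random.Random(seed)
    counts = {True: [0, 0], False: [0, 0]}  # E1 value -> [trials, E2 successes]
    for _ in range(n):
        if shared_error:
            fail1 = fail2 = rng.random() < p_fail  # one error disables both links
        else:
            fail1, fail2 = rng.random() < p_fail, rng.random() < p_fail
        e1 = (not fail1) and rng.random() < w
        e2 = (not fail2) and rng.random() < w
        counts[e1][0] += 1
        counts[e1][1] += e2
    return counts[True][1] / counts[True][0] - counts[False][1] / counts[False][0]
```

With a shared error source, observing E1 is evidence that the error did not occur, so E2 becomes more likely given E1 (a sizeable violation); with independent error sources the difference vanishes up to sampling noise.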
Adults’ intentionality judgments regarding an action are influenced by their moral evaluation of this action. This is clearly indicated in the so-called side-effect effect: when told about an action (e.g. implementing a business plan) with an intended primary effect (e.g. raising profits) and a foreseen side effect (e.g. harming/helping the environment), subjects tend to interpret the bringing about of the side effect more often as intentional when it is negative (harming the environment) than when it is positive (helping the environment). From a cognitive point of view, it is unclear whether the side-effect effect is driven by the moral status of the side effects specifically, or rather more generally by their normative status. And from a developmental point of view, little is known about the ontogenetic origins of the effect. The present study therefore explored the cognitive foundations and the ontogenetic origins of the side-effect effect by testing 4- to 5-year-old children with scenarios in which a side effect was in accordance with, or violated, a norm. Crucially, the status of the norm was varied to be conventional or moral. Children rated the bringing about of side effects as more intentional when it broke a norm than when it accorded with a norm, irrespective of the type of norm. The side-effect effect is thus an early-developing, more general and pervasive phenomenon, not restricted to morally relevant side effects.
Modern technological means allow for meaningful interaction across arbitrary distances, while human morality evolved in environments in which individuals needed to be spatially close in order to interact. We investigate how people integrate knowledge about modern technology with their ancestral moral dispositions to help relieve nearby suffering. Our first study establishes that spatial proximity between an agent's means of helping and the victims increases people's judgement of helping obligations, even when the agent personally remains far away. We then report and meta-analyse 20 experiments elucidating the cognitive mechanisms behind this effect, which include inferences of increased efficaciousness and personal involvement. Implications of our findings for the scientific understanding of ancestral moral dispositions in modern environments are discussed, as well as suggestions for how these insights might be used to increase charitable giving; our meta-analysis provides a practical example.
In everyday life, people typically observe fragments of causal networks. From this knowledge, people infer how novel combinations of causes they may never have observed together might behave. I report on 4 experiments that address the question of how people intuitively integrate multiple causes to predict a continuously varying effect. Most theories of causal induction in psychology and statistics assume a bias toward linearity and additivity. In contrast, these experiments show that people are sensitive to cues biasing various integration rules. Causes that refer to intensive quantities (e.g., taste) or to preferences (e.g., liking) bias people toward averaging the causal influences, whereas extensive quantities (e.g., strength of a drug) lead to a tendency to add. However, the knowledge underlying these processes is fallible and unstable. Therefore, people are easily influenced by additional task-related context factors, including the way data are presented, the difficulty of the inference task, and transfer from previous tasks. The results of the experiments provide evidence for causal model theory and related theories, which postulate that domain-general representations of causal knowledge are influenced by abstract domain knowledge, data-driven task factors, and processing difficulty.
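The contrast between the two integration rules in this abstract can be sketched in a few lines; the function name and the numbers are illustrative, not taken from the experiments.

```python
def integrate(influences, rule):
    """Illustrative sketch of two rules for combining several causal
    influences into a prediction for a continuous effect. Extensive
    quantities (e.g. drug doses) invite adding the influences;
    intensive quantities or preferences (e.g. taste, liking) invite
    averaging them."""
    if rule == "add":
        return sum(influences)
    if rule == "average":
        return sum(influences) / len(influences)
    raise ValueError(f"unknown integration rule: {rule}")

# Two causes that each produce 8 units of the effect when acting alone:
# additive integration predicts a joint effect of 16, averaging predicts 8.
```

The divergence between the two predictions for the same pair of causes is what lets the experiments diagnose which rule participants apply.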
The goal of the present set of studies is to explore the boundary conditions of category transfer in causal learning. Previous research has shown that people are capable of inducing categories based on causal learning input, and they often transfer these categories to new causal learning tasks. However, occasionally learners abandon the learned categories and induce new ones. Whereas previously it has been argued that transfer is only observed with essentialist categories in which the hidden properties are causally relevant for the target effect in the transfer relation, we here propose an alternative explanation, the unbroken mechanism hypothesis. This hypothesis claims that categories are transferred from a previously learned causal relation to a new causal relation when learners assume a causal mechanism linking the two relations that is continuous and unbroken. The findings of two causal learning experiments support the unbroken mechanism hypothesis.
Research on human causal induction has shown that people have general prior assumptions about causal strength and about how causes interact with the background. We propose that these prior assumptions about the parameters of causal systems manifest themselves not only in estimations of causal strength or the selection of causes but also when deciding between alternative causal structures. In three experiments, we asked subjects to choose which of two observable variables was the cause and which the effect. We found strong evidence that learners have interindividually variable but intraindividually stable priors about causal parameters that express a preference for causal determinism. These priors predict which structure subjects preferentially select. The priors can be manipulated experimentally and appear to be domain-general. Heuristic strategies of structure induction are suggested that can be viewed as simplified implementations of the priors.
I defend the claim that in psychological theories concerned with theoretical or practical rationality there is a constitutive relation between normative and descriptive theories: Normative theories provide idealized descriptive accounts of rational agents. However, we need to resist the temptation to conflate descriptive theories with any specific normative theory. I show how a partial separation is possible.