Knowledge of mechanisms is critical for causal reasoning. We contrasted two possible organizations of causal knowledge: an interconnected causal network, in which events are causally connected without any boundaries delineating discrete mechanisms, or a set of disparate mechanisms ("causal islands"), in which events belonging to different mechanisms are not thought to be related even when they lie on the same causal chain. To distinguish these possibilities, we tested whether people make transitive judgments about causal chains, inferring, given that A causes B and B causes C, that A causes C. Specifically, causal chains schematized as one chunk or mechanism in semantic memory led to transitive causal judgments, whereas chains schematized as multiple chunks led to intransitive judgments despite strong intermediate links. Normative accounts of causal intransitivity could not explain these intransitive judgments.
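To see why strong intermediate links would normatively license a transitive judgment, the point can be put in terms of contingencies. As a hedged sketch, assuming the chain is Markov (C depends on A only through B), an assumption the abstract itself does not state:

\[
\Delta P_{A \to C} \;=\; P(C \mid A) - P(C \mid \neg A)
\;=\; \bigl[P(B \mid A) - P(B \mid \neg A)\bigr]\,\bigl[P(C \mid B) - P(C \mid \neg B)\bigr]
\;=\; \Delta P_{A \to B} \cdot \Delta P_{B \to C}.
\]

If both intermediate contingencies are strong (say, 0.8 each), the end-to-end contingency remains substantial (0.64), which is why intransitive judgments for multi-chunk chains call for an explanation beyond this normative baseline.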
Human behavior is frequently described both in abstract, general terms and in concrete, specific terms. We asked whether these two ways of framing equivalent behaviors shift the inferences people make about the biological and psychological bases of those behaviors. In five experiments, we manipulated whether behaviors were presented concretely (i.e., with reference to a specific person, instantiated in the particular context of that person's life) or abstractly (i.e., with reference to a category of people or behaviors across generalized contexts). People judged concretely framed behaviors to be less biologically based and, on some dimensions, more psychologically based than the same behaviors framed in the abstract. These findings held for both mental disorders (Experiments 1 and 2) and everyday behaviors (Experiments 4 and 5), and they had downstream consequences for the perceived efficacy of disorder treatments (Experiment 3). Implications for science educators, students of science, and members of the lay public are discussed.
When a cause interacts with unobserved factors to produce an effect, the contingency between the observed cause and effect cannot be taken at face value to infer causality. Yet it would be computationally intractable to consider all possible unobserved, interacting factors. Nonetheless, two experiments found that when an unobserved cause is assumed to be fairly stable over time, people can learn about such interactions and adjust their inferences about the causal efficacy of the observed cause. When subjects observed a period in which a cause and effect were associated, followed by a period in which the association was reversed, they did not conclude that the cause lacked efficacy altogether; instead, they inferred an unobserved, interacting cause. The interaction explains why the overall contingency between the cause and effect is low and allows people to conclude nonetheless that the cause is efficacious.
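As an illustrative numerical sketch (the numbers are hypothetical, not taken from the experiments): suppose that in the first half of the trials P(E | C) = 0.9 and P(E | ¬C) = 0.1, while in the second half P(E | C) = 0.1 and P(E | ¬C) = 0.9, with equal numbers of trials in each period. Pooled over all trials the contingency vanishes, yet conditional on a stable unobserved factor U that switches between the two periods it is strong in each:

\[
\Delta P_{\text{overall}} \;=\; 0.5 - 0.5 \;=\; 0, \qquad
\Delta P_{\mid U} \;=\; +0.8, \qquad
\Delta P_{\mid \neg U} \;=\; -0.8 .
\]

On this reading, a near-zero overall contingency is compatible with the observed cause being efficacious, provided it interacts with U.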