
Cognition

Volume 133, Issue 3, December 2014, Pages 611–620

The role of causal models in multiple judgments under uncertainty

https://doi.org/10.1016/j.cognition.2014.08.011

Highlights

  • We derive and test predictions of a causal Bayes net account of judgment under uncertainty across multiple observations.

  • Causal explanation of false positives promoted stability in probability estimates across multiple observations.

  • Statistics without an apparent cause were treated as stochastic in intuitive probability judgments.

  • Identical observed events can lead to different probability judgments depending on causal beliefs about the events.

Abstract

Two studies examined a novel prediction of the causal Bayes net approach to judgments under uncertainty, namely that causal knowledge affects the interpretation of statistical evidence obtained over multiple observations. Participants estimated the conditional probability of an uncertain event (breast cancer) given information about the base rate, hit rate (probability of a positive mammogram given cancer) and false positive rate (probability of a positive mammogram in the absence of cancer). Conditional probability estimates were made after observing one or two positive mammograms. Participants exhibited a causal stability effect: there was a smaller increase in estimates of the probability of cancer over multiple positive mammograms when a causal explanation of false positives was provided. This was the case when the judgments were made by different participants (Experiment 1) or by the same participants (Experiment 2). These results show that identical patterns of observed events can lead to different estimates of event probability depending on beliefs about the generative causes of the observations.

Introduction

Causal knowledge plays a central role in cognition. Such knowledge has profound effects on the way people learn contingencies (Cheng, 1997, Gopnik et al., 2004, Griffiths and Tenenbaum, 2005, Waldmann et al., 2006), categorize (Ahn and Kim, 2001, Hayes and Rehder, 2012, Rehder and Kim, 2010, Sloman et al., 1998), reason (Fernbach et al., 2011, Holyoak et al., 2010, Kemp et al., 2012, Kemp and Tenenbaum, 2009, Rehder, 2006, Sloman, 2005), make decisions (Hagmayer & Sloman, 2009), and remember (Schank & Abelson, 1995). This paper examines how causal knowledge affects the way people interpret statistical information in judgments under uncertainty.

Many judgments under uncertainty require the evaluation of statistical information to arrive at an estimate of the probability of an outcome (Newell, 2013). Performance on such tasks is often poor, with participants generating estimates that deviate considerably from a normative Bayesian solution (see Barbey and Sloman, 2007, Koehler, 1996, Tversky and Kahneman, 1974, for reviews). Early work suggested that causal knowledge might play an important role in such judgments (Ajzen, 1977, Tversky and Kahneman, 1980). For example, when base rate information is seen as causally relevant to a judgment it is less likely to be neglected (Ajzen, 1977, Bar-Hillel, 1980). Ajzen (1977) attributed this to a “causality” heuristic which increases the salience of statistical information associated with causal mechanisms relative to non-causal statistics.

A more elaborate account of the role of causal knowledge in judgments under uncertainty is suggested by causal Bayes net models of cognition (Griffiths and Tenenbaum, 2005, Rehder, 2010, Sloman, 2005, Waldmann, 1996). The causal Bayes net approach assumes that people make inferences by constructing hypotheses about the causal relations between the components of a judgment problem (often referred to as “variables”) and updating these hypotheses in the light of observed data. In this approach, hypothesized causal relations can be represented as Bayesian networks – directed acyclic graphs in which the nodes represent variables in the system and linking arrows represent the causal relations between these variables. This graph structure can be used to infer conditional and joint probabilities over the variables in the network. Such an approach has been successfully applied to the explanation of key phenomena in domains such as learning about causal systems (Kemp et al., 2010, Sloman and Lagnado, 2005, Waldmann et al., 2006), reasoning (Kemp & Tenenbaum, 2009) and conceptual development (Gopnik et al., 2004, Tenenbaum et al., 2011).
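
To make the formalism concrete, the following minimal sketch (ours, not an example from the papers cited; all probability values are arbitrary placeholders) shows how a two-node graph, Cause → Effect, factors the joint distribution and supports conditional inference by enumeration:

    # Minimal sketch of Bayes net inference (illustrative values only).
    # The graph Cancer -> Test factors the joint as P(C, T) = P(C) * P(T | C).

    P_C = {True: 0.01, False: 0.99}          # base rate of the cause
    P_T_given_C = {True: 0.80, False: 0.10}  # P(positive test | cause present/absent)

    def joint(c, t):
        """P(C = c, T = t), read off the graph's factorization."""
        p_t = P_T_given_C[c] if t else 1.0 - P_T_given_C[c]
        return P_C[c] * p_t

    # Conditional inference: P(C = True | T = True) by enumerating the joint.
    posterior = joint(True, True) / (joint(True, True) + joint(False, True))
    print(round(posterior, 3))  # 0.075 with these placeholder numbers

Larger networks work the same way: the joint distribution factors into one conditional probability table per node, and any conditional query can be answered by summing over the unobserved variables.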

Krynski and Tenenbaum (2007) proposed that the causal Bayes net approach could also lead to a better understanding of how people make judgments under uncertainty (also see Bes, Sloman, Lucas, & Raufaste, 2012). Krynski and Tenenbaum suggest that failures to arrive at normative probability estimates in such judgments reflect a difficulty in mapping the statistics given in the problem to the relevant components of intuitive causal models. They show that performance on such problems may be improved by framing the task so that this mapping is more transparent (e.g., by providing a causal explanation of key statistics).

The current studies focus on a different implication of this approach to judgments under uncertainty, namely how causal beliefs affect the interpretation of evidence arising from multiple observations. Our key intuition is that statistically equivalent information provided over multiple observations will be interpreted differently depending on one’s causal beliefs about the variables generating the observations. In particular, we predict that repeated observations with no obvious cause will be treated as independent, stochastic events. When a common causal explanation is available for these observations, however, they will be seen as dependent, arising from the same generative process. These ideas were suggested by Krynski and Tenenbaum (2007) but not tested empirically.

As a simple example, consider a situation where you turn on your laptop computer and attempt to launch your internet browser but receive an error message saying that no connection could be established. There are at least two interpretations of this event. It could reflect the somewhat random fluctuation in the strength of wireless signals as the laptop is moved around. Alternatively, there may be a more serious and stable underlying cause, such as a failure of the remote server or suspension of your server account. In the absence of an obvious cause you may favor the former hypothesis and make several attempts to restart the browser. In this case, you are treating browser failures as independent events, such that each subsequent attempt is as likely to succeed as the first. On the other hand, if, after the first try, you recall that you have not paid the most recent bill from your internet provider, the browser failure may be seen as evidence for an alternative cause (i.e., that your account has been suspended). In this case it seems futile to continue restarting the browser because the underlying cause of the failure remains unchanged and will lead to the same outcome on each occasion. Hence, browser failures are interpreted differently depending on your mental model of the causal dynamics of the situation.
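
The logic of this example can be captured in a few lines of code. The sketch below is our own illustration (the prior and the failure rate are invented for the example): it tracks the posterior probability that a stable cause (a suspended account, which fails on every attempt) is at work, and shows how each successive failure makes another retry look less worthwhile.

    # Two hypotheses about repeated connection failures (illustrative numbers):
    # H_random: each attempt fails independently with probability p_fail_random.
    # H_suspended: the account is suspended, so every attempt fails.

    prior_suspended = 0.2   # hypothetical prior that the account is suspended
    p_fail_random = 0.3     # chance a single attempt fails from signal noise

    def p_next_success(n_failures):
        """P(next attempt succeeds | n consecutive failures so far)."""
        like_suspended = 1.0                        # suspension always fails
        like_random = p_fail_random ** n_failures   # independent failures
        post_suspended = (prior_suspended * like_suspended) / (
            prior_suspended * like_suspended
            + (1 - prior_suspended) * like_random)
        return (1 - post_suspended) * (1 - p_fail_random)

    for n in range(4):
        print(n, round(p_next_success(n), 3))
    # 0 0.56, 1 0.382, 2 0.185, 3 0.068: each failure shifts belief toward
    # the stable cause, so retrying looks increasingly futile.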

We now examine how this approach can be applied to a classic problem in judgment under uncertainty. In the “mammogram problem,” the task is to estimate the conditional probability that a woman has cancer given that she has received a positive mammogram (cf. Eddy, 1982, Gigerenzer and Hoffrage, 1995, Krynski and Tenenbaum, 2007). To derive this estimate, participants are given information about the base rate (probability of breast cancer in the target population), the “hit” rate (probability of obtaining a positive mammogram given cancer), and the false positive rate (probability of obtaining a positive result in the absence of cancer). In our variant of this problem (hereafter the “double mammogram problem”) participants make conditional judgments after observing one positive mammogram and/or after two positive mammograms.
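
For concreteness, the normative single-test answer follows from Bayes’ rule. With the classic illustrative figures often used for this problem (a 1% base rate, an 80% hit rate, and a 9.6% false positive rate; these are not necessarily the values used in the present experiments):

$$P(C \mid M^{+}) = \frac{P(C)\,P(M^{+} \mid C)}{P(C)\,P(M^{+} \mid C) + P(\neg C)\,P(M^{+} \mid \neg C)} = \frac{.01 \times .80}{.01 \times .80 + .99 \times .096} \approx .078$$

That is, even after a positive test the normative probability of cancer is under 8%, far below the typical intuitive estimate.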

Crucially, we manipulated causal beliefs about the false positive rate (cf. Krynski & Tenenbaum, 2007). Our non-causal condition was similar to most previous studies using the mammogram problem, in that no cause of false positives was offered. In the causal condition, an alternative probabilistic cause of false positives (a benign cyst) was suggested; a positive mammogram could thus be seen as a “common effect” of two different causes (cancer or cyst). Fig. 1 shows the detailed scenarios for each condition.

The different attributions for false positives in the respective conditions should lead to the construction of different intuitive causal models of the problem as shown in Fig. 2. The Figure illustrates the relations between the causal variables of cancer (C) and cyst (Cy) or an unknown alternative cause of false positive tests (U), the respective base rates of these variables ($c_C$ and $c_{Cy}$), and the probability that cancer ($m_{C,O_j}$), a cyst ($m_{Cy,O_j}$), or an unknown cause ($b_j$) will generate a positive test ($O_1$ or $O_2$). The dashed lines in the Figure represent instantiations of cancer and cyst on different mammogram tests.
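
The paper’s exact parameterization is given in its Appendix A; one standard way to fill in the common-effect structures in Fig. 2, consistent with the notation above, is a noisy-OR (our assumption, shown here only to fix ideas). Writing $C, Cy \in \{0, 1\}$ for the presence of cancer and cyst on test $j$:

$$P(O_j = 1 \mid C, Cy) = 1 - (1 - m_{C,O_j})^{C}\,(1 - m_{Cy,O_j})^{Cy} \qquad \text{(causal condition)}$$

$$P(O_j = 1 \mid C) = 1 - (1 - m_{C,O_j})^{C}\,(1 - b_j) \qquad \text{(non-causal condition)}$$

with $b_j$ independent across the two tests in the non-causal model, but the cyst variable shared across tests in the causal model.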

The crucial question that we examined was how causal framing of false positives affects judgments about cancer probability in the light of multiple observations of positive mammogram tests. We hypothesize that people will see biological causes of a positive test (cancer or cyst) as relatively stable over multiple observations. Hence, in the causal case in Fig. 2, identifying that cancer (or a cyst) is responsible for $O_1$ suggests that it is also the cause of $O_2$. In the non-causal condition, however, two false positives are not assumed to have a common cause. As in the internet connection example, false positives at each observation might be interpreted as independent stochastic events.

Under these assumptions we expect different conditional probability estimates in the causal and non-causal conditions. At a qualitative level, the prediction for the causal condition is that the judged probability of a positive test being due to cancer should remain relatively stable across the successive observations $O_1$ and $O_2$. After observing $O_2$ there may be some increase in the perceived probability of cancer, but this increase should be relatively small. This is because the cyst remains a viable alternative explanation of $O_2$ as well as $O_1$ (see below for a quantitative demonstration). In contrast, in the non-causal condition, the probability of two statistically independent false positives is substantially lower than the probability of a single false positive. Hence, there should be a larger increase in estimates of the conditional probability of cancer from $O_1$ to $O_2$.
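
The arithmetic behind this prediction is straightforward. On a simplified reading in which a cyst, once present, reliably produces a positive result on both tests (our simplification, for exposition), with per-test false positive rate $b$ and cyst base rate $c_{Cy}$:

$$\text{Non-causal: } P(O_1, O_2 \mid \neg C) = b \cdot b = b^{2} \;\ll\; b = P(O_1 \mid \neg C)$$

$$\text{Causal: } P(O_1, O_2 \mid \neg C) \approx c_{Cy} \approx P(O_1 \mid \neg C)$$

Evidence against the false-positive explanation therefore accumulates across tests only in the non-causal model, so only there should the posterior probability of cancer rise sharply after the second positive result.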

These same qualitative predictions emerged when we applied Bayes net formalisms to the graphical models in Fig. 2. Our approach was similar to that used in a number of previous Bayes net applications (e.g., Fernbach et al., 2011, Griffiths and Tenenbaum, 2005, Rehder, 2010), with the novel feature of allowing the relevant variables in the graphical model to be represented twice, once for each of the two mammogram tests administered (on machines A and B). This aspect of the model is similar to the assumptions made in “dynamic” Bayes nets (cf. Neapolitan, 2004, Rottman and Keil, 2012). The details of this model are given in Appendix A. The key prediction that emerges is that there should be only a modest increase in the judged conditional probability of cancer from $O_1$ to $O_2$ in the causal condition (≈5%), as compared with a much larger increase in the non-causal case (≈27%). Note that we do not assume that participants’ probability estimates for each positive test will closely match those predicted by the model. Previous work with similar judgment problems shows that people typically overestimate the conditional probability of cancer relative to normative values (e.g., Barbey and Sloman, 2007, Gigerenzer and Hoffrage, 1995), even when relevant causal information is present (Krynski & Tenenbaum, 2007). Nevertheless, even if probability estimates are generally higher than those predicted by the Bayes net model, we expected that the non-causal group would show a more substantial change in estimates across observations than the causal group.
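
The qualitative pattern is easy to reproduce. The sketch below is our own simplified implementation, not the model in Appendix A: the parameter values are placeholders, and the causal condition makes the strong simplifying assumption that cancer or a cyst, once present, yields a positive result on every test.

    # Simplified sketch of the two conditions' predictions (illustrative values;
    # the paper's actual model and parameters are in its Appendix A).

    base_cancer = 0.01   # P(cancer)
    base_cyst   = 0.15   # P(benign cyst), causal condition only
    hit         = 0.80   # P(positive | cancer), per test, non-causal condition
    fp          = 0.15   # P(positive | no cancer), per test, non-causal condition

    def posterior_noncausal(n):
        """P(cancer | n positive tests) with independent false positives."""
        like_c, like_nc = hit ** n, fp ** n
        return base_cancer * like_c / (
            base_cancer * like_c + (1 - base_cancer) * like_nc)

    def posterior_causal(n):
        """P(cancer | n positive tests) when positives come from stable causes.

        Simplification: cancer or a cyst, once present, produces a positive on
        every test, so the likelihood of n positives given 'no cancer' is just
        the probability of a cyst and does not shrink as n grows."""
        like_c, like_nc = 1.0, base_cyst
        return base_cancer * like_c / (
            base_cancer * like_c + (1 - base_cancer) * like_nc)

    for n in (1, 2):
        print(n, round(posterior_noncausal(n), 3), round(posterior_causal(n), 3))
    # Non-causal: 0.051 -> 0.223 (large jump); causal: 0.063 -> 0.063 (stable).

Even with these crude assumptions, the non-causal model predicts a much larger jump across observations than the causal model, matching the direction (though not the magnitude) of the ≈27% versus ≈5% prediction above.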


Experiment 1

Experiment 1 tested these predictions of the causal Bayes net model. Different groups were presented with the non-causal or causal version of the mammogram problem and asked to estimate the probability of cancer following observation of a single positive mammogram or two positive mammogram tests.

Experiment 2

This experiment aimed to further examine the robustness of the causal stability effect. In this case we again presented a scenario where two independent mammogram tests were performed. However, each participant now made two estimates: an initial estimate after observing one positive mammogram, and again after observing two positive mammograms. Notably, this design allowed us to examine stability and change in individual probability estimates after observing each test result. Our causal Bayesian

General discussion

These experiments aimed to test predictions of a causal Bayes net model of judgments under uncertainty with multiple observations. According to this approach, people formulate causal models of judgment problems and attempt to incorporate given statistical information into these models (Krynski & Tenenbaum, 2007). We tested a novel prediction of this approach concerning the impact of beliefs about alternate sources of evidence across multiple observations. If an alternative explanation for a

Acknowledgements

This research was supported by Australian Research Council Discovery Grant DP120100266 to the first and third authors. The authors would like to thank Ann Martin and Kelly Jones for their assistance in data collection.

References

  • Waldmann, M. R. (1996). Knowledge-based causal induction.

  • Ahn, W., & Kim, N. S. (2001). The causal status effect in categorization: An overview.

  • Ajzen, I. (1977). Intuitive theories of events and the effects of base-rate information on prediction. Journal of Personality and Social Psychology.

  • Barbey, A. K., & Sloman, S. A. (2007). Base rate respect: From ecological rationality to dual processes. Behavioral and Brain Sciences.

  • Bes, B., Sloman, S., Lucas, C., & Raufaste, E. (2012). Non-Bayesian inference: Causal structure trumps correlation. Cognitive Science.

  • Beyth-Marom, R., et al. (1983). Diagnosticity and pseudodiagnosticity. Journal of Personality and Social Psychology.

  • Cokely, E. T., et al. (2012). Measuring risk literacy: The Berlin Numeracy Test. Judgment and Decision Making.

  • Eddy, D. M. (1982). Probabilistic reasoning in clinical medicine.

  • Fernbach, P. M., et al. (2011). Asymmetries in predictive and diagnostic reasoning. Journal of Experimental Psychology: General.

  • Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review.

  • Gopnik, A., et al. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review.

  • Hagmayer, Y., & Sloman, S. A. (2009). Decision makers conceive of themselves as interveners. Journal of Experimental Psychology: General.