Cognition

Volume 142, September 2015, Pages 12-38

A decision network account of reasoning about other people’s choices

https://doi.org/10.1016/j.cognition.2015.05.006

Highlights

  • People can predict what others will choose and infer what others know and want.

  • Decision networks help to explain how people make inferences like these.

  • Decision networks extend Bayes nets by adding a notion of goal-directed choice.

  • In four experiments, people’s inferences were best predicted by decision networks.

Abstract

The ability to predict and reason about other people’s choices is fundamental to social interaction. We propose that people reason about other people’s choices using mental models that are similar to decision networks. Decision networks are extensions of Bayesian networks that incorporate the idea that choices are made in order to achieve goals. In our first experiment, we explore how people predict the choices of others. Our remaining three experiments explore how people infer the goals and knowledge of others by observing the choices that they make. We show that decision networks account for our data better than alternative computational accounts that do not incorporate the notion of goal-directed choice or that do not rely on probabilistic inference.

Introduction

People tend to assume that other people’s behavior results from their conscious choices—for example, choices about what outfit to wear, what movie to watch, or how to respond to a question (Gilbert and Malone, 1995, Ross, 1977). Reasoning about choices like these requires an understanding of how they are motivated by mental states, such as what others know and want. Even though mental states are abstract entities and are inaccessible to other people, most people find it natural to make predictions about what others will choose and infer why they made the choices they did. In this paper, we explore the computational principles that support such inferences.

Several models of how people reason about others’ actions have been proposed by social and developmental psychologists (e.g., Gilbert, 1998, Jones and Davis, 1965, Malle and Knobe, 1997, Wellman and Bartsch, 1988). Four examples are shown in Fig. 1. According to the model in Fig. 1a, a person’s dispositional characteristics combine to produce an intention to take an action. Then, if that person has the necessary knowledge and ability to carry out the action, he or she takes the action, producing a set of effects. The other three models in Fig. 1 take into account additional variables such as belief and motivation. For example, Fig. 1b proposes that people take actions that they believe will satisfy their desires. The models in Fig. 1 are not computational models, but they highlight the importance of structured causal representations for reasoning about choices and actions. We show how representations like these can serve as the foundation for a computational account of social reasoning.

Our account draws on two general themes from the psychological literature. First, people tend to assume that choices, unlike world events, are goal-directed (Baker et al., 2009, Csibra and Gergely, 1998, Csibra and Gergely, 2007, Goodman et al., 2009, Shafto et al., 2012). We refer to this assumption as the principle of goal-directed choice, although it has also been called the intentional stance (Dennett, 1987) and the principle of rational action (Csibra and Gergely, 1998, Csibra and Gergely, 2007). Second, we propose that human reasoning relies on probabilistic inference. Recent work on inductive reasoning has emphasized the idea that probabilistic inference can be carried out over structured causal representations (Griffiths and Tenenbaum, 2005, Griffiths et al., 2010, Tenenbaum et al., 2011), and our account relies on probabilistic inference over representations similar to those in Fig. 1.

Our account is related to previous work on Bayes nets (short for Bayesian networks; Pearl, 2000), which have been widely used to account for causal reasoning (Gopnik et al., 2004, Sloman, 2005). Bayes nets rely on probabilistic inference over structured causal representations, but they do not capture the principle of goal-directed choice. In this paper, we present an extension of Bayes nets called decision networks (Howard and Matheson, 2005, Russell and Norvig, 2010) that naturally captures the principle of goal-directed choice. We propose that people reason about choice behavior by constructing mental models of other people’s choices that are similar to decision networks and then performing probabilistic inference over these mental models. Decision networks may therefore provide some computational substance to qualitative models like the ones in Fig. 1.

Our decision network account of reasoning about choices builds on previous approaches, including the theory theory of conceptual structure. The theory theory proposes that children learn and reason about the world by constructing theories that, like scientific theories, are testable and subject to revision on the basis of evidence (Gopnik & Wellman, 1992). These theories can exist at different levels of abstraction. Framework theories capture fundamental principles that are expected to apply across an entire domain, and they provide a basis for constructing specific theories of concrete situations (see Wellman & Gelman, 1992). The decision network account can be viewed as a framework theory that captures the idea that choices are made in order to achieve goals, whereas individual decision networks can be viewed as specific theories. Gopnik and Wellman (2012) argue that Bayes nets provide a way to formalize the central ideas of the theory theory, and their reasons apply equally well to decision networks. For example, decision networks can be used to construct abstract causal representations of the world, to predict what will happen next, or to infer unobserved causes.

Although decision networks have not been previously explored as psychological models, they have been used by artificial intelligence researchers to create intelligent agents in multi-player games (Gal and Pfeffer, 2008, Koller and Milch, 2003, Suryadi and Gmytrasiewicz, 1999). In the psychological literature there are a number of computational accounts of reasoning about behavior (Bello and Cassimatis, 2006, Bonnefon, 2009, Bonnefon and Sloman, 2013, Hedden and Zhang, 2002, Oztop et al., 2005, Shultz, 1988, Van Overwalle, 2010, Wahl and Spada, 2000), and some of these accounts rely on Bayes nets (Goodman et al., 2006, Hagmayer and Osman, 2012, Hagmayer and Sloman, 2009, Sloman and Hagmayer, 2006, Sloman et al., 2012). However, our work is most closely related to accounts that extend the framework of Bayes nets to include the principle of goal-directed choice (Baker and Tenenbaum, 2014, Baker et al., 2008, Baker et al., 2009, Baker et al., 2011, Doshi et al., 2010, Goodman et al., 2009, Jara-Ettinger et al., 2012, Pantelis et al., 2014, Pynadath and Marsella, 2005, Tauber and Steyvers, 2011, Ullman et al., 2009). Much of this work uses a computational framework called Markov decision processes (MDPs; Baker and Tenenbaum, 2014, Baker et al., 2009).

In the next section, we describe the decision network framework in detail and explain how it is related to the Bayes net and MDP frameworks. We then present four experiments that test predictions of the decision network framework as an account of how people reason about choice behavior. Our first two experiments are specifically designed to highlight unique predictions of decision networks that distinguish them from an account based on standard Bayes nets. Our second two experiments focus on inferences about mental states: Experiment 3 examines inferences about what someone else knows, and Experiment 4 examines inferences about someone else’s goals.

Section snippets

Decision networks

We will introduce the details of decision networks (decision nets for short) with the following running example. Suppose Alice is playing a game in which a two-colored die is rolled, and if she correctly names the color that was rolled, she earns a reward. Suppose further that Alice can see the outcome of the roll before making her choice. This situation can be represented using the decision net in Fig. 2.

Decision nets distinguish between four different kinds of variables: world
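
To make the structure of the running example concrete, the following minimal Python sketch implements a decision net along the lines of Fig. 2, under illustrative assumptions: a fair two-colored die, a reward of 1 for naming the rolled color, and a softmax choice rule. The variable names and parameter values are ours, not the paper's.

```python
# Minimal sketch of the Alice example as a decision net, assuming a fair
# two-colored die, a reward of 1 for naming the rolled color, and a softmax
# choice rule. Names and parameter values are illustrative only.
import numpy as np

COLORS = ["red", "blue"]            # world variable: outcome of the die roll
P_ROLL = {"red": 0.5, "blue": 0.5}  # prior over the world variable


def utility(choice, roll):
    """Utility node: Alice earns a reward when her choice matches the roll."""
    return 1.0 if choice == roll else 0.0


def choice_probabilities(roll, temperature=0.1):
    """Decision node: softmax over expected utilities, given that Alice
    observes the roll before choosing (an informational link)."""
    eu = np.array([utility(c, roll) for c in COLORS])
    logits = eu / temperature
    p = np.exp(logits - logits.max())
    return p / p.sum()


def predict_choice():
    """Marginal prediction of Alice's choice, integrating over die rolls."""
    p_choice = np.zeros(len(COLORS))
    for roll, p_roll in P_ROLL.items():
        p_choice += p_roll * choice_probabilities(roll)
    return dict(zip(COLORS, p_choice))


if __name__ == "__main__":
    print(choice_probabilities("red"))  # nearly all mass on "red"
    print(predict_choice())             # uniform when the roll is not yet known
```

The same structure, run "in reverse" with Bayes' rule, supports the inverse inferences about knowledge and goals explored in the later experiments.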

Overview of experiments

We conducted four experiments to evaluate how well the decision net framework accounts for people’s inferences about choice behavior. Our first two experiments were designed to directly compare decision nets with Bayes nets that do not contain a notion of goal-directed choice. Experiment 1 focuses on predicting choices after a utility function changes, and Experiment 2 focuses on using observed choices to make causal inferences. Our second two experiments examine how people make inferences

Experiment 1: Predicting other people’s choices

Given that it is possible to compile any decision net into a Bayes net that makes identical choice predictions, it is important to ask whether the decision net is a better psychological model than the compiled Bayes net version of the same network. Earlier, we argued that one advantage of a decision net over a Bayes net is that a decision net can naturally accommodate changes to the utility function. When the utility function of a decision net changes, the decision net predicts that the
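
To illustrate the contrast at issue here, the sketch below shows, under the same illustrative assumptions as before, why a change to the utility function favors the decision net representation: the decision net re-derives its choice policy from the new utilities, whereas a Bayes net compiled from the old policy keeps a stale conditional probability table until it is recompiled or relearned.

```python
# A sketch, under illustrative assumptions, of the effect of a utility change.
import numpy as np

COLORS = ["red", "blue"]


def policy_from_utility(utility_fn, temperature=0.1):
    """Compile a choice policy P(choice | roll) from a utility function."""
    table = {}
    for roll in COLORS:
        eu = np.array([utility_fn(c, roll) for c in COLORS])
        logits = eu / temperature
        p = np.exp(logits - logits.max())
        table[roll] = dict(zip(COLORS, p / p.sum()))
    return table


def old_utility(choice, roll):
    return 1.0 if choice == roll else 0.0


def new_utility(choice, roll):
    return 1.0 if choice != roll else 0.0  # the reward is now for the other color


compiled_bayes_net_cpt = policy_from_utility(old_utility)  # frozen at compile time
decision_net_policy = policy_from_utility(new_utility)     # recomputed from new utilities

print(compiled_bayes_net_cpt["red"])  # still favors choosing "red"
print(decision_net_policy["red"])     # now favors choosing "blue"
```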

Experiment 2: Reasoning about goal-directed choices

The purpose of Experiment 2 was to explore whether people rely on the assumption of goal-directed choice even when they are not asked to make direct judgments about others’ choices. We asked participants to make a causal inference that was informed by someone’s choice. Specifically, we asked participants to observe another person play a single round of the cruise ship game and then infer which machine the player was using. Such inferences can be made by considering how much money a player
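
As a rough illustration of this kind of inference (the options and payoff numbers below are invented and are not the actual stimuli), a goal-directed likelihood can be combined with Bayes' rule to recover which machine best explains an observed choice.

```python
# A hedged sketch of the causal inference in Experiment 2: infer which machine
# a player was using from their observed choice, assuming the player chooses
# roughly in proportion to expected payoff. Payoffs are invented for illustration.
import numpy as np

CHOICES = ["option_A", "option_B"]

# Hypothetical expected payoffs of each choice under each candidate machine.
PAYOFFS = {
    "machine_1": {"option_A": 10.0, "option_B": 2.0},
    "machine_2": {"option_A": 2.0, "option_B": 10.0},
}
PRIOR = {"machine_1": 0.5, "machine_2": 0.5}


def choice_likelihood(choice, machine, temperature=2.0):
    """P(choice | machine) under a softmax goal-directed choice rule."""
    eu = np.array([PAYOFFS[machine][c] for c in CHOICES])
    p = np.exp(eu / temperature)
    p /= p.sum()
    return p[CHOICES.index(choice)]


def posterior_over_machines(observed_choice):
    """Bayes' rule: combine the prior with the goal-directed likelihood."""
    unnorm = {m: PRIOR[m] * choice_likelihood(observed_choice, m) for m in PRIOR}
    z = sum(unnorm.values())
    return {m: v / z for m, v in unnorm.items()}


print(posterior_over_machines("option_A"))  # most mass on machine_1
```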

Experiment 3: Inferring what other people know

We have hypothesized that people reason about other people’s choices by constructing mental models of those choices and that these mental models are similar to decision nets. Our second two experiments further test this hypothesis by focusing on two more inferences from Table 1 that are related to choice behavior. In particular, Experiments 3 and 4 focus on inferences about mental states. Mental state inferences are common in social interactions. For example, when you see a man gossiping about
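
For the knowledge inferences studied in Experiment 3, the general strategy of model selection over decision nets can be sketched as follows; the two-state world, the reward structure, and the softmax choice rule are illustrative assumptions rather than the experiment's actual design.

```python
# A minimal sketch of model selection over decision nets for knowledge
# inference: did the chooser know the state of the world before choosing?
# Assumes a two-state world, a reward of 1 for a matching choice, and a
# softmax choice rule; all numbers are illustrative.
import numpy as np

STATES = ["red", "blue"]
P_STATE = {"red": 0.5, "blue": 0.5}


def softmax_choice(expected_utils, temperature=0.2):
    p = np.exp(np.array(expected_utils) / temperature)
    return p / p.sum()


def likelihood(choice, state, knows_state):
    """P(choice | state, knowledge hypothesis) under goal-directed choice."""
    if knows_state:
        # Informed chooser: expected utility depends on the observed state.
        eu = [1.0 if c == state else 0.0 for c in STATES]
    else:
        # Uninformed chooser: expected utility marginalizes over states.
        eu = [sum(P_STATE[s] * (1.0 if c == s else 0.0) for s in STATES)
              for c in STATES]
    return softmax_choice(eu)[STATES.index(choice)]


def posterior_knows(state, choice, prior_knows=0.5):
    """Posterior probability that the chooser knew the state."""
    num = prior_knows * likelihood(choice, state, knows_state=True)
    den = num + (1 - prior_knows) * likelihood(choice, state, knows_state=False)
    return num / den


print(posterior_knows(state="red", choice="red"))   # above 0.5: looks informed
print(posterior_knows(state="red", choice="blue"))  # below 0.5: looks uninformed
```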

Experiment 4: Inferring other people’s goals

Experiment 3 showed that performing model selection over decision nets accounted well for people’s inferences about what someone else knows. The purpose of Experiment 4 was to apply this same account to a situation in which people must infer someone else’s goals.
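
The same machinery extends to goal inference: compare candidate utility functions by how well each one, combined with goal-directed choice, predicts the observed behavior. The sketch below uses invented options and candidate goals purely for illustration.

```python
# A hedged sketch of goal inference over decision nets: given observed choices,
# infer which candidate utility function (goal) the chooser most likely has.
# The candidate goals and options are invented for illustration.
import numpy as np

OPTIONS = ["apple", "banana", "cherry"]

# Each candidate goal is a hypothetical utility function over the options.
GOALS = {
    "wants_apple":  {"apple": 1.0, "banana": 0.0, "cherry": 0.0},
    "wants_banana": {"apple": 0.0, "banana": 1.0, "cherry": 0.0},
    "indifferent":  {"apple": 0.5, "banana": 0.5, "cherry": 0.5},
}
PRIOR = {g: 1.0 / len(GOALS) for g in GOALS}


def choice_probs(goal, temperature=0.3):
    """Softmax choice probabilities implied by a candidate goal."""
    u = np.array([GOALS[goal][o] for o in OPTIONS])
    p = np.exp(u / temperature)
    return p / p.sum()


def posterior_over_goals(observed_choices):
    """Posterior over goals given one or more observed choices."""
    log_post = {g: np.log(PRIOR[g]) for g in GOALS}
    for choice in observed_choices:
        for g in GOALS:
            log_post[g] += np.log(choice_probs(g)[OPTIONS.index(choice)])
    w = np.array(list(log_post.values()))
    w = np.exp(w - w.max())
    return dict(zip(log_post.keys(), w / w.sum()))


print(posterior_over_goals(["apple"]))           # favors "wants_apple"
print(posterior_over_goals(["apple", "apple"]))  # repeated choices sharpen the inference
```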

General discussion

We proposed that decision networks can help explain how people reason about the choices of others. Decision nets are extensions of Bayes nets, and the critical difference between the two is that decision nets incorporate the assumption of goal-directed choice. The results of our first two experiments suggested that people make use of this assumption when reasoning about other people’s choices. Experiment 1 showed that this assumption allows people to quickly adjust their predictions about

Conclusion

We presented a computational framework for reasoning about choices. We proposed that people reason about other people’s choices by constructing mental models that are similar to decision nets. Decision nets use structured causal representations and incorporate two key computational principles: goal-directed choice and probabilistic inference. Decision net models can be used to make predictions and inferences about other people’s choices, and we described four experiments in which decision nets

References (99)

  • D. Koller et al. Multi-agent influence diagrams for representing and solving games. Games and Economic Behavior (2003)

  • T. Kushnir. Developing a concept of choice

  • B.F. Malle et al. The folk concept of intentionality. Journal of Experimental Social Psychology (1997)

  • D. Newtson. Dispositional inference from effects of actions: Effects chosen and effects forgone. Journal of Experimental Social Psychology (1974)

  • E. Oztop et al. Mental state inference using visual control parameters. Cognitive Brain Research (2005)

  • P.C. Pantelis et al. Inferring the intentional states of autonomous virtual agents. Cognition (2014)

  • L. Ross. The intuitive psychologist and his shortcomings: Distortions in the attribution process

  • B.M. Rottman et al. Causal structure learning over time: Observations and interventions. Cognitive Psychology (2012)

  • S.A. Sloman et al. The causal psycho-logic of choice. Trends in Cognitive Sciences (2006)

  • M. Steyvers et al. Inferring causal networks from observations and interventions. Cognitive Science (2003)

  • Y. Trope. Inferential processes in the forced compliance situation: A Bayesian analysis. Journal of Experimental Social Psychology (1974)

  • F. Van Overwalle. Infants’ teleological and belief inference: A recurrent connectionist approach to their minimal representational and computational requirements. NeuroImage (2010)

  • H.M. Wellman et al. Young children’s reasoning about beliefs. Cognition (1988)

  • I. Ajzen et al. A Bayesian analysis of attribution processes. Psychological Bulletin (1975)

  • I. Ajzen et al. Uniqueness of behavioral effects in causal attribution. Journal of Personality (1976)

  • Baker, C. L., Goodman, N. D., & Tenenbaum, J. B. (2008). Theory-based social goal inference. In Proceedings of the 30th...

  • Baker, C. L., Saxe, R. R., & Tenenbaum, J. B. (2011). Bayesian theory of mind: Modeling joint belief-desire...

  • C.L. Baker et al. Modeling human plan recognition using Bayesian theory of mind

  • Bello, P., & Cassimatis, N. (2006). Developmental accounts of theory-of-mind acquisition: Achieving clarity via...

  • Bergen, L., Evans, O. R., & Tenenbaum, J. B. (2010). Learning structured preferences. In Proceedings of the 32nd annual...

  • S.A.J. Birch et al. The curse of knowledge in reasoning about false belief. Psychological Science (2007)

  • J.F. Bonnefon. A theory of utility conditionals: Paralogical reasoning from decision-theoretic leakage. Psychological Review (2009)

  • J.F. Bonnefon et al. The causal structure of utility conditionals. Cognitive Science (2013)

  • C. Boutilier et al. Decision-theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research (1999)

  • G. Csibra et al. The teleological origins of mentalistic action explanations: A developmental hypothesis. Developmental Science (1998)

  • D.C. Dennett. The intentional stance (1987)

  • Deverett, B., & Kemp, C. (2012). Learning deterministic causal networks from observational data. In Proceedings of the...

  • Doshi, P., Qu, X., Goodie, A., & Young, D. (2010). Modeling recursive reasoning in humans using empirically informed...

  • Y. Gal et al. Networks of influence diagrams: A formalism for representing agents’ beliefs and decision-making processes. Journal of Artificial Intelligence Research (2008)

  • A. Garnham. Mental models as representations of discourse and text (1987)

  • D.T. Gilbert. Ordinary personology

  • D.T. Gilbert et al. The correspondence bias. Psychological Bulletin (1995)

  • A.I. Goldman. In defense of simulation theory. Mind & Language (1992)

  • Goodman, N. D., Baker, C. L., Bonawitz, E. B., Mansinghka, V. K., Gopnik, A., Wellman, H., et al. (2006). Intuitive...

  • Goodman, N. D., Baker, C. L., & Tenenbaum, J. B. (2009). Cause and intent: Social reasoning in causal learning. In...

  • A. Gopnik et al. A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review (2004)

  • A. Gopnik et al. Bayesian networks, Bayesian learning and cognitive development. Developmental Science (2007)

  • A. Gopnik et al. Why the child’s theory of mind really is a theory. Mind & Language (1992)

  • A. Gopnik et al. Reconstructing constructivism: Causal models, Bayesian learning mechanisms and the theory theory. Psychological Bulletin (2012)

Data from Experiments 3 and 4 were presented at the 33rd Annual Conference of the Cognitive Science Society. We thank Jessica Lee for helping to collect the data for Experiments 1 and 2. We thank David Danks for feedback on the development of this work, and Jean-François Bonnefon, Mark Steyvers, and two anonymous reviewers for feedback on the manuscript. This work was supported by the Pittsburgh Life Sciences Greenhouse Opportunity Fund and by the National Science Foundation (NSF) Grant CDI-0835797. Alan Jern was supported in part by NIMH Training Grant T32MH019983.
