
Cognition

Volume 210, May 2021, 104606

A computational framework for understanding the roles of simplicity and rational support in people's behavior explanations

https://doi.org/10.1016/j.cognition.2021.104606

Abstract

When explaining other people's behavior, people generally find some explanations more satisfying than others. We propose that people judge behavior explanations based on two computational principles: simplicity and rational support—the extent to which an explanation makes the behavior “make sense” under the assumption that the person is a rational agent. Furthermore, we present a computational framework based on decision networks that can formalize both of these principles. We tested this account in a series of experiments in which subjects rated or generated explanations for other people's behavior. In Experiments 1 and 2, the explanations varied in what the other person liked and disliked. In Experiment 3, the explanations varied in what the other person knew or believed. Results from Experiments 1 and 2 supported the idea that people rely on both simplicity and rational support. However, Experiment 3 suggested that subjects rely only on rational support when judging explanations of people's behavior that vary in what someone knew.

Introduction

People are constantly trying to explain other people's behavior. Whenever you wonder about the motives, beliefs, desires, or influences behind someone else's behavior—that is, whenever you ask yourself why someone did something—you are attempting to explain that person's behavior.

As an example, suppose you saw a reckless driver careening down the road, far past the speed limit. You might wonder why she was doing that; that is, you might want to explain her behavior. Consider three possible explanations: (1) she was late to a meeting and was in a hurry; (2) she did not know the speed limit; and (3) her speedometer was broken and she did not know how fast she was going. Different people might disagree about how satisfying these explanations are, but people will likely agree that all of these explanations are more satisfying than the following explanation: she was speeding because she was late to a meeting, and she did not know the speed limit, and her speedometer was broken. Additionally, there are some explanations that most people will not find satisfying at all. For example, consider this explanation: she was speeding because she knew that George Washington was the first president of the United States.

How do people judge behavior explanations, and what makes some behavior explanations more satisfying than others? These questions are related to the broad study of social cognition and how people make judgments about other people's behavior (Gilbert, 1995; Kelley & Michela, 1980). Specifically, behavior explanation is closely related to interpersonal attribution: how people decide whether to attribute other people's behavior to their individual characteristics or to situational factors. Some early research on interpersonal attribution focused on normative inference principles for making judgments about other people's behavior (Jones & Davis, 1965; Kelley, 1973). But most work on interpersonal attribution, particularly over the past few decades, has focused on the cognitive processes that drive these judgments rather than on the computational principles that underlie them (Anderson, Krull, & Weiner, 1996; Gilbert, 1998). This lack of focus on computational principles means that most theories emerging from this research have been qualitative and are incapable of making quantitative predictions (Korman, Voiklis, & Malle, 2015). Consequently, our understanding of how people explain behavior is broad but shallow.

In this paper, we propose two computational principles that underlie people's judgments about behavior explanations. We call the two principles simplicity and rational support. We show that both principles can be formally instantiated using the computational framework of decision networks. This paper therefore represents a first step toward formalizing qualitative theories of behavior explanation (Korman et al., 2015).

Later, we provide a formal definition of the two principles. For now, we will provide a brief informal introduction to each principle using the reckless driver example from earlier. The simplicity principle states that simpler behavior explanations are more satisfying. For example, consider the explanation for the driver's behavior that gives three separate reasons for speeding (she was late to a meeting, she did not know the speed limit, and her speedometer was broken). This explanation does explain the driver's behavior, but it seems to over-explain or overdetermine it: it gives three reasons when any one of them alone would have sufficed. By comparison, each of the three component explanations is simpler and therefore seems more satisfying than the full conjunctive explanation.
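To make this intuition concrete, here is an illustrative calculation with assumed numbers (ours, not the paper's): if each candidate reason independently holds with prior probability 0.1, then the conjunctive explanation has prior probability

$$P(r_1 \wedge r_2 \wedge r_3) = 0.1^3 = 0.001,$$

one hundred times lower than the prior probability of 0.1 for any single reason, even though each reason on its own fully accounts for the speeding.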

The rational support principle states that a behavior explanation will be more satisfying to the extent that it makes the behavior “make sense”, under the assumption that the behavior was carried out by a rational agent. For example, consider the explanation for the driver's behavior that she was speeding because she knew that George Washington was the first president of the United States. This knowledge provides no rational support for speeding—it does not motivate someone to be in a hurry or to drive recklessly. As a result, this “explanation” does not explain anything at all. By contrast, someone who is late to a meeting would likely want to hurry to get to the meeting as quickly as possible. Therefore, explaining the driver's speeding by referring to the fact that she was late to a meeting does provide rational support for her behavior, making the explanation that mentions her being late more satisfying.
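One standard way to formalize rational support in the rational-agent literature this work builds on (e.g., Baker, Saxe, & Tenenbaum, 2009) is to score an explanation e by the probability that a noisily rational agent holding the beliefs and desires in e would produce the observed behavior b, for instance with a softmax choice rule (the notation here is our plausible reconstruction, not necessarily the paper's formulation):

$$P(b \mid e) = \frac{\exp(\beta\, U_e(b))}{\sum_{b'} \exp(\beta\, U_e(b'))},$$

where U_e(b) is the utility that explanation e implies for behavior b and β controls how reliably the agent maximizes utility. Under this reading, knowing who the first president was leaves U_e identical across all actions, so it raises the probability of speeding not at all.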

Both of these principles have existing empirical support. Evidence for the simplicity principle comes from work on causal explanation by psychologists (Keil & Wilson, 2000; Legare & Clegg, 2015; Lombrozo, 2006, 2012; Pacer & Lombrozo, 2017; Pacer, Williams, Chen, Lombrozo, & Griffiths, 2013) and machine learning researchers (DeJong, 2006; Flores, Gámez, & Moral, 2005; Nielsen, Pellet, & Elisseeff, 2008; Pacer et al., 2013; Yuan & Lu, 2007). For example, in one study (Lombrozo, 2007), subjects learned about several diseases that caused overlapping symptoms. Then, after reading about a patient's symptoms, subjects rated explanations for the patient's symptoms such as "the patient has diseases X and Y". In general, subjects rated simpler explanations (explanations that invoked fewer causes) as "better and more likely to be true" (Lombrozo, 2007). While psychological research on how people generate and judge causal explanations has been extensive, research on behavior explanation has been more limited. One relevant line of research has been motivated by Kelley's (1987) discounting principle, which is similar to our simplicity principle. The discounting principle states that people will discount the strength of a cause when alternative causes are present. Interpersonal attribution researchers have taken the discounting principle to suggest that people will prefer behavior explanations with fewer causes. Specifically, studies showing that people sometimes rate explanations of other people's behavior that invoke more reasons as better than explanations that invoke fewer reasons (e.g., Leddo, Abelson, & Gross, 1984) have been presented as evidence against the discounting principle (McClure, 1998; Morris, Smith, & Turner, 1998).

However, behavior explanations likely rely on more than simplicity or discounting because behavior explanations differ from causal explanations in several respects. First, behavior explanations are more likely to refer to mental states, such as beliefs and desires, that gave rise to an intention to act (Malle, 1999). Second, when reasoning about other people's mental states, people expect others to behave in a goal-directed way (Baker, Jara-Ettinger, Saxe, & Tenenbaum, 2017; Baker, Saxe, & Tenenbaum, 2009; Baker & Tenenbaum, 2014; Jara-Ettinger, Gweon, Schulz, & Tenenbaum, 2016; Jern & Kemp, 2015; Ullman et al., 2010), an expectation that can even influence their causal judgments (Kirfel & Lagnado, 2019; Lagnado & Channon, 2008). This observation motivates the rational support principle.

Some evidence for the rational support principle in behavior explanation comes from work by Malle (1999, 2004), who has proposed the most complete psychological theory of behavior explanation to date. One claim of this theory is that explanations of intentional behavior are based on reasons rather than causes. Malle defines reasons as "agents' mental states whose content they considered and in light of which they formed an intention to act" (Malle, 1999). By contrast, explanations of unintentional behaviors like blushing or sweating are more likely to be based on simple causes that do not qualify as reasons. For a reason-based explanation to be considered valid, it must provide rational support for the behavior, meaning that the beliefs and desires identified in the explanation must make the behavior rational. Malle's theory has considerable empirical support (Malle, 1999; Malle, Knobe, & Nelson, 2007; Malle, Knobe, O'Laughlin, Pearce, & Nelson, 2000; O'Laughlin & Malle, 2002), but it is limited by its qualitative nature. For example, the theory can predict whether someone will generate a reason explanation or a cause explanation for a given behavior, but it cannot predict how probable it is that someone will generate a particular reason explanation from a set of possibilities. Malle and his colleagues have themselves stated that more computational work is needed (Korman et al., 2015; Malle, 2004).


Computational framework

In this section, we describe the computational framework of decision nets (short for decision networks) and explain how decision nets can be used to formally instantiate the simplicity and rational support principles. Previous research has suggested that people represent other people's choices and behavior using decision nets (Jern & Kemp, 2015; Kleiman-Weiner, Gerstenberg, Levine, & Tenenbaum, 2015) or similar representations.
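Because this section is excerpted, the details of the model are not reproduced here. The following is therefore only a minimal sketch, in the spirit of the framework, of how a decision-net-style model could combine the two principles: a simplicity prior over explanations and a rational-support likelihood derived from softmax-rational choice. The seat scenario, the feature-to-seat utilities, and the parameters theta and beta are all illustrative assumptions, not the paper's model.

```python
import math

# Assumed scenario: a person arrives last at a meeting with one row of
# three chairs and chooses a seat. We observe the choice and score
# candidate explanations for it.
SEATS = ["A", "B", "C"]
OBSERVED_CHOICE = "C"

# Each candidate preference (hypothetical feature names) adds utility to
# some seats. The numbers are arbitrary assumptions for illustration.
FEATURE_UTILITIES = {
    "likes_window":  {"C": 1.0},   # assume the window is next to seat C
    "dislikes_door": {"A": -1.0},  # assume the door is next to seat A
    "likes_aisle":   {"A": 1.0},
}

def seat_utilities(explanation):
    """Total utility of each seat implied by the preferences an explanation asserts."""
    totals = {seat: 0.0 for seat in SEATS}
    for feature in explanation:
        for seat, u in FEATURE_UTILITIES[feature].items():
            totals[seat] += u
    return totals

def rational_support(explanation, beta=3.0):
    """Probability that a softmax-rational agent with these preferences
    makes the observed choice; beta sets how reliably utility is maximized."""
    totals = seat_utilities(explanation)
    z = sum(math.exp(beta * u) for u in totals.values())
    return math.exp(beta * totals[OBSERVED_CHOICE]) / z

def simplicity_prior(explanation, theta=0.2):
    """Prior that penalizes each additional asserted preference: assumed
    independent assertions, each holding with probability theta."""
    return theta ** len(explanation)

def posterior_score(explanation):
    """Unnormalized posterior combining both principles (cf. the paper's
    Eq. (3), whose exact form is not reproduced in this excerpt)."""
    return simplicity_prior(explanation) * rational_support(explanation)

explanations = [
    frozenset({"likes_window"}),                   # one sufficient reason
    frozenset({"likes_window", "dislikes_door"}),  # overdetermined
    frozenset({"likes_aisle"}),                    # no rational support for C
]
for e in sorted(explanations, key=posterior_score, reverse=True):
    print(sorted(e), f"{posterior_score(e):.4f}")
```

With these assumed numbers, the single-reason explanation outscores the two-reason version (the simplicity prior penalizes the extra assertion) and far outscores the preference that makes the observed choice irrational (no rational support).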

Overview of experiments

We predicted that people would rate explanations with higher posterior probability according to Eq. (3) as more satisfying. We tested this prediction in a series of experiments in which people read descriptions of people's behavior and then rated explanations for why the people did what they did. Experiments 1 and 2 examined explanations that differed in what a person liked or disliked, as in Explanations 1–3 for Lori's behavior. Experiment 3 examined explanations that differed in what a person knew or believed.
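Eq. (3) itself falls outside this excerpt. A Bayesian formulation consistent with the surrounding text (our reconstruction, not a quotation of the paper) scores an explanation e of an observed behavior b as

$$P(e \mid b) \propto P(b \mid e)\, P(e),$$

with the prior P(e) carrying the simplicity principle and the likelihood P(b | e) carrying rational support, as sketched in the previous section.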

Experiment 1A: Judging behavior explanations that differ in what someone likes and dislikes

We used the same story in Experiments 1A and 1B as in our running example involving Lori's choice of seat at a meeting. Subjects read about someone who arrived last at a meeting in which there was a single row of chairs and chose a seat. Subjects then rated explanations for why the person chose that seat.

Experiment 1B: Replication of Experiment 1A

The method and analysis plan for this experiment were pre-registered. The pre-registration may be viewed at https://aspredicted.org/vw7px.pdf.

Experiment 2: Generating behavior explanations that differ in what someone likes and dislikes

In Experiment 1, we provided subjects with a finite set of explanations to rate and provided all models with the same set of explanations. It is possible, however, that subjects had other explanations in mind, ones we did not ask them to rate, that they would have considered even more satisfying. To test this possibility, we conducted a follow-up experiment in which subjects generated their own explanations.

Experiment 3: Rating behavior explanations that differ in what someone knows

Experiments 1 and 2 examined only explanations that differed in what someone likes or dislikes, but our computational framework also makes predictions for explanations that differ in what someone knows. The primary purpose of Experiment 3, therefore, was to test the generality of our computational framework for other types of behavior explanations. Specifically, the task we presented to subjects in Experiment 3 was similar to the one in Experiment 1, but the story and explanations were about what a person knew rather than what a person liked or disliked.

General discussion

We presented a computational framework for understanding people's behavior explanations that formalizes the principles of rational support and simplicity. Our framework constitutes a novel advance toward a formal account of behavior explanation. Across four experiments, we found evidence that people rely on rational support when judging explanations of other people's behavior, whether those explanations refer to people's preferences or to their beliefs and knowledge. These results are consistent with our computational framework.

Conclusion

Understanding how people explain other people's behavior is not only a question for psychology. A better understanding of this ability also has implications for artificial intelligence (AI) and cognitive science more broadly. The ability to understand and explain other people's behavior is one important aspect of intelligence at which current AI remains vastly inferior to humans (Breazeal, Buchsbaum, Gray, Gatenby, & Blumberg, 2015; Davis & Marcus, 2015; Fong, Nourbakhsh, & Dautenhahn, 2003; Lake, Ullman, Tenenbaum, & Gershman, 2017).

Acknowledgments

This work was supported in part by the Rose-Hulman Independent Projects / Research Opportunities Program and ArcelorMittal. We thank Mike Oaksford and Patricia Mirabile for valuable feedback on the manuscript and Eric Reyes for help with statistical analysis. Part of this work was presented at the 37th Annual Conference of the Cognitive Science Society. The head silhouette images in Fig. 1 are from Freepik.com.

References

  • T. Lombrozo (2007). Simplicity and probability in causal explanation. Cognitive Psychology.
  • J. Rissanen (1978). Modeling by shortest data description. Automatica.
  • C.A. Anderson et al. Explanations: Processes and consequences.
  • C.L. Baker et al. (2017). Rational quantitative attribution of beliefs, desires and percepts in human mentalizing. Nature Human Behaviour.
  • C.L. Baker et al. Modeling human plan recognition using Bayesian theory of mind.
  • C. Breazeal et al. (2015). Learning from and about others: Towards using imitation to bootstrap the social understanding of others by robots. Artificial Life.
  • P.-C. Bürkner (2017). brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software.
  • L.M. de Campos (2006). A scoring function for learning Bayesian networks based on mutual information and conditional independence tests. The Journal of Machine Learning Research.
  • N. Chater et al. (1999). Ten years of the rational analysis of cognition. Trends in Cognitive Sciences.
  • E. Davis et al. (2015). Commonsense reasoning and commonsense knowledge in artificial intelligence. Communications of the ACM.
  • G. DeJong. Toward robust real-world inference: A new perspective on explanation-based learning.
  • I. Douven et al. (2018). Best, second-best, and good-enough explanations: How they matter to reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition.
  • I. Douven et al. (2015). Probabilistic alternatives to Bayesianism: The case of explanationism. Frontiers in Psychology.
  • D. Fass et al. Categorization under complexity: A unified MDL account of human learning of regular and irregular categories.
  • J. Feldman (2000). Minimization of Boolean complexity in human concept learning. Nature.
  • M.J. Flores et al. Abductive inference in Bayesian networks: Finding a partition of the explanation space.
  • D.T. Gilbert. Attribution and interpersonal perception.
  • D.T. Gilbert. Ordinary personology.
  • A. Gopnik et al. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review.
  • H.P. Grice. Logic and conversation.
  • R.A. Howard et al. (2005). Influence diagrams. Decision Analysis.
  • E.E. Jones et al. From acts to dispositions: The attribution process in person perception.