Belief polarization is said to occur when two people respond to the same evidence by updating their beliefs in opposite directions. This response is considered “irrational” because it involves contrary updating, a form of belief updating that appears to violate normatively optimal responding, as dictated, for example, by Bayes' theorem. In light of much evidence that people are capable of normatively optimal behavior, belief polarization presents a puzzling exception. We show that Bayesian networks, or Bayes nets, can simulate rational belief updating. When fit to experimental data, Bayes nets can help identify the factors that contribute to polarization. We present a study of belief updating concerning the reality of climate change in response to information about the scientific consensus on anthropogenic global warming. The study used representative samples of Australian and U.S. participants. Among Australians, consensus information partially neutralized the influence of worldview, with free-market supporters showing a greater increase in acceptance of human-caused global warming relative to free-market opponents. In contrast, while consensus information overall had a positive effect on perceived consensus among U.S. participants, there was a reduction in perceived consensus and acceptance of human-caused global warming for strong supporters of unregulated free markets. Fitting a Bayes net model to the data indicated that, under a Bayesian framework, free-market support is a significant driver of beliefs about climate change and trust in climate scientists. Further, active distrust of climate scientists among a small number of U.S. conservatives drives contrary updating in response to consensus information within this particular group.
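The mechanism can be sketched with a toy Bayes net (illustrative numbers and structure, not the fitted model from the study): a latent trust variable mediates how consensus information bears on belief in human-caused warming, so the same message moves a trusting and a distrusting agent in opposite directions.

```python
# Toy Bayes net: H = AGW is real, T = climate scientists are trusted,
# E = "consensus" message observed. All probabilities are illustrative.
P_E_GIVEN_HT = {
    (True, True): 0.9,   # trusted scientists report consensus when AGW is real
    (False, True): 0.1,  # ...and rarely when it is not
    (True, False): 0.5,  # distrusted scientists report consensus regardless,
    (False, False): 0.8, # ...or even push it harder when it is false
}

def posterior_h(prior_h, prior_t):
    """P(H = true | E) by enumerating over the latent trust variable T."""
    def marginal(h):
        p_h = prior_h if h else 1 - prior_h
        return p_h * sum(
            (prior_t if t else 1 - prior_t) * P_E_GIVEN_HT[(h, t)]
            for t in (True, False)
        )
    return marginal(True) / (marginal(True) + marginal(False))

# Same evidence, same prior on H -- opposite updates, driven entirely by trust:
truster = posterior_h(prior_h=0.5, prior_t=0.9)    # belief rises above 0.5
distruster = posterior_h(prior_h=0.5, prior_t=0.1) # belief falls below 0.5
```

Under these assumed conditional probabilities the trusting agent's belief rises while the distrusting agent's belief falls, which is contrary updating from identical evidence without any violation of Bayes' theorem.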
Science strives for coherence. For example, the findings from climate science form a highly coherent body of knowledge that is supported by many independent lines of evidence: greenhouse gas (GHG) emissions from human economic activities are causing the global climate to warm, and unless GHG emissions are drastically reduced in the near future, the risks from climate change will continue to grow and major adverse consequences will become unavoidable. People who oppose this scientific body of knowledge because the implications of cutting GHG emissions—such as regulation or increased taxation—threaten their worldview or livelihood cannot provide an alternative view that is coherent by the standards of conventional scientific thinking. Instead, we suggest that people who reject the fact that the Earth’s climate is changing due to greenhouse gas emissions oppose whatever inconvenient finding they are confronting in piecemeal fashion, rather than systematically, and without considering the implications of this rejection for the rest of the relevant scientific theory and findings. Hence, claims that the globe “is cooling” can coexist with claims that the “observed warming is natural” and that “the human influence does not matter because warming is good for us.” Coherence between these mutually contradictory opinions can only be achieved at a highly abstract level, namely that “something must be wrong” with the scientific evidence in order to justify a political position against climate change mitigation. This high-level coherence accompanied by contradictory subordinate propositions is a known attribute of conspiracist ideation, and conspiracism may be implicated when people reject well-established scientific propositions.
Computational modeling is now ubiquitous in psychology, and researchers who are not modelers may find it increasingly difficult to follow the theoretical developments in their field. This book presents an integrated framework for the development and application of models in psychology and related disciplines. Researchers and students are given the knowledge and tools to interpret models published in their area, as well as to develop, fit, and test their own models. Both the development of models and key features of any model are covered, as are the applications of models in a variety of domains across the behavioural sciences. A number of chapters are devoted to fitting models using maximum likelihood and Bayesian estimation, including fitting hierarchical and mixture models. Model comparison is described as a core philosophy of scientific inference, and the use of models to understand theories and advance scientific discourse is explained.
Determining the knowledge that guides human judgments is fundamental to understanding how people reason, make decisions, and form predictions. We use an experimental procedure called “iterated learning,” in which the responses that people give on one trial are used to generate the data they see on the next, to pinpoint the knowledge that informs people's predictions about everyday events (e.g., predicting the total box office gross of a movie from its current take). In particular, we use this method to discriminate between two models of human judgments: a simple Bayesian model (Griffiths & Tenenbaum, 2006) and a recently proposed alternative model that assumes people store only a few instances of each type of event in memory (MinK; Mozer, Pashler, & Homaei, 2008). Although testing these models using standard experimental procedures is difficult due to differences in the number of free parameters and the need to make assumptions about the knowledge of individual learners, we show that the two models make very different predictions about the outcome of iterated learning. The results of an experiment using this methodology provide a rich picture of how much people know about the distributions of everyday quantities, and they are inconsistent with the predictions of the MinK model. The results suggest that accurate predictions about everyday events reflect relatively sophisticated knowledge on the part of individuals.
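The diagnostic property that makes iterated learning useful here is that, for Bayesian learners, the chain of transmitted hypotheses converges to the learners' shared prior regardless of where it starts. A minimal simulation of that dynamic, assuming Gaussian knowledge rather than the paper's actual tasks:

```python
import random

random.seed(0)
MU0, V0 = 0.0, 4.0   # shared prior over the unknown mean: N(0, 4)
SIGMA2 = 1.0         # known observation variance
N_OBS = 5            # observations passed to each learner

def next_learner(mu, rng=random):
    """One generation: generate data from mu, then sample the learner's posterior."""
    data = [rng.gauss(mu, SIGMA2 ** 0.5) for _ in range(N_OBS)]
    xbar = sum(data) / N_OBS
    prec = 1 / V0 + N_OBS / SIGMA2                    # posterior precision
    post_mean = (MU0 / V0 + N_OBS * xbar / SIGMA2) / prec
    return rng.gauss(post_mean, (1 / prec) ** 0.5)    # posterior sample

mu = 10.0                       # chain starts far from the prior mean
chain = []
for _ in range(2000):
    mu = next_learner(mu)
    chain.append(mu)

settled = chain[200:]           # discard burn-in
mean = sum(settled) / len(settled)
var = sum((x - mean) ** 2 for x in settled) / len(settled)
# mean ~ 0 and var ~ 4: the chain forgets its start and mirrors the prior
```

Because the stationary distribution of the chain matches the prior, observing where human chains settle reveals the knowledge participants bring to the task, which is what makes the procedure a useful discriminator between models.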
Political scientists have conventionally assumed that achieving democracy is a one-way ratchet. Only very recently has the question of “democratic backsliding” attracted any research attention. We argue that democratic instability is best understood with tools from complexity science. The explanatory power of complexity science arises from several features of complex systems. Their relevance in the context of democracy is discussed. Several policy recommendations are offered to help stabilize current systems of representative democracy.
The 11 articles in this issue explore how people respond to climate change and other global challenges. The articles pursue three broad strands of enquiry that relate to the effects and causes of “skepticism” about climate change, the purely cognitive challenges that are posed by a complex scientific issue, and the ways in which climate change can be communicated to a wider audience. Cognitive science can contribute to understanding people's responses to global challenges in many ways, and it may also contribute to implementing solutions to those problems.
Information changes as it is passed from person to person, with this process of cultural transmission allowing the minds of individuals to shape the information that they transmit. We present mathematical models of cultural transmission which predict that the amount of information passed from person to person should affect the rate at which that information changes. We tested this prediction using a function-learning task, in which people learn a functional relationship between two variables by observing the values of those variables. We varied the total number of observations and the number of those observations that take unique values. We found an effect of the number of observations, with functions transmitted using fewer observations changing form more quickly. We did not find an effect of the number of unique observations, suggesting that noise in perception or memory may have affected learning.
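The predicted dependence on the amount of transmitted information can be caricatured deterministically (assumed Gaussian learners who pass on posterior means; a sketch, not the paper's function-learning model): each learner shrinks the received value toward a shared prior, and the per-generation shrinkage is weaker when more observations are transmitted.

```python
MU0, V0, SIGMA2 = 0.0, 4.0, 1.0   # shared prior N(0, 4), known noise variance

def transmit(start, n_obs, generations):
    """Noise-free chain: each learner receives n_obs observations located at the
    previous learner's value and passes on its posterior mean."""
    mu = start
    for _ in range(generations):
        prec = 1 / V0 + n_obs / SIGMA2
        mu = (MU0 / V0 + n_obs * mu / SIGMA2) / prec
    return mu

# Start both chains at the same atypical value:
few = transmit(10.0, n_obs=2, generations=20)    # drifts quickly toward the prior
many = transmit(10.0, n_obs=20, generations=20)  # changes much more slowly
```

With only two observations per generation the chain has nearly reached the prior mean after twenty generations, while the twenty-observation chain has barely moved, matching the qualitative prediction that sparser transmission changes form faster.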
Is consolidation needed to account for retroactive interference in free recall? Interfering mental activity during the retention interval of a memory task impairs performance, in particular if the interference occurs in temporal proximity to the encoding of the to-be-remembered (TBR) information. There are at least two rival theoretical accounts of this temporal gradient of retroactive interference. The cognitive neuroscience literature has suggested that neural consolidation is a pivotal factor determining item recall. According to this account, interfering activity interrupts consolidation processes that would otherwise stabilize the memory representations of TBR items post-encoding. Temporal distinctiveness theory, by contrast, proposes that the retrievability of items depends on their isolation in psychological time. According to this theory, information processed after the encoding of TBR material will reduce the temporal distinctiveness of the TBR information. To test between these accounts, implementations of consolidation were added to the SIMPLE model of memory and learning. We report data from two experiments utilizing a two-list free recall paradigm. Modeling results show that SIMPLE accounted for the data and did not benefit from the addition of consolidation. We conclude that the temporal gradient of retroactive interference cannot be taken as evidence for memory consolidation.
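SIMPLE's core assumption can be sketched in a few lines (illustrative parameters, not the fitted implementation from the paper): items live on a log-compressed temporal dimension, so material processed close in time to a target crowds it and reduces its retrievability, with no consolidation mechanism anywhere in the computation.

```python
import math

def discriminability(target_age, other_ages, c=2.0):
    """SIMPLE-style distinctiveness: similarity falls off exponentially with
    distance in log time-since-encoding (ages in seconds at retrieval)."""
    sims = [math.exp(-c * abs(math.log(target_age) - math.log(age)))
            for age in other_ages]
    return 1.0 / (1.0 + sum(sims))  # the 1.0 is the item's self-similarity

target = 60.0                              # TBR item encoded 60 s before retrieval
early_interference = [59, 58, 57, 56, 55]  # distractors right after encoding
late_interference = [5, 4, 3, 2, 1]        # same distractors, just before retrieval

crowded = discriminability(target, early_interference)
isolated = discriminability(target, late_interference)
```

On the log-compressed dimension the early distractors sit almost on top of the target, so `crowded` is far smaller than `isolated`: interference near encoding hurts most, reproducing the temporal gradient without any consolidation process.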
Algorithms for approximate Bayesian inference, such as those based on sampling (i.e., Monte Carlo methods), provide a natural source of models of how people may deal with uncertainty with limited cognitive resources. Here, we consider the idea that individual differences in working memory capacity (WMC) may be usefully modeled in terms of the number of samples, or “particles,” available to perform inference. To test this idea, we focus on two recent experiments that report positive associations between WMC and two distinct aspects of categorization performance: the ability to learn novel categories, and the ability to switch between different categorization strategies (“knowledge restructuring”). In favor of the idea of modeling WMC as a number of particles, we show that a single model can reproduce both experimental results by varying the number of particles—increasing the number of particles leads to both faster category learning and improved strategy‐switching. Furthermore, when we fit the model to individual participants, we found a positive association between WMC and best‐fit number of particles for strategy switching. However, no association between WMC and best‐fit number of particles was found for category learning. These results are discussed in the context of the general challenge of disentangling the contributions of different potential sources of behavioral variability.
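The "WMC as number of particles" idea can be illustrated with a one-step importance-sampling approximation (a generic sketch, not the paper's categorization model): with few particles the posterior approximation is noisy, and it tightens as particles are added.

```python
import random

def particle_estimate(data, n_particles, rng):
    """Approximate the posterior mean of a Bernoulli rate with weighted particles."""
    particles = [rng.random() for _ in range(n_particles)]  # uniform prior draws
    weights = []
    for p in particles:
        w = 1.0
        for x in data:
            w *= p if x else (1 - p)    # Bernoulli likelihood of each observation
        weights.append(w)
    total = sum(weights)
    return sum(p * w for p, w in zip(particles, weights)) / total

rng = random.Random(1)
data = [1, 1, 1, 0, 1, 0, 1, 1]   # 6 successes, 2 failures
exact = 7 / 10                    # Beta(7, 3) posterior mean under a uniform prior

def mean_error(k, runs=200):
    return sum(abs(particle_estimate(data, k, rng) - exact)
               for _ in range(runs)) / runs

low_wmc, high_wmc = mean_error(5), mean_error(200)
```

Averaged over runs, the 5-particle "low-capacity" agent misses the exact posterior mean by far more than the 200-particle agent, which is the qualitative signature the model uses to link WMC to inference quality.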
The breadth-first search adopted by Bayesian researchers to map out the conceptual space and identify what the framework can do is beneficial for science and reflective of its collaborative and incremental nature. Theoretical pluralism among researchers facilitates refinement of models within various levels of analysis, which ultimately enables effective cross-talk between different levels of analysis.
We introduce this special issue on Thinking about Climate Change by reflecting on the role of psychology in responding adaptively to catastrophic global threats. By way of illustration we compare the threat posed by climate change with the extinction-level threat considered in the recent film Don’t Look Up [McKay, A. (Director). (2021). Don’t Look Up [Film]. Hyperobject Industries]. Human psychology is a critical element in both scenarios. The papers in this special issue discuss the importance of clear communication of scientific information, the dangers of misinformation and the possible role played by motivated reasoning, all themes that are taken up in the film. Ultimately, though, it is not enough to consider psychological factors in isolation: we must also acknowledge that cognitive flaws and psychological motivations are exploited by vested interests that profit from delaying climate action. A global response to a global crisis requires us to ‘look up’ to recognise the threat and to ‘look around’ to go beyond specialist disciplines and national boundaries.
We focus on two components of Page's argument in favour of localist representations in connectionist networks: First, we take issue with the claim that localist representations can give rise to generalisation and show that whenever generalisation occurs, distributed representations are involved. Second, we counter the alleged shortcomings of distributed representations and show that their properties are preferable to those of localist approaches.
The articles in this theme issue seek to understand the evolutionary bases of social learning and the consequences of cultural transmission for the evolution of human behaviour. In this introductory article, we provide a summary of these articles and a personal view of some promising lines of development suggested by the work summarized here.
We take up two issues discussed by Chow: the claim by critics of hypothesis testing that the null hypothesis (H0) is always false, and the claim that reporting effect sizes is more appropriate than relying on statistical significance. Concerning the former, we agree with Chow's sentiment despite noting serious shortcomings in his discussion. Concerning the latter, we agree with Chow that effect size need not translate into scientific relevance, and furthermore reiterate that with small samples effect size measures cannot substitute for significance.
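The small-sample point is easy to demonstrate with illustrative data: with four observations per group, an effect well into Cohen's "large" range still falls short of the two-tailed .05 criterion.

```python
def mean(xs):
    return sum(xs) / len(xs)

def var(xs):                      # unbiased sample variance
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

a = [1.0, 2.0, 3.0, 4.0]
b = [2.5, 3.5, 4.5, 5.5]
n = len(a)                        # 4 per group

pooled_sd = ((var(a) + var(b)) / 2) ** 0.5
d = (mean(b) - mean(a)) / pooled_sd                      # Cohen's d > 0.8, "large"
t = (mean(b) - mean(a)) / (pooled_sd * (2 / n) ** 0.5)   # two-sample t, df = 6
T_CRIT = 2.447   # two-tailed .05 critical value of t for df = 6
# d exceeds the "large" benchmark, yet t < T_CRIT: not significant
```

Here d ≈ 1.16 but t ≈ 1.64, well below 2.447, so a large observed effect size carries no evidential weight on its own when n is this small.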