Belief polarization is said to occur when two people respond to the same evidence by updating their beliefs in opposite directions. This response is considered “irrational” because it involves contrary updating, a form of belief updating that appears to violate normatively optimal responding as dictated, for example, by Bayes' theorem. In light of much evidence that people are capable of normatively optimal behavior, belief polarization presents a puzzling exception. We show that Bayesian networks, or Bayes nets, can simulate rational belief updating. When fit to experimental data, Bayes nets can help identify the factors that contribute to polarization. We present a study of belief updating concerning the reality of climate change in response to information about the scientific consensus on anthropogenic global warming. The study used representative samples of Australian and U.S. participants. Among Australians, consensus information partially neutralized the influence of worldview, with free-market supporters showing a greater increase in acceptance of human-caused global warming relative to free-market opponents. In contrast, while consensus information overall had a positive effect on perceived consensus among U.S. participants, there was a reduction in perceived consensus and acceptance of human-caused global warming for strong supporters of unregulated free markets. Fitting a Bayes net model to the data indicated that under a Bayesian framework, free-market support is a significant driver of beliefs about climate change and trust in climate scientists. Furthermore, active distrust of climate scientists among a small number of U.S. conservatives drives contrary updating in response to consensus information within this particular group.
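The contrary-updating mechanism described above can be illustrated with a minimal Bayesian sketch (the numbers are hypothetical, not drawn from the study): two agents observe the same consensus message, but because one distrusts the source, their likelihoods differ, and Bayes' rule moves their beliefs in opposite directions from the same prior.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H | E) for hypothesis H after observing evidence E."""
    num = prior * p_evidence_given_h
    return num / (num + (1 - prior) * p_evidence_given_not_h)

# Two hypothetical agents observe the same consensus message E,
# starting from the same prior P(AGW) = 0.5.

# Trusting agent: a consensus report is far more likely if AGW is real.
truster = posterior(prior=0.5, p_evidence_given_h=0.9, p_evidence_given_not_h=0.2)

# Distrusting agent: believes scientists would report a consensus even
# (indeed, especially) if AGW were not real -- the likelihoods flip.
distruster = posterior(prior=0.5, p_evidence_given_h=0.3, p_evidence_given_not_h=0.8)

print(round(truster, 3))     # → 0.818 (belief rises above the 0.5 prior)
print(round(distruster, 3))  # → 0.273 (belief falls below the 0.5 prior)
```

The point of the sketch is that both agents apply Bayes' rule correctly; the divergence comes entirely from what each believes the evidence would look like under each hypothesis, which is where trust in the source enters the model.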
The breadth-first search adopted by Bayesian researchers to map out the conceptual space and identify what the framework can do is beneficial for science and reflective of its collaborative and incremental nature. Theoretical pluralism among researchers facilitates the refinement of models at various levels of analysis, ultimately enabling effective cross-talk between those levels.
The 11 articles in this issue explore how people respond to climate change and other global challenges. The articles pursue three broad strands of enquiry that relate to the effects and causes of “skepticism” about climate change, the purely cognitive challenges that are posed by a complex scientific issue, and the ways in which climate change can be communicated to a wider audience. Cognitive science can contribute to understanding people's responses to global challenges in many ways, and it may also contribute to implementing solutions to those problems.
Science strives for coherence. For example, the findings from climate science form a highly coherent body of knowledge that is supported by many independent lines of evidence: greenhouse gas emissions from human economic activities are causing the global climate to warm, and unless GHG emissions are drastically reduced in the near future, the risks from climate change will continue to grow and major adverse consequences will become unavoidable. People who oppose this scientific body of knowledge because the implications of cutting GHG emissions—such as regulation or increased taxation—threaten their worldview or livelihood cannot provide an alternative view that is coherent by the standards of conventional scientific thinking. Instead, we suggest that people who reject the fact that the Earth’s climate is changing due to greenhouse gas emissions oppose whatever inconvenient finding they are confronting in piecemeal fashion rather than systematically, and without considering the implications of this rejection for the rest of the relevant scientific theory and findings. Hence, claims that the globe “is cooling” can coexist with claims that the “observed warming is natural” and that “the human influence does not matter because warming is good for us.” Coherence between these mutually contradictory opinions can only be achieved at a highly abstract level, namely that “something must be wrong” with the scientific evidence in order to justify a political position against climate change mitigation. This high-level coherence accompanied by contradictory subordinate propositions is a known attribute of conspiracist ideation, and conspiracism may be implicated when people reject well-established scientific propositions.
Information changes as it is passed from person to person, with this process of cultural transmission allowing the minds of individuals to shape the information that they transmit. We present mathematical models of cultural transmission that predict that the amount of information passed from person to person should affect the rate at which that information changes. We tested this prediction using a function-learning task, in which people learn a functional relationship between two variables by observing the values of those variables. We varied the total number of observations and the number of those observations that take unique values. We found an effect of the number of observations, with functions transmitted using fewer observations changing form more quickly. We did not find an effect of the number of unique observations, suggesting that noise in perception or memory may have affected learning.
The articles in this theme issue seek to understand the evolutionary bases of social learning and the consequences of cultural transmission for the evolution of human behaviour. In this introductory article, we provide a summary of these articles and a personal view of some promising lines of development suggested by the work summarized here.
We take up two issues discussed by Chow: the claim by critics of hypothesis testing that the null hypothesis (H0) is always false, and the claim that reporting effect sizes is more appropriate than relying on statistical significance. Concerning the former, we agree with Chow's sentiment despite noting serious shortcomings in his discussion. Concerning the latter, we agree with Chow that effect size need not translate into scientific relevance, and furthermore reiterate that with small samples effect size measures cannot substitute for significance.
Is consolidation needed to account for retroactive interference in free recall? Interfering mental activity during the retention interval of a memory task impairs performance, in particular if the interference occurs in temporal proximity to the encoding of the to-be-remembered (TBR) information. There are at least two rival theoretical accounts of this temporal gradient of retroactive interference. The cognitive neuroscience literature has suggested that neural consolidation is a pivotal factor determining item recall. According to this account, interfering activity interrupts consolidation processes that would otherwise stabilize the memory representations of TBR items post-encoding. Temporal distinctiveness theory, by contrast, proposes that the retrievability of items depends on their isolation in psychological time. According to this theory, information processed after the encoding of TBR material will reduce the temporal distinctiveness of the TBR information. To test between these accounts, implementations of consolidation were added to the SIMPLE model of memory and learning. We report data from two experiments utilizing a two-list free recall paradigm. Modeling results showed that SIMPLE accounted for the data and did not benefit from the addition of consolidation. It is concluded that the temporal gradient of retroactive interference cannot be taken as evidence for memory consolidation.
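The temporal-distinctiveness account can be sketched numerically. In SIMPLE-style models, items sit on a log-transformed temporal dimension, similarity between items decays exponentially with distance on that dimension, and an item's retrievability tracks how discriminable it is from everything else in memory. The sketch below uses an illustrative decay parameter, not a fitted one:

```python
import math

def discriminability(times, i, c=2.0):
    """SIMPLE-style discriminability of item i: similarity between items
    falls off exponentially with distance in log time, and discriminability
    is self-similarity (exp(0) = 1) relative to the summed similarity of
    item i to every item in memory."""
    log_ti = math.log(times[i])
    sims = [math.exp(-c * abs(log_ti - math.log(t))) for t in times]
    return 1.0 / sum(sims)

# Time since encoding (in s) at test for two list items plus one
# interfering event.  Interference soon after encoding ends up close to
# the item on the log-time dimension at test, crowding it; interference
# that arrives much later does not.
near_interference = [12.0, 10.0, 9.5]  # interference right after item 2
late_interference = [12.0, 10.0, 2.0]  # interference long after item 2

print(discriminability(near_interference, 1) <
      discriminability(late_interference, 1))  # → True
```

This reproduces the temporal gradient without any consolidation mechanism: the earlier the interference, the nearer it sits to the TBR item in log time at test, and the lower that item's discriminability.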
We focus on two components of Page's argument in favour of localist representations in connectionist networks: First, we take issue with the claim that localist representations can give rise to generalisation and show that whenever generalisation occurs, distributed representations are involved. Second, we counter the alleged shortcomings of distributed representations and show that their properties are preferable to those of localist approaches.
Computational modeling is now ubiquitous in psychology, and researchers who are not modelers may find it increasingly difficult to follow the theoretical developments in their field. This book presents an integrated framework for the development and application of models in psychology and related disciplines. Researchers and students are given the knowledge and tools to interpret models published in their area, as well as to develop, fit, and test their own models. Both the development of models and the key features of any model are covered, as are the applications of models in a variety of domains across the behavioural sciences. A number of chapters are devoted to fitting models using maximum likelihood and Bayesian estimation, including fitting hierarchical and mixture models. Model comparison is described as a core philosophy of scientific inference, and the use of models to understand theories and advance scientific discourse is explained.