Are people rational? This question was central to Greek thought and has been at the heart of psychology and philosophy for millennia. This book provides a radical and controversial reappraisal of conventional wisdom in the psychology of reasoning, proposing that the Western conception of the mind as a logical system is flawed at the very outset. It argues that cognition should be understood in terms of probability theory, the calculus of uncertain reasoning, rather than in terms of logic, the calculus of certain reasoning.
It is widely assumed that human learning and the structure of human languages are intimately related. This relationship is frequently suggested to derive from a language-specific biological endowment, which encodes universal, but communicatively arbitrary, principles of language structure (a Universal Grammar or UG). How might such a UG have evolved? We argue that UG could not have arisen either by biological adaptation or non-adaptationist genetic processes, resulting in a logical problem of language evolution. Specifically, as the processes of language change are much more rapid than processes of genetic change, language constitutes a "moving target" both over time and across different human populations, and, hence, cannot provide a stable environment to which language genes could have adapted. We conclude that a biologically determined UG is not evolutionarily viable. Instead, the original motivation for UG arises because language has been shaped to fit the human brain, rather than vice versa. Following Darwin, we view language itself as a complex and interdependent "organism," which evolves under selectional pressures from human learning and processing mechanisms. That is, languages themselves are shaped by severe selectional pressure from each generation of language users and learners. This suggests that apparently arbitrary aspects of linguistic structure may result from general learning and processing biases deriving from the structure of thought processes, perceptuo-motor factors, cognitive limitations, and pragmatics.
Memory is fleeting. New material rapidly obliterates previous material. How, then, can the brain deal successfully with the continual deluge of linguistic input? We argue that, to deal with this “Now-or-Never” bottleneck, the brain must compress and recode linguistic input as rapidly as possible. This observation has strong implications for the nature of language processing: the language system must “eagerly” recode and compress linguistic input; as the bottleneck recurs at each new representational level, the language system must build a multilevel linguistic representation; and the language system must deploy all available information predictively to ensure that local linguistic ambiguities are dealt with “Right-First-Time”; once the original input is lost, there is no way for the language system to recover. This is “Chunk-and-Pass” processing. Similarly, language learning must also occur in the here and now, which implies that language acquisition is learning to process, rather than inducing, a grammar. Moreover, this perspective provides a cognitive foundation for grammaticalization and other aspects of language change. Chunk-and-Pass processing also helps explain a variety of core properties of language, including its multilevel representational structure and duality of patterning. This approach promises to create a direct relationship between psycholinguistics and linguistic theory. More generally, we outline a framework within which to integrate often disconnected inquiries into language processing, language acquisition, and language change and evolution.
According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic – the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards.
'The Probabilistic Mind' is a follow-up to the influential and highly cited 'Rational Models of Cognition'. It brings together developments in understanding how, and how far, high-level cognitive processes can be understood in rational terms, and particularly using probabilistic Bayesian methods.
Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, ‘sophisticated’ probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the approach, explore how the approach relates to studies of explicit probabilistic reasoning, and give a brief overview of the field as it stands today.
This book shows how these developments have led researchers to view people's conditional reasoning behaviour more as successful probabilistic reasoning rather ...
This book explores a new approach to understanding the human mind - rational analysis - that regards thinking as a facility adapted to the structure of the world. This approach is most closely associated with the work of John R. Anderson, who published the original book on rational analysis in 1990. Since then, a great deal of work has been carried out in a number of laboratories around the world, and the aim of this book is to bring this work together for the benefit of the general psychological audience. The book contains chapters by some of the world's leading researchers in memory, categorisation, reasoning, and search, who show how the power of rational analysis can be applied to the central question of how humans think. It will be of interest to students and researchers in cognitive psychology, cognitive science, and animal behaviour.
A recent development in the cognitive science of reasoning has been the emergence of a probabilistic approach to the behaviour observed on ostensibly logical tasks. According to this approach the errors and biases documented on these tasks occur because people import their everyday uncertain reasoning strategies into the laboratory. Consequently, participants' behaviour only appears irrational because it is compared with an inappropriate logical standard. In this article, we contrast the probabilistic approach with other approaches to explaining rationality, and then show how it has been applied to three main areas of logical reasoning: conditional inference, Wason's selection task and syllogistic reasoning.
Probability ratio and likelihood ratio measures of inductive support and related notions have appeared as theoretical tools for probabilistic approaches in the philosophy of science, the psychology of reasoning, and artificial intelligence. In an effort of conceptual clarification, several authors have pursued axiomatic foundations for these two families of measures. Such results have been criticized, however, as relying on unduly demanding or poorly motivated mathematical assumptions. We provide two novel theorems showing that probability ratio and likelihood ratio measures can be axiomatized in a way that overcomes these difficulties.
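For orientation only (this is the generic notation common in the confirmation-theory literature, not necessarily the axiomatization given in the paper), the two families of measures are typically written as

$$\mathrm{PR}(H, E) = \frac{P(H \mid E)}{P(H)}, \qquad \mathrm{LR}(H, E) = \frac{P(E \mid H)}{P(E \mid \neg H)},$$

where H is a hypothesis and E the evidence; both measures exceed 1 exactly when E raises the probability of H.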
Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online (...) corpus creation has enabled large models to be tested, revealing probabilistic constraints in processing, undermining acquisition arguments based on a perceived poverty of the stimulus, and suggesting fruitful links with probabilistic theories of categorization and ambiguity resolution in perception. (shrink)
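As an illustration of what "probabilistic models defined over traditional symbolic structures" can mean, the sketch below scores a single derivation under a toy probabilistic context-free grammar. The grammar, rule probabilities, and derivation are invented for the example and are not drawn from the review.

```python
# Minimal sketch: a probabilistic model over a symbolic structure -- a toy
# probabilistic context-free grammar (PCFG). All rules and probabilities
# below are illustrative assumptions only.
from functools import reduce

# Each left-hand side maps to a list of (right-hand side, probability) pairs.
pcfg = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("the", "dog"), 0.5), (("the", "cat"), 0.5)],
    "VP": [(("barks",), 0.7), (("sleeps",), 0.3)],
}

def rule_prob(lhs, rhs):
    """Probability of expanding `lhs` with the given right-hand side."""
    for candidate, p in pcfg[lhs]:
        if candidate == rhs:
            return p
    return 0.0

# Probability of one derivation = product of its rule probabilities.
derivation = [("S", ("NP", "VP")), ("NP", ("the", "dog")), ("VP", ("barks",))]
p_parse = reduce(lambda acc, step: acc * rule_prob(*step), derivation, 1.0)
print(p_parse)  # 1.0 * 0.5 * 0.7 = 0.35
```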
Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain-general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C-induction), rather than to model the structure of the natural world (N-induction). We argue that, of the two, C-induction is dramatically easier. More broadly, we argue that understanding the acquisition of any cultural form, whether linguistic or otherwise, during development, requires considering the corresponding question of how that cultural form arose through processes of cultural evolution. This perspective helps resolve the “logical” problem of language acquisition and has far-reaching implications for evolutionary psychology.
We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute’s subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We assume that the sample reflects both the immediate distribution of attribute values from the current decision’s context and also the background, real-world distribution of attribute values. DbS accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal discounting; and the overestimation of small probabilities and the underestimation of large probabilities.
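The following sketch gives one possible reading of the DbS value construction described above; the pooling of context and memory samples, the function and variable names, and the numbers are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of Decision by Sampling (DbS): an attribute's subjective
# value is its relative rank among sampled comparison values, estimated from
# binary, ordinal comparisons. Pooling strategy and parameters are assumed.
import random

def dbs_subjective_value(target, context_sample, memory_sample, n_comparisons=1000):
    """Estimate subjective value as the proportion of binary comparisons
    in which the target exceeds a value sampled from context or memory."""
    pool = context_sample + memory_sample  # assumed: simple pooling of samples
    wins = 0
    for _ in range(n_comparisons):
        comparison = random.choice(pool)
        if target > comparison:
            wins += 1
    return wins / n_comparisons  # relative rank in [0, 1]

# Illustrative use: a 30-unit gain judged against mostly smaller gains ranks high.
print(dbs_subjective_value(30, context_sample=[5, 10, 20], memory_sample=[2, 8, 15, 50]))
```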
Social interaction is both ubiquitous and central to understanding human behavior. Such interactions depend, we argue, on shared intentionality: the parties must form a common understanding of an ambiguous interaction. Yet how can shared intentionality arise? Many well-known accounts of social cognition, including those involving “mind-reading,” typically fall into circularity and/or regress. For example, A’s beliefs and behavior may depend on her prediction of B’s beliefs and behavior, but B’s beliefs and behavior depend in turn on her prediction of A’s beliefs and behavior. One possibility is to embrace circularity and take shared intentionality as imposing consistency conditions on beliefs and behavior, but typically there are many possible solutions and no clear criteria for choosing between them. We argue that addressing these challenges requires some form of we-reasoning, but that this raises the puzzle of how the collective agent arises from the individual agents. This puzzle can be solved by proposing that the will of the collective agent arises from a simulated process of bargaining: agents must infer what they would agree, were they able to communicate. This model explains how, and which, shared intentions are formed. We also propose that such “virtual bargaining” may be fundamental to understanding social interactions.
An influential line of thinking in behavioral science, to which the two authors have long subscribed, is that many of society's most pressing problems can be addressed cheaply and effectively at the level of the individual, without modifying the system in which the individual operates. We now believe this was a mistake, along with, we suspect, many colleagues in both the academic and policy communities. Results from such interventions have been disappointingly modest. But more importantly, they have guided many (though by no means all) behavioral scientists to frame policy problems in individual, not systemic, terms: to adopt what we call the “i-frame,” rather than the “s-frame.” The difference may be more consequential than i-frame advocates have realized, by deflecting attention and support away from s-frame policies. Indeed, highlighting the i-frame is a long-established objective of corporate opponents of concerted systemic action such as regulation and taxation. We illustrate our argument briefly for six policy problems, and in depth with the examples of climate change, obesity, retirement savings, and pollution from plastic waste. We argue that the most important way in which behavioral scientists can contribute to public policy is by employing their skills to develop and implement value-creating system-level change.
This interdisciplinary new work explores one of the central theoretical problems in linguistics: learnability. The authors, whose backgrounds span linguistics, philosophy, computer science, psychology, and cognitive science, explore the idea that language acquisition proceeds through general-purpose learning mechanisms, an approach that is broadly empiricist both methodologically and psychologically. Written by four researchers covering the full range of relevant fields: linguistics, psychology, computer science, and cognitive science, the book sheds light on the central problems of learnability and language, and traces their implications for key questions of theoretical linguistics and the study of language acquisition.
Rational analysis (Anderson 1990, 1991a) is an empirical program of attempting to explain why the cognitive system is adaptive, with respect to its goals and the structure of its environment. We argue that rational analysis has two important implications for philosophical debate concerning rationality. First, rational analysis provides a model for the relationship between formal principles of rationality (such as probability or decision theory) and everyday rationality, in the sense of successful thought and action in daily life. Second, applying the program of rational analysis to research on human reasoning leads to a radical reinterpretation of empirical results which are typically viewed as demonstrating human irrationality.
Four experiments investigated the effects of probability manipulations on the indicative four card selection task (Wason, 1966, 1968). All looked at the effects of high and low probability antecedents (p) and consequents (q) on participants' data selections when determining the truth or falsity of a conditional rule, if p then q. Experiments 1 and 2 also manipulated believability. In Experiment 1, 128 participants performed the task using rules with varied contents pretested for probability of occurrence. Probabilistic effects were observed which were partly consistent with some probabilistic accounts but not with non-probabilistic approaches to selection task performance. No effects of believability were observed, a finding replicated in Experiment 2 which used 80 participants with standardised and familiar contents. Some effects in this experiment appeared inconsistent with existing probabilistic approaches. To avoid possible effects of content, Experiments 3 (48 participants) and 4 (20 participants) used abstract material. Both experiments revealed probabilistic effects. In the Discussion we examine the compatibility of these results with the various models of selection task performance.
If Bayesian Fundamentalism existed, Jones & Love's (J&L's) arguments would provide a necessary corrective. But it does not. Bayesian cognitive science is deeply concerned with characterizing algorithms and representations, and, ultimately, implementations in neural circuits; it pays close attention to environmental structure and the constraints of behavioral data, when available; and it rigorously compares multiple models, both within and across papers. J&L's recommendation of Bayesian Enlightenment corresponds to past, present, and, we hope, future practice in Bayesian cognitive science.
Much research on judgment and decision making has focussed on the adequacy of classical rationality as a description of human reasoning. But more recently it has been argued that classical rationality should also be rejected even as a normative standard for human reasoning. For example, Gigerenzer and Goldstein and Gigerenzer and Todd argue that reasoning involves “fast and frugal” algorithms which are not justified by rational norms, but which succeed in the environment. They provide three lines of argument for this view, based on: the importance of the environment; the existence of cognitive limitations; and the fact that an algorithm with no apparent rational basis, Take-the-Best, succeeds in a judgment task. We reconsider these three lines of argument, arguing that standard patterns of explanation in psychology and the social and biological sciences use rational norms to explain why simple cognitive algorithms can succeed. We also present new computer simulations that compare Take-the-Best with other cognitive models. Although Take-the-Best still performs well, it does not perform noticeably better than the other models. We conclude that these results provide no strong reason to prefer Take-the-Best over alternative cognitive models.
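For readers unfamiliar with the heuristic, the sketch below implements the textbook form of Take-the-Best (decide on the first cue, taken in order of validity, that discriminates between the options). The cue names and values are invented, and this is not the simulation code reported in the paper.

```python
# Minimal sketch of the Take-the-Best heuristic: check cues in descending
# order of validity and decide on the first cue that discriminates.
def take_the_best(option_a, option_b, cues_by_validity):
    """Each option is a dict of binary cue values (1 = cue present).
    Returns 'A', 'B', or 'guess' if no cue discriminates."""
    for cue in cues_by_validity:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                     # first discriminating cue decides
            return "A" if a > b else "B"
    return "guess"                     # fall back to guessing

# Illustrative city-size comparison with made-up cue values.
cues = ["capital", "has_airport", "university"]
print(take_the_best({"capital": 1, "has_airport": 1},
                    {"has_airport": 1, "university": 1},
                    cues))  # -> 'A' (decided by the 'capital' cue)
```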
It has been argued that dual process theories are not consistent with Oaksford and Chater’s probabilistic approach to human reasoning (Oaksford and Chater in Psychol Rev 101:608–631, 1994, 2007; Oaksford et al. 2000), which has been characterised as a “single-level probabilistic treatment[s]” (Evans 2007). In this paper, it is argued that this characterisation conflates levels of computational explanation. The probabilistic approach is a computational level theory which is consistent with theories of general cognitive architecture that invoke a WM system and an LTM system. That is, it is a single function dual process theory which is consistent with dual process theories like Evans’ (2007) that use probability logic (Adams 1998) as an account of analytic processes. This approach contrasts with dual process theories which propose an analytic system that respects standard binary truth functional logic (Heit and Rotello in J Exp Psychol Learn 36:805–812, 2010; Klauer et al. in J Exp Psychol Learn 36:298–323, 2010; Rips in Psychol Sci 12:29–134, 2001, 2002; Stanovich in Behav Brain Sci 23:645–726, 2000, 2011). The problems noted for this latter approach by both Evans (Psychol Bull 128:978–996, 2002, 2007) and Oaksford and Chater (Mind Lang 6:1–38, 1991, 1998, 2007), due to the defeasibility of everyday reasoning, are rehearsed. Oaksford and Chater’s (2010) dual systems implementation of their probabilistic approach is then outlined and its implications discussed. In particular, the nature of cognitive decoupling operations is discussed and a Panglossian probabilistic position developed that can explain both modal and non-modal responses and correlations with IQ in reasoning tasks. It is concluded that a single function probabilistic approach is as compatible with the evidence as dual systems theories.
Natural language is full of patterns that appear to fit with general linguistic rules but are ungrammatical. There has been much debate over how children acquire these “linguistic restrictions,” and whether innate language knowledge is needed. Recently, it has been shown that restrictions in language can be learned asymptotically via probabilistic inference using the minimum description length (MDL) principle. Here, we extend the MDL approach to give a simple and practical methodology for estimating how much linguistic data are required to learn a particular linguistic restriction. Our method provides a new research tool, allowing arguments about natural language learnability to be made explicit and quantified for the first time. We apply this method to a range of classic puzzles in language acquisition. We find some linguistic rules appear easily statistically learnable from language experience only, whereas others appear to require additional learning mechanisms (e.g., additional cues or innate constraints).
Judea Pearl has argued that counterfactuals and causality are central to intelligence, whether natural or artificial, and has helped create a rich mathematical and computational framework for formally analyzing causality. Here, we draw out connections between these notions and various current issues in cognitive science, including the nature of mental “programs” and mental representation. We argue that programs (consisting of algorithms and data structures) have a causal (counterfactual-supporting) structure; these counterfactuals can reveal the nature of mental representations. Programs can also provide a causal model of the external world. Such models are, we suggest, ubiquitous in perception, cognition, and language processing.
Human cognition requires coping with a complex and uncertain world. This suggests that dealing with uncertainty may be the central challenge for human reasoning. In Bayesian Rationality we argue that probability theory, the calculus of uncertainty, is the right framework in which to understand everyday reasoning. We also argue that probability theory explains behavior, even on experimental tasks that have been designed to probe people's logical reasoning abilities. Most commentators agree on the centrality of uncertainty; some suggest that there is a residual role for logic in understanding reasoning; and others put forward alternative formalisms for uncertain reasoning, or raise specific technical, methodological, or empirical challenges. In responding to these points, we aim to clarify the scope and limits of probability and logic in cognitive science; explore the meaning of the explanation of cognition; and re-evaluate the empirical case for Bayesian rationality.
These volumes provide a resource that makes this research accessible across disciplines and clarifies its importance for the social sciences and philosophy as ...
Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent over time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability.
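For reference, and assuming one common choice of impact measure (the probability difference; the study's own operationalization may differ), the two cognitive representations contrasted here can be written as

$$\text{posterior: } P(H \mid E), \qquad \text{impact: } \Delta(H, E) = P(H \mid E) - P(H),$$

where H is the hypothesis and E the new evidence.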