Existing models of strategic decision making typically assume that only the attributes of the currently played game need be considered when reaching a decision. The results presented in this article demonstrate that the so-called “cooperativeness” of the previously played prisoner’s dilemma games influences choices and predictions in the current prisoner’s dilemma game, which suggests that games are not considered independently. These effects involved reinforcement-based assimilation to the previous choices and also a perceptual contrast of the present game with preceding games, depending on the range and the rank of their cooperativeness. A. Parducci’s (1965) range frequency theory and H. Helson’s (1964) adaptation level theory are plausible theories of relative judgment of magnitude information, which could provide an account of these context effects.
In many theories of decision under risk (e.g., expected utility theory, rank dependent utility theory, and prospect theory) the utility or value of a prospect is independent of other prospects or options in the choice set. The experiments presented here show a large effect of the set of options available, suggesting instead that prospects are valued relative to one another. The judged certainty equivalent of a prospect is strongly influenced by the options available. Similarly, the selection of a preferred option from a set of prospects is strongly influenced by the prospects available. Alternative theories of decision under risk (e.g., the stochastic difference model, multialternative decision field theory, and range frequency theory), in which prospects themselves or prospect attributes are valued relative to one another, can provide an account of these context effects.
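As a concrete illustration of how an option’s judged value can depend on the other options in the choice set, the following sketch computes a range frequency value in the style of Parducci’s theory, one of the relative-valuation accounts the abstract mentions. The function name, the equal range/frequency weighting (w = 0.5), and the example amounts are illustrative assumptions, not the authors’ experimental procedure.

```python
def range_frequency_value(x, context, w=0.5):
    """Range frequency value of x judged within a context of other values.

    Blends x's position within the range of the context (R) with x's
    relative rank among the context values (F). Assumes x is itself in
    the context and there are no ties; w = 0.5 is an assumed weighting.
    """
    lo, hi = min(context), max(context)
    r = (x - lo) / (hi - lo)                              # range component
    f = sum(c < x for c in context) / (len(context) - 1)  # frequency (rank) component
    return w * r + (1 - w) * f

# The same £40 prospect is judged differently in different option sets:
print(range_frequency_value(40, [10, 20, 30, 40, 100]))  # mid-range but high rank
print(range_frequency_value(40, [40, 60, 70, 80, 100]))  # bottom of range and rank
```

The two calls return different values for the identical £40 option, which is the qualitative pattern of context dependence the experiments report.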
We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute’s subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We assume that the sample reflects both the immediate distribution of attribute values from the current decision’s context and also the background, real-world distribution of attribute values. DbS accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal discounting; and the overestimation of small probabilities and the underestimation of large probabilities.
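The valuation process described above lends itself to a short simulation. In this sketch (an illustration under assumed parameter values, not the authors’ implementation), an attribute’s subjective value is simply its relative rank within a sample that pools values from the current context with values retrieved from memory:

```python
import random

def dbs_value(target, context_values, memory_values, n_comparisons=1000):
    """Decision by sampling, sketched: subjective value is the proportion
    of binary, ordinal comparisons the target wins against a sample drawn
    from the immediate context and from long-term memory. The 50/50 pooling
    of the two sources and the comparison count are assumptions.
    """
    sample = context_values + memory_values
    wins = sum(target > random.choice(sample) for _ in range(n_comparisons))
    return wins / n_comparisons  # relative rank in [0, 1]

# A £30 gain ranks near the top of a sample of small everyday amounts,
# so its subjective value is high despite its modest absolute size:
print(dbs_value(30, context_values=[5, 10, 20], memory_values=[1, 2, 4, 8, 15, 50]))
```

Because value is a rank, equal increments matter less near the top of the sample than near the bottom, which is how a rank-based account can yield concave utility without positing an underlying utility scale.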
Estimating the financial value of pain informs issues as diverse as the market price of analgesics, the cost-effectiveness of clinical treatments, compensation for injury, and the response to public hazards. Such costs are assumed to reflect a stable trade-off between relief of discomfort and money. Here, using an auction-based health market experiment, we show that the price people pay for relief of pain is strongly determined by the local context of the market, whether set by recent intensities of pain or by immediately disposable income, but not by overall wealth. The absence of a stable valuation metric suggests that the dynamic behaviour of health markets is not predictable from the static behaviour of individuals. We conclude that the results follow the dynamics of habit formation models of economic theory, and as such, the study provides the first scientific basis for this type of preference modelling.
Judea Pearl has argued that counterfactuals and causality are central to intelligence, whether natural or artificial, and has helped create a rich mathematical and computational framework for formally analyzing causality. Here, we draw out connections between these notions and various current issues in cognitive science, including the nature of mental “programs” and mental representation. We argue that programs (consisting of algorithms and data structures) have a causal (counterfactual-supporting) structure; these counterfactuals can reveal the nature of mental representations. Programs can also provide a causal model of the external world. Such models are, we suggest, ubiquitous in perception, cognition, and language processing.
This article reviews a number of different areas in the foundations of formal learning theory. After outlining the general framework for formal models of learning, the Bayesian approach to learning is summarized. This leads to a discussion of Solomonoff’s Universal Prior Distribution for Bayesian learning. Gold’s model of identification in the limit is also outlined. We next discuss a number of aspects of learning theory raised in contributed papers, related to both computational and representational complexity. The article concludes with a description of how semi-supervised learning can be applied to the study of cognitive learning models. Throughout this overview, the specific points raised by our contributing authors are connected to the models and methods under review.
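For reference, Solomonoff’s universal prior assigns each string x the summed weight of all programs that output it on a universal prefix machine U, so that simple (short-program) hypotheses dominate:

```latex
m(x) = \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}
```

where \(\ell(p)\) is the length of program p in bits. Bayesian learning with this prior therefore favours the simplest hypotheses consistent with the data, which is the sense in which the universal prior formalizes a simplicity principle.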
Children learn their native language by exposure to their linguistic and communicative environment, but apparently without requiring that their mistakes be corrected. Such learning from “positive evidence” has been viewed as raising “logical” problems for language acquisition. In particular, without correction, how is the child to recover from conjecturing an over-general grammar, which will be consistent with any sentence that the child hears? There have been many proposals concerning how this “logical problem” can be dissolved. In this study, we review recent formal results showing that the learner has sufficient data to learn successfully from positive evidence, if it favors the simplest encoding of the linguistic input. Results include the learnability of linguistic prediction, grammaticality judgments, language production, and form-meaning mappings. The simplicity approach can also be “scaled down” to analyze the learnability of specific linguistic constructions, and it is amenable to empirical testing as a framework for describing human language acquisition.
It has been argued that dual process theories are not consistent with Oaksford and Chater’s probabilistic approach to human reasoning (Oaksford and Chater in Psychol Rev 101:608–631, 1994, 2007; Oaksford et al. 2000), which has been characterised as a “single-level probabilistic treatment[s]” (Evans 2007). In this paper, it is argued that this characterisation conflates levels of computational explanation. The probabilistic approach is a computational-level theory which is consistent with theories of general cognitive architecture that invoke a WM system and an LTM system. That is, it is a single-function dual process theory which is consistent with dual process theories like Evans’ (2007) that use probability logic (Adams 1998) as an account of analytic processes. This approach contrasts with dual process theories which propose an analytic system that respects standard binary truth-functional logic (Heit and Rotello in J Exp Psychol Learn 36:805–812, 2010; Klauer et al. in J Exp Psychol Learn 36:298–323, 2010; Rips in Psychol Sci 12:129–134, 2001, 2002; Stanovich in Behav Brain Sci 23:645–726, 2000, 2011). The problems noted for this latter approach by both Evans (Psychol Bull 128:978–996, 2002, 2007) and Oaksford and Chater (Mind Lang 6:1–38, 1991, 1998, 2007), due to the defeasibility of everyday reasoning, are rehearsed. Oaksford and Chater’s (2010) dual systems implementation of their probabilistic approach is then outlined and its implications discussed. In particular, the nature of cognitive decoupling operations is discussed and a Panglossian probabilistic position developed that can explain both modal and non-modal responses and correlations with IQ in reasoning tasks. It is concluded that a single-function probabilistic approach is as compatible with the evidence as a dual systems theory.
We report the results of a dual-task study in which participants performed a tracking and typing task under various experimental conditions. An objective payoff function was used to provide explicit feedback on how participants should trade off performance between the tasks. Results show that participants’ dual-task interleaving strategy was sensitive to changes in the difficulty of the tracking task and resulted in differences in overall task performance. To test the hypothesis that people select strategies that maximize payoff, a Cognitively Bounded Rational Analysis model was developed. This analysis evaluated a variety of dual-task interleaving strategies to identify the optimal strategy for maximizing payoff in each condition. The model predicts that the region of optimum performance is different between experimental conditions. The correspondence between human data and the prediction of the optimal strategy is found to be remarkably high across a number of performance measures. This suggests that participants were honing their behavior to maximize payoff. Limitations are discussed.
Mere facts about how the world is cannot determine how we ought to think or behave. Elqayam & Evans (E&E) argue that this undercuts the use of rational analysis, by ourselves and others, in explaining how people reason. But this presumed application of the fallacy is itself fallacious. Rational analysis seeks to explain how people do reason, for example in laboratory experiments, not how they ought to reason. Thus, no ought is derived from an is, and rational analysis is unchallenged by E&E’s arguments.
Debates concerning the types of representations that aid reading acquisition have often been influenced by the relationship between measures of early phonological awareness (the ability to process speech sounds) and later reading ability. Here, a complementary approach is explored, analyzing how the functional utility of different representational units, such as whole words, bodies (letters representing the vowel and final consonants of a syllable), and graphemes (letters representing a phoneme) may change as the number of words that can be read gradually increases. Utility is measured by applying a Simplicity Principle to the problem of mapping from print to sound; that is, assuming that the “best” representational units for reading are those which allow the mapping from print to sounds to be encoded as efficiently as possible. Results indicate that when only a small number of words are read, whole-word representations are most useful, whereas when many words can be read, graphemic representations have the highest utility.
Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain-general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C-induction), rather than to model the structure of the natural world (N-induction). We argue that, of the two, C-induction is dramatically easier. More broadly, we argue that understanding the acquisition of any cultural form, whether linguistic or otherwise, during development, requires considering the corresponding question of how that cultural form arose through processes of cultural evolution. This perspective helps resolve the “logical” problem of language acquisition and has far-reaching implications for evolutionary psychology.
Natural language is full of patterns that appear to fit with general linguistic rules but are ungrammatical. There has been much debate over how children acquire these “linguistic restrictions,” and whether innate language knowledge is needed. Recently, it has been shown that restrictions in language can be learned asymptotically via probabilistic inference using the minimum description length (MDL) principle. Here, we extend the MDL approach to give a simple and practical methodology for estimating how much linguistic data are required to learn a particular linguistic restriction. Our method provides a new research tool, allowing arguments about natural language learnability to be made explicit and quantified for the first time. We apply this method to a range of classic puzzles in language acquisition. We find that some linguistic rules appear easily learnable statistically from language experience alone, whereas others appear to require additional learning mechanisms (e.g., additional cues or innate constraints).
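The flavour of such an estimate can be conveyed with a back-of-envelope sketch (an illustration of the general MDL logic, not the authors’ exact method): a restriction costs some extra bits to state in the grammar but saves bits on every relevant input it constrains, and it becomes learnable once the cumulative savings exceed the statement cost. Both quantities below are assumed for illustration.

```python
import math

def utterances_needed(statement_cost_bits, bits_saved_per_utterance):
    """MDL break-even point (illustrative): number of relevant utterances
    after which a grammar stating the restriction encodes the corpus more
    compactly than a grammar omitting it. Both arguments are assumptions.
    """
    return math.ceil(statement_cost_bits / bits_saved_per_utterance)

# A restriction costing 20 bits to state, saving 0.05 bits per utterance,
# pays for itself after roughly 400 relevant utterances:
print(utterances_needed(20, 0.05))  # -> 400
```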
According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic – the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards.
The rational analysis method, first proposed by John R. Anderson, has been enormously influential in helping us understand high-level cognitive processes. 'The Probabilistic Mind' is a follow-up to the influential and highly cited 'Rational Models of Cognition' (OUP, 1998). It brings together developments in understanding how, and how far, high-level cognitive processes can be understood in rational terms, and particularly using probabilistic Bayesian methods. It synthesizes and evaluates the progress of the past decade, taking into account developments in Bayesian statistics, statistical analysis of the cognitive 'environment', and a variety of theoretical and experimental lines of research. The scope of the book is broad, covering important recent work in reasoning, decision making, categorization, and memory. Including chapters from many of the leading figures in this field, 'The Probabilistic Mind' will be valuable for psychologists and philosophers interested in cognition.
Our target article argued that a genetically specified Universal Grammar (UG), capturing arbitrary properties of languages, is not tenable on evolutionary grounds, and that the close fit between language and language learners arises because language is shaped by the brain, rather than the reverse. Few commentaries defend a genetically specified UG. Some commentators argue that we underestimate the importance of processes of cultural transmission; some propose additional cognitive and brain mechanisms that may constrain language and perhaps differentiate humans from nonhuman primates; and others argue that we overstate or understate the case against co-evolution of language genes. In engaging with these issues, we suggest that a new synthesis concerning the relationship between brains, genes, and language may be emerging.
Are people rational? This question was central to Greek thought and has been at the heart of psychology and philosophy for millennia. This book provides a radical and controversial reappraisal of conventional wisdom in the psychology of reasoning, proposing that the Western conception of the mind as a logical system is flawed at the very outset. It argues that cognition should be understood in terms of probability theory, the calculus of uncertain reasoning, rather than in terms of logic, the calculus of certain reasoning.
Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, ‘sophisticated’ probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the approach, explore how the approach relates to studies of explicit probabilistic reasoning, and give a brief overview of the field as it stands today.
We argue that solving the heterogeneous problems arising from standard game theory requires looking both at reasoning heuristics, as in Colman's analysis, and at how people represent games and the quantities that define them.
Carruthers’ argument depends on viewing logical form as a linguistic level. But logical form is typically viewed as underpinning general purpose inference, and hence as having no particular connection to language processing. If logical form is tied directly to language, two problems arise: a logical problem concerning language acquisition and the empirical problem that aphasics appear capable of cross-modular reasoning.
We argue that confusability between items should be distinguished from generalization between items. Shepard's data concern confusability, but the theories proposed by Shepard and by Tenenbaum & Griffiths concern generalization, indicating a gap between theory and data. We consider the empirical and theoretical work involved in bridging this gap. [Shepard; Tenenbaum & Griffiths].
A recent development in the cognitive science of reasoning has been the emergence of a probabilistic approach to the behaviour observed on ostensibly logical tasks. According to this approach, the errors and biases documented on these tasks occur because people import their everyday uncertain reasoning strategies into the laboratory. Consequently, participants’ behaviour appears irrational only because it is compared with an inappropriate logical standard. In this article, we contrast the probabilistic approach with other approaches to explaining rationality, and then show how it has been applied to three main areas of logical reasoning: conditional inference, Wason’s selection task, and syllogistic reasoning.
This commentary focuses on three issues raised by Gigerenzer, Todd, and the ABC Research Group (1999). First, I stress the need for further experimental evidence to determine which heuristics people use in cognitive judgment tasks. Second, I question the scope of cognitive models based on simple heuristics, arguing that many aspects of cognition are too sophisticated to be modeled in this way. Third, I note the complementary role that rational explanation can play to Gigerenzer et al.’s “ecological” analysis of why heuristics succeed.
Rational analysis (Anderson 1990, 1991a) is an empirical program of attempting to explain why the cognitive system is adaptive, with respect to its goals and the structure of its environment. We argue that rational analysis has two important implications for philosophical debate concerning rationality. First, rational analysis provides a model for the relationship between formal principles of rationality (such as probability or decision theory) and everyday rationality, in the sense of successful thought and action in daily life. Second, applying the program of rational analysis to research on human reasoning leads to a radical reinterpretation of empirical results which are typically viewed as demonstrating human irrationality.
Gold & Stoljar argue persuasively that there is presently not a good case for the “radical neuron doctrine.” There are strong reasons to believe that this doctrine is false. An analogy between psychology and economics strongly throws the radical neuron doctrine into doubt.
Four experiments investigated the effects of probability manipulations on the indicative four-card selection task (Wason, 1966, 1968). All looked at the effects of high and low probability antecedents (p) and consequents (q) on participants' data selections when determining the truth or falsity of a conditional rule, if p then q. Experiments 1 and 2 also manipulated believability. In Experiment 1, 128 participants performed the task using rules with varied contents pretested for probability of occurrence. Probabilistic effects were observed which were partly consistent with some probabilistic accounts but not with non-probabilistic approaches to selection task performance. No effects of believability were observed, a finding replicated in Experiment 2, which used 80 participants with standardised and familiar contents. Some effects in this experiment appeared inconsistent with existing probabilistic approaches. To avoid possible effects of content, Experiments 3 (48 participants) and 4 (20 participants) used abstract material. Both experiments revealed probabilistic effects. In the Discussion we examine the compatibility of these results with the various models of selection task performance.
Van Gelder's specification of the dynamical hypothesis does not improve on previous notions. All three key attributes of dynamical systems apply to Turing machines and are hence too general. However, when a more restricted definition of a dynamical system is adopted, it becomes clear that the dynamical hypothesis is too underspecified to constitute an interesting cognitive claim.
The Schyns et al. target article demonstrates that different classifications entail different representations, implying “flexible space learning.” We argue that flexibility is required even at the within-category level.