Judea Pearl has argued that counterfactuals and causality are central to intelligence, whether natural or artificial, and has helped create a rich mathematical and computational framework for formally analyzing causality. Here, we draw out connections between these notions and various current issues in cognitive science, including the nature of mental “programs” and mental representation. We argue that programs (consisting of algorithms and data structures) have a causal (counterfactual-supporting) structure; these counterfactuals can reveal the nature of mental representations. Programs can also provide a causal model of the external world. Such models are, we suggest, ubiquitous in perception, cognition, and language processing.
It has been argued that dual process theories are not consistent with Oaksford and Chater’s probabilistic approach to human reasoning (Oaksford and Chater in Psychol Rev 101:608–631, 1994, 2007; Oaksford et al. 2000), which has been characterised as a “single-level probabilistic treatment[s]” (Evans 2007). In this paper, it is argued that this characterisation conflates levels of computational explanation. The probabilistic approach is a computational level theory which is consistent with theories of general cognitive architecture that invoke a WM system and an LTM system. That is, it is a single function dual process theory which is consistent with dual process theories like Evans’ (2007) that use probability logic (Adams 1998) as an account of analytic processes. This approach contrasts with dual process theories which propose an analytic system that respects standard binary truth functional logic (Heit and Rotello in J Exp Psychol Learn 36:805–812, 2010; Klauer et al. in J Exp Psychol Learn 36:298–323, 2010; Rips in Psychol Sci 12:129–134, 2001, 2002; Stanovich in Behav Brain Sci 23:645–726, 2000, 2011). The problems noted for this latter approach by both Evans (Psychol Bull 128:978–996, 2002, 2007) and Oaksford and Chater (Mind Lang 6:1–38, 1991, 1998, 2007), due to the defeasibility of everyday reasoning, are rehearsed. Oaksford and Chater’s (2010) dual systems implementation of their probabilistic approach is then outlined and its implications discussed. In particular, the nature of cognitive decoupling operations is discussed and a Panglossian probabilistic position is developed that can explain both modal and non-modal responses and correlations with IQ in reasoning tasks. It is concluded that a single function probabilistic approach is just as compatible with the evidence taken to support dual systems theories.
According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic – the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards.
The rational analysis method, first proposed by John R. Anderson, has been enormously influential in helping us understand high-level cognitive processes. 'The Probabilistic Mind' is a follow-up to the influential and highly cited 'Rational Models of Cognition' (OUP, 1998). It brings together developments in understanding how, and how far, high-level cognitive processes can be understood in rational terms, and particularly using probabilistic Bayesian methods. It synthesizes and evaluates the progress in the past decade, taking into account developments in Bayesian statistics, statistical analysis of the cognitive 'environment' and a variety of theoretical and experimental lines of research. The scope of the book is broad, covering important recent work in reasoning, decision making, categorization, and memory. Including chapters from many of the leading figures in this field, 'The Probabilistic Mind' will be valuable for psychologists and philosophers interested in cognition.
In this article, we argue for the general importance of normative theories of argument strength. We also provide some evidence, based on our recent work on the fallacies, as to why Bayesian probability might, in fact, be able to supply such an account. In the remainder of the article we discuss the general characteristics that make a specifically Bayesian approach desirable, and critically evaluate putative flaws of Bayesian probability that have been raised in the argumentation literature.
The notion of “the burden of proof” plays an important role in real-world argumentation contexts, in particular in law. It has also been given a central role in normative accounts of argumentation, and has been used to explain a range of classic argumentation fallacies. We argue that in law the goal is to make practical decisions, whereas in critical discussion the goal is frequently simply to increase or decrease degree of belief in a proposition. In the latter case, it is not necessarily important whether that degree of belief exceeds a particular threshold (e.g., ‘reasonable doubt’). We explore the consequences of this distinction for the role that the “burden of proof” has played in argumentation and in theories of fallacy.
Are people rational? This question was central to Greek thought and has been at the heart of psychology and philosophy for millennia. This book provides a radical and controversial reappraisal of conventional wisdom in the psychology of reasoning, proposing that the Western conception of the mind as a logical system is flawed at the very outset. It argues that cognition should be understood in terms of probability theory, the calculus of uncertain reasoning, rather than in terms of logic, the calculus of certain reasoning.
We examine in detail three classic reasoning fallacies, that is, supposedly "incorrect" forms of argument. These are the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument. In each case, the argument type is shown to structurally match arguments which are widely accepted. This suggests that it is not the form of the arguments as such that is problematic but rather something about the content of those examples with which they are typically justified. This leads to a Bayesian reanalysis of these classic argument forms and a reformulation of the conditions under which they do or do not constitute legitimate forms of argumentation.
In this paper, it is argued that Ferguson’s (2003, Argumentation 17, 335–346) recent proposal to reconcile monotonic logic with defeasibility has three counterintuitive consequences. First, the conclusions that can be derived from his new rule of inference are vacuous, a point that was already made against default logics when there are conflicting defaults. Second, his proposal requires a procedural “hack” to break the symmetry between the disjuncts of the tautological conclusions to which his proposal leads. Third, Ferguson’s proposal amounts to arguing that all everyday inferences are sound by definition. It is concluded that the informal logic response to defeasibility, that an account of the context in which inferences are sound or unsound is required, still stands. It is also observed that another possible response is given by Bayesian probability theory (Oaksford and Chater, in press, Bayesian Rationality: The Probabilistic Approach to Human Reasoning, Oxford University Press, Oxford, UK; Hahn and Oaksford, in press, Synthese).
In this paper the arguments for optimal data selection and the contrast class account of negations in the selection task and the conditional inference task are summarised, and contrasted with the matching bias approach. It is argued that the probabilistic contrast class account provides a unified, rational explanation for effects across these tasks. Moreover, results that are only explained by the contrast class account are also discussed. The only major anomaly is the explicit negations effect in the selection task (Evans, Clibbens, & Rood, 1996), which it is argued may not be the result of normal interpretative processes. It is concluded that the effects of negation on human reasoning provide good evidence for the view that human reasoning processes may be rational according to a probabilistic standard.
Espino, Santamaría, and García-Madruga (2000) report three results on the time taken to respond to a probe word occurring as an end term in the premises of a syllogistic argument. They argue that these results can only be predicted by the theory of mental models. It is argued that two of these results, on differential reaction times to end-terms occurring in different premises and in different figures, are consistent with Chater and Oaksford's (1999) probability heuristics model (PHM). It is argued that the third finding, on different reaction times between figures, does not address the issue of processing difficulty, where PHM predicts no differences between figures. It is concluded that Espino et al.'s results do not discriminate between theories of syllogistic reasoning as effectively as they propose.
A recent development in the cognitive science of reasoning has been the emergence of a probabilistic approach to the behaviour observed on ostensibly logical tasks. According to this approach the errors and biases documented on these tasks occur because people import their everyday uncertain reasoning strategies into the laboratory. Consequently, participants' apparently irrational behaviour is the result of comparing it with an inappropriate logical standard. In this article, we contrast the probabilistic approach with other approaches to explaining rationality, and then show how it has been applied to three main areas of logical reasoning: conditional inference, Wason's selection task and syllogistic reasoning.
Rational analysis (Anderson 1990, 1991a) is an empirical program of attempting to explain why the cognitive system is adaptive, with respect to its goals and the structure of its environment. We argue that rational analysis has two important implications for philosophical debate concerning rationality. First, rational analysis provides a model for the relationship between formal principles of rationality (such as probability or decision theory) and everyday rationality, in the sense of successful thought and action in daily life. Second, applying the program of rational analysis to research on human reasoning leads to a radical reinterpretation of empirical results which are typically viewed as demonstrating human irrationality.
Rolls defines emotion as innate reward and punishment. This could not explain our results showing that people learn faster in a negative mood. We argue that what people know about their world affects their emotional state. Negative emotion signals a failure to predict negative reward and hence prompts learning to resolve the ignorance. Thus what you don't know affects how you feel.
This commentary questions the claim that Take-The-Best provides a cognitively more plausible account of cue utilisation in decision making because it is faster and more frugal than alternative algorithms. It is also argued that the experimental evidence for Take-The-Best, or non-integrative algorithms, is weak and appears consistent with people normally adopting an integrative approach to cue utilisation.
Paradoxical individual differences, where a dysfunctional trait correlates positively with some preconceived notion of the normatively correct answer, provide compelling evidence that the wrong norm has been adopted. We have found that logical performance on conditional inference is positively correlated with schizotypy. Following Stanovich & West's reasoning, we conclude that logic is not normative in conditional inference, the prototypically logical task.
Four experiments investigated the effects of probability manipulations on the indicative four card selection task (Wason, 1966, 1968). All looked at the effects of high and low probability antecedents (p) and consequents (q) on participants' data selections when determining the truth or falsity of a conditional rule, if p then q. Experiments 1 and 2 also manipulated believability. In Experiment 1, 128 participants performed the task using rules with varied contents pretested for probability of occurrence. Probabilistic effects were observed which were partly consistent with some probabilistic accounts but not with non-probabilistic approaches to selection task performance. No effects of believability were observed, a finding replicated in Experiment 2, which used 80 participants with standardised and familiar contents. Some effects in this experiment appeared inconsistent with existing probabilistic approaches. To avoid possible effects of content, Experiments 3 (48 participants) and 4 (20 participants) used abstract material. Both experiments revealed probabilistic effects. In the Discussion we examine the compatibility of these results with the various models of selection task performance.
Green, Over, and Pyne's (1997) paper (hereafter referred to as "GOP") seems to provide a novel approach to examining probabilistic effects in Wason's selection task. However, in this comment, it is argued that their chosen experimental paradigm confounds most of their results. The task demands of the externalisation procedure (Green, 1995) enforce a correlation between card selections and the probability of finding a counterexample, which was the main finding of GOP's experiments. Consequently, GOP cannot argue that their data support Kirby's (1994) proposal that people's normal strategy in the selection task is to seek falsifying evidence. Despite this methodological problem, effects of the probability of the antecedent (p) of a conditional rule, if p then q, predicted by Kirby (1994) and by Oaksford and Chater (1994) were observed, although they were inconsistent between Experiments 1 and 2. Moreover, the probability estimates that GOP collected, which are not vulnerable to that methodological criticism, do support the idea that when P(p) > P(q), participants revise P(p) down, as suggested by Oaksford and Chater (1994).
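To make the final probabilistic point concrete, here is a minimal sketch of the coherence constraint at issue, stated in textbook probability theory rather than as Oaksford and Chater's (1994) own derivation. By the law of total probability, for a rule "if p then q",
\[ P(q) \;=\; P(q \mid p)\,P(p) + P(q \mid \neg p)\,P(\neg p) \;\ge\; P(q \mid p)\,P(p). \]
If the rule is treated as nearly exceptionless, so that \(P(q \mid p) \approx 1\), this gives \(P(q) \gtrsim P(p)\); an estimate with \(P(p) > P(q)\) is therefore incoherent with such a rule, and revising \(P(p)\) downward is one way of restoring coherence.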
Classical symbolic computational models of cognition are at variance with the empirical findings in the cognitive psychology of memory and inference. Standard symbolic computers are well suited to remembering arbitrary lists of symbols and performing logical inferences. In contrast, human performance on such tasks is extremely limited. Standard models do not easily capture content addressable memory or context sensitive defeasible inference, which are natural and effortless for people. We argue that Connectionism provides a more natural framework in which to model this behaviour. In addition to capturing the gross human performance profile, Connectionist systems seem well suited to accounting for the systematic patterns of errors observed in the human data. We take these arguments to counter Fodor and Pylyshyn's (1988) recent claim that Connectionism is, in principle, irrelevant to psychology.