This book represents the first major attempt by any author to provide an integrated account of the evidence for bias in human reasoning across a wide range of disparate psychological literatures. The topics discussed involve both deductive and inductive reasoning as well as statistical judgement and inference. In addition, the author proposes a general theoretical approach to the explanation of bias and considers the practical implications for real-world decision making. The theoretical stance of the book is based on a distinction between preconscious heuristic processes, which determine the mental representation of 'relevant' features of the problem content, and subsequent analytic reasoning processes, which generate inferences and judgements. Phenomena discussed and interpreted within this framework include feature-matching biases in propositional reasoning, confirmation bias, biasing and debiasing effects of knowledge on reasoning, and biases in statistical judgement normally attributed to 'availability' and 'representativeness' heuristics. In the final chapter, the practical consequences of bias for real-life decision making are considered, together with various issues concerning the problem of 'debiasing'. The major approaches discussed are those involving education and training on the one hand, and the development of intelligent software and interactive decision aids on the other.
We propose a critique of normativism, defined as the idea that human thinking reflects a normative system against which it should be measured and judged. We analyze the methodological problems associated with normativism, proposing that it invites the controversial “is-ought” inference, much contested in the philosophical literature. This problem is triggered when there are competing normative accounts (the arbitration problem), as empirical evidence can help arbitrate between descriptive theories, but not between normative systems. Drawing on linguistics as a model, we propose that a clear distinction between normative systems and competence theories is essential, arguing that equating them invites an “is-ought” inference: to wit, supporting normative “ought” theories with empirical “is” evidence. We analyze in detail two research programmes with normativist features – Oaksford and Chater’s rational analysis and Stanovich and West’s individual differences approach – demonstrating how, in each case, equating norm and competence leads to an is-ought inference. Normativism triggers a host of research biases in the psychology of reasoning and decision making: focusing on untrained participants and novel problems, analyzing psychological processes in terms of their normative correlates, and neglecting philosophically significant paradigms when they do not supply clear standards for normative judgement. For example, in a dual-process framework, normativism can lead to a fallacious “ought-is” inference, in which normative responses are taken as diagnostic of analytic reasoning. We propose that little can be gained from normativism that cannot be achieved by descriptivist computational-level analysis, illustrating our position with Hypothetical Thinking Theory and the theory of the suppositional conditional. We conclude that descriptivism is a viable option, and that theories of higher mental processing would be better off freed from normative considerations.
This book explores the idea that much of our behaviour is controlled by automatic and intuitive mental processes, which shape and compete with our conscious thinking and decision making. Accessibly written, and assuming no prior knowledge of the field, the book will be fascinating reading for all those interested in human behaviour.
In common with a number of other authors, I believe that there has been a paradigm shift in the psychology of reasoning, specifically the area traditionally labelled as the study of deduction. The deduction paradigm was founded in a philosophical tradition that assumed logicality as the basis for rational thought, and provided binary propositional logic as the agreed normative framework. By contrast, many contemporary authors assume that people have degrees of uncertainty in both premises and conclusions, and reject binary logic as a workable normative system. I discuss a number of questions and challenges for this new psychology of reasoning, including the following: (a) Do we need an alternative normative system, such as Bayesianism, for the new paradigm? (b) Is there any longer a clear distinction between the study of deductive and inductive reasoning, the latter having its own tradition and literature? (c) Precisely how is the integrated study of reasoning and decision making facilitated by the new paradigm? (d) What difficulties with dual-processing approaches need to be resolved, if they are to take us forward?
In this paper, I show that the question of how dual-process theories of reasoning and judgement account for conflict between System 1 (heuristic) and System 2 (analytic) processes needs to be explicated and addressed in future research work. I demonstrate that a simple additive probability model that describes such conflict can be mapped on to three different cognitive models. The pre-emptive conflict resolution model assumes that a decision is made at the outset as to whether a heuristic or analytic process will control the response. The parallel-competitive model assumes that each system operates in parallel to deliver a putative response, resulting sometimes in conflict that then needs to be resolved. Finally, the default-interventionist model involves the cueing of default responses by the heuristic system that may or may not be altered by subsequent intervention of the analytic system. A second, independent issue also emerges from this discussion. The superior performance of higher-ability participants on reasoning tasks may be due to the fact that they engage in more analytic reasoning (quantity hypothesis) or alternatively to the fact that the analytic reasoning they apply is more effective (quality hypothesis).
The two main psychological theories of the ordinary conditional were designed to account for inferences made from assumptions, but few premises in everyday life can be simply assumed true. Useful premises usually have a probability that is less than certainty. But what is the probability of the ordinary conditional and how is it determined? We argue that people use a two-stage Ramsey test, which we specify, to make probability judgements about indicative conditionals in natural language, and we describe experiments that support this conclusion. Our account can explain why most people give the conditional probability as the probability of the conditional, but also why some give the conjunctive probability. We discuss how our psychological work is related to the analysis of ordinary indicative conditionals in philosophical logic.
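The contrast between the two readings discussed above can be made concrete with a small illustration (my own sketch, not the authors' experimental materials): given a hypothetical joint distribution over the antecedent p and consequent q, the conditional-probability reading of "if p then q" is P(q|p), while the conjunctive reading is P(p and q).

```python
# Illustrative sketch only: two candidate readings of the probability
# of an indicative conditional "if p then q", computed from a joint
# distribution over the truth values of p and q.

def conditional_probability(joint):
    """P(q | p): the reading most people give (the Ramsey test)."""
    p_true = joint[(True, True)] + joint[(True, False)]
    return joint[(True, True)] / p_true

def conjunctive_probability(joint):
    """P(p and q): the reading a minority of people give."""
    return joint[(True, True)]

# Hypothetical joint distribution over (p, q) outcomes.
joint = {
    (True, True): 0.30,
    (True, False): 0.10,
    (False, True): 0.20,
    (False, False): 0.40,
}

print(conditional_probability(joint))  # ≈ 0.75
print(conjunctive_probability(joint))  # 0.30
```

On the suppositional account, the judge hypothetically supposes p and evaluates q under that supposition, which is why the not-p cells of the table are ignored under the conditional-probability reading.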
In this study, we examine the belief bias effect in syllogistic reasoning under both standard presentation and in a condition where participants are required to respond within 10 seconds. As predicted, the requirement for rapid responding increased the amount of belief bias observed on the task and reduced the number of logically correct decisions, both effects being substantial and statistically significant. These findings were predicted by the dual-process account of reasoning, which posits that fast heuristic processes, responsible for belief bias, compete with slower analytic processes that can lead to correct logical decisions. Requiring rapid responding thus differentially inhibits the operation of analytic reasoning processes, leading to the results observed.
[About the book] This book explores the idea that we have two minds - one automatic, unconscious, and fast, the other controlled, conscious, and slow. In recent years there has been great interest in so-called dual-process theories of reasoning and rationality. According to such theories, there are two distinct systems underlying human reasoning - an evolutionarily old system that is associative, automatic, unconscious, parallel, and fast, and a more recent, distinctively human system that is rule-based, controlled, conscious, serial, and slow. Within the former, processes are held to be innate and to use heuristics that evolved to solve specific adaptive problems. In the latter, processes are taken to be learned, flexible, and responsive to rational norms. Despite the attention these theories are attracting, there is still poor communication between dual-process theorists themselves, and the substantial bodies of work on dual processes in cognitive psychology and social psychology remain isolated from each other. This book brings together leading researchers on dual processes to summarize the state of the art, highlight key issues, present different perspectives, explore implications, and provide a stimulus to further work. It includes new ideas about the human mind both by contemporary philosophers interested in broad theoretical questions about mental architecture and by psychologists specialising in traditionally distinct and isolated fields. For all those in the cognitive sciences, this is a book that will advance dual-process theorizing, promote interdisciplinary communication, and encourage further applications of dual-process approaches.
M. Oaksford and N. Chater presented a Bayesian analysis of the Wason selection task in which they proposed that people choose cards in order to maximize expected information gain (EIG), as measured by reduction in uncertainty in the Shannon-Weaver information theory sense. It is argued that the EIG measure is both psychologically implausible and normatively inadequate as a measure of epistemic utility. The article is also concerned with the descriptive account of findings in the selection task literature offered by Oaksford and Chater. First, it is shown that their analysis of data reported in the recent article of K. N. Kirby is unsound; second, an EIG analysis is presented of the experiments of P. Pollard and J. St. B. T. Evans that provides a strong empirical disconfirmation of the theory.
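For readers unfamiliar with the EIG measure under discussion, the following sketch (my own simplified illustration, not Oaksford and Chater's full model of the selection task) shows how expected information gain can be computed as the expected reduction in Shannon entropy over a set of hypotheses; the prior, hypotheses, and likelihoods below are hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, likelihoods):
    """Expected reduction in uncertainty about the hypotheses.

    prior[i]          -- P(h_i)
    likelihoods[d][i] -- P(outcome d | h_i)
    """
    h_prior = entropy(prior)
    eig = 0.0
    for lik in likelihoods:
        p_d = sum(p * l for p, l in zip(prior, lik))  # P(outcome d)
        if p_d == 0:
            continue
        posterior = [p * l / p_d for p, l in zip(prior, lik)]
        eig += p_d * (h_prior - entropy(posterior))
    return eig

# Hypothetical example: two equally likely hypotheses and a binary
# observation that is diagnostic only under the first hypothesis.
prior = [0.5, 0.5]
likelihoods = [[0.9, 0.5],   # P(outcome 0 | h_0), P(outcome 0 | h_1)
               [0.1, 0.5]]   # P(outcome 1 | h_0), P(outcome 1 | h_1)
print(expected_information_gain(prior, likelihoods))
```

An observation whose likelihood is identical under every hypothesis leaves the posterior equal to the prior and so has an EIG of zero; on this measure, a card is worth selecting to the extent that its possible outcomes discriminate between the hypotheses.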
We tested the hypothesis that choices determined by Type 1 processes are compelling because they are fluent, and for this reason they are less subject to analytic thinking than other answers. A total of 104 participants completed a modified version of Wason's selection task wherein they made decisions about one card at a time using a two-response paradigm. In this paradigm participants gave a fast, intuitive response, rated their feeling of rightness (FOR) for that response, and were then allowed free time to reconsider their answers. As we predicted, answers consistent with a matching heuristic were made more quickly than other answers, were given higher FOR ratings, and received less subsequent analysis as measured by rethinking time and the probability of changing answers. These data suggest that reasoning biases may be compelling because they are fluently generated; this in turn creates a strong FOR, which acts as a signal that further analysis is not necessary.
I argue that views of human rationality are strongly affected by the adoption of a two minds theory in which humans have an old mind, which evolved early and shares many features of animal cognition, as well as a new mind, which evolved later and is distinctively developed in humans. Both minds have a form of instrumental rationality—striving for the attainment of goals—but by very different mechanisms. The old mind relies on a combination of evolution and experiential learning, and is therefore driven entirely by repeating behaviours which succeeded in the past. The new mind, however, permits the solution of novel problems by reasoning about the future, enabling consequential decision making. I suggest that the concept of epistemic rationality—striving for true knowledge—can only usefully be applied to the new mind, with its access to explicit knowledge and beliefs. I also suggest that we commonly interpret behaviour as irrational when the old mind conflicts with the new and frustrates the goals of the conscious person.
Dual-process theories of higher cognition, distinguishing between intuitive (Type 1) and reflective (Type 2) thinking, have become increasingly popular, although also subject to recent criticism. A key question, to which a number of contributions in this special issue relate, is how to define the difference between the two kinds of processing. One issue discussed is whether they differ at Marr's computational level of analysis. I believe they do, but that ultimately the debate will be decided at the implementational level, where distinct cognitive and neural systems need to be demonstrated. Other distinctions raised in the issue are the unique ability for metarepresentation, cognitive decoupling, and hypothetical thinking at the Type 2 level, and the association of emotion and metacognitive feelings with the Type 1 level. The relation of the latter to cognitive control is also discussed.
Johnson-Laird and Byrne present a theory of conditional inference based upon the manipulation of mental models. In the present paper, the theory is critically examined with regard to its ability to account for psychological data, principally with respect to the rate at which people draw the four basic inferences of modus ponens, denial of the antecedent (DA), affirmation of the consequent (AC), and modus tollens. It is argued first that the theory is unclear in its definition, and in particular with regard to predictions of problem difficulty. Clarification and specification of principles are consequently provided here. Next, it is argued that there are a number of phenomena in the conditional reasoning literature for which the theory cannot account in its present form. Specifically, the relative frequency of DA and AC inferences on affirmative conditionals is not as predicted by the theory; differences occur between inferences on "if then" and "only if" rules beyond the capacity of the theory to explain; and there is no account of the "negative conclusion bias" observed when negated components are introduced into the rules. A number of revisions to the mental model theory of conditional reasoning are proposed in order to account for these findings.
In two experiments we tested the hypothesis that the mechanisms that produce belief bias generalise across reasoning tasks. In formal reasoning (i.e., syllogisms), judgements of validity are influenced by actual validity, believability of the conclusions, and an interaction between the two. Although apparently analogous effects of belief and argument strength have been observed in informal reasoning, the design of those studies does not permit an analysis of the interaction effect. In the present studies we redesigned two informal reasoning tasks, the Argument Evaluation Task (AET) and a Law of Large Numbers (LLN) task, in order to test the similarity of the phenomena concerned. Our findings provide little support for the idea that belief bias on formal and informal reasoning is a unitary phenomenon. First, there was no correlation across individuals in the extent of belief bias shown on the three tasks. Second, evidence for a belief-by-strength interaction was observed only on the AET, and under conditions not required for the comparable finding on syllogistic reasoning. Finally, we found that while conclusion believability strongly influenced assessments of argument strength, it had a relatively weak influence on the verbal justifications offered on the two informal reasoning tasks.
In this paper, I discuss conditionals as illocutionary speech acts whose interpretation depends upon the whole of the social context in which they are uttered and whose purpose is to affect the opinions and actions of others. I argue for a suppositional approach to conditional statements, based in what philosophers call the Ramsey test and developing the psychological theory that conditionals elicit a process of hypothetical thinking in their listeners. By reference to the experimental psychological literature on conditionals, I show that in general conditionals, even ones that are basic or abstract in nature, are not treated as truth-functional or material by ordinary people. Drawing upon the suppositional nature of conditionals and the influence of pragmatic implicature, I discuss uses of conditionals as advice, inducement, persuasion, and dissuasion, arguing that speakers use conditionals to try to influence the beliefs and actions of their listeners by shaping their hypothetical thought about possibilities.
Dienes & Perner's proposals are discussed in relation to the distinction between explicit and implicit systems of thinking. Evans and Over (1996) propose that explicit processing resources are required for hypothetical thinking, in which mental models of possible world states are constructed. Such thinking requires representations in which the individual's propositional attitudes, including relevant beliefs and goals, are made fully explicit.
This excellent target article helps to resolve a problem for dual-process theories of higher cognition. Theorists posit two systems, one of which appears to be conscious and volitional. It seems to control some behaviours but to confabulate explanations for others. I argue that this system is only conscious in an illusory sense and that all self-explanations are confabulatory, as Carruthers suggests.
I agree with Oaksford & Chater (O&C) that human beings resemble Bayesian reasoners much more closely than ones employing standard logic. However, I have many problems with their framework, which appears to be rooted in normative rather than ecological rationality. The authors also overstate everyday rationality and neglect to account for much relevant psychological work on reasoning.
Yama (2001) has presented an ingenious series of experiments in which he attempts to separate two accounts in the literature of the cause of "matching bias" in conditional reasoning. One account is that the bias arises from the way in which people process negations; the other is that it is due to the larger set sizes associated with negative propositions, rather than negation per se. Yama's experiments show influences of both negation and set size, from which he concludes that both factors contribute to the matching bias that is normally observed. In this note, it is argued that this conclusion is at odds with other findings in the literature, particularly those investigating implicit negation as the cause of the bias. Introducing explicit negations has been shown to remove matching bias completely, not partially, as Yama's account must predict. A possible reconciliation is proposed in terms of subtle contextual differences introduced by Yama's experiments.
We agree that current evolutionary accounts of base-rate neglect are unparsimonious, but we dispute the authors' account of the effect in terms of parallel associative and rule-based processes. We also question their assumption that cueing of nested set relations facilitates performance due to recruitment of explicit reasoning processes. In our account, such reasoning is always involved, but usually unsuccessful.
Originally identified by Hume, the validity of is–ought inference is much debated in the meta-ethics literature. Our work shows that inference from is to ought typically proceeds from a contextualised, value-laden causal utility conditional, bridging into a deontic conclusion. Such conditional statements tell us what actions are needed to achieve or avoid consequences that are good or bad. Psychological research has established that people generally reason fluently and easily with utility conditionals. Our own research has also shown that people's reasoning from is to ought is pragmatically sensitive and adapted to achieving the individual's goals. But how do we acquire the necessary deontic rules? In this paper, we provide a rationale for this facility linked to Evans's framework of dual mind rationality. People have an old mind which derives its rationality by repeating what has worked in the past, mostly by experiential learning. New mind rationality, in contrast, is evolutionarily recent, uniquely developed in humans, and draws on our ability to mentally simulate hypothetical events removed in time and place. We contend that the new mind achieves its goals by inducing and applying deontic rules, and that a mechanism of deontic introduction evolved for this purpose.
Thinking is the essence of what it means to be human and defines us more than anything else as a species. Jonathan Evans explores cognitive psychological approaches to understanding the nature of thinking and reasoning, problem solving, and decision making.
Throughout this article the authors presume – without justification – that decision making must be a conscious process unless proved otherwise, and they place an unreasonably strict burden of proof on anyone wishing to claim a role for unconscious processing. In addition, I show that their arguments do not, as implied here, impact upon contemporary dual-process theories of reasoning and decision making.
Carruthers’ proposals would seem to implicate language in what is known as System 2 thinking (explicit) rather than System 1 thinking (implicit) in contemporary dual-process theories of thinking and reasoning. We provide an outline description of these theories and show that while Carruthers’ characterization of non-verbal processes as domain-specific identifies one critical feature of System 1 thinking, he appears to overlook the fact that much cognition of this type results from domain-general learning processes. We also review cognitive psychological evidence showing that language and the explicit representations it supports are heavily involved in supporting System 2 thinking, but this falls short of supporting his claim that it is the medium in which domain-general thinking occurs.