Dienes and Perner's proposals are discussed in relation to the distinction between explicit and implicit systems of thinking. Evans and Over (1996) propose that explicit processing resources are required for hypothetical thinking, in which mental models of possible world states are constructed. Such thinking requires representations in which the individuals' propositional attitudes, including relevant beliefs and goals, are made fully explicit.
'IF' is one of the most important and interesting words in the English language, being used to express hypothetical thought. The use of conditionals such as 'if' also distinguishes human intelligence from that of all other animals. In this volume, Jonathan Evans and David Over present a new theoretical approach to understanding hypothetical thought. The book draws on studies from the psychology of judgement and decision making, as well as philosophical logic.
In common with a number of other authors I believe that there has been a paradigm shift in the psychology of reasoning, specifically the area traditionally labelled as the study of deduction. The deduction paradigm was founded in a philosophical tradition that assumed logicality as the basis for rational thought, and provided binary propositional logic as the agreed normative framework. By contrast, many contemporary authors assume that people have degrees of uncertainty in both premises and conclusions, and reject binary logic as a workable normative system. I discuss a number of questions and challenges for this new psychology of reasoning, including the following: (a) Do we need an alternative normative system, such as Bayesianism, for the new paradigm? (b) Is there any longer a clear distinction between the study of deductive and inductive reasoning, the latter having its own tradition and literature? (c) Precisely how is the integrated study of reasoning and decision making facilitated by the new paradigm? (d) What difficulties with dual-processing approaches need to be resolved, if they are to take us forward?
Dual-process theories of higher cognition, distinguishing between intuitive (Type 1) and reflective (Type 2) thinking, have become increasingly popular, although also subject to recent criticism. A key question, to which a number of contributions in this special issue relate, is how to define the difference between the two kinds of processing. One issue discussed is whether they differ at Marr’s computational level of analysis. I believe they do, but that ultimately the debate will be decided at the implementational level, where distinct cognitive and neural systems need to be demonstrated. Other distinctions raised in the issue are the unique ability for metarepresentation, cognitive decoupling and hypothetical thinking at the Type 2 level, and the association of emotion and metacognitive feelings with the Type 1 level. The relation of the latter to cognitive control is also discussed.
In this paper we argue that it is often adaptive to use one's background beliefs when interpreting information that, from a normative point of view, is incomplete. In both of the experiments reported here participants were presented with an item possessing two features and were asked to judge, in the light of some evidence concerning the features, to which of two categories it was more likely that the item belonged. It was found that when participants received evidence relevant to just one of these hypothesised categories (i.e. evidence that did not form a Bayesian likelihood ratio) they used their background beliefs to interpret this information. In Experiment 2, on the other hand, participants behaved in a broadly Bayesian manner when the evidence they received constituted a completed likelihood ratio. We discuss the circumstances under which participants, when making their judgements, consider the alternative hypothesis. We conclude with a discussion of the implications of our results for an understanding of hypothesis testing, belief revision, and categorisation.
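The Bayesian benchmark at issue can be made concrete. A minimal sketch (the function and numbers are ours, purely for illustration, not taken from the experiments): a complete likelihood ratio requires the probability of the evidence under both hypothesised categories, and posterior odds are prior odds multiplied by that ratio.

```python
def posterior_prob_A(prior_A, p_evidence_given_A, p_evidence_given_B):
    """Posterior probability that the item belongs to category A,
    given the evidence. Bayes requires the likelihood of the evidence
    under BOTH hypothesised categories (A and B)."""
    prior_odds = prior_A / (1.0 - prior_A)
    likelihood_ratio = p_evidence_given_A / p_evidence_given_B
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Equal priors; evidence four times as likely under A as under B:
p = posterior_prob_A(0.5, 0.8, 0.2)  # roughly 0.8
```

When only P(evidence | A) is available, the likelihood ratio is undefined: this is exactly the normatively incomplete situation in which, on the account argued here, participants adaptively fall back on background beliefs.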
Yama (2001) has presented an ingenious series of experiments in which he attempts to separate two accounts in the literature of the cause of "matching bias" in conditional reasoning. One account is that the bias arises from the way in which people process negations and the other is that it is due to the larger set sizes associated with negative propositions, rather than negation per se. Yama's experiments show influences of both negation and set size, from which he concludes that both factors contribute to the matching bias that is normally observed. In this note, it is argued that this conclusion is at odds with other findings in the literature, particularly those investigating implicit negation as the cause of the bias. Introducing explicit negations has been shown to remove matching bias completely and not partially, as Yama's account must predict. A possible reconciliation is proposed in terms of subtle contextual differences introduced by Yama's experiments.
I agree with Oaksford & Chater (O&C) that human beings resemble Bayesian reasoners much more closely than they resemble reasoners following standard logic. However, I have many problems with their framework, which appears to be rooted in normative rather than ecological rationality. The authors also overstate everyday rationality and neglect to account for much relevant psychological work on reasoning.
In this paper, I show that the question of how dual process theories of reasoning and judgement account for conflict between System 1 (heuristic) and System 2 (analytic) processes needs to be explicated and addressed in future research work. I demonstrate that a simple additive probability model that describes such conflict can be mapped on to three different cognitive models. The pre-emptive conflict resolution model assumes that a decision is made at the outset as to whether a heuristic or analytic process will control the response. The parallel-competitive model assumes that each system operates in parallel to deliver a putative response, resulting sometimes in conflict that then needs to be resolved. Finally, the default-interventionist model involves the cueing of default responses by the heuristic system that may or may not be altered by subsequent intervention of the analytic system. A second, independent issue also emerges from this discussion. The superior performance of higher-ability participants on reasoning tasks may be due to the fact that they engage in more analytic reasoning (quantity hypothesis) or alternatively to the fact that the analytic reasoning they apply is more effective (quality hypothesis).
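The additive structure of such a model, and the quantity/quality contrast, can be illustrated with a toy default-interventionist sketch. This is our illustration only; the parameter names and numbers are invented and should not be read as the specific model in the paper.

```python
def p_correct(p_engage, p_analytic_correct, p_heuristic_correct):
    """Toy default-interventionist model: the heuristic system cues a
    default response; the analytic system intervenes with probability
    p_engage and, when it does, gives the correct answer with probability
    p_analytic_correct. Otherwise the heuristic default stands, and it
    happens to be correct with probability p_heuristic_correct."""
    return p_engage * p_analytic_correct + (1 - p_engage) * p_heuristic_correct

# Quantity hypothesis: higher ability raises the rate of analytic engagement.
low_ability  = p_correct(0.3, 0.8, 0.2)   # about 0.38
high_ability = p_correct(0.6, 0.8, 0.2)   # about 0.56
# Quality hypothesis: higher ability raises analytic accuracy instead.
high_quality = p_correct(0.3, 0.95, 0.2)  # about 0.425
```

Both hypotheses predict better performance for higher-ability participants, which is why the paper flags them as an independent issue requiring separate empirical tests.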
[About the book] This book explores the idea that we have two minds - one automatic, unconscious, and fast, the other controlled, conscious, and slow. In recent years there has been great interest in so-called dual-process theories of reasoning and rationality. According to such theories, there are two distinct systems underlying human reasoning - an evolutionarily old system that is associative, automatic, unconscious, parallel, and fast, and a more recent, distinctively human system that is rule-based, controlled, conscious, serial, and slow. Within the former, processes are held to be innate and to use heuristics that evolved to solve specific adaptive problems. In the latter, processes are taken to be learned, flexible, and responsive to rational norms. Despite the attention these theories are attracting, there is still poor communication between dual-process theorists themselves, and the substantial bodies of work on dual processes in cognitive psychology and social psychology remain isolated from each other. This book brings together leading researchers on dual processes to summarize the state-of-the-art, highlight key issues, present different perspectives, explore implications, and provide a stimulus to further work. It includes new ideas about the human mind both by contemporary philosophers interested in broad theoretical questions about mental architecture and by psychologists specialising in traditionally distinct and isolated fields. For all those in the cognitive sciences, this is a book that will advance dual-process theorizing, promote interdisciplinary communication, and encourage further applications of dual-process approaches.
Carruthers’ proposals would seem to implicate language in what is known as System 2 thinking (explicit) rather than System 1 thinking (implicit) in contemporary dual process theories of thinking and reasoning. We provide an outline description of these theories and show that while Carruthers’ characterization of non-verbal processes as domain-specific identifies one critical feature of System 1 thinking, he appears to overlook the fact that much cognition of this type results from domain-general learning processes. We also review cognitive psychological evidence that shows that language and the explicit representations it supports are heavily involved in supporting System 2 thinking, but falls short of supporting his claim that it is the medium in which domain-general thinking occurs.
The phenomenon known as matching bias consists of a tendency to see cases as relevant in logical reasoning tasks when the lexical content of a case matches that of a propositional rule, normally a conditional, which applies to that case. Matching is demonstrated by use of the negations paradigm, that is, by using conditionals in which the presence and absence of negative components is systematically varied. The phenomenon was first published in 1972 and the present paper reviews the history of research and theorising on the problem in the subsequent 25 years. Theories of matching bias considered include those based on several broad frameworks, including the heuristic-analytic theory, the mental models theory, the theory of optimal data selection, and relevance theory, as well as the specific processing-negations account. The ability of these theories to account for a range of phenomena is considered, including the effects of linguistic form, realistic content, and explicit negation on the matching bias effect. Of particular importance are recent findings showing that the bias is observable on a wider range of linguistic forms than has generally been thought, and that it is almost entirely dependent on the use of implicit negation in the logical cases to which rules are applied. The reasons for the general suppression of matching when realistic content is used are, however, unclear and a need for further research is identified here. It is concluded that matching bias is a highly robust effect which is closely connected with the problem of understanding implicit negation. Most of the theories in the literature are unable to account for at least some of the major phenomena discovered in research on the bias. The accounts that fare best are those that posit local effects of negation, including the heuristic-analytic and processing-negations theories.
In this study, we examine the belief bias effect in syllogistic reasoning under standard presentation and in a condition where participants are required to respond within 10 seconds. As predicted, the requirement for rapid responding increased the amount of belief bias observed on the task and reduced the number of logically correct decisions, both effects being substantial and statistically significant. These findings were predicted by the dual-process account of reasoning, which posits that fast heuristic processes, responsible for belief bias, compete with slower analytic processes that can lead to correct logical decisions. Requiring rapid responding thus differentially inhibits the operation of analytic reasoning processes, leading to the results observed.
The two main psychological theories of the ordinary conditional were designed to account for inferences made from assumptions, but few premises in everyday life can be simply assumed true. Useful premises usually have a probability that is less than certainty. But what is the probability of the ordinary conditional and how is it determined? We argue that people use a two-stage Ramsey test, which we specify, to make probability judgements about indicative conditionals in natural language, and we describe experiments that support this conclusion. Our account can explain why most people give the conditional probability as the probability of the conditional, but also why some give the conjunctive probability. We discuss how our psychological work is related to the analysis of ordinary indicative conditionals in philosophical logic.
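The contrast between the response types can be shown with an invented frequency distribution over the four truth-table cases for "if p then q" (the numbers are hypothetical, purely for illustration):

```python
# Hypothetical frequencies for the four cases of "if p then q".
cases = {('p', 'q'): 30, ('p', 'not-q'): 10,
         ('not-p', 'q'): 20, ('not-p', 'not-q'): 40}
total = sum(cases.values())  # 100

# The modal response: the conditional probability P(q | p).
p_conditional = cases[('p', 'q')] / (cases[('p', 'q')] + cases[('p', 'not-q')])
# 30 / 40 = 0.75

# The minority response: the conjunctive probability P(p and q).
p_conjunctive = cases[('p', 'q')] / total
# 30 / 100 = 0.3

# The material (truth-functional) value, P(not-p or q), rarely given.
p_material = 1 - cases[('p', 'not-q')] / total
# 1 - 10/100 = 0.9
```

The three readings come apart because the not-p cases count for the material conditional but are simply ignored under the conditional-probability (Ramsey test) reading.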
We agree that current evolutionary accounts of base-rate neglect are unparsimonious, but we dispute the authors' account of the effect in terms of parallel associative and rule-based processes. We also question their assumption that cueing of nested set relations facilitates performance due to recruitment of explicit reasoning processes. In our account, such reasoning is always involved, but usually unsuccessful.
In this paper, I discuss conditionals as illocutionary speech acts whose interpretation depends upon the whole of the social context in which they are uttered and whose purpose is to affect the opinions and actions of others. I argue for a suppositional approach to conditional statements, based in what philosophers call the Ramsey test and developing the psychological theory that conditionals elicit a process of hypothetical thinking in their listeners. By reference to the experimental psychological literature on conditionals, I show that in general conditionals, even ones that are basic or abstract in nature, are not treated as truth-functional or material by ordinary people. Drawing upon the suppositional nature of conditionals and the influence of pragmatic implicature, I discuss uses of conditionals as advice, inducement, persuasion, and dissuasion, arguing that speakers use conditionals to try to influence the beliefs and actions of their listeners by shaping their hypothetical thought about possibilities.
The aim of the present research was to develop a difficulty model for logical reasoning problems involving complex ordered arrays used in the Graduate Record Examination. The approach used involved breaking down the problems into their basic cognitive elements, such as the complexity of the rules used, the number of mental models required to represent the problem, and question type. Weightings for these different elements were derived from two experimental studies and from the reasoning literature. Based on these weights, difficulty models were developed which were then tested against new data. The models had excellent predictive validity and showed the relative influence of rule-based factors and factors relating to the number of underlying models. Different difficulty models were needed for different question types, suggesting that people used a variety of approaches and, at a wider level, that both mental models and mental rules may be used in reasoning.
In two experiments we tested the hypothesis that the mechanisms that produce belief bias generalise across reasoning tasks. In formal reasoning (i.e., syllogisms) judgements of validity are influenced by actual validity, believability of the conclusions, and an interaction between the two. Although apparently analogous effects of belief and argument strength have been observed in informal reasoning, the design of those studies does not permit an analysis of the interaction effect. In the present studies we redesigned two informal reasoning tasks, the Argument Evaluation Task (AET) and a Law of Large Numbers (LLN) task, in order to test the similarity of the phenomena concerned. Our findings provide little support for the idea that belief bias on formal and informal reasoning is a unitary phenomenon. First, there was no correlation across individuals in the extent of belief bias shown on the three tasks. Second, evidence for a belief-by-strength interaction was observed only on the AET and under conditions not required for the comparable finding on syllogistic reasoning. Finally, we found that while conclusion believability strongly influenced assessments of argument strength, it had a relatively weak influence on the verbal justifications offered on the two informal reasoning tasks.
We report the results of three experiments designed to assess the role of suppositions in human reasoning. Theories of reasoning based on formal rules propose that the ability to make suppositions is central to deductive reasoning. Our first experiment compared two types of problem that could be solved by a suppositional strategy. Our results showed no difference in difficulty between problems requiring affirmative or negative suppositions, and very low logical solution rates throughout. Further analysis of the error data showed a pattern of responses which suggested that participants reason from a superficial representation of the premises in these arguments and that this drives their choice of conclusion. Our second experiment employed a different set of suppositional problems but with extremely similar proofs in terms of the rules applied and number of inferential steps required. As predicted by our interpretation of the reasoning strategies employed in Experiment 1, logical performance was very much higher on these problems. Our third experiment showed that problems that could be solved by constructing an initial representation of the premises were easier than problems in which this representation was not sufficient. This effect was independent of the suppositional structure of the problems. We discuss the implications of this research for theories of reasoning based on mental models and inference rules.
Using covariant derivatives and the operator definitions of quantum mechanics, gauge invariant Proca and Lehnert equations are derived and the Lorenz condition is eliminated in U(1) invariant electrodynamics. It is shown that the structure of the gauge invariant Lehnert equation is the same in an O(3) invariant theory of electrodynamics.
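For context (this is the standard textbook background from which the gauge-invariant construction departs, not the paper's own derivation), the usual Proca equation for a massive vector field is

```latex
\partial_\mu F^{\mu\nu} + \left(\frac{mc}{\hbar}\right)^2 A^\nu = \mu_0\, j^\nu
```

Because $\partial_\nu \partial_\mu F^{\mu\nu} = 0$ by the antisymmetry of $F^{\mu\nu}$, taking the divergence of this equation with a conserved current ($\partial_\nu j^\nu = 0$) forces the Lorenz condition $\partial_\nu A^\nu = 0$ whenever $m \neq 0$. This is why eliminating the Lorenz condition, as described here, requires modifying the standard structure.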
Stanovich & West (S&W) distinguish between evolutionary rationality and normative rationality, and System 1 and System 2 mental processes. They hold that the main function of System 2 has to do with normative and not evolutionary rationality. We ask how System 2 could then be an adaptation, especially given S&W's own work on individual differences.
Nucleation of Ag islands on the five-fold surface of icosahedral Al-Pd-Mn is influenced strongly by trap sites. Submonolayers of Ag prepared by deposition at 365 K and with a flux of 1 × 10⁻³ monolayers/s exhibit a variation in Ag island densities across different terraces. Comparisons with previous work and with rate equation analysis indicate that trap sites are not saturated under these experimental conditions and that the difference in island densities is not necessarily due to variation in trap densities. While it could have a number of different origins, our results point to a terrace-dependent value of the effective diffusion barrier for Ag adatoms.
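As general background (standard nucleation theory for a homogeneous terrace, not this paper's trap-modified analysis), rate-equation treatments predict an island-density scaling of the form

```latex
N \,\propto\, \left(\frac{F}{D}\right)^{\chi}, \qquad
\chi = \frac{i^{*}}{i^{*}+2}, \qquad
D = D_0\, e^{-E_d / k_B T}
```

where $F$ is the deposition flux, $i^{*}$ the critical nucleus size, and $E_d$ the diffusion barrier. A terrace-dependent effective $E_d$, as proposed here, therefore translates directly into terrace-dependent island densities at fixed flux and temperature.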
[About the book]: This volume is a state-of-the-art survey of the psychology of reasoning, based around, and in tribute to, one of the field's most eminent figures: Jonathan St B.T. Evans. In this collection of cutting-edge research, Evans' collaborators and colleagues review a wide range of important and developing areas of inquiry. These include biases in thinking, probabilistic and causal reasoning, people's use of 'if' sentences in arguments, the dual-process theory of thought, and the nature of human rationality. These foundational issues are examined from various angles and finally integrated in a concluding panoramic chapter written by Evans himself. The eighteen chapters, all written by leading international researchers, combine state-of-the-art research with investigation into the most fundamental questions surrounding human mental life, such as: What is the architecture of the human mind? Are humans rational, and what is the nature of this rationality? How do we think hypothetically? The Science of Reason offers a unique combination of breadth, depth, and integrative vision, making it an indispensable resource for researchers and students of human reason.
A study is reported which focused on the problem-solving strategies employed by expert electronics engineers pursuing a real-world task: integrated-circuit design. Verbal protocol data were analysed so as to reveal aspects of the organisation and sequencing of ongoing design activity. These analyses indicated that the designers were implementing a highly systematic solution-development strategy which deviated only to a small degree from a normatively optimal top-down and breadth-first method. Although some of the observed deviation could be described as opportunistic in nature, much of it reflected the rapid depth-first exploration of tentative solution ideas. We argue that switches from a predominantly breadth-first mode of problem solving to depth-first or opportunistic modes may be an important aspect of the expert's strategic knowledge about how to conduct the design process effectively when faced with difficulties, uncertainties, and design impasses.
Probabilistic models have started to replace classical logic as the standard reference paradigm in human deductive reasoning. Mental probability logic emphasizes general principles where human reasoning deviates from classical logic but agrees with a probabilistic approach (such as nonmonotonicity or the conditional event interpretation of conditionals). This contribution consists of two parts. In the first part we discuss general features of reasoning systems, including consequence relations, how uncertainty may enter argument forms, probability intervals, and probabilistic informativeness. These concepts are of central importance for the psychological task analysis. In the second part we report new experimental data on the paradoxes of the material conditional, the probabilistic modus ponens, the complement task, and data on the probabilistic truth table task. The results of the experiments provide evidence for the hypothesis that people represent indicative conditionals by conditional probability assertions.
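The notion of probability intervals for uncertain argument forms can be illustrated for modus ponens. Given P(p) and P(q|p), coherence constrains P(q) to a standard interval; this is the textbook probability-logic result, and the code and numbers below are our illustration of it, not the paper's materials.

```python
def modus_ponens_interval(p_p, p_q_given_p):
    """Coherent bounds on P(q), given P(p) = p_p and P(q|p) = p_q_given_p.
    Lower bound: q holds at least in the cases where p holds and q follows.
    Upper bound: add the entire probability mass of not-p, where q is
    unconstrained."""
    low = p_p * p_q_given_p
    high = low + (1.0 - p_p)
    return low, high

lo, hi = modus_ponens_interval(0.9, 0.8)  # roughly (0.72, 0.82)
```

With near-certain premises the conclusion stays tightly constrained; as P(p) drops, the interval widens and the inference becomes less informative, which is the sense in which probabilistic informativeness degrades under uncertainty.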
This collection by a distinguished group of philosophers, psychologists, and physiologists reflects an interdisciplinary approach to the central question of cognitive science: how do we model the mind? Among the topics explored are the relationships (theoretical, reductive, and explanatory) between philosophy, psychology, computer science, and physiology; what should be asked of models in science generally, and in cognitive science in particular; whether theoretical models must make essential reference to objects in the environment; whether there are human competences that are resistant, in principle, to modelling; whether simulated thinking and intentionality are really thinking and intentionality; how semantics can be generated from syntactics; the meaning of the terms "representations" and "modelling"; whether the nature of the "hardware" matters; and whether computer models of humans are "dehumanizing." Contributors include Donald Davidson, Daniel C. Dennett, Margaret A. Boden, Adam Morton, Dennis Noble, T. Poggio, Colin Blakemore, K.V. Wilkes, P.N. Johnson-Laird, and Jonathan St. B.T. Evans.
In this study both adolescents with autism spectrum disorder (ASD) and typically developing controls were presented with conditional reasoning problems using familiar content. In this task both valid and fallacious conditional inferences that would otherwise be drawn can be suppressed if counterexample cases are brought to mind. Such suppression occurs when additional premises are presented whose effect is to suggest such counterexample cases. In this study we predicted and observed that this suppression effect was substantially and significantly weaker for autistic participants. We take this as evidence that autistic individuals are less contextualised in their reasoning, a finding that can be linked to research on autism using a variety of other cognitive tasks.
I have become more convinced, over the years, of the truth of Wittgenstein’s characterisation of philosophy as arising through misconceptions of grammar. Such a misconception of grammar characterises a very popular approach to indexicality which has been current since the 1970s, stemming from the work of Castañeda and Kaplan. Gareth Evans was inclined to allow, for instance, that one could say ‘“To the left (I am hot)” is true, as uttered by x at t iff there is someone moderately near to the left of x such that, if he were to utter the sentence “I am hot” at t, what he would thereby say is true’ (Evans 1985: 358). But not only does this disturb the proper relation between direct and indirect speech, it continues a Fregean tradition which these very cases show to be quite mistaken about the logic of intensions. In this paper, however, I want primarily to point out how this misconception of grammar has distorted our view of people. For some of the above thinkers have tried to make out that human motivation is related to the possession of a certain category of indexical belief, called by Lewis ‘de se beliefs’. I shall look here at how the matter arises in Hugh Mellor’s work on Time. In connection with Time, indexicality arises in McTaggart’s ‘A-series’, and Mellor treats this indexicality in parallel with Evans’ language. First, therefore, I aim to show how Mellor’s discussion of Time grammatically misconceives the situation and leads to a misrepresentation of the motivation of human action. But a larger conclusion about Fregean intensions is also then immediately available.
Theistic philosophers have perennially cited mystical experiences--experiences of God--as evidence for God's existence and for other truths about God. In recent years, the attractiveness of this line of thought has been reflected in its use by a significant number of philosophers. But both philosophers and mystics agree that not all mystical experiences can be relied upon; many are the stuff of delusion. So they have somehow to be checked out, their bona fides revealed. But can they be? I will be arguing that (a) they must indeed be cross-checked to serve as good evidence; and that (b) they can't be--or not nearly well enough to permit pressing them into service as serious support for theism. The need for cross-checking, necessary in any case, is made acute by two facts: the extreme variability of mystical experiences and the doctrines they are recruited to support, and the fact that, especially in the face of this variability, mystical experiences are much more effectively explained naturalistically. Furthermore, our ability adequately to cross-check mystical experiences (hereafter, MEs), in a way that would reveal the hand of God, is crippled by the fact that theists offer no hypothesis concerning the causal mechanism by means of which God shows Himself to mystics.