Though nativist hypotheses have played a pivotal role in the development of cognitive science, it remains exceedingly obscure how they—and the debates in which they figure—ought to be understood. The central aim of this paper is to provide an account which addresses this concern and, in so doing: a) makes sense of the roles that nativist theorizing plays in cognitive science and, moreover, b) explains why it really matters to the contemporary study of cognition. I conclude by outlining a range of further implications of this account for current debate in cognitive science.
In recent years evolutionary psychologists have developed and defended the Massive Modularity Hypothesis, which maintains that our cognitive architecture—including the part that subserves ‘central processing’—is largely or perhaps even entirely composed of innate, domain-specific computational mechanisms or ‘modules’. In this paper I argue for two claims. First, I show that the two main arguments that evolutionary psychologists have offered for this general architectural thesis fail to provide us with any reason to prefer it to a competing picture of the mind which I call the Library Model of Cognition. Second, I argue that this alternative model is compatible with the central theoretical and methodological commitments of evolutionary psychology. Thus I argue that, at present, the endorsement of the Massive Modularity Hypothesis by evolutionary psychologists is both unwarranted and unmotivated.
Though we are in broad agreement with much of Elqayam & Evans' (E&E's) position, we criticize two aspects of their argument. First, rejecting normativism is unlikely to yield the benefits that E&E seek. Second, their conception of rational norms is overly restrictive and, as a consequence, their arguments at most challenge a relatively restrictive version of normativism.
The concept of innateness appears in systematic research within cognitive science, but it also appears in less systematic modes of thought that long predate the scientific study of the mind. The present studies therefore explore the relationship between the properly scientific uses of this concept and its role in ordinary folk understanding. Studies 1-4 examined the judgments of people with no specific training in cognitive science. Results showed (a) that judgments about whether a trait was innate were not affected by whether or not the trait was learned, but (b) such judgments were impacted by moral considerations. Study 5 looked at the judgments of both non-scientists and scientists, in conditions that encouraged either thinking about individual cases or thinking about certain general principles. In the case-based condition, both non-scientists and scientists showed an impact of moral considerations but little impact of learning. In the principled condition, both non-scientists and scientists showed an impact of learning but little impact of moral considerations. These results suggest that both non-scientists and scientists are drawn to a conception of innateness that differs from the one at work in contemporary scientific research but that they are also both capable of 'filtering out' their initial intuitions and using a more scientific approach.
During the last 25 years, researchers studying human reasoning and judgment in what has become known as the “heuristics and biases” tradition have produced an impressive body of experimental work which many have seen as having “bleak implications” for the rationality of ordinary people (Nisbett and Borgida 1975). According to one proponent of this view, when we reason about probability we fall victim to “inevitable illusions” (Piattelli-Palmarini 1994). Other proponents maintain that the human mind is prone to “systematic deviations from rationality” (Bazerman & Neale 1986) and is “not built to work by the rules of probability” (Gould 1992). It has even been suggested that human beings are “a species that is uniformly probability-blind” (Piattelli-Palmarini 1994). This provocative and pessimistic interpretation of the experimental findings has been challenged from many different directions over the years. One of the most recent and energetic of these challenges has come from the newly emerging field of evolutionary psychology, where it has been argued that it’s singularly implausible to claim that our species would have evolved with no “instinct for probability” and, hence, be “blind to chance” (Pinker 1997, 351). Though evolutionary psychologists concede that it is possible to design experiments that “trick our probability…
…has a more specific role to play in the development of innate cognitive structure. In particular, a common claim… Of course, the conclusion to draw is not that innateness claims are trivially false or that they cannot be characterized…
Machery argues that concepts do not constitute a natural kind. We argue that this is a mistake. When appropriately construed, his discussion in fact bolsters the claim that concepts are a natural kind.
What are the elements from which the human mind is composed? What structures make up our _cognitive architecture?_ One of the most recent and intriguing answers to this question comes from the newly emerging interdisciplinary field of evolutionary psychology. Evolutionary psychologists defend a _massively modular_ conception of mental architecture which views the mind—including those parts responsible for such ‘central processes’ as belief revision and reasoning—as composed largely or perhaps even entirely of innate, special-purpose computational mechanisms or ‘modules’ that have been shaped by natural selection to handle the sorts of recurrent information processing problems that confronted our hunter-gatherer forebears (Cosmides and Tooby, 1992; Sperber, 1994; Samuels, 1998a).
Among the most pervasive and fundamental assumptions in cognitive science is that the human mind (or mind-brain) is a mechanism of some sort: a physical device composed of functionally specifiable subsystems. On this view, functional decomposition – the analysis of the overall system into functionally specifiable parts – becomes a central project for a science of the mind, and the resulting theories of cognitive architecture essential to our understanding of human psychology.
Samuels and Stich explore the debate over the extent to which ordinary human reasoning and decision making is rational. One prominent cluster of views, often associated with the heuristics and biases tradition in psychology, maintains that human reasoning is, in important respects, normatively problematic or irrational. Samuels and Stich start by sketching some key experimental findings from this tradition and describe a range of pessimistic claims about the rationality of ordinary people that these and related findings are sometimes taken to support. Such pessimistic interpretations of the experimental findings have not gone unchallenged, however: Samuels and Stich outline some of the research on reasoning that has been done by evolutionary psychologists and sketch a cluster of more optimistic theses about ordinary reasoning that such psychologists defend. Although Samuels and Stich think that the most dire pronouncements made by writers in the heuristics and biases tradition are unwarranted, they also maintain that the situation is rather more pessimistic than sometimes suggested by evolutionary psychologists. They conclude by defending this “middle way” and sketch a family of “dual processing” theories of reasoning which, they argue, offer some support for the moderate interpretation they advocate.
In this paper I defend the classical computational account of reasoning against a range of highly influential objections, sometimes called relevance problems. Such problems are closely associated with the frame problem in artificial intelligence and, to a first approximation, concern the issue of how humans are able to determine which of a range of representations are relevant to the performance of a given cognitive task. Though many critics maintain that the nature and existence of such problems provide grounds for rejecting classical computationalism, I show that this is not so. Some of these putative problems are a cause for concern only on highly implausible assumptions about the extent of our cognitive capacities, whilst others are a cause for concern only on similarly implausible views about the commitments of classical computationalism. Finally, some versions of the relevance problem are not really objections but hard research issues that any satisfactory account of cognition needs to address. I conclude by considering the diagnostic issue of why accounts of cognition in general—and classical computational accounts, in particular—have fared so poorly in addressing such research issues.
Keywords: Computationalism; Frame problem; Relevance.
There is a puzzling tension in contemporary scientific attitudes towards human nature. On the one hand, evolutionary biologists correctly maintain that the traditional essentialist conception of human nature is untenable; and moreover that this is obviously so in the light of quite general and exceedingly well-known evolutionary considerations. On the other hand, talk of human nature abounds in certain regions of the sciences, especially in linguistics, psychology and cognitive science. In this paper I articulate a conception of human nature that a) captures how cognitive and behavioral scientists tend to deploy the notion, whilst b) evading standard evolutionary objections, and c) allowing human nature – and theories thereof – to fulfill many of their traditional theoretical roles.
Over the past few decades, reasoning and rationality have been the focus of enormous interdisciplinary attention, attracting interest from philosophers, psychologists, economists, statisticians and anthropologists, among others. The widespread interest in the topic reflects the central status of reasoning in human affairs. But it also suggests that there are many different though related projects and tasks which need to be addressed if we are to attain a comprehensive understanding of reasoning.
There are multiple formal characterizations of the natural numbers available. Despite being inter-derivable, they plausibly codify different possible applications of the naturals – doing basic arithmetic, counting, and ordering – as well as different philosophical conceptions of those numbers: structuralist, cardinal, and ordinal. Some influential philosophers of mathematics have argued for a non-egalitarian attitude according to which one of those characterizations is ‘more basic’ or ‘more fundamental’ than the others. This paper addresses two related issues. First, we review some of these non-egalitarian arguments, lay out a laundry list of different, legitimate, notions of relative priority, and suggest that these arguments plausibly employ different such notions. Secondly, we argue that given a metaphysical-cum-epistemological gloss suggested by Frege's foundationalist epistemology, the ordinals are plausibly more basic than the cardinals. This is just one orientation to relative priority one could take, however. Ultimately, we subscribe to an egalitarian attitude towards these formal characterizations: they are, in some sense, equally ‘legitimate’.
One of the more distinctive features of Bob Hale and Crispin Wright’s neologicism about arithmetic is their invocation of Frege’s Constraint – roughly, the requirement that the core empirical applications for a class of numbers be “built directly into” their formal characterization. In particular, they maintain that, if adopted, Frege’s Constraint adjudicates in favor of their preferred foundation – Hume’s Principle – and against alternatives, such as the Dedekind-Peano axioms. In what follows we establish two main claims. First, we show that, if sound, Hale and Wright’s arguments for Frege’s Constraint at most establish a version on which the relevant application of the naturals is transitive counting – roughly, the counting procedure by which numerals are used to answer “how many”-questions. Second, we show that this version of Frege’s Constraint fails to adjudicate in favor of Hume’s Principle. If this is the version of Frege’s Constraint that a foundation for arithmetic must respect, then Hume’s Principle no more – and no less – meets the requirement than the Dedekind-Peano axioms do.
Do accounts of scientific theory formation and revision have implications for theories of everyday cognition? We maintain that failing to distinguish between importantly different types of theories of scientific inference has led to fundamental misunderstandings of the relationship between science and everyday cognition. In this article, we focus on one influential manifestation of this phenomenon which is found in Fodor's well-known critique of theories of cognitive architecture. We argue that in developing his critique, Fodor confounds a variety of distinct claims about the holistic nature of scientific inference. Having done so, we outline more promising relations that hold between theories of scientific inference and ordinary cognition.
The philosophy of cognitive science is concerned with fundamental philosophical and theoretical questions connected to the sciences of the mind. How does the brain give rise to conscious experience? Does speaking a language change how we think? Is a genuinely intelligent computer possible? What features of the mind are innate? Advances in cognitive science have given philosophers important tools for addressing these sorts of questions; and cognitive scientists have, in turn, found themselves drawing upon insights from philosophy—insights that have often taken their research in novel directions. The Oxford Handbook of Philosophy of Cognitive Science brings together twenty-one newly commissioned chapters by leading researchers in this rich and fast-growing area of philosophy. It is an indispensable resource for anyone who seeks to understand the implications of cognitive science for philosophy, and the role of philosophy within cognitive science.
There is a venerable philosophical tradition that views human beings as intrinsically rational, though even the most ardent defender of this view would admit that under certain circumstances people’s decisions and thought processes can be very irrational indeed. When people are extremely tired, or drunk, or in the grip of rage, they sometimes reason and act in ways that no account of rationality would condone. About thirty years ago, Amos Tversky, Daniel Kahneman and a number of other psychologists began reporting findings suggesting much deeper problems with the traditional idea that human beings are intrinsically rational animals. What these studies demonstrated is that even under quite ordinary circumstances where fatigue, drugs and strong emotions are not factors, people reason and make judgments in ways that systematically violate familiar canons of rationality on a wide array of problems. Those first surprising studies sparked the growth of a major research tradition whose impact has been felt in economics, political theory, medicine and other areas far removed from cognitive science. In Section 2, we will sketch a few of the better known experimental findings in this area. We’ve chosen these particular findings because they will play a role at a later stage of the paper. For readers who would like a deeper and more systematic account of the fascinating and disquieting research on reasoning and judgment, there are now several excellent texts and anthologies available.
A core commitment of Bob Hale and Crispin Wright’s neologicism is their invocation of Frege’s Constraint—roughly, the requirement that the core empirical applications for a class of numbers be “built directly into” their formal characterization. According to these neologicists, if legitimate, Frege’s Constraint adjudicates in favor of their preferred foundation—Hume’s Principle—and against alternatives, such as the Dedekind–Peano axioms. In this paper, we consider a recent argument for legitimating Frege’s Constraint due to Hale, according to which the primary empirical application of the naturals is transitive counting, or answering ‘how many’-questions using numerals. We make two claims regarding Hale’s argument. First, it fails to legitimate Frege’s Constraint in virtue of resting on unsupported and highly contentious assumptions. Secondly, even if sound, Hale’s argument would vindicate a version of Frege’s Constraint which fails to adjudicate in favor of Hume’s Principle over alternative characterizations of the naturals.
This chapter examines the core explanatory strategies of cognitive science and their application to the study of psychopathology. In addition to providing a taxonomy of different strategies, we illustrate their application, with special attention to Autism Spectrum Disorder and Major Depressive Disorder. We conclude by considering two challenges to the prospects of a developed cognitive science of psychopathology.
This chapter offers a high-level overview of the philosophy of cognitive science and an introduction to the Oxford Handbook of Philosophy of Cognitive Science. The philosophy of cognitive science emerged out of a set of common and overlapping interests among philosophers and scientists who study the mind. We identify five categories of issues that illustrate the best work in this broad field: (1) traditional philosophical issues about the mind that have been invigorated by research in cognitive science, (2) issues regarding the practice of cognitive science and its foundational assumptions, (3) issues regarding the explication and clarification of core concepts in cognitive science, (4) first-order empirical issues where philosophers participate in the interdisciplinary investigation of particular psychological phenomena, (5) traditional philosophical issues that aren’t about the mind but that can be informed by a better understanding of how the mind works.
A central claim of the target article is that language is the medium of domain-general, cross-modular thought; and according to Carruthers, the main, direct evidence for this thesis comes from a series of fascinating studies on spatial reorientation. I argue that these studies, in fact, provide us with no reason whatsoever to accept this cognitive conception of language.
In his recent John Locke Lectures – published as Between Saying and Doing – Brandom extends and refines his views on the nature of language and philosophy by developing a position that he calls Analytic Pragmatism. Although Brandom’s project bears on an extraordinarily rich array of different philosophical issues, we focus here on the contention that certain vocabularies have a privileged status within our linguistic practices, and that when adequately understood, the practices in which these vocabularies figure can help furnish us with an account of semantic intentionality. Brandom’s claim is that such vocabularies are privileged because they are a species of what he calls universal LX vocabulary – roughly, vocabulary whose mastery is implicit in any linguistic practice whatsoever. We show that, contrary to Brandom’s claim, logical vocabulary per se fails to satisfy the conditions that must be met for something to count as universal LX vocabulary. Further, we show that exactly analogous considerations undermine his claim that modal vocabulary is universal LX. If our arguments are sound, then, contrary to what Brandom maintains, intentionality cannot be explicated as a “pragmatically mediated semantic phenomenon”, at any rate not of the sort that he proposes.
This dissertation focuses on the massive modularity hypothesis defended by evolutionary psychologists---the hypothesis that the human mind is composed largely or perhaps even entirely of special purpose information processing organs or "modules" that have been shaped by natural selection to handle the sorts of recurrent information processing problems that confronted our hunter-gatherer forebears.

In discussing MMH, I have three central goals. First, I aim to clarify the hypothesis and develop theoretically useful notions of "module" and "domain-specificity" that can play the roles required of them by evolutionary psychology. Second, I aim to evaluate the plausibility of MMH in the light of the broad range of arguments that have been developed and defended in the literature. I argue that all the main, general arguments both for and against MMH are unsatisfactory. Moreover, I suggest that if the case for MMH is to be made, it will only result from the successive accumulation of specific, empirical evidence for the existence of particular modules.

Finally, I address a range of issues that arise from evolutionary psychological approaches to reasoning and rationality. Much of what evolutionary psychologists have said about human reasoning is in response to a widely discussed "pessimistic" interpretation that has been developed and defended by Kahneman, Tversky and their followers. According to this view, human beings are prone to systematic deviations from appropriate norms of rationality because they lack the underlying competence to handle a wide array of reasoning tasks. Evolutionary psychologists appear to reject this pessimistic interpretation in favor of the view that we possess a wide range of reasoning modules that employ rational rules of inference. I argue, however, that there is no genuine disagreement between evolutionary psychologists and their opponents over the extent to which human beings are rational.
I also discuss the distinction between competence errors and performance errors. Although this distinction has played a central role in recent discussions of human rationality, I argue that if MMH is true, then we face insurmountable problems in trying to draw the performance error/competence error distinction.