Given the fundamental role that concepts play in theories of cognition, philosophers and cognitive scientists have a common interest in concepts. Nonetheless, there is a great deal of controversy regarding what kinds of things concepts are, how they are structured, and how they are acquired. This chapter offers a detailed high-level overview and critical evaluation of the main theories of concepts and their motivations. Taking into account the various challenges that each theory faces, the chapter also presents a novel approach to concepts that is organized around two ideas. The first is a pluralistic view of differing types of conceptual structure. The second is a model that treats concepts as atomic representations that are linked to various types of conceptual structures.
What is a concept? Philosophers have given many different answers to this question, reflecting a wide variety of approaches to the study of mind and language. Nonetheless, at the most general level, there are two dominant frameworks in contemporary philosophy. One proposes that concepts are mental representations, while the other proposes that they are abstract objects. This paper looks at the differences between these two approaches, the prospects for combining them, and the issues that are involved in the dispute. We argue that powerful motivations have been offered in support of both frameworks. This suggests the possibility of combining the two. Unlike Frege, we hold that the resulting position is perfectly coherent and well worth considering. Nonetheless, we argue that it should be rejected along with the view that concepts are abstract objects.
Noam Chomsky's Poverty of the Stimulus Argument is one of the most famous and controversial arguments in the study of language and the mind. Though widely endorsed by linguists, the argument has met with much resistance in philosophy. Unfortunately, philosophical critics have often failed to fully appreciate the power of the argument. In this paper, we provide a systematic presentation of the Poverty of the Stimulus Argument, clarifying its structure, content, and evidential base. We defend the argument against a variety of philosophical criticisms, new and old, and argue that the Poverty of the Stimulus Argument continues to deserve its guiding role in the study of language and the mind.
Radical concept nativism is the thesis that virtually all lexical concepts are innate. Notoriously endorsed by Jerry Fodor (1975, 1981), radical concept nativism has had few supporters. However, it has proven difficult to say exactly what’s wrong with Fodor’s argument. We show that previous responses are inadequate on a number of grounds. Chief among these is that they typically do not achieve sufficient distance from Fodor’s dialectic, and, as a result, they do not illuminate the central question of how new primitive concepts are acquired. To achieve a fully satisfactory response to Fodor’s argument, one has to juxtapose questions about conceptual content with questions about cognitive development. To this end, we formulate a general schema for thinking about how concepts are acquired and then present a detailed illustration.
In this paper, I develop a novel account of concept acquisition for an atomistic theory of concepts. Conceptual atomism is rarely explored in cognitive science because of the feeling that atomistic treatments of concepts are inherently nativistic. My model illustrates, on the contrary, that atomism does not preclude the learning of a concept.
This paper takes a fresh look at the nativism–empiricism debate, presenting and defending a nativist perspective on the mind. Empiricism is often taken to be the default view both in philosophy and in cognitive science. This paper argues, on the contrary, that there should be no presumption in favor of empiricism (or nativism), but that the existing evidence suggests that nativism is the most promising framework for the scientific study of the mind. Our case on behalf of nativism has four parts. (1) We characterize nativism’s core commitments relative to the contemporary debate between empiricists and nativists, (2) we present the positive case for nativism in terms of two central nativist arguments (the poverty of the stimulus argument and the argument from animals), (3) we respond to a number of influential objections to nativist theories, and (4) we explain the nativist approach to the conceptual system.
Theories of number concepts often suppose that the natural numbers are acquired as children learn to count and as they draw an induction based on their interpretation of the first few count words. In a bold critique of this general approach, Rips, Asmuth, and Bloomfield [Rips, L., Asmuth, J., & Bloomfield, A. Giving the boot to the bootstrap: How not to learn the natural numbers. Cognition, 101, B51–B60] argue that such an inductive inference is consistent with a representational system that clearly does not express the natural numbers and that possession of the natural numbers requires further principles that make the inductive inference superfluous. We argue that their critique is unsuccessful. Provided that children have access to a suitable initial system of representation, the sort of inductive inference that Rips et al. call into question can in fact facilitate the acquisition of larger integer concepts without the addition of any further principles.
This article provides a critical overview of competing theories of conceptual structure (definitional structure, probabilistic structure, theory structure), including the view that concepts have no structure (atomism). We argue that the explanatory demands that these different theories answer to are best accommodated by an organization in which concepts are taken to have atomic cores that are linked to differing types of conceptual structure.
In LOT 2: The Language of Thought Revisited, Jerry Fodor argues that concept learning of any kind—even for complex concepts—is simply impossible. In order to avoid the conclusion that all concepts, primitive and complex, are innate, he argues that concept acquisition depends on purely noncognitive biological processes. In this paper, we show (1) that Fodor fails to establish that concept learning is impossible, (2) that his own biological account of concept acquisition is unworkable, and (3) that there are in fact many promising general models for explaining how concepts are learned.
At least since W. V. O. Quine's famous critique of the analytic/synthetic distinction, philosophers have been deeply divided over whether there are any analytic truths. One line of thought suggests that the simple fact that people have 'intuitions of analyticity' might provide an independent argument for analyticities. If defenders of analyticity can explain these intuitions and opponents cannot, then perhaps there are analyticities after all. We argue that opponents of analyticity have some unexpected resources for explaining these intuitions and that, accordingly, the argument from intuition fails.
Where do human numerical abilities come from? This article is a commentary on Leibovich et al.’s “From 'sense of number' to 'sense of magnitude'—The role of continuous magnitudes in numerical cognition”. Leibovich et al. argue against nativist views of numerical development by noting limitations in newborns’ vision and limitations regarding newborns’ ability to individuate objects. I argue that these considerations do not undermine competing nativist views and that Leibovich et al.'s model itself presupposes that infant learners have numerical representations.
Creations of the Mind presents sixteen original essays by theorists from a wide variety of disciplines who have a shared interest in the nature of artifacts and their implications for the human mind. All the papers are written specially for this volume, and they cover a broad range of topics concerned with the metaphysics of artifacts, our concepts of artifacts and the categories that they represent, the emergence of an understanding of artifacts in infants' cognitive development, as well as the evolution of artifacts and the use of tools by non-human animals. This volume will be a fascinating resource for philosophers, cognitive scientists, and psychologists, and the starting point for future research in the study of artifacts and their role in human understanding, development, and behaviour. Contributors: John R. Searle, Richard E. Grandy, Crawford L. Elder, Amie L. Thomasson, Jerrold Levinson, Barbara C. Malt, Steven A. Sloman, Dan Sperber, Hilary Kornblith, Paul Bloom, Bradford Z. Mahon, Alfonso Caramazza, Jean M. Mandler, Deborah Kelemen, Susan Carey, Frank C. Keil, Marissa L. Greif, Rebekkah S. Kerner, James L. Gould, Marc D. Hauser, Laurie R. Santos, Steven Mithen.
This entry provides an overview of theories of concepts that is organized around five philosophical issues: (1) the ontology of concepts, (2) the structure of concepts, (3) empiricism and nativism about concepts, (4) concepts and natural language, and (5) concepts and conceptual analysis.
This article is a commentary on Machery (2009) Doing without Concepts. Concepts are mental symbols that have semantic structure and processing structure. This approach (1) allows different disciplines to converge on a common subject matter, (2) promotes theoretical unification, and (3) accommodates the varied processes that preoccupy Machery. It also avoids problems that go with Machery's eliminativism, including the explanation of how fundamentally different types of concepts can be co-referential.
Philosophers have often claimed that general ideas or representations have their origin in abstraction, but it remains unclear exactly what abstraction as a psychological process consists in. We argue that the Lockean aspiration of using abstraction to explain the origins of all general representations cannot work and that at least some general representations have to be innate. We then offer an explicit framework for understanding abstraction, one that treats abstraction as a computational process that operates over an innate quality space of fine-grained general representations. We argue that this framework has important philosophical implications for the nativism-empiricism dispute, for questions about the acquisition of unstructured representations, and for questions about the relation between human and animal minds.
Hilary Putnam's Twin Earth thought experiment has come to have an enormous impact on contemporary philosophical thought. But while most of the discussion has taken place within the context of the philosophy of mind and language, Terence Horgan and Mark Timmons (H&T) have defended the intriguing suggestion that a variation on the original thought experiment has important consequences for ethics. In a series of papers, they've developed the idea of a Moral Twin Earth and have argued that its significance is that it has the resources to undermine naturalistic versions of moral realism. H&T don't hold back in their assessment. "Moral Twin…
The Conceptual Mind’s twenty-four newly commissioned essays cover the most important recent theoretical developments in the study of concepts, identifying and exploring the big ideas that will guide further research over the next decade. Topics include concepts and animals, concepts and the brain, concepts and evolution, concepts and perception, concepts and language, concepts across cultures, concept acquisition and conceptual change, concepts and normativity, concepts in context, and conceptual individuation.
One of the most important abilities we have as humans is the ability to think about number. In this chapter, we examine the question of whether there is an essential connection between language and number. We provide a careful examination of two prominent theories according to which concepts of the positive integers are dependent on language. The first of these claims that language creates the positive integers on the basis of an innate capacity to represent real numbers. The second claims that language’s function is to integrate contents from modules that humans share with other animals. We argue that neither model is successful.
A standard view within psychology is that there have been two important shifts in the study of concepts and that each has led to some improvements. The first shift was from the classical theory of concepts to probabilistic theories, including the prototype theory. The second shift was from probabilistic theories to theory-based theories. In this article, I critically evaluate the view that the first shift was a major advance and argue that the prototype theory suffers some of the same problems that have been thought to challenge the classical theory.
In a survey of his views in the philosophy of mind, David Lewis criticizes much recent work in the field by attacking an imaginary opponent, Strawman. His case against Strawman focuses on four central theses which Lewis takes to be widely accepted among contemporary philosophers of mind. These theses concern (1) the language of thought hypothesis and its relation to folk psychology, (2) narrow content, (3) de se content, and (4) rationality. We respond to Lewis, arguing (among other things) that he underestimates Strawman’s theoretical resources in a variety of important ways.
The Language of Thought Hypothesis is often taken to have the fatal flaw that it generates an explanatory regress. The language of thought is invoked to explain certain features of natural language (e.g., that it is learned, understood, and is meaningful), but, according to the regress argument, the language of thought itself has these same features and hence no explanatory progress has been made. We argue that such arguments rely on the tacit assumption that the entire motivation for the language of thought consists in explaining the explanandum that allegedly generates the regress. But this tacit assumption is simply false. The Language of Thought Hypothesis is a cogent view and one with considerable explanatory advantages.
The philosophy of cognitive science is concerned with fundamental philosophical and theoretical questions connected to the sciences of the mind. How does the brain give rise to conscious experience? Does speaking a language change how we think? Is a genuinely intelligent computer possible? What features of the mind are innate? Advances in cognitive science have given philosophers important tools for addressing these sorts of questions; and cognitive scientists have, in turn, found themselves drawing upon insights from philosophy--insights that have often taken their research in novel directions. The Oxford Handbook of Philosophy of Cognitive Science brings together twenty-one newly commissioned chapters by leading researchers in this rich and fast-growing area of philosophy. It is an indispensable resource for anyone who seeks to understand the implications of cognitive science for philosophy, and the role of philosophy within cognitive science.
Many psychologists think that concepts should be understood on analogy with the terms of scientific theories, yet the significance of this claim has always been obscure. In this paper, I clarify the psychological content of the theory analogy, focusing on influential pieces by Susan Carey. Once plainly put, the analogy amounts to the view that a mental representation has its semantic properties by virtue of its role in a restricted knowledge structure. One of the commendable things about Carey's work is that, unlike many other psychologists who appeal to the theory analogy, she takes seriously the need to specify how these structures are constrained. At the same time, the constraints she offers are insufficient. Her account also faces challenges from work in the semantics of natural kind terms.
Strong nativist views about numerical concepts claim that human beings have at least some innate precise numerical representations. Weak nativist views claim only that humans, like other animals, possess an innate system for representing approximate numerical quantity. We present a new strong nativist model of the origins of numerical concepts and defend the strong nativist approach against recent cross-cultural studies that have been interpreted to show that precise numerical concepts are dependent on language and that they are restricted to speakers of languages with the right kind of structure.
This chapter provides a critical overview of ten central arguments that philosophers have given in support of a distinction between the conceptual and the nonconceptual. We use these arguments to examine the question of whether (and in what sense) perceptual states might be deemed nonconceptual and also whether (and in what sense) animals and infants might be deemed to lack concepts. We argue that philosophers have implicitly relied on a wide variety of different ways to draw the conceptual/nonconceptual distinction and that all ten of the arguments we discuss face considerable difficulties.
The Language of Thought Hypothesis (LOT) is at the centre of a number of the most fundamental debates about the mind. Yet many philosophers want to reject LOT out of hand on the grounds that it is essentially a recidivistic doctrine, one that has long since been refuted. According to these philosophers, LOT is subject to a devastating regress argument. There are several versions of the argument, but the basic idea is as follows. (1) Natural language has some important feature, X. (2) Defenders of LOT appeal to an internal system of representation in order to explain this feature of natural language. (3) Yet the hypothesized language of thought also has X. (4) This raises the following dilemma: If we offer an analogous explanation of the language of thought…
This chapter offers a high-level overview of the philosophy of cognitive science and an introduction to the Oxford Handbook of Philosophy of Cognitive Science. The philosophy of cognitive science emerged out of a set of common and overlapping interests among philosophers and scientists who study the mind. We identify five categories of issues that illustrate the best work in this broad field: (1) traditional philosophical issues about the mind that have been invigorated by research in cognitive science, (2) issues regarding the practice of cognitive science and its foundational assumptions, (3) issues regarding the explication and clarification of core concepts in cognitive science, (4) first-order empirical issues where philosophers participate in the interdisciplinary investigation of particular psychological phenomena, and (5) traditional philosophical issues that aren’t about the mind but that can be informed by a better understanding of how the mind works.
Conceptual structures are commonly likened to scientific theories, yet the content and motivation of the theory analogy are rarely discussed. Gregory Murphy and Douglas Medin's The Role of Theories in Conceptual Coherence is a notable exception and has become an authoritative exposition of the utility of the theory analogy. For Murphy and Medin, the theory analogy solves what they call the problem of conceptual coherence or the problem of conceptual glue. I argue that they conflate a number of issues under these rubrics and that in each case either the problem to be solved isn't subject to a general solution or the theory analogy is of little use. The issues I consider are: (1) what makes a concept efficient, useful, and informative, (2) what makes a concept refer to what it does, (3) what makes a set of objects form a single category, and (4) what makes concepts combine in one way rather than another.
One of the most important recent developments in the study of concepts has been the resurgence of interest in nativist accounts of the human conceptual system. However, many theorists suppose that a key feature of neural organization—the brain’s plasticity—undermines the nativist approach to concept acquisition. We argue that, on the contrary, not only does the brain’s plasticity fail to undermine concept nativism, but a detailed examination of the neurological evidence actually provides powerful support for concept nativism.
We examine a proposal of Eric Lormand's for dealing with perhaps the chief difficulty facing holistic theories of meaning—meaning instability. The problem is that, given a robust holism, small changes in a representational system are likely to lead to meaning changes throughout the system. Consequently, different individuals are likely never to mean the same thing. Lormand suggests that holists can avoid this problem—and even secure more stability than non-holists—by positing that symbols have multiple meanings. We argue that the proposal doesn't work, however, since multiple meanings are unstable for much the same reason that single meanings are.
This article is a commentary on Carey (2009) The Origin of Concepts. Carey rightly rejects the building blocks model of concept acquisition on the grounds that new primitive concepts can be learned via the process of bootstrapping. But new primitives can be learned by other acquisition processes that do not involve bootstrapping, and bootstrapping itself is not a unitary process. Nonetheless, the processes associated with bootstrapping provide important insights into conceptual change.
Conceptual analysis is undergoing a revival in philosophy, and much of the credit goes to Frank Jackson. Jackson argues that conceptual analysis is needed as an integral component of so-called serious metaphysics and that it also does explanatory work in accounting for such phenomena as categorization, meaning change, communication, and linguistic understanding. He even goes so far as to argue that opponents of conceptual analysis are implicitly committed to it in practice. We show that he is wrong on all of these points and that his case for conceptual analysis doesn’t succeed. At the same time, we argue that the sorts of intuitions that figure in conceptual analysis may still have a significant role to play in philosophy. So naturalists needn’t disregard intuitions altogether.
The topic of this thesis is the nature of human concepts understood as mental symbols or representations. Many discussions in this area presuppose an inferential model of concepts taken together with what I call the standard model of concept learning. An inferential model of concepts says that a concept's identity depends upon its participating in inferential dispositions linking it to certain other concepts. For example, one might think that part of what makes a mental symbol the concept BIRD is that it participates in an inferential disposition linking it to the concept ANIMAL. The standard model of concept learning says that learning a concept involves assembling it from previously available concepts. For example, one might think that the concept BIRD is learned by putting together the concept ANIMAL with concepts like WING, FLIES, and so on, yielding a more complex structure. The standard model of concept learning presupposes a form of the inferential model of concepts. Thus one of the reasons many people endorse the inferential model is that they are committed to a broadly empiricist picture of the mind; they assume that most concepts are learned, so they must also assume that most concepts have the sort of structure that the model presupposes. My thesis challenges both of these points with respect to a class of concepts that has been of central interest in philosophy and cognitive science, namely, the lexical concepts. I argue that the independent evidence for the inferential model in this case is no good and consequently that the standard model of concept learning doesn't work. Still, it remains plausible that many lexical concepts are learned. What we need are new learning models, ones that depart in significant ways from the standard model. I end the thesis by outlining a proposal that may work for the natural kind concepts.
This collection of 16 original articles by prominent theorists from a variety of disciplines provides an excellent insight into current thinking about artifacts. The four sections address issues concerning the metaphysics of artifacts, the nature and cognitive development of artifact concepts, and the place of artifacts in evolutionary history. The most overtly philosophical contributions are in the first two sections. Metaphysical issues addressed include the ‘mind-dependence’ of artifacts and the bearing of this on their ‘real’ existence, and the distinction between natural and artifact kinds, and its implications for issues in epistemology and semantics – for example, whether there is ‘maker's knowledge’ of artifacts, and whether ‘direct’ theories of reference apply to artifact-kind terms. The papers concerned with the nature of artifact concepts – the ways in which we represent artifacts to ourselves – address how judgements of artifact identity track judgements about manifest appearance, function, and maker's intentions, and the neuroscientific basis for artifact categorization. Papers…