It would be ever so nice if there were a viable analytic/synthetic distinction. Though nobody knows for sure, there would seem to be several major philosophical projects that having one would advance. For example: analytic sentences are supposed to have their truth values solely in virtue of the meanings (together with the syntactic arrangement) of their constituents; i.e., their truth values are supposed to supervene on their linguistic properties alone. So they are true in every possible world where they mean what they mean here. So they are necessarily true. So if there were a viable analytic/synthetic distinction (‘a/s distinction’ often hereafter), we would understand the necessity of at least some necessary truths. If, in particular, it were to turn out that the logical and/or the mathematical truths are analytic, we would understand why they are necessary. It would be ever so nice to understand why the logical and/or mathematical truths are necessary (cf. Gibson 1998; Quine 1998). Any account of necessity would be welcome, but one according to which necessary truths are analytic has special virtues. Necessity isn’t, of course, an epistemic property. Still, suppose that the necessity of a sentence arises from the meanings of its parts. It’s natural to assume that one of the things one knows in virtue of knowing one’s language is what the expressions of the language mean (cf., e.g., Boghossian 1994). A treatment of modality in terms of analyticity therefore connects the concept of necessity with the concept of knowledge; and knowledge is, of course, an epistemic property. So maybe if there is an a/s distinction, we could explain why the necessary truths, or at least some of the necessary truths, are knowable a priori by anybody who knows a language that can express them (cf. Quine 1991). It bears emphasis that not every theory of…
The idea that quotidian, middle-level concepts typically have internal structure -- definitional, statistical, or whatever -- plays a central role in practically every current approach to cognition. Correspondingly, the idea that words that express quotidian, middle-level concepts have complex representations "at the semantic level" is recurrent in linguistics; it's the defining thesis of what is often called "lexical semantics," and it unites the generative and interpretive traditions of grammatical analysis. Recently, Hale and Keyser (1993) have provided a budget of sophisticated and persuasive arguments for the claim that 'denominal' verbs are typically derived from phrases containing the corresponding nouns: 'sing' (v. tr.) is supposed to come from something like DO A SONG; 'saddle' (v. tr.) is supposed to come from something like PUT A SADDLE ON; 'shelve' (v. tr.) is supposed to come from something like PUT ON A SHELF, and so forth. We think these are among the most persuasive arguments for lexical decomposition in the linguistics literature. Still, this paper is going to claim that they are finally unconvincing. In Part 1, we will show that there are quite serious arguments of a familiar kind against the decompositional analyses that Hale and Keyser (henceforth, HK) propose; in Part 2 we'll show that the arguments that HK offer in favor of their analyses are flawed.
It matters to a number of projects whether monomorphemic lexical items (‘boy’, ‘cat’, ‘give’, ‘break’, etc.) have internal linguistic structure. (Call the theory that they do the Decomposition Hypothesis (DC).) The cognitive science consensus is, overwhelmingly, that DC is true; for example, that there is a level of grammar at which ‘break’ (tr.) has the structure ‘cause to break (int.)’ and so forth. We find this consensus surprising since, as far as we can tell, there is practically no evidence to support it. (For example, there is no psychological evidence that you can’t have a word that expresses the concept BREAK (tr.) unless you have the concept CAUSE. But there ought to be if CAUSE is a constituent of BREAK (tr.).) This isn’t, of course, to say that there are no prima facie arguments at all for DC. The best ones we’ve heard are the Impossible Word Arguments (IWAs). That being so, we’re very interested in whether IWAs are, in fact, sound.
This is a long paper with a long title, but its moral is succinct. There are supposed to be two, closely related, philosophical problems about sentences with truth value gaps: If a sentence can't be semantically evaluated, how can it mean anything at all? and How can classical logic be preserved for a language which contains such sentences? We are neutral on whether either of these supposed problems is real. But we claim that, if either is, supervaluation won't solve it.
What kind of theory is the theory of natural selection? -- Internal constraints : what the new biology tells us -- Whole genomes, networks, modules and other complexities -- Many constraints, many environments -- The return of the laws of form -- Many are called but few are chosen : the problem of 'selection-for' -- No exit? : some responses to the problem of 'selection-for' -- Did the dodo lose its ecological niche? : or was it the other way around? -- Summary and postlude.
“It’s not good enough to say there’s some mechanism such that you start out with amoebas and you end up with us. Everybody agrees with that. The question is in this case in the mechanical details. What you need is an account, as it were step by step, about what the constraints are, what the environmental variables are, and Darwin doesn’t give you that.”
Darwinism consists of two parts: a phylogenesis of biological species (ours included) and the claim that the primary mechanism of the evolution of phenotypes is natural selection. I assume that Darwin’s account of phylogeny is essentially correct; attention is directed to the theory of natural selection. I claim that Darwin’s account of evolution by natural selection cannot be sustained. The basic problem is that, according to the consensus view, evolution consists in changes of the distribution of phenotypic traits in populations of organisms. An evolutionary theory must therefore explicate not just the notion of organisms being selected, but also the notion of organisms being selected for their phenotypic traits. I argue that there is no way for a theory of natural selection to do so, and that Darwin’s assumption to the contrary was likely the consequence of placing too much weight on the analogy between natural selection and artificial selection. The paper ends with the suggestion that selectionist explanations, insofar as they are convincing, are best construed as post hoc historical narratives: natural history rather than biology.
Jerry Fodor is one of the leading philosophers of mind and language in the world today. He is best known for his work developing two theses which give their names to his books The Modularity of Mind and The Language of Thought. He teaches philosophy at Rutgers and at the CUNY Graduate Center.
Jerry Fodor presents a new development of his famous Language of Thought hypothesis, which has since the 1970s been at the centre of interdisciplinary debate about how the mind works. Fodor defends and extends the groundbreaking idea that thinking is couched in a symbolic system realized in the brain. This idea is central to the representational theory of mind which Fodor has established as a key reference point in modern philosophy, psychology, and cognitive science. The foundation stone of our present cognitive science is Turing's suggestion that cognitive processes are not associations but computations; and computation requires a language of thought. So the latest on the Language of Thought hypothesis, from its progenitor, promises to be a landmark in the study of the mind. LOT 2 offers a more cogent presentation and a fuller explication of Fodor's distinctive account of the mind, with various intriguing new features. The central role of compositionality in the representational theory of mind is revealed: most of what we know about concepts follows from the compositionality of thoughts. Fodor shows the necessity of a referentialist account of the content of intentional states, and of an atomistic account of the individuation of concepts. Not least among the new developments is Fodor's identification and persecution of pragmatism as the leading source of error in the study of the mind today. LOT 2 sees Fodor advance undaunted towards the ultimate goal of a theory of the cognitive mind, and in particular a theory of the intentionality of cognition. No one who works on the mind can ignore Fodor's views, expressed in the coruscating and provocative style which has delighted and disconcerted countless readers over the years.
We take it that Brandom’s sense of the geography is that our way of proceeding is more or less the first and his is more or less the second. But we think this way of describing the situation is both unclear and misleading, and we want to have this out right at the start. Our problem is that we don’t know what “you start with” means either in formulations like “you start with the content of words and proceed to the content of sentences” or in formulations like “you start with the content of sentences and you proceed to the content of words.” Brandom’s official view seems to be that he’s talking about explanatory priorities (see the preceding quote); but we think that can’t really be what he has in mind, and we can’t find any alternative interpretation that seems plausible. Speaking just for ourselves, we’re inclined towards a relatively pragmatic view of explanation; what explanation we should “start with” depends, inter alia, on what it’s an explanation of and whom it’s an explanation for. But, in any case, we would have thought that explanatory priority is of more than heuristic interest only if it reflects a priority of some other kind: ontological, semantical, psychological, or whatever. In talking about what one “starts with”, Brandom must be claiming more than…
Hume? Yes, David Hume, that's who Jerry Fodor looks to for help in advancing our understanding of the mind. Fodor claims Hume's Treatise of Human Nature as the foundational document of cognitive science: it launched the project of constructing an empirical psychology on the basis of a representational theory of mind. Going back to this work after more than 250 years we find that Hume is remarkably perceptive about the components and structure that a theory of mind requires. Careful study of the Treatise helps us to see what is amiss with much twentieth-century philosophy of mind, and to get on the right track.
I started with no goal more ambitious than a critical discussion of Fiona Cowie’s new book about innateness; it seemed to me that her arguments, unless refuted in detail, were likely to affront some or other abstract entity whose cause I favor: The Good, The True, The Beautiful; whatever. But there were so many things that the book struck me as being wrong about that the proposed critique became, in effect, an explication of the kind of nativism I think a rationalist in cognitive psychology should endorse. And the more of that I came to explicate, the more digressions and elaborations suggested themselves. And elaborations of the digressions. And digressions from the elaborations.
Compositionality is the idea that the meanings of complex expressions (or concepts) are constructed from the meanings of the less complex expressions (or concepts) that are their constituents. Over the last few years, we have just about convinced ourselves that compositionality is the sovereign test for theories of lexical meaning. So hard is this test to pass, we think, that it filters out practically all of the theories of lexical meaning that are current in either philosophy or cognitive science. Among the casualties are, for example, the theory that lexical meanings are statistical structures (like stereotypes); the theory that the meaning of a word is its use; the theory that knowing the meaning of (at least some) words requires having a recognitional capacity for (at least some) of the things that it applies to; and the theory that knowing the meaning of a word requires knowing criteria for applying it. Indeed, we think that only two theories of the lexicon survive the compositionality constraint: viz., the theory that all lexical meanings are primitive and the theory that some lexical meanings are primitive and the rest are definitions. So compositionality does a lot of work in lexical semantics, according to our lights.