In the beginning, there was the DN (Deductive-Nomological) model of explanation, articulated by Hempel and Oppenheim (1948). According to DN, scientific explanation is subsumption under natural law. Individual events are explained by deducing them from laws together with initial conditions (or boundary conditions), and laws are explained by deriving them from other, more fundamental laws, as, for example, the simple pendulum law is derived from Newton's laws of motion.
As a procedure, reflective equilibrium is simply a familiar kind of standard scientific method with a new name. A theory is constructed to account for a set of observations. Recalcitrant data may be rejected as noise or explained away as the effects of interference of some sort. Recalcitrant data that cannot be plausibly dismissed force emendations in theory. What counts as a plausible dismissal depends, among other things, on the going theory, as well as on background theory and on knowledge that may be relevant to understanding the experimental design that is generating the observations, including knowledge of the apparatus and observation conditions. This sort of mutual adjustment between theory and data is a familiar feature of scientific practice. Whatever authority RE seems to have comes, I think, from a tacit or explicit recognition that it has the same form as this familiar sort of scientific inference. One way to see the rationale underlying this procedure in science is to focus on prediction. Think of prediction as a matter of projecting what is known onto uncharted territory. To do this, you need a vehicle—a theory—that captures some invariant or pattern in what is known so that you can project it onto the unknown. How convincing the projection is depends on two factors: how sure one is of the observational base, and how sure one is that the theory gets the invariants right. The two factors are not independent, of course. One's confidence in the observational base will be affected by how persuasively the theory identifies and dismisses noise; one's confidence in the theory, on the other hand, will depend on one's confidence in the observations it takes seriously. Prediction is important as a test of theory precisely because verified predictions seem to show that the theory has correctly captured the general in the particular, that it has got the drift of the observational evidence in which our confidence is ultimately grounded.
The purpose of this paper is to set forth a sense in which programs can and do explain behavior, and to distinguish from this a number of senses in which they do not. Once we are tolerably clear concerning the sort of explanatory strategy being employed, two rather interesting facts emerge: (1) though it is true that programs are "internally represented," this fact has no explanatory interest beyond the mere fact that the program is executed; (2) programs which are couched in information processing terms may have an explanatory interest for a given range of behavior which is independent of physiological explanations of the same range of behavior.
Neo-teleology is the two-part thesis that, e.g., (i) we have hearts because of what hearts are for: Hearts are for blood circulation, not the production of a pulse, so hearts are there--animals have them--because their function is to circulate the blood, and (ii) that (i) is explained by natural selection: traits spread through populations because of their functions. This paper attacks this popular doctrine. The presence of a biological trait or structure is not explained by appeal to its function. To suppose otherwise is to trivialize natural selection.
The thesis of this paper is that the causal theory of mental content (hereafter CT) is incompatible with an elementary fact of perceptual psychology, namely, that the detection of distal properties generally requires the mediation of a “theory.” I shall call this fact the nontransducibility of distal properties (hereafter NTDP). The argument proceeds in two stages. The burden of stage one is that, taken together, CT and the language of thought hypothesis (hereafter LOT) are incompatible with NTDP. The burden of stage two is that acceptance of CT requires acceptance of LOT as well. It follows that CT is incompatible with NTDP. I organize things in this way in part because it makes the argument easier to understand, and in part because the stage-two thesis—that CT entails LOT—has some independent interest and is therefore worth separating from the rest of the argument.
The current debate over systematicity concerns the formal conditions a scheme of mental representation must satisfy in order to explain the systematicity of thought. The systematicity of thought is assumed to be a pervasive property of minds, and can be characterized (roughly) as follows: anyone who can think T can think systematic variants of T, where the systematic variants of T are found by permuting T’s constituents. So, for example, it is an alleged fact that anyone who can think the thought that John loves Mary can think the thought that Mary loves John, where the latter thought is a systematic variant of the former.
In this paper, we introduce a novel difficulty for teleosemantics, viz., its inability to account for what we call unexploited content—content a representation has, but which the system that harbors it is currently unable to exploit. In section two, we give a characterization of teleosemantics. Since our critique does not depend on any special details that distinguish the variations in the literature, the characterization is broad, brief and abstract. In section three, we explain what we mean by unexploited content, and argue that any theory of content adequate to ground representationalist theories in cognitive science must allow for it. In section four, we show that teleosemantic theories of the sort we identify in section two cannot accommodate unexploited content, and are therefore unacceptable if intended as attempts to ground representationalist cognitive science. Finally, in section five, we speculate that the existence and importance of unexploited content has likely been obscured by a failure to distinguish representation from indication, and by a tendency to think of representation as reference.
What are the prospects for a cognitive science of meaning? As stated, we think this question is ill posed, for it invites the conflation of several importantly different semantic concepts. In this paper, we want to distinguish the sort of meaning that is an explanandum for cognitive science—something we are going to call meaning—from the sort of meaning that is an explanans in cognitive science—something we are not going to call meaning at all, but rather content. What we are going to call meaning is paradigmatically a property of linguistic expressions or acts: what one’s utterance or sentence means, and what one means by it. What we are going to call content is a property of, among other things, mental representations and indicator signals. We will argue that it is a mistake to identify meaning with content, and that, once this is appreciated, some serious problems emerge for grounding meaning in the sorts of content that cognitive science is likely to provide.
Donald Davidson's "Meaning and Truth" revolutionized our conception of how truth and meaning are related (Davidson). In that famous article, Davidson put forward the bold conjecture that meanings are satisfaction conditions, and that a Tarskian theory of truth for a language is a theory of meaning for that language. In "Meaning and Truth," Davidson proposed only that a Tarskian truth theory is a theory of meaning. But in "Theories of Meaning and Learnable Languages," he argued that the finite base of a Tarskian theory, together with the now familiar combinatorics, would explain how a language with unbounded expressive capacity could be learned with finite means (Davidson). This certainly seems to imply that learning a language is, in part at least, learning a Tarskian truth theory for it, or, at least, learning what is specified by such a theory. Davidson was cagey about committing to the view that meanings actually are satisfaction conditions, but subsequent followers had no such scruples. We can sum this up in a trio of claims (Davidson's Conjecture): (1) A theory of meaning for L is a truth-conditional semantics for L. (2) To know the meaning of an expression in L is to know a satisfaction condition for that expression. (3) Meanings are satisfaction conditions. For the most part, it will not matter in what follows which of these claims is at stake. I will simply take the three to be different ways of formulating what I will call Davidson's Conjecture (or sometimes just The Conjecture). Davidson's Conjecture was a very bold conjecture. I think we are now in a...
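By way of illustration only (the following schematic clauses are standard textbook Tarskian fare, not drawn from the paper), the kind of theory the Conjecture invokes has a finite base of axioms for names and predicates, whose recursive combination entails a T-sentence for each of the infinitely many sentences of L:

% Illustrative Tarski-style fragment; assumes amsmath for \text.
% Finite base: a denotation axiom per name, a satisfaction axiom per predicate.
\[ \mathrm{den}(\text{``John''}) = \text{John}, \qquad x \ \text{satisfies}\ \text{``loves Mary''} \ \leftrightarrow\ x \ \text{loves Mary} \]
% Recursive clauses then yield a T-sentence for every sentence of L, e.g.:
\[ \text{``John loves Mary''}\ \text{is true in } L \ \leftrightarrow\ \text{John loves Mary} \]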
It is commonly supposed that evolutionary explanations of cognitive phenomena involve the assumption that the capacities to be explained are both innate and modular. This is understandable: independent selection of a trait requires that it be both heritable and largely decoupled from other 'nearby' traits. Cognitive capacities realized as innate modules would certainly satisfy these constraints. A viable evolutionary cognitive psychology, however, requires neither extreme nativism nor modularity, though it is consistent with both. In this paper, we seek to show that rather weak assumptions about innateness and modularity are consistent with evolutionary explanations of cognitive capacities. Evolutionary pressures can affect the degree to which the development of a capacity is canalized by biasing acquisition/learning in ways that favor development of concepts and capacities that proved adaptive to an organism's ancestors.
There is a certain view abroad in the land concerning the philosophical problems raised by Tarskian semantics. This view has it that a Tarskian theory of truth in a language accomplishes nothing of interest beyond the definition of truth in terms of satisfaction, and, further, that what is missing — the only thing that would yield a solution to the philosophical problem of truth when added to Tarskian semantics — is a reduction of satisfaction to a non-semantic relation. It seems to me that this view either misidentifies the philosophical problem altogether, or encourages a seriously misleading picture of the nature of the problem. The view I have in mind is nowhere more persuasively at work than in a recent paper by Hartry Field. In this paper Field argues that a Tarskian theory of truth for a natural language is impossible if we insist on Tarski's case-by-case elimination of 'satisfies'. More fundamentally, however, he argues that a Tarskian theory could provide nothing of philosophical interest beyond the admittedly interesting reduction of truth to satisfaction, and his ground for this claim is, roughly, that a Tarskian theory does not reduce its primitive semantic relation — satisfaction — to a non-semantic relation.
This paper is about two kinds of mental content and how they are related. We are going to call them representation and indication. We will begin with a rough characterization of each. The differences, and why they matter, will, hopefully, become clearer as the paper proceeds.
The Knowledge Argument of Frank Jackson has not persuaded physicalists, but their replies have not dispelled the intuition that someone raised in a black and white environment gains genuinely new knowledge when she sees colors for the first time. In what follows, we propose an explanation of this particular kind of knowledge gain that displays it as genuinely new, but orthogonal to both physicalism and phenomenology. We argue that Mary's case is an instance of a common phenomenon in which something new is learned as the result of exploiting representational resources that were not previously exploited, and that this results in gaining genuinely new information.
It has been commonplace in epistemology since its inception to idealize away from computational resource constraints, i.e., from the constraints of time and memory. One thought is that a kind of ideal rationality can be specified that ignores the constraints imposed by limited time and memory, and that actual cognitive performance can be seen as an interaction between the norms of ideal rationality and the practicalities of time and memory limitations. But a cornerstone of naturalistic epistemology is that normative assessment is constrained by capacities: you cannot require someone to do something they cannot do or, as it is usually put, ought implies can. This much we take to be uncontroversial. We argue that differences in architectures, goals and resources imply substantial differences in capacity, and that some of these differences are ineliminable. It follows that some differences in goals and architectural and computational resources matter at the normative level: they constrain what principles of normative epistemology can be used to describe and prescribe their behavior. As a result, we can expect there to be important epistemic differences between the way brains, individuals, and science work.
Robert Cummins presents a series of essays motivated by the following question: Is the mind a collection of beliefs and desires that respond to and condition our feeling and perceptual experiences, or is this just a natural way to talk about it? What sort of conceptual framework do we need to understand what is really going on in our brains?
Proponents of the dominant paradigm in evolutionary psychology argue that a viable evolutionary cognitive psychology requires that specific cognitive capacities be heritable and “quasi-independent” from other heritable traits, and that these requirements are best satisfied by innate cognitive modules. We argue here that neither of these requirements is needed in order to describe and explain how evolution shaped the mind.
The thesis that subsumption is sufficient for explanation is dying out, but the thesis that it is necessary is alive and well. It is difficult to attack this thesis: non-subsumptive counter-examples are declared incomplete, or mere promissory notes. No theory, it is thought, can be explanatory unless it resorts to subsumption at some point. In this paper I attack this thesis by describing a theory that (1) would explain every event it could describe, (2) does not explain by subsumption, and (3) is fundamental in that it is understood to be irreducible (hence there are no unstated laws waiting in the wings).
I've tried to argue that there is more to representational content than CRS can acknowledge. CRS is attractive, I think, because of its rejection of atomism, and because it is a plausible theory of targets. But those are philosophers' concerns. Someone interested in building a person needs to understand representation, because, as AI researchers have urged for some time, good representation is the secret of good performance. I have just gestured in the direction I think a viable theory of representation must take. I hope, however, to have created some advance sympathy for the gesture by distinguishing the problem of representation from the problem of targets on the one hand, and from the problem of truth-conditions for the attitudes on the other.
This paper considers two ways functions figure into scientific explanations: (i) via laws, where events are causally explained by subsuming those events under functional laws; and (ii) via designs, where capacities are explained by specifying the functional design of a system. We argue that a proper understanding of how functions figure into design explanations of capacities makes it clear why such functions are ill-suited to figure into functional-cum-causal law explanations of events, as those explanations are typically understood. We further argue that a proper understanding of how functions enter into design explanations of capacities enables us to show why two prominent objections to functionalism in the philosophy of mind, the argument from metaphysically necessary effects (Bennett, 2007; Rupert, 2006) and the causal exclusion argument (Kim, 1993, 1998; Malcolm, 1968), are misguided when interpreted as posing a threat to functional explanation in science across the board. If those arguments pose a threat at all, they pose it to instances of (i); however, a great number of the functional explanations we find in psychology, and the sciences generally, are instances of (ii).
This is a condensed version of the material in chapters 8-10 of Meaning and Mental Representation (MIT, 1989). It is an explanation and defence of a theory of content for the mind considered as a symbolic computational process. It is a view I abandoned shortly thereafter, when I abandoned symbolic computationalism as a viable theory of cognition.
I argue that Galileo regarded unaccelerated motion as requiring no cause to sustain it. In an inclined plane experiment, the cause ceases when the incline ceases. When the incline ceases, what ceases is acceleration, not motion. Hence, unaccelerated motion requires no cause to sustain it.
In this paper, I sketch a revision of Jonathan Bennett's "meaning-nominalist strategy" for explaining the conventional meanings of utterance-types. Bennett's strategy does not explain sentence-meaning by appeal to sub-sentential meanings, and hence cannot hope to yield a theory that assigns a meaning to every sentence. I revise the strategy to make it applicable to predication and identification. The meaning-convention for a term can then be used to fix its satisfaction conditions. Adapting a familiar trick of Tarski's, we can then determine an infinity of conventional meanings from a finite number of meaning-conventions.
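As a rough sketch of the Tarskian trick gestured at here (my illustration, not the paper's), suppose each primitive term's meaning-convention fixes a satisfaction condition; recursive composition then assigns a conventional meaning to each of the infinitely many sentences built from the finite vocabulary:

% Illustration only; assumes amsmath for \text.
% One convention per primitive predicate F fixes its satisfaction condition:
\[ x \ \text{satisfies}\ F \ \leftrightarrow\ \phi_F(x) \]
% Recursive clauses then project meanings onto arbitrarily complex sentences:
\[ \mathrm{Meaning}(F(t)) = \text{the condition that } \mathrm{den}(t) \ \text{satisfies}\ F, \qquad \mathrm{Meaning}(S_1 \wedge S_2) = \text{the condition that both conditions obtain} \]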
In “On Begging the Systematicity Question,” Wayne Davis criticizes the suggestion of Cummins et al. that the alleged systematicity of thought is not as obvious as is sometimes supposed, and hence not reliable evidence for the language of thought hypothesis. We offer a brief reply.