In the beginning, there was the D-N (deductive-nomological) model of explanation, articulated by Hempel and Oppenheim (1948). According to the D-N model, scientific explanation is subsumption under natural law. Individual events are explained by deducing them from laws together with initial conditions (or boundary conditions), and laws are explained by deriving them from other, more fundamental laws, as, for example, the simple pendulum law is derived from Newton's laws of motion.
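To fix ideas, here is the D-N schema alongside the textbook small-angle derivation of the pendulum law from Newton's second law; the derivation is a standard illustration supplied for concreteness, not one drawn from Hempel and Oppenheim's text.

```latex
% D-N schema: the explanandum E is deduced from laws and conditions.
\[
\underbrace{L_1,\dots,L_n}_{\text{laws}},\;
\underbrace{C_1,\dots,C_k}_{\text{initial conditions}}
\;\vdash\; E
\]
% Law-explanation: Newton's second law along the arc of a pendulum of
% length $\ell$ gives $m\ell\ddot{\theta} = -mg\sin\theta$; for small
% oscillations $\sin\theta \approx \theta$, so
\[
\ddot{\theta} = -\frac{g}{\ell}\,\theta
\quad\Longrightarrow\quad
T = 2\pi\sqrt{\ell/g},
\]
% the simple pendulum law, derived from the more fundamental mechanics.
```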
As a procedure, reflective equilibrium (RE) is simply a familiar kind of standard scientific method with a new name. A theory is constructed to account for a set of observations. Recalcitrant data may be rejected as noise or explained away as the effects of interference of some sort. Recalcitrant data that cannot be plausibly dismissed force emendations in theory. What counts as a plausible dismissal depends, among other things, on the going theory, as well as on background theory and on knowledge that may be relevant to understanding the experimental design that is generating the observations, including knowledge of the apparatus and observation conditions. This sort of mutual adjustment between theory and data is a familiar feature of scientific practice. Whatever authority RE seems to have comes, I think, from a tacit or explicit recognition that it has the same form as this familiar sort of scientific inference. One way to see the rationale underlying this procedure in science is to focus on prediction. Think of prediction as a matter of projecting what is known onto uncharted territory. To do this, you need a vehicle—a theory—that captures some invariant or pattern in what is known so that you can project it onto the unknown. How convincing the projection is depends on two factors: how sure one is of the observational base, and how sure one is that the theory gets the invariants right. The two factors are not independent, of course. One's confidence in the observational base will be affected by how persuasively the theory identifies and dismisses noise; one's confidence in the theory, on the other hand, will depend on one's confidence in the observations it takes seriously. Prediction is important as a test of theory precisely because verified predictions seem to show that the theory has correctly captured the general in the particular, that it has got the drift of the observational evidence in which our confidence is ultimately grounded.
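The mutual-adjustment loop described above can be caricatured computationally. A minimal sketch, in which the linear "theory," the synthetic observations, and the residual threshold are all my own choices rather than anything in the paper: fit the theory to the data, dismiss as noise only what the current theory renders implausible, refit, and stop at equilibrium.

```python
import numpy as np

def reflective_equilibrium(x, y, z_thresh=2.5, max_rounds=10):
    """Toy mutual adjustment: fit a line (the 'theory'), dismiss points
    the current theory makes implausible (the 'noise'), refit, and
    repeat until theory and retained data are in equilibrium."""
    keep = np.ones(len(x), dtype=bool)
    for _ in range(max_rounds):
        coeffs = np.polyfit(x[keep], y[keep], deg=1)    # current theory
        residuals = y - np.polyval(coeffs, x)           # theory-relative error
        scale = residuals[keep].std() + 1e-9            # floor avoids zero scale
        new_keep = np.abs(residuals) < z_thresh * scale # plausible data
        if (new_keep == keep).all():                    # equilibrium reached
            return coeffs, keep
        keep = new_keep
    return coeffs, keep

# Example: a linear pattern with one recalcitrant observation.
x = np.arange(10.0)
y = 3 * x + 1
y[4] += 40                     # interference of some sort
theory, retained = reflective_equilibrium(x, y)
print(theory, retained)        # slope ~3, intercept ~1; point 4 dismissed
```

Note how the standard of "plausible dismissal" is itself theory-relative here: the threshold applies to residuals computed from the going fit.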
Neo-teleology is the two-part thesis that, e.g., (i) we have hearts because of what hearts are for: hearts are for blood circulation, not the production of a pulse, so hearts are there--animals have them--because their function is to circulate the blood, and (ii) that (i) is explained by natural selection: traits spread through populations because of their functions. This paper attacks this popular doctrine. The presence of a biological trait or structure is not explained by appeal to its function. To suppose otherwise is to trivialize natural selection.
In this paper, we introduce a novel difficulty for teleosemantics, viz., its inability to account for what we call unexploited content—content a representation has, but which the system that harbors it is currently unable to exploit. In section two, we give a characterization of teleosemantics. Since our critique does not depend on any special details that distinguish the variations in the literature, the characterization is broad, brief and abstract. In section three, we explain what we mean by unexploited content, and argue that any theory of content adequate to ground representationalist theories in cognitive science must allow for it. In section four, we show that teleosemantic theories of the sort we identify in section two cannot accommodate unexploited content, and are therefore unacceptable if intended as attempts to ground representationalist cognitive science. Finally, in section five, we speculate that the existence and importance of unexploited content has likely been obscured by a failure to distinguish representation from indication, and by a tendency to think of representation as reference.
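A toy example may make "unexploited content" vivid. Everything below (the map, the consumer, the names) is a hypothetical construction of mine, not the authors': the stored representation carries metric information, but the system's only consumer exploits bare connectivity, so the metric content goes unexploited.

```python
# Toy illustration of unexploited content (my own construction):
# a stored map carries metric information, but the system's only
# consumer exploits bare connectivity. The metric content is there,
# yet the system currently cannot exploit it.

MAP = {  # node -> (x, y) coordinates: part of the representation's content
    "nest": (0.0, 0.0), "food": (3.0, 4.0), "water": (6.0, 0.0),
}
EDGES = {("nest", "food"), ("food", "water")}

def can_reach(a, b):
    """The system's sole consumer: exploits only connectivity."""
    frontier, seen = {a}, set()
    while frontier:
        node = frontier.pop()
        seen.add(node)
        frontier |= {y for x, y in EDGES if x == node and y not in seen}
        frontier |= {x for x, y in EDGES if y == node and x not in seen}
    return b in seen

print(can_reach("nest", "water"))   # True: connectivity is exploited
# MAP's coordinates fix distances (e.g., nest->food = 5.0), but no
# consumer computes them: that content is unexploited.
```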
Donald Davidson's "Truth and Meaning" revolutionized our conception of how truth and meaning are related (Davidson). In that famous article, Davidson put forward the bold conjecture that meanings are satisfaction conditions, and that a Tarskian theory of truth for a language is a theory of meaning for that language. In "Truth and Meaning," Davidson proposed only that a Tarskian truth theory is a theory of meaning. But in "Theories of Meaning and Learnable Languages," he argued that the finite base of a Tarskian theory, together with the now familiar combinatorics, would explain how a language with unbounded expressive capacity could be learned with finite means (Davidson). This certainly seems to imply that learning a language is, in part at least, learning a Tarskian truth theory for it, or, at least, learning what is specified by such a theory. Davidson was cagey about committing to the view that meanings actually are satisfaction conditions, but subsequent followers had no such scruples. We can sum this up in a trio of claims: Davidson's Conjecture: (1) A theory of meaning for L is a truth-conditional semantics for L. (2) To know the meaning of an expression in L is to know a satisfaction condition for that expression. (3) Meanings are satisfaction conditions. For the most part, it will not matter in what follows which of these claims is at stake. I will simply take the three to be different ways of formulating what I will call Davidson's Conjecture (or sometimes just The Conjecture). Davidson's Conjecture was a very bold conjecture. I think we are now in a...
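The "finite base plus combinatorics" point is easy to exhibit in miniature. The following toy Tarski-style truth definition is my own illustration (the lexicon, the connectives, and the tuple encoding are all assumptions): a finite reference and extension assignment plus two recursive clauses suffice to evaluate unboundedly many sentences, which is the engine of the learnability argument.

```python
# Toy Tarskian truth theory: finite base + recursive combinatorics.
# Sentences are nested tuples, e.g. ('and', S1, ('not', S2)).
# All names and predicates here are illustrative assumptions.

REFERENCE = {"John": "john", "Mary": "mary"}        # finite base: names
EXTENSION = {"loves": {("john", "mary")}}           # finite base: predicates

def true_in_model(sentence):
    """Recursive truth clauses: the combinatorics that project the
    finite base onto an unbounded set of sentences."""
    op = sentence[0]
    if op == "not":
        return not true_in_model(sentence[1])
    if op == "and":
        return true_in_model(sentence[1]) and true_in_model(sentence[2])
    pred, a, b = sentence                            # atomic case
    return (REFERENCE[a], REFERENCE[b]) in EXTENSION[pred]

print(true_in_model(("loves", "John", "Mary")))                   # True
print(true_in_model(("not", ("loves", "Mary", "John"))))          # True
print(true_in_model(("and", ("loves", "John", "Mary"),
                            ("loves", "Mary", "John"))))          # False
```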
What are the prospects for a cognitive science of meaning? As stated, we think this question is ill posed, for it invites the conflation of several importantly different semantic concepts. In this paper, we want to distinguish the sort of meaning that is an explanandum for cognitive science—something we are going to call meaning—from the sort of meaning that is an explanans in cognitive science—something we are not going to call meaning at all, but rather content. What we are going to call meaning is paradigmatically a property of linguistic expressions or acts: what one's utterance or sentence means, and what one means by it. What we are going to call content is a property of, among other things, mental representations and indicator signals. We will argue that it is a mistake to identify meaning with content, and that, once this is appreciated, some serious problems emerge for grounding meaning in the sorts of content that cognitive science is likely to provide.
It has been commonplace in epistemology since its inception to idealize away from computational resource constraints, i.e., from the constraints of time and memory. One thought is that a kind of ideal rationality can be specified that ignores the constraints imposed by limited time and memory, and that actual cognitive performance can be seen as an interaction between the norms of ideal rationality and the practicalities of time and memory limitations. But a cornerstone of naturalistic epistemology is that normative assessment is constrained by capacities: you cannot require someone to do something they cannot do or, as it is usually put, ought implies can. This much we take to be uncontroversial. We argue that differences in architectures, goals and resources imply substantial differences in capacity, and that some of these differences are ineliminable. It follows that some differences in goals and architectural and computational resources matter at the normative level: they constrain what principles of normative epistemology can be used to describe and prescribe the behavior of the systems in question. As a result, we can expect there to be important epistemic differences between the way brains, individuals, and science work.
It is commonly supposed that evolutionary explanations of cognitive phenomena involve the assumption that the capacities to be explained are both innate and modular. This is understandable: independent selection of a trait requires that it be both heritable and largely decoupled from other 'nearby' traits. Cognitive capacities realized as innate modules would certainly satisfy these constraints. A viable evolutionary cognitive psychology, however, requires neither extreme nativism nor modularity, though it is consistent with both. In this paper, we seek to show that rather weak assumptions about innateness and modularity are consistent with evolutionary explanations of cognitive capacities. Evolutionary pressures can affect the degree to which the development of a capacity is canalized by biasing acquisition/learning in ways that favor development of concepts and capacities that proved adaptive to an organism's ancestors.
Proponents of the dominant paradigm in evolutionary psychology argue that a viable evolutionary cognitive psychology requires that specific cognitive capacities be heritable and "quasi-independent" from other heritable traits, and that these requirements are best satisfied by innate cognitive modules. We argue here that neither of these is required in order to describe and explain how evolution shaped the mind.
The thesis that subsumption is sufficient for explanation is dying out, but the thesis that it is necessary is alive and well. It is difficult to attack this thesis: non-subsumptive counter-examples are declared incomplete, or mere promissory notes. No theory, it is thought, can be explanatory unless it resorts to subsumption at some point. In this paper I attack this thesis by describing a theory that (1) would explain every event it could describe, (2) does not explain by subsumption, and (3) is fundamental in that it is understood to be irreducible (hence there are no unstated laws waiting in the wings).
This paper is about two kinds of mental content and how they are related. We are going to call them representation and indication. We will begin with a rough characterization of each. The differences, and why they matter, will, hopefully, become clearer as the paper proceeds.
In “On Begging the Systematicity Question,” Wayne Davis criticizes the suggestion of Cummins et al. that the alleged systematicity of thought is not as obvious as is sometimes supposed, and hence not reliable evidence for the language of thought hypothesis. We offer a brief reply.
Haugeland doesn't have what I would call a theory of mental representation. Indeed, it isn't clear that he believes there is such a thing. But he does have a theory of intentionality and a correlative theory of objectivity, and it is this material that I will be discussing in what follows. It will facilitate the discussion to have at hand some distinctions and accompanying terminology I introduced in Representations, Targets and Attitudes (Cummins, 1996; RTA hereafter). Couching the discussion in these terms will, I hope, help to identify points of agreement and disagreement between Haugeland and myself. In RTA, I distinguished between the target a representation has on a given occasion of its application, and its content. RTA takes representation deployment to be the business of intenders: mechanisms whose business it is to represent some particular class of targets. Thus, on standard stories about speech perception, there is a mechanism (called a parser) whose business it is to represent the phrase structure of the linguistic input currently being processed. When this intender passes a representation R to the consumers of its products, those consumers will take R to be a representation of the phrase structure of the current input. There is no explicit vocabulary to mark the target-content distinction in ordinary language. Expressions like "what I referred to," "what I meant," and the like, are ambiguous. Sometimes they mean targets, sometimes contents. Consider the following dialogue.
The background hypothesis of this essay is that psychological phenomena are typically explained, not by subsuming them under psychological laws, but by functional analysis. Causal subsumption is an appropriate strategy for explaining changes of state, but not for explaining capacities, and it is capacities that are the central explananda of psychology. The contrast between functional analysis and causal subsumption is illustrated, and the background hypothesis supported, by a critical reassessment of the motivational psychology of Clark Hull. I argue that Hull's work makes little sense construed along the subsumptivist lines he advocated himself, but emerges as both interesting and methodologically sound when construed as an exercise in the sort of functional analysis featured in contemporary cognitive science.
A viable evolutionary cognitive psychology requires that specific cognitive capacities be (a) heritable and (b) 'quasi-independent' from other heritable traits. They must be heritable because there can be no selection for traits that are not. They must be quasi-independent from other heritable traits, since adaptive variations in a specific cognitive capacity could have no distinctive consequences for fitness if effecting those variations required widespread changes in other unrelated traits and capacities as well. These requirements would be satisfied by innate cognitive modules, as the dominant paradigm in evolutionary cognitive psychology assumes. However, those requirements would also be satisfied by heritable learning biases, perhaps in the form of architectural or chronotopic constraints, that operated to increase the canalization of specific cognitive capacities in the ancestral environment (Cummins and Cummins 1999). As an organism develops, cognitive capacities that are highly canalized as the result of heritable learning biases might result in an organism that is behaviourally quite similar to an organism whose innate modules come on line as the result of various environmental triggers. Taking this possibility seriously is increasingly important as the case against innate cognitive modules becomes increasingly strong.
In response to Michael Morris, I attempt to refute the crucial second premise of the argument, which states that the formality condition cannot be satisfied “non-stipulatively” in computational systems. I defend the view of representation urged in Meaning and Mental Representation against the charge that it makes content stipulative and therefore irrelevant to the explanation of cognition. Some other reservations are expressed.
The current debate over systematicity concerns the formal conditions a scheme of mental representation must satisfy in order to explain the systematicity of thought. The systematicity of thought is assumed to be a pervasive property of minds, and can be characterized (roughly) as follows: anyone who can think T can think systematic variants of T, where the systematic variants of T are found by permuting T's constituents. So, for example, it is an alleged fact that anyone who can think the thought that John loves Mary can think the thought that Mary loves John, where the latter thought is a systematic variant of the former.
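As a rough operational gloss (my own simplification, modeling thoughts as subject–relation–object triples, which is not the paper's notation): systematic variants arise by permuting a thought's constituents while holding its relational constituent in place.

```python
from itertools import permutations

def systematic_variants(thought):
    """All permutations of a thought's nominal constituents, keeping the
    relational constituent in verb position. Purely a toy: thoughts are
    modeled as (subject, relation, object) triples."""
    subj, rel, obj = thought
    return [(a, rel, b) for a, b in permutations([subj, obj], 2)
            if (a, rel, b) != thought]

print(systematic_variants(("John", "loves", "Mary")))
# [('Mary', 'loves', 'John')]
```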
I.1. Two reasons for studying inference. Inference is studied for two distinct reasons: for its bearing on justification and for its bearing on learning. By and large, philosophy has focused on the role of inference in justification, leaving its role in learning to psychology and artificial intelligence. This difference of role leads to a difference of conception. An inference-based theory of learning does not require a conception of inference according to which a good inference is one that justifies its conclusion, whereas, obviously, an inference-based theory of justification does require such a conception. Because of its focus on normative issues of justification, philosophy has taken a retrospective approach to inference, whereas a focus on learning naturally leads to a prospective approach. A focus on learning leads us to ask, "Given what is known, what should be inferred? How can what is known lead, via inference, to new knowledge?" A focus on justification has led philosophers to concentrate instead on a retrospective question: "Given a belief, can it be validly inferred from what is known? How can what is known justify, via inference, a new belief?" Thus, for philosophy, inference can be regarded as permissive: one needn't worry about what to infer, only about whether what has been arrived at somehow or other is or can be inferentially justified. A theory of learning, on the other hand, requires a conception of inference that is directive, for the problem of inference-based learning is precisely the problem of what to infer.
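The directive/permissive contrast can be put in computational terms. A minimal sketch under assumptions of my own (Horn-style rules, toy facts, none of it from the paper): a directive engine prospectively forward-chains from what is known to everything the rules license, while a permissive checker merely asks, retrospectively, whether a given belief can be delivered from what is known.

```python
# Toy contrast: directive vs. permissive inference over Horn-style rules.
# Rules map frozensets of premises to a conclusion; all facts and rules
# here are illustrative assumptions.

RULES = {
    frozenset({"rained"}): "ground_wet",
    frozenset({"ground_wet", "cold"}): "ground_icy",
}

def directive_closure(known):
    """Learning: prospectively infer everything the rules license."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES.items():
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def permissively_justified(belief, known):
    """Justification: retrospectively check a belief, arrived at somehow
    or other, against what is known."""
    closure = directive_closure(known)
    return belief in known or any(
        conclusion == belief and premises <= closure
        for premises, conclusion in RULES.items())

facts = {"rained", "cold"}
print(directive_closure(facts))                     # adds ground_wet, ground_icy
print(permissively_justified("ground_icy", facts))  # True
```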