I defend the historical definition of "function" originally given in my Language, Thought and Other Biological Categories (1984a). The definition was not offered in the spirit of conceptual analysis but is more akin to a theoretical definition of "function". A major theme is that nonhistorical analyses of "function" fail to deal adequately with items that are not capable of performing their functions.
" Biosemantics " was the title of a paper on mental representation originally printed in The Journal of Philosophy in 1989. It contained a much abbreviated version of the work on mental representation in Language Thought and Other Biological Categories. There I had presented a naturalist theory of intentional signs generally, including linguistic representations, graphs, charts and diagrams, road sign symbols, animal communications, the "chemical signals" that regulate the function of glands, and so forth. But the term " biosemantics " (...) has usually been applied only to the theory of mental representation. Let me first characterize a more general class of theories called "teleological theories of mental content" of which biosemantics is an example. Then I will discuss the details that distinguish biosemantics from other naturalistic teleological theories. (shrink)
Written by one of today's most creative and innovative philosophers, Ruth Garrett Millikan, this book examines basic empirical concepts: how they are acquired, how they function, and how they have been misrepresented in the traditional philosophical literature. Millikan places cognitive psychology in an evolutionary context where human cognition is assumed to be an outgrowth of primitive forms of mentality, and assumed to have 'functions' in the biological sense. Of particular interest are her discussions of the nature of abilities as different from dispositions, her detailed analysis of the psychological act of reidentifying substances, and her critique of the language of thought for mental representation. In a radical departure from current philosophical and psychological theories of concepts, this book provides the first in-depth discussion of the psychological act of reidentification.
There are no "special sciences" in Fodor's sense. There is a large group of sciences, "historical sciences," that differ fundamentally from the physical sciences because they quantify over a different kind of natural or real kind, nor are the generalizations supported by these kinds exceptionless. Heterogeneity, however, is not characteristic of these kinds. That there could be an univocal empirical science that ranged over multiple realizations of a functional property is quite problematic. If psychological predicates name multiply realized functionalist properties, (...) then there is no single science dealing with these: human psychology, ape psychology, Martian psychology and robot psychology are necessarily different sciences. (shrink)
A list of groceries, Professor Anscombe once suggested, might be used as a shopping list, telling what to buy, or it might be used as an inventory list, telling what has been bought (Anscombe 1957). If used as a shopping list, the world is supposed to conform to the representation: if the list does not match what is in the grocery bag, it is what is in the bag that is at fault. But if used as an inventory list, the representation is supposed to conform to the world: if the list does not match what is in the bag, it is the list that is at fault. The first kind of representation, where the world is supposed to conform to the list, can be called "directive"; it represents or directs what is to be done. The second, where the list is supposed to conform to the world, can be called "descriptive"; it represents or describes what is the case. I wish to propose that there exist representations that face both these ways at once. With apologies to Dr. Dolittle, I call them pushmi-pullyu representations or PPRs.
Concepts are highly theoretical entities. One cannot study them empirically without committing oneself to substantial preliminary assumptions. Among the competing theories of concepts and categorization developed by psychologists in the last thirty years, the implicit theoretical assumption that what falls under a concept is determined by description has never been seriously challenged. I present a nondescriptionist theory of our most basic concepts, which include (1) stuffs (gold, milk), (2) real kinds (cat, chair), and (3) individuals (Mama, Bill Clinton, the Empire State Building). On the basis of something important that all three have in common, our earliest and most basic concepts of substances are identical in structure. The membership of each such category is a natural unit in nature, to which the concept does something like pointing, and it continues to point despite large changes in the properties the thinker represents the unit as having. For example, large changes can occur in the way a child identifies cats and in the things it is willing to call "cat" without affecting the extension of its word. The difficulty is to cash in the metaphor of "pointing" in this context. Having substance concepts need not depend on knowing words, but language interacts with substance concepts, completely transforming the conceptual repertoire. I will discuss how public language plays a crucial role in both the acquisition of substance concepts and their completed structure.
I give an analysis of how empirical terms do their work in communication and the gathering of knowledge that is fully externalist and that covers the full range of empirical terms. It rests on claims about ontology. A result is that armchair analysis fails as a tool for examining meanings of ‘basic’ empirical terms because their meanings are not determined by common methods or criteria of application passed from old to new users, by conventionally determined ‘intensions’. Nor do methods of application used by individual speakers constitute definitive reference-determining intensions for their idiolect terms or associated concepts. Conventional intensions of non-basic empirical terms ultimately rest on basic empirical concepts, so no empirical meaning is found merely ‘in the head’. I discuss the nature of lexical definition, why empirical meanings cannot ultimately be modelled as functions from possible worlds to extensions, and traps into which armchair analysis of meaning can lead us. A coda explains how ‘Swampman’ examples, as used against teleosemantic theories of content, illustrate such traps.
On Reading Signs: Some Differences between Us and The Others. If there are certain kinds of signs that an animal cannot learn to interpret, that might be for any of a number of reasons. It might be, first, because the animal cannot discriminate the signs from one another. For example, although human babies learn to discriminate human speech sounds according to the phonological structures of their native languages very easily, it may be that few if any other animals are capable of fully grasping the phonological structures of human languages. If an animal cannot learn to interpret certain signs it might be, second, because the decoding is too difficult for it. It could be, for example, that some animals are incapable of decoding signs that exhibit syntactic embedding, or signs that are spread out over time as opposed to over space. Problems of these various kinds might be solved by using another sign system, say, gestures rather than noises, or visual icons laid out in spatial order, or by separating out embedded propositions and presenting each separately. But a more interesting reason that an animal might be incapable of understanding a sign would be that it lacked mental representations of the necessary kind. It might be incapable of representing mentally what the sign conveys. When discussing what signs animals can understand or…
The positions of Brandom and Millikan are compared with respect to their common origins in the works of Wilfrid Sellars and Wittgenstein. Millikan takes more seriously the 'picturing' themes from Sellars and Wittgenstein. Brandom follows Sellars more closely in deriving the normativity of language from social practice, although there are also hints of a possible derivation from evolutionary theory in Sellars. An important claim common to Brandom and Millikan is that there are no representations without function or 'attitude'.
Suppose lightning strikes a dead tree in a swamp; I am standing nearby. My body is reduced to its elements, while entirely by coincidence (and out of different molecules) the tree is turned into my physical replica. My replica, The Swampman, moves into my house and seems to write articles on radical interpretation. No one can tell the difference.
By whatever general principles and mechanisms animal behavior is governed, human behavior control rides piggyback on top of the same or very similar mechanisms. We have reflexes. We can be conditioned. The movements that make up our smaller actions are mostly caught up in perception-action cycles following perceived Gibsonian affordances. Still, without doubt there are levels of behavior control that are peculiar to humans. Following Aristotle, tradition has it that what is added in humans is rationality ("rational soul"). Rationality, however, can be and has been characterized in many different ways. I am going to speculate about two different kinds of cognitive capacities that we humans seem to have, each of which is at least akin to rationality as Aristotle described it. The first I believe we share with many other animals, the second perhaps with none. Since this session of the conference on rational animals has been designated a "brainstorming" session, I will take philosopher's license, presenting no more than the softest sort of intuitive evidence for these ideas.
In his essay "Consumers Need Information: Supplementing Teleosemantics with an Input Condition" (this issue) Nicholas Shea argues, with support from the work of Peter Godfrey-Smith (1996), that teleosemantics, as David Papinau and I have articulated it, cannot explain why "content attribution can be used to explain successful behavior." This failure is said to result from defining the intentional contents of representations by reference merely to historically normal conditions for success of their "outputs," that is, of their uses by interpreting or (...) consuming mechanisms, bypassing the more traditional focus, of those who would naturalize intentional content, on causal or informational inputs. Shea proposes to "add an input condition to teleosemantics," requiring that simple representations must carry "correlational information." I am grateful to Shea for his paper, as it presents me with an opportunity to clarify two fairly central features of my position on intentional content, one of which seems to have been overlooked in the literature (Millikan 1993a), the other of which I have stated previously only in a confusing way (Millikan 2004, Chapters 3-4). The first clarification concerns the general form that I take explanation by reference to intentional states to have. The second concerns my description of "locally recurrent natural information," why this kind of information is needed in place of Shea's "correlational information" to explain what feeds simple representational systems, and why no reference to natural information is needed to account for the success of behaviors by reference to the truth of representations that motivate them. (shrink)
...a notion of 'common, public language' that remains mysterious... useless for any form of theoretical explanation... There is simply no way of making sense of this prong of the externalist theory of meaning and language, as far as I can see, or of any of the work in theory of meaning and philosophy of language that relies on such notions, a statement that is intended to cut rather a large swath. (Chomsky 1995, pp. 48-9) It is a striking fact that despite the constant reliance on some notion of 'community language' or 'abstract language,' there is virtually no attempt to explain what it might be. (Chomsky 1993, p. 39) ...either we must deprive the notion communication of all significance, or else we must reject the view that the purpose of language is communication. ...It is difficult to say what 'the purpose' of language is, except, perhaps, the expression of thought, a rather empty formulation. The functions of language are various. (Chomsky 1980, p. 230) I have yet to see a formulation that makes any sense of the position that "the essence of language is communication." (Chomsky 1980, p. 80; see also 1992b, p. 215)
At the start of Convention (1969) Lewis says that it is "a platitude that language is ruled by convention" and that he proposes to give us "an analysis of convention in its full generality, including tacit convention not created by agreement." Almost no clause, however, of Lewis's analysis has withstood the barrage of counterexamples over the years, and a glance at the big dictionary suggests why, for there are a dozen different senses listed there. Left unfettered, convention wanders freely from conventional wisdom through conventional medicine, conventions of art and "conventions of morality" to conventions of bidding in bridge. Surely it is unwise to try to fell these all with a single stone. Lewis's original goal, however, pursued further in (Lewis 1975), was to describe the conventionality of language, and this may be a more reasonable target.
"According to informational semantics, if it's necessary that a creature can't distinguish Xs from Ys, it follows that the creature can't have a concept that applies to Xs but not Ys." (Jerry Fodor, The Elm and the Expert, p.32).
"Paleontologists like to say that to a first approximation, all species are extinct (ninety- nine percent is the usual estimate). The organisms we see around us are distant cousins, not great grandparents; they are a few scattered twig-tips of an enormous tree whose branches and trunk are no longer with us." (p. 343-44). The historical life bush consists mainly in dead ends.
Sainsbury and Tye (2011) propose that, in the case of names and other simple extensional terms, we should substitute for Frege's second level of content—for his senses—a second level of meaning vehicle—words in the language of thought. I agree. They also offer a theory of atomic concept reference—their ‘originalist’ theory—which implies that people knowing the same word have the ‘same concept’. This I reject, arguing for a symmetrical rather than an originalist theory of concept reference, claiming that individual concepts are possessed only by individual people. Concepts are classified rather than identified across different people.
It's a sort of Möbius strip argument. Rather than circularly assuming what it should prove, it assumes one of the things Fodor says he has disproved. It assumes that the extensions of those concepts thought by some to be recognitional are in fact controlled by stereotypes. Why do I say that? Because Fodor assumes that what makes an instance of a concept a "good instance" is that it is an average instance, that it sports the properties statistically most commonly found among instances of that concept. But that the "good instances" are always the common instances is remotely plausible only if we take concepts to be organized by stereotypes. True, a goldfish is not an average or stereotypical fish, the nursing profession is not average for a male, and maleness is not average for a nurse. But there is surely nothing borderline about the fishiness of a goldfish nor, typically, about the maleness of a male nurse or the petness of a pet fish. Notice also that good examples of some kinds of things are very hard to find: good examples of the fallacy of accent, good examples of wild children, and (nowadays) good examples of scurvy are all hard to find. If good instances had to be instances that were average, including in respects having nothing to do with the point of the category being defined, and if recognitional concepts had to recognize by attending to average properties, then I suppose the recognitional ability defining the concept "sphere" would have to include the ability to tell whether a thing bounces!
Many students of pragmatics and child language have come to believe that in order to learn a language a child must first have a 'theory of mind,' a grasp that speakers mentally represent the content they would convey when they speak. This view is reinforced by the Gricean theory of communication, according to which speakers intend their words to cause hearers to believe or to do certain things and hearers must recognize these intentions if they are to comply. The view rests on an underlying assumption that learning language involves associating words with things (objects, kinds, events, properties and so forth) or with concepts of these, these associations being acquired, one by one, by observing the usage of others. Accomplishing this task is facilitated, it is thought, by engaging in joint attention with speakers who are attending to the things they are talking about as they talk, and joint attention requires an understanding that others have minds that represent things.
I sketch in miniature the whole of my work on the relation between language and thought. Previously I have offered close-ups of this terrain in various papers and books, and I reference them freely. But my main purpose here is to explain the relations among the parts, hoping this can serve as a short introduction to my work on language and thought for some, and for others as a clarification of the larger plan.
Brentano was surely mistaken, however, in thinking that bearing a relation to something nonexistent marks only the mental. Given any sort of purpose, it might not get fulfilled, hence might exhibit Brentano's relation, and there are many natural purposes, such as the purpose of one's stomach to digest food or the purpose of one's protective eye blink reflex to keep out the sand, that are not mental, nor derived from anything mental. Nor are stomachs and reflexes "of" or "about" anything. A reply might be, I suppose, that natural purposes are "purposes" only in an analogical sense, hence "fail to be fulfilled" only in an analogical way. They bear an analogy to things that have been intentionally designed by purposive minds, hence can fail to accomplish the purposes they analogically have. As such they also have only analogical "intentionality". Such a response begs the question, however, for it assumes that natural purposes are not purposes in the full sense exactly because they are not.