I present an account of mental representation based upon the ‘SINBAD’ theory of the cerebral cortex. If the SINBAD theory is correct, then networks of pyramidal cells in the cerebral cortex are appropriately described as representing, or more specifically, as modelling the world. I propose that SINBAD representation reveals the nature of the kind of mental representation found in human and animal minds, since the cortex is heavily implicated in these kinds of minds. Finally, I show how SINBAD neurosemantics can provide accounts of misrepresentation, equivocal representation, twin cases, and Frege cases.
John is currently thinking that the sun is bright. Consider his occurrent belief or judgement that the sun is bright. Its content is that the sun is bright. This is a truth-evaluable content (which shall be our main concern) because it is capable of being true or false. In virtue of what natural, scientifically accessible facts does John’s judgement have this content? To give the correct answer to that question, and to explain why John’s judgement and other contentful mental states have the contents they do in virtue of such facts, would be to naturalize mental content.
Millikan and Her Critics offers a unique critical discussion of Ruth Millikan's highly regarded, influential, and systematic contributions to philosophy of mind and language, philosophy of biology, epistemology, and metaphysics. These newly written contributions present discussion from some of the most important philosophers in the field today and include replies from Millikan herself.
Reductive, naturalistic psychosemantic theories do not have a good track record when it comes to accommodating the representation of kinds. In this paper, I will suggest a particular teleosemantic strategy to solve this problem, grounded in the neurocomputational details of the cerebral cortex. It is a strategy with some parallels to one that Ruth Millikan has suggested, but to which insufficient attention has been paid. This lack of attention is perhaps due to a lack of appreciation for the severity of the problem, so I begin by explaining why the situation is indeed a dire one. One of the main tasks for a naturalistic psychosemantic theory is to describe how the extensions of mental representations are determined. (Such a theory may also attempt to account for other aspects of the “meaning” of mental representations, if there are any.) Some mental representations, e.g. the concept of water, denote kinds (I shall be assuming this is non-negotiable). How is this possible? Unfortunately, I haven’t the space to canvass all the theories out there and show that each one fails to accommodate the representation of kinds, but I will point out the major types of problems that arise for the kinds of theories that, judging by the literature, are considered viable contenders.1 In general, the theories either attempt and fail to account for the representation of kinds, or they fall back on something like an intention to refer to a kind – not exactly the most auspicious move for a reductive theory. There are a number of problems that prevent non-teleosemantic theories from explaining how it is possible to represent kinds. A concept of a kind K must...
There are some exceptions, which we shall see below, but virtually all theories in psychology and cognitive science make use of the notion of representation. Arguably, folk psychology also traffics in representations, or is at least strongly suggestive of their existence. There are many different types of things discussed in the psychological and philosophical literature that are candidates for representation-hood. First, there are the propositional attitudes – beliefs, judgments, desires, hopes etc. (see Chapters 9 and 17 of this volume). If the propositional attitudes are representations, they are person-level representations – the judgment that the sun is bright pertains to John, not a subpersonal part of John. By contrast, the representations of edges in V1 of the cerebral cortex that neuroscientists talk about and David Marr’s symbolic representations of “zero-crossings” in early vision (Marr 1982) are at the “sub-personal” level – they apply to parts or states of a person (e.g. neural parts or computational states of the visual system). Another important distinction is often made among perceptual, cognitive, and action-oriented representations (e.g. motor commands). Another contrast lies between “stored representations” (e.g. memories) and “active representations” (e.g. a current perceptual state). Related to this is the distinction between “dispositional representations” and “occurrent representations.” Beliefs that are not currently being entertained are dispositional, e.g. your belief that the United States is in North America - no doubt you had this belief two minutes ago, but you were not consciously accessing it until you read this sentence. Occurrent representations, by contrast, are active, conscious thoughts or perceptions.
Which leads us to another important distinction:1 between conscious and non-conscious mental representations, once a bizarre-sounding distinction that has become familiar since Freud (see Chapter 4 of this volume). I mention these distinctions at the outset to give you some idea of the range of phenomena we will be considering, and to set the stage for our central “problem of representation”: what is a mental representation, exactly, and how do we go about deciding whether there are any? We know there are public representations of various kinds: words, maps, and pictures, among others...
The ability to predict is the most important ability of the brain. Somehow, the cortex is able to extract regularities from the environment and use those regularities as a basis for prediction. This is a most remarkable skill, considering that behaviourally significant environmental regularities are not easy to discern: they operate not only between pairs of simple environmental conditions, as traditional associationism has assumed, but among complex functions of conditions that are orders of complexity removed from raw sensory inputs. We propose that the brain's basic mechanism for discovering such complex regularities is implemented in the dendritic trees of individual pyramidal cells in the cerebral cortex. Pyramidal cells have 5–8 principal dendrites, each of which is capable of learning nonlinear input-to-output transfer functions. We propose that each dendrite is trained, in learning its transfer function, by all the other principal dendrites of the same cell. These dendrites teach each other to respond to their separate inputs with matching outputs. Exposed to different but related information about the sensory environment, principal dendrites of the same cell tune to functions over environmental conditions that, while different, are correlated. As a result, the cell as a whole tunes to the source of the regularities discovered by the cooperating dendrites, creating a new representation. When organized into feed-forward/feedback layers, pyramidal cells can build their discoveries on the discoveries of other cells, gradually uncovering nature's hidden order. The resulting associative network is powerful enough to meet a troubling traditional objection to associationism: that it is too simple an architecture to implement rational processes.
The central idea is that the cerebral cortex is a model building machine, where regularities in the world serve as templates for the models it builds. First it is shown how this idea can be naturalized, and how the representational contents of our internal models depend upon the evolutionarily endowed design principles of our model building machine. Current neuroscience suggests a powerful form that these design principles may take, allowing our brains to uncover deep structures of the world hidden behind surface sensory stimulation: the individuals, kinds, and properties that form the objects of human perception and thought. It is then shown how this account solves various problems that arose for previous attempts at naturalizing intentionality, and also how it supports rather than undermines folk psychology. As in the parable of the blind men and the elephant, the seemingly unrelated pieces of earlier theories (information, causation, isomorphism, success, and teleology) emerge as different aspects of the evolved model-building mechanism that explains the intentional features of our kind of mind.
There is good evidence that the cerebral cortex is the seat of the human mind, so an understanding of representation in the cortex could help us understand the nature of mental representation. I argue that the cortex represents in the way that models do; it is an evolutionarily designed model-building machine. The cortex belongs to a general class of model-building machines that produce isomorphisms to structures in the environment by interacting with them. The representational content of a particular model produced by such a machine is determined by the operational principles according to which the machine was designed, and the history of machine-environment interaction that resulted in the production of that model. I explore the possibility that the operational principles according to which the cerebral cortex was designed, i.e. aspects of its causal profile that were selected for, are those described by the SINBAD theory. The SINBAD theory implies that it is the biological function of the cortex to make its constituent neurons come to interact in a way that is isomorphic to regularities structured around "sources of correlation". In the context of this isomorphism, it is the function of a particular SINBAD cell to correspond to a particular source of correlation, the one that is responsible for that cell's tuning. In other words, the cortex builds models of environmental regularities structured around sources of correlation. Understanding mental representation as cortical representation of this kind allows us to explain a number of important and/or puzzling features of mental intentionality as we know it: the possibility of equivocation, misrepresentation, empty representation, and twin cases, the relation between concepts and inferential roles, how it is possible for us to acquire objective concepts and beliefs via our subjective and idiosyncratic senses, and the distinction between usefulness and truth.
I conclude by outlining an account of the occurrent propositional attitudes as non-representational uses of a SINBAD model that has been built up through experience. Non-representational use is cashed out in terms of causal role. Together with an account of the non-occurrent attitudes, this yields an understanding of the nature of psychological explanation.
We propose that a top priority of the cerebral cortex must be the discovery and explicit representation of the environmental variables that contribute as major factors to environmental regularities. Any neural representation in which such variables are represented only implicitly (thus requiring extra computing to use them) will make the regularities more complex and therefore more difficult, if not impossible, to learn. The task of discovering such important environmental variables is not an easy one, since their existence is only indirectly suggested by the sensory input patterns the cortex receives – these variables are “hidden.” We present a candidate computational strategy for (1) discovering regularity-simplifying environmental variables, (2) learning the regularities, and (3) using regularities in perceptual and decision-making tasks. The SINBAD computational model discovers useful environmental variables through a search for different, but nevertheless highly correlated functions of any kind over non-overlapping subsets of the known variables, this being indicative of some important environmental variable that is responsible for the correlation. We suggest that such a search is performed in the neocortex by the dendritic trees of individual pyramidal cells. According to the SINBAD model, the basic function of each pyramidal cell is (1) to discover and represent one of the regularity-simplifying environmental variables, and (2) to learn to infer the state of its variable from the states of other variables, represented by other pyramidal cells. A network of such cells – each cell just attending to representation of its variable – can function as a sophisticated and useful inferential model of the outside world.
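The search strategy described in this abstract can be illustrated with a minimal numerical sketch. Everything below is my own simplification for illustration, not code from the paper: two linear "dendrites" stand in for a pyramidal cell's principal dendrites, each sees a non-overlapping input subset, and each is trained only to match the other's output, with a unit-norm constraint standing in for whatever biological mechanism prevents the trivial all-zero solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hidden "source of correlation" (e.g. membership in a kind), never given directly.
hidden = rng.integers(0, 2, n).astype(float)

# Two non-overlapping sensory input subsets. In each, only the first feature
# reflects the hidden variable; the second is pure, independent noise.
x1 = np.column_stack([hidden + 0.3 * rng.standard_normal(n), rng.standard_normal(n)])
x2 = np.column_stack([2.0 * hidden + 0.3 * rng.standard_normal(n), rng.standard_normal(n)])

# One linear "dendrite" per subset (the model allows nonlinear transfer functions).
w1 = rng.standard_normal(2)
w2 = rng.standard_normal(2)
w1 /= np.linalg.norm(w1)
w2 /= np.linalg.norm(w2)

lr = 0.01
for _ in range(5):                       # a few passes over the data
    for t in range(n):
        y1, y2 = w1 @ x1[t], w2 @ x2[t]
        # Each dendrite moves toward the other's output: they "teach each other".
        w1 += lr * (y2 - y1) * x1[t]
        w2 += lr * (y1 - y2) * x2[t]
        # Unit-norm constraint blocks the degenerate solution w1 = w2 = 0.
        w1 /= np.linalg.norm(w1)
        w2 /= np.linalg.norm(w2)

# The cell's summed output now tracks the hidden variable it was never shown.
cell_output = x1 @ w1 + x2 @ w2
corr = abs(np.corrcoef(cell_output, hidden)[0, 1])
print(f"correlation with hidden variable: {corr:.2f}")
```

Because the two input subsets are correlated only through the hidden variable, the matching pressure leaves the dendrites nowhere to agree except on functions of that variable; that is the sense in which the cell "discovers" a regularity-simplifying environmental variable.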
In this wide-ranging book, Jesse Prinz attempts to resuscitate a strand of empiricism continuous with the classical thesis that all Ideas are imagistic. His name for this strand is “concept empiricism,” and he formulates it as follows: “all (human) concepts are copies or combinations of copies of perceptual representations” (p. 108). In the process of defending concept empiricism, Prinz is careful not to commit himself to a number of other theses commonly associated with empiricism more broadly construed. For example, he is prepared to accept that there are innate concepts and/or knowledge, denies that what a concept means consists in the experiences that prompt us to use or create it, implies that cognitive architecture is not associationist, and offers no opinion on whether all knowledge claims must be justified by sensory experience. Those who await a full resurrection will have to wait a little longer – but in the meantime, Prinz’s reconstructive surgery will tide you over. Although it falls short of miraculous, it is still pretty impressive. Prinz has brought a vast knowledge of the literature to bear on his project, from philosophy, psychology, and neuroscience. In fact, this book would serve as an excellent entrée for the philosopher into the scientific aspects of concept research, or for the scientist into philosophical concerns. Prinz writes with exemplary clarity, and wields his theory with aplomb in answering the many objections that have been raised against imagism. To take just one example, anyone who doubts that imagism can accommodate the large scope of human concepts would be well advised to read Chapter 7, which contains a wealth of ingenious suggestions for how imagism might handle difficult cases, including lofty concepts such as cause and truth. His discussions of nativism (Chapter 8) and compositionality are also particularly illuminating.
The central theoretical construct in Prinz’s theory of concepts is the “proxytype,” a group of imagistic/perceptual representations...
Externalist theories of representation (including most naturalistic psychosemantic theories) typically require some relation to obtain between a representation and what it represents. As a result, empty concepts cause problems for such theories. I offer a naturalistic and externalist account of empty concepts that shows how they can be shared across individuals. On this account, the brain is a general-purpose model-building machine, where items in the world serve as templates for model construction. Shareable empty concepts arise when there is a common template for different individuals' concepts, but where this template is not what the concept denotes.
What makes a mental representation about what it's about? The majority view among naturalists seems to be that representation has something to do with causation, or information, or correlation, or some other related notion. But such "information-based" views (e.g. Fodor, Prinz, Stalnaker, Usher, Mandik, Tye, and lots of other people who gesture towards this kind of theory1) cannot accommodate representation of the distal.
First I should clarify my thesis. When I say the mind starts off as a blank slate, I’m saying that it’s devoid of substantive concepts or ideas, that is, of non-logical concepts or ideas. Some examples of substantive concepts are: the concept of a cat, the concept of a quark, the concept of being square, and the concept of heaviness.
A representationalist about qualia takes qualitative states to be aspects of the intentional content of sensory or sensory-like representations. When you experience the redness of an apple, they say, your visual system is merely representing that there is a red surface at such-and-such a place in front of you. And when you experience a red afterimage, your visual system is representing something similar. Your sensory state does not literally have an intrinsic quality of phenomenal redness, just as you do not have a hairy mental state when you occurrently believe that Santa Claus is hairy. Judging by the literature, it is quite plausible to claim that the nature of occurrent beliefs is exhausted by their representational characteristics.1 Why is it that this “pure representation” ploy is so much less plausible in the case of sensory states? Typically, the reason given is that belief states are not qualitative while sensory states are, as revealed by introspection. Qualitativity, it is further maintained, cannot be purely representational – this is the intuition the representationalist must fight. In this paper I want to focus on a feature of sensory states, distinct from but related to their qualitativity, that encourages the anti-representationalist to object to the representational thesis. I shall call this feature “inhereness.” Instances of sensory...
In this paper, I will introduce you to a new theory of mental representation, emphasizing two important features. First, the theory coheres very well with folk psychology; better, I believe, than its competitors (e.g. Cummins, 1996; Dretske, 1988; Fodor, 1987; and Millikan, 1989, with which it has the most in common), though I will do little by way of direct comparison in this paper. Second, it receives support from current neuroscience. While other theories may be consistent with current neuroscience, none that I know of actually receives some degree of confirmation from it. There are many different kinds of representations. Some examples are maps, words, meter and gauge readings, diagrams, pictures, scale models, computer simulations, blueprints, charts, musical notation, smoke signals, semaphore, and computer data structures. Qua representations, they all possess intentionality, or aboutness: maps are about places, most words are about the entities they refer to, meters and gauges are about the quantities they measure, etc. However, it seems they have little in common beyond this aboutness (Millikan, 1984, p. 85). Therefore we should be open to the possibility that the aboutness of different representations is ultimately to be explained in different ways. It is becoming increasingly popular to understand the aboutness of a large class of these representations in terms of function.1 For example, a tire gauge represents one of the properties that it indicates or carries information about, namely air pressure. However, it also carries information about other quantities. If the pressure and volume of the tire are kept constant, the tire gauge will indicate the temperature of the air inside the tire, and if the temperature and pressure are kept constant, the gauge will indicate the tire volume. However, although the tire gauge indicates these things, it does not represent them. It only...
Stephen Mumford's Dispositions1 is an interesting and thought-provoking addition to a recent surge of publications on the topic.2 Dispositions have not been such a hot topic since the heyday of behaviourism. But as Mumford argues in his first chapter, the importance of dispositions to contemporary philosophy can hardly be overestimated. Dispositions are fundamental to causal role functionalism in the philosophy of mind, response-dependent truth-conditional accounts of moral and other concepts,3 capacity accounts of concepts more generally,4 theories of belief, the compatibilist conception of free will, the philosophy of matter, probability (propensities), and more. So it is natural that conceptual and ontological issues about dispositions have come again to the fore. The only surprise is that it's taken so long.
Representational theories propose a set of sufficient conditions for a state to be phenomenally conscious. It turns out that insofar as these conditions have been worked out in detail, the autonomic nervous system (ANS) ought to be conscious - but of course it’s not. In this paper, we’ll describe only a tiny portion of the complexities of the ANS, using these to counterexample only a single theory of phenomenal consciousness, namely, Fred Dretske’s. But we think the ANS comparison strategy is a fruitful one in general, and we hope to convince you of this too.
At first, Bloom's theory appears inimical to empiricism, since he credits very young children with highly sophisticated cognitive resources (e.g., a theory of mind and a belief that real kinds have essences), and he also attacks the empiricist's favoured learning theory, namely, associationism. We suggest that, on the contrary, the empiricist can embrace much of what Bloom says.