Introduction: representationalism

Most theorists of cognition endorse some version of representationalism, which I will understand as the view that the human mind is an information-using system, and that human cognitive capacities are representational capacities. Of course, notions such as ‘representation’ and ‘information-using’ are terms of art that require explication. As a first pass, representations are “mediating states of an intelligent system that carry information” (Markman and Dietrich 2001, p. 471). They have two important features: (1) they are physically realized, and so have causal powers; (2) they are intentional; in other words, they have meaning or representational content. This presumes a distinction between a representational vehicle—a physical state or structure that has causal powers and is responsible for producing behavior—and its content. Consider the following characterization of a device that computes the addition function. Readers will recognize the similarity t…
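The vehicle/content distinction the abstract draws can be pictured with a toy sketch (a hypothetical illustration, not the characterization the original paper goes on to give): the "vehicle" is a bit-string state manipulated by purely syntactic transitions, while the numbers it stands for are its content under an interpretation mapping.

```python
# Toy illustration of the vehicle/content distinction for an adding device.
# Vehicles: '0'/'1' character strings, manipulated by a purely syntactic
# ripple-carry procedure. Contents: the numbers those strings represent
# under an interpretation mapping.

def interpret(bits: str) -> int:
    """Interpretation mapping: vehicle (bit string) -> content (number)."""
    return int(bits, 2)

def add_device(a: str, b: str) -> str:
    """Causal transitions defined purely over vehicles: a ripple-carry
    adder that shuffles '0'/'1' characters with no appeal to numbers."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)
    carry, out = "0", []
    for x, y in zip(reversed(a), reversed(b)):
        ones = [x, y, carry].count("1")
        out.append("1" if ones % 2 else "0")
        carry = "1" if ones >= 2 else "0"
    if carry == "1":
        out.append("1")
    return "".join(reversed(out))

# The device manipulates vehicles, yet it computes *addition* only relative
# to the interpretation that assigns the strings numerical contents.
out = add_device("101", "011")
assert interpret(out) == interpret("101") + interpret("011")
```

The point of the sketch is that nothing inside `add_device` mentions numbers: the same physical/syntactic transitions count as addition only under the interpretation supplied by `interpret`.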
This paper sets out a view about the explanatory role of representational content and advocates one approach to naturalising content – to giving a naturalistic account of what makes an entity a representation and in virtue of what it has the content it does. It argues for pluralism about the metaphysics of content and suggests that a good strategy is to ask the content question with respect to a variety of predictively successful information processing models in experimental psychology and cognitive neuroscience; and hence that data from psychology and cognitive neuroscience should play a greater role in theorising about the nature of content. Finally, the contours of the view are illustrated by drawing out and defending a surprising consequence: that individuation of vehicles of content is partly externalist.
Much of the philosophical work on perception has focused on vision. Recently, however, philosophers have begun to correct this ‘tunnel vision’ by considering other modalities. Nevertheless, relatively little has been written about the chemical senses—olfaction and gustation. The focus of this paper is olfaction. I consider the question: does human olfactory experience represent objects as thus and so? If we take visual experience as the paradigm of how experience can achieve object representation, we might think that the answer to this question is no. I argue that olfactory experience does indeed represent objects—just not in a way that is easily read off the dominant visual case.
Marr’s celebrated contribution to cognitive science (Marr 1982, chap. 1) was the introduction of (at least) three levels of description/explanation. However, most contemporary research has relegated the distinction between levels to a rather dispensable remark. Ignoring such an important contribution comes at a price, or so we shall argue. In the present paper, first we review Marr’s main points and motivations regarding levels of explanation. Second, we examine two cases in which the distinction between levels has been neglected when considering the structure of mental representations: Cummins et al.’s distinction between structural representation and encodings (Cummins in Journal of Philosophy, 93(12):591–614, 1996; Cummins et al. in Journal of Philosophical Research, 30:405–408, 2001) and Fodor’s account of iconic representation (Fodor 2008). These two cases illustrate the kind of problems in which researchers can find themselves if they overlook distinctions between levels, and how easily these problems can be solved when levels are carefully examined. The analysis of these cases allows us to conclude that researchers in the cognitive sciences are well advised to avoid risks of confusion by respecting Marr’s old lesson.
The ‘received view’ about computation is that all computations must involve representational content. Egan and Piccinini argue against the received view. In this paper, I focus on Egan’s arguments, claiming that they fall short of establishing that computations do not involve representational content. I provide positive arguments explaining why computation has to involve representational content, and how that representational content may be of any type. I also argue that there is no need for computational psychology to be individualistic. Finally, I draw out a number of consequences for computational individuation, proposing necessary conditions on computational identity and necessary and sufficient conditions on computational I/O equivalence of physical systems.

Keywords: Computation; Representation; Computational identity; Explanation; Narrow content; Physical computation.
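The notion of I/O equivalence invoked at the end of this abstract can be pictured with a small sketch (a hypothetical illustration, not the paper's own conditions): two systems whose internal organisation differs, yet which pair the same inputs with the same outputs over a shared domain.

```python
# Two internally different "systems" that are I/O-equivalent: they map the
# same inputs to the same outputs, though their internal transitions differ.

def system_a(x: int) -> int:
    """Computes doubling arithmetically."""
    return x * 2

# A structurally different realisation: doubling by table lookup.
TABLE = {x: x + x for x in range(10)}

def system_b(x: int) -> int:
    """Computes doubling by consulting a precomputed table."""
    return TABLE[x]

# I/O equivalence over the shared domain of inputs 0..9:
assert all(system_a(x) == system_b(x) for x in range(10))
```

On a purely I/O-based criterion the two count as computing the same function; whether they share a computational *identity* in a finer-grained sense is exactly the kind of question the abstract's proposed conditions are meant to settle.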
To what extent is the external world the way that it appears to us in perceptual experience? This perennial question in philosophy is no doubt ambiguous in many ways. For example, it might be taken as equivalent to the question of whether or not the external world is the way that it appears to be. This is a question about the epistemology of perception: Are our perceptual experiences by and large veridical representations of the external world? Alternatively, the question might be taken as asking whether or not the external world is like its ways of appearing to us, where the expression “ways of appearing” is intended to pick out aspects of our perceptual experiences themselves. This is a metaphysical version of the question of the relationship between appearance and reality: What is the relationship between the phenomenal features that characterize perceptual experience, on the one hand, and the mind-independent features of the external objects of perception, on the other? There are some philosophers who might resist distinguishing between these two questions. For them, “ways of appearing” in the phenomenal sense just are the ways that things appear to be (let’s call the latter the “intentional sense” of “ways of appearing”). That is, the phenomenal character of an experience is nothing over and above its representational content. Phenomenal properties are represented properties—the properties that an experience attributes to the external objects of perception. The question of whether or not phenomenal properties can be identified with the represented properties of an experience mirrors traditional questions in the philosophy of perception. If they can be identified with each other, then in veridical perception we might be said to “directly grasp” features of the external world through perception. The properties that are present to the mind are the very same properties that belong to the external objects of perception. Such a view affords…
I review a widely accepted argument to the conclusion that the contents of our beliefs, desires and other mental states cannot be causally efficacious in a classical computational model of the mind. I reply that this argument rests essentially on an assumption about the nature of neural structure that we have no good scientific reason to accept. I conclude that computationalism is compatible with wide semantic causal efficacy, and suggest how the computational model might be modified to accommodate this possibility.
Computational properties, it is standardly assumed, are to be sharply distinguished from semantic properties. Specifically, while it is standardly assumed that the semantic properties of a cognitive system are externally or non-individualistically individuated, computational properties are supposed to be individualistic and internal. Yet some philosophers (e.g., Tyler Burge) argue that content impacts computation, and further, that environmental factors impact computation. Oron Shagrir has recently argued for these theses in a novel way and has given them novel interpretations. In this paper I present a conception of computation in cognitive science that takes Shagrir's conception as its starting point, but develops it further in various directions and strengthens it. I argue that the explanatory role of computational properties emerges from the idea that syntactic properties and the relevant external factors presented by cognitive systems compose wide computational properties. I also elaborate on the notion of content that is in play, arguing that it is contents of the kind ascribed by transparent interpretations of content ascriptions that impact computation. This enables the thesis that external factors impact computation to rebuff the challenge that psychology must be individualistic.
Nonconceptualists maintain that there are ways of representing the world that do not reflect the concepts a creature possesses. They claim that the content of these representational states is genuine content because it is subject to correctness conditions, but it is nonconceptual because the creature to which we attribute it need not possess any of the concepts involved in the specification of that content. Appeals to nonconceptual content have seemed especially useful in attempts to capture the representational properties of perceptual experiences, the representational states of pre-linguistic children and non-human animals, the states of subpersonal visual information-processing systems, and the subdoxastic states involved in tacit knowledge of the grammar of a language. Nonconceptual content is also invoked in the explanation of concept possession, concept acquisition, sensorimotor behaviour, and in the analysis of the notion of self-consciousness. The notion of nonconceptual content plays an important role in many discussions about the relationships between perception and thought.
The view that the brain is a sort of computer has functioned as a theoretical guideline both in cognitive science and, more recently, in neuroscience. But since we can view every physical system as a computer, it has been less than clear what this view amounts to. By considering in some detail a seminal study in computational neuroscience, I first suggest that neuroscientists invoke the computational outlook to explain regularities that are formulated in terms of the information content of electrical signals. I then indicate why computational theories have explanatory force with respect to these regularities: in a nutshell, they underscore correspondence relations between formal/mathematical properties of the electrical signals and formal/mathematical properties of the represented objects. I finally link my proposal to the philosophical thesis that content plays an essential role in computational taxonomy.
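The correspondence idea can be pictured with a toy encoding (a hypothetical sketch, not drawn from the study the abstract discusses): if firing rate is a monotonic function of a represented magnitude, then formal relations among the signals, such as their ordering, mirror formal relations among the represented objects.

```python
# Toy "rate code": firing rate is a monotonic function of a represented
# magnitude, so order relations among the electrical signals correspond
# to order relations among the represented objects.

def firing_rate(magnitude: float, gain: float = 10.0, base: float = 2.0) -> float:
    """Hypothetical linear rate code: rate grows monotonically with magnitude.
    `gain` and `base` are illustrative parameters, not empirical values."""
    return base + gain * magnitude

stimuli = [0.1, 0.4, 0.9]                     # represented magnitudes
rates = [firing_rate(m) for m in stimuli]     # the signals that carry them

# The preserved formal property: the ordering of the rates corresponds to
# the ordering of the represented magnitudes.
assert sorted(rates) == rates and sorted(stimuli) == stimuli
```

The explanatory point the abstract gestures at is precisely this kind of structure preservation: mathematical relations defined over the signals track mathematical relations defined over what the signals represent.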
This paper advances a novel argument that speech perception is a complex system best understood nonindividualistically, and therefore that individualism fails as a general philosophical program for understanding cognition. The argument proceeds in four steps. First, I describe a "replaceability strategy", commonly deployed by individualists, in which one imagines replacing an object with an appropriate surrogate. This strategy conveys the appearance that relata can be substituted without changing the laws that hold within the domain. Second, I advance a "counterfactual test" as an alternative to the replaceability strategy. Third, I show how the typical objects of cross-modal processes (in this case, auditory-visual speech perception), more clearly irreplaceable than the objects of the unimodal process examined by Burge [(1986) Individualism and psychology, The Philosophical Review, XCV, 3-45], supply a firm basis for a nonindividualist interpretation of such cases. Finally, I demonstrate that the routine violation of the individualist's Replaceability Condition occurs even in unimodal cases - so the violation of the replaceability constraint does not derive simply from the diversity of modal sources but rather from the causal complexity of psychological processes generally. The conclusion is that philosophical progress on this issue must await progress in psychology, or, at least, philosophical progress in accounting for psychological complexity--precisely the vicissitude predicted by a thoroughgoing naturalism.
In this paper I address an important question concerning the nature of visual content: are the contents of human visual states and experiences exhaustively fixed or determined (in the non-causal sense) by our intrinsic physical properties? The individualist answers this question affirmatively. I will argue that such an answer is mistaken. A common anti-individualist or externalist tactic is to attempt to construct a twin scenario involving humanoid duplicates who are embedded in environments that diverge in such a way that it appears to be necessary to attribute divergent contents to their respective visual states. In the first half of the paper I discuss some of the twin scenarios that are prominent in the literature and argue that they fail to undermine individualism. Indeed, I argue that due to important facts about our internal workings, a convincing externalist twin scenario involving humanoid protagonists cannot be constructed. However, I argue that such a result does not conclusively establish an individualist thesis, and that in order to settle the question at issue it is necessary to construct an independently motivated theory of visual content. I attempt to do this in the second half of the paper by developing a theory at the core of which is the idea that the contents of our visual states and experiences are determined by the causal powers vis-…
The dispute between individualism and anti-individualism is about the individuation of psychological states, and individualism, on some accounts, is committed to the claim that psychological subjects together with their environments do not constitute integrated computational systems. Hence on this view the computational states that explain psychological states in computational accounts of mind will not involve the subject's natural and social environment. Moreover, the explanation of a system's interaction with the environment is, on this view, not the primary goal of computational theorizing. Recent work in computational developmental psychology (by A. Karmiloff-Smith and J. Rutkowska) as well as artificial agents or embedded artificial systems (by L.P. Kaelbling, among others) casts doubt on these claims. In these computational models, the environment does not just trigger and sustain input for computational operations, but some computational operations actually involve environmental structures.
We focus on Karmiloff-Smith's Representational redescription model, arguing that it poses some problems concerning the architecture of a redescribing system. To discuss the topic, we consider the implicit/explicit dichotomy and the relations between natural language and the language of thought. We argue that the model regards how knowledge is employed rather than how it is represented in the system.