Notes
Readers will recognize the similarity to Cummins’s (1989) ‘tower-bridge’ idea.
Actually, this picture is an oversimplification: the bottom level is a compression of several “implementation” levels, because representational vehicles are not physical state types. Characterizing them—say, as symbols, or nodes in a network—involves significant abstraction and idealization.
See, for example, Ramsey (2007).
A property (or set of properties) is essential only relative to a particular taxonomy or way of type-individuating. So the relevant claim is that cognitive science type-individuates mental states in such a way that their contents are essential.
Fred Dretske defends the significance of misrepresentation as follows:
It is the power to misrepresent, the capacity to get things wrong, to say things that are not true, that helps define the notion of interest. That is why it is important to stress a system’s capacity for misrepresentation. For it is only if the system has this capacity does it have, in its power to get things right, something approximating meaning. (1988, p. 65).
Of course, the content-determining relation must allow for misrepresentation, so there must, in principle, be some circumstances where the specified relation holds between the internal state or structure and some other object or property. Naturalistic theories often founder trying to satisfy this requirement.
Dretske, a proponent of Hyper Representationalism, adds the requirement that content has causal powers, in some sense. He expects an adequate theory of representation to explain how content “gets its hands on the steering wheel.” (Dretske 1988).
He wrote a book in 1980 called Rules and Representations. Though see Collins (2007) for the view that Chomsky has always been an anti-representationalist.
Chomsky sometimes suggests that we should dispense with representationalist talk altogether:
I do not know of any notion of ‘representational content’ that is clear enough to be invoked in accounts of how internal computational systems enter into the life of the organism. (2003, p. 274).
This sense of function-theoretic characterization is not to be confused with various other notions in the literature, in particular, with Cummins’ (1975) notion of functional analysis.
Of course, it does not follow from the absence of a teleological relation grounding the ascription of mathematical content that the mechanism that computes the specified mathematical function does not thereby contribute to the fitness of the organism, i.e. that it is not an adaptation. The mechanism itself has a teleological explanation.
To cite a well-known example, consider a frog that snaps out its tongue at any small dark thing moving in its visual field. Usually these are flies. But there are alternative candidates for the content of the structures constructed by a frog’s visual system and appealed to in the explanation of the frog’s feeding behavior: fly, food, fly or BB, small dark moving thing, fly stage, undetached fly part, etc. No purely naturalistic relation will privilege one of these candidates as ‘the object of perception.’ And it is something of a fool’s errand to try to decide just which is the single correct candidate. Various pragmatic considerations will motivate different content choices, as I explain below.
In so-called classical computational devices, this use is characterized as operations defined over explicit data structures.
I use caps here to indicate the structure (the vehicle), independent of the content it is assigned in what I am calling the ‘cognitive’ interpretation.
Instead the editor commissioned an environment-specific cognitive interpretation for each world, to accompany the environment-neutral account of the mechanism provided by the computational theory.
It is also possible for mathematical content to be non-veridical. If the device malfunctions, it may misrepresent the output of the specified function. If the function is defined on an infinite domain (as is, for example, the addition function) then the mathematical interpretation will involve some idealization. In general, the gap between competence (whose characterization will often involve idealization) and performance allows for the possibility of misrepresentation.
These possible scenarios would not affect ascription of mathematical content.
General environmental facts that Marr called physical constraints—such as that objects are rigid in translation (Ullman’s (1979) rigidity constraint) or that disparity varies smoothly almost everywhere, since matter is cohesive (the continuity constraint)—are also part of the computational theory proper.
Hyper Representationalists (e.g. Fodor) do not recognize a distinction. They assume that there is just the theory proper, and that representational contents are part of it, giving an essential characterization of computational mechanisms and processes.
It is not only the cognitive sciences that are grounded in our desire to understand ourselves and our place in the universe—the biological sciences are grounded in such interests as well. From the detached perspective of fundamental physics, the difference between life and non-living matter is no less arbitrary than the difference between a rational process and a mistake.
If we ever succeed in solving the so-called ‘hard problem’ of consciousness—providing a reductive explanation of phenomenal experience—we will undoubtedly need a phenomenal gloss that makes use of phenomenal concepts—concepts eschewed by the theory itself—to show that the theory addresses the phenomenon in question.
For a very useful discussion of the attempt to ground intentionality in phenomenal character see Kriegel (2013).
Horgan and Graham express a desire to “leave behind what Searle… tantalizingly calls ‘the persistent objectivizing tendency of philosophy and science since the seventeenth century’ (Searle 1987, p. 145)” (p. 342), but the fact is that the cognitive sciences in general, and computational cognitive science in particular, are firmly in the tradition of post-Galilean science.
Thanks to Todd Ganson, Mohan Matthen, Robert Matthews, and participants at the Oberlin Philosophy Conference, May 2012 for comments on earlier versions of this paper.
References
Burge, T. (1986). Individualism and psychology. The Philosophical Review, 95, 3–45.
Chomsky, N. (1995). Language and nature. Mind, 104, 1–61.
Chomsky, N. (2003). Reply to Egan. In L. Antony & N. Hornstein (Eds.), Chomsky and his critics. Oxford: Blackwell.
Collins, J. (2007). Meta-scientific eliminativism: A reconsideration of Chomsky’s review of Skinner’s Verbal Behavior. British Journal for the Philosophy of Science, 58, 625–658.
Cummins, R. (1975). Functional analysis. The Journal of Philosophy, 72, 741–764.
Cummins, R. (1989). Meaning and mental representation. Cambridge, MA: MIT Press.
Davies, M. (1991). Individualism and perceptual content. Mind, 100, 461–484.
Dretske, F. (1981). Knowledge and the flow of information. Cambridge, MA: MIT Press.
Dretske, F. (1986). Misrepresentation. In R. Bogdan (Ed.), Belief: Form, content, and function. Oxford: Oxford University Press.
Dretske, F. (1988). Explaining behavior. Cambridge, MA: MIT Press.
Egan, F. (1995). Computation and content. The Philosophical Review, 104, 181–203.
Egan, F. (1999). In defense of narrow mindedness. Mind and Language, 14, 177–194.
Egan, F. (2010). Computational models: A modest role for content. Studies in History and Philosophy of Science, 41, 253–259. (special issue on Computation and Cognitive Science).
Fodor, J. A. (1987). Psychosemantics. Cambridge, MA: MIT Press.
Fodor, J. A. (1990). A theory of content II: The theory. In A theory of content and other essays (pp. 89–136). Cambridge, MA: MIT Press.
Fodor, J. A. (2008). LOT2: The language of thought revisited. Oxford: Oxford University Press.
Gallistel, C. R. (1990). The organization of learning. Cambridge, MA: MIT Press.
Goodman, N. (1968). Languages of art. Indianapolis: Bobbs-Merrill.
Horgan, T., & Graham, G. (2012). Phenomenal intentionality and content determinacy. In R. Schantz (Ed.), Prospects for meaning (pp. 321–344). Boston: De Gruyter.
Kriegel, U. (2013). The phenomenal intentionality research program. In U. Kriegel (Ed.), Phenomenal intentionality (pp. 1–26). New York: Oxford University Press.
Markman, A., & Dietrich, E. (2001). Extending the classical view of representation. Trends in Cognitive Sciences, 4, 470–475.
Marr, D. (1982). Vision. New York: Freeman.
Marr, D., & Hildreth, E. (1980). Theory of edge detection. Proceedings of the Royal Society of London B, 207, 187–217.
Millikan, R. (1984). Language, thought, and other biological categories. Cambridge, MA: MIT Press.
Papineau, D. (1987). Reality and representation. Oxford: Oxford University Press.
Papineau, D. (1993). Philosophical naturalism. Oxford: Blackwell.
Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.
Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–457.
Searle, J. (1987). Indeterminacy, empiricism, and the first person. The Journal of Philosophy, 84, 1123–1146.
Searle, J. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.
Segal, G. (1989). Seeing what is not there. The Philosophical Review, 98, 189–214.
Shadmehr, R., & Wise, S. (2005). The computational neurobiology of reaching and pointing: A foundation for motor learning. Cambridge, MA: MIT Press.
Shagrir, O. (2001). Content, computation and externalism. Mind, 110, 369–400.
Shagrir, O. (2010). Marr on computational-level theories. Philosophy of Science, 77, 477–500.
Shapiro, L. (1993). Content, kinds, and individualism in Marr’s theory of vision. The Philosophical Review, 102, 489–513.
Seung, H. S. (1996). How the brain keeps the eyes still. Proceedings of the National Academy of Sciences USA, 93, 13339–13344.
Seung, H. S., Lee, D., Reis, B., & Tank, D. (2000). Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron, 26, 259–271.
Ullman, S. (1979). The interpretation of visual motion. Cambridge, MA: MIT Press.
Egan, F. How to think about mental content. Philos Stud 170, 115–135 (2014). https://doi.org/10.1007/s11098-013-0172-0