
Abstract

For the past 40 years, philosophers have generally assumed that a key to understanding mental representation is to develop a naturalistic theory of representational content. This has led to an outlook where the importance of content has been heavily inflated, while the significance of the representational vehicles has been somewhat downplayed. However, the success of this enterprise has been thwarted by a number of mysterious and allegedly non-naturalizable, irreducible dimensions of representational content. The challenge of addressing these difficulties has come to be known as the “hard problem of content” (Hutto & Myin, 2012), and many think it makes an account of representation in the brain impossible. In this essay, I argue that much of this is misguided and based upon the wrong set of priorities. If we focus on the functionality of representational vehicles (as recommended by teleosemanticists) and remind ourselves of the quirks associated with many functional entities, we can see that the allegedly mysterious and intractable aspects of content are really just mundane features associated with many everyday functional kinds. We can also see they have little to do with content and more to do with representation function. Moreover, we can begin to see that our explanatory priorities are backwards: instead of expecting a theory of content to be the key to understanding how a brain state can function as a representation, we should instead expect a theory of neural representation function to serve as the key to understanding how content occurs naturally.



Notes

  1. There is some need for clarification in the literature regarding how we should understand the term ‘content’ and what a theory of content is about. The term ‘content’ is commonly used to refer to the intentional object of a representation – the thing (or property, abstract entity, etc.) represented. With this usage, the content of a thought about Paris is Paris (or perhaps some proposition associated with Paris). However, this implies that a theory of content is thereby a theory about the things represented, like Paris (or propositions). But theories of content are not about these things. Theories of content are really theories about how mental representations can come to have content – theories about what the having of content amounts to. They are really theories of the intentionality relation between representations and the represented.

  2. Just as cognitive systems possess different levels of sophistication (with the basic minds of animals having fewer capacities than those of intelligent humans), so too it is reasonable to assume that cognitive mechanisms like representations come with different capabilities. Of course, for something to qualify as a functioning cognitive representation, it will need to do a great deal of what we ordinarily associate with mental representations. But as we’ll see below, some of the features we associate with more advanced, personal-level, conscious reflection should not be expected to apply equally to all sub-personal, low-level representational states and structures.

  3. For more on the ‘vehicle-content’ terminology and how it came about, see the interesting discussions here: https://philosophyofbrains.com/2010/03/16/first-mention-of-contentvehicle-distinction.aspx; also here: https://philpapers.org/bbs/thread.pl?tId=190. I’m grateful to an anonymous reviewer for pointing out this discussion.

  4. This language-oriented perspective does provide one noteworthy exception to vehicular neglect, at least concerning complex representations of propositions. According to the Language of Thought hypothesis, the vehicles representing full-blown propositions must have a combinatorial structure, such that the content of the molecular representation stems from the content of its atomic parts and their syntactic “arrangement” (Fodor 1975). Still, even on this view the nature of the atomic representations themselves is largely ignored.

  5. It should be noted that in a well-known paper, these considerations did encourage Fodor to promote a sort of “methodological solipsism” in our investigation of computational cognition (Fodor 1980).

  6. This project became popular in the 1980s, but there were several precursor accounts closely related to it. These included Sellars’ (1957) version of intentional role semantics and Stampe’s (1977) causal theory of meaning.

  7. Here is how Fodor puts it: “Well, what would it be like to have a serious theory of representation? Here too, there is a consensus to work from. The worry about representation is above all that the semantic (and/or the intentional) will prove permanently recalcitrant to integration in the natural order; for example, that the semantic/intentional properties of things will fail to supervene upon their physical properties. What is required to relieve the worry is therefore, at a minimum, the framing of naturalistic conditions for representation” (1990, p. 32). Fodor goes on to suggest that this project can largely ignore questions about the sort of things that serve as representational vehicles.

  8. It should be noted that Grush (2004) also invokes a (somewhat different) notion of emulation as the basis for his account of representation. Since my aim is to highlight parallels between emulation with regard to camouflage and representational content, it is perhaps unsurprising that such a relation is associated with certain accounts of the latter.

  9. In ethology, biologists do regularly refer to mimicry. However, mimicry (as I understand it) is somewhat different, as it involves cases where a relatively harmless organism imitates a more poisonous or malevolent organism.

  10. It is easy to imagine auditory or olfactory versions of something similar. For example, if a certain predator experiences a hallucinatory tone (similar to ringing in the ears), we can imagine potential prey signaling danger by using a similar frequency, hiding the signal by emulating a non-existent environmental sound.

  11. In his recent book, Matej Kohar (2023) argues that a localist form of neural mechanistic explanation of cognition cannot invoke representational content because intentional content extends beyond neural elements and processes. Insofar as his arguments are sound, they would seem to work equally well for a mechanistic explanation of the survival value of camouflage, or any other adaptation that involves organism-world relational properties. This suggests, unsurprisingly, that purely localist mechanistic explanation is insufficient for a complete accounting of behavior and adaptability.

  12. An illustration of this sort of approach can be found in Kiefer and Hohwy (2018), who emphasize a functionalist approach to understanding representation and content in the predictive error minimization framework.

  13. For those committed to embodied and/or embedded cognition, it should be noted that, as Piccinini (2022) points out, a deeper analysis of the functionality of representations reveals that such an agenda is not only compatible with a representational theory of mind, but in many ways the two are mutually supportive.

References  

  • Allen, C., M. Bekoff, and G.V. Lauder, eds. 1998. Nature’s Purposes: Analyses of Function and Design in Biology. Cambridge, MA: The MIT Press.


  • Allen, C. and Neal, J. 2020. "Teleological Notions in Biology", The Stanford Encyclopedia of Philosophy (Spring 2020 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/spr2020/entries/teleology-biology/

  • Anderson, M.L., and G. Rosenberg. 2008. Content and Action: The Guidance Theory of Representation. The Journal of Mind and Behavior 29 (1 & 2): 55–86.


  • Brentano, F. 1924. Psychologie vom empirischen Standpunkt, ed. O. Kraus. Leipzig: Meiner Verlag. (English translation: Psychology from an Empirical Standpoint, ed. L.L. McAlister, trans. A.C. Rancurello, D.B. Terrell, and L.L. McAlister. London: Routledge & Kegan Paul, 1973.)

  • Burgess, N., and J. O’Keefe. 2002. Spatial models of the hippocampus. In The Handbook of Brain Theory and Neural Networks, 2nd ed., ed. M.A. Arbib. Cambridge, MA: MIT press.


  • Chemero, A. 2009. Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.


  • Chisholm, R. 1957. Perception: A Philosophical Study. Ithaca, NY: Cornell University Press.


  • Cummins, R. 1989. Meaning and Mental Representation. Cambridge, MA: MIT Press.


  • Dennett, D. 1978. Brainstorms. Cambridge, MA: MIT Press.


  • Dennett, D., and J. Haugeland. 1987. Intentionality. In The Oxford Companion to the Mind, ed. R. Gregory. Oxford: Oxford University Press.


  • Dretske, F. 1988. Explaining Behavior. Cambridge, MA: MIT Press.


  • Egan, F. 2014. How to Think About Mental Content. Philosophical Studies 170: 115–135.


  • Field, H. 1978. Mental Representation. Erkenntnis 13: 9–61.


  • Fodor, J. 1975. The Language of Thought. New York, NY: Thomas Y. Crowell.


  • Fodor, J. 1980. Methodological Solipsism Considered as a Research Strategy in Cognitive Science. Behavioral and Brain Sciences 3 (1): 63–109.


  • Fodor, J. 1987. Psychosemantics. Cambridge, MA: MIT Press.


  • Forbes, P. 2009. Dazzled and Deceived: Mimicry and Camouflage. New Haven, CT: Yale University Press.


  • Gladziejewski, P., and M. Milkowski. 2017. Structural Representations: Causally Relevant and Different From Detectors. Biology and Philosophy 32 (3): 337–355.


  • Goodman, N. 1968. Languages of Art: An Approach to a Theory of Symbols. Indianapolis, IN: Bobbs-Merrill.


  • Grush, R. 2004. The Emulation Theory of Representation: Motor Control, Imagery, and Perception. Behavioral and Brain Sciences 27 (3): 377–396.


  • Hutto, D.D., and E. Myin. 2012. Radicalizing Enactivism. Cambridge, MA: MIT Press.


  • Kiefer, A., and J. Hohwy. 2018. Content and Misrepresentation in Hierarchical Generative Models. Synthese 195: 2387–2415.


  • Kohar, M. 2023. Neural Machines: A Defense of Non-Representationalism in Cognitive Neuroscience. Cham, Switzerland: Springer.


  • Lee, J. 2021. Rise of the Swamp Creatures: Reflections on a Mechanistic Approach to Content. Philosophical Psychology 34 (6): 805–828.


  • Mann, S.F., and R. Pain. 2022. Teleosemantics and the Hard Problem of Content. Philosophical Psychology 35 (1): 22–46.


  • Milkowski, M. 2015. The Hard Problem of Content: Solved (Long Ago). Studies in Logic, Grammar and Rhetoric 41 (54): 73–88.


  • Millikan, R. 1984. Language, Thought and Other Biological Categories. Cambridge, MA: MIT Press.


  • Millikan, R. 2009. Biosemantics. In The Oxford Handbook of Philosophy of Mind, ed. B. Mclaughlin, A. Beckermann, and S. Walter, 394–406. Oxford: Oxford University Press.


  • Neander, K. 2017. The Mark of the Mental. Cambridge, MA: MIT Press.


  • Piccinini, G. 2022. Situated Neural Representation: Solving the Problems of Content. Frontiers in Neurorobotics 16: 1–13.


  • Putnam, H. 1975. The Meaning of ‘Meaning.’ In Language, Mind and Knowledge, ed. K. Gunderson, 131–193. Minneapolis: University of Minnesota Press.


  • Ramsey, W. 2007. Representation Reconsidered. Cambridge: Cambridge University Press.


  • Ramsey, W. 2016. Untangling Two Questions About Mental Representation. New Ideas in Psychology 40: 3–12.


  • Searle, J. 1980. Minds, Brains and Programs. Behavioral and Brain Sciences 3: 417–424.


  • Sellars, W. 1957. “Intentionality and the Mental”, A symposium by correspondence with Roderick Chisholm. In Minnesota Studies in the Philosophy of Science, vol. II, ed. H. Feigl, M. Scriven, and G. Maxwell, 507–539. Minneapolis: University of Minnesota Press.


  • Shea, N. 2018. Representation in Cognitive Science. Oxford: Oxford University Press.


  • Stampe, D. 1977. Towards a Causal Theory of Linguistic Representation. Midwest Studies in Philosophy 2 (1): 42–63.


  • Stich, S. 1983. From Folk Psychology to Cognitive Science: The Case Against Belief. Cambridge, MA: MIT Press.


  • Stich, S., and T. Warfield. 1994. Mental Representation: A Reader. Oxford: Basil Blackwell.


  • Swoyer, C. 1991. Structural Representation and Surrogative Reasoning. Synthese 87: 449–508.


  • van Gelder, T. 1995. What Might Cognition Be, If Not Computation? The Journal of Philosophy 92: 345–381.


  • von Eckardt, B. 2012. The Representational Theory of Mind. In The Cambridge Handbook of Cognitive Science, ed. K. Frankish and W. Ramsey, 29–49. Cambridge: Cambridge University Press.


  • Wright, L. 1976. Teleological Explanation. Berkeley, CA: University of California Press.



Acknowledgments

Earlier versions of this paper were presented at the University of Nevada, Las Vegas Philosophy Colloquium, March 2022; the University of California, Davis Philosophy Colloquium, May 2022; and the Workshop on the Borders of Cognition, Bergamo, Italy, June 2022. Feedback from these audiences was extremely helpful. I am also grateful to Lel Jones and two anonymous reviewers for their helpful comments and suggestions.

Funding

There is no noteworthy funding.

Author information


Corresponding author

Correspondence to William Max Ramsey.

Ethics declarations

Competing Interests

There are no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ramsey, W.M. The Hard Problem of Content is Neither. Review of Philosophy and Psychology (2023). https://doi.org/10.1007/s13164-023-00714-9

