Stevan Harnad
Physica D 42:335-346 (1990)
There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., An X is a Y that is Z). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. 
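The bottom-up architecture described above can be illustrated with a toy sketch. Everything here is an assumption for illustration, not the paper's implementation: iconic representations are modeled as raw sensory feature vectors, a categorical representation as a simple perceptron-style feature detector (one candidate for the connectionist component), and an elementary symbol as the name attached to a trained detector.

```python
# Toy sketch of bottom-up grounding (illustrative assumptions throughout):
# iconic representation  = raw sensory vector
# categorical representation = learned linear feature detector (perceptron)
# elementary symbol      = the name assigned to a category via its detector

def train_detector(samples, labels, epochs=20, lr=0.1):
    """Learn a linear feature detector for one category (perceptron rule)."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is 1 (member) or 0 (non-member)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def detects(detector, icon):
    """Apply a categorical representation to an iconic representation."""
    w, b = detector
    return sum(wi * xi for wi, xi in zip(w, icon)) + b > 0

# Hypothetical iconic projections: [mane-like feature, stripe-like feature].
samples = [[1, 0], [1, 0.1], [0, 1], [0.1, 0.9]]
labels = [1, 1, 0, 0]  # first two are "horse" projections

# The elementary symbol "horse" is grounded by the detector that connects
# the name to the invariant features of its sensory projections.
symbols = {"horse": train_detector(samples, labels)}
print(detects(symbols["horse"], [1, 0]))  # horse-like projection -> True
```

The point of the sketch is only structural: the name "horse" is assigned on the basis of a nonsymbolic (learned) categorical representation, not by fiat.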
Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.
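A second toy sketch, continuing the same illustrative assumptions, shows how a higher-order symbol of the form "An X is a Y that is Z" can inherit grounding from already-grounded elementary symbols, so that its use is constrained by the nonsymbolic detectors underneath rather than by arbitrary shape alone (the zebra example and all feature names are hypothetical):

```python
# Elementary symbols: name -> detector over an iconic representation.
# Icons are modeled as dicts of crude invariant features (an assumption).
grounded = {
    "horse":   lambda icon: icon.get("horse_shape", 0) > 0.5,
    "striped": lambda icon: icon.get("stripes", 0) > 0.5,
}

def define(name, genus, differentia):
    """Ground 'An X is a Y that is Z' via the detectors for Y and Z."""
    grounded[name] = lambda icon: (grounded[genus](icon)
                                   and grounded[differentia](icon))

define("zebra", "horse", "striped")  # "A zebra is a horse that is striped"

icon = {"horse_shape": 0.9, "stripes": 0.8}  # proximal zebra projection
print(grounded["zebra"](icon))  # True: the new symbol inherits grounding
```

The composite symbol "zebra" is manipulable as a string, but its applicability bottoms out in the sensory detectors for "horse" and "striped", which is the sense in which the symbol system is intrinsically dedicated rather than autonomous.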
Citations of this work
Mattia Riccardi (2013). Nietzsche's Sensualism. European Journal of Philosophy 21 (2):219-257.
Patrick Hayes, Stevan Harnad, Donald R. Perlis & Ned Block (1992). Virtual Symposium on Virtual Mind. Minds and Machines 2 (3):217-238.
Lawrence W. Barsalou (2010). Grounded Cognition: Past, Present, and Future. Topics in Cognitive Science 2 (4):716-724.
Susan Schneider (2009). LOT, CTM, and the Elephant in the Room. Synthese 170 (2):235-250.
Julian Kiverstein (2012). The Meaning of Embodiment. Topics in Cognitive Science 4 (4):740-758.
Similar books and articles
Stevan Harnad (2002). Symbol Grounding and the Origin of Language. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.
Vincent C. Müller (2009). Symbol Grounding in Computational Systems: A Paradox of Intentions. [REVIEW] Minds and Machines 19 (4):529-541.
Stevan Harnad (1995). Does Mind Piggyback on Robotic and Symbolic Capacity? In H. Morowitz & J. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison Wesley.
David J. Chalmers (2013). Summary. Theoria: Revista de Teoría, Historia y Fundamentos de la Ciencia 28 (1):171-173.
Stevan Harnad (1994). Computation is Just Interpretable Symbol Manipulation; Cognition Isn't. Minds and Machines 4 (4):379-90.
Stevan Harnad (1995). Grounding Symbols in Sensorimotor Categories with Neural Networks. Institution of Electrical Engineers Colloquium on "Grounding Representations".
Stevan Harnad & Stephen J. Hanson, Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding.
Stevan Harnad, Symbol Grounding is an Empirical Problem: Neural Nets Are Just a Candidate Component.
Stevan Harnad (1992). Connecting Object to Symbol in Modeling Cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag. 75--90.