Abstract
This paper presents an approach to solving the symbol grounding problem within the framework of embodied cognitive science. It is argued that symbolic structures can be used within the paradigm of embodied cognitive science by adopting an alternative definition of a symbol, in which the symbol is viewed as a structural coupling between an agent's sensorimotor activations and its environment. A robotic experiment is presented in which mobile robots develop a symbolic structure from scratch by engaging in a series of language games. The experiment shows that the robots can develop a symbolic structure with which they communicate the names of a few objects with a remarkable degree of success. It is further shown that, although the referents may be interpreted differently on different occasions, each object is usually named with only one form.
Similar books and articles
John E. Hummel (2010). Symbolic Versus Associative Learning. Cognitive Science 34 (6):958-965.
Angelo Cangelosi, Alberto Greco & Stevan Harnad (2002). Symbol Grounding and the Symbolic Theft Hypothesis. In A. Cangelosi & D. Parisi (eds.), Simulating the Evolution of Language. Springer-Verlag.
Stevan Harnad (1995). Grounding Symbols in Sensorimotor Categories with Neural Networks. Institute of Electrical Engineers Colloquium on "Grounding Representations".
Vincent C. Müller (2009). Symbol Grounding in Computational Systems: A Paradox of Intentions. Minds and Machines 19 (4):529-541.
Stevan Harnad (1995). Does Mind Piggyback on Robotic and Symbolic Capacity? In H. Morowitz & J. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison Wesley.
Dairon Rodríguez, Jorge Hermosillo & Bruno Lara (2012). Meaning in Artificial Agents: The Symbol Grounding Problem Revisited. Minds and Machines 22 (1):25-34.
Karl F. MacDorman (1998). Feature Learning, Multiresolution Analysis, and Symbol Grounding. Behavioral and Brain Sciences 21 (1):32-33.
Stevan Harnad, Symbol Grounding is an Empirical Problem: Neural Nets Are Just a Candidate Component.
Graham White (2011). Descartes Among the Robots. Minds and Machines 21 (2):179-202.
Added to index: 2009-01-28
Total downloads: 5 (#160,483 of 549,198)
Recent downloads (6 months): 1 (#63,397 of 549,198)