Abstract: In this paper I offer an explanation of how the grounding of stimuli in an initial analog world can affect the interpretability of symbolic representations of the behaviour of neural networks performing cognition. I make two assertions about the form of networks powerful enough to perform cognition: first, that they are composed of non-linear elements, and second, that their architecture is recurrent. As nets of this type are equivalent to non-linear dynamical systems, I then go on to consider how the behaviour of such systems can be represented symbolically. The crucial feature of such representations is that they must be non-deterministic; they therefore differ from deterministic symbol systems such as Searle's Chinese Room. A whole range of non-deterministic symbol systems representing a single underlying continuous process can be produced at different levels of detail. Symbols in these representations are not indivisible: if the contents of a symbol at one level of representation are known, then the subsequent behaviour of that symbol system may be interpreted in terms of a more detailed representation in which non-determinism acts at a finer scale. Knowing the contents of symbols therefore affects our ability to interpret system behaviour. Symbols only have contents in a grounded system, so these multiple levels of interpretation are only possible if stimuli are grounded in a finely detailed world.
Similar books and articles
Stevan Harnad (1995). Grounding Symbols in Sensorimotor Categories with Neural Networks. Institute of Electrical Engineers Colloquium on "Grounding Representations".
Stevan Harnad (1992). Connecting Object to Symbol in Modeling Cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag.
Stevan Harnad (1994). Computation is Just Interpretable Symbol Manipulation; Cognition Isn't. Minds and Machines 4 (4):379-90.
Stevan Harnad, Symbol Grounding is an Empirical Problem: Neural Nets Are Just a Candidate Component.
Vincent C. Müller (2009). Symbol Grounding in Computational Systems: A Paradox of Intentions. [REVIEW] Minds and Machines 19 (4):529-541.
Patrick Hayes, Stevan Harnad, Donald R. Perlis & Ned Block (1992). Virtual Symposium on Virtual Mind. Minds and Machines 2 (3):217-238.
Robert W. Kentridge (1994). Symbols, Neurons, Soap-Bubbles and the Neural Computation Underlying Cognition. Minds and Machines 4 (4):439-449.
C. Franklin Boyle (2001). Transduction and Degree of Grounding. Psycoloquy 12 (36).
Peter beim Graben (2004). Incompatible Implementations of Physical Symbol Systems. Mind and Matter 2 (2):29-51.
Eric Dietrich & A. Markman (2003). Discrete Thoughts: Why Cognition Must Use Discrete Representations. Mind and Language 18 (1):95-119.