In this paper I offer an explanation of how the grounding of stimuli in an initial analog world can affect the interpretability of symbolic representations of the behaviour of neural networks performing cognition. I make two assertions about the form of networks powerful enough to perform cognition: first, that they are composed of non-linear elements, and second, that their architecture is recurrent. As nets of this type are equivalent to non-linear dynamical systems, I then go on to consider how the behaviour of such systems can be represented symbolically. The crucial feature of such representations is that they must be non-deterministic; they therefore differ from deterministic symbol systems such as Searle's Chinese Room. A whole range of non-deterministic symbol systems representing a single underlying continuous process can be produced at different levels of detail. Symbols in these representations are not indivisible: if the contents of a symbol at one level of representation are known, then the subsequent behaviour of that symbol system may be interpreted in terms of a more detailed representation in which non-determinism acts at a finer scale. Knowing the contents of symbols therefore affects our ability to interpret system behaviour. Symbols only have contents in a grounded system, so these multiple levels of interpretation are only possible if stimuli are grounded in a finely detailed world.
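The two requirements stated in the abstract, non-linear elements and a recurrent architecture, can be illustrated with a minimal sketch (not drawn from the paper itself; the weights, sizes, and the sign-based symbolic reading are illustrative assumptions): a recurrent net viewed as a discrete-time non-linear dynamical system, with a coarse symbolic description obtained by partitioning its continuous state space.

```python
import numpy as np

# A recurrent network of non-linear elements as a dynamical system.
# The update x_{t+1} = tanh(W @ x_t + b) is non-linear (tanh) and
# recurrent (the next state depends on the current state) -- the two
# properties the abstract requires of networks powerful enough to
# perform cognition.

rng = np.random.default_rng(0)
n = 4                                    # number of units (arbitrary choice)
W = rng.normal(scale=1.5, size=(n, n))   # recurrent weight matrix
b = rng.normal(size=n)                   # unit biases

def step(x):
    """One update of the recurrent non-linear dynamical system."""
    return np.tanh(W @ x + b)

# Iterating the map yields a trajectory in continuous state space.
# A symbolic description corresponds to a coarse partition of that
# space; here the sign pattern of the state vector serves as one
# (assumed, illustrative) symbolic reading of the net's behaviour.
x = np.zeros(n)
symbols = []
for _ in range(50):
    x = step(x)
    symbols.append(tuple(np.sign(x)))
```

A finer partition of the same trajectory (more symbols per region) would give a more detailed symbol system over the identical underlying continuous process, which is the sense in which the abstract speaks of multiple levels of representation.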
Similar books and articles
Stevan Harnad (1995). Grounding Symbols in Sensorimotor Categories with Neural Networks. Institution of Electrical Engineers Colloquium on "Grounding Representations".
Stevan Harnad (1992). Connecting Object to Symbol in Modeling Cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag. 75--90.
Bruce J. MacLennan (1993). Grounding Analog Computers. Think 2:8-51.
Stevan Harnad (1994). Computation is Just Interpretable Symbol Manipulation; Cognition Isn't. Minds and Machines 4 (4):379-90.
Stevan Harnad, Symbol Grounding is an Empirical Problem: Neural Nets Are Just a Candidate Component.
Vincent C. Müller (2009). Symbol Grounding in Computational Systems: A Paradox of Intentions. [REVIEW] Minds and Machines 19 (4):529-541.
Patrick Hayes, Stevan Harnad, Donald R. Perlis & Ned Block (1992). Virtual Symposium on Virtual Mind. Minds and Machines 2 (3):217-238.
Robert W. Kentridge (1995). Symbols, Neurons, Soap-Bubbles and the Neural Computation Underlying Cognition. Minds and Machines 4 (4):439-449.
C. Franklin Boyle (2001). Transduction and Degree of Grounding. Psycoloquy 12 (36).
Peter beim Graben (2004). Incompatible Implementations of Physical Symbol Systems. Mind and Matter 2 (2):29-51.
Eric Dietrich & A. Markman (2003). Discrete Thoughts: Why Cognition Must Use Discrete Representations. Mind and Language 18 (1):95-119.