In H. Morowitz & J. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison-Wesley (1995)
Cognitive science is a form of "reverse engineering" (as Dennett has dubbed it). We are trying to explain the mind by building (or explaining the functional principles of) systems that have minds. A "Turing" hierarchy of empirical constraints can be applied to this task, from t1, toy models that capture only an arbitrary fragment of our performance capacity, to T2, the standard "pen-pal" Turing Test (total symbolic capacity), to T3, the Total Turing Test (total symbolic plus robotic capacity), to T4 (T3 plus internal [neuromolecular] indistinguishability). All scientific theories are underdetermined by data. What is the right level of empirical constraint for cognitive theory? I will argue that T2 is underconstrained (because of the Symbol Grounding Problem and Searle's Chinese Room Argument) and that T4 is overconstrained (because we don't know what neural data, if any, are relevant). T3 is the level at which we solve the "other minds" problem in everyday life, the one at which evolution operates (the Blind Watchmaker is no mind-reader either), and the one at which symbol systems can be grounded in the robotic capacity to name and manipulate the objects their symbols are about. I will illustrate this with a toy model for an important component of T3 -- categorization -- using neural nets that learn category invariance by "warping" similarity space the way it is warped in human categorical perception: within-category similarities are amplified and between-category similarities are attenuated. This analog "shape" constraint is the grounding inherited by the arbitrarily shaped symbol that names the category and by all the symbol combinations it enters into. No matter how tightly one constrains any such model, however, it will always be more underdetermined than normal scientific and engineering theory. This will remain the ineliminable legacy of the mind/body problem.
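The "warping" effect the abstract describes can be sketched in a toy calculation (a hypothetical illustration only, not the paper's actual neural-net simulations; the boundary, gain, and stimulus values below are invented for the example): a steep sigmoid centred on the category boundary plays the role of the learned transformation, compressing distances between stimuli on the same side of the boundary and expanding distances between stimuli that straddle it.

```python
import math

def warp(x, boundary=0.5, gain=10.0):
    """Sigmoid warp of a 1-D similarity space, centred on the category
    boundary. A stand-in for a trained net's hidden-unit transformation;
    'boundary' and 'gain' are illustrative values, not fitted ones."""
    return 1.0 / (1.0 + math.exp(-gain * (x - boundary)))

# Two stimuli on the same side of the boundary (within-category) ...
a, b = 0.1, 0.3
# ... and two straddling the boundary (between-category),
# equally far apart in the raw input space.
c, d = 0.4, 0.6

raw_within = abs(a - b)                  # 0.2
raw_between = abs(c - d)                 # 0.2
warped_within = abs(warp(a) - warp(b))   # ~0.10: compressed
warped_between = abs(warp(c) - warp(d))  # ~0.46: expanded

# Within-category similarities amplified (distances shrink):
print(warped_within < raw_within)        # True
# Between-category similarities attenuated (distances grow):
print(warped_between > raw_between)      # True
```

The same raw distance (0.2) thus shrinks or grows depending on whether the pair crosses the category boundary, which is the categorical-perception signature the abstract attributes to the trained nets.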
Similar books and articles
Stevan Harnad, Symbol Grounding is an Empirical Problem: Neural Nets Are Just a Candidate Component.
Stevan Harnad (2001). Minds, Machines and Turing: The Indistinguishability of Indistinguishables. Philosophical Explorations.
Stevan Harnad (2000). Minds, Machines and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language and Information 9 (4):425-445.
Stevan Harnad (1994). Computation is Just Interpretable Symbol Manipulation; Cognition Isn't. Minds and Machines 4 (4):379-90.
Stevan Harnad (1989). Minds, Machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.
Stevan Harnad (1992). Connecting Object to Symbol in Modeling Cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag. 75--90.
Stevan Harnad & Stephen J. Hanson, Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding.
Stevan Harnad (1991). Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. Minds and Machines 1 (1):43-54.