After people learn to sort objects into categories, they see them differently: members of the same category look more alike, and members of different categories look more different. This phenomenon of within-category compression and between-category separation in similarity space is called categorical perception (CP). It is exhibited by human subjects, animals, and neural net models. In backpropagation nets trained first to auto-associate 12 stimuli varying along a one-dimensional continuum and then to sort them into 3 categories, CP arises as a natural side-effect because of four factors: (1) maximal interstimulus separation in hidden-unit space during auto-association learning, (2) movement toward linear separability during categorization learning, (3) an inverse-distance repulsive force exerted by the between-category boundary, and (4) the modulating effects of input iconicity, especially in interpolating CP to untrained regions of the continuum. Once similarity space has been "warped" in this way, the compressed and separated "chunks" have symbolic labels which could then be combined into symbol strings that constitute propositions about objects. The meanings of such symbolic representations would be "grounded" in the system's capacity to pick out from their sensory projections the object categories that the propositions were about.
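The two-phase training regime the abstract describes can be sketched in a small NumPy simulation. This is a minimal illustration, not the paper's actual model: the network size, learning rate, and the "thermometer" input code (an iconic code in which nearby stimuli share active units) are assumptions made here for concreteness. A one-hidden-layer backprop net is first trained to auto-associate 12 stimuli on a one-dimensional continuum, then trained to sort the same stimuli into 3 categories; the hidden-layer distances between adjacent stimuli can then be compared within versus across category boundaries to look for the CP warping effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# 12 stimuli on a one-dimensional continuum, coded with an iconic
# "thermometer" code (assumed here): nearby stimuli share active units.
X = np.array([[1.0 if j <= i else 0.0 for j in range(12)] for i in range(12)])
# 3 categories of 4 consecutive stimuli each, as one-hot targets.
labels = np.repeat(np.arange(3), 4)
Y = np.eye(3)[labels]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Shared input-to-hidden weights (12 -> 8), plus task-specific output layers.
W1 = rng.normal(0, 0.3, (12, 8)); b1 = np.zeros(8)
Wa = rng.normal(0, 0.3, (8, 12)); ba = np.zeros(12)   # auto-association head

def forward(W2, b2, T):
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)
    return H, O, np.mean((O - T) ** 2)

lr = 0.2

# Phase 1: auto-association (reproduce the input on the output).
_, _, loss0_auto = forward(Wa, ba, X)
for _ in range(2000):
    H, O, _ = forward(Wa, ba, X)
    dO = (O - X) * O * (1 - O)          # backprop through sigmoid output
    dH = (dO @ Wa.T) * H * (1 - H)
    Wa -= lr * H.T @ dO; ba -= lr * dO.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
_, _, loss1_auto = forward(Wa, ba, X)

# Phase 2: categorization on top of the same hidden layer,
# still updating W1 so the hidden representations can warp.
Wc = rng.normal(0, 0.3, (8, 3)); bc = np.zeros(3)
_, _, loss0_cat = forward(Wc, bc, Y)
for _ in range(2000):
    H, O, _ = forward(Wc, bc, Y)
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ Wc.T) * H * (1 - H)
    Wc -= lr * H.T @ dO; bc -= lr * dO.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
H, _, loss1_cat = forward(Wc, bc, Y)

# CP measure: hidden-space distance between adjacent stimulus pairs,
# split into within-category pairs vs. between-category (boundary) pairs.
d = np.linalg.norm(H[1:] - H[:-1], axis=1)
within = np.mean([d[i] for i in range(11) if labels[i] == labels[i + 1]])
between = np.mean([d[i] for i in range(11) if labels[i] != labels[i + 1]])
print("auto loss fell:", loss1_auto < loss0_auto)
print("cat loss fell: ", loss1_cat < loss0_cat)
print("boundary vs within-category separation:", between, within)
```

If the CP effect emerges, the mean distance across the two category boundaries should exceed the mean within-category distance — the "warping" of similarity space the abstract attributes to categorization learning.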
Citations of this work
Stevan Harnad (1994). Computation is Just Interpretable Symbol Manipulation; Cognition Isn't. Minds and Machines 4 (4):379-90.
Similar books and articles
Stevan Harnad & S. J. Hanson, Categorical Perception and the Evolution of Supervised Learning in Neural Nets.
Stevan Harnad (1992). Connecting Object to Symbol in Modeling Cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag. 75--90.
Stevan Harnad, Symbol Grounding is an Empirical Problem: Neural Nets Are Just a Candidate Component.
Stevan Harnad (1990). The Symbol Grounding Problem. Physica D 42:335-346.
Stevan Harnad (1995). Does Mind Piggyback on Robotic and Symbolic Capacity? In H. Morowitz & J. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison Wesley.
Stevan Harnad (2003). Categorical Perception. In L. Nadel (ed.), Encyclopedia of Cognitive Science. Nature Publishing Group. 67--4.
Stevan Harnad (2002). Symbol Grounding and the Origin of Language. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.
Stevan Harnad (1993). Grounding Symbols in the Analog World with Neural Nets. Philosophical Explorations 2 (1):12-78.
Yasmina Jraissati (2012). Categorical Perception of Color: Assessing the Role of Language. Croatian Journal of Philosophy 36 (3):439-462.
Constantine Tsinakis & Han Zhang (2004). Order Algebras as Models of Linear Logic. Studia Logica 76 (2):201 - 225.
Bruce J. MacLennan (1993). Grounding Analog Computers. Philosophical Explorations 2:8-51.
John E. Hummel (2010). Symbolic Versus Associative Learning. Cognitive Science 34 (6):958-965.