Book chapter in Stevan Harnad (ed.), Categorical Perception: The Groundwork of Cognition. Cambridge University Press (1987).
A provisional model is presented in which categorical perception (CP) provides our basic or elementary categories. In acquiring a category we learn to label or identify positive and negative instances from a sample of confusable alternatives. Two kinds of internal representation are built up in this learning by "acquaintance": (1) an iconic representation that subserves our similarity judgments and (2) an analog/digital feature-filter that picks out the invariant information allowing us to categorize the instances correctly. This second, categorical representation is associated with the category name. Category names then serve as the atomic symbols for a third representational system, the (3) symbolic representations that underlie language and that make it possible for us to learn by "description." Connectionism is one possible mechanism for learning the sensory invariants underlying categorization and naming. Among the implications of the model are (a) the "cognitive identity of (current) indiscriminables": Categories and their representations can only be provisional and approximate, relative to the alternatives encountered to date, rather than "exact." There is also (b) no such thing as an absolute "feature," only those features that are invariant within a particular context of confusable alternatives. Contrary to prevailing "prototype" views, however, (c) such provisionally invariant features must underlie successful categorization, and must be "sufficient" (at least in the "satisficing" sense) to subserve reliable performance with all-or-none, bounded categories, as in CP. Finally, the model brings out some basic limitations of the "symbol-manipulative" approach to modeling cognition, showing how (d) symbol meanings must be functionally grounded in nonsymbolic, "shape-preserving" representations -- iconic and categorical ones. Otherwise, all symbol interpretations are ungrounded and indeterminate.
This amounts to a principled call for a psychophysical (rather than a neural) "bottom-up" approach to cognition.
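The three representational systems described in the abstract can be illustrated with a small toy sketch. Everything below is an illustrative assumption, not from the source: 2-D "sensory" vectors stand in for iconic representations, a simple perceptron stands in for the connectionist feature-filter, and the example category name "bright" is hypothetical.

```python
# A toy sketch, under illustrative assumptions, of the model's three systems:
# (1) iconic representations, (2) a learned categorical feature-filter,
# (3) symbols grounded in category names. Perceptron and data are assumptions.
import random

random.seed(0)

# (1) Iconic representation: a raw analog copy of the sensory input,
# which subserves similarity judgments.
def similarity(icon_a, icon_b):
    """Negative Euclidean distance: higher means more similar."""
    return -sum((a - b) ** 2 for a, b in zip(icon_a, icon_b)) ** 0.5

# (2) Categorical representation: a feature-filter learned from labeled
# positive/negative instances drawn from confusable alternatives.
class FeatureFilter:
    def __init__(self, dim):
        self.w = [0.0] * dim
        self.b = 0.0

    def train(self, samples, epochs=20, lr=0.1):
        # samples: list of (iconic_vector, label) with label in {0, 1}
        for _ in range(epochs):
            for icon, label in samples:
                err = label - self.categorize(icon)
                self.w = [w + lr * err * x for w, x in zip(self.w, icon)]
                self.b += lr * err

    def categorize(self, icon):
        # All-or-none, bounded category membership, as in CP.
        s = sum(w * x for w, x in zip(self.w, icon)) + self.b
        return 1 if s > 0 else 0

# Confusable alternatives: only the first dimension is invariant with
# respect to category membership; the second is uninformative noise.
pos = [([1.0 + random.gauss(0, 0.1), random.gauss(0, 1)], 1) for _ in range(20)]
neg = [([-1.0 + random.gauss(0, 0.1), random.gauss(0, 1)], 0) for _ in range(20)]
filt = FeatureFilter(dim=2)
filt.train(pos + neg)

# (3) Symbolic representation: the category name is an atomic symbol,
# grounded in the categorical representation; learning by "description"
# composes grounded symbols without fresh sensory acquaintance.
grounded = {"bright": filt.categorize}          # name -> grounded categorizer
print(grounded["bright"]([1.2, 0.3]))           # well inside the category
print(grounded["bright"]([-0.9, 0.5]))          # well outside the category
```

The filter's learned weights are the sketch's analog of the "provisionally invariant features": they are relative to the particular sample of confusable alternatives seen so far, and would change if new alternatives were encountered.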
Similar books and articles
Stevan Harnad (2002). Symbol Grounding and the Origin of Language. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.
Stevan Harnad & Stephen J. Hanson, Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding.
Stevan Harnad (1992). Connecting Object to Symbol in Modeling Cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag. 75--90.
Stevan Harnad & Stephen J. Hanson, Categorical Perception and the Evolution of Supervised Learning in Neural Nets.
Philippe G. Schyns, Robert L. Goldstone & Jean-Pierre Thibaut (1998). The Development of Features in Object Concepts. Behavioral and Brain Sciences 21 (1):1-17.
Stevan Harnad (1995). Does Mind Piggyback on Robotic and Symbolic Capacity? In H. Morowitz & J. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison Wesley.
Stevan Harnad (1990). The Symbol Grounding Problem. Physica D 42:335-346.