How a neural net grows symbols

In Peter Bartlett (ed.), Proceedings of the Seventh Australian Conference on Neural Networks (ACNN '96), Canberra, pp. 91–96 (1996)

Abstract

Brains, unlike artificial neural nets, use symbols to summarise and reason about perceptual input. But unlike symbolic AI, they “ground” the symbols in the data: the symbols have meaning in terms of the data, not just meaning imposed by an outside user. If neural nets could be made to grow their own symbols in the way that brains do, there would be a good prospect of combining neural networks and symbolic AI so as to combine the strengths of each. The article argues that cluster analysis provides algorithms to perform this task, and that any solution to the task must be a form of cluster analysis.
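The claim is algorithmic: a symbol emerges when recurring structure in the perceptual data is summarised by a cluster, so the symbol's meaning is fixed by the data it was induced from rather than assigned from outside. As a rough illustration only (the paper argues for cluster analysis in general, not for this particular algorithm; the data and names below are made up), here is a self-contained k-means sketch in Python:

```python
# A minimal sketch, not the paper's own algorithm: k-means cluster
# analysis over synthetic "perceptual" vectors. Each cluster centre
# plays the role of a grounded symbol: a summary whose meaning is
# fixed by the data it was induced from.
import numpy as np

def kmeans(data, k, iters=100, seed=0):
    """Plain k-means: returns (centres, labels)."""
    rng = np.random.default_rng(seed)
    centres = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Assign each input to its nearest centre (its "symbol").
        dists = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of the inputs it summarises.
        new_centres = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centres[j]
            for j in range(k)
        ])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return centres, labels

# Synthetic stand-in for perceptual input: three noisy sources.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(mu, 0.3, size=(50, 2))
                  for mu in ([0, 0], [3, 0], [0, 3])])

centres, labels = kmeans(data, k=3)
# `centres` holds the emergent, data-grounded summaries; `labels`
# maps every raw input to one of them.
print(centres)
```

A downstream symbolic reasoner could then operate on the cluster labels rather than the raw vectors, which is the combination of neural and symbolic processing the abstract envisages.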

Links

PhilArchive

Author's Profile

James Franklin
University of New South Wales

Citations of this work

A Causal-Mentalist View of Propositions. Jeremiah Joven Joaquin & James Franklin - 2022 - Organon F: Medzinárodný Časopis Pre Analytickú Filozofiu 29 (1):47-77.
Symbolic connectionism in natural language disambiguation. James Franklin & S. W. K. Chan - 1998 - IEEE Transactions on Neural Networks 9:739-755.


References found in this work

The symbol grounding problem. Stevan Harnad - 1990 - Physica D 42:335-346.
Knowledge-based artificial neural networks. Geoffrey G. Towell & Jude W. Shavlik - 1994 - Artificial Intelligence 70 (1-2):119-165.
