Representational trajectories in connectionist learning

Minds and Machines 4 (3):317-32 (1994)
Author
Andy Clark
University of Edinburgh
Abstract
The paper considers the problems involved in getting neural networks to learn about highly structured task domains. A central problem concerns the tendency of networks to learn only a set of shallow (non-generalizable) representations for the task, i.e., to miss the deep organizing features of the domain. Various solutions are examined, including task-specific network configuration and incremental learning. The latter strategy is the more attractive, since it holds out the promise of a task-independent solution to the problem. Once we see exactly how the solution works, however, it becomes clear that it is limited to a special class of cases in which (1) statistically driven undersampling is (luckily) equivalent to task decomposition, and (2) the dangers of unlearning are somehow being minimized. The technique is suggestive nonetheless, for a variety of developmental factors may yield the functional equivalent of both statistical and informed undersampling in early learning.
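The incremental-learning strategy described above (undersampling the training data early on, so that the simple organizing structure of the domain is acquired before the distracting detail) can be illustrated, very loosely, with a toy single-unit network. Everything in the sketch below (the task, the data, the two-stage schedule) is an invented assumption for illustration, not a simulation from the paper:

```python
import math

# Toy sketch of incremental learning: first train on an undersampled,
# distractor-free subset of a task, then on the full domain. The task and
# training schedule are hypothetical, chosen only to make the idea concrete.

def train(examples, w, b, epochs=200, lr=0.5):
    """Plain SGD on a single logistic unit."""
    for _ in range(epochs):
        for x, y in examples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                       # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# The "deep" organizing feature: the output equals the first input bit;
# the remaining two bits are mere distractors.
full = [([a, b2, c], a) for a in (0, 1) for b2 in (0, 1) for c in (0, 1)]
easy = [(x, y) for x, y in full if x[1] == 0 and x[2] == 0]

w, b = [0.0, 0.0, 0.0], 0.0
w, b = train(easy, w, b)  # stage 1: undersampled, "decomposed" task
w, b = train(full, w, b)  # stage 2: the full, structured domain

accuracy = sum(predict(x, w, b) == y for x, y in full) / len(full)
print(accuracy)
```

Here the stage-1 subset happens to strip out exactly the distractor bits, so the undersampling amounts to a task decomposition; this is the "lucky" equivalence the abstract points to, and nothing in the statistics of a real domain guarantees it.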
Keywords: Connectionism, Epistemology, Learning, Metaphysics, Representation
DOI 10.1007/BF00974197

Citations of this work

Complexity and Individual Psychology.Yakir Levin & Itzhak Aharon - 2015 - Mind and Society 14 (2):203-219.
