Representational trajectories in connectionist learning

Minds and Machines 4 (3):317-32 (1994)

Abstract

  The paper considers the problems involved in getting neural networks to learn about highly structured task domains. A central problem concerns the tendency of networks to learn only a set of shallow (non-generalizable) representations for the task, i.e., to miss the deep organizing features of the domain. Various solutions are examined, including task-specific network configuration and incremental learning. The latter strategy is the more attractive, since it holds out the promise of a task-independent solution to the problem. Once we see exactly how the solution works, however, it becomes clear that it is limited to a special class of cases in which (1) statistically driven undersampling is (luckily) equivalent to task decomposition, and (2) the dangers of unlearning are somehow being minimized. The technique is suggestive nonetheless, for a variety of developmental factors may yield the functional equivalent of both statistical AND informed undersampling in early learning.
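The incremental-learning strategy discussed in the abstract can be pictured as a staged training schedule: early in training the learner sees only an undersampled, simpler slice of the data, and the full corpus is released in stages. The sketch below is purely illustrative (the function, corpus items, and complexity scores are hypothetical, not taken from the paper's simulations):

```python
def staged_corpus(examples, complexity, epoch, total_epochs, stages=(1, 2, 3)):
    """Return the training examples visible at a given epoch.

    Illustrative incremental-learning schedule: training time is split into
    len(stages) phases, and each phase admits only examples whose complexity
    score falls at or below that phase's ceiling. Early phases thus
    undersample the domain, which (on the abstract's analysis) works only
    when that undersampling happens to mirror a decomposition of the task.

    examples     -- list of training items
    complexity   -- parallel list of integer complexity scores (1 = simplest)
    epoch        -- current training epoch (0-based)
    total_epochs -- total number of epochs planned
    stages       -- complexity ceilings, one per phase of training
    """
    phase = min(epoch * len(stages) // total_epochs, len(stages) - 1)
    ceiling = stages[phase]
    return [x for x, c in zip(examples, complexity) if c <= ceiling]


# Toy items of increasing structural depth, with hand-assigned scores.
corpus = ["aa", "abab", "abcabc"]
scores = [1, 2, 3]

print(staged_corpus(corpus, scores, epoch=0, total_epochs=30))   # simplest only
print(staged_corpus(corpus, scores, epoch=29, total_epochs=30))  # full corpus
```

Nothing in the schedule itself guards against unlearning when later stages arrive, which is the second limitation the abstract flags: earlier competence can be overwritten unless something minimizes that interference.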


Similar books and articles

Currents in connectionism.William Bechtel - 1993 - Minds and Machines 3 (2):125-153.
Relational learning re-examined.Chris Thornton & Andy Clark - 1997 - Behavioral and Brain Sciences 20 (1):83-83.
