Learning Representations of Wordforms With Recurrent Networks: Comment on Sibley, Kello, Plaut, & Elman (2008)

Cognitive Science 33 (7):1183-1186 (2009)

Abstract

Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C‐in‐position‐1, A‐in‐position‐2, T‐in‐position‐3). The problem with coding letters by position (slot‐coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we agree this is a critical problem with many slot‐coding schemes, we question whether the sequence encoder model addresses this limitation, and we highlight another deficiency of the model. We conclude that alternative theories are more promising.
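As a minimal illustration of the overlap problem described above (a hypothetical sketch, not code from Sibley et al. or from this comment), a slot-coding scheme can be expressed as a set of (position, letter) features. Because CAT's letters occupy positions 1-3 while the same letters occupy positions 4-6 of TOMCAT, the two words share no features at all:

```python
# Minimal sketch of slot-coding: each letter is tied to its absolute
# position, so a word is a set of (position, letter) features.
def slot_code(word):
    """E.g. slot_code("CAT") -> {(1, 'C'), (2, 'A'), (3, 'T')}."""
    return {(i + 1, letter) for i, letter in enumerate(word)}

def shared_features(a, b):
    """Features two slot-coded words have in common."""
    return slot_code(a) & slot_code(b)

# CAT occupies positions 1-3, but the same letters sit in positions
# 4-6 of TOMCAT, so slot-coding sees no overlap at all:
print(shared_features("CAT", "TOMCAT"))  # set()

# By contrast, words aligned at position 1 overlap fully:
print(shared_features("CAT", "CATS"))    # {(1, 'C'), (2, 'A'), (3, 'T')}
```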


Similar books and articles

Currents in connectionism. William Bechtel - 1993 - Minds and Machines 3 (2):125-153.
Beyond linguistic alignment. Allan Mazur - 2004 - Behavioral and Brain Sciences 27 (2):205-206.
Précis of Connectionism and the Philosophy of Psychology. Terence Horgan & John Tienson - 1997 - Philosophical Psychology 10 (3):337-356.
