Sequence Encoders Enable Large‐Scale Lexical Modeling: Reply to Bowers and Davis (2009)

Cognitive Science 33 (7):1187-1191 (2009)

Abstract

Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed‐width distributed representations of variable‐length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (2009) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence it is not a useful component of large‐scale word‐reading models. In this reply, it is noted that the sequence encoder has facilitated the creation of large‐scale word‐reading models. The reasons for this success are explained and stand as counterarguments to claims made by Bowers and Davis.
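
The core idea described above is to compress a variable-length sequence (e.g., the letters of a word) into a single fixed-width vector by training recurrent networks to reproduce their input. Below is a minimal sketch of that idea, assuming a GRU-based encoder-decoder in PyTorch; the original sequence encoder of Sibley et al. (2008) used simple recurrent (Elman) networks, and the class name `SequenceEncoder`, the hidden size, and the teacher-forced decoder here are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    """Toy autoencoder: map a variable-length symbol sequence to one
    fixed-width vector, then decode that vector back into the sequence."""

    def __init__(self, n_symbols, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_symbols)

    def forward(self, seq):
        x = self.embed(seq)                # (batch, length, hidden)
        _, code = self.encoder(x)          # code: (1, batch, hidden), fixed width
        # Decode with the code as the initial state (teacher forcing on x)
        out, _ = self.decoder(x, code)
        logits = self.readout(out)         # predict each input symbol back
        return code.squeeze(0), logits

# Usage: encode the three letters of "cat" (indices are arbitrary here)
model = SequenceEncoder(n_symbols=27)
seq = torch.tensor([[3, 1, 20]])           # c, a, t
code, logits = model(seq)
print(code.shape)                          # torch.Size([1, 64]): same width for any length
loss = nn.functional.cross_entropy(logits.flatten(0, 1), seq.flatten())
```

The point of the fixed-width `code` is that a downstream word-reading model can consume words of any length through a single, uniform interface, which is what lets such models scale beyond monosyllabic vocabularies.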
