Rumelhart and McClelland's chapter about learning the past tense created a degree of controversy extraordinary even in the adversarial culture of modern science. It also stimulated a vast amount of research that advanced the understanding of the past tense, inflectional morphology in English and other languages, the nature of linguistic representations, relations between language and other phenomena such as reading and object recognition, the properties of artificial neural networks, and other topics. We examine the impact of the Rumelhart and McClelland model with the benefit of 25 years of hindsight. It is not clear who “won” the debate. It is clear, however, that the core ideas that the model instantiated have been assimilated into many areas in the study of language, changing the focus of research from abstract characterizations of linguistic competence to an emphasis on the role of the statistical structure of language in acquisition and processing.
Page's proposal to stipulate representations in which individual units correspond to meaningful entities is too unconstrained to support effective theorizing. An approach combining general computational principles with domain-specific assumptions, in which learning is used to discover representations that are effective in solving tasks, provides more insight into why cognitive and neural systems are organized the way they are.
We share with Anderson & Lebiere (A&L) (and with Newell before them) the goal of developing a domain-general framework for modeling cognition, and we take seriously the issue of evaluation criteria. We advocate a more focused approach than the one reflected in Newell's criteria, based on analysis of failures as well as successes of models brought into close contact with experimental data. A&L attribute the shortcomings of our parallel-distributed processing framework to a failure to acknowledge a symbolic level of thought. Our framework does acknowledge a symbolic level, contrary to their claim. What we deny is that the symbolic level is the level at which the principles of cognitive processing should be formulated. Models cast at a symbolic level are sometimes useful as high-level approximations of the underlying mechanisms of thought. The adequacy of this approximation will continue to increase as symbolic modelers continue to incorporate principles of parallel distributed processing.
Connectionist models offer concrete mechanisms for cognitive processes. When these models mimic the performance of human subjects they can offer insights into the computations which might underlie human cognition. We illustrate this with the performance of a recurrent connectionist network which produces the meaning of words in response to their spelling pattern. It mimics a paradoxical pattern of errors produced by people trying to read degraded words. The reason why the network produces the surprising error pattern lies in the nature of the attractors which it develops as it learns to map spelling patterns to semantics. The key role of attractor structure in the successful simulation suggests that the normal adult semantic reading route may involve attractor dynamics, and thus the paradoxical error pattern is explained.
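The idea of a degraded input settling into a learned attractor can be illustrated with a toy Hopfield-style network. This is only a minimal sketch of attractor dynamics in general, not the recurrent spelling-to-semantics model described in the abstract; the two stored patterns, the network size, and the corruption level are all arbitrary assumptions chosen for the demonstration.

```python
import numpy as np

N = 64
# Two orthogonal stored patterns over N binary (+1/-1) units
# (stand-ins for learned "semantic" states, chosen for illustration only).
A = np.ones(N)
B = np.array([(-1) ** i for i in range(N)], dtype=float)

# Hebbian weights: sum of outer products of stored patterns, zero diagonal.
W = np.outer(A, A) + np.outer(B, B)
np.fill_diagonal(W, 0)

def settle(state, max_steps=10):
    """Apply synchronous sign updates until a fixed point (an attractor) is reached."""
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1  # break ties deterministically
        if np.array_equal(new, state):
            return new
        state = new
    return state

# Degrade pattern A by flipping its first 10 units, then let the network settle.
probe = A.copy()
probe[:10] *= -1
recovered = settle(probe)
print(np.array_equal(recovered, A))  # → True: the degraded input falls back into A's basin
```

The point of the sketch is the basin-of-attraction behavior: a corrupted input is pulled to the nearest stored state, which is the kind of dynamics invoked to explain why degraded words can yield systematic, rather than random, errors.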
The search for a universal theory of reading is misguided. Instead, theories should articulate general principles of neural computation that interact with language-specific learning environments to explain the full diversity of observed reading-related phenomena across the world's languages.