Behavioral and Brain Sciences 23 (4):484-484 (2000)
Abstract: Page argues that localist models can be applied to a number of problems that are difficult for distributed models. However, it is easy to find examples where the opposite is true. This commentary illustrates the superiority of distributed models in the domain of artificial grammar learning, a paradigm widely used to investigate implicit learning.
Similar books and articles
A. Mike Burton (2000). The Many Ways to Distribute Distributed Representations. Behavioral and Brain Sciences 23 (4):472-473.
Stephen Grossberg (1997). Neural Models of Development and Learning. Behavioral and Brain Sciences 20 (4):566-566.
R. Hans Phaf & Gezinus Wolters (2000). A Competitive Manifesto. Behavioral and Brain Sciences 23 (4):487-488.
J. Vokey (2004). Opposition Logic and Neural Network Models in Artificial Grammar Learning. Consciousness and Cognition 13 (3):565-578.
Colin Martindale (2000). Localist Representations Are a Desirable Emergent Property of Neurologically Plausible Neural Networks. Behavioral and Brain Sciences 23 (4):485-486.
Stephen Grossberg (2000). Localist but Distributed Representations. Behavioral and Brain Sciences 23 (4):478-479.
Mike Page (2000). Sticking to the Manifesto. Behavioral and Brain Sciences 23 (4):496-505.
Richard J. Tunney & David R. Shanks (2003). Does Opposition Logic Provide Evidence for Conscious and Unconscious Processes in Artificial Grammar Learning? Consciousness and Cognition 12 (2):201-218.
Mike Page (2000). Connectionist Modelling in Psychology: A Localist Manifesto. Behavioral and Brain Sciences 23 (4):443-467.