Minds and Machines 16 (2):107-139 (2006)
Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that they use this near-identity relation to distinguish sentences that are consistent or inconsistent with a familiar grammar. Recent simulations that were claimed to show that this model did not really learn these grammars [Vilcu, M., & Hadley, R. F. (2005). Minds and Machines, 15, 359–382] confounded syntactic types with speech sounds and did not perform standard statistical tests of results.
Keywords: Artificial grammars, Cascade-correlation, Connectionism, Generalization, Neural networks, Representation, Sonority, Syllables
References found in this work
Gary F. Marcus (2001). The Algebraic Mind. The MIT Press.
Rebecca L. Gómez & LouAnn Gerken (2000). Infant Artificial Language Learning and Language Acquisition. Trends in Cognitive Sciences 4 (5):178-186.
Donald Thomas Campbell (1966). Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.
Marius Vilcu & Robert F. Hadley (2005). Two Apparent 'Counterexamples' to Marcus: A Closer Look. Minds and Machines 15 (3-4):359-382.
Similar books and articles
Samuel W. K. Chan, Dynamic Context Generation for Natural Language Understanding: A Multifaceted Knowledge Approach.
Steve Donaldson (2008). A Neural Network for Creative Serial Order Cognitive Behavior. Minds and Machines 18 (1):53-91.
Robert F. Hadley & M. B. Hayward (1997). Strong Semantic Systematicity From Hebbian Connectionist Learning. Minds and Machines 7 (1):1-55.
Dan Hunter (1999). Out of Their Minds: Legal Theory in Neural Networks. Artificial Intelligence and Law 7 (2-3):129-151.
Stan Franklin & Max Garzon (1992). On Stability and Solvability (or, When Does a Neural Network Solve a Problem?). Minds and Machines 2 (1):71-83.
Enrico Blanzieri (1997). Dynamical Learning Algorithms for Neural Networks and Neural Constructivism. Behavioral and Brain Sciences 20 (4):559-559.
Gualtiero Piccinini (2008). Some Neural Networks Compute, Others Don't. Neural Networks 21 (2-3):311-321.