Cognitive Science 35 (1):119-155 (2011)
Abstract: This paper reconsiders the diphone-based word segmentation model of Cairns, Shillcock, Chater, and Levy (1997) and Hockema (2006), previously thought to be unlearnable. A statistically principled learning model is developed using Bayes' theorem and reasonable assumptions about infants' implicit knowledge. The ability to recover phrase-medial word boundaries is tested using phonetic corpora derived from spontaneous interactions with children and adults. The (unsupervised and semi-supervised) learning models are shown to exhibit several crucial properties. First, only a small amount of language exposure is required to achieve the model's ceiling performance, equivalent to between 1 day and 1 month of caregiver input. Second, the models are robust to variation, both in the free parameter and the input representation. Finally, both the learning and baseline models exhibit undersegmentation, argued to have significant ramifications for speech processing as a whole.
Keywords: Word segmentation; Language acquisition; Computational model; Bayesian; Unsupervised learning
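The core idea of diphone-based segmentation can be illustrated with a minimal sketch. This is not the paper's model (which learns without boundary labels); it is a simplified supervised baseline, with illustrative function names, that estimates the probability of a word boundary falling inside each diphone and then segments unbroken phone strings by thresholding that probability:

```python
# Minimal sketch (not the paper's implementation): estimate
# P(boundary | diphone) from a toy corpus whose word boundaries are known,
# then segment unbroken strings by thresholding that probability.
from collections import Counter

def diphone_boundary_probs(utterances):
    """P(boundary | xy) = count(x#y) / (count(x#y) + count(xy))."""
    boundary = Counter()  # diphones spanning a word boundary
    internal = Counter()  # diphones inside a single word
    for utt in utterances:
        words = utt.split()
        for w in words:
            for a, b in zip(w, w[1:]):
                internal[(a, b)] += 1
        for w1, w2 in zip(words, words[1:]):
            boundary[(w1[-1], w2[0])] += 1
    return {d: boundary[d] / (boundary[d] + internal[d])
            for d in set(boundary) | set(internal)}

def segment(phones, probs, threshold=0.5):
    """Insert a boundary wherever P(boundary | diphone) exceeds threshold."""
    out = [phones[0]]
    for a, b in zip(phones, phones[1:]):
        if probs.get((a, b), 0.0) > threshold:
            out.append(' ')
        out.append(b)
    return ''.join(out)
```

The threshold here plays the role of the model's free parameter; the paper's result that performance is robust to this parameter corresponds to the observation that most diphones have boundary probabilities near 0 or 1.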
Similar books and articles
Heather Bortfeld (2004). Which Came First: Infants Learning Language or Motherese? Behavioral and Brain Sciences 27 (4):505-506.
Marc Ettlinger, Amy S. Finn & Carla L. Hudson Kam (2011). The Effect of Sonority on Word Segmentation: Evidence for the Use of a Phonological Universal. Cognitive Science 36 (4):655-673.
Erik D. Thiessen (2010). Effects of Visual Information on Adults' and Infants' Auditory Statistical Learning. Cognitive Science 34 (6):1093-1106.
Bryan R. Gibson, Timothy T. Rogers & Xiaojin Zhu (2013). Human Semi-Supervised Learning. Topics in Cognitive Science 5 (1):132-172.
Erik D. Thiessen & Philip I. Pavlik (2013). iMinerva: A Mathematical Model of Distributional Statistical Learning. Cognitive Science 37 (2):310-343.
Axel Cleeremans (1993). Mechanisms of Implicit Learning: Connectionist Models of Sequence Processing. MIT Press.
Sean Fulop & Nick Chater (2013). Editors' Introduction: Why Formal Learning Theory Matters for Cognitive Science. Topics in Cognitive Science 5 (1):3-12.
Jukka Corander & Pekka Marttinen (2006). Bayesian Model Learning Based on Predictive Entropy. Journal of Logic, Language and Information 15 (1-2):5-20.
Archana Balyan, S. S. Agrawal & Amita Dev (2012). Automatic Phonetic Segmentation of Hindi Speech Using Hidden Markov Model. AI and Society 27 (4):543-549.
Keith S. Apfelbaum & Bob McMurray (2011). Using Variability to Guide Dimensional Weighting: Associative Mechanisms in Early Word Learning. Cognitive Science 35 (6):1105-1138.
Pierre Barbaroux & Gilles Enée (2005). Spontaneous Coordination and Evolutionary Learning Processes in an Agent-Based Model. Mind and Society 4 (2):179-195.
Added to index: 2010-12-10