Most words in English are ambiguous: they can mean different things in different contexts. We investigate the implications of different types of semantic ambiguity for connectionist models of word recognition. We present a model in which there is competition to activate distributed semantic representations. The model performs well on the task of retrieving the different meanings of ambiguous words, and is able to simulate data reported by Rodd, Gaskell, and Marslen-Wilson [J. Mem. Lang. 46 (2002) 245] on how semantic ambiguity affects lexical decision performance. In particular, the network shows a disadvantage for words with multiple unrelated meanings (e.g., bark) that coexists with a benefit for words with multiple related word senses (e.g., twist). The ambiguity disadvantage arises because of interference between the different meanings, while the sense benefit arises because of differences in the structure of the attractor basins formed during learning. Words with few senses develop deep, narrow attractor basins, while words with many senses develop shallow, broad basins. We conclude that the mental representations of word meanings can be modelled as stable states within a high-dimensional semantic space, and that variations in the meanings of words shape the landscape of this space.
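The idea of word meanings as stable states in a high-dimensional space can be sketched with a small Hopfield-style attractor network. This is an illustrative toy, not the authors' model: the dimensionality, the number of stored "meanings", and the Hebbian learning rule are all assumptions chosen for simplicity. A noisy semantic pattern settles into the nearest stored attractor, analogous to retrieving a meaning from partial input.

```python
import numpy as np

# Toy attractor network (illustrative assumption, not the original model):
# word meanings are stored as stable states of a Hopfield-style network.
rng = np.random.default_rng(0)
N = 64                                         # semantic dimensionality (assumed)
meanings = rng.choice([-1, 1], size=(3, N))    # three stored "meanings" (assumed)

# Hebbian weight matrix, zero self-connections
W = (meanings.T @ meanings) / N
np.fill_diagonal(W, 0)

def settle(state, steps=20):
    """Update synchronously until the state stops changing, i.e. until it
    has fallen into an attractor basin."""
    state = state.copy()
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Corrupt one stored meaning and let the network clean it up
probe = meanings[0].copy()
flip = rng.choice(N, size=8, replace=False)
probe[flip] *= -1

recovered = settle(probe)
print("matches stored meaning:", np.array_equal(recovered, meanings[0]))
```

The sense benefit in the abstract would correspond to differences in basin shape across words; this sketch only shows the basic settling dynamics that make such basins possible.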
Overlaps in form and meaning between morphologically related words have led to ambiguities in interpreting priming effects in studies of lexical organization. In Semitic languages like Arabic, however, linguistic analysis proposes that one of the three component morphemes of a surface word is the CV-Skeleton, an abstract prosodic unit coding the phonological shape of the surface word and its primary syntactic function, which has no surface phonetic content (McCarthy, J. J. (1981). A prosodic theory of non-concatenative morphology. Linguistic Inquiry, 12, 373-418). The other two morphemes are proposed to be the vocalic melody, which conveys additional syntactic information, and the root, which defines meaning. In three experiments using masked, cross-modal, and auditory-auditory priming we examined the role of the vocalic melody and the CV-Skeleton as potential morphemic units in the processing and representation of Arabic words. Prime/target pairs sharing the vocalic melody but not the CV-Skeleton consistently failed to prime. In contrast, word pairs sharing only the CV-Skeleton primed reliably throughout, with the amount of priming being as large as that observed between word pattern pairs sharing both vocalic melody and CV-Skeleton. Priming between morphologically related words can therefore be observed when there is no overlap either in meaning or in surface phonetic form.
A number of recent studies have examined the effects of phonological variation on the perception of speech. These studies show that both the lexical representations of words and the mechanisms of lexical access are organized so that natural, systematic variation is tolerated by the perceptual system, while a general intolerance of random deviation is maintained. Lexical abstraction distinguishes between phonetic features that form the invariant core of a word and those that are susceptible to variation. Phonological inference relies on the context of surface changes to retrieve the underlying phonological form. In this article we present a model of these processes in speech perception, based on connectionist learning techniques. A simple recurrent network was trained on the mapping from the variant surface form of speech to the underlying form. Once trained, the network exhibited features of both abstraction and inference in its processing of normal speech, and predicted that similar behavior will be found in the perception of nonsense words. This prediction was confirmed in subsequent research (Gaskell & Marslen-Wilson, 1994).
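The simple recurrent network architecture described above can be sketched in a few lines. This is a minimal Elman-style forward pass, not the trained model from the article: the layer sizes, weight scales, and feature coding are assumptions for illustration. The key property is the context layer, a copy of the previous hidden state, which lets each surface segment be interpreted relative to its preceding context, the basis for phonological inference.

```python
import numpy as np

# Minimal Elman-style simple recurrent network (layer sizes and feature
# coding are illustrative assumptions, not the original model's parameters).
rng = np.random.default_rng(1)
n_in, n_hid, n_out = 12, 20, 12    # surface features, hidden units, underlying features

W_ih = rng.normal(0, 0.5, (n_hid, n_in))    # input -> hidden
W_hh = rng.normal(0, 0.5, (n_hid, n_hid))   # context -> hidden (the recurrence)
W_ho = rng.normal(0, 0.5, (n_out, n_hid))   # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run(sequence):
    """Map a sequence of surface feature vectors to underlying feature
    vectors, one segment at a time, carrying context forward."""
    context = np.zeros(n_hid)
    outputs = []
    for x in sequence:
        hidden = sigmoid(W_ih @ x + W_hh @ context)
        outputs.append(sigmoid(W_ho @ hidden))
        context = hidden            # Elman-style context copy
    return np.array(outputs)

# A toy three-segment "word" as random surface feature vectors
surface = rng.random((3, n_in))
underlying = run(surface)
```

In the actual study the weights were trained (by backpropagation through the surface-to-underlying mapping) so that systematic variation such as place assimilation could be undone; the untrained sketch only shows how context propagates through the recurrence.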
Norris et al. argue against using evidence from phonetic decision making to support top-down feedback in lexical access, on the grounds that phonetic decisions rely on processes outside the normal access sequence. This leaves open the possibility that bottom-up connectionist models, with some contextual constraints built into the access process, remain the preferred models of spoken-word recognition.