It is widely assumed that human learning and the structure of human languages are intimately related. This relationship is frequently suggested to derive from a language-specific biological endowment, which encodes universal, but communicatively arbitrary, principles of language structure (a Universal Grammar or UG). How might such a UG have evolved? We argue that UG could not have arisen either by biological adaptation or non-adaptationist genetic processes, resulting in a logical problem of language evolution. Specifically, as the processes of language change are much more rapid than processes of genetic change, language constitutes a "moving target" both over time and across different human populations, and, hence, cannot provide a stable environment to which language genes could have adapted. We conclude that a biologically determined UG is not evolutionarily viable. Instead, the original motivation for UG arises because language has been shaped to fit the human brain, rather than vice versa. Following Darwin, we view language itself as a complex and interdependent "organism," which evolves under selectional pressures from human learning and processing mechanisms. That is, languages themselves are shaped by severe selectional pressure from each generation of language users and learners. This suggests that apparently arbitrary aspects of linguistic structure may result from general learning and processing biases deriving from the structure of thought processes, perceptuo-motor factors, cognitive limitations, and pragmatics.
The notion that the form of a word bears an arbitrary relation to its meaning accounts only partly for the attested relations between form and meaning in the languages of the world. Recent research suggests a more textured view of vocabulary structure, in which arbitrariness is complemented by iconicity (aspects of form resemble aspects of meaning) and systematicity (statistical regularities in forms predict function). Experimental evidence suggests these form-to-meaning correspondences serve different functions in language processing, development, and communication: systematicity facilitates category learning by means of phonological cues, iconicity facilitates word learning and communication by means of perceptuomotor analogies, and arbitrariness facilitates meaning individuation through distinctive forms. Processes of cultural evolution help to explain how these competing motivations shape vocabulary structure.
Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain-general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C-induction), rather than to model the structure of the natural world (N-induction). We argue that, of the two, C-induction is dramatically easier. More broadly, we argue that understanding the acquisition of any cultural form, whether linguistic or otherwise, during development, requires considering the corresponding question of how that cultural form arose through processes of cultural evolution. This perspective helps resolve the "logical" problem of language acquisition and has far-reaching implications for evolutionary psychology.
Previous research on lexical development has aimed to identify the factors that enable accurate initial word-referent mappings based on the assumption that the accuracy of initial word-referent associations is critical for word learning. The present study challenges this assumption. Adult English speakers learned an artificial language within a cross-situational learning paradigm. Visual fixation data were used to assess the direction of visual attention. Participants whose longest fixations in the initial trials fell more often on distracter images performed significantly better at test than participants whose longest fixations fell more often on referent images. Thus, inaccurate initial word-referent mappings may actually benefit learning.
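The logic of the cross-situational learning paradigm can be illustrated with a toy simulation. The sketch below is not the study's actual design (the vocabulary size, trial counts, and scene composition are invented for illustration): each trial pairs one word with a scene containing its true referent plus distracters, so any single trial is ambiguous, but an associative learner that tallies word-object co-occurrences across trials can recover the mappings at test.

```python
import random
from collections import defaultdict

# Toy cross-situational learning simulation (hypothetical parameters,
# not the experiment's actual design). Each trial presents one word
# alongside its true referent and three distracter objects.
random.seed(0)
words = [f"word{i}" for i in range(6)]
referents = {w: f"object{i}" for i, w in enumerate(words)}
objects = list(referents.values())

counts = defaultdict(int)
for _ in range(120):  # 120 training trials
    word = random.choice(words)
    scene = {referents[word]} | set(random.sample(
        [o for o in objects if o != referents[word]], 3))
    for obj in scene:
        counts[(word, obj)] += 1  # every object in view gets credit

# Test phase: for each word, pick the most frequently co-occurring object.
choices = {w: max(objects, key=lambda o, w=w: counts[(w, o)]) for w in words}
accuracy = sum(choices[w] == referents[w] for w in words) / len(words)
print(f"test accuracy: {accuracy:.2f}")
```

Because the true referent co-occurs with its word on every trial while each distracter appears only intermittently, the co-occurrence statistics disambiguate the mappings even though no single trial does.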
Psychologists have used experimental methods to study language for more than a century. However, only with the recent availability of large-scale linguistic databases has a more complete picture begun to emerge of how language is actually used, and what information is available as input to language acquisition. Analyses of such "big data" have resulted in reappraisals of key assumptions about the nature of language. As an example, we focus on corpus-based research that has shed new light on the arbitrariness of the sign: the longstanding assumption that the relationship between the sound of a word and its meaning is arbitrary. The results reveal a systematic relationship between the sound of a word and its meaning, which is stronger for early acquired words. Moreover, the analyses further uncover a systematic relationship between words and their lexical categories—nouns and verbs sound different from each other—affecting how we learn new words and use them in sentences. Together, these results point to a division of labor between arbitrariness and systematicity in sound-meaning mappings. We conclude by arguing in favor of including "big data" analyses in the language scientist's methodological toolbox.
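One common corpus-based way to quantify sound-meaning systematicity is to correlate pairwise form similarity with pairwise meaning similarity across a vocabulary. The sketch below is a deliberately tiny illustration of that idea, not the actual published pipeline: the six words and their two-dimensional "meaning vectors" are invented, and string similarity stands in for proper phonological distance.

```python
import itertools
from difflib import SequenceMatcher

# Invented toy lexicon: word form -> 2-d meaning vector (hypothetical).
lexicon = {
    "glimmer": (0.9, 0.1), "glisten": (0.85, 0.15), "glow": (0.8, 0.2),
    "thump": (0.1, 0.9), "thud": (0.15, 0.85), "bump": (0.2, 0.8),
}

def form_sim(a, b):
    # crude stand-in for phonological similarity
    return SequenceMatcher(None, a, b).ratio()

def meaning_sim(u, v):
    return 1 - (abs(u[0] - v[0]) + abs(u[1] - v[1])) / 2

pairs = list(itertools.combinations(lexicon, 2))
f = [form_sim(a, b) for a, b in pairs]
m = [meaning_sim(lexicon[a], lexicon[b]) for a, b in pairs]

# Pearson correlation over all word pairs: positive r indicates that
# similar-sounding words tend to have similar meanings (systematicity).
n = len(pairs)
mf, mm = sum(f) / n, sum(m) / n
cov = sum((x - mf) * (y - mm) for x, y in zip(f, m))
r = cov / ((sum((x - mf) ** 2 for x in f)
            * sum((y - mm) ** 2 for y in m)) ** 0.5)
print(f"form-meaning correlation r = {r:.2f}")
```

In corpus studies the same logic is applied to thousands of words, with permutation tests to establish whether the observed correlation exceeds chance.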
The ability to convey our thoughts using an infinite number of linguistic expressions is one of the hallmarks of human language. Understanding the nature of the psychological mechanisms and representations that give rise to this unique productivity is a fundamental goal for the cognitive sciences. A long-standing hypothesis is that single words and rules form the basic building blocks of linguistic productivity, with multiword sequences being treated as units only in peripheral cases such as idioms. The new millennium, however, has seen a shift toward construing multiword linguistic units not as linguistic rarities, but as important building blocks for language acquisition and processing. This shift—which originated within theoretical approaches that emphasize language learning and use—has far-reaching implications for theories of language representation, processing, and acquisition. Incorporating multiword units as integral building blocks blurs the distinction between grammar and lexicon; calls for models of production and comprehension that can accommodate and give rise to the effect of multiword information on processing; and highlights the importance of such units to learning. In this special topic, we bring together cutting-edge work on multiword sequences in theoretical linguistics, first-language acquisition, psycholinguistics, computational modeling, and second-language learning to present a comprehensive overview of the prominence and importance of such units in language, their possible role in explaining differences between first- and second-language learning, and the challenges the combined findings pose for theories of language.
Our understanding of language, its origins and subsequent evolution, is shaped not only by data and theories from the language sciences, but also fundamentally by the biological sciences. Recent developments in genetics and evolutionary theory offer not only very strong constraints on what scenarios of language evolution are possible and probable, but also exciting opportunities for understanding otherwise puzzling phenomena. Due to the breathtaking rate of advancement in these fields, and the complexity, subtlety, and sometimes apparent non-intuitiveness of the phenomena discovered, some of these recent developments have either been completely missed by language scientists or misperceived and misrepresented. In this short paper, we offer an update on some of these findings and theoretical developments through a selection of illustrative examples and discussions that cast new light on current debates in the language sciences. The main message of our paper is that life is much more complex and nuanced than anybody could have predicted even a few decades ago, and that we need to be flexible in our theorizing instead of embracing a priori dogmas and trying to patch paradigms that are no longer satisfactory.
We agree with Caplan & Waters that there are problems with the single-resource theory of sentence comprehension. However, we challenge their dual-resource alternative on theoretical and empirical grounds and point to a more coherent solution that abandons the notion of working memory resources.
Intuitively, the accuracy of initial word-referent mappings should be positively correlated with the outcome of learning. Yet recent evidence suggests an inverse effect of initial accuracy in adults, whereby greater accuracy of initial mappings is associated with poorer outcomes in a cross-situational learning task. Here, we examine the impact of initial accuracy on 4-year-olds, 10-year-olds, and adults. For half of the participants most word-referent mappings were initially correct and for the other half most mappings were initially incorrect. Initial accuracy was positively related to learning outcomes in 4-year-olds, had no effect on 10-year-olds' learning, and was inversely related to learning outcomes in adults. Examination of item learning patterns revealed item interdependence for adults and 4-year-olds but not 10-year-olds. These findings point to a qualitative change in language learning processes over development.
Second-language learners rarely arrive at native proficiency in a number of linguistic domains, including morphological and syntactic processing. Previous approaches to understanding the different outcomes of first- versus second-language learning have focused on cognitive and neural factors. In contrast, we explore the possibility that children and adults may rely on different linguistic units throughout the course of language learning, with specific focus on the granularity of those units. Following recent psycholinguistic evidence for the role of multiword chunks in online language processing, we explore the hypothesis that children rely more heavily on multiword units in language learning than do adults learning a second language. To this end, we take an initial step toward using large-scale, corpus-based computational modeling as a tool for exploring the granularity of speakers' linguistic units. Employing a computational model of language learning, the Chunk-Based Learner, we compare the usefulness of chunk-based knowledge in accounting for the speech of second-language learners versus children and adults speaking their first language. Our findings suggest that while multiword units are likely to play a role in second-language learning, adults may learn less useful chunks, rely on them to a lesser extent, and arrive at them through different means than children learning a first language.
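The core intuition behind chunk-based learning can be sketched in a few lines. The following is a heavily simplified illustration in the spirit of the Chunk-Based Learner, not the model itself: it places a chunk boundary wherever the backward transitional probability between adjacent words falls below the average for the utterance, so frequently co-occurring word pairs cohere into multiword chunks. The toy corpus is invented.

```python
from collections import defaultdict

# Invented toy corpus of child-directed-style utterances.
corpus = [
    "the dog is here", "the dog is there", "the cat is here",
    "where is the dog", "where is the cat", "the cat is there",
]

pair_count = defaultdict(int)
word_count = defaultdict(int)
for utt in corpus:
    ws = utt.split()
    for w in ws:
        word_count[w] += 1
    for a, b in zip(ws, ws[1:]):
        pair_count[(a, b)] += 1

def btp(a, b):
    # backward transitional probability: P(a precedes | b occurs)
    return pair_count[(a, b)] / word_count[b]

def chunk(utterance):
    ws = utterance.split()
    tps = [btp(a, b) for a, b in zip(ws, ws[1:])]
    avg = sum(tps) / len(tps)
    chunks, current = [], [ws[0]]
    for tp, w in zip(tps, ws[1:]):
        if tp < avg:               # weak transition: start a new chunk
            chunks.append(" ".join(current))
            current = [w]
        else:                      # strong transition: extend the chunk
            current.append(w)
    chunks.append(" ".join(current))
    return chunks

print(chunk("the dog is here"))  # → ['the dog', 'is here']
```

Even this crude version groups "the dog" and "is here" into units rather than single words, illustrating how distributional statistics alone can yield multiword building blocks of the kind the model attributes to child learners.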
Cognitive developmental disorders cannot be properly understood without due attention to the developmental process, and we commend the authors' simulations in this regard. We note the contribution of these simulations to the nascent field of connectionist modeling of developmental disorders and outline a set of criteria for assessing individual models in the hope of furthering future modeling efforts.
Why are children better language learners than adults despite being worse at a range of other cognitive tasks? Here, we explore the role of multiword sequences in explaining L1–L2 differences in learning. In particular, we propose that children and adults differ in their reliance on such multiword units (MWUs) in learning, and that this difference affects learning strategies and outcomes, and leads to difficulty in learning certain grammatical relations. In the first part, we review recent findings that suggest that MWUs play a facilitative role in learning. We then discuss the implications of these findings for L1–L2 differences: We hypothesize that adults are both less likely to extract MWUs and less capable of benefiting from them in the process of learning. In the next section, we draw on psycholinguistic, developmental, and computational findings to support these predictions. We end with a discussion of the relation between this proposal and other accounts of L1–L2 difficulty.
Our target article argued that a genetically specified Universal Grammar (UG), capturing arbitrary properties of languages, is not tenable on evolutionary grounds, and that the close fit between language and language learners arises because language is shaped by the brain, rather than the reverse. Few commentaries defend a genetically specified UG. Some commentators argue that we underestimate the importance of processes of cultural transmission; some propose additional cognitive and brain mechanisms that may constrain language and perhaps differentiate humans from nonhuman primates; and others argue that we overstate or understate the case against co-evolution of language genes. In engaging with these issues, we suggest that a new synthesis concerning the relationship between brains, genes, and language may be emerging.