Results for 'Artificial language learning experiment'

999 found
  1. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment. Tao Gong, Yau W. Lam & Lan Shuai - 2016 - Frontiers in Psychology 7.
  2. A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word‐Order Universal. Jennifer Culbertson & Paul Smolensky - 2012 - Cognitive Science 36 (8):1468-1498.
    In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners’ input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word‐order patterns in (...)
    11 citations
  3. Experience With a Linguistic Variant Affects the Acquisition of Its Sociolinguistic Meaning: An Alien‐Language Learning Experiment. Wei Lai, Péter Rácz & Gareth Roberts - 2020 - Cognitive Science 44 (4):e12832.
    How do speakers learn the social meaning of different linguistic variants, and what factors influence how likely a particular social–linguistic association is to be learned? It has been argued that the social meaning of more salient variants should be learned faster, and that learners' pre‐existing experience of a variant will influence its salience. In this paper, we report two artificial language learning experiments investigating this. Each experiment involved two language learning stages followed by a test. The first (...)
    2 citations
  4. The Relationship Between Artificial and Second Language Learning. Marc Ettlinger, Kara Morgan-Short, Mandy Faretta-Stutenberg & Patrick C. M. Wong - 2016 - Cognitive Science 40 (4):822-847.
    Artificial language learning experiments have become an important tool in exploring principles of language and language learning. A persistent question in all of this work, however, is whether ALL engages the linguistic system and whether ALL studies are ecologically valid assessments of natural language ability. In the present study, we considered these questions by examining the relationship between performance in an ALL task and second language learning ability. Participants enrolled in a (...)
    5 citations
  5. Predictability and Variation in Language Are Differentially Affected by Learning and Production. Aislinn Keogh, Simon Kirby & Jennifer Culbertson - 2024 - Cognitive Science 48 (4):e13435.
    General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition that is implicated in language use is working memory—the component of short‐term memory used for temporary storage and manipulation of information. In this study, we consider the relationship between working memory and regularization of linguistic variation. Regularization is a well‐documented process (...)
  6. All Together Now: Concurrent Learning of Multiple Structures in an Artificial Language. Alexa R. Romberg & Jenny R. Saffran - 2013 - Cognitive Science 37 (7):1290-1320.
    Natural languages contain many layers of sequential structure, from the distribution of phonemes within words to the distribution of phrases within utterances. However, most research modeling language acquisition using artificial languages has focused on only one type of distributional structure at a time. In two experiments, we investigated adult learning of an artificial language that contains dependencies between both adjacent and non-adjacent words. We found that learners rapidly acquired both types of regularities and that the (...)
    14 citations
  7. Drift as a Driver of Language Change: An Artificial Language Experiment. Rafael Ventura, Joshua B. Plotkin & Gareth Roberts - 2022 - Cognitive Science 46 (9):e13197.
    Over half a century ago, George Zipf observed that more frequent words tend to be older. Corpus studies since then have confirmed this pattern, with more frequent words being replaced and regularized less often than less frequent words. Two main hypotheses have been proposed to explain this: that frequent words change less because selection against innovation is stronger at higher frequencies, or that they change less because stochastic drift is stronger at lower frequencies. Here, we report the first experimental test (...)
  8. Under What Conditions Can Recursion Be Learned? Effects of Starting Small in Artificial Grammar Learning of Center‐Embedded Structure. Fenna H. Poletiek, Christopher M. Conway, Michelle R. Ellefson, Jun Lai, Bruno R. Bocanegra & Morten H. Christiansen - 2018 - Cognitive Science 42 (8):2855-2889.
    It has been suggested that external and/or internal limitations paradoxically may lead to superior learning, that is, the concepts of starting small and less is more (Elman; Newport). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two (...)
    2 citations
  9. Co‐Occurrence, Extension, and Social Salience: The Emergence of Indexicality in an Artificial Language. Aini Li & Gareth Roberts - 2023 - Cognitive Science 47 (5):e13290.
    We investigated the emergence of sociolinguistic indexicality using an artificial-language-learning paradigm. Sociolinguistic indexicality involves the association of linguistic variants with nonlinguistic social or contextual features. Any linguistic variant can acquire “constellations” of such indexical meanings, though they also exhibit an ordering, with first-order indices associated with particular speaker groups and higher-order indices targeting stereotypical attributes of those speakers. Much natural-language research has been conducted on this phenomenon, but little experimental work has focused on how indexicality emerges. (...)
    1 citation
  10. Language Learning and Control in Monolinguals and Bilinguals. James Bartolotti & Viorica Marian - 2012 - Cognitive Science 36 (6):1129-1147.
    Parallel language activation in bilinguals leads to competition between languages. Experience managing this interference may aid novel language learning by improving the ability to suppress competition from known languages. To investigate the effect of bilingualism on the ability to control native-language interference, monolinguals and bilinguals were taught an artificial language designed to elicit between-language competition. Partial activation of interlingual competitors was assessed with eye-tracking and mouse-tracking during a word recognition task in the novel (...)
    8 citations
  11. Order Matters! Influences of Linear Order on Linguistic Category Learning. Dorothée B. Hoppe, Jacolien van Rij, Petra Hendriks & Michael Ramscar - 2020 - Cognitive Science 44 (11):e12910.
    Linguistic category learning has been shown to be highly sensitive to linear order, and depending on the task, differentially sensitive to the information provided by preceding category markers (premarkers, e.g., gendered articles) or succeeding category markers (postmarkers, e.g., gendered suffixes). Given that numerous systems for marking grammatical categories exist in natural languages, it follows that a better understanding of these findings can shed light on the factors underlying this diversity. In two discriminative learning simulations and an artificial (...)
    6 citations
  12. Order Matters! Influences of Linear Order on Linguistic Category Learning. Dorothée B. Hoppe, Jacolien van Rij, Petra Hendriks & Michael Ramscar - 2020 - Cognitive Science 44 (11):e12910.
    Linguistic category learning has been shown to be highly sensitive to linear order, and depending on the task, differentially sensitive to the information provided by preceding category markers (premarkers, e.g., gendered articles) or succeeding category markers (postmarkers, e.g., gendered suffixes). Given that numerous systems for marking grammatical categories exist in natural languages, it follows that a better understanding of these findings can shed light on the factors underlying this diversity. In two discriminative learning simulations and an artificial (...)
    6 citations
  13. The Role of Feedback in the Statistical Learning of Language‐Like Regularities. Felicity F. Frinsel, Fabio Trecca & Morten H. Christiansen - 2024 - Cognitive Science 48 (3):e13419.
    In language learning, learners engage with their environment, incorporating cues from different sources. However, in lab‐based experiments, using artificial languages, many of the cues and features that are part of real‐world language learning are stripped away. In three experiments, we investigated the role of positive, negative, and mixed feedback on the gradual learning of language‐like statistical regularities within an active guessing game paradigm. In Experiment 1, participants received deterministic feedback (100%), whereas probabilistic (...)
  14. Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2014 - Cognitive Science 38 (4):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The (...)
    5 citations
  15. Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2015 - Cognitive Science 39 (2):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural‐language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The (...)
    2 citations
  16. Exploiting Multiple Sources of Information in Learning an Artificial Language: Human Data and Modeling. Pierre Perruchet & Barbara Tillmann - 2010 - Cognitive Science 34 (2):255-285.
    This study investigates the joint influences of three factors on the discovery of new word‐like units in a continuous artificial speech stream: the statistical structure of the ongoing input, the initial word‐likeness of parts of the speech flow, and the contextual information provided by the earlier emergence of other word‐like units. Results of an experiment conducted with adult participants show that these sources of information have strong and interactive influences on word discovery. The authors then examine the ability (...)
    5 citations
  17. Individual Differences in Learning Abilities Impact Structure Addition: Better Learners Create More Structured Languages. Tamar Johnson, Noam Siegelman & Inbal Arnon - 2020 - Cognitive Science 44 (8):e12877.
    Over the last decade, iterated learning studies have provided compelling evidence for the claim that linguistic structure can emerge from non‐structured input, through the process of transmission. However, it is unclear whether individuals differ in their tendency to add structure, an issue with implications for understanding who are the agents of change. Here, we identify and test two contrasting predictions: The first sees learning as a pre‐requisite for structure addition, and predicts a positive correlation between learning accuracy (...)
    5 citations
  18. Relationships Between Language Structure and Language Learning: The Suffixing Preference and Grammatical Categorization. Michelle C. St Clair, Padraic Monaghan & Michael Ramscar - 2009 - Cognitive Science 33 (7):1317-1329.
    It is a reasonable assumption that universal properties of natural languages are not accidental. They occur either because they are underwritten by genetic code, because they assist in language processing or language learning, or due to some combination of the two. In this paper we investigate one such language universal: the suffixing preference across the world’s languages, whereby inflections tend to be added to the end of words. A corpus analysis of child‐directed speech in English found (...)
    24 citations
  19. Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials. Christine E. Potter, Tianlin Wang & Jenny R. Saffran - 2017 - Cognitive Science 41 (S4):913-927.
    Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin (...)
    5 citations
  20. Adult Learning and Language Simplification. Mark Atkinson, Kenny Smith & Simon Kirby - 2018 - Cognitive Science 42 (8):2818-2854.
    Languages spoken in larger populations are relatively simple. A possible explanation for this is that languages with a greater number of speakers tend to also be those with higher proportions of non‐native speakers, who may simplify language during learning. We assess this explanation for the negative correlation between population size and linguistic complexity in three experiments, using artificial language learning techniques to investigate both the simplifications made by individual adult learners and the potential for such (...)
    3 citations
  21. Structured Sequence Learning: Animal Abilities, Cognitive Operations, and Language Evolution. Christopher I. Petkov & Carel ten Cate - 2020 - Topics in Cognitive Science 12 (3):828-842.
    Human language is a salient example of a neurocognitive system that is specialized to process complex dependencies between sensory events distributed in time, yet how this system evolved and specialized remains unclear. Artificial Grammar Learning (AGL) studies have generated a wealth of insights into how human adults and infants process different types of sequencing dependencies of varying complexity. The AGL paradigm has also been adopted to examine the sequence processing abilities of nonhuman animals. We critically evaluate this (...)
    6 citations
  22. The Role of Prior Experience in Language Acquisition. Jill Lany, Rebecca L. Gómez & Lou Ann Gerken - 2007 - Cognitive Science 31 (3):481-507.
    Learners exposed to an artificial language recognize its abstract structural regularities when instantiated in a novel vocabulary (e.g., Gómez, Gerken, & Schvaneveldt, 2000; Tunney & Altmann, 2001). We asked whether such sensitivity accelerates subsequent learning, and enables acquisition of more complex structure. In Experiment 1, pre-exposure to a category-induction language of the form aX bY sped subsequent learning when the language is instantiated in a different vocabulary. In Experiment 2, while naïve learners (...)
    6 citations
  23. One Cue's Loss Is Another Cue's Gain—Learning Morphophonology Through Unlearning. Erdin Mujezinović, Vsevolod Kapatsinski & Ruben van de Vijver - 2024 - Cognitive Science 48 (5):e13450.
    A word often expresses many different morphological functions. Which part of a word contributes to which part of the overall meaning is not always clear, which raises the question as to how such functions are learned. While linguistic studies tacitly assume the co-occurrence of cues and outcomes to suffice in learning these functions (Baer-Henney, Kügler, & van de Vijver, 2015; Baer-Henney & van de Vijver, 2012), error-driven learning suggests that contingency rather than contiguity is crucial (Nixon, 2020; Ramscar, (...)
  24. Semantic Coherence Facilitates Distributional Learning. Long Ouyang, Lera Boroditsky & Michael C. Frank - 2017 - Cognitive Science 41 (S4):855-884.
    Computational models have shown that purely statistical knowledge about words’ linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that “postman” and “mailman” are semantically similar because they have quantitatively similar patterns of association with other words. In contrast to these computational results, artificial language learning experiments suggest that distributional statistics alone do not facilitate learning of linguistic categories. However, experiments in this paradigm expose participants (...)
    2 citations
  25. Artificial Intelligence, Language, and the Study of Knowledge. Ira Goldstein & Seymour Papert - 1977 - Cognitive Science 1 (1):84-123.
    This paper studies the relationship of Artificial Intelligence to the study of language and the representation of the underlying knowledge which supports the comprehension process. It develops the view that intelligence is based on the ability to use large amounts of diverse kinds of knowledge in procedural ways, rather than on the possession of a few general and uniform principles. The paper also provides a unifying thread to a variety of recent approaches to natural language comprehension. We (...)
    6 citations
  26. Large Language Models Demonstrate the Potential of Statistical Learning in Language. Pablo Contreras Kallens, Ross Deans Kristensen-McLachlan & Morten H. Christiansen - 2023 - Cognitive Science 47 (3):e13256.
    To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language–especially those involving computational modeling–have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the (...)
    4 citations
  27. Cognitive Biases, Linguistic Universals, and Constraint‐Based Grammar Learning. Jennifer Culbertson, Paul Smolensky & Colin Wilson - 2013 - Topics in Cognitive Science 5 (3):392-424.
    According to classical arguments, language learning is both facilitated and constrained by cognitive biases. These biases are reflected in linguistic typology—the distribution of linguistic patterns across the world's languages—and can be probed with artificial grammar experiments on child and adult learners. Beginning with a widely successful approach to typology (Optimality Theory), and adapting techniques from computational approaches to statistical learning, we develop a Bayesian model of cognitive biases and show that it accounts for the detailed pattern (...)
    10 citations
  28. From One Bilingual to the Next: An Iterated Learning Study on Language Evolution in Bilingual Societies. Pauline Palma, Sarah Lee, Vegas Hodgins & Debra Titone - 2023 - Cognitive Science 47 (5):e13289.
    Studies of language evolution in the lab have used the iterated learning paradigm to show how linguistic structure emerges through cultural transmission—repeated cycles of learning and use across generations of speakers. However, agent-based simulations suggest that prior biases crucially impact the outcome of cultural transmission. Here, we explored this notion through an iterated learning study of English-French bilingual adults (mostly sequential bilinguals dominant in English). Each participant learned two unstructured artificial languages in a counterbalanced fashion, (...)
  29. The Yerkish Language: From Operational Methodology to Chimpanzee Communication. M. C. Bettoni - 2007 - Constructivist Foundations 2 (2-3):32-38.
    Purpose: Yerkish is an artificial language created in 1971 for the specific purpose of exploring the linguistic potential of nonhuman primates. The aim of this paper is to remind the research community of some important issues and concepts related to Yerkish that seem to have been forgotten or appear to be distorted. These are, particularly, its success, its promising aspects for future research and last but not least that it was Ernst von Glasersfeld who invented Yerkish: he coined (...)
    2 citations
  30. Learning Harmony: The Role of Serial Statistics. Erin McMullen Jonaitis & Jenny R. Saffran - 2009 - Cognitive Science 33 (5):951-968.
    How do listeners learn about the statistical regularities underlying musical harmony? In traditional Western music, certain chords predict the occurrence of other chords: Given a particular chord, not all chords are equally likely to follow. In Experiments 1 and 2, we investigated whether adults make use of statistical information when learning new musical structures. Listeners were exposed to a novel musical system containing phrases generated using an artificial grammar. This new system contained statistical structure quite different from Western (...)
    9 citations
  31. Exploring Variation Between Artificial Grammar Learning Experiments: Outlining a Meta‐Analysis Approach. Antony S. Trotter, Padraic Monaghan, Gabriël J. L. Beckers & Morten H. Christiansen - 2020 - Topics in Cognitive Science 12 (3):875-893.
    Studies of AGL have frequently used training and test stimuli that might provide multiple cues for learning, raising the question of what subjects have actually learned. Using a selected subset of studies on humans and non‐human animals, Trotter et al. demonstrate how a meta‐analysis can be used to identify relevant experimental variables, providing a first step in assessing the relative contribution of design features of grammars as well as of species‐specific effects on AGL.
    4 citations
  32. The Effects of Linear Order in Category Learning: Some Replications of Ramscar et al. (2010) and Their Implications for Replicating Training Studies. Eva Viviani, Michael Ramscar & Elizabeth Wonnacott - 2024 - Cognitive Science 48 (5):e13445.
    Ramscar, Yarlett, Dye, Denny, and Thorpe (2010) showed how, consistent with the predictions of error‐driven learning models, the order in which stimuli are presented in training can affect category learning. Specifically, learners exposed to artificial language input where objects preceded their labels learned the discriminating features of categories better than learners exposed to input where labels preceded objects. We sought to replicate this finding in two online experiments employing the same tests used originally: A four pictures (...)
  33. A Music-Mediated Language Learning Experience: Students’ Awareness of Their Socio-Emotional Skills. Esther Cores-Bilbao, Analí Fernández-Corbacho, Francisco H. Machancoses & M. C. Fonseca-Mora - 2019 - Frontiers in Psychology 10.
    In a society where mobility, globalization and contact with people from other cultures have become its basic descriptors, the enhancement of plurilingualism and intercultural understanding seem to be of the utmost concern. From a Positive Psychology Perspective, agency is the human capacity to affect other people positively or negatively through their actions. This agentic vision can be related to mediation, a concept rooted in the socio-cultural learning theory where social interaction is considered a fundamental cornerstone in the development of (...)
  34. Sentence processing in an artificial language: Learning and using combinatorial constraints. Michael S. Amato & Maryellen C. MacDonald - 2010 - Cognition 116 (1):143-148.
    11 citations
  35. Multi-language transfer learning for low-resource legal case summarization. Gianluca Moro, Nicola Piscaglia, Luca Ragazzi & Paolo Italiani - forthcoming - Artificial Intelligence and Law:1-29.
    Analyzing and evaluating legal case reports are labor-intensive tasks for judges and lawyers, who usually base their decisions on report abstracts, legal principles, and commonsense reasoning. Thus, summarizing legal documents is time-consuming and requires excellent human expertise. Moreover, public legal corpora of specific languages are almost unavailable. This paper proposes a transfer learning approach with extractive and abstractive techniques to cope with the lack of labeled legal summarization datasets, namely a low-resource scenario. In particular, we conducted extensive multi- and (...)
  36. Incidental and online learning of melodic structure. Martin Rohrmeier, Patrick Rebuschat & Ian Cross - 2011 - Consciousness and Cognition 20 (2):214-222.
    The cognition of music, like that of language, is partly rooted in enculturative processes of implicit and incidental learning. Musicians and nonmusicians alike are commonly found to possess detailed implicit knowledge of musical structure which is acquired incidentally through interaction with large samples of music. This paper reports an experiment combining the methodology of artificial grammar learning with musical acquisition of melodic structure. Participants acquired knowledge of grammatical melodic structures under incidental learning conditions in (...)
    18 citations
  37. Learning Words While Listening to Syllables: Electrophysiological Correlates of Statistical Learning in Children and Adults. Ana Paula Soares, Francisco-Javier Gutiérrez-Domínguez, Alexandrina Lages, Helena M. Oliveira, Margarida Vasconcelos & Luis Jiménez - 2022 - Frontiers in Human Neuroscience 16.
    From an early age, exposure to a spoken language has allowed us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units. Statistical learning, the ability to pick up patterns in the sensory environment without intention or reinforcement, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, including the discovery of word boundaries in the continuous acoustic stream. (...)
    1 citation
  38. Learners restrict their linguistic generalizations using preemption but not entrenchment: Evidence from artificial-language-learning studies with adults and children. Anna Samara, Elizabeth Wonnacott, Gaurav Saxena, Ramya Maitreyee, Judit Fazekas & Ben Ambridge - forthcoming - Psychological Review.
  39. Human/animal communications, language, and evolution. Dominique Lestel - 2002 - Sign Systems Studies 30 (1):201-211.
    The article compares the research programs of teaching symbolic language to chimpanzees, pointing on the dichotomy between artificial language vs. ASL, and the dichotomy between researchers who decided to establish emotional relationships between themselves and the apes, and those who have seen apes as instrumental devices. It is concluded that the experiments with the most interesting results have been both with artificial language and ASL, but with strong affiliation between researchers and animal involved in the (...)
    3 citations
  40. The Influence of Different Prosodic Cues on Word Segmentation. Theresa Matzinger, Nikolaus Ritt & W. Tecumseh Fitch - 2021 - Frontiers in Psychology 12.
    A prerequisite for spoken language learning is segmenting continuous speech into words. Amongst many possible cues to identify word boundaries, listeners can use both transitional probabilities between syllables and various prosodic cues. However, the relative importance of these cues remains unclear, and previous experiments have not directly compared the effects of contrasting multiple prosodic cues. We used artificial language learning experiments, where native German speaking participants extracted meaningless trisyllabic “words” from a continuous speech stream, to (...)
  41. Iconicity and the Emergence of Combinatorial Structure in Language. Tessa Verhoef, Simon Kirby & Bart de Boer - 2016 - Cognitive Science 40 (8):1969-1994.
    In language, recombination of a discrete set of meaningless building blocks forms an unlimited set of possible utterances. How such combinatorial structure emerged in the evolution of human language is increasingly being studied. It has been shown that it can emerge when languages culturally evolve and adapt to human cognitive biases. How the emergence of combinatorial structure interacts with the existence of holistic iconic form-meaning mappings in a language is still unknown. The experiment presented in this (...)
    11 citations
  42. Why artificial intelligence needs sociology of knowledge: parts I and II. Harry Collins - forthcoming - AI and Society:1-15.
    Recent developments in artificial intelligence based on neural nets—deep learning and large language models which together I refer to as NEWAI—have resulted in startling improvements in language handling and the potential to keep up with changing human knowledge by learning from the internet. Nevertheless, examples such as ChatGPT, which is a ‘large language model’, have proved to have no moral compass: they answer queries with fabrications with the same fluency as they provide facts. I (...)
  43. Testing the Limits of Long-Distance Learning: Learning Beyond a Three-Segment Window. Sara Finley - 2012 - Cognitive Science 36 (4):740-756.
    Traditional flat-structured bigram and trigram models of phonotactics are useful because they capture a large number of facts about phonological processes. Additionally, these models predict that local interactions should be easier to learn than long-distance ones because long-distance dependencies are difficult to capture with these models. Long-distance phonotactic patterns have been observed by linguists in many languages, who have proposed different kinds of models, including feature-based bigram and trigram models, as well as precedence models. Contrary to flat-structured bigram and trigram (...)
    5 citations
  44. Answering the connectionist challenge: a symbolic model of learning the past tenses of English verbs. C. X. Ling & M. Marinov - 1993 - Cognition 49 (3):235-290.
    Supporters of eliminative connectionism have argued for a pattern association-based explanation of language learning and language processing. They deny that explicit rules and symbolic representations play any role in language processing and cognition in general. Their argument is based to a large extent on two artificial neural network (ANN) models that are claimed to be able to learn the past tenses of English verbs (Rumelhart & McClelland, 1986, Parallel distributed processing, Vol. 2, Cambridge, MA: MIT (...)
    46 citations
  45. The metamorphosis of the statistical segmentation output: Lexicalization during artificial language learning. Tânia Fernandes, Régine Kolinsky & Paulo Ventura - 2009 - Cognition 112 (3):349-366.
    5 citations
  46. InstructPatentGPT: training patent language models to follow instructions with human feedback. Jieh-Sheng Lee - forthcoming - Artificial Intelligence and Law:1-44.
    In this research, patent prosecution is conceptualized as a system of reinforcement learning from human feedback. The objective of the system is to increase the likelihood for a language model to generate patent claims that have a higher chance of being granted. To showcase the controllability of the language model, the system learns from granted patents and pre-grant applications with different rewards. The status of “granted” and “pre-grant” are perceived as labeled human feedback implicitly. In addition, specific (...)
  47. A novel deep learning approach for diagnosing Alzheimer's disease based on eye-tracking data. Jinglin Sun, Yu Liu, Hao Wu, Peiguang Jing & Yong Ji - 2022 - Frontiers in Human Neuroscience 16:972773.
    Eye-tracking technology has become a powerful tool for biomedical-related applications due to its simplicity of operation and low requirements on patient language skills. This study aims to use the machine-learning models and deep-learning networks to identify key features of eye movements in Alzheimer's Disease (AD) under specific visual tasks, thereby facilitating computer-aided diagnosis of AD. Firstly, a three-dimensional (3D) visuospatial memory task is designed to provide participants with visual stimuli while their eye-movement data are recorded and used (...)
  48. A Comparative Perspective on the Role of Acoustic Cues in Detecting Language Structure. Jutta L. Mueller, Carel ten Cate & Juan M. Toro - 2018 - Topics in Cognitive Science 12 (3):859-874.
    Mueller et al. discuss the role of acoustic cues in detecting language structure more generally. Across languages, there are clear links between acoustic cues and syntactic structure. They show that AGL experiments implementing analogous links demonstrate that prosodic cues, as well as various auditory biases, facilitate the learning of structural rules. Some of these biases, e.g. for auditory grouping, are also present in other species.
    4 citations
  49. Artificial intelligence as law. [REVIEW] Bart Verheij - 2020 - Artificial Intelligence and Law 28 (2):181-206.
    Information technology is so ubiquitous and AI’s progress so inspiring that also legal professionals experience its benefits and have high expectations. At the same time, the powers of AI have been rising so strongly that it is no longer obvious that AI applications (whether in the law or elsewhere) help promoting a good society; in fact they are sometimes harmful. Hence many argue that safeguards are needed for AI to be trustworthy, social, responsible, humane, ethical. In short: AI should be (...)
    9 citations
  50. Input Complexity Affects Long-Term Retention of Statistically Learned Regularities in an Artificial Language Learning Task. Ethan Jost, Katherine Brill-Schuetz, Kara Morgan-Short & Morten H. Christiansen - 2019 - Frontiers in Human Neuroscience 13.
Showing results 1–50 of 999.