  • Blind Speakers Show Language-Specific Patterns in Co-Speech Gesture but Not Silent Gesture. Şeyda Özçalışkan, Ché Lucero & Susan Goldin-Meadow - 2018 - Cognitive Science 42 (3):1001-1014.
    Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech, not without speech. We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish to 80 sighted adult speakers as they described three-dimensional (...)
  • Does language shape silent gesture? Şeyda Özçalışkan, Ché Lucero & Susan Goldin-Meadow - 2016 - Cognition 148 (C):10-18.
  • When Gestures Do or Do Not Follow Language-Specific Patterns of Motion Expression in Speech: Evidence from Chinese, English and Turkish. Irmak Su Tütüncü, Jing Paul, Samantha N. Emerson, Murat Şengül, Melanie Knezevic & Şeyda Özçalışkan - 2023 - Cognitive Science 47 (4):e13261.
    Speakers of different languages (e.g., English vs. Turkish) show a binary split in how they package and order components of a motion event in speech and co‐speech gesture but not in silent gesture. In this study, we focused on Mandarin Chinese, a language that does not follow the binary split in its expression of motion in speech, and asked whether adult Chinese speakers would follow the language‐specific speech patterns in co‐speech but not silent gesture, thus showing a pattern akin to (...)
  • Does language guide event perception? Evidence from eye movements. Anna Papafragou, Justin Hulbert & John Trueswell - 2008 - Cognition 108 (1):155.
  • Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements. Ercenur Ünal, Francie Manhardt & Aslı Özyürek - 2022 - Cognition 225 (C):105127.
  • The cross-linguistic categorization of everyday events: A study of cutting and breaking. Asifa Majid, James S. Boster & Melissa Bowerman - 2008 - Cognition 109 (2):235-250.
  • A developmental shift from similar to language-specific strategies in verb acquisition: A comparison of English, Spanish, and Japanese. Mandy J. Maguire, Kathy Hirsh-Pasek, Roberta Michnick Golinkoff, Mutsumi Imai, Etsuko Haryu, Sandra Vanegas, Hiroyuki Okada, Rachel Pulverman & Brenda Sanchez-Davis - 2010 - Cognition 114 (3):299-319.
  • Running across the mind or across the park: does speech about physical and metaphorical motion go hand in hand? Wojciech Lewandowski & Şeyda Özçalışkan - 2023 - Cognitive Linguistics 34 (3-4):411-444.
    Expression of physical motion (e.g., man runs by) shows systematic variability not only between language types (i.e., inter-typological) but also within a language type (i.e., intra-typological). In this study, we asked whether the patterns of variability extend to metaphorical motion events (e.g., time runs by). Our analysis of 450 randomly selected physical motion (150/language) and 450 metaphorical motion (150/language) event descriptions from written texts originally produced by German, Polish, and Spanish authors showed strong inter-typological differences in the expression of both (...)
  • Japanese Sound-Symbolism Facilitates Word Learning in English-Speaking Children. Katerina Kantartzis, Mutsumi Imai & Sotaro Kita - 2011 - Cognitive Science 35 (3):575-586.
    Sound-symbolism is the nonarbitrary link between the sound and meaning of a word. Japanese-speaking children performed better in a verb generalization task when they were taught novel sound-symbolic verbs, created based on existing Japanese sound-symbolic words, than novel nonsound-symbolic verbs (Imai, Kita, Nagumo, & Okada, 2008). A question remained as to whether the Japanese children had picked up regularities in the Japanese sound-symbolic lexicon or were sensitive to universal sound-symbolism. The present study aimed to provide support for the latter. In (...)
  • Cognitive Representation of Spontaneous Motion in a Second Language: An Exploration of Chinese Learners of English. Yinglin Ji - 2019 - Frontiers in Psychology 10.
  • English and Chinese children's motion event similarity judgments. Yinglin Ji & Jill Hohenstein - 2018 - Cognitive Linguistics 29 (1):45-76.
    This study explores the relationship between language and thought in similarity judgments by testing how monolingual children who speak languages with partial typological differences in motion description respond to visual motion event stimuli. Participants were Chinese- or English-speaking 3-year-olds, 8-year-olds, and adults who judged the similarity between caused motion scenes in a match-to-sample task. The results suggest, first of all, that the two groups of 3-year-olds are predominantly path-oriented, irrespective of language, as evidenced by their significantly longer fixation (...)
  • Event segmentation: Cross-linguistic differences in verbal and non-verbal tasks. Johannes Gerwien & Christiane von Stutterheim - 2018 - Cognition 180 (C):225-237.
  • Easier Said Than Done? Task Difficulty's Influence on Temporal Alignment, Semantic Similarity, and Complexity Matching Between Gestures and Speech. Lisette De Jonge-Hoekstra, Ralf F. A. Cox, Steffie Van der Steen & James A. Dixon - 2021 - Cognitive Science 45 (6):e12989.
    Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding gestures’ and speech's integration and synchronization. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and (...)
  • What Does a Horgous Look Like? Nonsense Words Elicit Meaningful Drawings. Charles P. Davis, Hannah M. Morrow & Gary Lupyan - 2019 - Cognitive Science 43 (10):e12791.
    To what extent do people attribute meanings to “nonsense” words? How general is such attribution of meaning? We used a set of words lacking conventional meanings to elicit drawings of made‐up creatures. Separate groups of participants rated the nonsense words and the drawings on several semantic dimensions and selected what name best corresponded to each creature. Despite lacking conventional meanings, “nonsense” words elicited a high level of consistency in the produced drawings. Meaning attributions made to nonsense words corresponded with meaning (...)
  • Visual Heuristics for Verb Production: Testing a Deep-Learning Model With Experiments in Japanese. Franklin Chang, Tomoko Tatsumi, Yuna Hiranuma & Colin Bannard - 2023 - Cognitive Science 47 (8):e13324.
    Tense/aspect morphology on verbs is often thought to depend on event features like telicity, but it is not known how speakers identify these features in visual scenes. To examine this question, we asked Japanese speakers to describe computer‐generated animations of simple actions with variation in visual features related to telicity. Experiments with adults and children found that they could use goal information in the animations to select appropriate past and progressive verb forms. They also produced a large number of different (...)
  • The relation between event apprehension and utterance formulation in children: Evidence from linguistic omissions. Ann Bunger, John C. Trueswell & Anna Papafragou - 2012 - Cognition 122 (2):135-149.