1. Interaction in Spoken Word Recognition Models: Feedback Helps. James S. Magnuson, Daniel Mirman, Sahil Luthra, Ted Strauss & Harlan D. Harris - 2018 - Frontiers in Psychology 9.
2. EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition. James S. Magnuson, Heejo You, Sahil Luthra, Monica Li, Hosung Nam, Monty Escabí, Kevin Brown, Paul D. Allopenna, Rachel M. Theodore, Nicholas Monto & Jay G. Rueckl - 2020 - Cognitive Science 44 (4):e12823.
    Despite the lack of invariance problem (the many‐to‐many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side‐stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real‐world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use them to (...)
3. Robust Lexically Mediated Compensation for Coarticulation: Christmash Time Is Here Again. Sahil Luthra, Giovanni Peraza-Santiago, Keia'na Beeson, David Saltzman, Anne Marie Crinnion & James S. Magnuson - 2021 - Cognitive Science 45 (4):e12962.
    A long-standing question in cognitive science is how high-level knowledge is integrated with sensory input. For example, listeners can leverage lexical knowledge to interpret an ambiguous speech sound, but do such effects reflect direct top-down influences on perception or merely postperceptual biases? A critical test case in the domain of spoken word recognition is lexically mediated compensation for coarticulation (LCfC). Previous LCfC studies have shown that a lexically restored context phoneme (e.g., /s/ in Christma#) can alter the perceived place of (...)
4. Contra Assertions, Feedback Improves Word Recognition: How Feedback and Lateral Inhibition Sharpen Signals Over Noise. James S. Magnuson, Anne Marie Crinnion, Sahil Luthra, Phoebe Gaston & Samantha Grubb - 2024 - Cognition 242 (C):105661.
5. Friends in Low‐Entropy Places: Orthographic Neighbor Effects on Visual Word Identification Differ Across Letter Positions. Sahil Luthra, Heejo You, Jay G. Rueckl & James S. Magnuson - 2020 - Cognitive Science 44 (12):e12917.
    Visual word recognition is facilitated by the presence of orthographic neighbors that mismatch the target word by a single letter substitution. However, researchers typically do not consider where neighbors mismatch the target. In light of evidence that some letter positions are more informative than others, we investigate whether the influence of orthographic neighbors differs across letter positions. To do so, we quantify the number of enemies at each letter position (how many neighbors mismatch the target word at that position). Analyses (...)
6. Auditory Category Learning Is Robust Across Training Regimes. Chisom O. Obasih, Sahil Luthra, Frederic Dick & Lori L. Holt - 2023 - Cognition 237 (C):105467.
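
Note on entry 5: the key quantity analyzed there is the number of orthographic "enemies" at each letter position, i.e., how many single-substitution neighbors mismatch the target word at that position. A minimal sketch of that count, assuming a toy lexicon and a hypothetical function name (not the authors' code), might look like this in Python:

from collections import Counter

def positional_enemy_counts(target, lexicon):
    """For each letter position, count neighbors that mismatch the target there.

    A neighbor is a same-length word differing by exactly one letter substitution.
    """
    counts = Counter()
    for word in lexicon:
        if word == target or len(word) != len(target):
            continue
        mismatches = [i for i, (a, b) in enumerate(zip(target, word)) if a != b]
        if len(mismatches) == 1:  # exactly one substitution -> orthographic neighbor
            counts[mismatches[0]] += 1
    return counts

# Hypothetical toy lexicon for illustration only.
toy_lexicon = {"cat", "bat", "hat", "cot", "cut", "cap", "can", "dog"}
print(positional_enemy_counts("cat", toy_lexicon))
# For "cat": bat/hat are enemies at position 0, cot/cut at position 1, and
# cap/can at position 2, so each position has an enemy count of 2.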