  • A Model of Language Processing as Hierarchic Sequential Prediction. Marten van Schijndel, Andy Exley & William Schuler - 2013 - Topics in Cognitive Science 5 (3):522-540.
    Computational models of memory are often expressed as hierarchic sequence models, but the hierarchies in these models are typically fairly shallow, reflecting the tendency for memories of superordinate sequence states to become increasingly conflated. This article describes a broad-coverage probabilistic sentence processing model that uses a variant of a left-corner parsing strategy to flatten sentence processing operations into a similarly shallow hierarchy of learned sequences. The main result of this article is that a broad-coverage model with constraints on (...)
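
    Since the left-corner strategy is the paper's centerpiece, a toy sketch may help fix ideas. The recognizer below is not van Schijndel et al.'s broad-coverage model: it is a minimal backtracking left-corner recognizer in Python over a hypothetical three-rule grammar and toy lexicon, showing the strategy's characteristic move of shifting a word, taking its category as a left corner, and projecting a rule whose first right-hand symbol matches before completing that rule's remainder.

      GRAMMAR = [
          ("S",  ["NP", "VP"]),
          ("NP", ["Det", "N"]),
          ("VP", ["V", "NP"]),
      ]
      LEXICON = {"the": "Det", "dog": "N", "cat": "N", "saw": "V"}

      def parse(goal, words):
          """Yield the word lists left over after recognizing `goal` as a prefix."""
          if words and words[0] in LEXICON:
              # Shift: consume a word and take its category as the left corner.
              yield from lc(LEXICON[words[0]], goal, words[1:])

      def lc(cat, goal, words):
          """Connect an already-recognized `cat` up to `goal` via projection."""
          if cat == goal:
              yield words
          for parent, rhs in GRAMMAR:
              if rhs[0] == cat:                     # `cat` is this rule's left corner
                  for rest in match(rhs[1:], words):
                      yield from lc(parent, goal, rest)

      def match(symbols, words):
          """Recognize each remaining right-hand-side symbol in sequence."""
          if not symbols:
              yield words
              return
          for rest in parse(symbols[0], words):
              yield from match(symbols[1:], rest)

      print(any(r == [] for r in parse("S", "the dog saw the cat".split())))  # True
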
  • Inessential features, ineliminable features, and modal logics for model theoretic syntax. Hans-Jörg Tiede - 2008 - Journal of Logic, Language and Information 17 (2):217-227.
    While monadic second-order logic (MSO) has played a prominent role in model theoretic syntax, modal logics have been used in this context since its inception. When comparing propositional dynamic logic (PDL) to MSO over trees, Kracht (1997) noted that there are tree languages that can be defined in MSO that can only be defined in PDL by adding new features whose distribution is predictable. He named such features “inessential features”. We show that Kracht’s observation can be extended to other modal (...)
  • Modeling Structure‐Building in the Brain With CCG Parsing and Large Language Models. Miloš Stanojević, Jonathan R. Brennan, Donald Dunagan, Mark Steedman & John T. Hale - 2023 - Cognitive Science 47 (7):e13312.
    To model behavioral and neural correlates of language comprehension in naturalistic environments, researchers have turned to broad‐coverage tools from natural‐language processing and machine learning. Where syntactic structure is explicitly modeled, prior work has relied predominantly on context‐free grammars (CFGs), yet such formalisms are not sufficiently expressive for human languages. Combinatory categorial grammars (CCGs) are sufficiently expressive, directly compositional models of grammar with flexible constituency that affords incremental interpretation. In this work, we evaluate whether a more expressive CCG provides a better (...)
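
    As a concrete anchor for the formalism under comparison, here is a minimal sketch of CCG's two application rules (my illustration, not the paper's code; the category assigned to "saw" is the standard transitive-verb category), with categories represented as immutable functor objects:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Functor:
          result: object   # category produced once the argument is consumed
          slash: str       # "/" seeks its argument to the right, "\" to the left
          arg: object

      def combine(left, right):
          """Forward application X/Y Y => X and backward application Y X\\Y => X."""
          if isinstance(left, Functor) and left.slash == "/" and left.arg == right:
              return left.result
          if isinstance(right, Functor) and right.slash == "\\" and right.arg == left:
              return right.result
          return None

      S, NP = "S", "NP"
      saw = Functor(Functor(S, "\\", NP), "/", NP)   # (S\NP)/NP, a transitive verb
      vp = combine(saw, NP)                          # forward application:  S\NP
      print(combine(NP, vp))                         # backward application: 'S'
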
  • Grammatical dependencies shape compositional units in sentence production. Shota Momma - 2023 - Cognition 240 (C):105577.
  • Composition in Distributional Models of Semantics. Jeff Mitchell & Mirella Lapata - 2010 - Cognitive Science 34 (8):1388-1429.
    Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in (...)
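
    The additive and multiplicative composition functions this paper evaluates are simple to state; the numpy sketch below uses made-up four-dimensional vectors (the actual models are built from corpus co-occurrence statistics):

      import numpy as np

      # Hypothetical distributional vectors; real ones come from co-occurrence counts.
      practical  = np.array([4.0, 1.0, 0.0, 2.0])
      difficulty = np.array([3.0, 0.0, 1.0, 2.0])

      p_add = practical + difficulty   # additive composition:       p = u + v
      p_mul = practical * difficulty   # multiplicative composition: p = u * v (element-wise)

      def cosine(u, v):
          """Similarity between composed phrase vectors."""
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      print(cosine(p_add, p_mul))
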
  • When is cataphoric reference recognised? Ruth Filik & Anthony J. Sanford - 2008 - Cognition 107 (3):1112-1121.
  • Probabilistic Modeling of Discourse‐Aware Sentence Processing. Amit Dubey, Frank Keller & Patrick Sturt - 2013 - Topics in Cognitive Science 5 (3):425-451.
    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more (...)
  • A ‘no’ with a trace of ‘yes’: A mouse-tracking study of negative sentence processing. Emily J. Darley, Christopher Kent & Nina Kazanina - 2020 - Cognition 198 (C):104084.
  • MEG Evidence for Incremental Sentence Composition in the Anterior Temporal Lobe. Jonathan R. Brennan & Liina Pylkkänen - 2017 - Cognitive Science 41 (S6):1515-1531.
    Research investigating the brain basis of language comprehension has associated the left anterior temporal lobe with sentence-level combinatorics. Using magnetoencephalography, we test the parsing strategy implemented in this brain region. The number of incremental parse steps from a predictive left-corner parsing strategy that is supported by psycholinguistic research is compared with those from a less-predictive strategy. We test for a correlation between parse steps and source-localized MEG activity recorded while participants read a story. Left-corner parse steps correlated with activity in (...)
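
    The notion of counting incremental parse steps can be made concrete with a toy complexity metric: charge each phrase-structure node to the word at which a given strategy would enumerate it. The sketch below is an illustrative reconstruction, not the authors' pipeline; it charges each node to the first word of its span for a maximally predictive strategy and to the last word for a bottom-up one, with left-corner strategies falling in between.

      # A node is a (label, child, child, ...) tuple; leaves are plain words.
      TREE = ("S", ("NP", "dogs"), ("VP", ("V", "saw"), ("NP", "cats")))

      def node_counts(tree, attach):
          """Per-word counts of internal nodes, charged to the first word of a
          node's span (attach=0, predictive) or the last (attach=-1, bottom-up)."""
          counts = []
          def visit(node):
              if isinstance(node, str):          # a leaf adds one word position
                  counts.append(0)
                  return [len(counts) - 1]
              span = [i for child in node[1:] for i in visit(child)]
              counts[span[attach]] += 1          # charge this node to one word
              return span
          visit(tree)
          return counts

      print(node_counts(TREE, attach=0))   # predictive: [2, 2, 1]
      print(node_counts(TREE, attach=-1))  # bottom-up:  [1, 1, 3]
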
  • Explicatures are NOT Cancellable. Alessandro Capone - 2013 - In Alessandro Capone, Franco Lo Piparo & Marco Carapezza (eds.), Perspectives on linguistic pragmatics. Springer. pp. 131-151.
    Explicatures are not cancellable. Theoretical considerations.
  • LTAG-spinal and the Treebank. Lucas Champollion & Aravind K. Joshi - unknown
    We introduce LTAG-spinal, a novel variant of traditional Lexicalized Tree Adjoining Grammar (LTAG) with desirable linguistic, computational and statistical properties. Unlike in traditional LTAG, subcategorization frames and the argument-adjunct distinction are left underspecified in LTAG-spinal. LTAG-spinal with adjunction constraints is weakly equivalent to LTAG. The LTAG-spinal formalism is used to extract an LTAG-spinal Treebank from the Penn Treebank with Propbank annotation. Based on Propbank annotation, predicate coordination and LTAG adjunction structures are successfully extracted. The LTAG-spinal Treebank makes explicit semantic relations (...)