Search results for 'Syntactic Model'

1000+ found
  1.
    Julia Ponzio (2009). The Rhythm of Laughter: Derrida's Contribution to a Syntactic Model of Interpretation. Derrida Today 2 (2):234-244.
    The focus of this paper is Derrida's idea of rhythm. I will analyse how the idea of rhythm can work in a contemporary semiotic, and in particular in a semiotic of interpretation, in order to eliminate the confusion between interpretation and semantics and to constitute a syntactic model of interpretation. In ‘The Double Session’ Derrida uses the Greek word rytmos in order to indicate the ‘law of spacing’. Rytmos is a form that is always about to change or (...)
  2.
    Marcelo Da Silva Corrêa & Edward Hermann Haeusler (1997). A Concrete Categorical Model for the Lambek Syntactic Calculus. Mathematical Logic Quarterly 43 (1):49-59.
    We present a categorical/denotational semantics for the Lambek Syntactic Calculus , indeed for a λlD-typed version Curry-Howard isomorphic to it. The main novelty of our approach is an abstract noncommutative construction with right and left adjoints, called sequential product. It is defined through a hierarchical structure of categories reflecting the implicit permission to sequence expressions and the inductive construction of compound expressions. We claim that Lambek's noncommutative product corresponds to a noncommutative bi-endofunctor into a category, which encloses all categories (...)
  3.
    David Reitter, Frank Keller & Johanna D. Moore (2011). A Computational Cognitive Model of Syntactic Priming. Cognitive Science 35 (4):587-637.
    The psycholinguistic literature has identified two syntactic adaptation effects in language production: rapidly decaying short-term priming and long-lasting adaptation. To explain both effects, we present an ACT-R model of syntactic priming based on a wide-coverage, lexicalized syntactic theory that explains priming as facilitation of lexical access. In this model, two well-established ACT-R mechanisms, base-level learning and spreading activation, account for long-term adaptation and short-term priming, respectively. Our model simulates incremental language production and in a (...)
  4.
    Stella Frank, Sharon Goldwater & Frank Keller (2013). Adding Sentence Types to a Model of Syntactic Category Acquisition. Topics in Cognitive Science 5 (3):495-521.
    The acquisition of syntactic categories is a crucial step in the process of acquiring syntax. At this stage, before a full grammar is available, only surface cues are available to the learner. Previous computational models have demonstrated that local contexts are informative for syntactic categorization. However, local contexts are affected by sentence-level structure. In this paper, we add sentence type as an observed feature to a model of syntactic category acquisition, based on experimental evidence showing that (...)
  5.
    Dorit Ben Shalom (2000). Trace Deletion and Friederici's (1995) Model of Syntactic Processing. Behavioral and Brain Sciences 23 (1):22-23.
    This commentary discusses the relation between Grodzinsky's target article and Friederici's (1995) model of syntactic processing. The two models can be made more compatible if it is assumed that people with Broca's aphasia have a problem in trace construction rather than trace deletion, and that the process of trace construction takes place during the second early syntactic substage of Friederici's model.
  6.
    Colin Johnston, The Determination of Form by Syntactic Employment: A Model and a Difficulty.
    This paper develops a model for understanding the Tractarian doctrine that a sign in syntactic use determines a form. This doctrine is found to be in tension with Wittgenstein's agnosticism with regard to forms of reality.
  7.
    Daniel Jurafsky (1996). A Probabilistic Model of Lexical and Syntactic Access and Disambiguation. Cognitive Science 20 (2):137-194.
  8.
    Theo Vosse & Gerard Kempen (2000). Syntactic Structure Assembly in Human Parsing: A Computational Model Based on Competitive Inhibition and a Lexicalist Grammar. Cognition 75 (2):105-143.
  9.
    Amit Dubey, Frank Keller & Patrick Sturt (2008). A Probabilistic Corpus-Based Model of Syntactic Parallelism. Cognition 109 (3):326-344.
  10. T. A. Cartwright & M. R. Brent (1997). Early Acquisition of Syntactic Categories: A Formal Model. Cognition 63:121-170.
     
  11.
    Andrew Carstairs-McCarthy (1998). The Frame/Content Model and Syntactic Evolution. Behavioral and Brain Sciences 21 (4):515-516.
    The frame/content theory suggests that chewing was tinkered into speaking. A simple extrapolation of this approach suggests that syllable structure may have been tinkered into syntax. That would explain the widely noted parallels between sentence structure and syllable structure, and also the otherwise mysterious pervasiveness of the grammatical distinction between sentences and noun phrases.
  12. Stephen P. Stich (1983). From Folk Psychology to Cognitive Science. MIT Press.
  13. Cory D. Wright (2000). Eliminativist Undercurrents in the New Wave Model of Psychoneural Reduction. Journal of Mind and Behavior 21 (4):413-436.
    "New wave" reductionism aims at advancing a kind of reduction that is stronger than unilateral dependency of the mental on the physical. It revolves around the idea that reduction between theoretical levels is a matter of degree, and can be laid out on a continuum between a "smooth" pole (theoretical identity) and a "bumpy" pole (extremely revisionary). It also entails that both higher and lower levels of the reductive relationship sustain some degree of explanatory autonomy. The new wave predicts that (...)
  14.
    Mark D. Roberts (2004). P-Model Alternative to the T-Model. Web Journal of Formal, Computational and Logical Linguistics 5:1-18.
    Standard linguistic analysis of syntax uses the T-model. This model requires the ordering: D-structure > S-structure > LF, where D-structure is the sentence's deep structure, S-structure is its surface structure, and LF is its logical form. Between each of these representations there is movement which alters the order of the constituent words; movement is achieved using the principles and parameters of syntactic theory. Psychological analysis of sentence production is usually either serial or connectionist. Psychological serial models do (...)
  15.
    Robert F. Hadley (1991). A Sense-Based, Process Model of Belief. Minds and Machines 1 (3):279-320.
    A process-oriented model of belief is presented which permits the representation of nested propositional attitudes within first-order logic. The model (NIM, for nested intensional model) is axiomatized, sense-based (via intensions), and sanctions inferences involving nested epistemic attitudes, with different agents and different times. Because NIM is grounded upon senses, it provides a framework in which agents may reason about the beliefs of another agent while remaining neutral with respect to the syntactic forms used to express the (...)
  16.
    W. Garrett Mitchener (2011). A Mathematical Model of Prediction-Driven Instability: How Social Structure Can Drive Language Change. [REVIEW] Journal of Logic, Language and Information 20 (3):385-396.
    I discuss a stochastic model of language learning and change. During a syntactic change, each speaker makes use of constructions from two different idealized grammars at variable rates. The model incorporates regularization in that speakers have a slight preference for using the dominant idealized grammar. It also includes incrementation: The population is divided into two interacting generations. Children can detect correlations between age and speech. They then predict where the population’s language is moving and speak according to (...)
  17.
    Judith Gaspers & Philipp Cimiano (2014). A Computational Model for the Item‐Based Induction of Construction Networks. Cognitive Science 38 (2):439-488.
    According to usage-based approaches to language acquisition, linguistic knowledge is represented in the form of constructions—form-meaning pairings—at multiple levels of abstraction and complexity. The emergence of syntactic knowledge is assumed to be a result of the gradual abstraction of lexically specific and item-based linguistic knowledge. In this article, we explore how the gradual emergence of a network consisting of constructions at varying degrees of complexity can be modeled computationally. Linguistic knowledge is learned by observing natural language utterances in an (...)
  18.
    Spyros Galanis (2011). Syntactic Foundations for Unawareness of Theorems. Theory and Decision 71 (4):593-614.
    We provide a syntactic model of unawareness. By introducing multiple knowledge modalities, one for each sub-language, we specifically model agents whose only mistake in reasoning (other than their unawareness) is to underestimate the knowledge of more aware agents. We show that the model is a complete and sound axiomatization of the set-theoretic model of Galanis (University of Southampton Discussion paper 709, 2007) and compare it with other unawareness models in the literature.
  19.
    Dale Jacquette (1990). Intentionality and Stich's Theory of Brain Sentence Syntax. Philosophical Quarterly 40 (159):169-82.
  20.
    Sebastian Lutz (2015). What Was the Syntax‐Semantics Debate in the Philosophy of Science About? Philosophy and Phenomenological Research 92 (1):n/a-n/a.
    The debate between critics of syntactic and semantic approaches to the formalization of scientific theories has been going on for over 50 years. I structure the debate in light of a recent exchange between Hans Halvorson, Clark Glymour, and Bas van Fraassen and argue that the only remaining disagreement concerns the alleged difference in the dependence of syntactic and semantic approaches on languages of predicate logic. This difference turns out to be illusory.
  21. Lloyd Humberstone (2008). Modal Formulas True at Some Point in Every Model. Australasian Journal of Logic 6:70-82.
    In a paper on the logical work of the Jains, Graham Priest considers a consequence relation, semantically characterized, which has a natural analogue in modal logic. Here we give a syntactic/axiomatic description of the modal formulas which are consequences of the empty set by this relation, which is to say: those formulas which are, for every model, true at some point in that model.
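    The semantic condition Humberstone axiomatizes — truth at some point in every model — can at least be falsified by brute force over small Kripke models. The sketch below is an illustrative necessary-condition check, not the paper's axiomatic characterization; the formula encoding and all function names are my own.

    ```python
    from itertools import product

    def holds(f, worlds, rel, val, w):
        """Evaluate a tuple-encoded modal formula at world w."""
        op = f[0]
        if op == "atom":
            return w in val[f[1]]
        if op == "not":
            return not holds(f[1], worlds, rel, val, w)
        if op == "or":
            return (holds(f[1], worlds, rel, val, w)
                    or holds(f[2], worlds, rel, val, w))
        if op == "box":  # true iff the subformula holds at every R-successor
            return all(holds(f[1], worlds, rel, val, v)
                       for v in worlds if (w, v) in rel)
        raise ValueError(op)

    def small_models(max_worlds=2):
        """Every Kripke model over one atom 'p' with up to max_worlds worlds."""
        for n in range(1, max_worlds + 1):
            worlds = tuple(range(n))
            pairs = [(u, v) for u in worlds for v in worlds]
            for bits in product([False, True], repeat=len(pairs)):
                rel = {pr for pr, b in zip(pairs, bits) if b}
                for vbits in product([False, True], repeat=n):
                    yield worlds, rel, {"p": {w for w, b in zip(worlds, vbits) if b}}

    def true_somewhere_in_each(f, max_worlds=2):
        """Necessary condition: f holds at some world of every small model."""
        return all(any(holds(f, W, R, V, w) for w in W)
                   for W, R, V in small_models(max_worlds))

    p = ("atom", "p")
    print(true_somewhere_in_each(("or", p, ("not", p))))           # p ∨ ¬p: True
    print(true_somewhere_in_each(("or", ("not", ("box", p)), p)))  # □p → p: False
    ```

    The second formula fails on the one-world model with an empty relation and p false: □p holds vacuously there while p does not, so the implication is false at the model's only point.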
     
  22.
    Danny Fox, Cyclic Linearization of Syntactic Structure.
    This paper proposes an architecture for the mapping between syntax and phonology — in particular, that aspect of phonology that determines ordering. In Fox and Pesetsky (in prep.), we will argue that this architecture, when combined with a general theory of syntactic domains ("phases"), provides a new understanding of a variety of phenomena that have received diverse accounts in the literature. This shorter paper focuses on two processes, both drawn from Scandinavian: the familiar process of Object Shift and the (...)
  23.
    Jeremy Avigad & Richard Sommer (1997). A Model-Theoretic Approach to Ordinal Analysis. Bulletin of Symbolic Logic 3 (1):17-52.
    We describe a model-theoretic approach to ordinal analysis via the finite combinatorial notion of an α-large set of natural numbers. In contrast to syntactic approaches that use cut elimination, this approach involves constructing finite sets of numbers with combinatorial properties that, in nonstandard instances, give rise to models of the theory being analyzed. This method is applied to obtain ordinal analyses of a number of interesting subsystems of first- and second-order arithmetic.
  24.
    Nicholas Asher & Hajime Wada (1988). A Computational Account of Syntactic, Semantic and Discourse Principles for Anaphora Resolution. Journal of Semantics 6 (1):309-344.
    We present a unified framework for the computational implementation of syntactic, semantic, pragmatic and even “stylistic” constraints on anaphora. We build on our BUILDERS implementation of Discourse Representation (DR) Theory and Lexical Functional Grammar (LFG) discussed in Wada & Asher (1986). We develop and argue for a semantically based processing model for anaphora resolution that exploits a number of desirable features: (1) the partial semantics provided by the discourse representation structures (DRSs) of DR theory, (2) the use of (...)
  25.
    Gianluca Giorgolo, Shalom Lappin & Alexander Clark, Towards a Statistical Model of Grammaticality.
    The question of whether it is possible to characterise grammatical knowledge in probabilistic terms is central to determining the relationship of linguistic representation to other cognitive domains. We present a statistical model of grammaticality which maps the probabilities of a statistical model for sentences in parts of the British National Corpus (BNC) into grammaticality scores, using various functions of the parameters of the model. We test this approach with a classifier on test sets containing different levels of (...)
  26.
    David Pesetsky, Cyclic Linearization of Syntactic Structure.
    This paper proposes an architecture for the mapping between syntax and phonology — in particular, that aspect of phonology that determines ordering. In Fox and Pesetsky (in prep.), we will argue that this architecture, when combined with a general theory of syntactic domains ("phases"), provides a new understanding of a variety of phenomena that have received diverse accounts in the literature. This shorter paper focuses on two processes, both drawn from Scandinavian: the familiar process of Object Shift and the (...)
  27.
    Thomas R. Shultz & Alan C. Bale (2006). Neural Networks Discover a Near-Identity Relation to Distinguish Simple Syntactic Forms. Minds and Machines 16 (2):107-139.
    Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that (...)
  28.
    Luca Bellotti (2007). Formalization, Syntax and the Standard Model of Arithmetic. Synthese 154 (2):199 - 229.
    I make an attempt at the description of the delicate role of the standard model of arithmetic for the syntax of formal systems. I try to assess whether the possible instability in the notion of finiteness deriving from the nonstandard interpretability of arithmetic affects the very notions of syntactic metatheory and of formal system. I maintain that the crucial point of the whole question lies in the evaluation of the phenomenon of formalization. The ideas of Skolem, (...)
  29.
    Shimon Edelman, Automatic Acquisition and Efficient Representation of Syntactic Structures.
    The distributional principle according to which morphemes that occur in identical contexts belong, in some sense, to the same category [1] has been advanced as a means for extracting syntactic structures from corpus data. We extend this principle by applying it recursively, and by using mutual information for estimating category coherence. The resulting model learns, in an unsupervised fashion, highly structured, distributed representations of syntactic knowledge from corpora. It also exhibits promising behavior in tasks usually thought to (...)
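    The distributional principle in its strictest form — identical contexts, same category — can be sketched in a few lines. This is a toy illustration under my own naming; the recursive application and mutual-information coherence scoring the abstract describes are omitted.

    ```python
    from collections import defaultdict

    def context_categories(corpus):
        """Group words that occur in exactly the same set of
        (left-neighbour, right-neighbour) contexts."""
        contexts = defaultdict(set)
        for sentence in corpus:
            padded = ["<s>"] + sentence + ["</s>"]
            for i in range(1, len(padded) - 1):
                contexts[padded[i]].add((padded[i - 1], padded[i + 1]))
        # Invert the map: words with identical context sets share a category.
        categories = defaultdict(set)
        for word, ctxs in contexts.items():
            categories[frozenset(ctxs)].add(word)
        return [sorted(words) for words in categories.values()]

    corpus = [["the", "cat", "sleeps"], ["the", "dog", "sleeps"]]
    print(context_categories(corpus))  # 'cat' and 'dog' land in one category
    ```

    On this tiny corpus, "cat" and "dog" both occur only in the context ("the", "sleeps"), so they are grouped together while "the" and "sleeps" each form singleton categories.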
  30.
    Willem J. M. Levelt, Antje S. Meyer & Ardi Roelofs (2004). Relations of Lexical Access to Neural Implementation and Syntactic Encoding. Behavioral and Brain Sciences 27 (2):299-301.
    How can one conceive of the neuronal implementation of the processing model we proposed in our target article? In his commentary (Pulvermüller 1999, reprinted here in this issue), Pulvermüller makes various proposals concerning the underlying neural mechanisms and their potential localizations in the brain. These proposals demonstrate the compatibility of our processing model and current neuroscience. We add further evidence on details of localization based on a recent meta-analysis of neuroimaging studies of word production (Indefrey & Levelt 2000). (...)
  31.
    Holly P. Branigan & Martin J. Pickering (2004). Syntactic Representation in the Lemma Stratum. Behavioral and Brain Sciences 27 (2):296-297.
    Levelt, Roelofs, & Meyer (henceforth Levelt et al. 1999) propose a model of production incorporating a lemma stratum, which is concerned with the syntactic characteristics of lexical entries. We suggest that syntactic priming experiments provide evidence about how such syntactic information is represented, and that this evidence can be used to extend Levelt et al.'s model. Evidence from syntactic priming experiments also supports Levelt et al.'s conjecture that the lemma stratum is shared between the (...)
  32.
    George E. Weaver (1994). Syntactic Features and Synonymy Relations: A Unified Treatment of Some Proofs of the Compactness and Interpolation Theorems. Studia Logica 53 (2):325 - 342.
    This paper introduces the notion of syntactic feature to provide a unified treatment of earlier model theoretic proofs of both the compactness and interpolation theorems for a variety of two-valued logics including sentential logic, first order logic, and a family of modal sentential logics including M, B, S4 and S5. The compactness papers focused on providing a proof of the consequence formulation which exhibited the appropriate finite subset. A unified presentation of these proofs is given by isolating their essential (...)
  33.
    Francis Jeffry Pelletier, The Effect of Syntactic Form on Simple.
    In this paper we report preliminary results on how people revise or update a previously held set of beliefs. When intelligent agents learn new things which conflict with their current belief set, they must revise their belief set. When the new information does not conflict, they merely must update their belief set. Various AI theories have been proposed to achieve these processes. There are two general dimensions along which these theories differ: whether they are syntactic-based or model-based, and (...)
  34.
    Michel Hébert (1997). Syntactic Characterizations of Closure Under Pullbacks and of Locally Polypresentable Categories. Annals of Pure and Applied Logic 84 (1):73-95.
    We give syntactic characterizations of (1) the theories whose categories of models are closed under the formation of pullbacks, and of (2) the locally ω-polypresentable categories. A somewhat typical example is the category of algebraically closed fields. Case (1) is proved by classical model-theoretic methods; it solves a problem raised by H. Volger. The solution of case (2) is in the spirit of the ones for the locally ω-presentable and ω-multipresentable cases found by M. Coste and P. T. Johnstone respectively. The problem was (...)
  35.
    Christopher Manning, Fast Exact Inference with a Factored Model for Natural Language Parsing.
    We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic (PCFG) structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance comparable to similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which enables efficient, exact inference.
  36.
    Christopher Manning, Natural Language Grammar Induction Using a Constituent-Context Model.
    This paper presents a novel approach to the unsupervised learning of syntactic analyses of natural language text. Most previous work has focused on maximizing likelihood according to generative PCFG models. In contrast, we employ a simpler probabilistic model over trees based directly on constituent identity and linear context, and use an EM-like iterative procedure to induce structure. This method produces much higher quality analyses, giving the best published results on the ATIS dataset.
  37.
    Tomasz Skura (1994). Syntactic Refutations Against Finite Models in Modal Logic. Notre Dame Journal of Formal Logic 35 (4):595-605.
    The purpose of the paper is to study syntactic refutation systems as a way of characterizing normal modal propositional logics. In particular it is shown that there is a decidable modal logic without the finite model property that has a simple finite refutation system.
  38.
    Christopher D. Manning & Bill MacCartney, An Extended Model of Natural Logic.
    We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according (...)
  39.
    Christopher Manning, A Generative Model for Semantic Role Labeling.
    Determining the semantic role of sentence constituents is a key task in determining sentence meanings lying behind a veneer of variant syntactic expression. We present a model of natural language generation from semantics using the FrameNet semantic role and frame ontology. We train the model using the FrameNet corpus and apply it to the task of automatic semantic role and frame identification, producing results competitive with previous work (about 70% role labeling accuracy). Unlike previous models used for (...)
  40.
    Dan Klein & Christopher D. Manning, Natural Language Grammar Induction Using a Constituent-Context Model.
    This paper presents a novel approach to the unsupervised learning of syntactic analyses of natural language text. Most previous work has focused on maximizing likelihood according to generative PCFG models. In contrast, we employ a simpler probabilistic model over trees based directly on constituent identity and linear context, and use an EM-like iterative procedure to induce structure. This method produces much higher quality analyses, giving the best published results on the ATIS dataset.
  41.
    Dan Klein & Christopher D. Manning, Fast Exact Inference with a Factored Model for Natural Language Parsing.
    We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic (PCFG) structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance comparable to similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which enables efficient, exact inference.
  42.
    Cynthia A. Thompson, Roger Levy & Christopher D. Manning, A Generative Model for Semantic Role Labeling.
    Determining the semantic role of sentence constituents is a key task in determining sentence meanings lying behind a veneer of variant syntactic expression. We present a model of natural language generation from semantics using the FrameNet semantic role and frame ontology. We train the model using the FrameNet corpus and apply it to the task of automatic semantic role and frame identification, producing results competitive with previous work (about 70% role labeling accuracy). Unlike previous models used for (...)
  43.
    Marcello Guarini (2001). A Defence of Connectionism Against the "Syntactic" Argument. Synthese 128 (3):287-317.
    In "Representations without Rules, Connectionism and the Syntactic Argument'', Kenneth Aizawa argues against the view that connectionist nets can be understood as processing representations without the use of representation-level rules, and he provides a positive characterization of how to interpret connectionist nets as following representation-level rules. He takes Terry Horgan and John Tienson to be the targets of his critique. The present paper marshals functional and methodological considerations, gleaned from the practice of cognitive modelling, to argue against Aizawa's characterization (...)
  44.
    Walter Kintsch & Praful Mangalath (2011). The Construction of Meaning. Topics in Cognitive Science 3 (2):346-370.
    We argue that word meanings are not stored in a mental lexicon but are generated in the context of working memory from long-term memory traces that record our experience with words. Current statistical models of semantics, such as latent semantic analysis and the Topic model, describe what is stored in long-term memory. The CI-2 model describes how this information is used to construct sentence meanings. This model is a dual-memory model, in that it distinguishes between a (...)
  45.
    Stephen Read (2015). Semantic Pollution and Syntactic Purity. Review of Symbolic Logic 8 (4):649-661.
    Logical inferentialism claims that the meaning of the logical constants should be given, not model-theoretically, but by the rules of inference of a suitable calculus. It has been claimed that certain proof-theoretical systems, most particularly, labelled deductive systems for modal logic, are unsuitable, on the grounds that they are semantically polluted and suffer from an untoward intrusion of semantics into syntax. The charge is shown to be mistaken. It is argued on inferentialist grounds that labelled deductive systems are as (...)
  46.
    D. C. Gooding & T. R. Addis (2008). Modelling Experiments as Mediating Models. Foundations of Science 13 (1):17-35.
    Syntactic and structural models specify relationships between their constituents but cannot show what outcomes their interaction would produce over time in the world. Simulation consists in iterating the states of a model, so as to produce behaviour over a period of simulated time. Iteration enables us to trace the implications and outcomes of inference rules and other assumptions implemented in the models that make up a theory. We apply this method to experiments which we treat as models of (...)
  47.
    Felix Engelmann, Shravan Vasishth, Ralf Engbert & Reinhold Kliegl (2013). A Framework for Modeling the Interaction of Syntactic Processing and Eye Movement Control. Topics in Cognitive Science 5 (3):452-474.
    We explore the interaction between oculomotor control and language comprehension on the sentence level using two well-tested computational accounts of parsing difficulty. Previous work (Boston, Hale, Vasishth, & Kliegl, 2011) has shown that surprisal (Hale, 2001; Levy, 2008) and cue-based memory retrieval (Lewis & Vasishth, 2005) are significant and complementary predictors of reading time in an eyetracking corpus. It remains an open question how the sentence processor interacts with oculomotor control. Using a simple linking hypothesis proposed in Reichle, Warren, and (...)
  48.
    Willem A. Labuschagne & Johannes Heidema (2005). Natural and Artificial Cognition: On the Proper Place of Reason. South African Journal of Philosophy 24 (2):137-149.
    We explore the psychological foundations of Logic and Artificial Intelligence, touching on representation, categorisation, heuristics, consciousness, and emotion. Specifically, we challenge Dennett's view of the brain as a syntactic engine that is limited to processing symbols according to their structural properties. We show that cognitive psychology and neurobiology support a dual-process model in which one form of cognition is essentially semantical and differs in important ways from the operation of a syntactic engine. The dual-process model illuminates (...)
  49.
    Marius Vilcu & Robert F. Hadley (2005). Two Apparent 'Counterexamples' to Marcus: A Closer Look. [REVIEW] Minds and Machines 15 (3-4):359-382.
    Marcus et al.’s experiment (1999) concerning infant ability to distinguish between differing syntactic structures has prompted connectionists to strive to show that certain types of neural networks can mimic the infants’ results. In this paper we take a closer look at two such attempts: Shultz and Bale [Shultz, T.R. and Bale, A.C. (2001), Infancy 2, pp. 501–536] Altmann and Dienes [Altmann, G.T.M. and Dienes, Z. (1999) Science 248, p. 875a]. We were not only interested in how well these two (...)
  50.
    Emmon Bach, ACTL Semantics: Compositionality and Morphosemantics: I: Syntactic and Semantic Assumptions: Compositionality.
    Theme of two lectures: how does meaning work in grammar and lexicon? General question: Are morphemes the minimal meaningful units of language? Are the meanings of the parts of words of the same kind as those of syntax? The answer to this question has an obvious bearing on the question of the derivation of complex words "in the syntax." Is the split between syntax and morphology the proper division for asking the previous question? Answer: No. The crucial distinction is that (...)
Results 1–50 of 1000+