Sabdabrahma Siddhanta, popularized by Patanjali and Bhartruhari, will be scientifically analyzed. Sphota Vada, proposed and nurtured by the Sanskrit grammarians, will be interpreted from modern physics and communication engineering points of view. Insight about the theory of language and the modes of language acquisition and communication available in the Brahma Kanda of Vakyapadeeyam will be translated into modern computational terms. A flowchart of language processing in humans will be given. A gross model of the human language acquisition, comprehension and communication process, forming the basis for developing software for relevant mind-machine modeling, will be presented. The implications of such a model for artificial intelligence and cognitive sciences will be discussed. The essentiality and necessity of physics, communication engineering, biophysical and biochemical insight as both complementary and supplementary to mathematical and computational methods in delineating the theory of the Sanskrit language is put forward. Natural language comprehension, as distinct and different from natural language processing, is pointed out.
The famous advaitic expressions "Brahma sat jagat mithya jivo brahma eva na aparah" and "Asti bhaati priyam nama roopam cheti amsa panchakam; aadya trayam brahma roopam tato dwayam jagat roopam" will be analyzed through physics and electronics and interpreted. Four phases of mind, four modes of language acquisition and communication, and seven cognitive states of mind participating in human cognitive and language acquisition and communication processes will be identified and discussed. Implications and applications of such an identification and analysis for the fields of cognitive sciences, mind-machine modeling, the natural language comprehension field of artificial intelligence, and physiological psychology will be hinted at. A comprehensive modern scientific understanding of human consciousness, mind and mental functions will be presented.
We discuss the paper by McMillan et al. (2005) devoted to the study of brain activity during comprehension of sentences with generalized quantifiers. According to the authors, their results verify a particular computational model of natural language quantifier comprehension posited by several linguists and logicians (e.g., see van Benthem, 1986). We challenge this statement by invoking the computational difference between first-order quantifiers and divisibility quantifiers (e.g., see Mostowski, 1998). Moreover, we suggest other studies on quantifier comprehension which can throw more light on the role of working memory in processing quantifiers.
In the dissertation we study the complexity of generalized quantifiers in natural language. Our perspective is interdisciplinary: we combine philosophical insights with theoretical computer science, experimental cognitive science and linguistic theories. In Chapter 1 we argue for identifying a part of meaning, the so-called referential meaning (model-checking), with algorithms. Moreover, we discuss the influence of computational complexity theory on cognitive tasks. We give some arguments to treat as cognitively tractable only those problems which can be computed in polynomial time. Additionally, we suggest that plausible semantic theories of the everyday fragment of natural language can be formulated in the existential fragment of second-order logic. In Chapter 2 we give an overview of the basic notions of generalized quantifier theory, computability theory, and descriptive complexity theory. In Chapter 3 we prove that PTIME quantifiers are closed under iteration, cumulation and resumption. Next, we discuss the NP-completeness of branching quantifiers. Finally, we show that some Ramsey quantifiers define NP-complete classes of finite models while others stay in PTIME. We also give a sufficient condition for a Ramsey quantifier to be computable in polynomial time. In Chapter 4 we investigate the computational complexity of polyadic lifts expressing various readings of reciprocal sentences with quantified antecedents. We show a dichotomy between these readings: the strong reciprocal reading can create NP-complete constructions, while the weak and the intermediate reciprocal readings do not. Additionally, we argue that this difference should be acknowledged in the Strong Meaning Hypothesis. In Chapter 5 we study the definability and complexity of the type-shifting approach to collective quantification in natural language.
We show that under reasonable complexity assumptions it is not general enough to cover the semantics of all collective quantifiers in natural language. The type-shifting approach cannot lead outside second-order logic, and arguably some collective quantifiers are not expressible in second-order logic. As a result, we argue that algebraic (many-sorted) formalisms dealing with collectivity are more plausible than the type-shifting approach. Moreover, we suggest that some collective quantifiers might not be realized in everyday language due to their high computational complexity. Additionally, we introduce the so-called second-order generalized quantifiers to the study of collective semantics. In Chapter 6 we study the statement known as Hintikka's thesis: that the semantics of sentences like "Most boys and most girls hate each other" is not expressible by linear formulae and one needs to use branching quantification. We discuss possible readings of such sentences and come to the conclusion that they are expressible by linear formulae, as opposed to what Hintikka states. Next, we propose empirical evidence confirming our theoretical predictions that these sentences are sometimes interpreted by people as having the conjunctional reading. In Chapter 7 we discuss a computational semantics for monadic quantifiers in natural language. We recall that it can be expressed in terms of finite-state and push-down automata. Then we present and criticize the neurological research building on this model. The discussion leads to a new experimental set-up which provides empirical evidence confirming the complexity predictions of the computational model. We show that the differences in reaction time needed for comprehension of sentences with monadic quantifiers are consistent with the complexity differences predicted by the model.
In Chapter 8 we discuss some general open questions and possible directions for future research, e.g., using different measures of complexity, involving game theory, and so on. In general, our research explores, from different perspectives, the advantages of identifying meaning with algorithms and applying computational complexity analysis to semantic issues. It shows the fruitfulness of such an abstract computational approach for linguistics and cognitive science.
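The automata model of monadic quantifiers that the dissertation's Chapter 7 builds on can be illustrated in a few lines: "every" and "some" are decidable by two-state acceptors scanning the truth values of the restrictor's elements, while proportional "most" requires unbounded counting (push-down memory). A minimal Python sketch (function names are ours, purely for illustration):

```python
def every(bits):
    """Finite-state check for 'every A is B': two states suffice;
    reject as soon as a 0 (a non-B element of A) is seen."""
    state = "accept"
    for b in bits:
        if b == 0:
            state = "reject"
    return state == "accept"

def some(bits):
    """Finite-state check for 'some A is B': accept once a 1 is seen."""
    state = "reject"
    for b in bits:
        if b == 1:
            state = "accept"
    return state == "accept"

def most(bits):
    """'Most A are B' needs an unbounded counter: no finite-state
    acceptor suffices, which is the complexity difference the
    reaction-time experiments probe."""
    count = 0
    for b in bits:
        count += 1 if b == 1 else -1
    return count > 0
```

Here each element of the restrictor set is mapped to 1 if it satisfies the scope predicate and 0 otherwise, so `most([1, 1, 0])` checks "most of three As are B".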
Natural language processing involves a tight coupling between action (the production of language) and perception (the comprehension of language). We argue that similar theoretical principles apply to language processing as to action/perception in general. Language production is not driven solely by the speaker's intentions; language comprehension is not only input-driven; production and perception use common representations. We will relate recent findings from our language production lab to the Theory of Event Coding (TEC)'s principle of feature binding.
A model of human consciousness is presented here in terms of physics and electronics using Upanishadic awareness. The form of Atman proposed in the Upanishads in relation to human consciousness as an oscillating psychic energy-presence, and its virtual or unreal energy reflection maya, responsible for mental energy and mental time-space, are discussed. An analogy with Fresnel's bi-prism experimental set-up in physical optics is used to state, describe and understand the form, structure and function of Atman and maya, the ingredients of human consciousness. A description of phases of mind in terms of conscious states and transformation of mental energy is given. Four states of consciousness and four modes of language communication and understanding processes are also given. Implications of the above scientific awareness of Upanishadic wisdom for the modern scientific fields of physiological psychology, cognitive sciences, mind-machine modeling and natural language comprehension are suggested.
Many people have argued that the evolution of the human language faculty cannot be explained by Darwinian natural selection. Chomsky and Gould have suggested that language may have evolved as the by-product of selection for other abilities or as a consequence of as-yet unknown laws of growth and form. Others have argued that a biological specialization for grammar is incompatible with every tenet of Darwinian theory – that it shows no genetic variation, could not exist in any intermediate forms, confers no selective advantage, and would require more evolutionary time and genomic space than is available. We examine these arguments and show that they depend on inaccurate assumptions about biology or language or both. Evolutionary theory offers clear criteria for when a trait should be attributed to natural selection: complex design for some function, and the absence of alternative processes capable of explaining such complexity. Human language meets these criteria: Grammar is a complex mechanism tailored to the transmission of propositional structures through a serial interface. Autonomous and arbitrary grammatical phenomena have been offered as counterexamples to the position that language is an adaptation, but this reasoning is unsound: Communication protocols depend on arbitrary conventions that are adaptive as long as they are shared. Consequently, language acquisition in the child should systematically differ from language evolution in the species, and attempts to analogize them are misleading. Reviewing other arguments and data, we conclude that there is every reason to believe that a specialization for grammar evolved by a conventional neo-Darwinian process.
Four modes of language acquisition and communication are presented, translating ancient Indian expressions on human consciousness and mind, their form, structure and function, together with the Sabdabrahma theory of language acquisition and communication. The modern scientific understanding of such an insight is discussed. A flowchart of language processing in humans will be given. A gross model of the human language acquisition, comprehension and communication process, forming the basis for developing software for relevant mind-machine modeling, will be presented. The implications of such a model for artificial intelligence and cognitive sciences will be discussed. The essential nature and necessity of physics, communication engineering, biophysical and biochemical insight as both complementary and supplementary to mathematical and computational methods in delineating the theory of language processing in humans is put forward.
The present paper is meant to summarise and illuminate the theoretical implications of the twin theories of text comprehension and text compression. The compatibility and non-exclusiveness of particle-like analysis of language and wave-like analysis of intentionality are also demonstrated within the newly established quantum linguistics framework. The informative state of language is viewed as being relatively stable; once activated and subject to motion, therefore reaching a communicative state, different phenomena occur, which may be observed, analysed and visualised through CPP-TRS observational devices. Relativity theory may therefore be organised in terms of quanta with continuity and no contradiction.
Knowledge is power. Knowledge about human psychology is increasingly being produced using natural language processing (NLP) and related techniques. The power that accompanies and harnesses this knowledge should be subject to ethical controls and oversight. In this chapter, we address the ethical pitfalls that are likely to be encountered in the context of such research. These pitfalls occur at various stages of the NLP pipeline, including data acquisition, enrichment, analysis, storage, and sharing. We also address secondary uses of the results and tools developed through psychometric NLP, such as profit-driven targeted advertising, political campaigns, and domestic and international psyops. Along the way, we reflect on potential ethical guidelines and considerations that may help researchers navigate these pitfalls.
It is common in contemporary metaphysics to distinguish two levels of ontology: the ontology of ordinary objects and the ontology of fundamental reality. This paper argues that natural language reflects not only the ontology of ordinary objects, but also a language-driven ontology, which is involved in the mass-count distinction and part-structure-sensitive semantic selection, as well as perhaps the light ontology of pleonastic entities. The paper recasts my older theory of situated part structures without situations, making use of a primitive notion of unity.
The aim of natural language ontology is to uncover the ontological categories and structures that are implicit in the use of natural language, that is, that a speaker accepts when using a language. This article aims to clarify what exactly the subject matter of natural language ontology is, what sorts of linguistic data it should take into account, how natural language ontology relates to other branches of metaphysics, in what ways natural language ontology is important, and what may be distinctive of the ontological categories and structures reflected in natural language.
In this paper we propose a way to deal with natural language inference (NLI) by implementing Modern Type Theoretical semantics in the proof assistant Coq. The paper is a first attempt to deal with NLI, and natural language reasoning in general, by using proof assistant technology. Valid NLIs are treated as theorems, and as such the adequacy of our account is tested by trying to prove them. We use Luo's Modern Type Theory (MTT) with coercive subtyping as the formal language into which we translate natural language semantics, and we further implement these semantics in the Coq proof assistant. It is shown that the use of an MTT with an adequate subtyping mechanism can give us a number of promising results as regards NLI. Specifically, it is shown that a number of inference cases, i.e. quantifiers, adjectives, conjoined noun phrases and temporal reference, among other things, can be successfully dealt with. It is then shown that, even though Coq is an interactive and not an automated theorem prover, automation of all of the test examples is possible by introducing user-defined automated tactics. Lastly, the paper offers a number of innovative approaches to NL phenomena like adjectives, collective predication, comparatives and factive verbs, among other things, contributing in this respect to the theoretical study of formal semantics using MTTs.
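The core idea of treating a valid inference as a theorem can be shown in a few lines of Lean 4 (not the paper's Coq development; an illustrative sketch in the MTT style, where a common noun is interpreted as a type and a verb phrase as a predicate over it):

```lean
-- "Every man walks. John is a man. Therefore, John walks."
-- The inference is valid just in case this term typechecks.
example (Man : Type) (walks : Man → Prop)
    (everyManWalks : ∀ x : Man, walks x) (john : Man) :
    walks john :=
  everyManWalks john
```

Proving the theorem amounts to supplying the term `everyManWalks john`; in an interactive assistant such a proof can be found automatically by a simple tactic, which is what the paper's user-defined tactics generalize.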
The paper presents a proof-theoretic semantics (PTS) for a fragment of natural language, providing an alternative to the traditional model-theoretic (Montagovian) semantics (MTS), whereby meanings are truth-conditions (in arbitrary models). Instead, meanings are taken as derivability-conditions in a dedicated natural-deduction (ND) proof system. This semantics is effective (algorithmically decidable) and adheres to the meaning-as-use paradigm, not suffering from several of the criticisms formulated by philosophers of language against MTS as a theory of meaning. In particular, Dummett's manifestation argument does not obtain, and assertions are always warranted, having grounds of assertion. The proof system is shown to satisfy Dummett's harmony property, justifying the ND rules as meaning-conferring. The semantics is suitable for incorporation into computational linguistics grammars formulated in type-logical grammar.
One of the great successes in the study of language has been the application of formal methods, including those of formal logic. Even so, this chapter argues against one way of accounting for this success, by arguing that the study of natural language semantics and of logical consequence relations are not the same. There is indeed a lot we can glean about logic from looking at our languages, and at our inferential practices, but the semantic properties of natural languages do not determine genuine logical consequence relations. We can get from natural language semantics to logical consequence, but only by a significant process of identification of logical constants, abstraction, and idealization. The chapter also discusses different approaches to the nature of logical consequence, and examines which allow logic and natural language to come closer together.
This light piece reflects on analogies between two often disjoint streams of research: the logical semantics and pragmatics of natural language, and dynamic logics of general information-driven agency. The two areas show significant overlap in themes and tools, and yet the focus seems subtly different in each, defying a simple comparison. We discuss some unusual questions that emerge when the two are put side by side, without any pretense at covering the whole literature or at reaching definitive conclusions.
The paper sets out to offer an alternative to the function/argument approach to the most essential aspects of natural language meanings. That is, we question the assumption that semantic completeness (of, e.g., propositions) or incompleteness (of, e.g., predicates) exactly replicates the corresponding grammatical concepts (of, e.g., sentences and verbs, respectively). We argue that even if one gives up this assumption, it is still possible to keep the compositionality of the semantic interpretation of simple predicate/argument structures. In our opinion, compositionality presupposes that we are able to compare arbitrary meanings in terms of information content. This is why our proposal relies on an 'intrinsically' type-free algebraic semantic theory. The basic entities in our models are neither individuals, nor eventualities, nor their properties, but 'pieces of evidence' for believing in the 'truth' or 'existence' or 'identity' of any kind of phenomenon. Our formal language contains a single binary non-associative constructor used for creating structured complex terms representing arbitrary phenomena. We give a finite Hilbert-style axiomatisation and a decision algorithm for the entailment problem of the suggested system.
This paper gives an outline of natural language ontology as a subdiscipline of both linguistics and philosophy. It argues that part of the constructional ontology reflected in natural language is in significant respects on a par with syntax (on the generative view).
This article contains the results of a theoretical analysis of the phenomenon of natural language understanding (NLU) as a methodological problem. The combination of structural-ontological and informational-psychological approaches provided an opportunity to describe the subject-matter field of NLU as a composite function of the mind, which systemically combines the verbal and discursive structural layers. In particular, the idea of NLU is presented, on the one hand, as the relation between the discourse of a specific speech message and the meta-discourse of a language, in turn activated by need-motivational factors. On the other hand, it is conceptualized as a process with a specific structure of information metabolism, the study of which implies the necessity to differentiate the affective (emotional) and need-motivational influences on NLU, as well as to take into account their interaction. At the same time, the hypothesis is argued that needs influence NLU under a scenario similar to the Yerkes-Dodson pattern. The theoretical conclusion that emotions fulfill the function of the operator of the structural features of the information metabolism of NLU is also substantiated. Thus, depending on the modality of emotions in the process of NLU, it was proposed to distinguish two scenarios for the implementation of information metabolism: reductive and synthetic. An argument in favor of the conclusion about the productive and constitutive role of emotions in the process of NLU is also given.
This paper elaborates distinctions between a core and a periphery in the ontological and the conceptual domains associated with natural language. The ontological core-periphery distinction is essential for natural language ontology and is the basis for the central thesis of my 2013 book Abstract Objects and the Semantics of Natural Language, namely that natural language permits reference to abstract objects in its periphery, but not its core.
Inner speech is a pervasive feature of our conscious mental lives. Yet its function and character remain an issue of philosophical debate. The present paper focuses on the relation between inner speech and natural language and on the cognitive functions that various contributors have ascribed to inner speech. In particular, it is argued that inner speech does not consist of bare, context-free internal presentations of sentential (or subsentential) content, but rather has an ineliminably perspectival element. The proposed model of inner speech, which characterizes inner speech as akin to the testimony of an inner interlocutor, accounts for this perspectival element and, it is argued, is explanatorily superior, insofar as it better explains, amongst other phenomena, the often condensed character of inner speech.
It has recently been argued that public linguistic norms are implicated in the epistemology of testimony by way of underwriting the reliability of language comprehension. This paper argues that linguistic normativity, as such, makes no explanatory contribution to the epistemology of testimony, but instead emerges naturally out of a collective effort to maintain language as a reliable medium for the dissemination of knowledge. Consequently, the epistemologies of testimony and language comprehension are deeply intertwined from the start, and there is no room for grounding the one in terms of the other.
This paper gives a characterization of the ontology implicit in natural language and the entities it involves, situates natural language ontology within metaphysics, and responds to Chomsky's dismissal of externalist semantics.
The Monist's call for papers for this issue ended: "if formalism is true, then it must be possible in principle to mechanize meaning in a conscious thinking and language-using machine; if intentionalism is true, no such project is intelligible". We use the Grelling-Nelson paradox to show that natural language is indefinitely extensible, which has two important consequences: it cannot be formalized, and model-theoretic semantics, standard for formal languages, is not suitable for it. We also point out that object-object mapping theories of semantics, the usual account of the possibility of non-intentional semantics, do not seem able to account for the indefinitely extensible productivity of natural language.
Successful biomedical data mining and information extraction require a complete picture of biological phenomena such as genes, biological processes, and diseases, as these exist on different levels of granularity. To realize this goal, several freely available heterogeneous databases as well as proprietary structured datasets have to be integrated into a single global customizable scheme. We will present a tool to integrate different biological data sources by mapping them to a proprietary biomedical ontology that has been developed for the purposes of making computers understand medical natural language.
Stephen Laurence and Eric Margolis have recently argued that certain kinds of regress arguments against the language of thought (LOT) hypothesis, as an account of how we understand natural languages, have been answered incorrectly or inadequately by supporters of LOT ('Regress arguments against the language of thought', Analysis, 57(1), 60–66, 1997). They argue further that this does not undermine the LOT hypothesis, since the main sources of support for LOT are (or might be) independent of it providing an account of how we understand natural language. In my paper I seek to refute both these claims, and reinstate the putative explanation of natural language understanding as a necessarily central part of the support for LOT. The main argument exploits the fact that Laurence and Margolis give too little weight to the ideas (a) that LOT might be innate and (b) that for LOT supporters a semantic theory must apply to in-the-head tokens, not linguistic utterances.
It has been shown that when participants are asked to make sensibility judgments on sentences that describe a transfer of an object toward or away from their body, they are faster to respond when the response requires a movement in the same direction as the transfer described in the sentence. This phenomenon is known as the action compatibility effect (ACE). This study investigates whether the ACE exists for volunteers with Alzheimer's disease (AD), whether the ACE can facilitate language comprehension, and also whether the ACE can still be produced if the order of the two events is inverted, that is, whether overt movement can prime comprehension of transfer sentences. In Study 1, participants with AD, younger adults, and older adults were tested on an adaptation of the ACE paradigm. In Study 2, the same paradigm was modified to include an arm movement that participants had to perform prior to sentence exposure on screen. In Study 1, young adults, older adults, and individuals with AD were faster to respond when the direction of the response movement matched the directionality implied by the sentence. In Study 2, no traditional ACE was found; participants were faster when the direction of the movement immediately preceding the sentence matched the directionality of the sentence. It was found that compatibility effects generated a relative advantage, that transfer schemata are easier to process, and that an ACE-like effect can be the result of mutual priming between language and movement. The results suggest preservation in AD of the neural systems for action engaged during language comprehension, and that conditions under which comprehension in AD can be facilitated in real life may be identified.
In this paper, I shall provide a novel response to the argument from context-sensitivity against truth-conditional semantics. It is often argued that the contextual influences on truth-conditions outstrip the resources of standard truth-conditional accounts, and so truth-conditional semantics rests on a mistake. The argument assumes that truth-conditional semantics is legitimate if and only if natural language sentences have truth-conditions. I shall argue that this assumption is mistaken. Truth-conditional analyses should be viewed as idealised approximations of the complexities of natural language meaning. From this perspective, disparity between the scientific model and its real-world target is to be expected. I elaborate on what such an approach to semantics would look like.
Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation. We show how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, how they interweave production and comprehension processes, and how they use these predictions to monitor the upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.
In this paper, we investigate the possibility of translating a fragment of the natural deduction system (NDS) for natural language semantics into modern type theory (MTT), originally suggested by Luo (2014). Our main goal will be to examine and translate the basic rules of NDS (namely, meta-rules, structural rules, identity rules, noun rules, and rules for intersective and subsective adjectives) to MTT. Additionally, we will also consider some of their general features.
The semantic rules governing natural language quantifiers (e.g. "all," "some," "most") neither coincide with nor resemble the semantic rules governing the analogues of those expressions that occur in the artificial languages used by semanticists. Some semanticists, e.g. Peter Strawson, have put forth data-consistent hypotheses as to the identities of the semantic rules governing some natural-language quantifiers. But, despite their obvious merits, those hypotheses have been universally rejected. In this paper, it is shown that those hypotheses are indeed correct. Moreover, data-consistent hypotheses are put forth as to the identities of the semantic rules governing the words "most" and "many," which semanticists have thus far been unable to identify. The points made in this paper are anticipated in a paper, published in the same issue of the Journal of Pragmatics, by Andrzej Boguslawski.
We present Property Theory with Curry Typing (PTCT), an intensional first-order logic for natural language semantics. PTCT permits fine-grained specifications of meaning. It also supports polymorphic types and separation types. We develop an intensional number theory within PTCT in order to represent proportional generalized quantifiers like "most." We use the type system and our treatment of generalized quantifiers in natural language to construct a type-theoretic approach to pronominal anaphora that avoids some of the difficulties that undermine previous type-theoretic analyses of this phenomenon.
Modern weaponry is often too complex for unaided human operation, and is largely or totally controlled by computers. But modern software, particularly artificial intelligence software, exhibits such complexity and inscrutability that there are grave dangers associated with its use in non-benign applications. Recent efforts to make computer systems more accessible to military personnel through natural language processing systems, as proposed in the Strategic Computing Initiative of the Defense Advanced Research Projects Agency, increase rather than decrease the dangers of unpredictable behavior. Defense systems constitute, in fact, a paradigm case of the wrong kind of application for this technology. This cannot be expected to change, since the unpredictability stems from inherent properties of computer systems and of natural languages.
We describe a knowledge representation and inference formalism, based on an intensional propositional semantic network, in which variables are structured terms consisting of quantifier, type, and other information. This has three important consequences for natural language processing. First, it leads to an extended, more natural formalism whose use and representations are consistent with the use of variables in natural language in two ways: the structure of representations mirrors the structure of the language, and it allows re-use phenomena such as pronouns and ellipsis. Second, the formalism allows the specification of description subsumption as a partial ordering on related concepts (variable nodes in a semantic network) that relates more general concepts to more specific instances of those concepts, as is done in language. Finally, this structured variable representation simplifies the resolution of some representational difficulties with certain classes of natural language sentences, namely donkey sentences and sentences involving branching quantifiers. The implementation of this formalism is called ANALOG (A NAtural LOGIC), and its utility for natural language processing tasks is illustrated.
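The subsumption ordering on structured variables can be sketched as follows (a toy illustration, not ANALOG's actual data structures; all names and the restriction-set encoding are our assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StructuredVar:
    """A variable as a structured term: a quantifier plus type restrictions."""
    quantifier: str          # e.g. "any" (universal), "some" (existential)
    restrictions: frozenset  # type/property constraints on the variable

def subsumes(general: StructuredVar, specific: StructuredVar) -> bool:
    """general subsumes specific iff it carries the same quantifier and
    every restriction on the general variable also constrains the
    specific one (fewer restrictions = more general)."""
    return (general.quantifier == specific.quantifier
            and general.restrictions <= specific.restrictions)

# "any dog" is more general than "any sick dog":
any_dog = StructuredVar("any", frozenset({"dog"}))
any_sick_dog = StructuredVar("any", frozenset({"dog", "sick"}))
```

Here `subsumes(any_dog, any_sick_dog)` holds while the converse fails, giving the partial ordering from general concepts to their more specific instances.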
If Frege's Begriffsschrift does represent the logical form of natural language, then either it lacks a logical form itself, or its logical form is different from that of natural language. But Frege insists that his notation has a logical form, so the second disjunct holds. This suggests that Frege's notation will generate consequences different from those that can be derived with natural language, with its different logical form. For anyone looking for "a means of avoiding misunderstandings", as Frege does, this would be unpalatable, to say the least.
A formal, computational, semantically clean representation of natural language is presented. This representation captures the fact that logical inferences in natural language crucially depend on the semantic relation of entailment between sentential constituents such as determiner, noun, adjective, adverb, preposition, and verb phrases. The representation parallels natural language in that it accounts for human intuition about entailment of sentences, it preserves its structure, it reflects the semantics of different syntactic categories, it simulates conjunction, disjunction, and negation in natural language by computable operations with provable mathematical properties, and it allows one to represent coordination on different syntactic levels.
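One way to see how entailment and the connectives can become computable operations is a minimal extensional sketch. This is an illustration under assumptions of my own (denotations as finite sets over a toy domain), not the paper's formalism:

```python
# Illustrative sketch (not the paper's representation): model constituent
# denotations as finite sets over a toy domain, so entailment is subset
# inclusion and conjunction/disjunction/negation are set operations with
# provable properties (e.g. negation reverses entailment).

DOMAIN = frozenset({"rex", "felix", "tweety"})

dog = frozenset({"rex"})                       # denotation of "dog"
animal = frozenset({"rex", "felix", "tweety"}) # denotation of "animal"

def entails(a, b):
    """a entails b iff everything satisfying a also satisfies b."""
    return a <= b

def conj(a, b): return a & b   # conjunction as intersection
def disj(a, b): return a | b   # disjunction as union
def neg(a):     return DOMAIN - a  # negation as complement

# "dog" entails "animal", and negation flips entailment direction:
assert entails(dog, animal)
assert entails(neg(animal), neg(dog))
```

In this toy setting, laws such as contraposition fall out of set algebra, which is the kind of provable mathematical property the abstract mentions for the connective operations.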
Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words. We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: the acquisition of features that discriminate among categories, and the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that sequentially integrates newly observed data and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories which yield a closer fit to behavioral data than related models, while at the same time acquiring features which characterize the learned categories.
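The general shape of particle-filter category learning (maintain many clustering hypotheses, reweight them as each observation arrives, then resample) can be sketched in a few lines. This is a generic toy under my own assumptions (the likelihood, the new-category probability, and the feature sets are all invented for illustration), not the authors' model:

```python
# Minimal particle-filter sketch of incremental category learning (toy,
# not the paper's model): each particle is one clustering hypothesis over
# the observations seen so far; new data reweights and resamples particles.
import random
random.seed(0)

def likelihood(features, cluster):
    """Toy likelihood: overlap between features and the cluster's pool."""
    pool = set().union(*cluster) if cluster else set()
    return (len(set(features) & pool) + 1) / (len(features) + 1)

def step(particles, weights, features, new_cluster_prob=0.2):
    new_particles, new_weights = [], []
    for p, w in zip(particles, weights):
        p = [list(c) for c in p]  # copy this hypothesis before mutating
        if p and random.random() > new_cluster_prob:
            # assign the observation to the best-fitting existing category
            best = max(range(len(p)), key=lambda i: likelihood(features, p[i]))
            w *= likelihood(features, p[best])
            p[best].append(features)
        else:
            p.append([features])       # open a new category
            w *= new_cluster_prob
        new_particles.append(p)
        new_weights.append(w)
    # resample proportionally to weight (the sequential Monte Carlo step)
    total = sum(new_weights)
    resampled = random.choices(new_particles,
                               [w / total for w in new_weights],
                               k=len(particles))
    return resampled, [1.0] * len(particles)

particles, weights = [[] for _ in range(20)], [1.0] * 20
stream = [{"barks", "furry"}, {"meows", "furry"}, {"barks", "tail"}]
for obs in stream:
    particles, weights = step(particles, weights, obs)
```

Because each observation is integrated once and then only hypotheses (not raw data) are carried forward, the learner is incremental in the sense the abstract describes.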
This paper presents two projects concerned with the application of natural language processing technology to improving communication between Public Administration and citizens. The first project, GIST, is concerned with automatic multilingual generation of instructional texts for form-filling. The second project, TAMIC, aims at providing an interface for interactive access to information, centered on natural language processing and intended to be used by the clerk with the active participation of the citizen.
The article argues that cognitive linguistic theory may prove an alternative to the Montague paradigm for designing natural language understanding systems. Within this framework it describes a system which models language understanding as a dialogical process between user and computer. The system operates with natural language texts as input and represents language meaning as entity-relationship diagrams.
Logics of discrete time are, in Arthur Prior’s words, “applicable in limited fields of discourse in which we are concerned with what happens in a sequence of discrete states,” independent of “any serious metaphysical assumption that time is discrete.” This insight is applied to natural language semantics, where it is widely assumed that time, like the real line, is dense. “Limited fields of discourse” are construed as finite sets of temporal propositions, inducing bounded notions of temporal granularity that can be refined to expand the discourse. The construal is developed in line with Prior’s view of what is “metaphysically fundamental”.
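The idea of evaluating temporal talk over a finite sequence of discrete states can be made concrete with a small evaluator. This is an illustrative sketch of Prior-style discrete-time semantics in general (the formula encoding and trace are my own assumptions), not the paper's construction:

```python
# Toy discrete-time evaluator: time is a finite sequence of states, each a
# set of atomic propositions; temporal operators quantify over later states.

def holds(formula, trace, t=0):
    op = formula[0]
    if op == "atom":
        return formula[1] in trace[t]
    if op == "not":
        return not holds(formula[1], trace, t)
    if op == "and":
        return holds(formula[1], trace, t) and holds(formula[2], trace, t)
    if op == "eventually":  # true at some state from t onward
        return any(holds(formula[1], trace, u) for u in range(t, len(trace)))
    if op == "always":      # true at every state from t onward
        return all(holds(formula[1], trace, u) for u in range(t, len(trace)))
    raise ValueError(f"unknown operator: {op}")

trace = [{"rain"}, {"rain", "wind"}, {"sun"}]
assert holds(("eventually", ("atom", "sun")), trace)
assert not holds(("always", ("atom", "rain")), trace)
```

The finiteness of `trace` is the point: no density assumption is needed, and a coarse trace can always be refined into more states to expand the discourse.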
In infancy, maternal socioeconomic status (SES) is associated with real-time language processing skills, but whether this relationship carries into adulthood is unknown. We explored the effects of maternal SES in college-aged adults on eye-tracked, spoken sentence comprehension tasks using the visual world paradigm. When sentences ended in highly plausible, expected target nouns, higher SES was associated with a greater likelihood of considering alternative endings related to the action of the sentence. Moreover, for unexpected sentence endings, individuals from higher SES backgrounds were sensitive to whether the ending was action-related or unrelated, showing a benefit for plausible endings. Individuals from lower SES backgrounds did not show this advantage. This suggests maternal SES can influence the dynamics of sentence processing even in adulthood, with consequences for processing unexpected content. These findings highlight the importance of early lexical experience for adult language skills.
Since the rise of modern natural science there has been a deep tension between the conceptual and the natural. Wittgenstein's discussion of how we learn a sensation-language contains important resources that can help us relieve this tension. The key here, I propose, is to focus our attention on animal nature, conceived as partially re-enchanted. To see how nature, so conceived, helps us relieve the tension in question, it is crucial to gain a firm and detailed appreciation of how the primitive-instinctive, a central part of animal nature, actually serves the conceptual. I offer such an appreciation by closely examining §244 of the Philosophical Investigations and Peter Winch's discussion of it. The general aim is to bring out a certain kind of Wittgensteinian “naturalism”, a “naturalism” that is fully alive to the rootedness of conceptuality in nature. A concomitant aim is to illustrate the truth of Wittgenstein's saying that in philosophy one often has to pay close attention to details.