Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically.
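The abstract does not spell out the estimation procedure, but the general idea can be sketched (a hypothetical pipeline, not the authors' actual method): treat pairwise co-occurrence counts of place names as similarities, convert them to dissimilarities, and recover relative two-dimensional coordinates with classical multidimensional scaling.

```python
import numpy as np

# Hypothetical co-occurrence counts among four site names in a corpus.
# Higher counts are taken as a proxy for geographic proximity.
cooc = np.array([
    [0, 9, 2, 1],
    [9, 0, 3, 1],
    [2, 3, 0, 8],
    [1, 1, 8, 0],
], dtype=float)

# Convert similarities to dissimilarities (simple inverse-count heuristic).
dist = 1.0 / (1.0 + cooc)
np.fill_diagonal(dist, 0.0)

# Classical MDS: double-center the squared distance matrix, then take the
# top-2 eigenvectors (scaled by root eigenvalues) as relative coordinates.
n = dist.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (dist ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Sites with high co-occurrence end up closer together in the embedding.
d01 = np.linalg.norm(coords[0] - coords[1])  # frequently co-occurring pair
d03 = np.linalg.norm(coords[0] - coords[3])  # rarely co-occurring pair
print(d01 < d03)
```

An artifact of unknown provenance could then, in principle, be placed by comparing its inscription statistics against the embedded sites.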
The recent trend in cognitive robotics experiments on language learning, symbol grounding, and related issues necessarily entails a reduction of sensorimotor aspects from those provided by a human body to those that can be realized in machines, limiting robotic models of symbol grounding in this respect. Here, we argue that there is a need for modeling work in this domain to explicitly take into account the richer human embodiment, even for concrete concepts that prima facie relate merely to simple actions, and illustrate this using distributional methods from computational linguistics, which allow us to investigate the grounding of concepts based on their actual usage. We also argue that these techniques have applications in theories and models of grounding, particularly in machine implementations thereof. Similarly, considering the grounding of concepts in human terms may benefit future work in computational linguistics, in particular in going beyond “grounding” concepts in the textual modality alone. Overall, we highlight the potential for a mutually beneficial relationship between the two fields.
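The distributional approach mentioned above can be illustrated with a minimal sketch (toy corpus and window size of my own choosing, not the authors' setup): a word's usage is approximated by counts of words occurring near it, and concepts are compared by cosine similarity of these context vectors.

```python
import math
from collections import Counter

# Toy corpus standing in for real distributional data; actual work would
# use large corpora and richer context representations.
corpus = [
    "the child can grasp the cup",
    "the robot can grasp the cup",
    "the child can kick the ball",
    "the player can kick the ball",
]

def context_vector(word, texts, window=2):
    """Count words co-occurring with `word` within a +/- `window` span."""
    vec = Counter()
    for text in texts:
        tokens = text.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vec[tokens[j]] += 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

grasp, kick = context_vector("grasp", corpus), context_vector("kick", corpus)
# "grasp" and "kick" share grammatical contexts but differ in their
# typical arguments (cup vs. ball), which the vectors partly reflect.
print(round(cosine(grasp, kick), 2))  # → 0.64
```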
Some recent studies in computational linguistics have aimed to take advantage of various cues presented by punctuation marks. This short survey is intended to summarise these research efforts and additionally, to outline a current perspective for the usage and functions of punctuation marks. We conclude by presenting an information-based framework for punctuation, influenced by treatments of several related phenomena in computational linguistics.
Open peer commentary on the target article “How and Why the Brain Lays the Foundations for a Conscious Self” by Martin V. Butz. Excerpt: In this commentary to Martin V. Butz’s target article I am especially concerned with his remarks about language (§33, §§71–79, §91) and modularity (§32, §41, §48, §81, §§94–98). In that context, I would like to bring into discussion my own work on computational models of self-monitoring (cf. Neumann 1998, 2004). In this work I explore the idea of an anticipatory drive as a substantial control device for modelling high-level complex language processes such as self-monitoring and adaptive language use. My work is grounded in computational linguistics and, as such, uses a mathematical and computational methodology. Nevertheless, it might provide some interesting aspects and perspectives for constructivism in general, and the model proposed in Butz’s article, in particular.
We combine state-of-the-art techniques from computational linguistics and theorem proving to build an engine for playing text adventures, computer games with which the player interacts purely through natural language. The system employs a parser for dependency grammar and a generation system based on TAG, and has components for resolving and generating referring expressions. Most of these modules make heavy use of inferences offered by a modern theorem prover for description logic. Our game engine solves some problems inherent in classical text adventures, and is an interesting test case for the interaction between natural language processing and inference.
There is currently much interest in bringing together the tradition of categorial grammar, and especially the Lambek calculus, with the recent paradigm of linear logic to which it has strong ties. One active research area is designing non-commutative versions of linear logic (Abrusci, 1995; Retoré, 1993) which can be sensitive to word order while retaining the hypothetical reasoning capabilities of standard (commutative) linear logic (Dalrymple et al., 1995). Some connections between the Lambek calculus and computations in groups have long been known (van Benthem, 1986) but no serious attempt has been made to base a theory of linguistic processing solely on group structure. This paper presents such a model, and demonstrates the connection between linguistic processing and the classical algebraic notions of non-commutative free group, conjugacy, and group presentations. A grammar in this model, or G-grammar, is a collection of lexical expressions which are products of logical forms, phonological forms, and inverses of those. Phrasal descriptions are obtained by forming products of lexical expressions and by cancelling contiguous elements which are inverses of each other. A G-grammar provides a symmetrical specification of the relation between a logical form and a phonological string that is neutral between parsing and generation modes. We show how the G-grammar can be oriented for each of the modes by reformulating the lexical expressions as rewriting rules adapted to parsing or generation, which then have strong decidability properties (inherent reversibility). We give examples showing the value of conjugacy for handling long-distance movement and quantifier scoping both in parsing and generation.
The paper argues that by moving from the free monoid over a vocabulary V (standard in formal language theory) to the free group over V, deep affinities between linguistic phenomena and classical algebra come to the surface, and that the consequences of tapping the mathematical connections thus established can be considerable.
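The cancellation mechanism the abstract describes, multiplying lexical expressions and cancelling contiguous inverse elements, is free-group word reduction, which can be sketched as follows (the lexical entries below are invented for illustration and do not reproduce the paper's grammar):

```python
# Symbols are strings; a trailing "~" marks an inverse, so "x" and "x~"
# cancel when adjacent, as in a free group.

def inv(sym):
    """Invert a symbol: 'x' <-> 'x~'."""
    return sym[:-1] if sym.endswith("~") else sym + "~"

def reduce_word(word):
    """Cancel contiguous inverse pairs until none remain (free-group reduction)."""
    out = []
    for sym in word:
        if out and out[-1] == inv(sym):
            out.pop()          # contiguous inverses cancel
        else:
            out.append(sym)
    return out

# Toy lexical entries: products of logical-form symbols (upper case),
# phonological symbols (lower case), and inverses encoding combination.
john = ["JOHN", "john"]
sleeps = ["john~", "JOHN~", "SLEEP", "sleeps"]

# Multiplying the entries and reducing leaves the phrasal description,
# pairing a logical form with a phonological string.
print(reduce_word(john + sleeps))  # → ['SLEEP', 'sleeps']
```

The stack-based loop performs exactly the "cancel contiguous inverses" step; which entries combine, and in what order, is where the actual grammar would do its work.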
Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters' thoughts and perceptions in third-person narrative. An effect of perspective on reference in narrative is addressed: references in passages told from the perspective of a character reflect the character's beliefs. An algorithm that uses the results of our discourse process to understand references with respect to an appropriate set of beliefs is presented.
Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, and computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerised lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists, and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
After reviewing some major features of the interactions between Linguistics and Philosophy in recent years, I suggest that the depth and breadth of current inquiry into semantics has brought this subject into contact both with questions of the nature of linguistic competence and with modern and traditional philosophical study of the nature of our thoughts, and the problems of metaphysics. I see this development as promising for the future of both subjects.
In this paper we explore how compositional semantics, discourse structure, and the cognitive states of participants all contribute to pragmatic constraints on answers to questions in dialogue. We synthesise formal semantic theories on questions and answers with techniques for discourse interpretation familiar from computational linguistics, and show how this provides richer constraints on responses in dialogue than either component can achieve alone.
The notions of argument and argumentation have become increasingly ubiquitous in Artificial Intelligence research, with various applications and interpretations. Less attention has, however, been specifically devoted to rhetorical argument. The work presented in this paper aims at bridging this gap by proposing a framework for characterising rhetorical argumentation, based on Perelman and Olbrechts-Tyteca's New Rhetoric. The paper provides an overview of the state of the art of computational work based on, or dealing with, rhetorical aspects of argumentation, before presenting the characterisation proposed, corroborated by worked examples.
In this fascinating work, Scott Soames offers a new conception of the relationship between linguistic meaning and assertions made by utterances. He gives meanings of proper names and natural kind predicates and explains their use in attitude ascriptions. He also demonstrates the irrelevance of rigid designation in understanding why theoretical identities containing such predicates are necessary, if true.
In the TACITUS project for using commonsense knowledge in the understanding of texts about mechanical devices and their failures, we have been developing various commonsense theories that are needed to mediate between the way we talk about the behavior of such devices and causal models of their operation. Of central importance in this effort is the axiomatization of what might be called commonsense metaphysics. This includes a number of areas that figure in virtually every domain of discourse, such as granularity, scales, time, space, material, physical objects, shape, causality, functionality, and force. Our effort has been to construct core theories of each of these areas, and then to define, or at least characterize, a large number of lexical items in terms provided by the core theories. In this paper we discuss our methodological principles and describe the key ideas in the various domains we are investigating.
The proposed multilevel framework of discourse comprehension includes the surface code, the textbase, the situation model, the genre and rhetorical structure, and the pragmatic communication level. We describe these five levels when comprehension succeeds and also when there are communication misalignments and comprehension breakdowns. A computer tool has been developed, called Coh-Metrix, that scales discourse (oral or print) on dozens of measures associated with the first four discourse levels. The measurement of these levels with an automated tool helps researchers track and better understand multilevel discourse comprehension. Two sets of analyses illustrate the utility of Coh-Metrix in discourse theory and educational practice. First, Coh-Metrix was used to measure the cohesion of the textbase and situation model, as well as potential extraneous variables, in a sample of published studies that manipulated text cohesion. This analysis helped us better understand what precisely was manipulated in these studies and the implications for discourse comprehension mechanisms. Second, Coh-Metrix analyses are reported for samples of narrative and science texts in order to advance the argument that traditional text difficulty measures are limited because they fail to accommodate most of the levels of the multilevel discourse comprehension framework.
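Coh-Metrix itself computes dozens of indices; as a toy illustration of one textbase-level idea, here is a content-word overlap score between adjacent sentences (a drastic simplification of my own, not one of Coh-Metrix's actual measures):

```python
def content_words(sentence, stopwords=frozenset({"the", "a", "an", "is", "are", "and"})):
    """Crude content-word extraction: lowercase, strip punctuation, drop stopwords."""
    return {w.lower().strip(".,") for w in sentence.split()} - stopwords

def adjacent_overlap(sentences):
    """Mean Jaccard overlap of content words between adjacent sentences."""
    scores = []
    for s1, s2 in zip(sentences, sentences[1:]):
        w1, w2 = content_words(s1), content_words(s2)
        scores.append(len(w1 & w2) / max(len(w1 | w2), 1))
    return sum(scores) / len(scores)

cohesive = ["The cell divides.", "The cell then copies its DNA."]
loose = ["The cell divides.", "Mitochondria produce energy."]
# A text whose sentences share referents scores higher than one that
# jumps between topics.
print(adjacent_overlap(cohesive) > adjacent_overlap(loose))  # → True
```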
This volume is a collection of original contributions from outstanding scholars in linguistics, philosophy and computational linguistics exploring the relation between word meaning and human linguistic creativity. The papers present different aspects surrounding the question of what word meaning is, a problem that has been the center of heated debate in all those disciplines that directly or indirectly are concerned with the study of language and of human cognition. The discussions are centered around the newly emerging view of the mental lexicon, as outlined in the Generative Lexicon theory (Pustejovsky, 1995), which proposes a unified model for defining word meaning. The individual contributors present their evidence for a generative approach as well as critical perspectives, which makes for a volume where word meaning is viewed not from a single angle or concern but through a wide variety of topics, each introduced and explained by the editors.
This book deals with a major problem in the study of language: the problem of reference. The ease with which we refer to things in conversation is deceptive. Upon closer scrutiny, it turns out that we hardly ever tell each other explicitly what object we mean, although we expect our interlocutor to discern it. Amichai Kronfeld provides an answer to two questions associated with this: how do we successfully refer, and how can a computer be programmed to achieve this? Beginning with the major theories of reference, Dr Kronfeld provides a consistent philosophical view which is a synthesis of Frege's and Russell's semantic insights with Grice's and Searle's pragmatic theories. This leads to a set of guiding principles, which are then applied to a computational model of referring. The discussion is made accessible to readers from a number of backgrounds: in particular, students and researchers in the areas of computational linguistics, artificial intelligence and the philosophy of language will want to read this book.
Some of the systems used in natural language generation (NLG), a branch of applied computational linguistics, have the capacity to create or assemble somewhat original messages adapted to new contexts. In this paper, taking Bernard Williams’ account of assertion by machines as a starting point, I argue that NLG systems meet the criteria for being speech actants to a substantial degree. They are capable of authoring original messages, and can even simulate illocutionary force and speaker meaning. Background intelligence embedded in their datasets enhances these speech capacities. Although there is an open question about who is ultimately responsible for their speech, if anybody, we can settle this question by using the notion of proxy speech, in which responsibility for artificial speech acts is assigned legally or conventionally to an entity separate from the speech actant.
In spite of alleged differences in purpose, descriptive and computational linguistics share many problems, due to the fact that any precise study of language needs some form of knowledge representation. This constraint is most apparent when the interpretation of sentences takes into account elements of the so-called “context”. The parametrization of context, i.e. the explicit listing of features relevant to some interpretation task, is difficult because it requires flexible formal structures for understanding or simulating inferential behaviour, as well as a large amount of information about conventional structures in the given language. This paper aims at illustrating major difficulties in these two fields in relation to the necessity of a contextual approach. It offers a (clearly partial) enumeration of open problems in the representation of commonsense knowledge and language-dependent structures, with some attempt to delineate future solutions.
A primary problem in the area of natural language processing has been semantic analysis. This book looks at the semantics of natural languages in context. It presents an approach to the computational processing of English text that combines current theories of knowledge representation and reasoning in Artificial Intelligence with the latest linguistic views of lexical semantics. The book will interest postgraduates and researchers in computational linguistics as well as industrial research groups specializing in natural language processing.
Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony. This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles.
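The kind of nonallophonic rule at issue can be illustrated with Dutch-style final devoicing (a toy orthographic rendering of my own; the model itself operates over phonological representations and is meant to learn such rules from alternations rather than have them hand-coded):

```python
# Final devoicing applies to a whole class of segments (voiced obstruents),
# not a single allophone pair; this is what makes it a "general rule".
DEVOICE = {"b": "p", "d": "t", "g": "k", "v": "f", "z": "s"}

def final_devoicing(form):
    """Devoice a word-final voiced obstruent, else leave the form unchanged."""
    if form and form[-1] in DEVOICE:
        return form[:-1] + DEVOICE[form[-1]]
    return form

# The alternation relates 'hond' ~ 'honden' ("dog" ~ "dogs"): the rule
# derives the devoiced surface singular from the underlying stem /hond/.
print(final_devoicing("hond"))    # → 'hont'
print(final_devoicing("honden"))  # → 'honden' (non-final /d/ is unaffected)
```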
We know from the literature in theoretical linguistics that interrogative constructions in Italian have particular syntactic properties, due to the liberal word order and the rich inflectional system. In this paper we show that the calculus of pregroups represents a flexible and efficient computational device for the analysis and derivation of Italian sentences and questions. In this context the distinction between direct vs. indirect statements and questions is explored.
This paper argues that truth values of sentences containing predicates of “personal taste” such as fun or tasty must be relativized to individuals. This relativization is of truth value only, and does not involve a relativization of semantic content: If you say roller coasters are fun, and I say they are not, I am negating the same content which you assert, and directly contradicting you. Nonetheless, both our utterances can be true (relative to their separate contexts). A formal semantic theory is presented which gives this result by introducing an individual index, analogous to the world and time indices commonly used, and by treating the pragmatic context as supplying a particular value for this index. The context supplies this value in the derivation of truth values from content, not in the derivation of content from character. Predicates of personal taste therefore display a kind of contextual variation in interpretation which is unlike the familiar variation exhibited by pronouns and other indexicals.
In a recent paper (Linguistics and Philosophy 23, 4, June 2000), Jason Stanley argues that there are no `unarticulated constituents', contrary to what advocates of Truth-conditional pragmatics (TCP) have claimed. All truth-conditional effects of context can be traced to logical form, he says. In this paper I maintain that there are unarticulated constituents, and I defend TCP. Stanley's argument exploits the fact that the alleged unarticulated constituents can be `bound', that is, they can be made to vary with the values introduced by operators in the sentence. I show that Stanley's argument rests on a fallacy, and I provide alternative analyses of the data.
In this paper, I defend the thesis that all effects of extra-linguistic context on the truth-conditions of an assertion are traceable to elements in the actual syntactic structure of the sentence uttered. In the first section, I develop the thesis in detail, and discuss its implications for the relation between semantics and pragmatics. The next two sections are devoted to apparent counterexamples. In the second section, I argue that there are no convincing examples of true non-sentential assertions. In the third section, I argue that there are no convincing examples of what John Perry has called `unarticulated constituents'. I conclude by drawing some consequences of my arguments for appeals to context-dependence in the resolution of problems in epistemology and philosophical logic.
We study the computational complexity of polyadic quantifiers in natural language. This type of quantification is widely used in formal semantics to model the meaning of multi-quantifier sentences. First, we show that the standard constructions that turn simple determiners into complex quantifiers, namely Boolean operations, iteration, cumulation, and resumption, are tractable. Then, we provide insight into the branching operation, which yields intractable natural language multi-quantifier expressions. Next, we focus on a linguistic case study. We use computational complexity results to investigate semantic distinctions between quantified reciprocal sentences. We show a computational dichotomy between different readings of reciprocity. Finally, we go more into philosophical speculation on meaning, ambiguity and computational complexity. In particular, we investigate the possibility of revising the Strong Meaning Hypothesis with complexity aspects to better account for meaning shifts in the domain of multi-quantifier sentences. The paper not only contributes to the field of formal semantics but also illustrates how the tools of computational complexity theory might be successfully used in linguistics and philosophy with an eye towards cognitive science.
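The tractability claim for iteration and cumulation can be made concrete with a small sketch (a toy finite model of my own; the paper's setting is generalized quantifier theory): both readings are decided by a polynomial number of membership tests in the relation.

```python
# Toy model: two students, two problems, and a "solved" relation.
A = {"ann", "bob"}
B = {"x", "y"}
R = {("ann", "x"), ("bob", "y"), ("bob", "x")}

def every_some(A, B, R):
    """Iterated reading of "every A solved some B":
    for every a in A there is some b in B with (a, b) in R."""
    return all(any((a, b) in R for b in B) for a in A)

def cumulative(A, B, R):
    """Cumulative reading: every a in A solved some b, and every b in B
    was solved by some a. Still O(|A|*|B|) membership tests."""
    return (all(any((a, b) in R for b in B) for a in A)
            and all(any((a, b) in R for a in A) for b in B))

print(every_some(A, B, R), cumulative(A, B, R))  # → True True
```

Branching readings, by contrast, quantify over subsets of A and B simultaneously, which is where the intractability the abstract mentions comes in.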
An important debate in the current literature is whether “all truth-conditional effects of extra-linguistic context can be traced to [a variable at; LM] logical form” (Stanley, ‘Context and Logical Form’, Linguistics and Philosophy, 23 (2000) 391). That is, according to Stanley, the only truth-conditional effects that extra-linguistic context has are localizable in (potentially silent) variable-denoting pronouns or pronoun-like items, which are represented in the syntax/at logical form (pure indexicals like I or today are put aside in this discussion). According to Recanati (‘Unarticulated Constituents’, Linguistics and Philosophy, 25 (2002) 299), extra-linguistic context can have additional truth-conditional effects, in the form of optional pragmatic processes like ‘free enrichment’. This paper shows that Recanati’s position is not warranted, since there is an alternative line of analysis that obviates the need to assume free enrichment. In the alternative analysis, we need Stanley’s variables, but we need to give them the freedom to be or not to be generated in the syntax/present at logical form, a kind of optionality that has nothing to do with the pragmatics-related optionality of free enrichment.
What are facts, situations, or events? When Situation Semantics was born in the eighties, I objected because I could not swallow the idea that situations might be chunks of information. For me, they had to be particulars like sticks or bricks. I could not imagine otherwise. The first manuscript of “An Investigation of the Lumps of Thought” that I submitted to Linguistics and Philosophy had a footnote where I distanced myself from all those who took possible situations to be units of information. In that context and at that time, this meant Jon Barwise and John Perry.
Ways of Scope Taking is concerned with syntactic, semantic and computational aspects of scope. Its starting point is the well-known but often neglected fact that different types of quantifiers interact differently with each other and with other operators. The theoretical examination of significant bodies of data, both old and novel, leads to two central claims. (1) Scope is a by-product of a set of distinct Logical Form processes; each quantifier participates in those that suit its particular features. (2) Scope interaction is further constrained by the semantics of the interacting operators. The arguments are developed using Minimalist syntax, Generalized Quantifier theory, Discourse Representation Theory, and algebraic semantics. The contributors (Beghelli, Ben-Shalom, Doetjes, Farkas, Gutiérrez Rexach, Honcoop, Stabler, Stowell, Szabolcsi and Zwarts) make tightly related theoretical assumptions and focus on related empirical phenomena, which include the direct and inverse scope of quantifiers, distributivity, negation, modal and intensional contexts, weak islands, event-related readings, interrogatives, wh/quantifier interactions, and Hungarian syntax. An introduction to the formal semantics background is provided. Audience: Linguists, philosophers, computational and psycholinguists; advanced undergraduates, graduate students and researchers in these fields.
Many of the formalisms used in Attribute Value grammar are notational variants of languages of propositional modal logic, and testing whether two Attribute Value Structures unify amounts to testing for modal satisfiability. In this paper we put this observation to work. We study the complexity of the satisfiability problem for nine modal languages which mirror different aspects of AVS description formalisms, including the ability to express re-entrancy, the ability to express generalisations, and the ability to express recursive constraints. Two main techniques are used: either Kripke models with desirable properties are constructed, or modalities are used to simulate fragments of Propositional Dynamic Logic. Further possibilities for the application of modal logic in computational linguistics are noted.
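The unification operation whose satisfiability encoding the paper studies can be sketched with nested dictionaries standing in for attribute-value structures (a simplification that omits re-entrancy, even though re-entrancy is central to the modal encoding):

```python
def unify(a, b):
    """Unify two attribute-value structures; raise ValueError on a clash.
    Dicts are feature structures, other values are atomic."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, val in b.items():
            out[key] = unify(out[key], val) if key in out else val
        return out
    if a == b:
        return a
    raise ValueError(f"clash: {a!r} vs {b!r}")

# Two partial descriptions of the same noun phrase...
avs1 = {"cat": "np", "agr": {"num": "sg"}}
avs2 = {"agr": {"num": "sg", "per": "3"}}
# ...unify into their combined information content.
print(unify(avs1, avs2))  # → {'cat': 'np', 'agr': {'num': 'sg', 'per': '3'}}
```

In the modal perspective the abstract describes, each AVS corresponds to a modal formula, and the pair unifies exactly when the conjunction of the two formulas is satisfiable.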
Grice’s distinction between what is said and what is implicated has greatly clarified our understanding of the boundary between semantics and pragmatics. Although border disputes still arise and there are certain difficulties with the distinction itself (see the end of §1), it is generally understood that what is said falls on the semantic side and what is implicated on the pragmatic side. But this applies only to what is..
This article develops a Gricean account for the computation of scalar implicatures in cases where one scalar term is in the scope of another. It shows that a cross-product of two quantitative scales yields the appropriate scale for many such cases. One exception is cases involving disjunction. For these, I propose an analysis that makes use of a novel, partially ordered quantitative scale for disjunction and capitalizes on the idea that implicatures may have different epistemic status.
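The cross-product construction can be sketched as follows (toy scale symbols of my own, not the paper's formalism): crossing two Horn scales yields pairs ordered pointwise, and the stronger alternatives whose negation the implicature contributes are read off that order.

```python
from itertools import product

# Two Horn scales, each ordered from weak to strong.
some_all = ["some", "all"]
possible_certain = ["possible", "certain"]

def stronger(pair_a, pair_b, scales):
    """pair_b is a stronger alternative to pair_a if it ranks at least as
    high on every scale and strictly higher on at least one."""
    ranks_a = [s.index(x) for s, x in zip(scales, pair_a)]
    ranks_b = [s.index(x) for s, x in zip(scales, pair_b)]
    return (all(rb >= ra for ra, rb in zip(ranks_a, ranks_b))
            and ranks_b != ranks_a)

scales = [some_all, possible_certain]
alts = list(product(some_all, possible_certain))
# Alternatives stronger than "it is possible that some students passed":
print([p for p in alts if stronger(("some", "possible"), p, scales)])
# → [('some', 'certain'), ('all', 'possible'), ('all', 'certain')]
```

Note that the cross-product order is only partial: neither ("some", "certain") nor ("all", "possible") outranks the other, which is one reason disjunction requires the separate treatment the abstract mentions.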