In their recent paper ‘Bi-facial truth: a case for generalized truth values’ Zaitsev and Shramko [7] distinguish between an ontological and an epistemic interpretation of classical truth values. By taking the Cartesian product of the two disjoint sets of values thus obtained, they arrive at four generalized truth values and consider two “semi-classical negations” on them. The resulting semantics is used to define three novel logics which are closely related to Belnap’s well-known four-valued logic. A syntactic characterization of these logics is left for further work. In this paper, based on our previous work on a functionally complete extension of Belnap’s logic, we present a sound and complete tableau calculus for these logics. It crucially exploits the Cartesian nature of the four values, which is reflected in the fact that each proof consists of two tableaux. The bi-facial notion of truth of Z&S is thus augmented with a bi-facial notion of proof. We also provide translations between the logics for semi-classical negation and classical logic and show that an argument is valid in a logic for semi-classical negation just in case its translation is valid in classical logic.
In their paper ‘Nothing but the Truth’ Andreas Pietz and Umberto Rivieccio present Exactly True Logic (ETL), an interesting variation upon the four-valued logic for first-degree entailment (FDE) that was given by Belnap and Dunn in the 1970s. Pietz & Rivieccio provide this logic with a Hilbert-style axiomatisation and write that finding a nice sequent calculus for the logic will presumably not be easy. But a sequent calculus can be given and in this paper we will show that a calculus for the Belnap-Dunn logic we have defined earlier can in fact be reused for the purpose of characterising ETL, provided a small alteration is made—initial assignments of signs to the sentences of a sequent to be proved must be different from those used for characterising FDE. While Pietz & Rivieccio define ETL on the language of classical propositional logic we also study its consequence relation on an extension of this language that is functionally complete for the underlying four truth values. On this extension the calculus gets a multiple-tree character—two proof trees may be needed to establish one proof.
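For readers comparing the two systems, the difference can be stated in terms of designated values (a standard gloss from the literature, not a quotation from the abstract). With Belnap and Dunn's four values $\mathbf{T}$ (true only), $\mathbf{B}$ (both true and false), $\mathbf{N}$ (neither) and $\mathbf{F}$ (false only), both logics interpret the connectives identically and differ only in which values consequence must preserve:
\[
\models_{\mathrm{FDE}}\ \text{preserves membership in}\ \{\mathbf{T},\mathbf{B}\},
\qquad
\models_{\mathrm{ETL}}\ \text{preserves membership in}\ \{\mathbf{T}\}.
\]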
This book radically simplifies Montague Semantics and generalizes the theory by basing it on a partial higher order logic. The resulting theory is a synthesis of Montague Semantics and Situation Semantics. In the late sixties Richard Montague developed the revolutionary idea that we can understand the concept of meaning in ordinary languages much in the same way as we understand the semantics of logical languages. Unfortunately, however, he formalized his idea in an unnecessarily complex way - two outstanding researchers in the field even compared his work to a `Rube Goldberg machine.' Muskens' work does away with such unnecessary complexities, obtains a streamlined version of the theory, shows how partialising the theory automatically provides us with the most central concepts of Situation Semantics, and offers a simple logical treatment of propositional attitude verbs, perception verbs and proper names.
There are two kinds of semantic theories of anaphora. Some, such as Heim’s File Change Semantics, Groenendijk and Stokhof’s Dynamic Predicate Logic, or Muskens’ Compositional DRT (CDRT), seem to require full coindexing of anaphora and their antecedents prior to interpretation. Others, such as Kamp’s Discourse Representation Theory (DRT), do not require this coindexing and seem to have an important advantage here. In this squib I will sketch a procedure that the first group of theories may help themselves to so that they can interleave interpretation and coindexing in DRT’s way.
This paper develops a relational---as opposed to a functional---theory of types. The theory is based on Hilbert and Bernays' eta operator plus the identity symbol, from which Church's lambda and the other usual operators are then defined. The logic is intended for use in the semantics of natural language.
This paper embeds the core part of Discourse Representation Theory in the classical theory of types plus a few simple axioms that allow the theory to express key facts about variables and assignments on the object level of the logic. It is shown how the embedding can be used to combine core analyses of natural language phenomena in Discourse Representation Theory with analyses that can be obtained in Montague Semantics.
In this paper we introduce a Gentzen calculus for (a functionally complete variant of) Belnap's logic in which establishing the provability of a sequent in general requires \emph{two} proof trees, one establishing that whenever all premises are true some conclusion is true and one that guarantees the falsity of at least one premise if all conclusions are false. The calculus can also be put to use in proving that one statement \emph{necessarily approximates} another, where necessary approximation is a natural dual of entailment. The calculus, and its tableau variant, not only capture the classical connectives, but also the `information' connectives of four-valued Belnap logics. This answers a question by Avron.
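The two proof trees correspond to the two clauses of the consequence relation just described. As a sketch in our own notation (not a verbatim definition from the paper): $\Gamma \models \Delta$ iff every four-valued valuation $v$ satisfies both
\[
(\forall \gamma \in \Gamma:\, v(\gamma) \text{ is true}) \Rightarrow (\exists \delta \in \Delta:\, v(\delta) \text{ is true})
\quad\text{and}\quad
(\forall \delta \in \Delta:\, v(\delta) \text{ is false}) \Rightarrow (\exists \gamma \in \Gamma:\, v(\gamma) \text{ is false}),
\]
where truth and falsity are independent properties that a value may have, lack, or combine, as in Belnap's four-valued setting.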
Standard approaches to proper names, based on Kripke's views, hold that the semantic values of expressions are (set-theoretic) functions from possible worlds to extensions and that names are rigid designators, i.e.\ that their values are \emph{constant} functions from worlds to entities. The difficulties with these approaches are well-known and in this paper we develop an alternative. Based on earlier work on a higher order logic that is \emph{truly intensional} in the sense that it does not validate the axiom scheme of Extensionality, we develop a simple theory of names in which Kripke's intuitions concerning rigidity are accounted for, but the more unpalatable consequences of standard implementations of his theory are avoided. The logic uses Frege's distinction between sense and reference and while it accepts the rigidity of names it rejects the view that names have direct reference. Names have constant denotations across possible worlds, but the semantic value of a name is not determined by its denotation.
The paper shows how ideas that explain the sense of an expression as a method or algorithm for finding its reference, foreshadowed in Frege’s dictum that sense is the way in which a referent is given, can be formalized on the basis of the ideas in Thomason (1980). To this end, the function that sends propositions to truth values or sets of possible worlds in Thomason (1980) must be replaced by a relation and the meaning postulates governing the behaviour of this relation must be given in the form of a logic program. The resulting system not only throws light on the properties of sense and their relation to computation, but also shows circular behaviour if some ingredients of the Liar Paradox are added. The connection is natural, as algorithms can be inherently circular and the Liar is explained as expressing one of those. Many ideas in the present paper are closely related to those in Moschovakis (1994), but receive a considerably lighter formalization.
In this paper it is shown how the DRT (Discourse Representation Theory) treatment of temporal anaphora can be formalized within a version of Montague Semantics that is based on classical type logic.
A logic is called higher order if it allows for quantification over higher order objects, such as functions of individuals, relations between individuals, functions of functions, relations between functions, etc. Higher order logic began with Frege, was formalized in Russell [46] and Whitehead and Russell [52] early in the previous century, and received its canonical formulation in Church [14]. While classical type theory has since long been overshadowed by set theory as a foundation of mathematics, recent decades have shown remarkable comebacks in the fields of mechanized reasoning (see, e.g., Benzmüller …).
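A standard illustration of this extra expressive power (our example, not taken from the chapter) is Leibniz's definition of identity, which quantifies over all properties of individuals:
\[
x = y \;:=\; \forall F\,(Fx \rightarrow Fy).
\]
The quantifier $\forall F$ ranges over properties rather than individuals, which is exactly what first-order logic disallows.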
In this paper we define intensional models for the classical theory of types, thus arriving at an intensional type logic ITL. Intensional models generalize Henkin's general models and have a natural definition. As a class they do not validate the axiom of Extensionality. We give a cut-free sequent calculus for type theory and show completeness of this calculus with respect to the class of intensional models via a model existence theorem. After this we turn our attention to applications. Firstly, it is argued that, since ITL is truly intensional, it can be used to model ascriptions of propositional attitude without predicting logical omniscience. In order to illustrate this a small fragment of English is defined and provided with an ITL semantics. Secondly, it is shown that ITL models contain certain objects that can be identified with possible worlds. Essential elements of modal logic become available within classical type theory once the axiom of Extensionality is given up.
A short description of a toy theorem prover for 16-valued trilattice logics. Written for the occasion of my friend Jan van Eijck's retirement. With a link to a SWISH interface to the Prolog prover.
In this paper we consider the theory of predicate logics in which the principle of Bivalence or the principle of Non-Contradiction or both fail. Such logics are partial or paraconsistent or both. We consider sequent calculi for these logics and prove Model Existence. For L4, the most general logic under consideration, we also prove a version of the Craig-Lyndon Interpolation Theorem. The paper shows that many techniques used for classical predicate logic generalise to partial and paraconsistent logics once the right set-up is chosen. Our logic L4 has a semantics that also underlies Belnap’s [4] and is related to the logic of bilattices. L4 is in focus most of the time, but it is also shown how results obtained for L4 can be transferred to several variants.
Vector models of language are based on the contextual aspects of words and how they co-occur in text. Truth conditional models focus on the logical aspects of language, the denotations of phrases, and their compositional properties. In the latter approach the denotation of a sentence determines its truth conditions and can be taken to be a truth value, a set of possible worlds, a context change potential, or similar. In this short paper, we develop a vector semantics for language based on the simply typed lambda calculus. Our semantics uses techniques familiar from the truth conditional tradition and is based on a form of dynamic interpretation inspired by Heim's context updates.
Vector models of language are based on the contextual aspects of language, the distributions of words and how they co-occur in text. Truth conditional models focus on the logical aspects of language, compositional properties of words and how they compose to form sentences. In the truth conditional approach, the denotation of a sentence determines its truth conditions, which can be taken to be a truth value, a set of possible worlds, a context change potential, or similar. In the vector models, the degree of co-occurrence of words in context determines how similar the meanings of words are. In this paper, we put these two models together and develop a vector semantics for language based on the simply typed lambda calculus models of natural language. We provide two types of vector semantics: a static one that uses techniques familiar from the truth conditional tradition and a dynamic one based on a form of dynamic interpretation inspired by Heim’s context change potentials. We show how the dynamic model can be applied to entailment between a corpus and a sentence and provide examples.
In this paper we discuss a new perspective on the syntax-semantics interface. Semantics, in this new set-up, is not ‘read off’ from Logical Forms as in mainstream approaches to generative grammar. Nor is it assigned to syntactic proofs using a Curry-Howard correspondence as in versions of the Lambek Calculus, or read off from f-structures using Linear Logic as in Lexical-Functional Grammar (LFG, Kaplan & Bresnan [9]). All such approaches are based on the idea that syntactic objects (trees, proofs, f-structures) are somehow prior and that semantics must be parasitic on those syntactic objects. We challenge this idea and develop a grammar in which syntax and semantics are treated in a strictly parallel fashion. The grammar will have many ideas in common with the (converging) frameworks of categorial grammar and LFG, but its treatment of the syntax-semantics interface is radically different. Also, although the meaning component of the grammar is a version of Montague semantics and although there are obvious affinities between Montague’s conception of grammar and the work presented here, the grammar is not compositional, in the sense that composition of meaning need not follow surface structure.
The paper develops Lambda Grammars, a form of categorial grammar that, unlike other categorial formalisms, is non-directional. Linguistic signs are represented as sequences of lambda terms and are combined with the help of linear combinators.
This paper shows how the dynamic interpretation of natural language introduced in work by Hans Kamp and Irene Heim can be modeled in classical type logic. This provides a synthesis between Richard Montague's theory of natural language semantics and the work by Kamp and Heim.
This paper introduces λ-grammar, a form of categorial grammar that has much in common with LFG. Like other forms of categorial grammar, λ-grammars are multi-dimensional and their components are combined in a strictly parallel fashion. Grammatical representations are combined with the help of linear combinators, closed pure λ-terms in which each abstractor binds exactly one variable. Mathematically this is equivalent to employing linear logic, in use in LFG for semantic composition, but the method seems more practicable.
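To illustrate the linearity constraint (our examples, not drawn from the paper): the permutator
\[
\mathsf{C} \;=\; \lambda f\lambda x\lambda y.\,f\,y\,x
\]
is a linear combinator, a closed pure λ-term in which each abstractor binds exactly one variable occurrence, whereas $\lambda f\lambda x.\,f\,x\,x$ (duplication) and $\lambda x\lambda y.\,x$ (deletion) are closed pure λ-terms but not linear.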
The ‘syntax’ and ‘combinatorics’ of my title are what Curry (1961) referred to as phenogrammatics and tectogrammatics respectively. Tectogrammatics is concerned with the abstract combinatorial structure of the grammar and directly informs semantics, while phenogrammatics deals with concrete operations on syntactic data structures such as trees or strings. In a series of previous papers (Muskens, 2001a; Muskens, 2001b; Muskens, 2003) I have argued for an architecture of the grammar in which finite sequences of lambda terms are the basic data structures, pairs of terms ⟨syntax, semantics⟩ for example. These sequences then combine with the help of simple generalizations of the usual abstraction and application operations. This theory, which I call Lambda Grammars and which is closely related to the independently formulated theory of Abstract Categorial Grammars (de Groote, 2001; de Groote, 2002), in fact is an implementation of Curry’s ideas: the level of tectogrammar is encoded by the sequences of lambda-terms and their ways of combination, while the syntactic terms in those sequences constitute the phenogrammatical level. In de Groote’s formulation of the theory, tectogrammar is the level of abstract terms, while phenogrammar is the level of object terms.
We present Logical Description Grammar (LDG), a model of grammar and the syntax-semantics interface based on descriptions in elementary logic. A description may simultaneously describe the syntactic structure and the semantics of a natural language expression, i.e., the describing logic talks about the trees and about the truth-conditions of the language described. Logical Description Grammars offer a natural way of dealing with underspecification in natural language syntax and semantics. If a logical description (up to isomorphism) has exactly one tree plus truth-conditions as a model, it completely specifies that grammatical object. More common is the situation, corresponding to underspecification, in which there is more than one model. A situation in which there are no models corresponds to an ungrammatical input.
This paper develops a semantics for a fragment of English that is based on the idea of `impossible possible worlds'. This idea has earlier been formulated by authors such as Montague, Cresswell, Hintikka, and Rantala, but the present set-up shows how it can be formalized in a completely unproblematic logic---the ordinary classical theory of types. The theory is put to use in an account of propositional attitudes that is `hyperfine-grained', i.e. that does not suffer from the well-known problems involved with replacing expressions by logical equivalents.
Ambiguities in natural language can multiply so fast that no person or machine can be expected to process a text of even moderate length by enumerating all possible disambiguations. A sentence containing $n$ scope bearing elements which are freely permutable will have $n!$ readings, if there are no other, say lexical or syntactic, sources of ambiguity. A series of $m$ such sentences would lead to $(n!)^m$ possibilities. All in all the growth of possibilities will be so fast that generating readings first and testing their acceptability afterwards will not be feasible.

This insight has led a series of researchers to adopt a level of representation at which ambiguities remain unresolved. The idea here is not to generate and test many possible interpretations but to first generate one `underspecified' representation which in a sense represents all its complete specifications and then use whatever information is available to further specify the result.

One central hypothesis in the paper will be that the relation between an underspecified representation and its full representations is not so much the relation between one structure and a set of other structures but is in fact the relation between a description (a set of logical sentences) and its models.
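To make the growth concrete (a worked instance of the figures above): a sentence with $n = 3$ freely permutable scope bearing elements already has $3! = 6$ readings, and a text of $m = 4$ such sentences has
\[
(n!)^m = 6^4 = 1296
\]
disambiguations, before any lexical or syntactic ambiguity is factored in.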
In this paper it is shown how simple texts that can be parsed in a Lambek Categorial Grammar can also automatically be provided with a semantics in the form of a Discourse Representation Structure in the sense of Kamp [1981]. The assignment of meanings to texts uses the Curry-Howard-Van Benthem correspondence.
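As an illustration of the target representations (a textbook example, not one drawn from the paper itself): the text 'A farmer owns a donkey. He beats it.' receives a Discourse Representation Structure which, in linear notation, is
\[
[\,x, y \mid \mathit{farmer}(x),\ \mathit{donkey}(y),\ \mathit{owns}(x,y),\ \mathit{beats}(x,y)\,],
\]
where the discourse referents $x$ and $y$ introduced by the indefinites also resolve the pronouns of the second sentence.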
Classical Discourse Representation Theory (DRT) predicts that an indefinite noun phrase cannot antecede an anaphoric element if the noun phrase is, but the anaphoric element is not, in the scope of a negation; the theory also predicts that no anaphoric links are possible between the two parts of a disjunction. However, it is well known that these predictions meet with counterexamples. In particular, anaphora is often possible if a double negation intervenes between antecedent and anaphoric element, and also if the antecedent not only occurs in the first part of a disjunction but also within the scope of a negation, while the anaphoric element is in the second part of the same disjunction. In this paper we argue that these recalcitrant phenomena are related and that a solution to the double negation problem will also provide us with a solution to the disjunction problem. We review the basic set-up of classical DRT and offer an extension (called ‘Double Negation DRT’) which validates the law of double negation. An adaptation of the standard DRT construction algorithm which transforms texts into Discourse Representation Structures is sketched and it is shown that the problems with negation and disjunction that led to the definition of our new version of DRT are properly dealt with.
Type-logical semantics studies linguistic meaning with the help of the theory of types. The latter originated with Russell as an answer to the paradoxes, but has the additional virtue that it is very close to ordinary language. In fact, type theory is so much more similar to language than predicate logic is, that adopting it as a vehicle of representation can overcome the mismatches between grammatical form and predicate logical form that were observed by Frege and Russell. The grammatical forms of ordinary language sentences consequently may be taken to be much less misleading than logicians in the first half of the 20th century often thought them to be. This was realized by Richard Montague, who used the theory of types to translate fragments of ordinary language into a logical language. Semantics is commonly divided into lexical semantics, which studies the meaning of words, and compositional semantics, which studies the way in which complex phrases obtain a meaning from their constituents. The strength of type-logical semantics lies with the latter, but type-logical theories can be combined with many competing hypotheses about lexical meaning, provided these hypotheses are expressed using the language of type theory.
This paper argues for the idea that in describing language we should follow Haskell Curry in distinguishing between the structure of an expression and its appearance or manifestation. It is explained how making this distinction obviates the need for directed types in type-theoretic grammars and a simple grammatical formalism is sketched in which representations at all levels are lambda terms. The lambda term representing the abstract structure of an expression is homomorphically translated to a lambda term representing its manifestation, but also to a lambda term representing its semantics.
In a recent paper we have defined an analytic tableau calculus PL_16 for a functionally complete extension of Shramko and Wansing's logic based on the trilattice SIXTEEN_3. This calculus makes it possible to define syntactic entailment relations that capture central semantic relations of the logic---such as the relations |=_t, |=_f, and |=_i that each correspond to a lattice order in SIXTEEN_3; and |=, the intersection of |=_t and |=_f.

It turns out that our method of characterising these semantic relations---as intersections of auxiliary relations that can be captured with the help of a single calculus---lends itself well to proving interpolation. All entailment relations just mentioned have the interpolation property, not only when they are defined with respect to a functionally complete language, but also in a range of cases where less expressive languages are considered. For example, we will show that |=, when restricted to L_{tf}, the language originally considered by Shramko and Wansing, enjoys interpolation. This answers a question that was recently posed by M. Takano.
In this paper we give an analytic tableau calculus PL_16 for a functionally complete extension of Shramko and Wansing’s logic. The calculus is based on signed formulas and a single set of tableau rules is involved in axiomatising each of the four entailment relations ⊧_t, ⊧_f, ⊧_i, and ⊧ under consideration—the differences only residing in initial assignments of signs to formulas. Proving that two sets of formulas are in one of the first three entailment relations will in general require developing four tableaux, while proving that they are in the ⊧ relation may require six.
Logic has its roots in the study of valid argument, but while traditional logicians worked with natural language directly, modern approaches first translate natural arguments into an artificial language. The reason for this step is that some artificial languages now have very well developed inferential systems. There is no doubt that this is a great advantage in general, but for the study of natural reasoning it is a drawback that the original linguistic forms get lost in translation. An alternative approach would be to develop a general theory of the natural logic behind human reasoning and human information processing by studying formal logics that operate directly on linguistic representations. That this is possible we will try to make plausible in this paper. It will turn out that one level of representation, that of Logical Form, can meaningfully be identified with the language of an existing and well-understood logic, a restricted form of the theory of types. It is not difficult to devise inference systems for this language, and it is thus possible to study reasoning systems that are based directly on language.
The semantics of a sentence containing a perception verb such as see or hear depends to a high degree on the exact syntactic form of the perception verb’s complement. Let us compare sentence (1), where the complement is tenseless, with (2), where the complement is a tensed clause.
Kant said that existence is not a predicate and Russell agreed, arguing that a sentence such as ‘The king of France exists’, which seems to attribute existence to the king of France, really has a logical form that is not reflected in the surface structure of the sentence at all. While the surface form of the sentence consists of a subject and a predicate, the underlying logical form, according to Russell, is the formula $\exists x\forall y\,(K(y) \leftrightarrow y = x)$, with $K$ for ‘is a king of France’. This formula obviously has no subject-predicate form and in fact has no single constituent that corresponds to the verb phrase ‘exists’ in the surface sentence. The importance of Russell’s analysis becomes clear when we consider ‘The king of France does not exist’. If this sentence attributed non-existence to the king it would entail that there is someone who does not exist, just as ‘Mary doesn’t like bananas’ entails that there is someone who doesn’t like bananas. Thus the idea that all sentences have subject-predicate form has led some philosophers to the view that there are objects that lack existence. This embarrassing position can be avoided once Russell’s analysis is accepted: if ‘The king of France does not exist’ is formalised as the negation of this formula, no unwanted consequences follow.
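Spelled out (a standard reconstruction of Russell's point, using the predicate letter $K$ introduced above): the negative sentence comes out as the wide-scope negation
\[
\neg\exists x\,\forall y\,(K(y) \leftrightarrow y = x),
\]
which merely denies that there is a unique king of France. Since it does not have the form $\neg E(k)$ for any term $k$ and predicate $E$ of existence, there is no way to infer from it that some object lacks existence.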
An attractive way to model the relation between an underspecified syntactic representation and its completions is to let the underspecified representation correspond to a logical description and the completions to the models of that description. This approach, which underlies the Description Theory of Marcus et al. 1983, has been integrated in Vijay-Shanker 1992 with a pure unification approach to Lexicalized Tree-Adjoining Grammars (Joshi et al. 1975, Schabes 1990). We generalize Description Theory by integrating semantic information, that is, we propose to tackle both syntactic and semantic underspecification using descriptions.
Since the world of phenomena is immensely large, we can perceive only part of the world. We see, feel and hear parts of reality, not the whole of it, and it seems that a sentence containing a verb of perception like 'John sees a house burn' is most naturally treated as saying that the subject sees an incomplete world in which the embedded sentence is true (see Barwise (1981) for this analysis). But if we want to analyse perception verbs thus, we must introduce some form of incompleteness into our formal system, the…
In mathematical languages and in predicate logic coreferential terms can be interchanged in any sentence without altering the truth value of that sentence. Replacing 3 + 5 by 12 − 4 in any formula of arithmetic will never lead from truth to falsity or from falsity to truth. But natural languages are different in this respect. While in some contexts it is always allowed to interchange coreferential terms, other contexts do not admit this. An example of the first sort of context is '___ likes bananas': for any two coreferential noun phrases A and B the sentence 'A likes bananas' is true if and only if 'B likes bananas' is. A context that does not allow intersubstitution of coreferents is 'The Ancients knew that ___ appears at dawn'. If we fill the hole with the noun phrase 'the Morning Star' we get the true (1a), while if we plug in 'the Evening Star' we get the false (1b). Yet 'the Morning Star' and 'the Evening Star' both refer to the planet Venus and are thus coreferential.
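The contrast can be put schematically (our formulation): an extensional context $\varphi(\cdot)$ licenses substitution salva veritate,
\[
a = b \;\Rightarrow\; \big(\varphi(a) \leftrightarrow \varphi(b)\big),
\]
while in the attitude context above the schema fails: 'the Morning Star' and 'the Evening Star' both denote Venus, yet exchanging them inside 'The Ancients knew that ___ appears at dawn' turns a truth into a falsehood.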
Verbs such as know, believe, hope, fear, regret and desire are commonly taken to express an attitude that one may bear towards a proposition and are therefore called verbs of propositional attitude. Thus in (1) below the agent Cathy is reported to have a certain attitude.
In this paper it is shown how a formal theory of interpretation in Montague’s style can be reconciled with a view on meaning as a social construct. We sketch a formal theory in which agents can have their own theory of interpretation and in which groups can have common theories of interpretation. Frege solved the problem of how different persons can have access to the same proposition by placing the proposition in a Platonic realm, independent from all language users but accessible to all of them. Here we explore the alternative of letting meaning be socially constructed. The meaning of a sentence is accessible to each member of a linguistic community because the way the sentence is to be interpreted is common knowledge among the members of that community. Misunderstandings can arise when the semantic knowledge of two or more individuals is not completely in sync.
In the tradition of Denotational Semantics one usually lets program constructs take their denotations in reflexive domains, i.e. in domains where self-application is possible. For the bulk of programming constructs, however, working with reflexive domains is an unnecessary complication. In this paper we shall use the domains of ordinary classical type logic to provide the semantics of a simple programming language containing choice and recursion. We prove that the rule of {\em Scott Induction\/} holds in this new setting, prove soundness of a Hoare calculus relative to our semantics, give a direct calculus ${\cal C}$ on programs, and prove that the denotation of any program $P$ in our semantics is equal to the union of the denotations of all those programs $L$ such that $P$ follows from $L$ in our calculus and $L$ does not contain recursion or choice.
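For orientation (a standard example of the genre, not necessarily the paper's own notation): a Hoare calculus derives triples $\{P\}\,S\,\{Q\}$, read 'if $P$ holds before executing $S$ and $S$ terminates, then $Q$ holds afterwards'; its characteristic rule for loops, for instance, is
\[
\frac{\{P \wedge b\}\; S\; \{P\}}{\{P\}\; \mathbf{while}\ b\ \mathbf{do}\ S\; \{P \wedge \neg b\}},
\]
where $P$ is the loop invariant.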
This paper uses classical logic for a simultaneous description of the syntax and semantics of a fragment of English and it is argued that such an approach to natural language allows procedural aspects of linguistic theory to get a purely declarative formulation. In particular, it will be shown how certain construction rules in Discourse Representation Theory, such as the rule that indefinites create new discourse referents and definites pick up an existing referent, can be formulated declaratively if logic is used as a metalanguage for English. In this case the declarative aspects of a rule are highlighted when we focus on the model theory of the description language while a procedural perspective is obtained when its proof theory is concentrated on. Themes of interest are Discourse Representation Theory, resolution of anaphora, resolution of presuppositions, and underspecification.
The semantic valuations of classical logic, strong Kleene logic, the logic of paradox and the logic of first-degree entailment all respect the Dunn conditions: we call them Dunn logics. In this paper, we study the interpolation properties of the Dunn logics and extensions of these logics to more expressive languages. We do so by relying on a signed tableau calculus whose rules mirror the Dunn conditions syntactically and which characterizes the Dunn logics in a uniform way. In terms of this calculus, we first introduce two different interpolation methods, each of which uniformly shows that the Dunn logics have the interpolation property. One of the methods is closely related to Maehara’s method but the other method, which we call the pruned tableau method, is novel to this paper. We provide various reasons to prefer the pruned tableau method to the Maehara-style method. We then turn our attention to extensions of Dunn logics with so-called appropriate implication connectives. Although these logics have been considered at various places in the literature, a study of the interpolation properties of these logics is lacking. We use the pruned tableau method to uniformly show that these extended Dunn logics have the interpolation property and explain that the same result cannot be obtained via the Maehara-style method. Finally, we show how the pruned tableau method constructs interpolants for functionally complete extensions of the Dunn logics.
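For reference, the property at stake in its classical propositional form (degenerate cases where $\varphi$ and $\psi$ share no vocabulary aside): a logic has interpolation if, whenever $\varphi \models \psi$, there is an interpolant $\chi$ such that
\[
\varphi \models \chi, \qquad \chi \models \psi, \qquad \mathrm{Var}(\chi) \subseteq \mathrm{Var}(\varphi) \cap \mathrm{Var}(\psi).
\]
The methods described above construct such a $\chi$ from tableau proofs, uniformly for all the Dunn logics.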
This book provides an in-depth view of the current issues, problems and approaches in the computation of meaning as expressed in language. Aimed at linguists, computer scientists, and logicians with an interest in the computation of meaning, this book focuses on two main topics in recent research in computational semantics. The first topic is the definition and use of underspecified semantic representations, i.e. formal structures that represent part of the meaning of a linguistic object while leaving other parts unspecified. The second topic discussed is semantic annotation. Annotated corpora have become an indispensable resource both for linguists and for developers of language and speech technology, especially when used in combination with machine learning methods. The annotation in corpora has only marginally addressed semantic information, however, since semantic annotation methodologies are still in their infancy. This book discusses the development and application of such methodologies.
The effects of utterances such as cue phrases, keep-turn markers, and grounding signals cannot be characterized as changes to a shared record of the propositions under discussion: the simplest (and arguably most natural) way of characterizing the meaning of these utterances is in terms of a theory in which the conversational score is seen as a record of the discourse situation, or at least of the speech acts that have been performed. The problem then becomes to explain how discourse entities are accessible. We consider three hypotheses about the dynamics of a speech act-based theory of the conversational score, and argue that they could be implemented with relatively minor modifications to the technical tools already introduced in theories such as Compositional DRT.
Mathematical game theory has been embraced by a variety of scholars: social scientists, biologists, linguists, and now, increasingly, logicians. This volume illustrates the recent advances of game theory in the field. Logicians benefit from things like game theory's ability to explain informational independence between connectives; meanwhile, game theorists have even begun to benefit from logical epistemic analyses of game states. In concert with such pioneering work, this volume also presents surprising developments in classical fields, including first-order logic and set theory.