Current dynamic epistemic logics for analyzing effects of informational events often become cumbersome and opaque when common knowledge is added for groups of agents. Still, postconditions involving common knowledge are essential to successful multi-agent communication. We propose new systems that extend the epistemic base language with a new notion of ‘relativized common knowledge’, in such a way that the resulting full dynamic logic of information flow allows for a compositional analysis of all epistemic postconditions via perspicuous ‘reduction axioms’. We also show how such systems can deal with factual alteration, rather than just information change, making them cover a much wider range of realistic events. After a warm-up stage of analyzing logics for public announcements, our main technical results are expressivity and completeness theorems for a much richer logic that we call LCC. This is a dynamic epistemic logic whose static base is propositional dynamic logic (PDL), interpreted epistemically. This system is capable of expressing all model-shifting operations with finite action models, while providing a compositional analysis for a wide range of informational events. This makes LCC a serious candidate for a standard in dynamic epistemic logic, as we illustrate by analyzing some complex communication scenarios, including sending successive emails with both ‘cc’ and ‘bcc’ lines, and other private announcements to subgroups. Our proofs involve standard modal techniques, combined with a new application of Kleene’s theorem on finite automata, as well as new Ehrenfeucht games of model comparison.
We look at lying as an act of communication, where (i) the proposition that is communicated is not true, (ii) the utterer of the lie knows that what she communicates is not true, and (iii) the utterer of the lie intends the lie to be taken as truth. Rather than dwell on the moral issues, we provide a sketch of what goes on logically when a lie is communicated. We present a complete logic of manipulative updating, to analyse the effects of lying in public discourse. Next, we turn to the study of lying in games. First, a game-theoretical analysis is used to explain how the possibility of lying makes such games interesting, and how lying is put to use in optimal strategies for playing the game. Finally, we give a matching logical analysis. Our running example of lying in games is liar’s dice.
Almost forty years ago Richard Montague proposed to analyse natural language with the same tools as formal languages. In particular, he gave formal semantic analyses of several interesting fragments of English in terms of typed logic. This led to the development of Montague grammar as a particular style of formal analysis of natural language.
A new system of dynamic logic is introduced and motivated, with a novel approach to variable binding for incremental interpretation. The system is shown to be equivalent to first order logic and complete. The new logic combines the dynamic binding idea from Dynamic Predicate Logic with De Bruijn-style variable-free indexing. Quantifiers bind the next available variable register; the indexing mechanism guarantees that active registers are never overwritten by the actions of new quantifiers. Apart from its interest in its own right, the resulting system has certain advantages over Dynamic Predicate Logic or Discourse Representation Theory. It comes with a better behaved (i.e., transitive) consequence relation, it gives a more explicit account of how anaphoric context grows as text gets processed, and it yields new insight into the dynamics of anaphoric linking in reasoning. Incremental dynamics also points to a new way of handling context dynamically in Montague grammar.
We implement the extension of the logical consequence relation to a partial order ≤ on arbitrary types built from e (entities) and t (Booleans) that was given in [1], and the definition of monotonicity preserving and monotonicity reversing functions in terms of ≤. Next, we present a new algorithm for polarity marking, and implement this for a particular fragment of syntax. Finally, we list the research agenda that these definitions and this algorithm suggest. The implementations use Haskell [8], and are given in ‘literate programming’ style [9].
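To make the setup concrete, here is a minimal Haskell sketch (with assumed names, not the paper's actual code) of the order on type t, its pointwise lifting to one-place predicates of type e -> t over a small finite domain, and a brute-force monotonicity test for functionals of type (e -> t) -> t.

```haskell
-- Sketch only: the entailment order on t, lifted pointwise to e -> t,
-- plus a brute-force upward-monotonicity check over a toy finite domain.
module MonSketch where

import Data.List (subsequences)

data Entity = A | B | C deriving (Eq, Show, Enum, Bounded)

domain :: [Entity]
domain = [minBound .. maxBound]

-- the order on type t is Boolean implication
leqT :: Bool -> Bool -> Bool
leqT p q = not p || q

-- pointwise lifting of the order to type e -> t
leqET :: (Entity -> Bool) -> (Entity -> Bool) -> Bool
leqET f g = all (\x -> leqT (f x) (g x)) domain

-- all characteristic functions over the finite domain
allPreds :: [Entity -> Bool]
allPreds = map (flip elem) (subsequences domain)

-- a functional of type (e -> t) -> t is upward monotone iff it preserves
-- the lifted order on every pair of predicates (reversal: flip the order)
monotone :: ((Entity -> Bool) -> Bool) -> Bool
monotone h = and [ leqT (h f) (h g) | f <- allPreds, g <- allPreds, leqET f g ]
```

For instance, `monotone (\p -> any p domain)` evaluates to True, while running the same test with the order flipped would classify negation-like functionals as monotonicity reversing.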
The effects of public announcements, private communications, deceptive messages to groups, and so on, can all be captured by a general mechanism of updating multi-agent models with update action models, now in widespread use. There is a natural extension of the definition of a bisimulation to action models. Sure enough, updating with bisimilar action models gives the same result (modulo bisimulation). But the converse turns out to be false: update models may have the same update effects without being bisimilar. We propose action emulation as a notion of equivalence more appropriate for action models, and generalizing standard bisimulation. It is proved that action emulation provides a full characterization of update effects. We first concentrate on the general case, and next focus on the important case of action models with propositional preconditions. Our notion of action emulation yields a simplification procedure for action models, and it gives designers of multi-agent systems a useful tool for comparing different ways of representing a particular communicative action.
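For orientation, the product update construction referred to here and in several of the abstracts below can be summarized in its standard BMS formulation (stated here for reference, not quoted from the papers themselves).

```latex
% Given an epistemic model M = (W, R_a, V) and an action model A = (E, S_a, pre),
% the updated model M (x) A is defined by:
\begin{align*}
  W'  &= \{(w,e) \mid w \in W,\ e \in E,\ M,w \models \mathit{pre}(e)\},\\
  (w,e)\,R'_a\,(v,f) &\iff w\,R_a\,v \text{ and } e\,S_a\,f,\\
  V'(w,e) &= V(w) \quad\text{(no factual change)}.
\end{align*}
```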
In this paper we investigate Kripke models, used to model knowledge or belief in a static situation, and action models, used to model communicative actions that change this knowledge or belief. The appropriate notion of structural equivalence between modal structures such as Kripke models is bisimulation: Kripke models that are bisimilar are modally equivalent. We would like to find a structural relation that can play the same role for action models, which figure prominently in information updating. Two action models are equivalent if they yield the same results when updating Kripke models. More precisely, two action models are equivalent if it holds for all Kripke models that the result of updating with one action model is bisimilar to the result of updating with the other action model. We propose a new notion of action emulation that characterizes the structural equivalence of the important class of canonical action models. Since every action model has an equivalent canonical action model, this gives a method to decide the equivalence of any pair of action models. We also give a partial result that holds for the class of all action models. Our results extend the work in van Eijck et al. (Synthese 185(1):131–151, 2012).
We explore some logics of change, focusing on commands to change the world in such a way that certain elementary propositions become true or false. This investigation starts out from the following two simplifying assumptions: (1) the world is a collection of facts (Wittgenstein), and (2) the world can be changed by changing elementary facts (Marx). These assumptions allow us to study the logic of imperatives in the simplest possible setting.
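A minimal semantic sketch of such commands, in assumed notation rather than the paper's own: the command that makes p true simply resets the valuation at p and leaves everything else untouched.

```latex
% Effect of the commands +p ("make p true") and -p ("make p false") on a valuation V:
\[
  V \;\xrightarrow{\;+p\;}\; V[p \mapsto 1]
  \qquad\qquad
  V \;\xrightarrow{\;-p\;}\; V[p \mapsto 0]
\]
```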
In this paper, we develop an epistemic logic to specify and reason about the information flow on the underlying communication channels. By combining ideas from Dynamic Epistemic Logic (DEL) and Interpreted Systems (IS), our semantics offers a natural and neat way of modelling multi-agent communication scenarios with different assumptions about the observational power of agents. We relate our logic to the standard DEL and IS..
Transition systems can be viewed either as process diagrams or as Kripke structures. The first perspective is that of process theory, the second that of modal logic. This paper shows how various formalisms of modal logic can be brought to bear on processes. Notions of bisimulation can not only be motivated by operations on transition systems, but they can also be suggested by investigations of modal formalisms. To show that the equational view of processes from process algebra is closely related to modal logic, we consider various ways of looking at the relation between the calculus of basic process algebra and propositional dynamic logic. More concretely, the paper contains preservation results for various bisimulation notions, a result on the expressive power of propositional dynamic logic, and a definition of bisimulation which is the proper notion of invariance for concurrent propositional dynamic logic.
Dynamic logic, broadly conceived, is the logic that analyses change by decomposing actions into their basic building blocks and by describing the results of performing actions in given states of the world. The actions studied by dynamic logic can be of various kinds: actions on the memory state of a computer, actions of a moving robot in a closed world, interactions between cognitive agents performing given communication protocols, actions that change the common ground between speaker and hearer in a conversation, actions that change the contextually available referents in a conversation, and so on.
Discourse Representation Theory is a specific name for the work of Hans Kamp in the area of dynamic interpretation of natural language. Also, it has gradually become a generic term for proposals for dynamic interpretation of natural language in the same spirit. These proposals have in common that each new sentence is interpreted in terms of the contribution it makes to an existing piece of interpreted discourse. The interpretation conditions for sentences are given as instructions for updating the representation of the discourse.
We contrast Bonanno’s ‘Belief Revision in a Temporal Framework’ [15] with preference change and belief revision from the perspective of dynamic epistemic logic (DEL). For that, we extend the logic of communication and change of [11] with relational substitutions [8] for preference change, and show that this does not alter its properties. Next we move to a more constrained context where belief and knowledge can be defined from preferences [29; 14; 5; 7], prove completeness of a very expressive logic of belief revision, and define a mechanism for updating belief revision models using a combination of action priority update [7] and preference substitution [8].
Guarded actions are changes with preconditions acting as a guard. Guarded action models are multimodal Kripke models with the valuations replaced by guarded actions. Call guarded action logic the result of adding product updates with guarded action models to PDL (propositional dynamic logic). We show that guarded action logic reduces to PDL.
Presuppositions of utterances are the pieces of information you convey with an utterance no matter whether your utterance is true or not. We first study presupposition in a very simple framework of updating propositional information, with examples of how presuppositions of complex propositional updates can be calculated. Next we move on to presuppositions and quantification in the context of a dynamic version of predicate logic, suitably modified to allow for presupposition failure. In both the propositional and the quantificational case presupposition failure can be viewed as error abortion of procedures. Thus a dynamic assertion logic which describes the preconditions for error abortion is the suitable tool for analysing presupposition.
We present a direct reduction of dynamic epistemic logic in the spirit of [4] to propositional dynamic logic (PDL) [17, 18] by program transformation. The program transformation approach associates with every update action a transformation on PDL programs. These transformations are then employed in reduction axioms for the update actions. It follows that the logic of public announcement, the logic of group announcements, the logic of secret message passing, and so on, can all be viewed as subsystems of PDL. Moreover, the program transformation approach can be used to generate the appropriate reduction axioms for these logics. Our direct reduction of dynamic epistemic logic to PDL was inspired by the reduction of dynamic epistemic logic to automata PDL of [13]. Our approach shows how the detour through automata can be avoided.
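To illustrate the program transformation idea for the simplest case, public announcement (the transformers for general action models are more involved), one may take the transformer that prefixes each basic epistemic program with a test; this yields the familiar reduction axiom. The notation below is my own sketch, not a quotation from the paper.

```latex
\begin{align*}
  T_{!\varphi}(a) &= {?\varphi}\,; a \\
  [!\varphi][a]\psi &\leftrightarrow [{?\varphi}\,; a]\,[!\varphi]\psi
    \quad(\text{i.e. } \varphi \rightarrow [a][!\varphi]\psi)
\end{align*}
```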
This is a case-study in knowledge representation. We analyze the ‘one hundred prisoners and a lightbulb’ puzzle. In this puzzle it is relevant what the agents (prisoners) know, how their knowledge changes due to observations, and how they affect the state of the world by changing facts, i.e., by their actions. These actions depend on the history of previous actions and observations. Part of its interest is that all actions are local, i.e. not publicly observable, and part of the problem is therefore how to disseminate local results to other agents, and make them global. The various solutions to the puzzle are presented as protocols (iterated functions from agents’ local states, and histories of actions, to actions). The computational aspect is about average runtime termination under conditions of random (‘fair’) scheduling.
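As an illustration of what such a protocol and its random scheduling look like computationally, here is a small Haskell simulation sketch of the familiar single-counter protocol (not necessarily one of the numbered protocols of the paper): prisoner 0 acts as counter, every other prisoner switches the light on the first time they find it off, and the counter switches it off and counts.

```haskell
-- Simulation sketch of the single-counter protocol under fair scheduling.
module Prisoners where

import System.Random (randomRIO)

-- protocol state: the light, the counter's tally, and who has signalled
data St = St { lightOn :: Bool, count :: Int, hasSignalled :: [Bool] }

-- what prisoner i does when brought into the room
step :: Int -> St -> St
step 0 st                                   -- prisoner 0 is the counter
  | lightOn st = st { lightOn = False, count = count st + 1 }
  | otherwise  = st
step i st                                   -- an ordinary prisoner
  | not (lightOn st) && not (hasSignalled st !! i)
              = st { lightOn = True
                   , hasSignalled = update i True (hasSignalled st) }
  | otherwise = st
  where update k x xs = take k xs ++ [x] ++ drop (k + 1) xs

-- run with uniformly random ('fair') scheduling until the counter has seen
-- n-1 signals, i.e. knows everyone has been in the room; return the day count
run :: Int -> IO Int
run n = go (St False 0 (replicate n False)) 0
  where
    go st days
      | count st == n - 1 = return days
      | otherwise = do
          i <- randomRIO (0, n - 1)
          go (step i st) (days + 1)
```

Running `run 100` repeatedly and averaging the returned day counts gives an estimate of the expected runtime under fair scheduling.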
Epistemic protocols are communication protocols aiming at transfer of knowledge in a controlled way. Typically, the preconditions or goals for protocol actions depend on the knowledge of agents, often in nested form. Informal epistemic protocol descriptions for muddy children, coordinated attack, dining cryptographers, Russian cards, secret key exchange are well known. The contribution of this paper is a formal study of a natural requirement on epistemic protocols, that the contents of the protocol can be assumed to be common knowledge. By formalizing this requirement we can prove that there can be no unbiased deterministic protocol for the Russian cards problem. For purposes of our formal analysis we introduce an epistemic protocol language, and we show that its model checking problem is decidable.
We carry out the Karttunen-Stalnaker pragmatic account of presupposition projection within a state-of-the-art version of dynamic epistemic logic. It turns out that the basic projection facts can all be derived from a Gricean maxim ‘be informative’. This sheds light on a recent controversy on the appropriateness of dynamic semantics as a tool for analysing presupposition.
Current dynamic epistemic logics often become cumbersome and opaque when common knowledge is added for groups of agents. Still, postconditions regarding common knowledge express the essence of what communication achieves. We present some methods that yield so-called reduction axioms for common knowledge. We investigate the expressive power of public announcement logic with relativized common knowledge, and present reduction axioms that give a detailed account of the dynamics of common knowledge in some major communication types.
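For orientation, the reduction axioms for public announcement without common knowledge are standard; the clause for relativized common knowledge is stated here in the form usually associated with this line of work, recalled from memory rather than quoted, so it should be checked against the paper itself.

```latex
\begin{align*}
  [!\varphi]p &\leftrightarrow (\varphi \rightarrow p)\\
  [!\varphi]\neg\psi &\leftrightarrow (\varphi \rightarrow \neg[!\varphi]\psi)\\
  [!\varphi](\psi\wedge\chi) &\leftrightarrow ([!\varphi]\psi \wedge [!\varphi]\chi)\\
  [!\varphi]K_a\psi &\leftrightarrow (\varphi \rightarrow K_a[!\varphi]\psi)\\
  [!\varphi]C_G(\chi,\psi) &\leftrightarrow C_G(\varphi\wedge[!\varphi]\chi,\ [!\varphi]\psi)
\end{align*}
```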
This is a case-study in knowledge representation and dynamic epistemic protocol verification. We analyze the ‘one hundred prisoners and a lightbulb’ puzzle. In this puzzle it is relevant what the agents know, how their knowledge changes due to observations, and how they affect the state of the world by changing facts, i.e., by their actions. These actions depend on the history of previous actions and observations. Part of its interest is that all actions are local, i.e. not publicly observable, and part of the problem is therefore how to disseminate local results to other agents, and make them global. The various solutions to the puzzle are presented as protocols. The paper consists of three parts. First, we present different versions of the puzzle, and their solutions. This includes a probabilistic version, and a version assuming synchronicity. The latter is very informative for the prisoners, and allows different protocols. Then, we model the puzzle in an epistemic logic incorporating dynamic operators for the effects of information changing events. Such events include both informative actions, where agents become more informed about the non-changing state of the world, and factual changes, wherein the world and the facts describing it change as well. Finally, we verify the basic protocol to solve the problem. Novel contributions in this paper are: firstly, Protocol 2 and Protocol 4; secondly, the modelling in dynamic epistemic logic in its entirety: we do not know of a case study that combines factual and informational dynamics in a setting of non-public events, or of a similar proposal to handle asynchronous behaviour in a dynamic epistemic logic; thirdly, our method to verify dynamic epistemic protocols by reasoning over possibly infinite execution sequences of these protocols. A precursor of the present paper, entitled ‘One hundred prisoners and a lightbulb – logic and computation’, was presented at KR 2010, Toronto. The differences with the present contribution are as follows: the former contains a section with computational results, whereas the focus of the present paper is the verification of one of the protocols presented in the former.
Three key ways of updating one’s knowledge are (i) perception of states of affairs, e.g., seeing with one’s own eyes that something is the case, (ii) reception of messages, e.g., being told that something is the case, and (iii) drawing new conclusions from known facts. If one represents knowledge by means of Kripke models, the implicit assumption is that drawing conclusions is immediate. This assumption of logical omniscience is a useful abstraction. It leaves the distinction between (i) and (ii) to be accounted for. In current versions of Update Logic (Dynamic Epistemic Logic, Logic of Communication and Change) perception and message reception are not distinguished. This paper proposes an extension of Update Logic that makes this distinction explicit. The logic deals with three kinds of updates: announcements, changes of the world, and observations about the world in the presence of witnesses. The resulting logic is shown to be complete by means of a reduction to epistemic propositional dynamic logic by a well known method.
In [11] it is shown how propositional dynamic logic (PDL) can be interpreted as a logic of belief revision that extends the logic of communication and change (LCC) given in [7]. This new version of epistemic/doxastic PDL does not impose any constraints on the basic relations, and because of this it does not suffer from the drawback of LCC that these constraints may get lost under updates that are admitted by the system. Here, we will impose one constraint, namely that the agents’ plausibility relations are linked. Linkedness is a natural extension of local connectedness to the multi-agent case and it assures that we know the agents’ preferences between all relevant alternatives. Since the belief updates that are used in [11] may not preserve linkedness, we limit ourselves to a particular kind of belief change that does preserve it. Our framework has obvious connections to coalition logic [17] and social choice theory [19]. We show how we can use it to model consensus seeking in plenary Dutch meetings. In Dutch meetings, a belief update is done for all agents in the meeting if a majority believes the proposition that is under discussion. A special case of these meetings is judgement aggregation, and we apply our framework to the discursive dilemma in this field [15].
Logical frameworks for analysing the dynamics of information processing abound [4, 5, 8, 10, 12, 14, 20, 22]. Some of these frameworks focus on the dynamics of the interpretation process, some on the dynamics of the process of drawing inferences, and some do both of these. Formalisms galore, so it is felt that some conceptual streamlining would pay off. This paper is part of a larger scale enterprise to pursue the obvious parallel between information processing and imperative programming. We demonstrate that logical tools from theoretical computer science are relevant for the logic of information flow. More specifically, we show that the perspective of Hoare logic [13, 18] can fruitfully be applied to the conceptual simplification of information flow logics.
Syllogistics reduces to only two rules of inference: monotonicity and symmetry, plus a third if one wants to take existential import into account. We give an implementation that uses only the monotonicity and symmetry rules, with an addendum for the treatment of existential import. Soundness follows from the monotonicity and symmetry properties of the Aristotelean quantifiers, while completeness for syllogistic theory is proved by direct inspection of the valid syllogisms. Next, the valid syllogisms are decomposed in terms of the rules they involve. The implementation uses Haskell [8], and is given in ‘literate programming’ style [9].
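A worked instance (my own illustration, not taken from the paper) of how a classical syllogism falls out of the monotonicity rule: reading ‘All M are P’ as M ≤ P, Barbara is just an upward monotonicity step in the second argument of ‘All S are _’.

```latex
\[
  \frac{\text{All } S \text{ are } M \qquad M \leq P}
       {\text{All } S \text{ are } P}
  \;(\mathrm{Mon}\uparrow)
\]
```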
The paper gives a formal analysis of public lies, explains how public lying is related to public announcement, and describes the process of recoveries from false beliefs engendered by public lying. The framework treats two kinds of public lies: simple lying update and two-step lying, which consists of suggesting that the lie may be true followed by announcing the lie. It turns out that agents’ convictions of what is true are immune to the first kind, but can be shattered by the second kind. Next, recovery from public lying is analyzed. Public lies that are accepted by an audience cannot be undone simply by announcing their negation. The paper proposes a recovery process that works well for restoring beliefs about facts but cannot be extended to beliefs about beliefs. The formal machinery of the paper consists of KD45 models and conditional neighbourhood models, with various update procedures on them. Completeness proofs for a number of reasoning systems (converse belief logic, public lies logic, lying and recovery logic, conditional neighbourhood logic, plus its dynamic version) are included.
Computer software is written in languages like C, Java or Haskell. In many cases social software is expressed in natural language. The paper explores connections between the areas of natural language analysis and analysis of social protocols, and proposes an extended program for natural language semantics, where the goals of natural language communication are derived from the demands of specific social protocols.
Hyper tableau reasoning is a version of clausal form tableau reasoning where all negative literals in a clause are resolved away in a single inference step. Constrained hyper tableaux are a generalization of hyper tableaux, where branch closing substitutions, from the point of view of model generation, give rise to constraints on satisfying assignments for the branch. These variable constraints eliminate the need for the awkward ‘purifying substitutions’ of hyper tableaux. The paper presents a non-destructive and proof confluent calculus for constrained hyper tableaux, together with a soundness and completeness proof, with completeness based on a new way to generate models from open tableaux. It is pointed out that the variable constraint approach applies to free variable tableau reasoning in general.
Thinking about Martin Stokhof as a philosopher and colleague, his formal analysis (together with Jeroen Groenendijk) of questions and question answering is the first thing that comes to mind. This work is part of a fruitful tradition that has recently spawned inquisitive semantics, and the focus on question answering in dynamic epistemic logic. The theme is still very much alive at ILLC today. Next, I am reminded of the dynamic turn in natural language semantics, of the way he and Jeroen Groenendijk criticized Hans Kamp’s ideas about discourse representation, and how this led to the invention of dynamic predicate logic. Later it became clear that dynamic predicate logic can be viewed as the action part of quantified dynamic logic, invented much earlier by David Harel. This led to a connection, a semantic parallel, between the analysis of programming and the analysis of natural language. Thinking about Martin as a philosopher still longer, it is impossible to further postpone the thought of Ludwig Wittgenstein. This is the connection that I am least familiar with, so this is what I will write about, in the hope that developing my thought in writing will create a somewhat better understanding.
To ascertain that a formalization of the intuitive notion of a ‘concept’ is linguistically interesting, one has to check whether it allows one to get a grip on distinctions and notions from lexical semantics. Prime candidates are notions like ‘prototype’, ‘stereotypical attribute’, ‘essential attribute versus accidental attribute’, ‘intension versus extension’. We will argue that although the current paradigm of formal concept analysis as an application of lattice theory is not rich enough for an analysis of these notions, a lattice theoretical approach to concepts is a suitable starting point for formalizing them.
Categorization is probably one of the most central areas in the study of cognition, language and information. However, there is a serious gap running through the semantic treatments of categories and concepts [3]. On one side we find the ‘classical’, formal approach, based on logical considerations, that has lent itself well to computational applications. In this approach, concepts are defined in terms of necessary and sufficient conditions. On the other side is an informal approach to categorization that is usually motivated by the results of psychological experiments and that has not found its way into technologies on a large scale. Concepts here are based on prototypes, stereotypical attributes and family resemblances, which have become the hallmark of cognitive semantics. Obviously, it is important to bridge this gap, for theoretical and practical reasons.
This paper introduces DEMO, a Dynamic Epistemic Modelling tool. DEMO allows modelling epistemic updates, graphical display of update results, graphical display of action models, formula evaluation in epistemic models, translation of dynamic epistemic formulas to PDL formulas, and so on. The paper implements the reduction of dynamic epistemic logic [16, 2, 3, 1] to PDL given in [12]. The reduction of dynamic epistemic logic to automata PDL from [24] is also discussed and implemented. Epistemic models are minimized under bisimulation, and update action models are minimized under action emulation (the appropriate structural notion for having the same update effect, cf. [13]). The paper is an exemplar of tool building for epistemic update logic. It contains the full code of an implementation in Haskell [22], in ‘literate programming’ style [23], of DEMO.
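To give a flavour of what such a tool manipulates, here is a minimal Haskell sketch, with invented names rather than DEMO's actual datatypes, of multi-agent Kripke models, formula evaluation, and public announcement as model restriction.

```haskell
-- Sketch only: a small epistemic core, not DEMO's real code.
module EpistSketch where

type Agent = String
type World = Int
type Prop  = String

-- a multi-agent Kripke model: worlds, accessibility per agent, valuation
data Model = Model
  { worlds :: [World]
  , rel    :: Agent -> [(World, World)]
  , val    :: World -> [Prop]
  }

-- a small epistemic language
data Form = P Prop | Neg Form | Conj Form Form | K Agent Form

-- truth at a world
isTrue :: Model -> World -> Form -> Bool
isTrue m w (P p)      = p `elem` val m w
isTrue m w (Neg f)    = not (isTrue m w f)
isTrue m w (Conj f g) = isTrue m w f && isTrue m w g
isTrue m w (K a f)    = and [ isTrue m v f | (u, v) <- rel m a, u == w ]

-- public announcement of f: keep only the worlds where f holds
announce :: Form -> Model -> Model
announce f m = Model ws rel' (val m)
  where
    ws     = [ w | w <- worlds m, isTrue m w f ]
    rel' a = [ (u, v) | (u, v) <- rel m a, u `elem` ws, v `elem` ws ]
```

DEMO itself goes well beyond this core, handling arbitrary action models, translation to PDL, and minimization under bisimulation and action emulation, as the abstract describes.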
No index information on NPs, except for pronouns. Otherwise, virtually the same as a datatype declaration for a fragment of dynamic Montague grammar. The module Cat imports the standard List module. Lists will be employed to implement a simple feature agreement mechanism.
Dynamic predicate logic (DPL), presented in [5] as a formalism for representing anaphoric linking in natural language, can be viewed as a fragment of a well known formalism for reasoning about imperative programming [6]. An interesting difference from other forms of dynamic logic is that the distinction between formulas and programs gets dropped: DPL formulas can be viewed as programs. In this paper we show that DPL is in fact the basis of a hierarchy of formulas-as-programs languages.
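The formulas-as-programs view can be illustrated with the standard correspondence (sketched here from memory, not quoted from the paper) between a DPL existential formula and an imperative program built from random assignment and tests.

```latex
\[
  \exists x\,(Px \wedge Qx)
  \quad\rightsquigarrow\quad
  x :=\, ?\;;\; Px?\;;\; Qx?
\]
% The random assignment to x is not closed off, so the value assigned to x
% remains available to later conjuncts and to subsequent sentences (anaphora).
```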
An emerging standard for polymorphically typed, lazy, purely functional programming is Haskell, a language named after Haskell Curry. Haskell is based on (polymorphically typed) lambda calculus, which makes it an excellent tool for computational semantics.
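A small illustration of this point, not taken from the paper: Montague-style generalized quantifiers become ordinary higher-order Haskell functions over a toy domain of entities.

```haskell
-- Sketch: typed-lambda-calculus-style semantics in Haskell.
module SemSketch where

-- a toy domain of entities
data Entity = Alice | Bob | Carol deriving (Eq, Show, Enum, Bounded)

entities :: [Entity]
entities = [minBound .. maxBound]

-- determiners as relations between one-place predicates
every, some, no :: (Entity -> Bool) -> (Entity -> Bool) -> Bool
every p q = all (\x -> not (p x) || q x) entities
some  p q = any (\x -> p x && q x) entities
no    p q = not (some p q)

-- a toy lexicon
student, sleeps :: Entity -> Bool
student = (`elem` [Alice, Bob])
sleeps  = (== Alice)

-- "every student sleeps" ~> every student sleeps ~> False (Bob is awake)
```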
This is indeed a very nice draft that I have read with great pleasure, and that has helped me to better understand the completeness proof for LCC. Modal fixed point logic allows for an illuminating new version (and a further extension) of that proof. But still. My main comment is that I think the perspective on substitutions in the draft paper is flawed. The general drift of the paper is that relativization, (predicate) substitution and product update are general operations on models, and that it is important to check whether given logical languages are closed under these operations. FO logic is closed under relativization, predicate substitution and product constructions (such as those involved in relative interpretation). The minimal modal logic is closed under relativization, which explains the reduction of epistemic logic (without common knowledge) + public announcement to epistemic logic simpliciter (as observed in Van Benthem [2]). The reduction breaks down as soon as one adds common knowledge. The minimal modal logic is also closed under substitution, which explains the reduction of epistemic logic plus (publicly observable) factual change to epistemic logic simpliciter, via the following reduction axioms (I use !p := φ for the operation of publicly changing the truth value of p to φ):

[!p := φ]p ↔ φ
[!p := φ]q ↔ q   (p and q syntactically different)
[!p := φ]¬ψ ↔ ¬[!p := φ]ψ
[!p := φ](ψ₁ ∧ ψ₂) ↔ [!p := φ]ψ₁ ∧ [!p := φ]ψ₂
[!p := φ][i]ψ ↔ [i][!p := φ]ψ

Unlike the case of relativization, this can be extended to the case of epistemic logic with common knowledge, by means of:

[!p := φ]C_G ψ ↔ C_G [!p := φ]ψ

We get the following..
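A worked instance of these axioms (my own illustration): a knowledge postcondition after a publicly observable factual change reduces step by step to a purely static epistemic formula.

```latex
\[
  [!p := q]\,[i]p \;\leftrightarrow\; [i]\,[!p := q]\,p \;\leftrightarrow\; [i]\,q
\]
```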
This talk shows how propositional dynamic logic (PDL) can be interpreted as a logic for multi-agent knowledge update and belief revision, or as a logic of preference change, if the basic relations are read as preferences instead of plausibilities. Our point of departure is the logic of communication and change (LCC) of [9]. Like LCC, our logic uses PDL as a base epistemic language. Unlike LCC, we start out from agent plausibilities, add their converses, and build knowledge and belief operators from these with the PDL constructs. We extend the update mechanism of LCC to an update mechanism that handles belief change as relation substitution, and we show that the update part of this logic is more expressive than either that of LCC or that of epistemic/doxastic PDL with a belief change modality. Next, we show that the properties of knowledge and belief are preserved under any update, unlike in LCC. We prove completeness of the logic and give examples of its use. If there is time, we will also look at the preference interpretation of the system, and at preference change scenarios that can be modelled with it.