Current dynamic epistemic logics for analyzing effects of informational events often become cumbersome and opaque when common knowledge is added for groups of agents. Still, postconditions involving common knowledge are essential to successful multi-agent communication. We propose new systems that extend the epistemic base language with a new notion of ‘relativized common knowledge’, in such a way that the resulting full dynamic logic of information flow allows for a compositional analysis of all epistemic postconditions via perspicuous ‘reduction axioms’. We also show how such systems can deal with factual alteration, rather than just information change, making them cover a much wider range of realistic events. After a warm-up stage of analyzing logics for public announcements, our main technical results are expressivity and completeness theorems for a much richer logic that we call LCC. This is a dynamic epistemic logic whose static base is propositional dynamic logic (PDL), interpreted epistemically. This system is capable of expressing all model-shifting operations with finite action models, while providing a compositional analysis for a wide range of informational events. This makes LCC a serious candidate for a standard in dynamic epistemic logic, as we illustrate by analyzing some complex communication scenarios, including sending successive emails with both ‘cc’ and ‘bcc’ lines, and other private announcements to subgroups. Our proofs involve standard modal techniques, combined with a new application of Kleene’s theorem on finite automata, as well as new Ehrenfeucht games of model comparison.
A new system of dynamic logic is introduced and motivated, with a novel approach to variable binding for incremental interpretation. The system is shown to be equivalent to first order logic and complete. The new logic combines the dynamic binding idea from Dynamic Predicate Logic with De Bruijn style variable free indexing. Quantifiers bind the next available variable register; the indexing mechanism guarantees that active registers are never overwritten by the actions of new quantifiers. Apart from its interest in its own right, the resulting system has certain advantages over Dynamic Predicate Logic or Discourse Representation Theory. It comes with a better behaved (i.e., transitive) consequence relation, it gives a more explicit account of how anaphoric context grows as text gets processed, and it yields new insight into the dynamics of anaphoric linking in reasoning. Incremental dynamics also points to a new way of handling context dynamically in Montague grammar.
We look at lying as an act of communication, where (i) the proposition that is communicated is not true, (ii) the utterer of the lie knows that what she communicates is not true, and (iii) the utterer of the lie intends the lie to be taken as truth. Rather than dwell on the moral issues, we provide a sketch of what goes on logically when a lie is communicated. We present a complete logic of manipulative updating, to analyse the effects of lying in public discourse. Next, we turn to the study of lying in games. First, a game-theoretical analysis is used to explain how the possibility of lying makes such games interesting, and how lying is put to use in optimal strategies for playing the game. Finally, we give a matching logical analysis. Our running example of lying in games is liar’s dice.
The effects of public announcements, private communications, deceptive messages to groups, and so on, can all be captured by a general mechanism of updating multi-agent models with update action models, now in widespread use. There is a natural extension of the definition of a bisimulation to action models. Surely enough, updating with bisimilar action models gives the same result (modulo bisimulation). But the converse turns out to be false: update models may have the same update effects without being bisimilar. We propose action emulation as a notion of equivalence more appropriate for action models, and generalizing standard bisimulation. It is proved that action emulation provides a full characterization of update effect. We first concentrate on the general case, and next focus on the important case of action models with propositional preconditions. Our notion of action emulation yields a simplification procedure for action models, and it gives designers of multi-agent systems a useful tool for comparing different ways of representing a particular communicative action.
We implement the extension of the logical consequence relation to a partial order ≤ on arbitrary types built from e (entities) and t (Booleans), as given in earlier work, and the definition of monotonicity preserving and monotonicity reversing functions in terms of ≤. Next, we present a new algorithm for polarity marking, and implement this for a particular fragment of syntax. Finally, we list the research agenda that these definitions and this algorithm suggest. The implementations use Haskell, and are given in ‘literate programming’ style.
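As a minimal sketch of the base case of these definitions (our own illustration, not the paper's actual code), one can read the order ≤ on type t as implication and test functions of type t → t for the two monotonicity properties by exhaustive checking over the two-element domain:

```haskell
-- The order <= on type t (Booleans), read as implication: False <= True.
leqT :: Bool -> Bool -> Bool
leqT x y = not x || y

-- A function of type t -> t preserves monotonicity iff it preserves the
-- order, and reverses monotonicity iff it inverts it; on the two-element
-- domain both conditions can be checked exhaustively.
preserving, reversing :: (Bool -> Bool) -> Bool
preserving f = and [ leqT (f x) (f y) | x <- [False, True]
                                      , y <- [False, True], leqT x y ]
reversing  f = and [ leqT (f y) (f x) | x <- [False, True]
                                      , y <- [False, True], leqT x y ]
```

Negation comes out as monotonicity reversing and identity as preserving; the ≤ of the paper extends this order pointwise to all higher types built from e and t.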
Almost forty years ago Richard Montague proposed to analyse natural language with the same tools as formal languages. In particular, he gave formal semantic analyses of several interesting fragments of English in terms of typed logic. This led to the development of Montague grammar as a particular style of formal analysis of natural language.
Transition systems can be viewed either as process diagrams or as Kripke structures. The first perspective is that of process theory, the second that of modal logic. This paper shows how various formalisms of modal logic can be brought to bear on processes. Notions of bisimulation can not only be motivated by operations on transition systems, but they can also be suggested by investigations of modal formalisms. To show that the equational view of processes from process algebra is closely related to modal logic, we consider various ways of looking at the relation between the calculus of basic process algebra and propositional dynamic logic. More concretely, the paper contains preservation results for various bisimulation notions, a result on the expressive power of propositional dynamic logic, and a definition of bisimulation which is the proper notion of invariance for concurrent propositional dynamic logic.
In this paper, we develop an epistemic logic to specify and reason about the information flow on the underlying communication channels. By combining ideas from Dynamic Epistemic Logic (DEL) and Interpreted Systems (IS), our semantics offers a natural and neat way of modelling multi-agent communication scenarios with different assumptions about the observational power of agents. We relate our logic to the standard DEL and IS.
Presuppositions of utterances are the pieces of information you convey with an utterance no matter whether your utterance is true or not. We first study presupposition in a very simple framework of updating propositional information, with examples of how presuppositions of complex propositional updates can be calculated. Next we move on to presuppositions and quantification in the context of a dynamic version of predicate logic, suitably modified to allow for presupposition failure. In both the propositional and the quantificational case, presupposition failure can be viewed as error abortion of procedures. Thus a dynamic assertion logic which describes the preconditions for error abortion is the suitable tool for analysing presupposition.
Discourse Representation Theory is a specific name for the work of Hans Kamp in the area of dynamic interpretation of natural language. Also, it has gradually become a generic term for proposals for dynamic interpretation of natural language in the same spirit. These proposals have in common that each new sentence is interpreted in terms of the contribution it makes to an existing piece of interpreted discourse. The interpretation conditions for sentences are given as instructions for updating the representation of the discourse.
Epistemic protocols are communication protocols aiming at transfer of knowledge in a controlled way. Typically, the preconditions or goals for protocol actions depend on the knowledge of agents, often in nested form. Informal epistemic protocol descriptions for muddy children, coordinated attack, dining cryptographers, Russian cards, secret key exchange are well known. The contribution of this paper is a formal study of a natural requirement on epistemic protocols, that the contents of the protocol can be assumed to be common knowledge. By formalizing this requirement we can prove that there can be no unbiased deterministic protocol for the Russian cards problem. For purposes of our formal analysis we introduce an epistemic protocol language, and we show that its model checking problem is decidable.
We explore some logics of change, focusing on commands to change the world in such a way that certain elementary propositions become true or false. This investigation starts out from the following two simplifying assumptions: (1) the world is a collection of facts (Wittgenstein), and (2), the world can be changed by changing elementary facts (Marx). These assumptions allow us to study the logic of imperatives in the simplest possible setting.
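The two assumptions can be rendered schematically (our own sketch, not the paper's formalism): a world is a set of elementary facts, and commands change the world by making single facts true or false.

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- Assumption 1: the world is a collection of elementary facts.
type Fact  = String
type World = Set Fact

-- Assumption 2: the world is changed by changing elementary facts.
data Cmd = MakeTrue Fact | MakeFalse Fact | Seq Cmd Cmd

-- Executing a command deterministically transforms the world.
run :: Cmd -> World -> World
run (MakeTrue p)  w = Set.insert p w
run (MakeFalse p) w = Set.delete p w
run (Seq c d)     w = run d (run c w)

-- An elementary proposition holds iff its fact is in the world.
holds :: Fact -> World -> Bool
holds = Set.member
```

In this simplest possible setting an imperative like "make p false, then make q true" is just `Seq (MakeFalse "p") (MakeTrue "q")`, and its logic is the logic of the resulting world transformations.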
We carry out the Karttunen-Stalnaker pragmatic account of presupposition projection within a state-of-the-art version of dynamic epistemic logic. It turns out that the basic projection facts can all be derived from a Gricean maxim ‘be informative’. This sheds light on a recent controversy on the appropriateness of dynamic semantics as a tool for analysing presupposition.
Guarded actions are changes with preconditions acting as a guard. Guarded action models are multimodal Kripke models with the valuations replaced by guarded actions. Call guarded action logic the result of adding product updates with guarded action models to PDL (propositional dynamic logic). We show that guarded action logic reduces to PDL.
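A schematic rendering of a guarded action (our own illustration; names and representation are not from the paper): a precondition on the current world paired with a change of atomic facts, executable only where the guard holds.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

type Prop  = String
type World = Map Prop Bool

-- A guarded action: the guard is a precondition on the current world,
-- the effect overwrites the truth values of the listed propositions.
data GuardedAction = GuardedAction
  { guard  :: World -> Bool
  , effect :: [(Prop, Bool)]
  }

-- The action is executable only in worlds satisfying its guard.
apply :: GuardedAction -> World -> Maybe World
apply a w
  | guard a w = Just (foldr (\(p, b) m -> Map.insert p b m) w (effect a))
  | otherwise = Nothing
```

Replacing the valuation of a Kripke model by such guarded actions, and taking product updates with the resulting guarded action models, gives the kind of structure the paper shows reducible to PDL.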
It has been shown how propositional dynamic logic (PDL) can be interpreted as a logic of belief revision that extends the logic of communication and change (LCC). This new version of epistemic/doxastic PDL does not impose any constraints on the basic relations, and because of this it does not suffer from the drawback of LCC that these constraints may get lost under updates that are admitted by the system. Here, we will impose one constraint, namely that the agent’s plausibility relations are linked. Linkedness is a natural extension of local connectedness to the multi-agent case, and it assures that we know the agent’s preferences between all relevant alternatives. Since the belief updates used in earlier work may not preserve linkedness, we limit ourselves to a particular kind of belief change that does preserve it. Our framework has obvious connections to coalition logic and social choice theory. We show how we can use it to model consensus seeking in plenary Dutch meetings. In Dutch meetings, a belief update is done for all agents in the meeting if a majority believes the proposition that is under discussion. A special case of these meetings is judgement aggregation, and we apply our framework to the discursive dilemma in this field.
This is a case-study in knowledge representation. We analyze the ‘one hundred prisoners and a lightbulb’ puzzle. In this puzzle it is relevant what the agents (prisoners) know, how their knowledge changes due to observations, and how they affect the state of the world by changing facts, i.e., by their actions. These actions depend on the history of previous actions and observations. Part of its interest is that all actions are local, i.e. not publicly observable, and part of the problem is therefore how to disseminate local results to other agents, and make them global. The various solutions to the puzzle are presented as protocols (iterated functions from agents’ local states, and histories of actions, to actions). The computational aspect is about average runtime termination under conditions of random (‘fair’) scheduling.
This is a case-study in knowledge representation and dynamic epistemic protocol verification. We analyze the ‘one hundred prisoners and a lightbulb’ puzzle. In this puzzle it is relevant what the agents know, how their knowledge changes due to observations, and how they affect the state of the world by changing facts, i.e., by their actions. These actions depend on the history of previous actions and observations. Part of its interest is that all actions are local, i.e. not publicly observable, and part of the problem is therefore how to disseminate local results to other agents, and make them global. The various solutions to the puzzle are presented as protocols. The paper consists of three parts. First, we present different versions of the puzzle, and their solutions. This includes a probabilistic version, and a version assuming synchronicity. The latter is very informative for the prisoners, and allows different protocols. Then, we model the puzzle in an epistemic logic incorporating dynamic operators for the effects of information changing events. Such events include both informative actions, where agents become more informed about the non-changing state of the world, and factual changes, wherein the world and the facts describing it change themselves as well. Finally, we verify the basic protocol to solve the problem. Novel contributions in this paper are: Firstly, Protocol 2 and Protocol 4. Secondly, the modelling in dynamic epistemic logic in its entirety — we do not know of a case study that combines factual and informational dynamics in a setting of non-public events, or of a similar proposal to handle asynchronous behaviour in a dynamic epistemic logic. Thirdly, our method to verify dynamic epistemic protocols by reasoning over possibly infinite execution sequences of these protocols. A precursor of the present paper, entitled ‘One hundred prisoners and a lightbulb – logic and computation’, was presented at KR 2010, Toronto.
The differences with the present contribution are as follows: the former contains a section with computational results, whereas the focus of the present paper is the verification of one of the protocols presented in the former.
We contrast Bonanno’s ‘Belief Revision in a Temporal Framework’ with preference change and belief revision from the perspective of dynamic epistemic logic (DEL). For that, we extend the logic of communication and change with relational substitutions for preference change, and show that this does not alter its properties. Next we move to a more constrained context where belief and knowledge can be defined from preferences [29; 14; 5; 7], prove completeness of a very expressive logic of belief revision, and define a mechanism for updating belief revision models using a combination of action priority update and preference substitution.
Computer software is written in languages like C, Java or Haskell. In many cases social software is expressed in natural language. The paper explores connections between the areas of natural language analysis and analysis of social protocols, and proposes an extended program for natural language semantics, where the goals of natural language communication are derived from the demands of specific social protocols.
Three key ways of updating one’s knowledge are (i) perception of states of affairs, e.g., seeing with one’s own eyes that something is the case, (ii) reception of messages, e.g., being told that something is the case, and (iii) drawing new conclusions from known facts. If one represents knowledge by means of Kripke models, the implicit assumption is that drawing conclusions is immediate. This assumption of logical omniscience is a useful abstraction. It leaves the distinction between (i) and (ii) to be accounted for. In current versions of Update Logic (Dynamic Epistemic Logic, Logic of Communication and Change) perception and message reception are not distinguished. This paper proposes an extension of Update Logic that makes this distinction explicit. The logic deals with three kinds of updates: announcements, changes of the world, and observations about the world in the presence of witnesses. The resulting logic is shown to be complete by means of a reduction to epistemic propositional dynamic logic by a well known method.
Logical frameworks for analysing the dynamics of information processing abound [4, 5, 8, 10, 12, 14, 20, 22]. Some of these frameworks focus on the dynamics of the interpretation process, some on the dynamics of the process of drawing inferences, and some do both of these. Formalisms galore, so it is felt that some conceptual streamlining would pay off. This paper is part of a larger scale enterprise to pursue the obvious parallel between information processing and imperative programming. We demonstrate that logical tools from theoretical computer science are relevant for the logic of information flow. More specifically, we show that the perspective of Hoare logic [13, 18] can fruitfully be applied to the conceptual simplification of information flow logics.
This paper contains the full code of a prototype implementation in Haskell, in ‘literate programming’ style, of the tableau-based calculus and proof procedure for hybrid logic presented in earlier work.
Thinking about Martin Stokhof as a philosopher and colleague, his formal analysis (together with Jeroen Groenendijk) of questions and question answering is the first thing that comes to mind. This work is part of a fruitful tradition that has recently spawned inquisitive semantics, and the focus on question answering in dynamic epistemic logic. The theme is still very much alive at ILLC today. Next, I am reminded of the dynamic turn in natural language semantics, of the way he and Jeroen Groenendijk criticized Hans Kamp’s ideas about discourse representation, and how this led to the invention of dynamic predicate logic. Later it became clear that dynamic predicate logic can be viewed as the action part of quantified dynamic logic, invented much earlier by David Harel. This led to a connection, a semantic parallel, between the analysis of programming and the analysis of natural language. Thinking about Martin as a philosopher still longer, it is impossible to further postpone the thought of Ludwig Wittgenstein. This is the connection that I am least familiar with, so this is what I will write about, in the hope that developing my thought in writing will create a somewhat better understanding.
Functions of type n are characteristic functions on n-ary relations. Keenan established their importance for natural language semantics, by showing that natural language has many examples of irreducible type n functions, i.e., functions of type n that cannot be represented as compositions of unary functions. Keenan proposed some tests for reducibility, and Dekker improved on these by proposing an invariance condition that characterizes the functions with a reducible counterpart with the same behaviour on product relations. The present paper generalizes the notion of reducibility (a quantifier is reducible if it can be represented as a composition of quantifiers of lesser, but not necessarily unary, types), proposes a direct criterion for reducibility, and establishes a diamond theorem and a normal form theorem for reduction. These results are then used to show that every positive n function has a unique representation as a composition of positive irreducible functions, and to give an algorithm for finding this representation. With these formal tools it can be established that natural language has examples of n-ary quantificational expressions that cannot be reduced to any composition of quantifiers of lesser degree.
Just as war can be viewed as a continuation of diplomacy using other means, computational semantics is a continuation of the logical analysis of natural language by other means. For a long time, the tool of choice for this used to be Prolog. In our recent textbook we argue (and try to demonstrate by example) that lazy functional programming is a more appropriate tool. In the talk we will lay out a program for computational semantics, by linking computational semantics to the general analysis of procedures for social interaction. The talk will give examples of how Haskell can be used to begin carrying out this program.
Dynamic logic, broadly conceived, is the logic that analyses change by decomposing actions into their basic building blocks and by describing the results of performing actions in given states of the world. The actions studied by dynamic logic can be of various kinds: actions on the memory state of a computer, actions of a moving robot in a closed world, interactions between cognitive agents performing given communication protocols, actions that change the common ground between speaker and hearer in a conversation, actions that change the contextually available referents in a conversation, and so on.
Model checking techniques for communication protocols usually are phrased in terms of processes, basically labelled arcs in a labelled transition system. We propose to lift checking for such protocols to a more abstract level by analysing the protocols as composite communicative actions, with a communicative action viewed as a mapping on an appropriate class of epistemic models. As an example, we analyse an anonymous broadcast protocol (Chaum’s well-known dining cryptographers protocol) and an electronic voting protocol.
Current dynamic epistemic logics often become cumbersome and opaque when common knowledge is added for groups of agents. Still, postconditions regarding common knowledge express the essence of what communication achieves. We present some methods that yield so-called reduction axioms for common knowledge. We investigate the expressive power of public announcement logic with relativized common knowledge, and present reduction axioms that give a detailed account of the dynamics of common knowledge in some major communication types.
In current semantic theory compositional interpretations are assumed to go from linguistic items to their denotations in some model. This perspective still leaves room for a more dynamical account of how such interpretations are actually created. One natural idea is to assume that each sentence in a discourse is understood through some representation, ‘mediating’ between the language and its models. Thus, the old relation of interpretation splits up into two new ones, viz. that between linguistic items and their representations, and that between these representations and actual models. Now, at the Cleves conference it was clear that discourse representations are many things to many people. Some view them as syntactic constructs, some as psychological ones (yet others prefer to remain confused over this issue). Again, one popular metaphor is that of the partial picture of reality, another that of a procedural recipe for verification. Finally, these representations are supposed to explain such diverse phenomena as anaphora and progressive discourse information. It is not obvious that one coherent notion could do all these jobs. On the other hand, it is not obvious either that one need not try. The purpose of this paper is to clarify some logical issues concerning discourse representations, while trying to bring together two of the main themes at the Cleves conference, viz. representation proper and the topic of partial information. General considerations will be found in section 1; section 2 contains applications and illustrations drawn from the two best-developed formal paradigms of discourse semantics (cf. Hintikka (1979), Hintikka & Carlson (1979), Kamp (1981)). It is our contention that more clarity as to the nature and the purpose of discourse representation will unite, rather than divide, the various currents in this developing area.
To ascertain that a formalization of the intuitive notion of a ‘concept’ is linguistically interesting, one has to check whether it allows one to get a grip on distinctions and notions from lexical semantics. Prime candidates are notions like ‘prototype’, ‘stereotypical attribute’, ‘essential attribute versus accidental attribute’, ‘intension versus extension’. We will argue that although the current paradigm of formal concept analysis as an application of lattice theory is not rich enough for an analysis of these notions, a lattice theoretical approach to concepts is a suitable starting point for formalizing them.
No index information on NPs, except for pronouns. Otherwise, virtually the same as a datatype declaration for a fragment of dynamic Montague grammar. The module Cat imports the standard List module. Lists will be employed to implement a simple feature agreement mechanism.
This paper introduces DEMO, a Dynamic Epistemic Modelling tool. DEMO allows modelling epistemic updates, graphical display of update results, graphical display of action models, formula evaluation in epistemic models, translation of dynamic epistemic formulas to PDL formulas, and so on. The paper implements the reduction of dynamic epistemic logic [16, 2, 3, 1] to PDL given in earlier work. The reduction of dynamic epistemic logic to automata PDL is also discussed and implemented. Epistemic models are minimized under bisimulation, and update action models are minimized under action emulation (the appropriate structural notion for having the same update effect). The paper is an exemplar of tool building for epistemic update logic. It contains the full code of an implementation in Haskell, in ‘literate programming’ style, of DEMO.
Dynamic predicate logic (DPL), presented in earlier work as a formalism for representing anaphoric linking in natural language, can be viewed as a fragment of a well known formalism for reasoning about imperative programming. An interesting difference from other forms of dynamic logic is that the distinction between formulas and programs gets dropped: DPL formulas can be viewed as programs. In this paper we show that DPL is in fact the basis of a hierarchy of formulas-as-programs languages.
An emerging standard for polymorphically typed, lazy, purely functional programming is Haskell, a language named after Haskell Curry. Haskell is based on (polymorphically typed) lambda calculus, which makes it an excellent tool for computational semantics.
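To illustrate, with a toy fragment of our own devising (the entities and predicates are invented for the example), Montague's types e and t and his treatment of determiners as higher-order functions transcribe directly into Haskell:

```haskell
-- Entities and truth values: Montague's types e and t.
data Entity = Alice | Bob | Carol deriving (Eq, Show, Enum, Bounded)

domain :: [Entity]
domain = [minBound .. maxBound]

-- One-place predicates have type e -> t.
walk, person :: Entity -> Bool
walk x   = x == Alice || x == Bob
person _ = True

-- Determiners are higher-order: (e -> t) -> (e -> t) -> t.
every, some :: (Entity -> Bool) -> (Entity -> Bool) -> Bool
every p q = all (\x -> not (p x) || q x) domain
some  p q = any (\x -> p x && q x) domain

-- "Every person walks" and "some person walks" as Boolean denotations.
s1, s2 :: Bool
s1 = every person walk   -- False: Carol does not walk
s2 = some  person walk   -- True
```

The polymorphic typing means the same determiners work for any entity type, which is what makes the typed lambda calculus underlying Haskell a natural fit for this style of semantics.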
This is indeed a very nice draft that I have read with great pleasure, and that has helped me to better understand the completeness proof for LCC. Modal fixed point logic allows for an illuminating new version (and a further extension) of that proof. But still. My main comment is that I think the perspective on substitutions in the draft paper is flawed. The general drift of the paper is that relativization, (predicate) substitution and product update are general operations on models, and that it is important to check whether given logical languages are closed under these operations. FO logic is closed under relativization, predicate substitution and product constructions (such as those involved in relative interpretation). The minimal modal logic is closed under relativization, which explains the reduction of epistemic logic (without common knowledge) + public announcement to epistemic logic simpliciter (as observed by Van Benthem). The reduction breaks down as soon as one adds common knowledge. The minimal modal logic is also closed under substitution, which explains the reduction of epistemic logic plus (publicly observable) factual change to epistemic logic simpliciter, via the following reduction axioms (I use !p := φ for the operation of publicly changing the truth value of p to φ):

[!p := φ]p ↔ φ
[!p := φ]q ↔ q (p and q syntactically different)
[!p := φ]¬ψ ↔ ¬[!p := φ]ψ
[!p := φ](ψ1 ∧ ψ2) ↔ [!p := φ]ψ1 ∧ [!p := φ]ψ2
[!p := φ][i]ψ ↔ [i][!p := φ]ψ

Unlike the case of relativisation, this can be extended to the case of epistemic logic with common knowledge, by means of:

[!p := φ]CGψ ↔ CG[!p := φ]ψ

We get the following..
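These reduction axioms can be read as a recursive translation that pushes the factual-change modality through the connectives until it reaches atoms; a sketch in Haskell (the datatype and its constructor names are ours, chosen for the example):

```haskell
-- Epistemic formulas with common knowledge for a group of agents.
type Prop  = String
type Agent = String

data Form
  = P Prop                -- atomic proposition
  | Neg Form              -- negation
  | Conj Form Form        -- conjunction
  | K Agent Form          -- [i] psi: agent i knows psi
  | CK [Agent] Form       -- C_G psi: common knowledge in group G
  deriving (Eq, Show)

-- reduce p phi psi computes a formula equivalent to [!p := phi] psi,
-- clause by clause following the reduction axioms for factual change:
-- the modality distributes over every connective, including C_G, and
-- only at atoms does it act, replacing p by phi.
reduce :: Prop -> Form -> Form -> Form
reduce p phi (P q)      = if p == q then phi else P q
reduce p phi (Neg a)    = Neg (reduce p phi a)
reduce p phi (Conj a b) = Conj (reduce p phi a) (reduce p phi b)
reduce p phi (K i a)    = K i (reduce p phi a)
reduce p phi (CK g a)   = CK g (reduce p phi a)
```

That the common-knowledge clause is just as compositional as the others is exactly the contrast with relativization, where no such clause exists.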
This talk shows how propositional dynamic logic (PDL) can be interpreted as a logic for multi-agent knowledge update and belief revision, or as a logic of preference change, if the basic relations are read as preferences instead of plausibilities. Our point of departure is the logic of communication and change (LCC). Like LCC, our logic uses PDL as a base epistemic language. Unlike LCC, we start out from agent plausibilities, add their converses, and build knowledge and belief operators from these with the PDL constructs. We extend the update mechanism of LCC to an update mechanism that handles belief change as relation substitution, and we show that the update part of this logic is more expressive than either that of LCC or that of epistemic/doxastic PDL with a belief change modality. Next, we show that the properties of knowledge and belief are preserved under any update, unlike in LCC. We prove completeness of the logic and give examples of its use. If there is time, we will also look at the preference interpretation of the system, and at preference change scenarios that can be modelled with it.
We present a direct reduction of dynamic epistemic logic to propositional dynamic logic (PDL) [17, 18] by program transformation. The program transformation approach associates with every update action a transformation on PDL programs. These transformations are then employed in reduction axioms for the update actions. It follows that the logic of public announcement, the logic of group announcements, the logic of secret message passing, and so on, can all be viewed as subsystems of PDL. Moreover, the program transformation approach can be used to generate the appropriate reduction axioms for these logics. Our direct reduction of dynamic epistemic logic to PDL was inspired by an earlier reduction of dynamic epistemic logic to automata PDL. Our approach shows how the detour through automata can be avoided.
Hyper tableau reasoning is a version of clausal form tableau reasoning where all negative literals in a clause are resolved away in a single inference step. Constrained hyper tableaux are a generalization of hyper tableaux, where branch closing substitutions, from the point of view of model generation, give rise to constraints on satisfying assignments for the branch. These variable constraints eliminate the need for the awkward ‘purifying substitutions’ of hyper tableaux. The paper presents a non-destructive and proof confluent calculus for constrained hyper tableaux, together with a soundness and completeness proof, with completeness based on a new way to generate models from open tableaux. It is pointed out that the variable constraint approach applies to free variable tableau reasoning in general.
Time discounting is the phenomenon that a desired result in the future is perceived as less valuable than the same result now. Economic theories can take this psychological fact into account in several ways. In the economic literature the most widely used type of additive time discounting is exponential discounting. In exponential discounting, valuation falls by a constant factor per unit of delay. It is well known, however, that exponential time discounting often does not describe well how people actually behave. Most people are averse to short delays in gratification now, while their future selves may not mind a bit of extra waiting. This behaviour can be described well by non-exponential discounting functions such as hyperbolic discounting. In hyperbolic discounting, valuations fall rapidly for small delay periods, but the fall gets slower for longer delay periods. Hyperbolic discounting captures phenomena such as procrastination, addiction and, in general, inconsistency over time. This chapter investigates whether forms of non-exponential discounting, in particular close to the so-called Quasi-Hyperbolic model, could also be characterized in terms of dynamically consistent choices when individuals discount the welfare of future selves as well as their payoffs.
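In standard textbook notation an exponential discounter values a payoff t periods away at δ^t, a hyperbolic discounter at 1/(1 + kt), and a quasi-hyperbolic (β-δ) discounter at 1 for t = 0 and βδ^t for t > 0. A small sketch of the contrast (the parameter values are illustrative, not from the chapter):

```haskell
-- Exponential discounting: a constant per-period factor delta.
expD :: Double -> Int -> Double
expD delta t = delta ^ t

-- Hyperbolic discounting: steep for short delays, flatter for long ones.
hypD :: Double -> Int -> Double
hypD k t = 1 / (1 + k * fromIntegral t)

-- Quasi-hyperbolic (beta-delta): a one-off penalty beta on any delay at all.
qhD :: Double -> Double -> Int -> Double
qhD _    _     0 = 1
qhD beta delta t = beta * delta ^ t

-- Preference reversal: choosing 10 now over 11 tomorrow, but 11 in
-- 366 days over 10 in 365 days, under the same beta-delta parameters.
nowChoice, laterChoice :: Bool
nowChoice   = 10 * qhD 0.7 0.99 0   >  11 * qhD 0.7 0.99 1
laterChoice = 10 * qhD 0.7 0.99 365 <  11 * qhD 0.7 0.99 366
```

Both choices come out True: the same agent takes the smaller-sooner reward when it is immediate, yet prefers the larger-later one when both options lie in the future, which is the time inconsistency the chapter examines.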