Change, Choice and Inference develops logical theories that are necessary both for the understanding of adaptable human reasoning and for the design of intelligent systems. The book shows that reasoning processes - the drawing of inferences and the changing of one's beliefs - can be viewed as belonging to the realm of practical reason by embedding logical theories into the broader context of the theory of rational choice. The book unifies lively and significant strands of research in logic, philosophy, economics and artificial intelligence. It elaborates on the relevant theories and provides a mathematically precise foundation for the thesis that large parts of theoretical reason can be subsumed under practical reason.
According to the Lockean thesis, a proposition is believed just in case it is highly probable. While this thesis enjoys strong intuitive support, it is known to conflict with seemingly plausible logical constraints on our beliefs. One way out of this conflict is to make probability 1 a requirement for belief, but most have rejected this option for entailing what they see as an untenable skepticism. Recently, two new solutions to the conflict have been proposed that are alleged to be non-skeptical. We compare these proposals with each other and with the Lockean thesis, in particular with regard to the question of how much we gain by adopting any one of them instead of the probability 1 requirement, that is, of how likely it is that one believes more than the things one is fully certain of.
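The conflict between the Lockean thesis and logical constraints on belief can be made concrete with the familiar lottery example. The sketch below is only an illustration of that conflict, not material from the paper; the threshold value and lottery size are invented for the example.

```python
# Illustrative sketch (invented numbers): in a fair 100-ticket lottery,
# each proposition "ticket i loses" is highly probable, yet the
# conjunction "every ticket loses" has probability 0. So a Lockean
# believer who is closed under conjunction ends up believing a falsehood.

THRESHOLD = 0.9  # assumed contextually fixed Lockean threshold

n = 100
p_single_loss = 1 - 1 / n  # P("ticket i loses") = 0.99

# Each individual proposition passes the threshold, so it is believed:
believed_individually = p_single_loss > THRESHOLD

# The conjunction "all tickets lose" is certainly false (some ticket wins):
p_all_lose = 0.0
conjunction_believed = p_all_lose > THRESHOLD

print(believed_individually, conjunction_believed)  # True False
```

Raising the threshold does not help: for any threshold below 1 a large enough lottery reproduces the conflict, which is why probability 1 looks like the only safe, albeit skeptical, requirement.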
The problem of how to remove information from an agent's stock of beliefs is of paramount concern in the belief change literature. An inquiring agent may remove beliefs for a variety of reasons: a belief may be called into doubt or the agent may simply wish to entertain other possibilities. In the prominent AGM framework for belief change, upon which the work here is based, one of the three central operations, contraction, addresses this concern (the other two deal with the incorporation of new information). Makinson has generalised this work by introducing the notion of a withdrawal operation. Underlying the account proffered by AGM is the idea of rational belief change. A belief change operation should be guided by certain principles or integrity constraints in order to characterise change by a rational agent. One of the most noted principles within the context of AGM is the Principle of Informational Economy. However, adoption of this principle in its purest form has been rejected by AGM leading to a more relaxed interpretation. In this paper, we argue that this weakening of the Principle of Informational Economy suggests that it is only one of a number of principles which should be taken into account. Furthermore, this weakening points toward a Principle of Indifference. This motivates the introduction of a belief removal operation that we call severe withdrawal. We provide rationality postulates for severe withdrawal and explore its relationship with AGM contraction. Moreover, we furnish possible worlds and epistemic entrenchment semantics for severe withdrawals.
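In entrenchment terms, severe withdrawal keeps exactly the beliefs that are strictly more entrenched than the sentence being retracted. The toy sketch below illustrates that flavour; the sentences, the numeric ranks, and the deliberate neglect of logical closure are all my own simplifying assumptions, not the paper's formal construction.

```python
# A deliberately simplified sketch of the severe-withdrawal idea:
# when retracting a sentence, keep only what is strictly more
# entrenched than it. Ranks are invented; logical closure is ignored.

entrenchment = {
    "laws of logic": 3,        # nearly ineradicable
    "water boils at 100C": 2,  # well-confirmed generalization
    "the kettle works": 1,     # everyday assumption
    "tea is ready": 0,         # casual expectation
}

def less_entrenched(a, b):
    """a < b: an agent forced to give up a or b gives up a."""
    return entrenchment[a] < entrenchment[b]

def withdraw(beliefs, a):
    """Severe-withdrawal flavour: retain only beliefs strictly more
    entrenched than the retracted sentence a."""
    return {b for b in beliefs if less_entrenched(a, b)}

beliefs = set(entrenchment)
print(sorted(withdraw(beliefs, "the kettle works")))
# ['laws of logic', 'water boils at 100C']
```

Note how everything at or below the rank of the retracted sentence is given up, which is precisely the indifference-driven behaviour that distinguishes severe withdrawal from AGM contraction.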
The paper attacks the almost universally held view that belief revision theories, as they have been studied in the literature of the past two decades, are founded on a Principle of Minimal Change, or Principle of Informational Economy. The principle comes in two versions. According to the first, an agent should, when accepting new information, aim at a posterior belief set that minimizes the items on which it disagrees with the prior belief set. If there are different ways to effect the belief change, then according to the second version, the agent should give up the beliefs that are least entrenched. The paper argues that, although widely endorsed and advertised by belief revision theorists, both versions of the principle are dogmas that are not in fact (and perhaps should not be) adhered to. Two simple mathematical observations substantiate this claim, and it is defended against four possible objections that involve contractions, reconstructions, dispositions, and truths.
In recent years there has been a growing consensus that ordinary reasoning does not conform to the laws of classical logic, but is rather nonmonotonic in the sense that conclusions previously drawn may well be removed upon acquiring further information. Even so, rational belief formation has up to now been modelled as conforming to some fundamental principles that are classically valid. The counterexample described in this paper suggests that a number of the most cherished of these principles should not be regarded as valid for commonsense reasoning. An explanation of this puzzling failure is given, arguing that a problem in the theory of rational choice transfers to the realm of belief formation.
In contrast to other prominent models of belief change, models based on epistemic entrenchment have up to now been applicable only in the context of very strong packages of requirements for belief revision. This paper decomposes the axiomatization of entrenchment into independent modules. Among other things it is shown how belief revision satisfying only the 'basic' postulates of Alchourrón, Gärdenfors and Makinson can be represented in terms of entrenchment.
This is a survey paper. Contents: 1 Introduction -- 2 The representation of belief -- 3 Kinds of belief change -- 4 Coherence constraints for belief revision -- 5 Different modes of belief change -- 6 Two strategies for characterizing rational changes of belief - 6.1 The postulates strategy - 6.2 The constructive strategy -- 7 An abstract view of the elements of belief change -- 8 Iterated changes of belief -- 9 Further developments - 9.1 Variants and extensions of belief revision - 9.2 Updates - 9.3 Default inferences as expectation revision - 9.4 Belief merging -- 10 Concluding remarks.
This paper reorganizes and further develops the theory of partial meet contraction which was introduced in a classic paper by Alchourrón, Gärdenfors, and Makinson. Our purpose is threefold. First, we put the theory in a broader perspective by decomposing it into two layers which can respectively be treated by the general theory of choice and preference and elementary model theory. Second, we reprove the two main representation theorems of AGM and present two more representation results for the finite case that "lie between" the former, thereby partially answering an open question of AGM. Our method of proof is uniform insofar as it uses only one form of "revealed preference", and it explains where and why the finiteness assumption is needed. Third, as an application, we explore the logic characterizing theory contractions in the finite case which are governed by the structure of simple and prioritized belief bases.
This paper combines various structures representing degrees of belief, degrees of disbelief, and degrees of non-belief (degrees of expectations) into a unified whole. The representation uses relations of comparative necessity and possibility, as well as non-probabilistic functions assigning numerical values of necessity and possibility. We define all-encompassing necessity structures which have weak expectations (mere hypotheses, guesses, conjectures, etc.) occupying the lowest ranks and very strong, ineradicable ('a priori') beliefs occupying the highest ranks. Structurally, there are no differences from the top to the bottom. I argue that belief is a vague notion, and that thresholds for belief, if there are any, are context-dependent.
Prioritized bases, i.e., weakly ordered sets of sentences, have been used for specifying an agent’s ‘basic’ or ‘explicit’ beliefs, or alternatively for compactly encoding an agent’s belief state without the claim that the elements of a base are in any sense basic. This paper focuses on the second interpretation and shows how a shifting of priorities in prioritized bases can be used for a simple, constructive and intuitive way of representing a large variety of methods for the change of belief states—methods that have usually been characterized semantically by a system-of-spheres modeling. Among the methods represented are ‘radical’, ‘conservative’ and ‘moderate’ revision, ‘revision by comparison’ in its raising and lowering variants, as well as various constructions for belief expansion and contraction. Importantly, none of these methods makes any use of numbers.
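The revision policies named in the abstract are standardly explained semantically via orderings of possible worlds. The toy code below contrasts two of them on a ranking of worlds; the worlds, the numeric ranks, and the particular encodings are my own simplified assumptions for illustration, not the paper's priority-shifting construction on bases (which, as the abstract stresses, uses no numbers at all).

```python
# Illustrative sketch: a belief state as a ranking of possible worlds
# (lower rank = more plausible), and two iterated revision policies.
# Worlds over atoms p, q are encoded as strings; ranks are invented.

def moderate_revision(ranks, A):
    """Moderate (lexicographic) revision: all A-worlds become strictly
    more plausible than all non-A-worlds, preserving relative order
    inside each group."""
    offset = max(ranks.values()) + 1
    return {w: (r if w in A else r + offset) for w, r in ranks.items()}

def conservative_revision(ranks, A):
    """Conservative (natural) revision: only the most plausible A-worlds
    move to the top; all other worlds keep their old relative order."""
    best = min(ranks[w] for w in A)
    return {w: (0 if (w in A and ranks[w] == best) else ranks[w] + 1)
            for w in ranks}

ranks = {"pq": 0, "p~q": 1, "~pq": 2, "~p~q": 3}
A = {"~pq", "~p~q"}  # the proposition "not p"

print(moderate_revision(ranks, A))      # '~pq' now most plausible
print(conservative_revision(ranks, A))  # '~pq' now most plausible
```

Both policies make the revising proposition believed, but they leave different dispositions for future revisions behind: moderate revision demotes all p-worlds wholesale, while conservative revision promotes only the single best ¬p-world and otherwise changes as little as possible.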
According to the Ramsey Test, conditionals reflect changes of beliefs: α > β is accepted in a belief state iff β is accepted in the minimal revision of it that is necessary to accommodate α. Since Gärdenfors’s seminal paper of 1986, a series of impossibility theorems (“triviality theorems”) has seemed to show that the Ramsey test is not a viable analysis of conditionals if it is combined with AGM-type belief revision models. I argue that it is possible to endorse the Ramsey test for conditionals while staying true to the spirit of AGM. A main focus lies on AGM’s condition of Preservation according to which the original belief set should be fully retained after a revision by information that is consistent with it. I use concrete representations of belief states and (iterated) revisions of belief states as semantic models for (nested) conditionals. Among the four most natural qualitative models for iterated belief change, two are identified that indeed allow us to combine the Ramsey test with Preservation in the language containing only flat conditionals of the form α > β. It is shown, however, that Preservation for this simple language enforces a violation of Preservation for nested conditionals of the form α > (β > γ). In such languages, no two belief sets are ordered by strict subset inclusion. I argue that it has been wrong right from the start to expect that Preservation holds in languages containing nested conditionals.
A sentence A is epistemically less entrenched in a belief state K than a sentence B if and only if a person in belief state K who is forced to give up either A or B will give up A and hold on to B. This is the fundamental idea of epistemic entrenchment as introduced by Gärdenfors (1988) and elaborated by Gärdenfors and Makinson (1988). Another distinguishing feature of relations of epistemic entrenchment is that they permit particularly simple and elegant construction recipes for minimal changes of belief states. These relations, however, are required to satisfy rather demanding conditions. In the present paper we liberalize the concept of epistemic entrenchment by removing connectivity, minimality and maximality conditions. Correspondingly, we achieve a liberalization of the concept of rational belief change that no longer presupposes the postulates of success and rational monotony. We show that the central results of Gärdenfors and Makinson are preserved in our more flexible setting. Moreover, the generalized concept of epistemic entrenchment turns out to be applicable also to relational and iterated belief changes.
This paper dwells upon formal models of changes of beliefs, or theories, which are expressed in languages containing a binary conditional connective. After defining the basic concept of a (non-trivial) belief revision model, I present a simple proof of Gärdenfors's (1986) triviality theorem. I claim that on a proper understanding of this theorem we must give up the thesis that consistent revisions (additions) are to be equated with logical expansions. If negated or 'might' conditionals are interpreted on the basis of autoepistemic omniscience, or if autoepistemic modalities (Moore) are admitted, even more severe triviality results ensue. It is argued that additions cannot be philosophically construed as parasitic (Levi) on expansions. In conclusion I outline some logical consequences of the fact that we must not expect monotonic revisions in languages including conditionals.
In their unifying theory to model uncertainty, Friedman and Halpern (1995–2003) applied plausibility measures to default reasoning satisfying certain sets of axioms. They proposed a distinctive condition for plausibility measures that characterizes “qualitative” reasoning (as contrasted with probabilistic reasoning). A similar and similarly fundamental, but more general and thus stronger condition was independently suggested in the context of “basic” entrenchment-based belief revision by Rott (1996–2003). The present paper analyzes the relation between the two approaches to formalizing basic notions of plausibility as used in qualitative default reasoning. While neither approach is a special case of the other, translations can be found that elucidate their relationship. I argue that Rott’s notion of plausibility allows for a more modular set-up and has a better philosophical motivation than that of Friedman and Halpern.
In this paper I discuss the foundations of a formal theory of coherent and conservative belief change that is suitable to be used as a method for constructing iterated changes of belief, sensitive to the history of earlier belief changes, and independent of any form of dispositional coherence. I review various ways to conceive the relationship between the beliefs actually held by an agent and her belief change strategies, show the problems they suffer from, and suggest that belief states should be represented by unary revision functions that take sequences of inputs. Three concepts of coherence implicit in current theories of belief change are distinguished: synchronic, diachronic and dispositional coherence. Diachronic coherence is essentially identified with what is known as conservatism in epistemology. The present paper elaborates on the philosophical motivation of the general framework; formal details and results are provided in a companion paper.
This paper studies the idea of conservatism with respect to belief change strategies in the setting of unary, iterated belief revision functions (based on the conclusions of Rott, ‘Coherence and Conservatism in the Dynamics of Belief, Part I: Finding the Right Framework’, Erkenntnis 50, 1999, 387–412). Special attention is paid to the case of ‘basic belief change’ where neither the (weak) AGM postulates concerning conservatism with respect to beliefs nor the (strong) supplementary AGM postulates concerning dispositional coherence need to be satisfied. One‐step belief change generated by ‘basic entrenchment’ is combined with a natural conservative method of revising entrenchment relations. A logical characterization of this method is presented, and it is compared with three other methods known from the literature which I call ‘external’, ‘radical’ and ‘moderate’ belief revision. While conservative belief change turns out to be incoherent in its treatment of the recency of information, moderate belief change is more satisfactory in this respect.
In this paper I discuss the relation between various properties that have been regarded as important for determining whether or not a belief constitutes a piece of knowledge: its stability, strength and sensitivity to truth, as well as the strength of the epistemic position in which the subject is with respect to this belief. Attempts to explicate the relevant concepts more formally with the help of systems of spheres of possible worlds (à la Lewis and Grove) must take care to keep apart the very different roles that systems of spheres can play. Nozick's sensitivity account turns out to be closer to the stability analysis of knowledge (versions of which I identify in Plato, Descartes, Klein and Lehrer) than one might have suspected.
This paper presents the model of ‘bounded revision’ that is based on two-dimensional revision functions taking as arguments pairs consisting of an input sentence and a reference sentence. The key idea is that the input sentence is accepted as far as (and just a little further than) the reference sentence is ‘cotenable’ with it. Bounded revision satisfies the AGM axioms as well as the Same Beliefs Condition (SBC) saying that the set of beliefs accepted after the revision does not depend on the reference sentence (although the posterior belief state does depend on it). Bounded revision satisfies the Darwiche–Pearl (DP) axioms for iterated belief change. If the reference sentence is fixed to be a tautology or a contradiction, two well-known one-dimensional revision operations result. Bounded revision thus naturally fills the space between conservative revision (also known as natural revision) and moderate revision (also known as lexicographic revision). I compare this approach to the two-dimensional model of ‘revision by comparison’ investigated by Fermé and Rott (Artif Intell 157:5–47, 2004) that satisfies neither the SBC nor the DP axioms. I conclude that two-dimensional revision operations add substantially to the expressive power of qualitative approaches that do not make use of numbers as measures of degrees of belief.
As part of the conference commemorating Theoria's 75th anniversary, a round table discussion on philosophy publishing was held in Bergendal, Sollentuna, Sweden, on 1 October 2010. Bengt Hansson was the chair, and the other participants were nine editors-in-chief of philosophy journals: Hans van Ditmarsch (Journal of Philosophical Logic), Pascal Engel (Dialectica), Sven Ove Hansson (Theoria), Vincent Hendricks (Synthese), Søren Holm (Journal of Medical Ethics), Pauline Jacobson (Linguistics and Philosophy), Anthonie Meijers (Philosophical Explorations), Henry S. Richardson (Ethics) and Hans Rott (Erkenntnis).
This paper addresses the question whether the past couple of decades of formal research in belief revision offers evidence of a new psychologism in logic. In the first part I examine five potential arguments in favour of this thesis and find them all wanting. In the second part of the paper I argue that belief revision research has climbed up a hierarchy of models for the change of doxastic states that appear to be clearly normative at the bottom, but are more amenable to an empirical-descriptive interpretation on higher levels. I conclude that this observation might offer a foothold for the thesis that there is a new psychologism in logic.
Pragmatists have argued that doxastic or epistemic norms do not apply to beliefs, but to changes of beliefs; thus not to the holding or not-holding, but to the acquisition or removal of beliefs. Doxastic voluntarism generally claims that humans acquire beliefs in a deliberate and controlled way. This paper introduces Negative Doxastic Voluntarism according to which there is a fundamental asymmetry in belief change: humans tend to acquire beliefs more or less automatically and unreflectively, but they tend to withdraw beliefs in a controlled and deliberate way. I first present a variety of philosophical, empirical and logical arguments for Negative Doxastic Voluntarism. Then I raise two objections against it. First, the apparent asymmetry may result from a confusion of belief with other doxastic attitudes like assumption, supposition, hypothesis or opinion. Second, the apparent asymmetry seems to vanish if we focus on doxastic states rather than just beliefs. Some rejoinders and their consequences for the vague concept of belief are sketched.
The paper aims at a perspicuous representation of Isaac Levi's pragmatist epistemology, spanning from the 1967 classic "Gambling with Truth" to his 2004 book on "Mild Contraction". Based on a formal framework for Levi's notion of inquiry, I analyse his decision-theoretic approach with truth and information as basic cognitive values, and with Shackle measures as emerging structures. Both cognitive values figure prominently in Levi's model of inductive belief expansion, but only the value of information is employed in his model of belief contraction. I argue that the former model is more successful than the latter.
The paper addresses the situation of a dispute in which one speaker says ϕ and a second speaker says not-ϕ. Proceeding on an idealising distinction between "basic" and "interesting" claims that may be formulated in a given idiolectal language, I investigate how it might be sorted out whether the dispute reflects a genuine disagreement, or whether the speakers are only having a merely verbal dispute, due to their using different interesting concepts. I show that four individually plausible principles for the determination of the nature of a dispute are incompatible. As an example, I discuss the question whether Sarai lied in the story told in Genesis 12.
This paper is about the situation in which an author (writer or speaker) presents a deductively invalid argument, but the addressee aims at a charitable interpretation and has reason to assume that the author intends to present a valid argument. How can he go about interpreting the author’s reasoning as enthymematically valid? We suggest replacing the usual find-the-missing-premise approaches by an approach based on systematic efforts to ascribe a belief state to the author against the background of which the argument has to be evaluated. The suggested procedure includes rules for recording whether the author in fact accepts or denies the premises and the conclusion, as well as tests for enthymematic validity and strategies for revising belief state ascriptions. Different degrees of interpretive charity can be exercised. This is one reason why the interpretation or reconstruction of an enthymematic argument typically does not result in a unique outcome.
A number of seminal papers on the logic of belief change by Alchourrón, Gärdenfors, and Makinson have given rise to what is now known as the AGM paradigm. The present discussion note is a response to Neil Tennant's article, which aims at a critical appraisal of the AGM approach and the introduction of an alternative approach. We show that important parts of Tennant's critical remarks are based on misunderstandings or on lack of information. In the course of doing this, we attend to some central philosophical issues in the theory of belief change, such as the choice of a representation for belief states and the meaning of an idealized rational agent.
Richard Bradley has initiated a new debate, with Brian Hill and Jake Chandler as further participants, about the implications of a number of so-called triviality results surrounding the Ramsey test for conditionals. I comment on this debate and argue that ‘Inclusion’ and ‘Preservation’, which were originally introduced as postulates for the rational revision of factual beliefs, have little to recommend them in the first place when extended to languages containing conditionals. I question the philosophical method of postulation that was applied in the new debate, and instead base my arguments on explicit representations of belief states and canonical constructions of belief state revisions.
There are two prominent ways of formally modelling human belief. One is in terms of plain beliefs, i.e., sets of propositions. The second one is in terms of degrees of beliefs, which are commonly taken to be representable by subjective probability functions. In relating these two ways of modelling human belief, the most natural idea is a thesis frequently attributed to John Locke: a proposition is or ought to be believed just in case its subjective probability exceeds a contextually fixed probability threshold. This idea is known to have two serious drawbacks: first, it denies that beliefs are closed under conjunction, and second, it may easily lead to sets of beliefs that are logically inconsistent. In this paper I present two recent accounts of aligning plain belief with subjective probability: the Stability Theory of Leitgeb (2013; Philos Rev 123:131–171, 2014; Proc Aristot Soc Suppl Vol 89:143–185, 2015a; The stability of belief: an essay on rationality and coherence. Oxford University Press, Oxford, 2015b) and the Probalogical Theory of Lin and Kelly (2012a; J Philos Log 41:957–981, 2012b). I argue that Leitgeb’s theory may be too sceptical for the purposes of real life.
This paper presents a number of apparent anomalies in rational choice scenarios, and their translation into the logic of everyday reasoning. Three classes of examples that have been discussed in the context of probabilistic choice since the 1960s (by Debreu, Tversky and others) are analyzed in a non-probabilistic setting. It is shown how they can at the same time be regarded as logical problems that concern the drawing of defeasible inferences from a given information base. I argue that initial appearances notwithstanding, these cases should not be classed as instances of irrationality in choice or reasoning. One way of explaining away their apparent oddity is to view certain aspects of these examples as making particular options salient. The decision problems in point can then be solved by ‘picking’ these options, although they could not have been ‘chosen’ in a principled way, due to ties or incomparabilities with alternative options.
The classical qualitative theory of belief change due to Alchourrón, Gärdenfors and Makinson has been widely known as being characterised by two packages of postulates. While the basic package consists of six postulates and is very weak, the full package that adds two further postulates is very strong. I revisit two classic constructions of theory contraction, viz., relational possible worlds contraction and entrenchment-based contraction and argue that four intermediate levels can be distinguished that play - or ought to play - important roles within qualitative belief revision theory. Levels 3 and 4 encode two ways of interpreting the idea of imperfect discrimination of the plausibilities of possible worlds or propositions.
Modern belief revision theory is based to a large extent on partial meet contraction that was introduced in the seminal article by Carlos Alchourrón, Peter Gärdenfors, and David Makinson that appeared in 1985. In the same year, Alchourrón and Makinson published a significantly different approach to the same problem, called safe contraction. Since then, safe contraction has received much less attention than partial meet contraction. The present paper summarizes the current state of knowledge on safe contraction, provides some new results and offers a list of unsolved problems that are in need of investigation.
The contributions to the Special Issue on Multiple Belief Change, Iterated Belief Change and Preference Aggregation are divided into three parts. Four contributions are grouped under the heading "multiple belief change" (Part I, with authors M. Falappa, E. Fermé, G. Kern-Isberner, P. Peppas, M. Reis, and G. Simari), five contributions under the heading "iterated belief change" (Part II, with authors G. Bonanno, S.O. Hansson, A. Nayak, M. Orgun, R. Ramachandran, H. Rott, and E. Weydert). These papers not only pick up the particular questions raised, but also extend and modify the framework of Alchourrón, Gärdenfors and Makinson. Part III deals with preference aggregation and consists of one contribution (by F. Herzberg and D. Eckert).
The theory of theory change due to Alchourrón, Gärdenfors and Makinson ("AGM") has been widely known as being characterized by two sets of postulates, one being very weak and the other being very strong. Commenting on the three classic constructions of partial meet contraction, safe contraction and entrenchment-based construction, I argue that three intermediate levels can be distinguished that play decisive roles within the AGM theory.
This contribution compares G. F. Meier's principle of hermeneutic equity ("Billigkeit") with D. Davidson's "Principle of Charity". It has been pointed out in the literature that these very general principles of benevolent interpretation are related insofar as they generally ascribe a certain form of rationality to speakers and authors. Yet they also exhibit clearly discernible differences. While Meier's art of interpretation presupposes a naive concept of meaning, Davidson's principle is first and foremost constitutive of meaning. I place the two principles within the framework of a general theory of interpretation, only roughly sketched here, according to which the interpretation of utterances and texts always constitutes an enterprise of reconstruction, one that postulates the existence of meanings of elementary lexical units and grammatical constructions which can be understood contextually and holistically and combined compositionally.
This paper compares three accounts of what can be inferred from a knowledge base that contains conditionals: Lehmann and Magidor’s Rational Entailment; Pearl’s System Z, later extended and refined in collaboration with Goldszmidt; and the present author’s Nonmonotonic conditional logic for belief revision. We show that although the ideas motivating these systems are strikingly different, they are formally equivalent. An explanation of the surprising parallel is offered in terms of the interpretation of conditionals in the context of nonmonotonic reasoning and belief revision. Finally, some common problems with the equivalent systems are outlined, as well as some problems in assessing these problems which indicate that a general definition of dependence between the items in a knowledge base is needed.
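Of the three systems compared, Pearl's System Z has a particularly compact algorithmic core: the tolerance-based partition of the conditional knowledge base. The sketch below is my own minimal encoding of that construction on the classic penguin example; the world encoding and helper names are assumptions for illustration, not code from the paper.

```python
from itertools import product

# Sketch of the System Z tolerance partition. Worlds are truth
# assignments to (penguin, bird, flies); each conditional "a => c"
# is a pair of predicates (antecedent, consequent) on worlds.

worlds = list(product([False, True], repeat=3))  # (p, b, f)

conditionals = {
    "b=>f":  (lambda w: w[1], lambda w: w[2]),       # birds fly
    "p=>b":  (lambda w: w[0], lambda w: w[1]),       # penguins are birds
    "p=>~f": (lambda w: w[0], lambda w: not w[2]),   # penguins don't fly
}

def falsifies(w, cond):
    a, c = cond
    return a(w) and not c(w)

def tolerated(name, delta):
    """A conditional is tolerated by the set delta if some world
    verifies it while falsifying no conditional in delta."""
    a, c = conditionals[name]
    return any(a(w) and c(w) and
               not any(falsifies(w, conditionals[d]) for d in delta)
               for w in worlds)

def z_partition(names):
    """Assign each conditional its Z-rank: rank 0 to those tolerated
    by the whole base, then iterate on the remainder."""
    remaining, rank, z = set(names), 0, {}
    while remaining:
        layer = {n for n in remaining if tolerated(n, remaining)}
        if not layer:
            raise ValueError("knowledge base is not consistent")
        for n in layer:
            z[n] = rank
        remaining -= layer
        rank += 1
    return z

print(z_partition(conditionals))
# "b=>f" gets Z-rank 0; both penguin rules get the more exceptional rank 1
```

The ranks drive the inference relation: more specific (higher-ranked) conditionals override more general ones, which is how all three equivalent systems get "penguins don't fly" to defeat "birds fly".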
According to Otto Neurath, the practice of science consists in a large undertaking of setting up and maintaining systems of statements: In unified science we try... to create a consistent system of protocol statements and nonprotocol statements. When a new statement is presented to us we compare it with the system at our disposal and check whether the new statement is in contradiction with the system or not. If the new statement is in contradiction with the system, we can discard this statement as unusable, for example, the statement: ‘In Africa lions sing only in major chords’; however, one can also ‘accept’ the statement and change the system accordingly so that it remains consistent if this statement is added. The statement may then be called ‘true’.
In his paper ‘Changing the Theory of Theory Change: Reply to My Critics’, N. Tennant (1997b) reacts to the critical reception of an earlier article of his. The present note rectifies some of the most serious misrepresentations in Tennant's reply.
In his paper ‘On Having Bad Contractions, Or: No Room for Recovery’ [Te97], N. Tennant attacks the AGM research program of belief revision. We show that he misrepresents the state of affairs in this field of research.