This volume presents the proceedings of the Eleventh Brazilian Logic Conference on Mathematical Logic, held by the Brazilian Logic Society (co-sponsored by the Centre for Logic, Epistemology and the History of Science, State University of Campinas, São Paulo) in Salvador, Bahia, Brazil. The conference and the volume are dedicated to the memory of Professor Mario Tourasse Teixeira, an educator and researcher who contributed to the formation of several generations of Brazilian logicians. Contributions come from leading Brazilian logicians and their Latin-American and European colleagues. All papers were selected by a careful refereeing process and were revised and updated by their authors for publication in this volume. There are three sections: Advances in Logic, Advances in Theoretical Computer Science, and Advances in Philosophical Logic. Well-known specialists present original research on several aspects of model theory, proof theory, algebraic logic, category theory, connections between logic and computer science, and topics of philosophical logic of current interest. Topics interweave proof-theoretical, semantical, foundational, and philosophical aspects with algorithmic and algebraic views, offering lively high-level research results.
When reasoning about complex domains, where the available information is usually only partial, nonmonotonic reasoning can be an important tool. One of the formalisms introduced in this area is Reiter's Default Logic (1980). A characteristic of this formalism is that the applicability of default (inference) rules can only be verified in the future of the reasoning process. We describe an interpretation of default logic in temporal epistemic logic which makes this characteristic explicit. It is shown that this interpretation yields a semantics for default logic based on temporal epistemic models. A comparison between the various semantics for default logic will show the differences and similarities of these approaches and ours.
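The forward-looking applicability check described above can be made concrete in a small sketch. This assumes a toy propositional setting in which formulas are restricted to literals and derivability to set membership; the triple encoding of a default and the function names are our own simplifications, not Reiter's full first-order definition:

```python
# Toy sketch of Reiter-style defaults (simplified): a default
# (pre, just, concl) fires when `pre` is derivable and the negation of
# `just` is NOT in the candidate extension -- the "future of the
# reasoning process" check: applicability depends on the final extension.

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def is_extension(facts, defaults, candidate):
    """Check whether `candidate` is an extension of (facts, defaults),
    with formulas restricted to literals and derivability to membership."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, just, concl in defaults:
            if pre in closure and neg(just) not in candidate and concl not in closure:
                closure.add(concl)
                changed = True
    return closure == set(candidate)

# Classic Nixon diamond: Quakers are pacifists, Republicans are not.
facts = {"quaker", "republican"}
defaults = [("quaker", "pacifist", "pacifist"),
            ("republican", "-pacifist", "-pacifist")]
print(is_extension(facts, defaults, facts | {"pacifist"}))                # True
print(is_extension(facts, defaults, facts | {"-pacifist"}))               # True
print(is_extension(facts, defaults, facts | {"pacifist", "-pacifist"}))   # False
```

The theory has two extensions because each default's justification is refuted in the other candidate, which is exactly the characteristic the abstract makes explicit with temporal epistemic models.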
Default logic is one of the most popular and successful formalisms for non-monotonic reasoning. In 2002, Bonatti and Olivetti introduced several sequent calculi for credulous and skeptical reasoning in propositional default logic. In this paper we examine these calculi from a proof-complexity perspective. In particular, we show that the calculus for credulous reasoning obeys almost the same bounds on the proof size as Gentzen’s system LK. Hence proving lower bounds for credulous reasoning will be as hard as proving lower bounds for LK. On the other hand, we show an exponential lower bound to the proof size in Bonatti and Olivetti’s enhanced calculus for skeptical default reasoning.
This paper presents statistical default logic, an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, including hypothesis testing and the estimation of a population's mean, variance and proportions. The logic replaces classical defaults with ordered pairs consisting of a Reiter default in the first coordinate and a real number within the unit interval in the second coordinate. This real number represents an upper-bound limit on the probability of accepting the consequent of an applied default and that consequent being false. A method for constructing extensions is then defined that preserves this upper bound on the probability of error under a (skeptical) non-monotonic consequence relation.
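The ordered-pair representation can be illustrated with a small sketch. This shows only the data structure of a default paired with an error bound, not the paper's actual extension construction; the literal-based setting, the filtering by a pre-assigned threshold, and all names are assumptions for illustration:

```python
# Hypothetical sketch: a statistical default pairs a Reiter-style default
# with an upper bound on the probability that its consequent is accepted
# yet false. Here we simply filter defaults by a pre-assigned error bound
# before applying them.

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def conclusions(facts, stat_defaults, bound):
    """Close `facts` under the defaults whose error bound is within `bound`
    (formulas restricted to literals, derivability to membership)."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for (pre, just, concl), eps in stat_defaults:
            if (eps <= bound and pre in closure
                    and neg(just) not in closure and concl not in closure):
                closure.add(concl)
                changed = True
    return closure

# Hypothesis-test-flavoured defaults at different significance levels:
stat_defaults = [(("sampled", "mean_in_ci", "mean_in_ci"), 0.05),
                 (("sampled", "var_in_ci", "var_in_ci"), 0.10)]
print(sorted(conclusions({"sampled"}, stat_defaults, 0.05)))
# -> ['mean_in_ci', 'sampled']
print(sorted(conclusions({"sampled"}, stat_defaults, 0.10)))
# -> ['mean_in_ci', 'sampled', 'var_in_ci']
```

Loosening the pre-assigned bound admits more defaults, mirroring the trade-off between significance level and the set of statistically warranted conclusions.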
Statistical Default Logic (SDL) is an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, e.g., hypothesis testing and the estimation of a population's mean, variance and proportions. This paper presents an embedding of an important subset of SDL theories, called literal statistical default theories, into stable model semantics. The embedding is designed to compute the signature set of literals that uniquely distinguishes each extension of a statistical default theory at a pre-assigned error-bound probability.
We present an epistemic default logic, based on the metaphor of a meta-level architecture. Upward reflection is formalized by a nonmonotonic entailment relation, based on the objective facts that are either known or unknown at the object level. Then, the (monotonic) meta-level reasoning process generates a number of default-beliefs of object-level formulas. We extend this framework by proposing a mechanism to reflect these defaults down. Such a reflection is seen as essentially having a temporal flavour: defaults derived at the meta-level are projected as facts in a next object-level state. In this way, we obtain temporal models for default reasoning in meta-level formalisms which can be conceived as labeled branching trees. Thus, descending the tree corresponds to shifts in time that model downward reflection, whereas the branching of the tree corresponds to ways of combining possible defaults. Altogether, this yields an operational or procedural semantics of reasoning by default, which allows one to reason about it by means of branching-time temporal logic. Finally, we define sceptical and credulous entailment relations based on these temporal models and we characterize Reiter extensions in our semantics.
Dynamic doxastic logic (DDL) is used in connexion with theories of belief revision. Here we try to show that languages of DDL are suitable also for discussing aspects of default logic. One ingredient of our analysis is a concept of coherence-as-ratifiability.
Currently there is hardly any connection between philosophy of science and Artificial Intelligence research. We argue that both fields can benefit from each other. As an example of this mutual benefit we discuss the relation between Inductive-Statistical Reasoning and Default Logic. One of the main topics in AI research is the study of common-sense reasoning with incomplete information. Default logic was developed especially to formalise this type of reasoning. We show that there is a striking resemblance between inductive-statistical reasoning and default logic. A central theme in the logical positivist study of inductive-statistical reasoning, such as Hempel's Criterion of Maximal Specificity, turns out to be equally important in default logic. We also discuss to what extent the relevance of the results of Logical Positivism to AI research could contribute to a reevaluation of Logical Positivism in general.
We show that modal logics characterized by a class of frames satisfying the insertion property are suitable for Reiter's default logic. We refine the canonical fixpoint construction defined by Marek, Schwarz and Truszczyński for Reiter's default logic and thus address a new paradigm for nonmonotonic logic. In fact, differently from the construction defined by these authors, we show that suitable modal logics for such a construction must indeed contain KD4. When reflexivity is added to the modal logic used for the fixpoint construction, we come to the Marek, Schwarz and Truszczyński framework for Reiter's default logic. Our framework is, in fact, appropriate also to the family of modal logics in between S4 and S4f. If, instead, reflexivity is dropped, then we show that a new family of modal logics is gained, namely the modal logics in between KD4 and KD4Z. The upper bound can be extended to the modal logic KD4LZ whenever the propositional language taken into account is finite.
This paper introduces a generalization of Reiter’s notion of “extension” for default logic. The main difference from the original version lies in the way conflicts among defaults are handled: in particular, this notion of “general extension” allows defaults not explicitly triggered to pre-empt other defaults. A consequence of the adoption of such a notion of extension is that the collection of all the general extensions of a default theory turns out to have a nontrivial algebraic structure. This fact has two major technical fall-outs: first, it turns out that every default theory has a general extension; second, general extensions allow one to define a well-behaved, skeptical relation of defeasible consequence for default theories, satisfying the principles of Reflexivity, Cut, and Cautious Monotonicity formulated by D. Gabbay.
Presuppositions and entailments play an important role in determining the meaning of a natural language utterance. Considered as inferences, presuppositions and entailments can be derived from appropriate logical representations of the uttered sentence, the background real-world knowledge, and knowledge concerning conversational principles. Presuppositions are conjectural or defeasible in nature, and entailments are deductive. In this paper we describe the application of Default Logic proof theory (which includes First Order Logic proof theory) to the generation of presuppositions and entailments. Classical logic, which can generate the entailments, is enhanced with default rules which capture the linguistic knowledge required to produce the presuppositions. The similarities and differences between presuppositions and entailments when considered as inferences are discussed. We also show that the Default Logic paradigm, in addition to generating the appropriate presuppositions and entailments, has explanatory power.
Lifschitz introduced the notion of defining extensions of predicate default theories not as absolute, but relative to a specified domain. We look specifically at default theories over a countable domain and show that the set of default theories which possess an ω-extension is Σ¹₂-complete. That the set is in Σ¹₂ is shown by writing a nearly circumscriptive formula whose ω-models correspond to the ω-extensions of a given default theory; similarly, Σ¹₂-hardness is established by a method for translating formulas into default theories in such a way that ω-models of the circumscriptive formula correspond to ω-extensions of the default theory.
We study probabilistic logic under the viewpoint of the coherence principle of de Finetti. In detail, we explore how probabilistic reasoning under coherence is related to model-theoretic probabilistic reasoning and to default reasoning in System P. In particular, we show that the notions of g-coherence and of g-coherent entailment can be expressed by combining notions in model-theoretic probabilistic logic with concepts from default reasoning. Moreover, we show that probabilistic reasoning under coherence is a generalization of default reasoning in System P. That is, we provide a new probabilistic semantics for System P, which neither uses infinitesimal probabilities nor atomic-bound (or big-stepped) probabilities. These results also provide new algorithms for probabilistic reasoning under coherence and for default reasoning in System P, and they give new insight into default reasoning with conditional objects.
We study a probabilistic logic based on the coherence principle of de Finetti and a related notion of generalized coherence (g-coherence). We examine probabilistic conditional knowledge bases associated with imprecise probability assessments defined on arbitrary families of conditional events. We introduce a notion of conditional interpretation defined directly in terms of precise probability assessments. We also examine a property of strong satisfiability which is related to the notion of toleration well known in default reasoning. In our framework we give more general definitions of the notions of probabilistic consistency and probabilistic entailment of Adams. We also recall a notion of strict p-consistency and some related results. Moreover, we give new proofs of some results obtained in probabilistic default reasoning. Finally, we examine the relationships between conditional probability rankings and the notions of g-coherence and g-coherent entailment.
Based on a close study of benchmark examples in default reasoning, such as the Nixon Diamond, the Penguin Principle, etc., this paper provides an in-depth analysis of the basic features of default reasoning. We formalize default inferences based on Modus Ponens for Default Implication, and mark the distinction between "local inferences" (to infer a conclusion from a subset of given premises) and "global inferences" (to infer a conclusion from the entire set of given premises). These conceptual analyses are captured by a formal semantics that is built upon the set-selection function technique. A minimal logic system M of default reasoning that accommodates Modus Ponens for Default Implication and is suitable for local inferences is proposed, and its soundness is proved.
We motivate and formalize the idea of sameness by default: two objects are considered the same if they cannot be proved to be different. This idea turns out to be useful for a number of widely different applications, including natural language processing, reasoning with incomplete information, and even philosophical paradoxes. We consider two formalizations of this notion, both of which are based on Reiter's Default Logic. The first formalization is a new relation of indistinguishability that is introduced by default. We prove that the corresponding default theory has a unique extension, in which every two objects are indistinguishable if and only if their non-equality cannot be proved from the known facts. We show that the indistinguishability relation has some desirable properties: it is reflexive, symmetric, and, while not transitive, it has a transitive “flavor.” The second formalization is an extension (modification) of the ordinary language equality by a similar default: two objects are equal if and only if their non-equality cannot be proved from the known facts. It appears to be less elegant from a formal point of view. In particular, it gives rise to multiple extensions. However, this extended equality is better suited for most of the applications discussed in this paper.
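The behaviour of the first formalization can be sketched under the simplifying assumption that provability of non-equality reduces to membership in a finite set of known distinctness facts; the function name and encoding are ours, not the paper's default theory:

```python
def indistinguishable(objects, known_distinct):
    """By-default indistinguishability: a ~ b unless a != b is 'provable',
    i.e. recorded (in either order) among the known distinctness facts."""
    distinct = {frozenset(p) for p in known_distinct}
    return {(a, b) for a in objects for b in objects
            if a == b or frozenset((a, b)) not in distinct}

objs = {"a", "b", "c"}
rel = indistinguishable(objs, [("a", "b")])
assert ("a", "a") in rel                            # reflexive
assert (("b", "c") in rel) and (("c", "b") in rel)  # symmetric
assert ("a", "b") not in rel                        # proved different
# Only a transitive "flavor": a ~ c and c ~ b hold, yet a ~ b fails.
assert ("a", "c") in rel and ("c", "b") in rel
```

With a ≠ b known, a and c remain indistinguishable and so do c and b, yet a and b do not: reflexive and symmetric but not transitive, as the abstract notes.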
Default logic is computationally expensive. One of the most promising ways of easing this problem and developing powerful implementations is to split a default theory into smaller parts and compute extensions in a modular, local way. This paper compares two recent approaches, Turner's splitting and Cholewinski's stratification. It shows that the approaches are closely related – in fact, the former can be viewed as a special case of the latter.
This is an expository article about the solution to the frame problem proposed in 1980 by Raymond Reiter. For years, his “frame default” remained untested and suspect. But developments in some seemingly unrelated areas of computer science—logic programming and satisfiability solvers—eventually exonerated the frame default and turned it into a basis for important applications.
The present paper aims at a synthesis of belief revision theory with the Sneed formalism known as the structuralist theory of science. This synthesis is brought about by a dynamisation of classical structuralism, with an abductive inference rule and base-generated revisions in the style of Rott (2001). The formalism of prioritised default logic (PDL) serves as the medium of the synthesis. Why seek to integrate the Sneed formalism into belief revision theory? With the hybrid system of the present investigation, a substantial simplification of the ranking information that is necessary to define revisions and contractions uniquely is achieved. This system is, furthermore, expressive enough to capture complex and non-trivial scientific examples. It is thus closely related to a novel research area within belief revision theory which addresses the dynamics of scientific knowledge.
Since the earliest formalisation of default logic by Reiter, many contributions to this appealing approach to nonmonotonic reasoning have been given. The different formalisations are here presented in a general framework that gathers the basic notions, concepts and constructions underlying default logic. Our view is to interpret defaults as special rules that impose a restriction on the juxtaposition of monotonic Hilbert-style proofs of a given logic L. We propose to describe default logic as a logic where the juxtaposition of default proofs is subordinate to a restriction condition. Hence a default logic is a pair consisting of the logic L and a restriction condition, through which properties of the logic, like compactness, can be interpreted. Different default systems are then given a common characterization through a specific condition on the logic L. We also prove cumulativity for any such default logic by slightly modifying the notion of default proof. We extend, in fact, the language of L in a way close to that followed by Brewka in the formulation of his cumulative default system. Finally we show the existence of infinitely many intermediary default logics, depending on the restriction condition and called linear logics, which lie between Reiter's and Łukaszewicz' versions of default logic.
Classic deductive logic entails that once a conclusion is sustained by a valid argument, the argument can never be invalidated, no matter how many new premises are added. This derived property of deductive reasoning is known as monotonicity. Monotonicity is thought to conflict with the defeasibility of reasoning in natural language, where the discovery of new information often leads us to reject conclusions that we once accepted. This perceived failure of monotonic reasoning to observe the defeasibility of natural-language arguments has led some philosophers to abandon deduction itself (!), often in favor of new, non-monotonic systems of inference known as `default logics'. But these radical logics (e.g., Ray Reiter's default logic) introduce their desired defeasibility at the expense of other, equally important intuitions about natural-language reasoning. And, as a matter of fact, if we recognize that monotonicity is a property of the form of a deductive argument and not its content (i.e., the claims in the premise(s) and conclusion), we can see how the common-sense notion of defeasibility can actually be captured by a purely deductive system.
In a previous paper we developed a general theory of input/output logics. These are operations resembling inference, but where inputs need not be included among outputs, and outputs need not be reusable as inputs. In the present paper we study what happens when they are constrained to render output consistent with input. This is of interest for deontic logic, where it provides a manner of handling contrary-to-duty obligations. Our procedure is to constrain the set of generators of the input/output system, considering only the maximal subsets that do not yield output conflicting with a given input. When inputs are authorised to reappear as outputs, both maxichoice revision in the sense of Alchourrón/Makinson and the default logic of Poole emerge as special cases, and there is a close relation with Reiter's default logic. However, our focus is on the general case where inputs need not be outputs. We show in what contexts the consistency of input with output may be reduced to its consistency with a truth-functional combination of components of generators, and under what conditions constrained output may be obtained by a derivation that is constrained at every step.
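The "maximal subsets of generators" procedure can be sketched in a toy setting where inputs and outputs are single literals and consistency means the absence of complementary literals; this is an illustration under those assumptions, with our own names, not the authors' general construction:

```python
# Rough sketch of constrained input/output: generators are (input, output)
# pairs; out(G, A) collects outputs of generators whose input lies in A.
# We keep only maximal generator subsets whose output is consistent with
# the given input set.

from itertools import combinations

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def out(gens, inputs):
    return {o for (i, o) in gens if i in inputs}

def consistent(lits):
    return all(neg(l) not in lits for l in lits)

def maxfamily(gens, inputs):
    """Maximal subsets of `gens` whose output is consistent with `inputs`."""
    ok = [set(g) for k in range(len(gens), -1, -1)
          for g in combinations(gens, k)
          if consistent(set(inputs) | out(g, inputs))]
    return [g for g in ok if not any(g < h for h in ok)]

# Contrary-to-duty flavour: "if p then q", but the input already contains -q.
gens = [("p", "q"), ("p", "r")]
fams = maxfamily(gens, {"p", "-q"})
print(fams)  # the ("p", "q") generator is dropped; ("p", "r") survives
```

The generator whose output would clash with the input is excluded, while the rest of the output is preserved, which is the intended treatment of contrary-to-duty situations.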
The aim of this paper is to strengthen the point made by Horty about the relationship between reason holism and moral particularism. In the literature, prima facie obligations have been considered as the only source of reason holism. I strengthen Horty’s point in two ways. First, I show that contrary-to-duties provide another, independent support for reason holism. Next I outline a formal theory that is able to capture these two sources of holism. While in simple settings the proposed account coincides with Horty’s, this is not true in more complicated or “realistic” settings in which more than two norms collide. My chosen formalism is so-called input/output logic. A bottom-line example is introduced. It raises the issue of whether the conventional wisdom is right in assuming that normative reasons run parallel to epistemic ones.
In this paper we analyze two recent conditional interpretations of defaults, one based on probabilities and the other on models. We study what makes them equivalent, explore their limitations and develop suitable extensions. The resulting framework ties together a number of important notions in default reasoning, like high probabilities and model preference, default priorities and argument systems, and independence assumptions and minimality considerations.
This work proposes an understanding of deductive, default and abductive reasoning as different instances of the same phenomenon: epistemic dynamics. It discusses the main intuitions behind each of these reasoning processes, and suggests how they can be understood as different epistemic actions that modify an agent’s knowledge and/or beliefs in different ways, making the discussion formal through the dynamic epistemic logic framework. The ideas in this paper put the studied processes under the same umbrella, thus highlighting their relationship and allowing a better understanding of how they interact together.
Compositional verification aims at managing the complexity of the verification process by exploiting compositionality of the system architecture. In this paper we explore the use of a temporal epistemic logic to formalize the process of verification of compositional multi-agent systems. The specification of a system, its properties and their proofs are of a compositional nature, and are formalized within a compositional temporal logic: Temporal Multi-Epistemic Logic. It is shown that compositional proofs are valid under certain conditions. Moreover, the possibility of incorporating default persistence of information in a system is explored. A completion operation on a specific type of temporal theories, temporal completion, is introduced to be able to use classical proof techniques in verification with respect to non-classical semantics covering default persistence.
Fixpoint semantics are provided for ambiguity-blocking and ambiguity-propagating variants of Nute's defeasible logic. The semantics are based upon the well-founded semantics for logic programs. It is shown that the logics are sound with respect to their counterpart semantics and complete for locally finite theories. Unlike some other nonmonotonic reasoning formalisms such as Reiter's default logic, the two defeasible logics are directly skeptical and so reject floating conclusions. For defeasible theories with transitive priorities on defeasible rules, the logics are shown to satisfy versions of Cut and Cautious Monotony. For theories with either conflict sets closed under strict rules or strict rules closed under transposition, a form of Consistency Preservation is shown to hold. The differences between the two logics and other variants of defeasible logic—specifically those presented by Billington, Antoniou, Governatori, and Maher—are discussed.
Deontic Logic goes back to Ernst Mally’s 1926 work, Grundgesetze des Sollens: Elemente der Logik des Willens [Mally. E.: 1926, Grundgesetze des Sollens: Elemente der Logik des Willens, Leuschner & Lubensky, Graz], where he presented axioms for the notion ‘p ought to be the case’. Some difficulties were found in Mally’s axioms, and the field has much developed. Logic of Knowledge goes back to Hintikka’s work Knowledge and Belief [Hintikka, J.: 1962, Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press], in which he proposed formal logics of knowledge and belief. This field has also developed a great deal and is now the subject of the TARK conferences. However, there has been relatively little work combining the two notions of knowledge (belief) with the notion of obligation. (See, however, [Lomuscio, A. and Sergot, M.: 2003, Studia Logica 75 63–92; Moore, R. C.: 1990, In J. F. Allen, J. Hendler and A. Tate (eds.), Readings in Planning, Morgan Kaufmann Publishers, San Mateo, CA].) In this paper we point out that an agent’s obligations are often dependent on what the agent knows, and indeed one cannot reasonably be expected to respond to a problem if one is not aware of its existence. For instance, a doctor cannot be expected to treat a patient unless she is aware of the fact that he is sick, and this creates a secondary obligation on the patient or someone else to inform the doctor of his situation. In other words, many obligations are situation dependent, and only apply in the presence of the relevant information. Thus a case for combining Deontic Logic with the Logic of Knowledge is clear. We introduce the notion of knowledge-based obligation and offer an S5, history-based Kripke semantics to express this notion, as this semantics enables us to represent how information is transmitted among agents and how knowledge changes over time as a result of communications.
We consider both the case of an absolute obligation (although dependent on information) as well as the (defeasible) notion of an obligation which may be over-ridden by more relevant information. For instance a physician who is about to inject a patient with drug d may find out that the patient is allergic to d and that she should use d′ instead. Dealing with the second kind of case requires a resort to non-monotonic reasoning and the notion of justified belief which is stronger than plain belief, but weaker than absolute knowledge in that it can be over-ridden. This notion of justified belief also creates a derived notion of default obligation where an agent has, as far as the agent knows, an obligation to do some action a. A dramatic application of this notion is our analysis of the Kitty Genovese case where, in 1964, a young woman was stabbed to death while 38 neighbours watched from their windows but did nothing. The reason was not indifference, but none of the neighbours had even a default obligation to act, even though, as a group, they did have an obligation to take some action to protect Kitty.
There has been considerable discussion in the past about the assumptions and basis of different ethical rules. For instance, it is commonplace to say that ethical rules are default rules, which means that they tolerate exceptions. Some authors argue that morality can only be grounded in particular cases, while others defend the existence of general principles related to ethical rules. Our purpose here is not to justify either position, but to try to model general ethical rules with artificial intelligence formalisms and to compute logical consequences of different ethical theories. More precisely, this is an attempt to show that progress in non-monotonic logics, which simulate default reasoning, could provide a way to formalize different ethical conceptions. From a technical point of view, the model developed in this paper makes use of the Answer Set Programming (ASP) formalism. It is applied comparatively to different ethical systems with respect to their attitude towards lying. The advantages of such formalization are two-fold: firstly, to clarify ideas and assumptions, and, secondly, to use solvers to derive consequences of different ethical conceptions automatically, which can help in a rigorous comparison of ethical theories.
In this essay we advance the view that analytical epistemology and artificial intelligence are complementary disciplines. Both fields study epistemic relations, but whereas artificial intelligence approaches this subject from the perspective of understanding formal and computational properties of frameworks purporting to model some epistemic relation or other, traditional epistemology approaches the subject from the perspective of understanding the properties of epistemic relations in terms of their conceptual properties. We argue that these two practices should not be conducted in isolation. We illustrate this point by discussing how to represent a class of inference forms found in standard inferential statistics. This class of inference forms is interesting because its members share two properties that are common to epistemic relations, namely defeasibility and paraconsistency. Our modeling of standard inferential statistical arguments exploits results from both logical artificial intelligence and analytical epistemology. We remark how our approach to this modeling problem may be generalized to an interdisciplinary approach to the study of epistemic relations.
A non-monotonic logic, the Logic of Plausible Reasoning (LPR), capable of coping with the demands of what we call complex reasoning, is introduced. It is argued that creative complex reasoning is the way of reasoning required in many instances of scientific thought, professional practice and common-life decision taking. For managing the simultaneous consideration of multiple scenarios inherent in these activities, two new modalities, weak and strong plausibility, are introduced as part of the Logic of Plausible Deduction (LPD), a deductive logic specially designed to serve as the monotonic support for LPR. Axiomatics and semantics for LPD, together with a completeness proof, are provided. Once LPD has been given, LPR may be defined via a concept of extension over LPD. Although the construction of LPR extensions is first presented in standard style, for the sake of comparison with existing non-monotonic formalisms, alternative, more elegant and intuitive ways of constructing non-monotonic LPR extensions are also given, and proofs of their equivalence are presented.
A proposal by Ferguson [2003, Argumentation 17, 335–346] for a fully monotonic argument form allowing for the expression of defeasible generalizations is critically examined and rejected as a general solution. It is argued that (i) his proposal achieves less than the default-logician’s solution allows, e.g., the monotonically derived conclusion is one-sided and itself not defeasible; and (ii) when applied to a suitable example, his proposal derives the wrong conclusion. Unsuccessful remedies are discussed.
We identify a pervasive contrast between implicit and explicit stances in logical analysis and system design. Implicit systems change received meanings of logical constants and sometimes also the notion of consequence, while explicit systems conservatively extend classical systems with new vocabulary. We illustrate the contrast for intuitionistic and epistemic logic, then take it further to information dynamics, default reasoning, and other areas, to show its wide scope. This gives a working understanding of the contrast, though we stop short of a formal definition, and acknowledge limitations and borderline cases. Throughout we show how awareness of the two stances suggests new logical systems and new issues about translations between implicit and explicit systems, linking up with foundational concerns about identity of logical systems. But we also show how a practical facility with these complementary working styles has philosophical consequences, as it throws doubt on strong philosophical claims made by just taking one design stance and ignoring alternative ones. We will illustrate the latter benefit for the case of logical pluralism and hyper-intensional semantics.
Claus Oetke, in his "Ancient Indian Logic as a Theory of Non-monotonic Reasoning," presents a sweeping new interpretation of the early history of Indian logic. His main proposal is that Indian logic up until Dharmakirti was nonmonotonic in character, similar to some of the newer logics that have been explored in the field of Artificial Intelligence, such as default logic, which abandon deductive validity as a requirement for formally acceptable arguments. Dharmakirti, he suggests, was the first to consider that a good argument should be one for which it is not possible for the property identified as the "reason" (hetu) to occur without the property to be proved (sadhya), a requirement akin to deductive validity. Oetke's approach is challenged here, arguing that from the very beginning in India something like monotonic, that is, deductively valid, reasoning was the ideal or norm, but that the conception of that ideal was continually refined, in that the criteria for determining when it is realized were progressively sharpened.
We consider the connections between belief revision, conditional logic and nonmonotonic reasoning, using as a foundation the approach to theory change developed by Alchourrón, Gärdenfors and Makinson (the AGM approach). This is first generalized to allow the iteration of theory change operations to capture the dynamics of epistemic states according to a principle of minimal change of entrenchment. The iterative operations of expansion, contraction and revision are characterized both by a set of postulates and by Grove's construction based on total pre-orders on the set of complete theories of the belief logic. We present a sound and complete conditional logic whose semantics is based on our iterative revision operation, but which avoids Gärdenfors's triviality result because of a severely restricted language of beliefs and hence the weakened scope of our extended postulates. In the second part of the paper, we develop a computational approach to theory dynamics using Rott's E-bases as a representation for epistemic states. Under this approach, a ranked E-base is interpreted as standing for the most conservative entrenchment compatible with the base, reflecting a kind of foundationalism in the acceptance of evidence for a belief. Algorithms for the computation of our iterative versions of expansion, contraction and revision are presented. Finally, we consider the relationship between nonmonotonic reasoning and both conditional logic and belief revision. Adapting the approach of Delgrande, we show that the unique extension of a default theory expressed in our conditional logic of belief revision corresponds to the most conservative belief state which respects the theory: however, this correspondence is limited to propositional default theories.
Considering first-order default theories, we present a belief revision algorithm which incorporates the assumption of independence of default instances, and we propose the use of a base logic for default reasoning which incorporates uniqueness of names. We conclude with an examination of the behavior of an implemented system on some of Lifschitz's benchmark problems in nonmonotonic reasoning.