Wolfgang Spohn presents the first full account of the dynamic laws of belief by means of ranking theory. This book is his long-awaited presentation of ranking theory and its ramifications.
The first reference on rationality that integrates accounts from psychology and philosophy, covering descriptive and normative theories from both disciplines. Both analytic philosophy and cognitive psychology have made dramatic advances in understanding rationality, but there has been little interaction between the disciplines. This volume offers the first integrated overview of the state of the art in the psychology and philosophy of rationality. Written by leading experts from both disciplines, The Handbook of Rationality covers the main normative and descriptive theories of rationality—how people ought to think, how they actually think, and why we often deviate from what we can call rational. It also offers insights from other fields such as artificial intelligence, economics, the social sciences, and cognitive neuroscience. The Handbook proposes a novel classification system for researchers in human rationality, and it creates new connections between rationality research in philosophy, psychology, and other disciplines. Following the basic distinction between theoretical and practical rationality, the book first considers the theoretical side, including normative and descriptive theories of logical, probabilistic, causal, and defeasible reasoning. It then turns to the practical side, discussing topics such as decision making, bounded rationality, game theory, deontic and legal reasoning, and the relation between rationality and morality. Finally, it covers topics that arise in both theoretical and practical rationality, including visual and spatial thinking, scientific rationality, how children learn to reason rationally, and the connection between intelligence and rationality.
It is natural and important to have a formal representation of plain belief, according to which propositions are held true, or held false, or neither. (In the paper this is called a deterministic representation of epistemic states.) And it is of great philosophical importance to have a dynamic account of plain belief. AGM belief revision theory seems to provide such an account, but it founders on the problem of iterated belief revision, since it can generally account only for one step of revision. The paper discusses and rejects two solutions within the confines of AGM theory. It then introduces ranking functions (as I prefer to call them now; in the paper they are still called ordinal conditional functions) as the proper (and, I find, still the best) solution to the problem, proves that conditional independence w.r.t. ranking functions satisfies the so-called graphoid axioms, and proposes general rules of belief change (in close analogy to Jeffrey's generalized probabilistic conditionalization) that encompass revision and contraction as conceived in AGM theory. Indeed, the parallel to probability theory is amazing. Probability theory can profit from ranking theory as well, since it is also plagued by the problem of iterated belief revision, even if probability measures are conceived as Popper measures (see No. 11). Finally, the theory is compared with its predecessors, which are numerous and impressive, but somehow failed to explain the all-important conditional ranks in the appropriate way.
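A minimal executable sketch of the machinery just described; the representation (finite worlds, integer ranks) and all names are illustrative rather than the paper's own notation. Revision by A and contraction of A then fall out as the special cases A->n (n > 0) and A->0.

```python
# Minimal sketch of a (negative) ranking function over a finite set of
# possible worlds. Ranks are degrees of disbelief: kappa(A) = 0 means A
# is not disbelieved; A is believed iff kappa(not-A) > 0. Illustrative
# representation, not the paper's notation.

INF = float("inf")

def rank(kappa, A):
    """Rank of a proposition A (a set of worlds): the minimal rank of
    its worlds; the impossible proposition gets rank infinity."""
    return min((kappa[w] for w in A), default=INF)

def conditional_rank(kappa, B, A):
    """The all-important conditional rank: kappa(B|A) = kappa(A & B) - kappa(A)."""
    return rank(kappa, A & B) - rank(kappa, A)

def conditionalize(kappa, A, n):
    """A->n conditionalization, the ranking analogue of Jeffrey's
    generalized conditionalization: afterwards kappa(A) = 0 and
    kappa(not-A) = n."""
    worlds = set(kappa)
    k_A, k_notA = rank(kappa, A), rank(kappa, worlds - A)
    return {w: kappa[w] - k_A if w in A else kappa[w] - k_notA + n
            for w in worlds}

kappa = {"w1": 0, "w2": 1, "w3": 2}        # w1 is the most plausible world
A = {"w2", "w3"}                           # disbelieved: rank(kappa, A) = 1
print(conditional_rank(kappa, {"w3"}, A))  # 2 - 1 = 1
print(conditionalize(kappa, A, 0))         # afterwards neither A nor not-A is disbelieved
```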
Conditionals somehow express conditional beliefs. However, conditional belief is a bi-propositional attitude that is generally not truth-evaluable, in contrast to unconditional belief. Therefore, this article opts for an expressivistic semantics for conditionals, grounds this semantics in the arguably most adequate account of conditional belief, that is, ranking theory, and dismisses probability theory for that purpose, because probabilities cannot represent belief. Various expressive options are then explained in terms of ranking theory, with the intention of setting out a general interpretive scheme that is able to account for the most variegated usage of conditionals. The Ramsey test is only the first option. Relevance is another, familiar, but little understood item, which comes in several versions. This article adds a further family of expressive options, which is able to subsume counterfactuals and causal conditionals as well, and indicates at the end how this family allows for a partial recovery of truth conditions for conditionals.
The paper builds on the basically Humean idea that A is a cause of B iff A and B both occur, A precedes B, and A raises the metaphysical or epistemic status of B given the obtaining circumstances. It argues that in pursuit of a theory of deterministic causation this ‘status raising’ is best explicated not in regularity or counterfactual terms, but in terms of ranking functions. On this basis, it constructs a rigorous theory of deterministic causation that successfully deals with cases of overdetermination and pre-emption. It finally indicates how the account's profound epistemic relativization induced by ranking theory can be undone. Contents: Introduction; Variables, propositions, time; Induction first; Causation; Redundant causation; Objectivization.
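A rough schematic rendering of the rank-theoretic ‘status raising’ condition; the paper's official definition is more careful about the obtaining circumstances and about direct causation, and the notation here is illustrative:

```latex
% Schematic rank-raising condition for deterministic causation, where
% tau is the two-sided rank tau(B) = kappa(not-B) - kappa(B) and C
% stands for the obtaining circumstances. Notation is illustrative.
A \text{ causes } B \iff A, B \text{ obtain},\; A \prec B,\;
\text{and}\; \tau(B \mid A \wedge C) > \tau(B \mid \lnot A \wedge C)
```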
The paper takes an expressivistic perspective, i.e., it takes conditionals of all sorts primarily to express conditional beliefs. Therefore it is based on what it takes to be the best account of conditional belief, namely ranking theory. It proposes not to start by looking at the bewildering linguistic phenomenology, but first to systematically study the various options for expressing features of conditional belief. Those options far transcend the Ramsey test and include relevancies of various kinds and in particular the so-called “circumstances are such that” reading, under which all conditionals representing causal relations can also be subsumed. In this way a unifying perspective on the many kinds of conditionals is offered. The final section explains the considerable extent to which truth conditions for conditionals, which may seem lost in the expressivistic or epistemic perspective, may be recovered.
"A Survey of Ranking Theory": The paper gives an up-to-date survey of ranking theory. It carefully explains the basics. It elaborates on the ranking theoretic explication of reasons and their balance. It explains the dynamics of belief statable in ranking terms and indicates how the ranks can thereby be measured. It suggests how the theory of Bayesian nets can be carried over to ranking theory. It indicates what it might mean to objectify ranks. It discusses the formal and the philosophical (...) aspects of the tight relation and the complementarity of ranks and probabilities. It closes with comparative remarks on predecessors and other philosophical proposals as well as formal models developed in AI. (shrink)
Ranking theory delivers an account of iterated contraction; each ranking function induces a specific iterated contraction behavior. The paper shows how to reconstruct a ranking function from its iterated contraction behavior uniquely up to a multiplicative constant, and thus how to measure ranks on a ratio scale. Thereby, it also shows how to completely axiomatize that behavior. The complete set of laws of iterated contraction it specifies amends the laws hitherto discussed in the literature.
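A toy illustration of the first claim, in the same illustrative world/rank representation as the sketch above: a ranking function fixes its own contraction behavior, and that behavior is invariant under multiplying all ranks by a constant, which is why ranks are recoverable only up to a multiplicative constant.

```python
# Toy sketch: contracting a believed proposition A shifts the not-A
# worlds down to rank 0, so that neither A nor not-A is disbelieved
# afterwards. Illustrative representation, not the paper's notation.

def rank(kappa, A):
    return min((kappa[w] for w in A), default=float("inf"))

def contract(kappa, A):
    """Contraction by A: give up belief in A (if held); leave the
    ranking untouched if A is not believed anyway."""
    worlds = set(kappa)
    n = rank(kappa, worlds - A)   # kappa(not-A); A is believed iff n > 0
    if n == 0:
        return dict(kappa)
    return {w: kappa[w] if w in A else kappa[w] - n for w in worlds}

# kappa and its doubling induce the same contraction behavior:
kappa = {"w1": 0, "w2": 2, "w3": 3}
A = {"w1"}                        # believed, since kappa(not-A) = 2 > 0
print(contract(kappa, A))         # {'w1': 0, 'w2': 0, 'w3': 1}
print(contract({w: 2 * r for w, r in kappa.items()}, A))  # same belief set
```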
Ranking theory is one of the salient formal representations of doxastic states. It differs from others in being able to represent belief in a proposition (= taking it to be true), to also represent degrees of belief (i.e. beliefs as more or less firm), and thus to generally account for the dynamics of these beliefs. It does so on the basis of fundamental and compelling rationality postulates and is hence one way of explicating the rational structure of doxastic states. Thereby it provides foundations for accounts of defeasible or nonmonotonic reasoning. It has widespread applications in philosophy, it proves to be most useful in Artificial Intelligence, and it has started to find applications as a model of reasoning in psychology.
The paper displays the similarity between the theory of probabilistic causation developed by Glymour et al. since 1983 and mine, developed since 1976: the core of both is that causal graphs are Bayesian nets. The similarity extends to the treatment of actions or interventions in the two theories. But there is also a crucial difference. Glymour et al. take causal dependencies as primitive and argue that they behave like Bayesian nets under a wide range of circumstances. By contrast, I argue that the behavior of Bayesian nets is ultimately the defining characteristic of causal dependence.
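The shared core claim, that causal graphs are Bayesian nets, amounts to the standard Markov factorization of the joint distribution over the variables of a directed acyclic graph; schematically:

```latex
% Causal Markov factorization over a DAG with vertex set V: each
% variable is independent of its non-descendants given its parents,
% so the joint distribution factorizes as
P(V) = \prod_{v \in V} P\bigl(v \mid \mathrm{pa}(v)\bigr)
```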
The characteristic difference between laws and accidental generalizations lies in our epistemic or inductive attitude towards them. This idea has taken various forms and has dominated the discussion about lawlikeness in recent decades. Likewise, the issue about ceteris paribus conditions is essentially about how we epistemically deal with exceptions. Hence, ranking theory with its resources of defeasible reasoning seems ideally suited to explicate these points in a formal way. This is what the paper attempts to do. It will thus turn out that a law is simply the deterministic analogue of a sequence of independent, identically distributed random variables. This entails that de Finetti's representation theorems can be directly transformed into an account of the confirmation of laws thus conceived.
The paper will show how one may rationalize one-boxing in Newcomb's problem and drinking the toxin in the Toxin puzzle within the confines of causal decision theory by ascending to so-called reflexive decision models which reflect how actions are caused by decision situations (beliefs, desires, and intentions) represented by ordinary unreflexive decision models.
Probability theory, epistemically interpreted, provides an excellent, if not the best available, account of inductive reasoning. This is so because there are general and definite rules for the change of subjective probabilities through information or experience; induction and belief change are one and the same topic, after all. The most basic of these rules is simply to conditionalize with respect to the information received; and there are similar and more general rules. Hence, a fundamental reason for the epistemological success of probability theory is that there exists a well-behaved concept of conditional probability at all. Still, people have, and have reasons for, various concerns over probability theory. One of these is my starting point: Intuitively, we have the notion of plain belief; we believe propositions to be true (or to be false or neither). Probability theory, however, offers no formal counterpart to this notion. Believing A is not the same as having probability 1 for A, because probability 1 is incorrigible; but plain belief is clearly corrigible. And believing A is not the same as giving A a probability larger than some 1 - c, because believing A and believing B is usually taken to be equivalent to believing A & B. Thus, it seems that the formal representation of plain belief has to take a non-probabilistic route. Indeed, representing plain belief seems easy enough: simply represent an epistemic state by the set of all propositions believed true in it or, since I make the common assumption that plain belief is deductively closed, by the conjunction of all propositions believed true in it. But this does not yet provide a theory of induction, i.e. an answer to the question of how epistemic states so represented are changed through information or experience. There is a convincing partial answer: if the new information is compatible with the old epistemic state, then the new epistemic state is simply represented by the conjunction of the new information and the old beliefs. This answer is partial because it does not cover the quite common case where the new information is incompatible with the old beliefs. It is, however, important to complete the answer and to cover this case, too; otherwise, we would not represent plain belief as corrigible. The crucial problem is that there is no good completion. When epistemic states are represented simply by the conjunction of all propositions believed true in them, the answer cannot be completed; and though there is a lot of fruitful work, no other representation of epistemic states has been proposed, as far as I know, which provides a complete solution to this problem. In this paper, I want to suggest such a solution. In [4], I have more fully argued that this is the only solution, if certain plausible desiderata are to be satisfied. Here, in section 2, I will be content with formally defining and intuitively explaining my proposal. I will compare my proposal with probability theory in section 3. It will turn out that the theory I am proposing is structurally homomorphic to probability theory in important respects and that it is thus equally easily implementable, but moreover computationally simpler. Section 4 contains a very brief comparison with various kinds of logics, in particular conditional logic, with Shackle's functions of potential surprise and related theories, and with the Dempster-Shafer theory of belief functions.
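The threshold point can be checked with a two-line computation; the particular numbers are an arbitrary illustration:

```python
# Why belief cannot be probability above a threshold 1 - c: two
# propositions can each clear the threshold while their conjunction
# falls below it (threshold 0.9 chosen arbitrarily).
threshold = 0.9
p_A, p_B = 0.92, 0.92          # both "believed" on the threshold view
p_A_and_B = p_A * p_B          # = 0.8464, if A and B are independent
print(p_A >= threshold, p_B >= threshold, p_A_and_B >= threshold)
# -> True True False: believing A and believing B without believing A & B
```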
The aim of the paper is to explicate the concept of causal independence between sets of factors and Reichenbach's screening-off relation in probabilistic terms along the lines of Suppes' probabilistic theory of causality (1970). The probabilistic concept central to this task is that of conditional stochastic independence. The adequacy of the explication is supported by proving some theorems about the explicata which correspond to our intuitions about the explicanda.
The paper is motivated by the need to account for the practical syllogism as a piece of defeasible reasoning. To meet this need, the paper first refers to ranking theory as an account of defeasible descriptive reasoning. It then argues that two kinds of ought need to be distinguished, purely normative and fact-regarding obligations. It continues by arguing that both kinds of ought can be iteratively revised and should hence be represented by ranking functions, too, just like iteratively revisable beliefs. Its central proposal will then be that the fact-regarding normative ranking function must be conceived as the sum of a purely normative ranking function and an epistemic ranking function. The paper defends this proposal with a comparative discussion of some critical examples and some other distinctions made in the literature, and it gives a more rigorous justification of the proposal. Finally, it starts developing the logic of purely normative and of fact-regarding normative defeasible reasoning, points to the difficulties of completing the logic of the fact-regarding side, but reaches the initial aim of accounting for the defeasible nature of the practical syllogism.
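In symbols, the central proposal might be rendered as follows; the function names are illustrative, not the paper's own:

```latex
% Illustrative notation: nu = fact-regarding normative ranking function,
% pi = purely normative ranking function, kappa = epistemic ranking function.
\nu(w) = \pi(w) + \kappa(w) \quad \text{for every world } w
```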
The paper focuses on interpreting ceteris paribus conditions as normal conditions. After discussing six basic problems for the explication of normal conditions and seven interpretations that do not solve those problems well, I turn to what I call the epistemic account. According to it, the normal is, roughly, the not unexpected. This is developed into a rigorous constructive account of normal conditions, which makes essential use of ranking theory and in particular makes it possible to explain the phenomenon of multiply exceptional conditions. Finally, this static account is extended to a schematic dynamic model of how we may learn about those normal and exceptional conditions.
The modalities come into the world by being projections or objectivizations of our epistemic constitution. Thus this paper is a statement of Humean projectivism. In fact, it goes beyond Simon Blackburn’s version. It is also designed as a comprehensive counter-program to David Lewis’ program of Humean supervenience. In detail, the paper explains: Already the basic fact that the world is a world of states of affairs is due to the nature of our epistemic states. Objects, which figure in states of affairs and which embody metaphysical modality, are constitutable by their essential properties and in fact constituted by us according to our ontological policies. What the facts are, to which the correspondence notion of truth refers, is determined by applying an epistemic or pragmatic notion of truth to the world. Causation is a specific objectivization of our conditional beliefs. Nomicity is a ‘habit of belief’, a specific way of generalizing epistemic attitudes. This covers the basic metaphysical and natural modalities. The paper attempts to convey that talking of projection or objectivization is not just imagery, but a constructively realizable program.
This paper attempts to develop a projectivistic understanding of chance or objective probability or partial determination. It does so by critically examining David Lewis's philosophy of probability and his defense of Humean Supervenience, building thereupon the constructive projectivistic alternative, which will basically be a suitable reinterpretation of de Finetti's position. Any treatment of the topic must show how it extends to natural necessity or deterministic laws or full determination in perfect parallel. The paper indicates at the end how this demand can be met.
And this paper is an attempt to say precisely how, thus addressing a philosophical problem which is commonly taken to be a serious one. It does so, however, in quite an idiosyncratic way. It is based on the account of inductive schemes I have given in (1988) and (1990a) and on the conception of causation I have presented in (1980), (1983), and (1990b), and it intends to fill one of the many gaps left by these papers. Still, I have tried to make this paper self-contained. Section 1 explains the philosophical question this paper is about; in more general terms, it asks what might be meant by objectifying epistemic states or features of them and to what extent epistemic states can be objectified. The next sections introduce the basis I rely on with formal precision and some explanation; section 2 deals with induction and section 3 with causation. Within these confines, section 4 attempts to give an explication of the relevant sense of objectification and section 5 investigates the extent to which various features of epistemic states are objectifiable. The two most salient results are, roughly, that the relation "A is a reason for B" cannot be objectified at all and that the relation "A is a cause of B" can be objectified only under substantial, though reasonable, restrictions. What has all of this to do with probability? A lot. The paper trades on a pervasive duality between probabilistic and deterministic epistemology, between a probabilistic representation of epistemic states together with a theory of probabilistic causation and another representation of epistemic states which I call deterministic because it lends itself, in a perfectly parallel fashion, to a theory of deterministic causation. Here I explicitly deal only with the deterministic side, but the duality should pave the way for further conclusions concerning objective probabilities and statistical laws. This outlook is briefly expanded in the final section 6.
Recently, Bengt Hansson presented a paper about dyadic deontic logic, criticizing some purely axiomatic systems of dyadic deontic logic and proposing three purely semantical systems, which he confidently called dyadic standard systems of deontic logic (DSDL1–3). Here I shall discuss the third and by far most interesting system, DSDL3, which operates with preference relations. First, I shall describe this semantical system (Sections 1.1–1.3). Then I shall give an axiomatic system (Section 1.4) which is proved to be correct (Section 2) and complete (Section 3) with respect to Hansson's semantics. Finally, in view of these results, Hansson's semantics will be discussed from a more intuitive standpoint. After emphasizing its intuitive attractiveness (Section 4.1), I will show that two objections often discussed in connection with preference relations do not apply to it (Sections 4.2 and 4.3); more precisely, I will show that the connectedness condition for preference relations can be dropped and that, in a sense, it is not necessary to compare two possible worlds differing in infinitely many respects. (What exactly is meant by this will become clear later on.) Yet there is a third objection to Hansson's semantics which points to a real intuitive inadequacy of DSDL3. A way of removing this inadequacy, which corresponds to Hansson's own intuitions as well as to familiar metaethical views, is suggested, but not technically realized (Section 4.4). In the last section (Section 4.5) I shall briefly show that DSDL3 is decidable, as expected.
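A schematic rendering of the Hansson-style preference semantics under discussion; the notation is illustrative, and limit assumptions for infinitely descending preference chains are set aside:

```latex
% Schematic truth condition for the dyadic obligation O(A/B),
% "A is obligatory given B": all preference-best B-worlds satisfy A.
O(A/B) \text{ holds} \iff \max\nolimits_{\preceq}(B) \subseteq A
```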
First, ranking functions are argued to be superior to AGM belief revision theory in two crucial respects. Second, it is shown how ranking functions are uniquely reflected in iterated belief change. More precisely, conditions on threefold contractions are specified which suffice for representing contractions by a ranking function uniquely up to multiplication by a positive integer. Thus, an important advantage AGM theory seemed to have over ranking functions proves to be spurious.
As the paper explains, it is crucial to epistemology in general and to the theory of causation in particular to investigate the properties of conditional independence as completely as possible. The paper summarizes the most important results concerning conditional independence with respect to two important representations of epistemic states, namely (strictly positive) probability measures and natural conditional (or disbelief or ranking) functions. It finally adds some new observations.
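For reference, the graphoid axioms in question, in their standard formulation; X ⊥ Y | Z abbreviates "X is conditionally independent of Y given Z", and juxtaposition YW abbreviates the union of Y and W:

```latex
% The graphoid axioms for conditional independence (intersection
% requires, e.g., strict positivity in the probabilistic case):
\begin{aligned}
&\text{Symmetry:}      && X \perp Y \mid Z \;\Rightarrow\; Y \perp X \mid Z\\
&\text{Decomposition:} && X \perp YW \mid Z \;\Rightarrow\; X \perp Y \mid Z\\
&\text{Weak union:}    && X \perp YW \mid Z \;\Rightarrow\; X \perp Y \mid ZW\\
&\text{Contraction:}   && X \perp Y \mid Z \;\wedge\; X \perp W \mid ZY \;\Rightarrow\; X \perp YW \mid Z\\
&\text{Intersection:}  && X \perp Y \mid ZW \;\wedge\; X \perp W \mid ZY \;\Rightarrow\; X \perp YW \mid Z
\end{aligned}
```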
The paper attempts to rationalize cooperation in the one-shot prisoners' dilemma (PD). It starts by introducing (and preliminarily investigating) a new kind of equilibrium (differing from Aumann's correlated equilibria) according to which the players' actions may be correlated (sect. 2). In PD the Pareto-optimal among these equilibria is joint cooperation. Since these equilibria seem to contradict causal preconceptions, the paper continues with a standard analysis of the causal structure of decision situations (sect. 3). The analysis then rises to a reflexive point of view according to which the agent integrates his own present and future decision situations into the causal picture of his situation (sect. 4). This reflexive structure is first applied to the toxin puzzle and then to Newcomb's problem, showing a way to rationalize drinking the toxin and taking only one box without assuming causal mystery (sect. 5). The latter result is finally extended to a rationalization of cooperation in PD (sect. 6).
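For reference, a standard PD payoff matrix; the particular numbers are an arbitrary illustration:

```latex
% Row player's strategies on the left, column player's on top;
% entries are (row payoff, column payoff). Joint cooperation (C,C)
% Pareto-dominates joint defection (D,D), yet D strictly dominates
% C for each player.
\begin{array}{c|cc}
   & C     & D     \\ \hline
C  & (3,3) & (0,4) \\
D  & (4,0) & (1,1)
\end{array}
```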
The paper is based on ranking theory, a theory of degrees of disbelief (and hence of belief). On this basis, it explains enumerative induction, the confirmation of a law by its positive instances, which may indeed follow various schemes. It gives a ranking-theoretic explication of a possible law or a nomological hypothesis. It then proves that such schemes of enumerative induction uniquely correspond to mixtures of such nomological hypotheses. Thus, it shows that de Finetti's probabilistic representation theorems may be transformed into an account of the confirmation of possible laws, and that enumerative induction is equivalent to such an account. The paper concludes with some remarks about the apriority of lawfulness or the uniformity of nature.
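For orientation, the probabilistic template being transferred is de Finetti's representation theorem for exchangeable 0/1 sequences; the abstract's claim is that an analogous mixture representation holds for ranking functions and nomological hypotheses:

```latex
% De Finetti: the joint distribution of an exchangeable sequence of
% 0/1 variables is a mixture of i.i.d. Bernoulli distributions.
P(X_1 = x_1, \dots, X_n = x_n)
  = \int_0^1 \theta^{k}\,(1-\theta)^{\,n-k}\, d\mu(\theta),
  \qquad k = \textstyle\sum_{i=1}^n x_i
```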
Modern theory of rationality has truly grown into a science of its own. Still, the general topic has remained a genuinely philosophical one. This essay is concerned with giving a brief overview. Section 2 explains the fundamental scheme of all rationality assessments. With its help, a schematic order of the main questions concerning the theory of rationality can be given; the questions turn out to be quite unevenly addressed in the literature. Section 3 discusses the fundamental issue that the theory of rationality seems to be both a normative and an empirical theory. Section 4, finally, shows how the unity of the theory of rationality can nevertheless be maintained.
The paper proposes two principles of coherence (thus taking up work started in Spohn (1991), "A Reason for Explanation: Explanations Provide Stable Reasons"). The latter indeed serves as a weak, but precise, explication of the notion of coherence as it is used in the current epistemological discussion. After discussing their epistemological setting, the paper considers four ways of establishing these principles. They may be inferred neither from enumerative induction, nor from the nature of propositions as objects of belief, nor in a Kantian way from self-consciousness. Rather, I propose a fairly rigorous way to infer them from an even more fundamental rationality principle of non-dogmatism and an elementary theory of perception.
We shall defend two theses: (a) if a decision situation exhibits a certain causal structure, then decision theory is in trouble, because the derivation of expected utilities fails; (b) this causal structure in fact obtains in a specific, but very common kind of situation, namely, when the intrinsically evaluated psychological states are in the domain of the utility function. It will be apparent that the problem is but a variant of Joseph Butler's criticism of hedonism. Thus, in a sense, the point of our paper is that modern theorizing about practical deliberation has not dealt seriously with Butler's criticism.
This paper compares the epistemological conception of Isaac Levi with mine. We agree in giving a constructive answer to the question of how belief and probability are related, without reducing one to the other. However, our constructions differ in at least nine more or less important ways, all discussed in the paper. In particular, the paper explains the similarities and differences between Shackle's functions of potential surprise, as used by Levi, and my ranking functions, in formal as well as in philosophical respects. The appendix explains how ranking and probability theory can be combined in the notion of a ranked probability measure (or probabilified ranking function).
Putnam (1975) and Burge (1979) have made a convincing case that neither meanings nor beliefs are in the head. Most philosophers, it seems, have accepted their argument.
This paper deals with Hans Reichenbach's common cause principle, which he propounded and which has been developed and widely applied by Wesley Salmon. Thus, it has become one of the focal points of the continuing discussion of causation. The paper addresses five questions. Section 1 asks: What does the principle say? And section 2 asks: What is its philosophical significance? The most important question, of course, is this: Is the principle true? To answer that question, however, one must first consider how one might argue about it at all. One can do so by way of examples, the subject of section 3, or more theoretically, which is the goal of section 4. Based on an explication of probabilistic causation I have proposed elsewhere, section 4 shows that a variant of the principle is provable within a classical framework. The question naturally arises whether the proved variant is adequate, or too weak. This is pursued in section 5. My main conclusion will be that some version of Reichenbach's principle is provably true, and others may be. This may seem overly ambitious, but it is not. The paper does not make any progress on essential worries about the common cause principle arising in the quantum domain; it only establishes more rigorously what has been thought to be plausible at least within a classical framework.
The central claim of the paper is, roughly, that the fact that it looks to somebody as if p is a defeasibly a priori reason for assuming that p (and vice versa), for any person, even for the perceiver himself. As a preparation, it outlines a doxastic conception suitable to explicate this claim and explains how to analyse dispositions within this conception. Since an observable p has the disposition to look as if p, this analysis generalizes to the central claim, which is then argued to be at the bottom of coherentism. Thus, the defense of the claim supports coherentism as opposed to foundationalism and at the same time provides an answer to skepticism about the external world.
This paper argues for three kinds of possible worlds: Wittgensteinian totalities of facts; Lewisian worlds or universes, i.e. concrete objects of maximal essence; and the world, a concrete object of minimal essence. It moreover explains that correspondence truth applies to Wittgensteinian totalities and pragmatic truth to Lewisian universes. And it finally argues that this conceptualization lays proper foundations for two-dimensional semantics.
where _x_ stands for a visible object and _y_ for a perceiving subject (the reference to a time may be neglected). I take "character" here in the sense of Kaplan (1977), as substantiated by Haas-Spohn (1995, and Chapter 14 in this book). The point of using Kaplan's framework is simple, but of utmost importance: It provides a scheme for clearly separating epistemological and metaphysical issues, for specifying how the two domains are related, and for connecting them to questions concerning meaning, where confusions are often only duplicated. All this is achieved by it better than by any alternative I know of.
Objective standards for justification or for being a reason would be desirable, but inductive skepticism tells us that they cannot be presupposed. Rather, we have to start from subject-relative notions of justification and of being a reason. The paper lays out the strategic options we have, given this dilemma. It explains the requirements for this subject-relative notion and how they may be satisfied. Then it discusses four quite heterogeneous ways of providing more objective standards, which combine without guaranteeing complete success.
In this paper two theories of defeasible reasoning, Pollock's account and my theory of ranking functions, are compared on a strategic level, since a strictly formal comparison would have been infeasible. A brief summary of the accounts shows their basic difference: Pollock's is a strictly computational one, whereas ranking functions provide a regulative theory. Consequently, I argue that Pollock's theory is normatively defective, unable to provide a theoretical justification for its basic inference rules and thus an independent notion of admissible rules. Conversely, I explain how quite a number of achievements of Pollock's account can be adequately duplicated within ranking theory. The main purpose of the paper, though, is not to settle a dispute within formal epistemology, but rather to emphasize the importance of formal methods to the whole of epistemology.
This paper is the most complete presentation of my views on deterministic causation. It develops the deterministic theory in perfect parallel to my theory of probabilistic causation and thus unites the two aspects. It also argues that the theory presented is superior to all regularity and all counterfactual theories of causation.