Two compelling principles, the Reasonable Range Principle and the Preservation of Irrelevant Evidence Principle, are necessary conditions that any response to peer disagreements ought to abide by. The Reasonable Range Principle maintains that a resolution to a peer disagreement should not fall outside the range of views expressed by the peers in their dispute, whereas the Preservation of Irrelevant Evidence Principle maintains that a resolution strategy should be able to preserve unanimous judgments of evidential irrelevance among the peers. No standard Bayesian resolution strategy satisfies the PIE Principle, however, and we give a loss aversion argument in support of PIE and against Bayes. The theory of imprecise probability allows one to satisfy both principles, and we introduce the notion of a set-based credal judgment to frame and address a range of subtle issues that arise in peer disagreements.
Jim Joyce argues for two amendments to probabilism. The first is the doctrine that credences are rational, or not, in virtue of their accuracy or “closeness to the truth” (1998). The second is a shift from a numerically precise model of belief to an imprecise model represented by a set of probability functions (2010). We argue that these two amendments cannot be jointly satisfied. To do so, we employ a (slightly generalized) impossibility theorem of Seidenfeld, Schervish, and Kadane (2012), who show that there is no strictly proper scoring rule for imprecise probabilities. The question then is what should give way. Joyce, who is well aware of this no-go result, thinks that a quantifiability constraint on epistemic accuracy should be relaxed to accommodate imprecision. We argue instead that another Joycean assumption, called strict immodesty, should be rejected, and we prove a representation theorem that characterizes all “mildly” immodest measures of inaccuracy.
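For orientation, a minimal statement of strict propriety for precise credences, with the Brier score as the standard example; the notation is ours and is only meant to gloss the term used in the abstract.

```latex
% Strict propriety: every probability function p uniquely expects itself to do best.
\mathbb{E}_p\bigl[I(q,\cdot)\bigr] \;>\; \mathbb{E}_p\bigl[I(p,\cdot)\bigr]
\qquad\text{whenever } q \neq p .
% The Brier score over a partition A_1, ..., A_n is the standard example:
I_{\mathrm{Brier}}(q,\omega) \;=\; \sum_{i=1}^{n}\bigl(q(A_i) - \mathbf{1}_{A_i}(\omega)\bigr)^{2}.
```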
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (1) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (2) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
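To see the phenomenon concretely, here is a standard toy example (not necessarily one of the cases discussed in the paper): H is a fair coin toss, G is an event about which nothing is known, and E says that H and G have the same outcome. The sketch below sweeps a discretized credal set for P(G), assuming H and G are stochastically independent under every member.

```python
# Dilation sketch: under every P in the credal set, P(E) = 1/2 exactly,
# yet P(E | H) = P(G) and P(E | not-H) = 1 - P(G), which range over [0, 1].
# Conditioning on either answer to the experiment therefore dilates E.
from fractions import Fraction

def interval(values):
    return (min(values), max(values))

half = Fraction(1, 2)
thetas = [Fraction(i, 100) for i in range(101)]              # candidate values for P(G)

p_E          = [half * t + half * (1 - t) for t in thetas]   # = 1/2 for every candidate
p_E_given_H  = [t for t in thetas]                           # = P(G)
p_E_given_nH = [1 - t for t in thetas]                       # = 1 - P(G)

print("P(E):        ", interval(p_E))            # (1/2, 1/2) -- precise
print("P(E | H):    ", interval(p_E_given_H))    # (0, 1)     -- dilated
print("P(E | not-H):", interval(p_E_given_nH))   # (0, 1)     -- dilated
```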
Coherentism maintains that coherent beliefs are more likely to be true than incoherent beliefs, and that coherent evidence provides more confirmation of a hypothesis when the evidence is made coherent by the explanation provided by that hypothesis. Although probabilistic models of credence ought to be well-suited to justifying such claims, negative results from Bayesian epistemology have suggested otherwise. In this essay we argue that the connection between coherence and confirmation should be understood as a relation mediated by the causal relationships among the evidence and a hypothesis, and we offer a framework for doing so by fitting together probabilistic models of coherence, confirmation, and causation. We show that the causal structure among the evidence and hypothesis is sometimes enough to determine whether the coherence of the evidence boosts confirmation of the hypothesis, makes no difference to it, or even reduces it. We also show that, ceteris paribus, it is not the coherence of the evidence that boosts confirmation, but rather the ratio of the coherence of the evidence to the coherence of the evidence conditional on a hypothesis.
Traditionally, logic has been the dominant formal method within philosophy. Are logical methods still dominant today, or have the types of formal methods used in philosophy changed in recent times? To address this question, we coded a sample of philosophy papers from the late 2000s and from the late 2010s for the formal methods they used. The results indicate that the proportion of papers using logical methods remained more or less constant over that time period but the proportion of papers using probabilistic methods was approximately three times higher in the late 2010s than it was in the late 2000s. Further analyses explored this change by looking more closely at specific methods, specific levels of technical engagement, and specific subdisciplines within philosophy. These analyses indicate that the increasing proportion of papers using probabilistic methods was pervasive, not confined to particular probabilistic methods, levels of sophistication, or subdisciplines.
This essay presents results about a deviation-from-independence measure called focused correlation. This measure explicates the formal relationship between probabilistic dependence of an evidence set and the incremental confirmation of a hypothesis, resolves a basic question underlying Peter Klein and Ted Warfield's ‘truth-conduciveness’ problem for Bayesian coherentism, and provides a qualified rebuttal to Erik Olsson's claim that there is no informative link between correlation and confirmation. The generality of the result is compared to recent programs in Bayesian epistemology that attempt to link correlation and confirmation by utilizing a conditional evidential independence condition. Several properties of focused correlation are also highlighted.
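For concreteness, the Wayne–Shogenji association measure and the identity that connects conditional and unconditional association to incremental confirmation are sketched below; the notation is ours, and the paper's official definition of focused correlation may orient the ratio differently.

```latex
% Wayne–Shogenji association, unconditional and conditional on a hypothesis H:
\mathrm{Cor}(E_1,E_2) \;=\; \frac{P(E_1 \wedge E_2)}{P(E_1)\,P(E_2)},
\qquad
\mathrm{Cor}_H(E_1,E_2) \;=\; \frac{P(E_1 \wedge E_2 \mid H)}{P(E_1 \mid H)\,P(E_2 \mid H)}.
% The bridge to confirmation: the ratio of the two equals the factor by which the joint
% evidence confirms H beyond the product of the individual confirmation ratios.
\frac{\mathrm{Cor}_H(E_1,E_2)}{\mathrm{Cor}(E_1,E_2)}
\;=\;
\frac{P(H \mid E_1 \wedge E_2)/P(H)}
     {\bigl(P(H \mid E_1)/P(H)\bigr)\,\bigl(P(H \mid E_2)/P(H)\bigr)} .
```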
In this paper, we present a new semantic challenge to the moral error theory. Its first component calls upon moral error theorists to deliver a deontic semantics that is consistent with the error-theoretic denial of moral truths by returning the truth-value false to all moral deontic sentences. We call this the ‘consistency challenge’ to the moral error theory. Its second component demands that error theorists explain in which way moral deontic assertions can be seen to differ in meaning despite necessarily sharing the same intension. We call this the ‘triviality challenge’ to the moral error theory. Error theorists can either meet the consistency challenge or the triviality challenge, we argue, but are hard pressed to meet both.
Recent debate about the error theory has taken a ‘formal turn’. On the one hand, there are those who argue that the error theory should be rejected because of its difficulties in providing a convincing formal account of the logic and semantics of moral claims. On the other hand, there are those who claim that such formal objections fail, maintaining that arguments against the error theory must be of a substantive rather than a formal kind. In this paper, we argue that formal objections to the error theory cannot be eschewed but must be met head-on.
Both dilation and non-conglomerability have been alleged to conflict with a fundamental principle of Bayesian methodology that we call Good's Principle: one should always delay making a terminal decision between alternative courses of action if given the opportunity to first learn, at zero cost, the outcome of an experiment relevant to the decision. In particular, both dilation and non-conglomerability have been alleged to permit or even mandate choosing to make a terminal decision in deliberate ignorance of relevant, cost-free information. Although dilation and non-conglomerability share some similarities, some authors maintain that there are important differences between the two that warrant endorsing different normative positions regarding dilation and non-conglomerability. This article reassesses the grounds for treating dilation and non-conglomerability differently. Our analysis exploits a new and general characterization result for dilation to draw a closer connection between dilation and non-conglomerability.
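The decision-theoretic backbone of Good's Principle, stated in bare form for a precise prior (our notation, not the article's): the expected utility of deciding after a cost-free observation X is never less than the expected utility of deciding now.

```latex
\max_{a \in A}\ \mathbb{E}\bigl[U(a,\theta)\bigr]
\;\le\;
\mathbb{E}_{X}\Bigl[\ \max_{a \in A}\ \mathbb{E}\bigl[U(a,\theta) \mid X\bigr]\ \Bigr].
% With a precise prior the inequality always holds; the article examines how dilation
% and non-conglomerability are alleged to upset this picture for imprecise models.
```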
In the age of big data and a machine epistemology that can anticipate, predict, and intervene on events in our lives, the problem once again is that a few individuals possess the knowledge of how to regulate these activities. But the question we face now is not how to share such knowledge more widely, but rather how to enjoy the public benefits bestowed by this knowledge without freely sharing it. It is not merely personal privacy that is at stake but a range of unsung benefits that come from ignorance and forgetting, traits that are inherently human and integral to the functioning of our society.
Conditionals and conditional reasoning have been a long-standing focus of research across a number of disciplines, ranging from psychology through linguistics to philosophy. But almost no work has concerned itself with the question of how hearing or reading a conditional changes our beliefs. Given that we acquire much—perhaps most—of what we believe through the testimony of others, the simple matter of acquiring conditionals via others’ assertion of a conditional seems integral to any full understanding of the conditional and conditional reasoning. In this paper we detail a number of basic intuitions about how beliefs might change in response to a conditional being uttered, and show how these are backed by behavioral data. In the remainder of the paper, we then show how these deceptively simple phenomena pose a fundamental challenge to present theoretical accounts of the conditional and conditional reasoning – a challenge which no account presently fully meets.
Epistemic decision theory (EDT) employs the mathematical tools of rational choice theory to justify epistemic norms, including probabilism, conditionalization, and the Principal Principle, among others. Practitioners of EDT endorse two theses: (1) epistemic value is distinct from subjective preference, and (2) belief and epistemic value can be numerically quantified. We argue that the first thesis, which we call epistemic puritanism, undermines the second.
Focused correlation compares the degree of association within an evidence set to the degree of association in that evidence set given that some hypothesis is true. A difference between the confirmation lent to a hypothesis by one evidence set and the confirmation lent to that hypothesis by another evidence set is robustly tracked by a difference in focused correlations of those evidence sets on that hypothesis, provided that all the individual pieces of evidence are equally, positively relevant to that hypothesis. However, that result depends on a very strong equal relevance condition on individual pieces of evidence. In this essay, we prove tracking results for focused correlation analogous to Wheeler and Scheines’s results but for cases involving unequal relevance. Our result is robust as well, and we retain conditions for bidirectional tracking between incremental confirmation measures and focused correlation.
In this chapter we draw connections between two seemingly opposing approaches to probability and statistics: evidential probability on the one hand and objective Bayesian epistemology on the other.
In his groundbreaking book, Against Coherence (2005), Erik Olsson presents an ingenious impossibility theorem that appears to show that there is no informative relationship between probabilistic measures of coherence and higher likelihood of truth. Although Olsson's result provides an important insight into probabilistic models of epistemological coherence, the scope of his negative result is more limited than generally appreciated. The key issue is the role conditional independence conditions play within the witness testimony model Olsson uses to establish his result. Olsson maintains that his witness model yields charitable ceteris paribus conditions for any theory of probabilistic coherence. Not so. In fact, Olsson's model, like Bayesian witness models in general, selects a peculiar class of models that are in no way representative of the range of options available to coherence theorists. Recent positive results suggest that there is a way to develop a formal theory of coherence after all. Further, although Bayesian witness models are not conducive to the truth, they are conducive to reliability.
A characterization result of dilation in terms of positive and negative association admits an extremal counterexample, which we present together with a minor repair of the result. Dilation may be asymmetric whereas covariation itself is symmetric. Dilation is still characterized in terms of positive and negative covariation, however, once the event to be dilated has been specified.
We examine the notion of conditionals and the role of conditionals in inductive logics and arguments. We identify three mistakes commonly made in the study of, or motivation for, non-classical logics. A nonmonotonic consequence relation based on evidential probability is formulated. With respect to this acceptance relation, some rules of inference of System P are unsound, and we propose refinements that hold in our framework.
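For reference, the standard System P rules (in Kraus, Lehmann, and Magidor's formulation) against which the evidential-probability acceptance relation is assessed:

```latex
\begin{array}{ll}
\text{(Reflexivity)} & \alpha \mathrel{|\!\sim} \alpha \\
\text{(Left Logical Equivalence)} & \models \alpha \leftrightarrow \beta,\ \ \alpha \mathrel{|\!\sim} \gamma \ \Longrightarrow\ \beta \mathrel{|\!\sim} \gamma \\
\text{(Right Weakening)} & \models \alpha \rightarrow \beta,\ \ \gamma \mathrel{|\!\sim} \alpha \ \Longrightarrow\ \gamma \mathrel{|\!\sim} \beta \\
\text{(And)} & \alpha \mathrel{|\!\sim} \beta,\ \ \alpha \mathrel{|\!\sim} \gamma \ \Longrightarrow\ \alpha \mathrel{|\!\sim} \beta \wedge \gamma \\
\text{(Or)} & \alpha \mathrel{|\!\sim} \gamma,\ \ \beta \mathrel{|\!\sim} \gamma \ \Longrightarrow\ \alpha \vee \beta \mathrel{|\!\sim} \gamma \\
\text{(Cut)} & \alpha \wedge \beta \mathrel{|\!\sim} \gamma,\ \ \alpha \mathrel{|\!\sim} \beta \ \Longrightarrow\ \alpha \mathrel{|\!\sim} \gamma \\
\text{(Cautious Monotonicity)} & \alpha \mathrel{|\!\sim} \beta,\ \ \alpha \mathrel{|\!\sim} \gamma \ \Longrightarrow\ \alpha \wedge \beta \mathrel{|\!\sim} \gamma
\end{array}
```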
Information-based epistemology maintains that ‘being informed’ is an independent cognitive state that cannot be reduced to knowledge or to belief, and the modal logic KTB has been proposed as a model. But what distinguishes the KTB analysis of ‘being informed’, the Brouwersche schema (B), is precisely its downfall, for no logic of information should include (B) and, more generally, no epistemic logic should include (B), either.
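The schemas at issue, reading the box as 'the agent is informed that' (our gloss of the standard KTB axiomatization):

```latex
\begin{array}{ll}
(\mathrm{K}) & \Box(\varphi \rightarrow \psi) \rightarrow (\Box\varphi \rightarrow \Box\psi) \\
(\mathrm{T}) & \Box\varphi \rightarrow \varphi \\
(\mathrm{B}) & \varphi \rightarrow \Box\Diamond\varphi
\end{array}
```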
Jon Williamson's Objective Bayesian Epistemology relies upon a calibration norm to constrain credal probability by both quantitative and qualitative evidence. One role of the calibration norm is to ensure that evidence works to constrain a convex set of probability functions. This essay brings into focus a problem for Williamson's theory when qualitative evidence specifies non-convex constraints.
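A toy illustration of the sort of non-convex constraint at issue (our example, not necessarily the one used in the essay): qualitative evidence can pin the probability of A to one of two values without licensing any value in between.

```latex
\mathcal{E} \;=\; \{\, P : P(A) = 0.2 \ \text{ or }\ P(A) = 0.8 \,\}
\quad\text{is not convex:}\quad
\tfrac{1}{2}P_{1} + \tfrac{1}{2}P_{2}\ \text{assigns}\ P(A) = 0.5
\ \text{when } P_1(A) = 0.2 \text{ and } P_2(A) = 0.8 .
```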
Henry Kyburg’s lottery paradox (1961, p. 197) arises from considering a fair 1000-ticket lottery that has exactly one winning ticket. If this much is known about the execution of the lottery, it is therefore rational to accept that one ticket will win. Suppose that an event is very likely if the probability of its occurring is greater than 0.99. On these grounds it is presumed rational to accept the proposition that ticket 1 of the lottery will not win. Since the lottery is fair, it is rational to accept that ticket 2 won’t win either—indeed, it is rational to accept for any individual ticket i of the lottery that ticket i will not win. However, accepting that ticket 1 won’t win, accepting that ticket 2 won’t win, . . . , and accepting that ticket 1000 won’t win entails that it is rational to accept that no ticket will win, which entails that it is rational to accept the contradictory proposition that one ticket wins and no ticket wins.
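The arithmetic behind the paradox is easy to make explicit; the sketch below is a direct transcription of the setup above.

```python
# Lottery paradox sketch: a fair 1000-ticket lottery with exactly one winner, and the rule
# "it is rational to accept any proposition whose probability exceeds 0.99".

n_tickets = 1000
threshold = 0.99

p_ticket_loses = 1 - 1 / n_tickets            # 0.999 for each individual ticket
accept_each = p_ticket_loses > threshold      # True: accept "ticket i will not win"

# Conjoining all 1000 accepted propositions yields "no ticket wins",
# which has probability 0 because exactly one ticket wins.
p_no_winner = 0.0
accept_conjunction = p_no_winner > threshold  # False

print(accept_each, accept_conjunction)  # True False: acceptance is not closed under conjunction
```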
One goal of normative multi-agent system theory is to formulate principles for normative system change that maintain the rule-like structure of norms and preserve links between norms and individual agent obligations. A central question raised by this problem is whether there is a framework for norm change that is at once specific enough to capture this rule-like behavior of norms, yet general enough to support a full battery of norm and obligation change operators. In this paper we propose an answer to this question by developing a bimodal logic for norms and obligations called NO. A key to our approach is that norms are treated as propositional formulas, and we provide some independent reasons for adopting this stance. Then we define norm change operations for a wide class of modal systems, including the class of NO systems, by constructing a class of modal revision operators that satisfy all the AGM postulates for revision, and constructing a class of modal contraction operators that satisfy all the AGM postulates for contraction. More generally, our approach yields an easily extendable framework within which to work out principles for a theory of normative system change.
Logic is a celebrated representation language because of its formal generality. But there are two senses in which a logic may be considered general, one that concerns a technical ability to discriminate between different types of individuals, and another that concerns constitutive norms for reasoning as such. This essay embraces the former, permutation-invariance conception of logic and rejects the latter, Fregean conception of logic. The question of how to apply logic under this pure invariantist view is addressed, and a methodology is given. The pure invariantist view is contrasted with logical pluralism, and a methodology for applied logic is demonstrated in remarks on a variety of issues concerning non-monotonic logic and non-monotonic inference, including Charles Morgan’s impossibility results for non-monotonic logic, David Makinson’s normative constraints for non-monotonic inference, and Igor Douven and Timothy Williamson’s proposed formal constraints on rational acceptance.
Many philosophers of science have argued that a set of evidence that is "coherent" confirms a hypothesis which explains such coherence. In this paper, we examine the relationships between probabilistic models of all three of these concepts: coherence, confirmation, and explanation. For coherence, we consider Shogenji's measure of association (deviation from independence). For confirmation, we consider several measures in the literature, and for explanation, we turn to Causal Bayes Nets and resort to causal structure and its constraint on probability. All else equal, we show that focused correlation, which is the ratio of the coherence of evidence and the coherence of the evidence conditional on a hypothesis, tracks confirmation. We then show that the causal structure of the evidence and hypothesis can put strong constraints on how coherence in the evidence does or does not translate into confirmation of the hypothesis.
This paper presents statistical default logic, an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, including hypothesis testing and the estimation of a population's mean, variance and proportions. The logic replaces classical defaults with ordered pairs consisting of a Reiter default in the first coordinate and a real number within the unit interval in the second coordinate. This real number represents an upper bound on the probability of accepting the consequent of an applied default and that consequent being false. A method for constructing extensions is then defined that preserves this upper bound on the probability of error under a (skeptical) non-monotonic consequence relation.
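As a schematic illustration of the format (a hypothetical default of our own, not one drawn from the paper), a significance test at level 0.05 might be rendered as a Reiter default paired with its error bound:

```latex
\Bigl\langle\
\frac{\ \mathrm{RandomSample}(s)\ \wedge\ \mathrm{TestStatistic}(s) \in R_{0.05}\ :\ \neg H_0\ }{\neg H_0}
\ ,\ \ 0.05\ \Bigr\rangle
% Read: if the sample is random and the statistic falls in the rejection region, and rejecting
% H_0 is consistent with what is already accepted, conclude ¬H_0; the probability of accepting
% ¬H_0 while H_0 is in fact true is bounded above by 0.05.
```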
The structural view of rational acceptance is a commitment to developing a logical calculus to express rationally accepted propositions sufficient to represent valid argument forms constructed from rationally accepted formulas. This essay argues for this project by observing that a satisfactory solution to the lottery paradox and the paradox of the preface calls for a theory that both (i) offers the facilities to represent accepting less-than-certain propositions within an interpreted artificial language and (ii) provides a logical calculus of rationally accepted formulas that preserves rational acceptance under consequence. The essay explores the merit and scope of the structural view by observing that some limitations to a recent framework advanced by James Hawthorne and Luc Bovens are traced to their framework satisfying the first of these two conditions but not the second.
A bounded formula is a pair consisting of a propositional formula φ in the first coordinate and a real number within the unit interval in the second coordinate, interpreted to express the lower-bound probability of φ. Converting conjunctive/disjunctive combinations of bounded formulas to a single bounded formula consisting of the conjunction/disjunction of the propositions occurring in the collection along with a newly calculated lower probability is called absorption. This paper introduces two inference rules for effecting conjunctive and disjunctive absorption and compares the resulting logical system, called System Y, to axiom System P. Finally, we demonstrate how absorption resolves the lottery paradox and the paradox of the preface.
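A natural reconstruction of the two absorption rules uses the Fréchet bounds, the tightest lower bounds derivable from lower-bound information alone (the exact rules of System Y may differ in detail):

```latex
\frac{(\varphi,\ p)\qquad (\psi,\ q)}{(\varphi \wedge \psi,\ \max(0,\ p + q - 1))}
\ \ \text{(conjunctive absorption)}
\qquad\qquad
\frac{(\varphi,\ p)\qquad (\psi,\ q)}{(\varphi \vee \psi,\ \max(p,\ q))}
\ \ \text{(disjunctive absorption)}
% Both are sound for lower bounds: P(φ ∧ ψ) ≥ P(φ) + P(ψ) − 1 and P(φ ∨ ψ) ≥ max(P(φ), P(ψ)).
```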
Recent advances in philosophy, artificial intelligence, mathematical psychology, and the decision sciences have brought a renewed focus to the role and interpretation of probability in theories of uncertain reasoning. Henry E. Kyburg, Jr. has long resisted the now-dominant Bayesian approach to the role of probability in scientific inference and practical decision. The sharp contrasts between the Bayesian approach and Kyburg's program offer a uniquely powerful framework within which to study several issues at the heart of scientific inference, decision, and reasoning under uncertainty. The commissioned essays for this volume take measure of the scope and impact of Kyburg's views on probability and scientific inference, and include several new and important contributions to the field. Contributors: Gert de Cooman, Clark Glymour, William Harper, Isaac Levi, Ron Loui, Enrique Miranda, John Pollock, Teddy Seidenfeld, Choh Man Teng, Mariam Thalos, Gregory Wheeler, Jon Williamson, and Henry E. Kyburg, Jr.
Focused correlation compares the degree of association within an evidence set to the degree of association in that evidence set given that some hypothesis is true. Wheeler and Scheines have shown that a difference in incremental confirmation of two evidence sets is robustly tracked by a difference in their focused correlation. In this essay, we generalize that tracking result by allowing for evidence having unequal relevance to the hypothesis. Our result is robust as well, and we retain conditions for bidirectional tracking between incremental confirmation measures and focused correlation.
Yet, in broader terms, formal epistemology is not merely a methodological tool for epistemologists, but a discipline in its own right. On this programmatic view, formal epistemology is an interdisciplinary research program that covers work by philosophers, mathematicians, computer scientists, statisticians, psychologists, operations researchers, and economists who aim to give mathematical and sometimes computational representations of, along with sound strategies for reasoning about, knowledge, belief, judgment and decision making.
No one has a well-developed solution to Duhem's problem, the problem of how experimental evidence warrants revision of our theories. Deborah Mayo proposes a solution to Duhem's problem en route to her more ambitious program of providing a philosophical account of inductive inference and experimental knowledge. This paper is a response to Mayo's Error Statistics (ES) program, paying particular attention to her response to Duhem's problem. It turns out that Mayo's purported solution to Duhem's problem is very significant to her project, for the epistemic license claimed by ES and the philosophical underpinnings to her account of experimental knowledge depend on this solution. By introducing the partition problem, I argue that ES fails to solve Duhem's problem and therefore fails to provide an adequate account of experimental knowledge.
A sound and complete axiomatization of two tabloid blogs is presented, Leiter Logic (KB) and Deontic Leiter Logic (KDB), the latter of which can be extended to Shame Game Logic for multiple agents. The (B) schema describes the mechanism behind this class of tabloids, and illustrates the perils of interpreting a provability operator as an epistemic modal. To mark this difference, and to avoid sullying Brouwer's good name, the (B) schema for epistemic modals should be called the Blog Schema.
Classical modal logics, based on the neighborhood semantics of Scott and Montague, provide a generalization of the familiar normal systems based on Kripke semantics. This paper defines AGM revision operators on several first-order monotonic modal correspondents, where each first-order correspondence language is defined by Marc Pauly’s version of the van Benthem characterization theorem for monotone modal logic. A revision problem expressed in a monotone modal system is translated into first-order logic, the revision is performed, and the new belief set is translated back to the original modal system. An example is provided for the logic of Risky Knowledge that uses modal AGM contraction to construct counter-factual evidence sets in order to investigate robustness of a probability assignment given some evidence set. A proof of correctness is given.
Rabern and Rabern (Analysis 68:105–112) and Uzquiano (Analysis 70:39–44) have each presented increasingly harder versions of ‘the hardest logic puzzle ever’ (Boolos, The Harvard Review of Philosophy 6:62–65), and each has provided a two-question solution to his predecessor’s puzzle. But Uzquiano’s puzzle is different from the original and different from Rabern and Rabern’s in at least one important respect: it cannot be solved in fewer than three questions. In this paper we solve Uzquiano’s puzzle in three questions and show why there is no solution in two. Finally, to cement a tradition, we introduce a puzzle of our own.
The theory of lower previsions is designed around the principles of coherence and sure-loss avoidance, and thus steers clear of all the updating anomalies highlighted in Gong and Meng's "Judicious Judgment Meets Unsettling Updating: Dilation, Sure Loss, and Simpson's Paradox" except dilation. In fact, the traditional problem with the theory of imprecise probability is that coherent inference is too complicated rather than unsettling. Progress has been made simplifying coherent inference by demoting sets of probabilities from fundamental building blocks to secondary representations that are derived or discarded as needed.
Epistemic naturalism holds that the results or methodologies from the cognitive sciences are relevant to epistemology, and some have maintained that scientific methods are more compatible with externalist theories of justification than with internalist theories. But practically all discussions about naturalized epistemology are framed exclusively in terms of cognitive psychology, which is only one of the cognitive sciences. The question addressed in this essay is whether a commitment to naturalism really does favor externalism over internalism, and we offer reasons for thinking that naturalism in epistemology is compatible with both internalist and externalist conceptions of justification. We also argue that there are some distinctively internalist aims that are currently being studied scientifically, and that these notions, and others, should be studied by scientific methods.
Statistical Default Logic (SDL) is an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, e.g., hypothesis testing and the estimation of a population's mean, variance and proportions. This paper presents an embedding of an important subset of SDL theories, called literal statistical default theories, into stable model semantics. The embedding is designed to compute the signature set of literals that uniquely distinguishes each extension of a statistical default theory at a pre-assigned error-bound probability.
Michael Dummett famously maintained that analytic philosophy was simply philosophy that followed Frege in treating the philosophy of language as the basis for all other philosophy (1978, 441). But one important insight to emerge from computer science is how difficult it is to animate the linguistic artifacts that the analysis of thought produces. Yet, modeling the effects of thought requires a new skill that goes beyond analysis: procedural literacy. Some of the most promising research in philosophy makes use of a variety of modeling techniques that go beyond basic logic and elementary probability theory. What unifies this approach is a focus on what Alan Perlis called procedural literacy. This essay argues that the future spoils in philosophical research will disproportionately go to those who are procedurally literate.
This paper presents the progicnet programme. It proposes a general framework for probabilistic logic that can guide inference based on both logical and probabilistic input. After an introduction to the framework as such, it is illustrated by means of a toy example from psychometrics. It is shown that the framework can accommodate a number of approaches to probabilistic reasoning: Bayesian statistical inference, evidential probability, probabilistic argumentation, and objective Bayesianism. The framework thus provides insight into the relations between these approaches, illustrates how the results of different approaches can be combined, and provides a basis for doing efficient inference in each of the approaches.
This fourth volume of the Programme “The Philosophy of Science in a European Perspective” deals with new challenges in this field. In this regard, it seeks to broaden the scope of the philosophy of science in two directions. On the one hand, ...
In this essay we advance the view that analytical epistemology and artificial intelligence are complementary disciplines. Both fields study epistemic relations, but whereas artificial intelligence approaches this subject from the perspective of understanding formal and computational properties of frameworks purporting to model some epistemic relation or other, traditional epistemology approaches the subject from the perspective of understanding the properties of epistemic relations in terms of their conceptual properties. We argue that these two practices should not be conducted in isolation. We illustrate this point by discussing how to represent a class of inference forms found in standard inferential statistics. This class of inference forms is interesting because its members share two properties that are common to epistemic relations, namely defeasibility and paraconsistency. Our modeling of standard inferential statistical arguments exploits results from both logical artificial intelligence and analytical epistemology. We remark on how our approach to this modeling problem may be generalized to an interdisciplinary approach to the study of epistemic relations.
This paper proposes a common framework for various probabilistic logics. It consists of a set of uncertain premises with probabilities attached to them. This raises the question of the strength of a conclusion, but without imposing a particular semantics, no general solution is possible. The paper discusses several possible semantics by looking at the question from the perspective of probabilistic argumentation.
Modeling a complex phenomenon such as the mind presents tremendous computational complexity challenges. Modeling field theory (MFT) addresses these challenges in a non-traditional way. The main idea behind MFT is to match levels of uncertainty of the model (also, a problem or some theory) with levels of uncertainty of the evaluation criterion used to identify that model. When a model becomes more certain, the evaluation criterion is adjusted dynamically to match that change to the model. This process is called the Dynamic Logic of Phenomena (DLP) for model construction and it mimics processes of the mind and natural evolution. This paper provides a formal description of DLP by specifying its syntax, semantics, and reasoning system. We also outline links between DLP and other logical approaches. Computational complexity issues that motivate this work are presented using an example of polynomial models.