Epistemologists and philosophers of science have often attempted to express formally the impact of a piece of evidence on the credibility of a hypothesis. In this paper we will focus on the Bayesian approach to evidential support. We will propose a new formal treatment of the notion of degree of confirmation and we will argue that it overcomes some limitations of the currently available approaches on two grounds: (i) a theoretical analysis of the confirmation relation seen as an extension of logical deduction and (ii) an empirical comparison of competing measures in an experimental inquiry concerning inductive reasoning in a probabilistic setting.
This paper outlines an account of conditionals, the evidential account, which rests on the idea that a conditional is true just in case its antecedent supports its consequent. As we will show, the evidential account exhibits some distinctive logical features that deserve careful consideration. On the one hand, it departs from the material reading of ‘if then’ exactly in the way we would like it to depart from that reading. On the other, it significantly differs from the non-material accounts which hinge on the Ramsey Test, advocated by Adams, Stalnaker, Lewis, and others.
This paper develops a probabilistic analysis of conditionals which hinges on a quantitative measure of evidential support. In order to spell out the interpretation of ‘if’ suggested, we will compare it with two more familiar interpretations, the suppositional interpretation and the strict interpretation, within a formal framework which rests on fairly uncontroversial assumptions. As it will emerge, each of the three interpretations considered exhibits specific logical features that deserve separate consideration.
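A minimal sketch (toy distributions, not the paper's formal framework) of one well-known way the material and suppositional readings come apart: on any probability distribution over the antecedent A and consequent C, the material reading P(¬A ∨ C) is never less probable than the suppositional reading P(C | A).

```python
import random

def readings(joint):
    """joint maps (A, C) truth-value pairs to probabilities summing to 1."""
    p_a = joint[(True, True)] + joint[(True, False)]
    material = 1.0 - joint[(True, False)]          # P(~A or C)
    suppositional = joint[(True, True)] / p_a      # P(C | A)
    return material, suppositional

random.seed(0)
worlds = [(True, True), (True, False), (False, True), (False, False)]
for _ in range(1000):
    weights = [random.random() + 1e-6 for _ in worlds]
    total = sum(weights)
    joint = {w: x / total for w, x in zip(worlds, weights)}
    m, s = readings(joint)
    assert m >= s - 1e-12   # material reading never less probable
```

The randomized check is only illustrative; the inequality itself follows analytically, since 1 − P(C|A) = P(A ∧ ¬C)/P(A) ≥ P(A ∧ ¬C) = 1 − P(¬A ∨ C).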
Theory change is a central concern in contemporary epistemology and philosophy of science. In this paper, we investigate the relationships between two ongoing research programs providing formal treatments of theory change: the (post-Popperian) approach to verisimilitude and the AGM theory of belief change. We show that appropriately construed accounts emerging from those two lines of epistemological research do yield convergences relative to a specified kind of theories, here labeled “conjunctive”. In this domain, a set of plausible conditions is identified which demonstrably captures the verisimilitudinarian effectiveness of AGM belief change, i.e., its effectiveness in tracking truth approximation. We conclude by indicating some further developments and open issues arising from our results.
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, the attempt to provide a satisfactory account of the phenomenon has proved challenging. Here we elaborate the suggestion (first discussed by Sides, Osherson, Bonini, & Viale, 2002) that in standard conjunction problems the fallacious probability judgements observed experimentally are typically guided by sound assessments of _confirmation_ relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy, which is proven _robust_ (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown to be distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
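The core of the confirmation-theoretic account can be illustrated with hypothetical numbers (assumed for the sketch, not taken from the paper): evidence E may *confirm* the conjunction B&F more than B alone, even though B&F is necessarily no more *probable* than B, and this holds under more than one confirmation measure.

```python
# Assumed toy priors and posteriors given the evidence E:
prior = {"B": 0.25, "B&F": 0.075}
posterior = {"B": 0.05, "B&F": 0.045}

def diff(h):   # difference measure: d(H, E) = P(H|E) - P(H)
    return posterior[h] - prior[h]

def ratio(h):  # ratio measure: r(H, E) = P(H|E) / P(H)
    return posterior[h] / prior[h]

# Probability theory is respected: the conjunction is less probable...
assert posterior["B&F"] < posterior["B"]
# ...yet E confirms B&F more than B under both measures (robustness):
assert diff("B&F") > diff("B")
assert ratio("B&F") > ratio("B")
```

On these numbers E disconfirms both hypotheses, but it disconfirms the conjunction far less, which is the kind of confirmation asymmetry the account takes participants to be tracking.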
Probability ratio and likelihood ratio measures of inductive support and related notions have appeared as theoretical tools for probabilistic approaches in the philosophy of science, the psychology of reasoning, and artificial intelligence. In an effort at conceptual clarification, several authors have pursued axiomatic foundations for these two families of measures. Such results have been criticized, however, as relying on unduly demanding or poorly motivated mathematical assumptions. We provide two novel theorems showing that probability ratio and likelihood ratio measures can be axiomatized in a way that overcomes these difficulties.
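The two families have standard definitions, which a short sketch can make concrete (the numbers below are made up for illustration): the probability ratio PR(H, E) = P(H|E)/P(H) and the likelihood ratio LR(H, E) = P(E|H)/P(E|¬H), both of which exceed 1 exactly when E raises the probability of H.

```python
def probability_ratio(p_h_given_e, p_h):
    """Probability ratio measure: P(H|E) / P(H)."""
    return p_h_given_e / p_h

def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """Likelihood ratio measure: P(E|H) / P(E|~H)."""
    return p_e_given_h / p_e_given_not_h

# With P(H)=0.3, P(E|H)=0.8, P(E|~H)=0.2, Bayes' theorem gives P(H|E):
p_h, p_e_h, p_e_nh = 0.3, 0.8, 0.2
p_e = p_h * p_e_h + (1 - p_h) * p_e_nh      # total probability: 0.38
p_h_e = p_h * p_e_h / p_e                   # posterior of H given E

assert probability_ratio(p_h_e, p_h) > 1    # E confirms H...
assert likelihood_ratio(p_e_h, p_e_nh) > 1  # ...on both measures
```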
We provide a 'verisimilitudinarian' analysis of the well-known Linda paradox or conjunction fallacy, i.e., the fact that most people judge the conjunctive statement "Linda is a bank teller and is active in the feminist movement" (B & F) as more probable than the isolated statement "Linda is a bank teller" (B), contrary to an uncontroversial principle of probability theory. The basic idea is that experimental participants may judge B & F a better hypothesis about Linda as compared to B because they evaluate B & F as more verisimilar than B. In fact, the hypothesis "feminist bank teller", while less likely to be true than "bank teller", may well be a better approximation to the truth about Linda.
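The "uncontroversial principle" in question is the conjunction rule: for any probability distribution, P(B ∧ F) ≤ P(B). A minimal randomized check (illustrative only; the rule holds by construction, since P(B) = P(B ∧ F) + P(B ∧ ¬F)):

```python
import random
random.seed(1)

for _ in range(10_000):
    # Random joint distribution over the four cells B&F, B&~F, ~B&F, ~B&~F:
    cells = [random.random() for _ in range(4)]
    total = sum(cells)
    p_bf = cells[0] / total                  # P(B and F)
    p_b = (cells[0] + cells[1]) / total      # P(B)
    assert p_bf <= p_b   # the conjunction never beats its conjunct
```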
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people’s goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism.
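The unifying move can be sketched with the standard two-parameter Sharma-Mittal definition (this implementation is illustrative, not the paper's code): H_{q,r}(p) = [(Σᵢ pᵢ^q)^((1−r)/(1−q)) − 1] / (1 − r), with Shannon entropy recovered as q, r → 1, Rényi entropy of order q as r → 1, and Tsallis entropy when q = r.

```python
import math

def sharma_mittal(p, q, r, tol=1e-9):
    """Sharma-Mittal entropy of distribution p (order q, degree r), in nats.
    Shannon (q, r -> 1), Renyi (r -> 1), and Tsallis (q = r) entropies
    arise as special cases, handled here as explicit limits."""
    shannon = -sum(x * math.log(x) for x in p if x > 0)
    if abs(q - 1) < tol and abs(r - 1) < tol:
        return shannon                                   # Shannon limit
    if abs(q - 1) < tol:                                 # q -> 1 limit
        return (math.exp((1 - r) * shannon) - 1) / (1 - r)
    power_sum = sum(x ** q for x in p if x > 0)
    if abs(r - 1) < tol:                                 # Renyi limit
        return math.log(power_sum) / (1 - q)
    return (power_sum ** ((1 - r) / (1 - q)) - 1) / (1 - r)

p = [0.5, 0.25, 0.25]
assert abs(sharma_mittal(p, 1, 1) - 1.5 * math.log(2)) < 1e-9  # Shannon
assert abs(sharma_mittal(p, 2, 2) - 0.625) < 1e-9              # Tsallis, q=2
```

Uncertainty *reduction* for a test outcome can then be computed as the entropy of the prior minus the entropy of the posterior, for any (q, r) setting.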
The so‐called problem of irrelevant conjunction has been seen as a serious challenge for theories of confirmation. It involves the consequences of conjoining irrelevant statements to a hypothesis that is confirmed by some piece of evidence. Following Hawthorne and Fitelson, we reconstruct the problem with reference to Bayesian confirmation theory. Then we extend it to the case of conjoining irrelevant statements to a hypothesis that is disconfirmed by some piece of evidence. As a consequence, we obtain and formally present a novel and more troublesome problem of irrelevant conjunction. We conclude by indicating a possible solution based on a measure‐sensitive approach and by critically discussing a major alternative way to address the problem. Received December 2008; revised August 2009.
Crupi et al. (Think Reason 14:182–199, 2008) have recently advocated and partially worked out an account of the conjunction fallacy phenomenon based on the Bayesian notion of confirmation. In response, Schupbach (2009) presented a critical discussion drawing on some novel experimental results. After providing a brief restatement and clarification of the meaning and scope of our original proposal, we will outline Schupbach’s results and discuss his interpretation thereof, arguing that, properly construed, they do not actually undermine our point of view. Finally, we will support this claim by means of some novel data.
Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability.
The current state of inductive logic is puzzling. Survey presentations are recurrently offered, and a very rich and extensive handbook was entirely dedicated to the topic just a few years ago. Among the contributions to this very volume, however, one finds forceful arguments to the effect that inductive logic is not needed and that the belief in its existence is itself a misguided illusion, while other distinguished observers have eventually come to see at least the label as “slightly antiquated”. What seems not to have lost any of its currency is the problem which inductive logic is meant to address. Inference from limited ascertained information to uncertain hypotheses is ubiquitous in learning, prediction and discovery. The logical insight that this kind of inference is fallible …
This essay addresses the methodology of philosophy of science and illustrates how formal and empirical methods can be fruitfully combined. Special emphasis is given to the application of experimental methods to confirmation theory and to recent work on the conjunction fallacy, a key topic in the rationality debate arising from research in cognitive psychology. Several other issues can be studied in this way. In the concluding section, a brief outline is provided of three further examples.
The Linda paradox is a key topic in current debates on the rationality of human reasoning and its limitations. We present a novel analysis of this paradox, based on the notion of verisimilitude as studied in the philosophy of science. The comparison with an alternative analysis based on probabilistic confirmation suggests how to overcome some problems of our account by introducing an adequately defined notion of verisimilitudinarian confirmation.
Analyses of the Sleeping Beauty Problem are polarised between those advocating the “1/2 view” and those endorsing the “1/3 view”. The disagreement concerns the evidential relevance of self-locating information. Unlike halfers, thirders regard self-locating information as evidentially relevant in the Sleeping Beauty Problem. In the present study, we systematically manipulate the kind of information available in different formulations of the Sleeping Beauty Problem. Our findings indicate that patterns of judgment on different formulations of the Sleeping Beauty Problem do not fit either the “1/2 view” or the “1/3 view.” Human reasoners tend to acknowledge self-locating evidence as relevant, but discount its weight significantly. Accordingly, self-locating information may trigger more cautious judgments of confirmation than familiar kinds of statistical evidence. We also discuss how these results can advance the debate by providing a more nuanced and empirically grounded account or explication of the evidential impact of self-locating information.
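A minimal simulation of the Sleeping Beauty protocol (a sketch of the standard setup, not the study's experimental materials) shows where the "1/3 view" gets its number: a heads toss yields one awakening and a tails toss two, so among awakenings only about a third follow heads, even though heads comes up on half the tosses.

```python
import random
random.seed(42)

heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2    # the protocol's asymmetry
    total_awakenings += awakenings
    if heads:
        heads_awakenings += awakenings

frequency = heads_awakenings / total_awakenings
assert abs(frequency - 1 / 3) < 0.01   # per *awakening*, heads is rare
```

The halfer's 1/2, by contrast, is the frequency of heads per *toss*; the disagreement is over which reference class the awakened agent's credence should track.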
Bayesian epistemology postulates a probabilistic analysis of many sorts of ordinary and scientific reasoning. Huber has provided a novel criticism of Bayesianism, whose core argument involves a challenging issue: confirmation by uncertain evidence. In this paper, we argue that under a properly defined Bayesian account of confirmation by uncertain evidence, Huber's criticism fails. By contrast, our discussion will highlight what we take to be some new and appealing features of Bayesian confirmation theory.
The main point of the paper is to show how popular probabilistic measures of incremental confirmation and statistical relevance with qualitatively different features can be embedded smoothly in generalized parametric families. In particular, I will show that the probability difference, log probability ratio, log likelihood ratio, odds difference, so-called improbability difference, and Gaifman’s measures of confirmation can all be subsumed within a convenient biparametric continuum. One intermediate step of this project may be of interest in its own right, as it provides a unified representation of graded belief of which both probabilities and odds are special cases.
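Several of the measures mentioned have standard definitions that a sketch can make concrete (illustrative only; the paper's biparametric family itself is not reproduced here, and the input numbers are assumed):

```python
import math

def measures(p_h, p_h_given_e, p_e_given_h, p_e_given_not_h):
    """A few standard incremental confirmation measures."""
    odds = lambda p: p / (1 - p)
    return {
        "difference":     p_h_given_e - p_h,                        # d
        "log_ratio":      math.log(p_h_given_e / p_h),              # log r
        "log_likelihood": math.log(p_e_given_h / p_e_given_not_h),  # log l
        "odds_diff":      odds(p_h_given_e) - odds(p_h),
    }

# With P(H)=0.3, P(E|H)=0.8, P(E|~H)=0.2, Bayes' theorem gives
# P(H|E) = 0.24/0.38; all four measures then agree that E confirms H:
m = measures(0.3, 0.24 / 0.38, 0.8, 0.2)
assert all(v > 0 for v in m.values())
```

All such measures are positive, zero, or negative together (they agree on the *qualitative* verdict), while ranking degrees of confirmation differently, which is what makes a unifying parametric representation informative.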
In some recent works, Crupi and Iacona have outlined an analysis of ‘if’ based on Chrysippus’ idea that a conditional holds whenever the negation of its consequent is incompatible with its antecedent. This paper presents a sound and complete system of conditional logic that accommodates their analysis. The soundness and completeness proofs that will be provided rely on a general method elaborated by Raidl, which applies to a wide range of systems of conditional logic.