This is part II in a series of papers outlining Abstraction Theory, a theory that I propose provides a solution to the characterisation or epistemological problem of induction. Logic is built from first principles severed from language such that there is one universal logic independent of specific logical languages. A theory of (non-linguistic) meaning is developed which provides the basis for the dissolution of the `grue' problem and problems of the non-uniqueness of probabilities in inductive logics. The problem of counterfactual conditionals is generalised to a problem of truth conditions of hypotheses and this general problem is then solved by the notion of abstractions. The probability calculus is developed with examples given. In future parts of the series the full decision theory is developed and its properties explored.
This paper is concerned with learners who aim to learn patterns in infinite binary sequences: shown longer and longer initial segments of a binary sequence, they either attempt to predict whether the next bit will be a 0 or a 1, or they issue forecast probabilities for these events. Several variants of this problem are considered. In each case, a no-free-lunch result of the following form is established: the problem of learning is a formidably difficult one, in that no matter what method is pursued, failure is incomparably more common than success; and difficult choices must be faced in choosing a method of learning, since no approach dominates all others in its range of success. In the simplest case, the comparison of the set of situations in which a method fails and the set of situations in which it succeeds is a matter of cardinality (countable vs. uncountable); in other cases, it is a topological matter (meagre vs. co-meagre) or a hybrid computational-topological matter (effectively meagre vs. effectively co-meagre).
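The diagonal construction behind the simplest of these results is easy to make concrete. Below is a minimal sketch, with a hypothetical toy predictor of our own choosing: whatever deterministic method is plugged in, the constructed sequence falsifies its prediction at every single bit.

```python
# Sketch of the diagonal construction; the predictor interface and the toy
# majority method are illustrative assumptions, not the paper's definitions.

def majority_predictor(prefix):
    """Toy method: predict the majority bit of the prefix seen so far (1 on ties)."""
    return 1 if 2 * sum(prefix) >= len(prefix) else 0

def diagonal_sequence(predictor, n):
    """Build a sequence whose every bit is the opposite of the method's forecast."""
    seq = []
    for _ in range(n):
        seq.append(1 - predictor(seq))
    return seq

seq = diagonal_sequence(majority_predictor, 20)
errors = sum(majority_predictor(seq[:i]) != seq[i] for i in range(len(seq)))
print(seq, f"errors: {errors}/{len(seq)}")  # fails on all 20 bits
```

Since every deterministic method has such a nemesis sequence, success across all sequences is impossible; the paper's results sharpen this observation into the cardinality, topological, and computational comparisons just described.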
Evidentialists say that a necessary condition of sound epistemic reasoning is that our beliefs reflect only our evidence. This thesis arguably conflicts with standard Bayesianism, due to the importance of prior probabilities in the latter. Some evidentialists have responded by modelling belief-states using imprecise probabilities (Joyce 2005). However, Roger White (2010) and Aron Vallinder (2018) argue that this Imprecise Bayesianism is incompatible with evidentialism due to “inertia”, where Imprecise Bayesian agents become stuck in a state of ambivalence towards hypotheses. Additionally, escapes from inertia apparently only create further conflicts with evidentialism. This dilemma gives a reason for evidentialist imprecise probabilists to look for alternatives without inertia. I shall argue that Henry E. Kyburg’s approach offers an evidentialist-friendly imprecise probability theory without inertia, and that its relevant anti-inertia features are independently justified. I also connect the traditional epistemological debates concerning the “ethics of belief” more systematically with formal epistemology than has hitherto been done.
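To see why inertia arises, consider a standard illustration (ours, not White's or Vallinder's exact example). With the likelihoods fixed, every sharp prior in a maximally imprecise family updates to a different posterior, and jointly these posteriors sweep out the whole unit interval again:

```latex
% Standard illustration of belief inertia (not White's or Vallinder's
% exact example): with likelihoods $\ell_1 = P(E \mid H)$ and
% $\ell_0 = P(E \mid \neg H)$ fixed and positive, a sharp prior $p$ updates to
\[
  P_p(H \mid E) \;=\; \frac{p\,\ell_1}{p\,\ell_1 + (1-p)\,\ell_0},
\]
% which ranges over all of $(0,1)$ as $p$ does. A maximally imprecise
% prior interval therefore remains maximally imprecise after updating.
```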
Calibration inductive logics are based on accepting estimates of relative frequencies, which are used to generate imprecise probabilities. In turn, these imprecise probabilities are intended to guide beliefs and decisions — a process called “calibration”. Two prominent examples are Henry E. Kyburg's system of Evidential Probability and Jon Williamson's version of Objective Bayesianism. There are many unexplored questions about these logics. How well do they perform in the short run? Under what circumstances do they do better or worse? What is their performance relative to traditional Bayesianism? In this article, we develop an agent-based model of a classic binomial decision problem, including players based on variations of Evidential Probability and Objective Bayesianism. We compare the performances of these players, including against a benchmark player who uses standard Bayesian inductive logic. We find that the calibrated players can match the performance of the Bayesian player, but only with particular acceptance thresholds and decision rules. Among other points, our discussion raises some challenges for characterising “cautious” reasoning using imprecise probabilities. Thus, we demonstrate a new way of systematically comparing imprecise probability systems, and we conclude that calibration inductive logics are surprisingly promising for making decisions.
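The flavour of such an agent-based comparison can be conveyed in a few lines. The sketch below is ours, not the authors' actual model: the acceptance threshold, interval construction, and decision rules are all illustrative assumptions.

```python
# Minimal sketch of a calibrated player vs. a Bayesian benchmark on a
# binomial decision problem. All parameters (threshold, interval method,
# decision rules) are illustrative assumptions, not the authors' design.
import random
from statistics import NormalDist

def play(true_bias=0.6, tosses=50, threshold=0.95, seed=0):
    rng = random.Random(seed)
    heads = sum(rng.random() < true_bias for _ in range(tosses))
    p_hat = heads / tosses
    # Calibrated player: accept an interval estimate of the frequency at the
    # chosen threshold, and bet on heads only if the whole interval exceeds 1/2.
    z = NormalDist().inv_cdf((1 + threshold) / 2)
    half_width = z * (p_hat * (1 - p_hat) / tosses) ** 0.5
    low, high = max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)
    calibrated_bets = low > 0.5          # "cautious" decision rule
    # Bayesian benchmark: flat Beta(1,1) prior; bet if the posterior mean > 1/2.
    bayesian_bets = (heads + 1) / (tosses + 2) > 0.5
    return (low, high), calibrated_bets, bayesian_bets

print(play())
```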
We are often justified in acting on the basis of evidential confirmation. I argue that such evidence supports belief in non-quantificational generic generalizations, rather than universally quantified generalizations. I show how this account supports, rather than undermines, a Bayesian account of confirmation. Induction from confirming instances of a generalization to belief in the corresponding generic is part of a reasoning instinct that is typically (but not always) correct, and allows us to approximate the predictions that formal epistemology would make.
Douven (in press) observes that Schurz's meta-inductive justification of induction cannot explain the great empirical success of induction, and offers an explanation based on computer simulations of the social and evolutionary development of our inductive practices. In this paper, I argue that Douven's account does not address the explanatory question that Schurz's argument leaves open, and that the assumption of the environment's induction-friendliness that is inherent to Douven's simulations is not justified by Schurz's argument.
According to the objective Bayesian approach to inductive logic, premisses inductively entail a conclusion just when every probability function with maximal entropy, from all those that satisfy the premisses, satisfies the conclusion. When premisses and conclusion are constraints on probabilities of sentences of a first-order predicate language, however, it is by no means obvious how to determine these maximal entropy functions. This paper makes progress on the problem in the following ways. Firstly, we introduce the concept of a limit in entropy and show that, if the set of probability functions satisfying the premisses contains a limit in entropy, then this limit point is unique and is the maximal entropy probability function. Next, we turn to the special case in which the premisses are categorical sentences of the logical language. We show that if the uniform probability function gives the premisses positive probability, then the maximal entropy function can be found by simply conditionalising this uniform prior on the premisses. We generalise our results to demonstrate agreement between the maximal entropy approach and Jeffrey conditionalisation in the case in which there is a single premiss that specifies the probability of a sentence of the language. We show that, after learning such a premiss, certain inferences are preserved, namely inferences to inductive tautologies. Finally, we consider potential pathologies of the approach: we explore the extent to which the maximal entropy approach is invariant under permutations of the constants of the language, and we discuss some cases in which there is no maximal entropy probability function.
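In symbols, the categorical special case described above amounts to the following; the notation (P_= for the uniform function, P† for the maximal entropy function) is our shorthand, not necessarily the paper's.

```latex
% Restatement of the categorical special case; the notation ($P_=$ for the
% uniform function, $P^\dagger$ for the maximal entropy function) is our
% shorthand for illustration.
\[
  \text{If } \theta \text{ is a categorical premiss with } P_=(\theta) > 0,
  \quad\text{then}\quad
  P^\dagger(\,\cdot\,) \;=\; P_=(\,\cdot \mid \theta\,)
  \;=\; \frac{P_=(\,\cdot \wedge \theta\,)}{P_=(\theta)}.
\]
```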
CL diagrams – short for Cubus Logicus – are inspired by J.C. Lange’s logic machine from 1714. In recent times, Lange’s diagrams have been used for extended syllogistics, bitstring semantics, analogical reasoning and much else. The paper presents a method for testing statistical syllogisms (also called proportional syllogisms or inductive syllogisms) by using CL diagrams.
A standard way to challenge convergence-based accounts of inductive success is to claim that they are too weak to constrain inductive inferences in the short run. We respond to such a challenge by answering some questions raised by Juhl (1994). When it comes to predicting limiting relative frequencies in the framework of Reichenbach, we show that speed-optimal convergence—a long-run success condition—induces dynamic coherence in the short run.
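For readers new to Reichenbach's framework, the object being predicted is the limiting relative frequency, and the canonical method is the straight rule (this is the standard background formulation, assumed here):

```latex
% The straight rule, the standard background to Reichenbach's framework:
% after n trials with k_n successes, posit the observed relative frequency
\[
  \hat{p}_n \;=\; \frac{k_n}{n},
  \qquad\text{so that}\qquad
  \hat{p}_n \to p
  \;\text{ whenever }
  p = \lim_{n\to\infty} \frac{k_n}{n}
  \text{ exists.}
\]
```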
This book explores the Bayesian approach to the logic and epistemology of scientific reasoning. Section 1 introduces the probability calculus as an appealing generalization of classical logic for uncertain reasoning. Section 2 explores some of the vast terrain of Bayesian epistemology. Three epistemological postulates suggested by Thomas Bayes in his seminal work guide the exploration. This section discusses modern developments and defenses of these postulates as well as some important criticisms and complications that lie in wait for the Bayesian epistemologist. Section 3 applies the formal tools and principles of the first two sections to a handful of topics in the epistemology of scientific reasoning: confirmation, explanatory reasoning, evidential diversity and robustness analysis, hypothesis competition, and Ockham's Razor.
Wenmackers and Romeijn (2016) formalize ideas going back to Shimony (1970) and Putnam (1963) into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem, and offer a forward-looking open-minded Bayesianism that does preserve a version of this guarantee.
All else being equal, can granting the objective purport of moral experience support a presumption in favor of some form of moral objectivism? Don Loeb (2007) has argued that even if we grant that moral experience appears to present us with a realm of objective moral fact—something he denies we have reason to do in the first place—the objective purport of moral experience cannot by itself provide even prima facie support for moral objectivism. In this paper, I contend against Loeb that granting the objective purport of ordinary moral experience is sufficient to support a defeasible presumption in favor of moral objectivism, and this by constituting non-explanatory, comparative confirmation that incrementally raises the prima facie likelihood that moral facts exist. More specifically, I appeal to a modest confirmation principle shared by Likelihoodists and Bayesians, namely the Weak Law of Likelihood, in an effort to show that (i) at a minimum, moral experience establishes a middling scrutable probability for a sufficient but not necessary condition of moral objectivism being true, and that (ii) this moderate probability in turn constitutes evidence that makes it prima facie more probable than not that some form of moral objectivism is true.
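For reference, in its simplest form the principle appealed to here says (this is one common rendering, not necessarily the paper's exact statement):

```latex
% One common rendering of the Weak Law of Likelihood (formulation assumed):
\[
  E \text{ favours } H_1 \text{ over } H_2
  \quad\text{only if}\quad
  P(E \mid H_1) \;>\; P(E \mid H_2).
\]
```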
The mindsponge mechanism (also called the mindsponge framework, concept, or process) provides a way to explain how and why an individual absorbs and ejects cultural values conditional on the external setting. The term “mindsponge” derives from the metaphor of the mind as a sponge that squeezes out unsuitable values and absorbs new ones compatible with its core value. Thanks to its flexibility and well-defined structure, the mechanism has been used to develop various concepts in multiple disciplines.
John Maynard Keynes’s A Treatise on Probability is the seminal text for the logical interpretation of probability. According to his analysis, probabilities are evidential relations between a hypothesis and some evidence, just like the relations of deductive logic. While some philosophers had suggested similar ideas prior to Keynes, it was not until his Treatise that the logical interpretation of probability was advocated in a clear, systematic and rigorous way. I trace Keynes’s influence in the philosophy of probability through a heterogeneous sample of thinkers who adopted his interpretation. This sample consists of Frederick C. Benenson, Roy Harrod, Donald C. Williams, Henry E. Kyburg and David Stove. The ideas of Keynes prove to be adaptable to their diverse theories of probability. My discussion indicates both the robustness of Keynes’s probability theory and the importance of its influence on the philosophers whom I describe. I also discuss the Problem of the Priors. I argue that none of those I discuss have obviously improved on Keynes’s theory with respect to this issue.
The debates between Bayesian, frequentist, and other methodologies of statistics have tended to focus on conceptual justifications, sociological arguments, or mathematical proofs of their long run properties. Both Bayesian statistics and frequentist (“classical”) statistics have strong cases on these grounds. In this article, we instead approach the debates in the “Statistics Wars” from a largely unexplored angle: simulations of different methodologies’ performance in the short to medium run. We conducted a large number of simulations using a straightforward decision problem based around tossing a coin with unknown bias and then placing bets. In this simulation, we programmed four players, inspired by Bayesian statistics, frequentist statistics, Jon Williamson’s version of Objective Bayesianism, and a player who simply extrapolates from observed frequencies to general frequencies. The last player functions as a benchmark: a statistical methodology should at least outperform a crude form of induction. We focused on the performance of these methodologies in guiding the players towards good decisions. Unlike an earlier simulation study of this type, we found no systematic difference in performance between the Bayesian and frequentist players, provided the Bayesian used a flat prior and the frequentist used a low confidence level. Unlike that study, we were able to use Big Data methods to mitigate problems of random error in the simulation results. The Williamsonian player, who is a novel element of our study, also showed no systematic difference in performance, provided that they used a low confidence level. These players performed similarly even in the very short run, when players were making different decisions. Our study indicates that all three methodologies should be taken seriously by philosophers and practitioners of statistics. However, the frequentist and Williamsonian players performed poorly when their confidence levels were high, and the Bayesian was surprisingly harmed by biased priors, providing some unexpected lessons for these methodologies when facing this type of decision problem.
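A stripped-down version of such a coin-and-bets simulation is easy to write down. The sketch below is our illustration only: the payout, trial counts, and decision rule are assumptions, and only the flat-prior Bayesian and the crude extrapolating benchmark are represented.

```python
# Illustrative coin-and-bets simulation (our assumptions throughout; this is
# not the study's actual design): a flat-prior Bayesian vs. the crude
# frequency-extrapolating benchmark, betting whenever expected value is positive.
import random

def run(trials=1000, tosses=30, stake=1.0, payout=2.2, seed=1):
    rng = random.Random(seed)
    profit = {"bayesian": 0.0, "extrapolator": 0.0}
    for _ in range(trials):
        bias = rng.random()                          # unknown bias of this coin
        heads = sum(rng.random() < bias for _ in range(tosses))
        estimates = {
            "bayesian": (heads + 1) / (tosses + 2),  # flat Beta(1,1) posterior mean
            "extrapolator": heads / tosses,          # observed relative frequency
        }
        outcome = rng.random() < bias                # the toss being bet on
        for player, p in estimates.items():
            if p * payout > stake:                   # bet iff estimated EV is positive
                profit[player] += (payout - stake) if outcome else -stake
    return profit

print(run())
```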
Is it possible to combine different logics into a coherent system with the goal of applying it to specific problems so that it sheds some light on foundational aspects of those logics? These are two of the most basic issues of combining logics. Paranormal modal logic is a combination of paraconsistent logic and modal logic. In this paper, I propose two further combinatory developments, focusing on each one of these two issues. On the foundational side, I combine paranormal modal logic with normal modal logic, resulting in a paraconsistent and paracomplete multimodal logic dealing with the notions of plausibility and certainty. On the application side, I combine this logic with Reiter’s default logic, resulting in an inductive and consequently nonmonotonic paraconsistent and paracomplete logic able to represent some key inductive principles.
Many epistemological problems can be solved by the objective Bayesian view that there are rationality constraints on priors, that is, inductive probabilities. But attempts to work out these constraints have run into such serious problems that many have rejected objective Bayesianism altogether. I argue that the epistemologist should borrow the metaphysician’s concept of naturalness and assign higher priors to more natural hypotheses.
The contemporary study of argumentation has grown distant from logic. In this article I argue that restoring the link between the study of argumentation and this discipline could benefit the descriptive and normative goals of this field of research. After highlighting some aspects of the emergence of contemporary argumentation theory, emphasizing the idea of “perspectives”, I explain how the recognition of its aims and tasks made the coexistence of several approaches to the study of argumentative reality problematic. I suggest some explanatory hypotheses for the abandonment of the logical approach in favour of other alternatives, and I then offer reasons for directing the systematic study of argumentation through this approach. Beyond the normative considerations and descriptive tools that can be adduced in its favour, I argue that the logical approach can provide a kind of integration to this field of research in ways that are not available to other approaches.
After long arguments between positivism and falsificationism, the verification of universal hypotheses was replaced with the confirmation of uncertain major premises. Unfortunately, Hempel proposed the Raven Paradox. Then, Carnap used the increment of logical probability as the confirmation measure. So far, many confirmation measures have been proposed. Measure F, proposed by Kemeny and Oppenheim, possesses the symmetries and asymmetries proposed by Eells and Fitelson, the monotonicity proposed by Greco et al., and the normalizing property suggested by many researchers. Based on the semantic information theory, a measure b* similar to F is derived from the medical test. Like the likelihood ratio, measures b* and F can only indicate the quality of channels or testing means, not the quality of probability predictions. Furthermore, it is still not easy to use b*, F, or another measure to clarify the Raven Paradox. For this reason, measure c*, similar to the correct rate, is derived. Measure c* supports the Nicod Criterion and undermines the Equivalence Condition, and hence can be used to eliminate the Raven Paradox. An example indicates that measures F and b* are helpful for diagnosing infection with the novel coronavirus, whereas most popular confirmation measures are not. Another example reveals that no popular confirmation measure can explain why a black raven confirms “Ravens are black” more strongly than a piece of chalk does. Measures F, b*, and c* indicate that the existence of fewer counterexamples is more important than the existence of more positive examples, and hence are compatible with Popper’s falsification thought.
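For orientation, the Kemeny–Oppenheim measure named above has the standard form (H a hypothesis, E evidence):

```latex
% The Kemeny--Oppenheim confirmation measure, in its standard form:
\[
  F(H, E) \;=\;
  \frac{P(E \mid H) - P(E \mid \neg H)}
       {P(E \mid H) + P(E \mid \neg H)},
\]
% a normalisation of the likelihood ratio $P(E \mid H)/P(E \mid \neg H)$
% that takes values in $[-1, 1]$.
```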
As an application of his Material Theory of Induction, Norton (2018; manuscript) argues that the correct inductive logic for a fair infinite lottery, and also for evaluating eternal inflation multiverse models, is radically different from standard probability theory. This is due to a requirement of label independence. It follows, Norton argues, that finite additivity fails, and any two sets of outcomes with the same cardinality and co-cardinality have the same chance. This makes the logic useless for evaluating multiverse models based on self-locating chances, so Norton claims that we should despair of such attempts. However, his negative results depend on a certain reification of chance, consisting in the treatment of inductive support as the value of a function, a value not itself affected by relabeling. Here we define a purely comparative infinite lottery logic, where there are no primitive chances but only a relation of ‘at most as likely’ and its derivatives. This logic satisfies both label independence and a comparative version of additivity as well as several other desirable properties, and it draws finer distinctions between events than Norton's. Consequently, it yields better advice about choosing between sets of lottery tickets than Norton's, but it does not appear to be any more helpful for evaluating multiverse models. Hence, the limitations of Norton's logic are not entirely due to the failure of additivity, nor to the fact that all infinite, co-infinite sets of outcomes have the same chance, but to a more fundamental problem: We have no well-motivated way of comparing disjoint countably infinite sets.
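A comparative version of additivity is standardly of the following de Finetti-style shape; this rendering is our assumption, not necessarily the authors' exact axiom:

```latex
% A de Finetti-style comparative additivity axiom (rendering assumed,
% not necessarily the authors' exact formulation), with $\preceq$ read
% as ``at most as likely as'':
\[
  \text{If } A \cap C = \emptyset \text{ and } B \cap C = \emptyset,
  \quad\text{then}\quad
  A \preceq B \;\Longleftrightarrow\; A \cup C \preceq B \cup C.
\]
```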
In the Paradox of the Ravens, a set of otherwise intuitive claims about evidence seems to be inconsistent. Most attempts at answering the paradox involve rejecting a member of the set, which seems to require a conflict either with commonsense intuitions or with some of our best confirmation theories. In contrast, I argue that the appearance of an inconsistency is misleading: ‘confirms’ and cognate terms feature a significant ambiguity when applied to universal generalisations. In particular, the claim that some evidence confirms a universal generalisation ordinarily suggests, in part, that the evidence confirms the reliability of predicting that something which satisfies the antecedent will also satisfy the consequent. I distinguish between the familiar relation of confirmation simpliciter and what I shall call ‘predictive confirmation’. I use them to formulate my answer, illustrate it in a very simple probabilistic model, and defend it against objections. I conclude that, once our evidential concepts are sufficiently clarified, there is no sense in which the initial claims are both plausible and inconsistent.
In this review essay of Jan Sprenger and Stephan Hartmann's new book Bayesian Philosophy of Science (2019), I discuss the objectivity of Bayesianism, its implications for the scientific realism debates, and the extent to which the authors have succeeded in formalising Karl Popper's concept of corroboration.
Resolving the main problem of quantum mechanics, namely how a quantum leap and a smooth motion can be uniformly described, also resolves the problem of how a distribution of reliable data and a sequence of deductive conclusions can be uniformly described, by means of a relevant wave function “Ψdata”.
The inquiry into the possibility of a rational grounding for inductive reasoning, the method of inference that takes us from the observed to the unobserved (in other words, to general laws), has appeared throughout history as the problem of induction. Historically, the core argument of this problem was put forward by the Scottish empiricist philosopher David Hume. Hume investigates on what grounds we arrive at our beliefs about unobserved matters on the basis of inductive inferences. At the end of his inquiry, Hume states that all inductive reasoning about matters of fact, proceeding from the observed to the unobserved, rests directly or indirectly on the relation of causation and, at the basis of this relation, on the principle of the uniformity of nature, the proposition that “the future will always resemble the past”; and he concludes that this statement, common to all inductive reasoning, cannot itself be rationally justified. In this context, the study reconstructs Hume's view that there can be no rational basis for believing the conclusion of an inductive inference, presenting it in argument form.
This book is the result of rethinking the standard playbook for critical thinking courses, to include only the most useful skills from the toolkits of philosophy, cognitive psychology, and behavioral economics.

The text focuses on:
- a mindset that avoids systematic error, more than the ability to persuade others
- the logic of probability and decisions, more than the logic of deductive arguments
- a unified treatment of evidence, covering statistical, causal, and best-explanation inferences

The unified account of evidence I offer is a broadly Bayesian one, but there aren’t any daunting theorems. (Without knowing it, students are taught to use a gentle form of the Bayes factor to measure the strength of evidence and to update.) It’s also shown how this framework illuminates aspects of the scientific method, such as the proper design of experiments.

The link above allows Anonymous Guest Access without an account.
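For reference, the odds form of Bayes' theorem behind that "gentle form of the Bayes factor" is the textbook-standard one (our rendering, not a quotation from the book):

```latex
% The Bayes factor and odds-form updating (textbook-standard rendering):
\[
  \mathrm{BF} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)},
  \qquad
  \underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
  \times\; \mathrm{BF}.
\]
```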
Efforts to formalize qualitative accounts of inference to the best explanation (IBE) confront two obstacles: the imprecise nature of such accounts and the unusual logical properties that explanations exhibit, such as contradiction-intolerance and irreflexivity. This paper aims to surmount these challenges by utilising a new, more precise theory that treats explanations as expressions that codify defeasible inferences. To formalise this account, we provide a sequent calculus in which IBE serves as an elimination rule for a connective that exhibits many of the properties associated with the behaviour of the English expression ‘That... best explains why ... ’. We first construct a calculus that encodes these properties at the level of the turnstile, i.e. as a metalinguistic expression for classes of defeasible consequence relations. We then show how this calculus can be conservatively extended over a language that contains a best-explains-why operator.
A complete calculus of inductive inference captures the totality of facts about inductive support within some domain of propositions as relations or theorems within the calculus. It is demonstrated that there can be no complete, non-trivial calculus of inductive inference.
Philosophers such as Goodman, Scheffler and Glymour aim to answer the Paradox of the Ravens by distinguishing between confirmation simpliciter and selective confirmation. On the latter notion, the evidence both supports a hypothesis and undermines one of its "rivals". In this article, I argue that while selective confirmation does seem to be an important scientific notion, no attempt to formalise it thus far has managed to solve the Paradox of the Ravens.
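As a first-pass gloss of the target notion (our gloss, not one of the formalisations criticised in the article): evidence E selectively confirms H relative to a rival H' when it raises the probability of H while lowering that of H'.

```latex
% A first-pass gloss on selective confirmation (ours, not one of the
% formalisations criticised in the article):
\[
  P(H \mid E) \;>\; P(H)
  \qquad\text{and}\qquad
  P(H' \mid E) \;<\; P(H').
\]
```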
This textbook is not a textbook in the traditional sense. Here, what we have attempted is to compile a set of assignments and exercises that may be used in critical thinking courses. To that end, we have tried to make these assignments as diverse as possible while leaving flexibility in their application within the classroom. Of course, these assignments and exercises could certainly be used in other classes as well. Our view is that critical thinking courses work best when they are presented as skills-based learning opportunities. We hope that these assignments speak to that desire and can foster the kinds of critical thinking skills that are both engaging and fun. Please feel free to contact us with comments and suggestions. We will strive to correct errors when pointed out, add necessary material, and make other needed changes as they arise. Please check back for the most up-to-date version. Rebeka Ferreira and Anthony Ferrucci.
This work has been selected by scholars as being culturally important, and is part of the knowledge base of civilization as we know it. This work was reproduced from the original artifact, and remains as true to the original work as possible. Therefore, you will see the original copyright references, library stamps (as most of these works have been housed in our most important libraries around the world), and other notations in the work. This work is in the public domain in the United States of America, and possibly other nations. Within the United States, you may freely copy and distribute this work, as no entity (individual or corporate) has a copyright on the body of the work. As a reproduction of a historical artifact, this work may contain missing or blurred pages, poor pictures, errant marks, etc. Scholars believe, and we concur, that this work is important enough to be preserved, reproduced, and made generally available to the public. We appreciate your support of the preservation process, and thank you for being an important part of keeping this knowledge alive and relevant.
Many philosophers argue that Keynes’s concept of the “weight of arguments” is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of “Evidential Probability” to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness of this measure by employing it to develop an answer to Popper’s Paradox of Ideal Evidence.
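One way to picture the idea, purely as our illustration rather than Kyburg's official definition: if the evidence pins the probability of A down only to an interval, weight can be tracked by how narrow that interval is.

```latex
% Illustration only, not Kyburg's official measure: if evidence E fixes
% the probability of A to the interval [l, u], weight might be tracked by
\[
  w(A \mid E) \;=\; 1 - (u - l),
\]
% so that maximal imprecision ($[0,1]$) gives weight $0$ and a point-valued
% probability gives weight $1$.
```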
Inductive Logic is a ‘thematic compilation’ by Avi Sion. It collects in one volume many (though not all) of the essays that he has written on this subject over a period of some 23 years, all of which demonstrate the possibility and conditions of validity of human knowledge, the utility and reliability of human cognitive means when properly used, contrary to the skeptical assumptions that are nowadays fashionable. A new essay, The Logic of Analogy, was added in 2022.
In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question into the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automated data-driven science. I investigate, in particular, a proposed definition of a universal prediction method that goes back to Solomonoff and Levin. This definition marks the birth of the theory of Kolmogorov complexity, and has a direct line to the information-theoretic approach in modern machine learning. Solomonoff's work was inspired by Carnap's program of inductive logic, and the more precise definition due to Levin can be seen as an explicit attempt to escape the diagonal argument that Putnam famously launched against the feasibility of Carnap's program. The Solomonoff-Levin definition essentially aims at a mixture of all possible prediction algorithms. An alternative interpretation is that the definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation and even as a justification of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the Solomonoff-Levin definition fails to unite two necessary conditions to count as a universal prediction method, as turns out to be entailed by Putnam's original argument after all; and I argue that this indeed shows that no definition can. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is already problematic itself.
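For orientation, the Solomonoff–Levin mixture is standardly rendered as follows (a textbook formulation; details such as the choice of universal monotone machine U vary across presentations):

```latex
% Standard rendering of the Solomonoff--Levin a priori semimeasure, for a
% universal monotone machine $U$ (details vary across presentations):
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|},
  \qquad
  M(1 \mid x) \;=\; \frac{M(x1)}{M(x)},
\]
% where the sum ranges over minimal programs $p$ whose output begins with
% the string $x$, and $M(1 \mid x)$ is the induced next-bit prediction.
```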
Working within the broad lines of general consensus that mark out the core features of John Stuart Mill’s (1806–1873) logic, as set forth in his A System of Logic (1843–1872), this chapter provides an introduction to Mill’s logical theory by reviewing his position on the relationship between induction and deduction, and the role of general premises and principles in reasoning. Locating induction, understood as a kind of analogical reasoning from particulars to particulars, as the basic form of inference that is both free-standing and the sole load-bearing structure in Mill’s logic, the foundations of Mill’s logical system are briefly inspected. Several naturalistic features are identified, including its subject matter, human reasoning, its empiricism, which requires that only particular, experiential claims can function as basic reasons, and its ultimate foundations in ‘spontaneous’ inference. The chapter concludes by comparing Mill’s naturalized logic to Russell’s (1907) regressive method for identifying the premises of mathematics.
In this paper we compare Leitgeb’s stability theory of belief and Spohn’s ranking-theoretic account of belief. We discuss the two theories as solutions to the lottery paradox. To compare the two theories, we introduce a novel translation between ranking functions and probability functions. We draw some crucial consequences from this translation, in particular a new probabilistic belief notion. Based on this, we explore the logical relation between the two belief theories, showing that models of Leitgeb’s theory correspond to certain models of Spohn’s theory. The reverse is not true. Finally, we discuss how these results raise new questions in belief theory. In particular, we raise the question whether stability is rightly thought of as a property pertaining to belief.
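As background for readers new to ranking theory, a familiar bridge between the two formalisms (not necessarily the novel translation introduced in the paper) treats ranks as orders of magnitude of a small parameter:

```latex
% A familiar ranks-as-orders-of-magnitude bridge (not necessarily the
% paper's novel translation): for a ranking function $\kappa$ and small
% $\varepsilon > 0$, set
\[
  P_\varepsilon(w) \;\propto\; \varepsilon^{\kappa(w)},
  \qquad
  \kappa(A) \;=\; \min_{w \in A} \kappa(w),
\]
% so that more disbelieved worlds (higher rank) receive probabilities of
% smaller order of magnitude.
```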
This paper has three interdependent aims. The first is to make Reichenbach’s views on induction and probabilities clearer, especially as they pertain to his pragmatic justification of induction. The second aim is to show how his view of pragmatic justification arises out of his commitment to extensional empiricism and moots the possibility of a non-pragmatic justification of induction. Finally, and most importantly, a formal decision-theoretic account of Reichenbach’s pragmatic justification is offered in terms both of the minimax principle and the dominance principle.
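The dominance-style reconstruction can be pictured with the classic two-by-two schema (our schematic presentation, not a quotation from the paper):

```latex
% Schematic dominance reconstruction of Reichenbach's pragmatic argument
% (our presentation, not quoted from the paper):
\[
\begin{array}{l|cc}
  & \text{limit frequency exists} & \text{no limit exists} \\ \hline
  \text{posit (induct)} & \text{success} & \text{failure} \\
  \text{do not posit}   & \text{success or failure} & \text{failure}
\end{array}
\]
% Positing weakly dominates: induction succeeds whenever success is
% possible at all, and loses nothing when it is not.
```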
Many theorists have proposed that we can use the principle of indifference to defeat the inductive sceptic. But any such theorist must confront the objection that different ways of applying the principle of indifference lead to incompatible probability assignments. Huemer offers the explanatory priority proviso as a strategy for overcoming this objection. With this proposal, Huemer claims that we can defend induction in a way that is not question-begging against the sceptic. But in this article, I argue that the opposite is true: if anything, Huemer’s use of the principle of indifference supports the rationality of inductive scepticism.
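The incompatibility objection can be seen in a textbook example (ours, not Huemer's): indifference over a quantity and indifference over its square disagree.

```latex
% Textbook partition-dependence example (ours, not Huemer's): a factory
% produces cubes with side length $s \in [0,1]$. Indifference over $s$ gives
\[
  P\!\left(s \le \tfrac{1}{2}\right) = \tfrac{1}{2},
\]
% while indifference over the face area $a = s^2 \in [0,1]$ gives
\[
  P\!\left(s \le \tfrac{1}{2}\right) = P\!\left(a \le \tfrac{1}{4}\right) = \tfrac{1}{4},
\]
% two incompatible assignments drawn from the same state of ignorance.
```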
Logic is a field studied mainly by researchers and students of philosophy, mathematics and computing. Inductive logic seeks to determine the extent to which the premises of an argument entail its conclusion, aiming to provide a theory of how one should reason in the face of uncertainty. It has applications to decision making and artificial intelligence, as well as how scientists should reason when not in possession of the full facts. In this work, Jon Williamson embarks on a quest to find a general, reasonable, applicable inductive logic (GRAIL), all the while examining why pioneers such as Ludwig Wittgenstein and Rudolf Carnap did not entirely succeed in this task.
We investigate the relative probabilistic support afforded by the combination of two analogies based on possibly different structural similarities (as opposed to, e.g., shared predicates) within the context of Pure Inductive Logic and under the assumption of Language Invariance. We show that whilst repeated analogies grounded on the same structural similarity only strengthen the probabilistic support, this need not be the case when combining analogies based on different structural similarities. That is, two analogies may provide less support than each would individually.
In previous work, we studied four well known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems.
In several papers, John Norton has argued that Bayesianism cannot handle ignorance adequately due to its inability to distinguish between neutral and disconfirming evidence. He argued that this inability sows confusion in, e.g., anthropic reasoning in cosmology or the Doomsday argument, by allowing one to draw unwarranted conclusions from a lack of knowledge. Norton has suggested criteria that a candidate representation of neutral support should satisfy. Imprecise credences (families of credal probability functions) constitute a Bayesian-friendly framework that allows us to avoid inadequate neutral priors and better handle ignorance. The imprecise model generally agrees with Norton's representation of ignorance but requires that his criterion of self-duality be reformulated or abandoned.
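The core distinction can be made concrete in a few lines. The sketch below is our illustration, not Norton's formalism or the paper's: an imprecise credal state is a set of probability functions, and the spread of values across the set separates ignorance from disbelief.

```python
# Our illustration (not Norton's formalism or the paper's): an imprecise
# credal state is a set of probability functions; a wide interval for H
# represents neutral support, a uniformly low interval represents
# disconfirmation.

def interval(credal_set, hypothesis):
    """Lower and upper probability of a hypothesis across a credal set."""
    values = [p[hypothesis] for p in credal_set]
    return min(values), max(values)

# Neutral state: nearly every sharp credence in (0, 1) is permitted.
neutral = [{"H": x / 100} for x in range(1, 100)]
# Disconfirmed state: every member of the family assigns H a low probability.
disconfirmed = [{"H": x / 100} for x in range(1, 11)]

print(interval(neutral, "H"))       # (0.01, 0.99): ignorance, not disbelief
print(interval(disconfirmed, "H"))  # (0.01, 0.1): genuinely negative evidence
```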
John Norton has proposed a “material theory of induction” that denies the existence of a universal inductive inference schema behind scientific reasoning. In this vein, Norton has recently presented a “dome scenario” based on Newtonian physics that, in his understanding, is at variance with Bayesianism. The present note points out that a closer analysis of the dome scenario reveals incompatibilities with material inductivism itself.