Much of our information comes to us indirectly, in the form of conclusions others have drawn from evidence they gathered. When we hear these conclusions, how can we modify our own opinions so as to gain the benefit of their evidence? In this paper we study the method known as geometric pooling. We consider two arguments in its favour, raising several objections to one, and proposing an amendment to the other.
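As a rough illustration of the method the abstract names (the propositions, numbers, and weights below are hypothetical, not drawn from the paper), geometric pooling combines credence functions by taking a weighted geometric mean and renormalizing:

```python
# Illustrative sketch of geometric pooling: weighted geometric mean of
# the agents' credences, renormalized to sum to 1. All inputs are
# made-up examples.

def geometric_pool(credences, weights):
    """Pool a list of credence functions (dicts over the same exclusive
    hypotheses) by weighted geometric mean, then renormalize."""
    pooled = {}
    for h in credences[0]:
        value = 1.0
        for c, w in zip(credences, weights):
            value *= c[h] ** w
        pooled[h] = value
    total = sum(pooled.values())
    return {h: v / total for h, v in pooled.items()}

# Two agents' credences over three exclusive hypotheses:
you  = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
peer = {"H1": 0.2, "H2": 0.5, "H3": 0.3}
pooled = geometric_pool([you, peer], [0.5, 0.5])
```

With equal weights the pooled credence favors hypotheses on which the two agents jointly place the most multiplicative weight, here H2.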
In a series of papers over the past twenty years, and in a new book, Igor Douven has argued that Bayesians are too quick to reject versions of inference to the best explanation that cannot be accommodated within their framework. In this paper, I survey these worries and attempt to answer them using a series of pragmatic and purely epistemic arguments that I take to show that Bayes’ Rule really is the only rational way to respond to your evidence.
This paper develops a trivalent semantics for indicative conditionals and extends it to a probabilistic theory of valid inference and inductive learning with conditionals. On this account, (i) all complex conditionals can be rephrased as simple conditionals, connecting our account to Adams's theory of p-valid inference; (ii) we obtain Stalnaker's Thesis as a theorem while avoiding the well-known triviality results; (iii) we generalize Bayesian conditionalization to an updating principle for conditional sentences. The final result is a unified semantic and probabilistic theory of conditionals with attractive results and predictions.
IBE ('Inference to the best explanation' or abduction) is a popular and highly plausible theory of how we should judge the evidence for claims of past events based on present evidence. It has been notably developed and supported recently by Meyer following Lipton. I believe this theory is essentially correct. This paper supports IBE from a probability perspective, and argues that the retrodictive probabilities involved in such inferences should be analysed in terms of predictive probabilities and a priori probability ratios of initial events. The key point is to separate these two features. Disagreements over evidence can be traced to disagreements over either the a priori probability ratios or predictive conditional ratios. In many cases, in real science, judgements of the former are necessarily subjective. The principles of iterated evidence are also discussed. The Sceptic's position is criticised as ignoring iteration of evidence, and characteristically failing to adjust a priori probability ratios in response to empirical evidence.
It is well known that de se (or ‘self-locating’) propositions complicate the standard picture of how we should respond to evidence. This has given rise to a substantial literature centered around puzzles like Sleeping Beauty, Dr. Evil, and Doomsday—and it has also sparked controversy over a style of argument that has recently been adopted by theoretical cosmologists. These discussions often dwell on intuitions about a single kind of case, but it’s worth seeking a rule that can unify our treatment of all evidence, whether de dicto or de se.

This paper is about three candidates for such a rule, presented as replacements for the standard updating rule. Each rule stems from the idea that we should treat ourselves as a random sample, a heuristic that underlies many of the intuitions that have been pumped in treatments of the standard puzzles. But each rule also yields some strange results when applied across the board. This leaves us with some difficult options. We can seek another way to refine the random-sample heuristic, e.g. by restricting one of our rules. We can try to live with the strange results, perhaps granting that useful principles can fail at the margins. Or we can reject the random-sample heuristic as fatally flawed—which means rethinking its influence in even the simplest cases.
A large number of essays address the Sleeping Beauty problem, which has been claimed to undermine the validity of Bayesian inference and Bas van Fraassen's 'Reflection Principle'. In this study a straightforward analysis of the problem based on probability theory is presented. The key difference from previous works is that, apart from the random experiment imposed by the problem's description, a different one is also considered, in order to dispel the confusion about the conditional probabilities involved. The results of the analysis indicate that no inconsistency takes place, and that both Bayesian inference and the 'Reflection Principle' remain valid.
Dawid, DeGroot and Mortera showed, a quarter century ago, that any agent who regards a fellow agent as a peer--in particular, defers to the fellow agent's prior credences in the same way that she defers to her own--and updates by split-the-difference is prone to diachronic incoherence. On the other hand, one may show that there are special scenarios in which Bayesian updating approximates difference splitting, so it is an important question whether difference splitting remains a viable response to "generic" peer update. We look at arguments by two teams of philosophers against difference splitting.
The goal of a partial belief is to be accurate, or close to the truth. By appealing to this norm, I seek norms for partial beliefs in self-locating and non-self-locating propositions. My aim is to find norms that are analogous to the Bayesian norms, which, I argue, only apply unproblematically to partial beliefs in non-self-locating propositions. I argue that the goal of a set of partial beliefs is to minimize the expected inaccuracy of those beliefs. However, in the self-locating framework, there are two equally legitimate definitions of expected inaccuracy. And, while each gives rise to the same synchronic norm for partial beliefs, they give rise to different, inconsistent diachronic norms. I conclude that both norms are rationally permissible. En passant, I note that this entails that both Halfer and Thirder solutions to the well-known Sleeping Beauty puzzle are rationally permissible.
In this paper, I provide an accuracy-based argument for conditionalization (via reflection) that does not rely on norms of maximizing expected accuracy.

(This is a draft of a paper that I wrote in 2013. It stalled for no very good reason. I still believe the content is right.)
We often learn the opinions of others without hearing the evidence on which they're based. The orthodox Bayesian response is to treat the reported opinion as evidence itself and update on it by conditionalizing. But sometimes this isn't feasible. In these situations, a simpler way of combining one's existing opinion with opinions reported by others would be useful, especially if it yields the same results as conditionalization. We will show that one method---upco, also known as multiplicative pooling---is specially suited to this role when the opinions you wish to pool concern hypotheses about chances. The result has interesting consequences: it addresses the problem of disagreement between experts; and it sheds light on the social argument for the uniqueness thesis.
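A minimal sketch of the rule this abstract calls upco, or multiplicative pooling, under assumptions of my own: two agents' credences over the same exclusive chance hypotheses are multiplied pointwise and renormalized. The hypotheses and numbers are invented for illustration.

```python
# Upco (multiplicative pooling): multiply the two credence functions
# pointwise over exclusive chance hypotheses, then renormalize.

def upco(c1, c2):
    product = {h: c1[h] * c2[h] for h in c1}
    total = sum(product.values())
    return {h: v / total for h, v in product.items()}

# Credences over hypotheses about a coin's chance of heads:
mine  = {"chance=0.3": 0.2, "chance=0.5": 0.6, "chance=0.7": 0.2}
yours = {"chance=0.3": 0.1, "chance=0.5": 0.3, "chance=0.7": 0.6}
pooled = upco(mine, yours)
```

One design point worth noting: because the rule multiplies, a hypothesis either agent assigns credence 0 ends up with pooled credence 0, which is one way its kinship with conditionalization shows up.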
The proportional weight view in epistemology of disagreement generalizes the equal weight view and proposes that we assign to judgments of different people weights that are proportional to their epistemic qualifications. It is shown that if the resulting degrees of confidence are to constitute a probability function, they must be the weighted arithmetic means of individual degrees of confidence, while if the resulting degrees of confidence are to obey the Bayesian rule of conditionalization, they must be the weighted geometric means of individual degrees of confidence. The double bind entails that the proportional weight view (and its moderate adjustment in favor of one’s own judgment) is inconsistent with Bayesianism.
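The two pooling rules the abstract contrasts can be sketched side by side. This is a toy illustration with invented credences and weights, not the paper's own formalism; the geometric rule is renormalized so that the result is a probability function.

```python
# Weighted arithmetic means (linear pooling) vs weighted geometric
# means (geometric pooling, renormalized). Inputs are hypothetical.

def linear_pool(creds, weights):
    return {h: sum(w * c[h] for c, w in zip(creds, weights))
            for h in creds[0]}

def geometric_pool(creds, weights):
    raw = {h: 1.0 for h in creds[0]}
    for c, w in zip(creds, weights):
        for h in raw:
            raw[h] *= c[h] ** w
    z = sum(raw.values())
    return {h: v / z for h, v in raw.items()}

creds = [{"A": 0.8, "B": 0.2}, {"A": 0.4, "B": 0.6}]
weights = [0.7, 0.3]   # proportional to epistemic qualifications (made up)
lin = linear_pool(creds, weights)
geo = geometric_pool(creds, weights)
```

On these numbers the two rules already come apart: the geometric pool assigns A slightly more than the arithmetic mean does, which is the kind of divergence that drives the incompatibility result the abstract reports.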
Recently, some have challenged the idea that there are genuine norms of diachronic rationality. Part of this challenge has involved offering replacements for diachronic principles. Skeptics about diachronic rationality believe that we can provide an error theory for it by appealing to synchronic updating rules that, over time, mimic the behavior of diachronic norms. In this paper, I argue that the most promising attempts to develop this position within the Bayesian framework are unsuccessful. I sketch a new synchronic surrogate that draws upon some of the features of each of these earlier attempts. At the heart of this discussion is the question of what exactly it means to say that one norm is a surrogate for another. I argue that surrogacy, in the given context, can be taken as a proxy for the degree to which formal and traditional epistemology can be made compatible.
Knowledge-first evidentialism combines the view that it is rational to believe what is supported by one's evidence with the view that one's evidence is what one knows. While there is much to be said for the view, it is widely perceived to fail in the face of cases of reasonable error—particularly extreme ones like new Evil Demon scenarios (Wedgwood, 2002). One reply has been to say that even in such cases what one knows supports the target rational belief (Lord, 201x, this volume). I spell out two versions of the strategy. The direct one uses what one knows as the input to principles of rationality such as conditionalization, dominance avoidance, etc. I argue that it fails in hybrid cases that are Good with respect to one belief and Bad with respect to another. The indirect strategy uses what one knows to determine a body of supported propositions that is in turn the input to principles of rationality. I sketch a simple formal implementation of the indirect strategy and show that it avoids the difficulty. I conclude that the indirect strategy offers the most promising way for knowledge-first evidentialists to deal with the New Evil Demon problem.
This paper explores the consequences of applying two natural ideas from epistemology to decision theory: (1) that knowledge should guide our actions, and (2) that we know a lot of non-trivial things. In particular, we explore the consequences of these ideas as they are applied to standard decision theoretic puzzles such as the St. Petersburg Paradox. In doing so, we develop a “knowledge-first” decision theory and we will see how it can help us avoid fanaticism with regard to the St. Petersburg puzzle and related puzzles. The result will be a decision theory that gives a novel, but well-motivated, reason for discounting small probabilities when making decisions. We examine the merits and demerits of such a decision theory.
In “Generalized Conditionalization and the Sleeping Beauty Problem,” Anna Mahtani and I offer a new argument for thirdism that relies on what we call “generalized conditionalization.” Generalized conditionalization goes beyond conventional conditionalization in two respects: first, by sometimes deploying a space of synchronic, essentially temporal, candidate-possibilities that are not “prior” possibilities; and second, by allowing for the use of preliminary probabilities that arise by first bracketing, and then conditionalizing upon, “old evidence.” In “Beauty and Conditionalization: Reply to Horgan and Mahtani,” Joel Pust replies to the Horgan/Mahtani argument, raising several objections. In my view his objections do not undermine the argument, but they do reveal a need to provide several further elaborations of it—elaborations that I think are independently plausible. In this paper I will address his objections, by providing the elaborations that I think they prompt.
Our main aim in this paper is to discuss and criticise the core thesis of a position that has become known as phenomenal conservatism. According to this thesis, its seeming to one that p provides enough justification for a belief in p to be prima facie justified (a thesis we label Standard Phenomenal Conservatism). This thesis captures the special kind of epistemic import that seemings are claimed to have. To get clearer on this thesis, we embed it, first, in a probabilistic framework in which updating on new evidence happens by Bayesian conditionalization, and, second, in a framework in which updating happens by Jeffrey conditionalization. We spell out problems for both views, and then generalize some of these to non-probabilistic frameworks. The main theme of our discussion is that the epistemic import of a seeming (or experience) should depend on its content in a plethora of ways that phenomenal conservatism is insensitive to.
A vexing question in Bayesian epistemology is how an agent should update on evidence which she assigned zero prior credence. Some theorists have suggested that, in such cases, the agent should update by Kolmogorov conditionalization, a norm based on Kolmogorov’s theory of regular conditional distributions. However, it turns out that in some situations, a Kolmogorov conditionalizer will plan to always assign a posterior credence of zero to the evidence she learns. Intuitively, such a plan is irrational and easily Dutch bookable. In this paper, we propose a revised norm, Kolmogorov-Blackwell conditionalization, which avoids this problem. We prove a Dutch book theorem and converse Dutch book theorem for this revised norm, and relate our results to those of Rescorla (2018).
Is more information always better? Or are there some situations in which more information can make us worse off? Good (1967) argues that expected utility maximizers should always accept more information if the information is cost-free and relevant. But Good's argument presupposes that you are certain you will update by conditionalization. If we relax this assumption and allow agents to be uncertain about updating, these agents can be rationally required to reject free and relevant information. Since there are good reasons to be uncertain about updating, rationality can require you to prefer ignorance.
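Good's (1967) point for the certain-conditionalizer case can be checked on a toy decision problem. Everything here (states, acts, utilities, likelihoods) is an invented example, not the paper's own case: for an agent who will conditionalize, deciding after free, relevant evidence never has lower expected utility than deciding now.

```python
# Toy numeric check of Good's inequality for a conditionalizer:
# two states, two acts, one binary test. All numbers are made up.

prior_s1 = 0.5                       # prior for state s1
utility = {("act1", "s1"): 10, ("act1", "s2"): 0,
           ("act2", "s1"): 0,  ("act2", "s2"): 8}
likelihood = {"s1": 0.8, "s2": 0.3}  # P(positive test | state)

def exp_utility(act, p_s1):
    return p_s1 * utility[(act, "s1")] + (1 - p_s1) * utility[(act, "s2")]

def best(p_s1):
    return max(exp_utility("act1", p_s1), exp_utility("act2", p_s1))

# Value of deciding now:
value_now = best(prior_s1)

# Value of deciding after conditionalizing on the test result:
p_pos = prior_s1 * likelihood["s1"] + (1 - prior_s1) * likelihood["s2"]
post_pos = prior_s1 * likelihood["s1"] / p_pos
post_neg = prior_s1 * (1 - likelihood["s1"]) / (1 - p_pos)
value_later = p_pos * best(post_pos) + (1 - p_pos) * best(post_neg)

assert value_later >= value_now      # Good's inequality holds here
```

The paper's claim is precisely that this guarantee can fail once the agent is uncertain whether she will in fact conditionalize, a complication the toy model above deliberately leaves out.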
Epistemologists who study credences have a well-developed account of how you should change them when you learn new evidence; that is, when your body of evidence grows. What's more, they boast a diverse range of epistemic and pragmatic arguments that support that account. But they do not have a satisfactory account of when and how you should change your credences when you become aware of possibilities and propositions you have not entertained before; that is, when your awareness grows. In this paper, I consider the arguments for the credal epistemologist's account of how to respond to evidence, and I ask whether they can help us generate an account of how to respond to awareness growth. The results are surprising: the arguments that all support the same norms for responding to evidence growth support a number of different norms when they are applied to awareness growth. Some of these norms seem too weak, others too strong. I ask what we should conclude from this, and argue that our credal response to awareness growth is considerably less rigorously constrained than our credal response to new evidence.
Standard arguments for Bayesian conditionalizing rely on assumptions that many epistemologists have criticized as being too strong: (i) that conditionalizers must be logically infallible, which rules out the possibility of rational logical learning, and (ii) that what is learned with certainty must be true (factivity). In this paper, we give a new factivity-free argument for the superconditionalization norm in a personal possibility framework that allows agents to learn empirical and logical falsehoods. We then discuss how the resulting framework should be interpreted. Does it still model norms of rationality, or something else, or nothing useful at all? We discuss five ways of interpreting our results, three that embrace them and two that reject them. We find one of each kind wanting, and leave readers to choose among the remaining three.
Hilary Greaves and David Wallace argue that conditionalization maximizes expected accuracy and so is a rational requirement, but their argument presupposes a particular picture of the bridge between rationality and accuracy: the Best-Plan-to-Follow picture. And theorists such as Miriam Schoenfield and Robert Steel argue that it's possible to motivate an alternative picture—the Best-Plan-to-Make picture—that does not vindicate conditionalization. I show that these theorists are mistaken: it turns out that, if an update procedure maximizes expected accuracy on the Best-Plan-to-Follow picture, it's guaranteed to maximize expected accuracy on the Best-Plan-to-Make picture as well, in which case moving from the former to the latter can't help us avoid the conclusion that conditionalization is a rational requirement. If there's a problem with Greaves and Wallace’s argument, it must lie elsewhere.
The thesis that agents should calibrate their beliefs in the face of higher-order evidence—i.e., should adjust their first-order beliefs in response to evidence suggesting that the reasoning underlying those beliefs is faulty—is sometimes thought to be in tension with Bayesian approaches to belief update: in order to obey Bayesian norms, it's claimed, agents must remain steadfast in the face of higher-order evidence. But I argue that this claim is incorrect. In particular, I motivate a minimal constraint on a reasonable treatment of the evolution of self-locating beliefs over time and show that calibrationism is compatible with any generalized Bayesian approach that respects this constraint. I then use this result to argue that remaining steadfast isn't the response to higher-order evidence that maximizes expected accuracy.
Two of the most influential arguments for Bayesian updating--Hilary Greaves and David Wallace's Accuracy Argument and David Lewis's Diachronic Dutch Book Argument--turn out to impose a strong and surprising limitation on rational uncertainty: that one can never be rationally uncertain of what one's evidence is. Many philosophers reject that claim, and now seem to face a difficult choice: either to endorse the arguments and give up Externalism, or to reject the arguments and lose some of the best justifications of Bayesianism. The author argues that the key to resolving this conflict lies in recognizing that both arguments are plan-based, in that they argue for Conditionalization by first arguing that one should plan to conditionalize. With this in view, we can identify the culprit common to both arguments: for an externalist, they misconceive the requirement to carry out a plan made at an earlier time. They should therefore not persuade us to reject Externalism. Furthermore, rethinking the nature of this requirement allows us to give two new arguments for Conditionalization that do not rule out rational uncertainty about one's evidence and that can thus serve as common ground in the debate between externalists and their opponents.
Supra-Bayesianism is the Bayesian response to learning the opinions of others. Probability pooling constitutes an alternative response. One natural question is whether there are cases where probability pooling gives the supra-Bayesian result. This has been called the problem of Bayes-compatibility for pooling functions. It is known that in a common prior setting, under standard assumptions, linear pooling cannot be nontrivially Bayes-compatible. We show by contrast that geometric pooling can be nontrivially Bayes-compatible. Indeed, we show that, under certain assumptions, geometric and Bayes-compatible pooling are equivalent. Granting supra-Bayesianism its usual normative status, one upshot of our study is thus that, in a certain class of epistemic contexts, geometric pooling enjoys a normative advantage over linear pooling as a social learning mechanism. We discuss the philosophical ramifications of this advantage, which we show to be robust to variations in our statement of the Bayes-compatibility problem.
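One structural property that makes geometric pooling a natural candidate here, and which linear pooling lacks, can be checked numerically: with weights summing to one, pooling and then conditionalizing on shared evidence agrees with conditionalizing each agent first and then pooling (often called external Bayesianity). The setup below is a made-up example, not the paper's own construction.

```python
# Numeric check that geometric pooling commutes with Bayesian updating
# on shared likelihoods. All credences and likelihoods are invented.

def geometric_pool(creds, weights):
    raw = {h: 1.0 for h in creds[0]}
    for c, w in zip(creds, weights):
        for h in raw:
            raw[h] *= c[h] ** w
    z = sum(raw.values())
    return {h: v / z for h, v in raw.items()}

def update(cred, likelihood):
    post = {h: cred[h] * likelihood[h] for h in cred}
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}

creds = [{"H1": 0.6, "H2": 0.4}, {"H1": 0.3, "H2": 0.7}]
weights = [0.5, 0.5]
lik = {"H1": 0.9, "H2": 0.2}   # shared likelihoods for the evidence

pool_then_update = update(geometric_pool(creds, weights), lik)
update_then_pool = geometric_pool([update(c, lik) for c in creds], weights)
# The two orders of operation yield the same posterior.
```

The equality is exact (up to rounding) because, with weights summing to one, the likelihood factors out of the weighted geometric mean.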
There is some consensus on the claim that imagination as suppositional thinking can have epistemic value insofar as it’s constrained by a principle of minimal alteration of how we know or believe reality to be – compatibly with the need to accommodate the supposition initiating the imaginative exercise. But in the philosophy of imagination there is no formally precise account of how exactly such minimal alteration is to work. I propose one. I focus on counterfactual imagination, arguing that this can be modeled as simulated belief revision governed by Laplacian imaging. So understood, it can be rationally justified by accuracy considerations: it minimizes expected belief inaccuracy, as measured by the Brier score.
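The inaccuracy measure this abstract invokes, the Brier score, is easy to state concretely. In the one-proposition toy case below (my own setup, not the paper's), the expected Brier inaccuracy of a candidate credence, computed by the lights of a probability p, is minimized by p itself, which is the propriety fact that accuracy arguments of this kind lean on.

```python
# Brier-score sketch for a single proposition A: inaccuracy is the
# squared distance from the truth-value (1 if A, 0 if not-A).

def expected_brier(x, p):
    """Expected Brier inaccuracy of credence x in A, by the lights of
    probability p for A."""
    return p * (x - 1) ** 2 + (1 - p) * x ** 2

p = 0.7
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda x: expected_brier(x, p))
# `best` comes out at p itself: the Brier score is strictly proper.
```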
Bayesianism can be characterized as the following twofold position: (i) rational credences obey the probability calculus; (ii) rational learning, i.e., the updating of credences, is regulated by some form of conditionalization. While the formal aspect of various forms of conditionalization has been explored in detail, the philosophical application to learning from experience is still deeply problematic. Some philosophers have proposed to revise the epistemology of perception; others have provided new formal accounts of conditionalization that are more in line with how we learn from perceptual experience. The current investigation argues that Bayesian epistemology is still incomplete; the epistemology of perception and the epistemology of rational reasoning have not been reconciled.
We start by presenting three different views that jointly imply that every person has many conscious beings in their immediate vicinity, and that the number greatly varies from person to person. We then present and assess an argument to the conclusion that how confident someone should be in these views should sensitively depend on how massive they happen to be. According to the argument, sometimes irreducibly de se observations can be powerful evidence for or against believing in metaphysical theories.
I argue that when we use ‘probability’ language in epistemic contexts—e.g., when we ask how probable some hypothesis is, given the evidence available to us—we are talking about degrees of support, rather than degrees of belief. The epistemic probability of A given B is the mind-independent degree to which B supports A, not the degree to which someone with B as their evidence believes A, or the degree to which someone would or should believe A if they had B as their evidence. My central argument is that the degree-of-support interpretation lets us better model good reasoning in certain cases involving old evidence. Degree-of-belief interpretations make the wrong predictions not only about whether old evidence confirms new hypotheses, but about the values of the probabilities that enter into Bayes’ Theorem when we calculate the probability of hypotheses conditional on old evidence and new background information.
Bayesian epistemologists support the norms of probabilism and conditionalization using Dutch book and accuracy arguments. These arguments assume that rationality requires agents to maximize practical or epistemic value in every doxastic state, which is evaluated from a subjective point of view (e.g., the agent’s expectancy of value). The accuracy arguments also presuppose that agents are opinionated. The goal of this paper is to discuss the assumptions of these arguments, including the measure of epistemic value. I have designed AI agents based on the Bayesian model and a nonmonotonic framework and tested how they achieve practical and epistemic value in conditions in which an alternative set of assumptions holds. In one of the tested conditions, the nonmonotonic agent, which is not opinionated and fulfills neither probabilism nor conditionalization, outperforms the Bayesian in the measure of epistemic value that I argue for in the paper (α-value). I discuss the consequences of these results for the epistemology of rationality.
This paper is about a tension between two theses. The first is Value of Evidence: roughly, the thesis that it is always rational for an agent to gather and use cost‐free evidence for making decisions. The second is Rationality of Imprecision: the thesis that an agent can be rationally required to adopt doxastic states that are imprecise, i.e., not representable by a single credence function. While others have noticed this tension, I offer a new diagnosis of it. I show that it arises when an agent with an imprecise doxastic state engages in an unreflective inquiry, an inquiry where they revise their beliefs using an updating rule that doesn't satisfy a weak reflection principle. In such an unreflective inquiry, certain synchronic norms of instrumental rationality can make it instrumentally irrational for an agent to gather and use cost‐free evidence. I then go on to propose a diachronic norm of instrumental rationality that preserves Value of Evidence in unreflective inquiries. This, I suggest, may help us reconcile this thesis with Rationality of Imprecision.
In this article, I cast doubt on an apparent truism, namely, that if evidence is available for gathering and use at a negligible cost, then it’s always instrumentally rational for us to gather that evidence and use it for making decisions. Call this ‘value of information’ (VOI). I show that VOI conflicts with two other plausible theses. The first is the view that an agent’s evidence can entail non-trivial propositions about the external world. The second is the view that epistemic rationality requires us to update our credences by conditionalization. These two theses, given some plausible assumptions, make room for rationally biased inquiries where VOI fails. I go on to argue that this is bad news for defenders of VOI.
The standard principle of expert deference says that conditional on the expert’s credence in a proposition A being x, your credence in A ought to be x. The so-called Adams conditionalization is an attractive update rule in situations when learning experience prompts a shift in your conditional credences. In this paper, I show that, except in some trivial situations, when your prior conditional credence in A obeys the standard principle of expert deference and then is revised by Adams conditionalization in response to a shift in your conditional credence for a proposition B given A, your posterior conditional credence in A cannot continue to obey that principle, on pain of inconsistency. I explain why this tension between Adams conditionalization and the standard principle of expert deference is puzzling and why the option of rejecting the update rule appears problematic. Finally, I suggest that in order to avoid this inconsistency, we should abandon the standard principle of expert deference and think of an expert’s probabilistic opinion as a constraint on your posterior credence distribution rather than your prior one.
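For readers unfamiliar with the update rule at issue, here is a toy illustration (numbers mine) of Adams conditionalization on a shift in the conditional credence for B given A: on the usual formulation, the posterior keeps the credence in A and the distribution over the not-A region fixed, and resets P(B | A) to the newly adopted value q.

```python
# Adams conditionalization sketch over four exclusive cells.
# Prior credences and the new conditional value q are invented.

prior = {"A&B": 0.3, "A&~B": 0.3, "~A&B": 0.2, "~A&~B": 0.2}
q = 0.9                              # new conditional credence P(B | A)

p_a = prior["A&B"] + prior["A&~B"]   # credence in A, preserved
post = {
    "A&B":  p_a * q,
    "A&~B": p_a * (1 - q),
    "~A&B":  prior["~A&B"],          # not-A region untouched
    "~A&~B": prior["~A&~B"],
}
```

After the update, the credence in A is unchanged while the conditional credence in B given A equals q, which is exactly the shift the abstract's inconsistency result turns on.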
Recent literature on Stalnaker's Thesis, which seeks to vindicate it from Lewis's (1976) triviality results, has featured linguistic data that are prima facie incompatible with Conditionalization in iterated cases (McGee 1989, 2000; Kaufmann 2015; Khoo & Santorio, 2018). In a recent paper (2021), Goldstein & Santorio make a bold claim: they hold that these departures light the way to a new, non‐conditionalizing theory of rational update. Here, I consider whether this new form of update is subject to a Dutch book. On the official, invariantist version of the theory, I show that the answer is “yes”. On a competing, contextualist theory of indicative conditionals (Bacon, 2015), the answer is “no”, for reasons that have familiar connections to the limits of textbook Bayesianism. After presenting a concrete case, I explore the dialectical ramifications. The upshot is some hard choices for theories that seek to save the linguistic phenomena.
Sometimes you are unreliable at fulfilling your doxastic plans: for example, if you plan to be fully confident in all truths, probably you will end up being fully confident in some falsehoods by mistake. In some cases, there is information that plays the classical role of evidence—your beliefs are perfectly discriminating with respect to some possible facts about the world—and there is a standard expected‐accuracy‐based justification for planning to conditionalize on this evidence. This planning‐oriented justification extends to some cases where you do not have transparent evidence, in the sense that your beliefs are not perfectly discriminating with respect to any non‐trivial facts. In other cases, accuracy considerations do not tell you to plan to conditionalize on any information at all, but rather to plan to follow a different updating rule. Even in the absence of evidence, accuracy considerations can guide your doxastic plan.
Van Fraassen does not merely perform Bayesian conditionalization on his pragmatic theory of scientific explanation; he uses inference to the best explanation (IBE) to justify it, contrary to what Prasetya thinks. Without first using IBE, we cannot carry out Bayesian conditionalization, contrary to what van Fraassen thinks. The argument from a bad lot, which van Fraassen constructs to criticize IBE, backfires on both the pragmatic theory and Bayesian conditionalization, pace van Fraassen and Prasetya.
Rescorla (Erkenntnis, 2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever I become certain of something, it is true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla’s new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla’s version follows as a special case of that. Second, I want to show how to generalise R. A. Briggs and Richard Pettigrew’s Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs and Pettigrew in Noûs, 2018). In both cases, these arguments proceed by first establishing a very general reflection principle.
Acknowledging that many members of the SM3D Portal need reference documents related to Bayesian Mindsponge Framework (BMF) analytics to conduct research projects effectively, we present the essential materials and most up-to-date studies employing the method in this post. By summarizing all the publications and preprints associated with BMF analytics, we also aim to help researchers reduce the time and effort for information seeking, enhance proactive self-learning, and facilitate knowledge exchange and community dialogue through transparency.
Rescorla explores the relation between Reflection, Conditionalization, and Dutch book arguments in the presence of a weakened concept of sure loss and weakened conditions of self‐transparency for doxastic agents. The literatures on Reflection and on Dutch Book arguments, though overlapping, are distinct, and their history illuminates the import of Rescorla's investigation. With examples from a previous debate in the 1970s and results about Reflection and Conditionalization in the 1980s, I propose a way of seeing the epistemic enterprise in the light of practical requirements to be met by demands for synchronic coherence and probability updating policies. This includes a defense of principles rejected by Rescorla, while allowing for the value of his results in the borderland between theories of cognition and formal epistemology.
It is a consequence of the theory of imprecise credences that there exist situations in which rational agents inevitably become less opinionated toward some propositions as they gather more evidence. The fact that an agent's imprecise credal state can dilate in this way is often treated as a strike against the imprecise approach to inductive inference. Here, we show that dilation is not a mere artifact of this approach by demonstrating that opinion loss is countenanced as rational by a substantially broader class of normative theories than has been previously recognised. Specifically, we show that dilation-like phenomena arise even when one abandons the basic assumption that agents have (precise or imprecise) credences of any kind, and follow directly from bedrock norms for rational comparative confidence judgements of the form `I am at least as confident in p as I am in q'. We then use the comparative confidence framework to develop a novel understanding of what exactly gives rise to dilation-like phenomena. By considering opinion loss in this more general setting, we are able to provide a novel assessment of the prospects for an account of inductive inference that is not saddled with the inevitability of rational opinion loss.
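The dilation described above can be made concrete in a small, self-contained model (the setup and numbers here are illustrative, not drawn from the paper): an unknown bit A, over whose chance the agent entertains every prior t in [0, 1], and an independent fair coin C. Before the toss, every prior in the credal set agrees that P(A = C) = 1/2; after learning the toss outcome, the interval dilates to the full unit interval.

```python
# Illustrative dilation example: an unknown bit A (imprecise credence:
# one prior for each chance t in [0, 1]) and an independent fair coin C.
# Q is the proposition "A agrees with C" (A == C).

def credences(t):
    """For the prior with P(A=1) = t and an independent fair coin,
    return (P(Q), P(Q | C=1)) where Q is the event A == C."""
    p_Q = t * 0.5 + (1 - t) * 0.5   # = 0.5 for every t
    p_Q_given_C1 = t                # A independent of C, so P(A=1 | C=1) = t
    return p_Q, p_Q_given_C1

grid = [i / 100 for i in range(101)]                      # credal set: one prior per t
before = {round(credences(t)[0], 10) for t in grid}       # rounded to absorb float noise
after = [credences(t)[1] for t in grid]

print(before)                  # {0.5}: precise before the coin toss
print(min(after), max(after))  # 0.0 1.0: dilates to the full unit interval
```

Gathering the (perfectly ordinary) evidence of the coin toss thus leaves the agent strictly less opinionated about Q, which is the phenomenon at issue.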
Higher-order evidence is evidence about what is rational to think in light of your evidence. Many have argued that it is special – falling into its own evidential category, or leading to deviations from standard rational norms. But it is not. Given standard assumptions, almost all evidence is higher-order evidence.
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Ian Hacking and Roger White influentially argue that it is not. We approach this question through a systematic framework for self-locating epistemology. As it turns out, leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse. This convergence is no accident: we present two theorems showing that, in this setting, any updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, Additivity, and Strict Propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming Additivity.
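The Strict Propriety condition mentioned above can be illustrated with the Brier score, a standard strictly proper inaccuracy measure (the numbers below are illustrative): from the standpoint of a credence p in a single proposition, the expected Brier inaccuracy of a reported credence q is uniquely minimized at q = p.

```python
def brier_expected(p, q):
    """Expected Brier inaccuracy of reporting credence q in one
    proposition, evaluated from the standpoint of credence p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.3
grid = [i / 1000 for i in range(1001)]
# Strict Propriety: the expected-inaccuracy-minimizing report is p itself.
best_q = min(grid, key=lambda q: brier_expected(p, q))
print(best_q)  # 0.3
```

A measure that failed this condition would reward an agent for misreporting her own credences, which is why the accuracy arguments insist on it.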
Lewis proved a Dutch book theorem for Conditionalization. The theorem shows that an agent who follows any credal update rule other than Conditionalization is vulnerable to bets that inflict a sure loss. Lewis’s theorem is tailored to factive formulations of Conditionalization, i.e. formulations on which the conditioning proposition is true. Yet many scientific and philosophical applications of Bayesian decision theory require a non-factive formulation, i.e. a formulation on which the conditioning proposition may be false. I prove a Dutch book theorem tailored to non-factive Conditionalization. I also discuss the theorem’s significance.
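The flavour of Lewis's result can be checked numerically. The sketch below follows the textbook factive diachronic Dutch book (with a small side bet added to make the loss strict in every world), not the non-factive variant proved in the paper; the function name and the parameter values are illustrative.

```python
def sure_loss(pE, pH_given_E, q):
    """Net payoff to an agent who plans to adopt posterior q < p = P(H|E)
    upon learning E, in each way the world can turn out.  At t0 the agent
    buys three bets at her own fair prices; if E occurs, she sells back a
    bet on H at her new price q."""
    p = pH_given_E
    eps = (p - q) / 2                          # side bet on E: makes every loss strict
    cost = p * pE + p * (1 - pE) + eps * pE    # prices of the three t0 bets
    # Bet 1 pays 1 if H & E; bet 2 pays p if not-E; bet 3 pays eps if E.
    return {
        "E & H":     (1 + eps + q - 1) - cost,  # bets 1 and 3 pay; sell H-bet for q; pay out 1
        "E & not-H": (eps + q) - cost,          # bet 3 pays; sell H-bet for q; owe nothing
        "not-E":     p - cost,                  # only bet 2 pays
    }

# A sure loss for a range of priors and planned posteriors with q < P(H|E):
for pE in (0.2, 0.5, 0.9):
    for p, q in ((0.8, 0.5), (0.6, 0.1), (0.9, 0.85)):
        assert all(v < 0 for v in sure_loss(pE, p, q).values())
```

The case q > P(H|E) is symmetric: the bookie reverses the direction of each bet, so any planned deviation from Conditionalization is exploitable.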
This M.A. thesis explores the intricate Problem of Induction, contrasting three seminal approaches: Hume's habit-centric view, Reichenbach's emphasis on the Principle of Uniformity of Nature, and Strawson's belief in the innate rationality of induction. While Hume's perspective lays the groundwork for Kant's a priori and Cleve's a posteriori validation, Reichenbach and Salmon present pragmatic justifications, underscoring the methodological and probabilistic underpinnings of inductive reasoning and specifying epistemological ignorance as a guide for the optimality criteria. Strawson, challenging prevailing notions, posits that induction, anchored by prior probabilities and evidence, is inherently rational, obviating the need for external validation. The study integrates concepts of frequentism, conditionalization, and probabilistic laws to develop a truth-conducive account of induction. The research culminates by confronting the dual challenges of quantitative and profound scepticism, championing a holistic approach. The study thereby aims to enrich the discourse on the epistemic foundations of the Problem of Induction, particularly its implications for scientific inquiry and the laws of nature.
Wenmackers and Romeijn (2016) formalize ideas going back to Shimony (1970) and Putnam (1963) into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem, and offer a forward-looking open-minded Bayesian inductive logic that does preserve a version of this guarantee.
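The consistency guarantee at issue (merger with the true hypothesis) can be illustrated for an ordinary fixed-hypothesis Bayesian learner; the hypothesis set and data stream below are invented for illustration. With three candidate coin biases and data whose relative frequency of heads matches one of them, repeated conditionalization concentrates the posterior on that hypothesis.

```python
# Three coin-bias hypotheses with equal priors, and a deterministic data
# stream whose relative frequency of heads is 0.7, matching the "true" bias.
hypotheses = {0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3}
data = ([1] * 7 + [0] * 3) * 50      # 500 tosses, 70% heads

post = dict(hypotheses)
for x in data:
    for h in post:                   # Bayes update: multiply by the likelihood
        post[h] *= h if x == 1 else 1 - h
    total = sum(post.values())       # renormalize each step (avoids underflow)
    post = {h: v / total for h, v in post.items()}

print(max(post, key=post.get))  # 0.7
print(post[0.7] > 0.999)        # True: the posterior merges with the truth
```

The open-minded logics discussed above complicate this picture precisely because the hypothesis set is not fixed in advance; the paper's point is that naive dynamic expansion can forfeit this convergence.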
An aspect of Peirce’s thought that may still be underappreciated is his resistance to what Levi calls _pedigree epistemology_, to the idea that a central focus in epistemology should be the justification of current beliefs. Somewhat more widely appreciated is his rejection of the subjective view of probability. We argue that Peirce’s criticisms of subjectivism, to the extent they grant such a conception of probability is viable at all, revert back to pedigree epistemology. A thoroughgoing rejection of pedigree in the context of probabilistic epistemology, however, _does_ challenge prominent subjectivist responses to the problem of the priors.
Being a researcher is challenging, especially in the beginning. Early Career Researchers (ECRs) need achievements to secure and expand their careers. In today’s academic landscape, researchers face many pressures: the costs of data collection, the expectation of novelty, demanding analytical skill requirements, lengthy publishing processes, and the overall competitiveness of the career. Innovative thinking and the ability to turn good ideas into good papers are the keys to success.