Much of our information comes to us indirectly, in the form of conclusions others have drawn from evidence they gathered. When we hear these conclusions, how can we modify our own opinions so as to gain the benefit of their evidence? In this paper we study the method known as geometric pooling. We consider two arguments in its favour, raising several objections to one, and proposing an amendment to the other.
Recent proposals that frame norms of action in terms of knowledge have been challenged by Bayesian decision theorists. Bayesians object that knowledge-based norms conflict with the highly successful and established view that rational action is rooted in degrees of belief. I argue that the knowledge-based and Bayesian pictures are not as incompatible as these objectors have made out. Attending to the mechanisms of practical reasoning exposes space for both knowledge and degrees of belief to play their respective roles.
Intellectual progress involves forming a more accurate picture of the world. But it also involves figuring out which concepts to use for theorizing about the world. Bayesian epistemology has had much to say about the former aspect of our cognitive lives, but little about the latter. I outline a framework for formulating questions about conceptual change in a broadly Bayesian framework. By enriching the resources of Epistemic Utility Theory with a more expansive conception of epistemic value, I offer a picture of our cognitive economy on which adopting new conceptual tools can sometimes be epistemically rational.
Think of confirmation in the context of the Ravens Paradox this way. The likelihood ratio measure of incremental confirmation gives us, for an observed Black Raven and for an observed non-Black non-Raven, respectively, the following “full” likelihood ratios.
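For reference, the likelihood ratio measure of incremental confirmation invoked here is standardly defined as follows (this is the textbook form of the measure, not the abstract's own truncated formulas):

```latex
% Likelihood ratio measure of incremental confirmation (standard form):
l(H, E) = \frac{P(E \mid H)}{P(E \mid \neg H)}
% with H the hypothesis that all ravens are black, and E either the
% observation of a black raven or of a non-black non-raven.
```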
Disagreement about how best to think of the relation between theories and the realities they represent has a longstanding and venerable history. We take up this debate in relation to the free energy principle (FEP), a contemporary framework in computational neuroscience, theoretical biology and the philosophy of cognitive science. The FEP is very ambitious, extending from the brain sciences to the biology of self-organisation. In this context, some find apparent discrepancies between the map (the FEP) and the territory (target systems) a compelling reason to defend instrumentalism about the FEP. We take this to be misguided. We identify an important fallacy made by those defending instrumentalism about the FEP. We call it the literalist fallacy: this is the fallacy of inferring the truth of instrumentalism based on the claim that the properties of FEP models do not literally map onto real-world, target systems. We conclude that scientific realism about the FEP is a live and tenable option.
Causal Modeling Semantics (CMS, e.g., Galles and Pearl 1998; Pearl 2000; Halpern 2000) is a powerful framework for evaluating counterfactuals whose antecedent is a conjunction of atomic formulas. We extend CMS to an evaluation of the probability of counterfactuals with disjunctive antecedents, and more generally, to counterfactuals whose antecedent is an arbitrary Boolean combination of atomic formulas. Our main idea is to assign a probability to a counterfactual (A ∨ B) > C at a causal model M as a weighted average of the probability of C in those submodels that truthmake A ∨ B (Briggs 2012; Fine 2016, 2017). The weights of the submodels are given by the inverse distance to the original model M, based on a distance metric proposed by Eva, Stern, and Hartmann (2019). Apart from solving a major problem in the epistemology of counterfactuals, our paper shows how work in semantics, causal inference and formal epistemology can be fruitfully combined.
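The weighted-average idea can be sketched numerically. In this toy example the submodels, the probabilities of C within them, and their distances from M are all made-up illustrative values, not the paper's own model:

```python
# Sketch: probability of (A v B) > C as a distance-weighted average over
# submodels that truthmake A v B. All numbers are illustrative assumptions.

# Each submodel fixes a minimal truthmaker of A v B; for each we assume a
# probability of C in that submodel and a distance from the original model M.
submodels = {
    "make A true":       {"p_C": 0.7, "distance": 1.0},
    "make B true":       {"p_C": 0.5, "distance": 2.0},
    "make A and B true": {"p_C": 0.8, "distance": 3.0},
}

# Weights are proportional to inverse distance, then normalized to sum to 1.
raw = {name: 1.0 / m["distance"] for name, m in submodels.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}

# P((A v B) > C) as the weighted average of P(C) across the submodels.
p_counterfactual = sum(weights[name] * submodels[name]["p_C"]
                       for name in submodels)
print(round(p_counterfactual, 4))  # → 0.6636
```

Closer submodels dominate the average, which is the role the inverse-distance weighting plays in the proposal.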
We provide a novel Bayesian justification of inference to the best explanation. More specifically, we present conditions under which explanatory considerations can provide a significant confirmatory boost for hypotheses that provide the best explanation of the relevant evidence. Furthermore, we show that the proposed Bayesian model of IBE is able to deal naturally with the best known criticisms of IBE such as van Fraassen's 'bad lot' argument.
Richard Feldman’s Uniqueness Thesis holds that “a body of evidence justifies at most one proposition out of a competing set of propositions”. The opposing position, permissivism, allows distinct rational agents to adopt differing attitudes towards a proposition given the same body of evidence. We assess various motivations that have been offered for Uniqueness, including: concerns about achieving consensus, a strong form of evidentialism, worries about epistemically arbitrary influences on belief, a focus on truth-conduciveness, and consequences for peer disagreement. We argue that each of these motivations either misunderstands the commitments of permissivism or is question-begging. Better understanding permissivism makes it a much more plausible position.
Knowledge was traditionally held to be justified true belief. This paper examines the implications of maintaining this view if justification is interpreted algorithmically. It is argued that if we move sufficiently far from the small worlds to which Bayesian decision theory properly applies, we can steer between the rock of fallibilism and the whirlpool of skepticism only by explicitly building into our framing of the underlying decision problem the possibility that its attempt to describe the world is inadequate.
This paper argues that we need to look beyond Bayesian decision theory for an answer to the general problem of making rational decisions under uncertainty. The view that Bayesian decision theory is only genuinely valid in a small world was asserted very firmly by Leonard Savage when laying down the principles of the theory in his path-breaking Foundations of Statistics. He makes the distinction between small and large worlds in a folksy way by quoting the proverbs "Look before you leap" and "Cross that bridge when you come to it". You are in a small world if it is feasible always to look before you leap. You are in a large world if there are some bridges that you cannot cross before you come to them. As Savage comments, when proverbs conflict, it is proverbially true that there is some truth in both—that they apply in different contexts. He then argues that some decision situations are best modeled in terms of a small world, but others are not. He explicitly rejects the idea that all worlds can be treated as small as both "ridiculous" and "preposterous". The first half of his book is then devoted to a very successful development of the set of ideas now known as Bayesian decision theory for use in small worlds. The second half of the book is an attempt to develop a quite different set of ideas for use in large worlds, but this part of the book is usually said to be a failure by those who are aware of its existence. Frank Knight draws a similar distinction between making decisions under risk or uncertainty. The pioneering work of Gilboa and Schmeidler on making...
This essay defends the view that inductive reasoning involves following inductive rules against objections that inductive rules are undesirable because they ignore background knowledge and unnecessary because Bayesianism is not an inductive rule. I propose that inductive rules be understood as sets of functions from data to hypotheses that are intended as solutions to inductive problems. According to this proposal, background knowledge is important in the application of inductive rules and Bayesianism qualifies as an inductive rule. Finally, I consider a Bayesian formulation of inductive skepticism suggested by Lange. I argue that while there is no good Bayesian reason for judging this inductive skeptic irrational, the approach I advocate indicates a straightforward reason not to be an inductive skeptic.
Synthese 156 (3) (2007). Special issue ed. with Luc Bovens. With contributions by Max Albert, Branden Fitelson, Dennis Dieks, Igor Douven and Wouter Meijs, Alan Hájek, Colin Howson, James Joyce, and Patrick Suppes.
In this paper, we illustrate some serious difficulties involved in conveying information about uncertain risks and securing informed consent for risky interventions in a clinical setting. We argue that in order to secure informed consent for a medical intervention, physicians often need to do more than report a bare, numerical probability value. When probabilities are given, securing informed consent generally requires communicating how probability expressions are to be interpreted and communicating something about the quality and quantity of the evidence for the probabilities reported. Patients may also require guidance on how probability claims may or may not be relevant to their decisions, and physicians should be ready to help patients understand these issues.
The book develops the necessary background in probability theory underlying diverse treatments of stochastic processes and their wide-ranging applications. With this goal in mind, the pace is lively, yet thorough. Basic notions of independence and conditional expectation are introduced relatively early on in the text, while conditional expectation is illustrated in detail in the context of martingales, Markov property and strong Markov property. Weak convergence of probabilities on metric spaces and Brownian motion are two highlights. The historic role of size-biasing is emphasized in the contexts of large deviations and in developments of Tauberian Theory. The authors assume a graduate level of maturity in mathematics, but otherwise the book will be suitable for students with varying levels of background in analysis and measure theory. In particular, theorems from analysis and measure theory used in the main text are provided in comprehensive appendices, along with their proofs, for ease of reference. Rabi Bhattacharya is Professor of Mathematics at the University of Arizona. Edward Waymire is Professor of Mathematics at Oregon State University. Both authors have co-authored numerous books, including the graduate textbook, Stochastic Processes with Applications. Advanced undergrads and graduate students, analysts.
Many philosophers have argued that a hypothesis is better confirmed by some data if the hypothesis was not specifically designed to fit the data. ‘Prediction’, they argue, is superior to ‘accommodation’. Others deny that there is any epistemic advantage to prediction, and conclude that prediction and accommodation are epistemically on a par. This paper argues that there is a respect in which accommodation is superior to prediction. Specifically, the information that the data was accommodated rather than predicted suggests that the data is less likely to have been manipulated or fabricated, which in turn increases the likelihood that the hypothesis is correct in light of the data. In some cases, this epistemic advantage of accommodation may even outweigh whatever epistemic advantage there might be to prediction, making accommodation epistemically superior to prediction all things considered.
A longstanding question is the extent to which "reasonable doubt" may be expressed simply in terms of a threshold degree of belief. In this context, we examine the extent to which learning about possible alternatives may alter one's beliefs about a target hypothesis, even when no new "evidence" linking them to the hypothesis is acquired. Imagine the following scenario: a crime has been committed and Alice, the police's main suspect, has been brought to trial. There are several pieces of evidence that raise the probability that Alice committed the crime. Her attorney's defense strategy is not to challenge this evidence, but instead to provide personal details about Alice's neighbour, Jane. While Jane is one of many people the police spoke to, they saw no reason to investigate her further. You now learn that Jane, too, had access to the shed where the murder weapon was stored, just like Alice. To what extent should this alter your beliefs about Alice's guilt? In this paper, we provide a formal description of the problem and a solution indicating circumstances under which learning about Jane will more or less impact beliefs about Alice.
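The basic effect can be illustrated with a toy Bayesian model. Every number here is an illustrative assumption, not the paper's own formalization: suppose the culprit is equally likely to be anyone who had access to the shed, and that you were initially unsure whether anyone besides Alice had access.

```python
# Toy sketch of the Alice/Jane effect (all probabilities are assumptions).
# The culprit is taken to be uniformly likely among those with shed access.

# Prior over "number of people besides Alice who had access":
prior = {0: 0.5, 1: 0.5}

# P(Alice guilty | k others had access) = 1 / (k + 1).
p_alice_before = sum(p * (1.0 / (k + 1)) for k, p in prior.items())

# Learning that Jane also had access rules out k = 0, even though no new
# evidence directly links Jane to the crime itself.
posterior = {1: 1.0}
p_alice_after = sum(p * (1.0 / (k + 1)) for k, p in posterior.items())

print(p_alice_before, p_alice_after)  # 0.75 0.5
```

Learning a fact about Jane alone lowers the probability of Alice's guilt from 0.75 to 0.5, which is the phenomenon the paper analyzes.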
This chapter seeks to recover an approach to consciousness from a general theory of brain function, namely the prediction error minimization theory. The way this theory applies to mental and developmental disorder demonstrates its relevance to consciousness. The resulting view is discussed in relation to a contemporary theory of consciousness, namely the idea that conscious perception depends on Bayesian metacognition; this theory is also supported by considerations of psychopathology. This Bayesian theory is first disconnected from the higher-order thought theory, and then, via a prediction error conception of action, connected instead to the global neuronal workspace theory. Considerations of mental and developmental disorder therefore show that a very general theory of brain function is relevant to explaining the structure of conscious perception; furthermore, this theory can subsume and unify two contemporary approaches to consciousness, in a move that seeks to elucidate the fundamental mechanism for selection of representational content into consciousness.
This paper explores the consequences of applying two natural ideas from epistemology to decision theory: (1) that knowledge should guide our actions, and (2) that we know a lot of non-trivial things. In particular, we explore the consequences of these ideas as they are applied to standard decision theoretic puzzles such as the St. Petersburg Paradox. In doing so, we develop a “knowledge-first” decision theory and we will see how it can help us avoid fanaticism with regard to the St. Petersburg puzzle and related puzzles. The result will be a decision theory that gives a novel, but well-motivated, reason for discounting small probabilities when making decisions. We examine the merits and demerits of such a decision theory.
Epistemologists who study credences have a well-developed account of how you should change them when you learn new evidence; that is, when your body of evidence grows. What's more, they boast a diverse range of epistemic and pragmatic arguments that support that account. But they do not have a satisfactory account of when and how you should change your credences when you become aware of possibilities and propositions you have not entertained before; that is, when your awareness grows. In this paper, I consider the arguments for the credal epistemologist's account of how to respond to evidence, and I ask whether they can help us generate an account of how to respond to awareness growth. The results are surprising: the arguments that all support the same norms for responding to evidence growth support a number of different norms when they are applied to awareness growth. Some of these norms seem too weak, others too strong. I ask what we should conclude from this, and argue that our credal response to awareness growth is considerably less rigorously constrained than our credal response to new evidence.
In this article, I explore the compatibility of inference to the best explanation (IBE) with several influential models and accounts of scientific explanation. First, I explore the different conceptions of IBE and limit my discussion to two: the heuristic conception and the objective Bayesian conception. Next, I discuss five models of scientific explanation with regard to each model’s compatibility with IBE. I argue that Philip Kitcher’s unificationist account supports IBE; Peter Railton’s deductive-nomological-probabilistic model, Wesley Salmon’s statistical-relevance model, and Bas van Fraassen’s erotetic account are incompatible with IBE; and Wesley Salmon’s causal-mechanical model is merely consistent with IBE. In short, many influential models of scientific explanation do not support IBE. I end by outlining three possible conclusions to draw: (1) either philosophers of science or defenders of IBE have seriously misconstrued the concept of explanation, (2) philosophers of science and defenders of IBE do not use the term ‘explanation’ univocally, and (3) the ampliative conception of IBE, which is compatible with any model of scientific explanation, deserves a closer look.
This paper defends the view that discovering that our universe is fine-tuned should make us more confident that other universes exist. My defense exploits a distinction between ideal and non-ideal evidential support. I use that distinction in concert with a simple model to disarm the most influential objection—the this-universe objection—to the view that fine-tuning supports the existence of other universes. However, the simple model fails to capture some important features of our epistemic situation with respect to fine-tuning. To capture these features, I introduce a more sophisticated model. I then use the more sophisticated model to show that, even once those complicating factors are taken into account, fine-tuning should boost our confidence in the existence of other universes.
Bayesian epistemology provides a popular and powerful framework for modeling rational norms on credences, including how rational agents should respond to evidence. The framework is built on the assumption that ideally rational agents have credences, or degrees of belief, that are representable by numbers that obey the axioms of probability. From there, further constraints are proposed regarding which credence assignments are rationally permissible, and how rational agents’ credences should change upon learning new evidence. While the details are hotly disputed, all flavors of Bayesianism purport to give us norms of ideal rationality. This raises the question of how exactly these norms apply to you and me, since perfect compliance with those ideal norms is out of reach for human thinkers. A common response is that Bayesian norms are ideals that human reasoners are supposed to approximate – the closer they come to being ideally rational, the better. To make this claim plausible, we need to make it more precise. In what sense is it better to be closer to ideally rational, and what is an appropriate measure of such closeness? This article sketches some possible answers to these questions.
There is some consensus on the claim that imagination as suppositional thinking can have epistemic value insofar as it’s constrained by a principle of minimal alteration of how we know or believe reality to be – compatibly with the need to accommodate the supposition initiating the imaginative exercise. But in the philosophy of imagination there is no formally precise account of how exactly such minimal alteration is to work. I propose one. I focus on counterfactual imagination, arguing that this can be modeled as simulated belief revision governed by Laplacian imaging. So understood, it can be rationally justified by accuracy considerations: it minimizes expected belief inaccuracy, as measured by the Brier score.
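An imaging-style revision and a Brier score can be sketched on a toy possible-worlds model. The worlds, probabilities, and "closest A-world" structure below are all illustrative assumptions (the tie-splitting rule used here, shifting a world's mass equally to its closest supposition-worlds, is one common way of implementing Laplacian imaging, not the paper's own definition):

```python
# Sketch of imaging-style belief revision and the Brier score
# (toy worlds, probabilities, and closeness structure are assumptions).

worlds = ["w1", "w2", "w3", "w4"]
A = {"w1": True, "w2": True, "w3": False, "w4": False}   # supposition A
prior = {"w1": 0.4, "w2": 0.1, "w3": 0.3, "w4": 0.2}

# Assumed closeness structure: each not-A world sends its probability to
# its closest A-world(s), with ties split equally.
closest_A = {"w3": ["w1"], "w4": ["w1", "w2"]}

imaged = {w: (prior[w] if A[w] else 0.0) for w in worlds}
for w, targets in closest_A.items():
    for t in targets:
        imaged[t] += prior[w] / len(targets)

# Brier score of the imaged credences, taking w1 as the actual world
# (also an assumption): mean squared distance from the truth values.
truth = {w: 1.0 if w == "w1" else 0.0 for w in worlds}
brier = sum((imaged[w] - truth[w]) ** 2 for w in worlds) / len(worlds)
print(imaged, brier)
```

The revised credences remain a probability distribution over the A-worlds, and the Brier score then measures how inaccurate they are at the assumed actual world.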
Epistemic Utility Theory is often identified with the project of *axiology-first epistemology*—the project of vindicating norms of epistemic rationality purely in terms of epistemic value. One of the central goals of axiology-first epistemology is to provide a justification of the central norm of Bayesian epistemology, Probabilism. The first part of this paper presents a new challenge to axiology-first epistemology: I argue that in order to justify Probabilism in purely axiological terms, proponents of axiology-first epistemology need to justify a claim about epistemic value—what I label ‘Downwards Propriety’—much stronger than any for which they have offered justification. The second part of this paper offers an argument that this challenge cannot be met: that there is no hope for providing a purely axiological justification of Downwards Propriety, at least given widely accepted assumptions about epistemic value.
I argue that when we use ‘probability’ language in epistemic contexts—e.g., when we ask how probable some hypothesis is, given the evidence available to us—we are talking about degrees of support, rather than degrees of belief. The epistemic probability of A given B is the mind-independent degree to which B supports A, not the degree to which someone with B as their evidence believes A, or the degree to which someone would or should believe A if they had B as their evidence. My central argument is that the degree-of-support interpretation lets us better model good reasoning in certain cases involving old evidence. Degree-of-belief interpretations make the wrong predictions not only about whether old evidence confirms new hypotheses, but about the values of the probabilities that enter into Bayes’ Theorem when we calculate the probability of hypotheses conditional on old evidence and new background information.
Some epistemologists think that the Bayesian ideals matter because we can approximate them. That is, our attitudes can be more or less close to those of our ideal Bayesian counterpart. In this paper, I raise a worry for this justification of epistemic ideals. The worry is this: In order to correctly compare agents to their ideal counterparts, we need to imagine idealized agents who have the same relevant information, knowledge, or evidence. However, there are cases in which one’s ideal counterpart cannot have one’s information, knowledge, or evidence. In these situations, agents cannot compare themselves to their ideal counterpart.
Sometimes you are unreliable at fulfilling your doxastic plans: for example, if you plan to be fully confident in all truths, probably you will end up being fully confident in some falsehoods by mistake. In some cases, there is information that plays the classical role of evidence—your beliefs are perfectly discriminating with respect to some possible facts about the world—and there is a standard expected‐accuracy‐based justification for planning to conditionalize on this evidence. This planning‐oriented justification extends to some cases where you do not have transparent evidence, in the sense that your beliefs are not perfectly discriminating with respect to any non‐trivial facts. In other cases, accuracy considerations do not tell you to plan to conditionalize on any information at all, but rather to plan to follow a different updating rule. Even in the absence of evidence, accuracy considerations can guide your doxastic plan.
One type of argument to sceptical paradox proceeds by making a case that a certain kind of metaphysically “heavyweight” or “cornerstone” proposition is beyond all possible evidence and hence may not be known or justifiably believed. Crispin Wright has argued that we can concede that our acceptance of these propositions is evidentially risky and still remain rationally entitled to those of our ordinary knowledge claims that are seemingly threatened by that concession. A problem for Wright’s proposal is the so-called Leaching worry: if we are merely rationally entitled to accept the cornerstones without evidence, how can we achieve evidence-based knowledge of the multitude of quotidian propositions that we think we know, which require the cornerstones to be true? This paper presents a rigorous, novel explication of this worry within a Bayesian framework, and offers the Entitlement theorist two distinct responses.
This paper is about teaching probability to students of philosophy who don’t aim to do primarily formal work in their research. These students are unlikely to seek out classes about probability or formal epistemology for various reasons, for example because they don’t realize that this knowledge would be useful for them or because they are intimidated by the material. However, most areas of philosophy now contain debates that incorporate probability, and basic knowledge of it is essential even for philosophers whose work isn’t primarily formal. In this paper, I explain how to teach probability to students who are not already enthusiastic about formal philosophy, taking into account the common phenomena of math anxiety and the lack of reading skills for formal texts. I address course design, lesson design, and assignment design. Most of my recommendations also apply to teaching formal methods other than probability theory.
Acknowledging that many members of the SM3D Portal need reference documents related to Bayesian Mindsponge Framework (BMF) analytics to conduct research projects effectively, we present the essential materials and most up-to-date studies employing the method in this post. By summarizing all the publications and preprints associated with BMF analytics, we also aim to help researchers reduce the time and effort for information seeking, enhance proactive self-learning, and facilitate knowledge exchange and community dialogue through transparency.
All of us, including scientists, make judgments about what is true or false, probable or improbable. And in the process, we frequently appeal to concepts such as evidential support or explanation. Bayesian philosophers of science have given illuminating formal accounts of these concepts. This paper aims to follow in their footsteps, providing a novel formal account of various additional concepts: the likelihood-prior trade-off, successful accommodation of evidence, ad hocness, and, finally, consilience—sometimes also called “unification”. Using these accounts, I also provide a new Bayesian analysis of how someone such as Charles Darwin hypothetically could have reasoned in favor of evolution over special creationism. Lastly, I explore how these accounts relate to other topics and accounts in philosophy, and I chart out some areas for further research.
Michael Nahm's preceding commentary accuses me of seven misrepresentations. One of these is an acknowledged good-faith error about a peripheral detail, while the remaining six are demonstrably accurate descriptions of Nahm's statements. At the same time, Nahm verifiably misrepresents me frequently and intentionally over issues that he takes to be consequential, which is a much more serious offense. All authors should call out when an interlocutor gets their points wrong, but only when they can definitively back up the charge. Where Nahm weakly attempts to show that I misrepresented him, I will show that, if anything, his showcase consists of six verifiably accurate characterizations of his Bigelow Institute contest-winning essay's conclusions. His commentary exemplifies the truism that one can appeal to a million frivolous reasons to dismiss what an opponent has to say if one is absolutely determined not to hear him. Though committed survivalists will undoubtedly be satisfied that survival researchers have responded to me regardless of whether they have responded well, those that care about the underlying issues will hopefully find value in my reply. 1. Introduction -- 2. My (One) Accidental Misattribution -- 3. Nahm's (Repeated) Intentional Mischaracterizations -- 4. Nahm's Mischaracterizations to Shift the Burden of Proof -- 5. Nahm's Seven Wonders of the World -- 6. Conclusion: Assertions Should be Supported.
It is a common intuition in scientific practice that positive instances confirm. This confirmation, at least based purely on syntactic considerations, is what Nelson Goodman’s ‘Grue Problem’, and more generally the ‘New Riddle’ of Induction, attempt to defeat. One treatment of the Grue Problem has been made along Bayesian lines, wherein the riddle reduces to a question of probability assignments. In this paper, I consider this so-called Bayesian Grue Problem and evaluate how one might proffer a solution to this problem utilizing what I call a phenomenological approach. I argue that this approach to the problem can be successful on the Bayesian framework.
While many authors distinguish belief from acceptance, it seems almost universally agreed that no similar distinction can be drawn between degrees of belief, or credences, and degrees of acceptance. I challenge this assumption in this paper. Acceptance comes in degrees and acknowledging this helps to resolve problems in at least two philosophical domains. Degrees of acceptance play vital roles when we simplify our reasoning, and they ground the common ground of a conversation if we assume context probabilism, i.e., that the common ground must be represented with probability spaces rather than possible worlds.
The argument from inductive risk is considered to be one of the strongest challenges for value-free science. A great part of its appeal lies in the idea that even an ideal epistemic agent—the “perfect scientist” or “scientist qua scientist”—cannot escape inductive risk. In this paper, I scrutinize this ambition by stipulating an idealized Bayesian decision setting. I argue that inductive risk does not show that the “perfect scientist” must, descriptively speaking, make non-epistemic value-judgements, at least not in a way that undermines the value-free ideal. However, the argument is more successful in showing that there are cases where the “perfect scientist” should, normatively speaking, use non-epistemic values. I also show that this is possible without creating problems of illegitimate prescription and wishful thinking. Thus, while inductive risk does not refute value-freedom completely, it still represents a powerful critique of value-free science.
Aristotle divided arguments that persuade into the rhetorical (which happen to persuade), the dialectical (which are strong so ought to persuade to some degree) and the demonstrative (which must persuade if rightly understood). Dialectical arguments were long neglected, partly because Aristotle did not write a book about them. But in the sixteenth and seventeenth centuries, late scholastic authors such as Medina, Cano and Soto developed a sound theory of probable arguments, those that have logical and not merely psychological force but fall short of demonstration. Informed by late medieval treatments of the law of evidence and problems in moral theology and aleatory contracts, they considered the reasons that could render legal, moral, theological, commercial and historical arguments strong though not demonstrative. At the same time, demonstrative arguments became better understood as Galileo and other figures of the Scientific Revolution used mathematical proof in arguments in physics. Galileo moved both dialectical and demonstrative arguments into mathematical territory.
We model scientific theories as Bayesian networks. Nodes carry credences and function as abstract representations of propositions within the structure. Directed links carry conditional probabilities and represent connections between those propositions. Updating is Bayesian across the network as a whole. The impact of evidence at one point within a scientific theory can have a very different impact on the network than does evidence of the same strength at a different point. A Bayesian model allows us to envisage and analyze the differential impact of evidence and credence change at different points within a single network and across different theoretical structures.
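The differential-impact point can be seen already in a two-node fragment of such a network. In this sketch the hypothesis node H and the two evidence nodes, along with all conditional probabilities, are illustrative assumptions; E1 is tightly linked to H, E2 only loosely:

```python
# Sketch: evidence at different points in a network shifts credence in a
# hypothesis H by different amounts (all probabilities are assumptions).

p_H = 0.5  # prior credence in the hypothesis node

# Conditional probabilities on the directed links H -> E1 and H -> E2:
p_E1_given_H, p_E1_given_notH = 0.9, 0.2   # E1 tightly connected to H
p_E2_given_H, p_E2_given_notH = 0.6, 0.5   # E2 only loosely connected

def posterior(p_h, p_e_h, p_e_nh):
    """Bayes' theorem: P(H | E) from prior and the two likelihoods."""
    return p_h * p_e_h / (p_h * p_e_h + (1 - p_h) * p_e_nh)

print(posterior(p_H, p_E1_given_H, p_E1_given_notH))  # ≈ 0.818
print(posterior(p_H, p_E2_given_H, p_E2_given_notH))  # ≈ 0.545
```

Observing E1 moves the credence in H far more than observing E2 does, even though both are single observations: the difference is carried entirely by where in the network the evidence lands.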
Higher-order evidence is evidence about what is rational to think in light of your evidence. Many have argued that it is special – falling into its own evidential category, or leading to deviations from standard rational norms. But it is not. Given standard assumptions, almost all evidence is higher-order evidence.
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Ian Hacking and Roger White influentially argue that it is not. We approach this question through a systematic framework for self-locating epistemology. As it turns out, leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse. This convergence is no accident: we present two theorems showing that, in this setting, any updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
I argue that a popular view about self-locating evidence implies that there are cases in which agents have surprisingly strong evidence for their own reincarnation. The central case is an ‘Immortal Beauty’ scenario, modelled after the well-known Sleeping Beauty puzzle. I argue that if the popular ‘thirder’ solution to the puzzle is correct, then Immortal Beauty should be confident that she's going to be reincarnated. The essay also examines another pro-reincarnation argument due to Michael Huemer (2021). I argue that his argument fails, and that my argument establishes an alternative way in which mere existence can be evidence for reincarnation. I then examine whether my result generalizes.
The thesis develops a naturalist theory of phenomenal consciousness. The argument proceeds in three broad steps. The first consists in a defense of a representationalist view of consciousness. The second part argues that the relevant form of mental representation can be explained in terms of the predictive processing approach to brain function. The final part consists in an attack on metaphysical realism inspired by Bayesian approaches to cognition and a discussion of the implications of metaphysical anti-realism for the hard problem of consciousness.
Peter Achinstein has argued at length and on many occasions that the view according to which evidential support is defined in terms of probability-raising faces serious counterexamples and, hence, should be abandoned. Proponents of the positive probabilistic relevance view have remained unconvinced. The debate seems to be in a deadlock. This paper is an attempt to move the debate forward and revisit some of the central claims within this debate. My conclusion here will be that while Achinstein may be right that his counterexamples undermine probabilistic relevance views of what it is for e to be evidence that h, there is still room for a defence of a related probabilistic view about an increase in being supported, according to which, if Pr(h|e) > Pr(h), then h is more supported given e than it is without e. My argument relies crucially on an insight from recent work on the linguistics of gradable adjectives. (shrink)
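The relevance condition at issue can be made concrete with a small numeric sketch. The numbers below are hypothetical, chosen only so that the inequality Pr(h|e) > Pr(h) holds:

```python
def posterior(prior, lik_h, lik_not_h):
    """Bayes' theorem: Pr(h | e) from Pr(h), Pr(e | h) and Pr(e | ~h)."""
    return prior * lik_h / (prior * lik_h + (1 - prior) * lik_not_h)

pr_h = 0.3                              # hypothetical prior credence in h
pr_h_given_e = posterior(pr_h, 0.8, 0.2)
print(pr_h_given_e)                     # ~0.63 > 0.3: e raises the probability
                                        # of h, so h is more supported given e
```

On the comparative view sketched in the abstract, this increase licenses only the graded claim that h is *more* supported given e, not the categorical claim that e *is* evidence that h.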
Even prior to its publication, John Norton’s book has stimulated debates about induction. Its publication will galvanize these discussions. Does it merit all this attention? Yes, and not just from philosophers of science. Practically all philosophers will find novel and thought-provoking ideas, with implications for their research.
Several philosophers and psychologists have characterized belief in conspiracy theories as a product of irrational reasoning. Proponents of conspiracy theories apparently resist revising their beliefs given disconfirming evidence and tend to believe in more than one conspiracy, even when the relevant beliefs are mutually inconsistent. In this paper, we bring leading views on conspiracy theoretic beliefs closer together by exploring their rationality under a probabilistic framework. We question the claim that the irrationality of conspiracy theoretic beliefs stems from an inadequate response to disconfirming evidence and internal incoherence. Drawing analogies to Lakatosian research programs, we argue that maintaining a core conspiracy belief can be Bayes-rational when it is embedded in a network of auxiliary beliefs, which can be revised to protect the more central belief from disconfirmation. We propose that the irrationality associated with conspiracy belief lies not in a flawed updating method, but in a failure to converge toward well-confirmed, stable belief networks in the long run. This approach not only reconciles previously disjointed views, but also points toward more specific descriptions of why agents may be prone to adopting beliefs in conspiracy theories.
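The protective mechanism described here, a central belief shielded by a revisable auxiliary, admits a minimal Bayesian sketch. This is not the authors' model; the core belief C, auxiliary belief A, and all probabilities are invented for illustration. Disconfirming evidence E is very unlikely if both C and A hold, but a failure of the auxiliary (~A) explains E away without touching C:

```python
from itertools import product

def prior(c, a):
    """Independent priors: both the core and the auxiliary start at 0.9."""
    return (0.9 if c else 0.1) * (0.9 if a else 0.1)

def lik_e(c, a):
    """Pr(E | C, A): E disconfirms the conjunction C & A, but ~A explains it."""
    if c and a:
        return 0.05   # E is very surprising if core and auxiliary both hold
    if c and not a:
        return 0.9    # an auxiliary failure readily accounts for E
    return 0.5        # baseline if the core is false

def posterior(var):
    """Pr(var = 1 | E) by enumerating the four joint states."""
    num = den = 0.0
    for c, a in product([0, 1], repeat=2):
        w = prior(c, a) * lik_e(c, a)
        den += w
        num += w * (c if var == 'c' else a)
    return num / den

print(posterior('c'), posterior('a'))  # ~0.71 vs ~0.50: the auxiliary belief
                                       # absorbs most of the disconfirmation
```

A single update like this is perfectly coherent; the irrationality the abstract locates in the long run would show up only after repeated rounds in which the auxiliaries keep taking the blame.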
Park (2017, 2018, 2019) argues that Bas van Fraassen uses inference to the best explanation to defend his contextual theory of explanation. If Park is right, then van Fraassen is in trouble because he rejects IBE as a rational rule of inference. In this reply, I argue that van Fraassen does not use IBE in defending the contextual theory of explanation. I distinguish between several conceptions of IBE: heuristic IBE, objective Bayesian IBE, and ampliative IBE. I argue that van Fraassen holds the ampliative conception of IBE and that his rejection of IBE concerns only ampliative IBE. I also argue that van Fraassen’s defense of the contextual theory of explanation, at best, can be interpreted as an instance of heuristic IBE, but not ampliative IBE. Therefore, I argue, Park’s criticism of van Fraassen misfires.
This book explores the Bayesian approach to the logic and epistemology of scientific reasoning. Section 1 introduces the probability calculus as an appealing generalization of classical logic for uncertain reasoning. Section 2 explores some of the vast terrain of Bayesian epistemology. Three epistemological postulates suggested by Thomas Bayes in his seminal work guide the exploration. This section discusses modern developments and defenses of these postulates as well as some important criticisms and complications that lie in wait for the Bayesian epistemologist. Section 3 applies the formal tools and principles of the first two sections to a handful of topics in the epistemology of scientific reasoning: confirmation, explanatory reasoning, evidential diversity and robustness analysis, hypothesis competition, and Ockham's Razor.
Orthodox Bayesianism is a highly idealized theory of how we ought to live our epistemic lives. One of the most widely discussed idealizations is that of logical omniscience: the assumption that an agent’s degrees of belief must be probabilistically coherent to be rational. It is widely agreed that this assumption is problematic if we want to reason about bounded rationality, logical learning, or other aspects of non-ideal epistemic agency. Yet, we still lack a satisfying way to avoid logical omniscience within a Bayesian framework. Some proposals merely replace logical omniscience with a different logical idealization; others sacrifice all traits of logical competence on the altar of logical non-omniscience. We think a better strategy is available: by enriching the Bayesian framework with tools that allow us to capture what agents can and cannot infer given their limited cognitive resources, we can avoid logical omniscience while retaining the idea that rational degrees of belief are in an important way constrained by the laws of probability. In this paper, we offer a formal implementation of this strategy, show how the resulting framework solves the problem of logical omniscience, and compare it to orthodox Bayesianism as we know it.