It is well known that de se (or ‘self-locating’) propositions complicate the standard picture of how we should respond to evidence. This has given rise to a substantial literature centered around puzzles like Sleeping Beauty, Dr. Evil, and Doomsday—and it has also sparked controversy over a style of argument that has recently been adopted by theoretical cosmologists. These discussions often dwell on intuitions about a single kind of case, but it’s worth seeking a rule that can unify our treatment of all evidence, whether de dicto or de se.

This paper is about three candidates for such a rule, presented as replacements for the standard updating rule. Each rule stems from the idea that we should treat ourselves as a random sample, a heuristic that underlies many of the intuitions that have been pumped in treatments of the standard puzzles. But each rule also yields some strange results when applied across the board. This leaves us with some difficult options. We can seek another way to refine the random-sample heuristic, e.g. by restricting one of our rules. We can try to live with the strange results, perhaps granting that useful principles can fail at the margins. Or we can reject the random-sample heuristic as fatally flawed—which means rethinking its influence in even the simplest cases.
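As an illustrative aside (not drawn from the paper), the random-sample heuristic can be made concrete with a toy Sleeping Beauty simulation: if Beauty treats her current awakening as a random sample from all awakenings, repeated runs put roughly a third of awakenings in Heads runs, which is where the familiar "thirder" intuition comes from. Function names and trial counts below are placeholders.

```python
import random

def simulate_awakenings(trials=100_000, seed=0):
    """Toy Sleeping Beauty model: Heads -> one awakening, Tails -> two.

    Treating the current awakening as a random sample from all awakenings,
    the fraction of awakenings occurring in Heads runs approximates 1/3.
    """
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # Heads: awakened once
            heads_awakenings += 1
            total_awakenings += 1
        else:                    # Tails: awakened twice
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(simulate_awakenings())  # ~0.333
```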
In standard probability theory, probability zero is not the same as impossibility. But many have suggested that only impossible events should have probability zero. This can be arranged if we allow infinitesimal probabilities, but infinitesimals do not solve all of the problems. We will see that regular probabilities are not invariant over rigid transformations, even for simple, bounded, countable, constructive, and disjoint sets. Hence, regular chances cannot be determined by space-time invariant physical laws, and regular credences cannot satisfy seemingly reasonable symmetry principles. Moreover, the examples here are immune to the objections against Williamson’s infinite coin flips.
Many philosophers have been attracted to a restricted version of the principle of indifference in the case of self-locating belief. Roughly speaking, this principle states that, within any given possible world, one should be indifferent between different hypotheses concerning who one is within that possible world, so long as those hypotheses are compatible with one’s evidence. My first goal is to defend a more precise version of this principle. After responding to several existing criticisms of such a principle, I argue that existing formulations of the principle are crucially ambiguous, and I go on to defend a particular disambiguation of the principle. According to the disambiguation I defend, how we should apply this restricted principle of indifference sensitively depends on our background metaphysical beliefs. My second goal is to apply this disambiguated principle to classical skeptical problems in epistemology. In particular, I argue that Eternalism threatens to lead us to external world skepticism, and Modal Realism threatens to lead us to inductive skepticism.
One popular approach to statistical mechanics understands statistical mechanical probabilities as measures of rational indifference. Naive formulations of this “indifference approach” face reversibility worries - while they yield the right prescriptions regarding future events, they yield the wrong prescriptions regarding past events. This paper begins by showing how the indifference approach can overcome the standard reversibility worries by appealing to the Past Hypothesis. But, the paper argues, positing a Past Hypothesis doesn't free the indifference approach from all reversibility worries. For while appealing to the Past Hypothesis allows it to escape one kind of reversibility worry, it makes it susceptible to another - the Meta-Reversibility Objection. And there is no easy way for the indifference approach to escape the Meta-Reversibility Objection. As a result, reversibility worries pose a steep challenge to the viability of the indifference approach.
In this paper, I examine Plantinga’s (1993, 2000, 2011) Evolutionary Argument Against Naturalism (EAAN). While there has been much discussion about Plantinga’s use of probabilities in the argument, I contend that insufficient attention has been paid to the question of how we are to interpret those probabilities. I argue that views Plantinga defends elsewhere limit the range of interpretations available to him here. The upshot is that the EAAN is more limited in its applicability than Plantinga alleges.
We start by presenting three different views that jointly imply that every person has many conscious beings in their immediate vicinity, and that the number greatly varies from person to person. We then present and assess an argument to the conclusion that how confident someone should be in these views should sensitively depend on how massive they happen to be. According to the argument, sometimes irreducibly de se observations can be powerful evidence for or against believing in metaphysical theories.
It is a consequence of the theory of imprecise credences that there exist situations in which rational agents inevitably become less opinionated toward some propositions as they gather more evidence. The fact that an agent's imprecise credal state can dilate in this way is often treated as a strike against the imprecise approach to inductive inference. Here, we show that dilation is not a mere artifact of this approach by demonstrating that opinion loss is countenanced as rational by a substantially broader class of normative theories than has been previously recognised. Specifically, we show that dilation-like phenomena arise even when one abandons the basic assumption that agents have (precise or imprecise) credences of any kind, and follow directly from bedrock norms for rational comparative confidence judgements of the form ‘I am at least as confident in p as I am in q’. We then use the comparative confidence framework to develop a novel understanding of what exactly gives rise to dilation-like phenomena. By considering opinion loss in this more general setting, we are able to provide a novel assessment of the prospects for an account of inductive inference that is not saddled with the inevitability of rational opinion loss.
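As a hedged illustration (my own toy construction of the standard dilation setup, not code from the paper): take a fair coin toss X, a proposition Z about which credence is spread imprecisely over the unit interval, and Y defined as "X lands heads iff Z is true". Every member of the credal set agrees beforehand that P(Y) = 1/2, yet after learning how X landed, the credal set for Y dilates to nearly the whole unit interval.

```python
import numpy as np

# Credal set: each member assigns some probability z to Z, with X a fair coin
# independent of Z under every member. Y is true iff (X = heads) <-> Z.
credal_set = np.linspace(0.01, 0.99, 50)   # candidate values of P(Z)

def prior_Y(z):
    # P(Y) = P(heads)P(Z) + P(tails)P(not Z) = 0.5*z + 0.5*(1 - z) = 0.5
    return 0.5 * z + 0.5 * (1 - z)

def posterior_Y_given_heads(z):
    # After learning X = heads, Y is true iff Z is true, so P(Y | heads) = z
    return z

priors = [prior_Y(z) for z in credal_set]
posteriors = [posterior_Y_given_heads(z) for z in credal_set]

print("prior spread:    ", min(priors), "-", max(priors))          # 0.5 - 0.5
print("posterior spread:", min(posteriors), "-", max(posteriors))  # 0.01 - 0.99
```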
An indifference principle says that your credences should be distributed uniformly over each of the possibilities you recognise. A chance deference principle says that your credences should be aligned with the chances. My thesis is that, if we are anti-Humeans about chance, then these two principles are incompatible. Anti-Humeans think that it is possible for the actual frequencies to depart from the chances. So long as you recognise possibilities like this, you cannot both spread your credences evenly and defer to the chances. I discuss some weaker forms of indifference which will allow anti-Humeans to defer to the chances.
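A small worked example (my own illustration of the sort of conflict at issue, with stipulated numbers) makes the tension vivid: suppose you recognise two chance hypotheses for a twice-flipped coin, bias 0.9 and bias 0.1, together with all four outcome sequences under each. Indifference over the eight fine-grained possibilities and deference to the chances then disagree about, e.g., the probability of two heads.

```python
from itertools import product

biases = [0.9, 0.1]                 # two recognised chance hypotheses
sequences = list(product("HT", repeat=2))
possibilities = [(b, s) for b in biases for s in sequences]   # 8 in total

def chance_of(seq, bias):
    p = 1.0
    for outcome in seq:
        p *= bias if outcome == "H" else 1 - bias
    return p

# Principle of indifference: 1/8 to each fine-grained possibility
p_hh_indifference = sum(1 / len(possibilities)
                        for b, s in possibilities if s == ("H", "H"))

# Chance deference (with 1/2 credence in each chance hypothesis)
p_hh_deference = sum(0.5 * chance_of(("H", "H"), b) for b in biases)

print(p_hh_indifference)   # 0.25
print(p_hh_deference)      # 0.41
```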
How do you make decisions under ignorance? That is, how do you decide when you lack subjective probabilities for some of your options’ possible outcomes? One answer is that you follow the Laplace Rule: you assign an equal probability to each state of nature for which you lack a subjective probability (that is, you use the Principle of Indifference) and then you maximize expected utility. The most influential objection to the Laplace Rule is that it is sensitive to the individuation of states of nature. This sensitivity is problematic because the individuation of states seems arbitrary. In this paper, however, I argue that this objection proves too much. I argue that all plausible rules for decisions under ignorance are sensitive to the individuation of states of nature.
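For a quick illustration of the individuation worry (a stock example of my own, with stipulated utilities): whether it will rain tomorrow can be carved as {rain, no rain} or as {heavy rain, light rain, no rain}, and the Laplace Rule assigns rain probability 1/2 under the first carving but 2/3 under the second, which can flip an expected-utility comparison.

```python
def laplace_probability(n_event_states, n_all_states):
    """Principle of Indifference: each state gets probability 1/n_all_states."""
    return n_event_states / n_all_states

# Two ways of individuating the same states of nature
p_rain_coarse = laplace_probability(1, 2)   # {rain, no rain}             -> 0.5
p_rain_fine = laplace_probability(2, 3)     # {heavy, light, no rain}     -> 0.667

# Stipulated utilities: carrying an umbrella always costs 1; getting
# caught in the rain without one costs 1.8.
def best_act(p_rain):
    eu_umbrella = -1.0
    eu_no_umbrella = -1.8 * p_rain
    return "umbrella" if eu_umbrella > eu_no_umbrella else "no umbrella"

print(best_act(p_rain_coarse))  # no umbrella
print(best_act(p_rain_fine))    # umbrella
```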
This M.A. thesis explores the intricate Problem of Induction, contrasting three seminal approaches: Hume's habit-centric view, Reichenbach's emphasis on the Principle of Uniformity of Nature, and Strawson's belief in the innate rationality of induction. While Hume's perspective lays the groundwork for Kant's a priori and Cleve's a posteriori validation, Reichenbach and Salmon present pragmatic justifications, underscoring the methodological and probabilistic underpinnings of inductive reasoning and specifying epistemological ignorance as guidance for the optimality criteria. Strawson, challenging prevailing notions, posits that induction, anchored by prior probabilities and evidence, is inherently rational, obviating the need for external validation. The study integrates concepts of frequentism, conditionalization, and probabilistic laws to develop the truth-conducive nuance of induction. The research culminates by confronting the dual challenges of quantitative and profound scepticism, championing a holistic approach. Overall, the study aims to enrich the discourse on the epistemic foundations of the Problem of Induction, particularly its implications for scientific inquiry and the laws of nature.
An aspect of Peirce’s thought that may still be underappreciated is his resistance to what Levi calls _pedigree epistemology_, to the idea that a central focus in epistemology should be the justification of current beliefs. Somewhat more widely appreciated is his rejection of the subjective view of probability. We argue that Peirce’s criticisms of subjectivism, to the extent they grant such a conception of probability is viable at all, revert to pedigree epistemology. A thoroughgoing rejection of pedigree in the context of probabilistic epistemology, however, _does_ challenge prominent subjectivist responses to the problem of the priors.
The purpose of the paper is to cast doubt on the alleged intuitive or natural character of the skeptical argument about the external world. In §1, we examine a version of the skeptical argument based on the epistemic closure principle and the indifference principle. In §2, in order to deepen the view defended by Michael Williams, we offer a novel examination of the Cartesian skeptical argumentation to show that the alleged naturalness claimed by the skeptic is nowhere to be found in the two arguments that make up this argumentative strategy; moreover, to reach her conclusion, the skeptic needs to commit to epistemological realism, namely, the claim that each of our beliefs belongs to an epistemological hierarchy based solely on its content. In §3, based on arguments inspired by Wittgenstein, contra epistemological realism, we show how each belief has a justificatory role based on its context.
An important line of response to scepticism appeals to the best explanation. But anti-sceptics have not engaged much with work on explanation in the philosophy of science. I plan to investigate whether plausible assumptions about best explanations really do favour anti-scepticism. I will argue that there are ways of constructing sceptical hypotheses in which the assumptions do favour anti-scepticism, but the size of the support for anti-scepticism is small.
Whilst Bayesian epistemology is widely regarded nowadays as our best theory of knowledge, there are still a relatively large number of incompatible and competing approaches falling under that umbrella. Very recently, Wallmann and Williamson wrote an interesting article that aims at showing that a subjective Bayesian who accepts the principal principle and uses a known physical chance as her degree of belief for an event A could end up having incoherent or very implausible beliefs if she subjectively chooses the probability of an event F for which she has much poorer evidence. They also argued that their own version of objective Bayesianism is completely immune to that challenge. In this article, after having presented the strongest version of Wallmann’s and Williamson’s argument, I will show that, if successful, it has far-reaching consequences and would invalidate not only moderate subjective Bayesianism and imprecise probabilism but also a form of objective Bayesianism that relies on conditionalisation, the principal principle, reference classes, and the principle of indifference applied to the most basic partitions. I then argue that their argument can be defeated by adding the rule that it is always irrational to choose a probability that can be computed from the known probabilities associated with one’s other beliefs. I finally argue that the authors’ main intuition that probabilities have different degrees of reliability favours imprecise Bayesianism over precise Bayesianism.
The principle of insufficient reason (PIR) assigns equal probabilities to each alternative of a random experiment whenever there is no reason to prefer one over the other. The maximum entropy principle (MaxEnt) generalizes PIR to the case where statistical information like expectations is given. It is known that both principles result in paradoxical probability updates for joint distributions of cause and effect. This is because constraints on the conditional P(effect | cause) result in changes of P(cause) that assign higher probability to those values of the cause that offer more options for the effect, suggesting “intentional behavior.” Earlier work therefore suggested sequentially maximizing entropy according to the causal order, but without further justification apart from plausibility on toy examples. We justify causal modifications of PIR and MaxEnt by separating constraints into restrictions for the cause and restrictions for the mechanism that generates the effect from the cause. We further sketch why causal PIR also entails “Information Geometric Causal Inference.” We briefly discuss problems of generalizing the causal version of MaxEnt to arbitrary causal DAGs.
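Here is a minimal sketch (my own toy construction in the spirit of the worry described above, not code from the paper) of how a joint indifference assignment can make the cause look "intentional": if one cause value admits three possible effect values and the other admits only one, indifference over the feasible cause-effect pairs skews the marginal on the cause toward the option-rich value, whereas a causal version applies indifference to the cause and to the mechanism separately.

```python
# Toy structure: cause C in {0, 1}; if C == 0 the only possible effect is 0,
# if C == 1 the effect can be 0, 1, or 2.
feasible_pairs = [(0, 0), (1, 0), (1, 1), (1, 2)]

# Joint indifference (PIR over feasible cause-effect pairs): uniform over the 4 pairs
p_pair = 1.0 / len(feasible_pairs)
p_cause_joint = {c: sum(p_pair for (ci, _) in feasible_pairs if ci == c) for c in (0, 1)}
print(p_cause_joint)    # {0: 0.25, 1: 0.75} -- skewed toward the cause value with more effect options

# Causal indifference: uniform over cause values first, then uniform over each
# cause's possible effects (indifference applied to the mechanism separately)
p_cause_causal = {0: 0.5, 1: 0.5}
print(p_cause_causal)   # {0: 0.5, 1: 0.5} -- no "intentional" preference for option-rich causes
```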
This paper highlights the role of Lewis’ Principal Principle and certain auxiliary conditions on admissibility as serving to explicate normal informal standards of what is reasonable. These considerations motivate the presuppositions of the argument that the Principal Principle implies the Principle of Indifference, put forward by Hawthorne et al. They also suggest a line of response to recent criticisms of that argument, due to Pettigrew (2020) and to Titelbaum and Hart (2020, pp. 621–632). The paper also shows that related concerns of Hart and Titelbaum (2015, pp. 252–262) do not undermine the argument of Hawthorne et al.
The Principle of Indifference (POI) is a rule for rationally assigning precise degrees of confidence to possibilities among which we have no reason to discriminate. Despite criticism of the principle stemming from Bertrand's paradox, many have recently come to the defense of POI or adopted some restricted version of that principle, especially in discussions of self-locating belief. I argue that POI in both unrestricted and restricted forms is untenable, and that arguments for the more restricted principles are hostage to problems similar to those that bedevil arguments for traditional POI.
The epistemic probability of A given B is the degree to which B evidentially supports A, or makes A plausible. This paper is a first step in answering the question of what determines the values of epistemic probabilities. I break this question into two parts: the structural question and the substantive question. Just as an object’s weight is determined by its mass and gravitational acceleration, some probabilities are determined by other, more basic ones. The structural question asks what probabilities are not determined in this way—these are the basic probabilities which determine values for all other probabilities. The substantive question asks how the values of these basic probabilities are determined. I defend an answer to the structural question on which basic probabilities are the probabilities of atomic propositions conditional on potential direct explanations. I defend this against the view, implicit in orthodox mathematical treatments of probability, that basic probabilities are the unconditional probabilities of complete worlds. I then apply my answer to the structural question to clear up common confusions in expositions of Bayesianism and shed light on the “problem of the priors”.
In a recent paper in this journal, James Hawthorne, Jürgen Landes, Christian Wallmann, and Jon Williamson argue that the principal principle entails the principle of indifference. In this article, I argue that it does not. Lewis’s version of the principal principle notoriously depends on a notion of admissibility, which Lewis uses to restrict its application. HLWW base their argument on certain intuitions concerning when one proposition is admissible for another: Conditions 1 and 2. There are two ways of reading their argument, depending on how you understand the status of these conditions. Reading 1: The correct account of admissibility is determined independently of these two principles, and yet these two principles follow from that correct account. Reading 2: The correct account of admissibility is determined in part by these two principles, so that the principles follow from that account but only because the correct account is constrained so that it must satisfy them. HLWW show that given an account of admissibility on which Conditions 1 and 2 hold, the principal principle entails the principle of indifference. I argue that on either reading of the argument, it fails. First, I argue that there is a plausible account of admissibility on which Conditions 1 and 2 are false. That defeats Reading 1. Next, I argue that the intuitions that lead us to assent to Condition 2 also lead us to assent to other very closely related principles that are inconsistent with Condition 2. This, I claim, casts doubt on the reliability of those intuitions, and thus removes our justification for Condition 2. This defeats Reading 2 of the HLWW argument. Thus, the argument fails.
Roger White argued for a principle of indifference. Hart and Titelbaum showed that White’s argument relied on an intuition about conditioning on biconditionals that, while widely shared, is incorrect. Hawthorne, Landes, Wallmann, and Williamson argue for a principle of indifference. Remarkably, their argument relies on the same faulty intuition. We explain their intuition, explain why it’s faulty, and show how it generates their principle of indifference.
One well-known objection to the principle of maximum entropy is the so-called Judy Benjamin problem, first introduced by van Fraassen. The problem turns on the apparently puzzling fact that, on the basis of information relating an event’s conditional probability, the maximum entropy distribution will almost always assign to the conditioning event a probability strictly less than the one it receives from the uniform distribution. In this article, I present an analysis of the Judy Benjamin problem that can help to make sense of this seemingly odd feature of maximum entropy inference. My analysis is based on the claim that, in applying the principle of maximum entropy, Judy Benjamin is not acting out of a concern to maximize uncertainty in the face of new evidence, but is rather exercising a certain brand of epistemic charity towards her informant. This epistemic charity takes the form of an assumption on the part of Judy Benjamin that her informant’s evidential report leaves out no relevant information. Such a reconceptualization of the motives underlying Judy Benjamin’s appeal to the principle of maximum entropy can help to further our understanding of the true epistemological grounds of this principle and correct a common misapprehension regarding its relationship to the principle of insufficient reason.
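A compact numerical sketch (my own, using the standard four-cell version of the case with the usual constraint P(HQ | Red) = 3/4, not code from the article) shows the feature in question: maximizing entropy subject to that conditional constraint pushes the probability of Red territory below the uniform value of 1/2, to roughly 0.47.

```python
import numpy as np

# Four cells: Red-HQ, Red-2nd, Blue-HQ, Blue-2nd.  Constraint: P(HQ | Red) = 3/4.
# Parametrize by r = P(Red) and s = P(HQ | Blue); within Red the constraint
# fixes the 3:1 split.  Maximize entropy over r and s by grid search.
def entropy(ps):
    ps = np.asarray(ps)
    ps = ps[ps > 0]
    return -np.sum(ps * np.log(ps))

best = None
for r in np.linspace(0.001, 0.999, 999):
    for s in np.linspace(0.001, 0.999, 99):
        cells = [0.75 * r, 0.25 * r, s * (1 - r), (1 - s) * (1 - r)]
        h = entropy(cells)
        if best is None or h > best[0]:
            best = (h, r, s)

print("P(Red) ≈", round(best[1], 3))          # ≈ 0.467, below the uniform 0.5
print("P(HQ | Blue) ≈", round(best[2], 3))    # ≈ 0.5
```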
The principle of indifference (PI) states that in the absence of any relevant evidence, a rational agent will distribute their credence equally among all the possible outcomes under consideration. Despite its intuitive plausibility, PI famously falls prey to paradox, and so is widely rejected as a principle of ideal rationality. In this article, I present a novel rehabilitation of PI in terms of the epistemology of comparative confidence judgments. In particular, I consider two natural comparative reformulations of PI and argue that while one of them prescribes the adoption of patently irrational epistemic states, the other provides a consistent formulation of PI that overcomes the most salient limitations of existing formulations.
If the laws of nature are as the Humean believes, it is an unexplained cosmic coincidence that the actual Humean mosaic is as extremely regular as it is. This is a strong and well-known objection to the Humean account of laws. Yet, as reasonable as this objection may seem, it is nowadays sometimes dismissed. The reason: its unjustified implicit assignment of equiprobability to each possible Humean mosaic; that is, its assumption of the principle of indifference, which has been attacked on many grounds ever since it was first proposed. In place of equiprobability, recent formal models represent the doxastic state of total ignorance as suspension of judgment. In this paper I revisit the cosmic coincidence objection to Humean laws by assessing which doxastic state we should endorse. By focusing on specific features of our scenario I conclude that suspending judgment results in an unnecessarily weak doxastic state. First, I point out that recent literature in epistemology has provided independent justifications of the principle of indifference. Second, given that the argument is framed within a Humean metaphysics, it turns out that we are warranted to appeal to these justifications and assign a uniform and additive credence distribution among Humean mosaics. This leads us to conclude that, contrary to widespread opinion, we should not dismiss the cosmic coincidence objection to the Humean account of laws.
Shepard’s (1987) universal law of generalisation (ULG) illustrates that an invariant gradient of generalisation across species and across stimulus conditions can be obtained by mapping the probability of a generalisation response onto the representations of similarity between individual stimuli. Tenenbaum and Griffiths’ (2001) Bayesian account of generalisation expands ULG towards generalisation from multiple examples. Though the Bayesian model starts from Shepard’s account, it refrains from any commitment to the notion of psychological similarity to explain categorisation. This chapter presents the conceptual spaces theory (Gärdenfors 2000, 2014) as a mediator between Shepard’s and Tenenbaum & Griffiths’ conflicting views on the role of psychological similarity for a successful model of categorisation. It suggests that the conceptual spaces theory can help to improve the Bayesian model while finding an explanatory role for psychological similarity.
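As a rough sketch of the size principle behind the Bayesian account of generalisation (my own miniature version with made-up interval hypotheses and a uniform prior, not the chapter's model), generalisation from multiple examples can be computed by averaging over interval-shaped consequential regions, each weighted by prior times (1/|h|)^n; accumulating examples within a narrow range lowers generalisation to stimuli just outside it.

```python
from fractions import Fraction

# Hypotheses: all integer intervals [a, b] within 1..20 (consequential regions)
hypotheses = [(a, b) for a in range(1, 21) for b in range(a, 21)]

def generalization(y, examples):
    """P(y in C | examples) via hypothesis averaging with the size principle."""
    num, den = Fraction(0), Fraction(0)
    for (a, b) in hypotheses:
        if all(a <= x <= b for x in examples):
            size = b - a + 1
            weight = Fraction(1, size ** len(examples))  # uniform prior x likelihood
            den += weight
            if a <= y <= b:
                num += weight
    return float(num / den)

print(generalization(12, [10]))          # generalization to 12 from one example at 10
print(generalization(12, [9, 10, 11]))   # lower: three examples confined to 9-11
```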
Certain mathematical problems prove very hard to solve because some of their intuitive features have not been assimilated or cannot be assimilated by the available mathematical resources. This state of affairs triggers an interesting dynamic whereby the introduction of novel conceptual resources converts the intuitive features into further mathematical determinations in light of which a solution to the original problem is made accessible. I illustrate this phenomenon through a study of Bertrand’s paradox.
Epistemic Permissivists face a special problem about the relationship between our first- and higher-order attitudes. They claim that rationality often permits a range of doxastic responses to the evidence. Given plausible assumptions about the relationship between your first- and higher-order attitudes, it can't be rational to adopt a credence on the edge of that range. But Permissivism says that, for some such range, any credence in that range is rational. Permissivism, in its traditional form, cannot be right. I consider some new ways of developing Permissivism to avoid this argument, but each has problems of its own.
In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a proposed definition of a universal prediction method that goes back to Solomonoff and Levin. This definition marks the birth of the theory of Kolmogorov complexity, and has a direct line to the information-theoretic approach in modern machine learning. Solomonoff's work was inspired by Carnap's program of inductive logic, and the more precise definition due to Levin can be seen as an explicit attempt to escape the diagonal argument that Putnam famously launched against the feasibility of Carnap's program. The Solomonoff-Levin definition essentially aims at a mixture of all possible prediction algorithms. An alternative interpretation is that the definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation and even as a justification of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the Solomonoff-Levin definition fails to unite two necessary conditions to count as a universal prediction method, as turns out to be entailed by Putnam's original argument after all; and I argue that this indeed shows that no definition can. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is already problematic itself.
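To give a feel for the "mixture of all possible prediction algorithms" idea in miniature (my own finite toy version; the actual Solomonoff-Levin mixture ranges over all lower semicomputable semimeasures and is not computable), a Bayesian mixture over a small family of Bernoulli predictors accumulates log-loss almost as low as the best member of the family on any binary sequence, with regret bounded by the log of the family size.

```python
import math

def mixture_log_loss(sequence, thetas):
    """Cumulative log-loss of a uniform Bayesian mixture over Bernoulli(theta) predictors."""
    weights = [1.0 / len(thetas)] * len(thetas)
    total = 0.0
    for bit in sequence:
        # Mixture's predictive probability for the next bit
        p_one = sum(w * t for w, t in zip(weights, thetas))
        p = p_one if bit == 1 else 1.0 - p_one
        total += -math.log(p)
        # Bayesian update of the mixture weights
        likes = [t if bit == 1 else 1.0 - t for t in thetas]
        norm = sum(w * l for w, l in zip(weights, likes))
        weights = [w * l / norm for w, l in zip(weights, likes)]
    return total

def single_log_loss(sequence, theta):
    return sum(-math.log(theta if bit == 1 else 1.0 - theta) for bit in sequence)

thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
seq = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1] * 20   # a mostly-ones sequence

best = min(single_log_loss(seq, t) for t in thetas)
print(round(mixture_log_loss(seq, thetas), 2), "vs best single predictor", round(best, 2))
# The mixture's extra loss is at most log(len(thetas)) ≈ 1.61 nats.
```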
We develop a Bayesian framework for thinking about the way evidence about the here and now can bear on hypotheses about the qualitative character of the world as a whole, including hypotheses according to which the total population of the world is infinite. We show how this framework makes sense of the practice cosmologists have recently adopted in their reasoning about such hypotheses.
Hawthorne, Landes, Wallmann and Williamson argue that the Principal Principle implies a version of the Principle of Indifference. We show that what the authors take to be the Principle of Indifference can be obtained without invoking anything which would seem to be related to the Principal Principle. In the Appendix we also discuss several Conditions proposed in the same paper.
We argue that David Lewis’s principal principle implies a version of the principle of indifference. The same is true for similar principles that need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism.
I present an argument against the thesis of Uniqueness and in favour of Permissivism. Counterexamples to Uniqueness are provided, based on ‘Safespot’ propositions – i.e. propositions that are guaranteed to be true provided the subject adopts a certain attitude towards them. The argument relies on a plausible principle: (roughly stated) If S knows that her believing p would be a true belief, then it is rationally permitted for S to believe p. One motivation for denying this principle – viz. opposition to ‘epistemic consequentialism’ – is briefly discussed. The principle is extended to cover degrees of belief and compared with a couple of other well-known constraints on rational degrees of belief.
Many theorists have proposed that we can use the principle of indifference to defeat the inductive sceptic. But any such theorist must confront the objection that different ways of applying the principle of indifference lead to incompatible probability assignments. Huemer offers the explanatory priority proviso as a strategy for overcoming this objection. With this proposal, Huemer claims that we can defend induction in a way that is not question-begging against the sceptic. But in this article, I argue that the opposite is true: if anything, Huemer’s use of the principle of indifference supports the rationality of inductive scepticism.
An a priori semimeasure (also known as “algorithmic probability” or “the Solomonoff prior” in the context of inductive inference) is defined as the transformation, by a given universal monotone Turing machine, of the uniform measure on the infinite strings. It is shown in this paper that the class of a priori semimeasures can equivalently be defined as the class of transformations, by all compatible universal monotone Turing machines, of any continuous computable measure in place of the uniform measure. Some consideration is given to possible implications for the association of algorithmic probability with certain foundational principles of statistics.
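For readers who want the definition in symbols, a standard rendering (my own gloss, consistent with the description above, with $\lambda$ the uniform measure on infinite binary input streams and $U$ a universal monotone machine) is:

```latex
M_U(x) \;=\; \lambda\{\omega \in \{0,1\}^{\infty} : U(\omega) \succeq x\}
       \;=\; \sum_{\substack{p\ \text{minimal}\\ U(p)\,\succeq\, x}} 2^{-|p|},
```

where $U(\omega) \succeq x$ means that the output of $U$ on input $\omega$ extends the finite string $x$, and the sum ranges over minimal finite programs; the paper's equivalence claim then concerns replacing $\lambda$ by an arbitrary continuous computable measure.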
It is well known that there are at least two sorts of cases where one should not prefer a direct inference based on a narrower reference class: cases where the narrower reference class is gerrymandered, and cases where one lacks an evidential basis for forming a precise-valued frequency judgment for the narrower reference class. I here propose (1) that the preceding exceptions exhaust the circumstances where one should not prefer direct inference based on a narrower reference class, and (2) that minimal frequency information for a narrower (non-gerrymandered) reference class is sufficient to yield the defeat of a direct inference for a broader reference class. By the application of a method for inferring relatively informative expected frequencies, I argue that the latter claim does not result in an overly incredulous approach to direct inference. The method introduced here permits one to infer a relatively informative expected frequency for a reference class R', given frequency information for a superset of R' and/or frequency information for a sample drawn from R'.
Richard Pettigrew offers an extended investigation into a particular way of justifying the rational principles that govern our credences. The main principles that he justifies are the central tenets of Bayesian epistemology, though many other related principles are discussed along the way. Pettigrew looks to decision theory in order to ground his argument. He treats an agent's credences as if they were a choice she makes between different options, gives an account of the purely epistemic utility enjoyed by different sets of credences, and then appeals to the principles of decision theory to show that, when epistemic utility is measured in this way, the credences that violate the principles listed above are ruled out as irrational. The account of epistemic utility set out here is the veritist's: the sole fundamental source of epistemic utility for credences is their accuracy. Thus, Pettigrew conducts an investigation in the version of epistemic utility theory known as accuracy-first epistemology.
In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? That is, how should we set our prior or initial credences, the credences with which we begin our credal life? David Lewis liked to call an agent at the beginning of her credal journey a superbaby. The problem of the priors asks for the norms that govern these superbabies.

The Principle of Indifference gives a very restrictive answer. It demands that such an agent divide her credences equally over all possibilities. That is, according to the Principle of Indifference, only one initial credence function is permissible, namely, the uniform distribution. In this paper, I offer a novel argument for the Principle of Indifference. I call it the Argument from Accuracy.
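As a hedged numerical sketch of the kind of accuracy-based reasoning at issue (my own illustration, not the paper's proof): measuring inaccuracy with the Brier score, the uniform credence function over three possibilities has the lowest worst-case inaccuracy, and a random search over alternative credence functions finds none that beats it.

```python
import random

def worst_case_brier(credences):
    """Worst-case Brier inaccuracy of a credence function over n exclusive worlds."""
    n = len(credences)
    return max(sum((credences[i] - (1.0 if i == w else 0.0)) ** 2 for i in range(n))
               for w in range(n))

n = 3
uniform = [1.0 / n] * n
print(round(worst_case_brier(uniform), 4))   # 2/3 ≈ 0.6667

rng = random.Random(0)
beaten = False
for _ in range(100_000):
    raw = [rng.random() for _ in range(n)]
    c = [x / sum(raw) for x in raw]          # a random probabilistic credence function
    if worst_case_brier(c) < worst_case_brier(uniform) - 1e-12:
        beaten = True
print("uniform beaten in worst-case accuracy:", beaten)   # False
```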
This paper has the aim of making Johannes von Kries’s masterpiece, Die Principien der Wahrscheinlichkeitsrechnung of 1886, a little more accessible to the modern reader in three modest ways: first, it discusses the historical background to the book; next, it summarizes the basic elements of von Kries’s approach; and finally, it examines the so-called “principle of cogent reason” with which von Kries’s name is often identified in the English literature.
In several papers, John Norton has argued that Bayesianism cannot handle ignorance adequately due to its inability to distinguish between neutral and disconfirming evidence. He argued that this inability sows confusion in, e.g., anthropic reasoning in cosmology or the Doomsday argument, by allowing one to draw unwarranted conclusions from a lack of knowledge. Norton has suggested criteria for a candidate for representation of neutral support. Imprecise credences (families of credal probability functions) constitute a Bayesian-friendly framework that allows us to avoid inadequate neutral priors and better handle ignorance. The imprecise model generally agrees with Norton's representation of ignorance but requires that his criterion of self-duality be reformulated or abandoned.
The Doomsday argument and anthropic reasoning are two puzzling examples of probabilistic confirmation. In both cases, a lack of knowledge apparently yields surprising conclusions. Since they are formulated within a Bayesian framework, they constitute a challenge to Bayesianism. Several attempts, some successful, have been made to avoid these conclusions, but some versions of these arguments cannot be dissolved within the framework of orthodox Bayesianism. I show that adopting an imprecise framework of probabilistic reasoning allows for a more adequate representation of ignorance in Bayesian reasoning and explains away these puzzles.
Cosmology raises novel philosophical questions regarding the use of probabilities in inference. This work aims at identifying and assessing lines of arguments and problematic principles in probabilistic reasoning in cosmology.

The first, second, and third papers deal with the intersection of two distinct problems: accounting for selection effects, and representing ignorance or indifference in probabilistic inferences. These two problems meet in the cosmology literature when anthropic considerations are used to predict cosmological parameters by conditionalizing the distribution of, e.g., the cosmological constant on the number of observers it allows for. However, uniform probability distributions usually appealed to in such arguments are an inadequate representation of indifference, and lead to unfounded predictions. It has been argued that this inability to represent ignorance is a fundamental flaw of any inductive framework using additive measures. In the first paper, I examine how imprecise probabilities fare as an inductive framework and avoid such unwarranted inferences. In the second paper, I detail how this framework allows us to successfully avoid the conclusions of Doomsday arguments in a way no Bayesian approach that represents credal states by single credence functions could.

There are in the cosmology literature several kinds of arguments referring to self-locating uncertainty. In the multiverse framework, different "pocket-universes" may have different fundamental physical parameters. We don’t know if we are typical observers and if we can safely assume that the physical laws we draw from our observations hold elsewhere. The third paper examines the validity of the appeal to the "Sleeping Beauty problem" and assesses the nature and role of typicality assumptions often endorsed to handle such questions.

A more general issue for the use of probabilities in cosmology concerns the inadequacy of Bayesian and statistical model selection criteria in the absence of well-motivated measures for different cosmological models. The criteria for model selection commonly used tend to focus on optimizing the number of free parameters, but they can select physically implausible models. The fourth paper examines the possibility for Bayesian model selection to circumvent the lack of well-motivated priors.
Bertrand’s paradox is a fundamental problem in probability that casts doubt on the applicability of the indifference principle by showing that it may yield contradictory results, depending on the meaning assigned to “randomness”. Jaynes claimed that symmetry requirements solve the paradox by selecting a unique solution to the problem. I show that this is not the case and that every variant obtained from the principle of indifference can also be obtained from Jaynes’ principle of transformation groups. This is because the same symmetries can be mathematically implemented in different ways, depending on the procedure of random selection that one uses. I describe a simple experiment that supports a result from symmetry arguments, but the solution is different from Jaynes’. Jaynes’ method is thus best seen as a tool to obtain probability distributions when the principle of indifference is inconvenient, but it cannot resolve ambiguities inherent in the use of that principle and still depends on explicitly defining the selection procedure.
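To see the ambiguity concretely, here is a small Monte Carlo sketch (my own illustration of the three textbook selection procedures, not code from the paper): each procedure is a perfectly good way of picking a "random chord" of a circle, yet they give different probabilities that the chord is longer than the side of the inscribed equilateral triangle (approximately 1/3, 1/2, and 1/4 respectively).

```python
import math
import random

rng = random.Random(1)
R = 1.0
SIDE = math.sqrt(3) * R          # side length of the inscribed equilateral triangle
N = 200_000

def chord_from_endpoints():
    a, b = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
    return 2 * R * math.sin(abs(a - b) / 2)

def chord_from_radial_point():
    d = rng.uniform(0, R)        # distance of the midpoint along a random radius
    return 2 * math.sqrt(R * R - d * d)

def chord_from_midpoint():
    while True:                  # midpoint uniform in the disk (rejection sampling)
        x, y = rng.uniform(-R, R), rng.uniform(-R, R)
        if x * x + y * y <= R * R:
            return 2 * math.sqrt(R * R - (x * x + y * y))

for name, method in [("random endpoints", chord_from_endpoints),
                     ("random radial point", chord_from_radial_point),
                     ("random midpoint", chord_from_midpoint)]:
    p = sum(method() > SIDE for _ in range(N)) / N
    print(name, round(p, 3))     # ≈ 0.333, 0.5, 0.25
```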
The classical interpretation of probability together with the principle of indifference is formulated in terms of probability measure spaces in which the probability is given by the Haar measure. A notion called labelling invariance is defined in the category of Haar probability spaces; it is shown that labelling invariance is violated, and Bertrand’s paradox is interpreted as the proof of violation of labelling invariance. It is shown that Bangu’s attempt to block the emergence of Bertrand’s paradox by requiring the re-labelling of random events to preserve randomness cannot succeed non-trivially. A non-trivial strategy to preserve labelling invariance is identified, and it is argued that, under the interpretation of Bertrand’s paradox suggested in the paper, the paradox does not undermine either the principle of indifference or the classical interpretation and is in complete harmony with how mathematical probability theory is used in the sciences to model phenomena. It is shown in particular that violation of labelling invariance does not entail that labelling of random events affects the probabilities of random events. It also is argued, however, that the content of the principle of indifference cannot be specified in such a way that it can establish the classical interpretation of probability as descriptively accurate or predictively successful.
God's Dice. Vasil Penchev - 2015 - In S. Oms, J. Martínez, M. García-Carpintero & J. Díez (eds.), Actas: VIII Conference of the Spanish Society for Logic, Methodology, and Philosophy of Sciences. Barcelona: Universitat de Barcelona. pp. 297-303.
Einstein wrote his famous sentence "God does not play dice with the universe" in a letter to Max Born in 1926. All experiments have confirmed that quantum mechanics is neither wrong nor “incomplete”. One can say that God does play dice with the universe. Let quantum mechanics be granted as the rules generalizing all results of playing some imaginary God’s dice. If that is the case, one can ask what God’s dice should look like. God’s dice turns out to be a qubit, and thus to have the shape of a unit ball. Any item in the universe, as well as the universe itself, is both infinitely many rolls and a single roll of that dice, for it has infinitely many “sides”. Thus both the smooth motion of classical physics and the discrete motion introduced in addition by quantum mechanics can be described uniformly: correspondingly, as an infinite series converging to some limit and as a quantum jump directly into that limit. The second, imaginary dimension of God’s dice corresponds to energy, i.e. to the velocity of information change between two probabilities, in both the series and the jump.
The _Principle of Indifference_ was once regarded as a linchpin of probabilistic reasoning, but has now fallen into disrepute as a result of the so-called _problem of multiple partitions_. In ‘Evidential symmetry and mushy credence’, Roger White suggests that we have been too quick to jettison this principle and argues that the problem of multiple partitions rests on a mistake. In this paper I will criticise White’s attempt to revive POI. In so doing, I will argue that what underlies the problem of multiple partitions is a fundamental tension between POI and the very idea of _evidential incomparability_.
Bertrand's paradox is a famous problem of probability theory, pointing to a possible inconsistency in Laplace's principle of insufficient reason. In this article, we show that Bertrand's paradox contains two different problems: an “easy” problem and a “hard” problem. The easy problem can be solved by formulating Bertrand's question in sufficiently precise terms, so allowing for a non-ambiguous modelization of the entity subjected to the randomization. We then show that once the easy problem is settled, also the hard problem becomes solvable, provided Laplace's principle of insufficient reason is applied not to the outcomes of the experiment, but to the different possible “ways of selecting” an interaction between the entity under investigation and that producing the randomization. This consists in evaluating a huge average over all possible “ways of selecting” an interaction, which we call a universal average. Following a strategy similar to that used in the definition of the Wiener measure, we calculate such universal average and therefore solve the hard problem of Bertrand's paradox. The link between Bertrand's problem of probability theory and the measurement problem of quantum mechanics is also briefly discussed.
Sometimes different partitions of the same space each seem to divide that space into propositions that call for equal epistemic treatment. Famously, equal treatment in the form of equal point-valued credence leads to incoherence. Some have argued that equal treatment in the form of equal interval-valued credence solves the puzzle. This paper shows that, once we rule out intervals with extreme endpoints, this proposal also leads to incoherence.