This article discusses how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. Techniques and ideas from non-standard analysis are brought to bear on the problem.
Non-Archimedean probability functions allow us to combine regularity with perfect additivity. We discuss the philosophical motivation for a particular choice of axioms for a non-Archimedean probability theory and answer some philosophical objections that have been raised against infinitesimal probabilities in general.
1 Introduction
2 The Limits of Classical Probability Theory
2.1 Classical probability functions
2.2 Limitations
2.3 Infinitesimals to the rescue?
3 NAP Theory
3.1 First four axioms of NAP
3.2 Continuity and conditional probability
3.3 The final axiom of NAP
3.4 Infinite sums
3.5 Definition of NAP functions via infinite sums
3.6 Relation to numerosity theory
4 Objections and Replies
4.1 Cantor and the Archimedean property
4.2 Ticket missing from an infinite lottery
4.3 Williamson’s infinite sequence of coin tosses
4.4 Point sets on a circle
4.5 Easwaran and Pruss
5 Dividends
5.1 Measure and utility
5.2 Regularity and uniformity
5.3 Credence and chance
5.4 Conditional probability
6 General Considerations
6.1 Non-uniqueness
6.2 Invariance
Appendix
This article compares inference to the best explanation with Bayes’s rule in a social setting, specifically, in the context of a variant of the Hegselmann–Krause model in which agents not only update their belief states on the basis of evidence they receive directly from the world, but also take into account the belief states of their fellow agents. So far, the update rules mentioned have been studied only in an individualistic setting, and it is known that in such a setting both have their strengths as well as their weaknesses. It is shown here that in a social setting, inference to the best explanation outperforms Bayes’s rule according to every desirable criterion.
1 What Is Inference to the Best Explanation?
2 Judging the Rules—By Which Lights?
3 From an Individualistic to a Social Perspective
3.1 The Hegselmann–Krause model
3.2 A probabilistic extension of the Hegselmann–Krause model
3.3 Simulations
4 Results and Discussion
5 Interpretation
6 Conclusion
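The basic averaging dynamics of the Hegselmann–Krause model can be sketched in a few lines. This is a minimal illustration of the standard (non-probabilistic) model only, assuming a one-dimensional opinion space; the agent opinions and the confidence bound `eps` below are arbitrary choices, not parameters from the study.

```python
def hk_update(opinions, eps):
    """One synchronous Hegselmann–Krause step: each agent adopts the average
    opinion of all agents (itself included) within distance eps of its own."""
    new = []
    for x in opinions:
        peers = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(peers) / len(peers))
    return new

ops = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0]
for _ in range(20):
    ops = hk_update(ops, eps=0.25)
print(ops)  # the population splits into two stable clusters, near 0.1 and 0.9
```

In the social variants discussed above, a step of this kind is combined with evidence-driven updating (Bayes’s rule or inference to the best explanation) on each agent’s own data.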
We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned probability zero (in other words: the probability functions are regular). We use a non-Archimedean field as the range of the probability function. As a result, the property of countable additivity in Kolmogorov’s axiomatization of probability is replaced by a different type of infinite additivity.
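Regularity is easy to see in the finite case and easy to lose in the limit. A small numerical sketch (the lottery sizes are arbitrary): every ticket in a fair n-ticket lottery has probability 1/n > 0, but these values shrink below every positive real number, so a real-valued uniform probability on all the naturals must assign 0 to each ticket. NAP avoids this by letting the probability be a positive infinitesimal.

```python
from fractions import Fraction

def ticket_prob(n):
    """Probability of any single ticket in a fair n-ticket lottery."""
    return Fraction(1, n)

# Regularity in the finite case: every possible outcome has positive probability.
assert all(ticket_prob(n) > 0 for n in range(1, 1000))

# The values 1/n sink below every positive real number, so their Archimedean
# (real-valued) limit is 0; only a non-Archimedean range can keep them positive.
print([ticket_prob(n) for n in (10, 1000, 10**6)])
```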
This paper considers Kamp and Partee's account of graded membership within a conceptual spaces framework and puts the account to the test in the domain of colors. Three experiments are reported that are meant to determine, on the one hand, the regions in color space where the typical instances of blue and green are located and, on the other hand, the degrees of blueness/greenness of various shades in the blue–green region as judged by human observers. From the locations of the typical blue and typical green regions in conjunction with Kamp and Partee's account follow degrees of blueness/greenness for the color shades we are interested in. These predicted degrees are compared with the judged degrees, as obtained in the experiments. The results of the comparison support the account of graded membership at issue.
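A toy version of distance-based graded membership can make the idea concrete. The prototype coordinates and the relative-proximity formula below are invented for illustration; they are not the experimentally determined regions or the exact formula of Kamp and Partee's account.

```python
import math

def membership(shade, proto_blue, proto_green):
    """Toy graded membership in a color space: the degree of blueness of a
    shade is its relative proximity to the typical-blue prototype compared
    with the typical-green prototype; greenness is the complement."""
    d_blue = math.dist(shade, proto_blue)
    d_green = math.dist(shade, proto_green)
    blueness = d_green / (d_blue + d_green)
    return blueness, 1 - blueness

# Hypothetical CIELAB-like coordinates for the two prototypes:
BLUE, GREEN = (30.0, 10.0, -50.0), (50.0, -50.0, 40.0)
print(membership((40.0, -20.0, -5.0), BLUE, GREEN))  # an intermediate shade
```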
We present a conservative extension of a Bayesian account of confirmation that can deal with the problem of old evidence and new theories. So-called open-minded Bayesianism challenges the assumption—implicit in standard Bayesianism—that the correct empirical hypothesis is among the ones currently under consideration. It requires the inclusion of a catch-all hypothesis, which is characterized by means of sets of probability assignments. Upon the introduction of a new theory, the former catch-all is decomposed into a new empirical hypothesis and a new catch-all. As will be seen, this motivates a second update rule, besides Bayes’ rule, for updating probabilities in light of a new theory. This rule conserves probability ratios among the old hypotheses. This framework allows for old evidence to confirm a new hypothesis due to a shift in the theoretical context. The result is a version of Bayesianism that, in the words of Earman, “keep[s] an open mind, but not so open that your brain falls out”.
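The theory-introduction step can be sketched as a second update rule next to Bayes’ rule. The abstract fixes only that the rule conserves probability ratios among the old hypotheses; the uniform rescaling below is one minimal rule with that property, and the hypothesis labels and numbers are invented for illustration (in the account above, the new hypothesis's mass comes out of the catch-all specifically).

```python
def introduce_theory(old_probs, p_new):
    """Sketch of a ratio-conserving theory-introduction update: the newly
    formulated hypothesis receives probability p_new, and every old
    hypothesis (catch-all included) is rescaled by a common factor, which
    leaves the ratios among the old hypotheses unchanged."""
    scale = 1.0 - p_new
    updated = {h: p * scale for h, p in old_probs.items()}
    updated["T_new"] = p_new
    return updated

prior = {"H1": 0.5, "H2": 0.3, "catch-all": 0.2}
posterior = introduce_theory(prior, p_new=0.1)
print(posterior)  # H1 : H2 is still 5 : 3, and the total is still 1
```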
Many conditionals seem to convey the existence of a link between their antecedent and consequent. We draw on a recently proposed typology of conditionals to argue for an old philosophical idea according to which the link is inferential in nature. We show that the proposal has explanatory force by presenting empirical results on the evidential meaning of certain English and Dutch modal expressions.
According to the Principle of Conditional Non-Contradiction (CNC), conditionals of the form “If p, q” and “If p, not q” cannot both be true, unless p is inconsistent. This principle is widely regarded as an adequacy constraint on any semantics that attributes truth conditions to conditionals. Gibbard has presented an example of a pair of conditionals that, in the context he describes, appear to violate CNC. He concluded from this that conditionals lack truth conditions. We argue that this conclusion is rash by proposing a new diagnosis of what is going on in Gibbard’s argument. We also provide empirical evidence in support of our proposal.
Human freedom is in tension with nomological determinism and with statistical determinism. The goal of this paper is to answer both challenges. Four contributions are made to the free-will debate. First, we propose a classification of scientific theories based on how much freedom they allow. We take into account that indeterminism comes in different degrees and that both the laws and the auxiliary conditions can place constraints. A scientific worldview pulls towards one end of this classification, while libertarianism pulls towards the other end of the spectrum. Second, inspired by Hoefer, we argue that an interval of auxiliary conditions corresponds to a region in phase space, and to a bundle of possible block universes. We thus make room for a form of non-nomological indeterminism. Third, we combine crucial elements from the works of Hoefer and List; we attempt to give a libertarian reading of this combination. On our proposal, throughout spacetime, there is a certain amount of freedom that can be interpreted as the result of agential choices. Fourth, we focus on the principle of alternative possibilities throughout and propose three ways of strengthening it.
A popular way to relate probabilistic information to binary rational beliefs is the Lockean Thesis, which is usually formalized in terms of thresholds. This approach seems far from satisfactory: the value of the thresholds is not well-specified and the Lottery Paradox shows that the model violates the Conjunction Principle. We argue that the Lottery Paradox is a symptom of a more fundamental and general problem, shared by all threshold-models that attempt to put an exact border on something that is intrinsically vague. We propose application of the language of relative analysis—a type of non-standard analysis—to formulate a new model for rational belief, called Stratified Belief. This contextualist model seems well-suited to deal with a concept of beliefs based on probabilities ‘sufficiently close to unity’ and satisfies a moderately weakened form of the Conjunction Principle. We also propose an adaptation of the model that is able to deal with beliefs that are less firm than ‘almost certainty’. The adapted version is also of interest for the epistemicist account of vagueness.
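The Lottery Paradox that motivates this diagnosis is easy to state numerically. A minimal sketch (the lottery size and threshold value are arbitrary choices): each proposition 'ticket i loses' clears the threshold, yet their conjunction is certainly false.

```python
N = 1_000          # tickets in a fair lottery with exactly one winner
THRESHOLD = 0.99   # Lockean acceptance threshold (an arbitrary choice)

# For each ticket i, P("ticket i loses") = 1 - 1/N, which exceeds the
# threshold, so the threshold model licenses belief that ticket i loses.
p_ticket_loses = 1 - 1 / N
believed = p_ticket_loses > THRESHOLD

# The conjunction "every ticket loses" contradicts the setup (some ticket
# wins), so it has probability 0: the Conjunction Principle fails.
p_all_lose = 0.0
print(believed, p_ticket_loses, p_all_lose)
```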
This dissertation is a contribution to formal and computational philosophy.

In the first part, we show that by exploiting the parallels between large, yet finite lotteries on the one hand and countably infinite lotteries on the other, we gain insights into the foundations of probability theory as well as into epistemology. Case 1: Infinite lotteries. We discuss how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. The solution boils down to the introduction of infinitesimal probability values, which can be achieved using non-standard analysis. Our solution can be generalized to uncountable sample spaces, giving rise to a Non-Archimedean Probability (NAP) theory. Case 2: Large but finite lotteries. We propose application of the language of relative analysis (a type of non-standard analysis) to formulate a new model for rational belief, called Stratified Belief. This contextualist model seems well-suited to deal with a concept of beliefs based on probabilities ‘sufficiently close to unity’.

The second part presents a case study in social epistemology. We model a group of agents who update their opinions by averaging the opinions of other agents. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating. To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. The probability of ending up in an inconsistent belief state turns out to be always smaller than 2%.
Traditionally, epistemologists have held that only truth-related factors matter in the question of whether a subject can be said to know a proposition. Various philosophers have recently departed from this doctrine by claiming that the answer to this question also depends on practical concerns. They take this move to be warranted by the fact that people’s knowledge attributions appear sensitive to contextual variation, in particular variation due to differing stakes. This paper proposes an alternative explanation of the aforementioned fact, one that allows us to stick to the orthodoxy. The alternative applies the conceptual spaces approach to the concept of knowledge. With knowledge conceived of spatially, the variability in knowledge attributions follows from recent work on identity, according to which our standards for judging things (including concepts) to be identical are context-dependent. On the proposal to be made, it depends on what is at stake in a context whether it is worth distinguishing between knowing and being at least close to knowing.
The Snow White problem is introduced to demonstrate how learning something of which one could not have learnt the opposite can change an agent’s probability assignment. This helps us to analyse the Sleeping Beauty problem, which is deconstructed as a combinatorial engine and a subjective wrapper. The combinatorial engine of the problem is analogous to Bertrand’s boxes paradox and can be solved with standard probability theory. The subjective wrapper is clarified using the Snow White problem. Sample spaces for all three problems are presented. The conclusion is that subjectivity plays no irreducible role in solving the Sleeping Beauty problem and that no reference to centered worlds is required to provide the answer.
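The claim that the combinatorial engine is analogous to Bertrand's boxes can be checked with a short simulation. A minimal sketch (the trial count and seed are arbitrary): three boxes hold gold-gold, gold-silver, and silver-silver coins; given that a randomly drawn coin is gold, the chance that its box-mate is also gold is 2/3, not 1/2.

```python
import random

def bertrand_boxes(trials=100_000, seed=0):
    """Monte Carlo estimate for Bertrand's boxes: draw a box and a coin at
    random; conditional on the coin being gold, estimate the probability
    that the other coin in the same box is also gold."""
    rng = random.Random(seed)
    boxes = [("G", "G"), ("G", "S"), ("S", "S")]
    gold_draws = other_gold = 0
    for _ in range(trials):
        box = rng.choice(boxes)
        i = rng.randrange(2)
        if box[i] == "G":
            gold_draws += 1
            if box[1 - i] == "G":
                other_gold += 1
    return other_gold / gold_draws

print(bertrand_boxes())  # close to 2/3, not 1/2
```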
By exploiting the parallels between large, yet finite lotteries on the one hand and countably infinite lotteries on the other, we gain insights into the foundations of probability theory as well as into epistemology. We solve the 'adding problems' that occur in these two contexts using a similar strategy, based on non-standard analysis.
Mathematics may seem unreasonably effective in the natural sciences, in particular in physics. In this essay, I argue that this judgment can be attributed, at least in part, to selection effects. In support of this central claim, I offer four elements. The first element is that we are creatures that evolved within this Universe, and that our pattern-finding abilities are selected by this very environment. The second element is that our mathematics—although not fully constrained by the natural world—is strongly inspired by our perception of it. Related to this, the third element finds fault with the usual assessment of the efficiency of mathematics: our focus on the rare successes leaves us blind to the ubiquitous failures. The fourth element is that the act of applying mathematics provides many more degrees of freedom than those internal to mathematics. This final element will be illustrated by the use of ‘infinitesimals’ in the context of mathematics and that of physics. In 1960, Wigner wrote an article on this topic, and many later authors have echoed his assessment that the success of mathematics in physics is a mystery. At the end of this essay, I will revisit Wigner and three earlier replies that harmonize with my own view. I will also explore some of Einstein’s ideas that are connected to this. But first, I briefly expose my views of science and mathematics, since these form the canvas of my central claim.
In this contribution, we focus on probabilistic problems with a denumerably or non-denumerably infinite number of possible outcomes. Kolmogorov (1933) provided an axiomatic basis for probability theory, presented as a part of measure theory, which is a branch of standard analysis or calculus. Since standard analysis does not allow for non-Archimedean quantities (i.e. infinitesimals), we may call Kolmogorov's approach "Archimedean probability theory". We show that allowing non-Archimedean probability values may have considerable epistemological advantages in the infinite case. The current paper focuses on the motivation for our new axiomatization.
At least many conditionals seem to convey the existence of a link between their antecedent and consequent. We draw on a recently proposed typology of conditionals to revive an old philosophical idea according to which the link is inferential in nature. We show that the proposal has explanatory force by presenting empirical results on two Dutch linguistic markers.
In this paper, we take a fresh look at three Popperian concepts: riskiness, falsifiability, and truthlikeness of scientific hypotheses or theories. First, we make explicit the dimensions that underlie the notion of riskiness. Secondly, we examine if and how degrees of falsifiability can be defined, and how they are related to various dimensions of the concept of riskiness as well as the experimental context. Thirdly, we consider the relation of riskiness to truthlikeness. Throughout, we pay special attention to probabilistic theories and we offer a tentative, quantitative account of verisimilitude for probabilistic theories.
A New Chance for Infinitesimals?
This article discusses the connection between the Zenonian paradox of magnitude and probability on infinite sample spaces. Two important premises in the Zenonian argument are: the Archimedean axiom, which excludes infinitesimal magnitudes, and perfect additivity. Standard probability theory uses real numbers that satisfy the Archimedean axiom, but it rejects perfect additivity. The additivity requirement for real-valued probabilities is limited to countably infinite collections of mutually incompatible events. A consequence of this is that there exists no standard probability function that describes a fair lottery on the natural numbers. If we reject the Archimedean axiom, allowing infinitesimal probability values, we can retain perfect additivity and describe a fair, countably infinite lottery. The article gives a historical overview to show how the first option became the current standard, whereas the latter remains ‘non-standard’.
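The failure described here already shows up in finite truncations of the lottery on the naturals. A small sketch: limiting relative frequencies give the even numbers probability 1/2 while giving every single ticket probability 0, and those singleton values cannot sum to 1; keeping 1/n as an infinitesimal instead of rounding it to 0 is the non-Archimedean alternative.

```python
from fractions import Fraction

def truncated_prob(pred, n):
    """Probability of the event {k < n : pred(k)} in a fair lottery on {0, ..., n-1}."""
    return Fraction(sum(1 for k in range(n) if pred(k)), n)

# The relative frequency of the even numbers tends to 1/2 ...
evens = [truncated_prob(lambda k: k % 2 == 0, n) for n in (10, 100, 1000)]

# ... while the probability of any fixed ticket tends to 0, so the real-valued
# singleton limits sum to 0 over all tickets instead of to 1.
ticket7 = [truncated_prob(lambda k: k == 7, n) for n in (10, 100, 1000)]
print(evens, ticket7)
```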
We discuss two research projects in material science in which the results cannot be stated with an estimation of the error: a spectroscopic ellipsometry study aimed at determining the orientation of DNA molecules on diamond and a scanning tunneling microscopy study of platinum-induced nanowires on germanium. To investigate the reliability of the results, we apply ideas from the philosophy of models in science. Even if the studies had reported an error value, the trustworthiness of the result would not depend on that value alone.
We present a model for studying communities of epistemically interacting agents who update their belief states by averaging the belief states of other agents in the community. The agents in our model have a rich belief state, involving multiple independent issues which are interrelated in such a way that they form a theory of the world. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating. To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. It is shown that, under the assumptions of our model, an agent always has a probability of less than 2% of ending up in an inconsistent belief state. Moreover, this probability can be made arbitrarily small by increasing the number of independent issues the agents have to judge or by increasing the group size. A real-world situation to which this model applies is a group of experts participating in a Delphi-study.
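A much-simplified toy example shows how averaging followed by discretization can produce inconsistency. The three agents, the rounding rule, and the tiny agenda (p, q, and their conjunction) are invented for illustration and are far simpler than the model in the paper.

```python
def average_and_round(states):
    """Average the agents' 0/1 valuations componentwise, then round each
    component back to 0 or 1 (ties round up)."""
    n = len(states)
    averaged = [sum(col) / n for col in zip(*states)]
    return [1 if a >= 0.5 else 0 for a in averaged]

# Valuations over (p, q, p-and-q); each individual state is consistent.
agents = [
    (1, 0, 0),
    (0, 1, 0),
    (1, 1, 1),
]
updated = average_and_round(agents)
print(updated)  # [1, 1, 0]

# The rounded group state accepts p and accepts q yet rejects their
# conjunction: an inconsistent belief state produced purely by updating.
```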