Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.
Many philosophers have become worried about the use of standard real numbers for the probability function that represents an agent's credences. They point out that real numbers can't capture the distinction between certain extremely unlikely events and actually impossible ones: both get credence 0, which violates a principle known as "regularity". Following Lewis and Skyrms, they recommend that we instead use a much richer set of numbers, called the "hyperreals". I think that this popular view is the result of two mistakes. The first, which I call the "numerical fallacy", is to assume that all there is to a mathematical representation is the numbers. In this case, I claim that the standard mathematical representation already captures the needed distinctions, if we look beyond the numbers. The second mistake is like that made by the cartographers in Borges' story: a representation is no good if it is too large to use. As I show at the end of this paper, the hyperreals have exactly this problem. They are too rich a structure to fit in the world, even if they could help us restore regularity.
Arguments for probabilism aim to motivate a synchronic probabilistic coherence norm for partial beliefs. Standard arguments for probabilism all share the following form: an agent S has a non-probabilistic partial belief function b if and only if (⇐⇒) S has some "bad" property B, in virtue of the fact that their partial belief function b has a certain kind of formal property F. These arguments rest on Theorems (⇒) and Converse Theorems (⇐): b is non-probabilistic ⇐⇒ b has formal property F.
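The Dutch Book argument is the best-known instance of this schema. A minimal numerical sketch (the credences are illustrative, not drawn from the text) shows how one "bad" B-property works: an agent whose credences in A and not-A sum to less than 1 accepts a pair of bets, each priced at her own fair price, that together guarantee a loss in every state.

```python
# A minimal Dutch Book: credences violating additivity (c(A) + c(not-A) < 1)
# license a pair of bets with a guaranteed loss for the agent.
c_A, c_notA = 0.4, 0.4           # illustrative non-probabilistic credences

def agent_net(A_true):
    # The agent sells, at her own fair prices, a $1 bet on A and a $1 bet
    # on not-A: she collects both prices now and pays $1 on whichever wins.
    received = c_A + c_notA
    paid_out = 1.0               # exactly one of A, not-A obtains
    return received - paid_out

losses = [agent_net(True), agent_net(False)]
print(losses)                    # the same sure loss in every state
assert all(l < 0 for l in losses)
```

The symmetry of the two states makes the loss state-independent, which is exactly what the "(⇒)" direction of the Dutch Book Theorem generalizes: every non-probabilistic b exposes the agent to some such sure-loss book.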
It is sometimes alleged that arguments that probability functions should be countably additive show too much, and that they motivate uncountable additivity as well. I show this is false by giving two naturally motivated arguments for countable additivity that do not motivate uncountable additivity.
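The asymmetry the abstract defends can be stated in one line of standard measure theory: the uniform distribution on [0,1] satisfies countable additivity but would contradict an uncountable analogue, since each point gets probability zero while the whole interval gets probability one.

```latex
% Countable additivity: for pairwise disjoint events E_1, E_2, \ldots
P\Big(\bigcup_{n=1}^{\infty} E_n\Big) \;=\; \sum_{n=1}^{\infty} P(E_n).
% Its uncountable analogue fails for the uniform measure on [0,1]:
1 \;=\; P\big([0,1]\big) \;\neq\; \sum_{x \in [0,1]} P(\{x\}) \;=\; 0.
```

This is the standard textbook observation, given here only as background to the abstract's claim; the paper's own two arguments for countable additivity are not reproduced.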
I defend a causal reductionist account of the nature of rates of change like velocity and acceleration. This account identifies velocity with the past derivative of position and acceleration with the future derivative of velocity. Unlike most reductionist accounts, it can preserve the role of velocity as a cause of future positions and acceleration as the effect of current forces. I show that this is possible only if all the fundamental laws are expressed by differential equations of the same order. Consideration of the continuity of time explains why the differential equations are all second order. This explanation is not available on non-causal or non-reductionist accounts of rates of change. Finally, I argue that alleged counterexamples to the reductionist account involving physically impossible worlds are irrelevant to an analysis of the properties that play a causal role in the actual world. 1. Background; 2. Grounding; 3. Causation; 4. The Proposal; 5. Why No Third Derivatives?; 6. Why Any Derivatives?; 7. Counterexamples?
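The "past derivative" and "future derivative" of the abstract can be mimicked numerically by backward and forward difference quotients. A toy sketch (uniformly accelerated trajectory and step size are illustrative assumptions, not from the paper) shows how velocity is recovered from past positions and acceleration from future velocities:

```python
# Backward (past) and forward (future) difference quotients, mimicking the
# abstract's past-derivative velocity and future-derivative acceleration.
a, dt = 2.0, 1e-3                 # illustrative constant acceleration, time step

def x(t):                         # position under x(t) = a t^2 / 2
    return 0.5 * a * t * t

def v(t):                         # velocity as the PAST derivative of position
    return (x(t) - x(t - dt)) / dt

def acc(t):                       # acceleration as the FUTURE derivative of velocity
    return (v(t + dt) - v(t)) / dt

print(v(1.0), acc(1.0))           # v(1) ≈ a * 1 = 2, acc(1) ≈ a = 2
```

The point of the backward/forward asymmetry, on the account sketched in the abstract, is causal direction: velocity is grounded in where the object has been, acceleration in where its velocity is about to go.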
To the extent that we have reasons to avoid these "bad" B-properties, these arguments provide reasons not to have an incoherent credence function b, and perhaps even reasons to have a coherent one. But note that these two traditional arguments for probabilism involve what might be called "pragmatic" reasons (not) to be (in)coherent. In the case of the Dutch Book argument, the "bad" property is pragmatically bad (to the extent that one values money). But it is not clear whether the DBA pinpoints any epistemic defect of incoherent agents. The same can be said for Representation Theorem arguments, since they involve the structure of an agent's preferences.
Pascal's Wager holds that one has pragmatic reason to believe in God, since that course of action has infinite expected utility. The mixed strategy objection holds that one could just as well follow a course of action that has infinite expected utility but is unlikely to end with one believing in God. Monton (2011. Mixed strategies can't evade Pascal's Wager. Analysis 71: 642–45) has argued that mixed strategies can't evade Pascal's Wager, while Robertson (2012. Some mixed strategies can evade Pascal's Wager: a reply to Monton. Analysis 72: 295–98) has argued that Monton is mistaken. We show that Monton is correct, highlight the crucial assumptions that he relies on, and shed some light on the role of mixed strategies in decision theory.
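The arithmetic driving the mixed-strategy objection is simple: with an infinite payoff for salvation, any strategy that gives believing positive probability inherits infinite expected utility. A naive calculation with IEEE infinities (the finite payoff is an illustrative stand-in; the paper's own treatment is more careful than this):

```python
import math

# Naive expected-utility arithmetic behind the mixed-strategy objection:
# any positive chance of the infinite payoff yields infinite expectation.
U_salvation = math.inf            # Pascal's infinite reward
U_worldly = 100.0                 # illustrative finite payoff of not believing

def expected_utility(p_believe):
    # A mixed strategy that ends with belief only with probability p_believe.
    return p_believe * U_salvation + (1 - p_believe) * U_worldly

print(expected_utility(1.0), expected_utility(0.001))   # both infinite
```

This is why the objection has force, and why Monton's reply must locate extra assumptions (about which acts are genuinely available, and their probabilities of success) rather than dispute the arithmetic itself.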
In the first paper, I discussed the basic claims of Bayesianism (that degrees of belief are important, that they obey the axioms of probability theory, and that they are rationally updated by either standard or Jeffrey conditionalization) and the arguments that are often used to support them. In this paper, I will discuss some applications these ideas have had in confirmation theory, epistemology, and statistics, and criticisms of these applications.
Bayesianism is a collection of positions in several related fields, centered on the interpretation of probability as something like degree of belief, as contrasted with relative frequency or objective chance. However, Bayesianism is far from a unified movement. Bayesians are divided about the nature of the probability functions they discuss; about the normative force of this probability function for ordinary and scientific reasoning and decision making; and about what relation (if any) holds between Bayesian and non-Bayesian concepts.
As is clear from the other articles in this volume, logic has applications in a broad range of areas of philosophy. If logic is taken to include the mathematical disciplines of set theory, model theory, proof theory, and recursion theory (as well as first-order logic, second-order logic, and modal logic), then the only other area of mathematics with such wide-ranging applications in philosophy is probability theory.
In a series of papers, Don Fallis points out that although mathematicians are generally unwilling to accept merely probabilistic proofs, they do accept proofs that are incomplete, long and complicated, or partly carried out by computers. He argues that there are no epistemic grounds on which probabilistic proofs can be rejected while these other proofs are accepted. I defend the practice by presenting a property I call 'transferability', which probabilistic proofs lack and acceptable proofs have. I also consider what this says about the similarities between mathematics and, on the one hand, the natural sciences, and, on the other, philosophy.
In his paper, Hud Hudson presents an interesting argument to the conclusion that two temporally continuous, spatially unextended material objects can travel together for all but the last moment of their existences and yet end up one metre apart. What is surprising is that Hudson argues that this can be achieved without either object changing in size or moving discontinuously. This would be quite a trick were it to work, but it is far from clear that it does. The problem is that Hudson's implicit notion of continuity is not the standard one. On the standardly accepted definition of continuity, his example is straightforwardly a case of discontinuous motion, and there is no surprise that Hudson's trick can be achieved by invoking discontinuous motion.
Fine has shown that assigning any value to the Pasadena game is consistent with a certain standard set of axioms for decision theory. However, I suggest that it might be reasonable to believe that the value of an individual game is constrained by the long-run payout of repeated plays of the game. Although there is no value that repeated plays of the Pasadena game converge to in the standard strong sense, I show that there is a weaker sort of convergence it exhibits, and use this to define a notion of 'weak expectation' that can give values to the Pasadena game and many others, though not to all games that fail to have a strong expectation in the standard sense.
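As standardly presented in the literature (the payoff schedule below is reconstructed from that literature, not quoted from the abstract), the Pasadena game pays (-1)^(n-1) * 2^n / n dollars if the first heads appears on toss n. Its expectation series is therefore the alternating harmonic series, which converges only conditionally, so rearrangement can yield any value; a short check confirms the partial sums in the given order settle near ln 2:

```python
import math

# Expectation series of the Pasadena game: outcome n has probability 2**-n
# and payoff (-1)**(n-1) * 2**n / n, so the n-th expected contribution is
# (-1)**(n-1) / n -- the alternating harmonic series, summing to ln 2.
def partial_expectation(N):
    return sum((-1) ** (n - 1) / n for n in range(1, N + 1))

print(partial_expectation(100000))        # close to ln 2 ≈ 0.6931
```

The conditional (rather than absolute) convergence is exactly why the game lacks a strong expectation, and ln 2 is the natural candidate value that a weaker, order-respecting notion of expectation can deliver.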
To answer the question of whether mathematics needs new axioms, it seems necessary to say what role axioms actually play in mathematics. A first guess is that they are inherently obvious statements that are used to guarantee the truth of theorems proved from them. However, this may be neither possible nor necessary, and it doesn't seem to fit the historical facts. Instead, I argue that the role of axioms is to systematize uncontroversial facts that mathematicians can accept from a wide variety of philosophical positions. Once the axioms are generally accepted, mathematicians can expend their energies on proving theorems instead of arguing philosophy. Given this account of the role of axioms, I give four criteria that axioms must meet in order to be accepted. Penelope Maddy has proposed a similar view in Naturalism in Mathematics, but she suggests that the philosophical questions bracketed by adopting the axioms can in fact be ignored forever. I contend that these philosophical arguments are in fact important, and should ideally be resolved at some point, but I concede that their resolution is unlikely to affect the ordinary practice of mathematics. However, they may have effects in the margins of mathematics, including with regard to the controversial "large cardinal axioms" Maddy would like to support.