People with the kind of preferences that give rise to the St. Petersburg paradox are problematic---but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
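The gamble at issue can be made concrete (a standard presentation, not this paper's formal apparatus): a fair coin is flipped until it lands heads, and heads on the nth flip pays 2^n units of utility, so the expectation exceeds every finite value.

```python
# St. Petersburg gamble: a fair coin is flipped until it lands heads;
# heads on flip n pays 2**n units of utility, with probability 2**(-n).
def partial_expectation(n_terms):
    """Expected payoff from the first n_terms possible stopping points."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_terms + 1))

# Each term contributes exactly 1, so the partial sums grow without bound:
print(partial_expectation(10), partial_expectation(100))
```

Since every individual outcome has finite utility but the expectation outruns every finite value, no value that an outcome could have can be assigned to the gamble itself, which is the tension the abstract describes.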
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Ian Hacking and Roger White influentially argue that it is not. We approach this question through a systematic framework for self-locating epistemology. As it turns out, leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse. This convergence is no accident: we present two theorems showing that, in this setting, any updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
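A toy Bayesian calculation can illustrate the basic confirmation effect (our construction with assumed numbers p and N, not the paper's theorems): both an impersonal update and an SIA-style self-locating update favor the multiverse.

```python
# Toy model (our construction, not the paper's theorems). Hypotheses: a
# single universe vs. a multiverse of N universes, where each universe
# independently turns out fine-tuned with probability p.
p = 1e-3   # assumed chance that a given universe is fine-tuned
N = 100    # assumed number of universes under the multiverse hypothesis
prior_odds = 1.0  # even prior odds of multiverse vs. single universe

# Update on the impersonal evidence "at least one universe is fine-tuned":
odds_some = prior_odds * (1 - (1 - p) ** N) / p

# SIA-style self-locating update on "our universe is fine-tuned":
# hypotheses are weighted by the expected number of fine-tuned
# (observer-supporting) universes they contain.
odds_ours = prior_odds * (N * p) / p

# Both posterior odds come out well above 1, favoring the multiverse.
print(odds_some, odds_ours)
```

The paper's point, on this toy rendering, is that the agreement between such different updating rules is not accidental.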
The problem of evil is the most prominent argument against the existence of God. Skeptical theists contend that it is not a good argument. Their reasons for this contention vary widely, involving such notions as CORNEA, epistemic appearances, 'gratuitous' evils, 'levering' evidence, and the representativeness of goods. We aim to dispel some confusions about these notions, in particular by clarifying their roles within a probabilistic epistemology. In addition, we develop new responses to the problem of evil from both the phenomenal conception of evidence and the knowledge-first view of evidence.
Sometimes you are unreliable at fulfilling your doxastic plans: for example, if you plan to be fully confident in all truths, probably you will end up being fully confident in some falsehoods by mistake. In some cases, there is information that plays the classical role of evidence—your beliefs are perfectly discriminating with respect to some possible facts about the world—and there is a standard expected‐accuracy‐based justification for planning to conditionalize on this evidence. This planning‐oriented justification extends to some cases where you do not have transparent evidence, in the sense that your beliefs are not perfectly discriminating with respect to any non‐trivial facts. In other cases, accuracy considerations do not tell you to plan to conditionalize on any information at all, but rather to plan to follow a different updating rule. Even in the absence of evidence, accuracy considerations can guide your doxastic plan. (shrink)
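The expected-accuracy justification mentioned here can be illustrated with a toy three-world model (our numbers, not the paper's): among plans for what credence to adopt upon learning a cell of an evidence partition, the plan that minimizes expected Brier inaccuracy is conditionalization.

```python
# Toy model (illustrative numbers, ours): three worlds with a prior, and an
# evidence partition {{w1, w2}, {w3}}. We search over plans that, upon
# learning E1 = {w1, w2}, assign credence q to w1 and 1 - q to w2, scoring
# each plan by its prior-expected Brier inaccuracy.
priors = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
E1 = {"w1", "w2"}

def expected_brier(q):
    """Expected Brier inaccuracy of the plan 'on E1, believe w1 to degree q'."""
    total = 0.0
    for actual, p in priors.items():
        if actual not in E1:
            continue  # the plan for the other cell is held fixed
        credence = {"w1": q, "w2": 1 - q, "w3": 0.0}
        total += p * sum((credence[w] - (1.0 if w == actual else 0.0)) ** 2
                         for w in priors)
    return total

# Grid search over candidate plans:
best_q = min(range(1001), key=lambda i: expected_brier(i / 1000)) / 1000
# best_q comes out at 0.625 = P(w1)/P(E1), i.e. conditionalizing on E1.
```

This is the classical case of perfectly discriminating evidence; the abstract's further point is what happens when no such partition is available.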
David Builes presents a paradox concerning how confident you should be that any given member of an infinite collection of fair coins landed heads, conditional on the information that they were all flipped and only finitely many of them landed heads. We argue that if you should have any conditional credence at all, it should be 1/2.
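The pull of the paradox can be seen in finite truncations (our illustration, not the paper's argument): if only N coins are flipped and we condition on at most m heads, exchangeability gives P(coin 1 landed heads | exactly k heads) = k/N, so the finite conditional credence sinks toward 0 as N grows rather than staying at 1/2.

```python
from math import comb

def credence_coin1_heads(N, m):
    """P(coin 1 = heads | at most m of N fair coins land heads).
    By exchangeability, P(coin 1 = heads | exactly k heads) = k / N."""
    weights = [comb(N, k) for k in range(m + 1)]  # proportional to P(exactly k heads)
    return sum((k / N) * w for k, w in zip(range(m + 1), weights)) / sum(weights)

# The finite-case credence falls toward 0 as N grows with m fixed --
# in tension with the answer of 1/2 defended for the infinite case.
print(credence_coin1_heads(20, 3), credence_coin1_heads(40, 3))
```

The divergence between the finite truncations and the infinite case is part of what makes the problem paradoxical.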
Suppose that, for reasons of animal welfare, it would be better if everyone stopped eating chicken. Does it follow that you should stop eating chicken? Proponents of the “inefficacy objection” argue that, due to the scale and complexity of markets, the expected effects of your chicken purchases are negligible. So the expected effects of eating chicken do not make it wrong.

We argue that this objection does not succeed, in two steps. First, empirical data about chicken production tells us that the expected effects of consuming *many* chickens are not negligible. Second, this implies that the expected effect of consuming one chicken is ordinarily not negligible. *Parity* between your purchase and other counterfactual purchases and *uncertainty* about others’ consumption behavior each tend to pull the expected effect of a single purchase toward the average large scale effect. While some purchases do have negligible expected effects, many do not.
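The role of uncertainty can be modeled with a standard threshold story (a toy model of ours, not the paper's empirical data): if producers adjust output in batches of size T and total demand is uncertain, the expected effect of one additional purchase comes out near one whole chicken.

```python
import random

T = 1_000  # hypothetical batch size in which producers adjust output

def production(demand):
    """Producers round demand up to the nearest batch of T."""
    return T * -(-demand // T)  # ceiling division

random.seed(0)
trials = 1_000_000
extra = 0
for _ in range(trials):
    demand = random.randrange(10 ** 7)  # uncertainty about others' purchases
    extra += production(demand + 1) - production(demand)

expected_effect = extra / trials
# One purchase triggers a whole extra batch with probability about 1/T, so
# the expected effect per purchase is about 1 chicken -- not negligible.
```

Smoothing the step function by uncertainty is what pulls the expected effect of a single purchase toward the average large-scale effect.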
The fine-tuning argument purports to show that particular aspects of fundamental physics provide evidence for the existence of God. We contend that this argument is legitimate, although doubts about its legitimacy abound, many of them based on misunderstandings. In this paper we go over several major misapprehensions of the fine-tuning argument and explain why they do not undermine its basic cogency.
Ginger Schultheis offers a novel and interesting argument against epistemic permissivism. While we think that her argument is ultimately uncompelling, we think its faults are instructive. We explore the relationship between epistemic permissivism, Margin-for-Error principles, and an epistemological version of Dominance reasoning.
Laurie Paul has recently argued that transformative experiences pose a problem for decision theory. According to Paul, agents facing transformative experiences do not possess the states required for decision theory to formulate its prescriptions. Agents facing transformative experiences are impoverished relative to their decision problems, and decision theory doesn’t know what to do with impoverished agents. Richard Pettigrew takes Paul’s challenge seriously. He grants that decision theory cannot handle decision problems involving transformative experiences. To deal with the problems posed by transformative experiences, Pettigrew proposes two alterations to decision theory. The first alteration is meant to handle the problem posed by epistemically transformative experiences, and the second alteration is meant to handle the problem posed by personally transformative experiences. I argue that Pettigrew’s proposed alterations are untenable. Pettigrew’s novel decision theory faces both formal and philosophical problems. It is doubtful that Pettigrew can formulate the sort of decision theory he wants, and further doubtful that he should want such a decision theory in the first place. Moreover, the issues with Pettigrew’s proposed alterations help reveal issues with Paul’s initial challenge to decision theory. I suggest that transformative experiences should not be taken to pose a problem for decision theory, but should instead be taken to pose a topic for ethics.
In response to Smith, I argue that probabilities cannot be rationally neglected. I show that Smith’s proposal for ignoring low-probability outcomes must, on pain of violating dominance reasoning, license taking arbitrarily high risk for arbitrarily little reward.
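The structure of the objection can be put numerically (toy numbers, ours): any fixed neglect threshold licenses accepting many gambles whose risks are individually negligible but jointly near certain.

```python
# Toy numbers (ours): fix a neglect threshold t, and consider gambles that
# each pay a tiny reward while carrying a disaster probability just under t.
t = 1e-6            # assumed threshold below which outcomes are neglected
p_each = t / 2      # per-gamble disaster probability, individually neglected
n = 5_000_000       # number of such gambles accepted

p_disaster = 1 - (1 - p_each) ** n
# The combined disaster probability exceeds 90%, for an arbitrarily small
# total reward: arbitrarily high risk taken for arbitrarily little gain.
```

Each gamble is licensed in isolation, yet the package they compose is dominated, which is the dominance violation pressed against Smith's proposal.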
Many philosophers think that given the choice between saving the life of an innocent person and averting any number of minor ailments or inconveniences, it would be better to save the life. How, then, should one compare the risk to an innocent person’s life against such minor ailments and inconveniences? If lives are infinitely more important than insignificant factors, then no risk to a life, however small, can be outweighed by any number of insignificant factors, and that is untenable. An alternative approach seems more promising: let the values of such insignificant factors be bounded, as then there will be well-behaved tradeoffs between insignificant things and the risk to an innocent life. We argue, however, that bounding the values of insignificant factors poses myriad problems.
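The bounded-value proposal can be made concrete (illustrative numbers, ours): if the aggregate disvalue of minor ailments is bounded by B, then risks to a life of value L above B/L can never be licensed, however many ailments are at stake.

```python
B = 1.0        # assumed bound on the aggregate disvalue of minor ailments
L = 1_000_000  # assumed value of an innocent life
r = 0.5        # each further ailment adds half as much disvalue as the last

def ailment_disvalue(n):
    """Aggregate disvalue of n minor ailments; geometric, so it stays below B."""
    return B * (1 - r ** n)

# Since ailment_disvalue(n) < B for every n, a risk p to a life is worth
# taking only if p * L < B, i.e. only if p < B / L:
max_justified_risk = B / L
```

This is the well-behaved tradeoff the alternative approach promises; the paper's claim is that such bounding creates problems of its own.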
Our decision-theoretic states are not luminous. We are imperfectly reliable at identifying our own credences, utilities and available acts, and thus can never be more than imperfectly reliable at identifying the prescriptions of decision theory. The lack of luminosity affords decision theory a remarkable opportunity — to issue guidance on the basis of epistemically inaccessible facts. We show how a decision theory can guarantee action in accordance with contingent truths about which an agent is arbitrarily uncertain. It may seem that such advantages would require dubiously adverting to externalist facts that go beyond the internalism of traditional decision theory, but this is not so. Using only the standard repertoire of decision-theoretic tools, we show how to modify existing decision theories to take advantage of this opportunity. These improved decision theories require agents to maximize conditional expected utility — expected utility conditional upon an agent’s actual decision situation. We call such modified decision theories ‘self-confident’. These self-confident decision theories have a distinct advantage over standard decision theories: their prescriptions are better.
The epistemology of disagreement standardly divides conciliationist views from steadfast views. But both sorts of views are subject to counterexample—indeed, both sorts of views are subject to the same counterexample. After presenting this counterexample, I explore how the epistemology of disagreement should be reconceptualized in light of it.
How should the opinion of a group be related to the opinions of the group members? In this article, we will defend a package of four norms – coherence, locality, anonymity and unanimity. Existing results show that there is no tenable procedure for aggregating outright beliefs or for aggregating credences that meet these criteria. In response, we consider the prospects for aggregating credal pairs – pairs of prior probabilities and evidence. We show that there is a method of aggregating credal pairs that possesses all four virtues.
We offer a new motivation for imprecise probabilities. We argue that there are propositions to which precise probability cannot be assigned, but to which imprecise probability can be assigned. In such cases the alternative to imprecise probability is not precise probability, but no probability at all. And an imprecise probability is substantially better than no probability at all. Our argument is based on the mathematical phenomenon of non-measurable sets. Non-measurable propositions cannot receive precise probabilities, but there is a natural way for them to receive imprecise probabilities. The mathematics of non-measurable sets is arcane, but its epistemological import is far-reaching; even apparently mundane propositions are liable to be affected by non-measurability. The phenomenon of non-measurability dramatically reshapes the dialectic between critics and proponents of imprecise credence. Non-measurability offers natural rejoinders to prominent critics of imprecise credence. Non-measurability even reverses some of the critics’ arguments—by the very lights that have been used to argue against imprecise credences, imprecise credences are better than precise credences.
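The standard example behind this phenomenon is the Vitali construction (textbook material, not specific to this paper): a subset of [0,1) that no translation-invariant, countably additive probability can measure.

```latex
% Vitali construction: choose V \subseteq [0,1) containing exactly one
% representative of each equivalence class of x \sim y \iff x - y \in \mathbb{Q}.
% The rational translates of V (mod 1) partition [0,1), so translation
% invariance and countable additivity would give
1 = P\bigl([0,1)\bigr)
  = \sum_{q \in \mathbb{Q} \cap [0,1)} P(V \oplus q)
  = \sum_{q \in \mathbb{Q} \cap [0,1)} P(V),
% which is impossible: the sum is 0 if P(V) = 0 and infinite if P(V) > 0.
% Hence V can receive no precise probability.
```

Propositions whose truth conditions carve out such sets are the ones for which, on the paper's view, imprecise probability is the only probability available.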
Accounting for Intrinsic Values in the Federal Student Loan System. Yoaav Isaacs & Jason Iuliano - 2018 - In David Boonin et al. (eds.), The Palgrave Handbook of Philosophy and Public Policy. Springer Verlag. pp. 469-477.
There is a growing sentiment that federal student loans should be allocated according to students’ expected earning potential. If federal student loans were given so that the government could make a profit, then such a system would make sense. But this is not so. Instead, the US government issues student loans with the goal of benefiting society—and, in particular, of benefiting the loan recipients themselves. Although some of this benefit is expressed in higher earning potential, much of it is not. In this chapter, we argue that the federal student loan system should be structured to account for the intrinsic values that accrue to individuals both from increased education and from certain occupational choices.