For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No.
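A toy illustration of the agreement question (my own sketch, not from the abstract): two Bayesian agents with different conjugate Beta priors update on the same shared stream of coin flips. With these particular priors their posterior means merge, though, as the abstract's questions suggest, rationality alone does not guarantee this in general.

```python
import random

def posterior_mean(a, b, heads, tails):
    # Beta(a, b) prior updated on coin flips -> posterior mean of the bias
    return (a + heads) / (a + b + heads + tails)

random.seed(0)
true_bias = 0.7
heads = tails = 0
for _ in range(10_000):
    if random.random() < true_bias:
        heads += 1
    else:
        tails += 1

# Two agents with very different priors see the same shared evidence.
agent1 = posterior_mean(1, 1, heads, tails)   # uniform prior
agent2 = posterior_mean(50, 2, heads, tails)  # prior heavily skewed toward heads
print(abs(agent1 - agent2))  # small: the posterior means have merged
```

The priors and sample size here are invented for illustration; the point is only that shared evidence can, but need not, wash out prior disagreement.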
Epistemic decision theory produces arguments with both normative and mathematical premises. I begin by arguing that philosophers should care about whether the mathematical premises (1) are true, (2) are strong, and (3) admit simple proofs. I then discuss a theorem that Briggs and Pettigrew (2020) use as a premise in a novel accuracy-dominance argument for conditionalization. I argue that the theorem and its proof can be improved in a number of ways. First, I present a counterexample that shows that one of the theorem’s claims is false. As a result of this, Briggs and Pettigrew’s argument for conditionalization is unsound. I go on to explore how a sound accuracy-dominance argument for conditionalization might be recovered. In the course of doing this, I prove two new theorems that correct and strengthen the result reported by Briggs and Pettigrew. I show how my results can be combined with various normative premises to produce sound arguments for conditionalization. I also show that my results can be used to support normative conclusions that are stronger than the one that Briggs and Pettigrew’s argument supports. Finally, I show that Briggs and Pettigrew’s proofs can be simplified considerably.
The best accuracy arguments for probabilism apply only to credence functions with finite domains, that is, credence functions that assign credence to at most finitely many propositions. This is a significant limitation. It reveals that the support for the accuracy-first program in epistemology is a lot weaker than it seems at first glance, and it means that accuracy arguments cannot yet accomplish everything that their competitors, the pragmatic (Dutch book) arguments, can. In this paper, I investigate the extent to which this limitation can be overcome. Building on the best arguments in finite domains, I present two accuracy arguments for probabilism that are perfectly general—they apply to credence functions with arbitrary domains. I then discuss how the arguments’ premises can be challenged. We will see that it is particularly difficult to characterize admissible accuracy measures in infinite domains.
In a recent paper, Pettigrew reports a generalization of the celebrated accuracy-dominance theorem due to Predd et al., but Pettigrew’s proof is incorrect. I will explain the mistakes and provide a correct proof.
This essay has two aims. The first is to correct an increasingly popular way of misunderstanding Belot's Orgulity Argument. The Orgulity Argument charges that Bayesianism is defective as a normative epistemology. For concreteness, our argument focuses on Cisewski et al.'s recent rejoinder to Belot. The conditions that underwrite their version of the argument are too strong, and on our reading Belot does not endorse them. A more compelling version of the Orgulity Argument than Cisewski et al. present is available, however; we make this point by drawing an analogy with de Finetti's argument against mandating countable additivity. Having presented the best version of the Orgulity Argument, our second aim is to develop a reply to it. We extend Elga's idea of appealing to finitely additive probability to show that the challenge posed by the Orgulity Argument can be met.
Hursthouse (1991: 57–68) argues that arational actions—e.g. kicking a door out of anger—cannot be explained by belief–desire pairs. The Humean Response to Hursthouse (2000b: 25–38) defends the Humean model from Hursthouse’s challenge. We argue that the Humean Response fails because belief–desire pairs are neither necessary nor sufficient for causing emotional actions. The Emotionist Response is to embrace Hursthouse’s conclusion that emotions provide an independent source of explanation for intentional actions. We consider Döring’s (2003: 214–230) feeling-based Emotionist account and argue that it fails to explain arational actions. Finally, we develop our own Emotionist account, grounded in the Motivational Theory of Emotions one of us has developed. On our account, arational actions form a non-homogeneous class, some members of which must be understood as instrumental actions and some members of which must be understood as displacement behaviors of the kind animals display when their motivations are thwarted or in conflict.
A standard way to challenge convergence-based accounts of inductive success is to claim that they are too weak to constrain inductive inferences in the short run. We respond to such a challenge by answering some questions raised by Juhl (1994). When it comes to predicting limiting relative frequencies in the framework of Reichenbach, we show that speed-optimal convergence—a long-run success condition—induces dynamic coherence in the short run.
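For concreteness, Reichenbach's straight rule (a standard formulation, stated here as background rather than taken from the paper) predicts that the limiting relative frequency of an event equals the relative frequency observed so far:

```python
def straight_rule(outcomes):
    """Reichenbach's straight rule: estimate the limiting relative
    frequency of an event by its observed relative frequency."""
    if not outcomes:
        raise ValueError("need at least one observation")
    return sum(outcomes) / len(outcomes)

# After seeing 3 successes in 4 trials, predict a limit of 0.75.
print(straight_rule([1, 0, 1, 1]))  # → 0.75
```

The short-run constraints the paper discusses concern how fast such estimates may be revised, not the limiting value itself.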
Recent impossibility theorems for fair risk assessment extend to the domain of epistemic justice. We translate the relevant model, demonstrating that the problems of fair risk assessment and just credibility assessment are structurally the same. We motivate the fairness criteria involved in the theorems as also being appropriate in the setting of testimonial justice. Any account of testimonial justice that implies the fairness/justice criteria must be abandoned, on pain of triviality.
We provide counterexamples to some purported characterizations of dilation due to Pedersen and Wheeler (2014: 1305–1342; ISIPTA ’15: Proceedings of the 9th International Symposium on Imprecise Probability: Theories and Applications, 2015).
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
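A minimal sketch of Jeffrey conditioning itself (my own example; the partition, weights, and priors are invented for illustration): each agent reallocates probability so that the cells of a shared partition receive the same new weights, after which the agents agree exactly on that partition, whatever their priors were.

```python
def jeffrey_update(prior, partition, weights):
    """Jeffrey conditioning: P'(w) = q_i * P(w) / P(E_i) for w in cell E_i,
    where q_i is the new evidential weight assigned to cell E_i."""
    posterior = {}
    for cell, q in zip(partition, weights):
        mass = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = q * prior[w] / mass
    return posterior

partition = [("a", "b"), ("c", "d")]
weights = (0.8, 0.2)  # shared new weights on the partition cells

p = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4}
r = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}
p2 = jeffrey_update(p, partition, weights)
r2 = jeffrey_update(r, partition, weights)

# Both agents now assign exactly 0.8 to the first cell.
print(p2["a"] + p2["b"], r2["a"] + r2["b"])
```

Agreement on the partition after one shared update is immediate; the merging results the abstract reports concern the much harder question of agreement on all events in the limit.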
Our aim here is to present a result that connects some approaches to justifying countable additivity. This result allows us to better understand the force of a recent argument for countable additivity due to Easwaran. We have two main points. First, Easwaran’s argument in favour of countable additivity should have little persuasive force on those permissive probabilists who have already made their peace with violations of conglomerability. As our result shows, Easwaran’s main premiss – the comparative principle – is strictly stronger than conglomerability. Second, with the connections between the comparative principle and other probabilistic concepts clearly in view, we point out that opponents of countable additivity can still make a case that countable additivity is an arbitrary stopping point between finite and full additivity.
I show that de Finetti’s coherence theorem is equivalent to the Hahn-Banach theorem and discuss some consequences of this result. First, the result unites two aspects of de Finetti’s thought in a nice way: a corollary of the result is that the coherence theorem implies the existence of a fair countable lottery, which de Finetti appealed to in his arguments against countable additivity. Another corollary of the result is the existence of sets that are not Lebesgue measurable. I offer a subjectivist interpretation of this corollary that is concordant with de Finetti’s views. I conclude by pointing out that my result shows that there is a sense in which de Finetti’s theory of subjective probability is necessarily nonconstructive. This raises questions about whether the coherence theorem can underwrite a legitimate theory of rational belief.
This paper contributes to a recent research program that extends arguments supporting elementary conditionalization to arguments supporting conditionalization with general, measure-theoretic conditional probabilities. I begin by suggesting an amendment to the framework that Rescorla (2018) has used to characterize regular conditional probabilities in terms of avoiding Dutch book. If we wish to model learning scenarios in which an agent gains complete membership knowledge about some subcollection of the events of interest to her, then we should focus on updating policies that are what I shall call proper. I go on to characterize regular conditional probabilities in proper learning scenarios using what van Fraassen (1999) calls The General Reflection Principle.
Bayesians since Savage (1972) have appealed to asymptotic results to counter charges of excessive subjectivity. Their claim is that objectionable differences in prior probability judgments will vanish as agents learn from evidence, and individual agents will converge to the truth. Glymour (1980), Earman (1992) and others have voiced the complaint that the theorems used to support these claims tell us, not how probabilities updated on evidence will actually behave in the limit, but merely how Bayesian agents believe they will behave, suggesting that the theorems are too weak to underwrite notions of scientific objectivity and intersubjective agreement. I investigate, in a very general framework, the conditions under which updated probabilities actually converge to a settled opinion and the conditions under which the updated probabilities of two agents actually converge to the same settled opinion. I call this mode of convergence deterministic, and derive results that extend those found in Huttegger (2015b). The results here lead to a simple characterization of deterministic convergence for Bayesian learners and give rise to an interesting argument for what I call strong regularity, the view that probabilities of non-empty events should be bounded away from zero.
Bayesians often appeal to “merging of opinions” to rebut charges of excessive subjectivity. But what happens in the short run is often of greater interest than what happens in the limit. Seidenfeld and coauthors use this observation as motivation for investigating the counterintuitive short run phenomenon of dilation, since, they allege, dilation is “the opposite” of asymptotic merging of opinions. The measure of uncertainty relevant for dilation, however, is not the one relevant for merging of opinions. We explicitly investigate the short run behavior of the metric relevant for merging, and show that dilation is independent of the opposite of merging.
We discuss Herzberg’s (2015: 319–337) treatment of linear aggregation for profiles of infinitely many finitely additive probabilities and suggest a natural alternative to his definition of linear continuous aggregation functions. We then prove generalizations of well-known characterization results due to McConway (1981: 410–414). We also characterize linear aggregation of probabilities in terms of a Pareto condition, de Finetti’s notion of coherence, and convexity.
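Linear aggregation itself is simple to state (a sketch with invented opinions and weights; the abstract's infinite-profile, finitely additive setting is not modeled here): the pooled probability of each event is a weighted average of the individual probabilities.

```python
def linear_pool(profiles, weights):
    """Pool probability assignments event-by-event:
    F(p_1, ..., p_n)(A) = sum_i w_i * p_i(A)."""
    events = profiles[0].keys()
    return {e: sum(w * p[e] for w, p in zip(weights, profiles)) for e in events}

# Two hypothetical forecasters, pooled with equal weights.
p1 = {"rain": 0.7, "no_rain": 0.3}
p2 = {"rain": 0.1, "no_rain": 0.9}
pooled = linear_pool([p1, p2], [0.5, 0.5])
print(pooled["rain"])  # 0.5 * 0.7 + 0.5 * 0.1
```

So long as the weights are nonnegative and sum to one, the pooled assignment is again a probability, which is one reason linear pooling is the natural target of characterization results.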
Olav Vassend has recently (2021) presented a decision-theoretic argument for updating utility functions by what he calls “utility conditionalization.” Vassend’s argument is meant to mirror closely the well-known argument for Bayesian conditionalization due to Hilary Greaves and David Wallace (2006). I show that Vassend’s argument is inconsistent with ZF set theory and argue that it therefore does not provide support for utility conditionalization.
This article strengthens a dilemma posed by Eva and Hartmann (2021). They show that accounts of partial subjunctive supposition based on imaging sometimes violate a natural monotonicity condition. The paper develops a more general framework for modelling partial supposition and shows that, in this framework, imaging-based accounts of partial subjunctive supposition always violate monotonicity. In fact, the only account of partial supposition that satisfies monotonicity is the one that Eva and Hartmann defend for indicative suppositions. Insofar as one is committed to monotonicity, then, one cannot distinguish the indicative and subjunctive moods for partial supposition. One might avoid this result by rejecting the general framework that it relies upon, but that itself would be a surprising and interesting outcome.
Must probabilities be countably additive? On the one hand, arguably, requiring countable additivity is too restrictive. As de Finetti pointed out, there are situations in which it is reasonable to use merely finitely additive probabilities. On the other hand, countable additivity is fruitful. It can be used to prove deep mathematical theorems that do not follow from finite additivity alone. One of the most philosophically important examples of such a result is the Bayesian convergence to the truth theorem, which says that conditional probabilities converge to 1 for true hypotheses and to 0 for false hypotheses. In view of the long-standing debate about countable additivity, it is natural to ask in what circumstances finitely additive theories deliver the same results as the countably additive theory. This paper addresses that question and initiates a systematic study of convergence to the truth in a finitely additive setting. There is also some discussion of how the formal results can be applied to ongoing debates in epistemology and the philosophy of science.
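A simulation of the convergence-to-the-truth phenomenon in the simplest countably additive setting (my own sketch; the two hypotheses and all parameters are invented): with data generated from the true coin bias, the posterior on the true hypothesis approaches 1.

```python
import math
import random

random.seed(1)
true_bias, rival_bias = 0.7, 0.5
log_odds = 0.0  # log posterior odds for the true hypothesis (equal priors)
for _ in range(1000):
    flip = random.random() < true_bias  # True = heads
    p_true = true_bias if flip else 1 - true_bias
    p_rival = rival_bias if flip else 1 - rival_bias
    log_odds += math.log(p_true / p_rival)

posterior_true = 1 / (1 + math.exp(-log_odds))
print(posterior_true)  # very close to 1
```

The paper's question is which parts of this familiar behavior survive when countable additivity is dropped; nothing in this two-hypothesis sketch settles that.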
I argue against the halfer response to the Sleeping Beauty case by presenting a new problem for halfers. When the original Sleeping Beauty case is generalized, it follows from the halfer’s key premise that Beauty must update her credence in a fair coin’s landing heads in such a way that it becomes arbitrarily close to certainty. This result is clearly absurd. I go on to argue that the halfer’s key premise must be rejected on pain of absurdity, leaving the halfer response to the original Sleeping Beauty case unsupported. I consider two ways that halfers might avoid the absurdity without giving up their key premise. Neither way succeeds. My argument lends support to the thirder response, and, in particular, to the idea that agents may be rationally compelled to update their beliefs despite not having learned any new evidence.
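The thirder credence can be checked by simple counting (a standard computation; the generalization parameter n is my illustrative stand-in for the abstract's generalized case): if a fair coin yields 1 awakening on heads and n awakenings on tails, and each awakening is weighted equally, the credence in heads upon awakening is 1/(1 + n).

```python
from fractions import Fraction

def thirder_credence(n):
    """Credence in heads upon awakening: heads yields 1 awakening,
    tails yields n, the coin is fair, and awakenings are weighted equally:
    (1/2 * 1) / (1/2 * 1 + 1/2 * n) = 1 / (1 + n)."""
    return (Fraction(1, 2) * 1) / (Fraction(1, 2) * 1 + Fraction(1, 2) * n)

print(thirder_credence(2))    # classic case (2 tails awakenings): 1/3
print(thirder_credence(100))  # goes toward 0, not 1, as n grows
```

On this way of counting, the generalized case drives the credence toward 0, the opposite of the near-certainty that the abstract says the halfer's premise forces.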
Two studies are reported in which participants’ reports of altered states of consciousness were manipulated using priming methods. Study 1 used both subtle and blunt supraliminal priming methods, while Study 2 used a subliminal priming method. Across the two studies, two different methods for inducing ASC were used. In both studies a direct and an indirect measure of ASC were employed in order to separate the more nonconscious and spontaneous from the more conscious and directive reports of ASC. As predicted, results indicated that the indirect measures of ASC were consistent with the ASC primes participants received. Implications and future research directions are discussed.
People's opinions toward polygamy were examined in a study of 1369 adults who were current or former members of the Church of Jesus Christ of Latter-day Saints. Questions addressed several areas: polygamy and the law, respondents' perceptions of polygamous women, the potential link between legalizing gay marriage and legalizing polygamy, polygamists' reliance on social welfare programs, and the ability of teens raised in polygamy to leave that lifestyle. Consistent with the contact hypothesis, multiple regression analyses showed that people who knew a polygamist held more favorable opinions of polygamy. Polygamists, men, infrequent church attenders, and older people also tended to hold more favorable opinions of polygamy. Educational attainment showed weak associations with opinions, while marital status failed to predict opinions toward polygamy.
Vernor Vinge's “singularity” is a worthy contribution to the long tradition of contemplations about human transcendence. Throughout history, most of these musings have dwelled upon the spiritual – the notion that human beings can achieve a higher state through prayer, moral behavior, or mental discipline.
This paper attempts to uncover some of the factors that may have influenced the choice of model for the systematic training of nurses in Denmark. From a historical analysis of selected historical literature, nursing magazines, and archival documents, three themes emerge: the interest from (and of) the medical profession, the strategy on nurse training of the Danish Nurses’ Organization, and the societal context of the time. Despite there being a Deaconess Institution in Copenhagen from 1863, it was ultimately the Nightingale model that formed the basis for the training programme. Findings suggest that the Nightingale influence within the medical establishment and among leaders in Danish nursing played a vital role in this choice. As well, the Danish Nurses’ Organization actively promoted their preferred 3‐year training model, which had strong similarities to the Nightingale model. Changes in family and social life after the 1864 defeat in the Prussian war fostered a growing demand for the liberalization of women's rights and a need for increased professionalization in the nursing and caring function.