For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No.
The question of how the probabilistic opinions of different individuals should be aggregated to form a group opinion is controversial. But one assumption seems to be pretty much common ground: for a group of Bayesians, the representation of group opinion should itself be a unique probability distribution (Bordley, Management Science 28: 1137–1148; Genest et al., The Annals of Statistics: 487–501; Genest and Zidek, Statistical Science: 114–135; Mongin, Journal of Economic Theory 66: 313–351; Clemen and Winkler, Risk Analysis 19: 187–203; Dietrich and List; Herzberg, Theory and Decision: 1–19). We argue that this assumption is not always in order. We show how to extend the canonical mathematical framework for pooling to cover pooling with imprecise probabilities by employing set-valued pooling functions and generalizing common pooling axioms accordingly. As a proof of concept, we then show that one IP construction satisfies a number of central pooling axioms that are not jointly satisfied by any of the standard pooling recipes on pain of triviality. Following Levi (3–11), we also argue that IP models admit of a much better philosophical motivation as a model of rational consensus.
Supra-Bayesianism is the Bayesian response to learning the opinions of others. Probability pooling constitutes an alternative response. One natural question is whether there are cases where probability pooling gives the supra-Bayesian result. This has been called the problem of Bayes-compatibility for pooling functions. It is known that in a common prior setting, under standard assumptions, linear pooling cannot be nontrivially Bayes-compatible. We show by contrast that geometric pooling can be nontrivially Bayes-compatible. Indeed, we show that, under certain assumptions, geometric and Bayes-compatible pooling are equivalent. Granting supra-Bayesianism its usual normative status, one upshot of our study is thus that, in a certain class of epistemic contexts, geometric pooling enjoys a normative advantage over linear pooling as a social learning mechanism. We discuss the philosophical ramifications of this advantage, which we show to be robust to variations in our statement of the Bayes-compatibility problem.
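The contrast between linear and geometric pooling can be made concrete numerically. The sketch below is my own toy example, not one from the paper: it checks whether each pooling rule commutes with Bayesian conditionalization on a shared piece of evidence. Geometric pooling does; linear pooling generally does not.

```python
import numpy as np

def linear_pool(ps, w):
    """Weighted arithmetic mean of the distributions."""
    return np.average(ps, axis=0, weights=w)

def geometric_pool(ps, w):
    """Normalized weighted geometric mean of the distributions."""
    g = np.prod(np.array(ps) ** np.array(w)[:, None], axis=0)
    return g / g.sum()

def condition(p, event):
    """Bayesian conditionalization on an event (0/1 indicator vector)."""
    q = p * event
    return q / q.sum()

# Two hypothetical agents over a three-state space, equal weights.
p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.2, 0.3, 0.5])
w = [0.5, 0.5]
E = np.array([1.0, 1.0, 0.0])  # shared evidence: state 2 is ruled out

# Pool-then-update vs update-then-pool.
for pool in (linear_pool, geometric_pool):
    a = condition(pool([p1, p2], w), E)
    b = pool([condition(p1, E), condition(p2, E)], w)
    print(pool.__name__, np.allclose(a, b))
# prints: linear_pool False / geometric_pool True
```

The order of pooling and updating matters for the linear rule but not for the geometric rule, which is the numerical face of geometric pooling's external Bayesianity.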
This essay has two aims. The first is to correct an increasingly popular way of misunderstanding Belot's Orgulity Argument. The Orgulity Argument charges Bayesianism with a defect as a normative epistemology. For concreteness, our argument focuses on Cisewski et al.'s recent rejoinder to Belot. The conditions that underwrite their version of the argument are too strong, and Belot does not endorse them on our reading. A more compelling version of the Orgulity Argument than Cisewski et al. present is available, however---a point that we make by drawing an analogy with de Finetti's argument against mandating countable additivity. Having presented the best version of the Orgulity Argument, our second aim is to develop a reply to it. We extend Elga's idea of appealing to finitely additive probability to show that the challenge posed by the Orgulity Argument can be met.
I provide a characterization of weakly pseudo-rationalizable choice functions---that is, choice functions rationalizable by a set of acyclic relations---in terms of hyper-relations satisfying certain properties. For those hyper-relations Nehring calls extended preference relations, the central characterizing condition is weaker than (hyper-relation) transitivity but stronger than (hyper-relation) acyclicity. Furthermore, the relevant type of hyper-relation can be represented as the intersection of a certain class of its extensions. These results generalize known, analogous results for path independent choice functions.
In many assessment problems—aptitude testing, hiring decisions, appraisals of the risk of recidivism, evaluation of the credibility of testimonial sources, and so on—the fair treatment of different groups of individuals is an important goal. But individuals can be legitimately grouped in many different ways. Using a framework and fairness constraints explored in research on algorithmic fairness, I show that eliminating certain forms of bias across groups for one way of classifying individuals can make it impossible to eliminate such bias across groups for another way of dividing people up. And this point generalizes if we require merely that assessments be approximately bias-free. Moreover, even if the fairness constraints are satisfied for some given partitions of the population, the constraints can fail for the coarsest common refinement, that is, the partition generated by taking intersections of the elements of these coarser partitions. This shows that these prominent fairness constraints admit the possibility of forms of intersectional bias.
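The coarsest-common-refinement point admits a small numerical illustration. The population below is hypothetical (the numbers are invented for this sketch): a single risk score is calibrated within every group of each of two coarse partitions, yet miscalibrated on every cell of their common refinement.

```python
from itertools import product

# Hypothetical population: two binary attributes A and B, four cells of
# 10 people each, and the same risk score, 0.5, assigned to everyone.
positives = {(0, 0): 6, (0, 1): 4, (1, 0): 4, (1, 1): 6}  # positives per cell
size = 10                                                  # people per cell
score = 0.5

def base_rate(cells):
    """Fraction of positive outcomes among the listed (A, B) cells."""
    return sum(positives[c] for c in cells) / (size * len(cells))

# The score is calibrated within each group of the A-partition...
assert all(base_rate([(a, 0), (a, 1)]) == score for a in (0, 1))
# ...and within each group of the B-partition...
assert all(base_rate([(0, b), (1, b)]) == score for b in (0, 1))
# ...but not on the cells of their common refinement:
print([base_rate([c]) for c in product((0, 1), repeat=2)])
# prints [0.6, 0.4, 0.4, 0.6] -- every cell deviates from the score 0.5
```

Within each coarse group the high and low cells average out to 0.5, which is exactly how intersectional bias can hide beneath group-level fairness checks.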
This paper generalizes rationalizability of a choice function by a single acyclic binary relation to rationalizability by a set of such relations. Rather than selecting those options in a menu that are maximal with respect to a single binary relation, a weakly pseudo-rationalizable choice function selects those options that are maximal with respect to at least one binary relation in a given set. I characterize the class of weakly pseudo-rationalizable choice functions in terms of simple functional properties. This result also generalizes Aizerman and Malishevski's characterization of pseudo-rationalizable choice functions, that is, choice functions rationalizable by a set of total orders.
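The definition is easy to state in code. A minimal sketch, with invented relations and function names (this is an illustration of the concept, not the paper's characterization):

```python
def maximal(menu, relation):
    """Options in `menu` not beaten by any other option, where `relation`
    is a set of (better, worse) pairs."""
    return {x for x in menu
            if not any((y, x) in relation for y in menu)}

def weakly_pseudo_rationalize(relations):
    """Choice function selecting the options that are maximal with respect
    to at least one of the given acyclic relations."""
    def choose(menu):
        return set().union(*(maximal(menu, R) for R in relations))
    return choose

# Hypothetical acyclic relations on the options {a, b, c}:
R1 = {("a", "b"), ("b", "c")}   # a beats b, b beats c
R2 = {("c", "a")}               # c beats a
choose = weakly_pseudo_rationalize([R1, R2])

print(sorted(maximal({"a", "b", "c"}, R1)))  # maximal under R1 alone: ['a']
print(sorted(choose({"a", "b", "c"})))       # under R1 or R2: ['a', 'b', 'c']
```

Relative to a single relation the choice set can be quite small; taking the union over a set of relations is what makes the rationalized choices "weak".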
Recent impossibility theorems for fair risk assessment extend to the domain of epistemic justice. We translate the relevant model, demonstrating that the problems of fair risk assessment and just credibility assessment are structurally the same. We motivate the fairness criteria involved in the theorems as also being appropriate in the setting of testimonial justice. Any account of testimonial justice that implies the fairness/justice criteria must be abandoned, on pain of triviality.
We provide counterexamples to some purported characterizations of dilation due to Pedersen and Wheeler (1305–1342, 2014; ISIPTA ’15: Proceedings of the 9th International Symposium on Imprecise Probability: Theories and Applications, 2015).
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
Our aim here is to present a result that connects some approaches to justifying countable additivity. This result allows us to better understand the force of a recent argument for countable additivity due to Easwaran. We have two main points. First, Easwaran’s argument in favour of countable additivity should have little persuasive force on those permissive probabilists who have already made their peace with violations of conglomerability. As our result shows, Easwaran’s main premiss – the comparative principle – is strictly stronger than conglomerability. Second, with the connections between the comparative principle and other probabilistic concepts clearly in view, we point out that opponents of countable additivity can still make a case that countable additivity is an arbitrary stopping point between finite and full additivity.
An aspect of Peirce’s thought that may still be underappreciated is his resistance to what Levi calls _pedigree epistemology_, to the idea that a central focus in epistemology should be the justification of current beliefs. Somewhat more widely appreciated is his rejection of the subjective view of probability. We argue that Peirce’s criticisms of subjectivism, to the extent they grant such a conception of probability is viable at all, revert back to pedigree epistemology. A thoroughgoing rejection of pedigree in the context of probabilistic epistemology, however, _does_ challenge prominent subjectivist responses to the problem of the priors.
We explore which types of probabilistic updating commute with convex IP pooling. Positive results are stated for Bayesian conditionalization, imaging, and a certain parameterization of Jeffrey conditioning. This last observation is obtained with the help of a slight generalization of a characterization of externally Bayesian pooling operators due to Wagner (336–345, 2009). These results strengthen the case that pooling should go by imprecise probabilities since no precise pooling method is as versatile.
Bayesians often appeal to “merging of opinions” to rebut charges of excessive subjectivity. But what happens in the short run is often of greater interest than what happens in the limit. Seidenfeld and coauthors use this observation as motivation for investigating the counterintuitive short run phenomenon of dilation, since, they allege, dilation is “the opposite” of asymptotic merging of opinions. The measure of uncertainty relevant for dilation, however, is not the one relevant for merging of opinions. We explicitly investigate the short run behavior of the metric relevant for merging, and show that dilation is independent of the opposite of merging.
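Dilation can be seen in a standard toy case (the familiar two-coin example from the dilation literature, not necessarily the construction this paper analyzes): learning the outcome of one coin widens, rather than narrows, the probability interval for the other.

```python
import numpy as np

# X is a fair coin; a second coin Y agrees with X with unknown probability
# a, so the credal set contains one joint distribution for each a in [0, 1].
def p_y_heads(a):
    # Unconditionally: P(Y=H) = a * P(X=H) + (1 - a) * P(X=T) = 1/2.
    return a * 0.5 + (1 - a) * 0.5

def p_y_heads_given_x_heads(a):
    # After learning X=H: P(Y=H | X=H) = a.
    return a

grid = np.linspace(0.0, 1.0, 101)   # sweep through the credal set
prior = [p_y_heads(a) for a in grid]
posterior = [p_y_heads_given_x_heads(a) for a in grid]

# Before the evidence, every member of the set agrees that P(Y=H) = 1/2;
# after it, the interval dilates to all of [0, 1].
print(round(min(prior), 10), round(max(prior), 10))  # 0.5 0.5
print(min(posterior), max(posterior))                # 0.0 1.0
```

Whichever outcome of X is learned, the sharp unconditional probability 1/2 dilates to the vacuous interval, which is the short-run phenomenon the abstract contrasts with merging.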
In the face of an impossibility result, some assumption must be relaxed. The Mere Addition Paradox is an impossibility result in population ethics. Here, I explore substantially weakening the decision-theoretic assumptions involved. The central finding is that the Mere Addition Paradox persists even in the general framework of choice functions when we assume Path Independence as a minimal decision-theoretic constraint. Choice functions can be thought of either as generalizing the standard axiological assumption of a binary “betterness” relation, or as providing a general framework for a normative (rather than axiological) theory of population ethics. Path Independence, a weaker assumption than typically (implicitly) made in population ethics, expresses the idea that, in making a choice from a set of alternatives, the order in which options are assessed or considered is ethically arbitrary and should not affect the final choice. Since the result establishes a conflict between the relevant ethical principles and even very weak decision-theoretic principles, we have more reason to doubt the ethical principles.
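Path Independence is the condition C(A ∪ B) = C(C(A) ∪ C(B)): choosing from a combined menu must agree with first narrowing each submenu and then choosing from the survivors. A brute-force sketch, using a hypothetical score-maximizing choice function (maximizing choice functions are known to satisfy the condition):

```python
from itertools import combinations

# Invented options and scores for the sketch; choose() picks the top-scoring
# options, so it is rationalized by a single weak order.
score = {"w": 3, "x": 2, "y": 2, "z": 1}

def choose(menu):
    best = max(score[o] for o in menu)
    return frozenset(o for o in menu if score[o] == best)

# Enumerate every nonempty menu and check Path Independence on all pairs.
menus = [frozenset(s)
         for r in range(1, len(score) + 1)
         for s in combinations(score, r)]
assert all(choose(A | B) == choose(choose(A) | choose(B))
           for A in menus for B in menus)
print("Path Independence holds for every pair of menus")
```

The point of the abstract is that even this weak, order-insensitivity condition, far short of full rationalizability, already suffices to reinstate the paradox.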
Epistemic states of uncertainty play important roles in ethical and political theorizing. Theories that appeal to a “veil of ignorance,” for example, analyze fairness or impartiality in terms of certain states of ignorance. It is important, then, to scrutinize proposed conceptions of ignorance and explore promising alternatives in such contexts. Here, I study Lerner’s probabilistic egalitarian theorem in the setting of imprecise probabilities. Lerner’s theorem assumes that a social planner tasked with distributing income to individuals in a population is “completely ignorant” about which utility functions belong to which individuals. Lerner models this ignorance with a certain uniform probability distribution, and shows that, under certain further assumptions, income should be equally distributed. Much of the criticism of the relevance of Lerner’s result centers on the representation of ignorance involved. Imprecise probabilities provide a general framework for reasoning about various forms of uncertainty including, in particular, ignorance. To what extent can Lerner’s conclusion be maintained in this setting?
This paper studies a generalization of rational choice theory. I briefly review the motivations that Helzner gives for his conditional choice construction. Then, I focus on the important class of conditional choice functions with vacuous second tiers. This class is interesting for both formal and philosophical reasons. I argue that this class makes explicit one of conditional choice’s normative motivations in terms of an account of neutrality advocated within a certain tradition in decision theory. The observations recorded—several of which are generalizations of central results in the standard theory of rational choice—are intended to provide further insight into how conditional choice generalizes the standard account and are offered as additional evidence of the fruitfulness of the conditional choice framework.
Given the role consensus is supposed to play in the social aspects of inquiry and deliberation, it is important that a consensus can always be identified as the basis of joint inquiry and deliberation. However, it turns out that if we think of an agent revising her beliefs to reach a consensus, then, on the received view of belief revision, AGM belief revision theory, certain simple and compelling consensus positions are not always available.