The scope of Aumann’s (1976) Agreement Theorem is needlessly limited by its restriction to Conditioning as the update rule. Here we prove the theorem in a more comprehensive framework, in which the evolution of probabilities is represented directly, without deriving new probabilities from new certainties. The framework allows arbitrary update rules subject only to Goldstein’s (1983) requirement that current expectations agree with current expectations of future expectations.
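In standard notation, Goldstein’s requirement can be stated as follows (a sketch in notation of our own choosing, not necessarily the paper’s): writing $E_0$ for the agent’s current expectation and $E_t$ for the expectation the agent will hold at a future time $t$,
\[
E_0[X] \;=\; E_0\big[\,E_t[X]\,\big]
\]
for every quantity $X$: the current expectation of any quantity must equal the current expectation of its future expectation.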
Consider a group of people whose preferences satisfy the axioms of one of the current versions of utility theory, such as von Neumann-Morgenstern (1944), Savage (1954), or Bolker-Jeffrey (1965). There are political and economic contexts in which it is of interest to find ways of aggregating these individual preferences into a group preference ranking. The question then arises of whether methods of aggregation exist in which the group’s preferences also satisfy the axioms of the chosen utility theory, and in which at the same time the aggregation process satisfies certain plausible conditions (e.g., the Pareto conditions below).
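For illustration, one standard formulation of a weak Pareto condition reads as follows (a sketch; the paper’s own conditions may differ): writing $\succ_i$ for individual $i$’s strict preference and $\succ_G$ for the group’s,
\[
\big(\forall i:\; f \succ_i g\big) \;\Longrightarrow\; f \succ_G g ,
\]
i.e., if every individual strictly prefers act $f$ to act $g$, so does the group.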
We show that Bayesian ex post aggregation is unstable with respect to refinements. Suppose a group of Bayesians use ex post aggregation. Because the agents face a joint problem, each agent’s decision problem is captured by the same model, though probabilities and utilities may vary across agents. If they analyze the same situation in more detail, their refined analysis should preserve their preferences among acts. However, ex post aggregation could bring about a preference reversal at the group level. Ex post aggregation thus depends on how much information is used and may keep oscillating (“flipping”) as one keeps adding more information.
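As a sketch of the setup (in notation of our own choosing): under ex post aggregation the group evaluates an act $f$ by aggregating individual utilities outcome by outcome and then taking an expectation, e.g.
\[
U_G(f) \;=\; \sum_{s} P_G(s) \sum_i w_i\, u_i\big(f(s)\big),
\]
where $P_G$ is a group probability over states $s$ and the $w_i$ are aggregation weights. The instability claim is that refining the state space can reverse the group’s ranking of acts even when each individual’s ranking is preserved.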
Ranking theory delivers an account of iterated contraction: each ranking function induces a specific iterated contraction behavior. The paper shows how to reconstruct a ranking function from its iterated contraction behavior uniquely up to a multiplicative constant, and thus how to measure ranks on a ratio scale. Thereby it also shows how to completely axiomatize that behavior. The complete set of laws of iterated contraction it specifies amends the laws hitherto discussed in the literature.
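For reference, a negative ranking function in Spohn’s sense is standardly defined as follows (a sketch of the textbook definition, not the paper’s own exposition): a function $\kappa$ from worlds to $\mathbb{N} \cup \{\infty\}$ with $\kappa(w) = 0$ for at least one world $w$, extended to propositions by
\[
\kappa(A) \;=\; \min_{w \in A} \kappa(w), \qquad \kappa(\emptyset) = \infty .
\]
Measurement on a ratio scale then means that the iterated contraction behavior determines $\kappa$ up to multiplication by a positive constant.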
Nelson Goodman cast the ‘problem of induction’ as the task of articulating the principles and standards by which to distinguish valid from invalid inductive inferences. This paper explores some logical bounds on the ability of a rational reasoner to accomplish this task. By a simple argument, either an inductive inference method cannot admit its own fallibility, or there exists some non-inferable hypothesis whose non-inferability the method cannot infer (violating the principle of ‘negative introspection’). The paper discusses some implications of this limited self-knowledge for the justifiability of inductive inferences, auto-epistemic logic, and the epistemic foundations of game theory.
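In epistemic-logic notation, negative introspection is the standard schema (the paper may state it for inference methods rather than a knowledge operator):
\[
\neg K\varphi \;\rightarrow\; K\,\neg K\varphi .
\]
The result above says that a method admitting its own fallibility must violate some instance of this schema: there is a hypothesis $h$ with $\neg K h$ but not $K\,\neg K h$.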
All conceptions of equal opportunity draw on some distinction between morally justified and unjustified inequalities. We discuss how this distinction varies across a range of philosophical positions. We find that these positions often advance equality of opportunity in tandem with distributive principles based on merit, desert, consequentialist criteria, or individuals’ responsibility for outcomes. The result of this amalgam of principles is a festering controversy that unnecessarily diminishes the widespread acceptability of opportunity concerns. We therefore propose to restore the conceptual separation of opportunity principles concerning unjustified inequalities from distributive principles concerning justifiable inequalities. On this view, equal opportunity implies that morally irrelevant factors should engender no differences in individuals’ attainment, while remaining silent on inequalities due to morally relevant factors. We examine this idea by introducing the principle of ‘opportunity dominance’ and explore, in a simple application, to what extent this principle may help us arbitrate between opposing distributive principles. We also compare this principle to the selection rules developed by John Roemer and Dirk Van de Gaer.
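One way to render the stated condition formally (our gloss, not the paper’s definition of opportunity dominance): writing $m(\cdot)$ for an individual’s morally relevant factors and $a(\cdot)$ for her attainment,
\[
m(i) = m(j) \;\Longrightarrow\; a(i) = a(j),
\]
i.e., individuals who coincide on all morally relevant factors should attain equally, whatever their morally irrelevant characteristics; inequalities traceable to morally relevant factors are left unconstrained.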
This paper works within a model of ungraded belief that characterizes epistemic states as logically closed and consistent sets of sentences. Its aim is to discuss three diachronic coherence conditions for such beliefs. These conditions are formulated in terms of the reasoner's present beliefs about how his present beliefs will evolve in the future, for instance, in response to different pieces of future evidence.
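As an illustration of the kind of condition at issue (an assumption on our part; the paper’s three conditions are not reproduced here): writing $B$ for the present belief set and $B_E$ for the belief set the reasoner presently expects to hold after receiving evidence $E$, one candidate condition is
\[
\varphi \in B_E \;\Longrightarrow\; (E \rightarrow \varphi) \in B ,
\]
i.e., whatever the reasoner expects to believe upon learning $E$, he already believes conditional on $E$.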
I re-examine Coherence Arguments (Dutch Book Arguments, No Arbitrage Arguments) for diachronic constraints on Bayesian reasoning. I propose replacing the usual game-theoretic coherence condition with a new decision-theoretic condition (the ‘Diachronic Sure Thing Principle’). The new condition answers a large part of the standard objections against the Coherence Argument and, in particular, frees it from a commitment to additive utilities. It also facilitates the proof of the Converse Dutch Book Theorem. I first apply the improved Coherence Argument to van Fraassen's (1984) Reflection principle. I then point out the failure of a Coherence Argument that is intended to support Conditionalization as a naive, universal update rule. I also point out that Reflection is incompatible with the universal use of Conditionalization thus interpreted. The Coherence Argument therefore defeats the naive view of Bayesian learning that it was originally designed to justify.
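Two of the principles at issue, in standard notation: van Fraassen’s Reflection requires
\[
P_0\big(A \,\big|\, P_t(A) = r\big) = r ,
\]
i.e., current credence in $A$, given that one’s credence at a later time $t$ will be $r$, must itself be $r$; and Conditionalization, read as the naive universal update rule, requires that upon learning exactly $E$ one’s new credences be
\[
P_{\mathrm{new}}(\cdot) = P_{\mathrm{old}}(\cdot \mid E) .
\]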