We discuss several features of coherent choice functions, where the admissible options in a decision problem are exactly those that maximize expected utility for some probability/utility pair in a fixed set S of probability/utility pairs. In this paper we consider, primarily, normal-form decision problems under uncertainty, where only the probability component of S is indeterminate and the utility for two privileged outcomes is determinate. Coherent choice distinguishes between each pair of sets of probabilities, regardless of the "shape" or "connectedness" of the sets of probabilities. We axiomatize the theory of choice functions and show that these axioms are necessary for coherence. The axioms are sufficient for coherence using a set of probability/almost-state-independent utility pairs. We give sufficient conditions under which a choice function satisfying our axioms is represented by a set of probability/state-independent utility pairs with a common utility.
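As a minimal illustration of the E-admissibility rule described above (my own toy example with a finite credal set; the function name e_admissible and all numbers are illustrative, not from the paper):

```python
# Toy check of E-admissibility: an option is admissible iff it maximizes
# expected utility under at least one probability in the credal set.
import numpy as np

def e_admissible(options, probabilities):
    """Indices of options that are Bayes-optimal for some probability."""
    options = np.asarray(options, dtype=float)        # shape: (n_options, n_states)
    admissible = set()
    for p in np.asarray(probabilities, dtype=float):  # each p: (n_states,)
        expected = options @ p                        # expected utility of each option
        admissible.update(int(i) for i in np.flatnonzero(expected == expected.max()))
    return sorted(admissible)

# Three options over two states; a credal set with two extreme points.
options = [[1.0, 0.0], [0.0, 1.0], [0.45, 0.45]]
credal_set = [[0.4, 0.6], [0.6, 0.4]]
print(e_admissible(options, credal_set))  # [0, 1]: the hedged option 2 is never optimal
```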
For Savage (1954), as for de Finetti (1974), the existence of subjective (personal) probability is a consequence of the normative theory of preference. (De Finetti achieves the reduction of belief to desire with his generalized Dutch-Book argument for previsions.) Both Savage and de Finetti rebel against legislating countable additivity for subjective probability. They require merely that probability be finitely additive. At the same time, they insist that their theories of preference are weak, accommodating all but self-defeating desires. In this paper we dispute these claims by showing that the following three theses cannot simultaneously hold: (i) coherent belief is reducible to rational preference, i.e. the generalized Dutch-Book argument fixes the standards of coherence; (ii) finitely additive probability is coherent; (iii) admissible preference structures may be free of consequences, i.e. they may lack prizes whose values are robust against all contingencies.
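A one-line numerical illustration of the generalized Dutch-Book idea mentioned above (a standard textbook instance, not the paper's argument):

```python
# If previsions for an event A and its complement sum to more than 1, a
# bookie who sells the agent unit bets on both at those prices guarantees
# the agent a net loss in every state. Illustrative numbers only.
prev_A, prev_notA = 0.6, 0.6          # incoherent: they sum to 1.2
stake = 1.0                            # unit stake on each event

for A_occurs in (True, False):
    payoff = (stake if A_occurs else 0.0) - prev_A * stake \
           + (stake if not A_occurs else 0.0) - prev_notA * stake
    print(A_occurs, payoff)            # -0.2 in both cases: a sure loss
```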
Gordon Belot argues that Bayesian theory is epistemologically immodest. In response, we show that the topological conditions that underpin his criticisms of asymptotic Bayesian conditioning are self-defeating. They require extreme a priori credences regarding, for example, the limiting behavior of observed relative frequencies. We offer a different explication of Bayesian modesty using a goal of consensus: rival scientific opinions should be responsive to new facts as a way to resolve their disputes. We also address Adam Elga's rebuttal to Belot's analysis, which focuses attention on the role that the assumption of countable additivity plays in Belot's criticisms.
We review de Finetti's two coherence criteria for determinate probabilities: coherence1, defined in terms of previsions for a set of events that are undominated by the status quo – previsions immune to a sure loss – and coherence2, defined in terms of forecasts for events undominated in Brier score by a rival forecast. We propose a criterion of IP-coherence2 based on a generalization of Brier score for IP-forecasts that uses 1-sided, lower and upper, probability forecasts. However, whereas Brier score is a strictly proper scoring rule for eliciting determinate probabilities, we show that there is no real-valued strictly proper IP-score. Nonetheless, with respect to either of two decision rules – Γ-maximin or E-admissibility + Γ-maximin – we give a lexicographic, strictly proper IP-scoring rule that is based on Brier score.
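For the determinate case that the abstract contrasts with the IP setting, the strict propriety of Brier score can be checked numerically; a minimal sketch for a single event (illustrative names and numbers):

```python
# Numeric check that Brier score is strictly proper for a single event:
# the expected score p*(q-1)^2 + (1-p)*q^2 is uniquely minimized at q = p.
import numpy as np

def expected_brier(q, p):
    """Expected Brier score of announcing q when the event has probability p."""
    return p * (q - 1.0) ** 2 + (1.0 - p) * q ** 2

p = 0.3
qs = np.linspace(0.0, 1.0, 101)
print(qs[np.argmin(expected_brier(qs, p))])  # 0.3: honesty is uniquely optimal
```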
Let κ be an uncountable cardinal. Using the theory of conditional probability associated with de Finetti and Dubins, subject to several structural assumptions for creating sufficiently many measurable sets, and assuming that κ is not a weakly inaccessible cardinal, we show that each probability that is not κ-additive has conditional probabilities that fail to be conglomerable in a partition of cardinality no greater than κ. This generalizes our earlier result, in which we established that each finitely additive but not countably additive probability has conditional probabilities that fail to be conglomerable in some countable partition.
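For reference, the standard definition of conglomerability that this result concerns, stated from the general literature rather than quoted from the paper:

```latex
% Conglomerability of P in a partition \pi = \{B_h : h \in H\}:
% for every event E,
\inf_{h \in H} P(E \mid B_h) \;\le\; P(E) \;\le\; \sup_{h \in H} P(E \mid B_h).
% The theorem above says: if P is not \kappa-additive (and \kappa is not
% weakly inaccessible), this inequality fails for some event E in some
% partition of cardinality at most \kappa.
```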
The Sleeping Beauty problem has spawned a debate between "thirders" and "halfers" who draw conflicting conclusions about Sleeping Beauty's credence that a coin lands heads. Our analysis is based on a probability model for what Sleeping Beauty knows at each time during the experiment. We show that conflicting conclusions result from different modeling assumptions that each group makes. Our analysis uses a standard "Bayesian" account of rational belief with conditioning. No special handling is used for self-locating beliefs or centered propositions. We also explore what fair prices Sleeping Beauty computes for gambles that she might be offered during the experiment.
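A toy rendering of how two sets of modeling assumptions yield the two answers (my own minimal model, not the paper's):

```python
# The "thirder" treats the three possible awakenings (Heads-Monday,
# Tails-Monday, Tails-Tuesday) as equally likely given that Beauty is awake;
# the "halfer" notes that being awake was certain under either coin outcome,
# so conditioning on it leaves the Sunday prior of 1/2 untouched.
from fractions import Fraction

episodes = [("Heads", "Mon"), ("Tails", "Mon"), ("Tails", "Tue")]
thirder_heads = Fraction(sum(1 for coin, _ in episodes if coin == "Heads"),
                         len(episodes))
halfer_heads = Fraction(1, 2)

print(thirder_heads, halfer_heads)  # 1/3 1/2
```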
Several axiom systems for preference among acts lead to a unique probability and a state-independent utility such that acts are ranked according to their expected utilities. These axioms have been used as a foundation for Bayesian decision theory and subjective probability calculus. In this article we note that the uniqueness of the probability is relative to the choice of what counts as a constant outcome. Although it is sometimes clear what should be considered constant, in many cases there are several possible choices. Each choice can lead to a different "unique" probability and utility. By focusing attention on state-dependent utilities, we determine conditions under which a truly unique probability and utility can be determined from an agent's expressed preferences among acts. Suppose that an agent's preference can be represented in terms of a probability P and a utility U. That is, the agent prefers one act to another iff the expected utility of that act is higher than that of the other. There are many other equivalent representations in terms of probabilities Q, which are mutually absolutely continuous with P, and state-dependent utilities V, which differ from U by possibly different positive affine transformations in each state of nature. We describe an example in which there are two different but equivalent state-independent utility representations for the same preference structure. They differ in which acts count as constants. The acts involve receiving different amounts of one or the other of two currencies, and the states are different exchange rates between the currencies. It is easy to see how it would not be possible for constant amounts of both currencies to have simultaneously constant values across the different states. Savage (1954, sec. 5.5) discovered a situation in which two seemingly equivalent preference structures are represented by different pairs of probability and utility. He attributed the phenomenon to the construction of a "small world." We show that the small world problem is just another example of two different, but equivalent, representations treating different acts as constants. Finally, we prove a theorem (similar to one of Karni 1985) that shows how to elicit a unique state-dependent utility and does not assume that there are prizes with constant value. To do this, we define a new hypothetical kind of act in which both the prize to be awarded and the state of nature are determined by an auxiliary experiment.
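A numeric sketch of the two-currency phenomenon described above (illustrative numbers of my own, not the paper's example):

```python
# States are exchange rates: e[s] dollars per pound in state s.
P = [0.5, 0.5]            # probability when constant-DOLLAR acts count as constants
e = [1.0, 2.0]            # exchange rate in each state

# The same preferences, re-expressed with constant-POUND acts as constants,
# are represented by a different "unique" probability Q:
w = [P[s] * e[s] for s in range(2)]
Q = [x / sum(w) for x in w]
print(Q)                   # approx [0.333, 0.667]: same preferences, different probability

# Check: an act paying y[s] POUNDS has dollar expectation sum(P[s]*e[s]*y[s]),
# which orders acts exactly as sum(Q[s]*y[s]) does (they differ by the
# positive constant sum(w)).
for y in [[3.0, 0.0], [0.0, 2.0], [1.0, 1.0]]:
    dollars = sum(P[s] * e[s] * y[s] for s in range(2))
    pounds_Q = sum(Q[s] * y[s] for s in range(2))
    print(dollars, pounds_Q * sum(w))  # identical, so the rankings agree
```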
This important collection of essays is a synthesis of foundational studies in Bayesian decision theory and statistics. An overarching topic of the collection is understanding how the norms for Bayesian decision making should apply in settings with more than one rational decision maker and then tracing out some of the consequences of this turn for Bayesian statistics. There are four principal themes to the collection: cooperative, non-sequential decisions; the representation and measurement of 'partially ordered' preferences; non-cooperative, sequential decisions; and pooling rules and Bayesian dynamics for sets of probabilities. The volume will be particularly valuable to philosophers concerned with decision theory, probability, and statistics, as well as to statisticians, mathematicians, and economists.
When real-valued utilities for outcomes are bounded, or when all variables are simple, it is consistent with expected utility to have preferences defined over probability distributions or lotteries. That is, under such circumstances two variables with a common probability distribution over outcomes – equivalent variables – occupy the same place in a preference ordering. However, if strict preference respects uniform, strict dominance in outcomes between variables, and if indifference between two variables entails indifference between their difference and the status quo, then preferences over rich sets of unbounded variables, such as the variables used in the St. Petersburg paradox, cannot preserve indifference between all pairs of equivalent variables. In such circumstances, preference is not a function only of probability and utility for outcomes, and the preference ordering cannot be defined in terms of lotteries.
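A quick illustration of why the St. Petersburg variable falls outside the bounded-utility setting (the standard example):

```python
# The St. Petersburg variable pays 2^n when the first Heads appears on toss n
# (probability 2^-n), so every term of the expected value contributes exactly
# 1 and the partial sums grow without bound.
partial = 0.0
for n in range(1, 31):
    partial += (2.0 ** -n) * (2.0 ** n)   # each term equals exactly 1
print(partial)                             # 30.0 after 30 terms: divergent in the limit
```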
The degree of incoherence, when previsions are not made in accordance with a probability measure, is measured by either of two rates at which an incoherent bookie can be made a sure loser. Each bet is considered as an investment from the points of view of both the bookie and a gambler who takes the bet. From each viewpoint, we define an amount invested (or escrowed) for each bet, and the sure loss of incoherent previsions is divided by the escrow to determine the rate of incoherence. Potential applications include the treatment of arbitrage opportunities in financial markets and the degree of incoherence of classical statistical procedures. We illustrate the latter with the example of hypothesis testing at a fixed size.
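A minimal sketch of the escrow idea (an illustrative normalization of my own; the paper defines the escrow and the two rates precisely, which this toy version does not reproduce):

```python
# A bookie with previsions p on an event and q on its complement, p + q > 1,
# loses (p + q - 1) for sure on unit stakes; a rate of incoherence divides
# this sure loss by the amount escrowed for the bets.
def sure_loss(p, q):
    """Guaranteed loss on unit bets when previsions on A and not-A are p, q."""
    return p + q - 1.0                     # positive iff the previsions are incoherent

def rate_of_incoherence(p, q, escrow):
    return sure_loss(p, q) / escrow

# With previsions 0.6 and 0.6, taking the total amount staked as the escrow:
print(rate_of_incoherence(0.6, 0.6, escrow=0.6 + 0.6))  # about 0.167
```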
It has long been known that the practice of testing all hypotheses at the same level α, regardless of the distribution of the data, is not consistent with Bayesian expected utility maximization. According to de Finetti's "Dutch Book" argument, procedures that are not consistent with expected utility maximization are incoherent: they lead to gambles that are sure to lose no matter what happens. In this paper, we use a method to measure the rate at which incoherent procedures are sure to lose, so that we can distinguish slightly incoherent procedures from grossly incoherent ones. We present an analysis of testing a simple hypothesis against a simple alternative as a case study of how the method can work.
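A sketch of why a fixed level conflicts with expected utility maximization in a simple-vs-simple Normal test: the Bayes-optimal cutoff implies a level that shrinks with sample size, so no fixed level can be optimal for every n (illustrative numbers, assuming equal priors and equal losses):

```python
from math import sqrt
from statistics import NormalDist

mu0, mu1, sigma = 0.0, 1.0, 1.0
for n in (10, 100, 1000):
    se = sigma / sqrt(n)
    # Bayes cutoff with equal priors and equal losses: reject H0 when the
    # sample mean exceeds the midpoint of the two means.
    cutoff = (mu0 + mu1) / 2.0
    alpha_n = 1.0 - NormalDist(mu0, se).cdf(cutoff)  # implied level at size n
    print(n, round(alpha_n, 6))   # roughly 0.0569, 3e-7, ~0: the level shrinks
```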
When can a Bayesian investigator select an hypothesis H and design an experiment (or a sequence of experiments) to make certain that, given the experimental outcome(s), the posterior probability of H will be lower than its prior probability? We report an elementary result which establishes sufficient conditions under which this reasoning to a foregone conclusion cannot occur. Through an example, we discuss how this result extends to the perspective of an onlooker who agrees with the investigator about the statistical model for the data but who holds a different prior probability for the statistical parameters of that model. We consider, specifically, one-sided and two-sided statistical hypotheses involving i.i.d. Normal data with conjugate priors. In a concluding section, using an "improper" prior, we illustrate how the preceding results depend upon the assumption that probability is countably additive.
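The key fact behind such results is the martingale property of posterior probabilities: under the investigator's own prior predictive distribution, the expected posterior of H equals its prior. A toy simulation of that fact (my own binomial example, not the paper's Normal case):

```python
import random

random.seed(0)
prior_H = 0.5                 # H: success probability is 0.7 (vs 0.3 under not-H)
theta = {True: 0.7, False: 0.3}
n, trials = 20, 100_000
total = 0.0
for _ in range(trials):
    h = random.random() < prior_H                             # draw hypothesis from prior
    k = sum(random.random() < theta[h] for _ in range(n))     # draw the data
    like_H = theta[True] ** k * (1 - theta[True]) ** (n - k)
    like_not = theta[False] ** k * (1 - theta[False]) ** (n - k)
    total += prior_H * like_H / (prior_H * like_H + (1 - prior_H) * like_not)
print(total / trials)          # approximately 0.5: the prior probability of H
```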
Statistical decision theory, whether based on Bayesian principles or other concepts such as minimax or admissibility, relies on minimizing expected loss or maximizing expected utility. Loss and utility functions are generally treated as unit-less numerical measures of value for consequences. Here, we address the issue of the units in which loss and utility are settled and the implications that those units have on the rankings of potential decisions. When multiple currencies are available for paying the loss, one must take explicit account of which currency is used as well as the exchange rates between the various available currencies.
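A numeric sketch of how the choice of settlement currency can reverse a ranking once exchange rates vary by state (my own numbers, not the paper's):

```python
P = [0.5, 0.5]                    # probabilities of the two states
rate = [1.0, 4.0]                 # state-dependent dollars per euro

loss_d = [2.0, 2.0]               # decision 1: loses 2 dollars in each state
loss_e = [1.0, 1.0]               # decision 2: loses 1 euro in each state

exp_dollars_1 = sum(P[s] * loss_d[s] for s in range(2))                 # 2.0
exp_dollars_2 = sum(P[s] * loss_e[s] * rate[s] for s in range(2))       # 2.5
exp_euros_1 = sum(P[s] * loss_d[s] / rate[s] for s in range(2))         # 1.25
exp_euros_2 = sum(P[s] * loss_e[s] for s in range(2))                   # 1.0

print(exp_dollars_1 < exp_dollars_2)  # True: decision 1 has lower expected loss in dollars
print(exp_euros_1 < exp_euros_2)      # False: decision 2 has lower expected loss in euros
```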
We investigate differences between a simple Dominance Principle applied to sums of fair prices for variables and dominance applied to sums of forecasts for variables scored by proper scoring rules. In particular, we consider differences when fair prices and forecasts correspond to finitely additive expectations and dominance is applied with infinitely many prices and/or forecasts.
Experimenters sometimes insist that it is unwise to examine data before determining how to analyze them, as it creates the potential for biased results. I explore the rationale behind this methodological guideline from the standpoint of an error statistical theory of evidence, and I discuss a method of evaluating evidence in some contexts when this predesignation rule has been violated. I illustrate the problem of potential bias, and the method by which it may be addressed, with an example from the search for the top quark. A point in favor of the error statistical theory is its ability, demonstrated here, to explicate such methodological problems and suggest solutions, within the framework of an objective theory of evidence.
Every finitely additive probability decomposes uniquely into a convex combination of a countably additive probability and a purely finitely additive (PFA) one. The coefficient of the PFA probability…
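For reference, the decomposition this fragment refers to, in its standard (Yosida–Hewitt) form rather than as stated in the paper:

```latex
% Every finitely additive probability P on a \sigma-field decomposes
% uniquely as
P \;=\; \lambda\, P_{\mathrm{ca}} + (1-\lambda)\, P_{\mathrm{pfa}},
\qquad \lambda \in [0,1],
% with P_{ca} countably additive and P_{pfa} purely finitely additive.
```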
We give necessary and sufficient conditions for a scoring rule to be proper for eliciting a quantile when utility is linear and the set of distributions is unrestricted. We also give results when the set of distributions is limited, for example, to distributions that have finite first moments.
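The best-known instance under linear utility is the pinball (check) loss; a numeric check that its expectation is minimized at the p-quantile (a textbook example, not necessarily the paper's general characterization):

```python
# Pinball loss for the p-quantile: S(q, x) = (1{x <= q} - p) * (q - x).
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)   # sample from the distribution
p = 0.75

def expected_pinball(q):
    return np.mean(((x <= q).astype(float) - p) * (q - x))

qs = np.linspace(0.5, 2.5, 201)
best = qs[np.argmin([expected_pinball(q) for q in qs])]
print(best, np.quantile(x, p))                  # both near ln(4), about 1.386
```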
We give an extension of de Finetti’s concept of coherence to unbounded random variables that allows for gambling in the presence of infinite previsions. We present a finitely additive extension of the Daniell integral to unbounded random variables that we believe has advantages over Lebesgue-style integrals in the finitely additive setting. We also give a general version of the Fundamental Theorem of Prevision to deal with conditional previsions and unbounded random variables.