When real-valued utilities for outcomes are bounded, or when all variables are simple, it is consistent with expected utility to have preferences defined over probability distributions or lotteries. That is, under such circumstances two variables with a common probability distribution over outcomes – equivalent variables – occupy the same place in a preference ordering. However, if strict preference respects uniform, strict dominance in outcomes between variables, and if indifference between two variables entails indifference between their difference and the status quo, then preferences over rich sets of unbounded variables, such as variables used in the St. Petersburg paradox, cannot preserve indifference between all pairs of equivalent variables. In such circumstances, preference is not a function only of probability and utility for outcomes. Then the preference ordering is not defined in terms of lotteries.
Experimenters sometimes insist that it is unwise to examine data before determining how to analyze them, as it creates the potential for biased results. I explore the rationale behind this methodological guideline from the standpoint of an error statistical theory of evidence, and I discuss a method of evaluating evidence in some contexts when this predesignation rule has been violated. I illustrate the problem of potential bias, and the method by which it may be addressed, with an example from the search for the top quark. A point in favor of the error statistical theory is its ability, demonstrated here, to explicate such methodological problems and suggest solutions, within the framework of an objective theory of evidence.
• Coherence1 for previsions of random variables with generalized betting;
• Coherence2 for probability forecasts of events with Brier score penalty;
• Coherence3 for probability forecasts of events with various proper scoring rules.
Part 1: Background on de Finetti’s twin criteria of coherence:
• Coherence1: 2-sided previsions free from dominance through a Book.
• Coherence2: forecasts free from dominance under Brier (squared error) score.
Part 2: IP (imprecise probability) theory based on a scoring rule.
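The Coherence2 criterion above can be illustrated with a minimal numerical sketch (the numbers are my own, not from the text): forecasts for an event and its complement that fail to sum to 1 are dominated under Brier score by their Euclidean projection onto the probability simplex.

```python
# Sketch of de Finetti's Coherence2 in miniature (illustrative numbers):
# incoherent forecasts for an event E and its complement are dominated,
# under Brier (squared-error) score, by their projection onto the
# simplex p + q = 1, in every state of the world.

def brier(forecasts, outcomes):
    """Total squared-error penalty for forecasts of {E, not-E}."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes))

incoherent = (0.7, 0.6)            # P(E) + P(not-E) = 1.3 > 1
excess = (sum(incoherent) - 1) / 2
coherent = tuple(f - excess for f in incoherent)   # projection: (0.55, 0.45)

for outcomes in [(1, 0), (0, 1)]:  # E occurs / E fails
    assert brier(coherent, outcomes) < brier(incoherent, outcomes)
```

The strict dominance in both states is exactly the penalty that Coherence2 says an incoherent forecaster cannot escape.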
We discuss several features of coherent choice functions, where the admissible options in a decision problem are exactly those that maximize expected utility for some probability/utility pair in a fixed set S of probability/utility pairs. In this paper we consider, primarily, normal-form decision problems under uncertainty, where only the probability component of S is indeterminate and utility for two privileged outcomes is determinate. Coherent choice distinguishes between each pair of sets of probabilities regardless of the “shape” or “connectedness” of the sets of probabilities. We axiomatize the theory of choice functions and show that these axioms are necessary for coherence. The axioms are sufficient for coherence with respect to a set of probability/almost-state-independent utility pairs. We give sufficient conditions under which a choice function satisfying our axioms is represented by a set of probability/state-independent utility pairs with a common utility.
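The choice rule described above can be sketched in a few lines (a hypothetical toy setup, with utility identified with payoff and illustrative numbers): an option is admissible exactly when it maximizes expected utility for at least one probability in the fixed set S.

```python
# Hypothetical sketch of the coherent-choice rule described above:
# an option is admissible iff it maximizes expected utility for some
# probability vector in a fixed set S (utility here = raw payoff).

def admissible(options, prob_set):
    """Indices of options that are expected-utility maximal
    for at least one probability vector in prob_set."""
    chosen = set()
    for p in prob_set:
        exp = [sum(pi * u for pi, u in zip(p, opt)) for opt in options]
        best = max(exp)
        chosen |= {i for i, e in enumerate(exp) if e == best}
    return chosen

# Three options over two states; S contains two probability vectors.
options = [(1.0, 0.0), (0.0, 1.0), (0.4, 0.4)]
S = [(0.8, 0.2), (0.2, 0.8)]
print(admissible(options, S))
```

Note that the third option is never admissible: its expected payoff is 0.4 under every probability, while the better of the first two options always earns at least 0.5. This is the sense in which choice, not pairwise preference, carries the information about the set S.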
We extend de Finetti’s (1974) theory of coherence to apply also to unbounded random variables. We show that for random variables with mandated infinite prevision, such as for the St. Petersburg gamble, coherence precludes indifference between equivalent random quantities. That is, we demonstrate when the prevision of the difference between two such equivalent random variables must be positive. This result conflicts with the usual approach to theories of Subjective Expected Utility, where preference is defined over lotteries. In addition, we explore similar results for unbounded variables when their previsions, though finite, exceed their expected values, as is permitted within de Finetti’s theory. In such cases, the decision maker’s coherent preferences over random quantities are not even a function of probability and utility. One upshot of these findings is to explain further the differences between Savage’s theory (1954), which requires bounded utility for non-simple acts, and de Finetti’s theory, which does not. And this raises the question whether there is a theory that fits between these two.
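Why the St. Petersburg gamble mandates an infinite prevision is worth one line of arithmetic (a standard computation, not taken from the paper): the payoff 2^k received with probability 2^(-k) contributes exactly 1 to the expectation at every stage, so truncated expectations grow without bound.

```python
# Minimal sketch of the St. Petersburg divergence: each term of the
# expectation, (2**k) * (2**-k), equals 1, so the expectation truncated
# at n flips is exactly n and diverges as n grows.

def truncated_expectation(n):
    """Expected payoff counting only the first n coin flips."""
    return sum((2 ** k) * (2 ** -k) for k in range(1, n + 1))

assert truncated_expectation(10) == 10
assert truncated_expectation(1000) == 1000   # grows linearly, without bound
```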
The degree of incoherence, when previsions are not made in accordance with a probability measure, is measured by either of two rates at which an incoherent bookie can be made a sure loser. Each bet is considered as an investment from the points of view of both the bookie and a gambler who takes the bet. From each viewpoint, we define an amount invested (or escrowed) for each bet, and the sure loss of incoherent previsions is divided by the escrow to determine the rate of incoherence. Potential applications include the treatment of arbitrage opportunities in financial markets and the degree of incoherence of classical statistical procedures. We illustrate the latter with the example of hypothesis testing at a fixed size.
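A toy version of the sure-loss-over-escrow idea can be computed directly. The numbers and the escrow convention below are my own simplification, not the paper's definitions, which distinguish the bookie's and the gambler's viewpoints.

```python
# Illustrative sketch (escrow convention simplified relative to the paper):
# a bookie posting previsions 0.7 for E and 0.6 for not-E can be made a
# sure loser; dividing the guaranteed loss by the amount at stake gives
# one crude rate of incoherence.

previsions = {"E": 0.7, "not-E": 0.6}

# The gambler sells the bookie a $1 bet on each event at its prevision.
cost = sum(previsions.values())   # bookie pays 1.30 in total
payoff = 1.0                      # exactly one event occurs, paying $1
sure_loss = cost - payoff         # 0.30, regardless of the outcome
escrow = cost                     # one simple choice: the bookie's total outlay
rate = sure_loss / escrow
print(round(rate, 4))
```

The point of normalizing by an escrow is that a fixed sure loss of $0.30 means very different things when $1.30 is at stake versus $1,300, which is what makes the ratio, rather than the raw loss, a sensible index of incoherence.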
When can a Bayesian investigator select a hypothesis H and design an experiment (or a sequence of experiments) to make certain that, given the experimental outcome(s), the posterior probability of H will be lower than its prior probability? We report an elementary result which establishes sufficient conditions under which this reasoning to a foregone conclusion cannot occur. Through an example, we discuss how this result extends to the perspective of an onlooker who agrees with the investigator about the statistical model for the data but who holds a different prior probability for the statistical parameters of that model. We consider, specifically, one-sided and two-sided statistical hypotheses involving i.i.d. Normal data with conjugate priors. In a concluding section, using an "improper" prior, we illustrate how the preceding results depend upon the assumption that probability is countably additive.
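The mechanism blocking a foregone conclusion can be seen in a small simulation (my own example construction, not from the paper): with a countably additive prior, the posterior probability of H is a martingale, so averaged over data drawn from the prior predictive it equals the prior, and no design can guarantee a lower posterior in advance.

```python
# Hedged illustration: the posterior of H is a martingale under a
# countably additive prior.  H: a coin's bias is 0.6 (else it is 0.4),
# each with prior probability 0.5.  Simulating data from the prior
# predictive, the average posterior recovers the prior.
import random

random.seed(0)
PRIOR = 0.5                      # P(H)
N_FLIPS, N_SIMS = 10, 20000

def posterior(heads, n):
    """P(H | heads out of n flips) by Bayes' theorem."""
    lik_h = 0.6 ** heads * 0.4 ** (n - heads)
    lik_not = 0.4 ** heads * 0.6 ** (n - heads)
    return PRIOR * lik_h / (PRIOR * lik_h + (1 - PRIOR) * lik_not)

total = 0.0
for _ in range(N_SIMS):
    bias = 0.6 if random.random() < PRIOR else 0.4
    heads = sum(random.random() < bias for _ in range(N_FLIPS))
    total += posterior(heads, N_FLIPS)
print(total / N_SIMS)            # close to the prior, 0.5
```

Countable additivity matters here: with a merely finitely additive ("improper") prior the averaging step can fail, which is the hinge of the abstract's concluding section.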
For Savage (1954) as for de Finetti (1974), the existence of subjective (personal) probability is a consequence of the normative theory of preference. (De Finetti achieves the reduction of belief to desire with his generalized Dutch-Book argument for Previsions.) Both Savage and de Finetti rebel against legislating countable additivity for subjective probability. They require merely that probability be finitely additive. Simultaneously, they insist that their theories of preference are weak, accommodating all but self-defeating desires. In this paper we dispute these claims by showing that the following three claims cannot simultaneously hold: (i) Coherent belief is reducible to rational preference, i.e. the generalized Dutch-Book argument fixes standards of coherence. (ii) Finitely additive probability is coherent. (iii) Admissible preference structures may be free of consequences, i.e. they may lack prizes whose values are robust against all contingencies.
Statistical decision theory, whether based on Bayesian principles or other concepts such as minimax or admissibility, relies on the idea of minimizing expected loss or maximizing expected utility. Loss and utility functions are generally treated as unitless numerical measures of how costly or valuable are the various consequences of potential decisions. In this paper, we address directly the issue of the units in which loss and utility are settled and the implications that those units have for the rankings of potential decisions. The simplest example is to imagine that the loss will be paid in units of some currency. If there are multiple currencies available for paying the loss, one must take explicit account of which currency is used as well as the exchange rates between the various available currencies.
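The currency point above can be made concrete with hypothetical numbers (mine, not the paper's): when the exchange rate between two currencies varies across states, the same pair of decisions can be ranked differently by expected loss depending on the currency in which losses are settled.

```python
# Hypothetical illustration: a state-dependent exchange rate reverses
# the expected-loss ranking of two decisions across currencies.

p = (0.5, 0.5)                   # probabilities of states s1, s2
rate_b_per_a = (1.0, 4.0)        # units of currency B per unit of A, by state

loss_a = {"d1": (2.0, 2.0), "d2": (3.0, 1.4)}   # losses settled in A

def expected(losses):
    return sum(pi, ) if False else sum(pi * l for pi, l in zip(p, losses))

exp_a = {d: expected(l) for d, l in loss_a.items()}
exp_b = {d: expected(tuple(l * r for l, r in zip(loss, rate_b_per_a)))
         for d, loss in loss_a.items()}

# In currency A, d1 has the smaller expected loss; converted to B, d2 does.
assert exp_a["d1"] < exp_a["d2"] and exp_b["d2"] < exp_b["d1"]
```

The reversal shows that "minimize expected loss" is underspecified until the settlement currency is fixed, which is precisely the issue the abstract raises.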