The usage of imprecise probabilities has been advocated in many domains: A number of philosophers have argued that our belief states should be “imprecise” in response to certain sorts of evidence, and imprecise probabilities have been thought to play an important role in disciplines such as artificial intelligence, climate science, and engineering. In this paper I’m interested in the question of whether the usage of imprecise probabilities can be given a practical motivation (a motivation based on practical rather than epistemic or alethic concerns). My aim is to challenge the central motivation for using imprecise probabilities in decision-making that has been offered in the literature: the idea that, in at least some contexts, it’s desirable to be ambiguity averse. If I succeed, this will show that we need to reconsider whether there are good reasons to use imprecise probabilities in contexts in which making good decisions is what's of primary concern.
This chapter explores the topic of imprecise probabilities (IP) as it relates to model validation. IP is a family of formal methods that aim to provide a better representation of severe uncertainty than is possible with standard probabilistic methods. Among the methods discussed here are using sets of probabilities to represent uncertainty, and using functions that do not satisfy the additivity property. We discuss the basics of IP, some examples of IP in computer simulation contexts, possible interpretations of the IP framework, and some conceptual problems for the approach. We conclude with a discussion of IP in the context of model validation.
Many have argued that a rational agent's attitude towards a proposition may be better represented by a probability range than by a single number. I show that in such cases an agent will have unstable betting behaviour, and so will behave in an unpredictable way. I use this point to argue against a range of responses to the ‘two bets’ argument for sharp probabilities.
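The unstable betting behaviour at issue can be sketched with a toy model of interval-valued betting dispositions (an illustrative sketch under my own assumptions, not the paper's formalism; the function name and thresholds are invented for illustration):

```python
def betting_disposition(interval, price):
    """Return the bet an interval-valued agent accepts at a given price.

    Toy model: with credence in P represented by the interval [low, high],
    the agent buys a $1 bet on P when the price is below the lower
    probability, sells it when the price is above the upper probability,
    and otherwise has no determinate disposition either way.
    """
    low, high = interval
    if price < low:
        return "buy"
    if price > high:
        return "sell"
    return "undetermined"

# A sharp agent (low == high) has a determinate disposition at almost
# every price; an imprecise agent is silent on the whole interval, which
# is where the instability argument gets its grip.
print(betting_disposition((0.3, 0.7), 0.2))  # buy
print(betting_disposition((0.3, 0.7), 0.5))  # undetermined
print(betting_disposition((0.3, 0.7), 0.9))  # sell
```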
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
An examination of topics involved in statistical reasoning with imprecise probabilities. The book discusses assessment and elicitation, extensions, envelopes and decisions, the importance of imprecision, conditional previsions and coherent statistical models.
Two compelling principles, the Reasonable Range Principle and the Preservation of Irrelevant Evidence Principle, are necessary conditions that any response to peer disagreements ought to abide by. The Reasonable Range Principle maintains that a resolution to a peer disagreement should not fall outside the range of views expressed by the peers in their dispute, whereas the Preservation of Irrelevant Evidence Principle maintains that a resolution strategy should be able to preserve unanimous judgments of evidential irrelevance among the peers. No standard Bayesian resolution strategy satisfies the PIE Principle, however, and we give a loss aversion argument in support of PIE and against Bayes. The theory of imprecise probability allows one to satisfy both principles, and we introduce the notion of a set-based credal judgment to frame and address a range of subtle issues that arise in peer disagreements.
The question of how the probabilistic opinions of different individuals should be aggregated to form a group opinion is controversial. But one assumption seems to be pretty much common ground: for a group of Bayesians, the representation of group opinion should itself be a unique probability distribution (Bordley, Management Science 28: 1137–1148; Genest et al., The Annals of Statistics: 487–501; Genest and Zidek, Statistical Science: 114–135; Mongin, Journal of Economic Theory 66: 313–351; Clemen and Winkler, Risk Analysis 19: 187–203; Dietrich and List; Herzberg, Theory and Decision: 1–19). We argue that this assumption is not always in order. We show how to extend the canonical mathematical framework for pooling to cover pooling with imprecise probabilities (IP) by employing set-valued pooling functions and generalizing common pooling axioms accordingly. As a proof of concept, we then show that one IP construction satisfies a number of central pooling axioms that are not jointly satisfied by any of the standard pooling recipes on pain of triviality. Following Levi, we also argue that IP models admit of a much better philosophical motivation as a model of rational consensus.
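The set-valued pooling idea can be sketched in a few lines (a toy under my own assumptions, not the paper's actual construction): pool every selection of one distribution from each agent's credal set with an ordinary linear opinion pool, so the group opinion is itself a set of distributions.

```python
from itertools import product

def linear_pool(dists, weights):
    """Ordinary linear opinion pool of precise distributions (tuples)."""
    n = len(dists[0])
    return tuple(sum(w * d[i] for w, d in zip(weights, dists)) for i in range(n))

def set_valued_pool(credal_sets, weights):
    """Toy set-valued pooling function: linearly pool every selection of
    one member from each agent's credal set."""
    return {linear_pool(selection, weights) for selection in product(*credal_sets)}

# Invented example: two agents, each imprecise about a coin, pooled with
# equal weights; the result is again a set of distributions.
alice = [(0.2, 0.8), (0.4, 0.6)]
bob = [(0.5, 0.5), (0.7, 0.3)]
pooled = set_valued_pool([alice, bob], [0.5, 0.5])
print(sorted(pooled))
```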
Understanding probabilities as something other than point values has often been motivated by the need to find more realistic models for degree of belief, and in particular the idea that degree of belief should have an objective basis in “statistical knowledge of the world.” I offer here another motivation growing out of efforts to understand how chance evolves as a function of time. If the world is “chancy” in that there are non-trivial, objective, physical probabilities at the macro-level, then the chance of an event e that happens at a given time is strictly between zero and one until it happens. But whether the chance of e goes to one continuously or not is left open. Discontinuities in such chance trajectories can have surprising and troubling consequences for probabilistic analyses of causation and accounts of how events occur in time. This, coupled with the compelling evidence for quantum discontinuities in chance’s evolution, gives rise to a “continuity bind” with respect to chance probability trajectories. I argue that a viable option for circumventing the continuity bind is to understand the probabilities “imprecisely,” that is, as intervals rather than point values. I then develop and motivate an alternative kind of continuity appropriate for interval-valued chance probability trajectories.
Those who model doxastic states with a set of probability functions, rather than a single function, face a pressing challenge: can they provide a plausible decision theory compatible with their view? Adam Elga and others claim that they cannot, and that the set of functions model should be rejected for this reason. This paper aims to answer this challenge. The key insight is that the set of functions model can be seen as an instance of the supervaluationist approach to vagueness more generally. We can then generate our decision theory by applying the general supervaluationist semantics to decision-theoretic claims. The result: if an action is permissible according to all functions in the set, it’s determinately permissible; if impermissible according to all, determinately impermissible; and – crucially – if permissible according to some, but not all, it’s indeterminate whether it’s permissible. This proposal handles with ease some difficult cases on which alternative decision theories falter. One reason this view has been overlooked in the literature thus far is that all parties to the debate presuppose that an acceptable decision theory must classify each action as either permissible or impermissible. But I will argue that this thought is misguided. Seeing the set of functions model as an instance of supervaluationism provides a compelling motivation for the claim that there can be indeterminacy in the rationality of some actions.
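The supervaluationist verdicts described in the abstract are easy to sketch for finitely many credence functions (a minimal illustration; the acts, states, and utilities are invented):

```python
def expected_utility(credence, utilities):
    """Expected utility of an act under a single credence function."""
    return sum(credence[state] * utilities[state] for state in credence)

def classify(acts, credal_set):
    """Supervaluationist verdicts over a set of credence functions:
    determinately permissible if the act maximizes expected utility under
    every function in the set, determinately impermissible if under none,
    and indeterminate otherwise."""
    verdicts = {}
    for name, utilities in acts.items():
        maximal = [
            expected_utility(c, utilities) == max(expected_utility(c, u) for u in acts.values())
            for c in credal_set
        ]
        if all(maximal):
            verdicts[name] = "determinately permissible"
        elif not any(maximal):
            verdicts[name] = "determinately impermissible"
        else:
            verdicts[name] = "indeterminate"
    return verdicts

# Invented example: two credence functions that disagree about rain.
credal_set = [{"rain": 0.2, "dry": 0.8}, {"rain": 0.6, "dry": 0.4}]
acts = {
    "umbrella": {"rain": 1.0, "dry": 0.2},
    "no umbrella": {"rain": 0.0, "dry": 1.0},
    "stay home": {"rain": 0.0, "dry": 0.0},
}
verdicts = classify(acts, credal_set)
print(verdicts)
```

Each of the first two acts maximizes expected utility under one credence function but not the other, so each comes out indeterminate; the dominated third act is determinately impermissible.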
Randomized controlled clinical trials play an important role in the development of new medical therapies. There is, however, an ethical issue surrounding the use of randomized treatment allocation when the patient is suffering from a life threatening condition and requires immediate treatment. Such patients can only benefit from the treatment they actually receive and not from the alternative therapy, even if it ultimately proves to be superior. We discuss a novel way to analyse data from such clinical trials based on the use of the recently developed theory of imprecise probabilities. This work draws an explicit distinction between the related but nevertheless distinct questions of inference and decision in clinical trials. The traditional question of scientific interest asks 'Which treatment offers the greater chance of success?' and is the primary reason for conducting the clinical trial. The question of decision concerns the welfare of the patients in the clinical trial, asking whether the accumulated evidence favours one treatment over the other to such an extent that the next patient should decline randomization and instead express a preference for one treatment. Consideration of the decision question within the framework of imprecise probabilities leads to a mathematical definition of equipoise and a method for governing the randomization protocol of a clinical trial. This paper describes in detail the protocol for the conduct of clinical trials based on this new method of analysis, which is illustrated in a retrospective analysis of data from a clinical trial comparing the anti-emetic drugs ondansetron and droperidol in the treatment of postoperative nausea and vomiting. 
The proposed methodology is compared quantitatively using computer simulation studies with conventional clinical trial designs and is shown to maintain high statistical power with reduced sample sizes, at the expense of a high type I error rate that we argue is irrelevant in some specific circumstances. Particular emphasis is placed on describing the type of medical conditions and treatment comparisons where the new methodology is expected to provide the greatest benefit.
We review de Finetti’s two coherence criteria for determinate probabilities: coherence1, defined in terms of previsions for a set of events that are undominated by the status quo – previsions immune to a sure loss – and coherence2, defined in terms of forecasts for events undominated in Brier score by a rival forecast. We propose a criterion of IP-coherence2 based on a generalization of Brier score for IP-forecasts that uses 1-sided, lower and upper, probability forecasts. However, whereas Brier score is a strictly proper scoring rule for eliciting determinate probabilities, we show that there is no real-valued strictly proper IP-score. Nonetheless, with respect to either of two decision rules – Γ-maximin or E-admissibility + Γ-maximin – we give a lexicographic strictly proper IP-scoring rule that is based on Brier score.
Orthodox Bayesian decision theory requires that an agent’s beliefs be representable by a real-valued function, ideally a probability function. Many theorists have argued that this is too restrictive; it can be perfectly reasonable to have indeterminate degrees of belief. So doxastic states are ideally representable by a set of probability functions. One consequence of this is that the expected value of a gamble will be imprecise. This paper looks at the attempts to extend Bayesian decision theory to deal with such cases, and concludes that all proposals advanced thus far have been incoherent. A more modest, but coherent, alternative is proposed. Keywords: Imprecise probabilities, Arrow’s theorem.
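The observation that expected values go imprecise under a set of probability functions can be made concrete with a minimal sketch (the example credal set and gamble are invented):

```python
def expected_value(p, gamble):
    """Expectation of a gamble under a single probability function."""
    return sum(p[state] * gamble[state] for state in p)

def imprecise_expectation(credal_set, gamble):
    """With beliefs represented by a set of probability functions, the
    expected value of a gamble is interval-valued: the span of its
    expectations across the set."""
    values = [expected_value(p, gamble) for p in credal_set]
    return min(values), max(values)

# Invented example: imprecision about the chance of winning makes the
# gamble's expected value an interval rather than a number.
credal_set = [{"win": q, "lose": 1 - q} for q in (0.3, 0.4, 0.5)]
gamble = {"win": 10.0, "lose": -5.0}
low, high = imprecise_expectation(credal_set, gamble)
print((low, high))
```

Because the interval here straddles zero, the standard "maximize expected utility" instruction gives no verdict, which is exactly the gap the decision theories surveyed in the paper try to fill.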
The modus ponens (A -> B, A :. B) is, along with modus tollens and the two logically invalid counterparts denying the antecedent (A -> B, ¬A :. ¬B) and affirming the consequent (A -> B, B :. A), among the argument forms most often investigated in the psychology of human reasoning. The present contribution reports the results of three experiments on the probabilistic versions of modus ponens and denying the antecedent. In probability logic these arguments lead to conclusions with imprecise probabilities. In the modus ponens tasks the participants inferred probabilities that agreed much better with the coherent normative values than in the denying the antecedent tasks, a result that mirrors results found with the classical argument versions. For modus ponens a surprisingly high number of lower and upper probabilities agreed perfectly with the conjugacy property (the upper probability equals one minus the lower probability of the complement). When the probabilities of the premises are imprecise the participants do not ignore irrelevant (“silent”) boundary probabilities. The results show that human mental probability logic is close to predictions derived from probability logic for the most elementary argument form, but has considerable difficulties with the more complex forms involving negations.
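The coherent imprecise conclusions for these two argument forms follow the standard probability-logic bounds; a small sketch (the function names are mine, but the interval formulas are the usual coherence bounds for precise premise probabilities):

```python
def modus_ponens_bounds(p_a, p_b_given_a):
    """Coherent lower/upper probability for the conclusion B of modus
    ponens, given precise premise probabilities P(A) and P(B|A):
    the interval [P(A)P(B|A), P(A)P(B|A) + 1 - P(A)]."""
    low = p_a * p_b_given_a
    return low, low + 1.0 - p_a

def denying_antecedent_bounds(p_not_a, p_b_given_a):
    """Coherent bounds on P(not-B) given P(not-A) and P(B|A), obtained by
    complementing the modus ponens interval for P(B)."""
    p_a = 1.0 - p_not_a
    low, high = modus_ponens_bounds(p_a, p_b_given_a)
    return 1.0 - high, 1.0 - low

# The conjugacy property mentioned in the abstract holds by construction
# here: the upper bound on not-B is one minus the lower bound on B.
mp = modus_ponens_bounds(0.9, 0.8)
da = denying_antecedent_bounds(0.1, 0.8)
print(mp, da)
```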
We study probabilistically informative (weak) versions of transitivity by using suitable definitions of defaults and negated defaults in the setting of coherence and imprecise probabilities. We represent p-consistent sequences of defaults and/or negated defaults by g-coherent imprecise probability assessments on the respective sequences of conditional events. Finally, we present the coherent probability propagation rules for Weak Transitivity and the validity of selected inference patterns by proving p-entailment of the associated knowledge bases.
Sometimes different partitions of the same space each seem to divide that space into propositions that call for equal epistemic treatment. Famously, equal treatment in the form of equal point-valued credence leads to incoherence. Some have argued that equal treatment in the form of equal interval-valued credence solves the puzzle. This paper shows that, once we rule out intervals with extreme endpoints, this proposal also leads to incoherence.
We propose a method for estimating subjective beliefs, viewed as a subjective probability distribution. The key insight is to characterize beliefs as a parameter to be estimated from observed choices in a well-defined experimental task and to estimate that parameter as a random coefficient. The experimental task consists of a series of standard lottery choices in which the subject is assumed to use conventional risk attitudes to select one lottery or the other and then a series of betting choices in which the subject is presented with a range of bookies offering odds on the outcome of some event that the subject has a belief over. Knowledge of the risk attitudes of subjects conditions the inferences about subjective beliefs. Maximum simulated likelihood methods are used to estimate a structural model in which subjects employ subjective beliefs to make bets. We present evidence that some subjective probabilities are indeed best characterized as probability distributions with non-zero variance.
This special issue of the International Journal of Approximate Reasoning grew out of the 8th International Symposium on Imprecise Probability: Theories and Applications (ISIPTA). The symposium was organized by the Society for Imprecise Probability: Theories and Applications at the Université de Technologie de Compiègne in July 2013. The biennial ISIPTA meetings are well established among international conferences on generalized methods for uncertainty quantification. The first ISIPTA took place in Ghent in 1999, followed by meetings in Cornell, Lugano, Carnegie Mellon, Prague, Durham and Innsbruck. Compiègne proved to be a very nice location for ISIPTA 2013, offering wonderful opportunities for collaborations and discussions, as well as sightseeing places such as its imperial palace.
Many philosophers argue that Keynes’s concept of the “weight of arguments” is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of “Evidential Probability” to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness of this measure by employing it to develop an answer to Popper’s Paradox of Ideal Evidence.
In his entry on "Quantum Logic and Probability Theory" in the Stanford Encyclopedia of Philosophy, Alexander Wilce (2012) writes that "it is uncontroversial (though remarkable) that the formal apparatus of quantum mechanics reduces neatly to a generalization of classical probability in which the role played by a Boolean algebra of events in the latter is taken over by the 'quantum logic' of projection operators on a Hilbert space." For a long time, Patrick Suppes has opposed this view (see, for example, the papers collected in Suppes and Zanotti (1996)). Instead of changing the logic and moving from a Boolean algebra to a non-Boolean algebra, one can also 'save the phenomena' by weakening the axioms of probability theory and working instead with upper and lower probabilities. However, it is fair to say that despite Suppes' efforts upper and lower probabilities are not particularly popular in physics or in the foundations of physics, at least so far. Instead, quantum logic is booming again, especially since quantum information and computation became hot topics. Interestingly, however, imprecise probabilities are becoming more and more popular in formal epistemology, as recent work by authors such as James Joyce (2010) and Roger White (2010) demonstrates.
The generalized Bayes’ rule (GBR) can be used to conduct ‘quasi-Bayesian’ analyses when prior beliefs are represented by imprecise probability models. We describe a procedure for deriving coherent imprecise probability models when the event space consists of a finite set of mutually exclusive and exhaustive events. The procedure is based on Walley’s theory of upper and lower prevision and employs simple linear programming models. We then describe how these models can be updated using Cozman’s linear programming formulation of the GBR. Examples are provided to demonstrate how the GBR can be applied in practice. These examples also illustrate the effects of prior imprecision and prior-data conflict on the precision of the posterior probability distribution.
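For a credal set listed by finitely many extreme points, the GBR can be illustrated by brute enumeration rather than linear programming (a simplified sketch with invented numbers; the linear programming formulation is needed when the set is described by constraints rather than by a list of points):

```python
def condition(p, evidence):
    """Bayes-condition a single distribution on an event (a set of states)."""
    p_e = sum(p[s] for s in evidence)
    return {s: (p[s] / p_e if s in evidence else 0.0) for s in p}

def generalized_bayes(credal_set, evidence, event):
    """Generalized Bayes' rule over a credal set given by extreme points:
    condition each member on the evidence and take the infimum and
    supremum of the resulting probabilities of the event."""
    posteriors = [sum(condition(p, evidence)[s] for s in event) for p in credal_set]
    return min(posteriors), max(posteriors)

# Invented example: two priors over three states, updated on {a, b}.
credal_set = [
    {"a": 0.2, "b": 0.3, "c": 0.5},
    {"a": 0.1, "b": 0.6, "c": 0.3},
]
posterior = generalized_bayes(credal_set, {"a", "b"}, {"a"})
print(posterior)
```

Note how the two priors, which disagree only mildly about state a, yield a noticeably wide posterior interval, a small instance of the prior-imprecision effects the examples in the paper illustrate.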
An agent often does not have precise probabilities or utilities to guide resolution of a decision problem. I advance a principle of rationality for making decisions in such cases. To begin, I represent the doxastic and conative state of an agent with a set of pairs of a probability assignment and a utility assignment. Then I support a decision principle that allows any act that maximizes expected utility according to some pair of assignments in the set. Assuming that computation of an option's expected utility uses comprehensive possible outcomes that include the option's risk, no consideration supports a stricter requirement.
Evidentialists say that a necessary condition of sound epistemic reasoning is that our beliefs reflect only our evidence. This thesis arguably conflicts with standard Bayesianism, due to the importance of prior probabilities in the latter. Some evidentialists have responded by modelling belief-states using imprecise probabilities (Joyce 2005). However, Roger White (2010) and Aron Vallinder (2018) argue that this Imprecise Bayesianism is incompatible with evidentialism due to “inertia”, where Imprecise Bayesian agents become stuck in a state of ambivalence towards hypotheses. Additionally, escapes from inertia apparently only create further conflicts with evidentialism. This dilemma gives a reason for evidentialist imprecise probabilists to look for alternatives without inertia. I shall argue that Henry E. Kyburg’s approach offers an evidentialist-friendly imprecise probability theory without inertia, and that its relevant anti-inertia features are independently justified. I also connect the traditional epistemological debates concerning the “ethics of belief” more systematically with formal epistemology than has been hitherto done.
Bayesians often confuse insistence that probability judgment ought to be indeterminate (which is incompatible with Bayesian ideals) with recognition of the presence of imprecision in the determination or measurement of personal probabilities (which is compatible with these ideals). The confusion is discussed and illustrated by remarks in a recent essay by R. C. Jeffrey.
Jim Joyce argues for two amendments to probabilism. The first is the doctrine that credences are rational, or not, in virtue of their accuracy or “closeness to the truth” (1998). The second is a shift from a numerically precise model of belief to an imprecise model represented by a set of probability functions (2010). We argue that both amendments cannot be satisfied simultaneously. To do so, we employ a (slightly generalized) impossibility theorem of Seidenfeld, Schervish, and Kadane (2012), who show that there is no strictly proper scoring rule for imprecise probabilities. The question then is what should give way. Joyce, who is well aware of this no-go result, thinks that a quantifiability constraint on epistemic accuracy should be relaxed to accommodate imprecision. We argue instead that another Joycean assumption, called strict immodesty, should be rejected, and we prove a representation theorem that characterizes all “mildly” immodest measures of inaccuracy.
This article considers the extent to which Bayesian networks with imprecise probabilities, which are used in statistics and computer science for predictive purposes, can be used to represent causal structure. It is argued that the adequacy conditions for causal representation in the precise context—the Causal Markov Condition and Minimality—do not readily translate into the imprecise context. Crucial to this argument is the fact that the independence relation between random variables can be understood in several different ways when the joint probability distribution over those variables is imprecise, none of which provides a compelling basis for the causal interpretation of imprecise Bayes nets. I conclude that there are serious limits to the use of imprecise Bayesian networks to represent causal structure.
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
This article presents the results of a survey designed to test, with economically sophisticated participants, Ellsberg’s ambiguity aversion hypothesis and Smithson’s conflict aversion hypothesis. Based on an original sample of 78 professional actuaries (all members of the French Institute of Actuaries), this article provides empirical evidence that ambiguity (i.e. uncertainty about the probability) affects insurers’ decisions on pricing insurance. It first reveals that premiums are significantly higher for risks when there is ambiguity regarding the probability of the loss. Second, it shows that insurers are sensitive to sources of ambiguity. The participants indeed charged a higher premium when ambiguity came from conflict and disagreement regarding the probability of the loss than when ambiguity came from imprecision (imprecise forecasts about the probability of the loss). This research thus documents the presence of both ambiguity aversion and conflict aversion in the field of insurance, and discusses economic and psychological rationales for the observed behaviours.
A number of Bayesians claim that, if one has no evidence relevant to a proposition P, then one's credence in P should be spread over the interval [0, 1]. Against this, I argue: first, that it is inconsistent with plausible claims about comparative levels of confidence; second, that it precludes inductive learning in certain cases. Two motivations for the view are considered and rejected. A discussion of alternatives leads to the conjecture that there is an in-principle limitation on formal representations of belief: they cannot be both fully accurate and maximally specific.
Can we extend accuracy-based epistemic utility theory to imprecise credences? There's no obvious way of proceeding: some stipulations will be necessary for either (i) the notion of accuracy or (ii) the epistemic decision rule. With some prima facie plausible stipulations, imprecise credences are always required. With others, they’re always impermissible. Care is needed to reach the familiar evidential view of imprecise credence: that whether precise or imprecise credences are required depends on the character of one's evidence. I propose an epistemic utility theoretic defense of a common view about how evidence places demands on imprecise credence: that your spread of credence should cover the range of chance hypotheses left open by your evidence. I argue that objections to the form of epistemic utility theoretic argument that I use will extend to the standard motivation for epistemically mandatory imprecise credences.
There is currently much discussion about how decision making should proceed when an agent's degrees of belief are imprecise, represented by a set of probability functions. I show that decision rules recently discussed by Sarah Moss, Susanna Rinard and Rohan Sud all suffer from the same defect: they all struggle to rationalize diachronic ambiguity aversion. Since ambiguity aversion is among the motivations for imprecise credence, this suggests that the search for an adequate imprecise decision rule is not yet over.
Much recent philosophical attention has been devoted to the prospects of the Best System Analysis of chance for yielding high-level chances, including statistical mechanical and special science chances. But a foundational worry about the BSA lurks: there don’t appear to be uniquely correct measures of the degree to which a system exhibits theoretical virtues, such as simplicity, strength, and fit. Nor does there appear to be a uniquely correct exchange rate at which the theoretical virtues trade off against one another in the determination of an overall best system. I argue that there’s no robustly best system for our world – no system that comes out best under every reasonable measure of the theoretical virtues and exchange rate between them – but rather a set of ‘tied-for-best’ systems: a set of very good systems, none of which is robustly best. Among the tied-for-best systems are systems that entail differing high-level probabilities. I argue that the advocate of the BSA should conclude that the high-level chances for our world are imprecise.
On an attractive, naturalistically respectable theory of intentionality, mental contents are a form of measurement system for representing behavioral and psychological dispositions. This chapter argues that a consequence of this view is that the content/attitude distinction is measurement system relative. As a result, there is substantial arbitrariness in the content/attitude distinction. Whether some measurement of mental states counts as characterizing the content of mental states or the attitude is not a question of empirical discovery but of theoretical utility. If correct, this observation has ramifications in the theory of rationality. Some epistemologists and decision theorists have argued that imprecise credences are rationally impermissible, while others have argued that precise credences are rationally impermissible. If the measure theory of mental content is correct, however, then neither imprecise credences nor precise credences can be rationally impermissible.
Uncertainty and vagueness/imprecision are not the same: one can be certain about events described using vague predicates and about imprecisely specified events, just as one can be uncertain about precisely specified events. Exactly because of this, a question arises about how one ought to assign probabilities to imprecisely specified events in the case when no possible available evidence will eradicate the imprecision (because, say, of the limits of accuracy of a measuring device). Modelling imprecision by rough sets over an approximation space presents an especially tractable case to help get one’s bearings. Two solutions present themselves: the first takes as upper and lower probabilities of the event X the (exact) probabilities assigned to X’s upper and lower rough-set approximations; the second, motivated both by formal considerations and by a simple betting argument, is to treat X’s rough-set approximation as a conditional event and assign to it a point-valued (conditional) probability.
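The first of the two solutions can be sketched directly (a minimal illustration; the example space, partition, and measure are invented):

```python
def rough_approximations(target, partition):
    """Lower and upper rough-set approximations of a target set relative
    to a partition of the space: the union of blocks wholly inside the
    target, and the union of blocks that overlap it."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper

def lower_upper_probability(target, partition, prob):
    """First solution from the abstract: take the exact probabilities of
    the lower and upper approximations as the event's lower and upper
    probabilities."""
    lo, up = rough_approximations(target, partition)
    return sum(prob[s] for s in lo), sum(prob[s] for s in up)

# Invented example: a four-point space observed through a coarse
# partition; the target event cuts across one of the blocks.
partition = [{"a", "b"}, {"c"}, {"d"}]
prob = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4}
bounds = lower_upper_probability({"b", "c"}, partition, prob)
print(bounds)
```

The gap between the two numbers reflects exactly the block that the imprecisely specified event only partially overlaps.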
According to the Imprecise Credence Framework (ICF), a rational believer's doxastic state should be modelled by a set of probability functions rather than a single probability function, namely, the set of probability functions allowed by the evidence (Joyce). Roger White has recently given an arresting argument against the ICF, which has garnered a number of responses. In this article, I attempt to cast doubt on his argument. First, I point out that it's not an argument against the ICF per se, but an argument for the Principle of Indifference. Second, I present an argument that's analogous to White's. I argue that if White's premises are true, the premises of this argument are too. But the premises of my argument entail something obviously false. Therefore, White's premises must not all be true.
Does the strength of a particular belief depend upon the significance we attach to it? Do we move from one context to another, remaining in the same doxastic state concerning p yet holding a stronger belief that p in one context than in the other? For that to be so, a doxastic state must have a certain sort of context-sensitive complexity. So the question is about the nature of belief states, as we understand them, or as we think a theory should model them. I explore the idea and how it relates to work on imprecise probabilities and second-order confidence.
We study probabilistically informative (weak) versions of transitivity by using suitable definitions of defaults and negated defaults in the setting of coherence and imprecise probabilities. We represent p-consistent sequences of defaults and/or negated defaults by g-coherent imprecise probability assessments on the respective sequences of conditional events. Moreover, we prove the coherent probability propagation rules for Weak Transitivity and the validity of selected inference patterns by proving p-entailment of the associated knowledge bases. Finally, we apply our results to study selected probabilistic versions of classical categorical syllogisms and construct a new version of the square of opposition in terms of defaults and negated defaults.
The purpose of this paper is to show that if one adopts conditional probabilities as the primitive concept of probability, one must deal with the fact that even in very ordinary circumstances at least some probability values may be imprecise, and that some probability questions may fail to have numerically precise answers.
This article provides an experimental analysis of attitudes toward imprecise and variable information. Imprecise information is provided in the form of a set of possible probability values, such that it is virtually impossible for the subjects to guess or estimate which one in the set is true or more likely to be true. We investigate how geometric features of such information pieces affect choices. We find that the subjects care about more features than the pairs of best-case and worst-case, which is counter-evidence to the well-known models, maximin and α-maximin. We find that the presence of nonextreme points in the set affects choice, which suggests that attitude toward imprecision is ‘nonlinear.’ We also obtain an observation, though not significant, that information pieces have a complementarity that may not be explained by the Bayesian view.
Many have claimed that unspecific evidence sometimes demands unsharp, indeterminate, imprecise, vague, or interval-valued probabilities. Against this, a variant of the diachronic Dutch Book argument shows that perfectly rational agents always have perfectly sharp probabilities.