People with the kind of preferences that give rise to the St. Petersburg paradox are problematic---but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
This volume brings together Terence Horgan's essays on paradoxes, both published and new. A common theme unifying these essays is that philosophically interesting paradoxes typically resist either easy solutions or solutions that are formally/mathematically highly technical. Another unifying theme is that such paradoxes often have deep, sometimes disturbing, philosophical morals.
In the two-envelope problem, a reasoner is offered two envelopes, one containing exactly twice the money in the other. After observing the amount in one envelope, it can be traded for the unseen contents of the other. It appears that it should not matter whether the envelope is traded, but recent mathematical analyses have shown that gains could be made if trading was a probabilistic function of amount observed. As a problem with a purely probabilistic solution, it provides a potentially interesting way to test people's consistency with probabilistic models. Three experiments did this by varying the size of both the observed and maximum amounts, and their possible distribution. The results showed that trading decisions were affected by where the observed amount in the opened envelope fell in the distribution, though its size did not always do so. This suggests that participants’ trade decisions could be affected by the perceived probabilities of different observed amounts, which is consistent with the pro..
Using “brute reason” I will show why there can be only one valid interpretation of probability. The valid interpretation turns out to be a further refinement of Popper’s Propensity interpretation of probability. Via some famous probability puzzles and new thought experiments I will show how all other interpretations of probability fail, in particular the Bayesian interpretations, while these puzzles do not present any difficulties for the interpretation proposed here. In addition, the new interpretation casts doubt on some concepts often taken as basic and unproblematic, like rationality, utility and expectation. This in turn has implications for decision theory, economic theory and the philosophy of physics.
The "two-envelopes" problem has stimulated much discussion on probabilistic reasoning, but relatively little experimentation. The problem specifies two identical envelopes, one of which contains twice as much money as the other. You are given one of the envelopes and the option of keeping it or trading for the other envelope. Variables of interest include the possible amounts of money involved, what is known about the process by which the amounts of money were assigned to the envelopes, and whether you are allowed to know how much money is in the envelope in hand before deciding whether to keep or trade. In an earlier study, Butler and Nickerson found that when participants were allowed to know how much was in the envelope in hand, they generally elected to trade if that amount was small relative to the range of possibilities and to keep otherwise. The present experiments showed that this propensity was independent of the amount of money in the envelopes. Participants made decisions with a strong bias for avoiding the risk of losing by trading, particularly when the amount in hand was known and large relative to the range of possible amounts, regardless of the absolute value of the gamble. The results illustrate the dependence of thinking on the context in which it occurs, and demonstrate a tendency to treat quantities that are large or small relative to a particular context in which they are encountered as though they were large or small in a more general sense.
In the asymmetrical variant of the two-envelope paradox, the amount in envelope A is determined first, and then the amount in envelope B is determined to be either twice or half the amount in A by flipping a fair coin. Contra the common belief that B is preferable to A in this case, I show that the proposed arguments for this common belief all fail, and argue that B is not preferable to A if the expected values of the amounts in the envelopes are infinite. Using the examples I deploy in my arguments against the common belief, I also refute certain proposed solutions to the two-envelope paradox and draw some general lessons.
Consider two contrary conditionals about two envelopes, Ali and Baba: (a) If Ali has more money than Baba, the difference between the amounts in them is $5. (b) If Ali has more money than Baba, the difference between the amounts in them is $10. Can these both be true? The answer is a resounding yes on the standard account of conditionals, which identifies indicative conditionals with material conditionals. It is not the same with many other contemporary accounts of conditionals. They yield a qualified negative answer: (a) and (b) are not compatible unless their common antecedent cannot be true. I think this is a wrong answer. The conditionals can both be true while their antecedent states a possibility. This is what I aim to show in this paper. In doing so, I present a solution to Raymond Smullyan’s intriguing version of the two-envelope paradox.
It is commonly believed that when a finite value is received in a game that has an infinite expected value, it is in one’s interest to redo the game. We have argued against this belief, at least in the repeated St Petersburg two-envelope case. We also show a case where repeatedly opting for a higher expected value leads to a worse outcome.
Sutton (2010) claims that on our analysis (2007), the problem in the two-envelope paradox is an error in counterfactual reasoning. In fact, we distinguish two formulations of the paradox, only one of which, on our account, involves an error in conditional reasoning. According to Sutton, it is conditional probabilities rather than subjunctive conditionals that are essential to the problem. We argue, however, that his strategy for assigning utilities on the basis of conditional probabilities leads to absurdity. In addition, we show that a crucial presupposition of Sutton’s argument — namely, that one can know that envelope A contains n simply on the basis of a stipulation — is mistaken.
When David Lewis (1986) told us that possible worlds were a ‘paradise for philosophers’, he neglected to add that they are a minefield for decision theorists. Possibilities — be they nomological, metaphysical, or epistemic possibilities — have little to do with subjective probabilities, and it is these latter that matter most to decision theory. Bernard Katz and Doris Olin (2007) have tried to solve the two-envelope problem by appealing to possible worlds and counterfactual conditionals. In this article, I explain why any such attempt is misguided, and why we, qua decision theorists, must focus on the probable rather than the possible.
In the two-envelope problem, one is offered a choice between two envelopes, one containing twice as much money as the other. After seeing the contents of the chosen envelope, the chooser is offered the opportunity to make an exchange for the other envelope. However, it appears to be advantageous to switch, regardless of what is observed in the chosen envelope. This problem has an extensive literature with connections to probability and decision theory. The literature is roughly divided between those that attempt to explain what is flawed in arguments for the advantage of switching and those that attempt to explain when such arguments can be correct if counterintuitive. We observe that arguments in the literature of the two-envelope problem that the problem is paradoxical are not supported by the probability distributions meant to illustrate the paradoxical nature. To correct this, we present a distribution that does support the usual arguments. Aside from questions about the interpretation of variables, algebraic ambiguity, modal confusions and the like, most of the interesting aspects of the two-envelope problem are assumed to require probability distributions on an infinite space. Our next main contribution is to show that the same counterintuitive arguments can be reflected in finite versions of the problem; thus they do not inherently require reasoning about infinite values. A topological representation of the problem is presented that captures both finite and infinite cases, explicating intuitions underlying the arguments both that there is an advantage to switching and that there is not.
The “exchange paradox”—also referred to in the literature by a variety of other names, notably the “two-envelopes problem”—is notoriously difficult, and experts are not all agreed as to its resolution. Some of the various expressions of the problem are open to more than one interpretation; some are stated in such a way that assumptions are required in order to fill in missing information that is essential to any resolution. In three experiments several versions of the problem were used, in each of which the information given was sufficient to determine an optimal choice strategy when it exists or to justify indifference regarding keeping or trading when such a strategy does not exist. College students who were presented with the various versions of the problem tended to base their choices on simple heuristics and to give little evidence of understanding the probabilistic implications of the differences in the problem statements.
You are presented with a choice between two envelopes. You know one envelope contains twice as much money as the other, but you don't know which contains more. You arbitrarily choose one envelope -- call it Envelope A -- but don't open it. Call the amount of money in that envelope X. Since your choice was arbitrary, the other envelope (Envelope B) is 50% likely to be the envelope with more and 50% likely to be the envelope with less. But, strangely, that very fact might make Envelope B seem attractive: Wouldn't switching to Envelope B give you a 50% chance of doubling your money and a 50% chance of halving it? Since double or nothing is a fair bet, double or half is more than fair. Applying the standard expectation formula, you might calculate the expected value of switching to Envelope B as (.50)(½X) [50% chance it has less] + (.50)(2X) [50% chance it has more] = (1.25)X. So, it seems, you ought to switch to Envelope B: Your expected return -- your return on average, over the long run, if you did this many times -- would seem to be 25% more. But obviously that's absurd: A symmetrical calculation could persuade you to switch back to Envelope A. Hence the paradox.
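The expectation calculation in this abstract is easy to reproduce. The sketch below is a hypothetical illustration (the function name is mine, not the author's): it computes the naive value of switching under the fallacious assumption that the amount X in Envelope A is held fixed while B is given even odds of holding half or double.

```python
def naive_switch_value(x):
    """Naive expected value of Envelope B, wrongly treating the amount x
    in Envelope A as fixed while giving B even odds of x/2 or 2x."""
    return 0.5 * (x / 2) + 0.5 * (2 * x)

# For any x this yields 1.25x, the paradoxical "25% gain" -- and the
# same calculation applied to B would recommend switching back to A.
print(naive_switch_value(100.0))  # 125.0
```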
This paper deals with the two-envelope paradox. Two main formulations of the paradoxical reasoning are distinguished, which differ according to the partition of possibilities employed. We argue that in the first formulation the conditionals required for the utility assignment are problematic; the error is identified as a fallacy of conditional reasoning. We go on to consider the second formulation, where the epistemic status of certain singular propositions becomes relevant; our diagnosis is that the states considered do not exhaust the possibilities. Thus, on our approach to the paradox, the fallacy, in each formulation, is found in the reasoning underlying the relevant utility matrix; in both cases, the paradoxical argument goes astray before one gets to questions of probability or calculations of expected utility.
When decision makers have more to gain than to lose by changing their minds, and that is the only relevant fact, they thereby have a reason to change their minds. While this is sage advice, it is silent on when one stands to gain more than to lose. The two envelope paradox provides a case where the appearance of advantage in changing your mind is resilient despite being a chimera. Setups that are unproblematically modeled by decision tables that are used in the formulation of the two envelope paradox are described, and variations on them are stipulated. The problems posed by the paradoxical modeling are then contrasted with the variations. The paper concludes with a brief explanation of why the paradoxical modeling does not gain support from the fact that one envelope has twice the amount that is in the other.
The term “exchange paradox” refers to a situation in which it appears to be advantageous for each of two holders of an envelope containing some amount of money to always exchange his or her envelope for that of the other individual, which they know contains either half or twice their own amount. We review several versions of the problem and show that resolving the paradox depends on the specifics of the situation, which must be disambiguated, and on the player's beliefs. The latter psychological variables are part and parcel of the resolution. Assuming reasonable subjective distributions, exchanging cannot always be advantageous for both players. We suggest several deep-rooted psychological reasons for the considerable difficulty people demonstrably have in dealing with this problem. Implicit widespread and compelling assumptions—that affect judgement in diverse contexts—obstruct the solution. Analysing this paradox underscores the close connection between psychology and probability theory.
This book is an exploration of philosophical questions about infinity. Graham Oppy examines how the infinite lurks everywhere, both in science and in our ordinary thoughts about the world. He also analyses the many puzzles and paradoxes that follow in the train of the infinite. Even simple notions, such as counting, adding and maximising present serious difficulties. Other topics examined include the nature of space and time, infinities in physical science, infinities in theories of probability and decision, the nature of part/whole relations, mathematical theories of the infinite, and infinite regression and principles of sufficient reason.
After explaining the well-known two-envelope paradox by indicating the fallacy involved, we consider the two-envelope problem of evaluating the factual information provided to us in the form of the value contained by the envelope chosen first. We try to provide a synthesis of contributions from economics, psychology, logic, probability theory (in the form of Bayesian statistics), mathematical statistics (in the form of a decision-theoretic approach) and game theory. We conclude that the two-envelope problem does not allow a satisfactory solution. An interpretation is made for statistical science at large.
There has been much discussion on the two-envelope paradox. Clark and Shackel (2000) have proposed a solution to the paradox, which has been refuted by Meacham and Weisberg (2003). Surprisingly, however, the literature still contains no axiomatic justification for the claim that one should be indifferent between the two envelopes before opening one of them. According to Meacham and Weisberg, "decision theory does not rank swapping against sticking [before opening any envelope]" (p. 686). To fill this gap in the literature, we present a simple axiomatic justification for indifference, avoiding any expectation reasoning, which is often considered problematic in infinite cases. Although the two-envelope paradox assumes an expectation-maximizing agent, we show that analogous paradoxes arise for agents using different decision principles such as maximin and maximax, and that our justification for indifference before opening applies here too.
We pose and resolve several vexing decision theoretic puzzles. Some are variants of existing puzzles, such as 'Trumped' (Arntzenius and McCarthy 1997), 'Rouble trouble' (Arntzenius and Barrett 1999), 'The airtight Dutch book' (McGee 1999), and 'The two envelopes puzzle' (Broome 1995). Others are new. A unified resolution of the puzzles shows that Dutch book arguments have no force in infinite cases. It thereby provides evidence that reasonable utility functions may be unbounded and that reasonable credence functions need not be countably additive. The resolution also shows that when infinitely many decisions are involved, the difference between making the decisions simultaneously and making them sequentially can be the difference between riches and ruin. Finally, the resolution reveals a new way in which the ability to make binding commitments can save perfectly rational agents from sure losses.
The Two-Envelope Paradox is classically presented as a problem in decision theory that turns on the use of probabilities in calculating expected utilities. I formulate a Maximin Version of the paradox, one that is decision-theoretic but omits considerations of probability. I investigate the source of the error in this new argument, and apply the insights thereby gained to the analysis of the classical version.
Four variations on the Two Envelope Paradox are stated and compared. The variations are employed to provide a diagnosis and an explanation of what has gone awry in the paradoxical modeling of the decision problem that the paradox poses. The canonical formulation of the paradox underdescribes the ways in which one envelope can have twice the amount that is in the other. Some ways one envelope can have twice the amount that is in the other make it rational to prefer the envelope that was originally rejected. Some do not, and it is a mistake to treat them alike. The nature of the mistake is diagnosed by the different roles that rigid designators and definite descriptions play in unproblematic and in untoward formulations of decision tables that are employed in setting out the decision problem that gives rise to the paradox. The decision maker's knowledge or ignorance of how one envelope came to have twice the amount that is in the other determines which of the different ways of modeling his decision problem is correct. Under this diagnosis, the paradoxical modeling of the Two Envelope problem is incoherent.
Clark and Shackel have recently argued that previous attempts to resolve the two-envelope paradox fail, and that we must look to symmetries of the relevant expected-value calculations for a solution. Clark and Shackel also argue for a novel solution to the peeking case, a variant of the two-envelope scenario in which you are allowed to look in your envelope before deciding whether or not to swap. Whatever the merits of these solutions, they go beyond accepted decision theory, even contradicting it in the peeking case. Thus if we are to take their solutions seriously, we must understand Clark and Shackel to be proposing a revision of standard decision theory. Understood as such, we will argue, their proposal is both implausible and unnecessary.
I reason: (1) For any x, if I knew that A contained x, then the odds are even that B contains either 2x or x/2, so the expected amount in B would be 5x/4. So (2) for all x, if I knew that A contained x, I would have an expected gain in switching to B. So (3) I should switch to B. But this seems clearly wrong, as my information about A and B is symmetrical.
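The symmetry this abstract appeals to can be checked by simulation. The sketch below is my own illustration (the function name and the bounded prior on the smaller amount are assumptions, not from the paper): when the pair of amounts is fixed before an envelope is chosen, always switching and always sticking earn the same average payoff, contrary to the 5x/4 reasoning.

```python
import random

def simulate(trials=100_000, seed=0):
    """Average payoff of always sticking with Envelope A vs. always
    switching to B, when the amounts are fixed before A is chosen."""
    rng = random.Random(seed)
    stick_total = switch_total = 0
    for _ in range(trials):
        small = rng.randint(1, 100)   # smaller amount; arbitrary prior
        pair = [small, 2 * small]
        rng.shuffle(pair)             # A is equally likely to be either
        a, b = pair
        stick_total += a              # keep Envelope A
        switch_total += b             # trade for Envelope B
    return stick_total / trials, switch_total / trials

stick, switch = simulate()
print(stick, switch)  # the two averages agree to within sampling error
```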
Given a choice between two sealed envelopes, one of which contains twice as much money as the other (and in any case some), you don't know which contains the larger sum and so choose one at random. You are then given the option of taking the other envelope instead. Is it rational to do so? Surely not. But a specious line of reasoning suggests otherwise.
The aim of this paper is to diagnose the so-called two envelopes paradox. Many writers have claimed that there is something genuinely paradoxical in the situation with the two envelopes, and some writers are now developing non-standard theories of expected utility. I claim that there is no paradox for expected utility theory as I understand that theory, and that contrary claims are confused. Expected utility theory is completely unaffected by the two-envelope paradox.
Previous claims to have resolved the two-envelope paradox have been premature. The paradoxical argument has been exposed as manifestly fallacious if there is an upper limit to the amount of money that may be put in an envelope; but the paradoxical cases which can be described if this limitation is removed do not involve mathematical error, nor can they be explained away in terms of the strangeness of infinity. Only by taking account of the partial sums of the infinite series of expected gains can the paradox be resolved.
In the exchange paradox, two players receive envelopes containing different amounts of money. The assignment of the amounts ensures each player has the same probability of receiving each possible amount. Nonetheless, for each specific amount a player may find in his envelope, there is a positive expectation of gain if the player swaps envelopes with the other player, in apparent contradiction with the symmetry of the game. I consider a variant form of the paradox that avoids problems with improper probabilities and I argue that in it these expectations give no grounds for a decision to swap since that decision must be based on a summation of all the expectations. But this sum yields a non‐convergent series that has no meaningful value. The conflicting recommendations – that it is to one or the other player's advantage to swap – arise from different ways of grouping terms in the sum that yield an illusion of convergence. I describe a generalized exchange paradox, explore some of its properties and display another example. (shrink)
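The point that regrouping a non-convergent series can create an illusion of convergence shows up in miniature with Grandi's series 1 − 1 + 1 − 1 + … (a toy example of my own, not the one in the paper): pairing the terms from the first makes the partial sums settle at 0, while setting the first term aside and pairing the rest makes them settle at 1.

```python
def partial_sums(terms):
    """Running partial sums of a sequence of terms."""
    total, sums = 0, []
    for t in terms:
        total += t
        sums.append(total)
    return sums

terms = [(-1) ** n for n in range(10)]  # 1, -1, 1, -1, ...

# Group in pairs from the start: (1-1) + (1-1) + ... seems to sum to 0.
pairs_from_start = [terms[i] + terms[i + 1] for i in range(0, 10, 2)]
# Keep the first term, pair the rest (dropping the last unpaired term):
# 1 + (-1+1) + (-1+1) + ... seems to sum to 1.
pairs_after_first = [terms[0]] + [terms[i] + terms[i + 1] for i in range(1, 9, 2)]

print(partial_sums(pairs_from_start))   # [0, 0, 0, 0, 0]
print(partial_sums(pairs_after_first))  # [1, 1, 1, 1, 1]
```

The series itself has no value; only the groupings do, and they disagree.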
The two envelope paradox can be dissolved by looking closely at the connection between conditional and unconditional expectation and by being careful when summing an infinite series of positive and negative terms. The two envelope paradox is not another St. Petersburg paradox, and one does not need to ban talk of infinite expectation values in order to dissolve it. The article ends by posing a new puzzle to do with infinite expectations.
The two envelopes problem has generated a significant number of publications (I have benefitted from reading many of them, only some of which I cite; see the epilogue for a historical note). Part of my purpose here is to provide a review of previous results (with somewhat simpler demonstrations). In addition, I hope to clear up what I see as some misconceptions concerning the problem. Within a countably additive probability framework, the problem illustrates a breakdown of dominance with respect to infinite partitions in circumstances of infinite expected utility. Within a probability framework that is only finitely additive, there are failures of dominance with respect to infinite partitions in circumstances of bounded utility with finitely many consequences (see the epilogue).