
Arntzenius on ‘Why ain’cha rich?’

  • Original Article, Erkenntnis

Abstract

The best-known argument for Evidential Decision Theory (EDT) is the ‘Why ain’cha rich?’ challenge to its rival, Causal Decision Theory (CDT). The basis for this challenge is that in Newcomb-like situations, acts that conform to EDT may be known in advance to yield a better return than acts that conform to CDT. Frank Arntzenius has recently proposed an ingenious counterargument, based on an example in which, he claims, it is predictable in advance that acts that conform to EDT will do less well than acts that conform to CDT. We raise two objections to Arntzenius’s example. We argue, first, that the example is subtly incoherent, in a way that undermines its effectiveness against EDT; and, second, that it relies on calculating the average return over an inappropriate population of acts.

Notes

  1. On this standard version (Nozick 1970) you have the choice between (1) taking just an opaque box and (2) taking the opaque box plus a transparent box containing $1,000. You get to keep the contents of whichever box or boxes you take. Yesterday a very powerful predictor of human actions (who does not however ‘see’ the future in any way that involves backwards causation) put $1M into the opaque box if and only if it predicted that you would, now, take only the opaque box. Should you (1) ‘one-box’ or (2) ‘two-box’?
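
     The payoff structure in this note can be made concrete with a small calculation. The following sketch is our own illustration, not from the paper; the predictor’s accuracy is an assumed parameter. It computes the evidential expected return of each act:

```python
# Evidential expected returns in the standard Newcomb problem (Nozick 1970).
# Assumed payoffs: $1M in the opaque box iff one-boxing was predicted;
# the transparent box always holds $1,000. `accuracy` is the assumed
# probability that the prediction matches the agent's actual choice.

def expected_return(act: str, accuracy: float = 0.99) -> float:
    """Evidential expected dollar return of 'one-box' or 'two-box'."""
    if act == "one-box":
        # With probability `accuracy`, one-boxing was predicted,
        # so the opaque box contains $1M.
        return accuracy * 1_000_000
    else:
        # With probability 1 - accuracy, one-boxing was (wrongly)
        # predicted; the transparent $1,000 is certain either way.
        return (1 - accuracy) * 1_000_000 + 1_000

print(expected_return("one-box"))
print(expected_return("two-box"))
```

     For any accuracy above roughly 0.5005, one-boxing has the greater evidential expected return; this is the sense of ‘foreseeably richer’ that drives Why Ain’cha Rich.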

  2. Here and elsewhere expressions like by and from are not intended to indicate that the steps that they label are in all cases deductively valid. It is enough that they indicate that the step is supposed to be rationally compelling: for instance, it is our view that anyone who accepts (1) and (2) is rationally compelled to accept (3). This rational compulsion may however lapse in the presence of some defeater; indeed in our view that is precisely what happens in the case that Arntzenius describes.

  3. Of course there is a sense in which compatibly with (1) and (2) one-boxing does not foreseeably do better than two-boxing. One-boxing does foreseeably worse than two-boxing in the sense that on any particular encounter with a Newcomb problem, a one-boxer would have done better to have taken both boxes. In this ‘counterfactual’ sense of ‘foreseeably better’, two-boxing is foreseeably the better option.

    So distinguish that counterfactual sense of ‘foreseeably better’ from the sense in which it means: does in fact have the greater expected actual return. In that second sense—the one that we intend—all parties will agree that one-boxing does foreseeably better than two-boxing given that the predictor is foreseeably accurate. What is at issue between Arntzenius and us is not that point, but whether anything follows from that point about the superiority of EDT as a normative theory of rational choice. We say yes: Arntzenius says no. (Thanks to a referee.)

  4. The same point applies to Arntzenius’s other example (2008: 290), which resembles Newcomb’s problem, except that both boxes are transparent, and the predictor has placed $10 in the left-hand box iff he predicted that the agent would not take the right-hand box, which contains $1. Evidential and Causal Decision theories both advise taking the contents of both boxes. Arntzenius claims that agents who heed this advice will foreseeably make less money than those who—insanely—take only the box containing $10.

    Our complaint about the Yankees case transposes to this case as follows. If the agent knows that she is going to be able to choose what boxes she takes then she knows in advance that she can so contrive her choices as to make the predictor’s accuracy arbitrarily close to zero. (She can do this by taking both boxes on any occasion if and only if the predictor has on that occasion left $10 in the left-hand box.) But if she knows in advance that that is an option for her, then she cannot assume in advance that the predictor is going to be accurate; so she cannot after all foresee that the strategy endorsed by CDT (and by EDT) will be relatively unprofitable.

    This case also illustrates especially clearly why the incoherence that it shares with the Yankees example does not arise in the standard Newcomb case. In the standard Newcomb case one box is opaque; and the only way to discover its contents is to make the very decision whose return depends upon them. So there is no way of knowing in advance what on any occasion of choice you have been predicted to choose. Nor therefore is there any identifiable strategy for systematically falsifying those predictions.
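
     The falsification strategy described in this note can be checked by simulation. The sketch below is our own; an arbitrary coin-flip rule stands in for the predictor, since the point holds whatever rule it uses. The agent takes both boxes exactly when she sees $10 in the left-hand box, and otherwise leaves the right-hand box untaken, so every prediction comes out false:

```python
import random

# Transparent-boxes case (Arntzenius 2008: 290): the predictor puts $10 in
# the left box iff it predicts the agent will NOT take the right box (which
# always holds $1). Agent's strategy: take both boxes iff the $10 is
# visible; otherwise take only the (empty) left box.

def predictor_accuracy(trials: int = 10_000) -> float:
    correct = 0
    for _ in range(trials):
        predicts_skip_right = random.random() < 0.5  # any prediction rule
        left_has_10 = predicts_skip_right            # placement rule
        takes_right = left_has_10                    # falsifying strategy
        # The prediction was that the agent would NOT take the right box:
        if predicts_skip_right == (not takes_right):
            correct += 1
    return correct / trials

print(predictor_accuracy())  # 0.0 on every run: no prediction survives
```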

  5. Here we slide over an important distinction within the class of Newcomb scenarios. In some such cases it is either stipulated or allowed that prior to choosing the agent is directly aware of a ‘tickle’—an inclination to choose in one direction or the other—whose presence screens off his act from the earlier prediction of it and so also from the contents of the opaque box (Eells 1982, chaps. 6–7).

    In these ‘tickle’ cases it is of course false that the agent has no evidence that relevantly distinguishes him from anyone else facing the problem, so in tickle cases Why Ain’cha Rich does not support one-boxing. But then neither does EDT support one-boxing in tickle cases: on the contrary, the presence of a screening-off inclination in either direction makes the agent’s act evidentially irrelevant to the contents of the opaque box and hence also entails the unique EDT-rationality of two-boxing.

    So the defender of EDT should be comfortable with this distinction and also with the consequent qualification of the statement in the text. His position will continue to be that Why Ain’cha Rich supports EDT over CDT because it mandates one-boxing in just those sorts of Newcomb cases where EDT recommends one-boxing and CDT does not. (Thanks to a referee.)
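
     The screening-off point in this note can also be put numerically. In the sketch below (our own; the screened-off probability p is an arbitrary assumption), the tickle fixes a single probability that the opaque box holds $1M, the same whichever act is chosen, so the evidential expected returns differ only by the visible $1,000 and EDT recommends two-boxing:

```python
# If a 'tickle' screens the act off from the prediction, then
# P($1M in opaque box | act) = p for BOTH acts, so EDT's expected
# returns differ exactly by the transparent $1,000.

def edt_value(act: str, p_million: float) -> float:
    bonus = 1_000 if act == "two-box" else 0
    return p_million * 1_000_000 + bonus

p = 0.3  # screened-off probability: identical for either act (assumed)
print(edt_value("two-box", p) - edt_value("one-box", p))  # 1000.0
```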

  6. It is reasonable to wonder whether this diagnosis of the error in Arntzenius’s argument is not sensitive to the way in which we are here applying the principle of ‘total evidence’. Our objection is that more specific information is available to Mary, on any occasion, than is used in Arntzenius’s calculation of the average return to each of her options on that occasion. But how are we supposed to incorporate this information?

    In the present framework the additional information (that ‘the bet is losing’ or that ‘the bet is winning’) is used as a description of the action whose expected utility is thus calculated. But why is that the right way of incorporating the additional evidence? In the simpler context of inductive reasoning—without considering actions as yet—the principle of total evidence would say: Given that the statistical probability of H(x) given E(x) is r, and given that one’s total evidence about the individual a is that E(a) is the case, one’s subjective probability that H(a) is the case ought to be r. So one’s evidence figures as a proposition on which one then conditionalizes.

    Applying the principle in this way yields the result that in any case a bet on the Red Sox is the better bet. For instance: since the statistical probability that x is a bet on a game that the Red Sox win, given that x is a bet on the Red Sox and x is a winning bet, is 1, one’s credence that the Red Sox will win this game given that this bet is a winning bet on the Red Sox should be 1. That yields one of the conditional probabilities figuring in (7); by similar means we arrive at the rest and so conclude that in any case Red Sox is the rational bet. But that is exactly what EDT implies and what we are here proposing: given the information that Mary has on any particular occasion, she is indeed rational on that occasion to bet on the Red Sox, regardless of (15). So our argument about Mary’s case is indeed robust to variations in the exact manner in which you are supposed to apply the principle of total evidence to it.

    A related objection is that conditionalizing on the information that, say, this bet is going to win, does nothing to affect Mary’s confidence that in the long run and taken over all bets, bets on the Yankees will do better than bets on the Red Sox. So even if she learns that she will win her next bet, is she not still entitled to be just as confident in (15) as she was before? And in that case doesn’t Arntzenius’s argument still go through?

    But the point is then not that Mary’s information makes (15) false but that it makes it inappropriate to apply (15) to her present situation. For sure, her next bet belongs to a population of bets of which (15) is true. But the oracle’s prediction also puts it in a narrower population of which (25) is true. And the principle of total evidence tells us that she should be applying the generalization about the narrower population to her present bet rather than the (equally true) generalization about the broader population. Otherwise it would be rational not to visit the doctor, even given rather serious symptoms, on the grounds that in the general population people who visit doctors fall sick more often than those who do not. (Thanks to a referee.)

  7. But couldn’t we make Mary’s early and soon-to-be reversed preference for a bet on the Yankees practically harmful to her? Suppose she knew that we were going to offer her: (1) a choice between betting on the Yankees or on the Red Sox before she learnt whether her next bet was going to be a winner; and then (2) the option to switch bets for a fee, after she had learnt whether her next bet was going to be a winner. EDT seems to commit her to (1) a bet on the Yankees and (2) paying the fee—as long as it is less than $1—and betting instead on the Red Sox. But this is irrational: when offered the choice (1) she could foresee that she would get information that would lead her to prefer a bet on the Red Sox, so the more rational thing to do would be to take the bet on the Red Sox then and save herself the fee.

    But if she is going to be offered (1) and (2) then EDT will not recommend, at the time of (1), that she take the bet on the Yankees. That recommendation relied on the assumption, implicit in (27), that the news value of a win for the Yankees, given that she bets before learning the outcome of her bet, is 1. But if Mary knows that she will change her mind and hence her bet (as she must do for an initial bet on the Yankees to be irrational), then this assumption no longer holds: at the time of (1) the value of a Yankees win given that Mary now bets on the Yankees is rather −1, because she knows that when the Yankees win she’ll be holding a Red Sox ticket. In fact in that situation EDT will prescribe betting early on the Red Sox and saving the fee.

  8. A similar but not quite identical situation arises in Newcomb’s problem itself: the follower of EDT begins with a preference for taking only the opaque box in the knowledge that whatever its contents, he will later think that he would have done better to take both boxes. The difference is that in the Newcomb case it is not the relative news values of one-boxing and two-boxing that foreseeably fluctuate (for once the agent has taken one box, his ex post news value for taking two is undefined); rather it is that the agent can foresee regretting, so to speak counterfactually, what he currently prefers to do. Foreseeable regret is a much discussed phenomenon that has little bearing on our dispute with Arntzenius; what is important is that we distinguish it from the phenomenon of foreseeable preference instability, which is both relevant and relatively little discussed in these contexts.

    On the other hand the fact that EDT violates the principle of dominance in the Newcomb case certainly implies that a modification of that case accurately simulates Mary’s situation. Suppose that before acting the evidentialist agent gets to peek into the opaque box. Then he knows before peeking that (a) he now prefers one-boxing to two-boxing; and that (b) whatever he sees in the opaque box he will after seeing it prefer two-boxing to one-boxing. So this modified Newcomb case is also a case of foreseeable preference instability. (Thanks to a referee.)

References

  • Arntzenius, F. (2008). No regrets, or: Edith Piaf revamps decision theory. Erkenntnis, 68, 277–297.

  • Dummett, M. (1964). Bringing about the past. Philosophical Review, 73, 338–359.

  • Eells, E. (1982). Rational decision and causality. Cambridge: Cambridge University Press.

  • Gibbard, A., & Harper, W. (1981). Counterfactuals and two kinds of expected utility. In W. Harper, R. Stalnaker, & G. Pearce (Eds.), Ifs: Conditionals, belief, decision, chance and time (pp. 153–192). Dordrecht: D. Reidel.

  • Hitchcock, C. (1996). Causal decision theory and decision-theoretic causation. Noûs, 30, 508–526.

  • Joyce, J. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.

  • Joyce, J. (2007). Are Newcomb problems really decisions? Synthese, 156, 537–562.

  • Lewis, D. (1981a). Causal decision theory. In P. Gardenfors & N.-E. Sahlin (Eds.), Decision, probability and utility: Selected readings (pp. 377–405). Cambridge: Cambridge University Press.

  • Lewis, D. (1981b). Why ain’cha rich? Noûs, 15, 377–380.

  • Nozick, R. (1970). Newcomb’s problem and two principles of choice. In N. Rescher (Ed.), Essays in honor of Carl G. Hempel (pp. 114–146). Dordrecht: D. Reidel.

  • Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge: Cambridge University Press.

  • Price, H. (1993). The direction of causation: Ramsey’s ultimate contingency. In D. Hull, M. Forbes, & K. Okruhlik (Eds.), PSA 1992 (Vol. 2, pp. 253–267). East Lansing, Michigan: Philosophy of Science Association.

  • Rabinowicz, W. (2002). Does practical deliberation crowd out self-prediction? Erkenntnis, 57, 91–122.

Acknowledgments

We are grateful to Frank Arntzenius and to two referees for helpful comments on earlier drafts of this paper. AA wishes also to thank the Centre for Time, Department of Philosophy, University of Sydney, NSW 2006, Australia, where he did the research behind this paper, and the Leverhulme Trust, which was funding his leave at the time. HP is grateful to the Australian Research Council and the University of Sydney for research support.

Author information

Correspondence to Arif Ahmed.

Cite this article

Ahmed, A., Price, H. Arntzenius on ‘Why ain’cha rich?’. Erkenn 77, 15–30 (2012). https://doi.org/10.1007/s10670-011-9355-2
