Newcomb’s Problem and Two Principles of Choice

Chapter in: Essays in Honor of Carl G. Hempel

Part of the book series: Synthese Library, volume 24

Abstract

Suppose a being in whose power to predict your choices you have enormous confidence. (One might tell a science-fiction story about a being from another planet, with an advanced technology and science, who you know to be friendly, etc.) You know that this being has often correctly predicted your choices in the past (and has never, so far as you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices of other people, many of whom are similar to you, in the particular situation to be described below. One might tell a longer story, but all this leads you to believe that almost certainly this being’s prediction about your choice in the situation to be discussed will be correct.
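
The tension the chapter goes on to develop between the dominance principle and the expected utility principle can be previewed numerically. The sketch below is illustrative, not from the text: it assumes the standard payoffs associated with the problem ($1,000 in the first box; $1,000,000 in the second box just in case the being predicted you would take only the second box) and an assumed predictor accuracy of 0.99.

```python
# A minimal sketch of the two principles in tension, under the assumed
# standard payoffs and an assumed predictor accuracy of 0.99.

ACCURACY = 0.99            # prob(the being predicted correctly | your action)
SMALL, BIG = 1_000, 1_000_000

def expected_utility(one_box: bool) -> float:
    """Expected dollars, using probabilities conditional on the action."""
    if one_box:
        # The money is in the second box iff the being predicted one-boxing.
        return ACCURACY * BIG
    # Two-boxing always yields SMALL, plus BIG when the being mispredicted.
    return ACCURACY * SMALL + (1 - ACCURACY) * (SMALL + BIG)

print(expected_utility(one_box=True))    # 990000.0 -> favors taking one box
print(expected_utility(one_box=False))   # 11000.0

# The dominance principle instead holds the state fixed (money in or not)
# and notes that taking both boxes yields $1,000 more in either state.
```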

Both it and its opposite must involve no mere artificial illusion such as at once vanishes upon detection, but a natural and unavoidable illusion, which even after it has ceased to beguile still continues to delude though not to deceive us, and which though thus capable of being rendered harmless can never be eradicated.

Immanuel Kant, Critique of Pure Reason, A422, B450

It is not clear that I am entitled to present this paper. For the problem of choice which concerns me was constructed by someone else, and I am not satisfied with my attempts to work through the problem. But since I believe that the problem will interest and intrigue Peter Hempel and his many friends, and since its publication may call forth a solution which will enable me to stop returning, periodically, to it, here it is. It was constructed by a physicist, Dr. William Newcomb, of the Livermore Radiation Laboratories in California. I first heard the problem, in 1963, from his friend Professor Martin David Kruskal of the Princeton University Department of Astrophysical Sciences. I have benefitted from discussions, in 1963, with William Newcomb, Martin David Kruskal, and Paul Benacerraf. Since then, on and off, I have discussed the problem with many other friends whose attempts to grapple with it have encouraged me to publish my own. It is a beautiful problem. I wish it were mine.


References

  1. If the being predicts that you will consciously randomize your choice, e.g., flip a coin, or decide to do one of the actions if the next object you happen to see is blue, and otherwise do the other action, then he does not put the $M in the second box.


  2. Try it on your friends or students and see for yourself. Perhaps some psychologists will investigate whether responses to the problem are correlated with some other interesting psychological variable that they know of.


  3. If the questions and problems are handled as I believe they should be, then some of the ensuing discussion would have to be formulated differently. But there is no point to introducing detail extraneous to the central problem of this paper here.


  4. This divergence between the dominance principle and the expected utility principle is pointed out in Robert Nozick, The Normative Theory of Individual Choice, unpublished doctoral dissertation, Princeton University, Princeton, 1963, and in Richard Jeffrey, The Logic of Decision, McGraw-Hill, New York, 1965.


  5. This is shorthand for: action A is done and state S2 obtains, or action B is done and state S1 obtains. The ‘or’ is the exclusive or.


  6. Note that

     S1 = A1 & S3 or A2 & S4
     S2 = A1 & S4 or A2 & S3
     S3 = A1 & S1 or A2 & S2
     S4 = A1 & S2 or A2 & S1

     Similarly, the above identities hold for Newcomb’s example, with which I began, if one lets

     S1 = The money is in the second box.
     S2 = The money is not in the second box.
     S3 = The being predicts your choice correctly.
     S4 = The being incorrectly predicts your choice.
     A1 = You take only what is in the second box.
     A2 = You take what is in both boxes.

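The identities in note 6 can be checked mechanically. The sketch below uses an illustrative encoding (not from the text) of the assumed rule that the being puts the money in the second box exactly when it predicts you will take only that box.

```python
# Enumerate the four action/prediction combinations and verify the
# identities of note 6 under the Newcomb reading given there.

for action in ("A1", "A2"):            # A1 = take only box 2, A2 = take both
    for predicted in ("A1", "A2"):     # the being's prediction (assumed model)
        s1 = predicted == "A1"         # money is in the second box
        s2 = not s1                    # money is not in the second box
        s3 = predicted == action       # the being predicts correctly
        s4 = not s3                    # the being predicts incorrectly
        a1 = action == "A1"
        a2 = not a1
        assert s1 == ((a1 and s3) or (a2 and s4))
        assert s2 == ((a1 and s4) or (a2 and s3))
        assert s3 == ((a1 and s1) or (a2 and s2))
        assert s4 == ((a1 and s2) or (a2 and s1))
print("identities hold in all four cases")
```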

  7. State S is not probabilistically independent of actions A and B if prob(S obtains / A is done) ≠ prob(S obtains / B is done).


  8. In Newcomb’s predictor example, assuming that ‘He predicts correctly’ and ‘He predicts incorrectly’ are each probabilistically independent of my actions, then it is not the case that ‘He puts the money in’ and ‘He does not put the money in’ are each probabilistically independent of my actions. Usually it will be the case that if the members of the set of exhaustive and exclusive states are each probabilistically independent of the actions A1 and A2, then it will not be the case that the states equivalent to our contrived states are each probabilistically independent of both A1 and A2. For example, suppose prob(S1/A1) = prob(S1/A2) = prob(S1); prob(S2/A2) = prob(S2/A1) = prob(S2). Let:

     S3 = A1 & S1 or A2 & S2
     S4 = A1 & S2 or A2 & S1

     If prob(S1) ≠ prob(S2), then S3 and S4 are not probabilistically independent of A1 and A2. For prob(S3/A1) = prob(S1/A1) = prob(S1), and prob(S3/A2) = prob(S2/A2) = prob(S2). Therefore if prob(S1) ≠ prob(S2), then prob(S3/A1) ≠ prob(S3/A2). If prob(S1) = prob(S2) = 1/2, then the possibility of describing the states as we have will not matter. For if, for example, A1 can be shifted around so as to dominate A2, then before the shifting it will have a higher expected utility than A2. Generally, if the members of the set of exclusive and exhaustive states are probabilistically independent of both A1 and A2, then the members of the contrived set of states will be probabilistically independent of both A1 and A2 only if the probabilities of the original states which are components of the contrived states are identical. And in this case it will not matter which way one sets up the situation.

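The dependence claimed in note 8 can be made concrete with assumed figures, say prob(S1) = 0.7 and prob(S2) = 0.3, chosen only for illustration:

```python
# Original states independent of the action by stipulation; the contrived
# states S3 = (A1 & S1 or A2 & S2) and S4 = (A1 & S2 or A2 & S1) are not.

p_s1, p_s2 = 0.7, 0.3     # assumed values with prob(S1) != prob(S2)

p_s3_given_a1 = p_s1      # under A1, S3 obtains exactly when S1 does
p_s3_given_a2 = p_s2      # under A2, S3 obtains exactly when S2 does

print(p_s3_given_a1, p_s3_given_a2)   # 0.7 vs 0.3: S3 varies with the action
```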

  9. Note that this procedure seems to work quite well for situations in which the states are not only not probabilistically independent of the actions, but are not logically independent either. Suppose that a person is asked whether he prefers doing A to doing B, where the outcome of A is p if S1 and r if S2, and the outcome of B is q if S2 and r if S1. And suppose that he prefers p to q to r, and that S1 = ‘I do B’, and S2 = ‘I do A’. The person realizes that if he does A, S2 will be the case and the outcome will be r, and he realizes that if he does B, S1 will be the case and the outcome will be r. Since the outcome will be r in any case, he is indifferent between doing A and doing B. So let us suppose he flips a coin in order to decide which to do. But given that the coin is fair, it is now the case that the probability of S1 = 1/2 and the probability of S2 = 1/2. If we mechanically started to compute the expected utility of A, and of B, we would find that A has a higher expected utility than does B. For, mechanically computing the expected utilities, it would turn out that the expected utility of A = 1/2 × u(p) + 1/2 × u(r), and the expected utility of B = 1/2 × u(q) + 1/2 × u(r). If, however, we use the conditional probabilities, then the expected utility of A = prob(S1/A) × u(p) + prob(S2/A) × u(r) = 0 × u(p) + 1 × u(r) = u(r). And the expected utility of B = prob(S2/B) × u(q) + prob(S1/B) × u(r) = 0 × u(q) + 1 × u(r) = u(r). Thus the expected utilities of A and B are equal, as one would wish.

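The two computations contrasted in note 9 can be spelled out with assumed utilities u(p) = 3, u(q) = 2, u(r) = 1, chosen only to respect the stated preference for p over q over r:

```python
u_p, u_q, u_r = 3.0, 2.0, 1.0

# Mechanical computation, using the unconditional coin-flip probabilities:
eu_a_mechanical = 0.5 * u_p + 0.5 * u_r   # 2.0 -- wrongly favors A
eu_b_mechanical = 0.5 * u_q + 0.5 * u_r   # 1.5

# Using probabilities conditional on the action: doing A makes S2 certain
# and doing B makes S1 certain, so each action is sure to yield r.
eu_a_conditional = 0.0 * u_p + 1.0 * u_r  # = u(r)
eu_b_conditional = 0.0 * u_q + 1.0 * u_r  # = u(r)

print(eu_a_mechanical, eu_b_mechanical)     # 2.0 1.5
print(eu_a_conditional, eu_b_conditional)   # 1.0 1.0 -- equal, as one would wish
```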

  10. This position was suggested, with some reservations due to Newcomb’s example, in Robert Nozick, The Normative Theory of Individual Choice, op. cit. It was also suggested in Richard Jeffrey, The Logic of Decision, op. cit.


  11. I should mention, what the reader has no doubt noticed, that the previous example is not fully satisfactory. For it seems that preferring the academic life to the athlete’s life should be as strong evidence for the tendency as is choosing the academic life. And hence P’s choosing the athlete’s life, though he prefers the academic life, on expected utility grounds does not seem to make it likely that he does not have the tendency. What the example seems to require is an inherited tendency to decide to do A which is such that (1) The probability of its presence cannot be estimated on the basis of the person’s preferences, but only on the basis of knowing the genetic make-up of his parents, or knowing his actual decisions; and (2) The theory about how the tendency operates yields the result that it is unlikely that it is present if the person decides not to do A in the example-situation, even though he makes this decision on the basis of the stated expected utility grounds. It is not clear how, for this example, the details are to be coherently worked out.


  12. That is, the Dominance Principle is legitimately applicable to situations in which ~(∃S)(∃A)(∃B) [prob(S obtains / A is done) ≠ prob(S obtains / B is done)].

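The applicability condition of note 12 can be phrased, as a sketch under an illustrative encoding that is not from the text, as a predicate over a table of conditional probabilities:

```python
# Dominance is legitimately applicable only if no state's probability
# varies with the action chosen.

def dominance_applicable(cond_probs: dict) -> bool:
    """cond_probs maps each state S to {action: prob(S obtains | action)}."""
    return all(len(set(by_action.values())) == 1
               for by_action in cond_probs.values())

# A Newcomb-style table: the money-state tracks the action, so the
# condition fails and the Dominance Principle is not legitimately applied.
print(dominance_applicable({"money in":  {"A1": 0.99, "A2": 0.01},
                            "money out": {"A1": 0.01, "A2": 0.99}}))  # False
```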

  13. The other eleven possibilities about the states are:


  14. Unless it is possible that there be causality or influence backwards in time. I shall not here consider this possibility, though it may be that only on its basis can one defend, for some choice situations, the refusal to use the dominance principle. I try to explain later why, for some situations, even if one grants that there is no influence back in time, one may not escape the feeling that, somehow, there is.


  15. Cf. R. Duncan Luce and Howard Raiffa, Games and Decisions, John Wiley & Sons, New York, 1957, pp. 94–102.


  16. Almost certainty1 > almost certainty2, since almost certainty2 is some function of the probability that brother I has the dominant action gene given that he performs the dominant action (= almost certainty1), and of the probability that brother II does the dominant action given that he has the dominant action gene.

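Note 16 leaves the function unspecified; one simple candidate, used here purely for illustration with assumed numbers, is the product of the two probabilities:

```python
# Assumed figures: how much confidence survives the two-step inference
# from brother I's action to brother II's action.

almost_certainty_1 = 0.9            # prob(gene | brother I does the action)
p_action_given_gene = 0.95          # prob(brother II does the action | gene)

almost_certainty_2 = almost_certainty_1 * p_action_given_gene
print(almost_certainty_2)           # 0.855 < 0.9, so certainty1 > certainty2
```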

  17. In choosing the headings for the rows, I have ignored more complicated possibilities, which must be investigated for a fuller theory, e.g., some actions influence which state obtains and others do not.


  18. I here consider only the case of two actions. Obvious and messy problems for the kind of policy about to be proposed are raised by the situation in which more than two actions are available (e.g., under what conditions do pairwise comparisons lead to a linear order), whose consideration is best postponed for another occasion.


  19. See R. Duncan Luce and Howard Raiffa, op. cit., pp. 275–298 and the references therein; Daniel Ellsberg, ‘Risk, Ambiguity, and the Savage Axioms’, Quarterly Journal of Economics 75 (1961), 643–669; and the articles by his fellow symposiasts Howard Raiffa and William Fellner.


  20. If the distinctions I have drawn are correct, then some of the existing literature is in need of revision. Many of the writers might be willing to just draw the distinctions we have adumbrated. But for the specific theories offered by some personal probability theorists, it is not clear how this is to be done. For example, L. J. Savage in The Foundations of Statistics, John Wiley & Sons, New York, 1954, recommends unrestricted use of dominance principles (his postulate P2), which would not do in case (I). And Savage seems explicitly to wish to deny himself the means of distinguishing case (I) from the others. (For further discussion, some of which must be revised in the light of this paper, of Savage’s important and ingenious work, see Robert Nozick, op. cit., Chapter V.) And Richard Jeffrey, The Logic of Decision, op. cit., recommends universal use of maximizing expected utility relative to the conditional probabilities of the states given the actions (see footnote 10 above). This will not do, I have argued, in cases (III) and (IV). But Jeffrey also sees it as a special virtue of this theory that it does not utilize certain notions, and these notions look like they might well be required to draw the distinctions between the different kinds of cases. While on the subject of how to distinguish the cases, let me (be the first to) say that I have used without explanation, and in this paper often interchangeably, the notions of influencing, affecting, etc. I have felt free to use them without paying them much attention because even such unreflective use serves to open a whole area of concern. A detailed consideration of the different possible cases with many actions, some influencing, and in different degrees, some not influencing, combined with an attempt to state detailed principles using precise ‘influence’ notions, undoubtedly would bring forth many intricate and difficult problems. These would show, I think, that my quick general statements about influence and what distinguishes the cases are not, strictly speaking, correct. But going into these details would necessitate going into these details. So I will not.


  21. Though perhaps it explains why I momentarily felt I had succeeded too well in constructing the vaccine case, and that perhaps one should perform the non-dominant action there.


  22. But it also seems relevant that in Newcomb’s example not only is the action referred to in the explanation of which state obtains (though in a nonextensional belief context), but also there is another explanatory tie between the action and the state: namely, the state’s obtaining and your actually performing the action are both partly explained in terms of some third thing (your being in a certain initial state earlier). A fuller investigation would have to pursue yet more complicated examples which incorporate this.




Editor information

Nicholas Rescher


Copyright information

© 1969 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Nozick, R. (1969). Newcomb’s Problem and Two Principles of Choice. In: Rescher, N. (eds) Essays in Honor of Carl G. Hempel. Synthese Library, vol 24. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-1466-2_7


  • DOI: https://doi.org/10.1007/978-94-017-1466-2_7

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-8332-6

  • Online ISBN: 978-94-017-1466-2

