Binding and its consequences

Abstract

In “Bayesianism, Infinite Decisions, and Binding”, Arntzenius et al. (Mind 113:251–283, 2004) present cases in which agents who cannot bind themselves are driven by standard decision theory to choose sequences of actions with disastrous consequences. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.

Notes

  1. Arntzenius et al. (2004), pp. 268–269.

  2. This principle is suggested by the discussion in Arntzenius et al. (2004), but not explicitly stated. The authors have confirmed this understanding of their position in correspondence.

    Strictly speaking, this principle should include a caveat to bracket certain kinds of decision rules, such as those whose prescriptions explicitly depend on whether or not the agent can bind herself. For example, consider a decision rule that tells you to maximize expected utility if you can bind yourself, and to minimize expected utility if you cannot. Even though the counterintuitive results of the latter prescriptions arise only for agents who cannot bind themselves, no one would want to claim that these results are not a mark against the decision rule. (Thanks to Adam Elga for this point.)

    One way to introduce such a caveat is to restrict the scope of the Binding Principle to decision rules of the standard form—rules whose prescriptions are functions of the agent’s current credences, utilities, and the set of available options. This restriction will rule out deviant decision rules of the kind just described, since whether or not one can bind oneself will not supervene on one’s current credences, utilities, and the set of available options.
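
    For concreteness, here is a minimal sketch, in Python, of what a rule "of the standard form" amounts to. This is my own gloss: the finite state space, the dictionary representations, and the textbook version of expected utility maximization used as the example are illustrative assumptions, not the paper's (1).

```python
# A minimal sketch of a decision rule "of the standard form": a function of the
# agent's current credences, utilities, and available options only. The finite
# state space and the dictionary representations are illustrative assumptions.
from typing import Callable, Dict, List, Tuple

Credences = Dict[str, float]              # credence in each state s
Utilities = Dict[Tuple[str, str], float]  # utility of performing act a in state s
Act = str

StandardRule = Callable[[Credences, Utilities, List[Act]], List[Act]]

def maximize_expected_utility(cr: Credences, u: Utilities, acts: List[Act]) -> List[Act]:
    """A textbook standard-form rule (not necessarily the paper's (1)): an act is
    permissible iff it maximizes sum_s cr(s) * u(a, s). Whether the agent can bind
    herself is not among the inputs, so rules of this form cannot condition on it."""
    def eu(a: Act) -> float:
        return sum(p * u[(a, s)] for s, p in cr.items())
    best = max(eu(a) for a in acts)
    return [a for a in acts if eu(a) == best]
```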

  3. This understanding of utilities sets up decision theory as an account of prudential rationality. Alternatively, one can understand decision theory as an account of instrumental rationality, and take these utilities to be whatever is valuable according to the standard in question. In either case, decision theory, as understood here, is an account of what acts one ought to perform. It is not an account of how one ought to reason when making decisions, or of what preferences one ought to have (though given certain auxiliary assumptions, it may well bear on these issues).

  4. If we want to allow for well-defined infinite utilities, then we can use the extended reals to represent utilities instead of the reals.

  5. This characterization of (1) assumes a countable number of possibilities. To accommodate uncountably many possibilities, we can extend (1) in the usual way.

  6. I borrow this terminology from Collins (1996). For a discussion of some different ways of cashing out causal expected utility, see Joyce (1999).

  7. Of course, one might hold that “willpower”, “resolve”, and the like aren’t the right way to describe what binding agents (in the second sense) are like. (One might hold, for example, that the right way to model an agent with an iron will who decides to follow a given plan is to take them to be choosing an act which leads directly to the outcome that following this plan would lead to. If so, then agents with willpower should be understood as binding agents in the first sense, not the second.) The question of how best to understand this second notion of binding is an interesting one. But since this question is orthogonal to the issues we’ll be concerned with, I’ll put it aside.

  8. This characterization of decision trees builds in information about what the actual outcome of each act will be, regardless of whether that outcome is deterministic or indeterministic. This is merely a matter of convenience; nothing of importance hangs on this choice.

  9. Two qualifications. First, I said that the ‘binding act’ corresponding to a comprehensive strategy needs to lead to the same outcome as the corresponding comprehensive strategy. Given a fine-grained notion of outcomes, we can’t require these outcomes to be exactly the same, since the fact that these outcomes were brought about by different sequences of choices is enough to distinguish them. So we can only require binding acts to lead to outcomes which are the same in the relevant respects as the outcomes of the corresponding strategy. (What counts as “relevant respects”? This is an interesting question. But since it’s orthogonal to the issues I’ll be concerned with, I’ll put it aside.)

    Second, the characterization just given only takes into account binding acts which skip to the end nodes of the tree. But one might want to consider ways of binding oneself which still leave some things open. For example, one might want to consider acts which effectively bind you to make certain choices if a particular situation comes up, but which otherwise leave your choices the same. We can extend the characterization of binding given above to include these possibilities. Let a partial strategy be a partial function from decision problems to acts. Then require the binding closure of a tree to also add to each node a set of ‘partially binding acts’, one act for each partial strategy. These ‘partially binding acts’ will lead to trees which look like the original tree, pruned to eliminate acts which conflict with the prescriptions of the corresponding partial strategy.
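
    To fix ideas, here is a minimal sketch of the binding closure construction just described, in Python. It is my own illustration, not the paper's formalism; the tree representation, the treatment of strategies as Python functions, and the names of the helpers are assumptions.

```python
# A minimal sketch of the "binding closure" idea: at each choice node we add one
# binding act per comprehensive strategy, leading straight to the outcome that
# following the strategy from that node would lead to, and one partially binding
# act per partial strategy, leading to a copy of the subtree pruned of acts that
# conflict with the partial strategy's prescriptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Union

Outcome = str  # stand-in for a (suitably coarse-grained) outcome

@dataclass
class Node:
    acts: Dict[str, Union["Node", Outcome]] = field(default_factory=dict)

# A comprehensive strategy picks an act at every node; a partial strategy may
# return None at nodes where it issues no prescription.
Strategy = Callable[[Node], str]
PartialStrategy = Callable[[Node], Optional[str]]

def follow(node: Node, strategy: Strategy) -> Outcome:
    """Outcome reached by following a comprehensive strategy down to an end node."""
    nxt = node.acts[strategy(node)]
    return nxt if isinstance(nxt, str) else follow(nxt, strategy)

def prune(node: Node, partial: PartialStrategy) -> Node:
    """Copy of the subtree with acts that conflict with the partial strategy removed."""
    prescribed = partial(node)
    kept = {a: c for a, c in node.acts.items() if prescribed is None or a == prescribed}
    return Node({a: (c if isinstance(c, str) else prune(c, partial))
                 for a, c in kept.items()})

def binding_closure(node: Node,
                    strategies: List[Strategy],
                    partials: List[PartialStrategy]) -> None:
    """Add binding and partially binding acts at this node and all descendants."""
    new_acts: Dict[str, Union[Node, Outcome]] = {}
    for i, s in enumerate(strategies):
        new_acts[f"bind:{i}"] = follow(node, s)
    for i, p in enumerate(partials):
        new_acts[f"partially-bind:{i}"] = prune(node, p)
    original_children = [c for c in node.acts.values() if isinstance(c, Node)]
    node.acts.update(new_acts)
    for child in original_children:
        binding_closure(child, strategies, partials)
```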

  10. Note that this only makes it rationally permissible to make choices that would lead to disaster, not rationally obligatory. (To get the result that it’s rationally obligatory, we need to impose some further constraints on Eve’s credences.)

  11. See Nozick (1969).

  12. As usual, we’re assuming that the agent’s utilities are linear in dollars.

  13. Gibbard and Harper (1985), p.153.

  14. See Gibbard and Harper (1985), Lewis (1981) and Joyce (1999).

  15. Gibbard and Harper (1985).

  16. For more discussion of the evidential decision theorist’s stance on this argument, see Lewis (1981).

  17. You’re not allowed to take boxes if you have a disposition which would make correct prediction impossible. For example, you’re not allowed to take the boxes you choose if your decision making dispositions are: “Take the first box if I see there’s nothing in it, and take both boxes if I see there’s a million in it.” (If your dispositions are such that the predictor can effectively choose which decision you make—you’ll take two boxes if you see nothing in the first box, and just the first box if you see the million—we can assume the predictor is stingy, and won’t put anything in the first box.)

  18. Suppose we modify the case so that the contents of the second box are encoded in the agent’s initial credences. Then the binding evidential decision theorist will choose both boxes, and will end up poor, just like the binding causal decision theorist. So doesn’t the “why ain’cha rich” argument against evidential decision theory remain as well?

    No. The force of the “why ain’cha rich” argument against a decision rule X comes from the fact that cognitively ideal X-decision theorists can expect ahead of time that they will generally end up richer if they choose act a instead of act b, and yet once they’re in that decision problem, they’ll choose b anyway. In the variant of Newcomb’s problem with two transparent boxes, for example, the evidential decision theorist expects dedicated one-boxers to end up richer than two-boxers, but she’ll choose both boxes anyway.

    This is not what happens in the case just described. If the contents of both boxes are encoded in her initial credences, it’s never the case that the evidential decision theorist expects one-boxing to make her rich: she always expects the one-boxer to get nothing and the two-boxer to get a thousand dollars. (And if her credences are accurate, she’s right.) So the “why ain’cha rich” argument doesn’t apply.

    One might try instead to set up an objective version of the “why ain’cha rich” argument against the binding evidential decision theorist using this case. One might stipulate that the predictions in question are made using a chance process that has a 99.9% chance of success, and point out that the expected gain of the binding evidential decision theorist, calculated using the objective chances (her “expected-chance” gain), is lower than that of a dedicated one-boxer.
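
    For illustration, using the amounts mentioned above (a million dollars in the first box, a thousand in the second) and treating the 99.9% figure as the chance of a correct prediction, the expected-chance gains come out roughly as follows (the exact payoff assignment is my gloss on the case, not the paper's own statement):

    $$ 0.999 \times \$1{,}000{,}000 + 0.001 \times \$0 = \$999{,}000 \quad \text{(dedicated one-boxer)} $$

    $$ 0.999 \times \$1{,}000 + 0.001 \times \$1{,}001{,}000 = \$2{,}000 \quad \text{(two-boxer)} $$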

    But, again, this argument won’t work. If the binding evidential decision theorist doesn’t know what the chances are, then this argument is merely taking advantage of her ignorance. If the binding evidential decision theorist does know what the chances are, then the initial credences she’s been stipulated to have will violate something like the Principal Principle: her credences won’t line up with what she thinks the chances are. And it’s no surprise that an agent whose credences don’t line up with the chances can be expected, in this expected-chance sense, to do poorly.

  19. You’re not allowed to bet if you have a disposition which would otherwise make the set-up of the case impossible. For example, you’re not allowed to bet if your betting dispositions are: “Bet on the Red Sox if I’m told I’ll win my bet, and bet on the Yankees if I’m told I’ll lose my bet.” Since the predictor can’t consistently tell you that you’ll win or lose your bet, these dispositions make the set-up of the case impossible.

  20. It’s worth clearing up a potential confusion regarding the role of the Binding Principle. The Binding Principle is being applied here to evaluate whether various prima facie counterintuitive results of evidential and causal decision theory—that in certain cases agents who follow their prescriptions will end up poor, even though these agents correctly expect subjects who act in a different manner to end up rich—should be taken as marks against these theories. The Binding Principle is not being applied to the “why ain’cha rich” argument itself, in order to (say) evaluate the merits of this argument. Such an application wouldn’t make sense. The Binding Principle only applies when we’re evaluating consequences or features of a particular decision rule, and it only makes claims about whether these consequences or features should bear on our evaluation of that rule. It doesn’t apply to arguments or considerations independently of a given decision rule, and it doesn’t make claims about their general merits.

    (Similar remarks apply to the discussions of self-recommendation and decision instability that follow. Thanks to Ted Sider for pointing out this potential confusion.)

  21. Skyrms (1982), p. 707.

  22. These are sometimes called cases of “pure” decision instability, with “impure” cases being ones in which the above condition only holds for some of the available acts (cf. Richter 1986).

  23. Assuming that you update by conditionalization, and that the evidence you get from performing an act is just that you’ve performed the act.
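
    For concreteness, a minimal sketch of the conditionalization rule assumed here (representing the evidence, namely that you’ve performed the act, as a set of worlds is an illustrative choice of mine, not the paper's):

```python
def conditionalize(cr: dict, evidence: set) -> dict:
    """Bayesian conditionalization: zero out credence in worlds outside the
    evidence (here, the worlds where the act in question has been performed)
    and renormalize what remains."""
    total = sum(p for w, p in cr.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0) for w, p in cr.items()}
```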

  24. Gibbard and Harper (1985), pp. 154–155.

  25. Though there are variants of canonical evidential decision theory, such as the “ratificationism” of Jeffrey (1983), which are also subject to decision instability. Our conclusions regarding decision instability and causal decision theory apply mutatis mutandis to these variants.

  26. We might adopt primitive conditional probabilities to get around this obstacle. But it still seems unlikely that decision instability will arise for evidential decision theory, for reasons given by Sobel (1983), Eells (1985), and Weirich (1985).

  27. For example, Sobel (1983), Eells (1985), and Weirich (1985).

  28. Gibbard and Harper (1985), p. 156.

  29. Egan (2007) presents some other cases in which decision instability arises, and argues that in these cases causal decision theory delivers the wrong verdicts. Arntzenius (2008) suggests that the proponent of causal decision theory can reasonably deny that the verdicts in question are counterintuitive. But in any case, Egan’s concern is not with decision instability per se, but with the fact that he thinks causal decision theory's prescriptions are counterintuitive.

    Weirich (1985) and Richter (1986) raise a different kind of worry: they argue that in the kinds of cases in which decision instability arises for causal decision theory, it should generally be rationally permissible to know what you’re going to do before you do it. But one cannot both satisfy causal expected utility-maximization in cases of decision instability and know what you’re going to do before you do it. So if we grant that knowing what you’re going to do before you do it is rationally permissible in cases like the Death in Damascus case, causal decision theory is in trouble.

  30. One might worry about whether his credence should increase. After all, there’s no reason to think he can’t predict how he’s likely to behave ahead of time. And why should his credence that he will end up at Aleppo increase as he takes more steps toward Aleppo if he knows he’s going to change his mind and turn around? Of course, similar reasoning can be applied if we claim that his credence that he’ll end up at Aleppo should remain the same. If his credence in ending up at Aleppo will remain the same as he takes steps toward Aleppo, then we’d expect him to end up at Aleppo, in which case it seems his credence that he’d end up at Aleppo should have been increasing after all.

    In any case, we can side-step the issue by stipulating that, after every step, the man has a chance of involuntarily taking another step in a random direction (and after that random step, a chance of taking yet another step in a random direction, and so on). With this addition, the man’s credence that he’ll end up at Aleppo should increase after he takes a step toward Aleppo, based solely on these chances.

  31. See Harper (1986).

  32. See Arntzenius (2008) for criticisms of this kind.

  33. Note that some of these rules will yield stronger prescriptions in the Satan’s Apple case than others. The cohesive decision theories described here, for example, yield the result that it is permissible to stop taking pieces at some point. So these rules do not require agents to perform acts which will lead to disaster. Rules like those of Bratman (1987), Gauthier (1994) and McClennen (1990) yield the stronger result that it is obligatory to stop taking pieces at some point, at least in situations in which the agent has planned or committed themselves to stopping at that point ahead of time. So (under certain conditions) these rules will forbid agents from performing acts which will lead them to disaster.

  34. I’m using these cohesive decision theories as counterexamples to the claim that every decision rule will lead agents who can’t bind themselves to disaster. But these theories are of independent interest. As such, it’s worth noting a few things about them.

    1. Cohesive decision theories don’t require agents to have willpower, plans or foresight. Nor do they require agents to have committed themselves to future courses of action at some earlier time. In this respect, these cohesive decision theories differ from the proposals offered by Bratman (1987), Gauthier (1994) and McClennen (1990). Like standard decision theory, cohesive decision theories are simply rules which prescribe acts to agents in decision problems. They require no more of agents than standard decision theory does.

    2. Cohesive decision theories do take what you’ve learned into account when prescribing acts. Cohesive decision theories prescribe the acts selected by the comprehensive strategies which maximize cohesive expected utility. And comprehensive strategies are functions from decision problems—ordered triples consisting of the agent’s credences, utilities and the set of available acts—to a subset of the available acts. So even though cohesive decision theories are insensitive to your current credences when evaluating comprehensive strategies, the comprehensive strategies themselves are sensitive to your current credences when recommending acts. As a result, what you’ve learned does end up getting taken into account by cohesive decision theories, once we get down to the level of which acts you ought to perform. (See the sketch at the end of this note.)

    3. Several people have offered the following complaint about cohesive decision theory: “Why should the agent choose acts that seem reasonable according to her initial credences? She should choose acts that seem reasonable according to her current credences, not her initial ones.” Although this is a natural worry, it’s difficult to cash it out in a compelling way.

      One might be asking why an agent should do what’s reasonable according to her initial credences instead of what’s reasonable according to her current credences. But if we’re assuming that what is reasonable is what standard decision theory prescribes—expected utility maximization—then this is question begging. And if we’re assuming that what is reasonable is what the cohesive decision theory in question prescribes, then an agent who satisfies cohesive decision theory is doing what’s reasonable according to her current credences.

      Alternatively, one might be asking why one ought to believe that the cohesive decision theory in question is the right decision rule. But the answer to this question is straightforward: we’re justified in thinking that a version of cohesive decision theory is the right rule to the extent to which it provides the intuitively correct prescriptions. And, as we’ve seen, there are several ways in which cohesive decision theories are arguably more appealing than standard decision theory.

    4. The content of these theories hangs on how we understand the ic function that (2) and (3) employ. One possibility is to take ic at face value, as the agent’s first credence function. This option is uncomfortable, however, since the use of the subject’s first credence function (as opposed to her last one, say) seems arbitrary. A second possibility, and a more attractive one, is to take ic to be something like the agent’s “ur-priors”—the credences the agent ought to have if she had no evidence whatsoever. (Objective Bayesians will hold that all agents have the same ur-prior, while subjective Bayesians will hold that different agents can have different ur-prior functions.) A third and related possibility, suggested by Dennis Whitcomb, is to take ic to be something like the initial credences of an ideal subject in the agent’s situation. This would allow us to think of cohesive decision theories as a kind of “ideal observer theory” of prudential rationality. (If we’re considering (2), we might also take iu to be the initial utilities of this ideal subject. We could call the resulting theory the “WW(baby)JD?”-theory.)

    5. One can think of cohesive decision theory as an attempt to allow for a kind of coordination between one’s actions at different times. One might want to allow for a similar kind of coordination between the actions of different agents. Here’s a natural way to formulate such a rule, given a commitment to something like objective Bayesianism. Let a global comprehensive strategy (GCS) pair every possible agent with a comprehensive strategy. Let oup stand for the objective ur-prior function. (The ‘objective’ part allows us to avoid the awkward task of selecting an agent to get priors from.) Let the “global cohesive expected utility” of a global comprehensive strategy be:

      $$ GCoEU(GCS) = \sum_i cr(oup_{i}) \sum_j oup_i(w_j : GCS) \cdot u(w_j). $$
      (4)

      We can then characterize global cohesive decision theory as the rule which prescribes performing the act picked out for you by the global comprehensive strategy which maximizes global cohesive expected utility.
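
    To make point 2 above concrete, here is a minimal sketch of the two-level structure in Python. It is my own illustration, assuming for simplicity a rule that evaluates strategies by initial credences and current utilities; the concrete representation of decision problems and the helper names are assumptions, not the paper's formalism.

```python
# A minimal sketch of the two-level structure described in point 2: comprehensive
# strategies are ranked using the agent's *initial* credences, but the winning
# strategy is applied to the *current* decision problem, so what the agent has
# learned still helps determine which act gets performed. The representation is
# an illustrative assumption, not the paper's own formalism.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DecisionProblem:
    credences: Dict[str, float]   # the agent's current credences over worlds
    utilities: Dict[str, float]   # the agent's utilities for worlds
    acts: List[str]               # the currently available acts

# A comprehensive strategy maps decision problems to one of the available acts.
Strategy = Callable[[DecisionProblem], str]

def cohesive_eu(strategy: Strategy,
                ic_given: Callable[[Strategy], Dict[str, float]],
                utilities: Dict[str, float]) -> float:
    """Expected utility of a strategy computed with the agent's *initial*
    credences: ic_given(strategy) is the initial credence distribution over
    worlds on the supposition that the strategy is followed."""
    return sum(p * utilities[w] for w, p in ic_given(strategy).items())

def cohesive_prescription(problem: DecisionProblem,
                          strategies: List[Strategy],
                          ic_given: Callable[[Strategy], Dict[str, float]]) -> str:
    """Rank comprehensive strategies by cohesive expected utility, then let the
    best strategy, which is sensitive to current credences, pick the act."""
    best = max(strategies, key=lambda s: cohesive_eu(s, ic_given, problem.utilities))
    return best(problem)
```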

  35. Though we need to proceed with caution regarding this last claim. The characterization of being self-recommending we’ve been working with presupposes decision rules of the standard form—rules whose prescriptions are functions of the agent’s current credences, utilities, and the available acts (see Sect. 3.3). But (2) is not a rule of this kind (though (3) is). Likewise, rules like those of Bratman (1987), Gauthier (1994) and McClennen (1990) are not of this form.

  36. For example, the evidential versions of these rules will prescribe the one-boxing response to the Gibbard and Harper case discussed in Sect. 3.2.2. And these rules will recommend that the alcoholic discussed in Sect. 3.3 go to the bar, even though she believes she will probably start drinking if she does.

References

  • Arntzenius, F. (2008). No regrets, or: Edith Piaf revamps decision theory. Erkenntnis, 68, 277–297.

  • Arntzenius, F., Elga, A., & Hawthorne, J. (2004). Bayesianism, infinite decisions, and binding. Mind, 113, 251–283.

  • Bratman, M. (1987). Intentions, plans and practical reason. Cambridge, MA: Harvard University Press.

  • Collins, J. (1996). Supposition and choice: Why ‘Causal Decision Theory’ is a misnomer. Presented at the CUNY Graduate Center Philosophy Colloquium.

  • Eells, E. (1985). Weirich on decision instability. Australasian Journal of Philosophy, 63, 473–478.

  • Egan, A. (2007). Some counterexamples to causal decision theory. Philosophical Review, 116, 93–114.

  • Gauthier, D. (1994). Assure and threaten. Ethics, 104, 690–716.

  • Gibbard, A., & Harper, W. (1985). Counterfactuals and two kinds of expected utility. In R. Campbell & L. Sowden (Eds.), Paradoxes of rationality and cooperation: Prisoner’s dilemma and Newcomb’s problem. Vancouver: University of British Columbia Press.

  • Harper, W. (1986). Mixed strategies and ratifiability in causal decision theory. Erkenntnis, 24, 25–36.

  • Jeffrey, R. C. (1983). The logic of decision (2nd ed.). Chicago and London: University of Chicago Press.

  • Joyce, J. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.

  • Lewis, D. (1981). Why ain’cha rich? Noûs, 15, 377–380.

  • McClennen, E. (1990). Rationality and dynamic choice. Cambridge: Cambridge University Press.

  • Nozick, R. (1969). Newcomb’s problem and two principles of choice. In N. Rescher (Ed.), Essays in honor of Carl G. Hempel. Dordrecht: Reidel.

  • Richter, R. (1986). Further comments on decision instability. Australasian Journal of Philosophy, 64, 345–349.

  • Skyrms, B. (1982). Causal decision theory. The Journal of Philosophy, 79, 695–711.

  • Sobel, H. (1983). Expected utilities, and rational actions and choices. Theoria, 49, 159–183.

  • Weirich, P. (1985). Decision instability. Australasian Journal of Philosophy, 63, 465–472.

Acknowledgements

I would like to thank Frank Arntzenius, Philip Bricker, Maya Eddon, Adam Elga, David Etlin, Barry Lam, Ted Sider, Dennis Whitcomb, participants of the Second Formal Epistemology Festival, and participants of the Bellingham Summer Philosophy Conference, for helpful comments and discussion.

Cite this article

Meacham, C.J.G. Binding and its consequences. Philos Stud 149, 49–71 (2010). https://doi.org/10.1007/s11098-010-9539-7
