I am interested in the “rational irrationality hypothesis” about voter behavior. According to this hypothesis, voters regularly vote for policies that are contrary to their interests because the act of voting for them isn’t contrary to their interests. Gathering political information is time-consuming and inconvenient. Doing so is unlikely to lead to positive results since one's vote is unlikely to be decisive. However, we have preferences over our political beliefs. We like to see ourselves as members of certain groups (e.g. “rugged individualists”), and being part of those groups depends on having certain beliefs (e.g. about welfare spending). Even if a decrease in welfare spending would be bad for me, I might still benefit by believing in and, consequently, voting for a decrease, since my vote is unlikely to make a difference but getting to see myself as a rugged individualist will make a noticeable difference to my wellbeing. It is sometimes argued that this hypothesis fails for empirical reasons. I will argue that things are worse: it is conceptually incoherent. I will do so by first showing that it is a rationalizing explanation and then arguing that rationalizing explanations must be reflectively stable from the agent's perspective. The rational irrationality hypothesis is not.
Critics of rational choice theory (RCT) frequently claim that RCT is self-defeating in the sense that agents who abide by RCT’s prescriptions are less successful in satisfying their preferences than they would be if they abided by some normative theory of choice other than RCT. In this paper, I combine insights from philosophy of action, philosophy of mind and the normative foundations of RCT to rebut this often-made criticism. I then explicate the implications of my thesis for the wider philosophical debate concerning the normativity of RCT for both ideal agents who can form and revise their intentions instantly without cognitive costs and real-life agents who have limited control over the formation and the dynamics of their own intentions.
According to the dogmatist, knowing p makes it rational to disregard future evidence against p. The standard response to the dogmatist holds that knowledge is defeasible: acquiring evidence against something you know undermines your knowledge. However, this response leaves a residual puzzle, according to which knowledge makes it rational to intend to disregard future counterevidence. I argue that we can resolve this residual puzzle by turning to an unlikely source: Kavka’s toxin puzzle. One lesson of the toxin puzzle is that it is irrational to intend to do that which you know will be irrational. This yields a simple reply to the dogmatist: it is irrational to intend to disregard future evidence because you can know in advance that it will be irrational to do so.
In this paper I revisit Gregory Kavka’s Toxin Puzzle and propose a novel solution to it. Like some previous accounts, mine postulates a tight link between intentions and reasons but, unlike them, in my account these are motivating rather than normative reasons, i.e. reasons that explain (rather than justify) the intended action. I argue that sensitivity to the absence of possible motivational explanations for the intended action is constitutive of deliberation-based intentions. Since ordinary rational agents display this sensitivity, when placed in the toxin scenario they will believe that there is no motivational explanation for actually drinking the toxin and this is why they can’t form the intention to drink it in the first place. I thus argue that my Motivating-Explanatory Reason Principle correctly explains the toxin puzzle, thereby revealing itself as a genuine metaphysical constraint on intentions. I also explore at length the implications of my account for the nature of intention and rational agency.
I focus on David Gauthier’s intriguing suggestion that actions are not to be evaluated directly but via an evaluation of deliberative procedures. I argue that this suggestion is misleading, since even the most direct evaluation of (intentional) actions involves the evaluation of different ways of deliberating about what to do. Relatedly, a complete picture of what an agent is or might be (intentionally) doing cannot be disentangled from a complete picture of how s/he is or might be deliberating. A more viable contrast concerns whether actions and deliberative procedures are properly evaluated on the whole or, instead, through time.
How must you think about time when you form an intention? Obviously, you must think about the time of action. Must you frame the action in any broader prospect or retrospect? In this essay I argue that you must: you thereby commit yourself to a specific prospect of a future retrospect – a retrospect, indeed, on that very prospect. In forming an intention you project a future from which you will not ask regretfully, referring back to your follow-through on that intention, “What on earth was I thinking?” I argue that this broader attitude expresses the self-accountability necessary for practical commitment.
I show that Kavka's toxin puzzle raises a problem for the “Responsibility Theodicy,” which holds that the reason God typically does not intervene to stop the evil effects of our actions is that such intervention would undermine the possibility of our being significantly responsible for overcoming and averting evil. This prominent theodicy seems to require that God be able to do what the agent in Kavka's toxin story cannot do: stick by a plan to do some action at a future time even though when that time comes, there will be no good reason for performing that action. I assess various approaches to solving this problem. Along the way, I develop an iterated variant of Kavka's toxin case and argue that the case is not adequately handled by standard causal decision theory.
This paper addresses a problem concerning the rational stability of intention. When you form an intention to φ at some future time t, you thereby make it subjectively rational for you to follow through and φ at t, even if—hypothetically—you would abandon the intention were you to redeliberate at t. It is hard to understand how this is possible. Shouldn't the perspective of your acting self be what determines what is then subjectively rational for you? I aim to solve this problem by highlighting a role for narrative in intention. I'll argue that committing yourself to a course of action by intending to pursue it crucially involves the expectation that your acting self will be ‘swept along’ by its participation in a distinctively narrative form of self-understanding. I'll motivate my approach by criticizing Richard Holton's and Michael Bratman's recent treatments of the stability of intention, though my account also borrows from Bratman's work. I'll likewise criticize and borrow from David Velleman's work on narrative and self-intelligibility. When the pieces fall into place, we'll see how intending is akin to telling your future self a kind of story. My thesis is not that you address your acting self but that your acting self figures as a ‘character’ in the ‘story’ that you address to a still later self. Unlike other appeals to narrative in agency, mine will explain how as narrator you address a specifically intrapersonal audience.
An externalist view of intention is developed on broadly Wittgensteinian grounds, and applied to show that the classic Thomist doctrine of double effect, though it has good uses in casuistry, has also been overused because of the internalism about intention that has generally been presupposed by its users. We need a good criterion of what counts as the content of our intentional actions; I argue, again on Wittgensteinian grounds, that the best criterion comes not from foresight, nor from foresight plus some degree of probability, nor from any metaphysics of “closeness”, but simply from our ordinary shared understanding of what counts as doing a given action, and what does not.
The paper will show how one may rationalize one-boxing in Newcomb's problem and drinking the toxin in the Toxin puzzle within the confines of causal decision theory by ascending to so-called reflexive decision models which reflect how actions are caused by decision situations (beliefs, desires, and intentions) represented by ordinary unreflexive decision models.
Practical commitment is Janus-faced, looking outward toward the expectations it creates and inward toward their basis in the agent’s will. This paper criticizes Kantian attempts to link these facets and proposes an alternative. Contra David Velleman, the availability of a conspiratorial perspective (not yours, not your interlocutor’s) is what allows you to understand yourself as making a lying promise – as committing yourself ‘outwardly’ with the deceptive reasoning that Velleman argues cannot provide a basis for self-understanding. Moreover, the intrapersonal availability of such a third perspective is what enables you to commit yourself ‘inwardly.’ Here I offer an alternative to Christine Korsgaard’s account of practical commitment, on which committing yourself requires identifying yourself with a principle. You needn’t identify yourself with a principle, I argue, because the unity at which you aim when you commit yourself is a unity not with your acting self but with a later perspective, where the relation is one of self-intelligibility, not self-justification, and therefore needn’t be mediated by principles. This ‘twice-future’ perspective – neither your present intending nor your (once-)future acting but a third perspective that looks back on that relation – plays the intrapersonal role played in interpersonal commitment by potential co-conspirators. Kantians are therefore right to link your ability to commit yourself with your ability credibly to express that commitment to others. But the linkage generates a strikingly unKantian result. The nature of agency cannot provide an a priori basis for honesty because what enables you to commit yourself is what also enables you to lie.
Gregory Kavka's 'Toxin Puzzle' suggests that I cannot intend to perform a counter-preferential action A even if I have a strong self-interested reason to form this intention. The 'Rationalist Solution,' however, suggests that I can form this intention. For even though it is counter-preferential, A-ing is actually rational given that the intention behind it is rational. Two arguments are offered for this proposition that the rationality of the intention to A transfers to A-ing itself: the 'Self-Promise Argument' and David Gauthier's 'Rational Self-Interest Argument.' But both arguments – and therefore the Rationalist Solution – fail. The Self-Promise Argument fails because my intention to A does not constitute a promise to myself that I am obligated to honor. And Gauthier's Rational Self-Interest Argument fails to rule out the possibility of rational irrationality.
Most contemporary action theorists accept – or at least find plausible – a belief condition on intention and a knowledge condition on intentional action. The belief condition says that I can only intend to ɸ if I believe that I will ɸ or am ɸ-ing, and the knowledge condition says that I am only intentionally ɸ-ing if I know that I am ɸ-ing. The belief condition on intention and the knowledge condition on action go hand in hand. After all, if intending implies belief, and if ɸ-ing intentionally implies intending to ɸ, then in ɸ-ing, I intend to be ɸ-ing, and, by the belief condition, I believe that I am ɸ-ing, and if this belief is justified, and we are not in a Gettier situation, etc., then I will also satisfy the knowledge condition. Moreover, the claim that when intentions properly result in action, the corresponding belief constitutes knowledge is a relatively safe assumption, at least as an assumption about what is generally the case.
Sometimes a series of choices does not serve one's concerns well even though each choice in the series seems perfectly well suited to serving one's concerns. In such cases, one has a dynamic choice problem. Otherwise put, one has a problem related to the fact that one's choices are spread out over time. This survey reviews some of the challenging choice situations and problematic preference structures that can prompt dynamic choice problems. It also reviews some proposed solutions, and explains how some familiar but potentially puzzling phenomena — including, for example, self-destructive addictive behavior and dangerous environmental destruction — have been illuminated by dynamic choice theory.
A variety of thought experiments suggest that, if the standard picture of practical rationality is correct, then practical rationality is sometimes an obstacle to practical success. For some, this in turn suggests that there is something wrong with the standard picture. In particular, it has been argued that we should revise the standard picture so that practical rationality and practical success emerge as more closely connected than the current picture allows. In this paper, I construct a choice situation—which I refer to as the Newxin puzzle—and discuss its implications in relation to the revisionist approach just described. Using the Newxin puzzle, I argue that the approach leads to a more radically revisionist picture of practical rationality than current debate suggests.
An autonomous reason for intending to A would be a reason for so intending that is not, and will not be, a reason for A-ing. Some puzzle cases, such as the one that figures in the toxin puzzle, suggest that there can be such reasons for intending, but these cases have special features that cloud the issue. This paper describes cases that more clearly favour the view that we can have practical reasons of this sort. Several objections to this view are considered and rejected. Finally, it is considered whether the existence of such reasons would conflict with an attractive coherence principle linking the rationality of intending with that of acting as intended. The paper concludes with a qualified affirmation of autonomous reasons for intending.
Why can't deliberation conclude in an intention except by considering whether to perform the intended action? I argue that the answer to this question entails that reasons for intention are determined by reasons for action. Understanding this feature of practical deliberation thus allows us to solve the toxin puzzle.
It is widely held that any justifying reason for making a decision must also be a justifying reason for doing what one thereby decides to do. Desires to win decision prizes, such as the one that figures in Kavka’s toxin puzzle, might be thought to be exceptions to this principle, but the principle has been defended in the face of such examples. Similarly, it has been argued that a command to intend cannot give one a justifying reason to intend as commanded. Here it is argued that ordinary agents in ordinary cases can have justifying reasons for deciding that are not and will not be justifying reasons for doing what, in making those decisions, they come to intend to do. The paper concludes with some brief observations on the functions of decision-making.
This chapter argues that, under certain conditions, forming an intention makes an action rational which would otherwise not have been rational, since intentions (together with beliefs) in and of themselves provide deductive reasons for further intentions and actions, an argument which builds on previous work by R. M. Hare, Michael Bratman and others. It also provides an articulation and defense of the concept of "minimally constrained maximization" as a unified general solution to the well-known paradoxes of rationality, including the paradox of deterrence and the prisoner's dilemma.
I hope to show that, although belief is subject to two quite robust forms of agency, "believing at will" is impossible; one cannot believe in the way one ordinarily acts. Further, the same is true of intention: although intention is subject to two quite robust forms of agency, the features of belief that render believing less than voluntary are present for intention, as well. It turns out, perhaps surprisingly, that you can no more intend at will than believe at will.
I challenge the view that, in cases where time for deliberation is not an issue, instrumental rationality precludes myopic planning. I show where there is room for instrumentally rational myopic planning, and then argue that such planning is possible not only in theory; it is something human beings can and do engage in. The possibility of such planning has, however, been disregarded, and this disregard has skewed related debates concerning instrumental rationality.
Some philosophers worry that it can never be reasonable to act simply on the basis of trust, yet you act on the basis of self-trust whenever you merely follow through on one of your own intentions. It is no more reasonable to follow through on an intention formed by an untrustworthy earlier self of yours than it is to act on the advice of an untrustworthy interlocutor. But reasonable mistrust equally presupposes untrustworthiness in the mistrusted, or evidence thereof. The concept of an intention, I argue, codifies the fact that practical reason rests on a capacity for reasonable trust.
This paper discusses David Gauthier’s attempt to refine the theory underlying constrained maximization so that it ceases to have a certain implication that he regards as objectionable. It argues that the refinement Gauthier introduces may be initially appealing, but actually does his theory more harm than good.
In their attempt to provide a reason to be moral, contractarians such as David Gauthier are concerned with situations allowing a group of agents the chance of mutual benefit, so long as at least some of them are prepared to constrain their maximising behaviour. But what justifies this constraint? Gauthier argues that it could be rational (because maximising) to intend to constrain one's behaviour, and in certain circumstances to act on this intention. The purpose of this paper is to examine the conditions under which it is rational to form, and to act on, intentions. I introduce and examine in detail what Gauthier has to say on these issues, argue that it suffers from various problems, and propose an alternative account which I claim avoids them.
Gregory Kavka’s toxin puzzle has spawned a lively literature about the nature of intention and of rational intention in particular. This paper is largely a critique of a pair of recent responses to the puzzle that focus on the connection between rationally forming an intention to A and rationally A-ing, one by David Gauthier and the other by Edward McClennen. It also critically assesses the two main morals Kavka takes reflection on the puzzle to support, morals about the nature of intention and the consequences of a divergence between “reasons for intending and reasons for acting.”
To show it is sometimes rational to cooperate in the Prisoner's Dilemma, David Gauthier has claimed that if it is rational to form an intention then it is sometimes rational to act on it. However, the Paradox of Deterrence and the Toxin Puzzle seem to put this general type of claim into doubt. For even if it is rational to form a deterrent intention, it is not rational to act on it (if it is not successful); and even if it is rational to form an intention to drink a toxin, it is not rational to act on it (come the time for drinking). This article employs an extended version of Michael Bratman's theory of intention to show how to argue systematically that it can be rational to act on rationally formed cooperative intentions, while not being committed to the rationality of apocalyptic retaliation, or pointless toxin drinking.
In garden-variety instances of intentional action, according to a popular account, agents intend to perform actions of particular kinds, their intentions are based on reasons so to act, and the intentions issue in appropriate behaviour. On this account, the reasons that give rise to our intentions are reasons for action. Interesting questions for this view are raised by cases in which an agent seemingly has a reason to intend to do something while having no reason to do it. Can such reasons to intend issue in appropriate intentions? If so, can these intentions issue in corresponding intentional actions -- even though the agent has no reason to perform those actions? If these questions are properly given an affirmative answer, at least one popular thesis in the philosophy of action is false. One could not properly "define an intentional act as one which the agent does for a reason." A popular thesis about the explanation of intentional actions would be false as well -- namely, that explaining an intentional action (qua intentional) requires reference to reasons for action. My point of departure in this paper is a puzzle -- Gregory Kavka's toxin puzzle (1983) -- in which agents seem to have an excellent reason to intend to A while having no reason at all to A. Generally, commentators on the puzzle have set their sights on questions about rational intentions. However, the puzzle raises difficult questions about intending itself and about the nature of intentional action. Showing that this is so is easy. Answering the questions is more challenging.
The paper attempts to rationalize cooperation in the one-shot prisoners' dilemma (PD). It starts by introducing (and preliminarily investigating) a new kind of equilibrium (differing from Aumann's correlated equilibria) according to which the players' actions may be correlated (sect. 2). In PD the Pareto-optimal among these equilibria is joint cooperation. Since these equilibria seem to contradict causal preconceptions, the paper continues with a standard analysis of the causal structure of decision situations (sect. 3). The analysis then rises to a reflexive point of view according to which the agent integrates his own present and future decision situations into the causal picture of his situation (sect. 4). This reflexive structure is first applied to the toxin puzzle and then to Newcomb's problem, showing a way to rationalize drinking the toxin and taking only one box without assuming causal mystery (sect. 5). The latter result is finally extended to a rationalization of cooperation in PD (sect. 6).