Imagine both that (1) S1 is deliberating at t about whether or not to x at t' and that (2) although S1’s x-ing at t' would not itself have good consequences, good consequences would ensue if both S1 x's at t' and S2 y's at t", where S1 may or may not be identical to S2 and where t < t' ≤ t". In this paper, I consider how consequentialists should treat S2 and the possibility that S2 will y at t". At one end of the spectrum, consequentialists would hold that, in deciding whether or not to x at t', S1 should always treat S2 as a force of nature over which she has no control and, thus, treat the possibility that S2 will y at t" as she would the possibility that a hurricane will take a certain path. On this view, S1 is to predict whether or not S2 will y and act accordingly. At the other end of the spectrum, consequentialists would hold that S1 should always treat S2 as someone available for mutual cooperation and, thus, treat the possibility that S2 will y at t" as something to be relied upon. On this view, S1 is to rely on S2’s cooperation and so play her part in the best cooperative scheme involving the two of them. A third and intermediate position would be to hold that whether S1 should treat S2 as a force of nature or as someone available for mutual cooperation depends on whether S1 can see to it that S2 will y at t" by, say, having the right set of attitudes. I’ll argue for this third position. As we’ll see, an important implication of this view is that consequentialists should be concerned not just with an agent’s voluntary actions but also with their involuntary acquisitions of various mental attitudes, such as beliefs, desires, and intentions.
Indeed, I will argue that consequentialists should hold both that (1) an agent’s most fundamental duty is to have all those attitudes that she has decisive reason to have and only those attitudes that she has sufficient reason to have and that (2) she has a derivative duty to perform an act x if and only if her fulfilling this fundamental duty ensures that she x’s. Thus, I argue (as Donald Regan did before me) that consequentialism should not be exclusively act-oriented – that it should require agents not only to perform certain voluntary actions but also to have certain attitudes. In the process, I develop a new version of consequentialism, which I call attitude-consequentialism. (The latest version of this paper can always be found at: https://dl.dropboxusercontent.com/u/14740340/Consequentialism%20and%20Coordination%20Problems.pdf)
This is Chapter 4 of my Commonsense Consequentialism: Wherein Morality Meets Rationality. In this chapter, I argue that any plausible nonconsequentialist theory can be consequentialized, which is to say that, for any plausible nonconsequentialist theory, we can construct a consequentialist theory that yields the very same set of deontic verdicts that it yields.
This is Chapter 5 of my Commonsense Consequentialism: Wherein Morality Meets Rationality. In this chapter, I argue that those who wish to accommodate typical instances of supererogation and agent-centered options must deny that moral reasons are morally overriding and accept both that the reason that agents have to promote their own self-interest is a non-moral reason and that this reason can, and sometimes does, prevent the moral reason that they have to sacrifice their self-interest so as to do more to promote the interests of others from generating a moral requirement. Furthermore, I argue that, given that an act’s deontic status is a function of both moral and non-moral reasons, the consequentialist must adopt dual-ranking act-consequentialism. I then defend dual-ranking act-consequentialism against a number of objections.
This is Chapter 3 of my Commonsense Consequentialism: Wherein Morality Meets Rationality. In this chapter, I defend the teleological conception of practical reasons, which holds that the reasons there are for and against performing a given act are wholly determined by the reasons there are for and against preferring its outcome to those of its available alternatives, such that, if S has most reason to perform x, all things considered, then, of all the outcomes that S could bring about, S has most reason to desire that Ox (i.e., x’s outcome) obtains, all things considered.
In this paper, I present an argument that poses the following dilemma for moral theorists: either (a) reject at least one of three of our most firmly held moral convictions or (b) reject the view that moral reasons are morally overriding, that is, reject the view that moral reasons override non-moral reasons such that even the weakest moral reason defeats the strongest non-moral reason in determining an act’s moral status (e.g., morally permissible). I then argue that we should opt for the second horn of this dilemma, in part because we should be loath to reject such firmly held moral convictions, but also because doing so allows us to dissolve an apparent paradox regarding supererogation. If I’m right that non-moral reasons are relevant to determining what is and isn’t morally permissible, then it would seem that moral theorists have their work cut out for them. Not only will they need to determine what the fundamental right-making and wrong-making features of actions are (i.e., what moral reasons there are), but they will also need to determine what non-moral reasons there are and which of these are relevant to determining an act’s deontic status. And moral theorists will have to account for how these two very different sorts of reasons—moral and non-moral reasons—“come together” to determine an act’s deontic status. I will not attempt to do this work here, but rather only to argue that the work needs to be done.
The point of having you write a philosophy paper is for you to develop and practice certain important fundamental skills. They include the following: (1) the ability to comprehend, reconstruct, and analyze complex philosophical arguments; (2) the ability to critically evaluate such arguments; (3) the ability to argue persuasively for your own views; and (4) the ability to articulate your thoughts in a clear, concise, and well-organized manner.
We ought to perform our best option—that is, the option that we have most reason, all things considered, to perform. This is perhaps the most fundamental and least controversial of all normative principles concerning action. Yet, it is not, I believe, well understood. For even setting aside questions about what our reasons are and about how best to formulate the principle, there is a question about how we should construe our options. This question is of the utmost importance, for which option will count as being best depends on how broadly or narrowly we are to construe our options. In this paper, I argue that we ought to construe an agent’s options at a time, t, as being those actions (or sets of actions) that are scrupulously securable by her at t.
I argue that we should reject all traditional forms of act-consequentialism if moral rationalism is true. (Moral rationalism, as I define it, holds that if S is morally required to perform x, then S has decisive reason, all things considered, to perform x.) I argue that moral rationalism in conjunction with a certain conception of practical reasons (viz., the teleological conception of reasons) compels us to accept act-consequentialism. I give a presumptive argument in favor of moral rationalism. And I argue that act-consequentialism is best construed as a theory that ranks outcomes, not according to their impersonal value, but according to how much reason each agent has to desire that they obtain.
I argue that rule consequentialism sometimes requires us to perform acts that we lack sufficient reason to perform. And this presents a dilemma for Parfit. Either Parfit should concede that we should reject rule consequentialism (and, hence, Triple Theory, which implies it) despite the putatively strong reasons that he believes we have for accepting the view or he should deny that morality has the importance he attributes to it. For if morality is such that we sometimes have decisive reason to act wrongly, then what we should be concerned with, practically speaking, is not the morality of our actions, but whether our actions are supported by sufficient reasons. We could, then, for all intents and purposes just ignore morality and focus on what we have sufficient reason to do, all things considered. So if my arguments are cogent, they show that Parfit’s Triple Theory is either false or relatively unimportant in that we can, for all intents and purposes, simply ignore its requirements and just do whatever it is that we have sufficient reason to do, all things considered.
We ought to perform our best option—that is, the option that we have most reason, all things considered, to perform. This is perhaps the most fundamental and least controversial of all normative principles concerning action. Yet, it is not, I believe, well understood. For even setting aside questions about what our options are and what our reasons are, there are prior questions concerning how best to formulate the principle. In this paper, I address these questions. One of the more interesting upshots of this inquiry is that the deontic statuses (e.g., obligatory, optional, and impermissible) of individual actions are determined by the deontic statuses of the larger sets of actions of which they are a part. And, as I show, this has a number of interesting implications both for normative theory and for our understanding of practical reasons.
I explain what teleological reasons are, distinguish between direct and indirect teleological reasons, and discuss both whether all practical reasons are teleological and whether all teleological reasons are direct.
Agents often face a choice of what to do. And it seems that, in most of these choice situations, the relevant reasons do not require performing some particular act, but instead permit performing any of numerous act alternatives. This is known as the basic belief. Below, I argue that the best explanation for the basic belief is not that the relevant reasons are incommensurable (Raz) or that their justifying strength exceeds the requiring strength of opposing reasons (Gert), but that they are imperfect reasons—reasons that do not support performing any particular act, but instead support choosing any of the numerous alternatives that would each achieve the same worthy end. In the process, I develop and defend a novel theory of objective rationality, arguing that it is superior to its two most notable rivals.
This is a book on morality, rationality, and the interconnections between the two. In it, I defend a version of consequentialism that both comports with our commonsense moral intuitions and shares with other consequentialist theories the same compelling teleological conception of practical reasons.
It is through our actions that we affect the way the world goes. Whenever we face a choice of what to do, we also face a choice of which of various possible worlds to actualize. Moreover, whenever we act intentionally, we act with the aim of making the world go a certain way. It is only natural, then, to suppose that an agent's reasons for action are a function of her reasons for preferring some of these possible worlds to others, such that what she has most reason to do is to bring about the possible world which, of all those available to her, is the one that she has most reason to want to obtain. This is what is known as the 'teleological conception of practical reasons'. Whether this is the correct conception of practical reasons is important not only in its own right, but also in virtue of its potential implications for what sort of moral theory we should accept. Below, I argue that the teleological conception is indeed the correct conception of practical reasons.
A growing trend of thought has it that any plausible nonconsequentialist theory can be consequentialized, which is to say that it can be given a consequentialist representation. In this essay, I explore both whether this claim is true and what its implications are. I also explain the procedure for consequentializing a nonconsequentialist theory and give an account of the motivation for doing so.
As many of us know, millions of people on this planet are suffering for lack of potable water, basic healthcare, and adequate nutrition. And, as many of us also know, we (the well‐to‐do) could alleviate and/or prevent some of this suffering by making certain sacrifices, e.g., by donating some of our incomes to organizations such as Oxfam and UNICEF. Suppose, then, that we are wondering to what extent each of us is morally obligated to make sacrifices for the sake of helping to alleviate such suffering. Could the answer to this question depend on the existence of beings on some distant planet, call it Zargon, over which we have not had, and will never have, any influence? Suppose that there is nothing that we can do to affect the lives of Zargonians in any way. We can neither harm nor benefit them; we cannot even have the slightest effect on their thoughts or experiences, for their planet is billions of light years away from ours and, consequently, far beyond the reach of our causal powers. We know about them only through the supernatural abilities of an oracle, who we know always tells the truth and who tells us everything about them. But although we know about them, they do not know about us, for we have no way to communicate with them, let alone affect their welfares. Given that we can have no effect on their lives and that they can have no effect on our lives beyond whatever little effect our knowledge of their doings has on us, how could their existence possibly affect how much one of us is required to sacrifice for the sake of alleviating some of the suffering here on Earth? It seems absurd to suppose that it could. Yet this is precisely what rule‐consequentialism implies.
In this paper, I argue that those moral theorists who wish to accommodate agent-centered options and supererogatory acts must accept both that the reason an agent has to promote her own interests is a nonmoral reason and that this nonmoral reason can prevent the moral reason she has to sacrifice those interests for the sake of doing more to promote the interests of others from generating a moral requirement to do so. These theorists must, then, deny that moral reasons morally override nonmoral reasons, such that even the weakest moral reason trumps the strongest nonmoral reason in the determination of an act's moral status (e.g., morally permissible or impermissible). If this is right, then it seems that these theorists have their work cut out for them. It will not be enough for them to provide a criterion of rightness that accommodates agent-centered options and supererogatory acts, for, in doing so, they incur a debt. As I will show, in accommodating agent-centered options, they commit themselves to the view that moral reasons are not morally overriding, and so they owe us an account of how both moral reasons and nonmoral reasons come together to determine an act's moral status.
Dual-ranking act-consequentialism (DRAC) is a rather peculiar version of act-consequentialism. Unlike more traditional forms of act-consequentialism, DRAC doesn’t take the deontic status of an action to be a function of some evaluative ranking of outcomes. Rather, it takes the deontic status of an action to be a function of some non-evaluative ranking that is in turn a function of two auxiliary rankings that are evaluative. I argue that DRAC is promising in that it can accommodate certain features of commonsense morality that no single-ranking version of act-consequentialism can: supererogation, agent-centered options, and the self-other asymmetry. I also defend DRAC against three objections: (1) that its dual-ranking structure is ad hoc, (2) that it denies (putatively implausibly) that it is always permissible to make self-sacrifices that don’t make things worse for others, and (3) that it violates certain axioms of expected utility theory, viz., transitivity and independence.
This paper argues that the standard account of posthumous harm is untenable. The standard account presupposes the desire-fulfillment theory of welfare, but I argue that no plausible version of this theory can allow for the possibility of posthumous harm. I argue that there are at least two problems with the standard account from the perspective of a desire-fulfillment theorist. First, as most desire-fulfillment theorists acknowledge, the theory must be restricted in such a way that only those desires that pertain to one’s own life count in determining one’s welfare. The problem is that no one has yet provided a plausible account of which desires these are such that desires for posthumous prestige and the like are included. Second and more importantly, if the desire-fulfillment theory is going to be at all plausible, it must, I argue, restrict itself not only to those desires that pertain to one’s own life but also to those desires that are future independent, and this would rule out the possibility of posthumous harm. If I’m right, then even the desire-fulfillment theorist should reject the standard account of posthumous harm. We cannot plausibly account for posthumous harm in terms of desire fulfillment (or the lack thereof).
Many philosophers hold that the achievement of one’s goals can contribute to one’s welfare apart from whatever independent contributions that the objects of those goals, or the processes by which they are achieved, make. Call this the Achievement View, and call those who accept it achievementists. In this paper, I argue that achievementists should accept both (a) that one factor that affects how much the achievement of a goal contributes to one’s welfare is the amount that one has invested in that goal and (b) that the amount that one has invested in a goal is a function of how much one has personally sacrificed for its sake, not a function of how much effort one has put into achieving it. So I will, contrary to at least one achievementist (viz., Keller 2004, 36), be arguing against the view that the greater the amount of productive effort that goes into achieving a goal, the more its achievement contributes to one’s welfare. Furthermore, I argue that the reason that the achievement of those goals for which one has personally sacrificed matters more to one’s welfare is that, in general, the redemption of one’s self-sacrifices in itself contributes to one’s welfare. Lastly, I argue that the view that the redemption of one’s self-sacrifices in itself contributes to one’s welfare is plausible independent of whether or not we find the Achievement View plausible. We should accept this view so as to account both for the Shape-of-a-Life Phenomenon and for the rationality of honoring “sunk” costs.
When one assumes, as I will, that death marks the irrevocable end to one’s existence, it is difficult to make sense of the idea that a person could be harmed or benefited by events that take place after her death. How could a posthumous event either enhance or diminish the welfare of the deceased, who no longer exists? Yet we find that many people have a prudential (i.e., self-interested) concern for what’s going to happen after their deaths. People are, for instance, concerned that their reputations not be slandered, that their achievements not be undermined, and that their contributions not be forgotten, not even after their deaths. Of course, many philosophers would insist that such a concern for what’s going to happen after one’s death must be based on, or a remnant of, a false belief in an afterlife. I, however, will argue that even if death marks the unequivocal and permanent end to one’s existence, people have good reason to be prudentially concerned with what’s going to happen after their deaths, for, as I will show, a person’s welfare can indeed be affected by posthumous events.
Consequentialism is an agent-neutral teleological theory, and deontology is an agent-relative non-teleological theory. I argue that a certain hybrid of the two—namely, non-egoistic agent-relative teleological ethics (NATE)—is quite promising. This hybrid takes what is best from both consequentialism and deontology while leaving behind the problems associated with each. Like consequentialism and unlike deontology, NATE can accommodate the compelling idea that it is always permissible to bring about the best available state of affairs. Yet unlike consequentialism and like deontology, NATE accords well with our commonsense moral intuitions.
In this paper, I argue that maximizing act-consequentialism (MAC)—the theory that holds that agents ought always to act so as to produce the best available state of affairs—can accommodate both agent-centered options and supererogatory acts. Thus I will show that MAC can accommodate the view that agents often have the moral option of either pursuing their own personal interests or sacrificing those interests for the sake of the impersonal good. And I will show that MAC can accommodate the idea that certain acts are supererogatory in the sense of not being morally required even though they are what the agent has most moral reason to do. These two theses are surprising in themselves, but even more surprising is how I arrive at them. I argue that anyone generally concerned to accommodate, in some coherent fashion, our pre-theoretical moral intuitions at both the normative and meta-ethical levels will have to give a certain account of agent-centered options and supererogatory acts and that this account is the very one that allows for the maximizing act-consequentialist to accommodate both. So my paper will not only be of interest to those concerned with the tenability of consequentialism, but also to anyone interested in giving a coherent account of our pre-theoretical moral intuitions.
A theory is agent neutral if it gives every agent the same set of aims and agent relative otherwise. Most philosophers take act-consequentialism to be agent-neutral, but I argue that at the heart of consequentialism is the idea that all acts are morally permissible in virtue of their propensity to promote value and that, given this, it is possible to have a theory that is both agent-relative and act-consequentialist. Furthermore, I demonstrate that agent-relative act-consequentialism can avoid the counterintuitive implications associated with utilitarianism while maintaining the compelling idea that it is never wrong to bring about the best outcome.
In this paper, I criticize David McNaughton and Piers Rawling's formalization of the agent-relative/agent-neutral distinction. I argue that their formalization is unable to accommodate an important ethical distinction between two types of conditional obligations. I then suggest a way of revising their formalization so as to fix the problem.
On commonsense morality, there are two types of situations where an agent is not required to maximize the impersonal good. First, there are those situations where the agent is prohibited from doing so--constraints. Second, there are those situations where the agent is permitted to do so but also has the option of doing something else--options. I argue that there are three possible explanations for the absence of a moral requirement to maximize the impersonal good and that the commonsense moralist must appeal to all three in order to account for the vast array of constraints and options we take there to be.
On the Total Principle, the best state of affairs (ceteris paribus) is the one with the greatest net sum of welfare value. Parfit rejects this principle, because he believes that it implies the Repugnant Conclusion, the conclusion that for any large population of people, all with lives well worth living, there will be some much larger population whose existence would be better, even though its members all have lives that are only barely worth living. Recently, however, a number of philosophers have suggested that the Total Principle does not imply the Repugnant Conclusion provided that a certain axiological view (namely, the ‘Discontinuity View’) is correct. Nevertheless, as I point out, there are three different versions of the Repugnant Conclusion, and it appears that the Total Principle will imply two of the three even if the Discontinuity View is correct. I then go on to argue that one of these two remaining versions turns out not to be repugnant after all and that the last remaining version is not, as it turns out, implied by the Total Principle. Thus, my arguments show that the Total Principle has no repugnant implications.
Consequentialism is usually thought to be unable to accommodate many of our commonsense moral intuitions. In particular, it has seemed incompatible with the intuition that agents should not violate someone's rights even in order to prevent numerous others from committing comparable rights violations. Nevertheless, I argue that a certain form of consequentialism can accommodate this intuition: agent-relative consequentialism--the view according to which agents ought always to bring about what is, from their own individual perspective, the best available outcome. Moreover, I argue that the consequentialist's agent-focused account of the impermissibility of such preventive violations is more plausible than the deontologist's victim-focused account. Contrary to Frances Kamm, I argue that agent-relative consequentialism can adequately deal with single-agent cases, cases where an agent would have to commit one rights violation now in order to minimize her commissions of such rights violations over time.