To 'consequentialise' is to take a putatively non-consequentialist moral theory and show that it is actually just another form of consequentialism. Some have speculated that every moral theory can be consequentialised. If this were so, then consequentialism would be empty; it would have no substantive content. As I argue here, however, this is not so. Beginning with the core consequentialist commitment to 'maximising the good', I formulate a precise definition of consequentialism and demonstrate that, given this definition, several sorts of moral theory resist consequentialisation. My strategy is to decompose consequentialism into three conditions, which I call 'agent neutrality', 'no moral dilemmas', and 'dominance', and then to exhibit some moral theories which violate each of these.
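A minimal formal sketch of the maximising commitment at issue (the notation is mine, not the paper's): where A is the set of available acts, o(a) the outcome of act a, and g a goodness ranking of outcomes,

\[
\mathrm{Perm}(a) \iff \forall a' \in A:\ g(o(a')) \le g(o(a)).
\]

On their standard glosses, agent neutrality would then require a single ranking g shared by all agents, and 'no moral dilemmas' would require that A always contain at least one permissible act.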
Suppose we believe that a property F is coextensive with moral permissibility. F may be, for example, the property of having the best consequences, if we are Consequentialists, or that of conforming to a universalisable maxim, if we are Kantians, and so on. This may raise the following problem. It is plausible that permissibility is “closed under implication”: any act that is implied by a permissible act must itself be permissible. Yet, in some cases, F might not be closed under implication. If that is so, then F cannot be coextensive with permissibility. Maximalism has been proposed as a solution to this problem. A “maximal” act is one not implied by any other act. Maximalism restricts the claim that F is coextensive with permissibility to maximal acts only. A non-maximal act may be permissible without being F if it is implied by a maximal act that is F. The general aim of this paper is to investigate these issues by considering the formal structure of acts, or the “act-implication” relation. Discussions of Maximalism have tended to assume implicitly that acts have structure of some sort, but there has been little careful attention given to this structure. I aim to show that, by thinking about structure, we can provide a stronger defence of Maximalism.
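The definitions in play can be made explicit as follows (the symbols are mine, for illustration), writing a ⇒ b for "performing a implies performing b":

\[
\begin{aligned}
&\text{Closure:} && \mathrm{Perm}(a) \wedge (a \Rightarrow b) \rightarrow \mathrm{Perm}(b).\\
&\text{Maximality:} && \mathrm{Max}(a) \iff \neg\exists b\,(b \neq a \wedge b \Rightarrow a).\\
&\text{Maximalism:} && \mathrm{Perm}(a) \iff \exists b\,(\mathrm{Max}(b) \wedge F(b) \wedge b \Rightarrow a).
\end{aligned}
\]

So stated, a non-maximal act inherits permissibility from any maximal, F act that implies it, just as the abstract describes.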
Should we allow grave harm to befall one individual so as to prevent minor harms befalling sufficiently many other individuals? This is a question of aggregation. Can many small harms ‘add up’, so that, collectively, they morally outweigh a greater harm? The ‘Close Enough View’ supports a moderate position: aggregation is permissible when, and only when, the conflicting harms are sufficiently similar, or ‘close enough’, to each other. This paper surveys a range of formally precise interpretations of this view, and reveals some of the problems they face. It also proposes a novel interpretation which avoids these problems.
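To give a feel for the kind of precisification at issue (this particular one is mine, not necessarily among those the paper surveys): fix a closeness ratio k in (0, 1], and allow n harms of size s to collectively outweigh one harm of size b only if

\[
s \ge k\,b \quad\text{(the harms are close enough)} \qquad\text{and}\qquad n\,s > b \quad\text{(together they are greater)}.
\]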
The ‘No Ought From Is’ principle (or ‘NOFI’) states that a valid argument cannot have both an ethical conclusion and non-ethical premises. Arthur Prior proposed several well-known counterexamples, including the following: Tea-drinking is common in England; therefore, either tea-drinking is common in England or all New Zealanders ought to be shot. My aim in this paper is to defend NOFI against Prior’s counterexamples. I propose two novel interpretations of NOFI and prove that both are true.
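Prior's counterexample is simply an instance of disjunction introduction, which is valid in classical logic:

\[
P \;\vdash\; P \vee Q.
\]

With P = 'Tea-drinking is common in England' (non-ethical) and Q = 'All New Zealanders ought to be shot' (ethical), the premise is non-ethical while the conclusion, containing an 'ought', is at least apparently ethical; yet the inference is unimpeachably valid.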
In Better Never to Have Been, David Benatar argues that existence is always a harm. His argument, in brief, is that this follows from a theory of personal good which we ought to accept because it best explains several 'asymmetries'. I shall argue here that Benatar's theory suffers from a defect which was already widely known to afflict similar theories, and that the main asymmetry he discusses is better explained in a way which allows that existence is often not a harm.
Prioritarianism is the view that we ought to give priority to benefiting those who are worse off. Sufficientism, on the other hand, is the view that we ought to give priority to benefiting those who are not sufficiently well off. This paper concerns the relative merits of these two views; in particular, it examines an argument advanced by Roger Crisp to the effect that sufficientism is the superior of the two. My aim is to show that Crisp's argument is unsound. While I concede his objections against the particular prioritarian views that he considers, I propose a different version of prioritarianism that is invulnerable to those objections.
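Schematically, and with formulations that are mine rather than Crisp's: a prioritarian ranks outcomes by a sum of concavely transformed well-being levels, whereas a sufficientist transform gives extra weight only below a threshold T:

\[
V_{\text{prio}} = \sum_i f(w_i),\ f \text{ strictly concave}; \qquad
V_{\text{suff}} = \sum_i g(w_i),\ g \text{ concave below } T \text{ and linear above}.
\]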
How wrong is it to deceive a person into having sex with you? The common view seems to be that this depends on the nature of the deception. If it involves something very important, such as your identity, then the wrong done is very serious. But if it involves something more trivial, such as your natural hair colour, then the wrong seems less great. Tom Dougherty rejects this view. He argues that sexual deception is always seriously wrong. In this paper, I present a response to Dougherty’s argument. I propose an analysis of the wrongness in deception according to which acts of deception, in sexual relations and elsewhere, may differ in their degree of wrongness, and some may not be seriously wrong.
Ethical descriptivism is the view that all ethical properties are descriptive properties. Frank Jackson has proposed an argument for this view which begins with the premise that the ethical supervenes on the descriptive: any worlds that differ ethically must also differ descriptively. This paper observes that Jackson's argument has a curious structure, taking a linguistic detour between metaphysical starting and ending points, and raises some worries stemming from this. It then proposes an improved version of the argument, which avoids these worries, and responds to some potential objections to this version of the argument.
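The supervenience premise has a standard formalisation. Where D(w) and E(w) are the complete descriptive and ethical characters of a world w:

\[
\forall w\,\forall w'\ \big(D(w) = D(w') \rightarrow E(w) = E(w')\big),
\]

which, contraposed, says that worlds differing ethically must differ descriptively.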
I compare two kinds of holism about values: G.E. Moore's 'organic unities', and Jonathan Dancy's 'value holism'. I propose a simple formal model for representing evaluations of parts and wholes. I then define two conditions, additivism and invariabilism, which together imply a third, atomism. Since atomism is absurd, we must reject one of the former two conditions. This is where Moore and Dancy part company: whereas Moore rejects additivism, Dancy rejects invariabilism. I argue that Moore's view is more plausible. Invariabilism ought to be retained because (a) it eliminates the needless multiplication of values inherent in variable evaluations, and (b) it preserves a certain necessary connection between values and reasons, which Dancy himself endorses.
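The two conditions can be sketched as follows (the notation is mine). Let v(p | w) be the value that part p contributes within whole w:

\[
\begin{aligned}
&\text{Additivism:} && v(w) = \textstyle\sum_{p} v(p \mid w).\\
&\text{Invariabilism:} && v(p \mid w) = v(p \mid w') \text{ for all wholes } w, w' \text{ containing } p.
\end{aligned}
\]

Jointly these entail atomism: every whole's value is the sum of context-independent values of its parts, which leaves no room for holism of either Moore's or Dancy's kind.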
Philosophical discussions of prioritarianism, the view that we ought to give priority to those who are worse off, have hitherto been almost exclusively focused on cases involving a fixed population. The aim of this paper is to extend the discussion of prioritarianism to variable populations as well. I argue that prioritarianism, in its simplest formulation, is not tenable in this area. However, I also propose several revised formulations that, so I argue, show more promise.
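The simplest formulation is presumably the familiar total form (stated schematically):

\[
V(x) = \sum_{i \in N(x)} f\big(w_i(x)\big), \qquad f \text{ strictly increasing and strictly concave},
\]

where N(x) is the set of people who exist in outcome x. Fixed-population cases hold N(x) constant; variable-population cases are precisely those in which it varies.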
Should harms to different individuals be aggregated? Moderate views answer both yes and no: aggregation is appropriate in some cases but not in others. Such views need to determine a threshold at which aggregation switches from appropriate to inappropriate. Alex Voorhoeve proposes a method for determining this threshold which links other-regarding and self-regarding ethics. This proposal, however, implies a spurious correlation between favoring aggregation and egoism.
The so-called “Levelling Down Objection” is commonly believed to occupy a central role in the debate between egalitarians and prioritarians. Egalitarians think that equality is good in itself, and so they are committed to finding value even in such equality as may only be achieved by “levelling down”–i.e., by merely reducing the better off to the level of the worse off. Although egalitarians might deny that levelling down could ever make for an all-things-considered improvement, they cannot deny that it may make things better in at least one respect. Prioritarians, on the other hand, do deny this; according to them, levelling down cannot make things better in any respect. In this paper I argue that the Levelling Down Objection leans far too heavily on a heretofore unanalysed notion: namely, the notion of “being better in this or that respect.” I propose what I take to be a plausible analysis of that notion, and show that, given the proposed analysis, the prioritarian is no less vulnerable to the Levelling Down Objection than is the egalitarian. I conclude that proponents of the Levelling Down Objection need either to suggest a better analysis or abandon the Levelling Down Objection altogether.
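One simple model of the notion (mine, offered only for orientation): let each respect r supply a value function v_r, so that x is better than y in respect r just in case v_r(x) > v_r(y). Levelling down from x to y then has the profile

\[
v_{\mathrm{eq}}(y) > v_{\mathrm{eq}}(x) \quad\text{while}\quad v_i(y) \le v_i(x) \ \text{for every person } i,
\]

so whoever admits equality as a respect must grant that levelling down makes things better in at least one respect.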
How do reasons combine? How is it that several reasons taken together can have a combined weight which exceeds the weight of any one alone? I propose an answer in mereological terms: reasons combine by composing a further, complex reason of which they are parts. Their combined weight is the weight of their combination. I develop a mereological framework, and use this to investigate some structural views about reasons, the main two being "Atomism" and "Holism". Atomism is the view that atomic reasons are fundamental: all reasons reduce to atomic reasons. Holism is the view that whole reasons are fundamental. I argue for Holism, and against Atomism. I also consider whether reasons might be "context-sensitive".
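A schematic statement of the proposal (symbols mine): where ⊕ is mereological fusion and W assigns weights to reasons,

\[
W(r_1, \ldots, r_n \text{ taken together}) \;=\; W(r_1 \oplus \cdots \oplus r_n),
\]

which permits the weight of the fusion to exceed, equal, or fall short of the sum of the individual weights: combined weight need not be any function of them.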
Moral conclusions cannot validly be inferred from nonmoral premises – this principle, commonly called “Hume’s law,” presents a conundrum. On one hand, it seems obviously true, and its truth is often simply taken for granted. On the other hand, an ingenious argument by A. N. Prior seems to refute it. My aim here is a resolution. I shall argue, first, that Hume’s law is ambiguous, admitting both a strong and a weak interpretation; second, that the strong interpretation is false, as shown by Prior’s argument; and, third, that the weak interpretation is true.
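One natural way of drawing such a distinction (which may not match the paper's exactly): the strong reading bans every valid argument from nonmoral premises to a conclusion containing moral vocabulary; the weak reading bans only those whose conclusions contain moral vocabulary essentially, i.e. in a way that cannot be uniformly substituted away without affecting validity. Prior's disjunctive conclusion fails that test, since

\[
P \;\vdash\; P \vee Q \quad\text{remains valid when } Q \text{ is replaced by any sentence whatever}.
\]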
Recent epistemology has introduced a new criterion of adequacy for analyses of knowledge: such an analysis, to be adequate, must be compatible with the common view that knowledge is better than true belief. One account which is widely thought to fail this test is reliabilism, according to which, roughly, knowledge is true belief formed by a reliable process. Reliabilism fails, so the argument goes, because of the "swamping problem". In brief, provided a belief is true, we do not care whether or not it was formed by a reliable process. The value of reliability is "swamped" by the value of truth: truth combined with reliability is no better than truth alone. This paper approaches these issues from the perspective of decision theory. It argues that the "swamping effect" involves a sort of information-sensitivity that is well modelled decision-theoretically. It then employs this modelling to investigate a strategy, proposed by Goldman and Olsson, for saving reliabilism from the swamp, the so-called "conditional probability solution". It concludes that the strategy is only partially successful.
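A toy rendering of the swamping effect (the numbers are mine): suppose all epistemic value attaches to truth, say u(true belief) = 1 and u(false belief) = 0. Then, conditional on a belief's being true,

\[
u(\text{true} \wedge \text{reliably formed}) \;=\; u(\text{true}) \;=\; 1,
\]

so learning that the belief was reliably formed adds nothing: reliability matters only as evidence of truth, and that role is swamped once truth is given.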
The Argument from Inferiority holds that our world cannot be the creation of an omnipotent and omnibenevolent being; for if it were, it would be the best of all possible worlds, which evidently it is not. We argue that this argument rests on an implausible principle concerning which worlds it is permissible for an omnipotent being to create: roughly, the principle that such a being ought not to create a non-best world. More specifically, we argue that this principle is plausible only if we assume that there is a best element in the set of all possible worlds. However, as we show, there are conceivable scenarios in which that assumption does not hold.
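The standard scenario in which the assumption fails (the paper's own cases may differ) is an unbounded ascending chain of worlds:

\[
w_1 \prec w_2 \prec w_3 \prec \cdots,
\]

where for every world there is a better one. The set of possible worlds then has no best element, and 'create a best world' becomes a demand that no being, however powerful, could satisfy.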
A good life, or a life worth living, is one that is "better than nothing". At least that is a common thought. But it is puzzling. What does "nothing" mean here? It cannot be a quantifier in the familiar sense, but nor, it seems, can it be a referring term. To what could it refer? This paper aims to resolve the puzzle by examining a number of analyses of the concept of a life worth living. Temporal analyses, which exploit the temporal structure of lives, are distinguished from non-temporal ones. It is argued that the temporal analyses are better.
Whether value is “additive,” that is, whether the value of a whole must equal the sum of the values of its parts, is widely thought to have significant implications in ethics. For example, additivity rules out “organic unities,” and is presupposed by “contrast arguments.” This paper reconsiders the significance of value additivity. The main thesis defended is that it is significant only for a certain class of “mereologies”, roughly, those in which both wholes and parts are “complete”, in the sense that they can exist independently. For example, value additivity is significant in the case of a mereology of material objects, but not in the case of a mereology of propositions.
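Stated schematically (the notation is mine), additivity is the condition

\[
v(w) \;=\; \sum_{p \in \mathrm{Parts}(w)} v(p).
\]

Organic unities are precisely cases in which this equation fails, and contrast arguments presuppose it whenever they infer a part's value from a difference between wholes, as in v(p) = v(w) - v(w - p).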
Bart Streumer argues that a certain variety of consequentialism – he calls it ‘semi-global consequentialism’ – is false on account of its falsely implying the possibility of ‘blameless wrongdoing’. This article shows (i) that Streumer's argument is nothing new; (ii) that his presentation of the argument is misleading, since it suppresses a crucial premiss, commonly called ‘agglomeration’; and (iii) that, for all Streumer says, the proponent of semi-global consequentialism may easily resist his argument by rejecting agglomeration.
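In deontic-logic notation, agglomeration is standardly the principle

\[
O(A) \wedge O(B) \;\rightarrow\; O(A \wedge B),
\]

that is: if one ought to do A and ought to do B, then one ought to do A and B together. Rejecting it is the escape route the article offers the semi-global consequentialist.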