The counterfactual comparative account of harm faces problems in cases that involve overdetermination and preemption. An influential strategy for dealing with these problems, drawing on a suggestion made by Derek Parfit, is to appeal to _plural harm_—several events _together_ harming someone. We argue that the most well-known version of this strategy, due to Neil Feit, as well as Magnus Jedenheim Edling’s more recent version, is fatally flawed. We also present some general reasons for doubting that the overdetermination and preemption problems for the counterfactual comparative account can be satisfactorily solved by appealing to plural harm.
A popular view of harming is the causal account (CA), on which harming is causing harm. CA has several attractive features. In particular, it appears well equipped to deal with the most important problems for its main competitor, the counterfactual comparative account (CCA). However, we argue that, despite its advantages, CA is ultimately an unacceptable theory of harming. Indeed, while CA avoids several counterexamples to CCA, it is vulnerable to close variants of some of the problems that beset CCA.
Suppose that, for every possible event and person who would exist whether or not the event were to occur, there is a well-being level that the person would occupy if the event were to occur, and a well-being level that the person would occupy if the event were not to occur. Do facts about such connections between events and well-being levels always suffice to determine whether an event would harm or benefit a person? Many seemingly attractive accounts of harm and benefit entail an affirmative answer to this question, including the widely held Counterfactual Comparative Account (CCA). In this paper, however, we argue that all such accounts will be unsuccessful.
In Consequentialism Reconsidered, Carlson strives to find a plausible formulation of the structural part of consequentialism. Key notions are analyzed, such as outcomes, alternatives and performability. Carlson argues that consequentialism should be understood as a maximizing rather than a satisficing theory, and as temporally neutral rather than future oriented. He also shows that certain moral theories cannot be reformulated as consequentialist theories. The relevant alternatives for an agent in a situation are taken to comprise all actions that they can perform in the situation. The defense of this idea necessitates certain modifications to the standard consequentialist criteria of obligatoriness, rightness and wrongness. The problem of whether agents should adapt their actions to their own future actions is also addressed. Further, a conditional analysis of performability is suggested, and it is argued that particular actions should in this connection be regarded as ‘abstract’ rather than ‘concrete’. The final chapter sketches a consequentialist theory for collective agents.
We have argued that the counterfactual comparative account of harm and benefit (CCA) violates the plausible adequacy condition that an act that would harm an agent cannot leave her much better off than an alternative act that would benefit her. In a recent paper in this journal, however, Neil Feit objects that our argument presupposes questionable counterfactual backtracking. He also argues that CCA proponents can justifiably reject the condition by invoking so-called plural harm and benefit. In this reply, we argue that Feit’s lines of criticism are both unsuccessful.
The counterfactual comparative account of harm and benefit (CCA) has several virtues, but it also faces serious problems. I argue that CCA is incompatible with the prudential and moral relevance of harm and benefit. Some possible ways to revise or restrict CCA, in order to avoid this conclusion, are discussed and found wanting. Finally, I try to show that appealing to the context-sensitivity of counterfactuals, or to the alleged contrastive nature of harm and benefit, does not provide a solution.
Ruth Chang has defended a concept of "parity", implying that two items may be evaluatively comparable even though neither item is better than or equally good as the other. This article takes no stand on whether there actually are cases of parity. Its aim is only to make the hitherto somewhat obscure notion of parity more precise, by defining it in terms of the standard value relations. Given certain plausible assumptions, the suggested definiens is shown to state a necessary and sufficient condition for parity, as this relation is envisaged by Chang.
In a recent Utilitas article, Neil Feit argues that every person occupies a well-being level of zero at all times and possible worlds at which she fails to exist. Views like his face ‘the problem of the subject’: how can someone have a well-being level in a scenario where she lacks intrinsic properties? Feit argues that this problem can be solved by noting, among other things, that a proposition about a person can be true at a possible world in which neither she nor the proposition exists. In this response, we argue that Feit has not solved the problem of the subject, and also raise various related problems for his approach.
John Broome has argued that incomparability and vagueness cannot coexist in a given betterness order. His argument essentially hinges on an assumption he calls the ‘collapsing principle’. In an earlier article I criticized this principle, but Broome has recently expressed doubts about the cogency of my criticism. Moreover, Cristian Constantinescu has defended Broome’s view from my objection. In this paper, I present further arguments against the collapsing principle, and try to show that Constantinescu’s defence of Broome’s position fails.
In a recent article in this journal, I claimed that the widely held counterfactual comparative account of harm (CCA) violates two very plausible principles about harm and prudential reasons. Justin Klocksiem argues, in a reply, that CCA is in fact compatible with these principles. In this rejoinder, I shall try to show that Klocksiem’s defense of CCA fails.
A principal aim of the branch of ethics called ‘population theory’ or ‘population ethics’ is to find a plausible welfarist axiology, capable of comparing total outcomes with respect to value. This has proved an exceedingly difficult task. In this paper I shall state and discuss two ‘trilemmas’, or choices between three unappealing alternatives, which the population ethicist must face. The first trilemma is not new. It originates with Derek Parfit's well-known ‘Mere Addition Paradox’, and was first explicitly stated by Yew-Kwang Ng. I shall argue that one horn of this trilemma is less unattractive than Parfit and others have claimed. The second trilemma, which is a kind of mirror image of the first, appears hitherto to have gone unnoticed. Apart from attempting to resolve the two trilemmas, I shall suggest certain features which I believe a plausible welfarist axiology should possess. The details of this projected axiology will, however, be left open.
John Broome has argued that alleged cases of value incomparability are really examples of vagueness in the betterness relation. The main premiss of his argument is ‘the collapsing principle’. I argue that this principle is dubious, and that Broome's argument is therefore unconvincing.
In a discussion of Parfit's Drops of Water case, Zach Barnett has recently proposed a novel argument against “No Small Improvement”; that is, the claim that a single drop of water cannot affect the magnitude of a thirsty person's suffering. We first show that Barnett's argument can be significantly strengthened, and also that the fundamental idea behind it yields a straightforward argument for the transitivity of equal suffering. We then suggest that defenders of No Small Improvement could reject a Pareto principle that is presupposed in Barnett's argument and our developments of it. However, this does not save No Small Improvement, since there is a convincing argument against this claim that does not presuppose the Pareto principle.
Frances Howard-Snyder has argued that objective consequentialism violates the principle that ‘ought’ implies ‘can’. In most situations, she claims, we cannot produce the best consequences available, although objective consequentialism says that we ought to do so. Here I try to show that Howard-Snyder's argument is unsound. The claim that we typically cannot produce the best consequences available is doubtful. And even if there is a sense of ‘producing the best consequences’ in which we cannot do so, objective consequentialism does not entail that we ought, in this sense, to produce the best consequences.
Gustafsson and Espinoza have recently argued that the ‘small-improvement argument’, against completeness as a rationality requirement for preference orderings, is defective. They claim that the two main premises of the argument conflict, and hence should not both be accepted. I show that this conflict can be avoided by modifying one of the premises.
It is plausible to claim that it is morally worse to kill an innocent person than to give any number of people a mild one‐hour headache. Alastair Norcross has argued that consequentialists, at least, should reject this claim. According to him, any harm that can befall a person can be morally outweighed by a sufficient number of very small harms. He gives a general argument for this view, and tries to show, by means of an argument from analogy, that it is less counter‐intuitive than it appears. I show that his main argument relies on a false assumption, and argue that the purported analogy is dubious.
The ‘non-identity problem’ raises a well-known challenge to the person-affecting view, according to which an action can be wrong only if it affects someone for the worse. In a recent article, however, Thomas D. Bontly proposes a novel way to solve the non-identity problem in person-affecting terms. Bontly's argument is based on a contrastive causal account of harm. In this response, we argue that Bontly's argument fails even assuming that the contrastive causal account is correct.
This paper criticizes the consequentialist theory recently put forward by Fred Feldman. I argue that this theory violates two crucial requirements. Another theory, proposed by Peter Vallentyne, is similarly flawed. Feldman's basic ideas could, however, be developed into a more plausible theory. I suggest one possible way of doing this.
Whether or not intrinsic value is additively measurable is often thought to depend on the truth or falsity of G. E. Moore's principle of organic unities. I argue that the truth of this principle is, contrary to received opinion, compatible with additive measurement. However, there are other very plausible evaluative claims that are more difficult to combine with the additivity of intrinsic value. A plausible theory of the good should allow that there are certain kinds of states of affairs whose intrinsic value cannot be outweighed by any number of states of certain other, less valuable, kinds. Such "non-trade-off" cannot reasonably be explained in terms of organic unities, and it can be reconciled with the additivity thesis only if we are prepared to give up some traditional claims about the nature of intrinsic value.
Several distinguished philosophers have argued that since the state of affairs where nothing exists is the simplest and least arbitrary of all cosmological possibilities, we have reason to be surprised that there is in fact a non-empty universe. We review this traditional argument, and defend it against two recent criticisms put forward by Peter van Inwagen and Derek Parfit. Finally, we argue that the traditional argument nevertheless needs reformulation, and that the cogency of the reformulated argument depends partly on whether there are certain conceptual limitations to what a person can hypothetically doubt.
In a recent article in this journal, Justin Klocksiem proposes a novel response to the widely discussed failure to benefit problem for the counterfactual comparative account of harm (CCA). According to Klocksiem, proponents of CCA can deal with this problem by distinguishing between facts about there being harm and facts about an agent's having done harm. In this reply, we raise three sets of problems for Klocksiem's approach.
In this paper, we put forward two novel arguments against the counterfactual comparative account (CCA) of harm and benefit. In both arguments, the central theme is that CCA conflicts with plausible judgements about benefit and prudence.
Many philosophers have claimed that extensive or additive measurement is incompatible with the existence of "higher values", any amount of which is better than any amount of some other value. In this paper, it is shown that higher values can be incorporated in a non-standard model of extensive measurement, with values represented by sets of ordered pairs of real numbers, rather than by single reals. The suggested model is mathematically fairly simple, and it applies to structures including negative as well as positive values.
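To give a rough feel for how additivity and "higher values" can coexist, here is a minimal sketch. It is a hypothetical simplification, not the paper's actual model (which uses sets of ordered pairs, not single pairs): each value is represented by one pair (amount of higher value, amount of lower value), combination is componentwise addition, and comparison is lexicographic.

```python
# Hypothetical simplified sketch: values as single pairs
# (higher-value amount, lower-value amount). Carlson's model
# uses sets of such pairs; this illustrates only the core idea.
from typing import Tuple

Value = Tuple[float, float]

def add(a: Value, b: Value) -> Value:
    """Combining value bearers: componentwise addition (additivity)."""
    return (a[0] + b[0], a[1] + b[1])

def better(a: Value, b: Value) -> bool:
    """Lexicographic comparison: any surplus of the higher value
    outweighs any amount of the lower value."""
    return a > b  # Python compares tuples lexicographically

# A tiny amount of the higher value beats any amount of the lower:
assert better((1.0, 0.0), (0.0, 10**9))
# Yet the structure remains additive:
assert add((1.0, 2.0), (3.0, 4.0)) == (4.0, 6.0)
```

The lexicographic ordering is what blocks trade-offs between the two components, while addition on pairs preserves the extensive (concatenation-respecting) structure.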
The well‐known “Consequence Argument” for the incompatibility of freedom and determinism relies on a certain rule of inference; “Principle Beta”. Thomas Crisp and Ted Warfield have recently argued that all hitherto suggested counterexamples to Beta can be easily circumvented by proponents of the Consequence Argument. I present a new counterexample which, I argue, is free from the flaws Crisp and Warfield detect in earlier examples.
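For reference, with $Np$ read as "p is true, and no one has, or ever had, any choice about whether p", Principle Beta is standardly formulated as the inference rule:

```latex
\frac{Np \qquad N(p \rightarrow q)}{Nq}
```

That is, from the premises that no one has a choice about $p$, and no one has a choice about $p$'s implying $q$, one may infer that no one has a choice about $q$. A counterexample to Beta is a case where both premises hold but the conclusion fails.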
Several philosophers have argued that our cosmos is either purposely created by some rational being, or else just one among a vast number of actually existing cosmoi. According to John Leslie and Peter van Inwagen, the existence of a cosmos containing rational beings is analogous to drawing the winning straw among millions of straws. The best explanation in the latter case, they maintain, is that the drawing was either rigged by someone, or else many such lotteries have taken place. Arnold Zuboff claims that each person is justified in concluding that her existence did not depend on a particular sperm cell first reaching the egg. If it did so depend, her existence would be extremely improbable, and an incredible coincidence for her. Similarly, intelligent life would be an incredible coincidence for us, if this were the only actual cosmos. We reject both these purported analogies. Referring to the nonheredity of 'surprise value', we conclude that an evolutionary explanation of the presence of rational beings is sufficient; there is no further need to explain the basic features of our cosmos which make intelligent life possible. This point concerning surprise value also reveals a fundamental disanalogy between straw-drawing and cosmos creation.
Charlotte Unruh has recently put forward a hybrid account of what it is to suffer harm – one that combines comparative and non‐comparative elements. We raise two problems for Unruh's account. The first concerns killing and death; the second concerns the causing of temporarily low or high welfare.
This chapter deals with an area of study sometimes called “formal value theory” or “formal axiology”. Roughly characterized, this area investigates the structural and logical properties of value properties and value relations, such as goodness, badness, and betterness. There is a long-standing controversy about whether goodness and badness can, in principle, be measured on a cardinal scale, in a way similar to the measurement of well-understood quantitative concepts like length. Sect. 28.1 investigates this issue, mainly by comparing the properties of the relations “longer than” and “better than”. In Sect. 28.2, some attempts to define goodness and badness in terms of the betterness relation are discussed, and a novel suggestion is made. Sect. 28.3, finally, contains an attempt to define the recently much discussed value relation “on a par with” in terms of the more familiar betterness relation.