
Collective action problems and conflicting obligations


Abstract

Enormous harms, such as climate change, often occur as the result of large numbers of individuals acting separately. In collective action problems, an individual has so little chance of making a difference to these harms that changing their behavior has insignificant expected utility. Even so, it is intuitive that individuals in many collective action problems should not be parts of groups that cause these great harms. This paper gives an account of when we do and do not have obligations to change our behavior in collective action problems. It also addresses a question insufficiently explored in the literature on this topic: when obligations arising out of collective action problems conflict with other obligations, what should we do? The paper explains how to adjudicate conflicts involving two collective action problems and conflicts involving collective action problems and other sorts of obligations.


Notes

  1. The term is borrowed from Kutz (2000), but I’m using it in a somewhat broader sense than he does. For example, I include harms caused by voting as unstructured, as voters are relatively disorganized even though voters have (arguably) shared goals; Kutz would not.

  2. These sorts of intuitions are almost universal among my students. The large body of literature on this topic confirms that these intuitions are shared, even when not endorsed, by philosophers; if they were not, there would not be so much work trying either to account for them or to argue against them.

  3. I adopt the common-sense view that, when doing x maximizes expected utility but only by a very small amount, the fact that x maximizes does not automatically generate an obligation to do x. Note also that, as I’m using “harm” broadly, I’m also using “expected utility” broadly.

  4. Voting often has some cost, so in real-world cases the flat sections of the graph may not be perfectly flat. For those who wonder why, in the voting case, I talk about the number of people opting in rather than the percentage, see Sect. 3.4.

  5. Parfit (1984) also uses overdetermination to talk about CAP, as does Sartorio (2004, 2007). I discuss in Sect. 5 how my view differs from each of theirs.

  6. See, e.g., Moore (1999) for discussion.

  7. One might still wonder if it makes a moral difference when other agents, rather than machines, make unstructured harms overdetermined. It does not. If Barry came across two strangers who were about to randomly shoot Abel, rather than two machines, this would not change the wrong he does nor the strength of the obligation he violates. Nor would it change the explanation of that wrong that I give in Sect. 3.2.

  8. The bullet factory cases involve wrongful acts, but one can sometimes wrongfully opt in by omission. Imagine that Mark works as a bomb inspector. Each bomb is inspected by two machines and Mark; if two of the three flag the bomb as defective, the bomb is immediately disposed of in a safe way. Mark sees that a bomb has a defect, which he realizes will not be noted by the machines, and he knows the defect will make the bomb detonate and kill his co-worker. Mark can do nothing to stop this, so he shrugs and does not flag the bomb as defective. The co-worker is killed. While flagging the bomb as defective would not prevent the death, Mark should do it anyway.

  9. This test will not always be useful, such as when behavior impacts those of very different backgrounds or values from our own, but I think it should work for the cases I am discussing. My thanks to Kathryn Lindeman for helping me get clearer on this issue. It is also worth noting that, while I’m using quasi-contractualist language, I am only claiming that this is a way of learning what is disrespectful, not that it is constitutive of, or the nature of, disrespect.

  10. This should be a familiar point from objections to Kant-inspired “universalization” or “generalization” principles (see e.g. Sandberg 2011 for such an objection).

  11. We can give the same sort of explanation of why it can be wrong to risk harming others even when no harm actually occurs. An agent who takes a risk does not know that no harm will occur, and this establishes a disrespectful relationship to a specific value (the value that would be lost were harm to occur). In situations like A, on the other hand, there is no risk, because the agent knows no harm will occur; thus, the agent in A does not stand in a disrespectful relationship to any value.

  12. We can imagine that Abel might ask Barry not to shoot him in the case where no harm will occur. But this is more because the shot might be annoying, or Abel might not be fully convinced that it is perfectly safe, than because of the harm it would do were too many to shoot.

  13. This is different from appealing to expected utility. In CAP, the chance that we make a difference is tiny, so the expected utility of opting out is minuscule. However, the expected unstructured harm that occurs can still be massive. (For a schematic illustration, see the gloss following these notes.)

  14. This is the standard view in the population ethics literature spawned by Parfit (1984).

  15. It can be wrong to overdetermine a person’s being prevented from bringing happy children into the world. This is wrong because it violates the rights of the person prevented from having children, not because of the failure to bring children into the world. Failing to buy meat does not violate the rights of farmers, however.

  16. If the Bad candidate were sufficiently worse than the Moderate, expected utility might generate a duty to vote Moderate. However, voting Moderate would also risk making one an overdeterminer of the harm of electing the Moderate candidate. So one would also have a duty to vote Good. I’ll address conflicts of this sort—between duties in CAP and other sorts of duties—in the next section. The upshot will be that, if the expected utility of voting Moderate does generate a duty, this overrides the duty to vote Good.

  17. My thoughts on overdetermination in voting cases, and especially this particular point, were influenced by arguments in Sartorio (2004), although she does not address three-option voting cases.

  18. It’s worth noting that, past some sufficiently high value of i, Andy_i no longer does something wrong by planting the bomb. This is because it is permissible to incur extremely tiny risks. For example, it is typically permissible to take one’s infant for a gratuitous car ride, even though this risks the child’s life. It may be that these risks are permissible to incur because their expected utility is sufficiently close to zero. Or it may be that these risks are permissible to take just because the probabilities of bad outcomes are low enough (see, e.g., Aboodi et al. 2008). (For a schematic contrast of these two views, see the gloss following these notes.)

  19. It’s not clear to me, however, how obligations to opt out compare to other obligations to refrain from acts that have no impact (e.g. how they compare to obligations involving unnoticed lies or promise breakings).

  20. This is something like consequentialism plus a virtue-oriented view. See also Jamieson (2007) for discussion of how consequentialism and virtues can together explain obligations to opt out. Even if we can give a version of this view that fits our intuitions, it is only more attractive than my view if there are prior compelling reasons to endorse consequentialism, and I think our general evidence favors non-consequentialist moral theories.

  21. Versions of this objection are raised in Nefsky (2015) and Sandberg (2011). One can amend Parfit’s view so that it does not require that there be a unique smallest necessary group. Counterexamples to these variants can be generated by varying the ability of different actors to cause harm. Imagine a version of the bullet factory case in which machines 1 and 2 shoot Abel, as does Barry. Machine 1’s shot would be sufficient to kill Abel by itself, but it would take machine 2 and Barry shooting together to kill Abel. It is still intuitively wrong for Barry to shoot Abel. This fits my view: he overdetermines Abel’s death, although he is not part of the smallest group necessary to kill Abel.

  22. Thank you to Eric Chwang for pointing out this objection, and giving me this example.

  23. See Cullity (2006) for discussion of ways of determining the demandingness of obligations that come in a series.

  24. Thank you to Alastair Norcross and Doug Portmore for pointing out this objection.

  25. Thank you to Rebecca Chan for the ideas in this paragraph.

  26. This was originally discussed by Parfit (1984), although I’m using a version from Kagan (2011).

  27. There is a way of describing this case so that its harm graph has pronounced steps: the relevant harm is not the pain felt, but the violation of rights, and at some point a threshold is crossed and rights are violated. That description seems incorrect to me. If it were correct, however, then the view I’ve presented in this paper would say it is wrong to be one of the torturers.

  28. My experience is that intuitions tend to waver when we make it clear to intuitors that the torturers are not working with a unified goal and that there is never a point at which any additional torturer does or could make a noticeable difference—that is, they waver when we make clear that this is an unstructured harm and that no significant harm is overdetermined.

  29. Thank you to Molly Gardner for calling this issue to my attention.
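Two of the notes above admit a schematic gloss. The formalizations below are mine, with purely hypothetical numbers; the paper itself supplies neither.

For note 13: let $p$ be the chance that one agent’s opting out changes the outcome, and let $H$ be the disutility of the unstructured harm, which is all but certain to occur whatever any single agent does. Then, for example,

\[ \underbrace{p \cdot H}_{\text{EU of opting out}} = 10^{-8} \times 10^{6} = 10^{-2} \quad (\text{minuscule}), \qquad \mathbb{E}[\text{unstructured harm}] \approx H = 10^{6} \quad (\text{massive}). \]

The individual’s expected utility is scaled down by $p$ while the expected harm is not; this is why the two can come apart so dramatically.

For note 18: the two candidate explanations of why tiny risks are permissible impose different tests on an act with probability $p$ of causing a harm of magnitude $H$ (for suitable thresholds $\varepsilon$ and $\delta$):

\[ \text{expected-utility view: permissible iff } p \cdot H < \varepsilon; \qquad \text{probability-threshold view: permissible iff } p < \delta. \]

The tests come apart when $H$ is enormous: a sufficiently small $p$ passes the threshold test no matter how large $H$ is, while the expected-utility test remains sensitive to $H$’s magnitude.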

References

  • Aboodi, R., Borer, A., & Enoch, D. (2008). Deontology, individualism, and uncertainty: A reply to Jackson and Smith. The Journal of Philosophy, 105(5), 259–272.

  • Björnsson, G. (2014). Essentially shared obligations. Midwest Studies in Philosophy, 38(1), 103–120.

  • Brand-Ballard, J. (2010). Limits of legality: The ethics of lawless judging. Oxford: Oxford University Press.

  • Cullity, G. (2000). Pooled beneficence. In M. Almeida (Ed.), Imperceptible harms and benefits (pp. 1–23). Dordrecht: Kluwer Academic Publishers.

  • Cullity, G. (2006). The moral demands of affluence. Oxford: Oxford University Press.

  • Dietz, A. (2016). What we together ought to do. Ethics, 126, 955–982.

  • Gelman, A., Silver, N., & Edlin, A. (2012). What is the probability your vote will make a difference? Economic Inquiry, 50(2), 321–326.

  • Hume, D. (1752). Of the original contract.

  • Isaacs, T. (2011). Moral responsibility in collective contexts. Oxford: Oxford University Press.

  • Jamieson, D. (2007). When utilitarians should be virtue theorists. Utilitas, 19(2), 160–183.

  • Kagan, S. (2011). Do I make a difference? Philosophy & Public Affairs, 39(2), 105–141.

  • Kutz, C. (2000). Complicity: Ethics and law for a collective age. Cambridge: Cambridge University Press.

  • Moore, M. S. (1999). Causation and responsibility. Social Philosophy and Policy, 16(2), 1–51.

  • Morris, N. (1974). The future of imprisonment. Chicago: University of Chicago Press.

  • Nefsky, J. (2011). Consequentialism and the problem of collective harm: A reply to Kagan. Philosophy & Public Affairs, 39(4), 364–395.

  • Nefsky, J. (2012). The morality of collective harm. PhD dissertation, University of California, Berkeley.

  • Nefsky, J. (2015). Fairness, participation, and the real problem of collective harm. In M. Timmons (Ed.), Oxford studies in normative ethics (Vol. 5, pp. 245–271). Oxford: Oxford University Press.

  • Norcross, A. (2004). Puppies, pigs, and people: Eating meat and marginal cases. Philosophical Perspectives, 18(1), 229–245.

  • Otsuka, M. (1991). The paradox of group beneficence. Philosophy & Public Affairs, 20, 132–149.

  • Parfit, D. (1984). Reasons and persons. Oxford: Oxford University Press.

  • Pinkert, F. (2015). What if I cannot make a difference (and know it). Ethics, 125, 971–998.

  • Sandberg, J. (2011). My emissions make no difference. Environmental Ethics, 33(3), 229–248.

  • Sartorio, C. (2004). How to be responsible for something without causing it. Philosophical Perspectives, 18(1), 315–336.

  • Sartorio, C. (2007). Causation and responsibility. Philosophy Compass, 2(5), 749–765.

  • Strang, C. (1960). What if everyone did that? Durham University Journal, 53, 5–10.

  • Vance, C. (2016). Climate change, individual emissions, and foreseeing harm. Journal of Moral Philosophy. doi:10.1163/17455243-46810060.


Acknowledgements

My thanks to Rebecca Chan, Eric Chwang, Alex Dietz, Molly Gardner, Chris Heathwood, Charlie Kurth, Kathryn Lindeman, Julia Nefsky, Alastair Norcross, Julia Staffel, Doug Portmore, and audience members at the Rocky Mountain Ethics Congress whose names I have forgotten. I also could not have written this paper without so many great students over the years who helped me to think about, and rethink, my views.

Author information

Correspondence to Brian Talbot.


Cite this article

Talbot, B. Collective action problems and conflicting obligations. Philos Stud 175, 2239–2261 (2018). https://doi.org/10.1007/s11098-017-0957-7
