
SUBJECTIVE RIGHTNESS

Published online by Cambridge University Press:  16 June 2010

Holly M. Smith
Affiliation:
Philosophy, Rutgers University

Abstract

Twentieth-century philosophers introduced the distinction between “objective rightness” and “subjective rightness” to achieve two primary goals. The first goal is to reduce the paradoxical tension between our judgments of (i) what is best for an agent to do in light of the actual circumstances in which she acts and (ii) what is wisest for her to do in light of her mistaken or uncertain beliefs about her circumstances. The second goal is to provide moral guidance to an agent who may be uncertain about the circumstances in which she acts, and hence is unable to use her standard moral principle directly in deciding what to do. This paper distinguishes two important senses of “moral guidance”; proposes criteria of adequacy for accounts of subjective rightness; canvasses existing definitions of “subjective rightness”; finds them all deficient; and proposes a new and more successful account. It argues that each comprehensive moral theory must include multiple principles of subjective rightness to address the epistemic situations of the full range of moral decision-makers, and shows that accounts of subjective rightness formulated in terms of what it would be reasonable for the agent to believe cannot provide that guidance.

Research Article

Copyright © Social Philosophy and Policy Foundation 2010


References

1 Russell, Bertrand, “The Elements of Ethics” (originally published in 1910; reprinted from Russell, Philosophical Essays), in Readings in Ethical Theory, ed. Sellars, Wilfrid and Hospers, John, 2d ed. (New York: Appleton-Century-Crofts, 1970), 10–15; Broad, C. D., Ethics, ed. Lewy, C. (Dordrecht: Martinus Nijhoff, 1985), chapter 3 (from lectures given in 1952–53); Prichard, H. A., “Duty and Ignorance of Fact” (1932), in Prichard, H. A., Moral Obligation and Duty and Interest (Oxford: Oxford University Press, 1968), 18–39; Ross, W. D., Foundations of Ethics (Oxford: Clarendon Press, 1939), chapter 7. G. E. Moore has an early discussion of the “paradox” in question, but eventually concludes that we should say the action with the best consequences is right, although the person who does it, believing that it will have bad consequences, is to blame for his choice. See Moore, G. E., Ethics (1912; New York: Oxford University Press, 1965), 80–83. Henry Sidgwick uses the terms “subjective rightness” and “objective rightness,” but uses the first term to refer to the agent's belief that an action is right (a status now often labeled “putatively right”), and the second term to refer to the fact that the action is the agent's duty in the actual circumstances. See Sidgwick, Henry, The Methods of Ethics (1874; Chicago: The University of Chicago Press, 1907), 206–8.

2 These locutions are somewhat misleading, since a morally right action is not necessarily the unique morally best action available to the agent, but may be one of several equally good options. However, for simplicity of exposition, I shall often use “right” when “ought to be done” or “obligatory” would be more accurate. I shall also frequently use “objective rightness” or “subjective rightness” to stand in for the objective or subjective moral status of an action more generally speaking. Note that here and throughout this essay, I am speaking only of all-things-considered moral status, not prima facie or pro tanto moral status. Many of the same issues arise for these latter concepts, and much of my discussion can be applied to them.

3 Prichard forcefully articulates this worry in “Duty and Ignorance of Fact.”

4 See, for example, the following discussions: Mulgan, Tim, The Demands of Consequentialism (Oxford: Clarendon Press, 2001) (discussing versions of consequentialism): “The objectively right action is always what would have produced the best consequences…. The subjectively right action is what seems to the agent to have the greatest expected value” (42); Sosa, David, “Consequences of Consequentialism,” Mind, New Series 102, no. 405 (January 1993): “[T]hey can agree that what he did was, say, ‘subjective-right’ and ‘objective-wrong’” (109); Oddie, Graham and Menzies, Peter, “An Objectivist's Guide to Subjective Value,” Ethics 102, no. 3 (April 1992): “The subjectivist claims that the primary notion for moral theory is given by what is best by the agent's lights … regardless of what is actually the best. The objectivist claims that the primary notion for moral theory is given by what is best regardless of how things seem to the agent” (512); and Hudson, James L., “Subjectivization in Ethics,” American Philosophical Quarterly 26, no. 3 (July 1989): “In moral philosophy there is an important distinction between objective theories and subjective ones. An objective theory lays down conditions for right action which an agent may often be unable to use in determining her own behavior. In contrast, the conditions for right action laid down by a subjective theory guarantee the agent's ability to use them to guide her actions” (221; italics in the original).

Some contemporary theorists use the term “rational” to refer to what I am calling “subjectively right.” However, since what it would be rational for an agent to do, or what an agent has reason to do, may be ambiguous in just the same way that what it would be right for an agent to do may be ambiguous, I shall not adopt this terminology. Note, though, that the distinction between subjective and objective rightness arises not just in morality but also in other practical fields, such as law, prudence, etiquette, etc. My discussion will be confined to ethics, but much that is said here can be carried over into these other domains.

Some theorists have introduced the distinction between objective and subjective rightness (or something closely similar), not for the reasons I describe, but to serve other argumentative purposes, such as to address the criticism of utilitarianism that it requires agents to constantly calculate the utilities of their actions and thus diverts them from direct attention to the kinds of pursuits and relationships that make life worthwhile. See, for example, Railton, Peter, “Alienation, Consequentialism, and the Demands of Morality,” Philosophy and Public Affairs 13 (Spring 1984): 134–171.

5 Pettit, Philip, in his essay “Consequentialism,” in Darwall, Stephen, ed., Consequentialism (Malden, MA: Blackwell Publishing, 2003), points out that many nonconsequentialists assume that the properties of actions they find morally relevant are ones such that the agent will always be able to know whether or not an option will have one of those properties. In Pettit's view, this is not generally so. Hence, according to Pettit, “the non-consequentialist strategy will often be undefined” (ibid., 99). In “Absolutist Moral Theories and Uncertainty,” The Journal of Philosophy 103, no. 6 (June 2006): 267–83, Frank Jackson and Michael Smith argue that absolutist nonconsequentialist moral theorists cannot define a workable account of what it would be subjectively best to do in light of uncertainty.

6 For a selection of examples, see Bales, Eugene, “Act-Utilitarianism: Account of Right-Making Characteristics or Decision-Making Procedure?” American Philosophical Quarterly 7 (July 1971): 256–65; Darwall, Stephen, Impartial Reason (Ithaca, NY: Cornell University Press, 1983), 30–31; Hudson, “Subjectivization in Ethics”; Gibbard, Allan, Wise Choices, Apt Feelings (Cambridge, MA: Harvard University Press, 1990), 43; Jackson, Frank, “Decision-Theoretic Consequentialism and the Nearest and Dearest Objection,” Ethics 101, no. 3 (April 1991): 461–82; Korsgaard, Christine, The Sources of Normativity (Cambridge: Cambridge University Press, 1996), 8; Milo, Ron, Immorality (Princeton, NJ: Princeton University Press, 1984), 22 (“Our primary purpose in passing judgments on our actions is to enable us to guide our choices about how to act”); Narveson, Jan, Morality and Utility (Baltimore, MD: The Johns Hopkins Press, 1967), 12; Smart, J. J. C., “An Outline of a System of Utilitarian Ethics,” in Smart, J. J. C. and Williams, Bernard, Utilitarianism: For and Against (Cambridge: Cambridge University Press, 1973), 44, 46; Stocker, Michael, Plural and Conflicting Values (Oxford: Clarendon Press, 1990), 10; Timmons, Mark, Moral Theory: An Introduction (Lanham, MD: Rowman and Littlefield, 2002), 3; and Williams, Bernard, “A Critique of Utilitarianism,” in Smart and Williams, Utilitarianism: For and Against, 124.

7 Throughout this essay, I will talk about “theories,” “principles,” and “codes” of objective and subjective rightness. A particularist would reject such generalized statements of what makes actions right or wrong. Nonetheless, the particularist, too, will have to deal with problems arising from agents' mistakes and uncertainties, so he will need to attend to the issues addressed in this essay—something that appears to have been little discussed among particularists.

8 Occasionally people respond to Twin Towers III by saying, “Of course Pete can use his moral code to make his decision, since it tells him to save the lives of the people in the building, or to choose the method that has the greatest chance of saving their lives.” But Pete's moral code says only that he is to actually save their lives; advice about what he should do when it is uncertain which escape route would have the greatest chance of saving their lives is part of the job of principles of subjective rightness, and shows why we need them. We are so used to thinking in this fashion that we often do not notice we have switched from a judgment about objective rightness to a judgment about subjective rightness. But see also the remarks about “Remodeling” theorists in the text below.

9 They are also linked heavily to the concept of an excuse, and in particular to the fact that we excuse (not justify) people for their acts done in ignorance, but I will not try to spell out the ramifications of this in the present essay.

10 This is a term I employ in Making Morality Work (manuscript in progress), and represents a change from the terminology I employed in “Two-Tier Moral Codes,” Social Philosophy and Policy 7, no. 1 (1989): 112–32.

11 Common variants of this view would stipulate that the action must be most appropriate to the beliefs that a reasonable person would have in the agent's circumstances, or some similar constraint.

12 Both Prichard, “Duty and Ignorance of Fact,” and Ross, Foundations of Ethics, are Remodeling theorists. Recent discussions and defenses of Remodeling theories include Hudson, “Subjectivization in Ethics,” 221–29; Shaw, William H., Contemporary Ethics: Taking Account of Utilitarianism (Malden, MA: Blackwell Publishers, 1999), 27–31; Hooker, Brad, Ideal Code, Real World (Oxford: Clarendon Press, 2000); Zimmerman, Michael, “Is Moral Obligation Objective or Subjective?” Utilitas 18, no. 4 (December 2006): 329–61; and Jackson, “Decision-Theoretic Consequentialism.” In Living with Uncertainty (Cambridge: Cambridge University Press, 2008), Michael Zimmerman provides the most developed contemporary version and defense of this type of theory. Fred Feldman argues, in “Actual Utility, the Objection from Impracticality, and the Move to Expected Utility,” Philosophical Studies 129 (2006): 49–79, that the Remodeling version of act-utilitarianism using expected utility cannot achieve all the goals its advocates have hoped for.

13 For initial investigations of these problems, see Lockhart, Ted, Moral Uncertainty and Its Consequences (New York: Oxford University Press, 2000); Ross, Jacob, “Rejecting Ethical Deflationism,” Ethics 116 (July 2006): 742–68; and Sepielli, Andrew, “What to Do When You Don't Know What to Do,” in Shafer-Landau, Russ, ed., Oxford Studies in Metaethics IV (Oxford: Oxford University Press, 2009).

14 See Smith, Holly M., “Making Moral Decisions,” Noûs 22 (1988): 89–93, for a detailed discussion of the concepts of “theoretical” and “practical” domains of a moral principle. The statement of Criterion 2 is fairly rough. Moreover, given the possibility discussed in the text below that a non-possible action is subjectively right, we want the domain of principles of subjective rightness to extend beyond the domain of principles of objective rightness. In addition, Criterion 2 is too strong, since an agent may be totally unaware that a certain action (under any description) is available to him (for example, he may not believe he can touch his nose with the tip of his tongue, never having tried or even thought about trying to do this); thus, that action might have objective moral status without having any subjective moral status.

15 As I shall understand the concept of “moral guidance,” it includes permissions for agents to act in certain ways, as well as demands that they act in certain ways. Almost every situation is one in which there are several equally morally good options, even though there may be many morally bad options that must be avoided.

16 Note that there may be limits on this. Some otherwise plausible theories of objective rightness may not be compatible with any theory of subjective rightness. This is arguably a fault of these theories of objective rightness, not a deficiency in the definition of subjective rightness. See Frank Jackson and Michael Smith, “Absolutist Moral Theories and Uncertainty,” for an argument that absolutist nonconsequentialist theories suffer this failing.

17 See Holly M. Smith, “Making Moral Decisions,” 91–92, for discussion of this distinction. The definitions given in the text are overly simple; the definition of being able to use a principle as an internal guide is further refined by Definition (8) in Section V of the current essay.

18 For further discussion of this claim, see Holly M. Smith, “Making Moral Decisions,” section V. Pekka Väyrynen has picked up and pursued this idea in “Ethical Theories and Moral Guidance,” Utilitas 18, no. 3 (September 2006): 291–309.

19 See Russell, “The Elements of Ethics,” 12 (“… the [act] which will probably be the most fortunate … I shall define … as the wisest act”); Smart, “An Outline of a System of Utilitarian Ethics,” 46–47 (“… the ‘rational’ … action … is, on the evidence available to the agent, likely to produce the best results …”); Lewis, C. I., Values and Imperatives (Stanford, CA: Stanford University Press, 1969), 35–38 (“… right if it probably would have the best consequences”), as quoted in Singer, Marcus C., “Actual Consequence Utilitarianism,” in Pettit, Philip, ed., Consequentialism (Aldershot, England: Dartmouth Publishing Company Limited, 1993), 299; Ross, Foundations of Ethics, 157; Hospers, John, Human Conduct (New York: Harcourt, Brace, and World, 1961), 217 (“… our subjective duty, namely the act which, in those circumstances, was the most likely to produce the maximum good”).

20 See Parfit, Derek, Reasons and Persons (Oxford: Clarendon Press, 1984), 24–25; William H. Shaw, Contemporary Ethics, 27–31 (as a theory of objective rightness); and Timmons, Moral Theory, 124.

21 See Brandt, Richard, “Towards a Credible Form of Utilitarianism,” in Castaneda, Hector-Neri and Nakhnikian, George, eds., Morality and the Language of Conduct (Detroit: Wayne State University, 1965), 112–14; Brandt, Richard, Ethical Theory (Englewood Cliffs, NJ: Prentice-Hall, 1959), 365 (“… ‘did his duty’ in [the subjective] sense means ‘did what would have been his duty in the objective sense, if the facts of the particular situation had been as he thought they were, except for corrections he would have made if he had explored the situation as thoroughly as a man of good character would have done in the circumstances’”); Graham, Peter, “‘Ought’ Does Not Imply ‘Can’,” unpublished manuscript, 2007, 3–4, http://people.umass.edu/pgraham/Home.html; Feldman, Fred, Doing the Best We Can (Dordrecht: D. Reidel, 1986), 46; Broad, Ethics, 141 (“… we must say that he is under a formal obligation to set himself to discharge what he knows would be his material obligation if the situation were as he mistakenly believes it to be”); Milo, Immorality, 18 (“If the agent is mistaken about a matter of fact, and, if, had the facts been as he supposed, his act would be wrong, then, unless there are excusing conditions, his act is blameworthy and immoral”); and Thomson, Judith Jarvis, “Imposing Risks,” in Parent, William, ed., Rights, Restitution, and Risk (Cambridge, MA: Harvard University Press, 1986), 179 (“… presumably ‘He (subjectively) ought’ means ‘If all his beliefs of fact were true, then it would be the case that he (objectively) ought’”; although note that Thomson doubts there is any subjective sense of “ought”). Note that the American Law Institute's Model Penal Code, Section 2.04(2) provides that the defense of ignorance of fact “is not available if the defendant would be guilty of another offense had the situation been as he supposed….” Cited in Husak, Douglas and von Hirsch, Andrew, “Culpability and Mistake of Law,” in Shute, Stephen, Gardner, John, and Horder, Jeremy, eds., Action and Value in Criminal Law (Oxford: Clarendon Press, 1993), 161.

22 See Gibbard, Wise Choices, Apt Feelings, 42 (“Thus an act is … wrong in the subjective sense if it is wrong in light of what the agent had good reason to believe”; note that Gibbard uses the “good reason to believe” formulation of this definition); Prichard, “Duty and Ignorance of Fact,” 25 (“… the obligation depends on our being in a certain attitude of mind towards the situation in respect of knowledge, thought, or opinion”); Ross, Foundations of Ethics, 146–47 (“… when we call an act right we sometimes mean that … it suits the subjective features [of the situation]…. The subjective element consists of the agent's thoughts about the situation”; see also ibid., 150, 161, 164); Oddie, Graham and Menzies, Peter, “An Objectivist's Guide to Subjective Value,” Ethics 102 (April 1992): 512–33, at 512 (“… is the morally right action the one which is best in the light of the agent's beliefs?”); and Jackson and Smith, “Absolutist Moral Theories and Uncertainty,” 270 (“… we are in fact talking about what a subject ought to do given their epistemic situation.”).

23 For example, Gibbard, Wise Choices, Apt Feelings, 42; Brandt, Ethical Theory, 365; and Hospers, Human Conduct, 217.

24 Some authors offer Definitions (1) and (2) as definitions of the concepts of subjective rightness/wrongness, while other authors seem to assume (without stating them) some more general definitions of these concepts, and offer (1) and (2) as substantive rules for determining which acts are subjectively right or wrong. My discussion will focus on (1) and (2) as proposed definitions.

25 Zimmerman, “Is Moral Obligation Objective or Subjective?” 334; Zimmerman takes the example from Jackson, “Decision-Theoretic Consequentialism,” 462–63.

26 Note that it would not help Definition (1) to rephrase it along “Reasonable Belief” lines as “Act A is subjectively right just in case A is the act which it would be reasonable for the agent to believe to be most likely to be objectively right, and A is subjectively wrong just in case A is not the act which it would be reasonable for the agent to believe to be most likely to be objectively right.” Adverting to what it is reasonable (etc.) for the agent to believe does not enable Definition (1) to escape the problem just discussed.

As several writers have noted, there are cases in which an act that is certain to be objectively wrong is nonetheless one of those that would be subjectively right: see Regan, Donald, Utilitarianism and Co-operation (Oxford: Oxford University Press, 1980), 264–65; and Jackson, “Decision-Theoretic Consequentialism,” 462–63. We can see such a case if we add drug Z to Strong Medicine, and in Situation S*, drug Z would completely cure the patient, but in Situation S, drug Z would kill the patient (the opposite of drug Y in these situations). Then giving drug X is certain to be objectively wrong, because in Situation S, drug Y would be better, whereas in Situation S*, drug Z would be better.
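A worked version of the case may help; the probabilities and values below are illustrative assumptions of mine, not figures given in the original example. Suppose the agent divides her credence equally between Situation S and Situation S*, and suppose drug X is certain to effect a partial cure (value 80) in either situation, a complete cure has value 100, and the patient's death has value 0. Then:

$$
\begin{aligned}
EV(\text{drug } X) &= 0.5(80) + 0.5(80) = 80,\\
EV(\text{drug } Y) &= 0.5(100) + 0.5(0) = 50,\\
EV(\text{drug } Z) &= 0.5(0) + 0.5(100) = 50.
\end{aligned}
$$

On these assumed figures, giving drug X maximizes expected value, and so counts as subjectively right on an expected-value account, even though it is certain to be objectively wrong: drug Y would be objectively better in Situation S, and drug Z in Situation S*.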

27 Of course, it is possible that some advisor might simply inform Sue what the expected values of her options are, relieving her of the need to make these calculations. Regrettably, such advisors are thin on the ground for agents making complex decisions.

28 For a graphic description of these problems, see Feldman, “Actual Utility,” 49–79. Note that these problems arise whether the definition or principle of subjective rightness is phrased in terms of objective probabilities or subjective probabilities. Even if it is always possible for an agent to elicit his own subjective assignments of probability, he may not have time to do this before a decision must be made. (Of course, an agent might believe that some act would maximize expected value without having made any calculations.)

To be sure, decision theorists have proven that any decision-maker whose decisions conform to certain rationality postulates governing his subjective probability assignments and his choices over uncertain prospects will necessarily choose the action that maximizes his own expected value. For a classic presentation, see Luce, R. Duncan and Raiffa, Howard, Games and Decisions (New York: John Wiley and Sons, 1957), chapter 2. But these subjective values and probability estimates are latent dispositions to make choices in certain situations; the agent himself cannot know what these values and estimates are without a good deal of work. Prior to doing that work, he does not have the information necessary to consciously apply the principle advising him to maximize expected value. Moreover, there is no guarantee that his subjective values (revealed by an array of choices) are actually identical to the moral value that he consciously seeks to maximize in making the present decision. In any event, we are interested in providing a decision-maker with normative advice on how to proceed in choosing his action. To be told that he will, if rational, inevitably select the action that maximizes his expected value provides him with no moral guidance.
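As a concrete illustration of how much explicit information conscious application of the maximize-expected-value principle requires, here is a minimal sketch (my own illustration, not drawn from this essay or from Luce and Raiffa; the option names, credences, and values are hypothetical) of the bookkeeping the agent would need already in hand: a credence for each relevant state of the world and a moral value for each option in each state.

```python
# Minimal sketch of consciously applying "maximize expected value."
# All names and numbers below are hypothetical placeholders.

credences = {"state 1": 0.7, "state 2": 0.3}   # agent's explicit subjective probabilities

# Moral value the agent explicitly assigns to each option in each state.
values = {
    "option A": {"state 1": 60, "state 2": 10},
    "option B": {"state 1": 40, "state 2": 90},
}

def expected_value(option: str) -> float:
    """Probability-weighted sum of an option's values across the states."""
    return sum(credences[state] * value for state, value in values[option].items())

for option in values:
    print(option, expected_value(option))        # option A: 45.0, option B: 55.0

best = max(values, key=expected_value)
print("maximizes expected value:", best)         # option B
```

The point in the text is precisely that a real agent often lacks these explicit credences and values at the moment of decision, so the guarantee that her choices would, if rational, conform to such a calculation does not by itself guide her.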

29 I argue elsewhere that Definition (2) also fails as a general definition of subjective rightness because it is incompatible with moral theories having certain structures (see my Making Morality Work, manuscript).

Note that it would not help Definition (2) to be restated in the form of a Reasonable Belief definition as “Act A is subjectively right just in case A is the act that it would be reasonable for the agent to believe has the highest expected value, and A is subjectively wrong just in case there is some alternative to A that it would be reasonable for the agent to believe has a higher expected value than A.” Here, too, adverting to what it might be reasonable (justified, etc.) for the agent to believe does not enable Definition (2) to escape the problem just discussed. There may indeed be cases in which the agent's evidence is sufficiently comprehensive that it would be possible to say that the agent (based on that evidence) would be justified in believing that a given act would have the highest expected value. However, there will be many other cases in which the agent's evidence (or the evidence available to him) is not sufficiently comprehensive to justify a belief about which act has the highest expected value. Moreover, at the time a decision must be made, the agent may not believe that he is justified in having any belief about which action would maximize expected value, or may not be able to identify which such belief would be justified (even though he may be so justified). For this reason, too, the agent could not use a principle of subjective rightness endorsed by this version of Definition (2) in order to make his decision.

30 That is, there are no probabilistic facts other than ones in which the probabilities are 1 or 0. But Pete's beliefs cannot be translated into facts such as these.

31 Similar conclusions hold if we interpret “probability” as “epistemic probability.” Thus, Pete's belief might be interpreted as “My credence level is .8 that the elevators will become inoperative.” But there is no way to get from the truth of this belief to a conclusion about what would be objectively right for Pete to do, given that his objective moral code simply tells him to save the lives of the people in the building.

32 Note that it would not help Definition (3) to be restated in the form of a Reasonable Belief theory such as “Act A is subjectively right just in case A would be objectively right if the facts had been as the agent had reason to believe them to be; and A is subjectively wrong just in case A would be objectively wrong if the facts had been as the agent had reason to believe them to be.” What the agent has reason to believe, in many cases, will be probabilistic (as in Pete's case), and so will run into the same problems as the original Definition (3).

33 It might be urged at this point that Definition (4) should be interpreted as identifying the subjectively right act in light of all the agent's beliefs—both his beliefs about his alternative actions, and his beliefs about his own beliefs. But this inclusive set of beliefs would seem to generate two inconsistent answers to what action is subjectively right for him (one arising from the content of his beliefs about the circumstances, and one arising from the content of his beliefs about his own beliefs), so this strategy seems likely to fail. Noting this, however, does call our attention to the fact that we may need to restrict the scope of the agent's beliefs that affect which actions are subjectively right and wrong for him. And, of course, an agent's beliefs about his beliefs about his beliefs about his actions can also be mistaken or uncertain.

34 For a recent influential philosophical discussion of this issue, see Williamson, Timothy, Knowledge and Its Limits (Oxford: Oxford University Press, 2000), chapter 4. Williamson introduced the term “luminous,” which he applies to cases in which we are in a position to know something. For a seminal discussion of the different types of (possible) “privileged access,” see Alston, William, “Varieties of Privileged Access,” American Philosophical Quarterly 8 (1971): 223–41. Although most philosophers (and almost all psychologists) would agree with my statements in the text, there has long been philosophical controversy over this point.

Note that the debate about whether the content of mental states, and in particular beliefs, is “broad” or “narrow” is relevant here as well. If the content of a belief (say, the belief that water quenches thirst) partly depends on matters external to the believer (e.g., whether the common liquid substance is H2O or XYZ), then clearly an agent can be mistaken or uncertain about these external matters, and thus mistaken or uncertain about the content of the beliefs he holds.

35 This case is based on one described in Deweese-Boyd, Ian, “Self-Deception,” Stanford Encyclopedia of Philosophy (October 17, 2006), section 3.0, http://plato.stanford.edu/entries/self-deception/.

36 Of course, we have already seen that such a principle is normatively faulty, but for reasons of simplicity I will use it in this example.

37 Note one complication here. I have described this case, and Allison's beliefs and uncertainties, relative to a particular principle of subjective rightness. But there may be additional principles of subjective rightness that ascribe subjective moral status to actions in light of different beliefs, and Allison might be certain what her beliefs about those matters are, even though she is not certain about the beliefs relevant to the principle in the text. Thus, she could be certain about what this second principle tells her it would be subjectively right to do even though she is not certain about what the original principle tells her. In such a case, her uncertainty about some of her beliefs does not stand in the way of her assigning subjective rightness to one of her actions, because she has certainty about other relevant beliefs. As I will argue later in the text, and have argued elsewhere (Smith, “Making Moral Decisions,” 98–99), each principle of objective rightness needs to be supplemented by a variety of principles of subjective rightness, since agents often need to make a decision even though they may not have all the beliefs required to apply the favored principle of subjective rightness to their circumstances. Thus, an agent would have to be uncertain (or mistaken) about a great many of her beliefs to be in a position in which she could not ascribe any subjective moral status to her potential actions.

It would be possible to define a Reasonable Belief version of Definition (4), along the following lines: “An act A is subjectively right just in case A is best in light of the beliefs it would be reasonable for the agent to have at the time she performs A; and A is subjectively wrong just in case A is not the best act in light of the beliefs it would be reasonable for the agent to have at the time she performs A.” However, this version of Definition (4) also violates the Guidance Adequacy Criterion—indeed, more pervasively than does the original Definition (4)—since agents are often unaware, mistaken, or uncertain about which beliefs it would be reasonable for them to have.

Note finally that the problem for Definition (4) discussed in this section also arises for Definition (3), and for Definition (2) when the agent must assess her own probability and value assignments.

38 This is denied by Jackson and Smith, “Absolutist Moral Theories and Uncertainty,” 269.

39 How precisely to define “to lie” is a complex and controversial issue. For a survey treatment, see Mahon, James Edwin, “The Definition of Lying and Deception,” Stanford Encyclopedia of Philosophy (published February 21, 2008), http://plato.stanford.edu/entries/lying-definition/.

40 For a recent discussion of the assumption that intending to do A always involves believing that one will do A, and references to the literature, see Setiya, Kieran, “Cognitivism about Instrumental Reason,” Ethics 117, no. 4 (July 2007): 649–73. On some views, intending only requires the weaker belief that doing X is likely to result in one's doing A.

41 Of course, criminal and tort law typically define disallowed conduct as including a belief element (e.g., in the definitions of fraud and murder).

42 For a recent defense of the “intentional” version of the Doctrine of Double Effect, see Moore, Michael S., “Patrolling the Borders of Consequentialist Justifications,” Law and Philosophy 27, no. 1 (January 2008): 35–96, as cited in Oberdiek, John, “Culpability and the Definition of Deontological Constraints,” Law and Philosophy 27 (March 2008): 105–22. Of course, the full Doctrine of Double Effect also refers to the side-effects of the agent's action, and to the means to his goal.

43 In this case, to risk something involves believing there is a chance it will occur.

44 Of course, the Biblical command to honor one's parents includes a command to act toward them in certain ways (such as obeying them), but it also seems to involve a command to hold a certain attitude toward one's parents. My comments focus on this latter aspect of the commandment.

There are major issues, of course, about whether such mental activities are appropriate objects for moral duties, since it is unclear to what extent an individual can perform (or avoid performing) the activity voluntarily. The requirement that any duty be one that the agent has the ability to perform “on command” is a common but controversial one; this is not the occasion to discuss it further. See Adams, Robert, “Involuntary Sins,” The Philosophical Review 94, no. 1 (January 1985): 3–32; Feldman, Richard, “The Ethics of Belief,” Philosophy and Phenomenological Research 60, no. 3 (May 2000): 667–95; and Hieronymi, Pamela, “Responsibility for Believing,” Synthese 161, no. 3 (April 2008): 357–73, for defenses of the idea that there can be duties or responsibilities to have certain mental states. Of course, some purely mental “activities” do seem to be ones over which we have the same kind of control that we do over bodily actions: on command, one can search one's memory, do mental arithmetic, review the considerations that favor a certain course of action, etc. In matters of belief, one's mental inquiry or search may be controlled, but not one's mental response to the result of the inquiry.

45 See Mahon, “The Definition of Lying and Deception,” for discussion. Clearly, this condition would be deemed to be relevant to the lie's moral status; eavesdroppers have no right that they not be misled.

46 Wrongful acts such as attempting to harm someone seem to depend on one's beliefs about what one is doing, not (for example) on the objective probability of one's acting in a way that will harm the person. I thank Preston Greene for pointing this out.

47 Note that subjectively right/wrong acts themselves are typically understood to have “objective” features in addition to what the agent believes of them: they must be acts that are potentially performable by the agent, not just figments of the agent's imagination. There may be temporal factors as well, linking the time of the action and the time of the agent's beliefs. If this is correct, then Definition (5) (discussed below in Section IV) must apply to acts having mixed “objective” and “subjective” features. But for discussion of this assumption, see the fifth point in my discussion of Definition (5) below.

48 I am grateful to Preston Greene, who persuaded me of this point.

49 This point is further enforced by the fact that many Remodeling theorists have advocated, as principles of objective rightness, principles with exactly the same content as principles advocated by others as principles of subjective rightness (e.g., “One ought to maximize expected utility”). Examination of the right-making feature identified by this principle does not tell us whether it is a principle of objective or subjective rightness.

50 I have argued for the necessity of a hierarchy of principles of subjective rightness in my essays “Making Moral Decisions,” and “Deciding How to Decide: Is There a Regress Problem?” in Bacharach, Michael and Hurley, Susan, eds., Essays in the Foundations of Decision Theory (Oxford: Basil Blackwell, 1991), 194–219. For decision theorists' discussions of the need for multiple decision-guides, see Coombs, Clyde C., Dawes, Robyn M., and Tversky, Amos, Mathematical Psychology (Englewood Cliffs, NJ: Prentice-Hall, 1970), chapter 5; and Resnik, Michael, Choices (Minneapolis: University of Minnesota Press, 1987), 40.

51 “Right-making” is here construed as “all-things-considered right-making.” A parallel version of Definition (5) could be stated for “prima facie right-making” (and similarly for “wrong-making”).

52 Note that there may be cases in which an agent has “mixed” types of beliefs. For example, the agent might believe that he has several options (e.g., A, B, and C), and might be certain that A has a wrong-making feature according to Q, but uncertain whether B or C has right-making or wrong-making features. Definition (5) needs to be revised to accommodate such cases more cleanly.

53 Strictly speaking, it is not principle Q itself (“A is right if and only if A has F”) that serves as the principle of subjective rightness, but a version of this principle stated in terms of “if” rather than “if and only if.” This change is necessary to accommodate the fact that there may be more than one principle of subjective rightness. Note that Definition (5) leaves open whether the most appropriate principle of subjective rightness for an agent who has sufficiently rich beliefs to apply Q itself is principle Q itself (e.g., “A is right if A has F”) or a “subjectivized” version of Q that includes overt reference to the agent's beliefs (e.g., “A is right if the agent believes that A has F”). This means that Definition (5)'s clause “relative to principle Q and relative to the agent's non-normative beliefs” can be satisfied in either of two ways: the agent's beliefs can figure as part of the subjectively right-making features of the action stipulated by the principle (as is true in the subjectivized version of Q), or the agent's beliefs can figure as part of the conditions that make it appropriate to evaluate an action by a principle that specifies subjectively right-making characteristics that themselves involve no reference to the agent's beliefs. By virtue of this clause in Definition (5), every acceptable principle of subjective rightness will evaluate actions relative to the agent's beliefs.

54 I have argued above that one cannot determine that a normative principle is a principle of subjective rightness just by ascertaining that the right-making features it identifies refer to the agent's beliefs (since some principles of objective rightness also identify right-making features that refer to the agent's beliefs). In parallel, we can now note that it is not possible to infer that a principle of subjective rightness must identify right-making features that refer to the agent's beliefs. If a principle of objective rightness Q can serve as a principle of subjective rightness relative to itself in a case in which the agent believes of some act that it has the right-making feature identified by Q (and this feature does not refer to the agent's beliefs), then Q, in its guise as a principle of subjective rightness, does not identify right-making features that refer to beliefs. (See the previous note.) We also know this from theorists who argue that the best principles of subjective rightness for act-utilitarianism may be the rules of common-sense morality, which have no reference to the agent's beliefs. See the discussion below under the third implication of Definition (5).

Note also that a given normative principle might have unique features that make it an appropriate principle of subjective rightness for a single principle of objective rightness. Other normative principles may be appropriate for many principles of objective rightness.

55 Since, according to Definition (5), a principle of subjective rightness P prescribes actions relative to Q and relative to the agent's non-normative beliefs, the agent's beliefs form part of the basis for the subjective moral status of the agent's actions. This is true whether or not the principle of subjective rightness overtly stipulates that the agent's beliefs are part of the subjective-rightness-making features of the actions.

There is a question whether we should make subjective rightness rest on the agent's beliefs, or on all the agent's doxastic states, or on the agent's doxastic states together with relevant sub-doxastic states. We should certainly include the agent's credences—his degrees of belief in something. (Note that the line between “believing P” and “having credence C (very high, but less than 1.0) in P” is not a clean one, and, hence, the line between what it is best to choose in light of one's mistaken beliefs, and what it is best to do in light of one's uncertainties, may not be clean either.) We should probably include the agent's suspension of belief about some issues. But what about his unconscious or merely latent “stored” beliefs? I suspect these should not be included, since the agent may have no access to them, and by hypothesis is not aware of them at the time of decision. Thus, the agent is not in a position to consciously guide his decision in light of these unconscious beliefs. However, further work on this issue is needed. If such unconscious stored beliefs play a causal role in agents' decision-making, it is less plausible to deny them a role in what is subjectively right for the agent. (For example, the agent may not have a conscious belief that the floor under his feet is solid, but this unconscious belief may play a causal role in his decision to step forward.)

It would be natural to think that Definition (5) should be phrased in terms of the agent's non-normative beliefs about her action. However, some facts that are taken by many moral codes to be relevant to an action's moral status may not be conceptualized by agents as facts about the action, so it seems best not to restrict the content of the agent's non-normative beliefs any further.

56 What should be said about a case such as the following? Suppose the best principle of subjective rightness prescribes the act that, according to the agent's beliefs, would maximize expected value. Let us stipulate that Sue, in Strong Medicine (described in Section II.C–D), believes the facts described in the middle two columns of table 2, but lacks any beliefs about the facts stated in the right-most column (which describes the expected values of her options). So Sue has no belief of any action that it would maximize expected value, although the fact that giving Ron drug X would maximize expected value is entailed by her other non-normative beliefs.

I believe adherence to the Guidance Adequacy Criterion implies that we should interpret Definition (5) not to imply in such a case that Sue's giving Ron drug X would be subjectively right—since Sue herself does not believe of this act that it would maximize expected value. Although the contents of Sue's beliefs may entail that giving Ron drug X would maximize expected value, nonetheless she herself does not see this, since she has not derived the logical implications of her own beliefs. Perhaps in the next moment she will derive these implications. Definition (5) implies that it would then be subjectively right for her to give Ron drug X. The situation at the earlier time is a case in which the logical link between the contents of beliefs Sue does have and the content of the belief that would enable her to apply a given principle of subjective rightness is short and direct, so one may balk at refusing to say that giving Ron drug X would be subjectively right for Sue. However, there are other cases in which the link—although just as tight—is distant and obscure, and we are hardly surprised that the agent does not observe this link. In both cases, since we are focusing on what it is subjectively right for the agent to choose at ti, we need to focus on what her actual beliefs at ti would support.

57 However, this matter is complicated. In certain pathological cases, where the agent adheres to an erroneous ethical theory, his action in accord with the absolutely subjective right-making characteristics may be blameless, even though he himself views his action as wrong and blameworthy. See Bennett, Jonathan, “The Conscience of Huckleberry Finn,” Philosophy 49, no. 188 (April 1974): 123–34. Moreover, since an agent can be criticized for performing an action that he believes to be subjectively right, but performs for the “wrong reason” (e.g., not because it is subjectively right but because it will harm his enemy), the tie cannot be as close as the text suggests. Note also, as Preston Greene points out, that the luminosity-of-beliefs problem also crops up in connection with such a definition of blameworthiness.

58 See, for example, John Stuart Mill, Utilitarianism, chapter II; Sidgwick, The Methods of Ethics, chapters III, IV, and V; Smart, “An Outline of a System of Utilitarian Ethics,” section 7; Hare, R. M., Moral Thinking: Its Levels, Method, and Point (Oxford: Clarendon Press, 1981), esp. section I.3 (“The Archangel and the Prole”); Shaw, Contemporary Ethics, 145–50; and perhaps Railton, Peter, “Alienation, Consequentialism, and the Demands of Morality,” in Railton, Peter, ed., Facts, Values, and Norms (Cambridge: Cambridge University Press, 2003), 165–68. For relevant contemporary discussion in psychology, see Gigerenzer, Gerd, Todd, Peter M., and the ABC Research Group, Simple Heuristics That Make Us Smart (New York: Oxford University Press, 1999).

59 This is a common (but not the only) account of what makes a principle of subjective rightness appropriate to an underlying principle of objective rightness.

60 This will be relevant to discussions of free will and moral responsibility when the agent could do no other than what she does, as in “Frankfurt-style” cases, originally described by Harry Frankfurt in “Alternate Possibilities and Moral Responsibility,” Journal of Philosophy 66, no. 23 (December 4, 1969): 829–33.

See Graham, “‘Ought’ Does Not Imply ‘Can’,” 4, for discussion of the fact that an act may be subjectively right even though the agent cannot perform it (although Graham dismisses the need for a concept of subjective rightness).

61 Possibly there will be agents whose belief sets, or mental capacities, are so impoverished that no principle of subjective rightness can assess which action would be best for them. This, however, is not a problem reflecting any inadequacy in Definition (5).

62 Definition (5), like some of the others we have reviewed, opens the question whether “subjective rightness” should be restricted, as most discussions have restricted it, to the moral status of an action relative to the agent's beliefs at the time of choice. Advisors and onlookers may also have beliefs in virtue of which they appraise the agent's action (or prospective action). The agent himself may have different beliefs at different times (both before and after the action) relative to which the action can be appraised. The agent may gradually gain more information in the run-up to the action, in virtue of which its “subjective” status changes; and he may gain more information after having acted, in virtue of which the action's “subjective” status may change and he may regret having chosen it. Given the importance of these additional assessments, it would be both possible and perhaps useful to broaden the definition of “subjective rightness” so that it is relative to any given set of beliefs-at-a-time. However, for purposes of this essay I will leave subjective status as defined in terms of the agent's beliefs (implicitly) at the time of choice.

63 Note that an act may be subjectively right at ti (because it is prescribed by the highest principle of subjective rightness the agent can use at ti) even though the agent does not ask himself at ti the question of whether to perform the action, or whether it would be subjectively right to perform the action.

Definition (6) would have to be further developed to handle cases (such as the Regan-type case, described in note 26) in which the agent has mixed information about his various possible options—for example, having beliefs about what the expected value of some acts would be, but not having any beliefs about the expected value of other acts.

64 Bales, “Act-Utilitarianism,” 261.

65 There is a highly developed literature on rule-following that focuses on questions somewhat distinct from those at issue in this essay. See, for example, Railton, Peter, “Normative Guidance,” in Shafer-Landau, Russ, ed., Oxford Studies in Metaethics, 1 (Oxford: Oxford University Press, 2006), 3–34.

66 That is, slightly revising Alvin Goldman's definition of “ability to perform an act” in the epistemic sense, S believes (doubtless expressed in her own concepts) that

(1) There is an act-type A* which S truly believes at ti to be a basic act-type for her at tj;

(2) S truly believes that she is (or will be) in standard conditions with respect to A* at tj; and

(3) either

  (a) S truly believes that A* = A, or

  (b) S truly believes that there is a set of conditions C* obtaining at tj such that her doing A* would generate her doing A at tj.

See Goldman, Alvin I., A Theory of Human Action (Englewood Cliffs, NJ: Prentice-Hall, 1970), 203. Roughly speaking, a person is in standard conditions with respect to an act property just in case (a) there are no external physical constraints making it physically impossible for the person to exemplify the property, and (b) if the property involves a change into some state Z, then the person is not already in Z. See ibid., 64–65. Note that on Definition (8) the agent believes that she truly believes there is a basic act-type for her, etc., but she may be wrong about what she believes and whether her belief is true.

Further complications would have to be introduced to deal with cases in which the agent is uncertain whether some act is one she can actually perform, and to deal with deviant causal chain cases.

67 One would want variants on this for actions that are forbidden, but since our main focus is on an agent's deciding what to do (not just what not to do), in the interests of shorter exposition I will omit these variants.

68 Note that the principles of subjective rightness are phrased as sufficient conditions (“… if …”) rather than as necessary and sufficient conditions (“… if and only if …”). This phrasing is needed to accommodate the fact that there may be many principles of subjective rightness, so each can only offer a sufficient (but not necessary) condition for an act's being a candidate for being subjectively right.

69 In point (1) of this list of eight points, we construed the case as one in which principle P is not usable by S, since she does not believe that she believes of any act that it would maximize value. But, alternatively, the psychology of the case could be such that P is usable by S, since, given that S actually does believe of act A that it would maximize value, she might (to her surprise) derive a prescription for A from P. On this construal, the case would turn out as follows:

(1′) Principle P is usable by S, since she believes of act A that it would maximize value, and if she wanted to derive a prescription from P she would do so, in virtue of this belief.

(2′) Thus, Principle P is the highest usable principle of subjective rightness for S.

(3′) In light of her information, S is in a position to conclude that P is the highest principle of subjective rightness usable by her.

(4′) Hence, S is in a position to conclude that act A is subjectively right, since she is in a position to conclude that A is prescribed by the highest usable principle of subjective rightness relative to Q.

(5′) Act A is prescribed by the highest usable principle of subjective rightness, and so is subjectively right.

On this alternative construal of this case, S is also able to use one of the principles of subjective rightness for Q as an internal decision guide.

Note that if (3′) (“S is in a position to conclude that P is the highest principle of subjective rightness usable by her”) is false, then S would mistakenly conclude that A is not subjectively right.

70 Note that S could have mistaken normative beliefs (she might not believe MT-1 contains the correct principle of objective rightness, or she might mistakenly believe that principle of subjective rightness R is higher than principle P, or she might not be able to grasp any or some of these principles). These cognitive errors, too, may lead her astray in various ways. These are complications I explore in Making Morality Work.

71 Similarly, the analysis can be duplicated for moral theories that are subjectivized, i.e., ones in which principles such as P explicitly refer to the agent's beliefs as grounds for the subjective status of the action, as in “An act Y is a candidate for being subjectively obligatory if the agent believes that Y would maximize value.” See note 53 for discussion of “subjectivizing” a moral principle.

72 Note that a version of Definition (5) phrased in terms of the beliefs S actually has that are reasonable would not be tenable, since many agents would have no reasonable beliefs relevant to the choice they must make, and yet still need guidance in making that choice.

73 For every moral theory, there may be a “bottom-level” principle of subjective rightness—the lowest principle in the hierarchy, to be used when the agent completely lacks any relevant information about his prospective acts. It is plausible that, for MT-1* (or any moral theory), the bottom-level principle should designate as morally permissible any act the agent can perform, since, by hypothesis, the agent has no way to rule out any act as inconsistent with the values of the principle of objective rightness. Thus, the bottom-level principle of subjective rightness for MT-1* would be “An act W is a candidate for being subjectively permissible if W is an act that it would be reasonable for the agent to believe he can perform.” Such a principle makes very limited cognitive demands on an agent. Nonetheless, it makes more demands than the parallel principle for MT-1 (“An act W is a candidate for being subjectively permissible if W is an act that the agent believes he can perform”), since it still requires that the agent have beliefs about what it is reasonable for him to believe—and many agents may not have such beliefs, either because they are not thinking about what it is reasonable for them to believe, or because they are uncertain what it is reasonable for them to believe. Thus, even when it is augmented by such bottom-level principles, MT-1* is less widely usable than MT-1.

74 One of the major arguments in favor of defining subjective rightness in terms of beliefs that it would be reasonable to have, rather than in terms of actual beliefs, is that “reasonable beliefs” rather than “actual beliefs” are arguably the beliefs most relevant to the agent's blameworthiness. This position on blameworthiness is itself controversial. I would argue that it is incorrect: while it is true that an agent may be blameworthy for not making the inquiries she could and should have made (or for not drawing the correct conclusions from her evidence), it does not follow from this that she is blameworthy for making the choice that appears best in light of the directly relevant beliefs she actually has at the time of decision. The role of principles of subjective rightness is to provide her with the guidance she needs and can use at the time she must make her decision, not the guidance that a better agent could use. For further discussion, see my “Culpable Ignorance,” The Philosophical Review 92, no. 4 (October 1983): 543–71. But even a theorist who holds that the blameworthiness of an agent depends on the beliefs it would be reasonable for her to have (as opposed to those she actually has) should still accept the original Definition (5) of subjective rightness, since it—but not Definition (5)*—provides autonomy to agents seeking to guide their decisions by reference to their potential acts' moral value. This theorist can then define “blameworthiness” in terms, not directly of the agent's performing what she believes to be the objectively or subjectively right act, but rather in terms of the agent's performing what a reasonable agent would have believed to be the objectively or subjectively right act. This conception needs further refinement, however, since surely an agent may blamelessly choose an act while mistakenly (but perhaps reasonably) believing it to be what a reasonable person would have believed to be subjectively wrong.

* I am grateful for discussion on these topics to participants in my graduate seminar during the spring of 2008, and in particular to Preston Greene, who convinced me that principles of objective rightness might include reference to the agent's beliefs. I am also grateful to the other contributors to this volume (especially Mark Timmons) for helpful discussion, as well as to the participants (especially Evan Williams and Ruth Chang) in the Rutgers University Value Theory discussion group, the participants in Elizabeth Harman's 2009 ethics seminar, the participants in the 2009 Felician Ethics Conference (especially Melinda Roberts), the participants in the 2009 Dartmouth workshop on Making Morality Work (Julia Driver, Walter Sinnott-Armstrong, Mark Timmons, and Michael Zimmerman), and to Nancy Gamburd, Alvin Goldman, Preston Greene, and Andrew Sepielli for comments on earlier versions of this essay. Ellen Frankel Paul provided welcome encouragement to clarify a number of key points.