
Two-Tier Moral Codes

Published online by Cambridge University Press:  13 January 2009

Holly M. Smith
Affiliation:
Philosophy, University of Arizona

Extract

A moral code consists of principles that assign moral status to individual actions – principles that evaluate acts as right or wrong, prohibited or obligatory, permissible or supererogatory. Many theorists have held that such principles must serve two distinct functions. On the one hand, they serve a theoretical function, insofar as they specify the characteristics in virtue of which acts possess their moral status. On the other hand, they serve a practical function, insofar as they provide an action-guide: a standard by reference to which a person can choose which acts to perform and which not. Although the theoretical and practical functions of moral principles are closely linked, it is not at all obvious that what enables a principle to fill one of these roles automatically equips it to fill the other. In this paper I shall briefly examine some of the reasons why a moral principle might fail to fill its practical role, i.e., be incapable of guiding decisions. I shall then sketch three common responses to this kind of failure, and examine in some detail the adequacy of one of the most popular of these responses.

Type: Research Article
Copyright © Social Philosophy and Policy Foundation 1989


References

1 See Smith, Holly, “Making Moral Decisions,” Noûs, vol. 22 (1988), pp. 91–92, for further discussion of the kinds of usability.

2 Regan, Donald H., Utilitarianism and Co-operation (Oxford: Clarendon Press, 1980), pp. 165–66.

3 If the authority is reliable, the agent may even know that he ought to vote for the Democratic candidate.

4 See Smith for an account of the adequacy of the most popular technique for surmounting this problem, namely supplementing moral principles with auxiliary decision-guides or “rules of thumb” designed to deliver prescriptions when agents possess probabilistic information at best.

5 Lyons, David, The Forms and Limits of Utilitarianism (Oxford: Clarendon Press, 1965), p. 159.

6 Rawls, John, A Theory of Justice (Cambridge: Harvard University Press, 1971), p. 132, and “Construction and Objectivity,” The Journal of Philosophy, vol. LXXVII (September 1980), p. 561.

7 Parfit, Derek, Reasons and Persons (Oxford: Clarendon Press, 1984), p. 5. Parfit applies his remarks to self-interest principles, not to patently moral principles.

8 Mill, John Stuart, Utilitarianism (Indianapolis: The Bobbs-Merrill Company, Inc., 1957), pp. 30, 32. My emphasis. It is not wholly clear that Mill had in mind by “subordinate principles” precisely what I do here.

9 Certain moral codes, such as utilitarianism, are often criticized on the ground that they demand too much of mere human beings by way of motivation: they require us to perform acts involving so much sacrifice of our own interests that no one could possibly be motivated to adhere to such principles. This is a criticism about the “strains of commitment.” Notice that the same three responses that I have just outlined to problems of cognitive deficiency could also be proposed as responses to problems of motivational deficiency.

10 Prichard, H.A., “Duty and Ignorance of Fact,” in Prichard, H.A., Moral Obligation and Duty and Interest (London: Oxford University Press, 1968), pp. 18–39.

11 Williams, Bernard, “Moral Luck,” in Williams, Bernard, Moral Luck (Cambridge: Cambridge University Press, 1981), p. 21.

12 Warnock, G.J., The Object of Morality (London: Methuen and Co., Ltd., 1971), p. 26.

13 Notice, however, that there seems no reason to demand that M* itself avoid the Problem of Error. That is, agents may make mistaken applications of M*, so long as their doing so does not lead them to violate M itself.

As Eric Mack pointed out in a discussion of this paper, there may be a difficult equilibrium problem in constructing coextensive pairs of M and M*, at least in cases where M is consequentialist. What concrete actions a consequentialist M requires depends on the specific historical context, which includes the nature of the moral code believed by the general population. Thus if the population believes code C, M may require agent S to perform act A (since it would lead C-believers to pursue certain courses of action), while if the population believes code C', M may require agent S instead to perform act B (since it would lead C'-believers to pursue different courses of action than they would have had they believed in C). Hence to identify the relevant M*, we cannot simply start with M and ask what code would be coextensive with it; instead we have to start with M and a possible concrete historical context, including general belief in a given code, and ask whether that code is coextensive with M under those conditions. If not, we look at a different possible historical context and ask the parallel question, until finally we have found a matching pair. This may not be an easy task.

In this paper I am confining my attention to first-tier moral codes (i.e., candidates for M) that are purely behavioral: that is, they prescribe actions characterized solely in behavioral terms, not actions partly characterized in terms of the agent's beliefs, intentions, or other motivational states. Without this restriction it would be difficult or impossible to construct a coextensional M*, at least if that required the agent to have the same mental state as that required by M, as well as to perform the same bit of behavior required by M.

14 Sidgwick, Henry, The Methods of Ethics, 7th ed. (Chicago: The University of Chicago Press, 1962), pp. 489–90.

15 Parfit, sec. 17.

16 In this paper I will focus primarily on the capacity of M* to secure the same pattern of action as M. Of course, on many views, M and M* would need to be compared on other grounds. For example, M* might be more costly overall to social welfare than M because it would be so difficult to teach; or M* might actually secure fewer right actions than M because even though people would be infallible in applying it, it would be far less capable of eliciting allegiance than M, and so produce less actual compliance. For the most part I shall leave these issues aside.

It is worth pointing out here, however, that a kind of two-tier morality (with a version of utilitarianism as the first tier, and a set of deontological rules as the second tier) has sometimes been proposed as a technique for avoiding normative objections to act-utilitarianism. Thus it is claimed that act-utilitarianism erroneously requires (for example) a sheriff to convict and punish an innocent person in order to avert race riots. This counter-intuitive result, it is said, can be averted by a system of rules prohibiting punishment of the innocent. Such a system allegedly could be justified on general utilitarian grounds, even though it would not prescribe every utility-maximizing individual act. This type of rationale for a two-tier system is not compatible with the kind of rationale I am exploring. The rationales explored in this paper assess a second-tier rule as better insofar as the acts it prescribes match those prescribed by the first-tier principle, while the normative-objection rationale only succeeds if the second-tier rules sometimes deliver prescriptions that diverge from those of the first-tier principle. I am grateful to Julia Annas for calling this point to my attention.

17 Williams, Bernard, “A Critique of Utilitarianism,” in Smart, J.J.C. and Williams, Bernard, Utilitarianism: For and Against (Cambridge: Cambridge University Press, 1973), pp. 138–39.

18 More accurately, the two moralities are coextensional except for the cases in which it is the populace's misapplication of M* which would lead them to do or want what M requires. But in these cases what the populace mistakenly thinks is required by their theory is what is actually required by the rulers' theory M, so there will be no conflict between the populace and the rulers on the moral character of the policies in question.

19 Rawls, Theory of Justice, p. 133. Rawls traces the history of the condition to Kant.

20 Ibid. See also the Dewey Lectures (The Journal of Philosophy, vol. LXXVII (September 1980)), where this idea is developed in more detail.

21 There may be some disputes between members of the elite that must be carried out in full view of the general populace. In such cases, the elite cannot overtly appeal to M. However, they will be content to appeal to M* itself, since they know it generates the same prescriptions as M. Complexities might arise if the case in question is one in which the general populace would, through some erroneous factual belief not shared by the elite, derive an “incorrect” prescription from M* – a prescription that actually accords with what M itself prescribes (see note 13). In such a case, the elite would have to feign the same factual beliefs as the general populace.

22 Avoiding these bad effects may not be as simple as the text suggests. So far I have spoken as though both M and M* governed the actions of both the elite and the general population. Technically, however, M* need only govern the actions of the general population (since they are the only ones subject to the Problem of Error). Nonetheless, if M* failed to address the activities of the elite, it would be difficult to persuade the general population that such an incomplete M* was the genuine theoretical account of right and wrong. Hence M* must probably be constructed to govern the activities of all. Now, it is logically possible that the actions required by M for the two groups differ. For example, it might turn out, according to M, that the general population ought never to lie, while it is permissible for the elite to lie under circumstances C (which never arise for the general population). Hence M* might be constructed to contain two components, M*(GP), which forbids the general population to lie, and M*(E), which permits the elite to lie under circumstances C. But it would probably be more psychologically effective to construct a coextensional M* which permitted anyone to lie so long as they found themselves in circumstances C. Thus, the general population would know that they, too, could lie if they ever were in circumstances C. (But suppose ‘circumstances C’ = ‘being an elite when the general population needs to be misled about the true moral code in order to avoid the Problem of Error’. An M* containing a clause referring to such a C would certainly tend to undermine the system as a Benighted Agent solution.)

23 An interesting proposal, somewhat along these lines, has been suggested by Smith, Nigel (see “Enchanted Forest,” Natural History, vol. 92 (August 1983), pp. 14–20). Smith recounts the (patently false) superstitious beliefs that prevent the rural populations of the Amazon basin from destroying the jungle ecological system, and recommends “tapping” these folk beliefs in order to strengthen official conservation efforts.

24 See, for example, Herman, Barbara, “The Practice of Moral Judgment,” The Journal of Philosophy, vol. LXXXII, no. 8 (August 1985), p. 431.

25 See, for example, Nozick, Robert, Philosophical Explanations (Cambridge: Harvard University Press, 1981), p. 284; see also pp. 323–26. But see Goldman, Alvin, Epistemology and Cognition (Cambridge: Harvard University Press, 1986), p. 98.

26 Nozick, p. 321, speculates that the only way an “action can track an evaluative fact is via … the person's knowledge of the fact.” But our case is one in which there is a counterfactual connection between the evaluative facts (specified by M) and their M* counterparts. So a person's belief in M* would enable her actions to “track” the genuine evaluative facts identified by M.

27 Ryle, Gilbert, “Forgetting the Difference Between Right and Wrong,” in Melden, A. I., ed., Essays in Moral Philosophy (Seattle: University of Washington Press, 1958), pp. 147–59.

28 Kant, Immanuel, Foundations of the Metaphysics of Morals (Indianapolis: Bobbs-Merrill Company, Inc., 1959).

29 More accurately: the act-types must either be ones with respect to which the agents are infallible, or else such that the agent who wants to perform an act of that type will in fact perform the act prescribed by M itself.

30 In an alternate terminology: any act of an M-significant type is on the same act-tree with many acts of different types.

31 I assume here that if the agent is able to perform the act at all, then there is some description of it under which the agent's desiring to perform it would lead to his performance of that act. This may be too strong. There might be cases in which no correct description of the act would elicit its performance. (Consider the familiar finger game in which the fingers of both hands are entangled in such a way that one becomes confused as to which fingers belong to which hand. In these circumstances, wanting to straighten the first finger of one's left hand will elicit straightening the first finger of one's right hand, but no accurate description of this act will elicit it.) I shall ignore such cases in the discussion in the text; they imply that a thorough list might need to include misdescriptions of the actions to be performed.

32 But see note 13.