
Advisors and Deliberation

The Journal of Ethics

Abstract

The paper has two goals. First, it defends one type of subjectivist account of reasons for action—deliberative accounts—against the criticism that they commit the conditional fallacy. Second, it attempts to show that another type of subjectivist account of practical reasons that has been gaining popularity—ideal advisor accounts—is liable to commit a closely related error. Further, I argue that ideal advisor accounts can avoid the error only by accepting the fundamental theoretical motivation behind deliberative accounts. I conclude that ideal advisor accounts represent neither a substantial departure from, nor a substantial improvement upon, deliberative accounts.


Notes

  1. This is commonly known as the “internalism requirement” after Korsgaard (1986). As I will note shortly, this “basic thought” is not essential to deliberative accounts per se. It’s not the connection between deliberation and reasons that defines these accounts. Rather, it is the connection between reasons and the deliberative standpoint that is defining.

  2. Wallace (1990) calls this the “desire out/desire in” principle.

  3. One might question whether there is anything in McDowell’s account that commits him to the importance of the first-personal perspective. I think there is. After all, McDowell’s reason for denying the importance of an agent’s being able to deliberate his way to the proper motivations is that deliberation is not necessary for an agent to “consider matters aright” (McDowell 1995, p. 100). But, according to McDowell, being able to see things aright is necessary for an agent to have a reason. And “seeing things aright” is clearly a matter of how things appear (albeit correctly) from the agent’s point of view.

  4. For a more extensive discussion of the “conditional fallacy” and its commission in other areas of philosophy, see Shope (1978).

  5. Although Sobel is discussing the “fragility” of the connection between reasons and motivation (he obviously has a Williams-style account in mind), it is clear that his objection, if successful, will work against the more general claim of deliberative accounts, namely that there is a connection between an agent’s reasons and his ability to recognize them. After all, the reason that the idealized agent is not motivated by his reason to go to the library or taste the food is that, once idealized, he no longer has that reason, and so there is no reason for him to recognize.

  6. I rely on the straightforward and convincing suggestion in Mark van Roojen (2000). Some might object that his solution assumes that, in deciding what counts as “relevantly informed,” we already need to know the agent’s reasons and the basis for them. But, the objection continues, this is just what an account of reasons is supposed to yield, and so is not something the account should need as input. The objection simply assumes that an account of reasons is supposed to answer the question “What does X have reason to do?” and not “Is R a reason for X to ϕ?” I think there is nothing wrong with an account that sets out to answer only the second, more modest question. After all, the typical question we pose in practical deliberation is “Should I ϕ?” or “Is R a reason to ϕ?” not “What do I have reason to do?” where we do not have a limited set of options in mind. Indeed, the completely general “What do I have reason to do?” is most naturally heard as an expression of something like existential despair. (Thanks to Sobel for discussing these issues with me.)

  7. A point made by Robertson (2003).

  8. Or better, to lessen his reason.

  9. Consider Williams’s petrol and tonic case. The suggestion is not that the agent could deliberate to the conclusion that there is petrol in the glass. Rather, the suggestion is only that, if the agent had a true belief, he would not be motivated to drink the contents of the glass. It is the possibility of deliberation given correct beliefs that grounds the agent’s reasons, according to Williams.

  10. One final case. Mark Andrew Schroeder (2007) has recently urged a counterexample against deliberative accounts, and again, the example seems to rely on the idea that deliberative accounts commit the conditional fallacy:

    Nate loves successful surprise parties, but can’t stand unsuccessful surprise parties. If there is an unsuspected surprise party waiting for Nate in the living room then plausibly there is a reason for Nate to go into the living room. There is certainly something that God would put in the “pros” column in listing the pros and cons of Nate’s going to the living room. But it is simply impossible to motivate Nate to go into the living room for this reason—for as soon as you tell him about it, it will go away. Nate’s case looks to me like a counterexample to many strong theses about the connection between reasons and motivation (pp. 165–166).

    Schroeder’s case is structurally similar to Millgram’s. Both cases seem to involve what we might call “self-falsifying beliefs.” And in both Schroeder’s and Millgram’s examples the self-falsifying beliefs involve a reference to the believer, e.g., “I am insensitive,” or “I will be surprised by the party in the living room.” To believe those (otherwise) true claims is to render them false. (Though, as we just saw, there is little reason to believe that Millgram is right in thinking “I am insensitive” is self-falsifying.)

    Can we handle Schroeder’s case in the way we handled Millgram’s? We can certainly supply Nate with a modest epistemic improvement that, given his existing desires, would yield the appropriate motivation to go to the living room. For example, he could come to believe that there was an event he would enjoy in the living room. That he would enjoy the surprise party does give part of the reason he in fact has for attending it. Compare our response here to our handling of Sobel’s example of the “singular taste.” We do not need to say that, before the agent eats the food, it is consideration of the taste itself (in its full phenomenological glory) that provides the agent with reason to try it. After all, that is not a consideration available to the agent. Rather, it is the fact that he will be glad that he tried it that provides his reason. (That is standardly what we have in mind when we are encouraging the culinary naïf.) Similarly, we might say it is the fact that there is an event he would enjoy that gives Nate a reason to venture into the living room. We do not need to say that his reason is that there is a surprise party awaiting him.

    Could Schroeder insist that this does not go far enough? Could Schroeder insist that “that there is a surprise party” is a reason for Nate, and that deliberative accounts cannot make room for that very reason? I think it would be obstinate for defenders of deliberative accounts simply to deny it. But the difficulty of Schroeder’s example lies entirely in the very particular nature of the belief involved. This makes it hard to evaluate how much of a difficulty it is for deliberative accounts. Consider that even basic principles of theoretical reason will run into trouble with his case. Consider the principle that one should believe what is true about one’s immediate environment that concerns matters of importance to oneself. Schroeder’s case causes trouble even for this anodyne principle. So the impact of the case on deliberative accounts in particular is not as clear as it could be. But for the purposes of this paper it need not be. For my central claim is that IA accounts have no clear advantage over deliberative accounts. And, as we will see, IA accounts run into similar troubles.

  11. Quoted in Sobel (2001).

  12. There are a number of variations that a modification of full-information accounts might take. For example, Peter Railton argues that an agent’s good consists in what his fully informed self would want his ordinary self to want. The “want to want” formulation runs into troubles that the “want to do” formulation does not, since there can be reasons to want something that are not reasons to act on that desire. There is also a difference between a “want to do” formulation and a “would advise” formulation. We will discuss the difference between what our ideal counterpart might want us to do and what he might advise us to do below.

  13. The views of Railton and others offering similar accounts are persuasively criticized in Rosati (1995). Rosati’s criticisms focus on understanding what it is to be fully informed, and thus take a different tack than mine.

  14. Although the content of Smith’s account of reasons does not take the role of advisors seriously, his justification for the account does. Smith’s defense of his account appeals to various platitudes about the connection between reasons and advisors. These platitudes will involve our attitudes about the connections between our actual reasons and our best actual advisors. (Although we might have some limited intuitions, it is hard to see how there could be platitudes about the purely fictional “ideal advisors” encountered in philosophical theories.) Unfortunately, his account has little to do with our best actual advisors and everything to do with the desires of wholly idealized counterparts.

  15. Here, again, we see an important distinction between [A]- and [B]-type accounts. [A] accounts do not suffer from the above difficulty. For the fact of having been given advice from a reliable source (to go to the library) itself provides a reason (to go to the library) and makes the action intelligible to the advisee. The unideal agent of [A] accounts would be able to recognize a reason from his own perspective, and so would be able to make sense of his own actions.

  16. It is important to see that the point does not depend on a peculiarity of shame, or of self-regarding emotions in particular. As Moran points out, and as will be discussed shortly, a similar example can be constructed around an agent’s belief. If I am notoriously bad at getting at the truth in some domain, then there would be nothing wrong with someone else taking the fact of my believing P as evidence that not-P. But unless we fill in the example with a lot more detail, I could not in the same way take my believing that P as evidence that not-P.

  17. Of course we can imagine circumstances when such an appeal might make sense. For example, there is nothing odd about appealing to the fact that in the past you have reached the conclusion that P as evidence in deciding whether P. But now we have something that resembles a two-person case: you are considering the mental states of your past self to make up your mind now, as another might consider your mental states to make up her mind now. There is no oddness since past conclusions need not represent one’s present view of the matter. But in the rakehell’s case the shame represents what is actually his present, settled view about his character. That is what he is trying to appeal to in arriving at a different view of the situation.

References

  • Johnson, Robert N. 1999. Internal reasons and the conditional fallacy. The Philosophical Quarterly 49(194): 53–71.

  • Korsgaard, Christine M. 1986. Skepticism about practical reason. The Journal of Philosophy 83(1): 5–25.

  • McDowell, John. 1995. Might there be external reasons? In World, mind and ethics: Essays on the ethical philosophy of Bernard Williams, ed. J.E.J. Altham and Ross Harrison. Cambridge: Cambridge University Press.

  • Millgram, Elijah. 1996. Williams’ argument against external reasons. Noûs 30(2): 197–220.

  • Moran, Richard A. 2001. Authority and estrangement: An essay on self-knowledge. Princeton: Princeton University Press.

  • Railton, Peter. 1986. Moral realism. The Philosophical Review 95(2): 163–207.

  • Robertson, T. 2003. Internalism, (super) fragile reasons, and the conditional fallacy. Philosophical Papers 32(2): 171–184.

  • Rosati, Connie S. 1995. Persons, perspectives, and full information accounts of the good. Ethics 105(2): 296–325.

  • Schroeder, Mark Andrew. 2007. Slaves of the passions. Oxford: Oxford University Press.

  • Shope, Robert K. 1978. The conditional fallacy in contemporary philosophy. The Journal of Philosophy 75(8): 397–413.

  • Sidgwick, Henry. 1901. The methods of ethics. Indianapolis, IN: Hackett.

  • Smith, Michael. 1995. The moral problem. Oxford: Blackwell.

  • Sobel, David. 2001. Subjective accounts of reasons for action. Ethics 111(3): 461–492.

  • van Roojen, Mark. 2000. Motivational internalism: A somewhat less idealized account. The Philosophical Quarterly 50(199): 233–241.

  • Wallace, R. Jay. 1990. How to argue about practical reason. Mind 99(395): 355–385.

  • Williams, Bernard. 1979. Internal and external reasons. In Rational action, ed. Ross Harrison. Cambridge: Cambridge University Press.

  • Williams, Bernard. 2002. Truth and truthfulness. Princeton: Princeton University Press.


Author information


Correspondence to Steven Arkonovich.


Cite this article

Arkonovich, S. Advisors and Deliberation. J Ethics 15, 405–424 (2011). https://doi.org/10.1007/s10892-011-9101-7
