Abstract
I show that accepting Moss’s (Philos Rev 122:1–43, 2013) claim that features of a rational agent’s credence function can constitute knowledge, together with the claim (put forward by several philosophers) that a rational agent should only act on the basis of reasons that he knows, predicts and explains evidential decision theory’s failure to recommend the right choice for the Newcomb problem. The Newcomb problem can be seen, in light of Moss’s suggestion, as a manifestation of a Gettier case in the domain of choice. This serves as strong evidence for both Moss’s claim and the knowledge-based action approach.
Notes
At least, this is one way to understand it—a way that I suspect is especially attractive to subjective Bayesians who are skeptical of “objective-epistemic” interpretations of probability (e.g. “epistemic probability” or “evidential probability”). Moss herself seems to understand the basic puzzle about probabilistic-knowledge ascriptions differently. This, however, has no effect on my argument.
Here and in the rest of the paper I follow Moss by using the title “Gettier cases” in a general sense. I also intend it to cover those cases which are sometimes called “Ginet cases”, and more generally any case in which a justified true belief fails to constitute knowledge due to some kind of environmental luck or the like.
Throughout, I will use Jeffrey's (1965) version of EDT.
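For readers less familiar with the formalism, Jeffrey-style EDT ranks acts by their "news value": the sum, over states, of the probability of each state conditional on the act, weighted by the payoff. The following sketch illustrates this with the standard Newcomb setup; the predictor accuracy and payoffs are stipulated for illustration and are not taken from the paper.

```python
# Illustrative sketch of Jeffrey-style evidential expected value in a
# standard Newcomb setup. The accuracy figure (0.99) and the payoffs
# ($1,000,000 in the opaque box, $1,000 in the transparent box) are
# stipulated for illustration only.

def evidential_value(act, accuracy=0.99):
    """News value of an act: sum over states of P(state | act) * payoff."""
    # States: the predictor foresaw "one-box" or "two-box". Conditional on
    # the agent's act, the prediction matches it with probability `accuracy`.
    if act == "one-box":
        # $1,000,000 if one-boxing was predicted, $0 otherwise.
        return accuracy * 1_000_000 + (1 - accuracy) * 0
    else:  # "two-box"
        # $1,000 if two-boxing was predicted, $1,001,000 otherwise.
        return accuracy * 1_000 + (1 - accuracy) * 1_001_000

for act in ("one-box", "two-box"):
    print(act, evidential_value(act))
```

With these stipulated numbers, one-boxing has the far higher news value, which is why EDT recommends it.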
Weisberg—following Brown (2008)—does discuss a case in which an agent's justified degrees of belief fail to constitute knowledge (as a result of the agent being trapped in a Gettier case); but neither he nor Brown discusses a case in which this happens while the agent also has knowledge of reasons that directly support choosing in a way that is at odds with maximizing expected value. To be clear, I do not argue in this paper that every Gettier case can lead to a Newcomb problem; that claim is false. For a Gettier case to lead to a Newcomb problem, the decision-maker must know that, no matter how the world is, choosing the act that does not maximize expected value leads to better consequences for him. Thus the decision-maker has knowledge of a reason to act in a way that conflicts with the maximization of expected value, while the credence values with respect to which he maximizes expected value do not constitute knowledge. By contrast, in the case discussed in Brown (2008) the decision-maker has knowledge of no reason for action. In such cases one might argue that while rationality does not dictate maximizing expected value, it allows it. An interesting question that will not be discussed in this paper is how exactly "Psycho-Johnny" cases (see Egan 2007) stand in relation to knowledge-based action. On the face of it, it seems that in such cases the decision-maker does have knowledge that there is no reason to choose any of the available acts.
It is true, of course, that if Amal is not one of the 40 lucky people, the counterfactual "If Amal had pressed the green button, she would have been a rich woman" is true; but we are dealing here not with this counterfactual but rather with the indicative conditional "If Amal presses the green button, she is a rich woman", and this conditional is undoubtedly false when Amal is not one of the 40 lucky people. Notice that the conditional probability of a proposition B, given another proposition A, usually (but not always: this is the main lesson of Lewis's (1976) criticism of Adams' thesis) goes with the probability of the indicative conditional "If A then B", and not with the probability of the counterfactual "If A had been the case, then B would have been the case". Thus we should expect a smooth move from the full-belief-in-conditionals version of the Newcomb problem to the conditional probability version.
Notice that one of the main arguments for CDT's recommendation in Newcomb cases is that if the decision-maker does not follow it, she chooses an act that she knows will bring her less utility than she would get by choosing the other act.
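This is the familiar dominance reasoning: whatever the predictor in fact did, two-boxing pays strictly more than one-boxing. The sketch below checks this for the standard Newcomb payoffs; the dollar amounts are stipulated for illustration and are not taken from the paper.

```python
# Illustrative dominance check for standard (stipulated) Newcomb payoffs:
# for each fixed prediction state, two-boxing pays exactly $1,000 more
# than one-boxing, which is the premise of CDT's dominance argument.

PAYOFF = {
    # (prediction, act) -> payoff in dollars
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,
    ("two-box", "one-box"): 0,
    ("two-box", "two-box"): 1_000,
}

for prediction in ("one-box", "two-box"):
    gain = PAYOFF[(prediction, "two-box")] - PAYOFF[(prediction, "one-box")]
    print(f"if predictor foresaw {prediction}: two-boxing gains {gain}")
```

Since the gain is positive in every state, the two-boxer can claim to know that one-boxing brings her less utility, whichever state obtains.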
I thank an anonymous referee for presenting this objection to me.
This example is based on an example suggested to me by the referee mentioned in footnote 8.
This point and the discussion that follows are based on the discussion in Williamson (2000, Chap. 9, Sect. 9.3).
Thus, in a way completely analogous to Williamson's [forthcoming(b)] point about the distinction between epistemic justification and epistemic "blamelessness", the externalist can argue that although Magda's choice is not practically rational, she has a good "excuse" for this practical irrationality and is thus practically "blameless".
Following Williamson (2000), the externalist can argue that even if one does one’s best to comply with an internalist action-guiding norm, one might fail to do so.
References
Brown, J. (2008). Subject-sensitive invariantism and the knowledge norm for practical reasoning. Noûs, 42(2), 167–189.
Egan, A. (2007). Some counterexamples to causal decision theory. Philosophical Review, 116(1), 93–114.
Gao, J. (2016). Rational action without knowledge (and vice versa). Synthese, 1–17. doi:10.1007/s11229-016-1027-y.
Goldman, A. (1976). Discrimination and perceptual knowledge. Journal of Philosophy, 73(20), 771–791.
Hawthorne, J. (2005). Knowledge and evidence. Philosophy and Phenomenological Research, 70(2), 452–458.
Hawthorne, J., & Stanley, J. (2008). Knowledge and action. Journal of Philosophy, 105(10), 571–590.
Jeffrey, R. C. (1965). The logic of decision. Chicago, IL: University of Chicago Press.
Joyce, J. M. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.
Lewis, D. (1976). Probabilities of conditionals and conditional probabilities. Philosophical Review, 85(3), 297–315.
Moss, S. (2013). Epistemology formalized. Philosophical Review, 122(1), 1–43.
Price, H. (2012). Causation, chance and the rational significance of supernatural evidence. Philosophical Review, 121(4), 483–538.
Schiffer, S. (2007). Interest-relative invariantism. Philosophy and Phenomenological Research, 75(1), 188–195.
Schulz, M. (2015). Decisions and higher-order knowledge. Noûs. doi:10.1111/nous.12097.
Weatherson, B. (2012). Knowledge, bets, and interests. In J. Brown & M. Gerken (Eds.), Knowledge ascriptions (pp. 75–103). Oxford: Oxford University Press.
Weisberg, J. (2013). Knowledge in action. Philosophers’ Imprint, 13(22), 1–23.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Williamson, T. [forthcoming(a)]. Acting on knowledge. In J. A. Carter, E. Gordon, & B. Jarvis (Eds.), Knowledge-first. Oxford: Oxford University Press. http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0005/35834/KfirstCarter.pdf.
Williamson, T. [forthcoming(b)]. Justifications, excuses and skeptical scenarios. In J. Dutant, & F. Dorsch (Eds.), The new evil demon. Oxford: Oxford University Press. http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0018/35343/Luxembourg.pdf.
Acknowledgments
I thank David Enoch and two anonymous referees for helpful comments on earlier versions of this paper.
Nissan-Rozen, I. Newcomb meets Gettier. Synthese 194, 4799–4814 (2017). https://doi.org/10.1007/s11229-016-1169-y