Abstract
It is valuable for inquiry to have researchers who are committed advocates of their own theories. However, in light of pervasive disagreement (and other concerns), such a commitment is not well explained by the idea that researchers believe their theories. Instead, this commitment, the rational attitude to take toward one’s favored theory during the course of inquiry, is what I call endorsement. Endorsement is a doxastic attitude, but one which is governed by a different type of epistemic rationality. This inclusive epistemic rationality is sensitive to reasons beyond those to think the particular proposition in question is true. Instead, it includes extrinsic epistemic reasons, which concern the health of inquiry more generally. Such extrinsic reasons include the distribution of cognitive labor that a researcher will contribute to by endorsing a particular theory. Recognizing endorsement and inclusive epistemic rationality thus allows us to smooth a tension between individual rationality and collective rationality. It does so by showing how it can be epistemically rational to endorse a theory on the basis of the way this endorsement will benefit collective inquiry. I provide a decision theoretic treatment for inclusive epistemic rationality and endorsement which illustrates how this can be accomplished.
Notes
This example is inspired by the actual case of the Madagascan Sunset Moth, Chrysiridia rhipheus. For details on the signaling hypothesis, see Yoshioka and Kinoshita (2007).
Nor does she believe that it is approximately true, or that it is the best theory, or even that it is empirically adequate in van Fraassen's (1980) sense.
One can even swap out “knows” here for “is justified in believing,” and the knowledge norm for some other norm. Plausibly, Ellie will also fail to meet such weakened requirements. For the knowledge norm of assertion, see Williamson (1996, 2000). For an overview of the norms of assertion literature, see Pagin (2016) and Weiner (2007).
There are at least five kinds of “acceptance” notions that appear in philosophy: Cohen’s notion from epistemology (Alston 1996; Cohen 1989a), the notion from the philosophy of language (Stalnaker 1987), the notion from the philosophy of science (Kaplan 1981a, b; Levi 1974; Maher 1993; Van Fraassen 1980), the concept of acceptance from the metacognition literature (Frankish 2004; Proust 2013), and the genus conception of acceptance (Shah and Velleman 2005). For more on the various notions of acceptance, see McKaughan (2007).
I am not the first to notice the need for a provisional, acceptance-like attitude. Goldberg notices this need in the context of pervasive disagreement in philosophy (2013a, b). Other examples of varying degrees of similarity can be found in Firth (1981), Lacey (2015), Laudan (1987), McKaughan (2007) and Whitt (1985, 1990). I take this convergence to be good evidence in favor of the existence of the attitude I call endorsement. However, my account is significantly divergent from prior accounts, and I apply the theory to both philosophy and science.
Following Laudan, a number of philosophers of science have appealed to this notion of the context of pursuit. This context is the stage of inquiry when researchers pursue promising but as-yet unconfirmed theories. For an overview of this literature, see McKaughan (2007) and Whitt (1990). Other examples include Laudan (1987), McKaughan (2007, 2008), McMullin (1976), Nickles (1981), Šešelja et al. (2012), Šešelja and Straßer (2013, 2014) and Whitt (1985, 1992). There is insufficient space here to show all of the applications of endorsement, and inclusive epistemic rationality, to the pursuit literature.
Endorsement is a propositional attitude. However, this does not mean that the theories being endorsed need to be understood as propositions. I want to remain neutral on the nature and structure of scientific theories (Frigg and Nguyen 2016; Winther 2016). Technically speaking, the propositions that the attitude is taken toward can be propositions about the theory. So, if one has the view that, for instance, theories are actually models, then the proposition one endorses could be just “Theory A is an accurate enough model” or “Theory A is the best model,” or some other variation on these lines.
Note that this resiliency is quite distinct from Leitgeb’s notion of stability. Stability involves having good reason to expect no evidence that would warrant giving up the belief (2014). Belief is stable but not resilient, and endorsement is resilient but not stable.
I tend to think good inquiry is that which leads to the community learning interesting truths, but “healthy” here could refer to meeting a variety of epistemic standards.
One might worry that this will permit endorsement of theories which should be ruled out, either for epistemic or moral reasons, e.g., that anthropogenic climate change is not occurring, or pseudo-scientific racist theories. However, there are two ways of resisting the idea that endorsing these theories would be appropriate. First, I think moral reasons are overriding, so if one has a moral reason not to endorse a theory, this will mean that one should not do so, all things considered. Second, and more directly, endorsement is an attitude taken to live options, not options known to be false. The two examples here are both known to be false, and so are not potential candidates for endorsement.
The analogy of free expression of ideas to an economic marketplace seems to trace back to Mill through Supreme Court Justice Oliver Wendell Holmes, though it is perhaps an imperfect metaphor for Mill’s own view (Gordon 1997).
Of course, none of this is to deny that some researchers really are motivated by prudential reasons, especially fame and prestige. This motivation is not even always a bad thing for science (see Kitcher 1990; Strevens 2003). I am merely suggesting that some of us are sometimes motivated by a desire to contribute to inquiry.
Throughout, I use terms like “reasons,” “considerations,” and “values,” and treat them as though they are interchangeable. I think my view is compatible with a wide variety of views about the nature of normativity, so one can simply plug in one’s favored view from the meta-ethics and meta-epistemology literature. For an overview of available theories, see Alvarez (2016), Broome (2015), Finlay and Schroeder (2015), FitzPatrick (2004), Gert (2009) and Parfit and Broome (1997).
Borrowing the formulation in Berker (2013):
“Suppose I am a scientist seeking to get a grant from a religious organization. Suppose, also, that I am an atheist: I have thought long and hard about whether God exists and have eventually come to the conclusion that He does not. However, I realize that my only chance of receiving funding from the organization is to believe in the existence of God: they only give grants to believers, and I know I am such a bad liar that I won’t be able to convince the organization’s review board that I believe God exists unless I genuinely do. Finally, I know that, were I to receive the grant, I would use it to further my research, which would allow me to form a large number of new true beliefs and to revise a large number of previously held false beliefs about a variety of matters of great intellectual significance. Given these circumstances, should I form a belief that God exists? Would such a belief be epistemically rational, or reasonable, or justified?”
Steel’s discussion of this idea is brief, so I am uncertain whether my use of this distinction precisely tracks his (2010, 18). Jenkins (2007) appeals to a distinction that is very similar to mine. She distinguishes between “extraneous consequences” of a belief in P, and those which “which directly concern P itself” (37).
The notion of intrinsic epistemic value is related to the idea that belief is "transparent," in the sense explored by Shah and Velleman (2005). Transparency here means that the reasons to believe p are simply reasons for p, or evidence for p. Whenever one considers the question of "whether to believe p," this question is equivalent to the question of "whether p." Beliefs are transparent because they are only appropriately sensitive to intrinsic epistemic values.
I learned of cases like this one from Nomy Arpaly, who attributes them to Sophie Horowitz. The two objections I am considering here are largely inspired by those in Arpaly’s manuscript.
Here I am drawing from Sosa (2015), especially Chapter 8.
I think Sosa (2015) has something like this in mind for determining epistemic standards for belief-formation performances.
In order to side-step worries about attitude voluntarism, we can treat inclusive epistemic rationality as providing evaluative standards (rather than deontic norms). This is a common move to make in epistemology: see, e.g., Fitelson and Easwaran (2015). For more on the distinction between deontic and evaluative norms, see Smith (2005).
Although there is not room to explore the thought here, I think it is (at least very often) irrational to believe one’s favored theory. In brief, there are three main considerations that should lower our confidence in theories in cutting-edge research domains: pervasive disagreement, the pessimistic meta-induction, and under-determination of theory by evidence. These problems are characteristic of cutting-edge domains, and so the subjective probability (confidence, or credence) we assign to the theories should be too low to justify belief. Indeed, in many such cases our confidence in the theory should be less than half, in which case full belief is clearly unwarranted. Moreover, as I have argued, the epistemic rationality governing belief is not sensitive to extrinsic reasons which do and should govern our decisions about which theories to be committed to, and to pursue.
I focus here on MCR models, however, other kinds of modeling might also be useful for discovering extrinsic epistemic reasons, e.g., Agent-Based epistemic terrain modeling (Muldoon 2013; Muldoon and Weisberg 2011; Thoma 2015; Weisberg and Muldoon 2009; Zollman 2009). The endorsement framework could easily implement constraints derived from such modeling approaches.
This can also be done in terms of research hours, rather than individuals. Number of researchers is more natural for my account, but either will work.
These assumptions can be relaxed to obtain very similar results (Kitcher 1990).
Kitcher and Strevens appeal to an MCR scheme in order to vindicate the priority rule in science. I will leave aside the differences between the Priority reward scheme and Marge, as the details do not affect the project here.
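The logic of MCR schemes can be illustrated with a toy computation. The sketch below is a minimal illustration only: the success function \(p(n) = 1 - (1-q)^n\) (each of n researchers independently succeeding with chance q), and all numerical values, are hypothetical assumptions for exposition, not Kitcher's or Strevens's own models.

```python
# Toy marginal-contribution calculation in a Kitcher/Strevens-style
# division-of-labor setting. The success function and numbers below are
# illustrative assumptions, not drawn from the paper.

def success_prob(n, q):
    """Chance a research program with n researchers succeeds,
    assuming each researcher independently succeeds with chance q."""
    return 1 - (1 - q) ** n

def marginal_contribution(n, q):
    """Extra success probability added by the n-th researcher."""
    return success_prob(n, q) - success_prob(n - 1, q)

# A newcomer compares joining program A (promising but crowded) with
# program B (less promising but unpursued):
mc_A = marginal_contribution(n=10, q=0.3)  # 10th researcher on A
mc_B = marginal_contribution(n=1, q=0.1)   # 1st researcher on B

# Joining B adds more to the community's overall chance of success
# whenever mc_B > mc_A, even though B is individually less promising.
```

Under these toy numbers the first researcher on the neglected program contributes more than the tenth researcher on the crowded one, which is the distribution-of-labor effect that the extrinsic epistemic reasons discussed above are meant to track.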
Though see Bright (2016) for how “pure” alethic goals can lead to fraud, too.
It is worth noting that Kitcher briefly mentions a solution somewhat similar to mine, but dismisses it as "redefinition" (1990, 14). I think this is a mistake. The project is not merely to stipulate that individual rationality is sensitive to concerns of collective rationality, but to show the explanatory payoff of a theory which ties them together in a coherent and rigorous manner.
I think this constraint will be operative in most domains. However, there are a number of specific research domains where this might need to be relaxed. For instance, we might need to relax it in characterizing the early stages of the pursuit of quantum mechanics, where the initial theory was known to be inconsistent (Faye 2014). More obviously, research about the applicability of paraconsistent logic, and dialetheism will violate this constraint (Priest 2006; Priest and Berto 2013; Priest et al. 2015). Even if all inconsistent theories are in fact false (because inconsistent), we can still model them using inclusively rational endorsement, as long as we relax this constraint.
As I suggest above, Strevens’ model might not turn out to be the best one. If so, we can simply adopt the better model and give it the same treatment. Again, the purpose here is not to come down in favor of one solution in that domain, but to show how we can use such solutions in a framework for rational endorsement that smooths the tension between individual and collective epistemic rationality.
Assuming, as above, that Strevens’ MCR is the right view of how to ensure appropriate distribution of labor.
For details on how this would work, see footnote 42.
Expanding this to a causal decision theory is a relatively simple matter, but it adds some complications to the formalism which are irrelevant to our purposes here. For the procedure for the expansion to CDT, see Joyce (1999).
The current version of the theory, since it is an evidential decision theory, makes this easier in some respects because it is partition invariant.
Although I prefer a theory which uses just the few constraints listed above, the framework is actually more flexible than this. There is a simple way to expand the decision theory to take into account additional extrinsic reasons more directly. We can do this by following Levi (1974), Levi (1980) and Pettigrew (2014), and using a composite utility function. This function is composed of the weighted average of several sub-utility functions, each of which represents sensitivity to a different extrinsic reason.
Suppose \(u_{mcr}(\cdot )\) is the sub-utility function structured as in the above. Also, \(u_{heu}(\cdot )\) is the sub-function structured by heuristic power, where \(u_{heu}(A_a \& S_i) > u_{heu}(A_b \& S_i)\) just when the heuristic of a is better (or more powerful) than b's. Furthermore, let \(u_{exp}(\cdot )\) be the sub-function structured by the novelty of explanations, where \(u_{exp}(A_a \& S_i) > u_{exp}(A_b \& S_i)\) just when a provides more novel explanations of phenomena than b.
To obtain the overall epistemic utility function, \(U(\cdot )\), we weight these individual sub-functions, then add them together. Let \(\alpha _{1} + \alpha _{2} + \alpha _{3} = 1\), where the size of each \(\alpha\) is determined by how important the subject takes the different considerations to be. Then,
$$\begin{aligned} U(A_a \& S_i) = \alpha _{1} u_{mcr}(A_a \& S_i) + \alpha _{2} u_{heu} (A_a \& S_i) + \alpha _{3} u_{exp}(A_a \& S_i) \end{aligned}$$

Finally, this composite utility can be plugged into the expected utility calculation, as in Sect. 6.2.
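The weighted-average construction can be sketched numerically. In the following minimal illustration, the sub-utility values, the weights, and the state probabilities are all hypothetical numbers chosen for exposition; only the structure (a convex combination of sub-utilities fed into an evidential expected-utility sum) comes from the text.

```python
# Composite epistemic utility U(A_a & S_i) as a weighted average of
# three sub-utilities, then plugged into an expected utility sum.
# All numerical values below are hypothetical illustrations.

def composite_utility(u_mcr, u_heu, u_exp, weights):
    """Weighted average of sub-utilities; the weights must sum to 1."""
    a1, a2, a3 = weights
    assert abs((a1 + a2 + a3) - 1) < 1e-9
    return a1 * u_mcr + a2 * u_heu + a3 * u_exp

def expected_utility(probs_and_utils):
    """Evidential expected utility: sum of P(S_i | A_a) * U(A_a & S_i)."""
    return sum(p * u for p, u in probs_and_utils)

# Endorsing theory A_a, evaluated across two states S_1 and S_2:
weights = (0.5, 0.3, 0.2)  # how important the subject takes each consideration
u_in_S1 = composite_utility(u_mcr=1.0, u_heu=0.8, u_exp=0.6, weights=weights)
u_in_S2 = composite_utility(u_mcr=0.2, u_heu=0.4, u_exp=0.1, weights=weights)
eu = expected_utility([(0.6, u_in_S1), (0.4, u_in_S2)])
```

An agent would then endorse the act (theory) with the highest such expected utility, exactly as in the simpler single-utility version of Sect. 6.2.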
References
Alston, W. M. (1996). Belief, acceptance, and religious faith. In J. Jordan & D. Howard-Snyder (Eds.), Faith, freedom, and rationality: Philosophy of religion today (pp. 3–27). Washington, DC: Rowman & Littlefield.
Alvarez, M. (2016). Reasons for action: Justification, motivation, explanation. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Winter 2016 ed.). https://plato.stanford.edu/archives/win2016/entries/reasons-just-vs-expl/.
Berker, S. (2013). Epistemic teleology and the separateness of propositions. Philosophical Review, 122(3), 337–393.
Bonner, B. L., Baumann, M. R., & Dalal, R. S. (2002). The effects of member expertise on group decision-making and performance. Organizational Behavior and Human Decision Processes, 88(2), 719–736.
Bright, L. K. (2016). On fraud. Philosophical Studies, 174(2), 291–310.
Broome, J. (2015). Reason versus ought. Philosophical Issues, 25(1), 80–97. doi:10.1111/phis.12058.
Cohen, L. J. (1989a). Belief and acceptance. Mind, 98(391), 367–389.
Cohen, L. J. (1989b). What use are beliefs that we do not take to be warranted? Analysis, 49(1), 7. doi:10.2307/3328887.
Cohen, L. J. (1995). An essay on belief and acceptance (Reprint ed.). Oxford: Oxford University Press.
De Cruz, H., & De Smedt, J. (2013). The value of epistemic disagreement in scientific practice. The case of Homo floresiensis. Studies in History and Philosophy of Science Part A, 44(2), 169–177.
Easwaran, K. (2013). Expected accuracy supports conditionalization-and conglomerability and reflection. Philosophy of Science, 80(1), 119–142.
Easwaran, K. (2016). Dr. Truthlove or: How i learned to stop worrying and love Bayesian probabilities. Noûs, 50(4), 816–853.
Egan, A. (2008). Seeing and believing: Perception, belief formation and the divided mind. Philosophical Studies, 140(1), 47–63.
Faye, J. (2014). Copenhagen interpretation of quantum mechanics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Fall 2014 ed.). https://plato.stanford.edu/archives/fall2014/entries/qm-copenhagen/.
Finlay, S., & Schroeder, M. (2015). Reasons for action: Internal vs. external. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Winter 2015 ed.).
Firth, R. (1981). Epistemic merit, intrinsic and instrumental. Proceedings and Addresses of the American Philosophical Association, 55(1), 5–23.
Fitelson, B., & Easwaran, K. (2015). Accuracy, coherence, and evidence. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 5). Oxford: Oxford University Press.
FitzPatrick, W. J. (2004). Reasons, value, and particular agents: Normative relevance without motivational internalism. Mind, 113(450), 285–318. doi:10.1093/mind/113.450.285.
Frankish, K. (2004). Mind and supermind. Cambridge: Cambridge University Press.
Frigg, R., & Nguyen, J. (2016). Scientific representation. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Winter 2016 ed.). https://plato.stanford.edu/archives/win2016/entries/scientific-representation/.
Moshman, D., & Geil, M. (1998). Collaborative reasoning: Evidence for collective rationality. Thinking & Reasoning, 4(3), 231–248. doi:10.1080/135467898394148.
Gert, J. (2009). Desires, reasons, and rationality. American Philosophical Quarterly, 46(4), 319–332.
Goldberg, S. (2013a). Defending philosophy in the face of systematic disagreement. In D. Machuca (Ed.), Disagreement and skepticism (pp. 277–94). New York: Routledge.
Goldberg, S. (2013b). Inclusiveness in the face of anticipated disagreement. Synthese, 190(7), 1189–1207. doi:10.1007/s11229-012-0102-2.
Goldman, A. I. (1986). Epistemology and cognition. Cambridge: Harvard University Press.
Gordon, J. (1997). John Stuart Mill and the “marketplace of ideas”. Social Theory and Practice, 23(2), 235–249.
Greaves, H. (2013). Epistemic decision theory. Mind, 122(488), 915–952. doi:10.1093/mind/fzt090.
Greaves, H., & Wallace, D. (2006). Justifying conditionalization: Conditionalization maximizes expected epistemic utility. Mind, 115(459), 607–632.
Jeffrey, R. C. (1990). The logic of decision. Chicago: University of Chicago Press.
Jenkins, C. (2007). Entitlement and rationality. Synthese, 157(1), 25–45.
Joyce, J. M. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65(4), 575–603.
Joyce, J. M. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.
Kahneman, D. (2013). Thinking, fast and slow (1st ed.). New York: Farrar, Straus and Giroux.
Kaplan, M. (1981a). A Bayesian theory of rational acceptance. The Journal of Philosophy, 78(6), 305–330. doi:10.2307/2026127.
Kaplan, M. (1981b). Rational acceptance. Philosophical Studies, 40(2), 129–145.
Kaplan, M. (1995). Believing the improbable. Philosophical Studies, 77(1), 117–146.
Kerr, N. L., MacCoun, R. J., & Kramer, G. P. (1996). Bias in judgment: Comparing individuals and groups. Psychological Review, 103(4), 687.
Kerr, N. L., & Tindale, R. S. (2004). Group performance and decision making. Annual Review of Psychology, 55, 623–655.
Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87(1), 5–22. doi:10.2307/2026796.
Konek, J., & Levinstein, B. (2017). The foundations of epistemic decision theory. Mind. doi:10.1093/mind/fzw044.
Lacey, H. (2015). ‘Holding’ and ‘endorsing’ claims in the course of scientific activities. Studies in History and Philosophy of Science Part A, 53, 89–95. doi:10.1016/j.shpsa.2015.05.009.
Laudan, L. (1978). Progress and its problems: Towards a theory of scientific growth (Vol. 282). Berkeley: University of California Press.
Laudan, R. (1987). The rationality of entertainment and pursuit. In J. C. Pitt & M. Pera (Eds.), Rational changes in science (pp. 203–220). Dordrecht: Springer.
Laughlin, P. R., Bonner, B. L., & Miner, A. G. (2002). Groups perform better than the best individuals on letters-to-numbers problems. Organizational Behavior and Human Decision Processes, 88(2), 605–620.
Laughlin, P. R., & Ellis, A. L. (1986). Demonstrability and social combination processes on mathematical intellective tasks. Journal of Experimental Social Psychology, 22(3), 177–189.
Leitgeb, H. (2014). The stability theory of belief. Philosophical Review, 123(2), 131–171.
Levi, I. (1974). Gambling with truth: An essay on induction and the aims of science. Cambridge: The MIT Press.
Levi, I. (1980). The enterprise of knowledge: An essay on knowledge, credal probability, and chance. Cambridge, MA: MIT Press.
Levi, I. (2004). Mild contraction: Evaluating loss of information due to loss of belief. Oxford: Oxford University Press.
Lewis, D. (1982). Logic for equivocators. Noûs, 16(3), 431–441.
Maher, P. (1993). Betting on theories. Cambridge: Cambridge University Press.
McKaughan, D. (2007). Toward a richer vocabulary for epistemic attitudes. Unpublished doctoral dissertation, University of Notre Dame.
McKaughan, D. (2008). From ugly duckling to swan: C. S. Peirce, abduction, and the pursuit of scientific theories. Transactions of the Charles S. Peirce Society, 44(3), 446–468.
McMullin, E. (1976). The fertility of theory and the unit for appraisal in science. In R. S. Cohen, P. K. Feyerabend, & M. Wartofsky (Eds.), Essays in memory of Imre Lakatos (pp. 395–432). Kufstein: Riedel.
Mercier, H. (2016). The argumentative theory: Predictions and empirical evidence. Trends in Cognitive Sciences, 20(9), 689–700. doi:10.1016/j.tics.2016.07.001.
Mercier, H., & Sperber, D. (2011). Argumentation: Its adaptiveness and efficacy. Behavioral and Brain Sciences, 34(2), 94–111. doi:10.1017/S0140525X10003031.
Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass, 8(2), 117–125.
Muldoon, R., & Weisberg, M. (2011). Robustness and idealization in models of cognitive labor. Synthese, 183(2), 161–174.
Nickles, T. (1981). What is a problem that we may solve it? Synthese, 47(1), 85–118.
Pagin, P. (2016). Assertion. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Winter 2016 ed.). https://plato.stanford.edu/archives/win2016/entries/assertion/.
Parfit, D., & Broome, J. (1997). Reasons and motivation. Proceedings of the Aristotelian Society, Supplementary Volumes, 71, 99–146.
Pettigrew, R. (2014). L. A. Paul on transformative experience and decision theory I. Blog post, M-Phi blog. http://m-phi.blogspot.com.au/2014/08/l-paul-on-transformative-experience-and_22.html. Accessed 31 Aug 2017.
Pettigrew, R. (2016). Accuracy and the Laws of Credence. Oxford: Oxford University Press.
Priest, G. (2006). In contradiction: A study of the transconsistent. Oxford: Oxford University Press.
Priest, G., & Berto, F. (2013). Dialetheism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Spring 2017 ed.). https://plato.stanford.edu/archives/spr2017/entries/dialetheism/.
Priest, G., Tanaka, K., & Weber, Z. (2015). Paraconsistent logic. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Fall 2017 ed.). https://plato.stanford.edu/archives/fall2017/entries/logic-paraconsistent/.
Proust, J. (2013). The philosophy of metacognition: Mental agency and self-awareness. Oxford: Oxford University Press.
Rayo, A. (2013). The construction of logical space. Oxford: Oxford University Press.
Resnick, L. B., Salmon, M., Zeitz, C. M., Wathen, S. H., & Holowchak, M. (1993). Reasoning in conversation. Cognition and Instruction, 11(3–4), 347–364.
Rinard, S. (2015). No exception for belief. Philosophy and Phenomenological Research, 91(2), 121–143.
Šešelja, D., Kosolosky, L., & Straßer, C. (2012). The rationality of scientific reasoning in the context of pursuit: Drawing appropriate distinctions. Philosophica, 86, 51–82.
Šešelja, D., & Straßer, C. (2013). Kuhn and the question of pursuit worthiness. Topoi, 32(1), 9–19.
Šešelja, D., & Straßer, C. (2014). Epistemic justification in the context of pursuit: A coherentist approach. Synthese, 191(13), 3111–3141.
Shah, N., & Velleman, J. D. (2005). Doxastic deliberation. The Philosophical Review, 114(4), 497–534.
Smith, M. (2005). Meta-ethics. In F. Jackson & M. Smith (Eds.), The Oxford handbook of contemporary philosophy (pp. 3–30). Oxford: Oxford University Press.
Sosa, E. (2009). Reflective knowledge: Apt belief and reflective knowledge (Vol. 2). Oxford: Oxford University Press.
Sosa, E. (2015). Judgment and agency. Oxford: Oxford University Press.
Stalnaker, R. (1999). Context and content: Essays on intentionality in speech and thought. Oxford: Oxford University Press.
Stalnaker, R. C. (1987). Inquiry. Cambridge, MA: MIT Press.
Steel, D. (2010). Epistemic values and the argument from inductive risk. Philosophy of Science, 77(1), 14–34.
Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100(2), 55–79.
Thoma, J. (2015). The epistemic division of labor revisited. Philosophy of Science, 82(3), 454–472.
Tversky, A., & Kahneman, D. (1975). Judgment under uncertainty: Heuristics and biases. In D. Wendt & C. Vlek (Eds.), Utility, probability, and human decision making (pp. 141–162). Dordrecht: Springer.
Van Fraassen, B. C. (1980). The scientific image. Oxford: Oxford University Press.
Weiner, M. (2007). Norms of assertion. Philosophy Compass, 2(2), 187–195. doi:10.1111/j.1747-9991.2007.00065.x.
Weisberg, J. (2017). Belief in psyontology. Philosophers' Imprint (forthcoming).
Weisberg, M., & Muldoon, R. (2009). Epistemic landscapes and the division of cognitive labor. Philosophy of Science, 76(2), 225–252.
Whitt, L. A. (1985). The promise and pursuit of scientific theories. Unpublished doctoral dissertation.
Whitt, L. A. (1990). Theory pursuit: Between discovery and acceptance. In PSA: Proceedings of the biennial meeting of the philosophy of science association (Vol. 1, pp. 467–483).
Whitt, L. A. (1992). Indices of theory promise. Philosophy of Science, 59(4), 612–634.
Williamson, T. (1996). Knowing and asserting. The Philosophical Review, 105(4), 489–523. doi:10.2307/2998423.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Winther, R. G. (2016). The structure of scientific theories. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Winter 2016 ed.). https://plato.stanford.edu/archives/win2016/entries/structure-scientific-theories/.
Yoshioka, S., & Kinoshita, S. (2007). Polarization-sensitive color mixing in the wing of the Madagascan sunset moth. Optics Express, 15(5), 2691. doi:10.1364/OE.15.002691.
Zollman, K. J. S. (2009). Optimal publishing strategies. Episteme, 6(2), 185–199.
Acknowledgements
Thanks to Sara Aronowitz, Bob Beddor, David Black, Matt Duncan, Andy Egan, Adam Elga, Branden Fitelson, Georgi Gardiner, Alvin Goldman, Daniel Rubio, Joshua Schecter, Susanna Schellenberg, Ernest Sosa, and an anonymous referee. Thanks also to audiences at The Penn-Rutgers-Princeton Social Epistemology Workshop, the Ninth Workshop in Decision, Games, and Logic, and the Vancouver Summer Philosophy Conference. Special thanks to Megan Feeney.
Fleisher, W. Rational endorsement. Philos Stud 175, 2649–2675 (2018). https://doi.org/10.1007/s11098-017-0976-4