Abstract
Consider two epistemic experts—for concreteness, let them be two weather forecasters. Suppose that you aren’t certain that they will issue identical forecasts, and you would like to proportion your degrees of belief to theirs in the following way: first, conditional on either’s forecast of rain being x, you’d like your own degree of belief in rain to be x. Secondly, conditional on them issuing different forecasts of rain, you’d like your own degree of belief in rain to be some weighted average of the forecast of each (perhaps with weights determined by their prior reliability). Finally, you’d like your degrees of belief to be given by an orthodox probability measure. Moderate ambitions, all. But you can’t always get what you want.
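To see the tension concretely, here is a minimal numerical sketch (all numbers are assumed for illustration, not drawn from the paper): each forecaster can forecast rain at 0.3 or 0.7, there is a 20% chance they disagree, and the averaging weights are \(\alpha = 0.6\), \(\beta = 0.4\). Imposing the weighted-average constraint and then computing your credence in rain conditional on the first forecaster's forecast shows that it cannot equal that forecast.

```python
# Toy model (all numbers assumed for illustration). A and B are the two
# forecasters' forecasts of rain; each can be 0.3 or 0.7, and there is a
# 20% chance of disagreement. Conditional on the pair (a, b), credence in
# rain is the weighted average alpha*a + beta*b.

vals = [0.3, 0.7]
alpha, beta = 0.6, 0.4

# Assumed joint distribution over forecast pairs, with C(A != B) = 0.2 > 0.
joint = {(0.3, 0.3): 0.4, (0.3, 0.7): 0.1,
         (0.7, 0.3): 0.1, (0.7, 0.7): 0.4}

for a in vals:
    p_a = sum(joint[a, b] for b in vals)  # C(A = a)
    # Averaging over B's forecast: C(r | A=a) = alpha*a + beta*E[B | A=a].
    cond_r = sum(joint[a, b] / p_a * (alpha * a + beta * b) for b in vals)
    e_b = sum(joint[a, b] / p_a * b for b in vals)
    print(f"a={a}: C(r | A=a) = {cond_r:.3f}, E[B | A=a] = {e_b:.2f}")

# The first desideratum demands C(r | A=a) = a, i.e. E[B | A=a] = a; but
# disagreement pulls E[B | A=0.3] up to 0.38 and E[B | A=0.7] down to 0.62,
# so C(r | A=0.3) = 0.332 and C(r | A=0.7) = 0.668: not 0.3 and 0.7.
```

The particular joint distribution and weights are arbitrary; as the paper's result shows, any choice with a positive probability of disagreement produces the same failure.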
Notes
There are a wide variety of putative experts and a wide variety of principles of expert deference on offer in the current epistemology literature. However, the reader will note that everything we will say here about Al and Bert goes just as well for chance, your rational future self, and so on and so forth. The reader will also note that if Al and Bert have all your evidence and more, and if they are additionally certain of their own forecasts, then every extant principle of expert deference will entail both (2) and (3). For instance, given an expert \(\mathcal{E}\) who is certain of their own credences and has all your evidence and more, the following ways of treating \(\mathcal{E}\) as an expert are all equivalent: for all p, x, and E, (1) \(C(p \mid \mathcal{E}(p)=x) = x\), (2) \(C(p \mid \mathcal{E} = E) = E(p)\), (3) \(C(p \mid \mathcal{E}=E) = E(p \mid \mathcal{E}=E)\), and (4) \(C(p) = \sum _x x \cdot C(\mathcal{E}(p)=x)\). See Gallow (msb) for a proof of this claim and more on the relationship between various principles of expert deference.
By an ‘orthodox probability function’, I will mean that C is non-negative, normalized, countably additive, and conglomerable. These assumptions go beyond probabilism in the case where we are considering infinitely many possible values for \(\mathcal{A}\) and \(\mathcal{B}\). However, if we suppose that there are at most finitely many potential values for \(\mathcal{A}\) and \(\mathcal{B}\), then an ‘orthodox probability function’ is just any finitely additive probability. Thanks to an anonymous reviewer for their clarifying comments on this point.
Matthew 6:24.
Many of these principles of expert deference look different from (2) and (3); fortunately, in most cases, we can present them in the form of (2) and (3) by simply shifting our attention to a different expert. For instance, Lewis (1994) and Hall (1994) both say that we should defer, not to the judgments of chance, but rather to the judgments of chance conditionalized on the proposition that it is chance. In such a case, we could consider the relevant expert to be, not chance itself, but rather chance conditionalized on chance, and we will get back a principle looking like (2) and (3). Cf. Hall and Arntzenius (2003) and Schaffer (2003). (See also footnote 1.)
Cf. Gallow (msa).
See, e.g., Shogenji (ms) and Fitelson and Jehle (2009).
Cf. Levinstein (2015).
References
Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116(2), 187–217.
Christensen, D. (2010). Rational reflection. Philosophical Perspectives, 24, 121–140.
Christensen, D. (2011). Disagreement, question-begging, and epistemic self-criticism. Philosopher’s Imprint, 11(6).
Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.
Elga, A. (2013). The puzzle of the unmarked clock and the new rational reflection principle. Philosophical Studies, 164, 127–139.
Fitelson, B., & Jehle, D. (2009). What is the ‘equal weight view’? Episteme, 6(3), 280–293.
Gaifman, H. (1988). A theory of higher order probabilities. In B. Skyrms & W. L. Harper (Eds.), Causation, chance, and credence: Proceedings of the Irvine conference on probability and causation (Vol. 1, pp. 191–220). Dordrecht: Kluwer Academic Publishers.
Gallow, J. D. (msa). Expert deference and news from the future.
Gallow, J. D. (msb). Kinds of experts, forms of deference.
Hall, N. (1994). Correcting the guide to objective chance. Mind, 103(412), 505–517.
Hall, N., & Arntzenius, F. (2003). On what we know about chance. The British Journal for the Philosophy of Science, 54(2), 171–179.
Kelly, T. (2005). The epistemic significance of disagreement. In J. Hawthorne & T. Gendler (Eds.), Oxford studies in epistemology (Vol. 1, pp. 167–196). Oxford: Oxford University Press.
Levinstein, B. A. (2015). With all due respect: The macro-epistemology of disagreement. Philosopher’s Imprint, 15(13), 1–20.
Lewis, D. K. (1980). A subjectivist’s guide to objective chance. In R. C. Jeffrey (Ed.), Studies in inductive logic and probability (Vol. II, pp. 263–293). Berkeley: University of California Press.
Lewis, D. K. (1994). Humean supervenience debugged. Mind, 103(412), 473–490.
Schaffer, J. (2003). Principled chances. The British Journal for the Philosophy of Science, 54(1), 27–41.
Shogenji, T. (ms). A conundrum in Bayesian epistemology of disagreement.
Staffel, J. (2015). Disagreement and epistemic utility-based compromise. Journal of Philosophical Logic, 44, 273–286.
van Fraassen, B. C. (1984). Belief and the will. The Journal of Philosophy, 81(5), 235–256.
van Fraassen, B. C. (1995). Belief and the problem of Ulysses and the sirens. Philosophical Studies, 77, 7–37.
Wagner, C. (1985). On the formal properties of weighted averaging as a method of aggregation. Synthese, 62(1), 97–108.
Acknowledgements
Thanks to Michael Caie, Daniel Drucker, Harvey Lederman, and an anonymous reviewer for helpful conversations and feedback.
Appendix: Proof of Propositions 1 and 2
Proof
We establish three lemmas, from which the propositions follow immediately. (Note: throughout, I will use ‘C’ indiscriminately for (1) a joint probability density function over the values of \(\mathcal{A}\) and \(\mathcal{B}\), (2) the corresponding marginal densities, and (3) the corresponding probability function. In the event that there are at most finitely many possible values of \(\mathcal{A}\) and \(\mathcal{B}\), ‘C’ will everywhere denote a probability function, and integrals may be exchanged for sums throughout.)
Lemma 1
If (2), (3), and (4) hold, then so do (5) and (6).
Proof
Since C is a countably additive, conglomerable probability, for all a,
\[
C(r \mid \mathcal{A}=a) = \int C(r \mid \mathcal{A}=a \wedge \mathcal{B}=b) \cdot C(\mathcal{B}=b \mid \mathcal{A}=a) \, \mathrm{d}b = \alpha a + \beta \cdot \mathbb{E}[\mathcal{B} \mid \mathcal{A}=a],
\]
where the second equality is given by (4). Then, because \(C(r \mid \mathcal{A}=a)=a\) and \(\beta = 1-\alpha\), it follows that \(\mathbb{E}[\mathcal{B} \mid \mathcal{A}=a] = a\), which is (6). Following the same procedure, with ‘\(\mathcal{A}\)’ and ‘\(\mathcal{B}\)’ exchanged throughout, establishes (5).
Lemma 2
If (5) and (6) hold, then so does (7).
Proof
By the law of total expectation and (6),
\[
\mathbb{E}[\mathcal{A}\mathcal{B}] = \mathbb{E}\big[\mathcal{A} \cdot \mathbb{E}[\mathcal{B} \mid \mathcal{A}]\big] = \mathbb{E}[\mathcal{A}^2].
\]
The same procedure, with ‘\(\mathcal{A}\)’ exchanged for ‘\(\mathcal{B}\)’ throughout, establishes that \(\mathbb {E}[\mathcal{A}\mathcal{B}] = \mathbb {E}[\mathcal{B}^2]\). Together, these equalities give (7).
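For a finite case, the computation in Lemma 2 can be checked directly. The sketch below uses a joint distribution over forecast pairs (assumed purely for illustration) under which \(\mathbb{E}[\mathcal{B} \mid \mathcal{A}=a] = a\) for each a but the converse condition fails; accordingly, \(\mathbb{E}[\mathcal{A}\mathcal{B}] = \mathbb{E}[\mathcal{A}^2]\) holds while \(\mathbb{E}[\mathcal{A}\mathcal{B}] = \mathbb{E}[\mathcal{B}^2]\) does not, so both deference conditions are genuinely needed for the lemma's conclusion.

```python
# Check of the Lemma 2 computation on an assumed finite joint distribution
# over forecast pairs (a, b), built so that E[B | A=a] = a for each a,
# while E[A | B=b] = b fails (e.g. E[A | B=0.2] = 0.5, not 0.2).
joint = {
    (0.4, 0.2): 0.25, (0.4, 0.6): 0.25,   # E[B | A=0.4] = 0.4
    (0.6, 0.2): 0.25, (0.6, 1.0): 0.25,   # E[B | A=0.6] = 0.6
}

def expect(f):
    """Expectation of f(a, b) under the assumed joint distribution."""
    return sum(p * f(a, b) for (a, b), p in joint.items())

e_ab = expect(lambda a, b: a * b)   # E[AB]   = 0.26
e_a2 = expect(lambda a, b: a * a)   # E[A^2]  = 0.26, matching E[AB]
e_b2 = expect(lambda a, b: b * b)   # E[B^2]  = 0.36, so E[AB] != E[B^2]
```

Were both conditional-expectation conditions in force at once, all three second moments would coincide, which is what drives the argument forward.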
Lemma 3
If (7) holds, then so does (8).
Proof
Given (7),
\[
\mathbb{E}\big[(\mathcal{A}- \mathcal{B})^2\big] = \mathbb{E}[\mathcal{A}^2] - 2\,\mathbb{E}[\mathcal{A}\mathcal{B}] + \mathbb{E}[\mathcal{B}^2] = 0.
\]
Since \((\mathcal{A}- \mathcal{B})^2\) is non-negative, if the expectation of \((\mathcal{A}- \mathcal{B})^2\) is 0, then \(C(\mathcal{A}=\mathcal{B}) = 1\), and (1) is violated.
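As a finite-case sanity check (with joint distributions assumed purely for illustration), the sketch below computes \(\mathbb{E}[(\mathcal{A}-\mathcal{B})^2]\) both directly and via the second-moment expansion, once for a distribution that permits disagreement and once for one concentrated on \(\mathcal{A}=\mathcal{B}\); the expectation vanishes exactly when \(C(\mathcal{A}=\mathcal{B})=1\).

```python
# Two assumed joint distributions over forecast pairs: one that permits
# disagreement, one concentrated on the diagonal A = B.
def moments(joint):
    e = lambda f: sum(p * f(a, b) for (a, b), p in joint.items())
    gap = e(lambda a, b: (a - b) ** 2)                       # E[(A-B)^2]
    expanded = (e(lambda a, b: a * a) - 2 * e(lambda a, b: a * b)
                + e(lambda a, b: b * b))                     # same, expanded
    agree = sum(p for (a, b), p in joint.items() if a == b)  # C(A = B)
    return gap, expanded, agree

disagree = {(0.3, 0.3): 0.4, (0.3, 0.7): 0.1, (0.7, 0.3): 0.1, (0.7, 0.7): 0.4}
diagonal = {(0.3, 0.3): 0.5, (0.7, 0.7): 0.5}

# With disagreement possible: E[(A-B)^2] = 0.032 > 0 and C(A = B) = 0.8.
# On the diagonal: E[(A-B)^2] = 0 and C(A = B) = 1, as the lemma requires.
```

The direct and expanded computations agree in both cases, and only the diagonal distribution drives the expectation to zero.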
Gallow, J.D. No one can serve two epistemic masters. Philos Stud 175, 2389–2398 (2018). https://doi.org/10.1007/s11098-017-0964-8