
Deference and description

Philosophical Studies

Abstract

Consider someone whom you know to be an expert about some issue. She knows at least as much as you do and reasons impeccably. The issue is a straightforward case of statistical inference that raises no deep problems of epistemology. You happen to know the expert’s opinion on this issue. Should you defer to her by adopting her opinion as your own? An affirmative answer may appear mandatory. But this paper argues that a crucial factor in answering this question is the description under which you identify the expert. Experts may be identified under various descriptions (e.g., “the expert appearing on Channel 7 at 5 pm”). Section 2 provides cases that illustrate how deference to a known expert may be appropriate under some descriptions, but not others. Section 3 proposes that, in these cases, the differentiating factor is self-identification: deference to an expert under a given description is appropriate when the expert self-identifies under that description. Section 4 presents a formal framework that demonstrates the sufficiency of self-identification for appropriate deference within a significant class of cases. Section 5 notes a way in which this phenomenon allows for a form of “agreeing to disagree” in Robert Aumann’s sense. Section 6 illustrates the practical application of these results to the case of testimonials.


Notes

  1. On a less idealized conception of an expert, we need not require that the expert know every possible-worlds proposition that one knows; it is enough if she knows every possible-worlds proposition one knows that is relevant to the question at issue. We might also allow an expert to fail to possess some of one’s relevant information, so long as she has “better” information overall (e.g., she has a larger, but distinct, sample of a population under study). The problems of deference raised below apply to these conceptions of experts as well.

  2. Discussions of deference sometimes limit themselves to cases where an agent has prior knowledge of the existence of an expert and then subsequently learns the expert’s credence. Our discussion applies to all cases where an agent knows what credence an expert has, including ones where she learns of the expert’s existence at the same time as she learns the expert’s credence.

  3. One might think that you possess de re knowledge of XSHAM that she sees the sham coin. This thought relies on a very permissive conception of de re knowledge, insofar as it maintains that simply, for example, hearing a description of Case B suffices to provide de re knowledge of XSHAM. But even so, it does not identify a piece of de re knowledge that you possess but that XSHAM does not. XSHAM may hear the description of the case just as you do, and so gain the same piece of de re knowledge.

  4. Since XFIRST-HEADS should have credence 1/3 in FAIR, while knowledge of just AT-LEAST-ONE-HEADS supports credence 1023/2047, this is a case where an expert’s making an observation has a different evidential significance for her than the fact that some expert or other made such an observation. Titelbaum (2013, pp. 235–236) uses a related case, but one in which the individuals lack knowledge of any uniquely identifying features of themselves, to argue for the relevance of centered evidence to uncentered propositions.
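The credences in footnote 4 can be checked with Bayes' theorem. The full case description lies outside this excerpt, so the setup below is a reconstruction: a coin that is either fair (FAIR) or guaranteed to land heads, with prior 1/2 each, flipped 10 times — assumptions chosen because they reproduce the quoted values 1023/2047 and 1/3.

```python
from fractions import Fraction

# Hypothetical reconstruction of the case (not stated in this excerpt):
# the coin is either fair or always-heads, prior 1/2 each, flipped 10 times.
prior = Fraction(1, 2)
n = 10

# P(at least one heads | FAIR) and P(at least one heads | always-heads)
p_alo_fair = 1 - Fraction(1, 2) ** n  # 1023/1024
p_alo_biased = Fraction(1)

# Credence in FAIR given only AT-LEAST-ONE-HEADS
c_fair_alo = prior * p_alo_fair / (prior * p_alo_fair + prior * p_alo_biased)

# Credence in FAIR given that one particular flip landed heads
c_fair_mine = prior * Fraction(1, 2) / (prior * Fraction(1, 2) + prior * 1)

print(c_fair_alo)   # 1023/2047
print(c_fair_mine)  # 1/3
```

On this reconstruction, the gap between 1023/2047 and 1/3 is exactly the difference between conditioning on "some flip landed heads" and conditioning on "this particular flip landed heads."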

  5. Further features of the credences in this case are discussed in Sect. 3, but it may be useful to mention a few at this point. The above discussion shows C(FAIR) = 1023/2047 and C(FAIR | XA(FAIR) = 1/3) = 1/3. Since C(XFIRST-HEADS(FAIR) = 1/3) = 1, we also have C(FAIR | XFIRST-HEADS(FAIR) = 1/3) = 1023/2047 and C(FAIR | XA(FAIR) = 1/3 & XFIRST-HEADS(FAIR) = 1/3) = 1/3. The type of argument given in Sect. 3 shows C(FAIR | XA(FAIR) = 1/3 & XFIRST-HEADS(FAIR) = 1/3 & A-IS-XFIRST-HEADS) = 1023/2047.

  6. The case does not, however, illustrate the relativity of deference to descriptions, since there is no description under which you are disposed to defer to the credence of your Monday self. Partly for this reason, it is not clear that the Monday self should count as an expert in any intuitive sense, but for our purposes it is enough that you can appropriately fail to defer to your Monday self, despite knowing that your Monday self knows all the possible-worlds propositions you know and reasons impeccably.

  7. Schervish et al. (2004) and Halpern (2005) have noted the connection between self-identification (or a closely related condition) and deference in the prisoner cases of Arntzenius (2003). These approaches consider deference to one’s future self, under certain temporal descriptions (e.g., “myself at 11:30 pm”). It is not obvious how to generalize these approaches to descriptions other than temporal descriptions, since time plays a unique role in these approaches, being used to define the notion of a ‘filtration’ or of ‘perfect recall.’ Time orders all the future experts into a sequence of gradually improving knowledge. In general, however, descriptions may, like ‘XSHAM’ and ‘XFIRST-HEADS,’ range over experts who cannot be so ordered. Elga (2007) gives a slightly more general principle that applies to duplication cases of the type given in Elga (2004).

  8. A more general principle, CONDITIONAL DEFERENCE, is defined as follows: If C(X(q) = y) > 0, then C(q | X(q) = y) = y. Footnote 11 modifies the derivation of DEFERENCE in this subsection to establish CONDITIONAL DEFERENCE. This shows that the sufficient condition for DEFERENCE given in Sect. 4.3 is also sufficient for CONDITIONAL DEFERENCE. But we work with DEFERENCE because much of the importance of self-identification can be seen even in the simpler context. DEFERENCE and CONDITIONAL DEFERENCE are analogs of the Reflection Principle of van Fraassen (1984), which concerns deference to one’s future self. Principles of deference, whether concerning deference to credences or to objective chances or frequencies, are also discussed in Kyburg (1961), Miller (1966), Levi (1977), Lewis (1980), Skyrms (1980), Goldstein (1983), Gaifman (1988), Pollock (1990), and the references in footnotes 7 and 10.

  9. We assume standard probability theory, including countable additivity. To illustrate a countably infinite partition, suppose one is certain a fair coin will be flipped until it comes up heads. Then the set of propositions {the coin will be flipped once, the coin will be flipped twice, the coin will be flipped three times, …} would constitute a countably infinite partition, with the probabilities of these propositions being 1/2, 1/4, 1/8, … respectively.
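The partition in footnote 9 can be verified directly: the cell probabilities (1/2)^n are those of a geometric distribution, and their partial sums 1 − (1/2)^N approach 1, as countable additivity requires.

```python
from fractions import Fraction

# Footnote 9's partition: "the coin will be flipped exactly n times" has
# probability (1/2)**n when a fair coin is flipped until it first lands heads.
def p(n):
    return Fraction(1, 2) ** n

# Partial sums over the countably infinite partition approach 1.
for N in (1, 2, 3, 10):
    partial = sum(p(n) for n in range(1, N + 1))
    print(N, partial)  # e.g. N=10 gives 1023/1024
```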

  10. van Fraassen (1995), Weisberg (2007), and Briggs (2009) give related arguments.

  11. A related strategy shows that our assumptions also imply the more general claim CONDITIONAL DEFERENCE, which states: If C(X(q) = y) > 0, then C(q | X(q) = y) = y. To show this, let P be a partition satisfying PARTITION, and let ‘vy’ rigidly designate the disjunction of pi in P such that C(q | pi) = y. (If there are no such pi, vy is counted as false.) We first show C(X(q) = y iff vy) = 1. Because the pi are exhaustive, it suffices to show that for all pi, C(X(q) = y iff vy | pi) = 1. PARTITION states that C(X(q) = C(q | pi) | pi) = 1, so it suffices to show that for all pi, C(C(q | pi) = y iff vy | pi) = 1. We show that conditional on any pi, C either assigns credence 1 to both vy and C(q | pi) = y, or credence 0 to both. There are two cases. (i) pi is a disjunct of vy. Since pi entails vy, C(vy | pi) = 1. By selection of the disjuncts in vy, C(q | pi) = y, so by KNOWLEDGE OF CONDITIONAL CREDENCES, C(C(q | pi) = y) = 1, and so C(C(q | pi) = y | pi) = 1. (ii) pi is not a disjunct of vy. Since the members of P are mutually exclusive, C(vy | pi) = 0. Since pi is not a disjunct of vy, C(q | pi) = z, for some z ≠ y. By KNOWLEDGE OF CONDITIONAL CREDENCES, C(C(q | pi) = z) = 1, so C(C(q | pi) = y) = 0, and so C(C(q | pi) = y | pi) = 0. This establishes C(X(q) = y iff vy) = 1. Now in accord with the antecedent of CONDITIONAL DEFERENCE, assume C(X(q) = y) > 0. By the result just established, it follows that C(vy) > 0, and so vy is a nonempty disjunction. The definition of vy then implies that vy is a nonempty disjunction of mutually exclusive pi all satisfying C(q | pi) = y, and hence C(q | vy) = y. Since C(X(q) = y iff vy) = 1, it follows that C(q | X(q) = y) = y.
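The argument of footnote 11 can be illustrated numerically in a toy finite model (the numbers below are illustrative, not from the paper). The PARTITION condition is modeled by letting the expert's credence in q, within each cell, equal the agent's own credence in q conditional on that cell; v_y is then the union of the cells on which the expert's credence is y, and CONDITIONAL DEFERENCE holds.

```python
from fractions import Fraction

# Toy model: worlds 0..4 with prior credences, a partition into cells, and a
# proposition q (a set of worlds). All values are illustrative assumptions.
C = {0: Fraction(1, 8), 1: Fraction(1, 8),
     2: Fraction(1, 4), 3: Fraction(1, 4), 4: Fraction(1, 4)}
cells = [{0, 1}, {2, 3}, {4}]
q = {0, 2, 4}

def cond(event, given):
    """C(event | given) by the ratio formula."""
    return sum(C[w] for w in event & given) / sum(C[w] for w in given)

# PARTITION: within each cell, X's credence in q is C(q | cell).
expert = {frozenset(c): cond(q, c) for c in cells}

# v_y is the disjunction of the cells on which the expert's credence is y;
# CONDITIONAL DEFERENCE says C(q | X(q) = y) = C(q | v_y) = y.
for y in set(expert.values()):
    v_y = set().union(*(c for c in cells if expert[frozenset(c)] == y))
    print(y, cond(q, v_y))  # the two values agree
```

Note that two distinct cells here share the expert credence 1/2, so v_{1/2} is a genuine disjunction of cells, matching the construction of vy in the footnote.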

  12. Briggs (2010), in a similar framework, uses the same notion of self-locating uncertainty, terming it “irreducibly de se ignorance.”

  13. As noted above, the term “proposition” refers exclusively to a set of uncentered worlds.

  14. On a subjectivist reading, the halfer and thirder rules entail only a subjectivist version of the UPDATING ASSUMPTION, which adds to the antecedent the requirement that A and B have the same priors in the subjective Bayesian sense. The difference between the two versions is not particularly significant for the cases discussed here, since the relevant aspects of the priors are uncontroversial. The subjectivist version can function equally well in the formal argument, with minor modifications.

  15. As noted above, in the Sleeping Beauty problem, the UPDATING ASSUMPTION implies that if one is informed that it is Monday, then one’s credences in all propositions ought to be the same as they were on Sunday. Lewis denies this for the proposition HEADS. He endorses credence 1/2 in HEADS for Sunday, but credence 2/3 in HEADS for Monday upon being informed that it is Monday.
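Lewis's Monday credence in footnote 15 follows from the standard halfer assignment over centered possibilities (this assignment is background to the footnote, not spelled out in this excerpt): conditionalizing credence 1/2 on HEADS-Monday, 1/4 on TAILS-Monday, and 1/4 on TAILS-Tuesday on "it is Monday" yields 2/3.

```python
from fractions import Fraction

# Lewis's halfer credences over the three centered possibilities
# (assumed standard background, not stated in this excerpt).
cred = {
    ("HEADS", "Monday"): Fraction(1, 2),
    ("TAILS", "Monday"): Fraction(1, 4),
    ("TAILS", "Tuesday"): Fraction(1, 4),
}

monday = {k: v for k, v in cred.items() if k[1] == "Monday"}

# Conditionalizing on "it is Monday" yields Lewis's credence in HEADS.
print(cred[("HEADS", "Monday")] / sum(monday.values()))  # 2/3
```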

  16. This excludes the more sophisticated descriptions considered in two-dimensional semantics (see Chalmers (2006) for an overview).

  17. This description of the case assumes, as is plausible, that XSHAM possesses a description that she is certain uniquely refers to herself (i.e., ‘XRED’ or ‘XBLUE’). If this assumption does not hold, XSHAM’s total uncentered evidence is simply “someone’s total centered evidence is ered-heads” or “someone’s total centered evidence is eblue-heads” (with new values for ered-heads and eblue-heads). This does not change the analysis, as these propositions do not form a partition.

  18. The significance of the partition condition is perhaps most easily seen in the proof of CONDITIONAL DEFERENCE given in footnote 11. When the partition condition holds, the proposition that the expert’s credence in q is y stands proxy for the disjunction of partition elements conditional on which the agent herself assigns credence y to q.

  19. A world wj “satisfies” TEuncentered(X) = si just in case it is a world where the proposition TEuncentered(X) = si is true.

References

  • Arntzenius, F. (2003). Some problems for conditionalization and reflection. The Journal of Philosophy, 100, 356–370.

  • Aumann, R. J. (1976). Agreeing to disagree. The Annals of Statistics, 4, 1236–1239.

  • Briggs, R. (2009). Distorted reflection. Philosophical Review, 118, 59–85.

  • Briggs, R. (2010). Putting a value on beauty. Oxford Studies in Epistemology, 3, 3–34.

  • Chalmers, D. J. (2006). The foundations of two-dimensional semantics. In M. García-Carpintero & J. Macià (Eds.), Two-dimensional semantics (pp. 55–140). Oxford: Oxford University Press.

  • Elga, A. (2000). Self-locating belief and the sleeping beauty problem. Analysis, 60, 143–147.

  • Elga, A. (2004). Defeating Dr. Evil with self-locating belief. Philosophy and Phenomenological Research, 69, 383–396.

  • Elga, A. (2007). Reflection and disagreement. Noûs, 41, 478–502.

  • Gaifman, H. (1988). A theory of higher order probabilities. In B. Skyrms & W. L. Harper (Eds.), Causation, chance, and credence (pp. 191–219). Dordrecht: Kluwer Academic Publishers.

  • Geanakoplos, J. (1989). Game theory without partitions, and applications to speculation and consensus. Cowles Foundation Discussion Paper No. 914, Yale University.

  • Goldstein, M. (1983). The prevision of a prevision. Journal of the American Statistical Association, 78, 817–819.

  • Halpern, J. (2005). Sleeping beauty reconsidered: Conditioning and reflection in asynchronous systems. Oxford Studies in Epistemology, 1, 167–196.

  • Kyburg, H. E. (1961). Probability and the logic of rational belief. Middletown: Wesleyan University Press.

  • Levi, I. (1977). Direct inference. The Journal of Philosophy, 74, 5–29.

  • Lewis, D. (1979). Attitudes de dicto and de se. Philosophical Review, 88, 513–543.

  • Lewis, D. (1980). A subjectivist’s guide to objective chance. In R. C. Jeffrey (Ed.), Studies in inductive logic and probability (pp. 263–293). Berkeley: University of California Press.

  • Lewis, D. (2001). Sleeping beauty: Reply to Elga. Analysis, 61, 171–176.

  • Miller, D. (1966). A paradox of information. British Journal for the Philosophy of Science, 17, 59–61.

  • Pollock, J. L. (1990). Nomic probability and the foundations of induction. Oxford: Oxford University Press.

  • Quine, W. V. (1969). Propositional objects. In W. V. Quine (Ed.), Ontological relativity and other essays (pp. 139–160). New York: Columbia University Press.

  • Schervish, M. J., Seidenfeld, T., & Kadane, J. B. (2004). Stopping to reflect. The Journal of Philosophy, 101, 315–322.

  • Skyrms, B. (1980). Higher order degrees of belief. In D. H. Mellor (Ed.), Prospects for pragmatism (pp. 109–137). Cambridge: Cambridge University Press.

  • Titelbaum, M. G. (2013). Quitting certainties. Oxford: Oxford University Press.

  • van Fraassen, B. C. (1984). Belief and the will. The Journal of Philosophy, 81, 236–256.

  • van Fraassen, B. C. (1995). Belief and the problem of Ulysses and the Sirens. Philosophical Studies, 77, 7–37.

  • Weisberg, J. (2007). Conditionalization, reflection, and self-knowledge. Philosophical Studies, 135, 179–197.

  • Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.


Acknowledgments

I am grateful to Joe Mendola and two referees, including Mike Titelbaum, for helpful comments.

Author information

Correspondence to Aaron Bronfman.

Appendix

This appendix shows that, combined with the background assumptions of Sects. 4.1 and 4.2, the four assumptions of Sect. 4.3 imply that C satisfies DEFERENCE with respect to the description ‘X’ and any proposition q. The proof consists in finding a partition P that satisfies PARTITION. (Section 4.1 showed that, in the presence of that section’s background assumptions, the existence of such a partition is sufficient to imply DEFERENCE.)

Let si range over propositions such that C(TEuncentered(X) = si) > 0. The proposed partition P is given by letting each pi be the proposition that TEuncentered(X) = si. In other words, the partition is given by propositions pi such as “X’s total uncentered evidence is s1,” “X’s total uncentered evidence is s2,” etc. These pi are mutually exclusive (since C assigns no credence to X’s total evidence being si while also being some distinct set sj), jointly exhaustive (since C is certain that X has some total evidence or other), and finite or countably infinite in number (by Assumption 2). To establish DEFERENCE, it suffices to show that this partition P satisfies PARTITION, the condition that for all pi in P, C(X(q) = C(q | pi) | pi) = 1.

We first show that, for arbitrary si, C(si iff TEuncentered(X) = si) = 1. In other words, C treats si and TEuncentered(X) = si as equivalent propositions. To show this, it suffices to show (i) C(si | TEuncentered(X) = si) = 1 and (ii) C(TEuncentered(X) = si | si) = 1. Claim (i) follows immediately from the selection of the si as propositions such that C(TEuncentered(X) = si) > 0, and from the factivity of the term “uncentered evidence,” as in Assumption 2.

Claim (ii) is more difficult to establish. It suffices to show that all si-worlds satisfy (footnote 19) TEuncentered(X) = si. Because C(TEuncentered(X) = si) > 0, we may begin by choosing w* such that C(w*) > 0 and w* satisfies TEuncentered(X) = si. Let ‘t’ rigidly designate TE(X) at w*. Also let ‘tuncentered’ rigidly designate TEuncentered(X) at w*, so tuncentered = si. By Assumption 3, since C(w*) > 0, all 〈wj, ck〉 in t are such that TE(ck) = t. By Assumption 4, since C(w*) > 0, all 〈wj, ck〉 in t are such that X at wj is ck. Thus all 〈wj, ck〉 in t satisfy both TE(ck) = t and X at wj is ck. Thus all wj in tuncentered satisfy TE(X) = t. Hence all wj in tuncentered satisfy TEuncentered(X) = tuncentered. But as defined above, tuncentered is simply si, so this is just to say that all wj in si satisfy TEuncentered(X) = si. Thus C(TEuncentered(X) = si | si) = 1, completing the proof of claim (ii). Having established claims (i) and (ii), we have established C(si iff TEuncentered(X) = si) = 1.

To show that P satisfies PARTITION, we must show that C is certain of the conditions needed to apply the UPDATING ASSUMPTION. Note that Assumption 4 guarantees that C is certain that X has no self-locating uncertainty. This is because Assumption 4 implies that at any world w with C(w) > 0, TE(X) at w contains at most one centered world with any given wj as its first element, namely, the centered world 〈wj, ck〉 where X at wj is ck. Assumption 1 states that C is certain that C has no self-locating uncertainty. Assumption 2 states that C is certain that TEuncentered(X) ⊆ TEuncentered(C). Assumptions 1 and 2 together imply that C is certain that both C and X are rational. Assumption 1 states that C is certain of the UPDATING ASSUMPTION. Combining these things, the probabilistic consistency of C implies that C(X(q) = C(q | si) | TEuncentered(X) = si) = 1.

Above we established C(si iff TEuncentered(X) = si) = 1. Hence C(q | si) = C(q | TEuncentered(X) = si). By KNOWLEDGE OF CONDITIONAL CREDENCES, C(C(q | si) = C(q | TEuncentered(X) = si)) = 1. Combining this with the last claim of the previous paragraph, we have C(X(q) = C(q | TEuncentered(X) = si) | TEuncentered(X) = si) = 1. This establishes that P, the partition where each pi is TEuncentered(X) = si, satisfies PARTITION. By the result of Sect. 4.1, this suffices to establish DEFERENCE.
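The appendix's construction can be illustrated in a small model (all labels and numbers below are hypothetical). Worlds are tagged with the total uncentered evidence X receives there, so each cell pi collects the worlds where TEuncentered(X) = si; the cells are then mutually exclusive and jointly exhaustive by construction, and, with X's credence in each cell given by conditionalizing the shared prior on her evidence, conditioning on the value of X's credence recovers that value.

```python
from fractions import Fraction

# Hypothetical toy model: each world is tagged with the evidence label s_i
# that X receives there, so the cell p_i = "TEuncentered(X) = s_i" is the set
# of worlds sharing that label.
prior = {("w1", "s1"): Fraction(1, 6), ("w2", "s1"): Fraction(1, 3),
         ("w3", "s2"): Fraction(1, 4), ("w4", "s2"): Fraction(1, 4)}
q = {"w1", "w3"}

def C(worlds):
    """The agent's credence in a set of worlds."""
    return sum(p for (w, s), p in prior.items() if w in worlds)

def cond(event, given):
    return C(event & given) / C(given)

# Build the cells; they are mutually exclusive and exhaustive by construction.
cells = {}
for (w, s) in prior:
    cells.setdefault(s, set()).add(w)

# Modeling the UPDATING ASSUMPTION: within each cell, X's credence in q is
# the shared prior conditionalized on her evidence, i.e. C(q | p_i), so the
# partition satisfies PARTITION. Conditioning on the value of X's credence
# then recovers that value.
for y in {cond(q, cell) for cell in cells.values()}:
    v_y = set().union(*(c for c in cells.values() if cond(q, c) == y))
    print(y, cond(q, v_y))  # the two values agree
```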


Cite this article

Bronfman, A. Deference and description. Philos Stud 172, 1333–1353 (2015). https://doi.org/10.1007/s11098-014-0352-6
