The theory of games as a tool for the social epistemologist

Philosophical Studies

Abstract

Traditionally, epistemologists have distinguished between epistemic and pragmatic goals. In so doing, they presume that much of game theory is irrelevant to epistemic enterprises. I will show that this is a mistake. Even if we restrict attention to purely epistemic motivations, members of epistemic groups will face a multitude of strategic choices. I illustrate several contexts where individuals who are concerned solely with the discovery of truth will nonetheless face difficult game-theoretic problems. Examples of purely epistemic coordination problems and social dilemmas will be presented. These show that there is a far deeper connection between economics and epistemology than previously appreciated.


Notes

  1. It is common in game theory to assume that both players share a common prior. I do not make this assumption here, which is what drives these examples.

  2. Although they don’t use the term, Tebben and Waterman (2015) identify what game theorists call a “second-order” free-rider problem: one can gain the benefits of cooperative agreements while shirking the costs of maintaining the social norms through punishment.

  3. For a more complete discussion of this situation see Easwaran (2016).

  4. A note about notation: the order of arguments sometimes differs across publications. In my notation, \(S(c, t)\) represents the score that credence c receives when the truth is t.

  5. While I describe this as an agent who is considering altering her credence in a proposition, in economics it is more common to talk about incentive compatibility. If I have credence c and I know I will be paid in proportion to scoring rule S, then we can ask: would I maximize my expected monetary return by honestly announcing my opinion c instead of an alternative opinion \(c'\)? The formal constraints are the same, but they have a different interpretation.

  6. Consider an agent who believes the probability of heads is 0.51. When they evaluate their own credence, they assign it an expected accuracy of \(E(0.51, 0.51) = 0.51(-0.49) + 0.49(-0.51) = -0.4998\). When that same agent considers the expected accuracy of adopting a different credence in heads (or the expected accuracy of a different agent), say the credence 1, the expected accuracy is \(E(1, 0.51) = 0.51(0) + 0.49(-1) = -0.49\). As a result, the agent’s credence 0.51 is self-undermining: they regard themselves as less accurate than they would be with a different credence. (A numerical sketch of this calculation appears after these notes.)

  7. While the Brier score is popular for its simplicity, it is not without its critics (Fallis and Lewis 2016).

  8. The literature on disagreement often focuses on questions about peer disagreement, and much of the debate turns on how one analyzes the concept of a “peer.” I do not wish to engage with this debate. If the reader would like to call these agents “peers,” I have no objection. But if the reader would prefer to call this a case of non-peer disagreement, I will not argue. Whatever the preferred nomenclature, this case is worthy of discussion.

  9. Due to a well-known theorem from Aumann (1976), this cannot happen if Ann and Bob share a prior, are Bayesian rational, learn different evidence, and commonly know the structure of the evidence. I will assume that Ann and Bob are Bayesian rational but that one of the other conditions fails. For example, they may not have a common prior.

  10. For reasons not critical to this paper, the cooperative action problem is very difficult (see Seidenfeld et al. 1989).

  11. This note explains the notation. \(JA_i(x, y, z) = E_i(x, z) + E_i(y, z)\). The subscript i indicates which scoring rule is used; A corresponds to Ann, who uses \(E_A\) (and hence \(S_A\)). x and y represent the two different credences being evaluated, and z represents the credence from whose perspective they are evaluated. \(JA_A(x, y, c_A)\) would represent Ann evaluating potential credences x and y using her current credence \(c_A\). So in this case, \(JA_A(c_A, c_B, c_A)\) represents Ann evaluating the expected joint score of \(c_A\) and \(c_B\) using the scoring rule \(S_A\) and her current credence \(c_A\). (A computational sketch of this notation appears after these notes.)

  12. Evaluating the accuracy of two (or more) agents is formally very similar to evaluating the accuracy of a single agent whose beliefs are represented by a set of probabilities rather than just one (this is the “imprecise probability” framework). There are thorny issues in evaluating the accuracy of imprecise credences (Seidenfeld et al. 2012). However, various types of averaging (like arithmetic and geometric) are more defensible in the context of groups than in the context of a single individual with imprecise credences. Shifting to other measures, like the minimum or maximum score, will likely produce undesirable results.

  13. While the arithmetic average \({\bar{c}}\) will be better than sticking to one’s credences, depending on the scoring rule it may not be the optimal compromise. Moss (2011) provides several examples where different compromises are better (see also the sketch after these notes).

  14. There is an unfortunate duality in how these rules are discussed. One can, as I have, talk about measures of accuracy, where higher scores are better; in that context, the Brier score is concave. Alternatively, one can talk about measures of inaccuracy, where lower scores are better; in that case, the Brier score is convex. Joyce (1998, 2009) defends what he calls “convexity.” This is a defense of what I call “concavity.” (A worked form of this duality appears after these notes.)

  15. Both propositions presented in this section are straightforward applications of the definitions. Proofs are omitted.

  16. There remain other thorny problems to deal with if one thinks that Ann and Bob should compromise (e.g. Staffel 2015).

  17. The games we will generate are: Prisoner’s dilemma, Prisoner’s delight, a pure coordination game, and the Stag hunt (a.k.a. assurance game). The only two-person symmetric game we do not create is Chicken (a.k.a. Hawk–Dove). I have not yet been able to construct a plausible and uncontroversially epistemic situation that generates this game.

  18. Although I have not investigated this completely, I do not believe the independence assumption is critical. However, the case does require that both groups might be involved; that is, one group’s involvement is not mutually exclusive with the other’s. The assumption of independence greatly simplifies the mathematics, and since this is merely for illustration, I take it to be acceptable.

  19. Safety has a formal definition in this context: for this region of the parameter space, the inferior equilibrium is risk dominant (Harsanyi and Selten 1988). (The formal condition is stated after these notes.)

  20. My thanks to Alexandru Baltag for the suggestion.
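
The arithmetic in notes 5 and 6 can be made concrete with a small computation. The sketch below is illustrative only: it assumes the linear (absolute-distance) accuracy score that note 6’s numbers imply, contrasts it with the Brier score, and reuses the credence values from note 6.

```python
# Sketch for notes 5 and 6: expected accuracy under an improper (linear) score
# versus a proper (Brier) score. Illustrative only; the paper's own scoring
# rules may differ.

def linear(c, t):
    """Accuracy S(c, t): negative distance between credence c and truth value t (1 or 0)."""
    return -abs(t - c)

def brier(c, t):
    """Brier accuracy S(c, t): negative squared distance."""
    return -(t - c) ** 2

def expected_accuracy(x, c, score):
    """E(x, c): expected accuracy of credence x, computed from the perspective of credence c."""
    return c * score(x, 1) + (1 - c) * score(x, 0)

# Note 6: under the linear score, credence 0.51 is self-undermining.
print(expected_accuracy(0.51, 0.51, linear))  # -0.4998 (how the agent scores her own credence)
print(expected_accuracy(1.0, 0.51, linear))   # -0.49   (the extreme credence looks better to her)

# Note 5: under the proper Brier score, honest announcement is optimal.
print(expected_accuracy(0.51, 0.51, brier))   # about -0.2499
print(expected_accuracy(1.0, 0.51, brier))    # -0.49   (the extreme credence now looks worse)
```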
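
Notes 11 and 13 can be illustrated with the following sketch. It assumes the Brier score for a single proposition, and the credences 0.7 and 0.3 for Ann and Bob are hypothetical; they serve only to show the notation and the averaging claim, not the paper’s own cases.

```python
# Sketch for notes 11 and 13: joint accuracy JA_i(x, y, z) = E_i(x, z) + E_i(y, z),
# assuming the Brier score and hypothetical credences for Ann and Bob.

def brier(c, t):
    """Brier accuracy S(c, t) of credence c when the truth value is t (1 or 0)."""
    return -(t - c) ** 2

def expected_score(x, z, score=brier):
    """E(x, z): expected score of credence x from the perspective of credence z."""
    return z * score(x, 1) + (1 - z) * score(x, 0)

def joint_accuracy(x, y, z, score=brier):
    """JA(x, y, z): expected joint score of credences x and y, both evaluated from perspective z."""
    return expected_score(x, z, score) + expected_score(y, z, score)

c_A, c_B = 0.7, 0.3      # hypothetical credences for Ann and Bob
c_bar = (c_A + c_B) / 2  # the arithmetic average, note 13's compromise

# Note 11: Ann evaluates the pair of current credences from her own perspective.
print(joint_accuracy(c_A, c_B, c_A))      # JA_A(c_A, c_B, c_A), about -0.58

# Note 13: from Ann's perspective, both adopting the average beats both sticking.
print(joint_accuracy(c_bar, c_bar, c_A))  # -0.50, better than -0.58
```

Whether the arithmetic average is the best available compromise depends on the scoring rule; as note 13 says, Moss (2011) gives cases where other compromises do better.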
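
Note 14’s duality can be made explicit with the single-proposition Brier form (an illustrative choice; the point does not depend on this particular rule):

\[
S(c, 1) = -(1 - c)^2, \qquad \frac{d^2 S}{dc^2} = -2 < 0 \quad \text{(accuracy: concave)},
\]
\[
I(c, 1) = (1 - c)^2, \qquad \frac{d^2 I}{dc^2} = +2 > 0 \quad \text{(inaccuracy: convex)}.
\]

The same rule is being described in both cases; only the sign convention flips, which is why Joyce’s “convexity” and the “concavity” of note 14 coincide.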
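
For note 19, the standard Harsanyi and Selten (1988) criterion for a symmetric two-by-two game can be stated as follows; the payoff labels are generic and are not the paper’s own parameterization. Suppose the row player receives \(a\) from \((A, A)\), \(b\) from \((A, B)\), \(c\) from \((B, A)\), and \(d\) from \((B, B)\), with \(a > c\) and \(d > b\), so that both \((A, A)\) and \((B, B)\) are strict equilibria. Then \((B, B)\) risk dominates \((A, A)\) just in case

\[
(d - b)^2 > (a - c)^2, \quad \text{equivalently} \quad d - b > a - c,
\]

that is, just in case \(B\) is the unique best reply to an opponent who mixes fifty-fifty between \(A\) and \(B\).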

References

  • Aumann, R. J. (1976). Agreeing to disagree. The Annals of Statistics, 4(6), 1236–1239.
  • Banerjee, A. V. (1992). A simple model of herd behavior. The Quarterly Journal of Economics, 107(3), 797–817.
  • Bicchieri, C. (2005). Grammar of society. Cambridge: Cambridge University Press.
  • Bovens, L., & Hartmann, S. (2003). Bayesian epistemology. Oxford: Oxford University Press.
  • Braithwaite, R. (1954). Theory of games as a tool for the moral philosopher. Cambridge: Cambridge University Press.
  • Brier, G. W. (1950). Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1), 1–3.
  • Carr, J. R. (2017). Epistemic utility theory and the aim of belief. Philosophy and Phenomenological Research, 95(3), 511–534.
  • de Finetti, B. (1975). Theory of probability: A critical introductory treatment. New York: Wiley.
  • Diaconis, P., & Skyrms, B. (2018). Ten great ideas about chance. Princeton: Princeton University Press.
  • Dogramaci, S. (2012). Reverse engineering epistemic evaluations. Philosophy and Phenomenological Research, 84(3), 513–530.
  • Dretske, F. I. (1981). Knowledge and the flow of information. Cambridge: MIT Press.
  • Dunn, J. (2018). Epistemic free riding. In Epistemic consequentialism. Oxford: Oxford University Press.
  • Easwaran, K. (2016). Dr. Truthlove or: How I learned to stop worrying and love Bayesian probabilities. Noûs, 50(4), 816–853.
  • Fallis, D., & Lewis, P. J. (2016). The Brier rule is not a good measure of epistemic utility (and other useful facts about epistemic betterness). Australasian Journal of Philosophy, 94(3), 576–590.
  • Goldman, A. (1999). Knowledge in a social world. Oxford: Clarendon Press.
  • Greaves, H. (2013). Epistemic decision theory. Mind, 122(488), 915–952.
  • Greaves, H., & Wallace, D. (2006). Justifying conditionalization: Conditionalization maximizes expected epistemic utility. Mind, 115(459), 607–631.
  • Harsanyi, J. C., & Selten, R. (1988). A general theory of equilibrium selection in games. Cambridge: MIT Press.
  • Heesen, R., & van der Kolk, P. (2016). A game-theoretic approach to peer disagreement. Erkenntnis, 81(6), 1345–1368.
  • Joyce, J. M. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65(4), 575–603.
  • Joyce, J. M. (2009). Causal reasoning and backtracking. Philosophical Studies, 147(1), 139–154.
  • Kelly, T. (2003). Epistemic rationality as instrumental rationality: A critique. Philosophy and Phenomenological Research, 66(3), 612–640.
  • Kummerfeld, E., & Zollman, K. J. (2016). Conservatism and the scientific state of nature. British Journal for the Philosophy of Science, 67(4), 1057–1076.
  • Levinstein, B. A. (2012). Leitgeb and Pettigrew on accuracy and updating. Philosophy of Science, 79(3), 413–424.
  • List, C., & Pettit, P. (2004). An epistemic free-riding problem? In P. Catton & G. MacDonald (Eds.), Karl Popper: Critical appraisals. Hove: Psychology Press.
  • Mayo-Wilson, C., Zollman, K. J., & Danks, D. (2011). The independence thesis: When individual and social epistemology diverge. Philosophy of Science, 78(4), 653–677.
  • Moss, S. (2011). Scoring rules and epistemic compromise. Mind, 120(480), 1053–1069.
  • Oddie, G. (1997). Conditionalization, cogency, and cognitive value. British Journal for the Philosophy of Science, 48, 533–541.
  • Pettigrew, R. (2016). Accuracy and the laws of credence. Oxford: Oxford University Press.
  • Schervish, M. J. (1989). A general method for comparing probability assessors. The Annals of Statistics, 17(4), 1856–1879.
  • Schervish, M. J., Seidenfeld, T., & Kadane, J. B. (2009). Proper scoring rules, dominated forecasts, and coherence. Decision Analysis, 6(4), 202–221.
  • Seidenfeld, T. (1985). Calibration, coherence, and scoring rules. Philosophy of Science, 52(2), 274–294.
  • Seidenfeld, T., Kadane, J. B., & Schervish, M. J. (1989). On the shared preferences of two Bayesian decision makers. The Journal of Philosophy, 86(5), 225–244.
  • Seidenfeld, T., Schervish, M. J., & Kadane, J. B. (2012). Forecasting with imprecise probabilities. International Journal of Approximate Reasoning, 53(8), 1248–1261.
  • Skyrms, B. (2004). The stag hunt and the evolution of social structure. New York: Cambridge University Press.
  • Skyrms, B. (2010). Signals: Evolution, learning and information. New York: Oxford University Press.
  • Staffel, J. (2015). Disagreement and epistemic utility-based compromise. Journal of Philosophical Logic, 44(3), 273–286.
  • Tebben, N., & Waterman, J. (2015). Epistemic free riders and reasons to trust testimony. Social Epistemology, 29(3), 270–279.
  • von Neumann, J., & Morgenstern, O. (1953). Theory of games and economic behavior. Princeton: Princeton University Press.
  • Zollman, K. J. (2018). The credit economy and the economic rationality of science. Journal of Philosophy, 115(1), 5–33.


Acknowledgements

The title of this paper is an homage to R. B. Braithwaite’s insightful book The Theory of Games as a Tool for the Moral Philosopher (1954). The author would like to thank Liam Kofi Bright, Remco Heesen, Gurpreet Rattan, Teddy Seidenfeld, Julia Staffel, Katie Steele, an anonymous reviewer, and several workshop audiences for their comments. This work was supported by NSF Grant SES 1254291.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Kevin J. S. Zollman.


Cite this article

Zollman, K.J.S. The theory of games as a tool for the social epistemologist. Philos Stud 178, 1381–1401 (2021). https://doi.org/10.1007/s11098-020-01480-5
