
Confidence biases and learning among intuitive Bayesians

Published in Theory and Decision.

Abstract

We design a double-or-quits game to compare the speed at which people learn their specific ability with the speed at which their confidence rises as the task gets increasingly difficult. We find that people on average learn to be overconfident faster than they learn their true ability, and we present an intuitive-Bayesian model of confidence which integrates confidence biases and learning. Uncertainty about one’s true ability to perform a task in isolation can be responsible for large and stable confidence biases, namely limited discrimination, the hard–easy effect, the Dunning–Kruger effect, conservative learning from experience and the overprecision phenomenon (without underprecision), if subjects act as Bayesian learners who rely only on sequentially perceived performance cues and on contrarian illusory signals induced by doubt. Moreover, these biases are likely to persist, since the Bayesian aggregation of past information consolidates the accumulation of errors, and the perception of contrarian illusory signals generates conservatism and under-reaction to events. Taken together, these two features may explain why intuitive Bayesians make systematically wrong predictions of their own performance.


Notes

  1. Using German survey data about stock market forecasters, Deaves et al. (2010) do not confirm that success has a greater impact than failure on self-confidence, which casts doubt on the self-attribution bias explanation.

  2. In studies where subjects are free to stay or to leave after negative feedback, the subjects who revise their confidence in future success downward most strongly after a negative signal are selectively sorted out of the sample. This creates an asymmetry in the measured responses to positive and negative feedback. No such spurious asymmetry exists in the present experiment, because subjects who fail to reach one level must drop out of the game.

  3. The planning fallacy is the tendency to underestimate the time needed for completion of a task. See, e.g. Buehler et al. (2002).

  4. The screen highlights the round, the number of correct anagrams cumulated during the current level and the number of anagrams needed to pass this level.

  5. After the subject has reported a probability p, the quadratic scoring rule imposes a cost that is proportional to \( (1-p)^{2} \) in case of success and to \( (0-p)^{2} \) in case of failure. The score takes the general form \( S = a - b\cdot \mathrm{Cost} \), with \( a, b>0 \).
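
This payoff structure can be sketched in a few lines of Python. The stakes a and b and the 0.01 reporting grid below are illustrative assumptions, not part of the experimental design:

```python
def quadratic_score(p, success, a=100.0, b=100.0):
    """Quadratic scoring rule: the cost is (1 - p)**2 after a success
    and p**2 after a failure, and the score is S = a - b * cost."""
    cost = (1.0 - p) ** 2 if success else p ** 2
    return a - b * cost

def expected_score(report, true_p):
    """Expected score of reporting `report` when the true success
    probability is `true_p`."""
    return (true_p * quadratic_score(report, True)
            + (1.0 - true_p) * quadratic_score(report, False))

# The rule is incentive compatible: over a 0.01 grid of reports, the
# expected score is maximized by reporting the true probability.
best = max(range(101), key=lambda r: expected_score(r / 100, 0.7))
print(best / 100)  # 0.7
```

Truthful reporting is optimal because the expected score, a − b[q(1−r)² + (1−q)r²], is a concave quadratic in the report r with its maximum at r = q.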

  6. The second study also included the lottery rule in the comparison and found that the latter slightly outperformed self-report. The lottery rule rests on the following mechanism: after the subject has reported a probability p, a random number q is drawn. If q is smaller than p, the subject is paid according to the task. If q is greater than p, the subject is paid according to a risky bet that provides the same reward with probability q. The lottery rule cannot be implemented in our design.
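
The mechanism can be simulated directly; in this minimal sketch the reward size and the seeded generator are illustrative assumptions:

```python
import random

def lottery_rule_payment(p, task_success, reward=10.0, rng=random.Random(0)):
    """One application of the lottery rule: after the subject reports p,
    draw q uniformly on [0, 1].  If q < p, pay according to the task
    outcome; otherwise pay according to a bet that wins the same reward
    with probability q."""
    q = rng.random()
    if q < p:
        return reward if task_success else 0.0
    return reward if rng.random() < q else 0.0
```

Reporting the true success probability is optimal under this rule because, whenever q exceeds the true probability, the bet that pays with probability q dominates the task, and conversely.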

  7. We are grateful to Luis Santos-Pinto for making the last point clear in early discussions.

  8. The difference between confidence and the frequency of success is significant at the 1% level for all ability levels. For these figures, we selected the confidence reported after the 4th round (during the training level) to minimize the impact of mismeasurement.

  9. The Dunning–Kruger effect initially addressed general knowledge questions whereas we consider self-assessments of own performance in a real-effort task.

  10. Our explanation may also improve on the initial explanation that the unskilled are unaware of their lower abilities. Miller and Geraci (2011) found that students with poor abilities showed greater overconfidence than high-performing students, but they also reported lower confidence in these predictions.

  11. No significant difference was found between the Choice and No-choice conditions, suggesting that the option to choose the preferred path does not trigger an illusion of control.

  12. Participants who reported confidence after the training period were more able than average since they had passed this level and decided to double. Thus, we compare ability-adjusted confidence Before and During with the reported confidence After. The ability-adjusted confidence Before and During are obtained by running a simple linear regression of confidence Before and During on ability, measured by the average number of anagrams solved per minute in the first four rounds of the training level. The estimated effect of superior ability of doublers was added to confidence During or Before to get the ability-adjusted confidence which directly compares with the observed confidence After.
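
As a minimal numerical sketch of this adjustment (numpy assumed, and every number below is invented for illustration), confidence is regressed on ability and the fitted slope translates the doublers' ability advantage into a confidence correction:

```python
import numpy as np

# Hypothetical data: ability as anagrams solved per minute in the first
# four rounds, and the confidence reported Before.
ability = np.array([1.2, 1.8, 2.1, 2.6, 3.0, 3.4])
conf_before = np.array([55., 60., 62., 70., 74., 80.])

# Simple linear regression of confidence on ability.
slope, intercept = np.polyfit(ability, conf_before, 1)

# Hypothetical excess ability of subjects who passed the level and
# chose to double; the slope converts it into a confidence adjustment.
doubler_ability_gap = 0.5
adjusted_before = conf_before + slope * doubler_ability_gap
```

The adjusted series is then directly comparable with the confidence reported After, since both now refer to the abler population of doublers.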

  13. With a single exception, confidence variations are statistically significant at the 1% level in the middle and high levels.

  14. There was no significant difference between treatments.

  15. The time \(t=(1,5,10)\) at which confidence is reported is omitted in this sub-section to lighten notation.

  16. Very close numbers are obtained for all calibration biases with confidence reported during the game.

  17. This should not be confused with motivated inference, as it applies symmetrically to undesirable and desirable outcomes.

  18. The rational decision to undertake a non-trivial task of level l (with a possibility to fail and regret) is subjective. The economic criterion for making this decision rests on the comparison of the expected utilities of all options conditional on the estimated probabilities of success at the time of decision. A rational subject should refuse the task if the expected utility of continuing to level l or above is no higher than the expected utility of stopping before level l. We make use of this criterion for writing Eqs. 6 and 7 in the next Sect. 5.1.
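
A minimal sketch of this criterion in Python (the function name and the utility numbers are ours, chosen only to illustrate the comparison):

```python
def should_continue(p_success, u_win, u_lose, u_stop):
    """Expected-utility criterion of note 18: undertake level l only if
    the expected utility of continuing, evaluated at the estimated
    success probability, exceeds the expected utility of stopping."""
    return p_success * u_win + (1.0 - p_success) * u_lose > u_stop

# In a double-or-quits setting, winnings double on success (u_win),
# everything is lost on failure (u_lose = 0), and stopping secures
# u_stop, so the subject continues only if p_success > u_stop / u_win.
assert should_continue(0.6, 20.0, 0.0, 10.0)       # 12 > 10: continue
assert not should_continue(0.4, 20.0, 0.0, 10.0)   # 8 < 10: stop
```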

  19. It is assumed here, as in Table 1, that the two estimates are independent.

  20. If \(\nu _{i}\) denotes the prior precision of subject \(i\)'s estimate of her future success (omitting level l for simplicity), \(\nu _{i}+1\equiv \varPhi _{i}\) will be the posterior precision after reception of an i.i.d. signal. Thus, \(\varPhi _i >\nu _{i}\). Notice that \(\mu _{i}=\frac{\nu _{i}}{\nu _{i}+1}\).
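
This precision arithmetic can be checked in a few lines (assuming the standard normal-mean Bayesian update with unit-precision signals; the function names are ours):

```python
def posterior_weight(nu):
    """Precision update of note 20: a prior with precision nu becomes a
    posterior with precision phi = nu + 1 after one unit-precision
    i.i.d. signal, and the prior keeps weight mu = nu / (nu + 1) in
    the posterior mean."""
    phi = nu + 1.0
    mu = nu / phi
    return phi, mu

def update_mean(prior_mean, signal, nu):
    """Posterior mean as the precision-weighted average of prior and signal."""
    phi, mu = posterior_weight(nu)
    return mu * prior_mean + (1.0 - mu) * signal

# With nu = 3 the prior keeps weight 0.75, so one signal moves the
# estimate only a quarter of the way: conservatism grows with nu.
print(update_mean(0.5, 1.0, 3.0))  # 0.625
```

The growing weight on the prior as ν accumulates is exactly the mechanism behind the conservative learning from experience discussed in the text.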

  21. To have an unambiguous definition of \(D_{(5,l)}\) and \(D_{(1,l)}\) below, we use the expected utility (EU) criterion, as explained in note 18.

  22. Confidence is reported as a discrete value between 0 and 100, but it can safely be treated as continuous.

  23. We used OLS to predict probabilities of success so as to make the comparison with confidence transparent. Estimating by OLS instead of a Probit in columns 3 and 4 did not affect the qualitative conclusions.

  24. Conditional on initial success, prior confidence is a good predictor of the future decision to double (regression not shown). This is good news for the quality of confidence reports; and it confirms that subjects behave as intuitive Bayesians who rely on their own subjective estimates of success to make the choice of doubling.

  25. The predicted values were computed from regressions containing only the significant variables. We checked that these values stayed close to the predictions derived from the regressions listed in Table 6, which contain non-significant variables too.

  26. However, overconfidence may pay off when there is uncertainty about opponents’ real strengths, and when the benefits of the prize at stake are sufficiently larger than the costs (e.g., Johnson and Fowler 2011; Anderson et al. 2012).

References

  • Adams, J. K. (1957). A confidence scale defined in terms of expected percentages. The American Journal of Psychology, 70, 432–436.

  • Anderson, C., Brion, S., Moore, D. A., & Kennedy, J. A. (2012). A status-enhancement account of overconfidence. Journal of Personality and Social Psychology, 103(4), 718–735.

  • Armantier, O., & Treich, N. (2013). Eliciting beliefs: Proper scoring rules, incentives, stakes and hedging. European Economic Review, 62, 17–40.

  • Barber, B. M., & Odean, T. (2001). Boys will be boys: Gender, overconfidence, and common stock investment. Quarterly Journal of Economics, 116(1), 261–292.

  • Benoît, J. P., & Dubra, J. (2011). Apparent overconfidence. Econometrica, 79(5), 1591–1625.

  • Benoît, J. P., Dubra, J., & Moore, D. A. (2015). Does the better-than-average effect show that people are overconfident?: Two experiments. Journal of the European Economic Association, 13(2), 293–329.

  • Breen, R. (2001). A rational choice model of educational inequality. Centro de Estudios Avanzados en Ciencias Sociales, Instituto Juan March de Estudios e Investigaciones, Madrid. Working paper (166).

  • Brunnermeier, M. K., & Parker, J. A. (2005). Optimal expectations. American Economic Review, 95(4), 1092–1118.

  • Buehler, R., Griffin, D., & Ross, M. (2002). Inside the planning fallacy: The causes and consequences of optimistic time predictions. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 250–270). Cambridge, UK: Cambridge University Press.

  • Camerer, C., & Lovallo, D. (1999). Overconfidence and excess entry: An experimental approach. American Economic Review, 89(1), 306–318.

  • Clark, J., & Friesen, L. (2009). Overconfidence in forecasts of own performance: An experimental study. The Economic Journal, 119(534), 229–251.

  • Deaves, R., Lüders, E., & Schröder, M. (2010). The dynamics of overconfidence: Evidence from stock market forecasters. Journal of Economic Behavior & Organization, 75(3), 402–412.

  • DeGroot, M. H. (1970). Optimal statistical decisions. New York: McGraw-Hill.

  • Erev, I., Wallsten, T. S., & Budescu, D. V. (1994). Simultaneous over- and underconfidence: The role of error in judgment processes. Psychological Review, 101(3), 519–528.

  • Gervais, S., & Odean, T. (2001). Learning to be overconfident. Review of Financial Studies, 14(1), 1–27.

  • Goodie, A. S. (2005). The role of perceived control and overconfidence in pathological gambling. Journal of Gambling Studies, 21(4), 481–502.

  • Grieco, D., & Hogarth, R. M. (2009). Overconfidence in absolute and relative performance: The regression hypothesis and Bayesian updating. Journal of Economic Psychology, 30(5), 756–771.

  • Griffin, D., & Tversky, A. (1992). The weighing of evidence and the determinants of confidence. Cognitive Psychology, 24(3), 411–435.

  • Heath, C., & Tversky, A. (1991). Preference and belief: Ambiguity and competence in choice under uncertainty. Journal of Risk and Uncertainty, 4(1), 5–28.

  • Hollard, G., Massoni, S., & Vergnaud, J. C. (2015). In search of good probability assessors: An experimental comparison of elicitation rules for confidence judgments. Theory and Decision, 1–25.

  • Johnson, D. D. (2004). Overconfidence and war: The havoc and glory of positive illusions. Cambridge: Harvard University Press.

  • Johnson, D. D., & Fowler, J. H. (2011). The evolution of overconfidence. Nature, 477(7364), 317–320.

  • Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.

  • Köszegi, B. (2006). Ego utility, overconfidence, and task choice. Journal of the European Economic Association, 4(4), 673–707.

  • Kruger, J. (1999). Lake Wobegon be gone! The “below-average effect” and the egocentric nature of comparative ability judgments. Journal of Personality and Social Psychology, 77(2), 221–232.

  • Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.

  • Langer, E. J., & Roth, J. (1975). Heads I win, tails it’s chance: The illusion of control as a function of the sequence of outcomes in a purely chance task. Journal of Personality and Social Psychology, 32, 951–955.

  • Lichtenstein, S., & Fischhoff, B. (1977). Do those who know more also know more about how much they know? Organizational Behavior and Human Performance, 20(2), 159–183.

  • Lichtenstein, S., & Slovic, P. (1971). Reversals of preference between bids and choices in gambling decisions. Journal of Experimental Psychology, 89(1), 46–55.

  • Lichtenstein, S., Fischhoff, B., & Phillips, L. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 306–334). New York: Cambridge University Press.

  • Malmendier, U., & Tate, G. (2005). CEO overconfidence and corporate investment. The Journal of Finance, 60(6), 2661–2700.

  • Merkle, C., & Weber, M. (2011). True overconfidence: The inability of rational information processing to account for apparent overconfidence. Organizational Behavior and Human Decision Processes, 116(2), 262–271.

  • Miller, D. T., & Ross, M. (1975). Self-serving biases in the attribution of causality: Fact or fiction? Psychological Bulletin, 82(2), 213–225.

  • Miller, T. M., & Geraci, L. (2011). Unskilled but aware: Reinterpreting overconfidence in low-performing students. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(2), 502–506.

  • Mobius, M., Niederle, M., Niehaus, P., & Rosenblat, T. (2014). Managing self-confidence. Working paper.

  • Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502–517.

  • Oskamp, S. (1965). Overconfidence in case-study judgments. Journal of Consulting Psychology, 29(3), 261–265.

  • Ryvkin, D., Krajč, M., & Ortmann, A. (2012). Are the unskilled doomed to remain unaware? Journal of Economic Psychology, 33(5), 1012–1031.

  • Shiller, R. J. (2000). Measuring bubble expectations and investor confidence. The Journal of Psychology and Financial Markets, 1(1), 49–60.

  • Svenson, O. (1981). Are we all less risky and more skillful than our fellow drivers? Acta Psychologica, 47(2), 143–148.

  • Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232.

  • Van den Steen, E. (2011). Overconfidence by Bayesian-rational agents. Management Science, 57(5), 884–896.


Acknowledgements

We thank the French Ministère de la Recherche (ACI “Contextes sociaux, contextes institutionnels et rendements des systèmes éducatifs”) for generous support, Claude Montmarquette for offering an opportunity to conduct part of the experimental sessions at CIRANO (Montreal), and Noemi Berlin for numerous discussions. We are grateful to the referees and the editors of this special issue for bringing very helpful remarks and suggestions.

Author information


Corresponding author

Correspondence to Louis Lévy-Garboua.

Appendix


See Fig. 7.

Fig. 7

Example of the task screen. A Current round (round 5 in this example). B List of anagrams to be decoded. C Fields to type the correct word. D Feedback: “OK” appears when the solution to the anagram is correct. E Number of correct anagrams in the current round. F Total anagrams to be decoded in the current round, 6 in this example (first level). G Number of cumulated correct anagrams, including the current and previous rounds. H Number of correct anagrams required to solve the current level, 36 in this example (first level). I Remaining time. The total time is 8 min; we show only the last 3 minutes. J Button to go to the next round. Participants can move on to the next round without solving all the anagrams in the current level, but they cannot come back once they have pushed the button.


About this article


Cite this article

Lévy-Garboua, L., Askari, M. & Gazel, M. Confidence biases and learning among intuitive Bayesians. Theory Decis 84, 453–482 (2018). https://doi.org/10.1007/s11238-017-9612-1
