Philosophy of Science 52 (2):274-294 (1985)
Can there be good reasons for judging one set of probabilistic assertions more reliable than a second? There are many candidates for measuring the "goodness" of probabilistic forecasts. Here I focus on one such aspirant: calibration. Calibration requires an alignment of announced probabilities and observed relative frequency, e.g., 50 percent of forecasts made with the announced probability of .5 occur, 70 percent of forecasts made with probability .7 occur, etc. To summarize the conclusions: (i) Surveys designed to display calibration curves, from which a recalibration is to be calculated, are useless without due consideration of the interconnections between questions (forecasts) in the survey. (ii) Subject to feedback, calibration in the long run is otiose. It gives no ground for validating one coherent opinion over another, as each coherent forecaster is (almost) sure of his own long-run calibration. (iii) Calibration in the short run is an inducement to hedge forecasts. A calibration score, in the short run, is improper: it gives the forecaster reason to feign a violation of total evidence by enticing him to use the more predictable frequencies in a larger finite reference class than that directly relevant.
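Point (iii), that a short-run calibration score is improper and so invites hedging, can be illustrated numerically. The sketch below is not from the paper; it is a minimal illustration (function names are hypothetical, and "calibration penalty" is taken here to mean the frequency-weighted squared gap between each announced probability and the observed relative frequency in its bucket). A forecaster believes two independent events have probabilities .2 and .8. Announcing those honest probabilities yields a higher expected penalty than hedging both announcements at the pooled base rate .5, which is exactly the enticement toward a larger reference class that the abstract describes.

```python
import itertools

def expected_calibration_penalty(announced, true_probs):
    """Exact expected calibration penalty under the forecaster's own beliefs.

    Forecasts are grouped by announced probability; each bucket contributes
    (bucket size / n) * (announced - observed relative frequency)^2.
    Expectation is taken by enumerating every outcome vector, weighted by
    its probability under `true_probs` (events assumed independent).
    """
    n = len(true_probs)
    total = 0.0
    for outcomes in itertools.product([0, 1], repeat=n):
        # probability of this outcome vector under the forecaster's beliefs
        prob = 1.0
        for o, p in zip(outcomes, true_probs):
            prob *= p if o else (1 - p)
        # group outcomes by the announced probability
        buckets = {}
        for q, o in zip(announced, outcomes):
            buckets.setdefault(q, []).append(o)
        penalty = sum(
            (len(os) / n) * (q - sum(os) / len(os)) ** 2
            for q, os in buckets.items()
        )
        total += prob * penalty
    return total

beliefs = [0.2, 0.8]
honest = expected_calibration_penalty([0.2, 0.8], beliefs)  # approx. 0.16
hedged = expected_calibration_penalty([0.5, 0.5], beliefs)  # approx. 0.08
print(honest, hedged)
```

Under the forecaster's own beliefs, the hedged announcement is expected to score better than the honest one, so the score is improper: it rewards reporting the pooled base rate rather than the probabilities the forecaster actually holds.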
Citations of this work
Sherrilyn Roush (2009). Second Guessing: A Self-Help Manual. Episteme 6 (3):251-268.
F. Bacchus, Mariam Thalos & H. E. Kyburg (1990). Against Conditionalization. Synthese 85 (3):475-506.
Bas C. Van Fraassen (1995). Belief and the Problem of Ulysses and the Sirens. Philosophical Studies 77 (1):7-37.
A. Hajek (2008). Arguments for-or Against-Probabilism? British Journal for the Philosophy of Science 59 (4):793-819.
Similar books and articles
Ilan Fischer & Ravid Bogaire (2012). The Group Calibration Index: A Group-Based Approach for Assessing Forecasters' Expertise When External Outcome Data Are Missing. Theory and Decision 73 (4):671-685.
Barry Lam (2013). Calibrated Probabilities and the Epistemology of Disagreement. Synthese 190 (6):1079-1098.
Michael H. Brill (1999). A Wiring Demon Meets Socialized Humans and Calibrated Photometers. Behavioral and Brain Sciences 22 (6):948-949.
Gregory Wheeler (2012). Objective Bayesian Calibration and the Problem of Non-Convex Evidence. British Journal for the Philosophy of Science 63 (4):841-850.
Joel Predd, Robert Seiringer, Elliott Lieb, Daniel Osherson, H. Vincent Poor & Sanjeev Kulkarni (2009). Probabilistic Coherence and Proper Scoring Rules. IEEE Transactions on Information Theory 55 (10):4786-4792.
Jonathan M. Weinberg (2012). Intuition & Calibration. Essays in Philosophy 13 (1):15.
Frank Lad (1984). The Calibration Question. British Journal for the Philosophy of Science 35 (3):213-221.