Climate Models, Calibration, and Confirmation

Katie Steele & Charlotte Werndl - British Journal for the Philosophy of Science 64 (3):609-635 (2013)

Abstract

We argue that concerns about double-counting—using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate—deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit.

Contents

1 Introduction
2 Remarks about Models and Adequacy-for-Purpose
3 Evidence for Calibration Can Also Yield Comparative Confirmation
3.1 Double-counting I
3.2 Double-counting II
4 Climate Science Examples: Comparative Confirmation in Practice
4.1 Confirmation due to better and worse best fits
4.2 Confirmation due to more and less plausible forcings values
5 Old Evidence
6 Doubts about the Relevance of Past Data
7 Non-comparative Confirmation and Catch-Alls
8 Climate Science Example: Non-comparative Confirmation and Catch-Alls in Practice
9 Concluding Remarks
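The "Bayesian/relative-likelihood approach to incremental confirmation" invoked in the abstract is not spelled out on this page. For orientation only, here is a minimal sketch of the standard textbook formulations, which is not necessarily the precise account defended in the paper:

\[
\text{Incremental confirmation:}\quad E \text{ confirms } H \iff P(H \mid E) > P(H)
\]
\[
\text{Comparative confirmation:}\quad E \text{ favours } H_1 \text{ over } H_2 \iff \frac{P(E \mid H_1)}{P(E \mid H_2)} > 1
\]

On such a reading, the very data used to calibrate a model's free parameters can also raise the evidential standing of one model relative to another, which is the sense in which the abstract claims that double-counting is "entirely proper".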

Similar books and articles

Predictivism and old evidence: a critical look at climate model tuning. Mathias Frisch - 2015 - European Journal for Philosophy of Science 5 (2):171-190.
Contrast Classes and Agreement in Climate Modeling. Corey Dethier - 2024 - European Journal for Philosophy of Science 14 (14):1-19.
Model tuning in engineering: uncovering the logic. Katie Steele & Charlotte Werndl - 2015 - Journal of Strain Analysis for Engineering Design 51 (1):63-71.
Confirmation and Robustness of Climate Models. Elisabeth A. Lloyd - 2010 - Philosophy of Science 77 (5):971–984.
Confirmation and Adequacy-for-Purpose in Climate Modelling. Wendy S. Parker - 2009 - Aristotelian Society Supplementary Volume 83 (1):233-249.

Author Profiles

Katie Steele
Australian National University
Charlotte Werndl
University of Salzburg; London School of Economics

Citations of this work

Calibration: Modelling the measurement process. Eran Tal - 2017 - Studies in History and Philosophy of Science Part A 65:33-45.
The philosophy of logical practice. Ben Martin - 2022 - Metaphilosophy 53 (2-3):267-283.
The argument from surprise. Adrian Currie - 2018 - Canadian Journal of Philosophy 48 (5):639-661.
Computer simulations and experiments: The case of the Higgs boson. Michela Massimi & Wahid Bhimji - 2015 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 51 (C):71-81.

View all 21 citations

References found in this work

Prediction versus accommodation and the risk of overfitting. Christopher Hitchcock & Elliott Sober - 2004 - British Journal for the Philosophy of Science 55 (1):1-34.
Confirmation and Adequacy-for-Purpose in Climate Modelling. Wendy S. Parker - 2009 - Aristotelian Society Supplementary Volume 83 (1):233-249.
Measuring Confirmation and Evidence. Ellery Eells & Branden Fitelson - 2000 - Journal of Philosophy 97 (12):663-672.

View all 8 references