Abstract
Climate scientists and climate modelers often speak of determining whether or not a model has “skill.” This seems to imply that climate models themselves are epistemic experts. In this paper, I take a careful look at how model skill is evaluated in climate science with an eye to determining the extent to which we ought to view the idea of model skill as a step in the direction toward post-human science. I begin by considering the paradigm of verification and validation, which comes from the world of engineering. Though climate scientists sometimes speak of verifying and validating their models, I argue that this paradigm is unsuitable for climate science. I then consider the question of whether or not there are general norms with which climate model skill can be established. William Goodwin has criticized my earlier work on the grounds that it “makes it unlikely that the legitimacy of [computer models] can be reconstructed, or rationalized, in terms of generally recognized, ahistorical evidential norms” (J Gen Philos Sci 46(2):339–350, 2015, p. 342). Nevertheless, I double down on my earlier claims in this paper. I concede that “a philosopher who hopes to address the epistemological concerns that have been raised about the reliability of climate models isn’t going to be able to tell a normatively grounded story that will secure the unassailable reliability of the results of climate modeling” (p. 342). I argue that when we look at the work of those who are in the business of modeling highly complex non-linear systems, the best we are ever going to be able to do is to arrive at a situation where “a simulation modeler could explain to his peers why it was legitimate and rational to use a certain approximation technique to solve a particular problem” by appealing to “very context specific reasons and particular features” (p. 344). If this is right, it suggests the prospects of science failing to “remain human” continue to be bleak.
Notes
- 1.
Verification-directed activities include things like software quality assurance techniques, consistency and convergence tests (roughly speaking: making sure that as you make space and time grid cells smaller and smaller, the results you get start to change less and less), and comparisons of simulation results with known solutions. Validation-directed activities include comparing your model’s output to the output of a model that has stronger theoretical underpinning, etc.
- 2.
Though of course this will be no help in responding to folks who think that all non-linear modeling is “embroiled in confusion.”
- 3.
Much of the rest of this section follows Frisch (2015). The last part, in which I square off some of Frisch’s claims against the stronger claims of climate scientists, is of course entirely my own.
References
Barnes, Eric Christian. 2008. The Paradox of Predictivism. 1st ed. Cambridge: Cambridge University Press.
Frigg, Roman, and Julian Reiss. 2009. The Philosophy of Simulation: Hot New Issues or Same Old Stew? Synthese 169 (3): 593–613.
Frisch, Mathias. 2015. Predictivism and Old Evidence: A Critical Look at Climate Model Tuning. European Journal for Philosophy of Science 5 (2): 171–190.
Golaz, Jean-Christophe, Larry W. Horowitz, and Hiram Levy. 2013. Cloud Tuning in a Coupled Climate Model: Impact on 20th Century Warming. Geophysical Research Letters 40 (10): 2246–2251.
Goodwin, William M. 2015. Global Climate Modeling as Applied Science. Journal for General Philosophy of Science 46 (2): 339–350.
Lenhard, Johannes, and Eric Winsberg. 2010. Holism, Entrenchment, and the Future of Climate Model Pluralism. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 41 (3): 253–262.
Mauritsen, Thorsten, Bjorn Stevens, Erich Roeckner, Traute Crueger, Monika Esch, Marco Giorgetta, Helmuth Haak, et al. 2012. Tuning the Climate of a Global Model. Journal of Advances in Modeling Earth Systems 4 (3). http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/full.
Oberkampf, William L., and Christopher J. Roy. 2010. Verification and Validation in Scientific Computing. 1st ed. New York: Cambridge University Press.
Schmidt, Gavin A., and Steven Sherwood. 2015. A Practical Philosophy of Complex Climate Modelling. European Journal for Philosophy of Science 5 (2): 149–169.
Steele, Katie, and Charlotte Werndl. 2013. Climate Models, Calibration, and Confirmation. The British Journal for the Philosophy of Science 64 (3): 609–635.
Winsberg, Eric. 2003. Simulated Experiments: Methodology for a Virtual World. Philosophy of Science 70 (1): 105–125.
Worrall, John. 2014. Prediction and Accommodation Revisited. Studies in History and Philosophy of Science Part A 45 (March): 54–61. https://doi.org/10.1016/j.shpsa.2013.10.001.
© 2020 Springer Nature Switzerland AG
Cite this chapter
Winsberg, E. (2020). Can Models Have Skill?. In: Bertolaso, M., Sterpetti, F. (eds) A Critical Reflection on Automated Science. Human Perspectives in Health Sciences and Technology, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-030-25001-0_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-25000-3
Online ISBN: 978-3-030-25001-0