Can Models Have Skill?

Chapter in A Critical Reflection on Automated Science, part of the book series Human Perspectives in Health Sciences and Technology (HPHST, volume 1).

Abstract

Climate scientists and climate modelers often speak of determining whether or not a model has “skill.” This seems to imply that climate models themselves are epistemic experts. In this paper, I take a careful look at how model skill is evaluated in climate science with an eye to determining the extent to which we ought to view the idea of model skill as a step in the direction toward post-human science. I begin by considering the paradigm of verification and validation, which comes from the world of engineering. Though climate scientists sometimes speak of verifying and validating their models, I argue that this paradigm is unsuitable for climate science. I then consider the question of whether or not there are general norms with which climate model skill can be established. William Goodwin has criticized my earlier work on the grounds that it “makes it unlikely that the legitimacy of [computer models] can be reconstructed, or rationalized, in terms of generally recognized, ahistorical evidential norms” (J Gen Philos Sci 46(2):339–350, 2015, p. 342). Nevertheless, I double down on my earlier claims in this paper. I concede that “a philosopher who hopes to address the epistemological concerns that have been raised about the reliability of climate models isn’t going to be able to tell a normatively grounded story that will secure the unassailable reliability of the results of climate modeling” (p. 342). I argue that when we look at the work of those who are in the business of modeling highly complex non-linear systems, the best we are ever going to be able to do is to arrive at a situation where “a simulation modeler could explain to his peers why it was legitimate and rational to use a certain approximation technique to solve a particular problem” by appealing to “very context specific reasons and particular features” (p. 344). If this is right, it suggests the prospects of science failing to “remain human” continue to be bleak.

Notes

  1.
    Verification-directed activities include things like software quality assurance techniques, consistency and convergence tests (roughly speaking: making sure that as you make space and time grid cells smaller and smaller, the results you get change less and less), and comparisons of simulation results with known solutions. Validation-directed activities include comparing your model’s output to the output of a model that has stronger theoretical underpinning, etc.
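
    As a toy illustration (my own, not from the chapter), a convergence test of the kind described above can be sketched in a few lines of Python: solve the simple equation du/dt = -u with the forward Euler method, repeatedly halve the step size, and check that successive refinements disagree with one another less and less.

    ```python
    # Toy convergence test: forward Euler for du/dt = -u, u(0) = 1.
    # The exact solution at t = 1 is exp(-1) ≈ 0.3679, but a verification-style
    # convergence test only asks whether refinements stabilize, not whether
    # they match a known answer.

    def euler_solve(dt, t_end=1.0):
        """Forward-Euler approximation of u(t_end) for du/dt = -u, u(0) = 1."""
        n = round(t_end / dt)  # integer step count avoids float accumulation
        u = 1.0
        for _ in range(n):
            u += dt * (-u)
        return u

    # Successive refinements: dt = 0.1, 0.05, 0.025, ...
    solutions = [euler_solve(0.1 / 2**k) for k in range(5)]
    diffs = [abs(a - b) for a, b in zip(solutions, solutions[1:])]

    # The convergence check: differences between successive refinements shrink
    # (here roughly halving each time, consistent with a first-order method).
    assert all(d2 < d1 for d1, d2 in zip(diffs, diffs[1:]))
    ```

    For a real climate model the "solution" is a high-dimensional field and the refinement is of space and time grid cells, but the logic of the check is the same.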

  2.
    Though of course this will be no help in responding to folks who think that all non-linear modeling is “embroiled in confusion.”

  3.
    Much of the rest of this section follows Frisch (2015). The last part, in which I square off some of Frisch’s claims against the stronger claims of climate scientists, is of course entirely my own.

References

  • Barnes, Eric Christian. 2008. The Paradox of Predictivism. 1st ed. Cambridge: Cambridge University Press.

  • Frigg, Roman, and Julian Reiss. 2009. The Philosophy of Simulation: Hot New Issues or Same Old Stew? Synthese 169 (3): 593–613.

  • Frisch, Mathias. 2015. Predictivism and Old Evidence: A Critical Look at Climate Model Tuning. European Journal for Philosophy of Science 5 (2): 171–190.

  • Golaz, Jean-Christophe, Larry W. Horowitz, and Hiram Levy. 2013. Cloud Tuning in a Coupled Climate Model: Impact on 20th Century Warming. Geophysical Research Letters 40 (10): 2246–2251.

  • Goodwin, William M. 2015. Global Climate Modeling as Applied Science. Journal for General Philosophy of Science 46 (2): 339–350.

  • Lenhard, Johannes, and Eric Winsberg. 2010. Holism, Entrenchment, and the Future of Climate Model Pluralism. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 41 (3): 253–262.

  • Mauritsen, Thorsten, Bjorn Stevens, Erich Roeckner, Traute Crueger, Monika Esch, Marco Giorgetta, Helmuth Haak, et al. 2012. Tuning the Climate of a Global Model. Journal of Advances in Modeling Earth Systems 4 (3). http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/full.

  • Oberkampf, William L., and Christopher J. Roy. 2010. Verification and Validation in Scientific Computing. 1st ed. New York: Cambridge University Press.

  • Schmidt, Gavin A., and Steven Sherwood. 2015. A Practical Philosophy of Complex Climate Modelling. European Journal for Philosophy of Science 5 (2): 149–169.

  • Steele, Katie, and Charlotte Werndl. 2013. Climate Models, Calibration, and Confirmation. The British Journal for the Philosophy of Science 64 (3): 609–635.

  • Winsberg, Eric. 2003. Simulated Experiments: Methodology for a Virtual World. Philosophy of Science 70 (1): 105–125.

  • Worrall, John. 2014. Prediction and Accommodation Revisited. Studies in History and Philosophy of Science Part A 45 (March): 54–61. https://doi.org/10.1016/j.shpsa.2013.10.001.

Author information

Correspondence to Eric Winsberg.

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter
Cite this chapter

Winsberg, E. (2020). Can Models Have Skill? In: Bertolaso, M., Sterpetti, F. (eds) A Critical Reflection on Automated Science. Human Perspectives in Health Sciences and Technology, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-030-25001-0_10