
Defining Coverage of a Domain Using a Modified Nearest-Neighbor Metric

  • Conference paper

Abstract

Validation experiments are conducted at discrete settings within the domain of interest to assess the predictive maturity of a model over the entire domain. Satisfactory model performance at these discrete tested settings alone is insufficient to ensure that the model will perform well throughout the domain, particularly at settings far from the validation experiments. The goal of coverage metrics is to reveal how well a set of validation experiments represents the entire operational domain. The authors identify the criteria of an exemplary coverage metric, evaluate the ability of existing coverage metrics to fulfill each criterion, and propose a new, improved coverage metric. The proposed metric favors interpolation over extrapolation through a penalty function, causing the metric to prefer designs of validation experiments that lie near the boundaries of the domain while also exploring its interior. Furthermore, the proposed metric accounts for the uncertainty associated with validation experiments. The proposed coverage metric is demonstrated on a practical, non-trivial problem: the Viscoplastic Self-Consistent material plasticity code for 5182 aluminum alloy.
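The abstract's core idea — nearest-neighbor distances that are inflated by a penalty whenever assessing a point would require extrapolation beyond the validation experiments — can be sketched as follows. This is a minimal illustration, not the authors' formulation: the penalty factor, the bounding-box test for extrapolation, and the 1/(1 + d) normalization are all assumptions made here for clarity.

```python
import math

def nearest_neighbor_distance(point, experiments):
    """Euclidean distance from a candidate point to its closest validation experiment."""
    return min(math.dist(point, e) for e in experiments)

def penalized_distance(point, experiments, penalty=2.0):
    """Nearest-neighbor distance, inflated by `penalty` when reaching `point`
    would require extrapolation.  Extrapolation is approximated here as lying
    outside the axis-aligned bounding box of the experiments (an assumption;
    the paper penalizes extrapolation more generally)."""
    d = nearest_neighbor_distance(point, experiments)
    dims = range(len(point))
    lo = [min(e[k] for e in experiments) for k in dims]
    hi = [max(e[k] for e in experiments) for k in dims]
    outside = any(not (lo[k] <= point[k] <= hi[k]) for k in dims)
    return d * penalty if outside else d

def coverage(grid, experiments, penalty=2.0):
    """Map the mean penalized distance over a discretized domain into (0, 1):
    a well-spread design scores near 1, a clustered design near 0.
    The 1/(1 + d) normalization is an illustrative choice, not the paper's."""
    dists = [penalized_distance(p, experiments, penalty) for p in grid]
    return 1.0 / (1.0 + sum(dists) / len(dists))
```

Under this sketch, a design with experiments spread toward the domain boundaries covers a uniform grid of candidate settings better than a design clustered in one corner, because corner-clustered designs force extrapolation (and hence the penalty) over most of the domain.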


Notes

  1. A more objective criterion could also be used, in which the size of each convex hull surrounding a validation experiment is based on a sensitivity analysis of the model: if the model predictions change rapidly around a point, the hull is made smaller.


Acknowledgements

The authors would like to thank Godfrey Kimball for his editorial review of this manuscript.

Author information

Correspondence to Sez Atamturktur.


Copyright information

© 2013 The Society for Experimental Mechanics, Inc.

About this paper

Cite this paper

Egeberg, M.C., Atamturktur, S., Hemez, F.M. (2013). Defining Coverage of a Domain Using a Modified Nearest-Neighbor Metric. In: Simmermacher, T., Cogan, S., Moaveni, B., Papadimitriou, C. (eds) Topics in Model Validation and Uncertainty Quantification, Volume 5. Conference Proceedings of the Society for Experimental Mechanics Series. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6564-5_12

  • DOI: https://doi.org/10.1007/978-1-4614-6564-5_12

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4614-6563-8

  • Online ISBN: 978-1-4614-6564-5
