Abstract
A surrogate model built from a limited number of sample points inevitably carries large prediction uncertainty. Applying such an imprecise surrogate model in design and optimization may produce misleading predictions or optimal solutions located in infeasible regions (Picheny 2009). Verifying the accuracy of a surrogate model before using it therefore helps ensure the reliability of the design.
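As a minimal illustration of the kind of verification this chapter surveys, the sketch below fits a polynomial surrogate to a small sample of an assumed "expensive" test function and scores it with leave-one-out cross-validation, reporting the RMSE and MAE error metrics discussed in the references. The test function, sample size, and polynomial degree are arbitrary choices for illustration, not taken from the chapter.

```python
import numpy as np

# Illustrative setup: a cheap stand-in for an expensive simulation.
# Sample budget (12 points) and function choice are assumptions.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 12))   # limited sample points
y = x * np.sin(x)                         # "true" response

def fit_predict(x_train, y_train, x_test, degree=4):
    """Fit a polynomial surrogate and predict at x_test."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.polyval(coeffs, x_test)

# Leave-one-out cross-validation: refit the surrogate with each
# point held out in turn and record the prediction error there.
errors = []
for i in range(len(x)):
    mask = np.arange(len(x)) != i
    errors.append(y[i] - fit_predict(x[mask], y[mask], x[i]))
errors = np.asarray(errors)

rmse = np.sqrt(np.mean(errors**2))   # root mean square error
mae = np.mean(np.abs(errors))        # mean absolute error
print(f"LOOCV RMSE = {rmse:.3f}, MAE = {mae:.3f}")
```

A large gap between such cross-validation errors and the surrogate's fit on its own training points is one practical warning sign of the prediction uncertainty the abstract describes.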
References
Acar E (2010) Various approaches for constructing an ensemble of metamodels using local measures. Struct Multidiscip Optim 42:879–896
Acar E (2015) Effect of error metrics on optimum weight factor selection for ensemble of metamodels. Expert Syst Appl 42:2703–2709
Acar E, Rais-Rohani M (2009) Ensemble of metamodels with optimized weight factors. Struct Multidiscip Optim 37:279–294
Arlot S, Celisse A (2010) A survey of cross-validation procedures for model selection. Stat Surv 4:40–79
Barrett JP (1974) The coefficient of determination—some limitations. Am Stat 28:19–20
Bhattacharyya B (2018) A critical appraisal of design of experiments for uncertainty quantification. Arch Comput Methods Eng 25:727–751
Boopathy K, Rumpfkeil MP (2014) Unified framework for training point selection and error estimation for surrogate models. AIAA J 53:215–234
Borra S, Di Ciaccio A (2010) Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods. Comput Stat Data Anal 54:2976–2989
Breiman L (1996) Heuristics of instability and stabilization in model selection. Ann Stat 24:2350–2383
Burman P (1989) A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods. Biometrika 76:503–514
Chai T, Draxler RR (2014) Root mean square error (RMSE) or mean absolute error (MAE)?–Arguments against avoiding RMSE in the literature. Geosci Model Dev 7:1247–1250
Du Q, Faber V, Gunzburger M (1999) Centroidal Voronoi tessellations: applications and algorithms. SIAM Rev 41:637–676
Eason J, Cremaschi S (2014) Adaptive sequential sampling for surrogate model generation with artificial neural networks. Comput Chem Eng 68:220–232
Efron B (1979) Bootstrap methods: another look at the jackknife. Ann Stat 7:1–26
Efron B (1983) Estimating the error rate of a prediction rule: improvement on cross-validation. J Am Stat Assoc 78:316–331
Efron B, Tibshirani R (1997) Improvements on cross-validation: the 632 + bootstrap method. J Am Stat Assoc 92:548–560
Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Monographs on statistics and applied probability, vol 57. Chapman & Hall/CRC, New York
Franses PH (2016) A note on the mean absolute scaled error. Int J Forecast 32:20–22
Fushiki T (2011) Estimation of prediction error by using K-fold cross-validation. Stat Comput 21:137–146
Goel T, Haftka RT, Shyy W (2009) Comparing error estimation measures for polynomial and kriging approximation of noise-free functions. Struct Multidiscip Optim 38:429–442
Goel T, Haftka RT, Shyy W, Queipo NV (2007) Ensemble of surrogates. Struct Multidiscip Optim 33:199–216
Goel T, Stander N (2009) Comparing three error criteria for selecting radial basis function network topology. Comput Methods Appl Mech Eng 198:2137–2150
Grafton RQ (2012) Coefficient of determination. A dictionary of climate change and the environment. Edward Elgar Publishing Limited
Gronau QF, Wagenmakers E-J (2018) Limitations of Bayesian leave-one-out cross-validation for model selection. Comput Brain Behav 1–11
Hu J, Yang Y, Zhou Q, Jiang P, Shao X, Shu L, Zhang Y (2018) Comparative studies of error metrics in variable fidelity model uncertainty quantification. J Eng Des 29:512–538
Hyndman RJ, Koehler AB (2006) Another look at measures of forecast accuracy. Int J Forecast 22:679–688
Jin R, Chen W, Simpson TW (2001) Comparative studies of metamodeling techniques under multiple modelling criteria. Struct Multidiscip Optim 23:1–13
Kohavi R (1995) A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Proceedings of the 14th international joint conference on artificial intelligence (IJCAI), Montreal, Canada, pp 1137–1145
Larson SC (1931) The shrinkage of the coefficient of multiple correlation. J Educ Psychol 22:45
Li Y (2010) Root mean square error. In: Salkind NJ (ed) Encyclopedia of research design. Sage Publications Inc., Thousand Oaks, CA, pp 1288–1289
Li J, Heap AD (2011) A review of comparative studies of spatial interpolation methods in environmental sciences: performance and impact factors. Ecol Inform 6:228–241
Liu H, Cai J, Ong Y-S (2017) An adaptive sampling approach for kriging metamodeling by maximizing expected prediction error. Comput Chem Eng 106:171–182
Liu J, Han Z, Song W (2012) Comparison of infill sampling criteria in kriging-based aerodynamic optimization. In: 28th congress of the international council of the aeronautical sciences, pp 23–28
Mao W, Xu J, Wang C, Dong L (2014) A fast and robust model selection algorithm for multi-input multi-output support vector machine. Neurocomputing 130:10–19
Meckesheimer M, Booker AJ, Barton RR, Simpson TW (2002) Computationally inexpensive metamodel assessment strategies. AIAA J 40:2053–2060
Mehmani A, Chowdhury S, Messac A (2015) Predictive quantification of surrogate model fidelity based on modal variations with sample density. Struct Multidiscip Optim 52:353–373
Miller RG (1974) The jackknife-a review. Biometrika 61:1–15
Nagelkerke NJ (1991) A note on a general definition of the coefficient of determination. Biometrika 78:691–692
Nguyen HM, Couckuyt I, Knockaert L, Dhaene T, Gorissen D, Saeys Y (2011) An alternative approach to avoid overfitting for surrogate models. In: Proceedings of the winter simulation conference: winter simulation conference, pp 2765–2776
Picheny V (2009) Improving accuracy and compensating for uncertainty in surrogate modeling. University of Florida, Gainesville
Queipo NV, Haftka RT, Shyy W, Goel T, Vaidyanathan R, Tucker PK (2005) Surrogate-based analysis and optimization. Prog Aerosp Sci 41:1–28
Quenouille MH (1949) Approximate tests of correlation in time-series 3. In: Mathematical proceedings of the Cambridge Philosophical Society. Cambridge University Press, pp 483–484
Renaud O, Victoria-Feser M-P (2010) A robust coefficient of determination for regression. J Stat Plan Inference 140:1852–1862
Rodriguez JD, Perez A, Lozano JA (2010) Sensitivity analysis of k-fold cross validation in prediction error estimation. IEEE Trans Pattern Anal Mach Intell 32:569–575
Romero DA, Marin VE, Amon CH (2015) Error metrics and the sequential refinement of kriging metamodels. J Mech Des 137:011402
Salkind NJ (2010) Encyclopedia of research design. Sage
Sanchez E, Pintos S, Queipo NV (2008) Toward an optimal ensemble of kernel-based approximations with engineering applications. Struct Multidiscip Optim 36:247–261
Shao J (1993) Linear model selection by cross-validation. J Am Stat Assoc 88:486–494
Shao J (1996) Bootstrap model selection. J Am Stat Assoc 91:655–665
Shao J, Tu D (2012) The jackknife and bootstrap. Springer Science & Business Media
Stone M (1974) Cross-validatory choice and assessment of statistical predictions. J R Stat Soc Ser B (Methodological) 36:111–147
Vehtari A, Gelman A, Gabry J (2017) Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat Comput 27:1413–1432
Viana FA, Haftka RT, Steffen V (2009) Multiple surrogates: how cross-validation errors can help us to obtain the best predictor. Struct Multidiscip Optim 39:439–457
Wang Y, Liu Q (2006) Comparison of Akaike information criterion (AIC) and Bayesian information criterion (BIC) in selection of stock–recruitment relationships. Fish Res 77:220–225
Willmott CJ, Matsuura K (2005) Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Res 30:79–82
Yanagihara H, Tonda T, Matsumoto C (2006) Bias correction of cross-validation criterion based on Kullback-Leibler information under a general condition. J Multivar Anal 97:1965–1975
Yang Y (2007) Consistency of cross validation for comparing regression procedures. Ann Stat 35:2450–2473
Ye P, Pan G, Dong Z (2018) Ensemble of surrogate based global optimization methods using hierarchical design space reduction. Struct Multidiscip Optim 58:537–554
Zhao D, Xue D (2010) A comparative study of metamodeling methods considering sample quality merits. Struct Multidiscip Optim 42:923–938
Zhou Q, Shao X, Jiang P, Gao Z, Zhou H, Shu L (2016) An active learning variable-fidelity metamodelling approach based on ensemble of metamodels and objective-oriented sequential sampling. J Eng Des 27:205–231
Zhou Q, Wang Y, Choi S-K, Jiang P, Shao X, Hu J (2017) A sequential multi-fidelity metamodeling approach for data regression. Knowl-Based Syst 134:199–212
© 2020 Springer Nature Singapore Pte Ltd.
Cite this chapter
Jiang, P., Zhou, Q., Shao, X. (2020). Verification Methods for Surrogate Models. In: Surrogate Model-Based Engineering Design and Optimization. Springer Tracts in Mechanical Engineering. Springer, Singapore. https://doi.org/10.1007/978-981-15-0731-1_5
Print ISBN: 978-981-15-0730-4
Online ISBN: 978-981-15-0731-1