Empirical Best Linear Unbiased Prediction of Computer Simulator Output

  • Thomas J. Santner
  • Brian J. Williams
  • William I. Notz
Chapter
Part of the Springer Series in Statistics book series (SSS)

Abstract

This chapter and Chap. 4 discuss techniques for predicting the output of a computer simulator based on “training” runs of the model. Knowing how to predict computer output is a prerequisite for answering most practical research questions that involve computer simulators, including those listed in Sect. 1.3. As an example where the prediction methods described below are central, Chap. 6 presents a sequential design for a computer experiment to find input conditions \(\boldsymbol{x}\) that maximize a computer output, which requires prediction of \(y(\boldsymbol{x})\) at all untried sites.
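The chapter's title names the empirical best linear unbiased predictor (EBLUP). As a rough illustration of the idea only, and not the book's own notation or code, the following is a minimal NumPy sketch of a plug-in kriging predictor with a constant mean and a Gaussian correlation family; the function names, the grid search over the correlation parameter, and the nugget value are all choices made for this example.

```python
import numpy as np

def gauss_corr(X1, X2, theta):
    """Gaussian (squared-exponential) correlations between the rows of X1 and X2."""
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2
    return np.exp(-(d2 * theta).sum(axis=2))

def fit_eblup(X, y, theta_grid, nugget=1e-8):
    """Plug-in ("empirical") fit: pick theta maximizing the profile log-likelihood."""
    n = len(y)
    one = np.ones(n)
    best = None
    for theta in theta_grid:
        # Small nugget keeps the correlation matrix numerically positive definite.
        R = gauss_corr(X, X, theta) + nugget * np.eye(n)
        Ri_y = np.linalg.solve(R, y)
        Ri_1 = np.linalg.solve(R, one)
        beta = (one @ Ri_y) / (one @ Ri_1)     # GLS estimate of the constant mean
        resid = y - beta
        sigma2 = (resid @ np.linalg.solve(R, resid)) / n
        _, logdet = np.linalg.slogdet(R)
        loglik = -0.5 * (n * np.log(sigma2) + logdet)
        if best is None or loglik > best[0]:
            best = (loglik, theta, beta, np.linalg.solve(R, resid))
    _, theta, beta, w = best
    return {"X": X, "theta": theta, "beta": beta, "w": w}

def predict_eblup(model, Xnew):
    """EBLUP at untried sites: yhat(x) = beta + r(x)' R^{-1} (y - beta * 1)."""
    r = gauss_corr(Xnew, model["X"], model["theta"])
    return model["beta"] + r @ model["w"]
```

Trained on a handful of runs of a smooth test function, such a predictor interpolates the training data (up to the nugget) and predicts untried sites through the estimated correlations, which is the behavior the chapter develops formally.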

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Thomas J. Santner (1)
  • Brian J. Williams (2)
  • William I. Notz (1)
  1. Department of Statistics, The Ohio State University, Columbus, USA
  2. Statistical Sciences Group, Los Alamos National Laboratory, Los Alamos, USA