Multi-response Approach to Improving Identifiability in Model Calibration

Abstract

In physics-based engineering modeling, the two primary sources of model uncertainty that account for the differences between computer models and physical experiments are parameter uncertainty and model discrepancy. One of the main challenges in model updating is the difficulty of distinguishing between the effects of the calibration parameters and those of the model discrepancy. In this chapter, this identifiability problem is illustrated with several examples that explain the mechanisms behind it and shed light on when a system may or may not be identifiable. For situations in which identifiability cannot be achieved using only a single response, an approach is developed to improve identifiability by using multiple responses that share a mutual dependence on the calibration parameters. Furthermore, to address the question of which set of responses to measure experimentally to best enhance identifiability, a preposterior analysis approach is presented that, after the computer simulations have been conducted but before the physical experiments, predicts the degree of identifiability that will result from different choices of measured responses. To handle the computational burden of the preposterior analysis, a surrogate preposterior analysis based on the Fisher information of the calibration parameters is also presented.

References

  1. Kennedy, M.C., O’Hagan, A.: Bayesian calibration of computer models. J. R. Stat. Soc. Ser. B 63(3), 425–464 (2001)

  2. Higdon, D., Kennedy, M.C., Cavendish, J., Cafeo, J., Ryne, R.: Combining field data and computer simulations for calibration and prediction. SIAM J. Sci. Comput. 26(2), 448–466 (2004)

  3. Reese, C.S., Wilson, A.G., Hamada, M., Martz, H.F.: Integrated analysis of computer and physical experiments. Technometrics 46(2), 153–164 (2004)

  4. Bayarri, M.J., Berger, J.O., Paulo, R., Sacks, J., Cafeo, J.A., Cavendish, J., Lin, C.H., Tu, J.: A framework for validation of computer models. Technometrics 49(2), 138–154 (2007)

  5. Higdon, D., Gattiker, J., Williams, B., Rightley, M.: Computer model calibration using high-dimensional output. J. Am. Stat. Assoc. 103(482), 570–583 (2008)

  6. Chen, W., Xiong, Y., Tsui, K.L., Wang, S.: A design-driven validation approach using Bayesian prediction models. J. Mech. Des. 130(2), 021101 (2008)

  7. Qian, P.Z.G., Wu, C.F.J.: Bayesian hierarchical modeling for integrating low-accuracy and high-accuracy experiments. Technometrics 50(2), 192–204 (2008)

  8. Wang, S., Tsui, K.L., Chen, W.: Bayesian validation of computer models. Technometrics 51(4), 439–451 (2009)

  9. Drignei, D.: A kriging approach to the analysis of climate model experiments. J. Agric. Biol. Environ. Stat. 14(1), 99–112 (2009)

  10. Akkaram, S., Agarwal, H., Kale, A., Wang, L.: Meta modeling techniques and optimal design of experiments for transient inverse modeling applications. Paper presented at the ASME International Design Engineering Technical Conference, Montreal (2010)

  11. Huan, X., Marzouk, Y.M.: Simulation-based optimal Bayesian experimental design for nonlinear systems. J. Comput. Phys. 232(1), 288–317 (2013)

  12. Loeppky, J., Bingham, D., Welch, W.: Computer model calibration or tuning in practice. Technical Report, University of British Columbia, Vancouver, p. 20 (2006)

  13. Han, G., Santner, T.J., Rawlinson, J.J.: Simultaneous determination of tuning and calibration parameters for computer experiments. Technometrics 51(4), 464–474 (2009)

  14. Arendt, P.D., Apley, D.W., Chen, W.: Quantification of model uncertainty: calibration, model discrepancy, and identifiability. J. Mech. Des. 134(10) (2012)

  15. Arendt, P.D., Apley, D.W., Chen, W., Lamb, D., Gorsich, D.: Improving identifiability in model calibration using multiple responses. J. Mech. Des. 134(10) (2012)

  16. Ranjan, P., Lu, W., Bingham, D., Reese, S., Williams, B., Chou, C., Doss, F., Grosskopf, M., Holloway, J.: Follow-up experimental designs for computer models and physical processes. J. Stat. Theory Pract. 5(1), 119–136 (2011)

  17. Williams, B.J., Loeppky, J.L., Moore, L.M., Macklem, M.S.: Batch sequential design to achieve predictive maturity with calibrated computer models. Reliab. Eng. Syst. Saf. 96(9), 1208–1219 (2011)

  18. Tuo, R., Wu, C.F.J., Yu, D.: Surrogate modeling of computer experiments with different mesh densities. Technometrics 56(3), 372–380 (2014)

  19. Maheshwari, A.K., Pathak, K.K., Ramakrishnan, N., Narayan, S.P.: Modified Johnson-Cook material flow model for hot deformation processing. J. Mater. Sci. 45(4), 859–864 (2010)

  20. Xiong, Y., Chen, W., Tsui, K.L., Apley, D.W.: A better understanding of model updating strategies in validating engineering models. Comput. Methods Appl. Mech. Eng. 198(15–16), 1327–1337 (2009)

  21. Liu, F., Bayarri, M.J., Berger, J.O., Paulo, R., Sacks, J.: A Bayesian analysis of the thermal challenge problem. Comput. Methods Appl. Mech. Eng. 197(29–32), 2457–2466 (2008)

  22. Arendt, P., Apley, D.W., Chen, W.: Updating predictive models: calibration, bias correction, and identifiability. Paper presented at the ASME 2010 International Design Engineering Technical Conferences, Montreal (2010)

  23. Chakrabarty, J.: Theory of Plasticity, 3rd edn. Elsevier/Butterworth-Heinemann, Burlington (2006)

  24. Liu, F., Bayarri, M.J., Berger, J.O.: Modularization in Bayesian analysis, with emphasis on analysis of computer models. Bayesian Anal. 4(1), 119–150 (2009)

  25. Joseph, V., Melkote, S.: Statistical adjustments to engineering models. J. Qual. Technol. 41(4), 362–375 (2009)

  26. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge (2006)

  27. Qian, P.Z.G., Wu, H.Q., Wu, C.F.J.: Gaussian process models for computer experiments with qualitative and quantitative factors. Technometrics 50(3), 383–396 (2008)

  28. McMillan, N.J., Sacks, J., Welch, W.J., Gao, F.: Analysis of protein activity data by Gaussian stochastic process models. J. Biopharm. Stat. 9(1), 145–160 (1999)

  29. Cressie, N.: Statistics for Spatial Data. Wiley Series in Probability and Statistics. Wiley, New York (1993)

  30. Ver Hoef, J., Cressie, N.: Multivariable spatial prediction. Math. Geol. 25(2), 219–240 (1993)

  31. Conti, S., Gosling, J.P., Oakley, J.E., O’Hagan, A.: Gaussian process emulation of dynamic computer codes. Biometrika 96(3), 663–676 (2009)

  32. Conti, S., O’Hagan, A.: Bayesian emulation of complex multi-output and dynamic computer models. J. Stat. Plan. Inference 140(3), 640–651 (2010)

  33. Williams, B., Higdon, D., Gattiker, J., Moore, L.M., McKay, M.D., Keller-McNulty, S.: Combining experimental data and computer simulations, with an application to flyer plate experiments. Bayesian Anal. 1(4), 765–792 (2006)

  34. Bayarri, M.J., Berger, J.O., Cafeo, J., Garcia-Donato, G., Liu, F., Palomo, J., Parthasarathy, R.J., Paulo, R., Sacks, J., Walsh, D.: Computer model validation with functional output. Ann. Stat. 35(5), 1874–1906 (2007)

  35. McFarland, J., Mahadevan, S., Romero, V., Swiler, L.: Calibration and uncertainty analysis for computer simulations with multivariate output. AIAA J. 46(5), 1253–1265 (2008)

  36. Drignei, D.: A kriging approach to the analysis of climate model experiments. J. Agric. Biol. Environ. Stat. 14(1), 99–114 (2009)

  37. Kennedy, M.C., Anderson, C.W., Conti, S., O’Hagan, A.: Case studies in Gaussian process modelling of computer codes. Reliab. Eng. Syst. Saf. 91(10–11), 1301–1309 (2006)

  38. Sacks, J., Welch, W.J., Mitchell, T.J., Wynn, H.P.: Design and analysis of computer experiments. Stat. Sci. 4(4), 409–423 (1989)

  39. Rasmussen, C.E.: Evaluation of Gaussian Processes and Other Methods for Non-linear Regression. PhD thesis, University of Toronto (1996)

  40. Lancaster, T.: An Introduction to Modern Bayesian Econometrics. Blackwell, Malden (2004)

  41. Arendt, P.D., Apley, D.W., Chen, W.: A preposterior analysis to predict identifiability in experimental calibration of computer models. IIE Trans. 48(1), 75–88 (2016)

  42. Jiang, Z., Chen, W., Apley, D.W.: Preposterior analysis to select experimental responses for improving identifiability in model uncertainty quantification. Paper presented at the ASME 2013 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Portland (2013)

  43. Jiang, Z., Apley, D.W., Chen, W.: Surrogate preposterior analyses for predicting and enhancing identifiability in model calibration. Int. J. Uncertain. Quantif. 5(4), 341–359 (2015)

  44. Berger, J.O.: Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics. Springer, New York (1985)

  45. Carlin, B.P., Louis, T.A.: Empirical Bayes: past, present and future. J. Am. Stat. Assoc. 95(452), 1286–1289 (2000)

  46. Wu, C., Hamada, M.: Experiments: Planning, Analysis, and Optimization. Wiley, New York (2009)

  47. Montgomery, D.C.: Design and Analysis of Experiments, 7th edn. Wiley, Hoboken (2008)

  48. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. Dover, New York (1972)

  49. Beyer, W.H.: CRC Standard Mathematical Tables, 28th edn. CRC, Boca Raton (1987)

  50. Robert, C., Casella, G.: Monte Carlo Statistical Methods. Springer, New York (2004)

  51. Smith, A., Gelfand, A.: Bayesian statistics without tears: a sampling-resampling perspective. Am. Stat. 46(2), 84–88 (1992)

  52. Johnson, R., Wichern, D.: Applied Multivariate Statistical Analysis, 6th edn. Prentice Hall, Upper Saddle River (2007)

  53. Kennedy, M.C., O’Hagan, A.: Supplementary Details on Bayesian Calibration of Computer Models, pp. 1–13. University of Sheffield, Sheffield (2000)

  54. Billingsley, P.: Probability and Measure, Anniversary edn. Wiley, Hoboken (2011)

Author information

Correspondence to Wei Chen.

Appendices

Appendix A: Estimates of the Hyperparameters for the Computer Model MRGP

To obtain the MLEs of the hyperparameters of the MRGP model for the computer model, the multivariate normal likelihood function is first constructed as:

$$\displaystyle{ \begin{array}{ll} &p(vec(\mathbf{Y}^{m})\vert \mathbf{B}^{m},\boldsymbol{\Sigma }^{m},\boldsymbol{\upomega }^{m}) = (2\pi )^{-qN_{m}/2}\left \vert \Sigma ^{m}\right \vert ^{-N_{m}/2}\left \vert \mathbf{R}^{m}\right \vert ^{-q/2} \\ & \quad \quad \quad \quad \times \exp \left \{-\frac{1} {2}vec(\mathbf{Y}^{m} -\mathbf{H}^{m}\mathbf{B}^{m})^{T}\left (\Sigma ^{m} \otimes \mathbf{R}^{m}\right )^{-1}vec(\mathbf{Y}^{m} -\mathbf{H}^{m}\mathbf{B}^{m})\right \},\end{array} }$$
(4.18)

where vec(⋅) denotes vectorization of a matrix (stacking of its columns), ⊗ denotes the Kronecker product, \(\mathbf{R}^{m}\) is the \(N_{m} \times N_{m}\) correlation matrix whose ith-row, jth-column entry is \(R^{m}((\mathbf{x}_{i}^{m},\boldsymbol{\uptheta }_{i}^{m}),(\mathbf{x}_{j}^{m},\boldsymbol{\uptheta }_{j}^{m}))\), and \(\mathbf{H}^{m} = [\mathbf{h}^{m}(\mathbf{x}_{1}^{m},\boldsymbol{\uptheta }_{1}^{m})^{T},\,\,\ldots,\mathbf{h}^{m}(\mathbf{x}_{N_{m}}^{m},\boldsymbol{\uptheta }_{N_{m}}^{m})^{T}]^{T}\). Taking the log of Eq. (4.18) yields:

$$\displaystyle{ \begin{array}{l} \ln (p(vec(\mathbf{Y}^{m})\vert \mathbf{B}^{m},\boldsymbol{\Sigma }^{m},\boldsymbol{\upomega }^{m})) = -\frac{qN_{m}} {2} \ln (2\pi ) -\frac{N_{m}} {2} \ln \left (\left \vert \Sigma ^{m}\right \vert \right ) -\frac{q} {2}\ln \left (\left \vert \mathbf{R}^{m}\right \vert \right ) \\ \qquad \qquad \qquad \qquad -\frac{1} {2}vec(\mathbf{Y}^{m} -\mathbf{H}^{m}\mathbf{B}^{m})^{T}\left (\Sigma ^{m} \otimes \mathbf{R}^{m}\right )^{-1}vec(\mathbf{Y}^{m} -\mathbf{H}^{m}\mathbf{B}^{m}). \end{array} }$$
(4.19)

The MLE of B m is found by setting the derivative of Eq. (4.19) with respect to B m equal to zero, which gives:

$$\displaystyle{ \hat{\mathbf{B}} ^{m} = [(\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{H}^{m}]^{-1}(\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{Y}^{m}. }$$
(4.20)

The MLE of \(\boldsymbol{\Sigma }^{m}\) is found using result 4.10 of Ref. [52], which yields:

$$\displaystyle{ \boldsymbol{\hat{\Sigma } }^{m} = \frac{1} {N_{m}}(\mathbf{Y}^{m} -\mathbf{H}^{m}\hat{\mathbf{B}} ^{m})^{T}(\mathbf{R}^{m})^{-1}(\mathbf{Y}^{m} -\mathbf{H}^{m}\hat{\mathbf{B}} ^{m}). }$$
(4.21)

Finally, the MLE of \(\boldsymbol{\upomega }^{m}\), denoted by \(\hat{\boldsymbol{\upomega }}^{m}\), is found by numerically maximizing Eq. (4.19) after plugging in the MLEs of B m and \(\boldsymbol{\Sigma }^{m}\).
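
To make the procedure concrete, the following is a minimal numerical sketch of the Appendix A estimation, assuming a Gaussian correlation function and constant regression functions (h^m ≡ 1); the helper names (gauss_corr, profile_mles, fit_mrgp), the jitter term, and the choice of a Nelder-Mead optimizer are illustrative assumptions, not part of the chapter. Here X stacks the N_m simulation inputs (x_i^m, θ_i^m) row-wise and Y holds the q responses column-wise.

import numpy as np
from scipy.optimize import minimize

def gauss_corr(X, Z, omega):
    """Gaussian correlation: R[i, j] = exp(-sum_k omega_k (X[i,k]-Z[j,k])^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2 * omega).sum(axis=2)
    return np.exp(-d2)

def profile_mles(omega, X, Y):
    """Closed-form MLEs of B^m (Eq. 4.20) and Sigma^m (Eq. 4.21) for fixed omega."""
    Nm = Y.shape[0]
    R = gauss_corr(X, X, omega) + 1e-8 * np.eye(Nm)  # jitter for conditioning
    H = np.ones((Nm, 1))                             # constant regression h^m = 1
    Ri = np.linalg.inv(R)
    B = np.linalg.solve(H.T @ Ri @ H, H.T @ Ri @ Y)  # Eq. (4.20)
    E = Y - H @ B
    Sigma = (E.T @ Ri @ E) / Nm                      # Eq. (4.21)
    return R, H, B, Sigma

def neg_log_profile_lik(log_omega, X, Y):
    """Negative of Eq. (4.19), up to a constant, with B^m and Sigma^m plugged in."""
    Nm, q = Y.shape
    R, _, _, Sigma = profile_mles(np.exp(log_omega), X, Y)
    # At the plugged-in MLEs the quadratic form in Eq. (4.19) equals q*N_m,
    # a constant, so only the two log-determinant terms vary with omega.
    return 0.5 * (Nm * np.linalg.slogdet(Sigma)[1] + q * np.linalg.slogdet(R)[1])

def fit_mrgp(X, Y):
    """Numerically maximize Eq. (4.19) over the roughness parameters omega^m."""
    res = minimize(neg_log_profile_lik, np.zeros(X.shape[1]),
                   args=(X, Y), method="Nelder-Mead")
    omega_hat = np.exp(res.x)
    return omega_hat, profile_mles(omega_hat, X, Y)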

Appendix B: Posterior Distributions of the Computer Responses

After observing \(\mathbf{Y}^{m}\), the posterior distribution of the computer responses \(\mathbf{y}^{m}(\mathbf{x},\boldsymbol{\uptheta })\) (given \(\boldsymbol{\upomega }^{m}\) and \(\boldsymbol{\Sigma }^{m}\), and assuming a non-informative prior for \(\mathbf{B}^{m}\)) is Gaussian with mean and covariance:

$$\displaystyle{ \mathbb{E}[\mathbf{y}^{m}(\mathbf{x},\boldsymbol{\uptheta })\vert \mathbf{Y}^{m},\boldsymbol{\upphi }^{m}] = \mathbf{h}^{m}(\mathbf{x},\boldsymbol{\uptheta })\hat{\mathbf{B}} ^{m} + \mathbf{r}^{m}(\mathbf{x},\boldsymbol{\uptheta })^{T}(\mathbf{R}^{m})^{-1}(\mathbf{Y}^{m} -\mathbf{H}^{m}\hat{\mathbf{B}} ^{m}) }$$
(4.22)
$$\displaystyle{ \begin{array}{ll} &\mbox{ Cov}[\mathbf{y}^{m}(\mathbf{x},\boldsymbol{\uptheta }),\mathbf{y}^{m}(\mathbf{x}^{{\prime}},\boldsymbol{\uptheta }^{{\prime}})\vert \mathbf{Y}^{m},\boldsymbol{\upphi }^{m}] = \boldsymbol{\Sigma }^{m}\left \{R^{m}((\mathbf{x},\boldsymbol{\uptheta }),(\mathbf{x}^{{\prime}},\boldsymbol{\uptheta }^{{\prime}}))\right. \\ &\quad -\mathbf{r}^{m}(\mathbf{x},\boldsymbol{\uptheta })^{T}(\mathbf{R}^{m})^{-1}\mathbf{r}^{m}(\mathbf{x}^{{\prime}},\boldsymbol{\uptheta }^{{\prime}}) + [\mathbf{h}^{m}(\mathbf{x},\boldsymbol{\uptheta })^{T} - (\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{r}^{m}(\mathbf{x},\boldsymbol{\uptheta })]^{T} \\ &\quad \times \left.[(\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{H}^{m}]^{-1}[\mathbf{h}^{m}(\mathbf{x}^{{\prime}},\boldsymbol{\uptheta }^{{\prime}})^{T} - (\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{r}^{m}(\mathbf{x}^{{\prime}},\boldsymbol{\uptheta }^{{\prime}})]\right \}\end{array} }$$
(4.23)

where \(\mathbf{r}^{m}(\mathbf{x},\boldsymbol{\uptheta })\) is an \(N_{m} \times 1\) vector whose ith element is \(R^{m}((\mathbf{x}_{i}^{m},\boldsymbol{\uptheta }_{i}^{m}),(\mathbf{x},\boldsymbol{\uptheta }))\). Using an empirical Bayes approach, the MLEs of the hyperparameters from Appendix A are plugged into Eqs. (4.22) and (4.23) to calculate the posterior distribution of the computer responses. Notice that Eqs. (4.22) and (4.23) are analogous to the single-response GP model results.
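
As an illustration of Eqs. (4.22) and (4.23), the sketch below evaluates the posterior at a single new input, reusing the illustrative helpers from the Appendix A sketch; with the Gaussian correlation assumed there, R^m((x, θ), (x, θ)) = 1, so this version returns the covariance at x′ = x.

import numpy as np

def predict_mrgp(x_new, X, Y, R, H, B, Sigma, omega):
    """Posterior mean (Eq. 4.22) and covariance (Eq. 4.23) of y^m at x_new
    (a 1 x d row), under the empirical Bayes plug-in of the Appendix A MLEs."""
    Ri = np.linalg.inv(R)
    r = gauss_corr(X, x_new, omega)        # N_m x 1 vector r^m(x, theta)
    h = np.ones((1, 1))                    # constant regression h^m
    mean = h @ B + r.T @ Ri @ (Y - H @ B)  # Eq. (4.22), 1 x q
    u = h.T - H.T @ Ri @ r                 # bracketed term in Eq. (4.23)
    scalar = 1.0 - r.T @ Ri @ r + u.T @ np.linalg.solve(H.T @ Ri @ H, u)
    return mean, Sigma * scalar            # covariance = Sigma^m scaled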

Appendix C: Estimates of the Hyperparameters for the Discrepancy Functions MRGP

To estimate the hyperparameters \(\boldsymbol{\upphi }^{\delta } =\{ \mathbf{B}^{\delta }\), \(\boldsymbol{\Sigma }^{\delta }\), \(\boldsymbol{\upomega }^{\delta }\), \(\boldsymbol{\uplambda }\}\) of the MRGP model representing the discrepancy functions, the procedure outlined by Kennedy and O’Hagan [1] is used, modified to handle multiple responses. This procedure begins by obtaining a posterior of the experimental responses given the simulation data and the hyperparameters from Module 1, which has prior mean and covariance:

$$\displaystyle{ \mathbb{E}[\mathbf{y}^{e}(\mathbf{x})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m},\boldsymbol{\uptheta } = \boldsymbol{\uptheta }^{{\ast}}] = \mathbb{E}[\mathbf{y}^{m}(\mathbf{x},\boldsymbol{\uptheta }^{{\ast}})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m}] + \mathbf{h}^{\delta }(\mathbf{x})\mathbf{B}^{\delta }, }$$
(4.24)
$$\displaystyle{ \begin{array}{l} \mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x}),\mathbf{y}^{e}(\mathbf{x'})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m},\boldsymbol{\uptheta } = \boldsymbol{\uptheta }^{{\ast}}] = \boldsymbol{\Sigma }^{\delta }R^{\delta }(\mathbf{x},\mathbf{x'}) + \boldsymbol{\uplambda } \\ \qquad \qquad \qquad \quad + \mbox{ Cov}[\mathbf{y}^{m}(\mathbf{x},\boldsymbol{\uptheta }^{{\ast}}),\mathbf{y}^{m}(\mathbf{x'},\boldsymbol{\uptheta }^{{\ast}})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m}],\end{array} }$$
(4.25)

where \(\hat{\boldsymbol{\upphi }}^{m}\) are the MLEs of the hyperparameters of the MRGP model for the computer model. Since Eqs. (4.24) and (4.25) depend on the unknown true value \(\boldsymbol{\uptheta }^{{\ast}}\), these two equations are integrated with respect to the prior distribution \(p(\boldsymbol{\uptheta })\) of \(\boldsymbol{\uptheta }\) via:

$$\displaystyle{ \begin{array}{ll} &\mathbb{E}[\mathbf{y}^{e}(\mathbf{x})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m}] =\int \mathbb{E}[\mathbf{y}^{e}(\mathbf{x})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m},\boldsymbol{\uptheta }]p(\boldsymbol{\uptheta })\mbox{ d}\boldsymbol{\uptheta }, \\ &\mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x}),\mathbf{y}^{e}(\mathbf{x'})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m}] =\int \mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x}),\mathbf{y}^{e}(\mathbf{x'})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m},\boldsymbol{\uptheta }]p(\boldsymbol{\uptheta })\mbox{ d}\boldsymbol{\uptheta }.\end{array} }$$
(4.26)

Kennedy and O’Hagan [53] provide closed-form solutions for Eq. (4.26) under the conditions of Gaussian correlation functions, constant regression functions, and normal prior distributions for \(\boldsymbol{\uptheta }\) (for details, refer to Section 3 of [53] and Section 4.5 of [1]). In this chapter, similar closed-form solutions are used, except that a uniform prior distribution is assumed for \(\boldsymbol{\uptheta }\).

After observing the experimental data \(\mathbf{Y}^{e}\), one can construct a multivariate normal likelihood function with mean and covariance given by Eq. (4.26). The MLEs of \(\boldsymbol{\upphi }^{\delta }\) maximize this likelihood function. The MLE of \(\mathbf{B}^{\delta }\) can be found by setting the analytical derivative of this likelihood function with respect to \(\mathbf{B}^{\delta }\) equal to zero (see Section 2 of Ref. [53]). However, there are no analytical derivatives with respect to the hyperparameters \(\boldsymbol{\Sigma }^{\delta }\), \(\boldsymbol{\upomega }^{\delta }\), and \(\boldsymbol{\uplambda }\), so numerical optimization techniques are needed to find these MLEs.
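
Schematically, this numerical search might look like the sketch below, where the integrated mean and covariance of Eq. (4.26) are passed in as user-supplied callbacks (the closed forms from [53] are lengthy) and a derivative-free optimizer is used; all names and the parameter packing are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_loglik_delta(params, Ye_vec, mean_fn, cov_fn):
    """params packs (B^delta, Sigma^delta, omega^delta, lambda) in some
    unconstrained form that mean_fn and cov_fn know how to unpack.
    (The chapter obtains B^delta analytically; here it is folded into the
    numerical search for simplicity.)"""
    m = mean_fn(params)   # integrated prior mean of vec(Y^e), Eq. (4.26)
    V = cov_fn(params)    # integrated prior covariance, Eq. (4.26)
    return -multivariate_normal.logpdf(Ye_vec, mean=m, cov=V)

def fit_discrepancy(Ye_vec, mean_fn, cov_fn, p0):
    # No analytical derivatives exist for Sigma^delta, omega^delta, lambda,
    # so a derivative-free optimizer is a natural choice.
    return minimize(neg_loglik_delta, p0, args=(Ye_vec, mean_fn, cov_fn),
                    method="Nelder-Mead")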

Appendix D: Posterior Distribution of the Calibration Parameters

The posterior for the calibration parameters in Eq. (4.12) involves the likelihood function \(p(\mathbf{d}\vert \boldsymbol{\uptheta },\hat{\boldsymbol{\upphi }})\) and the marginal posterior distribution for the data \(p(\mathbf{d}\vert \hat{\boldsymbol{\upphi }})\). The likelihood function is multivariate normal with mean vector and covariance matrix defined as:

$$\displaystyle\begin{array}{rcl} \mathbf{m}(\boldsymbol{\uptheta })& =& \mathbf{H}(\boldsymbol{\uptheta })\hat{\mathbf{B}} \\ \quad \quad \;& =& \left [\begin{array}{*{20}c} \mathbf{I}_{q} \otimes \mathbf{H}^{m} & \mathbf{0} \\ \mathbf{I}_{q} \otimes \mathbf{H}^{m}(\mathbf{X}^{e},\boldsymbol{\uptheta })&\mathbf{I}_{q} \otimes \mathbf{H}^{\delta }\\ \end{array} \right ]\left [\begin{array}{*{20}c} vec(\hat{\mathbf{B}} ^{m}) \\ vec(\hat{\mathbf{B}} ^{\delta } )\\ \end{array} \right ],{}\end{array}$$
(4.27)
$$\displaystyle{ \mathbf{V}(\boldsymbol{\uptheta }) = \left [\begin{array}{*{20}c} \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{R}^{m} & \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{C}^{m} \\ \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{C}^{m\;T}&\boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{R}^{m}(\mathbf{X}^{e},\boldsymbol{\uptheta }) + \boldsymbol{\hat{\Sigma } } ^{\delta }\otimes \mathbf{R}^{\delta } + \boldsymbol{\hat{\lambda }} \otimes \mathbf{I}_{N_{e}}\\ \end{array} \right ], }$$
(4.28)

where \(\hat{\mathbf{B}}= \left (\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{H}(\boldsymbol{\uptheta })\right )^{-1}\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{d}\), which is calculated from the entire data set (instead of using the estimates of \(\mathbf{B}^{m}\) and \(\mathbf{B}^{\delta }\) from Modules 1 and 2), as detailed in Section 4 of [53]. \(\mathbf{H}^{m}(\mathbf{X}^{e},\boldsymbol{\uptheta }) = [\mathbf{h}^{m}(\mathbf{x}_{1}^{e},\boldsymbol{\uptheta })^{T},\,\,\ldots,\mathbf{h}^{m}(\mathbf{x}_{N_{e}}^{e},\boldsymbol{\uptheta })^{T}]^{T}\) and \(\mathbf{H}^{\delta } = [\mathbf{h}^{\delta }(\mathbf{x}_{1}^{e})^{T},\,\,\ldots,\mathbf{h}^{\delta }(\mathbf{x}_{N_{e}}^{e})^{T}]^{T}\) denote the specified regression functions for the computer model and the discrepancy functions at the input settings \(\mathbf{X}^{e}\). \(\mathbf{C}^{m}\) denotes the \(N_{m} \times N_{e}\) matrix with ith-row, jth-column entries \(R^{m}((\mathbf{x}_{i}^{m},\boldsymbol{\uptheta }_{i}^{m}),(\mathbf{x}_{j}^{e},\boldsymbol{\uptheta }))\). \(\mathbf{R}^{m}(\mathbf{X}^{e},\boldsymbol{\uptheta })\) denotes the \(N_{e} \times N_{e}\) matrix with ith-row, jth-column entries \(R^{m}((\mathbf{x}_{i}^{e},\boldsymbol{\uptheta }),(\mathbf{x}_{j}^{e},\boldsymbol{\uptheta }))\). \(\mathbf{R}^{\delta }\) denotes the \(N_{e} \times N_{e}\) matrix with ith-row, jth-column entries \(R^{\delta }(\mathbf{x}_{i}^{e},\mathbf{x}_{j}^{e})\). Finally, \(\mathbf{I}_{q}\) and \(\mathbf{I}_{N_{e}}\) denote the \(q \times q\) and \(N_{e} \times N_{e}\) identity matrices.

The marginal posterior distribution for the data \(p(\mathbf{d}\vert \hat{\boldsymbol{\upphi }})\) is:

$$\displaystyle{ p(\mathbf{d}\vert \hat{\boldsymbol{\upphi }}) =\int p(\mathbf{d}\vert \boldsymbol{\uptheta },\hat{\boldsymbol{\upphi }})p(\boldsymbol{\uptheta })\mbox{ d}\boldsymbol{\uptheta }, }$$
(4.29)

which can be calculated using any numerical integration technique. In this chapter, Legendre-Gauss quadrature is used for the low-dimensional examples. Alternatively, Markov chain Monte Carlo (MCMC) could be used to sample complex posterior distributions such as those in Eq. (4.12).
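
For a scalar calibration parameter with a uniform prior on [a, b], the quadrature step might be sketched as follows; loglik stands for the log of the multivariate normal density with moments (4.27) and (4.28), supplied by the user, and all names are illustrative.

import numpy as np

def marginal_likelihood(loglik, a, b, n_nodes=30):
    """Gauss-Legendre approximation of Eq. (4.29) for scalar theta ~ U[a, b]."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    theta = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map [-1, 1] to [a, b]
    prior = 1.0 / (b - a)                           # uniform prior density
    integrand = np.array([np.exp(loglik(t)) * prior for t in theta])
    return 0.5 * (b - a) * np.dot(weights, integrand)

# The posterior of Eq. (4.12) then follows by normalization:
# p(theta | d) = exp(loglik(theta)) * prior / marginal_likelihood(loglik, a, b)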

Appendix E: Posterior Distribution of the Experimental Responses

Since an MRGP model represents the experimental responses, the conditional (given \(\boldsymbol{\uptheta }\)) posterior distribution at any point x is Gaussian with mean and covariance defined as (assuming a non-informative prior on B m and B δ and using the empirical Bayes approach that treats \(\boldsymbol{\upphi }=\hat{\boldsymbol{\upphi }}\) as fixed):

$$\displaystyle{ \mathbb{E}[\mathbf{y}^{e}(\mathbf{x})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}] = \mathbf{h}(\mathbf{x},\boldsymbol{\uptheta })\hat{\mathbf{B}} + \mathbf{t}(\mathbf{x},\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}(\mathbf{d} -\mathbf{H}(\boldsymbol{\uptheta })\hat{\mathbf{B}} ), }$$
(4.30)
$$\displaystyle{ \begin{array}{ll} &\mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x})^{T},\mathbf{y}^{e}(\mathbf{x'})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}] = \boldsymbol{\hat{\Sigma } } ^{m}R^{m}((\mathbf{x},\boldsymbol{\uptheta }),(\mathbf{x'},\boldsymbol{\uptheta })) + \boldsymbol{\hat{\Sigma } } ^{\delta } R^{\delta }(\mathbf{x},\mathbf{x'}) \\ &\quad + \boldsymbol{\hat{\lambda }} -\mathbf{t}(\mathbf{x},\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{t}(\mathbf{x'},\boldsymbol{\uptheta }) + \left (\mathbf{h}(\mathbf{x},\boldsymbol{\uptheta })^{T} -\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{t}(\mathbf{x},\boldsymbol{\uptheta })\right )^{T} \\ &\quad \times \left (\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{H}(\boldsymbol{\uptheta })\right )^{-1}\left (\mathbf{h}(\mathbf{x'},\boldsymbol{\uptheta })^{T} -\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{t}(\mathbf{x'},\boldsymbol{\uptheta })\right ),\\ \end{array} }$$
(4.31)

where:

$$\displaystyle{ \mathbf{t}(\mathbf{x},\boldsymbol{\uptheta }) = \left [\begin{array}{*{20}c} \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{R}^{m}((\mathbf{X}^{m},\boldsymbol{\uptheta }^{m}),(\mathbf{x},\boldsymbol{\uptheta })) \\ \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{R}^{m}((\mathbf{X}^{e},\boldsymbol{\uptheta }),(\mathbf{x},\boldsymbol{\uptheta })) + \boldsymbol{\hat{\Sigma } } ^{\delta }\otimes \mathbf{R}^{\delta }(\mathbf{X}^{e},\mathbf{x})\\ \end{array} \right ], }$$
(4.32)
$$\displaystyle{ \mathbf{h}(\mathbf{x},\boldsymbol{\uptheta }) = \left [\begin{array}{*{20}c} \mathbf{I}_{q} \otimes \mathbf{h}^{m}(\mathbf{x},\boldsymbol{\uptheta })&\mathbf{I}_{ q} \otimes \mathbf{h}^{\delta }(\mathbf{x})\\ \end{array} \right ]. }$$
(4.33)

\(\mathbf{R}^{m}((\mathbf{X}^{m},\boldsymbol{\uptheta }^{m}),(\mathbf{x},\boldsymbol{\uptheta }))\) is an \(N_{m} \times 1\) vector whose ith entry is \(R^{m}((\mathbf{x}_{i}^{m},\boldsymbol{\uptheta }_{i}^{m}),(\mathbf{x},\boldsymbol{\uptheta }))\), \(\mathbf{R}^{m}((\mathbf{X}^{e},\boldsymbol{\uptheta }),(\mathbf{x},\boldsymbol{\uptheta }))\) is an \(N_{e} \times 1\) vector whose ith entry is \(R^{m}((\mathbf{x}_{i}^{e},\boldsymbol{\uptheta }),(\mathbf{x},\boldsymbol{\uptheta }))\), and \(\mathbf{R}^{\delta }(\mathbf{X}^{e},\mathbf{x})\) is an \(N_{e} \times 1\) vector whose ith entry is \(R^{\delta }(\mathbf{x}_{i}^{e},\mathbf{x})\).

To calculate the unconditional posterior distributions (marginalized with respect to \(\boldsymbol{\uptheta }\)) of the experimental responses, the conditional posterior distributions are marginalized with respect to the posterior distribution of the calibration parameters from Module 3. The mean and covariance of the unconditional posterior distributions can be written as:

$$\displaystyle\begin{array}{rcl} \mathbb{E}[\mathbf{y}^{e}(\mathbf{x})^{T}\vert \mathbf{d},\hat{\boldsymbol{\upphi }}] = \mathbb{E}[\mathbb{E}[\mathbf{y}^{e}(\mathbf{x})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}]],& &{}\end{array}$$
(4.34)
$$\displaystyle\begin{array}{rcl} \begin{array}{l} \mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x})^{T},\mathbf{y}^{e}(\mathbf{x'})^{T}\vert \mathbf{d},\hat{\boldsymbol{\upphi }}] = \mathbb{E}[\mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x})^{T},\mathbf{y}^{e}(\mathbf{x'})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}]] \\ \quad + \mbox{ Cov}[\mathbb{E}[\mathbf{y}^{e}(\mathbf{x})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}], \mathbb{E}[\mathbf{y}^{e}(\mathbf{x'})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}]],\\ \end{array} & &{}\end{array}$$
(4.35)

where the outer expectation and covariance are with respect to the posterior distribution of the calibration parameters. Equations (4.34) and (4.35) are derived using the law of total expectation and the law of total covariance [54]. Due to the complexity of the posterior distribution of the calibration parameters, the marginalization requires numerical integration methods. For the examples in this chapter, Legendre-Gauss quadrature is used.
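
A sketch of this marginalization for a scalar calibration parameter, with the conditional moments of Eqs. (4.30) and (4.31) and the Module 3 posterior density supplied as callbacks (all names illustrative), using the same quadrature rule:

import numpy as np

def unconditional_moments(cond_mean, cond_cov, post, a, b, n_nodes=30):
    """Eqs. (4.34)-(4.35): law of total expectation/covariance over theta."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    theta = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    w = 0.5 * (b - a) * weights * np.array([post(t) for t in theta])
    w = w / w.sum()   # normalize; the posterior integrates to one exactly
    means = np.array([cond_mean(t) for t in theta])   # (n_nodes, q)
    covs = np.array([cond_cov(t) for t in theta])     # (n_nodes, q, q)
    mean = (w[:, None] * means).sum(axis=0)           # Eq. (4.34)
    dev = means - mean
    cov = ((w[:, None, None] * covs).sum(axis=0)      # E[Cov[.]] term
           + (w[:, None, None] * dev[:, :, None] * dev[:, None, :]).sum(axis=0))
    return mean, cov                                  # Eq. (4.35)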

Copyright information

© 2017 Springer International Publishing Switzerland

Cite this entry

Jiang, Z., Arendt, P.D., Apley, D.W., Chen, W. (2017). Multi-response Approach to Improving Identifiability in Model Calibration. In: Ghanem, R., Higdon, D., Owhadi, H. (eds) Handbook of Uncertainty Quantification. Springer, Cham. https://doi.org/10.1007/978-3-319-12385-1_65
