So far in this book, we have considered model selection and evaluation criteria from both an information-theoretic point of view and a Bayesian approach. The AIC-type criteria were constructed as estimators of the Kullback–Leibler information between a statistical model and the true distribution generating the data, or, equivalently, of the expected log-likelihood of a statistical model. In contrast, the Bayesian approach to model selection was to choose the model with the largest posterior probability among a set of candidate models.
There are other model evaluation criteria based on various different points of view. This chapter describes cross-validation, generalized cross-validation, the final prediction error (FPE), Mallows' Cp, the Hannan–Quinn criterion, and ICOMP. Cross-validation also provides an alternative approach to estimating the Kullback–Leibler information. We show that the cross-validation estimate is asymptotically equivalent to AIC-type criteria in a general setting.
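To illustrate the asymptotic equivalence claimed above, the following sketch compares a leave-one-out cross-validation estimate of the expected log-likelihood with the corresponding AIC-based estimate for a simple Gaussian model fitted by maximum likelihood. The function names and the Gaussian example are illustrative choices, not taken from the chapter; for a correctly specified model with k parameters, the AIC-based estimate of the expected log-likelihood per observation is (maximized log-likelihood - k)/n, and the two quantities should agree closely for large n.

```python
import numpy as np

def loo_cv_loglik(x):
    """Leave-one-out cross-validation estimate of the expected
    log-likelihood for a Gaussian model fitted by maximum likelihood."""
    n = len(x)
    total = 0.0
    for i in range(n):
        train = np.delete(x, i)
        mu, sigma2 = train.mean(), train.var()   # ML estimates on n-1 points
        # log density of the held-out observation under the fitted model
        total += -0.5 * (np.log(2 * np.pi * sigma2)
                         + (x[i] - mu) ** 2 / sigma2)
    return total / n

def aic_loglik(x):
    """AIC-based estimate of the expected log-likelihood per observation:
    (maximized log-likelihood - k) / n, with k = 2 parameters (mean, variance)."""
    n = len(x)
    sigma2 = x.var()
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return (loglik - 2) / n

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(loo_cv_loglik(x), aic_loglik(x))  # nearly identical for large n
```

The near-agreement of the two numbers for n = 500 is a numerical illustration of the asymptotic equivalence; cross-validation requires no analytic bias correction, at the cost of refitting the model n times.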
© 2008 Springer Science+Business Media, LLC
(2008). Various Model Evaluation Criteria. In: Information Criteria and Statistical Modeling. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-0-387-71887-3_10
Print ISBN: 978-0-387-71886-6
Online ISBN: 978-0-387-71887-3
eBook Packages: Mathematics and Statistics (R0)