Abstract
Previous chapters explored some uses for resampling techniques. In Chapter 3 we saw that the bias and variance of parameter estimators could themselves be estimated. This included model parameters as well as performance measures based on independent test data. Then in Chapter 4 we saw that performance measures for a model could be safely obtained from the very same data that was used to train the model. In this chapter we will explore assorted methods for using resampling to improve the performance of models. In particular, we will witness a marvelous phenomenon: a model whose performance is only slightly better than random guessing can be used to create a super-model whose performance is markedly better than the original's. These techniques are extremely expensive in terms of computational requirements. However, in situations in which performance is more important than cost, resampling methods for model building are priceless.
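The abstract does not name the specific methods the chapter covers, but the classic illustration of turning a barely-better-than-chance model into a much stronger one is boosting. The sketch below is a minimal AdaBoost-style implementation (not necessarily the author's formulation): weak one-feature threshold classifiers ("decision stumps") are trained on reweighted versions of the data, with each round's weights concentrating on the examples the previous rounds got wrong. All names here are illustrative.

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the single-feature threshold rule with lowest weighted error.
    Labels y are in {-1, +1}; w holds one nonnegative weight per sample."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] >= thr, 1, -1)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best, best_err = (j, thr, sign), err
    return best, best_err

def adaboost(X, y, rounds=10):
    """Build a weighted committee of stumps by repeatedly reweighting the
    training samples (AdaBoost-style resampling of sample importance)."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # start with uniform sample weights
    ensemble = []
    for _ in range(rounds):
        (j, thr, sign), err = train_stump(X, y, w)
        err = max(err, 1e-12)        # guard against log(0) for a perfect stump
        alpha = 0.5 * np.log((1.0 - err) / err)   # stump's vote strength
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)  # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    """Classify by the sign of the weighted committee vote."""
    score = np.zeros(len(X))
    for alpha, j, thr, sign in ensemble:
        score += alpha * sign * np.where(X[:, j] >= thr, 1, -1)
    return np.where(score >= 0, 1, -1)
```

On a 1-D toy problem where the positive class occupies an interior interval, no single stump can do better than 4 of 6 correct, yet a committee of five boosted stumps classifies every point correctly, which is exactly the weak-to-strong effect the abstract describes.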
Copyright information
© 2018 Timothy Masters
Cite this chapter
Masters, T. (2018). Miscellaneous Resampling Techniques. In: Assessing and Improving Prediction and Classification. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3336-8_5
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-3335-1
Online ISBN: 978-1-4842-3336-8
eBook Packages: Professional and Applied Computing; Apress Access Books; Professional and Applied Computing (R0)