Abstract
K-fold cross validation is a commonly used technique that takes a set of m examples and partitions them into K equal-size sets (folds) of size m/K. For each fold, a classifier is trained on the other folds and evaluated on the held-out fold.
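The partition-and-hold-out scheme above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the `train_majority` classifier and the `(x, label)` toy data are hypothetical stand-ins, chosen only to make the K-fold loop concrete.

```python
# Minimal sketch of K-fold cross validation.
# The classifier and data below are hypothetical toy examples.
import random

def k_fold_cv(examples, train, test_error, K=5, seed=0):
    """Partition `examples` into K equal-size folds of size m/K; for each
    fold, train a classifier on the other K-1 folds and evaluate it on
    the held-out fold. Returns the average held-out error."""
    data = examples[:]
    random.Random(seed).shuffle(data)
    fold_size = len(data) // K          # m / K examples per fold
    errors = []
    for i in range(K):
        held_out = data[i * fold_size:(i + 1) * fold_size]
        rest = data[:i * fold_size] + data[(i + 1) * fold_size:]
        classifier = train(rest)        # train on the remaining folds
        errors.append(test_error(classifier, held_out))
    return sum(errors) / K

# Toy usage: a "classifier" that always predicts the majority label
# of its training set, scored with zero-one loss.
def train_majority(train_set):
    labels = [y for _, y in train_set]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def zero_one_error(clf, test_set):
    return sum(clf(x) != y for x, y in test_set) / len(test_set)

examples = [(i, i % 2) for i in range(100)]  # balanced toy data
estimate = k_fold_cv(examples, train_majority, zero_one_error, K=5)
```

The returned `estimate` is the K-fold estimate of the classifier's error; the paper's subject is how well such estimates track the true error.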
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Langford, J. (2005). The Cross Validation Problem. In: Auer, P., Meir, R. (eds) Learning Theory. COLT 2005. Lecture Notes in Computer Science(), vol 3559. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11503415_47
Print ISBN: 978-3-540-26556-6
Online ISBN: 978-3-540-31892-7