Abstract
In Chapter 12 a classifier was selected by minimizing the empirical error over a class of classifiers $\mathcal{C}$. With the help of the Vapnik-Chervonenkis theory we have been able to obtain distribution-free performance guarantees for the selected rule. For example, it was shown that the difference between the expected error probability of the selected rule and the best error probability in the class is at most of the order of $\sqrt{V_{\mathcal{C}} \log n / n}$, where $V_{\mathcal{C}}$ is the Vapnik-Chervonenkis dimension of $\mathcal{C}$ and $n$ is the size of the training data $D_n$. (This upper bound is obtained from Theorem 12.5; Corollary 12.5 may be used to replace the $\log n$ term with $\log V_{\mathcal{C}}$.) Two questions arise immediately: Are these upper bounds tight, at least up to the order of magnitude? Is there a much better way of selecting a classifier than minimizing the empirical error? This chapter attempts to answer these questions. As it turns out, the answer is essentially affirmative for the first question and negative for the second.
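The contrast between the two kinds of bounds can be written out explicitly. The display below is an illustrative sketch of the standard statements, not the chapter's exact theorems: the constants $c_1, c_2$ and the precise conditions on $\mathcal{C}$ are assumptions here, and the chapter's own results may differ in these details.

```latex
% Upper bound for empirical risk minimization (in the spirit of
% Theorem 12.5): for every distribution of the data,
\mathbf{E}\, L(\phi_n^*) \;-\; \inf_{\phi \in \mathcal{C}} L(\phi)
  \;\le\; c_1 \sqrt{\frac{V_{\mathcal{C}} \log n}{n}} \, .

% Minimax lower bound (the theme of this chapter): for ANY rule
% \phi_n selected from the training data D_n, there exists some
% distribution for which
\mathbf{E}\, L(\phi_n) \;-\; \inf_{\phi \in \mathcal{C}} L(\phi)
  \;\ge\; c_2 \sqrt{\frac{V_{\mathcal{C}}}{n}} \, .
```

Read together, the two displays say that empirical error minimization is optimal up to the $\sqrt{\log n}$ factor: no selection rule can beat the $\sqrt{V_{\mathcal{C}}/n}$ rate uniformly over all distributions.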
Copyright information
© 1996 Springer Science+Business Media New York
Cite this chapter
Devroye, L., Györfi, L., Lugosi, G. (1996). Lower Bounds for Empirical Classifier Selection. In: A Probabilistic Theory of Pattern Recognition. Stochastic Modelling and Applied Probability, vol 31. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-0711-5_14
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4612-6877-2
Online ISBN: 978-1-4612-0711-5