Lower Bounds for Empirical Classifier Selection

Chapter in A Probabilistic Theory of Pattern Recognition

Part of the book series: Stochastic Modelling and Applied Probability ((SMAP,volume 31))

Abstract

In Chapter 12 a classifier was selected by minimizing the empirical error over a class of classifiers $\mathcal{C}$. With the help of the Vapnik-Chervonenkis theory we have been able to obtain distribution-free performance guarantees for the selected rule. For example, it was shown that the difference between the expected error probability of the selected rule and the best error probability in the class behaves at least as well as a constant multiple of $\sqrt{V_{\mathcal{C}} \log n / n}$, where $V_{\mathcal{C}}$ is the Vapnik-Chervonenkis dimension of $\mathcal{C}$, and $n$ is the size of the training data $D_n$. (This upper bound is obtained from Theorem 12.5; Corollary 12.5 may be used to replace the $\log n$ term with $\log V_{\mathcal{C}}$.) Two questions arise immediately: Are these upper bounds tight, at least up to the order of magnitude? Is there a much better way of selecting a classifier than minimizing the empirical error? This chapter attempts to answer these questions. As it turns out, the answer is essentially affirmative for the first question, and negative for the second.
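The selection rule discussed above, minimizing the empirical error over a class of classifiers, can be sketched concretely. The following is a minimal illustration, not the book's own code: the class of threshold classifiers $g_t(x) = \mathbf{1}\{x > t\}$, the noise level, and the data distribution are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data D_n: X uniform on [0, 1], label 1 iff X > 0.6,
# with 10% of the labels flipped (label noise).
n = 500
X = rng.random(n)
y = (X > 0.6).astype(int)
flip = rng.random(n) < 0.1
y[flip] = 1 - y[flip]

# A finite class C of threshold classifiers g_t(x) = 1{x > t}.
thresholds = np.linspace(0.0, 1.0, 101)

def empirical_error(t):
    """Fraction of training points misclassified by g_t."""
    return np.mean((X > t).astype(int) != y)

# Empirical error minimization: select the rule in C with the
# smallest error on the training data.
errors = np.array([empirical_error(t) for t in thresholds])
t_star = thresholds[np.argmin(errors)]
print(f"selected threshold: {t_star:.2f}, empirical error: {errors.min():.3f}")
```

With enough data the selected threshold lands near the true decision boundary 0.6, and its empirical error is close to the 10% noise rate; the VC bound quoted above controls how far the selected rule's true error can exceed the best error in the class, uniformly over distributions.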


Copyright information

© 1996 Springer Science+Business Media New York

About this chapter

Cite this chapter

Devroye, L., Györfi, L., Lugosi, G. (1996). Lower Bounds for Empirical Classifier Selection. In: A Probabilistic Theory of Pattern Recognition. Stochastic Modelling and Applied Probability, vol 31. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-0711-5_14

  • Print ISBN: 978-1-4612-6877-2

  • Online ISBN: 978-1-4612-0711-5
