Abstract
A generalization error model provides theoretical support for a classifier's performance in terms of prediction accuracy. However, existing models give very loose error bounds.
This explains why classification systems generally rely on experimental validation for their claims about prediction accuracy. In this talk we revisit this problem and explore the idea of a new generalization error model based on the assumption that only prediction accuracy on unseen points in a neighbourhood of a training point will be considered, since it is unreasonable to require a classifier to accurately predict unseen points "far away" from training samples. The new error model makes use of the concept of a sensitivity measure for multilayer feedforward neural networks (Multilayer Perceptrons or Radial Basis Function Neural Networks). The new model is then applied to the feature reduction problem for RBFNN classifiers. Experimental results on datasets including UCI benchmarks, the 1999 KDD Cup, and text categorization corpora will be presented.
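The abstract does not give the formal definition of the sensitivity measure, but the underlying idea — quantifying how much a trained network's output changes for unseen points within a small neighbourhood of each training sample — can be illustrated with a minimal sketch. The following Python example is a hypothetical Monte-Carlo estimate of such a sensitivity for a Gaussian RBF network; the function names, perturbation radius, and sampling scheme are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def rbf_output(X, centers, widths, weights):
    """Output of a Gaussian RBF network for inputs X of shape (n_samples, n_features)."""
    # Pairwise squared distances between each input and each RBF center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * widths ** 2))  # hidden-layer activations
    return phi @ weights                      # linear output layer

def sensitivity(X_train, centers, widths, weights,
                radius=0.1, n_perturb=200, seed=0):
    """Illustrative sensitivity estimate: the mean |f(x + dx) - f(x)| over
    random perturbations dx confined to a small box of the given radius
    around each training point (a stand-in for the 'neighbourhood' idea)."""
    rng = np.random.default_rng(seed)
    base = rbf_output(X_train, centers, widths, weights)
    diffs = []
    for _ in range(n_perturb):
        dx = rng.uniform(-radius, radius, size=X_train.shape)
        diffs.append(np.abs(rbf_output(X_train + dx, centers, widths, weights) - base))
    return float(np.mean(diffs))
```

For feature selection, one could perturb a single input feature at a time and rank features by the resulting sensitivity: features whose perturbation barely changes the output are candidates for removal. Again, this ranking rule is a plausible reading of the abstract, not the paper's exact procedure.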
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Yeung, D.S. (2009). Sensitivity Based Generalization Error for Supervised Learning Problems with Application in Feature Selection. In: Huang, R., Yang, Q., Pei, J., Gama, J., Meng, X., Li, X. (eds) Advanced Data Mining and Applications. ADMA 2009. Lecture Notes in Computer Science(), vol 5678. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03348-3_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-03347-6
Online ISBN: 978-3-642-03348-3