Abstract
A combination of classification rules (classifiers) is known as an ensemble, and it is in general more accurate than the individual classifiers used to build it. One method for constructing an ensemble is Bagging, introduced by Breiman (1996), which relies on resampling techniques to obtain a different training set for each of the classifiers. Previous work has shown that Bagging is very effective for unstable classifiers. In this paper we present results on the application of Bagging to classifiers in which the class-conditional density is estimated using kernel density estimators. The effect of feature selection on bagged classifiers is also considered.
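The procedure the abstract describes can be sketched as follows: fit one kernel-density classifier per bootstrap resample of the training set, then combine their predictions by majority vote. This is a minimal one-dimensional illustration, not the authors' implementation; the Gaussian kernel, the fixed bandwidth of 0.5, and the toy two-class data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_log_density(x, sample, bandwidth=0.5):
    # Gaussian kernel density estimate of log p(x) from a 1-D sample
    diffs = (x - sample) / bandwidth
    return np.log(np.mean(np.exp(-0.5 * diffs**2)) /
                  (bandwidth * np.sqrt(2 * np.pi)))

def kde_classify(x, class_samples):
    # assign x to the class whose estimated density at x is highest
    scores = [kde_log_density(x, s) for s in class_samples]
    return int(np.argmax(scores))

def bagged_kde_classify(x, X, y, n_boot=25):
    # Bagging (Breiman, 1996): train one kernel-density classifier
    # per bootstrap resample, then combine by majority vote
    votes = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))
        Xb, yb = X[idx], y[idx]
        class_samples = [Xb[yb == c] for c in np.unique(y)]
        votes.append(kde_classify(x, class_samples))
    return int(np.bincount(votes).argmax())

# toy data: class 0 centred at -2, class 1 centred at +2
X = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])
y = np.concatenate([np.zeros(50, int), np.ones(50, int)])
print(bagged_kde_classify(-1.5, X, y))  # point near class 0
print(bagged_kde_classify(1.8, X, y))   # point near class 1
```

Feature selection, as studied in the paper, would enter before the bootstrap loop by restricting each classifier to a chosen subset of the input variables.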
References
BAUER, E. and KOHAVI, R. (1999): An empirical comparison of voting classification algorithms: Bagging, Boosting and variants. Machine Learning, 36, 105–139.
BLAKE, C. and MERZ, C. (1998): UCI repository of machine learning databases. Department of Computer Science and Information, University of California, Irvine, USA.
BREIMAN, L. (1996): Bagging Predictors. Machine Learning, 24, 123–140.
BREIMAN, L. (1998): Arcing Classifiers. Annals of Statistics, 26, 801–849.
DIETTERICH, T.G. (2000): An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, Boosting, and randomization. Machine Learning, 40, 139–157.
FREUND, Y. and SCHAPIRE, R. (1996): Experiments with a new boosting algorithm. In Machine Learning, Proceedings of the Thirteenth International Conference, San Francisco, Morgan Kaufmann, 148–156.
KOHAVI, R. and JOHN, G.H. (1997): Wrappers for feature subset selection. Artificial Intelligence, 97, 273–324.
MACLIN, R. and OPITZ, D. (1997): An empirical evaluation of Bagging and Boosting. Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI/MIT Press.
MICHIE, D., SPIEGELHALTER, D.J. and TAYLOR, C.C. (1994): Machine Learning, Neural and Statistical Classification. London: Ellis Horwood.
QUINLAN, J.R. (1996): Bagging, Boosting and C4.5. Proceedings of the Thirteenth National Conference on Artificial Intelligence, AAAI/MIT Press, 725–730.
SILVERMAN, B.W. (1986): Density Estimation for Statistics and Data Analysis. Chapman and Hall. London.
TITTERINGTON, D.M. (1980): A comparative study of kernel-based density estimates for categorical data. Technometrics, 22, 259–268.
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Acuña, E., Rojas, A., Coaquira, F. (2002). Effect of Feature Selection on Bagging Classifiers Based on Kernel Density Estimators. In: Jajuga, K., Sokołowski, A., Bock, HH. (eds) Classification, Clustering, and Data Analysis. Studies in Classification, Data Analysis, and Knowledge Organization. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-56181-8_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-43691-1
Online ISBN: 978-3-642-56181-8