Effect of Feature Selection on Bagging Classifiers Based on Kernel Density Estimators
A combination of classification rules (classifiers) is known as an Ensemble, and it is generally more accurate than the individual classifiers used to build it. One method for constructing an Ensemble is Bagging, introduced by Breiman (1996), which relies on resampling techniques to obtain different training sets for each of the classifiers. Previous work has shown that Bagging is very effective for unstable classifiers. In this paper we present results on the application of Bagging to classifiers in which the class-conditional densities are estimated with kernel density estimators. The effect of feature selection on Bagging is also considered.
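To make the approach concrete, the following is a minimal sketch of a Bagging ensemble built on kernel-density classifiers, assuming scikit-learn's KernelDensity estimator; the bandwidth, the number of bootstrap replicates, and all names (KDEClassifier, bagged_predict) are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

class KDEClassifier:
    """Bayes classifier whose class-conditional densities come from KDE."""
    def __init__(self, bandwidth=1.0):
        self.bandwidth = bandwidth

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        # One kernel density estimate per class, fitted on that class's rows.
        self.kdes_ = [KernelDensity(bandwidth=self.bandwidth).fit(X[y == c])
                      for c in self.classes_]
        return self

    def predict(self, X):
        # log p(x | c) + log p(c) for each class; assign to the maximizer.
        log_post = np.column_stack(
            [kde.score_samples(X) + np.log(prior)
             for kde, prior in zip(self.kdes_, self.priors_)])
        return self.classes_[np.argmax(log_post, axis=1)]

def bagged_predict(X_train, y_train, X_test, n_estimators=25,
                   bandwidth=1.0, rng=None):
    """Bagging: fit each KDE classifier on a bootstrap resample of the
    training set, then combine the predictions by majority vote.
    Assumes integer class labels 0..K-1 (needed for np.bincount)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(y_train)
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        clf = KDEClassifier(bandwidth).fit(X_train[idx], y_train[idx])
        votes.append(clf.predict(X_test))
    votes = np.asarray(votes)
    # Majority vote over the ensemble, one column per test point.
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Feature selection would slot in before the resampling loop, for instance by restricting X_train and X_test to a chosen column subset; how that choice interacts with the ensemble's misclassification error is the question the paper studies.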
Keywords: Feature Selection, Linear Discriminant Analysis, Kernel Density, Misclassification Error, Kernel Density Estimator
- BLAKE, C. and MERZ, C. (1998): UCI Repository of Machine Learning Databases. Department of Information and Computer Science, University of California, Irvine, USA.
- BREIMAN, L. (1996): Bagging Predictors. Machine Learning, 24, 123–140.
- DIETTERICH, T.G. (2000): An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization. Machine Learning, 40, 139–157.
- FREUND, Y. and SCHAPIRE, R. (1996): Experiments with a New Boosting Algorithm. In: Machine Learning, Proceedings of the Thirteenth International Conference, Morgan Kaufmann, San Francisco, 148–156.
- MACLIN, R. and OPITZ, D. (1997): An Empirical Evaluation of Bagging and Boosting. Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI/MIT Press.
- QUINLAN, J.R. (1996): Bagging, Boosting, and C4.5. Proceedings of the Thirteenth National Conference on Artificial Intelligence, AAAI/MIT Press, 725–730.