Classifier Ensemble Using a Heuristic Learning with Sparsity and Diversity

  • Xu-Cheng Yin
  • Kaizhu Huang
  • Hong-Wei Hao
  • Khalid Iqbal
  • Zhi-Bin Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7664)

Abstract

Classifier ensembles have been intensively studied with the aim of overcoming the limitations of individual classifier components, in two prevalent directions: diversely generating classifier components, and sparsely combining multiple classifiers. Most current approaches emphasize either sparsity or diversity alone. In this paper, we investigate classifier ensembles that learn both sparsity and diversity using a heuristic method. We formulate the sparsity and diversity learning problem in a general mathematical framework, which is beneficial for learning sparsity and diversity while grouping classifiers. Moreover, we propose a practical approach based on the genetic algorithm for the optimization process. To conveniently evaluate the diversity of component classifiers, we introduce the diversity contribution ability, which is used to select proper classifier components and to evolve classifier weights. Experimental results on several UCI classification data sets confirm that our approach achieves promising sparseness and generalization performance.
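
As a concrete illustration of the optimization loop sketched in the abstract, the code below evolves sparse ensemble weights with a genetic algorithm. This is a minimal sketch, not the paper's implementation: the fitness trade-off constants `alpha` and `beta`, the use of mean pairwise disagreement as a stand-in for the diversity contribution ability, and the ±1 label encoding are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights, preds, y, alpha=0.1, beta=0.1):
    # Score one candidate weight vector: weighted-vote accuracy,
    # plus a diversity bonus, minus a penalty on how many
    # classifiers remain active (sparsity).
    active = weights > 0
    if not active.any():
        return -np.inf
    vote = np.sign(weights @ preds)   # preds: (n_classifiers, n_samples) in {-1, +1}
    acc = np.mean(vote == y)
    sub = preds[active]
    if len(sub) > 1:
        pairs = [(i, j) for i in range(len(sub)) for j in range(i + 1, len(sub))]
        diversity = np.mean([np.mean(sub[i] != sub[j]) for i, j in pairs])
    else:
        diversity = 0.0
    return acc + alpha * diversity - beta * np.mean(active)

def evolve(preds, y, pop=40, gens=60, p_mut=0.1):
    # Genetic algorithm over sparse weight vectors: keep the best half,
    # breed children by uniform crossover, then mutate genes so a
    # classifier can be dropped (weight zeroed) or re-weighted.
    n = preds.shape[0]
    population = rng.random((pop, n)) * rng.integers(0, 2, (pop, n))
    for _ in range(gens):
        scores = np.array([fitness(w, preds, y) for w in population])
        elite = population[np.argsort(scores)[-(pop // 2):]]
        pa = elite[rng.integers(0, len(elite), pop - len(elite))]
        pb = elite[rng.integers(0, len(elite), pop - len(elite))]
        kids = np.where(rng.random(pa.shape) < 0.5, pa, pb)
        mask = rng.random(kids.shape) < p_mut
        kids[mask] = rng.random(mask.sum()) * rng.integers(0, 2, mask.sum())
        population = np.vstack([elite, kids])
    return max(population, key=lambda w: fitness(w, preds, y))
```

Here `preds` would hold the ±1 validation-set predictions of, e.g., bagged base learners; zero entries in the returned weight vector correspond to pruned classifier components, which is where the learned sparsity appears.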

Keywords

Classifier ensemble · Sparsity learning · Diversity learning · Bagging

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Xu-Cheng Yin (1)
  • Kaizhu Huang (2)
  • Hong-Wei Hao (2)
  • Khalid Iqbal (1)
  • Zhi-Bin Wang (1)
  1. School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
  2. Institute of Automation, Chinese Academy of Sciences, Beijing, China