
An Ensemble Method for Incremental Classification in Stationary and Non-stationary Environments

  • Ricardo Ñanculef
  • Erick López
  • Héctor Allende
  • Héctor Allende-Cid
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7042)

Abstract

We present a model based on an ensemble of base classifiers, combined by weighted majority voting, for the task of incremental classification. The definition of the voting weights becomes even more critical in non-stationary environments, where the patterns underlying the observations change over time. Given an instance to classify, we propose to define each voting weight as a function that takes into account the location of the instance in the different class-specific feature spaces, the prior probability of those classes given the knowledge represented by the classifier, and the classifier's overall performance on its training examples. This approach can improve generalization performance and the ability to control the stability/plasticity tradeoff in both stationary and non-stationary environments. Experiments were carried out on several real classification problems previously used to test incremental algorithms in stationary as well as non-stationary environments.
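To make the weighting scheme sketched above concrete, the following Python sketch shows one plausible reading of dynamically weighted majority voting, not the authors' exact formulation: it assumes scikit-learn-style classifiers trained on successive batches, uses per-class centroids as the class-specific location measure and a softmax over negative distances as proximity, and scales the result by the prior of each class under the member's training batch and the member's training accuracy. All names (BaseMember, predict_weighted_majority) are illustrative.

```python
import numpy as np

class BaseMember:
    """One ensemble member: a trained classifier plus the statistics
    needed to weight its vote at prediction time."""
    def __init__(self, clf, X, y):
        self.clf = clf
        self.classes = np.unique(y)
        # Class priors as seen in this member's training batch.
        self.priors = {c: float(np.mean(y == c)) for c in self.classes}
        # Class-specific feature-space "location": per-class centroids.
        self.centroids = {c: X[y == c].mean(axis=0) for c in self.classes}
        # Overall performance of the member on its own training examples.
        self.train_acc = float(np.mean(clf.predict(X) == y))

    def weight(self, x):
        """Instance-dependent competence of this member for input x."""
        # Proximity of x to each class region known to this member
        # (softmax over negative distances keeps the terms in (0, 1)).
        d = np.array([np.linalg.norm(x - self.centroids[c]) for c in self.classes])
        proximity = np.exp(-d) / np.exp(-d).sum()
        prior = np.array([self.priors[c] for c in self.classes])
        # Location-based proximity weighted by class priors,
        # scaled by overall training performance.
        return self.train_acc * float(np.dot(proximity, prior))

def predict_weighted_majority(members, x):
    """Weighted majority vote with weights recomputed per instance."""
    votes = {}
    for m in members:
        label = m.clf.predict(x.reshape(1, -1))[0]
        votes[label] = votes.get(label, 0.0) + m.weight(x)
    return max(votes, key=votes.get)
```

In an incremental setting, a new BaseMember would be fitted on each incoming batch and appended to the ensemble, so the per-instance weights decide how much old and new knowledge contribute to each prediction.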

Keywords

Incremental Learning · Dynamic Environments · Ensemble Methods · Concept Drift


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Ricardo Ñanculef
  • Erick López
  • Héctor Allende
  • Héctor Allende-Cid

  1. Department of Informatics, Federico Santa María University, Chile
