A Bounded Version of Online Boosting on Open-Ended Data Streams

  • Conference paper
Data Warehousing and Knowledge Discovery (DaWaK 2011)

Abstract

In this paper, we propose an incremental learning algorithm for ensemble classifier systems. Ensemble learning algorithms combine the predictions of multiple base models, each of which is learned using a traditional algorithm.

We propose a new method to update the weights of the classifiers in the weighted majority voting scheme in a one-pass incremental learning setting. The method computes the classifier weights and the distribution of the training data with an approach based on the prequential error, which avoids overflow of the internal values used by the learning algorithm.
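
The following is a minimal sketch, not the authors' exact formulation, of how such a scheme can be realised: the prequential error of each base classifier is maintained with a fading factor, so both accumulators stay bounded (the faded example count converges to 1/(1 - alpha)), and each classifier's voting weight is derived from that error. The class name PrequentialWeightedEnsemble, the learner interface predict(x)/partial_fit(x, y), and the log-odds weighting are assumptions made for illustration only.

```python
import math

class PrequentialWeightedEnsemble:
    """Sketch of weighted majority voting over online base learners.

    Assumption: each base learner exposes predict(x) and partial_fit(x, y)
    for a single example. Classifier weights are derived from a prequential
    (fading-factor) error estimate, so every internal accumulator stays
    bounded no matter how long the stream is.
    """

    def __init__(self, base_learners, fading_factor=0.999):
        self.learners = list(base_learners)
        self.alpha = fading_factor                   # 0 < alpha < 1: forgetting rate
        self.err_num = [0.0] * len(self.learners)    # faded sum of 0/1 losses
        self.err_den = [0.0] * len(self.learners)    # faded example count -> 1/(1 - alpha)

    def _error(self, i):
        # Prequential error estimate of learner i, always in [0, 1].
        return self.err_num[i] / self.err_den[i] if self.err_den[i] > 0 else 0.5

    def _weight(self, i):
        # Log-odds weight (clipped), as in weighted-majority / boosting schemes.
        e = min(max(self._error(i), 1e-6), 1.0 - 1e-6)
        return math.log((1.0 - e) / e)

    def predict(self, x):
        votes = {}
        for i, h in enumerate(self.learners):
            y = h.predict(x)
            votes[y] = votes.get(y, 0.0) + self._weight(i)
        return max(votes, key=votes.get)

    def learn_one(self, x, y):
        # Prequential (test-then-train): score each learner on the new example
        # first, update its faded error statistics, then train it on the example.
        for i, h in enumerate(self.learners):
            loss = 0.0 if h.predict(x) == y else 1.0
            self.err_num[i] = self.alpha * self.err_num[i] + loss
            self.err_den[i] = self.alpha * self.err_den[i] + 1.0
            h.partial_fit(x, y)
```

Because every update has the form v <- alpha * v + c with 0 < alpha < 1, all statistics remain within fixed bounds regardless of stream length, which is what keeps internal values from overflowing on open-ended streams.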

Using a prequential approach implies that learned samples are forgotten progressively. Forgetting learned concepts could affect the accuracy of the model. However, the experiments verify that the proposed model can learn incrementally without serious forgetting and that its performance is not seriously degraded by the re-weighting method used, in comparison with learning models without forgetting.
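
As a hedged illustration of this trade-off (again assuming the fading-factor formulation sketched above), the faded example count b satisfies b <- alpha * b + 1 and converges to 1 / (1 - alpha), so the error estimate behaves roughly like an average over that many recent examples; older examples fade out progressively.

```python
# Effective memory of a fading-factor (prequential) estimate: the faded
# example count converges to 1 / (1 - alpha), i.e. roughly that many of
# the most recent examples dominate the estimate.
def effective_memory(fading_factor):
    return 1.0 / (1.0 - fading_factor)

print(effective_memory(0.999))  # ~1000 examples: slow forgetting
print(effective_memory(0.99))   # ~100 examples: faster forgetting, quicker adaptation
```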

Experimental results confirm that the proposed incremental ensemble classifier system achieves performance comparable to that of an existing ensemble classifier learning system. Moreover, it can be trained on open-ended data streams without data overflow.

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Triviño-Rodriguez, J.L., Ruiz-Sepúlveda, A., Morales-Bueno, R. (2011). A Bounded Version of Online Boosting on Open-Ended Data Streams. In: Cuzzocrea, A., Dayal, U. (eds) Data Warehousing and Knowledge Discovery. DaWaK 2011. Lecture Notes in Computer Science, vol 6862. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23544-3_35

  • DOI: https://doi.org/10.1007/978-3-642-23544-3_35

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23543-6

  • Online ISBN: 978-3-642-23544-3
