Abstract
In this paper, we propose an incremental learning algorithm for ensemble classifier systems. Ensemble learning algorithms combine the predictions of multiple base models, each trained with a traditional learning algorithm.
We propose a new method for updating the classifier weights in the weighted majority voting scheme under one-pass incremental learning. The method computes the classifier weights and the distribution of the training data from the prequential error, which prevents the internal values used by the learning algorithm from overflowing.
The prequential approach implies that learned samples are forgotten progressively, and forgetting learned concepts could degrade the accuracy of the model. However, our experiments verify that the proposed model learns incrementally without serious forgetting, and that its performance is not seriously affected by the re-weighting method in comparison with learning models that do not forget.
Experimental results confirm that the proposed incremental ensemble classifier system yields performance comparable to another ensemble learning system and, moreover, can be trained on open-ended data streams without data overflow.
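As a rough illustration of the idea, the classifier weights can be derived from a prequential (test-then-train) error estimate maintained with a fading factor, so that the accumulated sums stay bounded no matter how long the stream runs, while older examples fade out progressively. The sketch below is a minimal illustration under our own naming assumptions; the fading factor `alpha`, the `PrequentialError` helper, and the log-odds weighting are illustrative choices, not the exact formulation of the paper:

```python
import math

class PrequentialError:
    """Faded prequential error estimate for one base classifier.

    With a fading factor 0 < alpha < 1, both accumulators converge to
    bounded values (at most 1 / (1 - alpha)), so no internal value can
    overflow on an open-ended stream, and old examples are forgotten
    progressively.
    """

    def __init__(self, alpha=0.99):
        self.alpha = alpha
        self.loss_sum = 0.0   # faded sum of 0/1 losses
        self.count = 0.0      # faded number of examples seen

    def update(self, prediction, label):
        loss = 0.0 if prediction == label else 1.0
        self.loss_sum = loss + self.alpha * self.loss_sum
        self.count = 1.0 + self.alpha * self.count

    def error(self):
        return self.loss_sum / self.count if self.count > 0 else 0.5

def voting_weight(err, eps=1e-6):
    """Boosting-style log-odds weight for weighted majority voting."""
    err = min(max(err, eps), 1.0 - eps)
    return math.log((1.0 - err) / err)

# Example: a classifier that is wrong on roughly 1 example in 10 ends
# up with a faded error near 0.1 and a clearly positive voting weight.
est = PrequentialError(alpha=0.99)
for i in range(1000):
    est.update(prediction=0, label=1 if i % 10 == 0 else 0)
print(est.error(), voting_weight(est.error()))
```

Because each update touches only two bounded scalars per classifier, the scheme is one-pass and uses constant memory per base model.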
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Triviño-Rodriguez, J.L., Ruiz-Sepúlveda, A., Morales-Bueno, R. (2011). A Bounded Version of Online Boosting on Open-Ended Data Streams. In: Cuzzocrea, A., Dayal, U. (eds) Data Warehousing and Knowledge Discovery. DaWaK 2011. Lecture Notes in Computer Science, vol 6862. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23544-3_35
DOI: https://doi.org/10.1007/978-3-642-23544-3_35
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-23543-6
Online ISBN: 978-3-642-23544-3