Online Gradient Boosting for Incremental Recommender Systems

  • João Vinagre
  • Alípio Mário Jorge
  • João Gama
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11198)

Abstract

Ensemble models have proven successful for batch recommendation algorithms; however, they have not been well studied in streaming applications. Such applications typically rely on incremental learning, to which standard ensemble techniques are not trivially applicable. In this paper, we study the application of three variants of online gradient boosting to top-N recommendation tasks with implicit feedback in a streaming data environment. Weak models are built using a simple incremental matrix factorization algorithm for implicit feedback. Our results show a significant improvement of up to 40% over the baseline standalone model. We also show that the overhead of running multiple weak models is easily manageable in stream-based applications.
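The approach summarized above combines an incremental matrix factorization weak model with an online boosting wrapper. The following sketch illustrates the general idea under stated assumptions: the `ISGD` class is a simplified incremental SGD factorization for positive-only feedback (in the spirit of the weak model the abstract describes), and `OnlineBoostedMF` is a hypothetical boosting wrapper in which each weak model is trained on the residual left by the partial ensemble; all class names, hyperparameters, and the shrinkage scheme are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

class ISGD:
    """Simplified incremental matrix factorization for positive-only feedback.

    Each observed (user, item) event is treated as a target of 1.0 and
    factors are updated by one SGD step. Hyperparameters are illustrative.
    """
    def __init__(self, k=8, lr=0.05, reg=0.01, seed=0):
        self.k, self.lr, self.reg = k, lr, reg
        self.rng = np.random.default_rng(seed)
        self.P, self.Q = {}, {}  # user and item latent factor tables

    def _vec(self, table, key):
        # Lazily initialize factors for unseen users/items.
        if key not in table:
            table[key] = self.rng.normal(0.0, 0.1, self.k)
        return table[key]

    def predict(self, user, item):
        if user not in self.P or item not in self.Q:
            return 0.0  # unseen user or item: no score
        return float(self.P[user] @ self.Q[item])

    def update(self, user, item, target=1.0):
        p, q = self._vec(self.P, user), self._vec(self.Q, item)
        err = target - p @ q
        p_old = p.copy()  # use pre-update user factors for the item step
        p += self.lr * (err * q - self.reg * p_old)
        q += self.lr * (err * p_old - self.reg * q)

class OnlineBoostedMF:
    """Hypothetical online boosting over ISGD weak models.

    Weak model t is trained on the residual left by models 0..t-1, and the
    ensemble prediction is a shrinkage-weighted sum of weak predictions.
    """
    def __init__(self, n_models=4, shrinkage=0.5, **kw):
        self.models = [ISGD(seed=s, **kw) for s in range(n_models)]
        self.shrinkage = shrinkage

    def predict(self, user, item):
        return sum(self.shrinkage * m.predict(user, item) for m in self.models)

    def update(self, user, item):
        # A positive event carries an implicit target of 1.0; each weak
        # model fits what remains after the preceding models' predictions.
        residual = 1.0
        for m in self.models:
            m.update(user, item, target=residual)
            residual -= self.shrinkage * m.predict(user, item)
```

In a streaming setting, each incoming event would be scored first (prequential evaluation) and then fed to `update`, so the ensemble adapts one event at a time without revisiting past data.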

Keywords

Recommender systems · Boosting · Online learning · Data streams

Acknowledgments

This work is financed by the European Regional Development Fund (ERDF), through the Incentive System to Research and Technological Development, within the Portugal2020 Competitiveness and Internationalization Operational Program (COMPETE 2020), within project PushNews (POCI-01-0247-FEDER-0024257). The work is also financed by the ERDF through COMPETE 2020 within project POCI-01-0145-FEDER-006961, and by national funds through the Portuguese Foundation for Science and Technology (FCT) as part of project UID/EEA/50014/2013.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. FCUP - University of Porto, Porto, Portugal
  2. FEP - University of Porto, Porto, Portugal
  3. LIAAD - INESC TEC, Porto, Portugal
