Large-Scale Bandit Recommender System

  • Frédéric Guillou
  • Romaric Gaudel
  • Philippe Preux
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10122)

Abstract

The main goal of Recommender Systems (RS) is to propose to users one or several items in which they might be interested. However, as users provide more feedback, the recommendation process has to take these new data into account. The necessity of this update phase makes recommendation an intrinsically sequential task. A few approaches were recently proposed to address this issue, but they do not meet the need to scale up to real-life applications. In this paper, we present a Collaborative Filtering RS method based on Matrix Factorization and Multi-Armed Bandits. This approach aims at good recommendations within a short computation time. Several experiments on large datasets show that the proposed approach delivers personalized recommendations in less than a millisecond per recommendation.
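
The abstract only names the two ingredients: Matrix Factorization to predict ratings and a Multi-Armed Bandit policy to balance exploration and exploitation as new feedback arrives. The sketch below is a minimal illustration of how such pieces can fit together, assuming an epsilon-greedy bandit on top of rank-d latent factors; the variable names, the random placeholder factors, and the epsilon-greedy choice are illustrative assumptions, not the authors' exact algorithm.

    import numpy as np

    # Minimal sketch: epsilon-greedy item selection on top of a rank-d
    # matrix-factorization model. In a real system U (users x d) and
    # V (items x d) would be refit periodically from the feedback log;
    # here they are random placeholders.
    rng = np.random.default_rng(0)
    n_users, n_items, d = 1000, 500, 10
    U = rng.normal(size=(n_users, d))  # user latent factors (placeholder)
    V = rng.normal(size=(n_items, d))  # item latent factors (placeholder)

    def recommend(user, epsilon=0.1):
        """Pick one item for `user`: explore with probability epsilon,
        otherwise exploit the highest predicted rating."""
        if rng.random() < epsilon:
            return int(rng.integers(n_items))   # exploration: random item
        return int(np.argmax(V @ U[user]))      # exploitation: best predicted item

    item = recommend(user=42)
    # Each call costs one (n_items x d) matrix-vector product, which is why
    # per-recommendation latency can stay well below a millisecond.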

Notes

Acknowledgments

The authors would like to acknowledge the stimulating environment provided by the SequeL research group, Inria, and CRIStAL. This work was supported by the French Ministry of Higher Education and Research, by CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015–2020, and by FUI Hermès. Experiments were carried out using the Grid’5000 testbed, supported by Inria, CNRS, RENATER, and several universities as well as other organizations.

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Frédéric Guillou (1)
  • Romaric Gaudel (2)
  • Philippe Preux (2)
  1. Inria, Univ. Lille, CNRS, Villeneuve-d’Ascq, France
  2. Univ. Lille, CNRS, Centrale Lille, Inria, UMR 9189 - CRIStAL, Villeneuve-d’Ascq, France