Locality-Sensitive Linear Bandit Model for Online Social Recommendation

  • Conference paper
Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 9947))

Abstract

Recommender systems provide personalized suggestions by learning users’ preferences from their historical feedback. To alleviate the heavy reliance on historical data, several online recommendation methods have recently been proposed and have shown effectiveness in addressing the data sparsity and cold-start problems in recommender systems. However, existing online recommendation methods neglect the social connections among users, which have been proven an effective way to improve recommendation accuracy in offline settings. In this paper, we investigate how to leverage social connections to improve online recommendation performance. In particular, we formulate the online social recommendation task as a contextual bandit problem and propose a Locality-sensitive Linear Bandit (LS.Lin) method to solve it. The proposed model incorporates users’ local social relations into a linear contextual bandit model and is capable of dealing with dynamic changes in user preferences and network structure. We provide a theoretical analysis of the proposed LS.Lin method and then demonstrate its improved performance for online social recommendation in empirical studies, compared with baseline methods.
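To make the general idea concrete, the following is a minimal, illustrative sketch of a LinUCB-style linear contextual bandit in which each user's preference estimate is smoothed with the estimates of their local social neighbors. This is a hedged sketch of the *general* approach the abstract describes (linear bandit plus local social relations), not the paper's actual LS.Lin algorithm; the class name, the smoothing weight `gamma`, and the simple neighbor-averaging rule are all assumptions made for illustration.

```python
import numpy as np

class SocialLinUCB:
    """Illustrative LinUCB-style bandit with local social smoothing.

    Each user u keeps ridge-regression statistics (A_u, b_u). When
    choosing an arm, u's preference vector is blended with the average
    preference vector of u's neighbors (weight gamma) -- a simplified
    stand-in for incorporating local social relations. Not the exact
    LS.Lin algorithm from the paper.
    """

    def __init__(self, n_users, dim, alpha=1.0, gamma=0.2):
        self.alpha = alpha  # exploration strength (UCB width)
        self.gamma = gamma  # weight placed on neighbors' estimates
        self.A = np.stack([np.eye(dim) for _ in range(n_users)])
        self.b = np.zeros((n_users, dim))

    def _theta(self, u):
        # Ridge-regression estimate of user u's preference vector.
        return np.linalg.solve(self.A[u], self.b[u])

    def choose(self, u, arm_features, neighbors=()):
        theta = self._theta(u)
        if neighbors:
            # Smooth u's estimate with the neighbors' average estimate.
            neighbor_avg = np.mean([self._theta(v) for v in neighbors], axis=0)
            theta = (1 - self.gamma) * theta + self.gamma * neighbor_avg
        A_inv = np.linalg.inv(self.A[u])
        # Upper-confidence-bound score per arm: estimate + exploration bonus.
        scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
                  for x in arm_features]
        return int(np.argmax(scores))

    def update(self, u, x, reward):
        # Standard rank-one update of the ridge-regression statistics.
        self.A[u] += np.outer(x, x)
        self.b[u] += reward * x
```

In this sketch the social graph only biases the point estimate at selection time; a cold-start user with informative neighbors is immediately pulled toward their preferences, while the confidence bonus still drives exploration.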



Acknowledgments

The work described in this paper was partially supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 of the General Research Fund), and 2015 Microsoft Research Asia Collaborative Research Program (Project No. FY16-RES-THEME-005).

Author information


Corresponding author

Correspondence to Tong Zhao.



Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Zhao, T., King, I. (2016). Locality-Sensitive Linear Bandit Model for Online Social Recommendation. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9947. Springer, Cham. https://doi.org/10.1007/978-3-319-46687-3_9

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-46687-3_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46686-6

  • Online ISBN: 978-3-319-46687-3

  • eBook Packages: Computer Science (R0)
