Active Learning in Recommender Systems



Recommender Systems (RSs) are often assumed to present items to users for one reason only: to recommend items the user is likely to be interested in. RSs do indeed recommend, but this assumption, encouraged by the name itself, is biased towards the "recommending" that the system does. There is another reason for presenting an item to the user: to learn more about his/her preferences, or his/her likes and dislikes. This is where Active Learning (AL) comes in. Augmenting RSs with AL helps users become more aware of their own likes/dislikes, while at the same time providing the system with new information that it can analyze for subsequent recommendations. In essence, applying AL to RSs personalizes the recommending process itself, a natural fit since recommendation is inherently geared towards personalization. This is accomplished by letting the system actively influence which items the user is exposed to (e.g. the items displayed to the user during sign-up or during regular use), while still letting the user explore his/her interests freely.
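To make the idea concrete, below is a minimal, hypothetical sketch (in Python) of one common active learning criterion, query-by-committee style uncertainty sampling, applied to rating elicitation: the system asks the user to rate the items on which an ensemble of rating predictors disagrees most, since those ratings are expected to be the most informative. The ensemble interface (a predict(user_id, item_id) method returning a predicted rating) and all names are illustrative assumptions, not the specific method described in this chapter.

    # Illustrative sketch only; the models in `ensemble` are assumed to expose
    # a hypothetical predict(user_id, item_id) -> float interface.
    import numpy as np

    def select_items_to_ask(user_id, candidate_items, ensemble, k=5):
        """Pick the k items whose predicted rating the ensemble disagrees on most."""
        scores = []
        for item in candidate_items:
            preds = np.array([model.predict(user_id, item) for model in ensemble])
            scores.append((preds.var(), item))  # variance across models = disagreement
        scores.sort(key=lambda s: s[0], reverse=True)  # most uncertain items first
        return [item for _, item in scores[:k]]

Other AL criteria discussed in the literature (e.g. expected error reduction or influence-based selection) would replace the variance score with a different measure of informativeness, but the overall loop of scoring candidate items and querying the user about the top-ranked ones stays the same.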


Keywords: Active Learning, Recommender System, Decision Boundary, Training Point, Active Learning Method





We would like to express our appreciation to Professor Okamoto, Professor Ueno, Professor Tokunaga, Professor Tomioka, Dr. Sheinman, Dr. Vilenius, Sachi Kabasawa and Akane Odake for their help and assistance, and also to MEXT and JST for their financial support; the comments received from reviewers and editors were likewise indispensable to the writing process.



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. University of Electro-Communications, Tokyo, Japan
  2. Tokyo Institute of Technology, Tokyo, Japan
