Time-Aware Novelty Metrics for Recommender Systems

  • Pablo Sánchez
  • Alejandro Bellogín
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10772)


Time-aware recommender systems form an active research area in which the temporal dimension is exploited to improve the effectiveness of recommendations. Even though performance evaluation is dominated by accuracy-related metrics, such as precision or NDCG, other properties of the recommended items, like their novelty and diversity, have attracted attention in recent years, and several metrics have been defined with this goal in mind. However, it is unclear how suitable these metrics are for measuring novelty or diversity in temporal contexts. In this paper, we propose a formulation that captures the time-aware novelty (or freshness) of recommendation lists according to different time models of the items. Hence, we provide a measure of how strongly a system promotes fresh items in its recommendations. We show that time-aware recommenders tend to provide fresher items, although this is not always the case, depending on statistical biases and patterns inherent to the data. Our results nonetheless indicate that the proposed formulation can extend our knowledge of which items are being suggested by any recommendation technique that aims to exploit temporal contexts.
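To make the idea of list freshness concrete, the following is a minimal sketch of one way such a time-aware novelty score could be computed. It is a hypothetical illustration, not the paper's exact formulation: the function name `list_freshness` and the choice of "last interaction time" as the item time model are assumptions made here for the example.

```python
from typing import Dict, List


def list_freshness(recommended: List[str],
                   item_last_seen: Dict[str, float],
                   t_now: float,
                   t_min: float) -> float:
    """Mean normalized recency of the items in a recommendation list.

    An item scores 1.0 if it was last interacted with at t_now (the
    recommendation time) and 0.0 if at t_min (the oldest timestamp in
    the training data). The list score is the average over its items.
    """
    span = t_now - t_min
    if span <= 0 or not recommended:
        return 0.0
    scores = []
    for item in recommended:
        # Items never seen in training are treated as maximally stale.
        t_item = item_last_seen.get(item, t_min)
        scores.append(1.0 - (t_now - t_item) / span)
    return sum(scores) / len(scores)
```

Under this sketch, a recommender that surfaces recently active items scores close to 1, while one that keeps recommending long-dormant catalogue items scores close to 0; other item time models (e.g., first interaction or release date) would simply change how `item_last_seen` is populated.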



This research was supported by the Spanish Ministry of Economy, Industry and Competitiveness (TIN2016-80630-P).



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Universidad Autónoma de Madrid, Madrid, Spain
