A Rich Ranking Model Based on the Matthew Effect Optimization

  • Jinzhong Li
  • Guanjun Liu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11280)

Abstract

Most existing learning-to-rank approaches treat the effectiveness of each query equally, which results in a relatively low ratio of queries with high effectiveness (i.e., rich queries) in the produced ranking model. Such ranking models need further optimization to increase the number of rich queries. In this paper, queries with different effectiveness are distinguished, and queries with higher effectiveness are given higher weights. We modify the gradient of the LambdaMART algorithm from a new perspective of the Matthew effect to emphasize the optimization of rich queries and thereby produce a rich ranking model, and we present a consistency theorem for the modified optimization objective. Based on effectiveness evaluation criteria for information retrieval, we introduce the Gini coefficient, mean-variance, and quantity statistics to measure the performance of the ranking models. Experimental results show that the ranking models produced by the gradient-modified, Matthew-effect-based LambdaMART algorithm exhibit a stronger Matthew effect than those produced by the original LambdaMART algorithm.
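The abstract describes measuring how unevenly per-query effectiveness is distributed by means of the Gini coefficient, and weighting high-effectiveness ("rich") queries more heavily during gradient computation. The sketch below is a minimal illustration of both ideas, assuming per-query NDCG as the effectiveness measure; `matthew_weight` and its `alpha` parameter are hypothetical names for illustration only, not the paper's actual gradient modification.

```python
def gini(values):
    """Gini coefficient of non-negative per-query effectiveness scores.

    A higher coefficient means effectiveness is concentrated in fewer
    queries, i.e., a stronger Matthew effect across the query set.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard form: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    # with ranks i = 1..n over the sorted scores.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1) / n


def matthew_weight(ndcg_q, alpha=1.0):
    """Hypothetical per-query weight for scaling lambda-gradients.

    Queries that are already effective (high ndcg_q) receive a larger
    multiplier, so optimization concentrates on rich queries.
    """
    return (1.0 + ndcg_q) ** alpha
```

For example, a perfectly uniform distribution of NDCG scores gives a Gini coefficient of 0, while concentrating all effectiveness in one query out of four gives 0.75, matching the intuition that a stronger Matthew effect yields a larger coefficient.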

Keywords

Learning to rank · Ranking model · Matthew effect · LambdaMART algorithm · Gradient


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Computer Science and Technology, College of Electronic and Information Engineering, Jinggangshan University, Ji’an, China
  2. Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, China
  3. Department of Computer Science and Technology, College of Electronic and Information Engineering, Tongji University, Shanghai, China
  4. Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai, China
