A Listwise Approach for Learning to Rank Based on Query Normalization Network

  • Chongchong Zhu
  • Fusheng Jin
  • Yan Li
  • Tu Peng
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 849)


Learning to rank is one of the hotspots at the intersection of information retrieval and machine learning. In the traditional neural-network-based listwise approach to learning to rank, the model predicts the score of each document independently, so it cannot reflect the relationships among the documents associated with the same query. To solve this problem, this paper proposes a new ranking neural network model called Query Normalization Network (QNN). In QNN, a normalization step is added to the original neural network model to normalize the scores within each query's sample collection; through this operation, the prediction scores of the documents returned for the same query become associated with each other. This paper then proposes a listwise approach called Optimizing Normalized Discounted Cumulative Gain (NDCG) Query Normalization Network (OptNDCGQNN), which is based on QNN and directly optimizes the evaluation measure NDCG. OptNDCGQNN uses QNN as the model and Stochastic Gradient Descent (SGD) as the optimization algorithm to optimize an upper bound of the original loss function, which is defined directly from the evaluation measure NDCG. Experimental results show that OptNDCGQNN achieves better ranking performance than other traditional ranking algorithms, and that when the amount of training data is large enough, OptNDCGQNN can further improve ranking performance by training a deep neural network.
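The two ingredients named in the abstract can be illustrated with a minimal sketch. The paper's exact formulation of QNN and of the NDCG upper bound is not given here, so the functions below are illustrative assumptions: a zero-mean, unit-variance normalization applied per query (so all documents of one query enter the computation jointly), and the standard NDCG@k definition used as the evaluation measure.

```python
import math
import numpy as np

def query_normalize(scores):
    """Normalize the score vector of one query's document list to zero mean
    and unit variance. Because mean and std are computed over the whole list,
    every normalized score depends on all documents of the query -- a
    simplified stand-in for the query-level normalization step sketched above."""
    scores = np.asarray(scores, dtype=float)
    std = scores.std()
    if std == 0:  # all scores equal: only center them
        return scores - scores.mean()
    return (scores - scores.mean()) / std

def ndcg_at_k(relevances, k=10):
    """Standard NDCG@k: DCG of the predicted ordering divided by the DCG of
    the ideal (descending-relevance) ordering, using the 2^rel - 1 gain."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log2(i + 2)
                   for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

Note that within a single query this normalization is monotonic, so it preserves the ranking order at prediction time; its effect is on training, where the shared mean and variance couple the gradients of all documents of the query.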


Learning to rank · Neural network · Query Normalization Network · Directly optimizing evaluation measure



Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Beijing Institute of Technology, Beijing, China