Abstract
Algorithms for learning to rank can be inefficient when they employ risk functions that use structural information. We describe and analyze a learning algorithm that efficiently learns a ranking function using a domination loss. This loss is designed for problems in which we need to rank a small number of positive examples over a vast number of negative examples. In that context, we propose an efficient coordinate descent approach that scales linearly with the number of examples. We then present an extension that incorporates regularization, thus extending Vapnik's notion of regularized empirical risk minimization to the ranking setting. We also discuss an extension to the case of multi-value feedback. Experiments performed on several benchmark datasets and on large-scale Google internal datasets demonstrate the effectiveness of the learning algorithm in constructing compact models while retaining empirical accuracy.
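To make the abstract's setting concrete, the following is a minimal sketch (not the chapter's actual algorithm) of cyclic coordinate descent on a soft-max surrogate for a domination-style loss, where every positive example should score above all negatives. The data, loss form, and step size here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a few positives to be ranked above many negatives.
X_pos = rng.normal(loc=1.0, size=(5, 10))    # positive examples
X_neg = rng.normal(loc=0.0, size=(200, 10))  # negative examples

def domination_loss(w):
    """Soft-max surrogate: sum_i log(1 + sum_j exp(s_j^- - s_i^+)),
    which penalizes any negative scoring close to or above a positive."""
    s_pos = X_pos @ w                          # shape (P,)
    s_neg = X_neg @ w                          # shape (N,)
    diffs = s_neg[None, :] - s_pos[:, None]    # shape (P, N)
    return np.sum(np.log1p(np.exp(diffs).sum(axis=1)))

def coordinate_descent(w, steps=200, lr=0.05):
    """Cycle through coordinates, taking a gradient step on one weight
    at a time; each step costs O(P * N), linear in the examples."""
    d = w.size
    for t in range(steps):
        k = t % d
        s_pos = X_pos @ w
        s_neg = X_neg @ w
        E = np.exp(s_neg[None, :] - s_pos[:, None])            # (P, N)
        Z = 1.0 + E.sum(axis=1)                                 # (P,)
        # Analytic partial derivative of the loss w.r.t. w[k].
        grad_k = np.sum(
            (E * (X_neg[:, k][None, :] - X_pos[:, k][:, None])).sum(axis=1) / Z
        )
        w[k] -= lr * grad_k
    return w

w0 = np.zeros(10)
loss_before = domination_loss(w0)
w = coordinate_descent(w0.copy())
loss_after = domination_loss(w)
```

After a few passes the loss drops and positives score above negatives on average; the chapter's contribution is an efficient, regularized version of this kind of update that also yields sparse (compact) models.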
References
Cao, Z., Qin, T., Liu, T.Y., Tsai, M.F., Li, H.: Learning to rank: from pairwise approach to listwise approach. In: ICML ’07: Proceedings of the 24th International Conference on Machine Learning, Corvalis, pp. 129–136 (2007)
Cohen, W.W., Schapire, R.E., Singer, Y.: Learning to order things. J. Artif. Intell. Res. 10, 243–270 (1999)
Dekel, O., Manning, C., Singer, Y.: Log-Linear Models for Label Ranking. Advances in Neural Information Processing Systems, vol. 14, Vancouver. MIT Press, Cambridge (2004)
Freund, Y., Iyer, R., Schapire, R.E., Singer, Y.: An efficient boosting algorithm for combining preferences. J. Mach. Learn. Res. 4, 933–969 (2003)
Grangier, D., Bengio, S.: A discriminative kernel-based model to rank images from text queries. IEEE Trans. Pattern Anal. Mach. Intell. 30(8), 1371–1384 (2008)
Joachims, T.: Optimizing search engines using clickthrough data. In: Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), Edmonton (2002)
Joachims, T.: A support vector method for multivariate performance measures. In: Proceedings of the International Conference on Machine Learning (ICML), Bonn (2005)
Luo, Z., Tseng, P.: On the convergence of the coordinate descent method for convex differentiable minimization. J. Optim. Theory Appl. 72(1), 7–35 (1992)
Roweis, S.T., Salakhutdinov, R.: Adaptive overrelaxed bound optimization methods. In: Proceedings of the International Conference on Machine Learning (ICML), Washington, DC, pp. 664–671 (2003)
Salton, G.: Automatic Text Processing: The Transformation, Analysis and Retrieval of Information by Computer. Addison-Wesley, Boston (1989)
Singhal, A., Buckley, C., Mitra, M.: Pivoted document length normalization. In: Research and Development in Information Retrieval, Zurich, pp. 21–29 (1996)
Tseng, P., Yun, S.: A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. B 117, 387–423 (2007)
Vapnik, V.N.: Estimation of Dependences Based on Empirical Data. Springer, New York (1982)
Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, New York (1995)
Vapnik, V.N.: Statistical Learning Theory. Wiley, New York (1998)
Xu, J., Li, H.: Adarank: a boosting algorithm for information retrieval. In: SIGIR ’07: Proceedings of the 30th Annual international ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, pp. 391–398 (2007)
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Stevens, M., Bengio, S., Singer, Y. (2013). Efficient Learning of Sparse Ranking Functions. In: Schölkopf, B., Luo, Z., Vovk, V. (eds) Empirical Inference. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41136-6_22
Print ISBN: 978-3-642-41135-9
Online ISBN: 978-3-642-41136-6