Abstract
We study the problem of optimizing an individual base ranker using clicks. Surprisingly, while considerable attention has been paid to using clicks to optimize linear combinations of base rankers, the problem of optimizing an individual base ranker using clicks has been ignored. The problem differs from optimizing linear combinations of base rankers in that the scoring function of a base ranker may be highly non-linear. For the sake of concreteness, we focus on optimizing a specific base ranker, viz. BM25. We first show that significant improvements in performance can be obtained by optimizing the parameters of BM25 for individual datasets. We then show that these parameters can be optimized from clicks, i.e., without the use of manually annotated data, matching or even beating manually tuned parameters.
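To make the non-linearity the abstract refers to concrete, here is a minimal sketch of the standard BM25 scoring function with its two free parameters k1 and b, the parameters one would tune per dataset. This is an illustrative implementation of the textbook formula, not code from the paper, and the default values k1=1.2 and b=0.75 are the common textbook defaults.

```python
import math

def bm25_score(query_terms, doc_tf, doc_len, avg_doc_len, df, num_docs,
               k1=1.2, b=0.75):
    """BM25 score of one document for a query.

    k1 controls term-frequency saturation and b controls document-length
    normalization; both enter the score non-linearly, which is why tuning
    them differs from learning a linear combination of rankers.
    Defaults are common textbook values, not values from the paper.
    """
    score = 0.0
    for term in query_terms:
        tf = doc_tf.get(term, 0)
        if tf == 0 or term not in df:
            continue
        # Smoothed inverse document frequency of the term.
        idf = math.log((num_docs - df[term] + 0.5) / (df[term] + 0.5) + 1.0)
        # Saturated, length-normalized term frequency.
        norm_tf = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * norm_tf
    return score
```

Note that the score saturates in tf (raising k1 delays saturation) and that b interpolates between no length normalization (b=0) and full normalization (b=1).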
© 2014 Springer International Publishing Switzerland
Cite this paper
Schuth, A., Sietsma, F., Whiteson, S., de Rijke, M. (2014). Optimizing Base Rankers Using Clicks. In: de Rijke, M., et al. Advances in Information Retrieval. ECIR 2014. Lecture Notes in Computer Science, vol 8416. Springer, Cham. https://doi.org/10.1007/978-3-319-06028-6_7
Print ISBN: 978-3-319-06027-9
Online ISBN: 978-3-319-06028-6