Evaluating Search Engine Relevance with Click-Based Metrics

Abstract

Automatically judging the quality of retrieval functions based on observable user behavior holds promise for making retrieval evaluation faster, cheaper, and more user-centered. However, the relationship between observable user behavior and retrieval quality is not yet fully understood. In this chapter, we expand upon Radlinski et al. (How does clickthrough data reflect retrieval quality? In Proceedings of the ACM Conference on Information and Knowledge Management (CIKM), pp. 43–52, 2008), presenting a sequence of studies investigating this relationship for an operational search engine on the arXiv.org e-print archive. We find that none of the eight absolute usage metrics we explore (including the number of clicks observed, the frequency with which users reformulate their queries, and how often result sets are abandoned) reliably reflects retrieval quality for the sample sizes we consider. In contrast, paired experiment designs adapted from sensory analysis produce accurate and reliable statements about the relative quality of two retrieval functions. In particular, we investigate two paired comparison tests that analyze clickthrough data from an interleaved presentation of ranking pairs, and find that both give accurate and consistent results. We conclude that, in our domain, both paired comparison tests give substantially more accurate and sensitive evaluation results than the absolute usage metrics.
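
To make the paired comparison idea concrete, the sketch below shows one way an interleaved result list can be constructed from two rankings and how clicks can be credited to the ranking that contributed each clicked result, in the spirit of a team-draft-style scheme. It is a minimal illustration only: the function names, the tie-breaking details, and the simple click-counting rule are assumptions made for this sketch, not the exact tests evaluated in the chapter.

    import random

    def team_draft_interleave(ranking_a, ranking_b, rng=random):
        """Build one interleaved result list from two rankings (illustrative sketch).

        At each step the team with fewer contributions so far picks next (a coin
        flip breaks ties); the picking team contributes its highest-ranked result
        not yet shown and is credited with that result.
        """
        shown, interleaved = set(), []
        team_a, team_b = set(), set()
        while True:
            remaining_a = [d for d in ranking_a if d not in shown]
            remaining_b = [d for d in ranking_b if d not in shown]
            if not remaining_a and not remaining_b:
                break  # both rankings are exhausted
            pick_a = remaining_a and (
                not remaining_b
                or len(team_a) < len(team_b)
                or (len(team_a) == len(team_b) and rng.random() < 0.5)
            )
            doc, team = (remaining_a[0], team_a) if pick_a else (remaining_b[0], team_b)
            shown.add(doc)
            interleaved.append(doc)
            team.add(doc)
        return interleaved, team_a, team_b

    def infer_preference(clicked_docs, team_a, team_b):
        """Credit each click to the team that contributed the clicked result and
        return which ranking this query's clicks implicitly favor."""
        clicks_a = sum(1 for d in clicked_docs if d in team_a)
        clicks_b = sum(1 for d in clicked_docs if d in team_b)
        return "A" if clicks_a > clicks_b else "B" if clicks_b > clicks_a else "tie"

Aggregating such per-query preferences over many queries then yields a relative statement about the two retrieval functions, which is the kind of paired comparison the chapter evaluates.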


Notes

  1. Based on Filip Radlinski, Madhu Kurup, and Thorsten Joachims, How does clickthrough data reflect retrieval quality? In Proceedings of the ACM Conference on Information and Knowledge Management (CIKM), pages 43–52, 2008.

  2. Made available at http://search.arxiv.org/

  3. Available at http://lucene.apache.org/

  4. In other words, we report macro-averages rather than micro-averages (the sketch after these notes illustrates the distinction).

  5. While the two “original ranking function” curves represent the same ranking function, they were collected in two different months, which explains the variation between them.

  6. In other words, using micro-averaging.
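
Since Notes 4 and 6 turn on the distinction between macro- and micro-averaging of the usage metrics, the short sketch below illustrates the difference on made-up click counts; the grouping key and the numbers are purely illustrative and are not taken from the chapter.

    def macro_average(values_by_group):
        """Average within each group first, then average the per-group means,
        so every group counts equally regardless of how many observations it has."""
        group_means = [sum(vals) / len(vals) for vals in values_by_group.values()]
        return sum(group_means) / len(group_means)

    def micro_average(values_by_group):
        """Pool all observations and average once, so groups with more
        observations carry proportionally more weight."""
        pooled = [v for vals in values_by_group.values() for v in vals]
        return sum(pooled) / len(pooled)

    # Made-up click counts per result set, grouped by an arbitrary key.
    clicks = {"group_1": [2, 0, 1], "group_2": [5]}
    print(macro_average(clicks))  # (1.0 + 5.0) / 2 = 3.0
    print(micro_average(clicks))  # (2 + 0 + 1 + 5) / 4 = 2.0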

References

  1. E. Agichtein, E. Brill, S. Dumais, R. Ragno, Learning user interaction models for predicting web search result preferences, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2006), pp. 3–10

  2. R. Agrawal, A. Halverson, K. Kenthapadi, N. Mishra, P. Tsaparas, Generating labels from clicks, in Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM) (2009), pp. 172–181

  3. K. Ali, C. Chang, On the relationship between click-rate and relevance for search engines, in Proceedings of Data Mining and Information Engineering (2006)

  4. J.A. Aslam, V. Pavlu, E. Yilmaz, A sampling technique for efficiently estimating measures of query retrieval performance using incomplete judgments, in ICML Workshop on Learning with Partially Classified Training Data (2005)

  5. J. Boyan, D. Freitag, T. Joachims, A machine learning architecture for optimizing web search engines, in AAAI Workshop on Internet Based Information Systems (1996)

  6. C. Buckley, E.M. Voorhees, Retrieval evaluation with incomplete information, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2004), pp. 25–32

  7. B. Carterette, J. Allan, R. Sitaraman, Minimal test collections for retrieval evaluation, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2006), pp. 268–275

  8. B. Carterette, P.N. Bennett, D.M. Chickering, S.T. Dumais, Here or there: Preference judgments for relevance, in Proceedings of the European Conference on Information Retrieval (ECIR) (2008), pp. 16–27

  9. B. Carterette, R. Jones, Evaluating search engines by modeling the relationship between relevance and clicks, in Proceedings of the International Conference on Advances in Neural Information Processing Systems (NIPS) (2007), pp. 217–224

  10. K. Crammer, Y. Singer, Pranking with ranking, in Proceedings of the International Conference on Advances in Neural Information Processing Systems (NIPS) (2001), pp. 641–647

  11. G. Dupret, V. Murdock, B. Piwowarski, Web search engine evaluation using clickthrough data and a user model, in WWW Workshop on Query Log Analysis (2007)

  12. S. Fox, K. Karnawat, M. Mydland, S. Dumais, T. White, Evaluating implicit measures to improve web search. ACM Trans. Inf. Syst. (TOIS) 23(2), 147–168 (2005)

  13. S.B. Huffman, M. Hochster, How well does result relevance predict session satisfaction? in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2007), pp. 567–573

  14. T. Joachims, Evaluating retrieval performance using clickthrough data, in Text Mining, ed. by J. Franke, G. Nakhaeizadeh, I. Renz (Physica Verlag, 2003)

  15. T. Joachims, Optimizing search engines using clickthrough data, in Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (KDD) (2002), pp. 132–142

  16. T. Joachims, L. Granka, B. Pan, H. Hembrooke, F. Radlinski, G. Gay, Evaluating the accuracy of implicit feedback from clicks and query reformulations in web search. ACM Trans. Inf. Syst. (TOIS) 25(2), Article 7 (2007)

  17. D. Kelly, J. Teevan, Implicit feedback for inferring user preference: A bibliography. ACM SIGIR Forum 37(2), 18–28 (2003)

  18. J. Kozielecki, Psychological Decision Theory (Kluwer, 1981)

  19. D. Laming, Sensory Analysis (Academic Press, 1986)

  20. Y. Liu, Y. Fu, M. Zhang, S. Ma, L. Ru, Automatic search engine performance evaluation with click-through data analysis, in Proceedings of the International World Wide Web Conference (WWW) (2007)

  21. C.D. Manning, P. Raghavan, H. Schuetze, Introduction to Information Retrieval (Cambridge University Press, 2008)

  22. F. Radlinski, T. Joachims, Query chains: Learning to rank from implicit feedback, in Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (KDD) (2005)

  23. F. Radlinski, M. Kurup, T. Joachims, How does clickthrough data reflect retrieval quality? in Proceedings of the ACM Conference on Information and Knowledge Management (CIKM) (2008), pp. 43–52

  24. S. Rajaram, A. Garg, Z.S. Zhou, T.S. Huang, Classification approach towards ranking and sorting problems, in Lecture Notes in Artificial Intelligence (2003), pp. 301–312

  25. J. Reid, A task-oriented non-interactive evaluation methodology for information retrieval systems. Inf. Retr. 2, 115–129 (2000)

  26. I. Soboroff, C. Nicholas, P. Cahan, Ranking retrieval systems without relevance judgments, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2001), pp. 66–73

  27. A. Spink, D. Wolfram, M. Bernard, J. Jansen, T. Saracevic, Searching the web: The public and their queries. J. Am. Soc. Inf. Sci. Technol. 52(3), 226–234 (2001)

  28. A. Turpin, F. Scholer, User performance versus precision measures for simple search tasks, in Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR) (2006), pp. 11–18

  29. E.M. Voorhees, D.K. Harman (eds.), TREC: Experiment and Evaluation in Information Retrieval (MIT Press, 2005)

  30. Y. Yue, T. Joachims, Interactively optimizing information systems as a dueling bandits problem, in NIPS 2008 Workshop on Beyond Search: Computational Intelligence for the Web (2008)

  31. Y. Yue, T. Joachims, Interactively optimizing information retrieval systems as a dueling bandits problem, in Proceedings of the International Conference on Machine Learning (ICML) (2009)


Acknowledgements

Many thanks to Paul Ginsparg and Simeon Warner for their insightful discussions and their support of the arXiv.org search. The first author was supported by a Microsoft Ph.D. Student Fellowship. This work was also supported by NSF CAREER Award No. 0237381, NSF Award IIS-0812091, and a gift from Google.

Author information

Corresponding author

Correspondence to Filip Radlinski.

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Radlinski, F., Kurup, M., Joachims, T. (2010). Evaluating Search Engine Relevance with Click-Based Metrics. In: Fürnkranz, J., Hüllermeier, E. (eds) Preference Learning. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14125-6_16

  • DOI: https://doi.org/10.1007/978-3-642-14125-6_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-14124-9

  • Online ISBN: 978-3-642-14125-6

  • eBook Packages: Computer Science, Computer Science (R0)
