Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu

Advanced Information Retrieval Measures

  • Tetsuya Sakai
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_80705

Definition

Advanced information retrieval measures are effectiveness measures for various types of information access tasks that go beyond traditional document retrieval. Traditional document retrieval measures are suited to set retrieval (measured by precision, recall, the F-measure, etc.) or ad hoc ranked retrieval, the task of ranking documents by relevance (measured by average precision, etc.). In contrast, advanced information retrieval measures may work for diversified search (the task of retrieving relevant and diverse documents), aggregated search (the task of retrieving from multiple sources/media and merging the results), one-click access (the task of returning a textual multidocument summary instead of a list of URLs in response to a query), and multiquery sessions (information-seeking activities that involve query reformulations), among other tasks. Some advanced measures are based on user models that arguably better reflect real user behaviors than standard measures do.
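As a minimal illustration of such user-model-based measures, the sketch below computes two measures cited in the Recommended Reading: expected reciprocal rank (ERR; Chapelle et al.) and rank-biased precision (RBP; Moffat and Zobel). The input representation (a ranked list of relevance grades) and the parameter values are assumptions for illustration, not part of this entry.

```python
# Sketch of two user-model-based effectiveness measures.
# Assumption: a system's ranked list is given as relevance grades
# g_r (small non-negative integers, 0 = nonrelevant) for illustration.

def err(grades, g_max=3):
    """Expected reciprocal rank (Chapelle et al., CIKM 2009).

    User model: the user scans down the list; at rank r they are
    satisfied with probability R_r = (2**g_r - 1) / 2**g_max and stop,
    otherwise they continue. ERR is the expected value of 1/r at the
    stopping rank.
    """
    p_continue = 1.0  # probability the user reaches the current rank
    score = 0.0
    for r, g in enumerate(grades, start=1):
        r_prob = (2 ** g - 1) / 2 ** g_max  # satisfaction probability at rank r
        score += p_continue * r_prob / r
        p_continue *= 1.0 - r_prob
    return score

def rbp(rels, p=0.8):
    """Rank-biased precision (Moffat and Zobel, TOIS 2008).

    User model: the user inspects rank r with probability p**(r-1).
    rels are binary relevance values (or graded gains scaled to [0, 1]).
    """
    return (1 - p) * sum(rel * p ** (r - 1)
                         for r, rel in enumerate(rels, start=1))

ranking = [3, 0, 1, 2]  # graded relevance of the top four documents
print(err(ranking, g_max=3))
print(rbp([1, 0, 1, 1], p=0.8))
```

Both measures discount documents at lower ranks, but via different user models: ERR's stopping probability depends on the relevance of the documents already seen (a "cascade" model), whereas RBP's persistence parameter p is independent of relevance.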

Historic...


Recommended Reading

  1. Allan J, Croft B, Moffat A, Sanderson M, editors. Frontiers, challenges and opportunities for information retrieval: report from SWIRL 2012. SIGIR Forum. 2012;46(1):2–32.
  2. Chapelle O, Metzler D, Zhang Y, Grinspan P. Expected reciprocal rank for graded relevance. In: Proceedings of the 18th ACM International Conference on Information and Knowledge Management; 2009. p. 621–30.
  3. Chapelle O, Ji S, Liao C, Velipasaoglu E, Lai L, Wu SL. Intent-based diversification of web search results: metrics and algorithms. Inf Retr. 2011;14(6):572–92.
  4. Clarke CLA, Craswell N, Soboroff I, Ashkan A. A comparative analysis of cascade measures for novelty and diversity. In: Proceedings of the 4th ACM International Conference on Web Search and Data Mining; 2011. p. 75–84.
  5. Järvelin K, Kekäläinen J. Cumulated gain-based evaluation of IR techniques. ACM Trans Inf Syst. 2002;20(4):422–46.
  6. Kanoulas E, Carterette B, Clough PD, Sanderson M. Evaluating multi-query sessions. In: Proceedings of the 34th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2011. p. 1026–53.
  7. Moffat A, Zobel J. Rank-biased precision for measurement of retrieval effectiveness. ACM Trans Inf Syst. 2008;27(1):2:1–2:27.
  8. Pollock SM. Measures for the comparison of information retrieval systems. Am Doc. 1968;19(4):387–97.
  9. Robertson SE, Kanoulas E, Yilmaz E. Extending average precision to graded relevance judgments. In: Proceedings of the 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2010. p. 603–10.
  10. Sakai T. Statistical reform in information retrieval? SIGIR Forum. 2014;48(1):3–12.
  11. Sakai T. Inf Retr J. 2016;19(3):256. https://doi.org/10.1007/s10791-015-9273-z
  12. Sakai T, Dou Z. Summaries, ranked retrieval and sessions: a unified framework for information access evaluation. In: Proceedings of the 36th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2013. p. 473–82.
  13. Sakai T, Song R. Evaluating diversified search results using per-intent graded relevance. In: Proceedings of the 34th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2011. p. 1043–52.
  14. Sakai T, Kato MP, Song YI. Click the search button and be happy: evaluating direct and immediate information access. In: Proceedings of the 20th ACM International Conference on Information and Knowledge Management; 2011. p. 621–30.
  15. Sakai T. Metrics, statistics, tests. In: PROMISE winter school 2013: bridging between information retrieval and databases, Bressanone. LNCS, vol 8173. 2014.
  16. Smucker MD, Clarke CLA. Time-based calibration of effectiveness measures. In: Proceedings of the 35th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2012. p. 95–104.
  17. Zhai C, Cohen WW, Lafferty J. Beyond independent relevance: methods and evaluation metrics for subtopic retrieval. In: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2003. p. 10–7.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Waseda University, Tokyo, Japan