
Advanced Information Retrieval Measures

Definition

Advanced information retrieval measures are effectiveness measures for information access tasks that go beyond traditional document retrieval. Traditional document retrieval measures target either set retrieval (evaluated with precision, recall, F-measure, etc.) or ad hoc ranked retrieval, the task of ranking documents by relevance (evaluated with average precision, etc.). Advanced measures, in contrast, cover tasks such as diversified search (retrieving documents that are both relevant and diverse), aggregated search (retrieving from multiple sources or media and merging the results), one-click access (returning a textual multi-document summary, rather than a list of URLs, in response to a query), and multiquery sessions (information-seeking activities that involve query reformulations). Some advanced measures are built on user models that arguably reflect real user behavior better than standard measures do.
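To make the contrast concrete, here is a minimal Python sketch of one measure from each family: average precision for ad hoc ranked retrieval, and expected reciprocal rank (ERR) of Chapelle et al. (see Recommended Reading), an advanced measure whose cascade user model assumes the user scans results from the top and stops at each document with a probability determined by its relevance grade. The five-document ranking and the maximum grade g_max = 3 are illustrative assumptions, not values from this entry.

    from typing import Optional, Sequence

    def average_precision(rels: Sequence[int], n_rel: Optional[int] = None) -> float:
        """Average precision over a ranked list of binary labels (1 = relevant).

        n_rel is the total number of relevant documents for the topic;
        it defaults to the number of relevant items in the ranking.
        """
        if n_rel is None:
            n_rel = sum(1 for r in rels if r)
        if n_rel == 0:
            return 0.0
        hits, total = 0, 0.0
        for rank, rel in enumerate(rels, start=1):
            if rel:
                hits += 1
                total += hits / rank  # precision at this relevant rank
        return total / n_rel

    def err(gains: Sequence[int], g_max: int) -> float:
        """Expected reciprocal rank for graded labels in 0..g_max.

        Cascade model: the user stops at rank r with probability
        R_r = (2**g_r - 1) / 2**g_max, given that no earlier document
        stopped them; the score is the expected reciprocal stopping rank.
        """
        p_continue, score = 1.0, 0.0
        for rank, g in enumerate(gains, start=1):
            p_stop = (2 ** g - 1) / (2 ** g_max)
            score += p_continue * p_stop / rank
            p_continue *= 1.0 - p_stop
        return score

    # Illustrative graded labels for five ranked documents (3 = highly relevant).
    gains = [3, 0, 1, 2, 0]
    print(average_precision([1 if g > 0 else 0 for g in gains]))  # ~0.8056
    print(err(gains, g_max=3))                                    # ~0.8905

Note how the user model changes the verdict: the highly relevant first result leaves only a 1/8 chance that the modelled user reads on, so the relevant documents at ranks 3 and 4 add little to ERR, whereas binary average precision credits them fully.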

Historical Background


Recommended Reading

  1. Allan J, Croft B, Moffat A, Sanderson M, editors. Frontiers, challenges and opportunities for information retrieval: report from SWIRL 2012. SIGIR Forum. 2012;46(1):2–32.

  2. Chapelle O, Metzler D, Zhang Y, Grinspan P. Expected reciprocal rank for graded relevance. In: Proceedings of the 18th ACM International Conference on Information and Knowledge Management; 2009. p. 621–30.

  3. Chapelle O, Ji S, Liao C, Velipasaoglu E, Lai L, Wu SL. Intent-based diversification of web search results: metrics and algorithms. Inf Retr. 2011;14(6):572–92.

  4. Clarke CLA, Craswell N, Soboroff I, Ashkan A. A comparative analysis of cascade measures for novelty and diversity. In: Proceedings of the 4th ACM International Conference on Web Search and Data Mining; 2011. p. 75–84.

  5. Järvelin K, Kekäläinen J. Cumulated gain-based evaluation of IR techniques. ACM Trans Inf Syst. 2002;20(4):422–46.

  6. Kanoulas E, Carterette B, Clough PD, Sanderson M. Evaluating multi-query sessions. In: Proceedings of the 34th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2011. p. 1053–62.

  7. Moffat A, Zobel J. Rank-biased precision for measurement of retrieval effectiveness. ACM Trans Inf Syst. 2008;27(1):2:1–2:27.

  8. Pollock SM. Measures for the comparison of information retrieval systems. Am Doc. 1968;19(4):387–97.

  9. Robertson SE, Kanoulas E, Yilmaz E. Extending average precision to graded relevance judgments. In: Proceedings of the 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2010. p. 603–10.

  10. Sakai T. Statistical reform in information retrieval? SIGIR Forum. 2014;48(1):3–12.

  11. Sakai T. Topic set size design. Inf Retr J. 2016;19(3):256–83. https://doi.org/10.1007/s10791-015-9273-z

  12. Sakai T, Dou Z. Summaries, ranked retrieval and sessions: a unified framework for information access evaluation. In: Proceedings of the 36th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2013. p. 473–82.

  13. Sakai T, Song R. Evaluating diversified search results using per-intent graded relevance. In: Proceedings of the 34th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2011. p. 1043–52.

  14. Sakai T, Kato MP, Song YI. Click the search button and be happy: evaluating direct and immediate information access. In: Proceedings of the 20th ACM International Conference on Information and Knowledge Management; 2011. p. 621–30.

  15. Sakai T. Metrics, statistics, tests. In: PROMISE Winter School 2013: bridging between information retrieval and databases, Bressanone. LNCS, vol. 8173; 2014.

  16. Smucker MD, Clarke CLA. Time-based calibration of effectiveness measures. In: Proceedings of the 35th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2012. p. 95–104.

  17. Zhai C, Cohen WW, Lafferty J. Beyond independent relevance: methods and evaluation metrics for subtopic retrieval. In: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2003. p. 10–7.


Author information

Correspondence to Tetsuya Sakai.


Copyright information

© 2018 Springer Science+Business Media, LLC, part of Springer Nature

Cite this entry

Sakai, T. (2018). Advanced Information Retrieval Measures. In: Liu, L., Özsu, M.T. (eds) Encyclopedia of Database Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8265-9_80705
