Effectiveness Involving Multiple Queries

Reference work entry in: Encyclopedia of Database Systems

Synonyms

Relevance evaluation of IR systems

Definition

In information retrieval (IR), effectiveness is defined as the relevance of retrieved information to a given query. System effectiveness evaluation typically focuses on the problem of document retrieval: retrieving a ranked list of documents for each input query. Effectiveness is then measured with respect to an environment of interest consisting of the populations of documents, queries, and relevance judgments defining which of these documents are relevant to which queries. Sampling methodologies are employed for each of these populations to estimate a relevance metric, typically a function of precision (the ratio of relevant documents retrieved to the total number of documents retrieved) and recall (the ratio of relevant documents retrieved to the total number of relevant documents for that query). Conclusions about which systems outperform others are drawn from common experimental design, typically focusing on a random sample of...
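The precision and recall definitions above, and the averaging of a per-query metric over a query sample, can be sketched as follows. This is a minimal illustration, not any standard tool's API: the document IDs, relevance judgments, and the `precision_recall` helper are hypothetical, chosen only to make the ratios concrete.

```python
# A minimal sketch of set-based precision and recall for a single query.
# All document IDs and relevance judgments here are hypothetical examples,
# not drawn from any real test collection.

def precision_recall(retrieved, relevant):
    """Return (precision, recall) for a list of retrieved doc IDs
    against a set of judged-relevant doc IDs."""
    hits = sum(1 for doc in retrieved if doc in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# One query: the system returned four documents, two of which are relevant.
retrieved = ["d3", "d7", "d1", "d9"]   # ranked list from the system
relevant = {"d1", "d2", "d3"}          # judgments for this query
p, r = precision_recall(retrieved, relevant)
# precision = 2/4 = 0.5; recall = 2/3

# Effectiveness over a sample of queries is then summarized by averaging
# a per-query metric across the sample, e.g. mean precision here:
p2, _ = precision_recall(["d2", "d8"], {"d2"})   # second query: 1/2 = 0.5
mean_precision = (p + p2) / 2
```

Rank-sensitive metrics such as average precision follow the same pattern: compute a score per query, then average over the query sample to estimate effectiveness for the environment.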


Recommended Reading

  1. Buckley C, Voorhees EM. Evaluating evaluation measure stability. In: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2000. p. 33–40.


  2. Clarke CLA, Craswell N, Soboroff I. Overview of the TREC 2004 Terabyte track. In: Proceedings of the 13th Text Retrieval Conference; 2005.


  3. Cleverdon CW, Mills J, Keen EM. Factors determining the performance of indexing systems. Cranfield: Aslib Cranfield Research Project, College of Aeronautics; 1966. vol. 1: Design, vol. 2: Results.


  4. Hawking D, Craswell N. Overview of the TREC 2001 web track. In: Proceedings of the 10th Text Retrieval Conference; 2001.


  5. Jensen EC, Beitzel SM, Chowdhury A, Frieder O. On repeatable evaluation of search services in dynamic environments. ACM Trans Inf Syst. 2007;26(1):1.


  6. Sanderson M, Zobel J. Information retrieval system evaluation: effort, sensitivity, and reliability. In: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2005. p. 162–9.



Author information

Correspondence to Eric C. Jensen.


Copyright information

© 2018 Springer Science+Business Media, LLC, part of Springer Nature

About this entry


Cite this entry

Jensen, E.C., Beitzel, S.M., Frieder, O. (2018). Effectiveness Involving Multiple Queries. In: Liu, L., Özsu, M.T. (eds) Encyclopedia of Database Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8265-9_477
