Encyclopedia of Database Systems

2018 Edition
Editors: Ling Liu, M. Tamer Özsu

Effectiveness Involving Multiple Queries

  • Eric C. Jensen
  • Steven M. Beitzel
  • Ophir Frieder
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_477

Synonyms

Relevance evaluation of IR systems

Definition

In information retrieval (IR), effectiveness is defined as the relevance of retrieved information to a given query. System effectiveness evaluation typically focuses on the problem of document retrieval: retrieving a ranked list of documents for each input query. Effectiveness is then measured with respect to an environment of interest consisting of the populations of documents, queries, and relevance judgments that define which documents are relevant to which queries. Sampling methodologies are employed for each of these populations to estimate a relevance metric, typically a function of precision (the ratio of relevant documents retrieved to the total number of documents retrieved) and recall (the ratio of relevant documents retrieved to the total number of relevant documents for that query). Conclusions about which systems outperform others are drawn from a common experimental design, typically focusing on a random sample of...
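
The precision and recall definitions above can be made concrete with a small worked example. The Python sketch below is illustrative only and is not part of the original entry; the document IDs and relevance judgments are invented to show the arithmetic for a single query.

    # Illustrative sketch (not from the entry): precision and recall for one query.
    def precision_recall(retrieved, relevant):
        """retrieved: ranked list of document IDs returned for the query.
        relevant: set of document IDs judged relevant for that query."""
        hits = [d for d in retrieved if d in relevant]
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        recall = len(hits) / len(relevant) if relevant else 0.0
        return precision, recall

    # Hypothetical data: 4 documents retrieved, 2 of the 3 judged-relevant
    # documents appear among them.
    ranked = ["d7", "d2", "d9", "d4"]
    judged_relevant = {"d2", "d4", "d5"}
    p, r = precision_recall(ranked, judged_relevant)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67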


Recommended Reading

  1. Buckley C, Voorhees EM. Evaluating evaluation measure stability. In: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2000. p. 33–40.
  2. Clarke CLA, Craswell N, Soboroff I. Overview of the TREC 2004 Terabyte track. In: Proceedings of the 13th Text Retrieval Conference; 2005.
  3. Cleverdon CW, Mills J, Keen EM. Factors determining the performance of indexing systems. Cranfield: Aslib Cranfield Research Project, College of Aeronautics; 1966. vol. 1: Design; vol. 2: Results.
  4. Hawking D, Craswell N. Overview of the TREC 2001 web track. In: Proceedings of the 10th Text Retrieval Conference; 2001.
  5. Jensen EC, Beitzel SM, Chowdhury A, Frieder O. On repeatable evaluation of search services in dynamic environments. ACM Trans Inf Syst. 2007;26(1):1.
  6. Sanderson M, Zobel J. Information retrieval system evaluation: effort, sensitivity, and reliability. In: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2005. p. 162–9.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Eric C. Jensen (1)
  • Steven M. Beitzel (2)
  • Ophir Frieder (3)
  1. Twitter, Inc., San Francisco, USA
  2. Telcordia Technologies, Piscataway, USA
  3. Georgetown University, Washington, USA

Section editors and affiliations

  • Weiyi Meng (1)
  1. Dept. of Computer Science, State University of New York at Binghamton, Binghamton, USA