Encyclopedia of Database Systems

2018 Edition | Editors: Ling Liu, M. Tamer Özsu

Search Engine Metrics

  • Ben Carterette
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_325

Synonyms

Evaluation measures; Performance measures

Definition

Search engine metrics measure the ability of an information retrieval system (such as a web search engine) to retrieve and rank relevant material in response to a user’s query. In contrast to database retrieval, relevance in information retrieval depends on the natural language semantics of the query and document, and search engines can and do retrieve results that are not relevant. The two fundamental metrics are recall, measuring the ability of a search engine to find the relevant material in the index, and precision, measuring its ability to place that relevant material high in the ranking. Precision and recall have been extended and adapted to many different types of evaluation and task, but remain the core of performance measurement.
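
As a concrete illustration (not part of the original entry): precision is the fraction of retrieved documents that are relevant, and recall is the fraction of all relevant documents that were retrieved. The short Python sketch below computes both for a single query under the assumption of binary relevance judgments; the ranking, the document identifiers, and the cutoff parameter k are purely hypothetical.

    # A minimal sketch of per-query precision and recall, assuming binary
    # relevance judgments. All identifiers below are hypothetical.

    def precision_recall(ranked_results, relevant_docs, k=None):
        """Return (precision, recall) over the top-k results (all results if k is None)."""
        results = ranked_results if k is None else ranked_results[:k]
        hits = sum(1 for doc in results if doc in relevant_docs)
        precision = hits / len(results) if results else 0.0
        recall = hits / len(relevant_docs) if relevant_docs else 0.0
        return precision, recall

    # Hypothetical ranking of five documents; three documents in the collection
    # are judged relevant, two of which appear in the ranking.
    ranking = ["d3", "d7", "d1", "d9", "d4"]
    relevant = {"d3", "d1", "d8"}

    print(precision_recall(ranking, relevant))       # (0.4, 0.67): 2 of 5 retrieved are relevant, 2 of 3 relevant found
    print(precision_recall(ranking, relevant, k=3))  # evaluated at rank cutoff 3 (precision-at-k)

For the example ranking, two of the five retrieved documents are relevant, giving precision 0.4, while two of the three relevant documents are retrieved, giving recall of about 0.67. Applying a rank cutoff such as k = 3 yields the widely used precision-at-k and recall-at-k variants.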

Historical Background

Performance measurement of information retrieval systems began with Cleverdon and Mills in the early 1960s with the Cranfield tests of language indexing devices [3, 4]...


Recommended Reading

  1. Aslam JA, Pavlu V, Yilmaz E. A statistical method for system evaluation using incomplete judgments. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2006. p. 541–8.
  2. Carterette B, Allan J, Sitaraman RK. Minimal test collections for retrieval evaluation. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2006. p. 268–75.
  3. Cleverdon CW. The Cranfield tests on index language devices. In: Spärck Jones K, Willett P, editors. Readings in information retrieval. Morgan Kaufmann; 1967. p. 47–59.
  4. Cleverdon CW, Mills J. The testing of index language devices. In: Spärck Jones K, Willett P, editors. Readings in information retrieval. Morgan Kaufmann; 1963. p. 98–110.
  5. Kekäläinen J, Järvelin K. Using graded relevance assessments in IR evaluation. JASIST. 2002;53(13):1120–9.
  6. Papineni K, Roukos S, Ward T, Zhu WJ. BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics; 2002. p. 311–8.
  7. van Rijsbergen CJ. Information retrieval. London: Butterworths; 1979.
  8. Salton G, Lesk ME. Computer evaluation of indexing and text processing. In: Spärck Jones K, Willett P, editors. Readings in information retrieval. Morgan Kaufmann; 1967. p. 60–84.
  9. Soboroff I. Dynamic test collections: measuring search effectiveness on the live web. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2006. p. 276–83.
  10. Spärck Jones K, van Rijsbergen CJ. Information retrieval test collections. J Doc. 1976;32(1):59–75.
  11. Voorhees EM. Variations in relevance judgments and the measurement of retrieval effectiveness. In: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 1998. p. 315–23.
  12. Voorhees EM, Harman DK, editors. TREC: experiment and evaluation in information retrieval. Cambridge, MA: MIT Press; 2005.
  13. Zobel J. How reliable are the results of large-scale information retrieval experiments? In: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 1998. p. 307–14.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. University of Massachusetts Amherst, Amherst, USA

Section editors and affiliations

  • Cong Yu
  1. Google Research, New York, USA