Search Engine Metrics

Synonyms

Evaluation measures; Performance measures

Definition

Search engine metrics measure the ability of an information retrieval system (such as a web search engine) to retrieve and rank relevant material in response to a user’s query. In contrast to database retrieval, relevance in information retrieval depends on the natural language semantics of the query and document, and search engines can and do retrieve results that are not relevant. The two fundamental metrics are recall, measuring the ability of a search engine to find the relevant material in the index, and precision, measuring its ability to place that relevant material high in the ranking. Precision and recall have been extended and adapted to many different types of evaluation and task, but remain the core of performance measurement.
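The two core metrics can be stated concretely. For a single query, let the retrieved set be the documents the engine returns and the relevant set be the documents judged relevant in the collection; precision is the fraction of retrieved documents that are relevant, and recall is the fraction of relevant documents that were retrieved. A minimal sketch (the function name and document identifiers are illustrative, not from this entry):

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall for a single query.

    precision = |retrieved AND relevant| / |retrieved|
    recall    = |retrieved AND relevant| / |relevant|
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant  # relevant documents that were actually retrieved
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# The engine returns 4 documents, 3 of them relevant, out of
# 6 relevant documents in the whole collection:
p, r = precision_recall(["d1", "d2", "d3", "d7"],
                        ["d1", "d2", "d3", "d4", "d5", "d6"])
# p = 3/4 = 0.75, r = 3/6 = 0.5
```

Note the tension the definition alludes to: returning the entire index drives recall to 1 at the cost of precision, while returning a single sure hit does the reverse, which is why ranking-sensitive extensions of these measures are needed in practice.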

Historical Background

Performance measurement of information retrieval systems began with Cleverdon and Mills in the early 1960s with the Cranfield tests of index language devices [3, 4]...


Recommended Reading

  1. Aslam JA, Pavlu V, Yilmaz E. A statistical method for system evaluation using incomplete judgments. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2006. p. 541–8.

  2. Carterette B, Allan J, Sitaraman RK. Minimal test collections for retrieval evaluation. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2006. p. 268–75.

  3. Cleverdon CW. The Cranfield tests on index language devices. In: Sparck Jones K, Willett P, editors. Readings in information retrieval. Morgan Kaufmann; 1967. p. 47–59.

  4. Cleverdon CW, Mills J. The testing of index language devices. In: Sparck Jones K, Willett P, editors. Readings in information retrieval. Morgan Kaufmann; 1963. p. 98–110.

  5. Kekäläinen J, Järvelin K. Using graded relevance assessments in IR evaluation. JASIST. 2002;53(13):1120–9.

  6. Papineni K, Roukos S, Ward T, Zhu WJ. BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics; 2002. p. 311–8.

  7. van Rijsbergen CJ. Information retrieval. London: Butterworths; 1979.

  8. Salton G, Lesk ME. Computer evaluation of indexing and text processing. In: Sparck Jones K, Willett P, editors. Readings in information retrieval. Morgan Kaufmann; 1967. p. 60–84.

  9. Soboroff I. Dynamic test collections: measuring search effectiveness on the live web. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2006. p. 276–83.

  10. Sparck Jones K, van Rijsbergen CJ. Information retrieval test collections. J Doc. 1976;32(1):59–75.

  11. Voorhees EM. Variations in relevance judgments and the measurement of retrieval effectiveness. In: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 1998. p. 315–23.

  12. Voorhees EM, Harman DK, editors. TREC: experiment and evaluation in information retrieval. Cambridge, MA: MIT Press; 2005.

  13. Zobel J. How reliable are the results of large-scale information retrieval experiments? In: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 1998. p. 307–14.

Author information

Correspondence to Ben Carterette.

Copyright information

© 2018 Springer Science+Business Media, LLC, part of Springer Nature

About this entry

Cite this entry

Carterette, B. (2018). Search Engine Metrics. In: Liu, L., Özsu, M.T. (eds) Encyclopedia of Database Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8265-9_325
