Encyclopedia of Database Systems

2018 Edition
Editors: Ling Liu, M. Tamer Özsu

Discounted Cumulated Gain

  • Kalervo Järvelin
  • Jaana Kekäläinen
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_478

Synonyms

Discounted cumulated gain (DCG); Normalized discounted cumulated gain (nDCG)

Definition

Discounted cumulated gain (DCG) is an evaluation metric for information retrieval (IR). It is based on non-binary relevance assessments of the documents ranked in a retrieval result. It assumes that, for a searcher, highly relevant documents are more valuable than marginally relevant ones. It further assumes that the later a relevant document (of any relevance grade) appears in the ranking, the less valuable it is for the searcher, because the searcher is less likely to ever examine it and must, in any case, expend more effort to find it. DCG formalizes these assumptions by crediting a retrieval system (or a query) for retrieving relevant documents according to their (possibly weighted) degree of relevance, which is then discounted by a factor that depends on the logarithm of the document's ranked position. The steepness of the discount is controlled by the base of the logarithm...
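In the authors' original formulation (Järvelin and Kekäläinen, ACM Trans Inf Syst 2002; reading 9 below), with G[i] the gain of the document at rank i, CG[i] the cumulated gain up to rank i, and b the base of the logarithm, the discounted cumulated gain at rank i is

    DCG[i] = CG[i],                       if i < b
    DCG[i] = DCG[i-1] + G[i] / log_b(i),  if i >= b

and normalized DCG (nDCG) divides this by the DCG of the ideal (descending-gain) ordering of the same documents. The following Python sketch implements these definitions; the function names, the default base b = 2, and the example gain vector are illustrative assumptions, not part of the original entry.

    import math

    def dcg(gains, b=2.0):
        """Discounted cumulated gain of a ranked list of gain values.
        Ranks below the log base b are not discounted, matching the
        original Jarvelin-Kekalainen formulation."""
        total = 0.0
        for i, g in enumerate(gains, start=1):
            discount = math.log(i, b) if i >= b else 1.0
            total += g / discount
        return total

    def ndcg(gains, b=2.0):
        """Normalized DCG: actual DCG divided by the DCG of the ideal
        (descending) ordering of the same gains; 0.0 if there is no gain."""
        ideal = dcg(sorted(gains, reverse=True), b)
        return dcg(gains, b) / ideal if ideal > 0 else 0.0

    # Hypothetical graded relevance judgments (0-3 scale) for a top-5 result:
    gains = [3, 2, 3, 0, 1]
    print(dcg(gains))   # ~7.32
    print(ndcg(gains))  # ~0.94, normalized against the ideal ordering [3, 3, 2, 1, 0]

With base b = 2 the discount begins at rank 2 and early ranks dominate the score; a larger base (e.g., b = 10) models a more patient searcher, since later ranks are then discounted less steeply.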


Recommended Reading

  1. Losee RM. Text retrieval and filtering: analytic models of performance. Boston: Kluwer Academic; 1998.
  2. Hull D. Using statistical testing in the evaluation of retrieval experiments. In: Proceedings of the 16th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 1993. p. 329–38.
  3. Sakai T. On the reliability of information retrieval metrics based on graded relevance. Inf Process Manag. 2007;43(2):531–48.
  4. Voorhees E. Evaluation by highly relevant documents. In: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2001. p. 74–82.
  5. A suggestion by Susan Price (Portland State University, Portland), May 2007 (private communication).
  6. Sakai T. Average gain ratio: a simple retrieval performance measure for evaluation with multiple relevance levels. In: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2003. p. 417–18.
  7. Bompada T, Chang C, Chen J, Kumar R, Shenoy R. On the robustness of relevance measures with incomplete judgments. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2007. p. 359–66.
  8. Järvelin K, Kekäläinen J. IR evaluation methods for retrieving highly relevant documents. In: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2000. p. 41–8.
  9. Järvelin K, Kekäläinen J. Cumulated gain-based evaluation of IR techniques. ACM Trans Inf Syst. 2002;20(4):422–46.
  10. Kekäläinen J. Binary and graded relevance in IR evaluations – comparison of the effects on ranking of IR systems. Inf Process Manag. 2005;41(5):1019–33.
  11. Kekäläinen J, Järvelin K. Using graded relevance assessments in IR evaluation. J Am Soc Inf Sci Technol. 2002;53(13):1120–9.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. University of Tampere, Tampere, Finland

Section editors and affiliations

  • Weiyi Meng
  1. Dept. of Computer Science, State University of New York at Binghamton, Binghamton, USA