Machine Translation

Volume 23, Issue 2–3, pp 71–103

The NIST 2008 Metrics for machine translation challenge—overview, methodology, metrics, and results

  • Mark Przybocki
  • Kay Peterson
  • Sébastien Bronsart
  • Gregory Sanders


Abstract

This paper discusses the evaluation of automated metrics developed to assess machine translation (MT) technology. A general discussion of the usefulness of automated metrics is offered. The NIST MetricsMATR evaluation of MT metrology is described, including its objectives, protocols, participants, and test data. The methodology employed to evaluate the submitted metrics is reviewed, and a summary is provided of the general classes of metrics evaluated. Overall results are presented, primarily by means of correlation statistics showing the degree of agreement between automated metric scores and human judgments. Metrics are analyzed at the sentence, document, and system level, with results conditioned on various properties of the test data. The paper concludes with some perspective on the improvements that should be incorporated into future evaluations of metrics for MT evaluation.


Keywords: Automated MT metrics, Metric evaluation, MetricsMATR
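The core of the methodology described in the abstract is correlating automated metric scores with human judgments at the sentence, document, and system level. The Python sketch below illustrates that kind of analysis using Pearson, Spearman, and Kendall correlation statistics; it is a minimal illustration, not the paper's actual code, and the system names and all scores in it are invented.

```python
# A minimal sketch, assuming SciPy is installed; the system names and all
# scores below are invented for illustration and do not come from the paper.
from scipy.stats import kendalltau, pearsonr, spearmanr

# Hypothetical per-segment automatic metric scores and human adequacy
# judgments for three MT systems (same segments, aligned by position).
metric = {
    "sysA": [0.42, 0.55, 0.31, 0.78, 0.60],
    "sysB": [0.35, 0.49, 0.28, 0.66, 0.52],
    "sysC": [0.51, 0.62, 0.40, 0.81, 0.69],
}
human = {
    "sysA": [3, 4, 2, 5, 4],
    "sysB": [2, 3, 2, 4, 3],
    "sysC": [4, 4, 3, 5, 5],
}

# Sentence (segment) level: pool all (metric, human) pairs and correlate.
m = [x for scores in metric.values() for x in scores]
h = [x for scores in human.values() for x in scores]
print("segment Pearson :", pearsonr(m, h)[0])
print("segment Spearman:", spearmanr(m, h)[0])
print("segment Kendall :", kendalltau(m, h)[0])

# System level: correlate per-system averages, which asks whether the
# metric ranks whole systems the way the human judges do.
sys_m = [sum(s) / len(s) for s in metric.values()]
sys_h = [sum(s) / len(s) for s in human.values()]
print("system Spearman :", spearmanr(sys_m, sys_h)[0])
```

Document-level analysis works the same way, with scores first averaged over the segments of each document before correlating.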



Copyright information

© US Government 2010

Authors and Affiliations

  • Mark Przybocki (1)
  • Kay Peterson (1)
  • Sébastien Bronsart (1)
  • Gregory Sanders (1)

  1. Multimodal Information Group, National Institute of Standards and Technology, Gaithersburg, USA
