
Automatic Metrics for Machine Translation Evaluation and Minority Languages

  • Conference paper
Proceedings of the Mediterranean Conference on Information & Communication Technologies 2015

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 381)

Abstract

Translation quality and its evaluation play a crucial role in the field of machine translation (MT). This paper focuses on the quality assessment of automatic metrics for MT evaluation. In our study we assess the reliability and validity of three automatic metrics: Position-independent Error Rate (PER), Word Error Rate (WER) and Cover Disjoint Error Rate (CDER). These metrics quantify the error rate of MT output, and thereby of the MT system itself; in our case, an on-line statistical MT system. The results of the reliability analysis showed that these automatic metrics for MT evaluation are reliable and valid, where validity and reliability were verified for one translation direction: from a minority language (Slovak) into English.
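As a rough illustration of two of the metrics named above, the sketch below computes WER (word-level Levenshtein distance normalised by reference length) and PER (the same comparison but over bags of words, ignoring word order). This is a minimal, self-contained illustration, not the paper's evaluation code; CDER additionally requires a block-movement edit distance and is omitted here.

```python
from collections import Counter

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits turning hyp[:j] into ref[:i]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[-1][-1] / len(ref)

def per(reference: str, hypothesis: str) -> float:
    """Position-independent Error Rate: like WER but order-insensitive,
    comparing the two sentences as multisets (bags) of words."""
    ref, hyp = Counter(reference.split()), Counter(hypothesis.split())
    matches = sum((ref & hyp).values())  # multiset intersection size
    n_ref, n_hyp = sum(ref.values()), sum(hyp.values())
    return (max(n_ref, n_hyp) - matches) / n_ref

# Reordering is penalised by WER but not by PER.
print(wer("the cat sat on the mat", "on the mat sat the cat"))  # > 0
print(per("the cat sat on the mat", "on the mat sat the cat"))  # 0.0
```

By construction PER is always less than or equal to WER, which is why the two are often reported together: their gap indicates how much of the error rate is due to word order rather than lexical choice.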



Acknowledgments

This work was supported by the Slovak Research and Development Agency under contract No. APVV-14-0336 and by the Scientific Grant Agency of the Ministry of Education of the Slovak Republic (ME SR) and of the Slovak Academy of Sciences (SAS) under contracts No. VEGA-1/0559/14 and No. VEGA-1/0392/13.

Author information


Correspondence to Michal Munk.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Munková, D., Munk, M. (2016). Automatic Metrics for Machine Translation Evaluation and Minority Languages. In: El Oualkadi, A., Choubani, F., El Moussati, A. (eds) Proceedings of the Mediterranean Conference on Information & Communication Technologies 2015. Lecture Notes in Electrical Engineering, vol 381. Springer, Cham. https://doi.org/10.1007/978-3-319-30298-0_69


  • DOI: https://doi.org/10.1007/978-3-319-30298-0_69


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-30296-6

  • Online ISBN: 978-3-319-30298-0

  • eBook Packages: Engineering (R0)
