Machine Translation, Volume 27, Issue 3–4, pp 167–170

Quality estimation for machine translation: preface

  • Lucia Specia
  • Radu Soricut

Machine translation (MT) has become a reality in many practical scenarios, from human-aided translation intended for publication, to fully automated translation for gisting purposes among end-users of online systems. This widespread use of MT systems has been motivated, among other things, by better translation quality, more user-friendly tools, and higher demand for translation. In order to make MT maximally useful in these scenarios, a quantification of the quality of translated segments at system run-time—similar to “fuzzy match scores” from translation memory systems—is needed.
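For comparison, translation memory systems typically derive a fuzzy match score from a normalized edit distance between a new source segment and previously translated segments. The sketch below illustrates that idea; the word-level granularity and the example threshold are assumptions for illustration, not any specific TM tool's formula.

```python
# Minimal sketch of a translation-memory "fuzzy match score":
# similarity = 1 - edit_distance / max(len(a), len(b)), at word level.
# The word-level granularity and the 0.7 threshold mentioned below are
# illustrative assumptions, not any particular TM tool's formula.

def edit_distance(a: list[str], b: list[str]) -> int:
    """Word-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            cost = 0 if wa == wb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def fuzzy_match_score(new_source: str, tm_source: str) -> float:
    a, b = new_source.split(), tm_source.split()
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# A stored segment scoring above some threshold (say 0.7) might be
# offered to the translator for reuse rather than translated from scratch.
print(fuzzy_match_score("the cat sat on the mat",
                        "a cat sat on the mat"))  # ~0.83
```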

Quality estimation metrics address this problem by treating quality prediction as a machine learning task: a model combines a number of features that describe different aspects of translation fluency and adequacy, source-text complexity and "translatability", and MT system confidence. Among other applications, these metrics can be useful to decide whether a given segment needs revision by a human translator, to estimate...
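To make the prediction setup concrete, the sketch below frames sentence-level quality estimation as supervised regression: a few shallow features of the source and its machine translation, of the kind commonly used as baselines in quality estimation shared tasks, feed a regressor trained on human quality labels. The specific features, the choice of SVR, and the toy data are illustrative assumptions, not the baseline of any particular shared task.

```python
# Hedged sketch: quality estimation as feature-based regression.
# The features below (source length, length ratio, average word length,
# punctuation count) are simple stand-ins for the richer fluency,
# adequacy, complexity, and confidence features described above.
from sklearn.svm import SVR  # requires scikit-learn

def qe_features(source: str, translation: str) -> list[float]:
    src, tgt = source.split(), translation.split()
    return [
        len(src),                                    # source length (complexity)
        len(tgt) / max(len(src), 1),                 # length ratio (adequacy proxy)
        sum(len(w) for w in tgt) / max(len(tgt), 1), # avg target word length
        sum(w.count(",") + w.count(".") for w in tgt),  # punctuation count in target
    ]

# Toy training data: (source, MT output, human quality score in [0, 1]).
# Scores here are invented purely for illustration.
train = [
    ("the house is red", "la casa es roja", 0.95),
    ("he did not go home yesterday", "él no fue", 0.40),
    ("thank you very much", "muchas gracias", 0.90),
    ("the weather report was wrong again", "el informe tiempo era mal otra", 0.55),
]

X = [qe_features(s, t) for s, t, _ in train]
y = [score for _, _, score in train]
model = SVR().fit(X, y)

# At run time, the predicted score can gate whether a segment goes
# straight to publication or to a human post-editor.
print(model.predict([qe_features("the door is open", "la puerta está abierta")]))
```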

Keywords

Machine Translation · Quality Estimation · Shared Task · Statistical Machine Translation · Source Sentence

Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Department of Computer Science, University of Sheffield, Sheffield, UK
  2. Google Inc., Mountain View, USA
