Approaches to Human and Machine Translation Quality Assessment

  • Sheila Castilho
  • Stephen Doherty
  • Federico Gaspari
  • Joss Moorkens
Chapter
Part of the Machine Translation: Technologies and Applications book series (MATRA, volume 1)

Abstract

In both research and practice, translation quality assessment is a complex task involving a range of linguistic and extra-linguistic factors. This chapter provides a critical overview of established and emerging approaches to defining and measuring translation quality in human and machine translation workflows across a range of research, educational, and industry scenarios. Drawing together literature from several interrelated disciplines concerned with contemporary translation quality assessment, we acknowledge the need for diversity in these approaches, but argue that fundamental and widespread issues remain to be addressed if we are to consolidate our knowledge and practice of translation quality assessment in increasingly technologised environments across research, teaching, and professional practice.

Keywords

Translation quality assessment · Principles to practice · Translation industry · Translation metrics · Translation studies · Machine translation · Human translation · Professional translation

Notes

Acknowledgments

This work has been partly supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Sheila Castilho (1)
  • Stephen Doherty (2)
  • Federico Gaspari (1, 3)
  • Joss Moorkens (4)
  1. ADAPT Centre/School of Computing, Dublin City University, Dublin, Ireland
  2. School of Humanities and Languages, The University of New South Wales, Sydney, Australia
  3. University for Foreigners “Dante Alighieri” of Reggio Calabria, Reggio Calabria, Italy
  4. ADAPT Centre/School of Applied Language and Intercultural Studies, Dublin City University, Dublin, Ireland