Abstract
Researchers, developers, translators, and information consumers all share the problem that there is no accepted standard for machine translation evaluation. The problem is further compounded by the fact that MT evaluations, properly done, require a considerable commitment of time and resources, an anachronism in this day of cross-lingual information processing, when new MT systems may be developed in weeks instead of years. This paper surveys the needs addressed by several of the classic “types” of MT evaluation, and speculates on ways that each of these types might be automated to create relevant, near-instantaneous evaluation of approaches and systems.
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
White, J.S. (2000). Contemplating Automatic MT Evaluation. In: White, J.S. (ed.) Envisioning Machine Translation in the Information Future. AMTA 2000. Lecture Notes in Computer Science, vol 1934. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-39965-8_10
Print ISBN: 978-3-540-41117-8
Online ISBN: 978-3-540-39965-0