Abstract
We propose new translation evaluation metrics for legal sentences. Most previous metrics proposed for evaluating machine translation systems prepare human reference translations and assume that several correct translations exist for one source sentence. However, readers usually assume that different translations denote different meanings, so the existence of several translations of one legal expression may confuse them. Because translation variety is unacceptable and consistency is crucial in legal translation, we propose two metrics that evaluate the consistency of legal translations and illustrate their performance by comparing them with other metrics.
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this chapter
Ogawa, Y., Imai, K., Toyama, K. (2010). Evaluation Metrics for Consistent Translation of Japanese Legal Sentences. In: Francesconi, E., Montemagni, S., Peters, W., Tiscornia, D. (eds) Semantic Processing of Legal Texts. Lecture Notes in Computer Science(), vol 6036. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12837-0_13
Print ISBN: 978-3-642-12836-3
Online ISBN: 978-3-642-12837-0