
Improving Quality Estimation of Machine Translation by Using Pre-trained Language Representation

  • Guoyi Miao
  • Hui Di
  • Jinan Xu (corresponding author)
  • Zhongcheng Yang
  • Yufeng Chen
  • Kazushige Ouchi
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1104)

Abstract

Translation quality estimation (QE) has been attracting increasing attention due to its potential to reduce human post-editing effort. However, QE still suffers heavily from the scarcity of quality annotation data, which remains expensive to produce. In this paper, we focus on overcoming this data limitation and explore utilizing the high-level latent features learned by pre-trained language models to reduce the model's dependence on QE data and improve QE performance. Specifically, we propose two strategies to integrate the pre-trained language features into the QE model: (1) a mixed integration model, where the pre-trained language features are fed into the QE model together with other features; and (2) a constrained integration model, where a constraint mechanism is used to adjust the reporting bias of the first integration model and enhance the robustness of the QE model. Experimental results on the WMT17 QE task demonstrate the effectiveness of our approaches.
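
The first strategy amounts to concatenating pre-trained language-model features with the base QE model's own features before predicting a quality score. Below is a minimal PyTorch sketch of that idea; the layer sizes, the pooling assumption, and the class name MixedIntegrationQE are illustrative choices of ours, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class MixedIntegrationQE(nn.Module):
    """Sketch of a 'mixed integration' QE head: pre-trained LM features
    are concatenated with features from a base QE model, then mapped to
    a sentence-level quality (HTER) score in [0, 1]. All dimensions and
    layers here are assumptions, not taken from the paper."""

    def __init__(self, lm_dim=768, qe_dim=512, hidden_dim=256):
        super().__init__()
        self.mix = nn.Linear(lm_dim + qe_dim, hidden_dim)
        self.head = nn.Sequential(
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),  # sentence-level HTER scores lie in [0, 1]
        )

    def forward(self, lm_feats, qe_feats):
        # lm_feats: (batch, lm_dim) pooled pre-trained representations,
        #           e.g. a frozen BERT or ELMo sentence vector
        # qe_feats: (batch, qe_dim) pooled features from the base QE model
        mixed = torch.cat([lm_feats, qe_feats], dim=-1)
        return self.head(self.mix(mixed)).squeeze(-1)

# Smoke test with random stand-in features.
model = MixedIntegrationQE()
scores = model(torch.randn(4, 768), torch.randn(4, 512))
print(scores.shape)  # torch.Size([4])
```

The second strategy, the constrained integration model, would add a constraint term on top of this head to correct the reporting bias the abstract mentions; that mechanism is not reproduced in the sketch.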

Keywords

Quality estimation · Machine translation · Pre-trained language model

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Nos. 61370130, 61976015, 61473294 and 61876198), the Fundamental Research Funds for the Central Universities (Nos. 2015JBM033 and 2018YJS043), the International Science and Technology Cooperation Program of China under grant No. K11F100010, Major Projects of Fundamental Research on Philosophy and Social Sciences of Henan Education Department (2016-JCZD-022), and Toshiba (China) Co., Ltd.

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Guoyi Miao (1)
  • Hui Di (2)
  • Jinan Xu (1, corresponding author)
  • Zhongcheng Yang (3)
  • Yufeng Chen (1)
  • Kazushige Ouchi (2)

  1. School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
  2. Toshiba (China) Co., Ltd., Beijing, China
  3. Qihoo 360 Technology Co., Ltd., Beijing, China