Co-attention and Aggregation Based Chinese Recognizing Textual Entailment Model

  • Conference paper
  • In: Natural Language Processing and Chinese Computing (NLPCC 2019)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11839)

Abstract

Recognizing textual entailment (RTE) is a fundamental task in natural language processing whose purpose is to recognize the inferential relationship between two sentences. With the development of deep learning and the construction of relevant corpora, great progress has been made on English textual entailment. Progress on Chinese textual entailment, however, has been comparatively slow because of the lack of large-scale annotated corpora. The Seventeenth China National Conference on Computational Linguistics (CCL 2018) released the first Chinese textual entailment dataset, containing 100,000 sentence pairs, which makes deep learning models practical for this task. Inspired by attention-based models for English, we propose a Chinese recognizing textual entailment model based on co-attention and aggregation. The model uses co-attention to compute features of the relationship between the two sentences, and aggregates these features with features extracted from each sentence individually. Our model achieves 93.5% accuracy on the CCL 2018 textual entailment dataset, higher than the first-place result in the original evaluation. Experimental results show that contradiction relations are the most difficult to recognize, but our model still outperforms the other benchmark models on them. Moreover, our model can be applied to Chinese document-based question answering (DBQA): it achieves 72.3% accuracy on the NLPCC 2016 DBQA dataset.
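The abstract only summarizes the architecture, so the snippet below is a minimal PyTorch sketch of the general co-attention-and-aggregation pattern it describes: a shared similarity matrix drives soft alignment in both directions, and the aligned features are fused with the encoded sentence features before classification. Everything here (the class and variable names, the BiLSTM encoder, the [a; b; a-b; a*b] fusion, max-pooling, and all dimensions) is an illustrative assumption in the style of Parikh et al. and ESIM, not the authors' exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoAttentionEntailment(nn.Module):
    """Illustrative co-attention + aggregation sentence-pair classifier."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared BiLSTM encoder for premise and hypothesis.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Each fused sentence is 4 * (2 * hidden_dim) wide after fusion;
        # concatenating the two pooled vectors gives 16 * hidden_dim.
        self.classifier = nn.Sequential(
            nn.Linear(16 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, premise, hypothesis):
        p, _ = self.encoder(self.embed(premise))       # (B, Lp, 2H)
        h, _ = self.encoder(self.embed(hypothesis))    # (B, Lh, 2H)

        # Co-attention: one similarity matrix, normalized along each axis,
        # yields alignments in both directions.
        sim = torch.bmm(p, h.transpose(1, 2))          # (B, Lp, Lh)
        p_align = torch.bmm(F.softmax(sim, dim=2), h)  # hypothesis summary per premise token
        h_align = torch.bmm(F.softmax(sim, dim=1).transpose(1, 2), p)

        # Aggregation: fuse encoded and aligned features, then max-pool over tokens.
        p_fused = torch.cat([p, p_align, p - p_align, p * p_align], dim=-1)
        h_fused = torch.cat([h, h_align, h - h_align, h * h_align], dim=-1)
        v = torch.cat([p_fused.max(dim=1).values,
                       h_fused.max(dim=1).values], dim=-1)
        return self.classifier(v)                      # (B, num_classes)


# Toy usage over a hypothetical 20k-word vocabulary of segmented Chinese words.
model = CoAttentionEntailment(vocab_size=20000)
premise = torch.randint(0, 20000, (4, 12))
hypothesis = torch.randint(0, 20000, (4, 9))
logits = model(premise, hypothesis)  # (4, 3): entailment / neutral / contradiction
```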

The authors were financially supported by the National Social Science Fund of China (18ZDA315), the Program for Science and Technology Development in Henan Province (No. 192102210260), and the Key Scientific Research Program of Higher Education of Henan (No. 20A520038).


Notes

  1. http://tcci.ccf.org.cn/conference/2016/pages/page05_evadata.html
  2. https://pypi.org/project/jieba/


Author information

Corresponding author: Lingling Mu.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, P., Mu, L., Zan, H. (2019). Co-attention and Aggregation Based Chinese Recognizing Textual Entailment Model. In: Tang, J., Kan, M.Y., Zhao, D., Li, S., Zan, H. (eds.) Natural Language Processing and Chinese Computing. NLPCC 2019. Lecture Notes in Computer Science, vol. 11839. Springer, Cham. https://doi.org/10.1007/978-3-030-32236-6_11

  • DOI: https://doi.org/10.1007/978-3-030-32236-6_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32235-9

  • Online ISBN: 978-3-030-32236-6

  • eBook Packages: Computer Science, Computer Science (R0)
