Knowledge Augmented Inference Network for Natural Language Inference

  • Conference paper
  • In: Knowledge Graph and Semantic Computing. Knowledge Computing and Language Understanding (CCKS 2018)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 957)

Abstract

This paper proposes a Knowledge Augmented Inference Network (K-AIN) that can effectively incorporate external knowledge into existing neural network models for the Natural Language Inference (NLI) task. Unlike previous work that describes external knowledge with one-hot representations, we employ the TransE model to encode various semantic relations extracted from an external Knowledge Base (KB) as distributed relation features. We use these distributed relation features to construct knowledge-augmented word embeddings and integrate them into current neural network models. Experimental results show that our model outperforms a strong baseline on the SNLI dataset and surpasses the current state-of-the-art models on the SciTail dataset.
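To make the pipeline in the abstract concrete, below is a minimal sketch (in Python/NumPy, not the authors' code) of how pre-trained TransE relation embeddings for KB relations such as WordNet synonymy or hypernymy might be concatenated with word embeddings to form knowledge-augmented inputs for an NLI encoder. The relation lookup, embedding dimensions, and the first-match alignment heuristic are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (illustrative assumptions only) of building knowledge-augmented
# word embeddings: each premise token's word vector is concatenated with the
# TransE embedding of the KB relation (if any) linking it to a hypothesis token.

import numpy as np

EMB_DIM = 300   # dimensionality of the pre-trained word embeddings (e.g., GloVe)
REL_DIM = 50    # dimensionality of the TransE relation embeddings (assumed)

# Toy TransE relation embeddings; in practice these are trained on KB triples
# (head, relation, tail) so that head + relation is close to tail.
rng = np.random.default_rng(0)
RELATION_VECS = {
    "synonym":  rng.normal(size=REL_DIM),
    "hypernym": rng.normal(size=REL_DIM),
}
NO_RELATION = np.zeros(REL_DIM)  # used when a word pair has no KB relation

def lookup_relation(w1: str, w2: str) -> np.ndarray:
    """Return the TransE embedding of the KB relation between w1 and w2 (toy lookup)."""
    toy_kb = {("dog", "animal"): "hypernym", ("happy", "glad"): "synonym"}
    rel = toy_kb.get((w1, w2))
    return RELATION_VECS[rel] if rel else NO_RELATION

def augment(premise, hypothesis, word_vecs):
    """Concatenate each premise word vector with a relation feature taken from its
    first hypothesis word that has a KB relation (zeros if none is found)."""
    augmented = []
    for p in premise:
        rel_feat = NO_RELATION
        for h in hypothesis:
            r = lookup_relation(p, h)
            if r.any():            # take the first non-empty relation feature
                rel_feat = r
                break
        augmented.append(np.concatenate([word_vecs[p], rel_feat]))
    return np.stack(augmented)     # shape: (len(premise), EMB_DIM + REL_DIM)

if __name__ == "__main__":
    premise = ["the", "dog", "is", "happy"]
    hypothesis = ["an", "animal", "is", "glad"]
    word_vecs = {w: rng.normal(size=EMB_DIM) for w in set(premise + hypothesis)}
    print(augment(premise, hypothesis, word_vecs).shape)  # (4, 350)
```

The resulting augmented embeddings could then feed any standard NLI encoder; distributed relation features of this kind, unlike one-hot relation indicators, let related relations share statistical strength.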



Acknowledgements

This work is funded by the Beijing Advanced Innovation Center for Language Resources of BLCU, the Fundamental Research Funds for the Central Universities in BLCU (No. 17PT05), and the BLCU Academic Talents Support Program for the Young and Middle-Aged.

Author information

Corresponding author

Correspondence to Dong Yu.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Jiang, S., Li, B., Liu, C., Yu, D. (2019). Knowledge Augmented Inference Network for Natural Language Inference. In: Zhao, J., Harmelen, F., Tang, J., Han, X., Wang, Q., Li, X. (eds) Knowledge Graph and Semantic Computing. Knowledge Computing and Language Understanding. CCKS 2018. Communications in Computer and Information Science, vol 957. Springer, Singapore. https://doi.org/10.1007/978-981-13-3146-6_11

  • DOI: https://doi.org/10.1007/978-981-13-3146-6_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-3145-9

  • Online ISBN: 978-981-13-3146-6

  • eBook Packages: Computer Science, Computer Science (R0)
