CNN-BiLSTM-CRF Model for Term Extraction in Chinese Corpus

  • Xiaowei Han
  • Lizhen Xu
  • Feng Qiao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11242)

Abstract

Neural network-based term extraction methods treat term extraction as a sequence labeling task. They model natural language more effectively and remove the dependence of traditional term extraction methods on handcrafted features. This paper proposes a CNN-BiLSTM-CRF model to minimize the influence of differing word segmentation results on term extraction. Experimental results show that CNN-BiLSTM-CRF is more stable than the baseline model across different word segmentation results.
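To make the architecture concrete, the following is a minimal PyTorch sketch of a generic CNN-BiLSTM-CRF sequence labeler; it is an illustration under stated assumptions, not the authors' implementation. It assumes the third-party pytorch-crf package for the CRF layer, and all names and layer sizes (cnn_channels, hidden=256, etc.) are illustrative. A 1-D CNN extracts local character-level features, a bidirectional LSTM captures sentence context in both directions, and the CRF scores entire tag sequences.

```python
# Illustrative CNN-BiLSTM-CRF sequence labeler (a sketch, not the paper's
# code). Requires: torch, plus the third-party pytorch-crf package.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf


class CNNBiLSTMCRF(nn.Module):
    def __init__(self, char_vocab, num_tags, char_dim=64,
                 cnn_channels=128, hidden=256):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        # 1-D convolution over the character sequence; padding=1 with
        # kernel_size=3 keeps the sequence length unchanged.
        self.cnn = nn.Conv1d(char_dim, cnn_channels, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(cnn_channels, hidden // 2, batch_first=True,
                              bidirectional=True)
        self.emit = nn.Linear(hidden, num_tags)  # per-position tag scores
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, chars):
        x = self.char_emb(chars)                 # (batch, seq, char_dim)
        x = torch.relu(self.cnn(x.transpose(1, 2))).transpose(1, 2)
        x, _ = self.bilstm(x)                    # (batch, seq, hidden)
        return self.emit(x)                      # (batch, seq, num_tags)

    def loss(self, chars, tags, mask):
        # The CRF returns a log-likelihood; negate it for minimization.
        return -self.crf(self._emissions(chars), tags, mask=mask)

    def decode(self, chars, mask):
        # Viterbi decoding of the highest-scoring tag sequence per sentence.
        return self.crf.decode(self._emissions(chars), mask=mask)
```

Labeling at the character level with a BIO tag scheme (e.g., B-TERM/I-TERM/O) is one common way such a model stays robust to segmentation differences, since term boundaries are predicted directly from characters rather than from pre-segmented words.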

Keywords

Term extraction · Recurrent neural networks · Word embedding · Convolutional neural networks

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Computer Science and Engineering, Southeast University, Nanjing, China
