Conventional Chinese word embedding models follow English word embedding models in how they model text: they simply use the Chinese word or character as the minimum processing unit, ignoring the semantic information carried by Chinese characters and the radicals within Chinese words. To this end, we propose a radical-enhanced Chinese word embedding in this paper. The model uses conversion and radical-escaping mechanisms to extract the intrinsic information in a Chinese corpus. Through an improved parallel dual-channel network built on a CBOW-like model, the word context is used together with the Chinese character radical context to predict the target word, so the word vectors generated by the model fully reflect the semantic information contained in the radicals. Compared with similar models on word-analogy and word-similarity experiments, our model effectively improves the accuracy of word-vector representation and the relatedness of similar words.
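To make the dual-channel idea concrete, the following is a minimal sketch of a CBOW-like forward pass with a word-context channel and a radical-context channel. All names, sizes, and the choice to combine the two channels by summation are illustrative assumptions for exposition, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, RADICALS, DIM = 10, 5, 8
W_word = rng.normal(size=(VOCAB, DIM))    # word-embedding table (word channel)
W_rad = rng.normal(size=(RADICALS, DIM))  # radical-embedding table (radical channel)
W_out = rng.normal(size=(DIM, VOCAB))     # output weights scoring candidate target words

def predict_target(word_ctx_ids, radical_ctx_ids):
    """Score every vocabulary word given the two context channels."""
    h_word = W_word[word_ctx_ids].mean(axis=0)   # average word-context embeddings
    h_rad = W_rad[radical_ctx_ids].mean(axis=0)  # average radical-context embeddings
    h = h_word + h_rad                           # combine channels (assumed: sum)
    logits = h @ W_out
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()

# Example: predict the target word from surrounding word ids and radical ids.
probs = predict_target([1, 3, 4], [0, 2])
print(probs.shape)  # (10,): one probability per candidate target word
```

Training such a model (as in standard CBOW) would maximize the probability of the observed target word, updating both embedding tables, so that radical information flows into the learned word vectors.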


Keywords: Word embedding · Radical-enhanced Chinese word embedding



Financial support for this study was provided by the Fundamental Research Funds for the Central Universities (Grant No. ZYGX2016J198) and Science and Technology Planning Project of Sichuan Province, China (Grant No. 2017JY0080).



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. University of Electronic Science and Technology of China, Chengdu, China
