Neural or Statistical: An Empirical Study on Language Models for Chinese Input Recommendation on Mobile

  • Hainan Zhang
  • Yanyan Lan
  • Jiafeng Guo
  • Jun Xu
  • Xueqi Cheng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10390)

Abstract

Chinese input recommendation plays an important role in reducing the human cost of typing Chinese words, especially in mobile applications. The fundamental problem is to predict the conditional probability of the next word given the sequence of previous words. Statistical language models, i.e., n-gram based models, have therefore been used extensively for this task in real applications. However, the extremely diverse typing behaviors of users usually lead to a serious sparsity problem, under which even n-gram models with smoothing fail. A reasonable way to tackle this problem is to use recently proposed neural models, such as the probabilistic neural language model, the recurrent neural network and word2vec, which can leverage semantically similar words when estimating the probability. However, there is no conclusion on which of the two approaches works better in real applications. In this paper, we conduct an extensive empirical study of the differences between statistical and neural language models. The experimental results show that the two approaches have their individual advantages, and that a hybrid approach brings a significant improvement.
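To make the prediction task concrete: the model estimates P(w_t | w_1, ..., w_{t-1}), and an n-gram model approximates this using only the most recent n-1 words, smoothing the counts to cope with sparsity. The following is a minimal illustrative sketch in Python of the statistical baseline (a bigram recommender with add-one smoothing); it is not the authors' system, and the class name, toy segmented corpus and recommend interface are assumptions made here for illustration. A neural model would instead score candidate words with learned word representations rather than raw counts, which is what allows it to generalize to unseen word pairs.

    # Illustrative sketch only: a bigram language model with add-one
    # (Laplace) smoothing that ranks candidate next words, i.e. the basic
    # statistical approach the paper compares against neural models.
    from collections import defaultdict

    class BigramRecommender:
        def __init__(self):
            self.bigram_counts = defaultdict(lambda: defaultdict(int))
            self.unigram_counts = defaultdict(int)
            self.vocab = set()

        def train(self, sentences):
            # sentences: list of word lists (already segmented)
            for words in sentences:
                for prev, curr in zip(words, words[1:]):
                    self.bigram_counts[prev][curr] += 1
                    self.unigram_counts[prev] += 1
                    self.vocab.update((prev, curr))

        def prob(self, prev, curr):
            # P(curr | prev) with add-one smoothing to handle unseen pairs
            v = len(self.vocab)
            return (self.bigram_counts[prev][curr] + 1) / (self.unigram_counts[prev] + v)

        def recommend(self, prev, k=3):
            # rank vocabulary words by smoothed conditional probability
            ranked = sorted(self.vocab, key=lambda w: self.prob(prev, w), reverse=True)
            return ranked[:k]

    # Usage: train on a toy segmented corpus and ask for next-word candidates.
    model = BigramRecommender()
    model.train([["我", "喜欢", "看", "电影"], ["我", "喜欢", "听", "音乐"]])
    print(model.recommend("喜欢"))  # e.g. ['看', '听', ...]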

Keywords

Neural network · Deep learning · Language model · Machine learning · Sequential prediction

Acknowledgments

The work was funded by 973 Program of China under Grant No. 2014CB340401, the National Key R&D Program of China under Grant No. 2016QY02D0405, the National Natural Science Foundation of China (NSFC) under Grants No. 61232010, 61472401, 61433014, 61425016, and 61203298, the Key Research Program of the CAS under Grant No. KGZD-EW-T03-2, and the Youth Innovation Promotion Association CAS under Grants No. 20144310 and 2016102.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Hainan Zhang (1)
  • Yanyan Lan (1)
  • Jiafeng Guo (1)
  • Jun Xu (1)
  • Xueqi Cheng (1)
  1. CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China