Exploiting the Tibetan Radicals in Recurrent Neural Network for Low-Resource Language Models

  • Tongtong Shen
  • Longbiao Wang
  • Xie Chen
  • Kuntharrgyal Khysru
  • Jianwu Dang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10635)


By virtue of its superiority in handling sequential data and its effectiveness at preserving long-distance information, the recurrent neural network language model (RNNLM) has prevailed in a range of tasks in recent years. However, a large quantity of data is required to build a language model with good performance, which makes modelling difficult for low-resource languages. To address this issue, Tibetan, a minority language, is taken as a case study, and its radicals (components of Tibetan characters) are explored for constructing the language model. Motivated by the inherent structure of Tibetan, a novel construction of Tibetan character embedding is incorporated into the RNNLM. The fusion of individual radical embeddings is realised in three ways: uniform weights (TRU), different weights (TRD), and radical combination (TRC). This structure, especially when combined with the radicals, extends the model's ability to capture long-term context dependencies and alleviates the low-resource problem to some extent. The experimental results show that the proposed structure outperforms the standard RNNLM, yielding 7.4%, 12.7% and 13.5% relative perplexity reductions with TRU, TRD and TRC respectively.
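The radical-fusion idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding dimension, the number of radicals per character, and the weight values are placeholder assumptions (in the paper such embeddings and weights would be learned jointly with the RNNLM).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one Tibetan character decomposed into R radicals,
# each with a d-dimensional embedding (random placeholders here).
d, R = 8, 3
radical_embeddings = rng.standard_normal((R, d))

# TRU-style fusion: uniform weights, i.e. a simple average of the
# radical vectors forms the character embedding.
char_tru = radical_embeddings.mean(axis=0)

# TRD-style fusion: distinct per-radical weights (assumed values here,
# normalised to sum to 1) give a weighted combination instead.
weights = np.array([0.5, 0.3, 0.2])
char_trd = weights @ radical_embeddings

assert char_tru.shape == (d,) and char_trd.shape == (d,)
```

With equal weights, TRD reduces to TRU; the TRC variant described in the abstract would instead combine the radical embeddings structurally rather than by a single weighted sum.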


Keywords: Language model · Low resource · Recurrent neural network · Character embedding · Radical



This research is partially supported by the National Basic Research Program of China (No. 2013CB329301) and the National Natural Science Foundation of China (No. 61233009). We are also especially grateful for the partial support of JSPS KAKENHI Grant (16K00297).



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Tongtong Shen (1)
  • Longbiao Wang (1)
  • Xie Chen (2)
  • Kuntharrgyal Khysru (1)
  • Jianwu Dang (1, 3)

  1. Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
  2. University of Cambridge, Cambridge, UK
  3. Japan Advanced Institute of Science and Technology, Ishikawa, Japan
