Optimization of Gain in Symmetrized Itakura-Saito Discrimination for Pronunciation Learning

  • Andrey V. Savchenko
  • Vladimir V. Savchenko
  • Lyudmila V. Savchenko
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12095)


This paper considers the assessment of pronunciation quality in computer-aided language learning systems. We propose a novel distortion measure for speech processing based on gain optimization of the symmetrized Itakura-Saito divergence. This dissimilarity measure is embedded in a complete algorithm for pronunciation learning and improvement. In the first stage, the user works toward a stable pronunciation of all sounds by matching them against the sounds of an ideal speaker. In the second stage, sounds and short sound sequences are recognized to guarantee that the learned sounds remain distinguishable. The training set may contain not only ideal sounds but also the user's best utterances obtained in the previous stage. Finally, word recognition accuracy is estimated with deep neural networks fine-tuned on the user's best words. An experimental study shows that the proposed procedure achieves high efficiency in learning sounds and sound sequences, even in the presence of noise in the observed utterance.
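The paper's exact formulation is not reproduced in this abstract, but the core idea of gain optimization can be sketched from the standard definitions. For power spectra P and Q, the symmetrized Itakura-Saito divergence between P and a gain-scaled gQ is d(g) = mean(P/(gQ)) + mean(gQ/P) - 2 (the log terms of the two one-sided divergences cancel), and minimizing over the gain g has a closed form: g* = sqrt(A/B) with A = mean(P/Q), B = mean(Q/P), giving the minimum 2*sqrt(A*B) - 2. The following is a minimal sketch under these assumptions; the function name `sym_is_gain_opt` and the list-based spectrum representation are ours, not the authors'.

```python
import math

def sym_is_gain_opt(p, q):
    """Gain-optimized symmetrized Itakura-Saito divergence.

    p, q: power spectra as equal-length sequences of positive values.
    Returns (divergence, optimal_gain), where the gain g scales q to
    best match p under the symmetrized IS measure
        d(g) = mean(p/(g*q)) + mean(g*q/p) - 2.
    Setting d'(g) = 0 yields g* = sqrt(A/B) and d(g*) = 2*sqrt(A*B) - 2,
    with A = mean(p/q) and B = mean(q/p).
    """
    n = len(p)
    a = sum(pi / qi for pi, qi in zip(p, q)) / n  # A = mean(p/q)
    b = sum(qi / pi for pi, qi in zip(p, q)) / n  # B = mean(q/p)
    gain = math.sqrt(a / b)
    divergence = 2.0 * math.sqrt(a * b) - 2.0
    return divergence, gain
```

Note that the gain-optimized measure is invariant to an overall scaling of either spectrum (e.g. spectra differing only by a constant factor yield zero divergence), which is precisely why gain optimization makes the comparison robust to loudness differences between the learner and the ideal speaker.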


Keywords: Signal processing · Itakura-Saito divergence · Gain optimization · Computer-aided language learning · Speech quality assessment · Convolutional neural networks (CNN)



The work was prepared within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE).



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Andrey V. Savchenko (1)
  • Vladimir V. Savchenko (2)
  • Lyudmila V. Savchenko (3)

  1. Laboratory of Algorithms and Technologies for Network Analysis, National Research University Higher School of Economics, Nizhny Novgorod, Russia
  2. Nizhny Novgorod State Linguistic University, Nizhny Novgorod, Russia
  3. Department of Information Systems and Technologies, National Research University Higher School of Economics, Nizhny Novgorod, Russia
