Machine Learning Based Assistive Speech Technology for People with Neurological Disorders

  • Shanmuganathan Chandrakala
Chapter
Part of the Intelligent Systems Reference Library book series (ISRL, volume 170)

Abstract

With the tremendous worldwide improvement of automatic speech recognition systems, recognizing dysarthric speech efficiently has emerged as a practical challenge. Recognizing impaired speech marked by poor articulation, missing consonants, and similar deficits is a foremost requirement of speech research. Given an unknown dysarthric (partial) speech utterance, the problem is to recognize its content. I first review and analyze approaches to dysarthric speech recognition (DSR): generative, discriminative, hybrid model-based, and unsupervised. Next, I present a framework in which effective representations for the DSR task are formed from generative model-driven features. The performance of the proposed method is examined on isolated utterances from the UA-Speech database, where its recognition accuracy exceeds that of the conventional hidden Markov model-based approach.
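
The framework pairs a generative front end with a discriminative back end. Below is a minimal sketch of that idea, assuming hmmlearn, librosa, and scikit-learn; all function names, state counts, and SVM settings are illustrative choices, not the chapter's exact pipeline. Per-word GMM-HMMs are trained on MFCCs, each utterance is scored against every model, and the resulting vector of log-likelihoods (the likelihood embedding) is classified with an SVM; the conventional HMM baseline simply takes the best-scoring model.

```python
# Hypothetical sketch of a likelihood-embedding pipeline for isolated-word
# dysarthric speech recognition: per-word GMM-HMMs trained on MFCCs supply
# log-likelihood scores that become a fixed-length feature vector for an SVM.
import numpy as np
import librosa
from hmmlearn import hmm
from sklearn.svm import SVC

def mfcc_frames(wav_path, sr=16000, n_mfcc=13):
    """Return an (n_frames, n_mfcc) MFCC matrix for one utterance."""
    y, _ = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_word_hmms(train_utts, n_states=5, n_mix=2):
    """Fit one GMM-HMM per word class.
    train_utts maps word label -> list of MFCC matrices."""
    models = {}
    for word, seqs in train_utts.items():
        X = np.vstack(seqs)               # stack frames of all examples
        lengths = [len(s) for s in seqs]  # per-utterance frame counts
        m = hmm.GMMHMM(n_components=n_states, n_mix=n_mix,
                       covariance_type="diag", n_iter=20, random_state=0)
        m.fit(X, lengths)
        models[word] = m
    return models

def hmm_baseline_decode(models, seq):
    """Conventional HMM decision rule (the baseline): pick the word
    whose model assigns the utterance the highest log-likelihood."""
    return max(models, key=lambda w: models[w].score(seq))

def likelihood_embedding(models, seq):
    """Generative model-driven feature: score the utterance against
    every word HMM and keep the length-normalised log-likelihoods."""
    return np.array([models[w].score(seq) / len(seq) for w in sorted(models)])

def train_embedding_svm(models, train_utts):
    """Discriminative back end trained on the generative embeddings."""
    X = np.array([likelihood_embedding(models, s)
                  for w in sorted(train_utts) for s in train_utts[w]])
    y = [w for w in sorted(train_utts) for _ in train_utts[w]]
    return SVC(kernel="rbf", C=10.0).fit(X, y)
```

A test utterance is then recognized via clf.predict([likelihood_embedding(models, mfcc_frames(path))]), while hmm_baseline_decode gives the conventional HMM decision for comparison.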

Keywords

Dysarthric speech recognition (DSR) · Mel frequency cepstral coefficients (MFCC) · Generative model-driven features · Hidden Markov models · Gaussian mixture models · Likelihood embedding-support vector machine · Transition embedding-support vector machine
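
The keyword list also names a transition-embedding variant. One plausible (assumed) reading, sketched below with hmmlearn, is to adapt a single HMM shared across utterances for a few EM iterations per utterance, updating only the transition matrix, and to flatten the adapted matrix into a fixed-length vector for the same SVM back end; base_model and n_adapt_iter are illustrative names, not the chapter's notation.

```python
# Hypothetical transition-embedding sketch: starting every utterance from
# one shared HMM keeps the state indexing consistent, so the adapted
# transition probabilities are comparable across utterances.
import copy
from hmmlearn import hmm

def transition_embedding(base_model, seq, n_adapt_iter=3):
    """Re-estimate only the transition matrix on a single utterance
    and flatten it into a fixed-length SVM feature vector."""
    m = copy.deepcopy(base_model)
    m.init_params = ""        # keep the shared model's parameters
    m.params = "t"            # EM updates transitions only
    m.n_iter = n_adapt_iter
    m.fit(seq)                # seq: (n_frames, n_mfcc) MFCC matrix
    return m.transmat_.flatten()
```

Here base_model would be, for example, an hmm.GaussianHMM already fit on pooled training MFCCs; the embeddings it yields can replace or augment the likelihood embeddings fed to the SVM.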


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Intelligent Systems Group, School of Computing, SASTRA University, Thanjavur, India
