Work-in-Progress: Silent Speech Recognition Interface for the Differently Abled

  • Josh Elias Joy (corresponding author)
  • H. Ajay Yadukrishnan
  • V. Poojith
  • J. Prathap
Conference paper
Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 80)


Silent (unvoiced) speech can be interpreted by lip reading, which is difficult, or by using electromyography (EMG) electrodes to convert facial muscle movements into distinct signals. These signals are processed in MATLAB and matched to a predefined word using the dynamic time warping (DTW) algorithm. The identified word is then converted to speech and can be used to control a nearby device such as a motorized wheelchair. A silent speech interface thus has the potential to enable a differently-abled person to communicate and to interact with objects in their surroundings, easing everyday life.
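The DTW matching step described above can be sketched in pure Python. This is a minimal illustration, not the authors' implementation: it assumes the EMG front end has already reduced each utterance to a one-dimensional feature sequence (e.g. a smoothed signal envelope), and the word templates shown are hypothetical.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local distance between samples
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

def classify(sample, templates):
    """Return the word whose stored template is closest to the sample."""
    return min(templates, key=lambda w: dtw_distance(sample, templates[w]))

# Hypothetical per-word EMG envelope templates recorded during enrollment.
templates = {"forward": [0.1, 0.5, 0.9], "stop": [0.9, 0.5, 0.1]}
print(classify([0.2, 0.6, 0.8], templates))  # prints "forward"
```

Because DTW warps the time axis, the same word spoken faster or slower still aligns well against its template, which is why it suits small command vocabularies like wheelchair control.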


Keywords: Silent speech · Signal processing · Human–computer interface · Electromyography



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Josh Elias Joy (corresponding author) (1)
  • H. Ajay Yadukrishnan (1)
  • V. Poojith (1)
  • J. Prathap (1)
  1. The Oxford College of Engineering, Bengaluru, India