Work-in-Progress: Silent Speech Recognition Interface for the Differently Abled
Silent, or unvoiced, speech can be interpreted by lip reading, which is difficult, or by using electromyography (EMG) electrodes to convert facial muscle movements into distinct signals. These signals are processed in MATLAB and matched to a predefined word using the dynamic time warping (DTW) algorithm. The identified word is then converted to speech and can be used to control a nearby device such as a motorized wheelchair. A silent speech interface thus has the potential to enable a differently abled person to communicate and to interact with objects in their surroundings, easing their daily lives.
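The template-matching step described above can be sketched as follows. This is a minimal illustration in Python, not the authors' MATLAB implementation; the word templates and the use of raw 1-D signal envelopes are assumptions made for the example.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals.

    Fills a cumulative-cost matrix where each cell adds the local
    absolute difference to the cheapest of the three predecessor
    moves (match, insertion, deletion).
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify_word(signal, templates):
    """Return the vocabulary word whose stored template is closest
    (in DTW distance) to the incoming EMG signal."""
    return min(templates, key=lambda w: dtw_distance(signal, templates[w]))

# Hypothetical pre-recorded EMG envelope templates, one per word.
templates = {"yes": [0.0, 1.0, 0.5, 0.0], "no": [1.0, 0.0, 0.0, 1.0]}
print(classify_word([0.0, 0.9, 0.6, 0.1], templates))
```

DTW is preferred over a plain sample-by-sample distance here because speaking rate varies between utterances: the warping path aligns stretched or compressed renditions of the same word before their distances are compared.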
Keywords: Silent speech · Signal processing · Human-computer interface · Electromyography