AI-Enabled Real-Time Sign Language Translator

  • Yash Patil
  • Sahil Krishnadas
  • Adya Kastwar
  • Sujata Kulkarni
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1154)


Even with recent advances in technology, there remains a barrier to seamless communication with the hearing- and speech-impaired section of society. Inclusive communication is instrumental for a society to function as a whole: it is essential not only for exchanging ideas, but also for progress and innovation. A lack of means for spontaneous communication should not stand in the way of socializing, employment or productivity. We propose an Android application that interprets American Sign Language into English using a convolutional neural network, with the aim of providing real-time translation and thereby facilitating seamless communication. Although computer-based translation applications for sign language recognition exist, relatively few are available on the Android platform. The proposed sign language translator also finds applicability in gesture-controlled applications such as human–computer interaction, where gestures given as input trigger control actions for home appliances and electronic gadgets. The proposed work aims to bridge the communication gap faced by the hearing- and speech-impaired, reducing their dependence on intermediaries.
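The core operation behind the convolutional feature extraction described above can be illustrated in isolation. Below is a minimal numpy sketch of a single valid-mode 2D convolution (cross-correlation) pass, the building block that a CNN stacks and learns; the grayscale "hand mask" frame and the hand-crafted edge kernel are illustrative assumptions, not the paper's actual network:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation: the core op of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the kernel with each image patch.
            out[i, j] = float(np.sum(image[i:i + kh, j:j + kw] * kernel))
    return out

# Illustrative 6x6 binary mask with a vertical edge down the middle,
# standing in for a segmented hand region in a camera frame.
frame = np.zeros((6, 6))
frame[:, 3:] = 1.0

# Sobel-style vertical-edge kernel (hand-crafted here; a trained CNN
# learns such filters from data instead).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

features = conv2d(frame, sobel_x)
print(features.shape)  # (4, 4); strongest responses sit along the edge
```

In a real network, many such learned kernels run per layer, followed by a nonlinearity and pooling, and the final feature maps feed a classifier over the ASL alphabet.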


Keywords: Convolutional neural network · Real time · Translation · Sign language · Feature extraction



The authors are grateful to Sardar Patel Institute of Technology, India, for providing the necessary facilities to carry out this work.


  1. Chauhan, R., Jangade, R., Rekapally, R.: Classification model for prediction of heart disease. Proc. SoCTA, Adv. Intell. Syst. Comput. 2, 707–714 (2016)
  2. Arora, S., Mishra, N.: Software cost estimation using artificial neural network. Proc. SoCTA, Adv. Intell. Syst. Comput. 2, 51–58 (2016)
  3. Badhe, P.C., Kulkarni, V.: Indian sign language translator using gesture recognition algorithm. In: 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS), Bhubaneswar, pp. 195–200 (2015)
  4. Starner, T., Pentland, A.: Real-time American Sign Language recognition from video using hidden Markov models. In: Proceedings of International Symposium on Computer Vision, Coral Gables, USA, 23 Nov 1995
  5. Vogler, C., Metaxas, D.: Adapting hidden Markov models for ASL recognition by using three-dimensional computer vision methods. In: IEEE International Conference on Computational Cybernetics and Simulation, Orlando, FL (1995)
  6. Shanableh, T., Assaleh, K., Al-Rousan, M.: Spatio-temporal feature-extraction techniques for isolated gesture recognition in Arabic sign language. IEEE Trans. Syst. Man Cybern. Part B Cybern. 37, 641–650 (2007)
  7. Al-Jarrah, O., Halawani, A.: Recognition of gestures in Arabic sign language using neuro-fuzzy systems. Artif. Intell. 133, 117–138 (2001)
  8. Youssif, A.A., Aboutabl, A.E., Ali, H.: Arabic sign language (ArSL) recognition system using HMM. Int. J. Adv. Comput. Sci. Appl. 2 (2011)
  9. Akash: ASL Alphabet. Kaggle, 22 Apr 2018 [Online]. Accessed 25 Jun 2019
  10. Nanivadekar, P., Kulkarni, V.: Indian sign language recognition: database creation, hand tracking and segmentation. In: Proceedings of CSCITA 2014, IEEE Xplore, pp. 358–363

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Yash Patil¹ (Email author)
  • Sahil Krishnadas¹
  • Adya Kastwar¹
  • Sujata Kulkarni¹

  1. Department of Electronics & Telecommunication, Sardar Patel Institute of Technology, Mumbai, India
