Preliminary Design of a Dual-Sensor Based Sign Language Translator Device

  • Radzi Ambar
  • Chan Kar Fai
  • Chew Chang Choon
  • Mohd Helmy Abd Wahab
  • Muhammad Mahadi Abdul Jamil
  • Ahmad Alabqari Ma‘Radzi
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 700)

Abstract

Many different sign languages are used around the world, serving as an important medium of communication within the hearing-impaired community. However, the majority of hearing people do not know or understand sign language, which makes communication between a hearing-impaired person and a hearing person difficult. To address this problem, this project proposes the development of a dual-sensor based sign language translator. The goal of the project is to translate sign language into speech and on-screen text using the device. The device was developed as a glove-based system capable of reading the movements of each finger and the arm using two types of sensors: an accelerometer and five flex sensors. This paper describes the design of the glove-based sign language translator, and preliminary experimental results demonstrate the usefulness of the accelerometer and flex sensors.
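As a rough illustration of the dual-sensor arrangement described above, the sketch below samples five flex sensors and a three-axis accelerometer and streams the raw values over serial. This is a minimal sketch under assumed hardware: an Arduino-compatible board with at least eight analog inputs, the flex sensors wired as voltage dividers on pins A0–A4, and an analog accelerometer such as an ADXL335 on A5–A7. The abstract does not specify the microcontroller, wiring, or classification method, so all pin assignments here are hypothetical.

```cpp
// Minimal sketch (assumed hardware): five flex sensors on A0-A4, each in a
// voltage divider, and a three-axis analog accelerometer on A5-A7.
const int FLEX_PINS[5]  = {A0, A1, A2, A3, A4};  // one sensor per finger
const int ACCEL_PINS[3] = {A5, A6, A7};          // X, Y, Z axes

void setup() {
  Serial.begin(9600);  // stream readings to a host for translation
}

void loop() {
  // Finger bend: the ADC value varies with flex-sensor resistance.
  for (int i = 0; i < 5; i++) {
    Serial.print(analogRead(FLEX_PINS[i]));
    Serial.print(',');
  }
  // Arm orientation/motion from the three accelerometer axes.
  for (int i = 0; i < 3; i++) {
    Serial.print(analogRead(ACCEL_PINS[i]));
    Serial.print(i < 2 ? ',' : '\n');
  }
  delay(50);  // ~20 Hz sampling rate
}
```

A host application would then map each eight-value frame to a sign and render it as speech and on-screen text; that classification step is outside the scope of this sketch.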

Keywords

Sign language translator · Accelerometer · Flex sensor · Experiment


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Radzi Ambar (1)
  • Chan Kar Fai (1)
  • Chew Chang Choon (1)
  • Mohd Helmy Abd Wahab (1)
  • Muhammad Mahadi Abdul Jamil (2)
  • Ahmad Alabqari Ma‘Radzi (2)
  1. Department of Computer Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Parit Raja, Malaysia
  2. Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Parit Raja, Malaysia
