
Investigation of Sign Language Recognition Performance by Integration of Multiple Feature Elements and Classifiers

  • Tatsunori Ozawa
  • Yuna Okayasu
  • Maitai Dahlan
  • Hiromitsu Nishimura
  • Hiroshi Tanaka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10904)

Abstract

Sign languages are used both by people with hearing or speech impairments and by the hearing individuals who communicate with them. Acquiring sign language skills is quite difficult, since there is a vast number of sign language words and some signing motions are very complex. Several attempts at machine translation have been investigated for a limited number of sign language motions, using Kinect or a data glove equipped with strain gauges that monitor the angles at which the fingers are bent to detect hand motions and hand shapes.

One of the key features of our proposed method is the use of an optical camera and colored gloves to detect sign language motion. The optical camera is the one built into a smartphone, which removes the restrictions on where and when the system can be used as a machine translation tool.
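
As a minimal sketch of this detection step (not the authors' actual pipeline, which the abstract does not detail), the following Python fragment extracts a glove-colored hand region from a camera frame by HSV thresholding with OpenCV; the color thresholds and function names are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical HSV range for one glove color (here, roughly blue); real
# thresholds would be calibrated to the actual gloves and lighting.
GLOVE_HSV_LO = np.array([100, 120, 80])
GLOVE_HSV_HI = np.array([130, 255, 255])

def extract_hand_region(frame_bgr):
    """Return a binary mask of the glove-colored region and its centroid."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GLOVE_HSV_LO, GLOVE_HSV_HI)
    # Remove small noise blobs before locating the hand.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:  # glove not visible in this frame
        return mask, None
    return mask, (m["m10"] / m["m00"], m["m01"] / m["m00"])
```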

The authors propose two new schemes. The first is to add two feature elements, namely hand direction, obtained from the angle between the wrist and fingertips, and hand rotation, calculated from the visible sizes of the palm and wrist, to the four conventional elements comprising motion trajectory, motion velocity, hand position and hand shape. The second is to integrate the results obtained by the individual classifiers to enhance recognition performance. Six kinds of classifiers were applied to 35 sign language motions.
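
The sketch below illustrates both schemes under stated assumptions: `hand_direction` and `hand_rotation` are hypothetical renderings of the two new feature elements described above, and `fuse_rankings` stands in for the classifier-integration step using a simple rank-sum rule, since the abstract does not specify the actual integration method.

```python
import math
from collections import defaultdict

def hand_direction(wrist_xy, fingertip_xy):
    """Hand direction: angle (in degrees) of the wrist-to-fingertip vector."""
    dx = fingertip_xy[0] - wrist_xy[0]
    dy = fingertip_xy[1] - wrist_xy[1]
    return math.degrees(math.atan2(dy, dx))

def hand_rotation(visible_palm_area, visible_wrist_area):
    """Hand rotation cue: ratio of visible palm size to visible wrist size;
    the ratio falls as the palm turns edge-on to the camera."""
    return visible_palm_area / max(visible_wrist_area, 1.0)

def fuse_rankings(rankings):
    """Integrate ranked candidate lists from several classifiers by rank sum
    (lower total rank wins) -- a simple stand-in for the integration step."""
    totals = defaultdict(int)
    for ranking in rankings:
        for rank, word in enumerate(ranking):
            totals[word] += rank
    return sorted(totals, key=totals.get)

# Three classifiers each return a ranked list of candidate words:
fused = fuse_rankings([["hello", "thanks", "sorry"],
                       ["thanks", "hello", "sorry"],
                       ["hello", "sorry", "thanks"]])
print(fused)  # ['hello', 'thanks', 'sorry']
```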

A total of 3150 pieces of motion data, comprising 2100 pieces of training data and 1050 pieces of evaluation data, were used to evaluate the proposed method. The recognition results were examined by integrating the feature elements and classifiers. The success rate for the 35 words was 76.2% when only the first-ranked answer was accepted as correct, and 94.2% when the first, second or third ranked answer was accepted. These values suggest that the proposed method could be used as a review tool for assessing how well learners have mastered sign language motions.
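
For clarity, the top-1 and top-3 success rates quoted above correspond to a top-k evaluation of the kind sketched here; the variable names and data layout are illustrative.

```python
def success_rate(ranked_predictions, true_labels, k):
    """Fraction of samples whose true word is among the top-k ranked answers."""
    hits = sum(label in ranking[:k]
               for ranking, label in zip(ranked_predictions, true_labels))
    return hits / len(true_labels)

# With the 1050 evaluation samples, the paper reports:
#   success_rate(preds, labels, k=1) ~= 0.762
#   success_rate(preds, labels, k=3) ~= 0.942
```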

Keywords

Sign language · Color gloves · Optical camera · Classifiers · Feature element · Ensemble learning


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Tatsunori Ozawa (1)
  • Yuna Okayasu (1, 2)
  • Maitai Dahlan (3)
  • Hiromitsu Nishimura (1, 2)
  • Hiroshi Tanaka (1, 2)

  1. Course of Information and Computer Sciences, Graduate School of Kanagawa Institute of Technology, Atsugi-shi, Japan
  2. Department of Information and Computer Sciences, Kanagawa Institute of Technology, Atsugi, Japan
  3. Course of Mechanical Engineering, Graduate School of Chulalongkorn University, Bangkok, Thailand
