Abstract
A 3D motion capture system is used to develop a complete 3D sign language recognition (SLR) system. This paper introduces motion capture technology and its capacity to capture human hands in 3D space. A hand template with marker positions is designed to capture the distinguishing characteristics of Indian sign language, and the captured 3D hand models form a dataset for Indian sign language. We show the superiority of 3D hand motion capture over 2D video capture for sign language recognition: the 3D model dataset is immune to lighting variations, motion blur, color changes, self-occlusions, and external occlusions. We conclude that a 3D model-based sign language recognizer can provide full recognition and has the potential to develop into a complete sign language recognition system.
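One reason 3D marker data sidesteps the failure modes of 2D video is that rigid-motion-invariant features can be computed directly from marker coordinates, with no dependence on pixels, lighting, or camera angle. The sketch below is illustrative only (it is not the paper's method): it assumes each captured hand frame arrives as an (N, 3) array of marker positions, with the marker count chosen arbitrarily, and shows that inter-marker distance features are unchanged under rotation and translation of the hand.

```python
import numpy as np

def pairwise_distance_features(markers: np.ndarray) -> np.ndarray:
    """Upper triangle of the inter-marker distance matrix.

    markers: (N, 3) array of 3D marker positions for one frame.
    Pairwise distances are invariant to rigid rotation and
    translation, illustrating the viewpoint independence of
    3D marker data (unlike 2D video projections).
    """
    diff = markers[:, None, :] - markers[None, :, :]   # (N, N, 3)
    dist = np.linalg.norm(diff, axis=-1)               # (N, N)
    iu = np.triu_indices(len(markers), k=1)            # unique pairs
    return dist[iu]

# Toy check: a rotated and translated copy of the same hand pose
# yields identical features. 20 markers is a hypothetical count.
rng = np.random.default_rng(0)
hand = rng.normal(size=(20, 3))

theta = 0.7  # rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
moved = hand @ R.T + np.array([1.0, -2.0, 0.5])

same = np.allclose(pairwise_distance_features(hand),
                   pairwise_distance_features(moved))
```

A 2D pipeline, by contrast, would see two very different projected images for these two poses; this invariance is what the abstract's claim of immunity to viewpoint and occlusion effects rests on.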
Copyright information
© 2018 Springer Nature Singapore Pte Ltd.
Cite this paper
Kiran Kumar, E., Kishore, P.V.V., Sastry, A.S.C.S., Anil Kumar, D. (2018). 3D Motion Capture for Indian Sign Language Recognition (SLR). In: Satapathy, S., Bhateja, V., Das, S. (eds) Smart Computing and Informatics. Smart Innovation, Systems and Technologies, vol 78. Springer, Singapore. https://doi.org/10.1007/978-981-10-5547-8_3
Print ISBN: 978-981-10-5546-1
Online ISBN: 978-981-10-5547-8