Abstract
Using convolutional neural networks (CNNs) to build deep learning systems that translate sign language into text has become a vital tool in breaking down communication barriers for deaf and mute people. Conventional research on this subject concerns training networks to recognize alphanumeric gestures and produce their textual equivalents.
A problem with current methods is the scarcity of training images, which show little variation in the gestures they depict and are often skewed towards particular skin tones and hand sizes, making a significant subset of gestures hard to detect. Moreover, current identification programs are trained on only a single sign language, despite there being over two hundred known variants so far. This limits the straightforward exploitation of current technologies such as CNNs, owing to the large number of parameters they require.
This work presents a technique that aims to resolve this issue by combining a legacy AI system, pretrained on a generic object recognition task, with a corrector method that uptrains the legacy network. The result is a program that can receive finger spelling from multiple sign languages and deduce both the corresponding alphanumeric character and its language, a capability that no other neural network has so far demonstrated.
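The legacy-network-plus-corrector idea can be illustrated with a minimal toy sketch. Here a fixed random projection stands in for the pretrained legacy CNN's feature extractor (the names `W_legacy` and `legacy_features`, and the synthetic two-class data, are illustrative assumptions, not details from the paper); a shallow corrector is then fitted by least squares on top of the frozen features, so the legacy weights are never retrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained "legacy" network: a fixed random
# projection plays the role of its learned feature extractor.
W_legacy = rng.normal(size=(64, 256))

def legacy_features(x):
    # Frozen feature map: these weights are never updated.
    return np.maximum(x @ W_legacy, 0.0)  # ReLU features

# Toy "gesture" dataset: class depends on the first input coordinate.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(float)

# Corrector: a shallow linear readout trained on the frozen features
# via least squares -- only the corrector's weights are learned.
F = legacy_features(X)
F1 = np.hstack([F, np.ones((F.shape[0], 1))])   # append a bias column
w, *_ = np.linalg.lstsq(F1, 2 * y - 1, rcond=None)

pred = (F1 @ w > 0).astype(float)
accuracy = (pred == y).mean()
```

The key design point sketched here is that all adaptation happens in the small corrector, which is cheap to fit, while the large legacy model is treated as a black box.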
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Green, S., Tyukin, I., Gorban, A. (2020). Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics. In: Oneto, L., Navarin, N., Sperduti, A., Anguita, D. (eds) Recent Advances in Big Data and Deep Learning. INNSBDDL 2019. Proceedings of the International Neural Networks Society, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-030-16841-4_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-16840-7
Online ISBN: 978-3-030-16841-4
eBook Packages: Intelligent Technologies and Robotics (R0)