
Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

  • Conference paper

Part of the book series: Proceedings of the International Neural Networks Society ((INNS,volume 1))

Abstract

Using Convolutional Neural Networks (CNNs) to build deep learning systems that turn sign language into text has become a vital tool for breaking down communication barriers between deaf-mute and hearing people. Conventional research on this subject concerns training networks to recognize alphanumeric gestures and produce their textual equivalents.

A problem with current methods is that training images are scarce and show little variation in the available gestures; datasets are often skewed towards particular skin tones and hand sizes, which makes a significant subset of gestures hard to detect. Moreover, current identification programs are trained on only a single language, despite more than two hundred known variants. This limits the traditional application of current technologies such as CNNs, owing to the large number of parameters they require.

This work presents a technique that aims to resolve this issue by combining a legacy AI system, pretrained on a generic object recognition task, with a corrector method that uptrains the legacy network. The result is a program that can receive finger spelling from multiple sign languages and deduce both the corresponding alphanumeric and its language, a capability no other neural network has yet replicated.
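The pretrained-network-plus-corrector idea above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the authors' implementation: a fixed random projection stands in for the frozen legacy network's penultimate-layer features, the "gesture" data are synthetic Gaussian clusters, and the corrector is a small linear model fitted on top of the frozen features by ridge-regularized least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "legacy" network: a frozen random feature extractor.
# In the paper's setting this would be a CNN pretrained on generic
# object recognition; here a fixed random projection stands in for
# its penultimate-layer features. All sizes are illustrative.
D_IN, D_FEAT, N_CLASSES = 64, 32, 5

W_frozen = rng.normal(size=(D_IN, D_FEAT))

def legacy_features(x):
    """Frozen feature map: the legacy network itself is never retrained."""
    return np.tanh(x @ W_frozen)

# Synthetic "gesture" data: one Gaussian cluster per alphanumeric class.
centers = rng.normal(scale=3.0, size=(N_CLASSES, D_IN))
X = np.vstack([c + rng.normal(size=(200, D_IN)) for c in centers])
y = np.repeat(np.arange(N_CLASSES), 200)

# Corrector: a small linear classifier trained on the frozen features
# via ridge-regularized least squares against one-hot targets, so only
# D_FEAT * N_CLASSES new parameters are learned.
F = legacy_features(X)
Y = np.eye(N_CLASSES)[y]
lam = 1e-2
W_corr = np.linalg.solve(F.T @ F + lam * np.eye(D_FEAT), F.T @ Y)

pred = (F @ W_corr).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

The design point is that the corrector is cheap: only a small number of new parameters are fitted on top of the legacy features, sidestepping the data-hungry retraining of the full CNN.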



Author information

Correspondence to Stephen Green.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Green, S., Tyukin, I., Gorban, A. (2020). Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics. In: Oneto, L., Navarin, N., Sperduti, A., Anguita, D. (eds) Recent Advances in Big Data and Deep Learning. INNSBDDL 2019. Proceedings of the International Neural Networks Society, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-030-16841-4_29
