Real-time Video-based Recognition of Sign Language Gestures using Guided Template Matching

  • Conference paper

Abstract

We present a system for recognising hand gestures in sign language. The system runs in real time on input from a colour video camera. The user wears a differently coloured glove on each hand, and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested under fixed lighting conditions, with the camera at a fixed distance from the user. The system is user-dependent.
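The colour-based hand segmentation described above can be sketched as a nearest-reference-colour classification. The glove colours and the distance threshold below are illustrative assumptions; the paper does not state which colours the gloves were.

```python
import numpy as np

# Hypothetical reference colours for the two gloves (assumed, not from
# the paper): pixels within THRESHOLD of a glove colour are assigned to
# that hand; everything else is treated as background.
LEFT_GLOVE = np.array([200.0, 40.0, 40.0])    # assumed red glove
RIGHT_GLOVE = np.array([40.0, 200.0, 40.0])   # assumed green glove
THRESHOLD = 80.0                               # assumed colour tolerance

def segment_hands(frame):
    """Label each pixel 0 (background), 1 (left hand) or 2 (right hand).

    frame is an (H, W, 3) array of RGB values.
    """
    d_left = np.linalg.norm(frame - LEFT_GLOVE, axis=-1)
    d_right = np.linalg.norm(frame - RIGHT_GLOVE, axis=-1)
    labels = np.zeros(frame.shape[:2], dtype=np.uint8)
    labels[(d_left < THRESHOLD) & (d_left <= d_right)] = 1
    labels[(d_right < THRESHOLD) & (d_right < d_left)] = 2
    return labels
```

Because each pixel is classified independently, this step is well suited to the kind of per-pixel image-processing hardware the paper's Datacube platform provides.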

The system is implemented on a Datacube platform, which contains specialised image-processing hardware and allows real-time operation. Hand-shapes are recognised by template matching, implemented with the Datacube's NMAC module. Different sets of templates can be matched at each frame, depending on the absolute positions of the hands within the frame and their positions relative to each other.
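The guided matching idea above — restricting the candidate templates by where the hand currently is — could be sketched as follows. The region names, template shapes, and the use of normalised cross-correlation as the match score are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ncc(patch, template):
    """Normalised cross-correlation score in [-1, 1]."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def recognise_shape(patch, hand_pos, template_sets):
    """Match only the template set associated with the hand's region.

    hand_pos is a normalised (x, y) position; template_sets maps a
    region name to {shape_name: template array}. Both the two-region
    split and the names are hypothetical.
    """
    region = "upper" if hand_pos[1] < 0.5 else "lower"
    candidates = template_sets[region]
    return max(candidates, key=lambda name: ncc(patch, candidates[name]))
```

Pruning the template set per frame keeps the number of correlations bounded, which is what makes exhaustive matching feasible at video rate on dedicated hardware.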

Results are presented for a set of 25 gestures from Irish Sign Language. 22 of the gestures involve motion, and we show how the structure of the gestures may be used to guide the template-matching algorithm. Dynamic gestures are recognised by identifying sequences of distinct hand-shapes/positions. Recognition rates are presented for a set of 500 example video sequences.
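Recognising a dynamic gesture as a sequence of distinct hand-shapes/positions can be sketched as collapsing the per-frame labels into runs and comparing against gesture definitions. The gesture names and label sequences below are invented for illustration; the actual ISL gesture inventory is not reproduced here.

```python
from itertools import groupby

# Hypothetical gesture definitions: each dynamic gesture is a sequence
# of distinct hand-shape/position labels (not the paper's actual set).
GESTURES = {
    "wave": ["open-left", "open-right", "open-left"],
    "beckon": ["open-up", "curl"],
}

def collapse(frame_labels):
    """Drop consecutive duplicate labels from a per-frame label stream."""
    return [label for label, _ in groupby(frame_labels)]

def recognise_gesture(frame_labels):
    """Return the gesture whose defining sequence the stream matches."""
    key = collapse(frame_labels)
    for name, sequence in GESTURES.items():
        if key == sequence:
            return name
    return None
```

Collapsing repeats makes the recogniser insensitive to how long each hand-shape is held, so the same gesture is recognised whether it is signed quickly or slowly.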





Copyright information

© 1997 Springer-Verlag London

About this paper

Cite this paper

Sutherland, A. (1997). Real-time Video-based Recognition of Sign Language Gestures using Guided Template Matching. In: Harling, P.A., Edwards, A.D.N. (eds) Progress in Gestural Interaction. Springer, London. https://doi.org/10.1007/978-1-4471-0943-3_4

  • DOI: https://doi.org/10.1007/978-1-4471-0943-3_4

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76094-8

  • Online ISBN: 978-1-4471-0943-3

