Real-time Video-based Recognition of Sign Language Gestures using Guided Template Matching

  • A. Sutherland
Conference paper


We present a system for recognising hand gestures in sign language. The system runs in real time on input from a colour video camera. The user wears a differently coloured glove on each hand, and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested under fixed lighting conditions, with the camera at a fixed distance from the user. The system is user-dependent.
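The colour-matching step described above can be sketched as a simple per-pixel distance threshold against each glove's reference colour. This is an illustrative sketch, not the paper's implementation: the reference colours, threshold, and use of plain RGB distance are all assumptions (plausible here because the lighting and camera distance are fixed).

```python
import numpy as np

# Hypothetical reference glove colours in RGB (assumed, not from the paper).
LEFT_GLOVE = np.array([200, 40, 40])    # e.g. a red glove
RIGHT_GLOVE = np.array([40, 40, 200])   # e.g. a blue glove
THRESHOLD = 60.0                        # max Euclidean distance in RGB (assumed)

def segment_hands(frame):
    """Return boolean masks (left, right) of pixels matching each glove colour.

    frame: H x W x 3 uint8 RGB image.
    """
    pixels = frame.astype(np.float64)
    d_left = np.linalg.norm(pixels - LEFT_GLOVE, axis=-1)
    d_right = np.linalg.norm(pixels - RIGHT_GLOVE, axis=-1)
    return d_left < THRESHOLD, d_right < THRESHOLD
```

With fixed lighting, a fixed threshold like this is workable; under varying illumination one would instead match hue or use a trained colour model.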

The system is implemented on a Datacube platform, which contains specialised image-processing hardware and allows real-time operation. Hand-shapes are recognised by template matching implemented on the Datacube's NMAC module. Different sets of templates can be matched at each frame, depending on the absolute positions of the hands within the frame and their positions relative to each other.
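The idea of selecting a template set per frame and scoring candidates can be sketched in software with normalised cross-correlation. This is a minimal sketch only: the region logic, template names, and scoring function are assumptions, and the paper's NMAC hardware performs the matching rather than code like this.

```python
import numpy as np

def ncc(patch, template):
    """Normalised cross-correlation between two equal-size greyscale patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def templates_for_context(hand_pos, template_bank):
    """Choose a template set from the hand's normalised image position.

    Splitting on the vertical midline is an illustrative assumption; the
    paper conditions on absolute and relative hand positions.
    """
    region = "upper" if hand_pos[1] < 0.5 else "lower"
    return template_bank[region]

def best_template(patch, templates):
    """Return the name of the template with the highest correlation score."""
    return max(templates, key=lambda name: ncc(patch, templates[name]))
```

Restricting each frame's match to a context-dependent subset both cuts the matching cost and removes implausible hand-shapes from consideration.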

Results are presented for a set of 25 gestures from Irish Sign Language. Twenty-two of the gestures involve motion, and we show how the structure of the gestures may be used to guide the template-matching algorithm. Dynamic gestures are recognised by identifying sequences of distinct hand-shapes/positions. Recognition rates are presented for a set of 500 example video sequences.
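Recognising a dynamic gesture as a sequence of distinct hand-shapes/positions can be sketched as follows: collapse the per-frame labels into runs of distinct states and compare against a stored lexicon. The gesture names and state labels here are invented for illustration, and exact sequence equality is a simplification; a practical matcher would tolerate noisy or dropped frames.

```python
from itertools import groupby

# Hypothetical gesture lexicon: each gesture is a sequence of distinct
# hand-shape/position states (labels are illustrative, not from the paper).
GESTURES = {
    "wave": ["open_left", "open_centre", "open_right"],
    "point": ["fist", "index_extended"],
}

def collapse(frame_labels):
    """Collapse runs of identical per-frame labels into distinct states."""
    return [label for label, _ in groupby(frame_labels)]

def recognise(frame_labels):
    """Return the gesture whose state sequence matches the observed frames,
    or None if no gesture matches exactly."""
    states = collapse(frame_labels)
    for name, sequence in GESTURES.items():
        if states == sequence:
            return name
    return None
```

The lexicon also supplies the guidance mentioned above: once the first states of a gesture have been seen, only the templates for its possible next states need be matched.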





Copyright information

© Springer-Verlag London 1997

Authors and Affiliations

  • A. Sutherland
    1. Hitachi Dublin Laboratory, O’Reilly Institute, Trinity College, Dublin 2, Ireland
