Towards Appearance-Based Multi-Channel Gesture Recognition

  • G. J. Sweeney
  • A. C. Downton
Conference paper


Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed.





Copyright information

© Springer-Verlag London 1997

Authors and Affiliations

  • G. J. Sweeney, Department of Electronic Systems Engineering, University of Essex, Colchester, UK
  • A. C. Downton, Department of Electronic Systems Engineering, University of Essex, Colchester, UK