Abstract
Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed.
Copyright information
© 1997 Springer-Verlag London
Cite this paper
Sweeney, G.J., Downton, A.C. (1997). Towards Appearance-Based Multi-Channel Gesture Recognition. In: Harling, P.A., Edwards, A.D.N. (eds) Progress in Gestural Interaction. Springer, London. https://doi.org/10.1007/978-1-4471-0943-3_2
Print ISBN: 978-3-540-76094-8
Online ISBN: 978-1-4471-0943-3