Abstract
Recent developments in contour tracking permit the outlines of moving, natural objects to be tracked live, at full video rate. Such a capability can be used to turn parts of the body—for instance, the hands and lips—into input devices. The results presented here were obtained using a real-time lip tracker which utilises a novel Kalman-filter-based dynamic contour to track the outline of a speaker’s lips. The tracker incorporates predictive dynamics which can be learned from training sequences and automatically tuned to follow typical motions found in speech. The visual data from the tracker is incorporated into an acoustic automatic speech recogniser, enabling robust recognition of speech in the presence of acoustic noise. Tests on a 40-word vocabulary using a dynamic-time-warping-based audio-visual recogniser demonstrate that the lip outline is a rich source of information for speech recognition and establish dynamic contour tracking as a viable instrument for near real-time speechreading applications.
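The predict/update cycle at the heart of a Kalman-filter-based tracker can be illustrated with a minimal sketch. This is not the authors' formulation (their filter operates on a full contour parameterisation with learned dynamics); it is a hypothetical scalar example, with illustrative noise constants, showing how a prediction is blended with each new measurement:

```python
def kalman_step(x, p, z, q=0.01, r=0.1):
    """One predict/update cycle for a scalar state.

    x: current state estimate (e.g. one lip-contour parameter)
    p: variance of that estimate
    z: new measurement from the image
    q: process-noise variance (assumed), r: measurement-noise variance (assumed)
    """
    # Predict: carry the state forward; uncertainty grows by the process noise.
    x_pred = x
    p_pred = p + q
    # Update: the Kalman gain weights the measurement against the prediction.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Track a noisy sequence of hypothetical lip-width measurements.
x, p = 0.0, 1.0
for z in [1.0, 1.1, 0.9, 1.05]:
    x, p = kalman_step(x, p, z)
```

After a few steps the estimate settles near the measurements while its variance shrinks, which is what allows the tracker to ride out momentary image noise.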
© 1996 Springer-Verlag Berlin Heidelberg
Dalton, B., Kaucic, R., Blake, A. (1996). Automatic Speechreading using dynamic contours. In: Stork, D.G., Hennecke, M.E. (eds) Speechreading by Humans and Machines. NATO ASI Series, vol 150. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-13015-5_27
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-08252-8
Online ISBN: 978-3-662-13015-5