Abstract
The approach described in this paper uses time-delay neural networks to solve the task of articulatory estimation from acoustic speech, and image vector quantization for the visual synthesis. Once the system has been trained on a reference speaker, visual cues are associated in real time with each 20 ms segment of incoming speech. Preliminary results are reported from on-going experimentation, both with normal-hearing people and with deaf persons, aimed at estimating some of the many perceptual thresholds involved in the complex task of speechreading from synthetic images. This experimental phase is carried out in cooperation with FIADDA, the Italian association of the families of hearing-impaired children, and is based on a flexible simulation environment.
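The core mechanism of a time-delay neural network is that each output unit looks at a sliding window of consecutive input frames with shared weights, i.e. a 1-D convolution over time. The following is a minimal NumPy sketch of that idea, not the authors' implementation; all dimensions (16 acoustic features per 20 ms frame, a 3-frame delay window, 4 articulatory parameters) are hypothetical toy values.

```python
import numpy as np

def time_delay_layer(frames, weights, bias):
    """One TDNN layer: each output frame is computed from a window of
    `delay` consecutive input frames with weights shared across time.

    frames:  (T, n_in)  sequence of acoustic feature vectors
    weights: (delay, n_in, n_out)
    bias:    (n_out,)
    returns: (T - delay + 1, n_out)
    """
    delay, n_in, n_out = weights.shape
    T = frames.shape[0]
    out = np.empty((T - delay + 1, n_out))
    for t in range(T - delay + 1):
        window = frames[t:t + delay]  # (delay, n_in)
        # Sum over the delay and input-feature axes, squash with tanh.
        out[t] = np.tanh(np.einsum('di,dio->o', window, weights) + bias)
    return out

# Toy two-layer TDNN: 10 input frames -> hidden layer -> 4 articulatory
# parameters per valid output frame (each window shrinks T by delay - 1).
rng = np.random.default_rng(0)
x = rng.standard_normal((10, 16))  # 10 frames, 16 features each
h = time_delay_layer(x, rng.standard_normal((3, 16, 8)) * 0.1, np.zeros(8))
y = time_delay_layer(h, rng.standard_normal((3, 8, 4)) * 0.1, np.zeros(4))
print(y.shape)  # (6, 4): one articulatory vector per valid output frame
```

Stacking such layers widens the effective temporal context seen by each output: here two 3-frame windows give each articulatory estimate a receptive field of 5 input frames, i.e. 100 ms of speech at a 20 ms frame rate.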
Copyright information
© 1996 Springer-Verlag Berlin Heidelberg
Cite this chapter
Lavagetto, F., Lavagetto, P. (1996). Time Delay Neural Networks for Articulatory Estimation from Speech: Suitable Subjective Evaluation Protocols. In: Stork, D.G., Hennecke, M.E. (eds) Speechreading by Humans and Machines. NATO ASI Series, vol 150. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-13015-5_33
DOI: https://doi.org/10.1007/978-3-662-13015-5_33
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-08252-8
Online ISBN: 978-3-662-13015-5