Abstract
This chapter describes the use of the Tilt intonation model as a data-driven approach to building computational models of intonation for fundamental frequency (f0) generation in a speech synthesis system. The chapter presents the theory underlying the model and the design issues behind it, and discusses automatic labelling and synthesis functions. A series of three substantial experiments is presented showing the training of an f0 generation algorithm using the Tilt model. The f0 generation algorithm is trained from a database of news stories labelled with Tilt events. The resulting model can generate natural-sounding contours from the information available at f0 generation time in a speech synthesiser (e.g. syllable position, segmental information, position in phrase, position in utterance, lexical stress). The accuracy of the generated contours was tested against unseen data and compares favourably with similar experiments done on the same database using different intonation theories. A detailed discussion of the features and results is given, including comparison with related work.
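As a rough illustration of the kind of event parameterisation the Tilt model uses, the sketch below computes Tilt parameters from a pitch event's rise and fall components, following the standard formulation in the Tilt literature (the function name and the example values are illustrative, not taken from the chapter):

```python
# Sketch of the Tilt parameterisation of an intonation event:
# an event is a rise (amplitude, duration) followed by a fall
# (amplitude, duration), collapsed into amplitude, duration and tilt.

def tilt_parameters(a_rise, d_rise, a_fall, d_fall):
    """Collapse an event's rise and fall into Tilt parameters."""
    amp = abs(a_rise) + abs(a_fall)      # total f0 excursion (Hz)
    dur = d_rise + d_fall                # total event duration (s)
    tilt_amp = (abs(a_rise) - abs(a_fall)) / amp if amp else 0.0
    tilt_dur = (d_rise - d_fall) / dur if dur else 0.0
    tilt = 0.5 * (tilt_amp + tilt_dur)   # +1 = pure rise, -1 = pure fall
    return amp, dur, tilt

# A symmetric rise-fall accent has tilt 0; a pure rise has tilt +1.
print(tilt_parameters(30.0, 0.15, 30.0, 0.15))  # (60.0, 0.3, 0.0)
print(tilt_parameters(40.0, 0.2, 0.0, 0.0))     # (40.0, 0.2, 1.0)
```

At f0 generation time, the trained model predicts these continuous parameters (plus event position) from linguistic features, and the synthesis function maps them back to an f0 contour.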
Copyright information
© 2001 Springer Science+Business Media Dordrecht
Cite this chapter
Black, A.W., Dusterhoff, K.E., Taylor, P.A. (2001). Using the Tilt Intonation Model: A Data-Driven Approach. In: Damper, R.I. (eds) Data-Driven Techniques in Speech Synthesis. Telecommunications Technology & Applications Series. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-3413-3_9
DOI: https://doi.org/10.1007/978-1-4757-3413-3_9
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4419-4733-8
Online ISBN: 978-1-4757-3413-3