Using the Tilt Intonation Model: A Data-Driven Approach

  • Alan W. Black
  • Kurt E. Dusterhoff
  • Paul A. Taylor
Chapter
Part of the Telecommunications Technology & Applications (TTAP) book series

Abstract

This chapter describes the use of the Tilt intonation model as a data-driven approach to building computational models of intonation for fundamental frequency (f0) generation in a speech synthesis system. The chapter presents the theoretical issues behind the design of the model together with a comprehensive description of the model itself; automatic labelling and the synthesis functions are also discussed. A series of three substantial experiments shows the training of an f0 generation algorithm using the Tilt model. The f0 generation algorithm is trained on a database of news stories labelled with Tilt events. The resulting model can generate natural-sounding contours from the information available at f0 generation time in a speech synthesiser (e.g. syllable position, segmental information, position in phrase, position in utterance, lexical stress). The accuracy of the generated contours was tested against unseen data and compares favourably with similar experiments carried out on the same database using different intonation theories. A detailed discussion of the features and results is given, including a comparison with related work.
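
The core of the Tilt representation is a small set of continuous parameters per intonational event (amplitude, duration and a single shape parameter, tilt), which can be converted to and from a rise/fall description of the local f0 movement. The sketch below illustrates that conversion in Python; it is not the authors' code, and the function and variable names are illustrative only.

```python
# Minimal sketch (not the authors' implementation): converting between a
# rise/fall (RFC) description of a pitch event and the Tilt parameters
# described in the chapter.  Names and sign conventions are illustrative.

def rfc_to_tilt(a_rise, a_fall, d_rise, d_fall):
    """Map rise/fall amplitudes (Hz) and durations (s) to Tilt parameters."""
    amp = abs(a_rise) + abs(a_fall)                      # overall event amplitude
    dur = d_rise + d_fall                                # overall event duration
    tilt_amp = (abs(a_rise) - abs(a_fall)) / amp if amp else 0.0
    tilt_dur = (d_rise - d_fall) / dur if dur else 0.0
    tilt = (tilt_amp + tilt_dur) / 2.0                   # shape parameter in [-1, 1]
    return amp, dur, tilt

def tilt_to_rfc(amp, dur, tilt):
    """Recover rise/fall components from Tilt parameters (synthesis direction)."""
    a_rise = amp * (1.0 + tilt) / 2.0
    a_fall = -amp * (1.0 - tilt) / 2.0                   # fall amplitude taken as negative
    d_rise = dur * (1.0 + tilt) / 2.0
    d_fall = dur * (1.0 - tilt) / 2.0
    return a_rise, a_fall, d_rise, d_fall

# Examples: a pure rise gives tilt = +1, a symmetric rise-fall accent tilt = 0.
print(rfc_to_tilt(40.0, 0.0, 0.20, 0.0))    # -> (40.0, 0.2, 1.0)
print(rfc_to_tilt(30.0, 30.0, 0.15, 0.15))  # -> (60.0, 0.3, 0.0)
```

In the kind of system evaluated here, these per-event parameters are predicted at synthesis time (e.g. by CART models, as indicated in the keywords) from the linguistic features listed above, and the resulting events are then rendered into a continuous f0 contour.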

Keywords

Root Mean Square Error, Pitch Accent, Lexical Stress, CART Model, Speech Synthesis System

Copyright information

© Springer Science+Business Media Dordrecht 2001

Authors and Affiliations

  • Alan W. Black (1)
  • Kurt E. Dusterhoff (2)
  • Paul A. Taylor (3)

  1. Language Technologies Institute, Carnegie Mellon University, USA
  2. Phonetic Systems UK Ltd, Bishops Cleeve, UK
  3. Centre for Speech Technology Research, University of Edinburgh, UK
