Speech Synthesis

  • Priyabrata Sinha


In the previous chapter, we saw the mechanisms and algorithms by which a processor-based electronic system can receive and recognize words uttered by a human speaker. The converse of Speech Recognition, in which a processor-based electronic system produces speech that can be heard and understood by a human listener, is called Speech Synthesis. Like Speech Recognition, Speech Synthesis algorithms have a wide range of uses in daily life, some well established and others yet to reach their full potential. Indeed, speech is the most natural interface through which the user of any product can receive usage instructions, monitor system status, or simply carry out true man–machine communication. Speech Synthesis is also closely related to the subjects of Linguistics and Dialog Management. Although quite a few Speech Synthesis techniques are mature and well understood in the research community, and some are available as software solutions on Personal Computers, there is tremendous potential for Speech Synthesis algorithms to be optimized and refined so that they gain wide acceptance in the world of embedded control.


Keywords: Speech Synthesis · Speech Quality · Pitch Period · Linear Predictive Coding · Prosodic Phrasing



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Microchip Technology, Inc., Chandler, USA
