Optimisation of Artificial Neural Network Topology Applied in the Prosody Control in Text-to-Speech Synthesis

  • Václav Šebesta
  • Jana Tučková
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1963)


Multilayer artificial neural networks (ANNs) are often used to solve classification problems or to forecast time series. An adequate number of training and testing patterns must be available for ANN training. Each training pattern consists of n input parameters and m output parameters. The number m is usually fixed by the problem formulation, but the n inputs can often be selected from a larger set of candidate parameters. An optimal selection of input parameters is especially important when the number of usable input parameters is large and the analytical relations between inputs and outputs are unknown. In general, the number of neurons in all ANN layers should be kept as small as possible to obtain good generalisation ability.
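The setting above can be made concrete with a minimal sketch (not the authors' code): a one-hidden-layer network with n inputs and m outputs, trained by plain gradient descent on mean-squared error over a set of training patterns. The data, layer sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, h, m = 4, 3, 1                      # n inputs, a small hidden layer, m outputs
X = rng.standard_normal((40, n))       # 40 training patterns
y = (X[:, :2].sum(axis=1, keepdims=True) > 0).astype(float)  # toy target

W1 = rng.standard_normal((n, h)) * 0.5
b1 = np.zeros(h)
W2 = rng.standard_normal((h, m)) * 0.5
b2 = np.zeros(m)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    a1 = sigmoid(X @ W1 + b1)          # hidden-layer activations
    return a1, sigmoid(a1 @ W2 + b2)   # network output

losses = []
lr = 0.5
for _ in range(500):
    a1, out = forward(X)
    err = out - y
    losses.append(float((err ** 2).mean()))
    # backpropagation through both layers
    d2 = err * out * (1 - out)
    d1 = (d2 @ W2.T) * a1 * (1 - a1)
    W2 -= lr * a1.T @ d2 / len(X)
    b2 -= lr * d2.mean(axis=0)
    W1 -= lr * X.T @ d1 / len(X)
    b1 -= lr * d1.mean(axis=0)

print(losses[0], losses[-1])           # training error should shrink
```

The smaller the hidden layer, the fewer free weights the data must constrain, which is why keeping h small tends to help generalisation.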

In this paper we present a possible way to select the significant input parameters (the so-called “markers”), i.e. those with the strongest influence on the output parameters. These parameters are then used for ANN training. A statistical approach is usually applied for this purpose [5]. After some experience with ANN applications, we found that an approach based on mathematical logic, namely the GUHA method (General Unary Hypotheses Automaton), is also suitable for determining markers.
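To illustrate the marker-selection idea (not GUHA itself, which is a logic-based hypothesis-generation method), here is a simple statistical stand-in in the spirit of [5]: rank each candidate input by the absolute correlation of its values with the output, and keep only the strongest candidates. The data and the number of retained markers are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n_candidates = 8
X = rng.standard_normal((200, n_candidates))
# The output depends only on candidates 0 and 3; the rest are noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(200)

# Rank each candidate input by |Pearson correlation| with the output.
scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                   for j in range(n_candidates)])
markers = np.argsort(scores)[::-1][:2]  # keep the two strongest candidates
print(sorted(markers.tolist()))         # → [0, 3]
```

A correlation ranking only captures pairwise linear dependence; GUHA instead searches systematically for supported logical hypotheses between parameters, which is what makes it attractive when analytical input–output relations are unknown.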

Besides minimising the number of elements in the input layer of the ANN, the number of neurons in the hidden layers must also be optimised. Standard pruning methods can be used for this purpose, described e.g. in [1]. We have used this method in the following applications:

  • Optimisation of the intervals between major overhauls of aircraft engines by analysis of tribodiagnostic data. Only selected types of chemical pollution in the oil need to be taken into account.
  • Prediction of bleeding in patients with chronic lymphoblastic leukemia. Only a subset of the patient parameters is important from this point of view (see [2]).
  • Optimisation of quality and reliability prediction for artificial resin production in a chemical factory. Only some of the production parameters (durations of production phases, temperatures, percentages of components, etc.) have a direct influence on the product.
  • Optimisation of prosody control in text-to-speech synthesis. This application is described in the present paper.
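As a rough illustration of hidden-layer pruning (a crude magnitude-based stand-in, not the statistical procedure of [1]): a hidden neuron whose outgoing weights are all negligible contributes almost nothing to the output and can be removed, shrinking both weight matrices. The weight values and threshold below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
h = 6
W1 = rng.standard_normal((4, h))       # input -> hidden weights
W2 = rng.standard_normal((h, 2))       # hidden -> output weights
W2[[1, 4], :] *= 1e-3                  # neurons 1 and 4 barely affect the output

# Saliency proxy: L2 norm of each hidden neuron's outgoing weight row.
saliency = np.linalg.norm(W2, axis=1)
keep = saliency > 0.05                 # prune clearly negligible neurons
W1p, W2p = W1[:, keep], W2[keep, :]
print(W2p.shape[0])                    # hidden layer shrinks from 6 to 4
```

In practice the pruned network is retrained afterwards, and the cycle is repeated until further pruning degrades the test error.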






  1. V. Šebesta. Pruning of neural networks by statistical optimization. In Proc. 6th School on Neural Networks, pages 209–214. Microcomputer, 1994.
  2. V. Šebesta and L. Straka. Determination of markers by GUHA method for neural network training. Neural Network World, 8(3):255–268, 1998.
  3. V. Šebesta and J. Tučková. Selection of important input parameters for a text-to-speech synthesis by neural networks. In Proc. International Joint Conference on Neural Networks IJCNN'99, Washington, DC, USA, 1999.
  4. P. Hájek, A. Sochorová, and J. Zvárová. GUHA for personal computers. Computational Statistics and Data Analysis, 19:149–153, 1995.
  5. A. K. Jain, R. P. W. Duin, and J. Mao. Statistical pattern recognition: A review. IEEE Trans. on PAMI, 22(1):4–37, 2000.
  6. P. Hájek et al. GUHA method — objectives and tools. In Proc. IXth SOFSEM. VUT UJEP, Brno, 1982. (In Czech.)
  7. M. P. Reidi. Controlling Segmental Duration in Speech Synthesis System. PhD thesis, ETH Zurich, Switzerland.
  8. T. J. Sejnowski and Ch. R. Rosenberg. NETtalk: A parallel network that learns to read aloud. Technical Report JHU/EECS-86/01, Johns Hopkins University.
  9. J. Terken. Variation of accent prominence within the phrase: Models and spontaneous speech data. Computing Prosody, pages 95–116, 1997.
  10. Ch. Traber. SVOX: The Implementation of the Text-to-Speech System for German. PhD thesis, ETH Zurich, Switzerland, 1995.
  11. J. Tučková and P. Horák. Fundamental frequency control in Czech text-to-speech synthesis. In Proc. Third Workshop on ECMS'97, Toulouse, France, 1997.
  12. J. Tučková and R. Vích. Fundamental frequency modelling by neural nets in the Czech text-to-speech synthesis. In Proc. IASTED Int. Conference Signal and Image Processing SIP'97, pages 85–87, New Orleans, USA, 1997.
  13. R. Vích. Pitch synchronous linear predictive Czech and Slovak text-to-speech synthesis. In Proc. 15th Int. Congress on Acoustics, Trondheim, Norway, 1995.

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Václav Šebesta (1)
  • Jana Tučková (2)
  1. Institute of Computer Science, Academy of Sciences of the Czech Republic, and Faculty of Transportation, Czech Technical University, Czech Republic
  2. Institute of Radioengineering and Electronics, Academy of Sciences of the Czech Republic, and Faculty of Electrical Engineering, Czech Technical University, Czech Republic
