Non-Linear Adaptive Prediction of Speech with a Pipelined Recurrent Neural Network and Advanced Learning Algorithms

  • Danilo Mandic
  • Jens Baltersee
  • Jonathon Chambers
Part of the Applied and Numerical Harmonic Analysis book series (ANHA)


New learning algorithms are presented for an adaptive non-linear forward predictor based on a Pipelined Recurrent Neural Network (PRNN). A computationally efficient Gradient Descent (GD) algorithm, as well as a novel Extended Recursive Least Squares (ERLS) algorithm, are tested on the predictor. Simulation studies, based on three speech signals which are publicly available on the World Wide Web (WWW), show that the non-linear predictor does not perform satisfactorily when the previously proposed gradient descent algorithm is used. The steepest descent algorithm is shown to yield poor performance in terms of the prediction error gain, whereas consistently improved results are obtained with the ERLS algorithm. The merit of the non-linear predictor structure is confirmed by its yielding approximately 2 dB higher prediction gain than a purely linear predictor using the conventional Recursive Least Squares (RLS) algorithm.
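The prediction gain quoted above is the standard figure of merit for adaptive predictors: the ratio, in dB, of the variance of the input speech to the variance of the prediction error. A minimal sketch of this computation (the function name and the toy residual signal are illustrative assumptions, not part of the chapter):

```python
import numpy as np

def prediction_gain_db(signal, error):
    """Prediction gain Rp = 10 * log10(var(signal) / var(error)), in dB.

    A larger gain means the predictor removes more of the signal's
    variance; a ~2 dB difference is the margin reported between the
    PRNN-based predictor and the purely linear RLS predictor.
    """
    return 10.0 * np.log10(np.var(signal) / np.var(error))

# Toy example: a residual with one quarter of the signal's variance
# corresponds to a gain of 10 * log10(4) ~= 6.02 dB.
rng = np.random.default_rng(0)
s = rng.standard_normal(10_000)   # stand-in for a speech frame
e = 0.5 * s                       # hypothetical prediction residual
print(round(prediction_gain_db(s, e), 2))  # -> 6.02
```

Under this measure, "2 dB higher gain" means the non-linear predictor's residual variance is roughly 10^(2/10) ≈ 1.58 times smaller than that of the linear RLS predictor on the same signal.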


Speech Signal · Extended Kalman Filter · Least Mean Square · Recurrent Neural Network · Recursive Least Squares
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer Science+Business Media New York 1998

Authors and Affiliations

  • Danilo Mandic, Signal Processing Section, Department of Electrical Engineering, Imperial College of Science, Technology and Medicine, London, UK
  • Jens Baltersee, Integrated Systems for Signal Processing, Aachen University of Technology, Aachen, Germany
  • Jonathon Chambers, Signal Processing Section, Department of Electrical Engineering, Imperial College of Science, Technology and Medicine, London, UK
