
Global Convergence Property of Error Back-Propagation Method for Recurrent Neural Networks

  • Keiji Tatsumi
  • Tetsuzo Tanino
  • Masao Fukushima
Part of the International Series in Operations Research & Management Science book series (ISOR, volume 43)

Abstract

The Error Back-Propagation (BP) method and its variations are popular methods for the supervised learning of neural networks. The BP method can be regarded as an approximate steepest descent method for minimizing the sum of error functions, and it uses the exact derivative of each error function. Consequently, BP and its variations have the global convergence property under some natural conditions. The Real Time Recurrent Learning (RTRL) method is a variation of BP designed for recurrent neural networks (RNNs), which are well suited to handling time sequences. Because real-time learning prevents this method from using the exact outputs of the network, approximate derivatives of each error function are used to update the weights. Therefore, although RTRL is widely used in practice, its global convergence property has not been established. In this paper, we show that RTRL has the global convergence property under almost the same conditions as other variations of BP.
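The RTRL update described above can be made concrete with a short sketch. The following Python code is a minimal illustration, not code from the paper; the network architecture (a single tanh recurrent layer with a linear readout and squared-error loss) and all names are assumptions made for the example. It shows how RTRL carries sensitivity tensors forward in time and updates the weights online with the resulting approximate gradients.

    import numpy as np

    def rtrl_train(inputs, targets, n_hidden=8, lr=0.01, seed=0):
        """Online RTRL training of a small fully recurrent network (sketch only)."""
        rng = np.random.default_rng(seed)
        n_in = inputs.shape[1]
        n_out = targets.shape[1]

        W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent weights
        U = rng.normal(scale=0.1, size=(n_hidden, n_in))       # input weights
        V = rng.normal(scale=0.1, size=(n_out, n_hidden))      # readout weights

        h = np.zeros(n_hidden)
        # Sensitivities dh[i]/dW[j,k] and dh[i]/dU[j,k], carried forward in time.
        P_W = np.zeros((n_hidden, n_hidden, n_hidden))
        P_U = np.zeros((n_hidden, n_hidden, n_in))

        for x, d in zip(inputs, targets):
            h_prev = h
            a = W @ h_prev + U @ x          # pre-activation
            h = np.tanh(a)
            y = V @ h
            e = y - d                        # output error at this time step

            # Recursive sensitivity update.  Because the weights change at
            # every step, P_W and P_U are only approximate derivatives.
            D = 1.0 - h ** 2                 # derivative of tanh at a
            P_W = D[:, None, None] * (
                np.einsum('il,ljk->ijk', W, P_W)
                + np.einsum('ij,k->ijk', np.eye(n_hidden), h_prev))
            P_U = D[:, None, None] * (
                np.einsum('il,ljk->ijk', W, P_U)
                + np.einsum('ij,k->ijk', np.eye(n_hidden), x))

            # Approximate gradients of the instantaneous error 0.5 * ||e||^2.
            dE_dh = V.T @ e
            grad_W = np.einsum('i,ijk->jk', dE_dh, P_W)
            grad_U = np.einsum('i,ijk->jk', dE_dh, P_U)
            grad_V = np.outer(e, h)

            # Real-time (online) weight update.
            W -= lr * grad_W
            U -= lr * grad_U
            V -= lr * grad_V

        return W, U, V

Because the weights are updated at every time step, the carried sensitivities refer to slightly outdated weights; this is the source of the approximation whose effect on global convergence the paper analyzes.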

Keywords

Real time recurrent learning · Global convergence property · Recurrent neural network



Copyright information

© Springer Science+Business Media New York 2002

Authors and Affiliations

  • Keiji Tatsumi (1)
  • Tetsuzo Tanino (1)
  • Masao Fukushima (2)
  1. Graduate School of Engineering, Osaka University, Suita, Osaka, Japan
  2. Graduate School of Informatics, Kyoto University, Kyoto, Japan
