Evolutionary Learning of Recurrent Networks by Successive Orthogonal Inverse Approximations

  • C. Gégout
Conference paper


Recurrent networks have proved to be more powerful than feedforward neural networks in terms of the classes of functions they can compute. However, because training recurrent networks is a difficult task, it is not clear that these networks provide an advantage over feedforward networks for learning from examples. This communication proposes a general computation model that lays the foundations for characterizing the classes of functions computed by feedforward nets and by convergent recurrent nets. A mathematical statement then proves that convergent nets outperform feedforward nets on data-fitting problems. This result provides the basis for a new learning procedure that constrains the attractor set of a recurrent net and ensures convergent dynamics by using orthogonal inverse tools. The learning algorithm is based on an evolutionary selection mechanism that uses this procedure as its evaluation function; it has been shown to be robust and well suited to training convergent recurrent nets in cases where feedforward nets cannot approximate a real parameter mapping.
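The two algorithmic ingredients named in the abstract — an evolutionary selection mechanism over network weights, and a constraint that keeps the recurrent dynamics convergent — can be illustrated with a deliberately simplified sketch. Here a contraction projection (rescaling weight rows so the map is a contraction) stands in for the paper's orthogonal inverse construction; the network, task, and all parameter names below are illustrative assumptions, not the authors' actual procedure:

```python
import math
import random

random.seed(0)

N = 3        # number of state units
STEPS = 50   # synchronous updates run toward the attractor

def project_contraction(W, margin=0.9):
    """Rescale each row so the max absolute row sum of W stays below 1.
    Since tanh is 1-Lipschitz, the update x <- tanh(W x + b + u) is then
    a contraction, so the synchronous dynamics converge to a unique
    fixed point (Banach fixed-point theorem)."""
    out = []
    for row in W:
        s = sum(abs(w) for w in row)
        out.append([w * margin / s for w in row] if s >= margin else list(row))
    return out

def settle(W, b, u):
    """Iterate the synchronous dynamics from the origin until settled."""
    x = [0.0] * N
    for _ in range(STEPS):
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + b[i] + u[i])
             for i in range(N)]
    return x

def fitness(ind, data):
    """Data-fitting error: squared distance between the attractor's
    first coordinate and the target, summed over the examples."""
    W, b = ind
    return sum((settle(W, b, u)[0] - t) ** 2 for u, t in data)

def mutate(ind, sigma=0.1):
    """Gaussian weight perturbation, re-projected to stay convergent."""
    W, b = ind
    W2 = [[w + random.gauss(0, sigma) for w in row] for row in W]
    b2 = [v + random.gauss(0, sigma) for v in b]
    return project_contraction(W2), b2

def random_individual():
    W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
    b = [random.uniform(-0.5, 0.5) for _ in range(N)]
    return project_contraction(W), b

def evolve(data, pop_size=20, generations=40):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, data))
        parents = pop[:pop_size // 2]          # truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=lambda ind: fitness(ind, data))

# Toy task: a constant external input drives the net to an attractor
# whose first coordinate should match the target value.
data = [([0.5, 0.0, 0.0], 0.3), ([-0.5, 0.0, 0.0], -0.3)]
best = evolve(data)
```

Keeping the parents in every generation makes the best fitness non-increasing, and re-projecting after each mutation guarantees that every individual — including the returned one — has convergent dynamics, mirroring the paper's idea of constraining the attractor set rather than hoping gradient descent finds a stable solution.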







Copyright information

© Springer-Verlag Wien 1998

Authors and Affiliations

  • C. Gégout
  1. Laboratoire de l’Informatique du Parallélisme, École Normale Supérieure de Lyon; Centre de Mathématiques Appliquées, École Polytechnique, France
