
Evolutionary Learning of Recurrent Networks by Successive Orthogonal Inverse Approximations

  • Conference paper
In: Artificial Neural Nets and Genetic Algorithms

Abstract

Recurrent networks have proved to be more powerful than feedforward neural networks in terms of the classes of functions they can compute. However, because training recurrent networks is difficult, it is not clear that they offer an advantage over feedforward networks for learning from examples. This communication proposes a general computation model that lays the foundations for characterizing the classes of functions computed by feedforward nets and by convergent recurrent nets. A mathematical result then shows that convergent recurrent nets outperform feedforward nets on data-fitting problems. This result provides the basis for a new learning procedure that constrains the attractor set of a recurrent net and ensures convergent dynamics by means of orthogonal inverse tools. The learning algorithm rests on an evolutionary selection mechanism that uses this procedure as its evaluation function; it has proved robust and well suited to training convergent recurrent nets in cases where feedforward nets cannot approximate a real parameter mapping.
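The training scheme described above, iterating a convergent recurrent net to its attractor, scoring the resulting fit, and driving the search with evolutionary selection, can be sketched in miniature. The following Python toy is an illustration, not the paper's algorithm: the names (`contract`, `fixed_point`, `evolve`) are invented for the sketch, and a crude row-sum-norm contraction bound stands in for the paper's orthogonal-inverse construction of convergent dynamics.

```python
import math
import random

random.seed(0)

def contract(W, rho=0.9):
    # Crude convergence guarantee: scale W so its max row-sum norm (an upper
    # bound on the spectral radius) stays below rho < 1, making the tanh
    # update a contraction with a unique, globally attracting fixed point.
    norm = max(sum(abs(v) for v in row) for row in W)
    return [[v * rho / norm for v in row] for row in W] if norm > rho else W

def fixed_point(W, w_in, x, tol=1e-7, max_iter=300):
    # Run the recurrent dynamics s <- tanh(W s + w_in * x) to convergence;
    # the fixed point (attractor) is the net's response to input x.
    n = len(W)
    s = [0.0] * n
    for _ in range(max_iter):
        s_new = [math.tanh(sum(W[i][j] * s[j] for j in range(n)) + w_in[i] * x)
                 for i in range(n)]
        if max(abs(a - b) for a, b in zip(s, s_new)) < tol:
            return s_new
        s = s_new
    return s

def fitness(params, data):
    # Evaluation function: squared error of a linear read-out of the attractor.
    W, w_in, w_out = params
    return sum((y - sum(wo * si for wo, si in zip(w_out, fixed_point(W, w_in, x)))) ** 2
               for x, y in data)

def mutate(params, sigma=0.15):
    # Gaussian perturbation; re-apply the contraction bound so every
    # offspring still has convergent dynamics.
    W, w_in, w_out = params
    return (contract([[v + random.gauss(0, sigma) for v in row] for row in W]),
            [v + random.gauss(0, sigma) for v in w_in],
            [v + random.gauss(0, sigma) for v in w_out])

def evolve(data, n=3, pop=8, gens=25):
    # Truncation selection: keep the best quarter of the population and
    # refill it with mutated copies of the survivors.
    def rand_vec():
        return [random.uniform(-1.0, 1.0) for _ in range(n)]
    population = [(contract([rand_vec() for _ in range(n)]), rand_vec(), rand_vec())
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, data))
        survivors = population[:max(1, pop // 4)]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop - len(survivors))]
    return min(population, key=lambda p: fitness(p, data))

# Toy data-fitting task: a scalar mapping x -> sin(x) on [-1, 1].
data = [(x / 5.0, math.sin(x / 5.0)) for x in range(-5, 6)]
best = evolve(data)
print("final squared error:", fitness(best, data))
```

The key design point mirrored from the abstract is that convergence is enforced as a hard constraint on every candidate (here by rescaling after each mutation), so the evolutionary search only ever evaluates nets whose dynamics settle to a well-defined attractor.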




Copyright information

© 1998 Springer-Verlag Wien

About this paper

Cite this paper

Gégout, C. (1998). Evolutionary Learning of Recurrent Networks by Successive Orthogonal Inverse Approximations. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6492-1_83


  • DOI: https://doi.org/10.1007/978-3-7091-6492-1_83

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83087-1

  • Online ISBN: 978-3-7091-6492-1

  • eBook Packages: Springer Book Archive
