The “moving targets” training algorithm

  • Part II: Theory, Algorithms
  • Conference paper
Neural Networks (EURASIP 1990)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 412))

Abstract

A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The algorithm resembles back-propagation in that an error function is minimized using a gradient-based method, but the optimization is carried out in the hidden part of state space either instead of, or in addition to, weight space. A straightforward adaptation of this method to feedforward networks offers an alternative to training by conventional back-propagation. Computational results are presented for some simple dynamical training problems, one of which requires response to a signal 100 time steps in the past.
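
To make the idea concrete, here is a minimal numerical sketch in NumPy, not the paper's exact formulation: the hidden activations at every time step are kept as explicit free variables (the "moving targets"), and gradient descent is run jointly on the weights and on those targets, penalising both the output error and the mismatch between each target and the state the network actually computes from the previous step's clamped values. The toy delayed-copy task, the learning rates, and all variable names are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy task (assumed for illustration): reproduce the binary input signal
# `delay` steps later, a scaled-down stand-in for the paper's long-delay problems.
T, n_in, n_hid, n_out, delay = 20, 1, 4, 1, 3
x = rng.integers(0, 2, size=(T, n_in)).astype(float)
y = np.vstack([np.zeros((delay, n_out)), x[:-delay]])       # desired outputs

n_units = n_in + n_hid + n_out
HID = slice(n_in, n_in + n_hid)
OUT = slice(n_in + n_hid, n_units)

W = 0.1 * rng.standard_normal((n_units, n_units + 1))       # last column acts as a bias
targets = rng.uniform(0.2, 0.8, size=(T, n_hid))            # the hidden "moving targets"
lr_w, lr_t = 0.1, 0.1                                        # learning rates (assumed values)

def clamped_state(t):
    """State at step t with inputs, hidden targets and desired outputs clamped."""
    return np.concatenate([x[t], targets[t], y[t], [1.0]])   # trailing 1.0 feeds the bias column

for epoch in range(3000):
    dW = np.zeros_like(W)
    dT = np.zeros_like(targets)
    total = 0.0
    for t in range(1, T):
        s_prev = clamped_state(t - 1)
        z = sigmoid(W @ s_prev)                 # one-step prediction from the clamped state
        err = np.zeros(n_units)
        err[HID] = z[HID] - targets[t]          # hidden units should land on their targets
        err[OUT] = z[OUT] - y[t]                # output units should land on the desired outputs
        total += np.sum(err ** 2)
        delta = 2.0 * err * z * (1.0 - z)       # error signal back through the sigmoid
        dW += np.outer(delta, s_prev)
        dT[t] -= 2.0 * err[HID]                 # targets[t] is the desired hidden value at step t
        dT[t - 1] += W[:, HID].T @ delta        # targets[t-1] also feeds the prediction at step t
    W -= lr_w * dW                              # descend jointly in weight space ...
    targets -= lr_t * dT                        # ... and in the hidden part of state space
    targets = np.clip(targets, 1e-3, 1.0 - 1e-3)  # keep targets inside the sigmoid's range

print("final summed squared error:", total)
```

Once both errors are driven to zero, each hidden target coincides with the state the network computes on its own, so running the network freely reproduces the trained trajectory; that is the sense in which the targets "move" during training rather than being fixed in advance.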

Author information

R. Rohwer

Editor information

Luis B. Almeida, Christian J. Wellekens

Copyright information

© 1990 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rohwer, R. (1990). The “moving targets” training algorithm. In: Almeida, L.B., Wellekens, C.J. (eds) Neural Networks. EURASIP 1990. Lecture Notes in Computer Science, vol 412. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-52255-7_31

  • DOI: https://doi.org/10.1007/3-540-52255-7_31

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-52255-3

  • Online ISBN: 978-3-540-46939-1

  • eBook Packages: Springer Book Archive
