Abstract
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The algorithm resembles back-propagation in that an error function is minimized using a gradient-based method, but the optimization is carried out in the hidden part of state space either instead of, or in addition to, weight space. A straightforward adaptation of this method to feedforward networks offers an alternative to training by conventional back-propagation. Computational results are presented for some simple dynamical training problems, one of which requires response to a signal 100 time steps in the past.
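The abstract's central idea, optimizing in the hidden part of state space rather than only in weight space, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's exact formulation: the activations of every unit at every time step are treated as free "moving target" variables, and a single error, combining the output error on the visible units with a consistency error between the targets and what the network dynamics actually produce, is minimized by gradient descent over both the targets and the weights. The tanh network, the trajectory, and all variable names here are assumptions made for the sketch.

```python
import numpy as np

# Hypothetical sketch of a "moving targets" style scheme for a fully
# recurrent discrete-time network x[t] = f(W @ x[t-1]). The targets X
# (one vector per time step) are free variables. The error is
#   E = 0.5 * sum (f(W X[t-1]) - X[t])^2   (consistency, all units)
#     + 0.5 * sum (X_out[t] - desired[t])^2 (output error, visible units)
# and gradient descent is run on both W and X.

rng = np.random.default_rng(0)

def f(a):
    return np.tanh(a)

def df(a):
    return 1.0 - np.tanh(a) ** 2

T, n_hidden, n_out = 5, 3, 1
N = n_hidden + n_out
W = 0.1 * rng.standard_normal((N, N))
# Desired output trajectory, scaled to stay inside tanh's range.
desired = 0.8 * np.sin(np.linspace(0, np.pi, T)).reshape(T, n_out)
# Moving targets: free variables for every unit at every time step.
X = 0.1 * rng.standard_normal((T, N))

lr = 0.1
for step in range(2000):
    A = X[:-1] @ W.T              # pre-activations driven by targets at t-1
    pred = f(A)                   # what the dynamics produce at t = 1..T-1
    cons = pred - X[1:]           # consistency error on all units
    out = X[:, n_hidden:] - desired  # output error on visible units
    # Gradients of E with respect to W and X.
    gW = (cons * df(A)).T @ X[:-1]
    gX = np.zeros_like(X)
    gX[1:] -= cons                          # X[t] as a target of step t
    gX[:-1] += (cons * df(A)) @ W           # X[t] as the input to step t+1
    gX[:, n_hidden:] += out                 # output error on visible units
    W -= lr * gW
    X -= lr * gX

error = (0.5 * np.sum((f(X[:-1] @ W.T) - X[1:]) ** 2)
         + 0.5 * np.sum((X[:, n_hidden:] - desired) ** 2))
```

After training, the weights alone reproduce the trajectory when the network is run forward from the learned initial state, since the consistency term forces the targets to be a genuine orbit of the dynamics.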
© 1990 Springer-Verlag Berlin Heidelberg
Rohwer, R. (1990). The “moving targets” training algorithm. In: Almeida, L.B., Wellekens, C.J. (eds) Neural Networks. EURASIP 1990. Lecture Notes in Computer Science, vol 412. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-52255-7_31
DOI: https://doi.org/10.1007/3-540-52255-7_31
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-52255-3
Online ISBN: 978-3-540-46939-1