Rates of Learning in Gradient and Genetic Training of Recurrent Neural Networks
In this paper, gradient descent and genetic techniques are used for on-line training of recurrent neural networks. A singular perturbation model for gradient learning of fixed points introduces the problem of the rate of learning, formulated as the relative speed of evolution of the network and of the adaptation process, and motivates an analogous study when genetic training is used. Bounds on the rate of learning that guarantee convergence are obtained for both gradient and genetic training. Computer simulations confirm the theoretical predictions.
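The two-timescale idea described above can be illustrated with a minimal sketch: a single-neuron recurrent network whose state evolves at every step, while a parameter is adapted on-line at a much slower rate so that the network's stable fixed point approaches a target. The network, the adapted parameter, and the update rule here are illustrative assumptions, not the paper's actual experimental setup; in particular, the update omits the fixed-point sensitivity factor and serves only to show that the relative rate of learning `eps` governs convergence.

```python
import math

def train_fixed_point(target, eps, steps=20000):
    # Single-neuron recurrent network x <- tanh(w*x + b); the bias b is
    # adapted on-line so that the network's stable fixed point approaches
    # `target`. (Illustrative setup, not the paper's experiments.)
    #
    # `eps` plays the role of the rate of learning: the state x evolves
    # at every step (fast dynamics), while b moves only by O(eps) per
    # step (slow adaptation) -- the singular perturbation regime.
    w, b, x = 0.5, 0.0, 0.0        # |w| < 1 keeps the fixed point stable
    for _ in range(steps):
        x = math.tanh(w * x + b)   # fast network evolution
        b -= eps * (x - target)    # slow on-line adaptation of the bias
    return x

# Small eps (slow adaptation relative to the network) converges;
# a too-large eps violates the timescale separation and the coupled
# dynamics oscillate instead of settling at the target.
x_slow = train_fixed_point(0.3, eps=0.01)
```

With `eps=0.01` the state settles at the target fixed point to high accuracy; raising `eps` past the stability bound of the coupled system destroys convergence, which is the qualitative phenomenon the bounds in the paper make precise.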
Keywords: Genetic Algorithm, Adaptation Process, Recurrent Neural Network, Recurrent Network, Stable Equilibrium Point