Abstract
An online gradient method with momentum for feedforward neural networks is considered. The learning rate is taken to be a constant, while the momentum coefficient is an adaptive variable. Both weak and strong convergence results are proved, as well as convergence rates for the error function and for the weights.
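The update rule the abstract refers to can be illustrated with a short sketch. This is a minimal, hypothetical implementation, not the paper's construction: the toy quadratic objective, the constant learning rate eta, and the gradient-norm-based choice of the momentum coefficient alpha_k are assumptions made here for illustration only; the paper's actual adaptive rule and convergence conditions are given in the full text.

import numpy as np

# One online gradient step with momentum:
#   w_{k+1} = w_k - eta * grad + alpha_k * (w_k - w_{k-1})
# eta is a fixed constant; alpha_k is recomputed at every step.
def momentum_step(w, w_prev, grad, eta=0.1, mu=0.5):
    # Hypothetical adaptive momentum coefficient: tie alpha_k to the
    # gradient norm so the momentum term fades as training converges.
    alpha_k = mu * eta * np.linalg.norm(grad)
    w_next = w - eta * grad + alpha_k * (w - w_prev)
    return w_next, w

# Toy example: minimize E(w) = 0.5 * ||w||^2, whose gradient is w.
w = w_prev = np.array([1.0, -2.0])
for _ in range(100):
    w, w_prev = momentum_step(w, w_prev, grad=w)
print(w)  # tends to the minimizer (0, 0)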
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Zhang, N. (2006). Deterministic Convergence of an Online Gradient Method with Momentum. In: Huang, D.-S., Li, K., Irwin, G.W. (eds.) Intelligent Computing. ICIC 2006. Lecture Notes in Computer Science, vol. 4113. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11816157_10
Print ISBN: 978-3-540-37271-4
Online ISBN: 978-3-540-37273-8