Deterministic Convergence of an Online Gradient Method with Momentum

  • Conference paper
Intelligent Computing (ICIC 2006)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4113)

Abstract

An online gradient method with momentum for feedforward neural networks is considered. The learning rate is set to a constant, and the momentum coefficient is an adaptive variable. Both weak and strong convergence results are proved, as well as convergence rates for the error function and for the weights.
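To make the setting concrete, the sketch below implements online (sample-by-sample) gradient training with momentum for a small two-layer network, using a constant learning rate as stated in the abstract. The specific adaptive rule for the momentum coefficient used here, scaling it with the current gradient norm so that it shrinks as training converges, is an illustrative assumption rather than the scheme analysed in the paper; the toy data, network sizes, and all parameter names are likewise hypothetical.

```python
# Minimal sketch (not the paper's exact scheme) of online gradient training
# with momentum for a two-layer feedforward network: constant learning rate,
# and an assumed adaptive momentum coefficient tied to the gradient norm.

import numpy as np

rng = np.random.default_rng(0)

# Toy data set: J samples (x, y), x in R^n, scalar target y.
J, n, hidden = 20, 3, 5
X = rng.standard_normal((J, n))
y = np.tanh(X @ rng.standard_normal(n))  # arbitrary smooth target

# Network weights: W maps input to hidden units, v maps hidden units to output.
W = 0.1 * rng.standard_normal((hidden, n))
v = 0.1 * rng.standard_normal(hidden)

eta = 0.05          # constant learning rate
tau = 0.5           # scale of the (assumed) adaptive momentum coefficient
dW_prev = np.zeros_like(W)
dv_prev = np.zeros_like(v)

for epoch in range(200):
    for j in range(J):            # online: update after every sample
        x, t = X[j], y[j]
        h = np.tanh(W @ x)        # hidden-layer output
        out = v @ h               # network output
        err = out - t

        # Gradients of the per-sample squared error 0.5 * err**2
        g_v = err * h
        g_W = np.outer(err * v * (1.0 - h**2), x)

        # Assumed adaptive momentum coefficient: proportional to the current
        # gradient norm (capped), so it vanishes as the gradient vanishes.
        gnorm = np.sqrt(np.sum(g_W**2) + np.sum(g_v**2))
        mu = min(0.9, tau * gnorm)

        # Gradient step with momentum
        dW = -eta * g_W + mu * dW_prev
        dv = -eta * g_v + mu * dv_prev
        W += dW
        v += dv
        dW_prev, dv_prev = dW, dv

# The total error over the fixed training set is the quantity whose
# monotone decrease a deterministic convergence analysis targets.
total_err = 0.5 * np.sum((np.tanh(X @ W.T) @ v - y) ** 2)
print(f"final training error: {total_err:.6f}")
```

Tying the momentum coefficient to the gradient norm is one simple way to keep the momentum term subordinate to the descent step, which is the kind of condition deterministic (non-stochastic) convergence arguments typically rely on.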

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, N. (2006). Deterministic Convergence of an Online Gradient Method with Momentum. In: Huang, D.-S., Li, K., Irwin, G.W. (eds) Intelligent Computing. ICIC 2006. Lecture Notes in Computer Science, vol 4113. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11816157_10

  • DOI: https://doi.org/10.1007/11816157_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-37271-4

  • Online ISBN: 978-3-540-37273-8

  • eBook Packages: Computer Science, Computer Science (R0)
