Recent Developments on Convergence of Online Gradient Methods for Neural Network Training

  • Conference paper
Advances in Neural Networks – ISNN 2004 (ISNN 2004)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3173)


Abstract

A survey is presented of some recent developments on the convergence of online gradient methods for training feedforward neural networks such as BP neural networks. Unlike most existing convergence results, which are probabilistic and non-monotone in nature, the convergence results presented here are deterministic and monotone. Also considered are the cases where a momentum term or a penalty term is added to the error function to improve the performance of the training procedure.

Partly supported by the National Natural Science Foundation of China.
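To make the training procedure concrete, the following is a minimal sketch (not the authors' code) of an online gradient method for a one-hidden-layer feedforward (BP) network: the weights are updated after each training sample, with an optional momentum term and an optional L2 penalty term added to the error function. The network size, learning rate, momentum factor, penalty coefficient, and the XOR usage example are all illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def online_gradient_train(X, y, n_hidden=4, eta=0.5, momentum=0.0,
                          penalty=0.0, epochs=1000, seed=0):
    """One-hidden-layer network trained by per-sample (online) gradient updates.

    Per-sample error: 0.5*(output - target)^2 + 0.5*penalty*||weights||^2.
    The momentum term adds a fraction of the previous weight update.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(n_hidden, X.shape[1]))  # hidden-layer weights
    W2 = rng.normal(scale=0.5, size=n_hidden)                # output-layer weights
    dW1_prev, dW2_prev = np.zeros_like(W1), np.zeros_like(W2)

    for _ in range(epochs):
        for x, t in zip(X, y):            # online: one sample at a time
            h = sigmoid(W1 @ x)           # hidden activations
            out = sigmoid(W2 @ h)         # scalar network output

            # Backpropagated gradients of the per-sample error,
            # including the derivative of the L2 penalty term.
            g_out = (out - t) * out * (1.0 - out)
            grad_W2 = g_out * h + penalty * W2
            g_hid = g_out * W2 * h * (1.0 - h)
            grad_W1 = np.outer(g_hid, x) + penalty * W1

            # Gradient step with momentum.
            dW2 = -eta * grad_W2 + momentum * dW2_prev
            dW1 = -eta * grad_W1 + momentum * dW1_prev
            W2, W1 = W2 + dW2, W1 + dW1
            dW2_prev, dW1_prev = dW2, dW1
    return W1, W2

# Illustrative usage on the XOR problem.
if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    W1, W2 = online_gradient_train(X, y, momentum=0.8, penalty=1e-4, epochs=2000)
    print(np.round(sigmoid(W2 @ sigmoid(W1 @ X.T)), 2))

The momentum and penalty variants mentioned in the abstract correspond, in this sketch, to the last term of each weight update and to the penalty contribution in the gradients, respectively; the deterministic and monotone convergence results surveyed in the paper concern this kind of per-sample update scheme.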

Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wu, W. et al. (2004). Recent Developments on Convergence of Online Gradient Methods for Neural Network Training. In: Yin, F.L., Wang, J., Guo, C. (eds) Advances in Neural Networks – ISNN 2004. ISNN 2004. Lecture Notes in Computer Science, vol 3173. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-28647-9_40

  • DOI: https://doi.org/10.1007/978-3-540-28647-9_40

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22841-7

  • Online ISBN: 978-3-540-28647-9
