
A Reliable Resilient Backpropagation Method with Gradient Ascent

  • Xugang Wang
  • Hongan Wang
  • Guozhong Dai
  • Zheng Tang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4114)

Abstract

While the Resilient Backpropagation (RPROP) method can converge to a solution extremely fast, it suffers from the local minima problem. In this paper, a fast and reliable learning algorithm for multi-layer artificial neural networks is proposed. The learning model has two phases: an RPROP phase and a gradient ascent phase. Repeating the two phases helps the network escape from local minima. The proposed algorithm is tested on several benchmark problems, on all of which it is shown to escape from local minima and to converge faster than both Backpropagation with momentum and simulated annealing techniques.
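
The paper's exact procedure is not reproduced on this page, but the two-phase idea described in the abstract can be sketched in Python as follows. This is an illustrative sketch only: the grad_fn interface, the stall-detection criterion used to decide when a local minimum has been reached, the number of ascent steps, and the numeric constants (eta_plus = 1.2 and eta_minus = 0.5 are the standard RPROP defaults) are all assumptions, not the authors' formulation.

    # Hypothetical sketch of the two-phase idea: standard RPROP steps,
    # interleaved with a short gradient-ascent phase when training stalls.
    # The loss interface and the stall criterion are illustrative
    # assumptions, not the paper's exact method.
    import numpy as np

    def rprop_with_ascent(grad_fn, w, n_epochs=500,
                          eta_plus=1.2, eta_minus=0.5,
                          step_init=0.1, step_min=1e-6, step_max=1.0,
                          stall_window=20, stall_tol=1e-5, ascent_steps=5):
        """grad_fn(w) -> (loss, gradient); returns the trained weights."""
        step = np.full_like(w, step_init)   # per-weight step sizes
        prev_grad = np.zeros_like(w)
        history = []
        for epoch in range(n_epochs):
            loss, grad = grad_fn(w)
            history.append(loss)

            # Gradient-ascent phase: if the loss has stagnated (a crude
            # local-minimum test, our assumption), climb uphill for a few
            # steps to push the network out of the current basin.
            stalled = (len(history) > stall_window and
                       history[-stall_window] - loss < stall_tol)
            if stalled:
                for _ in range(ascent_steps):
                    loss, grad = grad_fn(w)
                    w = w + step * np.sign(grad)  # ascend, not descend
                prev_grad = np.zeros_like(w)      # reset RPROP memory
                history.clear()
                continue

            # RPROP phase: grow a weight's step size while its gradient
            # keeps the same sign, shrink it on a sign change, then move
            # each weight against the sign of its gradient.
            sign_change = prev_grad * grad
            step = np.where(sign_change > 0,
                            np.minimum(step * eta_plus, step_max), step)
            step = np.where(sign_change < 0,
                            np.maximum(step * eta_minus, step_min), step)
            grad = np.where(sign_change < 0, 0.0, grad)  # skip reversed weights
            w = w - step * np.sign(grad)
            prev_grad = grad
        return w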

Keywords

Local Minimum, Gain Parameter, Simulated Annealing Method, Gradient Ascent, Local Minimum Problem


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Xugang Wang
    • 1
  • Hongan Wang
    • 1
  • Guozhong Dai
    • 1
  • Zheng Tang
    • 2
  1. Intelligence Engineering Laboratory, Institute of Software, The Chinese Academy of Sciences, Beijing 100080, China
  2. Faculty of Engineering, Toyama University, Toyama-shi 930-8555, Japan
