A Stochastic Optimization Approach for Training the Parameters in Neural Networks
Recently, the back-propagation method has often been applied to adapt artificial neural networks to various pattern classification problems. However, an important limitation of this method is that it sometimes fails to find a global minimum of the total error function of the neural network. In this paper, a hybrid algorithm which combines the modified back-propagation method and the random optimization method is proposed in order to find the global minimum of the total error function of the neural network in a small number of steps. It is shown that this hybrid algorithm ensures convergence to a global minimum with probability 1 in a compact region of the weight vector space. Further, several computer simulation results are given for problems such as forecasting air pollution density and stock prices.
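The hybrid scheme described above can be sketched in a few lines: run ordinary gradient (back-propagation) steps, and whenever a step fails to reduce the total error, try a random perturbation of the weights and keep it only if the error decreases (the random optimization step). The sketch below uses a single sigmoid unit with squared error; the function names, the Gaussian perturbation width `sigma`, and the toy data are illustrative assumptions, not details taken from the paper.

```python
import random, math

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def total_error(w, data):
    # Mean squared error of a single sigmoid neuron: y = sigmoid(w0*x + w1)
    return sum((sigmoid(w[0] * x + w[1]) - t) ** 2 for x, t in data) / len(data)

def grad(w, data):
    # Analytic gradient of the error above (back-propagation for one neuron)
    g = [0.0, 0.0]
    for x, t in data:
        y = sigmoid(w[0] * x + w[1])
        d = 2 * (y - t) * y * (1 - y) / len(data)
        g[0] += d * x
        g[1] += d
    return g

def hybrid_train(data, steps=2000, lr=0.5, sigma=0.5):
    # Hybrid of gradient descent and random optimization (illustrative sketch)
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    for _ in range(steps):
        # Back-propagation (gradient descent) step
        g = grad(w, data)
        w = [w[0] - lr * g[0], w[1] - lr * g[1]]
        e = total_error(w, data)
        # Random optimization step: try a Gaussian perturbation of the
        # weights and accept it only if the total error decreases
        trial = [wi + random.gauss(0, sigma) for wi in w]
        if total_error(trial, data) < e:
            w = trial
    return w, total_error(w, data)

# Toy task: learn a step-like mapping from scalar input to {0, 1}
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, err = hybrid_train(data)
```

Because the random step is only accepted when it lowers the error, the weight sequence never moves uphill, and the Gaussian perturbations give a nonzero probability of escaping any non-global local minimum inside a compact region, which is the intuition behind the probability-1 convergence result.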
Keywords: Neural Network, Global Minimum, Weight Vector, Stock Price, Line Search