Abstract
Recently, the back-propagation method has often been applied to train artificial neural networks for various pattern classification problems. An important limitation of this method, however, is that it sometimes fails to find a global minimum of the total error function of the neural network. In this paper, a hybrid algorithm combining a modified back-propagation method with the random optimization method is proposed in order to find the global minimum of the total error function in a small number of steps. It is shown that this hybrid algorithm ensures convergence to a global minimum with probability 1 in a compact region of the weight vector space. Further, several computer simulation results are given for problems such as forecasting air pollution density and stock prices.
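As a rough illustration of the general idea only, not of the paper's actual algorithm, the Python sketch below alternates plain gradient-descent steps with Matyas-style random perturbations, falling back to random search when gradient steps stall. All names and parameters here (hybrid_minimize, lr, sigma, patience) are illustrative assumptions; the paper's modified back-propagation and its precise switching rule are not reproduced.

import numpy as np

def hybrid_minimize(error, grad, w0, lr=0.1, sigma=0.5,
                    n_steps=5000, patience=50, rng=None):
    # Minimize error(w) by gradient descent, switching to random
    # search (Matyas, 1965) after `patience` consecutive failures,
    # so the search can escape a local minimum of the error surface.
    rng = np.random.default_rng() if rng is None else rng
    w, best, stall = w0.copy(), error(w0), 0
    for _ in range(n_steps):
        if stall < patience:
            # Gradient phase: a plain back-propagation-like step.
            w_trial = w - lr * grad(w)
        else:
            # Random-optimization phase: Gaussian perturbation,
            # kept only if it lowers the error.
            w_trial = w + rng.normal(0.0, sigma, size=w.shape)
        e_trial = error(w_trial)
        if e_trial < best:
            w, best, stall = w_trial, e_trial, 0
        else:
            stall += 1
    return w, best

# Toy multimodal error surface with many local minima.
error = lambda w: np.sum(w**2) + 2.0 * np.sum(1.0 - np.cos(3.0 * w))
grad = lambda w: 2.0 * w + 6.0 * np.sin(3.0 * w)
w_opt, e_opt = hybrid_minimize(error, grad, w0=np.array([2.5, -2.5]))
print(w_opt, e_opt)

The probability-1 convergence guarantee proved in the paper rests, in general, on the random-search component eventually sampling every neighbourhood of the global minimizer within the compact region; the fixed sigma above is the simplest choice that preserves that property in this sketch.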
References
N. Baba, T. Shoman, and Y. Sawaragi, "A Modified Convergence Theorem for a Random Optimization Method", Information Sciences, Vol. 13, pp. 159-166, 1977.
N. Baba, "Convergence of a Random Optimization Method for Constrained Optimization Problems", JOTA, Vol. 33, pp. 451-461, 1981.
N. Baba, "A Hybrid Algorithm for Finding a Global Minimum", Int. J. Control, Vol. 37, No. 5, pp. 929-942, 1983.
N. Baba, "A New Approach for Finding the Global Minimum of Error Function of Neural Networks", Neural Networks, Vol. 2, pp. 367-373, 1989.
V. Cerny, "Thermodynamical Approach to the Travelling Salesman Problem: An Efficient Simulation Algorithm", JOTA, Vol. 45, pp. 41-51, 1985.
R. Fletcher and C.M. Reeves, "Function Minimization by Conjugate Gradients", Computer Journal, Vol. 7, pp. 149-154, 1964.
S. Geman and C. Hwang, "Diffusions for Global Optimization", SIAM J. Control and Optimization, Vol. 24, pp. 1031-1043, 1986.
S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi, "Optimization by Simulated Annealing", IBM Thomas J. Watson Research Center Report, 1982.
D.G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley, 1973.
J. Matyas, "Random Optimization", Automation and Remote Control, Vol. 26, pp. 246-253, 1965.
J. Matyas, "Das zufällige Optimierungsverfahren und seine Konvergenz" [The Random Optimization Method and Its Convergence], Proceedings of the 5th Analogue Computation Meeting, pp. 540-544, 1968.
F.J. Solis and R.J-B. Wets, "Minimization by Random Search Techniques", Mathematics of Operations Research, Vol. 6, pp. 19-30, 1981.
D.E. Rumelhart and J.L. McClelland, Eds., Parallel Distributed Processing, MIT Press, 1986.
Chua Publishing Company, Various Data on Stock Price in Japan, 1989 and 1990.
Copyright information
© 1992 Springer-Verlag Berlin Heidelberg
Cite this paper
Baba, N. (1992). A Stochastic Optimization Approach for Training the Parameters in Neural Networks. In: Pflug, G., Dieter, U. (eds) Simulation and Optimization. Lecture Notes in Economics and Mathematical Systems, vol 374. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-48914-3_5
DOI: https://doi.org/10.1007/978-3-642-48914-3_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-54980-2
Online ISBN: 978-3-642-48914-3