Abstract
Selecting a suitable network topology and the correct parameters for the learning algorithm is a tedious part of designing an optimal artificial neural network, one that is smaller, faster, and generalizes better. In this paper we introduce the recently developed cutting angle method, a deterministic technique for global optimization, for optimizing connection weights. Neural networks are first trained using the cutting angle method, and learning is then fine-tuned (meta-learning) using conventional gradient descent or other optimization techniques. Experiments were carried out on three time series benchmarks, with evolutionary neural networks used for comparison. Our preliminary results show that the proposed deterministic approach can provide near-optimal results much faster than the evolutionary approach.
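The two-phase scheme described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the cutting angle method itself (a deterministic global optimizer for Lipschitz functions over the unit simplex) is non-trivial, so a deterministic coarse grid search stands in for the global phase here. The toy network, sine-series task, grid values, and learning rate are all illustrative assumptions.

```python
# Hybrid training sketch: a deterministic global phase followed by
# gradient fine-tuning, in the spirit of the paper's approach.
# NOTE: the grid search below is a stand-in for the cutting angle
# method; network, data, and step sizes are illustrative assumptions.
import itertools
import math

# Toy task: one-step-ahead prediction of a sine series with a
# 2-input, 2-hidden-unit, 1-output network (tanh hidden layer).
series = [math.sin(0.3 * t) for t in range(40)]
data = [((series[t], series[t + 1]), series[t + 2])
        for t in range(len(series) - 2)]

N_W = 9  # weights: 2x2 hidden + 2 hidden biases + 2 output + 1 output bias

def predict(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h0 + w[7] * h1 + w[8]

def mse(w):
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

# Phase 1 (global, deterministic): exhaustively evaluate every weight
# vector on a coarse grid and keep the best one as the starting point.
grid = [-1.0, 0.0, 1.0]
w = list(min(itertools.product(grid, repeat=N_W), key=mse))
e0 = mse(w)

# Phase 2 (local fine-tuning): plain gradient descent with a
# central-difference numerical gradient and step-halving backtracking.
def grad(w, eps=1e-5):
    g = []
    for i in range(len(w)):
        wp, wm = w[:], w[:]
        wp[i] += eps
        wm[i] -= eps
        g.append((mse(wp) - mse(wm)) / (2 * eps))
    return g

lr = 0.1
for _ in range(200):
    g = grad(w)
    cand = [wi - lr * gi for wi, gi in zip(w, g)]
    if mse(cand) <= mse(w):
        w = cand          # accept improving step
    else:
        lr *= 0.5         # backtrack: shrink step when error rises
e1 = mse(w)
```

The global phase guards against the poor local minima that a purely gradient-based start can fall into, while the gradient phase refines the coarse solution, which mirrors the division of labor between the cutting angle method and the conventional fine-tuning step in the paper.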
© 2002 Springer-Verlag Berlin Heidelberg
Beliakov, G., Abraham, A. (2002). Global Optimisation of Neural Networks Using a Deterministic Hybrid Approach. In: Abraham, A., Köppen, M. (eds) Hybrid Information Systems. Advances in Soft Computing, vol 14. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1782-9_8
Print ISBN: 978-3-7908-1480-4
Online ISBN: 978-3-7908-1782-9