Global Optimisation of Neural Networks Using a Deterministic Hybrid Approach

  • Conference paper
Hybrid Information Systems

Part of the book series: Advances in Soft Computing (AINSC, volume 14)

Abstract

Selecting the topology of a neural network and the correct parameters for its learning algorithm is a tedious task when designing an optimal artificial neural network: one that is smaller, faster, and generalizes better. In this paper we introduce the recently developed cutting angle method, a deterministic technique, for the global optimization of connection weights. Neural networks are first trained using the cutting angle method, and the learning is then fine-tuned (meta-learning) using conventional gradient descent or other optimization techniques. Experiments were carried out on three time-series benchmarks, with evolutionary neural networks used for comparison. Our preliminary results show that the proposed deterministic approach can provide near-optimal results much faster than the evolutionary approach.
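
As a rough illustration of the two-phase scheme described above, the sketch below (Python) first performs a deterministic global search over candidate weight vectors and then fine-tunes the best candidate with plain gradient descent. The global phase here is a simple deterministic multi-start over fixed initialisation scales, a stand-in for the actual cutting angle method (which constructs piecewise-linear underestimates of the objective and is considerably more involved); the toy network, the data, and helper names such as init_weights and grad_step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the hybrid scheme: a deterministic global phase over
# connection weights, followed by gradient-descent fine-tuning.
# NOTE: the global phase below is a deterministic multi-start stand-in,
# NOT the cutting angle method itself.

import numpy as np

def init_weights(n_in, n_hidden, scale, seed=0):
    """Deterministic weight initialisation at a given scale (fixed seed)."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-scale, scale, (n_hidden, n_in))
    b1 = np.zeros(n_hidden)
    W2 = rng.uniform(-scale, scale, n_hidden)
    return [W1, b1, W2, 0.0]

def forward(params, X):
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1.T + b1)          # hidden-layer activations
    return H @ W2 + b2, H

def mse(params, X, y):
    pred, _ = forward(params, X)
    return np.mean((pred - y) ** 2)

def grad_step(params, X, y, lr=0.05):
    """One full-batch gradient-descent step (manual backprop, tanh units)."""
    W1, b1, W2, b2 = params
    pred, H = forward(params, X)
    err = 2.0 * (pred - y) / len(y)     # d(MSE)/d(pred)
    gW2 = H.T @ err
    gb2 = err.sum()
    dH = np.outer(err, W2) * (1.0 - H ** 2)   # back through tanh
    gW1 = dH.T @ X
    gb1 = dH.sum(axis=0)
    return [W1 - lr * gW1, b1 - lr * gb1, W2 - lr * gW2, b2 - lr * gb2]

# Toy task: one-step-ahead prediction of a sine series from 3 lagged inputs.
t = np.arange(203)
series = np.sin(0.3 * t)
X = np.stack([series[i:i + 3] for i in range(200)])
y = series[3:203]

# Phase 1: deterministic global phase (stand-in for the cutting angle method).
candidates = [init_weights(3, 5, s) for s in (0.1, 0.5, 1.0, 2.0)]
best = min(candidates, key=lambda p: mse(p, X, y))

# Phase 2: fine-tune the selected candidate with conventional gradient descent.
for _ in range(2000):
    best = grad_step(best, X, y)

print("final MSE:", mse(best, X, y))
```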

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Beliakov, G., Abraham, A. (2002). Global Optimisation of Neural Networks Using a Deterministic Hybrid Approach. In: Abraham, A., Köppen, M. (eds) Hybrid Information Systems. Advances in Soft Computing, vol 14. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1782-9_8

  • DOI: https://doi.org/10.1007/978-3-7908-1782-9_8

  • Publisher Name: Physica, Heidelberg

  • Print ISBN: 978-3-7908-1480-4

  • Online ISBN: 978-3-7908-1782-9
