Training Multi Layer Perceptron Network Using a Genetic Algorithm as a Global Optimizer
In this paper, we introduce an approach for solving a regression problem, in which the goal is to reconstruct the underlying data from a noisy data set. We solve the problem using a genetic algorithm together with a multilayer perceptron (MLP) neural network. By constructing the neural network in an appropriate way, we obtain an objective function for the regression problem. We solve the resulting optimization problem with a hybrid genetic algorithm and compare the results to those of a simple multistart method. The hybrid method is a simple hybridization of a genetic algorithm and the Nelder-Mead simplex method.
Keywords: Hybrid method · Regression problem · Neural networks · Genetic algorithms · Multilayer perceptron
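The hybrid approach described above can be sketched in a few lines: encode the MLP weights as a real-valued chromosome, use the mean squared error on the noisy data as the objective, run a genetic algorithm as the global optimizer, and then refine the best individual locally with the Nelder-Mead simplex method. The sketch below is a minimal illustration under assumed choices (one hidden layer of five tanh units, arithmetic crossover, Gaussian mutation, binary tournament selection); the paper's actual network size and GA operators may differ.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Noisy regression data set: samples of sin(2*pi*x) plus Gaussian noise
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

H = 5                # hidden units (an assumed choice)
DIM = 3 * H + 1      # chromosome layout: [W1 (H), b1 (H), W2 (H), b2 (1)]

def mlp(w, x):
    """One-hidden-layer MLP with tanh activation and linear output."""
    W1, b1 = w[:H], w[H:2 * H]
    W2, b2 = w[2 * H:3 * H], w[3 * H]
    hidden = np.tanh(np.outer(x, W1) + b1)   # shape (n, H)
    return hidden @ W2 + b2

def mse(w):
    """Objective function: mean squared error of the MLP on the data."""
    return np.mean((mlp(w, x) - y) ** 2)

def genetic_algorithm(pop_size=40, generations=100):
    pop = rng.uniform(-3.0, 3.0, size=(pop_size, DIM))
    for _ in range(generations):
        fitness = np.array([mse(ind) for ind in pop])
        new_pop = [pop[fitness.argmin()]]        # elitism: keep the best
        while len(new_pop) < pop_size:
            # Binary tournament selection for two parents
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fitness[i] < fitness[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fitness[i] < fitness[j] else pop[j]
            # Arithmetic crossover followed by Gaussian mutation
            alpha = rng.uniform(size=DIM)
            child = alpha * a + (1.0 - alpha) * b
            child += 0.1 * rng.standard_normal(DIM)
            new_pop.append(child)
        pop = np.array(new_pop)
    fitness = np.array([mse(ind) for ind in pop])
    return pop[fitness.argmin()]

best = genetic_algorithm()
# Hybrid step: local refinement of the GA solution with Nelder-Mead
result = minimize(mse, best, method="Nelder-Mead",
                  options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-8})
print(f"GA error: {mse(best):.4f}, after Nelder-Mead: {result.fun:.4f}")
```

The division of labor mirrors the paper's idea: the GA explores the weight space globally to escape poor local minima, while the derivative-free simplex search finishes the job locally, which it does far more efficiently than further GA generations would.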