Training Multi Layer Perceptron Network Using a Genetic Algorithm as a Global Optimizer

  • Heikki Maaranen
  • Kaisa Miettinen
  • Marko M. Mäkelä
Chapter
Part of the Applied Optimization book series (APOP, volume 86)

Abstract

In this paper, we introduce an approach for solving regression problems, in which one tries to reconstruct the original data from a noisy data set. We solve the problem using a genetic algorithm and a neural network, the multilayer perceptron (MLP). By constructing the network in an appropriate way, we obtain an objective function for the regression problem. We solve the resulting optimization problem with a hybrid genetic algorithm, a simple hybridization of a genetic algorithm with the Nelder-Mead simplex method, and compare the results to those of a simple multistart method.
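To make the hybrid scheme concrete, the following Python sketch trains a small one-hidden-layer MLP on noisy data by minimizing the squared reconstruction error: a real-coded genetic algorithm performs the global search over the weight vector, and SciPy's Nelder-Mead implementation refines the best individual found. This is a minimal illustration under stated assumptions, not the authors' implementation; the network size, GA operators (tournament selection, arithmetic crossover, Gaussian mutation), and all hyperparameters are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Noisy samples of an assumed target function (illustrative data only).
x = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)

N_HIDDEN = 5
N_WEIGHTS = 3 * N_HIDDEN + 1  # W1, b1, W2, b2 packed into one flat vector

def mlp(w, x):
    """One-hidden-layer MLP with tanh activation; weights packed in a flat vector."""
    W1 = w[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = w[N_HIDDEN:2 * N_HIDDEN]
    W2 = w[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    b2 = w[3 * N_HIDDEN]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def objective(w):
    """Mean squared reconstruction error of the network on the noisy data."""
    return float(np.mean((mlp(w, x) - y) ** 2))

def genetic_algorithm(pop_size=40, generations=150, sigma=0.1):
    """Real-coded GA: tournament selection, arithmetic crossover, Gaussian mutation."""
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, N_WEIGHTS))
    best = min(pop, key=objective)
    for _ in range(generations):
        fitness = np.array([objective(ind) for ind in pop])

        def tournament():
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fitness[i] < fitness[j] else pop[j]

        new_pop = []
        for _ in range(pop_size):
            alpha = rng.random()
            child = alpha * tournament() + (1.0 - alpha) * tournament()  # crossover
            child += sigma * rng.standard_normal(N_WEIGHTS)              # mutation
            new_pop.append(child)
        pop = np.array(new_pop)
        best = min([best, *pop], key=objective)
        pop[0] = best  # elitism: carry the best individual forward
    return best

# Hybrid step: refine the GA's best weights with the Nelder-Mead simplex method.
w_ga = genetic_algorithm()
result = minimize(objective, w_ga, method="Nelder-Mead",
                  options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
print(f"MSE after GA: {objective(w_ga):.5f}, after Nelder-Mead: {result.fun:.5f}")
```

The division of labor mirrors the hybridization described in the abstract: the population-based search explores the weight space globally, while the derivative-free simplex method exploits the most promising region it finds.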

Keywords

Hybrid method · Regression problem · Neural networks · Genetic algorithms · Multilayer perceptron

Copyright information

© Springer Science+Business Media New York 2003

Authors and Affiliations

  • Heikki Maaranen (1, 2)
  • Kaisa Miettinen (1, 2)
  • Marko M. Mäkelä (1, 2)
  1. Department of Mathematical Information Technology, University of Jyväskylä, Agora, Finland
  2. University of Jyväskylä, Finland
