Evolutionary Programming of Near-Optimal Neural Networks

  • D. Lock
  • C. Giraud-Carrier

Abstract

A genetic algorithm (GA) method is presented that evolves both the topology and the training parameters of backpropagation-trained, fully-connected, feed-forward neural networks. The GA uses a weak encoding scheme with real-valued alleles. One contribution of the proposed approach is to replace the necessary but potentially slow evolution of final weights with the more efficient evolution of a single weight-spread parameter, which is used only to set the initial weights. In addition, the co-evolution of an input mask effects a form of automatic feature selection. Preliminary experiments suggest that the resulting system produces networks that perform well under backpropagation.
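To make the encoding concrete, the following Python sketch shows one plausible genome along the lines the abstract describes: real-valued alleles for the backpropagation training parameters, a single weight-spread value used only for initialization, and a boolean input mask for feature selection. All names, value ranges, and the uniform weight initialization are illustrative assumptions, not details taken from the paper.

    import random
    from dataclasses import dataclass

    @dataclass
    class Genome:
        hidden_sizes: list      # topology: number of units per hidden layer
        learning_rate: float    # backpropagation training parameter
        momentum: float         # backpropagation training parameter
        weight_spread: float    # s: initial weights drawn uniformly from [-s, s]
        input_mask: list        # booleans selecting which input features are used

    def random_genome(n_inputs, max_layers=2, max_units=10):
        # Sample one individual with real-valued alleles.
        n_layers = random.randint(1, max_layers)
        return Genome(
            hidden_sizes=[random.randint(1, max_units) for _ in range(n_layers)],
            learning_rate=random.uniform(0.01, 1.0),
            momentum=random.uniform(0.0, 0.9),
            weight_spread=random.uniform(0.1, 2.0),
            input_mask=[random.random() < 0.5 for _ in range(n_inputs)],
        )

    def initial_weights(fan_in, fan_out, spread):
        # The GA evolves only the spread; backpropagation then trains the
        # weights, so final weights never need to be evolved directly.
        return [[random.uniform(-spread, spread) for _ in range(fan_out)]
                for _ in range(fan_in)]

Under these assumptions, fitness evaluation would build each masked network, initialize it with initial_weights, train it briefly with backpropagation, and score it on held-out data.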

Keywords

Neural Network · Genetic Algorithm · Hidden Layer · Optimal Topology · Initial Weight


Copyright information

© Springer-Verlag Wien 1999

Authors and Affiliations

  • D. Lock, RCMS Ltd, Windsor House, Colnbrook, UK
  • C. Giraud-Carrier, Department of Computer Science, University of Bristol, Bristol, UK
