Abstract
In this paper we present a novel general framework for encoding and evolving networks, called Common Genetic Encoding (CGE), that can be applied to both direct and indirect encoding methods. The encoding has important properties that make it suitable for evolving neural networks: (1) it is complete, in that it is able to represent all types of valid phenotype networks; (2) it is closed, i.e. every valid genotype represents a valid phenotype. Similarly, the encoding is closed under genetic operators, such as structural mutation and crossover, that act upon the genotype. Moreover, the encoding's genotype can be seen as a composition of several subgenomes, which allows it to inherently support the evolution of modular networks in both the direct and indirect encoding cases. To demonstrate our encoding, we present an experiment in which direct encoding is used to learn the dynamic model of a two-link arm robot. We also illustrate how the indirect-encoding features of CGE can be used in the area of artificial embryogeny.
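To make the abstract's notion of a linear, directly decodable genotype concrete, the sketch below shows a CGE-style genome as a flat gene sequence evaluated in prefix order. The gene layout (neuron genes with an arity, input genes with a name, a sigmoid activation) is a simplified illustration, not the paper's exact formalism; the full encoding also includes, for example, jumper genes for recurrent connections.

```python
import math

# Each gene is a tuple:
#   ("N", arity, weight) -- a neuron gene consuming `arity` sub-expressions
#   ("I", name,  weight) -- an input gene reading a named input value

def evaluate(genome, inputs, pos=0):
    """Evaluate the sub-network rooted at genome[pos].

    Returns (output_value, next_position). The genome is read in prefix
    order, so a neuron gene is immediately followed by the genes of its
    inputs -- this is what makes every arity-consistent gene sequence
    decode to a valid network (the closure property).
    """
    kind, arg, weight = genome[pos]
    if kind == "I":
        return weight * inputs[arg], pos + 1
    # Neuron gene: sum the outputs of `arg` sub-expressions,
    # then apply a sigmoid activation scaled by the gene's weight.
    total, pos = 0.0, pos + 1
    for _ in range(arg):
        value, pos = evaluate(genome, inputs, pos)
        total += value
    return weight * (1.0 / (1.0 + math.exp(-total))), pos

# Genotype of a tiny network: an output neuron with two inputs, the
# second of which is itself a hidden neuron (a subgenome of its own).
genome = [
    ("N", 2, 1.0),     # output neuron
    ("I", "x", 0.5),   #   direct input x
    ("N", 1, 0.8),     #   hidden neuron (start of a subgenome)
    ("I", "y", -0.3),  #     input y
]

output, consumed = evaluate(genome, {"x": 1.0, "y": 2.0})
assert consumed == len(genome)  # a valid genotype is consumed exactly
```

Note that the hidden neuron and its inputs form a contiguous slice of the genome; swapping such slices between genomes is one way the subgenome structure supports modular crossover.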
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Kassahun, Y., Metzen, J.H., de Gea, J., Edgington, M., Kirchner, F. (2007). A General Framework for Encoding and Evolving Neural Networks. In: Hertzberg, J., Beetz, M., Englert, R. (eds) KI 2007: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol 4667. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74565-5_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74564-8
Online ISBN: 978-3-540-74565-5