Evolving Recurrent Neural Networks

  • Kristian Lindgren
  • Anders Nilsson
  • Mats G. Nordahl
  • Ingrid Råde

Abstract

An evolutionary algorithm that allows entities to increase and decrease in complexity during the evolutionary process is applied to recurrent neural networks. Recognition of various regular languages provides a suitable set of test problems.
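The abstract describes the approach only at a high level, so the sketch below is an illustrative reconstruction rather than the authors' algorithm: a population of small recurrent networks whose hidden-layer size can grow and shrink under structural mutation, selected for accuracy at recognizing a simple regular language. The choice of language (strings over {0, 1} with an even number of 1s), the network parameterization, and all mutation rates are assumptions made for this example.

```python
# Illustrative sketch only (assumed details, not the authors' algorithm):
# evolve recurrent networks of variable size to recognize a regular language.
import numpy as np

rng = np.random.default_rng(0)

def in_language(s):
    # Assumed target language: strings over {0,1} with an even number of 1s.
    return s.count("1") % 2 == 0

def random_strings(n, max_len=8):
    return ["".join(rng.choice(list("01"), size=rng.integers(1, max_len + 1)))
            for _ in range(n)]

class RNN:
    """Fully connected recurrent net with one input and a sign readout."""
    def __init__(self, hidden):
        self.W_in = rng.normal(0, 1, hidden)           # input weights
        self.W_h = rng.normal(0, 1, (hidden, hidden))  # recurrent weights
        self.W_out = rng.normal(0, 1, hidden)          # readout weights

    def accepts(self, s):
        h = np.zeros(len(self.W_out))
        for c in s:
            h = np.tanh(self.W_in * float(c) + self.W_h @ h)
        return self.W_out @ h > 0

def fitness(net, samples):
    return np.mean([net.accepts(s) == in_language(s) for s in samples])

def mutate(net):
    # Weight mutation: perturb all weights of a copy of the parent.
    child = RNN(len(net.W_out))
    child.W_in = net.W_in + rng.normal(0, 0.3, net.W_in.shape)
    child.W_h = net.W_h + rng.normal(0, 0.3, net.W_h.shape)
    child.W_out = net.W_out + rng.normal(0, 0.3, net.W_out.shape)
    # Structural mutation: occasionally add or remove a hidden unit, so
    # network complexity can both increase and decrease over the run.
    r = rng.random()
    if r < 0.1:                            # grow: add one hidden unit
        n = len(child.W_out)
        grown = RNN(n + 1)
        grown.W_in[:n], grown.W_out[:n] = child.W_in, child.W_out
        grown.W_h[:n, :n] = child.W_h
        return grown
    if r < 0.2 and len(child.W_out) > 1:   # shrink: drop one hidden unit
        n = len(child.W_out) - 1
        shrunk = RNN(n)
        shrunk.W_in, shrunk.W_out = child.W_in[:n], child.W_out[:n]
        shrunk.W_h = child.W_h[:n, :n]
        return shrunk
    return child

# Simple truncation selection: keep the top third, refill with mutants.
samples = random_strings(200)
pop = [RNN(2) for _ in range(30)]
for gen in range(50):
    pop.sort(key=lambda net: fitness(net, samples), reverse=True)
    pop = pop[:10] + [mutate(pop[rng.integers(0, 10)]) for _ in range(20)]
best = max(pop, key=lambda net: fitness(net, samples))
print(f"accuracy {fitness(best, samples):.2f}, hidden units {len(best.W_out)}")
```

In this toy setting the parity language is recognized by a two-state automaton, so very small networks suffice; the point of the sketch is the grow/shrink structural mutation, not the particular language.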

Keywords

Genetic Algorithm, Recurrent Neural Network, Topological Entropy, Regular Language, Finite Automaton

Copyright information

© Springer-Verlag/Wien 1993

Authors and Affiliations

  • Kristian Lindgren (1)
  • Anders Nilsson (1)
  • Mats G. Nordahl (2)
  • Ingrid Råde (1)
  1. Institute of Physical Resource Theory, Chalmers University of Technology, Göteborg, Sweden
  2. Santa Fe Institute, Santa Fe, USA
