
Scalable Neuroevolution for Reinforcement Learning

  • Faustino Gomez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7505)

Abstract

The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), has now been around for over 20 years. The main appeal of this approach is that, because it does not rely on gradient information (e.g. backpropagation), it can potentially harness the universal function approximation capability of neural networks to solve reinforcement learning (RL) tasks, where there is no “teacher” (i.e. no targets or examples of correct behavior). Instead of incrementally adjusting the synaptic weights of a single network, the space of network parameters is searched directly according to principles inspired by natural selection: (1) encode a population of networks as strings, or genomes, (2) transform them into networks, (3) evaluate them on the task, (4) generate new, hopefully better, nets by recombining those that are most “fit”, (5) return to step 2 and repeat until a solution is found. By evolving neural networks, NE can cope naturally with tasks that have continuous inputs and outputs, and, by evolving networks with feedback connections (recurrent networks), it can tackle more general tasks that require memory.
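The loop in steps (1)–(5) can be summarized in a few lines of code. The sketch below is a generic, hedged illustration only; all names (decode, evaluate, recombine) and sizes are placeholders, not the specific algorithm of any of the cited works.

    # Minimal sketch of the generic neuroevolution loop, steps (1)-(5).
    import numpy as np

    POP_SIZE, GENOME_LEN, N_GENERATIONS = 48, 120, 200

    def decode(genome):
        # step 2: map the flat genome one-to-one to a network (here a single weight matrix)
        return genome.reshape(10, 12)

    def evaluate(network):
        # step 3: run the network on the RL task and return cumulative reward
        # (placeholder fitness so the sketch runs stand-alone)
        return -np.abs(network).sum()

    def recombine(parent_a, parent_b):
        # step 4: uniform crossover plus a small Gaussian mutation
        mask = np.random.rand(GENOME_LEN) < 0.5
        return np.where(mask, parent_a, parent_b) + 0.01 * np.random.randn(GENOME_LEN)

    population = [np.random.randn(GENOME_LEN) for _ in range(POP_SIZE)]   # step 1
    for generation in range(N_GENERATIONS):                               # step 5: repeat
        fitness = [evaluate(decode(g)) for g in population]               # steps 2-3
        elite = [population[i] for i in np.argsort(fitness)[::-1][:POP_SIZE // 4]]
        children = [recombine(*[elite[i] for i in np.random.choice(len(elite), 2, replace=False)])
                    for _ in range(POP_SIZE - len(elite))]
        population = elite + children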

Early work in the field focused on evolving rather small networks (hundreds of weights) for RL benchmarks, and control problems with relatively few inputs/outputs. However, as RL tasks become more challenging, the networks required become larger, as do their genomes. The result is that scaling NE to large nets (i.e. tens of thousands of weights) is infeasible using a straightforward, direct encoding where genes map one-to-one to network components. Therefore, recent efforts have focused on indirect encodings [3,7,8,11,12,14] where the genome is translated into a network by a more complex mapping that allows small genomes to represent nets of potentially arbitrary size.
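To make the scaling argument concrete, the following short sketch counts the genes a direct one-to-one encoding needs; the layer sizes are arbitrary illustrative examples, not taken from the paper.

    # Why direct encodings scale poorly: one gene per connection weight, so genome
    # length grows with the product of adjacent layer widths (biases ignored here).
    def direct_genome_length(layer_sizes):
        return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

    print(direct_genome_length([10, 20, 2]))         # small benchmark controller: 240 genes
    print(direct_genome_length([64 * 64, 100, 10]))  # raw-vision input: 410,600 genes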

At IDSIA, in addition to state-of-the-art direct methods [1,4,6,15,16], we have developed a novel indirect encoding scheme where networks are encoded by a set of Fourier coefficients which are converted into network weight matrices via an inverse Fourier-type transform [5, 9, 10, 13]. Since there often exist network solutions whose weight matrices contain regularity (i.e. adjacent weights are correlated), the number of coefficients required to represent these networks in the frequency domain is much smaller than the number of weights (in the same way that natural images can be compressed by ignoring high-frequency components). This “compressed” encoding can reduce the search-space dimensionality by as much as two orders of magnitude, both accelerating convergence and yielding more general solutions.
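A rough sketch of the frequency-domain idea follows, using a low-frequency coefficient block and an inverse discrete cosine transform as a stand-in for the inverse Fourier-type transform; the actual coefficient layout and per-layer mapping used in [5,9,10,13] differ, so this only illustrates the compression ratio.

    # A short coefficient genome is placed in the low-frequency corner of a matrix
    # and expanded to a full weight matrix; sizes and the band layout are assumptions.
    import numpy as np
    from scipy.fft import idctn

    def decode_weights(coeffs, shape=(40, 40), band=4):
        spectrum = np.zeros(shape)
        spectrum[:band, :band] = coeffs.reshape(band, band)   # low-frequency block
        return idctn(spectrum, norm='ortho')                  # back to weight space

    genome = np.random.randn(16)       # 16 coefficients...
    W = decode_weights(genome)         # ...represent a 40 x 40 = 1600-weight matrix
    print(genome.size, W.shape)        # search dimension reduced by two orders of magnitude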

We have also explored an entirely different approach where, instead of trying to efficiently search the space of compactly encoded, large networks, NE is scaled by combining it with unsupervised learning (UL) [2]. Here, the standard NE architecture is augmented by a UL module that learns to transform the high-dimensional inputs (e.g. video) generated by the controllers being evaluated into a low-dimensional code that the controllers use instead of the raw input. Because the number of inputs becomes small after the UL dimensionality reduction, the controllers can be small as well. Research is currently underway combining these two approaches (compressed nets fed by a small number of UL features) to discover controllers for real robot manipulation using stereo vision input.
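The division of labor in this setup can be sketched as follows, using PCA as a simple stand-in for the unsupervised feature learner of [2]; the frame dimensions, feature count, and controller shape are illustrative assumptions only.

    # The UL module compresses raw frames to a few features, so the evolved
    # controller only needs weights for those features rather than for every pixel.
    import numpy as np

    class PCACompressor:
        def fit(self, frames, n_features=8):
            # frames: (num_samples, num_pixels) observations collected during evaluations
            self.mean = frames.mean(axis=0)
            _, _, vt = np.linalg.svd(frames - self.mean, full_matrices=False)
            self.components = vt[:n_features]                # principal directions
            return self

        def encode(self, frame):
            return self.components @ (frame - self.mean)     # low-dimensional code

    frames = np.random.rand(500, 64 * 64)                    # stand-in for video frames
    ul = PCACompressor().fit(frames)
    code = ul.encode(frames[0])                              # 8 features instead of 4096 pixels
    controller_weights = np.random.randn(2, code.size)       # small evolved controller: 16 weights
    action = np.tanh(controller_weights @ code)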

Keywords

Reinforcement Learning · Evolutionary Computation · Weight Matrices · Unsupervised Learning · Goto Step
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Cuccu, G., Gomez, F.: Block Diagonal Natural Evolution Strategies. In: Coello, C.A.C., Cutello, V., Deb, K., Forrest, S., Nicosia, G., Pavone, M. (eds.) PPSN XII, Part II. LNCS, vol. 7492, pp. 488–497. Springer, Heidelberg (2012)
  2. Cuccu, G., Luciw, M., Schmidhuber, J., Gomez, F.: Intrinsically motivated evolutionary search for vision-based reinforcement learning. In: Proceedings of the IEEE Conference on Development and Learning, and Epigenetic Robotics (2011)
  3. Dürr, P., Mattiussi, C., Floreano, D.: Neuroevolution with Analog Genetic Encoding. In: Runarsson, T.P., Beyer, H.-G., Burke, E.K., Merelo-Guervós, J.J., Whitley, L.D., Yao, X. (eds.) PPSN IX. LNCS, vol. 4193, pp. 671–680. Springer, Heidelberg (2006)
  4. Glasmachers, T., Schaul, T., Sun, Y., Wierstra, D., Schmidhuber, J.: Exponential Natural Evolution Strategies. In: Genetic and Evolutionary Computation Conference (GECCO), Portland, OR (2010)
  5. Gomez, F., Koutník, J., Schmidhuber, J.: Compressed Network Complexity Search. In: Coello, C.A.C., Cutello, V., Deb, K., Forrest, S., Nicosia, G., Pavone, M. (eds.) PPSN XII, Part I. LNCS, vol. 7491, pp. 316–326. Springer, Heidelberg (2012)
  6. Gomez, F., Schmidhuber, J., Miikkulainen, R.: Accelerated neural evolution through cooperatively coevolved synapses. Journal of Machine Learning Research 9, 937–965 (2008)
  7. Gruau, F.: Cellular encoding of genetic neural networks. Tech. Rep. RR-92-21, École Normale Supérieure de Lyon, Institut IMAG, Lyon, France (1992)
  8. Kitano, H.: Designing neural networks using genetic algorithms with graph generation system. Complex Systems 4, 461–476 (1990)
  9. Koutník, J., Gomez, F., Schmidhuber, J.: Evolving neural networks in compressed weight space. In: Proceedings of the Conference on Genetic and Evolutionary Computation, GECCO 2010 (2010)
  10. Koutník, J., Gomez, F., Schmidhuber, J.: Searching for minimal neural networks in Fourier space. In: Proceedings of the 4th Annual Conference on Artificial General Intelligence, pp. 61–66 (2010)
  11. Risi, S., Lehman, J., Stanley, K.O.: Evolving the placement and density of neurons in the HyperNEAT substrate. In: Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, GECCO 2010, pp. 563–570 (2010)
  12. Schmidhuber, J.: Discovering neural nets with low Kolmogorov complexity and high generalization capability. Neural Networks 10(5), 857–873 (1997)
  13. Srivastava, R.K., Schmidhuber, J., Gomez, F.: Generalized Compressed Network Search. In: Coello, C.A.C., Cutello, V., Deb, K., Forrest, S., Nicosia, G., Pavone, M. (eds.) PPSN XII, Part I. LNCS, vol. 7491, pp. 337–346. Springer, Heidelberg (2012)
  14. Stanley, K.O., Miikkulainen, R.: A taxonomy for artificial embryogeny. Artificial Life 9(2), 93–130 (2003)
  15. Sun, Y., Wierstra, D., Schaul, T., Schmidhuber, J.: Efficient Natural Evolution Strategies. In: Genetic and Evolutionary Computation Conference, GECCO (2009)
  16. Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.: Natural Evolution Strategies. In: Proceedings of the Congress on Evolutionary Computation (CEC 2008), Hong Kong. IEEE Press (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Faustino Gomez
  1. IDSIA, USI-SUPSI, Manno-Lugano, Switzerland
