Neural Random Access Machines Optimized by Differential Evolution

  • Marco Baioletti
  • Valerio Belli
  • Gabriele Di Bari
  • Valentina Poggioni
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11298)

Abstract

Recently, a research trend has emerged in which algorithms are learned by means of deep learning techniques. Most of these approaches are different implementations of the controller-interface abstraction: they use a neural controller as a “processor” and provide different interfaces for input, output, and memory management. Within this trend, we consider the Neural Random-Access Machine (NRAM) of particular interest, because this model can also solve problems that require indirect memory references. In this paper we propose a version of the Neural Random-Access Machine in which the core neural controller is trained with the Differential Evolution meta-heuristic instead of the usual backpropagation algorithm. Experimental results showing that this approach is effective and competitive are also presented.
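To make the gradient-free training idea concrete, the following is a minimal sketch of DE/rand/1/bin optimizing the flattened weight vector of a small neural controller. Everything here is an illustrative assumption rather than the paper's actual setup: the toy regression task, the 2-8-1 network size, and the hyper-parameters NP, F, and CR are placeholders, and the NRAM-specific machinery (registers, memory tape, gate circuits) is omitted.

# Minimal sketch: Differential Evolution (DE/rand/1/bin) evolving the flat
# parameter vector of a tiny neural controller. Task, network size and
# hyper-parameters are illustrative assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn y = x1 + x2, a stand-in for an NRAM-style objective.
X = rng.uniform(-1, 1, size=(64, 2))
y = X.sum(axis=1)

HIDDEN = 8
DIM = 2 * HIDDEN + HIDDEN + HIDDEN + 1  # W1 (2xH), b1 (H), W2 (H), b2 (1)

def forward(theta, X):
    """Decode a flat parameter vector into a 2-H-1 MLP and evaluate it."""
    W1 = theta[:2 * HIDDEN].reshape(2, HIDDEN)
    b1 = theta[2 * HIDDEN:3 * HIDDEN]
    W2 = theta[3 * HIDDEN:4 * HIDDEN]
    b2 = theta[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def fitness(theta):
    """Mean squared error; DE minimizes this without any gradients."""
    return np.mean((forward(theta, X) - y) ** 2)

NP, F, CR, GENS = 40, 0.5, 0.9, 300  # population size, scale factor, crossover rate
pop = rng.uniform(-1, 1, size=(NP, DIM))
fit = np.array([fitness(p) for p in pop])

for _ in range(GENS):
    for i in range(NP):
        # DE/rand/1 mutation: combine three distinct individuals (all != i).
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        # Binomial crossover: mix mutant and target genes component-wise.
        mask = rng.random(DIM) < CR
        mask[rng.integers(DIM)] = True  # guarantee at least one mutant gene
        trial = np.where(mask, mutant, pop[i])
        # Greedy selection: keep the trial only if it does not worsen fitness.
        f_trial = fitness(trial)
        if f_trial <= fit[i]:
            pop[i], fit[i] = trial, f_trial

print("best MSE:", fit.min())

The same loop applies unchanged to any controller whose parameters can be flattened into a single vector; only the fitness function (here a toy MSE) would be replaced by the task-specific NRAM objective.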

Keywords

NRAM · Differential Evolution · Neural networks

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Marco Baioletti¹
  • Valerio Belli¹
  • Gabriele Di Bari¹
  • Valentina Poggioni¹ (corresponding author)

  1. Dip. Matematica e Informatica, Università di Perugia, Perugia, Italy