Using Stigmergy to Incorporate the Time into Artificial Neural Networks

  • Federico A. Galatolo
  • Mario Giovanni C. A. Cimino (email author)
  • Gigliola Vaglini
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11308)

Abstract

A current research trend in neurocomputing involves the design of novel artificial neural networks (ANNs) that incorporate the concept of time into their operating model. In this paper, a novel ANN architecture that employs stigmergy is proposed. Computational stigmergy is used to dynamically increase (or decrease) the strength of a connection, or the activation level, of an artificial neuron when stimulated (or released). This study lays down a basic framework for the derivation of a stigmergic NN and of a related training algorithm. To show its potential, some pilot experiments are reported: the XOR problem is solved using a single stigmergic neuron with one input and one output, and a static NN, a stigmergic NN, a recurrent NN and a long short-term memory NN have been trained on the MNIST digit recognition benchmark.
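
As an illustration of this deposit and evaporation mechanism, the sketch below (plain Python, not the authors' implementation; their PyTorch code is linked in reference 14) shows one hypothetical way a single neuron with one input and one output can solve the XOR problem when the two operands arrive as a time sequence. The class name, the depletion/recovery/evaporation parameters, and their hand-picked values are all assumptions made for this example: stimulation temporarily depletes the connection strength (here even turning it inhibitory), while the activation level persists between time steps with evaporation, so the response to the second bit depends on the first.

```python
# A minimal, hand-tuned sketch of a "stigmergic" neuron: NOT the authors'
# implementation. Names and parameter values are illustrative assumptions.

class StigmergicNeuron:
    def __init__(self, w0=1.0, depletion=1.5, recovery=0.5,
                 evaporation=0.2, threshold=0.4):
        self.w0 = w0                    # resting connection strength
        self.depletion = depletion      # strength removed on stimulation
        self.recovery = recovery        # fraction of strength restored per idle step
        self.evaporation = evaporation  # fraction of activation lost per step
        self.threshold = threshold      # firing threshold on the activation level
        self.reset()

    def reset(self):
        self.w = self.w0  # time-varying connection strength
        self.a = 0.0      # time-varying activation level

    def step(self, x):
        # The activation level evaporates, then integrates the weighted input,
        # so past stimulation influences the present response.
        self.a = (1.0 - self.evaporation) * self.a + self.w * x
        # Stimulation depletes the connection strength (here it overshoots
        # into an inhibitory value); an idle connection recovers toward w0.
        if x:
            self.w -= self.depletion
        else:
            self.w += self.recovery * (self.w0 - self.w)
        return int(self.a > self.threshold)


neuron = StigmergicNeuron()
# The two operands are presented one per time step on the single input;
# the output after the second step reproduces XOR: 0, 1, 1, 0.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    neuron.reset()
    neuron.step(a)
    print(f"{a} XOR {b} = {neuron.step(b)}")
```

With these values the printed outputs reproduce the XOR truth table; in the paper the temporal dynamics are learned with the proposed training algorithm rather than set by hand.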

Keywords

Artificial neural networks · Stigmergy · Deep learning · Supervised learning

Acknowledgements

This work was partially carried out in the framework of the SCIADRO project, co-funded by the Tuscany Region (Italy) under the Regional Implementation Programme for Underutilized Areas Fund (PAR FAS 2007–2013) and the Research Facilitation Fund (FAR) of the Ministry of Education, University and Research (MIUR).

This research was supported in part by the PRA 2018_81 project entitled “Wearable sensor systems: personalized analysis and data security in healthcare” funded by the University of Pisa.

References

  1. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)
  2. Gu, J., Wang, Z., Kuen, J., Ma, L., Shahroudy, A., Shuai, B., Chen, T.: Recent advances in convolutional neural networks. Pattern Recognit. 77, 354–377 (2018)
  3. Pérez, J., Cabrera, J.A., Castillo, J.J., Velasco, J.M.: Bio-inspired spiking neural network for nonlinear systems control. Neural Netw. 104, 15–25 (2018)
  4. Xie, X., Qu, H., Yi, Z., Kurths, J.: Efficient training of supervised spiking neural network via accurate synaptic-efficiency adjustment method. IEEE Trans. Neural Netw. Learn. Syst. 28(6), 1411–1424 (2017)
  5. Song, H.F., Yang, G.R., Wang, X.J.: Training excitatory-inhibitory recurrent neural networks for cognitive tasks: a simple and flexible framework. PLoS Comput. Biol. 12(2), e1004792 (2016)
  6. Heylighen, F.: Stigmergy as a universal coordination mechanism I: definition and components. Cogn. Syst. Res. 38, 4–13 (2016)
  7. Greer, K.: Turing: then, now and still key. In: Yang, X.S. (ed.) Artificial Intelligence, Evolutionary Computing and Metaheuristics. SCI, vol. 427, pp. 43–62. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-29694-9_3
  8. Cimino, M.G.C.A., Lazzeri, A., Vaglini, G.: Improving the analysis of context-aware information via marker-based stigmergy and differential evolution. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2015. LNCS (LNAI), vol. 9120, pp. 341–352. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19369-4_31
  9. Alfeo, A.L., Cimino, M.G., Vaglini, G.: Measuring physical activity of older adults via smartwatch and stigmergic receptive fields. In: Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods (ICPRAM), pp. 724–730. SciTePress (2017)
  10. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning, vol. 1. MIT Press, Cambridge (2016)
  11. LeCun, Y., Cortes, C., Burges, C.J.C.: The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/. Accessed 03 Sept 2018
  12. Kingma, D.P., Ba, J.L.: Adam: a method for stochastic optimization. In: International Conference on Learning Representations, pp. 1–13 (2015)
  13. PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration. https://pytorch.org. Accessed 14 Sept 2018
  14. GitHub platform. https://github.com/galatolofederico/mike2018. Accessed 14 Sept 2018

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Federico A. Galatolo (1)
  • Mario Giovanni C. A. Cimino (1, email author)
  • Gigliola Vaglini (1)

  1. Department of Information Engineering, University of Pisa, Pisa, Italy
