Temporal Convolution in Spiking Neural Networks: A Bio-mimetic Paradigm

  • Conference paper, in Soft Computing for Problem Solving 2019

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 1139))

Abstract

Recent spectacular advances in artificial intelligence (AI) can, in large part, be attributed to developments in deep learning (DL). In essence, DL is not a new concept. In many respects, DL shares the characteristics of "traditional" types of neural network (NN). The main distinguishing feature is that it uses many more layers in order to learn increasingly complex features. Each layer convolves over the previous one, applying a simplifying function to a subsection of that layer. Deep learning's remarkable success can be attributed to dedicated researchers experimenting with many groundbreaking techniques, but some of its triumphs can also be attributed to fortune: it was the right technique at the right time. To function effectively, DL mainly requires two things: (a) vast amounts of training data and (b) a very specific type of computational capacity. These two requirements have been amply met by the growth of the Internet and the rapid development of GPUs. As such, DL is an almost perfect fit for today's technologies. However, DL is only a very rough approximation of how the brain works. More recently, spiking neural networks (SNNs) have tried to simulate biological phenomena in a more realistic way. In SNNs, information is transmitted as discrete spikes of data rather than as a continuous weight or a differentiable activation function. In practical terms, this means that far more nuanced interactions can occur between neurons and that the network can run far more efficiently (e.g., in terms of the calculations needed and therefore the overall power requirements). Nevertheless, the big problem with SNNs is that, unlike DL, they do not "fit" well with existing technologies. Worse still, no one has yet come up with a definitive way to make SNNs function at a "deep" level. The difficulty is that, in essence, "deep" and "spiking" refer to fundamentally different characteristics of a neural network: "spiking" focuses on the activation of individual neurons, whereas "deep" concerns the network architecture itself (Pfeiffer and Pfeil, Front. Neurosci. 12, 2018) [1]. However, these two methods are in fact not contradictory; they have so far been developed in isolation from each other because of the prevailing technology driving each technique and the conceptual distance between the two biological paradigms. If advances in AI are to continue at the present rate, then new technologies will have to be developed and the contradictory aspects of DL and SNNs will have to be reconciled. Very recently, there have been a handful of attempts to amalgamate DL and SNNs in a variety of ways (Tavanaei et al., Neural Netw. 111:47–63, 2019) [2], one of the most exciting being the creation of a specific hierarchical learning paradigm in recurrent SNNs (RSNNs) called e-prop (Bellec et al., bioRxiv, 2019) [3]. However, this paper posits that such attempts have been made problematic because a fundamental agent in the way the biological brain functions has been missing from each paradigm, and that if this agent is included in a new model, then the union between DL and RSNNs can be made in a more harmonious manner. The missing piece of the jigsaw is, in fact, the glial cell and the unacknowledged role it plays in neural processing.
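
To make the contrast between discrete spikes and continuous activations concrete, the sketch below shows a minimal leaky integrate-and-fire neuron in Python. It is an illustrative sketch only, not the model used in the paper; the time constant, threshold, and drive values are arbitrary assumptions chosen for demonstration.

  import numpy as np

  def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
      """Integrate an input current over time and emit discrete spikes (0/1)."""
      v = 0.0
      spikes = []
      for i in input_current:
          v += (dt / tau) * (-v + i)   # leaky integration of the membrane potential
          if v >= v_thresh:            # threshold crossing -> an all-or-nothing spike
              spikes.append(1)
              v = v_reset              # reset after firing
          else:
              spikes.append(0)
      return np.array(spikes)

  # Example: a constant drive above threshold produces a regular spike train.
  spike_train = lif_neuron(np.full(100, 1.5))
  print(int(spike_train.sum()), "spikes in 100 time steps")

The output is a train of threshold-triggered events rather than a differentiable activation value, which is what makes standard backpropagation awkward to apply directly to SNNs.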
In this context, this paper examines how DL and SNNs can be combined, and how glial dynamics can not only address outstanding issues with the existing individual paradigms, for example, the "weight transport" problem, but also act as the "glue" (pun intended) between these two paradigms. The resulting idea has a direct parallel with convolution in DL but adds the dimension of time: in this new paradigm, it is important not only where events happen but also when they occur. The synergy between these two powerful paradigms hints at the direction and potential of what could be an important part of the next wave of development in AI.
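
As a rough illustration of convolution extended into the time dimension, the following Python sketch slides a small decaying kernel along a spike train, so the output at each step depends on when recent spikes occurred, not only on whether they occurred. The kernel shape and the example spike train are hypothetical and are used only to show the idea.

  import numpy as np

  def temporal_convolution(spike_train, kernel):
      """Causal convolution: output at time t sums past spikes weighted by their age."""
      T, K = len(spike_train), len(kernel)
      out = np.zeros(T)
      for t in range(T):
          for k in range(K):
              if t - k >= 0:
                  out[t] += kernel[k] * spike_train[t - k]
      return out

  # Exponentially decaying kernel: recent spikes count more than older ones.
  kernel = np.exp(-np.arange(5) / 2.0)
  spikes = np.array([0, 1, 0, 0, 1, 1, 0, 0, 0, 1])
  print(np.round(temporal_convolution(spikes, kernel), 3))

Here the "subsection" over which the function is applied is a window in time rather than a patch of space, which is the sense in which the proposed paradigm parallels convolution in DL.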

References

  1. M. Pfeiffer, T. Pfeil, Deep learning with spiking neurons: opportunities and challenges. Front. Neurosci. 12, Article 774 (2018)
  2. A. Tavanaei, M. Ghodrati, S.R. Kheradpisheh, T. Masquelier, A. Maida, Deep learning in spiking neural networks. Neural Netw. 111, 47–63 (2019)
  3. G. Bellec, F. Scherr, A. Subramoney, E. Hajek, D. Salaj, R. Legenstein, W. Maass, A solution to the learning dilemma for recurrent networks of spiking neurons. bioRxiv (2019)
  4. V. Demin, D. Nekhaev, Recurrent spiking neural network learning based on a competitive maximization of neuronal activity. Front. Neurosci. 12 (2018)
  5. R. Gütig, To spike, or when to spike?, in Current Opinion in Neurobiology, Theoretical and Computational Neuroscience Special Issue (Elsevier, 2014), pp. 134–139
  6. M.A. Montemurro, M.J. Rasch, Y. Murayama, N.K. Logothetis, S. Panzeri, Phase-of-firing coding of natural visual stimuli in primary visual cortex. Curr. Biol. 18, 375–380 (2008)
  7. W. Gerstner, R. Kempter, J.L. van Hemmen, H. Wagner, A neuronal learning rule for sub-millisecond temporal coding. Nature 363, 76–81 (1996)
  8. F. Rieke, D. Warland, R. Steveninck, W. Bialek, Spikes: Exploring the Neural Code (MIT Press, 1999). ISBN 0-262-18174-6
  9. F. Ponulak, A. Kasinski, Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Comput. 22 (2010)
  10. R. Gütig, H. Sompolinsky, The tempotron: a neuron that learns spike timing-based decisions. Nat. Neurosci. 9 (2006)
  11. S.M. Bohte, J.N. Kok, H. La Poutre, Spike-prop: error-backpropagation in multi-layer networks of spiking neurons. Neurocomputing 48, 17–37 (2002)
  12. J.H. Lee, T. Delbruck, M. Pfeiffer, Training deep spiking neural networks using backpropagation. Front. Neurosci. 10 (2016)
  13. S.R. Kulkarni, B. Rajendran, Spiking neural networks for handwritten digit recognition: supervised learning and network optimization. Neural Netw. 10, 118–127 (2018)
  14. Y. Wu, L. Deng, G. Li, J. Zhu, L. Shi, Spatio-temporal backpropagation for training high-performance spiking neural networks. Front. Neurosci. 12 (2018)
  15. Y. Jin, W. Zhang, P. Li, Hybrid macro/micro level backpropagation for training deep spiking neural networks, in Neural and Evolutionary Computing (IEEE, 2019)
  16. W. Maass, T. Natschläger, H. Markram, Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560 (2002)
  17. B. Schrauwen, D. Verstraeten, J. van Campenhout, An overview of reservoir computing: theory, applications, and implementations, in Proceedings of the European Symposium on Artificial Neural Networks (ESANN) (2007), pp. 471–482
  18. Y. Bengio, D. Lee, J. Bornschein, T. Mesnard, Z. Lin, Towards biologically plausible deep learning (2015). arXiv:1502.04156
  19. B. Scellier, Y. Bengio, Equilibrium propagation: bridging the gap between energy-based models and backpropagation. Front. Comput. Neurosci. 11 (2017)
  20. M. Mozafari, S.R. Kheradpisheh, T. Masquelier, A. Nowzari-Dalini, M. Ganjtabesh, First-spike-based visual categorization using reward-modulated STDP. IEEE Trans. Neural Netw. Learn. Syst. (2018)
  21. J.C. Thiele, O. Bichler, A. Dupret, Event-based, timescale invariant unsupervised online deep learning with STDP. Front. Comput. Neurosci. 12 (2018)
  22. P. Panda, K. Roy, Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition, in Neural and Evolutionary Computing (MIT Press, 2016)
  23. S. Grossberg, Competitive learning: from interactive activation to adaptive resonance. Cogn. Sci. 11(1), 23–63 (1987)
  24. S. Bartunov, A. Santoro, B.A. Richards, L. Morris, G.E. Hinton, T.P. Lillicrap, Assessing the scalability of biologically-motivated deep learning algorithms and architectures, in 32nd Conference on Neural Information Processing Systems (NIPS), Montreal, Canada (2018). arXiv:1807.04587v2
  25. T.P. Lillicrap, Deep learning and the brain: does the brain approximate backpropagation?, Carnegie Mellon University BrainHub Victor Bearg Lecture (2018). https://www.youtube.com/watch?v=zQSNijL1fJg
  26. G.E. Hinton, S. Sabour, N. Frosst, Matrix capsules with EM routing, in Proceedings of the International Conference on Learning Representations (ICLR), Vancouver, Canada (2018)
  27. G.E. Hinton, P. Dayan, B.J. Frey, R. Neal, The wake-sleep algorithm for unsupervised neural networks. Science 268(5214), 1158–1161 (1995)
  28. J. Lisman, A mechanism for the Hebb and the anti-Hebb processes underlying learning and memory. Proc. Natl. Acad. Sci. USA 86(23), 9574–9578 (1989)
  29. H.Z. Shouval, G.C. Castellani, B.S. Blais, L.N. Cooper, Converging evidence for a simplified biophysical model of synaptic plasticity. Biol. Cybern. (2003)
  30. G. Perea, M. Sur, A. Araque, Neuron-glia networks: integral gear of brain function. Front. Cell. Neurosci. 8 (2014)
  31. D.S. Auld, R. Robitaille, Glial cells and neurotransmission: an inclusive view of synaptic function. Neuron 40(2), 389–400 (2003)
  32. B. Lu, Q. Zhang, H. Wang, Y. Wang, M. Nakayama, D. Ren, Extracellular calcium controls background current and neuronal excitability via an UNC79-UNC80-NALCN cation channel complex. Neuron 68(3), 488–499 (2010)
  33. G. Chen, A mathematical model for bifurcations in a Belousov-Zhabotinsky reaction. Phys. D: Nonlinear Phenom. 145(3), 309–329 (2000)

Author information

Correspondence to David Reid.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Reid, D., Secco, E.L. (2020). Temporal Convolution in Spiking Neural Networks: A Bio-mimetic Paradigm. In: Nagar, A., Deep, K., Bansal, J., Das, K. (eds) Soft Computing for Problem Solving 2019. Advances in Intelligent Systems and Computing, vol 1139. Springer, Singapore. https://doi.org/10.1007/978-981-15-3287-0_17
