Abstract
This article analyzes neural networks from the standpoint of the neuromorphic approach. The analysis leads to the conclusion that modern artificial neural networks can effectively solve only those particular problems for which it is permissible to fix the network topology or to allow only small changes to it. In the nervous system, which serves as the prototype, the functional element, the neuron, is a fundamentally complex object; this complexity makes it possible to implement topology changes through structural adaptation of the dendritic tree of a single neuron. A promising direction for the development of neuromorphic systems is identified: deep spiking neural networks in which such structural adaptation can be realized.
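To illustrate the idea of structural adaptation at the level of a single neuron, the following is a minimal Python sketch, not the authors' model: a leaky integrate-and-fire (LIF) neuron whose set of dendritic inputs can be grown or pruned at run time, so that the topology of the network changes rather than only the synaptic weights. All class, method, and parameter names (StructurallyAdaptiveLIF, grow_synapse, tau, v_thresh, and so on) are hypothetical and chosen purely for illustration.

class StructurallyAdaptiveLIF:
    """Toy LIF neuron whose set of dendritic inputs (synapses) can grow or shrink,
    loosely mimicking structural adaptation of a dendritic tree (illustrative only)."""

    def __init__(self, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
        self.tau, self.v_thresh, self.v_reset, self.dt = tau, v_thresh, v_reset, dt
        self.v = 0.0          # membrane potential
        self.weights = {}     # synapse id -> weight

    def grow_synapse(self, syn_id, weight):
        """Structural change: attach a new dendritic input."""
        self.weights[syn_id] = weight

    def prune_synapse(self, syn_id):
        """Structural change: remove an existing dendritic input."""
        self.weights.pop(syn_id, None)

    def step(self, spikes):
        """Advance one time step; `spikes` is the set of presynaptic ids that fired."""
        i_syn = sum(w for s, w in self.weights.items() if s in spikes)
        self.v += self.dt * (-self.v / self.tau) + i_syn
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True       # output spike
        return False


# Usage: the neuron's behaviour changes because its structure changes,
# not because any weight value was retrained.
neuron = StructurallyAdaptiveLIF()
neuron.grow_synapse("a", 0.6)
neuron.grow_synapse("b", 0.5)
print(neuron.step({"a", "b"}))   # True: joint input crosses threshold
neuron.prune_synapse("b")
print(neuron.step({"a", "b"}))   # False: remaining input is subthreshold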
Cite this paper
Bakhshiev, A., Stankevich, L. (2018). Prospects for the Development of Neuromorphic Systems. In: Kryzhanovsky, B., Dunin-Barkowski, W., Redko, V. (eds) Advances in Neural Computation, Machine Learning, and Cognitive Research. NEUROINFORMATICS 2017. Studies in Computational Intelligence, vol 736. Springer, Cham. https://doi.org/10.1007/978-3-319-66604-4_7
Print ISBN: 978-3-319-66603-7
Online ISBN: 978-3-319-66604-4