Abstract
Recent advances in machine learning, notably deep learning, have led to unprecedented success in a wide variety of recognition tasks, including vision, speech, and natural language processing. However, implementing such neural algorithms on conventional "von Neumann" architectures involves orders of magnitude more area and power consumption than the biological brain. This is mainly attributed to the inherent mismatch between the computational units of such models (neurons and synapses) and the underlying CMOS transistors. In addition, these algorithms are highly memory-intensive and suffer from memory bandwidth limitations due to the significant amount of data transferred between the memory and computing units. Recent experiments in spintronics have opened up the possibility of implementing such computing kernels with single-device structures that can be arranged in crossbar architectures, resulting in a compact and energy-efficient "in-memory computing" platform. In this chapter, we review spintronic device structures, consisting of single-domain and domain-wall-motion-based devices, for mimicking neuronal and synaptic units. System-level simulations indicate a ∼100× improvement in energy consumption for such spintronic implementations over corresponding CMOS implementations across different computing workloads.
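The in-memory computing principle mentioned above can be sketched abstractly: in a resistive crossbar, each cross-point device stores a synaptic weight as a conductance, and applying input voltages across the array yields output currents that are, by Kirchhoff's current law, an analog matrix-vector product. The following minimal sketch illustrates only this idealized computation; the function name, values, and the absence of device non-idealities are assumptions for illustration, not the chapter's specific spintronic device models.

```python
import numpy as np

# Idealized crossbar sketch (illustrative assumptions, not the chapter's model):
# G[i][j] is the conductance (synaptic weight) of the device at row i, column j;
# V[j] is the input voltage on column j. Kirchhoff's current law gives the
# row currents I[i] = sum_j G[i][j] * V[j], i.e. one matrix-vector product
# evaluated in a single analog step, with no weight data moved off-array.

def crossbar_mvm(conductances, voltages):
    """Ideal crossbar matrix-vector multiply: I = G @ V."""
    return conductances @ voltages

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # device conductances (weights)
V = np.array([0.2, 0.5, 0.1])            # input voltages (activations)
I = crossbar_mvm(G, V)                   # output currents (pre-activations)
print(I.shape)                           # one current per row: (4,)
```

In a digital von Neumann implementation, each of the G.size multiply-accumulates would require fetching a weight from memory; the crossbar's energy advantage comes from performing all of them in place, in parallel.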
© 2017 Springer International Publishing AG
Cite this chapter
Sengupta, A., Ankit, A., Roy, K. (2017). Efficient Neuromorphic Systems and Emerging Technologies: Prospects and Perspectives. In: Chattopadhyay, A., Chang, C., Yu, H. (eds) Emerging Technology and Architecture for Big-data Analytics. Springer, Cham. https://doi.org/10.1007/978-3-319-54840-1_12
DOI: https://doi.org/10.1007/978-3-319-54840-1_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-54839-5
Online ISBN: 978-3-319-54840-1
eBook Packages: Engineering, Engineering (R0)