
Nonvolatile Memory Crossbar Arrays for Non-von Neumann Computing

  • Severin Sidler
  • Jun-Woo Jang
  • Geoffrey W. Burr
  • Robert M. Shelby
  • Irem Boybat
  • Carmelo di Nolfo
  • Pritish Narayanan
  • Kumar Virwani
  • Hyunsang Hwang
Chapter
Part of the Cognitive Systems Monographs book series (COSMOS, volume 31)

Abstract

In the conventional von Neumann (VN) architecture, data (both operands and the operations to be performed on those operands) makes its way from memory to a dedicated central processor. With the end of Dennard scaling and the resulting slowdown in Moore's law, the IT industry is turning its attention to non-von Neumann (non-VN) architectures, and in particular to computing architectures motivated by the human brain. One family of such non-VN computing architectures is artificial neural networks (ANNs). To be competitive with conventional architectures, such ANNs will need to be massively parallel, with many neurons interconnected by a vast number of synapses, working together efficiently to compute problems of significant interest. Emerging nonvolatile memories, such as phase-change memory (PCM) or resistive memory (RRAM), could prove very helpful here, by providing inherently analog synaptic behavior in densely packed crossbar arrays suitable for on-chip learning. We discuss our recent research investigating the characteristics needed from such nonvolatile memory (NVM) elements for the implementation of high-performance ANNs. We describe experiments on a 3-layer perceptron network with 164,885 synapses, each implemented using 2 NVM devices. A variant of the backpropagation weight-update rule suitable for NVM+selector crossbar arrays is shown and implemented in a mixed hardware–software experiment using an available, non-crossbar PCM array. Extensive tolerancing results are enabled by precise matching of our neural-network simulator to the conditions of the hardware experiment. This tolerancing shows clearly that NVM-based neural networks are highly resilient to random effects (NVM variability, yield, and stochasticity), but highly sensitive to gradient effects that act to steer all synaptic weights. Simulations of ANNs with both PCM and non-filamentary bipolar RRAM based on Pr\(_{1-x}\)Ca\(_x\)MnO\(_3\) (PCMO) are also discussed.
PCM exhibits smooth, slightly nonlinear partial-SET (conductance increase) behavior, but the asymmetry of its abrupt RESET introduces difficulties; in contrast, PCMO offers continuous conductance change in both directions, but exhibits significant nonlinearities (degree of conductance change depends strongly on absolute conductance). The quantitative impacts of these issues on ANN performance (classification accuracy) are discussed.
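The two-device synapse and the device behaviors described above can be sketched in code. The following is a minimal illustrative model only: conductances are normalized to [0, 1] and the update constants (`alpha`, `beta`) are arbitrary assumptions, not the fitted device characteristics from this chapter.

```python
import numpy as np

# Illustrative device model (hypothetical parameters, normalized units).
G_MIN, G_MAX = 0.0, 1.0

def pcm_partial_set(g, alpha=0.05, beta=2.0):
    """One partial-SET pulse: smooth but slightly nonlinear increase;
    each pulse adds less as g approaches G_MAX (saturation)."""
    return min(g + alpha * (1.0 - g / G_MAX) ** beta, G_MAX)

def pcm_reset(g):
    """Abrupt RESET: conductance collapses to G_MIN in a single step,
    the asymmetry that complicates gradual weight decreases in PCM."""
    return G_MIN

def pcmo_update(g, potentiate, alpha=0.05, beta=2.0):
    """PCMO-style update: gradual in BOTH directions, but the step size
    depends strongly on the present (absolute) conductance."""
    if potentiate:
        return min(g + alpha * (1.0 - g / G_MAX) ** beta, G_MAX)
    return max(g - alpha * (g / G_MAX) ** beta, G_MIN)

def weight(g_plus, g_minus):
    """Each synapse uses a pair of devices; the signed weight is the
    difference of their conductances."""
    return g_plus - g_minus

# A crossbar read performs a parallel matrix-vector multiply: with row
# excitations x and conductance matrices Gp, Gm, the summed column
# currents give y = (Gp - Gm) @ x in a single analog operation.
rng = np.random.default_rng(0)
Gp = rng.uniform(G_MIN, G_MAX, size=(3, 4))
Gm = rng.uniform(G_MIN, G_MAX, size=(3, 4))
x = np.ones(4)
y = (Gp - Gm) @ x

# Weight-increase requests pulse the G+ device; weight-decrease requests
# pulse the G- device, so gradual RESETs are never required.
gp = gm = 0.2
for _ in range(10):
    gp = pcm_partial_set(gp)  # ten "increase weight" requests
```

In this sketch, repeatedly potentiating one device of the pair raises the net weight without ever invoking the abrupt RESET, which is the usual motivation for the two-device scheme.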

Keywords

Synaptic weight · Nonvolatile memory · Weight update · Nonvolatile memory device · Crossbar array


Copyright information

© Springer (India) Pvt. Ltd. 2017

Authors and Affiliations

  • Severin Sidler (1)
  • Jun-Woo Jang (2)
  • Geoffrey W. Burr (3)
  • Robert M. Shelby (3)
  • Irem Boybat (1)
  • Carmelo di Nolfo (3)
  • Pritish Narayanan (3)
  • Kumar Virwani (3)
  • Hyunsang Hwang (2)

  1. EPFL, Lausanne, Switzerland
  2. Pohang University of Science and Technology, Pohang, Korea
  3. IBM Research-Almaden, San Jose, USA
