Analog VLSI Hardware Implementation of a Supervised Learning Algorithm

  • Gian Marco Bo
  • Daniele Caviglia
  • Hussein Chiblé
  • Maurizio Valle
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 74)

Abstract

In this chapter, we introduce an analog chip hosting a self-learning neural network with local learning rate adaptation. The neural architecture has been validated through intensive simulations on the recognition of handwritten characters and then mapped onto an analog architecture. The prototype chip implementing the whole on-chip learning neural architecture has been designed and fabricated using a 0.7 μm channel-length CMOS technology. Experimental results on two learning tasks confirm the functionality of the chip and the soundness of the approach. The chip features a peak performance of 2.65 × 10⁶ connections updated per second.
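
As a purely algorithmic illustration of the learning scheme summarised above (gradient-descent back-propagation with a learning rate adapted locally, per weight), the following Python sketch shows one training step of a one-hidden-layer perceptron. It is a software analogy only, not the chip's analog circuitry; the tanh activations and the sign-agreement adaptation rule are assumptions made for illustration.

```python
# Minimal software sketch (not the chip's analog circuitry) of back-propagation
# with a local, per-weight learning rate. The sign-agreement adaptation rule
# below is an assumed rule for illustration, not necessarily the on-chip one.
import numpy as np

def train_step(x, t, W1, W2, lr1, lr2, g1_prev, g2_prev,
               up=1.2, down=0.5, lr_min=1e-4, lr_max=1.0):
    """One supervised update of a one-hidden-layer MLP with tanh units."""
    # Forward pass
    h = np.tanh(W1 @ x)              # hidden activations
    y = np.tanh(W2 @ h)              # network outputs

    # Backward pass (squared-error loss)
    e = y - t
    d2 = e * (1.0 - y**2)            # output-layer deltas
    d1 = (W2.T @ d2) * (1.0 - h**2)  # hidden-layer deltas
    g2 = np.outer(d2, h)             # gradient w.r.t. W2
    g1 = np.outer(d1, x)             # gradient w.r.t. W1

    # Local learning-rate adaptation: grow the rate where the gradient sign
    # persists across presentations, shrink it where the sign flips.
    for lr, g, g_prev in ((lr1, g1, g1_prev), (lr2, g2, g2_prev)):
        agree = np.sign(g) * np.sign(g_prev)
        lr *= np.where(agree > 0, up, np.where(agree < 0, down, 1.0))
        np.clip(lr, lr_min, lr_max, out=lr)

    # Weight update with the per-weight rates
    W1 -= lr1 * g1
    W2 -= lr2 * g2
    return 0.5 * float(e @ e), g1, g2
```

A caller would keep the returned gradients and pass them back as g1_prev and g2_prev on the next pattern presentation, so that each weight's learning rate tracks the local consistency of its own gradient.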

Keywords

Hidden Neuron, Output Neuron, Multi-Layer Perceptron, Neural Architecture, Analog Integrated Circuit

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Gian Marco Bo (1)
  • Daniele Caviglia (1)
  • Hussein Chiblé (1)
  • Maurizio Valle (1)

  1. Department of Biophysical and Electronic Engineering, University of Genoa, Italy
