
A VLSI Implementation of Multi-Layered Neural Networks: 2-Performance

Chapter in VLSI for Artificial Intelligence and Neural Networks

Abstract

This paper presents a VLSI cellular array dedicated to the processing of neural networks. Each cell of this array computes the algorithms devoted to a neuron of a back-propagation neural network. Our approach consists of allowing only integer computations and limiting the number of synaptic weights (connections) a cell must manage. If necessary, the topology of the neural network is changed locally by introducing sub-neurons and, consequently, partially connected sub-layers. These changes in topology add new degrees of freedom in the weight space and thus increase the number of training epochs needed to obtain behaviour similar to that of the initial network. In recall mode, a 64 × 64 cellular array runs the NETtalk text-to-speech network 4 times faster than the Warp, a 20-node systolic array that uses layer-based parallelism; 52 times faster than the Connection Machine-1, which uses synaptic parallelism; and more than 2600 times faster than a VAX 780, which uses no parallelism at all.
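
To make the mapping constraints concrete, the sketch below (plain Python, and not the authors' implementation) illustrates the two ideas described above: integer-only arithmetic and the replacement of a neuron whose fan-in exceeds a cell's connection limit by partially connected sub-neurons feeding a combining neuron. The names MAX_FANIN, SCALE, act, neuron and split_neuron, the fan-in limit of 16 and the ramp activation are illustrative assumptions, not details taken from the paper.

    MAX_FANIN = 16   # assumed per-cell limit on synaptic connections (illustrative)
    SCALE = 256      # assumed fixed-point scale so all arithmetic stays integer

    def act(s):
        # Stand-in integer activation: a hard-limited ramp instead of a sigmoid table.
        return max(0, min(SCALE, s // SCALE))

    def neuron(weights, inputs):
        # Original neuron: one integer multiply-accumulate pass over the full fan-in.
        return act(sum(w * x for w, x in zip(weights, inputs)))

    def split_neuron(weights, inputs, combine_weights):
        # Sub-neuron decomposition: each sub-neuron handles at most MAX_FANIN
        # connections and produces its own activation; a combining neuron then
        # weights the sub-neuron outputs. The combine_weights are new degrees of
        # freedom, so the network must be retrained to recover behaviour similar
        # to that of the original neuron.
        sub_outputs = []
        for i in range(0, len(weights), MAX_FANIN):
            w = weights[i:i + MAX_FANIN]
            x = inputs[i:i + MAX_FANIN]
            sub_outputs.append(act(sum(wi * xi for wi, xi in zip(w, x))))
        return act(sum(cw * so for cw, so in zip(combine_weights, sub_outputs)))

    if __name__ == "__main__":
        w = [3] * 40        # 40 integer weights: above the assumed fan-in limit
        x = [SCALE] * 40    # 40 integer inputs at full scale
        print(neuron(w, x), split_neuron(w, x, [SCALE, SCALE, SCALE]))

Because the combining weights are extra parameters, the split network is not exactly equivalent to the original one; this matches the remark above that the added degrees of freedom increase the number of training epochs needed to recover similar behaviour.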

References

  • Faure, B. and Mazaré, G., “A VLSI implementation of multi-layered neural networks”, in VLSI for Artificial Intelligence, J. Delgado-Frias and W. Moore, Eds., Kluwer, 1989.

  • Faure, B. and Mazaré, G., “A VLSI asynchronous cellular architecture for neural computing: functional definition and performance evaluation”, 3rd International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, Charleston, South Carolina, pp. 838–847, July 1990.

  • Jones, W. P. and Hoskins, J., “Back-propagation: a generalized delta learning rule”, BYTE Magazine, vol. 12, no. 10, pp. 155–162, 1987.

  • Karabernou, M., “Etude et réalisation d’un mécanisme d’acheminement de messages dans un réseau cellulaire”, DEA de microélectronique, Université de Grenoble, France, June 1988.

  • Lattard, D., Faure, B. and Mazaré, G., “Massively parallel architecture: application to neural net emulation and image reconstruction”, in Application Specific Array Processors, S.-Y. Kung, E. E. Swartzlander Jr., J. A. B. Fortes and K. Wojtek Przytula, Eds., IEEE Computer Society Press, 1990.

  • Le Cun, Y., “Modèles connexionistes de l’apprentissage”, Thèse d’informatique, Université de Paris 6, France, June 1987.

  • Lippmann, R. P., “An introduction to computing with neural nets”, IEEE ASSP Magazine, no. 4, pp. 4–22, April 1987.

  • Mhiri, M., “Réalisation de la partie traitement d’un réseau cellulaire asynchrone pour une application de réseaux de neurones”, DEA de microélectronique de l’INPG, Grenoble, France, June 1989.

  • Objois, P., Ansade, Y., Cornu-Emieux, R. and Mazaré, G., “Highly parallel logic simulation accelerators based upon distributed discrete-event simulation”, in Hardware Accelerators for Electrical CAD, T. Ambler, P. Agrawal and W. Moore, Eds., Adam Hilger, 1988.

  • Payan, E. and Mazaré, G., “Programmable highly parallel architecture: functional definition and performance evaluation”, in Parallel Processing in Neural Systems and Computers, R. Eckmiller, G. Hartmann and G. Hauske, Eds., North-Holland, 1990.

  • Pomerleau, D. A., Gusciora, G. L., Touretzky, D. S. and Kung, H. T., “Neural network simulation at Warp speed: how we got 17 million connections per second”, IEEE International Conference on Neural Networks, vol. 2, pp. 143–150, San Diego, California, July 1988.

  • Rubini, P., Karabernou, S. M., Payan, E. and Mazaré, G., “A network with small general processing units for fine grain parallelism”, International Workshop on Algorithms and Parallel VLSI Architectures, Pont-à-Mousson, France, pp. 197–200, June 1990.

  • Rumelhart, D. E., Hinton, G. E. and Williams, R. J., “Learning internal representations by error propagation”, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, D. E. Rumelhart and J. L. McClelland, Eds., MIT Press, 1986.

  • Sejnowski, T. J. and Rosenberg, C. R., “Parallel networks that learn to pronounce English text”, Complex Systems, no. 1, pp. 145–168, 1987.

  • Wang, S., “Réseaux multicouches de neurones artificiels: algorithmes d’apprentissage, implantations sur hypercube, applications”, Thèse d’informatique de l’INPG, Grenoble, France, September 1989.

Copyright information

© 1991 Springer Science+Business Media New York

About this chapter

Cite this chapter

Faure, B., Mazaré, G. (1991). A VLSI Implementation of Multi-Layered Neural Networks: 2-Performance. In: Delgado-Frias, J.G., Moore, W.R. (eds) VLSI for Artificial Intelligence and Neural Networks. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-3752-6_37

  • DOI: https://doi.org/10.1007/978-1-4615-3752-6_37

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-6671-3

  • Online ISBN: 978-1-4615-3752-6
