
Systolic VLSI and FPGA Realization of Artificial Neural Networks

  • Chapter

Part of the book series: Adaptation, Learning, and Optimization ((ALO,volume 7))

Abstract

Systolic architectures are established as a widely popular class of VLSI structures for repetitive and computation-intensive applications, owing to the simplicity of their processing elements (PEs), modularity of design, regular nearest-neighbour interconnections between the PEs, high level of pipelinability, small chip area, and low power consumption. In a systolic array, data is pumped rhythmically at regular intervals across the PEs to yield high throughput through fully pipelined processing. Systolic array architectures significantly reduce the I/O bottleneck by feeding data only at the chip boundary and pipelining it across the structure; the extensive reuse of data within the array allows a large volume of computation to be executed with only a modest increase in bandwidth. Since FPGA devices consist of regularly placed, interconnected logic blocks, they closely resemble the layout of systolic processors, so the systolic computation within the PEs can easily be mapped to the configurable logic blocks of an FPGA device. Interestingly, artificial neural network (ANN) algorithms are also well suited to systolic implementation due to their repetitive multiply-accumulate behaviour. Several variations of one-dimensional and two-dimensional systolic arrays have therefore been reported in the literature for the implementation of different types of neural networks. Special-purpose systolic designs for various ANN-based applications, relating to pattern recognition and classification, adaptive filtering and channel equalization, vector quantization, image compression, and general signal/image processing, have been reported over the last two decades. This chapter is devoted to systolic architectures for the implementation of ANN algorithms on custom VLSI and FPGA platforms. The key techniques used for the design of the basic systolic building blocks of ANN algorithms are discussed in detail. Moreover, the mapping of fully connected unconstrained ANNs, as well as multilayer ANN algorithms, onto fully pipelined systolic architectures is described with a generalized dependence-graph formulation. A brief overview of systolic architectures for advanced ANN algorithms for different applications is presented at the end.
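The repetitive multiply-accumulate behaviour the abstract refers to can be illustrated with a small software simulation (not taken from the chapter): a one-dimensional, weight-stationary systolic array computing a fully connected layer y = W·x, where each PE holds one row of weights and input samples stream past at one element per cycle. The function name and layout here are illustrative assumptions, not the chapter's notation.

```python
def systolic_layer(W, x):
    """Cycle-accurate sketch of a 1-D weight-stationary systolic array.

    PE j holds weight row W[j] and accumulates y[j] = sum_k W[j][k] * x[k].
    Input element x[k] enters PE 0 at cycle k and reaches PE j at cycle
    k + j via nearest-neighbour pipelining, so results emerge after the
    pipeline fills and drains (n_in + n_out - 1 cycles in total).
    """
    n_out, n_in = len(W), len(x)
    acc = [0.0] * n_out                  # one accumulator per PE
    total_cycles = n_in + n_out - 1      # pipeline fill + drain
    for cycle in range(total_cycles):
        for j in range(n_out):           # in hardware, all PEs fire in parallel
            k = cycle - j                # input element visible at PE j this cycle
            if 0 <= k < n_in:
                acc[j] += W[j][k] * x[k]
    return acc

# Example: a 3-neuron layer with 2 inputs.
W = [[1, 2], [3, 4], [5, 6]]
x = [10, 1]
print(systolic_layer(W, x))   # [12.0, 34.0, 56.0]
```

Note how data enters only at the array boundary (PE 0) and is reused by every PE as it propagates, which is precisely the property that reduces the I/O bandwidth requirement relative to broadcasting each input to all PEs.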




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Meher, P.K. (2010). Systolic VLSI and FPGA Realization of Artificial Neural Networks. In: Tenne, Y., Goh, CK. (eds) Computational Intelligence in Optimization. Adaptation, Learning, and Optimization, vol 7. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12775-5_15


  • DOI: https://doi.org/10.1007/978-3-642-12775-5_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-12774-8

  • Online ISBN: 978-3-642-12775-5

  • eBook Packages: Engineering (R0)
