Design of Hardware Accelerator for Artificial Neural Networks Using Multi-operand Adder

  • Shilpa Mayannavar
  • Uday Wali
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1025)

Abstract

Computational requirements of Artificial Neural Networks (ANNs) are so vastly different from those of conventional architectures that the exploration of new computing paradigms, hardware architectures, and their optimization has gained momentum. ANNs perform a large number of parallel operations, which makes their implementation on conventional computer hardware inefficient. This paper presents a new design methodology for multi-operand adders. These adders require multi-bit carries, which makes their design unique. A theoretical upper bound on the size of the sum and carry in a multi-operand addition, for any base and any number of operands, is presented. This result is used to design a modular 4-operand, 4-bit adder that computes partial sums using a look-up table. These modules can be connected in a hierarchical structure to implement larger adders; a method to build a 16-bit, 16-operand adder from the basic 4-bit, 4-operand adder block is presented. Verilog simulation results are presented for both the 4 × 4 and 16 × 16 adders. The design strategy used for the 16 × 16 adder may be extended to more bits or more operands with ease, following the guidelines discussed in the paper.
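The abstract's central quantitative point, that an n-operand addition produces a bounded multi-bit carry, and its LUT-based 4-operand, 4-bit adder stage can be sketched as follows. This is an illustrative Python model under assumed names (`carry_digits`, `add4`, `LUT`), not the paper's Verilog design:

```python
# Sketch (assumption, not the authors' code): upper bound on the carry of an
# n-operand base-b addition, and a 4-operand, 4-bit adder stage built around
# a precomputed look-up table, as the abstract describes.
import itertools
import math

def carry_digits(n, b):
    """Extra base-b digits needed to hold the sum of n single-digit operands.

    The maximum total is n*(b-1) < n*b, so the carry occupies at most
    ceil(log_b(n)) extra base-b digits, for any base and operand count.
    """
    return math.ceil(math.log(n, b))

# Look-up table for a 4-operand, 4-bit (base-16) adder stage:
# maps each tuple of four operand nibbles to a (carry, sum) pair.
LUT = {ops: divmod(sum(ops), 16)
       for ops in itertools.product(range(16), repeat=4)}

def add4(a, b, c, d):
    """4-operand, 4-bit addition via the LUT: returns (carry, sum_nibble)."""
    return LUT[(a, b, c, d)]

# The maximum total is 4 * 15 = 60: a 4-bit sum plus a 2-bit carry (at most 3),
# consistent with carry_digits(4, 16) == 1 extra hexadecimal digit.
carry, s = add4(15, 15, 15, 15)  # → (3, 12)
```

In a hierarchical composition of such stages, each column of a wider or deeper adder sums four nibbles plus the multi-bit carries from the previous column, which is why the bound on the carry width matters for sizing the interconnect between modules.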

Keywords

Artificial Intelligence · Deep Learning · Hardware accelerators · Hardware optimization · Massive parallelism · Multi-operand addition · Neural computing · Neural network processor

Notes

Acknowledgements

The authors would like to thank C-Quad Research, Belagavi for all the support.


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. C-Quad Research, Belagavi, India
  2. KLE DR MSS CET, Belagavi, India
