A Hardware Accelerator for Convolutional Neural Network Using Fast Fourier Transform

  • S. Kala
  • Babita R. Jose
  • Debdeep Paul
  • Jimson Mathew
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 892)

Abstract

Convolutional Neural Networks (CNNs) are biologically inspired architectures that can be trained to perform various classification tasks. A CNN typically consists of convolutional layers and max-pooling layers, followed by dense fully connected layers; the convolutional layers are the most compute-intensive part of the network. In this paper we present an FFT (Fast Fourier Transform) based convolution technique for accelerating CNN architectures. The computational complexities of direct convolution and FFT-based convolution are evaluated and compared. We also present an efficient FFT architecture, based on a radix-4 butterfly, for convolution. To validate our analysis, we have implemented a convolutional layer on a Virtex-7 FPGA.
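The FFT-based convolution the abstract refers to can be sketched in a few lines of NumPy. This is only an illustrative software model, not the paper's FPGA design: the function names and array sizes are ours, and the FFT here is NumPy's generic routine rather than the radix-4 butterfly architecture the paper proposes. Direct "valid" convolution of an H×H feature map with a K×K kernel costs O(H²K²) multiplies; the FFT route zero-pads both operands to the full output size, multiplies their spectra elementwise, inverts, and crops, for O(H² log H) work.

```python
import numpy as np

def direct_conv2d(x, k):
    # True 2-D convolution (kernel flipped), "valid" mode: O(H^2 * K^2) multiplies.
    H, K = x.shape[0], k.shape[0]
    kf = k[::-1, ::-1]  # flip distinguishes convolution from cross-correlation
    out = np.empty((H - K + 1, H - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + K, j:j + K] * kf)
    return out

def fft_conv2d(x, k):
    # FFT-based convolution: zero-pad both operands to the full output
    # size H+K-1, multiply spectra elementwise, invert, crop "valid" region.
    H, K = x.shape[0], k.shape[0]
    n = H + K - 1
    spec = np.fft.fft2(x, s=(n, n)) * np.fft.fft2(k, s=(n, n))
    full = np.real(np.fft.ifft2(spec))
    return full[K - 1:H, K - 1:H]
```

Both functions produce the same valid-mode output (up to floating-point round-off), which mirrors the complexity comparison the paper carries out in hardware.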

Keywords

Convolutional neural networks · Hardware complexity · FFT · FPGA · VLSI

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • S. Kala (1)
  • Babita R. Jose (1)
  • Debdeep Paul (2)
  • Jimson Mathew (3)
  1. Cochin University of Science and Technology, Kerala, India
  2. Department of Electrical Engineering, Indian Institute of Technology Patna, Patna, India
  3. Department of Computer Science and Engineering, Indian Institute of Technology Patna, Patna, India
