
A Systematic Literature Review of Hardware Neural Networks

  • Dorfell Parra
  • Carlos Camargo
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 833)

Abstract

Although Neural Networks (NN) are extremely useful for solving problems such as object recognition and semantic segmentation, NN libraries usually target devices (e.g. GPUs, multi-core processors) that face drawbacks such as memory bottlenecks and limited efficiency. Hardware Neural Networks (HNN) aim to tackle this problem, and for that reason several researchers have turned their attention back to them. This paper presents a Systematic Literature Review (SLR) of the most relevant HNN works published in recent years. The main sources chosen for the SLR were the IEEE Computer Society Digital Library and the SCOPUS indexing system, from which 61 papers were reviewed according to the inclusion and exclusion criteria; after a detailed assessment, only 20 papers remained. Finally, the results show that the most popular NN hardware platforms are FPGA-based.
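
The abstract summarises the screening step of the SLR: 61 candidate papers retrieved from the IEEE Computer Society Digital Library and SCOPUS were filtered against inclusion and exclusion criteria, leaving 20 for detailed assessment. The short Python sketch below illustrates that kind of filtering pass; the record fields, the include/exclude criteria, and the sample entries are hypothetical placeholders, not the authors' actual protocol.

    # Minimal sketch (assumed, not from the paper) of the screening step the
    # abstract describes: candidate records retrieved from IEEE CSDL and SCOPUS
    # are filtered by inclusion and exclusion criteria before detailed review.
    papers = [
        {"title": "Example HNN accelerator", "source": "SCOPUS", "year": 2015, "hardware_nn": True},
        {"title": "Example software-only NN library", "source": "IEEE", "year": 2014, "hardware_nn": False},
        # ... in the review, 61 candidate records were collected
    ]

    def include(paper):
        # Hypothetical inclusion criterion: the work implements NNs in hardware.
        return paper["hardware_nn"]

    def exclude(paper):
        # Hypothetical exclusion criterion: records outside the two chosen sources.
        return paper["source"] not in {"IEEE", "SCOPUS"}

    selected = [p for p in papers if include(p) and not exclude(p)]
    print(f"{len(selected)} of {len(papers)} candidates pass the screening")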

Keywords

HNN · SLR · Framework · FPGA · Neural networks


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Electric and Electronic Engineering, Universidad Nacional de Colombia, Bogotá, Colombia
