Journal of Computer Science and Technology, Volume 32, Issue 2, pp. 286–296

DLPlib: A Library for Deep Learning Processor

  • Hui-Ying Lan
  • Lin-Yang Wu
  • Xiao Zhang
  • Jin-Hua Tao
  • Xun-Yu Chen
  • Bing-Rui Wang
  • Yu-Qing Wang
  • Qi Guo
  • Yun-Ji Chen
Regular Paper

Abstract

Recently, deep learning processors have become one of the most promising ways to accelerate deep learning algorithms. Currently, the only way to program a deep learning processor is to write assembly instructions by hand, which requires substantial programming effort and yields very low productivity. One solution is to integrate the deep learning processor as a new back-end into one prevalent high-level deep learning framework (e.g., the TPU (tensor processing unit) is integrated directly into TensorFlow). However, this prevents other frameworks from benefiting from the programming interface. The alternative approach is to design a framework-independent low-level library for deep learning processors (analogous to cuDNN, the deep learning library for GPUs). In this fashion, the library can be conveniently invoked from high-level programming frameworks and offers greater generality. So that more deep learning frameworks can benefit from this environment, we envision it as a low-level library that can be easily embedded into current high-level frameworks while providing high performance. We discuss three major issues in designing such a library. The first is the design of the data structures: there should be as few data structures as possible while still supporting all possible operations, which makes them easier to optimize without compromising generality. The second is the selection of operations, which should cover a wide enough range to support various types of networks with high efficiency. The third is the design of the API, which should provide a flexible, user-friendly programming model and be easy to embed into existing deep learning frameworks. Considering all these issues, we propose DLPlib, a tensor-filter-based library designed specifically for deep learning processors.
It contains two major data structures, tensor and filter, and a set of operators including basic neural network primitives and matrix/vector operations. It provides a descriptor-based API exposed as a C++ interface. The library achieves 0.79x the performance of hand-written assembly instructions.

Keywords

deep learning processor, API, library, DLPlib

Supplementary material

ESM 1 (PDF, 152 kB)

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • Hui-Ying Lan 1,2,3
  • Lin-Yang Wu 1,2,3
  • Xiao Zhang 1,2
  • Jin-Hua Tao 1,2
  • Xun-Yu Chen 1,2
  • Bing-Rui Wang 1,2,4
  • Yu-Qing Wang 1,2,4
  • Qi Guo 1,2
  • Yun-Ji Chen 1,2

  1. State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  2. Microprocessor Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  3. University of Chinese Academy of Sciences, Beijing, China
  4. Department of Computer Science, University of Science and Technology of China, Hefei, China