Various Frameworks and Libraries of Machine Learning and Deep Learning: A Survey

  • Zhaobin Wang
  • Ke Liu
  • Jian Li
  • Ying Zhu
  • Yaonan Zhang
Original Paper

Abstract

With the rapid development of deep learning in various fields, large companies and research teams have developed their own independent and distinctive tools. This paper surveys 18 common deep learning frameworks and libraries (Caffe, Caffe2, TensorFlow, Theano together with Keras, Lasagne, and Blocks, MXNet, CNTK, Torch, PyTorch, Pylearn2, Scikit-learn, MATLAB together with MatConvNet, MATLAB Deep Learning, and the Deep Learning Toolbox, Chainer, and Deeplearning4j) and presents a large amount of benchmarking data. In addition, we give overall scores for the eight current mainstream deep learning frameworks on six aspects (model design capability, interface properties, deployment capability, performance, framework design, and prospects for development). Based on our overview, deep learning researchers can choose appropriate development tools according to these evaluation criteria. In summarizing the 18 deep learning frameworks and libraries, we found that most deep learning tools are moving toward mobile platforms and that the role of ASICs is gradually emerging. We believe that future deep learning applications will be inseparable from ASIC support.

Acknowledgements

We would like to thank the associate editors and the reviewers for their valuable comments and suggestions.

Funding

This work was jointly supported by the National Natural Science Foundation of China (Grant No. 61201421) and the Fundamental Research Funds for the Central Universities of Lanzhou University (lzujbky-2017-187).

Compliance with Ethical Standards

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.


Copyright information

© CIMNE, Barcelona, Spain 2019

Authors and Affiliations

  1. School of Information Science and Engineering, Lanzhou University, Lanzhou, China
  2. Institute of Biology, Gansu Academy of Sciences, Lanzhou, China
  3. Cold and Arid Regions Environmental and Engineering Research Institute, Chinese Academy of Sciences, Lanzhou, China