A Roadmap to Deep Learning: A State-of-the-Art Step Towards Machine Learning

  • Conference paper
Advanced Informatics for Computing Research (ICAICR 2018)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 955)

Abstract

Deep learning is a new era of machine learning, belonging to the field of artificial intelligence, that attempts to mimic the way the human brain works. Deep learning models can handle high-dimensional data and perform complex tasks accurately, aided by graphics processing units (GPUs). They achieve significant performance in analyzing images, videos, text and speech. This paper presents a detailed comparison of various deep learning models and the areas in which each can be applied. We also compare various deep networks for classification. The paper further describes deep learning libraries, along with the platforms and interfaces in which they can be used. Accuracy is evaluated for various machine learning and deep learning models on the MNIST dataset; the evaluation shows that classification with a deep learning model is far better than with a machine learning model.
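The kind of comparison the abstract describes can be sketched in a few lines: train a classical machine learning model and a small neural network on the same digit-classification task and compare test accuracy. This is a minimal illustrative sketch, not the paper's actual experimental setup; to stay self-contained and offline it uses scikit-learn's bundled 8x8 digits dataset as a stand-in for MNIST, and the model choices and hyperparameters are assumptions.

```python
# Compare a classical ML baseline with a small neural network on a
# digit-classification task (scikit-learn's 8x8 digits dataset, used
# here as a small offline stand-in for MNIST).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
# Scale pixel values to [0, 1] and hold out a test split.
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.25, random_state=0)

# Classical baseline: multinomial logistic regression.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Neural model: a small multilayer perceptron with two hidden layers.
mlp = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500,
                    random_state=0).fit(X_train, y_train)

print(f"logistic regression accuracy: {logreg.score(X_test, y_test):.3f}")
print(f"MLP accuracy:                 {mlp.score(X_test, y_test):.3f}")
```

On a full-scale benchmark such as MNIST, the gap typically widens further in favor of deep models (especially convolutional networks), which is the trend the paper's evaluation reports.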



Author information

Correspondence to Dweepna Garg.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Garg, D., Goel, P., Kandaswamy, G., Ganatra, A., Kotecha, K. (2019). A Roadmap to Deep Learning: A State-of-the-Art Step Towards Machine Learning. In: Luhach, A., Singh, D., Hsiung, PA., Hawari, K., Lingras, P., Singh, P. (eds) Advanced Informatics for Computing Research. ICAICR 2018. Communications in Computer and Information Science, vol 955. Springer, Singapore. https://doi.org/10.1007/978-981-13-3140-4_15


  • DOI: https://doi.org/10.1007/978-981-13-3140-4_15

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-3139-8

  • Online ISBN: 978-981-13-3140-4

  • eBook Packages: Computer Science (R0)
