Part of the book series: Integrated Series in Information Systems (ISIS, volume 36)

Abstract

The main objective of this chapter is to discuss the modern deep learning techniques known as no-drop, dropout, and dropconnect in detail, and to provide programming examples that help you clearly understand these approaches. These techniques depend heavily on the stochastic gradient descent approach, which is also discussed in detail with simple iterative examples. These parametrized deep learning techniques also depend on two sets of parameters (weights), and the initial values of these parameters can significantly affect the resulting models; therefore, a simple approach that uses perceptual weights is presented to enhance classification accuracy and improve computing performance. This approach, called the perceptually inspired deep learning framework, incorporates edge-sharpening filters and their frequency responses for the classifier and connector parameters of the deep learning models. These filters preserve class characteristics and regularize the deep learning model parameters.
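
To make the terminology concrete, the following is a minimal sketch (not the chapter's code) that contrasts the three schemes named above, no-drop, dropout, and dropconnect, on a single sigmoid unit trained with stochastic gradient descent. The Bernoulli keep-probability p, the learning rate, and the toy data are illustrative assumptions rather than values taken from the chapter.

import numpy as np

rng = np.random.default_rng(0)

def sgd_step(x, y, w, lr=0.1, mode="no-drop", p=0.5):
    """One stochastic gradient descent update of a single sigmoid unit
    under the cross-entropy loss; a fresh mask is drawn at every step."""
    if mode == "dropout":
        m = rng.binomial(1, p, size=x.shape)   # mask the inputs/activations
        z, g = (m * x) @ w, m * x              # gradient sees the masked input
    elif mode == "dropconnect":
        M = rng.binomial(1, p, size=w.shape)   # mask individual weights
        z, g = x @ (M * w), M * x              # only unmasked weights are updated
    else:                                      # no-drop: plain forward pass
        z, g = x @ w, x
    y_hat = 1.0 / (1.0 + np.exp(-z))           # sigmoid activation
    return w - lr * (y_hat - y) * g            # dL/dw = (y_hat - y) * g

# Toy loop: learn a 2-D linear rule with dropconnect-style regularization.
w = rng.normal(scale=0.01, size=2)
for _ in range(2000):
    x = rng.normal(size=2)
    y = float(x[0] + x[1] > 0)
    w = sgd_step(x, y, w, mode="dropconnect")
print("learned weights:", w)

With a single output unit the two masks coincide in effect; the distinction matters when an activation feeds several downstream connections, since dropout drops all of them together while dropconnect drops each connection independently. The perceptually inspired initialization can be sketched in the same spirit: rather than drawing the initial classifier and connector weights at random, seed them from an edge-sharpening filter and its frequency response. The 3x3 kernel, the use of a two-dimensional DCT, and the flattening into weight vectors below are assumptions made for illustration, not the chapter's exact recipe.

def dct2(a):
    """Naive orthonormal 2-D DCT-II, written out to avoid extra dependencies."""
    n = a.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C @ a @ C.T

sharpen = np.array([[-1.0, -1.0, -1.0],
                    [-1.0,  9.0, -1.0],
                    [-1.0, -1.0, -1.0]])      # a common edge-sharpening kernel

w_classifier_init = sharpen.ravel()           # spatial-domain perceptual weights
w_connector_init  = dct2(sharpen).ravel()     # frequency-response perceptual weights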

Acknowledgements

I would like to thank Professor Bin Yu from the University of California, Berkeley, for giving me an opportunity to visit the Statistics Department and work on deep learning research. This work was carried out with Professor Bin Yu. I would also like to thank Dr. Jinzhu Jia from Peking University, who was a visiting scholar at the University of California, Berkeley, during this research, for his help in validating the SGD implementation.

Copyright information

© 2016 Springer Science+Business Media New York

Cite this chapter

Suthaharan, S. (2016). Deep Learning Models. In: Machine Learning Models and Algorithms for Big Data Classification. Integrated Series in Information Systems, vol 36. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7641-3_12
