Performance Analysis of Deep Neural Network and Stacked Autoencoder for Image Classification

  • S. N. Shivappriya
  • R. Harikumar
Chapter
Part of the EAI/Springer Innovations in Communication and Computing book series (EAISICC)

Abstract

Real-time image classification poses several challenges, such as extracting features from noisy and uncertain data. Hand-crafted, task-specific feature extraction is not feasible in every case; to overcome this, automatic feature extraction is built into the layers of the deep neural network (DNN) and the stacked autoencoder (SAE), which improves classification accuracy and speed. In this paper the MNIST image dataset is trained and tested with two different networks. Training time and accuracy are measured for the MNIST images using the DNN algorithm. In parallel, a stacked autoencoder is constructed and trained one layer at a time. Here the SAE consists of three layers stacked together, and its parameters are tuned so that the constructed SAE outperforms the DNN model, improving validation-set accuracy by a noticeable margin. This paper demonstrates the effectiveness of the SAE model over the DNN through a performance analysis of binary handwritten images, considering the trade-off between time and accuracy.
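
The chapter page itself contains no code, but the layer-wise SAE training scheme described in the abstract can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the framework (TensorFlow/Keras), the 784-256-128-64 layer sizes, the optimizer, and the epoch counts are all assumptions chosen only to show greedy layer-by-layer pretraining followed by supervised fine-tuning with a softmax classifier.

```python
# Sketch: greedy layer-wise pretraining of a stacked autoencoder on MNIST,
# then stacking the pretrained encoders with a softmax layer and fine-tuning.
# Framework and all hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

sizes = [784, 256, 128, 64]  # assumed sizes for three encoding layers
encoders = []
inputs = x_train
for n_in, n_hid in zip(sizes[:-1], sizes[1:]):
    # Train one autoencoder on the previous layer's activations.
    ae = models.Sequential([
        layers.Input(shape=(n_in,)),
        layers.Dense(n_hid, activation="sigmoid", name="enc"),
        layers.Dense(n_in, activation="sigmoid"),
    ])
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(inputs, inputs, epochs=5, batch_size=128, verbose=0)
    enc = ae.get_layer("enc")
    encoders.append(enc)
    inputs = enc(inputs).numpy()  # features feeding the next layer

# Stack the pretrained encoders, add a softmax classifier, fine-tune end to end.
sae = models.Sequential([layers.Input(shape=(784,))] + encoders +
                        [layers.Dense(10, activation="softmax")])
sae.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
sae.fit(x_train, y_train, epochs=5, batch_size=128,
        validation_data=(x_test, y_test))
```

Freezing the pretrained encoders during an initial classifier-only pass, then unfreezing for full fine-tuning, is a common variant of this scheme; the single end-to-end fit above is the simplest form.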

Keywords

Artificial neural network (ANN) · Stacked autoencoder (SAE) · Deep neural network (DNN)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Electronics and Communication Engineering, Kumaraguru College of Technology, Coimbatore, India
  2. Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam, India