Abstract
The autoencoder has been successfully used as an unsupervised learning framework to learn useful representations for deep learning tasks. Building on it, a wide variety of regularization techniques have been proposed, such as early stopping, weight decay, and contraction. This paper presents a new training principle for autoencoders that combines the denoising autoencoder with the dropout training method. We extend the denoising autoencoder by both partially corrupting the input pattern and adding noise to its hidden units. Such noisy autoencoders can be stacked to initialize deep learning architectures. Moreover, we show that in the fully noisy network the activations of the hidden units are sparser. The method also significantly improves accuracy in classification experiments on benchmark data sets.
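The core idea of the abstract, a denoising autoencoder whose hidden units also receive dropout-style noise during training, can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's exact setup: the class and parameter names (`input_drop`, `hidden_drop`), the tied-weight decoder, and the squared-error objective are all our assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class NoisyAutoencoder:
    """Sketch of a denoising autoencoder with masking noise on the input
    AND dropout-style noise on the hidden layer, trained with tied
    weights. Hyperparameters and names are illustrative assumptions."""

    def __init__(self, n_vis, n_hid, input_drop=0.3, hidden_drop=0.5,
                 lr=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b_h = np.zeros(n_hid)
        self.b_v = np.zeros(n_vis)
        self.input_drop = input_drop
        self.hidden_drop = hidden_drop
        self.lr = lr

    def reconstruct(self, x):
        """Noise-free forward pass, used for evaluation."""
        h = sigmoid(x @ self.W + self.b_h)
        return sigmoid(h @ self.W.T + self.b_v)

    def train_step(self, x):
        """One SGD step on a mini-batch x of shape (batch, n_vis)."""
        # 1) partially corrupt the input (masking noise)
        x_tilde = x * (self.rng.random(x.shape) > self.input_drop)
        # 2) encode, then add noise to the hidden units (dropout mask)
        h = sigmoid(x_tilde @ self.W + self.b_h)
        mask = (self.rng.random(h.shape) > self.hidden_drop).astype(float)
        h_noisy = h * mask
        # 3) decode with tied weights; the target is the CLEAN input x
        x_hat = sigmoid(h_noisy @ self.W.T + self.b_v)
        # squared-error loss; backprop passes through both noise masks
        err = x_hat - x
        d_z = err * x_hat * (1.0 - x_hat)            # grad at decoder pre-activation
        d_h = (d_z @ self.W) * mask * h * (1.0 - h)  # grad at encoder pre-activation
        grad_W = d_z.T @ h_noisy + x_tilde.T @ d_h   # tied-weight gradient (both paths)
        self.W -= self.lr * grad_W / len(x)
        self.b_v -= self.lr * d_z.mean(axis=0)
        self.b_h -= self.lr * d_h.mean(axis=0)
        return float((err ** 2).mean())

# quick demo: noise-free reconstruction error drops after noisy training
rng = np.random.default_rng(1)
data = (rng.random((32, 16)) > 0.5).astype(float)
ae = NoisyAutoencoder(n_vis=16, n_hid=8)
before = float(((ae.reconstruct(data) - data) ** 2).mean())
for _ in range(500):
    ae.train_step(data)
after = float(((ae.reconstruct(data) - data) ** 2).mean())
```

Note that the reconstruction target is the clean input even though both the input and the hidden code are corrupted; this is what forces the network to learn representations that are robust to both kinds of perturbation.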
Acknowledgements
This work was supported in part by National Natural Science Foundation of China (61273225, 61273303, 61373109), Program for Outstanding Young Science and Technology Innovation Teams in Higher Education Institutions of Hubei Province (No. T201202), as well as National “Twelfth Five-Year” Plan for Science & Technology Support (2012BAC22B01).
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this paper
Xia, L., Zhang, X., Li, B. (2016). Improving Deep Learning Accuracy with Noisy Autoencoders Embedded Perturbative Layers. In: Huang, DS., Han, K., Hussain, A. (eds) Intelligent Computing Methodologies. ICIC 2016. Lecture Notes in Computer Science(), vol 9773. Springer, Cham. https://doi.org/10.1007/978-3-319-42297-8_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-42296-1
Online ISBN: 978-3-319-42297-8