
Improving Deep Learning Accuracy with Noisy Autoencoders Embedded Perturbative Layers

  • Conference paper

Intelligent Computing Methodologies (ICIC 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9773)

Abstract

The autoencoder has been successfully used as an unsupervised learning framework to learn useful representations in deep learning tasks. Building on it, a wide variety of regularization techniques have been proposed, such as early stopping, weight decay and contraction. This paper presents a new training principle for autoencoders based on the denoising autoencoder and the dropout training method. We extend the denoising autoencoder by both partially corrupting the input pattern and adding noise to its hidden units. This kind of noisy autoencoder can be stacked to initialize deep learning architectures. Moreover, we show that in the fully noisy network the activations of the hidden units are sparser. Furthermore, the method significantly improves learning accuracy in classification experiments on benchmark data sets.
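The training principle described above, corrupting the input pattern and perturbing the hidden code before reconstructing the clean input, can be sketched in NumPy as follows. This is an illustrative sketch only: the activation function (sigmoid), cross-entropy reconstruction loss, untied weights, inverted-dropout scaling of the hidden noise, and all hyperparameter values are assumptions of this example, not details fixed by the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nae_step(x, params, corrupt_p=0.3, hidden_p=0.5, lr=0.1, rng=None):
    """One full-batch gradient step of a noisy autoencoder:
    masking noise on the input plus dropout-style noise on the
    hidden units, trained to reconstruct the *clean* input."""
    W1, b1, W2, b2 = params
    n = x.shape[0]
    x_tilde = x * (rng.random(x.shape) > corrupt_p)      # partially corrupt input
    h = sigmoid(x_tilde @ W1 + b1)                       # encode
    mask = (rng.random(h.shape) > hidden_p) / (1.0 - hidden_p)
    h_noisy = h * mask                                   # perturb hidden units
    y = sigmoid(h_noisy @ W2 + b2)                       # reconstruct
    # Sigmoid + cross-entropy gives the simple (y - x) output gradient.
    d_y = (y - x) / n
    d_W2 = h_noisy.T @ d_y
    d_b2 = d_y.sum(axis=0)
    d_h = (d_y @ W2.T) * mask * h * (1.0 - h)
    d_W1 = x_tilde.T @ d_h
    d_b1 = d_h.sum(axis=0)
    W1 -= lr * d_W1; b1 -= lr * d_b1                     # in-place SGD update
    W2 -= lr * d_W2; b2 -= lr * d_b2

def reconstruct(x, params):
    """Noise-free forward pass (consistent thanks to inverted-dropout scaling)."""
    W1, b1, W2, b2 = params
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

# Demo on synthetic binary data: a few prototypes with 5% bit-flip noise.
rng = np.random.default_rng(0)
proto = (rng.random((4, 16)) < 0.5).astype(float)
X = proto[rng.integers(0, 4, size=128)]
X = np.abs(X - (rng.random(X.shape) < 0.05))
params = [0.1 * rng.standard_normal((16, 8)), np.zeros(8),
          0.1 * rng.standard_normal((8, 16)), np.zeros(16)]
mse_before = np.mean((reconstruct(X, params) - X) ** 2)
for _ in range(500):
    nae_step(X, params, rng=rng)
mse_after = np.mean((reconstruct(X, params) - X) ** 2)
```

To stack such layers, as the paper proposes, one would train a noisy autoencoder on the data, then use its noise-free hidden activations as input to the next layer's noisy autoencoder, and so on, before supervised fine-tuning.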



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (61273225, 61273303, 61373109), the Program for Outstanding Young Science and Technology Innovation Teams in Higher Education Institutions of Hubei Province (No. T201202), and the National "Twelfth Five-Year" Plan for Science & Technology Support (2012BAC22B01).

Author information


Corresponding author

Correspondence to Xiaolong Zhang.



Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Xia, L., Zhang, X., Li, B. (2016). Improving Deep Learning Accuracy with Noisy Autoencoders Embedded Perturbative Layers. In: Huang, D.-S., Han, K., Hussain, A. (eds) Intelligent Computing Methodologies. ICIC 2016. Lecture Notes in Computer Science, vol 9773. Springer, Cham. https://doi.org/10.1007/978-3-319-42297-8_22

  • DOI: https://doi.org/10.1007/978-3-319-42297-8_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-42296-1

  • Online ISBN: 978-3-319-42297-8

  • eBook Packages: Computer Science, Computer Science (R0)
