
Stacked Auto-Encoders for Feature Extraction with Neural Networks

  • Conference paper
Bio-inspired Computing – Theories and Applications (BIC-TA 2016)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 681)

Abstract

Auto-encoders play an important role in feature extraction for deep learning architectures. In this paper, we present several variants of stacked auto-encoders for feature extraction with neural networks. These stacked auto-encoders can serve as biologically plausible filters that extract effective features to feed a neural network with a particular learning task. Experimental results on real datasets demonstrate that convolutional auto-encoders help a supervised neural network achieve the best classification or recognition performance.
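The abstract's pipeline, unsupervised stacked auto-encoders acting as feature extractors whose top-level code feeds a supervised network, can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the tied-weight, sigmoid-encoder design, the learning rate, and the helper names `train_autoencoder` and `stack_encode` are all illustrative assumptions, and greedy layer-wise pre-training is used because it is the standard way such stacks are trained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """Train one tied-weight auto-encoder layer (sigmoid encoder,
    linear decoder) by gradient descent on squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)                    # encoder bias
    c = np.zeros(n_in)                        # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)                # hidden code
        E = (H @ W.T + c) - X                 # reconstruction error
        dA = (E @ W) * H * (1.0 - H)          # back-prop through encoder
        W -= lr * (X.T @ dA + E.T @ H) / len(X)  # tied encoder+decoder grad
        b -= lr * dA.mean(axis=0)
        c -= lr * E.mean(axis=0)
    return W, b

def stack_encode(X, layer_sizes):
    """Greedy layer-wise pre-training: each auto-encoder is trained on the
    code produced by the previous one; returns the top-level features."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(H, n_hidden)
        params.append((W, b))
        H = sigmoid(H @ W + b)
    return H, params

# Extract 4-dimensional features from 16-dimensional toy data.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 16))
features, params = stack_encode(X, [8, 4])
print(features.shape)  # (100, 4)
```

In the setting the abstract describes, `features` would then serve as input to a supervised classifier, optionally followed by fine-tuning of the whole stack.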



Acknowledgments

This work was supported by the Natural Science Foundation of China under Grant 61171138.

Corresponding author

Correspondence to Jinwen Ma.


Copyright information

© 2016 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Liu, S., Zhang, C., Ma, J. (2016). Stacked Auto-Encoders for Feature Extraction with Neural Networks. In: Gong, M., Pan, L., Song, T., Zhang, G. (eds) Bio-inspired Computing – Theories and Applications. BIC-TA 2016. Communications in Computer and Information Science, vol 681. Springer, Singapore. https://doi.org/10.1007/978-981-10-3611-8_31

Download citation

  • DOI: https://doi.org/10.1007/978-981-10-3611-8_31

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-3610-1

  • Online ISBN: 978-981-10-3611-8

  • eBook Packages: Computer Science; Computer Science (R0)
