Unsupervised Deep Learning Architectures

Part of the book series: Studies in Big Data (SBD, volume 57)

Abstract

The cascade of layers in a deep learning architecture can be learnt in an unsupervised manner for tasks such as pattern analysis. A deep architecture can be trained one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine. Unsupervised deep learning algorithms are important because unlabeled data is far more abundant than labeled data. For applications with large volumes of unlabeled data, a two-step procedure is used: first, the deep neural network is pretrained in an unsupervised manner on the unlabeled data; second, a small portion of the data is manually labeled and then used for supervised fine-tuning of the network.
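
The two-step procedure can be sketched in a few lines. The snippet below is a minimal illustration, not the chapter's own code: it assumes scikit-learn and synthetic data, greedily pretrains two restricted Boltzmann machine layers (BernoulliRBM) on unlabeled data, and then trains a classifier on a small labeled subset. A full fine-tuning pass would additionally backpropagate through the pretrained layers; here only the top classifier is fitted, for brevity.

    # Minimal sketch of greedy layer-wise RBM pretraining followed by
    # supervised training on a small labeled subset (assumed example,
    # synthetic data; not the chapter's implementation).
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_unlabeled = rng.random((5000, 64))   # abundant unlabeled data in [0, 1]
    X_labeled = rng.random((200, 64))      # small manually labeled subset
    y_labeled = rng.integers(0, 10, size=200)

    # Step 1: unsupervised, greedy layer-wise pretraining with stacked RBMs.
    rbm1 = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=10, random_state=0)
    rbm2 = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10, random_state=0)
    h1 = rbm1.fit_transform(X_unlabeled)   # train layer 1, get hidden activations
    rbm2.fit(h1)                           # train layer 2 on layer-1 features

    # Step 2: supervised training on the small labeled subset, using the
    # pretrained layers as a feature extractor (full fine-tuning would also
    # update the RBM weights by backpropagation).
    features = rbm2.transform(rbm1.transform(X_labeled))
    clf = LogisticRegression(max_iter=1000).fit(features, y_labeled)
    print("train accuracy:", clf.score(features, y_labeled))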

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Wani, M.A., Bhat, F.A., Afzal, S., Khan, A.I. (2020). Unsupervised Deep Learning Architectures. In: Advances in Deep Learning. Studies in Big Data, vol 57. Springer, Singapore. https://doi.org/10.1007/978-981-13-6794-6_5
