Deep Dictionary Learning vs Deep Belief Network vs Stacked Autoencoder: An Empirical Analysis

  • Vanika Singhal
  • Anupriya Gogna
  • Angshul Majumdar (email author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9950)

Abstract

A recent work introduced the concept of deep dictionary learning. At the first level, a dictionary learning stage takes the training data as input and produces a dictionary and the learned coefficients as output. At each subsequent level, the coefficients learned at the previous level act as the inputs. This is an unsupervised representation learning technique. In this work we empirically compare and contrast it with two similar deep representation learning techniques – the deep belief network and the stacked autoencoder. We examine two aspects: first, the robustness of the learning tool in the presence of noise, and second, its robustness with respect to variations in the number of training samples. The experiments have been carried out on several benchmark datasets. We find that deep dictionary learning is the most robust of the three.
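The layer-wise procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `DictionaryLearning` as the per-level dictionary learner, and the function name, layer sizes, and solver settings are all assumptions made here for clarity.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def deep_dictionary_learning(X, layer_sizes, random_state=0):
    """Greedy layer-wise deep dictionary learning (sketch).

    X           : (n_samples, n_features) training data.
    layer_sizes : number of dictionary atoms at each level.
    Returns the list of per-level dictionaries and the deepest
    coefficient representation.
    """
    dictionaries = []
    Z = X  # level-1 input is the training data itself
    for n_atoms in layer_sizes:
        dl = DictionaryLearning(
            n_components=n_atoms,
            transform_algorithm="lasso_lars",
            max_iter=10,
            random_state=random_state,
        )
        # The coefficients learned at this level become the
        # input to the next level, as in the paper's scheme.
        Z = dl.fit_transform(Z)
        dictionaries.append(dl.components_)
    return dictionaries, Z
```

The deepest coefficients `Z` would then serve as the unsupervised feature representation fed to a classifier.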

Keywords

Deep learning · Dictionary learning · Classification

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Vanika Singhal (1)
  • Anupriya Gogna (1)
  • Angshul Majumdar (1, email author)

  1. Indraprastha Institute of Information Technology, Delhi, India