
Incrementally Built Dictionary Learning for Sparse Representation

  • Ludovic Trottier
  • Brahim Chaib-draa
  • Philippe Giguère
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9489)

Abstract

Extracting sparse representations with Dictionary Learning (DL) methods has led to interesting image and speech recognition results. DL has recently been extended to supervised learning (SDL) by using the dictionary for feature extraction and classification. One challenge with SDL is imposing diversity so that more discriminative features are extracted. To this end, we propose Incrementally Built Dictionary Learning (IBDL), a supervised multi-dictionary learning approach. Unlike existing methods, IBDL maximizes diversity by optimizing the between-class residual error distance. It can easily be parallelized, since it learns the class-specific parameters independently. Moreover, we propose an incremental learning rule that improves the convergence guarantees of stochastic gradient descent under sparsity constraints. We evaluated our approach on benchmark digit and face recognition tasks, and obtained performance comparable to existing sparse representation and DL approaches.
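The classification scheme outlined in the abstract (one dictionary learned per class, independently, with decisions made by the smallest reconstruction residual) can be sketched as follows. This is not the authors' IBDL update rule; it is a minimal illustration under assumed choices, using scikit-learn's MiniBatchDictionaryLearning and OMP sparse coding as stand-ins for the paper's incremental, sparsity-constrained optimization. Function names and parameter values below are illustrative assumptions.

# Hypothetical sketch of per-class dictionary learning with residual-based
# classification. NOT the authors' IBDL algorithm; scikit-learn components
# stand in for the paper's incremental learning rule.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def fit_class_dictionaries(X, y, n_atoms=50, sparsity=5, seed=0):
    """Learn one dictionary per class, independently (hence parallelizable)."""
    dictionaries = {}
    for c in np.unique(y):
        learner = MiniBatchDictionaryLearning(
            n_components=n_atoms,
            transform_algorithm="omp",
            transform_n_nonzero_coefs=sparsity,
            random_state=seed,
        )
        learner.fit(X[y == c])                  # class-specific samples only
        dictionaries[c] = learner.components_   # shape: (n_atoms, n_features)
    return dictionaries

def predict_by_residual(X, dictionaries, sparsity=5):
    """Assign each sample to the class whose dictionary reconstructs it best."""
    classes = sorted(dictionaries)
    residuals = np.empty((X.shape[0], len(classes)))
    for j, c in enumerate(classes):
        D = dictionaries[c]
        codes = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=sparsity)
        residuals[:, j] = np.linalg.norm(X - codes @ D, axis=1)
    return np.array(classes)[residuals.argmin(axis=1)]

Because each class dictionary is fit only on that class's data, the per-class training loops are independent and can run in parallel, which is the property the abstract highlights.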

Keywords

Supervised dictionary learning · Sparse representation · Digit recognition · Face recognition


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Ludovic Trottier (1)
  • Brahim Chaib-draa (1)
  • Philippe Giguère (1)

  1. Department of Computer Science and Software Engineering, Université Laval, Québec, Canada
