Optimizing Dictionary Size

  • Bogdan Dumitrescu
  • Paul Irofti
Chapter

Abstract

Until now, the number of atoms was an input parameter of the DL algorithms; its choice was left to the user, usually leading to a trial-and-error approach. We discuss here possible ways to optimize the number of atoms. The most common way to pose the problem is to impose a certain representation error and attempt to find the smallest dictionary that can ensure that error. The algorithms solving this problem use the sparse coding and dictionary update ideas of the standard algorithms, but add and remove atoms during the DL iterations. They start either with a small number of atoms, then try to add new atoms that significantly reduce the error, or with a large number of atoms, then remove the less useful ones; the growing strategy seems more successful and is likely to have the lowest complexity. Working on a general DL structure for designing dictionaries with variable size, we present some of the algorithms with the best results, in particular Stagewise K-SVD and DLENE (DL with efficient number of elements); the first also serves as the basis for an initialization algorithm that leads to better results than the typical random initializations. We present the main ideas of a few other methods, focusing on those based on clustering, in particular on the mean shift algorithm. Finally, we discuss how OMP can be modified to reduce the number of atoms without degrading the quality of the representation too much.
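
As a rough illustration of the growing strategy summarized above, the sketch below alternates a plain OMP sparse coding step with a simple atom-addition step: while the target error is not met, new atoms are drawn from the worst-represented training signals. This is only a minimal NumPy sketch under our own simplifying assumptions (per-signal OMP, a naive growth heuristic, dictionary update omitted); it is not the Stagewise K-SVD or DLENE procedure itself.

    import numpy as np

    def omp(D, y, sparsity):
        # Plain Orthogonal Matching Pursuit for one signal (illustrative only).
        residual, support = y.copy(), []
        x = np.zeros(D.shape[1])
        for _ in range(sparsity):
            k = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
            support.append(k)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef          # refit on the support
        x[support] = coef
        return x

    def grow_dictionary(Y, D, X, err_target, n_new=2):
        # If the RMSE target is not met, append atoms taken from the
        # worst-represented signals (a naive growth heuristic, assumed here).
        E = Y - D @ X
        col_err = np.linalg.norm(E, axis=0)
        if np.sqrt(np.mean(col_err ** 2)) <= err_target:
            return D                                     # current size suffices
        worst = np.argsort(col_err)[-n_new:]
        new_atoms = Y[:, worst] / np.linalg.norm(Y[:, worst], axis=0)
        return np.hstack((D, new_atoms))

    # One round of the growing loop: sparse coding, then (possibly) more atoms.
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((16, 200))                   # training signals
    D = rng.standard_normal((16, 8))
    D /= np.linalg.norm(D, axis=0)                       # normalized atoms
    X = np.column_stack([omp(D, Y[:, j], sparsity=3) for j in range(Y.shape[1])])
    D = grow_dictionary(Y, D, X, err_target=0.1)

The shrinking strategy would work in the opposite direction, starting from a large dictionary and discarding the less useful atoms.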

References

  1. M. Aharon, M. Elad, A. Bruckstein, K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54(11), 4311–4322 (2006)
  2. H.P. Dang, P. Chainais, Towards dictionaries of optimal size: a Bayesian non parametric approach. J. Signal Process. Syst. 1–12 (2016). https://doi.org/10.1007/s11265-016-1154-1
  3. B. Dumitrescu, P. Irofti, Low dimensional subspace finding via size-reducing dictionary learning, in International Workshop on Machine Learning for Signal Processing, Vietri sul Mare (2016)
  4. M. Elad, I. Yavneh, A plurality of sparse representations is better than the sparsest one alone. IEEE Trans. Inf. Theory 55(10), 4701–4714 (2009)
  5. O.D. Escoda, L. Granai, P. Vandergheynst, On the use of a priori information for sparse signal approximations. IEEE Trans. Signal Process. 54(9), 3468–3482 (2006)
  6. J. Feng, L. Song, X. Yang, W. Zhang, Sub clustering K-SVD: size variable dictionary learning for sparse representations, in 16th IEEE International Conference on Image Processing (2009), pp. 2149–2152
  7. J. Feng, L. Song, X. Yang, W. Zhang, Learning dictionary via subspace segmentation for sparse representation, in 18th IEEE International Conference on Image Processing (2011), pp. 1245–1248
  8. M. Marsousi, K. Abhari, P. Babyn, J. Alirezaie, An adaptive approach to learn overcomplete dictionaries with efficient numbers of elements. IEEE Trans. Signal Process. 62(12), 3272–3283 (2014)
  9. R. Mazhar, P.D. Gader, EK-SVD: optimized dictionary design for sparse representations, in 19th International Conference on Pattern Recognition (2008), pp. 1–4
  10. I. Ramirez, G. Sapiro, An MDL framework for sparse coding and dictionary learning. IEEE Trans. Signal Process. 60(6), 2913–2927 (2012)
  11. N. Rao, F. Porikli, A clustering approach to optimize online dictionary learning, in International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, March 2012, pp. 1293–1296
  12. C. Rusu, B. Dumitrescu, Stagewise K-SVD to design efficient dictionaries for sparse representations. IEEE Signal Process. Lett. 19(10), 631–634 (2012)
  13. C. Rusu, B. Dumitrescu, An initialization strategy for the dictionary learning problem, in International Conference on Acoustics, Speech and Signal Processing, Florence (2014), pp. 6731–6735
  14. J. Scarlett, J.S. Evans, S. Dey, Compressed sensing with prior information: information-theoretic limits and practical decoders. IEEE Trans. Signal Process. 61(2), 427–439 (2013)
  15. M. Yaghoobi, T. Blumensath, M.E. Davies, Dictionary learning for sparse approximations with the majorization method. IEEE Trans. Signal Process. 57(6), 2178–2191 (2009)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Bogdan Dumitrescu (1)
  • Paul Irofti (2)
  1. Department of Automatic Control and Systems Engineering, Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania
  2. Department of Computer Science, Faculty of Mathematics and Computer Science, University of Bucharest, Bucharest, Romania