Cosparse Representations

  • Bogdan Dumitrescu
  • Paul Irofti

Abstract

So far we have approached dictionary learning from the perspective of Chap. 2, where we are interested in the few nonzero entries of the representations; in the literature this is also called the synthesis-based sparse representation model. Recent years have shown that approximation can be improved by focusing instead on the set of atoms that do not participate in the signal representation. If the sparse DL quest is to learn a dictionary able to identify the low-dimensional subspace that is the true origin of a given class of signals, in this analysis-based cosparse representation model we are interested in finding its null-space complement. Throughout this chapter we look at the representation and learning challenges posed by the cosparse domain and compare them with similar obstacles encountered by its sparse sibling.
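To make the analysis view concrete, the following minimal sketch (not code from the chapter; the names Omega, y, and Lambda are illustrative) generates a signal that is cosparse with respect to an analysis operator and then identifies its cosupport, i.e. the rows of the operator that do not participate in the representation.

```python
# Minimal sketch of the cosparse analysis model (illustrative, assumed names).
import numpy as np

rng = np.random.default_rng(0)

m, p, ell = 16, 24, 10                 # signal size, analysis atoms, cosparsity

Omega = rng.standard_normal((p, m))    # analysis dictionary (rows = atoms)
Lambda = rng.choice(p, ell, replace=False)   # chosen cosupport

# Project a random vector onto the null space of the cosupport rows so that
# Omega[Lambda] @ y = 0: these ell atoms do not "see" the signal.
O_L = Omega[Lambda]
P_null = np.eye(m) - np.linalg.pinv(O_L) @ O_L
y = P_null @ rng.standard_normal(m)

# Recover the cosupport as the (near-)zero entries of the analysis product.
z = Omega @ y
cosupport = np.flatnonzero(np.abs(z) < 1e-10)
print("cosparsity (zeros in Omega @ y):", cosupport.size)
print("cosupport recovered:", np.array_equal(np.sort(Lambda), cosupport))
```

In contrast with the synthesis model, where the signal is built from a few active atoms of a dictionary, here the signal is characterized by the many atoms whose inner product with it vanishes; the cosupport rows span the orthogonal complement of the subspace containing the signal.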


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Bogdan Dumitrescu¹
  • Paul Irofti²
  1. Department of Automatic Control and Systems Engineering, Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania
  2. Department of Computer Science, Faculty of Mathematics and Computer Science, University of Bucharest, Bucharest, Romania
