Sparse representations using overcomplete dictionaries have found many signal processing applications. We present the main ways of formulating sparse approximation problems and discuss their advantages over classical orthogonal transforms. The foremost difficulty is the computation of sparse representations, since it amounts to finding the sparsest among the infinitely many solutions of an underdetermined linear system, a problem of combinatorial character. The most successful classes of algorithms are based on greedy approaches and convex relaxation. We describe in detail a representative algorithm from each class, namely Orthogonal Matching Pursuit and FISTA. In some circumstances, the algorithms are guaranteed to find the sparsest solution, and we present sets of conditions that ensure their success. In preparation for stating the dictionary learning problem, we discuss the advantages and drawbacks of learned dictionaries with respect to fixed ones. Since learning is based on training signals from the application at hand, adapted dictionaries have the potential for more faithful sparse representations, an advantage that outweighs the cost of the (mainly offline) extra computation.
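To make the greedy approach concrete, the following is a minimal sketch of Orthogonal Matching Pursuit in the standard formulation: given a dictionary D with (ideally normalized) columns called atoms and a target sparsity level s, it greedily selects the atom most correlated with the current residual, then refits the coefficients on the selected support by least squares. The function name and interface are illustrative, not taken from any particular library.

```python
import numpy as np

def omp(D, y, s):
    """Sketch of Orthogonal Matching Pursuit: approximate y ~ D @ x
    with at most s nonzero coefficients in x."""
    residual = y.astype(float).copy()
    support = []                      # indices of selected atoms
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(s):
        # Greedy step: atom most correlated with the residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Orthogonal step: least-squares fit on the current support,
        # which makes the new residual orthogonal to the chosen atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

With an orthonormal dictionary the greedy choice is exact, so `omp(np.eye(4), y, 2)` recovers the two largest entries of `y`; the interesting (and harder) case is an overcomplete D, where the success conditions mentioned above come into play.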
- 33. R. Chartrand, Shrinkage mappings and their induced penalty functions, in ICASSP, Florence (2014), pp. 1026–1029
- 144. Y.C. Pati, R. Rezaiifar, P.S. Krishnaprasad, Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition, in 27th Asilomar Conference on Signals, Systems and Computers, vol. 1 (1993), pp. 40–44