
Structured Dictionaries

  • Chapter in Dictionary Learning Algorithms and Applications

Abstract

Endowing a dictionary with structure can be beneficial: the structure may model certain signals better and may speed up both the representation and learning processes, at the cost of some of the freedom of a general dictionary. We study several unrelated types of structure and present DL algorithms adapted to each. Sparse dictionaries assume that the atoms are sparse combinations of the columns of a matrix, usually a square transform; this is equivalent to factoring the dictionary as the product of a dense and a sparse matrix or, generalizing the concept, as a product of several sparse matrices. This structure can be seen as the ultimate pursuit of parsimony via sparsity. Dictionaries made of orthogonal blocks have several appealing properties, including better incoherence. Of particular interest is the case where a single block is used for the sparse representation, which makes sparse coding extremely fast thanks to its simplicity and parallelism. Shift-invariant dictionaries have the advantage of being insensitive to the way a long signal is cut into smaller patches for processing; they also admit fast representation algorithms based on the FFT. Separable dictionaries work with 2D signals without vectorization, using a pair of dictionaries instead of a single one; the representation is more economical and may be better suited to image processing. The concept generalizes to more than two dimensions by working with tensors, and we present a few theoretical notions that pave the way to a tensor DL. Finally, composite dictionaries have two components: one is learned off-line, as usual, while the other is learned directly on the set of signals to be processed. This slows the processing but can bring extra quality.
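The separable representation mentioned above can be sketched numerically. The snippet below is a minimal illustration, not the book's algorithm: it builds a pair of hypothetical dictionaries D1, D2 (sizes chosen arbitrarily), represents a 2D patch as X = D1 Z D2^T with a sparse coefficient matrix Z, and checks the standard identity vec(X) = (D2 ⊗ D1) vec(Z), which shows why the separable pair is more economical than the equivalent Kronecker dictionary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 16                        # hypothetical sizes: 8x8 patches, 8x16 dictionaries
D1 = rng.standard_normal((m, n))    # left dictionary (acts on rows)
D2 = rng.standard_normal((m, n))    # right dictionary (acts on columns)

# Sparse 2D coefficient matrix: only 4 nonzero entries.
Z = np.zeros((n, n))
Z[rng.integers(0, n, 4), rng.integers(0, n, 4)] = rng.standard_normal(4)

# Separable representation of a 2D patch, no vectorization needed.
X = D1 @ Z @ D2.T

# Equivalent vectorized form with a single Kronecker dictionary:
# vec(X) = (D2 kron D1) @ vec(Z), with column-major (Fortran-order) vec.
X_vec = np.kron(D2, D1) @ Z.reshape(-1, order="F")
assert np.allclose(X_vec, X.reshape(-1, order="F"))

# The separable pair stores 2*m*n entries; the Kronecker dictionary (m*n)^2.
print(D1.size + D2.size, np.kron(D2, D1).size)
```

Storing the pair (D1, D2) costs 256 entries here versus 16384 for the explicit Kronecker matrix, which is the economy the abstract refers to.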




Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Dumitrescu, B., Irofti, P. (2018). Structured Dictionaries. In: Dictionary Learning Algorithms and Applications. Springer, Cham. https://doi.org/10.1007/978-3-319-78674-2_7


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-78673-5

  • Online ISBN: 978-3-319-78674-2

  • eBook Packages: Engineering (R0)
