Abstract
Endowing the dictionary with structure can be beneficial: it may model certain signals better and speed up the representation and learning processes, at the cost of some of the freedom of a general dictionary. We study here several distinct types of structure and present dictionary learning (DL) algorithms adapted to each. Sparse dictionaries assume that the atoms are sparse combinations of the columns of a matrix, usually those of a square transform. This is equivalent to factoring the dictionary as a product of a dense and a sparse matrix or, generalizing the concept, a product of several sparse matrices; the structure can be seen as the ultimate approach to parsimony via sparsity. Dictionaries made of orthogonal blocks have several appealing properties, including lower coherence. Of particular interest is the case where a single block is used for the sparse representation, which makes sparse coding extremely fast because of its simplicity and parallelism. Shift-invariant dictionaries have the advantage of being insensitive to the way a long signal is cut into smaller patches for processing; they also admit fast representation algorithms based on the FFT. Separable dictionaries work with 2D signals without vectorization, using a pair of dictionaries instead of a single one; the representation is more economical and may be better suited to image processing. The concept generalizes to more than two dimensions, working with tensors, and we present a few theoretical notions that pave the way to tensor DL. Finally, composite dictionaries have two components: one is learned offline, as usual, while the other is learned directly on the set of signals to be processed. This slows processing but can bring extra quality.
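To make the sparse-dictionary factorization concrete, the following minimal sketch (our illustration, not the chapter's algorithm; the sizes and names are assumptions) builds D = B A with a fixed orthonormal base B and a matrix A whose columns are sparse, so that every atom is a sparse combination of base columns:

    import numpy as np

    # Sketch: a sparse dictionary D = B @ A, where B is a fixed orthonormal
    # base (a stand-in for a square transform such as the DCT) and each
    # column of A has only a few nonzeros, so every atom mixes few base columns.
    rng = np.random.default_rng(0)
    m, n, s_atom = 64, 128, 4                          # signal size, atoms, atom sparsity

    B = np.linalg.qr(rng.standard_normal((m, m)))[0]   # orthonormal base
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s_atom, replace=False)
        A[rows, j] = rng.standard_normal(s_atom)
        A[:, j] /= np.linalg.norm(A[:, j])             # unit-norm atoms (B is orthonormal)

    D = B @ A                                          # effective dictionary
    # Applying D costs one sparse product plus one (fast) transform, which is
    # the computational payoff of the structure.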
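The speed claim for a single orthogonal block can also be illustrated: when the block Q is orthogonal, the best s-term approximation of y simply keeps the s largest-magnitude entries of Q^T y, so sparse coding reduces to one matrix-vector product and a partial sort. A minimal sketch under these assumptions:

    import numpy as np

    def sparse_code_orthogonal(Q, y, s):
        # Exact coefficients, since Q is orthogonal: c = Q^T y.
        c = Q.T @ y
        x = np.zeros_like(c)
        keep = np.argpartition(np.abs(c), -s)[-s:]  # indices of the s largest |c_i|
        x[keep] = c[keep]                           # hard-threshold the rest to zero
        return x

    rng = np.random.default_rng(1)
    Q = np.linalg.qr(rng.standard_normal((64, 64)))[0]  # an orthogonal block
    y = rng.standard_normal(64)
    x = sparse_code_orthogonal(Q, y, s=6)
    print(np.linalg.norm(y - Q @ x))                # residual of the 6-term code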
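Finally, the separable representation and its vectorized equivalent: a 2D patch Y is modeled as Y ≈ D1 X D2^T, which is the same as vec(Y) ≈ (D2 ⊗ D1) vec(X) for the vectorized patch. A short numeric check, with made-up sizes:

    import numpy as np

    rng = np.random.default_rng(2)
    D1 = rng.standard_normal((8, 12))       # left dictionary (acts on patch rows)
    D2 = rng.standard_normal((8, 12))       # right dictionary (acts on patch columns)
    X = rng.standard_normal((12, 12))       # 2D coefficient matrix

    Y = D1 @ X @ D2.T                       # separable model of an 8x8 patch
    # Equivalent vectorized model: vec(Y) = (D2 kron D1) vec(X), column-major vec.
    vecY = np.kron(D2, D1) @ X.flatten(order='F')
    print(np.allclose(vecY, Y.flatten(order='F')))  # True: the two models coincide

The pair (D1, D2) stores 2 * 8 * 12 = 192 numbers, against 64 * 144 = 9216 for the equivalent unstructured Kronecker dictionary, which is the economy of representation mentioned in the abstract.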
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Dumitrescu, B., Irofti, P. (2018). Structured Dictionaries. In: Dictionary Learning Algorithms and Applications. Springer, Cham. https://doi.org/10.1007/978-3-319-78674-2_7
Print ISBN: 978-3-319-78673-5
Online ISBN: 978-3-319-78674-2