Kernel Dictionary Learning
Sparse representations are linear by construction, which can hinder their use in classification problems. Building feature vectors from the signals to be classified can overcome this difficulty; the construction is made implicit by employing kernels, functions that quantify the similarity between two vectors. DL can be extended to kernel form by assuming a specific parameterization of the dictionary. Kernel DL algorithms keep the usual structure, alternating sparse coding and dictionary update. We present the kernel versions of OMP and of the most common update algorithms: MOD, SGK, AK-SVD, and K-SVD. The kernel methods rely heavily on operations with a square kernel matrix whose size equals the number of training signals; hence, their complexities are significantly higher than those of the standard methods. We present two ideas for reducing the size of the problem, the most prominent being Nyström sampling. Finally, we show how kernel DL can be adapted to classification methods based on sparse representations, in particular SRC and discriminative DL.
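To make the kernel extension concrete, the following is a minimal sketch of kernel OMP in Python/NumPy. It assumes the common parameterization of the kernel dictionary as D = Φ(Y)A, where Φ is the (implicit) feature map, Y collects the training signals, and A is a coefficient matrix; under this form every correlation and least-squares step reduces to operations with the kernel matrix K = Φ(Y)ᵀΦ(Y). The function name and interface are illustrative, not the chapter's code.

```python
import numpy as np

def kernel_omp(K, k_y, A, sparsity):
    """Sketch of kernel OMP for a dictionary D = Phi(Y) A.

    K        : (N, N) kernel matrix of the N training signals
    k_y      : (N,) kernel vector between the test signal y and the training signals
    A        : (N, n) coefficient matrix defining the n atoms
    sparsity : number of atoms to select

    Assumes atoms are normalized in feature space, i.e. diag(A.T @ K @ A) = 1,
    so correlations can be compared without rescaling.
    """
    n = A.shape[1]
    support = []
    x = np.zeros(n)
    for _ in range(sparsity):
        # Correlations of atoms with the residual: D^T r = A^T (k_y - K A x)
        corr = A.T @ (k_y - K @ (A @ x))
        corr[support] = 0.0  # do not reselect atoms already in the support
        support.append(int(np.argmax(np.abs(corr))))
        # Least squares on the support: (A_S^T K A_S) x_S = A_S^T k_y
        A_S = A[:, support]
        x_S = np.linalg.solve(A_S.T @ K @ A_S, A_S.T @ k_y)
        x = np.zeros(n)
        x[support] = x_S
    return x
```

Note that the feature map never appears explicitly: once K and k_y are computed with any positive-definite kernel (e.g., Gaussian), the routine works entirely with N×N and N-sized quantities, which is also why its cost grows with the number of training signals rather than the signal dimension.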