Abstract
Sparse coding is a matrix factorization technique that models a target signal as a sparse linear combination of atoms (elementary signals) drawn from a fixed collection called a dictionary. It has become a popular paradigm in signal processing, statistics, and machine learning. This chapter introduces compressed sensing, sparse representation (sparse coding), tensor compressed sensing, and sparse PCA.
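To make the idea concrete, here is a minimal sketch (using NumPy; all names and the toy dimensions are illustrative, not from the chapter) of orthogonal matching pursuit, a standard greedy algorithm for computing a sparse code of a signal over a given dictionary:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of
    dictionary D whose sparse linear combination approximates y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit the coefficients on the selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy demo: recover a 2-sparse code over a random overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 50))
D /= np.linalg.norm(D, axis=0)        # normalize atoms to unit norm
x_true = np.zeros(50)
x_true[[7, 31]] = [1.5, -2.0]         # ground-truth sparse code
y = D @ x_true                        # observed signal
x_hat = omp(D, y, k=2)
print(np.linalg.norm(x_hat - x_true))
```

Because the refit step makes the residual orthogonal to all previously selected atoms, no atom is chosen twice; with a random unit-norm dictionary and such low sparsity, the true support is recovered with high probability.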
Copyright information
© 2019 Springer-Verlag London Ltd., part of Springer Nature
About this chapter
Cite this chapter
Du, K.-L., & Swamy, M. N. S. (2019). Compressed Sensing and Dictionary Learning. In: Neural Networks and Statistical Learning. Springer, London. https://doi.org/10.1007/978-1-4471-7452-3_18
Publisher Name: Springer, London
Print ISBN: 978-1-4471-7451-6
Online ISBN: 978-1-4471-7452-3
eBook Packages: Mathematics and Statistics (R0)