Compressed Sensing and Dictionary Learning

Chapter in Neural Networks and Statistical Learning

Abstract

Sparse coding is a matrix factorization technique that models a target signal as a sparse linear combination of atoms (elementary signals) drawn from a dictionary (a fixed collection of such atoms). It has become a popular paradigm in signal processing, statistics, and machine learning. This chapter introduces compressed sensing, sparse representation/sparse coding, tensor compressed sensing, and sparse PCA.
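
To fix notation: given a dictionary D with unit-norm atoms as columns and a signal y, sparse coding seeks a coefficient vector x with few nonzero entries such that y ≈ Dx. The minimal Python/NumPy sketch below illustrates orthogonal matching pursuit (OMP), one standard greedy algorithm for this problem; the function name, variable names, and toy dimensions are our own illustrative choices, and a production solver would add stopping rules and numerical safeguards.

    import numpy as np

    def omp(D, y, k):
        """Greedy sketch: approximate y as a k-sparse combination
        of the columns (atoms) of D."""
        m, n = D.shape
        residual = y.copy()
        support = []                      # indices of selected atoms
        for _ in range(k):
            # Select the atom most correlated with the current residual.
            correlations = D.T @ residual
            support.append(int(np.argmax(np.abs(correlations))))
            # Re-fit all selected coefficients by least squares.
            Ds = D[:, support]
            coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
            residual = y - Ds @ coef
        x = np.zeros(n)
        x[support] = coef
        return x

    # Toy example: recover a 3-sparse vector from noiseless measurements.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)        # normalize atoms to unit norm
    x_true = np.zeros(256)
    x_true[[10, 50, 200]] = [1.5, -2.0, 0.7]
    y = D @ x_true
    x_hat = omp(D, y, k=3)
    print(np.nonzero(x_hat)[0])           # expected support: [10, 50, 200]

Each iteration picks the atom most correlated with the residual and then re-fits all selected coefficients by least squares, which keeps the residual orthogonal to the atoms already chosen.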



Author information

Correspondence to Ke-Lin Du.


Copyright information

© 2019 Springer-Verlag London Ltd., part of Springer Nature

About this chapter

Cite this chapter

Du, KL., Swamy, M.N.S. (2019). Compressed Sensing and Dictionary Learning. In: Neural Networks and Statistical Learning. Springer, London. https://doi.org/10.1007/978-1-4471-7452-3_18
