An Introduction to Compressed Sensing

  • Niklas Koep
  • Arash Behboodi
  • Rudolf Mathar
Chapter
Part of the Applied and Numerical Harmonic Analysis book series (ANHA)

Abstract

Compressed sensing and the many research activities associated with it can be seen as a framework for signal processing of low-complexity structures. A cornerstone of the underlying theory is the study of inverse problems with linear or nonlinear measurements. Whether the structure is sparsity, low-rankness, or another familiar notion of low complexity, the theory addresses necessary and sufficient conditions on the measurement process that guarantee signal reconstruction with efficient algorithms. This includes robustness to measurement noise and stability with respect to inaccuracies in the signal model. This introduction aims to give an overview of some of the most important results in this direction. After discussing various examples of low-complexity signal models, we introduce two approaches to linear inverse problems: one concerns the recovery of individual signals, the other the simultaneous recovery of all low-complexity signals. We focus in particular on the former setting, which gives rise to so-called nonuniform signal recovery problems, and discuss necessary and sufficient conditions for stable and robust signal reconstruction by convex optimization. Appealing to concepts from non-asymptotic random matrix theory, we then outline how several classes of random sensing matrices, which fully govern the measurement process, satisfy such sufficient conditions for signal recovery. Finally, we review some of the most prominent recovery algorithms proposed in the literature.
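
As a concrete illustration of the nonuniform recovery setting sketched above, the following minimal example recovers a sparse vector from Gaussian measurements by basis pursuit, i.e., by ℓ1-minimization subject to the measurement constraint. This is a sketch under stated assumptions rather than the chapter's own method: the dimensions, the random seed, and the use of the CVXPY modeling layer are illustrative choices.

```python
# Minimal sketch: nonuniform sparse recovery via basis pursuit,
#     min ||z||_1  subject to  A z = y,
# assuming numpy and cvxpy are installed. Dimensions are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 80, 10  # ambient dimension, number of measurements, sparsity

# Draw a k-sparse ground-truth signal with a random support.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Gaussian sensing matrix and noiseless linear measurements y = A x.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Basis pursuit as a convex program.
z = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm(z, 1)), [A @ z == y])
problem.solve()

print(f"relative recovery error: "
      f"{np.linalg.norm(z.value - x) / np.linalg.norm(x):.2e}")
```

With a number of Gaussian measurements m on the order of k·log(n/k), exact recovery of a fixed k-sparse signal succeeds with high probability, and the printed relative error sits near solver precision; reducing m well below this threshold makes the reconstruction break down.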

Acknowledgements

We would like to thank the anonymous reviewers and contributors to this book for their invaluable comments regarding this introduction.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Theoretische Informationstechnik, RWTH Aachen University, Aachen, Germany
