
Flavors of Compressive Sensing

  • Conference paper
Approximation Theory XV: San Antonio 2016 (AT 2016)

Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 201)


Abstract

About a decade ago, a couple of groundbreaking articles revealed the possibility of faithfully recovering high-dimensional signals from some seemingly incomplete information about them. Perhaps more importantly, practical procedures to perform the recovery were also provided. These realizations had a tremendous impact in science and engineering. They gave rise to a field called ‘compressive sensing,’ which is now in a mature state and whose foundations rely on an elegant mathematical theory. This survey presents an overview of the field, accentuating elements from approximation theory, but also highlighting connections with other disciplines that have enriched the theory, e.g., statistics, sampling theory, probability, optimization, metagenomics, graph theory, frame theory, and Banach space geometry.


Notes

  1.

    Although the problem is stated in the complex setting, our account will often be presented in the real setting. There are almost no differences in the theory, but this side step avoids discrepancy with existing literature concerning, e.g., Gelfand widths.

  2.

    This is illustrated in the reproducible MATLAB file found on the author’s webpage.

  3.

    A ‘weaker’ formulation asks for the estimate \(\Vert \boldsymbol{x} - \Delta(\mathsf{A}\boldsymbol{x}) \Vert_1 \le C\, \sigma_s(\boldsymbol{x})_1\) for all vectors \(\boldsymbol{x} \in \mathbb{C}^N\); the quantity \(\sigma_s(\boldsymbol{x})_1\) is spelled out just after these notes.

  4.

    Arguably, orthogonal matching pursuit may require an estimate of the sparsity level when no estimate of the magnitude of the measurement error is available; a minimal sketch of the algorithm and its two stopping rules is given after these notes.

  5.

    For instance, \(\ell_1\)-magic, NESTA, and YALL1 are freely available online; a generic linear-programming formulation of the underlying \(\ell_1\)-minimization is sketched after these notes.

  6.

    See also the reproducible MATLAB file for a numerical illustration.
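
For completeness regarding note 3: \(\sigma_s(\boldsymbol{x})_1\) denotes the error of best \(s\)-term approximation in the \(\ell_1\)-norm, defined in the standard compressive sensing convention by

\[
\sigma_s(\boldsymbol{x})_1 := \min \big\{ \Vert \boldsymbol{x} - \boldsymbol{z} \Vert_1 : \boldsymbol{z} \in \mathbb{C}^N \text{ is } s\text{-sparse} \big\},
\]

so that the ‘weaker’ estimate reads \(\Vert \boldsymbol{x} - \Delta(\mathsf{A}\boldsymbol{x}) \Vert_1 \le C\, \sigma_s(\boldsymbol{x})_1\) for all \(\boldsymbol{x} \in \mathbb{C}^N\), with \(\Delta\) denoting the reconstruction map.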
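
To make note 4 concrete, the following is a minimal Python sketch of orthogonal matching pursuit; it is not the reproducible file mentioned above, and the function name, parameters, and test data are purely illustrative. It shows that the stopping rule relies either on the sparsity level s or on a tolerance tied to the magnitude of the measurement error.

    import numpy as np

    def omp(A, y, s=None, tol=None):
        """Orthogonal matching pursuit: stop after s iterations if the sparsity
        level s is given, or once the residual norm drops below tol otherwise."""
        m, N = A.shape
        residual = y.astype(float).copy()
        support = []
        coeffs = np.zeros(0)
        max_iter = s if s is not None else m
        for _ in range(max_iter):
            # index whose column is most correlated with the current residual
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # re-fit by least squares on the enlarged support
            coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coeffs
            r_norm = np.linalg.norm(residual)
            if (tol is not None and r_norm <= tol) or r_norm < 1e-12:
                break
        x = np.zeros(N)
        x[support] = coeffs
        return x

    # toy check: a 5-sparse vector and 40 Gaussian measurements in dimension 200
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 200)) / np.sqrt(40)
    x_true = np.zeros(200)
    x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
    x_hat = omp(A, A @ x_true, s=5)
    print("OMP recovery error:", np.linalg.norm(x_hat - x_true))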
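
Note 5 names dedicated solvers whose interfaces are not reproduced here. Purely as an illustration of the kind of problem they solve, the underlying \(\ell_1\)-minimization (basis pursuit) can be recast as a linear program via the textbook splitting x = u - v with u, v >= 0 and handed to a generic solver; the sketch below assumes SciPy is available, and the test data are arbitrary.

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, y):
        """Solve min ||x||_1 subject to A x = y via the split x = u - v, u, v >= 0."""
        N = A.shape[1]
        c = np.ones(2 * N)            # objective: sum(u) + sum(v) = ||x||_1
        A_eq = np.hstack([A, -A])     # equality constraint: A u - A v = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
        u, v = res.x[:N], res.x[N:]
        return u - v

    # toy check: a 4-sparse vector and 30 Gaussian measurements in dimension 100
    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 100)) / np.sqrt(30)
    x_true = np.zeros(100)
    x_true[rng.choice(100, 4, replace=False)] = rng.standard_normal(4)
    x_bp = basis_pursuit(A, A @ x_true)
    print("basis pursuit recovery error:", np.linalg.norm(x_bp - x_true))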


Acknowledgements

I thank the organizers of the International Conference on Approximation Theory for running this important series of triennial meetings. It was a plenary address by Ron DeVore at the 2007 meeting that drew me into the subject of compressive sensing. His talk was entitled ‘A Taste of Compressed Sensing,’ and my title is clearly a reference to his. Furthermore, I acknowledge support from the NSF under grant DMS-1622134. Finally, I am also indebted to the AIM SQuaRE program for funding and hosting a collaboration on one-bit compressive sensing.

Author information


Corresponding author

Correspondence to Simon Foucart.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Foucart, S. (2017). Flavors of Compressive Sensing. In: Fasshauer, G., Schumaker, L. (eds) Approximation Theory XV: San Antonio 2016. AT 2016. Springer Proceedings in Mathematics & Statistics, vol 201. Springer, Cham. https://doi.org/10.1007/978-3-319-59912-0_4

