Compressive Sensing and Algebraic Coding: Connections and Challenges


Part of the book series: Systems & Control: Foundations & Applications ((SCFA))

Abstract

Compressive sensing refers to the reconstruction of high-dimensional but low-complexity objects from relatively few measurements. Examples of such objects include high-dimensional but sparse vectors, large images with very few sharp edges, and high-dimensional matrices of low rank. One of the most popular methods for reconstruction is to solve a suitably constrained \(\ell _1\)-norm minimization problem, otherwise known as basis pursuit (BP). In this approach, a key role is played by the measurement matrix, which converts the high-dimensional but sparse vector (for example) into a low-dimensional real-valued measurement vector. The most widely used sufficient conditions for guaranteeing that BP recovers the unknown vector are the restricted isometry property (RIP) and the robust null space property (RNSP). It has recently been shown that the RIP implies the RNSP. There are two approaches to generating matrices that satisfy the RIP, namely probabilistic and deterministic. Probabilistic methods are older. In this approach, the measurement matrix consists of samples of a Gaussian or sub-Gaussian random variable. This approach leads to measurement matrices that are “order optimal,” in that the number of measurements required is within a constant factor of the optimum achievable. However, such matrices have no structure, which in practice leads to enormous storage requirements and CPU times. Recently, the emphasis has shifted to the use of sparse binary matrices, which require less storage and are much faster than randomly generated matrices. A recent trend has been the use of methods from algebraic coding theory, in particular expander graphs and low-density parity-check (LDPC) codes, to construct sparse binary measurement matrices. In this chapter, we first briefly summarize the known results on compressed sensing using both probabilistic and deterministic approaches.
We then introduce some new constructions of sparse binary measurement matrices based on LDPC codes, and describe some of our recent results that lead to the fastest available algorithms for compressive sensing in specific situations. We conclude by suggesting some interesting directions for future research.

This research was supported by the National Science Foundation, USA under Award #ECCS-1306630, and by the Department of Science and Technology, Government of India.


Notes

  1.

    Note that the base of the logarithm does not matter because it cancels out between the two \(\log \) terms.

  2.

    This is equivalent to the requirement that every row and every column of A contains at least two ones.

  3.

    If the leading coefficient of a polynomial is zero, then the degree would be less than r.

  4.

    This terminology is introduced in [6], with m/n denoted by \(\delta \) and k/m denoted by \(\rho \). Since these symbols are now used to denote different quantities in the compressed sensing literature, we use \(\theta \) and \(\phi \) instead.

  5.

    We thank Prof. David Donoho for providing the software to reproduce the curve.

  6.

    MATLAB codes are available from the authors.

  7.

    Such a random variable is said to be sub-Gaussian. A normal random variable satisfies (61) with \(c = 1/2\).

  8.

    In many papers on compressed sensing, especially those using Gaussian measurement matrices, the number of measurements m is not chosen in accordance with any theory, but simply picked out of the air.

References

  1. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1999.


  2. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.


  3. E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, December 2005.


  4. E. J. Candès, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, August 2006.


  5. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.


  6. D. L. Donoho, “For most large underdetermined systems of linear equations, the minimal \(\ell _1\)-norm solution is also the sparsest solution,” Communications on Pure and Applied Mathematics, vol. 59, no. 6, pp. 797–829, 2006.


  7. E. Candès, “The restricted isometry property and its implications for compressed sensing,” Comptes rendus de l’Académie des Sciences, Série I, vol. 346, pp. 589–592, 2008.


  8. A. Cohen, W. Dahmen, and R. DeVore, “Compressed sensing and best \(k\)-term approximation,” Journal of the American Mathematical Society, vol. 22, no. 1, pp. 211–231, January 2009.


  9. K. D. Ba, P. Indyk, E. Price, and D. P. Woodruff, “Lower bounds for sparse recovery,” in Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), January 2010, pp. 1190–1197.


  10. D. L. Donoho and J. Tanner, “Neighborliness of randomly projected simplices in high dimensions,” Proceedings of the National Academy of Sciences, vol. 102, pp. 9452–9457, July 2005.


  11. D. L. Donoho, “High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension,” Discrete and Computational Geometry, vol. 35, no. 4, pp. 617–652, May 2006.


  12. D. L. Donoho and J. Tanner, “Counting faces of randomly projected polytopes when the projection radically lowers dimension,” Journal of the American Mathematical Society, vol. 22, no. 1, pp. 1–53, January 2009.


  13. D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp, “Living on the edge: phase transitions in convex programs with random data,” Information and Inference, vol. 3, pp. 224–294, 2014.


  14. H. Monajemi, S. Jafarpour, M. Gavish, and D. Donoho, “Deterministic matrices matching the compressed sensing phase transitions of gaussian random matrices,” Proceedings of the National Academy of Sciences of the United States of America, vol. 110, no. 4, pp. 1181–1186, 2013.


  15. R. DeVore, “Deterministic construction of compressed sensing matrices,” Journal of Complexity, vol. 23, pp. 918–925, 2007.


  16. T. Cai and A. Zhang, “Sparse representation of a polytope and recovery of sparse signals and low-rank matrices,” IEEE Transactions on Information Theory, vol. 60, no. 1, pp. 122–132, 2014.


  17. R. Zhang and S. Li, “A proof of conjecture on restricted isometry property constants \({\delta }_{tk} (0 < t < \frac{4}{3})\),” IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1699–1705, March 2018.


  18. S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing. Springer-Verlag, 2013.


  19. A. S. Bandeira, E. Dobriban, D. G. Mixon, and W. F. Sawin, “Certifying the restricted isometry property is hard,” IEEE Transactions on Information Theory, vol. 59, no. 6, pp. 3448–3450, June 2013.


  20. W. Xu and B. Hassibi, “Compressed sensing over the Grassmann manifold: A unified analytical framework,” in Proceedings of the 46th Allerton Conference, 2008, pp. 562–567.


  21. S. Foucart, “Stability and robustness of \(\ell _1\)-minimizations with Weibull matrices and redundant dictionaries,” Linear Algebra and Its Applications, vol. 441, pp. 4–21, 2014.


  22. S. Ranjan and M. Vidyasagar, “Tight performance bounds for compressed sensing with conventional and group sparsity,” arXiv:1606.05889v2, 2018.

  23. S. Li, F. Gao, G. Ge, and S. Zhang, “Deterministic construction of compressed sensing matrices via algebraic curves,” IEEE Transactions on Information Theory, vol. 58, no. 8, pp. 5035–5041, August 2012.


  24. S. D. Howard, A. R. Calderbank, and S. J. Searle, “A fast reconstruction algorithm for deterministic compressive sensing using second-order Reed–Muller codes,” in Proceedings of the 42nd IEEE Annual Conference on Information Sciences and Systems, 2008, pp. 11–15.


  25. R. R. Naidu, P. Jampana, and C. S. Sastry, “Deterministic compressed sensing matrices: Construction via Euler squares and applications,” IEEE Transactions on Signal Processing, vol. 64, no. 14, pp. 3566–3575, July 2016.


  26. H. F. MacNeish, “Euler squares,” Annals of Mathematics, vol. 23, no. 3, pp. 221–227, March 1922.


  27. Y. Erlich, A. Gordon, M. Brand, G. J. Hannon, and P. P. Mitra, “Compressed genotyping,” IEEE Transactions on Information Theory, vol. 56, no. 2, pp. 706–723, 2010.


  28. R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, and M. J. Strauss, “Combining geometry and combinatorics: a unified approach to sparse signal recovery,” in Proceedings of the Forty-Sixth Annual Allerton Conference, 2008, pp. 798–805.


  29. P. Indyk and M. Ružić, “Near-optimal sparse recovery in the \(\ell _1\)-norm,” in Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2008, pp. 199–207.


  30. A. Gilbert and P. Indyk, “Sparse recovery using sparse matrices,” Proceedings of the IEEE, vol. 98, no. 6, pp. 937–947, June 2010.


  31. M. Lotfi and M. Vidyasagar, “A fast noniterative algorithm for compressive sensing using binary measurement matrices,” IEEE Transactions on Signal Processing, vol. 67, pp. 4079–4089, August 2019.


  32. V. Guruswami, C. Umans, and S. Vadhan, “Unbalanced expanders and randomness extractors from Parvaresh–Vardy codes,” Journal of the ACM, vol. 56, no. 4, pp. 20:1–20:34, 2009.


  33. A. G. Dimakis, R. Smarandache, and P. O. Vontobel, “LDPC codes for compressed sensing,” IEEE Transactions on Information Theory, vol. 58, no. 5, pp. 3093–3114, May 2012.


  34. X.-J. Liu and S.-T. Xia, “Reconstruction guarantee analysis of binary measurement matrices based on girth,” in Proceedings of the International Symposium on Information Theory, 2013, pp. 474–478.


  35. M. Lotfi and M. Vidyasagar, “Compressed sensing using binary matrices of nearly optimal dimensions,” arXiv:1808.03001, 2018.

  36. S. Hoory, “The size of bipartite graphs with a given girth,” Journal of Combinatorial Theory, Series B, vol. 86, pp. 215–220, 2002.


  37. K. Yang and T. Helleseth, “On the minimum distance of array codes as LDPC codes,” IEEE Transactions on Information Theory, vol. 49, no. 12, pp. 3268–3271, December 2003.


  38. J. L. Fan, “Array codes as LDPC codes,” in Proceedings of the 2nd International Symposium on Turbo Codes, 2000, pp. 543–546.


  39. S. Sarvotham, D. Baron, and R. G. Baraniuk, “Sudocodes – fast measurement and reconstruction of sparse signals,” in Proceedings of the International Symposium on Information Theory, 2006, pp. 2804–2808.


  40. W. Xu and B. Hassibi, “Efficient compressive sensing with deterministic guarantees using expander graphs,” in Proceedings of IEEE Information Theory Workshop, Lake Tahoe, 2007.


  41. S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank, “Efficient compressed sensing using optimized expander graphs,” IEEE Transactions on Information Theory, vol. 55, no. 9, pp. 4299–4308, 2009.


  42. Y. Wu and S. Verdú, “Optimal phase transitions in compressed sensing,” IEEE Transactions on Information Theory, vol. 58, no. 10, pp. 6241–6263, October 2012.


  43. D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18914–18919, 2009.


  44. D. L. Donoho and J. Tanner, “Precise undersampling theorems,” Proceedings of the IEEE, vol. 98, no. 6, pp. 913–924, June 2010.


  45. D. L. Donoho, A. Javanmard, and A. Montanari, “Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing,” IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7434–7464, November 2013.


  46. F. Krzakala, M. Mézard, F. Sausset, Y. Sun, and L. Zdeborová, “Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices,” Journal of Statistical Mechanics: Theory and Experiment, vol. 12, 2012.


  47. D. Donoho and J. Tanner, “Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing,” Philosophical Transactions of The Royal Society, Part A: Mathematical, Physical and Engineering Sciences, vol. 367, no. 1906, pp. 4273–4293, November 2009.


  48. D. L. Donoho and J. Tanner, “Counting the faces of randomly-projected hypercubes and orthants, with applications,” Discrete and Computational Geometry, vol. 43, no. 3, pp. 522–541, April 2010.


  49. M. Bayati, M. Lelarge, and A. Montanari, “Universality in polytope phase transitions and message passing algorithms,” arXiv:1207.7321v2, 2015.

  50. A. Khajehnejad, A. S. Tehrani, A. G. Dimakis, and B. Hassibi, “Explicit matrices for sparse approximation,” in Proceedings of the International Symposium on Information Theory, 2011, pp. 469–473.


  51. S. Arora, C. Daskalakis, and D. Steurer, “Message-passing algorithms and improved LP decoding,” in Proceedings of the 41st Annual ACM Symposium on the Theory of Computing, 2009, pp. 3–12.


  52. D. Achlioptas, “Database-friendly random projections: Johnson-Lindenstrauss with binary coins,” Journal of Computer and System Sciences, vol. 66, pp. 671–687, 2003.


  53. G. Xu and Z. Xu, “Compressed sensing matrices from Fourier matrices,” IEEE Transactions on Information Theory, vol. 61, no. 1, pp. 469–478, January 2015.


  54. L. Applebaum, S. D. Howard, S. Searle, and R. Calderbank, “Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery,” Applied and Computational Harmonic Analysis, vol. 26, pp. 283–290, 2009.



Acknowledgements

The contents of this chapter report various results from the doctoral thesis of the second author, carried out under the supervision of the first author. The authors thank Prof. David Donoho and Mr. Hatef Monajemi of Stanford University for their helpful suggestions on phase transitions, and for providing code that enabled us to reproduce their computational results. They also thank Prof. Phanindra Jampana of IIT Hyderabad for helpful discussions on the construction of Euler squares.

Corresponding author

Correspondence to Mathukumalli Vidyasagar.


Appendix


In this appendix, we compare the number of measurements used by probabilistic as well as deterministic methods to guarantee that the corresponding measurement matrix A satisfies the restricted isometry property (RIP), as stated in Theorem 1. Note that the number of measurements is computed from the best available sufficient condition. In principle, it is possible that matrices with fewer rows might also satisfy the RIP. But there would not be any theoretical justification for using such matrices.

In probabilistic methods, the number of measurements m is \(O(k \log (n/k))\). However, the O symbol hides a very large constant. It is possible to replace the O symbol with explicit constants by carefully collating the relevant theorems in [18]. This leads to the following explicit bounds.

Table 8 Best available bounds on the number of measurements for various choices of n and k, using both probabilistic and deterministic constructions. For the probabilistic constructions, the failure probability is \(\xi = 10^{-9}\); \(m_G\), \(m_{SG}\), and \(m_A\) denote, respectively, the bounds on the number of measurements using a Gaussian random variable, a sub-Gaussian random variable with \(c = 1/2\), and a bipolar (\(\pm 1\)) random variable with the bound of Achlioptas [52]. For the deterministic methods, \(m_D\) denotes the number of measurements using DeVore’s construction [15], while \(m_C\) denotes the number using chirp matrices [54]

Theorem 24

Suppose X is a random variable with zero mean and unit variance, and suppose in addition that there exists a constant c such that (see footnote 7)

$$\begin{aligned} E[ \exp (\theta X)] \le \exp (c \theta ^2) , \; \forall \theta \in {\mathbb R}. \end{aligned}$$
(61)

Define

$$\begin{aligned} \gamma = 2 , \zeta = 1/(4c) , \alpha = \gamma e^{- \zeta } + e^\zeta , \beta = \zeta , \end{aligned}$$
(62)
$$\begin{aligned} \tilde{c}:= \frac{ \beta ^2 }{ 2 ( 2 \alpha + \beta ) } . \end{aligned}$$
(63)

Suppose an integer k and real numbers \(\delta , \xi \in (0,1)\) are specified, and that \(A = (1/\sqrt{m}) \varPhi \), where \(\varPhi \in {\mathbb R}^{m \times n}\) consists of independent samples of X. Then, A satisfies the RIP of order k with constant \(\delta \) with probability \(\ge 1 - \xi \) provided

$$\begin{aligned} m \ge \frac{1}{ \tilde{c}\delta ^2 } \left( \frac{4}{3} k \ln \frac{en}{k} + \frac{14k}{3} + \frac{4}{3} \ln \frac{2}{\xi } \right) . \end{aligned}$$
(64)

In (64), the number of measurements m is indeed \(O(k \log (n/k))\). However, for realistic values of n and k, the bound on m would be comparable to, or even exceed, n, which would render “compressed” sensing meaningless (see footnote 8). For “pure” Gaussian variables, it is possible to find improved bounds for m (see Theorem 2, which is based on [18, Theorem 9.27]). Also, for binary random variables where X equals \(\pm 1\) with equal probability, another set of bounds is available [52]. While all of these bounds are \(O(k \log (n/k))\), in practical situations they are not useful.
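To make the remark about realistic values concrete, the bound (64), with the constants defined in (62) and (63), can be evaluated numerically. The following sketch is our own illustration; the function name and the default sub-Gaussian constant \(c = 1/2\) (the Gaussian value from footnote 7) are our choices, not part of the chapter:

```python
import math

def sub_gaussian_bound(n, k, delta, xi, c=0.5):
    """Smallest integer m satisfying the RIP bound (64), using the
    constants gamma, zeta, alpha, beta, c-tilde from (62)-(63)."""
    gamma = 2.0
    zeta = 1.0 / (4.0 * c)
    alpha = gamma * math.exp(-zeta) + math.exp(zeta)
    beta = zeta
    c_tilde = beta ** 2 / (2.0 * (2.0 * alpha + beta))
    # Right-hand side of (64)
    m = (1.0 / (c_tilde * delta ** 2)) * (
        (4.0 / 3.0) * k * math.log(math.e * n / k)
        + 14.0 * k / 3.0
        + (4.0 / 3.0) * math.log(2.0 / xi)
    )
    return math.ceil(m)
```

For instance, with \(n = 20{,}000\), \(k = 100\), a representative \(\delta = 1/3\), and \(\xi = 10^{-9}\), the resulting bound on m is several hundred thousand, far exceeding n itself, which illustrates why these bounds are not useful in practice.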

This suggests that it is worthwhile to study deterministic methods for generating measurement matrices that satisfy the RIP. There are very few such methods; indeed, the authors are aware of only three. The paper [15] uses a finite-field method to construct a binary matrix, and this method is used in the present chapter. The paper [53] gives a procedure for choosing rows from a unitary Fourier matrix such that the resulting matrix satisfies the RIP; this method leads to the same values for the number of measurements m as that in [15]. Constructing partial Fourier matrices is an important part of reconstructing time-domain sparse signals from a limited number of frequency measurements (or vice versa), so the results of [53] can be used in this situation. In both of these methods, m equals \(q^2\), where q is an appropriately chosen prime number. Finally, in [54] a method is given based on chirp matrices; in this case, m equals a prime number q. Note that the partial Fourier matrix and the chirp matrix are complex, whereas the method in [15] leads to a binary matrix. In all three methods, \(m = O(n^{1/2})\), which grows faster than \(O(k \log (n/k))\). However, the constant under this O symbol is quite small. Therefore, for realistic values of k and n, the bounds for m from these methods are much smaller than those derived using probabilistic methods.
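The dependence \(m = q^2\) in DeVore’s construction can also be illustrated with a short computation. The sketch below is our own hypothetical helper, not code from the chapter: it assumes the coherence \(\mu = r/q\) of DeVore’s construction [15] (columns indexed by degree-r polynomials over \({\mathbb F}_q\), giving up to \(q^{r+1}\) columns) together with the standard coherence-based estimate \(\delta _k \le (k-1)\mu \), and searches for the smallest admissible prime q:

```python
import math

def is_prime(q):
    """Trial-division primality test, adequate for the small q used here."""
    return q >= 2 and all(q % d for d in range(2, math.isqrt(q) + 1))

def devore_measurements(n, k, delta, r=2):
    """Smallest m = q^2 such that a prime q supplies at least n columns
    (q**(r+1) >= n) and the coherence-based RIP estimate
    (k-1)*r/q is below the target constant delta."""
    q = 2
    while not (is_prime(q) and q ** (r + 1) >= n and (k - 1) * r / q < delta):
        q += 1
    return q * q
```

With r = 2, the constraint \((k-1)r/q < \delta \) dominates for large k, so m scales like \(k^2\) rather than with n; this quadratic growth in k is the price paid for a deterministic guarantee, but for realistic (n, k) the resulting m is still far below the probabilistic bounds.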

Table 8 gives the values of m for various values of n and k. Also, while the chirp matrix requires fewer measurements than the binary matrix, \(\ell _1\)-norm minimization runs much faster with the binary matrix than with the chirp matrix, owing to the sparsity of the binary matrix. In view of these numbers, in the present chapter we used DeVore’s construction as the benchmark for the recovery of sparse vectors.


Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Vidyasagar, M., Lotfi, M. (2018). Compressive Sensing and Algebraic Coding: Connections and Challenges. In: Başar, T. (eds) Uncertainty in Complex Networked Systems. Systems & Control: Foundations & Applications. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-04630-9_8

