Compressive Sensing and Algebraic Coding: Connections and Challenges

  • Mathukumalli Vidyasagar
  • Mahsa Lotfi
Part of the Systems & Control: Foundations & Applications book series (SCFA)


Compressive sensing refers to the reconstruction of high-dimensional but low-complexity objects from relatively few measurements. Examples of such objects include high-dimensional but sparse vectors, large images with very few sharp edges, and high-dimensional matrices of low rank. One of the most popular methods for reconstruction is to solve a suitably constrained \(\ell _1\)-norm minimization problem, otherwise known as basis pursuit (BP). In this approach, a key role is played by the measurement matrix, which converts the high-dimensional but sparse vector (for example) into a low-dimensional real-valued measurement vector. The widely used sufficient conditions for guaranteeing that BP recovers the unknown vector are the restricted isometry property (RIP) and the robust null space property (RNSP). It has recently been shown that the RIP implies the RNSP. There are two approaches to generating matrices that satisfy the RIP, namely probabilistic and deterministic. Probabilistic methods are older. In this approach, the measurement matrix consists of samples of a Gaussian or sub-Gaussian random variable. This approach leads to measurement matrices that are “order optimal,” in that the number of measurements required is within a constant factor of the optimum achievable. However, such matrices have no structure, which in practice leads to enormous storage requirements and long CPU times. Recently, the emphasis has shifted to the use of sparse binary matrices, which require less storage and are much faster than randomly generated matrices. A recent trend has been the use of methods from algebraic coding theory, in particular expander graphs and low-density parity-check (LDPC) codes, to construct sparse binary measurement matrices. In this chapter, we first briefly summarize the known results on compressed sensing using both probabilistic and deterministic approaches.
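As an illustration (ours, not the chapter's), basis pursuit can be posed as a linear program and solved with off-the-shelf tools. The sketch below, assuming SciPy's `linprog` with the HiGHS backend, recovers a sparse vector from Gaussian measurements; the dimensions chosen are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 20, 3                            # ambient dimension, measurements, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix

x_true = np.zeros(n)                           # ground-truth k-sparse vector
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true                                 # low-dimensional measurement vector

# Basis pursuit: min ||x||_1 subject to Ax = y.  Split x = u - v with
# u, v >= 0, so the objective becomes sum(u) + sum(v): a standard LP.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print(res.success, np.linalg.norm(A @ x_hat - y))
```

By construction, the BP solution is feasible and its \(\ell _1\) norm cannot exceed that of the true sparse vector; exact recovery is what RIP/RNSP-type conditions on \(A\) guarantee.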
In the first part of the chapter, we introduce some new constructions of sparse binary measurement matrices based on LDPC codes. Then, we describe some of our recent results that lead to the fastest available algorithms for compressive sensing in specific situations. Finally, we suggest some interesting directions for future research.



The contents of this chapter report various results from the doctoral thesis of the second author, carried out under the supervision of the first author. The authors thank Prof. David Donoho and Mr. Hatef Monajemi of Stanford University for their helpful suggestions on phase transitions, and for providing code to enable us to reproduce their computational results. They also thank Prof. Phanindra Jampana of IIT Hyderabad for helpful discussions on the construction of Euler squares.



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Indian Institute of Technology Hyderabad, Kandi, India
  2. The University of Texas at Dallas, Richardson, USA
  3. Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, USA
