Compressive Sensing and Algebraic Coding: Connections and Challenges
Compressive sensing refers to the reconstruction of high-dimensional but low-complexity objects from relatively few measurements. Examples of such objects include high-dimensional but sparse vectors, large images with very few sharp edges, and high-dimensional matrices of low rank. One of the most popular reconstruction methods is to solve a suitably constrained \(\ell _1\)-norm minimization problem, known as basis pursuit (BP). In this approach, a key role is played by the measurement matrix, which converts the high-dimensional but sparse vector (for example) into a low-dimensional real-valued measurement vector. The most widely used sufficient conditions for guaranteeing that BP recovers the unknown vector are the restricted isometry property (RIP) and the robust null space property (RNSP); it has recently been shown that the RIP implies the RNSP. There are two approaches to generating matrices that satisfy the RIP: probabilistic and deterministic. Probabilistic methods are older; in this approach, the measurement matrix consists of samples of a Gaussian or sub-Gaussian random variable. This approach leads to measurement matrices that are “order optimal,” in that the number of measurements required is within a constant factor of the optimum achievable. However, such matrices have no structure, which in practice leads to enormous storage and CPU-time requirements. Recently, the emphasis has shifted to the use of sparse binary matrices, which require less storage and are much faster than randomly generated matrices. A recent trend has been the use of methods from algebraic coding theory, in particular expander graphs and low-density parity-check (LDPC) codes, to construct sparse binary measurement matrices. In this chapter, we first briefly summarize the known results on compressed sensing using both probabilistic and deterministic approaches.
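To make the basis pursuit formulation concrete: the constrained \(\ell _1\)-minimization \(\min \|x\|_1\) subject to \(Ax = y\) can be rewritten as a linear program by splitting \(x\) into nonnegative parts. The sketch below, which is illustrative rather than a method from this chapter, uses a small random Gaussian measurement matrix (the probabilistic approach described above) and SciPy's general-purpose `linprog` solver; the dimensions, sparsity level, and random seed are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, m, k = 60, 30, 3                            # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix

# A k-sparse ground-truth vector and its low-dimensional measurement.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Basis pursuit as a linear program: write x = u - v with u, v >= 0,
# and minimize sum(u) + sum(v) subject to A(u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_hat - x_true))          # recovery error (essentially zero here)
```

With \(m = 30\) Gaussian measurements and \(k = 3\), this instance lies well inside the region where BP succeeds, so the solver returns the sparse vector exactly (up to LP solver tolerance); dedicated solvers are far faster than a generic LP at realistic scales.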
In the first part of the chapter, we introduce some new constructions of sparse binary measurement matrices based on LDPC codes. We then describe some of our recent results, which lead to the fastest available algorithms for compressive sensing in specific situations, and we suggest some interesting directions for future research.
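For intuition about the sparse binary matrices discussed above, the sketch below builds the simplest random stand-in: a column-regular binary matrix in which every column has exactly \(d\) ones, i.e., the adjacency matrix of a left-regular bipartite graph. This is only a generic illustration of the object, not the structured LDPC- or Euler-square-based constructions of this chapter, and the function name and parameters are our own.

```python
import numpy as np

def sparse_binary_matrix(m, n, d, seed=0):
    """Random column-regular binary matrix: each of the n columns has
    exactly d ones placed uniformly at random among the m rows.
    (A generic stand-in for deterministic LDPC-based constructions.)"""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=np.uint8)
    for j in range(n):
        A[rng.choice(m, d, replace=False), j] = 1
    return A

A = sparse_binary_matrix(m=20, n=100, d=3)
print(A.sum(axis=0))   # every column weight equals d = 3
```

Because each column has only \(d\) nonzero entries, storing the matrix takes \(dn\) indices rather than \(mn\) reals, and a matrix-vector product costs \(O(dn)\) additions; this is the storage and speed advantage over dense Gaussian matrices noted above.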
The contents of this chapter report various results from the doctoral thesis of the second author, carried out under the supervision of the first author. The authors thank Prof. David Donoho and Mr. Hatef Monajemi of Stanford University for their helpful suggestions on phase transitions, and for providing code that enabled them to reproduce their computational results. They also thank Prof. Phanindra Jampana of IIT Hyderabad for helpful discussions on the construction of Euler squares.