Abstract
Compressive sensing refers to the reconstruction of high-dimensional but low-complexity objects from relatively few measurements. Examples of such objects include high-dimensional but sparse vectors, large images with very few sharp edges, and high-dimensional matrices of low rank. One of the most popular methods for reconstruction is to solve a suitably constrained \(\ell _1\)-norm minimization problem, otherwise known as basis pursuit (BP). In this approach, a key role is played by the measurement matrix, which converts the high-dimensional but sparse vector (for example) into a low-dimensional real-valued measurement vector. The most widely used sufficient conditions for guaranteeing that BP recovers the unknown vector are the restricted isometry property (RIP) and the robust null space property (RNSP). It has recently been shown that the RIP implies the RNSP. There are two approaches to generating matrices that satisfy the RIP, namely probabilistic and deterministic. Probabilistic methods are older. In this approach, the measurement matrix consists of samples of a Gaussian or sub-Gaussian random variable. This approach leads to measurement matrices that are “order optimal,” in that the number of measurements required is within a constant factor of the optimum achievable. However, such matrices have no structure, which in practice leads to enormous storage requirements and long CPU times. Recently, the emphasis has shifted to the use of sparse binary matrices, which require less storage and are much faster than randomly generated matrices. A recent trend has been the use of methods from algebraic coding theory, in particular expander graphs and low-density parity-check (LDPC) codes, to construct sparse binary measurement matrices. In this chapter, we first briefly summarize the known results on compressed sensing using both probabilistic and deterministic approaches. 
We then introduce some new constructions of sparse binary measurement matrices based on LDPC codes, and describe some of our recent results that lead to the fastest available algorithms for compressive sensing in specific situations. We conclude by suggesting some interesting directions for future research.
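The basis pursuit problem mentioned above can be recast as a linear program via the standard variable splitting \(x = u - v\) with \(u, v \ge 0\). The sketch below is our own illustration (not code from the chapter), using SciPy's `linprog`; the function name `basis_pursuit` and the test sizes are ours:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to Ax = y, by splitting x = u - v with u, v >= 0.

    The objective sum(u) + sum(v) equals ||x||_1 at the optimum.
    """
    m, n = A.shape
    c = np.ones(2 * n)                 # minimize sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

# Recover a 3-sparse vector in R^100 from 40 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 27, 71]] = [1.5, -2.0, 0.7]
x_hat = basis_pursuit(A, A @ x_true)
```

With \(m = 40\) measurements and sparsity \(k = 3\), recovery by BP succeeds with overwhelming probability for a Gaussian measurement matrix.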
This research was supported by the National Science Foundation, USA under Award #ECCS-1306630, and by the Department of Science and Technology, Government of India.
Notes
1. Note that the base of the logarithm does not matter, because it cancels out between the two \(\log \) terms.
2. This is equivalent to the requirement that every row and every column of A contains at least two ones.
3. If the leading coefficient of a polynomial were zero, then its degree would be less than r.
4. This terminology is introduced in [6], with m / n denoted by \(\delta \) and k / m denoted by \(\rho \). Since these symbols now denote different quantities in the compressed sensing literature, we use \(\theta \) and \(\phi \) instead.
5. We thank Prof. David Donoho for providing the software to reproduce the curve.
6. MATLAB codes are available from the authors.
7. Such a random variable is said to be sub-Gaussian. A normal random variable satisfies (61) with \(c = 1/2\).
8. In many papers on compressed sensing, especially those using Gaussian measurement matrices, the number of measurements m is not chosen in accordance with any theory, but simply picked out of the air.
References
S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1999.
S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.
E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, December 2005.
E. J. Candès, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, August 2006.
D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
D. L. Donoho, “For most large underdetermined systems of linear equations, the minimal \(\ell _1\)-norm solution is also the sparsest solution,” Communications on Pure and Applied Mathematics, vol. 59, no. 6, pp. 797–829, 2006.
E. Candès, “The restricted isometry property and its implications for compressed sensing,” Comptes rendus de l’Académie des Sciences, Série I, vol. 346, pp. 589–592, 2008.
A. Cohen, W. Dahmen, and R. DeVore, “Compressed sensing and best \(k\)-term approximation,” Journal of the American Mathematical Society, vol. 22, no. 1, pp. 211–231, January 2009.
K. D. Ba, P. Indyk, E. Price, and D. P. Woodruff, “Lower bounds for sparse recovery,” in Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), January 2010, pp. 1190–1197.
D. L. Donoho and J. Tanner, “Neighborliness of randomly projected simplices in high dimensions,” Proceedings of the National Academy of Sciences, vol. 102, pp. 9452–9457, July 2005.
D. L. Donoho, “High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension,” Discrete and Computational Geometry, vol. 35, no. 4, pp. 617–652, May 2006.
D. L. Donoho and J. Tanner, “Counting faces of randomly projected polytopes when the projection radically lowers dimension,” Journal of the American Mathematical Society, vol. 22, no. 1, pp. 1–53, January 2009.
D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp, “Living on the edge: phase transitions in convex programs with random data,” Information and Inference, vol. 3, pp. 224–294, 2014.
H. Monajemi, S. Jafarpour, M. Gavish, and D. Donoho, “Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices,” Proceedings of the National Academy of Sciences of the United States of America, vol. 110, no. 4, pp. 1181–1186, 2013.
R. DeVore, “Deterministic construction of compressed sensing matrices,” Journal of Complexity, vol. 23, pp. 918–925, 2007.
T. Cai and A. Zhang, “Sparse representation of a polytope and recovery of sparse signals and low-rank matrices,” IEEE Transactions on Information Theory, vol. 60, no. 1, pp. 122–132, 2014.
R. Zhang and S. Li, “A proof of conjecture on restricted isometry property constants \({\delta }_{tk} (0 < t < \frac{4}{3})\),” IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1699–1705, March 2018.
S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing. Springer-Verlag, 2013.
A. S. Bandeira, E. Dobriban, D. G. Mixon, and W. F. Sawin, “Certifying the restricted isometry property is hard,” IEEE Transactions on Information Theory, vol. 59, no. 6, pp. 3448–3450, June 2013.
W. Xu and B. Hassibi, “Compressed sensing over the Grassmann manifold: A unified analytical framework,” in Proceedings of the 46th Allerton Conference, 2008, pp. 562–567.
S. Foucart, “Stability and robustness of \(\ell _1\)-minimizations with Weibull matrices and redundant dictionaries,” Linear Algebra and Its Applications, vol. 441, pp. 4–21, 2014.
S. Ranjan and M. Vidyasagar, “Tight performance bounds for compressed sensing with conventional and group sparsity,” arXiv:1606.05889v2, 2018.
S. Li, F. Gao, G. Ge, and S. Zhang, “Deterministic construction of compressed sensing matrices via algebraic curves,” IEEE Transactions on Information Theory, vol. 58, no. 8, pp. 5035–5041, August 2012.
S. D. Howard, A. R. Calderbank, and S. J. Searle, “A fast reconstruction algorithm for deterministic compressive sensing using second order Reed–Muller codes,” in Proceedings of the 42nd IEEE Annual Conference on Information Sciences and Systems, 2008, pp. 11–15.
R. R. Naidu, P. Jampana, and C. S. Sastry, “Deterministic compressed sensing matrices: Construction via euler squares and applications,” IEEE Transactions on Signal Processing, vol. 64, no. 14, pp. 3566–3575, July 2016.
H. F. MacNeish, “Euler squares,” Annals of Mathematics, vol. 23, no. 3, pp. 221–227, March 1922.
Y. Erlich, A. Gordon, M. Brand, G. J. Hannon, and P. P. Mitra, “Compressed genotyping,” IEEE Transactions on Information Theory, vol. 56, no. 2, pp. 706–723, 2010.
R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, and M. J. Strauss, “Combining geometry and combinatorics: a unified approach to sparse signal recovery,” in Proceedings of the Forty-Sixth Annual Allerton Conference, 2008, pp. 798–805.
P. Indyk and M. Ružić, “Near-optimal sparse recovery in the \(\ell _1\)-norm,” in Proceedings of the 49th Annual IEEE Symposium on the Foundations of Computer Science (FoCS), 2008, pp. 199–207.
A. Gilbert and P. Indyk, “Sparse recovery using sparse matrices,” Proceedings of the IEEE, vol. 98, no. 6, pp. 937–947, June 2010.
M. Lotfi and M. Vidyasagar, “A fast noniterative algorithm for compressive sensing using binary measurement matrices,” IEEE Transactions on Signal Processing, vol. 67, pp. 4079–4089, August 2019.
V. Guruswami, C. Umans, and S. Vadhan, “Unbalanced expanders and randomness extractors from Parvaresh–Vardy codes,” Journal of the ACM, vol. 56, no. 4, pp. 20:1–20:34, 2009.
A. G. Dimakis, R. Smarandache, and P. O. Vontobel, “LDPC codes for compressed sensing,” IEEE Transactions on Information Theory, vol. 58, no. 5, pp. 3093–3114, May 2012.
X.-J. Liu and S.-T. Xia, “Reconstruction guarantee analysis of binary measurement matrices based on girth,” in Proceedings of the International Symposium on Information Theory, 2013, pp. 474–478.
M. Lotfi and M. Vidyasagar, “Compressed sensing using binary matrices of nearly optimal dimensions,” arXiv:1808.03001, 2018.
S. Hoory, “The size of bipartite graphs with a given girth,” Journal of Combinatorial Theory, Series B, vol. 86, pp. 215–220, 2002.
K. Yang and T. Helleseth, “On the minimum distance of array codes as LDPC codes,” IEEE Transactions on Information Theory, vol. 49, no. 12, pp. 3268–3271, December 2003.
J. L. Fan, “Array codes as LDPC codes,” in Proceedings of the 2nd International Symposium on Turbo Codes, 2000, pp. 543–546.
S. Sarvotham, D. Baron, and R. G. Baraniuk, “Sudocodes – fast measurement and reconstruction of sparse signals,” in Proceedings of the International Symposium on Information Theory, 2006, pp. 2804–2808.
W. Xu and B. Hassibi, “Efficient compressive sensing with deterministic guarantees using expander graphs,” in Proceedings of IEEE Information Theory Workshop, Lake Tahoe, 2007.
S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank, “Efficient compressed sensing using optimized expander graphs,” IEEE Transactions on Information Theory, vol. 55, no. 9, pp. 4299–4308, 2009.
Y. Wu and S. Verdú, “Optimal phase transitions in compressed sensing,” IEEE Transactions on Information Theory, vol. 58, no. 10, pp. 6241–6263, October 2012.
D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18914–18919, 2009.
D. L. Donoho and J. Tanner, “Precise undersampling theorems,” Proceedings of the IEEE, vol. 98, no. 6, pp. 913–924, June 2010.
D. L. Donoho, A. Javanmard, and A. Montanari, “Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing,” IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7434–7464, November 2013.
F. Krzakala, M. Mézard, F. Sausset, Y. Sun, and L. Zdeborová, “Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices,” Journal of Statistical Mechanics: Theory and Experiment, vol. 12, 2012.
D. Donoho and J. Tanner, “Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing,” Philosophical Transactions of The Royal Society, Part A: Mathematical, Physical and Engineering Sciences, vol. 367, no. 1906, pp. 4273–4293, November 2009.
D. L. Donoho and J. Tanner, “Counting the faces of randomly-projected hypercubes and orthants, with applications,” Discrete and Computational Geometry, vol. 43, no. 3, pp. 522–541, April 2010.
M. Bayati, M. Lelarge, and A. Montanari, “Universality in polytope phase transitions and message passing algorithms,” arXiv:1207.7321v2, 2015.
A. Khajehnejad, A. S. Tehrani, A. G. Dimakis, and B. Hassibi, “Explicit matrices for sparse approximation,” in Proceedings of the International Symposium on Information Theory, 2011, pp. 469–473.
S. Arora, C. Daskalakis, and D. Steurer, “Message-passing algorithms and improved LP decoding,” in Proceedings of the 41st Annual ACM Symposium on the Theory of Computing, 2009, pp. 3–12.
D. Achlioptas, “Database-friendly random projections: Johnson-Lindenstrauss with binary coins,” Journal of Computer and System Sciences, vol. 66, pp. 671–687, 2003.
G. Xu and Z. Xu, “Compressed sensing matrices from Fourier matrices,” IEEE Transactions on Information Theory, vol. 61, no. 1, pp. 469–478, January 2015.
L. Applebaum, S. D. Howard, S. Searle, and R. Calderbank, “Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery,” Applied and Computational Harmonic Analysis, vol. 26, pp. 283–290, 2009.
Acknowledgements
The contents of this chapter report various results from the doctoral thesis of the second author, carried out under the supervision of the first author. The authors thank Prof. David Donoho and Mr. Hatef Monajemi of Stanford University for their helpful suggestions on phase transitions, and for providing code that enabled us to reproduce their computational results. They also thank Prof. Phanindra Jampana of IIT Hyderabad for helpful discussions on the construction of Euler squares.
Appendix
In this appendix, we compare the number of measurements used by probabilistic as well as deterministic methods to guarantee that the corresponding measurement matrix A satisfies the restricted isometry property (RIP), as stated in Theorem 1. Note that the number of measurements is computed from the best available sufficient condition. In principle, it is possible that matrices with fewer rows might also satisfy the RIP. But there would not be any theoretical justification for using such matrices.
In probabilistic methods, the number of measurements m is \(O(k \log (n/k))\). However, in reality, the O symbol hides a huge constant. It is possible to replace the O symbol with explicit constants by carefully collating the relevant theorems in [18]. This leads to the following explicit bounds.
Theorem 24
Suppose X is a random variable with zero mean and unit variance, and suppose in addition that there exists a constant c such that (see footnote 7)

\(\mathbb {E}[\exp (\theta X)] \le \exp (c \theta ^2)\) for all \(\theta \in {\mathbb R}\).  (61)
Define
Suppose an integer k and real numbers \(\delta , \xi \in (0,1)\) are specified, and that \(A = (1/\sqrt{m}) \varPhi \), where \(\varPhi \in {\mathbb R}^{m \times n}\) consists of independent samples of X. Then, A satisfies the RIP of order k with constant \(\delta \) with probability \(\ge 1 - \xi \) provided
In (64), the number of measurements m is indeed \(O(k \log (n/k))\). However, for realistic values of n and k, the number of measurements m would be comparable to, or even exceed, n, which would render “compressed” sensing meaningless (see footnote 8). For “pure” Gaussian variables, it is possible to find improved bounds on m (see Theorem 2, which is based on [18, Theorem 9.27]). Also, for binary random variables in which X equals \(\pm 1\) with equal probability, another set of bounds is available [52]. While all of these bounds are \(O(k \log (n/k))\), in practical situations they are not useful.
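The normalized sub-Gaussian matrix \(A = (1/\sqrt{m}) \varPhi \) of Theorem 24 is straightforward to sample. The sketch below (our own illustration, with the function name ours) builds the Rademacher case, where \(\varPhi \) has i.i.d. \(\pm 1\) entries; the \(1/\sqrt{m}\) scaling makes every column of A have unit Euclidean norm exactly:

```python
import numpy as np

def rademacher_measurement_matrix(m, n, seed=0):
    """A = (1/sqrt(m)) * Phi, where Phi has i.i.d. +/-1 (sub-Gaussian) entries."""
    rng = np.random.default_rng(seed)
    Phi = rng.choice([-1.0, 1.0], size=(m, n))
    return Phi / np.sqrt(m)
```

This only samples the matrix; whether a given (m, n, k) satisfies the RIP with the desired constant and confidence is governed by bounds such as (64), not by the sampling code.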
This suggests that it is worthwhile to study deterministic methods for generating measurement matrices that satisfy the RIP. There are very few such methods; indeed, the authors are aware of only three. The paper [15] uses a finite-field method to construct a binary matrix, and this method is used in the present chapter. The paper [53] gives a procedure for choosing rows from a unitary Fourier matrix such that the resulting matrix satisfies the RIP; it leads to the same values for the number of measurements m as [15]. Constructing partial Fourier matrices is an important part of reconstructing time-domain sparse signals from a limited number of frequency measurements (or vice versa), so the results of [53] can be used in that situation. In both of these methods, m equals \(q^2\), where q is an appropriately chosen prime number. Finally, in [54] a method based on chirp matrices is given; in this case, m equals a prime number q. Note that the partial Fourier matrix and the chirp matrix are complex, whereas the method of [15] leads to a binary matrix. In all three methods, \(m = O(n^{1/2})\), which grows faster than \(O(k \log (n/k))\); however, the constant hidden in this O symbol is quite small. Therefore, for realistic values of k and n, the bounds on m from these methods are much smaller than those derived using probabilistic methods.
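The finite-field construction of [15] (DeVore) admits a compact description: for a prime q and a degree bound r, the \(q^2 \times q^{r+1}\) binary matrix has one column per polynomial p of degree at most r over \({\mathbb Z}_q\), with a 1 in row (x, p(x)) for each \(x \in {\mathbb Z}_q\). Two distinct columns then overlap in at most r rows, since distinct polynomials of degree \(\le r\) agree at no more than r points. The following is a minimal sketch (the function name is ours):

```python
import numpy as np
from itertools import product

def devore_matrix(q, r):
    """DeVore-style binary matrix: q^2 rows, q^(r+1) columns, q prime.

    Column j corresponds to the polynomial with coefficient tuple coeffs
    (constant term first); it has a 1 in row x*q + p(x) for each x in Z_q.
    """
    n = q ** (r + 1)
    A = np.zeros((q * q, n), dtype=np.uint8)
    for j, coeffs in enumerate(product(range(q), repeat=r + 1)):
        for x in range(q):
            y = sum(c * x ** e for e, c in enumerate(coeffs)) % q
            A[x * q + y, j] = 1
    return A

A = devore_matrix(3, 1)   # 9 x 9; each column has q = 3 ones
```

Each column contains exactly q ones, so the matrix is very sparse, and m equals \(q^2\) as described above.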
Table 8 gives the values of m for various values of n and k. Also, while the chirp matrix requires fewer measurements than the binary matrix, \(\ell _1\)-norm minimization runs much faster with the binary matrix than with the chirp matrix, owing to the sparsity of the binary matrix. In view of these numbers, we used DeVore’s construction as the benchmark for the recovery of sparse vectors in the present chapter.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this chapter
Vidyasagar, M., Lotfi, M. (2018). Compressive Sensing and Algebraic Coding: Connections and Challenges. In: Başar, T. (ed.) Uncertainty in Complex Networked Systems. Systems & Control: Foundations & Applications. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-04630-9_8
Publisher Name: Birkhäuser, Cham
Print ISBN: 978-3-030-04629-3
Online ISBN: 978-3-030-04630-9
eBook Packages: Mathematics and Statistics (R0)