Matrix Completion and Low-Rank Matrix Recovery

  • Robert Qiu
  • Michael Wicks
Chapter

Abstract

This chapter is a natural companion to Chap. 7; the two chapters may be viewed as parallel developments. In Chap. 7, compressed sensing exploits the sparsity structure of a vector, while low-rank matrix recovery (the subject of this chapter) exploits the low-rank structure of a matrix, which is equivalent to sparsity of the vector of its singular values. Both theories ultimately trace back to the concentration of measure phenomenon in high dimensions.
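The vector-matrix parallel can be made concrete: where compressed sensing relaxes vector sparsity to the ℓ1 norm, low-rank recovery relaxes rank to the nuclear norm, i.e., the ℓ1 norm of the singular values. The following is a minimal matrix-completion sketch, not taken from the chapter; it assumes the cvxpy and numpy packages are available, and the matrix size, rank, and sampling rate are illustrative placeholders.

    # Minimal matrix-completion sketch (illustrative assumptions:
    # cvxpy/numpy available; n, r, and the 50% sampling rate are arbitrary).
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 20, 2

    # Ground-truth rank-r matrix built from two thin Gaussian factors.
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

    # Observe roughly half of the entries, chosen uniformly at random.
    mask = (rng.random((n, n)) < 0.5).astype(float)

    # Nuclear-norm minimization: the convex surrogate for rank, playing
    # the role that the l1 norm plays for vector sparsity.
    X = cp.Variable((n, n))
    problem = cp.Problem(
        cp.Minimize(cp.norm(X, "nuc")),
        [cp.multiply(mask, X) == cp.multiply(mask, M)],
    )
    problem.solve()

    # Relative Frobenius-norm error; near zero when recovery succeeds.
    print(np.linalg.norm(X.value - M, "fro") / np.linalg.norm(M, "fro"))

When the underlying matrix is incoherent and enough entries are sampled, this convex program recovers it exactly; guarantees of this type are what the chapter develops.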

Keywords

Rank Function · Convex Optimization Problem · Ambiguity Function · Phase Retrieval · Restricted Isometry Property

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Robert Qiu (1)
  • Michael Wicks (2)
  1. Tennessee Technological University, Cookeville, USA
  2. Utica, USA
