Advances in Computational Mathematics, Volume 45, Issue 3, pp 1711–1728

Sparse power factorization: balancing peakiness and sample complexity

  • Jakob Geppert
  • Felix Krahmer
  • Dominik Stöger
Article

Abstract

In many applications, one is faced with an inverse problem where the known signal depends in a bilinear way on two unknown input vectors. Often at least one of the input vectors is assumed to be sparse, i.e., to have only a few non-zero entries. Sparse power factorization (SPF), proposed by Lee, Wu, and Bresler, aims to tackle this problem. They established recovery guarantees for a somewhat restrictive class of signals under the assumption that the measurements are random. We generalize these recovery guarantees to a significantly enlarged and more realistic signal class, at the expense of a moderately increased number of measurements.
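To make the SPF idea concrete, the following is a minimal illustrative sketch of an SPF-style alternating iteration for bilinear measurements of the form y_i = u^T A_i v with a sparse pair (u, v). It is not the authors' exact algorithm: the spectral initialization and the hard-thresholded least-squares inner steps are simplifying assumptions (Lee, Wu, and Bresler use hard thresholding pursuit for the inner sparse recovery), and all function and variable names are hypothetical.

```python
# Sketch of a sparse-power-factorization-style iteration for measurements
# y_i = u^T A_i v, where u is s1-sparse and v is s2-sparse.
# Simplifying assumptions: plain hard-thresholded least squares replaces the
# hard thresholding pursuit inner step of Lee, Wu, and Bresler, and the
# initialization below is only one plausible choice.
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def spf_sketch(A, y, s1, s2, n_iter=50):
    """A: array of shape (m, n1, n2); y: (m,) measurements y_i = u^T A_i v."""
    m, n1, n2 = A.shape
    # Initialization (an assumption): leading singular vectors of the
    # back-projection sum_i y_i A_i, followed by hard thresholding.
    M = np.tensordot(y, A, axes=(0, 0))          # shape (n1, n2)
    U, _, Vt = np.linalg.svd(M)
    u = hard_threshold(U[:, 0], s1)
    v = hard_threshold(Vt[0, :], s2)
    for _ in range(n_iter):
        # With v fixed, each measurement is linear in u: y_i = (A_i v)^T u.
        Bu = A @ v                                # shape (m, n1)
        u = hard_threshold(np.linalg.lstsq(Bu, y, rcond=None)[0], s1)
        # With u fixed, each measurement is linear in v: y_i = (A_i^T u)^T v.
        Bv = np.einsum('mij,i->mj', A, u)         # shape (m, n2)
        v = hard_threshold(np.linalg.lstsq(Bv, y, rcond=None)[0], s2)
    return u, v
```

The sketch only illustrates the alternating structure: each half-step reduces the bilinear problem to a sparse linear one in the currently updated factor, which is the mechanism whose recovery guarantees the paper refines.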

Keywords

Bilinear inverse problems · Sparse power factorization · Compressed sensing

Mathematics Subject Classification (2010)

94A12 

Notes

Acknowledgements

The authors want to thank Yoram Bresler and Kiryung Lee for helpful discussions. Furthermore, we would like to thank the referees for their careful reading and their helpful suggestions, which improved the manuscript.

References

  1. Ahmed, A., Recht, B., Romberg, J.: Blind deconvolution using convex programming. IEEE Trans. Inform. Theory 60(3), 1711–1732 (2014)
  2. Amini, A.A., Wainwright, M.J.: High-dimensional analysis of semidefinite relaxations for sparse principal components. Ann. Stat. 37(5B), 2877–2921 (2009)
  3. Bahmani, S., Romberg, J.: Near-optimal estimation of simultaneously sparse and low-rank matrices from nested linear measurements. Inf. Inference 5(3), 331–351 (2016)
  4. Bahmani, S., Romberg, J.: Solving equations of random convex functions via anchored regression. arXiv:1702.05327 (2017)
  5. Berthet, Q., Rigollet, P.: Optimal detection of sparse principal components in high dimension. Ann. Stat. 41(4), 1780–1815 (2013)
  6. Candès, E.J., Li, X., Soltanolkotabi, M.: Phase retrieval via Wirtinger flow: theory and algorithms. IEEE Trans. Inform. Theory 61(4), 1985–2007 (2015)
  7. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489–509 (2006)
  8. d'Aspremont, A., Bach, F., Ghaoui, L.E.: Optimal solutions for sparse principal component analysis. J. Mach. Learn. Res. 9, 1269–1294 (2008)
  9. Deshpande, Y., Montanari, A.: Sparse PCA via covariance thresholding. In: Advances in Neural Information Processing Systems, pp. 334–342 (2014)
  10. Fornasier, M., Maly, J., Naumova, V.: A-T-LAS_{2,1}: a multi-penalty approach to compressed sensing of low-rank matrices with sparse decompositions. arXiv:1801.06240 (2018)
  11. Foucart, S.: Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)
  12. Geppert, J.A., Krahmer, F., Stöger, D.: Refined performance guarantees for sparse power factorization. In: 2017 International Conference on Sampling Theory and Applications (SampTA), pp. 509–513. IEEE (2017)
  13. Haykin, S.: Blind Deconvolution. Prentice Hall, New Jersey (1994)
  14. Iwen, M., Viswanathan, A., Wang, Y.: Robust sparse phase retrieval made easy. Appl. Comput. Harmon. Anal. 42(1), 135–142 (2017)
  15. Jain, P., Netrapalli, P., Sanghavi, S.: Low-rank matrix completion using alternating minimization. In: Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, STOC '13, pp. 665–674. ACM, New York (2013)
  16. Journée, M., Nesterov, Y., Richtárik, P., Sepulchre, R.: Generalized power method for sparse principal component analysis. J. Mach. Learn. Res. 11, 517–553 (2010)
  17. Jung, P., Krahmer, F., Stöger, D.: Blind demixing and deconvolution at near-optimal rate. IEEE Trans. Inform. Theory 64(2), 704–727 (2018)
  18. Kech, M., Krahmer, F.: Optimal injectivity conditions for bilinear inverse problems with applications to identifiability of deconvolution problems. SIAM J. Appl. Algebra Geom. 1(1), 20–37 (2017). https://doi.org/10.1137/16M1067469
  19. Krauthgamer, R., Nadler, B., Vilenchik, D.: Do semidefinite relaxations solve sparse PCA up to the information limit? Ann. Stat. 43(3), 1300–1322 (2015)
  20. Lee, K., Junge, M.: RIP-like properties in subsampled blind deconvolution. arXiv:1511.06146 (2015)
  21. Lee, K., Krahmer, F., Romberg, J.: Spectral methods for passive imaging: non-asymptotic performance and robustness. arXiv:1708.04343 (2017)
  22. Lee, K., Li, Y., Junge, M., Bresler, Y.: Blind recovery of sparse signals from subsampled convolution. IEEE Trans. Inform. Theory 63(2), 802–821 (2017)
  23. Lee, K., Wu, Y., Bresler, Y.: Near-optimal compressed sensing of a class of sparse low-rank matrices via sparse power factorization. IEEE Trans. Inform. Theory (2017)
  24. Li, X., Ling, S., Strohmer, T., Wei, K.: Rapid, robust, and reliable blind deconvolution via nonconvex optimization. arXiv:1606.04933 (2016)
  25. Ling, S., Strohmer, T.: Self-calibration and biconvex compressive sensing. Inverse Probl. 31(11), 115002 (2015)
  26. Ling, S., Strohmer, T.: Blind deconvolution meets blind demixing: algorithms and performance bounds. IEEE Trans. Inform. Theory 63(7), 4497–4520 (2017)
  27. Ling, S., Strohmer, T.: Regularized gradient descent: a nonconvex recipe for fast joint blind deconvolution and demixing. arXiv:1703.08642 (2017)
  28. Ma, Z.: Sparse principal component analysis and iterative thresholding. Ann. Stat. 41(2), 772–801 (2013)
  29. Mendelson, S., Rauhut, H., Ward, R.: Improved bounds for sparse recovery from subsampled random convolutions. Ann. Appl. Probab. 28(6), 3491–3527 (2018)
  30. Needell, D., Tropp, J.A.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)
  31. Oymak, S., Jalali, A., Fazel, M., Eldar, Y.C., Hassibi, B.: Simultaneously structured models with application to sparse and low-rank matrices. IEEE Trans. Inform. Theory 61(5), 2886–2908 (2015)
  32. Qu, Q., Zhang, Y., Eldar, Y.C., Wright, J.: Convolutional phase retrieval via gradient descent. arXiv:1712.00716 (2017)
  33. Soltanolkotabi, M.: Structured signal recovery from quadratic measurements: breaking sample complexity barriers via nonconvex optimization. arXiv:1702.06175 (2017)
  34. Stöger, D., Geppert, J.A., Krahmer, F.: Sparse power factorization with refined peakiness conditions. In: IEEE Statistical Signal Processing Workshop 2018. IEEE (2018)
  35. Tillmann, A.M., Pfetsch, M.E.: The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inform. Theory 60(2), 1248–1259 (2014)
  36. Wang, T., Berthet, Q., Samworth, R.J.: Statistical and computational trade-offs in estimation of sparse principal components. Ann. Stat. 44(5), 1896–1930 (2016)
  37. Xu, G., Liu, H., Tong, L., Kailath, T.: A least-squares approach to blind channel identification. IEEE Trans. Signal Process. 43(12), 2982–2993 (1995)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Göttingen, Germany
  2. Garching, Germany
