
New regularization method and iteratively reweighted algorithm for sparse vector recovery

  • Wei Zhu
  • Hui Zhang
  • Lizhi Cheng

Abstract

Motivated by the study of regularization for sparse problems, we propose a new regularization method for sparse vector recovery. We derive sufficient conditions for the well-posedness of the new regularization and design an iterative algorithm, namely the iteratively reweighted algorithm (IR-algorithm), for efficiently computing sparse solutions of the proposed regularization model. The convergence of the IR-algorithm and the choice of the regularization parameters are analyzed in detail. Finally, we present numerical examples to illustrate the features of the new regularization and the algorithm.
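The abstract does not state the explicit form of the new regularizer, so the following Python sketch only illustrates the general reweighting idea for sparse vector recovery (here an iteratively reweighted least-squares scheme applied to a smoothed lq penalty). The function name irls_sparse_recovery and the parameters q, eps, lam, and n_iter are assumptions chosen for illustration; this is not the paper's IR-algorithm.

import numpy as np

def irls_sparse_recovery(A, b, q=0.5, eps=1e-3, lam=1e-2, n_iter=50):
    """Hypothetical iteratively reweighted least-squares sketch for
    min_x  0.5*||A x - b||_2^2 + lam * sum_i (x_i^2 + eps)^(q/2).
    Not the paper's IR-algorithm; it only illustrates the reweighting idea."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        # weights computed from the current iterate; eps keeps them bounded at zeros
        w = q * (x ** 2 + eps) ** (q / 2.0 - 1.0)
        # weighted ridge step: solve (A^T A + lam * diag(w)) x = A^T b
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

# toy usage: recover a 5-sparse vector from 40 Gaussian measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
x_hat = irls_sparse_recovery(A, b)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))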

Key words

regularization method; iteratively reweighted algorithm (IR-algorithm); sparse vector recovery

Chinese Library Classification

O24 

2010 Mathematics Subject Classification

49K35, 90C06


Notes

Acknowledgements

The authors thank Prof. Mingjun LAI for many enlightening discussions on lq minimization, Prof. Rong HUANG for his guidance during the first author's post-doctoral research period, Dr. Yangyang XU for his contributions to Theorem 3 and for long-term communication on sparse vector recovery with nonconvex models, and Dr. Housen LI for his careful reading of the manuscript.


Copyright information

© Shanghai University and Springer-Verlag GmbH Germany, part of Springer Nature 2020

Authors and Affiliations

  1. Post-doctoral Research Station of Statistics, School of Mathematics and Computational Science, Xiangtan University, Xiangtan, Hunan Province, China
  2. Department of Mathematics, National University of Defense Technology, Changsha, China
