
A new globally convergent algorithm for non-Lipschitz ℓp-ℓq minimization

  • Zhifang Liu
  • Chunlin Wu
  • Yanan Zhao
Article

Abstract

We consider the non-Lipschitz ℓp-ℓq (0 < p < 1 ≤ q < ∞) minimization problem, which has many applications and poses a great challenge for optimization: it contains a non-Lipschitz regularization term and a possibly nonsmooth fidelity term. In this paper, we present a new globally convergent algorithm that gradually shrinks the support of the variable and uses linearization and proximal approximations, so that the subproblem at each iteration is convex with increasingly fewer unknowns. By establishing a lower bound theory for the sequence generated by our algorithm, we prove that the sequence converges globally to a stationary point of the ℓp-ℓq objective function. The method extends to the ℓp-regularized elastic net model. Numerical experiments demonstrate the performance and flexibility of the proposed algorithm, such as its applicability to measurements corrupted by either Gaussian or heavy-tailed noise.
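The abstract's algorithmic ingredients — linearizing the non-Lipschitz ℓp penalty, taking a proximal step on the resulting convex subproblem, and permanently shrinking the variable support so that later subproblems have fewer unknowns — can be sketched for the special case q = 2 (least-squares fidelity). The sketch below is an illustrative assumption, not the authors' actual algorithm: the function name, parameter values, and the reweighted soft-thresholding update are all hypothetical choices made only to show the general shape of such a scheme.

```python
import numpy as np

def lp_support_shrink(A, b, lam=0.05, p=0.5, eps=1e-8, tol=1e-3, iters=200):
    """Hypothetical sketch of a support-shrinking scheme for
        min_x 0.5*||A x - b||_2^2 + lam * sum_i |x_i|^p,   0 < p < 1.
    Each iteration linearizes the lp term on the active support
    (reweighted-l1 weights), takes one proximal-gradient step
    (weighted soft-thresholding), and permanently removes coordinates
    whose magnitude falls below `tol`, so the convex subproblems
    involve increasingly fewer unknowns.
    NOTE: illustrative only, not the algorithm of the paper."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the smooth part
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # warm start (min-norm solution)
    support = np.ones(n, dtype=bool)
    for _ in range(iters):
        g = A.T @ (A @ x - b)                       # gradient of the fidelity
        w = p * (np.abs(x) + eps) ** (p - 1)        # linearization weights on |x_i|^p
        z = x - step * g
        # weighted soft-thresholding = prox of the linearized penalty
        x_new = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
        x_new[~support] = 0.0                       # frozen coordinates stay zero
        support &= np.abs(x_new) > tol              # shrink the support for good
        x_new[~support] = 0.0
        if np.linalg.norm(x_new - x) < 1e-10:
            x = x_new
            break
        x = x_new
    return x

# usage: sparse recovery from noisy Gaussian measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = lp_support_shrink(A, b)
```

Because the weights w blow up as |x_i| → 0, small entries are thresholded aggressively; freezing them out of the support (rather than letting them re-enter) is what makes each subsequent convex subproblem smaller, mirroring the support-shrinking idea described in the abstract.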

Keywords

Nonconvex nonsmooth regularization · Non-Lipschitz optimization · Support shrinking · Lower bound theory · ADMM (alternating direction method of multipliers) · Gaussian noise · Heavy-tailed noise

Mathematics Subject Classification (2010)

49M05 · 49K30 · 90C26 · 94A12 · 90C30


Notes

Funding information

This work was supported by the National Natural Science Foundation of China (Grants 11301289, 11531013, and 11871035), Recruitment Program of Global Young Expert, and the Fundamental Research Funds for the Central Universities.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Mathematical Sciences, Nankai University, Tianjin, China
