Science China Mathematics, Volume 61, Issue 6, pp 1139–1152

Global optimality condition and fixed point continuation algorithm for non-Lipschitz ℓ p regularized matrix minimization

  • Dingtao Peng
  • Naihua Xiu
  • Jian Yu


Regularized minimization problems with nonconvex, nonsmooth, even non-Lipschitz penalty functions have attracted much attention in recent years, owing to their wide applications in statistics, control, system identification and machine learning. In this paper, the non-Lipschitz ℓ p (0 < p < 1) regularized matrix minimization problem is studied. A global necessary optimality condition for this non-Lipschitz optimization problem is first established: every global optimal solution of the problem is a fixed point of the so-called p-thresholding operator, which is both matrix-valued and set-valued. A fixed point iterative scheme for the non-Lipschitz model is then proposed, and its convergence is analyzed in detail. Moreover, several acceleration techniques are adopted to improve the performance of the algorithm. The effectiveness of the proposed p-thresholding fixed point continuation (p-FPC) algorithm is demonstrated by numerical experiments on randomly generated and real matrix completion problems.
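The fixed point iteration built on the p-thresholding operator can be sketched numerically. The sketch below is an illustrative assumption, not the paper's implementation: the scalar subproblem is solved by a simple grid search (closed-form thresholding formulas exist for p = 1/2 and 2/3), the continuation strategy on the regularization parameter and the acceleration techniques are omitted, and all function names and the step size mu are invented for this example.

```python
import numpy as np

def scalar_p_prox(y, lam, p, grid=2000):
    """Numerically solve min_x 0.5*(x - |y|)^2 + lam*x**p over x in [0, |y|],
    then restore the sign of y.  A grid search keeps the sketch simple;
    closed forms are known for p = 1/2 and 2/3."""
    a = abs(y)
    xs = np.linspace(0.0, a, grid)
    vals = 0.5 * (xs - a) ** 2 + lam * xs ** p
    return np.sign(y) * xs[np.argmin(vals)]

def p_thresholding(Y, lam, p):
    """Matrix p-thresholding: apply the scalar operator to the singular
    values of Y.  The operator in the paper is set-valued; the grid
    search here just picks one minimizer."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.array([scalar_p_prox(si, lam, p) for si in s])
    return U @ np.diag(s_thr) @ Vt

def p_fpc(M_obs, mask, lam, p, mu=1.0, iters=200):
    """Plain fixed point iteration for matrix completion:
    X <- T_{lam*mu, p}(X - mu * mask*(X - M_obs)).
    Continuation on lam is omitted in this sketch."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        G = mask * (X - M_obs)            # gradient of the data-fit term
        X = p_thresholding(X - mu * G, lam * mu, p)
    return X
```

On a random low-rank matrix with 70% of the entries observed, a few dozen iterations of this sketch already drive the fit on the observed entries down substantially, which mirrors the qualitative behavior the experiments report.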


Keywords: ℓ p regularized matrix minimization; matrix completion problem; p-thresholding operator; global optimality condition; fixed point continuation algorithm


Mathematics Subject Classification (2010): 90C06, 90C26, 90C46, 65F22, 65F30





This work was supported by National Natural Science Foundation of China (Grant Nos. 11401124 and 71271021), the Scientific Research Projects for the Introduced Talents of Guizhou University (Grant No. 201343) and the Key Program of Natural Science Foundation of China (Grant No. 11431002). The authors are thankful to the two anonymous referees for their valuable suggestions and comments that helped us to revise the paper into the present form.


  1. Attouch H, Bolte J. On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math Program, 2009, 116: 5–16
  2. Attouch H, Bolte J, Svaiter B F. Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math Program, 2013, 137: 91–129
  3. Cai J, Candès E, Shen Z. A singular value thresholding algorithm for matrix completion. SIAM J Optim, 2010, 20: 1956–1982
  4. Candès E, Plan Y. Matrix completion with noise. In: Proceedings of the IEEE, vol. 98. New York: IEEE, 2010, 925–936
  5. Candès E, Recht B. Exact matrix completion via convex optimization. Found Comput Math, 2009, 9: 717–772
  6. Candès E, Tao T. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans Inform Theory, 2010, 56: 2053–2080
  7. Cao W, Sun J, Xu Z. Fast image deconvolution using closed-form thresholding formulas of L q (q = 1/2, 2/3) regularization. J Vis Commun Image Represent, 2013, 24: 31–41
  8. Chartrand R. Exact reconstructions of sparse signals via nonconvex minimization. IEEE Signal Process Lett, 2007, 14: 707–710
  9. Chen X, Ge D, Wang Z, et al. Complexity of unconstrained l 2-l p minimization. Math Program, 2014, 143: 371–383
  10. Chen X, Niu L, Yuan Y. Optimality conditions and smoothing trust region Newton method for non-Lipschitz optimization. SIAM J Optim, 2013, 23: 1528–1552
  11. Chen X, Xu F, Ye Y. Lower bound theory of nonzero entries in solutions of l 2-l p minimization. SIAM J Sci Comput, 2010, 32: 2832–2852
  12. Chen Y, Xiu N, Peng D. Global solutions of non-Lipschitz S 2-S p minimization over the positive semidefinite cone. Optim Lett, 2014, 8: 2053–2064
  13. Daubechies I, DeVore R, Fornasier M, et al. Iteratively reweighted least squares minimization for sparse recovery. Commun Pure Appl Math, 2010, 63: 1–38
  14. Drineas P, Kannan R, Mahoney M W. Fast Monte Carlo algorithms for matrices II: Computing low-rank approximations to a matrix. SIAM J Comput, 2006, 36: 158–183
  15. Efron B, Hastie T, Johnstone I M, et al. Least angle regression. Ann Statist, 2004, 32: 407–499
  16. Fazel M, Hindi H, Boyd S. A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of the American Control Conference. New York: IEEE, 2001, doi: 10.1109/ACC.2001.945730
  17. Fazel M, Hindi H, Boyd S. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In: Proceedings of the American Control Conference. New York: IEEE, 2003, doi: 10.1109/ACC.2003.1243393
  18. Foucart S, Lai M-J. Sparsest solutions of underdetermined linear systems via l q minimization for 0 < q ≤ 1. Appl Comput Harmon Anal, 2009, 26: 395–407
  19. Hale E, Yin W, Zhang Y. A fixed-point continuation method for l 1-regularized minimization: Methodology and convergence. SIAM J Optim, 2008, 19: 1107–1130
  20. Ji S, Sze K, Zhou Z, et al. Beyond convex relaxation: A polynomial-time non-convex optimization approach to network localization. In: IEEE Conference on Computer Communications. New York: IEEE, 2013, 2499–2507
  21. Keshavan R, Montanari A, Oh S. Matrix completion from a few entries. IEEE Trans Inform Theory, 2010, 56: 2980–2998
  22. Lai M-J, Xu Y, Yin W. Improved iteratively reweighted least squares for unconstrained smoothed l p minimization. SIAM J Numer Anal, 2013, 51: 927–957
  23. Liu Z, Vandenberghe L. Interior-point method for nuclear norm approximation with application to system identification. SIAM J Matrix Anal Appl, 2009, 31: 1235–1256
  24. Lu Y, Zhang L, Wu J. A smoothing majorization method for l 2 2 -l p p matrix minimization. Optim Method Softw, 2014, 30: 1–24
  25. Lu Z. Iterative reweighted minimization methods for l p regularized unconstrained nonlinear programming. Math Program, 2014, 147: 277–307
  26. Lu Z, Zhang Y, Li X. Penalty decomposition methods for rank minimization. Optim Method Softw, 2015, 30: 531–558
  27. Lu Z, Zhang Y, Lu J. l p regularized low-rank approximation via iterative reweighted singular value minimization. Comput Optim Appl, 2017, 68: 619–642
  28. Ma S, Goldfarb D, Chen L. Fixed point and Bregman iterative methods for matrix rank minimization. Math Program, 2011, 128: 321–353
  29. Ma S, Li Q. Lower bound theory for Schatten-p regularized least squares problems. Technical report. Beijing: Beijing Institute of Technology, 2013
  30. Mohan K, Fazel M. Iterative reweighted algorithms for matrix rank minimization. J Mach Learn Res, 2012, 13: 3253–3285
  31. Rakotomamonjy A, Flamary R, Gasso G, et al. l p-l q penalty for sparse linear and sparse multiple kernel multitask learning. IEEE Trans Neural Network, 2011, 22: 1307–1320
  32. Rohde A, Tsybakov A. Estimation of high-dimensional low-rank matrices. Ann Statist, 2011, 39: 887–930
  33. Skelton R, Iwasaki T, Grigoriadis K. A Unified Algebraic Approach to Linear Control Design. Abingdon: Taylor and Francis, 1998
  34. Sun Q. Recovery of sparsest signals via l q minimization. Appl Comput Harmon Anal, 2012, 32: 329–341
  35. Toh K C, Yun S. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pac J Optim, 2010, 6: 615–640
  36. Wen Z, Yin W, Zhang Y. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math Program Comp, 2012, 4: 333–361
  37. Xu Z, Chang X, Xu F, et al. L 1/2 regularization: A thresholding representation theory and a fast solver. IEEE Trans Neural Network Learn Syst, 2012, 23: 1013–1027
  38. Zeng J, Lin S, Wang Y, et al. L 1/2 regularization: Convergence of iterative half thresholding algorithm. IEEE Trans Signal Process, 2014, 62: 2317–2329

Copyright information

© Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. School of Mathematics and Statistics, Guizhou University, Guiyang, China
  2. Department of Mathematics, Beijing Jiaotong University, Beijing, China
