Computational Optimization and Applications, Volume 59, Issue 1–2, pp 263–284

A constrained optimization reformulation and a feasible descent direction method for \(L_{1/2}\) regularization



In this paper, we first propose a constrained optimization reformulation of the \(L_{1/2}\) regularization problem. The constrained problem minimizes a smooth function subject to quadratic constraints and nonnegativity constraints. A useful property of the constrained problem is that at any feasible point, the set of feasible directions coincides with the set of linearized feasible directions; consequently, a KKT point always exists. Moreover, we show that the KKT points are exactly the stationary points of the \(L_{1/2}\) regularization problem. Based on the constrained optimization reformulation, we propose a feasible descent direction method, called the feasible steepest descent method, for solving the unconstrained \(L_{1/2}\) regularization problem. It extends the steepest descent method for smooth unconstrained optimization problems. The feasible steepest descent direction has an explicit expression, and the method is easy to implement. Under very mild conditions, we show that the proposed method is globally convergent. We apply the method to practical problems arising from compressed sensing; the results demonstrate its efficiency.
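To make the problem setting concrete, the following is a minimal illustrative sketch (not the paper's feasible steepest descent method) of the \(L_{1/2}\)-regularized least-squares objective and one descent step on a smoothed surrogate, where \(|x_i|\) is approximated by \(\sqrt{x_i^2+\varepsilon}\). The function names, the smoothing parameter `eps`, and the fixed step size are assumptions introduced here for illustration only.

```python
import numpy as np

def l_half_objective(A, b, x, lam):
    """L_{1/2}-regularized least squares: 0.5 * ||Ax - b||^2 + lam * sum_i |x_i|^{1/2}."""
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(np.abs(x)))

def smoothed_gradient_step(A, b, x, lam, step, eps=1e-8):
    """One gradient step on the smoothed surrogate with |x_i| ~ sqrt(x_i^2 + eps).

    The regularizer term becomes lam * (x_i^2 + eps)^{1/4}, whose derivative is
    lam * 0.5 * x_i * (x_i^2 + eps)^{-3/4}.
    """
    grad_data = A.T @ (A @ x - b)                       # gradient of the smooth data-fit term
    grad_reg = lam * 0.5 * x * (x ** 2 + eps) ** (-0.75)  # gradient of the smoothed regularizer
    return x - step * (grad_data + grad_reg)

# Small usage example: one step from the origin decreases the objective.
A = np.eye(2)
b = np.array([1.0, 0.5])
lam = 0.01
x0 = np.zeros(2)
x1 = smoothed_gradient_step(A, b, x0, lam, step=0.1)
```

This sketch only conveys the shape of the objective; the nonconvex, non-Lipschitz term \(\sum_i |x_i|^{1/2}\) is precisely what motivates the paper's constrained reformulation and feasible direction framework rather than plain smoothed gradient descent.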


Keywords: \(L_{1/2}\) regularization · Reformulation · Feasible descent direction method



The authors would like to thank two anonymous referees for their valuable suggestions and comments. Supported by the NSF of China Grants 11371154, 11071087, 11201197 and 11126147.



Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. School of Mathematical Sciences, South China Normal University, Guangzhou, China
  2. College of Mathematics and Information Science, Jiangxi Normal University, Nanchang, China
