Convergence Analysis of Penalty Decomposition Algorithm for Cardinality Constrained Convex Optimization in Hilbert Spaces

  • Michael Pleshakov
  • Sergei Sidorov
  • Kirill Spiridonov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12095)


The paper examines an algorithm for finding approximate sparse solutions of convex cardinality-constrained optimization problems in Hilbert spaces. The proposed algorithm follows the penalty decomposition (PD) approach and solves the sub-problems at each iteration only approximately. We examine the convergence of the algorithm to a stationary point satisfying necessary optimality conditions. Unlike similar works, this paper studies the properties of PD algorithms in infinite-dimensional (Hilbert) spaces. The results show that the convergence property obtained in previous works for cardinality-constrained optimization in Euclidean space also holds in infinite-dimensional (Hilbert) spaces. Moreover, we establish a similar result for convex optimization problems with a cardinality constraint with respect to a dictionary (not necessarily a basis).
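The PD scheme sketched in the abstract — alternating an approximate minimization of a penalized sub-problem with an exact projection onto the cardinality constraint, while growing the penalty parameter — can be illustrated in finite dimensions as follows. This is a minimal sketch, not the paper's algorithm: the quadratic objective, the gradient-descent inner solver, and all parameter names (`rho`, `rho_growth`, `inner_steps`, `outer_iters`) are illustrative assumptions.

```python
import numpy as np

def hard_threshold(v, s):
    """Project onto the cardinality constraint: keep the s largest-magnitude
    entries of v and zero out the rest."""
    y = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    y[idx] = v[idx]
    return y

def penalty_decomposition(grad_f, L_f, x0, s, rho=1.0, rho_growth=1.5,
                          inner_steps=200, outer_iters=30):
    """Sketch of a PD loop for min f(x) s.t. ||x||_0 <= s.
    x-step: approximately minimize f(x) + (rho/2)||x - y||^2 by gradient descent;
    y-step: hard-threshold x; then increase rho. L_f is a Lipschitz constant
    of grad_f, used to pick a safe step size for the penalized objective."""
    x = x0.copy()
    y = hard_threshold(x, s)
    for _ in range(outer_iters):
        step = 1.0 / (L_f + rho)  # gradient of the penalized objective is (L_f + rho)-Lipschitz
        for _ in range(inner_steps):
            x -= step * (grad_f(x) + rho * (x - y))
        y = hard_threshold(x, s)
        rho *= rho_growth
    return y

# Toy instance (hypothetical data): f(x) = 0.5||Ax - b||^2 with a 3-sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[1, 4, 7]] = [2.0, -1.5, 1.0]
b = A @ x_true
grad_f = lambda x: A.T @ (A @ x - b)
L_f = np.linalg.norm(A, 2) ** 2  # spectral norm squared bounds the Hessian A^T A

x_hat = penalty_decomposition(grad_f, L_f, np.zeros(10), s=3)
print(np.count_nonzero(x_hat))  # at most 3 nonzeros by construction of the y-step
```

Note that the y-step is the only place where the nonconvex cardinality constraint enters, and it admits an exact solution (hard thresholding), while the convex x-step may be solved inexactly — the inexactness of the sub-problem solves is precisely what the convergence analysis in the paper accounts for.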


Keywords: Nonlinear optimization · Convex optimization · Sparsity · Cardinality constraint · Penalty decomposition



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Saratov State University, Saratov, Russian Federation
