
Convergence Analysis of Penalty Decomposition Algorithm for Cardinality Constrained Convex Optimization in Hilbert Spaces

  • Conference paper
  • In: Mathematical Optimization Theory and Operations Research (MOTOR 2020)

Abstract

The paper examines an algorithm for finding approximate sparse solutions of convex cardinality-constrained optimization problems in Hilbert spaces. The proposed algorithm follows the penalty decomposition (PD) approach and solves the sub-problems arising at each iteration approximately. We examine the convergence of the algorithm to a stationary point satisfying necessary optimality conditions. Unlike similar previous works, this paper studies the properties of PD algorithms in an infinite-dimensional (Hilbert) space. The results show that the convergence property obtained in earlier works for cardinality-constrained optimization in Euclidean space also holds in infinite-dimensional (Hilbert) space. Moreover, we establish an analogous result for convex optimization problems with a cardinality constraint with respect to a dictionary (not necessarily a basis).

This work was supported by the Ministry of Science and Education of the Russian Federation within the framework of the basic part of the scientific research state task, project FSRR-2020-0006.
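Since this page reproduces only the abstract, the following is a minimal finite-dimensional sketch of the generic penalty decomposition loop the abstract refers to: the smooth block is minimized only approximately (a few gradient steps), the cardinality block is minimized exactly by hard thresholding, and the penalty parameter is driven upward. Every name and parameter choice here (`penalty_decomposition`, `hard_threshold`, `rho`, `sigma`, the step-size rule) is an illustrative assumption, not the authors' algorithm or its Hilbert-space analysis.

```python
import numpy as np

def hard_threshold(x, s):
    """Exact minimizer of the y-block: Euclidean projection onto
    {y : ||y||_0 <= s}, i.e. keep the s largest-magnitude entries."""
    y = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    y[keep] = x[keep]
    return y

def penalty_decomposition(grad_f, lip_f, x0, s,
                          rho=1.0, sigma=2.0,
                          inner_steps=50, outer_steps=25):
    """Hypothetical PD sketch for: min f(x) subject to ||x||_0 <= s.

    The constraint is split off into a copy variable y, giving the
    penalized problem  min_{x,y} f(x) + (rho/2)||x - y||^2,  ||y||_0 <= s,
    whose blocks are minimized alternately. The x-block is solved only
    approximately (plain gradient descent), matching the abstract's
    "solves sub-problems approximately"; rho grows geometrically.
    """
    x = x0.astype(float).copy()
    y = hard_threshold(x, s)
    for _ in range(outer_steps):
        # Safe step size: the penalized objective has Lipschitz
        # gradient constant at most lip_f + rho.
        step = 1.0 / (lip_f + rho)
        for _ in range(inner_steps):
            x = x - step * (grad_f(x) + rho * (x - y))
        y = hard_threshold(x, s)  # exact y-update
        rho *= sigma              # drive the penalty parameter upward
    return y

# Toy usage (assumed data): sparse least squares, ||x||_0 <= 3.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[3, 7, 11]] = [2.0, -1.5, 1.0]
b = A @ x_true
grad = lambda x: 2.0 * A.T @ (A @ x - b)
lip = 2.0 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad
x_hat = penalty_decomposition(grad, lip, np.zeros(20), s=3)
```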



Author information

Correspondence to Sergei Sidorov.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Pleshakov, M., Sidorov, S., Spiridonov, K. (2020). Convergence Analysis of Penalty Decomposition Algorithm for Cardinality Constrained Convex Optimization in Hilbert Spaces. In: Kononov, A., Khachay, M., Kalyagin, V., Pardalos, P. (eds.) Mathematical Optimization Theory and Operations Research. MOTOR 2020. Lecture Notes in Computer Science, vol. 12095. Springer, Cham. https://doi.org/10.1007/978-3-030-49988-4_10
