Convergence Analysis of Penalty Decomposition Algorithm for Cardinality Constrained Convex Optimization in Hilbert Spaces

  • Michael Pleshakov
  • Sergei Sidorov
  • Kirill Spiridonov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12095)

Abstract

The paper examines an algorithm for finding approximate sparse solutions of convex cardinality-constrained optimization problems in Hilbert spaces. The proposed algorithm uses the penalty decomposition (PD) approach and solves the sub-problems at each iteration approximately. We examine the convergence of the algorithm to a stationary point satisfying necessary optimality conditions. Unlike other similar works, this paper studies the properties of PD algorithms in infinite-dimensional (Hilbert) spaces. The results show that the convergence property obtained in previous works for cardinality-constrained optimization in Euclidean space also holds in infinite-dimensional (Hilbert) spaces. Moreover, we establish a similar result for convex optimization problems with a cardinality constraint with respect to a dictionary (not necessarily a basis).
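The algorithm analyzed in the paper operates in a Hilbert space; as a rough finite-dimensional illustration of the penalty decomposition scheme described above, the following Python sketch applies PD to a least-squares objective f(x) = ||Ax - b||^2 under the cardinality constraint ||x||_0 <= s. The function names, the gradient-descent x-step, and the penalty schedule are illustrative assumptions, not the authors' exact method.

```python
# A minimal finite-dimensional sketch of penalty decomposition (PD) for
#   min f(x)  s.t.  ||x||_0 <= s,   with  f(x) = ||Ax - b||^2.
# All names and the rho schedule here are illustrative assumptions.
import numpy as np

def hard_threshold(v, s):
    """Exact solution of min_y ||v - y||^2 s.t. ||y||_0 <= s:
    keep the s largest-magnitude entries of v, zero the rest."""
    y = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    y[keep] = v[keep]
    return y

def pd_sparse(A, b, s, rho=1.0, rho_growth=2.0, outer=30, inner=200):
    """Alternate minimization of the penalty sub-problem
        min_{x, y} ||Ax - b||^2 + (rho/2) ||x - y||^2   s.t. ||y||_0 <= s,
    increasing rho so that x and y are driven together."""
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)
    for _ in range(outer):
        # x-step: the smooth sub-problem is solved only approximately
        # (a fixed number of gradient steps), as in the paper's setting.
        L = 2.0 * np.linalg.norm(A, 2) ** 2 + rho  # gradient Lipschitz bound
        for _ in range(inner):
            grad = 2.0 * A.T @ (A @ x - b) + rho * (x - y)
            x = x - grad / L
        # y-step: exact projection onto the cardinality constraint.
        y = hard_threshold(x, s)
        rho *= rho_growth
    return y

# Usage: recover a 3-sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = pd_sparse(A, b, s=3)
print(np.nonzero(x_hat)[0])  # expected to recover the support {5, 40, 77}
```

Note that the y-step is an exact projection onto the cardinality constraint (hard thresholding), while the x-step is deliberately inexact, mirroring the setting of the paper in which sub-problems are solved only approximately.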

Keywords

Nonlinear optimization · Convex optimization · Sparsity · Cardinality constraint · Penalty decomposition


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Saratov State University, Saratov, Russian Federation
