Abstract
In this chapter we study an algorithm for minimizing the sum of two convex functions, the first of which is also smooth. Each iteration of the algorithm consists of two steps: the first computes a subgradient of the first function, and the second is a proximal step with respect to the second function. Each of these two steps carries a computational error, and in general the two errors differ. We show that the algorithm generates a good approximate solution provided all computational errors are bounded above by a small positive constant. Moreover, if the computational errors of the two steps are known, we determine what approximate solution can be obtained and how many iterations are needed to obtain it.
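The two-step iteration described above can be sketched numerically as an inexact proximal gradient scheme. The sketch below is illustrative only, not the chapter's method: the smooth part f(x) = (1/2)(x - 3)^2, the nonsmooth part g(x) = |x| (whose proximal operator is soft-thresholding), the step size, and the random error directions are all assumed choices made here for demonstration; the error bounds `delta1` and `delta2` stand in for the two computational errors.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_proximal_gradient(grad_f, prox_g, x0, step, n_iter,
                              delta1=0.0, delta2=0.0, rng=None):
    """Proximal gradient iteration in which the gradient step is computed
    with an error of norm <= delta1 and the proximal step with an error of
    norm <= delta2 (illustrative error model: random unit directions)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad_f(x)
        if delta1 > 0:  # perturb the (sub)gradient by a vector of norm delta1
            e = rng.standard_normal(x.shape)
            g = g + delta1 * e / np.linalg.norm(e)
        y = prox_g(x - step * g, step)
        if delta2 > 0:  # perturb the proximal step by a vector of norm delta2
            e = rng.standard_normal(x.shape)
            y = y + delta2 * e / np.linalg.norm(e)
        x = y
    return x

# Exact arithmetic: the minimizer of 0.5*(x-3)^2 + |x| is x = 2.
x_exact = inexact_proximal_gradient(lambda x: x - 3.0, prox_l1,
                                    np.array([0.0]), step=0.5, n_iter=100)

# With small bounded errors in both steps, the iterates stay in a small
# neighborhood of the minimizer, as the chapter's results describe.
x_noisy = inexact_proximal_gradient(lambda x: x - 3.0, prox_l1,
                                    np.array([0.0]), step=0.5, n_iter=200,
                                    delta1=0.01, delta2=0.01)
```

Running the exact variant drives the iterate to the minimizer x = 2 up to machine precision, while the perturbed variant settles within a distance proportional to the error bounds, which is the qualitative behavior the chapter quantifies.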
© 2020 Springer Nature Switzerland AG
Cite this chapter
Zaslavski, A.J. (2020). An Optimization Problem with a Composite Objective Function. In: Convex Optimization with Computational Errors. Springer Optimization and Its Applications, vol 155. Springer, Cham. https://doi.org/10.1007/978-3-030-37822-6_7
DOI: https://doi.org/10.1007/978-3-030-37822-6_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-37821-9
Online ISBN: 978-3-030-37822-6
eBook Packages: Mathematics and Statistics (R0)