An Optimization Problem with a Composite Objective Function

Part of the book series: Springer Optimization and Its Applications ((SOIA,volume 155))

Abstract

In this chapter we study an algorithm for minimizing the sum of two convex functions, the first of which is also smooth. Each iteration of the algorithm consists of two steps: the first computes a subgradient of the first function, and the second performs a proximal gradient step with respect to the second function. Each of these two steps is carried out with a computational error, and in general the two errors differ. We show that the algorithm generates a good approximate solution whenever both computational errors are bounded from above by a small positive constant. Moreover, given the magnitudes of the two errors, we determine what quality of approximate solution can be obtained and how many iterations are needed to obtain it.
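The scheme described in the abstract can be sketched in code. The following is a minimal illustration, not the chapter's algorithm verbatim: it runs proximal gradient iterations on a LASSO instance (smooth part f(x) = ½‖Ax − b‖², nonsmooth convex part g(x) = λ‖x‖₁) and injects norm-bounded perturbations into each of the two steps to model the two computational errors. All names (`eps_grad`, `eps_prox`, the problem data) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (exact prox of the l1 norm).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_proximal_gradient(grad_f, prox_g, x0, step, n_iters,
                              eps_grad=0.0, eps_prox=0.0, rng=None):
    """Proximal gradient iterations in which the gradient step and the
    proximal step are each perturbed by an error of norm at most
    eps_grad and eps_prox, respectively."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad_f(x)
        if eps_grad > 0:                     # error in the (sub)gradient step
            e = rng.standard_normal(x.shape)
            g = g + eps_grad * e / np.linalg.norm(e)
        x = prox_g(x - step * g, step)
        if eps_prox > 0:                     # error in the proximal step
            e = rng.standard_normal(x.shape)
            x = x + eps_prox * e / np.linalg.norm(e)
    return x

# Illustrative problem: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: soft_threshold(v, lam * t)

x_hat = inexact_proximal_gradient(grad_f, prox_g, np.zeros(10),
                                  step=1.0 / L, n_iters=500,
                                  eps_grad=1e-3, eps_prox=1e-3)
```

With both error bounds small (here 10⁻³), the iterates settle in a small neighborhood of the exact minimizer, consistent with the chapter's theme that bounded computational errors yield a correspondingly good approximate solution.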



Copyright information

© 2020 Springer Nature Switzerland AG

Cite this chapter

Zaslavski, A.J. (2020). An Optimization Problem with a Composite Objective Function. In: Convex Optimization with Computational Errors. Springer Optimization and Its Applications, vol 155. Springer, Cham. https://doi.org/10.1007/978-3-030-37822-6_7
