Subgradient Methods for Saddle-Point Problems

Abstract

We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
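
To make the iteration concrete, the sketch below implements a projected subgradient saddle-point method with iterate averaging, in the spirit of the algorithm studied here. It is a minimal illustration under stated assumptions, not the authors' implementation: the toy Lagrangian L(x, μ) = ‖x‖² + μᵀ(Ax − b) for minimizing ‖x‖² subject to Ax ≤ b, the random problem data, the constant step size, and the iteration count are all introduced for this example.

```python
import numpy as np

# Illustrative convex-concave Lagrangian L(x, mu) = ||x||^2 + mu.(A x - b)
# for the toy problem: minimize ||x||^2 subject to A x <= b.
# The problem data, step size, and iteration count are assumptions for
# this sketch and are not taken from the paper.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
b = rng.standard_normal(3)
alpha = 0.01        # constant step size
num_iters = 5000

def grad_x(x, mu):
    # subgradient of L in x (here L is differentiable in x)
    return 2.0 * x + A.T @ mu

def grad_mu(x, mu):
    # supergradient of L in mu (L is linear in mu)
    return A @ x - b

x = np.zeros(5)
mu = np.zeros(3)
x_avg = np.zeros(5)   # running averages serve as the approximate saddle point
mu_avg = np.zeros(3)

for k in range(1, num_iters + 1):
    gx, gm = grad_x(x, mu), grad_mu(x, mu)
    x = x - alpha * gx                      # descent step in the primal variable
    mu = np.maximum(mu + alpha * gm, 0.0)   # ascent step, projected onto mu >= 0
    # incremental update of the averaged iterates
    x_avg += (x - x_avg) / k
    mu_avg += (mu - mu_avg) / k

print("approximate primal solution:", x_avg)
print("max constraint violation:", max(float(np.max(A @ x_avg - b)), 0.0))
```

Averaging the iterates is the standard device in this line of work for stating per-iteration convergence-rate guarantees and, in the Lagrangian setting, for bounding the feasibility violation and objective values of the generated primal sequence.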

Author information

Correspondence to A. Nedić.

Additional information

Communicated by P.M. Pardalos.

Cite this article

Nedić, A., Ozdaglar, A. Subgradient Methods for Saddle-Point Problems. J Optim Theory Appl 142, 205–228 (2009). https://doi.org/10.1007/s10957-009-9522-7
