Abstract
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
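The scheme described in the abstract can be sketched as a projected subgradient step in each variable (descent in the convex argument, ascent in the concave one) combined with iterate averaging, which is the standard device for obtaining ergodic convergence-rate estimates. The code below is a generic illustration under those assumptions, not the paper's exact algorithm; the function and parameter names (`saddle_subgradient`, `proj_x`, `proj_y`, the bilinear toy objective) are invented for this sketch.

```python
import numpy as np

def saddle_subgradient(f_grad_x, f_grad_y, proj_x, proj_y,
                       x0, y0, stepsize, num_iters):
    """Projected subgradient descent-ascent with iterate averaging.

    A sketch of the generic scheme: move against a subgradient in x,
    along a supergradient in y, project back onto the constraint sets,
    and return the running averages, which serve as the approximate
    saddle point in ergodic-convergence analyses.
    """
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    x_sum, y_sum = np.zeros_like(x), np.zeros_like(y)
    for _ in range(num_iters):
        gx = f_grad_x(x, y)            # subgradient of f(., y) at x
        gy = f_grad_y(x, y)            # supergradient of f(x, .) at y
        x = proj_x(x - stepsize * gx)  # minimize over x
        y = proj_y(y + stepsize * gy)  # maximize over y
        x_sum += x
        y_sum += y
    return x_sum / num_iters, y_sum / num_iters

# Bilinear toy problem f(x, y) = x * y on [-1, 1] x [-1, 1],
# whose unique saddle point is (0, 0).  The raw iterates circulate
# around the saddle point, but the averages converge toward it.
clip = lambda v: np.clip(v, -1.0, 1.0)
x_avg, y_avg = saddle_subgradient(
    f_grad_x=lambda x, y: y,   # d/dx (x * y)
    f_grad_y=lambda x, y: x,   # d/dy (x * y)
    proj_x=clip, proj_y=clip,
    x0=1.0, y0=1.0, stepsize=0.05, num_iters=4000)
print(x_avg, y_avg)  # both near 0
```

The bilinear example is the classical case motivating averaging: the plain descent-ascent iterates do not converge, while their running means do, which mirrors the ergodic estimates the paper develops.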
Additional information
Communicated by P.M. Pardalos.
Cite this article
Nedić, A., Ozdaglar, A. Subgradient Methods for Saddle-Point Problems. J Optim Theory Appl 142, 205–228 (2009). https://doi.org/10.1007/s10957-009-9522-7