An Efficient Stochastic Approximation Algorithm for Stochastic Saddle Point Problems
We show that Polyak’s (1990) stochastic approximation algorithm with averaging, originally developed for unconstrained minimization of a smooth strongly convex objective function observed with noise, can be naturally modified to solve convex-concave stochastic saddle point problems. We also show that the extended algorithm, applied to general families of stochastic convex-concave saddle point problems, attains a rate of convergence that is unimprovable in order in the minimax sense. Finally, we present supporting numerical results for the proposed algorithm.
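To illustrate the idea behind the abstract, the following is a minimal sketch (not the paper's exact algorithm) of stochastic gradient descent-ascent with iterate averaging, applied to a hypothetical strongly convex-strongly concave test problem f(x, y) = ½‖x‖² + xᵀAy − ½‖y‖² observed through noisy gradients; the problem, step-size schedule, and constants are assumptions chosen for the demonstration.

```python
import numpy as np

# Illustrative test problem (an assumption, not from the paper):
#   f(x, y) = 0.5*||x||^2 + x^T A y - 0.5*||y||^2,
# a convex-concave saddle point problem with saddle point (0, 0).
# Gradients are observed with additive Gaussian noise.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

x = np.ones(n)            # current search point in x
y = np.ones(n)            # current search point in y
x_avg = np.zeros(n)       # Polyak-style running averages of the iterates
y_avg = np.zeros(n)

T = 20000
for t in range(1, T + 1):
    gamma = 0.5 / np.sqrt(t + 100.0)   # slowly decaying step size (assumed schedule)
    gx = x + A @ y + rng.standard_normal(n)    # noisy grad_x f(x, y)
    gy = A.T @ x - y + rng.standard_normal(n)  # noisy grad_y f(x, y)
    x = x - gamma * gx    # descent step in x
    y = y + gamma * gy    # ascent step in y
    x_avg += (x - x_avg) / t   # running mean of x_1, ..., x_t
    y_avg += (y - y_avg) / t

# The averaged iterates settle near the saddle point (0, 0), while the
# raw iterates keep fluctuating at a level set by the gradient noise.
print(np.linalg.norm(x_avg), np.linalg.norm(y_avg))
```

The averaging step is the key device: the raw search points oscillate under the noise, but their running mean smooths out both the noise and the rotational component of the saddle-point dynamics.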
Keywords: Modeling Uncertainty, Stochastic Approximation, Search Point, Saddle Point Problem, Minimax Problem