
Journal of Optimization Theory and Applications, Volume 171, Issue 1, pp 121–145

Stochastic Intermediate Gradient Method for Convex Problems with Stochastic Inexact Oracle

  • Pavel Dvurechensky
  • Alexander Gasnikov

Abstract

In this paper, we introduce new methods for convex optimization problems with a stochastic inexact oracle. Our first method is an extension of the Intermediate Gradient Method proposed by Devolder, Glineur and Nesterov for problems with a deterministic inexact oracle. It can be applied to problems with a composite objective function, handles both deterministic and stochastic inexactness of the oracle, and allows a non-Euclidean setup. We estimate the rate of convergence in terms of the expectation of the non-optimality gap and provide a way to control the probability of large deviations from this rate. We also introduce two modifications of this method for strongly convex problems. For the first modification, we estimate the rate of convergence of the expected non-optimality gap; for the second, we bound the probability of large deviations from this rate. All these rates lead to complexity estimates for the proposed methods that coincide, up to a multiplicative constant, with the lower complexity bound for the considered class of convex composite optimization problems with a stochastic inexact oracle.
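For context, the deterministic (δ, L)-oracle introduced by Devolder, Glineur and Nesterov [8], which the stochastic model of this paper generalizes, can be sketched as follows (a sketch only; the precise stochastic definition is given in the paper itself). For a convex function f, a (δ, L)-oracle returns, at any query point y, a pair (f_δ(y), g_δ(y)) such that

\[
0 \le f(x) - f_{\delta}(y) - \langle g_{\delta}(y),\, x - y \rangle \le \frac{L}{2}\,\|x - y\|^{2} + \delta \qquad \text{for all } x .
\]

In the stochastic setting, only stochastic estimates of f_δ(y) and g_δ(y) are assumed available (our reading, based on [8, 10]). A known trade-off in this model, established in [8, 9], is that the fast gradient method attains the accelerated rate O(LR^2/N^2) but accumulates the oracle error as O(Nδ), whereas the classical gradient method converges as O(LR^2/N) without amplifying δ; here R bounds the distance from the starting point to an optimum and N is the iteration count. Intermediate gradient methods interpolate between these two regimes, which makes them attractive when δ cannot be driven to zero.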

Keywords

Convex optimization · Inexact oracle · Rate of convergence · Stochastic optimization

Mathematics Subject Classification

90C30 · 90C15 · 90C25

Acknowledgments

The research presented in Sect. 4 of this paper was conducted at IITP RAS and supported by the Russian Science Foundation Grant (project 14-50-00150); the research presented in the other sections was supported by RFBR, research project No. 15-31-20571 mol_a_ved. The authors would like to thank Professor Yurii Nesterov and Professor Arkadi Nemirovski for useful discussions. We are also grateful to two anonymous reviewers for their suggestions, which helped to improve the text.

References

  1. Evtushenko, Y.: Methods of Solving Extremal Problems and Their Application in Optimization Systems. Nauka, Moscow (1982)
  2. Polyak, B.T.: Introduction to Optimization. Optimization Software Inc., New York (1987)
  3. Nemirovski, A., Yudin, D.: Problem Complexity and Method Efficiency in Optimization. Wiley, New York (1983)
  4. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Massachusetts (2004)
  5. Khachiyan, L., Tarasov, S., Erlich, E.: The inscribed ellipsoid method. Soviet Math. Dokl. 298 (1988) (in Russian)
  6. Nemirovski, A., Nesterov, Y.: Interior Point Polynomial Methods in Convex Programming: Theory and Applications. SIAM, Philadelphia (1994)
  7. Nesterov, Y.: Subgradient Methods for Huge-Scale Optimization Problems. CORE Discussion Paper 2012/2, Louvain-la-Neuve (2012)
  8. Devolder, O., Glineur, F., Nesterov, Y.: First-order methods of smooth convex optimization with inexact oracle. Math. Program. 146(1–2), 37–75 (2014)
  9. Devolder, O., Glineur, F., Nesterov, Y.: Intermediate Gradient Methods for Smooth Convex Problems with Inexact Oracle. CORE Discussion Paper 2013/17, Louvain-la-Neuve (2013)
  10. Devolder, O.: Exactness, Inexactness and Stochasticity in First-Order Methods for Large-Scale Convex Optimization. Ph.D. thesis, Louvain-la-Neuve (2013)
  11. Ghadimi, S., Lan, G.: Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization I: a generic algorithmic framework. SIAM J. Optim. 22(4), 1469–1492 (2012)
  12. Ghadimi, S., Lan, G.: Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization II: shrinking procedures and optimal algorithms. SIAM J. Optim. 23(4), 2061–2089 (2013)
  13. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58(1), 267–288 (1996)
  14. Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Program. Ser. B 140(1), 125–161 (2013)
  15. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19(4), 1574–1609 (2009)
  16. Lan, G., Nemirovski, A., Shapiro, A.: Validation analysis of mirror descent stochastic approximation method. Math. Program. Ser. A 134(2), 425–458 (2012)
  17. Juditsky, A., Nesterov, Y.: Primal-dual subgradient methods for minimizing uniformly convex functions. Preprint arXiv:1401.1792 (2014)

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany
  2. Institute for Information Transmission Problems RAS, Moscow, Russia
  3. Moscow Institute of Physics and Technology, Dolgoprudny, Moscow Region, Russia
