
Journal of Global Optimization, Volume 73, Issue 1, pp 59–81

Spectral projected gradient method for stochastic optimization

  • Nataša Krejić
  • Nataša Krklec Jerinkić

Abstract

We consider the Spectral Projected Gradient (SPG) method for solving constrained optimization problems whose objective function is given as a mathematical expectation. The feasible set is assumed to be convex, closed, and easy to project onto. The objective function is approximated by a sequence of Sample Average Approximation (SAA) functions with varying sample sizes, and the sample-size update is based on two error estimates: the SAA error and the approximate-solution error. The SPG method is combined with a nonmonotone line search. Almost sure convergence is established without imposing an explicit sample-growth condition. Preliminary numerical results demonstrate the efficiency of the proposed method.
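To make the approach concrete, the following is a minimal sketch of SPG iterations on SAA approximations, assuming a box-constrained feasible set, a max-of-last-M nonmonotone line search, and an illustrative sample-size doubling rule; the paper's actual sample-size update is driven by the SAA and approximate-solution error estimates, and the helpers f_saa and grad_saa are hypothetical placeholders for the sample-average value and gradient.

import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def spg_saa(f_saa, grad_saa, x0, lo, hi, n0=50, n_max=5000,
            max_iter=200, M=10, gamma=1e-4, lam_min=1e-10, lam_max=1e10):
    """SPG with a nonmonotone (max-of-last-M) line search applied to a
    sequence of SAA functions f_N with growing sample size N.

    f_saa(x, N), grad_saa(x, N): hypothetical sample-average value/gradient.
    """
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    N, lam = n0, 1.0                 # sample size, spectral step length
    g = grad_saa(x, N)
    f_hist = [f_saa(x, N)]           # recent values for the nonmonotone test
    for _ in range(max_iter):
        d = project_box(x - lam * g, lo, hi) - x    # SPG search direction
        if np.linalg.norm(d) < 1e-8 and N >= n_max:
            break                    # near-stationary at the largest sample
        # Nonmonotone Armijo test against the max of the last M values
        f_ref, gtd, alpha = max(f_hist[-M:]), g @ d, 1.0
        while alpha > 1e-12 and f_saa(x + alpha * d, N) > f_ref + gamma * alpha * gtd:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad_saa(x_new, N)
        # Barzilai-Borwein (spectral) step from the displacement pair
        s, y = x_new - x, g_new - g
        sy = s @ y
        lam = np.clip(s @ s / sy, lam_min, lam_max) if sy > 0 else lam_max
        x, g = x_new, g_new
        f_hist.append(f_saa(x, N))
        N = min(2 * N, n_max)        # placeholder growth rule (see the paper)
    return x

The max-of-last-M acceptance rule tolerates occasional increases of the objective, which is useful when the sample, and hence the SAA function, changes between iterations; for simplicity, this sketch mixes function values computed with different sample sizes in the history.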

Keywords

Spectral projected gradient · Constrained stochastic problems · Sample average approximation · Variable sample size

Acknowledgements

We are grateful to the associate editor and two anonymous referees whose constructive remarks helped us to improve this paper.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Mathematics and Informatics, Faculty of Science, University of Novi Sad, Novi Sad, Serbia
