Automation and Remote Control, Volume 80, Issue 8, pp. 1487–1501

Accelerated Gradient-Free Optimization Methods with a Non-Euclidean Proximal Operator

  • E. A. Vorontsova
  • A. V. Gasnikov
  • E. A. Gorbunov
  • P. E. Dvurechenskii
Optimization, System Analysis, and Operations Research

Abstract

We propose an accelerated gradient-free method with a non-Euclidean proximal operator associated with the p-norm (1 ⩽ p ⩽ 2). We obtain convergence rate estimates for the method in the presence of small noise in the computed function values. We also present the results of computational experiments.
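
To make the two ingredients named in the abstract concrete, the following is a minimal Python sketch of a single (non-accelerated) gradient-free step with a p-norm prox-structure: a two-point random-direction estimate of the gradient is plugged into an unconstrained mirror (proximal) step generated by the prox-function d(x) = (1/2)||x||_p^2 for 1 < p ⩽ 2. The function name, step sizes, and the quadratic test function are illustrative assumptions; this is a sketch of the general technique, not the accelerated method analyzed in the paper.

import numpy as np

def gradient_free_mirror_step(f, x, gamma, tau, p, rng):
    """One gradient-free mirror-descent step (illustrative sketch only).

    Combines a two-point random-direction gradient estimate with an
    unconstrained proximal (mirror) step for the prox-function
    d(x) = 0.5 * ||x||_p^2, 1 < p <= 2.
    """
    n = x.size
    q = p / (p - 1.0)  # dual exponent, 1/p + 1/q = 1

    # Two-point gradient estimate along a direction e uniform on the unit sphere.
    e = rng.standard_normal(n)
    e /= np.linalg.norm(e)
    g = n * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

    # Mirror map grad d and its inverse grad d* for d(x) = 0.5 * ||x||_p^2,
    # where d*(s) = 0.5 * ||s||_q^2 is the convex conjugate of d.
    def grad_d(z):
        norm_p = np.linalg.norm(z, ord=p)
        if norm_p == 0.0:
            return np.zeros_like(z)
        return np.sign(z) * np.abs(z) ** (p - 1) * norm_p ** (2 - p)

    def grad_d_conj(s):
        norm_q = np.linalg.norm(s, ord=q)
        if norm_q == 0.0:
            return np.zeros_like(s)
        return np.sign(s) * np.abs(s) ** (q - 1) * norm_q ** (2 - q)

    # Unconstrained proximal (mirror) step: x_new = grad d*( grad d(x) - gamma * g ).
    return grad_d_conj(grad_d(x) - gamma * g)

# Example: a few hundred steps on a simple quadratic.
if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def f(z):
        return 0.5 * float(np.dot(z, z))

    x = np.ones(10)
    for _ in range(200):
        x = gradient_free_mirror_step(f, x, gamma=0.05, tau=1e-6, p=1.5, rng=rng)
    print(np.linalg.norm(x))  # the norm should be small after these steps

For p = 2 the step reduces to the standard Euclidean gradient-free iteration; values of p closer to 1 change the geometry of the proximal step, which is the regime covered by the paper's p-norm (1 ⩽ p ⩽ 2) setting.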

Keywords

accelerated optimization methods, convex optimization, gradient-free methods, inexact oracle, non-Euclidean proximal operator, prox-structure

Acknowledgments

The work presented in Section 3 was supported by the Russian Science Foundation, project no. 17-11-01027. For the remaining sections, the work of A.V. Gasnikov was carried out within the framework of the program of state support of the leading universities of the Russian Federation “5-100” and was supported by the Russian Foundation for Basic Research, project no. 18-31-20005 mol-a-ved; the work of E.A. Gorbunov was supported by the grant of the President of the Russian Federation MD-1320.2018.1; the work of P.E. Dvurechenskii and E.A. Vorontsova was supported by the Russian Foundation for Basic Research, project no. 18-29-03071 mk.


Copyright information

© Pleiades Publishing, Ltd. 2019

Authors and Affiliations

  1. Far Eastern Federal University, Vladivostok, Russia
  2. Université Grenoble Alpes, Grenoble, France
  3. Moscow Institute of Physics and Technology, Moscow, Russia
  4. National Research University Higher School of Economics, Moscow, Russia
  5. Caucasus Mathematical Center, Adyghe State University, Maikop, Republic of Adygea, Russia
  6. Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany
