
Computational Optimization and Applications, Volume 71, Issue 1, pp. 95–113

Modified Fejér sequences and applications

  • Junhong Lin
  • Lorenzo Rosasco
  • Silvia Villa
  • Ding-Xuan Zhou

Abstract

In this note, we propose and study the notion of modified Fejér sequences. Within a Hilbert space setting, this property has been used to prove ergodic convergence of proximal incremental subgradient methods. Here we show that it in fact provides a unifying framework for proving convergence rates for the objective function values of several optimization algorithms. In particular, our results apply to the forward–backward splitting algorithm, the incremental subgradient proximal algorithm, and the Douglas–Rachford splitting method, both including and generalizing known results. A sketch of the first of these iterations appears below.
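To make the forward–backward splitting iteration concrete, here is a minimal NumPy sketch (not the authors' code) applied to the lasso problem min_x ½‖Ax − b‖² + λ‖x‖₁. The proximal step for the ℓ₁ term is componentwise soft-thresholding; the data `A`, `b`, the weight `lam`, and the step size `step` are illustrative choices, not values from the paper.

```python
# Minimal sketch of forward-backward (proximal gradient) splitting on a
# lasso instance: forward gradient step on the smooth term 0.5*||Ax - b||^2,
# backward proximal step on the nonsmooth term lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))   # illustrative problem data
b = rng.standard_normal(50)
lam = 0.1                           # illustrative regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step <= 1/L, L = Lipschitz constant of the gradient

x = np.zeros(20)
for k in range(500):
    grad = A.T @ (A @ x - b)                          # forward (gradient) step
    x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step

print(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```

For a constant step size in (0, 2/L), iterations of this type generate sequences with the Fejér-like monotonicity that the paper's framework is built around; the modified Fejér property then yields rates on the objective values.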

Keywords

Convergence of first-order methods · Proximal methods · Subgradient method · Fejér sequence


Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  • Junhong Lin (1)
  • Lorenzo Rosasco (1, 3)
  • Silvia Villa (2)
  • Ding-Xuan Zhou (4)

  1. LCSL, Istituto Italiano di Tecnologia and Massachusetts Institute of Technology, Cambridge, USA
  2. Dipartimento di Matematica, Politecnico di Milano, Milano, Italy
  3. DIBRIS, Università degli Studi di Genova, Genoa, Italy
  4. Department of Mathematics, City University of Hong Kong, Kowloon, China
