Inertial Forward–Backward Algorithms with Perturbations: Application to Tikhonov Regularization

  • Hedy Attouch
  • Alexandre Cabot
  • Zaki Chbani
  • Hassan Riahi

Abstract

In a Hilbert space setting, we analyze the convergence properties of a general class of inertial forward–backward algorithms in the presence of perturbations, approximations, and errors. These splitting algorithms are designed to solve structured convex minimization problems by fast methods. The function to be minimized is the sum of a continuously differentiable convex function, whose gradient is Lipschitz continuous, and a proper lower semicontinuous convex function. The algorithms involve a general sequence of positive extrapolation coefficients, which reflects the inertial effect, and a sequence in the Hilbert space, which accounts for the presence of perturbations. We obtain convergence rates for the function values and convergence of the iterates under conditions that jointly involve the extrapolation and perturbation sequences. This extends the recent work of Attouch and Cabot, which was devoted to the unperturbed case. Next, we consider the introduction into these algorithms of a Tikhonov regularization term with vanishing coefficient. When the regularization coefficient does not tend to zero too rapidly, we obtain strong ergodic convergence of the iterates to the minimum-norm solution. Allowing a general sequence of extrapolation coefficients makes it possible to cover a wide range of accelerated methods, and thus to establish the robustness of these algorithms in a unified way.
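To fix ideas, the type of scheme studied in the paper can be sketched as follows. This is an illustrative FISTA-type iteration for a composite problem (here a toy lasso: smooth least squares plus an l1 term), not the paper's exact algorithm; the extrapolation rule alpha_k = (k-1)/(k+3), the vanishing Tikhonov coefficient eps_k, the perturbation magnitudes, and all function names are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_fb(A, b, lam, n_iter=2000, eps0=0.0, seed=0):
    """Sketch of a perturbed inertial forward-backward method for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, with:
      - Nesterov-style extrapolation coefficients alpha_k = (k-1)/(k+3),
      - an optional Tikhonov term eps_k * y with eps_k -> 0,
      - small summable perturbations e_k added to each iterate."""
    rng = np.random.default_rng(seed)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = 1.0 / L                            # step size
    x_prev = x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        alpha_k = (k - 1) / (k + 3)        # inertial/extrapolation coefficient
        y = x + alpha_k * (x - x_prev)     # inertial (extrapolation) step
        eps_k = eps0 / (k + 1)             # vanishing Tikhonov coefficient
        grad = A.T @ (A @ y - b) + eps_k * y
        e_k = rng.normal(scale=1e-8 / k**2, size=x.shape)  # summable errors
        x_prev, x = x, soft_threshold(y - s * grad, s * lam) + e_k
    return x
```

With eps0 = 0 and zero perturbations this reduces to plain FISTA; the point of the sketch is that the inertial step, the error term e_k, and the Tikhonov term occupy the three slots whose interplay the convergence analysis addresses.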

Keywords

Structured convex optimization · Inertial forward–backward algorithms · Accelerated Nesterov method · FISTA · Perturbations · Tikhonov regularization

Mathematics Subject Classification

49M37 · 65K05 · 90C25

References

  1. Attouch, H., Cabot, A.: Convergence rates of inertial forward–backward algorithms. SIAM J. Optim. 28(1), 849–874 (2018)
  2. Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Sov. Math. Dokl. 27, 372–376 (1983)
  3. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization, vol. 87. Kluwer Academic Publishers, Boston (2004)
  4. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
  5. Bauschke, H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, Berlin (2011)
  6. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
  7. Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1, 123–231 (2013)
  8. Polyak, B.T.: Introduction to Optimization. Optimization Software, New York (1987)
  9. Su, W., Boyd, S., Candès, E.J.: A differential equation for modeling Nesterov’s accelerated gradient method: theory and insights. J. Mach. Learn. Res. 17, 1–43 (2016)
  10. Chambolle, A., Dossal, Ch.: On the convergence of the iterates of the fast iterative shrinkage/thresholding algorithm. J. Optim. Theory Appl. 166, 968–982 (2015)
  11. Kim, D., Fessler, J.A.: Optimized first-order methods for smooth convex minimization. Math. Program. 159, 81–107 (2016)
  12. Liang, J., Fadili, M.J., Peyré, G.: Activity identification and local linear convergence of forward–backward-type methods. SIAM J. Optim. 27(1), 408–437 (2017)
  13. Lorenz, D.A., Pock, T.: An inertial forward–backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)
  14. Villa, S., Salzo, S., Baldassarre, L., Verri, A.: Accelerated and inexact forward–backward algorithms. SIAM J. Optim. 23, 1607–1633 (2013)
  15. Schmidt, M., Le Roux, N., Bach, F.: Convergence rates of inexact proximal-gradient methods for convex optimization. In: NIPS’11, 25th Annual Conference on Neural Information Processing Systems, December 2011, Granada, Spain. HAL inria-00618152v3 (2011)
  16. Aujol, J.-F., Dossal, Ch.: Stability of over-relaxations for the forward–backward algorithm, application to FISTA. SIAM J. Optim. 25, 2408–2433 (2015)
  17. Attouch, H., Chbani, Z., Peypouquet, J., Redont, P.: Fast convergence of inertial dynamics and algorithms with asymptotic vanishing damping. Math. Program. Ser. B 168, 123–175 (2018)
  18. Fiacco, A., McCormick, G.: Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Wiley, Hoboken (1968)
  19. Cominetti, R.: Coupling the proximal point algorithm with approximation methods. J. Optim. Theory Appl. 95, 581–600 (1997)
  20. Attouch, H., Czarnecki, M.-O., Peypouquet, J.: Prox-penalization and splitting methods for constrained variational problems. SIAM J. Optim. 21, 149–173 (2011)
  21. Attouch, H., Czarnecki, M.-O., Peypouquet, J.: Coupling forward–backward with penalty schemes and parallel splitting for constrained variational inequalities. SIAM J. Optim. 21, 1251–1274 (2011)
  22. Bot, R.I., Csetnek, E.R.: Second order forward–backward dynamical systems for monotone inclusion problems. SIAM J. Control Optim. 54, 1423–1443 (2016)
  23. Cabot, A.: Proximal point algorithm controlled by a slowly vanishing term: applications to hierarchical minimization. SIAM J. Optim. 15, 555–572 (2005)
  24. Hirstoaga, S.A.: Approximation et résolution de problèmes d’équilibre, de point fixe et d’inclusion monotone. PhD thesis, Paris VI. HAL Id: tel-00137228. https://tel.archives-ouvertes.fr/tel-00137228 (2006)
  25. Attouch, H., Czarnecki, M.-O.: Asymptotic control and stabilization of nonlinear oscillators with non-isolated equilibria. J. Differ. Equ. 179, 278–310 (2002)
  26. Attouch, H., Chbani, Z., Riahi, H.: Combining fast inertial dynamics for convex optimization with Tikhonov regularization. J. Math. Anal. Appl. 457, 1065–1094 (2018)
  27. Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Bauschke, H., et al. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer Optimization and Its Applications, vol. 49, pp. 185–212. Springer, New York (2011)
  28. Lemaire, B.: The proximal algorithm. In: Penot, J.P. (ed.) New Methods in Optimization and Their Industrial Uses. Int. Ser. Numer. Math., vol. 87, pp. 73–89. Birkhäuser, Basel (1989)
  29. Peypouquet, J.: Convex Optimization in Normed Spaces: Theory, Methods and Examples. Springer, Berlin (2015)
  30. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
  31. Attouch, H., Peypouquet, J.: The rate of convergence of Nesterov’s accelerated forward–backward method is actually faster than \(\frac{1}{k^2}\). SIAM J. Optim. 26, 1824–1834 (2016)
  32. Apidopoulos, V., Aujol, J.-F., Dossal, Ch.: Convergence rate of inertial forward–backward algorithm beyond Nesterov’s rule. HAL preprint 01551873 (2017)
  33. Attouch, H., Chbani, Z., Riahi, H.: Rate of convergence of the Nesterov accelerated gradient method in the subcritical case \(\alpha \le 3\). ESAIM Control Optim. Calc. Var. (2017). https://doi.org/10.1051/cocv/2017083 (forthcoming)
  34. Alvarez, F.: On the minimizing property of a second-order dissipative system in Hilbert spaces. SIAM J. Control Optim. 38, 1102–1119 (2000)
  35. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)
  36. Cabot, A., Engler, H., Gadat, S.: On the long time behavior of second order differential equations with asymptotically small dissipation. Trans. Am. Math. Soc. 361, 5983–6017 (2009)
  37. Cabot, A., Engler, H., Gadat, S.: Second order differential equations with asymptotically small dissipation and piecewise flat potentials. Electron. J. Differ. Equ. 17, 33–38 (2009)
  38. Attouch, H., Cabot, A.: Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity. J. Differ. Equ. 263, 5412–5458 (2017)
  39. Attouch, H.: Viscosity solutions of minimization problems. SIAM J. Optim. 6, 769–806 (1996)
  40. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Département de Mathématiques, IMAG, UMR CNRS 5149, Univ. Montpellier, Montpellier Cedex 5, France
  2. Institut de Mathématiques de Bourgogne, UMR 5584, CNRS, Univ. Bourgogne Franche-Comté, Dijon, France
  3. Faculty of Sciences Semlalia, Mathematics, Cadi Ayyad University, Marrakech, Morocco