
AFSI: Adaptive Restart for Fast Semi-Iterative Schemes for Convex Optimisation

  • Jón Arnar Tómasson
  • Peter Ochs
  • Joachim Weickert
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11269)

Abstract

Smooth optimisation problems arise in many fields, including image processing, and fast methods for solving them have clear benefits. Accelerated gradient methods are widely and successfully used for this purpose: they speed up standard gradient-based schemes by means of extrapolation. Unfortunately, most acceleration strategies are generic in the sense that they ignore specific information about the objective function. In this paper, we incorporate adaptive restarting into a recently proposed efficient acceleration strategy coined the Fast Semi-Iterative (FSI) scheme. Our analysis shows clear advantages of adaptive restarting, both in terms of a theoretical convergence rate guarantee and in state-of-the-art performance on a challenging image processing task.
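
To make the idea of extrapolation with adaptive restarting concrete, the following is a minimal Python sketch of a Nesterov-type accelerated gradient method with a gradient-based restart test in the spirit of O'Donoghue and Candès. It is an illustration only, not the FSI/AFSI scheme analysed in the paper; the names and parameters (grad_f, step, n_iter) are assumptions made for this example.

```python
import numpy as np

def accelerated_gradient_with_restart(grad_f, x0, step, n_iter=500):
    # Illustrative sketch: Nesterov-style extrapolation with a
    # gradient-based adaptive restart (O'Donoghue & Candes), not the
    # FSI/AFSI scheme of the paper.
    x_prev = np.asarray(x0, dtype=float).copy()
    x = x_prev.copy()
    t = 1.0                                   # extrapolation parameter
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        beta = (t - 1.0) / t_next             # momentum weight
        y = x + beta * (x - x_prev)           # extrapolated point
        g = grad_f(y)
        x_next = y - step * g                 # gradient step from y

        # Adaptive restart: if the new step points against the gradient,
        # the momentum is no longer helping, so discard it.
        if np.dot(g, x_next - x) > 0.0:
            t_next = 1.0                      # reset the momentum
            x_next = x - step * grad_f(x)     # plain gradient step instead

        x_prev, x, t = x, x_next, t_next
    return x

# Example: minimise the strongly convex quadratic 0.5 * x^T A x.
A = np.diag([1.0, 10.0, 100.0])
x_star = accelerated_gradient_with_restart(lambda x: A @ x,
                                            x0=np.ones(3),
                                            step=1.0 / 100.0,
                                            n_iter=300)
print(x_star)  # close to the minimiser at the origin
```

The restart discards the momentum whenever the accelerated step starts to oppose the current gradient direction, a common heuristic for suppressing the oscillations that plain acceleration can exhibit near the minimiser.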

Notes

Acknowledgements

Our research has been partially funded by the Cluster of Excellence on Multimodal Computing and Interaction within the Excellence Initiative of the German Research Foundation (DFG) and by the ERC Advanced Grant INCOVID. This is gratefully acknowledged.

Supplementary material

Supplementary material 1 (PDF, 201 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Jón Arnar Tómasson (1)
  • Peter Ochs (2)
  • Joachim Weickert (1)
  1. Mathematical Image Analysis Group, Faculty of Mathematics and Computer Science, Campus E1.7, Saarland University, Saarbrücken, Germany
  2. Mathematical Optimization Group, Faculty of Mathematics and Computer Science, Campus E1.7, Saarland University, Saarbrücken, Germany
