
Large-Scale and Distributed Optimization: An Introduction

Chapter in: Large-Scale and Distributed Optimization

Part of the book series: Lecture Notes in Mathematics (LNM, volume 2227)

Abstract

The recent explosion in the size and complexity of datasets, together with the increased availability of computational resources, has led us to what is sometimes called the big data era. In many big data fields, mathematical optimization has over the last decade emerged as a vital tool for extracting information from datasets and for creating predictors for unseen data. The large dimension of these datasets, and the often parallel, distributed, or decentralized computational structures used for storing and handling the data, place new requirements on the optimization algorithms that solve these problems. This has led to a dramatic shift in focus in the optimization community over this period. Much effort has gone into developing algorithms that scale favorably with problem dimension and that can exploit structure both in the problem and in the computational environment. This is also the main focus of this book, which comprises individual chapters that each contribute to this development in different ways. In this introductory chapter, we describe the individual contributions, relate them to each other, and put them into a wider context.
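
As a concrete illustration of the kind of structure-exploiting first-order method the abstract alludes to, the following sketch applies the proximal gradient (forward-backward splitting) iteration to a lasso problem. It is not taken from the chapter; the problem instance, step-size choice, and function names are illustrative assumptions, shown only to indicate how splitting the objective into a smooth and a nonsmooth part leads to cheap, dimension-friendly iterations.

# Illustrative sketch (not from the chapter): proximal gradient descent
# (forward-backward splitting) applied to the lasso problem
#   minimize (1/2) * ||A x - b||^2 + lam * ||x||_1,
# a standard example of a first-order method that exploits the
# smooth-plus-nonsmooth structure of the objective.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    """Run a fixed number of forward-backward iterations."""
    _, n = A.shape
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # forward (gradient) step on the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on the l1 part
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    x_hat = proximal_gradient(A, b, lam=0.1)
    print("nonzero entries recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))

Each iteration only requires matrix-vector products and a componentwise thresholding, which is what makes this family of methods attractive for the large problem dimensions and distributed data layouts discussed in the book.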



Author information

Correspondence to Pontus Giselsson.


Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Giselsson, P., Rantzer, A. (2018). Large-Scale and Distributed Optimization: An Introduction. In: Giselsson, P., Rantzer, A. (eds) Large-Scale and Distributed Optimization. Lecture Notes in Mathematics, vol 2227. Springer, Cham. https://doi.org/10.1007/978-3-319-97478-1_1
