
Computational Optimization and Applications, Volume 72, Issue 1, pp 179–213

Convergence of the augmented decomposition algorithm

  • Hongsheng Liu
  • Shu Lu

Abstract

We study the convergence of the augmented decomposition algorithm (ADA) proposed in Rockafellar et al. (Problem decomposition in block-separable convex optimization: ideas old and new, https://www.washington.edu/, 2017) for solving multi-block separable convex minimization problems subject to linear constraints. We show that the global convergence rate of the exact ADA is \(o(1/\nu)\) under the assumption that a saddle point exists. We then consider an inexact augmented decomposition algorithm and establish global and local convergence results under mild assumptions, by providing a stability result for the maximal monotone operator \(\mathcal{T}\) associated with the perturbation, from both primal and dual perspectives. This result implies local linear convergence of the inexact ADA for applications such as the lasso, total variation reconstruction, the exchange problem, and other \(\ell_1\)-regularized problems from statistics, machine learning, and engineering.
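
For orientation, the problem class treated by the ADA is multi-block separable convex minimization with linear coupling constraints. The display below is an illustrative sketch of that setting, using the generic augmented Lagrangian for linearly constrained problems; the symbols \(f_i\), \(A_i\), \(b\), \(m\), and the penalty parameter \(c\) are notational assumptions made here for exposition, not a restatement of the formulation or updates in Rockafellar et al. (2017):

\[
\min_{x_1,\ldots,x_m}\ \sum_{i=1}^{m} f_i(x_i)
\quad\text{subject to}\quad \sum_{i=1}^{m} A_i x_i = b,
\]
with each \(f_i\) closed proper convex, and
\[
\mathcal{L}_c(x,y) \;=\; \sum_{i=1}^{m} f_i(x_i)
\;+\; \Big\langle y,\ \sum_{i=1}^{m} A_i x_i - b \Big\rangle
\;+\; \frac{c}{2}\,\Big\| \sum_{i=1}^{m} A_i x_i - b \Big\|^2 .
\]
The quadratic penalty is the only term coupling the blocks \(x_1,\ldots,x_m\); decomposition methods such as the ADA handle it by splitting or approximating this term so that the blocks can be updated separately, which is what enables distributed implementations.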

Keywords

Separable convex minimization · Convergence rate · Augmented decomposition algorithm · Distributed computing

Acknowledgements

The authors are grateful to Professor R. Tyrrell Rockafellar for suggestions on this research project. Shu Lu’s research is supported by the National Science Foundation under Grant DMS-1407241.

References

  1. Bai, J., Zhang, H., Li, J.: A parameterized proximal point algorithm for separable convex optimization. Optim. Lett. 12(7), 1–20 (2017)
  2. Beck, A., Nedic, A., Ozdaglar, A., Teboulle, M.: An \(O(1/k)\) gradient method for network resource allocation problems. IEEE Trans. Control Netw. Syst. 1(1), 64–73 (2014)
  3. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
  4. Chang, T.H., Nedic, A., Scaglione, A.: Distributed constrained optimization by consensus-based primal-dual perturbation method. IEEE Trans. Autom. Control 59(6), 1524–1538 (2014)
  5. Chatzipanagiotis, N., Dentcheva, D., Zavlanos, M.M.: An augmented Lagrangian method for distributed optimization. Math. Program. 152(1–2), 405–434 (2015)
  6. Chen, C., He, B., Ye, Y., Yuan, X.: The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Math. Program. 155(1–2), 57–79 (2016)
  7. Chen, G., Teboulle, M.: A proximal-based decomposition method for convex minimization problems. Math. Program. 64(1–3), 81–101 (1994)
  8. Cui, Y., Sun, D., Toh, K.C.: On the R-superlinear convergence of the KKT residues generated by the augmented Lagrangian method for convex composite conic programming (2017). arXiv preprint arXiv:1706.08800
  9. Deng, W., Lai, M.J., Peng, Z., Yin, W.: Parallel multi-block ADMM with \(o(1/k)\) convergence. J. Sci. Comput. 71(2), 712–736 (2017)
  10. Deng, W., Yin, W.: On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66(3), 889–916 (2016)
  11. Dontchev, A.L.: Implicit Functions and Solution Mappings. Springer, New York (2009)
  12. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
  13. Güler, O.: New proximal point algorithms for convex minimization. SIAM J. Optim. 2(4), 649–664 (1992)
  14. Han, D., Sun, D., Zhang, L.: Linear rate convergence of the alternating direction method of multipliers for convex composite quadratic and semi-definite programming (2015). arXiv preprint arXiv:1508.02134
  15. Han, D., Yuan, X.: A note on the alternating direction method of multipliers. J. Optim. Theory Appl. 155(1), 227–238 (2012)
  16. He, B., Liao, L.Z., Han, D., Yang, H.: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 92(1), 103–118 (2002)
  17. He, B., Yuan, X.: On the acceleration of augmented Lagrangian method for linearly constrained optimization. Optimization Online 3 (2010)
  18. He, B., Yuan, X.: On the \(O(1/n)\) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50(2), 700–709 (2012)
  19. He, B., Yuan, X.: On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. Numer. Math. 130(3), 567–577 (2015)
  20. Hoffman, A.J.: On approximate solutions of systems of linear inequalities. In: Selected Papers of Alan J. Hoffman: With Commentary, pp. 174–176 (2003)
  21. Hong, M., Luo, Z.Q.: On the linear convergence of the alternating direction method of multipliers. Math. Program. 162(1–2), 165–199 (2017)
  22. Li, X., Sun, D., Toh, K.C.: A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems (2016). arXiv preprint arXiv:1607.05428
  23. Liu, Y.J., Sun, D., Toh, K.C.: An implementable proximal point algorithmic framework for nuclear norm minimization. Math. Program. 133(1), 399–436 (2012)
  24. Luo, Z.Q., Tseng, P.: On the convergence rate of dual ascent methods for linearly constrained convex minimization. Math. Oper. Res. 18(4), 846–867 (1993)
  25. Luque, F.J.: Asymptotic convergence analysis of the proximal point algorithm. SIAM J. Control Optim. 22(2), 277–293 (1984)
  26. Ma, S.: Alternating proximal gradient method for convex minimization. J. Sci. Comput. 68(2), 546–572 (2016)
  27. Mulvey, J.M., Ruszczyński, A.: A diagonal quadratic approximation method for large scale linear programs. Oper. Res. Lett. 12(4), 205–215 (1992)
  28. Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Sov. Math. Dokl. 27(2), 372–376 (1983)
  29. Robinson, S.M.: Some continuity properties of polyhedral multifunctions. In: König, H., Korte, B., Ritter, K. (eds.) Mathematical Programming at Oberwolfach, pp. 206–214. Springer, Berlin (1981)
  30. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)
  31. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)
  32. Rockafellar, R.T.: Problem decomposition in block-separable convex optimization: ideas old and new (2017). https://www.washington.edu/
  33. Shefi, R., Teboulle, M.: Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization. SIAM J. Optim. 24(1), 269–297 (2014)
  34. Spingarn, J.E.: Applications of the method of partial inverses to convex programming: decomposition. Math. Program. 32(2), 199–223 (1985)
  35. Tseng, P.: Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM J. Control Optim. 29(1), 119–138 (1991)
  36. Wang, X., Hong, M., Ma, S., Luo, Z.Q.: Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers (2013). arXiv preprint arXiv:1308.5294
  37. Wright, S.J.: Accelerated block-coordinate relaxation for regularized optimization. SIAM J. Optim. 22(1), 159–186 (2012)
  38. Xiao, L., Boyd, S.: Optimal scaling of a gradient method for distributed resource allocation. J. Optim. Theory Appl. 129(3), 469–488 (2006)
  39. You, K., Xie, L.: Network topology and communication data rate for consensusability of discrete-time multi-agent systems. IEEE Trans. Autom. Control 56(10), 2262–2275 (2011)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Statistics and Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, USA
