Block-Wise Alternating Direction Method of Multipliers with Gaussian Back Substitution for Multiple-Block Convex Programming

Chapter in: Splitting Algorithms, Modern Operator Theory, and Applications

Abstract

We consider the linearly constrained convex minimization model whose separable objective function is the sum of m functions without coupled variables, and discuss how to design an efficient algorithm based on the fundamental technique of splitting the augmented Lagrangian method (ALM). Our focus is the specific big-data scenario where m is huge. As a pretreatment of the original data, the m functions in the objective and the corresponding m variables are regrouped into t subgroups, where t is a manageable number (usually t ≥ 3 but much smaller than m). Some existing splitting methods in the literature are applicable to the regrouped model with t blocks of functions and variables. We concentrate on the alternating direction method of multipliers with Gaussian back substitution (ADMM-GBS), whose efficiency and scalability have been well verified in the literature. Applying the ADMM-GBS to the t-block regrouped model yields an algorithm we call the block-wise ADMM-GBS. Each of the resulting ADMM-GBS subproblems may still require minimizing more than one function with coupled variables; to alleviate this difficulty, we suggest decomposing these subproblems further while regularizing the decomposed subproblems with proximal terms to ensure convergence. After this further decomposition, each subproblem requires handling only one function from the original objective plus a simple quadratic term, which can be very easy for many concrete applications in which the objective functions have specific properties. Moreover, these decomposed subproblems can be solved in parallel, making it possible to handle big data on highly capable computing infrastructures. Consequently, a splitting version of the block-wise ADMM-GBS is proposed for this particular big-data scenario.
The new algorithm is well suited to a centralized-distributed computing system: the decomposed subproblems within each block can be computed in parallel by a distributed-computing infrastructure, while the blocks are updated by a centralized-computing station. We prove the convergence of the new algorithm and establish its worst-case convergence rate measured by the iteration complexity. Two refined versions of the algorithm, with iteratively calculated step sizes and with linearized subproblems, respectively, are also proposed.
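To give a concrete sense of the kind of scheme the abstract describes, the following is a minimal illustrative sketch, not the authors' exact algorithm: a proximal Jacobian-type splitting of the augmented Lagrangian for a toy separable problem min Σᵢ ½‖xᵢ − cᵢ‖² subject to Σᵢ Aᵢxᵢ = b. Each regularized subproblem involves only one function plus a quadratic proximal term, so it has a closed form here, and all blocks can be updated in parallel. The parameters beta (penalty), tau (proximal), and gamma (dual step size) are generic names chosen for this illustration.

```python
# Hypothetical sketch of a proximal Jacobian splitting of the ALM
# (illustrative only; not the block-wise ADMM-GBS of the chapter).
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 3, 4, 5                    # number of blocks, block dim, constraint dim
A = [rng.standard_normal((p, n)) for _ in range(m)]
c = [rng.standard_normal(n) for _ in range(m)]
b = rng.standard_normal(p)

beta, tau, gamma = 1.0, 50.0, 0.5    # penalty, proximal, dual step-size parameters
x = [np.zeros(n) for _ in range(m)]
lam = np.zeros(p)

for _ in range(5000):
    Ax = sum(A[i] @ x[i] for i in range(m))
    x_new = []
    for i in range(m):                      # each block solvable in parallel
        # constraint residual excluding block i, shifted by the multiplier
        r = Ax - A[i] @ x[i] - b - lam / beta
        # closed form of: min 0.5||x - c_i||^2 + (beta/2)||A_i x + r||^2
        #                     + (tau/2)||x - x_i^k||^2
        H = (1.0 + tau) * np.eye(n) + beta * A[i].T @ A[i]
        x_new.append(np.linalg.solve(H, c[i] + tau * x[i] - beta * A[i].T @ r))
    x = x_new                               # Jacobian update: all blocks at once
    lam = lam - gamma * beta * (sum(A[i] @ x[i] for i in range(m)) - b)

res = np.linalg.norm(sum(A[i] @ x[i] for i in range(m)) - b)
print(res)
```

The proximal term (tau/2)‖x − xᵢᵏ‖² plays the stabilizing role described in the abstract: without it, a fully parallel (Jacobian) decomposition of the ALM subproblems need not converge.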



Acknowledgements

Xiaoling Fu was supported by the Fundamental Research Funds for the Central Universities (2242019K40168) and partly by the Natural Science Foundation of Jiangsu Province (Grant BK20181258). Bingsheng He was supported by NSFC Grants 11871029 and 11471156. Xiangfeng Wang was supported by NSFC Grants 61672231, 11871279, and 11971090. Xiaoming Yuan was supported by the General Research Fund from the Hong Kong Research Grants Council (12313516).

Author information

Correspondence to Xiaoming Yuan.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Fu, X., He, B., Wang, X., Yuan, X. (2019). Block-Wise Alternating Direction Method of Multipliers with Gaussian Back Substitution for Multiple-Block Convex Programming. In: Bauschke, H., Burachik, R., Luke, D. (eds) Splitting Algorithms, Modern Operator Theory, and Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-25939-6_8
