Abstract
We consider the linearly constrained convex minimization model with a separable objective function that is the sum of m functions without coupled variables, and discuss how to design an efficient algorithm based on the fundamental technique of splitting the augmented Lagrangian method (ALM). Our focus is the specific big-data scenario where m is huge. As a pretreatment of the original data, we regroup the m functions in the objective and the corresponding m variables into t subgroups, where t is a manageable number (usually t ≥ 3 but much smaller than m). Some existing splitting methods in the literature are applicable to the regrouped model with t blocks of functions and variables. We concentrate on the alternating direction method of multipliers with Gaussian back substitution (ADMM-GBS), whose efficiency and scalability have been well verified in the literature. Applying the ADMM-GBS to the t-block regrouped model yields what we call the block-wise ADMM-GBS. To alleviate the difficulty of the resulting ADMM-GBS subproblems, each of which may still require minimizing more than one function with coupled variables, we suggest decomposing these subproblems further, while regularizing the decomposed subproblems with proximal terms to ensure convergence. With this further decomposition, each resulting subproblem requires handling only one function in the original objective plus a simple quadratic term; it may therefore be very easy to solve in many concrete applications where the functions in the objective have specific properties. Moreover, these further decomposed subproblems can be solved in parallel, making it possible to handle big data on highly capable computing infrastructures. Consequently, a splitting version of the block-wise ADMM-GBS is proposed for this particular big-data scenario.
The implementation of this new algorithm is suitable for a centralized-distributed computing system, where the decomposed subproblems of each block are computed in parallel by a distributed-computing infrastructure and the blocks are updated by a centralized-computing station. For the new algorithm, we prove its convergence and establish its worst-case convergence rate measured by the iteration complexity. Two refined versions of this new algorithm, one with iteratively calculated step sizes and one with linearized subproblems, are also proposed.
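To make the splitting idea concrete, the following is a minimal sketch, not the full block-wise ADMM-GBS of this chapter: it illustrates only the parallel, proximally regularized decomposition of the ALM subproblems for a toy problem min Σᵢ ½‖xᵢ − cᵢ‖² subject to Σᵢ Aᵢxᵢ = b. All data (Aᵢ, cᵢ, b), the penalty β, the dual step size γ, and the proximal coefficients τᵢ are illustrative assumptions; with this quadratic objective each regularized subproblem has a closed-form solution, and the per-block updates are independent and hence parallelizable.

```python
import numpy as np

rng = np.random.default_rng(0)
m_blocks, n, p = 3, 4, 5              # 3 blocks, each x_i in R^4, constraint in R^5
A = [rng.standard_normal((p, n)) for _ in range(m_blocks)]
c = [rng.standard_normal(n) for _ in range(m_blocks)]
b = rng.standard_normal(p)

beta = 1.0                            # augmented Lagrangian penalty parameter
gamma = 1.0                           # damping factor for the dual update
# proximal coefficients chosen large enough for the parallel (Jacobian-type)
# updates to converge; ||A_i||_2^2 is the squared spectral norm
tau = [2.0 * beta * (m_blocks - 1) * np.linalg.norm(Ai, 2) ** 2 for Ai in A]

x = [np.zeros(n) for _ in range(m_blocks)]
lam = np.zeros(p)                     # Lagrange multiplier

for _ in range(5000):
    Ax = sum(A[i] @ x[i] for i in range(m_blocks))
    x_new = []
    for i in range(m_blocks):         # independent updates -> parallelizable
        r_i = Ax - A[i] @ x[i] - b    # constraint residual excluding block i
        # closed-form minimizer of
        #   f_i(x) - lam^T A_i x + (beta/2)||A_i x + r_i||^2
        #   + (tau_i/2)||x - x_i^k||^2   with f_i(x) = 0.5||x - c_i||^2
        M = (1.0 + tau[i]) * np.eye(n) + beta * A[i].T @ A[i]
        rhs = c[i] + A[i].T @ lam - beta * A[i].T @ r_i + tau[i] * x[i]
        x_new.append(np.linalg.solve(M, rhs))
    x = x_new
    lam = lam - gamma * beta * (sum(A[i] @ x[i] for i in range(m_blocks)) - b)

feas = np.linalg.norm(sum(A[i] @ x[i] for i in range(m_blocks)) - b)
print(f"feasibility residual: {feas:.2e}")
```

In the two-level scheme described above, updates like these would run within each of the t blocks on distributed workers, while a centralized station performs the block-level Gaussian back substitution and the multiplier update.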
Acknowledgements
Xiaoling Fu was supported by the Fundamental Research Funds for the Central Universities (Grant 2242019K40168) and partly supported by the Natural Science Foundation of Jiangsu Province (Grant BK20181258). Bingsheng He was supported by NSFC Grants 11871029 and 11471156. Xiangfeng Wang was supported by NSFC Grants 61672231, 11871279 and 11971090. Xiaoming Yuan was supported by the General Research Fund from the Hong Kong Research Grants Council (Grant 12313516).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this chapter
Fu, X., He, B., Wang, X., Yuan, X. (2019). Block-Wise Alternating Direction Method of Multipliers with Gaussian Back Substitution for Multiple-Block Convex Programming. In: Bauschke, H., Burachik, R., Luke, D. (eds) Splitting Algorithms, Modern Operator Theory, and Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-25939-6_8
DOI: https://doi.org/10.1007/978-3-030-25939-6_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-25938-9
Online ISBN: 978-3-030-25939-6
eBook Packages: Mathematics and Statistics (R0)