Abstract
We suggest a majorization-minimization method for solving nonconvex minimization problems. The method is based on minimizing at each iterate a properly constructed consistent majorizer of the objective function. We describe a variety of classes of functions for which such a construction is possible. We introduce an inexact variant of the method, in which only approximate minimization of the consistent majorizer is performed at each iteration. Both the exact and the inexact algorithms are shown to be descent methods whose accumulation points have a property which is stronger than standard stationarity. We give examples of cases in which the exact method can be applied. Finally, we show that the inexact method can be applied to a specific problem, called sparse source localization, by utilizing a fast optimization method on a smooth convex dual of its subproblems.
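As a rough illustration only (this is not the chapter's consistent-majorizer construction, and the helper names and toy objective below are ours), the following Python sketch shows the generic majorization-minimization template: at each iterate, an upper-bounding surrogate that touches the objective at the current point is minimized, which automatically yields a descent method. Here the surrogate is the standard quadratic upper bound available for any function with an L-Lipschitz gradient, whose exact minimizer is a gradient step.

import numpy as np

def mm_descent(grad_f, L, x0, max_iter=200, tol=1e-8):
    """Generic MM loop: repeatedly minimize a majorizer of f built at the current iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Exact minimizer of the surrogate y -> f(x) + <grad_f(x), y - x> + (L/2)*||y - x||^2,
        # i.e. a gradient step with step size 1/L.
        x_next = x - grad_f(x) / L
        # Monotone decrease is automatic: f(x_next) <= h(x_next, x) <= h(x, x) = f(x).
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x = x_next
    return x

# Toy usage on a smooth nonconvex objective f(x) = ||x||^2 + cos(sum(x)) on R^3;
# its gradient is Lipschitz with constant at most 2 + 3 = 5.
f = lambda x: np.sum(x ** 2) + np.cos(np.sum(x))
grad_f = lambda x: 2.0 * x - np.sin(np.sum(x)) * np.ones_like(x)
x_final = mm_descent(grad_f, L=5.0, x0=np.array([1.0, -2.0, 0.5]))
print(x_final, f(x_final))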
Notes
1. Which is the same as saying that the function x ↦ h(y, x) is upper semicontinuous.
2. A monomial is called pure if there exists an index j such that p_{i,k} = 0 for all k ≠ j, that is, if it depends on at most one variable.
References
T. Achterberg, T. Berthold, T. Koch, K. Wolter, Constraint integer programming: a new approach to integrate CP and MIP. ZIB-Report 08-01 (2008)
A. Auslender, M. Teboulle, Interior gradient and proximal methods for convex and conic optimization. SIAM J. Optim. 16(3), 697–725 (2006)
M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Algorithms, 3rd edn. (Wiley-Interscience [Wiley], Hoboken, 2006), p. xvi+853. ISBN: 978-0-471-48600-8; 0-471-48600-0
A. Beck, First-Order Methods in Optimization. MOS-SIAM Series on Optimization (Society for Industrial and Applied Mathematics, Philadelphia, 2017)
A. Beck, Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB. MOS-SIAM Series on Optimization, vol. 19 (Society for Industrial and Applied Mathematics, Philadelphia, 2014), p. xii+282. ISBN: 978-1-611973-64-8
A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2(1), 183–202 (2009)
A. Ben-Tal, A. Nemirovski, Lectures on Modern Convex Optimization (Society for Industrial and Applied Mathematics, Philadelphia, 2001)
D.P. Bertsekas, Nonlinear Programming. Athena Scientific Optimization and Computation Series, 2nd edn. (Athena Scientific, Belmont, 1999), p. xiv+777. ISBN: 1-886529-00-0
J. Bolte, Z. Chen, E. Pauwels, The multiproximal linearization method for convex composite problems (2017, Preprint)
C. Cartis, N.I.M. Gould, P.L. Toint, On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming. SIAM J. Optim. 21(4), 1721–1739 (2011). https://epubs.siam.org/doi/abs/10.1137/11082381X
D. Drusvyatskiy, C. Paquette, Efficiency of minimizing compositions of convex functions and smooth maps (2017, Preprint)
D.R. Hunter, K. Lange, A tutorial on MM algorithms. Am. Stat. 58(1), 30–37 (2004). ISSN: 0003-1305
K. Lange, MM Optimization Algorithms (Society for Industrial and Applied Mathematics, Philadelphia, 2016), p. ix+223. ISBN: 978-1-611974-39-3
A.S. Lewis, S.J. Wright, A proximal method for composite minimization. Math. Program. 158(1–2, Ser A), 501–546 (2016). ISSN: 0025-5610
Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization, vol. 87 (Kluwer Academic Publishers, Boston, 2004)
Y. Nesterov, Modified Gauss-Newton scheme with worst case guarantees for global performance. Optim. Methods Softw. 22(3), 469–483 (2007)
Y. Nesterov, Gradient methods for minimizing composite functions. Math. Program. 140(1, Ser B), 125–161 (2013)
M. Sion, On general minimax theorems. Pac. J. Math. 8(1), 171–176 (1958)
P. Tseng, Approximation accuracy, gradient methods, and error bound for structured convex optimization. Math. Program. 125(2, Ser B), 263–295 (2010). ISSN: 0025-5610
Acknowledgements
The research of Amir Beck was partially supported by the Israel Science Foundation Grant 1821/16.
Appendix: A Proof of Lemma 2
We provide a proof of Lemma 2. The necessity part is proved in essentially the same way as in [4, Theorem 9.2], which treats the case in which F is continuously differentiable.
Proof
Let x∗ be a local minimizer of problem (13.2). Assume to the contrary that x∗ is not a stationary point of (13.2). Then, recalling that F is directionally differentiable (dd), there exists y ∈ dom(F) such that F′(x∗; y − x∗) < 0. By the definition of the directional derivative, it follows that there exists a number 0 < δ < 1 such that F(x∗ + t(y − x∗)) < F(x∗) for all 0 < t < δ. Since dom(F) is convex (as F is dd), we have x∗ + t(y − x∗) = (1 − t)x∗ + ty ∈ dom(F) for all 0 < t < δ, contradicting the local minimality of x∗.
As for the sufficiency part when F is convex, let x∗ be a stationary point of (13.2), and assume to the contrary that x∗ is not a global minimizer of (13.2). Then there exists y ∈ dom(F) such that F(y) < F(x∗). By the stationarity of x∗ and the convexity of F, we obtain

0 ≤ F′(x∗; y − x∗) ≤ F(y) − F(x∗) < 0,

which is a contradiction. □
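For the reader's convenience, the two nontrivial relations in the displayed chain are standard: the leftmost inequality is exactly the stationarity of x∗ (namely F′(x∗; y − x∗) ≥ 0 for every y ∈ dom(F), which is also the notion of stationarity used in the necessity part above), and the middle inequality follows from the monotonicity of the difference quotients of the convex function F:

$$
F'(x^*; y - x^*) \;=\; \lim_{t \downarrow 0} \frac{F\bigl(x^* + t(y - x^*)\bigr) - F(x^*)}{t}
\;\le\; \frac{F\bigl(x^* + (y - x^*)\bigr) - F(x^*)}{1} \;=\; F(y) - F(x^*).
$$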
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this chapter
Beck, A., Pan, D. (2018). Convergence of an Inexact Majorization-Minimization Method for Solving a Class of Composite Optimization Problems. In: Giselsson, P., Rantzer, A. (eds) Large-Scale and Distributed Optimization. Lecture Notes in Mathematics, vol 2227. Springer, Cham. https://doi.org/10.1007/978-3-319-97478-1_13
Print ISBN: 978-3-319-97477-4
Online ISBN: 978-3-319-97478-1