Abstract
In this paper, we propose an iterative algorithm for minimizing a convex function subject to linear equality and bound constraints. The algorithm generalizes Wolfe's reduced gradient algorithm while avoiding any differentiability hypothesis. The set of variables is partitioned into basic and non-basic variables, and, in order to obtain a method with the descent property for the objective function, the subdifferential is approximated by a bundle containing a finite number of elements. Each of these vectors is reduced, and the search direction for the non-basic variables is obtained by solving an appropriate quadratic problem. A descent direction is then constructed in the whole space and an inexact line search is performed along it. We discuss the problem of changing the basis and prove convergence of the algorithm under a very weak condition on the basis choice, due to Huard. Finally, we report numerical results obtained with the algorithm on a number of test problems taken from the literature.
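To make the two core steps of the abstract concrete, here is a minimal illustrative sketch, not the authors' algorithm: it reduces each bundle subgradient through the basis and then finds the search direction for the non-basic variables as the shortest vector in the convex hull of the reduced subgradients (the quadratic problem mentioned above). All function names are hypothetical, and the bundle QP is solved here by a crude grid search over the simplex, which is only practical for very small bundles.

```python
# Illustrative sketch (not the authors' algorithm) of a reduced-subgradient
# step for: minimize f(x) subject to A x = b, x >= 0.
# Names (reduce_subgradient, shortest_in_hull) are hypothetical.
import itertools
import numpy as np

def reduce_subgradient(g, A, basic, nonbasic):
    """Reduce a subgradient g: eliminate the basic components via the
    basis matrix B = A[:, basic], giving the reduced vector on the
    non-basic variables r = g_N - N^T B^{-T} g_B."""
    B = A[:, basic]
    N = A[:, nonbasic]
    lam = np.linalg.solve(B.T, g[basic])  # multipliers from the basic part
    return g[nonbasic] - N.T @ lam

def shortest_in_hull(vectors, steps=100):
    """Crude stand-in for the bundle QP: approximate the shortest vector
    in the convex hull of the reduced subgradients by a grid search over
    convex weights (exponential in the bundle size -- illustration only)."""
    best, best_norm = None, np.inf
    k = len(vectors)
    for w in itertools.product(range(steps + 1), repeat=k):
        if sum(w) != steps:
            continue
        v = sum((wi / steps) * vec for wi, vec in zip(w, vectors))
        n = float(np.dot(v, v))
        if n < best_norm:
            best, best_norm = v, n
    return best

if __name__ == "__main__":
    # f(x) = max(x1, x2) on the simplex x1 + x2 + x3 = 1, x >= 0,
    # at x = (0.4, 0.4, 0.2), where the bundle holds two subgradients.
    A = np.array([[1.0, 1.0, 1.0]])
    g1 = np.array([1.0, 0.0, 0.0])
    g2 = np.array([0.0, 1.0, 0.0])
    basic, nonbasic = [2], [0, 1]
    r1 = reduce_subgradient(g1, A, basic, nonbasic)
    r2 = reduce_subgradient(g2, A, basic, nonbasic)
    d_N = -shortest_in_hull([r1, r2])          # direction for non-basic vars
    # Basic component chosen so that A d = 0 keeps the iterate feasible:
    d_B = -np.linalg.solve(A[:, basic], A[:, nonbasic] @ d_N)
    print(d_N, d_B)
```

Once the direction `(d_N, d_B)` is available, the method described in the abstract would perform an inexact line search along it; that step is omitted here.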
References
J.A. Chatelon, D.W. Hearn and T.J. Lowe, “A subgradient algorithm for certain minimax and minisum problems”, SIAM Journal on Control and Optimization 20 (1982) 455–469.
Chi-Ye and M. Yueh, “A new reduced gradient method”, Scientia Sinica 22 (1979) 1099–1113.
J. Denel, “Convergence de la méthode du gradient réduit en l’absence de concavité”, Bulletin de la Direction des Etudes et Recherches, E.D.F., Série C 2 (1980) 55–62.
M. Gaudioso and M.F. Monaco, “A bundle type approach to the unconstrained minimization of convex nonsmooth functions”, Mathematical Programming 23 (1982) 216–226.
W. Gochet and Y. Smeers, “A modified reduced gradient method for a class of nondifferentiable problems”, Mathematical Programming 19 (1980) 137–154.
J.B. Hiriart-Urruty, “ε-Subdifferential calculus”, presented at the Colloquium on Convex Analysis and Optimization, Imperial College, London, 19–29, preprint of the Department of Applied Mathematics, Université de Clermont II, 1980.
P. Huard, “Convergence of the reduced gradient method”, in: O.L. Mangasarian, R.R. Meyer and S.M. Robinson, eds., Nonlinear programming 2 (Academic Press, New York, 1975) pp. 29–54.
P. Huard, “Un algorithme général de gradient réduit”, Bulletin de la Direction des Etudes et Recherches, E.D.F., Série C, 2 (1982) 91–109.
C. Lemarechal and R. Mifflin, eds., Nonsmooth optimization (Pergamon Press, New York, 1977).
C. Lemarechal, “Extensions diverses des méthodes de gradient et applications”, Thèse d’Etat, Paris IX Dauphine (Paris, 1980).
C. Lemarechal, J.J. Strodiot and A. Bihain, “On a bundle algorithm for nonsmooth optimization”, in: O.L. Mangasarian, R.R. Meyer and S.M. Robinson, eds., Nonlinear programming 4 (Academic Press, New York, 1981) pp. 245–282.
C. Lemarechal, “An extension of Davidon methods to non-differentiable problems”, Mathematical Programming Study 3 (1975) 95–109.
E.S. Levitin and B.T. Polyak, “Constrained minimization problems”, USSR Computational Mathematics and Mathematical Physics 6 (1966) 1–50.
D.G. Luenberger, Introduction to linear and non-linear programming (Academic Press, New York, 1973).
K. Madsen and H. Schjaer-Jacobsen, “Linearly constrained minimax optimization”, Mathematical Programming 14 (1978) 208–223.
G. McCormick, “Antizigzagging by bending”, Management Science 15 (1969) 315–320.
R. Mifflin, “A stable method for solving certain constrained least squares problems”, Mathematical Programming 16 (1979) 141–158.
R. Mifflin, “An algorithm for constrained optimization with semismooth functions”, Mathematics of Operations Research 2 (1977) 191–207.
H. Mokhtar-Kharroubi, “Sur quelques méthodes de gradient réduit sous contraintes linéaires”, R.A.I.R.O., Numerical Analysis 13 (1979) 176–180.
R.T. Rockafellar, Convex analysis (Princeton University Press, Princeton, NJ, 1970).
R. Saigal, “The fixed point approach to nonlinear programming”, Mathematical Programming Study 10 (1979) 142–157.
J.J. Strodiot, V.H. Nguyen and N. Heukemes, “ε-optimal solutions in nondifferentiable convex programming and some related questions”, Mathematical Programming 25 (1983) 307–328.
P. Wolfe, “A method of conjugate subgradients for minimizing nondifferentiable functions”, Mathematical Programming Study 3 (1975) 145–173.
P. Wolfe, “On the convergence of gradient methods under constraints”, Technical Report RZ-204, I.B.M. Research Center (Yorktown Heights, NY, 1966).
P. Wolfe, “The reduced gradient method”, Rand Document, 1962.
P. Wolfe, “Convergence conditions for ascent methods”, SIAM Review 11 (1969) 226–234.
Copyright information
© 1987 The Mathematical Programming Society, Inc.
Cite this chapter
Bihain, A., Nguyen, V.H., Strodiot, JJ. (1987). A reduced subgradient algorithm. In: Cornet, B., Nguyen, V.H., Vial, J.P. (eds) Nonlinear Analysis and Optimization. Mathematical Programming Studies, vol 30. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0121158
Print ISBN: 978-3-642-00930-3
Online ISBN: 978-3-642-00931-0