Abstract
This paper is intended to accomplish two tasks. It gives a tutorial introduction to quasi-Newton methods and to collinear scaling methods for unconstrained optimization. It also addresses the problem of developing unconstrained optimization techniques that take advantage of a known sparsity pattern in the Hessian matrix of the objective function. Methods that generalize quasi-Newton methods to the sparse case are discussed, and an example is given which indicates that positive definite update formulas may not generalize well. This is because the quasi-Newton equation may be inconsistent with sparsity and positive definiteness. It is shown that the method of collinear scaling together with sparse Cholesky factor updates may overcome these difficulties. The connection between sequential estimation and this updating process is briefly discussed.
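The inconsistency mentioned in the abstract can be seen in a small illustration (the numbers and the diagonal sparsity pattern below are hypothetical choices for exposition, not taken from the paper). The quasi-Newton (secant) equation requires the updated matrix B to satisfy B s = y, where s is the step and y the gradient difference. When B is additionally required to have a fixed sparsity pattern, the secant equation can pin down entries of B that violate positive definiteness, even though s^T y > 0 so that a dense update such as BFGS would remain positive definite:

```python
# Toy illustration: the secant equation B s = y can conflict with
# a prescribed sparsity pattern and positive definiteness.
# All numbers here are hypothetical, chosen only for illustration.

s = [1.0, 1.0]    # step:  x_new - x_old
y = [3.0, -1.0]   # gradient difference: g_new - g_old

# Curvature condition s^T y > 0 holds, so a dense BFGS update
# applied to a positive definite B would stay positive definite.
sty = sum(si * yi for si, yi in zip(s, y))
assert sty > 0

# Now impose a diagonal sparsity pattern on B.  The secant equation
# decouples componentwise: b_ii * s_i = y_i, forcing each diagonal entry.
b_diag = [yi / si for si, yi in zip(s, y)]
print(b_diag)
# The second forced entry is negative, so no positive definite matrix
# with this sparsity pattern can satisfy B s = y.
```

In this example the forced diagonal is [3.0, -1.0]: the sparsity pattern plus the secant equation leaves no positive definite candidate, which is the kind of obstruction the paper's collinear scaling and Cholesky factor updates are designed to circumvent.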
This work was supported in part by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38, and in part by the National Science Foundation under Contract MCS-78-02547 while the author was at the University of Kentucky.
References
D. Bertsekas, “Control of uncertain systems with a set membership description”, Ph.D. Thesis, Department of Electrical Engineering, Massachusetts Institute of Technology, Cambridge, MA (1971).
C.G. Broyden, “A class of methods for solving nonlinear simultaneous equations”, Mathematics of Computation 19 (1965) 577–593.
C.G. Broyden, “A new double-rank minimization algorithm”, Notices of the American Mathematical Society 16 (1969) 670.
W.C. Davidon, “Variable metric method for minimization”, Report ANL-5990 Rev., Argonne National Laboratory, Argonne, IL (1959).
W.C. Davidon, “Conic approximations and collinear scalings for optimizers”, SIAM Journal on Numerical Analysis 17 (1980).
J.E. Dennis and J.J. Moré, “Quasi-Newton methods, motivation and theory”, SIAM Review 19 (1977) 46–89.
J.E. Dennis and R.B. Schnabel, “Least change secant updates for quasi-Newton methods”, SIAM Review 21 (1979) 443–459.
J.J. Dongarra, C.B. Moler, J.R. Bunch and G.W. Stewart, LINPACK users’ guide (SIAM Publications, Philadelphia, PA, 1979).
R. Fletcher, “A new approach to variable metric algorithms”, The Computer Journal 13 (1970) 313–322.
R. Fletcher and M.J.D. Powell, “A rapidly convergent descent method for minimization”, The Computer Journal 6 (1963) 163–168.
D.M. Gay, “Computing optimal locally constrained steps”, Mathematics Research Center Report #2000, University of Wisconsin, Madison, WI (1979).
P.E. Gill and W. Murray, “Newton type methods for unconstrained and linearly constrained optimization”, Mathematical Programming 7 (1974) 311–350.
D. Goldfarb, “A family of variable-metric methods derived by variational means”, Mathematics of Computation 24 (1970) 23–26.
D. Goldfarb, “Generating conjugate directions without line searches using factorized variable metric formulas”, Mathematical Programming 13 (1977) 94–110.
R.E. Kalman, “A new approach to linear filtering and prediction problems”, ASME Transactions, Journal of Basic Engineering 82D (1960) 35–45.
R.E. Kalman, “New methods in Wiener filtering theory”, in: J.L. Bogdanoff and F. Kozin, eds., Proceedings of the First Symposium on Engineering Applications of Random Function Theory and Probability (Wiley, New York, 1963), pp. 270–388.
B. Lam, “On the convergence of a quasi-Newton method for sparse nonlinear systems”, Mathematics of Computation 32 (1978) 447–451.
E.S. Marwill, “Exploiting sparsity in Newton-like methods”, Ph.D. Thesis, Department of Computer Science Report TR 78-335, Cornell University, Ithaca, NY (1978).
E.S. Marwill, “Convergence results for Schubert’s method for solving sparse nonlinear equations”, SIAM Journal on Numerical Analysis 16 (1979) 588–604.
S.K. Mitter and P. Todalagi, “Variable metric methods and filtering theory”, Department of Electrical Engineering and Computer Science Report, Massachusetts Institute of Technology, Cambridge, MA (1979).
J.J. Moré, “The Levenberg-Marquardt algorithm: Implementation and theory”, in: G.A. Watson, ed., Numerical analysis: Proceedings of the biennial conference, Dundee, 1977, Lecture Notes in Mathematics 630 (Springer, Heidelberg, 1978) pp. 105–116.
J.M. Ortega and W.C. Rheinboldt, Iterative solution of nonlinear equations in several variables (Academic Press, New York, 1970).
M.J.D. Powell, “Convergence properties of a class of minimization algorithms”, in: O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, eds. Nonlinear programming 2 (Academic Press, New York, 1975), pp. 1–27.
M.J.D. Powell, “Quasi-Newton formulae for sparse second derivative matrices”, Internal Report DAMTP 1979/NA7, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England (1979).
L.K. Schubert, “Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian”, Mathematics of Computation 24 (1970) 27–30.
F.C. Schweppe, “Recursive state estimation: Unknown but bounded errors and system input”, IEEE Transactions on Automatic Control AC-13 (1968).
D.F. Shanno, “Conditioning of quasi-Newton methods for function minimization”, Mathematics of Computation 24 (1970) 647–656.
D.F. Shanno, “On variable metric methods for sparse Hessians”, Mathematics of Computation 34 (1980) 499–514.
D.C. Sorensen, “The Q-superlinear convergence of a collinear scaling algorithm for unconstrained optimization”, SIAM Journal on Numerical Analysis 17 (1980) 84–114.
D.C. Sorensen, “Newton’s method with a model trust region modification”, SIAM Journal on Numerical Analysis 19 (1982) 409–426.
G.W. Stewart, Introduction to matrix computations (Academic Press, New York, 1973).
M.N. Thapa, “Optimization of unconstrained functions with sparse Hessian matrices”, Ph.D. Thesis, Department of Operations Research, Stanford University, Stanford, CA (1980).
S.W. Thomas, “Sequential estimation techniques for quasi-Newton algorithms”, Ph.D. Thesis, Department of Computer Science Report TR 75-227, Cornell University, Ithaca, NY (1975).
Ph.L. Toint, “On sparse and symmetric updating subject to a linear equation”, Mathematics of Computation 31 (1977) 954–961.
Ph.L. Toint, “On the superlinear convergence of an algorithm for solving sparse minimization problems”, SIAM Journal on Numerical Analysis 16 (1979) 1036–1045.
Ph.L. Toint, “A note about sparsity exploiting quasi-Newton updates”, Department of Mathematics Report 7915, Facultés Universitaires de Namur, Namur, Belgium (1979).
Copyright information
© 1982 The Mathematical Programming Society, Inc.
Cite this chapter
Sorensen, D.C. (1982). Collinear scaling and sequential estimation in sparse optimization algorithms. In: Sorensen, D.C., Wets, R.J.B. (eds) Algorithms and Theory in Filtering and Control. Mathematical Programming Studies, vol 18. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0120977
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-00847-4
Online ISBN: 978-3-642-00848-1