Collinear scaling and sequential estimation in sparse optimization algorithms

Chapter in: Algorithms and Theory in Filtering and Control

Part of the book series: Mathematical Programming Studies (volume 18)

Abstract

This paper is intended to accomplish two tasks. It gives a tutorial introduction to quasi-Newton methods and to collinear scaling methods for unconstrained optimization. It also addresses the problem of developing unconstrained optimization techniques which take advantage of a known sparsity pattern in the Hessian matrix of the objective function. Methods which generalize quasi-Newton methods to the sparse case are discussed, and an example is given which indicates that positive definite update formulas may not generalize well. This is because the quasi-Newton equation may be inconsistent with sparsity and positive definiteness. It is shown that the method of collinear scaling together with sparse Cholesky factor updates may overcome these difficulties. The connection between sequential estimation and this updating process is briefly discussed.

This work was supported in part by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38, and in part by the National Science Foundation under Contract MCS-78-02547 while the author was at the University of Kentucky.
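The difficulty the abstract describes can be seen in a two-variable illustration (our own sketch, not an example from the paper): even when the curvature condition s·y > 0 holds, so that a dense symmetric positive definite solution of the quasi-Newton equation Bs = y exists (e.g. via the BFGS update), a sparsity pattern imposed on B can rule out every positive definite solution. A minimal sketch, assuming a diagonal sparsity pattern:

```python
import numpy as np

# Quasi-Newton (secant) condition: B s = y.
# The curvature condition s^T y > 0 guarantees a dense symmetric
# positive definite solution exists, but a sparsity pattern can
# exclude every such solution.

s = np.array([1.0, 1.0])
y = np.array([2.0, -1.0])
assert s @ y > 0  # curvature condition holds (s^T y = 1)

# Impose a diagonal sparsity pattern: B = diag(b1, b2).
# Then B s = y forces each diagonal entry componentwise.
b = y / s
print(b)              # [ 2. -1.]
print(np.all(b > 0))  # False: no diagonal SPD matrix satisfies B s = y
```

So the secant equation, the sparsity pattern, and positive definiteness are jointly infeasible here, which is the obstruction that motivates the collinear scaling approach.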


References

  1. D. Bertsekas, “Control of uncertain systems with a set membership description”, Ph.D. Thesis, Department of Electrical Engineering, Massachusetts Institute of Technology, Cambridge, MA (1971).

  2. C.G. Broyden, “A class of methods for solving nonlinear simultaneous equations”, Mathematics of Computation 19 (1965) 577–593.

  3. C.G. Broyden, “A new double-rank minimization algorithm”, Notices of the American Mathematical Society 16 (1969) 670.

  4. W.C. Davidon, “Variable metric method for minimization”, Report ANL-5990 Rev., Argonne National Laboratory, Argonne, IL (1959).

  5. W.C. Davidon, “Conic approximations and collinear scalings for optimizers”, SIAM Journal on Numerical Analysis 17 (1980).

  6. J.E. Dennis and J.J. Moré, “Quasi-Newton methods, motivation and theory”, SIAM Review 19 (1977) 46–89.

  7. J.E. Dennis and R.B. Schnabel, “Least change secant updates for quasi-Newton methods”, SIAM Review 21 (1979) 443–459.

  8. J.J. Dongarra, C.B. Moler, J.R. Bunch and G.W. Stewart, LINPACK user’s guide (SIAM Publications, Philadelphia, PA, 1979).

  9. R. Fletcher, “A new approach to variable metric algorithms”, The Computer Journal 13 (1970) 313–322.

  10. R. Fletcher and M.J.D. Powell, “A rapidly convergent descent method for minimization”, The Computer Journal 6 (1963) 163–168.

  11. D.M. Gay, “Computing optimal locally constrained steps”, Mathematics Research Center Report #2000, University of Wisconsin, Madison, WI (1979).

  12. P.E. Gill and W. Murray, “Newton type methods for unconstrained and linearly constrained optimization”, Mathematical Programming 7 (1974) 311–350.

  13. D. Goldfarb, “A family of variable-metric methods derived by variational means”, Mathematics of Computation 24 (1970) 23–26.

  14. D. Goldfarb, “Generating conjugate directions without line searches using factorized variable metric formulas”, Mathematical Programming 13 (1977) 94–110.

  15. R.E. Kalman, “A new approach to linear filtering and prediction problems”, Transactions of the ASME, Journal of Basic Engineering 82D (1960) 35–45.

  16. R.E. Kalman, “New methods in Wiener filtering theory”, in: J.L. Bogdanoff and F. Kozin, eds., Proceedings of the First Symposium on Engineering Applications of Random Function Theory and Probability (Wiley, New York, 1963), pp. 270–388.

  17. B. Lam, “On the convergence of a quasi-Newton method for sparse nonlinear systems”, Mathematics of Computation 32 (1978) 447–451.

  18. E.S. Marwill, “Exploiting sparsity in Newton-like methods”, Ph.D. Thesis, Department of Computer Science Report TR 78-335, Cornell University, Ithaca, NY (1978).

  19. E.S. Marwill, “Convergence results for Schubert’s method for solving sparse nonlinear equations”, SIAM Journal on Numerical Analysis 16 (1979) 588–604.

  20. S.K. Mitter and P. Todalagi, “Variable metric methods and filtering theory”, Department of Electrical Engineering and Computer Science Report, Massachusetts Institute of Technology, Cambridge, MA (1979).

  21. J.J. Moré, “The Levenberg-Marquardt algorithm: Implementation and theory”, in: G.A. Watson, ed., Numerical analysis: Proceedings of the Biennial Conference, Dundee, 1977, Lecture Notes in Mathematics 630 (Springer, Heidelberg, 1978) pp. 105–116.

  22. J.M. Ortega and W.C. Rheinboldt, Iterative solution of nonlinear equations in several variables (Academic Press, New York, 1970).

  23. M.J.D. Powell, “Convergence properties of a class of minimization algorithms”, in: O.L. Mangasarian, R.R. Meyer and S.M. Robinson, eds., Nonlinear programming 2 (Academic Press, New York, 1975) pp. 1–27.

  24. M.J.D. Powell, “Quasi-Newton formulae for sparse second derivative matrices”, Internal Report DAMTP 1979/NA7, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England (1979).

  25. L.K. Schubert, “Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian”, Mathematics of Computation 24 (1970) 27–30.

  26. F.C. Schweppe, “Recursive state estimation: Unknown but bounded errors and system input”, IEEE Transactions on Automatic Control AC-13 (1968).

  27. D.F. Shanno, “Conditioning of quasi-Newton methods for function minimization”, Mathematics of Computation 24 (1970) 647–656.

  28. D.F. Shanno, “On variable metric methods for sparse Hessians”, Mathematics of Computation 34 (1980) 499–514.

  29. D.C. Sorensen, “The Q-superlinear convergence of a collinear scaling algorithm for unconstrained optimization”, SIAM Journal on Numerical Analysis 17 (1980) 84–114.

  30. D.C. Sorensen, “Newton’s method with a model trust-region modification”, SIAM Journal on Numerical Analysis 19 (1982).

  31. G.W. Stewart, Introduction to matrix computations (Academic Press, New York, 1973).

  32. M.N. Thapa, “Optimization of unconstrained functions with sparse Hessian matrices”, Ph.D. Thesis, Department of Operations Research, Stanford University, Stanford, CA (1980).

  33. S.W. Thomas, “Sequential estimation techniques for quasi-Newton algorithms”, Ph.D. Thesis, Department of Computer Science Report TR 75-227, Cornell University, Ithaca, NY (1975).

  34. Ph.L. Toint, “On sparse and symmetric updating subject to a linear equation”, Mathematics of Computation 31 (1977) 954–961.

  35. Ph.L. Toint, “On the superlinear convergence of an algorithm for solving sparse minimization problems”, SIAM Journal on Numerical Analysis 16 (1979) 1036–1045.

  36. Ph.L. Toint, “A note about sparsity exploiting quasi-Newton updates”, Department of Mathematics Report 7915, Facultes Universitaires de Namur, Namur, Belgium (1979).

Editor information

Danny C. Sorensen, Roger J.-B. Wets

Copyright information

© 1982 The Mathematical Programming Society, Inc.

Cite this chapter

Sorensen, D.C. (1982). Collinear scaling and sequential estimation in sparse optimization algorithms. In: Sorensen, D.C., Wets, R.J.B. (eds) Algorithms and Theory in Filtering and Control. Mathematical Programming Studies, vol 18. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0120977

  • DOI: https://doi.org/10.1007/BFb0120977

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-00847-4

  • Online ISBN: 978-3-642-00848-1

  • eBook Packages: Springer Book Archive
