Parallel Newton-Raphson Methods for Unconstrained Minimization with Asynchronous Updates of the Hessian Matrix or Its Inverse

F. A. Lootsma
Conference paper. Part of the Lecture Notes in Economics and Mathematical Systems book series (LNE, volume 367).


We consider a parallel variant of the Newton-Raphson method for unconstrained optimization which uses as many finite differences of gradients as possible to update the inverse Hessian matrix. The method is based on the Gauss-Seidel type of updating for quasi-Newton methods originally proposed by Straeter (1973), and it incorporates the finite-difference approximations via the symmetric rank-one updates analysed by Van Laarhoven (1985). We also consider the asynchronous method of Fischer and Ritter (1988), which uses finite differences of gradients to update as many rows and columns as possible of the Hessian matrix itself. Because the development of hardware for parallel computing has been so turbulent, and the development of programming languages for parallel processing so slow, it is still unreasonable to expect a large market for standard optimization software; we have therefore restricted ourselves to testing the algorithmic ideas on sequential computers. The test results reveal both promising research directions and possible pitfalls for parallel unconstrained optimization. At the end of the paper we discuss the potential of the method for on-line, real-time optimization.
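As a rough illustration of the ingredients named in the abstract, the sketch below combines a symmetric rank-one (SR1) update of the inverse Hessian with finite differences of gradients along coordinate directions. This is a minimal NumPy sketch of the general technique, not the paper's parallel algorithm; the function name, tolerance, and demo quadratic are illustrative assumptions.

```python
import numpy as np

def sr1_inverse_update(H, s, y, tol=1e-8):
    """Symmetric rank-one (SR1) update of an inverse-Hessian approximation.

    s is a step in x and y the corresponding difference of gradients;
    the update enforces the secant condition H_new @ y == s.  The update
    is skipped when its denominator is too small (a standard safeguard).
    """
    v = s - H @ y                     # residual of the secant condition
    denom = float(v @ y)
    if abs(denom) <= tol * np.linalg.norm(v) * np.linalg.norm(y):
        return H                      # skip an unstable update
    return H + np.outer(v, v) / denom

# Demo on a quadratic f(x) = 0.5 * x @ A @ x, whose gradient is A @ x.
# A coordinate step h * e_j gives the finite difference of gradients
# h * A[:, j]; after n such independent (s, y) pairs the SR1 updates
# turn the identity into the exact inverse Hessian of the quadratic.
A = np.array([[2.0, 0.0], [0.0, 4.0]])
grad = lambda x: A @ x
x, h = np.zeros(2), 1e-3
H = np.eye(2)
for j in range(2):
    s = h * np.eye(2)[j]
    y = grad(x + s) - grad(x)         # finite difference of gradients
    H = sr1_inverse_update(H, s, y)
print(np.allclose(H, np.linalg.inv(A)))   # True for an exact quadratic
```

In a parallel, Gauss-Seidel-style variant of the kind the abstract describes, such gradient differences would be computed concurrently and absorbed into the approximation one after another as they arrive.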






  1. D.P. Bertsekas and J.N. Tsitsiklis, Parallel and Distributed Computation. Prentice Hall, Englewood Cliffs, New Jersey, 1989.
  2. C.G. Broyden, Quasi-Newton Methods and their Application to Function Minimization. Mathematics of Computation 21, 368–381, 1967.
  3. R.H. Byrd, R.B. Schnabel, and G.A. Shultz, Parallel Quasi-Newton Methods for Unconstrained Optimization. Mathematical Programming 42, 273–306, 1988.
  4. M. Dayde, Parallélisation d'Algorithmes d'Optimisation pour des Problèmes d'Optimum Design. Thesis, Institut National Polytechnique de Toulouse, France, 1986.
  5. M. Dayde, Parallel Algorithms for Nonlinear Programming Problems. Journal of Optimization Theory and Applications 61, 23–46, 1989.
  6. M. Dayde, M. Lescrenier, and Ph. Toint, A Comparison between Straeter's Parallel Variable Metric Algorithm and Parallel Discrete Newton Methods. Report of the ENSEEIHT de Toulouse, France, Département Informatique, 1989.
  7. A.V. Fiacco and G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Wiley, New York, 1968.
  8. H. Fischer and K. Ritter, An Asynchronous Parallel Newton Method. Mathematical Programming 42, 363–374, 1988.
  9. R. Fletcher, Practical Methods of Optimization, Volume 1, Unconstrained Optimization. Wiley, New York, 1980.
  10. T.J. Freeman, Parallel Projected Variable Metric Algorithms for Unconstrained Optimization. NASA, ICASE Report 89–73, Hampton, Virginia 23665, USA, 1989.
  11. H.Y. Huang, A Unified Approach to Quadratically Convergent Algorithms for Function Minimization. Journal of Optimization Theory and Applications 5, 405–423, 1970.
  12. P. van Laarhoven, Parallel Variable Metric Algorithms for Unconstrained Optimization. Mathematical Programming 33, 68–81, 1985.
  13. F.A. Lootsma, Nonlinear Optimization in Industry and the Development of Optimization Programs. In L.C.W. Dixon (ed.), Optimization in Action. Academic Press, London, 1976, pp. 252–266.
  14. F.A. Lootsma, The ALGOL 60 Procedure minifun for Solving Non-linear Optimization Programs. In H.J. Greenberg (ed.), Design and Implementation of Optimization Software. NATO ASI Series E-28, Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1978, pp. 397–446.
  15. F.A. Lootsma, Performance Evaluation of Nonlinear Optimization Methods via Multi-Criteria Analysis and via Linear Model Analysis. In M.J.D. Powell (ed.), Nonlinear Optimization 1981. Academic Press, London, 1982, pp. 419–453.
  16. F.A. Lootsma and K.M. Ragsdell, State-of-the-Art in Parallel Nonlinear Optimization. Parallel Computing 6, 133–155, 1988.
  17. H. Mukai, Parallel Algorithms for Solving Systems of Nonlinear Equations. Computers and Mathematics with Applications 7, 235–250, 1981.
  18. B.A. Murtagh and R.W.H. Sargent, A Constrained Minimization Method with Quadratic Convergence. In R. Fletcher (ed.), Optimization. Academic Press, London, 1969, pp. 215–246.
  19. B.A. Murtagh and R.W.H. Sargent, Computational Experience with Quadratically Convergent Minimization Methods. The Computer Journal 13, 185–194, 1970.
  20. C. Schweigman, Constrained Minimization: Handling of Linear and Nonlinear Constraints. Thesis, Delft University of Technology, Faculty of Mathematics, The Netherlands, 1974.
  21. T.A. Straeter, A Parallel Variable Metric Optimization Algorithm. NASA, Technical Note D-7329, Hampton, Virginia 23665, 1973.

Copyright information

© Springer-Verlag Berlin Heidelberg 1991

Authors and Affiliations

F. A. Lootsma
Faculty of Technical Mathematics and Informatics, Delft University of Technology, Delft, The Netherlands
