Parallel Newton-Raphson Methods for Unconstrained Minimization with Asynchronous Updates of the Hessian Matrix or Its Inverse
We consider a parallel variant of the Newton-Raphson method for unconstrained optimization, which uses as many finite differences of gradients as possible to update the inverse Hessian matrix. The method is based on the Gauss-Seidel type of updating for quasi-Newton methods originally proposed by Straeter (1973). It incorporates the finite-difference approximations via the symmetric rank-one updates analysed by Van Laarhoven (1985). At the end of the paper we discuss the potential of the method for on-line, real-time optimization. The development of hardware for parallel computing has been so turbulent, and the development of programming languages for parallel processing so slow, that it is still unreasonable to expect a large market for standard optimization software. Hence, we have restricted ourselves to testing the algorithmic ideas on sequential computers. Moreover, we also considered the asynchronous method of Fischer and Ritter (1988), which uses finite differences of gradients to update as many rows and columns as possible of the Hessian matrix itself. The test results reveal both promising research directions and possible pitfalls for parallel unconstrained optimization.
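The symmetric rank-one (SR1) update at the heart of the method can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the NumPy formulation, and the skip tolerance are assumptions; the update formula itself is the standard SR1 update of an inverse-Hessian approximation from a step `s` and a gradient difference `y`.

```python
import numpy as np

def sr1_inverse_update(H, s, y, tol=1e-8):
    """Symmetric rank-one (SR1) update of an inverse-Hessian approximation H.

    s : step taken (x_new - x_old)
    y : corresponding gradient difference (g_new - g_old),
        e.g. a finite difference of gradients along s.

    The update is skipped when the denominator is too small relative to
    the vectors involved (a standard safeguard against breakdown).
    """
    v = s - H @ y
    denom = v @ y
    if abs(denom) < tol * np.linalg.norm(v) * np.linalg.norm(y):
        return H  # skip the update to avoid numerical blow-up
    return H + np.outer(v, v) / denom
```

On a quadratic objective, where each `y` equals the exact Hessian times `s`, n updates along linearly independent directions recover the exact inverse Hessian; this quadratic-termination property is what makes SR1 attractive for absorbing finite-difference gradient information as it becomes available.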
Keywords: Hessian Matrix, Unconstrained Optimization, Unconstrained Minimization, Linear Search, Promising Research Direction
- D.P. Bertsekas and J.N. Tsitsiklis, Parallel and Distributed Computation. Prentice Hall, Englewood Cliffs, New Jersey, 1989.
- M. Dayde, Parallélisation d'Algorithmes d'Optimisation pour des Problèmes d'Optimum Design. Thesis, Institut National Polytechnique de Toulouse, France, 1986.
- M. Dayde, M. Lescrenier, and Ph. Toint, A Comparison between Straeter's Parallel Variable Metric Algorithm and Parallel Discrete Newton Methods. Report, ENSEEIHT de Toulouse, Département Informatique, France, 1989.
- A.V. Fiacco and G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Wiley, New York, 1968.
- R. Fletcher, Practical Methods of Optimization, Volume 1: Unconstrained Optimization. Wiley, New York, 1980.
- T.J. Freeman, Parallel Projected Variable Metric Algorithms for Unconstrained Optimization. NASA, ICASE Report 89–73, Hampton, Virginia 23665, USA, 1989.
- F.A. Lootsma, Nonlinear Optimization in Industry and the Development of Optimization Programs. In L.C.W. Dixon (ed.), Optimization in Action. Academic Press, London, 1976, pp. 252–266.
- F.A. Lootsma, The ALGOL 60 Procedure minifun for Solving Non-linear Optimization Programs. In H.J. Greenberg (ed.), Design and Implementation of Optimization Software. NATO ASI Series E-28, Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1978, pp. 397–446.
- F.A. Lootsma, Performance Evaluation of Nonlinear Optimization Methods via Multi-Criteria Analysis and via Linear Model Analysis. In M.J.D. Powell (ed.), Nonlinear Optimization 1981. Academic Press, London, 1982, pp. 419–453.
- B.A. Murtagh and R.W.H. Sargent, A Constrained Minimization Method with Quadratic Convergence. In R. Fletcher (ed.), Optimization. Academic Press, London, 1969, pp. 215–246.
- C. Schweigman, Constrained Minimization: Handling of Linear and Nonlinear Constraints. Thesis, Delft University of Technology, Faculty of Mathematics, The Netherlands, 1974.
- T.A. Straeter, A Parallel Variable Metric Optimization Algorithm. NASA Technical Note D-7329, Hampton, Virginia 23665, 1973.