Abstract
An iterative algorithm is proposed for nonlinearly constrained optimization calculations when there are no derivatives. Each iteration forms linear approximations to the objective and constraint functions by interpolation at the vertices of a simplex, and a trust region bound restricts each change to the variables. Thus a new vector of variables is calculated, which may replace one of the current vertices, either to improve the shape of the simplex or because it is the best vector that has been found so far, according to a merit function that gives attention to the greatest constraint violation. The trust region radius ρ is never increased, and it is reduced when the approximations of a well-conditioned simplex fail to yield an improvement to the variables, until ρ reaches a prescribed value that controls the final accuracy. Some convergence properties and several numerical results are given, but there are no more than 9 variables in these calculations because linear approximations can be highly inefficient. Nevertheless, the algorithm is easy to use for small numbers of variables.
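The two ingredients named in the abstract can be illustrated concretely. A minimal sketch, not the paper's implementation: `fit_linear_model` builds the linear approximation that interpolates function values at the n+1 vertices of a simplex, and `merit` is one plausible form of a merit function that penalizes the greatest violation of constraints written as c_i(x) ≥ 0. The function names and the penalty parameter `mu` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_linear_model(vertices, values):
    """Fit the linear model f(x) ~ a + g @ x that interpolates the
    given values at the n+1 vertices of a simplex in R^n."""
    vertices = np.asarray(vertices, dtype=float)   # shape (n+1, n)
    values = np.asarray(values, dtype=float)       # shape (n+1,)
    n = vertices.shape[1]
    # Design matrix with rows [1, x]; nonsingular iff the simplex
    # is nondegenerate (which the algorithm maintains by reshaping).
    A = np.hstack([np.ones((n + 1, 1)), vertices])
    coeff = np.linalg.solve(A, values)
    return coeff[0], coeff[1:]                     # constant term, gradient

def merit(f_val, con_vals, mu):
    """Objective value plus a penalty (weight mu, assumed here) on the
    greatest violation of the constraints c_i(x) >= 0."""
    worst = max(0.0, max(-c for c in con_vals))
    return f_val + mu * worst
```

For example, interpolating the values 3, 5, 2 at the vertices (0,0), (1,0), (0,1) recovers the plane 3 + 2x₁ − x₂ exactly, since a linear function is determined by its values at a nondegenerate simplex.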
© 1994 Springer Science+Business Media Dordrecht
Powell, M.J.D. (1994). A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation. In: Gomez, S., Hennart, JP. (eds) Advances in Optimization and Numerical Analysis. Mathematics and Its Applications, vol 275. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-8330-5_4
Print ISBN: 978-90-481-4358-0
Online ISBN: 978-94-015-8330-5