# Nonlinear Programming

• Jonathan F. Bard
Part of the Nonconvex Optimization and Its Applications book series (NOIA, volume 30)

## Abstract

The central concern of this chapter is the theory and algorithms for mathematical programs of the following general form: min $f(x)$ subject to
$$\begin{gathered} h_i(x) = 0, \quad i = 1, \ldots, m \\ g_i(x) \geq 0, \quad i = 1, \ldots, q \\ x \in X \end{gathered} \tag{4.1}$$
where $f, h_1, \ldots, h_m, g_1, \ldots, g_q$ are functions whose range lies in $R^1$, $X$ is a subset of $R^n$ that places additional restrictions on the variables such as bounds, and $x$ is a vector of $n$ components $x_1, \ldots, x_n$. A value of $x$ satisfying the constraints of (4.1) is called a feasible solution to the problem. The collection of all such solutions forms the feasible region. The nonlinear programming problem (NLP), then, is to find a feasible point $\bar{x}$ such that $f(\bar{x}) \leq f(x)$ for each feasible point $x$. Such a point is called the optimal solution, or simply, a solution to (4.1). Needless to say, an NLP can also be stated as a maximization problem, and some or all of the inequality constraints in (4.1) can be written as $g_i(x) \leq 0$ for $i = 1, \ldots, q$. In the special case when the objective function in (4.1) is linear and all the constraints, including the set $X$, can be represented by linear inequalities or equations, the problem reduces to a linear program, as discussed in Chapter 2.
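As a concrete illustration of form (4.1), the sketch below solves a small NLP with one equality constraint, one inequality constraint, and simple bounds playing the role of the set $X$. The problem data and the use of SciPy's `minimize` (SLSQP) are this example's own assumptions, not part of the chapter; note that SciPy's `"ineq"` convention, $g_i(x) \geq 0$, matches (4.1).

```python
# Illustrative NLP in the form (4.1):
#   min  f(x) = (x1 - 1)^2 + (x2 - 2.5)^2
#   s.t. h1(x) = x1 + x2 - 3  = 0     (equality constraint)
#        g1(x) = x1 - x2 + 1 >= 0    (inequality constraint)
#        x in X = {x : x >= 0}        (bound restrictions)
# Both constraints are active at the solution x = (1, 2), where f = 0.25.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},  # h1(x) = 0
    {"type": "ineq", "fun": lambda x: x[0] - x[1] + 1.0},  # g1(x) >= 0
]
bounds = [(0.0, None), (0.0, None)]  # the set X: nonnegativity bounds

res = minimize(f, x0=np.array([2.0, 1.0]), method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x, res.fun)
```

One can verify by hand that $\bar{x} = (1, 2)$ satisfies the optimality conditions: the gradient of $f$ there, $(0, -1)$, equals $\lambda(1,1) + \mu(1,-1)$ with $\lambda = -\tfrac{1}{2}$ for the equality constraint and $\mu = \tfrac{1}{2} \geq 0$ for the active inequality.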

## Keywords

Nonlinear Programming · Inequality Constraint · Line Search · Feasible Region · Penalty Method
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.