Seeking Feasibility in Nonlinear Programs
Finding a feasible point in an NLP can be notoriously difficult. For example, there may be multiple disconnected feasible regions, possibly at great distances from one another. Many feasibility-seeking algorithms rely on optimizing a phase 1 objective that minimizes a penalty function which reaches zero exactly at a feasible point. This is just as tricky as solving any other NLP, and it is subject to the same difficulties, such as the possibility of multiple local optima, including some that trap the phase 1 solution process but are not actually feasible points. It may also be difficult to solve for the intersection of nonlinear constraints, or to obtain correct derivatives.
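The phase 1 idea can be sketched as follows. The toy constraints, the quadratic penalty, and all names here are illustrative assumptions, not from the text; a fixed-step gradient descent with finite-difference derivatives stands in for a real NLP solver.

```python
# Phase 1 sketch: minimize a quadratic penalty that is zero exactly at
# feasible points of the (assumed, illustrative) constraints
#   g1(x) = x0^2 + x1^2 - 1 <= 0
#   g2(x) = 0.5 - x0 - x1   <= 0

def penalty(x):
    g1 = x[0] ** 2 + x[1] ** 2 - 1.0
    g2 = 0.5 - x[0] - x[1]
    return max(0.0, g1) ** 2 + max(0.0, g2) ** 2

def numeric_grad(f, x, h=1e-6):
    # Central finite differences; a real solver would use exact derivatives.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def phase1(x0, step=0.01, iters=5000, tol=1e-12):
    x = list(x0)
    for _ in range(iters):
        if penalty(x) < tol:   # (near-)zero penalty => feasible point
            return x, True
        g = numeric_grad(penalty, x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x, penalty(x) < tol

# Infeasible start; descent drives the penalty to zero at the boundary.
x, ok = phase1([2.0, 2.0])
```

Note that `ok` only reports whether this one run reached zero penalty; as the text explains, a failed run says nothing about whether the model is infeasible.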
Unless the constraints have specific properties (e.g. they form a convex set), there is no guarantee that a particular algorithm will be able to find a feasible point when started from an arbitrary initial point. This makes it very difficult to conclude that a given model is infeasible: failure may simply mean that the solver was not started in the right place. The best approach is to use knowledge about the problem, such as a previous solution to a similar problem or logical reasoning, to provide the solver with a “good” initial point. As the joke goes, the best way to solve an NLP is to start at the optimum. If that approach fails, the only recourse may be to start the solver from many different points, i.e. a multi-start or scatter-search approach, in the hope that it can reach feasibility from one of them.
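A minimal multi-start sketch, under assumed illustrative details: the constraint below has two disconnected feasible intervals (near x = -1 and x = +1), and x = 0 is an infeasible stationary point of the penalty that can trap a single local descent, so we launch a local phase 1 search from many random initial points and keep the first that reaches (near-)zero penalty.

```python
import random

def penalty(x):
    # Assumed toy constraint: (x^2 - 1)^2 <= 0.01, feasible only in two
    # disconnected intervals around x = -1 and x = +1.
    v = (x * x - 1.0) ** 2 - 0.01
    return max(0.0, v) ** 2

def local_descent(x, iters=500, tol=1e-14):
    # Gradient descent with a backtracking line search; stands in for a
    # local NLP solver.
    for _ in range(iters):
        p = penalty(x)
        if p < tol:
            return x, True
        h = 1e-7
        g = (penalty(x + h) - penalty(x - h)) / (2.0 * h)
        t = 1.0
        # Shrink the step until the penalty actually decreases.
        while t > 1e-12 and penalty(x - t * g) >= p:
            t *= 0.5
        x -= t * g
    return x, penalty(x) < tol

def multistart(trials=20, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x0 = rng.uniform(-3.0, 3.0)
        x, feasible = local_descent(x0)
        if feasible:
            return x, True
    return None, False

x, ok = multistart()
```

Each restart is independent, so even if some initial points lead the local descent into a spurious stationary point, another may land in the basin of a genuinely feasible region.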
Keywords: Initial Point; Penalty Function; Feasible Region; Variable Bound; Marker Point