Chapter 6 showed that there is no universal modeling tool; the situation is similar for optimization tools. Research on mathematical programming and computerized optimization tools (often called optimization solvers) is now over 50 years old, since it started even before modern computers were invented. It has produced a wealth of results covering diverse areas; to present them all in detail would require more than just a chapter. Thus, we assume that the reader already knows the basic concepts and some theory of mathematical programming (e.g., Luenberger, 1984; Fletcher, 1987; Williams, 1993). We reintroduce some basic notation and results only in order to comment on specific features of optimization solvers.
- 4. In Chapters 4 and 8, and also in other parts of this book, we use a maxmin aggregation when defining an achievement scalarizing function; in the minmax aggregation we simply change the signs of the functions. However, we should warn the reader that the maxmin aggregation might result in weakly efficient solutions, which are unacceptable in practice. To avoid them, it is sufficient, for example, to add the sum of the objectives, multiplied by a small coefficient, to the minimum of these functions (see Chapters 4 and 8). Even good optimization systems, such as the Optimization Toolbox in MATLAB (Pärt-Enander et al., 1996), do not give multi-objective optimization the attention it deserves: they instruct the user simply to apply the minmax (or maxmin) aggregation, without warning that this can result in weakly efficient solutions. One can get even worse advice from some Web pages on optimization, which suggest converting a multi-objective problem into a single-objective one by summing the objectives with positive weighting coefficients; as shown in other chapters of this book, this is possibly the worst method of treating multi-objective problems and leads the user into serious difficulties.
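The augmentation described in this note can be sketched in a few lines. This is a minimal illustration, not code from the book; the function names and the value of the small coefficient `eps` are illustrative choices. Objective values are assumed to be maximized.

```python
# Sketch of the augmented maxmin aggregation: plain maxmin may stop at a
# weakly efficient point, while adding a small weighted sum of the
# objectives breaks such ties in favor of properly efficient outcomes.

def maxmin_plain(q):
    """Plain maxmin aggregation; may yield weakly efficient solutions."""
    return min(q)

def maxmin_augmented(q, eps=1e-4):
    """Maxmin augmented with a small sum-of-objectives term.

    Improving any single objective strictly increases this value, so a
    merely weakly efficient outcome can no longer be a maximizer."""
    return min(q) + eps * sum(q)

# Outcome b weakly dominates a: equal worst objective, better second one.
a = [1.0, 1.0]
b = [1.0, 2.0]
print(maxmin_plain(a) == maxmin_plain(b))            # True: indistinguishable
print(maxmin_augmented(b) > maxmin_augmented(a))     # True: b preferred
```

The choice of `eps` matters in practice: too large a value distorts the aspiration-based interpretation of the scalarizing function, while a very small value merely breaks ties, which is usually what is wanted.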
- 5. For example, in the GAMS modeling system, equation (7.3) can be represented using special aliased sets in assignments (see Brooke et al., 1992); no tricks are required to represent equation (7.3) in the AMPL programming language (see Fourer et al., 1993).
- 7. In 1980, L. Khachian from the Computing Laboratory in Moscow, Russia, used a nonlinear optimization algorithm of N. Shor from Kiev, Ukraine, to prove that linear programming problems can be solved in a number of iterations that grows polynomially with the dimensions of the problem. This created renewed interest in large-scale linear programming. In 1984, N. Karmarkar from Bell Laboratories in the USA proposed another, more practical, nonlinear programming approach to linear programming. Further research led to the so-called barrier and interior point methods. It is worth noting, however, that the idea of using nonlinear optimization algorithms for linear programming problems originated earlier than with Khachian. In 1977, J. Sosnowski (Warsaw, Poland) had already used a special nonlinear optimization algorithm for linear programming problems; this algorithm is included in the HYBRID solver described briefly in Section 7.2.5 of this chapter. Moreover, this algorithm used earlier ideas of O.L. Mangasarian and R.T. Rockafellar (Seattle and Madison, USA, respectively); in 1976 Rockafellar had already proven finite convergence of an augmented Lagrangian multiplier method applied to piece-wise linear programming problems (which could also be used to obtain a polynomial estimate of the number of iterations needed).
- 9. The basic idea of this algorithm was presented in 1977 (Sosnowski, 1981); hence, it is probably one of the oldest nonlinear programming algorithms applied to linear programming. A theoretical basis of this algorithm using an augmented Lagrangian approach, including the proof of finite convergence of a variant applied to piece-wise linear programming problems, is attributed to Rockafellar (1976) and to earlier ideas of Mangasarian (summarized later in Mangasarian, 1981). The basic idea of regularization of optimization problems as used in this algorithm is attributed even earlier to Polyak and Tretiyakov (1972), who in turn exploited the general, earlier concept of Tikhonov regularization. The basic idea of the augmented Lagrangian multiplier iteration also relies on earlier work on an iteratively shifted penalty function (see Powell, 1969, for equality constraints and Wierzbicki, 1971, for the case of inequality constraints).
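The augmented Lagrangian multiplier iteration (iteratively shifted penalty) mentioned in this note can be illustrated on a toy problem. This is a minimal sketch, not the HYBRID solver: the test problem, the penalty coefficient `rho`, and the iteration count are illustrative assumptions, and the inner minimization is done analytically rather than by a numerical subroutine.

```python
# Toy problem:  minimize x^2  subject to  x - 1 = 0.
# Augmented Lagrangian:  L(x, lam) = x^2 + lam*(x - 1) + (rho/2)*(x - 1)^2.
# Each outer iteration minimizes L in x, then shifts the multiplier
# by the (penalized) constraint violation.

def augmented_lagrangian(rho=10.0, iters=20):
    lam = 0.0
    x = 0.0
    for _ in range(iters):
        # Analytic minimizer of L in x: 2x + lam + rho*(x - 1) = 0
        x = (rho - lam) / (2.0 + rho)
        # Multiplier (shifted-penalty) update:
        lam += rho * (x - 1.0)
    return x, lam

x, lam = augmented_lagrangian()
print(round(x, 6), round(lam, 6))  # converges to x* = 1, lambda* = -2
```

Note that convergence here is linear with factor `2 / (2 + rho)`, so a moderate `rho` already gives fast convergence without the ill-conditioning that a pure penalty method (with `rho` driven to infinity) would cause.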
- 15. If second-order sufficient conditions for optimality are satisfied at x, refer to Rockafellar (1976).
- 17. The objective function is sometimes called the goal function. However, this term is also associated with goal programming. Therefore we prefer to use the term objective function in this book.
- 18. Recall that such a continuum — a nonlinear manifold of nonunique solutions — is characteristic of the illustrative example in Chapter 6.
- 19. The authors would like to advise users of dual solutions to read the following: Jansen et al. (1993) and Güler et al. (1993). The former provide a good summary of the related problems and their experience with applications; the latter present a survey of degeneracy in interior point methods, which are considered by many users to be free of degeneracy problems.