Minimizing and Stationary Sequences
Solving an optimization problem usually consists of generating a sequence by some numerical algorithm. The key questions are then to show that this sequence converges to a solution and to evaluate the efficiency of the convergent procedure. In general, a coercivity hypothesis on the problem's data is assumed to ensure asymptotic convergence of the produced sequence. For convex minimization problems, if the produced sequence is stationary, i.e., the associated sequence of subgradients approaches zero, it is interesting to know for which class of functions convergence can be reached under an assumption weaker than coercivity. This chapter introduces the concept of well-behaved asymptotic functions, which in turn are linked to the problem of error bounds associated with a given subset of a Euclidean space. A general framework is developed around these two themes to characterize asymptotic optimality and error bounds for convex inequality systems.
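The distinction between stationary and minimizing sequences can be made concrete with a standard example not taken from this chapter: on the convex but non-coercive function $f(x) = e^{-x}$, gradient descent produces a stationary sequence (the gradients tend to zero) even though $\inf f = 0$ is never attained and the iterates diverge. This is a minimal sketch, with hypothetical function and parameter names:

```python
import math

# Illustrative example (assumed, not from the chapter): f(x) = exp(-x)
# is convex but not coercive, so gradient descent yields a stationary
# sequence -- f'(x_k) -> 0 -- while the iterates x_k diverge to +infinity
# and the infimum inf f = 0 is never attained.
def grad_descent(f_prime, x0, step=1.0, iters=500):
    x = x0
    for _ in range(iters):
        x -= step * f_prime(x)
    return x

f_prime = lambda x: -math.exp(-x)   # derivative of exp(-x)
x_final = grad_descent(f_prime, 0.0)
# The gradient is tiny (stationarity) yet x_final keeps growing,
# roughly like log(iters): the sequence is stationary but not minimizing
# in the sense of converging to a minimizer.
print(x_final, abs(f_prime(x_final)))
```

For a coercive (or, more generally, well-behaved) convex function, such a stationary sequence would instead be forced toward the solution set, which is the phenomenon the chapter's framework is designed to characterize.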
Keywords: Convex function · Error bound · Stationary sequence · Convex minimization problem · Calculus rule