Abstract

Solving an optimization problem usually consists of generating a sequence via some numerical algorithm. The key questions are then to show that this sequence converges to a solution and to evaluate the efficiency of the convergent procedure. In general, a coercivity hypothesis on the problem's data is assumed in order to ensure the asymptotic convergence of the produced sequence. For convex minimization problems, if the produced sequence is stationary, i.e., the associated sequence of subgradients approaches zero, it is natural to ask for which class of functions convergence can still be guaranteed under an assumption weaker than coercivity. This chapter introduces the concept of well-behaved asymptotic functions, which in turn are linked to the problem of error bounds associated with a given subset of a Euclidean space. A general framework is developed around these two themes to characterize asymptotic optimality and error bounds for convex inequality systems.
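
For concreteness, the following is a minimal sketch, in standard LaTeX notation, of the usual definitions behind the terms used above (asymptotic function, coercivity, stationary sequence, error bound); the chapter's own formulations may differ in detail.

% Asymptotic (recession) function of a proper function f : R^n -> R ∪ {+∞}
% (standard definition; the chapter's precise formulation may differ):
\[
  f_\infty(d) \;=\; \liminf_{\substack{d' \to d \\ t \to +\infty}} \frac{f(t\,d')}{t},
\]
% and f is called coercive when f_\infty(d) > 0 for every d \neq 0.

% A sequence (x^k) is stationary for the convex problem min_x f(x)
% when the subgradients approach zero:
\[
  \operatorname{dist}\bigl(0, \partial f(x^k)\bigr) \;\longrightarrow\; 0,
\]
% and f is said to be asymptotically well behaved when every stationary
% sequence is also minimizing, i.e. f(x^k) -> inf f.

% A global error bound for the set S = {x : g(x) <= 0} defined by a
% convex inequality g asks for a constant tau > 0 such that
\[
  \operatorname{dist}(x, S) \;\le\; \tau\,[\,g(x)\,]_+ \quad \text{for all } x,
  \qquad [r]_+ = \max(r, 0).
\]

Coercivity guarantees bounded level sets, and hence convergence of minimizing sequences; well-behavedness is the weaker property under which stationarity alone already forces asymptotic optimality, which is what ties it to error bounds for the associated inequality system.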

Keywords

Convex Function · Error Bound · Stationary Sequence · Convex Minimization Problem · Calculus Rule

Copyright information

© Springer-Verlag New York, Inc. 2003