We extend the Lagrangian duality theory for convex optimization problems to incorporate approximate solutions. In particular, we generalize the well-known relationships between minimizers of a convex optimization problem, maximizers of its Lagrangian dual, saddle points of the Lagrangian, Kuhn–Tucker vectors, and Kuhn–Tucker conditions to their approximate counterparts. As an application, we show how the theory can be used for convex quadratic programming, and we then apply the results to support vector machines from learning theory.
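To illustrate the kind of relationship the paper studies, the following sketch (not taken from the paper; the toy problem, function names, and tolerance are our own) works out the Lagrangian dual of a one-dimensional convex quadratic program and shows how the duality gap certifies an ε-approximate minimizer:

```python
# Illustrative sketch (hypothetical example, not the paper's):
# a tiny convex QP:  minimize f(x) = x^2 / 2  subject to  1 - x <= 0.
# The Lagrangian is L(x, lam) = x^2/2 + lam*(1 - x); minimizing over x
# (attained at x = lam) yields the dual function q(lam) = lam - lam^2/2.

def primal(x):
    """Primal objective f(x) = x^2 / 2."""
    return 0.5 * x * x

def dual(lam):
    """Dual function q(lam) = inf_x L(x, lam), attained at x = lam."""
    return lam - 0.5 * lam * lam

# Exact saddle point: x* = lam* = 1 with common value 1/2 (zero duality gap).
x_star, lam_star = 1.0, 1.0

# An epsilon-minimizer: a feasible x whose primal value exceeds the
# dual value by eps, so the gap itself certifies near-optimality.
x_eps = 1.1
eps = primal(x_eps) - dual(lam_star)

print(primal(x_star), dual(lam_star), eps)
```

Weak duality guarantees `dual(lam) <= primal(x)` for every dual-feasible `lam` and primal-feasible `x`, so a small computed gap bounds how suboptimal `x_eps` can be; the paper's approximate saddle-point and Kuhn–Tucker results formalize this for general convex programs.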
Keywords: Lagrangian duality · Approximations · Saddle points · Kuhn–Tucker conditions · Support vector machines