
Lagrange-type Functions in Constrained Non-Convex Optimization

  • Book
  • © 2003

Overview

Part of the book series: Applied Optimization (APOP, volume 85)


Table of contents (7 chapters)

Keywords

About this book

Lagrange and penalty function methods provide a powerful approach, both as a theoretical tool and a computational vehicle, for the study of constrained optimization problems. However, for a nonconvex constrained optimization problem, the classical Lagrange primal-dual method may fail to find a minimum, as a zero duality gap is not always guaranteed. A large penalty parameter is, in general, required for classical quadratic penalty functions in order that minima of penalty problems are a good approximation to those of the original constrained optimization problems. It is well known that penalty functions with too large parameters cause an obstacle for numerical implementation. Thus the question arises of how to generalize classical Lagrange and penalty functions in order to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones that will be suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints. Some approaches to such a scheme are studied in this book. One of them is as follows: an unconstrained problem is constructed, where the objective function is a convolution of the objective and constraint functions of the original problem. While a linear convolution leads to a classical Lagrange function, different kinds of nonlinear convolutions lead to interesting generalizations. We shall call functions that appear as a convolution of the objective function and the constraint functions Lagrange-type functions.
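To make the convolution idea concrete, here is a minimal illustration in our own notation (not taken verbatim from the book), assuming the standard inequality-constrained problem of minimizing f(x) subject to g_i(x) <= 0, i = 1, ..., m. The classical Lagrange function is the linear convolution

    L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x),  with multipliers \lambda_i \ge 0,

whereas the classical quadratic penalty function mentioned above can be read as a nonlinear convolution of the same data,

    P(x, c) = f(x) + c \sum_{i=1}^{m} \bigl[\max(0, g_i(x))\bigr]^2,  with penalty parameter c > 0.

Lagrange-type functions, in the sense of this book, arise when more general (in particular nonlinear) convolutions of f and the g_i are allowed.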

Reviews

From the reviews:

"Lagrange and penalty functions provide a powerful approach for study of constrained optimization problems. … The book gives a systematic and unified presentation of many important results that have been obtained in this area during last several years. … The book develops a unified approach to duality and penalization and to convergence analysis of the first and second order optimality conditions. … A number of impressive new results on the existence of an exact penalty parameter have been obtained in the book." (Vladimir Gaitsgory, gazette The Australian Mathematical Society, Vol. 32 (4), 2005)

"In the monograph a whole optimization theory is developed … . Besides a large number of theoretical statements, results of numerical experiments showing usefulness of the presented approach are also reported … . It is shown that a much larger class of optimization problems than that of the convex ones allow for a thorough theoretical analysis and deep results. … The monograph can be recommended to researchers in mathematical optimization being in interested in nonconvex problems." (Stephan Dempe, OR-News, Issue 23, 2005)

Authors and Affiliations

  • Alexander Rubinov, School of Information Technology and Mathematical Sciences, University of Ballarat, Victoria, Australia

  • Xiaoqi Yang, Department of Applied Mathematics, Hong Kong Polytechnic University, Hong Kong, China

Bibliographic Information
