# ε-Duality Theorems for Convex Semidefinite Optimization Problems with Conic Constraints

Open Access
Research Article
Part of the following topical collections:
1. Selected Papers from the 10th International Conference 2009 on Nonlinear Functional Analysis and Applications

## Abstract

A convex semidefinite optimization problem with a conic constraint is considered. We formulate a Wolfe-type dual problem for its ε-approximate solutions, and then prove an ε-weak duality theorem and an ε-strong duality theorem which hold between the problem and its Wolfe-type dual problem. Moreover, we give an example illustrating the duality theorems.

### Keywords

Approximate Solution · Feasible Solution · Convex Function · Linear Matrix Inequality · Constraint Qualification

## 1. Introduction

A convex semidefinite optimization problem is to optimize a convex objective function over a linear matrix inequality constraint. When the objective function is linear and the corresponding matrices are diagonal, this problem becomes a linear optimization problem.

For convex semidefinite optimization problems, Lagrangian duality without constraint qualifications [1, 2], complete dual characterization conditions of solutions [1, 3, 4], saddle point theorems [5], and characterizations of optimal solution sets [6, 7] have been investigated.

To get ε-approximate solutions, many authors have established ε-optimality conditions, ε-saddle point theorems, and ε-duality theorems for several kinds of optimization problems [1, 8, 9, 10, 11, 12, 13, 14, 15, 16].

Recently, Jeyakumar and Glover [11] gave ε-optimality conditions for convex optimization problems, which hold without any constraint qualification. Yokoyama and Shiraishi [16] gave a special case of convex optimization problem which satisfies ε-optimality conditions. Kim and Lee [12] proved sequential ε-saddle point theorems and ε-duality theorems for convex semidefinite optimization problems without conic constraints.

The purpose of this paper is to extend the ε-duality theorems of Kim and Lee [12] to convex semidefinite optimization problems with conic constraints. We formulate a Wolfe-type dual problem for its ε-approximate solutions, and then prove an ε-weak duality theorem and an ε-strong duality theorem for the problem and its Wolfe-type dual problem, which hold under a weakened constraint qualification. Moreover, we give an example illustrating the duality theorems.

## 2. Preliminaries

Consider the following convex semidefinite optimization problem:

$$\text{(SDP)} \qquad \min \; f(x) \quad \text{subject to} \quad x \in C, \;\; F_0 + x_1 F_1 + \cdots + x_n F_n \succeq 0,$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a convex function, $C$ is a closed convex cone of $\mathbb{R}^n$, and $F_i \in S^m$ for $i = 0, 1, \ldots, n$, where $S^m$ is the space of $m \times m$ real symmetric matrices. The space $S^m$ is partially ordered by the Löwner order, that is, $M \succeq N$ for $M, N \in S^m$ if and only if $M - N$ is positive semidefinite. The inner product in $S^m$ is defined by $\langle M, N \rangle = \operatorname{Tr}(MN)$, where $\operatorname{Tr}$ is the trace operation.
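As a quick numeric sketch of the Löwner order and the trace inner product (the $2 \times 2$ matrices and helper names below are illustrative, not from the paper): a symmetric $2 \times 2$ matrix $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$ is positive semidefinite exactly when $a \ge 0$, $c \ge 0$, and $ac - b^2 \ge 0$.

```python
# Illustrative helpers (not from the paper): check M ⪰ N in the Loewner order
# via positive semidefiniteness of M - N, and compute the trace inner product.

def is_psd_2x2(m):
    """PSD test for a symmetric 2x2 matrix [[a, b], [b, c]]."""
    (a, b), (b2, c) = m
    assert b == b2, "matrix must be symmetric"
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def loewner_geq(m, n):
    """M ⪰ N in the Loewner order iff M - N is positive semidefinite."""
    diff = [[m[i][j] - n[i][j] for j in range(2)] for i in range(2)]
    return is_psd_2x2(diff)

def trace_inner(m, n):
    """Trace inner product <M, N> = Tr(M N) for symmetric matrices."""
    return sum(m[i][j] * n[j][i] for i in range(2) for j in range(2))

M = [[3.0, 1.0], [1.0, 2.0]]
N = [[1.0, 0.0], [0.0, 1.0]]
print(loewner_geq(M, N))   # M - N = [[2,1],[1,1]] has nonneg. diagonal and det 1: True
print(trace_inner(M, N))   # N is the identity, so this is Tr(M) = 5.0
```
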

Let $S^m_+ = \{ M \in S^m : M \succeq 0 \}$. Then $S^m_+$ is self-dual, that is, $S^m_+ = \{ M \in S^m : \langle M, N \rangle \ge 0 \ \text{for all} \ N \in S^m_+ \}$.
Let $F(x) = F_0 + x_1 F_1 + \cdots + x_n F_n$ and $\bar F(x) = x_1 F_1 + \cdots + x_n F_n$. Then $\bar F$ is a linear operator from $\mathbb{R}^n$ to $S^m$ and its dual $\bar F^* : S^m \to \mathbb{R}^n$ is defined by

$$\bar F^*(Z) = \big( \operatorname{Tr}(F_1 Z), \ldots, \operatorname{Tr}(F_n Z) \big)$$

for any $Z \in S^m$. Clearly, $A := \{ x \in C : F(x) \succeq 0 \}$ is the feasible set of SDP.
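The defining identity of the dual operator, $\langle \bar F(x), Z \rangle = \langle x, \bar F^*(Z) \rangle$, can be checked numerically. The sketch below is a toy instance under assumed data ($n = m = 2$, with illustrative matrices $F_1, F_2$), not the paper's example.

```python
# Illustrative check of the adjoint identity <F̄(x), Z> = <x, F̄*(Z)>,
# where F̄(x) = x_1 F_1 + x_2 F_2 and F̄*(Z) = (Tr(F_1 Z), Tr(F_2 Z)).
# The matrices F_1, F_2 below are assumptions made for the demo.

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a))] for i in range(len(a))]

def mat_scale(t, a):
    return [[t * a[i][j] for j in range(len(a))] for i in range(len(a))]

def trace_prod(a, b):
    """Tr(A B) for square matrices A, B."""
    n = len(a)
    return sum(a[i][j] * b[j][i] for i in range(n) for j in range(n))

F = [[[1.0, 0.0], [0.0, 0.0]],   # F_1 (assumed)
     [[0.0, 1.0], [1.0, 0.0]]]   # F_2 (assumed)

def op(x):
    """Linear operator x ↦ x_1 F_1 + x_2 F_2."""
    out = [[0.0, 0.0], [0.0, 0.0]]
    for xi, Fi in zip(x, F):
        out = mat_add(out, mat_scale(xi, Fi))
    return out

def adjoint(z):
    """Dual operator Z ↦ (Tr(F_1 Z), Tr(F_2 Z))."""
    return [trace_prod(Fi, z) for Fi in F]

x = [2.0, 3.0]
Z = [[1.0, 0.5], [0.5, 2.0]]
lhs = trace_prod(op(x), Z)                           # <F̄(x), Z>
rhs = sum(xi * vi for xi, vi in zip(x, adjoint(Z)))  # <x, F̄*(Z)>
print(abs(lhs - rhs) < 1e-12)  # the adjoint identity holds: True
```
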

Definition 2.1.

Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function.

(1) The subdifferential of $f$ at $\bar x \in \mathbb{R}^n$ is given by

$$\partial f(\bar x) = \{ v \in \mathbb{R}^n : f(x) \ge f(\bar x) + \langle v, x - \bar x \rangle \ \text{for all} \ x \in \mathbb{R}^n \},$$

where $\langle \cdot, \cdot \rangle$ is the scalar product on $\mathbb{R}^n$.

(2) Let $\varepsilon \ge 0$. The $\varepsilon$-subdifferential of $f$ at $\bar x \in \mathbb{R}^n$ is given by

$$\partial_\varepsilon f(\bar x) = \{ v \in \mathbb{R}^n : f(x) \ge f(\bar x) + \langle v, x - \bar x \rangle - \varepsilon \ \text{for all} \ x \in \mathbb{R}^n \}.$$
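The ε-subdifferential can be probed numerically. For the illustrative choice $f(x) = x^2$ and $\bar x = 0$ (an assumption for this sketch, not the paper's data), the defining inequality reduces to $x^2 - vx + \varepsilon \ge 0$ for all $x$, i.e. $|v| \le 2\sqrt{\varepsilon}$.

```python
# Illustrative grid test of v ∈ ∂_ε f(x̄) for the assumed f(x) = x², x̄ = 0:
# v is an ε-subgradient iff f(x) >= f(x̄) + v (x - x̄) - ε for every x,
# which here means |v| <= 2 sqrt(ε).

import math

def in_eps_subdiff_sq(v, eps, xbar=0.0, grid=None):
    """Numerically test v ∈ ∂_ε f(x̄) for f(x) = x² on a sample grid."""
    if grid is None:
        grid = [i / 100.0 for i in range(-500, 501)]   # grid on [-5, 5]
    f = lambda x: x * x
    return all(f(x) >= f(xbar) + v * (x - xbar) - eps for x in grid)

eps = 0.25                                   # so 2 sqrt(ε) = 1
print(in_eps_subdiff_sq(0.9, eps))           # |0.9| <= 1: True
print(in_eps_subdiff_sq(1.1, eps))           # |1.1| >  1: False
```
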

Definition 2.2.

Let $\varepsilon \ge 0$ and $\bar x \in A$. Then $\bar x$ is called an $\varepsilon$-approximate solution of SDP if, for any $x \in A$, $f(x) \ge f(\bar x) - \varepsilon$.
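A one-dimensional sketch of Definition 2.2, under assumed toy data: for $\min x^2$ over the feasible set $x \ge 1$, the optimal value is $1$, so the ε-approximate solutions are exactly the feasible $\bar x$ with $\bar x^2 \le 1 + \varepsilon$.

```python
# Illustrative test: x̄ is an ε-approximate solution when f(x̄) <= f(x) + ε
# for every feasible x. Toy problem (assumed for the demo): min x² over x >= 1.

def is_eps_approx(xbar, eps, feas_grid):
    f = lambda x: x * x
    return all(f(xbar) <= f(x) + eps for x in feas_grid)

eps = 0.21
feas = [1.0 + i / 1000.0 for i in range(2001)]   # grid on [1, 3]
print(is_eps_approx(1.05, eps, feas))            # 1.05² = 1.1025 <= 1.21: True
print(is_eps_approx(1.2, eps, feas))             # 1.2²  = 1.44   >  1.21: False
```
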

Definition 2.3.

The conjugate function of a function $h : \mathbb{R}^n \to \mathbb{R} \cup \{ +\infty \}$ is defined by

$$h^*(v) = \sup \{ \langle v, x \rangle - h(x) : x \in \mathbb{R}^n \}.$$
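For the illustrative choice $h(x) = x^2$ (an assumption for this sketch), the supremum in the conjugate is attained at $x = v/2$, giving $h^*(v) = v^2/4$; a grid maximization recovers this value.

```python
# Illustrative grid approximation of the conjugate h*(v) = sup_x { v x - h(x) }
# for the assumed h(x) = x²; the exact value is v² / 4.

def conjugate_num(v, grid):
    """Approximate h*(v) for h(x) = x² by maximizing over a grid."""
    return max(v * x - x * x for x in grid)

grid = [i / 1000.0 for i in range(-5000, 5001)]   # grid on [-5, 5]
v = 3.0
approx = conjugate_num(v, grid)
print(abs(approx - v * v / 4) < 1e-5)             # matches 9/4: True
```
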

Definition 2.4.

The epigraph of a function $h : \mathbb{R}^n \to \mathbb{R} \cup \{ +\infty \}$ is defined by

$$\operatorname{epi} h = \{ (x, r) \in \mathbb{R}^n \times \mathbb{R} : h(x) \le r \}.$$

If is sublinear (i.e., convex and positively homogeneous of degree one), then , for all . If , , , then . It is worth noting that if is sublinear, then

Moreover, if is sublinear and if , , and , then

(2.10)

Definition 2.5.

Let $C$ be a closed convex set in $\mathbb{R}^n$ and $\bar x \in C$.

(1) The set $N_C(\bar x) = \{ v \in \mathbb{R}^n : \langle v, x - \bar x \rangle \le 0 \ \text{for all} \ x \in C \}$ is called the normal cone to $C$ at $\bar x$.

(2) Let $\varepsilon \ge 0$. The set $N_C^{\varepsilon}(\bar x) = \{ v \in \mathbb{R}^n : \langle v, x - \bar x \rangle \le \varepsilon \ \text{for all} \ x \in C \}$ is called the $\varepsilon$-normal set to $C$ at $\bar x$.

(3) When $C$ is a closed convex cone in $\mathbb{R}^n$, we denote $N_C(0)$ by $C^-$ and call it the negative dual cone of $C$.
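A numeric sketch of the ε-normal set under assumed toy data: for $C = [0, 1]$ and $\bar x = 1$, the condition $v(x - \bar x) \le \varepsilon$ for all $x \in C$ is worst at $x = 0$, so $N_C^\varepsilon(1) = [-\varepsilon, \infty)$.

```python
# Illustrative grid test of v ∈ N_C^ε(x̄) for the assumed C = [0, 1], x̄ = 1:
# v is ε-normal iff v (x - 1) <= ε for every x in [0, 1], i.e. v >= -ε.

def in_eps_normal(v, eps, xbar=1.0, grid=None):
    if grid is None:
        grid = [i / 1000.0 for i in range(1001)]   # grid on C = [0, 1]
    return all(v * (x - xbar) <= eps for x in grid)

eps = 0.1
print(in_eps_normal(-0.05, eps))   # -0.05 >= -ε: True
print(in_eps_normal(5.0, eps))     # any v >= 0 is ε-normal at the right endpoint: True
print(in_eps_normal(-0.2, eps))    # at x = 0: v (x - 1) = 0.2 > ε: False
```
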

Proposition 2.6 (see [17, 18]).

Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function and let $\delta_C$ be the indicator function with respect to a closed convex subset $C$ of $\mathbb{R}^n$, that is, $\delta_C(x) = 0$ if $x \in C$, and $\delta_C(x) = +\infty$ if $x \notin C$. Let $\varepsilon \ge 0$. Then

$$\partial_\varepsilon (f + \delta_C)(\bar x) = \bigcup_{\substack{\varepsilon_0, \varepsilon_1 \ge 0 \\ \varepsilon_0 + \varepsilon_1 = \varepsilon}} \big( \partial_{\varepsilon_0} f(\bar x) + N_C^{\varepsilon_1}(\bar x) \big). \tag{2.11}$$

Proposition 2.7 (see [7]).

Let $f$ be a continuous convex function and let $g$ be a proper lower semicontinuous convex function. Then

$$\operatorname{epi}(f + g)^* = \operatorname{epi} f^* + \operatorname{epi} g^*. \tag{2.12}$$

Following the proof of the corresponding lemma in [1], we can prove the following lemma.

Lemma 2.8.

Let . Suppose that Let and . Then the following are equivalent:
(2.13)

## 3. ε-Duality Theorems

Now we give ε-duality theorems for SDP. Using Lemma 2.8, we can obtain the following lemma, which is useful in proving our ε-strong duality theorem for SDP.

Lemma 3.1.

Let . Suppose that
is closed. Then is an ε-approximate solution of SDP if and only if there exists such that, for any ,

Proof.

(⇒) Let be an ε-approximate solution of SDP. Then , for any . Let . Then , for any . Thus we have, from Proposition 2.7,
and hence, . So there exists such that and hence there exists such that for any . Since , for any ; and hence it follows from Lemma 2.8 that
Thus there exist , and such that
This gives
for any . Thus we have

for any .

(⇐) Suppose that there exists such that

for any . Then we have

for any . Thus , for any . Hence is an ε-approximate solution of SDP.

Now we formulate the dual problem SDD of SDP as follows:

(3.10)

We prove ε-weak and ε-strong duality theorems which hold between SDP and SDD.

Theorem 3.2 (ε-weak duality).

For any feasible solution of SDP and any feasible solution of SDD,
(3.11)

Proof.

Let and be feasible solutions of SDP and SDD, respectively. Then and there exist and such that . Thus, we have
(3.12)

Hence .
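The mechanism of Theorem 3.2 can be illustrated numerically. The sketch below is an assumption-laden toy: it uses the classical Wolfe dual of the scalar problem $\min x^2$ subject to $x \ge 1$ (a diagonal instance of SDP), not the paper's SDD, and checks that every primal feasible value dominates every dual value; ε-weak duality only relaxes this inequality by ε.

```python
# Illustrative weak-duality check for the assumed toy problem min x² s.t. x >= 1.
# Wolfe dual points are u with multiplier λ = 2u (stationarity of u² + λ(1 - u)),
# feasible when λ >= 0, i.e. u >= 0; the dual objective is u² + λ(1 - u) = 2u - u².

def primal_obj(x):
    return x * x                      # feasible when x >= 1

def wolfe_dual_obj(u):
    lam = 2 * u                       # stationarity: 2u - λ = 0; feasible when λ >= 0
    return u * u + lam * (1 - u)      # = 2u - u²

primal_grid = [1.0 + i / 100.0 for i in range(301)]   # x in [1, 4]
dual_grid = [i / 100.0 for i in range(401)]           # u in [0, 4]
print(all(primal_obj(x) >= wolfe_dual_obj(u)
          for x in primal_grid for u in dual_grid))   # weak duality holds: True
```

Both optimal values equal 1 (at $x = 1$, $u = 1$), so the toy problem also shows that the weak-duality inequality can be tight.
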

Theorem 3.3 (ε-strong duality).

Suppose that
(3.13)

is closed. If is an ε-approximate solution of SDP, then there exists such that is a 2ε-approximate solution of SDD.

Proof.

Let be an ε-approximate solution of SDP. Then for any . By Lemma 3.1, there exists such that
(3.14)

for any . Letting in (3.14), . Since and , .

Thus from (3.14),

(3.15)
for any . Hence is an ε-approximate solution of the following problem:
(3.16)
and so, , and hence, by Proposition 2.6, there exist , such that and
(3.17)
So, is a feasible solution of SDD. For any feasible solution of SDD,
(3.18)

Thus is a 2ε-approximate solution of SDD.

Now we characterize the ε-normal set to .

Proposition 3.4.

Let and Then
(3.19)
where
(3.20)

Proof.

Let and . Then
(3.21)
Let (where is at the th position in )
(3.22)
Thus, we have
(3.23)

From Proposition 3.4, we can calculate .

Corollary 3.5.

Let and . Then the following hold.

(iv)If and and , then
(3.24)

Now we give an example illustrating our ε-duality theorems.

Example 3.6.

Consider the following convex semidefinite program:
(3.25)
Let ,
(3.26)
and 0. Let and
(3.27)
Then is the set of all feasible solutions of SDP and the set of all ε-approximate solutions of SDP is . Let . Then is the set of all feasible solutions of SDD. Now we calculate the set .
(3.28)
Thus . We can check that for any and any ,
(3.29)

that is, ε-weak duality holds.

Let be an ε-approximate solution of SDP. Then and . So, we can easily check that .

Since , from (3.29),

(3.30)

for any . So is an ε-approximate solution of SDD. Hence ε-strong duality holds.

## Notes

### Acknowledgment

This work was supported by the Korea Science and Engineering Foundation (KOSEF) NRL Program grant funded by the Korean government (MEST) (no. R0A-2008-000-20010-0).

### References

1. Jeyakumar V, Dinh N: Avoiding duality gaps in convex semidefinite programming without Slater's condition. In Applied Mathematics Report. University of New South Wales, Sydney, Australia; 2004.
2. Ramana MV, Tunçel L, Wolkowicz H: Strong duality for semidefinite programming. SIAM Journal on Optimization 1997, 7(3):641–662. 10.1137/S1052623495288350
3. Jeyakumar V, Lee GM, Dinh N: New sequential Lagrange multiplier conditions characterizing optimality without constraint qualification for convex programs. SIAM Journal on Optimization 2003, 14(2):534–547. 10.1137/S1052623402417699
4. Jeyakumar V, Nealon MJ: Complete dual characterizations of optimality for convex semidefinite programming. In Constructive, Experimental, and Nonlinear Analysis (Limoges, 1999), CMS Conference Proceedings. Volume 27. American Mathematical Society, Providence, RI, USA; 2000:165–173.
5. Dinh N, Jeyakumar V, Lee GM: Sequential Lagrangian conditions for convex programs with applications to semidefinite programming. Journal of Optimization Theory and Applications 2005, 125(1):85–112. 10.1007/s10957-004-1712-8
6. Jeyakumar V, Lee GM, Dinh N: Lagrange multiplier conditions characterizing the optimal solution sets of cone-constrained convex programs. Journal of Optimization Theory and Applications 2004, 123(1):83–103.
7. Jeyakumar V, Lee GM, Dinh N: Characterizations of solution sets of convex vector minimization problems. European Journal of Operational Research 2006, 174(3):1380–1395. 10.1016/j.ejor.2005.05.007
8. Govil MG, Mehra A: ε-optimality for multiobjective programming on a Banach space. European Journal of Operational Research 2004, 157(1):106–112. 10.1016/S0377-2217(03)00206-6
9. Gutiérrez C, Jiménez B, Novo V: Multiplier rules and saddle-point theorems for Helbig's approximate solutions in convex Pareto problems. Journal of Global Optimization 2005, 32(3):367–383. 10.1007/s10898-004-5904-4
10. Hamel A: An ε-Lagrange multiplier rule for a mathematical programming problem on Banach spaces. Optimization 2001, 49(1–2):137–149. 10.1080/02331930108844524
11. Jeyakumar V, Glover BM: Characterizing global optimality for DC optimization problems under convex inequality constraints. Journal of Global Optimization 1996, 8(2):171–187. 10.1007/BF00138691
12. Kim GS, Lee GM: On ε-approximate solutions for convex semidefinite optimization problems. Taiwanese Journal of Mathematics 2007, 11(3):765–784.
13. Liu JC: ε-duality theorem of nondifferentiable nonconvex multiobjective programming. Journal of Optimization Theory and Applications 1991, 69(1):153–167. 10.1007/BF00940466
14. Liu JC: ε-Pareto optimality for nondifferentiable multiobjective programming via penalty function. Journal of Mathematical Analysis and Applications 1996, 198(1):248–261. 10.1006/jmaa.1996.0080
15. Strodiot J-J, Nguyen VH, Heukemes N: ε-optimal solutions in nondifferentiable convex programming and some related questions. Mathematical Programming 1983, 25(3):307–328. 10.1007/BF02594782
16. Yokoyama K, Shiraishi S: An ε-optimal condition for convex programming problems without Slater's constraint qualifications. preprint.
17. Hiriart-Urruty JB, Lemaréchal C: Convex Analysis and Minimization Algorithms. I. Fundamentals, Grundlehren der mathematischen Wissenschaften. Volume 305. Springer, Berlin, Germany; 1993.
18. Hiriart-Urruty JB, Lemaréchal C: Convex Analysis and Minimization Algorithms. II. Advanced Theory and Bundle Methods, Grundlehren der mathematischen Wissenschaften. Volume 306. Springer, Berlin, Germany; 1993.