
A New Method for Solving Monotone Generalized Variational Inequalities

Open Access
Research Article

Abstract

We suggest new dual algorithms and iterative methods for solving monotone generalized variational inequalities. Instead of working on the primal space, these methods perform a dual step on the dual space by using the dual gap function. Under suitable conditions, we prove the convergence of the proposed algorithms and estimate their complexity to reach an $\varepsilon$-solution. Some preliminary computational results are reported.

Keywords

Variational inequality; variational inequality problem; projection point; polyhedral convex set; convex programming problem
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

1. Introduction

Let $C$ be a convex subset of the real Euclidean space $\mathbb{R}^n$, let $F$ be a continuous mapping from $C$ into $\mathbb{R}^n$, and let $\varphi$ be a lower semicontinuous convex function from $C$ into $\mathbb{R}\cup\{+\infty\}$. We say that a point $x^*\in C$ is a solution of the following generalized variational inequality if it satisfies

$$\langle F(x^*),\, x - x^*\rangle + \varphi(x) - \varphi(x^*) \ge 0 \quad \text{for all } x\in C, \tag{GVI}$$

where $\langle\cdot,\cdot\rangle$ denotes the standard dot product in $\mathbb{R}^n$.

Associated with problem (GVI) is its dual form, which is to find $x^*\in C$ such that

$$\langle F(x),\, x - x^*\rangle + \varphi(x) - \varphi(x^*) \ge 0 \quad \text{for all } x\in C. \tag{DGVI}$$

In recent years, generalized variational inequalities have become an attractive field for many researchers, with important applications in electricity markets, transportation, economics, and nonlinear analysis (see [1, 2, 3, 4, 5, 6, 7, 8, 9]).
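Assuming the standard form of (GVI), $\langle F(x^*),\, x - x^*\rangle + \varphi(x) - \varphi(x^*) \ge 0$ for all $x\in C$, a candidate solution can be sanity-checked numerically by testing the inequality over a grid of the feasible set. The one-dimensional data below ($C = [-1,1]$, $F(x) = x$, $\varphi(x) = |x|$) are illustrative only, not taken from the paper:

```python
import numpy as np

def is_gvi_solution(x, F, phi, grid, tol=1e-9):
    """Check <F(x), y - x> + phi(y) - phi(x) >= 0 for every grid point y in C."""
    return all(F(x) * (y - x) + phi(y) - phi(x) >= -tol for y in grid)

C = np.linspace(-1.0, 1.0, 2001)          # grid over C = [-1, 1]
F = lambda x: x                           # monotone operator
phi = abs                                 # nonsmooth convex term

print(is_gvi_solution(0.0, F, phi, C))    # True: x* = 0 solves (GVI)
print(is_gvi_solution(0.5, F, phi, C))    # False: the inequality fails at y = 0
```

Here $x^* = 0$ passes because $F(0) = 0$ and $\varphi(y) = |y| \ge 0$ make the defining inequality hold trivially.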

It is well known that interior quadratic and dual techniques are powerful tools for analyzing and solving optimization problems (see [10, 11, 12, 13, 14, 15, 16]). Recently, these techniques have been used to develop proximal iterative algorithms for variational inequalities (see [17, 18, 19, 20, 21, 22]).

In addition, Nesterov [23] introduced a dual extrapolation method for solving variational inequalities. Instead of working on the primal space, this method performs a dual step on the dual space.

In this paper we extend the results in [23] to the generalized variational inequality problem (GVI) in the dual space. In the first approach, a gap function $g$ is constructed such that $g(x)\ge 0$ for all $x\in C$, and $g(x^*)=0$ if and only if $x^*$ solves (GVI). Namely, we first develop a convergent algorithm for (GVI) with $F$ being a monotone function satisfying a certain Lipschitz-type condition on $C$. Next, in order to avoid the Lipschitz condition, we show how to find a regularization parameter at every iteration $k$ such that the resulting sequence converges to a solution of (GVI).

The remaining part of the paper is organized as follows. Section 2 collects preliminaries. Section 3 presents two convergent algorithms for monotone generalized variational inequality problems, with and without a Lipschitz condition. Section 4 reports some preliminary numerical results of the proposed methods.

2. Preliminaries

First, let us recall the well-known concepts of monotonicity that will be used in the sequel (see [24]).

Definition 2.1.

Let $C$ be a convex set in $\mathbb{R}^n$ and let $F: C \to \mathbb{R}^n$. The mapping $F$ is said to be

(i) monotone on $C$ if $\langle F(x)-F(y),\, x-y\rangle \ge 0$ for all $x,y\in C$;

(ii) pseudomonotone on $C$ if $\langle F(y),\, x-y\rangle \ge 0$ implies $\langle F(x),\, x-y\rangle \ge 0$ for all $x,y\in C$;

(iii) strongly monotone with modulus $\tau>0$ on $C$ (shortly $\tau$-strongly monotone) if $\langle F(x)-F(y),\, x-y\rangle \ge \tau\|x-y\|^2$ for all $x,y\in C$;

(iv) Lipschitz with constant $L>0$ on $C$ (shortly $L$-Lipschitz) if $\|F(x)-F(y)\| \le L\|x-y\|$ for all $x,y\in C$.

Note that when $\varphi$ is differentiable on some open set containing $C$, then, since $\varphi$ is lower semicontinuous proper convex, the generalized variational inequality (GVI) is equivalent to the variational inequality of finding $x^*\in C$ such that $\langle F(x^*)+\nabla\varphi(x^*),\, x-x^*\rangle \ge 0$ for all $x\in C$ (see [25, 26]).

Throughout this paper, we assume that:

(A1) the interior of $C$, $\operatorname{int} C$, is nonempty;

(A2) the set $C$ is bounded;

(A3) $F$ is upper semicontinuous on $C$, and $\varphi$ is proper, closed convex and subdifferentiable on $C$;

(A4) $F$ is monotone on $C$.
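To make the notions of Definition 2.1 concrete, consider the map $F(x) = Ax$ with $A$ the sum of the identity and a 90-degree rotation: it is monotone, $1$-strongly monotone, and $\sqrt{2}$-Lipschitz, which can be verified numerically. The matrix is a hypothetical example, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0], [-1.0, 1.0]])   # identity plus a 90-degree rotation
F = lambda x: A @ x

pairs = [(rng.normal(size=2), rng.normal(size=2)) for _ in range(1000)]
# <F(x) - F(y), x - y> = (x-y)^T A (x-y) = ||x-y||^2 since the skew part cancels
monotone = all((F(x) - F(y)) @ (x - y) >= -1e-12 for x, y in pairs)
strongly = all((F(x) - F(y)) @ (x - y) >= (x - y) @ (x - y) - 1e-9 for x, y in pairs)
# ||A d|| = sqrt(2) ||d|| because A / sqrt(2) is a rotation matrix
lipschitz = all(np.linalg.norm(F(x) - F(y)) <= np.sqrt(2) * np.linalg.norm(x - y) + 1e-9
                for x, y in pairs)
print(monotone, strongly, lipschitz)      # True True True
```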

In the special case $\varphi \equiv 0$, problem (GVI) reduces to the classical variational inequality (VI): find $x^*\in C$ such that $\langle F(x^*),\, x - x^*\rangle \ge 0$ for all $x\in C$.

It is well known that problem (VI) can be formulated as finding the zero points of the operator $T := F + N_C$, where $N_C$ denotes the normal cone operator of $C$.

The dual gap function of problem (GVI) is defined as follows:

$$g(x) := \sup\{\langle F(y),\, x - y\rangle + \varphi(x) - \varphi(y) : y \in C\}. \tag{2.7}$$

The following lemma gives two basic properties of the dual gap function (2.7), whose proof can be found, for instance, in [6].

Lemma 2.2.

The function $g$ is a gap function of (GVI), that is:

(i) $g(x) \ge 0$ for all $x \in C$;

(ii) $x^* \in C$ and $g(x^*) = 0$ if and only if $x^*$ is a solution to (DGVI). Moreover, if $F$ is pseudomonotone, then $x^*$ is a solution to (DGVI) if and only if it is a solution to (GVI).
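With $\varphi \equiv 0$, the dual gap function reduces to $g(x) = \sup_{y\in C}\langle F(y),\, x - y\rangle$, and Lemma 2.2 can be observed numerically on a one-dimensional example; the data below ($C = [-1,1]$, $F(y) = y$, unique solution $x^* = 0$) are illustrative only:

```python
import numpy as np

C = np.linspace(-1.0, 1.0, 4001)
F = lambda y: y                      # monotone on [-1, 1]; unique solution x* = 0

def dual_gap(x):
    # g(x) = max over y in C of <F(y), x - y>   (phi = 0 here)
    return np.max(F(C) * (x - C))

print(round(dual_gap(0.0), 6))   # 0.0    -> g vanishes at the solution
print(round(dual_gap(0.5), 6))   # 0.0625 -> g is positive away from the solution
```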

The problem $\sup\{\langle F(y),\, x - y\rangle + \varphi(x) - \varphi(y) : y \in C\}$ may not be solvable, and then the dual gap function $g$ may not be well defined. Instead of the gap function $g$, we therefore consider a truncated dual gap function $g_r$. Suppose that $\bar{x} \in C$ is fixed and $r > 0$. The truncated dual gap function is defined as follows:

$$g_r(x) := \sup\{\langle F(y),\, x - y\rangle + \varphi(x) - \varphi(y) : y \in C \cap B(\bar{x}, r)\}.$$

In the following, we denote by $B(\bar{x}, r)$ the closed ball in $\mathbb{R}^n$ centered at $\bar{x}$ with radius $r$, and set $C_r := C \cap B(\bar{x}, r)$. The following lemma gives some properties of $g_r$.

Lemma 2.3.

Under assumptions (A1)–(A4), the following properties hold.

(i) The function $g_r$ is well defined and convex on $C$.

(ii) If a point $x^* \in C_r$ is a solution to (DGVI), then $g_r(x^*) = 0$.

(iii) If there exists $x^*$ such that $g_r(x^*) = 0$ and $\|x^* - \bar{x}\| < r$, and $F$ is pseudomonotone, then $x^*$ is a solution to (DGVI) (and also to (GVI)).

Proof.
(i) Note that $\langle F(y),\, x - y\rangle + \varphi(x) - \varphi(y)$ is upper semicontinuous in $y$ for each fixed $x$, and $C_r$ is bounded. Therefore the supremum exists, which means that $g_r$ is well defined. Moreover, since $\langle F(y),\, x - y\rangle + \varphi(x) - \varphi(y)$ is convex in $x$ and $g_r$ is the supremum of a parametric family of convex functions (depending on the parameter $y$), $g_r$ is convex on $C$.

     
(ii) By definition, it is easy to see that $g_r(x) \ge 0$ for all $x \in C_r$. Let $x^* \in C_r$ be a solution of (DGVI). Then for every $y \in C_r$ we have $\langle F(y),\, x^* - y\rangle + \varphi(x^*) - \varphi(y) \le 0$. Taking the supremum over $y \in C_r$, we get $g_r(x^*) \le 0$; this implies $g_r(x^*) = 0$.
(iii) For some $x^*$, $g_r(x^*) = 0$ means that $x^*$ is a solution to (DGVI) restricted to $C_r$. Since $F$ is pseudomonotone, $x^*$ is also a solution to (GVI) restricted to $C_r$. Since $\|x^* - \bar{x}\| < r$, for any $x \in C$ we can choose $\varepsilon > 0$ sufficiently small such that $x^* + \varepsilon(x - x^*) \in C_r$, and hence

$$0 \le \varepsilon\langle F(x^*),\, x - x^*\rangle + \varphi(x^* + \varepsilon(x - x^*)) - \varphi(x^*) \le \varepsilon\big[\langle F(x^*),\, x - x^*\rangle + \varphi(x) - \varphi(x^*)\big],$$

where the last inequality (2.13) follows from the convexity of $\varphi$. Since $\varepsilon > 0$, dividing this inequality by $\varepsilon$, we obtain that $x^*$ is a solution to (GVI) on $C$. Since $F$ is pseudomonotone, $x^*$ is also a solution to (DGVI).

Let $K$ be a nonempty, closed convex set and $x \in \mathbb{R}^n$. Let us denote by $d(x, K)$ the Euclidean distance from $x$ to $K$ and by $\mathrm{Pr}_K(x)$ the point attaining this distance, that is,

$$\mathrm{Pr}_K(x) := \arg\min\{\|y - x\| : y \in K\}, \qquad d(x, K) = \|x - \mathrm{Pr}_K(x)\|. \tag{2.14}$$

As usual, $\mathrm{Pr}_K$ is referred to as the Euclidean projection onto the convex set $K$. It is well known that $\mathrm{Pr}_K$ is a nonexpansive and co-coercive operator on $\mathbb{R}^n$ (see [27, 28]).
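Both properties of the projection mentioned above (nonexpansiveness and co-coercivity, also called firm nonexpansiveness) are easy to check numerically for the projection onto a box, where the projection is componentwise clipping:

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n (componentwise clipping)."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(1)
pairs = [(rng.normal(scale=3.0, size=5), rng.normal(scale=3.0, size=5))
         for _ in range(1000)]
# nonexpansiveness: ||Px - Py|| <= ||x - y||
nonexpansive = all(np.linalg.norm(proj_box(x) - proj_box(y))
                   <= np.linalg.norm(x - y) + 1e-12 for x, y in pairs)
# co-coercivity: <Px - Py, x - y> >= ||Px - Py||^2
cocoercive = all((proj_box(x) - proj_box(y)) @ (x - y)
                 >= (proj_box(x) - proj_box(y)) @ (proj_box(x) - proj_box(y)) - 1e-12
                 for x, y in pairs)
print(nonexpansive, cocoercive)   # True True
```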

The following lemma gives a tool for the next discussion.

Lemma 2.4.

For any Open image in new window and for any Open image in new window , the function Open image in new window and the mapping Open image in new window defined by (2.14) satisfy

Proof.

Inequality (2.15) is obvious from the property of the projection Open image in new window (see [27]). Now, we prove the inequality (2.16). For any Open image in new window , applying (2.15) we have
Using the definition of Open image in new window and noting that Open image in new window and taking minimum with respect to Open image in new window in (2.18), then we have

which proves (2.16).

From the definition of Open image in new window , we have

Since Open image in new window , applying (2.15) with Open image in new window instead of Open image in new window and Open image in new window for (2.20), we obtain the last inequality in Lemma 2.4.

For a given integer $N$, we consider a finite sequence of arbitrary points Open image in new window , a finite sequence of arbitrary points Open image in new window , and a finite positive sequence Open image in new window . Let us define

Then an upper bound of the dual gap function Open image in new window is estimated in the following lemma.

Lemma 2.5.

Suppose that Assumptions (A1)–(A4) are satisfied and

Then, for any Open image in new window ,

(i) Open image in new window , for all Open image in new window , Open image in new window .

(ii) Open image in new window .

Proof.
(i) We define Open image in new window as the Lagrange function of the maximization problem Open image in new window . Using duality theory in convex optimization, we have
     
 (ii) From the monotonicity of Open image in new window and (2.22), we have
Combining (2.24), Lemma 2.5(i), and the above estimates, we obtain (ii).

3. Dual Algorithms

Now, we are going to build the dual interior proximal step for solving (GVI). The main idea is to construct a sequence Open image in new window such that the sequence Open image in new window tends to 0 as Open image in new window . By virtue of Lemma 2.5, we can check whether Open image in new window is an Open image in new window -solution to (GVI) or not.

The dual interior proximal step Open image in new window at the iteration Open image in new window is generated by using the following scheme:

where Open image in new window and Open image in new window are given parameters, Open image in new window is the solution to (2.22).

The following lemma shows an important property of the sequence Open image in new window .

Lemma 3.1.

The sequence Open image in new window generated by scheme (3.1) satisfies

Proof.

This implies that
From the subdifferentiability of the convex function Open image in new window in scheme (3.1), using the first-order necessary optimality condition, we have
for all Open image in new window . This inequality implies that

where Open image in new window .

We apply inequality (3.4) with Open image in new window , Open image in new window and Open image in new window and using (3.8) to obtain
Combining this inequality and (3.6), we get
On the other hand, if we denote Open image in new window , then it follows that
Combining (3.10) and (3.11), we get

which proves (3.2).

On the other hand, from (3.9) we have

Then the inequality (3.3) is deduced from this inequality and (3.6).

The dual algorithm is an iterative method which generates a sequence Open image in new window based on scheme (3.1). The algorithm is presented in detail as follows:

Algorithm 3.2.


Initialization:

Given a tolerance Open image in new window , fix an arbitrary point Open image in new window and choose Open image in new window , Open image in new window . Take Open image in new window and Open image in new window .

Iterations:

For each Open image in new window , execute four steps below.

Step 1.

Compute a projection point Open image in new window by taking

Step 2.

Solve the strongly convex programming problem

to get the unique solution Open image in new window .

Step 3.

Set Open image in new window .

Step 4.

If Open image in new window , where Open image in new window is a given tolerance, then stop.

Otherwise, increase Open image in new window by 1 and go back to Step 1.

Output:

Compute the final output Open image in new window as:

Now, we prove the convergence of Algorithm 3.2 and estimate its complexity.

Theorem 3.3.

Suppose that assumptions (A1)–(A3) are satisfied and Open image in new window is Open image in new window -Lipschitz continuous on Open image in new window . Then, one has

where Open image in new window is the final output defined by the sequence Open image in new window in Algorithm 3.2. As a consequence, the sequence Open image in new window converges to 0, and the number of iterations needed to reach an $\varepsilon$-solution is Open image in new window , where Open image in new window denotes the largest integer such that Open image in new window .

Proof.

Substituting (3.20) into (3.2), we obtain
If we choose Open image in new window for all Open image in new window in (2.21), then we have
Hence, from Lemma 2.5(ii), we have
Using inequality (3.22) and Open image in new window , it implies that
Note that Open image in new window . It follows from the inequalities (3.24) and (3.25) that

which implies that Open image in new window . When the termination criterion at Step 4 is satisfied, using inequality (2.26) we obtain that Open image in new window is an $\varepsilon$-solution, and the number of iterations needed to reach it is Open image in new window .
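The displayed formulas of Algorithm 3.2 are not available here; the sketch below assumes the scheme follows Nesterov's dual extrapolation template [23] with the Euclidean distance and $\varphi \equiv 0$: a projection point is computed from the accumulated dual vector (Step 1), a prox step gives the next iterate (Step 2), the dual vector is updated (Step 3), and the output is the averaged iterate. All concrete data (operator, feasible set, parameters) are hypothetical:

```python
import numpy as np

def dual_extrapolation(F, proj, x_bar, beta, iters):
    """Euclidean sketch of a Nesterov-style dual extrapolation scheme:
    accumulate a dual vector s, map it back to the feasible set, take a
    projection/prox step, and average the iterates."""
    s = np.zeros_like(x_bar)
    avg = np.zeros_like(x_bar)
    for _ in range(iters):
        z = proj(x_bar + s / beta)        # Step 1: projection point from the dual vector
        x = proj(z - F(z) / beta)         # Step 2: prox step (strongly convex subproblem)
        s = s - F(x)                      # Step 3: dual update
        avg += x
    return avg / iters                    # output: averaged iterate

# Monotone (but not strongly monotone) rotation field on the box [-1, 1]^2;
# the unique solution is the origin.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x                       # 1-Lipschitz, monotone
proj = lambda x: np.clip(x, -1.0, 1.0)

x_hat = dual_extrapolation(F, proj, x_bar=np.array([0.5, 0.3]), beta=2.0, iters=4000)
print(np.linalg.norm(x_hat) < 0.1)        # averaged iterate approaches the solution
```

Choosing $\beta \ge L$ (here $\beta = 2$, $L = 1$) mirrors the role of the regularization parameter in the convergence analysis above.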

If there is no guarantee for the Lipschitz condition, but the sequences Open image in new window and Open image in new window are uniformly bounded, we suppose that

then the algorithm can be modified to ensure that it still converges. The variant of Algorithm 3.2 is presented as Algorithm 3.4 below.

Algorithm 3.4.


Initialization:

Fix an arbitrary point Open image in new window and set Open image in new window . Take Open image in new window and Open image in new window . Choose Open image in new window for all Open image in new window .

Iterations:

For each Open image in new window execute the following steps.

Step 1.

Compute the projection point Open image in new window by taking

Step 2.

Solve the strongly convex programming problem

to get the unique solution Open image in new window .

Step 3.

Set Open image in new window .

Step 4.

If Open image in new window , where Open image in new window is a given tolerance, then stop.

Otherwise, increase Open image in new window by 1, update Open image in new window and go back to Step 1.

Output:

Compute the final output Open image in new window as

The next theorem shows the convergence of Algorithm 3.4.

Theorem 3.5.

Let assumptions (A1)–(A3) be satisfied and the sequence Open image in new window be generated by Algorithm 3.4. Suppose that the sequences Open image in new window and Open image in new window are uniformly bounded by (3.27). Then, we have

As a consequence, the sequence Open image in new window converges to 0 and the number of iterations needed to reach an $\varepsilon$-solution is Open image in new window .

Proof.

If we choose Open image in new window for all Open image in new window in (2.21), then we have Open image in new window . Since Open image in new window , it follows from Step 3 of Algorithm 3.4 that
From (3.34) and Lemma 2.5(ii), for all Open image in new window we have
We define Open image in new window . Then, we have
The derivative of Open image in new window is given by
Thus Open image in new window is nonincreasing. Combining this with (3.36) and Open image in new window , we have
Combining (3.39) and this inequality, we have
By induction on Open image in new window , it follows from (3.41) and Open image in new window that
From (3.35) and (3.42), we obtain

which implies that Open image in new window . The remainder of the theorem follows directly from (3.33).

4. Illustrative Example and Numerical Results

In this section, we illustrate the proposed algorithms on a class of generalized variational inequalities (GVI), where Open image in new window is a polyhedral convex set given by
where Open image in new window , Open image in new window is a symmetric positive semidefinite matrix and Open image in new window . The function Open image in new window is defined by

Then Open image in new window is subdifferentiable, but it is not differentiable on Open image in new window .
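The definition of $\varphi$ was lost in the source; a typical convex function that is subdifferentiable everywhere but not differentiable, as described, is a pointwise maximum of affine functions. The sketch below, with hypothetical data, computes one subgradient (the gradient of an active piece) and verifies the subgradient inequality:

```python
import numpy as np

# phi(x) = max_i (c_i . x + d_i): convex, subdifferentiable everywhere,
# nondifferentiable where two or more pieces attain the maximum.
Cmat = np.array([[1.0, 0.0], [-1.0, 0.0]])    # hypothetical pieces: phi(x) = |x_1|
dvec = np.array([0.0, 0.0])

def phi(x):
    return np.max(Cmat @ x + dvec)

def subgrad(x):
    i = int(np.argmax(Cmat @ x + dvec))        # any active piece gives a subgradient
    return Cmat[i]

x0 = np.array([0.5, 0.2])
g0 = subgrad(x0)
# subgradient inequality: phi(y) >= phi(x0) + g0 . (y - x0) for all y
ys = np.random.default_rng(3).normal(size=(1000, 2))
ok = all(phi(y) >= phi(x0) + g0 @ (y - x0) - 1e-12 for y in ys)
print(ok)   # True
```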

For this class of problems (GVI), we have the following results.

Lemma 4.1.

Let Open image in new window . Then

(i)if Open image in new window is Open image in new window -strongly monotone on Open image in new window , then Open image in new window is monotone on Open image in new window whenever Open image in new window .

(ii)if Open image in new window is Open image in new window -strongly monotone on Open image in new window , then Open image in new window is Open image in new window -strongly monotone on Open image in new window whenever Open image in new window .

(iii)if Open image in new window is Open image in new window -Lipschitz on Open image in new window , then Open image in new window is Open image in new window -Lipschitz on Open image in new window .

Proof.

Then (i) and (ii) easily follow.

Using the Lipschitz condition, it is not difficult to obtain (iii).
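The displayed conditions of Lemma 4.1 were lost; the following sketch only illustrates the general mechanism behind parts (i) and (ii): a $\tau$-strongly monotone map absorbs any Lipschitz perturbation with constant at most $\tau$, so the sum remains monotone. The concrete maps are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
tau, kappa = 1.0, 0.8                      # tau-strong monotonicity absorbs kappa <= tau
B = lambda x: tau * x                      # tau-strongly monotone
R = np.array([[0.0, 1.0], [-1.0, 0.0]])
G = lambda x: kappa * np.sin(R @ x)        # kappa-Lipschitz perturbation (R orthogonal)
F = lambda x: B(x) + G(x)

pairs = [(rng.normal(size=2), rng.normal(size=2)) for _ in range(1000)]
# <F(x) - F(y), x - y> >= (tau - kappa) ||x - y||^2 >= 0
mono = all((F(x) - F(y)) @ (x - y) >= -1e-12 for x, y in pairs)
print(mono)   # True
```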

To illustrate our algorithms, we consider the following data.
with Open image in new window , Open image in new window , Open image in new window . By Lemma 4.1, Open image in new window is monotone on Open image in new window . The subproblems in Algorithm 3.2 can be solved efficiently, for example, by using the MATLAB Optimization Toolbox (R2008a). We obtain the approximate solution
Now we use Algorithm 3.4 on the same variational inequalities except that

where the Open image in new window components of Open image in new window are defined by Open image in new window , with Open image in new window randomly chosen in Open image in new window , and the Open image in new window components of Open image in new window are randomly chosen in Open image in new window . The function Open image in new window is taken from Bnouhachem [19]. Under these assumptions, it can be proved that Open image in new window is continuous and monotone on Open image in new window .

With Open image in new window and the tolerance Open image in new window , we obtained the computational results reported in Table 1.
Table 1: Numerical results of Algorithm 3.4 with Open image in new window , iterations 1–10. (Most entries of the table were lost in extraction and are not reproduced here.)

Acknowledgments

The authors would like to thank the referees for their useful comments, remarks, and suggestions. This work was completed while the first author was staying at Kyungnam University under the NRF Postdoctoral Fellowship for Foreign Researchers. The second author was supported by the Kyungnam University Research Fund, 2010.

References

1. Anh PN, Muu LD, Strodiot J-J: Generalized projection method for non-Lipschitz multivalued monotone variational inequalities. Acta Mathematica Vietnamica 2009, 34(1):67–79.
2. Anh PN, Muu LD, Nguyen VH, Strodiot J-J: Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. Journal of Optimization Theory and Applications 2005, 124(2):285–306. doi:10.1007/s10957-004-0926-0
3. Bello Cruz JY, Iusem AN: Convergence of direct methods for paramonotone variational inequalities. Computational Optimization and Applications 2010, 46(2):247–263. doi:10.1007/s10589-009-9246-5
4. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York, NY, USA; 2003.
5. Fukushima M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Mathematical Programming 1992, 53(1):99–110. doi:10.1007/BF01585696
6. Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin, Germany; 2000.
7. Mashreghi J, Nasri M: Forcing strong convergence of Korpelevich's method in Banach spaces with its applications in game theory. Nonlinear Analysis: Theory, Methods & Applications 2010, 72(3–4):2086–2099. doi:10.1016/j.na.2009.10.009
8. Noor MA: Iterative schemes for quasimonotone mixed variational inequalities. Optimization 2001, 50(1–2):29–44. doi:10.1080/02331930108844552
9. Zhu DL, Marcotte P: Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities. SIAM Journal on Optimization 1996, 6(3):714–726. doi:10.1137/S1052623494250415
10. Daniele P, Giannessi F, Maugeri A: Equilibrium Problems and Variational Models. Nonconvex Optimization and Its Applications, Volume 68. Kluwer Academic Publishers, Norwell, Mass, USA; 2003.
11. Fang SC, Peterson EL: Generalized variational inequalities. Journal of Optimization Theory and Applications 1982, 38(3):363–383. doi:10.1007/BF00935344
12. Goh CJ, Yang XQ: Duality in Optimization and Variational Inequalities. Optimization Theory and Applications, Volume 2. Taylor & Francis, London, UK; 2002.
13. Iusem AN, Nasri M: Inexact proximal point methods for equilibrium problems in Banach spaces. Numerical Functional Analysis and Optimization 2007, 28(11–12):1279–1308. doi:10.1080/01630560701766668
14. Kim JK, Kim KS: New systems of generalized mixed variational inequalities with nonlinear mappings in Hilbert spaces. Journal of Computational Analysis and Applications 2010, 12(3):601–612.
15. Kim JK, Kim KS: A new system of generalized nonlinear mixed quasivariational inequalities and iterative algorithms in Hilbert spaces. Journal of the Korean Mathematical Society 2007, 44(4):823–834. doi:10.4134/JKMS.2007.44.4.823
16. Waltz RA, Morales JL, Nocedal J, Orban D: An interior algorithm for nonlinear optimization that combines line search and trust region steps. Mathematical Programming 2006, 107(3):391–408. doi:10.1007/s10107-004-0560-5
17. Anh PN: An interior proximal method for solving monotone generalized variational inequalities. East-West Journal of Mathematics 2008, 10(1):81–100.
18. Auslender A, Teboulle M: Interior projection-like methods for monotone variational inequalities. Mathematical Programming 2005, 104(1):39–68. doi:10.1007/s10107-004-0568-x
19. Bnouhachem A: An LQP method for pseudomonotone variational inequalities. Journal of Global Optimization 2006, 36(3):351–363. doi:10.1007/s10898-006-9013-4
20. Iusem AN, Nasri M: Augmented Lagrangian methods for variational inequality problems. RAIRO Operations Research 2010, 44(1):5–25. doi:10.1051/ro/2010006
21. Kim JK, Cho SY, Qin X: Hybrid projection algorithms for generalized equilibrium problems and strictly pseudocontractive mappings. Journal of Inequalities and Applications 2010, 2010.
22. Kim JK, Buong N: Regularization inertial proximal point algorithm for monotone hemicontinuous mapping and inverse strongly monotone mappings in Hilbert spaces. Journal of Inequalities and Applications 2010, 2010.
23. Nesterov Y: Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming 2007, 109(2–3):319–344. doi:10.1007/s10107-006-0034-z
24. Aubin J-P, Ekeland I: Applied Nonlinear Analysis. Pure and Applied Mathematics. John Wiley & Sons, New York, NY, USA; 1984.
25. Anh PN, Muu LD: Coupling the Banach contraction mapping principle and the proximal point algorithm for solving monotone variational inequalities. Acta Mathematica Vietnamica 2004, 29(2):119–133.
26. Cohen G: Auxiliary problem principle extended to variational inequalities. Journal of Optimization Theory and Applications 1988, 59(2):325–333.
27. Mangasarian OL, Solodov MV: A linearly convergent derivative-free descent method for strongly monotone complementarity problems. Computational Optimization and Applications 1999, 14(1):5–16. doi:10.1023/A:1008752626695
28. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization 1976, 14(5):877–898. doi:10.1137/0314056

Copyright information

© Pham Ngoc Anh and Jong Kyu Kim. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

1. Department of Mathematics, Kyungnam University, Masan, Republic of Korea
