# A New Method for Solving Monotone Generalized Variational Inequalities


## Abstract

We suggest new dual algorithms and iterative methods for solving monotone generalized variational inequalities. Instead of working in the primal space, these methods perform a dual step in the dual space by using the dual gap function. Under suitable conditions, we prove the convergence of the proposed algorithms and estimate their complexity to reach an ε-solution. Some preliminary computational results are reported.

### Keywords

Variational inequality, variational inequality problem, projection point, polyhedral convex, convex programming problem

## 1. Introduction

where ⟨·,·⟩ denotes the standard dot product in ℝⁿ.

In recent years, these generalized variational inequalities have become an attractive field for many researchers, with important applications in electricity markets, transportation, economics, and nonlinear analysis (see [1, 2, 3, 4, 5, 6, 7, 8, 9]).

It is well known that interior quadratic and dual techniques are powerful tools for analyzing and solving optimization problems (see [10, 11, 12, 13, 14, 15, 16]). Recently, these techniques have been used to develop proximal iterative algorithms for variational inequalities (see [17, 18, 19, 20, 21, 22]).

In addition, Nesterov [23] introduced a dual extrapolation method for solving variational inequalities. Instead of working in the primal space, this method performs a dual step in the dual space.
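The dual-step idea of [23] is easiest to see in a Euclidean setting. The sketch below is a simplified, single-valued variant under strong assumptions (Euclidean prox-mapping, box feasible set, fixed step `gamma`, a toy strongly monotone operator); it illustrates how operator values are accumulated in the dual space and pulled back to the feasible set, and is not the algorithm of this paper.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def dual_extrapolation(F, proj, x_bar, gamma, iters):
    """Euclidean sketch of a dual extrapolation scheme for the VI:
    find x* in C with <F(x*), x - x*> >= 0 for all x in C.
    The accumulator s lives in the dual space; only the prox-mapping
    (here a projection) pulls it back to the feasible set."""
    s = np.zeros_like(x_bar)
    for _ in range(iters):
        z = proj(x_bar + s)           # dual -> primal (prox-mapping)
        x = proj(z - gamma * F(z))    # forecast step at z
        s = s - gamma * F(x)          # accumulate operator values in dual space
    return x

# Toy monotone VI: F(x) = x - a on C = [-1, 1]^2; its solution is proj_C(a).
a = np.array([2.0, 0.5])
F = lambda x: x - a
x = dual_extrapolation(F, proj_box, np.zeros(2), gamma=0.5, iters=200)
print(np.round(x, 3))   # close to [1.0, 0.5]
```

Here the fixed point is reached quickly because the toy operator is strongly monotone; for merely monotone Lipschitz operators one would average the iterates, as the complexity estimates in [23] suggest.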

In this paper we extend the results in [23] to the generalized variational inequality problem (GVI) in the dual space. In the first approach, a gap function Open image in new window is constructed such that Open image in new window , for all Open image in new window , and Open image in new window if and only if Open image in new window solves (GVI). Namely, we first develop a convergent algorithm for (GVI) with Open image in new window being a monotone function that satisfies a certain Lipschitz-type condition on Open image in new window . Next, in order to avoid the Lipschitz condition, we show how to find a regularization parameter at every iteration Open image in new window such that the sequence Open image in new window converges to a solution of (GVI).

The remaining part of the paper is organized as follows. Section 2 recalls some preliminaries. In Section 3, we present two convergent algorithms for monotone generalized variational inequality problems, with and without a Lipschitz condition. Section 4 deals with an illustrative example and some preliminary numerical results of the proposed methods.

## 2. Preliminaries

First, let us recall the well-known concepts of monotonicity that will be used in the sequel (see [24]).

Definition 2.1.

Let Open image in new window be a convex set in Open image in new window , and Open image in new window . The function Open image in new window is said to be

Note that when Open image in new window is differentiable on some open set containing Open image in new window , then, since Open image in new window is lower semicontinuous proper convex, the generalized variational inequality (GVI) is equivalent to the following variational inequalities (see [25, 26]):
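The standard monotonicity notions recalled in Definition 2.1 can be probed numerically. Below is a sampled check (a heuristic, not a proof) of plain and strong monotonicity, with toy operators chosen purely for illustration: a skew-symmetric linear map is monotone but not strongly monotone, while the identity is 1-strongly monotone.

```python
import numpy as np

rng = np.random.default_rng(0)

def check_monotone(F, dim, beta=0.0, trials=1000):
    """Sampled check of <F(x) - F(y), x - y> >= beta * ||x - y||^2:
    beta = 0 tests plain monotonicity, beta > 0 strong monotonicity."""
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        if np.dot(F(x) - F(y), x - y) < beta * np.dot(x - y, x - y) - 1e-12:
            return False
    return True

skew = np.array([[0.0, 1.0], [-1.0, 0.0]])
F_rot = lambda x: skew @ x   # monotone (skew part contributes zero) but not strongly monotone
F_id  = lambda x: x          # 1-strongly monotone

print(check_monotone(F_rot, 2))              # True
print(check_monotone(F_rot, 2, beta=0.5))    # False
print(check_monotone(F_id, 2, beta=1.0))     # True
```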

Throughout this paper, we assume that:

(*A* _{ 1 }) the interior set of Open image in new window , int Open image in new window is nonempty,

(*A* _{ 2 }) the set Open image in new window is bounded,

(*A* _{ 3 }) Open image in new window is upper semicontinuous on Open image in new window , and Open image in new window is proper, closed convex and subdifferentiable on Open image in new window ,

(*A* _{ 4 }) Open image in new window is monotone on Open image in new window .

In the special case Open image in new window , problem (GVI) can be written as follows.

The following lemma gives two basic properties of the dual gap function (2.7) whose proof can be found, for instance, in [6].

Lemma 2.2.

The function Open image in new window is a gap function of (GVI), that is,

(i) Open image in new window for all Open image in new window ,

(ii) Open image in new window and Open image in new window if and only if Open image in new window is a solution to (DGVI). Moreover, if Open image in new window is pseudomonotone then Open image in new window is a solution to (DGVI) if and only if it is a solution to (GVI).
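The two properties in Lemma 2.2 can be observed on a toy problem. The sketch below approximates a dual (Minty-type) gap function of the form g(x) = sup over y in C of ⟨F(y), x − y⟩ by maximizing over a grid discretizing C; the one-dimensional operator and feasible set are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def dual_gap(F, x, y_grid):
    """Dual (Minty) gap g(x) = sup_{y in C} <F(y), x - y>,
    approximated by maximizing over a grid discretizing C."""
    return max(F(y) * (x - y) for y in y_grid)

# Toy problem on C = [-1, 1]: F(y) = y - 2, whose VI solution is x* = 1.
F = lambda y: y - 2.0
C = np.linspace(-1.0, 1.0, 2001)

g_at_solution = dual_gap(F, 1.0, C)
g_elsewhere = dual_gap(F, 0.0, C)
print(abs(g_at_solution) < 1e-9, g_elsewhere)   # gap vanishes at x* and is positive elsewhere
```

This matches Lemma 2.2: the gap is nonnegative on the feasible set and vanishes exactly at a solution.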

For the following consideration, we define Open image in new window as the closed ball in Open image in new window with center Open image in new window and radius Open image in new window , and Open image in new window . The following lemma gives some properties of Open image in new window .

Lemma 2.3.

Under assumptions (A_{1})–(A_{4}), the following properties hold.

(i)The function Open image in new window is well-defined and convex on Open image in new window .

(ii)If a point Open image in new window is a solution to (DGVI) then Open image in new window .

(iii)If there exists Open image in new window such that Open image in new window and Open image in new window , and Open image in new window is pseudomonotone, then Open image in new window is a solution to (DGVI) (and also (GVI)).

- (i)
Note that Open image in new window is upper semicontinuous on Open image in new window for Open image in new window and Open image in new window is bounded. Therefore, the supremum exists, which means that Open image in new window is well-defined. Moreover, since Open image in new window is convex on Open image in new window and Open image in new window is the supremum of a parametric family of convex functions (which depends on the parameter Open image in new window ), Open image in new window is convex on Open image in new window .

- (ii) By definition, it is easy to see that Open image in new window for all Open image in new window . Let Open image in new window be a solution of (DGVI) and Open image in new window . Then we have (2.9)

- (iii) For some Open image in new window , Open image in new window means that Open image in new window is a solution to (DGVI) restricted to Open image in new window . Since Open image in new window is pseudomonotone, Open image in new window is also a solution to (GVI) restricted to Open image in new window . Since Open image in new window , for any Open image in new window , we can choose Open image in new window sufficiently small such that (2.12)

where (2.13) follows from the convexity of Open image in new window . Since Open image in new window , dividing this inequality by Open image in new window , we obtain that Open image in new window is a solution to (GVI) on Open image in new window . Since Open image in new window is pseudomonotone, Open image in new window is also a solution to (DGVI).

As usual, Open image in new window denotes the Euclidean projection onto the convex set Open image in new window . It is well known that Open image in new window is a nonexpansive and co-coercive operator on Open image in new window (see [27, 28]).
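Both classical properties of the Euclidean projection can be verified numerically on random pairs of points. The sketch below uses the projection onto a Euclidean ball (an illustrative choice of convex set) and checks nonexpansiveness and firm nonexpansiveness, the latter being the form of co-coercivity used for projections.

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_ball(x, r=1.0):
    """Euclidean projection onto the ball B(0, r)."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

# Sampled check of the two classical properties of the projection:
# nonexpansiveness and firm nonexpansiveness (co-coercivity with modulus 1).
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = proj_ball(x), proj_ball(y)
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
    assert np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12
print("projection checks passed")
```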

The following lemma gives a tool for the next discussion.

Lemma 2.4.

Proof.

which proves (2.16).

Since Open image in new window , applying (2.15) with Open image in new window instead of Open image in new window and Open image in new window for (2.20), we obtain the last inequality in Lemma 2.4.

An upper bound for the dual gap function Open image in new window is estimated in the following lemma.

Lemma 2.5.

Then, for any Open image in new window ,

(i) Open image in new window , for all Open image in new window , Open image in new window .

(ii) Open image in new window .

- (i) We define Open image in new window as the Lagrange function of the maximization problem Open image in new window . Using duality theory in convex optimization, we have (2.23)

## 3. Dual Algorithms

Now, we are going to build the dual interior proximal step for solving (GVI). The main idea is to construct a sequence Open image in new window such that the sequence Open image in new window tends to 0 as Open image in new window . By virtue of Lemma 2.5, we can check whether Open image in new window is an ε-solution to (GVI) or not.

where Open image in new window and Open image in new window are given parameters, Open image in new window is the solution to (2.22).

The following lemma shows an important property of the sequence Open image in new window .

Lemma 3.1.

Proof.

where Open image in new window .

which proves (3.2).

Then the inequality (3.3) is deduced from this inequality and (3.6).

The dual algorithm is an iterative method which generates a sequence Open image in new window based on scheme (3.1). The algorithm is presented in detail as follows:

Algorithm 3.2.

One has the following.

Initialization:

Given a tolerance Open image in new window , fix an arbitrary point Open image in new window and choose Open image in new window , Open image in new window . Take Open image in new window and Open image in new window .

Iterations:

For each Open image in new window , execute four steps below.

Step 1.

Step 2.

to get the unique solution Open image in new window .

Step 3.

Set Open image in new window .

Step 4.

If Open image in new window , where Open image in new window is a given tolerance, then stop.

Otherwise, increase Open image in new window by 1 and go back to Step 1.

Output:

Now, we prove the convergence of Algorithm 3.2 and estimate its complexity.

Theorem 3.3.

Suppose that assumptions (A_{1})–(A_{3}) are satisfied and Open image in new window is Open image in new window -Lipschitz continuous on Open image in new window . Then, one has

where Open image in new window is the final output defined by the sequence Open image in new window in Algorithm 3.2. As a consequence, the sequence Open image in new window converges to 0, and the number of iterations to reach an ε-solution is Open image in new window , where Open image in new window denotes the largest integer such that Open image in new window .

Proof.

which implies that Open image in new window . From the termination criterion at Step 4, Open image in new window , and inequality (2.26), we obtain Open image in new window , and the number of iterations to reach an ε-solution is Open image in new window .

If the Lipschitz condition is not available, then the algorithm can be modified to ensure that it still converges. The variant of Algorithm 3.2 is presented as Algorithm 3.4 below.

Algorithm 3.4.

One has the following.

Initialization:

Fix an arbitrary point Open image in new window and set Open image in new window . Take Open image in new window and Open image in new window . Choose Open image in new window for all Open image in new window .

Iterations:

For each Open image in new window execute the following steps.

Step 1.

Step 2.

to get the unique solution Open image in new window .

Step 3.

Set Open image in new window .

Step 4.

If Open image in new window , where Open image in new window is a given tolerance, then stop.

Otherwise, increase Open image in new window by 1, update Open image in new window and go back to Step 1.

Output:

The next theorem shows the convergence of Algorithm 3.4.

Theorem 3.5.

Let assumptions (A_{1})–(A_{3}) be satisfied and let the sequence Open image in new window be generated by Algorithm 3.4. Suppose that the sequences Open image in new window and Open image in new window are uniformly bounded by (3.27). Then, we have

As a consequence, the sequence Open image in new window converges to 0, and the number of iterations to reach an ε-solution is Open image in new window .

Proof.

which implies that Open image in new window . The remainder of the theorem follows immediately from (3.33).

## 4. Illustrative Example and Numerical Results

Then Open image in new window is subdifferentiable, but it is not differentiable on Open image in new window .

For this class of problems (GVI), we have the following results.

Lemma 4.1.

Let Open image in new window . Then

(i)if Open image in new window is Open image in new window -strongly monotone on Open image in new window , then Open image in new window is monotone on Open image in new window whenever Open image in new window .

(ii)if Open image in new window is Open image in new window -strongly monotone on Open image in new window , then Open image in new window is Open image in new window -strongly monotone on Open image in new window whenever Open image in new window .

(iii)if Open image in new window is Open image in new window -Lipschitz on Open image in new window , then Open image in new window is Open image in new window -Lipschitz on Open image in new window .

Proof.

Then (i) and (ii) easily follow.

Using the Lipschitz condition, it is not difficult to obtain (iii).

where the Open image in new window components of Open image in new window are defined by Open image in new window , with Open image in new window randomly chosen in Open image in new window , and the Open image in new window components of Open image in new window are randomly chosen in Open image in new window . The function Open image in new window is taken from Bnouhachem [19]. Under these assumptions, it can be shown that Open image in new window is continuous and monotone on Open image in new window .

Numerical results: Algorithm 3.4 with Open image in new window .

| No. | Reported values |
| --- | --- |
| 1 | 0.001, 0.272, 0.395 |
| 2 | 0.133, 0.080, 0.493, 0.307 |
| 3 | 0.320, 0.463, 0.255 |
| 4 | 0.197, 0.161, 0.434, 0.505, 0.451, 0.278 |
| 5 | 0.291, 0.071, 0.453, 0.238, 0.166 |
| 6 | 0.246, 0.211, 0.044, 0.466, 0.486 |
| 7 | 0.220, 0.134, 0.321, 0.364, 0.551, 0.421 |
| 8 | 0.365, 0.387, 0.217 |
| 9 | 0.562, 0.124, 0.319 |
| 10 | 0.071, 0.134, 0.307, 0.010, 0.052 |

## Notes

### Acknowledgments

The authors would like to thank the referees for their useful comments, remarks, and suggestions. This work was completed while the first author was staying at Kyungnam University under the NRF Postdoctoral Fellowship for Foreign Researchers. The second author was supported by the Kyungnam University Research Fund, 2010.

### References

1. Anh PN, Muu LD, Strodiot J-J: Generalized projection method for non-Lipschitz multivalued monotone variational inequalities. *Acta Mathematica Vietnamica* 2009, 34(1):67–79.
2. Anh PN, Muu LD, Nguyen VH, Strodiot JJ: Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. *Journal of Optimization Theory and Applications* 2005, 124(2):285–306. doi:10.1007/s10957-004-0926-0
3. Bello Cruz JY, Iusem AN: Convergence of direct methods for paramonotone variational inequalities. *Computational Optimization and Applications* 2010, 46(2):247–263. doi:10.1007/s10589-009-9246-5
4. Facchinei F, Pang JS: *Finite-Dimensional Variational Inequalities and Complementarity Problems*. Springer, New York, NY, USA; 2003.
5. Fukushima M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. *Mathematical Programming* 1992, 53(1):99–110. doi:10.1007/BF01585696
6. Konnov IV: *Combined Relaxation Methods for Variational Inequalities*. Springer, Berlin, Germany; 2000.
7. Mashreghi J, Nasri M: Forcing strong convergence of Korpelevich's method in Banach spaces with its applications in game theory. *Nonlinear Analysis: Theory, Methods & Applications* 2010, 72(3–4):2086–2099. doi:10.1016/j.na.2009.10.009
8. Noor MA: Iterative schemes for quasimonotone mixed variational inequalities. *Optimization* 2001, 50(1–2):29–44. doi:10.1080/02331930108844552
9. Zhu DL, Marcotte P: Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities. *SIAM Journal on Optimization* 1996, 6(3):714–726. doi:10.1137/S1052623494250415
10. Daniele P, Giannessi F, Maugeri A: *Equilibrium Problems and Variational Models*. Nonconvex Optimization and Its Applications, Volume 68. Kluwer Academic Publishers, Norwell, Mass, USA; 2003.
11. Fang SC, Peterson EL: Generalized variational inequalities. *Journal of Optimization Theory and Applications* 1982, 38(3):363–383. doi:10.1007/BF00935344
12. Goh CJ, Yang XQ: *Duality in Optimization and Variational Inequalities*. Optimization Theory and Applications, Volume 2. Taylor & Francis, London, UK; 2002.
13. Iusem AN, Nasri M: Inexact proximal point methods for equilibrium problems in Banach spaces. *Numerical Functional Analysis and Optimization* 2007, 28(11–12):1279–1308. doi:10.1080/01630560701766668
14. Kim JK, Kim KS: New systems of generalized mixed variational inequalities with nonlinear mappings in Hilbert spaces. *Journal of Computational Analysis and Applications* 2010, 12(3):601–612.
15. Kim JK, Kim KS: A new system of generalized nonlinear mixed quasivariational inequalities and iterative algorithms in Hilbert spaces. *Journal of the Korean Mathematical Society* 2007, 44(4):823–834. doi:10.4134/JKMS.2007.44.4.823
16. Waltz RA, Morales JL, Nocedal J, Orban D: An interior algorithm for nonlinear optimization that combines line search and trust region steps. *Mathematical Programming* 2006, 107(3):391–408. doi:10.1007/s10107-004-0560-5
17. Anh PN: An interior proximal method for solving monotone generalized variational inequalities. *East-West Journal of Mathematics* 2008, 10(1):81–100.
18. Auslender A, Teboulle M: Interior projection-like methods for monotone variational inequalities. *Mathematical Programming* 2005, 104(1):39–68. doi:10.1007/s10107-004-0568-x
19. Bnouhachem A: An LQP method for pseudomonotone variational inequalities. *Journal of Global Optimization* 2006, 36(3):351–363. doi:10.1007/s10898-006-9013-4
20. Iusem AN, Nasri M: Augmented Lagrangian methods for variational inequality problems. *RAIRO Operations Research* 2010, 44(1):5–25. doi:10.1051/ro/2010006
21. Kim JK, Cho SY, Qin X: Hybrid projection algorithms for generalized equilibrium problems and strictly pseudocontractive mappings. *Journal of Inequalities and Applications* 2010.
22. Kim JK, Buong N: Regularization inertial proximal point algorithm for monotone hemicontinuous mapping and inverse strongly monotone mappings in Hilbert spaces. *Journal of Inequalities and Applications* 2010.
23. Nesterov Y: Dual extrapolation and its applications to solving variational inequalities and related problems. *Mathematical Programming* 2007, 109(2–3):319–344. doi:10.1007/s10107-006-0034-z
24. Aubin J-P, Ekeland I: *Applied Nonlinear Analysis*. Pure and Applied Mathematics. John Wiley & Sons, New York, NY, USA; 1984.
25. Anh PN, Muu LD: Coupling the Banach contraction mapping principle and the proximal point algorithm for solving monotone variational inequalities. *Acta Mathematica Vietnamica* 2004, 29(2):119–133.
26. Cohen G: Auxiliary problem principle extended to variational inequalities. *Journal of Optimization Theory and Applications* 1988, 59(2):325–333.
27. Mangasarian OL, Solodov MV: A linearly convergent derivative-free descent method for strongly monotone complementarity problems. *Computational Optimization and Applications* 1999, 14(1):5–16. doi:10.1023/A:1008752626695
28. Rockafellar RT: Monotone operators and the proximal point algorithm. *SIAM Journal on Control and Optimization* 1976, 14(5):877–898. doi:10.1137/0314056

## Copyright information

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.