# Self-Adaptive Implicit Methods for Monotone Variant Variational Inequalities

## Abstract

The efficiency of the implicit method proposed by He (1999) depends heavily on its parameter; moreover, the "suitable" value of the parameter varies from problem to problem and is difficult to find. In this paper, we present a modified implicit method that adjusts the parameter automatically at each iteration, based on information from previous iterates. To further improve the performance of the algorithm, an inexact version is proposed, in which the subproblem is solved only approximately. Under conditions as mild as those for classical variational inequalities, we prove the global convergence of both the exact and the inexact versions of the new method. We also present several preliminary numerical results, which demonstrate that the self-adaptive implicit method, especially its inexact version, is efficient and robust.

## Keywords

Variational inequality, variational inequality problem, nonempty closed convex subset, cluster point, implicit method.

## 1. Introduction

Let $\Omega$ be a closed convex subset of $\mathbb{R}^n$ and let $F$ be a mapping from $\mathbb{R}^n$ into itself. The so-called finite-dimensional variant variational inequality, denoted by $\mathrm{VVI}(\Omega, F)$, is to find a vector $u \in \mathbb{R}^n$ such that
$$F(u) \in \Omega, \qquad (v - F(u))^{\mathsf T} u \ge 0, \quad \forall v \in \Omega, \tag{1.1}$$
while a classical variational inequality problem, abbreviated by $\mathrm{VI}(\Omega, f)$, is to find a vector $u^* \in \Omega$ such that
$$(v - u^*)^{\mathsf T} f(u^*) \ge 0, \quad \forall v \in \Omega, \tag{1.2}$$

where $f$ is a mapping from $\mathbb{R}^n$ into itself.

Both $\mathrm{VVI}(\Omega, F)$ and $\mathrm{VI}(\Omega, f)$ serve as very general mathematical models of numerous applications arising in economics, engineering, transportation, and so forth. They include some widely applicable problems as special cases, such as mathematical programming problems, systems of nonlinear equations, and nonlinear complementarity problems. Thus, they have been extensively investigated. We refer the reader to the excellent monographs of Facchinei and Pang [1, 2] and the references therein for theoretical and algorithmic developments on VI, for example, [3–10], and to [11–16] for VVI.

It is observed that if $F$ is invertible, then by setting $f := F^{-1}$, the inverse mapping of $F$, $\mathrm{VVI}(\Omega, F)$ can be reduced to $\mathrm{VI}(\Omega, f)$. Thus, theoretically, all numerical methods for solving VI can be used to solve VVI. However, in many practical applications, the inverse mapping $F^{-1}$ may not exist; moreover, even when it exists, it may not be easy to compute. Thus, there is a need to develop numerical methods for VVI directly, and recently, Goldstein's type method was extended from solving VI to VVI [12, 17].
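Assuming the standard forms of VVI and VI recalled above, the reduction can be spelled out with the substitution $w := F(u)$:

```latex
% Reduction of VVI(\Omega, F) to VI(\Omega, f) when F is invertible, f := F^{-1}.
\begin{aligned}
&\text{Let } w := F(u) \in \Omega, \text{ so that } u = f(w).\\[2pt]
&\text{Then } (v - F(u))^{\mathsf T} u \ge 0 \;\; \forall v \in \Omega
\quad\Longleftrightarrow\quad
(v - w)^{\mathsf T} f(w) \ge 0 \;\; \forall v \in \Omega,\\[2pt]
&\text{which is exactly } \mathrm{VI}(\Omega, f):
\text{ find } w \in \Omega \text{ such that } (v - w)^{\mathsf T} f(w) \ge 0 \;\; \forall v \in \Omega.
\end{aligned}
```

This makes explicit why solvers for VI apply to VVI only when $f = F^{-1}$ is available.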

In [11], He proposed an implicit method for solving general variational inequality problems. A general variational inequality problem is to find a vector $u \in \mathbb{R}^n$ such that
$$f(u) \in \Omega, \qquad (v - f(u))^{\mathsf T} F(u) \ge 0, \quad \forall v \in \Omega. \tag{1.3}$$

When $f$ is the identity mapping, it reduces to $\mathrm{VI}(\Omega, F)$, and when $F$ is the identity mapping, it reduces to $\mathrm{VVI}(\Omega, f)$. He's implicit method is as follows.

(S0) Given an initial point $u^0 \in \mathbb{R}^n$, a parameter $\beta > 0$, and a positive definite matrix $G$.

(S1) Find $u^{k+1}$ via solving the smooth equation (1.4), where the residual is
$$e(u, \beta) := f(u) - P_{\Omega}\big[f(u) - \beta F(u)\big],$$

with $P_{\Omega}(\cdot)$ being the projection from $\mathbb{R}^n$ onto $\Omega$ under the Euclidean norm.

He's method is attractive since it solves the general variational inequality problem, which is essentially equivalent to a system of nonsmooth equations,
via solving a series of smooth equations (1.4). The mapping in the subproblem is well conditioned, and many efficient numerical methods, such as Newton's method, can be applied to solve it. Furthermore, to improve the efficiency of the algorithm, He [11] proposed to solve the subproblem approximately. That is, at Step 1, instead of finding an exact zero of (1.4), it only needs to find a vector satisfying the inexactness criterion (1.8),
where $\{\varepsilon_k\}$ is a nonnegative sequence. He proved the global convergence of the algorithm under the condition that the error-tolerance sequence satisfies
$$\sum_{k=0}^{\infty} \varepsilon_k < +\infty.$$

In the above algorithm, there are two parameters, $\beta$ and $G$, which affect the efficiency of the algorithm. It was observed that different problems have different optimal values of $\beta$; a suitable parameter is thus difficult to find for an individual problem. For solving variational inequality problems, He et al. [18] proposed to choose a sequence of parameters $\{\beta_k\}$, instead of a fixed parameter $\beta$, to improve the efficiency of the algorithm. Under the same conditions as those in [11], they proved the global convergence of the algorithm. The numerical results reported there indicated that for any given initial parameter $\beta_0$, the algorithm can find a suitable parameter self-adaptively. This improves the efficiency of the algorithm greatly and makes the algorithm easy and robust to implement in practice.

In this paper, in a similar theme to [18], we suggest a general rule for choosing a suitable parameter in the implicit method for solving $\mathrm{VVI}(\Omega, F)$. By replacing the constant factor $\beta$ in (1.4) and (1.5) with a self-adaptively adjusted positive sequence $\{\beta_k\}$, the efficiency of the algorithm can be improved greatly. Moreover, the method is robust with respect to the initial choice of the parameter $\beta_0$; thus, for any given problem, we can choose the initial parameter almost arbitrarily. The algorithm then selects a suitable parameter self-adaptively, based on information from the former iterations, at the cost of only a little additional computation compared with the original algorithm with a fixed parameter. To further improve the efficiency of the algorithm, we also admit approximate computation in solving the subproblem. That is, per iteration, we just need to find a vector that satisfies (1.8).

Throughout this paper, we make the following assumptions.

Assumption A.

The solution set of $\mathrm{VVI}(\Omega, F)$, denoted by $\Omega^*$, is nonempty.

Assumption B.

The operator $F$ is monotone; that is, for any $u, v \in \mathbb{R}^n$,
$$(u - v)^{\mathsf T}\big(F(u) - F(v)\big) \ge 0. \tag{1.10}$$
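Assumption B is easy to check numerically for concrete mappings. As an illustrative sketch (the affine mapping and its construction are our example, not from the paper), $F(u) = Mu + q$ is monotone whenever $M + M^{\mathsf T}$ is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Build M = A^T A + (B0 - B0^T): the symmetric part A^T A is PSD and the
# skew-symmetric part contributes nothing to quadratic forms, so
# F(u) = M u + q satisfies the monotonicity inequality (1.10).
A = rng.standard_normal((n, n))
B0 = rng.standard_normal((n, n))
M = A.T @ A + (B0 - B0.T)
q = rng.standard_normal(n)

def F(u):
    return M @ u + q

# Check (u - v)^T (F(u) - F(v)) >= 0 on many random pairs.
ok = all(
    (u - v) @ (F(u) - F(v)) >= -1e-9
    for u, v in (rng.standard_normal((2, n)) for _ in range(1000))
)
print(ok)  # True
```

The check is, of course, only a sanity test on sampled pairs; the inequality holds for all pairs because $(u-v)^{\mathsf T} M (u-v) = \|A(u-v)\|^2 \ge 0$.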

The rest of this paper is organized as follows. In Section 2, we summarize some basic properties which are useful in the convergence analysis of our method. In Sections 3 and 4, we describe the exact version and inexact version of the method and prove their global convergence, respectively. We report our preliminary computational results in Section 5 and give some final conclusions in the last section.

## 2. Preliminaries

For a vector $x \in \mathbb{R}^n$ and a symmetric positive definite matrix $G \in \mathbb{R}^{n \times n}$, we denote by $\|x\|$ the Euclidean norm and by $\|x\|_G$ the matrix-induced norm, that is, $\|x\|_G := \sqrt{x^{\mathsf T} G x}$.

Let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^n$, and let $P_{\Omega, G}(\cdot)$ denote the projection mapping from $\mathbb{R}^n$ onto $\Omega$ under the matrix-induced norm, that is,
$$P_{\Omega, G}(v) := \operatorname{argmin}\{\|u - v\|_G : u \in \Omega\}. \tag{2.1}$$
It is known [12, 19] that the variant variational inequality problem (1.1) is equivalent to the projection equation
$$F(u) = P_{\Omega, G}\big[F(u) - \beta G^{-1} u\big], \tag{2.2}$$

where $\beta$ is an arbitrary positive constant. Then, we have the following lemma.

Lemma 2.1.

$u^*$ is a solution of $\mathrm{VVI}(\Omega, F)$ if and only if $e(u^*, \beta) = 0$ for any fixed constant $\beta > 0$, where
$$e(u, \beta) := F(u) - P_{\Omega, G}\big[F(u) - \beta G^{-1} u\big] \tag{2.3}$$

is the residual function of the projection equation (2.2).

Proof.

See [11, Theorem 1].
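As a tiny sanity check of this equivalence, consider our own one-dimensional example with $G = I$, $\Omega = [0, +\infty)$, and $F(u) = u + 1$, whose unique solution is $u^* = 0$ (since $F(0) = 1 \in \Omega$ and $(v - 1)\cdot 0 \ge 0$ for all $v \ge 0$):

```python
import numpy as np

def proj(v):
    # Euclidean projection onto Omega = [0, +inf).
    return np.maximum(v, 0.0)

def F(u):
    return u + 1.0

def residual(u, beta):
    # e(u, beta) = F(u) - P_Omega[F(u) - beta * u], the VVI residual with G = I.
    return F(u) - proj(F(u) - beta * u)

# The residual vanishes at the solution u* = 0 for every beta > 0 ...
print(residual(0.0, 0.5), residual(0.0, 7.0))  # 0.0 0.0
# ... and is nonzero at a non-solution:
print(residual(-2.0, 1.0) != 0.0)              # True
```

The example only illustrates the lemma; the choice of $F$ and $\Omega$ is ours.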

The following lemma summarizes some basic properties of the projection operator, which will be used in the subsequent analysis.

Lemma 2.2.

Let $\Omega$ be a closed convex set in $\mathbb{R}^n$ and let $P_{\Omega, G}$ denote the projection operator onto $\Omega$ under the matrix-induced norm; then one has
$$\big(v - P_{\Omega, G}(v)\big)^{\mathsf T} G \big(u - P_{\Omega, G}(v)\big) \le 0, \quad \forall v \in \mathbb{R}^n, \ \forall u \in \Omega. \tag{2.4}$$

The following lemma plays an important role in the convergence analysis of our algorithm.

Lemma 2.3.

For a given $u \in \mathbb{R}^n$, let $\tilde{\beta} \ge \beta > 0$. Then it holds that
$$\|e(u, \tilde{\beta})\|_G \ge \|e(u, \beta)\|_G \qquad \text{and} \qquad \frac{\|e(u, \tilde{\beta})\|_G}{\tilde{\beta}} \le \frac{\|e(u, \beta)\|_G}{\beta}.$$

Proof.

See [20] for a simple proof.
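The two monotonicity properties of the residual norm proved in [20] can be observed numerically. The sketch below uses the Euclidean case $G = I$ and the nonnegative orthant as $\Omega$; these concrete choices, and the variable names, are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

def proj_orthant(v):
    # Euclidean projection onto the nonnegative orthant.
    return np.maximum(v, 0.0)

def residual(Fu, u, beta):
    # e(u, beta) = F(u) - P_Omega[F(u) - beta * u]; F(u) and u are fixed vectors here.
    return Fu - proj_orthant(Fu - beta * u)

Fu = rng.standard_normal(n)   # stands in for F(u)
u = rng.standard_normal(n)

betas = [0.1, 0.5, 1.0, 2.0, 10.0]
norms = [np.linalg.norm(residual(Fu, u, b)) for b in betas]

# ||e(u, beta)|| is nondecreasing in beta ...
nondecreasing = all(norms[i] <= norms[i + 1] + 1e-12 for i in range(len(betas) - 1))
# ... while ||e(u, beta)|| / beta is nonincreasing in beta.
nonincreasing = all(
    norms[i] / betas[i] >= norms[i + 1] / betas[i + 1] - 1e-12
    for i in range(len(betas) - 1)
)
print(nondecreasing and nonincreasing)  # True
```

The same behavior holds for any closed convex $\Omega$; the orthant is used only because its projection is a one-liner.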

Lemma 2.4.

Let $u^* \in \Omega^*$. Then for all $u \in \mathbb{R}^n$ and $\beta > 0$, one has

Proof.

It follows from the definition of $\mathrm{VVI}(\Omega, F)$ (see (1.1)) that
By setting the corresponding vectors in (2.4), we obtain
Adding (2.8) and (2.9), and using the definition of the residual in (2.3), we get
(2.10)
that is,
(2.11)

where the last inequality follows from the monotonicity of $F$ (Assumption B). This completes the proof.

## 3. Exact Implicit Method and Convergence Analysis

We are now in a position to describe our algorithm formally.

### 3.1. Self-Adaptive Exact Implicit Method

(S0) Given an initial point $u^0 \in \mathbb{R}^n$, an initial parameter $\beta_0 > 0$, and a positive definite matrix $G$.

(S1) Compute $u^{k+1}$ such that (3.1) holds.
(S2) If the given stopping criterion is satisfied, then stop; otherwise choose a new parameter $\beta_{k+1}$, where $\beta_{k+1}$ satisfies

Set $k := k + 1$ and go to Step 1.

From (3.1), we know that $u^{k+1}$ is the (exact) unique zero of

We refer to the above method as the self-adaptive exact implicit method.

Remark 3.1.

According to the rule (3.2) for adjusting the parameter, the sequence $\{\beta_k\}$ is bounded. Then, let $\beta_{\max} := \sup_k \beta_k < +\infty$ and $\beta_{\min} := \inf_k \beta_k > 0$.

Now, we analyze the convergence of the algorithm, beginning with the following lemma.

Lemma 3.2.

Let $\{u^k\}$ be the sequence generated by the proposed self-adaptive exact implicit method. Then for any $u^* \in \Omega^*$ and $k \ge 0$, one has

Proof.

Using (3.1), we get

where the inequality follows from (2.7). This completes the proof.

Since $F$ is monotone, it follows that
where the inequality follows from the monotonicity of the mapping $F$. Combining (3.5) and (3.7), we have
Now, we give the self-adaptive rule for choosing the parameter $\beta_k$. For the sake of balance, we hope that
That is, for a given constant, if
(3.10)
we should increase $\beta$ in the next iteration; on the other hand, we should decrease $\beta$ when
(3.11)
Let
(3.12)
Then we give
(3.13)

Such a self-adaptive strategy was adopted in [18, 21, 22, 23, 24] for solving variational inequality problems, where the numerical results indicated its efficiency and robustness with respect to the choice of the initial parameter. Here we adopt it for solving variant variational inequality problems.
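The balancing rule (3.10)–(3.13) can be sketched in code. The concrete ratio, the threshold `mu`, and the scaling factor `tau` below are illustrative assumptions in the style of [18], not necessarily the exact constants used in the paper:

```python
import numpy as np

def update_beta(beta, du, dF, mu=0.4, tau=1.5):
    """Self-adaptive parameter update (illustrative sketch).

    du : u^{k+1} - u^k,  dF : F(u^{k+1}) - F(u^k).
    The rule balances beta*||du|| against ||dF||: if one side dominates
    by more than the factor mu, beta is scaled by tau (up or down).
    """
    r = beta * np.linalg.norm(du) / max(np.linalg.norm(dF), 1e-30)
    if r < mu:           # beta*||du|| too small -> increase beta
        return beta * tau
    if r > 1.0 / mu:     # beta*||du|| too large -> decrease beta
        return beta / tau
    return beta          # balanced: keep beta unchanged

# Tiny usage example with made-up difference vectors:
beta = 1.0
beta = update_beta(beta, du=np.array([1e-3, 0.0]), dF=np.array([1.0, 1.0]))
print(beta)  # 1.5  (ratio ~ 7e-4 < 0.4, so beta is increased)
```

For the convergence theory, the cumulative change of $\beta_k$ must stay bounded (the adjustment factors satisfy a summability condition), so in a convergent implementation the scaling factor is driven toward 1 as $k$ grows; the constant `tau` here is for illustration only.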

We are now in a position to give the convergence result of the algorithm, which is the main result of this section.

Theorem 3.3.

The sequence generated by the proposed self-adaptive exact implicit method converges to a solution of .

Proof.

Let . Then from the assumption that , we have that , which means that . Denote
(3.14)
From (3.8), for any $u^* \in \Omega^*$, that is, an arbitrary solution of $\mathrm{VVI}(\Omega, F)$, we have
(3.15)
(3.16)

This, together with the monotonicity of the mapping $F$, means that the generated sequence $\{u^k\}$ is bounded.

Also from (3.8), we have
(3.17)
Summing both sides of the above inequality over $k$, we obtain
(3.18)
where the second inequality follows from (3.15). Thus, we have
(3.19)
which, from Lemma 2.3, means that
(3.20)
Since $\{u^k\}$ is bounded, it has at least one cluster point. Let $\bar{u}$ be a cluster point of $\{u^k\}$ and let $\{u^{k_j}\}$ be a subsequence converging to $\bar{u}$. Since $e(\cdot, \cdot)$ is continuous, taking the limit in (3.20) along this subsequence, we get
(3.21)

Thus, from Lemma 2.1, $\bar{u}$ is a solution of $\mathrm{VVI}(\Omega, F)$.

In the following, we prove that the sequence has exactly one cluster point. Assume that $\tilde{u}$ is another cluster point of $\{u^k\}$, which is different from $\bar{u}$. Because $\tilde{u}$ is a cluster point of the sequence and $F$ is monotone, there is a $\delta > 0$ such that
(3.22)
where
(3.23)
On the other hand, since $\bar{u}$ is an arbitrary solution, by setting $u^* = \bar{u}$ in (3.15), we have for all $k$,
(3.24)
that is,
(3.25)
Then,
(3.26)
Using the monotonicity of $F$ and the rule for choosing $\beta_k$, we have
(3.27)
Combining (3.25)–(3.27), we have that for any $k$,
(3.28)

which means that $\tilde{u}$ cannot be a cluster point of $\{u^k\}$. Thus, $\{u^k\}$ has exactly one cluster point.

## 4. Inexact Implicit Method and Convergence Analysis

The main task at each iteration of the exact implicit algorithm in the last section is to solve a system of nonlinear equations. Solving it exactly at every iteration is time consuming, and there is little justification for doing so, especially when the iterate is far away from the solution set. Thus, in this section, we propose to solve the subproblem approximately. That is, for a given $u^k$, instead of finding the exact solution of (3.1), we accept $u^{k+1}$ as the new iterate if it satisfies

where $\{\varepsilon_k\}$ is a nonnegative sequence with $\sum_{k=0}^{\infty} \varepsilon_k < +\infty$. If (3.1) is replaced by (4.1), the modified method is called the inexact implicit method.

We now analyze the convergence of the inexact implicit method.

Lemma 4.1.

Let $\{u^k\}$ be the sequence generated by the inexact implicit method. Then there exists a constant such that, for any solution $u^*$ and any $k$,

Proof.

Denote
Then (4.1) can be rewritten as
According to (4.3) and (2.7),
Using the Cauchy-Schwarz inequality and (4.4), we have
Since , there is a constant , such that for all ,
and (4.7) becomes that for all ,

Substituting (4.6) and (4.9) into (4.5), we complete the proof.

In a similar way to (3.7), by using the monotonicity of $F$ and (4.2), we obtain that for all $k$,
(4.10)

Now, we prove the convergence of the inexact implicit method.

Theorem 4.2.

The sequence generated by the proposed self-adaptive inexact implicit method converges to a solution point of .

Proof.

Let
(4.11)
Then, it follows from (4.10) that for all ,
(4.12)
From the assumptions that
(4.13)
it follows that
(4.14)

are finite. The rest of the proof is similar to that of Theorem 3.3 and is thus omitted here.

## 5. Computational Results

In this section, we present some numerical results for the proposed self-adaptive implicit methods. Our main interest is twofold: first, to compare the proposed method with He's method [11] on a simple nonlinear problem, showing its numerical advantage; second, to show that the strategy is rather insensitive to the initial point, to the initial choice of the parameter, and to the size of the problem. All codes were written in Matlab and run on an AMD 3200+ personal computer. In the following tests, the parameter is adjusted when
That is, we set the corresponding constants in (3.13). We set $G = I$, so the matrix-induced-norm projection is just the projection under the Euclidean norm, which is very easy to implement when $\Omega$ has some special structure. For example, when $\Omega = \{u \in \mathbb{R}^n : u \ge 0\}$ is the nonnegative orthant, then
$$P_{\Omega}(v) = \max(v, 0);$$
when $\Omega = \{u \in \mathbb{R}^n : a \le u \le b\}$ is a box, then
$$P_{\Omega}(v) = \min\big(\max(v, a), b\big);$$
when $\Omega = \{u \in \mathbb{R}^n : \|u\| \le r\}$ is a ball, then
$$P_{\Omega}(v) = \begin{cases} v, & \text{if } \|v\| \le r, \\ r v / \|v\|, & \text{otherwise.} \end{cases}$$
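These projections take one line each; a minimal sketch (function names ours):

```python
import numpy as np

def proj_orthant(v):
    # P_Omega(v) for Omega = {u : u >= 0}: componentwise clipping at zero.
    return np.maximum(v, 0.0)

def proj_box(v, lo, hi):
    # P_Omega(v) for Omega = {u : lo <= u <= hi}: componentwise clipping.
    return np.clip(v, lo, hi)

def proj_ball(v, r):
    # P_Omega(v) for Omega = {u : ||u|| <= r}: rescale if outside the ball.
    nv = np.linalg.norm(v)
    return v if nv <= r else (r / nv) * v

v = np.array([3.0, -4.0])
print(proj_orthant(v))         # [3. 0.]
print(proj_box(v, -1.0, 1.0))  # [ 1. -1.]
print(proj_ball(v, 1.0))       # [ 0.6 -0.8]
```

Each mapping is the exact Euclidean projection onto its set, which is what the algorithm needs when $G = I$.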
At each iteration, we use Newton's method [25, 26] to solve the system of nonlinear equations (3.1)
approximately; that is, we stop the inner Newton iterations as soon as the current iterate satisfies (4.1), and adopt it as the next iterate, where
In our first test problem , we take
(5.10)
where the matrix is constructed by . Here
(5.11)
are Householder matrices and is a diagonal matrix with . The vectors , and contain pseudorandom numbers:
(5.12)
The closed convex set in this problem is defined as
(5.13)
with different prescribed . Note that in the case (otherwise is the trivial solution ). Therefore, we test the problem with and . In the test we take , and . The stopping criterion is
(5.14)
The results in Table 1 show that the fixed parameter is "proper" for one of the three cases, while for the other two cases it is not. For all three cases, the method with the self-adaptive strategy is efficient.
Table 1

Comparison of the proposed method and He's method [11]: iteration numbers (It. no.) and CPU times (sec) for both methods. "—" means that the iteration number exceeded 200 and the CPU time exceeded 2000 sec.

The second example considered here is the variant mixed complementarity problem (VMCP for short), where the bounds are randomly generated. The mapping is taken as
(5.15)
where the nonlinear part and the linear part of the mapping are combined. We form the linear part similarly as in [27]: the matrix $M = A^{\mathsf T} A + B$, where $A$ is an $n \times n$ matrix whose entries are randomly generated in a fixed interval, the skew-symmetric matrix $B$ is generated in the same way, and the vector $q$ is generated from a uniform distribution. The components of the nonlinear part are nonlinear functions with randomly generated coefficients. The numerical results are summarized in Tables 2–5, where the initial iterate is fixed in Tables 2 and 3 and is randomly generated in Tables 4 and 5, respectively. The other parameters are the same in all cases. The stopping criterion is
(5.16)
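The construction of the test mapping can be sketched as follows. The interval endpoints and the arctan form of the nonlinear part are assumptions on our part (this family of test problems follows [27]); they are illustrative only:

```python
import numpy as np

def make_vmcp(n, seed=0):
    """Generate a random VMCP-style test mapping F(u) = D(u) + M u + q (sketch)."""
    rng = np.random.default_rng(seed)
    # Linear part: M = A^T A + B with B skew-symmetric, so M u is monotone.
    # The sampling intervals below are assumed, not taken from the paper.
    A = rng.uniform(-5, 5, (n, n))
    B0 = rng.uniform(-5, 5, (n, n))
    M = A.T @ A + (B0 - B0.T)
    q = rng.uniform(-500, 500, n)
    d = rng.uniform(0, 1, n)  # nonnegative scales for the nonlinear components

    def F(u):
        # D(u)_j = d_j * arctan(u_j) is separable and nondecreasing, hence
        # monotone; the sum of monotone mappings is monotone.
        return d * np.arctan(u) + M @ u + q

    return F

F = make_vmcp(4)
u, v = np.ones(4), np.zeros(4)
print((u - v) @ (F(u) - F(v)) >= 0)  # True: F is monotone
```

Such a generator makes Assumption B hold by construction, which is what the convergence theory requires.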
Table 2

Numerical results for VMCP: iteration numbers (It. no.) and CPU times (sec) for the proposed method and He's method under different initial parameters. "—" means that the iteration number exceeded 3000 and the CPU time exceeded 300 sec.

Table 3

Numerical results for VMCP: iteration numbers (It. no.) and CPU times (sec) for the proposed method and He's method under different initial parameters. "—" means that the iteration number exceeded 3000 and the CPU time exceeded 300 sec.

Table 4

Numerical results for VMCP: iteration numbers (It. no.) and CPU times (sec) for the proposed method and He's method under different initial parameters. "—" means that the iteration number exceeded 3000 and the CPU time exceeded 300 sec.

Table 5

Numerical results for VMCP: iteration numbers (It. no.) and CPU times (sec) for the proposed method and He's method under different initial parameters. "—" means that the iteration number exceeded 5000 and the CPU time exceeded 300 sec.

Like the results in Table 1, the results in Tables 2–5 indicate that the number of iterations and the CPU time of the proposed method are rather insensitive to the initial parameter, while He's method is efficient only for a proper choice of the parameter. The results also show that both the proposed method and He's method are stable with respect to the choice of the initial point.

## 6. Conclusions

In this paper, we proposed a self-adaptive implicit method for solving monotone variant variational inequalities. The proposed self-adaptive adjusting rule avoids the difficult task of choosing a "suitable" parameter and makes the method efficient for any initial parameter. The self-adaptive rule adds only a tiny amount of computation compared with the method with a fixed parameter, while the efficiency is enhanced greatly. To make the method more efficient and practical, an approximate version of the algorithm was also proposed. The global convergence of both the exact and the inexact versions of the new algorithm was proved under mild assumptions, namely, that the underlying mapping of $\mathrm{VVI}(\Omega, F)$ is monotone and that the problem has at least one solution. The reported preliminary numerical results support these assertions.

## Notes

### Acknowledgments

This research was supported by the NSFC Grants 10501024, 10871098, and NSF of Jiangsu Province at Grant no. BK2006214. D. Han was also supported by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.

## References

1. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Vol. I, Springer Series in Operations Research. Springer, New York, NY, USA; 2003.
2. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Vol. II, Springer Series in Operations Research. Springer, New York, NY, USA; 2003.
3. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with application to the traffic assignment problem. Mathematical Programming Study 1982, (17):139–159.
4. Rachunková I, Tvrdý M: Nonlinear systems of differential inequalities and solvability of certain boundary value problems. Journal of Inequalities and Applications 2000, 6(2):199–226.
5. Agarwal RP, Elezovic N, Pecaric J: On some inequalities for beta and gamma functions via some classical inequalities. Journal of Inequalities and Applications 2005, 2005(5):593–613.
6. Dafermos S: Traffic equilibrium and variational inequalities. Transportation Science 1980, 14(1):42–54.
7. Verma RU: A class of projection-contraction methods applied to monotone variational inequalities. Applied Mathematics Letters 2000, 13(8):55–62.
8. Verma RU: Projection methods, algorithms, and a new system of nonlinear variational inequalities. Computers & Mathematics with Applications 2001, 41(7–8):1025–1031.
9. Ceng LC, Mastroeni G, Yao JC: An inexact proximal-type method for the generalized variational inequality in Banach spaces. Journal of Inequalities and Applications 2007, 2007:14 pages.
10. Chidume CE, Chidume CO, Ali B: Approximation of fixed points of nonexpansive mappings and solutions of variational inequalities. Journal of Inequalities and Applications 2008, 2008:12 pages.
11. He BS: Inexact implicit methods for monotone general variational inequalities. Mathematical Programming 1999, 86(1):199–217.
12. He BS: A Goldstein's type projection method for a class of variant variational inequalities. Journal of Computational Mathematics 1999, 17(4):425–434.
13. Noor MA: Quasi variational inequalities. Applied Mathematics Letters 1988, 1(4):367–370.
14. Outrata JV, Zowe J: A Newton method for a class of quasi-variational inequalities. Computational Optimization and Applications 1995, 4(1):5–21.
15. Pang JS, Qi LQ: Nonsmooth equations: motivation and algorithms. SIAM Journal on Optimization 1993, 3(3):443–465.
16. Pang JS, Yao JC: On a generalization of a normal map and equation. SIAM Journal on Control and Optimization 1995, 33(1):168–184.
17. Li M, Yuan XM: An improved Goldstein's type method for a class of variant variational inequalities. Journal of Computational and Applied Mathematics 2008, 214(1):304–312.
18. He BS, Liao LZ, Wang SL: Self-adaptive operator splitting methods for monotone variational inequalities. Numerische Mathematik 2003, 94(4):715–737.
19. Eaves BC: On the basic theorem of complementarity. Mathematical Programming 1971, 1(1):68–75.
20. Zhu T, Yu ZQ: A simple proof for some important properties of the projection mapping. Mathematical Inequalities & Applications 2004, 7(3):453–456.
21. He BS, Yang H, Meng Q, Han DR: Modified Goldstein-Levitin-Polyak projection method for asymmetric strongly monotone variational inequalities. Journal of Optimization Theory and Applications 2002, 112(1):129–143.
22. Han D, Sun W: A new modified Goldstein-Levitin-Polyak projection method for variational inequality problems. Computers & Mathematics with Applications 2004, 47(12):1817–1825.
23. Han D: Inexact operator splitting methods with selfadaptive strategy for variational inequality problems. Journal of Optimization Theory and Applications 2007, 132(2):227–243.
24. Han D, Xu W, Yang H: An operator splitting method for variational inequalities with partially unknown mappings. Numerische Mathematik 2008, 111(2):207–237.
25. Dembo RS, Eisenstat SC, Steihaug T: Inexact Newton methods. SIAM Journal on Numerical Analysis 1982, 19(2):400–408.
26. Pang JS: Inexact Newton methods for the nonlinear complementarity problem. Mathematical Programming 1986, 36(1):54–71.
27. Harker PT, Pang JS: A damped-Newton method for the linear complementarity problem. In Computational Solution of Nonlinear Systems of Equations, Lectures in Applied Mathematics, Vol. 26. American Mathematical Society, Providence, RI, USA; 1990:265–284.