
A New Projection Algorithm for Generalized Variational Inequality

  • Changjie Fang
  • Yiran He
Open Access
Research Article

Abstract

We propose a new projection algorithm for the generalized variational inequality with a multivalued mapping. Our method is proven to be globally convergent to a solution of the variational inequality problem, provided that the multivalued mapping is continuous and pseudomonotone with nonempty compact convex values. Preliminary computational experience is also reported.

Keywords

Variational Inequality · Multivalued Mapping · Variational Inequality Problem · Projection Algorithm · Proximal Point Algorithm

1. Introduction

We consider the following generalized variational inequality: find $x^* \in C$ and $\xi^* \in F(x^*)$ such that

$$\langle \xi^*,\, y - x^* \rangle \ge 0 \quad \text{for all } y \in C, \tag{1.1}$$

where $C$ is a nonempty closed convex set in $\mathbb{R}^n$, $F$ is a multivalued mapping from $C$ into $\mathbb{R}^n$ with nonempty values, and $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ denote the inner product and norm in $\mathbb{R}^n$, respectively.

The theory and algorithms of the generalized variational inequality have been much studied in the literature [1, 2, 3, 4, 5, 6, 7, 8, 9], and various algorithms for computing solutions of (1.1) have been proposed. The well-known proximal point algorithm [10] requires the multivalued mapping $F$ to be monotone. Relaxing the monotonicity assumption, [1] proved that if the set $C$ is a box and $F$ is order monotone, then the proximal point algorithm still applies to problem (1.1). Assuming that $F$ is pseudomonotone, [11] described a combined relaxation method for solving (1.1); see also [12, 13]. Projection-type algorithms have been extensively studied in the literature; see [14, 15, 16, 17] and the references therein. Recently, [15] proposed a projection algorithm for the generalized variational inequality with a pseudomonotone mapping. In [15], choosing the element $\xi_i \in F(x_i)$ requires solving a single-valued variational inequality and hence is computationally expensive; see expression (2.1) in [15]. In this paper, we introduce a different projection algorithm for the generalized variational inequality. In our method, $\xi_i \in F(x_i)$ can be taken arbitrarily. Moreover, the main difference between our method and that of [15] is the Armijo-type linesearch procedure; compare expression (2.2) in [15] with expression (2.2) in the next section.

Let $S$ denote the solution set of (1.1), that is, the set of points $x^* \in C$ for which some $\xi^* \in F(x^*)$ satisfies (1.1). Throughout this paper, we assume that the solution set $S$ of problem (1.1) is nonempty and that $F$ is continuous on $C$ with nonempty compact convex values satisfying the following property:

$$\langle \zeta,\, y - x \rangle \ge 0 \quad \text{for all } x \in S,\ y \in C,\ \zeta \in F(y). \tag{1.2}$$

Property (1.2) holds if $F$ is pseudomonotone on $C$ in the sense of Karamardian [18]. In particular, if $F$ is monotone, then (1.2) holds.
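For the reader's convenience, the following is a standard formulation of Karamardian pseudomonotonicity and of the way it yields (1.2); the notation is ours, and the display is a paraphrase rather than a quotation from [18]:

\[
\bigl(\exists\,\xi \in F(x):\ \langle \xi,\, y - x \rangle \ge 0\bigr)
\ \Longrightarrow\
\langle \zeta,\, y - x \rangle \ge 0 \ \ \text{for all } \zeta \in F(y),
\qquad \text{for all } x, y \in C.
\]

In particular, if $x^* \in S$, then $\langle \xi^*, y - x^* \rangle \ge 0$ for some $\xi^* \in F(x^*)$ and every $y \in C$, so pseudomonotonicity gives $\langle \zeta, y - x^* \rangle \ge 0$ for all $\zeta \in F(y)$, which is precisely property (1.2).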

The organization of this paper is as follows. In the next section, we recall the definition of a continuous multivalued mapping, present the details of the algorithm, and prove the preliminary results needed for the convergence analysis carried out in Section 3. Numerical results are reported in the last section.

2. Algorithms

Let us recall the definition of a continuous multivalued mapping. $F$ is said to be upper semicontinuous at $x \in C$ if for every open set $V$ containing $F(x)$, there is an open set $U$ containing $x$ such that $F(y) \subset V$ for all $y \in C \cap U$. $F$ is said to be lower semicontinuous at $x$ if, for any sequence $\{x_k\} \subset C$ converging to $x$ and any $\xi \in F(x)$, there exists a sequence $\xi_k \in F(x_k)$ that converges to $\xi$. $F$ is said to be continuous at $x$ if it is both upper semicontinuous and lower semicontinuous at $x$. If $F$ is single valued, then both upper semicontinuity and lower semicontinuity reduce to the continuity of $F$.
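As a simple illustration of the difference between the two notions (our example, not taken from the paper), consider the multivalued map $F : \mathbb{R} \rightrightarrows \mathbb{R}$ given by

\[
F(x) =
\begin{cases}
[0,1], & x = 0,\\
\{0\}, & x \neq 0.
\end{cases}
\]

This map is upper semicontinuous at $0$, since any open set containing $[0,1]$ contains every nearby value $F(y) = \{0\}$, but it is not lower semicontinuous there: for $x_k = 1/k \to 0$ and $\xi = 1 \in F(0)$, every choice $\xi_k \in F(x_k) = \{0\}$ stays at $0$ and cannot converge to $\xi$.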

Let $P_C$ denote the projector onto $C$ and let $\mu > 0$ be a parameter.

Proposition 2.1.

A point $x \in C$ and an element $\xi \in F(x)$ solve problem (1.1) if and only if $x = P_C(x - \mu \xi)$.
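To connect this characterization with the computations in Section 4, the following minimal sketch (ours, not taken from the paper) evaluates the natural residual $r_\mu(x,\xi) = x - P_C(x - \mu\xi)$, assuming for concreteness that $C$ is a box, for which the projection is a componentwise clamp; the function names proj_box and natural_residual are illustrative only.

import numpy as np

def proj_box(x, lo, hi):
    # Projection P_C onto the box C = {z : lo <= z <= hi} (componentwise clamp).
    return np.minimum(np.maximum(x, lo), hi)

def natural_residual(x, xi, mu, lo, hi):
    # r_mu(x, xi) = x - P_C(x - mu * xi); by Proposition 2.1 it vanishes exactly
    # when the pair (x, xi) with xi in F(x) solves the generalized VI (1.1).
    return x - proj_box(x - mu * xi, lo, hi)

In Step 1 of the algorithm below, checking whether $x_i$ solves (1.1) amounts to testing whether this residual vanishes (in practice, whether its norm falls below a tolerance) for some $\xi \in F(x_i)$.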

Algorithm 2.2.

Choose $x_0 \in C$ and three parameters $\sigma > 0$, $\gamma \in (0,1)$, and $\mu > 0$, and set $i = 0$.

Step 1.

If $x_i = P_C(x_i - \mu \xi)$ for some $\xi \in F(x_i)$, stop; otherwise, take an arbitrary $\xi_i \in F(x_i)$.

Step 2.

Let $k_i$ be the smallest nonnegative integer satisfying the Armijo-type linesearch condition (2.2), in which the trial stepsize at the $k$th trial is $\gamma^k$. Set $\eta_i = \gamma^{k_i}$ and define the point $z_i$ and the function $h_i$ as in (2.3).

Step 3.

Let $x_{i+1} = P_{C \cap H_i}(x_i)$, where $H_i = \{x \in \mathbb{R}^n : h_i(x) \le 0\}$, set $i := i + 1$, and go to Step 1.
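To illustrate the overall flow of Steps 1-3, here is a compact numerical sketch of a projection method of the same family, written by us in the style of the single-valued double-projection and hyperplane-projection methods of [14, 16]. It treats a single-valued $F$ and a box feasible set, uses the Solodov-Svaiter update (project onto the separating hyperplane, then back onto $C$) in place of the exact Step 3, and its parameter values and the constant in the linesearch test are illustrative assumptions rather than the choices made in this paper.

import numpy as np

def proj_box(x, lo, hi):
    # Projection onto the box C = {z : lo <= z <= hi} (as in the earlier sketch).
    return np.minimum(np.maximum(x, lo), hi)

def projection_method(F, x0, lo, hi, mu=1.0, sigma=0.5, gamma=0.5, tol=1e-8, max_iter=1000):
    # Hyperplane-projection scheme in the spirit of [14, 16]; single-valued F, box C.
    x = proj_box(np.asarray(x0, dtype=float), lo, hi)
    for it in range(max_iter):
        xi = F(x)
        r = x - proj_box(x - mu * xi, lo, hi)        # natural residual (cf. Step 1)
        if np.linalg.norm(r) <= tol:
            return x, it                             # approximate solution found
        # Armijo-type backtracking (cf. Step 2): shrink the trial step by gamma until
        # the mapping value at the trial point makes a sufficiently acute angle with r.
        k = 0
        while F(x - gamma**k * r) @ r < (sigma / mu) * np.linalg.norm(r)**2:
            k += 1
        z = x - gamma**k * r
        zeta = F(z)
        # Separating-hyperplane update (Solodov-Svaiter style, used here in place of
        # the exact Step 3): project x onto {y : <zeta, y - z> = 0}, then back onto C.
        x = proj_box(x - (zeta @ (x - z)) / (zeta @ zeta) * zeta, lo, hi)
    return x, max_iter

For instance, with the monotone affine map F(x) = x - a (our choice) and C = [0, 1]^n, the iterates approach the projection of a onto C. In the multivalued setting of Algorithm 2.2, F(x) would return a set and $\xi_i$ would be an arbitrary element of $F(x_i)$, as in Step 1.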

Remark 2.3.

Since $F$ has compact convex values, the set from which the vector in Step 2 is selected is closed and convex; therefore, that vector is uniquely determined.

Remark 2.4.

If $F$ is a single-valued mapping, the Armijo-type linesearch procedure (2.2) becomes that of Algorithm 2.2 in [14].

We show that Algorithm 2.2 is well defined and implementable.

Proposition 2.5.

If $x_i$ is not a solution of problem (1.1), then there exists a nonnegative integer $k_i$ satisfying (2.2).

Proof.

Suppose, on the contrary, that the linesearch condition (2.2) fails for every nonnegative integer $k$; that is, the opposite inequality (2.4) holds for all $k$. Letting $k \to \infty$ in (2.4) and using the continuity of $F$, we obtain an inequality that contradicts the assumption that $x_i$ is not a solution of problem (1.1). This contradiction completes the proof.

Lemma 2.6.

Proof.

See the corresponding lemma in [15].

Lemma 2.7.

where $\operatorname{dist}(x, H)$ denotes the distance from the point $x$ to the set $H$.

Proof.

See the corresponding lemma in [14].

Lemma 2.8.

Let $x^*$ solve the variational inequality (1.1) and let the function $h_i$ be defined by (2.3). Then $h_i(x_i) \ge 0$ and $h_i(x^*) \le 0$. In particular, if $x_i$ is not a solution of problem (1.1), then $h_i(x_i) > 0$.

Proof.

It follows from (2.3) that $h_i(x_i) \ge 0$, where the first inequality in the corresponding chain of estimates follows from the linesearch condition (2.2) and the last one follows from Lemma 2.6. If $x_i$ is not a solution of problem (1.1), then $h_i(x_i) > 0$ because $\eta_i > 0$. It remains to be proved that $h_i(x^*) \le 0$. Since $x^*$ solves (1.1) and $z_i \in C$, assumption (1.2) implies that $\langle \zeta, z_i - x^* \rangle \ge 0$ for every $\zeta \in F(z_i)$, in particular for the element that defines $h_i$ in (2.3). Hence $h_i(x^*) \le 0$, which completes the proof.

3. Main Results

Theorem 3.1.

If $F$ is continuous with nonempty compact convex values on $C$ and condition (1.2) holds, then either Algorithm 2.2 terminates in a finite number of iterations or it generates an infinite sequence $\{x_i\}$ converging to a solution of (1.1).

Proof.

Let $x^*$ be a solution of the variational inequality problem. By Lemma 2.8, $h_i(x^*) \le 0$ for every $i$. We assume that Algorithm 2.2 generates an infinite sequence $\{x_i\}$; in particular, $x_i$ is not a solution of (1.1), so $h_i(x_i) > 0$ for every $i$. By Step 3, it follows from the corresponding lemma in [14] that

$$\|x_{i+1} - x^*\|^2 \le \|x_i - x^*\|^2 - \operatorname{dist}^2(x_i, H_i),$$

where the last inequality is due to $x^* \in C \cap H_i$. It follows that the sequence $\{\|x_i - x^*\|\}$ is nonincreasing, and hence is a convergent sequence. Therefore, $\{x_i\}$ is bounded and

$$\lim_{i \to \infty} \operatorname{dist}(x_i, H_i) = 0. \tag{3.2}$$

By the boundedness of $\{x_i\}$, there exists a subsequence $\{x_{i_j}\}$ converging to some point $\bar{x}$.

If $\bar{x}$ is a solution of problem (1.1), we show next that the whole sequence $\{x_i\}$ converges to $\bar{x}$. Replacing $x^*$ by $\bar{x}$ in the preceding argument, we obtain that the sequence $\{\|x_i - \bar{x}\|\}$ is nonincreasing and hence converges. Since $\bar{x}$ is an accumulation point of $\{x_i\}$, some subsequence of $\{\|x_i - \bar{x}\|\}$ converges to zero. This shows that the whole sequence $\{\|x_i - \bar{x}\|\}$ converges to zero; hence $x_i \to \bar{x}$.

Suppose now that $\bar{x}$ is not a solution of problem (1.1). We show first that the stepsizes $\eta_{i_j}$ generated in Step 2 of Algorithm 2.2 cannot tend to zero. Since $F$ is continuous with compact values, the relevant result in [19] implies that $F$ maps the bounded set $\{x_{i_j}\}$ into a bounded set, and so the sequence $\{\xi_{i_j}\}$ is bounded. Therefore, there exists a subsequence converging to some $\bar{\xi}$. Since $F$ is upper semicontinuous with compact values, the corresponding result in [19] implies that the graph of $F$ is closed, and so $\bar{\xi} \in F(\bar{x})$. By the definition of $k_{i_j}$, the linesearch inequality (2.2) fails at the preceding trial stepsize. Letting $j \to \infty$, we obtain a contradiction with the continuity of $F$ and with the assumption that $\bar{x}$ is not a solution. Therefore, the sequence $\{k_{i_j}\}$ is bounded, and so the stepsizes $\{\eta_{i_j}\}$ are bounded away from zero.

It follows from (2.3) that each $h_i$ is an affine function whose gradient is the element $\zeta_i \in F(z_i)$ chosen in Step 2. Since the sequences $\{x_i\}$ and $\{z_i\}$ are bounded, the sequence $\{\zeta_i\}$ is bounded as well; thus, for some $M > 0$, $\|\zeta_i\| \le M$ for all $i$. Therefore, each function $h_i$ is Lipschitz continuous on $\mathbb{R}^n$ with modulus $M$. Noting that $h_i(x_i) > 0$ and applying Lemma 2.7, we obtain that $\operatorname{dist}(x_i, H_i) \ge h_i(x_i)/M$. It follows from (3.8) and Lemma 2.8 that $\operatorname{dist}(x_i, H_i)$ is bounded below by a positive multiple of $\eta_i \|x_i - P_C(x_i - \mu \xi_i)\|^2$. Then (3.2) implies that

$$\lim_{i \to \infty} \eta_i \|x_i - P_C(x_i - \mu \xi_i)\|^2 = 0.$$

Since the stepsizes $\{\eta_{i_j}\}$ are bounded away from zero, we obtain that $\|x_{i_j} - P_C(x_{i_j} - \mu \xi_{i_j})\| \to 0$. Since $P_C$ is continuous and the sequences $\{x_i\}$ and $\{\xi_i\}$ are bounded, there exists an accumulation point $(\hat{x}, \hat{\xi})$ of $\{(x_{i_j}, \xi_{i_j})\}$ such that $\hat{x} = P_C(\hat{x} - \mu \hat{\xi})$. This implies that $\hat{x}$ solves the variational inequality (1.1). Similar to the preceding argument, we then obtain that the whole sequence $\{x_i\}$ converges to a solution of (1.1).

4. Numerical Experiments

In this section, we present some numerical experiments for the proposed algorithm. The MATLAB codes are run on a PC (with an Intel Pentium T2390 CPU) under MATLAB Version 7.0.1.24704 (R14) Service Pack 1. We compare the performance of our Algorithm 2.2 with that of [15, Algorithm 1]. In Tables 1 and 2, "It." denotes the number of iterations and "CPU" denotes the CPU time in seconds. The tolerance $\varepsilon$ means that the procedure stops as soon as the residual norm $\|x_i - P_C(x_i - \mu \xi_i)\|$ falls below $\varepsilon$.
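As a rough indication of how such statistics can be collected, the following small harness (ours; it does not reproduce the paper's actual MATLAB experiments) records the iteration count returned by a solver of the kind sketched in Section 2 and measures the elapsed CPU time for a given tolerance; the name run_experiment and the usage shown in the comments are illustrative assumptions.

import time
import numpy as np

def run_experiment(solver, F, x0, lo, hi, tol):
    # Run the solver once with stopping tolerance tol; report the final iterate,
    # the iteration count ("It.") and the CPU time in seconds ("CPU").
    start = time.process_time()
    x, iters = solver(F, x0, lo, hi, tol=tol)
    cpu = time.process_time() - start
    return x, iters, cpu

# Hypothetical usage with the projection_method sketch from Section 2:
#   a = np.array([0.3, -0.2, 0.7])
#   x, iters, cpu = run_experiment(projection_method, lambda x: x - a,
#                                  np.zeros(3), np.zeros(3), np.ones(3), tol=1e-6)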
Table 1

Example 4.1.

                    Algorithm 2.2                 [15, Algorithm 1]
  ε                 It. (num.)    CPU (sec.)      It. (num.)    CPU (sec.)
                    55            0.625           74            0.984375
                    39            0.546875        51            0.75
                    23            0.4375          27            0.5

Table 2

Example 4.2.

                                  Algorithm 2.2                 [15, Algorithm 1]
  Initial point     ε             It. (num.)    CPU (sec.)      It. (num.)    CPU (sec.)
  (0,0,0,1)                       53            0.75            61            0.90625
  (0,0,1,0)                       47            0.625           79            1.28125
  (0.5,0,0.5,0)                   42            0.53125         76            1.28125
  (0,0,0,1)                       42            0.625           43            0.671875
  (0,0,1,0)                       35            0.53125         56            0.921875
  (0.5,0,0.5,0)                   31            0.5             53            0.890625

Example 4.1.

Then the set $C$ and the mapping $F$ satisfy the assumptions of Theorem 3.1, and $(0,0,1)$ is a solution of the generalized variational inequality. Example 4.1 was also tested in [15]. We choose the parameters $\sigma$, $\gamma$, and $\mu$ for our algorithm, the corresponding parameter values for Algorithm 1 in [15], and the same initial point for both methods.

Example 4.2.

Then the set $C$ and the mapping $F$ satisfy the assumptions of Theorem 3.1, and $(1,0,0,0)$ is a solution of the generalized variational inequality. We choose the same parameter values for the two algorithms.


Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (no. 10701059), by the Sichuan Youth Science and Technology Foundation (no. 06ZQ026-013), and by the Natural Science Foundation Project of CQ CSTC (no. 2008BB7415).

References

  1. Allevi E, Gnudi A, Konnov IV: The proximal point method for nonmonotone variational inequalities. Mathematical Methods of Operations Research 2006, 63(3):553–565. doi:10.1007/s00186-005-0052-2
  2. Auslender A, Teboulle M: Lagrangian duality and related multiplier methods for variational inequality problems. SIAM Journal on Optimization 2000, 10(4):1097–1115. doi:10.1137/S1052623499352656
  3. Bao TQ, Khanh PQ: A projection-type algorithm for pseudomonotone nonlipschitzian multivalued variational inequalities. In Generalized Convexity, Generalized Monotonicity and Applications, Nonconvex Optimization and Its Applications. Volume 77. Springer, New York, NY, USA; 2005:113–129. doi:10.1007/0-387-23639-2_6
  4. Ceng LC, Mastroeni G, Yao JC: An inexact proximal-type method for the generalized variational inequality in Banach spaces. Journal of Inequalities and Applications 2007, 2007:14 pages.
  5. Fang SC, Peterson EL: Generalized variational inequalities. Journal of Optimization Theory and Applications 1982, 38(3):363–383. doi:10.1007/BF00935344
  6. Fukushima M: The primal Douglas-Rachford splitting algorithm for a class of monotone mappings with application to the traffic equilibrium problem. Mathematical Programming 1996, 72(1):1–15. doi:10.1007/BF02592328
  7. He Y: Stable pseudomonotone variational inequality in reflexive Banach spaces. Journal of Mathematical Analysis and Applications 2007, 330(1):352–363. doi:10.1016/j.jmaa.2006.07.063
  8. Saigal R: Extension of the generalized complementarity problem. Mathematics of Operations Research 1976, 1(3):260–266. doi:10.1287/moor.1.3.260
  9. Salmon G, Strodiot J-J, Nguyen VH: A bundle method for solving variational inequalities. SIAM Journal on Optimization 2003, 14(3):869–893.
  10. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization 1976, 14(5):877–898. doi:10.1137/0314056
  11. Konnov IV: On the rate of convergence of combined relaxation methods. Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika 1993, (12):89–92.
  12. Konnov IV: Combined Relaxation Methods for Variational Inequalities, Lecture Notes in Economics and Mathematical Systems. Volume 495. Springer, Berlin, Germany; 2001:xii+181.
  13. Konnov IV: Combined relaxation methods for generalized monotone variational inequalities. In Generalized Convexity and Related Topics, Lecture Notes in Economics and Mathematical Systems. Volume 583. Springer, Berlin, Germany; 2007:3–31.
  14. He Y: A new double projection algorithm for variational inequalities. Journal of Computational and Applied Mathematics 2006, 185(1):166–173. doi:10.1016/j.cam.2005.01.031
  15. Li F, He Y: An algorithm for generalized variational inequality with pseudomonotone mapping. Journal of Computational and Applied Mathematics 2009, 228(1):212–218. doi:10.1016/j.cam.2008.09.014
  16. Solodov MV, Svaiter BF: A new projection method for variational inequality problems. SIAM Journal on Control and Optimization 1999, 37(3):765–776. doi:10.1137/S0363012997317475
  17. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York, NY, USA; 2003.
  18. Karamardian S: Complementarity problems over cones with monotone and pseudomonotone maps. Journal of Optimization Theory and Applications 1976, 18(4):445–454. doi:10.1007/BF00932654
  19. Aubin J-P, Ekeland I: Applied Nonlinear Analysis, Pure and Applied Mathematics. John Wiley & Sons, New York, NY, USA; 1984:xi+518.

Copyright information

© C. Fang and Y. He. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

  1. Department of Mathematics, Sichuan Normal University, Chengdu, China
  2. Institute of Applied Mathematics, Chongqing University of Posts and Telecommunications, Chongqing, China
