
A two-level domain decomposition algorithm for linear complementarity problem

Open Access Research

Abstract

In this paper, a two-level domain decomposition algorithm for the linear complementarity problem (LCP) is introduced. The algorithm generates inner and outer approximation sequences to the solution of the LCP. The algorithm is proved to be convergent and reaches the solution of the problem in finitely many steps. Some simple numerical results are presented to show the effectiveness of the proposed algorithm.

Keywords

Linear complementarity problem, obstacle problem, domain decomposition method, nonlinear complementarity problem, outer approximation

1 Introduction

In this paper, we consider the following linear complementarity problem (LCP) of finding $u \in \mathbb{R}^n$ such that
$$u \ge 0, \qquad F(u) \ge 0, \qquad u^T F(u) = 0,$$
(1.1)

where $F(u) = Au + b$, $A$ is an M-matrix, and $b \in \mathbb{R}^n$ is a given vector.
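For concreteness, the three conditions in (1.1) can be checked numerically through the equivalent residual $\min(u, F(u)) = 0$. The following sketch uses a hypothetical $2 \times 2$ M-matrix instance; the data $A$, $b$ are illustrative and not taken from the paper:

```python
# Hypothetical 2x2 instance: A is an M-matrix (positive diagonal,
# non-positive off-diagonals), F(u) = A u + b.
A = [[2.0, -1.0],
     [-1.0, 2.0]]
b = [-1.0, 1.0]

def F(u):
    # F(u) = A u + b, computed row by row
    return [sum(A[i][j] * u[j] for j in range(len(u))) + b[i]
            for i in range(len(u))]

def lcp_residual(u):
    # u solves the LCP iff min(u_i, F_i(u)) = 0 for every component i
    return max(abs(min(ui, fi)) for ui, fi in zip(u, F(u)))
```

For this instance $u = (0.5, 0)$ satisfies all three conditions of (1.1), so its residual is zero.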

The LCP covers a wide class of problems and has many applications in fields such as physics, optimal control, and economics. As a result of these broad applications, the literature in this field has benefited from contributions by mathematicians, computer scientists, engineers of many kinds, and economists of diverse expertise. There are many surveys and special volumes (see, e.g., [1, 2, 3] and the references therein).

Domain decomposition techniques have been widely used to solve PDEs since the 1980s. This kind of technique attracts much attention since it is portable and easy to parallelize on parallel machines. It has been applied to various linear and nonlinear variational inequality problems, and the numerical results show that it is efficient; see, for example, [4, 5, 6]. It comprises many algorithms, such as the classical additive Schwarz method (AS), the multiplicative Schwarz method (MS), the restricted additive Schwarz method (RAS), and so on. In [7], a variant of the Schwarz algorithm, called the two-level additive Schwarz algorithm (TLAS), was proposed for the solution of a kind of linear obstacle problem. This method divides the original problem into subproblems in an 'efficient' way: the domain is decomposed in a different way at each step, and the dimensions of the subproblems are lower than that of the original problem. The numerical results show that the TLAS is effective. In [8], the TLAS was extended to the nonlinear complementarity problem with an M-function. That algorithm makes it possible to apply fast nonlinear solvers to the subproblems, and the choice of the initial iterate is much easier than for the TLAS. Another efficient way to solve problem (1.1) is given by semismooth Newton methods (see, e.g., [9, 10]). These methods are attractive because they converge rapidly from any sufficiently good initial iterate, and the subproblems are systems of equations. An active set strategy is also an efficient way to solve discrete obstacle problems; see, for example, [11, 12, 13]. Based on some kind of active set strategy, the discrete obstacle problem can be reduced to a sequence of linear problems, which are then solved by efficient linear solvers.
In this paper, we combine the idea of the active set strategy with that of the TLAS, i.e., constructing inner and outer approximation sequences to the solution of the LCP, and present a two-level domain decomposition algorithm (TLDD). As we will see in the sequel, the main difference between the TLDD and the TLAS discussed in [7] lies in the way of constructing the outer approximation of the solution. Moreover, thanks to the active set idea, the TLDD may be more easily extended to other problems, such as the bilateral obstacle problem.

The rest of the paper is organized as follows. In Section 2, we give some preliminaries and present a two-level domain decomposition algorithm for problem (1.1). In Section 3, we discuss the convergence of the algorithm proposed in Section 2. In Section 4, we report some simple numerical results.

2 Preliminaries and two-level domain decomposition algorithm

In this section, we give some preliminaries and present a two-level domain decomposition algorithm for solving problem (1.1).

Firstly, similarly to [7, 8], we introduce two operators which will be useful in the construction of the algorithm in this paper. Let $N = \{1, 2, \ldots, n\}$, and let $I$, $J$ be a nonoverlapping decomposition of $N$, that is, $N = I \cup J$ and $I \cap J = \emptyset$. For any $v \in \mathbb{R}^n$, we introduce the following linear problem of finding $w \in \mathbb{R}^n$ such that
$$w_I = v_I, \qquad F_J(w) = 0,$$
(2.1)
where $v_I$ denotes the subvector of $v$ with elements $v_j$ ($j \in I$). Similar notation will be used in the sequel. We denote linear system (2.1) above by the operator form
$$w = G_J(v).$$
Similarly, we introduce the following problem of finding $w \in \mathbb{R}^n$ such that
$$w_I = v_I, \qquad \min\{F_J(w), w_J\} = 0.$$
(2.2)
We denote nonlinear problem (2.2) above by the operator form
$$w = T_J(v).$$
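In matrix terms, the two operators can be sketched as follows. The Gauss-Seidel-type inner loops and the sweep count are illustrative choices for the sketch, not solvers prescribed by the paper:

```python
def G(J, v, A, b, sweeps=200):
    # G_J(v): keep w_I = v_I fixed (I is the complement of J) and solve
    # the linear equations F_J(w) = 0 by Gauss-Seidel sweeps over J.
    w = list(v)
    for _ in range(sweeps):
        for i in J:
            r = b[i] + sum(A[i][j] * w[j] for j in range(len(w)) if j != i)
            w[i] = -r / A[i][i]
    return w

def T(J, v, A, b, sweeps=200):
    # T_J(v): keep w_I = v_I fixed and solve the sub-LCP
    # min(F_J(w), w_J) = 0 by projected Gauss-Seidel sweeps over J.
    w = list(v)
    for _ in range(sweeps):
        for i in J:
            r = b[i] + sum(A[i][j] * w[j] for j in range(len(w)) if j != i)
            w[i] = max(0.0, -r / A[i][i])
    return w
```

With $J = N$, the operator $T_N$ solves the full LCP; with $J \subsetneq N$, both operators only touch the components in $J$.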

Theorem 2.1 [1]

Problem (1.1) is equivalent to the following variational inequality of finding $u \in \mathbb{R}^n_+$ such that
$$(F(u), v - u) \ge 0, \quad \forall v \in \mathbb{R}^n_+.$$
(2.3)

Theorem 2.2 [8]

The solution of problem (1.1), or equivalently (2.3), is unique, and it is the minimal element of $S$, where $S$ is the supersolution set of problem (1.1), defined by
$$S = \{v \in \mathbb{R}^n : v \ge 0 \text{ and } F(v) \ge 0\}.$$

Similarly, we have the following theorem.

Theorem 2.3 The solution of problem (1.1), or equivalently (2.3), is unique, and it is the maximal element of $U$, where $U$ is the subsolution set of problem (1.1), defined by
$$U = \{v \in \mathbb{R}^n : v \ge 0 \text{ and } \min\{v, F(v)\} \le 0\}.$$

Based on Theorems 2.1 and 2.2, we can construct the following additive Schwarz algorithm for LCP (1.1).

Algorithm 2.1 (Additive Schwarz algorithm with two subdomains)

Let $I$ and $J$ be a decomposition of $N$, i.e., $I \cup J = N$. Given $u^0 \in S$, for $k = 0, 1, \ldots$, do the following two steps until convergence.

Step 1: Solve the following two subproblems in parallel:
$$\text{find } u^{k,1} \in K^{k,1} \text{ such that } \big(F_I(u^{k,1}), v_I - u^{k,1}_I\big) \ge 0, \quad \forall v \in K^{k,1},$$
$$\text{find } u^{k,2} \in K^{k,2} \text{ such that } \big(F_J(u^{k,2}), v_J - u^{k,2}_J\big) \ge 0, \quad \forall v \in K^{k,2},$$
where
$$K^{k,1} = \{v \in \mathbb{R}^n : v \ge 0, (v - u^k)_{N \setminus I} = 0\}, \qquad K^{k,2} = \{v \in \mathbb{R}^n : v \ge 0, (v - u^k)_{N \setminus J} = 0\}.$$

Here we define $N \setminus I = \{j \in N : j \notin I\}$ for any subset $I$ of $N$.

Step 2: $u^{k+1} = \min(u^{k,1}, u^{k,2})$, where 'min' is understood componentwise.
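A minimal sketch of Algorithm 2.1 on a hypothetical $4 \times 4$ instance follows. The data and the projected Gauss-Seidel subproblem solver are illustrative assumptions; each subproblem fixes the iterate outside its index block and solves the restricted LCP on the block:

```python
A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
b = [-1.0, 0.5, 0.5, -1.0]
n = 4

def sub_lcp(idx, v, sweeps=100):
    # Restricted LCP: v is kept fixed outside idx; on idx we enforce
    # min(w_i, F_i(w)) = 0 by projected Gauss-Seidel.
    w = list(v)
    for _ in range(sweeps):
        for i in idx:
            r = b[i] + sum(A[i][j] * w[j] for j in range(n) if j != i)
            w[i] = max(0.0, -r / A[i][i])
    return w

def schwarz_step(u, I, J):
    u1 = sub_lcp(I, u)                           # subproblem on K^{k,1}
    u2 = sub_lcp(J, u)                           # subproblem on K^{k,2}
    return [min(a, c) for a, c in zip(u1, u2)]   # Step 2: componentwise min

u = [2.0, 2.0, 2.0, 2.0]        # a supersolution: u >= 0 and F(u) >= 0
I, J = [0, 1, 2], [1, 2, 3]     # overlapping decomposition of N
for _ in range(60):
    u = schwarz_step(u, I, J)
```

In line with Theorem 2.4 below, the iterates decrease monotonically from the supersolution toward the solution, here $u = (0.5, 0, 0, 0.5)$.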

Similarly to the proof of Theorem 2.4 in [8], we have the following convergence theorem for Algorithm 2.1.

Theorem 2.4 Let the sequence $\{u^k\}$ be generated by Algorithm 2.1. For $k = 0, 1, \ldots$, we have

(a) $u^{k,i} \le u^k$, $i = 1, 2$, and then $u^{k+1} \le u^k$;

(b) $u^{k,i} \in S$, $i = 1, 2$, and then $u^{k+1} \in S$;

(c) $\lim_{k \to \infty} u^k = u$,

where $u$ is the solution of problem (1.1).

In what follows, we let $N_0 = \{j \in N : u_j = 0\}$ and $N_+ = \{j \in N : u_j > 0\}$, where $u$ is the solution of problem (1.1). If $u^0 \in S$, then the sequence $\{u^k\}$ generated by Algorithm 2.1 stays in $S$, monotonically decreases, and converges to the solution. Hence, if we define the coincidence set of $u^k$ by
$$I^k = \{j \in N : u^k_j = 0\},$$
(2.4)
we have by the monotonicity of $\{u^k\}$ that
$$I^k \subset I^{k+1} \subset N_0, \quad k = 0, 1, \ldots.$$

Actually, this gives inner approximations of the coincidence set $N_0$.

There are many algorithms based on an active set strategy. Based on some kind of criterion, the index set is divided into two parts, the active set and the inactive set, and one only needs to solve the reduced linear system associated with the inactive set. We draw on the active set strategy to derive outer approximations of the coincidence set. To be precise, we define
$$O^k = \{j \in N : w^k_j = 0 \text{ and } F_j(w^k) \ge 0\}, \qquad L^k = N \setminus O^k, \quad k = 0, 1, \ldots,$$
(2.5)
and define $C^k$ as
$$C^k = N \setminus (I^k \cup L^k).$$
(2.6)
$C^k$ may contain elements of both $N_0$ and $N_+$, so it is called the critical subset. Let
$$\hat{C}^k = C^k \cup H^k,$$
(2.7)

where $H^k$ is a subset of $N$ corresponding to an overlapping of the subsets associated with $L^k$ and $C^k$; that is, $H^k \subset L^k$ and $H^k = \hat{C}^k \cap L^k$.
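The bookkeeping in (2.4)-(2.7) can be sketched as follows. The particular choice of the overlap set $H^k$ (here, the smallest indices of $L^k$) is a hypothetical one, since the paper leaves this choice free:

```python
def index_sets(u_k, w_k, Fw_k, overlap=1):
    # u_k: inner iterate, w_k: outer iterate, Fw_k: F(w_k) componentwise.
    n = len(u_k)
    I = {j for j in range(n) if u_k[j] == 0.0}                     # (2.4) inner approx. of N_0
    O = {j for j in range(n) if w_k[j] == 0.0 and Fw_k[j] >= 0.0}  # (2.5) outer approx. of N_0
    L = set(range(n)) - O                                          # (2.5) L^k = N \ O^k
    C = set(range(n)) - (I | L)                                    # (2.6) critical subset
    H = set(sorted(L)[:overlap])                                   # hypothetical overlap choice
    return I, L, C, C | H                                          # (2.7) C_hat = C union H
```

Note that $I^k \subset O^k$ once both approximate $N_0$ from inside and outside, so $N$ splits into $I^k$, $C^k$, and $L^k$.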

Now we are ready to present the two-level domain decomposition algorithm for problem (1.1).

Algorithm 2.2 (Two-level domain decomposition algorithm)

1. Initialization ($k := 0$):

(a) Choose an initial $u^0$ and $w$ such that $u^0 \in S$ and $w \in U$. Define the coincidence set $I^0$ according to (2.4).

(b) Solve for $w^0$ such that
$$\begin{cases} w^0_i = 0, & i \in I^0 \text{ or } (w_i = 0 \text{ and } F_i(w) \ge 0), \\ F_i(w^0) = 0, & \text{otherwise}, \end{cases}$$
(2.8)
and define $L^0$, $C^0$ and $\hat{C}^0$ according to (2.5), (2.6) and (2.7), respectively.

2. Iteration step:

(a) Inner approximation (additive Schwarz algorithm with two subdomains). Solve the following two subproblems in parallel:

(i) the subproblem defined by the obstacle problem
$$u^{k,1} = T_{\hat{C}^k}(u^k);$$

(ii) the subproblem defined by the linear equation
$$u^{k,2} = G_{L^k}(u^k).$$
(2.9)

Let $u^{k+1} = \min(u^{k,1}, u^{k,2})$ and define the coincidence set $I^{k+1}$ according to (2.4).

(b) Outer approximation. Solve the linear system
$$\begin{cases} w^{k+1}_i = 0, & i \in I^{k+1} \text{ or } (w^k_i = 0 \text{ and } F_i(w^k) \ge 0), \\ F_i(w^{k+1}) = 0, & \text{otherwise}. \end{cases}$$
(2.10)

If $F(w^{k+1}) \ge 0$, then stop; $w^{k+1}$ is the solution. Otherwise, define $L^{k+1}$ and $\hat{C}^{k+1}$ according to (2.5) and (2.7), respectively, let $k := k + 1$, and return to step 2.
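The outer-approximation step (2.10) amounts to fixing a zero set and solving a linear system on its complement. A self-contained sketch on a hypothetical $4 \times 4$ instance follows (the data and the Gauss-Seidel solve are illustrative choices):

```python
A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
b = [-1.0, 0.5, 0.5, -1.0]
n = 4

def F(w):
    return [sum(A[i][j] * w[j] for j in range(n)) + b[i] for i in range(n)]

def outer_step(I_next, w_k, sweeps=200):
    # Fixed at zero: the coincidence set I^{k+1} plus the indices where
    # the previous outer iterate satisfies w_i = 0 and F_i(w) >= 0.
    # Elsewhere solve F_i(w) = 0, here by Gauss-Seidel sweeps.
    Fw = F(w_k)
    fixed = set(I_next) | {i for i in range(n) if w_k[i] == 0.0 and Fw[i] >= 0.0}
    w = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            if i in fixed:
                continue
            r = b[i] + sum(A[i][j] * w[j] for j in range(n) if j != i)
            w[i] = -r / A[i][i]
    return w

w1 = outer_step({1, 2}, [0.0, 0.0, 0.0, 0.0])
# Stopping test of Algorithm 2.2: if F(w1) >= 0, then w1 is the solution.
```

In this instance the step already produces $w^1 = (0.5, 0, 0, 0.5)$ with $F(w^1) \ge 0$, so the stopping test fires immediately.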

Remark 2.1 The subproblems in the PSOR method and in the classical Schwarz algorithm are obstacle problems, while subproblems (2.9) and (2.10) can be solved by fast linear solvers.

Remark 2.2 The difference between Algorithm 2.2 and Algorithm 3.3 in [7] lies in the way of generating the outer approximation sequence. Algorithm 3.3 in [7] seems difficult to extend to other problems, such as the bilateral obstacle problem, while the idea behind Algorithm 2.2 may be applied to other problems more easily.

3 The convergence of Algorithm 2.2

In this section, we analyze the convergence of Algorithm 2.2. First, we introduce some lemmas.

Lemma 3.1 Let $u^0 \in S$, $w \in U$, and let $w^0$ be defined by (2.8). Then we have $0 \le w \le w^0$ and $w^0 \in U$.

Proof Let $\hat{I} = \hat{I}_1 \cup \hat{I}_2$, where $\hat{I}_1 = \{i : i \in I^0\}$, $\hat{I}_2 = \{i : w_i = 0 \text{ and } F_i(w) \ge 0\}$, and let $\hat{J} = N \setminus \hat{I}$. By the definition of $w^0$, we have $w^0_{\hat{I}_2} = w_{\hat{I}_2} = 0$. By Theorems 2.2 and 2.3, we have $u^0 \ge w \ge 0$. Hence, if $i \in \hat{I}_1$, we have $w_i = 0$. Then $w^0_{\hat{I}} = w_{\hat{I}} = 0$. Since $w \in U$, we have $F_{\hat{J}}(w) \le F_{\hat{J}}(w^0) = 0$. Hence, noting that $F(u) = Au + b$ and $A$ is an M-matrix, we have $0 \le w \le w^0$. Moreover, since $w^0 \ge 0$, $w^0_{\hat{I}} = 0$ and $F_{\hat{J}}(w^0) = 0$, we have $\min\{w^0, F(w^0)\} \le 0$, i.e., $w^0 \in U$. This completes the proof. □

Lemma 3.2 Let $u^0 \in S$, and let the subsets $L^0$ and $\hat{C}^0$ be defined by (2.5) and (2.7), respectively. Then
$$u^{0,1} = T_{\hat{C}^0}(u^0) \in S, \qquad u^{0,1} \le u^0,$$
(3.1)
$$u^{0,2} = G_{L^0}(u^0) \in S, \qquad u^{0,2} \le u^0,$$
(3.2)
$$u^1 = \min(u^{0,1}, u^{0,2}) \in S,$$
(3.3)
$$u \le u^1 \le u^0.$$
(3.4)
Proof Relation (3.1) can be obtained directly from Theorem 2.4. By (2.9), we have
$$F_{L^0}(u^{0,2}) = 0, \qquad u^{0,2}_{N \setminus L^0} = u^0_{N \setminus L^0}.$$
(3.5)
Since $u^0 \in S$, we have
$$F_{L^0}(u^0) \ge 0.$$
Noticing that $F(u) = Au + b$ and $A$ is an M-matrix, (3.5) yields
$$u^{0,2} \le u^0.$$
(3.6)
We have by (3.5) and (3.6) that
$$F_{N \setminus L^0}(u^{0,2}) \ge F_{N \setminus L^0}(u^0) \ge 0.$$
(3.7)
Let $L^0 = L^0_1 \cup L^0_2$, where $L^0_1 = \{i \in L^0 : w^0_i = 0, F_i(w^0) < 0\}$ and $L^0_2 = \{i \in L^0 : w^0_i > 0\}$. Since $w^0 \in U$ and $w^0 \le u$, we have $u_i > 0$ for $i \in L^0_2$, and then $F_{L^0_2}(u) = 0$. For $i \in L^0_1$, we have $i \in I^0$, and then $u_i = 0$. It then follows from (3.5) and $u^0 \in S$ that
$$u^{0,2}_{N \setminus L^0_2} \ge u_{N \setminus L^0_2}, \qquad F_{L^0_2}(u^{0,2}) = F_{L^0_2}(u) = 0,$$
(3.8)

which means $u^{0,2} \ge u \ge 0$. This, together with (3.7) and (3.8), implies that $u^{0,2} \in S$. Therefore, (3.2) holds. Similarly to the proof of Theorem 2.4, we obtain (3.3) and (3.4). The proof is then completed. □

By Lemmas 3.1 and 3.2 and the principle of induction, we have $w^{k+1} \ge w^k \ge 0$ and $u^k \ge u^{k+1} \ge 0$, $k = 1, 2, \ldots$.

Lemma 3.3 $O^{k+1} \subset O^k$ and $I^k \subset I^{k+1} \subset N_0$, $k = 0, 1, \ldots$. If $F(w^{k+1}) \ge 0$, then $w^{k+1}$ is the solution.

Proof If $j \in O^{k+1}$, by the definition of $O^{k+1}$ we have $w^{k+1}_j = 0$ and $F_j(w^{k+1}) \ge 0$. Noting that $w^{k+1} \ge w^k \ge 0$, we have $w^k_j = 0$. If $F_j(w^k) < 0$, then, since $A$ is an M-matrix, we would have $F_j(w^{k+1}) \le F_j(w^k) < 0$, which is a contradiction. Hence $F_j(w^k) \ge 0$, so $j \in O^k$ and $O^{k+1} \subset O^k$. The relation $I^k \subset I^{k+1} \subset N_0$, $k = 0, 1, \ldots$, is obvious. Noting (2.10), it is also clear that if $F(w^k) \ge 0$ for some $k$, then $w^k$ is the solution. □

Theorem 3.4 The sequence generated by the two-level domain decomposition method (Algorithm 2.2) converges to the solution $u$ of problem (1.1) after a finite number of iterations.

Proof If $F(w^k) \ge 0$ for some $k$, then, by Lemma 3.3, $w^k$ is the solution, and $w^k = u$ since problem (1.1) has a unique solution. Otherwise, since $u^{k+1} = \min(u^{k,1}, u^{k,2})$, we have $I^{k,1} \subset I^{k+1}$ and hence $I^k \subset I^{k+1}$. Noting that $I^k \subset I^{k+1} \subset N_0$ and that $N$ is a finite index set, $I^{k+1} \supsetneq I^k$ can occur only finitely many times. By Lemma 3.3, we have $O^{k+1} \subset O^k$, and $O^{k+1} \subsetneq O^k$ can also occur only finitely many times. Hence, after finitely many steps, we have $I^{k+1} = I^k$ and $O^{k+1} = O^k$. By the definition of $w^{k+1}$, we then have $F_i(w^{k+1}) \ge 0$ for all $i \in N$. Hence, by Lemma 3.3, $w^{k+1}$ is the solution. This completes the proof. □

4 Numerical experiment

In this section, we present numerical experiments in order to investigate the efficiency of Algorithm 2.2. The programs are coded in Visual C++ 6.0 and run on a computer with a 2.0 GHz CPU. In the tests, we consider the following LCP:
$$u \ge 0, \qquad Au - b \ge 0, \qquad u^T(Au - b) = 0,$$
(4.1)
where
$$A = \frac{1}{h^2}\begin{pmatrix} H & -I & & \\ -I & H & \ddots & \\ & \ddots & \ddots & -I \\ & & -I & H \end{pmatrix}$$
and
$$H = \begin{pmatrix} 4 & -1 & & \\ -1 & 4 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 4 \end{pmatrix},$$

$h = 1/(m+1)$, and $b = (b_1, b_2, \ldots, b_n)^T$ is a given vector. In our test, we set $b_i = 0.5$ for $i = 1, 2, \ldots, n/2$ and $b_i = -0.5$ for $i = n/2 + 1, n/2 + 2, \ldots, n$.

The matrix $A$ is obtained by discretizing the operator $-\Delta u$ by the five-point difference scheme with a constant mesh step size $h = 1/(m+1)$, where $m$ denotes the number of mesh nodes in the x- or y-direction ($n = m^2$ is the total number of unknowns).
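Assuming the usual signs of the five-point stencil (negative off-diagonals, so that $A$ is an M-matrix), the test data can be assembled as follows for a small $m$; dense storage is used purely for illustration:

```python
# Sketch of the 2D five-point Laplacian test problem from Section 4
# on an m-by-m interior grid, so n = m*m and h = 1/(m+1).
def build_lcp(m):
    n = m * m
    h2 = (1.0 / (m + 1)) ** 2
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 4.0 / h2
        r, c = divmod(i, m)          # row-major grid numbering
        if c > 0:     A[i][i - 1] = -1.0 / h2   # west neighbor
        if c < m - 1: A[i][i + 1] = -1.0 / h2   # east neighbor
        if r > 0:     A[i][i - m] = -1.0 / h2   # south neighbor
        if r < m - 1: A[i][i + m] = -1.0 / h2   # north neighbor
    b = [0.5 if i < n // 2 else -0.5 for i in range(n)]
    return A, b
```

The `divmod` guards keep the stencil from wrapping around between consecutive grid rows.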

We compare the algorithms in terms of iteration numbers and CPU times. Here, we consider three algorithms: the classical additive Schwarz algorithm (i.e., Algorithm 2.1, denoted by AS), the semismooth Newton method proposed in [9] (denoted by SSN), and Algorithm 2.2 (denoted by TLDD). In the AS, we decompose $N$ into two equal parts with the overlapping size $O(\frac{1}{10})$. In all the algorithms considered, the subproblems that are obstacle problems are solved by PSOR with the same relaxation parameter $\omega = 1.4$, and the initial point is $u^0 = A^{-1}e$ with $e = (1, 1, \ldots, 1)^T$. In the AS, the tolerance for the subproblems is $10^{-3}$ in the $\ell^2$-norm, while in the outer iterative process it is $10^{-6}$ in the $\ell^2$-norm. In the TLDD, we choose the initial $w = 0$; the tolerance for the subproblems is $10^{-4}$ in the $\ell^2$-norm, while in the outer iterative process it is $10^{-6}$ in the $\ell^2$-norm. In the SSN, we choose $\epsilon = 10^{-6}$, $p = 3$, $\rho = 0.5$, $\beta = 0.3$, as defined in the procedure proposed in ([9], Section 7), and the initial point $u^0 = 0$.
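A PSOR sweep for $\min(u, Au + b) = 0$ with the relaxation parameter $\omega = 1.4$ used above can be sketched as follows; the tolerance and iteration cap are illustrative choices:

```python
# Projected SOR (PSOR) sketch for the LCP min(u, Au + b) = 0.
def psor(A, b, u0, omega=1.4, tol=1e-10, max_iter=10000):
    u = list(u0)
    n = len(u)
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            # r = F_i(u) at the current (partially updated) iterate
            r = b[i] + sum(A[i][j] * u[j] for j in range(n))
            new = max(0.0, u[i] - omega * r / A[i][i])  # SOR step + projection
            delta = max(delta, abs(new - u[i]))
            u[i] = new
        if delta < tol:
            break
    return u
```

With $\omega = 1$ the update reduces to projected Gauss-Seidel; over-relaxation ($1 < \omega < 2$) typically accelerates convergence on such M-matrix problems.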

We investigate the performance of each algorithm for different problem dimensions. Table 1 gives the iteration numbers and CPU times for the above-mentioned algorithms. From the table, we can easily see that the iteration numbers of TLDD are the fewest among the algorithms considered. The subproblems in AS are solved by PSOR, which takes very little time to find an approximate solution of the obstacle subproblems. In contrast, in order to solve the subproblems exactly, SSN and TLDD spend much more time on the related linear equations at each iteration step. This may explain why these two algorithms did not perform as well as we expected.
Table 1 Comparisons of iteration numbers and CPU times

  n        AS iter.   AS cpu    SSN iter.   SSN cpu    TLDD iter.   TLDD cpu
  100      51         0.015     5           0.015      3            0.047
  400      155        0.64      10          1.997      6            1.154
  900      318        6.567     12          27.378     9            17.755
  1,600    546        34.564    14          178.101    11           112.117

Concluding remark In this paper, we propose a new kind of domain decomposition method for the linear complementarity problem and establish its convergence. From the numerical results, we can see that this method needs fewer iterations to converge to the solution than the additive Schwarz method and SSN. There is still some interesting future work to be done. For example, since the main work of the TLDD is solving linear equations, we can study the effect of inexact solution of the related linear subproblems. It would also be interesting to extend the new method to other problems, such as the nonlinear complementarity problem and the bilateral obstacle problem. We leave this as a possible future research topic.


Acknowledgements

The work was supported by the Natural Science Foundation of Guangdong Province, China (Grant No. S2012040007993), the Educational Commission of Guangdong Province, China (Grant No. 2012LYM_0122), NSF (Grant No. 11126147) and NSF (Grant No. 11201197).

References

  1. Harker PT, Pang JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 1990, 48: 161-220. 10.1007/BF01582255
  2. Billups SC, Murty KG: Complementarity problems. J. Comput. Appl. Math. 2000, 124: 303-318. 10.1016/S0377-0427(00)00432-5
  3. Ferris MC, Mangasarian OL, Pang JS (Eds.): Complementarity: Applications, Algorithms and Extensions. Kluwer Academic, Dordrecht; 2001.
  4. Badea L, Wang JP: An additive Schwarz method for variational inequalities. Math. Comput. 1999, 69: 1341-1354. 10.1090/S0025-5718-99-01164-3
  5. Zeng JP, Zhou SZ: On monotone and geometric convergence of Schwarz methods for two-sided obstacle problems. SIAM J. Numer. Anal. 1998, 35: 600-616. 10.1137/S0036142995288920
  6. Jiang YJ, Zeng JP: Additive Schwarz algorithm for the nonlinear complementarity problem with M-function. Appl. Math. Comput. 2007, 190: 1007-1019. 10.1016/j.amc.2006.10.062
  7. Tarvainen P: Two-level Schwarz method for unilateral variational inequalities. IMA J. Numer. Anal. 1999, 19: 273-290. 10.1093/imanum/19.2.273
  8. Xu HR, Zeng JP, Sun Z: Two-level additive Schwarz algorithms for the nonlinear complementarity problem with an M-function. Numer. Linear Algebra Appl. 2010, 17: 599-613.
  9. De Luca T, Facchinei F, Kanzow C: A theoretical and numerical comparison of some semismooth algorithms for complementarity problems. Comput. Optim. Appl. 2000, 16: 173-205. 10.1023/A:1008705425484
  10. Li DH, Li Q, Xu HR: An almost smooth equation reformulation to the nonlinear complementarity problem and Newton's method. Optim. Methods Softw. 2012, 27: 969-981. 10.1080/10556788.2010.550288
  11. Hintermüller M, Ito K, Kunisch K: The primal-dual active set strategy as a semismooth Newton method. SIAM J. Optim. 2002, 13: 865-888. 10.1137/S1052623401383558
  12. Kunisch K, Rösch A: Primal-dual active set strategy for a general class of constrained optimal control problems. SIAM J. Optim. 2002, 13: 321-334. 10.1137/S1052623499358008
  13. Kärkkäinen T, Kunisch K, Tarvainen P: Augmented Lagrangian active set methods for obstacle problems. J. Optim. Theory Appl. 2003, 119: 499-533.

Copyright information

© Xie et al.; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

  1. School of Mathematics, Jiaying University, Meizhou, P.R. China
  2. College of Mathematics and Information Science, Jiangxi Normal University, Nanchang, P.R. China
