A two-level domain decomposition algorithm for linear complementarity problem
In this paper, a two-level domain decomposition algorithm for the linear complementarity problem (LCP) is introduced. The proposed algorithm generates inner and outer approximation sequences to the solution of the LCP. The algorithm is proved to be convergent and reaches the solution of the problem within finitely many steps. Some simple numerical results are presented to show the effectiveness of the proposed algorithm.
Keywords: linear complementarity problem; obstacle problem; domain decomposition method; nonlinear complementarity problem; outer approximation
where A is an M-matrix and the right-hand side vector is given.
The LCP covers a wide class of problems and has many applications in fields such as physics, optimal control, and economics. As a result of its broad applicability, the literature in this field has benefited from contributions by mathematicians, computer scientists, engineers of many kinds, and economists of diverse expertise. There are many surveys and special volumes (see, e.g., [1, 2, 3] and the references therein).
Domain decomposition techniques have been widely used to solve PDEs since the 1980s. These techniques attract much attention since they are portable and easy to parallelize on parallel machines. They have been applied to various linear and nonlinear variational inequality problems, and the numerical results show that they are efficient; see, for example, [4, 5, 6]. The family contains many algorithms, such as the classical additive Schwarz method (AS), the multiplicative Schwarz method (MS), the restricted additive Schwarz method (RAS), and so on. A variant of the Schwarz algorithm, called the two-level additive Schwarz algorithm (TLAS), was proposed for the solution of a kind of linear obstacle problem. This method divides the original problem into subproblems in an 'efficient' way: the domain is decomposed in a different way at each step, and the dimensions of the subproblems are lower than that of the original problem. The numerical results show that the TLAS is effective. The TLAS was later extended to the nonlinear complementarity problem with an M-function. That algorithm offers the possibility of applying fast nonlinear solvers to the subproblems, and the choice of the initial iterate is much easier than for the TLAS. Another efficient way to solve problem (1.1) is given by semismooth Newton methods (e.g., see [9, 10]). These methods are attractive because they converge rapidly from any sufficiently good initial iterate, and the subproblems are systems of equations. An active set strategy is also an efficient way to solve discrete obstacle problems; see, for example, [11, 12, 13]. Based on some kind of active set strategy, the discrete obstacle problem can be reduced to a sequence of linear problems, which are then solved by some efficient method.
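To make the semismooth Newton idea concrete, the following is a minimal sketch (our own illustration, not the specific method of [9, 10]) based on the standard nonsmooth reformulation F(u) = min(u, Au − f) = 0; the name f for the data vector and the particular choice of generalized Jacobian are assumptions:

```python
import numpy as np

def semismooth_newton_lcp(A, f, u0, tol=1e-10, maxit=50):
    """Semismooth Newton on F(u) = min(u, A u - f) = 0 (componentwise min),
    a standard nonsmooth reformulation of the LCP."""
    u = np.asarray(u0, dtype=float).copy()
    n = len(u)
    for _ in range(maxit):
        r = A @ u - f
        F = np.minimum(u, r)
        if np.max(np.abs(F)) < tol:
            break
        # An element of the generalized Jacobian: row i is the identity row
        # where u_i <= (A u - f)_i, and the i-th row of A otherwise.
        J = np.where((u <= r)[:, None], np.eye(n), A)
        u = u - np.linalg.solve(J, F)
    return u
```

On a small M-matrix example the iteration typically identifies the active set within a few steps, consistent with the fast local convergence described above.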
In this paper, we combine the idea of the active set strategy with that of the TLAS, i.e., constructing inner and outer approximation sequences to the solution of the LCP, and present a two-level domain decomposition algorithm. As we will see in the sequel, the main difference between the two-level domain decomposition algorithm (TLDD) and the TLAS lies in the way the outer approximation of the solution is constructed. Moreover, with the idea of an active set strategy, the TLDD may be extended more easily to other problems, such as the bilateral obstacle problem.
The rest of the paper is organized as follows. In Section 2, we give some preliminaries and present a two-level domain decomposition algorithm for problem (1.1). In Section 3, we discuss the convergence of the algorithm proposed in Section 2. In Section 4, we report some simple numerical results.
2 Preliminaries and two-level domain decomposition algorithm
In this section, we give some preliminaries and present a two-level domain decomposition algorithm for solving problem (1.1).
Theorem 2.1 
Theorem 2.2 
Similarly, we have the following theorem.
Based on Theorems 2.1 and 2.2, we can construct the following additive Schwarz algorithm for LCP (1.1).
Algorithm 2.1 (Additive Schwarz algorithm with two subdomains)
Let I and J be a decomposition of N, i.e., I ∪ J = N. Given an initial iterate, for k = 0, 1, 2, …, do the following two steps until convergence.
Here we define for any subset I of N.
Step 2: combine the two subdomain solutions by taking their minimum, where 'min' should be understood componentwise.
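The two steps above can be sketched as follows. This is an illustrative realization under our own assumptions (PSOR as the subdomain solver, the frozen components folded into the right-hand side, and a starting supersolution), not a definitive implementation of Algorithm 2.1:

```python
import numpy as np

def psor_lcp(A, f, u0, omega=1.5, tol=1e-10, maxit=10000):
    """Projected SOR for the LCP  u >= 0, A u - f >= 0, u.(A u - f) = 0."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(maxit):
        u_old = u.copy()
        for i in range(len(u)):
            # Gauss-Seidel value without the diagonal term, then project
            gs = (f[i] - A[i] @ u + A[i, i] * u[i]) / A[i, i]
            u[i] = max(0.0, (1 - omega) * u[i] + omega * gs)
        if np.max(np.abs(u - u_old)) < tol:
            break
    return u

def schwarz_step(A, f, u, I, J):
    """One additive-Schwarz step with two index sets: solve each restricted
    LCP with the remaining components of u frozen, then take the
    componentwise minimum of the two extended subdomain solutions."""
    n = len(u)
    def sub_solve(S):
        S = np.asarray(S)
        C = np.setdiff1d(np.arange(n), S)
        fs = f[S] - A[np.ix_(S, C)] @ u[C]   # fold frozen part into the rhs
        v = u.copy()
        v[S] = psor_lcp(A[np.ix_(S, S)], fs, u[S])
        return v
    return np.minimum(sub_solve(I), sub_solve(J))
```

In a small experiment, starting from a supersolution (u ≥ 0 with Au − f ≥ 0 componentwise), the iterates decrease monotonically toward the solution, which is the behavior the componentwise-minimum combination is designed to preserve.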
By an argument similar to the proof of Theorem 2.4, we have the following convergence theorem for Algorithm 2.1.
, and then ,
, and then ,
where u is the solution of problem (1.1).
Actually, this gives inner approximations of the coincidence set.
where is a subset of N corresponding to an overlapping of the subsets associated with and . That is and .
Now, we are ready to present the two-level domain decomposition algorithm for problem (1.1).
Algorithm 2.2 (Two-level domain decomposition algorithm)
Choose an initial , w such that and . Define the coincidence set according to (2.4).
- (b)Solve such that(2.8)
Inner approximation (additive Schwarz algorithm with two subdomains). Solve the following two subproblems in parallel: Let and define the coincidence set according to (2.4).
- (i)The subproblem defined by the following obstacle problem
- (ii)The subproblem defined by the following linear equation(2.9)
- (b)Outer approximation. Solve the linear system(2.10)
If , then stop; is the solution. Otherwise, define and according to (2.5) and (2.7), respectively, and let and return to step 2.
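Step (b) amounts to a linear solve with the iterate fixed at zero on the current estimate of the coincidence set. A generic reduced-space version of such an outer-approximation solve (an illustrative sketch under our naming and the zero-obstacle convention, not necessarily identical to (2.10)) is:

```python
import numpy as np

def reduced_solve(A, f, active):
    """Outer approximation in active-set style: fix u = 0 on the current
    estimate of the coincidence set and solve the remaining linear system
    on the free indices."""
    n = len(f)
    free = np.setdiff1d(np.arange(n), np.asarray(active))
    u = np.zeros(n)
    u[free] = np.linalg.solve(A[np.ix_(free, free)], f[free])
    return u
```

In this small setting, if the active set is guessed correctly, a single reduced solve already returns the exact LCP solution, which is the sense in which an active-set outer approximation can terminate finitely.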
Remark 2.1 The subproblems in the PSOR method and the classical Schwarz algorithm are obstacle problems, while the subproblems (2.9) and (2.10) can be solved using fast linear solvers.
Remark 2.2 The difference between Algorithm 2.2 and Algorithm 3.3 lies in the way the outer approximation sequence is generated. Algorithm 3.3 seems difficult to extend to other problems, such as the bilateral obstacle problem, while the idea of Algorithm 2.2 may be applied to other problems more easily.
3 The convergence of Algorithm 2.2
In this section, we analyze the convergence of Algorithm 2.2. First, we introduce some lemmas.
Lemma 3.1 Let , and be defined by (2.8). Then, we have and .
Proof Let , where , and . By the definition of , we have . By Theorems 2.2 and 2.3, we have . Hence if , we have . Then . Since , we have . Hence, noting that , and A is an M-matrix, we have . This completes the proof. □
which means . This, together with (3.7) and (3.8), implies that . Therefore, (3.2) holds. Similar to the proof in Theorem 2.4, we have (3.3) and (3.4). The proof is then completed. □
By Lemmas 3.1 and 3.2, and the principle of induction, we have , , .
Lemma 3.3 and , . If , is the solution.
Proof If , by the definition of , we have , . Noting that , we have . If , then, noting that A is an M-matrix, we have , which is a contradiction. Hence, and . The relation , is obvious. Noting (2.10), it is clear that if for some k such that , then is the solution. □
Theorem 3.4 The sequence generated by two-level domain decomposition method (Algorithm 2.2) converges to the solution u of problem (1.1) after a finite number of iterations.
Proof If for some k, , then by Lemma 3.1, is the solution, and since problem (1.1) has a unique solution. Otherwise, since , we have and hence . Noting that and that N is an index set with finitely many elements, can only occur in finitely many steps. By Lemma 3.3, we have , and likewise can only occur in finitely many steps. In this case, after finitely many steps, we have , . By the definition of , we have , . Hence, by Lemma 3.3, is the solution. This completes the proof. □
4 Numerical experiment
, is a given vector. In our test, we set , and , .
The matrix A may be obtained by discretizing the operator by using the five-point difference scheme with a constant mesh step size, where m denotes the number of mesh nodes in the x- or y-direction (m² is the total number of unknowns).
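For concreteness, such a matrix can be assembled as follows. This sketch assumes the discretized operator is the (negative) Laplacian and the mesh size is h = 1/(m+1); both are our assumptions, since only the stencil and m are specified above:

```python
import numpy as np

def five_point_laplacian(m):
    """Dense five-point finite-difference Laplacian on the unit square
    with m interior nodes per direction (assumed mesh size h = 1/(m+1))."""
    h = 1.0 / (m + 1)
    # 1D second-difference matrix: 2 on the diagonal, -1 off-diagonal
    T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    I = np.eye(m)
    # The Kronecker sum yields the 2D stencil [-1; -1 4 -1; -1] / h^2
    return (np.kron(I, T) + np.kron(T, I)) / h**2
```

The resulting matrix has a positive diagonal, nonpositive off-diagonal entries, and nonnegative row sums, i.e., the M-matrix sign pattern assumed for A throughout the paper.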
We compare the different algorithms from the point of view of iteration numbers and CPU times. Here, we consider three algorithms: the classical additive Schwarz algorithm (i.e., Algorithm 2.1, denoted by AS), the semismooth Newton method (denoted by SSN), and Algorithm 2.2 (denoted by TLDD). In the AS, we decompose N into two equal parts with overlapping size . In the algorithms considered, all subproblems related to obstacle problems are solved by PSOR with the same relaxation parameter , and the initial point is with . The tolerance in the subproblems of the algorithms is chosen to be 10^-3 in the -norm, while in the outer iterative processes it is chosen to be 10^-6 in the -norm. In the TLDD, we choose the initial . The tolerance in the subproblems is chosen to be 10^-4 in the -norm, while in the outer iterative processes it is chosen to be 10^-6 in the -norm. In the SSN, we choose , , , , defined by the procedure proposed in Section 7 of the corresponding reference. We choose the initial point .
Comparisons of iteration numbers and CPU times
Concluding remark In this paper, we propose a new kind of domain decomposition method for the linear complementarity problem and establish its convergence. From the numerical results, we can see that this method needs fewer iterations to converge to the solution than the additive Schwarz method and the SSN. There are still some interesting future works to be done. For example, as can be seen from the TLDD, the main work is solving the linear equations; one could discuss the effect of inexact solutions of the related linear subproblems. It is also interesting to extend the new method to some other problems, such as the nonlinear complementarity problem and the bilateral obstacle problem. We leave this as a possible future research topic.
The work was supported by the Natural Science Foundation of Guangdong Province, China (Grant No. S2012040007993), the Educational Commission of Guangdong Province, China (Grant No. 2012LYM_0122), NSF (Grant No. 11126147) and NSF (Grant No. 11201197).
3. Ferris MC, Mangasarian OL, Pang JS (eds): Complementarity: Applications, Algorithms and Extensions. Kluwer Academic, Dordrecht; 2001.