Self-Adaptive Implicit Methods for Monotone Variant Variational Inequalities


Abstract

The efficiency of the implicit method proposed by He (1999) depends heavily on its parameter $\beta$; the "suitable" value varies from problem to problem and is difficult to find. In this paper, we present a modified implicit method that adjusts the parameter $\beta$ automatically at each iteration, based on information from former iterates. To improve the performance of the algorithm, an inexact version is proposed, in which the subproblem is solved only approximately. Under conditions as mild as those for variational inequalities, we prove the global convergence of both the exact and the inexact versions of the new method. We also present several preliminary numerical results, which demonstrate that the self-adaptive implicit method, especially the inexact version, is efficient and robust.

Keywords

Variational inequality; Variational inequality problem; Nonempty closed convex subset; Cluster point; Implicit method

1. Introduction

Let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^n$ and let $F$ be a mapping from $\mathbb{R}^n$ into itself. The so-called finite-dimensional variant variational inequality, denoted by VVI$(\Omega,F)$, is to find a vector $x^*\in\mathbb{R}^n$ such that
$$F(x^*)\in\Omega,\qquad (x'-F(x^*))^{\top}x^*\ \ge\ 0,\quad \forall x'\in\Omega, \qquad (1.1)$$
while a classical variational inequality problem, abbreviated VI$(\Omega,f)$, is to find a vector $u^*\in\Omega$ such that
$$(u-u^*)^{\top}f(u^*)\ \ge\ 0,\quad \forall u\in\Omega, \qquad (1.2)$$

where $f$ is a mapping from $\mathbb{R}^n$ into itself.

Both VVI$(\Omega,F)$ and VI$(\Omega,f)$ serve as very general mathematical models for numerous applications arising in economics, engineering, transportation, and so forth. They include some widely applicable problems as special cases, such as mathematical programming problems, systems of nonlinear equations, and nonlinear complementarity problems. Thus, they have been extensively investigated. We refer the reader to the excellent monographs of Facchinei and Pang [1, 2] and the references therein for theoretical and algorithmic developments on VI$(\Omega,f)$, for example, [3, 4, 5, 6, 7, 8, 9, 10], and to [11, 12, 13, 14, 15, 16] for VVI$(\Omega,F)$.

It is observed that if $F$ is invertible then, by setting $u = F(x)$, VVI$(\Omega,F)$ can be reduced to VI$(\Omega,F^{-1})$, where $F^{-1}$ denotes the inverse mapping of $F$: once a solution $u^*$ of VI$(\Omega,F^{-1})$ is found, $x^* = F^{-1}(u^*)$ solves VVI$(\Omega,F)$. Thus, theoretically, all numerical methods for solving VI$(\Omega,f)$ can be used to solve VVI$(\Omega,F)$. However, in many practical applications, the inverse mapping $F^{-1}$ may not exist; and even when it exists, it is often not easy to find. Thus, there is a need to develop numerical methods for VVI$(\Omega,F)$ itself, and recently, Goldstein's type methods were extended from solving VI$(\Omega,f)$ to VVI$(\Omega,F)$ [12, 17].

In [11], He proposed an implicit method for solving general variational inequality problems. A general variational inequality problem is to find a vector $x^*\in\mathbb{R}^n$ such that
$$F(x^*)\in\Omega,\qquad (x'-F(x^*))^{\top}G(x^*)\ \ge\ 0,\quad \forall x'\in\Omega, \qquad (1.3)$$

where $G$ is a further mapping from $\mathbb{R}^n$ into itself. When $F$ is the identity mapping, it reduces to VI$(\Omega,G)$, and if $G$ is the identity mapping, it reduces to VVI$(\Omega,F)$. He's implicit method is as follows.

(S0) Given $\beta > 0$, $x^0\in\mathbb{R}^n$, and a positive definite matrix $H$.

(S1) Find $x^{k+1}$, the solution of the smooth equation
$$F(x)+\beta\,G(x)\ =\ \beta\,G(x^k)+P_\Omega\big[F(x^k)-\beta\,G(x^k)\big], \qquad (1.4)$$
that is, the zero point of
$$g_k(x)\ :=\ F(x)+\beta\,G(x)-\big\{\beta\,G(x^k)+P_\Omega[F(x^k)-\beta\,G(x^k)]\big\}, \qquad (1.5)$$

with $P_\Omega(\cdot)$ being the projection from $\mathbb{R}^n$ onto $\Omega$, under the Euclidean norm.

He's method is attractive since it solves the general variational inequality problem, which is essentially equivalent to the system of nonsmooth equations
$$F(x)-P_\Omega\big[F(x)-\beta\,G(x)\big]\ =\ 0, \qquad (1.6)$$
via solving a series of smooth equations (1.4). The mapping in the subproblem is well conditioned, and many efficient numerical methods, such as Newton's method, can be applied to solve it. Furthermore, to improve the efficiency of the algorithm, He [11] proposed to solve the subproblem approximately. That is, at Step 1, instead of finding a zero of $g_k$, it suffices to find a vector $x^{k+1}$ satisfying
$$\|g_k(x^{k+1})\|\ \le\ \varepsilon_k, \qquad (1.8)$$
where $\{\varepsilon_k\}$ is a nonnegative sequence. He proved the global convergence of the algorithm under the condition that the error tolerance sequence $\{\varepsilon_k\}$ satisfies
$$\sum_{k=0}^{\infty}\varepsilon_k\ <\ +\infty. \qquad (1.9)$$

In the above algorithm, there are two parameters, $\beta$ and $H$, which affect the efficiency of the algorithm. It was observed that different problems have different optimal values of $\beta$, so a suitable parameter $\beta$ is difficult to find for an individual problem. For solving variational inequality problems, He et al. [18] proposed to choose a sequence of parameters $\{\beta_k\}$, instead of a fixed parameter $\beta$, to improve the efficiency of the algorithm. Under the same conditions as those in [11], they proved the global convergence of the algorithm. The numerical results reported there indicated that, for any given initial parameter $\beta_0$, the algorithm can find a suitable parameter self-adaptively. This improves the efficiency of the algorithm greatly and makes it easy and robust to implement in practice.

In this paper, in a similar spirit to [18], we suggest a general rule for choosing a suitable parameter in the implicit method for solving VVI$(\Omega,F)$. By replacing the constant factor $\beta$ in (1.4) and (1.5) with a self-adaptively adjusted positive sequence $\{\beta_k\}$, the efficiency of the algorithm can be improved greatly. Moreover, the method is robust with respect to the initial choice of the parameter $\beta_0$; for any given problem, $\beta_0$ can be chosen arbitrarily. The algorithm then selects a suitable parameter self-adaptively, based on information from the former iteration, so it adds only a little computational cost over the original algorithm with fixed parameter $\beta$. To further improve the efficiency of the algorithm, we also admit approximate computation in solving the subproblem at each iteration; that is, per iteration, we need only find a vector $x^{k+1}$ that satisfies (1.8).

Throughout this paper, we make the following assumptions.

Assumption A.

The solution set of VVI$(\Omega,F)$, denoted by $\mathcal{X}^*$, is nonempty.

Assumption B.

The operator $F$ is monotone; that is, for any $x, y \in \mathbb{R}^n$,
$$(x-y)^{\top}\big(F(x)-F(y)\big)\ \ge\ 0.$$

The rest of this paper is organized as follows. In Section 2, we summarize some basic properties which are useful in the convergence analysis of our method. In Sections 3 and 4, we describe the exact version and inexact version of the method and prove their global convergence, respectively. We report our preliminary computational results in Section 5 and give some final conclusions in the last section.

2. Preliminaries

For a vector $x\in\mathbb{R}^n$ and a symmetric positive definite matrix $H\in\mathbb{R}^{n\times n}$, we denote by $\|x\|$ the Euclidean norm and by $\|x\|_H$ the matrix-induced norm, that is, $\|x\|_H := \sqrt{x^{\top}Hx}$.

Let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^n$, and let $P_{\Omega,H}(\cdot)$ denote the projection mapping from $\mathbb{R}^n$ onto $\Omega$, under the matrix-induced norm. That is,
$$P_{\Omega,H}(z)\ :=\ \operatorname*{argmin}_{y\in\Omega}\ \|z-y\|_H. \qquad (2.1)$$
It is known [12, 19] that the variant variational inequality problem (1.1) is equivalent to the projection equation
$$F(x)\ =\ P_{\Omega,H}\big[F(x)-\beta H^{-1}x\big], \qquad (2.2)$$

where $\beta$ is an arbitrary positive constant. Then, we have the following lemma.

Lemma 2.1.

A vector $x^*$ is a solution of VVI$(\Omega,F)$ if and only if $r(x^*,\beta)=0$ for any $\beta>0$, where
$$r(x,\beta)\ :=\ F(x)-P_{\Omega,H}\big[F(x)-\beta H^{-1}x\big] \qquad (2.3)$$
is the residual function of the projection equation (2.2).

Proof.

See [11, Theorem 1].
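To make the residual concrete, here is a minimal MATLAB sketch of $r(x,\beta)$ for the Euclidean case $H = I$; the handle names F and projOmega are our assumptions, standing for the mapping $F$ and the projector $P_\Omega$.

```matlab
% Residual of the projection equation (2.2) with H = I:
% r(x,beta) = F(x) - P_Omega(F(x) - beta*x); zero iff x solves VVI (Lemma 2.1).
function r = vviResidual(F, projOmega, x, beta)
    Fx = F(x);
    r  = Fx - projOmega(Fx - beta*x);
end
```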

The following lemma summarizes some basic properties of the projection operator, which will be used in the subsequent analysis.

Lemma 2.2.

Let $\Omega$ be a closed convex set in $\mathbb{R}^n$ and let $P_{\Omega,H}$ denote the projection operator onto $\Omega$ under the matrix-induced norm; then one has
$$\big(z-P_{\Omega,H}(z)\big)^{\top}H\,\big(y-P_{\Omega,H}(z)\big)\ \le\ 0,\qquad \forall z\in\mathbb{R}^n,\ \forall y\in\Omega; \qquad (2.4)$$
$$\|P_{\Omega,H}(z)-y\|_H^2\ \le\ \|z-y\|_H^2-\|z-P_{\Omega,H}(z)\|_H^2,\qquad \forall z\in\mathbb{R}^n,\ \forall y\in\Omega. \qquad (2.5)$$

The following lemma plays an important role in convergence analysis of our algorithm.

Lemma 2.3.

For any $x\in\mathbb{R}^n$ and $\tilde\beta \ge \beta > 0$, one has
$$\|r(x,\tilde\beta)\|_H\ \ge\ \|r(x,\beta)\|_H \qquad\text{and}\qquad \frac{\|r(x,\tilde\beta)\|_H}{\tilde\beta}\ \le\ \frac{\|r(x,\beta)\|_H}{\beta}. \qquad (2.6)$$

Proof.

See [20] for a simple proof.

Lemma 2.4.

Let $x^*\in\mathcal{X}^*$ be an arbitrary solution of VVI$(\Omega,F)$. Then for any $x\in\mathbb{R}^n$ and $\beta>0$,
$$\beta\,(x-x^*)^{\top}r(x,\beta)+\big(F(x)-F(x^*)\big)^{\top}H\,r(x,\beta)\ \ge\ \|r(x,\beta)\|_H^2. \qquad (2.7)$$

Proof.

Denote $\tilde F(x):=P_{\Omega,H}[F(x)-\beta H^{-1}x]=F(x)-r(x,\beta)\in\Omega$. It follows from the definition of VVI$(\Omega,F)$ (see (1.1)) that
$$\big(\tilde F(x)-F(x^*)\big)^{\top}x^*\ \ge\ 0, \qquad (2.8)$$
while setting $z=F(x)-\beta H^{-1}x$ and $y=F(x^*)\in\Omega$ in (2.4) gives
$$\big(r(x,\beta)-\beta H^{-1}x\big)^{\top}H\,\big(F(x^*)-\tilde F(x)\big)\ \le\ 0. \qquad (2.9)$$
Adding $\beta\times$(2.8) and $-$(2.9), and using the definition of $r(x,\beta)$ in (2.3), we get
$$\beta\,(x-x^*)^{\top}r(x,\beta)+\big(F(x)-F(x^*)\big)^{\top}H\,r(x,\beta)\ \ge\ \|r(x,\beta)\|_H^2+\beta\,(x-x^*)^{\top}\big(F(x)-F(x^*)\big), \qquad (2.10)$$
that is,
$$\beta\,(x-x^*)^{\top}r(x,\beta)+\big(F(x)-F(x^*)\big)^{\top}H\,r(x,\beta)\ \ge\ \|r(x,\beta)\|_H^2,$$

where the last inequality follows from the monotonicity of $F$ (Assumption B). This completes the proof.

3. Exact Implicit Method and Convergence Analysis

We are now in a position to describe our algorithm formally.

3.1. Self-Adaptive Exact Implicit Method

(S0) Given $\beta_0>0$, $x^0\in\mathbb{R}^n$, a stopping tolerance, and a positive definite matrix $H$; set $k:=0$.

(S1) Find $x^{k+1}$, the solution of
$$F(x)+\beta_k H^{-1}x\ =\ \beta_k H^{-1}x^k+P_{\Omega,H}\big[F(x^k)-\beta_k H^{-1}x^k\big]. \qquad (3.1)$$

(S2) If the given stopping criterion is satisfied, then stop; otherwise choose a new parameter $\beta_{k+1}=\gamma_k\beta_k$, where $\gamma_k$ satisfies
$$\frac{1}{1+\tau_k}\ \le\ \gamma_k\ \le\ 1+\tau_k,\qquad \tau_k\ge 0,\quad \sum_{k=0}^{\infty}\tau_k<+\infty. \qquad (3.2)$$

Set $k:=k+1$ and go to Step 1.

From (3.1), we know that $x^{k+1}$ is the (exact) unique zero of
$$g_k(x)\ :=\ F(x)+\beta_k H^{-1}x-\big\{\beta_k H^{-1}x^k+P_{\Omega,H}[F(x^k)-\beta_k H^{-1}x^k]\big\}; \qquad (3.3)$$
the zero point exists and is unique because $F$ is monotone and $H^{-1}$ is positive definite, so that $g_k$ is strongly monotone.

We refer to the above method as the self-adaptive exact implicit method.
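As an illustration, here is a compact MATLAB sketch of the resulting outer loop for $H = I$; solveSub stands for any solver for the strongly monotone equation (3.1) (Section 5 uses Newton's method), updateBeta implements the self-adaptive rule (3.13) given below, and all names and the infinity-norm stopping test are our assumptions, not the authors' code.

```matlab
% Self-adaptive exact implicit method (Section 3), sketched with H = I.
% F, projOmega, solveSub are user-supplied function handles.
function x = selfAdaptiveImplicit(F, projOmega, solveSub, x, beta, tol, maxit)
    for k = 1:maxit
        Fx = F(x);
        r  = Fx - projOmega(Fx - beta*x);            % residual r(x,beta), cf. (2.3)
        if norm(r, inf) <= tol, break; end           % stopping criterion in (S2)
        q    = beta*x + projOmega(Fx - beta*x);      % constant right-hand side of (3.1)
        xnew = solveSub(@(y) F(y) + beta*y - q, x);  % zero of g_k, warm-started at x^k
        beta = updateBeta(F, beta, x, xnew);         % self-adaptive rule (3.13), see below
        x    = xnew;
    end
end
```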

Remark 3.1.

Since $\sum_{k=0}^{\infty}\tau_k<+\infty$, condition (3.2) implies that $\prod_{k=0}^{\infty}(1+\tau_k)<+\infty$. Hence, the sequence $\{\beta_k\}$ is bounded away from zero and from infinity. Then, let $\beta_{\min}:=\inf_k\beta_k>0$ and $\beta_{\max}:=\sup_k\beta_k<+\infty$.

Now, we analyze the convergence of the algorithm, beginning with the following lemma.

Lemma 3.2.

Let $\{x^k\}$ be the sequence generated by the proposed self-adaptive exact implicit method, and for $x\in\mathbb{R}^n$, $\beta>0$, and $x^*\in\mathcal{X}^*$ denote
$$s(x,\beta;x^*)\ :=\ H\big(F(x)-F(x^*)\big)+\beta\,(x-x^*). \qquad (3.4)$$
Then for any $x^*\in\mathcal{X}^*$ and $k\ge 0$, one has
$$\|s(x^{k+1},\beta_k;x^*)\|_{H^{-1}}^2\ \le\ \|s(x^k,\beta_k;x^*)\|_{H^{-1}}^2-\|r(x^k,\beta_k)\|_H^2. \qquad (3.5)$$

Proof.

Using (3.1) and the definition of $r$ in (2.3), we get
$$s(x^{k+1},\beta_k;x^*)\ =\ s(x^k,\beta_k;x^*)-H\,r(x^k,\beta_k). \qquad (3.6)$$
Expanding the squared $H^{-1}$-norm of both sides yields
$$\|s(x^{k+1},\beta_k;x^*)\|_{H^{-1}}^2\ =\ \|s(x^k,\beta_k;x^*)\|_{H^{-1}}^2-2\,s(x^k,\beta_k;x^*)^{\top}r(x^k,\beta_k)+\|r(x^k,\beta_k)\|_H^2\ \le\ \|s(x^k,\beta_k;x^*)\|_{H^{-1}}^2-\|r(x^k,\beta_k)\|_H^2,$$

where the inequality follows from (2.7). This completes the proof.

Furthermore, since $\beta_{k+1}=\gamma_k\beta_k$ with $\gamma_k$ satisfying (3.2), and since $(x^{k+1}-x^*)^{\top}(F(x^{k+1})-F(x^*))\ge 0$, a term-by-term comparison of the expanded norms gives
$$\|s(x^{k+1},\beta_{k+1};x^*)\|_{H^{-1}}^2\ \le\ (1+\tau_k)^2\,\|s(x^{k+1},\beta_k;x^*)\|_{H^{-1}}^2, \qquad (3.7)$$
where the inequality follows from the monotonicity of the mapping $F$. Combining (3.5) and (3.7), we have
$$\|s(x^{k+1},\beta_{k+1};x^*)\|_{H^{-1}}^2\ \le\ (1+\tau_k)^2\,\big\{\|s(x^k,\beta_k;x^*)\|_{H^{-1}}^2-\|r(x^k,\beta_k)\|_H^2\big\}. \qquad (3.8)$$
Now, we give the self-adaptive rule for choosing the parameter $\beta_k$. In view of the two parts of $s(x,\beta;x^*)$, for the sake of balance we hope that
$$\|F(x^k)-F(x^{k+1})\|_H\ \approx\ \beta_k\,\|x^k-x^{k+1}\|_{H^{-1}}. \qquad (3.10)$$
That is, for a given constant $\mu>1$, if
$$\|F(x^k)-F(x^{k+1})\|_H\ >\ \mu\,\beta_k\,\|x^k-x^{k+1}\|_{H^{-1}}, \qquad (3.11)$$
we should increase $\beta_k$ in the next iteration; on the other hand, we should decrease $\beta_k$ when
$$\beta_k\,\|x^k-x^{k+1}\|_{H^{-1}}\ >\ \mu\,\|F(x^k)-F(x^{k+1})\|_H. \qquad (3.12)$$
Then we give
$$\beta_{k+1}\ =\ \begin{cases} (1+\tau_k)\,\beta_k, & \text{if (3.11) holds},\\ \beta_k/(1+\tau_k), & \text{if (3.12) holds},\\ \beta_k, & \text{otherwise}, \end{cases} \qquad (3.13)$$
which satisfies (3.2).

Such a self-adaptive strategy was adopted in [18, 21, 22, 23, 24] for solving variational inequality problems, where the numerical results indicated its efficiency and robustness with respect to the choice of the initial parameter $\beta_0$. Here we adopt it for solving variant variational inequality problems; a sketch is given below.
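In MATLAB, rule (3.13) might look as follows (with $H = I$); the default values of mu and tau are illustrative assumptions, and to satisfy (3.2) in theory tau should be driven to zero, for example halved each time beta changes.

```matlab
% Self-adaptive update of beta following rule (3.13), sketched with H = I.
function beta = updateBeta(F, beta, x, xnew, mu, tau)
    if nargin < 5, mu = 2; tau = 0.5; end   % illustrative defaults (our assumption)
    dF = norm(F(x) - F(xnew));              % change in the F-part
    dx = beta * norm(x - xnew);             % scaled change in the x-part
    if dF > mu*dx
        beta = (1 + tau)*beta;              % F-part dominates: increase beta, cf. (3.11)
    elseif dx > mu*dF
        beta = beta/(1 + tau);              % x-part dominates: decrease beta, cf. (3.12)
    end                                     % otherwise leave beta unchanged
end
```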

We are now in a position to state the convergence result of the algorithm, the main result of this section.

Theorem 3.3.

The sequence $\{x^k\}$ generated by the proposed self-adaptive exact implicit method converges to a solution of VVI$(\Omega,F)$.

Proof.

Let $C_\tau:=\prod_{k=0}^{\infty}(1+\tau_k)^2$. Then from the assumption that $\sum_{k=0}^{\infty}\tau_k<+\infty$, we have that $C_\tau<+\infty$. Denote
$$s^k\ :=\ s(x^k,\beta_k;x^*)\ =\ H\big(F(x^k)-F(x^*)\big)+\beta_k\,(x^k-x^*). \qquad (3.14)$$
From (3.8), for any $x^*\in\mathcal{X}^*$, that is, an arbitrary solution of VVI$(\Omega,F)$, we have
$$\|s^{k+1}\|_{H^{-1}}^2\ \le\ \prod_{j=0}^{k}(1+\tau_j)^2\,\|s^0\|_{H^{-1}}^2\ \le\ C_\tau\,\|s^0\|_{H^{-1}}^2. \qquad (3.15)$$

This, together with the monotonicity of the mapping $F$ (which implies $\|s^k\|_{H^{-1}}\ge\beta_k\|x^k-x^*\|_{H^{-1}}\ge\beta_{\min}\|x^k-x^*\|_{H^{-1}}$), means that the generated sequence $\{x^k\}$ is bounded.

Also from (3.8), we have
$$\|r(x^k,\beta_k)\|_H^2\ \le\ \big(\|s^k\|_{H^{-1}}^2-\|s^{k+1}\|_{H^{-1}}^2\big)+2\tau_k\,C_\tau\,\|s^0\|_{H^{-1}}^2. \qquad (3.17)$$
Adding both sides of the above inequality over $k$, we obtain
$$\sum_{k=0}^{\infty}\|r(x^k,\beta_k)\|_H^2\ \le\ \|s^0\|_{H^{-1}}^2+2\,C_\tau\,\|s^0\|_{H^{-1}}^2\sum_{k=0}^{\infty}\tau_k\ <\ +\infty, \qquad (3.18)$$
where the second inequality follows from (3.15). Thus, we have
$$\lim_{k\to\infty}\|r(x^k,\beta_k)\|_H\ =\ 0, \qquad (3.19)$$
which, from Lemma 2.3 and the fact that $\beta_k\ge\beta_{\min}$, means that
$$\lim_{k\to\infty}\|r(x^k,\beta_{\min})\|_H\ =\ 0. \qquad (3.20)$$

Since $\{x^k\}$ is bounded, it has at least one cluster point. Let $\bar x$ be a cluster point of $\{x^k\}$ and let $\{x^{k_j}\}$ be the subsequence converging to $\bar x$. Since $r(\cdot,\beta_{\min})$ is continuous, taking the limit in (3.20) along the subsequence, we get
$$r(\bar x,\beta_{\min})\ =\ 0.$$

Thus, from Lemma 2.1, $\bar x$ is a solution of VVI$(\Omega,F)$.

In the following we prove that the sequence $\{x^k\}$ has exactly one cluster point. Assume that $\hat x$ is another cluster point of $\{x^k\}$, which is different from $\bar x$, and set $\delta:=\|\hat x-\bar x\|_{H^{-1}}>0$. Because $\bar x$ is a cluster point of the sequence $\{x^k\}$ and $F$ is continuous, there is a $k_0$ such that
$$\|s(x^{k_0},\beta_{k_0};\bar x)\|_{H^{-1}}\ \le\ \frac{\beta_{\min}\,\delta}{2\sqrt{C_\tau}}. \qquad (3.25)$$
On the other hand, since $\bar x\in\mathcal{X}^*$ and $x^*$ in (3.15) is an arbitrary solution, by setting $x^*=\bar x$ in (3.8), we have for all $k\ge k_0$,
$$\|s(x^k,\beta_k;\bar x)\|_{H^{-1}}\ \le\ \sqrt{C_\tau}\,\|s(x^{k_0},\beta_{k_0};\bar x)\|_{H^{-1}}, \qquad (3.26)$$
that is,
$$\|s(x^k,\beta_k;\bar x)\|_{H^{-1}}\ \le\ \frac{\beta_{\min}\,\delta}{2}.$$
Using the monotonicity of $F$ and the choosing rule of $\beta_k$ (Remark 3.1), we have
$$\beta_{\min}\,\|x^k-\bar x\|_{H^{-1}}\ \le\ \|s(x^k,\beta_k;\bar x)\|_{H^{-1}}. \qquad (3.27)$$
Combining (3.25)–(3.27), we have that for any $k\ge k_0$,
$$\|x^k-\hat x\|_{H^{-1}}\ \ge\ \|\hat x-\bar x\|_{H^{-1}}-\|x^k-\bar x\|_{H^{-1}}\ \ge\ \frac{\delta}{2}, \qquad (3.28)$$

which means that $\hat x$ cannot be a cluster point of $\{x^k\}$. Thus, $\{x^k\}$ has just one cluster point.

4. Inexact Implicit Method and Convergence Analysis

The main task at each iteration of the exact implicit algorithm in the last section is to solve a system of nonlinear equations. Solving it exactly at every iteration is time consuming, and there is little justification for doing so, especially when the iterate is far away from the solution set. Thus, in this section, we propose to solve the subproblem approximately. That is, for a given $x^k$, instead of finding the exact solution of (3.1), we would accept $x^{k+1}$ as the new iterate if it satisfies
$$\|g_k(x^{k+1})\|\ \le\ \varepsilon_k, \qquad (4.1)$$

where $\{\varepsilon_k\}$ is a nonnegative sequence with $\sum_{k=0}^{\infty}\varepsilon_k<+\infty$. If (3.1) is replaced by (4.1), the modified method is called the inexact implicit method.

We now analyze the convergence of the inexact implicit method.

Lemma 4.1.

Let $\{x^k\}$ be the sequence generated by the inexact implicit method. Then there exists a constant $c>0$ such that for any $x^*\in\mathcal{X}^*$ and $k\ge 0$,
$$\|s(x^{k+1},\beta_k;x^*)\|_{H^{-1}}^2\ \le\ \big(\|s(x^k,\beta_k;x^*)\|_{H^{-1}}+c\,\varepsilon_k\big)^2-\|r(x^k,\beta_k)\|_H^2. \qquad (4.2)$$

Proof.

Denote $\xi^k:=g_k(x^{k+1})$. Then (4.1) can be rewritten as
$$F(x^{k+1})+\beta_kH^{-1}x^{k+1}\ =\ \beta_kH^{-1}x^k+P_{\Omega,H}\big[F(x^k)-\beta_kH^{-1}x^k\big]+\xi^k,\qquad \|\xi^k\|\le\varepsilon_k. \qquad (4.3)$$
According to (4.3) and the definition of $r$ in (2.3),
$$s(x^{k+1},\beta_k;x^*)\ =\ s(x^k,\beta_k;x^*)-H\,r(x^k,\beta_k)+H\,\xi^k. \qquad (4.4)$$
As in the proof of Lemma 3.2, it follows from (2.7) that
$$\|s(x^k,\beta_k;x^*)-H\,r(x^k,\beta_k)\|_{H^{-1}}^2\ \le\ \|s(x^k,\beta_k;x^*)\|_{H^{-1}}^2-\|r(x^k,\beta_k)\|_H^2. \qquad (4.5)$$
Using the Cauchy–Schwarz inequality and (4.3), we have
$$\|H\xi^k\|_{H^{-1}}\ =\ \|\xi^k\|_H\ \le\ \sqrt{\lambda_{\max}(H)}\,\|\xi^k\|\ \le\ c\,\varepsilon_k,\qquad c:=\sqrt{\lambda_{\max}(H)}, \qquad (4.6)$$
where $\lambda_{\max}(H)$ denotes the largest eigenvalue of $H$.

Substituting (4.5) and (4.6) into the triangle inequality applied to (4.4), we complete the proof.

In a similar way to (3.7), by using the monotonicity of $F$, the choosing rule (3.2) of $\beta_{k+1}$, and (4.2), we obtain that for all $k\ge 0$,
$$\|s(x^{k+1},\beta_{k+1};x^*)\|_{H^{-1}}^2\ \le\ (1+\tau_k)^2\,\Big\{\big(\|s(x^k,\beta_k;x^*)\|_{H^{-1}}+c\,\varepsilon_k\big)^2-\|r(x^k,\beta_k)\|_H^2\Big\}. \qquad (4.10)$$

Now, we prove the convergence of the inexact implicit method.

Theorem 4.2.

The sequence $\{x^k\}$ generated by the proposed self-adaptive inexact implicit method converges to a solution point of VVI$(\Omega,F)$.

Proof.

Denote again $s^k:=s(x^k,\beta_k;x^*)$. Then, it follows from (4.10) that for all $k\ge 0$,
$$\|s^{k+1}\|_{H^{-1}}\ \le\ \prod_{j=0}^{k}(1+\tau_j)\,\Big(\|s^0\|_{H^{-1}}+c\sum_{j=0}^{k}\varepsilon_j\Big).$$
From the assumptions that
$$\sum_{k=0}^{\infty}\tau_k\ <\ +\infty \qquad\text{and}\qquad \sum_{k=0}^{\infty}\varepsilon_k\ <\ +\infty,$$
it follows that
$$\prod_{k=0}^{\infty}(1+\tau_k) \qquad\text{and}\qquad \sum_{k=0}^{\infty}\varepsilon_k$$

are finite, so $\{s^k\}$, and hence $\{x^k\}$, is bounded. The rest of the proof is similar to that of Theorem 3.3 and is thus omitted here.

5. Computational Results

In this section, we present some numerical results for the proposed self-adaptive implicit methods. Our interest is twofold: first, to compare the proposed method with He's method [11] in solving a simple nonlinear problem, showing the numerical advantage; second, to indicate that the strategy is rather insensitive to the initial point, to the initial choice of the parameter, and to the size of the problem. All codes were written in MATLAB and run on an AMD 3200+ personal computer. In the following tests, the parameter $\beta_k$ is changed exactly when (3.11) or (3.12) holds; that is, we update $\beta_k$ according to rule (3.13). We set $H=I$, so the matrix-induced-norm projection is just the projection under the Euclidean norm, which is very easy to implement when $\Omega$ has some special structure. For example, when $\Omega$ is the nonnegative orthant,
$$P_\Omega(x)\ =\ \max(x,0), \qquad (5.2)$$
where the maximum is taken componentwise.
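Since such structured projections recur in the sketches in this paper, here are the closed forms used below (with $H = I$); the ball projection is included for the first test problem.

```matlab
% Closed-form Euclidean projections for structured Omega.
projOrthant = @(u) max(u, 0);                    % Omega = nonnegative orthant, cf. (5.2)
projBox     = @(u, a, b) min(max(u, a), b);      % Omega = {u : a <= u <= b}
projBall    = @(u, rho) u*min(1, rho/norm(u));   % Omega = {u : ||u|| <= rho}
```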
At each iteration, we use Newton's method [25, 26] to solve the system of nonlinear equations
$$g_k(x)\ =\ 0$$
approximately; that is, we stop the Newton iteration as soon as the current iterate satisfies (4.1), and adopt it as the next outer iterate. Here, with $H=I$,
$$g_k(x)\ =\ F(x)+\beta_k x-\big\{\beta_k x^k+P_\Omega[F(x^k)-\beta_k x^k]\big\},$$

and the tolerance sequence $\{\varepsilon_k\}$ is chosen to satisfy $\sum_{k=0}^{\infty}\varepsilon_k<+\infty$.
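For concreteness, a minimal MATLAB sketch of this inner solve, assuming $H = I$, $\Omega$ the nonnegative orthant, and a user-supplied Jacobian JF of $F$; the stopping test mirrors (4.1), and in practice a globalized (damped) Newton method would be preferred.

```matlab
% One inexact inner solve: Newton's method on g_k(x) = 0, stopped by (4.1).
function x = solveSubNewton(F, JF, xk, beta, epsk)
    q = beta*xk + max(F(xk) - beta*xk, 0);   % constant part of g_k (P_Omega = max(.,0))
    x = xk;                                  % warm start at the outer iterate x^k
    g = F(x) + beta*x - q;
    while norm(g) > epsk                     % inexactness criterion (4.1)
        J = JF(x) + beta*eye(numel(x));      % Jacobian of g_k; well conditioned for beta > 0
        x = x - J\g;                         % Newton step
        g = F(x) + beta*x - q;
    end
end
```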

In our first test problem, we take
$$F(x)\ =\ Mx+q,$$
where the matrix $M$ is constructed by $M=H_1DH_2$. Here
$$H_i\ =\ I-2\,\frac{h_ih_i^{\top}}{\|h_i\|^2},\qquad i=1,2,$$
are Householder matrices and $D$ is a diagonal matrix with prescribed positive diagonal entries. The vectors $h_1$, $h_2$, and $q$ contain pseudorandom numbers. The closed convex set $\Omega$ in this problem is defined as
$$\Omega\ :=\ \{u\in\mathbb{R}^n:\ \|u\|\le\rho\}$$
with different prescribed radii $\rho$. Note that $\rho<\|q\|$ is required (otherwise $x^*=0$ is the trivial solution). We test two cases of the problem, and the stopping criterion is that the residual $\|r(x^k,\beta_k)\|$ be reduced below a prescribed tolerance.
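A MATLAB sketch of this construction follows; the problem size, the sampling ranges, and the diagonal spacing are our assumptions, since the original values are not recoverable here.

```matlab
% First test problem: F(x) = M*x + q with M = H1*D*H2.
n  = 500;                                   % problem size (assumed)
h1 = rand(n,1);  h2 = rand(n,1);            % pseudorandom Householder vectors
q  = rand(n,1);                             % pseudorandom linear term
H1 = eye(n) - 2*(h1*h1')/(h1'*h1);          % Householder matrix
H2 = eye(n) - 2*(h2*h2')/(h2'*h2);
D  = diag(logspace(0, 2, n));               % positive diagonal (assumed spacing)
M  = H1*D*H2;
F  = @(x) M*x + q;
rho = 0.5*norm(q);                          % radius with rho < ||q||, so x* ~= 0
projOmega = @(u) u*min(1, rho/norm(u));     % projection onto the ball {||u|| <= rho}
```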
The results in Table 1 show that $\beta=0.05$ is a "proper" parameter for this problem, while the other two cases, with the larger value $\beta=0.5$ and the smaller value $\beta=0.01$, are not. For any of these three cases, the method with the self-adaptive strategy is efficient.
Table 1

Comparison of the proposed method and He's method [11] on the first test problem. "It. no." is the number of iterations; CPU times are in seconds. The left half reports the first test case, the right half the second.

| $\beta_0$ | Proposed: It. no. | Proposed: CPU | He's: It. no. | He's: CPU | Proposed: It. no. | Proposed: CPU | He's: It. no. | He's: CPU |
|-----------|-------------------|---------------|---------------|-----------|-------------------|---------------|---------------|-----------|
| 0.5       | 25                | 0.3910        | 100           | 1.0780    | 34                | 50.4850       | —             | —         |
| 0.05      | 20                | 0.3120        | 37            | 0.4850    | 25                | 39.8440       | 17            | 25.0940   |
| 0.01      | 26                | 0.4060        | 350           | 5.8750    | 33                | 61.4070       | —             | —         |

"—" means iteration number > 200 and CPU > 2000 (sec).

The second example considered here is the variant mixed complementarity problem (VMCP for short), with
$$\Omega\ :=\ \{u\in\mathbb{R}^n:\ a\le u\le b\},$$
where the bounds $a$ and $b$ are randomly generated parameters. The mapping $F$ is taken as
$$F(x)\ =\ D(x)+Mx+q,$$
where $D(x)$ and $Mx+q$ are the nonlinear part and the linear part of $F(x)$, respectively. We form the linear part similarly as in [27]. The matrix $M=A^{\top}A+B$, where $A$ is an $n\times n$ matrix whose entries are randomly generated in a fixed interval, and the skew-symmetric matrix $B$ is generated in the same way. The vector $q$ is generated from a uniform distribution. In $D(x)$, the nonlinear part of $F(x)$, the components are $D_j(x)=d_j\cdot\arctan(x_j)$, where $d_j$ is a random variable in $(0,1)$. The numerical results are summarized in Tables 2–5: the initial iterate $x^0$ is fixed in Tables 2 and 3 and randomly generated in Tables 4 and 5, respectively; the other parameters are the same in all runs. The stopping criterion is the same residual tolerance as for the first test problem.
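A sketch of this construction in MATLAB; the problem size, the sampling intervals, and the box bounds below are placeholders of the kind used in [27]-style experiments, not the authors' exact values.

```matlab
% VMCP test problem: F(x) = D(x) + M*x + q on a box Omega = {a <= u <= b}.
n = 500;                              % problem size (assumed)
A = -5 + 10*rand(n);                  % dense part, entries in (-5,5) (assumed)
C = -5 + 10*rand(n);
B = (C - C')/2;                       % skew-symmetric part
M = A'*A + B;                         % linear part: positive semidefinite plus skew
q = -500 + 1000*rand(n,1);            % entries in (-500,500) (assumed)
d = rand(n,1);                        % nonlinear coefficients, in (0,1)
F = @(x) d.*atan(x) + M*x + q;        % monotone: atan nondecreasing, M + M' >= 0
a = -10*ones(n,1);  b = 10*ones(n,1); % box bounds (assumed)
projOmega = @(u) min(max(u, a), b);   % projection onto the box
```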
Table 2

Numerical results for VMCP, smaller dimension (fixed initial point $x^0$). "It. no." is the number of iterations; CPU times are in seconds.

| $\beta_0$ | Proposed: It. no. | Proposed: CPU | He's: It. no. | He's: CPU |
|-----------|-------------------|---------------|---------------|-----------|
| $10^5$    | 69                | 0.0780        | —             | —         |
| $10^4$    | 65                | 0.1250        | 7335          | 6.1250    |
| $10^3$    | 61                | 0.0790        | 485           | 0.4530    |
| $10^2$    | 59                | 0.0620        | 60            | 4.0780    |
| $10$      | 60                | 0.0780        | 315           | 0.3280    |
| $1$       | 66                | 0.0110        | 2672          | 2.500     |
| $10^{-1}$ | 70                | 0.0940        | 22541         | 21.0320   |
| $10^{-2}$ | 73                | 0.0780        | —             | —         |

"—" means iteration number > 3000 and CPU > 300 (sec).

Table 3

Numerical results for VMCP, larger dimension (fixed initial point $x^0$). "It. no." is the number of iterations; CPU times are in seconds.

| $\beta_0$ | Proposed: It. no. | Proposed: CPU | He's: It. no. | He's: CPU |
|-----------|-------------------|---------------|---------------|-----------|
| $10^5$    | 82                | 1.6090        | —             | —         |
| $10^4$    | 74                | 1.4850        | 1434          | 28.3750   |
| $10^3$    | 64                | 1.2660        | 199           | 3.8910    |
| $10^2$    | 63                | 1.2500        | 174           | 3.4060    |
| $10$      | 68                | 1.3500        | 1486          | 30.4840   |
| $1$       | 75                | 1.4850        | —             | —         |
| $10^{-1}$ | 75                | 1.5000        | —             | —         |
| $10^{-2}$ | 86                | 1.7030        | —             | —         |

"—" means iteration number > 3000 and CPU > 300 (sec).

Table 4

Numerical results for VMCP, smaller dimension (random initial point $x^0$). "It. no." is the number of iterations; CPU times are in seconds.

| $\beta_0$ | Proposed: It. no. | Proposed: CPU | He's: It. no. | He's: CPU |
|-----------|-------------------|---------------|---------------|-----------|
| $10^5$    | 61                | 0.0620        | —             | —         |
| $10^4$    | 61                | 0.0940        | 3422          | 3.7190    |
| $10^3$    | 60                | 0.0790        | 684           | 0.6410    |
| $10^2$    | 67                | 0.0780        | 59            | 0.0620    |
| $10$      | 65                | 0.0940        | 309           | 0.2970    |
| $1$       | 69                | 0.0940        | 2637          | 2.3750    |
| $10^{-1}$ | 72                | 0.0940        | 21949         | 18.9220   |
| $10^{-2}$ | 75                | 0.1250        | —             | —         |

"—" means iteration number > 3000 and CPU > 300 (sec).

Table 5

Numerical results for VMCP, larger dimension (random initial point $x^0$). "It. no." is the number of iterations; CPU times are in seconds.

| $\beta_0$ | Proposed: It. no. | Proposed: CPU | He's: It. no. | He's: CPU |
|-----------|-------------------|---------------|---------------|-----------|
| $10^5$    | 61                | 1.2500        | —             | —         |
| $10^4$    | 64                | 1.2810        | 1527          | 29.8750   |
| $10^3$    | 64                | 1.2660        | 150           | 2.9220    |
| $10^2$    | 64                | 1.2810        | 222           | 4.3440    |
| $10$      | 89                | 1.7920        | 1922          | 37.6250   |
| $1$       | 70                | 1.3910        | —             | —         |
| $10^{-1}$ | 88                | 1.7340        | —             | —         |
| $10^{-2}$ | 84                | 1.6560        | —             | —         |

"—" means iteration number > 5000 and CPU > 300 (sec).

As with the results in Table 1, the results in Tables 2 to 5 indicate that the number of iterations and the CPU time of the proposed method are rather insensitive to the initial parameter $\beta_0$, while He's method is efficient only for a properly chosen $\beta$. The results also show that the proposed method, as well as He's method, is very stable with respect to the choice of the initial point $x^0$.

6. Conclusions

In this paper, we proposed a self-adaptive implicit method for solving monotone variant variational inequalities. The self-adaptive adjusting rule avoids the difficult task of choosing a "suitable" parameter and makes the method efficient for any initial parameter. The rule adds only a tiny amount of computation compared with the method with a fixed parameter, while the efficiency is enhanced greatly. To make the method more efficient and practical, an inexact version of the algorithm, which solves the subproblems only approximately, was also proposed. The global convergence of both the exact version and the inexact version of the new algorithm was proved under mild assumptions, namely that the underlying mapping of VVI$(\Omega,F)$ is monotone and that the problem has at least one solution. The reported preliminary numerical results verify these assertions.

Acknowledgments

This research was supported by NSFC Grants 10501024 and 10871098, and by the NSF of Jiangsu Province under Grant no. BK2006214. D. Han was also supported by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.

References

  1. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Vol. I, Springer Series in Operations Research. Springer, New York, NY, USA; 2003.
  2. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Vol. II, Springer Series in Operations Research. Springer, New York, NY, USA; 2003.
  3. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with application to the traffic assignment problem. Mathematical Programming Study 1982, (17):139–159.
  4. Rachůnková I, Tvrdý M: Nonlinear systems of differential inequalities and solvability of certain boundary value problems. Journal of Inequalities and Applications 2000, 6(2):199–226.
  5. Agarwal RP, Elezović N, Pečarić J: On some inequalities for beta and gamma functions via some classical inequalities. Journal of Inequalities and Applications 2005, 2005(5):593–613.
  6. Dafermos S: Traffic equilibrium and variational inequalities. Transportation Science 1980, 14(1):42–54.
  7. Verma RU: A class of projection-contraction methods applied to monotone variational inequalities. Applied Mathematics Letters 2000, 13(8):55–62.
  8. Verma RU: Projection methods, algorithms, and a new system of nonlinear variational inequalities. Computers & Mathematics with Applications 2001, 41(7–8):1025–1031.
  9. Ceng LC, Mastroeni G, Yao JC: An inexact proximal-type method for the generalized variational inequality in Banach spaces. Journal of Inequalities and Applications 2007, 14 pages.
  10. Chidume CE, Chidume CO, Ali B: Approximation of fixed points of nonexpansive mappings and solutions of variational inequalities. Journal of Inequalities and Applications 2008, 12 pages.
  11. He BS: Inexact implicit methods for monotone general variational inequalities. Mathematical Programming 1999, 86(1):199–217.
  12. He BS: A Goldstein's type projection method for a class of variant variational inequalities. Journal of Computational Mathematics 1999, 17(4):425–434.
  13. Noor MA: Quasi variational inequalities. Applied Mathematics Letters 1988, 1(4):367–370.
  14. Outrata JV, Zowe J: A Newton method for a class of quasi-variational inequalities. Computational Optimization and Applications 1995, 4(1):5–21.
  15. Pang JS, Qi LQ: Nonsmooth equations: motivation and algorithms. SIAM Journal on Optimization 1993, 3(3):443–465.
  16. Pang JS, Yao JC: On a generalization of a normal map and equation. SIAM Journal on Control and Optimization 1995, 33(1):168–184.
  17. Li M, Yuan XM: An improved Goldstein's type method for a class of variant variational inequalities. Journal of Computational and Applied Mathematics 2008, 214(1):304–312.
  18. He BS, Liao LZ, Wang SL: Self-adaptive operator splitting methods for monotone variational inequalities. Numerische Mathematik 2003, 94(4):715–737.
  19. Eaves BC: On the basic theorem of complementarity. Mathematical Programming 1971, 1(1):68–75.
  20. Zhu T, Yu ZQ: A simple proof for some important properties of the projection mapping. Mathematical Inequalities & Applications 2004, 7(3):453–456.
  21. He BS, Yang H, Meng Q, Han DR: Modified Goldstein–Levitin–Polyak projection method for asymmetric strongly monotone variational inequalities. Journal of Optimization Theory and Applications 2002, 112(1):129–143.
  22. Han D, Sun W: A new modified Goldstein–Levitin–Polyak projection method for variational inequality problems. Computers & Mathematics with Applications 2004, 47(12):1817–1825.
  23. Han D: Inexact operator splitting methods with selfadaptive strategy for variational inequality problems. Journal of Optimization Theory and Applications 2007, 132(2):227–243.
  24. Han D, Xu W, Yang H: An operator splitting method for variational inequalities with partially unknown mappings. Numerische Mathematik 2008, 111(2):207–237.
  25. Dembo RS, Eisenstat SC, Steihaug T: Inexact Newton methods. SIAM Journal on Numerical Analysis 1982, 19(2):400–408.
  26. Pang JS: Inexact Newton methods for the nonlinear complementarity problem. Mathematical Programming 1986, 36(1):54–71.
  27. Harker PT, Pang JS: A damped-Newton method for the linear complementarity problem. In: Computational Solution of Nonlinear Systems of Equations (Fort Collins, CO, 1988), Lectures in Applied Mathematics, Volume 26. American Mathematical Society, Providence, RI, USA; 1990:265–284.

Copyright information

© Z. Ge and D. Han. 2009

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

  1. Institute of Mathematics, School of Mathematics and Computer Science, Nanjing Normal University, Nanjing, China
