Bounds for Tail Probabilities of the Sample Variance


Abstract

We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in auditing and in the processing of environmental data in mind.

Keywords

Convex Function · Central Limit Theorem · Sample Variance · Elementary Calculation · Point Distribution

1. Introduction and Results

Let X_1, ..., X_n be a random sample of independent identically distributed observations. Throughout we write

μ = EX,  σ² = E(X − μ)²,  μ_4 = E(X − μ)⁴

for the mean, variance, and the fourth central moment of X = X_1, and assume that σ² > 0. Some of our results hold only for bounded random variables. In such cases, without loss of generality, we assume that 0 ≤ X ≤ 1. Note that 0 ≤ X ≤ 1 is a natural condition in audit applications.

The sample variance s² of the sample X_1, ..., X_n is defined as

s² = (1/(n − 1)) ∑_{i=1}^n (X_i − X̄)²,  (1.2)

where X̄ is the sample mean, X̄ = (1/n) ∑_{i=1}^n X_i. We can rewrite (1.2) as

s² = (1/(n(n − 1))) ∑_{i<j} (X_i − X_j)².  (1.3)

We are interested in deviations of the statistic s² from its mean Es² = σ², that is, in bounds for the tail probabilities of the statistic s²,

P(s² − σ² ≥ t),  (1.4)    P(σ² − s² ≥ t),  (1.5)    for t ≥ 0.
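The pairwise rewriting of (1.2) can be verified numerically. Below is a minimal sketch, assuming the standard formulas s² = (1/(n − 1)) ∑ (X_i − X̄)² and s² = (1/(n(n − 1))) ∑_{i<j} (X_i − X_j)²; the function names and the sample are ours:

```python
import itertools

def sample_variance(xs):
    # Defining formula (1.2): s^2 = (1/(n-1)) * sum_i (x_i - xbar)^2
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / (n - 1)

def sample_variance_pairs(xs):
    # Pairwise form (1.3): s^2 = (1/(n(n-1))) * sum_{i<j} (x_i - x_j)^2
    n = len(xs)
    return sum((x - y) ** 2 for x, y in itertools.combinations(xs, 2)) / (n * (n - 1))

xs = [0.2, 0.7, 0.1, 0.9, 0.5]
assert abs(sample_variance(xs) - sample_variance_pairs(xs)) < 1e-12
```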

The paper is organized as follows. In the introduction we give a description of bounds, some comments, and references. In Section 2 we obtain sharp upper bounds for the fourth moment. In Section 3 we give proofs of all facts and results from the introduction.

If Open image in new window , then the range of interest in (1.5) is Open image in new window , where

The restriction Open image in new window on the range of Open image in new window in (1.4) (resp., Open image in new window in (1.5) in cases where the condition Open image in new window is fulfilled) is natural. Indeed, Open image in new window for Open image in new window , due to the obvious inequality Open image in new window . Furthermore, in the case of Open image in new window we have Open image in new window for Open image in new window since Open image in new window (see Proposition 2.3 for a proof of the latter inequality).

The asymptotic (as Open image in new window ) properties of Open image in new window (see Section 3 for proofs of (1.7) and (1.8)) can be used to test the quality of bounds for tail probabilities. Under the condition Open image in new window the statistic Open image in new window is asymptotically normal provided that Open image in new window is not a Bernoulli random variable symmetric around its mean. Namely, if Open image in new window , then
If Open image in new window (which happens if and only if Open image in new window is a Bernoulli random variable symmetric around its mean), then asymptotically Open image in new window has Open image in new window type distribution, that is,

where Open image in new window is a standard normal random variable, and Open image in new window is the standard normal distribution function.
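A quick simulation illustrates the fluctuations of the sample variance around its mean that underlie (1.7); the sketch below uses Uniform(0, 1) observations (variance 1/12), chosen purely for illustration:

```python
import random

def sample_variance(xs):
    # s^2 = (1/(n-1)) * sum_i (x_i - xbar)^2
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / (n - 1)

random.seed(1)
n, reps = 20, 20000
# For Uniform(0,1) the variance is 1/12; the sample variance is unbiased,
# so the average of many replications should settle near 1/12
vals = [sample_variance([random.random() for _ in range(n)]) for _ in range(reps)]
mean_s2 = sum(vals) / reps
assert abs(mean_s2 - 1 / 12) < 0.002
```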

Let us recall the already known bounds for the tail probabilities of the sample variance (see (1.19)–(1.21)). We need notation related to certain functions going back to Hoeffding [1]. Let Open image in new window and Open image in new window . Write
For Open image in new window we define Open image in new window . For Open image in new window we set Open image in new window . Note that our notation for the function Open image in new window is slightly different from the traditional one. Let Open image in new window . Introduce as well the function

All our bounds are expressed in terms of the function Open image in new window . Using (1.11), it is easy to replace them by bounds expressed in terms of the function Open image in new window , and we omit related formulations.

Let Open image in new window be a Bernoulli random variable such that Open image in new window and Open image in new window . Then Open image in new window and Open image in new window . The function Open image in new window is related to the generating function (the Laplace transform) of binomial distributions since
where Open image in new window are independent copies of Open image in new window . Note that (1.14) is an obvious corollary of (1.13). We omit elementary calculations leading to (1.13). In a similar way

where Open image in new window is a Poisson random variable with parameter Open image in new window .
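Relations of the type (1.13)–(1.14) tie the Hoeffding functions to the Laplace transform of binomial distributions. Numerically, the resulting bound is the infimum over exponential moments, which always dominates the exact binomial tail; the sketch below uses a crude grid search in place of the closed-form minimizer, with illustrative parameter values:

```python
import math

def binom_tail(n, p, k):
    # Exact P(S_n >= k) for S_n ~ Binomial(n, p)
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def exp_moment_bound(n, p, k):
    # inf_{h>0} e^{-hk} (E e^{hB})^n with B ~ Bernoulli(p);
    # a grid search over h stands in for the closed-form minimizer
    return min(
        math.exp(-h * k) * (1 - p + p * math.exp(h)) ** n
        for h in (i * 0.01 for i in range(1, 2001))
    )

n, p, k = 30, 0.3, 15   # illustrative values with k > np
assert binom_tail(n, p, k) <= exp_moment_bound(n, p, k) < 1
```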

The functions Open image in new window and Open image in new window satisfy a kind of Central Limit Theorem. Namely, for given Open image in new window and Open image in new window we have
(we omit elementary calculations leading to (1.16)). Furthermore, we have [1]
and we also have [2]
Using the introduced notation, we can recall the known results (see [2, Lemma Open image in new window ]). Let Open image in new window be the integer part of Open image in new window . Assume that Open image in new window . If Open image in new window is known, then
The right-hand side of (1.19) is an increasing function of Open image in new window (see Section 3 for a short proof of (1.19) as a corollary of Theorem 1.1). If Open image in new window is unknown but Open image in new window is known, then
Using the obvious estimate Open image in new window , the bound (1.20) is implied by (1.19). In cases where neither Open image in new window nor Open image in new window is known, we have

as it follows from (1.19) using the obvious bound Open image in new window .

Let us note that the known bounds (1.19)–(1.21) are the best possible within an approach based on analysis of the variance, on the use of exponential functions, and on an inequality of Hoeffding (see (3.3)), which allows one to reduce the problem to the estimation of tail probabilities for sums of independent random variables. Our improvement is due to a careful analysis of the fourth moment, which turns out to be quite complicated; see Section 2. Briefly, the results of this paper are the following: we prove a general bound involving Open image in new window , Open image in new window , and the fourth moment Open image in new window ; this general bound implies all the other bounds, in particular a new precise bound involving Open image in new window and Open image in new window ; we also provide bounds for the lower tails Open image in new window ; and we compare the bounds analytically, mostly for sufficiently large Open image in new window .

From the mathematical point of view the sample variance is one of the simplest nonlinear statistics. Known bounds for tail probabilities are designed with linear statistics in mind, possibly also for dependent observations. See the seminal paper of Hoeffding [1] published in JASA. For further developments see Talagrand [3], Pinelis [4, 5], Bentkus [6, 7], Bentkus et al. [8, 9], and so forth. Our intention is to develop tools useful in the setting of nonlinear statistics, using the sample variance as a test statistic.

Theorem 1.1 extends and improves the known bounds (1.19)–(1.21). We can derive (1.19)–(1.21) from this theorem since we can estimate the fourth moment Open image in new window via various combinations of Open image in new window and Open image in new window using the boundedness assumption Open image in new window .

Theorem 1.1.

Let Open image in new window and Open image in new window .

Both bounds Open image in new window and Open image in new window are increasing functions of Open image in new window , Open image in new window and Open image in new window .

Remark 1.2.

In order to derive upper confidence bounds we need only estimates of the upper tail Open image in new window (see [2]). To estimate the upper tail the condition Open image in new window is sufficient. The lower tail Open image in new window has a different type of behavior since to estimate it we indeed need the assumption that Open image in new window is a bounded random variable.

For Open image in new window Theorem 1.1 implies the known bounds (1.19)–(1.21) for the upper tail of Open image in new window . It also implies the bounds (1.26)–(1.29) for the lower tail. The lower tail has a somewhat more complicated structure (cf. (1.26)–(1.29) and their counterparts (1.19)–(1.21) for the upper tail).

One can show (we omit details) that the bound Open image in new window is not an increasing function of Open image in new window . A slightly rougher inequality
has the monotonicity property since Open image in new window is an increasing function of Open image in new window . If Open image in new window is known, then using the obvious inequality Open image in new window , the bound (1.27) yields
If we have no information about Open image in new window and Open image in new window , then using Open image in new window , the bound (1.27) implies

The bounds above do not cover the situation where both Open image in new window and Open image in new window are known. To formulate a related result we need additional notation. In the case Open image in new window we use the notation

In view of the well-known upper bound Open image in new window for the variance of Open image in new window , we can partition the set
of possible values of Open image in new window and Open image in new window into a union Open image in new window of three subsets

Theorem 1.3.

Write Open image in new window . Assume that Open image in new window .

The upper tail of the statistic Open image in new window satisfies
where one can write
The lower tail of Open image in new window satisfies

with Open image in new window , where Open image in new window , and Open image in new window is defined by (1.34).

The bounds above are obtained using the classical transform Open image in new window ,

of survival functions Open image in new window (cf. definitions (1.13) and (1.14) of the related Hoeffding functions). The bounds expressed in terms of Hoeffding functions have a simple analytical structure and are easily numerically computable.

All our upper and lower bounds satisfy a kind of Central Limit Theorem. Namely, if we consider an upper bound, say Open image in new window (resp., a lower bound Open image in new window ), as a function of Open image in new window , then there exist limits
with some positive Open image in new window and Open image in new window . The values of Open image in new window and Open image in new window can be used to compare the bounds—the larger these constants, the better the bound. To prove (1.38) it suffices to note that with Open image in new window
The Central Limit Theorem in the form of (1.7) restricts the ranges of possible values of Open image in new window and Open image in new window . Namely, using (1.7) it is easy to see that Open image in new window and Open image in new window have to satisfy

We provide the values of these constants for all our bounds and give their numerical values in the following two cases.

(i) Open image in new window is a random variable uniformly distributed in the interval Open image in new window . The moments of this random variable satisfy

For Open image in new window defined by (1.41), the constants Open image in new window and Open image in new window are given by Open image in new window .

(ii) Open image in new window is uniformly distributed in Open image in new window , and in this case

For Open image in new window defined by (1.42), the constants Open image in new window and Open image in new window are given by Open image in new window .

While calculating the constants in (1.44) and (1.46) we choose Open image in new window . The quantity Open image in new window in (1.43) and (1.45) is defined by (1.34).

Conclusions

Our new bounds substantially improve the known bounds. However, from the asymptotic point of view these bounds still seem rather crude. To improve them further, new methods and approaches are needed. Some preliminary computer simulations show that in applications where Open image in new window is finite and the random variables have small means and variances (as in auditing, where a typical value of Open image in new window is Open image in new window ), the asymptotic behavior bears little relation to the behavior for small Open image in new window . Therefore bounds specially designed to cover the case of finite Open image in new window have to be developed.

2. Sharp Upper Bounds for the Fourth Moment

Recall that we consider bounded random variables such that Open image in new window , and that we write Open image in new window and Open image in new window . In Lemma 2.1 we provide an optimal upper bound for the fourth moment of Open image in new window given a shift Open image in new window , a mean Open image in new window , and a variance Open image in new window . The maximizers of the fourth moment are either Bernoulli or trinomial random variables. It turns out that their distributions, say Open image in new window , are of the following three types (i)–(iii):

(i) a two-point distribution such that
(ii) a family of three-point distributions depending on Open image in new window such that
where we write

notice that (2.4) supplies a three-point probability distribution only in cases where the inequalities Open image in new window and Open image in new window hold;

(iii) a two-point distribution such that

Note that the point Open image in new window in (2.2)–(2.7) satisfies Open image in new window and that the probability distribution Open image in new window has mean Open image in new window and variance Open image in new window .

Introduce the set
Using the well-known bound Open image in new window valid for Open image in new window , it is easy to see that
Let Open image in new window . We represent the set Open image in new window as a union Open image in new window of three subsets setting

and Open image in new window , where Open image in new window and Open image in new window are given in (2.5). Let us mention the following properties of the regions.

(a) If Open image in new window , then Open image in new window since for such Open image in new window obviously Open image in new window for all Open image in new window . The set Open image in new window is a one-point set. The set Open image in new window is empty.

(b) If Open image in new window , then Open image in new window since for such Open image in new window clearly Open image in new window for all Open image in new window . The set Open image in new window is a one-point set. The set Open image in new window is empty.

For Open image in new window all three regions Open image in new window , Open image in new window , Open image in new window are nonempty sets. The sets Open image in new window and Open image in new window have only one common point Open image in new window , that is, Open image in new window .

Lemma 2.1.

Let Open image in new window . Assume that a random variable Open image in new window satisfies

with a random variable Open image in new window satisfying (2.11) and defined as follows:

(i) if Open image in new window , then Open image in new window is a Bernoulli random variable with distribution (2.2);

(ii) if Open image in new window , then Open image in new window is a trinomial random variable with distribution (2.4);

(iii) if Open image in new window , then Open image in new window is a Bernoulli random variable with distribution (2.7).

Proof.

Writing Open image in new window , we have to prove that if

with Open image in new window . Henceforth we write Open image in new window , so that Open image in new window can assume only the values Open image in new window , Open image in new window , Open image in new window with probabilities Open image in new window , Open image in new window , Open image in new window defined in (2.2)–(2.7), respectively. The distribution Open image in new window is related to the distribution Open image in new window as Open image in new window for all Open image in new window .

Formally in our proof we do not need the description (2.17) of measures Open image in new window satisfying (2.15). However, the description helps to understand the idea of the proof. Let Open image in new window and Open image in new window . Assume that a signed measure Open image in new window of subsets of Open image in new window is such that the total variation measure Open image in new window is a discrete measure concentrated in a three-point set Open image in new window and
Then Open image in new window is a uniquely defined measure such that

We omit the elementary calculations leading to (2.17). The calculations are related to solving systems of linear equations.

Let Open image in new window . Consider the polynomial
It is easy to check that
The proofs of (i)–(iii) differ only in technical details. In all cases we find Open image in new window , Open image in new window , and Open image in new window (depending on Open image in new window , Open image in new window , and Open image in new window ) such that the polynomial Open image in new window defined by (2.18) satisfies Open image in new window for Open image in new window , and such that the coefficient Open image in new window in (2.18) vanishes, Open image in new window . Using Open image in new window , the inequality Open image in new window is equivalent to Open image in new window , which obviously leads to Open image in new window . We note that the random variable Open image in new window assumes values in the set
Therefore we have

which proves the lemma.

(i) Now Open image in new window . We choose Open image in new window and Open image in new window . In order to ensure Open image in new window (cf. (2.19)) we have to take

To complete the proof we note that the random variable Open image in new window with Open image in new window defined by (2.2) assumes its values in the set Open image in new window . To find the distribution of Open image in new window we use (2.17). Setting Open image in new window in (2.17) we obtain Open image in new window and Open image in new window , Open image in new window as in (2.2).

(ii) Now Open image in new window or, equivalently, Open image in new window and Open image in new window . Moreover, we can assume that Open image in new window since the region Open image in new window is nonempty only for such Open image in new window . We choose Open image in new window and Open image in new window . Then Open image in new window for all Open image in new window . In order to ensure Open image in new window (cf. (2.19)) we have to take

By our construction Open image in new window . To find a distribution of Open image in new window supported by the set Open image in new window we use (2.17). It follows that Open image in new window has the distribution defined in (2.4).

(iii) We choose Open image in new window and Open image in new window . In order to ensure Open image in new window (cf. (2.19)) we have to take

To conclude the proof we notice that the random variable Open image in new window with Open image in new window given by (2.7) assumes values from the set Open image in new window .

To prove Theorems 1.1 and 1.3 we apply Lemma 2.1 with Open image in new window . We provide the bounds of interest as Corollary 2.2. To prove the corollary it suffices to plug Open image in new window into Lemma 2.1 and, using (2.2)–(2.7), to calculate Open image in new window explicitly. We omit the related elementary, though cumbersome, calculations. The regions Open image in new window , Open image in new window , and Open image in new window are defined in (1.32).

Corollary 2.2.

Proposition 2.3.

Let Open image in new window . Then, with probability Open image in new window , the sample variance satisfies Open image in new window with Open image in new window given by (1.6).

Proof.

Using the representation (1.3) of the sample variance as an Open image in new window -statistic, it suffices to show that the function Open image in new window ,
in the domain

satisfies Open image in new window . The function Open image in new window is convex. To see this, it suffices to check that Open image in new window restricted to straight lines is convex. Any straight line can be represented as Open image in new window with some Open image in new window . The convexity of Open image in new window on Open image in new window is equivalent to the convexity of the function Open image in new window of the real variable Open image in new window . It is clear that the second derivative Open image in new window is nonnegative since Open image in new window . Thus both Open image in new window and Open image in new window are convex.

Since both Open image in new window and Open image in new window are convex, the function Open image in new window attains its maximal value on the boundary of Open image in new window . Moreover, the maximal value of Open image in new window is attained on the set of extremal points of Open image in new window . In our case the set of extremal points is just the set of vertices of the cube Open image in new window . In other words, the maximal value of Open image in new window is attained when each of Open image in new window is either Open image in new window or Open image in new window . Since Open image in new window is a symmetric function, we can assume that the maximal value of Open image in new window is attained when Open image in new window and Open image in new window with some Open image in new window . Using (2.28), the corresponding value of Open image in new window is Open image in new window . Maximizing with respect to Open image in new window we get Open image in new window , if Open image in new window is even, and Open image in new window , if Open image in new window is odd, which we can rewrite as the desired inequality Open image in new window .
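The maximization over cube vertices at the end of the proof can be replicated by brute force in exact arithmetic. A sketch: by symmetry only the number k of ones in a 0/1 vector matters, giving s² = k(n − k)/(n(n − 1)), whose maxima over k match the even/odd closed forms used above (the function names are ours):

```python
from fractions import Fraction

def sample_variance(xs):
    # s^2 = (1/(n-1)) * sum_i (x_i - xbar)^2, in exact rational arithmetic
    n = len(xs)
    xbar = Fraction(sum(xs), n)
    return sum((x - xbar) ** 2 for x in xs) / (n - 1)

def max_s2_on_cube(n):
    # Vertices of the cube are 0/1 vectors; by symmetry only the number
    # k of ones matters, and then s^2 = k(n-k)/(n(n-1))
    return max(sample_variance([1] * k + [0] * (n - k)) for k in range(n + 1))

for n in range(2, 12):
    m = max_s2_on_cube(n)
    if n % 2 == 0:
        assert m == Fraction(n, 4 * (n - 1))   # n even
    else:
        assert m == Fraction(n + 1, 4 * n)     # n odd
```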

3. Proofs

We use the following observation, which in the case of an exponential function goes back to Hoeffding [1, Section Open image in new window ]. Assume that we can represent a random variable, say Open image in new window , as a weighted mixture of other random variables, say Open image in new window , so that
where Open image in new window are nonrandom numbers. Let Open image in new window be a convex function. Then, using Jensen's inequality Open image in new window , we obtain
Moreover, if random variables Open image in new window are identically distributed, then
One can specialize (3.3) for Open image in new window -statistics of the second order. Let Open image in new window be a symmetric function of its arguments. For an i.i.d. sample Open image in new window consider the Open image in new window -statistic
Then (3.3) yields
for any convex function Open image in new window . To see that (3.6) holds, let Open image in new window be a permutation of Open image in new window . Define Open image in new window as in (3.5), replacing the sample Open image in new window by its permutation Open image in new window . Then (see [1, Section Open image in new window ])

which means that Open image in new window admits a representation of type (3.1) with Open image in new window and all Open image in new window identically distributed, due to our symmetry and i.i.d. assumptions. Thus, (3.3) implies (3.6).
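The permutation representation of a second-order U-statistic used above can be checked by brute force for small samples. A sketch with an illustrative kernel and data (the names are ours): averaging, over all permutations, the normalized sum of kernel values over floor(n/2) disjoint pairs recovers the U-statistic exactly:

```python
import itertools

def u_stat(xs, g):
    # Second-order U-statistic: average of g over all unordered pairs
    pairs = list(itertools.combinations(xs, 2))
    return sum(g(a, b) for a, b in pairs) / len(pairs)

def perm_average(xs, g):
    # Average over all permutations pi of the normalized sum of g over
    # k = floor(n/2) disjoint pairs (x_{pi(1)}, x_{pi(2)}), (x_{pi(3)}, x_{pi(4)}), ...
    n, k = len(xs), len(xs) // 2
    vals = [
        sum(g(xs[pi[2 * i]], xs[pi[2 * i + 1]]) for i in range(k)) / k
        for pi in itertools.permutations(range(n))
    ]
    return sum(vals) / len(vals)

g = lambda a, b: 0.5 * (a - b) ** 2   # the sample-variance kernel, up to scaling
xs = [0.1, 0.4, 0.35, 0.9, 0.6]
assert abs(u_stat(xs, g) - perm_average(xs, g)) < 1e-12
```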

Using (1.3) we can write
with Open image in new window . By an application of (3.6) we derive
for any convex function Open image in new window , where Open image in new window is a sum of i.i.d. random variables such that
Consider the following three families of functions depending on parameters Open image in new window :
Each of the functions Open image in new window given by (3.11) dominates the indicator function Open image in new window of the interval Open image in new window . Therefore Open image in new window . Combining this inequality with (3.9), we get

with Open image in new window being a sum of Open image in new window i.i.d. random variables specified in (3.10). Depending on the choice of the family of functions Open image in new window given by (3.11), the Open image in new window in (3.14) is taken over Open image in new window or Open image in new window , respectively.

Proposition 3.1.

If Open image in new window , then Open image in new window .

Proof.

Let us prove (3.15). Using the i.i.d. assumption, we have

which yields the desired bound for Open image in new window .

Proposition 3.2.

Let Open image in new window be a bounded random variable such that Open image in new window with some nonrandom Open image in new window . Then for any convex function Open image in new window one has

where Open image in new window is a Bernoulli random variable such that Open image in new window and Open image in new window .

Proof.

See [2, Lemmas Open image in new window and Open image in new window ].
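A classical instance of such convex domination by a Bernoulli law can be checked numerically. The sketch below assumes the mean–variance version (our reading; cf. [2]): if EX = 0, Var X = σ², and X ≤ b, then Ef(X) ≤ Ef(ε) for convex f, where ε is the mean-zero Bernoulli variable of variance σ² taking the values b and −σ²/b:

```python
import math

# X ~ Uniform(-1, 1): mean 0, variance 1/3, bounded above by b = 1
b = 1.0
sigma2 = 1.0 / 3.0
# Dominating Bernoulli eps matches the mean (0) and the variance (sigma2)
p_hi = sigma2 / (sigma2 + b * b)   # P(eps = b), here 1/4
lo = -sigma2 / b                   # the other value of eps, here -1/3

for h in [0.1, 0.5, 1.0, 2.0, 5.0]:
    e_f_x = math.sinh(h) / h       # E exp(hX) for X ~ Uniform(-1, 1)
    e_f_eps = p_hi * math.exp(h * b) + (1 - p_hi) * math.exp(h * lo)
    assert e_f_x <= e_f_eps + 1e-12    # convex domination for f = exp(h*.)
```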

Proof of Theorem 1.1.

The proof combines Hoeffding's observation (3.6), applied to the representation (3.8) of Open image in new window as a Open image in new window -statistic, with Chebyshev's inequality for exponential functions and with Proposition 3.2. Let us provide more details. We have to prove (1.22) and (1.24).

Let us prove (1.22). We apply (3.14) with the family (3.13) of exponential functions Open image in new window . We get
By (3.10), the sum Open image in new window is a sum of Open image in new window copies of a random variable, say Open image in new window , such that
We note that
Indeed, the first two relations in (3.23) are obvious; the third one is implied by Open image in new window ,

and Open image in new window ; see Proposition 3.1.

Let Open image in new window stand for the class of random variables Open image in new window satisfying (3.23). Taking into account (3.21), to prove (1.22) it suffices to check that
where Open image in new window is a sum of Open image in new window independent copies Open image in new window of Open image in new window . It is clear that the left-hand side of (3.25) is an increasing function of Open image in new window . To prove (3.25), we apply Proposition 3.2. Conditioning Open image in new window times on all random variables except one, we can replace all random variables Open image in new window by Bernoulli ones. To find the distribution of the Bernoulli random variables we use (3.23). We get

where Open image in new window is a sum of Open image in new window independent copies of a Bernoulli random variable, say Open image in new window , such that Open image in new window and Open image in new window with Open image in new window as in (1.23), that is, Open image in new window . Note that in (3.26) equality holds since Open image in new window .

Using (3.26) we have

To see that the third equality in (3.27) holds, it suffices to change the variable Open image in new window to Open image in new window . The fourth equality holds by definition (1.13) of the Hoeffding function since Open image in new window is a Bernoulli random variable with mean zero and such that Open image in new window . The relation (3.27) proves (3.25) and (1.22).

The proof of (1.24) repeats that of (1.22), replacing Open image in new window and Open image in new window everywhere by Open image in new window and Open image in new window , respectively. The inequality Open image in new window in (3.23) has to be replaced by Open image in new window , which holds due to our assumption Open image in new window . Accordingly, the probability Open image in new window is now given by (1.25).

Proof of (1.19).

The bound is an obvious corollary of Theorem 1.1 since by Proposition 3.1 we have Open image in new window , and therefore we can choose Open image in new window . Substituting this value of Open image in new window into (1.22), we obtain (1.19).

Proof of (1.26) and (1.27).

To prove (1.26), we set Open image in new window in (1.24). This choice of Open image in new window is justified in the proof of (1.19).

To prove (1.27) we use (1.26). We have to prove that
and that the right-hand side of (3.28) is an increasing function of Open image in new window . By the definition of the Hoeffding function we have
where Open image in new window is a Bernoulli random variable such that Open image in new window and Open image in new window . It is easy to check that Open image in new window also assumes the value Open image in new window with probability Open image in new window . Hence Open image in new window . Therefore Open image in new window , and we can write
where Open image in new window is the class of random variables Open image in new window such that Open image in new window and Open image in new window . Combining (3.29) and (3.30) we obtain
The definition of the latter Open image in new window in (3.31) shows that the right-hand side of (3.31) is an increasing function of Open image in new window . To conclude the proof of (1.27) we have to check that the right-hand sides of (3.28) and (3.31) are equal. Using (3.18) of Proposition 3.2, we get Open image in new window , where Open image in new window is a mean zero Bernoulli random variable assuming the values Open image in new window and Open image in new window with positive probabilities such that Open image in new window . Since Open image in new window , we have

Using the definition of the Hoeffding function we see that the right-hand sides of (3.28) and (3.31) are equal.

Proof of Theorem 1.3.

We use Theorem 1.1. Into the bounds of this theorem we substitute for Open image in new window the right-hand side of (2.27), where a bound of the type Open image in new window is given. We omit the related elementary analytical manipulations.

Proof of (1.7) and (1.8).

To describe the limiting behavior of Open image in new window we use Hoeffding's decomposition. We can write
To derive (3.33), use the representation of Open image in new window as a Open image in new window -statistic (3.8). The kernel functions Open image in new window and Open image in new window are degenerate, that is, Open image in new window and Open image in new window for all Open image in new window . Therefore
It follows that in cases where Open image in new window the statistic Open image in new window is asymptotically normal:
where Open image in new window is a standard normal random variable. It is easy to see that Open image in new window if and only if Open image in new window is a Bernoulli random variable symmetric around its mean. In this special case we have Open image in new window , and (3.33) turns into
where Open image in new window are i.i.d. Rademacher random variables. It follows that

which completes the proof of (1.7) and (1.8).

Acknowledgment

Figure 1 was produced by N. Kalosha. The authors thank him for the help. The research was supported by the Lithuanian State Science and Studies Foundation, Grant no. T-15/07.

References

  1. Hoeffding W: Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association 1963, 58: 13–30. doi:10.2307/2282952
  2. Bentkus V, van Zuijlen M: On conservative confidence intervals. Lithuanian Mathematical Journal 2003, 43(2): 141–160. doi:10.1023/A:1024210921597
  3. Talagrand M: The missing factor in Hoeffding's inequalities. Annales de l'Institut Henri Poincaré B 1995, 31(4): 689–702.
  4. Pinelis I: Optimal tail comparison based on comparison of moments. In High Dimensional Probability (Oberwolfach, 1996), Progress in Probability. Volume 43. Birkhäuser, Basel, Switzerland; 1998: 297–314.
  5. Pinelis I: Fractional sums and integrals of r-concave tails and applications to comparison probability inequalities. In Advances in Stochastic Inequalities (Atlanta, Ga, 1997), Contemporary Mathematics. Volume 234. American Mathematical Society, Providence, RI, USA; 1999: 149–168.
  6. Bentkus V: A remark on the inequalities of Bernstein, Prokhorov, Bennett, Hoeffding, and Talagrand. Lithuanian Mathematical Journal 2002, 42(3): 262–269. doi:10.1023/A:1020221925664
  7. Bentkus V: On Hoeffding's inequalities. The Annals of Probability 2004, 32(2): 1650–1673. doi:10.1214/009117904000000360
  8. Bentkus V, Geuze GDC, van Zuijlen M: Trinomial laws dominating conditionally symmetric martingales. Department of Mathematics, Radboud University Nijmegen; 2005.
  9. Bentkus V, Kalosha N, van Zuijlen M: On domination of tail probabilities of (super)martingales: explicit bounds. Lithuanian Mathematical Journal 2006, 46(1): 3–54.

Copyright information

© V. Bentkus and M. Van Zuijlen. 2009

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

  1. Vilnius Pedagogical University, Vilnius, Lithuania
  2. IMAPP, Radboud University Nijmegen, Nijmegen, The Netherlands
