# Bounds for Tail Probabilities of the Sample Variance


## Abstract

We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in auditing and in the processing of environmental data in mind.

### Keywords

Convex Function · Central Limit Theorem · Sample Variance · Elementary Calculation · Point Distribution

## 1. Introduction and Results

for the mean, variance, and the fourth central moment of [formula], and assume that [formula]. Some of our results hold only for bounded random variables; in such cases we assume without loss of generality that [formula]. Note that [formula] is a natural condition in audit applications.

The paper is organized as follows. In the introduction we give a description of bounds, some comments, and references. In Section 2 we obtain sharp upper bounds for the fourth moment. In Section 3 we give proofs of all facts and results from the introduction.

If [formula], then the range of interest in (1.5) is [formula], where

The restriction [formula] on the range of [formula] in (1.4) (resp., [formula] in (1.5) in cases where the condition [formula] is fulfilled) is natural. Indeed, [formula] for [formula], due to the obvious inequality [formula]. Furthermore, in the case [formula] we have [formula] for [formula], since [formula] (see Proposition 2.3 for a proof of the latter inequality).

where [formula] is a standard normal random variable, and [formula] is the standard normal distribution function.

All our bounds are expressed in terms of the function [formula]. Using (1.11), it is easy to replace them by bounds expressed in terms of the function [formula], and we omit the related formulations.
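The Hoeffding functions themselves did not survive this extraction, but the classical bound from Hoeffding [1] that such functions generalize is easy to state and compute. The sketch below is illustrative only (the parameters and function name are ours, not the paper's): it evaluates the bound P(S/n − p ≥ t) ≤ ((p/(p+t))^(p+t) (q/(q−t))^(q−t))^n for a sum S of n i.i.d. [0, 1]-valued variables with mean p and q = 1 − p.

```python
import math

def hoeffding_bound(n, p, t):
    """Classical Hoeffding (1963) upper bound for P(S/n - p >= t), where S
    is a sum of n i.i.d. random variables with values in [0, 1] and mean p.
    Illustrative sketch; not the paper's notation."""
    q = 1.0 - p
    if t <= 0:
        return 1.0          # trivial bound for non-positive deviations
    if t >= q:
        return 0.0          # a deviation beyond the support is impossible
    return ((p / (p + t)) ** (p + t) * (q / (q - t)) ** (q - t)) ** n
```

This expression equals exp(−n D(p+t‖p)) for the Bernoulli relative entropy D, and is therefore never larger than the familiar sub-Gaussian form exp(−2nt²).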

where [formula] is a Poisson random variable with parameter [formula].

as follows from (1.19) using the obvious bound [formula].

Let us note that the known bounds (1.19)–(1.21) are the best possible within an approach based on analysis of the variance, the use of exponential functions, and an inequality of Hoeffding (see (3.3)), which reduces the problem to estimating tail probabilities for sums of independent random variables. Our improvement is due to a careful analysis of the fourth moment, which turns out to be quite complicated; see Section 2. Briefly, the results of this paper are the following: we prove a general bound involving [formula], [formula], and the fourth moment [formula]; this general bound implies all other bounds, in particular a new precise bound involving [formula] and [formula]; we also provide bounds for the lower tails [formula]; and we compare the bounds analytically, mostly for sufficiently large [formula].

From the mathematical point of view the sample variance is one of the simplest nonlinear statistics. Known bounds for tail probabilities are designed with linear statistics in mind, possibly also for dependent observations; see the seminal paper of Hoeffding [1] published in JASA. For further developments see Talagrand [3], Pinelis [4, 5], Bentkus [6, 7], Bentkus et al. [8, 9], and so forth. Our intention is to develop tools useful in the setting of nonlinear statistics, using the sample variance as a test statistic.
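As a concrete illustration of the object under study: for i.i.d. observations X₁, …, Xₙ the sample variance is the standard statistic s² = (n−1)⁻¹ Σ(Xᵢ − X̄)². A minimal Monte Carlo sketch of its upper-tail probability follows; the Bernoulli data model and all numerical parameters are our illustrative assumptions, not values from the paper.

```python
import random
import statistics

def tail_estimate(n=20, t=0.05, reps=20000, p=0.3, seed=1):
    """Monte Carlo estimate of P(s^2 - sigma^2 >= t) for the sample variance
    s^2 of n i.i.d. Bernoulli(p) observations (bounded, values in [0, 1]).
    Illustrative parameters only."""
    rng = random.Random(seed)
    sigma2 = p * (1 - p)                     # true variance of Bernoulli(p)
    hits = 0
    for _ in range(reps):
        xs = [1.0 if rng.random() < p else 0.0 for _ in range(n)]
        # statistics.variance uses the (n - 1) denominator, i.e. s^2
        if statistics.variance(xs) - sigma2 >= t:
            hits += 1
    return hits / reps
```

Estimates of this kind are what the paper's closed-form tail bounds are meant to dominate.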

Theorem 1.1 extends and improves the known bounds (1.19)–(1.21). We can derive (1.19)–(1.21) from this theorem since we can estimate the fourth moment [formula] via various combinations of [formula] and [formula] using the boundedness assumption [formula].

Theorem 1.1.

Let [formula] and [formula].

Both bounds [formula] and [formula] are increasing functions of [formula], [formula], and [formula].

Remark 1.2.

To derive upper confidence bounds we need only estimates of the upper tail [formula] (see [2]). To estimate the upper tail, the condition [formula] is sufficient. The lower tail [formula] behaves differently: to estimate it we indeed need the assumption that [formula] is a bounded random variable.

For [formula], Theorem 1.1 implies the known bounds (1.19)–(1.21) for the upper tail of [formula]. It implies as well the bounds (1.26)–(1.29) for the lower tail. The lower tail has a slightly more complicated structure (cf. (1.26)–(1.29) with their counterparts (1.19)–(1.21) for the upper tail).

The bounds above do not cover the situation where both [formula] and [formula] are known. To formulate a related result we need additional notation. In the case [formula] we use the notation

Theorem 1.3.

Write [formula]. Assume that [formula].

with [formula], where [formula], and [formula] is defined by (1.34).

of survival functions [formula] (cf. definitions (1.13) and (1.14) of the related Hoeffding functions). The bounds expressed in terms of Hoeffding functions have a simple analytical structure and are easy to compute numerically.

We provide these constants for all our bounds and give their numerical values in the following two cases.

For [formula] defined by (1.41), we give the constants [formula] and [formula] as [formula].

For [formula] defined by (1.42), we give the constants [formula] and [formula] as [formula].

while calculating the constants in (1.44) and (1.46) we choose [formula]. The quantity [formula] in (1.43) and (1.45) is defined by (1.34).

### Conclusions

Our new bounds substantially improve the known bounds. However, from the asymptotic point of view these bounds still seem rather crude, and improving them further requires new methods and approaches. Some preliminary computer simulations show that in applications where [formula] is finite and the random variables have small means and variances (as in auditing, where a typical value of [formula] is [formula]), the asymptotic behavior says little about the behavior for small [formula]. Therefore, bounds specially designed to cover the case of finite [formula] have to be developed.

## 2. Sharp Upper Bounds for the Fourth Moment

Recall that we consider bounded random variables such that [formula], and that we write [formula] and [formula]. In Lemma 2.1 we provide an optimal upper bound for the fourth moment of [formula] given a shift [formula], a mean [formula], and a variance [formula]. The maximizers of the fourth moment are either Bernoulli or trinomial random variables. It turns out that their distributions, say [formula], are of the following three types (i)–(iii):

notice that (2.4) supplies a three-point probability distribution only in cases where the inequalities [formula] and [formula] hold;

Note that the point [formula] in (2.2)–(2.7) satisfies [formula] and that the probability distribution [formula] has mean [formula] and variance [formula].

and [formula], where [formula] and [formula] are given in (2.5). Let us mention the following properties of the regions.

(a) If [formula], then [formula], since for such [formula] obviously [formula] for all [formula]. The set [formula] is a one-point set. The set [formula] is empty.

(b) If [formula], then [formula], since for such [formula] clearly [formula] for all [formula]. The set [formula] is a one-point set. The set [formula] is empty.

For [formula] all three regions [formula], [formula], [formula] are nonempty. The sets [formula] and [formula] have only one common point [formula], that is, [formula].
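The message of Lemma 2.1 below, that the fourth moment is maximized by laws on at most three points, can be checked numerically on a grid. Maximizing E X⁴ over probability weights with prescribed total mass, mean, and second moment is a linear program with three equality constraints, and a basic optimal solution of such an LP has at most three nonzero weights. The sketch below assumes numpy and scipy are available; the grid and moment values are our illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

# Grid of candidate support points in [0, 1] and illustrative target moments.
x = np.linspace(0.0, 1.0, 201)
mu, sigma2 = 0.3, 0.05

# Equality constraints on the weights: total mass, mean, second moment.
A_eq = np.vstack([np.ones_like(x), x, x ** 2])
b_eq = np.array([1.0, mu, mu ** 2 + sigma2])

# Maximize E X^4 = sum_i p_i x_i^4 (linprog minimizes, hence the sign flip).
res = linprog(-(x ** 4), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * x.size, method="highs")

# A basic solution of a 3-equality-constraint LP has at most 3 atoms.
support = x[res.x > 1e-9]
```

The three constraint rows are exactly the "systems of linear equations" flavor of calculation that pins down two- and three-point extremal laws.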

Lemma 2.1.

with a random variable [formula] satisfying (2.11) and defined as follows:

(i) if [formula], then [formula] is a Bernoulli random variable with distribution (2.2);

(ii) if [formula], then [formula] is a trinomial random variable with distribution (2.4);

(iii) if [formula], then [formula] is a Bernoulli random variable with distribution (2.7).

Proof.

with [formula]. Henceforth we write [formula], so that [formula] can assume only the values [formula], [formula], [formula] with probabilities [formula], [formula], [formula] defined in (2.2)–(2.7), respectively. The distribution [formula] is related to the distribution [formula] by [formula] for all [formula].

We omit the elementary calculations leading to (2.17); they amount to solving systems of linear equations.
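The omitted linear-algebra step has the following shape: given three support points and prescribed mass, mean, and variance, the weights solve a 3×3 Vandermonde-type system, whose closed form is obtained by evaluating the moment functional on the Lagrange basis polynomials. A sketch with generic placeholder points and moments (the function name and values are ours, not the paper's):

```python
def three_point_weights(a, b, c, mu, sigma2):
    """Solve the 3x3 linear system for probabilities (p1, p2, p3) of a law
    on points a < b < c with p1 + p2 + p3 = 1, mean mu, variance sigma2.
    Closed form: apply the moment functional E to the Lagrange basis, e.g.
    p1 = E[(X - b)(X - c)] / ((a - b)(a - c)).  Illustrative sketch."""
    m2 = sigma2 + mu * mu                          # second moment E X^2
    p1 = (m2 - (b + c) * mu + b * c) / ((a - b) * (a - c))
    p2 = (m2 - (a + c) * mu + a * c) / ((b - a) * (b - c))
    p3 = (m2 - (a + b) * mu + a * b) / ((c - a) * (c - b))
    return p1, p2, p3
```

A law exists (all weights nonnegative) only for compatible moment values, which is the role of the region conditions in (2.4)–(2.5).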

which proves the lemma.

To complete the proof we note that the random variable [formula] with [formula] defined by (2.2) takes its values in the set [formula]. To find the distribution of [formula] we use (2.17). Setting [formula] in (2.17), we obtain [formula] and [formula], [formula] as in (2.2).

By our construction [formula]. To find a distribution of [formula] supported by the set [formula] we use (2.17). It follows that [formula] has the distribution defined in (2.4).

To conclude the proof we notice that the random variable [formula] with [formula] given by (2.7) takes values in the set [formula].

To prove Theorems 1.1 and 1.3 we apply Lemma 2.1 with [formula]. We provide the bounds of interest as Corollary 2.2. To prove the corollary, it suffices to substitute [formula] into Lemma 2.1 and, using (2.2)–(2.7), to calculate [formula] explicitly. We omit the related elementary but cumbersome calculations. The regions [formula], [formula], and [formula] are defined in (1.32).

Corollary 2.2.

Proposition 2.3.

Let [formula]. Then, with probability [formula], the sample variance satisfies [formula] with [formula] given by (1.6).

Proof.

satisfies [formula]. The function [formula] is convex. To see this, it suffices to check that [formula] restricted to straight lines is convex. Any straight line can be represented as [formula] with some [formula]. The convexity of [formula] on [formula] is equivalent to the convexity of the function [formula] of the real variable [formula]. The second derivative [formula] is clearly nonnegative since [formula]. Thus both [formula] and [formula] are convex.

Since both [formula] and [formula] are convex, the function [formula] attains its maximal value on the boundary of [formula]. Moreover, the maximal value of [formula] is attained on the set of extremal points of [formula]. In our case the set of extremal points is just the set of vertices of the cube [formula]. In other words, the maximal value of [formula] is attained when each of [formula] is either [formula] or [formula]. Since [formula] is a symmetric function, we can assume that the maximal value of [formula] is attained when [formula] and [formula] with some [formula]. Using (2.28), the corresponding value of [formula] is [formula]. Maximizing with respect to [formula], we get [formula] if [formula] is even, and [formula] if [formula] is odd, which we can rewrite as the desired inequality [formula].
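The vertex argument above can be confirmed by exhaustive search for small n: at a 0–1 vector with k ones the sample variance equals k(n−k)/(n(n−1)), whose maximum over k is n/(4(n−1)) for even n and (n+1)/(4n) for odd n. A small stdlib check (our own sketch of the proof's computation):

```python
import itertools
import statistics

def max_sample_variance(n):
    """Brute-force maximum of the sample variance over all vertices of the
    cube [0, 1]^n; by the convexity argument this is the maximum over the
    whole cube."""
    best = 0.0
    for v in itertools.product((0.0, 1.0), repeat=n):
        if 0.0 < sum(v) < n:          # constant vectors have variance 0
            best = max(best, statistics.variance(v))
    return best
```

For each n the search reproduces the closed-form maximum max_k k(n−k)/(n(n−1)).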

## 3. Proofs

which means that [formula] allows a representation of type (3.1) with [formula] and all [formula] identically distributed, due to our symmetry and i.i.d. assumptions. Thus, (3.3) implies (3.6).

with [formula] being a sum of [formula] i.i.d. random variables specified in (3.10). Depending on the choice of the family of functions [formula] given by (3.11), the [formula] in (3.14) is taken over [formula] or [formula], respectively.

Proposition 3.1.

If [formula], then [formula].

Proof.

which yields the desired bound for [formula].

Proposition 3.2.

where [formula] is a Bernoulli random variable such that [formula] and [formula].

Proof.

See [2, Lemmas [formula] and [formula]].

Proof of Theorem 1.1.

The proof is based on a combination of Hoeffding's observation (3.6), using the representation (3.8) of [formula] as a [formula]-statistic, Chebyshev's inequality involving exponential functions, and Proposition 3.2. Let us provide more details. We have to prove (1.22) and (1.24).

and [formula]; see Proposition 3.1.

where [formula] is a sum of [formula] independent copies of a Bernoulli random variable, say [formula], such that [formula] and [formula] with [formula] as in (1.23), that is, [formula]. Note that in (3.26) we have equality since [formula].

To see that the third equality in (3.27) holds, it suffices to change the variable [formula] to [formula]. The fourth equality holds by definition (1.13) of the Hoeffding function, since [formula] is a Bernoulli random variable with mean zero such that [formula]. The relation (3.27) proves (3.25) and (1.22).

The proof of (1.24) repeats the proof of (1.22), replacing everywhere [formula] and [formula] by [formula] and [formula], respectively. The inequality [formula] in (3.23) has to be replaced by [formula], which holds due to our assumption [formula]. Accordingly, the probability [formula] is now given by (1.25).
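The exponential-function step used in this proof is the standard exponential Chebyshev (Chernoff) device: P(S ≥ t) ≤ inf over h > 0 of e^(−ht) E e^(hS). A sketch for a Bernoulli sum, with the infimum approximated on a grid of h values (function name and all parameters are our illustrative assumptions):

```python
import math

def chernoff_bound(n, p, t, h_grid=None):
    """Exponential-Chebyshev (Chernoff) bound for P(S >= t), where S is a
    sum of n i.i.d. Bernoulli(p) variables:
        min over h > 0 of exp(-h t) * (q + p e^h)^n,  q = 1 - p.
    The minimum is approximated on a grid; meaningful for t above the
    mean n p.  Illustrative sketch."""
    q = 1.0 - p
    hs = h_grid or [0.01 * k for k in range(1, 1001)]   # h in (0, 10]
    return min(math.exp(-h * t) * (q + p * math.exp(h)) ** n for h in hs)
```

At the optimal h this reproduces the relative-entropy bound exp(−n D(t/n‖p)), the Bernoulli case of the Hoeffding-function bounds used throughout the paper.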

Proof.

The bound is an obvious corollary of Theorem 1.1, since by Proposition 3.1 we have [formula], and therefore we can choose [formula]. Substituting this value of [formula] into (1.22), we obtain (1.19).

Proof.

To prove (1.26), we set [formula] in (1.24). This choice of [formula] is justified in the proof of (1.19).

Using the definition of the Hoeffding function we see that the right-hand sides of (3.28) and (3.31) are equal.

Proof of Theorem 1.3.

We use Theorem 1.1. In the bounds of this theorem we substitute for [formula] the right-hand side of (2.27), where a bound of the type [formula] is given. We omit the related elementary analytical manipulations.

Proof.

which completes the proof of (1.7) and (1.8).

## Notes

### Acknowledgment

Figure 1 was produced by N. Kalosha; the authors thank him for his help. The research was supported by the Lithuanian State Science and Studies Foundation, Grant no. T-15/07.

### References

- [1] Hoeffding W: **Probability inequalities for sums of bounded random variables.** *Journal of the American Statistical Association* 1963, **58:** 13–30. doi:10.2307/2282952
- [2] Bentkus V, van Zuijlen M: **On conservative confidence intervals.** *Lithuanian Mathematical Journal* 2003, **43**(2): 141–160. doi:10.1023/A:1024210921597
- [3] Talagrand M: **The missing factor in Hoeffding's inequalities.** *Annales de l'Institut Henri Poincaré B* 1995, **31**(4): 689–702.
- [4] Pinelis I: **Optimal tail comparison based on comparison of moments.** In *High Dimensional Probability (Oberwolfach, 1996), Progress in Probability, Volume 43*. Birkhäuser, Basel, Switzerland; 1998: 297–314.
- [5] Pinelis I: **Fractional sums and integrals of -concave tails and applications to comparison probability inequalities.** In *Advances in Stochastic Inequalities (Atlanta, Ga, 1997), Contemporary Mathematics, Volume 234*. American Mathematical Society, Providence, RI, USA; 1999: 149–168.
- [6] Bentkus V: **A remark on the inequalities of Bernstein, Prokhorov, Bennett, Hoeffding, and Talagrand.** *Lithuanian Mathematical Journal* 2002, **42**(3): 262–269. doi:10.1023/A:1020221925664
- [7] Bentkus V: **On Hoeffding's inequalities.** *The Annals of Probability* 2004, **32**(2): 1650–1673. doi:10.1214/009117904000000360
- [8] Bentkus V, Geuze GDC, van Zuijlen M: *Trinomial laws dominating conditionally symmetric martingales.* Department of Mathematics, Radboud University Nijmegen; 2005.
- [9] Bentkus V, Kalosha N, van Zuijlen M: **On domination of tail probabilities of (super)martingales: explicit bounds.** *Lithuanian Mathematical Journal* 2006, **46**(1): 3–54.

## Copyright information

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.