
Testing normality via a distributional fixed point property in the Stein characterization


Abstract

We propose two families of tests for the classical goodness-of-fit problem to univariate normality. The new procedures are based on \(L^2\)-distances of the empirical zero-bias transformation to the empirical distribution or the normal distribution function. Weak convergence results are derived under the null hypothesis, under contiguous alternatives, and under fixed alternatives. A comparative finite-sample power study shows that the new tests are competitive with classical procedures.
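The zero-bias transformation underlying the tests rests on the Stein characterization: \(W \sim {\mathcal {N}}(0,1)\) if and only if \(\mathbb {E}[W f(W)] = \mathbb {E}[f^{\prime }(W)]\) for all absolutely continuous f with \(\mathbb {E}|f^{\prime }(W)| < \infty \), i.e. the standard normal law is the unique fixed point of the transformation. The following Monte Carlo sketch (not part of the article) illustrates this identity for the test function \(f(w) = \sin w\), for which both sides equal \(e^{-1/2}\).

```python
import math
import random

# Illustrative Monte Carlo check (not from the paper) of the Stein identity
# E[W f(W)] = E[f'(W)] for W ~ N(0,1), which characterizes the normal law and
# makes N(0,1) the unique fixed point of the zero-bias transformation.
random.seed(1)
n = 200_000
w = [random.gauss(0.0, 1.0) for _ in range(n)]

# Test function f(w) = sin(w), so f'(w) = cos(w); both sides equal exp(-1/2).
lhs = sum(v * math.sin(v) for v in w) / n   # Monte Carlo estimate of E[W sin(W)]
rhs = sum(math.cos(v) for v in w) / n       # Monte Carlo estimate of E[cos(W)]
print(lhs, rhs, math.exp(-0.5))
```

With 200,000 draws both estimates agree with \(e^{-1/2} \approx 0.6065\) to about three decimal places.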


References

  1. Allison JS, Santana L (2015) On a data-dependent choice of the tuning parameter appearing in certain goodness-of-fit tests. J Stat Comput Simul 85(16):3276–3288

  2. Allison JS, Santana L, Smit N, Visagie IJH (2017) An ‘apples to apples’ comparison of various tests for exponentiality. Comput Stat 32(4):1241–1283

  3. Anderson TW, Darling DA (1952) Asymptotic theory of certain “goodness of fit” criteria based on stochastic processes. Ann Math Stat 23:193–212

  4. Baringhaus L, Henze N (1988) A consistent test for multivariate normality based on the empirical characteristic function. Metrika 35(1):339–348

  5. Baringhaus L, Henze N (2017) Cramér–von Mises distance: probabilistic interpretation, confidence intervals, and neighbourhood-of-model validation. J Nonparametr Stat 29(2):167–188

  6. Baringhaus L, Danschke R, Henze N (1989) Recent and classical tests for normality—a comparative study. Commun Stat Simul Comput 18(1):363–379

  7. Baringhaus L, Gürtler N, Henze N (2000) Weighted integral test statistics and components of smooth tests of fit. Aust N Z J Stat 42(2):179–192

  8. Baringhaus L, Ebner B, Henze N (2017) The limit distribution of weighted \({L}^2\)-goodness-of-fit statistics under fixed alternatives, with applications. Ann Inst Stat Math 69(5):969–995

  9. Bera AK, Galvao AF, Wang L, Xiao Z (2016) A new characterization of the normal distribution and test for normality. Econom Theory 32(5):1216–1252

  10. Billingsley P (1995) Probability and measure, 3rd edn. Wiley, Hoboken

  11. Chapman DG (1958) A comparative study of several one-sided goodness-of-fit tests. Ann Math Stat 29(3):655–674

  12. Chen LHY, Goldstein L, Shao QM (2011) Normal approximation by Stein's method. Probability and its applications. Springer, Berlin

  13. del Barrio E, Cuesta-Albertos JA, Matrán C, Rodriguez-Rodriguez JM (1999) Tests of goodness of fit based on the \({L}_2\)-Wasserstein distance. Ann Stat 27(4):1230–1239

  14. del Barrio E, Cuesta-Albertos JA, Matrán C, Csörgő S, Cuadras CM, de Wet T, Giné E, Lockhart R, Munk A, Stute W (2000) Contributions of empirical and quantile processes to the asymptotic theory of goodness-of-fit tests. TEST 9(1):1–96

  15. Epps TW, Pulley LB (1983) A test for normality based on the empirical characteristic function. Biometrika 70(3):723–726

  16. Farrell PJ, Rogers-Stewart K (2006) Comprehensive study of tests for normality and symmetry: extending the Spiegelhalter test. J Stat Comput Simul 76(9):803–816

  17. Goldstein L, Reinert G (1997) Stein’s method and the zero bias transformation with application to simple random sampling. Ann Appl Probab 7(4):935–952

  18. Gross J, Ligges U (2015) nortest: tests for normality. R package version 1.0-4

  19. Hájek J, Šidák Z, Sen PK (1999) Theory of rank tests. Probability and Mathematical Statistics. Academic Press, Cambridge

  20. Henze N (1990) An approximation to the limit distribution of the Epps–Pulley test statistic for normality. Metrika 37:7–18

  21. Henze N (1994) Tests of normality (in German). Allgemeines statistisches Archiv. J German Stat Soc 78(3):293–317

  22. Henze N (2002) Invariant tests for multivariate normality: a critical review. Stat Pap 43(4):467–506

  23. Henze N, Wagner T (1997) A new approach to the BHEP tests for multivariate normality. J Multivariate Anal 62:1–23

  24. Henze N, Zirkler B (1990) A class of invariant consistent tests for multivariate normality. Commun Stat Theory Methods 19(10):3595–3617

  25. Henze N, Jiménez-Gamero MD (2018) A new class of tests for multinormality with i.i.d. and GARCH data based on the empirical moment generating function. TEST. https://doi.org/10.1007/s11749-018-0589-z

  26. Klar B (2001) Goodness-of-fit tests for the exponential and the normal distribution based on the integrated distribution function. Ann Inst Stat Math 53(2):338–353

  27. Krauczi É (2009) A study of the quantile correlation test for normality. TEST 18(1):156–165

  28. Landry L, Lepage Y (1992) Empirical behavior of some tests for normality. Commun Stat Simul Comput 21(4):971–999

  29. Ledoux M, Talagrand M (2011) Probability in Banach spaces. Isoperimetry and processes. Springer, Berlin

  30. Liu Q, Lee JD, Jordan M (2016) A kernelized Stein discrepancy for goodness-of-fit tests. In: Proceedings of the 33rd international conference on international conference on machine learning, vol 48, pp 276–284

  31. Mecklin CJ, Mundfrom DJ (2004) An appraisal and bibliography of tests for multivariate normality. Int Stat Rev 72(1):123–138

  32. Neuhaus G (1979) Asymptotic theory of goodness of fit tests when parameters are present: a survey. Stat J Theor Appl Stat 10(3):479–494

  33. Pearson ES, D’Agostino RB, Bowman KO (1977) Tests for departure from normality: comparison of powers. Biometrika 64(2):231–246

  34. R Core Team (2017) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna

  35. Romão X, Delgado R, Costa A (2010) An empirical power comparison of univariate goodness-of-fit tests for normality. J Stat Comput Simul 80(5):545–591

  36. Ross N (2011) Fundamentals of Stein's method. Probab Surv 8:210–293

  37. Roussas GG (1972) Contiguity of probability measures: some applications in statistics. Cambridge University Press, Cambridge

  38. Sen PK (1981) Sequential nonparametrics: invariance principles and statistical inference. Wiley, New York

  39. Shapiro SS, Francia RS (1972) An approximate analysis of variance test for normality. J Am Stat Assoc 67(337):215–216

  40. Shapiro SS, Wilk MB (1965) An analysis of variance test for normality (complete samples). Biometrika 52(3/4):591–611

  41. Shapiro SS, Wilk MB, Chen HJ (1968) A comparative study of various tests for normality. J Am Stat Assoc 63(324):1343–1372

  42. Stein C (1972) A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In: Proceedings of the sixth Berkeley symposium on mathematical statistics and probability, volume 2: probability theory, pp 583–602

  43. Stein C (1986) Approximate computation of expectations, vol 7. Lecture Notes-Monograph Series. Institute of Mathematical Statistics, Hayward

  44. Vasicek O (1976) A test for normality based on sample entropy. J R Stat Soc Ser B (Methodol) 38(1):54–59

  45. Villaseñor-Alva JA, González-Estrada E (2015) A correlation test for normality based on the Lévy characterization. Commun Stat Simul Comput 44(5):1225–1238

  46. Widder DV (1959) The Laplace transform, 5th printing. Princeton University Press, Princeton

  47. Yap BW, Sim CH (2011) Comparisons of various types of normality tests. J Stat Comput Simul 81(12):2141–2155


Acknowledgements

The authors thank Norbert Henze for useful comments and also express their gratitude to three anonymous referees for careful reading and suggestions that helped improve the article.

Author information

Correspondence to Bruno Ebner.


Appendices

A Preliminary results concerning the weight functions

We first prove that the density function of a centred normal distribution is an admissible weight function. Then, we give a general result on the asymptotic behaviour of integral terms involving weight functions of the type we consider. Throughout this section, we adopt the setting and notation of Sect. 2.

Lemma 2

The functions \(\omega _a(s) = (2 \pi a)^{-1/2} \exp (- s^2 / (2 a))\), \(s \in \mathbb {R}\), \(a > 0\), satisfy the weight function conditions stated in Sect. 2.

Proof

The only non-trivial statement is that \(\omega _a\) satisfies (8). Let \(0< \varepsilon < 1/8\) be arbitrary. In the case \(|S_n^{-1} - 1| \le \varepsilon \) and \(|{\overline{X}}_n| / S_n \le \varepsilon \), a Taylor expansion gives

$$\begin{aligned} \omega _a\left( \frac{s - {\overline{X}}_n}{S_n} \right) - \omega _a(s) = \omega _a^{\prime }\big ( \xi _n(s) \big ) \left( \frac{s - {\overline{X}}_n}{S_n} - s \right) , \end{aligned}$$
(24)

where \(\big |\xi _n(s) - s\big | \le \big | (s - {\overline{X}}_n) / S_n - s \big | \le (|s| + 1) / 8\). Consequently,

$$\begin{aligned} \big ( \xi _n(s) \big )^2 - s^2&\ge \min \left\{ \left| s - \frac{|s| + 1}{8} \right| ,\quad \left| s + \frac{|s| + 1}{8} \right| \right\} ^2 - s^2 \\&= - \frac{15}{64} s^2 - \frac{7}{32} |s| + \frac{1}{64} \end{aligned}$$

from which we conclude

$$\begin{aligned} \frac{\big | \omega _a^{\prime }\big ( \xi _n(s) \big ) \big |^3}{\big (\omega _a(s)\big )^2}&= \frac{\big | \xi _n(s) \big |^3}{a^3 \sqrt{2 \pi a}} \exp \left( - \frac{3}{2 a} \left( \big (\xi _n(s)\big )^2 - s^2 \right) - \frac{1}{2 a} s^2 \right) \\&\le \frac{1}{a^3 \sqrt{2 \pi a}} \big ( 2 |s| + 1 \big )^3 \exp \left( - \frac{s^2}{8 a} + \frac{|s|}{a} \right) . \end{aligned}$$

Combining this with (24),

$$\begin{aligned}&n \int _{\mathbb {R}} \left| \omega _a\left( \frac{s - {\overline{X}}_n}{S_n} \right) - \omega _a(s) \right| ^{3} \big ( \omega _a(s) \big )^{-2} \mathrm {d}s \\&\quad \le \varepsilon \int _{\mathbb {R}} n \left| \left( \frac{1}{S_n} - 1 \right) s - \frac{{\overline{X}}_n}{S_n} \right| ^2 \frac{\big ( 2 |s| + 1 \big )^4}{a^3 \sqrt{2 \pi a}} \exp \left( - \frac{s^2}{8 a} + \frac{|s|}{a} \right) \mathrm {d}s. \end{aligned}$$

As \(\varepsilon \) was arbitrary, the claim follows from the boundedness in probability of \(\sqrt{n} \big ( S_n^{-1} - 1 \big )\) and \(\sqrt{n} \big ( {\overline{X}}_n / S_n \big )\). \(\square \)
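As a complement to Lemma 2, the following numerical sketch (not part of the article) verifies that the Gaussian weights \(\omega _a\) are probability densities with finite polynomial moments, in line with the integrability requirements on admissible weight functions; the fourth moment \(3a^2\) serves as a concrete check.

```python
import numpy as np

# Numerical sanity check (not from the paper): the Gaussian weight
# omega_a(s) = (2*pi*a)**(-1/2) * exp(-s**2 / (2*a)) of Lemma 2 is a
# probability density with finite polynomial moments; its fourth moment
# equals 3*a**2.
def omega(s, a):
    return np.exp(-s**2 / (2.0 * a)) / np.sqrt(2.0 * np.pi * a)

grid = np.linspace(-15.0, 15.0, 300_001)
h = grid[1] - grid[0]
results = {}
for a in (0.5, 1.0, 5.0):
    total = omega(grid, a).sum() * h            # Riemann sum, should be ~ 1
    m4 = (grid**4 * omega(grid, a)).sum() * h   # should be ~ 3 * a**2
    results[a] = (total, m4)
print(results)
```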

Lemma 3

Let \({\mathcal {U}}_n\) be a random element of \({\mathcal {H}}\), \(n \in \mathbb {N}\), such that \(\left||\sqrt{n} \, {\mathcal {U}}_n \right||_{{\mathcal {H}}} = O_\mathbb {P}(1)\). Then,

$$\begin{aligned} \int _{\mathbb {R}} \big |\sqrt{n} \, {\mathcal {U}}_n(s)\big | \left| \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) - \omega (s) \right| \mathrm {d}s = o_\mathbb {P}(1). \end{aligned}$$

If, in addition, \(\sup _{s \, \in \, \mathbb {R}} \big | {\mathcal {U}}_n(s) \big | \le C\) holds \(\mathbb {P}\)-a.s. for each \(n \in \mathbb {N}\) and some \(C > 0\), then

$$\begin{aligned} \int _{\mathbb {R}} \big |\sqrt{n} \, {\mathcal {U}}_n(s) \big |^2 \, \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) \mathrm {d}s = \left||\sqrt{n} \, {\mathcal {U}}_n \right||_{{\mathcal {H}}}^2 + o_\mathbb {P}(1). \end{aligned}$$

Proof

By Hölder’s inequality (\(p = q = 2\)) and Slutsky’s lemma,

$$\begin{aligned}&\int _{\mathbb {R}} \big |\sqrt{n} \, {\mathcal {U}}_n(s)\big | \left| \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) - \omega (s) \right| \mathrm {d}s \\&\quad \le \left||\sqrt{n} \, {\mathcal {U}}_n \right||_{{\mathcal {H}}} \left( \int _{\mathbb {R}} \left| \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) \Big / \omega (s) - 1 \right| ^2 \omega (s) \, \mathrm {d}s \right) ^{1/2} \\&\quad = o_\mathbb {P}(1), \end{aligned}$$

where we used the assumption on \({\mathcal {U}}_n\) and the fact that (8) implies

$$\begin{aligned}&\int _{\mathbb {R}} \left| \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) \Big / \omega (s) - 1 \right| ^2 \omega (s) \, \mathrm {d}s \\&\quad \le \left( \int _{\mathbb {R}} \left| \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) \Big / \omega (s) - 1 \right| ^3 \omega (s) \mathrm {d}s \right) ^{2/3} \left( \int _{\mathbb {R}} \omega (s) \, \mathrm {d}s \right) ^{1/3} \\&\quad = o_\mathbb {P}(1). \end{aligned}$$

The second claim also follows from Hölder’s inequality (\(p = 3/2, \, q = 3\)) and (8) since

$$\begin{aligned}&\left| \int _{\mathbb {R}} \big |\sqrt{n} \, {\mathcal {U}}_n(s)\big |^2 \, \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) \mathrm {d}s - \left||\sqrt{n} \, {\mathcal {U}}_n \right||_{{\mathcal {H}}}^2 \right| \\&\quad \le n \int _{\mathbb {R}} \big | {\mathcal {U}}_n(s) \big |^2 \big (\omega (s)\big )^{2/3} \, \left| \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) \Big / \omega (s) - 1 \right| \big (\omega (s)\big )^{1/3} \, \mathrm {d}s \\&\quad \le n^{2/3} \left( \int _{\mathbb {R}} \big | {\mathcal {U}}_n(s) \big |^3 \omega (s) \mathrm {d}s \right) ^{2/3} n^{1/3} \left( \int _{\mathbb {R}} \left| \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) \Big / \omega (s) - 1 \right| ^3 \omega (s) \, \mathrm {d}s \right) ^{1/3} \\&\quad \le C^{2/3} \left||\sqrt{n} \, {\mathcal {U}}_n \right||_{{\mathcal {H}}}^{4/3} \left( n \int _{\mathbb {R}} \left| \omega \left( \frac{s - {\overline{X}}_n}{S_n} \right) - \omega (s) \right| ^{3} \big ( \omega (s) \big )^{-2} \mathrm {d}s \right) ^{1/3} \\&\quad = o_\mathbb {P}(1). \end{aligned}$$

\(\square \)

B Asymptotic expansions

We adopt the setting of Sect. 2, that is, we let \(X, X_1, X_2, \ldots \) be i.i.d. random variables with distribution function F satisfying \(\mathbb {E}[X^2] < \infty \) as well as \(\mathbb {E}X = 0\), \(\mathbb {V}(X) = 1\). The following lemma collects basic facts about a quantity closely related to the empirical zero-bias distribution function.

Lemma 4

The function

$$\begin{aligned} {\widehat{F}}_n^X (s) = \frac{1}{n} \sum _{j=1}^{n} \frac{X_j - {\overline{X}}_n}{S_n^2} \, (X_j - s) \, \mathbb {1}\{X_j \le s\}, \quad s \in \mathbb {R}, \end{aligned}$$

is a continuous distribution function for each \(n \in \mathbb {N}\) (on a set of probability one). Furthermore,

$$\begin{aligned} \sup \limits _{s \, \in \, \mathbb {R}} \left| {\widehat{F}}_n^X (s) - F^X (s) \right| \longrightarrow 0 \end{aligned}$$
(25)

\(\mathbb {P}\)-a.s., as \(n \rightarrow \infty \), and

$$\begin{aligned} \sqrt{n} \, {\widehat{F}}_n^X (s) \approx \frac{\sqrt{n}}{S_n^2} \left\{ \frac{1}{n} \sum \limits _{j=1}^{n} X_j (X_j - s) \mathbb {1}\{ X_j \le s \} - {\overline{X}}_n \, \mathbb {E}\big [ (X - s) \mathbb {1}\{X \le s\} \big ] \right\} . \end{aligned}$$
(26)

Proof

We fix \(n \in \mathbb {N}\) and notice that

$$\begin{aligned} {\widehat{d}}_n^X (s) = \frac{1}{n} \sum _{j=1}^{n} \frac{X_j - {\overline{X}}_n}{S_n^2} \, \mathbb {1}\{X_j > s\} = - \frac{1}{n} \sum _{j=1}^{n} \frac{X_j - {\overline{X}}_n}{S_n^2} \, \mathbb {1}\{X_j \le s\} \,(\ge 0) . \end{aligned}$$

Using the first representation when integrating over \(({\overline{X}}_n, \infty )\) and the second for \((- \infty , {\overline{X}}_n]\), we obtain

$$\begin{aligned} \int _{\mathbb {R}} {\widehat{d}}_n^X (t) \, \mathrm {d}t = \frac{1}{S_n^2} \left( \frac{1}{n} \sum _{j=1}^{n} \left( X_j - {\overline{X}}_n \right) ^2 \right) = 1. \end{aligned}$$

Now, we conclude from

$$\begin{aligned} \int _{- \infty }^{s} {\widehat{d}}_n^X (t) \, \mathrm {d}t = - \frac{1}{n} \sum _{j=1}^{n} \frac{X_j - {\overline{X}}_n}{S_n^2} \int _{- \infty }^{s} \mathbb {1}\{X_j \le t\} \, \mathrm {d}t = {\widehat{F}}_n^X (s) \end{aligned}$$

that \({\widehat{F}}_n^X\) is a continuous distribution function. By the strong law of large numbers and the almost sure convergence \(({\overline{X}}_n, S_n^2) \rightarrow (0,1)\), we have

$$\begin{aligned} {\widehat{F}}_n^X (s)&= \frac{1}{S_n^2} \cdot \frac{1}{n} \sum _{j=1}^{n} X_j (X_j - s) \mathbb {1}\{ X_j \le s \} \\&\quad - \frac{{\overline{X}}_n}{S_n^2} \cdot \frac{1}{n} \sum _{j=1}^{n} (X_j - s) \mathbb {1}\{ X_j \le s \} \\ {}&\longrightarrow F^X (s) \end{aligned}$$

\(\mathbb {P}\)-a.s., as \(n \rightarrow \infty \), for any fixed \(s \in \mathbb {R}\). The proof of the classical Glivenko–Cantelli theorem applies to \({\widehat{F}}_n^X\), which yields (25). For the last claim, we set

$$\begin{aligned} A_n(s) = \frac{1}{n} \sum _{j=1}^{n} (X_j - s) \, \mathbb {1}\{X_j \le s\} - \mathbb {E}\big [(X - s) \, \mathbb {1}\{X \le s\} \big ] , \quad s \in \mathbb {R}. \end{aligned}$$

Straightforward calculations using Tonelli’s theorem and the integrability condition (7) give

$$\begin{aligned} \mathbb {E}\left[ \int _{\mathbb {R}} A_n(s)^2 \, \omega (s) \, \mathrm {d}s \right] \longrightarrow 0, \quad \text {as} \quad n \rightarrow \infty , \end{aligned}$$

so \(\left||A_n \right||_{{\mathcal {H}}}^2 = o_{\mathbb {P}}(1)\). Together with \(\sqrt{n} \, {\overline{X}}_n = O_\mathbb {P}(1)\) and Slutsky’s lemma, this implies (26). \(\square \)
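Lemma 4 can be illustrated numerically. For standard normal data, the zero-bias distribution \(F^X\) equals \(\varPhi \) itself (the fixed point property), so by (25) the empirical zero-bias distribution function \({\widehat{F}}_n^X\) should be uniformly close to \(\varPhi \) for large n. A sketch (not part of the article, with an ad hoc sample size and grid):

```python
import math
import numpy as np

# Illustration (not from the paper) of Lemma 4 and display (25): for standard
# normal data the zero-bias distribution F^X is Phi itself, so hat F_n^X
# should approach Phi uniformly as n grows.
rng = np.random.default_rng(0)

def zero_bias_cdf(x, s):
    """hat F_n^X(s) = (1/n) sum_j (X_j - Xbar_n)(X_j - s) 1{X_j <= s} / S_n^2."""
    xbar, s2 = x.mean(), x.var()          # S_n^2 with the 1/n normalization
    xx, ss = x[:, None], np.asarray(s)[None, :]
    return ((xx - xbar) * (xx - ss) * (xx <= ss)).mean(axis=0) / s2

def Phi(s):
    return 0.5 * (1.0 + math.erf(s / math.sqrt(2.0)))

x = rng.standard_normal(20_000)
grid = np.linspace(-4.0, 4.0, 81)
sup_err = float(np.max(np.abs(zero_bias_cdf(x, grid) - np.array([Phi(s) for s in grid]))))
print(sup_err)   # empirical version of the sup in (25), small for large n
```

Note that \({\widehat{F}}_n^X(s) = 1\) exactly once s exceeds the sample maximum, since \(\frac{1}{n} \sum _j (X_j - {\overline{X}}_n) X_j = S_n^2\).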

We proceed by proving further asymptotic expansions of the same type as (26).

Lemma 5

Assume, in addition to the above prerequisites, that X has a continuously differentiable density function p with

$$\begin{aligned} \mathop {\sup }\nolimits _{s \, \in \, \mathbb {R}} \big | p(s) \big | \le K_1< \infty \quad \text {and} \quad \mathop {\sup }\nolimits _{s \, \in \, \mathbb {R}} \big | p^{\prime }(s) \big | \le K_2 < \infty . \end{aligned}$$

We have

$$\begin{aligned} \sqrt{n} \, F \left( \frac{s - {\overline{X}}_n}{S_n} \right) \approx \sqrt{n} \left\{ F(s) + p(s) \left( \left( \frac{1}{S_n} - 1 \right) s - \frac{{\overline{X}}_n}{S_n} \right) \right\} \end{aligned}$$

and, with \(F^X\) as in Lemma 1,

$$\begin{aligned} \sqrt{n} \, F^X \left( \frac{s - {\overline{X}}_n}{S_n} \right) \approx \sqrt{n} \left\{ F^X(s) + d^X(s) \left( \left( \frac{1}{S_n} - 1 \right) s - \frac{{\overline{X}}_n}{S_n} \right) \right\} . \end{aligned}$$

Moreover,

$$\begin{aligned} \sqrt{n} \, S_n^2 \, F^X(s) \approx \frac{1}{\sqrt{n}} \sum \limits _{j=1}^{n} X_j^2 \, F^X(s) \end{aligned}$$

which reads as \(\sqrt{n} \, S_n^2 \, \varPhi (s) \approx n^{- 1/2} \sum _{j=1}^{n} X_j^2 \, \varPhi (s)\) when \(\mathbb {P}^X = {\mathcal {N}}(0, 1)\) (cf. Theorem 1).

Proof

By Taylor’s theorem,

$$\begin{aligned} \sqrt{n} \, F \left( \frac{s - {\overline{X}}_n}{S_n} \right) = \sqrt{n} \left\{ F(s) + p(s) \left( \frac{s - {\overline{X}}_n}{S_n} - s \right) \right\} + R_n(s), \end{aligned}$$

where

$$\begin{aligned} R_n (s) = \sqrt{n} \, \frac{p^{\prime }\big (\xi _n(s)\big )}{2} \left( \frac{s - {\overline{X}}_n}{S_n} - s \right) ^2 \end{aligned}$$

and \(\big | \xi _n(s) - s \big | \le \big | (s - {\overline{X}}_n) / S_n - s \big |\). Condition (7) ensures that \(R_n \in {\mathcal {H}}\) holds \(\mathbb {P}\)-a.s., and since \(\sqrt{n} \big (S_n^{-1} - 1\big ) = O_\mathbb {P}(1)\) and \(\sqrt{n} \, {\overline{X}}_n = O_\mathbb {P}(1)\), we conclude

$$\begin{aligned} \left||R_n \right||_{{\mathcal {H}}}^2 \le \frac{K_2^2}{4} \int _{\mathbb {R}} n \left| \left( \frac{1}{S_n} - 1 \right) s - \frac{{\overline{X}}_n}{S_n} \right| ^4 \omega (s) \, \mathrm {d}s = o_\mathbb {P}(1). \end{aligned}$$

Now, let \(0< \varepsilon < 1\) be arbitrary. In the case \(\big | S_n^{-1} - 1 \big | \le \varepsilon \) and \(\big | {\overline{X}}_n \big | / S_n \le \varepsilon \), we have

$$\begin{aligned} \sqrt{n} \, F^X \left( \frac{s - {\overline{X}}_n}{S_n} \right) = \sqrt{n} \left\{ F^X(s) + d^X(s) \left( \frac{s - {\overline{X}}_n}{S_n} - s \right) \right\} + {\widetilde{R}}_n(s) , \end{aligned}$$

where

$$\begin{aligned} {\widetilde{R}}_n(s) = - \frac{\sqrt{n}}{2} \, {\widetilde{\xi }}_n(s) \, p\big ( {\widetilde{\xi }}_n(s) \big ) \left( \frac{s - {\overline{X}}_n}{S_n} - s \right) ^2 \end{aligned}$$

and \(\big |{\widetilde{\xi }}_n(s) - s \big | \le \big | (s - {\overline{X}}_n) / S_n - s \big | \le |s| + 1\). Using \(\big ( {\widetilde{\xi }}_n(s) \big )^2 \le (2 |s| + 1)^2\), we get

$$\begin{aligned} \left||{\widetilde{R}}_n \right||_{{\mathcal {H}}}^2&\le \frac{K_1^2}{4} \int _{\mathbb {R}} n \left| \frac{s - {\overline{X}}_n}{S_n} - s \right| ^4 \big ( 2 |s| + 1 \big )^2 \, \omega (s) \, \mathrm {d}s \\&\le \frac{\varepsilon ^2 K_1^2}{4} \int _{\mathbb {R}} n \left| \left( \frac{1}{S_n} - 1 \right) s - \frac{{\overline{X}}_n}{S_n} \right| ^2 \big ( 2 |s| + 1 \big )^4 \, \omega (s) \, \mathrm {d}s . \end{aligned}$$

Since \(\sqrt{n} \big (S_n^{-1} - 1\big )\) and \(\sqrt{n} \big ( {\overline{X}}_n / S_n \big )\) are bounded in probability and \(\varepsilon \) was arbitrary, \(||{\widetilde{R}}_n||_{{\mathcal {H}}}^2 = o_\mathbb {P}(1)\). The last claim of the lemma follows from

$$\begin{aligned} \left||\sqrt{n} \, S_n^2 \, F^X - \frac{1}{\sqrt{n}} \sum \limits _{j=1}^{n} X_j^2 \, F^X \right||_{{\mathcal {H}}} = \sqrt{n} \, {\overline{X}}_n^2 \left||F^X \right||_{{\mathcal {H}}} = o_\mathbb {P}(1). \end{aligned}$$

\(\square \)
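The first expansion of Lemma 5 can also be checked numerically for \(F = \varPhi \): the remainder \(R_n\) is of order \(O_\mathbb {P}(1/\sqrt{n})\) uniformly on compact sets. A sketch (not part of the article, with ad hoc sample sizes and a grid on \([-3, 3]\)):

```python
import math
import numpy as np

# Sketch (not from the paper) of the first expansion in Lemma 5 for F = Phi:
# the remainder sqrt(n)*[Phi((s - Xbar)/S) - Phi(s) - phi(s)*((1/S - 1)s - Xbar/S)]
# shrinks like O_P(1/sqrt(n)) on compact s-sets.
rng = np.random.default_rng(3)

def Phi(s):
    return np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in np.atleast_1d(s)])

def phi(s):
    return np.exp(-s**2 / 2.0) / math.sqrt(2.0 * math.pi)

errs = {}
s = np.linspace(-3.0, 3.0, 61)
for n in (100, 10_000):
    x = rng.standard_normal(n)
    xbar, S = x.mean(), x.std()   # S_n with the 1/n normalization
    lhs = math.sqrt(n) * Phi((s - xbar) / S)
    rhs = math.sqrt(n) * (Phi(s) + phi(s) * ((1.0 / S - 1.0) * s - xbar / S))
    errs[n] = float(np.max(np.abs(lhs - rhs)))
print(errs)   # maximal remainder on the grid; decreases with n
```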

C Proof of the limit relations in (11) and (12)

We will give the proof of (11), using the notation from Sect. 2. The limit in (12) is obtained by the same argument. Set

$$\begin{aligned} g(s) = s^{-1/2} \left( \frac{1}{n} \sum \limits _{j=1}^{n} \big ( Y_j (Y_j - \sqrt{2 s}) - 1 \big ) \, \mathbb {1}\{ Y_j \le \sqrt{2 s} \} \right) ^2, \quad s > 0, \end{aligned}$$

as well as

$$\begin{aligned} {\widetilde{g}}(s) = s^{-1/2} \left( \frac{1}{n} \sum \limits _{j=1}^{n} \big ( Y_j (Y_j + \sqrt{2 s}) - 1 \big ) \, \mathbb {1}\{ Y_j \le - \sqrt{2 s} \} \right) ^2, \quad s > 0. \end{aligned}$$

Splitting the integral in the definition of \(G_{n, a}^{(1)}\) (see (5)) into integrals over \((- \infty , 0]\) and \((0, \infty )\), simple changes of variable yield

$$\begin{aligned} \lim \limits _{a \, \searrow \, 0} G_{n, a}^{(1)}&= \lim \limits _{a \, \searrow \, 0} \frac{n}{2 \sqrt{\pi }} \left( a^{-1/2} \int _0^\infty g(s) \, e^{-s / a} \, \mathrm {d}s + a^{-1/2} \int _0^\infty {\widetilde{g}}(s) \, e^{-s / a} \, \mathrm {d}s \right) \\&= \lim \limits _{a \, \rightarrow \, \infty } \frac{n}{2 \sqrt{\pi }} \left( a^{1/2} \int _0^\infty g(s) \, e^{-a s} \, \mathrm {d}s + a^{1/2} \int _0^\infty {\widetilde{g}}(s) \, e^{-a s} \, \mathrm {d}s \right) . \end{aligned}$$

Since the integrals on the right-hand side of the above equation are Laplace transforms, and since we have

$$\begin{aligned} \lim \limits _{s \, \searrow \, 0} \varGamma (1/2) \, s^{1/2} g(s) = \sqrt{\pi } \left( \frac{1}{n} \sum \limits _{j=1}^{n} (Y_j^2 - 1) \, \mathbb {1}\{ Y_j \le 0 \} \right) ^2 \end{aligned}$$

and

$$\begin{aligned} \lim \limits _{s \, \searrow \, 0} \varGamma (1/2) \, s^{1/2} \, {\widetilde{g}}(s) = \sqrt{\pi } \left( \frac{1}{n} \sum \limits _{j=1}^{n} (Y_j^2 - 1) \, \mathbb {1}\{ Y_j < 0 \} \right) ^2, \end{aligned}$$

an Abelian theorem for the Laplace transform, as stated on p. 182 of Widder (1959) (see also Baringhaus et al. 2000), implies the claim. Here, \(\varGamma \) denotes the Gamma function, and \(\varGamma (1/2) = \sqrt{\pi }\).
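The Abelian argument can be made concrete numerically: if \(s^{1/2} g(s) \rightarrow A\) as \(s \searrow 0\), then \(a^{1/2} \int _0^\infty g(s) \, e^{-as} \, \mathrm {d}s \rightarrow A \, \varGamma (1/2) = A \sqrt{\pi }\) as \(a \rightarrow \infty \). The sketch below (not part of the article) uses the explicit choice \(g(s) = s^{-1/2} e^{-s}\), so \(A = 1\), and removes the integrable singularity at zero by substituting \(s = u^2\).

```python
import math
import numpy as np

# Numerical illustration (not from the paper) of the Abelian theorem used above:
# if s^{1/2} g(s) -> A as s -> 0+, then
#   a^{1/2} * int_0^inf g(s) exp(-a*s) ds -> A * Gamma(1/2) = A * sqrt(pi).
# Here g(s) = s^{-1/2} * exp(-s), so A = 1; substituting s = u^2 gives the
# smooth integrand 2 * exp(-(a + 1) * u^2).
def scaled_laplace(a, u_max=10.0, m=200_000):
    u = np.linspace(0.0, u_max, m + 1)
    vals = 2.0 * np.exp(-(a + 1.0) * u**2)                  # integrand after s = u^2
    h = u[1] - u[0]
    integral = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * h  # trapezoidal rule
    return math.sqrt(a) * integral

for a in (10.0, 100.0, 1000.0):
    print(a, scaled_laplace(a), "-> target", math.sqrt(math.pi))
```

The exact value is \(\sqrt{\pi a / (a + 1)}\), which increases to \(\sqrt{\pi }\) as \(a \rightarrow \infty \).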


About this article


Cite this article

Betsch, S., Ebner, B. Testing normality via a distributional fixed point property in the Stein characterization. TEST 29, 105–138 (2020). https://doi.org/10.1007/s11749-019-00630-0


Keywords

  • Goodness-of-fit test
  • Normal distribution
  • Stein’s method
  • Zero-bias transformation

Mathematics Subject Classification

  • Primary 62F03
  • Secondary 62F05
  • 60E10
  • 62E10