
A new powerful version of the BUS test of normality


Abstract

In this paper we introduce a modified version of the BUS test, which we call NBUS (New Borovkov–Utev Statistic). The latter defines a family of goodness-of-fit tests that can be used to test normality against alternatives for which all moments up to the fifth exist. The test statistic depends on empirical moments and on real parameters that must be chosen appropriately. The good performance of the NBUS relative to the BUS and other powerful normality tests is illustrated by means of a Monte Carlo experiment for finite samples. Moreover, we show how an adaptation of the NBUS for testing departures from normality due only to kurtosis leads to performance comparable with that of classical tests based on the fourth moment.


References

  • Anderson TW, Darling DA (1952) Asymptotic theory of certain “goodness-of-fit” criteria based on stochastic processes. Ann Math Stat 23:193–212

  • Borovkov AA, Utev SA (1984) On an inequality and a related characterization of the normal distribution. Theory Probab Appl 28:219–228

  • Bonett DG, Seier E (2002) A test of normality with high uniform power. Comput Stat Data Anal 40:435–445

  • Carota C (2010) Tests for normality in classes of skew-t alternatives. Stat Probab Lett 80:1–8

  • Coin D (2008) A goodness-of-fit test for normality based on polynomial regression. Comput Stat Data Anal 52:2185–2198

  • Daniel C (1959) Use of half-normal plots in interpreting factorial two-level experiments. Technometrics 1:311–341

  • D’Agostino RB, Belanger A, D’Agostino RB Jr (1990) A suggestion for using powerful and informative tests of normality. Am Stat 44:316–321

  • D’Agostino RB, Stephens MA (eds) (1986) Goodness-of-fit techniques. Statistics: textbooks and monographs. Marcel Dekker, New York

  • Donnell DJ, Buja A, Stuetzle W (1994) Analysis of additive dependencies and concurvities using smallest additive principal components. Ann Stat 22:1635–1668

  • Goia A, Salinelli E (2010) Optimal nonlinear transformations of random variables. Ann Inst H Poincaré Probab Statist 46:653–676

  • Goia A, Salinelli E, Sarda P (2011) Exploring the statistical applicability of the Poincaré inequality: a test of normality. Test 20:334–352

  • Jarque CM, Bera AK (1980) Efficient tests for normality, homoscedasticity and serial independence of regression residuals. Econ Lett 6:255–259

  • Lilliefors HW (1967) On the Kolmogorov–Smirnov test for normality with mean and variance unknown. J Am Stat Assoc 62:399–402

  • Mudholkar GS, Marchetti CE, Lin CT (2002) Independence characterizations and testing normality against restricted skewness–kurtosis alternatives. J Stat Plan Inference 104:485–501

  • Ning W, Ngunkeng G (2013) An empirical likelihood ratio based goodness-of-fit test for skew normality. Stat Methods Appl 22:209–226

  • Poitras G (2006) More on the correct use of omnibus tests for normality. Econ Lett 90:304–309

  • Razali NM, Wah YB (2011) Power comparisons of Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors, and Anderson–Darling tests. J Stat Model Anal 2:21–33

  • Salinelli E (2009) Nonlinear principal components II. Characterization of normal distributions. J Multivar Anal 100:652–660

  • Shapiro SS, Wilk MB (1965) An analysis of variance test for normality (complete samples). Biometrika 52:591–611

  • Stuart A, Ord JK (1994) Kendall’s advanced theory of statistics. Distribution theory, vol 1, 6th edn. Edward Arnold, New York

  • Thode HC (2002) Testing for normality. Marcel Dekker, New York

  • Thode HC, Smith LA, Finch SJ (1983) Power of tests of normality for detecting scale contaminated normal samples. Commun Stat Simul 12:675–695

  • Urzúa CM (1996) On the correct use of omnibus tests for normality. Econ Lett 53:247–251

  • Yap BW, Sim CH (2011) Comparisons of various types of normality tests. J Stat Comput Simul 81:2141–2155


Acknowledgments

We wish to thank the Associate Editor and an anonymous referee for their helpful comments and suggestions which have led to substantial improvement in the presentation of this paper.

Author information

Correspondence to Aldo Goia.

Appendices

Appendix: Proof of Proposition 1

For \(m_{3}=0\) the matrix:

$$\begin{aligned} \mathbf {M}_{Q}=\left( \begin{array}{ccc} 1 &{}\quad 0 &{}\quad -\frac{\sqrt{6}}{6}\left( m_{4}-3\right) \\ 0 &{}\quad 2 &{}\quad -\frac{2\sqrt{3}}{3}\frac{m_{5}}{m_{4}-1} \\ -\frac{\sqrt{6}}{6}\left( m_{4}-3\right) &{}\quad -\frac{2\sqrt{3}}{3}\frac{m_{5}}{ m_{4}-1} &{}\quad \frac{2m_{5}^{2}}{3\left( m_{4}-1\right) ^{2}}+\frac{1}{6} m_{4}\left( m_{4}+3\right) \end{array} \right) \end{aligned}$$

has characteristic polynomial:

$$\begin{aligned} \mathcal {P}\left( x\right)&= x^{3}-\left( \frac{1}{6}m_{4}\left( m_{4}+3\right) +3+\frac{2}{3}\frac{m_{5}^{2}}{\left( m_{4}-1\right) ^{2}} \right) x^{2}+ \\&\quad +\,\left( \frac{1}{3}m_{4}^{2}+\frac{5}{2}m_{4}+\frac{1}{2}+\frac{2}{3}\frac{ m_{5}^{2}}{\left( m_{4}-1\right) ^{2}}\right) x-3\left( m_{4}-1\right) . \end{aligned}$$

Since

$$\begin{aligned} \mathcal {P}\left( 1\right) =\frac{1}{6}\left( m_{4}-3\right) ^{2} \end{aligned}$$

\(\xi =1\) is an eigenvalue of \(\mathbf {M}_{Q}\) if and only if \(m_{4}=3\). Furthermore, if \(m_{4}\ne 3\) then \(\mathcal {P}\left( 1\right) >0\); since \(\mathcal {P}\left( 0\right) =-3\left( m_{4}-1\right) <0\) (recall that \(m_{4}>1\)), there exists a positive eigenvalue of \(\mathbf {M}_{Q}\) smaller than \(1\). If \( m_{4}=3\), since

$$\begin{aligned} \mathcal {P}^{\prime }\left( x\right)&= 3x^{2}-2\left( \frac{1}{6} m_{5}^{2}+6\right) x+\frac{1}{6}m_{5}^{2}+11, \\ \mathcal {P}^{\prime \prime }\left( x\right)&= 6x-2\left( \frac{1}{6} m_{5}^{2}+6\right) \end{aligned}$$

we find

$$\begin{aligned} \mathcal {P}^{\prime }\left( 1\right) =\frac{1}{6}\left( 12-m_{5}^{2}\right) ,\quad \quad \mathcal {P}^{\prime \prime }\left( 1\right) =-\frac{1}{3} m_{5}^{2}-6<0. \end{aligned}$$

These results imply that:

  1. 1.

    if \(m_{5}^{2}>12\), then there exists an eigenvalue smaller than \(1\);

  2. 2.

    if \(m_{5}^{2}<12\), the smallest eigenvalue of \(\mathbf {M}_{Q}\) is \(1\);

  3. 3.

    if \(m_{5}^{2}=12\), the smallest eigenvalue of \(\mathbf {M}_{Q}\) is \(1\), which is a double root of the characteristic polynomial of \(\mathbf {M}_{Q}\).
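These statements are easy to check numerically. The following sketch (Python with NumPy; the helper name and the test values of \(m_{4}\) and \(m_{5}\) are ours, chosen only for illustration) builds \(\mathbf {M}_{Q}\) and prints its smallest eigenvalue in the three cases above.

```python
import numpy as np

def M_Q(m4, m5):
    """Matrix M_Q of the appendix (case m_3 = 0); m4 and m5 are the
    standardized fourth and fifth moments."""
    a = -np.sqrt(6.0) / 6.0 * (m4 - 3.0)
    b = -2.0 * np.sqrt(3.0) / 3.0 * m5 / (m4 - 1.0)
    c = 2.0 * m5 ** 2 / (3.0 * (m4 - 1.0) ** 2) + m4 * (m4 + 3.0) / 6.0
    return np.array([[1.0, 0.0, a],
                     [0.0, 2.0, b],
                     [a,   b,   c]])

# m4 != 3: a positive eigenvalue smaller than 1 appears
print(np.linalg.eigvalsh(M_Q(m4=4.0, m5=0.0)).min())   # ~0.955
# m4 = 3 and m5^2 < 12: the smallest eigenvalue is 1
print(np.linalg.eigvalsh(M_Q(m4=3.0, m5=1.0)).min())   # 1.0
# m4 = 3 and m5^2 > 12: an eigenvalue smaller than 1 appears
print(np.linalg.eigvalsh(M_Q(m4=3.0, m5=4.0)).min())   # ~0.885
```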

Proof of Proposition 2

Note first of all that \(A^{2}>\xi _{2}\) for any \(m_{4}\), \(B\) and \(D\). Indeed, this inequality is true if and only if

$$\begin{aligned} m_{4}\left( m_{4}+3\right) D^{2}-A^{2}<\sqrt{\Delta }. \end{aligned}$$
(13)

If the left-hand side of (13) is negative, the inequality is trivially satisfied; if it is nonnegative, squaring both sides shows that (13) is equivalent to:

$$\begin{aligned} -4A^{2}D^{2}\left( m_{4}-3\right) ^{2}<0, \end{aligned}$$

which is always satisfied. This result implies that if \(\xi ^{G}=A^{2}\) then \(\xi ^{G}>\xi _{\min }\). The conclusion is obvious when \(\xi ^{G}=4B^{2}\). Finally, suppose that \(\xi ^{G}=18D^{2}\), in which case the inequality \(\xi ^{G}>\xi _{\min }\) is equivalent to \(18D^{2}>\xi _{2}\); in this case we have \(18D^{2}<A^{2}\) and \(\xi _{2}<4B^{2}\). Assume now that these two last inequalities hold. Then \(18D^{2}>\xi _{2}\) if and only if

$$\begin{aligned} \xi _{2}-18D^{2}=A^{2}+\left( m_{4}\left( m_{4}+3\right) -36\right) D^{2}- \sqrt{\Delta }<0. \end{aligned}$$
(14)

Note that

$$\begin{aligned} A^{2}+\left( m_{4}\left( m_{4}+3\right) -36\right) D^{2}=A^{2}-18D^{2}+\left( m_{4}-3\right) \left( m_{4}+6\right) D^{2}. \end{aligned}$$

Assume that \(m_{4}<3\). Then, either \(A^{2}-18D^{2}\le \left( 3-m_{4}\right) \left( m_{4}+6\right) D^{2}\) and (14) is satisfied or \( A^{2}-18D^{2}>\left( 3-m_{4}\right) \left( m_{4}+6\right) D^{2}\), which leads by straightforward calculations to

$$\begin{aligned} \left( m_{4}-3\right) \left( 18D^{2}-A^{2}+2\left( m_{4}-3\right) D^{2}\right) >0 \end{aligned}$$
(15)

which again implies that (14) is satisfied.

Assume now that \(m_{4}\ge 3\). Then (14) is satisfied if and only if

$$\begin{aligned} m_{4}>3+\frac{A^{2}-18D^{2}}{2D^{2}}. \end{aligned}$$

The derivation of an approximate distribution for the test statistic \(\mathfrak {N}_{n,3}\)

In order to compute the null distribution of the test statistic \(\mathfrak {N}_{n,3}\) for finite sample sizes, one can apply the classical formula for the density of a transformation of a continuous r.v., where the transformation is defined for \(x\ge 1\) by

$$\begin{aligned} g\left( x\right) =A^{2}-\frac{1}{2}\left( A^{2}+x\left( x+3\right) D^{2}-\sqrt{\left( A^{2}+x\left( x+3\right) D^{2}\right) ^{2}-36\left( x-1\right) A^{2}D^{2}}\right) . \end{aligned}$$

If \(A=1\), \(B=1/\sqrt{2}\) and \(D=1/\sqrt{6}\), then \(g\left( x\right) \) is nonnegative, decreasing for \(x<3\) and increasing for \(x>3\). Hence the density of \(\mathfrak {N}_{n,3}/n=g\left( \widehat{m}_{4}\right) \) can be derived using the following expression:

$$\begin{aligned} \sum _{j=1}^{2}f\left( g_{j}^{-1}\left( t\right) \right) \frac{1}{\left| g^{\prime }\left( g_{j}^{-1}\left( t\right) \right) \right| } \end{aligned}$$

where \(f\) is the density of \(\widehat{m}_{4}\), \(g_{j}\) is the restriction of \(g\) to the \(j\)-th interval of monotonicity, and \(t\) denotes a point in the range of \(g\).

Since an explicit expression for \(f\) is not available, we can use one of the approximations available in the literature (see, e.g., Thode 2002, Ch. 3). For instance, when we apply the approximation based on the Pearson Type IV curve, we have

$$\begin{aligned} f\left( x\right) =c\left( 1+\frac{x^{2}}{a^{2}} \right) ^{-m}e^{-v\arctan \frac{x}{a}}\quad \quad x\in \mathbb {R} \end{aligned}$$

where the coefficients are given by the following expressions. Defining

$$\begin{aligned} s=6\frac{1+\beta _{1}-\beta _{2}}{3\beta _{1}-2\beta _{2}+6} \end{aligned}$$

we have

$$\begin{aligned} a&= \frac{\sigma }{4}\sqrt{16\left( s-1\right) -\beta _{1}\left( s-2\right) ^{2}} \\ m&= \frac{s+2}{2} \\ v&= \frac{s\left( s-2\right) \sqrt{\beta _{1}}}{\sqrt{16\left( s-1\right) -\beta _{1}\left( s-2\right) ^{2}}} \\ c&= \left( ae^{-v\frac{\pi }{2}}\int _{0}^{\pi }e^{vt}\left( \sin \left( t\right) \right) ^{s}dt\right) ^{-1} \end{aligned}$$

where the moment coefficients \(\sigma \), \(\beta _{1}\) and \(\beta _{2}\) for the statistic \(\widehat{m}_{4}\) are given on page 51 of Thode (2002).
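For illustration, the construction above can be sketched numerically. The fragment below (Python with NumPy/SciPy) is only a rough implementation under explicit assumptions: the moment coefficients \(\sigma \), \(\beta _{1}\) and \(\beta _{2}\) of \(\widehat{m}_{4}\) are treated as known inputs (the paper takes them from Thode 2002, p. 51), the Pearson curve is assumed to be fitted on the scale of \(\widehat{m}_{4}\) itself (a location shift may be required depending on the convention adopted there), and all helper names are ours.

```python
import numpy as np
from scipy import integrate, optimize

def pearson_iv_density(sigma, beta1, beta2):
    """Pearson Type IV curve with the coefficients given above; sigma,
    beta1 (squared skewness) and beta2 (kurtosis) of the sample kurtosis
    are treated as known inputs (e.g. taken from Thode 2002, p. 51) and
    must lie in the Pearson IV region so that the square roots are real."""
    s = 6.0 * (1.0 + beta1 - beta2) / (3.0 * beta1 - 2.0 * beta2 + 6.0)
    root = np.sqrt(16.0 * (s - 1.0) - beta1 * (s - 2.0) ** 2)
    a = sigma * root / 4.0
    m = (s + 2.0) / 2.0
    v = s * (s - 2.0) * np.sqrt(beta1) / root
    # normalizing constant: reciprocal of the integral of the kernel
    kernel_mass, _ = integrate.quad(lambda t: np.exp(v * t) * np.sin(t) ** s, 0.0, np.pi)
    c = 1.0 / (a * np.exp(-v * np.pi / 2.0) * kernel_mass)
    return lambda x: c * (1.0 + (x / a) ** 2) ** (-m) * np.exp(-v * np.arctan(x / a))

def density_Nn3_over_n(t, f, A=1.0, D=1.0 / np.sqrt(6.0)):
    """Approximate density of N_{n,3}/n = g(sample kurtosis) at the point t,
    summing over the two monotonicity branches of g (decreasing on (1,3),
    increasing on (3,+inf)); f is an approximate density of the sample kurtosis."""
    def g(x):
        u = A ** 2 + x * (x + 3.0) * D ** 2
        return A ** 2 - 0.5 * (u - np.sqrt(u ** 2 - 36.0 * (x - 1.0) * A ** 2 * D ** 2))
    def g_prime(x, h=1e-6):                          # numerical derivative of g
        return (g(x + h) - g(x - h)) / (2.0 * h)
    total = 0.0
    for lo, hi in [(1.0 + 1e-9, 3.0), (3.0, 1e4)]:   # finite bracket on the second branch
        try:
            x_j = optimize.brentq(lambda x: g(x) - t, lo, hi)
            total += f(x_j) / abs(g_prime(x_j))
        except ValueError:                           # t not attained on this branch
            pass
    return total
```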

Proof of Proposition 3

The proof is based on a direct application of the Second Order Delta Method when we consider the function:

$$\begin{aligned} g\left( \widehat{m}_{4}\right)&= \frac{1}{2}\Bigg ( A^{2}+\widehat{m} _{4}\left( \widehat{m}_{4}+3\right) D^{2}\\&-\sqrt{\left( A^{2}+\widehat{m} _{4}\left( \widehat{m}_{4}+3\right) D^{2}\right) ^{2}-36\left( \widehat{m} _{4}-1\right) A^{2}D^{2}}\Bigg ). \end{aligned}$$

Using the second order Taylor expansion at \(\widehat{m}_{4}=3\), we have

$$\begin{aligned} g\left( \widehat{m}_{4}\right) =A^{2}+\frac{A^{2}D^{2}}{A^{2}-18D^{2}}\left( \widehat{m}_{4}-3\right) ^{2}+o_{P}\left( \left( \widehat{m}_{4}-3\right) ^{2}\right) \end{aligned}$$

and thus

$$\begin{aligned} n\left( g\left( \widehat{m}_{4}\right) -A^{2}\right) =n\frac{A^{2}D^{2}}{A^{2}-18D^{2}}\left( \widehat{m}_{4}-3\right) ^{2}+o_{P}\left( n\left( \widehat{m}_{4}-3\right) ^{2}\right) . \end{aligned}$$

Since under the hypothesis of normality \(\sqrt{n}\left( \widehat{m}_{4}-3\right) \) converges in distribution to a centered Gaussian r.v. with variance equal to \(24\), it follows that

$$\begin{aligned} \frac{n}{24}\left( \widehat{m}_{4}-3\right) ^{2}\overset{\mathcal {L}}{\longrightarrow }\chi ^{2}\left( 1\right) \quad \hbox {and}\quad n\left( \widehat{m}_{4}-3\right) ^{2}=O_{P}\left( 1\right) . \end{aligned}$$

Therefore, by Slutsky's theorem, it follows that

$$\begin{aligned} n\frac{18D^{2}-A^{2}}{24A^{2}D^{2}}\left( A^{2}-\widehat{\mathbf {\xi }} \right) \overset{\mathcal {L}}{\longrightarrow }\chi ^{2}\left( 1\right) . \end{aligned}$$

With the choice \(A=1\) and \(D=1/\sqrt{6}\) the result follows.
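A quick Monte Carlo illustration of this limit is sketched below; it assumes that \(\widehat{m}_{4}\) is the usual standardized fourth sample moment, and the sample size, number of replications and seed are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def xi_hat_k3(x, A=1.0, D=1.0 / np.sqrt(6.0)):
    """xi_hat = g(m4_hat) as in the proof above, with m4_hat the usual
    standardized fourth sample moment."""
    z = (x - x.mean()) / x.std()                 # 1/n standard deviation
    m4 = np.mean(z ** 4)
    u = A ** 2 + m4 * (m4 + 3.0) * D ** 2
    return 0.5 * (u - np.sqrt(u ** 2 - 36.0 * (m4 - 1.0) * A ** 2 * D ** 2))

n, reps = 500, 20000
# with A = 1, D = 1/sqrt(6) the limit statement reads n (1 - xi_hat) / 2 -> chi2(1)
stat = np.array([n * (1.0 - xi_hat_k3(rng.standard_normal(n))) / 2.0 for _ in range(reps)])

for p in (0.90, 0.95, 0.99):                     # empirical vs chi2(1) quantiles
    print(p, np.quantile(stat, p), stats.chi2.ppf(p, df=1))
```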

The distribution of the NBUS test statistic for \(k=2\)

We study the distribution of \(\mathfrak {N}_{n,2}\) under \(\mathcal {H}_{0}\). As the power of the test does not change with \(B\) and \(C\), it is not restrictive to set \(A=1\) and \(B=1/2\) (that is, \(C=2\)), a choice that gives \(\mathfrak {N}_{n,2}\) a simplified form that is useful for further computations. Since \(\mathfrak {N}_{n,2}\) is a transformation of \(\widehat{m}_{3}\), it is possible to deduce an approximation of the critical value of the test for finite sample sizes, as illustrated in the following remark.

Remark 3

An approximation of the critical value of the test at level \( \alpha \) when \(n\ge 8\) is given by

$$\begin{aligned} \frac{t_{1-\alpha /2}^{2}s^{2}\left( \nu -2\right) }{8\nu }\left( \sqrt{1+ \frac{16\nu }{t_{1-\alpha /2}^{2}s^{2}\left( \nu -2\right) }}-1\right) \end{aligned}$$
(16)

where \(t_{1-\alpha /2}\) is the quantile of order \(1-\alpha /2\) of a Student’s t r.v. with \(\nu \) degrees of freedom, and

$$\begin{aligned} \nu =\frac{4\kappa -6}{\kappa -3},\quad \quad \kappa =3+\frac{36\left( n-7\right) \left( n^{2}+2n-5\right) }{\left( n-2\right) \left( n+5\right) \left( n+7\right) \left( n+9\right) },\quad \quad s^{2}=\dfrac{6\left( n-2\right) }{\left( n+1\right) \left( n+3\right) }. \end{aligned}$$

The steps to obtain the result above are the following.

We have to consider the r.v.\(~Y=g\left( X\right) \) where

$$\begin{aligned} g\left( x\right) =\frac{1}{8}\left[ \sqrt{ x^{2}\left( x^{2}+16\right) }-x^{2}\right] \quad x\in \mathbb {R }. \end{aligned}$$
(17)

This function is bounded between \(0\) and \(1\), even, and strictly increasing on the interval \(\left( 0,+\infty \right) \). Therefore, the quantile of order \(1-\alpha \) of \(Y\) can be deduced directly from the quantile of order \(1-\alpha /2\) of \(X\). Since the law of the r.v. \(X=\widehat{m}_{3}\) is well approximated (for \(n\ge 8\)) by that of a scaled Student’s t distribution with \(\nu \) degrees of freedom (see Thode 2002, p. 50):

$$\begin{aligned} \widehat{m}_{3}=s\left( \dfrac{\nu }{\nu -2}\right) ^{-1/2}T_{\nu } \end{aligned}$$

with \(\nu \), \(\kappa \) and \(s^{2}\) as defined above,

it follows that the quantile of order \(1-\alpha \) of \(Y\) is obtained by inserting the quantile of order \(1-\alpha /2\) of \(\widehat{m}_{3}\), namely \(t_{1-\alpha /2}\,s\left( \dfrac{\nu }{\nu -2}\right) ^{-1/2}\) (where \(t_{1-\alpha /2}\) is the quantile of order \(1-\alpha /2\) of \(T_{\nu }\)), into (17).
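For illustration, the computation of Remark 3 can be coded directly. The sketch below (Python with SciPy; the function name is ours) evaluates (16) and, as a sanity check, the equivalent value obtained by plugging the quantile of order \(1-\alpha /2\) of \(\widehat{m}_{3}\) into (17); the two coincide.

```python
import numpy as np
from scipy import stats

def nbus2_critical_value(n, alpha=0.05):
    """Approximate critical value (16) for k = 2, valid for n >= 8; also
    returns the equivalent value obtained by plugging the 1 - alpha/2
    quantile of the sample skewness into g of (17) (sanity check)."""
    kappa = 3.0 + 36.0 * (n - 7.0) * (n ** 2 + 2.0 * n - 5.0) / (
        (n - 2.0) * (n + 5.0) * (n + 7.0) * (n + 9.0))
    nu = (4.0 * kappa - 6.0) / (kappa - 3.0)
    s2 = 6.0 * (n - 2.0) / ((n + 1.0) * (n + 3.0))
    t = stats.t.ppf(1.0 - alpha / 2.0, df=nu)    # Student-t quantile

    # closed form (16)
    w = t ** 2 * s2 * (nu - 2.0) / (8.0 * nu)
    cv_closed = w * (np.sqrt(1.0 + 16.0 * nu / (t ** 2 * s2 * (nu - 2.0))) - 1.0)

    # the same value through g of (17) applied to the skewness quantile
    q = t * np.sqrt(s2 * (nu - 2.0) / nu)
    cv_via_g = (np.sqrt(q ** 2 * (q ** 2 + 16.0)) - q ** 2) / 8.0
    return cv_closed, cv_via_g

print(nbus2_critical_value(50))                  # the two values coincide
```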

For the sake of completeness, we derive the explicit expression of the density of \(Y\) and its cumulative distribution function (cdf). Since the inverse of the restriction of \(g\) to the interval of monotonicity \(\left( 0,+\infty \right) \) is equal to

$$\begin{aligned} \gamma _{1}\left( y\right) =\frac{2y}{\sqrt{1-y}}\quad \quad 0<y<1, \end{aligned}$$

by straightforward calculations we obtain the expression of the density of the r.v. \(Y\):

$$\begin{aligned} f\left( y\right)&= \frac{2}{s}\left( \frac{\nu }{\nu -2}\right) ^{1/2}\frac{ \Gamma \left( \frac{\nu +1}{2}\right) }{\Gamma \left( \frac{\nu }{2}\right) \sqrt{\nu \pi }}\left( 1+\frac{4}{s^{2}\left( \nu -2\right) }\frac{y^{2}}{1-y }\right) ^{-\left( \nu +1\right) /2} \\&\quad \times \, \left| \frac{2-y}{\left( 1-y\right) \sqrt{1-y}}\right| \quad \quad 0<y<1. \end{aligned}$$

The cdf of \(Y\) on the interval \(\left( 0,1\right) \) is given by

$$\begin{aligned} F\left( y\right) =2G\left( \frac{2y}{\sqrt{1-y}}\frac{1}{s}\left( \frac{\nu }{\nu -2}\right) ^{1/2}\right) -1 \end{aligned}$$

where \(G\) is the cdf of the Student’s t with \(\nu \) degrees of freedom.
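These expressions are easy to check by simulation. The sketch below compares the cdf \(F\) with the empirical distribution of \(Y=g\left( X\right) \), where \(X\) is generated from the scaled Student's t approximation; the values of \(\nu \) and \(s\) are purely illustrative (in practice they are computed from \(n\) as in Remark 3).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def F_Y(y, nu, s):
    """cdf of Y on (0, 1) as derived above."""
    arg = 2.0 * y / np.sqrt(1.0 - y) / s * np.sqrt(nu / (nu - 2.0))
    return 2.0 * stats.t.cdf(arg, df=nu) - 1.0

# Monte Carlo check: Y = g(X), X following the scaled Student t approximation
nu, s = 14.0, 0.45                               # illustrative values only
x = s * np.sqrt((nu - 2.0) / nu) * rng.standard_t(nu, size=200000)
y = (np.sqrt(x ** 2 * (x ** 2 + 16.0)) - x ** 2) / 8.0
for q in (0.05, 0.10, 0.20):
    print(q, F_Y(q, nu, s), np.mean(y <= q))
```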

For large samples, the asymptotic law of \(\mathfrak {N}_{n,2}\) is provided by the following:

Proposition 5

Under the hypothesis of normality one has

$$\begin{aligned} \mathfrak {N}_{n,2}\overset{\mathcal {L}}{\longrightarrow }\sqrt{\frac{3}{2}}H \end{aligned}$$

where \(H\) follows Daniel’s half-normal distribution (see Daniel 1959).

Proof of Proposition 5

Consider the transformation (17). Since \( g\left( x\right) =\left| x\right| /2+o\left( \left| x\right| \right) \) as \(x\rightarrow 0\), it follows that

$$\begin{aligned} \sqrt{n}\left( 1-\widehat{\xi }\right) =\sqrt{n}\frac{\left| \widehat{m} _{3}\right| }{2}+o\left( \sqrt{n}\left| \widehat{m}_{3}\right| \right) . \end{aligned}$$

Recalling that for a Gaussian sample \(\sqrt{n}\left( \widehat{m} _{3}-m_{3}\right) \) converges in distribution to a centered Gaussian r.v. with variance equal to \(6\), we obtain

$$\begin{aligned} \sqrt{n}\left| \widehat{m}_{3}\right| \overset{\mathcal {L}}{ \longrightarrow }\sqrt{6}\left| \mathcal {N}\left( 0,1\right) \right| \quad \hbox {and then}\quad \sqrt{n}\left| \widehat{m} _{3}\right| =O_{P}\left( 1\right) . \end{aligned}$$

Finally, recalling that \(H=\left| \mathcal {N}\left( 0,1\right) \right| \) is a half-normal distribution, by Slutsky's theorem we get

$$\begin{aligned} 2\sqrt{\frac{n}{6}}\left( 1-\widehat{\mathbf {\xi }}\right) \overset{\mathcal { L}}{\longrightarrow }H. \end{aligned}$$

\(\square \)
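As for Proposition 3, the half-normal limit can be illustrated by a small simulation. The sketch below assumes that \(\widehat{m}_{3}\) is the usual standardized third sample moment and that \(1-\widehat{\xi }=g\left( \widehat{m}_{3}\right) \) with \(g\) as in (17), which is how the statistic enters the proof above; sample size, replications and seed are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def one_minus_xi_k2(x):
    """1 - xi_hat = g(m3_hat) with g as in (17); m3_hat is the usual
    standardized third sample moment."""
    z = (x - x.mean()) / x.std()
    m3 = np.mean(z ** 3)
    return (np.sqrt(m3 ** 2 * (m3 ** 2 + 16.0)) - m3 ** 2) / 8.0

n, reps = 1000, 20000
stat = np.array([2.0 * np.sqrt(n / 6.0) * one_minus_xi_k2(rng.standard_normal(n))
                 for _ in range(reps)])

for p in (0.90, 0.95, 0.99):                     # empirical vs half-normal quantiles
    print(p, np.quantile(stat, p), stats.halfnorm.ppf(p))
```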

Proof of Proposition 4

Consider

$$\begin{aligned} \mathfrak {B}_{n,2}=\sqrt{\frac{n}{2}}\left( \widehat{U}_{n,2}-1\right) = \sqrt{\frac{n}{2}}\frac{1-\widehat{\xi }}{\widehat{\xi }} \end{aligned}$$

where \(\widehat{\xi }\) is defined in (12). The first partial derivatives of

$$\begin{aligned} g\left( x,y\right) =1-\xi \left( x,y\right) =1-\frac{1}{2\left( y-x^{2}-1\right) }\left( y+3-\sqrt{16x^{2}+\left( y-5\right) ^{2}}\right) \end{aligned}$$

at \(x=0\) and \(y=3\) are zero, and

$$\begin{aligned} \frac{\partial ^{2}}{\partial x^{2}}g\left( 0,3\right) =1,\quad \frac{\partial ^{2}}{\partial y^{2}}g\left( 0,3\right) =0,\quad \frac{\partial ^{2}}{\partial x\,\partial y}g\left( 0,3\right) =0. \end{aligned}$$

Since in the normal case \(m_{3}=0\), \(m_{4}=3\), and \(\left( \sqrt{n}\left( \widehat{m}_{3}-m_{3}\right) ,\sqrt{n}\left( \widehat{m}_{4}-m_{4}\right) \right) \) is asymptotically normally distributed with zero mean and covariance matrix

$$\begin{aligned} \Sigma =\left( \sigma _{ij}\right) _{\begin{array}{c} 1\le i\le 2 \\ 1\le j\le 2 \end{array}}=\left( \begin{array}{cc} 6 &{} 0 \\ 0 &{} 24 \end{array} \right) \end{aligned}$$

a direct application of the Second Order Delta Method gives

$$\begin{aligned} n\left( g\left( \widehat{m}_{3},\widehat{m}_{4}\right) -g\left( m_{3},m_{4}\right) \right) \overset{\mathcal {L}}{\longrightarrow }3\chi ^{2}\left( 1\right) . \end{aligned}$$

Hence (recall that under the assumption of normality, \(\xi =1\)) one has:

$$\begin{aligned} n\left( 1-\widehat{\xi }\right) \overset{\mathcal {L}}{\longrightarrow }3\chi ^{2}\left( 1\right) . \end{aligned}$$

Since, for a Gaussian population, \(\widehat{m}_{3}\overset{P}{ \longrightarrow }0\), \(\widehat{m}_{4}\overset{P}{\longrightarrow }3\) and \( \xi \left( x,y\right) \) is continuous at \(\left( 0,3\right) \), it follows that \( \widehat{\xi }\overset{P}{\longrightarrow }1\). Therefore, by Slutsky's theorem the statement follows.
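Analogously, the intermediate limit \(n\left( 1-\widehat{\xi }\right) \overset{\mathcal {L}}{\longrightarrow }3\chi ^{2}\left( 1\right) \) can be checked by simulation. The sketch below reads \(\widehat{\xi }=\xi \left( \widehat{m}_{3},\widehat{m}_{4}\right) \) off the expression of \(g\) above and uses the usual standardized sample moments; the constants are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def xi_hat_bus(x):
    """xi_hat = xi(m3_hat, m4_hat), read off the expression of g above;
    m3_hat and m4_hat are the usual standardized sample moments."""
    z = (x - x.mean()) / x.std()
    m3, m4 = np.mean(z ** 3), np.mean(z ** 4)
    return (m4 + 3.0 - np.sqrt(16.0 * m3 ** 2 + (m4 - 5.0) ** 2)) / (2.0 * (m4 - m3 ** 2 - 1.0))

n, reps = 1000, 20000
stat = np.array([n * (1.0 - xi_hat_bus(rng.standard_normal(n))) for _ in range(reps)])

for p in (0.90, 0.95, 0.99):                     # empirical vs 3*chi2(1) quantiles
    print(p, np.quantile(stat, p), 3.0 * stats.chi2.ppf(p, df=1))
```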


Cite this article

Goia, A., Salinelli, E. & Sarda, P. A new powerful version of the BUS test of normality. Stat Methods Appl 24, 449–474 (2015). https://doi.org/10.1007/s10260-014-0292-5
