Goodness of fit test for general linear model with nonignorable missing on response variable


In this paper, we consider a general linear model in which the response variable is subject to nonignorable missingness, and we model the probability of missingness with a logistic model. The main purpose of this paper is to construct score-type test functions for checking the goodness of fit of the general linear model. To this end, we employ two appropriate estimating models and construct a test function from each. The asymptotic properties of the test functions under the null and alternative hypotheses are derived with the tilting parameter estimated. The performance of the test functions is assessed through simulation studies, and the methods are applied to check the goodness of fit of models fitted to real data.




  1. Bahari, F., Parsi, S., Ganjali, M.: A doubly robust goodness-of-fit test in general linear models with missing covariates. Commun. Stat. Simul. Comput. 46(10), 7909–7923 (2017)

  2. Cotos-Yáñez, T.R., Pérez-González, A., González-Manteiga, W.: Model checks for nonparametric regression with missing data: a comparative study. J. Stat. Comput. Simul. 86(16), 3188–3204 (2016)

  3. Creemers, A., Aerts, M., Hens, N., Molenberghs, G.: A nonparametric approach to weighted estimating equations for regression analysis with missing covariates. Comput. Stat. Data Anal. 56, 100–113 (2011)

  4. Fang, F., Shao, J.: Model selection with nonignorable nonresponse. Biometrika 103(1), 1–14 (2016)

  5. Genbäck, M., Stanghellini, E., de Luna, X.: Uncertainty intervals for regression parameters with nonignorable missingness in the outcome. Stat. Pap. 56(3), 829–847 (2015)

  6. Guo, X., Xu, W.: Goodness-of-fit tests for general linear models with covariates missed at random. J. Stat. Plan. Inference 142, 2047–2058 (2012)

  7. Härdle, W., Mammen, E.: Comparing nonparametric versus parametric regression fits. Ann. Stat. 21, 1921–1947 (1993)

  8. Härdle, W., Mammen, E., Müller, M.: Testing parametric versus semiparametric modeling in generalized linear models. J. Am. Stat. Assoc. 93(444), 1461–1474 (1998)

  9. Kim, J.K., Yu, C.L.: A semiparametric estimation of mean functionals with nonignorable missing data. J. Am. Stat. Assoc. 106(493), 157–165 (2011)

  10. Li, X.: Lack-of-fit testing of a regression model with response missing at random. J. Stat. Plan. Inference 142(1), 155–170 (2012)

  11. Niu, C., Guo, X., Xu, W., Zhu, L.: Empirical likelihood inference in linear regression with nonignorable missing response. Comput. Stat. Data Anal. 79, 91–112 (2014)

  12. Park, J., Cho, S.: Investigating the effect of task complexities on the response time of human operators to perform the emergency tasks of nuclear power plants. Ann. Nucl. Energy 37(9), 1160–1171 (2010)

  13. Robins, J., Rotnitzky, A., Zhao, L.: Estimation of regression coefficients when some regressors are not always observed. J. Am. Stat. Assoc. 89, 846–866 (1994)

  14. Rotnitzky, A., Robins, J., Scharfstein, D.: Semiparametric regression for repeated outcomes with non-ignorable non-response. J. Am. Stat. Assoc. 93, 1321–1339 (1998)

  15. Rubin, D.B.: Inference and missing data. Biometrika 63, 581–592 (1976)

  16. Sun, Z., Wang, Q., Dai, P.: Model checking for partially linear models with missing responses at random. J. Multivar. Anal. 100(4), 636–651 (2009)

  17. Sun, Z., Ip, W.C., Wang, H.: Model checking for a general linear model with nonignorable missing covariates. Acta Math. Appl. Sin. (Engl. Ser.) 28(1), 99–110 (2012)

  18. Wang, S., Wang, C.Y.: A note on kernel assisted estimators in missing covariate regression. Stat. Probab. Lett. 55, 439–449 (2001)

  19. Wang, F., Peng, H., Shi, W.: The relationship between air layers and evaporative resistance of male Chinese ethnic clothing. Appl. Ergon. 56, 194–202 (2016)

  20. Zhao, L.P., Lipsitz, S.: Design and analysis of two-stage studies. Stat. Med. 11, 769–782 (1992)

  21. Zhao, L.X., Zhao, P.Y., Tang, N.S.: Empirical likelihood inference for mean functionals with nonignorably missing response data. Comput. Stat. Data Anal. 66, 101–116 (2013)

  22. Zhu, L., Cui, H.: Testing the adequacy for a general linear errors-in-variables model. Stat. Sin. 14, 1049–1068 (2005)


Author information



Corresponding author

Correspondence to Safar Parsi.




Regularity conditions

To establish our conclusions, we assume that the following regularity conditions hold (Wang and Wang 2001):

(C1) \(\pi (v,\gamma )\) is bounded and has partial derivatives up to order 2 almost surely.

(C2) The kernel function \(k_{h}(\cdot )\) is continuous and of order r, with \(r\ge 2\).

(C3) The density function \(f({\varvec{x}})\) of \({\varvec{X}}\) exists and has bounded derivatives up to at least order 2.

(C4) The matrix \(\Sigma\) has finite entries and is positive definite.

(C5) \(nh^{2r}+(nh^{2m})^{-1}\rightarrow 0\) as \(n\rightarrow \infty\), where m is the dimension of the vector \({\varvec{x}}\).

(C6) \(E(Y^2)\) and \(E({\mathrm{e}}^{2\gamma Y})\) exist and are finite.

(C7) The matrices \(V_A\) and \(V_{{\mathrm{IPW}}}\), which are defined in Theorem 1, are positive definite.

Proof of Lemma 2

We prove only parts (c) and (d) of Lemma 2; the other parts follow in a similar way by taking \(C_n=0\).

Proof in IPW case

If \(\gamma\) is estimated from Eq. (13), then under the IPW method we can write

$$\begin{aligned} \sqrt{n}\left( {\hat{\beta }}_{{\mathrm{IPW}}}-\beta \right)&=\left\{ \frac{1}{n}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }g\left( {\varvec{x}}_{i}\right) g^{\mathrm{T}}\left( {\varvec{x}}_{i}\right) \right] \right\} ^{-1}\nonumber \\&\quad \times \left\{ \frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }g\left( {\varvec{x}}_{i}\right) \left( y_{i}-g^{\mathrm{T}}\left( {\varvec{x}}_{i}\right) \beta \right) \right] \right\} :=A^{-1}B. \end{aligned}$$
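For readers who want a concrete view of the estimator being expanded here, the following is a minimal numerical sketch. It is not the authors' code; the helper name `ipw_beta` is hypothetical, and the estimated response probabilities \({\hat{\pi }}({\varvec{v}}_i,{\hat{\gamma }})\) are simply taken as a given input vector:

```python
import numpy as np

def ipw_beta(y, G, delta, pi_hat):
    """Inverse-probability-weighted least-squares estimate of beta.

    Solves A @ beta = b with
        A = (1/n) * sum_i (delta_i / pi_i) * g(x_i) g(x_i)^T
        b = (1/n) * sum_i (delta_i / pi_i) * g(x_i) y_i,
    mirroring the A^{-1} B decomposition above.  Rows with delta_i = 0
    contribute nothing, so missing y-values may hold any placeholder.
    """
    n = len(y)
    w = delta / pi_hat                       # zero whenever y_i is missing
    A = (G * w[:, None]).T @ G / n
    b = (G * w[:, None]).T @ np.where(delta == 1, y, 0.0) / n
    return np.linalg.solve(A, b)
```

With complete data (all \(\delta _i=1\) and \({\hat{\pi }}\equiv 1\)) the weights vanish from the normal equations and the sketch reduces to ordinary least squares.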

Now, we can write A as follows:

$$\begin{aligned} A&=\frac{1}{n}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }g\left( {\varvec{x}}_{i}\right) g^{\mathrm{T}}\left( \varvec{{\varvec{x}}}_{i}\right) \right] = \frac{1}{n}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) g^{\mathrm{T}}\left( {\varvec{x}}_{i}\right) \right] \\&\quad +\frac{1}{n}\sum _{i=1}^{n}\left[ \left( \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }-\frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},\gamma \right) }\right) g\left( {\varvec{x}}_{i}\right) g^{\mathrm{T}}\left( {\varvec{x}}_{i}\right) \right] :=A_1+A_2. \end{aligned}$$

For sufficiently large sample sizes, using \({\hat{\pi }}({\varvec{v}}_i,\gamma )=\pi ({\varvec{v}}_i,\gamma )+o_P(1)\) and regularity condition (C1), we can write \(A_1\) as follows:

$$\begin{aligned} A_1=\frac{1}{n}\sum _{i=1}^{n}\frac{\delta _{i}}{\pi ({\varvec{v}}_{i},\gamma )}g({\varvec{x}}_{i})g^{\mathrm{T}}({\varvec{x}}_{i})+o_P(1)=\Sigma +o_P(1). \end{aligned}$$

Since \({\hat{\gamma }}\) is a consistent estimator of \(\gamma\), for large n it is not difficult to show that \({\hat{\pi }}({\varvec{v}}_i,\gamma )-{\hat{\pi }}({\varvec{v}}_i,{\hat{\gamma }})=o_P(1)\). Therefore,

$$\begin{aligned} A_2=o_P(1). \end{aligned}$$

Consequently, from (23) and (24), we have:

$$\begin{aligned} A=A_1+A_2=\Sigma +o_P(1). \end{aligned}$$

For B, we can write:

$$\begin{aligned} B&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }g\left( {\varvec{x}}_{i}\right) \left( y_{i}-g^{\mathrm{T}}\left( {\varvec{x}}_{i}\right) \beta \right) \right] \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }g\left( {\varvec{x}}_{i}\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \\&\quad +\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \left( \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }-\frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},\gamma \right) }\right) g\left( {\varvec{x}}_{i}\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \\&:=B_1+B_2. \end{aligned}$$

Now, we rewrite \(B_1\) as follows:

$$\begin{aligned} B_1&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi \left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \\&\quad +\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \left( \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},\gamma \right) }-\frac{\delta _{i}}{\pi \left( {\varvec{v}}_{i},\gamma \right) }\right) g\left( {\varvec{x}}_{i}\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \\&:=B_{11}+B_{12}. \end{aligned}$$

We keep \(B_{11}\) unchanged. For \(B_{12}\), replacing \({\hat{\pi }}(\cdot )\) and \(\pi (\cdot )\) with their equivalent expressions from Eq. (10), we obtain:

$$\begin{aligned} B_{12}&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}[\delta _i({\hat{\alpha }}({\varvec{x}}_i,\gamma )-\alpha ({\varvec{x}}_i,\gamma ))g({\varvec{x}}_{i})(\eta _i+C_nG({\varvec{x}}_i))] \nonumber \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}[\delta _i({\hat{\alpha }}({\varvec{x}}_i,\gamma ) \nonumber \\&\quad -\left. \alpha ({\varvec{x}}_i,\gamma ))g({\varvec{x}}_{i})(\eta _i+C_nG({\varvec{x}}_i))\frac{\sum _{j=1}^{n}[(1-\delta _j)-\delta _j{\mathrm{e}}^{\gamma y_{j}}\alpha ({\varvec{x}}_i,\gamma )K_h({\varvec{x}}_i-{\varvec{x}}_j)]}{\sum _{j=1}^{n}[\delta _j{\mathrm{e}}^{\gamma y_{j}}K_h({\varvec{x}}_i-{\varvec{x}}_j)]}\right] , \end{aligned}$$

where the last equality follows from the empirical estimator \({\hat{\alpha }}(\cdot )\) [Eq. (9)] and some simple algebra. For the denominator of Eq. (27), we have:

$$\begin{aligned} \frac{1}{n}\sum _{j=1}^{n}\left[ \delta _j{\mathrm{e}}^{\gamma y_j}K_h({\varvec{x}}_i-{\varvec{x}}_j)\right] =\frac{1}{n}\sum _{j=1}^{n}\left[ \delta _j\left( \frac{1}{\pi ({\varvec{v}}_i,\gamma )}-1\right) \alpha ^{-1}({\varvec{x}}_j,\gamma )K_h({\varvec{x}}_i-{\varvec{x}}_j)\right] . \end{aligned}$$
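Kernel averages of this form appear repeatedly in the derivation. As a side illustration (a sketch only: it assumes a Gaussian kernel for \(K_h\) and treats the weights \(\delta _j{\mathrm{e}}^{\gamma y_j}\) as a generic weight vector), the following helper computes \(\frac{1}{n}\sum _j w_j K_h(x_0-x_j)\):

```python
import numpy as np

def kernel_avg(x0, x, w, h):
    """(1/n) * sum_j w_j * K_h(x0 - x_j) with a Gaussian kernel.

    This matches the form of the denominator above; with w_j = 1 it is
    the classical kernel density estimate of f(x0), the density factor
    that appears in the SLLN limit of such averages.
    """
    k = np.exp(-0.5 * ((x0 - x) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return np.mean(w * k)
```

For instance, with \(w_j\equiv 1\) and \(x_j\) drawn from N(0, 1), `kernel_avg(0.0, x, w, h)` approximates the standard normal density \(f(0)\approx 0.3989\), consistent with the \(f({\varvec{x}}_i)\) factor in the limit derived next.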

By the strong law of large numbers (SLLN), with the expectation taken conditionally on \({\varvec{x}}_i\), and by some algebra, it is not difficult to show that

$$\begin{aligned} \frac{1}{n}\sum _{j=1}^{n}\left[ \delta _j\left( \frac{1}{\pi ({\varvec{v}}_i,\gamma )}-1\right) \alpha ^{-1}({\varvec{x}}_j,\gamma )K_h({\varvec{x}}_i-{\varvec{x}}_j)\right] =\alpha ^{-1}({\varvec{x}}_i,\gamma )f({\varvec{x}}_i)E(1-\delta |{\varvec{x}}_i)+o_P(1), \end{aligned}$$

where \(f({\varvec{x}}_i)\) is the density function of the random variable \({\varvec{X}}_i\). Now, substituting the above expression into Eq. (27), we can write \(B_{12}\) as follows:

$$\begin{aligned} B_{12}&=\frac{1}{n^{\frac{3}{2}}}\sum _{j=1}^{n}\sum _{i=1}^{n}\left[ \frac{\delta _i{\mathrm{e}}^{\gamma y_i}g({\varvec{x}}_i)(\eta _i+C_nG({\varvec{x}}_i))}{\alpha ^{-1}({\varvec{x}}_i,\gamma )f({\varvec{x}}_i)E(1-\delta |{\varvec{x}}_i)}((1-\delta _j)-\delta _j{\mathrm{e}}^{\gamma y_j}\alpha ({\varvec{x}}_i,\gamma )K_h({\varvec{x}}_j-{\varvec{x}}_i))\right] \nonumber \\&\quad +o_P(1)\nonumber \\&=\frac{1}{n^{\frac{3}{2}}}\sum _{j=1}^{n}\sum _{i=1}^{n}\left[ \frac{\delta _iO({\varvec{v}}_i)g({\varvec{x}}_i)(\eta _i+C_nG({\varvec{x}}_i))}{f({\varvec{x}}_i)E(1-\delta |{\varvec{x}}_i)}((1-\delta _j)-\delta _j{\mathrm{e}}^{\gamma y_j}\alpha ({\varvec{x}}_i,\gamma )K_h({\varvec{x}}_j-{\varvec{x}}_i))\right] \nonumber \\&\quad +o_P(1). \end{aligned}$$

Applying the SLLN to the inner summation and using some algebra, one can conclude that:

$$\begin{aligned} B_{12}=\frac{1}{\sqrt{n}}\sum _{j=1}^{n}\left[ \left( 1-\frac{\delta _j}{\pi ({\varvec{v}}_j,\gamma )}\right) \frac{E(g({\varvec{X}})(\eta +C_nG({\varvec{X}}))(1-\delta )|{\varvec{x}}_j)}{E(1-\delta |{\varvec{x}}_j)}\right] +o_P(1). \end{aligned}$$

Based on the exponential tilting property of Eq. (5), we can write:

$$\begin{aligned} E(g({\varvec{X}})(\eta +C_nG({\varvec{X}}))|{\varvec{x}}_i,\delta =0)&=\frac{E(g({\varvec{X}})(\eta +C_nG({\varvec{X}})){\mathrm{e}}^{\gamma Y}|{\varvec{x}}_i,\delta =1)}{E({\mathrm{e}}^{\gamma Y}|{\varvec{x}}_i,\delta =1)}\nonumber \\&=\frac{E(g({\varvec{X}})(\eta +C_nG({\varvec{X}}))(1-\delta )|{\varvec{x}}_i)}{E((1-\delta )|{\varvec{x}}_i)}. \end{aligned}$$


Therefore,

$$\begin{aligned} E(g({\varvec{X}})(\eta +C_nG({\varvec{X}}))(1-\delta )|{\varvec{x}}_i)=E((1-\delta )|{\varvec{x}}_i)E(g({\varvec{X}})(\eta +C_nG({\varvec{X}}))|{\varvec{x}}_i,\delta =0). \end{aligned}$$

Now, substituting Eq. (33) into Eq. (31), we conclude that:

$$\begin{aligned} B_{12}=\frac{1}{\sqrt{n}}\sum _{j=1}^{n}\left[ \left( 1-\frac{\delta _j}{\pi ({\varvec{v}}_j,\gamma )}\right) E(g({\varvec{X}})(\eta +C_nG({\varvec{X}}))|{\varvec{x}}_j,\delta =0)\right] +o_P(1). \end{aligned}$$

For sufficiently large sample sizes, it is easy to show that \(E(B_{12})=o(1)\) and \(\mathrm{Var}(B_{12})=o(1)\). Consequently, by Chebyshev’s inequality,

$$\begin{aligned} B_{12}=o_p(1). \end{aligned}$$


$$\begin{aligned} B_1&=B_{11}+B_{12}=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi \left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] +o_P\left( 1\right) \nonumber \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi \left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \eta _i\right] +\sqrt{n}C_nE\left( g\left( {\varvec{X}}\right) G\left( {\varvec{X}}\right) \right) +o_P\left( 1\right) . \end{aligned}$$

For \(B_2\) we have,

$$\begin{aligned} B_2&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \left( \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }\right) g\left( {\varvec{x}}_i\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \nonumber \\&=\frac{1}{n}\sum _{i=1}^{n}\left[ \delta _i g\left( {\varvec{x}}_i\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \frac{\partial }{\partial \gamma }\pi ^{-1}\left( {\varvec{x}}_i,\gamma \right) \right] \sqrt{n}\left( {\hat{\gamma }}-\gamma \right) +o_P\left( 1\right) \nonumber \\&=\frac{1}{n}\sum _{i=1}^{n}\left[ \delta _ig\left( {\varvec{x}}_i\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) {\hat{\alpha }}\left( {\varvec{x}}_i,\gamma \right) {\mathrm{e}}^{\gamma y_i}\left( y_i-{\hat{m}}\left( {\varvec{x}}_i,\gamma \right) \right) \right] \sqrt{n}\left( {\hat{\gamma }}-\gamma \right) +o_P\left( 1\right) , \end{aligned}$$

where the middle equality follows from the mean value theorem and the last equality is obtained by substituting the explicit form of the derivative. By \({\hat{m}}({\varvec{x}}_i,\gamma )=m({\varvec{x}}_{i},\gamma )+o_P(1)\) and \({\hat{\alpha }}({\varvec{x}}_i,\gamma ){\mathrm{e}}^{\gamma y_i}=\left( \frac{1}{{\hat{\pi }}({\varvec{v}}_i,\gamma )}-1\right)\), we can rewrite \(B_2\) as follows:

$$\begin{aligned} B_2= & {} \frac{1}{n}\sum _{i=1}^{n}\left[ \delta _i\left( \frac{1}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }-1\right) g\left( {\varvec{x}}_i\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \left( y_i-m\left( {\varvec{x}}_{i},\gamma \right) \right) \right] \sqrt{n}\left( {\hat{\gamma }}-\gamma \right) \nonumber \\&+o_P\left( 1\right) , \end{aligned}$$

Therefore, by the SLLN we have:

$$\begin{aligned} B_2&=E\left( \left( 1-\pi \left( {\varvec{V}},\gamma \right) \right) g\left( {\varvec{X}}\right) \left( \eta -\eta ^*\right) ^2\right) \sqrt{n}\left( {\hat{\gamma }}-\gamma \right) +o_P\left( 1\right) \nonumber \\&=H\sqrt{n}\left( {\hat{\gamma }}-\gamma \right) +o_P\left( 1\right) . \end{aligned}$$

Now, by using Eqs. (36) and (39), we can write,

$$\begin{aligned} B=B_1+B_2=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi ({\varvec{v}}_{i},\gamma )}g({\varvec{x}}_{i})\eta _i\right] +H\sqrt{n}({\hat{\gamma }}-\gamma )+o_P(1). \end{aligned}$$

By Lemma 1, we can write the above equation as follows:

$$\begin{aligned} B=B_1+B_2&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi \left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \eta _i\right] +\sqrt{n}C_nE\left( g\left( {\varvec{X}}\right) G\left( {\varvec{X}}\right) \right) \nonumber \\&\quad +HM^{-1}\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ r_i\left( 1-\delta _i\right) \right. \nonumber \\&\quad \left. -\delta _iE\left( r|\delta =0\right) \left( \frac{1}{\pi \left( {\varvec{v}}_i,\gamma \right) }-1\right) \left( y_i-m\left( {\varvec{x}}_{i},\gamma \right) \right) \right] +o_P\left( 1\right) . \end{aligned}$$

Finally, substituting Eqs. (26) and (41) into Eq. (23) establishes part (c) of Lemma 2. Part (a) of Lemma 2 then follows by taking \(C_n=0\) in part (c).

Proof in augmented case

The proof of part (d) of Lemma 2 parallels that of part (c), so we keep the exposition brief. If the parameters of the general linear model are estimated by the augmented method, we have

$$\begin{aligned} \sqrt{n}\left( {\hat{\beta }}_{{\mathrm{A}}}-\beta \right)&=\left\{ \frac{1}{n}\sum _{i=1}^{n}\left[ g\left( {\varvec{x}}_{i}\right) g^{\mathrm{T}}\left( {\varvec{x}}_{i}\right) \right] \right\} ^{-1}\left\{ \frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( v_{i},{\hat{\gamma }}\right) }g\left( {\varvec{x}}_{i}\right) \left( y_{i}-g^{\mathrm{T}}\left( {\varvec{x}}_{i}\right) \beta \right) \right. \right. \nonumber \\&\quad \left. \left. +\left( 1-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }\right) g\left( {\varvec{x}}_i\right) \left( {\hat{m}}\left( {\varvec{x}}_i,\gamma \right) -g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \beta \right) \right] \right\} :=C^{-1}D. \end{aligned}$$
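The augmented estimating equation above can also be sketched numerically. As before this is a hypothetical helper, not the authors' code; the fitted mean values \({\hat{m}}({\varvec{x}}_i,\gamma )\) and estimated probabilities \({\hat{\pi }}\) are supplied as inputs:

```python
import numpy as np

def aipw_beta(y, G, delta, pi_hat, m_hat):
    """Augmented estimate of beta, mirroring the C^{-1} D expansion above.

    Each unit contributes the weighted observed response delta/pi * y plus
    the augmentation term (1 - delta/pi) * m_hat, so the design matrix
    C = (1/n) * sum_i g(x_i) g(x_i)^T involves all units, observed or not.
    """
    n = len(y)
    w = delta / pi_hat
    pseudo = w * np.where(delta == 1, y, 0.0) + (1.0 - w) * m_hat
    C = G.T @ G / n
    d = G.T @ pseudo / n
    return np.linalg.solve(C, d)
```

With complete data (all \(\delta _i=1\), \({\hat{\pi }}\equiv 1\)) the augmentation term drops out and the sketch again reduces to ordinary least squares, whatever `m_hat` is.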

By the SLLN, C becomes

$$\begin{aligned} C=\Sigma +o_P\left( 1\right) . \end{aligned}$$

For D, we can write:

$$\begin{aligned} D&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }g\left( {\varvec{x}}_{i}\right) \left( y_{i}-g^{\mathrm{T}}\left( {\varvec{x}}_{i}\right) \beta \right) \right. \\&\quad \left. +\left( 1-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }\right) g\left( {\varvec{x}}_i\right) \left( {\hat{m}}\left( {\varvec{x}}_i,\gamma \right) -g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \beta \right) \right] \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }g\left( {\varvec{x}}_{i}\right) \eta _i+\left( 1-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }\right) g\left( {\varvec{x}}_i\right) \eta _{i}^{*}+C_nG\left( {\varvec{x}}_i\right) g\left( {\varvec{x}}_i\right) \right] +o_P\left( 1\right) \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \eta _i+\left( 1-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }\right) g\left( {\varvec{x}}_i\right) \eta _{i}^{*}+C_nG\left( {\varvec{x}}_i\right) g\left( {\varvec{x}}_i\right) \right] \\&\quad +\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \left( \frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},{\hat{\gamma }}\right) }-\frac{\delta _{i}}{{\hat{\pi }}\left( {\varvec{v}}_{i},\gamma \right) }\right) g\left( {\varvec{x}}_i\right) \left( y_i-m\left( {\varvec{x}}_{i},\gamma \right) \right) \right] +o_P\left( 1\right) \\&:=D_1+D_2+o_P\left( 1\right) , \end{aligned}$$

where the second equality follows from \({\hat{m}}(\cdot )=m(\cdot )+o_P(1)\) and \(\eta ^{*}=E(\eta |{\varvec{x}})\). In the same way as for \(B_1\), we can conclude that

$$\begin{aligned} D_1=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi ({\varvec{v}}_{i},\gamma )}g({\varvec{x}}_{i})\eta _i+\left( 1-\frac{\delta _i}{\pi ({\varvec{v}}_i,\gamma )}\right) g({\varvec{x}}_i)\eta _{i}^{*}\right] +\sqrt{n}C_nE(g({\varvec{X}})G({\varvec{X}}))+o_P(1) \end{aligned}$$

and in a similar way to obtaining \(B_2\), we can write

$$\begin{aligned} D_2&=\frac{1}{n}\sum _{i=1}^{n}\left[ \delta _i\left( \frac{1}{{\hat{\pi }}({\varvec{v}}_i,\gamma )}-1\right) g({\varvec{x}}_i)(y_i-m({\varvec{x}}_{i},\gamma ))(y_i-m({\varvec{x}}_{i},\gamma ))\right] \sqrt{n}({\hat{\gamma }}-\gamma )+o_P(1)\nonumber \\&=\frac{1}{n}\sum _{i=1}^{n}\left[ \delta _i\left( \frac{1}{{\hat{\pi }}({\varvec{v}}_i,\gamma )}-1\right) g({\varvec{x}}_i)(\eta _i-\eta _{i}^{*})^2\right] \sqrt{n}({\hat{\gamma }}-\gamma )+o_P(1). \end{aligned}$$

Therefore, by the SLLN we have:

$$\begin{aligned} D_2=H\sqrt{n}({\hat{\gamma }}-\gamma )+o_P(1). \end{aligned}$$

Now, by using Eqs. (44) and (46), we can write,

$$\begin{aligned} D= & {} \frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi ({\varvec{v}}_{i},\gamma )}g({\varvec{x}}_{i})\eta _i+\left( 1-\frac{\delta _{i}}{\pi ({\varvec{v}}_{i},\gamma )}\right) g({\varvec{x}}_{i})\eta _{i}^{*}\right] +H\sqrt{n}({\hat{\gamma }}-\gamma ) \nonumber \\&+\sqrt{n}C_nE(g({\varvec{X}})G({\varvec{X}}))+o_P(1). \end{aligned}$$

Moreover, by using Lemma 1, we can conclude:

$$\begin{aligned} D&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi ({\varvec{v}}_{i},\gamma )}g({\varvec{x}}_{i})\eta _i+\left( 1-\frac{\delta _{i}}{\pi ({\varvec{v}}_{i},\gamma )}\right) g({\varvec{x}}_{i})\eta _{i}^{*}\right] \nonumber \\&\quad +HM^{-1}\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ r_i(1-\delta _i)-\delta _iE(r|\delta =0)\left( \frac{1}{\pi ({\varvec{v}}_i,\gamma )}-1\right) (y_i-m({\varvec{x}}_{i},\gamma ))\right] \nonumber \\&\quad +\sqrt{n}C_nE(g({\varvec{X}})G({\varvec{X}}))+o_P(1) \end{aligned}$$

Finally, substituting Eqs. (43) and (48) into Eq. (42) establishes part (d) of Lemma 2. Part (b) of Lemma 2 then follows by taking \(C_n=0\) in part (d).

Proof of Theorem 1

We prove Theorem 1 only under the alternative hypothesis; the result under the null hypothesis follows by taking \(C_n=0\).

Asymptotic properties of the test function based on the IPW method

We can rewrite \(T_{{\mathrm{IPW}}}\) as follows:

$$\begin{aligned} T_{{\mathrm{IPW}}}&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }\left( y_i-g^{\mathrm{T}}\left( {\varvec{x}}_i\right) {\hat{\beta }}_{{\mathrm{IPW}}}\right) \right] \nonumber \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }\left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \nonumber \\&\quad -\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \left( {\hat{\beta }}_{{\mathrm{IPW}}}-\beta \right) \right] \nonumber \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }\left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \nonumber \\&\quad -\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \left( {\hat{\beta }}_{{\mathrm{IPW}}}-\beta \right) \right] \nonumber \\&\quad +\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \left( \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }\right) \left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] \nonumber \\&\quad -\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \left( \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }\right) g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \left( {\hat{\beta }}_{{\mathrm{IPW}}}-\beta \right) \right] \nonumber \\&:=T_{{\mathrm{IPW}},1}+T_{{\mathrm{IPW}},2}+T_{{\mathrm{IPW}},3}+T_{{\mathrm{IPW}},4}. \end{aligned}$$
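The leading expression for \(T_{{\mathrm{IPW}}}\), before this expansion, is simple to compute in practice. The sketch below is illustrative only (hypothetical helper name; the fitted \({\hat{\beta }}_{{\mathrm{IPW}}}\) and \({\hat{\pi }}\) values are taken as inputs):

```python
import numpy as np

def t_ipw(y, G, delta, pi_hat, beta_hat):
    """Score-type statistic n^{-1/2} sum_i delta_i/pi_i * (y_i - g(x_i)^T beta).

    Only observed residuals enter the sum; each is reweighted by 1/pi_i
    so that it stands in for the full sample.
    """
    obs = delta == 1
    resid = y[obs] - G[obs] @ beta_hat
    return np.sum(resid / pi_hat[obs]) / np.sqrt(len(y))
```

With complete data, an intercept among the columns of g, and `beta_hat` the least-squares fit, the residuals sum to zero and the statistic vanishes, consistent with its centering under the null hypothesis.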

In a similar way to obtaining \(B_1\), we can prove that

$$\begin{aligned} T_{{\mathrm{IPW}},1}&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }\left( \eta _i+C_nG\left( {\varvec{x}}_i\right) \right) \right] +o_P\left( 1\right) \nonumber \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }\eta _i\right] +\sqrt{n}C_nE\left( G\left( {\varvec{X}}\right) \right) +o_P\left( 1\right) , \end{aligned}$$

and in a similar way to obtaining \(B_{12}\), we can prove that \(T_{{\mathrm{IPW}},3}=o_P(1)\) and \(T_{{\mathrm{IPW}},4}=o_P(1)\). Also, for \(T_{{\mathrm{IPW}},2}\), we have:

$$\begin{aligned} T_{{\mathrm{IPW}},2}&=-\frac{1}{n}\sum _{i=1}^{n}\left[ \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \right] \sqrt{n}\left( {\hat{\beta }}_{{\mathrm{IPW}}}-\beta \right) \nonumber \\&=-\frac{1}{n}\sum _{i=1}^{n}\left[ \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \right] \sqrt{n}\left( {\hat{\beta }}_{{\mathrm{IPW}}}-\beta \right) \nonumber \\&\quad +\frac{1}{n}\sum _{i=1}^{n}\left[ \left( \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }\right) g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \right] \sqrt{n}\left( {\hat{\beta }}_{{\mathrm{IPW}}}-\beta \right) \nonumber \\&=-\frac{1}{n}\sum _{i=1}^{n}\left[ \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \right] \sqrt{n}\left( {\hat{\beta }}_{{\mathrm{IPW}}}-\beta \right) +o_P\left( 1\right) , \end{aligned}$$

where the last equality follows from the fact that \({\hat{\pi }}(\cdot )=\pi (\cdot )+o_P(1)\) and \(E(g(\cdot ))<\infty\). By using part (c) of Lemma 2:

$$\begin{aligned} T_{{\mathrm{IPW}},2}&=-\frac{1}{n}\sum _{i=1}^{n}\left[ \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \right] \Sigma ^{-1}\left\{ \frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi \left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \eta _{i}\right] \right. \nonumber \\&\quad \left. +HM^{-1}\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ r_i\left( 1-\delta _i\right) -\delta _iE\left( r|\delta =0\right) \left( \frac{1}{\pi \left( {\varvec{v}}_i,\gamma \right) }-1\right) \left( y_i-m\left( {\varvec{x}}_{i},\gamma \right) \right) \right] \right\} \nonumber \\&\quad -\sqrt{n}C_nE\left( g^{\mathrm{T}}\left( X\right) \right) \Sigma ^{-1}E\left( g\left( {\varvec{X}}\right) G\left( {\varvec{X}}\right) \right) +o_{p}\left( 1\right) . \end{aligned}$$

Therefore, we will have:

$$\begin{aligned} T_{{\mathrm{IPW}}}&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }\eta _i\nonumber \\&\quad -\frac{1}{n}\sum _{i=1}^{n}\left[ \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \right] \Sigma ^{-1}\left\{ \frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi \left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \eta _{i}\right] \right. \nonumber \\&\quad \left. +HM^{-1}\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ r_i\left( 1-\delta _i\right) -\delta _iE\left( r|\delta =0\right) \left( \frac{1}{\pi \left( {\varvec{v}}_i,\gamma \right) }-1\right) \left( y_i-m\left( {\varvec{x}}_{i},\gamma \right) \right) \right] \right\} \nonumber \\&\quad +\sqrt{n}C_nE\left( G\left( {\varvec{X}}\right) \right) -\sqrt{n}C_nE\left( g^{\mathrm{T}}\left( {\varvec{X}}\right) \right) \Sigma ^{-1}E\left( g\left( {\varvec{X}}\right) G\left( {\varvec{X}}\right) \right) +o_{p}\left( 1\right) . \end{aligned}$$

Finally, the asymptotic properties of the test function based on the IPW method are obtained from the above equation by applying the CLT.

Asymptotic properties of the test function based on the augmented method

We can rewrite \(T_{{\mathrm{A}}}\) as follows:

$$\begin{aligned} T_{{\mathrm{A}}}&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }\left( y_i-g^{\mathrm{T}}\left( {\varvec{x}}_i\right) {\hat{\beta }}_{{\mathrm{A}}}\right) +\left( 1-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }\right) \left( {\hat{m}}\left( {\varvec{x}}_i,\gamma \right) -g^{\mathrm{T}}\left( {\varvec{x}}_i\right) {\hat{\beta }}_{{\mathrm{A}}}\right) \right] \nonumber \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }\eta _i+\left( 1-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }\right) \eta _{i}^{*}+C_nG\left( {\varvec{x}}_i\right) \right] \nonumber \\&\quad -\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \left( {\hat{\beta }}_{{\mathrm{A}}}-\beta \right) \right] +o_P\left( 1\right) \nonumber \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }\eta _i+\left( 1-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }\right) \eta _{i}^{*}+C_nG\left( {\varvec{x}}_i\right) \right] \nonumber \\&\quad -\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \left( {\hat{\beta }}_{{\mathrm{A}}}-\beta \right) \right] \nonumber \\&\quad +\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \left( \frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,{\hat{\gamma }}\right) }-\frac{\delta _i}{{\hat{\pi }}\left( {\varvec{v}}_i,\gamma \right) }\right) \left( y_i-m\left( {\varvec{x}}_{i},\gamma \right) \right) \right] +o_P\left( 1\right) \nonumber \\ :&=T_{A,1}+T_{A,2}+T_{A,3}. \end{aligned}$$

In a similar way to obtaining \(B_1\), we can prove that

$$\begin{aligned} T_{A,1}&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }\eta _i+\left( 1-\frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }\right) \eta _{i}^{*}+C_nG\left( {\varvec{x}}_i\right) \right] +o_P\left( 1\right) \nonumber \\&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }\eta _i+\left( 1-\frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }\right) \eta _{i}^{*}\right] +\sqrt{n}C_nE\left( G\left( {\varvec{X}}\right) \right) +o_P\left( 1\right) , \end{aligned}$$

and in a similar way to obtaining \(B_{12}\), we can prove that \(T_{A,3}=o_P\left( 1\right)\). Also, for \(T_{A,2}\), based on the SLLN and part (d) of Lemma 2, we have:

$$\begin{aligned} T_{A,2}&=-\frac{1}{n}\sum _{i=1}^{n}\left[ g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \right] \Sigma ^{-1}\left\{ \frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi \left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \eta _{i}\right] \right. \nonumber \\&\quad \left. +HM^{-1}\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ r_i\left( 1-\delta _i\right) -\delta _iE\left( r|\delta =0\right) \left( \frac{1}{\pi \left( {\varvec{v}}_i,\gamma \right) }-1\right) \left( y_i-m\left( {\varvec{x}}_{i},\gamma \right) \right) \right] \right\} \nonumber \\&\quad -\sqrt{n}C_nE\left( g^{\mathrm{T}}\left( {\varvec{X}}\right) \right) \Sigma ^{-1}E\left( g\left( {\varvec{X}}\right) G\left( {\varvec{X}}\right) \right) +o_{p}\left( 1\right) . \end{aligned}$$


Therefore,

$$\begin{aligned} T_{{\mathrm{A}}}&=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }\eta _i+\left( 1-\frac{\delta _i}{\pi \left( {\varvec{v}}_i,\gamma \right) }\right) \eta _{i}^{*}\right] \nonumber \\&\quad -\frac{1}{n}\sum _{i=1}^{n}\left[ g^{\mathrm{T}}\left( {\varvec{x}}_i\right) \right] \Sigma ^{-1}\left\{ \frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ \frac{\delta _{i}}{\pi \left( {\varvec{v}}_{i},\gamma \right) }g\left( {\varvec{x}}_{i}\right) \eta _{i}+\left( 1-\frac{\delta _{i}}{\pi \left( {\varvec{v}}_i,\gamma \right) }\right) g\left( {\varvec{x}}_i\right) \eta _{i}^{*}\right] \right. \nonumber \\&\quad \left. +HM^{-1}\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left[ r_i\left( 1-\delta _i\right) -\delta _iE\left( r|\delta =0\right) \left( \frac{1}{\pi \left( {\varvec{v}}_i,\gamma \right) }-1\right) \left( y_i-m\left( {\varvec{x}}_{i},\gamma \right) \right) \right] \right\} \nonumber \\&\quad +\sqrt{n}C_nE\left( G\left( {\varvec{X}}\right) \right) -\sqrt{n}C_nE\left( g^{\mathrm{T}}\left( {\varvec{X}}\right) \right) \Sigma ^{-1}E\left( g\left( {\varvec{X}}\right) G\left( {\varvec{X}}\right) \right) +o_{p}\left( 1\right) . \end{aligned}$$

Finally, part (d) of Theorem 1 follows from the above equation by applying the CLT. Part (b) of Theorem 1 can then be obtained by taking \(C_n=0\) in the last expression.


Cite this article

Bahari, F., Parsi, S. & Ganjali, M. Goodness of fit test for general linear model with nonignorable missing on response variable. AStA Adv Stat Anal 105, 163–196 (2021).



Keywords

  • General linear model
  • Missing data
  • Goodness of fit test
  • Score-type test
  • Tilting parameter