

Empirical likelihood of conditional quantile difference with left-truncated and dependent data

Abstract

In this paper, we apply the smoothed and maximum empirical likelihood (EL) methods to construct confidence intervals for the conditional quantile difference with left-truncated data. In particular, we prove that the smoothed empirical log-likelihood ratio of the conditional quantile difference is asymptotically chi-squared when the observations, with multivariate covariates, form a stationary \(\alpha\)-mixing sequence. We also establish the asymptotic normality of the maximum EL estimator of the conditional quantile difference. A simulation study investigates the finite-sample behavior of the proposed methods, and a real data application is provided.


Fig. 1
Fig. 2

References

  1. Cai, J., & Kim, J. (2003). Nonparametric quantile estimation with correlated failure time data. Lifetime Data Analysis, 9, 357–371.

  2. Cai, T., Wei, L. J., & Wilcox, M. (2000). Semiparametric regression analysis for clustered failure time data. Biometrika, 87(4), 867–878.

  3. Chaudhuri, P., Doksum, K., & Samarov, A. (1997). On average derivative quantile regression. The Annals of Statistics, 25, 715–744.

  4. Doukhan, P. (1994). Mixing: Properties and examples. Lecture Notes in Statistics (Vol. 85). New York: Springer.

  5. Fan, J., Hu, T. C., & Truong, Y. K. (1994). Robust non-parametric function estimation. Scandinavian Journal of Statistics, 21(4), 433–446.

  6. Hall, P., & Heyde, C. C. (1980). Martingale limit theory and its application. New York: Academic Press.

  7. Karlsson, M., & Lindmark, A. (2014). truncSP: An R package for estimation of semi-parametric truncated linear regression models. Journal of Statistical Software, 57(14), 1–19.

  8. Liang, H. Y., & de Uña-Álvarez, J. (2012). Empirical likelihood for conditional quantile with left-truncated and dependent data. Annals of the Institute of Statistical Mathematics, 64(4), 765–790.

  9. Liang, H. Y., de Uña-Álvarez, J., & Iglesias-Pérez, C. (2011). Local polynomial estimation of a conditional mean function with dependent truncated data. Test, 20(3), 653–677.

  10. Liebscher, E. (1996). Strong convergence of sums of \(\alpha\)-mixing random variables with applications to density estimation. Stochastic Processes and Their Applications, 65(1), 69–80.

  11. Liebscher, E. (2001). Estimation of the density and the regression function under mixing conditions. Statistics & Decisions, 19(1), 9–26.

  12. Mehra, K. L., Rao, M. S., & Upadrasta, S. P. (1991). A smooth conditional quantile estimator and related applications of conditional empirical processes. Journal of Multivariate Analysis, 37(2), 151–179.

  13. Molanes-Lopez, E. M., Cao, R., & Van Keilegom, I. (2010). Smoothed empirical likelihood confidence intervals for the relative distribution with left-truncated and right-censored data. Canadian Journal of Statistics, 38(3), 453–473.

  14. Owen, A. B. (1988). Empirical likelihood ratio confidence intervals for a single functional. Biometrika, 75, 237–249.

  15. Reiss, R. D. (1989). Approximate distributions of order statistics. New York: Springer.

  16. Shen, J., & He, S. (2007). Empirical likelihood for the difference of quantiles under censorship. Statistical Papers, 48(3), 437–457.

  17. Veraverbeke, N. (2001). Estimation of the quantiles of the duration of old age. Journal of Statistical Planning and Inference, 98(1–2), 101–106.

  18. Volkonskii, V. A., & Rozanov, Y. A. (1959). Some limit theorems for random functions. Theory of Probability & Its Applications, 4(2), 178–197.

  19. Xiang, X. (1996). A kernel estimator of a conditional quantile. Journal of Multivariate Analysis, 59(2), 206–216.

  20. Xun, L., & Zhou, Y. (2017). Estimators and their asymptotic properties for quantile difference with left truncated and right censored data. Acta Mathematica Sinica, 60(3), 451–464. (in Chinese).

  21. Yang, H., Yau, C., & Zhao, Y. (2014). Smoothed empirical likelihood inference for the difference of two quantiles with right censoring. Journal of Statistical Planning and Inference, 146, 95–101.

  22. Yang, H., & Zhao, Y. (2018). Smoothed jackknife empirical likelihood for the one-sample difference of quantiles. Computational Statistics & Data Analysis, 120, 58–69.

  23. Zhou, W., & Jing, B. Y. (2003). Smoothed empirical likelihood confidence intervals for the difference of quantiles. Statistica Sinica, 13(1), 83–95.


Acknowledgements

This research was supported by the National Natural Science Foundation of China (11671299).

Author information

Correspondence to Han-Ying Liang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


Some preliminary lemmas

In this subsection, we give some preliminary lemmas, which are used to prove the main results. Let \(\{Z_i, i\ge 1\}\) be a sequence of \(\alpha\)-mixing real random variables with the mixing coefficients \(\{\alpha (k)\}\).

Lemma 6.1.1

[Liebscher (1996), Lemma 2.3] Assume \(\alpha (k)\le C_1k^{-r}\), for some \(r>1\), \(C_1>0\). Let \(\sup _{1\le i,j\le n,i\ne j}\left| Cov(Z_i,Z_j)\right| :=R^*(n)<+\infty\) be satisfied. Moreover, let \(R_m(n)<+\infty\) for some m, \(2r/(r-1)<m\le +\infty\), where \(R_m(n)=\sup _{1\le i\le n}\left( E|Z_i|^m\right) ^{1/m}\), for \(1\le m<+\infty\), and \(R_{\infty }(n)=\sup _{1\le i\le n}\mathrm{esssup}_{w\in \Omega }|Z_i|\) (esssup stands for the essential supremum). Then \(\mathrm{Var}\big (\sum \nolimits _{i=1}^nZ_i\big )\le n\left\{ C_2(r,m)(R_m(n))^{2m/(r(m-2))}(R^*(n))^{1-m/(r(m-2))}+R_2^2(n)\right\}\) holds with \(C_2(r,m):=\frac{20r-40r/m}{r-1-2r/m}C_1^{1/r}\).
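In the proofs below (e.g. the bound for \(D_m\) in Lemma 6.2.2) this lemma is applied with \(m=+\infty\), where the bound simplifies: since \(2m/(r(m-2))\rightarrow 2/r\), \(1-m/(r(m-2))\rightarrow 1-1/r\) and \(C_2(r,m)\rightarrow \frac{20r}{r-1}C_1^{1/r}\) as \(m\rightarrow +\infty\), bounded variables satisfy

$$\begin{aligned} \mathrm{Var}\Big (\sum \limits _{i=1}^nZ_i\Big )\le n\Big \{\tfrac{20r}{r-1}C_1^{1/r}(R_{\infty }(n))^{2/r}(R^*(n))^{1-1/r}+R_2^2(n)\Big \}. \end{aligned}$$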

Lemma 6.1.2

[Liebscher (2001), Proposition 5.1] Assume \(EZ_i=0\), \(|Z_i|\le S\,a.s.\,(i=1,2,\dots ,n)\). Set \(D_N=\max _{1\le j\le 2N}Var\big (\sum \nolimits _{i=1}^jZ_i\big )\). Then for n, N, \(0<N<\frac{n}{2}\), \(\varepsilon >0\), we have \(P\big (\big |\sum \nolimits _{i=1}^nZ_i\big |>\varepsilon \big )\le 4 \exp \big \{-\frac{\varepsilon ^2}{16}(nN^{-1} D_N+\frac{1}{3}\varepsilon S N)^{-1}\big \}+32\frac{S}{\varepsilon } n\alpha (N).\)

Lemma 6.1.3

[Volkonskii and Rozanov (1959)] Let \(V_1,\ldots , V_m\) be \(\alpha\)-mixing random variables measurable with respect to the \(\sigma\)-algebra \(\mathcal{F}^{j_1}_{i_1}, \ldots , {{\mathcal {F}}}^{j_m}_{i_m}\), respectively, with \(1\le i_1<j_1<\cdots <j_m\le n,\, i_{l+1}-j_l\ge w\ge 1\) and \(|V_j|\le 1\) for \(l, j=1, 2, \ldots , m\). Then \(|E(\prod ^m_{j=1}V_j)-\prod ^m_{j=1}EV_j|\le 16(m-1)\alpha (w),\) where \({{\mathcal {F}}}^b_a=\sigma \{V_i, a\le i\le b\}\) and \(\alpha (w)\) is the mixing coefficient.

Lemma 6.1.4

[Hall and Heyde (1980), Corollary A.2] Suppose that X and Y are random variables such that \(E|X|^p<\infty\), \(E|Y|^q<\infty\), where p, \(q>1\), \(p^{-1}+q^{-1}<1\), then \(|EXY-EXEY|\le 8\Vert X\Vert _p\Vert Y\Vert _q\big \{\sup \limits _{A\in \sigma (X),B\in \sigma (Y)}|P(A\cap B)-P(A)P(B)|\big \}^{1-p^{-1}-q^{-1}}.\)

Lemma 6.1.5

Suppose that \(\alpha (n)=O(n^{-\gamma })\) for some \(\gamma >3\). Then under (A1) we have \(\sup _{y\ge a_{{\widetilde{F}}}}|G_n(y)-G(y)|=O_p(n^{-1/2})\), \(\sup _{y\ge a_{{\widetilde{F}}}}|G_n(y)-G(y)|=O_{a.s.}((\ln \ln n/n)^{1/2})\).

Proof

The first conclusion follows from Lemma 5.4 in Liang et al. (2011); the second claim can be proved by a similar method. \(\square\)
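In this setting \(G_n\) is the product-limit (Lynden-Bell) estimator of the truncation distribution \(G\), computed from the observed (non-truncated) pairs. A minimal numerical sketch, assuming observed pairs \((T_i,Y_i)\) with \(T_i\le Y_i\) and no ties; the function name `lynden_bell_G` and the simulation setup are illustrative, not from the paper:

```python
import numpy as np

def lynden_bell_G(t, y, grid):
    """Product-limit (Lynden-Bell) estimator of the truncation
    distribution G from observed pairs (t[i], y[i]) with t[i] <= y[i]:
    G_n(z) = prod_{i: t_i > z} (1 - 1/(n * C_n(t_i))),
    where C_n(z) = n^{-1} #{i : t_i <= z <= y_i}."""
    n = len(t)
    C_at_t = np.array([np.mean((t <= ti) & (ti <= y)) for ti in t])
    factors = 1.0 - 1.0 / (n * C_at_t)
    return np.array([np.prod(factors[t > z]) for z in grid])

rng = np.random.default_rng(0)
T = rng.uniform(0.0, 1.0, 1500)      # truncation variable
Y = rng.uniform(0.0, 2.0, 1500)      # lifetime variable
keep = T <= Y                        # left truncation: observe only T <= Y
t, y = T[keep], Y[keep]

grid = np.linspace(0.05, 0.95, 10)
G_hat = lynden_bell_G(t, y, grid)
# G_n behaves like a distribution function on the observable range
assert np.all((G_hat >= 0.0) & (G_hat <= 1.0))
assert np.all(np.diff(G_hat) >= -1e-12)
```

Near the lower boundary of the support the risk-set size \(C_n\) can be tiny and the estimator unstable, which is one reason the lemma restricts the supremum to \(y\ge a_{{\widetilde{F}}}\).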

Some lemmas and proofs

In this subsection, let \(\delta _n=(nh_n^d)^{-\tau }\) with \(1/3<\tau <1/2\).

Lemma 6.2.1

Let (A1)–(A5) and (A7) hold. Then for each \(\xi\) satisfying \(|\xi -\xi _p|\le \delta _n\), we have

$$\begin{aligned}&\frac{\mu }{h_n^d}EW_{1i}(\xi )=l(\mathbf{x})[F(\xi |\mathbf{x})-F(\xi _p|\mathbf{x})]+O(h_n^{r_0}+a_n^{r_1}),\\&\frac{\mu }{h_n^d}EW_{2i}(\xi ,\theta _0)=l(\mathbf{x})[F(\xi +\theta _0|\mathbf{x})-F(\xi _p+\theta _0|\mathbf{x})]+O(h_n^{r_0}+a_n^{r_1}),\\&\frac{\mu }{h_n^d}\mathrm{Var}\left( W_{1i}(\xi )\right) =\sigma _{11}+O(h_n+a_n),\,\frac{\mu }{h_n^d}\mathrm{Var}\left( W_{2i}(\xi ,\theta _0)\right) =\sigma _{22}+O(h_n+a_n) \end{aligned}$$

and \(\frac{\mu }{h_n^d}\mathrm{Cov}\left( W_{1i}(\xi ),W_{2i}(\xi ,\theta _0)\right) =\sigma _{12}+O(h_n+a_n)\).

Proof

We prove only the results related to \(W_{1i}(\xi )\); the proofs of the other results are similar.

  1. (a)

    In view of (A2)–(A5), from (2.1), it follows that

    $$\begin{aligned}&\frac{\mu }{h_n^d}EW_{1i}(\xi )\\&\quad =\frac{\mu }{h_n^d}E\Big \{K\Big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\Big )G^{-1}(Y_i)\Big [\Lambda \Big (\frac{\xi -Y_i}{a_n}\Big )-p\Big ]\Big \}\\&\quad =\int _{{\mathbb {R}}^d}K(\mathbf{s})l(\mathbf{x}-h_n\mathbf{s})F(\xi |\mathbf{x}-h_n\mathbf{s})d\mathbf{s}-F(\xi _p|\mathbf{x})\int _{{\mathbb {R}}^d}K(\mathbf{s})l(\mathbf{x}-h_n\mathbf{s})d\mathbf{s}+O(a_n^{r_1})\\&\quad =l(\mathbf{x})[F(\xi |\mathbf{x})-F(\xi _p|\mathbf{x})]+O(h_n^{r_0}+a_n^{r_1}). \end{aligned}$$
  2. (b)

    Applying (A1)–(A5) and \(\delta _n/a_n\rightarrow 0\), from (2.1) we have

    $$\begin{aligned}\frac{\mu }{h_n^d}EW_{1i}^2(\xi )&=\frac{\mu }{h_n^d}E\Big \{K^2\Big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\Big )G^{-2}(Y_i)\Big [\Lambda \Big (\frac{\xi -Y_i}{a_n}\Big )-p\Big ]^2\Big \}\\&=\left\{ (1-2p){\mathbb {E}}\left[ G^{-1}(Y)I(Y\le \xi _p)|\mathbf{X}=\mathbf{x}\right] \right. \\&\left. \quad +p^2{\mathbb {E}}[G^{-1}(Y)|\mathbf{X}=\mathbf{x}]\right\} l(\mathbf{x})\int _{{\mathbb {R}}^d}K^2(\mathbf{s})d\mathbf{s}+O(h_n+a_n)\\&=\sigma _{11}+O(h_n+a_n). \end{aligned}$$

    Clearly, (a) gives \(\frac{\mu }{h_n^d}(EW_{1i}(\xi ))^2=\frac{h_n^d}{\mu }(\frac{\mu }{h_n^d}EW_{1i}(\xi ))^2=o(h_n+a_n)\), which, together with (b), yields that \(\frac{\mu }{h_n^d}\mathrm{Var}(W_{1i}(\xi ))=\sigma _{11}+O(h_n+a_n).\)

\(\square\)
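In the notation of the proof above, \(W_{1i}(\xi )=K\big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\big )G^{-1}(Y_i)\big [\Lambda \big (\frac{\xi -Y_i}{a_n}\big )-p\big ]\), a kernel weight in the covariate times a truncation-corrected smoothed indicator. A minimal numerical sketch with \(d=1\), Gaussian \(K\), \(\Lambda\) the standard normal distribution function, and \(G\equiv 1\) (no truncation) for simplicity; all names and the data-generating model are illustrative:

```python
import numpy as np
from scipy.stats import norm

def W1_mean(xi, x, X, Y, G, h, a, p):
    """Average of the smoothed estimating functions W_{1i}(xi) / h:
    kernel weight K((x - X_i)/h), inverse-G truncation correction,
    and smoothed indicator Lambda((xi - Y_i)/a) - p."""
    K = norm.pdf((x - X) / h)        # d = 1 Gaussian kernel
    Lam = norm.cdf((xi - Y) / a)     # Lambda = integral of the kernel w
    return np.mean(K * (Lam - p) / G(Y)) / h

rng = np.random.default_rng(1)
n = 20000
X = rng.uniform(0, 1, n)
Y = X + rng.standard_normal(n)       # conditional median of Y at x is x
G = lambda y: np.ones_like(y)        # no truncation in this toy example
x, h, a, p = 0.5, 0.2, 0.05, 0.5

grid = np.linspace(-1.0, 2.0, 31)
vals = np.array([W1_mean(xi, x, X, Y, G, h, a, p) for xi in grid])
# xi -> mean W1 is nondecreasing (Lambda is monotone) and changes sign
# near the true conditional median x = 0.5
assert np.all(np.diff(vals) >= -1e-12)
assert vals[0] < 0 < vals[-1]
```

The root of this function in \(\xi\) is the smoothed conditional quantile estimate, which is exactly why Lemma 6.2.1 expands \(EW_{1i}(\xi )\) around \(F(\xi _p|\mathbf{x})\).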

Lemma 6.2.2

Let \(\alpha (n)=O(n^{-\lambda })\) for some \(\lambda \ge 4\). If (A1)–(A7) hold, then uniformly for \(\xi\) satisfying \(|\xi -\xi _p|\le \delta _n\), we have

  1. (1)
    1. (i)

      \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{1ni}(\xi )=O_{a.s.}(\delta _n)\),  (ii) \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{1ni}(\xi _p)=o_{a.s.}(\delta _n)\),

    2. (iii)

      \(c_1\delta _n\le \big |\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{1ni}(\xi _p\pm \delta _n)\big |\le c_2\delta _n\,a.s.\);

  2. (2)
    1. (i)

      \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{2ni}(\xi ,\theta _0)=O_{a.s.}(\delta _n)\),  (ii) \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{2ni}(\xi _p,\theta _0)=o_{a.s.}(\delta _n)\),

    2. (iii)

      \(c_1\delta _n\le \big |\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{2ni}(\xi _p\pm \delta _n,\theta _0)\big |\le c_2\delta _n\,a.s.\);

  3. (3)
    1. (i)

      \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{1ni}^2(\xi )=\sigma _{11}+O_{a.s.}(h_n+a_n)\)

    2. (ii)

      \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{2ni}^2(\xi ,\theta _0)=\sigma _{22}+O_{a.s.}(h_n+a_n)\),

    3. (iii)

      \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{1ni}(\xi )W_{2ni}(\xi ,\theta _0)=\sigma _{12}+O_{a.s.}(h_n+a_n)\).

Proof

Here we prove only part (i) of (1) and (3); the other results can be proved similarly.

  1. (a)

    We write

    $$\begin{aligned}&\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}(\xi )\\&\quad =\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nK\Big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\Big )\Big [\Lambda \Big (\frac{\xi -Y_i}{a_n}\Big )-p\Big ]\frac{1}{G(Y_i)}\\&\qquad +\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nK\Big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\Big )\Big [\Lambda \Big (\frac{\xi -Y_i}{a_n}\Big )-p\Big ]\frac{G(Y_i)-G_n(Y_i)}{G(Y_i)G_n(Y_i)} :=A_n+B_n. \end{aligned}$$

    In view of Lemma 6.1.5, we find \(|B_n| =O_{a.s.}\Big (\sqrt{\frac{\ln \ln n}{n}}\Big )\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nK\big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\big )G^{-1}(Y_i).\) Using the Taylor expansion, we have

    $$\begin{aligned} A_n&=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nK\Big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\Big )G^{-1}(Y_i)\Big [\Lambda \Big (\frac{\xi _p-Y_i}{a_n}\Big )-p\Big ]\\&\quad +\frac{(\xi -\xi _p)\mu }{nh_n^da_n}\sum \limits _{i=1}^nK\Big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\Big )G^{-1}(Y_i)w\Big (\frac{\xi ^*-Y_i}{a_n}\Big ), \end{aligned}$$

    where \(\xi ^*\) is between \(\xi\) and \(\xi _p\). To prove (i) in (1), it suffices to show that

    $$\begin{aligned}&\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nK\Big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\Big )G^{-1}(Y_i)\Big [\Lambda \Big (\frac{\xi _p-Y_i}{a_n}\Big )-p\Big ]=o_{a.s.}(\delta _n), \end{aligned}$$
    (6.1)

    \(\frac{\mu }{nh_n^da_n}\sum \nolimits _{i=1}^nK\big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\big )G^{-1}(Y_i)w\big (\frac{\xi ^*-Y_i}{a_n}\big )\xrightarrow {a.s.} l(\mathbf{x})f(\xi _p|\mathbf{x})\) and \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nK\big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\big )G^{-1}(Y_i)\xrightarrow {a.s.} l(\mathbf{x}).\) Next we prove only (6.1); the proofs of the others are similar. Applying Lemma 6.2.1, we can verify \(\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nEK\big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\big )G^{-1}(Y_i)\big [\Lambda \big (\frac{\xi _p-Y_i}{a_n}\big )-p\big ]=O(h_n^{r_0}+a_n^{r_1})=o(\delta _n)\) by (A7). Hence, it is sufficient to show that

    $$\begin{aligned}&\frac{\mu }{nh_n^d}\sum \limits _{i=1}^n\Big [W_{1i}(\xi _p)-EW_{1i}(\xi _p)\Big ]=O\Bigg (\sqrt{\frac{\ln n}{nh_n^d}}\Bigg )\, a.s. \end{aligned}$$
    (6.2)

    Set \(\beta _i=W_{1i}(\xi _p)-EW_{1i}(\xi _p).\) It is easy to verify that for any \(1\le j\le 2m\le n\),

    $$\begin{aligned} R^*(j):=\max _{1\le i_1,i_2\le j,i_1\ne i_2}|\mathrm{Cov}(\beta _{i_1},\beta _{i_2})|=O(h_n^{2d}),\, R_2^2(j):=\max _{1\le i\le j}E\beta _i^2=O(h_n^d) \end{aligned}$$

    and \(R_{\infty }(j):=\max _{1\le i\le j}\mathrm{esssup}_{w\in \Omega }|\beta _i|\le C.\) Hence, by Lemma 6.1.1 (taking \(m=+\infty\)), we have \(D_m=\max _{1\le j\le 2m} \mathrm{Var}(\sum _{i=1}^{j}\beta _i)=O(mh_n^d).\) So, in view of Lemma 6.1.2 with \(N=m=[(nh_n^d/\ln n)^{1/2}]\), for sufficiently large \(\varepsilon _0>0\) it follows that

    $$\begin{aligned}&P\Big (\Big |\frac{\mu }{nh_n^d}\sum \limits _{i=1}^n\Big [W_{1i}(\xi _p)-EW_{1i}(\xi _p)\Big ]\Big |\\&\quad \ge \varepsilon _0 (\ln n/nh_n^d)^{1/2}\Big ) \le \frac{C}{n^2}+\frac{C}{h_n^d}(\ln n/nh_n^d)^{(\lambda -1)/2}, \end{aligned}$$

    which yields (6.2) by the Borel–Cantelli lemma under condition (A7). Therefore, (i) in (1) is proved.

  2. (b)

    Note that

    $$\begin{aligned}&\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}^2(\xi )\\&\quad =\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1i}^2(\xi )+\frac{2\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1i}^2(\xi )\frac{G(Y_i)-G_n(Y_i)}{G_n(Y_i)}\\&\qquad +\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1i}^2(\xi )\left( \frac{G(Y_i)-G_n(Y_i)}{G_n(Y_i)}\right) ^2 :=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1i}^2(\xi )+I_1+I_2. \end{aligned}$$

    Moreover,

    $$\begin{aligned}\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1i}^2(\xi )&=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1i}^2(\xi _p)+\frac{2(\xi -\xi _p)\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1i}(\xi ^{**})W'_{1i}(\xi ^{**})\\& =\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1i}^2(\xi _p)+\frac{2(\xi -\xi _p)\mu }{nh_n^da_n}\sum _{i=1}^{n}K^2\Big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\Big )\\&\qquad G^{-2}(Y_i)\Big [\Lambda \Big (\frac{\xi ^{**}-Y_i}{a_n}\Big )-p\Big ]w\Big (\frac{\xi ^{**}-Y_i}{a_n}\Big ), \end{aligned}$$

    where \(\xi ^{**}\) is between \(\xi\) and \(\xi _p\). Arguing as in the proof of Lemma 6.2.1 and using (6.2), one can deduce \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{1i}^2(\xi _p)=\sigma _{11}+O_{a.s.}(h_n+a_n)\) and \(\frac{\mu }{nh_n^da_n}\sum _{i=1}^{n}K^2\big (\frac{\mathbf{x}-\mathbf{X}_i}{h_n}\big )G^{-2}(Y_i)\big [\Lambda \big (\frac{\xi ^{**}-Y_i}{a_n}\big )-p\big ]w\big (\frac{\xi ^{**}-Y_i}{a_n}\big )=O(1)\,a.s.\) Thus \(\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1i}^2(\xi )=\sigma _{11}+O_{a.s.}(a_n+h_n).\) On the other hand, by Lemma 6.1.5, \(I_1=O\Big (\sqrt{\frac{\ln \ln n}{n}}\Big )=o(a_n)\,a.s.\) and \(I_2=o(a_n)\,a.s.\) Therefore, (i) in (3) is proved.

\(\square\)

Lemma 6.2.3

Let \(\alpha (n)=O(n^{-\lambda })\) for some \(\lambda \ge 4\). Under (A1)–(A7) and (A9), we have \(\lambda _1(\xi ,\theta _0)=O_{a.s.}(\delta _n)\) and \(\lambda _2(\xi ,\theta _0)=O_{a.s.}(\delta _n)\) uniformly over \(\{\xi : |\xi -\xi _p|\le \delta _n\}\). Furthermore, \(\lambda _1(\xi _p,\theta _0)=o_{a.s.}(\delta _n)\), \(c_1\delta _n\le |\lambda _1(\xi _p\pm \delta _n,\theta _0)|\le c_2\delta _n\,a.s.\); \(\lambda _2(\xi _p,\theta _0)=o_{a.s.}(\delta _n)\), \(c_1\delta _n\le |\lambda _2(\xi _p\pm \delta _n,\theta _0)|\le c_2\delta _n\,a.s.\)

Proof

Write \(\lambda (\xi ,\theta _0)=\Vert \lambda (\xi ,\theta _0)\Vert \eta\), where \(\eta\) is a unit vector. When \(\theta =\theta _0\), \(\lambda (\xi ,\theta _0)\) is the solution of (2.5) and (2.6); then

$$\begin{aligned} 0&=\left| \eta ^{\mathrm T}\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\xi ,\theta _0)\left[ 1-\frac{\lambda ^{\mathrm T}(\xi ,\theta _0)W_{ni}(\xi ,\theta _0)}{1+\lambda ^{\mathrm T}(\xi ,\theta _0)W_{ni}(\xi ,\theta _0)}\right] \right| \\&\ge \frac{\Vert \lambda (\xi ,\theta _0)\Vert \big |\eta ^{\mathrm T}\big [\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\xi ,\theta _0)W^{\mathrm T}_{ni}(\xi ,\theta _0)\big ] \eta \big |}{1+\Vert \lambda (\xi ,\theta _0)\Vert \max \limits _{1\le i\le n}\left| \eta ^{\mathrm T}W_{ni}(\xi ,\theta _0)\right| }-\left| \eta ^{\mathrm T}\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\xi ,\theta _0)\right| . \end{aligned}$$

From (3) in Lemma 6.2.2 we find \(\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{ni}(\xi ,\theta _0)W^{\mathrm T}_{ni}(\xi ,\theta _0)\xrightarrow {a.s.} \Bigg (\begin{array}{cc} \sigma _{11}&{}\sigma _{12}\\ \sigma _{12}&{}\sigma _{22} \end{array} \Bigg ).\) (A9) implies that for sufficiently large n, \(\eta ^{\mathrm T}\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{ni}(\xi ,\theta _0) W_{ni}^{\mathrm T}(\xi ,\theta _0)\eta \ge c>0\). In addition, from \(\max _{1\le i\le n}|W_{1i}(\xi )|\le C\) and Lemma 6.1.5, we have \(\max _{1\le i\le n}|W_{1ni}(\xi )|\le C,\,a.s.\) and \(\max _{1\le i\le n}|W_{2ni}(\xi ,\theta _0)|\le C,\,a.s.,\) which yield that \(\max _{1\le i\le n}|\eta ^{\mathrm T}W_{ni}(\xi ,\theta _0)|\le C,\,a.s.\) Thus, from Lemma 6.2.2 we have

$$\begin{aligned} \frac{\Vert \lambda (\xi ,\theta _0)\Vert \eta ^{\mathrm T} \frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{ni}(\xi ,\theta _0)W^{\mathrm T}_{ni}(\xi ,\theta _0) \eta }{1+\Vert \lambda (\xi ,\theta _0)\Vert \max \limits _{1\le i\le n}\left| \eta ^{\mathrm T}W_{ni}(\xi ,\theta _0)\right| }=O_{a.s.}(\delta _n) \end{aligned}$$

and \(\Vert \lambda (\xi ,\theta _0)\Vert =O_{a.s.}(\delta _n)\). Therefore \(\lambda _1(\xi ,\theta _0)=O_{a.s.}(\delta _n)\), \(\lambda _2(\xi ,\theta _0)=O_{a.s.}(\delta _n).\)

Similarly, one can verify the results on \(\lambda _{1}(\xi _p,\theta _0)\), \(\lambda _{2}(\xi _p,\theta _0)\), \(\lambda _{1}(\xi _p\pm \delta _n,\theta _0)\) and \(\lambda _{2}(\xi _p\pm \delta _n,\theta _0)\). \(\square\)
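Numerically, the multiplier \(\lambda (\xi ,\theta _0)\) solving the score equations (2.5)–(2.6) is computed as in Owen's empirical likelihood: Newton's method on the concave dual, with step-halving to keep every weight \(1+\lambda ^{\mathrm T}W_{ni}\) positive. A generic sketch for an \(n\times 2\) array of terms; the function name and test data are illustrative:

```python
import numpy as np

def el_lambda(W, iters=50):
    """Solve 0 = (1/n) sum_i W_i / (1 + lam'W_i) for the empirical
    likelihood Lagrange multiplier lam (W has shape (n, k))."""
    n, k = W.shape
    lam = np.zeros(k)
    for _ in range(iters):
        denom = 1.0 + W @ lam
        score = np.mean(W / denom[:, None], axis=0)
        hess = -(W / denom[:, None] ** 2).T @ W / n
        step = np.linalg.solve(hess, -score)
        # halve the step until every EL weight 1 + lam'W_i stays positive
        while np.any(1.0 + W @ (lam + step) <= 1e-8):
            step *= 0.5
        lam = lam + step
    return lam

rng = np.random.default_rng(2)
W = rng.standard_normal((500, 2)) + 0.05   # generic terms, small mean shift
lam = el_lambda(W)
resid = np.mean(W / (1.0 + W @ lam)[:, None], axis=0)
assert np.max(np.abs(resid)) < 1e-8        # score equation solved
assert np.all(1.0 + W @ lam > 0.0)         # all EL weights positive
```

The positivity constraint is exactly the denominator condition appearing in the display above.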

Lemma 6.2.4

Let \(\alpha (n)=O(n^{-\lambda })\) with \(\lambda \ge 4\), and assume (A1)–(A7) and (A9) in Section 3 hold. Then there exists a point \(\widehat{\xi }\) in the interior of \(\{\xi :\,|\xi -\xi _p|\le \delta _n\}\) which maximizes \(R(\xi ,\theta _0)\) and satisfies Eqs. (2.7)–(2.9).

Proof

To prove this conclusion, it is equivalent to proving that there exists a point \(\widehat{\xi }\) in the interior of \(\{\xi :\,|\xi -\xi _p|\le \delta _n\}\), which minimizes \(l_{\theta _0}(\xi )\) and satisfies the Eqs. (2.7)–(2.9), where \(l_{\theta _0}(\xi )=\sum \nolimits _{i=1}^n \log \big [1+\lambda _1W_{1ni}(\xi )+\lambda _2W_{2ni}(\xi ,\theta _0)\big ]\).

Put \(\xi _1=\xi _p+\delta _n\). According to (2.5)–(2.6), \(\lambda (\xi _1,\theta _0)\) is the solution of \(\mathbf{0}=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^n\frac{W_{ni}(\xi _1,\theta _0)}{1+\lambda ^{\mathrm T}W_{ni}(\xi _1,\theta _0)}\), which can be rewritten as

$$\begin{aligned} \mathbf{0}&=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\xi _1,\theta _0)-\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\xi _1,\theta _0)W_{ni}^{\mathrm T}(\xi _1,\theta _0)\lambda \\&\quad +\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\xi _1,\theta _0)\frac{(\lambda ^{\mathrm T}W_{ni}(\xi _1,\theta _0))^2}{1+\lambda ^{\mathrm T}W_{ni}(\xi _1,\theta _0)}. \end{aligned}$$

Using Lemmas 6.2.3 and 6.1.5, from (A1), it follows that

$$\begin{aligned}&\Big \Vert \frac{\mu }{nh_n^d}\sum _{i=1}^nW_{ni}(\xi _1,\theta _0)\frac{(\lambda ^{\mathrm T}W_{ni}(\xi _1,\theta _0))^2}{1+\lambda ^{\mathrm T}W_{ni}(\xi _1,\theta _0)}\Big \Vert \\&\quad \le C\Vert \lambda \Vert ^2\frac{\mu }{nh_n^d}\sum _{i=1}^n\Vert W_{ni}(\xi _1,\theta _0)\Vert ^3=O_{a.s.}(\delta _n^2). \end{aligned}$$

Set \(S_n(\xi _1,\theta _0)=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\xi _1,\theta _0)W_{ni}^{\mathrm T}(\xi _1,\theta _0)\). Then \(\lambda (\xi _1,\theta _0)=S_n^{-1}(\xi _1,\theta _0)\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{ni}(\xi _1,\theta _0)+O_{a.s.}(\delta _n^2)\). Applying the Taylor expansion to \(l_{\theta _0}(\xi _1)\), we have

$$\begin{aligned}&l_{\theta _0}(\xi _1)\\&\quad =\frac{nh_n^d}{\mu }\Big (\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}(\xi _1),\frac{\mu }{nh_n^d} \sum \limits _{i=1}^nW_{2ni}(\xi _1,\theta _0)\Big )\lambda (\xi _1,\theta _0)\\&\qquad -\frac{1}{2} \sum \limits _{i=1}^n\left[ \left( W_{1ni}(\xi _1),W_{2ni}(\xi _1,\theta _0)\right) \lambda (\xi _1,\theta _0)\right] ^2\\&\qquad +\frac{1}{3}\sum \limits _{i=1}^n\left[ \lambda _1(\xi _1,\theta _0)W_{1ni}(\xi _1)+\lambda _2(\xi _1,\theta _0)W_{2ni}(\xi _1,\theta _0)\right] ^3/(1+\zeta _i)^3, \end{aligned}$$

where \(\zeta _i\) is between 0 and \(\lambda _1(\xi _1,\theta _0)W_{1ni}(\xi _1)+\lambda _2(\xi _1,\theta _0)W_{2ni}(\xi _1,\theta _0)\). Note that \(\sum \nolimits _{i=1}^n[\lambda _1(\xi _1,\theta _0)W_{1ni}(\xi _1)+\lambda _2(\xi _1,\theta _0)W_{2ni}(\xi _1,\theta _0)]^3/(1+\zeta _i)^3 = O_{a.s.}(nh_n^d\delta _n^3).\) Therefore, from Lemma 6.2.2, we can deduce

$$\begin{aligned} l_{\theta _0}(\xi _1)&=\frac{nh_n^d}{2\mu }\Big (\frac{\mu }{nh_n^d}\sum _{i=1}^nW_{1ni}(\xi _1),\frac{\mu }{nh_n^d} \sum _{i=1}^nW_{2ni}(\xi _1,\theta _0)\Big )S_n^{-1}(\xi _1,\theta _0)\\&\quad \times \left( \begin{array}{c} \frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}(\xi _1)\\ \frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{2ni}(\xi _1,\theta _0) \end{array}\right) +O\big (nh_n^d\delta _n^3\big )>cnh_n^d\delta _n^2. \end{aligned}$$

On applying Lemma 6.2.2 we have \(l_{\theta _0}(\xi _p)= o(nh_n^d\delta _n^2)\), so \(l_{\theta _0}(\xi _p+\delta _n)>l_{\theta _0}(\xi _p)\). Similarly, \(l_{\theta _0}(\xi _p-\delta _n)>l_{\theta _0}(\xi _p)\).

Since \(l_{\theta _0}(\xi )\) is a continuously differentiable function of \(\xi\), \(l_{\theta _0}(\xi )\) attains its minimum in \((\xi _p-\delta _n,\xi _p+\delta _n)\), say at \(\widehat{\xi }\), and \(\widehat{\xi }\), \(\widehat{\lambda }_1\), \(\widehat{\lambda }_2\) satisfy Eqs. (2.7)–(2.9). \(\square\)

Lemma 6.2.5

Let \(\alpha (n)=O(n^{-\lambda })\) with \(\lambda \ge 4\). If (A1)–(A7) and (A9) in Section 3 hold, then for sufficiently large n, \(l(\Theta )\) attains its minimum value at some point \({\widetilde{\Theta }}\) in the interior of \(\{\Theta :\,\Vert \Theta -\Theta _0\Vert \le \delta _n\}\) with probability 1, and \({\widetilde{\Theta }}\), \({\widetilde{\lambda }}=\lambda ({\widetilde{\Theta }})\) satisfy \(Q_{4n}({\widetilde{\Theta }},{\widetilde{\lambda }})=\mathbf{0}\) and \(Q_{5n}({\widetilde{\Theta }},{\widetilde{\lambda }})=\mathbf{0}\), where \(Q_{4n}\) and \(Q_{5n}\) are defined in Section 3.

Proof

In order to prove the results, it suffices to show that \(l(\Theta )> l(\Theta _0)\) for \(\Theta \in \{\Theta : \Vert \Theta -\Theta _0\Vert =\delta _n\}\).

Indeed, following the lines of the proofs of Lemmas 6.2.2 and 6.2.3, for \(\Theta =\Theta _0+u\delta _n\) with \(\Vert u\Vert =1\), we have \(c_1\delta _n\le \big \Vert \frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{ni}(\Theta )\big \Vert \le c_2\delta _n\,a.s.\), \(\big \Vert \frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{ni}(\Theta _0)\big \Vert =o_{a.s.}(\delta _n)\) and \(\lambda (\Theta )=O_{a.s.}(\delta _n).\)

In addition, it is easy to see that

$$\begin{aligned} \mathbf{0}&=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^n\frac{W_{ni}(\Theta )}{1+\lambda ^{\mathrm T}W_{ni}(\Theta )}\\&=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\Theta )-\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\Theta )W_{ni}^{\mathrm T}(\Theta )\lambda +O_{a.s.}(\delta _n^2). \end{aligned}$$

From the proof in (3) of Lemma 6.2.2, we have \(S_n(\Theta ):=\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{ni}(\Theta )W_{ni}^{\mathrm T}(\Theta )\xrightarrow {a.s.}\Bigg (\begin{array}{cc} \sigma _{11}&{}\sigma _{12}\\ \sigma _{12}&{}\sigma _{22} \end{array}\Bigg ).\) Thus, for sufficiently large n, we have \(\lambda =\lambda (\Theta )=S_n^{-1}(\Theta )\frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{ni}(\Theta )+O_{a.s.}(\delta _n^2).\)

By the Taylor expansion, it follows that \(l(\Theta )=\frac{nh_n^d}{2\mu }\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}^{\mathrm T}(\Theta )\cdot S_{n}^{-1}(\Theta )\cdot \frac{\mu }{nh_n^d}\sum \nolimits _{i=1}^nW_{ni}(\Theta )+O_{a.s.}(nh_n^d\delta _n^3) > cnh_n^d\delta _n^2.\) Then, from \(l(\Theta _0)=o_{a.s.}(nh_n^d\delta _n^2)\) we have \(l(\Theta )> l(\Theta _0)\). \(\square\)

Lemma 6.2.6

Let \(\alpha (n)=O(n^{-\lambda })\) for some \(\lambda \ge 4\). If (A1)-(A9) hold, then \(\sqrt{nh_n^d}\Bigg (\begin{array}{c}Q_{1n}(\xi _p,0,0)\\ Q_{2n}(\xi _p,0,0)\end{array}\Bigg )\xrightarrow {d}N(0,\Sigma ).\)

Proof

One can write

$$\begin{aligned}&\sqrt{nh_n^d}\Bigg (\begin{array}{c} Q_{1n}(\xi _p,0,0)\\ Q_{2n}(\xi _p,0,0) \end{array}\Bigg )\\&\quad =\frac{\mu }{\sqrt{nh_n^d}}\sum \limits _{i=1}^n[W_{i}(\xi _p,\theta _0)-EW_{i}(\xi _p,\theta _0)]+\frac{\mu }{\sqrt{nh_n^d}}\sum \limits _{i=1}^nEW_{i}(\xi _p,\theta _0)\\&\qquad +\frac{\mu }{\sqrt{nh_n^d}}\sum \limits _{i=1}^nW_{i}(\xi _p,\theta _0)\frac{G(Y_i)-G_n(Y_i)}{G_n(Y_i)} :=I_3+I_4+I_5. \end{aligned}$$

It is sufficient to show that \(I_3\xrightarrow {d}N(0,\Sigma )\), \(I_4=o(1)\) and \(I_5=o_p(1).\) Indeed, Lemma 6.2.1 gives \(\Vert I_4\Vert =O(\sqrt{nh_n^{d+2r_0}}+\sqrt{nh_n^da_n^{2r_1}})\rightarrow 0\) by (A7). In view of Lemma 6.1.5, we find

$$\begin{aligned} \Vert I_5\Vert\le & {} \frac{\mu }{\sqrt{nh_n^d}}\Big \Vert \sum \limits _{i=1}^n\Bigg (\begin{array}{c} K(\frac{\mathbf{x}-\mathbf{X}_i}{h_n})G^{-1}(Y_i)\\ K(\frac{\mathbf{x}-\mathbf{X}_i}{h_n})G^{-1}(Y_i) \end{array}\Bigg )\Big \Vert O_p(\frac{1}{\sqrt{n}})\\= & {} O_p\big ((nh_n^d)^{1/2}\big ) O_p(\frac{1}{\sqrt{n}})=o_p(1). \end{aligned}$$

To prove \(I_3\xrightarrow {d}N(0,\Sigma )\), it is sufficient to prove that for \(b=(b_1,b_2)^{\mathrm T}\ne 0\), \(\frac{\mu b^{\mathrm T}}{\sqrt{nh_n^d}}\sum \nolimits _{i=1}^n\big [W_i(\xi _p,\theta _0)-EW_i(\xi _p,\theta _0)\big ]\xrightarrow {d}N(0,b^{\mathrm T}\Sigma b),\) which can be proved by applying Bernstein’s big-block and small-block procedure. \(\square\)
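For completeness, a standard sketch of the big-block small-block procedure: choose block lengths \(p_n\rightarrow \infty\) and \(q_n\rightarrow \infty\) with \(q_n/p_n\rightarrow 0\), \(p_n/n\rightarrow 0\) and \(n\alpha (q_n)/p_n\rightarrow 0\), and split the normalized sum into big blocks of length \(p_n\), small blocks of length \(q_n\) and a remainder,

$$\begin{aligned} \frac{\mu b^{\mathrm T}}{\sqrt{nh_n^d}}\sum \limits _{i=1}^n\big [W_i(\xi _p,\theta _0)-EW_i(\xi _p,\theta _0)\big ]=T_{1n}+T_{2n}+T_{3n}. \end{aligned}$$

Lemma 6.1.1 shows the small blocks and the remainder are negligible, \(ET_{2n}^2+ET_{3n}^2\rightarrow 0\); Lemma 6.1.3 allows the big blocks in \(T_{1n}\) to be replaced by independent copies at a cost of order \(n\alpha (q_n)/p_n\rightarrow 0\); and the Lindeberg–Feller theorem then applies to the independent blocks since the \(W_i\) are bounded.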

Lemma 6.2.7

Let \(\alpha (n)=O(n^{-\lambda })\) for some \(\lambda \ge 4\). Suppose that (A1)-(A9) hold. Then, for \(\widehat{\xi }\), \(\widehat{\lambda }_1\) and \(\widehat{\lambda }_2\) given in Lemma 6.2.4, we have

$$\begin{aligned}&(nh_n^d)^{1/2}(\widehat{\xi }-\xi _p)\xrightarrow {d}N\left( 0,\frac{\mu (S_{11}^2\sigma _{11}+2S_{11}S_{12}\sigma _{12}+S_{12}^2\sigma _{22})}{\det ^2(S)}\right) ,\\&\widehat{\lambda }_1=-\frac{f(\xi _p+\theta _0|\mathbf{x})}{f(\xi _p|\mathbf{x})}\widehat{\lambda }_2+o_p((nh_n^d)^{-\frac{1}{2}}),\text { and }(nh_n^d)^{1/2}\widehat{\lambda }_2\xrightarrow d N\left( 0,\frac{\mu l^2(\mathbf{x})f^2(\xi _p|\mathbf{x})}{\det (S)}\right) , \end{aligned}$$

where \(S_{11}=-\sigma _{12}l(\mathbf{x})f(\xi _p+\theta _0|\mathbf{x})+\sigma _{22}l(\mathbf{x})f(\xi _p|\mathbf{x})\), \(S_{12}=\sigma _{11}l(\mathbf{x})f(\xi _p+\theta _0|\mathbf{x})-\sigma _{12}l(\mathbf{x})f(\xi _p|\mathbf{x})\) and

$$\begin{aligned} \det (S)=l^2(\mathbf{x})\left[ f^2(\xi _p|\mathbf{x})\sigma _{22}+f^2(\xi _p+\theta _0|\mathbf{x})\sigma _{11}-2f(\xi _p|\mathbf{x})f(\xi _p+\theta _0|\mathbf{x})\sigma _{12}\right] . \end{aligned}$$

Proof

Lemma 6.2.4 shows that \(\widehat{\xi }\), \(\widehat{\lambda }_1\) and \(\widehat{\lambda }_2\) satisfy Eqs. (2.7)–(2.9), so \(Q_{in}(\widehat{\xi },\widehat{\lambda }_1,\widehat{\lambda }_2)=0\) for \(i=1,2,3\). Applying the Taylor expansion to \(Q_{in}(\widehat{\xi },\widehat{\lambda }_1,\widehat{\lambda }_2)\) at the point \((\xi _p,0,0)\), one can deduce

$$\begin{aligned} \left( \begin{array}{c} 0\\ 0\\ 0 \end{array}\right) =\left( \begin{array}{c} Q_{1n}(\xi _p,0,0)\\ Q_{2n}(\xi _p,0,0)\\ 0 \end{array}\right) +{\widehat{S}}_n\left( \begin{array}{c} \widehat{\xi }-\xi _p\\ \widehat{\lambda }_1\\ \widehat{\lambda }_2 \end{array}\right) +o_p(R_n), \end{aligned}$$
(6.3)

where \({\widehat{S}}_n=\left( \begin{array}{ccc} \frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW'_{1ni}(\xi _p)&{}-\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}^2(\xi _p)&{}-\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}(\xi _p)W_{2ni}(\xi _p,\theta _0)\\ \frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW'_{2ni}(\xi _p,\theta _0)&{}-\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}(\xi _p)W_{2ni}(\xi _p,\theta _0)&{}-\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{2ni}^2(\xi _p,\theta _0)\\ 0&{}\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW'_{1ni}(\xi _p)&{}\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW'_{2ni}(\xi _p,\theta _0) \end{array}\right)\) and \(R_n=|\widehat{\xi }-\xi _p|+|\widehat{\lambda }_1|+|\widehat{\lambda }_2|\). Arguing as in the proof of Lemma 6.2.2, we can also get

$$\begin{aligned}&\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW'_{1ni}(\xi _p)\xrightarrow {a.s.} l(\mathbf{x})f(\xi _p|\mathbf{x})\,\,\text{ and }\\ &\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW'_{2ni}(\xi _p,\theta _0)\xrightarrow {a.s.} l(\mathbf{x})f(\xi _p+\theta _0|\mathbf{x}), \end{aligned}$$

which, together with Lemma 6.2.2, yield that

$$\begin{aligned} {\widehat{S}}_n=\left( \begin{array}{ccc} l(\mathbf{x})f(\xi _p|\mathbf{x})&{}-\sigma _{11}&{}-\sigma _{12}\\ l(\mathbf{x})f(\xi _p+\theta _0|\mathbf{x})&{}-\sigma _{12}&{}-\sigma _{22}\\ 0&{}l(\mathbf{x})f(\xi _p|\mathbf{x})&{}l(\mathbf{x})f(\xi _p+\theta _0|\mathbf{x}) \end{array}\right) +o_{a.s.}(1):=S+o_{a.s.}(1). \end{aligned}$$

Because \(\sigma _{11}\sigma _{22}-\sigma _{12}^2>0\), we have \(\det (S)=l^2(\mathbf{x})\left[ f^2(\xi _p|\mathbf{x})\sigma _{22}+f^2(\xi _p+\theta _0|\mathbf{x})\sigma _{11}-2f(\xi _p|\mathbf{x})f(\xi _p+\theta _0|\mathbf{x})\sigma _{12}\right] >0;\) thus the matrix \(S\) is invertible and \(\left( \begin{array}{c} \widehat{\xi }-\xi _p\\ \widehat{\lambda }_1\\ \widehat{\lambda }_2 \end{array}\right) =-S^{-1}\left( \begin{array}{c} Q_{1n}(\xi _p,0,0)+o_p(R_n)\\ Q_{2n}(\xi _p,0,0)+o_p(R_n)\\ o_p(R_n) \end{array}\right)\), which yields

$$\begin{aligned}\widehat{\xi }-\xi _p=&-\frac{1}{\det (S)}[S_{11}Q_{1n}(\xi _p,0,0)+S_{12}Q_{2n}(\xi _p,0,0)]+o_p(R_n),\\\widehat{\lambda }_1=&-\frac{1}{\det (S)}\left[ -l^2(\mathbf{x})f^2(\xi _p+\theta _0|\mathbf{x})Q_{1n}(\xi _p,0,0)\right. \\&\left. \quad +l^2(\mathbf{x})f(\xi _p|\mathbf{x})f(\xi _p+\theta _0|\mathbf{x})Q_{2n}(\xi _p,0,0)\right] +o_p(R_n),\\\widehat{\lambda }_2=&-\frac{1}{\det (S)}\left[ l^2(\mathbf{x})f(\xi _p|\mathbf{x})f(\xi _p+\theta _0|\mathbf{x})Q_{1n}(\xi _p,0,0)\right. \\&\left. \quad -l^2(\mathbf{x})f^2(\xi _p|\mathbf{x})Q_{2n}(\xi _p,0,0)\right] +o_p(R_n). \end{aligned}$$

Lemma 6.2.6 implies \(Q_{1n}(\xi _p,0,0)=O_p((nh_n^d)^{-1/2})\) and \(Q_{2n}(\xi _p,0,0)=O_p((nh_n^d)^{-1/2})\). Then \(R_n=O_p((nh_n^d)^{-1/2})\). Hence, using Lemma 6.2.6, the results are proved. \(\square\)

Proofs of main results

Proof of Theorem 3.1

The first conclusion follows from Lemma 6.2.4. Next we prove the second claim.

Applying a Taylor expansion to \(Q_{1n}(\widehat{\xi },\widehat{\lambda }_1,\widehat{\lambda }_2)=0\) and \(Q_{2n}(\widehat{\xi },\widehat{\lambda }_1,\widehat{\lambda }_2)=0\), and using the fact that \(\widehat{\lambda }_1=O_p((nh_n^d)^{-1/2})\) and \(\widehat{\lambda }_2=O_p((nh_n^d)^{-1/2})\) by Lemma 6.2.7, we have

$$\begin{aligned}&\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}(\widehat{\xi })=\widehat{\lambda }_1\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}^2(\widehat{\xi })\\&\quad +\widehat{\lambda }_2\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}(\widehat{\xi })W_{2ni}(\widehat{\xi },\theta _0)+O_p((nh_n^d)^{-1}),\\&\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{2ni}(\widehat{\xi },\theta _0)=\widehat{\lambda }_1\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{1ni}(\widehat{\xi })W_{2ni}(\widehat{\xi },\theta _0)\\&\quad +\widehat{\lambda }_2\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{2ni}^2(\widehat{\xi },\theta _0)+O_p((nh_n^d)^{-1}). \end{aligned}$$

Combining this with Lemmas 6.2.2 and 6.2.7, one can derive that

$$\begin{aligned} 2l_n(\theta _0)=\frac{nh_n^d}{\mu }\widehat{\lambda }_2^2\left[ \frac{f^2(\xi _p+\theta _0|\mathbf{x})}{f^2(\xi _p|\mathbf{x})}\sigma _{11}-2\frac{f(\xi _p+\theta _0|\mathbf{x})}{f(\xi _p|\mathbf{x})}\sigma _{12}+\sigma _{22}\right] +o_p(1). \end{aligned}$$

On the other hand, Lemma 6.2.7 gives \(\frac{nh_n^d}{\mu }\widehat{\lambda }_2^2\frac{f^2(\xi _p+\theta _0|\mathbf{x})\sigma _{11}-2f(\xi _p|\mathbf{x})f(\xi _p+\theta _0|\mathbf{x})\sigma _{12}+f^2(\xi _p|\mathbf{x})\sigma _{22}}{f^2(\xi _p|\mathbf{x})}\xrightarrow {d}\chi ^2_{1};\) hence \(2 l_n(\theta _0)\xrightarrow {d}\chi ^2_{1}\). \(\square\)
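The step from the bracketed variance in \(2l_n(\theta _0)\) to the normalized quotient in the last display is just clearing the factor \(f^2(\xi _p|\mathbf{x})\); a quick symbolic check (with `f1`, `f2`, `sij` as hypothetical shorthand for the two conditional densities and \(\sigma _{ij}\)) confirms the two expressions coincide:

```python
import sympy as sp

f1, f2, s11, s12, s22 = sp.symbols('f1 f2 s11 s12 s22', positive=True)

# bracketed variance factor from the expansion of 2 l_n(theta_0)
bracket = (f2**2 / f1**2) * s11 - 2 * (f2 / f1) * s12 + s22
# normalized quotient used in the chi-squared limit
normalized = (f2**2 * s11 - 2 * f1 * f2 * s12 + f1**2 * s22) / f1**2

print(sp.simplify(bracket - normalized))  # 0
```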

Proof of Theorem 3.2

The first conclusion follows from Lemma 6.2.5. Next we prove the second claim.

Applying a Taylor expansion to \(Q_{4n}({\widetilde{\Theta }},{\widetilde{\lambda }})=\mathbf{0}\) and \(Q_{5n}({\widetilde{\Theta }},{\widetilde{\lambda }})=\mathbf{0}\) at the point \((\Theta _0,\,\mathbf{0})\) yields

$$\begin{aligned}&\mathbf{0}=Q_{4n}({\widetilde{\Theta }},{\widetilde{\lambda }})=Q_{4n}(\Theta _0,\mathbf{0})+\frac{\partial Q_{4n}(\Theta _0,\mathbf{0})}{\partial \Theta ^{\mathrm T}}({\widetilde{\Theta }}-\Theta _0)\\&\quad +\frac{\partial Q_{4n}(\Theta _0,\mathbf{0})}{\partial \lambda ^{\mathrm T}}{\widetilde{\lambda }}+o_p(M_n),\\&\mathbf{0}=Q_{5n}({\widetilde{\Theta }},{\widetilde{\lambda }})=Q_{5n}(\Theta _0,\mathbf{0})+\frac{\partial Q_{5n}(\Theta _0,\mathbf{0})}{\partial \Theta ^{\mathrm T}}({\widetilde{\Theta }}-\Theta _0)\\&\quad +\frac{\partial Q_{5n}(\Theta _0,\mathbf{0})}{\partial \lambda ^{\mathrm T}}{\widetilde{\lambda }}+o_p(M_n), \end{aligned}$$

where \(M_n=\Vert {\widetilde{\Theta }}-\Theta _0\Vert +\Vert {\widetilde{\lambda }}\Vert\). After simple calculations, we have

$$\begin{aligned} \frac{\partial Q_{4n}(\Theta _0,\mathbf{0})}{\partial \Theta ^{\mathrm T}}=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^n\frac{\partial W_{ni}(\Theta _0)}{\partial \Theta ^{\mathrm T}},\,\frac{\partial Q_{4n}(\Theta _0,\mathbf{0})}{\partial \lambda ^{\mathrm T}}=-\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\Theta _0)W_{ni}^{\mathrm T}(\Theta _0), \end{aligned}$$

\(\frac{\partial Q_{5n}(\Theta _0,\mathbf{0})}{\partial \Theta ^{\mathrm T}}=\mathbf{0}\) and \(\frac{\partial Q_{5n}(\Theta _0,\mathbf{0})}{\partial \lambda ^{\mathrm T}}=\frac{\mu }{nh_n^d}\sum \limits _{i=1}^n\frac{\partial W_{ni}(\Theta _0)}{\partial \Theta }\). Thus \(\Bigg (\begin{array}{c} {\widetilde{\lambda }}\\ {\widetilde{\Theta }}-\Theta _0 \end{array}\Bigg )=P_n^{-1}\Bigg (\begin{array}{c} -Q_{4n}(\Theta _0,\mathbf{0})+o_p(M_n)\\ o_p(M_n) \end{array}\Bigg ),\) where

$$\begin{aligned} P_n&=\Bigg (\begin{array}{cc} \frac{\partial Q_{4n}}{\partial \lambda ^{\mathrm T}}&{}\frac{\partial Q_{4n}}{\partial \Theta ^{\mathrm T}}\\ \frac{\partial Q_{5n}}{\partial \lambda ^{\mathrm T}}&{}\frac{\partial Q_{5n}}{\partial \Theta ^{\mathrm T}} \end{array}\Bigg )_{(\Theta _0,\mathbf{0})}\\&=\left( \begin{array}{cc} -\frac{\mu }{nh_n^d}\sum \limits _{i=1}^nW_{ni}(\Theta _0)W_{ni}^{\mathrm T}(\Theta _0)&{}\frac{\mu }{nh_n^d}\sum \limits _{i=1}^n\frac{\partial W_{ni}(\Theta _0)}{\partial \Theta ^{\mathrm T}}\\ \frac{\mu }{nh_n^d}\sum \limits _{i=1}^n\frac{\partial W_{ni}(\Theta _0)}{\partial \Theta }&{}\mathbf{0} \end{array}\right) \xrightarrow {P}\Bigg (\begin{array}{cc} P_{11}&{}P_{12}\\ P_{21}&{}\mathbf{0} \end{array}\Bigg ), \end{aligned}$$

with \(P_{11}=-\Bigg (\begin{array}{cc} \sigma _{11}&{}\sigma _{12}\\ \sigma _{12}&{}\sigma _{22} \end{array}\Bigg ),P_{12}=\Bigg (\begin{array}{cc} 0&{}l(\mathbf{x})f(\xi _p|\mathbf{x})\\ l(\mathbf{x})f(\xi _p+\theta _0|\mathbf{x})&{}l(\mathbf{x})f(\xi _p+\theta _0|\mathbf{x}) \end{array}\Bigg ),P_{21}=P_{12}^{\mathrm T}\).

Lemma 6.2.6 gives \(\sqrt{nh_n^d}Q_{4n}(\Theta _0,\mathbf{0})=\sqrt{nh_n^d}(Q_{1n}(\xi _p,0,0),\,Q_{2n}(\xi _p,0,0))^\mathrm T\xrightarrow {d}N(0,\Sigma )\), so \(Q_{4n}(\Theta _0,\mathbf{0})=O_p((nh_n^d)^{-1/2})\). Thus we have \(M_n=O_p((nh_n^d)^{-1/2})\) and \({\widetilde{\Theta }}-\Theta _0=-P_{12}^{-1}Q_{4n}(\Theta _0,\mathbf{0})+o_p((nh_n^d)^{-1/2}).\) Therefore \(\sqrt{nh_n^d}({\widetilde{\Theta }}-\Theta _0)=-P_{12}^{-1}\sqrt{nh_n^d}Q_{4n}(\Theta _0,\mathbf{0})+o_p(1)\xrightarrow {d}N(0,P_{12}^{-1}\Sigma P_{21}^{-1}).\)\(\square\)
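The identification \({\widetilde{\Theta }}-\Theta _0=-P_{12}^{-1}Q_{4n}(\Theta _0,\mathbf{0})+o_p((nh_n^d)^{-1/2})\) rests on the block structure of \(P\): because the lower-right block is \(\mathbf{0}\) and \(P_{21}=P_{12}^{\mathrm T}\) is invertible, the \(\lambda\)-block of the solution vanishes at leading order. A numerical sanity check with arbitrary illustrative block values (not taken from the paper):

```python
import numpy as np

# hypothetical numerical values for the blocks, chosen only for illustration
P11 = -np.array([[2.0, 0.5], [0.5, 1.0]])  # negative of a Sigma-type symmetric block
P12 = np.array([[0.0, 1.3], [0.8, 1.3]])   # invertible (det = -1.04)
P21 = P12.T
P = np.block([[P11, P12], [P21, np.zeros((2, 2))]])
Q = np.array([0.4, -0.7])                  # stand-in for Q_{4n}(Theta_0, 0)

# solve P (lambda, Theta - Theta_0)^T = (-Q, 0)^T
sol = np.linalg.solve(P, np.concatenate([-Q, np.zeros(2)]))
lam, delta = sol[:2], sol[2:]

print(np.allclose(lam, 0))                           # lambda block vanishes
print(np.allclose(delta, -np.linalg.solve(P12, Q)))  # Theta block equals -P12^{-1} Q
```

Reading off the second block row, \(P_{21}\lambda =\mathbf{0}\) forces \(\lambda =\mathbf{0}\), and the first block row then gives the \(\Theta\)-block directly.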


About this article


Cite this article

Kong, C., Liang, H. Empirical likelihood of conditional quantile difference with left-truncated and dependent data. J. Korean Stat. Soc. (2020). https://doi.org/10.1007/s42952-019-00045-5


Keywords

  • Asymptotic normality
  • Conditional quantile difference
  • Empirical likelihood
  • Truncated data
  • \(\alpha\)-mixing

Mathematics Subject Classification

  • 62N02
  • 62G20