
Regression analysis of current status data with auxiliary covariates and informative observation times

Lifetime Data Analysis

Abstract

This paper discusses regression analysis of current status failure time data with informative observation times and continuous auxiliary covariates. Under the additive hazards model, we employ a frailty model to describe the relationship between the failure time of interest and the censoring time through some latent variables, and we propose an estimated partial likelihood estimator of the regression parameters that makes use of the available auxiliary information. Asymptotic properties of the resulting estimators are established. To assess the finite-sample performance of the proposed method, an extensive simulation study is conducted; the results indicate that the proposed method works well. An illustrative example is also provided.

Fig. 1


Acknowledgements

The authors wish to thank the Associate Editor and two reviewers for their critical and constructive comments that greatly improved the paper. This research is supported by National Natural Science Foundation of China with Grants Nos. 11471252 and 11201349.

Author information

Correspondence to Yurong Chen.

Appendix: Proofs of the asymptotic properties of \(\hat{\theta }\)


For the asymptotic properties of \(\hat{\theta }\), we need the following regularity conditions.

  1. (1)

    Z(t) is bounded.

  2. (2)

    \(P\{Y_{i}(t)=1\}>0,\) for all \(t\in [0,\tau ]\) and \(i=1,\ldots ,n\).

  3. (3)

    \(\int _0^{\tau }{\tilde{\lambda }}(t)dt<\infty .\)

  4. (4)

    There exists a neighborhood \({\mathcal {B}}\) of the true parameter \(\theta _0\) such that \((\partial ^2/\partial \theta _l\partial \theta _j)\varphi _{i}(t,\theta )\) exists and is uniformly continuous on \({\mathcal {B}};\) the function \(\phi _{i}(t,\theta )\) is bounded away from 0 on \([0,\tau ]\times {\mathcal {B}}\). The matrix \(\Sigma (\theta _0)\) is positive definite.

  5. (5)

Kernel function \(Q(\cdot )\) is non-negative and uniformly bounded with finite support, satisfying \(\int Q(t)dt=1\) and \(\int Q^2(t)dt<\infty \). Furthermore, \(Q(\cdot )\) has order \(\alpha _{0}\) in the sense that \(\alpha _{0}\equiv \inf \{|\alpha |>p;\int _{R^p}{{\varvec{u}}}^\alpha Q({{\varvec{u}}})d{{\varvec{u}}}\ne 0\},\) where \({{\varvec{u}}}^\alpha =u_1^{\alpha _1}\cdots u_p^{\alpha _p}, |\alpha |=\alpha _1+\cdots +\alpha _p,{{\varvec{u}}}=(u_1,\ldots ,u_p)',\alpha =(\alpha _1,\ldots ,\alpha _p)',\) and the \(\alpha _i\) are non-negative integers. The bandwidth \(B\) satisfies \(\sqrt{n}\parallel B\parallel ^{\alpha _{0}}\rightarrow 0\) and \(\log n/\left( \sqrt{n}\parallel B\parallel ^p\right) \rightarrow 0.\)

  6. (6)

For given t,  let \(H(u,{{\varvec{v}}},{{\varvec{s}}})\) be the joint distribution of \((\eta Y(t),{{\varvec{Z}}},{{\varvec{X}}}).\) Assume that \(h({{\varvec{v}}},{{\varvec{s}}})=\partial ^2H(1,{{\varvec{v}}},{{\varvec{s}}})/\partial {{\varvec{v}}}\partial {{\varvec{s}}}\) has \(\alpha _{0}\)th-order continuous derivatives with respect to every component of \({{\varvec{v}}}.\)
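Condition (5) can be made concrete. For \(p=1\), a symmetric, compactly supported density such as the Epanechnikov kernel has order \(\alpha_0=2\): its first moment vanishes while its second does not. The following sketch (the kernel choice is ours for illustration; the paper does not fix a particular \(Q\)) numerically verifies the moment conditions:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: non-negative, bounded, support [-1, 1]."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

# Numerical integration on a fine grid over the support.
u = np.linspace(-1.0, 1.0, 200_001)
du = u[1] - u[0]
q = epanechnikov(u)

total = np.sum(q) * du       # int Q(u) du        -> 1
m1 = np.sum(u * q) * du      # int u Q(u) du      -> 0 by symmetry
m2 = np.sum(u**2 * q) * du   # int u^2 Q(u) du    -> 0.2, nonzero, so alpha_0 = 2
sq = np.sum(q**2) * du       # int Q(u)^2 du      -> 0.6, finite

print(round(total, 4), round(m1, 4), round(m2, 4), round(sq, 4))
```

With \(\alpha_0=2\) and \(p=1\), a bandwidth such as \(B=n^{-1/3}\) then meets both rate requirements: \(\sqrt{n}B^{2}=n^{-1/6}\rightarrow 0\) and \(\log n/(\sqrt{n}B)=\log n/n^{1/6}\rightarrow 0\).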

Proof of the consistency of \({\hat{\theta }}.\) First note that \(n^{-1}U({\hat{\theta }})=0\). It then follows from the arguments of Foutz (1977) that it suffices to show that the following four statements hold.

  1. (a)

    \(n^{-1}(\partial U(\theta )/\partial \theta )\) exists and is continuous in an open neighborhood of \(\theta _0.\)

  2. (b)

    \(n^{-1}(\partial U(\theta _0)/\partial \theta _0)\) is negative definite in probability.

  3. (c)

    \(n^{-1}(\partial U(\theta )/\partial \theta )\) converges in probability to a fixed function uniformly in an open neighborhood of \(\theta _0.\)

  4. (d)

    \(n^{-1} U(\theta _0)\) converges to 0 in probability.

First, (a) obviously holds. For (b) and (c), using a method similar to that used in the proof of Lemma 1 of Liu et al. (2010), under conditions (1), (2), (5) and (6), we can obtain that

$$\begin{aligned}&\sup _{{{\mathcal {B}}}\times [0, \tau ]}\parallel {\hat{\phi }}_{i}^{(j)}-\phi _{i}^{(j)}\parallel \overset{P}{\longrightarrow }0,\ \ \text {for}\ j=0,1,2, \end{aligned}$$
(8)
$$\begin{aligned}&\sup _{{{\mathcal {B}}}\times [0, \tau ]}\parallel {\hat{S}}_{n}^{(j)}-s^{(j)}\parallel \overset{P}{\longrightarrow }0,\ \ \text {for}\ j=0,1,2, \end{aligned}$$
(9)
$$\begin{aligned}&\sup _{{{\mathcal {B}}}\times [0,\tau ]}\parallel {\hat{D}}_{n}^{(1)}-d^{(1)}\parallel \overset{P}{\longrightarrow }0,\ \ \text {and}\ \sup _{{{\mathcal {B}}}\times [0, \tau ]}\parallel {\hat{D}}_{n}^{(2)}-d^{(2)}\parallel \overset{P}{\longrightarrow }0. \end{aligned}$$
(10)
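The uniform convergence results (8)–(10) are the standard behavior of kernel-smoothed conditional expectations under conditions (5) and (6). As a loose numerical analogue (the regression function, Epanechnikov kernel, and bandwidth \(B=n^{-1/3}\) are our illustrative choices, not the paper's estimators), a Nadaraya–Watson fit of \(E[Y\mid X]\) shows the sup-norm error over a grid shrinking as \(n\) grows:

```python
import numpy as np

def nw_estimate(x0, x, y, b):
    """Nadaraya-Watson estimate of E[Y | X = x0] with an Epanechnikov kernel."""
    u = (x0[:, None] - x[None, :]) / b
    w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)
    return (w * y[None, :]).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)

rng = np.random.default_rng(2)
grid = np.linspace(0.15, 0.85, 50)   # interior grid, away from boundary bias
errors = []
for n in (500, 50000):
    x = rng.uniform(0.0, 1.0, size=n)
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, size=n)  # true curve sin(2*pi*x)
    b = n ** (-1.0 / 3.0)            # satisfies sqrt(n) b^2 -> 0, log n/(sqrt(n) b) -> 0
    fit = nw_estimate(grid, x, y, b)
    errors.append(np.max(np.abs(fit - np.sin(2 * np.pi * grid))))
print([round(e, 3) for e in errors])
```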

We can also prove that

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n\int _0^{\tau }\left( \frac{{\hat{\Phi }}_{i}^{(2)}(t,\theta )}{{\hat{\Phi }}_{i}(t,\theta )} -\left( \frac{{\hat{\Phi }}_{i}^{(1)}(t,\theta )}{{\hat{\Phi }}_{i}(t,\theta )}\right) ^{\otimes 2}\right) d M_{i}(t)\longrightarrow 0,\quad \text {in probability in } {{\mathcal {B}}}. \end{aligned}$$

Furthermore, note that

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n\int _0^{\tau }\left( \frac{{\hat{S}}_{n}^{(2)}(t,\theta )}{{\hat{S}}_{n}^{(0)}(t,\theta )}-\left( \frac{{\hat{S}}_{n}^{(1)}(t,\theta )}{{\hat{S}}_{n}^{(0)}(t,\theta )}\right) ^{\otimes 2}\right) d M_{i}(t) \end{aligned}$$
(11)

is asymptotically equivalent to a local square integrable martingale. Therefore, by Lenglart's inequality (Andersen and Gill 1982), function (11) converges to 0 in probability uniformly in \({\mathcal {B}}.\) Thus \(A_1(\tau ,\theta )\) converges in probability to 0 in \({{\mathcal {B}}}\). For \(A_2(\tau ,\theta ),\) note that

$$\begin{aligned} A_2(\tau ,\theta )= & {} \int _0^\tau \left( {\hat{D}}_{n}^{(2)}(t,\theta )-\frac{ {\hat{S}}_{n}^{(2)}(t,\theta )}{{\hat{S}}_{n}^{(0)}(t,\theta )}S_{n}^{(0)}(t,\theta _0)\right) {\tilde{\lambda }}(t)dt\\&-\,\int _0^\tau \left( {\hat{D}}_{n}^{(1)}(t,\theta )-\left( \frac{ {\hat{S}}_{n}^{(1)}(t,\theta )}{{\hat{S}}_{n}^{(0)}(t,\theta )}\right) ^{\otimes 2}S_{n}^{(0)}(t,\theta _0)\right) {\tilde{\lambda }}(t)dt, \end{aligned}$$

where \(S_{n}^{(0)}(t,\theta )\) is equal to \({\hat{S}}_{n}^{(0)}(t,\theta )\) with \({\hat{\Phi }}_{i}(t,\theta )\) replaced by \(\Phi _{i}(t,\theta _0).\) So, \(n^{-1}(\partial /\partial \theta )U(\theta )\) and \(A_2(\tau ,\theta )\) converge in probability to the same limit. It can be easily seen that \(A_2(\tau ,\theta )\) converges in probability to

$$\begin{aligned}&\int _0^\tau \left( d^{(2)}(t,\theta )-\frac{s^{(2)}(t,\theta )}{s^{(0)}(t,\theta )}s^{(0)}(t,\theta _0)\right) {\tilde{\lambda }}(t)dt\\&-\int _0^\tau \left( d^{(1)}(t,\theta ) -\left( \frac{s^{(1)}(t,\theta )}{s^{(0)}(t,\theta )}\right) ^{\otimes 2}s^{(0)}(t,\theta _0)\right) {\tilde{\lambda }}(t)dt\\&:= {\tilde{\Sigma }}(\theta )-\Sigma _3(\theta ). \end{aligned}$$

So (c) holds. Since \(d^{(2)}(t,\theta _0)=s^{(2)}(t,\theta _0)\), that is, \({\tilde{\Sigma }}(\theta _0)=0\), we obtain that

$$\begin{aligned} n^{-1}(\partial /\partial \theta _0) U(\theta _0)\overset{P}{\longrightarrow }-\Sigma _3(\theta _0). \end{aligned}$$
(12)

Thus (b) follows immediately from condition (4).

For (d), using arguments similar to those above, we can obtain that \(n^{-1} U(\theta _0)\) converges to the same limit as that of

$$\begin{aligned} D(\tau ,\theta _0):=\frac{1}{n}\sum _{i=1}^n\int _0^{\tau }\left( \frac{{\hat{\Phi }}_{i}^{(1)}(t,\theta _0)}{{\hat{\Phi }}_{i}(t,\theta _0)} -\frac{{\hat{S}}_{n}^{(1)}(t,\theta _0)}{{\hat{S}}_{n}^{(0)}(t,\theta _0)}\right) \Phi _{i}(t,\theta _0)Y_{i}(t){\tilde{\lambda }}(t)dt. \end{aligned}$$

Thus it is sufficient to show that \(D(\tau ,\theta _0)\) converges to 0 in probability. By the arguments used in Zhou and Wang (2000), we can show that \(\sqrt{n}D(\tau ,\theta _0)\) is equal to

$$\begin{aligned}&-\frac{1}{\sqrt{n}}\frac{1-\rho }{\rho }\sum _{j\in V}\int _0^\tau \left( \frac{\phi ^{(1)}_{j}(t,\theta _0)}{\phi _{j}^{(0)}(t,\theta _0)}-\frac{s^{(1)}(t,\theta _0)}{s^{(0)}(t,\theta _0)}\right) \\&\quad (\varphi _{j}(t,\theta _0)-\phi _{j}(t,\theta _0))Y_{j}(t){\tilde{\lambda }}(t)dt+o_p(1)\\&\quad := -\frac{1}{\sqrt{n}}\frac{1-\rho }{\rho }\sum _{j\in V}Q_{j}+o_p(1). \end{aligned}$$

It can be easily seen that \(E(Q_j)=0\); hence, by the strong law of large numbers, (d) holds immediately.
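Foutz's (1977) argument guarantees that once (a)–(d) hold, the root \({\hat\theta}\) of \(U(\theta)=0\) is consistent. A toy numerical illustration (the linear score \(U(\theta)=\sum_i(X_i-\theta)\) is our stand-in, not the paper's estimating function): the derivative condition (b) holds since \(n^{-1}\partial U/\partial\theta=-1<0\), and the Newton root approaches \(\theta_0\) as \(n\) grows.

```python
import numpy as np

def newton_root(x, theta_init=0.0, iters=25):
    """Solve the toy estimating equation U(theta) = sum_i (x_i - theta) = 0
    by Newton's method; here n^{-1} dU/dtheta = -1 < 0, matching (b)."""
    theta = theta_init
    n = len(x)
    for _ in range(iters):
        u = np.sum(x - theta)       # U(theta)
        theta -= u / (-float(n))    # Newton step with dU/dtheta = -n
    return theta

rng = np.random.default_rng(0)
theta0 = 1.5
results = {}
for n in (100, 10000):
    x = rng.normal(theta0, 1.0, size=n)
    results[n] = newton_root(x)     # root tightens around theta0 as n grows
print({n: round(t, 3) for n, t in results.items()})
```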

Proof of the asymptotic normality of \({\hat{\theta }}\). First note that

$$\begin{aligned} \frac{1}{\sqrt{n}}U(\theta )= & {} \frac{1}{\sqrt{n}}\sum _{i=1}^n\int _0^{\tau }\left( \frac{{\hat{\Phi }}_{i}^{(1)}(t,\theta )}{{\hat{\Phi }}_{i}(t,\theta )}-\frac{{\hat{S}}_{n}^{(1)}(t,\theta )}{{\hat{S}}_{n}^{(0)}(t,\theta )}\right) d M_{i}(t)\\&+\,\frac{1}{\sqrt{n}}\sum _{i=1}^n\int _0^{\tau }\left( \frac{{\hat{\Phi }}_{i}^{(1)}(t,\theta )}{{\hat{\Phi }}_{i}(t,\theta )}-\frac{{\hat{S}}_{n}^{(1)}(t,\theta )}{{\hat{S}}_{n}^{(0)}(t,\theta )}\right) \Phi _{i}(t,\theta _0)Y_{i}(t){\tilde{\lambda }}(t)dt, \end{aligned}$$

and

$$\begin{aligned} \frac{1}{\sqrt{n}} U(\theta _0)=-\frac{1}{\sqrt{n}}(U({\hat{\theta }})-U(\theta _0))=\left\{ -\frac{1}{n}(\partial /\partial \theta ^*) U(\theta ^*)\right\} \sqrt{n}({\hat{\theta }}-\theta _0), \end{aligned}$$
(13)

where \(\theta ^*\) is on the line segment between \({\hat{\theta }}\) and \(\theta _0\). To prove the asymptotic normality, it suffices to show that \(n^{-1/2} U(\theta _0)\) converges in distribution to a normal random variable and that \(n^{-1}(\partial /\partial \theta ^*)U(\theta ^*)\) converges to an invertible matrix. The latter follows directly from the consistency of \({\hat{\theta }}\) and the convergence proof of \(n^{-1}(\partial U(\theta )/\partial \theta ),\) that is,

$$\begin{aligned} -n^{-1}(\partial /\partial \theta ^*)U(\theta ^*)\overset{P}{\longrightarrow }\Sigma _3(\theta _0). \end{aligned}$$
(14)

For the asymptotic normality of \(n^{-1/2} U(\theta _0),\) note that \(n^{-1/2} U(\theta _0)\) can be rewritten as

$$\begin{aligned}&\frac{1}{\sqrt{n}}\sum _{i\in {\bar{V}}}\int _0^{\tau }\left( \frac{\phi _{i}^{(1)}(t,\theta _0)}{\phi _{i}(t,\theta _0)}-\frac{s^{(1)}(t,\theta _0)}{s^{(0)}(t,\theta _0)}\right) d M_{i}(t)\\&\quad +\,\frac{1}{\sqrt{n}}\sum _{i\in V}\left\{ \int _0^{\tau }\left( \frac{\varphi _{i}^{(1)}(t,\theta _0)}{\varphi _{i}(t,\theta _0)} -\frac{s^{(1)}(t,\theta _0)}{s^{(0)}(t,\theta _0)}\right) dM_{i}(t)-\frac{1-\rho }{\rho }Q_{i}\right\} +o_p(1)\\&:= I_1+I_2+o_p(1). \end{aligned}$$

It can be easily seen that both terms have mean zero, since \(M_{i}(t)\) is a martingale and \(E(Q_{i})=0,\) and that the two terms are mutually independent. By the martingale central limit theorem (Fleming and Harrington 1991), \(I_1\) converges weakly to a normal variable with mean zero and covariance \((1-\rho )\Sigma _1(\theta _0).\) By the ordinary central limit theorem, \(I_2\) is asymptotically normal with mean zero and covariance \(\rho \Sigma _2(\theta _0)\). It thus follows from the independence of \(I_1\) and \(I_2\) that \(n^{-1/2}U(\theta _0)\) converges to a mean-zero normal random vector with covariance \((1-\rho )\Sigma _1(\theta _0)+\rho \Sigma _2(\theta _0).\) The theorem then follows by combining (13) and (14) with the asymptotic normality of \(I_1+I_2.\)
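The limiting covariance \((1-\rho)\Sigma_1(\theta_0)+\rho\Sigma_2(\theta_0)\) reflects the two independent mean-zero contributions, from the non-validation and validation subsets. A Monte Carlo sketch with scalar stand-ins (the variances \(s_1^2,s_2^2\) are hypothetical placeholders for \(\Sigma_1,\Sigma_2\)) checks that the empirical variance of the normalized score matches this weighted combination:

```python
import numpy as np

def score_draw(n, rho, s1, s2, rng):
    """One draw of n^{-1/2}(I1 + I2): independent mean-zero sums over the
    validation set V (fraction rho) and its complement."""
    n_v = int(rho * n)
    i1 = rng.normal(0.0, s1, size=n - n_v).sum() / np.sqrt(n)  # non-validation piece
    i2 = rng.normal(0.0, s2, size=n_v).sum() / np.sqrt(n)      # validation piece
    return i1 + i2

rng = np.random.default_rng(1)
n, rho, s1, s2 = 2000, 0.3, 1.0, 2.0
draws = np.array([score_draw(n, rho, s1, s2, rng) for _ in range(5000)])
target = (1 - rho) * s1**2 + rho * s2**2  # plays the role of (1-rho)Sigma1 + rho*Sigma2
print(round(draws.var(), 2), target)
```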

About this article

Cite this article

Feng, Y., Chen, Y. Regression analysis of current status data with auxiliary covariates and informative observation times. Lifetime Data Anal 24, 293–309 (2018). https://doi.org/10.1007/s10985-016-9389-5
