
Estimating the hazard functions of two alternating recurrent events in the presence of covariates

Original Paper, published in AStA Advances in Statistical Analysis

Abstract

The motivation for this paper is a cystic fibrosis data set recording each patient’s times to relapse and times to cure over several recurrences of the disease. The aim is to study the impact of covariates on the hazard rates of two alternately occurring events. The dependence between the times to the two events across cycles is modeled through an autoregressive-type setup. The partial likelihood function is then derived and the estimators obtained. The estimators are shown to be consistent and asymptotically normal. The technique is applied to the motivating data, and a simulation study is conducted to corroborate the results.


References

  • Andersen, P.K., Gill, R.D.: Cox’s regression model for counting processes: a large sample study. Ann. Stat. 10(4), 1100–1120 (1982)

  • Breslow, N.: Covariance analysis of censored survival data. Biometrics 30(1), 89–99 (1974)

  • Breslow, N.: Analysis of survival data under the proportional hazards model. Int. Stat. Rev. 43(1), 45–57 (1975)

  • Cox, D.R.: Regression models and life-tables. J. R. Stat. Soc. Ser. B (Methodol.) 34(2), 187–220 (1972)

  • Cox, D.R.: Partial likelihood. Biometrika 62, 269–276 (1975)

  • Cox, D.R., Oakes, D.: Analysis of Survival Data. CRC Press, Boca Raton (1984)

  • Fuchs, H., Borowitz, D., Christiansen, D., Morris, E., Nash, M., Ramsey, B., Rosenstein, B., Smith, A., Wohl, M.: Effect of aerosolized recombinant human DNase on exacerbations of respiratory symptoms and on pulmonary function in patients with cystic fibrosis. N. Engl. J. Med. 331(10), 637–642 (1994)

  • Huang, C.Y., Wang, M.C.: Nonparametric estimation of the bivariate recurrence time distribution. Biometrics 61, 392–402 (2005)

  • Kalbfleisch, J.D., Prentice, R.L.: The Statistical Analysis of Failure Time Data, 2nd edn. Wiley, New Jersey (2002)

  • Lenglart, E.: Relation de domination entre deux processus. Ann. Inst. Henri Poincaré 13(2), 171–179 (1977)

  • Lin, D.Y.: On the Breslow estimator. Lifetime Data Anal. 13, 471–480 (2007)

  • Rebolledo, R.: Central limit theorems for local martingales. Z. Wahrsch. verw. Gebiete 51, 269–286 (1980)

  • Sen Roy, S., Chatterjee, M.: Estimating the hazard functions of two alternately occurring recurrent events. J. Appl. Stat. 42(7), 1547–1555 (2015)

  • Therneau, T.M., Hamilton, S.A.: rhDNase as an example of recurrent event analysis. Stat. Med. 16(18), 2029–2047 (1997)

  • Wang, M.C., Chang, S.H.: Nonparametric estimation of a recurrent survival function. J. Am. Stat. Assoc. 94(445), 146–153 (1999)

  • Wei, L.J., Lin, D.Y., Weissfeld, L.: Regression analysis of multivariate incomplete failure time data by modeling marginal distributions. J. Am. Stat. Assoc. 84(408), 1065–1073 (1989)

  • Yan, J., Fine, J.P.: Analysis of episodic data with application to recurrent pulmonary exacerbations in cystic fibrosis patients. J. Am. Stat. Assoc. 103(482), 498–510 (2008)


Acknowledgements

The authors are grateful to the editors and the referees for their helpful comments, which went a long way in improving the paper.

Author information


Correspondence to Sugata Sen Roy.

Appendix

Here we show that the maximum likelihood estimators are consistent and asymptotically normal (as before we restrict ourselves to \((\varvec{\theta },\varvec{\mu })\) and the results for \((\varvec{\eta },\varvec{\gamma })\) follow similarly).

Let \(N_{ji}(t) = I \{X_{ji} \le t , \delta _{1ji}=1 , \delta ^*_{1ji}=1\}\), where \(N_{ji}(1)\) is a.s. finite.

Observe that \(\lambda _{Eji}(t)=Q_{ji}(t)\lambda _{E0j}(t)r_{Eji}(t)\), so that the log partial likelihood evaluated at time \(t\) is

$$\begin{aligned} C(\varvec{\theta ,\mu } ; t)= & {} \sum _{j=1}^{m}\sum _{i=1}^{n} \int _{0}^t \left[ \varvec{\theta }_j'\varvec{z}_{ji}(u) + \varvec{\mu }' \varvec{y}_{ji}(u) \right] ~\text {d}N_{ji}(u)\\&- \,\sum _{j=1}^{m} \sum _{i=1}^{n} \int _{0}^t \log \left[ \sum _{l=1}^{n} Q_{jl}(u) e^{\{\varvec{\theta }_j'\varvec{z}_{jl}(u) + \varvec{\mu }' \varvec{y}_{jl}(u)\}}\right] ~ \text {d}{N_{ji}}(u). \end{aligned}$$
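To make the construction concrete, the log partial likelihood above can be evaluated directly on simulated data. The sketch below is purely illustrative (the arrays, dimensions and the at-risk rule \(Q_{jl}(u)=I\{X_{jl}\ge u\}\) are hypothetical stand-ins for the paper's setup), with scalar covariates so that \(\varvec{\theta }_j\) and \(\varvec{\mu }\) reduce to scalars:

```python
import numpy as np

# Hypothetical simulated data: m cycles (strata), n subjects, scalar covariates.
rng = np.random.default_rng(0)
m, n = 3, 50
z = rng.normal(size=(m, n))               # cycle-specific covariates z_{ji}
y = rng.normal(size=(m, n))               # shared covariates y_{ji}
x = rng.uniform(0.0, 1.0, size=(m, n))    # observed times X_{ji} in [0, 1]
delta = rng.integers(0, 2, size=(m, n))   # event indicators

def partial_loglik(theta, mu, z, y, x, delta):
    """C(theta, mu; 1): each observed event adds the failing subject's linear
    predictor minus the log of the exponentiated predictors over its risk set."""
    m, n = x.shape
    ll = 0.0
    for j in range(m):
        for i in range(n):
            if not delta[j, i]:
                continue
            eta = theta[j] * z[j] + mu * y[j]     # linear predictors for all l
            at_risk = x[j] >= x[j, i]             # Q_{jl} at time X_{ji}
            ll += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return float(ll)

ll0 = partial_loglik(np.zeros(m), 0.0, z, y, x, delta)
```

At \(\varvec{\theta }=\mathbf{0}\), \(\varvec{\mu }=0\) each event simply contributes minus the log of its risk-set size, which gives a quick sanity check on any implementation.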

Let \((\varvec{\theta _0},\varvec{\mu _0})\) be the true value of \((\varvec{\theta },\varvec{\mu })\). Then define

$$\begin{aligned} C^*(\varvec{\theta },\varvec{\mu },t)= & {} C(\varvec{\theta },\varvec{\mu },t)-C(\varvec{\theta }_0,\varvec{\mu }_0,t)\\= & {} \sum _{j=1}^{m}\sum _{i=1}^{n} \int _{0}^t [ (\varvec{\theta }_j-\varvec{\theta }_{0j})'\varvec{z}_{ji}(u) + (\varvec{\mu }-\varvec{\mu }_0)' \varvec{y}_{ji}(u) ]~\text {d}N_{ji}(u) \\&-\,\sum _{j=1}^{m} \sum _{i=1}^{n} \int _{0}^t \log \left[ \frac{\sum _{l=1}^{n} Q_{jl}(u) e^{\{\varvec{\theta }_j'\varvec{z}_{jl}(u) + \varvec{\mu }' \varvec{y}_{jl}(u)\}}}{\sum _{l=1}^{n} Q_{jl}(u) e^{\{\varvec{\theta }_{0j}'\varvec{z}_{jl}(u) + \varvec{\mu }_0' \varvec{y}_{jl}(u)\}}}\right] ~ \text {d}{N_{ji}}(u). \end{aligned}$$

Let \(S^{(0)}_j ({\varvec{\theta }}_j,\varvec{\mu },t)=n_m^{-1}\sum _{l=1}^{n} Q_{jl}(t) r_{Ejl}\).

Then the first- and second-order derivatives of \(S^{(0)}_j ({\varvec{\theta }}_j,\varvec{\mu },t)\) are

$$\begin{aligned}&S^{(1)}_{j{\varvec{\theta }_j}} ({\varvec{\theta }}_j,\varvec{\mu },t)=n_m^{-1}\sum _{l=1}^{n} Q_{jl}(t) r_{Ejl}{\varvec{z}}_{jl}\\&S^{(1)}_{j\varvec{\mu }} ({\varvec{\theta }}_j,\varvec{\mu },t)= n_m^{-1}\sum _{l=1}^{n} Q_{jl}(t) r_{Ejl}~{\varvec{y}}_{jl},\\&S^{(2)}_{j{\varvec{\theta }_j}} ({\varvec{\theta }}_j,\varvec{\mu },t)=n_m^{-1}\sum _{l=1}^{n}Q_{jl}(t) r_{Ejl}{\varvec{z}}_{jl}{\varvec{z}}_{jl}',\\&S^{(2)}_{j\varvec{\mu }} ({\varvec{\theta }}_j,\varvec{\mu },t)=n_m^{-1}\sum _{l=1}^{n}Q_{jl}(t) r_{Ejl}{\varvec{y}}_{jl}{\varvec{y}}_{jl}',\\&S^{(2)}_{j{\varvec{\theta }_j\mu }} ({\varvec{\theta }}_j,\varvec{\mu },t)=n_m^{-1}\sum _{l=1}^{n}Q_{jl}(t) r_{Ejl}{\varvec{z}}_{jl}{\varvec{y}}_{jl}'. \end{aligned}$$

Also let \(E_{j{\varvec{\theta }_j}}({\varvec{\theta }_j},\varvec{\mu },t)=\frac{S^{(1)}_{j{\varvec{\theta }_j}}}{S^{(0)}_j}, \quad \quad E_{j\varvec{\mu }}({\varvec{\theta }_j},\varvec{\mu },t)=\frac{S^{(1)}_{j\varvec{\mu }}}{S^{(0)}_{j}}\),

$$\begin{aligned} V_{j{\varvec{\theta }_j}}({\varvec{\theta }_j},\varvec{\mu },t)=\frac{S^{(2)}_{j{\varvec{\theta }_j}}}{S^{(0)}_j}-E_{j{\varvec{\theta }_j}} E'_{j{\varvec{\theta }_j}}, \quad V_{j\varvec{\mu }}({\varvec{\theta }_j},\varvec{\mu },t)=\frac{S^{(2)}_{j\varvec{\mu }}}{S^{(0)}_{j}}-E_{j\varvec{\mu }} E'_{j\varvec{\mu }}, \end{aligned}$$

and \(\quad V_{j{\varvec{\theta }_j}\mu }({\varvec{\theta }_j},\varvec{\mu },t)=\frac{S^{(2)}_{j{\varvec{\theta }_j}\mu }}{S^{(0)}_j}-E_{j{\varvec{\theta }_j}} E'_{j{\varvec{\mu }}}\).
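For intuition, \(S^{(0)}_j\), \(S^{(1)}_{j{\varvec{\theta }_j}}\) and \(S^{(2)}_{j{\varvec{\theta }_j}}\) are weighted risk-set moments, so \(V_{j{\varvec{\theta }_j}}\) is the weighted variance of the covariate over the risk set and is nonnegative in the scalar case. A minimal numerical sketch (the data and the \(n^{-1}\) normalization here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
z = rng.normal(size=n)                          # covariates z_{jl}, one stratum
Q = (rng.uniform(size=n) > 0.3).astype(float)   # at-risk indicators Q_{jl}(t)

def risk_set_moments(theta, z, Q):
    """S^(0), E = S^(1)/S^(0) and V = S^(2)/S^(0) - E^2 at one fixed time point."""
    w = Q * np.exp(theta * z)               # Q_{jl} r_{Ejl}
    s0 = w.mean()                           # S^(0) (illustrative normalization)
    e = (w * z).mean() / s0                 # E: weighted mean of z on the risk set
    v = (w * z ** 2).mean() / s0 - e ** 2   # V: weighted variance of z, >= 0
    return s0, e, v

s0, e, v = risk_set_moments(0.5, z, Q)
```

Because \(V\) is a variance under the normalized risk-set weights, it is automatically nonnegative, which is what makes the matrices \(\Sigma _{j\theta _j}\) and \(\Sigma _{\varvec{\mu }}\) below candidates for positive definiteness.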

Next suppose the following conditions hold:

A1. \(\int _{0}^{1} \lambda _{E0j}(t)\,\text {d}t <\infty \).

A2. There exist a neighborhood \(\Theta _j\) of \(\varvec{\theta }_j\) and functions \(s^{(0)}_j\) and \(s^{(k)}_{j{\varvec{\theta }_j}}\), \(k=1,2\), defined on \(\Theta _j\times [0,1]\), such that \(\sup _{t\in [0,1],\varvec{\theta }_j\in \Theta _j} \Vert S^{(0)}_j(t) - s^{(0)}_j(t) \Vert {\mathop {\rightarrow }\limits ^{p}} 0\) and \(\sup _{t\in [0,1],\varvec{\theta }_j\in \Theta _j} \Vert S^{(k)}_{j{\varvec{\theta }_j}}(t) - s^{(k)}_{j{\varvec{\theta }_j}}(t) \Vert {\mathop {\rightarrow }\limits ^{p}} 0\). For all \(t\in [0,1]\) and \(\varvec{\theta }_j\in \Theta _j\), \(s^{(1)}_{j{\varvec{\theta }_j}}=\frac{\partial }{\partial \varvec{\theta }_j}s^{(0)}_j\) and \(s^{(2)}_{j{\varvec{\theta }_j}}=\frac{\partial ^2}{\partial \varvec{\theta }_j^2}s^{(0)}_j\); \(s^{(0)}_j(t)\), \(s^{(1)}_{j{\varvec{\theta }_j}}(t)\) and \(s^{(2)}_{j{\varvec{\theta }_j}}(t)\) are continuous in \(\varvec{\theta }_j\in \Theta _j\), uniformly in \(t\in [0,1]\); \(s^{(0)}_j\), \(s^{(1)}_{j{\varvec{\theta }_j}}\) and \(s^{(2)}_{j{\varvec{\theta }_j}}\) are bounded on \(\Theta _j\times [0,1]\); \(s^{(0)}_j\) is bounded away from zero on \(\Theta _j\times [0,1]\); and the matrix \(\Sigma _{j\theta _j}=\int _{0}^{1} v_{j\theta _j}(\varvec{\theta }_{0j},\varvec{\mu }_0,t)~s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,t)~\lambda _{E0j}(t)\,\text {d}t\) is positive definite.

A3. There exist a neighborhood \(\Theta _{\varvec{\mu }}\) of \(\varvec{\mu }\) and functions \(s^{(k)}_{j\varvec{\mu }}\), \(k=0,1,2\), defined on \(\Theta _{\varvec{\mu }} \times [0,1]\), such that \(\sup _{t\in [0,1],\varvec{\mu }\in \Theta _{\varvec{\mu }}} \Vert S^{(k)}_{j\varvec{\mu }}(t) - s^{(k)}_{j\varvec{\mu }}(t) \Vert {\mathop {\rightarrow }\limits ^{p}} 0\). For all \(t\in [0,1]\) and \(\varvec{\mu }\in \Theta _{\varvec{\mu }}\), \(s^{(1)}_{j\varvec{\mu }}=\frac{\partial }{\partial \varvec{\mu }}s^{(0)}_{j}\) and \(s^{(2)}_{j\varvec{\mu }}=\frac{\partial ^2}{\partial \varvec{\mu }^2}s^{(0)}_{j}\); \(s^{(0)}_{j}(t)\), \(s^{(1)}_{j\varvec{\mu }}(t)\) and \(s^{(2)}_{j\varvec{\mu }}(t)\) are continuous in \(\varvec{\mu }\in \Theta _{\varvec{\mu }}\), uniformly in \(t\in [0,1]\); \(s^{(0)}_{j}\), \(s^{(1)}_{j\varvec{\mu }}\) and \(s^{(2)}_{j\varvec{\mu }}\) are bounded on \(\Theta _{\varvec{\mu }}\times [0,1]\); \(s^{(0)}_{j}\) is bounded away from zero on \(\Theta _{\varvec{\mu }}\times [0,1]\); and the matrix \(\Sigma _{\varvec{\mu }}=\int _{0}^{1} \sum _{j=1}^{m}v_{j\varvec{\mu }}(\varvec{\theta }_{0j},\varvec{\mu }_0,t)~s_{j}^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,t)~\lambda _{E0j}(t)\,\text {d}t\) is positive definite.

A4. There exists \(s^{(2)}_{j{\varvec{\theta }_j}\varvec{\mu }}=\frac{\partial ^2}{\partial \varvec{\theta }_j\partial \varvec{\mu }}s^{(0)}_j\), defined on \(\Theta _j\times \Theta _{\varvec{\mu }}\times [0,1]\), such that \(\sup _{t\in [0,1],\varvec{\theta }_j\in \Theta _j,\varvec{\mu }\in \Theta _{\varvec{\mu }}} \Vert S^{(2)}_{j{\varvec{\theta }_j}\varvec{\mu }}(t) - s^{(2)}_{j{\varvec{\theta }_j}\varvec{\mu }}(t) \Vert {\mathop {\rightarrow }\limits ^{p}} 0\). Moreover, \(s^{(2)}_{j{\varvec{\theta }_j}\varvec{\mu }}(t)\) is continuous in \(\varvec{\theta }_j\in \Theta _j\) and \(\varvec{\mu }\in \Theta _{\varvec{\mu }}\), uniformly in \(t\in [0,1]\), is bounded on \(\Theta _j\times \Theta _{\varvec{\mu }}\times [0,1]\), and the matrix \(\Sigma _{j\theta _j\varvec{\mu }}=\int _{0}^{1} v_{j\theta _j\varvec{\mu }}(\varvec{\theta }_{0j},\varvec{\mu }_0,t)~s_{j}^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,t)~\lambda _{E0j}(t)\,\text {d}t\) exists.

A5. There exists \(\epsilon >0\) such that

    $$\begin{aligned} n_m^{-\frac{1}{2}}\hbox {sup}_{i,j,t}|(\varvec{z}_{ji}(t),\varvec{y}_{ji}(t))'|Q_{ji}(t)I\{\varvec{\theta }'_{0j}\varvec{z_{ji}}+\varvec{\mu '}\varvec{y}_{ji}>-\epsilon |(\varvec{z}'_{ji}(t),\varvec{y}'_{ji}(t))'|\}{\mathop {\rightarrow }\limits ^{p}} 0 . \end{aligned}$$

For \(\lambda _{E00}(u) = \sum _{j=1}^m \lambda _{E0j}(u)\), define

$$\begin{aligned} A(\varvec{\theta },\varvec{\mu },t)= & {} \sum _{j=1}^{m}\sum _{i=1}^{n} \int _{0}^t [ (\varvec{\theta }_j-\varvec{\theta }_{0j})'\varvec{z}_{ji}(u) + (\varvec{\mu }-\varvec{\mu }_0)' \varvec{y}_{ji}(u) ]~\lambda _{Eji}(u)\text {d}u \\&- \,\sum _{j=1}^{m}\sum _{i=1}^{n} \int _{0}^{t}{\log } \frac{S_j^{(0)}(\varvec{\theta }_j,\varvec{\mu },u)}{S_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}\lambda _{Eji}(u)\text {d}u\\= & {} \int _{0}^{t}\left[ \sum _{j=1}^{m} (\varvec{\theta }_j-\varvec{\theta }_{0j})'S_{j{\varvec{\theta }_j}}^{(1)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)\lambda _{E0j}(u) \right. \\&\left. + \,(\varvec{\mu }-\varvec{\mu }_0)' S_{\varvec{\mu }}^{(1)}(\varvec{\theta }_0,\varvec{\mu }_0,u)\lambda _{E00}(u) \right. \\&\left. - \,\sum _{j=1}^{m} {\log } \frac{S_j^{(0)}(\varvec{\theta }_j,\varvec{\mu },u)}{S_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}S_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)\lambda _{E0j}(u)\right] \text {d}u. \end{aligned}$$

Next defining \( M_{ji}(t) = N_{ji}(t) - \int _{0}^t Q_{ji}(u)\lambda _{Eji}(u)\text {d}u \) and \(\bar{M}_j=\sum _{i=1}^{n}M_{ji}\), it is easy to see that \(M_{ji}(t)\) are local martingales on the time interval [0, 1].
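The defining property of the compensated process \(M_{ji}\), mean zero at every \(t\), is easy to confirm by simulation in the simplest special case of a single exponential event time (the rate, horizon and replication count below are arbitrary illustrative choices):

```python
import numpy as np

# One subject with event time T ~ Exp(lam): N(t) = I{T <= t}, Q(u) = I{T >= u},
# so the compensator is  int_0^t Q(u) * lam du = lam * min(T, t)  and E[M(t)] = 0.
rng = np.random.default_rng(2)
lam, t, reps = 2.0, 0.5, 200_000
T = rng.exponential(scale=1.0 / lam, size=reps)
M_t = (T <= t).astype(float) - lam * np.minimum(T, t)   # M(t) = N(t) - compensator
mean_M = float(M_t.mean())                              # should be close to 0
```

Analytically both \(E[N(t)]\) and \(E[\lambda \min (T,t)]\) equal \(1-e^{-\lambda t}\), so the Monte Carlo average of \(M(t)\) should vanish up to sampling noise.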

Thus for every \((\varvec{\theta },\varvec{\mu },t)\), \(C^*(\varvec{\theta ,\mu } ; t) - A(\varvec{\theta },\varvec{\mu },t)\) is a local square integrable martingale with

$$\begin{aligned} B(\varvec{\theta },\varvec{\mu },t)= & {} <C^*(\varvec{\theta ,\mu } ; t) - A(\varvec{\theta },\varvec{\mu },t), C^*(\varvec{\theta ,\mu } ; t) - A(\varvec{\theta },\varvec{\mu },t)>\\= & {} \int _{0}^{t}\left[ \sum _{j=1}^{m}\sum _{i=1}^{n} (\varvec{\theta _j}-\varvec{\theta _{0j}})'\varvec{z}_{ji}(u) + (\varvec{\mu }-\varvec{\mu }_0)' \varvec{y}_{ji}(u) \right. \\&\left. - \,\sum _{j=1}^{m}\sum _{i=1}^{n} {\log } \frac{S_j^{(0)}(\varvec{\theta }_j,\varvec{\mu },u)}{S_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}S_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)\right] ^2\lambda _{Eji}(u)\text {d}u. \end{aligned}$$

Under conditions A1–A3,

$$\begin{aligned}&A(\varvec{\theta },\varvec{\mu },1) {\mathop {\rightarrow }\limits ^{p}} \int _{0}^{1}\left[ \sum _{j=1}^{m} (\varvec{\theta }_j-\varvec{\theta }_{0j})'s_{j{\varvec{\theta }_j}}^{(1)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)\lambda _{E0j}(u) \right. \nonumber \\&\qquad \left. + \,(\varvec{\mu }-\varvec{\mu }_0)' s_{\varvec{\mu }}^{(1)}(\varvec{\theta }_0,\varvec{\mu }_0,u)\lambda _{E00}(u)\right. \nonumber \\&\qquad \left. -\,\sum _{j=1}^{m} {\log } \frac{s_j^{(0)}(\varvec{\theta }_j,\varvec{\mu },u)}{s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)\lambda _{E0j}(u)\right] \text {d}u \end{aligned}$$
(6.1)

and \(n_mB(\varvec{\theta },\varvec{\mu },t)\) converges in probability to some finite quantity. Hence \(C^*(\varvec{\theta ,\mu } ; 1)\) converges in probability to the same limit as \(A(\varvec{\theta },\varvec{\mu },1)\) (Lenglart 1977).

Now by the boundedness conditions in A2–A3, the first and second derivatives of the limiting function on the right-hand side of (6.1) can be obtained by taking partial derivatives under the integral sign. For \(e_j(\varvec{\theta }_{j},\varvec{\mu },u)=\frac{s_{j{\varvec{\theta }_j}}^{(1)}(\varvec{\theta }_{j},\varvec{\mu },u)}{s_j^{(0)}(\varvec{\theta }_{j},\varvec{\mu },u)}\), the first derivative with respect to \(\varvec{\theta }_j\) is

$$\begin{aligned}&\int _{0}^{1}\{s_{j{\varvec{\theta }_j}}^{(1)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u) - \frac{s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}{s_j^{(0)}(\varvec{\theta }_{j},\varvec{\mu },u)}s_{j{\varvec{\theta }_j}}^{(1)}(\varvec{\theta }_{j},\varvec{\mu },u)\}\lambda _{E0j}(u)\text {d}u \nonumber \\&\qquad \quad =\int _{0}^{1}\{e_j(\varvec{\theta }_{0j},\varvec{\mu }_0,u)~-~e_j(\varvec{\theta }_{j},\varvec{\mu },u)\}s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)\lambda _{E0j}(u)\text {d}u \end{aligned}$$
(6.2)

while for \(e_{j\varvec{\mu }}(\varvec{\theta }_{j},\varvec{\mu },u)=\frac{s_{j\varvec{\mu }}^{(1)}(\varvec{\theta }_{j},\varvec{\mu },u)}{s_j^{(0)}(\varvec{\theta }_{j},\varvec{\mu },u)}\), the first derivative with respect to \(\varvec{\mu }\) is

$$\begin{aligned}&\int _{0}^{1}\sum _{j=1}^{m}\{s_{j\varvec{\mu }}^{(1)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u) - \frac{s_{j}^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}{s_{j}^{(0)}(\varvec{\theta }_{j},\varvec{\mu },u)}s_{j\varvec{\mu }}^{(1)}(\varvec{\theta }_{j},\varvec{\mu },u)\}\lambda _{E0j}(u)\text {d}u \nonumber \\&\qquad \quad =\int _{0}^{1}\sum _{j=1}^{m}\{e_{j\varvec{\mu }}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)~-~e_{j\varvec{\mu }}(\varvec{\theta }_{j},\varvec{\mu },u)\}s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)\lambda _{E0j}(u)\text {d}u.\nonumber \\ \end{aligned}$$
(6.3)

Both (6.2) and (6.3) are zero at \((\varvec{\theta },\varvec{\mu })=(\varvec{\theta _0},\varvec{\mu _0})\) so that the first derivative of the limiting function is zero at \((\varvec{\theta _0},\varvec{\mu _0})\).

Next, for \(v_j(\varvec{\theta }_{j},\varvec{\mu },u)=\frac{s^{(2)}_{j{\varvec{\theta }_j}}}{s^{(0)}_j}-e_j e'_j\), taking the second-order derivative with respect to \(\varvec{\theta }_j\) leads to

$$\begin{aligned}&\int _{0}^{1}\left\{ -s_{j{\varvec{\theta }_j}}^{(2)}(\varvec{\theta }_{j},\varvec{\mu },u)\frac{s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}{s_j^{(0)}(\varvec{\theta }_{j},\varvec{\mu },u)}\right. \nonumber \\&\left. +\, \frac{s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}{{s_j^{(0)}(\varvec{\theta }_{j},\varvec{\mu },u)}^2}s_{j{\varvec{\theta }_j}}^{(1)}(\varvec{\theta }_{j},\varvec{\mu },u)\left( s_{j{\varvec{\theta }_j}}^{(1)}(\varvec{\theta }_{j},\varvec{\mu },u)\right) ' \right\} \lambda _{E0j}(u)\text {d}u \nonumber \\&\qquad =-\,\int _{0}^{1}v_j(\varvec{\theta }_{j},\varvec{\mu },u)s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)\lambda _{E0j}(u)\text {d}u \end{aligned}$$
(6.4)

while differentiating with respect to \(\varvec{\mu }\), with \(v_{j\varvec{\mu }}(\varvec{\theta }_{j},\varvec{\mu },u)=\frac{s^{(2)}_{j\varvec{\mu }}}{s^{(0)}_j}-e_{j\varvec{\mu }} e'_{j\varvec{\mu }}\), gives

$$\begin{aligned}&\int _{0}^{1}\sum _{j=1}^{m}\left\{ -s_{j\varvec{\mu }}^{(2)}(\varvec{\theta }_{j},\varvec{\mu },u)\frac{s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}{s_j^{(0)}(\varvec{\theta }_{j},\varvec{\mu },u)} \right. \nonumber \\&\left. +\, \frac{s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)}{{s_j^{(0)}(\varvec{\theta }_{j},\varvec{\mu },u)}^2}s_{j\varvec{\mu }}^{(1)}(\varvec{\theta }_{j},\varvec{\mu },u)\left( s_{j\varvec{\mu }}^{(1)}(\varvec{\theta }_{j},\varvec{\mu },u)\right) ' \right\} \lambda _{E0j}(u)\text {d}u \nonumber \\&=-\,\int _{0}^{1} \sum _{j=1}^{m} v_{j\varvec{\mu }}(\varvec{\theta }_{j},\varvec{\mu },u)s_j^{(0)}(\varvec{\theta }_{0j},\varvec{\mu }_0,u)\lambda _{E0j}(u)\text {d}u. \end{aligned}$$
(6.5)

The second derivative is negative definite in both (6.4) and (6.5), so for each \((\varvec{\theta ,\mu })\), \(C^*(\varvec{\theta ,\mu } ; 1)\) converges to a concave function with a unique maximum at \((\varvec{\theta _0},\varvec{\mu _0})\). Hence, since \((\varvec{\hat{\theta },\hat{\mu }})\) maximizes \(C^*(\varvec{\theta ,\mu } ; 1)\), it follows that \((\varvec{\hat{\theta },\hat{\mu }}){\mathop {\rightarrow }\limits ^{p}}(\varvec{\theta _0},\varvec{\mu _0})\).
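The concavity driving this consistency argument can be checked numerically in the one-stratum, scalar-covariate case: each event contributes a linear term minus a log-sum-exp, so the log partial likelihood is concave in \(\theta \), and its grid maximizer should sit near the true value. A hedged sketch under a simulated proportional-hazards model (true parameter, sample size and grid are arbitrary choices, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta0 = 200, 0.5
z = rng.normal(size=n)
T = rng.exponential(size=n) * np.exp(-theta0 * z)   # hazard e^{theta0 z}, unit baseline
order = np.argsort(T)                               # no censoring, no ties a.s.

def C(theta):
    """One-stratum log partial likelihood with every subject failing."""
    ll = 0.0
    for k in range(n):
        i = order[k]
        risk = order[k:]                            # subjects with T_l >= T_i
        ll += theta * z[i] - np.log(np.exp(theta * z[risk]).sum())
    return ll

grid = np.linspace(-1.5, 2.5, 81)
vals = np.array([C(th) for th in grid])
concave = bool(np.all(np.diff(vals, 2) <= 1e-8))    # second differences <= 0
theta_hat = float(grid[vals.argmax()])              # grid maximizer, near theta0
```

Concavity holds because each summand is linear minus a log-sum-exp; the grid check simply confirms this up to rounding.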

To prove asymptotic normality, define

$$\begin{aligned} U(\varvec{\theta },\varvec{\mu },t ) = \sum _{i=1}^{n} \int _{0}^{t} [\tilde{\mathbf{z}}_i(\varvec{\theta },\varvec{\mu },u) - \mathbf{r}(\varvec{\theta },\varvec{\mu },u)]\text {d}\bar{N}_i(u) \end{aligned}$$

and

\(V(\varvec{\theta },\varvec{\mu },t) = \left( \begin{array}{ll} V_{\varvec{\theta \theta }}(t) &{} V_{\varvec{\theta \mu }}(t) \\ V_{\varvec{\theta \mu }}(t)'&{} V_{\varvec{\mu \mu }}(t) \\ \end{array}\right) ,\) where \(V_{\varvec{\theta \theta }}(t)^{mp \times mp} = diag((V_{\varvec{\theta _j\theta _j}}(t))), V_{\varvec{\theta }\mu }(t)^{mp \times 2} = (V'_{\varvec{\theta _1\mu }}(t)~\cdots ~V'_{\varvec{\theta _m\mu }}(t))'\)

$$\begin{aligned} V_{\varvec{\theta _j\theta _j}}(t)= & {} \int _{0}^{t} \sum _{i=1}^{n} \left\{ \left( a_{Ej}^{-1}(u) \sum _{l=1}^{n} Q_{jl}(u) r_{Ejl} {\varvec{z}}_{jl}{\varvec{z}}_{jl}' \right) \right. \\&\left. - \,\left( a_{Ej}^{-2}(u) \left( \sum _{l=1}^{n} Q_{jl}(u) r_{Ejl}{\varvec{z}}_{jl}\right) \left( \sum _{l=1}^{n} Q_{jl}(u) r_{Ejl}{\varvec{z}}_{jl}\right) ' \right) \right\} \text {d}N_{ji}(u),\\ V_{\varvec{\theta _j\mu }}(t)= & {} \int _{0}^{t} \sum _{i=1}^{n} \left\{ \left( a_{Ej}^{-1}(u) \sum _{l=1}^{n} Q_{jl}(u) r_{Ejl}{\varvec{z}}_{jl}~{\varvec{y}}_{jl}'\right) \right. \\&\left. - \,\left( a_{Ej}^{-2}(u)\left( \sum _{l=1}^{n} Q_{jl}(u) r_{Ejl}{\varvec{z}}_{jl}\right) \left( \sum _{l=1}^{n} Q_{jl}(u) r_{Ejl}~{\varvec{y}}_{jl}\right) '\right) \right\} \text {d}N_{ji}(u),\\ V_{\varvec{\mu \mu }}(t)= & {} \int _{0}^{t} \sum _{j=1}^m \sum _{i=1}^{n} \left\{ \left( a_{Ej}^{-1}(u) \sum _{l=1}^{n} Q_{jl}(u) r_{Ejl} {\varvec{y}}_{jl}{\varvec{y}}_{jl}'\right) \right. \\&\left. - \,\left( a_{Ej}^{-2}(u) \left( \sum _{l=1}^{n} Q_{jl}(u) r_{Ejl}~{\varvec{y}}_{jl}\right) \left( \sum _{l=1}^{n} Q_{jl}(u) r_{Ejl}~{\varvec{y}}_{jl}\right) '\right) \right\} \text {d}N_{ji}(u). \end{aligned}$$

Also let    \(\Sigma = \left( \begin{array}{ll} \Sigma _{\varvec{\theta }} &{} \Sigma _{\varvec{\theta \mu }} \\ \Sigma _{\varvec{\theta \mu }}'&{} \Sigma _{\varvec{\mu }} \\ \end{array} \right) , \hbox {where} \quad \Sigma _{\varvec{\theta }}^{mp \times mp} = diag((\Sigma _{\varvec{j\theta _j}})), \Sigma _{\varvec{\theta }\mu }^{mp \times 2} = \left( \Sigma '_{\varvec{1\theta _1\mu }}~\cdots ~\Sigma '_{\varvec{m\theta _m\mu }}\right) '\).

We then need to show that for \((\varvec{\theta }^{*'},\varvec{\mu }^{*'})'{\mathop {\rightarrow }\limits ^{p}}(\varvec{\theta }_0^{'},\varvec{\mu }_0^{'})'\),

$$\begin{aligned} \sqrt{n_m} [(\varvec{\hat{\theta }}',\varvec{\hat{\mu }}')' - (\varvec{\theta }_0',\varvec{\mu }_0')']= \left[ \frac{1}{n_m}V(\varvec{\theta }^{*},\varvec{\mu }^{*},1)\right] ^{-1}\frac{1}{\sqrt{n_m}}U(\varvec{\theta }_0,\varvec{\mu }_0,1) {\mathop {\rightarrow }\limits ^{d}} N(\mathbf{0}, \Sigma ^{-1}). \end{aligned}$$

Now

$$\begin{aligned} n_m^{-\frac{1}{2}}U(\varvec{\theta }_0,\varvec{\mu }_0,t )= & {} n_m^{-\frac{1}{2}}\sum _{i=1}^{n} \int _{0}^{t} [\tilde{\mathbf{z}}_i(\varvec{\theta }_0,\varvec{\mu }_0,u) - \mathbf{r}(\varvec{\theta }_0,\varvec{\mu }_0,u)]\,\text {d}\bar{N}_i(u)\\= & {} n_m^{-\frac{1}{2}}\sum _{i=1}^{n} \int _{0}^{t} [\tilde{\mathbf{z}}_i(\varvec{\theta }_0,\varvec{\mu }_0,u) - \mathbf{r}(\varvec{\theta }_0,\varvec{\mu }_0,u)]\,\text {d}\bar{M_i}(u), \end{aligned}$$

where \( d\bar{M_i}(u)=\sum _{j=1}^{m_i}dM_{ji}(u)\), \(\tilde{\mathbf{z}}_i(\varvec{\theta }_0,\varvec{\mu }_0,u) = ({\varvec{z}'}_{1i}, \ldots , {\varvec{z}'}_{mi}, \sum _{j=1}^m {\varvec{y}}'_{ji})'\) and \(\mathbf{r}(\varvec{\theta },\varvec{\mu },u) = (E_{1\theta _1}(u), \ldots , E_{m\theta _m}(u), \sum _{j=1}^m E_{j\mu }(u))'\).

By techniques similar to Andersen and Gill (1982), it can be shown that the conditions of Rebolledo’s (1980) CLT are satisfied. Taking \(H_{il}(t)=n_m^{-\frac{1}{2}}(\tilde{\mathbf{z}}_l(t)-\mathbf{r}(\varvec{\theta }_0,\varvec{\mu }_0,t))_i\), an application of the CLT shows that \(n_m^{-\frac{1}{2}}U(\varvec{\theta }_0,\varvec{\mu }_0,t)\) converges weakly to a continuous Gaussian process that has a covariance matrix \(\Sigma \) when evaluated at time \(t=1\). Hence, \(n_m^{-\frac{1}{2}}U(\varvec{\theta }_0,\varvec{\mu }_0,1){\mathop {\rightarrow }\limits ^{d}} N(0,\Sigma )\).
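The conclusion \(n_m^{-\frac{1}{2}}U(\varvec{\theta }_0,\varvec{\mu }_0,1){\mathop {\rightarrow }\limits ^{d}} N(0,\Sigma )\) can be illustrated by Monte Carlo in the one-stratum scalar case: at the true parameter, the normalized score behaves approximately like a Gaussian draw. The sketch below assumes a simple proportional-hazards simulation (model, sample size and replication count are hypothetical, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(4)
n, theta0, reps = 100, 0.5, 2000

def score(z, T, theta):
    """U(theta): sum over events of z_i minus the weighted risk-set mean of z."""
    zs = z[np.argsort(T)]                    # sort by event time (no ties a.s.)
    w = np.exp(theta * zs)                   # relative risks
    Sw = np.cumsum(w[::-1])[::-1]            # risk-set sums, right to left
    Swz = np.cumsum((w * zs)[::-1])[::-1]
    return float(np.sum(zs - Swz / Sw))

scores = np.empty(reps)
for r in range(reps):
    z = rng.normal(size=n)
    T = rng.exponential(size=n) * np.exp(-theta0 * z)   # hazard e^{theta0 z}
    scores[r] = score(z, T, theta0) / np.sqrt(n)

mean_s = float(scores.mean())                                   # close to 0
frac = float(np.mean(np.abs(scores - mean_s) < 1.96 * scores.std()))
```

Across replications the normalized score should center at zero, with roughly 95% of draws inside \(\pm 1.96\) empirical standard deviations, consistent with the Gaussian limit.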

Again, following arguments similar to Andersen and Gill (1982) and using Lenglart’s inequality, it can be shown that \(||n_m^{-1}V(\varvec{\theta }^*,\varvec{\mu }^*,1) - \Sigma ||{\mathop {\rightarrow }\limits ^{p}} 0\), for any random \((\varvec{\theta }^{*},\varvec{\mu }^{*})\) such that \((\varvec{\theta }^{*'},\varvec{\mu }^{*'})'{\mathop {\rightarrow }\limits ^{p}}(\varvec{\theta }_0^{'},\varvec{\mu }_0^{'})'\) as \(n_m\rightarrow \infty \).


Cite this article

Chatterjee, M., Sen Roy, S. Estimating the hazard functions of two alternating recurrent events in the presence of covariates. AStA Adv Stat Anal 102, 289–304 (2018). https://doi.org/10.1007/s10182-017-0316-1

