Subgroup Specific Incremental Value of New Markers for Risk Prediction

  • Conference paper in Risk Assessment and Evaluation of Predictions

Part of the book series: Lecture Notes in Statistics (volume 215)

Abstract

In many clinical applications, understanding when measurement of new markers is necessary to provide added accuracy to existing prediction tools could lead to more cost-effective disease management. Many statistical tools for evaluating the incremental value of novel markers over routine clinical risk factors have been developed in recent years. However, most existing literature focuses primarily on global assessment. Since the incremental values of new markers often vary across subgroups, it would be of great interest to identify subgroups for which the new markers are most/least useful in improving risk prediction. In this paper we provide novel statistical procedures for systematically identifying potential traditional-marker-based subgroups in whom it might be beneficial to apply a new model with measurements of both the novel and traditional markers. We consider various conditional time-dependent accuracy parameters for censored failure time outcomes to assess the subgroup-specific incremental values. We provide nonparametric kernel-based estimation procedures to calculate the proposed parameters. Simultaneous interval estimation procedures are provided to account for sampling variation and to adjust for multiple testing. Simulation studies suggest that our proposed procedures work well in finite samples. The proposed procedures are applied to the Framingham Offspring Study to examine the added value of an inflammation marker, C-reactive protein, on top of the traditional Framingham Risk Score for predicting 10-year risk of cardiovascular disease.

The paper appeared in volume 19 (2013) of Lifetime Data Analysis.

References

  1. Baker, S., Pinsky, P.: A proposed design and analysis for comparing digital and analog mammography: special receiver operating characteristic methods for cancer screening. J. Am. Stat. Assoc. 96, 421–428 (2001)

  2. Bickel, P., Rosenblatt, M.: On some global measures of the deviations of density function estimates. Ann. Stat. 1, 1071–1095 (1973)

  3. Blumenthal, R., Michos, E., Nasir, K.: Further improvements in CHD risk prediction for women. J. Am. Med. Assoc. 297, 641–643 (2007)

  4. Cai, T., Cheng, S.: Robust combination of multiple diagnostic tests for classifying censored event times. Biostatistics 9, 216–233 (2008)

  5. Cai, T., Dodd, L.E.: Regression analysis for the partial area under the ROC curve. Stat. Sin. 18, 817–836 (2008)

  6. Cai, T., Tian, L., Wei, L.: Semiparametric Box–Cox power transformation models for censored survival observations. Biometrika 92(3), 619–632 (2005)

  7. Cai, T., Tian, L., Uno, H., Solomon, S., Wei, L.: Calibrating parametric subject-specific risk estimation. Biometrika 97(2), 389–404 (2010)

  8. Cook, N., Ridker, P.: The use and magnitude of reclassification measures for individual predictors of global cardiovascular risk. Ann. Intern. Med. 150(11), 795–802 (2009)

  9. Cook, N., Buring, J., Ridker, P.: The effect of including C-reactive protein in cardiovascular risk prediction models for women. Ann. Intern. Med. 145, 21–29 (2006)

  10. Cox, D.: Regression models and life-tables. J. R. Stat. Soc. B (Stat. Methodol.) 34(2), 187–220 (1972)

  11. Dabrowska, D.: Non-parametric regression with censored survival time data. Scand. J. Stat. 14(3), 181–197 (1987)

  12. Dabrowska, D.: Uniform consistency of the kernel conditional Kaplan-Meier estimate. Ann. Stat. 17(3), 1157–1167 (1989)

  13. Dabrowska, D.: Smoothed Cox regression. Ann. Stat. 25(4), 1510–1540 (1997)

  14. D’Agostino, R.: Risk prediction and finding new independent prognostic factors. J. Hypertens. 24(4), 643–645 (2006)

  15. Dodd, L., Pepe, M.: Partial AUC estimation and regression. Biometrics 59, 614–623 (2003)

  16. Du, Y., Akritas, M.: Iid representations of the conditional Kaplan-Meier process for arbitrary distributions. Math. Method. Stat. 11, 152–182 (2002)

  17. Dwyer, A.J.: In pursuit of a piece of the ROC. Radiology 201, 621–625 (1996)

  18. Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults: Executive summary of the third report of the National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III). J. Am. Med. Assoc. 285(19), 2486–2497 (2001)

  19. Fan, J., Gijbels, I.: Data-driven bandwidth selection in local polynomial regression: variable bandwidth selection and spatial adaptation. J. R. Stat. Soc. B (Stat. Methodol.) 57, 371–394 (1995)

  20. Gail, M., Pfeiffer, R.: On criteria for evaluating models of absolute risk. Biostatistics 6(2), 227–239 (2005)

  21. Gilbert, P., Wei, L., Kosorok, M., Clemens, J.: Simultaneous inferences on the contrast of two hazard functions with censored observations. Biometrics 58(4), 773–780 (2002)

  22. Harrell, F. Jr., Lee, K., Mark, D.: Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat. Med. 15(4), 361–387 (1996)

  23. Heagerty, P., Zheng, Y.: Survival model predictive accuracy and ROC curves. Biometrics 61, 92–105 (2005)

  24. Jiang, Y., Metz, C., Nishikawa, R.: A receiver operating characteristic partial area index for highly sensitive diagnostic tests. Radiology 201, 745–750 (1996)

  25. Jin, Z., Ying, Z., Wei, L.: A simple resampling method by perturbing the minimand. Biometrika 88(2), 381–390 (2001)

  26. Korn, E., Simon, R.: Measures of explained variation for survival data. Stat. Med. 9(5), 487–503 (1990)

  27. Li, G., Doss, H.: An approach to nonparametric regression for life history data using local linear fitting. Ann. Stat. 23, 787–823 (1995)

  28. McIntosh, M., Pepe, M.: Combining several screening tests: optimality of the risk score. Biometrics 58(3), 657–664 (2002)

  29. Park, Y., Wei, L.: Estimating subject-specific survival functions under the accelerated failure time model. Biometrika 90, 717–723 (2003)

  30. Park, B., Kim, W., Ruppert, D., Jones, M., Signorini, D., Kohn, R.: Simple transformation techniques for improved non-parametric regression. Scand. J. Stat. 24(2), 145–163 (1997)

  31. Paynter, N., Chasman, D., Pare, G., Buring, J., Cook, N., Miletich, J., Ridker, P.: Association between a literature-based genetic risk score and cardiovascular events in women. J. Am. Med. Assoc. 303(7), 631–637 (2010)

  32. Pencina, M., D’Agostino, R.: Overall C as a measure of discrimination in survival analysis: model specific population value and confidence interval estimation. Stat. Med. 23(13), 2109–2123 (2004)

  33. Pencina, M., D’Agostino, R.S., D’Agostino, R.J., Vasan, R.: Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond (with commentaries and rejoinder). Stat. Med. 27, 157–212 (2008)

  34. Pfeiffer, R., Gail, M.: Two criteria for evaluating risk prediction models. Biometrics 67(3), 1057–1065 (2010)

  35. Pfeffer, M., Jarcho, J.: The charisma of subgroups and the subgroups of CHARISMA. N. Engl. J. Med. 354(16), 1744–1746 (2006)

  36. Ridker, P.: C-Reactive protein and the prediction of cardiovascular events among those at intermediate risk: moving an inflammatory hypothesis toward consensus. J. Am. Coll. Cardiol. 49(21), 2129–2138 (2007)

  37. Ridker, P., Rifai, N., Rose, L., Buring, J., Cook, N.: Comparison of C-reactive protein and low-density lipoprotein cholesterol levels in the prediction of first cardiovascular events. N. Engl. J. Med. 347, 1557–1565 (2002)

  38. Robins, J., Ritov, Y.: Toward a curse of dimensionality appropriate (CODA) asymptotic theory for semi-parametric models. Stat. Med. 16(3), 285–319 (1997)

  39. Rothwell, P.: Treating individuals 1: external validity of randomised controlled trials: to whom do the results of this trial apply? Lancet 365, 82–93 (2005)

  40. Tian, L., Zucker, D., Wei, L.: On the Cox model with time-varying regression coefficients. J. Am. Stat. Assoc. 100(469), 172–183 (2005)

  41. Tian, L., Cai, T., Wei, L.J.: Identifying subjects who benefit from additional information for better prediction of the outcome variables. Biometrics 65, 894–902 (2009)

  42. Tibshirani, R., Hastie, T.: Local likelihood estimation. J. Am. Stat. Assoc. 82(398), 559–567 (1987)

  43. Tice, J., Cummings, S., Ziv, E., Kerlikowske, K.: Mammographic breast density and the Gail model for breast cancer risk prediction in a screening population. Breast Cancer Res. Treat. 94(2), 115–122 (2005)

  44. Uno, H., Cai, T., Tian, L., Wei, L.: Evaluating prediction rules for t-year survivors with censored regression models. J. Am. Stat. Assoc. 102, 527–537 (2007)

  45. Uno, H., Cai, T., Pencina, M., D’Agostino, R., Wei, L.: On the C-statistics for evaluating overall adequacy of risk prediction procedures with censored survival data. Stat. Med. 30(10), 1105–1117 (2011)

  46. Uno, H., Cai, T., Tian, L., Wei, L.J.: Graphical procedures for evaluating overall and subject-specific incremental values from new predictors with censored event time data. Biometrics 67, 1389–1396 (2011)

  47. van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes. Springer, New York (1996)

  48. Wacholder, S., Hartge, P., Prentice, R., Garcia-Closas, M., Feigelson, H., Diver, W., Thun, M., Cox, D., Hankinson, S., Kraft, P., et al.: Performance of common genetic variants in breast-cancer risk models. N. Engl. J. Med. 362(11), 986–993 (2010)

  49. Wand, M., Marron, J., Ruppert, D.: Transformation in density estimation (with comments). J. Am. Stat. Assoc. 86, 343–361 (1991)

  50. Wang, T., Gona, P., Larson, M., Tofler, G., Levy, D., Newton-Cheh, C., Jacques, P., Rifai, N., Selhub, J., Robins, S.: Multiple biomarkers for the prediction of first major cardiovascular events and death. N. Engl. J. Med. 355, 2631–2639 (2006)

  51. Wang, R., Lagakos, S., Ware, J., Hunter, D., Drazen, J.: Statistics in medicine-reporting of subgroup analyses in clinical trials. N. Engl. J. Med. 357(21), 2189–2194 (2007)

  52. Wilson, P.W., D’Agostino, R.B., Levy, D., Belanger, A.M., Silbershatz, H., Kannel, W.B.: Prediction of coronary heart disease using risk factor categories. Circulation 97, 1837–1847 (1998)

  53. Zhao, L., Cai, T., Tian, L., Uno, H., Solomon, S., Wei, L., Minnier, J., Kohane, I., Pencina, M., D’Agostino, R., et al.: Stratifying subjects for treatment selection with censored event time data from a comparative study. Harvard University Biostatistics Working Paper Series, Working Paper 122 (2010)


Acknowledgements

The Framingham Heart Study and the Framingham SHARe project are conducted and supported by the National Heart, Lung, and Blood Institute (NHLBI) in collaboration with Boston University. The Framingham SHARe data used for the analyses described in this manuscript were obtained through dbGaP (access number: phs000007.v3.p2). This manuscript was not prepared in collaboration with investigators of the Framingham Heart Study and does not necessarily reflect the opinions or views of the Framingham Heart Study, Boston University, or the NHLBI. The work is supported by grants U01-CA86368, P01-CA053996, R01-GM085047, R01-GM079330, R01-AI052817 and U54-LM008748 awarded by the National Institutes of Health.

Corresponding author

Correspondence to Q. Zhou.

Appendices

Appendix 1

Let \(\mathbb{P}_{n}\) and \(\mathbb{P}\) denote expectation with respect to (w.r.t.) the empirical probability measure of \(\{(T_{i},\varDelta _{i},X_{i},Z_{i}), i = 1,\cdots,n\}\) and the probability measure of \((T,\varDelta,X,Z)\), respectively, and let \(\mathbb{G}_{n} = \sqrt{n}(\mathbb{P}_{n} - \mathbb{P})\). We use \(\dot{\mathcal{F}}(x)\) to denote \(d\mathcal{F}(x)/dx\) for any function \(\mathcal{F}\), \(\simeq\) to denote equivalence up to \(o_{p}(1)\), and \(\lesssim\) to denote being bounded above up to a universal constant. Let \(\beta_{0}\) and \(\gamma_{0}\) denote the solutions to

$$\displaystyle{E\left [V _{i}\left \{Y _{i}^{\dag }- g_{ 1}(\beta ^{\prime}V _{i})\right \}\right ] = 0}$$

and \(E\left [W_{i}\left \{Y _{i}^{\dag }- g_{2}(\gamma ^{\prime}W_{i})\right \}\right ] = 0\), respectively. Let \(\bar{p}_{1i} = g_{1}(\beta _{0}^{\prime}V _{i})\) and \(\bar{p}_{2i} = g_{2}(\gamma _{0}^{\prime}W_{i})\). Let \(\omega =\varDelta I(T \leq t_{0})/G_{X,Z}(T) + I(T > t_{0})/G_{X,Z}(t_{0})\), \(\hat{M}_{i}(c) = I(\hat{p}_{2i} \geq c)\) and \(\bar{M}_{i}(c) = I(\bar{p}_{2i} \geq c)\). For \(y = 0,1\), let \(f_{y}(c;s)\) denote the conditional density of \(\bar{p}_{2i}\) given \(Y _{i}^{\dag } = y\) and \(\bar{p}_{1i} = s\); we assume that \(f_{y}(c;s)\) is continuous and bounded away from zero uniformly in \(c\) and \(s\). This assumption implies that \(\mbox{ROC}(u;s)\) has a continuous and bounded derivative \(\dot{\mbox{ROC}}(u;s) = \partial \mbox{ROC}(u;s)/\partial u\). We assume that \(V\) and \(W\) are bounded, and that \(\tau (y;s) = \partial \mbox{pr}[\phi \{\bar{p}_{1}(X)\} \leq s,{Y }^{\dag } = y]/\partial s\) is continuously differentiable with bounded derivatives and bounded away from zero. Throughout, the bandwidths are assumed to be of order \(n^{-\nu}\) with \(\nu \in (1/5, 1/2)\). For ease of presentation and without loss of generality, we assume that \(h_{1} = h_{0}\), denoted by \(h\), and suppress \(h\) from the notation. Without loss of generality, we assume that \(\sup _{t,x,z}\vert {n}^{\frac{1} {2} }\{\hat{G}_{X,Z}(t) - G_{X,Z}(t)\}\vert = O_{p}(1)\). When \(C\) is assumed to be independent of both \(T\) and \((X, Z)\), the simple Kaplan-Meier estimator satisfies this condition. When \(C\) depends on \((X, Z)\), \(\hat{G}_{X,Z}\) obtained under the Cox model also satisfies this condition provided that \(W_{c}\) is bounded. The kernel function \(K\) is assumed to be symmetric and smooth with bounded support on \([-1, 1]\), and we let \(m_{2} =\int K{(x)}^{2}dx\).
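The inverse-probability-of-censoring weight \(\omega\) above can be computed by estimating the censoring survival function with the Kaplan-Meier method applied to the censoring indicator. A minimal sketch, assuming independent censoring so that the plain Kaplan-Meier estimator \(\hat{G}\) (not depending on \(X, Z\)) applies; all function names and the simulated inputs are illustrative, not from the paper:

```python
import numpy as np

def km_censoring_survival(T, delta):
    """Kaplan-Meier estimate of the censoring survival function G(t),
    treating censoring (delta == 0) as the event of interest."""
    order = np.argsort(T)
    T, delta = T[order], delta[order]
    n = len(T)
    at_risk = n - np.arange(n)            # number still at risk at each sorted time
    factors = 1.0 - (delta == 0) / at_risk
    G = np.cumprod(factors)
    times = T

    def G_fn(t):
        # right-continuous step function evaluated at t
        idx = np.searchsorted(times, t, side="right") - 1
        return 1.0 if idx < 0 else G[idx]

    return np.vectorize(G_fn)

def ipcw_weights(T, delta, t0):
    """omega_i = Delta_i I(T_i <= t0)/G(T_i) + I(T_i > t0)/G(t0)."""
    G = km_censoring_survival(T, delta)
    w = np.where((T <= t0) & (delta == 1), delta / np.maximum(G(T), 1e-12), 0.0)
    w = w + (T > t0) / max(G(t0), 1e-12)
    return w
```

With no censoring the weights are identically one, and subjects censored before \(t_{0}\) receive weight zero, as the definition of \(\omega\) requires.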

Asymptotic Expansions for \(\hat{\mathcal{S}}_{y}(c;s)\)

Uniform Convergence Rate for \(\hat{\mathcal{S}}_{y}(c;s)\) We first establish the following uniform convergence rate of \(\hat{\mathcal{S}}_{y}(c;s) = g\{\hat{a}_{y}(c;s)\}\):

$$\displaystyle{ \sup _{s\in \mathcal{I}_{h},c}\vert \hat{\mathcal{S}}_{y}(c;s) -\mathcal{S}_{y}(c;s)\vert = O_{p}\{{(nh)}^{-\frac{1} {2} }\log (n)\} = o_{p}(1). }$$
(6)

To this end, we note that for any given c and s,

$$\displaystyle{\hat{\boldsymbol{\zeta }}_{y}(c;s) = \left [\begin{array}{cc} \hat{\zeta }_{a_{y}}(c;s) \\ \hat{\zeta }_{b_{y}}(c;s) \end{array} \right ] = \left [\begin{array}{c} \hat{a}_{y}(c;s) - a_{y}(c;s) \\ \hat{b}_{y}(c;s) - b_{y}(c;s) \end{array} \right ]}$$

is the solution to the estimating equation \(\hat{\boldsymbol{\varPsi }}_{y}(\boldsymbol{\zeta }_{y},c,s) = 0\), where \(\boldsymbol{\zeta }_{y} = (\zeta _{a_{y}},\zeta _{b_{y}})^{\prime}\) and

$$\displaystyle\begin{array}{rcl} \hat{\boldsymbol{\varPsi }}_{y}(\boldsymbol{\zeta }_{y};c,s)& =& \left [\begin{array}{c} \hat{\varPsi }_{y1}(\boldsymbol{\zeta }_{y},c,s) \\ \hat{\varPsi }_{y2}(\boldsymbol{\zeta }_{y},c,s) \end{array} \right ] {}\\ & =& {n}^{-1}\sum _{ i:Y _{i}=y}\hat{w}_{i}\!\left [\!\begin{array}{c} 1 \\ {h}^{-1}\hat{\mathcal{E}}_{i1}(s) \end{array} \!\right ]\!K_{h}\left \{\hat{\mathcal{E}}_{i1}(s)\right \}\left [\hat{M}_{i}(c) -\mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\}\right ]\!,{}\\ \end{array}$$

\(a_{y}(c;s) = {g}^{-1}\{\mathcal{S}_{y}(c;s)\},b_{y}(c;s) = \partial {g}^{-1}\left \{\mathcal{S}_{y}(c;s)\right \}/\partial s\) and

$$\displaystyle{\mathcal{G}(\boldsymbol{\zeta }_{y},c,s;e,h) = g[a_{y}(c;s) + b_{y}(c;s)\{e -\phi (s)\} +\zeta _{a_{y}} +\zeta _{b_{y}}{h}^{-1}\{e -\phi (s)\}].}$$

We next establish the convergence rate for \(\sup _{\boldsymbol{\zeta }_{y},c,s}\vert \hat{\boldsymbol{\varPsi }}_{y}(\boldsymbol{\zeta }_{y};c,s) -\boldsymbol{\varPsi }_{y}(\boldsymbol{\zeta }_{y};c,s)\vert \), where

$$\displaystyle{\boldsymbol{\varPsi }_{y}(\boldsymbol{\zeta }_{y};c,s) =\!\! \left [\!\begin{array}{c} \varPsi _{y1}(\boldsymbol{\zeta }_{y},c,s) \\ \varPsi _{y2}(\boldsymbol{\zeta }_{y};c,s) \end{array} \!\right ]\! =\tau (y;s)\!\left [\!\begin{array}{c} \mathcal{S}_{y}(c;s) -\int K(t)g\{a_{y}(c;s) +\zeta _{a_{y}} +\zeta _{b_{y}}t\}dt \\ -\int tK(t)g\{a_{y}(c;s) +\zeta _{a_{y}} +\zeta _{b_{y}}t\}dt \end{array} \!\right ]\!\!.}$$

We first show that

$$\displaystyle{\sup _{s\in \mathcal{I}_{h},c}\left \vert {n}^{-1}\sum _{ i:Y _{i}=y}\hat{\omega }_{i}K_{h}\{\hat{\mathcal{E}}_{i1}(s)\}\hat{M}_{i}(c) -\tau (y;s)\mathcal{S}_{y}(c;s)\right \vert }$$

and

$$\displaystyle\begin{array}{rcl} & & \sup _{\boldsymbol{\zeta }_{y},s\in \mathcal{I}_{h},c}\left \vert {n}^{-1}\sum _{ i:Y _{i}=y}\hat{\omega }_{i}K_{h}\{\hat{\mathcal{E}}_{i1}(s)\}\mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\}\right. {}\\ & & \qquad \qquad \qquad \qquad \qquad \left.-\tau (y;s)\int K(t)g\{a_{y}(c;s) +\zeta _{a_{y}} +\zeta _{b_{y}}t\}dt\right \vert {}\\ \end{array}$$

are both \(O_{p}\{{(nh)}^{-\frac{1} {2} }\log (n)\}\) where \(\mathcal{I}_{h} = {[\phi }^{-1}(\rho _{l} + h){,\phi }^{-1}(\rho _{u} - h)]\) and \([\rho _{l},\rho _{u}]\) is a subset of the support of \(\phi \{g_{1}(\beta _{0}^{T}V )\}\). To this end, we note that since \(\sup _{u}\vert \hat{G}_{X,Z}(u) - G_{X,Z}(u)\vert = O_{p}({n}^{-\frac{1} {2} })\) and \(\vert \hat{\beta }-\beta _{0}\vert = O_{p}({n}^{-\frac{1} {2} })\),

$$\displaystyle\begin{array}{rcl} & & \left \vert {n}^{-1}\sum _{ i:Y _{i}=y}(\hat{\omega }_{i} -\omega _{i})K_{h}\{\hat{\mathcal{E}}_{i1}(s)\}\mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\}\right \vert {}\\ & \leq & {n}^{-1}\sum _{ i:Y _{i}=y}\vert \hat{\omega }_{i} -\omega _{i}\vert K_{h}\{\hat{\mathcal{E}}_{i1}(s)\} = O_{p}({n}^{-\frac{1} {2} }). {}\\ \end{array}$$

This implies that

$$\displaystyle\begin{array}{rcl} & & \left \vert {n}^{-1}\sum _{ i:Y _{i}=y}\hat{\omega }_{i}K_{h}\{\hat{\mathcal{E}}_{i1}(s)\}\mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\} -\tau (y; s)\int K(t)g\{a_{y}(c; s) +\zeta _{a_{y}} +\zeta _{b_{y}}t\}dt\right \vert {}\\ & \leq & \left \vert {n}^{-\frac{1} {2} }\int K_{ h}\{e -\phi (s)\}\mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\}d\mathbb{G}_{n}\left [\omega I\{\phi (\hat{p}_{i1}) \leq e\} -\omega I\{\phi (\bar{p}_{i1}) \leq e\}\right ]\right \vert {}\\ & +& \left \vert \int K_{h}\{e -\phi (s)\}\mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\}d\mathbb{P}\left [\omega I\{\phi (\bar{p}_{i1}) \leq e\}\right ]\right. {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \left.-\tau (y; s)\int K(t)g\{a_{y}(c; s) +\zeta _{a_{y}} +\zeta _{b_{y}}t\}dt\right \vert {}\\ & +& \left \vert {n}^{-\frac{1} {2} }\int K_{ h}\{e -\phi (s)\}d\mathbb{P}\left [\omega \mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\}I\{\phi (\bar{p}_{i1}) \leq e\}\right ]\right \vert + O_{p}({n}^{-\frac{1} {2} }) {}\\ & \lesssim & {n}^{-\frac{1} {2} }{h}^{-1}\|\mathbb{G}_{ n}\|_{\mathcal{H}_{\delta }} + \left \vert {n}^{-\frac{1} {2} }\int K_{ h}\{e -\phi (s)\}d\mathbb{P}\left [\omega \mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\}I\{\phi (\bar{p}_{i1}) \leq e\}\right ]\right \vert {}\\ & &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + O_{p}({n}^{-\frac{1} {2} } + {h}^{2}), {}\\ \end{array}$$

where \(\mathcal{H}_{\delta } =\{\omega I\left [\phi \{g_{1}(\beta ^{\prime}v)\} \leq e\right ] -\omega I\left [\phi \{g_{1}(\beta _{0}^{\prime}v)\} \leq e\right ]: \vert \beta -\beta _{0}\vert \leq \delta,e\}\) is a class of functions indexed by \(\beta\) and \(e\). By the maximum inequality of van der Vaart and Wellner [47], we have

$$\displaystyle{E\|\mathbb{G}_{n}\|_{\mathcal{H}_{\delta }}{\lesssim \delta }^{\frac{1} {2} }\left \{\vert \log (\delta )\vert + \vert \log (h)\vert \right \}\left [1 + \frac{{\delta }^{\frac{1} {2} }\left \{\vert \log (\delta )\vert + \vert \log (h)\vert \right \}} {\delta {n}^{\frac{1} {2} }} \right ]}$$

Together with the fact that \(\vert \hat{\beta }-\beta _{0}\vert = O_{p}({n}^{-\frac{1} {2} })\) from Uno et al. [44], it implies that \({n}^{-\frac{1} {2} }{h}^{-1}\|\mathbb{G}_{n}\|_{\mathcal{H}_{\delta }} = O_{p}\{{(nh)}^{-\frac{1} {2} }{(n{h}^{2})}^{-\frac{1} {4} }\log (n)\}\). In addition, with the standard arguments used in Bickel and Rosenblatt [2], it can be shown that

$$\displaystyle\begin{array}{rcl} & & \left \vert {n}^{-\frac{1} {2} }\int K_{h}\{e -\phi (s)\}d\mathbb{P}\left [\omega \mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\}I\{\phi (\bar{p}_{i1}) \leq e\}\right ]\right \vert {}\\ & =& O_{p}\{{(nh)}^{-\frac{1} {2} }\log (n)\}. {}\\ \end{array}$$

Therefore, for \(h = {n}^{-\nu}\) with \(1/5 <\nu < 1/2\),

$$\displaystyle\begin{array}{rcl} & & \sup _{\boldsymbol{\zeta }_{y},s\in \mathcal{I}_{h},c}\left \vert {n}^{-1}\sum _{ i:Y _{i}=y}\hat{\omega }_{i}K_{h}\{\hat{\mathcal{E}}_{i1}(s)\}\mathcal{G}\{\boldsymbol{\zeta }_{y},c,s;\phi (\hat{p}_{1i}),h\}\right. {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \left.-\tau (y;s)\int K(t)g\{a_{y}(c;s) +\zeta _{a_{y}} +\zeta _{b_{y}}t\}dt\right \vert {}\\ \end{array}$$

is \(O_{p}\{{(nh)}^{-\frac{1} {2} }\log (n)\}\). Following similar arguments to those given above, coupled with the fact that \(\vert \hat{\gamma }-\gamma _{0}\vert = O_{p}({n}^{-\frac{1} {2} })\), we have

$$\displaystyle{\sup _{s\in \mathcal{I}_{h},c\in [0,1]}\left \vert {n}^{-1}\sum _{ i:Y _{i}=y}\hat{\omega }_{i}K_{h}\{\hat{\mathcal{E}}_{i1}(s)\}\hat{M}_{i}(c) -\tau (y;s)\mathcal{S}_{y}(c;s)\right \vert = O_{p}\{{(nh)}^{-\frac{1} {2} }\log (n)\}.}$$

Thus, \(\sup _{\boldsymbol{\zeta }_{ y},c,s}\vert \hat{\varPsi }_{y1}(\boldsymbol{\zeta }_{y};c,s) -\varPsi _{y1}(\boldsymbol{\zeta }_{y};c,s)\vert = O_{p}\{{(nh)}^{-\frac{1} {2} }\log (n)\} = o_{p}(1)\). It follows from the same arguments as given above that

$$\displaystyle{\sup _{\boldsymbol{\zeta }_{y},c,s}\vert \hat{\varPsi }_{y2}(\boldsymbol{\zeta }_{y};c,s) -\varPsi _{y2}(\boldsymbol{\zeta }_{y};c,s)\vert = O_{p}\{{(nh)}^{-\frac{1} {2} }\log (n) + h\} = o_{p}(1).}$$

Therefore, \(\sup _{\boldsymbol{\zeta }_{ y},c,s}\vert \hat{\boldsymbol{\varPsi }}_{y}(\boldsymbol{\zeta }_{y};c,s) -\boldsymbol{\varPsi }_{y}(\boldsymbol{\zeta }_{y};c,s)\vert = o_{p}(1)\). In addition, we note that 0 is the unique solution to the equation \(\boldsymbol{\varPsi }_{y}(\boldsymbol{\zeta }_{y};c,s) = 0\) w.r.t. \(\boldsymbol{\zeta }_{y}\). This implies that \(\sup _{s,c}\vert \hat{\boldsymbol{\zeta }}_{a_{y}}(c;s)\vert = O_{p}\{{(nh)}^{-\frac{1} {2} }\log (n)\} = o_{p}(1)\), and hence the consistency of \(\hat{\mathcal{S}}_{y}(c;s)\),

$$\displaystyle{\sup _{s\in \mathcal{I}_{h},c\in [0,1]}\vert \hat{\mathcal{S}}_{y}(c;s) -\mathcal{S}_{y}(c;s)\vert = O_{p}\{{(nh)}^{-\frac{1} {2} }\log (n)\} = o_{p}(1).}$$
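When \(g\) is the logistic link, solving the local estimating equation \(\hat{\boldsymbol{\varPsi }}_{y}(\boldsymbol{\zeta }_{y};c,s) = 0\) above amounts to a kernel-weighted local-linear logistic regression at each point \(s\). A minimal numerical sketch via Newton-Raphson; the function names, the Epanechnikov kernel, and the simulation in the usage below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def local_logistic(e, m, w, s, h, iters=25):
    """Kernel-weighted local-linear logistic fit at the point s: solves
    sum_i w_i K_h(e_i - s) [1, (e_i - s)/h]' {m_i - expit(a + b(e_i - s)/h)} = 0
    by Newton-Raphson and returns expit(a), the smoothed estimate at s."""
    x = (e - s) / h
    K = np.where(np.abs(x) <= 1, 0.75 * (1 - x**2), 0.0)  # Epanechnikov kernel
    ww = w * K / h
    X = np.column_stack([np.ones_like(x), x])
    theta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        grad = X.T @ (ww * (m - p))
        H = (X * (ww * p * (1 - p))[:, None]).T @ X
        theta += np.linalg.solve(H + 1e-8 * np.eye(2), grad)  # small ridge for stability
    return 1.0 / (1.0 + np.exp(-theta[0]))

# Illustrative usage: m | e is Bernoulli(e), so the target at s = 0.7 is 0.7.
rng = np.random.default_rng(0)
e = rng.uniform(0, 1, 500)
m = (rng.uniform(0, 1, 500) < e).astype(float)
est = local_logistic(e, m, np.ones(500), s=0.7, h=0.2)
```

The local-linear (rather than local-constant) form mirrors the intercept-and-slope parametrization \((a_{y}, b_{y})\) used in the estimating equation.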

Asymptotic Expansion for \(\hat{\mathcal{S}}_{y}(c;s)\) Let \(\hat{d}_{y}(c;s) = \sqrt{nh}\{\hat{a}_{y}(c;s) - a_{y}(c;s)\}\). It follows from a Taylor series expansion and the convergence rate of \(\hat{\boldsymbol{\zeta }}_{y}(c;s)\) that

$$\displaystyle{ \hat{d}_{y}(c;s) = \frac{\sqrt{nh}\mathbb{P}_{n}\left (\hat{\omega }I(Y = y)K_{h}\{\hat{\mathcal{E}}_{1}(s)\}\left [\hat{M}(c) -\mathcal{G}_{y}^{0}\{c,s;\phi (\hat{p}_{1})\}\right ]\right )} {\tau \{y;\phi (s)\}\dot{g}\{a_{y}(c;s)\}} + o_{p}(1), }$$
(7)

where \(\mathcal{G}_{y}^{0}(c,s;e) = g\left [a_{y}(c;s) + b_{y}(c;s)\{e -\phi (s)\}\right ]\). Furthermore,

$$\displaystyle{\hat{d}_{y}(c;s) = \frac{\sqrt{nh}\mathbb{P}_{n}\left (\omega I(Y = y)K_{h}\{\hat{\mathcal{E}}_{1}(s)\}\left [\hat{M}(c) -\mathcal{G}_{y}^{0}\{c,s;\phi (\hat{p}_{1})\}\right ]\right )} {\tau \{y;\phi (s)\}\dot{g}\{a_{y}(c;s)\}} + o_{p}(1),}$$

since \(\sup _{t\leq t_{0}}\left \vert \hat{G}_{X,Z}(t) - G_{X,Z}(t)\right \vert = O_{p}({n}^{-1/2})\). We next show that \(\hat{d}_{y}(c;s)\) is asymptotically equivalent to

$$\displaystyle{ \tilde{d}_{y}(c;s) = \frac{\sqrt{nh}\mathbb{P}_{n}\left (\omega I(Y = y)K_{h}\{\bar{\mathcal{E}}_{1}(s)\}\left [\bar{M}(c) -\mathcal{G}_{y}^{0}\left \{c,s;\phi (\bar{p}_{1})\right \}\right ]\right )} {\tau \{y;\phi (s)\}\dot{g}\{a_{y}(c;s)\}}, }$$
(8)

where \(\bar{\mathcal{E}}_{1}(s) =\phi (\bar{p}_{1}) -\phi (s)\). From (7), (8), and the fact that τ{y; ϕ(s)} is bounded away from 0 uniformly in s, we have

$$\displaystyle\begin{array}{rcl} & & \vert \hat{d}_{y}(s) -\tilde{ d}_{y}(s)\vert {}\\ & \lesssim & {h}^{\frac{1} {2} }\left \vert \int K_{h}\{e -\phi (s)\}d\mathbb{G}_{n}\left (I(Y = y)\omega \left [\hat{M}(c)I\{\phi (\hat{p}_{1}) \leq e\}\right.\right.\right. {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left.\left.\left.-\bar{M}(c)I\{\phi (\bar{p}_{1}) \leq e\}\right ]\right )\right \vert {}\\ & +& {h}^{\frac{1} {2} }\left \vert \int K_{h}\{e -\phi (s)\}\mathcal{G}_{y}(c,s;e)d\mathbb{G}_{n}\left (I(Y = y)\left [\omega I\{\phi (\hat{p}_{1}) \leq e\}\right.\right.\right. {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left.\left.\left.-\omega I\{\phi (\bar{p}_{1}) \leq e\}\right ]\right )\right \vert {}\\ & +& \left \vert \sqrt{nh}\int K_{h}\{e -\phi (s)\}d\mathbb{P}\left (I(Y = y)\left [\omega \hat{M}(c)I\{\phi (\hat{p}_{1}) \leq e\}\right.\right.\right. {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left.\left.\left.-\omega \bar{M}(c)I\{\phi (\bar{p}_{1}) \leq e\}\right ]\right )\right \vert {}\\ & +& \left \vert \sqrt{nh}\int K_{h}\{e -\phi (s)\}d\mathbb{P}\left (I(Y = y)\left [\omega \mathcal{G}_{y}\{c,s;\phi (\hat{p}_{1})\}I\{\phi (\hat{p}_{1}) \leq e\}\right.\right.\right. {}\\ & & -\left.\left.\left.\omega \mathcal{G}_{y}\{c,s;\phi (\bar{p}_{1})\}I\{\phi (\bar{p}_{1}) \leq e\}\right ]\right )\right \vert {}\\ & \lesssim & {h}^{\frac{1} {2} }\left \|\mathbb{G}_{n}\right \|_{\mathcal{F}_{\delta }} + {h}^{\frac{1} {2} }\left \|\mathbb{G}_{n}\right \|_{\mathcal{H}_{\delta }} + O_{p}\{{(nh)}^{1/2}\vert \hat{\beta }-\beta _{0}\vert + \vert \hat{\gamma }-\gamma _{0}\vert + {h}^{2}\}, {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} & & \mathcal{F}_{\delta } = \left \{\omega I\{g_{2}(\gamma ^{\prime}w) \geq c\}I\left [\phi \{g_{1}(\beta ^{\prime}v)\} \leq e\right ]\right. {}\\ & & \qquad \qquad \quad \left.-\omega I\{g_{2}(\gamma _{0}^{\prime}w) \geq c\}I\left [\phi \{g_{1}(\beta _{0}^{\prime}v)\} \leq e\right ]: \vert \gamma -\gamma _{0}\vert + \vert \beta -\beta _{0}\vert \leq \delta,e\right \} {}\\ \end{array}$$

is the class of functions indexed by \(\gamma\), \(\beta\) and \(e\). By the maximum inequality of van der Vaart and Wellner [47] and the fact that \(\vert \hat{\beta }-\beta _{0}\vert + \vert \hat{\gamma }-\gamma _{0}\vert = O_{p}({n}^{-\frac{1} {2} })\) from Uno et al. [44], we have \({h}^{\frac{1} {2} }\left \|\mathbb{G}_{n}\right \|_{\mathcal{F}_{\delta }} = O_{p}\{{h}^{-\frac{1} {2} }{n}^{-\frac{1} {4} }\log (n)\}\) and \({h}^{\frac{1} {2} }\left \|\mathbb{G}_{n}\right \|_{\mathcal{H}_{\delta }} = O_{p}\{{h}^{-\frac{1} {2} }{n}^{-\frac{1} {4} }\log (n)\}\). It follows that \(\sup _{s}\vert \hat{d}_{y}(s) -\tilde{ d}_{y}(s)\vert = o_{p}(1)\). Then, by the delta method,

$$\displaystyle\begin{array}{rcl} \hat{\mathcal{W}}_{\mathcal{S}_{y}}(c;s) = \sqrt{nh}\{\hat{\mathcal{S}}_{y}(c;s) -\mathcal{S}_{y}(c;s)\} \simeq \sqrt{nh}\ \mathbb{P}_{n}\left [K_{h}\{\bar{\mathcal{E}}_{1}(s)\}\mathcal{D}_{\mathcal{S}_{y}}(c;s)\right ]& &{}\end{array}$$
(9)
$$\displaystyle\begin{array}{rcl} \mbox{ where}\qquad \mathcal{D}_{\mathcal{S}_{y}}(c;s) =\tau \{ y;\phi {(s)\}}^{-1}\omega I(Y = y)\left \{\bar{M}(c) -\mathcal{S}_{ y}(c;s)\right \}& &{}\end{array}$$
(10)

Using the same arguments as for establishing the uniform convergence rate of conditional Kaplan-Meier estimators [12, 16], we obtain (6). Furthermore, following similar arguments to those given in Dabrowska [11, 13], \(\hat{\mathcal{W}}_{\mathcal{S}_{y}}(c;s)\) converges weakly to a Gaussian process in \(c\) for each \(s\). Note that, as with all kernel estimators, \(\hat{\mathcal{W}}_{\mathcal{S}_{y}}(c;s)\) does not converge as a process in \(s\).

Uniform Consistency of \(\widehat{\mbox{pAUC}}_{f}(s)\)

Next we establish the uniform convergence rate for \(\widehat{\mbox{ ROC}}(u;s)\). To this end, we write

$$\displaystyle{\widehat{\mbox{ ROC}}(u;s) -\mbox{ ROC}(u;s) = \hat{\varepsilon }_{1}(u;s) + \hat{\varepsilon }_{0}(u;s),}$$

where \(\hat{\varepsilon }_{1}(u;s) =\hat{\mathcal{S}}_{1}\{\hat{\mathcal{S}}_{0}^{-1}(u;s);s\} -\mathcal{S}_{1}\{\hat{\mathcal{S}}_{0}^{-1}(u;s);s\}\) and \(\hat{\varepsilon }_{0}(u;s) = \mathcal{S}_{1}\{\hat{\mathcal{S}}_{0}^{-1}(u;s);s\} -\mathcal{S}_{1}\{\mathcal{S}_{0}^{-1}(u;s);s\}\). It follows from (6) that \(\sup _{u;s}\vert \hat{\varepsilon }_{1}(u;s)\vert \leq \sup _{c;s}\vert \hat{\mathcal{S}}_{1}(c;s) -\mathcal{S}_{1}(c;s)\vert \). Let \(\hat{\mathcal{I}}(u;s) = \mathcal{S}_{0}\{\hat{\mathcal{S}}_{0}^{-1}(u;s);s\}\). Then \(\hat{\varepsilon }_{0}(u;s) = \mbox{ROC}\{\hat{\mathcal{I}}(u;s);s\} -\mbox{ROC}(u;s)\). Noting that \(\sup _{u}\vert \hat{\mathcal{I}}(u;s)-u\vert \leq \sup _{u}\vert \hat{\mathcal{I}}(u;s)-\hat{\mathcal{S}}_{0}\{\hat{\mathcal{S}}_{0}^{-1}(u;s);s\}\vert +{n}^{-1} \leq \sup _{c}\vert \mathcal{S}_{0}(c;s)-\hat{\mathcal{S}}_{0}(c;s)\vert +{n}^{-1} = O_{p}\{{(nh)}^{-1/2}\log n\}\), we have \(\hat{\varepsilon }_{0}(u;s) = O_{p}\{{(nh)}^{-1/2}\log n\}\) by the continuity and boundedness of \(\dot{\mbox{ROC}}(u;s)\). Therefore,

$$\displaystyle{\sup _{u,s}\vert \widehat{\mbox{ ROC}}(u;s) -\mbox{ ROC}(u;s)\vert = O_{p}\{{(nh)}^{-1/2}\log n\}}$$

which implies

$$\displaystyle\begin{array}{rcl} & & \sup _{s\in \mathcal{I}_{h}}\left \vert \widehat{\mbox{ pAUC}}_{f}(s) -\mbox{ pAUC}_{f}(s)\right \vert {}\\ & \lesssim & \sup _{s\in \mathcal{I}_{h}}\int _{0}^{f}\left \vert \widehat{\mbox{ ROC}}(u;s) -\mbox{ ROC}(u;s)\right \vert du = O_{ p}\{{(nh)}^{-\frac{1} {2} }\log n\}. {}\\ \end{array}$$

This establishes the uniform consistency of \(\widehat{\mbox{ pAUC}}_{f}(s)\).

Asymptotic Distribution of \(\hat{\mathcal{W}}_{\mbox{ pAUC}_{f}}(s)\)

To derive the asymptotic distribution for \(\hat{\mathcal{W}}_{\mbox{ pAUC}_{f}}(s)\), we first derive asymptotic expansions for \(\hat{\mathcal{W}}_{\mbox{ ROC}}(u;s) = \sqrt{nh}\{\widehat{\mbox{ ROC}}(u;s) -\mbox{ ROC}(u;s)\} = \sqrt{nh}\ \hat{\varepsilon }_{1}(u;s) + \sqrt{nh}\ \hat{\varepsilon }_{0}(u;s)\). From the weak convergence of \(\hat{\mathcal{W}}_{S_{y}}(c;s)\) in c, the approximation in (9), and the consistency of \(\hat{\mathcal{S}}_{0}^{-1}(c;s)\) given in the section “Uniform Consistency of \(\widehat{\mbox{ pAUC}}_{f}(s)\)” in Appendix 1, we have

$$\displaystyle\begin{array}{rcl} \sqrt{ nh}\ \hat{\varepsilon }_{1}(u;s)& \simeq & \sqrt{nh}\left [\hat{\mathcal{S}}_{1}\{\mathcal{S}_{0}^{-1}(u;s);s\} -\mbox{ ROC}(u;s)\right ] {}\\ & \simeq & \sqrt{nh}\mathbb{P}_{n}\left [K_{h}\{\bar{\mathcal{E}}_{1}(s)\}\mathcal{D}_{\mathcal{S}_{1}}\{\mathcal{S}_{0}^{-1}(u;s);s\}\right ] {}\\ \end{array}$$

On the other hand, from the uniform convergence \(\hat{\mathcal{I}}(u;s) \rightarrow u\) and the weak convergence of \(\hat{\mathcal{W}}_{\mathcal{S}_{0}}(c;s)\) in c, we have

$$\displaystyle\begin{array}{rcl} \sqrt{ nh}\left \{u -\hat{\mathcal{I}}(u;s)\right \}& \simeq & \sqrt{nh}\left [\hat{{\mathcal{I}}}^{-1}\left \{\hat{\mathcal{I}}(u;s);s\right \} -\hat{\mathcal{I}}(u;s)\right ] \simeq \sqrt{nh}\left \{\hat{{\mathcal{I}}}^{-1}(u;s) - u\right \} {}\\ & \simeq & \sqrt{nh}\left [\hat{\mathcal{S}}_{0}\{\mathcal{S}_{0}^{-1}(u;s);s\} - u\right ] {}\\ \end{array}$$

This, together with a Taylor series expansion and the expansion given in (9), implies that

$$\displaystyle\begin{array}{rcl} \sqrt{ nh}\ \hat{\varepsilon }_{0}(u;s) \simeq -\dot{\mbox{ ROC}}(u;s)\mathbb{P}_{n}\left [K_{h}\{\bar{\mathcal{E}}_{1}(s)\}\mathcal{D}_{\mathcal{S}_{0}}\left \{\mathcal{S}_{0}^{-1}(u;s);s\right \}\right ]& & {}\\ \end{array}$$

It follows that

$$\displaystyle\begin{array}{rcl} \hat{\mathcal{W}}_{\mbox{ pAUC}_{f}}(s) \simeq \sqrt{nh}\mathbb{P}_{n}\left [K_{h}\{\bar{\mathcal{E}}_{1}(s)\}\mathcal{D}_{\mbox{ pAUC}_{f}}(s)\right ]& &{}\end{array}$$
(11)
$$\displaystyle\begin{array}{rcl} \mbox{ where}\quad \mathcal{D}_{\mbox{ pAUC}_{f}}(s) =\int _{ 0}^{f}\left [\mathcal{D}_{ \mathcal{S}_{1}}\left \{\mathcal{S}_{0}^{-1}(u;s);s\right \} -\dot{\mbox{ ROC}}(u;s)\mathcal{D}_{ \mathcal{S}_{0}}\left \{\mathcal{S}_{0}^{-1}(u;s);s\right \}\right ]du.& &{}\end{array}$$
(12)

It then follows from a central limit theorem that, for any fixed s, \(\hat{\mathcal{W}}_{\mbox{ pAUC}_{f}}(s)\) converges in distribution to a normal random variable with mean 0 and variance

$$\displaystyle{\sigma _{\mbox{ pAUC}_{f}}^{2}(s) = m_{ 2}{\left [\tau \{1;\phi (s)\}\dot{F}_{\phi (\bar{p}_{1})}(s)\right ]}^{-1}\sigma _{ 1}^{2}(s) + m_{ 2}{\left [\tau \{0;\phi (s)\}\dot{F}_{\phi (\bar{p}_{1})}(s)\right ]}^{-1}\sigma _{ 0}^{2}(s),}$$

where \(\dot{F}_{\phi (\bar{p}_{1})}(s)\) is the density function of \(\phi (\bar{p}_{1})\),

$$\displaystyle\begin{array}{rcl} \sigma _{1}^{2}(s)& =& \mbox{ E}\!\!\left.\left (\!G{({T}^{\dag })}^{-1}\!{\left [\int _{ 0}^{f}\bar{M}\{\mathcal{S}_{ 0}^{-1}(u;s)\}du -\mbox{ pAUC}_{ f}(s)\!\right ]}^{2}\right \vert \bar{p}_{ 1} = s,{Y }^{\dag } = 1\right )\!\!,\quad \mbox{ and} {}\\ \sigma _{0}^{2}(s)& =& \mbox{ E}\left (G{(t_{ 0})}^{-1}\left [\int _{ 0}^{f}\bar{M}\{\mathcal{S}_{ 0}^{-1}(u;s)\}d\mbox{ ROC}(u;s)\right.\right. {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left.{\left.-\int _{0}^{f}ud\mbox{ ROC}(u;s)\right ]}^{2}\mid \bar{p}_{ 1} = s,{Y }^{\dag } = 0\right ). {}\\ \end{array}$$

Justification for the Resampling Methods

To justify the resampling method, we first note that

$$\displaystyle{{\vert \beta }^{{\ast}}-\hat{\beta }\vert + {\vert \gamma }^{{\ast}}-\hat{\gamma }\vert +\sup _{ t\leq t_{0}}\vert G_{X,Z}^{{\ast}}(t) -\hat{G}_{ X,Z}(t)\vert = O_{p}({n}^{-\frac{1} {2} }).}$$

It follows from arguments similar to those given in Appendix 1 and in the Appendix of [7] that \(\mathcal{W}_{S_{y}}^{{\ast}}(c;s) = \sqrt{nh}\{\mathcal{S}_{y}^{{\ast}}(c;s) -\hat{\mathcal{S}}_{y}(c;s)\} \simeq {n}^{-\frac{1}{2}}{h}^{\frac{1}{2}}\sum _{i=1}^{n}\hat{\mathcal{D}}_{\mathcal{S}_{ y}i}(c;s)\xi _{i}\), where \(\hat{\mathcal{D}}_{\mathcal{S}_{y}i}(c;s)\) is obtained by replacing all theoretical quantities in \(\mathcal{D}_{\mathcal{S}_{y}}(c;s)\) given in (10) with their estimated counterparts for the ith subject. This, together with arguments similar to those given above for the expansion of \(\hat{\mathcal{W}}_{\mbox{ ROC}}(u;s)\), implies that

$$\displaystyle\begin{array}{rcl} \mathcal{W}_{\mbox{ pAUC}_{f}}^{{\ast}}(s)& =& \int _{ 0}^{f}\sqrt{nh}\{{\mbox{ ROC}}^{{\ast}}(u;s) -\widehat{\mbox{ ROC}}(u;s)\}du {}\\ & \simeq & {n}^{-\frac{1}{2}}{h}^{\frac{1}{2}}\sum _{i=1}^{n}K_{h}\{\hat{\mathcal{E}}_{1i}(s)\}\hat{\mathcal{D}}_{\mbox{ pAUC}_{ f},i}(s)\xi _{i}, {}\\ \end{array}$$

where \(\hat{\mathcal{D}}_{\mbox{ pAUC}_{f},i}(s) =\int _{ 0}^{f}[\hat{\mathcal{D}}_{\mathcal{S}_{1}i}\{\hat{\mathcal{S}}_{0}^{-1}(u;s);s\} -\dot{\mbox{ ROC}}(u;s)\hat{\mathcal{D}}_{\mathcal{S}_{0}i}\{\hat{\mathcal{S}}_{0}^{-1}(u;s);s\}]du\). Conditional on the data, \(\mathcal{W}_{\mbox{ pAUC}_{f}}^{{\ast}}(s)\) is approximately normally distributed with mean 0 and variance

$$\displaystyle{\hat{\sigma }_{\mbox{ pAUC}_{f}}^{2}(s) = {n}^{-1}h\sum _{ i=1}^{n}{K_{ h}\{\hat{\mathcal{E}}_{1i}(s)\}}^{2}\hat{\mathcal{D}}_{\mbox{ pAUC}_{f},i}{(s)}^{2}.}$$

Using the consistency of the proposed estimators along with arguments similar to those given above, it is not difficult to show that this variance converges to \(\sigma _{\mbox{ pAUC} _{f}}^{2}(s)\) as \(n \rightarrow \infty \). Therefore, the empirical distribution obtained from the perturbed sample can be used to approximate the distribution of \(\hat{\mathcal{W}}_{\mbox{ pAUC}_{f}}(s)\).
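The perturbation scheme above can be made concrete with a small numerical sketch. This is not the authors' implementation (their R code is available on request): a Nadaraya-Watson smoother stands in for the kernel-smoothed accuracy estimator, the perturbation weights \(\xi _{i}\) are unit-mean, unit-variance exponentials, and all function and variable names are illustrative.

```python
import numpy as np

def kernel_smoother(s0, s, v, h, xi=None):
    """Nadaraya-Watson estimate of E[v | S = s0], with optional
    perturbation weights xi multiplying each subject's contribution."""
    if xi is None:
        xi = np.ones_like(v)
    k = np.exp(-0.5 * ((s - s0) / h) ** 2)  # Gaussian kernel
    return np.sum(xi * k * v) / np.sum(xi * k)

def perturbation_se(s0, s, v, h, B=500, rng=None):
    """Approximate the standard error of the smoother at s0 by drawing
    i.i.d. exp(1) weights (mean 1, variance 1) and recomputing B times."""
    rng = np.random.default_rng(rng)
    stats = [kernel_smoother(s0, s, v, h, xi=rng.exponential(1.0, size=len(v)))
             for _ in range(B)]
    return np.std(stats, ddof=1)

# toy data: smooth signal plus noise
rng = np.random.default_rng(0)
s = rng.uniform(0, 1, 400)
v = np.sin(2 * np.pi * s) + rng.normal(0, 0.3, 400)
est = kernel_smoother(0.5, s, v, h=0.1)
se = perturbation_se(0.5, s, v, h=0.1, B=300, rng=1)
```

With B replicates, est plus or minus 1.96 times se gives a pointwise interval at a fixed s; the simultaneous band in the text instead standardizes by this estimated variance and calibrates the supremum with the extreme value limit below.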

We now show that, after proper standardization, the supremum-type statistic Γ converges weakly. To this end, we first note that arguments similar to those given in Appendix 1 can be used to show that \(\sup _{s\in \mathcal{I}_{h}}\vert \hat{\sigma }_{\mbox{ pAUC}_{f}}^{2}(s) -\sigma _{\mbox{ pAUC}_{f}}^{2}(s)\vert = o_{p}({n}^{-\delta })\) and

$$\displaystyle{\varGamma =\sup _{s\in \mathcal{I}_{h}}\left \vert \frac{\sqrt{nh}\mathbb{P}_{n}\left [K_{h}\{\bar{\mathcal{E}}_{1}(s)\}\mathcal{D}_{\mbox{ pAUC}_{f}}(s)\right ]} {\sigma _{\mbox{ pAUC}_{f}}(s)} \right \vert + o_{p}({n}^{-\delta }),}$$

for some small positive constant δ. Using arguments similar to those in Bickel and Rosenblatt [2], we have

$$\displaystyle{pr\left \{a_{n}\left (\varGamma -d_{n}\right ) < x\right \} \rightarrow {e}^{-2{e}^{-x} },}$$

where \(a_{n} ={ \left [2\log \left \{(\rho _{u} -\rho _{l})/h\right \}\right ]}^{\frac{1} {2} }\) and \(d_{n} = a_{n} + a_{n}^{-1}\log \left \{\int \dot{K}{(t)}^{2}dt/(4m_{2}\pi )\right \}\). Now to justify the resampling procedure for constructing the confidence interval, we note that

$$\displaystyle{\mathcal{W}_{\mbox{ pAUC}_{f}}^{{\ast}}(s) = {n}^{-\frac{1} {2} }{h}^{\frac{1} {2} }\sum _{i=1}^{n}K_{h}\{\hat{\mathcal{E}}_{1i}(s)\}\hat{\mathcal{D}}_{\mbox{ pAUC}_{ f},i}(s)(\xi _{i} - 1) {+\varepsilon }^{{\ast}}(s),}$$

where \(pr\{\sup _{s\in \varOmega (h)}\vert {n{}^{\delta }\varepsilon }^{{\ast}}(s)\vert \geq e\mid \mbox{ data}\} \rightarrow 0\) in probability. Therefore,

$$\displaystyle{{\varGamma }^{{\ast}} =\sup _{ s\in \mathcal{I}_{h}}\left \vert \frac{{n}^{-\frac{1} {2} }{h}^{\frac{1} {2} }\sum _{i=1}^{n}K_{h}\{\hat{\mathcal{E}}_{1i}(s)\}\hat{\mathcal{D}}_{\mbox{ pAUC}_{ f},i}(s)(\xi _{i} - 1)} {\sigma _{\mbox{ pAUC}_{f}}(s)} \right \vert + \vert \varepsilon _{\mbox{ sup}}^{{\ast}}\vert,}$$

where \(pr\{\vert {n}^{\delta }\varepsilon _{\mbox{ sup}}^{{\ast}}\vert \geq e\vert \mbox{ data}\} \rightarrow 0\). It follows from arguments similar to those given in Tian et al. [40] and Zhao et al. [53] that

$$\displaystyle{\sup \left \vert pr\left \{a_{n}{(\varGamma }^{{\ast}}- d_{ n}) < x\vert \mbox{ data}\right \} - {e}^{-2{e}^{-x} }\right \vert \rightarrow 0,}$$

in probability as \(n \rightarrow \infty \). Thus, the conditional distribution of \(a_{n}{(\varGamma }^{{\ast}}- d_{n})\) can be used to approximate the unconditional distribution of \(a_{n}(\varGamma -d_{n})\). When \(h_{0}\neq h_{1}\), in general, the standardized Γ does not converge to the extreme value distribution. However, when \(h_{0}/h_{1} \rightarrow k \in (0,\infty )\), the distribution of a suitably standardized version of Γ can still be approximated by that of the corresponding standardized \({\varGamma }^{{\ast}}\) conditional on the data [21].
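The extreme-value calibration can be turned into a critical value for the simultaneous band. The sketch below computes \(a_{n}\), \(d_{n}\), and the level-α cutoff obtained by solving \({e}^{-2{e}^{-x}} = 1-\alpha \) for x. It assumes a standard Gaussian kernel, for which \(\int \dot{K}{(t)}^{2}dt = 1/(4\sqrt{\pi })\) and \(m_{2} =\int K{(t)}^{2}dt = 1/(2\sqrt{\pi })\); these kernel constants and the function names are illustrative assumptions, not quantities taken from the paper.

```python
import math

def gumbel_constants(h, rho_l, rho_u):
    """a_n and d_n for the sup statistic; standard Gaussian kernel assumed,
    so int Kdot(t)^2 dt = 1/(4 sqrt(pi)) and m2 = 1/(2 sqrt(pi))."""
    kdot2 = 1.0 / (4.0 * math.sqrt(math.pi))
    m2 = 1.0 / (2.0 * math.sqrt(math.pi))
    a_n = math.sqrt(2.0 * math.log((rho_u - rho_l) / h))
    d_n = a_n + math.log(kdot2 / (4.0 * m2 * math.pi)) / a_n
    return a_n, d_n

def sup_critical_value(alpha, h, rho_l, rho_u):
    """Cutoff for Gamma at level alpha: solve exp(-2 e^{-x}) = 1 - alpha,
    then invert the standardization a_n * (Gamma - d_n) < x."""
    x = -math.log(-math.log(1.0 - alpha) / 2.0)
    a_n, d_n = gumbel_constants(h, rho_l, rho_u)
    return d_n + x / a_n

crit = sup_critical_value(0.05, h=0.1, rho_l=0.1, rho_u=0.9)
```

A 95% simultaneous band over \(\mathcal{I}_{h}\) would then be widened (or a test rejected) wherever the standardized sup statistic exceeds `crit`.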

Appendix 2

Bandwidth Selection for \(\mbox{ pAUC}_{f}(s)\)

The choice of the bandwidths \(h_{0}\) and \(h_{1}\) is important for making inference about \(\mathcal{S}_{y}(c;s)\) and consequently \(\mbox{ pAUC}_{f}(s)\). Here we propose a two-stage K-fold cross-validation procedure to obtain the optimal bandwidths for \(\hat{\mathcal{S}}_{0,h_{0}}^{-1}(u;s)\) and \(\hat{\mathcal{S}}_{1,h_{1}}(c;s)\) sequentially. Specifically, we randomly split the data into K disjoint subsets of about equal size, denoted by \(\{\mathcal{J}_{k},k = 1,\cdots \,,K\}\). The two-stage procedure is as follows:

  1. (I)

    Motivated by the fact that \(\mathcal{S}_{0}^{-1}(u;s)\) is essentially the (1 − u)-th quantile of the conditional distribution of \(\bar{p}_{2}(X,Z)\) given \({Y }^{\dag } = 0\) and \(\bar{p}_{1}(X) = s\), for each k, we use all the observations not in \(\mathcal{J}_{k}\) to estimate \(q_{0,1-u}(s) = \mathcal{S}_{0}^{-1}(u;s)\) by obtaining \(\{\hat{\alpha }_{0}(s;h),\hat{\alpha }_{1}(s;h)\}\), the minimizer of

    $$\displaystyle{\sum _{j\in \mathcal{J}_{l},l\neq k}I(Y _{j} = 0)\hat{w}_{j}K_{h}\{\hat{\mathcal{E}}_{1j}(s)\}\rho _{1-u}\left [\hat{p}_{2j} - g\{\alpha _{0} +\alpha _{1}\hat{\mathcal{E}}_{1j}(s)\}\right ]}$$

    w.r.t. \((\alpha _{0},\alpha _{1})\), where \(\rho _{\tau }(e)\) is the check function defined as \(\rho _{\tau }(e) =\tau e\) if \(e \geq 0\) and \(\rho _{\tau }(e) = (\tau -1)e\) otherwise. Let \(\hat{q}_{0,1-u}^{(-k)}(s;h) = g\{\hat{\alpha }_{0}(s;h)\}\) denote the resulting estimator of \(q_{0,1-u}(s)\). With observations in \(\mathcal{J}_{k}\), we obtain

    $$\displaystyle{Err_{k}^{(q_{0})}(h) =\sum _{ i\in \mathcal{J}_{k}}(1 - Y _{i})\hat{w}_{i}\int _{0}^{f}\rho _{ 1-u}\left [\hat{p}_{2i} -\hat{ q}_{0,1-u}^{(-k)}(\hat{p}_{ 1i};h)\right ]du.}$$

    Then, we let \(h_{0}^{\mbox{ opt}} =\arg \min _{ h}\sum _{k=1}^{K}Err_{ k}^{(q_{0})}(h)\).

  2. (II)

    Next, to find an optimal \(h_{1}\) for \(\hat{\mathcal{S}}_{1,h_{1}}(\cdot;s)\), we choose an error function that directly relates to \(\mbox{ pAUC}_{f}(s) = -\int _{\mathcal{S}_{0}^{-1}(f;s)}^{\infty }\mathcal{S}_{1}(c;s)d\mathcal{S}_{0}(c;s)\). Specifically, noting that

    $$\displaystyle{ E\left (\left.\int _{\mathcal{S}_{0}^{-1}(f;s)}^{\infty }\left [I\left \{g_{ 2}(\gamma ^{\prime}W_{i}) \geq c\right \} -\mathcal{S}_{1}(c; s)\right ]d\mathcal{S}_{0}(c; s)\right \vert Y _{i}^{\dag } = 1,g_{ 1}(\beta ^{\prime}X_{i}) = s\right ) = 0, }$$

    we use the corresponding mean integrated squared error for \(I\{g_{2}(\gamma ^{\prime}W_{i}) \geq c\} -\mathcal{S}_{1}(c;s)\) as the error function. For each k, we use all the observations not in \(\mathcal{J}_{k}\) to obtain an estimate of \(\mathcal{S}_{1}(c;s)\) via (4), denoted by \(\hat{\mathcal{S}}_{1,h}^{(-k)}(c;s)\). Then, with the observations in \(\mathcal{J}_{k}\), we calculate the prediction error

    $$\displaystyle\begin{array}{rcl} Err_{k}^{(\mathcal{S}_{1})}(h)& =& -\sum _{ i\in \mathcal{J}_{k},Y _{i}=1}\hat{w}_{i} {}\\ & & \int _{\hat{\mathcal{S}}_{0,h_{ 0}}^{-1}(f;\hat{p}_{1i})}^{\infty }{\left \{I\left (\hat{p}_{ 2i} \geq c\right ) -\hat{\mathcal{S}}_{1,h}^{(-k)}\left (c;\hat{p}_{ 1i}\right )\right \}}^{2}d\hat{\mathcal{S}}_{ 0,h_{0}}(c;\hat{p}_{1i}). {}\\ \end{array}$$

    We let \(h_{1}^{\mbox{ opt}} =\arg \min _{ h}\sum _{k=1}^{K}Err_{ k}^{(\mathcal{S}_{1})}(h)\).

Since the order of \(h_{y}^{\mbox{ opt}}\) is expected to be \({n}^{-1/5}\) [19], the bandwidth we use for estimation is \(h_{y} = h_{y}^{\mbox{ opt}} \times {n}^{-d_{0}}\) with \(0 < d_{0} < 3/10\), such that \(h_{y} = {n}^{-\nu }\) with \(1/5 <\nu < 1/2\). This ensures that the resulting functional estimator \(\mathcal{S}_{y,h_{y}}(c;s)\) with the data-dependent smoothing parameter has the desirable large sample properties described above.
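Stage (I) can be sketched numerically as follows. This is a simplified illustration rather than the authors' implementation: it uses a local-constant weighted quantile in place of the local-linear check-loss fit, takes g to be the identity, sets the IPW weights \(\hat{w}_{j}\) to one, and evaluates a single quantile level \(\tau = 1 - u\) rather than integrating the loss over u; all variable names are hypothetical.

```python
import numpy as np

def weighted_quantile(v, w, q):
    """q-th quantile of v under weights w: the local-constant minimizer
    of the weighted check loss."""
    order = np.argsort(v)
    v, w = v[order], w[order]
    cum = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cum, q, side="left").clip(0, len(v) - 1)]

def check_loss(e, tau):
    """rho_tau(e) = tau*e if e >= 0 and (tau - 1)*e otherwise."""
    return np.where(e >= 0, tau * e, (tau - 1) * e)

def cv_bandwidth_quantile(s, p2, h_grid, tau, K=5, seed=0):
    """K-fold CV bandwidth for a kernel-localized quantile of p2 given s
    (all rows here play the role of the controls, Y = 0)."""
    rng = np.random.default_rng(seed)
    folds = rng.integers(0, K, size=len(s))
    errs = []
    for h in h_grid:
        err = 0.0
        for k in range(K):
            tr, te = folds != k, folds == k
            for i in np.flatnonzero(te):
                w = np.exp(-0.5 * ((s[tr] - s[i]) / h) ** 2)  # Gaussian kernel
                qhat = weighted_quantile(p2[tr], w, tau)
                err += check_loss(p2[i] - qhat, tau)
        errs.append(err)
    return h_grid[int(np.argmin(errs))]

rng = np.random.default_rng(1)
s = rng.uniform(0, 1, 300)
p2 = s + rng.normal(0, 0.1, 300)
h_opt = cv_bandwidth_quantile(s, p2, h_grid=np.array([0.05, 0.1, 0.2, 0.4]), tau=0.9)
h_final = h_opt * 300 ** (-0.1)  # undersmoothing: h = h_opt * n^(-d0), 0 < d0 < 3/10
```

In the actual procedure the check loss would be accumulated over the range of u up to f, and the final line implements the undersmoothing adjustment \(h_{y} = h_{y}^{\mbox{ opt}} \times {n}^{-d_{0}}\).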

Bandwidth Selection for IDI(s)

As with bandwidth selection for pAUC, we propose a K-fold cross-validation procedure to choose the optimal bandwidth \(h_{1}\) for \(\mbox{ IS}(s) =\int _{ 0}^{1}\mathcal{S}_{1}(c;s)dc\) and \(h_{0}\) for \(\mbox{ IP}(s) =\int _{ 0}^{1}\mathcal{S}_{0}(c;s)dc\) separately. The procedure is as follows: we randomly split the data into K disjoint subsets of about equal size, denoted by \(\{\mathcal{J}_{k},k = 1,\cdots \,,K\}\). Motivated by (3), for each k, we use all the observations not in \(\mathcal{J}_{k}\) to estimate \(\int _{0}^{1}\mathcal{S}_{y}(c,s)dc\) by obtaining \(\{\hat{\varphi }_{0}^{(y)}(s;h),\hat{\varphi }_{1}^{(y)}(s;h)\}\) for y = 0, 1, the solution to the estimating equation

$$\displaystyle{\sum _{j\in \mathcal{J}_{l},l\neq k}I(Y _{j} = y)\hat{\omega }_{j}K_{h}\{\hat{\mathcal{E}}_{1j}(s)\}\left [\hat{p}_{2j} - g\{\varphi _{0}^{(y)} +\varphi _{ 1}^{(y)}\hat{\mathcal{E}}_{ 1j}(s)\}\right ] = 0,}$$

w.r.t. \((\varphi _{0}^{(y)},\varphi _{1}^{(y)})\). Let \(\widehat{{\mbox{ IS}}}^{(-k)}(s;h) = g\{\hat{\varphi }_{0}^{(1)}(s;h)\}\) and \(\widehat{{\mbox{ IP}}}^{(-k)}(s;h) = g\{\hat{\varphi }_{0}^{(0)}(s;h)\}\). With observations in \(\mathcal{J}_{k}\), we obtain

$$\displaystyle{Err_{k}^{(\mbox{ IS})}(h) =\sum _{ i\in \mathcal{J}_{k}}Y _{i}\hat{\omega }_{i}{\left \{\hat{p}_{2i} -\widehat{{\mbox{ IS}}}^{(-k)}(\hat{p}_{ 1i};h)\right \}}^{2},}$$

or

$$\displaystyle{Err_{k}^{(\mbox{ IP})}(h) =\sum _{ i\in \mathcal{J}_{k}}(1 - Y _{i})\hat{\omega }_{i}{\left \{\hat{p}_{2i} -\widehat{{\mbox{ IP}}}^{(-k)}(\hat{p}_{ 1i};h)\right \}}^{2}.}$$

Then, we let \(h_{1}^{\mbox{ opt}} =\arg \min _{h}\sum _{k=1}^{K}Err_{k}^{(\mbox{ IS})}(h)\) and \(h_{0}^{\mbox{ opt}} =\arg \min _{h}\sum _{k=1}^{K}Err_{k}^{(\mbox{ IP})}(h)\).
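This cross-validation admits a similarly minimal sketch (again with g the identity, unit IPW weights \(\hat{\omega }_{i}\), and a local-constant smoother in place of the local-linear estimating equation; the data and names below are illustrative).

```python
import numpy as np

def nw_mean(s0, s, v, h):
    """Local-constant (Nadaraya-Watson) estimate of E[v | S = s0]."""
    k = np.exp(-0.5 * ((s - s0) / h) ** 2)  # Gaussian kernel
    return np.sum(k * v) / np.sum(k)

def cv_bandwidth_mean(s, p2, h_grid, K=5, seed=0):
    """K-fold CV choosing h to minimize the out-of-fold squared error of
    the smoothed mean of p2 given s, within one outcome group (Y = y)."""
    rng = np.random.default_rng(seed)
    folds = rng.integers(0, K, size=len(s))
    errs = []
    for h in h_grid:
        err = 0.0
        for k in range(K):
            tr = folds != k
            for i in np.flatnonzero(~tr):
                err += (p2[i] - nw_mean(s[i], s[tr], p2[tr], h)) ** 2
        errs.append(err)
    return h_grid[int(np.argmin(errs))]

rng = np.random.default_rng(2)
s = rng.uniform(0, 1, 300)
p2 = np.sin(np.pi * s) + rng.normal(0, 0.1, 300)  # stand-in for a smooth IS(s)
h1_opt = cv_bandwidth_mean(s, p2, h_grid=np.array([0.02, 0.05, 0.1, 0.3]))
```

Running the same routine on the cases (Y = 1, for IS) and the controls (Y = 0, for IP) gives the two bandwidths separately, as in the text.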

Appendix 3

R code for the application is available from the corresponding author upon request.

Copyright information

© 2013 Springer Science+Business Media New York

Cite this paper

Zhou, Q., Zheng, Y., Cai, T. (2013). Subgroup Specific Incremental Value of New Markers for Risk Prediction. In: Lee, ML., Gail, M., Pfeiffer, R., Satten, G., Cai, T., Gandy, A. (eds) Risk Assessment and Evaluation of Predictions. Lecture Notes in Statistics, vol 215. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8981-8_12
