Rank method for partial functional linear regression models


Abstract

In this paper, we consider rank estimation for partial functional linear regression models based on functional principal component analysis. The proposed rank-based method is robust to outliers in the errors and highly efficient under a wide range of error distributions. The asymptotic properties of the resulting estimators are established under some regularity conditions. A simulation study investigating the finite-sample performance of the estimators shows that the rank method performs well. Furthermore, the methodology is illustrated by an analysis of the Berkeley growth data.



References

  1. Abrevaya, J., & Shin, Y. (2011). Rank estimation of partially linear index models. The Econometrics Journal, 14(3), 409–437.

  2. Aneiros, G., & Vieu, P. (2015). Partial linear modelling with multi-functional covariates. Computational Statistics, 30(3), 1–25.

  3. Aneiros-Pérez, G., & Vieu, P. (2011). Automatic estimation procedure in partial linear model with functional data. Statistical Papers, 52(4), 751–771.

  4. Cai, T., & Hall, P. (2006). Prediction in functional linear regression. Annals of Statistics, 34(5), 2159–2179.

  5. Cai, T., & Yuan, M. (2012). Minimax and adaptive prediction for functional linear regression. Journal of the American Statistical Association, 107(499), 1201–1216.

  6. Cardot, H., Ferraty, F., & Sarda, P. (1999). Functional linear model. Statistics & Probability Letters, 45(1), 11–22.

  7. Cardot, H., Ferraty, F., & Sarda, P. (2003). Spline estimators for the functional linear model. Statistica Sinica, 13(3), 571–592.

  8. Crambes, C., Kneip, A., & Sarda, P. (2009). Smoothing splines estimators for functional linear regression. Annals of Statistics, 37(1), 35–72.

  9. David, H. A. (1998). Early sample measures of variability. Statistical Science, 13(4), 368–377.

  10. Du, J., Chen, X. P., Kwessi, E., & Sun, Z. M. (2018). Model averaging based on rank. Journal of Applied Statistics, 45, 1900–1919.

  11. Feng, L., Zou, C., & Wang, Z. (2012). Rank-based inference for the single-index model. Statistics & Probability Letters, 82(3), 535–541.

  12. Feng, L., Wang, Z. J., Zhang, C., & Zou, C. (2016). Nonparametric testing in regression models with Wilcoxon-type generalized likelihood ratio. Statistica Sinica, 26, 137–155.

  13. Ferraty, F., & Vieu, P. (2006). Nonparametric functional data analysis: Theory and practice. New York: Springer.

  14. Hall, P., & Horowitz, J. L. (2007). Methodology and convergence rates for functional linear regression. The Annals of Statistics, 35(1), 70–91.

  15. Horváth, L., & Kokoszka, P. (2012). Inference for functional data with applications. New York: Springer.

  16. Ivanescu, A. E., Staicu, A. M., Scheipl, F., et al. (2015). Penalized function-on-function regression. Computational Statistics, 30(2), 539–568.

  17. Kato, K. (2012). Estimation in functional linear quantile regression. Annals of Statistics, 40(6), 3108–3136.

  18. Kong, D., Staicu, A. M., & Maity, A. (2016a). Classical testing in functional linear models. Journal of Nonparametric Statistics, 28, 813–838.

  19. Kong, D., Xue, K., Yao, F., & Zhang, H. (2016b). Partially functional linear regression in high dimensions. Biometrika, 103(1), 147–159.

  20. Kong, D., Bondell, H., & Wu, Y. (2018a). Fully efficient robust estimation, outlier detection, and variable selection via penalized regression. Statistica Sinica, 28, 1031–1052.

  21. Kong, D., Ibrahim, J. G., Lee, E., & Zhu, H. (2018b). FLCRM: Functional linear Cox regression model. Biometrics, 74, 109–117.

  22. Locantore, N., Marron, J. S., Simpson, D. G., Tripoli, N., Zhang, J. T., & Cohen, K. L. (1999). Robust principal component analysis for functional data. Test, 8(1), 1–73.

  23. Leng, C. (2010). Variable selection and coefficient estimation via regularized rank regression. Statistica Sinica, 20, 167–181.

  24. Lu, Y., Du, J., & Sun, Z. (2014). Functional partially linear quantile regression model. Metrika, 77(3), 317–332.

  25. Peng, Q. Y., Zhou, J. J., & Tang, N. S. (2016). Varying coefficient partially functional linear regression models. Statistical Papers, 57, 827–841.

  26. Pollard, D. (1991). Asymptotics for least absolute deviation regression estimators. Econometric Theory, 7(2), 186–199.

  27. Ramsay, J. O., & Silverman, B. W. (2002). Applied functional data analysis: Methods and case studies. New York: Springer.

  28. Ramsay, J. O., & Silverman, B. W. (2005). Functional data analysis (2nd ed.). New York: Springer.

  29. Shin, H. (2009). Partial functional linear regression. Journal of Statistical Planning and Inference, 139(10), 3405–3418.

  30. Shin, Y. (2010). Local rank estimation of transformation models with functional coefficients. Econometric Theory, 26(6), 1807–1819.

  31. Sun, J., & Lin, L. (2014). Local rank estimation and related test for varying-coefficient partially linear models. Journal of Nonparametric Statistics, 26(1), 187–206.

  32. Wang, L. (2009). Wilcoxon-type generalized Bayesian information criterion. Biometrika, 96(1), 163–173.

  33. Wang, L., Kai, B., & Li, R. (2009). Local rank inference for varying coefficient models. Journal of the American Statistical Association, 104(488), 1631–1645.

  34. Yao, F., Müller, H. G., & Wang, J. L. (2005). Functional linear regression analysis for longitudinal data. Annals of Statistics, 33(6), 2873–2903.

  35. Yang, H., Guo, C., & Lv, J. (2015). SCAD penalized rank regression with a diverging number of parameters. Journal of Multivariate Analysis, 133, 321–333.

  36. Yang, J., Yang, H., & Lu, F. (2017). Rank-based shrinkage estimation for identification in semiparametric additive models. Statistical Papers. https://doi.org/10.1007/s00362-017-0874-z.

  37. Yu, D., Kong, L., & Mizera, I. (2016a). Partial functional linear quantile regression for neuroimaging data analysis. Neurocomputing, 195, 74–87.

  38. Yu, P., Du, J., & Zhang, Z. (2020). Single-index partially functional linear regression model. Statistical Papers. https://doi.org/10.1007/s00362-018-0980-6.

  39. Yu, P., Zhang, Z., & Du, J. (2016b). A test of linearity in partial functional linear regression. Metrika, 79(8), 953–969.

  40. Yushkevich, P., Pizer, S. M., Joshi, S., & Marron, J. S. (2001). Intuitive, localized analysis of shape variability. In Information Processing in Medical Imaging (Vol. 2082, pp. 402–408). Berlin: Springer.

  41. Zeng, D., & Lin, D. (2008). Efficient resampling methods for nonsmooth estimating functions. Biostatistics, 9(2), 355–363.

  42. Zhao, W., Lian, H., & Ma, S. (2017). Robust reduced-rank modeling via rank regression. Journal of Statistical Planning and Inference, 180, 1–12.



Funding

Cao’s work is partly supported by the National Natural Science Foundation of China (No. 11701020). Xie’s work is supported by the National Natural Science Foundation of China (No. 11771032, 11571340, 11971045), Beijing Natural Science Foundation under Grant No. 1202001, the Science and Technology Project of Beijing Municipal Education Commission (KM201710005032, KM201910005015), and the International Research Cooperation Seed Fund of Beijing University of Technology (No. 006000514118553).

Author information



Corresponding author

Correspondence to Tianfa Xie.




Appendix

In order to prove the theorem, we linearly approximate \(|\epsilon _i-\epsilon _j-t|\) using \(D_{ij}=2I_{\{\epsilon _i>\epsilon _j\}}-1\); intuitively, \(D_{ij}\) can be treated as the first derivative of \(|\epsilon _i-\epsilon _j-t|\) at \(t=0\) (Pollard 1991). Let \(R_i=\sum _{j=1}^m\gamma _{0j}{\hat{\xi }}_{ij}-\int _{0}^{1}\beta _0(t)X_i(t)dt=\varvec{U}_i^T\varvec{\gamma }_0-\int _{0}^{1}\beta _0(t)X_i(t)dt\). By Condition C2, one has \(E(D_{ij})=0\) and \(\text {Var}(D_{ij})=1\). Define

$$\begin{aligned} \begin{aligned} R_{ij}(\varvec{u})&= \left[ \left| \epsilon _i-\epsilon _j-\frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T{\varvec{u}_1}-\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2-R_i+R_j\right| \right. \\&\quad -\,\left. \left| \epsilon _i-\epsilon _j\right| \right] -D_{ij} \left[ \frac{1}{\sqrt{n}}{(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1}+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2 \right] , \\ \varvec{W}_{n1}&=\frac{1}{n\sqrt{n}}\sum _{i=1}^n\sum _{j=1}^nD_{ij}(\varvec{z}_i-\varvec{z}_j), \end{aligned} \end{aligned}$$


$$\begin{aligned} \varvec{W}_{n2}=\frac{1}{n\sqrt{n}}\sum _{i=1}^n\sum _{j=1}^nD_{ij}(\varvec{U}_{i}-\varvec{U}_{j}), \end{aligned}$$

where \(\varvec{u} =(\varvec{u}_1^T,\varvec{u}_2^T)^T\) and \(\varvec{W}_{n}=(\varvec{W}_{n1}^T,\varvec{W}_{n2}^T)^T.\)
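The two moment facts for \(D_{ij}\) quoted above follow directly from the exchangeability of the i.i.d. continuous errors; a short verification (using only that ties \(\epsilon_i=\epsilon_j\) occur with probability zero):

```latex
\begin{align*}
E(D_{ij}) &= 2P(\epsilon_i > \epsilon_j) - 1 = 2\cdot\tfrac{1}{2} - 1 = 0,
  && (\epsilon_i,\epsilon_j) \text{ exchangeable, ties null},\\
\operatorname{Var}(D_{ij}) &= E(D_{ij}^2) - \{E(D_{ij})\}^2 = 1 - 0 = 1,
  && D_{ij}^2 = 1 \text{ almost surely}.
\end{align*}
```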

Lemma 1


Define

$$\begin{aligned} \begin{aligned} G_{n}(\varvec{u})&=\frac{1}{n}\sum \limits _{i=1}^n\sum \limits _{j=1}^n \left[ |\epsilon _i-\epsilon _j -\frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T{ \varvec{u}_1}-\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2\right. \\&\quad \left. -\,R_i+R_j|-|\epsilon _i-\epsilon _j| \right] . \end{aligned} \end{aligned}$$

Under Conditions C1 and C2, we have

$$\begin{aligned} G_{n}(\varvec{u})=A_n(\varvec{u})+ \varvec{W}_{n}^T\varvec{u}+O_p\left( \lambda _m^{-1/2}\right) +O_{p}\left( n^{\frac{1}{a+2b}}\right) , \end{aligned}$$


where

$$\begin{aligned} \begin{aligned} A_{n}(\varvec{u})&=\frac{\tau }{n^2}\varvec{u}^T_1\left( \sum _{i=1}^n\sum \limits _{j=1}^n (\varvec{z}_i-\varvec{z}_j)(\varvec{z}_i-\varvec{z}_j)^T\right) \varvec{u}_1\\&\quad +\,\frac{\tau }{\lambda _mn^2}\varvec{u}^T_2\left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{U}_{i}-\varvec{U}_{j})(\varvec{U}_{i}-\varvec{U}_{j})^T\right) \varvec{u}_2\\&\quad +\,\frac{\tau }{\sqrt{\lambda _m}n^2}\varvec{u}^T_1\left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{z}_{i}-\varvec{z}_{j})(\varvec{U}_{i}-\varvec{U}_{j})^T\right) \varvec{u}_2, \\ \varvec{W}_{n}^T\varvec{u}&=\varvec{W}_{n1}^T\varvec{u}_1+\varvec{W}_{n2}^T\varvec{u}_2, \end{aligned} \end{aligned}$$

where \(\varvec{W}_{n} =( \varvec{W}_{n1}^T,\varvec{W}_{n2}^T)^T\), \(\tau =\int f^2(t)dt\), and f(t) is the density function of the random error.

Proof of Lemma 1

Let \(M(t)=E(|\epsilon _i-\epsilon _j-t|-|\epsilon _i-\epsilon _j|)\). With Condition C2, it is easy to show that M(t) has a unique minimizer at zero, and its Taylor expansion at the origin has the form \(M(t)=\tau t^2+o(t^2)\). Denote \({\mathcal {F}}_n=\{(\varvec{z}_1,\varvec{X}_1),\ldots ,(\varvec{z}_n,\varvec{X}_n)\}.\) Hence, for large n, we have

$$\begin{aligned} E(G_{n}(\varvec{u})|{\mathcal {F}}_n)=A_n(\varvec{u})+B_n(\varvec{u})+o_p(A_n(\varvec{u}))+o_p(B_n(\varvec{u})), \end{aligned}$$


where

$$\begin{aligned} \begin{aligned} B_{n}(\varvec{u})&=\frac{\tau }{n}\sum _{i=1}^n\sum \limits _{j=1}^n (R_i-R_j)^2+\frac{\tau }{\sqrt{\lambda _m}n^{3/2}}\varvec{u}^T_2 \left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{U}_{i}-\varvec{U}_{j})(R_i-R_j) \right) \\&\quad +\frac{\tau }{n^{3/2}}\varvec{u}^T_1 \left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{z}_{i}-\varvec{z}_{j})(R_i-R_j) \right) \\&\triangleq B_{n1}+B_{n2}(\varvec{u})+B_{n3}(\varvec{u}). \end{aligned} \end{aligned}$$
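The quadratic expansion \(M(t)=\tau t^2+o(t^2)\) used above can be checked directly. Writing \(D=\epsilon_i-\epsilon_j\) with density \(g(s)=\int f(s+u)f(u)\,du\), so that \(g(0)=\int f^2(t)dt=\tau\), a sketch of the computation is:

```latex
% Taylor expansion of M(t) = E(|D - t| - |D|) at t = 0.
\begin{align*}
M'(t)  &= E\,\mathrm{sgn}(t - D) = 2G(t) - 1, \qquad G \text{ the c.d.f. of } D,\\
M'(0)  &= 2G(0) - 1 = 0 \qquad (D \text{ is symmetric about } 0),\\
M''(0) &= 2g(0) = 2\tau,\\
M(t)   &= M(0) + M'(0)\,t + \tfrac{1}{2}M''(0)\,t^2 + o(t^2) = \tau t^2 + o(t^2).
\end{align*}
```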

By Condition C1 and the Cauchy–Schwarz inequality, we have

$$\begin{aligned} A_n(\varvec{u})=O_p(1). \end{aligned}$$

Note that

$$\begin{aligned} \begin{aligned} |R_i|^2&= \left| \sum _{j=1}^m\gamma _{0j}{\hat{\xi }}_{ij}-\int _{0}^{1}\beta _0(t)X_i(t)dt \right| ^2\\&\le 2 \left| \sum _{j=1}^{m}\langle X_i,{\hat{v}}_j-{v}_j\rangle \gamma _{0j} \right| ^2+2 \left| \sum _{j=m+1}^{ \infty }\langle X_i,v_j\rangle \gamma _{0j} \right| ^2\\&\triangleq 2\text {A}_1+2\text {A}_2. \end{aligned} \end{aligned}$$

For \(\text {A}_1\), by Condition C1 and the Hölder inequality, we obtain

$$\begin{aligned} \begin{aligned} \text {A}_1&= \left| \sum _{j=1}^{m}\langle X_i,{v}_j-{\hat{v}}_j\rangle \gamma _{0j} \right| ^2 \le Km\sum _{j=1}^{m}\Vert {v}_j-{\hat{v}}_j\Vert ^2|\gamma _{0j}|^2\\&\le Km\sum _{j=1}^{m}O_p(n^{-1}j^{2-2b}) =O_p\left( n^{-\frac{a+4b-4}{a+2b}}\right) . \end{aligned} \end{aligned}$$

As for \(\text {A}_2\), since

$$\begin{aligned} E \left\{ \sum _{j=m+1}^{ \infty }\langle X_i,v_j\rangle \gamma _{0j} \right\} =0, \end{aligned}$$
and


$$\begin{aligned} \begin{aligned}&\text {Var} \left\{ \sum _{j=m+1}^{ \infty }\langle X_i,{v}_j\rangle \gamma _{0j} \right\} \\&\quad =\sum _{j=m+1}^{ \infty }\lambda _j{\gamma _{0j}}^2 \le K \sum _{j=m+1}^{ \infty }j^{-(a+2b)} =O\left( n^{-\frac{a+2b-1}{a+2b}}\right) , \end{aligned} \end{aligned}$$

one has

$$\begin{aligned} A_2=O_p\left( n^{-\frac{a+2b-1}{a+2b}}\right) . \end{aligned}$$

Combining the above discussions, it follows that

$$\begin{aligned} |R_i|^2=O_p\left( n^{-\frac{a+2b-1}{a+2b}}\right) . \end{aligned}$$

Moreover, we have

$$\begin{aligned} \begin{aligned} B_{n1}&=\frac{\tau }{n}\sum _{i=1}^{n}\sum \limits _{j=1}^{n} (R_i-R_j)^2\\&=nO_{p}\left( n^{-\frac{2b+a-1}{a+2b}}\right) \\&=O_{p}\left( n^{\frac{1}{a+2b}}\right) . \end{aligned} \end{aligned}$$

Taking advantage of the stochastic orders of \(B_{n1}\) and \(A_n(\varvec{u})\) and the Cauchy–Schwarz inequality, one has

$$\begin{aligned} B_{n2} (\varvec{u}) =B_{n3}(\varvec{u}) =O_{p}\left( n^{\frac{1}{a+2b}}\right) . \end{aligned}$$

Therefore, it follows that

$$\begin{aligned} B_{n}=O_{p}\left( n^{\frac{1}{a+2b}}\right) . \end{aligned}$$

Furthermore, we obtain

$$\begin{aligned} E(G_{n}(\varvec{u})|{\mathcal {F}}_n)=A_n(\varvec{u})+O_{p}\left( n^{\frac{1}{a+2b}}\right) . \end{aligned}$$


Note that

$$\begin{aligned} G_n(\varvec{u})=E[G_n(\varvec{u})|{\mathcal {F}}_n]+\varvec{W}_{n}^T\varvec{u}+\frac{1}{n}\sum \limits _{j=1}^n\sum \limits _{i=1}^n\left[ R_{ij}(\varvec{u}) -E(R_{ij}(\varvec{u}))\right] . \end{aligned}$$

By elementary calculation, one has

$$\begin{aligned} \begin{aligned} |R_{ij}(\varvec{u})|&\le \left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \\&\quad \times I_{\left\{ \epsilon _i-\epsilon _j\le \left| \frac{1}{\sqrt{n}}{ (\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1}+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \right\} }. \end{aligned} \end{aligned}$$

With the cross-product terms cancelled, it follows that

$$\begin{aligned} \begin{aligned} E \left( \frac{1}{n}\sum \limits _{i=1}^n\sum \limits _{j=1}^n\left[ R_{ij}(\varvec{u}) -E(R_{ij}(\varvec{u}))\right] \right) ^2&\le \frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^nE\left[ R_{ij}(\varvec{u}) -E(R_{ij}(\varvec{u}))\right] ^2\\&\le I_1(\varvec{u})+I_2(\varvec{u})+I_3(\varvec{u}), \end{aligned} \end{aligned}$$


where

$$\begin{aligned} \begin{aligned} I_1(\varvec{u})&=\frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^n\varvec{u}_1^T(\varvec{z}_i-\varvec{z}_j)(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1\\&\quad \times EI_{\left\{ \epsilon _i-\epsilon _j\le \left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \right\} },\\ I_2(\varvec{u})&=\frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^n\frac{1}{ {n\lambda _m}}\varvec{u}_2^T(\varvec{U}_{i}-\varvec{U}_{j})(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2\\&\quad \times EI_{\left\{ \epsilon _i-\epsilon _j\le \left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \right\} } \end{aligned} \end{aligned}$$

and


$$\begin{aligned} I_3(\varvec{u})=\frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^n (R_{i }-R_{j })^2 EI_{\left\{ \epsilon _i-\epsilon _j\le \left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \right\} }. \end{aligned}$$

By Condition C2 and the weak law of large numbers, one has

$$\begin{aligned}&\frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^n\varvec{u}_1^T(\varvec{z}_i-\varvec{z}_j)(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1=O_p(1), \end{aligned}$$
$$\begin{aligned}&\max \limits _{1\le i,j\le n}\left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2 \right| =o_p(1), \end{aligned}$$

and


$$\begin{aligned} || R_i-R_j||^2=o_p(1). \end{aligned}$$

Therefore, combining (6) and (7) with (8), we can conclude that \(I_1(\varvec{u})=o_p(1)\).

Next, consider the stochastic order of \(I_{2}(\varvec{u})\).

$$\begin{aligned} \begin{aligned} I_2(\varvec{u})&=\frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^n\frac{1}{ {n\lambda _m}}\varvec{u}_2^T(\varvec{U}_{i}-\varvec{U}_{j})(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2\\&\quad \times EI_{\left\{ \epsilon _i-\epsilon _j\le \left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \right\} }\\&=\frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^n\frac{1}{ {n\lambda _m}}\varvec{u}_2^T\varvec{U}_{i}\varvec{U}_{i}^T\varvec{u}_2EI_{\left\{ \epsilon _i-\epsilon _j\le \left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \right\} }\\&\quad +\,\frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^n\frac{1}{ {n\lambda _m}}\varvec{u}_2^T\varvec{U}_{j}\varvec{U}_{j}^T\varvec{u}_2EI_{\left\{ \epsilon _i-\epsilon _j\le \left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \right\} }\\&\quad -\,2\frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^n\frac{1}{ {n\lambda _m}}\varvec{u}_2^T\varvec{U}_{i}\varvec{U}_{j}^T\varvec{u}_2EI_{\left\{ \epsilon _i-\epsilon _j\le \left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \right\} }\\&\triangleq I_{21}(\varvec{u})+I_{22}(\varvec{u})-2I_{23}(\varvec{u}). \end{aligned} \end{aligned}$$

By Lemma 2 of Yu et al. (2016b) and (8), one has

$$\begin{aligned} \begin{aligned} I_{21}(\varvec{u})&=\frac{1}{n^2}\sum \limits _{i=1}^n\sum \limits _{j=1}^n\frac{1}{ {n\lambda _m}}\varvec{u}_2^T\varvec{U}_{i}\varvec{U}_{i}^T\varvec{u}_2EI_{\left\{ \epsilon _i-\epsilon _j\le \left| \frac{1}{\sqrt{n}}(\varvec{z}_i-\varvec{z}_j)^T\varvec{u}_1+\frac{1}{\sqrt{n\lambda _m}}(\varvec{U}_{i}-\varvec{U}_{j})^T\varvec{u}_2+R_i-R_j \right| \right\} }\\&=o_p(1) \sum \limits _{i=1}^n \frac{1}{ {n\lambda _m}}\varvec{u}_2^T\varvec{U}_{i}\varvec{U}_{i}^T\varvec{u}_2 \\&= o_p(1) \frac{1}{ { \lambda _m}} \varvec{u}_2^T \varvec{\Lambda }\varvec{u}_2 \\&= O_p( { \lambda _m^{-1}}), \end{aligned} \end{aligned}$$

where \(\varvec{\Lambda }= \text {diag}( \lambda _1,\lambda _2,\ldots ,\lambda _m).\) Similar to \(I_{21}(\varvec{u})\), we have \(I_{22}(\varvec{u})=O_p( { \lambda _m^{-1}}).\) Then, invoking the Cauchy–Schwarz inequality, one has \(I_{23}(\varvec{u})=O_p( { \lambda _m^{-1}}).\) Thus, \(I_{2}(\varvec{u})=O_p( { \lambda _m^{-1}}).\) Now we consider \(I_3(\varvec{u})\): similar to \(B_n(\varvec{u})\), we can show that \(I_3(\varvec{u})=o_{p}\left( n^{\frac{1}{a+2b}}\right) .\)

Combining the stochastic orders of \(I_{1}(\varvec{u})\), \(I_{2}(\varvec{u})\) and \(I_{3}(\varvec{u})\) with the expansion of \(E(G_{n}(\varvec{u})|{\mathcal {F}}_n)\), one has

$$\begin{aligned} G_{n}(\varvec{u})=A_n(\varvec{u})+\varvec{W}_{n}^T\varvec{u}+O_p\left( \lambda _m^{-1/2}\right) +O_{p}\left( n^{\frac{1}{a+2b}}\right) . \end{aligned}$$

This completes the proof of Lemma 1. \(\square\)

Proof of Theorem 1

Write \(\varvec{U}_{i}=({\hat{\xi }}_{i1},\ldots ,{\hat{\xi }}_{im})^T, \varvec{\theta }=(\varvec{\alpha }^T,\varvec{\gamma }^T)^T\). Note that

$$\begin{aligned} \hat{\varvec{\theta }}=\arg \min \limits _{\varvec{\theta }}\sum \limits _{i=1}^n\sum \limits _{j=1}^n \frac{1}{n}|Y_i-Y_j-\varvec{z}_i^T\varvec{\alpha }-\varvec{U}_{i}^T\varvec{\gamma } + \varvec{z}_j^T\varvec{\alpha }+\varvec{U}_{j}^T\varvec{\gamma } |, \end{aligned}$$

so \(\hat{\varvec{\theta }}\) minimizes

$$\begin{aligned} \begin{aligned} Q_{n}(\varvec{\theta })&=\sum \limits _{i=1}^n\sum \limits _{j=1}^n \frac{1}{n}|Y_i-Y_j-\varvec{z}_i^T\varvec{\alpha }-\varvec{U}_{i}^T\varvec{\gamma } + \varvec{z}_j^T\varvec{\alpha }+\varvec{U}_{j}^T\varvec{\gamma } |-\frac{1}{n}\sum \limits _{i=1}^n\sum \limits _{j=1}^n|\epsilon _i-\epsilon _j| \\&=\frac{1}{n}\sum \limits _{i=1}^n\sum \limits _{j=1}^n \left| \epsilon _i-\epsilon _j-(\varvec{z}_i-\varvec{z}_j)^T(\varvec{\alpha }-\varvec{\alpha }_0) -(\varvec{U}_{i}-\varvec{U}_{j})^T(\varvec{\gamma }-\varvec{\gamma }_{0})-R_i+R_j \right| \\&\quad -\,\frac{1}{n}\sum \limits _{i=1}^n\sum \limits _{j=1}^n|\epsilon _i-\epsilon _j|{,} \end{aligned} \end{aligned}$$

where \(R_{i}=\sum _{j=1}^{m}{\hat{\xi }}_{ij}\gamma _{0j}-\int _{0}^{1}\beta _0(t)X_i(t)dt.\)
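To make the criterion concrete, here is a minimal, hypothetical sketch (not the authors' code) of the pairwise-L1 rank objective above: each row of `X` is assumed to stack the scalar covariates \(\varvec{z}_i\) together with the estimated FPC scores \({\hat{\xi }}_{i1},\ldots ,{\hat{\xi }}_{im}\), which we take as already computed.

```python
# Hypothetical sketch of the pairwise-L1 (Wilcoxon-type rank) objective
# Q_n(theta) up to the constant term sum |eps_i - eps_j|; illustration only.
from itertools import combinations

def rank_objective(y, X, theta):
    """(1/n) * sum over all pairs (i, j) of |e_i - e_j|, e_i = y_i - x_i^T theta."""
    n = len(y)
    e = [yi - sum(xk * tk for xk, tk in zip(xi, theta)) for yi, xi in zip(y, X)]
    # each unordered pair counted twice in the double sum, hence the factor 2
    return (2.0 / n) * sum(abs(e[i] - e[j]) for i, j in combinations(range(n), 2))

y = [0.0, 2.0, 4.0, 6.0, 8.0]           # noiseless toy data with y_i = 2 * x_i
X = [[float(i)] for i in range(5)]
loss_true = rank_objective(y, X, [2.0])  # vanishes at the true slope
loss_zero = rank_objective(y, X, [0.0])  # strictly positive elsewhere
```

Because the loss depends on the residuals only through pairwise differences, adding a constant to every response leaves it unchanged; this is why the rank criterion identifies the slopes \(\varvec{\theta }\) but contains no intercept.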

By Lemma 1, a simple calculation yields

$$\begin{aligned} Q_{n}(\varvec{\theta })=V_{n}(\varvec{\theta })+\sqrt{ n}\varvec{W}_{n1}^T(\varvec{\alpha }-\varvec{\alpha }_0)+\sqrt{ n}\varvec{W}_{n2}^T(\varvec{\gamma }-\varvec{\gamma }_0)+O_p\left( \lambda _m^{-1/2}\right) +O_{p}\left( n^{\frac{1}{a+2b}}\right) , \end{aligned}$$


where

$$\begin{aligned} \begin{aligned} V_{n}(\varvec{\theta })&=\frac{\tau }{n}(\varvec{\alpha }-\varvec{\alpha }_0)^T \left( \sum _{i=1}^n\sum \limits _{j=1}^n (\varvec{z}_i-\varvec{z}_j)(\varvec{z}_i-\varvec{z}_j)^T\right) (\varvec{\alpha }-\varvec{\alpha }_0) \\&\quad +\,\frac{\tau }{\lambda _mn}(\varvec{\gamma }-\varvec{\gamma }_{0})^T\left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{U}_{i}-\varvec{U}_{j})(\varvec{U}_{i}-\varvec{U}_{j})^T\right) (\varvec{\gamma }-\varvec{\gamma }_{0})\\&\quad +\,\frac{\tau }{\sqrt{\lambda _m}n}(\varvec{\alpha }-\varvec{\alpha }_0)^T\left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{z}_{i}-\varvec{z}_{j})(\varvec{U}_{i}-\varvec{U}_{j})^T\right) (\varvec{\gamma }-\varvec{\gamma }_{0}){.} \end{aligned} \end{aligned}$$

By the definition of \(\hat{\varvec{\theta }}\) and Lemma 1, we can conclude that

$$\begin{aligned} \frac{\partial V_{n}(\hat{\varvec{\theta }})}{\partial \varvec{\theta }}+\sqrt{n} \varvec{W}_n=O_p\left( \lambda _m^{-1/2}\right) +O_{p}\left( n^{\frac{1}{a+2b}}\right) . \end{aligned}$$

A simple calculation then yields

$$\begin{aligned} \begin{aligned}&\frac{2\tau }{\lambda _mn} \left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{U}_{i}-\varvec{U}_{j})(\varvec{U}_{i}-\varvec{U}_{j})^T \right) (\hat{\varvec{\gamma }}-\varvec{\gamma }_{0})\\&\qquad + \frac{\tau }{\sqrt{\lambda _m}n}\left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{U}_{i}-\varvec{U}_{j})(\varvec{z}_{i}-\varvec{z}_{j})^T\right) (\hat{\varvec{\alpha }}-\varvec{\alpha }_0)+\sqrt{n}\varvec{W}_{n2}\\&\quad =O_p\left( \lambda _m^{-1/2}\right) +O_{p}\left( n^{\frac{1}{a+2b}}\right) , \end{aligned} \end{aligned}$$


and

$$\begin{aligned} \begin{aligned}&\frac{2\tau }{n} \left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{z}_{i}-\varvec{z}_{j})(\varvec{z}_{i}-\varvec{z}_{j})^T \right) (\hat{\varvec{\alpha }}-\varvec{\alpha }_{0})\\&\qquad + \frac{\tau }{\sqrt{\lambda _m}n}\left( \sum \limits _{i=1}^n\sum \limits _{j=1}^n (\varvec{z}_{i}-\varvec{z}_{j})(\varvec{U}_{i}-\varvec{U}_{j})^T\right) (\hat{\varvec{\gamma }}-\varvec{\gamma }_{0})+\sqrt{n}\varvec{W}_{n1}\\&\quad =O_p\left( \lambda _m^{-1/2}\right) +O_{p}\left( n^{\frac{1}{a+2b}}\right) . \end{aligned} \end{aligned}$$

The above equations can be rewritten as

$$\begin{aligned} 4\tau \sqrt{ n}({ \varvec{Z} }^T(\varvec{I}-\varvec{S}_m)\varvec{Z}/n ) ( \hat{\varvec{\alpha }}-\varvec{\alpha }_0 )=\frac{1}{n\sqrt{n}}\sum _{i=1}^n\sum _{j=1}^nD_{ij}(\tilde{\varvec{Z}}_i-\tilde{\varvec{Z}}_j)+o_p(1), \end{aligned}$$

where \(\varvec{Z}=[\varvec{z}_1,\ldots ,\varvec{z}_n]^T\), \(\tilde{\varvec{Z}}_i\) is the ith row of \((\varvec{I}-\varvec{S}_m)\varvec{Z}\) (written as a column vector), and \(\varvec{S}_m=\varvec{U}_m(\varvec{U}_m^T\varvec{U}_m)^{-1}\varvec{U}_m^T.\)

Similar to Lemma 2 of Du et al. (2018), we have

$$\begin{aligned} \frac{1}{n\sqrt{n}}\sum _{i=1}^n\sum _{j=1}^nD_{ij}(\tilde{\varvec{Z}}_i-\tilde{\varvec{Z}}_j){\longrightarrow } N \left( 0 ,\frac{4}{3}\varvec{\Sigma }\right) . \end{aligned}$$
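The factor 4/3 in the limiting covariance can be traced through a heuristic Hájek-projection argument (our sketch, not the cited proof): with F the error c.d.f., \(2F(\epsilon_i)-1\) is a transformed uniform variable with variance 1/3.

```latex
% Heuristic for the (4/3)\Sigma covariance via projection of D_{ij}:
\begin{align*}
\frac{1}{n}\sum_{j=1}^n D_{ij}
  &= \frac{1}{n}\sum_{j=1}^n \bigl(2 I_{\{\epsilon_i>\epsilon_j\}}-1\bigr)
   \approx 2F(\epsilon_i)-1,\\
\frac{1}{n\sqrt n}\sum_{i=1}^n\sum_{j=1}^n D_{ij}\bigl(\tilde{\varvec{Z}}_i-\tilde{\varvec{Z}}_j\bigr)
  &\approx \frac{2}{\sqrt n}\sum_{i=1}^n \bigl(2F(\epsilon_i)-1\bigr)\tilde{\varvec{Z}}_i,\\
\operatorname{Var}\bigl(2F(\epsilon_i)-1\bigr)
  &= \operatorname{Var}(2U-1) = \tfrac{1}{3}, \qquad U\sim \mathrm{U}(0,1),
\end{align*}
so the limiting covariance is \(4\cdot \tfrac{1}{3}\cdot \varvec{\Sigma } = \tfrac{4}{3}\varvec{\Sigma }\).
```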

Therefore, according to the law of large numbers, we have

$$\begin{aligned} \begin{aligned} \sqrt{n}(\hat{\varvec{\alpha }}-\varvec{\alpha }_0)&= ( 4\tau (\varvec{Z}^T(\varvec{I}-\varvec{S}_m)\varvec{Z}/n ) )^{-1} \frac{1}{n\sqrt{n}}\sum _{i=1}^n\sum _{j=1}^nD_{ij}(\tilde{\varvec{Z}}_i-\tilde{\varvec{Z}}_j)+o_p(1)\\&=( 4\tau \varvec{\Sigma })^{-1} \frac{1}{n\sqrt{n}}\sum _{i=1}^n\sum _{j=1}^nD_{ij}(\tilde{\varvec{Z}}_i-\tilde{\varvec{Z}}_j)+o_p(1)\\&\rightarrow N \left( 0 ,\frac{1}{12\tau ^2}\varvec{\Sigma }^{-1}\right) . \end{aligned} \end{aligned}$$

Furthermore, we have \(\Vert \hat{\varvec{\alpha }}-\varvec{\alpha }_0\Vert =O_p(n^{-1/2}).\) Combining this with the definition of \(\varvec{W}_{n2}\) and (10), one has

$$\begin{aligned} \Vert \hat{\varvec{\gamma }}-\varvec{\gamma }_{0}\Vert =O_p(n^{-1/2})+O_p\left( \sqrt{\frac{m^{a+1}}{n}}\right) +O_p\left( \sqrt{\frac{m }{n}}\right) =O_p\left( \sqrt{\frac{m^{a+1}}{n}}\right) . \end{aligned}$$

Using a similar argument to that in Theorem 3.2 of Shin (2009) and the condition \(m\sim n^{1/(a+2b)}\), we have

$$\begin{aligned} \Vert {{\hat{\beta }}}-\beta _0\Vert =O_p\left( n^{-(2b-1)/(a+2b)}\right) . \end{aligned}$$



Cite this article

Cao, R., Xie, T. & Yu, P. Rank method for partial functional linear regression models. J. Korean Stat. Soc. (2020). https://doi.org/10.1007/s42952-020-00075-4

Keywords


  • Rank estimation
  • Karhunen–Loève expansion
  • Asymptotic normality
  • Functional principal component analysis
  • Convergence rate

Mathematics Subject Classification

  • 62J05
  • 62M10