
Dynamic partially functional linear regression model

  • Original Paper
  • Statistical Methods & Applications

Abstract

In this paper, we develop a dynamic partially functional linear regression model in which the functional dependent variable is explained by the first order lagged functional observation and a finite number of real-valued variables. The bivariate slope function is estimated by bivariate tensor-product B-splines. Under some regularity conditions, the large sample properties of the proposed estimators are established. We investigate the finite sample performance of the proposed methods via Monte Carlo simulation studies, and illustrate its usefulness by the analysis of the electricity consumption data.
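As a rough illustration of the model (not the authors' simulation design), the dynamics \(Y_n(t)=\int_0^1\beta(t,s)Y_{n-1}(s)\,ds+{\varvec{Z}}_n^T{\varvec{\theta }}(t)+\varepsilon _n(t)\) can be simulated on an equally spaced grid; the slope surface, coefficient curves, noise level, and grid size below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's simulation design): simulate
# Y_n(t) = \int_0^1 beta(t,s) Y_{n-1}(s) ds + Z_n' theta(t) + eps_n(t)
# on an equally spaced grid, approximating the integral by a Riemann sum.
rng = np.random.default_rng(0)
G = 50                                  # grid points on [0, 1]
t = np.linspace(0, 1, G)
# Assumed bivariate slope surface and coefficient curves for two covariates.
beta = 0.3 * np.outer(np.sin(np.pi * t), np.cos(np.pi * t))
theta = np.vstack([np.ones(G), t])

def simulate(N):
    """Generate N functional observations Y and scalar covariates Z."""
    Y = np.zeros((N, G))
    Z = rng.normal(size=(N, 2))
    for n in range(1, N):
        lag = beta @ Y[n - 1] / G       # Riemann sum for the integral term
        Y[n] = lag + Z[n] @ theta + 0.1 * rng.normal(size=G)
    return Y, Z

Y, Z = simulate(200)
```

With the kernel scaled so the lag operator is a contraction, the generated series is stable and can serve as input for checking an estimation procedure.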

Fig. 1
Fig. 2


References

  • Aneiros G, Bongiorno EG, Cao R, Vieu P (2017) Functional statistics and related fields. Springer, Berlin
  • Aneiros G, Vieu P (2008) Nonparametric time series prediction: a semi-functional partial linear modeling. J Multivar Anal 99:834–857
  • Bosq D (2000) Linear processes in function spaces. Springer, Berlin
  • Fan ZH, Reimherr M (2017) High-dimensional adaptive function-on-scalar regression. Econom Stat 1:167–183
  • He X, Shi P (1994) Convergence rate of B-spline estimators of nonparametric conditional quantile functions. J Nonparametr Stat 3:299–308
  • He X, Fung WK, Zhu Z (2005) Robust estimation in generalized partial linear models for clustered data. J Am Stat Assoc 100(472):1176–1184
  • Hörmann S, Horváth L, Reeder R (2013) A functional version of the ARCH model. Econometr Theory 29(2):267–288
  • Kokoszka P, Miao H, Reimherr M, Taoufik B (2017) Dynamic functional regression with application to the cross-section of returns. J Financ Econometr 16:1–25
  • Ramsay JO, Silverman BW (2005) Functional data analysis, 2nd edn. Springer, New York
  • Reimherr M, Dan N (2014) A functional data analysis approach for genetic association studies. Ann Appl Stat 8(1):406–429
  • Schumaker LL (1981) Spline functions: basic theory. Wiley, New York


Acknowledgements

Du’s work is supported by the National Natural Science Foundation of China (No. 11771032), the Science and Technology Project of Beijing Municipal Education Commission (KM201910005015), the Young Talent Program of Beijing Municipal Commission of Education (No. CIT&TCD201904021), and the International Research Cooperation Seed Fund of Beijing University of Technology (No. 006000514118553). Zhang’s work is supported by the National Natural Science Foundation of China (No. 11271039).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Jiang Du.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix. Proofs

The following lemma, which follows easily from Theorem 12.7 of Schumaker (1981), is stated for ease of reference.

Lemma 1

Under Assumption 1, there exists a spline function

$$\begin{aligned} \beta ^*(t,s)=\sum \limits _{i=1}^{M_N}\sum \limits _{j=1}^{M_N}\alpha _{i,j}b_i(t) b_j(s)=\left( {\varvec{B}}^T(t)\otimes {\varvec{B}}^T (s)\right) {{\varvec{\alpha }}}, \end{aligned}$$

which is called the spline approximation of \(\beta (t,s)\), such that

$$\begin{aligned} \sup _{(t,s)\in [0,1]\times [0,1]} \parallel \beta ^*(t,s)-\beta (t,s) \parallel =O(M_N^{-r}), \end{aligned}$$

where \({\varvec{\alpha }}=(\alpha _{1,1},\alpha _{1,2},\ldots ,\alpha _{1,M_N},\alpha _{2,1},\ldots ,\alpha _{2,M_N},\alpha _{3,1},\ldots ,\alpha _{M_N,M_N})^T.\)
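Lemma 1 can be illustrated numerically: the sketch below builds univariate B-spline bases via the Cox–de Boor recursion, fits the tensor-product coefficients \(\alpha _{i,j}\) to a smooth surface by least squares on a grid, and inspects the sup error. The knot placement, degree, and target surface are assumptions of the sketch, not the paper's.

```python
import numpy as np

# Illustrative check of Lemma 1: fit a tensor-product B-spline surface
# sum_ij alpha_ij b_i(t) b_j(s) to a smooth beta(t,s) and inspect the error.
# Knots, degree, and the target surface are assumptions for the sketch.
def bspline_basis(x, M, k=3):
    """Evaluate M clamped B-spline basis functions of degree k on x in [0,1]
    via the Cox-de Boor recursion."""
    kn = np.concatenate([np.zeros(k), np.linspace(0, 1, M - k + 1), np.ones(k)])
    # degree-0 indicators of the knot intervals [kn[i], kn[i+1])
    B = ((kn[:-1, None] <= x) & (x < kn[1:, None])).astype(float).T
    B[x == 1.0, M - 1] = 1.0            # right-endpoint convention
    for d in range(1, k + 1):
        Bn = np.zeros((len(x), B.shape[1] - 1))
        for i in range(Bn.shape[1]):
            den1, den2 = kn[i + d] - kn[i], kn[i + d + 1] - kn[i + 1]
            if den1 > 0:
                Bn[:, i] += (x - kn[i]) / den1 * B[:, i]
            if den2 > 0:
                Bn[:, i] += (kn[i + d + 1] - x) / den2 * B[:, i + 1]
        B = Bn
    return B                            # shape (len(x), M)

M, x = 8, np.linspace(0, 1, 40)
Bx = bspline_basis(x, M)
beta = np.sin(np.pi * x)[:, None] * np.cos(np.pi * x)[None, :]  # assumed target
P = np.linalg.pinv(Bx)
alpha = P @ beta @ P.T                  # least-squares tensor coefficients
sup_err = np.max(np.abs(Bx @ alpha @ Bx.T - beta))
```

For a smooth surface the sup error shrinks as \(M_N\) grows, consistent with the \(O(M_N^{-r})\) rate in the lemma.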

Proof of Theorem 1

By Lemma 1, there exists \({\varvec{\alpha }}\) such that

$$\begin{aligned} R(t,s)=\beta (t,s)-{\varvec{B}}^T(t)\otimes {\varvec{B}}^T (s){\varvec{\alpha }}=O(M_N^{-r}) \end{aligned}$$

uniformly in \((t,s)\in [0,1]\times [0,1]\). Then we have

$$\begin{aligned} \hat{ {\varvec{\alpha }}}- {{\varvec{\alpha }}}= & {} {\varvec{A}}_1^{-1}{\varvec{A}}_2-{ {\varvec{\alpha }}}\\= & {} {\varvec{A}}_1^{-1}({\varvec{A}}_2-{\varvec{A}}_1{ {\varvec{\alpha }}})\\= & {} {\varvec{A}}_1^{-1}\int _0^1\left( \langle Y_1,{\varvec{B}} \rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}} \rangle \right) \otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N)[\widetilde{{\varvec{Y}}}(t)-{\varvec{\eta }}^*(t)]dt, \end{aligned}$$

where \({\varvec{\eta }}^*(t)=\left( \int _0^1Y_1(s)\beta ^*(t,s)ds,\ldots ,\int _0^1Y_{N-1}(s)\beta ^*(t,s)ds\right) ^T.\) By the definition of \({\varvec{\eta }}^*(t)\), one has

$$\begin{aligned} \widetilde{{\varvec{Y}}}(t)-{\varvec{\eta }}^*(t)= & {} \left( Y_2(t){-}\int _0^1\beta ^*(t,s)Y_1(s)ds,\ldots ,Y_N(t){-}\int _0^1\beta ^*(t,s)Y_{N-1}(s)ds\right) ^T\\= & {} \left( \varepsilon _2(t)+{\varvec{Z}}_2^T{\varvec{\theta }}(t)+\int _0^1R(t,s)Y_1(s)ds,\ldots ,\varepsilon _N(t)+{\varvec{Z}}_N^T {\varvec{\theta }}(t)\right. \\&\left. +\int _0^1R(t,s)Y_{N-1}(s)ds\right) ^T\\= & {} {{\varvec{\varepsilon }}}(t)+ \widetilde{{\varvec{Z}}}{\varvec{\theta }}(t)+\left( \int _0^1R(t,s)Y_1(s)ds,\ldots ,\int _0^1R(t,s)Y_{N-1}(s)ds\right) ^T\\= & {} {{\varvec{\varepsilon }}}(t)+ \widetilde{{\varvec{Z}}}{\varvec{\theta }}(t)+{\varvec{R}}^*(t), \end{aligned}$$

where

$$\begin{aligned} {{\varvec{\varepsilon }}}(t)=( \varepsilon _2(t), \varepsilon _3(t),\ldots , \varepsilon _N(t))^T \end{aligned}$$

and

$$\begin{aligned} {\varvec{R}}^*(t)=\left( \int _0^1R(t,s)Y_1(s)ds,\ldots ,\int _0^1R(t,s)Y_{N-1}(s)ds\right) ^T. \end{aligned}$$

Invoking the definition of \({\varvec{P}}_N\), it is easy to show that \(({\varvec{I}}_N-{\varvec{P}}_N)\widetilde{{\varvec{Z}}}=0.\) Thus, we have

$$\begin{aligned} \hat{ {\varvec{\alpha }}}- { {\varvec{\alpha }}}= & {} {\varvec{A}}_1^{-1}\int _0^1 \left( \langle Y_1,{\varvec{B}} \rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}} \rangle \right) \otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N)[{{\varvec{\varepsilon }}}(t)+{\varvec{R}}^*(t)]dt\\= & {} {\varvec{A}}_1^{-1}\int _0^1 \left( \langle Y_1,{\varvec{B}} \rangle ,\langle Y_2,{\varvec{B}} \rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}} \rangle \right) \otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N){{\varvec{\varepsilon }}}(t)dt\\&+{\varvec{A}}_1^{-1}\int _0^1 \left( \langle Y_1,{\varvec{B}} \rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}} \rangle \right) \otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N) {\varvec{R}}^*(t)dt\\= & {} {\varvec{A}}_1^{-1}({\varvec{I}}_1+{\varvec{II}}_2), \end{aligned}$$

where

$$\begin{aligned} {\varvec{I}}_1=\int _0^1 \left( \langle Y_1,{\varvec{B}} \rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}} \rangle \right) \otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N){{\varvec{\varepsilon }}}(t)dt \end{aligned}$$

and

$$\begin{aligned} {\varvec{II}}_2=\int _0^1 \left( \langle Y_1,{\varvec{B}} \rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}} \rangle \right) \otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N) {\varvec{R}}^*(t)dt. \end{aligned}$$

Invoking the properties of B-splines and Assumption 1, routine calculations give \(\Vert {\varvec{II}}_2\Vert =O_p\left( NM_N^{-r-1}\right) \) and \(\Vert {\varvec{I}}_1\Vert =O_p\left( \sqrt{N}M_N^{-1}\right) \).

First, consider \({\varvec{A}}_1\). By the law of large numbers and Assumptions 1 and 3, one has

$$\begin{aligned} {\varvec{A}}_1= & {} \int _0^1 \left( \langle Y_1,{\varvec{B}}\rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}}\rangle \right) \otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N) {{\varvec{B}}}^T(t) \\&\otimes \left( \langle Y_1,{\varvec{B}}\rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}}\rangle \right) ^T dt\\= & {} \int _0^1\int _0^1\int _0^1 {\varvec{B}}(t)\otimes {\varvec{B}}(s) \sum _{n=1}^{N} Y_n(s)({\varvec{I}}_N-{\varvec{P}}_N) Y_n(u) {{\varvec{B}}}^T(t)\otimes {{\varvec{B}}}^T (u)dtdsdu \\= & {} N\int _0^1\int _0^1\int _0^1 {\varvec{B}}(t)\otimes {\varvec{B}}(s) K_Z(s,u) {{\varvec{B}}}^T(t)\otimes {{\varvec{B}}}^T (u)dtduds+o_p(NM_N^{-2}), \end{aligned}$$

where \(K_Z(t,s)=E[(Y_n(t)-E(Y_n(t)|Z_n))(Y_{n-1}(s)-E(Y_{n-1}(s)|Z_{n-1}))]\). Combining \({\varvec{I}}_1, {\varvec{II}}_2\) with the properties of B-splines, one has \(\Vert \hat{ {\varvec{\alpha }}}- { {\varvec{\alpha }}}\Vert =O_p(M_N^{-r+1})+O_p(M_NN^{-1/2}).\) By the triangle inequality and Lemma 1, we have

$$\begin{aligned} \parallel \hat{\beta }(t,s)-\beta (t,s) \parallel ^2\le & {} \parallel \hat{\beta }(t,s)-\beta ^*(t,s) \parallel ^2 + \parallel \beta ^*(t,s)-\beta (t,s) \parallel ^2\\\le & {} M_N^{-2}\Vert \hat{ {\varvec{\alpha }}}- { {\varvec{\alpha }}}\Vert ^2+O(M_N^{-2r})\\= & {} O_p(M_N^{-2r})+O_p(N^{-1})+O(M_N^{-2r})\\= & {} O_p(M_N^{-2r}). \end{aligned}$$

\(\square \)
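The annihilation identity \(({\varvec{I}}_N-{\varvec{P}}_N)\widetilde{{\varvec{Z}}}=0\) used above is the standard property of the least-squares projection onto the columns of \(\widetilde{{\varvec{Z}}}\); assuming \({\varvec{P}}_N\) is that hat matrix, a quick numeric check with illustrative dimensions is:

```python
import numpy as np

# Numeric check (illustrative dimensions): the projection P = Z (Z'Z)^{-1} Z'
# satisfies (I - P) Z = 0, the identity used to drop the theta(t) term
# in the proof of Theorem 1.  Z here is a stand-in for Z-tilde.
rng = np.random.default_rng(1)
N, p = 100, 3
Z = rng.normal(size=(N, p))
P = Z @ np.linalg.inv(Z.T @ Z) @ Z.T
residual = (np.eye(N) - P) @ Z
print(np.max(np.abs(residual)))         # numerically zero
```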

Proof of Theorem 2

By the proof of Theorem 1, we have

$$\begin{aligned} \sqrt{\frac{N}{M_N^2}}\left( \hat{\beta }(t,s)-\beta ^*(t,s)\right)= & {} \sqrt{\frac{N}{M_N^2}}{\varvec{B}}^T(t)\otimes {\varvec{B}}^T (s) (\hat{ {\varvec{\alpha }}}- { {\varvec{\alpha }}})\\= & {} \sqrt{\frac{M_N^2}{N}}{\varvec{B}}^T(t)\otimes {\varvec{B}}^T (s) {\varvec{\Sigma }}^{-1}_N({\varvec{I}}_1+{\varvec{II}}_2)+o_p(1)\\= & {} R_1(t,s)+R_2(t,s)+o_p(1), \end{aligned}$$

where

$$\begin{aligned} {\varvec{\Sigma }}_N= & {} M_N^2\int _0^1\int _0^1\int _0^1 {\varvec{B}}(t)\otimes {\varvec{B}}(s) K_Z(s,u) {{\varvec{B}}}^T (t)\otimes {{\varvec{B}}}^T(u)dtduds,\\ R_1(t,s)= & {} \sqrt{\frac{M_N^2}{N}} {\varvec{B}}^T(t)\otimes {\varvec{B}}^T (s) {\varvec{\Sigma }}^{-1}_N\int _0^1 \left( \langle Y_1,{\varvec{B}} \rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}} \rangle \right) \\&\otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N){{\varvec{\varepsilon }}}(t)dt, \end{aligned}$$

and

$$\begin{aligned} R_2(t,s)= & {} \sqrt{\frac{M_N^2}{N}} {\varvec{B}}^T(t)\otimes {\varvec{B}}^T (s) {\varvec{\Sigma }}^{-1}_N \int _0^1 \left( \langle Y_1,{\varvec{B}} \rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}} \rangle \right) \\&\otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N) {\varvec{R}}^*(t)dt. \end{aligned}$$

First, consider \(R_1(t,s).\) By the central limit theorem and Assumption 2, one has \(R_1(t,s){\mathop {\longrightarrow }\limits ^{L}}{\varvec{G}}(t,s),\) where \({\varvec{G}}(t,s)\) is a Gaussian process with mean zero and covariance

$$\begin{aligned} \sigma ^2(t, s) =\lim _{N \rightarrow \infty } M_N^2 {\varvec{B}}^T(t)\otimes {\varvec{B}}^T (s) {\varvec{\Sigma }}^{-1}_N {\varvec{H}} {\varvec{\Sigma }}^{-1}_N{\varvec{B}}(t)\otimes {\varvec{B}} (s) \end{aligned}$$

with

$$\begin{aligned} {\varvec{H}}=\int _0^1\int _0^1{\varvec{B}} (s)C_\varepsilon (t,s){\varvec{B}}^T(s)dtds\otimes \int _0^1\int _0^1{\varvec{B}} (s)K_Z(t,s){\varvec{B}}^T(s)dtds, \end{aligned}$$

where \(C_\varepsilon (t,s)=\text {Cov}(\varepsilon _n(t),\varepsilon _n(s)).\)

Next, consider \(R_2(t,s).\) Invoking Theorem 1 and \(N/M_N^{2r+2}=o(1)\), we have

$$\begin{aligned} \Vert R_2(t,s)\Vert= & {} \left\| \sqrt{\frac{M_N^2}{N}} {\varvec{B}}^T(t)\otimes {\varvec{B}}^T (s) {\varvec{\Sigma }}^{-1}_N \right\| \left\| \int _0^1 \left( \langle Y_1,{\varvec{B}} \rangle ,\ldots ,\langle Y_{N-1},{\varvec{B}} \rangle \right) \right. \\&\left. \otimes {\varvec{B}}(t)({\varvec{I}}_N-{\varvec{P}}_N) {\varvec{R}}^*(t)dt\right\| \\= & {} \left\| \sqrt{\frac{M_N^2}{N}} {\varvec{B}}^T(t)\otimes {\varvec{B}}^T (s) {\varvec{\Sigma }}^{-1}_N \right\| \Vert {\varvec{II}}_2\Vert \\= & {} O_p\left( \sqrt{N}M_N^{-r-1}\right) \\= & {} o_p(1). \end{aligned}$$

Combining the asymptotic normality of \(R_1(t,s)\) with the convergence rate of \(R_2(t,s)\), one has

$$\begin{aligned} \sqrt{\frac{N}{M_N^2}}\left( \hat{\beta }(t,s)-\beta ^*(t,s)\right)&= R_1(t,s)+R_2(t,s)+o_p(1)\\&= R_1(t,s)+o_p(1)\\&{\mathop {\longrightarrow }\limits ^{L}}{\varvec{G}}(t,s). \end{aligned}$$

This completes the proof. \(\square \)

Proof of Theorem 3

Invoking \(\widetilde{{\varvec{Y}}}(t)={\varvec{\eta }}_{\beta }(t)+\widetilde{{\varvec{Z}}}{\varvec{\theta }}(t)+{\varvec{\varepsilon }}(t)\), one has

$$\begin{aligned} \sqrt{N}[{\varvec{\hat{\theta }(t)}}-{\varvec{\theta }}(t)]= & {} \sqrt{N}(\widetilde{{\varvec{Z}}}^T \widetilde{{\varvec{Z}}})^{-1}\widetilde{{\varvec{Z}}}^T[\widetilde{{\varvec{Y}}}(t)- \hat{ {\varvec{\eta }}}_{\hat{\beta }}(t)-\widetilde{{\varvec{Z}}}{\varvec{\theta }}(t)]\\= & {} \sqrt{N}(\widetilde{{\varvec{Z}}}^T \widetilde{{\varvec{Z}}})^{-1}\widetilde{{\varvec{Z}}}^T[{\varvec{\eta }}_{\beta }(t)-{\varvec{\hat{{\varvec{\eta }}}}}_{\hat{\beta }}(t)+{\varvec{\varepsilon }}(t)]\\= & {} R_1(t)+R_2(t), \end{aligned}$$

where \(R_1(t)=\sqrt{N}(\widetilde{{\varvec{Z}}}^T \widetilde{{\varvec{Z}}})^{-1}\widetilde{{\varvec{Z}}}^T[{\varvec{\eta }}_{\beta }(t)-{\varvec{\hat{{\varvec{\eta }}}}}_{\hat{\beta }}(t)]\) and \(R_2(t)=\sqrt{N}(\widetilde{{\varvec{Z}}}^T \widetilde{{\varvec{Z}}})^{-1}\widetilde{{\varvec{Z}}}^T{\varvec{\varepsilon }}(t)\).

First, consider \(R_1(t)\). By Assumption 1 and Theorem 1, we have \(\Vert {\varvec{\eta }}_{\beta }(t)-{\varvec{\hat{{\varvec{\eta }}}}}_{\hat{\beta }}(t)\Vert =O_p(NM_N^{-r}).\) Consequently, we have

$$\begin{aligned} \Vert R_1\Vert ^2= & {} N \int _0^1 ({\varvec{\eta }}(t)- \hat{ {\varvec{\eta }}}(t))^T\widetilde{{\varvec{Z}}}(\widetilde{{\varvec{Z}}}^T \widetilde{{\varvec{Z}}})^{-1}(\widetilde{{\varvec{Z}}}^T \widetilde{{\varvec{Z}}})^{-1}\widetilde{{\varvec{Z}}}^T[{\varvec{\eta }}(t)- \hat{ {\varvec{\eta }}}(t)]dt\\= & {} O_p(1)\int _0^1 ({\varvec{\eta }}(t)- \hat{ {\varvec{\eta }}}(t))^T({\varvec{\eta }}(t)- \hat{ {\varvec{\eta }}}(t))dt\\= & {} O_p(M_N^{-r})\\= & {} o_p(1). \end{aligned}$$

Next, consider \(R_2(t)\). By the CLT for Hilbert spaces, we have \(\frac{1}{\sqrt{N}}\sum _{n=1}^N {\varvec{Z}}_n \varepsilon _n(t) {\mathop {\longrightarrow }\limits ^{L}}{\varvec{B}}_1(t),\) where \({\varvec{B}}_1(t)\) is a Gaussian process with mean zero and covariance function \(C(t, s) = \text {Cov}(\varepsilon _n(t),\varepsilon _n(s))\). Denote \({\varvec{\Sigma }}=\text {E}[{\varvec{Z}}{\varvec{Z}}^T].\) By the strong law of large numbers and the CLT for Hilbert spaces, we have

$$\begin{aligned} R_2(t)= & {} \sqrt{N} (\widetilde{{\varvec{Z}}}^T \widetilde{{\varvec{Z}}})^{-1}\widetilde{{\varvec{Z}}}^T{\varvec{\varepsilon }}(t)\\= & {} {\varvec{\Sigma }}^{-1}\frac{1}{\sqrt{N}}\sum _{n=1}^N {\varvec{Z}}_n \varepsilon _n(t)+o_p(1)\\= & {} {\varvec{\Sigma }}^{-1} {\varvec{B}}_1(t)+o_p(1). \end{aligned}$$

Combining this with \(\Vert R_1\Vert =o_p(1),\) one has \(\sqrt{N}[{\varvec{\hat{\theta }}}(t)-{\varvec{\theta }}(t)] {\mathop {\longrightarrow }\limits ^{L}} {\varvec{G}}(t),\) where \({\varvec{G}}(t)\) is a Gaussian process with mean zero and covariance function \(C_{{\varvec{B}}}(t, s) = {\varvec{\Sigma }}^{-1} \text {Cov}(\varepsilon _n(t),\varepsilon _n(s))\). \(\square \)


About this article


Cite this article

Du, J., Zhao, H. & Zhang, Z. Dynamic partially functional linear regression model. Stat Methods Appl 28, 679–693 (2019). https://doi.org/10.1007/s10260-019-00457-x

