Exact Quadratic Error of the Local Linear Regression Operator Estimator for Functional Covariates

Conference paper
In: Functional Statistics and Applications

Part of the book series: Contributions to Statistics

Abstract

In this paper, we study the asymptotic behavior of the nonparametric local linear estimator of the regression operator when the covariates are curves. Under some general conditions, we give the exact expression involved in the leading terms of the quadratic error of this estimator. The results obtained confirm the superiority of local linear modelling over the kernel method in the functional statistics framework.


References

1. Barrientos-Marin, J., Ferraty, F., Vieu, P.: Locally modelled regression and functional data. J. Nonparametr. Stat. 22, 617–632 (2010)

2. Baíllo, A., Grané, A.: Local linear regression for functional predictor and scalar response. J. Multivar. Anal. 100, 102–111 (2009)

3. Chu, C.-K., Marron, J.-S.: Choosing a kernel regression estimator. Stat. Sci. 6, 404–436 (1991)

4. Davidian, M., Lin, X., Wang, J.L.: Introduction. Stat. Sinica 14, 613–614 (2004)

5. Demongeot, J., Laksaci, A., Madani, F., Rachdi, M.: Functional data: local linear estimation of the conditional density and its application. Statistics 47, 26–44 (2013)

6. El Methni, M., Rachdi, M.: Local weighted average estimation of the regression operator for functional data. Commun. Stat. Theory Methods 40, 3141–3153 (2011)

7. Fan, J.: Design-adaptive nonparametric regression. J. Am. Stat. Assoc. 87, 998–1004 (1992)

8. Fan, J., Gijbels, I.: Local Polynomial Modelling and Its Applications. Monographs on Statistics and Applied Probability, vol. 66. Chapman & Hall, London (1996)

9. Fan, J., Yao, Q.: Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, New York (2003)

10. Ferraty, F.: High-dimensional data: a fascinating statistical challenge. J. Multivar. Anal. 101, 305–306 (2010)

11. Ferraty, F., Romain, Y.: The Oxford Handbook of Functional Data Analysis. Oxford University Press, Oxford (2011)

12. Ferraty, F., Vieu, P.: Nonparametric Functional Data Analysis: Theory and Practice. Springer Series in Statistics. Springer, New York (2006)

13. Ferraty, F., Mas, A., Vieu, P.: Nonparametric regression on functional data: inference and practical aspects. Aust. N. Z. J. Stat. 49, 267–286 (2007)

14. Ferraty, F., Laksaci, A., Tadj, A., Vieu, P.: Rate of uniform consistency for nonparametric estimates with functional variables. J. Stat. Plan. Inference 140, 335–352 (2010)

15. González Manteiga, W., Vieu, P.: Statistics for functional data. Comput. Stat. Data Anal. 51, 4788–4792 (2007)

16. Sarda, P., Vieu, P.: Kernel regression. In: Schimek, M.G. (ed.) Smoothing and Regression: Approaches, Computation and Application. Wiley Series in Probability and Statistics, pp. 43–70. Wiley, Chichester, New York (2000)

17. Valderrama, M.J.: An overview to modelling functional data. Comput. Stat. 22, 331–334 (2007)

Acknowledgements

We are indebted to Campus France CMCU for supporting us with the grant PHC Maghreb SCIM.

Author information

Correspondence to Ali Laksaci.

Appendix: Proofs

In what follows, when no confusion is possible, we denote by C and C′ some strictly positive generic constants. Moreover, for all \(i,j = 1,\dots,n\) and for a fixed \((x,y) \in \mathcal{F}\times \mathbb{R}\), we set:

$$\displaystyle{K_{i} = K(h^{-1}\delta (x,X_{i})),\quad \beta _{i} =\beta (X_{i},x)\quad \mbox{ and }\quad W_{ij}(x) = W_{ij}.}$$
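The weights \(W_{ij}\) and the quantities \(\hat{g}\) and \(\hat{f}\) are defined in the body of the paper. As a reading aid for this appendix, recall that in the locally modelled regression of [1] the weights take the form

$$\displaystyle{W_{ij} =\beta _{i}\left (\beta _{i} -\beta _{j}\right )K_{i}K_{j},}$$

a form consistent with the moment identities used below, and that the estimator splits as \(\hat{m}(x) =\hat{g}(x)/\hat{f}(x)\) with

$$\displaystyle{\hat{g}(x) = \frac{1} {n(n - 1)\mathbb{E}[W_{12}]}\sum _{\stackrel{i,j=1}{i\not =j}}^{n}W_{ij}Y _{j}\quad \mbox{ and }\quad \hat{f}(x) = \frac{1} {n(n - 1)\mathbb{E}[W_{12}]}\sum _{\stackrel{i,j=1}{i\not =j}}^{n}W_{ij},}$$

so that, in particular, \(\mathbb{E}[\hat{f}(x)] = 1\).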

Proof of Lemma 1

We have:

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\hat{g}(x)\right ]& =& \mathbb{E}\left [ \frac{1} {n(n - 1)\mathbb{E}[W_{12}]}\sum _{\stackrel{i,j=1}{i\not =j}}^{n}W_{ij}Y _{j}\right ] \\ & =& \frac{\mathbb{E}[W_{12}Y _{2}]} {\mathbb{E}[W_{12}]} = \frac{1} {\mathbb{E}[W_{12}]}\mathbb{E}\left [W_{12}\mathbb{E}[Y _{2}\vert X_{2}]\right ].{}\end{array}$$
(3)

Then, it follows from (3) and the definition of the operator m that:

$$\displaystyle{\mathbb{E}\left [\hat{g}(x)\right ] = \frac{1} {\mathbb{E}[W_{12}]}\mathbb{E}\left [W_{12}m(X_{2})\right ].}$$

Now, by the same arguments as those used in [13] for the estimation of the regression operator, we show that:

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [W_{12}m(X_{2})\right ]& =& m(x)\mathbb{E}[W_{12}] + \mathbb{E}\left [W_{12}\left (m(X_{2}) - m(x)\right )\right ] {}\\ & =& m(x)\mathbb{E}[W_{12}] + \mathbb{E}\left [W_{12}\mathbb{E}\left [m(X_{2}) - m(x)\vert \beta (x,X_{2})\right ]\right ] {}\\ & =& m(x)\mathbb{E}[W_{12}] + \mathbb{E}\left [W_{12}\varPsi \left (\beta (x,X_{2})\right )\right ] {}\\ \end{array}$$

and since \(\mathbb{E}\left [\beta (x,X_{2})W_{12}\right ] = 0\) and \(\varPsi (0) = 0\), where \(\varPsi (s) = \mathbb{E}\left [m(X_{2}) - m(x)\,\vert \,\beta (x,X_{2}) = s\right ]\), a second-order Taylor expansion of \(\varPsi\) around 0 (whose first-order term vanishes) yields:

$$\displaystyle{\begin{array}{cc} \mathbb{E}\left [W_{12}\varPsi \left (\beta (x,X_{2})\right )\right ]& = \frac{1} {2}\varPsi ^{{\prime\prime}}(0)\mathbb{E}\left [\beta ^{2}(x,X_{ 2})W_{12}\right ] + o\left (\mathbb{E}\left [\beta ^{2}(x,X_{ 2})W_{12}\right ]\right ). \end{array} }$$

Then:

$$\displaystyle{ \mathbb{E}\left [\hat{g}(x)\right ] = m(x) +\varPsi ^{{\prime\prime}}(0)\frac{\mathbb{E}\left [\beta ^{2}(x,X_{2})W_{12}\right ]} {2\mathbb{E}[W_{12}]} + o\left (\frac{\mathbb{E}\left [\beta ^{2}(x,X_{2})W_{12}\right ]} {\mathbb{E}[W_{12}]} \right ). }$$

Moreover, it is clear that:

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\beta ^{2}(x,X_{2})W_{12}\right ]& =& \left (\mathbb{E}\left [K_{1}\beta _{1}^{2}\right ]\right )^{2} - \mathbb{E}[K_{1}\beta _{1}]\,\mathbb{E}[K_{1}\beta _{1}^{3}] {}\\ \mathbb{E}\left [W_{12}\right ]& =& \mathbb{E}[K_{1}\beta _{1}^{2}]\,\mathbb{E}[K_{1}] - (\mathbb{E}[K_{1}\beta _{1}])^{2}. {}\\ \end{array}$$
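These identities follow by expanding the weight \(W_{12} =\beta _{1}(\beta _{1} -\beta _{2})K_{1}K_{2}\) recalled above and using the fact that \(X_{1}\) and \(X_{2}\) are independent and identically distributed; for instance:

$$\displaystyle{\mathbb{E}[W_{12}] = \mathbb{E}[\beta _{1}^{2}K_{1}K_{2}] - \mathbb{E}[\beta _{1}\beta _{2}K_{1}K_{2}] = \mathbb{E}[K_{1}\beta _{1}^{2}]\,\mathbb{E}[K_{1}] - (\mathbb{E}[K_{1}\beta _{1}])^{2}.}$$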

Furthermore, under Assumption (H4), we obtain:

$$\displaystyle{\mbox{ for all }a > 0,\quad \mathbb{E}[K_{1}^{a}\beta _{1}] \leq C\int _{B(x,h)}\beta (u,x)\,dP_{X}(u).}$$

So, using the last part of Assumption (H3), we get:

$$\displaystyle{h\mathbb{E}[K_{1}^{a}\beta _{ 1}] = o\left (\int _{B(x,h)}\beta ^{2}(u,x)dP_{ X}(u)\right ) = o(h^{2}\phi _{ x}(h))}$$

which allows us to write:

$$\displaystyle{ \mathbb{E}[K_{1}^{a}\beta _{ 1}] = o(h\phi _{x}(h)). }$$
(4)

Moreover, for all b > 1, we can write:

$$\displaystyle{\mathbb{E}[K_{1}^{a}\beta _{ 1}^{b}] = \mathbb{E}[K_{ 1}^{a}\delta ^{b}(x,X_{ 1})] + \mathbb{E}\left [K_{1}(\beta ^{b}(X_{ 1},x) -\delta ^{b}(x,X_{ 1}))\right ].}$$

Then, the second part of Assumption (H3) implies that:

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\left [K_{1}^{a}(\beta ^{b}(X_{1},x) -\delta ^{b}(x,X_{1}))\right ] {}\\ & & \ \ = \mathbb{E}\left [K_{1}^{a}I_{B(x,h)}(\beta (X_{1},x) -\delta (x,X_{1}))\sum _{l=0}^{b-1}(\beta (X_{1},x))^{b-1-l}(\delta (x,X_{1}))^{l}\right ] {}\\ & & \ \ \leq \sup _{u\in B(x,h)}\vert \beta (u,x) -\delta (x,u)\vert \sum _{l=0}^{b-1}\mathbb{E}\left [K_{1}^{a}I_{B(x,h)}\vert \beta (X_{1},x)\vert ^{b-1-l}\vert \delta (x,X_{1})\vert ^{l}\right ], {}\\ \end{array}$$

whereas the first part of Assumption (H3) gives:

$$\displaystyle{I_{B(x,h)}\vert \beta (X_{1},x)\vert \leq I_{B(x,h)}\vert \delta (x,X_{1})\vert.}$$

Thus, it follows that:

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [K_{1}^{a}(\beta ^{b}(X_{1},x) -\delta ^{b}(x,X_{1}))\right ]& \leq & b\sup _{u\in B(x,h)}\vert \beta (u,x) -\delta (x,u)\vert \,\mathbb{E}[K_{1}^{a}\vert \delta (x,X_{1})\vert ^{b-1}] {}\\ & \leq & b\sup _{u\in B(x,h)}\vert \beta (u,x) -\delta (x,u)\vert \,h^{b-1}\mathbb{E}[K_{1}^{a}] {}\\ & \leq & C\,b\sup _{u\in B(x,h)}\vert \beta (u,x) -\delta (x,u)\vert \,h^{b-1}\phi _{x}(h) {}\\ \end{array}$$

which allows us to write:

$$\displaystyle{\mathbb{E}[K_{1}^{a}\beta _{ 1}^{b}] = \mathbb{E}[K_{ 1}^{a}\delta ^{b}(x,X_{ 1})] + o(h^{b}\phi _{ x}(h)).}$$

Concerning the term \(\mathbb{E}[K_{1}^{a}\delta ^{b}]\), we write:

$$\displaystyle\begin{array}{rcl} h^{-b}\mathbb{E}[K_{ 1}^{a}\delta ^{b}]& =& \int v^{b}K^{a}(v)dP_{ X}^{h^{-1}\delta (x,X_{ 1})}(v) {}\\ & =& \int _{-1}^{1}\left [K^{a}(1) -\int _{ v}^{1}(u^{b}K^{a}(u))^{{\prime}}du\right ]dP_{ X}^{h^{-1}\delta (x,X_{ 1})}(v) {}\\ & =& K^{a}(1)\phi _{ x}(h) -\int _{-1}^{1}(u^{b}K^{a}(u))^{{\prime}}\phi _{ x}(uh,h)du {}\\ & =& \phi _{x}(h)\left (K^{a}(1) -\int _{ -1}^{1}(u^{b}K^{a}(u))^{{\prime}}\frac{\phi _{x}(uh,h)} {\phi _{x}(h)} du\right ). {}\\ \end{array}$$

Finally, under Assumption (H1), we get:

$$\displaystyle{ \mathbb{E}[K_{1}^{a}\beta _{ 1}^{b}] = h^{b}\phi _{ x}(h)\left (K^{a}(1) -\int _{ -1}^{1}(u^{b}K^{a}(u))^{{\prime}}\chi _{ x}(u)du\right ) + o(h^{b}\phi _{ x}(h)). }$$
(5)
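To make the next step explicit (the shorthand \(C_{0}\) and \(C_{2}\) is introduced here only for readability), apply (5) with \(a = 1\) and \(b = 2\) to \(\mathbb{E}[K_{1}\beta _{1}^{2}]\), its classical \(b = 0\) counterpart (see, e.g., [12]) to \(\mathbb{E}[K_{1}]\), and use (4) together with (5) (for \(b = 3\)) to discard every product involving \(\mathbb{E}[K_{1}\beta _{1}]\):

$$\displaystyle{\mathbb{E}[K_{1}\beta _{1}^{2}] = h^{2}\phi _{x}(h)\left (C_{2} + o(1)\right )\quad \mbox{ and }\quad \mathbb{E}[K_{1}] =\phi _{x}(h)\left (C_{0} + o(1)\right ),}$$

where \(C_{2} = K(1) -\int _{-1}^{1}(u^{2}K(u))^{{\prime}}\chi _{x}(u)du\) and \(C_{0} = K(1) -\int _{-1}^{1}K^{{\prime}}(u)\chi _{x}(u)du\), so that the ratio below behaves as \(h^{2}C_{2}/C_{0} + o(h^{2})\).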

It follows that:

$$\displaystyle\begin{array}{rcl} \frac{\mathbb{E}\left [\beta ^{2}(x,X_{2})W_{12}\right ]} {\mathbb{E}[W_{12}]} & =& h^{2}\left [\frac{K(1) -\int _{-1}^{1}(u^{2}K(u))^{{\prime}}\chi _{ x}(u)du} {K(1) -\int _{-1}^{1}K^{{\prime}}(u)\chi _{ x}(u)du} \right ] + o(h^{2}). {}\\ \end{array}$$

Consequently:

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\hat{g}(x)\right ]& =& m(x) + \frac{h^{2}} {2} \varPsi ^{{\prime\prime}}(0)\left [\frac{K(1) -\int _{-1}^{1}(u^{2}K(u))^{{\prime}}\chi _{x}(u)du} {K(1) -\int _{-1}^{1}K^{{\prime}}(u)\chi _{x}(u)du} \right ] + o(h^{2}). {}\\ \end{array}$$

 ■ 
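Before turning to the variance, and purely as an illustration (this sketch is ours, not the authors' code), the estimator \(\hat{m}(x) =\hat{g}(x)/\hat{f}(x)\) analysed above can be computed directly from its definition. In the sketch below the semi-metric \(\delta\) is taken as an \(L^{2}\) distance between discretized curves, \(\beta (u,x) =\delta (x,u)\) is used as a simple admissible locating function, and the kernel, the data layout, and all names are illustrative assumptions:

```python
import numpy as np

def epanechnikov(t):
    """Kernel supported on [0, 1] (delta >= 0 here); an illustrative choice."""
    return 0.75 * (1.0 - t**2) * (t <= 1.0)

def local_linear_functional(X, Y, x0, h):
    """Sketch of m_hat(x0) = sum_{i != j} W_ij Y_j / sum_{i != j} W_ij,
    with W_ij = beta_i (beta_i - beta_j) K_i K_j as recalled above.
    X: (n, p) discretized curves, Y: (n,) responses, x0: (p,) curve, h > 0."""
    delta = np.sqrt(((X - x0) ** 2).mean(axis=1))  # L2 semi-metric (assumption)
    beta = delta                                   # beta(u, x) = delta(x, u) (assumption)
    k = epanechnikov(delta / h)                    # K_i = K(h^{-1} delta(x, X_i))
    # W[i, j] = beta_i * (beta_i - beta_j) * K_i * K_j, with the diagonal removed
    W = (beta * k)[:, None] * k[None, :] * (beta[:, None] - beta[None, :])
    np.fill_diagonal(W, 0.0)
    den = W.sum()
    return np.nan if den == 0.0 else float((W * Y[None, :]).sum() / den)

# Toy usage: a scalar response driven by the mean level of each curve.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50)).cumsum(axis=1) / 50.0  # 200 rough curves on a 50-point grid
Y = X.mean(axis=1) + 0.1 * rng.normal(size=200)
print(local_linear_functional(X, Y, X[0], h=0.5))
```

The bandwidth \(h\) plays the role it has in the lemmas above; in practice it would be chosen by cross-validation, which is beyond the scope of this sketch.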

Proof of Lemma 2

For this lemma, we use the same ideas as Sarda and Vieu [16]. Since \(\mathbb{E}[\hat{f}(x)] = 1\) by construction, a first-order expansion of \(\hat{m}(x) =\hat{g}(x)/\hat{f}(x)\) shows that:

$$\displaystyle\begin{array}{rcl} \mathrm{Var}\left [\hat{m}(x)\right ]& =& \mathrm{Var}\left [\hat{g}(x)\right ] - 2(\mathbb{E}\hat{g}(x))\mathrm{Cov}(\hat{g}(x),\hat{f}(x)) {}\\ & & +(\mathbb{E}\hat{g}(x))^{2}\mathrm{Var}(\,\hat{f}(x)) + o\left ( \frac{1} {n\phi _{x}(h)}\right ). {}\\ \end{array}$$

It is clear that:

$$\displaystyle\begin{array}{rcl} \mathrm{Var}\left (\hat{g}(x)\right )& =& \frac{1} {(n(n - 1)\mathbb{E}[W_{12}])^{2}}\mathrm{Var}\left (\sum _{\stackrel{i,j=1}{i\not =j}}^{n}W_{ij}Y _{j}\right ) {}\\ & =& \frac{1} {\left (n(n - 1)\mathbb{E}[W_{12}]\right )^{2}}\left (n(n - 1)\mathbb{E}[W_{12}^{2}Y _{2}^{2}] + n(n - 1)\mathbb{E}[W_{12}W_{21}Y _{2}Y _{1}]\right. {}\\ & & \left. + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{13}Y _{2}Y _{3}] + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{23}Y _{2}Y _{3}]\right. {}\\ & & \left. + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{31}Y _{2}Y _{1}] + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{32}Y _{2}^{2}]\right. {}\\ & & \left. - n(n - 1)(4n - 6)(\mathbb{E}[W_{12}Y _{2}])^{2}\right ). {}\\ \end{array}$$

Observe that the terms in the first line are negligible compared with the other terms, which are multiplied by \(n(n - 1)(n - 2)\). Furthermore,

$$\displaystyle\begin{array}{rcl} \mathbb{E}[W_{12}^{2}Y _{ 2}^{2}]& =& O(h^{4}\phi _{ x}^{2}(h)), {}\\ \mathbb{E}[W_{12}W_{21}Y _{1}Y _{2}]& =& O(h^{4}\phi _{ x}^{2}(h)), {}\\ \mathbb{E}[W_{12}W_{13}Y _{2}Y _{3}]& =& (m(x))^{2}\mathbb{E}[\beta _{ 1}^{4}K_{ 1}^{2}](\mathbb{E}[K_{ 1}])^{2} + o(h^{4}\phi _{ x}^{3}(h)), {}\\ \mathbb{E}[W_{12}W_{23}Y _{2}Y _{3}]& =& (m(x))^{2}\mathbb{E}[\beta _{ 1}^{2}K_{ 1}](\mathbb{E}[\beta _{1}^{2}K_{ 1}^{2}]\mathbb{E}[K_{ 1}]) + o(h^{4}\phi _{ x}^{3}(h)), {}\\ \mathbb{E}[W_{12}W_{31}Y _{2}Y _{1}]& =& (m(x))^{2}\mathbb{E}[\beta _{ 1}^{2}K_{ 1}](\mathbb{E}[\beta _{1}^{2}K_{ 1}^{2}]\mathbb{E}[K_{ 1}]) + o(h^{4}\phi _{ x}^{3}(h)), {}\\ \mathbb{E}[W_{12}W_{32}Y _{2}^{2}]& =& (m_{ 2}(x))(\mathbb{E}[\beta _{1}^{2}K_{ 1}])^{2}(\mathbb{E}[K_{ 1}^{2}]) + o(h^{4}\phi _{ x}^{3}(h)), {}\\ \mathbb{E}[W_{12}Y _{2}]& =& O(h^{2}\phi _{ x}^{2}(h)). {}\\ \end{array}$$

Therefore, the leading term in the expression of \(\mbox{ Var}\left (\hat{g}(x)\right )\) is:

$$\displaystyle\begin{array}{rcl} & & \frac{n(n - 1)(n - 2)} {(n(n - 1)\mathbb{E}[W_{12}])^{2}}\left ((m(x))^{2}\left (\mathbb{E}[\beta _{1}^{4}K_{1}^{2}](\mathbb{E}[K_{1}])^{2} + 2\mathbb{E}[\beta _{1}^{2}K_{1}](\mathbb{E}[\beta _{1}^{2}K_{1}^{2}]\mathbb{E}[K_{1}])\right )\right. {}\\ & & \left.\qquad + (m_{2}(x))(\mathbb{E}[\beta _{1}^{2}K_{1}])^{2}(\mathbb{E}[K_{1}^{2}]) + o(h^{4}\phi _{x}^{3}(h))\right ). {}\\ \end{array}$$

Concerning the covariance term, in the same fashion we have:

$$\displaystyle\begin{array}{rcl} \text{Cov}(\hat{g}(x),\hat{f}(x))& =& \frac{1} {(n(n - 1)\mathbb{E}[W_{12}])^{2}}\text{Cov}\left (\sum _{\stackrel{i,j=1}{i\not =j}}^{n}W_{ij}Y _{j},\sum _{\stackrel{i^{{\prime}},j^{{\prime}}=1}{i^{{\prime}}\not =j^{{\prime}}}}^{n}W_{i^{{\prime}}j^{{\prime}}}\right ) {}\\ & =& \frac{1} {\left (n(n - 1)\mathbb{E}[W_{12}]\right )^{2}}\left [n(n - 1)\mathbb{E}[W_{12}^{2}Y _{2}] + n(n - 1)\mathbb{E}[W_{12}W_{21}Y _{2}]\right. {}\\ & & \left. + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{13}Y _{2}] + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{23}Y _{2}]\right. {}\\ & & \left. + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{31}Y _{2}] + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{32}Y _{2}]\right. {}\\ & & \left. - n(n - 1)(4n - 6)\mathbb{E}[W_{12}Y _{2}]\,\mathbb{E}[W_{12}]\right ] {}\\ \end{array}$$

with

$$\displaystyle\begin{array}{rcl} \mathbb{E}[W_{12}^{2}Y _{2}]& =& O(h^{4}\phi _{x}^{2}(h)), {}\\ \mathbb{E}[W_{12}W_{21}Y _{2}]& =& O(h^{4}\phi _{x}^{2}(h)), {}\\ \mathbb{E}[W_{12}W_{13}Y _{2}]& =& m(x)\,\mathbb{E}[\beta _{1}^{4}K_{1}^{2}](\mathbb{E}[K_{1}])^{2} + o(h^{4}\phi _{x}^{3}(h)), {}\\ \mathbb{E}[W_{12}W_{23}Y _{2}]& =& m(x)\,\mathbb{E}[\beta _{1}^{2}K_{1}](\mathbb{E}[\beta _{1}^{2}K_{1}^{2}]\mathbb{E}[K_{1}]) + o(h^{4}\phi _{x}^{3}(h)), {}\\ \mathbb{E}[W_{12}W_{31}Y _{2}]& =& m(x)\,\mathbb{E}[\beta _{1}^{2}K_{1}](\mathbb{E}[\beta _{1}^{2}K_{1}^{2}]\mathbb{E}[K_{1}]) + o(h^{4}\phi _{x}^{3}(h)), {}\\ \mathbb{E}[W_{12}W_{32}Y _{2}]& =& m(x)\,(\mathbb{E}[\beta _{1}^{2}K_{1}])^{2}(\mathbb{E}[K_{1}^{2}]) + o(h^{4}\phi _{x}^{3}(h)), {}\\ \mathbb{E}[W_{12}Y _{2}]& =& O(h^{2}\phi _{x}^{2}(h)). {}\\ \end{array}$$

Therefore, the leading term in the expression of \(\text{Cov}(\hat{g}(x),\hat{f}(x))\) is:

$$\displaystyle\begin{array}{rcl} & & \frac{n(n - 1)(n - 2)} {(n(n - 1)\mathbb{E}[W_{12}])^{2}}\left (m(x)\left (\mathbb{E}[\beta _{1}^{4}K_{ 1}^{2}](\mathbb{E}[K_{ 1}])^{2} + 2\mathbb{E}[\beta _{ 1}^{2}K_{ 1}](\mathbb{E}[\beta _{1}^{2}K_{ 1}^{2}]\mathbb{E}[K_{ 1}])\right.\right. {}\\ & & \left.\left.\qquad + (\mathbb{E}[\beta _{1}^{2}K_{ 1}])^{2}(\mathbb{E}[K_{ 1}^{2}])\right ) + o(h^{4}\phi _{ x}^{3}(h))\right ). {}\\ \end{array}$$

Finally, for \(\text{Var}\left (\hat{f}(x)\right )\) we have:

$$\displaystyle\begin{array}{rcl} \text{Var}\left (\hat{f}(x)\right )& =& \frac{1} {\left (n(n - 1)\mathbb{E}[W_{12}]\right )^{2}}\Big[n(n - 1)\mathbb{E}[W_{12}^{2}] + n(n - 1)\mathbb{E}[W_{12}W_{21}] {}\\ & & + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{13}] + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{23}] {}\\ & & + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{31}] + n(n - 1)(n - 2)\mathbb{E}[W_{12}W_{32}] {}\\ & & - n(n - 1)(4n - 6)(\mathbb{E}[W_{12}])^{2}\Big] {}\\ \end{array}$$

and similarly to the previous cases:

$$\displaystyle\begin{array}{rcl} \mathbb{E}[W_{12}^{2}]& =& O(h^{4}\phi _{x}^{2}(h)), {}\\ \mathbb{E}[W_{12}W_{21}]& =& O(h^{4}\phi _{x}^{2}(h)), {}\\ \mathbb{E}[W_{12}W_{13}]& =& \mathbb{E}[\beta _{1}^{4}K_{1}^{2}](\mathbb{E}[K_{1}])^{2} + o(h^{4}\phi _{x}^{3}(h)), {}\\ \mathbb{E}[W_{12}W_{23}]& =& \mathbb{E}[\beta _{1}^{2}K_{1}](\mathbb{E}[\beta _{1}^{2}K_{1}^{2}]\mathbb{E}[K_{1}]) + o(h^{4}\phi _{x}^{3}(h)), {}\\ \mathbb{E}[W_{12}W_{31}]& =& \mathbb{E}[\beta _{1}^{2}K_{1}](\mathbb{E}[\beta _{1}^{2}K_{1}^{2}]\mathbb{E}[K_{1}]) + o(h^{4}\phi _{x}^{3}(h)), {}\\ \mathbb{E}[W_{12}W_{32}]& =& (\mathbb{E}[\beta _{1}^{2}K_{1}])^{2}(\mathbb{E}[K_{1}^{2}]) + o(h^{4}\phi _{x}^{3}(h)), {}\\ \mathbb{E}[W_{12}]& =& O(h^{2}\phi _{x}^{2}(h)). {}\\ \end{array}$$

Therefore,

$$\displaystyle\begin{array}{rcl} \text{Var}\left (\hat{m}(x)\right ) = \frac{m_{2}(x) - m^{2}(x)} {n\phi _{x}(h)} \left [\frac{K^{2}(1) -\int _{-1}^{1}(K^{2}(u))^{{\prime}}\chi _{x}(u)du} {\left (K(1) -\int _{-1}^{1}K^{{\prime}}(u)\chi _{x}(u)du\right )^{2}} \right ] + o\left ( \frac{1} {n\phi _{x}(h)}\right ),& & {}\\ \end{array}$$

which completes the proof. ■
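Combining the bias expansion of Lemma 1 with the variance expansion of Lemma 2 (and using that, since \(\mathbb{E}[\hat{f}(x)] = 1\), the bias of \(\hat{m}(x)\) inherits the leading term of \(\mathbb{E}[\hat{g}(x)] - m(x)\)), the leading terms of the quadratic error announced in the abstract read as follows; we state them here for the reader's convenience, under the conditions of the two lemmas:

$$\displaystyle{\mathbb{E}\left [(\hat{m}(x) - m(x))^{2}\right ] = B^{2}(x)h^{4} + \frac{V (x)} {n\phi _{x}(h)} + o(h^{4}) + o\left ( \frac{1} {n\phi _{x}(h)}\right ),}$$

with

$$\displaystyle{B(x) = \frac{\varPsi ^{{\prime\prime}}(0)} {2} \left [\frac{K(1) -\int _{-1}^{1}(u^{2}K(u))^{{\prime}}\chi _{x}(u)du} {K(1) -\int _{-1}^{1}K^{{\prime}}(u)\chi _{x}(u)du} \right ]\quad \mbox{ and }\quad V (x) = (m_{2}(x) - m^{2}(x))\left [\frac{K^{2}(1) -\int _{-1}^{1}(K^{2}(u))^{{\prime}}\chi _{x}(u)du} {\left (K(1) -\int _{-1}^{1}K^{{\prime}}(u)\chi _{x}(u)du\right )^{2}} \right ].}$$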

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Naceri, A., Laksaci, A., Rachdi, M. (2015). Exact Quadratic Error of the Local Linear Regression Operator Estimator for Functional Covariates. In: Ould Saïd, E., Ouassou, I., Rachdi, M. (eds) Functional Statistics and Applications. Contributions to Statistics. Springer, Cham. https://doi.org/10.1007/978-3-319-22476-3_5
