
On the local linear modelization of the conditional density for functional and ergodic data


Abstract

In this paper, we estimate the conditional density function using the local linear approach. We treat the case where the regressor takes values in a semi-metric space, the response is a scalar and the data are observed as an ergodic functional time series. Under this dependence structure, we state the almost complete consistency (a.co.), with rates, of the constructed estimator. Moreover, the usefulness of our results is illustrated through their application to conditional mode estimation.


References

  1. Attouch, M.K., Chouaf, B., Laksaci, A.: Nonparametric M-estimation for functional spatial data. Commun. Stat. Appl. Methods 19, 193–211 (2012)


  2. Baíllo, A., Grané, A.: Local linear regression for functional predictor and scalar response. J. Multivariate Anal. 100, 102–111 (2009)


  3. Barrientos, J., Ferraty, F., Vieu, P.: Locally Modelled Regression and Functional Data. J. Nonparametr. Stat. 22, 617–632 (2010)


  4. Bouanani, O., Laksaci, A., Rachdi, M., Rahmani, S.: Asymptotic normality of some conditional nonparametric functional parameters in high-dimensional statistics. Behaviormetrika 4, 1–35 (2018)


  5. Bouanani, O., Rahmani, S., Ait-Hennani, L.: Local linear conditional cumulative distribution function with mixing data. Arab. J. Math. (2019). https://doi.org/10.1007/s40065-019-0247-7


  6. Chaouch, M., Laïb, N., Ould Saïd, E.: Nonparametric M-estimation for right censored regression model with stationary ergodic data. Stat. Methodol. 33, 234–255 (2016)


  7. Dabo-Niang, S., Kaid, Z., Laksaci, A.: Asymptotic properties of the kernel estimate of spatial conditional mode when the regressor is functional. Adv. Stat. Anal. 99, 131–160 (2015)


  8. Dabo-Niang, S., Laksaci, A.: Note on conditional mode estimation for functional dependent data. Statistica 70, 83–94 (2010)


  9. Damon, J., Guillas, S.: The inclusion of exogenous variables in functional autoregressive ozone forecasting. Environmetrics 13, 759–774 (2002)


  10. Demongeot, J., Laksaci, A., Madani, F., Rachdi, M.: A fast functional locally modeled conditional density and mode for functional time-series. In: Recent Advances in Functional Data Analysis and Related Topics, Contributions to Statistics, pp. 85–90. Physica-Verlag/Springer (2011). https://doi.org/10.1007/978-3-7908-2736-1_13

  11. Demongeot, J., Laksaci, A., Madani, F., Rachdi, M.: Functional data: local linear estimation of the conditional density and its application. Statistics 47, 26–44 (2013)


  12. Demongeot, J., Laksaci, A., Rachdi, M., Rahmani, S.: On the local linear modelization of the conditional distribution for functional data. Sankhya A 76, 328–355 (2014)


  13. Didi, S., Louani, D.: Asymptotic results for the regression function estimate on continuous time stationary and ergodic data. Stat. Risk Model. 31, 129–150 (2014)


  14. Fan, J.: Design-adaptive nonparametric regression. J. Am. Stat. Assoc. 87, 998–1004 (1992)


  15. Fan, J., Gijbels, I.: Local Polynomial Modelling and its Applications. Chapman and Hall, London (1996)


  16. Ferraty, F., Vieu, P.: Nonparametric Functional Data Analysis: Theory and Practice. Springer-Verlag, Berlin (2006)


  17. Ferraty, F., Romain, Y.: The Oxford Handbook of Functional Data Analysis. Oxford University Press, Oxford (2010)


  18. Gheriballah, A., Laksaci, A., Sekkal, S.: Nonparametric M-regression for functional ergodic data. Stat. Probability Lett. 83, 902–908 (2013)


  19. Kaid, Z., Laksaci, A.: Functional quantile regression: local linear modelisation. In: Functional Statistics and Related Fields, pp. 155–160. Springer, Cham (2017)

  20. Laïb, N., Ould-Saïd, E.: Estimation non paramétrique robuste de la fonction de régression pour des observations ergodiques. C. R. Acad. Sci. Série 1(322), 271–276 (1996)


  21. Laïb, N., Louani, D.: Rates of strong consistencies of the regression function estimator for functional stationary ergodic data. J. Stat. Plann. Inf. 141, 359–372 (2011)


  22. Laksaci, A., Rachdi, M., Rahmani, S.: Spatial modelization: local linear estimation of the conditional distribution for functional data. Spat. Stat. 6, 1–23 (2013)


  23. Rachdi, M., Laksaci, A., Demongeot, J., Abdali, A., Madani, F.: Theoretical and practical aspects of the quadratic error in the local linear estimation of the conditional density for functional data. Comput. Stat. Data Anal. 73, 53–68 (2014)


  24. Ruppert, D., Wand, M.P.: Multivariate locally weighted least squares regression. Ann. Stat. 22, 1346–1370 (1994)


  25. Zhou, Z., Lin, Z.-Y.: Asymptotic normality of locally modelled regression estimator for functional data. J. Nonparametr. Stat. 28, 116–131 (2016)



Proofs

1.1 Preliminary technical lemmas

We first state the following technical lemmas, which are needed to establish our asymptotic results.

Lemma 5

Under the assumptions (H.1), (H.3) and (H.4)(i), we have: \( \forall \left( k,l \right) \in \mathbb {N}^{*} \times \mathbb {N}\),

  1. (i)

    \(\mathbb {E}\left( K_{j}^{k} \vert \rho _{j} \vert ^{l} | \mathcal {F}_{j-1} \right) \le C h_{K}^{l} \phi _{j,x}\left( h_{K}\right) \)

  2. (ii)

    \(\mathbb {E}\left( \Gamma _{j}K_{j}| \mathcal {F}_{j-1} \right) = O \left( n h_{K}^{2} \phi _{j,x}\left( h_{K} \right) \right) \)

  3. (iii)

    \(\mathbb {E}\left( \Gamma _{1}K_{1} \right) = O \left( n h_{K}^{2} \phi _{x}\left( h_{K} \right) \right) \)

Proof

  1. (i)

We start by using (H.3) and then (H.4) to get

$$\begin{aligned} K_{j}^{k}|\rho _{j}|^{l} h_{K}^{-l}\le & {} C K_{j}^{k} |\delta \left( X_{j}, x\right) |^{l} h_{K}^{-l} \\\le & {} C |\delta \left( X_{j}, x\right) |^{l} h_{K}^{-l} \displaystyle {\mathbb {1}_{[-1,1]}}\left( h_{K}^{-1}\delta (X_{j}, x)\right) , \end{aligned}$$

and thereby, since \(|\delta \left( X_{j}, x\right) |^{l} h_{K}^{-l} \le 1\) on the support of this indicator, we have

    $$\begin{aligned} \mathbb {E} \left( K_{j}^{k}|\rho _{j}|^{l} h_{K}^{-l} | \mathcal {F}_{j-1}\right)\le & {} C \mathbb {P}\left( X_{j} \in B(x,h_{K}) | \mathcal {F}_{j-1}\right) , \\\le & {} C \phi _{j,x}\left( h_{K}\right) , \end{aligned}$$

    which is the claimed result.

  2. (ii)

Using the fact that the kernel K is bounded on \([-1, 1]\), together with (H.3), we have

    $$\begin{aligned} |\Gamma _{j}|\le & {} n C h_{K}^{2} + n C h_{K} |\rho _{j}|. \end{aligned}$$

So, by using (i) with \(\left( k,l \right) = (1,0)\) and \(\left( k,l \right) = (1,1)\), we find

    $$\begin{aligned} \mathbb {E}\left( \Gamma _{j}K_{j} | \mathcal {F}_{j-1} \right)\le & {} n C h_{K}^{2} \phi _{j,x}\left( h_{K}\right) + n C h_{K}^{2} \phi _{j,x}\left( h_{K} \right) \\\le & {} n C' h_{K}^{2} \phi _{j,x}\left( h_{K} \right) . \end{aligned}$$
  3. (iii)

Combining (H.1)(iii) with part (ii) of this lemma, and taking the conditioning \(\sigma \)-field to be trivial, part (iii) is directly verified.

\(\square \)

Lemma 6

Under the assumptions of Lemma 5, we have

$$\begin{aligned} \displaystyle \lim _{n \rightarrow \infty } \bar{f}_{0}^{x}\left( y\right) = O(1). \end{aligned}$$

Proof

We start by applying parts (ii) and (iii) of Lemma 5 to get

$$\begin{aligned} \displaystyle \lim _{n \rightarrow \infty }\bar{f}_{0}^{x}\left( y\right)= & {} O(1) \displaystyle \lim _{n \rightarrow \infty }\frac{1}{n \phi _{x}\left( h_{K} \right) } \displaystyle \sum _{j=1}^{n} \phi _{j,x}\left( h_{K} \right) . \end{aligned}$$

Finally, we just have to use part (iii) of assumption (H.1) to obtain the claimed result. \(\square \)

1.2 Proofs of the main results

Proof of Lemma 1

Observe that

$$\begin{aligned}&\displaystyle \frac{\bar{f}^{x}_{1} \left( y\right) }{\bar{f}^{x}_{0} (y)} - f^{x} \left( y\right) \\= & {} \displaystyle \frac{1}{n h_{J}\mathbb {E} \left( \Gamma _{1} K_{1}\right) \bar{f}_{0}^{x}\left( y\right) } \displaystyle \sum _{j=1}^{n}\left\{ \mathbb {E}\left( \Gamma _{j} K_{j} J_{j} | \mathcal {F}_{j-1} \right) - h_{J} f^{x} \left( y\right) \mathbb {E}\left( \Gamma _{j} K_{j}| \mathcal {F}_{j-1}\right) \right\} \\= & {} \displaystyle \frac{1}{n h_{J}\mathbb {E} \left( \Gamma _{1} K_{1}\right) \bar{f}_{0}^{x}\left( y\right) } \displaystyle \sum _{j=1}^{n} \displaystyle \left\{ \mathbb {E}\left( \Gamma _{j} K_{j} \mathbb {E}\left( J_{j} | \mathcal {G}_{j-1}\right) | \mathcal {F}_{j-1} \right) - h_{J} f^{x} \left( y\right) \mathbb {E}\left( \Gamma _{j} K_{j}| \mathcal {F}_{j-1}\right) \right\} \\\le & {} \displaystyle \frac{1}{n h_{J}\mathbb {E} \left( \Gamma _{1} K_{1}\right) \bar{f}_{0}^{x}\left( y\right) } \displaystyle \sum _{j=1}^{n} \displaystyle \left\{ \mathbb {E}\left( \Gamma _{j} K_{j} \left| \mathbb {E}\left[ J_{j} |X_{j}\right] - h_{J} f^{x} \left( y\right) \right| | \mathcal {F}_{j-1}\right) \right\} . \end{aligned}$$

The last inequality is obtained by (H.4)(iii). Next, an integration by parts and a change of variables allow us to get

$$\begin{aligned} \mathbb {E} \left( J_{j} | X_{j}\right) = h_{J} \displaystyle \int _{\mathbb {R}} J \left( t\right) f^{x} \left( y- h_{J} t\right) dt, \end{aligned}$$
(6)
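For completeness, here is a sketch of the change-of-variable step behind (6); the notation is inferred rather than quoted from the main text, assuming that \(J_{j} = J\left( h_{J}^{-1}\left( y - Y_{j}\right) \right) \) and writing \(f^{X_{j}}\) for the conditional density of \(Y_{j}\) given \(X_{j}\) (which (6) appears to denote by \(f^{x}\)):

$$\begin{aligned} \mathbb {E} \left( J_{j} | X_{j}\right) = \displaystyle \int _{\mathbb {R}} J \left( \frac{y-u}{h_{J}}\right) f^{X_{j}} \left( u\right) du = h_{J} \displaystyle \int _{\mathbb {R}} J \left( t\right) f^{X_{j}} \left( y- h_{J} t\right) dt, \end{aligned}$$

using the change of variable \(t = \left( y-u\right) /h_{J}\).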

thus, we have

$$\begin{aligned} \left| \mathbb {E}\left[ J_{j} |X_{j}\right] - h_{J} f^{x} \left( y\right) \right| \le h_{J} \displaystyle \int _{\mathbb {R}} J \left( t\right) \left| f^{x} \left( y- h_{J} t\right) - f^{x}\left( y\right) \right| dt. \end{aligned}$$

On the one hand, using assumption (H.2)(i) followed by (H.4)(ii) and Lemma 6, we obtain part (4) of Lemma 1.

On the other hand, replacing (H.2)(i) by (H.2)(ii), we obtain

$$\begin{aligned} \displaystyle {\mathbb {1}_{ B(x,h_{k})}} \left( X_{j}\right) \left| \mathbb {E}\left[ J_{j} |X_{j}\right] - h_{J} f^{x}\left( y\right) \right| \le h_{J} \displaystyle \int _{\mathbb {R}} J \left( t\right) \left( h_{K}^{b_{1}} + \left| t\right| ^{b_{2}} h_{J}^{b_{2}} \right) dt. \end{aligned}$$
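The two exponents in this bound come from a Hölder-type regularity condition. A plausible reading of (H.2)(ii) (its exact statement is given in the main text and is not reproduced here), with \(d\) denoting the semi-metric, is

$$\begin{aligned} \left| f^{x_{1}}\left( y_{1}\right) - f^{x_{2}}\left( y_{2}\right) \right| \le C \left( d^{b_{1}}\left( x_{1}, x_{2}\right) + \left| y_{1}- y_{2}\right| ^{b_{2}} \right) ; \end{aligned}$$

on the event \(X_{j} \in B(x,h_{K})\) enforced by the indicator, this gives \(\left| f^{X_{j}} \left( y- h_{J} t\right) - f^{x}\left( y\right) \right| \le C\left( h_{K}^{b_{1}} + \left| t\right| ^{b_{2}} h_{J}^{b_{2}} \right) \), which is the bound displayed above.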

Hence, we get

$$\begin{aligned} \bar{f}^{x}_{1}\left( y\right) - f^{x}\left( y\right) \bar{f}^{x}_{0} (y)= & {} \left( O\left( h_{K}^{b_{1}} \right) + O\left( h_{J}^{b_{2}} \right) \right) \times \displaystyle \frac{1}{n \mathbb {E} \left( \Gamma _{1} K_{1}\right) } \displaystyle \sum _{j=1}^{n}\mathbb {E}\left( \Gamma _{j} K_{j} | \mathcal {F}_{j-1} \right) \\= & {} \left( O\left( h_{K}^{b_{1}} \right) + O\left( h_{J}^{b_{2}} \right) \right) \times \bar{f}^{x}_{0} (y). \end{aligned}$$

Finally, making use of Lemma 6 allows us to obtain part (5) of Lemma 1. \(\square \)

Proof of Lemma 2

To prove this lemma, let us start by writing:

$$\begin{aligned} \widehat{f}^{x}_{k} (y) - \bar{f}^{x}_{k} (y)= & {} \displaystyle \frac{ 1}{n h_{J}^{k} \mathbb {E} \left( \Gamma _{1} K_{1}\right) } \displaystyle \sum _{j=1}^{n} \left( \Gamma _{j} K_{j}J_{j}^{k} - \mathbb {E}\left( \Gamma _{j} K_{j} J_{j}^{k} | \mathcal {F}_{j-1} \right) \right) \\=: & {} \displaystyle \frac{1}{n h_{J}^{k}\mathbb {E} \left( \Gamma _{1} K_{1}\right) } \displaystyle \sum _{j=1}^{n} T_{j}, \quad \text{ with } \, k=0, 1, \end{aligned}$$

where \(\left( T_{j}\right) _{j}\) is a triangular array of martingale differences with respect to the \(\sigma \)-fields \(\left( \mathcal {F}_{j-1}\right) _{j}\). Since \(\mathbb {E} \left( \Gamma _{j} K_{j} J_{j}^{k} | \mathcal {F}_{j-1}\right) \) is \(\mathcal {F}_{j-1}\)-measurable, it follows that

$$\begin{aligned} \mathbb {E}\left( T_{j}^{2} |\mathcal {F}_{j-1}\right)= & {} \mathbb {E}\left( \left( \Gamma _{j}K_{j}\right) ^{2} J_{j}^{2 k}| \mathcal {F}_{j-1}\right) - \mathbb {E}\left( \left( \Gamma _{j}K_{j} J_{j}^{k}| \mathcal {F}_{j-1}\right) \right) ^{2} \\\le & {} \mathbb {E}\left( \left( \Gamma _{j}K_{j}\right) ^{2} \mathbb {E}\left( J_{j}^{2 k}| \mathcal {G}_{j-1}\right) | \mathcal {F}_{j-1}\right) \\\le & {} \mathbb {E}\left( \left( \Gamma _{j}K_{j}\right) ^{2} \mathbb {E}\left( J_{j}^{2k}| X_{j}\right) | \mathcal {F}_{j-1}\right) . \end{aligned}$$

Now, using (6) and assumptions (H.2)(ii) and (H.4)(ii), we get

$$\begin{aligned} \mathbb {E} \left( J_{j}^{2k} | X_{j}\right) = O\left( h_{J}^{k}\right) . \end{aligned}$$
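For instance, for \(k=1\), the same change of variable as in (6) gives the following bound (a sketch, assuming that the conditional density is bounded in a neighbourhood of \(y\) and that \(\int _{\mathbb {R}} J^{2}(t)\, dt < \infty \); the case \(k=0\) is trivial since \(J_{j}^{0}=1\)):

$$\begin{aligned} \mathbb {E} \left( J_{j}^{2} | X_{j}\right) = h_{J} \displaystyle \int _{\mathbb {R}} J^{2} \left( t\right) f^{X_{j}} \left( y- h_{J} t\right) dt \le C h_{J} \displaystyle \int _{\mathbb {R}} J^{2} \left( t\right) dt = O\left( h_{J}\right) . \end{aligned}$$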

So,

$$\begin{aligned} \mathbb {E}\left( T_{j}^{2} |\mathcal {F}_{j-1}\right) \le C h_{J}^{k} \mathbb {E}\left( \Gamma _{j}^{2} K_{j}^{2} | \mathcal {F}_{j-1}\right) . \end{aligned}$$

Thus,

$$\begin{aligned}&\mathbb {E}\left( T_{j}^{2} |\mathcal {F}_{j-1}\right) \\&\quad \le 2 C h_{J}^{k} \left( \mathbb {E}\left( \left( \displaystyle \sum _{i=1}^{n} \rho _{i}^{2} K_{i}\right) ^{2} K_{j}^{2} | \mathcal {F}_{j-1} \right) +\mathbb {E}\left( \left( \displaystyle \sum _{i=1}^{n} |\rho _{i}| K_{i} \right) ^{2}\rho _{j}^{2} K_{j}^{2} | \mathcal {F}_{j-1}\right) \right) . \\&\quad \le 2 C h_{J}^{k}\left( C n^{2} h_{K}^{4} \mathbb {E}\left( K_{j}^{2} | \mathcal {F}_{j-1} \right) + C n^{2} h_{K}^{2}\mathbb {E}\left( \rho _{j}^{2} K_{j}^{2} | \mathcal {F}_{j-1}\right) \right) . \end{aligned}$$

This last inequality is obtained under (H.3) and (H.4) (i).

Next, applying Lemma 5(i) with \(\left( k,l \right) = (2,0)\) and \(\left( k,l \right) = (2,2)\) allows us to get

$$\begin{aligned} \mathbb {E}\left( T_{j}^{2} |\mathcal {F}_{j-1}\right)\le & {} 2 C' n^{2} h_{J}^{k} h_{K}^{4} \phi _{j, x} \left( h_{K}\right) . \end{aligned}$$

Now, we use the exponential inequality of Lemma 1 in [21] (with \(d_{j}^{2}= C' n^{2} h_{J}^{k} h_{K}^{4} \phi _{j, x}(h_{K})\)) to obtain, for all \(\varepsilon > 0\),

$$\begin{aligned} \mathbb {P}\left( |\widehat{f}^{x}_{k}(y) - \bar{f}^{x}_{k} (y)|> \varepsilon \right)= & {} \mathbb {P}\left( |\displaystyle \frac{1}{n h_{J}^{k}\mathbb {E} \left( \Gamma _{1} K_{1}\right) } \displaystyle \sum _{j=1}^{n} T_{j}| > \varepsilon \right) \\\le & {} 2 \exp \left\{ -\frac{\varepsilon ^{2} n^{2} h_{J}^{2k} \left( \mathbb {E}\left( \Gamma _{1} K_{1}\right) \right) ^{2} }{2\left( D_{n}+ C \varepsilon n h_{J}^{k}\mathbb {E}\left( \Gamma _{1} K_{1}\right) \right) }\right\} . \end{aligned}$$

Taking \( \varepsilon = \varepsilon _{0} \displaystyle \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}},\) we obtain

$$\begin{aligned}&\mathbb {P}\left( |\widehat{f}^{x}_{k}(y) - \bar{f}^{x}_{k} (y)| > \varepsilon _{0} \displaystyle \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}} \right) \\&\quad \le 2 \exp \left\{ -\frac{ n^{2} h_{J}^{2k} \left( \mathbb {E}\left( \Gamma _{1} K_{1}\right) \right) ^{2} \varepsilon _{0}^{2} \displaystyle \frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}}{2\left( D_{n}+ C n h_{J}^{k} \mathbb {E}\left( \Gamma _{1} K_{1}\right) \varepsilon _{0} \displaystyle \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}}\right) }\right\} . \end{aligned}$$

Now, using Lemma 5(iii) allows us to write

$$\begin{aligned}&\mathbb {P} \left( |\widehat{f}^{x}_{k} (y) - \bar{f}^{x}_{k} (y)| > \varepsilon _{0} \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}^{k} \phi _{x}^{2}( h_{K} )}} \right) \\&\quad \le 2 \exp \left\{ -\frac{ n^{2} h_{J}^{2k} \left( O\left( n h_{K}^{2} \phi _{x}( h_{K})\right) \right) ^{2} \varepsilon _{0}^{2} \displaystyle \frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}}{2 n h_{J}^{k} h_{K}^{2} \varphi _{x}\left( h_{K}\right) \left( C' n h_{K}^{2}+ O \left( n \phi _{x}( h_{K}) \right) \varepsilon _{0} \displaystyle \sqrt{\frac{\log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} ) \varphi _{x}\left( h_{K}\right) }}\right) }\right\} \\&\quad \le 2 \exp \left\{ -\frac{ n^{2} h_{J}^{2k} \left( O\left( n h_{K}^{2} \phi _{x}( h_{K})\right) \right) ^{2} \varepsilon _{0}^{2} \displaystyle \frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}}{2 n h_{J}^{k} h_{K}^{2} \varphi _{x}\left( h_{K}\right) \left( C' n h_{K}^{2}+ O(1) \varepsilon _{0} \displaystyle \sqrt{\frac{\log n}{h_{J}^{k}\varphi _{x}\left( h_{K}\right) }}\right) }\right\} . \end{aligned}$$

Now, we use the fact that, under (H.1)(ii) and (iii), for all n we have \(\varphi _{x}\left( h_{K}\right) \ge C n \phi _{x} ( h_{K})\), which implies that

$$\begin{aligned} \frac{\log n}{h_{J}^{k} \varphi _{x}\left( h_{K}\right) } \le C' \frac{ \varphi _{x}\left( h_{K}\right) \log n }{ n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}. \end{aligned}$$
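To make this step explicit (it is pure arithmetic from the stated lower bound on \(\varphi _{x}\left( h_{K}\right) \)):

$$\begin{aligned} \varphi _{x}\left( h_{K}\right) \ge C n \phi _{x}( h_{K}) \; \Longrightarrow \; n^{2} \phi _{x}^{2}( h_{K} ) \le \frac{1}{C^{2}}\, \varphi _{x}^{2}\left( h_{K}\right) \; \Longrightarrow \; \frac{\log n}{h_{J}^{k} \varphi _{x}\left( h_{K}\right) } \le \frac{1}{C^{2}} \cdot \frac{ \varphi _{x}\left( h_{K}\right) \log n }{ n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}, \end{aligned}$$

so the inequality holds with \(C' = 1/C^{2}\).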

Therefore, under (H.5), we have:

$$\begin{aligned} \displaystyle \lim _{n \rightarrow \infty } \frac{\log n}{ h_{J}^{k} \varphi _{x}\left( h_{K}\right) } = 0. \end{aligned}$$

It follows that

$$\begin{aligned} \mathbb {P}\left( |\widehat{f}^{x}_{k} (y) - \bar{f}^{x}_{k} (y)| > \varepsilon _{0} \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}} \right) \le 2 n^{-C_{0} \varepsilon _{0}^{2}}, \end{aligned}$$

where \(C_{0} \) is a positive constant.

Consequently, using the Borel–Cantelli lemma and choosing \( \varepsilon _{0}\) large enough, we can deduce that:

$$\begin{aligned} \widehat{f}^{x}_{k} (y) - \bar{f}^{x}_{k} (y)= O_{a. co} \left( \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}} \right) . \end{aligned}$$
(7)
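Recall that, for random variables \(U_{n}\) and positive numbers \(u_{n}\), the notation \(U_{n}= O_{a. co}\left( u_{n}\right) \) means (see, e.g., [16]) that there exists \(C>0\) such that

$$\begin{aligned} \displaystyle \sum _{n \ge 1} \mathbb {P}\left( |U_{n}| > C u_{n}\right) < \infty . \end{aligned}$$

Here the bound \(2 n^{-C_{0} \varepsilon _{0}^{2}}\) is summable as soon as \(C_{0} \varepsilon _{0}^{2} > 1\), which is guaranteed by choosing \(\varepsilon _{0}\) large enough.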

Finally, taking \(k=0\) in this last result finishes the proof of Lemma 2. \(\square \)

Proof of Lemma 3

Remark that, under (H.1)(iii) and (H.4), we have

$$\begin{aligned} 0< \displaystyle \frac{C}{n \phi _{x}( h_{K} )} \displaystyle \sum _{j=1}^{n} \mathbb {P}\left( X_{j} \in B\left( x, h_{K}\right) | \mathcal {F}_{j-1} \right) \le \bar{f}_{0}^{x}\left( y\right) \le |\widehat{f}^{x}_{0} (y) - \bar{f}_{0}^{x}\left( y\right) | +\widehat{f}^{x}_{0} (y). \end{aligned}$$

Therefore,

$$\begin{aligned}&\mathbb {P}\left( \widehat{f}^{x}_{0} (y) \le \frac{C}{2}\right) \\&\quad \le \mathbb {P} \left( \frac{C}{ n \phi _{x}( h_{K} )} \displaystyle \sum _{j=1}^{n} \mathbb {P}\left( X_{j} \in B\left( x, h_{K}\right) | \mathcal {F}_{j-1} \right) < \frac{C}{2} + |\widehat{f}^{x}_{0} (y) - \bar{f}_{0}^{x}\left( y\right) | \right) \\&\quad \le \mathbb {P} \left( |\frac{C}{ n \phi _{x}( h_{K} )} \displaystyle \sum _{j=1}^{n} \mathbb {P}\left( X_{j} \in B\left( x, h_{K}\right) | \mathcal {F}_{j-1} \right) - |\widehat{f}^{x}_{0} (y) - \bar{f}_{0}^{x}\left( y\right) | -C| > \frac{C}{2} \right) . \end{aligned}$$

Lemma 2 and (H.1)(iii) then allow us to obtain

$$\begin{aligned} \displaystyle \sum _{n} \mathbb {P} \left( |\frac{C}{ n \phi _{x}( h_{K} )} \displaystyle \sum _{j=1}^{n} \mathbb {P}\left( X_{j} \in B\left( x, h_{K}\right) | \mathcal {F}_{j-1} \right) - |\widehat{f}^{x}_{0} (y)- \bar{f}_{0}^{x}\left( y\right) |- C| > \frac{C}{2} \right) <\infty , \end{aligned}$$

which gives the result. \(\square \)

Proof of Lemma 4

The compactness of \(\mathscr {C} \) permits us to deduce that there exists a sequence of real numbers \((y_{k})_{k=1, \ldots , d_{n}}\) such that:

$$\begin{aligned} \mathscr {C} \subset \displaystyle \bigcup _{k=1}^{d_{n}}\mathscr {C}_{k}, \, \text{ where }\, \mathscr {C}_{k} = (y_{k}-l_{n}, y_{k}+l_{n}), \end{aligned}$$

with \(l_{n}= n^{-1-\alpha }\) and \(d_{n} = O(l_{n}^{-1}).\)

We start our proof with the following decomposition, where, for each \(y \in \mathscr {C}\), \(z\) denotes the centre \(y_{k}\) of the interval \(\mathscr {C}_{k}\) containing \(y\):

$$\begin{aligned} \displaystyle \sup _{y \in \mathscr {C} }|\widehat{f}^{x}_{1} (y) - \bar{f}^{x}_{1} (y)|\le & {} \displaystyle \underbrace{\displaystyle \sup _{y \in \mathscr {C} }| \widehat{f}^{x}_{1} (y) - \widehat{f}^{x}_{1} (z) | } _{S_{1}}+ \displaystyle \underbrace{ \displaystyle \sup _ {y \in \mathscr {C} }| \widehat{f}^{x}_{1} (z)- \bar{f}^{x}_{1} (z)|}_{S_{2}} \\&\quad + \displaystyle \underbrace{\displaystyle \sup _{y \in \mathscr {C} }|\bar{f}^{x}_{1} (z)- \bar{f}^{x}_{1} (y)|}_{S_{3}}. \end{aligned}$$

We now treat each of these three terms.

On the one hand, for the term \(S_{1}\), using assumption (H.5), we obtain:

$$\begin{aligned} S_{1}\le & {} \displaystyle \sup _{y \in \mathscr {C} }| \frac{1}{n h_{J} \mathbb {E}\left( \Gamma _{1} K_{1}\right) } \sum _{j=1}^{n} \Gamma _{j} K_{j}| J_{j}(y) - J_{j}(z)||, \\\le & {} \displaystyle \sup _{y \in \mathscr {C} } \frac{C| y- z |}{h_{J}} \left( | \frac{1}{n h_{J} \mathbb {E}\left( \Gamma _{1} K_{1}\right) } \sum _{j=1}^{n} \Gamma _{j} K_{j}| \right) , \\\le & {} C \frac{l_{n}}{h_{J}^{2}}| \widehat{f}^{x}_{0} (y)|. \end{aligned}$$

Thus, using Lemma 3, we get:

$$\begin{aligned} S_{1} \le C \frac{l_{n}}{h_{J}^{2}}. \end{aligned}$$

Since \(l_{n}= n^{-1-\alpha }\), we obtain:

$$\begin{aligned} \frac{l_{n}}{h_{J}^{2}}= o\left( \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}\phi _{x}^{2}( h_{K} )}}\right) . \end{aligned}$$

So, for n large enough, we can find an \(\eta > 0\) such that

$$\begin{aligned} \mathbb {P}\left( S_{1} > \eta \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}\phi _{x}^{2}( h_{K} )}}\right) =0. \end{aligned}$$
(8)

Similarly, for the term \(S_{3}\), we obtain

$$\begin{aligned} S_{3} \le C \frac{l_{n}}{h_{J}^{2}} |\bar{f}^{x}_{0} (y)|. \end{aligned}$$

Therefore, Lemma 6 allows us to write:

$$\begin{aligned} S_{3} \le C \frac{l_{n}}{h_{J}^{2}}. \end{aligned}$$

Using arguments analogous to those for \(S_{1}\), we find, for n large enough:

$$\begin{aligned} \mathbb {P}\left( S_{3} > \eta \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}\phi _{x}^{2}( h_{K} )}}\right) =0. \end{aligned}$$
(9)

On the other hand, to complete the proof of this lemma, it remains to show that:

$$\begin{aligned} S_{2} = O_{a. co} \left( \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}\phi _{x}^{2}( h_{K} )}}\right) . \end{aligned}$$

By using (7) for \(k=1\), we get for \(\eta >0\) and for all \(z \in \mathscr {C}_{k}:\)

$$\begin{aligned} \mathbb {P}\left( |\widehat{f}_{1}^{x}(z) - \bar{f}^{x}_{1}(z)| > \eta \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}\phi _{x}^{2}(h_{K} )}} \right) \le C' n^{-C_{0} \eta ^{2}}. \end{aligned}$$

Thus, we have

$$\begin{aligned}&\mathbb {P}\left( \displaystyle \sup _{y \in \mathscr {C} } |\widehat{f}_{1}^{x}(z) - \bar{f}^{x}_{1}(z)|> \eta \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}\phi _{x}^{2}( h_{K} )}} \right) \\&\quad \le \mathbb {P}\left( \displaystyle \max _{z \in \mathscr {C}_{k}} |\widehat{f}_{1}^{x}(z) - \bar{f}^{x}_{1}(z)|> \eta \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}\phi _{x}^{2}( h_{K} )}} \right) \\&\quad \le 2 d_{n} \displaystyle \max _{z \in \mathscr {C}_{k}}\mathbb {P}\left( |\widehat{f}_{1}^{x}(z) - \bar{f}^{x}_{1}(z)| > \eta \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}\phi _{x}^{2}( h_{K} )}} \right) \\&\quad \le C' n^{-C_{0} \eta ^{2} +1 + \alpha }. \end{aligned}$$

Therefore, by choosing \(\eta \) such that \(C_{0} \eta ^{2}= 2+ 2 \alpha \), we find

$$\begin{aligned} \mathbb {P}\left( \displaystyle \sup _{y \in \mathscr {C} } |\widehat{f}_{1}^{x}(z) - \bar{f}^{x}_{1}(z)| > \eta \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2}h_{J}\phi _{x}^{2}( h_{K} )}} \right) \le C' n ^{-1- \alpha }. \end{aligned}$$
(10)

Finally, since \(\alpha > 0\) ensures that \(\displaystyle \sum _{n} n^{-1-\alpha } < \infty \), Lemma 4 can be deduced directly from (8), (9) and (10). \(\square \)

1.3 Proof of Corollary 1

The unimodality of \(f^{x}\) and assumption (H.6)(ii) allow us to write that \(f^{x (l)} (\Theta (x)) = f^{x (l)} (\widehat{\Theta }(x)) = 0.\) Furthermore, by a Taylor expansion of the function \(f^{x}\) at \(\Theta (x),\) we have:

$$\begin{aligned} f^{x}(\widehat{\Theta } (x)) = f^{x}(\Theta (x))+ \frac{1}{j!} f^{x (j)} (\Theta ^{*} (x)) \left( \widehat{\Theta } (x) - \Theta (x)\right) ^{j}, \end{aligned}$$
(11)

where \(\Theta ^{*} (x)\) is between \(\Theta (x) \) and \(\widehat{\Theta } (x).\)

Next, a simple manipulation shows that

$$\begin{aligned} \vert {f}^{x}(\widehat{\Theta }(x)) - f^{x}(\Theta (x)) \vert \le 2 \displaystyle \sup _{y \in \mathscr {C} }| \widehat{f}^{x} (y) - f^{x} (y) |. \end{aligned}$$
(12)
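For the reader's convenience, here is a sketch of this manipulation, under the standard characterization (assumed to be the one used in the main text) that \(\Theta (x)\) maximizes \(f^{x}\) and \(\widehat{\Theta }(x)\) maximizes \(\widehat{f}^{x}\) over \(\mathscr {C}\):

$$\begin{aligned} f^{x}(\Theta (x)) - f^{x}(\widehat{\Theta } (x))= & {} \left[ f^{x}(\Theta (x)) - \widehat{f}^{x}(\Theta (x))\right] + \left[ \widehat{f}^{x}(\Theta (x)) - f^{x}(\widehat{\Theta } (x))\right] \\\le & {} \left[ f^{x}(\Theta (x)) - \widehat{f}^{x}(\Theta (x))\right] + \left[ \widehat{f}^{x}(\widehat{\Theta } (x)) - f^{x}(\widehat{\Theta } (x))\right] \\\le & {} 2 \displaystyle \sup _{y \in \mathscr {C} }| \widehat{f}^{x} (y) - f^{x} (y) |, \end{aligned}$$

and the left-hand side equals \(\vert f^{x}(\widehat{\Theta }(x)) - f^{x}(\Theta (x)) \vert \) since \(\Theta (x)\) maximizes \(f^{x}\).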

To end the proof of Corollary 1, we only need to show the following claim.

Claim

$$\begin{aligned} \displaystyle \lim _{n \rightarrow \infty } | \widehat{\Theta } (x) - \Theta (x)| = 0. \qquad a. co. \end{aligned}$$

Proof

By the continuity of the function \(f^{x}\), it follows that:

$$\begin{aligned} \forall \varepsilon>0, \exists \delta ( \varepsilon ) >0,| f^{x} (y) - f^{x}(\Theta (x))| \le \delta ( \varepsilon ) \Rightarrow | y- \Theta (x)| \le \varepsilon . \end{aligned}$$

Then, this last consideration implies that:

$$\begin{aligned} \forall \varepsilon>0, \exists \delta ( \varepsilon )>0, \mathbb {P}\left( |\widehat{\Theta } (x) - \Theta (x)|> \varepsilon \right) \le \mathbb {P}\left( |f^{x}(\widehat{\Theta } (x))- f^{x}(\Theta (x)) |> \delta ( \varepsilon ) \right) . \end{aligned}$$
(13)

Lastly, the claimed result can be deduced by combining (13) with the statement (12) and Theorem 2. \(\square \)

Now, we return to the proof of Corollary 1.

Since \( f^{x (j)} (\Theta ^{*} (x)) \rightarrow f^{x (j)} (\Theta (x))\)   and by using (H.6)(iii), we obtain

$$\begin{aligned} \exists c >0, \, \displaystyle \sum _{n=1}^{\infty } \mathbb {P} \left( | f^{x (j)} (\Theta ^{*} (x)) |< c \right) < \infty . \end{aligned}$$
(14)

Therefore, we have

$$\begin{aligned} |\widehat{\Theta } (x) - \Theta (x)|^{j} = O\left( \displaystyle \sup _{y \in \mathscr {C} } |\widehat{f}^{x}\left( y\right) - f^{x}\left( y\right) | \right) , \quad a.co. \end{aligned}$$

We find this last result by combining the statements (11) and (12) with (14).

Finally, the proof of Corollary 1 can be easily deduced from Theorem 2. \(\square \)

Cite this article

Ayad, S., Laksaci, A., Rahmani, S. et al. On the local linear modelization of the conditional density for functional and ergodic data. METRON 78, 237–254 (2020). https://doi.org/10.1007/s40300-020-00174-6