Abstract
In this paper, we estimate the conditional density function using the local linear approach. We treat the case where the regressor takes values in a semi-metric space, the response is a scalar, and the data are observed as an ergodic functional time series. Under this dependence structure, we state the almost complete consistency (a.co.), with rates, of the constructed estimator. Moreover, the usefulness of our results is illustrated through their application to conditional mode estimation.
References
Attouch, M.K., Chouaf, B., Laksaci, A.: Nonparametric M-estimation for functional spatial data. Commun. Stat. Appl. Methods 19, 193–211 (2012)
Baíllo, A., Grané, A.: Local linear regression for functional predictor and scalar response. J. Multivariate Anal. 100, 102–111 (2009)
Barrientos, J., Ferraty, F., Vieu, P.: Locally modelled regression and functional data. J. Nonparametr. Stat. 22, 617–632 (2010)
Bouanani, O., Laksaci, A., Rachdi, M., Rahmani, S.: Asymptotic normality of some conditional nonparametric functional parameters in high-dimensional statistics. Behaviormetrika 4, 1–35 (2018)
Bouanani, O., Rahmani, S., Ait-Hennani, L.: Local linear conditional cumulative distribution function with mixing data. Arab. J. Math. (2019). https://doi.org/10.1007/s40065-019-0247-7
Chaouch, M., Laïb, N., Ould Saïd, E.: Nonparametric M-estimation for right censored regression model with stationary ergodic data. Stat. Methodol. 33, 234–255 (2016)
Dabo-Niang, S., Kaid, Z., Laksaci, A.: Asymptotic properties of the kernel estimate of spatial conditional mode when the regressor is functional. Adv. Stat. Anal. 99, 131–160 (2015)
Dabo-Niang, S., Laksaci, A.: Note on conditional mode estimation for functional dependent data. Statistica 70, 83–94 (2010)
Damon, J., Guillas, S.: The inclusion of exogenous variables in functional autoregressive ozone forecasting. Environmetrics 13, 759–774 (2002)
Demongeot, J., Laksaci, A., Madani, F., Rachdi, M.: A fast functional locally modeled conditional density and mode for functional time-series. In: Recent Advances in Functional Data Analysis and Related Topics, Contributions to Statistics, pp. 85–90. Physica-Verlag/Springer (2011). https://doi.org/10.1007/978-3-7908-2736-1_13
Demongeot, J., Laksaci, A., Madani, F., Rachdi, M.: Functional data: local linear estimation of the conditional density and its application. Statistics 47, 26–44 (2013)
Demongeot, J., Laksaci, A., Rachdi, M., Rahmani, S.: On the local linear modelization of the conditional distribution for functional data. Sankhya A 76, 328–355 (2014)
Didi, S., Louani, D.: Asymptotic results for the regression function estimate on continuous time stationary and ergodic data. Stat. Risk Model. 31, 129–150 (2014)
Fan, J.: Design-adaptive nonparametric regression. J. Am. Stat. Assoc. 87, 998–1004 (1992)
Fan, J., Gijbels, I.: Local Polynomial Modelling and its Applications. Chapman and Hall, London (1996)
Ferraty, F., Vieu, P.: Nonparametric Functional Data Analysis: Theory and Practice. Springer-Verlag, Berlin (2006)
Ferraty, F., Romain, Y.: The Oxford Handbook of Functional Data Analysis. Oxford University Press, Oxford (2010)
Gheriballah, A., Laksaci, A., Sekkal, S.: Nonparametric M-regression for functional ergodic data. Stat. Probability Lett. 83, 902–908 (2013)
Kaid, Z., Laksaci, A.: Functional quantile regression: local linear modelisation. In: Functional Statistics and Related Fields, pp. 155–160. Springer, Cham (2017)
Laïb, N., Ould-Saïd, E.: Estimation non paramétrique robuste de la fonction de régression pour des observations ergodiques. C. R. Acad. Sci. Paris, Sér. I 322, 271–276 (1996)
Laïb, N., Louani, D.: Rates of strong consistencies of the regression function estimator for functional stationary ergodic data. J. Stat. Plann. Inf. 141, 359–372 (2011)
Laksaci, A., Rachdi, M., Rahmani, S.: Spatial modelization: local linear estimation of the conditional distribution for functional data. Spat. Stat. 6, 1–23 (2013)
Rachdi, M., Laksaci, A., Demongeot, J., Abdali, A., Madani, F.: Theoretical and practical aspects of the quadratic error in the local linear estimation of the conditional density for functional data. Comput. Stat. Data Anal. 73, 53–68 (2014)
Ruppert, D., Wand, M.P.: Multivariate locally weighted least squares regression. Ann. Stat. 22, 1346–1370 (1994)
Zhou, Z., Lin, Z.-Y.: Asymptotic normality of locally modelled regression estimator for functional data. J. Nonparametr. Stat. 28, 116–131 (2016)
Appendix
1.1 Preliminary technical lemmas
Firstly, we state the following technical lemmas which are needed to establish our asymptotic results.
Lemma 5
Under assumptions (H.1), (H.3) and (H.4)(i), we have, for all \( \left( k,l \right) \in \mathbb {N}^{*} \times \mathbb {N}\):
- (i) \(\mathbb {E}\left( K_{j}^{k} \vert \rho _{j} \vert ^{l} | \mathcal {F}_{j-1} \right) \le C h_{K}^{l} \phi _{j,x}\left( h_{K}\right) \);
- (ii) \(\mathbb {E}\left( \Gamma _{j}K_{j}| \mathcal {F}_{j-1} \right) = O \left( n h_{K}^{2} \phi _{j,x}\left( h_{K} \right) \right) \);
- (iii) \(\mathbb {E}\left( \Gamma _{1}K_{1} \right) = O \left( n h_{K}^{2} \phi _{x}\left( h_{K} \right) \right) \).
Proof
- (i) Using first (H.3) and then (H.4), we get
$$\begin{aligned} K_{j}^{k}|\rho _{j}|^{l} h_{K}^{-l}&\le C K_{j}^{k} |\delta \left( X_{j}, x\right) |^{l} h_{K}^{-l} \\&\le C |\delta \left( X_{j}, x\right) |^{l} h_{K}^{-l}\, \mathbb {1}_{[-1,1]}\left( \delta (X_{j}, x)\right) , \end{aligned}$$and thereby, since the kernel vanishes outside the ball \(B(x,h_{K})\), we have
$$\begin{aligned} \mathbb {E} \left( K_{j}^{k}|\rho _{j}|^{l} h_{K}^{-l} | \mathcal {F}_{j-1}\right)&\le C \mathbb {P}\left( X_{j} \in B(x,h_{K}) | \mathcal {F}_{j-1}\right) \\&\le C \phi _{j,x}\left( h_{K}\right) , \end{aligned}$$which is the claimed result.
- (ii) Recalling that the kernel K is bounded on \([-1, 1]\) and using (H.3), we have
$$\begin{aligned} |\Gamma _{j}| \le n C h_{K}^{2} + n C h_{K} |\rho _{j}|. \end{aligned}$$So, by using (i), we find
$$\begin{aligned} \mathbb {E}\left( \Gamma _{j}K_{j} | \mathcal {F}_{j-1} \right)&\le n C h_{K}^{2} \phi _{j,x}\left( h_{K}\right) + n C h_{K}^{2} \phi _{j,x}\left( h_{K} \right) \\&\le n C' h_{K}^{2} \phi _{j,x}\left( h_{K} \right) . \end{aligned}$$
- (iii) Combining (H.1)(iii) with part (ii) of this lemma, and taking the conditioning \(\sigma \)-field to be trivial, part (iii) is directly verified.
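For concreteness, this step can be sketched as follows (a hedged reconstruction; we assume, in the spirit of (H.1)(iii), that \(\mathbb {E}\left( \phi _{1,x}\left( h_{K}\right) \right) \) is of the same order as \(\phi _{x}\left( h_{K}\right) \)):
$$\begin{aligned} \mathbb {E}\left( \Gamma _{1}K_{1}\right) = \mathbb {E}\left( \mathbb {E}\left( \Gamma _{1}K_{1} \,\big |\, \mathcal {F}_{0}\right) \right) \le C n h_{K}^{2}\, \mathbb {E}\left( \phi _{1,x}\left( h_{K}\right) \right) = O\left( n h_{K}^{2} \phi _{x}\left( h_{K}\right) \right) . \end{aligned}$$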
\(\square \)
Lemma 6
Under the assumptions of Lemma 5, we have
Proof
We start by applying parts (ii) and (iii) of Lemma 5 to get
Finally, we just have to use part (iii) of assumption (H.1) to obtain the claimed result. \(\square \)
1.2 Proofs of the main results
Proof of Lemma 1
Observe that
The last inequality is obtained by (H.4)(iii). Next, an integration by parts and a change of variables allow us to get
thus, we have
On the one hand, using assumption (H.2)(i) followed by (H.4)(ii) and Lemma 6, we obtain part (4) of Lemma 1.
On the other hand, if we replace (H.2)(i) by (H.2)(ii), we obtain
Hence, we get
Finally, applying Lemma 6 allows us to obtain part (5) of Lemma 1.
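For the reader's convenience, here is a minimal sketch of the change-of-variables step invoked above, under our illustrative assumption that \(J\) denotes the kernel acting on the response with bandwidth \(h_{J}\) and that \(f^{X_{j}}\) is the conditional density of \(Y_{j}\) given \(X_{j}\):
$$\begin{aligned} \mathbb {E}\left( h_{J}^{-1} J\left( h_{J}^{-1}\left( y-Y_{j}\right) \right) \big | X_{j}\right) = h_{J}^{-1}\int J\left( h_{J}^{-1}(y-z)\right) f^{X_{j}}(z)\, dz = \int J(t)\, f^{X_{j}}\left( y-h_{J}t\right) dt, \end{aligned}$$with the change of variable \(t=h_{J}^{-1}(y-z)\); expanding \(f^{X_{j}}\left( y-h_{J}t\right) \) around y then produces the bias terms controlled by (H.2).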
Proof of Lemma 2
Before proving this lemma, let us start by writing:
and where \(T_{j}\) is a triangular array of martingale differences with respect to the \(\sigma \)-fields \(\left( \mathcal {F}_{j-1}\right) _{j}\). Since \(\mathbb {E} \left( \Gamma _{j} K_{j} J_{j}^{k} | \mathcal {F}_{j-1}\right) \) is \(\mathcal {F}_{j-1}\)-measurable, it follows that
Now, using (6) together with assumptions (H.2)(ii) and (H.4)(ii), we get
So,
Thus,
This last inequality is obtained under (H.3) and (H.4)(i).
Next, applying Lemma 5(i) allows us to get
Now, we use the exponential inequality of Lemma 1 in [21] (with \(d_{j}^{2}= C' n^{2} h_{J}^{k} h_{K}^{4} \phi _{j, x}(h_{K})\)) to obtain, for all \(\varepsilon > 0\),
Taking \( \varepsilon = \varepsilon _{0} \displaystyle \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}( h_{K} )}}\), we then obtain
Now, using Lemma 5(iii) allows us to write
Next, under (H.1)(ii) and (iii), we have \(\varphi _{x}\left( h_{K}\right) \ge C n \phi _{x}\left( h_{K}\right) \) for all n, which implies that
Therefore, under (H.5), we have:
It follows that
where \(C_{0} \) is a positive constant.
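To see why taking \(\varepsilon _{0}\) large enough suffices, note the following hedged sketch (assuming, as suggested above, that the deviation bound is of the polynomial order \(n^{-C_{0}\varepsilon _{0}^{2}}\)):
$$\begin{aligned} \sum _{n\ge 1}\mathbb {P}\left( \cdots > \varepsilon _{0} \sqrt{\frac{\varphi _{x}\left( h_{K}\right) \log n}{n^{2} h_{J}^{k} \phi _{x}^{2}\left( h_{K}\right) }}\right) \le C \sum _{n\ge 1} n^{-C_{0}\varepsilon _{0}^{2}} < \infty \quad \text{ whenever } C_{0}\varepsilon _{0}^{2} > 1. \end{aligned}$$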
Consequently, using the Borel–Cantelli lemma and choosing \( \varepsilon _{0}\) large enough, we can deduce that:
Finally, taking \(k=0\), this last result finishes the proof of Lemma 2. \(\square \)
Proof of Lemma 3
Observe that, under (H.1)(iii) and (H.4), we have
Therefore,
It is clear that Lemma 2 and (H.1)(iii) allow us to obtain
which gives the result. \(\square \)
Proof of Lemma 4
The compactness of \(\mathscr {C} \) permits us to deduce that there exists a sequence of real numbers \((y_{k})_{k=1, \ldots , d_{n}}\) such that:
with \(l_{n}= n^{-1-\alpha }\) and \(d_{n} = O(l_{n}^{-1}).\)
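For clarity, the covering we have in mind can be sketched as follows (with intervals of radius \(l_{n}\); this explicit form is our illustration):
$$\begin{aligned} \mathscr {C}\subset \bigcup _{k=1}^{d_{n}}\left( y_{k}-l_{n},\, y_{k}+l_{n}\right) , \qquad l_{n}= n^{-1-\alpha }, \qquad d_{n}=O\left( l_{n}^{-1}\right) =O\left( n^{1+\alpha }\right) . \end{aligned}$$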
We start our proof with the following decomposition:
Now, we treat these three terms in turn.
On the one hand, for the term \(S_{1}\), using assumption (H.5), we obtain:
Thus, using Lemma 3, we get:
Since \(l_{n}= n^{-1-\alpha }\), we obtain:
So, for n large enough, we find an \(\eta > 0\) such that
Similarly, for the term \(S_{3}\), we obtain
Therefore, Lemma 6 allows us to write:
Using arguments analogous to those for \(S_{1}\), we find, for n large enough:
On the other hand, to complete the proof of this lemma, we need to prove that:
By using (7) with \(k=1\), we get, for \(\eta >0\) and for all \(z \in \mathscr {C}_{k}\):
Thus, we have
Therefore, by choosing \(\eta \) such that \(C_{0} \eta ^{2}= 2+ 2 \alpha \), we find
Finally, Lemma 4 can be deduced directly from (8), (9) and (10). \(\square \)
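The role of this choice of \(\eta \) can be sketched as follows (a hedged computation, assuming the deviation bound over each \(\mathscr {C}_{k}\) is of order \(n^{-C_{0}\eta ^{2}}\) and that \(d_{n}=O\left( n^{1+\alpha }\right) \), as above):
$$\begin{aligned} \sum _{n\ge 1} d_{n}\, n^{-C_{0}\eta ^{2}} \le C \sum _{n\ge 1} n^{1+\alpha }\, n^{-2-2\alpha } = C \sum _{n\ge 1} n^{-1-\alpha } < \infty . \end{aligned}$$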
1.3 Proof of Corollary 1
The unimodality of \(f^{x}\) and assumption (H.6)(ii) permit us to write that \(f^{x (l)} (\Theta (x)) = f^{x (l)} (\widehat{\Theta }(x)) = 0.\) Furthermore, by a Taylor expansion of the function \(f^{x}\) at \(\Theta (x),\) we have:
where \(\Theta ^{*} (x)\) is between \(\Theta (x) \) and \(\widehat{\Theta } (x).\)
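The expansion we have in mind is of the usual form (a hedged reconstruction, with j denoting the order appearing in (H.6)):
$$\begin{aligned} f^{x}\left( \widehat{\Theta }(x)\right) - f^{x}\left( \Theta (x)\right) = \frac{1}{j!}\, f^{x\, (j)}\left( \Theta ^{*}(x)\right) \left( \widehat{\Theta }(x)-\Theta (x)\right) ^{j}. \end{aligned}$$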
Next, by a simple manipulation, we show that
To end the proof of Corollary 1, we only need to show the following claim.
Claim
Proof
By the continuity of the function \(f^{x}\), it follows that:
Then, this last consideration implies that:
Lastly, the claimed result can be deduced by combining (13) with statement (12) and Theorem 2. \(\square \)
Now, we return to the proof of Corollary 1.
Since \( f^{x (j)} (\Theta ^{*} (x)) \rightarrow f^{x (j)} (\Theta (x))\), by using (H.6)(iii) we obtain
Therefore, we have
This last result follows by combining statements (11) and (12) with (14).
Finally, the proof of Corollary 1 can be easily deduced from Theorem 2. \(\square \)
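Writing \(\widehat{f}^{x}\) for the local linear estimator of \(f^{x}\), the overall argument can be summarized by the following hedged sketch of the classical mode inequality (our notation):
$$\begin{aligned} \left| \widehat{\Theta }(x)-\Theta (x)\right| ^{j} \le C \left| f^{x}\left( \widehat{\Theta }(x)\right) - f^{x}\left( \Theta (x)\right) \right| \le 2 C \sup _{y\in \mathscr {C}}\left| \widehat{f}^{x}(y)-f^{x}(y)\right| , \end{aligned}$$where the last bound uses \(\widehat{f}^{x}\left( \widehat{\Theta }(x)\right) \ge \widehat{f}^{x}\left( \Theta (x)\right) \), so that the rate of Theorem 2 transfers to \(\widehat{\Theta }(x)\) after taking j-th roots.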
Keywords
- Ergodic data
- Functional data
- Local linear estimator
- Conditional density
- Nonparametric estimation
- Conditional mode
- Ozone concentration