Abstract
Functional data have become increasingly popular with rapid technological developments in data collection and storage. In this study, we consider both scalar and functional predictors for a positive scalar response under a partially linear multiplicative model. A loss function based on relative errors is adopted, which provides a useful alternative to classic methods such as least squares. Penalization is used to detect the true structure of the model: the proposed method can identify not only the significant scalar variables but also the basis functions (onto which the functional predictor is projected) that contribute to the response. Both estimation and selection consistency are rigorously established. Simulations are conducted to investigate the finite-sample performance of the proposed method, and we analyze the Tecator data to demonstrate its application.
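To fix ideas, the unpenalized core of a relative-error criterion can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it fits the multiplicative model Y_i = exp(x_i′β)ε_i by minimizing the least-product-relative-error (LPRE) criterion of Chen et al. (2016) with plain gradient descent, and omits the penalty terms used for variable and basis selection. The function name `lpre_fit` and the solver are our own choices.

```python
import numpy as np

def lpre_fit(X, y, lr=0.1, n_iter=5000):
    """Minimize the LPRE criterion sum_i [ y_i exp(-x_i'b) + exp(x_i'b)/y_i - 2 ],
    which is smooth and convex in b, by gradient descent."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        # gradient of the LPRE criterion with respect to beta
        grad = X.T @ (-y * np.exp(-eta) + np.exp(eta) / y)
        beta -= lr / n * grad
    return beta

# Simulated multiplicative-model data: Y = exp(x'beta) * eps, eps = exp(noise)
rng = np.random.default_rng(0)
n, beta_true = 500, np.array([0.5, -0.5, 0.3])
X = rng.standard_normal((n, 3))
y = np.exp(X @ beta_true + 0.05 * rng.standard_normal(n))
beta_hat = lpre_fit(X, y)
```

Because the multiplicative error enters the criterion symmetrically (through both Y/exp(x′β) and its reciprocal), the population minimizer coincides with the true β here, so the fitted coefficients recover `beta_true` closely.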
References
Bühlmann, P., & van de Geer, S. (2011). Statistics for high-dimensional data: Methods, theory and applications. Berlin/Heidelberg: Springer Science & Business Media.
Chen, K., Guo, S., Lin, Y., & Ying, Z. (2010). Least absolute relative error estimation. Journal of the American Statistical Association, 105(491), 1104–1112.
Chen, K., Lin, Y., Wang, Z., & Ying, Z. (2016). Least product relative error estimation. Journal of Multivariate Analysis, 144, 91–98.
Fan, J., & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348–1360.
Ferraty, F., Hall, P., & Vieu, P. (2010). Most-predictive design points for functional data predictors. Biometrika, 97(4), 807–824.
Knight, K. (1998). Limiting distributions for L1 regression estimators under general conditions. The Annals of Statistics, 26(2), 755–770.
Kong, D., Xue, K., Yao, F., & Zhang, H. H. (2016). Partially functional linear regression in high dimensions. Biometrika, 103(1), 147–159.
Li, Y., & Hsing, T. (2010). Deciding the dimension of effective dimension reduction space for functional and high-dimensional data. The Annals of Statistics, 38(5), 3028–3062.
Lian, H. (2013). Shrinkage estimation and selection for multiple functional regression. Statistica Sinica, 23, 51–74.
Narula, S. C., & Wellington, J. F. (1977). Prediction, linear regression and the minimum sum of relative errors. Technometrics, 19(2), 185–190.
Park, H., & Stefanski, L. (1998). Relative-error prediction. Statistics & Probability Letters, 40(3), 227–236.
Shin, H. (2009). Partial functional linear regression. Journal of Statistical Planning and Inference, 139(10), 3405–3418.
Wang, J. L., Chiou, J. M., & Müller, H. G. (2016). Functional data analysis. Annual Review of Statistics and Its Application, 3, 257–295.
Wang, Z., Chen, Z., & Wu, Y. (2016). A relative error estimation approach for single index model. Preprint. arXiv:1609.01553.
Yao, F., Müller, H. G., & Wang, J. L. (2005). Functional data analysis for sparse longitudinal data. Journal of the American Statistical Association, 100(470), 577–590.
Zhang, Q., & Wang, Q. (2013). Local least absolute relative error estimating approach for partially linear multiplicative model. Statistica Sinica, 23(3), 1091–1116.
Zhang, T., Zhang, Q., & Li, N. (2016). Least absolute relative error estimation for functional quadratic multiplicative model. Communications in Statistics–Theory and Methods, 45(19), 5802–5817.
Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1418–1429.
Acknowledgements
We thank the organizers for the invitation. The study was partly supported by the Fundamental Research Funds for the Central Universities (20720171064), the National Natural Science Foundation of China (11561006, 11571340), Research Projects of Colleges and Universities in Guangxi (KY2015YB171), the Open Fund Project of Guangxi Colleges and Universities Key Laboratory of Mathematics and Statistical Model (2016GXKLMS005), and the National Bureau of Statistics of China (2016LD01). S. E. Ahmed's research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Appendix
Denote δ = (θ ⊤, β ⊤)⊤ and \(\hat {\delta }=(\hat {\theta }^{\top }, \hat {\beta }^{\top } )^{\top }\). To facilitate the proofs of the theorems, we introduce Lemmas 1–3; the proofs of Lemmas 1 and 2 can be found in [7], and the proof of Lemma 3 can be found in [9].
Lemma 1
Under (C1), (C2), and (C5), we have
Lemma 2
Under (C1)–(C5), we have
Lemma 3
Under (C1) and (C2), we have
Proof of Theorem 1
Let \(\alpha _{n}=\sqrt {K/n}\). We first show that \(\|\hat {\delta }-\delta \|=O_{p}(\alpha _{n})\). Following [4], it suffices to show that, for any given ε > 0, there exists a sufficiently large constant C such that
where δ C = {δ + α n u : ∥u∥ = C} for C > 0. This implies that, with probability at least 1 − ε, there exists a local minimizer inside the ball bounded by δ C. Hence, the consistency of \(\hat {\delta }\) follows.
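The logic of this containment argument, following [4], can be sketched as follows, where \(\psi _{n}(u)\) denotes (as in the display concluding this proof) the difference between the objective evaluated at δ + α n u and at δ:

```latex
P\Bigl\{\inf_{\|u\|=C}\psi_{n}(u)>0\Bigr\}\;\geq\;1-\varepsilon
\quad\Longrightarrow\quad
P\Bigl\{\|\hat{\delta}-\delta\|\leq C\alpha_{n}\Bigr\}\;\geq\;1-\varepsilon,
```

since a continuous objective that exceeds its value at the center δ everywhere on the sphere ∥u∥ = C must attain a local minimum in the interior of the ball.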
By some simplifications, we have
For I 1, by applying the identity in [6] (Knight's identity),
\(|x-y|-|x| = -y\{I(x>0)-I(x<0)\} + 2{\int _{0}^{y}}\{I(x\leq s)-I(x\leq 0)\}\,ds,\)
which is valid for x ≠ 0, we have
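Knight's identity can be checked numerically, which also makes its case structure explicit. The closed-form evaluation of the integral below (a `max` in each sign case of x) is our own simplification for scalar arguments, not taken from the paper:

```python
import numpy as np

def knight_rhs(x, y):
    """Right-hand side of Knight's identity
    |x - y| - |x| = -y{I(x>0) - I(x<0)} + 2*int_0^y {I(x<=s) - I(x<=0)} ds,
    valid for x != 0, with the integral evaluated in closed form."""
    if x > 0:
        # I(x<=0) = 0 and the integral reduces to max(0, y - x)
        return -y + 2.0 * max(0.0, y - x)
    else:  # x < 0: I(x<=0) = 1 and the integral reduces to max(0, x - y)
        return y + 2.0 * max(0.0, x - y)

# Verify the identity on random (x, y) pairs with x != 0
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.standard_normal(2)
    if x != 0.0:
        lhs = abs(x - y) - abs(x)
        assert abs(lhs - knight_rhs(x, y)) < 1e-12
```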
where
By Taylor expansion, we have
where \(\xi _{i}^{[1]}\) lies between \(\hat {W}_{i}^{\top }(\delta +\alpha _{n}u)\) and \(\hat {W}_{i}^{\top }\delta \).
For I 111, by Lemma 2, we have
It follows directly from Lemma 1 and (C5) that I 1111 = O p(1)∥α n u∥. Moreover, under (C5), \(I_{1112}=\alpha _{n}^{2}u^{\top }D_{2}u/2\) and \(I_{112}=\alpha _{n}^{2}u^{\top }D_{2}u/2+o_{p}(1)\|\alpha _{n}u\|{ }^{2}\).
Hence,
For I 12, denote \(c_{1i} = \exp (\hat {W}_{i}^{\top }\hat {\delta }-W_{i}^{\top }\delta -H)\), \(c_{2i} = \exp (\hat {W}_{i}^{\top }\delta -W_{i}^{\top }\delta -H)\) and τ = sε i, then
Therefore, we have
Similarly, it can be shown that
Adopting the approach in [6], we have
According to (C7), we have
and
Combining (6)–(10), for a sufficiently large C, we have ψ n(u) > 0. Therefore, \(\|\hat {\delta }-\delta \|=O_{p}(\sqrt {K/n})\).
Next we show \(\|\hat {\gamma }(t)-\gamma (t)\|=o_{p}(1)\). In fact,
By the result of the first part and Lemma 3, we obtain
Since γ(t) is square integrable, we have F 2 = o p(1). Combining the above results with condition (C4), we obtain
This completes the proof of Theorem 1. □
Proof of Theorem 2
For j ∈ A and k ∈ B, the consistency result in Theorem 1 implies that \(\hat {\theta }\rightarrow \theta \) and \(\hat {\beta }\rightarrow \beta \) in probability, and therefore \(P(j\in \hat {A}_{n})\rightarrow 1\) and \(P(k\in \hat {B}_{n})\rightarrow 1\). It thus suffices to show that \(P ( j\in \hat {A}_{n} ) \rightarrow 0\) for all j∉A and \(P ( k\in \hat {B}_{n} ) \rightarrow 0\) for all k∉B.
For any \(j\in \hat {A}_{n}\) and \(k\in \hat {B}_{n}\), we have
By arguments similar to those in the proof of Theorem 1, we have
According to (C8), we have
Consequently, \(P(j\in \hat {A}_{n})\rightarrow 0\) and \(P(k\in \hat {B}_{n})\rightarrow 0\) for all j∉A and k∉B. This completes the proof of Theorem 2. □
© 2019 Springer Nature Switzerland AG
Zhang, T., Huang, Y., Zhang, Q., Ma, S., Ahmed, S.E. (2019). Penalized Relative Error Estimation of a Partially Functional Linear Multiplicative Model. In: Ahmed, S., Carvalho, F., Puntanen, S. (eds) Matrices, Statistics and Big Data. IWMS 2016. Contributions to Statistics. Springer, Cham. https://doi.org/10.1007/978-3-030-17519-1_10
Print ISBN: 978-3-030-17518-4
Online ISBN: 978-3-030-17519-1