Abstract
In this paper, we consider the estimation problem of quantile varying-coefficient models when the link function is unspecified, which significantly expands the existing works on varying-coefficient models with unspecified link function focusing only on mean regression. We provide new identification conditions which are weaker than existing ones. Under these identification conditions, we use polynomial splines to estimate both the varying coefficients and the link functions and establish the convergence rate of the estimator. Our simulation studies and a real data application illustrate the finite sample performance of the estimators.
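To fix ideas, the estimation strategy described above — approximate both the varying coefficients and the unknown link by polynomial splines, then minimize the quantile check loss jointly over the two sets of spline coefficients — can be sketched numerically. This is only an illustrative toy implementation under assumptions of our own (basis sizes `Kc`, `Kg`, a fixed evaluation range for the link basis, synthetic data), not the authors' code, and it ignores the identification constraints (C2)–(C3) such as the norm and sign restrictions on the coefficient vector:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def check_loss(r, tau):
    # quantile (pinball) loss rho_tau, averaged over the sample
    return np.mean(r * (tau - (r < 0).astype(float)))

def basis(x, n_basis, degree, lo, hi):
    # B-spline design matrix with an open uniform knot vector on [lo, hi]
    inner = np.linspace(lo, hi, n_basis - degree + 1)[1:-1]
    t = np.r_[[lo] * (degree + 1), inner, [hi] * (degree + 1)]
    return np.column_stack([BSpline(t, np.eye(n_basis)[j], degree,
                                    extrapolate=True)(x)
                            for j in range(n_basis)])

# synthetic data whose tau-th conditional quantile is g(x' beta(u))
n, p, tau = 200, 2, 0.5
X = rng.uniform(-1.0, 1.0, (n, p))
U = rng.uniform(0.0, 1.0, n)
beta_true = np.column_stack([np.sin(np.pi * U) + 1.0, np.cos(np.pi * U)])
index = np.sum(X * beta_true, axis=1)
Y = np.exp(index / 2.0) + 0.1 * (rng.uniform(size=n) - tau)

Kc, Kg, deg = 5, 6, 3              # basis sizes (arbitrary choices here)
Bu = basis(U, Kc, deg, 0.0, 1.0)   # basis in u for the varying coefficients

def objective(params):
    Phi = params[:p * Kc].reshape(p, Kc)     # spline coefficients of beta_j(u)
    theta = params[p * Kc:]                  # spline coefficients of the link g
    idx = np.sum(X * (Bu @ Phi.T), axis=1)   # single index x_i' beta(u_i)
    Bg = basis(idx, Kg, deg, -3.0, 3.0)      # link basis on a fixed range (assumption)
    return check_loss(Y - Bg @ theta, tau)

init = np.r_[np.zeros(p * Kc), np.linspace(0.0, 2.0, Kg)]
res = minimize(objective, init, method="Nelder-Mead",
               options={"maxiter": 500})
```

A real implementation would enforce the identification constraints, let the link basis adapt to the range of the fitted index, and choose the number of basis functions according to the rates established in the paper.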
References
Cai Z, Xiao Z (2012) Semiparametric quantile regression estimation in dynamic models with partially varying coefficients. J Econom 167:413–425
Cai Z, Xu X (2008) Nonparametric quantile estimations for dynamic smooth coefficient models. J Am Stat Assoc 103(484):1595–1608
Cai ZW, Fan JQ, Li RZ (2000) Efficient estimation and inferences for varying-coefficient models. J Am Stat Assoc 95(451):888–902
Chen R, Tsay RS (1993) Functional-coefficient autoregressive models. J Am Stat Assoc 88(421):298–308
Cui X, Haerdle WK, Zhu L (2011) The EFM approach for single-index models. Ann Stat 39(3):1658–1688
De Boor C (2001) A practical guide to splines, rev edn. Springer, New York
Fan JQ, Zhang WY (1999) Statistical estimation in varying coefficient models. Ann Stat 27(5):1491–1518
Fan JQ, Zhang JT (2000) Two-step estimation of functional linear models with applications to longitudinal data. J R Stat Soc B Stat Methodol 62:303–322
Fan JQ, Zhang W (2008) Statistical methods with varying coefficient models. Stat Interface 1:179–195
Haerdle W, Liang H, Gao J (2007) Partially linear models. Springer, Berlin
Hastie T, Tibshirani R (1993) Varying-coefficient models. J R Stat Soc B Stat Methodol 55(4):757–796
He X, Shi P (1994) Convergence rate of B-spline estimators of nonparametric conditional quantile functions. J Nonparametr Stat 3:299–308
He X, Shi P (1996) Bivariate tensor-product B-splines in a partly linear model. J Multivar Anal 58(2):162–181
Hendricks W, Koenker R (1992) Hierarchical spline models for conditional quantiles and the demand for electricity. J Am Stat Assoc 87(417):58–68
Hoover DR, Rice JA, Wu CO, Yang LP (1998) Nonparametric smoothing estimates of time-varying coefficient models with longitudinal data. Biometrika 85(4):809–822
Horowitz JL, Lee S (2005) Nonparametric estimation of an additive quantile regression model. J Am Stat Assoc 100(472):1238–1249
Horowitz JL, Mammen E (2007) Rate-optimal estimation for a general class of nonparametric regression models with unknown link functions. Ann Stat 35(6):2589–2619
Huang JHZ, Wu CO, Zhou L (2002) Varying-coefficient models and basis function approximations for the analysis of repeated measurements. Biometrika 89(1):111–128
Kim M (2007) Quantile regression with varying coefficients. Ann Stat 35(1):92–108
Koenker R, Bassett G Jr (1978) Regression quantiles. Econometrica 46(1):33–50
Kong E, Xia Y (2012) A single-index quantile regression model and its estimation. Econom Theory 28(04):730–768
Kuruwita C, Kulasekera K, Gallagher C (2011) Generalized varying coefficient models with unknown link function. Biometrika 98(3):701–710
Li Q (2000) Efficient estimation of additive partially linear models. Int Econ Rev 41(4):1073–1092
Lian H (2012) Variable selection for high-dimensional generalized varying-coefficient models. Stat Sin 22:1563–1588
Lin W, Kulasekera K (2007) Identifiability of single-index models and additive-index models. Biometrika 94(2):496–501
Ma S, He X (2016) Inference for single-index quantile regression models with profile optimization. Ann Stat 44:1234–1268
Schwartz L (1965) On Bayes procedures. Z Wahrsch Verw Gebiete 4:10–26
Wang LF, Li HZ, Huang JHZ (2008) Variable selection in nonparametric varying-coefficient models for analysis of repeated measurements. J Am Stat Assoc 103(484):1556–1569
Wang HJ, Zhu Z, Zhou J (2009) Quantile regression in partially linear varying coefficient models. Ann Stat 37(6B):3841–3866
Wei F, Huang J, Li HZ (2011) Variable selection and estimation in high-dimensional varying-coefficient models. Stat Sin 21:1515–1540
Wu TZ, Yu K, Yu Y (2010) Single-index quantile regression. J Multivar Anal 101(7):1607–1621
Xia Y, Tong H, Li W (1999) On extended partially linear single-index models. Biometrika 86(4):831–842
Xia Y, Zhang W, Tong H (2004) Efficient estimation for semivarying-coefficient models. Biometrika 91:661–681
Yu K, Jones M (1998) Local linear quantile regression. J Am Stat Assoc 93(441):228–237
Yu Y, Ruppert D (2002) Penalized spline estimation for partially linear single-index models. J Am Stat Assoc 97(460):1042–1054
Zhang W, Li D, Xia Y (2015) Estimation in generalised varying-coefficient models with unspecified link functions. J Econom 187(1):238–255
Acknowledgements
The authors sincerely thank the Editor-in-Chief Professor Ugarte, an Associate Editor, and two referees for their insightful comments, which greatly improved the manuscript. The research of Heng Lian is partially supported by City University of Hong Kong Startup Grant 7200521, Hong Kong RGC General Research Fund 11301718, and Project 11871411 from the NSFC and the Shenzhen Research Institute, City University of Hong Kong. The research of Gaorong Li and Lili Yue is supported by the National Natural Science Foundation of China (11871001) and the Beijing Natural Science Foundation (1182003).
Appendix: Technical proofs
1.1 Proof of Proposition 1
Suppose we have \(g(\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u))=h(\mathbf{x}^{\mathrm{T}}{\varvec{\alpha }}(u))\) for all \((\mathbf{x},u)\in R^{p+1}\) in the support of \((\mathbf{X},U)\), with h and \({\varvec{\alpha }}(u)\) also satisfying (C2) and (C3). Setting \(u=0\), since \(\beta _1(0)>0\) and \(\Vert {\varvec{\beta }}(0)\Vert =1\), by the identification of single-index models (Lin and Kulasekera 2007) we know that \({\varvec{\beta }}(0)={\varvec{\alpha }}(0)\) and \(g=h\) on \(\mathcal{S}_0\). Let \(u_0=\inf \{u\in [0,1]:{\varvec{\beta }}(u)\ne {\varvec{\alpha }}(u)\}\), with the convention \(\inf \emptyset =1\). By continuity we have \({\varvec{\beta }}(u_0)={\varvec{\alpha }}(u_0)\). If \(u_0=1\), the identification is achieved. Thus we assume \(u_0<1\) below.
Let \(\inf _u\Vert {\varvec{\beta }}(u)\Vert =\epsilon >0\). Since continuity implies uniform continuity, there exists \(\delta >0\) such that \(\Vert {\varvec{\beta }}(u_1)-{\varvec{\beta }}(u_2)\Vert \le \epsilon \) and \(\Vert {\varvec{\alpha }}(u_1)-{\varvec{\alpha }}(u_2)\Vert \le \epsilon \) whenever \(|u_1-u_2|\le \delta \).
By the assumption that \(\mathbf{X}\) has a convex support containing the origin, \(\mathcal{S}_u \) is an interval containing zero for any u. Since g is nonconstant on \(\mathcal{S}_{u_0}\), there exists some \(M>0\) such that g is nonconstant on \(\mathcal{S}_{u_0}^M:=\mathcal{S}_{u_0}\cap \{\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u_0): \Vert \mathbf{x}\Vert \le M\}\) (if g is constant on \(\mathcal{S}_{u_0}^M\) for all \(M>0\) then g is also a constant on \(\cup _{M} \mathcal{S}_{u_0}^M\) leading to a contradiction). Similarly, we can find a constant \(a\in (0,1)\) such that g is not a constant on \(a\mathcal{S}_{u_0}^M\). For other values of u, we also denote \(\mathcal{S}_u^M=\mathcal{S}_u\cap \{\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u): \Vert \mathbf{x}\Vert \le M\}\).
Let \(L_u<0\) and \(U_u>0\) be the left and right boundary points of \(\mathcal{S}_u^M\), respectively. Assumption (C1) also implies that \(\inf _u \min \{-L_u,U_u\} \) is bounded below by a positive constant, say \(\epsilon '\). Since \(\mathcal{S}_{u_0}^M\) is an interval containing zero, \(a\mathcal{S}_{u_0}^M\) is a proper subset of \(\mathcal{S}_{u_0}^M\). More concretely, we have, for any \(\mathbf{x}\in \mathcal{X}\cap \{\mathbf{x}:\Vert \mathbf{x}\Vert \le M\}\),
$$\begin{aligned} U_{u_0}-a\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u_0)\ge (1-a)U_{u_0}\ge (1-a)\epsilon ' \end{aligned}$$
(A.1)
and
$$\begin{aligned} a\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u_0)-L_{u_0}\ge -(1-a)L_{u_0}\ge (1-a)\epsilon '. \end{aligned}$$
(A.2)
In fact, for example, the first inequality in (A.1) can be seen from the trivial inequality \(U_{u_0}-a\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u_0)\ge (1-a)U_{u_0}\), which holds since \(\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u_0)\le U_{u_0}\).
Now we claim that there exists a constant \(\delta '>0\) such that \(u_0\le u\le u_0+\delta '\) implies
$$\begin{aligned} a\mathcal{S}_{u_0}^M\subseteq \mathcal{S}_u^M\cap \mathcal{S}_{u_0}^M. \end{aligned}$$
In fact, we obviously have \(a\mathcal{S}_{u_0}^M\subseteq \mathcal{S}_{u_0}^M\) since \(a<1\). To see \(a\mathcal{S}_{u_0}^M\subseteq \mathcal{S}_{u}^M\), we only need to show that for all \(\mathbf{x}\in \mathcal{X}\cap \{\mathbf{x}:\Vert \mathbf{x}\Vert \le M\}\),
$$\begin{aligned} U_u-a\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u_0)\ge 0 \end{aligned}$$
and
$$\begin{aligned} a\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u_0)-L_u\ge 0, \end{aligned}$$
which are easily seen to be implied by (A.1) and (A.2), respectively, using that \(|\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u_0)-\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u)|\le M \Vert {\varvec{\beta }}(u_0)-{\varvec{\beta }}(u)\Vert \) and the continuity of \({\varvec{\beta }}(u)\).
Summarizing the above, we can find \(\delta ''>0\) such that for \(u=u_0+\delta ''\), we have \(a\mathcal{S}_{u_0}^M\subseteq \mathcal{S}_u^M\cap \mathcal{S}_{u_0}^M\), \(\Vert {\varvec{\beta }}(u)-{\varvec{\beta }}(u_0)\Vert \le \epsilon \) and \(\Vert {\varvec{\alpha }}(u)-{\varvec{\alpha }}(u_0)\Vert \le \epsilon \). In the following, we fix u to be \(u_0+\delta ''\).
Differentiating both sides of \(g(\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u))=h(\mathbf{x}^{\mathrm{T}}{\varvec{\alpha }}(u))\) with respect to \(\mathbf{x}\), we get \(g'(\mathbf{x}^{\mathrm{T}}{\varvec{\beta }}(u)){\varvec{\beta }}(u)=h'(\mathbf{x}^{\mathrm{T}}{\varvec{\alpha }}(u)){\varvec{\alpha }}(u)\) and thus \({\varvec{\beta }}(u)\propto {\varvec{\alpha }}(u)\). For this fixed u, simply write \({\varvec{\beta }}={\varvec{\beta }}(u)\), \({\varvec{\alpha }}={\varvec{\alpha }}(u)\), so that \({\varvec{\beta }}=c{\varvec{\alpha }}\) for some \(c\in R\). We assume \(|c|\le 1\) (otherwise we can simply switch the roles of \({\varvec{\beta }}\) and \({\varvec{\alpha }}\)). If \(c=1\), we have \({\varvec{\beta }}={\varvec{\alpha }}\), which contradicts the definition of \(u_0\). If \(c=-1\), we have \({\varvec{\beta }}=-{\varvec{\alpha }}\), and thus, by the parallelogram law, \(\Vert {\varvec{\beta }}(u)-{\varvec{\beta }}(u_0)\Vert ^2+\Vert {\varvec{\alpha }}(u)-{\varvec{\alpha }}(u_0)\Vert ^2=\Vert {\varvec{\beta }}(u)-{\varvec{\beta }}(u_0)\Vert ^2+\Vert {\varvec{\beta }}(u)+{\varvec{\beta }}(u_0)\Vert ^2=2\Vert {\varvec{\beta }}(u)\Vert ^2+2\Vert {\varvec{\beta }}(u_0)\Vert ^2\ge 4\epsilon ^2\), which contradicts \(\Vert {\varvec{\beta }}(u)-{\varvec{\beta }}(u_0)\Vert \le \epsilon \) and \(\Vert {\varvec{\alpha }}(u)-{\varvec{\alpha }}(u_0)\Vert \le \epsilon \). Finally, if \(|c|<1\), we have \(g(\mathbf{x}^{\mathrm{T}}{\varvec{\beta }})=h(c \mathbf{x}^{\mathrm{T}}{\varvec{\beta }})\) and, by the identification of single-index models, \(g(x)=h(cx)\) for \(x\in \mathcal{S}_u\). Since \(g=h\) on \(\mathcal{S}_{u_0}\), we have \(h(cx)=h(x)\) for \(x\in \mathcal{S}_u\cap \mathcal{S}_{u_0}\). Note that \(\mathcal{S}_u\cap \mathcal{S}_{u_0}\) is an interval containing zero.
This implies \(h(x)=h(cx)=h(c^2x)=\cdots \rightarrow h(0)\) by continuity, so h is constant on \(\mathcal{S}_u\cap \mathcal{S}_{u_0}\supseteq \mathcal{S}_u^M\cap \mathcal{S}_{u_0}^M\), which is in turn a superset of \(a\mathcal{S}_{u_0}^M\), leading to a contradiction (since we argued above that \(g=h\) is nonconstant on \(a\mathcal{S}_{u_0}^M\)).
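The limiting step in the iteration \(h(x)=h(cx)=h(c^2x)=\cdots \) can be spelled out. For any \(x\in \mathcal{S}_u\cap \mathcal{S}_{u_0}\), the points \(cx, c^2x,\ldots \) all remain in this set, since it is an interval containing zero and \(|c|<1\) places \(c^kx\) between 0 and x. Hence

```latex
h(x) = h(cx) = h(c^2 x) = \cdots = h(c^k x)
\;\xrightarrow{\,k\to\infty\,}\; h(0),
\qquad \text{since } c^k x \to 0 \text{ and } h \text{ is continuous,}
```

so \(h(x)=h(0)\) for every such x.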
\(\square \)
1.2 Proofs for convergence rates
Let \(F(\cdot |\mathbf{X},U)\) be the conditional cdf of e given the covariates. We also write the true conditional quantile \(g(\mathbf{X}_i^{\mathrm{T}}{\varvec{\beta }}(U_i))\) as \(m_i\). In the proofs, C denotes a generic positive constant which may assume different values even on the same line.
Lemma 1
Let \(r_n=\sqrt{K/n}+K^{-d}\). Define \(\mathbf{Z}_i=\{\mathbf{A}^{-1}\mathbf{B}(U_i)\}\otimes \mathbf{X}_i\). Then
$$\begin{aligned}&\sup _{\Vert {{\phi }}-{{\phi }}_0\Vert +\Vert {\varvec{\theta }}-{\varvec{\theta }}_0\Vert \le Cr_n}\Big |\sum _{i=1}^n\Big [\rho _\tau \{Y_i-\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}){\varvec{\theta }}\}-\rho _\tau \{Y_i-\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0){\varvec{\theta }}_0\}\\&\quad -E\big (\rho _\tau \{Y_i-\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}){\varvec{\theta }}\}-\rho _\tau \{Y_i-\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0){\varvec{\theta }}_0\}\big )\\&\quad +\{\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}){\varvec{\theta }}-\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0){\varvec{\theta }}_0\}\{\tau -I(e_i\le 0)\}\Big ]\Big |=o_p(nr_n^2), \end{aligned}$$
where the expectations are over \(Y_i\) conditional on \(\mathbf{X}_i\) and \(U_i\) (all expectations below are also such conditional expectations).
Proof
As in He and Shi (1994), in the proof we consider median regression with \(\tau =1/2,\;\rho _\tau (u)=|u|/2\); the general case can be shown in the same way. Let \(\mathcal{N}=\{({{\phi }}_{(1)},{\varvec{\theta }}_{(1)}),\ldots ,({{\phi }}_{(N)},{\varvec{\theta }}_{(N)})\}\) be a \(\delta _n\)-covering of \(\{({{\phi }},{\varvec{\theta }}): \Vert {{\phi }}-{{\phi }}_0\Vert +\Vert {\varvec{\theta }}-{\varvec{\theta }}_0\Vert \le Cr_n\}\), with size bounded by \(N\le (Cr_n/\delta _n)^{CK}\) and thus \(\mathrm{log} N\le C K\mathrm{log} n\) if we choose \(\delta _n\sim n^{-a}\) for some \(a>0\) (we will choose a to be large enough).
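The entropy bound \(\mathrm{log}\, N\le CK\mathrm{log}\, n\) follows from elementary arithmetic: with \(N\le (Cr_n/\delta _n)^{CK}\) and \(\delta _n\sim n^{-a}\),

```latex
\log N \le CK \log\!\Big(\frac{Cr_n}{\delta_n}\Big)
       = CK\{\log C + \log r_n + a\log n\}
       \le C'K\log n ,
```

since \(r_n\rightarrow 0\) (so \(\log r_n<0\) for large n) and a is a fixed constant.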
Let \(M_{ni}({{\phi }},{\varvec{\theta }})=\frac{1}{2}|Y_i- \mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}){\varvec{\theta }}|-\frac{1}{2} |Y_i- \mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0){\varvec{\theta }}_0|+ \{ \mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}){\varvec{\theta }}- \mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0){\varvec{\theta }}_0\}\{1/2-I(e_i\le 0)\}\), and \(M_n({{\phi }},{\varvec{\theta }})=\sum _{i=1}^nM_{ni}({{\phi }},{\varvec{\theta }})\). Using the Lipschitz property of |u|, and that for any \(({{\phi }},{\varvec{\theta }})\) there exists \(({{\phi }}_{(l)},{\varvec{\theta }}_{(l)})\) such that \(\Vert {{\phi }}-{{\phi }}_{(l)}\Vert ^2+\Vert {\varvec{\theta }}-{\varvec{\theta }}_{(l)}\Vert ^2\le \delta _n^2\), we have
$$\begin{aligned} |M_n({{\phi }},{\varvec{\theta }})-M_n({{\phi }}_{(l)},{\varvec{\theta }}_{(l)})|\le C\sum _{i=1}^n\big |\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}){\varvec{\theta }}-\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_{(l)}){\varvec{\theta }}_{(l)}\big |, \end{aligned}$$
which can obviously be made smaller than \(nr_n^2\) by the Lipschitz property of the spline functions, by setting \(\delta _n\sim n^{-a}\) for a large enough.
Furthermore, by simple algebra
Thus
where \({{\phi }}^*\) lies between \({{\phi }}\) and \({{\phi }}_0\) and we used that \(\Vert \mathbf{B}(x)\Vert \le C\sqrt{K}\) and \(\Vert \mathbf{B}^{(1)}(x)\Vert \le CK^{3/2}\) at any fixed point \(x\in [0,1]\).
Furthermore, we have
Using Bernstein’s inequality together with the union bound, we have
The right-hand side converges to zero when \(a=O\left( \max \{K^{3/2}r_n\mathrm{log} (n), \sqrt{nK^{3/2}r_n^3\mathrm{log} (n)}\}\right) \), which is \(o(nr_n^2)\). \(\square \)
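For completeness, a standard form of Bernstein's inequality for bounded variables is recorded below (the precise variant used in the original derivation is not shown in this excerpt; we use t for the threshold to avoid a clash with the covering exponent a): for independent mean-zero \(\xi _1,\ldots ,\xi _n\) with \(|\xi _i|\le M\),

```latex
P\Big(\Big|\sum_{i=1}^{n} \xi_i\Big| > t\Big)
  \le 2\exp\!\Big(-\frac{t^2/2}{\sum_{i=1}^{n} E\xi_i^2 + Mt/3}\Big).
```

The two terms in the rate displayed above correspond to the two regimes of the denominator (variance-dominated versus range-dominated).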
Lemma 2
Suppose \(\Vert {{\phi }}-{{\phi }}_0\Vert +\Vert {\varvec{\theta }}-{\varvec{\theta }}_0\Vert =Lr_n\) for sufficiently large \(L>0\). Then
$$\begin{aligned} \sum _{i=1}^nE\big [\rho _\tau \{Y_i-\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}){\varvec{\theta }}\}-\rho _\tau \{Y_i-\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0){\varvec{\theta }}_0\}\big ]\ge CL^2nr_n^2 \end{aligned}$$
with probability approaching one.
Proof
Using Knight’s identity \(\rho _\tau (x-y)-\rho _\tau (x)=-y\{\tau -I(x\le 0)\}+\int _0^y \{I(x\le t)-I(x\le 0)\}dt\), we have that
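Knight's identity is elementary but easy to misremember; the following standalone numerical check (with arbitrary illustrative values, and a midpoint-rule integral assuming \(y>0\)) confirms that the two sides agree:

```python
import numpy as np

def rho(x, tau):
    # quantile check function rho_tau(x) = x * (tau - I(x < 0))
    return x * (tau - float(x < 0))

def knight_rhs(x, y, tau, m=200000):
    # -y*{tau - I(x<=0)} + int_0^y {I(x<=t) - I(x<=0)} dt, midpoint rule (y > 0)
    t = (np.arange(m) + 0.5) * (y / m)
    integral = np.sum((x <= t).astype(float) - float(x <= 0)) * (y / m)
    return -y * (tau - float(x <= 0)) + integral

x, y, tau = 0.7, 1.3, 0.25
lhs = rho(x - y, tau) - rho(x, tau)   # direct evaluation of the left-hand side
rhs = knight_rhs(x, y, tau)           # right-hand side of Knight's identity
# lhs and rhs agree up to the discretization error of the integral
```

Here the integrand is a step function, so the midpoint rule is accurate to about \(y/m\); the identity itself holds for y of either sign with the usual signed-integral convention.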
We have, by Taylor’s expansion,
with probability approaching one. The final lower bound above is nontrivial, and we defer its proof to Lemma 3. On the other hand, using Taylor’s expansion, we can easily get a similar upper bound
and by the approximation property of splines,
Combining various bounds above, we get
if L is large enough. \(\square \)
Lemma 3
Under the setup of Lemma 2, we have
with probability approaching one.
Proof
We use \(\mathrm{vec}^{-1}\) to denote the inverse mapping of \({{\phi }}=\mathrm{vec}({\varvec{\Phi }})\). Let \({\varvec{\Phi }}=\mathrm{vec}^{-1}({{\phi }})\), \({\varvec{\Phi }}_0=\mathrm{vec}^{-1}({{\phi }}_0)\), \({\varvec{\beta }}(u)={\varvec{\Phi }}\mathbf{A}^{-1}\mathbf{B}(u)\) and \(\widetilde{\varvec{\beta }}(u)={\varvec{\Phi }}_0\mathbf{A}^{-1}\mathbf{B}(u)\). Obviously \(\mathbf{Z}_i^{\mathrm{T}}({{\phi }}-{{\phi }}_0)=\mathbf{X}_i^{\mathrm{T}}\{{\varvec{\beta }}(U_i)-\widetilde{\varvec{\beta }}(U_i)\} =\mathbf{X}_i^{\mathrm{T}}\mathbf{J}(U_i)\{{\varvec{\beta }}^{(-r)}(U_i)-\widetilde{\varvec{\beta }}^{(-r)}(U_i)\}+o_p(r_n^2)\) and we have \(\Vert {\varvec{\beta }}^{(-r)}-\widetilde{\varvec{\beta }}^{(-r)}\Vert \ge CLr_n\). Then the right-hand side of (A.5) can be written as
By the law of large numbers, we only need to show the lower bound for
which is obviously lower bounded by \(E\Vert {\varvec{\beta }}^{(-r)}(U)-\widetilde{\varvec{\beta }}^{(-r)}(U)\Vert ^2+\Vert {\varvec{\theta }}-{\varvec{\theta }}_0\Vert ^2\ge CL^2r_n^2\) if we can show that the eigenvalues of
are bounded away from zero for all \(u\in [0,1]\).
Since \(|\mathbf{B}^{(1)\mathrm{T}}(\mathbf{X}^{\mathrm{T}}\widetilde{\varvec{\beta }}(u)) {\varvec{\theta }}_0-g^{(1)}(\mathbf{X}^{\mathrm{T}}{\varvec{\beta }}_0(u))|\le CK^{-d+1}\), we only need to show that the eigenvalues of
are bounded away from zero.
Under condition (A4), let \(\mathbf{L}_0\) be the \(p\times K\) matrix of spline coefficients with \(\Vert E\{\mathbf{X}|\mathbf{X}^{\mathrm{T}}{\varvec{\beta }}_0(U)\}-\mathbf{L}_0\mathbf{B}(\mathbf{X}^{\mathrm{T}}{\varvec{\beta }}_0(U))\Vert \le C K^{-d'}\).
Pre-multiplying (A.6) by
and post-multiplying (A.6) by its transpose we get the matrix
It is easy to see that singular values of (A.7) are bounded and bounded away from zero, and thus we only need to show that the eigenvalues of (A.8) are bounded away from zero. Since \(\Vert E\{\mathbf{X}|\mathbf{X}^{\mathrm{T}}{\varvec{\beta }}_0(U)\}-\mathbf{L}_0\mathbf{B}(\mathbf{X}^{\mathrm{T}}{\varvec{\beta }}_0(U))\Vert \le C K^{-d'}\), we can replace \(\mathbf{L}_0\mathbf{B}(\mathbf{X}^{\mathrm{T}}{\varvec{\beta }}_0(U))\) with \(E\{\mathbf{X}|\mathbf{X}^{\mathrm{T}}{\varvec{\beta }}_0(U)\}\) in (A.8), and we only need to consider
It is easy to see that (A.9) is block-diagonal and the eigenvalues of both blocks are bounded and bounded away from zero, by the property of splines and condition (A5). \(\square \)
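The "property of splines" invoked here is that the suitably scaled Gram matrix of a B-spline basis has eigenvalues bounded away from zero and infinity. A numerical illustration for a uniform cubic basis on [0,1] (the basis size and thresholds below are arbitrary choices of ours, not quantities from the paper):

```python
import numpy as np
from scipy.interpolate import BSpline

K, deg = 10, 3
# open uniform (clamped) knot vector on [0, 1] for K basis functions
inner = np.linspace(0.0, 1.0, K - deg + 1)[1:-1]
t = np.r_[[0.0] * (deg + 1), inner, [1.0] * (deg + 1)]

x = np.linspace(0.0, 1.0, 5001)
B = np.column_stack([BSpline(t, np.eye(K)[j], deg)(x) for j in range(K)])
G = (B.T @ B) / len(x)            # approximates E{B(U)B(U)^T} for U ~ Uniform[0,1]
eigs = np.linalg.eigvalsh(K * G)  # scale by K, as in the rate analysis
# all eigenvalues of K*G lie in a fixed interval bounded away from 0 and infinity
```

Without the factor K the Gram matrix itself degenerates as the basis grows (each basis function has support of length of order 1/K), which is why the scaled version appears in arguments of this type.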
The following lemma deals with one of the terms in the statement of Lemma 1. For models with a single-index structure, its proof is more involved than for additive or varying-coefficient models, because in the latter the parametric and nonparametric parts enter the regression function additively.
Lemma 4
Uniformly over \(\Vert {{\phi }}-{{\phi }}_0\Vert +\Vert {\varvec{\theta }}-{\varvec{\theta }}_0\Vert = Lr_n\),
$$\begin{aligned} \sum _{i=1}^n\{\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}){\varvec{\theta }}-\mathbf{B}^{\mathrm{T}}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0){\varvec{\theta }}_0\}\{\tau -I(e_i\le 0)\}=L\cdot O_p(\sqrt{nK}r_n)+o_p(nr_n^2). \end{aligned}$$
Proof
For simplicity of notation, let \(\epsilon _i=\tau -I(e_i\le 0)\). We have
The term (A.10) obviously has order \(L\cdot O_p(\sqrt{n}r_n)\). For (A.14), we have that \(\Vert \sum _i\mathbf{B}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0)\epsilon _i\Vert ^2=O_p(\sum _i\Vert \mathbf{B}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0)\Vert ^2)=O_p(nK)\) and thus (A.14) is \(O_p(\sqrt{nK}\Vert {\varvec{\theta }}-{\varvec{\theta }}_0\Vert )=L\cdot O_p(\sqrt{nK}r_n)\).
For the term (A.11), since \(\Vert \sum _i\mathbf{B}^{(1)}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0)\epsilon _i\Vert ^2=O_p(\sum _i\Vert \mathbf{B}^{(1)}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0)\Vert ^2)=O_p(nK^3)\) we have
With the further Taylor expansion \(\mathbf{B}^{(1)}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}^*)-\mathbf{B}^{(1)}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}_0)=\mathbf{B}^{(2)}(\mathbf{Z}_i^{\mathrm{T}}{{\phi }}^{**})\mathbf{Z}_i^{\mathrm{T}}({{\phi }}^*-{{\phi }}_0)\), the terms (A.12) and (A.13) are also of order \(o_p(nr_n^2)\), and the proof is complete. Note that all these bounds are trivially uniform over \(\Vert {{\phi }}-{{\phi }}_0\Vert +\Vert {\varvec{\theta }}-{\varvec{\theta }}_0\Vert = Lr_n\), since the factors \({{\phi }}-{{\phi }}_0\) and \({\varvec{\theta }}-{\varvec{\theta }}_0\) can be pulled out in front of the sums. \(\square \)
Proof of Theorem 1
Combining Lemmas 1, 2 and 4, we get for sufficiently large \(L>0\),
and thus there is a local minimizer \((\widehat{{\phi }},\widehat{\varvec{\theta }})\) with \(\Vert \widehat{{\phi }}-{{\phi }}_0\Vert +\Vert \widehat{\varvec{\theta }}-{\varvec{\theta }}_0\Vert =O_p(r_n)\). \(\square \)
Cite this article
Yue, L., Li, G. & Lian, H. Identification and estimation in quantile varying-coefficient models with unknown link function. TEST 28, 1251–1275 (2019). https://doi.org/10.1007/s11749-019-00638-6