On the Whittle estimator for linear random noise spectral density parameter in continuous-time nonlinear regression models

  • A. V. Ivanov
  • N. N. Leonenko
  • I. V. Orlovskyi
Open Access


A continuous-time nonlinear regression model with a Lévy-driven linear noise process is considered. Sufficient conditions for consistency and asymptotic normality of the Whittle estimator of the noise spectral density parameter are obtained.


Keywords: Nonlinear regression model · Lévy-driven linear noise process · Least squares estimator · Spectral density · Whittle estimator · Consistency · Asymptotic normality · Levitan polynomials

1 Introduction

The paper is focused on an important aspect of the study of regression models with correlated observations: the estimation of functional characteristics of the random noise. In this problem the unknown regression parameter becomes a nuisance parameter and complicates the analysis of the noise. To neutralise its presence, we must first estimate this parameter and then build estimators, say, of the spectral density parameter of the stationary random noise from the residuals, that is, the differences between the values of the observed process and the fitted regression function.

So, in the first step we employ the least squares estimator (LSE) of the unknown parameter of the nonlinear regression, because of its relative simplicity. Asymptotic properties of the LSE in nonlinear regression models have been studied by many authors; numerous results on the subject can be found in the monographs by Ivanov and Leonenko (1989) and Ivanov (1997).

In the second step we use the residual periodogram to estimate the unknown parameter of the noise spectral density by means of the Whittle-type contrast process (Whittle 1951, 1953).

The results obtained to date on the Whittle minimum contrast estimator (MCE) form a developed theory that covers various mathematical models of stochastic processes and random fields. Some publications on the topic are Hannan (1970, 1973), Dunsmuir and Hannan (1976), Guyon (1982), Rosenblatt (1985), Fox and Taqqu (1986), Dahlhaus (1989), Heyde and Gay (1989, 1993), Giraitis and Surgailis (1990), Giraitis and Taqqu (1999), Gao et al. (2001), Gao (2004), Leonenko and Sakhno (2006), Bahamonde and Doukhan (2017), Ginovyan and Sahakyan (2017), Anh et al. (2004), Bai et al. (2016), Ginovyan et al. (2014), Giraitis et al. (2017).

In the article by Koul and Surgailis (2000), the asymptotic properties of the Whittle estimator of the spectral density parameters of strongly dependent random noise in a linear regression model were studied in a discrete-time setting.

In the paper by Ivanov and Prykhod’ko (2016), sufficient conditions for consistency and asymptotic normality of the Whittle estimator of the spectral density parameter of Gaussian stationary random noise in a continuous-time nonlinear regression model were obtained using the residual periodogram. The current paper continues this research, extending it to the case of Lévy-driven linear random noise and to more general classes of regression functions, including trigonometric ones. We use the scheme of the proof for the Gaussian case (Ivanov and Prykhod’ko 2016) and some results of the papers Avram et al. (2010) and Anh et al. (2004). For linear random noise the proofs rely on essentially different types of limit theorems. In comparison with the Gaussian case, this leads to special conditions on the linear Lévy-driven random noise and to new consistency and asymptotic normality conditions.

In the present publication a continuous-time model is considered. However, the results obtained can also be applied to discrete-time observations using statements like Theorem 3 of Alodat and Olenko (2017) or Lemma 1 of Leonenko and Taufer (2006).

2 Setting

Consider a regression model
$$\begin{aligned} X(t)=g(t,\,\alpha _0)+\varepsilon (t),\ t\ge 0, \end{aligned}$$
where \(g{:}\;(-\gamma ,\,\infty )\times \mathcal {A}_\gamma \ \rightarrow \ \mathbb {R}\) is a continuous function, \(\mathcal {A}\subset \mathbb {R}^q\) is an open convex set, \(\mathcal {A}_\gamma =\bigcup \limits _{\Vert e\Vert \le 1}\left(\mathcal {A}+\gamma e\right)\), \(\gamma \) is some positive number, \(\alpha _0\in \mathcal {A}\) is the true value of the unknown parameter, and \(\varepsilon \) is the random noise described below.

Remark 1

The assumption about the domain \((-\gamma ,\,\infty )\) of the function g in t is of a technical nature and does not affect possible applications. This assumption makes it possible to formulate the condition \(\mathbf{N }_2\), which is used in the proof of Lemma 7.

Throughout the paper \((\Omega ,\,\mathcal {F},\,\hbox {P})\) denotes a complete probability space.

A Lévy process L(t), \(t\ge 0\), is a stochastic process with independent and stationary increments, continuous in probability, whose sample paths are right-continuous with left limits (càdlàg), and with \(L(0)=0\). For a general treatment of Lévy processes we refer to Applebaum (2009) and Sato (1999).

Let \((a,\,b,\,\Pi )\) denote a characteristic triplet of the Lévy process L(t), \(t\ge 0\), that is for all \(t\ge 0\)
$$\begin{aligned} \log \hbox {E}\exp \left\{ \mathrm {i}zL(t)\right\} =t\kappa (z) \end{aligned}$$
for all \(z\in \mathbb {R}\), where
$$\begin{aligned} \kappa (z)=\mathrm {i}az-\frac{1}{2}bz^2 +\int \limits _{\mathbb {R}}\,\left(e^{\mathrm {i}zu}-1-\mathrm {i}z\tau (u)\right)\Pi (du),\ z\in \mathbb {R}, \end{aligned}$$
where \(a\in \mathbb {R}\), \(b\ge 0\), and \(\tau \) is a fixed truncation function (e.g., \(\tau (u)=u\mathbb {I}_{[-1,1]}(u)\)). The Lévy measure \(\Pi \) in (2) is a Radon measure on \(\mathbb {R}\backslash \{0\}\) such that \(\Pi (\{0\})=0\) and
$$\begin{aligned} \int \limits _{\mathbb {R}}\,\min (1,\,u^2)\Pi (du)<\infty . \end{aligned}$$
It is known that L(t) has finite pth moment for \(p>0\) (\(\hbox {E}|L(t)|^p<\infty \)) if and only if
$$\begin{aligned} \int \limits _{|u|\ge 1}\,|u|^p\Pi (du)<\infty , \end{aligned}$$
and L(t) has finite pth exponential moment for \(p>0\) (\(\hbox {E}\left[e^{pL(t)}\right]<\infty \)) if and only if
$$\begin{aligned} \int \limits _{|u|\ge 1}\,e^{pu}\Pi (du)<\infty , \end{aligned}$$
see, e.g., Sato (1999), Theorem 25.3.

If L(t), \(t\ge 0\), is a Lévy process with characteristics \((a,\,b,\,\Pi )\), then the process \(-L(t)\), \(t\ge 0\), modified to be càdlàg, is also a Lévy process, with characteristics \((-a,\,b,\,\tilde{\Pi })\), where \(\tilde{\Pi }(A)=\Pi (-A)\) for each Borel set A (Anh et al. 2002).

We introduce a two-sided Lévy process L(t), \(t\in \mathbb {R}\), defined for \(t<0\) to be equal to an independent copy of \(-L(-t)\).

Let \(\hat{a}\,:\,\mathbb {R}\rightarrow \mathbb {R}_+\) be a measurable function. We consider the Lévy-driven continuous-time linear (or moving average) stochastic process
$$\begin{aligned} \varepsilon (t)=\int \limits _{\mathbb {R}}\, \hat{a}(t-s)dL(s),\ t\in \mathbb {R}. \end{aligned}$$
For a causal process (4), \(\hat{a}(t)=0\) for \(t<0\).
In the sequel we assume that
$$\begin{aligned} \hat{a}\in L_1(\mathbb {R})\cap L_2(\mathbb {R})\ \text {or}\ \hat{a}\in L_2(\mathbb {R}) \ \text {with}\ \hbox {E}L(1)=0. \end{aligned}$$
Under the condition (5) and
$$\begin{aligned} \int \limits _{\mathbb {R}}\,u^2\Pi (du)<\infty , \end{aligned}$$
the stochastic integral in (4) is well-defined in \(L_2(\Omega )\) in the sense of stochastic integration introduced in Rajput and Rosinski (1989).
Popular choices for the kernel in (4) are Gamma-type kernels:
  • \(\hat{a}(t)=t^\alpha e^{-\lambda t}\mathbb {I}_{[0,\,\infty )}(t)\), \(\lambda >0\), \(\alpha >-\frac{1}{2}\);

  • \(\hat{a}(t)=e^{-\lambda t}\mathbb {I}_{[0,\,\infty )}(t)\), \(\lambda >0\) (Ornstein-Uhlenbeck process);

  • \(\hat{a}(t)=e^{-\lambda |t|}\), \(\lambda >0\) (well-balanced Ornstein-Uhlenbeck process).
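For intuition, process (4) can be approximated on a grid by a discrete causal convolution of the kernel with simulated Lévy increments. The sketch below is an illustration only, not part of the paper's framework: it assumes a Brownian driver (the simplest Lévy process with \(\hbox {E}L(1)=0\) and \(d_2=1\)) and the Ornstein-Uhlenbeck kernel with rate 1, and checks the stationary variance against \(B(0)=d_2\int _{\mathbb {R}}\hat{a}^2(s)ds=1/2\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions): OU kernel rate lam = 1, step dt, n grid points.
lam, dt, n = 1.0, 0.01, 200_000

# Brownian increments play the role of dL(s); any centered Lévy driver could be used.
dL = rng.normal(0.0, np.sqrt(dt), n)

# Causal kernel a_hat(t) = exp(-lam*t), truncated where it is numerically negligible.
tgrid = np.arange(0.0, 20.0 / lam, dt)
kernel = np.exp(-lam * tgrid)

# eps(t_k) ~ sum_j a_hat(t_k - s_j) dL_j: discrete causal moving average.
eps = np.convolve(dL, kernel, mode="full")[:n]

# Stationary variance B(0) = d2 * integral of a_hat^2 = 1/(2*lam) = 0.5 here.
burn = tgrid.size
print(eps[burn:].var())
```

With a finer grid and a longer horizon the sample variance approaches \(1/(2\lambda )\); replacing `dL` by centered compound Poisson increments gives a pure-jump driver.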

\(\mathbf A _1\). The process \(\varepsilon \) in (1) is a measurable causal linear process of the form (4), where the two-sided Lévy process L is such that \(\hbox {E}L(1)=0\) and \(\hat{a}\in L_1(\mathbb {R})\cap L_2(\mathbb {R})\). Moreover, the Lévy measure \(\Pi \) of L(1) satisfies (3) for some \(p>0\).
From the condition \(\mathbf{A }_1\) it follows (Anh et al. 2002) that for any \(r\ge 1\)
$$\begin{aligned} \log \hbox {E}\exp \left\{ \mathrm {i}\sum \limits _{j=1}^r\,z_j\varepsilon (t_j)\right\} = \int \limits _{\mathbb {R}}\,\kappa \left(\sum \limits _{j=1}^r\,z_j\hat{a}\left(t_j-s\right)\right)ds. \end{aligned}$$
In turn from (6) it can be seen that the stochastic process \(\varepsilon \) is stationary in a strict sense.
Denote by
$$\begin{aligned} \begin{aligned} m_r(t_1,\,\ldots ,\,t_r)&=\hbox {E}\varepsilon (t_1)\ldots \varepsilon (t_r),\\ c_r(t_1,\,\ldots ,\,t_r)&=\mathrm {i}^{-r}\left.\dfrac{\partial ^r}{\partial z_1\ldots \partial z_r}\,\log \hbox {E}\exp \left\{ \mathrm {i}\sum \limits _{j=1}^r\,z_j\varepsilon (t_j)\right\} \,\right|_{z_1 = \cdots =z_r = 0} \end{aligned} \end{aligned}$$
the moment and cumulant functions, respectively, of order \(r\ge 1\) of the process \(\varepsilon \). Thus \(m_2(t_1,\,t_2)=B(t_1-t_2)\), where
$$\begin{aligned} B(t)=d_2\int \limits _{\mathbb {R}}\,\hat{a}(t+s)\hat{a}(s)ds,\ t\in \mathbb {R}, \end{aligned}$$
is the covariance function of \(\varepsilon \), and the fourth moment function
$$\begin{aligned} \begin{aligned} m_4(t_1,\,t_2,\,t_3,\,t_4)&=c_4(t_1,\,t_2,\,t_3,\,t_4)+m_2(t_1,\,t_2)m_2(t_3,\,t_4)\\&\ \quad +\,m_2(t_1,\,t_3)m_2(t_2,\,t_4)+m_2(t_1,\,t_4)m_2(t_2,\,t_3). \end{aligned} \end{aligned}$$
The explicit expression for cumulants of the stochastic process \(\varepsilon \) can be obtained from (6) by direct calculations:
$$\begin{aligned} c_r(t_1,\,\ldots ,\,t_r)=d_r\int \limits _{\mathbb {R}}\,\prod \limits _{j=1}^r\,\hat{a}\left(t_j-s\right)ds, \end{aligned}$$
where \(d_r\) is the rth cumulant of the random variable L(1). In particular,
$$\begin{aligned} d_2=\hbox {E}L^2(1)=-\kappa ^{(2)}(0),\ \ \ d_4=\hbox {E}L^4(1) - 3\left(\hbox {E}L^2(1)\right)^2. \end{aligned}$$
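These relations can be checked empirically for a simple driver. The sketch below (an illustration only) takes L(1) to be a centered Poisson variable with rate \(\nu =2\), i.e., a compound Poisson Lévy process with unit jumps, for which every cumulant \(d_r\), \(r\ge 2\), equals \(\nu \):

```python
import numpy as np

rng = np.random.default_rng(1)

nu, n = 2.0, 1_000_000

# L(1) = N - nu, N ~ Poisson(nu): centered, and d_r = nu for all r >= 2
# (the cumulants of the Poisson law; centering changes only the first cumulant).
L1 = rng.poisson(nu, n) - nu

d2_hat = np.mean(L1**2)                  # estimates d2 = E L^2(1)
d4_hat = np.mean(L1**4) - 3 * d2_hat**2  # estimates d4 = E L^4(1) - 3 (E L^2(1))^2
print(d2_hat, d4_hat)
```

Both estimates should be close to \(\nu =2\).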
Under the condition \(\mathbf{A }_1\), the spectral densities of the stationary process \(\varepsilon \) of all orders exist and can be obtained from (8) as
$$\begin{aligned} f_r(\lambda _1,\,\ldots ,\,\lambda _{r-1})=(2\pi )^{-r+1}d_r\cdot a\left(-\sum \limits _{j=1}^{r-1}\,\lambda _j\right)\cdot \prod _{j=1}^{r-1}\,a(\lambda _j), \end{aligned}$$
where \(a\in L_2(\mathbb {R})\), \(a(\lambda )=\int \limits _{\mathbb {R}}\,\hat{a}(t)e^{-\mathrm {i}\lambda t}dt\), \(\lambda \in \mathbb {R}\), if complex-valued functions \(f_r\in L_1\left(\mathbb {R}^{r-1}\right)\), \(r>2\), see, e.g., Avram et al. (2010) for definitions of the spectral densities of higher order \(f_r,\ r\ge 3\).
For \(r=2\), we denote the spectral density of the second order by
$$\begin{aligned} f(\lambda )=f_2(\lambda )=(2\pi )^{-1}d_2a(\lambda )a(-\lambda )=(2\pi )^{-1}d_2\left|a(\lambda )\right|^2. \end{aligned}$$
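For instance, for the Ornstein-Uhlenbeck kernel \(\hat{a}(t)=e^{-\mu t}\mathbb {I}_{[0,\,\infty )}(t)\) one has \(a(\omega )=(\mu +\mathrm {i}\omega )^{-1}\), so \(f(\omega )=d_2/\bigl (2\pi (\mu ^2+\omega ^2)\bigr )\). A quick numerical check of the transform (the parameter values are arbitrary):

```python
import numpy as np

mu, d2, omega = 1.5, 1.0, 0.7  # arbitrary illustrative values

# a(omega) = integral_0^inf e^{-mu*t} e^{-i*omega*t} dt, computed by the trapezoidal
# rule on a truncated grid, against the closed form 1/(mu + i*omega).
t = np.linspace(0.0, 40.0 / mu, 400_001)
y = np.exp(-(mu + 1j * omega) * t)
a_num = (y.sum() - 0.5 * (y[0] + y[-1])) * (t[1] - t[0])
a_closed = 1.0 / (mu + 1j * omega)

# Second-order spectral density f(omega) = d2/(2*pi) * |a(omega)|^2.
f_omega = d2 / (2 * np.pi) * abs(a_closed) ** 2
print(abs(a_num - a_closed), f_omega)
```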
\(\mathbf A _2\). (i)

Spectral densities (9) of all orders satisfy \(f_r\in L_1(\mathbb {R}^{r-1})\), \(r\ge 2\);

(ii)

\(a(\lambda )=a\left(\lambda ,\,\theta ^{(1)}\right)\), \(d_2=d_2\left(\theta ^{(2)}\right)\), \(\theta =\left(\theta ^{(1)},\,\theta ^{(2)}\right)\in \Theta _\tau \), \(\Theta _\tau =\bigcup \limits _{\Vert e\Vert < 1}(\Theta +\tau e)\), \(\tau >0\) is some number, \(\Theta \subset \mathbb {R}^m\) is a bounded open convex set; that is, \(f(\lambda )=f(\lambda ,\,\theta )\), \(\theta \in \Theta _\tau \), and the true value of the parameter is \(\theta _0\in \Theta \);

(iii)

\(f(\lambda ,\,\theta )>0\), \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\).

In the condition \(\mathbf{A }_2\)(ii) above, \(\theta ^{(1)}\) represents parameters of the kernel \(\hat{a}\) in (4), while \(\theta ^{(2)}\) represents parameters of the Lévy process.

Remark 2

The last part of the condition \(\mathbf{A }_1\) is fully used in the proof of Lemma 5 and Theorem B.1 in “Appendix B”. The condition \(\mathbf{A }_2\)(i) is fully used only in the proof of Lemma 5. When we refer to these conditions elsewhere in the text, we use them partially: see, for example, Lemma 3, where we need only the existence of \(f_4\).

Definition 1

The least squares estimator (LSE) of the parameter \(\alpha _0\in \mathcal {A}\) obtained from observations of the process \(\left\{ X(t),\ t\in [0,T]\right\} \) is said to be any random vector \(\widehat{\alpha }_T=(\widehat{\alpha }_{1T},\,\ldots ,\,\widehat{\alpha }_{qT})\in \mathcal {A}^c\) (\(\mathcal {A}^c\) is the closure of \(\mathcal {A}\)) such that
$$\begin{aligned} S_T\left(\widehat{\alpha }_T\right)= \min \limits _{\alpha \in \mathcal {A}^c}\,S_T(\alpha ),\ S_T(\alpha )=\int \limits _0^T\,\left(X(t)-g(t,\,\alpha )\right)^2dt. \end{aligned}$$
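A minimal discretized illustration of Definition 1, assuming (purely for the example) a regression function \(g(t,\,\alpha )=\alpha _1+\alpha _2 t\) linear in the parameters, so that the Riemann-sum version of \(S_T(\alpha )\) is minimized by an ordinary least-squares fit, and i.i.d. noise in place of (4):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear-in-parameters regression g(t, alpha) = alpha1 + alpha2*t.
T, dt = 100.0, 0.01
t = np.arange(0.0, T, dt)
alpha0 = np.array([2.0, 0.5])                                 # "true" parameter
X = alpha0[0] + alpha0[1] * t + rng.normal(0.0, 1.0, t.size)

# Minimizing the Riemann sum of (X(t) - g(t, alpha))^2 * dt is ordinary least squares.
design = np.column_stack([np.ones_like(t), t])
alpha_hat, *_ = np.linalg.lstsq(design, X, rcond=None)
print(alpha_hat)
```

For a genuinely nonlinear g the same discretized \(S_T\) would be minimized numerically instead.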
We consider the residual periodogram
$$\begin{aligned} I_T(\lambda ,\,\widehat{\alpha }_T)=(2\pi T)^{-1}\left|\int \limits _0^T\,\left(X(t)-g(t,\,\widehat{\alpha }_T)\right) e^{-\mathrm {i}t\lambda }dt\right|^2,\ \lambda \in \mathbb {R}, \end{aligned}$$
and the Whittle contrast field
$$\begin{aligned} U_T(\theta ,\,\widehat{\alpha }_T)=\int \limits _{\mathbb {R}}\,\left(\log f(\lambda ,\,\theta ) +\dfrac{I_T\left(\lambda ,\,\widehat{\alpha }_T\right)}{f(\lambda ,\,\theta )}\right)w(\lambda )d\lambda ,\ \theta \in \Theta ^c, \end{aligned}$$
where \(w(\lambda ),\ \lambda \in \mathbb {R}\), is an even nonnegative bounded Lebesgue measurable function, for which the integral (10) is well-defined. The existence of integral (10) follows from the condition \(\mathbf{C }_4\) introduced below.
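A Riemann-sum sketch of the residual periodogram (the grid, horizon, and test residuals are assumptions for the example). The sanity check uses the fact that for residuals \(\cos (\lambda _0 t)\) the value at \(\lambda _0\) is approximately \(T/(8\pi )\), since \(\int _0^T\cos (\lambda _0 t)e^{-\mathrm {i}\lambda _0 t}dt\approx T/2\).

```python
import numpy as np

def residual_periodogram(residuals, t, lam):
    """Riemann-sum version of (2*pi*T)^{-1} |integral_0^T resid(t) e^{-i*t*lam} dt|^2."""
    dt = t[1] - t[0]
    T = t[-1] + dt
    integral = np.sum(residuals * np.exp(-1j * t * lam)) * dt
    return np.abs(integral) ** 2 / (2 * np.pi * T)

# Sanity check with residuals cos(lam0*t): the periodogram at lam0 is close to T/(8*pi).
T, dt, lam0 = 100.0, 0.001, 2.0
t = np.arange(0.0, T, dt)
val = residual_periodogram(np.cos(lam0 * t), t, lam0)
print(val, T / (8 * np.pi))
```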

Definition 2

The minimum contrast estimator (MCE) of the unknown parameter \(\theta _0\in \Theta \) is said to be any random vector \(\widehat{\theta }_T=\left(\widehat{\theta }_{1T},\ldots ,\widehat{\theta }_{mT}\right)\) such that
$$\begin{aligned} U_T\left(\widehat{\theta }_T,\,\widehat{\alpha }_T\right)=\min \limits _{\theta \in \Theta ^c}\,U_T\left(\theta ,\widehat{\alpha }_T\right). \end{aligned}$$

The minimum in Definition 2 is attained due to the continuity of the integral (10) in \(\theta \in \Theta ^c\), which follows from the condition \(\mathbf{C }_4\) introduced below.
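To make Definitions 1-2 concrete, here is a minimal end-to-end numerical sketch under strong simplifying assumptions: the Ornstein-Uhlenbeck family \(f(\lambda ,\,\theta )=\bigl (2\pi (\theta ^2+\lambda ^2)\bigr )^{-1}\) with \(d_2=1\) and true \(\theta _0=1\), no regression part (so the residuals coincide with the noise), weight \(w=\mathbb {I}_{(0,\,5]}\), and a crude grid search in place of the minimization in Definition 2.

```python
import numpy as np

rng = np.random.default_rng(3)

theta0, dt, N = 1.0, 0.05, 40_000   # assumed true parameter, grid step, sample size
T = N * dt

# Exact discretization of the OU process (kernel e^{-theta0*t}, Brownian driver).
phi = np.exp(-theta0 * dt)
eta = rng.normal(0.0, np.sqrt((1 - phi**2) / (2 * theta0)), N)
eps = np.empty(N)
eps[0] = rng.normal(0.0, np.sqrt(1.0 / (2 * theta0)))
for k in range(1, N):
    eps[k] = phi * eps[k - 1] + eta[k]

# Periodogram I_T(lam_j) = (2*pi*T)^{-1} |integral eps(t) e^{-i*lam*t} dt|^2 via the FFT.
lam = 2 * np.pi * np.fft.rfftfreq(N, d=dt)
I = np.abs(np.fft.rfft(eps) * dt) ** 2 / (2 * np.pi * T)

# Whittle contrast restricted to frequencies in (0, 5]; grid search over theta.
band = (lam > 0) & (lam <= 5.0)

def U(th):
    f = 1.0 / (2 * np.pi * (th**2 + lam[band] ** 2))
    return np.sum(np.log(f) + I[band] / f)

thetas = np.arange(0.5, 2.0, 0.005)
theta_hat = thetas[np.argmin([U(th) for th in thetas])]
print(theta_hat)
```

Finer grids, longer horizons, and a proper optimizer sharpen the estimate; the point here is only the shape of the two-step procedure.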

3 Consistency of the minimum contrast estimator

Suppose the function \(g(t,\,\alpha )\) in (1) is continuously differentiable with respect to \(\alpha \in \mathcal {A}^c\) for any \(t\ge 0\), and its derivatives \(g_i(t,\,\alpha )=\dfrac{\partial }{\partial \alpha _i}g(t,\,\alpha )\), \(i=\overline{1,q}\), are locally integrable with respect to t. Let
$$\begin{aligned} d_T(\alpha )=\hbox {diag}\Bigl (d_{iT}(\alpha ),\ i=\overline{1,q}\Bigr ),\ d_{iT}^2(\alpha )=\int \limits _0^T\,g_i^2(t,\,\alpha )dt, \end{aligned}$$
and \(\underset{T\rightarrow \infty }{\liminf }\,T^{-\frac{1}{2}}d_{iT}(\alpha )>0\), \(i=\overline{1,q}\), \(\alpha \in \mathcal {A}\).
Set
$$\begin{aligned} \Phi _T(\alpha _1,\,\alpha _2)=\int \limits _0^T\, (g(t,\,\alpha _1)-g(t,\,\alpha _2))^2dt,\ \alpha _1,\,\alpha _2\in \mathcal {A}^c. \end{aligned}$$
We assume that the following conditions are satisfied.
\(\mathbf{C }_1\).
The LSE \(\widehat{\alpha }_T\) is a weakly consistent estimator of \(\alpha _0\in \mathcal {A}\) in the sense that
$$\begin{aligned} T^{-\frac{1}{2}}d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \text {as}\ T\rightarrow \infty . \end{aligned}$$
\(\mathbf{C }_2\).
There exists a constant \(c_0<\infty \) such that for any \(\alpha _0\in \mathcal {A}\) and \(T>T_0\), where \(c_0\) and \(T_0\) may depend on \(\alpha _0\),
$$\begin{aligned} \Phi _T(\alpha ,\,\alpha _0)\le c_0\Vert d_T(\alpha _0)\left(\alpha -\alpha _0\right)\Vert ^2,\ \alpha \in \mathcal {A}^c. \end{aligned}$$
The fulfillment of the conditions C\(_1\) and C\(_2\) is discussed in more detail in “Appendix A”. We also need three more conditions.
\(\mathbf{C }_3\).

\(f(\lambda ,\,\theta _1)\ne f(\lambda ,\,\theta _2)\) on a set of positive Lebesgue measure whenever \(\theta _1\ne \theta _2\), \(\theta _1,\theta _2\in \Theta ^c\).

\(\mathbf{C }_4\).
The functions \(w(\lambda )\log f(\lambda ,\,\theta )\), \(\dfrac{w(\lambda )}{f(\lambda ,\,\theta )}\) are continuous with respect to \(\theta \in \Theta ^c\) almost everywhere in \(\lambda \in \mathbb {R}\), and

(i)

\(w(\lambda )\left|\log f(\lambda ,\,\theta )\right|\le Z_1(\lambda )\), \(\theta \in \Theta ^c\), almost everywhere in \(\lambda \in \mathbb {R}\), where \(Z_1(\cdot )\in L_1(\mathbb {R})\);

(ii)

\(\sup \limits _{\lambda \in \mathbb {R},\,\theta \in \Theta ^c}\,\dfrac{w(\lambda )}{f(\lambda ,\,\theta )} =c_1<\infty \).

\(\mathbf{C }_5\).
There exists an even positive Lebesgue measurable function \(v(\lambda )\), \(\lambda \in \mathbb {R}\), such that

(i)

\(\dfrac{v(\lambda )}{f(\lambda ,\,\theta )}\) is uniformly continuous in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\);

(ii)

\(\sup \limits _{\lambda \in \mathbb {R}}\,\dfrac{w(\lambda )}{v(\lambda )}<\infty \).

Theorem 1

Under conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_1\)–\(\mathbf{C }_5\), \(\widehat{\theta }_T\ \overset{\hbox {P}}{\longrightarrow }\ \theta _0\), as \(T\rightarrow \infty \).

To prove the theorem we need some additional assertions.

Lemma 1

Under condition \(\mathbf{A }_1\)
$$\begin{aligned} \nu ^*_T=T^{-1}\int \limits _0^T\,\varepsilon ^2(t)dt\ \overset{\hbox {P}}{\longrightarrow }\ B(0),\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$


Proof

For any \(\rho >0\), by the Chebyshev inequality and (7),
$$\begin{aligned} \begin{aligned} \hbox {P}\left\{ \left|\nu ^*_T-B(0)\right|\ge \rho \right\} &\le \rho ^{-2}T^{-2}\int \limits _0^T\int \limits _0^T\,c_4(t,t,s,s)dtds+ \\&\quad +\,2\rho ^{-2}T^{-2}\int \limits _0^T\int \limits _0^T\,B^2(t-s)dtds=I_1+I_2. \end{aligned} \end{aligned}$$
From \(\mathbf{A }_1\) it follows that \(I_2=O(T^{-1})\). Using expression (8) for cumulants of the process \(\varepsilon \) we get
$$\begin{aligned} \begin{aligned} I_1&= d_4\rho ^{-2}T^{-2}\int \limits _0^T\int \limits _0^T\int \limits _{\mathbb {R}}\,\hat{a}^2(t-u)\hat{a}^2(s-u)dudtds \\&=d_4\rho ^{-2}T^{-2}\int \limits _0^T\,\left(\int \limits _{\mathbb {R}}\,\hat{a}^2(t-u)\left(\int \limits _0^T\,\hat{a}^2(s-u)ds\right)du\right)dt \le d_4\rho ^{-2}\left\Vert\hat{a}\right\Vert_2^4T^{-1}, \end{aligned} \end{aligned}$$
where \(\left\Vert\hat{a}\right\Vert_2=\left(\int \limits _{\mathbb {R}}\,\hat{a}^2(u)du\right)^{\frac{1}{2}}\), that is \(I_1=O(T^{-1})\) as well. \(\square \)
Set
$$\begin{aligned} \begin{aligned} \mathrm {F}_T^{(k)}\left(u_1,\,\ldots ,\,u_k\right) =\mathrm {F}_T^{(k)}\left(u_1,\,\ldots ,\,u_{k-1}\right)&=(2\pi )^{-(k-1)}T^{-1} \int \limits _{[0,T]^k}\, e^{\mathrm {i}\sum \limits _{j=1}^kt_j u_j}dt_1\ldots dt_k\\&=(2\pi )^{-(k-1)}T^{-1}\prod \limits _{j=1}^k\, \dfrac{\sin \frac{Tu_j}{2}}{\frac{u_j}{2}}, \end{aligned} \end{aligned}$$
with \(u_k=-\left(u_1+\ldots +u_{k-1}\right)\), \(u_j\in \mathbb {R}\), \(j=\overline{1,k}\).

The functions \(\mathrm {F}_T^{(k)}\left(u_1,\ldots ,u_k\right)\), \(k\ge 3\), are multidimensional analogues of the Fejér kernel, for \(k=2\) we obtain the usual Fejér kernel.
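The defining averaging property of the Fejér kernel can be checked numerically for \(k=2\), where \(\mathrm {F}_T^{(2)}(u)=(2\pi T)^{-1}\bigl (\sin (Tu/2)/(u/2)\bigr )^2\). Taking \(G(u)=e^{-u^2}\) (bounded, continuous at 0, with \(G(0)=1\)) and an illustrative horizon \(T=200\):

```python
import numpy as np

T = 200.0
u = np.arange(-5.0, 5.0, 5e-4)

# sin(T*u/2)/(u/2) = T * sinc(T*u/(2*pi)) with numpy's normalized sinc,
# so F_T(u) = (2*pi*T)^{-1} * (T * sinc)^2 = (T/(2*pi)) * sinc^2.
fejer = (T / (2 * np.pi)) * np.sinc(T * u / (2 * np.pi)) ** 2

G = np.exp(-u**2)  # bounded and continuous at 0, G(0) = 1
approx = np.sum(fejer * G) * (u[1] - u[0])
print(approx)  # approaches G(0) = 1 as T grows
```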

The next statement is based on the results of Bentkus (1972a, b) and Bentkus and Rutkauskas (1973).

Lemma 2

Let the function \(G\left(u_1,\,\ldots ,\,u_k\right)\), \(u_k=-\left(u_1+\ldots +u_{k-1}\right)\), be bounded and continuous at the point \(\left(u_1,\,\ldots ,\,u_{k-1}\right)=(0,\,\ldots ,\,0)\). Then
$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\,\int \limits _{\mathbb {R}^{k-1}}\,\mathrm {F}_T^{(k)}\left(u_1,\,\ldots ,\,u_{k-1}\right) G\left(u_1,\,\ldots ,\,u_k\right)du_1\ldots du_{k-1}=G(0,\,\ldots ,\,0). \end{aligned}$$
We set
$$\begin{aligned} \begin{aligned} g_T(\lambda ,\,\alpha )&=\int \limits _0^T\,e^{-\mathrm {i}\lambda t}g(t,\,\alpha )dt,\quad s_T(\lambda ,\,\alpha )=g_T(\lambda ,\,\alpha _0)-g_T(\lambda ,\,\alpha ),\\ \varepsilon _T(\lambda )&=\int \limits _0^T\,e^{-\mathrm {i}\lambda t}\varepsilon (t)dt,\quad I_T^{\varepsilon }(\lambda )=(2\pi T)^{-1}\left|\varepsilon _T(\lambda )\right|^2, \end{aligned} \end{aligned}$$
and write the residual periodogram in the form
$$\begin{aligned} I_T\left(\lambda ,\,\widehat{\alpha }_T\right)=I_T^{\varepsilon }(\lambda )+(\pi T)^{-1}\hbox {Re}\left\{ \varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right\} +(2\pi T)^{-1}\left|s_T(\lambda ,\,\widehat{\alpha }_T)\right|^2. \end{aligned}$$
Let \(\varphi =\varphi (\lambda ,\,\theta )\), \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\), be a weight function that is even and Lebesgue measurable in \(\lambda \) for each fixed \(\theta \). We have
$$\begin{aligned} J_T(\varphi ,\,\widehat{\alpha }_T)= & {} \int \limits _{\mathbb {R}}\,I_T(\lambda ,\,\widehat{\alpha }_T)\varphi (\lambda ,\,\theta )d\lambda = \int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\varphi (\lambda ,\,\theta )d\lambda \\&+\,(\pi T)^{-1}\int \limits _{\mathbb {R}}\,\hbox {Re}\left\{ \varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right\} \varphi (\lambda ,\,\theta )d\lambda \\&+\,(2\pi T)^{-1} \int \limits _{\mathbb {R}}\,\left|s_T(\lambda ,\,\widehat{\alpha }_T)\right|^2 \varphi (\lambda ,\,\theta )d\lambda \\= & {} J_T^{\varepsilon }(\varphi )+J_T^{(1)}(\varphi )+J_T^{(2)}(\varphi ). \end{aligned}$$
Assume that
$$\begin{aligned} \varphi (\lambda ,\,\theta )\ge 0,\ \sup \limits _{\lambda \in \mathbb {R},\,\theta \in \Theta ^c}\,\varphi (\lambda ,\,\theta )= c(\varphi )<\infty . \end{aligned}$$
By the Plancherel identity and condition \(\mathbf{C }_2\), taking into account conditions \(\mathbf{A }_1, \mathbf{C }_1, \mathbf{C }_2\) and the result of Lemma 1, we obtain
$$\begin{aligned} \sup \limits _{\theta \in \Theta ^c}\,\left|J_T^{(1)}(\varphi )\right|\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
On the other hand, again thanks to \(\mathbf{C }_1, \mathbf{C }_2\),
$$\begin{aligned} \sup \limits _{\theta \in \Theta ^c}\,J_T^{(2)}(\varphi )\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$

Lemma 3

Suppose conditions \(\mathbf{A }_1, \mathbf{A }_2\) are fulfilled and the weight function \(\varphi (\lambda ,\,\theta )\) introduced above satisfies (11). Then, as \(T\rightarrow \infty \),
$$\begin{aligned} J_T^{\varepsilon }(\varphi )\ \overset{\hbox {P}}{\longrightarrow }\ J(\varphi )=\int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0) \varphi (\lambda ,\,\theta )d\lambda ,\ \theta \in \Theta ^c. \end{aligned}$$


Proof

The lemma is in fact an application of Lemma 2 in Anh et al. (2002) and Theorem 1 in Anh et al. (2004) to the linear process (4). It is sufficient to prove
$$\begin{aligned} (1)\ \hbox {E}J_T^\varepsilon (\varphi )\ \longrightarrow \ J(\varphi );\ \ \ (2)\ J_T^\varepsilon (\varphi )-\hbox {E}J_T^\varepsilon (\varphi )\ \overset{\hbox {P}}{\longrightarrow }\ 0. \end{aligned}$$
Omitting the parameters \(\theta _0\), \(\theta \) in some formulas below, we derive
$$\begin{aligned} \begin{aligned} \hbox {E}J_T^{\varepsilon }(\varphi )&=\int \limits _{\mathbb {R}}\,G_2(u)\mathrm {F}_T^{(2)}(u)du,\ \ G_2(u)=\int \limits _{\mathbb {R}}\,f(\lambda +u)\varphi (\lambda )d\lambda ;\\ T\hbox {Var}J_T^\varepsilon (\varphi )&=2\pi \int \limits _{\mathbb {R}^3}\,G_4(u_1,\,u_2,\,u_3) \mathrm {F}_T^{(4)}(u_1,\,u_2,\,u_3) du_1du_2du_3,\\ G_4(u_1,\,u_2,\,u_3)&=2\int \limits _{\mathbb {R}}\,f(\lambda +u_1)f(\lambda -u_3)\varphi (\lambda )\varphi (\lambda +u_1+u_2)d\lambda \\&\quad +\,\int \limits _{\mathbb {R}^2}\,f_4(\lambda +u_1,\,-\lambda +u_2,\,\mu +u_3)\varphi (\lambda ) \varphi (\mu )d\lambda d\mu \\&=2G_4^{(1)}(u_1,\,u_2,\,u_3)+G_4^{(2)}(u_1,\,u_2,\,u_3). \end{aligned} \end{aligned}$$
To apply Lemma 2 we have to show that the functions \(G_2(u)\), \(u\in \mathbb {R}\); \(G_4^{(1)}(\mathrm {u})\), \(G_4^{(2)}(\mathrm {u})\), \(\mathrm {u}=(u_1,\,u_2,\,u_3)\in \mathbb {R}^3\), are bounded and continuous at the origin.
Boundedness of \(G_2\) follows from (11). Thanks to (11)
$$\begin{aligned} \underset{\mathrm {u}\in \mathbb {R}^3}{\sup }\,\left|G_4^{(1)}(\mathrm {u})\right|\le c^2(\varphi )\Vert f\Vert _2^2<\infty ,\ \ \Vert f\Vert _2=\left(\int \limits _{\mathbb {R}}\,f^2(\lambda ,\,\theta _0)d\lambda \right)^{\frac{1}{2}}. \end{aligned}$$
On the other hand, by (9)
$$\begin{aligned} |G_4^{(2)}(u_1,\,u_2,\,u_3)|\le & {} d_4(2\pi )^{-3}\int \limits _{\mathbb {R}}\,\left|a(\lambda +u_1)a(-\lambda +u_2)\right|\varphi (\lambda )d\lambda \\&\cdot \int \limits _{\mathbb {R}}\,\left|a(\mu +u_3)a(-\mu -u_1-u_2-u_3)\right|\varphi (\mu )d\mu \\= & {} d_4\cdot (2\pi )^{-3}\cdot I_3\cdot I_4,\\ I_3\le & {} 2\pi c(\varphi )d_2^{-1}\int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0)d\lambda =2\pi c(\varphi )d_2^{-1}B(0). \end{aligned}$$
Integral \(I_4\) admits the same upper bound. So,
$$\begin{aligned} \underset{\mathrm {u}\in \mathbb {R}^3}{\sup }\,\left|G_4^{(2)}(\mathrm {u})\right|\le (2\pi )^{-1}\gamma _2c^2(\varphi )B^2(0), \end{aligned}$$
where \(\gamma _2=\dfrac{d_4}{d_2^2}>0\) is the excess of the L(1) distribution; hence the functions \(G_2\), \(G_4^{(1)}\), \(G_4^{(2)}\) are bounded. The continuity of these functions at the origin follows from the conditions of Lemma 3 as well. \(\square \)

Corollary 1

If \(\varphi (\lambda ,\,\theta )=\dfrac{w(\lambda )}{f(\lambda ,\,\theta )}\), then under conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_1, \mathbf{C }_2\) and \(\mathbf{C }_4\)
$$\begin{aligned} U_T(\theta ,\,\widehat{\alpha }_T)\ \overset{\hbox {P}}{\longrightarrow }\ U(\theta ) =\int \limits _{\mathbb {R}}\,\left(\log f(\lambda ,\,\theta )+ \dfrac{f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta )}\right)w(\lambda )d\lambda ,\ \theta \in \Theta ^c. \end{aligned}$$
Consider the Whittle contrast function
$$\begin{aligned} K(\theta _0,\,\theta )=U(\theta )-U(\theta _0)=\int \limits _{\mathbb {R}}\, \left(\dfrac{f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta )}-1- \log \dfrac{f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta )}\right) w(\lambda )d\lambda \ge 0, \end{aligned}$$
with \(K(\theta _0,\,\theta )=0\) if and only if \(\theta =\theta _0\) due to \(\mathbf{C }_3\).
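The elementary inequality \(x-1-\log x\ge 0\) for \(x>0\), with equality only at \(x=1\), is what makes K a valid contrast function. A numerical check for the (assumed) Ornstein-Uhlenbeck family \(f(\lambda ,\,\theta )=d_2/\bigl (2\pi (\theta ^2+\lambda ^2)\bigr )\) with weight \(w=\mathbb {I}_{[-10,\,10]}\):

```python
import numpy as np

d2, theta0 = 1.0, 1.0                 # illustrative values
lam = np.linspace(-10.0, 10.0, 20_001)
dlam = lam[1] - lam[0]

def f(th):
    return d2 / (2 * np.pi * (th**2 + lam**2))

def K(th):
    # Riemann sum of (f0/f - 1 - log(f0/f)) over the weight's support.
    r = f(theta0) / f(th)
    return np.sum(r - 1.0 - np.log(r)) * dlam

print(K(1.0), K(0.7), K(1.5))  # zero at theta0, strictly positive elsewhere
```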

Lemma 4

If the conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_1, \mathbf{C }_2, \mathbf{C }_4\) and \(\mathbf{C }_5\) are satisfied, then
$$\begin{aligned} \sup \limits _{\theta \in \Theta ^c}\,\left|U_T(\theta ,\,\widehat{\alpha }_T)- U(\theta )\right|\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$


Proof

Let \(\{\theta _j,\ j=\overline{1,N_{\delta }}\}\) be a \(\delta \)-net of the set \(\Theta ^c\). Then
$$\begin{aligned} \begin{aligned}&\sup \limits _{\theta \in \Theta ^c}\left|U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta )\right|\le \\&\quad \le \sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta } \left|U_T(\theta _1,\,\widehat{\alpha }_T)-U(\theta _1) -(U_T(\theta _2,\,\widehat{\alpha }_T)-U(\theta _2))\right|\\&\qquad +\max \limits _{1\le j\le N_{\delta }} \left|U_T(\theta _j,\,\widehat{\alpha }_T)-U(\theta _j)\right|, \end{aligned} \end{aligned}$$
and for any \(\rho >0\)
$$\begin{aligned} \begin{aligned} \hbox {P}\left\{ \sup \limits _{\theta \in \Theta ^c}\,\left| U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta )\right|\ge \rho \right\} \le P_1+P_2, \end{aligned} \end{aligned}$$
$$\begin{aligned} P_2=\hbox {P}\left\{ \max \limits _{1\le j\le N_{\delta }}\,\left| U_T(\theta _j,\,\widehat{\alpha }_T)-U(\theta _j)\right| \ge \dfrac{\rho }{2}\right\} \ \rightarrow \ 0,\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$
by Corollary 1. On the other hand,
$$\begin{aligned} \begin{aligned} P_1&=\hbox {P}\left\{ \sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta }\, \Bigl |U_T(\theta _1,\,\widehat{\alpha }_T)-U(\theta _1)- \left(U_T(\theta _2,\,\widehat{\alpha }_T)-U(\theta _2)\right) \Bigr |\ge \frac{\rho }{2}\right\} \\&\le \hbox {P}\left\{ \sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta }\, \left|\int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda ) \left(\dfrac{w(\lambda )}{f(\lambda ,\,\theta _1)}- \dfrac{w(\lambda )}{f(\lambda ,\,\theta _2)}\right)d\lambda \right|\right.\\&\quad +\,\sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta }\, \left|\int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0) \left(\dfrac{w(\lambda )}{f(\lambda ,\,\theta _1)} -\dfrac{w(\lambda )}{f(\lambda ,\,\theta _2)}\right)d\lambda \right|\\&\quad +\,2\left.\sup \limits _{\theta \in \Theta ^c} \left|J_T^{(1)}\left(\dfrac{w}{f}\right)\right| +2\sup \limits _{\theta \in \Theta ^c}\,J_T^{(2)}\left(\dfrac{w}{f}\right) \ge \dfrac{\rho }{2}\right\} . \end{aligned} \end{aligned}$$
By the condition \(\mathbf{C }_5\)(i)
$$\begin{aligned} \sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta }\, \left|\int \limits _{\mathbb {R}}\, I_T^{\varepsilon }(\lambda ) \left(\dfrac{w(\lambda )}{f(\lambda ,\,\theta _1)} -\dfrac{w(\lambda )}{f(\lambda ,\,\theta _2)}\right)d\lambda \right| \le \eta (\delta )\int \limits _{\mathbb {R}}\, I_T^{\varepsilon }(\lambda )\dfrac{w(\lambda )}{v(\lambda )}d\lambda , \end{aligned}$$
$$\begin{aligned} \eta (\delta )=\sup \limits _{\lambda \in \mathbb {R},\, \Vert \theta _1-\theta _2\Vert \le \delta }\, \left|\dfrac{v(\lambda )}{f(\lambda ,\,\theta _1)} -\dfrac{v(\lambda )}{f(\lambda ,\,\theta _2)}\right|\ \rightarrow \ 0,\ \delta \rightarrow 0. \end{aligned}$$
Since by Lemma 3 and the condition \(\mathbf{C }_5\)(ii)
$$\begin{aligned} \int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\dfrac{w(\lambda )}{v(\lambda )} d\lambda \overset{\hbox {P}}{\longrightarrow }\int \limits _{\mathbb {R}}\,f(\lambda , \theta _0)\dfrac{w(\lambda )}{v(\lambda )} d\lambda ,\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$
and the second term under the probability sign in (14) can be made arbitrarily small by choosing \(\delta \), it follows that \(P_1\rightarrow 0\), as \(T\rightarrow \infty \), since the third and the fourth terms converge to zero in probability thanks to (12) and (13) with \(\varphi =\dfrac{w}{f}\). \(\square \)

Proof of Theorem 1

By Definition 2 for any \(\rho >0\)
$$\begin{aligned} \begin{aligned}&\hbox {P}\left\{ \left\Vert\widehat{\theta }_T-\theta _0\right\Vert\ge \rho \right\} =\hbox {P}\left\{ \left\Vert\widehat{\theta }_T-\theta _0\right\Vert\ge \rho ;\ U_T(\widehat{\theta }_T,\,\widehat{\alpha }_T)\le U_T(\theta _0,\,\widehat{\alpha }_T)\right\} \\&\quad \le \hbox {P}\left\{ \inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho }\, \left(U_T(\theta ,\,\widehat{\alpha }_T) -U_T(\theta _0,\,\widehat{\alpha }_T)\right)\le 0\right\} \\&\quad = \hbox {P}\left\{ \inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho }\, \Bigl [U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta ) -(U_T(\theta _0,\,\widehat{\alpha }_T)-U(\theta _0)) +K(\theta _0,\theta )\Bigr ]\le 0\right\} \\&\quad \le \hbox {P}\left\{ \inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho }\, \Bigl [U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta ) -(U_T(\theta _0,\,\widehat{\alpha }_T)-U(\theta _0))\Bigr ] +\inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho } K(\theta _0,\theta )\le 0\right\} \\&\quad \le \hbox {P}\left\{ \sup \limits _{\theta \in \Theta ^c}\, \left|U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta )\right| +\left|U_T(\theta _0,\,\widehat{\alpha }_T)-U(\theta _0)\right| \ge \inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho }\, K(\theta _0,\,\theta )\right\} \ \rightarrow \ 0, \end{aligned} \end{aligned}$$
when \(T\rightarrow \infty \) due to Lemma 4 and the property of the contrast function K. \(\square \)

4 Asymptotic normality of minimum contrast estimator

The first three conditions relate to properties of the regression function \(g(t,\,\alpha )\) and the LSE \(\widehat{\alpha }_T\). They are commented on in “Appendix B”.
\({\mathbf{N }}_1\).

The normed LSE \(d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\) is asymptotically, as \(T\rightarrow \infty \), normal \(N(0,\,\Sigma _{_{LSE}})\), \(\Sigma _{_{LSE}}=\left(\Sigma _{_{LSE}}^{ij}\right)_{i,j=1}^q\).

Set
$$\begin{aligned} g'(t,\,\alpha )=\dfrac{\partial }{\partial t}g(t,\,\alpha );\ \ \ \Phi '_T(\alpha _1,\,\alpha _2) =\int \limits _0^T\,\left(g'(t,\,\alpha _1)-g'(t,\,\alpha _2)\right)^2dt,\ \alpha _1,\,\alpha _2\in \mathcal {A}^c. \end{aligned}$$
\(\mathbf N _2\).
The function \(g(t,\,\alpha )\) is continuously differentiable with respect to \(t\ge 0\) for any \(\alpha \in \mathcal {A}^c\), and for any \(\alpha _0\in \mathcal {A}\) there exist \(T_0\) and a constant \(c_0'\) (both possibly depending on \(\alpha _0\)) such that for \(T>T_0\)
$$\begin{aligned} \Phi _T'(\alpha ,\,\alpha _0) \le c_0'\Bigl \Vert d_T(\alpha _0)\left(\alpha -\alpha _0\right)\Bigr \Vert ^2,\ \alpha \in \mathcal {A}^c. \end{aligned}$$
$$\begin{aligned} g_{il}(t,\,\alpha )= & {} \dfrac{\partial ^2}{\partial \alpha _i \partial \alpha _l}g(t,\,\alpha ),\ \ d_{il,T}^2(\alpha )=\int \limits _0^T\,g_{il}^2(t,\,\alpha )dt,\ \ i,l=\overline{1,q},\\ v(r)= & {} \left\{ x\in \mathbb {R}^q\,:\,\Vert x\Vert <r\right\} ,\ r>0. \end{aligned}$$
\(\mathbf N _3\).
The function \(g(t,\,\alpha )\) is twice continuously differentiable with respect to \(\alpha \in \mathcal {A}^c\) for any \(t\ge 0\), and for any \(R\ge 0\) and all sufficiently large T (\(T>T_0(R)\))

(i) \(d_{iT}^{-1}(\alpha _0)\sup \limits _{t\in [0,T],\,u\in v^c(R)}\, \left|g_i\left(t,\,\alpha _0+d_T^{-1}(\alpha _0)u\right)\right|\le c^i(R)T^{-\frac{1}{2}}\), \(i=\overline{1,q}\);


(ii) \(d_{il,T}^{-1}(\alpha _0)\sup \limits _{t\in [0,T],\,u\in v^c(R)}\, \left|g_{il}\left(t,\,\alpha _0+d_T^{-1}(\alpha _0)u\right)\right|\le c^{il}(R)T^{-\frac{1}{2}}\), \(i,l=\overline{1,q}\);


(iii) \(d_{iT}^{-1}(\alpha _0)d_{lT}^{-1}(\alpha _0) d_{il,T}(\alpha _0)\le \tilde{c}^{il}T^{-\frac{1}{2}}\), \(i,l=\overline{1,q}\),

with positive constants \(c^i\), \(c^{il}\), \(\tilde{c}^{il}\), possibly, depending on \(\alpha _0\).

We assume also that the function \(f(\lambda ,\,\theta )\) is twice differentiable with respect to \(\theta \in \Theta ^c\) for any \(\lambda \in \mathbb {R}\). Write
$$\begin{aligned} f_i(\lambda ,\,\theta )=\dfrac{\partial }{\partial \theta _i} f(\lambda ,\,\theta ),\ \ \ f_{ij}(\lambda ,\,\theta )=\dfrac{\partial ^2}{\partial \theta _i \partial \theta _j}f(\lambda ,\,\theta ), \end{aligned}$$
and introduce the following conditions.
\(\mathbf N _4\).

(i) For any \(\theta \in \Theta ^c\) the functions \(\varphi _i(\lambda )=\dfrac{f_i(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )\), \(\lambda \in \mathbb {R}\), \(i=\overline{1,m}\), possess the following properties:

1) \(\varphi _i\in L_\infty (\mathbb {R})\cap L_1(\mathbb {R})\);


2) \(\overset{+\infty }{\underset{-\infty }{\hbox {Var}}}\,\varphi _i<\infty \);


3) \(\underset{\eta \rightarrow 1}{\lim }\,\underset{\lambda \in \mathbb {R}}{\sup }\, \left|\varphi _i(\eta \lambda )-\varphi _i(\lambda )\right|=0\);


4) \(\varphi _i\) are differentiable and \(\varphi '_i\) are uniformly continuous on \(\mathbb {R}\).


(ii) \(\dfrac{|f_i(\lambda ,\,\theta )|}{f(\lambda ,\,\theta )}w(\lambda ) \le Z_2(\lambda )\), \(\theta \in \Theta \), \(i=\overline{1,m}\), almost everywhere in \(\lambda \in \mathbb {R}\), and \(Z_2(\cdot )\in L_1(\mathbb {R})\).

(iii) The functions \(\dfrac{f_i(\lambda ,\,\theta ) f_j(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )\), \(\dfrac{f_{ij}(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )\) are continuous with respect to \(\theta \in \Theta ^c\) for each \(\lambda \in \mathbb {R}\) and
$$\begin{aligned} \dfrac{f_i^2(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda ) +\dfrac{|f_{ij}(\lambda ,\,\theta )|}{f(\lambda ,\,\theta )}w(\lambda )\le a_{ij}(\lambda ),\ \lambda \in \mathbb {R},\ \theta \in \Theta ^c, \end{aligned}$$
where \(a_{ij}(\cdot )\in L_1(\mathbb {R})\), \(i,j=\overline{1,m}\).
\(\mathbf N _5\).

(i) \(\dfrac{f_i^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}w(\lambda )\), \(\dfrac{f_{ij}(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )\), \(i,j=\overline{1,m}\), are bounded functions in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\);


(ii) There exists an even positive Lebesgue measurable function \(v(\lambda ),\ \lambda \in \mathbb {R}\), such that the functions \(\dfrac{f_i(\lambda ,\,\theta ) f_j(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )\), \(\dfrac{f_{ij}(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )\), \(i,j=\overline{1,m}\), are uniformly continuous in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\);


(iii) \(\underset{\lambda \in \mathbb {R}}{\sup }\,\dfrac{w(\lambda )}{v(\lambda )}<\infty \).

Conditions \(\mathbf N _5\)(iii) and \(\mathbf C _5\)(ii) look the same; however, the function v in them must satisfy the different conditions \(\mathbf N _5\)(ii) and \(\mathbf C _5\)(i), respectively, so that, generally speaking, the functions v in these two conditions may differ.

The next three matrices appear in the formulation of Theorem 2:
$$\begin{aligned} \begin{aligned} W_1(\theta )&=\int \limits _{\mathbb {R}}\,\nabla _\theta \log f(\lambda ,\,\theta )\nabla _\theta '\log f(\lambda ,\,\theta )w(\lambda )d\lambda ,\\ W_2(\theta )&=4\pi \int \limits _{\mathbb {R}}\,\nabla _\theta \log f(\lambda ,\,\theta )\nabla _\theta '\log f(\lambda ,\,\theta ) w^2(\lambda )d\lambda ,\\ V(\theta )&=\gamma _2\int \limits _{\mathbb {R}}\,\nabla _\theta \log f(\lambda ,\,\theta )w(\lambda )d\lambda \int \limits _{\mathbb {R}}\,\nabla _\theta '\log f(\lambda ,\theta )w(\lambda )d\lambda , \end{aligned} \end{aligned}$$
where \(\gamma _2=\dfrac{d_4}{d_2^2}>0\) is the excess kurtosis of the random variable L(1), \(\nabla _\theta \) is the column gradient vector, and \(\nabla _\theta '\) is the row gradient vector.

\(\mathbf N _6\). Matrices \(W_1(\theta )\) and \(W_2(\theta )\) are positive definite for \(\theta \in \Theta \).

Theorem 2

Under conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_1\)–\(\mathbf{C }_5\) and \(\mathbf{N }_1\)–\(\mathbf{N }_6\) the normed MCE \(T^{\frac{1}{2}}(\widehat{\theta }_T-\theta _0)\) is asymptotically, as \(T\rightarrow \infty \), normal with zero mean and covariance matrix
$$\begin{aligned} W(\theta )=W_1^{-1}(\theta _0)\left(W_2(\theta _0)+V(\theta _0)\right) W_1^{-1}(\theta _0). \end{aligned}$$
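Since \(W_1\), \(W_2\) and \(V\) are explicit integrals, the sandwich covariance of Theorem 2 can be evaluated numerically. The sketch below is purely illustrative and not part of the paper: it assumes a hypothetical scalar example with Ornstein–Uhlenbeck-type spectral density \(f(\lambda ,\theta )=\theta /(\pi (\theta ^2+\lambda ^2))\), Gaussian weight \(w(\lambda )=e^{-\lambda ^2/2}\) and excess kurtosis \(\gamma _2=1\); all three choices are our own assumptions.

```python
# Illustrative numerical evaluation (our own example, not from the paper) of the
# scalar form of the asymptotic covariance W = W1^{-1} (W2 + V) W1^{-1} of Theorem 2.
# Assumed ingredients: f(lam, th) = th / (pi (th^2 + lam^2)), w(lam) = exp(-lam^2/2),
# excess kurtosis gamma2 = 1.
import math

def simpson(g, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def dlogf(lam, th):
    # d/dth log f(lam, th) = 1/th - 2 th / (th^2 + lam^2)
    return 1.0 / th - 2.0 * th / (th**2 + lam**2)

def w(lam):
    return math.exp(-lam**2 / 2)

def whittle_covariance(th0, gamma2=1.0, cut=12.0):
    # The Gaussian weight makes all integrands negligible outside [-cut, cut].
    W1 = simpson(lambda l: dlogf(l, th0)**2 * w(l), -cut, cut)
    W2 = simpson(lambda l: 4 * math.pi * dlogf(l, th0)**2 * w(l)**2, -cut, cut)
    J = simpson(lambda l: dlogf(l, th0) * w(l), -cut, cut)
    V = gamma2 * J**2
    return (W2 + V) / W1**2  # scalar analogue of W1^{-1} (W2 + V) W1^{-1}

print(whittle_covariance(1.0))  # a finite positive asymptotic variance
```

For a vector parameter \(\theta \) one would evaluate the same integrals entrywise and invert the matrix \(W_1(\theta _0)\).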

The proof of the theorem is preceded by several lemmas. The next statement is Theorem 5.1 of Avram et al. (2010), formulated in a form convenient for us.

Lemma 5

Let the stochastic process \(\varepsilon \) satisfy \(\mathbf A _1\), \(\mathbf A _2\), let its spectral density \(f\in L_p(\mathbb {R})\), and let a function \(b\in L_q(\mathbb {R})\bigcap L_1(\mathbb {R})\), where \(\dfrac{1}{p}+\dfrac{1}{q}=\dfrac{1}{2}\). Let
$$\begin{aligned} \hat{b}(t)=\int \limits _{\mathbb {R}}\,e^{i\lambda t}b(\lambda )d\lambda \end{aligned}$$
and
$$\begin{aligned} Q_T=\int \limits _0^T\int \limits _0^T\, \left(\varepsilon (t)\varepsilon (s) -B(t-s)\right)\hat{b}(t-s)dtds. \end{aligned}$$
Then the central limit theorem holds:
$$\begin{aligned} T^{-\frac{1}{2}}Q_T\ \Rightarrow \ N(0,\,\sigma ^2),\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$
where “\(\Rightarrow \)” means convergence in distribution,
$$\begin{aligned} \sigma ^2=16\pi ^3\int \limits _{\mathbb {R}}\,b^2(\lambda ) f^2(\lambda )d\lambda +\gamma _2\left(2\pi \int \limits _{\mathbb {R}}\, b(\lambda )f(\lambda )d\lambda \right)^2, \end{aligned}$$
where \(\gamma _2=\dfrac{d_4}{d_2^2}>0\) is the excess kurtosis of the random variable L(1). In particular, the statement is true for \(p=2\) and \(q=\infty \).
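The variance formula (18) can be checked on a concrete example. Suppose, purely for illustration (our own choices, not from the paper), that \(B(t)=e^{-|t|}\), so \(f(\lambda )=1/(\pi (1+\lambda ^2))\), and take \(b=f\); then both integrals in (18) have closed forms and \(\sigma ^2=5+\gamma _2\). The sketch below verifies this numerically.

```python
# Numerical check (our illustrative example) of the variance formula (18):
#   sigma^2 = 16 pi^3 \int b^2 f^2 dlam + gamma2 (2 pi \int b f dlam)^2,
# with B(t) = exp(-|t|), hence f(lam) = 1/(pi (1 + lam^2)), and b = f.
# The closed forms give sigma^2 = 5 + gamma2.
import math

def simpson(g, a, b, n=100000):
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def f(lam):
    return 1.0 / (math.pi * (1.0 + lam**2))

def sigma2(gamma2, cut=100.0):
    # integrands decay like lam^{-8} and lam^{-4}, so truncation at |lam| = cut is safe
    i1 = simpson(lambda l: f(l)**4, -cut, cut)   # b^2 f^2 with b = f
    i2 = simpson(lambda l: f(l)**2, -cut, cut)   # b f with b = f
    return 16 * math.pi**3 * i1 + gamma2 * (2 * math.pi * i2)**2

print(sigma2(1.0))  # close to 5 + 1 = 6
```

The Gaussian case corresponds to \(\gamma _2=0\), where only the first term of (18) survives.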

An alternative form of Lemma 5 is given in Bai et al. (2016). We formulate their Theorem 2.1 in a form convenient for us.

Lemma 6

Let the stochastic process \(\varepsilon \) be such that \(\hbox {E}L(1)=0\), \(\hbox {E}L^4(1)<\infty \), and let \(Q_T\) be as in (17). Assume that \(\hat{a}\in L_p(\mathbb {R})\cap L_2(\mathbb {R})\), \(\hat{b}\) is of the form (16) with an even function \(b\in L_1(\mathbb {R})\), and \(\hat{b}\in L_q(\mathbb {R})\) with
$$\begin{aligned} 1\le p,\,q\le 2,\ \ \dfrac{2}{p}+\dfrac{1}{q}\ge \dfrac{5}{2}, \end{aligned}$$
Then
$$\begin{aligned} T^{-\frac{1}{2}}Q_T\ \Rightarrow \ N(0,\,\sigma ^2),\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$
where \(\sigma ^2\) is given in (18).

Remark 3

It is important to note that the conditions of Lemma 5 are given in the frequency domain, while Lemma 6 employs time-domain conditions.

Theorems similar to Lemmas 5 and 6 can be found in the paper by Giraitis et al. (2017), where the case of martingale differences was considered. An overview of analogous results for different types of processes is given in the paper by Ginovyan et al. (2014).

Introduce
$$\begin{aligned} \Delta _T(\varphi )=T^{-\frac{1}{2}}\int \limits _{\mathbb {R}}\, \varepsilon _T(\lambda )\overline{s_T(\lambda ,\,\widehat{\alpha }_T)} \varphi (\lambda )d\lambda . \end{aligned}$$

Lemma 7

Suppose the conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_2, \mathbf{N }_1\)–\(\mathbf{N }_3\) are fulfilled, \(\varphi (\lambda )\), \(\lambda \in \mathbb {R}\), is a bounded differentiable function satisfying relation 3) of condition \(\mathbf{N }_4\)(i), and, moreover, the derivative \(\varphi '(\lambda )\), \(\lambda \in \mathbb {R}\), is uniformly continuous on \(\mathbb {R}\). Then
$$\begin{aligned} \Delta _T(\varphi )\overset{\hbox {P}}{\longrightarrow }0\ \text {as}\ T\rightarrow \infty . \end{aligned}$$


Proof

Let \(B_\sigma \) be the set of all bounded entire functions on \(\mathbb {R}\) of exponential type \(0\le \sigma <\infty \) (see “Appendix C”), and let \(\delta >0\) be an arbitrarily small number. Then there exists a function \(\varphi _\sigma \in B_\sigma \), \(\sigma =\sigma (\delta )\), such that
$$\begin{aligned} \sup \limits _{\lambda \in \mathbb {R}}\, |\varphi (\lambda )-\varphi _\sigma (\lambda )|<\delta . \end{aligned}$$
Let \(T_n(\varphi _\sigma ;\,\lambda )=\sum \limits _{j=-n}^n\, c_j^{(n)}e^{\mathrm {i}j\frac{\sigma }{n}\lambda },\ n\ge 1\), be a sequence of the Levitan polynomials that corresponds to \(\varphi _\sigma \). For any \(\Lambda >0\) there exists \(n_0=n_0(\delta ,\,\Lambda )\) such that for \(n>n_0\)
$$\begin{aligned} \sup \limits _{\lambda \in [-\Lambda ,\Lambda ]}\, |\varphi _\sigma (\lambda )-T_n(\varphi _\sigma ;\,\lambda )|\le \delta . \end{aligned}$$
Write
$$\begin{aligned} \Delta _T(\varphi )= \Delta _T(\varphi -\varphi _\sigma ) +\Delta _T(\varphi _\sigma -T_n)+\Delta _T(T_n). \end{aligned}$$
So, under condition \(\mathbf{C }_2\), for any \(\rho >0\) the probability \(P_4\rightarrow 0\), as \(T\rightarrow \infty \), and the probability \(P_3\), under condition \(\mathbf{N }_1\), can for sufficiently large T (we will write \(T>T_0\)) be made less than a preassigned number by choosing \(\delta >0\) for a fixed \(\rho >0\).
Since the function \(\varphi _\sigma \in B_\sigma \) and the corresponding sequence of Levitan polynomials \(T_n\) are bounded by the same constant, we obtain
$$\begin{aligned} |\Delta _T(\varphi _\sigma -T_n)|\le & {} \delta T^{-\frac{1}{2}} \int \limits _{-\Lambda }^{\Lambda }\, \left|\varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right|d\lambda \\&+\,2c(\varphi _\sigma )T^{-\frac{1}{2}} \int \limits _{\mathbb {R}\backslash [-\Lambda ,\Lambda ]}\, \left|\varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right|d\lambda =D_1+D_2. \end{aligned}$$
The integral in the term \(D_1\) can be majorized by an integral over \(\mathbb {R}\) and bounded as earlier. We have further
$$\begin{aligned} \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}=(\mathrm {i}\lambda )^{-1} \left[e^{\mathrm {i}\lambda T}(g(T,\,\alpha _0)-g(T,\,\widehat{\alpha }_T)) -(g(0,\,\alpha _0)-g(0,\,\widehat{\alpha }_T)) -\overline{s_T'(\lambda ,\,\widehat{\alpha }_T)}\right], \end{aligned}$$
where \(\overline{s_T'(\lambda ,\,\widehat{\alpha }_T)} =\int \limits _0^T\, e^{-\mathrm {i}\lambda t}(g'(t,\,\alpha _0)-g'(t,\,\widehat{\alpha }_T))dt\).
Under the conditions of the Lemma, obviously,
$$\begin{aligned} g(T,\,\widehat{\alpha }_T)- g(T,\,\alpha _0)=\sum \limits _{i=1}^q\,g_i(T,\,\alpha ^*_T) \left(\widehat{\alpha }_{iT}-\alpha _{i0}\right), \end{aligned}$$
where \(\alpha ^*_T=\alpha _0+\eta \left(\widehat{\alpha }_T-\alpha _0\right)\), \(\eta \in (0,\,1)\), \(d_T(\alpha _0)\left(\alpha ^*_T-\alpha _0\right)= \eta d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\). By condition \(\mathbf N _3\)(i), for any \(\rho >0\), \(i=\overline{1,q}\), and \(R\ge 0\)
$$\begin{aligned} \begin{aligned} P_5&\le \hbox {P}\left\{ \left(d_{iT}^{-1}(\alpha _0)\sup \limits _{t\in [0,T],\,\Vert u\Vert \le R}\, \left|g_i\left(t,\,\alpha _0+d_T^{-1}(\alpha _0)u\right)\right|\right)\cdot \left(d_{iT}(\alpha _0)\left|\widehat{\alpha }_{iT}-\alpha _{i0}\right|\right)\ge \rho \right\} \\&\le \hbox {P}\left\{ T^{-\frac{1}{2}}d_{iT}(\alpha _0)\left|\widehat{\alpha }_{iT}-\alpha _{i0}\right|\ge \frac{\rho }{c^i(R)}\right\} \ \rightarrow \ 0,\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned} \end{aligned}$$
according to \(\mathbf{N }_1\) (or \(\mathbf{C }_1\)). On the other hand, by condition \(\mathbf{N }_1\) the value R can be chosen so that for \(T>T_0\) the probability \(P_6\) becomes less than a preassigned number.
Hence
$$\begin{aligned} g(T,\,\widehat{\alpha }_T)- g(T,\,\alpha _0)\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$
and, similarly, \(g(0,\,\widehat{\alpha }_T)- g(0,\,\alpha _0)\ \overset{\hbox {P}}{\longrightarrow }\ 0\), as \(T\rightarrow \infty \).
Moreover, for any \(\rho >0\) the second probability is equal to zero if \(\Lambda >\frac{R}{\rho }\).

Thus, for any fixed \(\rho >0\), similarly to the probability \(P_3\), the probability \(P_7=\hbox {P}\{D_2\ge \rho \}\) can, for \(T>T_0\), be made less than a preassigned number by the choice of the value \(\Lambda \).

$$\begin{aligned} \Delta _T(T_n)= & {} T^{-\frac{1}{2}}\sum \limits _{j=-n}^n\,c_j^{(n)} \int \limits _{\mathbb {R}}\,\varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)} e^{\mathrm {i}j\frac{\sigma }{n}\lambda }d\lambda ,\\ \overline{s_T(\lambda ,\,\widehat{\alpha }_T)} e^{\mathrm {i}j\frac{\sigma }{n}\lambda }= & {} \int \limits _{\frac{j\sigma }{n}}^{T+\frac{j\sigma }{n}}\, e^{\mathrm {i}\lambda t}\left(g\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right) -g\left(t-j\dfrac{\sigma }{n},\,\widehat{\alpha }_T\right)\right)dt,\ j=\overline{-n,n}. \end{aligned}$$
It means that
$$\begin{aligned} \begin{aligned} \Delta _T(T_n)&=2\pi \sum \limits _{j=1}^n\,c_j^{(n)}T^{-\frac{1}{2}} \int \limits _{\frac{j\sigma }{n}}^T\,\varepsilon (t) \left(g\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right) -g\left(t-j\dfrac{\sigma }{n},\,\widehat{\alpha }_T\right)\right)dt\\&\quad +\,2\pi \sum \limits _{j=-n}^0\,c_j^{(n)}T^{-\frac{1}{2}} \int \limits _0^{T+\frac{j\sigma }{n}}\,\varepsilon (t) \left(g\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right) -g\left(t-j\dfrac{\sigma }{n},\,\widehat{\alpha }_T\right)\right)dt. \end{aligned} \end{aligned}$$
For \(j>0\) consider the value
$$\begin{aligned} \begin{aligned}&T^{-\frac{1}{2}}\int \limits _{\frac{j\sigma }{n}}^T\,\varepsilon (t) \left(g\left(t-j\dfrac{\sigma }{n},\,\widehat{\alpha }_T\right)- g\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right)\right)dt\\&\quad =\sum \limits _{i=1}^q\,\left(T^{-\frac{1}{2}}d_{iT}^{-1}(\alpha _0)\int \limits _{\frac{j\sigma }{n}}^T\, \varepsilon (t)g_i\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right)dt\right) d_{iT}(\alpha _0)(\widehat{\alpha }_{iT} - \alpha _{i0})\\&\qquad +\dfrac{1}{2}\sum \limits _{i,k=1}^q\,\left(T^{-\frac{1}{2}} \int \limits _{\frac{j\sigma }{n}}^T\,\varepsilon (t) g_{ik}\left(t-j\dfrac{\sigma }{n},\,\alpha _T^{*}\right)dt\right) (\widehat{\alpha }_{iT}-\alpha _{i0}) \left(\widehat{\alpha }_{kT}-\alpha _{k0}\right)\\&\quad =S_{1T}+\frac{1}{2}S_{2T}, \end{aligned} \end{aligned}$$
where \(\alpha _T^{*}=\alpha _0+\bar{\eta }\left(\widehat{\alpha }_T-\alpha _0\right)\), \(\bar{\eta }\in (0,\,1)\).
Note that for \(i=\overline{1,q}\)
$$\begin{aligned} d_{iT}(\alpha _0)\left(\widehat{\alpha }_{iT}-\alpha _{i0}\right)\Rightarrow N(0,\,\Sigma _{_{LSE}}^{ii}),\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$
by the condition \(\mathbf{N }_1\). Moreover,
$$\begin{aligned} \begin{aligned}&\hbox {E}\left(T^{-\frac{1}{2}}d_{iT}^{-1}(\alpha _0)\int \limits _{\frac{j\sigma }{n}}^T\, \varepsilon (t)g_i\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right)dt\right)^2\\&\quad =T^{-1}d_{iT}^{-2}(\alpha _0)\int \limits _{\frac{j\sigma }{n}}^T\int \limits _{\frac{j\sigma }{n}}^T\,B(t-s) g_i\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right)g_i\left(s-j\dfrac{\sigma }{n},\,\alpha _0\right)dtds\\&\quad \le \left(T^{-2}\int \limits _0^T\int \limits _0^T\,B^2(t-s)dtds\right)^{\frac{1}{2}}=O\left(T^{-\frac{1}{2}}\right), \end{aligned} \end{aligned}$$
since
$$\begin{aligned} T^{-1}\int \limits _0^T\int \limits _0^T\,B^2(t-s)dtds\ \rightarrow \ 2\pi \Vert f\Vert _2^2,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
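This Parseval-type limit is easy to check numerically. For the illustrative choice \(B(t)=e^{-|t|}\) (our own example, not from the paper), \(f(\lambda )=1/(\pi (1+\lambda ^2))\) and both sides of the limit equal 1:

```python
# Numerical check (our illustrative example) of the limit
#   T^{-1} \int_0^T \int_0^T B^2(t-s) dt ds  ->  2 pi ||f||_2^2,   T -> infinity,
# for B(t) = exp(-|t|), f(lam) = 1/(pi (1 + lam^2)); both sides then equal 1.
import math

def simpson(g, a, b, n=20000):
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def lhs(T):
    # reduce the double integral: T^{-1} int int B^2(t-s) = int_{-T}^{T} (1 - |u|/T) B^2(u) du,
    # then use evenness of the integrand
    return 2 * simpson(lambda u: (1 - u / T) * math.exp(-2 * u), 0.0, T)

def rhs(cut=100.0):
    f2 = simpson(lambda l: (1.0 / (math.pi * (1 + l**2)))**2, -cut, cut)
    return 2 * math.pi * f2

print(lhs(50.0), rhs())  # lhs approaches rhs = 1 as T grows
```

In this example the gap between \(\hbox {lhs}(T)\) and the limit is of order \(T^{-1}\), consistent with the \(O(T^{-\frac{1}{2}})\) bound above.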
This means that the sum \(S_{1T}\overset{\hbox {P}}{\longrightarrow }0\), as \(T\rightarrow \infty \).
For the general term \(S_{2T}^{ik}\) of the sum \(S_{2T}\) and any \(\rho >0\), \(R>0\), using assumptions \(\mathbf{N }_3\)(ii) and \(\mathbf{N }_3\)(iii), we get, as in the estimation of the probability \(P_5\),
$$\begin{aligned} \begin{aligned} \left|S_{2T}^{ik}\right|&\le \left(T^{-\frac{1}{2}}\int \limits _{\frac{j\sigma }{n}}^T\, |\varepsilon (t)|dt\right)\cdot \left(d_{ik,T}^{-1}(\alpha _0)\sup \limits _{t\in [0,T],\,u\in v^c(R)}\, \left|g_{ik}\left(t,\,\alpha _0+d_T^{-1}(\alpha _0)u\right)\right|\right)\\&\quad \cdot \Bigl (d_{iT}^{-1}(\alpha _0)d_{kT}^{-1}(\alpha _0) d_{ik,T}(\alpha _0)\Bigr )\cdot \left|d_{iT}(\alpha _0)(\widehat{\alpha }_{iT}-\alpha _{i0})\right|\cdot \left|d_{kT}(\alpha _0)(\widehat{\alpha }_{kT}-\alpha _{k0})\right|\\&\le c^{ik}(R)\tilde{c}^{ik}T^{-\frac{3}{2}}\int \limits _0^T\, |\varepsilon (t)|dt\cdot \left|d_{iT}(\alpha _0)(\widehat{\alpha }_{iT}-\alpha _{i0})\right|\cdot \left|d_{kT}(\alpha _0)(\widehat{\alpha }_{kT}-\alpha _{k0})\right|. \end{aligned} \end{aligned}$$
By Lemma 1
$$\begin{aligned} T^{-\frac{3}{2}}\int \limits _0^T\, |\varepsilon (t)|dt\le \frac{1}{2}T^{-\frac{1}{2}}+\frac{1}{2} T^{-\frac{3}{2}}\int \limits _0^T\, \varepsilon ^2(t)dt\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
So, by condition \(\mathbf{N }_1\), \(P_8\rightarrow 0\), as \(T\rightarrow \infty \), that is, \(S_{2T}\ \overset{\hbox {P}}{\longrightarrow }\ 0\), as \(T\rightarrow \infty \). For \(j\le 0\) the reasoning is similar, and
$$\begin{aligned} \Delta _T(T_n)\overset{\hbox {P}}{\longrightarrow }0,\ T\rightarrow \infty . \end{aligned}$$
\(\square \)

Lemma 8

Let the function \(\varphi (\lambda ,\,\theta )w(\lambda )\) be continuous in \(\theta \in \Theta ^c\) for each fixed \(\lambda \in \mathbb {R}\) with
$$\begin{aligned} |\varphi (\lambda ,\,\theta )|\le \varphi (\lambda ),\ \theta \in \Theta ^c,\ \text {and}\ \varphi (\cdot )w(\cdot )\in L_1(\mathbb {R}). \end{aligned}$$
If \(\theta _T^{*}\overset{\hbox {P}}{\longrightarrow }\theta _0\), then
$$\begin{aligned} I\left(\theta _T^{*}\right)=\int \limits _{\mathbb {R}}\, \varphi \left(\lambda ,\,\theta _T^{*}\right)w(\lambda )d\lambda \ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\, \varphi (\lambda ,\,\theta _0)w(\lambda )d\lambda =I(\theta _0). \end{aligned}$$


Proof

By the Lebesgue dominated convergence theorem the integral \(I(\theta )\), \(\theta \in \Theta ^c\), is a continuous function. The further argument is standard. For any \(\rho >0\) and \(\varepsilon =\dfrac{\rho }{2}\) we find \(\delta >0\) such that \(|I(\theta )-I(\theta _0)|<\varepsilon \) whenever \(\Vert \theta -\theta _0\Vert <\delta \). Then
$$\begin{aligned} \hbox {P}\left\{ |I(\theta _T^{*})-I(\theta _0)|\ge \rho \right\} \le P_9+P_{10}, \end{aligned}$$
$$\begin{aligned} P_9=\hbox {P}\left\{ |I(\theta _T^{*})-I(\theta _0)|\ge \dfrac{\rho }{2},\ \Vert \theta _T^{*}-\theta _0\Vert <\delta \right\} =0, \end{aligned}$$
due to the choice of \(\varepsilon \), and
$$\begin{aligned} P_{10}=\hbox {P}\left\{ |I(\theta _T^{*})-I(\theta _0)|\ge \dfrac{\rho }{2},\ \Vert \theta _T^{*}-\theta _0\Vert \ge \delta \right\} \ \rightarrow \ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
\(\square \)

Lemma 9

If the conditions \(\mathbf{A }_1, \mathbf{C }_2\) are satisfied and \(\sup \limits _{\lambda \in \mathbb {R},\,\theta \in \Theta ^c}\, |\varphi (\lambda ,\,\theta )|=c(\varphi )<\infty \), then
$$\begin{aligned} \begin{aligned} T^{-1}\int \limits _{\mathbb {R}}\,\varphi (\lambda ,\,\theta _T^{*}) \varepsilon _T(\lambda )\overline{s_T(\lambda ,\,\widehat{\alpha }_T)} d\lambda \&\overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty ,\\ T^{-1}\int \limits _{\mathbb {R}}\,\varphi (\lambda ,\,\theta _T^{*}) |s_T(\lambda ,\,\widehat{\alpha }_T)|^2d\lambda \&\overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned} \end{aligned}$$


Proof

These relations are similar to (12) and (13), and can be obtained in the same way. \(\square \)

Lemma 10

Let, under conditions \(\mathbf{A }_1, \mathbf{A }_2\), there exist an even positive Lebesgue measurable function \(v(\lambda )\), \(\lambda \in \mathbb {R}\), and a function \(\varphi (\lambda ,\,\theta )\), \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\), even and Lebesgue measurable in \(\lambda \) for any fixed \(\theta \in \Theta ^c\), such that

(i) \(\varphi (\lambda ,\,\theta )v(\lambda )\) is uniformly continuous in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\);


(ii) \(\underset{\lambda \in \mathbb {R}}{\sup }\,\dfrac{w(\lambda )}{v(\lambda )}<\infty \);

(iii) \(\underset{\lambda \in \mathbb {R},\ \theta \in \Theta ^c}{\sup }\,|\varphi (\lambda ,\,\theta )|w(\lambda )<\infty \).

Suppose also that \(\theta _T^{*}\overset{\hbox {P}}{\longrightarrow }\theta _0\). Then, as \(T\rightarrow \infty \),
$$\begin{aligned} \int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\varphi (\lambda ,\,\theta _T^{*})w(\lambda )d\lambda \ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0)\varphi (\lambda ,\,\theta _0)w(\lambda )d\lambda . \end{aligned}$$


Proof

We have
$$\begin{aligned} \begin{aligned} \int \limits _{\mathbb {R}}\, I_T^{\varepsilon }(\lambda )\varphi (\lambda ,\,\theta _T^{*})w(\lambda )d\lambda =\int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda ) \bigl (\varphi (\lambda ,\,\theta _T^{*}) -\varphi (\lambda ,\,\theta _0)\bigr )v(\lambda ) \dfrac{w(\lambda )}{v(\lambda )}d\lambda&\\ +\int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\varphi (\lambda ,\,\theta _0)w(\lambda )d\lambda&=I_5+I_6. \end{aligned} \end{aligned}$$
By Lemma 3 and the condition (iii)
$$\begin{aligned} I_6\ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\, f(\lambda ,\,\theta _0)\varphi (\lambda ,\,\theta _0)w(\lambda ) d\lambda ,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
On the other hand, for any \(r>0\) under the condition (i) there exists \(\delta =\delta (r)\) such that for \(\left\Vert\theta _T^{*}-\theta _0\right\Vert<\delta \)
$$\begin{aligned} |I_5|\le r \int \limits _{\mathbb {R}}\, I_T^{\varepsilon }(\lambda )\dfrac{w(\lambda )}{v(\lambda )}d\lambda , \end{aligned}$$
and by the condition (ii)
$$\begin{aligned} \int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda ) \dfrac{w(\lambda )}{v(\lambda )}d\lambda \ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0) \dfrac{w(\lambda )}{v(\lambda )}d\lambda . \end{aligned}$$
The relations (19)–(21) prove the lemma. \(\square \)

Proof of Theorem 2

By definition of the MCE \(\widehat{\theta }_T\), formally using the Taylor formula, we get
$$\begin{aligned} 0=\nabla _\theta U_T(\widehat{\theta }_T,\,\widehat{\alpha }_T) =\nabla _\theta U_T(\theta _0,\,\widehat{\alpha }_T) +\nabla _\theta \nabla _\theta 'U_T(\theta _T^{*},\,\widehat{\alpha }_T) (\widehat{\theta }_T-\theta _0). \end{aligned}$$
Since there is no vector Taylor formula, (22) must be understood coordinatewise, that is, each row of the vector equality (22) depends on its own random vector \(\theta _T^{*}\) such that \(\Vert \theta _T^{*}-\theta _0\Vert \le \Vert \widehat{\theta }_T-\theta _0\Vert \). In turn, from (22) we formally have
$$\begin{aligned} T^{\frac{1}{2}}(\widehat{\theta }_T-\theta _0)=\left(\nabla _\theta \nabla _\theta ' U_T(\theta _T^{*},\,\widehat{\alpha }_T)\right)^{-1} \left(-T^{\frac{1}{2}}\nabla _\theta U_T(\theta _0,\,\widehat{\alpha }_T)\right). \end{aligned}$$
Since condition \(\mathbf{N }_4\) implies the possibility of differentiation under the sign of the integrals in (10), we obtain
$$\begin{aligned} \begin{aligned} -T^{\frac{1}{2}}\nabla _\theta&U_T(\theta _0,\,\widehat{\alpha }_T)=-T^{\frac{1}{2}}\int \limits _{\mathbb {R}}\, \left(\nabla _\theta \log f(\lambda ,\,\theta _0)+\nabla _\theta \left(\dfrac{1}{f(\lambda ,\,\theta _0)}\right)I_T(\lambda ,\,\widehat{\alpha }_T)\right) w(\lambda )d\lambda \\&=T^{\frac{1}{2}}\int \limits _{\mathbb {R}}\, \left(\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)} I_T^{\varepsilon }(\lambda )-\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta _0)}\right) w(\lambda )d\lambda \\&\quad +\,(2\pi )^{-1}T^{-\frac{1}{2}}\int \limits _{\mathbb {R}}\, \left(2\hbox {Re}\left\{ \varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right\} +|s_T(\lambda ,\,\widehat{\alpha }_T)|^2\right)\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)} w(\lambda )d\lambda \\&=A_T^{(1)}+ A_T^{(2)}+ A_T^{(3)}. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \nabla _\theta \nabla _\theta '&U_T(\theta _T^{*},\,\widehat{\alpha }_T)=\int \limits _{\mathbb {R}}\, \left(\nabla _\theta \nabla _\theta ' \log f(\lambda ,\,\theta _T^{*})+\nabla _\theta \nabla _\theta ' \left(\dfrac{1}{f(\lambda ,\,\theta _T^{*})}\right)I_T(\lambda ,\,\widehat{\alpha }_T)\right) w(\lambda )d\lambda \\&=\int \limits _{\mathbb {R}}\,\left\{ \left( \dfrac{\nabla _\theta \nabla _\theta 'f(\lambda ,\,\theta _T^{*})}{f(\lambda ,\,\theta _T^{*})} -\dfrac{\nabla _\theta f(\lambda ,\,\theta _T^{*}) \nabla _\theta ' f(\lambda ,\,\theta _T^{*})}{f^2(\lambda ,\,\theta _T^{*})}\right)\right.\\&\quad +\left(2\dfrac{\nabla _\theta f(\lambda ,\,\theta _T^{*})\nabla _\theta ' f(\lambda ,\,\theta _T^{*})}{f^3(\lambda ,\,\theta _T^{*})} -\dfrac{\nabla _\theta \nabla _\theta 'f(\lambda ,\,\theta _T^{*})}{f^2(\lambda ,\,\theta _T^{*})}\right)\times \\&\quad \times \left.(I_T^{\varepsilon }(\lambda )+(\pi T)^{-1}\hbox {Re}\{\varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\} +(2\pi T)^{-1} |s_T(\lambda ,\,\widehat{\alpha }_T)|^2)\right\} w(\lambda )d\lambda \\&=B_T^{(1)}+B_T^{(2)}+B_T^{(3)}+B_T^{(4)}, \end{aligned} \end{aligned}$$
where the terms \(B_T^{(3)}\) and \(B_T^{(4)}\) contain values \(\hbox {Re}\{\varepsilon _T(\lambda )\overline{s_T(\lambda ,\widehat{\alpha }_T)}\}\) and \(|s_T(\lambda ,\widehat{\alpha }_T)|^2\), respectively.
Bearing in mind part 1) of condition \(\mathbf N _4\)(i), we take in Lemma 7 the functions
$$\begin{aligned} \varphi (\lambda )=\varphi _i(\lambda ) =\dfrac{f_i(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda ),\ i=\overline{1,m}. \end{aligned}$$
Then in the formula (23) \(A_T^{(2)}\ \overset{\hbox {P}}{\longrightarrow }\ 0\), as \(T\rightarrow \infty \).
Consider the term \(A_T^{(3)}=(a_{iT}^{(3)})_{i=1}^m\) in the sum (23):
$$\begin{aligned} a_{iT}^{(3)}=(2\pi )^{-1}T^{-\frac{1}{2}}\int \limits _{\mathbb {R}}\, |s_T(\lambda ,\,\widehat{\alpha }_T)|^2\varphi _i(\lambda )d\lambda , \end{aligned}$$
where \(\varphi _i(\lambda )\) are as before. Under conditions \(\mathbf{C }_1, \mathbf{C }_2, \mathbf{N }_1\) and 1) of \(\mathbf{N }_4\)(i), \(A_T^{(3)}\ \overset{\hbox {P}}{\longrightarrow }\ 0\), as \(T\rightarrow \infty \), because
$$\begin{aligned} |a_{iT}^{(3)}|\le & {} c(\varphi _i)T^{-\frac{1}{2}} \Phi _T(\widehat{\alpha }_T,\,\alpha _0)\\\le & {} c(\varphi _i)c_0 \Vert T^{-\frac{1}{2}}d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\Vert \;\Vert d_T(\alpha _0)\left(\widehat{\alpha }_T- \alpha _0\right)\Vert \ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
Examine the behaviour of the terms \(B_T^{(1)}-B_T^{(4)}\) in formula (24). Under conditions \(\mathbf{C }_1\) and \(\mathbf{N }_4\)(iii) we can use Lemma 8 with functions
$$\begin{aligned} \varphi (\lambda ,\,\theta )=\varphi _{ij}(\lambda ,\,\theta )=\dfrac{f_{ij}(\lambda ,\,\theta )}{f(\lambda ,\,\theta )},\ \dfrac{f_i(\lambda ,\,\theta ) f_j(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )},\ i,j=\overline{1,m}, \end{aligned}$$
to obtain the convergence
$$\begin{aligned} B_T^{(1)}\ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\, \left(\dfrac{\nabla _\theta \nabla _\theta 'f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta _0)}-\dfrac{\nabla _\theta f(\lambda ,\,\theta _0) \nabla _\theta 'f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)}\right) w(\lambda )d\lambda ,\ \text {as}\ T\rightarrow \infty . \end{aligned}$$
Under the condition \(\mathbf{N }_5\)(i) we can use Lemma 9 with functions
$$\begin{aligned} \varphi (\lambda ,\,\theta )=\varphi _{ij}(\lambda ,\,\theta )= \dfrac{f_{ij}(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )} w(\lambda ),\ \dfrac{f_i(\lambda ,\,\theta ) f_j(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )},\ i,j=\overline{1,m}, \end{aligned}$$
to obtain that
$$\begin{aligned} B_T^{(3)}\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ B_T^{(4)}\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
Under conditions \(\mathbf{C }_1\) and \(\mathbf{N }_5\)
$$\begin{aligned} B_T^{(2)}\ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\, \left(2\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)\nabla _\theta ' f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)} -\dfrac{\nabla _\theta \nabla _\theta 'f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta _0)}\right)w(\lambda )d\lambda , \end{aligned}$$
if we take in Lemma 10 in conditions (i) and (iii)
$$\begin{aligned} \varphi (\lambda ,\,\theta )=\varphi _{ij}(\lambda ,\,\theta )= \dfrac{f_i(\lambda ,\,\theta )f_j(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )},\ \dfrac{f_{ij}(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )},\ i,j=\overline{1,m}. \end{aligned}$$
So, under conditions \(\mathbf{C }_1, \mathbf{C }_2, \mathbf{N }_4\)(iii) and \(\mathbf{N }_5\)
$$\begin{aligned} \begin{aligned} \nabla _\theta \nabla _\theta 'U_T(\theta _T^{*},\,\widehat{\alpha }_T)\ \overset{\hbox {P}}{\longrightarrow }\&\int \limits _{\mathbb {R}}\,\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)\nabla _\theta ' f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)}w(\lambda )d\lambda \\ =&\int \limits _{\mathbb {R}}\,\nabla _\theta \log f(\lambda ,\,\theta _0)\nabla _\theta '\log f(\lambda ,\,\theta _0)w(\lambda )d\lambda =W_1(\theta _0), \end{aligned} \end{aligned}$$
because \(W_1(\theta _0)\) is the sum of the right-hand sides of (25) and (26).
From the facts obtained it follows that, to prove Theorem 2, it remains to study the asymptotic behaviour of the vector \(A_T^{(1)}\) from (23):
$$\begin{aligned} A_T^{(1)}= T^{\frac{1}{2}}\int \limits _{\mathbb {R}}\,\left(\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)}I_T^{\varepsilon }(\lambda )-\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta _0)}\right) w(\lambda )d\lambda . \end{aligned}$$
We will take
$$\begin{aligned} \begin{aligned} \varphi _i(\lambda )=&\dfrac{f_i(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)}w(\lambda ),\ i=\overline{1,m},\\ \Psi (\lambda )=&\sum \limits _{i=1}^m\,u_i\varphi _i(\lambda ),\ \mathrm {u}=\left(u_1,\,\ldots ,\,u_m\right)\in \mathbb {R}^m,\\ Y_T=&\int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\Psi (\lambda )d\lambda ,\ \ \ Y=\int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0)\Psi (\lambda )d\lambda , \end{aligned} \end{aligned}$$
and write
$$\begin{aligned} \left<A_T^{(1)},\,\mathrm {u}\right>=T^{\frac{1}{2}}(Y_T-\hbox {E}Y_T)+T^{\frac{1}{2}}(\hbox {E}Y_T-Y). \end{aligned}$$
Under conditions 1) and 2) of \(\mathbf{N }_4\)(i) (Bentkus 1972b; Ibragimov 1963), for any \(\mathrm {u}\in \mathbb {R}^m\)
$$\begin{aligned} T^{\frac{1}{2}}(\hbox {E}Y_T-Y)\ \longrightarrow \ 0, \ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
On the other hand
$$\begin{aligned} T^{\frac{1}{2}}(Y_T-\hbox {E}Y_T)=T^{-\frac{1}{2}}\int \limits _0^T\int \limits _0^T\, \left(\varepsilon (t)\varepsilon (s) -B(t-s)\right)\hat{b}(t-s)dtds, \end{aligned}$$
where
$$\begin{aligned} \hat{b}(t)=\int \limits _{\mathbb {R}}\,e^{\mathrm {i}\lambda t}\,(2\pi )^{-1}\Psi (\lambda )d\lambda . \end{aligned}$$
Thus we can apply Lemma 5 taking \(b(\lambda )=(2\pi )^{-1}\Psi (\lambda )\) in the formula (18) to obtain for any \(u\in \mathbb {R}^m\)
$$\begin{aligned} T^{\frac{1}{2}}(Y_T-\hbox {E}Y_T)\ \Rightarrow \ N(0,\,\sigma ^2),\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$
where
$$\begin{aligned} \begin{aligned} \sigma ^2=\,&4\pi \int \limits _{\mathbb {R}}\,\Psi ^2(\lambda ) f^2(\lambda ,\,\theta _0)d\lambda +\gamma _2\left(\int \limits _{\mathbb {R}}\,\Psi (\lambda ) f(\lambda ,\,\theta _0)d\lambda \right)^2. \end{aligned} \end{aligned}$$
The relations (28) and (29) are equivalent to the convergence
$$\begin{aligned} A_T^{(1)}\ \Rightarrow \ N\left(0,\,W_2(\theta _0)+V(\theta _0)\right),\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
From (27) and (30), relation (15) follows.

Remark 4

From the conditions of Theorem 2 it also follows that the conditions of Lemma 6 hold for the functions \(\hat{a}\) and \(\hat{b}\). Indeed, by condition \(\mathbf{A }_1\), \(\hat{a}\in L_1(\mathbb {R})\cap L_2(\mathbb {R})\), and we can take \(p=1\) in Lemma 6. On the other hand, if we regard \(b=(2\pi )^{-1}\Psi \) as the preimage of the Fourier transform \(\hat{b}\), then from \(\mathbf{N }_4\)(i)1) we have \(b\in L_1(\mathbb {R})\cap L_2(\mathbb {R})\). Then, by the Plancherel theorem, \(\hat{b}\in L_2(\mathbb {R})\), and we can take \(q=2\) in Lemma 6. Thus
$$\begin{aligned} \frac{2}{p}+\frac{1}{q}=\frac{5}{2}, \end{aligned}$$
and the conclusion of Lemma 6 is true.

5 Example: The motion of a pendulum in a turbulent fluid

First of all we review a number of results discussed in Parzen (1962), Anh et al. (2002), and Leonenko and Papić (2019), see also references therein.

We examine the stationary Lévy-driven continuous-time autoregressive process \(\varepsilon (t),\ t\in \mathbb {R}\), of order two (CAR(2) process) in the under-damped case (see Leonenko and Papić 2019 for details).

The motion of a pendulum is described by the equation
$$\begin{aligned} \ddot{\varepsilon }(t)+2\alpha \dot{\varepsilon }(t)+\left(\omega ^2+\alpha ^2\right)\varepsilon (t)=\dot{L}(t),\ t\in \mathbb {R}, \end{aligned}$$
in which \(\varepsilon (t)\) is the displacement from the rest position, \(\alpha \) is a damping factor, and \(\dfrac{2\pi }{\omega }\) is the damped period of the pendulum (see, e.g., Parzen 1962, pp. 111–113).
We consider the Green function solution of equation (31), in which \(\dot{L}\) is a Lévy noise, i.e. the derivative of a Lévy process in the sense of distributions (see Anh et al. 2002; Leonenko and Papić 2019 for details). The solution can be defined as the linear process
$$\begin{aligned} \varepsilon (t)=\int \limits _{\mathbb {R}}\,\hat{a}(t-s)dL(s),\ t\in \mathbb {R}, \end{aligned}$$
where the Green function
$$\begin{aligned} \hat{a}(t)=e^{-\alpha t}\, \frac{\sin (\omega t)}{\omega }\,\mathbb {I}_{[0,\,\infty )}(t),\ \alpha >0. \end{aligned}$$
Assuming \(\hbox {E}L(1)=0\), \(d_2=\hbox {E}L^2(1)<\infty \), we obtain
$$\begin{aligned} B(t)=d_2\int \limits _0^\infty \,\hat{a}(t+s)\hat{a}(s)ds=\frac{d_2}{4(\alpha ^2+\omega ^2)}\,e^{-\alpha |t|}\, \left(\frac{\sin (\omega |t|)}{\omega }+\frac{\cos (\omega t)}{\alpha }\right). \end{aligned}$$
The formula (33) for the covariance function of the process \(\varepsilon \) corresponds to the formula (2.12) in Leonenko and Papić (2019) for the correlation function
$$\begin{aligned} \hbox {Corr}\left(\varepsilon (t),\,\varepsilon (0)\right)=\frac{B(t)}{B(0)}= e^{-\alpha |t|}\,\left(\cos (\omega t)+\frac{\alpha }{\omega }\sin (\omega |t|)\right). \end{aligned}$$
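As a numerical sanity check, the closed form (33) can be compared with the defining integral \(d_2\int _0^\infty \hat{a}(t+s)\hat{a}(s)ds\). The sketch below uses SciPy quadrature; the parameter values \(\alpha =0.8\), \(\omega =2\), \(d_2=1.5\) are illustrative, not taken from the paper.

```python
import math
from scipy.integrate import quad

alpha, omega, d2 = 0.8, 2.0, 1.5   # illustrative parameter values

def a_hat(t):
    # Green function (32): e^{-alpha t} sin(omega t)/omega on [0, infinity)
    return math.exp(-alpha * t) * math.sin(omega * t) / omega if t >= 0 else 0.0

def B_closed(t):
    # closed-form covariance (33)
    t = abs(t)
    return d2 / (4 * (alpha**2 + omega**2)) * math.exp(-alpha * t) * (
        math.sin(omega * t) / omega + math.cos(omega * t) / alpha)

def B_numeric(t):
    # defining integral d2 * int_0^infty a_hat(|t| + s) a_hat(s) ds
    val, _ = quad(lambda s: a_hat(abs(t) + s) * a_hat(s), 0.0, math.inf, limit=200)
    return d2 * val

for t in (0.0, 0.3, 1.0, 2.5):
    assert abs(B_closed(t) - B_numeric(t)) < 1e-6
```

The same check applied to \(B(t)/B(0)\) confirms the correlation formula above.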
On the other hand for \(\hat{a}(t)\) given by (32)
$$\begin{aligned} a(\lambda )=\int \limits _0^\infty \,e^{-i\lambda t}\hat{a}(t)dt=\frac{1}{\alpha ^2+\omega ^2-\lambda ^2+2i\alpha \lambda }. \end{aligned}$$
Then the positive spectral density of the stationary process \(\varepsilon \) can be written as (compare with Parzen 1962)
$$\begin{aligned} f_2(\lambda )=\frac{d_2}{2\pi }\left|a(\lambda )\right|^2=\frac{d_2}{2\pi }\cdot \frac{1}{\left(\lambda ^2-\alpha ^2-\omega ^2\right)^2+4\alpha ^2\lambda ^2},\ \lambda \in \mathbb {R}. \end{aligned}$$
It is convenient to rewrite (34) in the form
$$\begin{aligned} f_2(\lambda )=f(\lambda ,\,\theta )=\frac{1}{2\pi }\cdot \frac{\beta }{\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2+4\alpha ^2\lambda ^2},\ \lambda \in \mathbb {R}, \end{aligned}$$
where \(\alpha =\theta _1\) is a damping factor, \(\beta =-\varkappa ^{(2)}(0)=d_2(\theta _2)=\theta _2\), \(\gamma =\omega =\theta _3\) is a damped cyclic frequency of the pendulum oscillations. Suppose that
$$\begin{aligned} \theta= & {} \left(\theta _1,\,\theta _2,\,\theta _3\right) = \left(\alpha ,\,\beta ,\,\gamma \right)\in \Theta \\= & {} \left(\underline{\alpha },\,\overline{\alpha }\right)\times \left(\underline{\beta },\,\overline{\beta }\right)\times \left(\underline{\gamma },\,\overline{\gamma }\right),\ \underline{\alpha },\underline{\beta },\underline{\gamma }>0,\ \overline{\alpha },\overline{\beta },\overline{\gamma }<\infty . \end{aligned}$$
The condition \(\mathbf{C }_3\) is fulfilled for spectral density (35).
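Under the parametrization (35) with \(\beta =d_2\) and \(\gamma =\omega \), the spectral density must integrate to the variance \(B(0)=\beta /\bigl (4\alpha (\alpha ^2+\gamma ^2)\bigr )\) obtained from (33). A small numerical check (parameter values are illustrative):

```python
import math
from scipy.integrate import quad

alpha, beta, gamma = 0.8, 1.5, 2.0   # theta = (alpha, beta, gamma), illustrative

def f(lam):
    # spectral density (35)
    s = (lam**2 - alpha**2 - gamma**2) ** 2 + 4 * alpha**2 * lam**2
    return beta / (2 * math.pi * s)

total, _ = quad(f, -math.inf, math.inf, limit=400)
B0 = beta / (4 * alpha * (alpha**2 + gamma**2))   # B(0) from (33) with d2 = beta
assert abs(total - B0) < 1e-7
```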
Assume that
$$\begin{aligned} w(\lambda )=\left(1+\lambda ^2\right)^{-a},\ \lambda \in \mathbb {R},\ a>0. \end{aligned}$$
More precisely, the value of \(a\) will be specified below.
Obviously, the functions \(w(\lambda )\log f(\lambda ,\,\theta )\) and \(\frac{w(\lambda )}{f(\lambda ,\,\theta )}\) are continuous on \(\mathbb {R}\times \Theta ^c\). For any \(\Lambda >0\) the function \(\left|\log f(\lambda ,\,\theta )\right|\) is bounded on the set \([-\Lambda ,\,\Lambda ]\times \Theta ^c\). The number \(\Lambda \) can be chosen so that for \(\lambda \in \mathbb {R}\backslash [-\Lambda ,\,\Lambda ]\)
$$\begin{aligned} 1<\frac{8\pi }{\overline{\beta }}\underline{\alpha }^2\lambda ^2\le f^{-1}(\lambda ,\,\theta ) \le \frac{2\pi }{\underline{\beta }}\left(2\left(\lambda ^4+\left(\overline{\alpha }^2+\overline{\gamma }^2\right)^2\right)+4\overline{\alpha }^2\lambda ^2\right). \end{aligned}$$
Thus the function \(Z_1(\lambda )\) in the condition \(\mathbf{C }_4\)(i) exists.
As for condition \(\mathbf{C }_4\)(ii), if \(a\ge 2\), then
$$\begin{aligned} \sup \limits _{\lambda \in \mathbb {R},\,\theta \in \Theta ^c}\,\frac{w(\lambda )}{f(\lambda ,\,\theta )}<\infty . \end{aligned}$$
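The boundedness can be inspected numerically: since \(s(\lambda )\sim \lambda ^4\), the ratio \(w(\lambda )/f(\lambda ,\,\theta )=2\pi (1+\lambda ^2)^{-a}s(\lambda )/\beta \) tends to the finite limit \(2\pi /\beta \) when \(a=2\), but grows roughly like \(|\lambda |\) when \(a=3/2\). A sketch with illustrative parameter values:

```python
import math

alpha, beta, gamma = 0.8, 1.5, 2.0   # illustrative values of theta

def ratio(lam, a):
    # w(lam)/f(lam, theta) = (1 + lam^2)^(-a) * 2*pi*s(lam)/beta
    s = (lam**2 - alpha**2 - gamma**2) ** 2 + 4 * alpha**2 * lam**2
    return (1 + lam**2) ** (-a) * 2 * math.pi * s / beta

# a = 2: the ratio stays bounded, approaching 2*pi/beta at infinity
assert abs(ratio(1e4, 2.0) - 2 * math.pi / beta) < 1e-2
# a = 3/2: the ratio keeps growing with |lam|, so its sup over R is infinite
assert ratio(1e4, 1.5) > 50 * ratio(1e2, 1.5)
```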
As a function v in condition \(\mathbf{C }_5\) we take
$$\begin{aligned} v(\lambda )=\left(1+\lambda ^2\right)^{-b},\ \lambda \in \mathbb {R},\ b>0. \end{aligned}$$
Obviously, if \(a\ge b\), then \(\sup \limits _{\lambda \in \mathbb {R}}\,\frac{w(\lambda )}{v(\lambda )}<\infty \) (condition \(\mathbf{C }_5\)(ii)), and the function \(\frac{v(\lambda )}{f(\lambda ,\,\theta )}\) is uniformly continuous in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\), if \(b\ge 2\) (condition \(\mathbf{C }_5\)(i)).
Further it will be helpful to use the notation \(s(\lambda )=\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2+4\alpha ^2\lambda ^2\). Then
$$\begin{aligned} \begin{aligned} f_\alpha (\lambda ,\,\theta )&=\frac{\partial }{\partial \alpha }\,f(\lambda ,\,\theta )=-\frac{2\alpha \beta }{\pi }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-2}(\lambda );\\ f_\beta (\lambda ,\,\theta )&=\frac{\partial }{\partial \beta }\,f(\lambda ,\,\theta )=\left(2\pi s(\lambda )\right)^{-1};\\ f_\gamma (\lambda ,\,\theta )&=\frac{\partial }{\partial \gamma }\,f(\lambda ,\,\theta )=\frac{2\beta \gamma }{\pi }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-2}(\lambda ). \end{aligned} \end{aligned}$$
To check the condition \(\mathbf{N }_4\)(i)1) consider the functions
$$\begin{aligned} \begin{aligned} \varphi _\alpha (\lambda )&=\frac{f_\alpha (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda ) =-\frac{8\pi \alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)w(\lambda );\\ \varphi _\beta (\lambda )&=\frac{f_\beta (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )=\frac{2\pi }{\beta ^2}s(\lambda )w(\lambda );\\ \varphi _\gamma (\lambda )&=\frac{f_\gamma (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda ) =\frac{8\pi \gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)w(\lambda ). \end{aligned} \end{aligned}$$
Then the condition \(\mathbf{N }_4\)(i)1) is satisfied for \(\varphi _\alpha \) and \(\varphi _\gamma \) when \(a>\frac{3}{2}\), for \(\varphi _\beta \) when \(a>\frac{5}{2}\). The same values of a are sufficient also to meet the condition \(\mathbf{N }_4\)(i)2).
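The threshold \(a>\frac{5}{2}\) for \(\varphi _\beta \sim \lambda ^{4-2a}\) can be illustrated numerically: for \(a=3\) the mass of \(\varphi _\beta \) over dyadic tail intervals shrinks, while for \(a=2\) it keeps growing, so the integral over \(\mathbb {R}\) diverges. A sketch with illustrative parameter values:

```python
import math
from scipy.integrate import quad

alpha, beta, gamma = 0.8, 1.5, 2.0   # illustrative values of theta

def phi_beta(lam, a):
    # phi_beta(lam) = (2*pi/beta^2) * s(lam) * (1 + lam^2)^(-a) ~ lam^(4 - 2a)
    s = (lam**2 - alpha**2 - gamma**2) ** 2 + 4 * alpha**2 * lam**2
    return 2 * math.pi / beta**2 * s * (1 + lam**2) ** (-a)

def tail(a, lo, hi):
    # mass of phi_beta over the interval [lo, hi]
    val, _ = quad(lambda lam: phi_beta(lam, a), lo, hi, limit=200)
    return val

# a = 3 > 5/2: phi_beta ~ lam^{-2}, dyadic tail masses halve, the integral converges
assert tail(3.0, 200, 400) < tail(3.0, 100, 200)
# a = 2 < 5/2: phi_beta tends to a positive constant, so tail masses keep growing
assert tail(2.0, 200, 400) > tail(2.0, 100, 200)
```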
To verify \(\mathbf{N }_4\)(i)3) fix \(\theta \in \Theta ^c\) and denote by \(\varphi (\lambda )\), \(\lambda \in \mathbb {R}\), any of the continuous functions \(\varphi _\alpha (\lambda )\), \(\varphi _\beta (\lambda )\), \(\varphi _\gamma (\lambda )\), \(\lambda \in \mathbb {R}\). Suppose \(|1-\eta |<\delta <\frac{1}{2}\). Then
$$\begin{aligned} \sup \limits _{\lambda \in \mathbb {R}}\,\left|\varphi (\eta \lambda )-\varphi (\lambda )\right|= & {} \max \left(\sup \limits _{\eta |\lambda |\le \Lambda }\,\left|\varphi (\eta \lambda )-\varphi (\lambda )\right|,\ \sup \limits _{\eta |\lambda |>\Lambda }\,\left|\varphi (\eta \lambda )-\varphi (\lambda )\right|\right)\\= & {} \max \left(s_1,\,s_2\right), \end{aligned}$$
$$\begin{aligned} s_2\le \sup \limits _{|\lambda |>\Lambda }\,\left|\varphi (\lambda )\right|+\sup \limits _{\eta |\lambda |>\Lambda }\,\left|\varphi (\lambda )\right|= s_3+s_4. \end{aligned}$$
By the properties of the functions \(\varphi \) under the assumption \(a>\frac{5}{2}\), for any \(\varepsilon >0\) there exists \(\Lambda =\Lambda (\varepsilon )>0\) such that \(|\varphi (\lambda )|<\frac{\varepsilon }{2}\) for \(|\lambda |>\frac{2}{3}\Lambda \). So, \(s_3\le \frac{\varepsilon }{2}\). We also have \(s_4\le \underset{|\lambda |>\frac{2}{3}\Lambda }{\sup }\,|\varphi (\lambda )|\le \frac{\varepsilon }{2}\). On the other hand,
$$\begin{aligned} s_1\le \sup \limits _{|\lambda |<2\Lambda }\,\left|\varphi (\eta \lambda )-\varphi (\lambda )\right|,\ \ |\eta \lambda -\lambda |\le 2\Lambda \delta =\delta ', \end{aligned}$$
and by the proper choice of \(\delta \)
$$\begin{aligned} s_1\le \sup \limits _{\begin{array}{c} \lambda _1,\lambda _2\in [-2\Lambda ,\,2\Lambda ]\\ \left|\lambda _1-\lambda _2\right|<\delta ' \end{array}}\, \left|\varphi (\lambda _1)-\varphi (\lambda _2)\right|<\varepsilon , \end{aligned}$$
and condition \(\mathbf{N }_4\)(i)3) is met.
Using (37) we get for any \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),
$$\begin{aligned} \begin{aligned} \varphi _\alpha '(\lambda )&=-\frac{16\pi \alpha }{\beta }\,\lambda w(\lambda )-\frac{8\pi \alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)w'(\lambda ) =O\left(\lambda ^{-2a+1}\right);\\ \varphi _\beta '(\lambda )&=\frac{2\pi }{\beta ^2}\bigl (s'(\lambda )w(\lambda )+s(\lambda )w'(\lambda )\bigr )=O\left(\lambda ^{-2a+3}\right);\\ \varphi _\gamma '(\lambda )&=\frac{16\pi \gamma }{\beta }\,\lambda w(\lambda )+\frac{8\pi \gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)w'(\lambda ) =O\left(\lambda ^{-2a+1}\right). \end{aligned} \end{aligned}$$
Therefore for \(a>\frac{3}{2}\) these derivatives are uniformly continuous on \(\mathbb {R}\) (condition \(\mathbf{N }_4\)(i)4)). So, to satisfy condition \(\mathbf{N }_4\)(i) we can take the weight function \(w(\lambda )\) with \(a>\frac{5}{2}\).

The check of assumption \(\mathbf{N }_4\)(ii) is similar to the check of \(\mathbf{C }_4\)(i).

As \(\lambda \rightarrow \infty \), uniformly in \(\theta \in \Theta ^c\)
$$\begin{aligned} \begin{aligned} \frac{\left|f_\alpha (\lambda ,\,\theta )\right|}{f(\lambda ,\,\theta )}w(\lambda )&=\left|\varphi _\alpha (\lambda )\right|f(\lambda ,\,\theta ) =4\alpha \left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )w(\lambda )=O\left(\lambda ^{-2a-2}\right);\\ \frac{\left|f_\beta (\lambda ,\,\theta )\right|}{f(\lambda ,\,\theta )}w(\lambda )&=\varphi _\beta (\lambda )f(\lambda ,\,\theta ) =\beta ^{-1}w(\lambda )=O\left(\lambda ^{-2a}\right);\\ \frac{\left|f_\gamma (\lambda ,\,\theta )\right|}{f(\lambda ,\,\theta )}w(\lambda )&=\left|\varphi _\gamma (\lambda )\right|f(\lambda ,\,\theta ) =4\gamma \left|\lambda ^2-\alpha ^2-\gamma ^2\right|s^{-1}(\lambda )w(\lambda )=O\left(\lambda ^{-2a-2}\right). \end{aligned} \end{aligned}$$
On the other hand, for any \(\Lambda >0\) the functions (38) are bounded on the sets \([-\Lambda ,\,\Lambda ]\times \Theta ^c\).
To check \(\mathbf{N }_4\)(iii) note first of all that, uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),
$$\begin{aligned} \begin{aligned} \frac{f_\alpha ^2(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\alpha (\lambda )f_\alpha (\lambda ,\,\theta ) =16\alpha ^2\left(\lambda ^2+\alpha ^2+\gamma ^2\right)^2s^{-2}(\lambda )w(\lambda )=O\left(\lambda ^{-2a-4}\right);\\ \frac{f_\beta ^2(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\beta (\lambda )f_\beta (\lambda ,\,\theta ) =\beta ^{-2}w(\lambda )=O\left(\lambda ^{-2a}\right);\\ \frac{f_\gamma ^2(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\gamma (\lambda )f_\gamma (\lambda ,\,\theta ) =16\gamma ^2\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2s^{-2}(\lambda )w(\lambda )=O\left(\lambda ^{-2a-4}\right). \end{aligned} \end{aligned}$$
These functions are continuous on \(\mathbb {R}\times \Theta ^c\), as well as the functions
$$\begin{aligned} \begin{aligned} \frac{f_\alpha (\lambda ,\,\theta )f_\beta (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\alpha (\lambda )f_\beta (\lambda ,\,\theta ) =-\frac{4\alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )w(\lambda );\\ \frac{f_\alpha (\lambda ,\,\theta )f_\gamma (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\alpha (\lambda )f_\gamma (\lambda ,\,\theta ) =-16\alpha \gamma \left(\lambda ^4-\left(\alpha ^2+\gamma ^2\right)^2\right)s^{-2}(\lambda )w(\lambda );\\ \frac{f_\beta (\lambda ,\,\theta )f_\gamma (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\beta (\lambda )f_\gamma (\lambda ,\,\theta ) =\frac{4\gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-1}(\lambda )w(\lambda ). \end{aligned} \end{aligned}$$
Moreover, uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),
$$\begin{aligned} \begin{aligned} \frac{f_{\alpha \alpha }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=-4\left(\lambda ^2+3\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )w(\lambda ) +8\alpha \left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-2}(\lambda )s_\alpha '(\lambda )w(\lambda )\\&=O\left(\lambda ^{-2a-2}\right);\\ \frac{f_{\beta \beta }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=0;\\ \frac{f_{\gamma \gamma }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=4\left(\lambda ^2-\alpha ^2-3\gamma ^2\right)s^{-1}(\lambda )w(\lambda ) -8\gamma \left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-2}(\lambda )s_\gamma '(\lambda )w(\lambda )\\&=O\left(\lambda ^{-2a-2}\right);\\ \frac{f_{\alpha \beta }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=-\frac{4\alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )w(\lambda ) =O\left(\lambda ^{-2a-2}\right);\\ \frac{f_{\alpha \gamma }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=-8\alpha \gamma s^{-1}(\lambda )w(\lambda )- 32\alpha \gamma \left(\lambda ^4-\left(\alpha ^2+\gamma ^2\right)^2\right)s^{-2}(\lambda )w(\lambda ) =O\left(\lambda ^{-2a-4}\right);\\ \frac{f_{\beta \gamma }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=\frac{4\gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-1}(\lambda )w(\lambda ) =O\left(\lambda ^{-2a-2}\right). \end{aligned} \end{aligned}$$
Note that the functions (41) are continuous on \(\mathbb {R}\times \Theta ^c\), as are the functions (39) and (40). Therefore condition \(\mathbf{N }_4\)(iii) is fulfilled.
Let us verify condition \(\mathbf{N }_5\)(i). According to equation (39), uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),
$$\begin{aligned} \begin{aligned} \frac{f_\alpha ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}w(\lambda )&=\frac{32\pi \alpha ^2}{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)^2s^{-1}(\lambda )w(\lambda )=O\left(\lambda ^{-2a}\right);\\ \frac{f_\beta ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}w(\lambda )&=\frac{2\pi }{\beta ^3}s(\lambda )w(\lambda )=O\left(\lambda ^{-2a+4}\right);\\ \frac{f_\gamma ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}w(\lambda )&=\frac{32\pi \gamma ^2}{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2s^{-1}(\lambda )w(\lambda )=O\left(\lambda ^{-2a}\right). \end{aligned} \end{aligned}$$
Therefore the functions (42), which are continuous in \((\lambda ,\theta )\in \mathbb {R}\times \Theta ^c\), are bounded on \(\mathbb {R}\times \Theta ^c\) if \(a\ge 2\).
Using equations (40) and (41) we obtain uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),
$$\begin{aligned} \begin{aligned} \frac{f_{\alpha \alpha }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=-\frac{8\pi }{\beta }\left(\lambda ^2+3\alpha ^2+\gamma ^2\right)w(\lambda ) +\frac{16\pi \alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )s_\alpha '(\lambda )w(\lambda )\\&=O\left(\lambda ^{-2a+2}\right);\\ \frac{f_{\beta \beta }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=0;\\ \frac{f_{\gamma \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\frac{8\pi }{\beta }\left(\lambda ^2-\alpha ^2-3\gamma ^2\right)w(\lambda ) -\frac{16\pi \gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-1}(\lambda )s_\gamma '(\lambda )w(\lambda )\\&=O\left(\lambda ^{-2a+2}\right);\\ \frac{f_{\alpha \beta }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=-\frac{8\pi \alpha }{\beta ^2} \left(\lambda ^2+\alpha ^2+\gamma ^2\right)w(\lambda ) =O\left(\lambda ^{-2a+2}\right);\\ \frac{f_{\alpha \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=-\frac{16\pi \alpha \gamma }{\beta }w(\lambda )- \frac{64\pi \alpha \gamma }{\beta }\left(\lambda ^4-\left(\alpha ^2+\gamma ^2\right)^2\right)s^{-1}(\lambda )w(\lambda ) =O\left(\lambda ^{-2a}\right);\\ \frac{f_{\beta \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\frac{8\pi \gamma }{\beta ^2}\left(\lambda ^2-\alpha ^2-\gamma ^2\right)w(\lambda ) =O\left(\lambda ^{-2a+2}\right). \end{aligned} \end{aligned}$$
So, the functions (43), which are continuous on \(\mathbb {R}\times \Theta ^c\), are bounded on \(\mathbb {R}\times \Theta ^c\) if \(a\ge 1\).
To check \(\mathbf{N }_5\)(ii) consider the weight function
$$\begin{aligned} v(\lambda )=\left(1+\lambda ^2\right)^{-b},\ \lambda \in \mathbb {R},\ b>0. \end{aligned}$$
If \(a\ge b\), then the function \(\frac{w(\lambda )}{v(\lambda )}\) is bounded on \(\mathbb {R}\) (condition \(\mathbf{N }_5\)(iii)). Using (42) we obtain, uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),
$$\begin{aligned} \begin{aligned} \frac{f_\alpha ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=\frac{32\pi \alpha ^2}{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)^2s^{-1}(\lambda )v(\lambda )=O\left(\lambda ^{-2b}\right);\\ \frac{f_\beta ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=\frac{2\pi }{\beta ^3}s(\lambda )v(\lambda )=O\left(\lambda ^{-2b+4}\right);\\ \frac{f_\gamma ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=\frac{32\pi \gamma ^2}{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2s^{-1}(\lambda )v(\lambda )=O\left(\lambda ^{-2b}\right). \end{aligned} \end{aligned}$$
In turn, similarly to (40) it follows uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),
$$\begin{aligned} \begin{aligned} \frac{f_\alpha (\lambda ,\,\theta )f_\beta (\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=-\frac{8\pi \alpha }{\beta ^2}\left(\lambda ^2+\alpha ^2+\gamma ^2\right)v(\lambda )=O\left(\lambda ^{-2b+2}\right);\\ \frac{f_\alpha (\lambda ,\,\theta )f_\gamma (\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=-\frac{32\pi \alpha \gamma }{\beta }\left(\lambda ^4-\left(\alpha ^2+\gamma ^2\right)^2\right)s^{-1}(\lambda )v(\lambda )=O\left(\lambda ^{-2b}\right);\\ \frac{f_\beta (\lambda ,\,\theta )f_\gamma (\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=\frac{8\pi \gamma }{\beta ^2}\left(\lambda ^2-\alpha ^2-\gamma ^2\right)v(\lambda )=O\left(\lambda ^{-2b+2}\right). \end{aligned} \end{aligned}$$
The functions (44) and (45) are uniformly continuous in \((\lambda ,\theta )\in \mathbb {R}\times \Theta ^c\) if they converge to zero uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \), that is, if \(b>2\).
Similarly to (43) uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),
$$\begin{aligned} \begin{aligned}&\frac{f_{\alpha \alpha }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=O\left(\lambda ^{-2b+2}\right);\ \ \frac{f_{\beta \beta }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=0;\quad \ \ \frac{f_{\gamma \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=O\left(\lambda ^{-2b+2}\right);\\&\frac{f_{\alpha \beta }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=O\left(\lambda ^{-2b+2}\right);\ \ \frac{f_{\alpha \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=O\left(\lambda ^{-2b}\right);\quad \\&\frac{f_{\beta \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda ) =O\left(\lambda ^{-2b+2}\right). \end{aligned} \end{aligned}$$
Thus the functions (44)–(46) are uniformly continuous in \((\lambda ,\theta )\in \mathbb {R}\times \Theta ^c\), if \(b>2\).
Proceeding to the verification of condition \(\mathbf{N }_6\), we note that for any \(x=\left(x_\alpha ,\,x_\beta ,\,x_\gamma \right)\ne 0\)
$$\begin{aligned} \left<W_1(\theta )x,\,x\right>=\int \limits _{\mathbb {R}}\,\left(x_\alpha f_\alpha (\lambda ,\,\theta )+x_\beta f_\beta (\lambda ,\,\theta ) +x_\gamma f_\gamma (\lambda ,\,\theta )\right)^2\frac{w(\lambda )}{f^2(\lambda ,\,\theta )}d\lambda . \end{aligned}$$
From equation (36) it is seen that the positive definiteness of the matrix \(W_1(\theta )\) follows from the linear independence of the functions \(\lambda ^2+\alpha ^2+\gamma ^2\), \(s(\lambda )\), \(\lambda ^2-\alpha ^2-\gamma ^2\). The positive definiteness of the matrix \(W_2(\theta )\) is established similarly.

In our example, to satisfy the consistency conditions \(\mathbf{C }_4\) and \(\mathbf{C }_5\), the weight functions \(w(\lambda )\) and \(v(\lambda )\) should be chosen so that \(a\ge b>2\). On the other hand, to satisfy the asymptotic normality conditions \(\mathbf{N }_4\) and \(\mathbf{N }_5\), the functions \(w(\lambda )\) and \(v(\lambda )\) should be such that \(a>\frac{5}{2}\) and \(a\ge b>2\).
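The role of the weight function can be illustrated by the population version of the Whittle contrast, in which the periodogram is replaced by the true density \(f(\lambda ,\,\theta _0)\): since \(\log x + c/x\) is minimized at \(x=c\), the functional is minimized exactly at \(\theta _0\). A numerical sketch with illustrative parameter values and \(a=3\):

```python
import math
from scipy.integrate import quad

theta0 = (0.8, 1.5, 2.0)   # true (alpha, beta, gamma), illustrative
a = 3.0                    # weight exponent, a > 5/2 as required above

def f(lam, th):
    # pendulum spectral density (35)
    al, be, ga = th
    s = (lam**2 - al**2 - ga**2) ** 2 + 4 * al**2 * lam**2
    return be / (2 * math.pi * s)

def w(lam):
    return (1 + lam**2) ** (-a)

def K(th):
    # population Whittle contrast: periodogram replaced by the true density
    integrand = lambda lam: (math.log(f(lam, th)) + f(lam, theta0) / f(lam, th)) * w(lam)
    val, _ = quad(integrand, -math.inf, math.inf, limit=400)
    return val

# perturbing any coordinate of theta0 strictly increases the contrast
K0 = K(theta0)
for th in [(1.0, 1.5, 2.0), (0.8, 2.0, 2.0), (0.8, 1.5, 2.4)]:
    assert K(th) > K0
```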

The spectral density (35) has no singularity at zero, so the function \(v(\lambda )\) in the conditions \(\mathbf{C }_5\)(i) and \(\mathbf{N }_5\)(ii) could be chosen equal to \(w(\lambda )\), for example, with \(a=b=3\). However, we prefer to keep the function \(v(\lambda )\) in the text, since it is needed when the spectral density has a singularity at zero or elsewhere; see, e.g., Example 1 in Leonenko and Sakhno (2006), where a linear process driven by Brownian motion and the regression function \(g(t,\,\alpha )\equiv 0\) were studied. Specifically, in the case of the Riesz–Bessel spectral density
$$\begin{aligned} f(\lambda ,\,\theta )=\frac{\beta }{2\pi |\lambda |^{2\alpha }(1+\lambda ^2)^\gamma },\ \lambda \in \mathbb {R}, \end{aligned}$$
where \(\theta =\left(\theta _1,\,\theta _2,\,\theta _3\right)=(\alpha ,\,\beta ,\,\gamma )\in \Theta =(\underline{\alpha },\,\overline{\alpha }) \times (\underline{\beta },\,\overline{\beta })\times (\underline{\gamma },\,\overline{\gamma })\), \(\underline{\alpha }>0\), \(\overline{\alpha }<\frac{1}{2}\), \(\underline{\beta }>0\), \(\overline{\beta }<\infty \), \(\underline{\gamma }>\frac{1}{2}\), \(\overline{\gamma }<\infty \), and the parameter \(\alpha \) governs the long-range dependence, while the parameter \(\gamma \) indicates the second-order intermittency (Anh et al. 2004; Gao et al. 2001; Lim and Teo 2008), the weight functions were chosen in the form
$$\begin{aligned} w(\lambda )=\frac{\lambda ^{2b}}{\left(1+\lambda ^2\right)^a},\ a>b>0;\ \ \ v(\lambda )=\frac{\lambda ^{2b'}}{\left(1+\lambda ^2\right)^{a'}},\ a'>b'>0,\ \lambda \in \mathbb {R}. \end{aligned}$$
Unfortunately, our conditions do not yet cover the case of a general nonlinear regression function and Lévy-driven continuous-time strongly dependent linear random noise such as Riesz–Bessel motion.



  1. Akhiezer NI (1965) Lections on approximation theory. Nauka, Moscow (in Russian)Google Scholar
  2. Alodat T, Olenko A (2017) Weak convergence of weighted additive functionals of long-range dependent fields. Theor Probab Math Stat 97:9–23MathSciNetzbMATHGoogle Scholar
  3. Anh VV, Heyde CC, Leonenko NN (2002) Dynamic models of long-memory processes driven by Lévy noise. J Appl Prob 39(4):730–747zbMATHGoogle Scholar
  4. Anh VV, Leonenko NN, Sakhno LM (2004) On a class of minimum contrast estimators for fractional stochastic processes and fields. J Statist Plan Inference 123:161–185MathSciNetzbMATHGoogle Scholar
  5. Applebaum D (2009) Lévy processes and stochastic calculus, vol 116. Cambridge studies in advanced mathematics. Cambridge University Press, CambridgezbMATHGoogle Scholar
  6. Avram F, Leonenko N, Sakhno L (2010) On a Szegö type limit theorem, the Hölder–Young–Brascamp–Lieb inequality, and the asymptotic theory of integrals and quadratic forms of stationary fields. ESAIM: Probab Stat 14:210–225MathSciNetzbMATHGoogle Scholar
  7. Bahamonde N, Doukhan P (2017) Spectral estimation in the presence of missing data. Theor Probab Math Stat 95:59–79zbMATHGoogle Scholar
  8. Bai S, Ginovyan MS, Taqqu MS (2016) Limit theorems for quadratic forms of Lévy-driven continuous-time linear processes. Stoch Process Appl 126(4):1036–1065zbMATHGoogle Scholar
  9. Bentkus R (1972) Asymptotic normality of an estimate of the spectral function. Liet Mat Rink 3(12):5–18MathSciNetGoogle Scholar
  10. Bentkus R (1972) On the error of the estimate of the spectral function of a stationary process. Liet Mat Rink 1(12):55–71MathSciNetGoogle Scholar
  11. Bentkus R, Rutkauskas R (1973) On the asymptotics of the first two moments of second order spectral estimators. Liet Mat Rink 1(13):29–45zbMATHGoogle Scholar
  12. Dahlhaus R (1989) Efficient parameter estimation for self-similar processes. Ann Stat 17:1749–1766MathSciNetzbMATHGoogle Scholar
  13. Dunsmuir W, Hannan EJ (1976) Vector linear time series models. Adv Appl Probab 8:339–360MathSciNetzbMATHGoogle Scholar
  14. Fox R, Taqqu MS (1986) Large-sample properties of parameter estimates for strongly dependent stationary Gaussian time series. Ann Stat 2(14):517–532MathSciNetzbMATHGoogle Scholar
  15. Gao J (2004) Modelling long-range-dependent Gaussian processes with application in continuous-time financial models. J Appl Probab 41:467–485MathSciNetzbMATHGoogle Scholar
  16. Gao J, Anh V, Heyde CC, Tieng Q (2001) Parameter estimation of stochastic processes with long-range dependence and intermittency. J Time Ser Anal 22:517–535MathSciNetzbMATHGoogle Scholar
  17. Ginovyan MS, Sahakyan AA, Taqqu MS (2014) The trace problem for Toeplitz matrices and operators and its impact in probability. Probab Surv 11:393–440MathSciNetzbMATHGoogle Scholar
  18. Ginovyan MS, Sahakyan AA (2017) Robust estimation for continuous-time linear models with memory. Theor Probab Math Stat 95:81–98zbMATHGoogle Scholar
  19. Giraitis L, Surgailis D (1990) A central limit theorem for quadratic forms in strongly dependent linear variables and its application to asymptotic normality of Whittle estimate. Probab Theory Relat Fields 86:87–104zbMATHGoogle Scholar
  20. Giraitis L, Taniguchi M, Taqqu MS (2017) Asymptotic normality of quadratic forms of martingale differences. Stat Inference Stoch Process 20(3):315–327MathSciNetzbMATHGoogle Scholar
  21. Giraitis L, Taqqu MS (1999) Whittle estimator for finite-variance non-Gaussian time series with long memory. Ann Stat 1(27):178–203MathSciNetzbMATHGoogle Scholar
  22. Grenander U (1954) On the estimation of regression coefficients in the case of an autocorrelated disturbance. Ann Stat 25(2):252–272MathSciNetzbMATHGoogle Scholar
  23. Grenander U, Rosenblatt M (1984) Statistical analysis of stationary time series. Chelsea Publishing Company, New YorkzbMATHGoogle Scholar
  24. Guyon X (1982) Parameter estimation for a stationary process on a d-dimensional lattice. Biometrica 69:95–102MathSciNetzbMATHGoogle Scholar
  25. Hannan EJ (1970) Multiple time series. Springer, New YorkzbMATHGoogle Scholar
  26. Hannan EJ (1973) The asymptotic theory of linear time series models. J Appl Probab 10:130–145MathSciNetzbMATHGoogle Scholar
  27. Heyde C, Gay R (1989) On asymptotic quasi-likelihood stochastic process. Stoch Process Appl 31:223–236zbMATHGoogle Scholar
  28. Heyde C, Gay R (1993) Smoothed periodogram asymptotic and estimation for processes and fields with possible long-range dependence. Stoch Process Appl 45:169–182MathSciNetzbMATHGoogle Scholar
  29. Ibragimov IA (1963) On estimation of the spectral function of a stationary Gaussian process. Theory Probab Appl 8(4):366–401zbMATHGoogle Scholar
  30. Ibragimov IA, Rozanov YA (1978) Gaussian random processes. Springer, New YorkzbMATHGoogle Scholar
  31. Ivanov AV (1980) A solution of the problem of detecting hidden periodicities. Theory Probab Math Stat 20:51–68zbMATHGoogle Scholar
  32. Ivanov AV (1997) Asymptotic theory of nonlinear regression. Kluwer, DordrechtzbMATHGoogle Scholar
  33. Ivanov AV (2010) Consistency of the least squares estimator of the amplitudes and angular frequencies of a sum of harmonic oscillations in models with long-range dependence. Theory Probab Math Statist 80:61–69
  34. Ivanov AV, Leonenko NN (1989) Statistical analysis of random fields. Kluwer, Dordrecht
  35. Ivanov AV, Leonenko NN (2004) Asymptotic theory of nonlinear regression with long-range dependence. Math Methods Stat 13(2):153–178
  36. Ivanov AV, Leonenko NN (2007) Robust estimators in nonlinear regression model with long-range dependence. In: Pronzato L, Zhigljavsky A (eds) Optimal design and related areas in optimization and statistics. Springer, Berlin, pp 191–219
  37. Ivanov AV, Leonenko NN (2008) Semiparametric analysis of long-range dependence in nonlinear regression. J Stat Plan Inference 138:1733–1753
  38. Ivanov AV, Leonenko NN, Ruiz-Medina MD, Zhurakovsky BM (2015) Estimation of harmonic component in regression with cyclically dependent errors. Statistics 49:156–186
  39. Ivanov AV, Orlovskyi IV (2018) Large deviations of regression parameter estimator in continuous-time models with sub-Gaussian noise. Mod Stoch Theory Appl 5(2):191–206
  40. Ivanov OV, Prykhod’ko VV (2016) On the Whittle estimator of the parameter of spectral density of random noise in the nonlinear regression model. Ukr Math J 67(8):1183–1203
  41. Jennrich RI (1969) Asymptotic properties of non-linear least squares estimators. Ann Math Stat 40:633–643
  42. Koul HL, Surgailis D (2000) Asymptotic normality of the Whittle estimator in linear regression models with long memory errors. Stat Inference Stoch Process 3:129–147
  43. Leonenko NN, Papić I (2019) Correlation properties of continuous-time autoregressive processes delayed by the inverse of the stable subordinator. Commun Stat: Theory Methods.
  44. Leonenko NN, Sakhno LM (2006) On the Whittle estimator for some classes of continuous-parameter random processes and fields. Stat Probab Lett 76:781–795
  45. Leonenko NN, Taufer E (2006) Weak convergence of functionals of stationary long memory processes to Rosenblatt-type distributions. J Stat Plan Inference 136:1220–1236
  46. Lim SC, Teo LP (2008) Sample path properties of fractional Riesz–Bessel field of variable order. J Math Phys 49:013509
  47. Malinvaud E (1970) The consistency of nonlinear regression. Ann Math Stat 41:953–969
  48. Parzen E (1962) Stochastic processes. Holden-Day Inc, San Francisco
  49. Rajput B, Rosinski J (1989) Spectral representations of infinitely divisible processes. Probab Theory Relat Fields 82:451–487
  50. Rosenblatt MR (1985) Stationary sequences and random fields. Birkhäuser, Boston
  51. Sato K (1999) Lévy processes and infinitely divisible distributions. Cambridge studies in advanced mathematics, vol 68. Cambridge University Press, Cambridge
  52. Walker AM (1973) On the estimation of a harmonic component in a time series with stationary dependent residuals. Adv Appl Probab 5:217–241
  53. Whittle P (1951) Hypothesis testing in time series. Hafner, New York
  54. Whittle P (1953) Estimation and information in stationary time series. Ark Mat 2:423–434

Copyright information

© The Author(s) 2019

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Mathematical Analysis and Probability Theory, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kiev, Ukraine
  2. School of Mathematics, Cardiff University, Cardiff, UK