Extremes of a class of non-stationary Gaussian processes and maximal deviation of projection density estimates

Abstract

In this paper, we consider the distribution of the supremum of non-stationary Gaussian processes and present a new theoretical result on the asymptotic behaviour of this distribution. We focus on the case when the processes have a finite number of points attaining their maximal variance, but, unlike previously known results in this field, our main theorem yields an asymptotic representation of the corresponding distribution function with an exponentially decaying remainder term. This result can be efficiently applied to the study of projection density estimates based, for instance, on Legendre polynomials. More precisely, we construct a sequence of accompanying laws that approximates the distribution of the maximal deviation of the considered estimates at a polynomial rate. Moreover, we construct confidence bands for densities that are honest at a polynomial rate over a broad class of densities.

References

  1. Adler, R, Taylor, J: Random Fields and Geometry. Springer Science & Business Media (2009)

  2. Azaïs, J-M, Wschebor, M: The distribution of the maximum of a Gaussian process: Rice method revisited. In and Out of Equilibrium 3, 107–129 (2000)

  3. Azaïs, J-M, Wschebor, M: Level Sets and Extrema of Random Processes and Fields. Wiley (2009)

  4. Bai, L, Dȩbicki, K, Hashorva, E, Ji, L: Extremes of threshold-dependent Gaussian processes. Sci. Chin. Math. 61(11), 1971–2002 (2018)

  5. Bai, L, Dȩbicki, K, Hashorva, E, Luo, L: On generalised Piterbarg constants. Methodol. Comput. Appl. Probab. 20, 1 (2018)

  6. Bickel, P, Rosenblatt, M: On some global measures of the deviations of density function estimates. Ann. Stat. 1(6), 1071–1095 (1973)

  7. Bull, A: Honest adaptive confidence bands and self-similar functions. Electron. J. Stat. 6, 1490–1516 (2012)

  8. Chernozhukov, V, Chetverikov, D, Kato, K: Anti-concentration and honest, adaptive confidence bands. Ann. Stat. 42(5), 1787–1818 (2014)

  9. Giné, E, Koltchinskii, V, Sakhanenko, L: Kernel density estimators: convergence in distribution for weighted sup-norms. Probab. Theory Relat. Fields 130(2), 167–198 (2004)

  10. Giné, E, Nickl, R: Confidence bands in density estimation. Ann. Stat. 38(2), 1122–1170 (2010)

  11. Giné, E, Nickl, R: Mathematical Foundations of Infinite-Dimensional Statistical Models, vol 40. Cambridge University Press (2016)

  12. Hashorva, E, Hüsler, J: Extremes of Gaussian processes with maximal variance near the boundary points. Methodol. Comput. Appl. Probab. 2(3), 255–269 (2000)

  13. Hüsler, J, Piterbarg, V: On the ruin probability for physical fractional Brownian motion. Stoch. Process. Appl. 113(2), 315–332 (2004)

  14. Hüsler, J, Piterbarg, V, Seleznjev, O: On convergence of the uniform norms for Gaussian processes and linear approximation problems. Ann. Appl. Probab. 13(4), 1615–1653 (2003)

  15. Komlós, J, Major, P, Tusnády, G: An approximation of partial sums of independent rv’s and the sample DF. Zeitschrift für Wahrscheinlichkeitstheorie und Verw Gebiete 32, 111–131 (1975)

  16. Konakov, V, Panov, V: Sup-norm convergence rates for Lévy density estimation. Extremes 19(3), 371–403 (2016)

  17. Konakov, V, Panov, V: Convergence rates of maximal deviation distribution for projection estimates of Lévy densities. arXiv:1411.4750v3 (2016)

  18. Konstant, D, Piterbarg, V: Extreme values of the cyclostationary Gaussian random process. J. Appl. Probab. 30(1), 82–97 (1993)

  19. Marron, J, Wand, M: Exact mean integrated squared error. Ann. Stat. 20(2), 712–736 (1992)

  20. Michna, Z: Remarks on Pickands theorem. arXiv:0904.3832v1 (2009)

  21. Piterbarg, V: Asymptotic Methods in the Theory of Gaussian Processes and Fields. AMS, Providence (1996)

  22. Piterbarg, V: Twenty Lectures about Gaussian Processes. Atlantic Financial Press, London (2015)

  23. Piterbarg, V, Prisiazhniuk, V: Asymptotic analysis of the probability of large excursions for a nonstationary Gaussian process. Teoriia Veroiatnostei i Matematicheskaia Statistika 18, 121–134 (1978)

  24. Piterbarg, V, Simonova, I: Asymptotic expansions for the probabilities of large runs of nonstationary Gaussian processes. Math. Notes 35(6), 477–483 (1984)

  25. Smirnov, N V: On the construction of confidence regions for the density of distribution of random variables. Dokl. Akad. Nauk SSSR 74, 189–191 (1950)

  26. Wasserman, L: All of Nonparametric Statistics. Springer Science & Business Media (2006)

Acknowledgments

The article was prepared within the framework of the HSE University Basic Research Program. The work of the first author was funded by the Russian Science Foundation (project No. 20-11-20119).

Author information

Correspondence to Vladimir Panov.

Appendices

Appendix A: choice of the parameter δ

In this section, we provide an example of the choice of the parameter δ in Theorem 1. We concentrate on the case of the Gaussian process

$$ \begin{array}{@{}rcl@{}} {\Upsilon}(t) = {\sum}_{j=0}^{J} \psi_{j} (t) Z_{j}, \end{array} $$
(61)

where ψ0, ψ1, …, ψJ are the normalised Legendre polynomials and Z0, Z1, … are i.i.d. standard Gaussian random variables. As explained in Section 2.2, this case corresponds to item (i) in Theorem 1: t0 = A = − 1, \(r_{10}(t_{0},t_{0})=\frac {1}{2}(\sigma ^{2}(t_{0}))^{\prime }<0\) (the behaviour at the point t0 = B = 1 is completely analogous). Denote

$$ D(\delta):=\max_{t\in \widetilde{\mathcal{M}}(\delta)}\bigl[-(\sigma^{2}(t))^{\prime}\bigr], $$

where \(\widetilde{\mathcal{M}}(\delta) = \mathcal{M}(\delta) \cap [-1,0]\). In what follows, we assume that δ is such that \(\widetilde{\mathcal{M}}(\delta) = [A,b]\) for some b ∈ (− 1, 0). In the considered case, the absolute value of (σ2(t))′ decays in some right vicinity of the point t0 = − 1 (for instance, the plot of σ2(t) for J = 4 is given in Fig. 3), and therefore we can take δ such that

$$ D(\delta)=-(\sigma^{2}(A))^{\prime}=2|r_{10}(A,A)|. $$
(62)
Fig. 3

Plot of the variance σ2(t) = Var(Υ(t)) for J = 4. Note that S = maxt σ2(t) = (J + 1)2/2 = 25/2, and the set \(\mathcal{M}(\delta) := \{t : \sigma^{2}(t) > S/(1+\delta)\}\) is an interval for any δ < 4.44
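For illustration, the variance in Fig. 3 can be reproduced with a few lines of Python; the following is a minimal sketch, assuming the representation (61) with i.i.d. standard Gaussian Zj and the normalised Legendre basis \(\psi_{j}(t)=\sqrt{(2j+1)/2}\, P_{j}(t)\):

```python
# Minimal sketch: variance of the process (61) with the normalised
# Legendre basis psi_j(t) = sqrt((2j+1)/2) * P_j(t) on [-1, 1].
import numpy as np
from numpy.polynomial.legendre import Legendre

J = 4
t = np.linspace(-1.0, 1.0, 1001)

def psi(j, x):
    # j-th normalised Legendre polynomial evaluated at x
    return np.sqrt((2 * j + 1) / 2) * Legendre.basis(j)(x)

sigma2 = sum(psi(j, t) ** 2 for j in range(J + 1))   # Var(Upsilon(t))
print(sigma2.max(), (J + 1) ** 2 / 2)                # both 12.5: S = (J+1)^2/2
print(t[np.argmax(sigma2)])                          # maximum attained at t = -1
```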

Now we aim to find a lower bound for \(\min_{s,t\in \widetilde{\mathcal{M}}(\delta)}r(s,t)\). The two-dimensional mean value theorem yields

$$ \begin{array}{@{}rcl@{}} r(s,t) =r(A,A) +{{\int}_{0}^{1}}\left[(s-A)r_{10}(A+h(s-A),A+h(t-A))\right.\\ \left.+(t-A)r_{01}(A+h(s-A),A+h(t-A))\right]dh. \end{array} $$

We get

$$ r(s,t)\geq S-2(b-A)D_{1}(\delta),\qquad s,t\in \widetilde{\mathcal{M}}(\delta), $$

where \(D_{1}(\delta) = \max_{s,t\in \widetilde{\mathcal{M}}(\delta)}(-r_{01}(s,t))\). The inequality (35) now reads

$$ \chi<\chi_{1}(\delta):=\min\left\{ \delta,\frac{S-(b-A)D_{1}(\delta)}{(b-A)D_{1}(\delta)}\right\}. $$
(63)

Note that for the Legendre polynomials, A = − 1 and

$$ \begin{array}{@{}rcl@{}} D_{1}(\delta) = \max_{s,t\in \widetilde{\mathcal{M}}(\delta)} \Bigl(- {\sum}_{j=0}^{J} \psi_{j}(s) \psi^{\prime}_{j}(t) \Bigr)= - {\sum}_{j=0}^{J} \psi_{j}(-1) \psi^{\prime}_{j}(-1), \end{array} $$
(64)

because for δ small enough it holds

$$ \begin{array}{@{}rcl@{}} - \psi_{j}(s) \psi^{\prime}_{j}(t) \geq 0 \qquad \forall s,t \in \widetilde{\mathcal{M}}(\delta),\ \forall j=0,1,\ldots \end{array} $$
(65)

and

$$\operatornamewithlimits{arg max}_{s \in \widetilde{\mathcal{M}}(\delta)} |\psi^{\prime}_{j}(s)| = \operatornamewithlimits{arg max}_{t \in \widetilde{\mathcal{M}}(\delta)} |\psi_{j}(t)| = -1,$$

see, e.g., Section 5.1 of Konakov and Panov (2016b). The last expression in Eq. 64 and the constant S can be computed directly:

$$ \begin{array}{@{}rcl@{}} - {\sum}_{j=0}^{J} \psi^{\prime}_{j}(-1) \psi_{j}(-1) = \frac{1}{4} {\sum}_{j=0}^{J} j(j+1)(2j+1)&=&\frac{1}{8}J(J+1)^{2}(J+2),\\ S = {\sum}_{j=0}^{J} {\psi_{j}^{2}}(-1)={\sum}_{j=0}^{J} \frac{2j+1}{2} &=& \frac{(J+1)^{2}}{2}. \end{array} $$

Therefore, we conclude that

$$ \begin{array}{@{}rcl@{}} \chi_{1}(\delta) = \min\left\{ \delta,\frac{4}{(b-A)J(J+2)} - 1 \right\}. \end{array} $$
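These closed-form values are easy to verify numerically; a minimal sketch for J = 4, with the same normalised Legendre basis as above:

```python
# Sanity check of Eq. 64 and the closed forms for D_1 and S at J = 4.
import numpy as np
from numpy.polynomial.legendre import Legendre

J = 4
psis = [Legendre.basis(j) * np.sqrt((2 * j + 1) / 2) for j in range(J + 1)]

D1 = -sum(p(-1.0) * p.deriv()(-1.0) for p in psis)
print(D1, J * (J + 1) ** 2 * (J + 2) / 8)   # both 75.0

S = sum(p(-1.0) ** 2 for p in psis)
print(S, (J + 1) ** 2 / 2)                  # both 12.5
```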

The second restriction on χ arises due to Eq. 39:

$$ \chi\leq\chi_{2}(\delta):=\min_{t\in \widetilde{\mathcal{M}}(\delta)} \mathbb{P}si(t), \qquad \mathbb{P}si(t) := \frac{r_{10}^{2}(t,t)} {r_{11}(t,t)}. $$
(66)

This function cannot be simplified further in the considered particular case and will be analysed numerically below.

The last restriction on χ appears due to Corollary 1. Applying Theorem 8.1 from Piterbarg (1996), we get

$$ \begin{array}{@{}rcl@{}} P_{2u}(X(s)+X(t),\widetilde{\mathcal{M}}(\delta) \times (\mathcal{M}(\delta)\setminus \widetilde{\mathcal{M}}(\delta)))\leq C(b-A)^{2}ue^{-2u^{2}/R(\delta)} \end{array} $$
(67)

with some C > 0 and

$$R(\delta) =\max_{ \begin{smallmatrix} s\in \widetilde{\mathcal{M}}(\delta), \\ t \in \mathcal{M}(\delta)\setminus \widetilde{\mathcal{M}}(\delta) \end{smallmatrix}} \text{Var}({\Upsilon}_{s}+{\Upsilon}_{t}).$$

We have

$$ \text{Var} ({\Upsilon}_{s}+{\Upsilon}_{t}) =r(s,s)+r(t,t)+2r(s,t) = {\sum}_{j=0}^{J} \bigl(\psi_{j} (s) + \psi_{j}(t) \bigr)^{2}. $$

For the Legendre polynomials it holds that \(\psi_{j}(t) = (-1)^{j}\psi_{j}(-t)\), and therefore

$$ \begin{array}{@{}rcl@{}} R(\delta) = \max_{s,t \in \widetilde{\mathcal{M}}(\delta)} \mathcal{R}(s,t), \qquad \text{where} \quad \mathcal{R}(s,t):={\sum}_{j=0}^{J} \bigl(\psi_{j} (s) + (-1)^{j} \psi_{j}(t) \bigr)^{2}. \end{array} $$

Empirically, we find that the maximum of \(\mathcal{R}(s,t)\) is attained at the point (− 1,− 1) (see Fig. 4), and therefore

$$ \begin{array}{@{}rcl@{}} R(\delta) = \begin{cases} (J+1)(J+2), & J \text{ is even},\\ J(J+1), & J \text{ is odd}, \end{cases} \end{array} $$
Fig. 4

Plot of the function \(\mathcal{R}(s,t)\) for J = 4. The maximum is attained at the point (− 1,− 1)

for any δ guaranteeing that the set \(\mathcal{M}(\delta)\) is an interval. Therefore, from Eq. 67 we get the last restriction on χ:

$$ \chi<\chi_{3}(\delta):=\frac{4S}{R(\delta)}-1 = \begin{cases} J/(J+2), & J \text{ is even},\\ (J+2)/J, & J \text{ is odd}. \end{cases} $$
(68)
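Both the empirical claim on the maximum of \(\mathcal{R}(s,t)\) and the resulting value of χ3 can be checked on a grid; a minimal sketch for J = 4:

```python
# Grid check: maximum of R(s,t) over [-1,0]^2 and the resulting chi_3.
import numpy as np
from numpy.polynomial.legendre import Legendre

J = 4
grid = np.linspace(-1.0, 0.0, 201)
s, t = np.meshgrid(grid, grid)

def psi(j, x):
    return np.sqrt((2 * j + 1) / 2) * Legendre.basis(j)(x)

R = sum((psi(j, s) + (-1) ** j * psi(j, t)) ** 2 for j in range(J + 1))
i = np.unravel_index(np.argmax(R), R.shape)
print(s[i], t[i], R.max())        # -1.0 -1.0 30.0 = (J+1)(J+2) for even J
S = (J + 1) ** 2 / 2
print(4 * S / R.max() - 1)        # chi_3 = J/(J+2) = 2/3
```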

Finally, we conclude that the optimal value of χ is obtained as follows:

$$ \chi_{opt}=\max_{\delta} m(\delta), \qquad \text{where} \quad m(\delta):=\min\{\chi_{1}(\delta),\chi_{2}(\delta),\chi_{3}(\delta)\}. $$

This optimisation procedure is illustrated in Fig. 5. The left panel presents the plots of the functions χ1(δ), χ2(δ), χ3(δ), while the right panel depicts the minimum of the three. It turns out that the maximum over δ is equal to 2/3, and this value is attained for any δ ∈ (0.71, 2.13); a grid-based numerical sketch of the whole procedure is given after Fig. 5.

Fig. 5

Plots of the functions χ1(δ), χ2(δ), χ3(δ) (left) and of the function m(δ) = min{χ1(δ), χ2(δ), χ3(δ)} (right) for J = 4. The maximal value of m(δ) is equal to J/(J + 2) = 2/3, and this value is attained for any δ ∈ (0.71, 2.13)
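The following minimal sketch mirrors the grid-based procedure behind Fig. 5 (Ψ, χ1, χ2, χ3 are as defined above; the grid resolutions and the δ-range are arbitrary choices):

```python
# Grid-based sketch of the optimisation chi_opt = max_delta m(delta), J = 4.
import numpy as np
from numpy.polynomial.legendre import Legendre

J, A = 4, -1.0
ts = np.linspace(-1.0, 0.0, 4001)

psis  = [Legendre.basis(j) * np.sqrt((2 * j + 1) / 2) for j in range(J + 1)]
dpsis = [p.deriv() for p in psis]

sigma2 = sum(p(ts) ** 2 for p in psis)               # sigma^2(t) on [-1, 0]
S = (J + 1) ** 2 / 2
# Psi(t) = r_10(t,t)^2 / r_11(t,t) with r(s,t) = sum_j psi_j(s) psi_j(t)
r10 = sum(dp(ts) * p(ts) for p, dp in zip(psis, dpsis))
r11 = sum(dp(ts) ** 2 for dp in dpsis)
Psi = r10 ** 2 / r11

def m(delta):
    inside = sigma2 > S / (1 + delta)                # grid version of M~(delta)
    b = ts[inside].max()
    chi1 = min(delta, 4 / ((b - A) * J * (J + 2)) - 1)
    chi2 = Psi[inside].min()
    chi3 = J / (J + 2)                               # J even, cf. Eq. 68
    return min(chi1, chi2, chi3)

deltas = np.linspace(0.05, 4.0, 800)
vals = np.array([m(d) for d in deltas])
print(vals.max())                                    # approx 2/3
print(deltas[vals > vals.max() - 1e-6][[0, -1]])     # approx [0.71, 2.13]
```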

Appendix B: SBR-type theorem for projection density estimates

In this subsection, we briefly discuss the SBR-type (Smirnov–Bickel–Rosenblatt) theorem for the estimate (19). The next theorem shows that the distribution of the normalised maximal deviation \(\sqrt{n/M_{n}}\,\mathcal{R}_{n}\) converges to the Gumbel distribution. Nevertheless, the rate of this convergence is very slow, of logarithmic order.

Theorem 3

Assume that \(p \in \mathcal{P}_{q,H}\) with some q,H > 0 and Hölder exponent β ∈ (0, 1], and that the basis functions ψj(x), j = 0, 1, 2,..., satisfy assumptions (A1) and (A2). Let M = Mn = ⌊nλ⌋ with λ ∈ (0, 1).

  (i)

    For any x ∈ ℝ, it holds

    $$ \begin{array}{@{}rcl@{}} \mathbb{P} \Bigl\{\sqrt{\frac{n}{M_{n}}}\mathcal{R}_{n} \leq u_{M}(x) \Bigr\} =e^{- e^{-x}} \left( 1 +e^{-x} {\Lambda}_{M} (1+o(1)) \right), \end{array} $$
    (69)

    as n → ∞, where

    $$ \begin{array}{@{}rcl@{}} {\Lambda}_{M} = \frac{\bigl(\log \log (M) \bigr)^{2}}{16 \log (M)} \end{array} $$

    and

    $$ \begin{array}{@{}rcl@{}} u_{M}(x) &=& a_{M} + \frac{xS}{a_{M}}, \end{array} $$
    (70)
    $$ \begin{array}{@{}rcl@{}} a_{M} &=& \bigl(2S \log(M) \bigr)^{1/2} - \frac{S^{1/2}}{2^{3/2}} \frac{\log\bigl(\bigl(8 \pi^{2} S / \mathfrak{c}_{0} \bigr) \log(M) \bigr) } {\bigl(\log(M) \bigr)^{1/2} } \end{array} $$
    (71)

    with S = maxt σ2(t), and \(\mathfrak{c}_{0}\) defined by Eq. 15 for X(t) = Υ(t).

  (ii)

    In Eq. 69, \(\mathcal{R}_{n}\) can be replaced by \(\mathcal{D}_{n}\), provided λ ∈ (1/(2β + 1), 1).
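Before turning to the proof, here is a minimal sketch that tabulates the accompanying law of part (i), i.e. uM(x) and \(e^{-e^{-x}}(1+e^{-x}{\Lambda}_{M})\); the constant \(\mathfrak{c}_{0}\) is defined by Eq. 15 and is not reproduced here, so the value used below is only a placeholder assumption:

```python
# Accompanying law of Theorem 3(i): u_M(x) and e^{-e^{-x}}(1 + e^{-x} Lambda_M).
import numpy as np

J = 4
S = (J + 1) ** 2 / 2
c0 = 1.0                      # placeholder for the constant c_0 from Eq. 15

def a_M(M):
    logM = np.log(M)
    return np.sqrt(2 * S * logM) \
        - np.sqrt(S) / 2 ** 1.5 * np.log(8 * np.pi ** 2 * S / c0 * logM) / np.sqrt(logM)

def u_M(x, M):
    return a_M(M) + x * S / a_M(M)

def accompanying_cdf(x, M):
    # Gumbel limit with the first-order correction term Lambda_M
    Lam = np.log(np.log(M)) ** 2 / (16 * np.log(M))
    return np.exp(-np.exp(-x)) * (1 + np.exp(-x) * Lam)

for M in (10 ** 2, 10 ** 4, 10 ** 6):
    print(M, u_M(0.0, M), accompanying_cdf(0.0, M))
```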

Proof

(i). Note that the process Υ(x) defined by Eq. 21 has zero mean and variance given by Eq. 17. From Eqs. 23 and 15, we get, as u → ∞,

$$ \begin{array}{@{}rcl@{}} \mathbb{P} \Bigl\{\sqrt{\frac{n}{M}}\mathcal{R}_{n} \leq u \Bigr\} \!& \leq &\! \left[ \mathbb{P} \Bigl\{ \max|{\Upsilon}(x)| \leq u + \gamma_{n,M} \Bigr\} \right]^{M} + \mathcal{C}_{1} n^{-\kappa}\\ \!&=&\! \exp\Bigl\{M \log \Bigl(1 - \frac{ 1 }{2 \pi } e^{-\breve{u}_{n,M}^{2}/(2S)} \breve{u}_{n,M}^{-1} \bigl(\mathfrak{c}_{0} + \mathfrak{c}_{1} \breve{u}_{n,M}^{-2} + o(\breve{u}_{n,M}^{-2}) \bigr) \Bigr) \Bigr\} \\ && + \mathcal{C}_{1} n^{-\kappa} \\ \!&=&\! \exp\Bigl\{ - \frac{ M }{2 \pi } e^{-\breve{u}_{n,M}^{2}/(2S)} \breve{u}_{n,M}^{-1} \Bigl(\mathfrak{c}_{0} + \mathfrak{c}_{1} \breve{u}_{n,M}^{-2} + o(\breve{u}_{n,M}^{-2}) \Bigr)\Bigr\} \\ && + \mathcal{C}_{1} n^{-\kappa} \end{array} $$
(72)

with ŭn,M = u + γn,M. Next, let us substitute u = aM + xS/aM − γn,M with some aM → ∞ as M → ∞. We have

$$ \begin{array}{@{}rcl@{}} \mathbb{P} \Bigl\{\sqrt{\frac{n}{M}}\mathcal{R}_{n} \leq a_{M} + \frac{xS}{a_{M}} -\gamma_{n,M} \Bigr\} \\ \leq \exp\Bigl\{ -e^{-x} \cdot \mathcal{G}_{M} \cdot \Bigl(1+a_{M}^{-2} \bigl(-x^{2}(S/2) - xS +\mathfrak{c}_{1}/\mathfrak{c}_{0} \bigr) +o(1) \Bigr) \Bigr\} + \mathcal{C}_{1} n^{-\kappa}, \end{array} $$

as M → ∞, where

$$ \begin{array}{@{}rcl@{}} \mathcal{G}_{M} = \frac{M \mathfrak{c}_{0}}{2 \pi e^{{a_{M}^{2}}/(2S)} a_{M}}. \end{array} $$
(73)

Now we specify aM such that \(\mathcal{G}_{M} \asymp 1\). More concretely, let us find aM in the form aM = cM − dM/cM, where cM → ∞ and dM/cM → 0 as M → ∞. This form leads to the equalities

$$ \begin{array}{@{}rcl@{}} M = e^{{c_{M}^{2}}/(2S)}, \qquad e^{d_{M}/S} \mathfrak{c}_{0} = 2\pi c_{M}, \end{array} $$

which suggest

$$ \begin{array}{@{}rcl@{}} c_{M} = \sqrt{2S \log(M)}, \qquad d_{M} = S \log(2 \pi c_{M}/\mathfrak{c}_{0}). \end{array} $$

Under this choice, we get

$$ \begin{array}{@{}rcl@{}} \mathcal{G}_{M} = 1-\frac{{d_{M}^{2}}}{2S {c_{M}^{2}}} (1+o(1)). \end{array} $$
(74)

Therefore,

$$ \begin{array}{@{}rcl@{}} &&\mathbb{P} \left\{\sqrt{\frac{n}{M_{n}}}\mathcal{R}_{n} \leq a_{M} + \frac{xS}{a_{M}} -\gamma_{n,M}\right\}\\ &\leq& \exp\left\{ -e^{-x} \cdot \left( 1 - \frac{{d_{M}^{2}}}{2S {c_{M}^{2}}} (1 + o(1)) \right) \cdot \left( 1 + c_{M}^{-2} \left( -x^{2}(S/2) - xS +\mathfrak{c}_{1}/\mathfrak{c}_{0} \right) + o(1) \right) \right\} \\&&+ \mathcal{C}_{1} n^{-\kappa}\\ &=& \exp\left\{ -e^{-x} \left( 1 - {\Lambda}_{M} (1+o(1)) \right) \right\} + \mathcal{C}_{1} n^{-\kappa} = e^{-e^{-x} } \left( 1 + e^{-x} {\Lambda}_{M} (1+o(1)) \right). \end{array} $$

After replacing x by x + γn,M aM/S, we get

$$ \begin{array}{@{}rcl@{}} \mathbb{P} \Bigl\{\sqrt{\frac{n}{M_{n}}}\mathcal{R}_{n} \leq u_{M}(x)\Bigr\} &\leq& e^{-e^{-x} } \bigl(1 + e^{-x} {\Lambda}_{M} (1+o(1)) \bigr),\quad n\to \infty,\end{array} $$

because γn,M aM converges to zero at a polynomial rate (here we use the assumption M = ⌊nλ⌋, λ ∈ (0, 1), for the first time; see Remark 5). The proof of the reverse inequality follows from the second statement of Proposition 1.
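As a side check of the above choice of cM and dM, a minimal numerical sketch (again with a placeholder value for \(\mathfrak{c}_{0}\)) showing that \(\mathcal{G}_{M}\) indeed approaches 1, in line with Eq. 74, although only at a logarithmic rate:

```python
# Check G_M -> 1 for c_M = sqrt(2 S log M), d_M = S log(2 pi c_M / c0),
# a_M = c_M - d_M / c_M; c0 = 1.0 is a placeholder value.
import numpy as np

S, c0 = 12.5, 1.0
for M in (1e3, 1e6, 1e9, 1e12):
    cM = np.sqrt(2 * S * np.log(M))
    dM = S * np.log(2 * np.pi * cM / c0)
    aM = cM - dM / cM
    GM = M * c0 / (2 * np.pi * np.exp(aM ** 2 / (2 * S)) * aM)
    # G_M vs its leading-order prediction from Eq. 74 (convergence is slow)
    print(M, GM, 1 - dM ** 2 / (2 * S * cM ** 2))
```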

(ii). It holds

$$ \begin{array}{@{}rcl@{}} \bigl| \hat{p}_{n}(x) - p(x) \bigr| \leq \bigl| \hat{p}_{n}(x) - {\mathbb E} \hat{p}_{n}(x) \bigr| + \bigl| {\mathbb E} \hat{p}_{n}(x) - p(x) \bigr|. \end{array} $$
(75)

Let us show that the second summand can be bounded from above by an expression of order M−β. In fact, for any x ∈ [A,B],

$$ \begin{array}{@{}rcl@{}} 0 &=& {\sum}_{m=1}^{M}{\int}_{I_{m}} \psi_j^{(m)}(y) dy \cdot \psi_j^{(m)}(x), \qquad j=1,2,...,\\ 1 &=& {\sum}_{m=1}^{M}{\int}_{I_{m}} \psi^{(m)}_{0}(y) dy \cdot \psi^{(m)}_{0}(x). \end{array} $$

Both equalities are obtained by direct calculations; e.g., the first equality follows from

$$ \begin{array}{@{}rcl@{}} {\int}_{I_{m}} \psi_j^{(m)}(y) dy &=& M^{-1/2} \sqrt{\frac{2j+1}{2}} {\int}_{-1}^{1} \psi_{j}(x) dx \\&=& M^{-1/2} \sqrt{\frac{2j+1}{2}} \frac{1}{2^{j} j!} \frac{d^{j-1}}{dx^{j-1}}(x^{2}-1)^{j} \Big|_{-1}^{1} =0. \end{array} $$

Therefore,

$$ \begin{array}{@{}rcl@{}} {\mathbb E}\hat{p}_{n}(x) - p(x) = {\sum}_{m=1}^{M} {\sum}_{j=0}^{J} \left[ {\int}_{I_{m}} \psi_j^{(m)}(y) \left( p(y) - p(x) \right) dy \cdot \psi_j^{(m)}(x) \right]. \end{array} $$

Applying the Cauchy–Schwarz inequality to the second sum, we get

$$ \begin{array}{@{}rcl@{}} \left| {\mathbb E}\hat{p}_{n}(x) - p(x) \right| &\leq& {\sum}_{m=1}^{M} \left( {\sum}_{j=0}^{J} \left( {\int}_{I_{m}} \psi_j^{(m)}(y) \left( p(y) - p(x) \right) dy \right)^{2} \right)^{1/2}\\ &&\left( {\sum}_{j=0}^{J} (\psi_j^{(m)}(x))^{2} \right)^{1/2}. \end{array} $$

Next, we apply the Cauchy–Schwarz inequality to the integral in the previous display:

$$ \begin{array}{@{}rcl@{}} \left| {\mathbb E}\hat{p}_{n}(x) - p(x) \right| &\leq& {\sum}_{m=1}^{M} \left( {\int}_{I_{m}} \left( p(y)- p(x) \right)^{2} dy \cdot {\sum}_{j=0}^{J} {\int}_{I_{m}} \left( \psi_j^{(m)}(y)\right)^{2} dy \right)^{1/2} \\ &&\cdot \left( {\sum}_{j=0}^{J} \left( \psi_j^{(m)}(x)\right)^{2} \right)^{1/2}. \end{array} $$

Now we will use that

$$ \begin{array}{@{}rcl@{}} {\int}_{I_{m}} \left( \psi_j^{(m)}(y)\right)^{2} dy=1, \forall j,m, \qquad \qquad {\sum}_{j=0}^{J} \left( \psi_j^{(m)}(x)\right)^{2} \leq C_{1} M, \end{array} $$

with some C1 > 0 depending on J. Moreover, since p is Hölder continuous with exponent β, we have

$$ \begin{array}{@{}rcl@{}} {\int}_{I_{m}} \left( p(y)- p(x) \right)^{2} dy \leq C_{2} M^{-(2\upbeta+1)}, \qquad \forall x \in I_{m}. \end{array} $$

We arrive at

$$ \begin{array}{@{}rcl@{}} \left| {\mathbb E} \hat{p}_{n}(x) - p(x) \right| \leq C_{3} M^{-\upbeta}, \qquad \forall x \in [A,B], \end{array} $$

with some constant C3 > 0. Substituting this result into Eq. 75, we get

$$ \begin{array}{@{}rcl@{}} \sqrt{\frac{n}{M}} \mathcal{D}_{n} \leq \sqrt{\frac{n}{M}} \mathcal{R}_{n} + C_{4} n^{1/2} M^{-\upbeta-1/2} \end{array} $$

where C4 = C3 q−1. On the other hand, we have

$$ \begin{array}{@{}rcl@{}} \left| \hat{p}_{n} (x) - p(x) \right| \geq \left| \hat{p}_{n} (x) - {\mathbb E} \hat{p}_{n} (x) \right| - \left| {\mathbb E} \hat{p}_{n} (x) - p(x) \right|, \end{array} $$

and therefore

$$ \begin{array}{@{}rcl@{}} \sqrt{\frac{n}{M}} \mathcal{D}_{n} \geq \sqrt{\frac{n}{M}} \mathcal{R}_{n} - C_{4} n^{1/2} M^{-\upbeta-1/2}. \end{array} $$

We conclude that

$$ \begin{array}{@{}rcl@{}} \mathbb{P} \left\{ \left| \sqrt{\frac{n}{M}} \mathcal{D}_{n} - \sqrt{\frac{n}{M}} \mathcal{R}_{n} \right| \leq C_{4} n^{1/2} M^{-\upbeta-1/2} \right\}=1. \end{array} $$

By Lemma 1, for any x ∈ ℝ,

$$ \begin{array}{@{}rcl@{}} \mathbb{P} \left\{ \sqrt{\frac{n}{M}} \mathcal{R}_{n} \leq x - C_{4} n^{1/2} M^{-\upbeta-1/2} \right\} &\leq& \mathbb{P} \left\{ \sqrt{\frac{n}{M}} \mathcal{D}_{n} \leq x \right\} \\ &\leq& \mathbb{P} \left\{ \sqrt{\frac{n}{M}} \mathcal{R}_{n} \leq x + C_{4} n^{1/2} M^{-\upbeta-1/2} \right\}. \end{array} $$

Substituting uM(x) defined by Eqs. 70 and 71 instead of x, we get that the left-hand side of the last display can be transformed as follows:

$$ \begin{array}{@{}rcl@{}} &&\mathbb{P} \left\{ \sqrt{\frac{n}{M}} \mathcal{R}_{n} \leq u_{M}(x) - C_{4} n^{1/2} M^{-\upbeta-1/2} \right\} \\ &=& \mathbb{P} \left\{ \sqrt{\frac{n}{M}} \mathcal{R}_{n} \leq u_{M}\bigl(x-C_{4} n^{1/2} M^{-\upbeta-1/2}a_{M}/S \bigr) \right\}\\ &=& e^{- e^{-x}} \left( 1 +e^{-x} {\Lambda}_{M} (1+o(1)) \right), \end{array} $$

provided \(n^{1/2} M^{-\upbeta-1/2} a_{M}\) converges to 0 at a polynomial rate. The last condition is fulfilled for any λ ∈ (1/(2β + 1), 1). The same argument applies to the right-hand side, and the desired result follows. □

Appendix C: one technical lemma

Lemma T1

Denote

$$ {\Theta}_{n,M}(x) := A_{M}(x+w_{n,M}) - A_{M}(x),$$

where wn,M > 0 converges to zero as n, M → ∞. Then

$$ \begin{array}{@{}rcl@{}} \sup_{x \in \mathbb{R}} {\Theta}_{n,M}(x) \leq c_{1} M^{\theta_{1}} w_{n,M} + c_{2} M^{-\theta_{2}} \end{array} $$

for some c1,c2 > 0 and any 𝜃1,𝜃2 > 0.

Proof

For x < cM − wn,M, we have Θn,M(x) = 0.

If xcM,

$$ \begin{array}{@{}rcl@{}} {\Theta}_{n,M}(x) &\leq& \sup_{x\geq c_{M}} \bigl[ A_{M}^{\prime}(x) \bigr] w_{n,M}= \sup_{x\geq c_{M}} \Bigl[ - A_{M}(x) {\sum}_{i=1}^{k} \mathscr{P}^{\prime}_{i}\bigl(x \bigr) \Bigr] M w_{n,M}. \end{array} $$

Note that for xcM, we have

$$ \begin{array}{@{}rcl@{}} 0 \leq - {\sum}_{i=1}^{k} \mathscr{P}^{\prime}_{i}\bigl(x \bigr) = \frac{2k}{\sqrt{S}} \phi\Bigl(\frac{x}{\sqrt{S}}\Bigr) < \frac{2k}{\sqrt{2\pi S}} e^{-{c_{M}^{2}}/(2S)}\lesssim M^{-1} e^{\sqrt{2S\log(M)}}. \end{array} $$

Since \(e^{\sqrt {2S\log (M)}}\lesssim M^{\theta _{1}}\) for any θ1 > 0, we conclude that

$$ \begin{array}{@{}rcl@{}} \sup_{x\geq c_{M}} {\Theta}_{n,M}(x) \lesssim c_{1} M^{\theta_{1}} w_{n,M}, \qquad n,M \to \infty \end{array} $$
(76)

for any 𝜃1 > 0. Finally, if x ∈ (cM − wn,M, cM), we have

$$ \begin{array}{@{}rcl@{}} {\Theta}_{n,M}(x) = A_{M}(x+ w_{n,M}) < {\Theta}_{n,M}(c_{M}) + A_{M}(c_{M}), \end{array} $$

where the first term is bounded by the expression in the right-hand side of Eq. 76. Now let us consider the second term:

$$ \begin{array}{@{}rcl@{}} A_{M}(c_{M}) &=& \exp \Bigl\{ -\frac{M \mathfrak{c}_{0}}{ 2\pi c_{M}} e^{-{c_{M}^{2}} / (2S)} (1 +o(1)) \Bigr\}\\ &=& \exp \Bigl\{ -\frac{ \mathfrak{c}_{0} e^{-S/2}}{ 2\pi} \frac{ e^{(2S \log M)^{1/2}}}{(2S \log(M))^{1/2}} (1 +o(1)) \Bigr\}. \end{array} $$

Applying (60), we conclude that AM(cM) converges to zero at a polynomial rate with respect to M. This observation completes the proof of Lemma T1. □

Cite this article

Konakov, V., Panov, V. & Piterbarg, V. Extremes of a class of non-stationary Gaussian processes and maximal deviation of projection density estimates. Extremes (2021). https://doi.org/10.1007/s10687-020-00402-2

Keywords

  • Non-stationary Gaussian processes
  • Rice method
  • Projection estimates
  • Confidence bands
  • Legendre polynomials

AMS 2000 Subject Classifications

  • 60G70
  • 60G15
  • 62G07