Abstract
In this paper, we consider the distribution of the supremum of non-stationary Gaussian processes and present a new theoretical result on its asymptotic behaviour. We focus on the case when the processes have a finite number of points attaining their maximal variance, but, unlike previously known results in this field, our main theorem yields an asymptotic representation of the corresponding distribution function with an exponentially decaying remainder term. This result can be efficiently used for studying projection density estimates based, for instance, on Legendre polynomials. More precisely, we construct a sequence of accompanying laws, which approximates the distribution of the maximal deviation of the considered estimates at a polynomial rate. Moreover, we construct confidence bands for densities that are honest at a polynomial rate for a broad class of densities.
References
Adler, R, Taylor, J: Random Fields and Geometry. Springer Science & Business Media (2009)
Azaïs, J-M, Wschebor, M: The distribution of the maximum of a Gaussian process: Rice method revisited. In and Out of Equilibrium 3, 107–129 (2000)
Azaïs, J-M, Wschebor, M: Level Sets and Extrema of Random Processes and Fields. Wiley (2009)
Bai, L, Dȩbicki, K, Hashorva, E, Ji, L: Extremes of threshold-dependent Gaussian processes. Sci. Chin. Math. 61(11), 1971–2002 (2018)
Bai, L, Dȩbicki, K, Hashorva, E, Luo, L: On generalised Piterbarg constants. Methodol. Comput. Appl. Probab. 20, 1 (2018)
Bickel, P, Rosenblatt, M: On some global measures of the deviations of density function estimates. Ann. Stat. 1(6), 1071–1095 (1973)
Bull, A: Honest adaptive confidence bands and self-similar functions. Electron. J. Stat. 6, 1490–1516 (2012)
Chernozhukov, V, Chetverikov, D, Kato, K: Anti-concentration and honest, adaptive confidence bands. Ann. Stat. 42(5), 1787–1818 (2014)
Giné, E, Koltchinskii, V, Sakhanenko, L: Kernel density estimators: convergence in distribution for weighted sup-norms. Probab. Theory Relat. Fields 130(2), 167–198 (2004)
Giné, E, Nickl, R: Confidence bands in density estimation. Ann. Stat. 38(2), 1122–1170 (2010)
Giné, E, Nickl, R: Mathematical Foundations of Infinite-Dimensional Statistical Models, vol 40. Cambridge University Press (2016)
Hashorva, E, Hüsler, J: Extremes of Gaussian processes with maximal variance near the boundary points. Methodol. Comput. Appl. Probab. 2(3), 255–269 (2000)
Hüsler, J, Piterbarg, V: On the ruin probability for physical fractional Brownian motion. Stoch. Process. Appl. 113(2), 315–332 (2004)
Hüsler, J, Piterbarg, V, Seleznjev, O: On convergence of the uniform norms for Gaussian processes and linear approximation problems. Ann. Appl. Probab. 13(4), 1615–1653 (2003)
Komlós, J, Major, P, Tusnády, G: An approximation of partial sums of independent rv’s and the sample DF. Zeitschrift für Wahrscheinlichkeitstheorie und Verw Gebiete 32, 111–131 (1975)
Konakov, V, Panov, V: Sup-norm convergence rates for Lévy density estimation. Extremes 19(3), 371–403 (2016)
Konakov, V, Panov, V: Convergence rates of maximal deviation distribution for projection estimates of Lévy densities. arXiv:1411.4750v3 (2016)
Konstant, D, Piterbarg, V: Extreme values of the cyclostationary Gaussian random process. J. Appl. Probab. 30(1), 82–97 (1993)
Marron, J, Wand, M: Exact mean integrated squared error. Ann. Stat., 712–736 (1992)
Michna, Z: Remarks on Pickands theorem. arXiv:0904.3832v1 (2009)
Piterbarg, V: Asymptotic Methods in the Theory of Gaussian Processes and Fields. AMS, Providence (1996)
Piterbarg, V: Twenty Lectures about Gaussian Processes. Atlantic Financial Press, London (2015)
Piterbarg, V, Prisiazhniuk, V: Asymptotic analysis of the probability of large excursions for a nonstationary Gaussian process. Teoriia Veroiatnostei i Matematicheskaia Statistika 18, 121–134 (1978)
Piterbarg, V, Simonova, I: Asymptotic expansions for the probabilities of large runs of nonstationary Gaussian processes. Math. Notes 35(6), 477–483 (1984)
Smirnov, N V: On the construction of confidence regions for the density of distribution of random variables, vol. 74, pp 189–191 (1950)
Wasserman, L: All of Nonparametric Statistics. Springer Science & Business Media (2006)
Acknowledgments
The article was prepared within the framework of the HSE University Basic Research Program. For the first author the study has been funded by the Russian Science Foundation (project No 20-11-20119).
Appendices
Appendix A: choice of the parameter δ
In this section, we provide an example of the choice of the parameter δ in Theorem 1. We concentrate on the case of the Gaussian process
where \(Z_{0}, Z_{1}, \ldots\) are the Legendre polynomials. As explained in Section 2.2, this case corresponds to item (i) in Theorem 1: \(t_{0} = A = -1\), \(r_{10}(t_{0},t_{0})=\frac {1}{2}(\sigma ^{2}(t_{0}))^{\prime }<0\) (the behaviour at the point \(t_{0} = B = 1\) is completely analogous). Denote
where \(\widetilde {{\mathscr{M}}}(\delta ) = {\mathscr{M}}(\delta ) \cap [-1, 0]\). In what follows, we assume that δ is such that \(\widetilde {{\mathscr{M}}}(\delta ) = [A,b]\) for some b ∈ (− 1, 0). In the considered case, the absolute value of \((\sigma ^{2}(t))^{\prime }\) decays in some right vicinity of the point \(t_{0} = -1\) (for instance, the plot of \(\sigma ^{2}(t)\) for J = 4 is given in Fig. 3), and therefore we can take δ such that
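For orientation, if one assumes (as in Section 2.2) that the process is a finite Legendre expansion of order J with i.i.d. standard normal coefficients, the variance admits an explicit form:

```latex
% assumed form of the variance for an order-J Legendre expansion
\sigma^{2}(t) \;=\; \sum_{j=0}^{J} \psi_{j}^{2}(t),
\qquad
\psi_{j}(t) \;=\; \sqrt{\frac{2j+1}{2}}\, P_{j}(t),
```

where \(P_{j}\) denote the classical Legendre polynomials; the derivative \((\sigma^{2}(t))^{\prime} = 2{\sum }_{j} \psi_{j}(t)\psi_{j}^{\prime}(t)\) is the quantity whose absolute value decays near \(t_{0} = -1\).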
Now we aim to find a lower bound for \(\min \limits _{s,t\in \widetilde {{\mathscr{M}}}(\delta )}r(s,t)\). The two-dimensional mean value theorem yields
We get
where \(D_{1}(\delta ) = \max \limits _{s,t\in \widetilde {{\mathscr{M}}}(\delta )}(-r_{01}(s,t))\). The inequality (35) reads as
Note that for the Legendre polynomials, A = − 1 and
because for δ small enough it holds
and
see, e.g., Section 5.1 from Konakov and Panov (2016b). The last expression in Eq. 64 and S can be directly computed:
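As a sanity check, under the normalized-Legendre assumption \(\psi_{j}(t) = \sqrt{(2j+1)/2}\,P_{j}(t)\) (so that \(P_{j}(1)=1\)), the endpoint variance can be computed in a few lines of Python; the helper name below is illustrative:

```python
# Sketch under the assumption of a normalized Legendre basis
# psi_j(t) = sqrt((2j + 1) / 2) * P_j(t); since P_j(1) = 1, the variance
# at the endpoint t = 1 is sum_{j=0}^{J} (2j + 1) / 2 = (J + 1)^2 / 2.
def sigma2_at_endpoint(J):
    """Variance of the order-J Legendre expansion at t = 1 (or t = -1)."""
    return sum((2 * j + 1) / 2 for j in range(J + 1))

assert sigma2_at_endpoint(4) == 12.5  # closed form: (4 + 1)^2 / 2
```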
Therefore, we conclude that
The second restriction on χ arises due to Eq. 39:
This function cannot be simplified in the considered particular case and will be analysed numerically later.
The last restriction on χ appears due to Corollary 1. Applying Theorem 8.1 from Piterbarg (1996), we get
with some C > 0 and
We have
For the Legendre polynomials, it holds \(\psi _{j}(t) = (-1)^{j}\psi _{j}(-t)\), and therefore
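This symmetry of the classical Legendre polynomials can be verified numerically, e.g. with NumPy (the normalization constant would not change the sign pattern):

```python
import numpy as np
from numpy.polynomial import legendre

# Numerical check of P_j(-t) = (-1)^j P_j(t) for the classical
# Legendre polynomials.
for j in range(6):
    c = np.zeros(j + 1)
    c[j] = 1.0  # coefficient vector selecting P_j
    for t in np.linspace(-1.0, 1.0, 9):
        assert np.isclose(legendre.legval(-t, c),
                          (-1) ** j * legendre.legval(t, c))
```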
Empirically, we get that the maximum over (s,t) is attained at the point (− 1,− 1) (see Fig. 4), and therefore
for any δ, which guarantees that the set \({\mathscr{M}}(\delta )\) is an interval. Therefore, from Eq. 67 we get the last restriction on χ:
Finally, we conclude that for the optimisation of χ, we should find
$$ \chi = \max\limits_{\delta} \min\bigl\{ \chi_{1}(\delta), \chi_{2}(\delta), \chi_{3}(\delta) \bigr\}. $$
This optimisation procedure is illustrated in Fig. 5. The left picture presents the plots of the functions χ1(δ), χ2(δ), χ3(δ), while the right picture depicts the minimum of the three. It turns out that the maximum over δ is equal to 2/3, and this value is attained for any δ ∈ (0.71, 2.13).
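The grid search behind such a picture can be sketched as follows; the χi below are toy placeholders, since the actual restriction functions are problem-specific and were evaluated numerically in the paper:

```python
import numpy as np

# Toy placeholders for chi_1, chi_2, chi_3 (hypothetical; the real
# functions come from the restrictions derived above).
chi1 = lambda d: d
chi2 = lambda d: 2.0 - d
chi3 = lambda d: np.full_like(d, 1.5)

# Grid search for max over delta of min{chi_1, chi_2, chi_3}.
deltas = np.linspace(0.01, 2.5, 100_001)
lower = np.minimum.reduce([chi1(deltas), chi2(deltas), chi3(deltas)])
best_delta = deltas[np.argmax(lower)]
best_value = lower.max()
# For these toy functions the optimum is best_delta = 1, best_value = 1.
```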
Appendix B: SBR-type theorem for projection density estimates
In this appendix, we briefly discuss the SBR-type theorem for the estimate (19). The next theorem shows that the distribution of \(\mathcal {R}_{n}\) converges to the Gumbel distribution. Nevertheless, the rate of this convergence is very slow, of logarithmic order.
Theorem 3
Assume that p belongs to the density class with parameters q,H > 0, β ∈ (0, 1], and that the basis functions ψj(x), j = 0, 1, 2,... satisfy assumptions (A1) and (A2). Let \(M = M_{n} = \lfloor n^{\lambda }\rfloor\) with λ ∈ (0, 1).
(i)
For any x ∈ ℝ, it holds
$$ \begin{array}{@{}rcl@{}} \mathbb{P} \Bigl\{\sqrt{\frac{n}{M_{n}}}\mathcal{R}_{n} \leq u_{M}(x) \Bigr\} =e^{- e^{-x}} \left( 1 +e^{-x} {\Lambda}_{M} (1+o(1)) \right), \end{array} $$
(69)
as n →∞, where
$$ \begin{array}{@{}rcl@{}} {\Lambda}_{M} = \frac{\bigl(\log \log (M) \bigr)^{2}}{16 \log (M)} \end{array} $$and
$$ \begin{array}{@{}rcl@{}} u_{M}(x) &=& a_{M} + \frac{xS}{a_{M}}, \end{array} $$
(70)
$$ \begin{array}{@{}rcl@{}} a_{M} &=& \bigl(2S \log(M) \bigr)^{1/2} - \frac{S^{1/2}}{2^{3/2}} \frac{\log\bigl(\bigl(8 \pi^{2} S / \mathfrak{c}_{0} \bigr) \log(M) \bigr) } {\bigl(\log(M) \bigr)^{1/2} } \end{array} $$
(71)
with \(S = \max _{t}\sigma ^{2}(t)\), and \(\mathfrak {c}_{0}\) defined by Eq. 15 for \(X(t) = {\Upsilon }(t)\).
(ii)
In Eq. 69, the statistic \(\mathcal {R}_{n}\) can be replaced by its empirical counterpart, provided λ ∈ (1/(2β + 1), 1).
Proof
(i). Note that the process Υ(x) defined by Eq. 21 has zero mean and variance given by Eq. 17. From Eqs. 23 and 15, we get, as u →∞,
with \(\breve {u}_{n,M} = u + \gamma _{n,M}\). Next, let us substitute \(u = a_{M} + xS/a_{M} - \gamma _{n,M}\) with some \(a_{M} \to \infty \) as M →∞. We have
as M →∞, where
Now we specify \(a_{M}\) such that this expression is of order 1. More concretely, let us find \(a_{M}\) in the form \(a_{M} = c_{M} - d_{M}/c_{M}\), where \(c_{M}, d_{M} \to \infty \) and \(d_{M}/c_{M} \to 0\) as M →∞. This form leads to the equalities
which suggest
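A choice consistent with the constants appearing in Eq. 71 is

```latex
c_{M} = \bigl(2S\log (M)\bigr)^{1/2},
\qquad
d_{M} = \frac{S}{2}\,
\log \Bigl( \bigl( 8\pi^{2} S/\mathfrak{c}_{0} \bigr) \log (M) \Bigr),
```

so that \(a_{M} = c_{M} - d_{M}/c_{M}\) reproduces Eq. 71 term by term.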
Under this choice, we get
Therefore,
After changing x to \(x + \gamma _{n,M} a_{M}/S\), we get
because \(\gamma _{n,M} a_{M}\) converges to zero at polynomial rate (here we use the assumption \(M = \lfloor n^{\lambda }\rfloor \), λ ∈ (0, 1), for the first time; see Remark 5). The proof of the reverse inequality follows from the second statement of Proposition 1.
(ii). It holds
Let us show that the second summand can be upper bounded by an expression of order M− 1. In fact, for any x ∈ [A,B],
Both equalities are obtained by direct calculations; e.g., the first equality follows from
Therefore,
Applying the Cauchy-Schwarz inequality for the second sum, we get
Next, we apply the Cauchy-Schwarz inequality for the integral in Eq. 76:
Now we will use that
with some C1 > 0 depending on J. We have
We arrive at
with some constant \(C_{3} > 0\). Substituting this result into Eq. 75, we get
where \(C_{4} = C_{3} q^{-1}\). On the other hand, we have
and therefore
We conclude that
By Lemma 1, for any x ∈ ℝ,
Substituting \(u_{M}(x)\) defined by Eqs. 70–71 instead of x, we get that the left-hand side of Eq. 76 can be transformed as follows
provided \(n^{1/2} M^{-\beta -1/2} a_{M}\) converges to 0 at polynomial rate. The last condition is fulfilled for any λ ∈ (1/(2β + 1), 1). The same argument holds for the right-hand side of Eq. 76, and the desired result follows. □
Appendix C: one technical lemma
Lemma T1
Denote
where \(w_{n,M} > 0\) converges to zero as n,M →∞. Then
for some \(c_{1}, c_{2} > 0\) and any \(\theta _{1}, \theta _{2} > 0\).
Proof
For \(x < c_{M} - w_{n,M}\), we have \({\Theta }_{n,M}(x) = 0\).
If \(x \geq c_{M}\),
Note that for \(x \geq c_{M}\), we have
Since \(e^{\sqrt {2S\log (M)}}\lesssim M^{\theta _{1}}\) for any \(\theta _{1}>0\), we conclude that
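Indeed, rewriting the exponential as a power of M makes this bound transparent:

```latex
e^{\sqrt{2S\log (M)}}
= M^{\sqrt{2S/\log (M)}}
\le M^{\theta_{1}}
\quad\text{as soon as } \log (M) \ge 2S/\theta_{1}^{2}.
```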
for any \(\theta _{1} > 0\). Finally, if \(x \in (c_{M} - w_{n,M}, c_{M})\), we have
where the first term is bounded by the expression on the right-hand side of Eq. 76. Now let us consider the second term:
Applying (60), we conclude that \(A_{M}(c_{M})\) converges to zero at polynomial rate with respect to M. This observation completes the proof of Lemma T1. □
Cite this article
Konakov, V., Panov, V. & Piterbarg, V. Extremes of a class of non-stationary Gaussian processes and maximal deviation of projection density estimates. Extremes 24, 617–651 (2021). https://doi.org/10.1007/s10687-020-00402-2
Keywords
- Non-stationary Gaussian processes
- Rice method
- Projection estimates
- Confidence bands
- Legendre polynomials