
The subleading order of two dimensional cover times


The \(\varepsilon \)-cover time of the two dimensional torus by Brownian motion is the time it takes for the process to come within distance \(\varepsilon >0\) of every point. Its leading order in the small-\(\varepsilon \) regime was established by Dembo et al. (Ann. Math. 160:433–464, 2004). In this work, the second order correction is identified. The approach relies on a multi-scale refinement of the second moment method, and draws on ideas from the study of the extremes of branching Brownian motion.





Footnotes

  1. Strictly speaking [13] deals with the (closely related) discrete setting, as does [21].

  2. Although it does not appear in the literature, it is expected that the fluctuations of the \(\varepsilon \)-cover time of the Euclidean torus in \(d\ge 3\) behave as in the discrete setting.

  3. We do not prove that this is the only strategy, but [2] proves the analogous statement for branching Brownian motion, and this seems very likely to carry over to our setting.

  4. Or, more accurately, a forest of \(\sim r_0^{-2}\) trees, the latter being the number of balls that can be “packed” into the highest scale.

  5. To be precise, a version of BBM with branching at discrete integer times and average branching factor \(e^2\), run up to time L.

  6. To verify this one must rearrange (0.1) of [15] appropriately, so that the cover time is rescaled by the expected hitting time of a leaf.


References

  1. Aïdékon, E., Berestycki, J., Brunet, É., Shi, Z.: Branching Brownian motion seen from its tip. Probab. Theory Related Fields 157(1–2), 405–451 (2013)

  2. Arguin, L.-P., Bovier, A., Kistler, N.: Genealogy of extremal particles of branching Brownian motion. Commun. Pure Appl. Math. 64(12), 1647–1676 (2011)

  3. Arguin, L.-P., Bovier, A., Kistler, N.: The extremal process of branching Brownian motion. Probab. Theory Related Fields 157(3–4), 535–574 (2013)

  4. Belius, D.: Gumbel fluctuations for cover times in the discrete torus. Probab. Theory Related Fields 157(3–4), 635–689 (2013)

  5. Bramson, M.: Maximal displacement of branching Brownian motion. Commun. Pure Appl. Math. 31(5), 531–581 (1978)

  6. Bramson, M.: Convergence of solutions of the Kolmogorov equation to traveling waves. Mem. Am. Math. Soc. 44(285), 1–190 (1983)

  7. Bramson, M., Zeitouni, O.: Tightness of the recentered maximum of the two-dimensional discrete Gaussian free field. Commun. Pure Appl. Math. 65(1), 1–20 (2012)

  8. Carr, P., Schröder, M.: Bessel processes, the integral of geometric Brownian motion, and Asian options. Teor. Veroyatnost. i Primenen. 48(3), 503–533 (2003)

  9. Comets, F., Gallesco, C., Popov, S., Vachkovskaia, M.: On large deviations for the cover time of two-dimensional torus. Electron. J. Probab. 18(96), 18 (2013)

  10. Dembo, A., Peres, Y., Rosen, J.: Brownian motion on compact manifolds: cover time and late points. Electron. J. Probab. 8(15), 1–14 (2003)

  11. Dembo, A., Peres, Y., Rosen, J., Zeitouni, O.: Cover times for Brownian motion and random walks in two dimensions. Ann. Math. 160(2), 433–464 (2004)

  12. Dembo, A., Peres, Y., Rosen, J., Zeitouni, O.: Late points for random walks in two dimensions. Ann. Probab. 34(1), 219–263 (2006)

  13. Ding, J.: On cover times for 2D lattices. Electron. J. Probab. 17(45), 18 (2012)

  14. Ding, J.: Asymptotics of cover times via Gaussian free fields: bounded-degree graphs and general trees. Ann. Probab. 42(2), 464–496 (2014)

  15. Ding, J., Zeitouni, O.: A sharp estimate for cover times on binary trees. Stochastic Process. Appl. 122(5), 2117–2133 (2012)

  16. Eisenbaum, N., Kaspi, H., Marcus, M.B., Rosen, J., Shi, Z.: A Ray–Knight theorem for symmetric Markov processes. Ann. Probab. 28(4), 1781–1796 (2000)

  17. Fitzsimmons, P.J., Pitman, J.: Kac’s moment formula and the Feynman–Kac formula for additive functionals of a Markov process. Stochastic Process. Appl. 79(1), 117–134 (1999)

  18. Goodman, J., den Hollander, F.: Extremal geometry of a Brownian porous medium. Probab. Theory Related Fields 160(1–2), 127–174 (2014)

  19. Kistler, N.: Derrida’s random energy models. Lecture Notes in Mathematics, vol. 2143. Springer, Berlin (2015)

  20. Lawler, G.F.: Intersections of Random Walks. Probability and Its Applications. Birkhäuser Boston Inc., Boston (1991)

  21. Lawler, G.F.: On the covering time of a disc by simple random walk in two dimensions. In: Seminar on Stochastic Processes, 1992 (Seattle, WA, 1992), Progr. Probab., vol. 33, pp. 189–207. Birkhäuser, Boston (1993)

  22. Marcus, M.B., Rosen, J.: Markov Processes, Gaussian Processes, and Local Times. Cambridge Studies in Advanced Mathematics, vol. 100. Cambridge University Press, Cambridge (2006)

  23. Matthews, P.: Covering problems for Brownian motion on spheres. Ann. Probab. 16(1), 189–199 (1988)

  24. Mörters, P., Peres, Y.: Brownian Motion. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge (2010). With an appendix by Oded Schramm and Wendelin Werner

  25. Pitman, J., Yor, M.: A decomposition of Bessel bridges. Z. Wahrsch. Verw. Gebiete 59(4), 425–457 (1982)

  26. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion, vol. 293, 3rd edn. Springer, Berlin (1999)

  27. Scheike, T.H.: A boundary-crossing result for Brownian motion. J. Appl. Probab. 29(2), 448–453 (1992)

  28. Sznitman, A.-S.: Topics in Occupation Times and Gaussian Free Fields. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich (2012)

  29. Ueno, T.: On recurrent Markov processes. Kōdai Math. Sem. Rep. 12, 109–142 (1960)

  30. Webb, C.: Exact asymptotics of the freezing transition of a logarithmically correlated random energy model. J. Stat. Phys. 145(6), 1595–1619 (2011)

Acknowledgements


The authors thank Louis-Pierre Arguin, Alain-Sol Sznitman, and Augusto Teixeira for useful discussions, and Serguei Popov for suggesting the use of renewals to prove Proposition 8.9. We are very grateful to Jay Rosen for his thorough reading of the article and many suggestions for improvements. Finally, D.B. thanks the Jean Morlet initiative for its hospitality and support during a visit to the CIRM in Luminy while this article was being written.

Author information

Correspondence to David Belius.

Additional information

D. Belius was supported in part by the Swiss National Science Foundation, the Centre de Recherches Mathématiques and the Institut des Sciences Mathématiques (Montréal). N. Kistler was supported in part by the Centre International de Rencontres Mathématiques (Luminy), through the Jean Morlet Chair initiative.



Appendix

In this appendix we collect some of the more routine proofs. We first give the proof of the large deviation bound Lemma 4.6 for sums of a binomial number of geometric random variables, which was used to prove the upper bound Proposition 3.5.

Proof of Lemma 4.6

Note that

$$\begin{aligned} \mathbb {P}\left[ \sum _{i=1}^{n}J_{i}G_{i}\le \theta \right] =\mathbb {P}\left[ \sum _{i=1}^{J_{1}+\cdots +J_{n}}G_{i}\le \theta \right] . \end{aligned}$$

Now (since a sum of geometrics is a negative binomial distribution) we have

$$\begin{aligned} \mathbb {P}\left[ \sum _{i=1}^{m}G_{i}\le \theta \right] =\mathbb {P}\left[ I_{1}+\cdots +I_{\theta }\ge m\right] \quad \text{ for } m\ge 1, \end{aligned}$$

where \(I_{1},I_{2},\ldots \) are iid Bernoulli random variables with success probability p, which can be taken to be independent of the \(J_{i}\)'s. Thus, conditioning on \(J_{1}+\cdots +J_{n}\) in (9.1), we in fact have

$$\begin{aligned} \mathbb {P}\left[ \sum _{i=1}^{n}J_{i}G_{i}\le \theta \right] =\mathbb {P}\left[ I_{1}+\cdots +I_{\theta }\ge J_{1}+\cdots +J_{n}\right] . \end{aligned}$$
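The identity above can be sanity-checked by simulation. A minimal sketch, assuming (as in the proof below) that the \(J_{i}\) are Bernoulli with parameter q and the \(G_{i}\) are geometric on \(\{1,2,\ldots \}\) with success probability p; the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative parameters: J_i ~ Bernoulli(q), G_i ~ geometric(p) on {1, 2, ...}
p, q, n, theta, N = 0.4, 0.6, 8, 15, 200_000

J = rng.random((N, n)) < q                 # Bernoulli(q) indicators
G = rng.geometric(p, size=(N, n))          # geometric(p), support {1, 2, ...}
lhs = np.mean((J * G).sum(axis=1) <= theta)

I = rng.random((N, theta)) < p             # Bernoulli(p) trials, independent of the J_i
rhs = np.mean(I.sum(axis=1) >= J.sum(axis=1))

assert abs(lhs - rhs) < 0.01               # the two probabilities agree
```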

For any \(\lambda >0\) this probability is bounded above by

$$\begin{aligned}&\mathbb {E}\left[ \exp \left( \lambda \left( I_{1}+\cdots +I_{\theta }-J_{1}-\ldots -J_{n}\right) \right) \right] \\&\quad =\left( 1+p\left( e^{\lambda }-1\right) \right) ^{\theta }\left( 1+q\left( e^{-\lambda }-1\right) \right) ^{n}\le \exp \left( \theta p\left( e^{\lambda }-1\right) +qn\left( e^{-\lambda }-1\right) \right) , \end{aligned}$$

where we have used that \(1+x\le e^{x}\). Now (4.23) follows by setting \(\lambda =\frac{1}{2}\log \frac{qn}{\theta p}\). \(\square \)
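The resulting exponential bound can also be checked against simulation. A sketch with made-up parameters chosen so that \(qn>\theta p\) (hence \(\lambda >0\)):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
# Made-up parameters with q*n > theta*p, so that lambda > 0
p, q, n, theta, N = 0.4, 0.6, 50, 20, 100_000

J = rng.random((N, n)) < q
G = rng.geometric(p, size=(N, n))
emp = np.mean((J * G).sum(axis=1) <= theta)   # Monte Carlo estimate of the probability

lam = 0.5 * math.log(q * n / (theta * p))     # the choice of lambda from the proof
bound = math.exp(theta * p * (math.exp(lam) - 1) + q * n * (math.exp(-lam) - 1))

assert emp <= bound + 0.01                    # the Chernoff bound dominates
```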

Next we derive the characterisation Lemma 7.7 of local times of continuous time random walk on \(\left\{ 0,\ldots ,L\right\} \) from the generalized second Ray–Knight theorem. Recall the definition of \(\tilde{\mathbb {P}}_{l}\) and \(Y_{t}\) from above (7.39) and the definition of \(L_{l}^{t}\) from (7.39).

Proof of Lemma 7.7

Let \(\mathcal {L}=\left\{ 0,\ldots ,L\right\} \). The generalized second Ray–Knight theorem (see [16] or Theorem 8.2.2 [22]) implies that \(\left( L_{l}^{\tau \left( t\right) }+\frac{1}{2}\eta _{l}^{2}\right) _{l\in \mathcal {L}}\overset{\text{ law }}{=}\left( \frac{1}{2}\left( \eta _{l}+\sqrt{2t}\right) ^{2}\right) _{l\in \mathcal {L}}\), where \(\eta _{l}\) is a centered Gaussian process on \(\mathcal {L}\) with covariance \(\mathbb {E}\left[ \eta _{a}\eta _{b}\right] =\tilde{\mathbb {E}}_{b}\left[ L_{a}^{H_{0}}\right] =a\) for \(a\le b\), independent of \(L_{l}^{\tau \left( t\right) }\). Thus \(\eta _{l},l\in \mathcal {L}\), is in fact Brownian motion at the integer times \(l\in \mathcal {L}\). This in turn implies that \(\left( \frac{1}{2}\eta _{l}^{2}\right) _{l\in \mathcal {L}}\) has the \(\mathbb {Q}_{0}^{1}\)-law of \(\left( \frac{1}{2}X_{l}\right) _{l\in \mathcal {L}}\) and \(\left( \frac{1}{2}\left( \eta _{l}+\sqrt{2t}\right) ^{2}\right) _{l\in \mathcal {L}}\) has the \(\mathbb {Q}_{2t}^{1}\)-law of \(\left( \frac{1}{2}X_{l}\right) _{l\in \mathcal {L}}\) (recall (7.24)). By the additivity property (7.26) of Bessel processes we thus have that \(\left( L_{l}^{\tau \left( t\right) }+\frac{1}{2}X_{l}^{1}\right) _{l\in \mathcal {L}}\overset{\text{ law }}{=}\left( \frac{1}{2}X_{l}^{1}+\frac{1}{2}X_{l}^{2}\right) _{l\in \mathcal {L}}\), where \(\left( X_{t}^{1}\right) _{t\ge 0}\) has law \(\mathbb {Q}_{0}^{1}\) and \(X_{t}^{2}\) has law \(\mathbb {Q}_{2t}^{0}\). Now the claim follows because we may “cancel out” \(\frac{1}{2}X_{l}^{1}\) from this equality in law, since all random variables involved are non-negative (see [28, (2.56)]). \(\square \)
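The additivity property of squared Bessel processes invoked here can be illustrated in integer dimensions, where a BESQ process is the squared norm of a Brownian motion: two independent one-dimensional squares started from \(a^{2}\) and \(b^{2}\) should sum to a two-dimensional square started from \(a^{2}+b^{2}\). A simulation sketch at a single fixed time, with arbitrary parameters and a two-sample test:

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

rng = np.random.default_rng(2)
t, a, b, N = 1.0, 1.5, 0.8, 50_000   # arbitrary time, starting points, sample size
s = np.sqrt(t)

# BESQ^1 from a^2 plus an independent BESQ^1 from b^2, sampled at time t
x = rng.normal(a, s, N) ** 2 + rng.normal(b, s, N) ** 2
# BESQ^2 from a^2 + b^2 at time t: squared norm of planar BM started at radius sqrt(a^2+b^2)
y = rng.normal(np.sqrt(a**2 + b**2), s, N) ** 2 + rng.normal(0.0, s, N) ** 2

assert ks_2samp(x, y).pvalue > 1e-4  # the test cannot distinguish the two laws
```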

Next we give the proof of Lemma 7.8, which describes the law of the local times \(L_{l}^{D_{t}},l\in \left\{ 0,\ldots ,L\right\} ,\) of continuous time random walk on \(\left\{ 0,1,\ldots ,L\right\} \) conditioned on \(L_{L}^{D_{t}}=0\). Recall the definition of \(D_{t}\) from (7.40). For the proof, let us denote by \(\Gamma \) the state space of \(\left( Y_{t}\right) _{t\ge 0}\), that is, the space of all piecewise constant càdlàg functions from \([0,\infty )\) to \(\left\{ 0,\ldots ,L\right\} \).

Proof of Lemma 7.8

Define the successive returns to and departures from \(\left\{ 0,\ldots ,L\right\} \backslash \left\{ 1\right\} \) of \(Y_{t}\) by \(\tilde{D}_{0}=H_{1}\),

$$\begin{aligned} \tilde{R}_{n}=H_{\left\{ 0,2\right\} }\circ \theta _{\tilde{D}_{n-1}}+\tilde{D}_{n-1},n\ge 1,\quad \text{ and }\quad \tilde{D}_{n}=H_{1}\circ \theta _{\tilde{R}_{n}}+\tilde{R}_{n},n\ge 1. \end{aligned}$$

Collect the excursions of \(Y_{t}\) into a marked point process \(\mu \) on \([0,\infty )\times \Gamma \) defined by

$$\begin{aligned} \mu =\sum _{i\ge 1}\delta _{\left( L_{1}^{\tilde{R}_{i}},Y_{\left( \tilde{R}_{i}+\cdot \right) \wedge \tilde{D}_{i}}\right) }. \end{aligned}$$

The point process \(\mu \) is a Poisson point process on \(\mathbb {R}_{+}\times \Gamma \) of intensity

$$\begin{aligned} \left( 2\lambda \right) \otimes \left( \frac{1}{2}\tilde{\mathbb {P}}_{0}\left[ Y_{\cdot \wedge H_{1}}\in dw\right] +\frac{1}{2}\tilde{\mathbb {P}}_{2}\left[ Y_{\cdot \wedge H_{1}}\in dw\right] \right) , \end{aligned}$$

where \(\lambda \) is Lebesgue measure. We can decompose this point process into

$$\begin{aligned} \mu _{1}=1_{\mathbb {R}_{+}\times \left\{ Y_{0}=0\right\} }\mu \quad \text{ and }\quad \mu _{2}=1_{\mathbb {R}_{+}\times \left\{ Y_{0}=2,H_{1}<H_{L}\right\} }\mu \text{ and } \mu _{3}=1_{\mathbb {R}_{+}\times \left\{ Y_{0}=2,H_{L}<H_{1}\right\} }\mu , \end{aligned}$$

where \(\mu _{1}\) collects the excursions that start at 0, \(\mu _{2}\) collects the excursions that start at 2 and avoid L, and \(\mu _{3}\) collects the excursions that start at 2 and hit L. Since we are restricting \(\mu \) to disjoint sets, \(\mu _{1}\), \(\mu _{2}\) and \(\mu _{3}\) are independent Poisson point processes.
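The restriction property used here — splitting the points of a Poisson process into disjoint classes by independent marks yields independent Poisson processes — can be sketched numerically. An illustrative check (made-up rates and class probabilities) that each class has Poisson mean and variance and that distinct classes are uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(3)
N, mean_total = 20_000, 6.0              # illustrative sample size and total mean
p = np.array([0.5, 0.3, 0.2])            # probabilities of the three classes

total = rng.poisson(mean_total, N)       # total number of points per realization
split = np.array([rng.multinomial(c, p) for c in total])  # independent class marks
n1, n2, n3 = split.T

# each class is Poisson with the thinned mean; distinct classes are uncorrelated
assert abs(n1.mean() - mean_total * p[0]) < 0.1
assert abs(n1.var() - mean_total * p[0]) < 0.2
assert abs(np.cov(n1, n2)[0, 1]) < 0.1
```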


Write

$$\begin{aligned} \mu _{1}=\sum _{i}\delta _{\left( S_{i},w_{i}\right) }, \end{aligned}$$

for \(S_{1}<S_{2}<\ldots \), so that \(S_{t}\) is the local time at vertex 1 until the t-th jump to 0. Note that (recall (7.40))

$$\begin{aligned} L_{1}^{D_{t}}=S_{t},\quad \text{ for } t\in \left\{ 1,2,\ldots \right\} . \end{aligned}$$

We have

$$\begin{aligned} L_{l}^{D_{t}}=\sum _{\left( s,w\right) \in \mu _{2}\cup \mu _{3}:s\le S_{t}}L_{l}^{\infty }\left( w\right) \quad \text{ for } l\in \left\{ 2,\ldots ,L\right\} , \end{aligned}$$

where \(L_{l}^{\infty }\left( w\right) \) is the local time at l of the path w, i.e. \(L_{l}^{\infty }\left( w\right) =d_{l}^{-1}\int _{0}^{\infty }1_{\left\{ w_{s}=l\right\} }ds\) for \(d_{l}\) as in (7.38). For any \(u\ge 0\) define the vector

$$\begin{aligned} V_{u}=\left( u,\sum _{\left( s,w\right) \in \mu _{2}:s\le u}L_{2}^{\infty }\left( w\right) ,\ldots ,\sum _{\left( s,w\right) \in \mu _{2}:s\le u}L_{L}^{\infty }\left( w\right) \right) \in \mathbb {R}^{L}. \end{aligned}$$

By (9.3) we have

$$\begin{aligned} \left( L_{l}^{D_{t}}\right) _{l\in \left\{ 1,\ldots ,L\right\} }=V_{S_{t}} \text{ on } \text{ the } \text{ event } \left\{ L_{L}^{D_{t}}=0\right\} =\left\{ \mu _{3}\left( \left[ 0,S_{t}\right] \times \Gamma \right) =0\right\} . \end{aligned}$$

Furthermore note that \(L_{1}^{D_{t}}\) and \(\left\{ L_{L}^{D_{t}}=0\right\} \) only depend on \(\mu _{1}\) and \(\mu _{3}\), while \(V_{u}\) only depends on \(\mu _{2}\), which is independent of \(\mu _{1}\) and \(\mu _{3}\). Therefore

$$\begin{aligned} \begin{array}{l} \tilde{\mathbb {P}}_{0}\left[ \left( L_{l}^{D_{t}}\right) _{l\in \left\{ 1,\ldots ,L\right\} }\in A|L_{L}^{D_{t}}=0\right] =\tilde{\mathbb {P}}_{0}\left[ f\left( L_{1}^{D_{t}}\right) |L_{L}^{D_{t}}=0\right] ,\\ \text{ where } f\left( u\right) =\tilde{\mathbb {P}}_{0}\left[ V_{u}\in A\right] . \end{array} \end{aligned}$$

We are thus interested in the law of \(V_{u}\). Let \(\tilde{Y}_{t}\) be continuous time random walk on \(\left\{ 1,\ldots ,L\right\} \) with local times and inverse local time at vertex 1 given by

$$\begin{aligned} \tilde{L}_{l}^{u}=\frac{1}{1+1_{\left\{ 1<l<L\right\} }}\int _{0}^{u}1_{\left\{ \tilde{Y}_{s}=l\right\} }ds\quad \text{ and }\quad \tilde{\tau }\left( u\right) =\inf \left\{ s\ge 0:\tilde{L}_{1}^{s}>u\right\} . \end{aligned}$$

Sampling \(\tilde{Y}_{t},t\ge 0,\) by “stitching together” the excursions in the point processes \(\mu _{2}\) and \(\mu _{3}\) we see that

$$\begin{aligned} \left( \tilde{L}_{l}^{\tilde{\tau }\left( u\right) }\right) _{l\in \left\{ 2,\ldots ,L\right\} }\overset{\text{ law }}{=}\left( \sum _{\left( s,w\right) \in \mu _{2}\cup \mu _{3}:s\le u}L_{l}^{\infty }\left( w\right) \right) _{l\in \left\{ 2,\ldots ,L\right\} }. \end{aligned}$$

So by Lemma 7.7 (with \(\left\{ 1,\ldots ,L\right\} \) in place of \(\left\{ 0,\ldots ,L\right\} \)) we have that

$$\begin{aligned}&\tilde{\mathbb {P}}_{0}\left[ \left( {\displaystyle \sum _{\left( s,w\right) \in \mu _{2}\cup \mu _{3}:s\le u}}L_{l}^{\infty }\left( w\right) \right) _{l\in \left\{ 2,\ldots ,L\right\} }\in \cdot \right] \nonumber \\&\quad =\tilde{\mathbb {P}}_{0}\left[ \left( \tilde{L}_{l}^{\tilde{\tau }\left( u\right) }\right) _{l\in \left\{ 2,\ldots ,L\right\} }\in \cdot \right] =\mathbb {Q}_{2u}^{0}\left[ \left( \frac{1}{2}X_{l}\right) _{l\in \left\{ 1,\ldots ,L-1\right\} }\in \cdot \right] . \end{aligned}$$

Now since

$$\begin{aligned} \left\{ \mu _{3}\left( \left[ 0,t\right] \times \Gamma \right) =0\right\} =\left\{ \sum _{\left( s,w\right) \in \mu _{2}\cup \mu _{3}:s\le t}L_{L}^{\infty }\left( w\right) =0\right\} , \end{aligned}$$

and \(V_{u}\) is independent of \(\mu _{3}\) we have

$$\begin{aligned} \tilde{\mathbb {P}}_{0}\left[ V_{u}\in A\right]&=\tilde{\mathbb {P}}_{0}\left[ V_{u}\in A|\mu _{3}\left( \left[ 0,u\right] \times \Gamma \right) =0\right] \\&\overset{(9.4),(9.6)}{=}\tilde{\mathbb {P}}_{0}\left[ \left( \tilde{L}_{l}^{\tilde{\tau }\left( u\right) }\right) _{l\in \left\{ 1,\ldots ,L\right\} }\in A|\tilde{L}_{L}^{\tilde{\tau }\left( u\right) }=0\right] \\&\overset{(9.7)}{=}\mathbb {Q}_{2u}^{0}\left[ \left( \frac{1}{2}X_{l}\right) _{l\in \left\{ 0,\ldots ,L-1\right\} }\in A|X_{L-1}=0\right] \\&\overset{(7.23)}{=}\mathbb {Q}_{2u\rightarrow 0}^{0,L-1}\left[ \left( \frac{1}{2}X_{l}\right) _{l\in \left\{ 0,\ldots ,L-1\right\} }\in A\right] . \end{aligned}$$

Plugging this into (9.5) gives the claim. \(\square \)

The same construction of \(Y_{t}\) from the Poisson point processes \(\mu _{1},\mu _{2}\) and \(\mu _{3}\) can be used to prove Lemma 7.9, which gives a control on the law of \(L_{1}^{D_{t}}\) conditioned on \(L_{L}^{D_{t}}=0\).

Proof of Lemma 7.9

We will first show that

$$\begin{aligned}&\text{ the } \tilde{\mathbb {P}}_{0}\left[ \cdot |L_{L}^{D_{t}}=0\right] \text{-law } \text{ of } \frac{L}{L-1}L_{1}^{D_{t}}\, \text{ is } \text{ that } \text{ of } \text{ a } \text{ sum } \text{ of } t\nonumber \\&\quad \text{ independent } \text{ standard } \text{ exponential } \text{ random } \text{ variables }. \end{aligned}$$

In the notation of the proof of Lemma 7.8: Since \(L_{1}^{D_{t}}=S_{t}\) and \(\left\{ L_{L}^{D_{t}}=0\right\} =\left\{ \mu _{3}\left( \left[ 0,S_{t}\right] \times \Gamma \right) =0\right\} \) we are interested in the law of \(S_{t}\) given \(\left\{ \mu _{3}\left( \left[ 0,S_{t}\right] \times \Gamma \right) =0\right\} \). Since \(\mu _{3}\) is independent of \(S_{t}\) we have that

$$\begin{aligned} \tilde{\mathbb {P}}_{0}\left[ S_{t}=ds,\mu _{3}\left( \left[ 0,S_{t}\right] \times \Gamma \right) =0\right] =\tilde{\mathbb {P}}\left[ \mu _{3}\left( \left[ 0,s\right] \times \Gamma \right) =0\right] \tilde{\mathbb {P}}_{0}\left[ S_{t}=ds\right] . \end{aligned}$$

The intensity of \(\mu _{3}\) is \(\left( 2\lambda \right) \otimes \frac{1}{2}\tilde{\mathbb {P}}_{2}\left[ \cdot ,H_{L}<H_{1}\right] \) (recall (9.2)), so that

$$\begin{aligned}&\tilde{\mathbb {P}}_{0}\left[ \mu _{3}\left( \left[ 0,s\right] \times \Gamma \right) =0\right] =e^{-s\tilde{\mathbb {P}}_{2}\left[ H_{L}<H_{1}\right] }=e^{-\frac{s}{L-1}}, \text{ and }\\&\tilde{\mathbb {P}}_{0}\left[ S_{t}=ds,\mu _{3}\left( \left[ 0,S_{t}\right] \times \Gamma \right) =0\right] =e^{-\frac{s}{L-1}}\tilde{\mathbb {P}}_{0}\left[ S_{t}=ds\right] . \end{aligned}$$

The \(\tilde{\mathbb {P}}_{0}\)-law of \(S_{t}\) is the gamma distribution with shape t and scale 1. Thus

$$\begin{aligned} \tilde{\mathbb {P}}_{0}\left[ S_{t}=ds,\mu _{3}\left( \left[ 0,S_{t}\right] \times \Gamma \right) =0\right] =e^{-\frac{s}{L-1}}\frac{s^{t-1}e^{-s}}{\left( t-1\right) !}=e^{-\left( \frac{1}{L-1}+1\right) s}\frac{s^{t-1}}{\left( t-1\right) !}, \end{aligned}$$

so that the \(\tilde{\mathbb {P}}_{0}\left[ \cdot |\mu _{3}\left( \left[ 0,S_{t}\right] \times \Gamma \right) =0\right] \)-law of \(S_{t}\) is the gamma distribution with shape t and scale \(\left( 1+1/\left( L-1\right) \right) ^{-1}=\left( L-1\right) /L\). This proves (9.8).
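The exponential tilting step can be verified numerically: multiplying the Gamma(t, 1) density by \(e^{-s/\left( L-1\right) }\) leaves total mass \(\left( \left( L-1\right) /L\right) ^{t}\), and the renormalized law is gamma with shape t and scale \(\left( L-1\right) /L\). A sketch with arbitrary values of t and L:

```python
import numpy as np
from math import factorial

t, L = 5, 10                                   # arbitrary illustrative values
ds = 1e-4
s = np.arange(ds, 120.0, ds)                   # fine grid; the tail beyond 120 is negligible

# Gamma(t, 1) density tilted by exp(-s / (L - 1))
tilted = np.exp(-s / (L - 1)) * s ** (t - 1) * np.exp(-s) / factorial(t - 1)

Z = tilted.sum() * ds                          # total mass of the tilted density
assert abs(Z - ((L - 1) / L) ** t) < 1e-3      # equals ((L-1)/L)^t

mean = (s * tilted).sum() * ds / Z             # mean of the renormalized law
assert abs(mean - t * (L - 1) / L) < 1e-3      # Gamma(t, (L-1)/L) has mean t(L-1)/L
```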

Now the central limit theorem shows that

$$\begin{aligned} 0<c\le \tilde{\mathbb {P}}_{0}\left[ \left| \frac{L}{L-1}L_{1}^{D_{t}}-t\right| \le \sqrt{t}|L_{L}^{D_{t}}=0\right] , \end{aligned}$$

and a standard large deviation bound shows that for \(x\ge 0\),

$$\begin{aligned} \tilde{\mathbb {P}}_{0}\left[ \left| \frac{L}{L-1}L_{1}^{D_{t}}-t\right| \ge x\sqrt{t}|L_{L}^{D_{t}}=0\right] \le e^{-cx^{2}}. \end{aligned}$$

From the assumption \(t\le 10L^{2}\) we see that \(\frac{L}{L-1}\) can be replaced by \(\left( \frac{L}{L-1}\right) ^{2}\), and then (7.41) follows from (9.9) and (7.42) follows from (9.10). \(\square \)

It remains to prove Lemma 7.12, which gives a large deviation bound for the number of traversals \(\tilde{T}_{l}^{t}\) (recall (7.43)) given the continuous local times \(L_{l}^{D_{t}}\). For this we will need the following computation of the conditional distribution of \(\tilde{T}_{l}^{t}\) (which can be seen as a special case of the results of [14, Section 4]). To prove it we use the following fact about the modified Bessel function of the first kind \(I_{1}\left( \cdot \right) \):

$$\begin{aligned} \sum _{m\ge 1}\frac{z^{m}}{m!\left( m-1\right) !}=\sqrt{z}I_{1}\left( 2\sqrt{z}\right) \quad \text{ for } \text{ all } z\ge 0. \end{aligned}$$
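This series identity is easy to check numerically; a quick sketch using scipy's implementation of \(I_{1}\) (the values of z are arbitrary):

```python
from math import factorial, sqrt
from scipy.special import iv  # iv(1, x) is the modified Bessel function I_1(x)

for z in [0.5, 1.0, 4.0, 10.0]:            # arbitrary test points
    # partial sum of the series; terms decay super-exponentially
    series = sum(z**m / (factorial(m) * factorial(m - 1)) for m in range(1, 80))
    closed = sqrt(z) * iv(1, 2 * sqrt(z))  # the closed form
    assert abs(series - closed) < 1e-9 * (1 + closed)
```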

Lemma 9.1

For all \(u_{0},u_{1},u_{2},\ldots ,u_{L}\in [0,\infty )\) such that \(u_{i}=0\implies u_{i+1}=0\), and any \(l\in \left\{ 1,\ldots ,L-1\right\} \) such that \(u_{l+1}>0\) we have for \(m\in \left\{ 1,2,\ldots \right\} \)

$$\begin{aligned} \mathbb {\tilde{P}}_{0}\left[ \tilde{T}_{l}^{t}=m|L_{l}^{D_{t}}=u_{l},l=0,\ldots ,L\right] =\frac{\left( u_{l}u_{l+1}\right) ^{m}/\left( m!\cdot \left( m-1\right) !\right) }{\sqrt{u_{l}u_{l+1}}I_{1}\left( 2\sqrt{u_{l}u_{l+1}}\right) }. \end{aligned}$$


Proof of Lemma 9.1

The law of \(T_{1}\) under \(\mathbb {G}_{a}\) can be written down explicitly as

$$\begin{aligned} \mathbb {G}_{a}\left[ T_{1}=b\right] ={a+b-1 \atopwithdelims ()a-1}\left( \frac{1}{2}\right) ^{a+b}\quad \text{ for } a\in \left\{ 1,2,\ldots \right\} ,b\in \left\{ 0,1,2,\ldots \right\} , \end{aligned}$$

since there are \({a+b-1 \atopwithdelims ()a-1}\) ways to write b as a sum of a non-negative integers, and since the probability that a geometric random variable with support \(\left\{ 0,1,\ldots \right\} \) and mean 1 takes on the value k is \(\left( \frac{1}{2}\right) ^{k+1}\). By Lemma 7.10 we therefore have for all \(t=t_{0},t_{1},t_{2},\ldots ,t_{L-1}\in \left\{ 0,1,2,\ldots \right\} \) such that \(t_{i}=0\implies t_{i+1}=0\)

$$\begin{aligned} \mathbb {\tilde{P}}_{0}\left[ \tilde{T}_{i}^{t}=t_{i},i=0,\ldots ,L-1\right] =\prod _{i\in \left\{ 1,\ldots ,L-1\right\} :t_{i-1}>0}{t_{i-1}+t_{i}-1 \atopwithdelims ()t_{i-1}-1}\left( \frac{1}{2}\right) ^{t_{i-1}+t_{i}}. \end{aligned}$$
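The explicit law of \(T_{1}\) above can be sanity-checked: the probabilities sum to one, and the mean equals a, as it must for a sum of a geometric variables of mean 1. A quick check with an arbitrary value of a:

```python
from math import comb

a = 4                                            # arbitrary illustrative value
# P[T_1 = b] = C(a+b-1, a-1) (1/2)^{a+b}; truncate at b = 400 (the tail is negligible)
pmf = [comb(a + b - 1, a - 1) * 0.5 ** (a + b) for b in range(0, 400)]

assert abs(sum(pmf) - 1.0) < 1e-12               # probabilities sum to 1
mean = sum(b * p for b, p in enumerate(pmf))
assert abs(mean - a) < 1e-9                      # mean of a geometrics of mean 1 is a
```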

Conditioned on the number of visits to each vertex the total holding times at the vertices are independent and gamma distributed, so we have for such \(t_{i}\) and any \(u_{0},u_{1},\ldots ,u_{L}\in [0,\infty )\) such that \(t_{l-1}=0\iff u_{l}=0\) that

$$\begin{aligned}&\tilde{\mathbb {P}}_{0}\left[ \tilde{T}_{l}^{t}=t_{l},l=1,\ldots ,L-1,L_{l}^{D_{t}}=u_{l},l=0,\ldots ,L\right] \\&\quad =\left\{ {\displaystyle \prod _{i\in \left\{ 1,\ldots ,L-1\right\} :t_{i-1}>0}}{t_{i-1}+t_{i}-1 \atopwithdelims ()t_{i-1}-1}\left( \frac{1}{2}\right) ^{t_{i-1}+t_{i}}\right\} \\&\qquad \times \left\{ \left( \frac{e^{-u_{1}}u_{1}^{t_{0}-1}}{\left( t_{0}-1\right) !}\right) \left( {\displaystyle \prod _{i\in \left\{ 1,\ldots ,L-1\right\} :t_{i-1}>0}}\frac{e^{-2u_{i}}\left( 2u_{i}\right) ^{t_{i-1}+t_{i}}}{u_{i}\left( t_{i-1}+t_{i}-1\right) !}\right) \left( \frac{e^{-u_{L}}u_{L}^{t_{L-1}-1}}{\left( t_{L-1}-1\right) !}\right) \right\} , \end{aligned}$$

where the quantity in the last parenthesis is interpreted as 1 if \(t_{L-1}=0\) or \(u_{L}=0\). Exploiting two cancellations, the right-hand side equals

$$\begin{aligned}&\left\{ {\displaystyle \prod _{i\in \left\{ 1,\ldots ,L-1\right\} :t_{i-1}>0}}\frac{1}{\left( t_{i-1}-1\right) !t_{i}!}\right\} \\&\quad \times \left\{ \left( \frac{e^{-u_{1}}u_{1}^{t_{0}-1}}{\left( t_{0}-1\right) !}\right) \left( {\displaystyle \prod _{i\in \left\{ 1,\ldots ,L-1\right\} :t_{i-1}>0}}\frac{e^{-2u_{i}}u_{i}^{t_{i-1}+t_{i}}}{u_{i}}\right) \left( \frac{e^{-u_{L}}u_{L}^{t_{L-1}-1}}{\left( t_{L-1}-1\right) !}\right) \right\} . \end{aligned}$$

Considering only the terms that depend on \(t_{l}\) we have that if \(u_{0},u_{1},\ldots ,u_{l+1}>0\)

$$\begin{aligned} \tilde{\mathbb {P}}_{0}\left[ \tilde{T}_{l}^{t}=m|L_{l}^{D_{t}}=u_{l},l=0,\ldots ,L\right] =\frac{1}{\tilde{Z}}\frac{\left( u_{l}u_{l+1}\right) ^{m}}{\left( m-1\right) !m!},\quad m\ge 1,\;l\in \left\{ 1,\ldots ,L-1\right\} , \end{aligned}$$

for a normalizing constant \(\tilde{Z}\) depending only on \(t,u_{0},\ldots ,u_{L}\). Using (9.11) we can identify the constant as

$$\begin{aligned} \tilde{Z}=\sum _{m\ge 1}\frac{\left( u_{l}u_{l+1}\right) ^{m}}{\left( m-1\right) !m!}=\sqrt{u_{l}u_{l+1}}I_{1}\left( 2\sqrt{u_{l}u_{l+1}}\right) . \end{aligned}$$

\(\square \)

We now prove the large deviation result Lemma 7.12 for the traversal process \(\tilde{T}_{l}^{t}\) conditioned on \(L_{l}^{D_{t}},l=0,\ldots ,L\).

Proof of Lemma 7.12

Denote \(\tilde{\mathbb {P}}_{0}\left[ \cdot |\sigma \left( L_{l}^{D_{t}}:l=0,\ldots ,L\right) \right] \) by \(\tilde{\mathbb {Q}}\). By Lemma 9.1,

$$\begin{aligned} \tilde{\mathbb {Q}}\left[ \exp \left( \lambda \tilde{T}_{l}^{t}\right) \right] =\sum _{m\ge 1}\frac{\left( e^{\lambda }\mu ^{2}\right) ^{m}/\left( m!\cdot \left( m-1\right) !\right) }{\mu I_{1}\left( 2\mu \right) }\overset{(9.11)}{=}\frac{e^{\lambda /2}\mu I_{1}\left( 2e^{\lambda /2}\mu \right) }{\mu I_{1}\left( 2\mu \right) } \text{ for } \lambda \in \mathbb {R}. \end{aligned}$$

Thus for all \(\lambda >0\)

$$\begin{aligned} \tilde{\mathbb {Q}}\left[ \tilde{T}_{l}^{t}\ge \mu +\theta \right] \le e^{\lambda /2}\frac{I_{1}\left( 2e^{\lambda /2}\mu \right) }{I_{1}\left( 2\mu \right) }\exp \left( -\lambda \left( \mu +\theta \right) \right) . \end{aligned}$$

Using the standard estimate \(I_{1}\left( z\right) =\frac{e^{z}}{\sqrt{2\pi z}}\left( 1+O\left( z^{-1}\right) \right) \) we have that

$$\begin{aligned} I_{1}\left( 2e^{\lambda /2}\mu \right) /I_{1}\left( 2\mu \right) \le ce^{\lambda /4}e^{2\left( e^{\lambda /2}-1\right) \mu }, \end{aligned}$$

so that for all \(\lambda >0\)

$$\begin{aligned} \tilde{\mathbb {Q}}\left[ \tilde{T}_{l}^{t}\ge \mu +\theta \right] \le ce^{c\lambda }\exp \left( 2\left\{ e^{\lambda /2}-1\right\} \mu -\lambda \left\{ \mu +\theta \right\} \right) \le ce^{c\lambda }\exp \left( c\lambda ^{2}\mu -\lambda \theta \right) . \end{aligned}$$

Setting \(\lambda =c\theta /\mu \) for a small enough c the right-hand side is bounded above by \(ce^{c\theta /\mu -c\theta ^{2}/\mu }\), giving one half of (7.45). By estimating \(\tilde{\mathbb {Q}}\left[ \exp \left( -\lambda \tilde{T}_{l}^{t}\right) \right] \) one can similarly show that \(\tilde{\mathbb {Q}}\left[ \tilde{T}_{l}^{t}\le \mu -\theta \right] \le ce^{c\theta /\mu -c\theta ^{2}/\mu }\), giving the other half. \(\square \)
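The exponential moment computed at the start of the proof can be checked directly against the conditional law of Lemma 9.1; here \(\mu \) is taken to stand for \(\sqrt{u_{l}u_{l+1}}\) (an assumption of this sketch), and scipy's iv supplies \(I_{1}\):

```python
from math import exp, factorial
from scipy.special import iv  # modified Bessel function of the first kind

mu, lam = 2.0, 0.3                               # illustrative values of mu and lambda
norm = mu * iv(1, 2 * mu)                        # normalizing constant of Lemma 9.1

# MGF computed term by term from the pmf (truncated; terms decay super-exponentially)
mgf_direct = sum(exp(lam * m) * mu ** (2 * m) / (factorial(m) * factorial(m - 1))
                 for m in range(1, 60)) / norm
# MGF from the closed form derived in the proof
mgf_closed = exp(lam / 2) * mu * iv(1, 2 * exp(lam / 2) * mu) / norm

assert abs(mgf_direct - mgf_closed) < 1e-9 * mgf_closed
```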



Cite this article

Belius, D., Kistler, N. The subleading order of two dimensional cover times. Probab. Theory Relat. Fields 167, 461–552 (2017).


Mathematics Subject Classification

  • 60J65
  • 60G50
  • 60G70
  • 60K35