
Probability Theory and Related Fields, Volume 171, Issue 3–4, pp 917–979

Delocalising the parabolic Anderson model through partial duplication of the potential

Open Access

Abstract

The parabolic Anderson model on \(\mathbb {Z}^d\) with i.i.d. potential is known to completely localise if the distribution of the potential is sufficiently heavy-tailed at infinity. In this paper we investigate a modification of the model in which the potential is partially duplicated in a symmetric way across a plane through the origin. In the case of potential distribution with polynomial tail decay, we exhibit a surprising phase transition in the model as the decay exponent varies. For large values of the exponent the model completely localises as in the i.i.d. case. By contrast, for small values of the exponent we show that the model may delocalise. More precisely, we show that there is an event of non-negligible probability on which the solution has non-negligible mass on two sites.

Keywords

Parabolic Anderson model · Localisation · Intermittency

Mathematics Subject Classification

60H25 (Primary) · 82C44, 60F10 (Secondary)

1 Introduction

1.1 Delocalising the parabolic Anderson model

Given a potential field \(\xi : \mathbb {Z}^d \rightarrow \mathbb {R}\), the parabolic Anderson model (PAM) is the solution to the Cauchy problem with localised initial condition
$$\begin{aligned} \partial _t u(t,z)&=\Delta u(t,z)+\xi (z)u(t,z),&(t,z)\in (0,\infty )\times \mathbb {Z}^d,\nonumber \\ u(0,z)&={\mathbf 1}_{\{0\}}(z),&z\in \mathbb {Z}^d, \end{aligned}$$
(1.1)
where \(\Delta \) is the discrete Laplacian acting on functions \(f:\mathbb {Z}^d\rightarrow \mathbb {R}\) by
$$\begin{aligned} (\Delta f)(z)=\sum _{|y-z|=1}(f(y)-f(z)), \qquad z\in \mathbb {Z}^d, \end{aligned}$$
with \(|\cdot |\) the standard \(\ell _1\) distance. The PAM models the competition between smoothing effects, generated by the Laplacian, and roughening effects, generated by the potential. It is well known that if the potential \(\xi \) is sufficiently inhomogeneous, the PAM may undergo a process of localisation in which its solution is eventually concentrated, at typical large times, on a small number of spatially disjoint clusters of sites. Indeed, if \(\xi \) is an i.i.d. random field with the law of \(\xi (\cdot )\) sufficiently heavy-tailed at infinity, the solution is known to eventually concentrate on a single site with overwhelming probability, i.e. there exists a \(\mathbb {Z}^d\)-valued process \(Z_t\) such that, as \(t \rightarrow \infty \),
$$\begin{aligned} \frac{u(t, Z_t)}{\sum _{z \in \mathbb {Z}^d} u(t, z)} \rightarrow 1 \quad \text {in probability.} \end{aligned}$$
In this case we say that the PAM completely localises.
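As an illustration of this localisation statistic, the Cauchy problem (1.1) can be integrated numerically on a small truncated box. The following Python sketch is not from the paper: the box size, time horizon, Pareto potential and explicit Euler scheme are all our own arbitrary choices, and on such a small box the outcome is only indicative.

```python
import random

random.seed(1)
L, alpha, T = 30, 2.0, 3.0                 # box {-L,...,L}, Pareto parameter, time horizon
n = 2 * L + 1
xi = [random.paretovariate(alpha) for _ in range(n)]   # i.i.d. Pareto(alpha) potential, >= 1

u = [0.0] * n
u[L] = 1.0                                 # localised initial condition at the origin
dt = 0.1 / (max(xi) + 4.0)                 # explicit Euler step, small relative to max(xi)
for _ in range(int(T / dt)):
    # one Euler step of du/dt = (Laplacian u) + xi * u, on a periodic box for simplicity
    u = [u[z] + dt * (u[z - 1] + u[(z + 1) % n] - 2 * u[z] + xi[z] * u[z]) for z in range(n)]
    s = sum(u)
    u = [x / s for x in u]                 # renormalise; the equation is linear, so the
                                           # profile u(t, .) / U(t) is unchanged

peak_mass = max(u)                         # fraction of the total mass at the single best site
```

With a heavy-tailed potential, `peak_mass` is typically close to 1 at moderately large times, in line with complete localisation; when two sites happen to have nearly equal potential values the mass can instead split between them.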

While there are many results in the literature establishing localisation in the PAM in various settings (see Sect. 1.2 for an overview), our understanding of the absence of localisation is much less well-developed. In the case that the potential \(\xi \) is a random field, there are at least two features of \(\xi \) which may prevent complete localisation in the PAM. First, the potential may be too homogeneous on large scales—too close to a constant potential—for sharp peaks in the solution to form. Second, even if the potential is sufficiently inhomogeneous, complete localisation may be prevented by the presence of ‘duplicated’ regions in which the potential is very similar; in this case, the solution may have no reason to favour one such region over another.

This paper is motivated by the following question:

Given a random potential for which the PAM completely localises, what kind of ‘duplication’ of the potential will cause complete localisation to fail?

Of course, there are trivial ways to prevent complete localisation by introducing duplication. For instance, if the potential is symmetric about some plane through the origin then \(u(t,\cdot )\) is also symmetric about this plane, and so complete localisation cannot occur. This paper considers a model of partial duplication in which we pick a fraction \(p \in (0, 1)\) of the sites to duplicate across the plane of symmetry. It turns out that this model exhibits a rich phenomenon of delocalisation; indeed, if the potential is i.i.d. with Pareto distribution (i.e. with polynomial tail decay), we show that the model exhibits a phase transition in the Pareto parameter.

1.2 Localisation in the PAM

The study of localisation in the PAM has received much attention in recent years. This began with the seminal paper [6] and is by now well-understood, see [7, 10, 13] for surveys. In the i.i.d. case, for a wide class of potentials with unbounded tails it is known that the solution to the PAM is concentrated at typical large times on a small number of spatially disjoint clusters of sites, known as islands. The shape of the potential and the solution \(u(t,\cdot )\) on these islands was first studied in [8] for the case of double-exponential potentials. More recently, it has been shown that for sufficiently heavy-tailed potentials (Pareto [11], exponential [12], Weibull [5, 17]), the solution exhibits the strongest possible form of localisation: complete localisation. In [14] this was shown to also be the case for a model that replaced the Laplacian with the generator of a trapped random walk. By contrast, in very recent work [3] it has been shown that in the double-exponential case the PAM localises on a single connected island, rather than on a single site. This has confirmed the long-standing conjecture that, in the i.i.d. case, potentials with double-exponential tail decay form the boundary of the complete localisation universality class.

The model we consider is an example of the PAM in a random potential that has spatial correlation. To the best of our knowledge, the only previous work that has considered the PAM with correlated potential in a discrete setting is [9], in which the motivation was to more accurately model a physical system by introducing long-range correlations. The main result in that paper is an asymptotic formula for moments of the total solution; this shows that the solution is intermittent in a certain weak sense, but is not precise enough to determine the localisation/delocalisation properties of the model.

1.3 The PAM with partially duplicated potential

In this section we formally introduce the PAM with partially duplicated potential that is the object of our study. For the remainder of the paper we fix \(d=1\). This avoids certain additional complications that arise in higher dimensions, while preserving the phenomena that we seek to investigate; we comment on the nature of these complications in Sect. 1.5.

We begin by introducing the partially duplicated potential \(\xi \). Define an auxiliary random field \(\xi _0: \mathbb {Z}\rightarrow [1, \infty )\) consisting of independent Pareto random variables with parameter \(\alpha >0\), that is, with distribution function
$$\begin{aligned} F(x)=1-x^{-\alpha }, \quad x \ge 1. \end{aligned}$$
Fix a parameter \(p\in (0,1)\) that controls the density of duplicated sites. Abbreviate \(\mathbb {N}_0 = \mathbb {N}\cup \{0\}\), and define a random field \(\xi : \mathbb {Z}\rightarrow [1, \infty )\) by setting \(\xi (n) =\xi _0(n)\) for each \(n\in \mathbb {N}_0\) and, for each \(n\in \mathbb {N}\), independently setting
$$\begin{aligned} \xi (-n) = {\left\{ \begin{array}{ll} \xi _0(n)&{}\text{ with } \text{ probability } p,\\ \xi _0(-n)&{}\text{ otherwise }. \end{array}\right. } \end{aligned}$$
We henceforth refer to \(\xi \) as the potential, and denote its corresponding probability and expectation by \({\mathrm {Prob}}\) and \(\mathrm {E}\) respectively.
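For concreteness, sampling \(\xi \) on a finite window can be sketched in a few lines of Python (illustrative only; the function name and the dictionary representation are our own choices, not the paper's):

```python
import random

def sample_potential(N, alpha, p, rng):
    """Partially duplicated Pareto potential on the window {-N,...,N} (sketch)."""
    xi = {z: rng.paretovariate(alpha) for z in range(N + 1)}  # xi(n) = xi_0(n) for n >= 0
    for z in range(1, N + 1):
        if rng.random() < p:
            xi[-z] = xi[z]                     # site -n duplicated with probability p
        else:
            xi[-z] = rng.paretovariate(alpha)  # otherwise -n keeps its own value xi_0(-n)
    return xi

rng = random.Random(0)
xi = sample_potential(2000, 1.5, 0.7, rng)
dup_frac = sum(xi[-z] == xi[z] for z in range(1, 2001)) / 2000.0   # empirical density of D
```

Since the \(\xi _0\) values are continuous, ties occur only through duplication, so `dup_frac` estimates p.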
The model that we consider is the parabolic Anderson model on \(\mathbb {Z}\)—i.e. the solution of equation (1.1)—with the partially duplicated potential \(\xi \). It follows from [6] by the same argument as in the i.i.d. case that the solution exists provided that \(\alpha >1\), and is given by the Feynman-Kac formula
$$\begin{aligned} u(t,z)=\mathbb {E}\Big [\exp \Big \{\int _0^t\xi (X_s)ds\Big \}{\mathbf 1}\{X_t=z\}\Big ], \qquad (t,z)\in (0,\infty )\times \mathbb {Z}, \end{aligned}$$
where \((X_t)_{t\ge 0}\) is a continuous-time random walk on \(\mathbb {Z}\) with generator \(\Delta \) started at the origin and \(\mathbb {P}\) and \(\mathbb {E}\) are its corresponding probability and expectation. We denote by
$$\begin{aligned} U(t) = \sum _{z\in \mathbb {Z}}u(t,z) \end{aligned}$$
the total mass of the solution.

1.4 The phase transition in the model

We are now ready to introduce our results. Let \(D =\{z \in \mathbb {Z}:\xi (z)=\xi (-z)\}\) denote the set of integers whose potential values are duplicated, and \(E = \mathbb {Z}{\setminus }D\) the set of integers whose potential values are unique (or exclusive) to them. For each \(t>0\) and \(z\in \mathbb {Z}\), define the functional
$$\begin{aligned} \Psi _t(z) =\xi (z)-\frac{|z|}{t}\log \xi (z). \end{aligned}$$
Notice that \(\Psi _t\) represents a balance between the local potential value and a ‘penalty term’ which increases with the distance to the origin; it turns out that \(\Psi _t\) is a good approximation for the asymptotic growth rate of the high peaks of the solution of the PAM, in the sense that, for a high peak centred at \(z \in \mathbb {Z}\),
$$\begin{aligned} \frac{1}{t} \log u(t, z) \approx \Psi _t(z) , \end{aligned}$$
see [11] for example. For each \(t > 0\), let \(\Omega _t\) be the set of maximisers of \(\Psi _t\); in Lemma 3.2 we prove that either \(\Omega _t = \{z\}\) for some \(z \in E\), or \(\Omega _t = \{-z, z\}\) for some \(z \in D\). Define \(\mathfrak {D}_t =\{|\Omega _t| = 2\}\) to be the event that the maximisers of \(\Psi _t\) are duplicated; examples of this event and its complement are depicted in Fig. 1.
Fig. 1

An example of the event \(\mathfrak {D}_t\) (left) and its complement (right). The filled and empty circles represent the values of \(\Psi _t\) for points in D and E respectively; we have only plotted the top order statistics of \(\Psi _t\). The dashed lines mark out the sites in \(\Omega _t\).

Our first result shows that, for all values of the Pareto parameter \(\alpha > 1\), the model always localises on the set \(\Omega _t\). We also show that the event \(\mathfrak {D}_t\) has non-negligible probability. Of course, outside the event \(\mathfrak {D}_t\) this is already enough to conclude that the model completely localises.

Theorem 1.1

(Localisation of the model) Let \(\alpha >1\). As \(t\rightarrow \infty \),
$$\begin{aligned} \frac{1}{U(t)}\sum _{z\in \Omega _t}u(t,z)\rightarrow 1 \qquad \text {in probability}. \end{aligned}$$
(1.2)
Moreover, as \(t\rightarrow \infty \),
$$\begin{aligned} {\mathrm {Prob}}(\mathfrak {D}_t)\rightarrow \frac{p}{2-p}. \end{aligned}$$
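Heuristically, the limit \(p/(2-p)\) arises because each distance \(n \ge 1\) independently carries either one duplicated potential value (with probability p) or two exclusive ones, and the top value is equally likely to be any of these i.i.d. values, giving \(pN/(pN + 2(1-p)N) = p/(2-p)\). This can be checked by a crude Monte Carlo simulation in Python (not from the paper; the window size, time and parameters are arbitrary, and the window truncation introduces a small bias):

```python
import math
import random

rng = random.Random(7)
alpha, p, N, t, trials = 1.5, 0.5, 200, 2000.0, 2000
hits = 0
for _ in range(trials):
    # sample the partially duplicated Pareto potential on the window {-N,...,N}
    xi = {z: rng.paretovariate(alpha) for z in range(N + 1)}
    for z in range(1, N + 1):
        xi[-z] = xi[z] if rng.random() < p else rng.paretovariate(alpha)
    # maximiser of Psi_t(z) = xi(z) - (|z|/t) log xi(z) over the window
    zstar = max(range(-N, N + 1), key=lambda z: xi[z] - abs(z) / t * math.log(xi[z]))
    if zstar != 0 and xi[zstar] == xi[-zstar]:
        hits += 1                          # maximisers duplicated, i.e. the event D_t

est = hits / trials                        # compare with p / (2 - p) = 1/3 here
```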

Our next two results establish the following phase transition in the model. If \(\alpha \in (1, 2)\), then on the event \(\mathfrak {D}_t\) the two sites in \(\Omega _t\) both have a non-negligible proportion of the solution; in other words, the model delocalises. By contrast, if \(\alpha \ge 2\), only one site in \(\Omega _t\) has a non-negligible proportion of the solution; in other words, the model completely localises whether the event \(\mathfrak {D}_t\) holds or not. Surprisingly, the critical value \(\alpha = 2\) does not depend on the value of p. To state these results, let \(Z^{{\scriptscriptstyle {({1}})}}_t\in \Omega _t\), with \(Z^{{\scriptscriptstyle {({1}})}}_t\) chosen to be positive on the event \(\mathfrak {D}_t\).

Theorem 1.2

(Delocalisation in the case \(\alpha \in (1, 2)\)) Let \(\alpha \in (1,2)\). As \(t\rightarrow \infty \),
$$\begin{aligned} \mathcal {L}\left( \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)} \, \Big | \, \mathfrak {D}_t\right) \Rightarrow \mathcal {L}(\Upsilon ), \end{aligned}$$
where \(\Upsilon \) is a random variable with positive density on \(\mathbb {R}_+\), \(\mathcal {L}(\cdot )\) denotes the law of a random variable, and \(\Rightarrow \) denotes weak convergence. In Sect. 5.3 we give an explicit construction of the random variable \(\Upsilon \).

Theorem 1.3

(Complete localisation in the case \(\alpha \ge 2\)) Let \(\alpha \ge 2\). As \(t\rightarrow \infty \),
$$\begin{aligned} \Big |\log \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)}\Big |\rightarrow \infty \qquad \text {in probability.} \end{aligned}$$

Remark 1

At first glance it may seem counter-intuitive that delocalisation occurs for small, rather than large, values of \(\alpha \), since by analogy with the i.i.d. case we might expect that the heavier the tails of the potential, the stronger the localisation. However, in our model it is precisely the strengthening of the concentration effect for small \(\alpha \) which results in delocalisation.

To explain this, consider that if \(\alpha \) is smaller, the advantage of the sites in \(\Omega _t\) relative to other sites is increased. We show that, if \(\alpha \) is small enough, this advantage is so great that the impact of the other potential values (at sites closer to the origin than \(Z^{{\scriptscriptstyle {({1}})}}_t\)) is minimal, and the solution cannot readily distinguish between the sites in \(\Omega _t\). On the other hand, for large values of \(\alpha \) the advantage is less pronounced, and the fluctuations in the other potential values eventually force one of the sites in \(\Omega _t\) to be significantly more beneficial than the other. In the next subsection, we give some heuristics for why the transition occurs at \(\alpha = 2\). \(\square \)

Remark 2

One surprising aspect of the phase transition in the model is that it is not sharp. In particular, the random variable \(\Upsilon \) in Theorem 1.2 does not, as might be expected, degenerate for small \(\alpha \). As will be further explained in the next subsection, this is ultimately due to two different scales, arising from distinct sources, exactly cancelling each other out. \(\square \)

Remark 3

The proof of Theorem 1.1 is relatively straightforward, and is similar to analogous results in the i.i.d. case, see [11, 16, 17]. The proofs of Theorems 1.2 and 1.3 are much more involved, and require us to analyse the model, and indeed the PAM with i.i.d. potential, in much finer detail than has been done in previous work. \(\square \)

Remark 4

Our main results can be recast as a demonstration of the robustness, or lack thereof, of the total mass of the solution of the PAM with i.i.d. potential under a resampling of some of the potential values. More precisely, suppose \(u(t, z)\) denotes the solution of the PAM on \(\mathbb {Z}\) with the i.i.d. potential \(\xi _0\), with \(U(t) = \sum _z u(t, z)\) the total mass of the solution. Now consider resampling each potential value independently with probability \(q \in (0,1)\), and let \(\tilde{u}(t, z)\) be the solution of the PAM with this resampled potential, with \(\tilde{U}(t) = \sum _z \tilde{u}(t, z)\) the total mass of the solution. Then our results, suitably translated, demonstrate the following phase transition. If \(\alpha \in (1,2)\), then there exists an event of non-negligible probability on which \(U(t)/\tilde{U}(t)\) converges in distribution to a random variable with positive density on \(\mathbb {R}_+\). By contrast, if \(\alpha \ge 2\), then \(|\log U(t)/\tilde{U}(t)| \rightarrow \infty \) in probability. \(\square \)

1.5 Heuristics for the phase transition

We start by recalling (see above) that the high peaks of the solution have the first-order approximation
$$\begin{aligned} \log u(t, z) \approx t \Psi _t(z) = t \xi (z)- |z| \log \xi (z) . \end{aligned}$$
The main challenge with analysing our model, compared to the i.i.d. case, is that on the event \(\mathfrak {D}_t\) there are two maximisers of the functional \(\Psi _t\). Hence the height of the peak at these sites is identical up to the first-order approximation. As a result, and in contrast to the i.i.d. case, in order to understand the localisation phenomena we must turn to second-order contributions. Notice that the first-order approximation, captured by the functional \(\Psi _t\), has the nice feature that it is local, depending on the value of \(\xi \) at the site z only. By contrast, the second-order contributions depend on all the potential values along entire paths to \(Z^{{\scriptscriptstyle {({1}})}}_t\) and \(-Z^{{\scriptscriptstyle {({1}})}}_t\). This makes them much more challenging to study.

To explain the phase transition in the model at \(\alpha = 2\), we show that the second-order contributions undergo two distinct transitions as \(\alpha \) increases, both of which, seemingly coincidentally, occur at \(\alpha = 2\). The first transition is the negligibility or otherwise of non-direct paths which end at the sites in \(\Omega _t\); this transition serves mainly as an extra technical difficulty in our proofs, rather than a determining factor in the phase transition of the model. The second transition is a shift in the fluctuations of the second-order contributions from the Gaussian universality class (\(\alpha \ge 2\)) to the \(\alpha \)-stable universality class (\(\alpha \in (1, 2)\)), and it is this which turns out to cause the phase transition of the model.

These transitions are also relevant for the PAM with i.i.d. potential, and give a more nuanced understanding of localisation phenomena in the i.i.d. case than has previously been available. For example, in the case \(\alpha \in (1, 2)\), our proof of the first transition establishes that the PAM path measure, given by
$$\begin{aligned} d \mathbb {Q}( (X_s)_{s \le t} ) = \frac{1}{U(t)} \exp \Big \{\int _0^t\xi _0(X_s)ds \Big \} \, d \mathbb {P}( (X_s)_{s \le t} ) , \end{aligned}$$
concentrates on a single geometric path (i.e. the direct path to the localisation site), which is a much stronger result than the complete localisation of the solution. In the case \(\alpha \ge 2\), we strongly suspect that the path measure instead concentrates on a class of paths that end at the localisation site but which also contain small loops.

1.5.1 The first transition: direct/non-direct paths

Recall that the Feynman-Kac formula allows us to consider the contribution to U(t) coming from different geometric paths which start at the origin. Assuming the localisation result in Theorem 1.1, we know that, for all \(\alpha > 1\), the only significant contribution to U(t) comes from paths which end in \(\Omega _t\). In Proposition 4.3 we show that, if \(\alpha \in (1,2)\), the only significant contribution to U(t) actually comes from the direct paths to \(\Omega _t\); here we give some heuristics for why this should be true. On the other hand, if \(\alpha \ge 2\), then we strongly believe that certain sets of non-direct paths do make a non-negligible contribution to U(t); since we do not need this for our main results, we do not formally prove this.

Assume that \(\alpha \in (1,2)\) and let \(y^{(t)}\) denote the direct path from the origin to \(Z^{{\scriptscriptstyle {({1}})}}_t\). For the purposes of keeping the calculations simple, we will show only that the contribution to U(t) from the paths \(\Pi ^{(t, +)}\) from the origin to \(Z^{{\scriptscriptstyle {({1}})}}_t\), obtained by adding a single loop of length two to \(y^{(t)}\) anywhere along the path except at the end, is negligible with respect to the contribution to U(t) from the path \(y^{(t)}\) itself. The same argument can be extended, with minor adaptation, to cover all non-direct paths to \(\Omega _t\).

We can assume without loss of generality that \(Z^{{\scriptscriptstyle {({1}})}}_t\in \mathbb {N}\). For any path \(y=(y_0,\ldots ,y_n)\) of length n we can write the contribution from y at time t as
$$\begin{aligned} U(t,y)=e^{-2t} I_{n}(t; \xi (y_0),\dots ,\xi (y_{n})), \end{aligned}$$
for a function \(I_n(t; a_0,\dots ,a_n)\) with a rather nice structure; see Eqs. (2.2) and (2.3). In Lemma 3.9 we prove a bound on I which enables us to compare \(U(t,y)\) for various paths. This lemma implies that for any path \(y^{(t,+)} \in \Pi ^{(t, +)}\) we have
$$\begin{aligned} \frac{U(t,y^{(t,+)})}{U(t,y^{(t)})}\le \max _{0\le j<Z^{{\scriptscriptstyle {({1}})}}_t}(\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (j))^{-2}, \end{aligned}$$
which reflects the fact that each extra step induces a ‘penalty’ of order \((\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (j))^{-1}\). In Proposition 5.6 we prove that (up to a small correction) \(\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (j)\ge (t/\log t)^{1/(\alpha -1)}\) for any \(0\le j<Z^{{\scriptscriptstyle {({1}})}}_t\), and also that \(|Z^{{\scriptscriptstyle {({1}})}}_t|\) is asymptotically \((t/\log t)^{\alpha /(\alpha -1)}\). Since there are no more than \(2|Z^{{\scriptscriptstyle {({1}})}}_t|\) such paths (there are \(|Z^{{\scriptscriptstyle {({1}})}}_t|\) places to add the loop and two directions the loop can go in), their total contribution is at most
$$\begin{aligned} 2|Z^{{\scriptscriptstyle {({1}})}}_t|\Big (\frac{\log t}{t}\Big )^{2/(\alpha -1)}U(t,y^{(t)})\le 2\Big (\frac{t}{\log t}\Big )^{(\alpha -2)/(\alpha -1)}U(t,y^{(t)}). \end{aligned}$$
Notice that the exponent is negative if \(\alpha < 2\), which confirms that such paths are negligible with respect to the direct path. As mentioned, we can readily extend this argument to all non-direct paths.

1.5.2 The second transition: the universality class of fluctuations

To keep things simple, and since the intuition is correct, we shall for now assume that, for all \(\alpha > 1\), it is sufficient to consider only direct paths (even though we strongly believe that this is only true in the case \(\alpha \in (1, 2)\)).

Assume that the event \(\mathfrak {D}_t\) holds and denote by \(y^{(t,1)}\) the direct path to \(Z^{{\scriptscriptstyle {({1}})}}_t\) and \(y^{(t,-1)}\) the direct path to \(-Z^{{\scriptscriptstyle {({1}})}}_t\). We derive in Lemma 3.8 that, provided \(a_n\ne a_i\) for \(i\ne n\), the function I satisfies
$$\begin{aligned} I_n(t; a_0,\dots ,a_n)=e^{ta_n}\prod \limits _{j=0}^{n-1}\frac{1}{a_n-a_j} -\sum _{i=0}^{n-1}I_i(t,a_0,\dots ,a_i)\prod _{j=i}^{n-1} \frac{1}{a_n-a_j}. \end{aligned}$$
In Proposition 4.2 we show that the second term in the expression for I above can be essentially discarded when considering the direct path, thus giving
$$\begin{aligned} U(t,y^{(t, 1)})\approx e^{t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-2t} \prod _{j=0}^{|Z^{{\scriptscriptstyle {({1}})}}_t|-1}\frac{1}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi ( j)}, \end{aligned}$$
and similarly for \(U(t,y^{(t, -1)})\). Using the assumption that only direct paths are significant, we obtain
$$\begin{aligned} \log \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)}&\approx - \sum _{0\le j<|Z^{{\scriptscriptstyle {({1}})}}_t|} \Big [\log \Big (1-\frac{\xi (j)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big )-\log \Big (1-\frac{\xi (-j)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big )\Big ] \nonumber \\&\approx \xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{-1}\sum _{0\le j<|Z^{{\scriptscriptstyle {({1}})}}_t|}(\xi (j)-\xi (-j)) + \ldots , \end{aligned}$$
(1.3)
where we have used a Taylor expansion for the logarithm in the last step (this Taylor expansion does not actually converge if \(\alpha < 2\), but it does give a good insight into the scale of the fluctuations; see Sect. 6 for precise statements). Note that the summand is zero for each \(j\in D\) and so in expectation there are \(q|Z^{{\scriptscriptstyle {({1}})}}_t|\) non-zero terms, where \(q = 1-p\). At this point we have reduced the study of the ratio \(u(t,Z^{{\scriptscriptstyle {({1}})}}_t)/u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)\) to the study of fluctuations in the sum of independent (although not identically distributed) random variables, and so we may appeal to the well-developed theory of such fluctuations.
In the case \(\alpha \in (1,2)\), these fluctuations belong to the \(\alpha \)-stable universality class, and so we obtain
$$\begin{aligned} \log \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)}\approx \xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{-1}(q|Z^{{\scriptscriptstyle {({1}})}}_t|)^{\frac{1}{\alpha }}Y, \end{aligned}$$
(1.4)
where Y is a certain non-degenerate random variable. Since we prove in Proposition 5.6 that
$$\begin{aligned} \xi (Z^{{\scriptscriptstyle {({1}})}}_t) \approx (t/\log t)^{1/(\alpha -1)} \quad \text {and} \quad |Z^{{\scriptscriptstyle {({1}})}}_t| \approx (t/\log t)^{\alpha /(\alpha -1)}, \end{aligned}$$
(1.5)
the growing scales in (1.4) exactly cancel out. Hence the ratio \(u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, -Z^{{\scriptscriptstyle {({1}})}}_t)\) remains of constant order as \(t \rightarrow \infty \), and so there is a non-negligible proportion of the solution at both sites in \(\Omega _t\).
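Explicitly, substituting the scales (1.5) into (1.4), the exponents of \(t/\log t\) cancel:

```latex
\xi\big(Z^{(1)}_t\big)^{-1}\big(q\,|Z^{(1)}_t|\big)^{1/\alpha}
  \approx \Big(\frac{t}{\log t}\Big)^{-\frac{1}{\alpha-1}}
          q^{1/\alpha}\Big(\frac{t}{\log t}\Big)^{\frac{1}{\alpha}\cdot\frac{\alpha}{\alpha-1}}
  = q^{1/\alpha}\Big(\frac{t}{\log t}\Big)^{-\frac{1}{\alpha-1}+\frac{1}{\alpha-1}}
  = q^{1/\alpha},
```

so the prefactor of Y in (1.4) is of constant order.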
In the case \(\alpha > 2\), the fluctuations are instead in the Gaussian universality class, and so we obtain
$$\begin{aligned} \log \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)}\approx \xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{-1}(q|Z^{{\scriptscriptstyle {({1}})}}_t|)^{\frac{1}{2}}Y. \end{aligned}$$
Using (1.5), this gives that
$$\begin{aligned} \left| \log \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)} \right| \approx (t/\log t)^{\frac{1}{\alpha -1}(-1 +\alpha /2)} \rightarrow \infty . \end{aligned}$$
The case \(\alpha = 2\) is slightly more delicate, but using the extra logarithmic factor that appears in the fluctuations, we can prove that \(| \log u(t,Z^{{\scriptscriptstyle {({1}})}}_t)/u(t,-Z^{{\scriptscriptstyle {({1}})}}_t) | \rightarrow \infty \) also in this case.
The above analysis also gives an indication of why the model is harder to study in \(d \ge 2\). Indeed, even assuming only the shortest paths to \(\Omega _t\) make a non-negligible contribution to U(t), since there are in general many such shortest paths we must replace (1.3) with
$$\begin{aligned} \log \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)} \approx \log \frac{ \sum _{p \in \{\text {shortest paths to } Z^{{\scriptscriptstyle {({1}})}}_t\}} \prod _{0\le j<|Z^{{\scriptscriptstyle {({1}})}}_t|} (\xi (Z^{{\scriptscriptstyle {({1}})}}_t) - \xi (p_j))^{-1} }{\sum _{p \in \{ \text {shortest paths to } -Z^{{\scriptscriptstyle {({1}})}}_t\}} \prod _{0\le j<|Z^{{\scriptscriptstyle {({1}})}}_t|} (\xi (Z^{{\scriptscriptstyle {({1}})}}_t) - \xi (p_j))^{-1} } , \end{aligned}$$
where \((p_j)\) denote the sites along p. The fluctuation theory for this expression is significantly more complicated than for (1.3), as it does not reduce to the study of sums of independent random variables.

1.6 Future work

Intuitively, the closer p is to 1, the more symmetric the model becomes and the more likely it is that the model delocalises for a wider class of potentials. Our results show that if p is uniformly bounded away from 1 then this intuition is not realised, since the threshold \(\alpha = 2\) is the same for all values of \(p \in (0 ,1)\). This leads us to wonder what happens if p is not uniformly bounded away from 1. One way to investigate this is to let \(\xi (z)=\xi (-z)\) with probability \(p=p(|z|)\) that depends on the distance of z from the origin. We can then ask the question: how fast should \(p(n)\rightarrow 1\) so that, for a given value of \(\alpha >2\), complete localisation fails? We conjecture that there is a critical scale for p(n) such that complete localisation holds if and only if \(p(n)\rightarrow 1\) more slowly than this scale. We will investigate this model in a future paper.

2 Outline of proof

In this section we give an outline of the proof of our main results, and an overview of the rest of the paper. We assume henceforth that \(\alpha > 1\).

Step 1: Trimming the path set. As already remarked, the Feynman-Kac formula allows us to consider contributions to \(u(t,z)\) coming from various geometric paths which start at the origin and are at site z at time t. The first step is to eliminate paths that a priori make a negligible contribution to the solution, either because they fail to hit the sites in \(\Omega _t\) or because they make too many jumps. This step is rather standard, and is similar to that in [11, 16, 17].

We now define the a priori negligible paths. Introduce the scales
$$\begin{aligned} r_t=\Big (\frac{t}{\log t}\Big )^{\frac{\alpha }{\alpha -1}} \qquad \text {and}\qquad a_t=\Big (\frac{t}{\log t}\Big )^{\frac{1}{\alpha -1}}, \end{aligned}$$
which, as suggested in (1.5), are the asymptotic scales for \(|Z^{{\scriptscriptstyle {({1}})}}_t|\) and \(\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\) respectively. For technical reasons, we also introduce some auxiliary positive scaling functions \(f_t \rightarrow 0\) and \(g_t \rightarrow \infty \) which can be thought of as being arbitrarily slowly decaying or growing. We shall need these scales to satisfy
$$\begin{aligned} g_t,1/f_t=O(\log \log \log t). \end{aligned}$$
(2.1)
Let \(R_t=|Z^{{\scriptscriptstyle {({1}})}}_t|(1+f_t)\). For any set \(A\subseteq \mathbb {Z}\) denote by \(\tau _A=\inf \{t>0:X_t\in A\}\) its hitting time by the continuous-time random walk \((X_s)\). Let \(J_t\) be the number of jumps of \((X_s)\) by time t. We decompose the total mass U(t) into a significant component
$$\begin{aligned} U_0(t) =\mathrm {E}\Big [\exp \Big \{\int _0^t \xi (X_s)ds\Big \} {\mathbf 1}\{J_t \le R_t, \tau _{\Omega _t} < t \} \Big ] \end{aligned}$$
and a negligible component \(U_1(t) = U(t) - U_0(t)\).
In Sect. 3.2 we use standard methods to prove that \(U_1\) is negligible with respect to U as long as certain typical properties of \(\xi \) hold. To define these properties, denote, for each \(n\in \mathbb {N}_0\),
$$\begin{aligned} \xi _n^{{\scriptscriptstyle {({1}})}}=\max \{\xi (z):|z|\le n\} \qquad \text {and}\qquad \xi _n^{{\scriptscriptstyle {({2}})}}=\max \{\xi (z):|z|\le n, \xi (z)<\xi _n^{{\scriptscriptstyle {({1}})}}\}. \end{aligned}$$
Let \(Z^{{\scriptscriptstyle {({2}})}}_t\) be a maximiser of \(\Psi _t\) on the set \(\mathbb {Z}{\setminus }\Omega _t\); we prove that \(Z^{{\scriptscriptstyle {({2}})}}_t\) exists in Lemma 3.2. The typical properties are contained in the event
$$\begin{aligned} \mathcal {E}_t&=\Big \{r_tf_t<|Z^{{\scriptscriptstyle {({1}})}}_t|<r_tg_t, \ a_tf_t<\xi (Z^{{\scriptscriptstyle {({1}})}}_t)<a_tg_t, \ \Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)-\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)\\&\quad>a_tf_t , \Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)>f_t\xi (Z^{{\scriptscriptstyle {({1}})}}_t), \ \xi (Z^{{\scriptscriptstyle {({1}})}}_t)=\xi _{R_t}^{{\scriptscriptstyle {({1}})}}, \ \xi _{R_t}^{{\scriptscriptstyle {({1}})}}-\xi _{R_t}^{{\scriptscriptstyle {({2}})}}> a_tf_t, \ \xi (z)\nonumber \\&\quad <\frac{|z|}{t}\log \frac{|z|}{2et} \,\,\forall \,|z|>r_tg_t \Big \}, \end{aligned}$$
which in particular guarantees a large gap between the value of \(\Psi _t\) at sites in \(\Omega _t\) and all other sites.

Step 2: Reduction to subsets of paths that end at \(\Omega _t\). At this point understanding \(U_0\) becomes the main goal, and we aim to find out which paths make a non-negligible contribution to it; here we make a distinction between the cases \(\alpha \in (1, 2)\) and \(\alpha \ge 2\) (see the heuristics in Sect. 1.5).

The main input is a careful analysis of the properties of the function I that defines the contribution to U(t) for any path. To define this function precisely, denote by
$$\begin{aligned} \mathcal {P}_{all}=\{y=(y_0,\dots ,y_{\ell })\in \mathbb {Z}^{\ell +1}:\ell \in \mathbb {N}_0,|y_i-y_{i-1}|=1\text { for all }1\le i\le \ell \} \end{aligned}$$
the set of all geometric paths on \(\mathbb {Z}\). For each path \(y\in \mathcal {P}_{all}\), denote by \(\ell (y)\) its length (counted as the number of edges). Denote by \((\tau _i)_{i\in \mathbb {N}_0}\) the sequence of the jump times of the continuous-time random walk \((X_t)\) and by
$$\begin{aligned} P(t,y)=\,&\{X_0=y_0, \, X_{\tau _0+\cdots +\tau _{i-1}}=y_i\text { for all }1\le i\le \ell (y), \quad t-\tau _{\ell (y)}\le \tau _0\nonumber \\&+\cdots +\tau _{\ell (y)-1}<t\} \end{aligned}$$
the event that the random walk has the trajectory y up to time t. Let
$$\begin{aligned} U(t,y)=\mathbb {E}\Big [\exp \Big \{\int _0^t\xi (X_s)ds\Big \}{\mathbf 1}_{P(t,y)}\Big ] \end{aligned}$$
be the contribution of the event \(P(t,y)\) to U(t). By direct computation, we have
$$\begin{aligned} U(t,y)&=2^{-\ell (y)}\mathbb {E}\left[ \exp \left\{ \sum _{i=0}^{\ell (y)-1}\tau _i\xi (y_i)\right. \right. \nonumber \\&\quad \left. \left. +\left( t-\sum _{i=0}^{\ell (y)-1}\tau _i\right) \xi (y_{\ell (y)})\right\} {\mathbf 1}\left\{ \sum _{i=0}^{\ell (y)-1}\tau _i<t,\sum _{i=0}^{\ell (y)}\tau _i>t\right\} \right] \nonumber \\&=2\int _{\mathbb {R}_{+}^{\ell (y)+1}}\left[ \exp \left\{ \sum _{i=0}^{\ell (y)-1}x_i \xi (y_i)+\left( t-\sum _{i=0}^{\ell (y)-1}x_i\right) \xi (y_{\ell (y)}) -2\sum _{i=0}^{\ell (y)}x_i\right\} \right. \nonumber \\&\quad \left. \times {\mathbf 1}\left\{ \sum _{i=0}^{\ell (y)-1}x_i<t,\sum _{i=0}^{\ell (y)}x_i> t\right\} \right] dx_0\cdots dx_{\ell (y)}\nonumber \\&=e^{-2t} I_{\ell (y)}(t; \xi (y_0),\dots ,\xi (y_{\ell (y)})). \end{aligned}$$
(2.2)
where the function I is defined by
$$\begin{aligned} I_n(t; a_0,\dots ,a_n)=e^{ta_n}\int _{\mathbb {R}_{+}^{n}}\exp \left\{ \sum _{i=0}^{n-1}x_i (a_i-a_n)\right\} {\mathbf 1}\left\{ \sum _{i=0}^{n-1}x_i<t\right\} dx_0\cdots dx_{n-1}, \end{aligned}$$
(2.3)
for each \(t>0\), \(n\in \mathbb {N}\), and \(a_0,\dots ,a_n\in \mathbb {R}\). In particular, \(I_0(t; a_0)=e^{ta_0}\).
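As a quick numerical sanity check on the definition (2.3) (not part of the argument; the values of t, \(a_0\), \(a_1\) and the grid size m below are arbitrary test choices), one can compare a midpoint-rule evaluation of the integral defining \(I_1\) with the closed form \((e^{ta_0}-e^{ta_1})/(a_0-a_1)\) obtained by direct integration:

```python
import math

def I1_quadrature(t, a0, a1, m=100_000):
    # Midpoint rule for the n = 1 case of (2.3):
    # I_1(t; a0, a1) = e^{t a1} * int_0^t e^{x (a0 - a1)} dx
    h = t / m
    total = sum(math.exp((i + 0.5) * h * (a0 - a1)) for i in range(m))
    return math.exp(t * a1) * total * h

def I1_closed(t, a0, a1):
    # Direct integration gives (e^{t a0} - e^{t a1}) / (a0 - a1),
    # which is visibly symmetric in (a0, a1)
    return (math.exp(t * a0) - math.exp(t * a1)) / (a0 - a1)

t, a0, a1 = 2.0, 0.7, -0.4
assert abs(I1_quadrature(t, a0, a1) - I1_closed(t, a0, a1)) < 1e-6
assert abs(I1_closed(t, a0, a1) - I1_closed(t, a1, a0)) < 1e-12
# the n = 0 case is the empty integral: I_0(t; a0) = e^{t a0}
```

The symmetry of the closed form in \(a_0\) and \(a_1\) is a first instance of the symmetric structure of I studied in Sect. 3.3.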

In Sect. 3.3 we show that I has a rather neat symmetric structure and study its properties. Using this understanding, in Sect. 4 we identify the paths making a non-negligible contribution to \(U_0\). For \(\alpha \in (1,2)\) the situation is relatively simple: in Propositions 4.2 and 4.3 we show that only the direct paths to \(\Omega _t\) are significant, and approximate their contribution to \(U_0(t)\) by a certain product over the path. This is useful because, since each site is visited at most once, we can invoke standard fluctuation theory to analyse this product.

The situation is more complicated for \(\alpha \ge 2\) since we strongly suspect that non-direct paths are significant. Instead we show in Proposition 4.6 that, as long as certain additional typical properties of \(\xi \) hold, we can limit the significant paths to those that end at \(\Omega _t\) and visit each site in \(\{0\}\cup \mathcal {N}_t\) at most once, where \(\mathcal {N}_t\) is a set of non-duplicated sites of high potential. The advantage is that, after careful conditioning, it will be sufficient to study the fluctuations of the contribution from sites in \(\mathcal {N}_t\). Since these sites are visited at most once, we can again apply standard fluctuation theory.

To define the set \(\mathcal {N}_t\) precisely, we first introduce an additional auxiliary scaling function
$$\begin{aligned} \delta _t=(\log t)^{-\frac{1}{2\alpha }} \end{aligned}$$
which is chosen in such a way that, on the one hand, \(1/\delta _t\) grows slower than \((\log t)^{\frac{1}{\alpha }}\), but on the other hand, \(\log (1/\delta _t)\) grows faster than any power of \(g_t\) and \(1/f_t\). For each \(t>0\), we then let
$$\begin{aligned} \mathcal {N}_t= \big \{z\in \mathbb {Z}: 0<|z|<|Z^{{\scriptscriptstyle {({1}})}}_t|, z \in E, \xi (z)>\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\big \}. \end{aligned}$$
(2.4)
The additional typical properties we need are
$$\begin{aligned} \mathcal {E}_t^{[2,\infty )} =\Big \{\delta _t^{-\alpha }/\log \log (1/\delta _t)<|\mathcal {N}_t| <\delta _t^{-\alpha }\log (1/\delta _t), \inf _{z\in \mathcal {N}_t,x\in \Omega _t}|z-x|>g_t\Big \} , \end{aligned}$$
which guarantees that the set \(\mathcal {N}_t\) is large enough and well-separated from \(\Omega _t\); in Proposition 3.3 we prove that, conditionally on the event \(\mathcal {E}_t\), this event holds with probability tending to one.

This analysis is already enough to finish the proof of Theorem 1.1 assuming the event \(\mathcal {E}_t\) holds; we complete the proof at the end of Sect. 4.

Step 3: Point process techniques. In Sect. 5 we build up a point process approach to study the high exceedances of \(\xi \) and the top order statistics of the penalisation functional \(\Psi _t\). We start by proving that the potential \(\xi \), properly rescaled, converges to a Poisson point process. We then use this convergence to pass certain functionals of \(\xi \), including properties of \(\Psi _t\), to the limit. Since this analysis involves several lengthy computations, some of the proofs are deferred to “Appendix A”.

To end the section, we draw two main consequences from our point process analysis. First, we establish that the event \(\mathcal {E}_t\) holds eventually with overwhelming probability. Second, we give an explicit construction of the limit random variable \(\Upsilon \) appearing in Theorem 1.2; this is done by identifying its law as that of a certain time-inhomogeneous Lévy process stopped at a random time.

Step 4: Fluctuation theory for the ratio \(u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, -Z^{{\scriptscriptstyle {({1}})}}_t)\). At this point we have assembled all the main ingredients, and all that is left is to apply fluctuation theory to analyse the ratio \(u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, -Z^{{\scriptscriptstyle {({1}})}}_t)\); here we again distinguish between the cases \(\alpha \in (1, 2)\) and \(\alpha \ge 2\) (see the heuristics in Sect. 1.5).

In Sect. 6 we study the case \(\alpha \in (1,2)\) and complete the proof of Theorem 1.2. In particular, since only direct paths contribute significantly to \(U_0(t)\), and since the contribution from these paths can be approximated by a product over the path, we can use standard theory to study these fluctuations. With the aid of our point process analysis, we prove that the ratio \(u(t,Z^{{\scriptscriptstyle {({1}})}}_t)/u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)\) converges to the limit random variable we identify in Sect. 5.

In Sect. 7 we study the case \(\alpha \ge 2\) and complete the proof of Theorem 1.3. Here we apply a central limit theorem to establish that the fluctuations in \(u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, -Z^{{\scriptscriptstyle {({1}})}}_t)\) due to the sites \(\mathcal {N}_t\) (which are visited at most once) are in the Gaussian universality class; the proof of the central limit theorem is deferred to “Appendix B”. These fluctuations turn out to already be sufficient to prove that \(| \log u(t,Z^{{\scriptscriptstyle {({1}})}}_t)/u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)| \rightarrow \infty \), irrespective of the contribution due to the other sites.

3 Preliminaries

In this section we establish some preliminary results. First, we prove asymptotic properties of the potential \(\xi \). Second, we establish the negligibility of \(U_1(t)\). Lastly, we study the structure of the function I introduced in (2.3).

3.1 Asymptotic properties of the potential

To begin, we establish asymptotic properties of the potential. This allows us to deduce properties of the maximisers \(Z^{{\scriptscriptstyle {({1}})}}_t\) and \(Z^{{\scriptscriptstyle {({2}})}}_t\), and also to establish that \(\mathcal {E}_t^{[2,\infty )}\) holds eventually with overwhelming probability.

Lemma 3.1

Recall that \(\xi _n^{{\scriptscriptstyle {({1}})}} = \max _{|z| \le n} \xi (z)\). For every \(\varepsilon >0\), almost surely
$$\begin{aligned} n^{1/\alpha -\varepsilon }<\xi _n^{{\scriptscriptstyle {({1}})}}<n^{1/\alpha }(\log n)^{1/\alpha +\varepsilon } \end{aligned}$$
eventually.

Proof

According to [16, Lemma 3.5], almost surely the field \((\xi _0(z))_{z \in \mathbb {Z}}\) of independent Pareto(\(\alpha \)) random variables satisfies
$$\begin{aligned} \max \{\xi _0(z):|z|\le n\}<n^{1/\alpha }(\log n)^{1/\alpha +\varepsilon } \end{aligned}$$
and
$$\begin{aligned} \min \left\{ \max \{\xi _0(z):0\le z\le n\}, \max \{\xi _0(z):-n\le z\le 0\} \right\} >n^{1/\alpha -\varepsilon } \end{aligned}$$
eventually for all n, and the result follows. \(\square \)

Lemma 3.2

For fixed t, almost surely either \(\Omega _t = \{z\}\) for some \(z \in E\) or \(\Omega _t = \{-z, z\}\) for some \(z \in D\), and the same conclusion holds for the maximisers of \(\Psi _t\) on the set \(\mathbb {Z}\setminus \Omega _t\). Moreover, almost surely \(\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)> \Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)>1\) eventually for all t.

Proof

By Lemma 3.1 with \(0<\varepsilon <\min \{1-1/\alpha ,1/\alpha \}\), for all z with |z| sufficiently large
$$\begin{aligned} \Psi _t(z)\le |z|^{1/\alpha +\varepsilon }-\frac{|z|}{t}(1/\alpha -\varepsilon )\log |z|, \end{aligned}$$
which tends to \(-\infty \) as \(|z|\rightarrow \infty \). Hence \(\Psi _t\) is bounded above and attains its maximum for each \(t>0\). Since \(\Psi _t(z)\) is a continuous random variable with no point mass, this implies the first statement.

For the second statement, let \(z_1,\,z_2\in \mathbb {Z}_+\) be fixed sites satisfying \(\xi (z_1)\wedge \xi (z_2)>1\) (such sites exist almost surely). Then \(\Psi _t(z_1)\wedge \Psi _t(z_2)>1\) for all t sufficiently large and so in particular \(\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\) and \(\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)\) are both larger than one eventually. Again since \(\Psi _t(z)\) is a continuous random variable with no point mass, this implies the second statement. \(\square \)

Proposition 3.3

\(\text {Prob}\big (\mathcal {E}_t^{[2,\infty )} \, | \, \mathcal {E}_t \big )\rightarrow 1\) as \(t\rightarrow \infty \).

Proof

Let
$$\begin{aligned} \mathcal {E}'_t=\Big \{r_t f_t<|Z^{{\scriptscriptstyle {({1}})}}_t|<r_t g_t, \xi (Z^{{\scriptscriptstyle {({1}})}}_t)>a_tf_t,\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)>f_t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\Big \} \end{aligned}$$
and
$$\begin{aligned} \mathcal {E}''_t=\Big \{f_t|Z^{{\scriptscriptstyle {({1}})}}_t|<\big |\big \{|y|<|Z^{{\scriptscriptstyle {({1}})}}_t|:y \in E \big \}\big |<g_t|Z^{{\scriptscriptstyle {({1}})}}_t|\Big \}, \end{aligned}$$
and observe that we may work on the event \(\mathcal {E}'_t \cap \mathcal {E}''_t\) since \(\mathcal {E}'_t\) is implied by \(\mathcal {E}_t\) and \({\mathrm {Prob}}(\mathcal {E}''_t | \mathcal {E}'_t)\rightarrow 1\) by the law of large numbers.
For each \(t>0\), denote by \(\mathcal {G}_t\) the \(\sigma \)-algebra generated by D, \(Z^{{\scriptscriptstyle {({1}})}}_t\) and \(\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\), and denote the conditional probability with respect to \(\mathcal {G}_t\) by \({\mathrm {Prob}}_{\mathcal {G}_t}\). It is easy to see that, conditionally on \(\mathcal {G}_t\), the events
$$\begin{aligned} \big \{z\in \mathcal {N}_t\big \}_{z \in E,|z|<|Z^{{\scriptscriptstyle {({1}})}}_t|} \end{aligned}$$
are independent. Hence we can stochastically dominate the desired properties of \(\mathcal {N}_t\) by equivalent properties of Bernoulli trials, and use standard properties of such trials to complete the proof. For each \(z \in E\), \(|z|<|Z^{{\scriptscriptstyle {({1}})}}_t|\), the conditional distribution of \(\xi (z)\) with respect to \(\mathcal {G}_t\) is the Pareto distribution with parameter \(\alpha \) conditioned on \(\Psi _t(z)<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\). Observe that
$$\begin{aligned} 1\ge \int _{1}^{\infty }\frac{\alpha dy}{y^{\alpha +1}} {\mathbf 1}_{\{y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\}} \ge \int _{1}^{\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)}\frac{\alpha dy}{y^{\alpha +1}} =1-\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)^{-\alpha }>1/2\nonumber \\ \end{aligned}$$
(3.1)
uniformly for all z for all sufficiently large t almost surely. Further, using \(\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)>\delta _t f_ta_t>1\) and \(\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)>f_t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)>\delta _t \xi (Z^{{\scriptscriptstyle {({1}})}}_t)\) on \(\mathcal {E}'_t\) eventually, we have
$$\begin{aligned} \int _{1}^{\infty }\frac{\alpha dy}{y^{\alpha +1}} {\mathbf 1}_{\{y>\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t),y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\}}&\ge \int _{\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}^{f_t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\frac{\alpha dy}{y^{\alpha +1}}\\&=\xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{-\alpha }\big (\delta _t^{-\alpha }-f_t^{-\alpha }\big ) >(a_tg_t\delta _t)^{-\alpha }/2 \end{aligned}$$
and
$$\begin{aligned}&\int _{1}^{\infty }\frac{\alpha dy}{y^{\alpha +1}} {\mathbf 1}_{\{y>\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t),y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\}}\\&\quad \le \int _{\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}^{\infty }\frac{\alpha dy}{y^{\alpha +1}} =(\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\delta _t)^{-\alpha } <(a_tf_t\delta _t)^{-\alpha }. \end{aligned}$$
Combining the two inequalities above with (3.1), we get
$$\begin{aligned} (a_tg_t\delta _t)^{-\alpha }/2<{\mathrm {Prob}}_{\mathcal {G}_t}\big (z\in \mathcal {N}_t\big )< 2(a_tf_t\delta _t)^{-\alpha } \end{aligned}$$
(3.2)
uniformly for all z for all sufficiently large t almost surely. Using this together with the conditional independence and the properties guaranteed by \(\mathcal {E}'_t\) and \(\mathcal {E}''_t\) we infer that eventually
$$\begin{aligned} \text {Bin}(f_t^2 r_t, (a_tg_t\delta _t)^{-\alpha }/2) \prec |\mathcal {N}_t|\prec \text {Bin}(g_t^2 r_t, 2(a_tf_t\delta _t)^{-\alpha }), \end{aligned}$$
(3.3)
where \(\text {Bin}(n,\tau )\) denotes a binomial random variable with parameters \(n\in \mathbb {N}\) and \(\tau \in [0,1]\), and \(\prec \) denotes stochastic domination. By looking at the characteristic function of the binomial distribution we see that
$$\begin{aligned} \frac{\text {Bin}(n_t,\tau _t)}{n_t\tau _t}\Rightarrow 1 \end{aligned}$$
as \(t\rightarrow \infty \) if \(n_t\tau _t\rightarrow \infty \). This condition is clearly satisfied by both binomial random variables in (3.3) by the choice of \(\delta _t, f_t\), and \(g_t\). To complete the proof of the inequalities on \(|\mathcal {N}_t|\), it remains to notice that \(\delta _t^{-\alpha }/\log \log (1/\delta _t)\ll f_t^2 r_t (a_t g_t\delta _t)^{-\alpha }/2\) and \(2g_t^2 r_t (a_tf_t\delta _t)^{-\alpha }\ll \delta _t^{-\alpha }\log (1/\delta _t)\) since \(r_ta_t^{-\alpha }=1\) and by the choice of \(\delta _t, f_t\), and \(g_t\).
Similarly, the upper bound in (3.2) also implies that, conditionally on \(\mathcal {G}_t\),
$$\begin{aligned} \inf _{z\in \mathcal {N}_t,x\in \Omega _t}|z-x|\succ \min \big \{\text {Geo}_1\big (2(a_tf_t\delta _t)^{-\alpha }\big ), \text {Geo}_2\big (2(a_tf_t\delta _t)^{-\alpha }\big )\big \}, \end{aligned}$$
where \(\text {Geo}_1(\tau )\) and \(\text {Geo}_2(\tau )\) denote two independent geometric random variables with parameter \(\tau \in [0,1]\) supported on \(\mathbb {N}\). Observe that
$$\begin{aligned} \mathbb {P}(\text {Geo}(\tau _t)>g_t)= (1-\tau _t)^{\lfloor g_t\rfloor }\rightarrow 1 \end{aligned}$$
as \(t\rightarrow \infty \) if \(\tau _tg_t\rightarrow 0\). It remains to notice that \(2(a_tf_t\delta _t)^{-\alpha }g_t\rightarrow 0\) as \(t\rightarrow \infty \). \(\square \)
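The proof above relies on two elementary limit facts: \(\text {Bin}(n_t,\tau _t)/(n_t\tau _t)\Rightarrow 1\) when \(n_t\tau _t\rightarrow \infty \), and \((1-\tau _t)^{\lfloor g_t\rfloor }\rightarrow 1\) when \(\tau _tg_t\rightarrow 0\). A minimal numerical illustration (the parameters below are arbitrary test choices, not the actual scales \(a_t,f_t,g_t,\delta _t\)):

```python
import random

random.seed(0)

def binomial(n, p):
    # Bin(n, p) sampled as a sum of n Bernoulli(p) trials
    return sum(random.random() < p for _ in range(n))

# Bin(n, p)/(n p) concentrates at 1 once n p is large (here n p = 1000)
n, p, trials = 2000, 0.5, 200
ratios = [binomial(n, p) / (n * p) for _ in range(trials)]
mean_ratio = sum(ratios) / trials
assert abs(mean_ratio - 1) < 0.02

# P(Geo(tau) > g) = (1 - tau)^floor(g) -> 1 when tau * g -> 0;
# here tau = 1/t^2 and g = t, so tau * g = 1/t -> 0
tail = [(1 - 1 / t**2) ** int(t) for t in (10, 100, 1000)]
assert tail == sorted(tail) and tail[-1] > 0.99
```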

3.2 Eliminating the a priori negligible paths

We begin by decomposing \(U_1(t)=U_1'(t)+U_1''(t)\), where
$$\begin{aligned} U'_1(t)&=\mathrm {E}\Big [\exp \Big \{\int _0^t \xi (X_s)ds\Big \}{\mathbf 1}\{J_t>R_t\}\Big ], \end{aligned}$$
(3.4)
$$\begin{aligned} U''_1(t)&=\mathrm {E}\Big [\exp \Big \{\int _0^t \xi (X_s)ds\Big \}{\mathbf 1}\{J_t\le R_t,\tau _{\Omega _t}\ge t\}\Big ]. \end{aligned}$$
(3.5)
We first find a lower bound for U in Lemma 3.4 and upper bounds for \(U_1'\) and \(U_1''\) in Lemmas 3.5 and 3.6 respectively, before combining these to prove the negligibility of \(U_1\). This approach is standard and similar to [11, 16, 17].

Lemma 3.4

Almost surely,
$$\begin{aligned} \log U(t)>t\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t) - 2t + O(\log t) \end{aligned}$$
on the event \(\mathcal {E}_t\) as \(t\rightarrow \infty \).

Proof

The idea of the proof is the same as that of [11, Prop. 4.2]. Let \(\rho \in (0,1]\) and \(z\in \mathbb {Z}\), \(z\ne 0\). Following the lines of [11, Prop. 4.2], we obtain
$$\begin{aligned} U(t) > \exp \Big \{t(1-\rho )\xi (z)-|z|\log \frac{|z|}{e\rho t}-2t+O(\log |z|)\Big \}. \end{aligned}$$
(3.6)
Take \(z=Z^{{\scriptscriptstyle {({1}})}}_t\) and \(\rho =\frac{|Z^{{\scriptscriptstyle {({1}})}}_t|}{t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\). Observe that on the event \(\mathcal {E}_t\) this \(\rho \) eventually belongs to (0, 1] since
$$\begin{aligned} \frac{|Z^{{\scriptscriptstyle {({1}})}}_t|}{t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)} < \frac{g_t r_t}{t f_t a_t}=\frac{g_t }{f_t \log t}\rightarrow 0 \end{aligned}$$
by (2.1). Substituting this into (3.6) we obtain
$$\begin{aligned} U(t) > \exp \Big \{t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-|Z^{{\scriptscriptstyle {({1}})}}_t|\log \xi (Z^{{\scriptscriptstyle {({1}})}}_t)-2t+O(\log t)\Big \}, \end{aligned}$$
as required. \(\square \)
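The simplification in the final step of the proof above is elementary but worth recording: writing \(z=Z^{{\scriptscriptstyle {({1}})}}_t\), the choice \(\rho =|z|/(t\xi (z))\) gives \(t\rho \xi (z)=|z|\) and \(|z|/(\rho t)=\xi (z)\), so the exponent in (3.6) becomes

```latex
t(1-\rho)\xi(z) - |z|\log\frac{|z|}{e\rho t} - 2t
  = t\xi(z) - |z| - |z|\log\frac{\xi(z)}{e} - 2t
  = t\xi(z) - |z|\log\xi(z) - 2t .
```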

Lemma 3.5

Almost surely,
$$\begin{aligned} \log U_1'(t) < \max \Big \{t\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t) +o(ta_tf_t), \ t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-R_t\log \frac{R_t}{2et} + O(t) \Big \} \end{aligned}$$
on the event \(\mathcal {E}_t\) as \(t\rightarrow \infty \).

Proof

Observe that the number of jumps \(J_t\) of the continuous-time random walk by the time t has Poisson distribution with parameter 2t. Fix some \(0<\varepsilon <1-1/\alpha \). We can estimate the integral in (3.4) by \(t\xi ^{{\scriptscriptstyle {({1}})}}_n\) on the event \(\{J_t=n\}\) and then use Lemma 3.1 to obtain the almost-sure bound
$$\begin{aligned} U_1'(t)\le \sum _{n>R_t}e^{t\xi _n^{(1)}-2t}\frac{(2t)^n}{n!}\le \sum _{n>R_t}\exp \{tn^{\frac{1}{\alpha }+\varepsilon }-2t\}\frac{(2t)^n}{n!}. \end{aligned}$$
(3.7)
We now derive an upper bound for the tail of this series. Fix some \(\theta >1\) and \(\beta >(1-\varepsilon -1/\alpha )^{-1}\). Define \(\gamma :=\beta (1-\varepsilon -1/\alpha )-1\) and note that \(\gamma >0\). By Stirling’s formula,
$$\begin{aligned} n!=\sqrt{2\pi n}\left( \frac{n}{e}\right) ^n e^{\delta (n)},\qquad \text{ with } \lim _{n\rightarrow \infty }\delta (n)=0, \end{aligned}$$
and so for all \(n>t^\beta \) and sufficiently large t,
$$\begin{aligned}&t n^{\frac{1}{\alpha }+\varepsilon }+n\log (2t)-\log (n!)\le t n^{\frac{1}{\alpha }+\varepsilon }-n\log \frac{n}{2et}-\delta (n)\\&\quad \le tn^{\frac{1}{\alpha }+\varepsilon }\left( 1-\frac{n^{1-\frac{1}{\alpha }-\varepsilon }}{t}\log \frac{n}{2et}-\frac{\delta (n)}{tn^{\frac{1}{\alpha }+\varepsilon }}\right) \nonumber \\&\quad \le tn^{\frac{1}{\alpha }+\varepsilon }\left( 1-t^\gamma \log \frac{t^{\beta -1}}{2e}-\frac{\delta (n)}{tn^{\frac{1}{\alpha }+\varepsilon }}\right) \le -\theta \log n. \end{aligned}$$
Splitting the sum on the right of (3.7) at \(n=\lceil t^\beta \rceil \), and using \(\sum _{n>\lceil t^\beta \rceil }n^{-\theta }=o(1)\), we have
$$\begin{aligned} \log U_1'(t)\le \log \left[ \sum _{n> R_t}e^{t\xi _n^{{\scriptscriptstyle {({1}})}}-2t}\frac{(2t)^n}{n!}\right] < \max _{n> R_t}\left[ t\xi _n^{{\scriptscriptstyle {({1}})}}-n\log \frac{n}{2te}\right] +O(t). \end{aligned}$$
Denote by \(n_t> R_t\) the maximiser of the expression on the right-hand side, and by \(z_t\in \mathbb {Z}\) a point such that \(\xi (z_t)=\xi _{n_t}^{{\scriptscriptstyle {({1}})}}\). If \(|z_t|\le R_t\) then \(\xi (z_t)=\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\) on the event \(\mathcal {E}_t\) and so
$$\begin{aligned} \log U_1'(t)< t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-n_t\log \frac{n_t}{2te}+O(t) < t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-R_t\log \frac{R_t}{2te}+O(t). \end{aligned}$$
If \(|z_t|>R_t\) then by monotonicity \(n_t=|z_t|\). If in fact \(|z_t|>r_tg_t\), then on the event \(\mathcal {E}_t\),
$$\begin{aligned} \log U_1'(t)<O(t), \end{aligned}$$
whereas if \(R_t<|z_t|\le r_tg_t\) then almost surely
$$\begin{aligned} \log U_1'(t)&< t\Psi _t(z_t)+|z_t|\log \xi (z_t)-|z_t|\log \frac{|z_t|}{2te}+O(t) \\&<t\Psi _t(z_t)+O\left( ta_t\frac{g_t\log \log t}{\log t}\right) \le t\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)+o(ta_tf_t) \end{aligned}$$
by Lemma 3.1 and (2.1). \(\square \)

Lemma 3.6

Almost surely,
$$\begin{aligned} \log U_1''(t) < t\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)+o(t a_t f_t) \end{aligned}$$
on the event \(\mathcal {E}_t\) as \(t\rightarrow \infty \).

Proof

For any \(n\in \mathbb {N}_0\), let
$$\begin{aligned} \zeta _n=\max \{\xi (z):|z|\le n, z \notin \Omega _t\} . \end{aligned}$$
Similarly to Lemma 3.5 we have
$$\begin{aligned} \log U_1''(t)\le \log \left[ \sum _{n\le R_t}e^{t\zeta _n-2t}\frac{(2t)^n}{n!}\right] < \max _{n\le R_t}\left[ t\zeta _n-n\log \frac{n}{2te}\right] +O(t). \end{aligned}$$
Denote by \(n_t\le R_t\) the maximiser of the expression on the right-hand side, and by \(z_t\in \mathbb {Z}\) a point such that \(\xi (z_t)=\zeta _{n_t}\).
If \(|z_t|<r_t (\log t)^{-2}\) then by monotonicity and Lemma 3.1 with small \(\varepsilon >0\)
$$\begin{aligned} t\zeta _{n_t}-n_t\log \frac{n_t}{2te}< t\xi _{r_t(\log t)^{-2}}^{{\scriptscriptstyle {({1}})}}+2t < t r_t^{1/\alpha }(\log t)^{-1/\alpha +\varepsilon } =o(t a_t f_t). \end{aligned}$$
If \(r_t (\log t)^{-2}\le |z_t|\le R_t\) then by monotonicity
$$\begin{aligned} t\zeta _{n_t}-n_t\log \frac{n_t}{2te} =t\xi (z_t)-|z_t|\log \frac{|z_t|}{2te} =t\Psi _t(z_t)+|z_t|\log \frac{2te\xi (z_t)}{|z_t|}. \end{aligned}$$
Clearly \(\Psi _t(z_t)\le \Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)\) since \(z_t\notin \Omega _t\). By Lemma 3.1 on the event \(\mathcal {E}_t\)
$$\begin{aligned} |z_t|\log \frac{2te\xi (z_t)}{|z_t|}&< |z_t|\log \frac{2te(\log |z_t|)^{1/\alpha +\varepsilon }}{|z_t|^{1-1/\alpha }}\nonumber \\&= R_t O(\log \log t)=O(r_tg_t\log \log t)=o(t a_t f_t) \end{aligned}$$
by (2.1) as required. \(\square \)

Proposition 3.7

Almost surely,
$$\begin{aligned} \frac{U_1(t)}{U(t)}{\mathbf 1}_{\mathcal {E}_t} \rightarrow 0 \end{aligned}$$
as \(t\rightarrow \infty \).

Proof

We first claim that \(U_1'(t)/U(t) \rightarrow 0\) on the event \(\mathcal {E}_t\). Combining Lemmas 3.4 and 3.5 we have on \(\mathcal {E}_t\)
$$\begin{aligned} \log U_1'(t)-\log U(t)&< \max \Big \{t\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)+o(ta_tf_t),t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-R_t\log \frac{R_t}{2et}+O(t)\Big \}\nonumber \\&\quad -t\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t). \end{aligned}$$
First, on the event \(\mathcal {E}_t\)
$$\begin{aligned} t\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)-t\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)+o(ta_tf_t)<-t a_t f_t+o(ta_tf_t)\rightarrow -\infty . \end{aligned}$$
(3.8)
Second,
$$\begin{aligned} t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-R_t\log \frac{R_t}{2et}-t\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)+O(t)&=|Z^{{\scriptscriptstyle {({1}})}}_t|\log \xi (Z^{{\scriptscriptstyle {({1}})}}_t)-R_t\log \frac{R_t}{2et}+O(t)\\&< |Z^{{\scriptscriptstyle {({1}})}}_t|\log \frac{2et\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}{|Z^{{\scriptscriptstyle {({1}})}}_t|}\nonumber \\&\quad -f_t|Z^{{\scriptscriptstyle {({1}})}}_t|\log \frac{|Z^{{\scriptscriptstyle {({1}})}}_t|}{2et} +O(t). \end{aligned}$$
On the event \(\mathcal {E}_t\), as \(t\rightarrow \infty \), we have for the first term
$$\begin{aligned} |Z^{{\scriptscriptstyle {({1}})}}_t|\log \frac{2et\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}{|Z^{{\scriptscriptstyle {({1}})}}_t|} < r_tg_t\log \frac{2etg_ta_t}{f_tr_t} =r_tg_t\log \frac{2eg_t\log t}{f_t}\sim r_tg_t\log \log t \end{aligned}$$
and for the second term
$$\begin{aligned} f_t|Z^{{\scriptscriptstyle {({1}})}}_t|\log \frac{|Z^{{\scriptscriptstyle {({1}})}}_t|}{2et} > f_tr_tf_t\log \frac{r_tf_t}{2et}\sim \frac{1}{\alpha -1}f_t^2r_t\log t. \end{aligned}$$
By (2.1) the first term is negligible with respect to the second term and so
$$\begin{aligned} t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-R_t\log \frac{R_t}{2et}-t\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)+O(t)\rightarrow -\infty , \end{aligned}$$
which proves the claim.
It remains to show that \(U_1''(t)/U(t) \rightarrow 0\) on the event \(\mathcal {E}_t\). Combining Lemmas 3.4 and 3.6 we have on the event \(\mathcal {E}_t\)
$$\begin{aligned} \log U_1''(t)-\log U(t)< t\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)-t\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)+o(ta_tf_t) < -ta_tf_t+o(ta_tf_t)\rightarrow -\infty . \end{aligned}$$
\(\square \)

3.3 Structure of the function I

In this section we study the structure of the function I introduced in (2.3). Our point of departure is the recursion
$$\begin{aligned}&I_n(t;a_0,\dots ,a_n)\nonumber \\&\quad =\frac{1}{a_{n}-a_{n-1}} \Big [I_{n-1}(t; a_0,\dots ,a_{n-2},a_n)-I_{n-1}(t; a_0,\dots ,a_{n-2},a_{n-1})\Big ] \end{aligned}$$
(3.9)
whenever \(a_n\ne a_{n-1}\), obtained by evaluating the integral over \(x_{n-1}\) in (2.3). By iterating this recursion we establish the following.

Lemma 3.8

The following hold:
  1. If \(a_n \ne a_i\) for \(i \ne n\) then
    $$\begin{aligned} I_n(t; a_0,\dots ,a_n)=e^{ta_n}\prod \limits _{j=0}^{n-1}\frac{1}{a_n-a_j} -\sum _{i=0}^{n-1}I_i(t; a_0,\dots ,a_i)\prod _{j=i}^{n-1} \frac{1}{a_n-a_j} . \end{aligned}$$
    Moreover, if \(a_0,\dots ,a_n\) are pairwise distinct then
    $$\begin{aligned} I_n(t; a_0,\dots ,a_n)=\sum _{i=0}^{n}e^{ta_i}\prod \limits _{\genfrac{}{}{0.0pt}{}{j=0}{j\ne i}}^{n}\frac{1}{a_i-a_j}; \end{aligned}$$
    (3.10)
  2. \(I_n\) is symmetric with respect to the variables \(a_0,\dots ,a_n\).

Proof

The first statement in (1) follows by induction from (3.9), where we apply induction to the first term in the recursion and keep the second term. The second statement in (1) also follows by induction once we notice that it is true for \(n=0\) and the expression on the right hand side satisfies the recursion (3.9). Finally, the symmetry of \(I_n\) for pairwise distinct variables follows from the symmetry of the expression on the right hand side of (3.10). Then it extends by continuity to all variables. \(\square \)

We now establish two upper bounds on the function I. The first bounds the effect of adding additional steps onto a base path. The second bounds the effect of changing the largest value of \(a_i\) along a path; for this we shall need an additional lemma that establishes ‘negative dependence’ in the effect on I due to changes in the \(a_i\).

Lemma 3.9

Let \(m,n\in \mathbb {N}_0\) and suppose \(a_j<a_n\) for all \(0\le j< n\) and \(a_j=a_n\) for all \(n\le j\le n+m\). Then for any \(t>0\), \(0\le k\le n\), and \(0\le i\le m\),
$$\begin{aligned} I_{n+m}(t;a_0,\ldots ,a_{n+m})&\le \frac{t^i}{i!}I_{n+m-i-k}(t;a_k,\ldots ,a_{n+m-i})\prod _{j=1}^k\frac{1}{a_n-a_{j-1}} . \end{aligned}$$

Proof

Integrating with respect to the last i variables we obtain
$$\begin{aligned}&I_{n+m} (t; a_0,\dots ,a_{n+m})\\&\quad =e^{ta_n}\int _{\mathbb {R}_{+}^{n+m}}\exp \left\{ \sum _{s=0}^{n-1}x_s (a_s-a_n)\right\} {\mathbf 1}\left\{ \sum _{s=0}^{n+m-i-1}x_s+\sum _{s=n+m-i}^{n+m-1} x_s<t\right\} dx_0\cdots dx_{n+m-1}\\&\quad =\frac{e^{ta_n}}{i!}\int _{\mathbb {R}_{+}^{n+m-i}}\exp \left\{ \sum _{s=0}^{n-1}x_s (a_s-a_n)\right\} {\mathbf 1}\left\{ \sum _{s=0}^{n+m-i-1}x_s<t\right\} \left( t-\sum _{s=0}^{n+m-i-1}x_s\right) ^i dx_0\cdots dx_{n+m-i-1}\\&\quad \le \frac{t^i}{i!}I_{n+m-i}(t; a_0,\dots ,a_{n+m-i}). \end{aligned}$$
Further, it follows from (3.9) and symmetry of I proved in Lemma 3.8 that
$$\begin{aligned} I_{n+m-i}(t; a_0,\dots ,a_{n+m-i})&\le I_{n+m-i-1}(t; a_1,\dots ,a_{n+m-i})\frac{1}{a_n-a_0} \le \cdots \\&\le I_{n+m-i-k}(t;a_k,\ldots ,a_{n+m-i})\prod _{j=1}^k\frac{1}{a_n-a_{j-1}} . \end{aligned}$$
\(\square \)

Our ‘negative dependence’ lemma requires the application of a result of [4], which we state below.

Theorem 3.10

([4, Theorem 4.1]) Fix \(s\in \mathbb {R}\) and \(n \in \mathbb {N}\). Let \(X_0,\ldots , X_{n+1}\) be independent random variables each with a log-concave density, and let \((Y_0,\ldots ,Y_n)\) be a random vector satisfying
$$\begin{aligned} \mathcal {L} \left( (Y_0,\ldots ,Y_n) \right) = \mathcal {L} \left( (X_0,\ldots ,X_n) \big |\, X_0+\cdots +X_{n+1} =s \right) . \end{aligned}$$
Then for each \(0\le i<j\le n\),
$$\begin{aligned} \mathbb {E}(Y_iY_j)\le \mathbb {E}(Y_i)\mathbb {E}(Y_j). \end{aligned}$$

Proof

First we remark that a density being log-concave is equivalent to the density being a Polya frequency function of order 2 (or PF\(_2\), using the terminology from [4]). Then [4, Theorem 4.1] implies that \((Y_0,\ldots ,Y_n)\) is reverse regular of order 2 in pairs (again using the terminology of [4]). Then the discussion following Definition 2.2 in [4] demonstrates that this implies the result. \(\square \)

Lemma 3.11

Let \(n\ge 2\). For any \(k\ne j\)
$$\begin{aligned} \frac{\partial ^2}{\partial a_k\partial a_j}\log I_n(t;a_0,\dots ,a_n)\le 0. \end{aligned}$$
(3.11)

Proof

By symmetry of I proved in Lemma 3.8 it suffices to prove the statement for \(j,k\ne n\). Denote \(\mathbf{a}=(a_0,\dots ,a_n)\). It is easy to see that (3.11) is equivalent to showing that
$$\begin{aligned} I_n(t; \mathbf{a})^{-1}\frac{\partial ^2 }{\partial a_k\partial a_j}I_n(t; \mathbf{a})\le \Big [I_n(t; \mathbf{a})^{-1}\frac{\partial }{\partial a_k}I_n(t; \mathbf{a})\Big ] \cdot \Big [I_n(t; \mathbf{a})^{-1}\frac{\partial }{\partial a_j}I_n(t; \mathbf{a})\Big ] \end{aligned}$$
(3.12)
Fix \(t>0\). Let \(W_i\), \(0\le i\le n\), be independent random variables with density \(c_i e^{(a_i-a_n)x}\) on [0, t] and zero otherwise, where \(c_i\) is a normalising constant, and let \(W_{n+1}\) be uniform on [0, t]. We remark that each of \(W_i\), \(0\le i\le n+1\), has a log-concave density. Further, let \(\hat{W}_i\), \(0\le i\le n\), be defined by
$$\begin{aligned} (\hat{W}_0,\dots ,\hat{W}_n)\mathop {=}\limits ^{d}\left( W_0,\dots ,W_n\Big |\sum _{i=0}^{n+1} W_i=t\right) . \end{aligned}$$
Since the densities of \(W_i\), \(0\le i\le n+1\), are log-concave, by Theorem 3.10 we have
$$\begin{aligned} \mathrm {E} \big (\hat{W}_k\hat{W}_j\big )\le \mathrm {E} (\hat{W}_k)\mathrm {E} (\hat{W}_j). \end{aligned}$$
To prove (3.12), it suffices now to show that
$$\begin{aligned} \mathrm {E} (\hat{W}_k) =I_n(t; \mathbf{a})^{-1}\frac{\partial }{\partial a_k}I_n(t; \mathbf{a}) \qquad \text {and}\qquad \mathrm {E} \big (\hat{W}_k\hat{W}_j\big ) =I_n(t; \mathbf{a})^{-1}\frac{\partial ^2 }{\partial a_k\partial a_j}I_n(t; \mathbf{a}). \end{aligned}$$
For this note that
$$\begin{aligned} \mathrm {E}(\hat{W}_k)&=c\int _{\mathbb {R}_+^{n+1}}x_k\exp \left\{ \sum _{i=0}^{n}(a_i-a_n)x_i\right\} {\mathbf 1}\left\{ \sum _{i=0}^n x_i\le t\right\} dx_0\cdots dx_n\nonumber \\&=ce^{-ta_n}\frac{\partial }{\partial a_k}I_n(t;\mathbf{a}) ,\\ \mathrm {E}(\hat{W}_k \hat{W}_j)&=c\int _{\mathbb {R}_+^{n+1}}x_kx_j\exp \left\{ \sum _{i=0}^{n}(a_i-a_n)x_i\right\} {\mathbf 1}\left\{ \sum _{i=0}^n x_i\le t\right\} dx_0\cdots dx_n\nonumber \\&=ce^{-ta_n}\frac{\partial ^2}{\partial a_k\partial a_j}I_n(t;\mathbf{a}) , \end{aligned}$$
where c is a normalising constant and thus satisfies
$$\begin{aligned} 1=c\int _{\mathbb {R}_+^{n+1}}\exp \left\{ \sum _{i=0}^{n}(a_i-a_n)x_i\right\} {\mathbf 1}\left\{ \sum _{i=0}^n x_i\le t\right\} dx_0\cdots dx_n=ce^{-ta_n}I_n(t;\mathbf{a}). \end{aligned}$$
Plugging this value of c into the above equations gives the required identities. \(\square \)
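The conclusion of Lemma 3.11 can be probed numerically through the closed form (3.10): the sketch below (with an arbitrary test point and finite-difference step h) approximates the cross-derivative of \(\log I_2\) by a central second difference and checks that it is negative:

```python
import math

def I2(t, a):
    # the symmetric representation (3.10) for n = 2, pairwise distinct a_i
    s = 0.0
    for i in range(3):
        prod = 1.0
        for j in range(3):
            if j != i:
                prod /= (a[i] - a[j])
        s += math.exp(t * a[i]) * prod
    return s

def cross_diff_log_I2(t, a0, a1, a2, h=1e-3):
    # central second difference approximating
    # d^2/(d a0 d a1) log I_2(t; a0, a1, a2)
    f = lambda x, y: math.log(I2(t, (x, y, a2)))
    return (f(a0 + h, a1 + h) - f(a0 + h, a1 - h)
            - f(a0 - h, a1 + h) + f(a0 - h, a1 - h)) / (4 * h * h)

# Lemma 3.11 asserts that this cross-derivative of log I_n is non-positive
assert cross_diff_log_I2(1.0, 0.3, -0.2, 0.9) < 0
```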

Lemma 3.12

Let \(n\ge 2\), \(x<y\) and \(a_0,\dots ,a_{n-1}\in \mathbb {R}\) be such that \(a_i\le y\) for all \(0\le i\le n-1\). Then
$$\begin{aligned} I_n(t;\mathbf{a},x)\le \frac{n}{(y-x)t}I_n(t;\mathbf{a},y), \end{aligned}$$
where \(\mathbf{a}=(a_0,\dots ,a_{n-1})\).

Proof

Since the function \(s\mapsto \log I_n(t;\mathbf{a},s)\) is continuously differentiable we can write
$$\begin{aligned} \frac{I_n(t;\mathbf{a},x)}{I_n(t;\mathbf{a},y)}=\exp \Big \{-\int _x^y\frac{\partial }{\partial s}\log I_n(t;\mathbf{a},s)ds\Big \}. \end{aligned}$$
(3.13)
It follows from the definition (2.3) of \(I_n\) that
$$\begin{aligned} \log I_n(t;\mathbf{a},s)=\log I_n(t;\mathbf{a}-\mathbf{y},s-y)+ty, \end{aligned}$$
where \(\mathbf{y}=(y,\dots ,y)\) and hence
$$\begin{aligned} \frac{\partial }{\partial s}\log I_n(t;\mathbf{a},s)=\frac{\partial }{\partial s}\log I_n(t;\mathbf{a}- \mathbf{y},s-y). \end{aligned}$$
Since all \(a_i\le y\), we can use the monotonicity proved in Lemma 3.11 to obtain
$$\begin{aligned} \frac{\partial }{\partial s}\log I_n(t;\mathbf{a}- \mathbf{y},s-y)\ge \frac{\partial }{\partial s}\log I_n(t;\mathbf{0},s-y), \end{aligned}$$
(3.14)
where \(\mathbf{0}=(0,\dots ,0)\). This implies
$$\begin{aligned} \frac{I_n(t;\mathbf{a},x)}{I_n(t;\mathbf{a},y)}\le \exp \left\{ -\int _x^y\frac{\partial }{\partial s}\log I_n(t;\mathbf{0},s-y)\,ds\right\} =\frac{I_n(t;\mathbf{0},x-y)}{I_n(t;\mathbf{0},0)}. \end{aligned}$$
It is easy to see that
$$\begin{aligned} I_n(t;\mathbf{0},0)=\frac{t^n}{n!}. \end{aligned}$$
(3.15)
Using \(y>x\) and the substitution \(u_i=x_i\), \(0\le i\le n-2\), and \(u_{n-1}=x_0+\cdots +x_{n-1}\) in the definition (2.3) of \(I_n\), and then integrating over \(u_{n-1}\), we also obtain
$$\begin{aligned} I_n(t;\mathbf{0},x-y)&\le e^{t(x-y)}\int _{\mathbb {R}_{+}^{n-1}} \Big [\int _{-\infty }^t e^{u_{n-1}(y-x)}du_{n-1}\Big ]du_0\dots du_{n-2}\nonumber \\&=\frac{t^{n-1}}{(y-x)(n-1)!}. \end{aligned}$$
(3.16)
Combining (3.13), (3.14), (3.15) and (3.16) gives the stated result. \(\square \)
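The identity (3.15) is just the volume of the simplex \(\{x\in \mathbb {R}_+^n: x_0+\cdots +x_{n-1}\le t\}\); a quick Monte Carlo sketch (with illustrative values \(n=3\), \(t=2\)) confirms it numerically.

```python
import math
import random

random.seed(1)
n, t, N = 3, 2.0, 200_000   # illustrative values

# fraction of the cube [0, t]^n lying in the simplex {sum of coordinates <= t}
hits = sum(
    1 for _ in range(N)
    if sum(random.uniform(0, t) for _ in range(n)) <= t
)
vol = t ** n * hits / N              # Monte Carlo estimate of the simplex volume
exact = t ** n / math.factorial(n)   # = I_n(t; 0, 0) by (3.15)
```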

4 Significant paths

The aim of this section is to determine which paths make a non-negligible contribution to \(U_0(t)\). As described in Sect. 1.5, in the case \(\alpha \in (1, 2)\) we prove that only the direct paths to \(\Omega _t\) are significant. In the case \(\alpha \ge 2\), we can only prove the much weaker result that the significant paths are those which end at \(\Omega _t\) and visit each site in \(\mathcal {N}_t \cup \{0\}\) at most once, where \(\mathcal {N}_t\) is the set of non-duplicated sites of high potential defined in (2.4) (this is in fact true for all \(\alpha > 1\), but is not as strong as what we prove for \(\alpha \in (1,2)\)).

Assuming the event \(\mathcal {E}_t\) holds, this is already enough to prove the localisation statement in Theorem 1.1; we complete this proof at the end of the section.

4.1 The case \(\alpha \in (1, 2)\): direct paths to \(\Omega _t\)

To prove that only direct paths are significant, we first give an approximation for the contribution made by the direct paths, and then use this approximation to show the negligibility of all other paths. Denote by \(y^{{\scriptscriptstyle {({t,1}})}}\in \mathcal {P}_{all}\), \(y^{{\scriptscriptstyle {({t,-1}})}}\in \mathcal {P}_{all}\) the shortest geometric paths from 0 to \(|Z^{{\scriptscriptstyle {({1}})}}_t|\) and to \(-|Z^{{\scriptscriptstyle {({1}})}}_t|\), respectively.

Before we begin, we state a small combinatorial lemma that will be used in Proposition 4.3 below.

Lemma 4.1

For any \(n\ge 4\) and any \(w\in \mathbb {N}_0 \),
$$\begin{aligned} {{n+2w}\atopwithdelims (){w}} < 16 n^w. \end{aligned}$$

Proof

For \(n=4\) we have \({{n+2w}\atopwithdelims (){w}} < 2^{n+2w}=16n^w\) for all w, since \({{n+2w}\atopwithdelims (){w}}\) is just one of the positive terms in the binomial expansion of \(2^{n+2w}=(1+1)^{n+2w}\). By induction on n,
$$\begin{aligned} {{n+1+2w}\atopwithdelims (){w}}={{n+2w}\atopwithdelims (){w}}\cdot \frac{n+1+2w}{n+1+w}< 16n^w\Big (1+\frac{w}{n+1+w}\Big ) < 16(n+1)^w \end{aligned}$$
since
$$\begin{aligned} \Big (\frac{n+1}{n}\Big )^w\ge 1+\frac{w}{n} > 1+\frac{w}{n+1+w} \end{aligned}$$
by Bernoulli’s inequality. \(\square \)
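Lemma 4.1 is also easy to spot-check by brute force; the sketch below verifies the inequality on the finite grid \(4\le n<60\), \(0\le w<60\).

```python
import math

# brute-force check of C(n + 2w, w) < 16 * n**w on a finite grid
violations = [
    (n, w)
    for n in range(4, 60)
    for w in range(60)
    if math.comb(n + 2 * w, w) >= 16 * n ** w
]
# violations stays empty
```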

Proposition 4.2

Almost surely
$$\begin{aligned} U(t,y^{{\scriptscriptstyle {({t,\iota }})}})= \left\{ e^{t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-2t} \prod _{j=0}^{|Z^{{\scriptscriptstyle {({1}})}}_t|-1}\frac{1}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (j\iota )}\right\} +o(1)U(t) \end{aligned}$$
for \(\iota =\text {sgn}(Z^{{\scriptscriptstyle {({1}})}}_t)\) on the event \(\mathcal {E}_t\) and for each \(\iota \in \{-1,1\}\) on the event \(\mathcal {E}_t\cap \mathfrak {D}_t\), as \(t\rightarrow \infty \).

Proof

Fix \(t>0\), \(\iota \in \{-1,1\}\), and assume the corresponding event \(\mathcal {E}_t\) or \(\mathcal {E}_t\cap \mathfrak {D}_t\) holds. Denote \(n=|Z^{{\scriptscriptstyle {({1}})}}_t|\), \(a_i=\xi (i\iota )\), \(0\le i\le n\). According to (2.2) and Lemma 3.8 we have
$$\begin{aligned} U(t,y^{{\scriptscriptstyle {({t,\iota }})}})&=e^{-2t} I_n(t; a_0,\dots ,a_n)=e^{t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-2t}\prod \limits _{j=0}^{|Z^{{\scriptscriptstyle {({1}})}}_t|-1}\frac{1}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (j\iota )}\\&\quad -\sum _{i=0}^{n-1}e^{-2t} I_i(t;a_0,\dots ,a_i)\prod _{j=i}^{n-1} \frac{1}{a_n-a_j}. \end{aligned}$$
Observe that on \(\mathcal {E}_t\), \(a_n-a_j>a_tf_t>1\) eventually for all \(1\le j<n\). Further, again by (2.2) we have
$$\begin{aligned} e^{-2t} I_i(t;a_0,\dots ,a_i) =U(t,w^{{\scriptscriptstyle {({i}})}}) \end{aligned}$$
for all \(0\le i<n\), where \(w^{{\scriptscriptstyle {({i}})}}\) is the shortest path to \(i\iota \). Since \(\sum _{i=0}^{n-1}U(t,w^{(i)})\le U_1''(t)\), by Lemma 3.6 we have
$$\begin{aligned} \sum _{i=0}^{n-1}e^{-2t} I_i(t;a_0,\dots ,a_i)\prod _{j=i}^{n-1} \frac{1}{a_n-a_j} \le U_1''(t) < \exp \Big \{t\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)+o(ta_tf_t)\Big \}. \end{aligned}$$
Combining this with the lower bound for U(t) from Lemma 3.4 and also taking into account that \(t\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)-t\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)>ta_tf_t\) we obtain
$$\begin{aligned} \sum _{i=0}^{n-1}e^{-2t} I_i(t;a_0,\dots ,a_i)\prod _{j=i}^{n-1} \frac{1}{a_n-a_j}=o(1)U(t), \end{aligned}$$
which completes the proof. \(\square \)

Proposition 4.3

Let \(\alpha \in (1,2)\). Almost surely,
$$\begin{aligned} U_0(t)=(1+o(1))\sum _{\iota \in \{-1,1\}}U(t,y^{{\scriptscriptstyle {({t,\iota }})}}) \end{aligned}$$
on the event \(\mathcal {E}_t\), as \(t\rightarrow \infty \).

Proof

For a path \(y\in \mathcal {P}_{all}\) that hits \(\Omega _t\), let \(z_t(y)\in \Omega _t\) be the first point where y hits \(\Omega _t\), i.e.,
$$\begin{aligned} z_t(y)=y_i, \quad \text { where }i=\min \{j: y_j\in \Omega _t\}. \end{aligned}$$
Denote by \(m_t(y)\) the number of times \(\Omega _t\) is visited minus one, i.e.,
$$\begin{aligned} m_t(y)=|\{0\le i\le \ell (y): y_i\in \Omega _t\}|-1. \end{aligned}$$
Denote by \(2w_t(y)\) the difference between the hitting time of \(z_t(y)\) and \(|Z^{{\scriptscriptstyle {({1}})}}_t|\), i.e.,
$$\begin{aligned} w_t(y)=\frac{\min \{i:y_i=z_t(y)\}-|Z^{{\scriptscriptstyle {({1}})}}_t|}{2}. \end{aligned}$$
Finally, denote by \(s_t(y)\) the number of points on the path after the first visit to \(\Omega _t\) that do not belong to \(\Omega _t\), i.e.,
$$\begin{aligned} s_t(y)=|\{|Z^{{\scriptscriptstyle {({1}})}}_t|+2w_t(y)<i\le \ell (y):y_i\notin \Omega _t\}|. \end{aligned}$$
Observe that \(s_t(y)\ge m_t(y)\).
For each \(t>0\) and \(m\in \mathbb {N}\cup \{0\}\), \(w\in \mathbb {N}\cup \{0\}\), \(s\ge m\), denote
$$\begin{aligned} \mathcal {P}^{t}_{m,w,s} =\big \{y\in \mathcal {P}_{all}: y_0=0,\, m_t(y)=m,\, w_t(y)=w,\, s_t(y)=s\big \}. \end{aligned}$$
Using Lemma 4.1 we have
$$\begin{aligned} \big |\mathcal {P}_{m,w,s}^{t}\big | \le 2^s{|Z^{{\scriptscriptstyle {({1}})}}_t|+2w\atopwithdelims ()w}< 16 \cdot 2^s|Z^{{\scriptscriptstyle {({1}})}}_t|^w < 16 \cdot 2^s(r_tg_t)^w. \end{aligned}$$
For any \(y\in \mathcal {P}^t_{m,w,s}\) we use (2.2) and Lemma 3.9 with \(n+m\) being the length of y, \(i=m\), \(k=n\), \(a_0,\dots ,a_{n-1}\) being the values of \(\xi \) along y except when it visits \(\Omega _t\), and \(a_n=\dots =a_{n+m}=\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\), and obtain
$$\begin{aligned} U(t,y)\le e^{t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-2t}\frac{t^m}{m!} \prod _{\genfrac{}{}{0.0pt}{}{j=0}{y_j\notin \Omega _t}}^{\ell (y)}\frac{1}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (y_j)}, \end{aligned}$$
on the event \(\mathcal {E}_t\). We will keep \(|Z^{{\scriptscriptstyle {({1}})}}_t|\) terms in the product corresponding to one visit to each of the points \(i\iota \), \(0\le i\le |Z^{{\scriptscriptstyle {({1}})}}_t|-1\), where \(\iota =\mathrm {sgn}(z_t(y))\), and estimate the rest by
$$\begin{aligned} \xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (y_j)\ge \xi _{R_t}^{{\scriptscriptstyle {({1}})}}-\xi _{R_t}^{{\scriptscriptstyle {({2}})}} > a_tf_t. \end{aligned}$$
This implies
$$\begin{aligned} U(t,y) < \left\{ e^{t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-2t} \prod _{j=0}^{|Z^{{\scriptscriptstyle {({1}})}}_t|-1}\frac{1}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (j\iota )}\right\} \frac{t^m}{m!} (a_tf_t)^{-2w-s}. \end{aligned}$$
By Proposition 4.2 we obtain on \(\mathcal {E}_t\)
$$\begin{aligned} U(t,y) < \big [U(t,y^{{\scriptscriptstyle {({t,\iota }})}})+o(1)U(t)\big ] \frac{t^m}{m!} (a_tf_t)^{-2w-s}. \end{aligned}$$
(4.1)
Let us show that the total mass corresponding to all paths from the classes \(\mathcal {P}^{t}_{m,w,s}\) with \((m,w,s)\ne (0,0,0)\) is negligible. Indeed,
$$\begin{aligned}&\sum _{w=0}^{\infty }\sum _{m=0}^{\infty }\sum _{s=m}^{\infty } |\mathcal {P}^t_{m,w,s}|\frac{t^m}{m!} (a_tf_t)^{-2w-s}{\mathbf 1}\{(m,w,s)\ne (0,0,0)\}\\&\quad <16\sum _{w=0}^{\infty }\sum _{m=0}^{\infty }\sum _{s=m}^{\infty } 2^s(r_tg_t)^w\frac{t^m}{m!} (a_tf_t)^{-2w-s}{\mathbf 1}\{(m,w,s)\ne (0,0,0)\}\\&\quad =16\left[ \left( \sum _{w=0}^{\infty }\left( \frac{r_tg_t}{a_t^2f_t^2}\right) ^{w}\right) \left( \sum _{m=0}^{\infty }\frac{t^m}{m!} \left( \sum _{s=m}^{\infty }\left( \frac{2}{a_tf_t}\right) ^s\right) \right) -1\right] \\&\quad =16\left[ \left( 1-\frac{r_tg_t}{a_t^2f_t^2}\right) ^{-1} \left( 1-\frac{2}{a_tf_t}\right) ^{-1} \sum _{m=0}^{\infty }\frac{1}{m!}\Big (\frac{2t}{a_tf_t}\Big )^m -1\right] \\&\quad =16\left[ \left( 1-\frac{r_tg_t}{a_t^2f_t^2}\right) ^{-1} \left( 1-\frac{2}{a_tf_t}\right) ^{-1} \exp \left\{ \frac{2t}{a_tf_t}\right\} -1\right] =o(1) \end{aligned}$$
since \(\frac{r_tg_t}{a_t^2f_t^2}=o(1)\), \(\frac{2}{a_tf_t}=o(1)\), and \(\frac{2t}{a_tf_t}=o(1)\), the last because \(\alpha \in (1,2)\). Combining this with (4.1) we obtain on the event \(\mathcal {E}_t\)
$$\begin{aligned} U_0(t) < \sum _{\iota \in \{-1,1\}}\big [U(t,y^{{\scriptscriptstyle {({t,\iota }})}})+o(1)U(t)\big ](1+o(1)), \end{aligned}$$
which gives the required result by Proposition 3.7. \(\square \)
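The series computation above reduces to the elementary identity \(\sum _{w\ge 0}q^w\sum _{m\ge 0}\frac{t^m}{m!}\sum _{s\ge m}b^s=(1-q)^{-1}(1-b)^{-1}e^{tb}\), valid for \(q,b\in (0,1)\), applied with \(q=r_tg_t/(a_tf_t)^2\) and \(b=2/(a_tf_t)\). A truncated numerical check with illustrative values of q, b and t:

```python
import math

# illustrative values standing in for r_t g_t / (a_t f_t)^2, 2/(a_t f_t) and t;
# q and b must lie in (0, 1) for the geometric series to converge
q, b, t = 0.1, 0.05, 4.0

total = sum(
    q ** w * (t ** m / math.factorial(m)) * b ** s
    for w in range(60)
    for m in range(40)
    for s in range(m, 200)
)
closed = (1 - q) ** -1 * (1 - b) ** -1 * math.exp(t * b)
# total and closed agree up to truncation error
```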

4.2 The case \(\alpha \ge 2\): paths to \(\Omega _t\) visiting sites in \(\mathcal {N}_t\) at most once

Our proof proceeds in two stages. First, we analyse the portions of the path up until the first visit to \(\Omega _t\) and after the last visit to \(\Omega _t\), and show that, in these portions of the path, it is never beneficial to visit sites in \(\mathcal {N}_t \cup \{0\}\) more than once. Second, we analyse the portion of the path consisting of the loops that occur between the first and last visits to \(\Omega _t\), showing that it is never beneficial for these loops to return to sites in \(\mathcal {N}_t \cup \{0\}\); in fact, we show the stronger result that these loops have length at most \(\lfloor 2\alpha \rfloor \) (although we suspect that the optimal bound is actually \(\lfloor \alpha \rfloor \)).

Denote by \(\mathcal {P}^t\) the set of all geometric paths contributing to \(U_0(t)\), that is, those visiting \(\Omega _t\) and having length at most \(R_t\). Fix \(t>0\) and let \(y\in \mathcal {P}^t\). The skeleton of y, denoted \(\text {skel}(y)\), is the geometric path from the origin to a site in \(\Omega _t\) constructed by chronologically removing all loops in y which start and end at any site belonging to \(\{0\}\cup \mathcal {N}_t\) up until the first visit of \(\Omega _t\) as well as removing any part of the path after the final visit of y to \(\Omega _t\).

We can now partition \(\mathcal {P}^t\) into equivalence classes by saying that paths y and \(\hat{y}\) are in the same class if and only if \(\text {skel}(y)=\text {skel}(\hat{y})\). We write \(\mathfrak {P}^t\) for the set of all such equivalence classes. Note that any such equivalence class \(\mathcal {P}\in \mathfrak {P}^t\) contains the null path, \(y_{\mathrm {null}}^\mathcal {P}\in \mathcal {P}\), defined as \(y_{\mathrm {null}}^\mathcal {P}=\mathrm {skel}(y_{\mathrm {null}}^\mathcal {P})\). Observe that every null path, prior to visiting \(\Omega _t\) for the first time, either (i) visits each site in \(\{0\} \cup (\mathcal {N}_t \cap \mathbb {N})\) exactly once, or (ii) visits each site in \(\{0\} \cup (\mathcal {N}_t \cap -\mathbb {N})\) exactly once. In particular, until the first visit of \(\Omega _t\) each null path visits either only positive integers, or only negative integers.
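For concreteness, the skeleton map can be sketched in code. The function below is an illustrative implementation, with `path`, `marked` and `omega` standing for \(y\), \(\{0\}\cup \mathcal {N}_t\) and \(\Omega _t\): it truncates the path after its final visit to \(\Omega _t\), then chronologically erases every loop based at a marked site before the first visit to \(\Omega _t\), keeping the excursions between the first and last visits to \(\Omega _t\).

```python
def skeleton(path, marked, omega):
    """Illustrative sketch of skel(y): `path` is a list of sites, `marked`
    plays the role of {0} union N_t and `omega` that of Omega_t."""
    # drop everything after the final visit to omega
    last = max(i for i, z in enumerate(path) if z in omega)
    path = path[: last + 1]
    first = min(i for i, z in enumerate(path) if z in omega)
    out, i = [], 0
    while i <= first:
        z = path[i]
        if z in marked:
            # chronological loop removal: jump to the last visit to z
            # occurring before omega is first hit
            i = max(j for j in range(i, first + 1) if path[j] == z)
        out.append(path[i])
        i += 1
    # loops between the first and last visits to omega are kept
    return out + path[first + 1:]
```

For instance, with \(\mathcal {N}_t=\{1\}\) and \(\Omega _t=\{3\}\), the path \(0,1,0,1,2,1,2,3,2,3\) has skeleton \(0,1,2,3,2,3\): the loops at 0 and 1 are erased, while the excursion from 3 survives. The null paths are exactly the fixed points of this map.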

The importance of the null path is through the following lemma, which states that the contribution to the solution coming from an equivalence class is dominated by that coming from the null path.

Lemma 4.4

Almost surely,
$$\begin{aligned} \sum _{y\in \mathcal {P}}U(t,y)< (1+o(1))U(t,y_{\mathrm {null}}^\mathcal {P}) \end{aligned}$$
uniformly for all \(\mathcal {P}\in \mathfrak {P}^t\) on the event \(\mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\), as \(t\rightarrow \infty \).

Proof

For \(k\in \mathbb {N}\), write \(\mathcal {P}^k\) for the subset of \(\mathcal {P}\) consisting of the paths with additional length k compared to \(y_{\mathrm {null}}^\mathcal {P}\). We have on \(\mathcal {E}_t^{[2,\infty )}\)
$$\begin{aligned} |\mathcal {P}^k|\le \big (2(|\mathcal {N}_t|+2)\big )^{k} < \delta _t^{-2\alpha k} \end{aligned}$$
since each of the additional k pieces will be added to a loop at a site in \(\{0\}\cup \mathcal {N}_t\) or at the end in at most two ways. Note that no null path can visit both sites in \(\Omega _t\) since each null path is in \(\mathcal {P}^t\) and has length at most \(R_t<2|Z^{{\scriptscriptstyle {({1}})}}_t|\). Using (2.2) and Lemma 3.9 with \(m+1\) being the number of visits of y to \(\Omega _t\), \(n+m\) the length of y, \(i=0\), \(a_0,\dots , a_{k-1}\) the values of \(\xi \) at the additional points of y, \(a_k,\dots ,a_{n-1}\) the values of \(\xi \) along \(y_{\mathrm {null}}^\mathcal {P}\) except when it visits \(\Omega _t\), and \(a_n=\dots =a_{n+m}\) the value of \(\xi \) on \(\Omega _t\), we obtain
$$\begin{aligned} U(t,y)\le U(t,y_{\mathrm {null}}^\mathcal {P})\prod _{j=1}^k \frac{1}{a_n-a_{j-1}} \end{aligned}$$
on \(\mathcal {E}_t\). Since none of the additional sites visited by any path in \(\mathcal {P}\) are in \(\Omega _t\), we have on \(\mathcal {E}_t\)
$$\begin{aligned} U(t,y) < U(t,y_{\mathrm {null}}^\mathcal {P})(a_tf_t)^{-k}, \end{aligned}$$
and thus
$$\begin{aligned} \sum _{y\in \mathcal {P}}U(t, y)&=\sum _{k=0}^\infty \sum _{ y\in \mathcal {P}^k}U(t,y) < U(t,y_{\mathrm {null}}^\mathcal {P})\sum _{k=0}^\infty \big (a_tf_t\delta _t^{2\alpha }\big )^{-k}=(1+o(1))U(t,y_{\mathrm {null}}^\mathcal {P}) \end{aligned}$$
on \(\mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\) as \(a_tf_t\delta _t^{2\alpha }\rightarrow \infty \). \(\square \)

We now eliminate paths that make loops from \(\Omega _t\) that return to sites in \(\mathcal {N}_t\). Denote by \(\mathrm {Null}^t_1\) the set of all null paths in \(\mathcal {P}^t\) which visit each site in \(\{0\}\cup \mathcal {N}_t\) at most once, by \(\mathrm {Null}^t_2\) the set of all other null paths in \(\mathcal {P}^t\), and by \(\mathrm {Null}^t\) their union.

Lemma 4.5

Almost surely,
$$\begin{aligned} \sum _{y\in \mathrm {Null}^t_2}U(t,y)=o(1)\sum _{y\in \mathrm {Null}^t_1}U(t,y) \end{aligned}$$
on the event \(\mathcal {E}_t\cap \mathcal {E}^{[2,\infty )}_t\), as \(t\rightarrow \infty \).

Proof

Note that by the construction of null paths, the only way for a null path to visit a site in \(\mathcal {N}_t\) more than once is by having a loop from \(\Omega _t\). On the event \(\mathcal {E}^{[2,\infty )}_t\) this loop must have length at least \(g_t\). We shall show a stronger result than is needed: that all null paths with loops from \(\Omega _t\) of length at least \(k_0\), where \(k_0>2\alpha \), have negligible contribution to the solution compared to the contribution from all other null paths.

To do this we partition \(\mathrm {Null}^t\) into equivalence classes by saying two null paths are in the same class if and only if they are identical after removing all loops from \(\Omega _t\) of length at least \(k_0\). For any such equivalence class \(\mathcal {P}\), write \(y^{\mathcal {P}}_\mathrm {min}\) for the path in \(\mathcal {P}\) of minimum length (i.e. the path without any loops from \(\Omega _t\) of length at least \(k_0\)). Further, for any \(k\ge k_0\), write \(\mathcal {P}^k\) for the set of paths in \(\mathcal {P}\) with additional length k compared to \(y^{\mathcal {P}}_\mathrm {min}\). Finally we write \(\mathfrak {N}^t\) for the set of all such equivalence classes.

Observe that for all \(k\ge k_0\) and \(\mathcal {P}\in \mathfrak {N}^t\), any path \(y\in \mathcal {P}^k\) can make no more than \(\lfloor k/k_0\rfloor \) extra visits to \(\Omega _t\) compared to \(y^{\mathcal {P}}_\mathrm {min}\). Using (2.2) and Lemma 3.9 with \(m+1\) being the number of visits of y to \(\Omega _t\), \(n+m\) the length of y, i the number of additional visits to \(\Omega _t\) compared to \(y^{\mathcal {P}}_\mathrm {min}\), \(a_0,\dots , a_{k-1-i}\) the values of \(\xi \) at the additional points of y except when it visits \(\Omega _t\), \(a_{k-i}=\cdots =a_{k-1}\) the value of \(\xi \) on \(\Omega _t\), \(a_k,\dots ,a_{n-1}\) the values of \(\xi \) along \(y_{\mathrm {min}}^\mathcal {P}\) except when it visits \(\Omega _t\), and \(a_n=\dots =a_{n+m}\) the value of \(\xi \) on \(\Omega _t\), we obtain
$$\begin{aligned} U(t,y)\le U(t,y^{\mathcal {P}}_\mathrm {min}) \frac{t^i}{i!}\prod _{j=0}^{k-1-i}\frac{1}{a_n-a_j} < U(t,y^{\mathcal {P}}_\mathrm {min}) t^{ k/k_0}(a_tf_t)^{-k+ k/k_0} \end{aligned}$$
on \(\mathcal {E}_t\). Further, on \(\mathcal {E}_t\)
$$\begin{aligned} |\mathcal {P}^k|\le 2^k\big [(R_t-|Z^{{\scriptscriptstyle {({1}})}}_t|+1)/2\big ]^{\lfloor k/k_0\rfloor }< 2^k(r_tg_tf_t)^{k/k_0} \end{aligned}$$
since there are at most \(\lfloor k/k_0\rfloor \) additional loops, at most \((R_t-|Z^{{\scriptscriptstyle {({1}})}}_t|+1)/2\) points where such a loop can be created, and at most \(2^k\) shapes of the loops.
Hence, for any \(\mathcal {P}\in \mathfrak {N}^t\), on the event \(\mathcal {E}_t\)
$$\begin{aligned} \sum _{y\in \mathcal {P}}U(t,y)&\le U(t,y^{\mathcal {P}}_\mathrm {min}) +\sum _{k=k_0}^\infty \sum _{ y\in \mathcal {P}^k}U(t,y)\\&< U(t,y^{\mathcal {P}}_\mathrm {min})\left( 1+ \sum _{k=k_0}^\infty 2^k (t r_tg_tf_t)^{k/k_0}(a_tf_t)^{k/k_0-k}\right) \\&=U(t,y^{\mathcal {P}}_\mathrm {min}) \left( 1 + 2^{k_0}tr_tg_tf_t (a_t f_t)^{1-k_0}\left[ 1-2(tr_tg_tf_t)^{1/k_0}(a_tf_t)^{1/k_0-1}\right] ^{-1}\right) . \end{aligned}$$
Since \(k_0>2\alpha \) this implies
$$\begin{aligned} \sum _{y\in \mathcal {P}}U(t,y) < U(t,y^{\mathcal {P}}_\mathrm {min})(1+o(1)) \end{aligned}$$
as \(t\rightarrow \infty \) uniformly over the equivalence classes. To conclude the proof, note that
$$\begin{aligned} \sum _{y\in \mathrm {Null}^t}U(t,y)&=\sum _{\mathcal {P}\in \mathfrak {N}^t}\sum _{y\in \mathcal {P}}U(t,y)< (1+o(1))\sum _{\mathcal {P}\in \mathfrak {N}^t}U(t,y^{\mathcal {P}}_\mathrm {min})\nonumber \\&< (1+o(1)) \sum _{y\in \mathrm {Null}^t_1}U(t,y) \end{aligned}$$
on the event \(\mathcal {E}_t\cap \mathcal {E}^{[2,\infty )}_t\). \(\square \)

Proposition 4.6

Almost surely,
$$\begin{aligned} U_0(t)=(1+o(1))\sum _{y\in \mathrm {Null}^t_1}U(t,y) \end{aligned}$$
on the event \(\mathcal {E}_t\cap \mathcal {E}^{[2,\infty )}_t\), as \(t\rightarrow \infty \).

Proof

This is a direct consequence of Lemmas 4.4 and 4.5. Indeed,
$$\begin{aligned} U_0(t)&=\sum _{y\in \mathcal {P}^t}U(t,y) =\sum _{\mathcal {P}\in \mathfrak {P}^t}\sum _{y\in \mathcal {P}} U(t,y)<(1+o(1))\sum _{\mathcal {P}\in \mathfrak {P}^t}U(t,y^{\mathcal {P}}_\mathrm {null})\\&=(1+o(1))\sum _{y\in \mathrm {Null}^t}U(t,y) =(1+o(1))\sum _{y\in \mathrm {Null}^t_1}U(t,y) \end{aligned}$$
on the event \(\mathcal {E}_t\cap \mathcal {E}^{[2,\infty )}_t\), as \(t\rightarrow \infty \). \(\square \)

4.3 Completion of the proof of Theorem 1.1

We are now in a position to prove the localisation statement in Theorem 1.1 on the event that \(\mathcal {E}_t\) holds; the fact that \(\mathbb {P}(\mathcal {E}_t) \rightarrow 1\) as \(t \rightarrow \infty \) will be proven in Proposition 5.6. The second statement of Theorem 1.1, that \(\mathbb {P}(\mathfrak {D}_t) \rightarrow p/(2-p)\), will be proven in Proposition 5.7.

By Proposition 3.3 we may work on the event \(\mathcal {E}_t\cap \mathcal {E}^{[2,\infty )}_t\). Since \(U_1\) is negligible with respect to U by Proposition 3.7, it remains to show that the contribution to \(U_0\) from the paths not ending in \(\Omega _t\) is negligible. For \(\alpha \in (1,2)\) this follows from Proposition 4.3; for \(\alpha \ge 2\) it follows from Proposition 4.6. In fact, the latter argument works for all \(\alpha >1\), but we prefer to use the much simpler argument for \(\alpha \in (1,2)\).

5 Point process analysis

In this section we develop a point process approach to analyse the high exceedances of \(\xi \) and the top order statistics of the penalisation functional \(\Psi _t\). We use this analysis to prove that the event \(\mathcal {E}_t\) holds eventually with overwhelming probability. We also use it to give an explicit construction of the limiting random variable \(\Upsilon \) from Theorem 1.2. Since the proofs in this section are quite technical, we defer some of them to “Appendix A”.

Recall that \(E = \mathbb {Z}{\setminus }D\) denotes the set of sites whose potential values are exclusive, and abbreviate \(q = 1-p\).

5.1 Point process convergence for the rescaled potential

The first step is to establish that the potential, properly rescaled, converges to a Poisson point process. The limiting point process will arise as a superposition of two distinct independent Poisson point processes that are, respectively, the limit of the potential restricted to the duplicated and the exclusive sites.

Let us begin by defining the limiting point process. Consider the measure
$$\begin{aligned} \mu (dx\otimes dy)=dx\otimes \frac{\alpha }{|y|^{\alpha +1}}dy \end{aligned}$$
on \(\mathbb {R}^2\). In the sequel, we denote by the same symbol the restriction of \(\mu \) to subsets of \(\mathbb {R}^2\), and we denote by \((0,\infty ]\) the extension of \((0,\infty )\) by the point \(\infty \), equipped with the topology generated by the topology of \((0,\infty )\) and the sets of the form \((a,\infty ]\), for all \(a\in \mathbb {R}\). Let \(\Pi ^{{\scriptscriptstyle {({e}})}}\) be a Poisson point process on \(\mathbb {R}\times (0,\infty ]\) with intensity measure \(q\mu \). Let \(\Pi ^{{\scriptscriptstyle {({d,+}})}}\) be a Poisson point process on \([0,\infty )\times (0,\infty ]\) with intensity measure \(p\mu \), independent of \(\Pi ^{{\scriptscriptstyle {({e}})}}\). Let \(\Pi ^{{\scriptscriptstyle {({d,-}})}}\) be a Poisson point process on \((-\infty ,0]\times (0,\infty ]\) defined by \(\Pi ^{{\scriptscriptstyle {({d,-}})}}(A)=\Pi ^{{\scriptscriptstyle {({d,+}})}}(\hat{A})\) for any Borel set \(A\subseteq (-\infty ,0]\times (0,\infty ]\), where \(\hat{A}\) is the reflection of the set A with respect to the y-axis. Finally, let \(\Pi ^{{\scriptscriptstyle {({d}})}}\) be the point process on \(\mathbb {R}\times (0,\infty ]\) defined by
$$\begin{aligned} \Pi ^{{\scriptscriptstyle {({d}})}}(A) =\Pi ^{{\scriptscriptstyle {({d,-}})}}(A\cap (-\infty ,0]\times (0,\infty ]) +\Pi ^{{\scriptscriptstyle {({d,+}})}}(A\cap [0,\infty )\times (0,\infty ]), \end{aligned}$$
and let
$$\begin{aligned} \Pi =\Pi ^{{\scriptscriptstyle {({d}})}}+\Pi ^{{\scriptscriptstyle {({e}})}} \end{aligned}$$
be a point process on \(\mathbb {R}\times (0,\infty ]\). Denote the corresponding probability and expectation by \({\mathrm {Prob}}_{*}\) and \(\mathrm {E}_*\).
We show the convergence of the potential, properly rescaled, to the Poisson point process \(\Pi \). Let
$$\begin{aligned}&\Pi ^{{\scriptscriptstyle {({e}})}}_s=\sum _{z\in E}\varepsilon \Big (\frac{z}{s}, \frac{\xi (z)}{s^{1/\alpha }}\Big ), \quad \Pi ^{{\scriptscriptstyle {({d,+}})}}_s=\sum _{z\in D, z\ge 0}\varepsilon \Big (\frac{z}{s}, \frac{\xi (z)}{s^{1/\alpha }}\Big ) \quad \text {and} \\&\quad \Pi ^{{\scriptscriptstyle {({d,-}})}}_s=\sum _{z\in D, z\le 0}\varepsilon \Big (\frac{z}{s}, \frac{\xi (z)}{s^{1/\alpha }}\Big ), \end{aligned}$$
where \(\varepsilon (x,y)\) denotes the Dirac measure at \((x,y)\). Denote
$$\begin{aligned} \Pi ^{{\scriptscriptstyle {({d}})}}_s=\Pi ^{{\scriptscriptstyle {({d,+}})}}_s+\Pi ^{{\scriptscriptstyle {({d,-}})}}_s \qquad \text {and}\qquad \Pi _s=\Pi ^{{\scriptscriptstyle {({d}})}}_s+\Pi ^{{\scriptscriptstyle {({e}})}}_s. \end{aligned}$$
The following convergence result is classical and we defer its proof to “Appendix A”.

Lemma 5.1

As \(s\rightarrow \infty \), \((\Pi ^{{\scriptscriptstyle {({d,+}})}}_s, \Pi ^{{\scriptscriptstyle {({d,-}})}}_s, \Pi ^{{\scriptscriptstyle {({e}})}}_s)\) converges in law to \((\Pi ^{{\scriptscriptstyle {({d, +}})}}, \Pi ^{{\scriptscriptstyle {({d, -}})}},\Pi ^{{\scriptscriptstyle {({e}})}})\), and in particular, \(\Pi _s\) converges in law to \(\Pi \).

5.2 Asymptotic properties of the top order statistics of the penalisation functional

We now show how to use the convergence of the potential to extract asymptotic properties of the top order statistics of the penalisation functional \(\Psi _t\). We first introduce the limiting versions of \(Z^{{\scriptscriptstyle {({1}})}}_t\), \(Z^{{\scriptscriptstyle {({2}})}}_t\) and \(\mathfrak {D}_t\) and study their properties, before arguing that we may successfully pass to the limit.

Given a point measure \(\Sigma \), we say that \(x\in \Sigma \) if \(\Sigma (\{x\})>0\). Let \(\bar{\Pi }\) be the point process on \([0,\infty )\times (0,\infty ]\) defined by
$$\begin{aligned} \bar{\Pi }(A)=\Pi ^{{\scriptscriptstyle {({e}})}}(A)+\Pi ^{{\scriptscriptstyle {({e}})}}(\hat{A})+\Pi ^{{\scriptscriptstyle {({d,+}})}}(A) \end{aligned}$$
for any Borel set A, where \(\hat{A}\) denotes the reflection of A with respect to the y-axis. Remark that the three components of \(\bar{\Pi }\) are independent Poisson point processes with the intensity measures \(q\mu \), \(q\mu \) and \(p\mu \), respectively, and so \(\bar{\Pi }\) is itself a Poisson point process with intensity measure \((2q + p) \mu = (2-p)\mu \). Abbreviate
$$\begin{aligned} \rho =\frac{1}{\alpha -1}, \end{aligned}$$
(5.1)
and let the positive random variables \(X^{{\scriptscriptstyle {({1}})}}, X^{{\scriptscriptstyle {({2}})}}, Y^{{\scriptscriptstyle {({1}})}}\) and \(Y^{{\scriptscriptstyle {({2}})}}\) be defined by the properties that
$$\begin{aligned} (X^{{\scriptscriptstyle {({1}})}}, Y^{{\scriptscriptstyle {({1}})}})&\in \bar{\Pi }, \text { and if } (x,y) \in \bar{\Pi }\text { then }y-\rho x\le Y^{{\scriptscriptstyle {({1}})}} -\rho X^{{\scriptscriptstyle {({1}})}}, \\ (X^{{\scriptscriptstyle {({2}})}}, Y^{{\scriptscriptstyle {({2}})}})&\in \bar{\Pi }, \text { and if } (x,y) \in \bar{\Pi }{\setminus }(X^{{\scriptscriptstyle {({1}})}}, Y^{{\scriptscriptstyle {({1}})}}) \text { then }y-\rho x\le Y^{{\scriptscriptstyle {({2}})}}-\rho X^{{\scriptscriptstyle {({2}})}} . \end{aligned}$$
In Lemma 5.2 we show that these are well-defined. Denote \(\mathfrak {D} = \{ (X^{{\scriptscriptstyle {({1}})}}, Y^{{\scriptscriptstyle {({1}})}}) \in \Pi ^{{\scriptscriptstyle {({d,+}})}} \}\).

At the end of this section we shall identify \((X^{{\scriptscriptstyle {({i}})}}, Y^{{\scriptscriptstyle {({i}})}}), i = 1,2\), and \(\mathfrak {D}\) as the limiting versions of \((|Z^{{\scriptscriptstyle {({i}})}}_t|, \xi (Z^{{\scriptscriptstyle {({i}})}}_t)), i = 1,2\), and \(\mathfrak {D}_t\) respectively. For now, we establish some properties of these objects.

Lemma 5.2

Almost surely, the random variables \(X^{{\scriptscriptstyle {({1}})}}, X^{{\scriptscriptstyle {({2}})}}, Y^{{\scriptscriptstyle {({1}})}}\) and \(Y^{{\scriptscriptstyle {({2}})}}\) are well-defined and satisfy \(Y^{{\scriptscriptstyle {({1}})}}-\rho X^{{\scriptscriptstyle {({1}})}}>Y^{{\scriptscriptstyle {({2}})}}-\rho X^{{\scriptscriptstyle {({2}})}} >0\).

Proof

For any \(a>0\) compute
$$\begin{aligned} \mu \big (\{(x,y): y> a+\rho |x|\}\big )=2\int _0^{\infty }\int _{a+\rho x}^{\infty }\frac{\alpha }{y^{\alpha +1}}dydx= 2 a^{1-\alpha }. \end{aligned}$$
(5.2)
Since this is finite, almost surely there are only finitely many points \((x,y)\in \bar{\Pi }\) satisfying \(y-\rho x> a\). On the other hand, since (5.2) tends to infinity as \(a\downarrow 0\), almost surely there exist points \((x,y)\in \bar{\Pi }\) satisfying \(y-\rho x> 0\). This implies the result. \(\square \)
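The computation (5.2) rests on the integral \(\int _0^{\infty }(a+\rho x)^{-\alpha }dx=a^{1-\alpha }\), which uses \(\rho (\alpha -1)=1\). The sketch below checks this by elementary quadrature for illustrative values of \(\alpha \) and a:

```python
# quadrature check of the integral int_0^inf (a + rho*x)^(-alpha) dx = a^(1-alpha)
alpha, a = 2.5, 1.3                 # illustrative values with alpha > 1
rho = 1.0 / (alpha - 1.0)

# trapezoidal rule on [0, X] plus the exact analytic tail beyond X
X, n = 1.0e4, 200_000
h = X / n
f = lambda x: (a + rho * x) ** (-alpha)
body = h * (0.5 * f(0) + sum(f(k * h) for k in range(1, n)) + 0.5 * f(X))
tail = (a + rho * X) ** (1 - alpha)  # = int_X^inf f, since rho*(alpha - 1) = 1
integral = body + tail
exact = a ** (1 - alpha)
```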

Lemma 5.3

The random variable \((X^{{\scriptscriptstyle {({1}})}},Y^{{\scriptscriptstyle {({1}})}})\) has density
$$\begin{aligned} p(x,y)=(2-p)\alpha {y^{-\alpha -1}}\exp \{-(2-p) (y-\rho x)^{1-\alpha }\}{\mathbf 1}\{y-\rho x>0\}. \end{aligned}$$
(5.3)

Proof

We have
$$\begin{aligned}&{\mathrm {Prob}}_*\big (X^{{\scriptscriptstyle {({1}})}}\in dx,Y^{{\scriptscriptstyle {({1}})}}\in dy\big )\\&\quad ={\mathrm {Prob}}_*\Big (\bar{\Pi }(dx\times dy)=1, \bar{\Pi }\big (\{(u,v): v-\rho u>y-\rho x\}\big )=0\Big )\nonumber \\&\quad ={\mathrm {Prob}}_*\big (\bar{\Pi }(dx\times dy)=1\big )\times {\mathrm {Prob}}_*\Big (\bar{\Pi }\big (\{(u,v): v-\rho u>y-\rho x \}\big )=0 \Big ) \nonumber \\&\quad = (2-p) \exp \Big \{-(2-p)\mu \big (\{(u,v): u\ge 0, y-\rho x<v-\rho u\}\big )\Big \}\mu (dx,dy). \end{aligned}$$
To complete the result, compute
$$\begin{aligned} \mu \big (\{(u,v): u\ge 0, y-\rho x<v-\rho u\}\big )&=\int _0^{\infty }\int _{y-\rho x+\rho u}^{\infty }\frac{\alpha }{v^{\alpha +1}}dvdu\nonumber \\&=\int _0^{\infty }(y-\rho x+\rho u)^{-\alpha }du =(y-\rho x)^{1-\alpha } . \end{aligned}$$
\(\square \)

Lemma 5.4

\({\mathrm {Prob}}_*(\mathfrak {D})= p / (2-p)\).

Proof

Since the components \(\Pi ^{{\scriptscriptstyle {({e}})}}\) and \(\Pi ^{{\scriptscriptstyle {({d,+}})}}\) appearing in the definition of \(\bar{\Pi }\) are independent Poisson point processes with the intensity measures \(q\mu \) and \(p\mu \) respectively, we have
$$\begin{aligned} {\mathrm {Prob}}_* \left( (X^{{\scriptscriptstyle {({1}})}}, Y^{{\scriptscriptstyle {({1}})}}) \in \Pi ^{{\scriptscriptstyle {({d,+}})}} \right) =\frac{p}{p+2q}=\frac{p}{2-p} . \end{aligned}$$
\(\square \)
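Lemma 5.4 is an instance of Poisson thinning: in the superposition of the three independent components with intensity measures \(p\mu \), \(q\mu \), \(q\mu \), each point belongs to \(\Pi ^{{\scriptscriptstyle {({d,+}})}}\) independently with probability \(p/(p+2q)\), and in particular so does the maximiser of \(y-\rho x\). A Monte Carlo sketch with illustrative parameters (a finite window [0, L] and marks truncated below at \(\varepsilon \), which does not affect the thinning probability):

```python
import math
import random

random.seed(7)
alpha, p = 2.0, 0.5          # illustrative parameters
q = 1 - p
rho = 1.0 / (alpha - 1.0)
L, eps, trials = 50.0, 0.5, 4000

def poisson(lam):
    # Knuth's method; adequate for the moderate means used here
    target, k, prod = math.exp(-lam), 0, random.random()
    while prod > target:
        k += 1
        prod *= random.random()
    return k

def ppp(rate):
    # Poisson points on [0, L] x (eps, inf) for intensity rate * dx * alpha*y^(-alpha-1)*dy:
    # uniform positions, Pareto(alpha) marks; mean count = rate * L * eps**(-alpha)
    n = poisson(rate * L * eps ** (-alpha))
    return [(random.uniform(0, L), eps * random.random() ** (-1 / alpha))
            for _ in range(n)]

wins = 0
for _ in range(trials):
    points = [(x, y, "d") for (x, y) in ppp(p)]
    points += [(x, y, "e") for (x, y) in ppp(q)]
    points += [(x, y, "e") for (x, y) in ppp(q)]
    best = max(points, key=lambda pt: pt[1] - rho * pt[0])
    wins += best[2] == "d"

frac = wins / trials         # close to p / (2 - p) = 1/3
```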

We now argue that we can successfully pass to the limit. As a consequence, we prove that the event \(\mathcal {E}_t\) holds eventually with overwhelming probability. Since the proofs of these results are rather technical, we defer them to “Appendix A”.

Proposition 5.5

As \(t\rightarrow \infty \),
(i) \( \Big (\frac{|Z^{{\scriptscriptstyle {({1}})}}_t|}{r_t},\frac{|Z^{{\scriptscriptstyle {({2}})}}_t|}{r_t},\frac{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}{a_t},\frac{\xi (Z^{{\scriptscriptstyle {({2}})}}_t)}{a_t}\Big ) \Rightarrow (X^{{\scriptscriptstyle {({1}})}},X^{{\scriptscriptstyle {({2}})}},Y^{{\scriptscriptstyle {({1}})}},Y^{{\scriptscriptstyle {({2}})}}),\)

(ii) \(\Big (\frac{\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)}{a_t}, \frac{\Psi _t(Z^{{\scriptscriptstyle {({2}})}}_t)}{a_t}\Big ) \Rightarrow (Y^{{\scriptscriptstyle {({1}})}}-\rho X^{{\scriptscriptstyle {({1}})}},Y^{{\scriptscriptstyle {({2}})}}-\rho X^{{\scriptscriptstyle {({2}})}}).\)

Proposition 5.6

\(\text {Prob}(\mathcal {E}_t)\rightarrow 1\) as \(t\rightarrow \infty \).

Proposition 5.7

\({\mathrm {Prob}}(\mathfrak {D}_t)\rightarrow \frac{p}{2-p}\) as \(t\rightarrow \infty \).

5.3 An explicit construction of the limiting random variable

We complete this section by giving an explicit construction of the limiting random variable \(\Upsilon \) in Theorem 1.2. For any \(\delta >0\), let
$$\begin{aligned} S^{{\scriptscriptstyle {({\delta ,+}})}}&=\sum _{\genfrac{}{}{0.0pt}{}{(x,y)\in \Pi ^{{\scriptscriptstyle {({e}})}}}{0<x<X^{{\scriptscriptstyle {({1}})}} ,y\ge \delta Y^{{\scriptscriptstyle {({1}})}}}}\log \Big (1-\frac{y}{Y^{{\scriptscriptstyle {({1}})}}}\Big ) \qquad \text {and}\qquad S^{{\scriptscriptstyle {({\delta ,-}})}} =\sum _{\genfrac{}{}{0.0pt}{}{(x,y)\in \Pi ^{{\scriptscriptstyle {({e}})}}}{-X^{{\scriptscriptstyle {({1}})}}<x<0,y\ge \delta Y^{{\scriptscriptstyle {({1}})}}}}\log \Big (1-\frac{y}{Y^{{\scriptscriptstyle {({1}})}}}\Big ), \end{aligned}$$
on the event \(\mathfrak {D}\), and zero otherwise, and let
$$\begin{aligned} S^{{\scriptscriptstyle {({\delta }})}}=- \big (S^{{\scriptscriptstyle {({\delta ,+}})}}-S^{{\scriptscriptstyle {({\delta ,-}})}}\big ) . \end{aligned}$$
(5.4)
Observe that these variables are well-defined since, for every \((x,y)\in \Pi ^{{\scriptscriptstyle {({e}})}}\) such that \(|x|<X^{{\scriptscriptstyle {({1}})}}\), we have \(y<Y^{{\scriptscriptstyle {({1}})}}-\rho X^{{\scriptscriptstyle {({1}})}}+\rho |x|<Y^{{\scriptscriptstyle {({1}})}}\).
In the next lemma we show that, as \(\delta \rightarrow 0\),
$$\begin{aligned} \mathcal {L} ( S^{{\scriptscriptstyle {({\delta }})}}|\mathfrak {D} ) \Rightarrow \mathcal {L}(\log \Upsilon ) \end{aligned}$$
for a certain random variable \(\Upsilon \) with positive density on \(\mathbb {R}_+\). In Sect. 6 we identify \(\Upsilon \) with the random variable appearing in Theorem 1.2.
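The truncated sums \(S^{{\scriptscriptstyle {({\delta ,+}})}}\) and \(S^{{\scriptscriptstyle {({\delta ,-}})}}\) are plain functionals of a point configuration, so they are easy to evaluate on a sample. The following Python sketch does this for a toy configuration; the uniform points on \([-X,X]\times [0,Y)\), the fixed point count, and the chosen values of \(X\), \(Y\) and \(\delta \) are illustrative assumptions and do not reproduce the conditional intensity of \(\Pi ^{{\scriptscriptstyle {({e}})}}\) appearing in the proof of Lemma 5.8.

```python
import math
import random

def truncated_sums(points, X, Y, delta):
    """Truncated sums over a finite point set, mirroring the summation
    ranges in the display above: only points with 0 < |x| < X and
    y >= delta * Y contribute, split by the sign of x."""
    s_plus = sum(math.log(1.0 - y / Y) for (x, y) in points
                 if 0.0 < x < X and y >= delta * Y)
    s_minus = sum(math.log(1.0 - y / Y) for (x, y) in points
                  if -X < x < 0.0 and y >= delta * Y)
    return s_plus, s_minus

random.seed(0)
X, Y, delta = 1.0, 1.0, 0.05
# Toy stand-in for the point process: 200 uniform points on [-X, X] x [0, Y).
points = [(random.uniform(-X, X), random.random() * Y) for _ in range(200)]
s_plus, s_minus = truncated_sums(points, X, Y, delta)
s_delta = -(s_plus - s_minus)  # antisymmetric combination entering S^(delta)
print(s_plus, s_minus, s_delta)
```

Raising the truncation level \(\delta \) discards more small-\(y\) points, so each one-sided sum increases towards zero.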

Lemma 5.8

As \(\delta \downarrow 0\),
$$\begin{aligned} \mathcal {L}(S^{{\scriptscriptstyle {({\delta }})}}|\mathfrak {D})\Rightarrow \mathcal {L}( \log \Upsilon ), \end{aligned}$$
where \(\Upsilon \) is a random variable with positive density on \(\mathbb {R}_+\) defined as follows. Let \((X, Y) \in \mathbb {R}^2\) be a random variable with density given by (5.3). Then, conditionally on \((X,Y)\), \(\log \Upsilon \) is the value at time \(X\) of a time-inhomogeneous Lévy process with zero drift, no Brownian component and the Lévy measure
$$\begin{aligned} L(dx\otimes dz)= \left\{ \begin{array}{ll} \displaystyle \frac{q\alpha e^{-|z|}}{(Y-\rho X+\rho x)^{\alpha } Y^{\alpha }(1-e^{-|z|})^{\alpha +1}} dx\otimes dz &{} \text { if } |z|<\log \frac{Y}{\rho ( X-x)},\\ 0 &{} \text { otherwise.} \end{array} \right. \end{aligned}$$
(5.5)

Proof

Denote for brevity \(X=X^{{\scriptscriptstyle {({1}})}}\) and \(Y=Y^{{\scriptscriptstyle {({1}})}}\). Conditionally on \(\Pi ^{{\scriptscriptstyle {({d}})}}\), \(\mathfrak {D}\) and \((X,Y)\), the point process \(\Pi ^{{\scriptscriptstyle {({e}})}}\) is Poissonian with the intensity measure
$$\begin{aligned} \mu ^{{\scriptscriptstyle {({e}})}}(dx\otimes dy)= \left\{ \begin{array}{ll} q(Y-\rho X+\rho |x|)^{-\alpha }\mu (dx\otimes dy) &{} \text { if } y-\rho |x|<Y-\rho X,\\ 0 &{} \text { otherwise, } \end{array} \right. \end{aligned}$$
and \(S^{{\scriptscriptstyle {({\delta ,+}})}}\) is the value at time X of a time-inhomogeneous Lévy process with zero drift, no Brownian component and the Lévy measure
$$\begin{aligned}&L^{{\scriptscriptstyle {({\delta ,+}})}}(dx\otimes dz)\nonumber \\&\quad = \left\{ \begin{array}{ll} \displaystyle \frac{q\alpha e^z}{(Y-\rho X+\rho x)^{\alpha } Y^{\alpha }(1-e^z)^{\alpha +1}} dx\otimes dz &{} \text { if } \log \frac{\rho (X-x)}{Y}<z\le \log (1-\delta ),\\ 0 &{} \text { otherwise,} \end{array} \right. \end{aligned}$$
where we consider \(\delta <(Y-\rho X)/Y\). Further, conditionally on \(\Pi ^{{\scriptscriptstyle {({d}})}}\), \(\mathfrak {D}\) and \((X,Y)\), the variable \(S^{{\scriptscriptstyle {({\delta ,-}})}}\) is independent of \(S^{{\scriptscriptstyle {({\delta ,+}})}}\) and has the same distribution. Due to symmetry, \(S^{{\scriptscriptstyle {({\delta }})}}\) is therefore the value at time \(X\) of a time-inhomogeneous Lévy process with zero drift, no Brownian component and the Lévy measure
$$\begin{aligned}&L^{{\scriptscriptstyle {({\delta }})}}(dx\otimes dz)\nonumber \\&\quad = \left\{ \begin{array}{ll} \displaystyle \frac{q\alpha e^{-|z|}}{(Y-\rho X+\rho x)^{\alpha } Y^{\alpha }(1-e^{-|z|})^{\alpha +1}} dx\otimes dz &{} \text { if } \log \frac{1}{1-\delta }\le |z|< \log \frac{Y}{\rho (X-x)},\\ 0 &{} \text { otherwise. } \end{array} \right. \end{aligned}$$
As \(\delta \downarrow 0\), \(S^{{\scriptscriptstyle {({\delta }})}}\) converges weakly to the value at time \(X\) of a time-inhomogeneous Lévy process with zero drift, no Brownian component and the Lévy measure given by (5.5), where the limiting Lévy measure is well-defined because
$$\begin{aligned}&\int _0^{X}\int _{\mathbb {R}}\min \{1,z^2\}L(dx\otimes dz) =2\int _0^{X}\int _0^{\infty }\min \{1,z^2\}L(dx\otimes dz)\\&\quad =2\int _0^{X}\int _0^{Y-\rho X+\rho x}\frac{q\alpha \min \{1,\log ^2(1-y/Y)\}}{(Y-\rho X+\rho x)^{\alpha } y^{\alpha +1}}dydx\\&\quad \le 2\int _0^{ X}\int _0^{Y}\frac{q\alpha \min \{1,\log ^2(1-y/Y)\}}{(Y-\rho X+\rho x)^{\alpha } y^{\alpha +1}}dydx\\&\quad =\frac{2q\alpha }{\rho (\alpha -1)}\Big [(Y-\rho X)^{1-\alpha }-Y^{1-\alpha }\Big ]\int _0^{Y}\frac{ \min \{1,\log ^2(1-y/Y)\}}{y^{\alpha +1}}dy<\infty . \end{aligned}$$
Since a Lévy process has positive density on \(\mathbb {R}\) at positive times, and since the law of \(\log \Upsilon \) is obtained by averaging over the law of Lévy processes at positive times, \(\Upsilon \) also has positive density on \(\mathbb {R}_+\). \(\square \)
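The finiteness bound at the end of the proof rests on the elementary integral \(\int _0^X(Y-\rho X+\rho x)^{-\alpha }dx=\big ((Y-\rho X)^{1-\alpha }-Y^{1-\alpha }\big )/(\rho (\alpha -1))\). A quick numerical cross-check of this antiderivative, at illustrative parameter values satisfying \(Y-\rho X>0\), can be sketched as:

```python
import math

def x_integral_numeric(X, Y, rho, alpha, n=100000):
    """Midpoint rule for int_0^X (Y - rho*X + rho*x)^(-alpha) dx."""
    h = X / n
    return sum(h * (Y - rho * X + rho * (i + 0.5) * h) ** (-alpha)
               for i in range(n))

def x_integral_closed(X, Y, rho, alpha):
    # Antiderivative (Y - rho*X + rho*x)^(1-alpha) / (rho*(1-alpha)),
    # evaluated between x = 0 and x = X
    return ((Y - rho * X) ** (1.0 - alpha)
            - Y ** (1.0 - alpha)) / (rho * (alpha - 1.0))

# Illustrative values only; the setting requires Y - rho*X > 0
X, Y, rho, alpha = 0.7, 2.0, 1.0, 1.5
print(x_integral_numeric(X, Y, rho, alpha), x_integral_closed(X, Y, rho, alpha))
```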

6 Fluctuation theory in the case \(\alpha \in (1,2)\)

In this section we study the fluctuations in the ratio \(u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, Z^{{\scriptscriptstyle {({2}})}}_t)\) in the case \(\alpha \in (1, 2)\), building on our analysis in Sect. 4.1, and hence complete the proof of Theorem 1.2.

Recall that E denotes the set \(\mathbb {Z}{\setminus }D\). For any \(t>0\), let
$$\begin{aligned} S_t=-\sum _{\genfrac{}{}{0.0pt}{}{0<|z|<|Z^{{\scriptscriptstyle {({1}})}}_t|}{z \in E}}\text {sgn}(z/Z^{{\scriptscriptstyle {({1}})}}_t)\log \Big (1-\frac{\xi (z)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ) \end{aligned}$$
on the event \(\mathcal {E}_t\) and zero otherwise. In Sect. 4 we showed that the ratio \(u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, -Z^{{\scriptscriptstyle {({1}})}}_t)\) was well-approximated by \(\exp \{S_t\}\), so it remains to study the convergence of \(S_t\). To do this, we first truncate the sum at potential values above a certain threshold and show that this is a good approximation of the full sum; we then study the convergence of the truncated sums.
For any \(\delta >0\), define
$$\begin{aligned} S_{t}^{{\scriptscriptstyle {({\delta }})}} =-\sum _{\genfrac{}{}{0.0pt}{}{0<|z|<|Z^{{\scriptscriptstyle {({1}})}}_t|}{z\in E}} \text {sgn}(z/Z^{{\scriptscriptstyle {({1}})}}_t)\log \Big (1-\frac{\xi (z)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ){\mathbf 1}_{\{\xi (z) \ge \delta \xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}} \end{aligned}$$
on the event \(\mathcal {E}_t\) and zero otherwise, and let \(\hat{S}_{t}^{{\scriptscriptstyle {({\delta }})}} = S_t - S_{t}^{{\scriptscriptstyle {({\delta }})}}\). Denote by \({\mathrm {Prob}}^{{\scriptscriptstyle {({e}})}}\) and \(\mathrm {E}^{{\scriptscriptstyle {({e}})}}\) the conditional probability and expectation given D and \(\{\xi (z): z\in D\}\). The next lemma shows that the truncated sum \(S_{t}^{{\scriptscriptstyle {({\delta }})}}\) is a good approximation for the full sum \(S_t\).
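For a concrete sense of the truncation, the sketch below evaluates the full sum and its truncated version on a small hand-made potential. The dictionary of potential values, the chosen top site (playing the role of \(Z^{{\scriptscriptstyle {({1}})}}_t\)), and the treatment of every intermediate site as belonging to E are illustrative assumptions only.

```python
import math

def full_and_truncated_sums(xi, z1, delta):
    """Compute the full sum S_t and the truncated sum S_t^(delta) for a toy
    potential xi (dict: site -> value) with top site z1; every site strictly
    between -|z1| and |z1|, other than 0, is treated as belonging to E."""
    top = xi[z1]
    s_full, s_trunc = 0.0, 0.0
    for z, v in xi.items():
        if z == 0 or abs(z) >= abs(z1):
            continue
        term = -math.copysign(1.0, z * z1) * math.log(1.0 - v / top)
        s_full += term
        if v >= delta * top:  # keep only potential values >= delta * xi(z1)
            s_trunc += term
    return s_full, s_trunc

# Toy potential; sites -3, -1, 2 fall below the truncation threshold
xi = {-3: 0.4, -2: 6.0, -1: 0.9, 1: 1.2, 2: 0.3, 3: 5.0, 4: 10.0}
s_full, s_trunc = full_and_truncated_sums(xi, 4, delta=0.1)
print(s_full, s_trunc, s_full - s_trunc)
```

The difference \(S_t-S_{t}^{{\scriptscriptstyle {({\delta }})}}\) collects only the sites with \(\xi (z)<\delta \,\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\), each contributing at most \(|\log (1-\delta )|\) in absolute value.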

Lemma 6.1

For any \(\varepsilon _1,\varepsilon _2>0\) there is \(\delta _0>0\) such that for each \(0<\delta \le \delta _0\)
$$\begin{aligned} {\mathrm {Prob}}\big (\{|\hat{S}_{t}^{{\scriptscriptstyle {({\delta }})}}|>\varepsilon _1\}\cap \mathfrak {D}_t\big )<\varepsilon _2 \end{aligned}$$
eventually for all t.

Proof

Let
$$\begin{aligned} \mathcal {\hat{E}}_t=\left\{ c_1<\frac{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}{a_t}<c_2, c_1<\frac{\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)}{a_t}<c_2, c_1<\frac{\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}, \frac{|Z^{{\scriptscriptstyle {({1}})}}_t|}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{\alpha }}<c_2\right\} , \end{aligned}$$
where \(c_1,c_2>0\) are chosen according to Proposition 5.5 so that
$$\begin{aligned} {\mathrm {Prob}}(\mathcal {\hat{E}}_t)>1-\varepsilon _2/3 \end{aligned}$$
for some \(t_1>0\) and all \(t\ge t_1\). By Proposition 5.6, let \(t_2\) be such that for all \(t\ge t_2\)
$$\begin{aligned} {\mathrm {Prob}}(\mathcal {E}_t)>1-\varepsilon _2/3. \end{aligned}$$
For each \(\delta >0\), let \(t_3\) be such that \(\delta c_1a_{t_3}>1\), and let \(t_0= \max \{t_1,t_2,t_3\}\). From now on, fix \(\delta >0\) and \(t\ge t_0\), and work on the event \(\mathfrak {D}_t\cap \mathcal {E}_t\cap \mathcal {\hat{E}}_t\).
By Chebyshev’s inequality we have
$$\begin{aligned}&\text {Prob}^{{\scriptscriptstyle {({e}})}} \big (|\hat{S}_t^{{\scriptscriptstyle {({\delta }})}}|>\varepsilon _1\big ) \le \varepsilon _1^{-2}\mathrm {E}^{{\scriptscriptstyle {({e}})}}\big [\hat{S}_t^{{\scriptscriptstyle {({\delta }})}}\big ]^2\\&\quad = \varepsilon _1^{-2}\mathrm {E}^{{\scriptscriptstyle {({e}})}}\left[ \sum _{\genfrac{}{}{0.0pt}{}{0<z<|Z^{{\scriptscriptstyle {({1}})}}_t|}{z\in E}} \text {sgn}(Z^{{\scriptscriptstyle {({1}})}}_t) \left( \log \left( 1-\frac{\xi (-z)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\right) {\mathbf 1}_{\{\xi (-z)< \delta \xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}}\right. \right. \nonumber \\&\qquad \left. \left. - \log \left( 1-\frac{\xi (z)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\right) {\mathbf 1}_{\{\xi (z)< \delta \xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}}\right) \right] ^2 \end{aligned}$$
Since, under \({\mathrm {Prob}}^{{\scriptscriptstyle {({e}})}}\), the summands are independent and each is the difference of two independent, identically distributed terms, we obtain
$$\begin{aligned} \text {Prob}^{{\scriptscriptstyle {({e}})}}&\big (|\hat{S}_t^{{\scriptscriptstyle {({\delta }})}}|>\varepsilon _1\big ) \le 4\varepsilon _1^{-2}\sum _{\genfrac{}{}{0.0pt}{}{0<z<|Z^{{\scriptscriptstyle {({1}})}}_t|}{z\in E}} \mathrm {E}^{{\scriptscriptstyle {({e}})}}\Big [\log \Big (1-\frac{\xi (z)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ){\mathbf 1}_{\{\xi (z)< \delta \xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}}\Big ]^2 . \end{aligned}$$
For each \(0<z<|Z^{{\scriptscriptstyle {({1}})}}_t|\), \(z\in E\), the conditional distribution of \(\xi (z)\) is the Pareto distribution with parameter \(\alpha \) conditioned on \(\Psi _t(z)<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\), that is, on
$$\begin{aligned} \xi (z)-\frac{|z|}{t}\log \xi (z)<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t). \end{aligned}$$
Observe that for all \(\delta \le c_1\)
$$\begin{aligned} \Big \{y\in [1,\infty ):y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\Big \} \supset [1,\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)]\supset [1,\delta \xi (Z^{{\scriptscriptstyle {({1}})}}_t)]. \end{aligned}$$
This implies
$$\begin{aligned}&\int _{1}^{\infty }\frac{\alpha }{y^{\alpha +1}}{\mathbf 1}_{\{y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\}}dy\\&\quad \ge \int _{1}^{\infty }\frac{\alpha }{y^{\alpha +1}}{\mathbf 1}_{\{y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\}}dy=1-\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)^{-\alpha }>1/2. \end{aligned}$$
Observe that \(\delta \xi (Z^{{\scriptscriptstyle {({1}})}}_t)>\delta c_1 a_t\ge 1\). Using the change of variables \(y=u\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\), we obtain
$$\begin{aligned}&\int _{1}^{\delta \xi (Z^{{\scriptscriptstyle {({1}})}}_t)} \frac{\alpha }{y^{\alpha +1}}\log ^2 \Big (1-\frac{y}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ) {\mathbf 1}_{\{y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\}}dy\\&\qquad =\int _{1}^{\delta \xi (Z^{{\scriptscriptstyle {({1}})}}_t)} \frac{\alpha }{y^{\alpha +1}}\log ^2 \Big (1-\frac{y}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big )dy\\&\qquad \le \frac{\alpha }{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{\alpha }}\int _{0}^{\delta } u^{-\alpha -1}\log ^2 (1-u)du. \end{aligned}$$
Since
$$\begin{aligned} \int _{0}^{\delta } u^{-\alpha -1}\log ^2 (1-u)du\sim \int _0^{\delta }u^{1-\alpha }du =\frac{\delta ^{2-\alpha }}{2-\alpha } \end{aligned}$$
as \(\delta \downarrow 0\), we can choose \(\delta _1\le c_1\) small enough so that for all \(\delta \le \delta _1\)
$$\begin{aligned} \mathrm {E}^{{\scriptscriptstyle {({e}})}}\Big [\log \Big (1-\frac{\xi (z)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ){\mathbf 1}_{\{\xi (z)< \delta \xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}}\Big ]^2 \le \frac{4\alpha \delta ^{2-\alpha }}{(2-\alpha )\xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{\alpha }} \end{aligned}$$
and
$$\begin{aligned} \text {Prob}^{{\scriptscriptstyle {({e}})}}&\big (|\hat{S}_t^{{\scriptscriptstyle {({\delta }})}}|>\varepsilon _1\big ) \le \frac{16\alpha \delta ^{2-\alpha }}{\varepsilon _1^2(2-\alpha )}\cdot \frac{|Z^{{\scriptscriptstyle {({1}})}}_t|}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{\alpha }}<\frac{16\alpha c_2\delta ^{2-\alpha }}{\varepsilon _1^2(2-\alpha )} <\varepsilon _2/3 \end{aligned}$$
for all \(\delta \le \delta _0\) with some \(\delta _0\le \delta _1\). Hence
$$\begin{aligned} {\mathrm {Prob}}\big (\{|\hat{S}_{t}^{{\scriptscriptstyle {({\delta }})}}|>\varepsilon _1\}\cap \mathfrak {D}_t\big )&\le {\mathrm {Prob}}\big (\{|\hat{S}_{t}^{{\scriptscriptstyle {({\delta }})}}|>\varepsilon _1\}\cap \mathfrak {D}_t\cap \mathcal {E}_t\cap \mathcal {\hat{E}}_t\big ) +{\mathrm {Prob}}(\mathcal {E}_t^c)\\&\quad +{\mathrm {Prob}}(\mathcal {\hat{E}}_t^c) <\varepsilon _2 \end{aligned}$$
as required. \(\square \)
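The proof above hinges on the small-\(\delta \) asymptotics \(\int _0^{\delta }u^{-\alpha -1}\log ^2(1-u)du\sim \delta ^{2-\alpha }/(2-\alpha )\). A numerical sanity check of this equivalence, at the illustrative values \(\alpha =1.5\) and \(\delta =10^{-3}\):

```python
import math

def trunc_integral_numeric(alpha, delta, n=400000):
    """Midpoint rule for int_0^delta u^(-alpha-1) * log(1-u)^2 du;
    the integrand behaves like u^(1-alpha) near zero, so the rule converges."""
    h = delta / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += h * u ** (-alpha - 1.0) * math.log(1.0 - u) ** 2
    return total

alpha, delta = 1.5, 1e-3
numeric = trunc_integral_numeric(alpha, delta)
asymptotic = delta ** (2.0 - alpha) / (2.0 - alpha)
print(numeric, asymptotic)
```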

We next show that the truncated sum \(S_t^{{\scriptscriptstyle {({\delta }})}}\) converges to the variable \(S^{{\scriptscriptstyle {({\delta }})}}\) introduced in (5.4); since the proof is similar to those appearing in “Appendix A”, we also defer it to the appendix.

Proposition 6.2

As \(t\rightarrow \infty \),
$$\begin{aligned} \mathcal {L}(S_t^{{\scriptscriptstyle {({\delta }})}}|\mathfrak {D}_t) \Rightarrow \mathcal {L}(S^{{\scriptscriptstyle {({\delta }})}}|\mathfrak {D}). \end{aligned}$$

We are now ready to put everything together to complete the proof of Theorem 1.2, in particular showing that the ratio \(u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, -Z^{{\scriptscriptstyle {({1}})}}_t)\) converges in distribution to \(\Upsilon \), where \(\Upsilon \) is the random variable defined in Lemma 5.8.

6.1 Completion of the proof of Theorem 1.2

By Propositions 3.7 and 4.3, on the event \(\mathfrak {D}_t\) only the shortest paths to \(Z^{{\scriptscriptstyle {({1}})}}_t\) and \(-Z^{{\scriptscriptstyle {({1}})}}_t\) are non-negligible, and hence by Proposition 4.2 we have
$$\begin{aligned} \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)}&=\left\{ \prod _{j=0}^{|Z^{{\scriptscriptstyle {({1}})}}_t|-1}\frac{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (-j)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (j)}\right\} ^\iota +o(1)\\&=\exp \Big \{-\text {sgn}(Z^{{\scriptscriptstyle {({1}})}}_t)\sum _{0\le j<|Z^{{\scriptscriptstyle {({1}})}}_t|} \Big [\log \Big (1-\frac{\xi (j)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big )-\log \Big (1-\frac{\xi (-j)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big )\Big ]\Big \}\\&\quad +o(1)=\exp \big \{S_t\big \}+o(1). \end{aligned}$$
It suffices to show that
$$\begin{aligned} \mathcal {L}(S_t|\mathfrak {D}_t)\Rightarrow \mathcal {L}(\log \Upsilon ), \end{aligned}$$
where \(\Upsilon \) is the random variable defined in Lemma 5.8. Let \(x\in \mathbb {R}\), \(\varepsilon >0\) and choose \(\hat{\varepsilon }>0\) so that
$$\begin{aligned} {\mathrm {Prob}}(\log \Upsilon \le x-\hat{\varepsilon })&> {\mathrm {Prob}}(\log \Upsilon \le x)-\varepsilon /4,\end{aligned}$$
(6.1)
$$\begin{aligned} {\mathrm {Prob}}(\log \Upsilon \le x+\hat{\varepsilon })&< {\mathrm {Prob}}(\log \Upsilon \le x)+\varepsilon /4. \end{aligned}$$
(6.2)
Choose \(\delta _0\) according to Lemma 6.1 with \(\varepsilon _1=\hat{\varepsilon }\) and \(\varepsilon _2<\varepsilon {\mathrm {Prob}}(\mathfrak {D}_t)/4\) for all t, which is possible by Proposition 5.7 since \({\mathrm {Prob}}(\mathfrak {D}_t)\) converges to a positive limit. By Lemma 5.8, choose \(\delta \le \delta _0\) so that
$$\begin{aligned} {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}\le x-\hat{\varepsilon }| \mathfrak {D}\big )&> {\mathrm {Prob}}(\log \Upsilon \le x-\hat{\varepsilon })-\varepsilon /4, \end{aligned}$$
(6.3)
$$\begin{aligned} {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}\le x+\hat{\varepsilon }| \mathfrak {D}\big )&< {\mathrm {Prob}}(\log \Upsilon \le x+\hat{\varepsilon })+\varepsilon /4, \end{aligned}$$
(6.4)
and choose \(t_0\) such that the statement of Lemma 6.1 holds for \(t\ge t_0\). We have
$$\begin{aligned} {\mathrm {Prob}}(\{S_t\le x\}\cap \mathfrak {D}_t)&={\mathrm {Prob}}\big (\{S^{{\scriptscriptstyle {({\delta }})}}_t\le x-\hat{S}^{{\scriptscriptstyle {({\delta }})}}_t\}\cap \{|\hat{S}^{{\scriptscriptstyle {({\delta }})}}_t|\le \varepsilon _1\}\cap \mathfrak {D}_t\big )\\&\quad +{\mathrm {Prob}}\big (\{S^{{\scriptscriptstyle {({\delta }})}}_t\le x-\hat{S}^{{\scriptscriptstyle {({\delta }})}}_t\}\cap \{|\hat{S}^{{\scriptscriptstyle {({\delta }})}}_t|>\varepsilon _1\}\cap \mathfrak {D}_t\big ). \end{aligned}$$
Hence by Lemma 6.1 for all \(t>t_0\),
$$\begin{aligned} {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}_t\le x-\hat{\varepsilon }| \mathfrak {D}_t\big )-\varepsilon /4< {\mathrm {Prob}}(S_t\le x|\mathfrak {D}_t) < {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}_t\le x+\hat{\varepsilon }| \mathfrak {D}_t\big )+\varepsilon /4. \end{aligned}$$
By Proposition 6.2 there is \(t_1\ge t_0\) such that for all \(t\ge t_1\)
$$\begin{aligned} {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}_t\le x-\hat{\varepsilon }| \mathfrak {D}_t\big )&> {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}\le x-\hat{\varepsilon }| \mathfrak {D}\big ) -\varepsilon /4,\\ {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}_t\le x+\hat{\varepsilon }| \mathfrak {D}_t\big )&< {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}\le x+\hat{\varepsilon }| \mathfrak {D}\big ) +\varepsilon /4, \end{aligned}$$
implying
$$\begin{aligned} {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}\le x-\hat{\varepsilon }| \mathfrak {D}\big )-\varepsilon /2< {\mathrm {Prob}}(S_t\le x|\mathfrak {D}_t) < {\mathrm {Prob}}\big (S^{{\scriptscriptstyle {({\delta }})}}\le x+\hat{\varepsilon }| \mathfrak {D}\big )+\varepsilon /2. \end{aligned}$$
Combining this with (6.3) and (6.4) we obtain
$$\begin{aligned} {\mathrm {Prob}}(\log \Upsilon \le x-\hat{\varepsilon })-3\varepsilon /4< {\mathrm {Prob}}(S_t\le x|\mathfrak {D}_t) < {\mathrm {Prob}}(\log \Upsilon \le x+\hat{\varepsilon })+3\varepsilon /4. \end{aligned}$$
Together with (6.1) and (6.2) this gives
$$\begin{aligned} {\mathrm {Prob}}(\log \Upsilon \le x)-\varepsilon< {\mathrm {Prob}}(S_t\le x|\mathfrak {D}_t) < {\mathrm {Prob}}(\log \Upsilon \le x)+\varepsilon \end{aligned}$$
for all \(t\ge t_1\), which completes the proof. \(\square \)

7 Fluctuation theory in the case \(\alpha \ge 2\)

In this section we study the fluctuations in the ratio \(u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, Z^{{\scriptscriptstyle {({2}})}}_t)\) in the case \(\alpha \ge 2\), and hence complete the proof of Theorem 1.3. Due to our analysis in Sect. 4, we know that it is sufficient to study only the contribution from paths which visit the sites in \(\mathcal {N}_t\) at most once.

The first step is to show that, by conditioning on the information not contained in the sites in \(\mathcal {N}_t\), we are left with an expression that is amenable to applying standard fluctuation theory; here we use our analysis of the function I. The final step is to show that the fluctuations due to \(\mathcal {N}_t\) are already enough to imply that \(|\log u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, Z^{{\scriptscriptstyle {({2}})}}_t) | \rightarrow \infty \), regardless of the contributions from all other sites; we achieve this by invoking a central limit argument.
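As a rough illustration of the central limit step only, the sketch below standardises a sum of bounded i.i.d. variables and compares its empirical distribution with \(\Phi \). The uniform terms are toy stand-ins for the \(Q_t(z)\), and the conditional nature of the independence in our setting is ignored.

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(1)
n_terms, n_samples = 400, 2000

def standardized_sum():
    # Sum of n_terms i.i.d. Uniform(0,1) terms, centred and scaled
    terms = [random.random() for _ in range(n_terms)]
    mean, var = 0.5, 1.0 / 12.0
    return (sum(terms) - n_terms * mean) / math.sqrt(n_terms * var)

samples = sorted(standardized_sum() for _ in range(n_samples))
# Crude distance between the empirical CDF and Phi on a small grid
grid = [-2.0, -1.0, 0.0, 1.0, 2.0]
emp = [sum(s <= x for s in samples) / n_samples for x in grid]
dist = max(abs(e - phi(x)) for e, x in zip(emp, grid))
print(dist)
```

With 2000 samples the discrepancy at the grid points should be of order a few percent.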

For each \(t>0\), let \(\mathcal {F}_t\) be the \(\sigma \)-algebra generated by D, \(Z^{{\scriptscriptstyle {({1}})}}_t\), \(\mathcal {N}_t\), and \(\{\xi (z):z\notin \mathcal {N}_t\}\). Let \(\text {Prob}_{\mathcal {F}_t}\), \({\mathrm E}_{\mathcal {F}_t}\) and \(\text {Var}_{\mathcal {F}_t}\) denote, respectively, conditional probability, expectation and variance with respect to \(\mathcal {F}_t\). For each \(z\in \mathcal {N}_t\), define
$$\begin{aligned} Q_t(z)=-\text {sgn}(z/Z^{{\scriptscriptstyle {({1}})}}_t)\log \Big (1-\frac{\xi (z)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ) \end{aligned}$$
whenever \(\xi (z)<\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\) and zero otherwise. Let
$$\begin{aligned} Q_t=\sum _{z\in \mathcal {N}_t}Q_t(z). \end{aligned}$$
Observe that \(Q_t(z)\), \(z\in \mathcal {N}_t\), are conditionally independent with respect to \(\mathcal {F}_t\), which implies that
$$\begin{aligned} \text {Var}_{\mathcal {F}_t} Q_t =\sum _{z \in \mathcal {N}_t}\text {Var}_{\mathcal {F}_t} Q_t(z). \end{aligned}$$
(7.1)
Further, it is easy to see that for each \(z\in \mathcal {N}_t\), the conditional distribution of \(\xi (z)\) is the Pareto distribution with parameter \(\alpha \) conditioned on \(\Psi _t(z)<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)\) and \(\xi (z)>\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\).
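Sampling from this conditional distribution is easy to sketch by rejection: draw from the Pareto tail restricted to \((\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t),\infty )\) by inversion, and discard draws violating the \(\Psi _t\) constraint. All numerical parameters below (the threshold, the cap, the ratio \(|z|/t\)) are illustrative assumptions.

```python
import math
import random

def conditional_pareto_sample(alpha, lower, psi_cap, z_over_t, rng):
    """Rejection sampling from Pareto(alpha) conditioned on xi > lower and
    xi - (|z|/t) * log(xi) < psi_cap; `lower` stands in for
    delta_t * xi(Z_t^(1)) and `psi_cap` for Psi_t(Z_t^(1))."""
    while True:
        # Inverse-CDF draw from the Pareto tail on (lower, inf):
        # P(xi > y | xi > lower) = (y / lower)^(-alpha)
        u = 1.0 - rng.random()  # uniform in (0, 1]
        xi = lower * u ** (-1.0 / alpha)
        if xi - z_over_t * math.log(xi) < psi_cap:
            return xi

rng = random.Random(7)
samples = [conditional_pareto_sample(alpha=2.5, lower=3.0, psi_cap=8.0,
                                     z_over_t=0.5, rng=rng) for _ in range(500)]
print(min(samples), max(samples))
```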

The next lemma establishes that, after conditioning on \(\mathcal {F}_t\), the contribution to the ratio \(u(t, Z^{{\scriptscriptstyle {({1}})}}_t)/u(t, Z^{{\scriptscriptstyle {({2}})}}_t)\) due to the sites in \(\mathcal {N}_t\) is well-approximated by a product over these sites.

Proposition 7.1

There exists an \(\mathcal {F}_t\)-measurable random variable \(P_t\) such that
$$\begin{aligned} \big | \log u ( t, Z^{{\scriptscriptstyle {({1}})}}_t) - \log u ( t, -Z^{{\scriptscriptstyle {({1}})}}_t) - Q_t + P_t \big | {\mathbf 1}_{\mathfrak {D}_t\cap \mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}} \rightarrow 0 \end{aligned}$$
almost surely, as \(t\rightarrow \infty \).

Proof

We write \(\mathrm {Null}^t_{1+}\) and \(\mathrm {Null}^t_{1-}\) for the sets of null paths in \(\mathrm {Null}^t_1\) ending in \(Z^{{\scriptscriptstyle {({1}})}}_t\) and \(-Z^{{\scriptscriptstyle {({1}})}}_t\), respectively. Further, we denote by \(\mathcal {N}_t^+\) and \(\mathcal {N}_t^{-}\) the subsets of \(\mathcal {N}_t\) consisting of the points lying between 0 and \(Z^{{\scriptscriptstyle {({1}})}}_t\), and 0 and \(-Z^{{\scriptscriptstyle {({1}})}}_t\), respectively. Finally, we denote by \(N_t^{+}\) and \(N_t^{-}\) the respective cardinalities of \(\mathcal {N}_t^+\) and \(\mathcal {N}_t^{-}\).

By Propositions 3.7 and 4.6 we have that, on the event \(\mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\), almost surely
$$\begin{aligned} u(t,Z^{{\scriptscriptstyle {({1}})}}_t)=(1+o(1))\sum _{y\in \mathrm {Null}^t_{1+}}U(t,y), \end{aligned}$$
(7.2)
as \(t\rightarrow \infty \). Fix a path \(y\in \mathrm {Null}^t_{1+}\) and note that \(y_{\ell (y)}=Z^{{\scriptscriptstyle {({1}})}}_t\). We wish to extract from the path terms involving potential values of sites in \(\mathcal {N}_t\). To do this first we recall that by (2.2)
$$\begin{aligned} U(t,y)=e^{-2t}I_{\ell (y)}(t;\xi (\mathbf{y})), \end{aligned}$$
(7.3)
where \(\xi (\mathbf{y})\) denotes the sequence \(\xi (y_0),\ldots ,\xi (y_{\ell (y)})\). Then, for any \(z\in \mathcal {N}_t^+\) it follows from the recursion (3.9) and the symmetry of I proved in Lemma 3.8 that
$$\begin{aligned} I_{\ell (y)}(t;\xi (\mathbf{y}))= \frac{1}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (z)} \Big [I_{\ell (y)-1}\big (t;\xi (\mathbf{y}\backslash \{z\})\big )-I_{\ell (y)-1}\big (t;\xi (\mathbf{y}\backslash \{Z^{{\scriptscriptstyle {({1}})}}_t\})\big )\Big ], \end{aligned}$$
where \(\xi (\mathbf{y}\backslash \{z\})\) denotes the sequence \(\xi (y_0),\ldots ,\xi (y_{\ell (y)})\) with the occurrence of \(\xi (z)\) removed (note that since \(y\in \mathrm {Null}^t_{1+}\) it makes exactly one visit to z). On the event \(\mathcal {E}_t\) we have \(\xi (z)<\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\) and therefore we can use Lemma 3.12 and obtain
$$\begin{aligned} I_{\ell (y)-1}\big (t;\xi (\mathbf{y}\backslash \{Z^{{\scriptscriptstyle {({1}})}}_t\})\big )\le \frac{\ell (y)}{(\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (z))t} I_{\ell (y)-1}\big (t;\xi (\mathbf{y}\backslash \{z\})\big ). \end{aligned}$$
Observe that we have \(\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (z)>a_tf_t\) on the event \(\mathcal {E}_t\) as well as \(\ell (y)\le R_t<r_tg_t(1+f_t)\). Plugging this into the above gives
$$\begin{aligned} I_{\ell (y)-1}\big (t;\xi (\mathbf{y}\backslash \{Z^{{\scriptscriptstyle {({1}})}}_t\})\big )\le \frac{g_t(1+f_t)}{f_t\log t} I_{\ell (y)-1}\big (t;\xi (\mathbf{y}\backslash \{z\})\big ), \end{aligned}$$
and thus
$$\begin{aligned} I_{\ell (y)}(t;\xi (\mathbf{y})) =I_{\ell (y)-1}\big (t;\xi (\mathbf{y}\backslash \{z\})\big ) \Big (1+O\Big (\frac{g_t}{f_t\log t}\Big )\Big ) \frac{1}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (z)} \end{aligned}$$
on \(\mathcal {E}_t\), as \(t\rightarrow \infty \). Iterating this procedure for all \(z\in \mathcal {N}_t^+\) and observing that
$$\begin{aligned} \frac{N_t^+g_t}{f_t\log t}< \frac{\delta _t^{-\alpha }\log (1/\delta _t)g_t}{f_t\log t}\rightarrow 0 \end{aligned}$$
on the event \(\mathcal {E}_t^{[2,\infty )}\), we obtain that
$$\begin{aligned} I_{\ell (y)}(t;\xi (\mathbf{y})) =(1+o(1))I_{\ell (y)-N_t^+}\big (t;\xi (\mathbf{y}\backslash \mathcal {N}_t)\big ) \prod _{z\in \mathcal {N}_t^+}\frac{1}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (z)} \end{aligned}$$
on \(\mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\), as \(t\rightarrow \infty \), where \(\xi (\mathbf{y}\backslash \mathcal {N}_t)\) denotes the sequence \(\xi (y_0),\ldots ,\xi (y_{\ell (y)})\) with all occurrences of \(\xi (z)\) with \(z\in \mathcal {N}_t\) removed. Combining this with (7.2) and (7.3) we obtain
$$\begin{aligned} u(t,Z^{{\scriptscriptstyle {({1}})}}_t)=(1+o(1))e^{-2t} \prod _{z\in \mathcal {N}_t^+}\frac{1}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\xi (z)} \sum _{y\in \mathrm{Null}^t_{1+}} I_{\ell (y)-N_t^+}\big (t;\xi (\mathbf{y}\backslash \mathcal {N}_t)\big ) \end{aligned}$$
and hence
$$\begin{aligned} \log u(t,Z^{{\scriptscriptstyle {({1}})}}_t) =&-\sum _{ z\in \mathcal {N}_t^+}\log \Big (1-\frac{\xi (z)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ) -N_t^+\log \xi (Z^{{\scriptscriptstyle {({1}})}}_t)\nonumber \\&+\log \sum _{y\in \mathrm{Null}^t_{1+}} I_{\ell (y)-N_t^+}\big (t;\xi (\mathbf{y}\backslash \mathcal {N}_t)\big ) -2t+o(1) \end{aligned}$$
(7.4)
on \(\mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\), as \(t\rightarrow \infty \). Similarly,
$$\begin{aligned} \log u(t,-Z^{{\scriptscriptstyle {({1}})}}_t) =&-\sum _{z\in \mathcal {N}_t^-}\log \Big (1-\frac{\xi (z)}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ) -N_t^-\log \xi (Z^{{\scriptscriptstyle {({1}})}}_t)\nonumber \\&+\log \sum _{y\in \mathrm{Null}^t_{1-}} I_{\ell (y)-N_t^-}\big (t;\xi (\mathbf{y}\backslash \mathcal {N}_t)\big ) -2t+o(1) \end{aligned}$$
(7.5)
on \(\mathfrak {D}_t\cap \mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\), as \(t\rightarrow \infty \). Combining (7.4) and (7.5) we obtain the desired result with
$$\begin{aligned} P_t =\,&N_t^+\log \xi (Z^{{\scriptscriptstyle {({1}})}}_t)+\log \sum _{y\in \mathrm{Null}^t_{1+}} I_{\ell (y)-N_t^+}\big (t;\xi (\mathbf{y}\backslash \mathcal {N}_t)\big )\\&-N_t^-\log \xi (Z^{{\scriptscriptstyle {({1}})}}_t)-\log \sum _{y\in \mathrm{Null}^t_{1-}} I_{\ell (y)-N_t^-}\big (t;\xi (\mathbf{y}\backslash \mathcal {N}_t)\big ), \end{aligned}$$
which is obviously \(\mathcal {F}_t\)-measurable. \(\square \)

We now study the scale of the fluctuations due to the sites in \(\mathcal {N}_t\), showing in particular that these fluctuations are unbounded.

Proposition 7.2

As \(t \rightarrow \infty \),
$$\begin{aligned} \left[ \text {Var}_{\mathcal {F}_t} Q_t \right] ^{-1} {\mathbf 1}_{\mathcal {E}_t \cap \mathcal {E}_t^{[2,\infty )}} \rightarrow 0 \quad \text {almost surely} \end{aligned}$$
and
$$\begin{aligned} \max _{z \in \mathcal {N}_t} \big |{\mathrm E}_{\mathcal {F}_t} Q_t(z) \big | {\mathbf 1}_{\mathcal {E}_t } \rightarrow 0 \quad \text {almost surely.} \end{aligned}$$

Proof

We work throughout on the event \(\mathcal {E}_t\). Using \(\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)>\delta _tf_ta_t>1\) for the upper bound and \(\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)/\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)<\delta _t/f_t\rightarrow 0\) for the lower bound we obtain
$$\begin{aligned} \int _1^{\infty }\frac{\alpha dy}{y^{\alpha +1}} {\mathbf 1}_{\{y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t),y>\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}} \le \int _{\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}^{\infty }\frac{\alpha dy}{y^{\alpha +1}}=\big (\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\big )^{-\alpha } \end{aligned}$$
and
$$\begin{aligned}&\int _1^{\infty }\frac{\alpha dy}{y^{\alpha +1}} {\mathbf 1}_{\{y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t),y>\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}}\\&\quad \ge \int _{\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}^{\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)}\frac{\alpha dy}{y^{\alpha +1}}=(1+o(1))\big (\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\big )^{-\alpha } \end{aligned}$$
implying
$$\begin{aligned} \int _1^{\infty }\frac{\alpha dy}{y^{\alpha +1}} {\mathbf 1}_{\{y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t),y>\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}}=(1+o(1))\big (\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\big )^{-\alpha } \end{aligned}$$
(7.6)
as \(t\rightarrow \infty \) uniformly for all z almost surely.
Further, using \(\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)>1\), the change of variables \(y=u\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\), and \(\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)/\xi (Z^{{\scriptscriptstyle {({1}})}}_t)>f_t\) we get
$$\begin{aligned}&\int _1^{\infty }\frac{\alpha dy}{y^{\alpha +1}} \log ^2\Big (1-\frac{y}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ) {\mathbf 1}_{\{y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t),\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)<y<\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}}\nonumber \\&\quad \ge \int _{\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}^{\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t)}\frac{\alpha dy}{y^{\alpha +1}} \log ^2\Big (1-\frac{y}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ) \ge \frac{\alpha }{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{\alpha }} \int _{\delta _t}^{f_t} u^{-\alpha -1} \log ^2(1-u)du. \end{aligned}$$
(7.7)
Since \(\delta _t/f_t\rightarrow 0\) we have
$$\begin{aligned} \int _{\delta _t}^{f_t} u^{-\alpha -1} \log ^2(1-u)du=(1+o(1)) \times \left\{ \begin{array}{ll} \frac{1}{\alpha -2}\delta _t^{2-\alpha }, &{} \alpha >2,\\ \log (1/\delta _t), &{} \alpha =2. \end{array}\right. \end{aligned}$$
(7.8)
Combining (7.6),  (7.7), and (7.8) we obtain
$$\begin{aligned} {\mathrm E}_{\mathcal {F}_t}Q_t^2(z) \ge (1+o(1)) \times \left\{ \begin{array}{ll} \frac{\alpha }{\alpha -2}\delta _t^{2}, &{} \alpha >2,\\ \alpha \delta _t^2\log (1/\delta _t), &{} \alpha =2. \end{array}\right. \end{aligned}$$
(7.9)
Now, using \(\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)>1\) and the change of variables \(y=u\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\) we compute
$$\begin{aligned}&-\int _1^{\infty }\frac{\alpha dy}{y^{\alpha +1}} \log \Big (1-\frac{y}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ) {\mathbf 1}_{\{y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t),\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)<y<\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\}}\nonumber \\&\quad \le -\int _{\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}^{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\frac{\alpha dy}{y^{\alpha +1}} \log \Big (1-\frac{y}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big ) = -\frac{\alpha }{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{\alpha }} \int _{\delta _t}^{1} u^{-\alpha -1} \log (1-u)du. \end{aligned}$$
(7.10)
Observe that
$$\begin{aligned} -\int _{\delta _t}^{1} u^{-\alpha -1} \log (1-u)du=(1+o(1))\frac{1}{\alpha -1}\delta _t^{1-\alpha }. \end{aligned}$$
(7.11)
Combining (7.6), (7.10) and (7.11) we get
$$\begin{aligned} \big |{\mathrm E}_{\mathcal {F}_t}Q_t(z)\big | \le (1+o(1)) \frac{\alpha }{\alpha -1}\delta _t. \end{aligned}$$
(7.12)
Combining (7.9) and (7.12) we obtain
$$\begin{aligned} \text {Var}_{\mathcal {F}_t} Q_t(z) \ge (1+o(1)) \times \left\{ \begin{array}{ll} c(\alpha )\delta _t^{2}, &{} \alpha >2,\\ \alpha \delta _t^2\log (1/\delta _t), &{} \alpha =2. \end{array}\right. \end{aligned}$$
(7.13)
where \(c(\alpha )=\frac{\alpha }{(\alpha -2)(\alpha -1)^2}\). To prove the first result, it remains to notice that \(|\mathcal {N}_t|>\delta _t^{-\alpha }/\log \log (1/\delta _t)\) on the event \(\mathcal {E}_t^{[2,\infty )}\), and that \(\delta _t^{2-\alpha }/\log \log (1/\delta _t)\rightarrow \infty \) if \(\alpha >2\) and \(\delta _t^{2-\alpha }\log (1/\delta _t)/\log \log (1/\delta _t)\rightarrow \infty \) if \(\alpha =2\). The second result follows immediately from (7.12). \(\square \)
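For the reader's convenience, in the case \(\alpha >2\) the constant \(c(\alpha )\) in (7.13) arises from \(\text {Var}_{\mathcal {F}_t}Q_t(z)={\mathrm E}_{\mathcal {F}_t}Q_t^2(z)-\big ({\mathrm E}_{\mathcal {F}_t}Q_t(z)\big )^2\) together with (7.9) and (7.12), via the elementary identity
$$\begin{aligned} \frac{\alpha }{\alpha -2}-\Big (\frac{\alpha }{\alpha -1}\Big )^2 =\frac{\alpha \big [(\alpha -1)^2-\alpha (\alpha -2)\big ]}{(\alpha -2)(\alpha -1)^2} =\frac{\alpha }{(\alpha -2)(\alpha -1)^2}=c(\alpha ), \end{aligned}$$
using \((\alpha -1)^2-\alpha (\alpha -2)=1\).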
The final step is to apply a central limit theorem to show that the fluctuations due to the sites in \(\mathcal {N}_t\) are in the Gaussian universality class. For each \(z\in \mathcal {N}_t\), denote
$$\begin{aligned} V_t(z)=\frac{Q_t(z)-{\mathrm E}_{\mathcal {F}_t}Q_t(z)}{\sqrt{\text {Var}_{\mathcal {F}_t}Q_t}} \, , \quad V_t=\sum _{z\in \mathcal {N}_t}V_t(z), \end{aligned}$$
(7.14)
and denote by
$$\begin{aligned} F_{V_t}(x)={\mathrm {Prob}}_{\mathcal {F}_t}\big (V_t\le x\big ) \end{aligned}$$
the conditional distribution function of \(V_t\).

Proposition 7.3

As \(t \rightarrow \infty \),
$$\begin{aligned} \sup _{x \in \mathbb {R}} |F_{V_t} (x) - \Phi (x) | {\mathbf 1}_{\mathcal {E}_t \cap \mathcal {E}_t^{[2,\infty )}} \rightarrow 0 \quad \text {almost surely} , \end{aligned}$$
where \(\Phi \) denotes the distribution function of a standard normal random variable.

Proof

This result follows from an application of the central limit theorem that we state and prove in “Appendix B”. It remains to verify that the conditions of the theorem are satisfied.

First note that, conditionally on \(\mathcal {F}_t\), the random variables \(V_t(z)\), \(z \in \mathcal {N}_t\), are independent. Moreover, by construction, for each \(t>0\) and \(z \in \mathcal {N}_t\),
$$\begin{aligned} {\mathrm E}_{\mathcal {F}_t}V_t(z) = 0 \quad \text {and} \quad \sum _{z \in \mathcal {N}_t} {\mathrm E}_{\mathcal {F}_t} V_t^2(z) = 1 \quad \text {almost surely.} \end{aligned}$$
Hence it remains to verify the Lindeberg condition: for each \(\varepsilon > 0\),
$$\begin{aligned} \sum _{z \in \mathcal {N}_t} {\mathrm E}_{\mathcal {F}_t}\big [ V_t^2(z) {\mathbf 1}_{\{|V_t(z)| \ge \varepsilon \}} \big ] {\mathbf 1}_{\mathcal {E}_t \cap \mathcal {E}_t^{[2,\infty )}} \rightarrow 0 \quad \text {almost surely}. \end{aligned}$$
(7.15)
For the rest of the proof, assume that the event \(\mathcal {E}_t \cap \mathcal {E}_t^{[2,\infty )}\) holds, and note that, by (7.14),
$$\begin{aligned} Q_t(z)={\mathrm E}_{\mathcal {F}_t}Q_t(z)+V_t(z)\sqrt{\text {Var}_{\mathcal {F}_t}Q_t}. \end{aligned}$$
Since \(Q_t(z)\) and \({\mathrm E}_{\mathcal {F}_t}Q_t(z)\) are almost surely either both non-negative or both non-positive, we obtain, using that \(\text {Var}_{\mathcal {F}_t} Q_t\) diverges and that \({\mathrm E}_{\mathcal {F}_t}Q_t(z)\) tends to zero by Proposition 7.2,
$$\begin{aligned} \big \{|V_t(z)| \ge \varepsilon \big \} \subseteq \big \{|Q_t(z)|\ge \varepsilon \sqrt{\text {Var}_{\mathcal {F}_t}Q_t}\big \}. \end{aligned}$$
Hence
$$\begin{aligned}&\sum _{z \in \mathcal {N}_t} {\mathrm E}_{\mathcal {F}_t} \big [V_t^2(z){\mathbf 1}_{\{ |V_t(z)| \ge \varepsilon \}} \big ]\nonumber \\&\qquad \le \frac{1}{\text {Var}_{\mathcal {F}_t}Q_t} \sum _{z \in \mathcal {N}_t} {\mathrm E}_{\mathcal {F}_t} \Big [\big (Q_t(z)-{\mathrm E}_{\mathcal {F}_t}Q_t(z)\big )^2{\mathbf 1}_{\big \{ |Q_t(z)| \ge \varepsilon \sqrt{\text {Var}_{\mathcal {F}_t}Q_t}\big \}} \Big ] \end{aligned}$$
(7.16)
Combining this with (7.1), we see that in order to prove (7.15) it suffices to show that
$$\begin{aligned} \frac{{\mathrm E}_{\mathcal {F}_t} \Big [\big (Q_t(z)-{\mathrm E}_{\mathcal {F}_t}Q_t(z)\big )^2{\mathbf 1}_{\big \{ |Q_t(z)| \ge \varepsilon \sqrt{\text {Var}_{\mathcal {F}_t}Q_t}\big \}} \Big ] }{\text {Var}_{\mathcal {F}_t}Q_t(z)}\rightarrow 0 \end{aligned}$$
(7.17)
uniformly in z almost surely.
Denote
$$\begin{aligned} \nu _t^{\varepsilon }=\exp \big \{-\varepsilon \sqrt{\text {Var}_{\mathcal {F}_t}Q_t}\big \}. \end{aligned}$$
Then
$$\begin{aligned} \big \{ |Q_t(z)| \ge \varepsilon \sqrt{\text {Var}_{\mathcal {F}_t}Q_t}\big \} =\big \{\xi (z)>(1-\nu _t^{\varepsilon })\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\big \} \subset \big \{\xi (z)>\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\big \} \end{aligned}$$
since \(\nu _t^{\varepsilon }\rightarrow 0\) almost surely on the event \(\mathcal {E}_t \cap \mathcal {E}_t^{[2,\infty )}\) by Proposition 7.2. Similarly to the proof of Proposition 7.2 we use the change of variables \(y=u\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\) to compute, for \(k\in \{0,1,2\}\),
$$\begin{aligned}&\int _1^{\infty }\frac{\alpha }{y^{\alpha +1}} \Big |\log \Big (1-\frac{y}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big )\Big |^k\\&\quad {\mathbf 1}_{\big \{y-\frac{|z|}{t}\log y<\Psi _t(Z^{{\scriptscriptstyle {({1}})}}_t),\delta _t\xi (Z^{{\scriptscriptstyle {({1}})}}_t)<y<\xi (Z^{{\scriptscriptstyle {({1}})}}_t), y>(1-\nu _t^{\varepsilon })\xi (Z^{{\scriptscriptstyle {({1}})}}_t)\big \}}dy\nonumber \\&\quad \le \int _{(1-\nu _t^{\varepsilon })\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}^{\infty }\frac{\alpha }{y^{\alpha +1}} \Big |\log \Big (1-\frac{y}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)}\Big )\Big |^k dy\nonumber \\&\quad = \frac{\alpha }{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{\alpha }} \int _{1-\nu _t^{\varepsilon }}^{\infty } u^{-\alpha -1} |\log (1-u)|^kdu \sim \frac{c_k}{\xi (Z^{{\scriptscriptstyle {({1}})}}_t)^{\alpha }} \end{aligned}$$
for some constants \(c_k>0\). By the second part of Proposition 7.2 and using (7.6) we obtain
$$\begin{aligned} {\mathrm E}_{\mathcal {F}_t} \Big [\big (Q_t(z)-{\mathrm E}_{\mathcal {F}_t}Q_t(z)\big )^2{\mathbf 1}_{\big \{ |Q_t(z)| \ge \varepsilon \sqrt{\text {Var}_{\mathcal {F}_t}Q_t}\big \}} \Big ] \sim c_2\delta _t^{\alpha } \end{aligned}$$
uniformly in z almost surely. Combining this with (7.13) we arrive at (7.17). \(\square \)
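The mechanism behind Proposition 7.3 can be illustrated numerically: a self-normalised sum of independent, non-identically distributed, centred variables satisfying the Lindeberg condition is uniformly close to Gaussian. The following sketch uses bounded uniform summands with arbitrary, illustrative scales (nothing here is taken from the model itself):

```python
import math
import random

def phi(x):
    # standard normal distribution function, as in Proposition 7.3
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(0)
n, trials = 400, 2000
# independent, non-identically distributed, centred summands:
# uniform on [-a_i, a_i] with mildly varying (illustrative) scales a_i
scales = [1.0 + 0.5 * math.sin(i) for i in range(n)]
total_var = sum(a * a / 3.0 for a in scales)  # Var of U[-a, a] is a^2/3

samples = sorted(
    sum(random.uniform(-a, a) for a in scales) / math.sqrt(total_var)
    for _ in range(trials)
)

# Kolmogorov distance between the empirical distribution function and Phi
ks = max(
    max(abs(i / trials - phi(x)), abs((i + 1) / trials - phi(x)))
    for i, x in enumerate(samples)
)
print(ks)  # small: the self-normalised sum is close to standard normal
```

The Kolmogorov distance is of the order of the Monte Carlo and Berry–Esseen errors, illustrating the uniform convergence to \(\Phi \) asserted in the proposition.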

We are now ready to complete the proof of Theorem 1.3. The point is that, since the fluctuations due to \(\mathcal {N}_t\) are unbounded and lie in the Gaussian universality class, they place negligible probability mass on any bounded scale, and the result follows.

7.1 Completion of the proof of Theorem 1.3

Let \(c>0\). As a direct corollary of Theorem 1.1,
$$\begin{aligned} {\mathrm {Prob}}\Big (\Big \{\Big |\log \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)}\Big |<c\Big \}\cap \mathfrak {D}^c_t \Big )\rightarrow 0, \end{aligned}$$
so it remains to show the convergence on the event \(\mathfrak {D}_t\).
Since \(\text {Prob}(\mathfrak {D}_t)\not \rightarrow 0\) by Theorem 1.1 and \(\text {Prob}(\mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )})\rightarrow 1\) by Propositions 3.3 and 5.6, it suffices to show that, as \(t\rightarrow \infty \),
$$\begin{aligned} {\mathrm {Prob}}\Big (\Big \{\Big |\log \frac{u(t,Z^{{\scriptscriptstyle {({1}})}}_t)}{u(t,-Z^{{\scriptscriptstyle {({1}})}}_t)}\Big |<c\Big \}\cap \mathfrak {D}_t\cap \mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\Big )\rightarrow 0. \end{aligned}$$
By Proposition 7.1 it is then enough to prove that, as \(t\rightarrow \infty \),
$$\begin{aligned} {\mathrm {Prob}}\big (\big \{|Q_t-P_t|<2c\big \}\cap \mathfrak {D}_t\cap \mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\big )\rightarrow 0, \end{aligned}$$
for which, in turn, it suffices to show that
$$\begin{aligned} \mathrm {E}\Big [{\mathrm {Prob}}_{\mathcal {F}_t} \big \{|Q_t-P_t|<2c\big \} {\mathbf 1}_{\mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}}\Big ]\rightarrow 0. \end{aligned}$$
Observe that, even though the event \(\mathcal {E}_t\) does not belong to \(\mathcal {F}_t\), we can take it out of \({\mathrm {Prob}}_{\mathcal {F}_t}\) by Proposition 5.6, since the function under \(\mathrm {E}\) is bounded. Now, by the dominated convergence theorem, it remains to prove that
$$\begin{aligned} {\mathrm {Prob}}_{\mathcal {F}_t} \big \{|Q_t-P_t|<2c\big \}\rightarrow 0 \end{aligned}$$
(7.18)
almost surely on \(\mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\). To do so, assume that \(\mathcal {E}_t\cap \mathcal {E}_t^{[2,\infty )}\) holds and observe that
$$\begin{aligned} Q_t=V_t\sqrt{\text {Var}_{\mathcal {F}_t}Q_t}+\mathrm {E}_{\mathcal {F}_t}Q_t. \end{aligned}$$
Hence (7.18) is equivalent to showing that, almost surely,
$$\begin{aligned} {\mathrm {Prob}}_{\mathcal {F}_t} \Big \{V_t\in \big [\text {Var}_{\mathcal {F}_t}Q_t\big ]^{-\frac{1}{2}}\big (P_t-\mathrm {E}_{\mathcal {F}_t}Q_t-2c,P_t-\mathrm {E}_{\mathcal {F}_t}Q_t+2c\big )\Big \}\rightarrow 0 \end{aligned}$$
(7.19)
Since \(P_t\), \(\mathrm {E}_{\mathcal {F}_t}Q_t\), and \(\text {Var}_{\mathcal {F}_t}Q_t\) are \(\mathcal {F}_t\)-measurable, and the length of the interval in (7.19) tends to zero by Proposition 7.2, (7.19) now follows from Proposition 7.3. \(\square \)


Copyright information

© The Author(s) 2017

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Mathematical Institute, University of Oxford, Oxford, UK
  2. Department of Statistics, University College London, London, UK
  3. Department of Mathematics, University College London, London, UK
  4. Department of Economics, Mathematics and Statistics, Birkbeck, University of London, London, UK
