
Universal tail profile of Gaussian multiplicative chaos



In this article we study the tail probability of the mass of Gaussian multiplicative chaos. With the novel use of a Tauberian argument and Goldie’s implicit renewal theorem, we provide a unified approach to general log-correlated Gaussian fields in arbitrary dimension and derive precise first order asymptotics of the tail probability, resolving a conjecture of Rhodes and Vargas. The leading order is described by a universal constant that captures the generic property of Gaussian multiplicative chaos, and may be seen as the analogue of the Liouville unit volume reflection coefficients in higher dimensions.


Gaussian multiplicative chaos (GMC) was first constructed by Kahane [22] in an attempt to provide a mathematical framework for the Kolmogorov–Obukhov–Mandelbrot model of energy dissipation in turbulence. The theory of (subcritical) GMC consists of defining and studying, for each \(\gamma \in (0, \sqrt{2d})\), the random measure

$$\begin{aligned} M_{\gamma }(dx) = e^{\gamma X(x) - \frac{\gamma ^2}{2}\mathbb {E}[X(x)^2]} dx, \end{aligned}$$

where \(X(\cdot )\) is a (centred) log-correlated Gaussian field on some domain \(D \subset \mathbb {R}^d\). The expression (1) is formal because \(X(\cdot )\) is not defined pointwise; instead it is only a random generalised function. It is now, however, well understood that \(M_\gamma \) may be defined via a limiting procedure of the form

$$\begin{aligned} M_{\gamma }(dx) = \lim _{\epsilon \rightarrow 0} M_{\gamma , \epsilon }(dx) = \lim _{\epsilon \rightarrow 0} e^{\gamma X_\epsilon (x) - \frac{\gamma ^2}{2} \mathbb {E}[X_\epsilon (x)^2]} dx \end{aligned}$$

where \(X_{\epsilon }(\cdot )\) is some suitable sequence of smooth Gaussian fields that converges to \(X(\cdot )\) as \(\epsilon \rightarrow 0\). We refer the reader to e.g. [6] for more details about the construction.
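As a concrete, purely illustrative instance of the limiting construction above, a log-correlated field on the circle can be realised by a truncated random Fourier series: since \(-\log |e^{ix} - e^{iy}| = \sum _{k \ge 1} \cos (k(x-y))/k\), the field \(X_K(x) = \sum _{k=1}^K (A_k \cos kx + B_k \sin kx)/\sqrt{k}\) with i.i.d. standard Gaussian coefficients has the right covariance as \(K \rightarrow \infty \), the truncation level K playing the role of \(\epsilon \). The following Python sketch (function names and discretisation parameters are ours, not from this paper) samples the normalised mass over \((0, 2\pi )\), whose expectation is \(2\pi \) by construction:

```python
import math
import random

# Truncated random Fourier series for the log-correlated field on (0, 2*pi):
#   X_K(x) = sum_{k=1}^{K} (A_k cos(kx) + B_k sin(kx)) / sqrt(k),
# whose covariance sum_{k<=K} cos(k(x-y))/k converges to -log|e^{ix} - e^{iy}|.

def sample_field(K, rng):
    a = [rng.gauss(0, 1) for _ in range(K)]
    b = [rng.gauss(0, 1) for _ in range(K)]
    def X(x):
        return sum((a[k] * math.cos((k + 1) * x) + b[k] * math.sin((k + 1) * x))
                   / math.sqrt(k + 1) for k in range(K))
    return X

def gmc_mass(gamma, K, n_grid=400, rng=None):
    rng = rng or random.Random(0)
    X = sample_field(K, rng)
    var = sum(1.0 / k for k in range(1, K + 1))   # E[X_K(x)^2], x-independent here
    dx = 2 * math.pi / n_grid
    # Riemann sum for M_{gamma,K}((0, 2*pi)) = int exp(gamma X_K - gamma^2 var / 2) dx
    return sum(math.exp(gamma * X(i * dx) - 0.5 * gamma ** 2 * var)
               for i in range(n_grid)) * dx

# deterministic sanity check of the covariance identity at theta = 1:
K, theta = 5000, 1.0
partial = sum(math.cos(k * theta) / k for k in range(1, K + 1))
exact = -math.log(abs(2 * math.sin(theta / 2)))   # = -log|e^{i theta} - 1|
assert abs(partial - exact) < 1e-2

mass = gmc_mass(gamma=0.5, K=200)
assert 0 < mass < float("inf")
```

The covariance check is deterministic; the sampled mass is a crude Riemann-sum approximation whose quality degrades as \(\gamma \) approaches the critical value.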

In recent years the theory of GMC has attracted a lot of attention in the mathematics and physics communities due to its wide array of applications – it plays a central role in random planar geometry [15, 16] and the mathematical formulation of Liouville conformal field theory (LCFT) [13], appears as a universal limit in other areas such as random matrix theory [7, 24, 25, 32], and is even used as a model for the Riemann zeta function in probabilistic number theory [31] and for stochastic volatility in quantitative finance [14].

In spite of the importance of the theory, not much is known about the distributional properties of GMC. For instance, given a bounded open set \(A \subset D\), one may ask what the exact distribution of \(M_{\gamma }(A)\) is, but nothing is known except in very specific cases where specialised LCFT tools are applicable [23, 26, 27]. In the general setting, our knowledge is still limited to the criterion for finite moments (Lemma 5), and weak properties such as the existence of a probability density function.

Main results

Define \(M_{\gamma , g}(dx) = g(x) M_{\gamma }(dx)\) where \(g(x) \ge 0\) is continuous on \({\overline{D}}\). The goal of this paper is to derive the leading order asymptotics for

$$\begin{aligned} \mathbb {P}\left( M_{\gamma , g}(A) > t\right) \end{aligned}$$

for non-trivial bounded open sets \(A \subset D\) as \(t \rightarrow \infty \). This may be seen as a first step towards the goal of understanding the full distribution of \(M_{\gamma , g}(A)\), and will also highlight a new universality phenomenon of GMC. It is a standard fact in the literature that

$$\begin{aligned} \mathbb {E}\left[ M_{\gamma , g}(A)^p\right]< \infty \quad \Leftrightarrow \quad p < \frac{2d}{\gamma ^2} \end{aligned}$$

and this suggests the possibility that the right tail (2) may satisfy a power law with exponent \(2d/\gamma ^2\). Our main result confirms this behaviour.

Theorem 1

Let \(\gamma \in (0, \sqrt{2d})\), \(Q = \frac{\gamma }{2} + \frac{d}{\gamma }\) and \(M_{\gamma , g}\) be the subcritical GMC associated with the Gaussian field \(X(\cdot )\) with covariance

$$\begin{aligned} \mathbb {E}[X(x) X(y)] = -\log |x-y| + f(x, y), \qquad \forall x, y \in D \end{aligned}$$

where f is a continuous function on \({\overline{D}} \times {\overline{D}}\). Suppose f can be decomposed into

$$\begin{aligned} f(x, y) = f_+(x, y) - f_-(x, y) \end{aligned}$$

where \(f_+, f_-\) are covariance kernels for some continuous Gaussian fields on \({\overline{D}}\). Then there exists some constant \({\overline{C}}_{\gamma , d} > 0\) independent of f and g such that for any bounded open set \(A \subset D\),

$$\begin{aligned} \mathbb {P}\left( M_{\gamma , g}(A) > t\right) \overset{t \rightarrow \infty }{=}&\left( \int _A e^{\frac{2d}{\gamma }(Q - \gamma ) f(v, v)} g(v)^{\frac{2d}{\gamma ^2}}dv\right) \frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma )+1} \frac{{\overline{C}}_{\gamma , d}}{t^{\frac{2d}{\gamma ^2}}} \nonumber \\&+ o(t^{-\frac{2d}{\gamma ^2}}). \end{aligned}$$

Remark 1

The continuity assumption on g may be relaxed. As suggested by one of the anonymous referees, it would be interesting to consider more general densities of the form

$$\begin{aligned} g(x) = \left[ \prod _{i=1}^n |x-x_i|^{-\gamma \alpha _i}\right] {\overline{g}}(x) \end{aligned}$$

where \({\overline{g}}(x) \ge 0\) is continuous on \({{\overline{D}}}\), \(x_1, \dots , x_n \in {\overline{D}}\) are n distinct points and \(\alpha _i \in (0, \frac{\gamma }{2})\) for \(i=1, \dots , n\), as they are related to the study of boundary reflection coefficients in Liouville conformal field theory. The asymptotics (5) in Theorem 1 remains true under this extended setting, and the approach in this article will still work up to small changes which will be explained in Remark 6. Note that the assumption on the strength of singularity is crucial: if \({\overline{g}}\) is bounded away from zero and one of the \(\alpha _i\)’s lies in \((\frac{\gamma }{2}, Q)\), then the tail asymptotics for \(M_{\gamma , g}(D)\) becomes entirely different (see Remark 5 and Proposition 1) as the behaviour is dominated by the strongest singularity. The marginal case where \(\max _i \alpha _i = \frac{\gamma }{2}\), however, is not covered in either scenario and is left for future investigation.

While the decomposition condition (4) may look intractable at first glance, it is implied by a more convenient criterion regarding higher regularity of f. This is satisfied, for instance, by the Liouville quantum gravity measure in dimension 2, i.e.

$$\begin{aligned} \mu _{\gamma }^{\mathrm {LQG}}(dx) = R(x; D)^{\frac{\gamma ^2}{2}} M_{\gamma }(dx) \end{aligned}$$

where \(M_{\gamma }(dx)\) is the GMC measure associated with the Gaussian free field with Dirichlet boundary conditions on \(\partial D\), in which case \(f(x, x) = \log R(x; D)\), where \(R(x; D)\) is the conformal radius of x in D. Such an application is not covered by any previously known results.

Corollary 1

Assume f is in the local Sobolev space \(H_{\mathrm {loc}}^{s}(D \times D)\) (see Sect. 2.2 for a definition) for some \(s > d\) instead of the decomposition condition (4) on f. Then the tail asymptotics (5) holds for any bounded open set \(A \subset D\) such that \({\overline{A}} \subset D\).


Proof

Since we can always find another open set \(A'\) such that \({\overline{A}} \subset A' \subset \overline{A'} \subset D\), the decomposition condition on f, when restricted to \({\overline{A}}\), holds by [21, Lemma 3.2] and Theorem 1 applies immediately. \(\square \)

The constant \({\overline{C}}_{\gamma , d}\) that appears in the tail asymptotics (5) has various probabilistic representations which are summarised in Corollary 5, and we shall call it the reflection coefficient of Gaussian multiplicative chaos as it may be seen as the d-dimensional analogue of the reflection coefficient in Liouville conformal field theory (LCFT), see Appendix A. Based on existing exact integrability results, we can even provide an explicit expression for \({\overline{C}}_{\gamma , d}\) when \(d=1\) and \(d=2\).

Corollary 2

(cf. [28, Sect. 4]) The constant \({\overline{C}}_{\gamma , d}\) in (5) is given by

$$\begin{aligned} {\overline{C}}_{\gamma , d} = \left\{ \begin{array}{lr} \frac{(2\pi )^{\frac{2}{\gamma }(Q-\gamma )}}{\gamma (Q - \gamma ) \varGamma \left( \gamma (Q - \gamma )\right) ^{\frac{2}{\gamma ^2}}}, &{} d = 1, \\ - \frac{\left( \pi \varGamma (\frac{\gamma ^2}{4})/\varGamma (1-\frac{\gamma ^2}{4})\right) ^{\frac{2}{\gamma }(Q - \gamma )}}{\frac{2}{\gamma }(Q- \gamma )} \frac{\varGamma (-\frac{\gamma }{2}(Q-\gamma ))}{\varGamma (\frac{\gamma }{2}(Q-\gamma ))\varGamma (\frac{2}{\gamma }(Q-\gamma ))}, &{} d= 2. \end{array}\right. \end{aligned}$$


Proof

The \(d=2\) case follows from [28] which proves (5) when \(f \equiv 0\) and \(g \equiv 1\). By Theorem 1, our constant \({\overline{C}}_{\gamma , d}\) is independent of f and therefore coincides with the Liouville unit volume reflection coefficient evaluated at \(\gamma \), the value of which is given by the formula in (7).

For \(d=1\), we use the Fyodorov-Bouchaud formula [19] which was verified in [27]. Consider a log-correlated Gaussian field on \((0, 2\pi )\) with covariance

$$\begin{aligned} \mathbb {E}[X(x) X(y)]= & {} -\log |e^{ix} - e^{iy}| \end{aligned}$$
$$\begin{aligned}&\overset{y \rightarrow x}{\sim }&-\log |x-y| - \log \left| 1 +O(x-y)\right| . \end{aligned}$$

If we write \(M_{\gamma }(dx)\) as the GMC measure associated to (8) on \(D = (0, 2\pi )\), then \(M_{\gamma }((0, 2\pi ))\) corresponds to the random variable \(2\pi Y_{\sqrt{2}\gamma }\) in [27], which is known to be distributed as

$$\begin{aligned} 2\pi Y_{\sqrt{2}\gamma } \sim \frac{2\pi }{\varGamma (1 - \frac{\gamma ^2}{2})} \mathrm {Exp}(1)^{-\frac{\gamma ^2}{2}} \end{aligned}$$

and hence satisfies

$$\begin{aligned}&\mathbb {P}\left( 2\pi Y_{\sqrt{2}\gamma } > t \right) = \mathbb {P}\left( \mathrm {Exp}(1) < \left( \frac{2\pi }{\varGamma (1 - \frac{\gamma ^2}{2})t}\right) ^{\frac{2}{\gamma ^2}}\right) \nonumber \\&\quad = 1 - \exp \left\{ -\left( \frac{2\pi }{\varGamma (1 - \frac{\gamma ^2}{2})t}\right) ^{\frac{2}{\gamma ^2}}\right\} \overset{t \rightarrow \infty }{\sim } \frac{(2\pi )^{\frac{2}{\gamma ^2}}}{\varGamma (1-\frac{\gamma ^2}{2})^{\frac{2}{\gamma ^2}}t^{\frac{2}{\gamma ^2}}}. \end{aligned}$$
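The last asymptotic step above simply replaces \(1 - e^{-x}\) by x for small x. This can be checked numerically; the following sketch (standard library only; the choice \(\gamma = 1\) is arbitrary) compares the exact tail of \(2\pi Y_{\sqrt{2}\gamma }\) with its leading-order approximation:

```python
import math

g = 1.0                                   # a sample gamma in (0, sqrt(2)) for d = 1
c = 2 * math.pi / math.gamma(1 - g ** 2 / 2)
p = 2 / g ** 2                            # tail exponent 2 / gamma^2

def exact_tail(t):
    # P(2*pi*Y > t) = 1 - exp(-(c/t)^p), from the Fyodorov-Bouchaud law
    return 1 - math.exp(-((c / t) ** p))

def asympt_tail(t):
    # leading-order term (2*pi)^p / (Gamma(1 - gamma^2/2)^p * t^p)
    return (c / t) ** p

ratios = [exact_tail(t) / asympt_tail(t) for t in (1e1, 1e2, 1e3)]
assert ratios[0] < ratios[1] < ratios[2]  # monotone approach to 1
assert abs(ratios[-1] - 1) < 1e-4         # ratio -> 1 as t -> infinity
```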

We want to compare the above asymptotics to Theorem 1, but the covariance (8) considered here does not exactly fall into our framework due to the periodicity of the complex exponential: we have

$$\begin{aligned} \mathbb {E}[X(x) X(y)] \rightarrow \infty \qquad \text {if} \qquad x \rightarrow 0^+, y \rightarrow 2\pi ^- \end{aligned}$$

which implies that the function f(x, y) in (3) is not continuous on \({\overline{D}} \times {\overline{D}}\). To circumvent the issue, we fix \(\delta \in (0,1)\) and \(\epsilon \in (0, 1)\) and consider the bounds

$$\begin{aligned} \begin{aligned}&\mathbb {P}\left( M_{\gamma }\left( (0, 2\pi - \epsilon )\right)> t\right) \le \mathbb {P}\left( M_{\gamma }\left( (0, 2\pi )\right)> t\right) \\&\quad \le \mathbb {P}\left( M_{\gamma }\left( (0, 2\pi - \epsilon )\right)> (1-\delta ) t \right) + \mathbb {P}\left( M_{\gamma }\left( (2\pi - \epsilon , 2\pi )\right) > \delta t \right) . \end{aligned} \end{aligned}$$

Now the field (8) restricted to each of the intervals \((0, 2\pi - \epsilon )\) and \((2\pi - \epsilon , 2\pi )\) falls into our framework (3) with \(f(x, x) \equiv 0\) because of (9). Then by Theorem 1

$$\begin{aligned}&\lim _{t \rightarrow \infty } t^{\frac{2}{\gamma ^2}}\mathbb {P}\left( M_{\gamma }\left( (0, 2\pi - \epsilon )\right)> t\right) \\&\quad = \left( \int _0^{2\pi - \epsilon } dv\right) \frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma )+1} {\overline{C}}_{\gamma , 1} = (2\pi - \epsilon ) \left( 1 -\frac{\gamma ^2}{2}\right) {\overline{C}}_{\gamma , 1},\\&\lim _{t \rightarrow \infty } t^{\frac{2}{\gamma ^2}}\left[ \mathbb {P}\left( M_{\gamma }\left( (0, 2\pi - \epsilon )\right)> (1-\delta ) t \right) + \mathbb {P}\left( M_{\gamma }\left( (2\pi - \epsilon , 2\pi )\right) > \delta t \right) \right] \\&\quad = \left( \int _0^{2\pi - \epsilon } (1-\delta )^{-\frac{2}{\gamma ^2}}dv + \int _{2\pi - \epsilon }^{2\pi } \delta ^{-\frac{2}{\gamma ^2}}dv\right) \frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma )+1} {\overline{C}}_{\gamma , 1}\\&\quad = \left[ (2\pi - \epsilon ) (1-\delta )^{-\frac{2}{\gamma ^2}} + \epsilon \delta ^{-\frac{2}{\gamma ^2}}\right] \left( 1 - \frac{\gamma ^2}{2}\right) {\overline{C}}_{\gamma , 1}. \end{aligned}$$

Letting \(\epsilon \rightarrow 0^+\) and then \(\delta \rightarrow 0^+\), the bounds (11) suggest that

$$\begin{aligned} \lim _{t \rightarrow \infty } t^{\frac{2}{\gamma ^2}} \mathbb {P}\left( M_{\gamma }((0, 2\pi )) > t \right) = 2\pi \left( 1 - \frac{\gamma ^2}{2}\right) {\overline{C}}_{\gamma , 1}. \end{aligned}$$

By equating the above coefficient to that in (10), we arrive at the formula for \({\overline{C}}_{\gamma , 1}\) in (7) and this concludes the proof. \(\square \)
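Explicitly, the matching gives \({\overline{C}}_{\gamma , 1} = (2\pi )^{\frac{2}{\gamma ^2}-1} / [(1 - \frac{\gamma ^2}{2}) \varGamma (1 - \frac{\gamma ^2}{2})^{\frac{2}{\gamma ^2}}]\), which agrees with the \(d=1\) case of (7) because \(\gamma (Q - \gamma ) = 1 - \frac{\gamma ^2}{2}\) and \(\frac{2}{\gamma }(Q-\gamma ) = \frac{2}{\gamma ^2}-1\). A numerical cross-check of the two expressions (an illustration only, not part of the proof; function names are ours):

```python
import math

def C1_from_matching(g):
    # coefficient matching between (10) and the epsilon-delta sandwich:
    # 2*pi*(1 - g^2/2) * C = (2*pi)^(2/g^2) / Gamma(1 - g^2/2)^(2/g^2)
    p = 2 / g ** 2
    return (2 * math.pi) ** (p - 1) / ((1 - g ** 2 / 2) * math.gamma(1 - g ** 2 / 2) ** p)

def C1_from_formula(g):
    # the d = 1 expression in (7)
    Q = g / 2 + 1 / g
    return (2 * math.pi) ** ((2 / g) * (Q - g)) / (
        g * (Q - g) * math.gamma(g * (Q - g)) ** (2 / g ** 2))

for g in (0.3, 0.8, 1.2):
    assert abs(C1_from_matching(g) - C1_from_formula(g)) < 1e-9 * C1_from_formula(g)
```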

Previous work and our approach

Despite being a very fundamental question, the tail probability of GMC has not been investigated much in the literature. To our knowledge, the first result in this direction was established by Barral and Jin [2] for the GMC associated with the exact scale invariant kernel \(\mathbb {E}[X(x) X(y)] = -\log |x-y|\) on the unit interval [0, 1]:

$$\begin{aligned} \mathbb {P}(M_\gamma ([0,1]) > t) = \frac{C_*}{t^{\frac{2}{\gamma ^2}}} + o(t^{-\frac{2}{\gamma ^2}}) \end{aligned}$$

where the constant \(C_* > 0\) is given by

$$\begin{aligned} C_* = \frac{2\gamma ^2}{2 - \gamma ^2} \frac{\mathbb {E}\Big [ M_{\gamma }([0,1])^{\frac{2}{\gamma ^2}-1} M_\gamma ([0, \frac{1}{2}]) - M_\gamma ([0, \frac{1}{2}])^{\frac{2}{\gamma ^2}}\Big ]}{\log 2}. \end{aligned}$$

The issue with their approach is that it relies heavily on the exact scale invariance of the kernel and the symmetry of the unit interval in order to derive a stochastic fixed point equation, and it is not clear how their method may be generalised.

A recent paper [28] by Rhodes and Vargas, which considers the whole-plane Gaussian free field (GFF) restricted to the unit disc (i.e. \(\mathbb {E}[X(x) X(y)] = -\log |x-y|\) on \(D = \{x \in \mathbb {R}^2: |x| < 1\}\)), offers a new perspective on the tail problem. They introduce a new localisation trick

$$\begin{aligned} \mathbb {P}\left( M_{\gamma , g}(A)> t\right) =&\int _A \mathbb {E}\left[ \frac{1_{\{M_{\gamma , g}(v, A) > t\}}}{M_{\gamma , g}(v, A)}\right] g(v)dv, \quad \\ M_{\gamma , g}(v, A) :=&\int _{A} \frac{e^{\gamma ^2 f(x, v)} M_{\gamma , g}(dx)}{|x-v|^{\gamma ^2}} \end{aligned}$$

which effectively pins down the \(\gamma \)-thick points of \(X(\cdot )\), allowing one to express the dependence of the leading tail coefficient on the test set A in a very explicit way, and in the end they are able to obtain (5) when f only consists of the positive definite part, i.e. \(f_-(x, y) \equiv 0\), in dimension \(d \le 2\).

Our strategy is partly inspired by some ideas from the aforementioned works, with several new inputs and various fundamental changes in order to treat a much more general setting that was not previously accessible. Let us defer the details of our proof to Sect. 3 and simply highlight the differences between our approach and earlier attempts here.

  (i)

    The use of a Tauberian theorem: we translate the problem of the asymptotics for \(\mathbb {P}(M_{\gamma , g}(A)>t)\) as \(t \rightarrow \infty \) to the equivalent problem of the asymptotics of

    $$\begin{aligned} \mathbb {E}[e^{-\lambda / M_{\gamma , g}(A)}] = \int _A\mathbb {E}\left[ \frac{1}{M_{\gamma , g}(v, A)} e^{-\lambda /M_{\gamma , g}(v, A)}\right] g(v) dv \end{aligned}$$

    as \(\lambda \rightarrow \infty \) (here the equality comes from a similar localisation trick). Unlike the approach in [28], the expectation we deal with does not involve any indicator functions, which, combined with our dominated convergence-based method, makes our analysis (such as the “removal of non-singularity” step) more versatile.

  (ii)

    Gaussian interpolation: thanks to the absence of any indicator functions in

    $$\begin{aligned} \mathbb {E}\left[ \frac{1}{M_{\gamma , g}(v, A)} e^{-\lambda /M_{\gamma , g}(v, A)}\right] , \end{aligned}$$

    there is hope to reduce our problem to the case where the underlying kernel is exact (i.e. \(\mathbb {E}[X(x) X(y)] = -\log |x-y|\)). Unlike many estimates such as moment bounds in GMC, the expectation (13) we are studying here concerns a function \(F: x \mapsto x^{-1} e^{-\lambda x}\) which is neither convex nor concave. Since there is no convenient convex/concave modification of F that leaves the behaviour of the expectation as \(\lambda \rightarrow \infty \) unaffected, the popular convexity inequality (28) is not applicable, and Kahane’s full interpolation formula (27) plays an indispensable role in our analysis.

  (iii)

    The analysis of the exact kernel: without the localisation trick, the authors of [2] have to proceed by generalising Goldie’s implicit renewal theorem to a form that is applicable to \(M_{\gamma }([0,1])\), and they also need to show that the constant \(C_*\) in (12) is finite, the proof of which is not trivial. In contrast, we only need the precise asymptotics for the tail probability

    $$\begin{aligned} \mathbb {P}\left( \int _{|x| \le r} |x|^{-\gamma ^2}M_{\gamma }(dx) > t \right) \end{aligned}$$

    which follows from Goldie’s original result and a simple coupling argument.

The novel elements in our proof not only allow us to bypass many tedious computations in existing approaches, but also extend the tail result (5) in three directions, namely

  • general open test sets A: unlike [28], which requires a \(C^1\)-boundary due to intricacies in dealing with the indicator function, our result holds for any open subset A without further regularity assumptions;

  • general kernels (3): the continuity argument in [28] may treat the case where f(x, y) is positive definite in \(d=2\) but completely breaks down as soon as the negative definite part \(f_-(x, y)\) is non-trivial, whereas we circumvent this issue entirely by an extrapolation principle;

  • arbitrary dimension d: we do not make use of any special decomposition of the log-kernel \(-\log |x-y|\), unlike [2] (which requires the special cone construction in \(d=1\)) or [28] (which relies heavily on a radial/lateral decomposition of GFF in \(d=2\)), and our method allows a unified approach to all dimensions.

Theorem 1 shares the same spirit as the result in [28] in the sense that we have successfully separated the dependence on the test set A and the functions f, g from the rest of the tail coefficient, and the constant \({\overline{C}}_{\gamma , d}\) captures any remaining dependence on d and \(\gamma \) as well as the generic features of GMC. The fact that we are unable to provide an explicit formula for \({\overline{C}}_{\gamma , d}\) for \(d \ge 3\) should not be seen as a drawback of our approach – explicit expressions are known for \(d=1\) and \(d=2\) only because the constant has an LCFT interpretation, and their formulae were found (independently of the study of tail probability) by LCFT tools which do not seem to have a natural generalisation to higher dimensions at the moment.

On the relevance of the kernel decomposition

Based on the continuity assumption on f, it is always possible to decompose f into the difference of two positive definite functions: indeed

$$\begin{aligned} T_f: h(\cdot ) \mapsto \int _D f(\cdot , y) h(y) dy \end{aligned}$$

is a symmetric Hilbert-Schmidt operator that maps \(L^2(D)\) to \(L^2(D)\), and by the standard spectral theory of compact self-adjoint operators there exist \(\lambda _n \in \mathbb {R}\) and \(\phi _n \in L^2(D)\) such that \((T_f\phi _n)(x) = \lambda _n \phi _n(x)\), \(|\lambda _n| \xrightarrow {n \rightarrow \infty } 0\) and

$$\begin{aligned} f(x, y)&= \sum _{n=1}^\infty \lambda _n \phi _n(x) \phi _n(y)\\&= \underbrace{\left( \sum _{n=1}^\infty |\lambda _n| \phi _n(x) \phi _n(y)1_{\{\lambda _n > 0\}} \right) }_{=:f_+(x, y)} - \underbrace{\left( \sum _{n=1}^\infty |\lambda _n| \phi _n(x) \phi _n(y)1_{\{\lambda _n < 0\}} \right) }_{=:f_-(x, y)} \end{aligned}$$

in \(L^2(D \times D)\). Therefore, the relevant question is to determine the least regularity of \(f_\pm \) required for the power-law profile (5) to hold. Our decomposition condition (4) requires \(f_\pm \) to be kernels of some continuous Gaussian fields. As it turns out, we only use this technical assumption to obtain the following estimate (see for instance Corollary 6(ii)):

  • There exist some \(r > 0\) and \(C > 0\) such that for all \(v \in D\) and \(s \in [0,1]\)

    $$\begin{aligned} \mathbb {P}\left( \int _{B(v, r) \cap D} \frac{M_{\gamma }^s(dx)}{|x-v|^{\gamma ^2}}> t\right) \le \frac{C}{t^{\frac{2d}{\gamma ^2}-1}} \qquad \forall t > 0 \end{aligned}$$

    where \(M_{\gamma }^s(dx) = e^{\gamma Z_s(x) - \frac{\gamma ^2}{2} \mathbb {E}\left[ Z_s(x)^2\right] }dx\) is the Gaussian multiplicative chaos associated with the log-correlated field \(Z_s\) with covariance \(\mathbb {E}[Z_s(x) Z_s(y)] = -\log |x-y| + s f(x, y)\).

Inspecting the proof in Sect. 3, this is the only assumption (other than the continuity of f) we need in order to apply dominated convergence in several places (such as (55)) which ultimately yields the desired power law. In other words our decomposition condition (4) may be relaxed so long as (15) is satisfied, e.g. we may assume instead that

  • The Gaussian fields \(G_{\pm }\) associated with the kernels \(f_\pm \) satisfy

    $$\begin{aligned} \mathbb {P}\left( \sup _{x \in D} |G_{\pm }(x)| < \infty \right) > 0 \end{aligned}$$

    (see Sect. 2.1 for various implications).

All the proofs in Sect. 3 will go through without any modification to cover this slightly more general setting (which obviously includes the case where \(G_\pm \) are continuous on \({\overline{D}}\)). We choose not to phrase Theorem 1 this way because (16) is less tractable and not necessarily much more general. Indeed when \(f_\pm (x, y) = f_\pm (x-y)\) are continuous translation-invariant kernels, a classical result by Belyaev [5] states that \(G_\pm \) are either continuous or unbounded on any non-empty open set, and so (16) is equivalent to the original condition (4) in the stationary setting. We also think that the decomposition condition (4) is a very natural assumption because for any \(s \ge 0\), \(\epsilon > 0\) and symmetric function \(f(\cdot , \cdot ) \in H^s(\mathbb {R}^{2d})\), one can always find some symmetric function \({\widetilde{f}}(\cdot , \cdot ) \in C_c^\infty (\mathbb {R}^{2d})\), say by truncating a suitable basis expansion (see also [21, Lemma 2.2]), such that \(||f - {\widetilde{f}}||_{H^s(\mathbb {R}^{2d})} < \epsilon \) and that the operator \(T_{{\widetilde{f}}}\) is of finite rank; in other words, the decomposition condition (4) is satisfied by a “dense collection” of covariance kernels of the form (3).
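The eigenexpansion split \(f = f_+ - f_-\) above is easy to visualise on a finite discretisation: evaluating a symmetric kernel at finitely many points gives a symmetric matrix, and grouping its positive and negative eigenvalues produces the two positive semi-definite parts. A self-contained sketch on a two-point grid (the kernel below is an arbitrary indefinite example of our own choosing, not one from this paper):

```python
import math

# Discretise an indefinite symmetric kernel f at two points and split the
# resulting 2x2 matrix F into F_plus - F_minus with both parts positive
# semi-definite, mirroring the expansion f = f_+ - f_-.
f = lambda x, y: math.cos(3 * (x - y)) - 0.4 * x * y   # arbitrary symmetric kernel
x1, x2 = 0.5, 2.0
a, b, c = f(x1, x1), f(x1, x2), f(x2, x2)
F = [[a, b], [b, c]]

# closed-form spectral decomposition of a symmetric 2x2 matrix
mean, disc = (a + c) / 2, math.hypot((a - c) / 2, b)
lam = [mean + disc, mean - disc]                        # eigenvalues
theta = 0.5 * math.atan2(2 * b, a - c)
vecs = [(math.cos(theta), math.sin(theta)),             # orthonormal eigenvectors
        (-math.sin(theta), math.cos(theta))]

def rank_one(weight, v):
    return [[weight * v[i] * v[j] for j in range(2)] for i in range(2)]

def add(A, B, sign=1.0):
    return [[A[i][j] + sign * B[i][j] for j in range(2)] for i in range(2)]

F_plus, F_minus = [[0.0] * 2 for _ in range(2)], [[0.0] * 2 for _ in range(2)]
for l, v in zip(lam, vecs):
    if l >= 0:
        F_plus = add(F_plus, rank_one(l, v))            # positive part of spectrum
    else:
        F_minus = add(F_minus, rank_one(-l, v))         # |negative part|

assert lam[1] < 0 < lam[0]                              # the split is non-trivial here
F_rec = add(F_plus, F_minus, sign=-1.0)                 # should reconstruct F exactly
assert all(abs(F_rec[i][j] - F[i][j]) < 1e-10 for i in range(2) for j in range(2))
```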

To understand the importance of continuity at the level of the fields \(G_\pm \), let us consider the simpler situation where \(f = f_+\). We have

$$\begin{aligned} \mathbb {E}\left[ X(x) X(y)\right] = -\log |x-y| + f(x, y) \approx - \log |x-y| + f(v,v) \end{aligned}$$

on a ball of small radius \(r > 0\) centred around \(v \in A\). This says that \(X(\cdot )\) is the sum of an exact scale invariant field Y (with covariance \(\mathbb {E}[Y(x) Y(y)] = K(x, y) = -\log |x-y|\)) and an independent field \(G_+\) which locally behaves like an independent random variable \(N_v \sim {\mathcal N }(0, f(v, v))\), and this leads to

$$\begin{aligned}&\mathbb {P}\bigg ( \underbrace{\int _A \frac{e^{\gamma ^2f(x, v)}M_{\gamma , g}(dx) }{|x-v|^{\gamma ^2}}}_{=:M_{\gamma , g} (v, A)}> t\bigg )\nonumber \\&\quad \approx \mathbb {P}\bigg ( \underbrace{\int _{|x-v|\le r} \frac{e^{\gamma ^2f(x, v)}M_{\gamma , g}(dx) }{|x-v|^{\gamma ^2}}}_{=:M_{\gamma , g}(v, r)} > t\bigg ) \sim e^{\frac{2d}{\gamma }(Q-\gamma ) f(v,v)} g(v)^{\frac{2d}{\gamma ^2}-1} \frac{{\overline{C}}_{\gamma , d}}{t^{\frac{2d}{\gamma ^2}-1}} \end{aligned}$$

(see Corollary 6 and Remark 5). This allows us to interpret

$$\begin{aligned} \mathbb {P}\left( M_{\gamma , g}(A) > t\right) \sim \left( \int _A e^{\frac{2d}{\gamma }(Q - \gamma ) f(v, v)} g(v)^{\frac{2d}{\gamma ^2}}dv\right) \frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma )+1} \frac{{\overline{C}}_{\gamma , d}}{t^{\frac{2d}{\gamma ^2}}} \end{aligned}$$

in the following way: if \(M_{\gamma , g}(A)\) is extremely large, then most of its mass comes from a small neighbourhood \(B(v, r) \subset A\) of some \(\gamma \)-thick point \(v \in A\) of \(X(\cdot )\), and this point v is more likely to come from regions of higher density with respect to g and/or of higher values of f, i.e. where \(G_+\) has higher variance near v.

When \(G_+\) is not continuous, the localisation intuition is no longer valid and our method breaks down because (16) is possibly false by Belyaev’s dichotomy mentioned earlier. It may happen that (15) is still valid, in which case the power-law profile will still hold, but it is unclear how to proceed with a Gaussian field \(G_+\) that is only guaranteed to have a separable and measurable version but nothing else. Nevertheless, such an issue might potentially be circumvented by some approximation argument in the spirit of Lusin’s theorem, and we conjecture that the power law (5) remains true without the generalised decomposition condition (16).

Critical GMCs and extremal processes: heuristics

Let us abuse notation and denote by \(M_{\sqrt{2d}}\) the critical GMC (via Seneta–Heyde renormalisation)

$$\begin{aligned} M_{\sqrt{2d}}(dx) = \lim _{\epsilon \rightarrow 0^+} \sqrt{\frac{\pi }{2}}(\mathbb {E}[X_\epsilon (x)^2])^{\frac{1}{2}} e^{\sqrt{2d}X_{\epsilon }(x) - d\mathbb {E}\left[ X_{\epsilon }(x)^2\right] }dx \end{aligned}$$

and similarly \(M_{\sqrt{2d}, g}(dx) = g(x)M_{\sqrt{2d}}(dx)\). While a similar criterion for the existence of moments [17]

$$\begin{aligned} \mathbb {E}[M_{\sqrt{2d}, g}(A)^p ]< \infty \qquad \Leftrightarrow \qquad p < 1 \end{aligned}$$

has been known for critical GMCs associated with general fields, previous attempts to understand the tail probability \(\mathbb {P}(M_{\sqrt{2d}, g}(A) > t)\) were again restricted to exact kernels so that the derivation via a stochastic fixed point equation could be applied [3]. We believe that the tail probability of critical GMCs, in general, satisfies the asymptotics

$$\begin{aligned} \mathbb {P}(M_{\sqrt{2d}, g}(A) > t) \overset{t \rightarrow \infty }{=}\frac{\int _A g(v) dv}{t \sqrt{2d}} + o(t^{-1}), \end{aligned}$$

and we provide a heuristic proof based on Theorem 1 here.

Recalling the conjecture that

$$\begin{aligned} \frac{M_{\gamma }(dx)}{\sqrt{2d} - \gamma } \xrightarrow {\gamma \rightarrow \sqrt{2d}^-} 2M_{\sqrt{2d}}(dx), \end{aligned}$$

we should expect that

$$\begin{aligned}&\mathbb {P}(M_{\sqrt{2d}, g}(A)> t) \overset{\gamma \rightarrow \sqrt{2d}^-}{\approx } \mathbb {P}(M_{\gamma , g}(A) > (\sqrt{2d} - \gamma ) 2t)\\&\qquad \overset{t \rightarrow \infty }{\sim } \left( \int _A e^{\frac{2d}{\gamma }(Q - \gamma ) f(v, v)} g(v)^{\frac{2d}{\gamma ^2}}dv\right) \frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma )+1} \frac{{\overline{C}}_{\gamma , d}}{((\sqrt{2d} - \gamma )2t)^{\frac{2d}{\gamma ^2}}}\\&\qquad \overset{\gamma \rightarrow \sqrt{2d}^-}{\sim } \left( \lim _{\gamma \rightarrow \sqrt{2d}^-} {\overline{C}}_{\gamma , d}\right) \frac{\int _A g(v) dv}{t\sqrt{2d}}. \end{aligned}$$

When \(d=2\) and \(\gamma \in (0, 2)\), the formula from (7) says that

$$\begin{aligned} {\overline{C}}_{\gamma , 2} = - \frac{\pi ^{\frac{4}{\gamma ^2}-1} \left( \varGamma (\frac{\gamma ^2}{4})/\varGamma (1-\frac{\gamma ^2}{4})\right) ^{\frac{4}{\gamma ^2}-1}}{\frac{4}{\gamma ^2}-1} \frac{\varGamma (\frac{\gamma ^2}{4}-1)}{\varGamma (1 - \frac{\gamma ^2}{4})\varGamma (\frac{4}{\gamma ^2}-1)}. \end{aligned}$$

Using the fact that \(\varGamma (x) = x^{-1} \varGamma (1+x) \overset{x \rightarrow 0}{\sim }x^{-1}\), one can easily evaluate \(\lim _{\gamma \rightarrow 2^-} {\overline{C}}_{\gamma , 2} = 1\), which is consistent with (18).

One can verify that \({\overline{C}}_{\gamma , d} \overset{\gamma \rightarrow \sqrt{2d}^-}{\rightarrow } 1\) also holds for \(d=1\) using the corresponding formula in (7). While the constant \({\overline{C}}_{\gamma , d}\) is not explicitly known in higher dimension \(d \ge 3\), the calculations here suggest the possibility of a dimension-independent limit

$$\begin{aligned} \lim _{\gamma \rightarrow \sqrt{2d}^-} {\overline{C}}_{\gamma , d} = 1 \end{aligned}$$

and hence the fully explicit asymptotics (18) in all dimensions.
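Both limits can be checked numerically from the closed-form expressions in (7); the following sketch (ours, standard library only) evaluates \({\overline{C}}_{\gamma , d}\) just below \(\gamma = \sqrt{2d}\) for \(d = 1, 2\):

```python
import math

def C_bar(g, d):
    # explicit reflection coefficients from (7) for d = 1, 2
    Q = g / 2 + d / g
    if d == 1:
        return (2 * math.pi) ** ((2 / g) * (Q - g)) / (
            g * (Q - g) * math.gamma(g * (Q - g)) ** (2 / g ** 2))
    a = (2 / g) * (Q - g)                    # = 4/g^2 - 1 when d = 2
    return (-((math.pi * math.gamma(g ** 2 / 4) / math.gamma(1 - g ** 2 / 4)) ** a) / a
            * math.gamma(-(g / 2) * (Q - g))
            / (math.gamma((g / 2) * (Q - g)) * math.gamma(a)))

# C_bar(gamma, d) -> 1 as gamma -> sqrt(2d) from below
assert abs(C_bar(math.sqrt(2) - 1e-4, 1) - 1) < 1e-2
assert abs(C_bar(2 - 1e-4, 2) - 1) < 1e-2
```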

Unfortunately it seems impossible to justify the interchange of the limits \(\gamma \rightarrow \sqrt{2d}^-\) and \(t \rightarrow \infty \) needed to turn the above argument into a rigorous proof. The actual analysis for critical GMCs will be quite different from the proof of Theorem 1 here despite sharing a similar philosophy. Indeed, various techniques in the current dominated convergence-based approach become inapplicable in the critical regime, whereas the localisation trick requires a very different treatment and also relies on new ingredients such as fusion estimates [4, 12]. We shall pursue the critical result in a separate work [33].

Connection to discrete Gaussian free field

The tail probability of critical chaos is not only interesting in its own right but is also closely related to the study of extrema of log-correlated Gaussian fields, which has been an active area of research in the last two decades.

For suitable bounded domains \(D \subset \mathbb {R}^2\), consider the discrete Gaussian free field (DGFF) \(h_N(\cdot )\) on \(V_N := D \cap \frac{1}{N} \mathbb {Z}^2\). This is a Gaussian process indexed by the finite set \(V_N\) with covariance \(\mathbb {E}[h_N(x) h_N(y)]\) proportional to the discrete Green’s function, i.e. the expected number of visits to y by a simple random walk on the graph induced by \(V_N\), started from x, before it leaves \(V_N\). Under the normalisation

$$\begin{aligned} \mathbb {E}\left[ h_N(x) h_N(y)\right] \overset{x \rightarrow y}{=} -\log \left( |x-y| \vee \frac{1}{N}\right) +O(1), \end{aligned}$$

it is known [11] that \(\max _{x \in V_N} h_N(x)\) grows like \(m_N = 2 \log N - \frac{3}{4} \log \log N\) as \(N \rightarrow \infty \). Biskup and Louidor [8,9,10] then considered the extremal process of the DGFF, i.e. (for any choice of \(N^{-1} \ll r_N \ll 1\)) the scaling limit of

$$\begin{aligned} \eta _N(dx, dh) = \sum _{v \in V_N} \delta _{v}(dx) \otimes \delta _{h_N(v) - m_N}(dh) 1_{\{h_N(v) = \max _{|y-v| < r_N} h_N(y) \}} \end{aligned}$$

as \(N \rightarrow \infty \), and they showed that

$$\begin{aligned} \eta _N(dx, dh) \xrightarrow [N \rightarrow \infty ]{d} \mathrm {PPP}\left( \mu (dx) \otimes e^{-ch}dh\right) \end{aligned}$$

for some constant \(c > 0\) and a random measure \(\mu \) that was conjectured to be, up to a deterministic factor, the critical Liouville quantum gravity (LQG) measure \(\mu _{\mathrm {LQG}}\), i.e. \(\mu _{\mathrm {LQG}}(dx) = R(x;D)^2 M_{\mathrm {GFF}}(dx)\) where \(M_{\mathrm {GFF}}(dx)\) is the critical GMC associated to the continuum Gaussian free field on D with Dirichlet boundary condition and \(R(x; D)\) is the conformal radius.

The conjecture was finally resolved in the final version of [8], and a crucial ingredient in their argument was to check that the estimate

$$\begin{aligned} \lim _{\lambda \rightarrow 0^+} \frac{\mathbb {E}\left[ \mu (A) e^{-\lambda \mu (A)}\right] }{-\log \lambda } = c' \int _A R(x; D)^2 dx \qquad \text {for all open } A \subset D \end{aligned}$$

is indeed satisfied by the critical LQG measure. The Laplace-type estimate (19), which is among the axioms that characterise the measure \(\mu \) appearing in the limiting Poisson point process ([8, Theorem 2.8]), could have followed immediately from the asymptotics for the tail probability \(\mathbb {P}\left( \mu _{\mathrm {LQG}}(A) > t\right) \) as \(t \rightarrow \infty \), had the latter been available.

Outline of the paper

The remainder of the article is organised as follows.

In Sect. 2 we compile a list of results that will be used in the proof of Theorem 1. This includes a collection of facts regarding separable Gaussian processes, log-correlated Gaussian fields and GMCs, Karamata’s Tauberian theorem and auxiliary asymptotics, and random recursive equations.

In Sect. 3 we present the proof of Theorem 1 which is divided into two parts. After sketching the idea of the localisation trick, we first establish the tail asymptotics for GMCs associated with exact kernels. We then apply Kahane’s interpolation and extend the result to general kernels (3).

We conclude the article with Appendix A where we define the reflection coefficient \({\overline{C}}_{\gamma , d}(\alpha )\) of Gaussian multiplicative chaos and prove that it is equivalent to the Liouville reflection coefficients in \(d=2\).


Basic facts of Gaussian processes

We collect a few standard results regarding Gaussian processes in the following theorem.

Theorem 2

Let \((G_t)_{t \in {\mathcal T}}\) be a separable centred Gaussian process such that

$$\begin{aligned} \mathbb {P}\left( \sup _{t \in {\mathcal T}} |G_t| < \infty \right) > 0. \end{aligned}$$

Then the following statements are true.

  • Zero-one law: \(\mathbb {P}\left( \sup _{t \in {\mathcal T}} |G_t| < \infty \right) = 1\).

  • Finite moments: \(\mathbb {E}\left[ \sup _{t \in {\mathcal T}} |G_t| \right] < \infty \) and \(\sigma ^2 = \sigma ^2(G) = \sup _{t \in {\mathcal T}} \mathbb {E}\left[ G_t^2\right] < \infty \).

  • Concentration: there exists some \(c > 0\) such that for any \(u \ge 0\),

    $$\begin{aligned} \mathbb {P}\left( \left| \sup _{t \in {\mathcal T}} |G_t| - \mathbb {E}\left[ \sup _{t \in {\mathcal T}} |G_t|\right] \right| > u\right) \le 2 e^{-c\frac{u^2}{\sigma ^2}}. \end{aligned}$$

The lemma below is an easy consequence of Theorem 2.

Lemma 1

Let \(G(\cdot )\) be a continuous Gaussian field on some compact domain \(K \subset \mathbb {R}^d\), then the following are true.

  1. (i)

    There exists some \(c > 0\) such that

    $$\begin{aligned} \mathbb {P}\left( \sup _{x \in K} |G(x)| > t \right) \le \frac{1}{c} e^{-c t^2}, \qquad \forall t \ge 0. \end{aligned}$$
  2. (ii)

Let \(x \in \mathrm {int}(K)\). For any monotone function \(\varPsi : \mathbb {R}\rightarrow \mathbb {R}\) with at most exponential growth at infinity,

    $$\begin{aligned} \lim _{r \rightarrow 0^+} \mathbb {E}\left[ \varPsi \left( \sup _{y \in B(x, r)} G(y)\right) \right] = \lim _{r \rightarrow 0^+} \mathbb {E}\left[ \varPsi \left( \inf _{y \in B(x, r)} G(y)\right) \right] = \mathbb {E}\left[ \varPsi \left( G(x)\right) \right] \end{aligned}$$


Proof

Since \(G(\cdot )\) is continuous on K, it is separable and satisfies \(\sup _{x \in K} |G(x)| < \infty \) almost surely. By Theorem 2 we have \(\mathbb {E}\left[ \sup _{x \in K} |G(x)|\right] < \infty \) and \(\sigma ^2(G) < \infty \). The tail in (i) can thus be obtained from the concentration inequality (20).

For (ii), note that by monotonicity we can split \(\varPsi \) into positive and negative parts \(\varPsi = \varPsi _+ - \varPsi _-\), such that \(\varPsi _{\pm }\) are monotone functions with at most exponential growth at infinity. Since we can deal with \(\varPsi _+\) and \(\varPsi _-\) separately, we may assume without loss of generality that \(\varPsi \) is non-negative. Now take \(r_0 > 0\) such that \(B(x, r_0) \subset K\), and consider the case where \(\varPsi \) is non-decreasing. By (21) and the assumption on the growth of \(\varPsi \) at infinity, we have

$$\begin{aligned} \mathbb {E}\left[ \varPsi \left( \sup _{y \in B(x, r_0)} G(y)\right) \right] < \infty . \end{aligned}$$

But then for any \(r \in (0, r_0)\),

$$\begin{aligned} 0 \le \inf _{y \in B(x, r)} \varPsi (G(y)) \le \sup _{y \in B(x, r)} \varPsi (G(y)) \le \sup _{y \in B(x, r_0)} \varPsi (G(y)) \end{aligned}$$

and (22) follows from the continuity of G and dominated convergence. The case where \(\varPsi \) is non-increasing is similar. \(\square \)

Decomposition of Gaussian fields

We mention a result concerning the decomposition of symmetric functions from the very recent paper [21]. Let f(xy) be a symmetric function on \(D \times D\) for some domain \(D \subset \mathbb {R}^d\). We say f is in the local Sobolev space \(H_{\mathrm {loc}}^s(D \times D)\) of index \(s > 0\) if \(\kappa f\) is in \(H^s(D \times D)\) for any \(\kappa \in C_c^\infty (D \times D)\), i.e.

$$\begin{aligned} \int _{\mathbb {R}^{2d}} (1+|\xi |^2)^s|\widehat{(\kappa f)}(\xi )|^2 d\xi < \infty \end{aligned}$$

where \(\widehat{(\kappa f)}\) is the Fourier transform of \(\kappa f\) (see more details in [21, Sect. 2]). Then

Lemma 2

(cf. [21, Lemma 3.2]) If \(f \in H_{\mathrm {loc}}^s(D \times D)\) for some \(s > d\), then there exist two centred, Hölder-continuous Gaussian processes \(G_\pm \) on \(\mathbb {R}^d\) such that

$$\begin{aligned} \mathbb {E}[G_+(x) G_+(y)] - \mathbb {E}[G_-(x) G_-(y)] = f(x, y), \qquad \forall x, y \in D' \end{aligned}$$

for any bounded open set \(D'\) such that \(\overline{D'} \subset D.\)

This decomposition result has various important implications, one of which is the positive-definiteness of the logarithmic kernel. The following result may be seen as a trivial special case of [21, Theorem B] and has been known since [29].

Lemma 3

For each \(L \in \mathbb {R}\), there exists \(r_d(L) > 0\) such that the kernel

$$\begin{aligned} K_L(x, y) = -\log |x-y| + L \end{aligned}$$

is positive definite on \(B(0, r_d(L)) \subset \mathbb {R}^d\). In particular, for any \(R > 0\) there exists some \(L > 0\) such that \(K_L\) is positive definite on B(0, R).

For the sake of convenience, we shall from now on call (24) the L-exact kernel, and when \(L = 0\) we simply call \(K_0(\cdot , \cdot )\) the exact kernel and write \(r_d = r_d(0)\). The exact kernel will play a pivotal role as the reference point from which we extrapolate our tail result to general kernels in the subcritical regime.

Gaussian multiplicative chaos

Given a log-correlated Gaussian field (3), there are various equivalent constructions of the GMC measure \(M_{\gamma }\). In the subcritical case \(\gamma \in (0, \sqrt{2d})\), one approach is the regularisation procedure, first suggested in [30] and later generalised and simplified in [6]. The idea is to pick any suitable mollifier \(\theta (\cdot )\) and define

$$\begin{aligned} M_{\gamma , \epsilon }(dx) = e^{\gamma X_\epsilon (x) - \frac{\gamma ^2}{2} \mathbb {E}[X_\epsilon (x)^2]} dx \end{aligned}$$

where \(X_{\epsilon }(\cdot ) = X *\theta _{\epsilon }(\cdot )\) is a continuous Gaussian field on D. Then

Theorem 3

For \(\gamma \in (0, \sqrt{2d})\), the sequence of measures \(M_{\gamma , \epsilon }\) converges in probability to some measure \(M_{\gamma }\) in the weak\(^*\) topology as \(\epsilon \rightarrow 0^+\). The limit \(M_{\gamma }\) is independent of the choice of the mollification \(\theta \).
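As a toy illustration of the regularisation procedure (not part of the paper; all parameters are arbitrary choices), one may replace \(X_\epsilon \) by a truncated random Fourier series on the circle, whose covariance \(\sum _{k \le K} k^{-1}\cos (k(x-y))\) approximates \(-\log |2\sin (\frac{x-y}{2})|\) as K grows. The normalisation in the definition of \(M_{\gamma , \epsilon }\) forces \(\mathbb {E}[M_{\gamma , \epsilon }(D)] = |D|\), which a short Monte Carlo run confirms:

```python
import math
import random

# Toy version of the regularised measure M_{gamma, eps}: the field is a
# truncated random Fourier series on [0, 2*pi), log-correlated as K grows.
# By the normalisation exp(gamma X - gamma^2/2 E[X^2]), E[mass] = 2*pi exactly.
def gmc_mass(gamma=0.8, K=24, n=128, trials=400, seed=1):
    rng = random.Random(seed)
    xs = [2 * math.pi * j / n for j in range(n)]
    var = sum(1.0 / k for k in range(1, K + 1))  # E[X(x)^2], x-independent
    dx = 2 * math.pi / n
    total = 0.0
    for _ in range(trials):
        X = [0.0] * n
        for k in range(1, K + 1):
            a = rng.gauss(0, 1) / math.sqrt(k)
            b = rng.gauss(0, 1) / math.sqrt(k)
            for j, x in enumerate(xs):
                X[j] += a * math.cos(k * x) + b * math.sin(k * x)
        # Riemann sum of exp(gamma X - gamma^2/2 E[X^2]) dx over the circle
        total += sum(math.exp(gamma * v - gamma ** 2 * var / 2) for v in X) * dx
    return total / trials
```

With \(\gamma = 0.8 < \sqrt{2}\) (subcritical in \(d = 1\)) the total mass has finite variance, so a few hundred samples already place the empirical mean close to \(2\pi \).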

We collect a few standard results in the literature of GMC. The first is the celebrated interpolation principle by Kahane.

Lemma 4

[22] Let \(\rho \) be a Radon measure on D, \(X(\cdot )\) and \(Y(\cdot )\) be two continuous centred Gaussian fields, and \(F: \mathbb {R}_+ \rightarrow \mathbb {R}\) be some smooth function with at most polynomial growth at infinity. For \(t \in [0,1]\), define \(Z_t(x) = \sqrt{t}X(x) + \sqrt{1-t}Y(x)\) and

$$\begin{aligned} \varphi (t) := \mathbb {E}\left[ F(W_t)\right] , \qquad W_t := \int _D e^{Z_t(x) - \frac{1}{2}\mathbb {E}[Z_t(x)^2]} \rho (dx). \end{aligned}$$

Then the derivative of \(\varphi \) is given by

$$\begin{aligned} \begin{aligned} \varphi '(t) =&\frac{1}{2} \int _D \int _D \left( \mathbb {E}[X(x) X(y)] - \mathbb {E}[Y(x) Y(y)]\right) \\&\times \mathbb {E}\left[ e^{Z_t(x) + Z_t(y) - \frac{1}{2}\mathbb {E}[Z_t(x)^2] - \frac{1}{2}\mathbb {E}[Z_t(y)^2]} F''(W_t) \right] \rho (dx) \rho (dy). \end{aligned} \end{aligned}$$

In particular, if

$$\begin{aligned} \mathbb {E}[X(x) X(y)] \le \mathbb {E}[Y(x) Y(y)] \qquad \forall x, y \in D, \end{aligned}$$

then for any convex \(F: \mathbb {R}_+ \rightarrow \mathbb {R}\)

$$\begin{aligned} \mathbb {E}\left[ F\left( \int _D e^{X(x) - \frac{1}{2} \mathbb {E}[X(x)^2]}\rho (dx)\right) \right] \le \mathbb {E}\left[ F\left( \int _D e^{Y(x) - \frac{1}{2} \mathbb {E}[Y(x)^2]}\rho (dx)\right) \right] . \end{aligned}$$

and the inequality is reversed if F is concave instead.

While Lemma 4 is stated for continuous fields, it may be extended to log-correlated fields if we first apply it to mollified fields \(X_{\epsilon }\) and \(Y_{\epsilon }\) and take the limit \(\epsilon \rightarrow 0^+\). Such an argument works immediately for the comparison principle (28) and we shall make no further remarks on it. For the interpolation principle (27) we only need the following weaker statement, which may be extended to log-correlated fields in the same way.

Corollary 3

Under the same assumptions and notations in Lemma 4, if there exists some \(C>0\) such that

$$\begin{aligned} \left| \mathbb {E}[X(x) X(y)] - \mathbb {E}[Y(x) Y(y)]\right| \le C \qquad \forall x, y \in D, \end{aligned}$$


then

$$\begin{aligned} |\varphi '(t)| \le \frac{C}{2} \mathbb {E}\left[ (W_t)^2 |F''(W_t)|\right] \end{aligned}$$

and consequently

$$\begin{aligned} |\varphi (1) - \varphi (0)| \le \frac{C}{2} \int _0^1 \mathbb {E}\left[ (W_t)^2 |F''(W_t)|\right] dt. \end{aligned}$$

The next result is a generalised criterion for the existence of moments of GMC.

Lemma 5

Let \(\gamma \in (0, \sqrt{2d})\), \(Q = \frac{\gamma }{2} + \frac{d}{\gamma }\), \(\alpha < Q\) and \(B(0, r) \subset D\). Then

$$\begin{aligned} \mathbb {E}\left[ \left( \int _{|x| \le r} |x|^{-\gamma \alpha } M_{\gamma }(dx)\right) ^s \right] < \infty \end{aligned}$$

if \(s < \frac{2d}{\gamma ^2} \wedge \frac{2}{\gamma }(Q - \alpha )\). In particular

$$\begin{aligned} \mathbb {E}\left[ \left( \int _{|x| \le r} M_{\gamma }(dx)\right) ^s \right]&< \infty , \qquad \forall s< \frac{2d}{\gamma ^2}, \\ \text {and} \qquad \mathbb {E}\left[ \left( \int _{|x| \le r} |x|^{-\gamma ^2} M_{\gamma }(dx)\right) ^s \right]&< \infty , \qquad \forall s < \frac{2d}{\gamma ^2} - 1. \end{aligned}$$

Remark 2

The bound on (29) is uniform among the class of fields (3) with \(\sup _{x, y \in D} |f(x, y)| \le C\) for some \(C > 0\) by Gaussian comparison (Lemma 4).

Tauberian theorem and related auxiliary results

Let us record the classical Tauberian theorem of Karamata.

Theorem 4

[18, Theorem XIII.5.3] Let \(f(d\cdot )\) be a non-negative measure on \(\mathbb {R}_+\), \(F(t):= \int _0^t f(ds)\) and suppose

$$\begin{aligned} {\widetilde{F}}(\lambda ) := \int _0^\infty e^{-\lambda t} f(dt) \end{aligned}$$

exists for \(\lambda > 0\). If L is slowly varying at the origin and \(\rho \in [0, \infty )\), then

$$\begin{aligned} {\widetilde{F}}(\lambda ) \overset{\lambda \rightarrow \infty }{\sim } \lambda ^{-\rho } L(\lambda ^{-1}) \qquad \Leftrightarrow \qquad F(\epsilon ) \overset{\epsilon \rightarrow 0^+}{\sim } \frac{1}{\varGamma (1+\rho )} \epsilon ^\rho L(\epsilon ). \end{aligned}$$

Our use of Theorem 4 is summarised in the following corollary.

Corollary 4

Let U be a non-negative random variable, \(C>0\) and \(p > 0\). Then

$$\begin{aligned} \mathbb {P}(U > t) \overset{t \rightarrow \infty }{\sim }\frac{C}{t^p} \qquad \Leftrightarrow \qquad \mathbb {E}\left[ e^{-\lambda / U}\right] \overset{\lambda \rightarrow \infty }{\sim } \frac{C \varGamma (1+p)}{\lambda ^p}. \end{aligned}$$


Proof

Let \(V = U^{-1}\). In the notation of Theorem 4, we choose \(f(ds) = \mathbb {P}(V \in ds)\), \(L \equiv C \varGamma (1+p)\) and \(\epsilon = t^{-1}\) such that \({\widetilde{F}}(\lambda ) = \mathbb {E}\left[ e^{-\lambda / U}\right] \) and \(F(\epsilon ) = \mathbb {P}(U > t)\), and our claim is now immediate. \(\square \)
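Corollary 4 can be sanity-checked on a hypothetical Pareto example (the choices \(C = 1\), \(p = \frac{1}{2}\), \(\lambda = 50\) are illustrative, not from the text):

```python
import math
import random

# Monte Carlo check of Corollary 4 for P(U > t) = t^{-p}, t >= 1 (so C = 1):
# E[exp(-lambda/U)] should be close to Gamma(1+p) * lambda^{-p} for large lambda.
def laplace_of_reciprocal(p=0.5, lam=50.0, n=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = (1.0 - rng.random()) ** (-1.0 / p)  # inverse CDF sampling of U
        total += math.exp(-lam / u)
    return total / n

# predicted first-order term: C * Gamma(1+p) / lambda^p
predicted = math.gamma(1.5) * 50.0 ** -0.5
```

With \(2 \times 10^5\) samples the Monte Carlo estimate matches the predicted first-order term to within a few percent.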

To save ourselves from repeated calculations, we shall collect a few basic estimates below. The first one concerns the Laplace transform estimate of a random variable with power-law tail.

Lemma 6

If U is a non-negative random variable such that

$$\begin{aligned} \mathbb {P}(U > t) \overset{t \rightarrow \infty }{\sim } \frac{C}{t^q} \end{aligned}$$

for some \(C > 0\) and \(q > 0\), then for any \(p > 0\)

$$\begin{aligned} \mathbb {E}[U^{-p} e^{-\lambda / U}] \overset{\lambda \rightarrow \infty }{\sim } \frac{q}{p+q} \frac{C\varGamma (p+q+1)}{\lambda ^{p+q}}. \end{aligned}$$

If \(\mathbb {P}(U > t) \le C t^{-q}\) for all \(t > 0\) instead, then there exists some \(C' > 0\) such that

$$\begin{aligned} \mathbb {E}[U^{-p} e^{-\lambda / U}] \le \frac{C'}{\lambda ^{p+q}}, \qquad \forall \lambda > 0. \end{aligned}$$


For any \(t_0 > 0\), it is not difficult to see that there exists \(c_0 > 0\) such that

$$\begin{aligned} \mathbb {E}[U^{-p} e^{-\lambda / U} 1_{\{U \le t_0\}} ] = O(e^{-c_0 \lambda }). \end{aligned}$$

For any \(\epsilon > 0\), choose \(t_0 > 0\) such that for all \(t > t_0\) we have

$$\begin{aligned} \frac{C(1-\epsilon )}{t^q} \le \mathbb {P}(U > t) \le \frac{C(1+\epsilon )}{t^q}. \end{aligned}$$

Using Fubini, we have

$$\begin{aligned}&\mathbb {E}[U^{-p} e^{-\lambda / U} 1_{\{U \ge t_0\}} ] \\&\quad = \frac{1}{t_0^p}e^{-\lambda / t_0} \mathbb {P}(U> t_0) + \int _{t_0}^\infty e^{-\lambda /t} \left( -\frac{p}{t^{p+1}} + \frac{\lambda }{t^{p+2}}\right) \mathbb {P}(U > t) dt\\&\quad \le O(e^{-\lambda / t_0}) + C\int _{t_0}^\infty e^{-\lambda / t} \left( -\frac{p(1-\epsilon )}{t^{p+q+1}} + \frac{\lambda (1+\epsilon )}{t^{p+q+2}}\right) dt. \end{aligned}$$

Note that for any \(m > 0\) we have

$$\begin{aligned} \int _{t_0}^\infty \frac{e^{-\lambda /t}}{t^{m + 2}} dt&= \lambda ^{-(1+m)} \int _0^{\lambda / t_0} s^m e^{-s} ds \overset{\lambda \rightarrow \infty }{=} (1+o(1)) \varGamma (1+m) \lambda ^{-(m+1)} \end{aligned}$$

and therefore

$$\begin{aligned}&\mathbb {E}[U^{-p} e^{-\lambda / U} ]\\&\quad \le \frac{C}{\lambda ^{p+q}} \left[ -p(1-\epsilon ) \varGamma (p+q) + (1+\epsilon ) \varGamma (p+q+1)\right] + o(\lambda ^{-(p+q)})\\&\quad \le \left( \frac{Cq}{p+q} + (p+1)\epsilon \right) \frac{\varGamma (p+q+1)}{\lambda ^{p+q}} + o(\lambda ^{-(p+q)}). \end{aligned}$$

Similarly we have

$$\begin{aligned} \mathbb {E}\left[ U^{-p} e^{-\lambda / U} \right]&\ge \left( \frac{Cq}{p+q} - (p+1)\epsilon \right) \frac{\varGamma (p+q+1)}{\lambda ^{p+q}} + o(\lambda ^{-(p+q)}). \end{aligned}$$

This means that

$$\begin{aligned}&\left( \frac{Cq}{p+q} - (p+1)\epsilon \right) \varGamma (p+q+1) \le \liminf _{\lambda \rightarrow \infty } \lambda ^{p+q} \mathbb {E}\left[ U^{-p} e^{-\lambda / U} \right] \\&\quad \le \limsup _{\lambda \rightarrow \infty } \lambda ^{p+q} \mathbb {E}[U^{-p} e^{-\lambda / U} ] \le \left( \frac{Cq}{p+q} + (p+1)\epsilon \right) \varGamma (p+q+1). \end{aligned}$$

Since \(\epsilon > 0\) is arbitrary, we let \(\epsilon \rightarrow 0^+\) and obtain (32). The claim (33) is similar. \(\square \)

We collect another Laplace transform estimate, the proof of which is similar to that of Lemma 6 and is omitted.

Lemma 7

If U is a non-negative random variable such that

$$\begin{aligned} \mathbb {P}(U > t) \overset{t \rightarrow \infty }{\sim } \frac{C}{t^q} \end{aligned}$$

for some \(C > 0\) and \(q > 0\), then

$$\begin{aligned} \lim _{\lambda \rightarrow 0^+} \frac{\mathbb {E}\left[ U^q e^{-\lambda U}\right] }{-\log \lambda } = Cq. \end{aligned}$$

If \(\mathbb {P}(U > t) \le C t^{-q}\) for all t sufficiently large instead, then (34) may be replaced by the statement that the limit superior is upper bounded by Cq.
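To see Lemma 7 in action (again with the illustrative choices \(C = q = 1\)): for \(\mathbb {P}(U > t) = 1/t\), \(t \ge 1\), we have \(\mathbb {E}[U e^{-\lambda U}] = \int _1^\infty e^{-\lambda t} \frac{dt}{t}\), and the ratio to \(-\log \lambda \) approaches \(Cq = 1\) only logarithmically slowly, which the quadrature below makes visible:

```python
import math

# Check of Lemma 7 with P(U > t) = 1/t for t >= 1 (C = q = 1):
# E[U e^{-lam U}] = int_1^inf e^{-lam t} dt / t, computed via the substitution
# t = e^u and the trapezoid rule; the ratio to -log(lam) tends to 1.
def ratio(lam, n=200_000, umax=60.0):
    h = umax / n
    s = 0.5 * math.exp(-lam)  # endpoint u = 0; the u = umax endpoint is ~0
    for i in range(1, n):
        s += math.exp(-lam * math.exp(i * h))
    return s * h / (-math.log(lam))
```

The deficit is of order \(1/\log (1/\lambda )\) (coming from the Euler constant in the exponential integral), so even at \(\lambda = 10^{-8}\) the ratio is only about 0.97.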

We also need the following elementary result, the proof of which is again skipped.

Lemma 8

Let U, V be two non-negative random variables. Suppose there exists some \(C > 0\) and \(q > 0\) such that

$$\begin{aligned} (i)&\qquad \mathbb {P}(U> t) \overset{t \rightarrow \infty }{\sim } C t^{-q}, \\ (ii)&\qquad \mathbb {P}(V> t) \overset{t \rightarrow \infty }{=} o(t^{-p}) \qquad \forall p > 0. \end{aligned}$$

Then the tail behaviour of UV is given by

$$\begin{aligned} (iii)&\qquad \mathbb {P}(UV > t) \overset{t \rightarrow \infty }{\sim } C \mathbb {E}[V^q] t^{-q}. \qquad \end{aligned}$$

Remark 3

The converse of Lemma 8 is false: in general if we are given only conditions (ii) and (iii), we can only show that there exists some \(C' > 0\) such that

$$\begin{aligned} \mathbb {P}(U > t) \le C' t^{-q} \end{aligned}$$

which follows immediately from \(\mathbb {P}(UV> t) \ge \mathbb {P}(U> t/a ) \mathbb {P}(V > a)\) for any \(a > 0\) such that \(\mathbb {P}(V>a) \ne 0\).
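A quick Monte Carlo check of Lemma 8 (illustrative choice: U Pareto with \(q = 1\), \(C = 1\), and \(V \sim \mathrm {Exp}(1)\), which has superpolynomially light tails and \(\mathbb {E}[V^q] = 1\)):

```python
import random

# Monte Carlo check of Lemma 8: with P(U > s) = 1/s for s >= 1 and V ~ Exp(1),
# the product tail satisfies t * P(UV > t) -> C * E[V^q] = 1.
def product_tail(t=200.0, n=500_000, seed=2):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        u = 1.0 / (1.0 - rng.random())  # inverse CDF: P(U > s) = 1/s, s >= 1
        v = rng.expovariate(1.0)
        if u * v > t:
            hits += 1
    return t * hits / n
```

At \(t = 200\) the empirical value of \(t\, \mathbb {P}(UV > t)\) is already close to 1, in line with (iii).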

Random recursive equation

Here we record Goldie’s implicit renewal theorem [20] from the literature on random distributional equations.

Theorem 5

Let M and R be two independent non-negative random variables. Suppose there exists some \(q > 0\) such that

  1. (i)

    \(\mathbb {E}[M^q] = 1\).

  2. (ii)

    \(\mathbb {E}[M^q \log M] < \infty \).

  3. (iii)

The conditional law of \(\log M\) given \(M \ne 0\) is non-arithmetic.

  4. (iv)

    \(\int _0^\infty |\mathbb {P}(R> t) - \mathbb {P}(MR > t)| t^{q-1} dt < \infty \).

Then \(\mathbb {E}[M^q \log M] \in (0, \infty )\) and as \(t \rightarrow \infty \),

$$\begin{aligned} \mathbb {P}(R > t) = \frac{C}{t^{q}} + o(t^{-q}) \end{aligned}$$

where the constant \(C > 0\) is given by

$$\begin{aligned} C&= \frac{1}{\mathbb {E}[M^q \log M]} \int _0^\infty \left( \mathbb {P}(R> t) - \mathbb {P}(MR > t)\right) t^{q-1} dt. \end{aligned}$$

Theorem 5 will be used alongside the following coupling lemma.

Lemma 9

Let U, V be two non-negative random variables and \(q > 0\). Then

$$\begin{aligned} \int _0^\infty \left| \mathbb {P}(U> t) - \mathbb {P}(V > t) \right| t^{q-1}dt \le \frac{1}{q} \mathbb {E}\left| U^q - V^q\right| . \end{aligned}$$

Moreover, for any coupling of (UV) such that \(\mathbb {E}|U^q - V^q| < \infty \),

$$\begin{aligned} \int _0^\infty \left[ \mathbb {P}(U> t) - \mathbb {P}(V > t) \right] t^{q-1}dt = \frac{1}{q} \mathbb {E}\left[ U^q - V^q\right] . \end{aligned}$$


Proof

Suppose first that U, V are bounded by some constant \(M > 0\). The inequality (36) is then a simple consequence of

$$\begin{aligned}&|\mathbb {P}(U> t) - \mathbb {P}(V> t)|\\&\quad = \left| \mathbb {P}(U> t, V> t) + \mathbb {P}(U> t, V \le t) - \mathbb {P}(U> t, V> t) - \mathbb {P}(U \le t, V> t)\right| \\&\quad = \left| \mathbb {P}(U> t, V \le t) -\mathbb {P}(U \le t, V> t)\right| \\&\quad \le \mathbb {P}(U> t, V \le t) +\mathbb {P}(U \le t, V> t)\\&\quad = \mathbb {P}(\max (U, V)> t) - \mathbb {P}(\min (U, V) > t) \end{aligned}$$

combined with the fact that

$$\begin{aligned} \mathbb {E}\left| U^q - V^q\right|&=\mathbb {E}\left[ \max (U, V)^q - \min (U, V)^q \right] \\&= q \int _0^\infty t^{q-1} \left[ \mathbb {P}(\max (U, V)> t) - \mathbb {P}(\min (U, V) > t)\right] dt. \end{aligned}$$

The equality (37) is trivial in this case because \(\mathbb {E}[U^q], \mathbb {E}[V^q]\) are both finite.

For U, V that are not necessarily bounded but with \(\mathbb {E}|U^q - V^q| < \infty \) (otherwise (36) is trivial), we introduce a cutoff \(M > 0\) and write \(U_M = \min (U, M), V_M = \min (V, M)\). Then the previous discussion implies that

$$\begin{aligned}&\int _0^M \left| \mathbb {P}(U> t) - \mathbb {P}(V> t) \right| t^{q-1}dt = \int _0^\infty \left| \mathbb {P}(U_M> t) - \mathbb {P}(V_M > t) \right| t^{q-1}dt \\&\qquad \le \frac{1}{q} \mathbb {E}\left| \max (U_M, V_M)^q - \min (U_M, V_M)^q\right| \\&\qquad \le \frac{1}{q} \mathbb {E}\left| (U^q - V^q) 1_{\{\max (U, V) \le M \}}\right| + \frac{1}{q} \mathbb {E}\left[ \left( M^q - \min (U, V)^q\right) 1_{\{\max (U, V) \ge M\}}\right] \\&\qquad \xrightarrow {M \rightarrow \infty } \frac{1}{q}\mathbb {E}\left| U^q - V^q\right| \end{aligned}$$

by dominated convergence since both

$$\begin{aligned} \left| (U^q - V^q) 1_{\{\max (U, V) \le M \}}\right| , \qquad \left( M^q - \min (U, V)^q\right) 1_{\{\max (U, V) \ge M\}} \end{aligned}$$

are bounded by \(|U^q - V^q|\). We send \(M \rightarrow \infty \) on the LHS of the above inequality and obtain (36) by monotone convergence. The equality (37) may be proved by a similar cutoff argument. \(\square \)
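The identity (37) can be verified on a concrete pair of laws (an illustrative choice; the identity only involves the marginals): with \(q = 2\), \(U \sim \mathrm {Exp}(1)\) and \(V \sim \mathrm {Unif}(0,1)\), both sides equal \(\frac{1}{2}\left( \mathbb {E}[U^2] - \mathbb {E}[V^2]\right) = \frac{1}{2}\left( 2 - \frac{1}{3}\right) = \frac{5}{6}\):

```python
import math

# Both sides of (37) with q = 2, U ~ Exp(1), V ~ Uniform(0, 1):
# LHS = int_0^inf [P(U > t) - P(V > t)] t dt, with P(U > t) = e^{-t} and
# P(V > t) = max(1 - t, 0); RHS = (1/q)(E[U^2] - E[V^2]) = (1/2)(2 - 1/3).
def lhs(q=2, T=50.0, n=200_000):
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid weights
        total += w * (math.exp(-t) - max(1.0 - t, 0.0)) * t ** (q - 1)
    return total * h

rhs = 0.5 * (2.0 - 1.0 / 3.0)  # = 5/6
```

The truncation at \(T = 50\) is harmless since the exponential tail contributes \(O(T e^{-T})\).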

Proof of Theorem 1

This section is devoted to the proof of the tail asymptotics of subcritical GMC measures. As advertised earlier, our proof of Theorem 1 consists of two parts.

  1. 1.

    Tail asymptotics of reference measure (Sect. 3.1): we derive the leading order term of \(\mathbb {P}\left( \int _{|x| \le r} |x|^{-\gamma ^2} {\overline{M}}_{\gamma , g}(dx) > t\right) \) for the chaos measure \({\overline{M}}_{\gamma , g}\) associated with the exact kernel. This will serve as an important estimate for the extrapolation principle as well as applications of dominated convergence.

  2. 2.

    Extrapolation principle (Sect. 3.2): we explain how the estimates for certain expectations involving \({\overline{M}}_{\gamma , g}\) may be extended to \(M_{\gamma , g}\) by Gaussian interpolation.

To conclude our proof, we shall apply a Tauberian argument and translate our intermediate results back to the desired claim concerning the tail probability of \(M_{\gamma , g}(A)\).

Let us commence with the localisation trick.

Lemma 10

Let \(A \subset D\) be a non-empty open subset. Then for any \(t > 0\) and \(\lambda > 0\),

$$\begin{aligned} \mathbb {E}[ e^{-\lambda / M_{\gamma , g}(A)}]&= \int _A \mathbb {E}\left[ \frac{1}{M_{\gamma , g}(v, A)} e^{-\lambda / M_{\gamma , g}(v, A)} \right] g(v)dv, \end{aligned}$$
$$\begin{aligned} \mathbb {P}\left( M_{\gamma , g}(A) > t\right)&\le \int _A \mathbb {E}\left[ \frac{1}{M_{\gamma , g}(v, A)} 1_{\{M_{\gamma , g}(v, A) \ge t\}} \right] g(v)dv \end{aligned}$$


where

$$\begin{aligned} M_{\gamma , g}(v, A) := \int _A \frac{e^{\gamma ^2 f(x, v)} M_{\gamma , g}(dx)}{|x-v|^{\gamma ^2}}. \end{aligned}$$


Proof

For each \(\epsilon > 0\), let \(X_{\epsilon }\) be the mollified field with covariance \(\mathbb {E}[X_{\epsilon }(x) X_{\epsilon }(y)] = -\log \left( |x-y| \vee \epsilon \right) + f_{\epsilon }(x, y)\) where \(f_{\epsilon }(x, y) \xrightarrow {\epsilon \rightarrow 0^+} f(x,y)\) pointwise (cf. [6, Lemma 3.4]). If \(M_{\gamma , \epsilon }(dx)\) is the GMC associated to \(X_{\epsilon }\) and \(M_{\gamma , g, \epsilon }(dx) = g(x)M_{\gamma , \epsilon }(dx)\), then

$$\begin{aligned} \mathbb {E}[ e^{-\lambda / M_{\gamma , g}(A)}] \nonumber&= \lim _{\epsilon \rightarrow 0^+} \mathbb {E}[ e^{-\lambda / M_{\gamma , g, \epsilon }(A)}] \\ \nonumber&= \lim _{\epsilon \rightarrow 0^+} \mathbb {E}\left[ \frac{M_{\gamma , g, \epsilon }(A)}{M_{\gamma , g, \epsilon }(A)} e^{-\lambda / M_{\gamma , g, \epsilon }(A)}\right] \\&= \lim _{\epsilon \rightarrow 0^+} \int _A \mathbb {E}\left[ \frac{e^{\gamma X_{\epsilon }(v) - \frac{\gamma ^2}{2} \mathbb {E}\left[ X_{\epsilon }(v)^2\right] }}{M_{\gamma , g, \epsilon }(A)} e^{-\lambda / M_{\gamma , g, \epsilon }(A)}\right] g(v) dv. \end{aligned}$$

One may interpret \(e^{\gamma X_{\epsilon }(v) - \frac{\gamma ^2}{2} \mathbb {E}\left[ X_{\epsilon }(v)^2\right] }\) as a Radon–Nikodym derivative, and by applying the Cameron–Martin theorem, we can remove this exponential by shifting the mean of \(X_{\epsilon }(\cdot )\) by \(\mathbb {E}\left[ X_{\epsilon }(\cdot ) \gamma X_{\epsilon }(v)\right] = \gamma \left( - \log \left( |\cdot - v| \vee \epsilon \right) + f_{\epsilon }(\cdot , v)\right) \), i.e.

$$\begin{aligned} \mathbb {E}\left[ \frac{e^{\gamma X_{\epsilon }(v) - \frac{\gamma ^2}{2} \mathbb {E}\left[ X_{\epsilon }(v)^2\right] }}{M_{\gamma , g, \epsilon }(A)} e^{-\lambda / M_{\gamma , g, \epsilon }(A)}\right]&= \mathbb {E}\left[ \frac{1}{M_{\gamma , g, \epsilon }(v, A)} e^{-\lambda / M_{\gamma , g, \epsilon }(v, A)} \right] \end{aligned}$$


where

$$\begin{aligned} M_{\gamma , g, \epsilon }(v, A) =&\int _A e^{\gamma X_{\epsilon }(x) + \gamma ^2 \mathbb {E}\left[ X_{\epsilon }(x) X_{\epsilon }(v)\right] - \frac{\gamma ^2}{2}\mathbb {E}[X_{\epsilon }(x)^2]} g(x) dx\\ =&\int _A \frac{e^{\gamma ^2 f_{\epsilon }(x, v)} M_{\gamma , g, \epsilon }(dx)}{\left( |x-v|\vee \epsilon \right) ^{\gamma ^2}}. \end{aligned}$$

Since \(M_{\gamma , g, \epsilon }(v, A)\) converges to \(M_{\gamma , g}(v, A)\) as \(\epsilon \rightarrow 0^+\), (41) converges to the integrand in (38), and we can interchange the limit and integral in (40) to obtain (38) by bounded convergence. The proof of (39) is similar and is skipped here. \(\square \)
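The Cameron–Martin step above reduces, in one dimension, to the Gaussian identity \(\mathbb {E}\left[ e^{\gamma Z - \frac{\gamma ^2}{2}\mathbb {E}[Z^2]} f(Z)\right] = \mathbb {E}\left[ f(Z + \gamma \mathbb {E}[Z^2])\right] \), which the quadrature below verifies (the choices \(f = \cos \) and \(\gamma = 0.7\) are arbitrary):

```python
import math

# One-dimensional instance of the Cameron-Martin shift:
# for Z ~ N(0, var), E[e^{g Z - g^2 var/2} f(Z)] = E[f(Z + g*var)].
# Checked by trapezoid quadrature with f = cos, g = 0.7 (arbitrary choices).
def gauss_quad(f, var, n=100_000, width=10.0):
    s = math.sqrt(var)
    h = 2 * width * s / n
    total = 0.0
    for i in range(n + 1):
        z = -width * s + i * h
        w = math.exp(-z * z / (2 * var)) / (s * math.sqrt(2 * math.pi))
        total += f(z) * w * (0.5 if i in (0, n) else 1.0)
    return total * h

g, var = 0.7, 1.0
tilted = gauss_quad(lambda z: math.exp(g * z - g * g * var / 2) * math.cos(z), var)
shifted = gauss_quad(lambda z: math.cos(z + g * var), var)
```

Both quantities also agree with the closed form \(\mathbb {E}[\cos (Z + a)] = e^{-\mathbb {E}[Z^2]/2}\cos a\).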

The reference measure \({\overline{M}}_{\gamma }\)

Let \({\overline{M}}_{\gamma }^L\) be the GMC associated with the log-correlated field \(Y_L\) with covariance \(\mathbb {E}[Y_L(x) Y_L(y)] = K_L(x, y) = -\log |x-y| + L\), which by Lemma 3 is positive definite on \(B(0, r_d(L))\). We shall suppress the dependence on L when we are referring to the exact kernel, i.e. \(L = 0\). The main estimate in this subsection is the asymptotics of the tail probability of \({\overline{M}}_{\gamma }(0, r) := \int _{|x| \le r} |x|^{-\gamma ^2} {\overline{M}}_{\gamma }(dx)\).

Lemma 11

There exists some constant \({\overline{C}}_{\gamma , d} > 0\) such that for any \(r \in (0, r_d]\),

$$\begin{aligned} \mathbb {P}\left( {\overline{M}}_{\gamma }(0, r) > t\right) = \frac{{\overline{C}}_{\gamma , d}}{t^{\frac{2d}{\gamma ^2} - 1}} + o(t^{-\frac{2d}{\gamma ^2} +1}), \qquad t \rightarrow \infty . \end{aligned}$$


Proof

Pick \(c \in (0, 1)\). Using the fact that

$$\begin{aligned} \left( Y(c x)\right) _{|x| \le r} \overset{d}{=} \left( Y(x) + N_c\right) _{|x| \le r} \end{aligned}$$

where \(N_c \sim {\mathcal N }(0, -\log c)\) is an independent random variable, we see that

$$\begin{aligned} {\overline{M}}_{\gamma }(0, cr) \nonumber&= \int _{|x|< cr} e^{\gamma Y(x) - \frac{\gamma ^2}{2} \mathbb {E}[Y(x)^2]}\frac{dx}{|x|^{\gamma ^2}}\\ \nonumber&= c^d\int _{|u|< r} e^{\gamma Y(cu) - \frac{\gamma ^2}{2} \mathbb {E}[Y(cu)^2]}\frac{du}{|cu|^{\gamma ^2}}\\ \nonumber&\overset{d}{=} c^{d - \gamma ^2} e^{\gamma N_c - \frac{\gamma ^2}{2} \mathbb {E}[N_c^2]} \int _{|u| < r} e^{\gamma Y(u) - \frac{\gamma ^2}{2} \mathbb {E}[Y(u)^2]}\frac{du}{|u|^{\gamma ^2}}\\&= c^{d - \frac{\gamma ^2}{2}} e^{\gamma N_c} {\overline{M}}_{\gamma }(0, r). \end{aligned}$$

For convenience, set \(q = \frac{2d}{\gamma ^2} - 1\) and write \(M = c^{d - \frac{\gamma ^2}{2}} e^{\gamma N_c}= c^{\frac{\gamma ^2}{2} q}e^{\gamma N_c}\) and \(R = {\overline{M}}_{\gamma }(0, r)\). We only need to show that conditions (i) – (iv) in Theorem 5 are satisfied for the desired tail behaviour. Conditions (ii) and (iii) are trivial, while

$$\begin{aligned} \mathbb {E}\left[ M^{q}\right]&=c^{\frac{\gamma ^2}{2} q^2} c^{-\frac{\gamma ^2}{2}q^2} = 1 \end{aligned}$$

and so condition (i) is also satisfied. If we take \(U = {\overline{M}}_{\gamma }(0, r)\), \(V = {\overline{M}}_{\gamma }(0, cr)\), and

$$\begin{aligned} W = U - V = \int _{|x| \in [cr, r)} e^{\gamma Y(x) - \frac{\gamma ^2}{2} \mathbb {E}[Y(x)^2]} \frac{dx}{|x|^{\gamma ^2}} \le (cr)^{-\gamma ^2} {\overline{M}}_{\gamma }(B(0, r)). \end{aligned}$$


then

$$\begin{aligned} \int _0^{\infty } |\mathbb {P}(R> t) - \mathbb {P}(MR> t)| t^{q-1} dt \nonumber&= \int _0^{\infty } |\mathbb {P}(U> t) - \mathbb {P}(V > t)| t^{q-1} dt \\ \nonumber&\le \frac{1}{q} \mathbb {E}\left| (V+W)^q - V^q \right| \\&\le 2^{q}\mathbb {E}[ V^{q-1} W + W^{q}] \end{aligned}$$

where the first inequality follows from Lemma 9 and the second inequality from the elementary estimate

$$\begin{aligned} (V+W)^q - V^q&\le q \max \left( V^{q-1}, (V+W)^{q-1}\right) W \le q 2^{q} \left( V^{q-1} W + W^q \right) . \end{aligned}$$

Since \(\mathbb {E}[W^{q+1-\epsilon }] < \infty \) for any \(\epsilon > 0\) (in particular \(\mathbb {E}[W^q] < \infty \)), we have

$$\begin{aligned} \mathbb {E}[V^{q-1} W] \le \mathbb {E}\left[ V^{\frac{(q-1)(q+1 - \epsilon )}{q - \epsilon }}\right] ^{1 - \frac{1}{q+1-\epsilon }} \mathbb {E}\left[ W^{q+1 - \epsilon }\right] ^{\frac{1}{q+1 - \epsilon }} < \infty \end{aligned}$$

for \(\epsilon \) sufficiently small so that \((q-1)(q+1-\epsilon ) / (q-\epsilon ) < q\). Then (44) is finite and condition (iv) is also satisfied, and by Theorem 5 we obtain

$$\begin{aligned} \mathbb {P}({\overline{M}}_{\gamma }(0, r) > t) = \frac{{\overline{C}}_{\gamma , d}}{t^{q}} + o(t^{-q}). \end{aligned}$$

To finish the proof we show that the constant \({\overline{C}}_{\gamma , d}\) is independent of \(r \in (0, r_d]\). To this end, let \(0< r < r' \le r_d\) and \(\theta \in (0, 1)\). We have

$$\begin{aligned}&\mathbb {P}\left( {\overline{M}}_{\gamma }(0, r)> t \right) \le \mathbb {P}\left( {\overline{M}}_{\gamma }(0, r')> t \right) \\&\quad \le \mathbb {P}\left( {\overline{M}}_{\gamma }(0, r)> (1-\theta ) t \right) + \mathbb {P}\left( {\overline{M}}_{\gamma }(0, B(0,r') \setminus B(0, r)) > \theta t \right) \end{aligned}$$


Since

$$\begin{aligned} \mathbb {E}\left[ {\overline{M}}_{\gamma }(0, B(0,r') {\setminus } B(0, r))^p \right] \le r^{-p\gamma ^2} \mathbb {E}\left[ {\overline{M}}_{\gamma }(B(0, r'))^p \right] < \infty \quad \forall p \in \left[ 0, \frac{2d}{\gamma ^2}\right) , \end{aligned}$$

the tail probability of the random variable \({\overline{M}}_{\gamma }(0, B(0,r')\setminus B(0, r))\) decays faster than \(t^{-q}\) as \(t \rightarrow \infty \) by Markov’s inequality, and therefore

$$\begin{aligned}&\liminf _{t\rightarrow \infty } t^{q}\mathbb {P}\left( {\overline{M}}_{\gamma }(0, r)> t \right) \le \liminf _{t \rightarrow \infty } t^{q}\mathbb {P}\left( {\overline{M}}_{\gamma }(0, r')> t \right) \\&\quad \le \limsup _{t \rightarrow \infty } t^{q}\mathbb {P}\left( {\overline{M}}_{\gamma }(0, r')> t \right) \le \limsup _{t \rightarrow \infty } t^{q}\mathbb {P}\left( {\overline{M}}_{\gamma }(0, r) > (1-\theta ) t \right) . \end{aligned}$$

As \(\theta \in (0, 1)\) is arbitrary, we conclude that \({\overline{C}}_{\gamma , d}\) is independent of \(r \in (0, r_d]\). \(\square \)

We summarise various probabilistic representations of \({\overline{C}}_{\gamma , d}\) in the following corollary.

Corollary 5

The constant \({\overline{C}}_{\gamma , d}\) has the following equivalent representations.

$$\begin{aligned} {\overline{C}}_{\gamma , d} \nonumber&= \lim _{t \rightarrow \infty } t^{\frac{2d}{\gamma ^2}-1} \mathbb {P}\left( {\overline{M}}_{\gamma }(0, r) > t\right) \\&= \lim _{\lambda \rightarrow 0^+} \frac{1}{\frac{2d}{\gamma ^2}-1} \frac{\mathbb {E}\left[ {\overline{M}}_{\gamma }(0,r)^{\frac{2d}{\gamma ^2}-1} e^{-\lambda {\overline{M}}_{\gamma }(0,r)}\right] }{-\log \lambda } \end{aligned}$$
$$\begin{aligned}&= \frac{1}{-\frac{2}{\gamma ^2}\left( d - \frac{\gamma ^2}{2}\right) ^2 \log c} \mathbb {E}\left[ {\overline{M}}_{\gamma }(0, r)^{\frac{2d}{\gamma ^2}-1} - {\overline{M}}_{\gamma }(0, cr)^{\frac{2d}{\gamma ^2}-1}\right] , \qquad \forall c \in (0, 1). \end{aligned}$$


Proof

The first representation is an immediate consequence of Lemma 11, and the second representation follows from Lemma 7. For the third representation, we recall from Theorem 5 and Lemma 9 that

$$\begin{aligned}&\lim _{t \rightarrow \infty } t^{q} \mathbb {P}\left( {\overline{M}}_{\gamma }(0, r) > t\right) \\&\qquad = \frac{1}{\mathbb {E}\left[ c^{\frac{\gamma ^2}{2}q^2} e^{\gamma q N_c} \left( \frac{\gamma ^2}{2}q \log c + \gamma N_c\right) \right] }\frac{1}{q} \mathbb {E}\left[ {\overline{M}}_{\gamma }(0, r)^{q} - {\overline{M}}_{\gamma }(0, cr)^{q}\right] \end{aligned}$$

where \(q = \frac{2d}{\gamma ^2} - 1\) and \(c \in (0, 1)\). Then it is straightforward to check that

$$\begin{aligned} \mathbb {E}\left[ c^{\frac{\gamma ^2}{2}q^2} e^{\gamma q N_c} \left( \frac{\gamma ^2}{2}q \log c + \gamma N_c\right) \right]&= \frac{\gamma ^2}{2} q \log c + \gamma \mathbb {E}\left[ \gamma q N_c^2\right] = - \frac{\gamma ^2}{2}q \log c \end{aligned}$$

which gives (46). \(\square \)
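The "straightforward check" above uses only standard Gaussian computations. Recalling that \(N_c\) is a centred Gaussian with variance \(-\log c\) (this is exactly what makes the prefactor \(c^{\frac{\gamma ^2}{2}q^2}\) a normalisation), the identities \(\mathbb {E}[e^{\theta N}] = e^{\theta ^2 \sigma ^2 / 2}\) and \(\mathbb {E}[N e^{\theta N}] = \theta \sigma ^2 e^{\theta ^2 \sigma ^2/2}\) for \(N \sim {\mathcal N }(0, \sigma ^2)\), applied with \(\theta = \gamma q\) and \(\sigma ^2 = -\log c\), give

$$\begin{aligned} c^{\frac{\gamma ^2}{2}q^2} \mathbb {E}\left[ e^{\gamma q N_c}\right] = 1, \qquad c^{\frac{\gamma ^2}{2}q^2} \mathbb {E}\left[ N_c e^{\gamma q N_c}\right] = -\gamma q \log c, \end{aligned}$$

so that the expectation equals \(\frac{\gamma ^2}{2} q \log c + \gamma \cdot (-\gamma q \log c) = -\frac{\gamma ^2}{2} q \log c\), as claimed.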

Remark 4

The fact that (46) holds regardless of \(c \in (0,1)\) is not surprising. Indeed when \(c = 2^{-N}\), we have

$$\begin{aligned}&\mathbb {E}\left[ {\overline{M}}_{\gamma }(0, r)^{\frac{2d}{\gamma ^2}-1} - {\overline{M}}_{\gamma }(0, cr)^{\frac{2d}{\gamma ^2}-1}\right] \\&\quad = \sum _{n=1}^N \mathbb {E}\left[ {\overline{M}}_{\gamma }(0, 2^{-(n-1)} r)^{\frac{2d}{\gamma ^2}-1} - {\overline{M}}_{\gamma }(0, 2^{-n}r)^{\frac{2d}{\gamma ^2}-1}\right] \end{aligned}$$

and the summand on the RHS does not change with n because of the scaling property (43). The scaling property also explains why (46) is independent of \(r \in (0, r_d)\) (as long as the exact kernel remains positive definite on B(0, r)).

Lemma 11 has several useful implications.

Corollary 6

The following are true.

  1. (i)

    For any \(L \in \mathbb {R}\) and \(r \in (0, r_d(L)]\), let \({\overline{M}}_{\gamma }^L(0, r) = \int _{|x| \le r} |x|^{-\gamma ^2} e^{\gamma ^2 L} {\overline{M}}_{\gamma }^L(dx)\). We have, as \(t \rightarrow \infty \),

    $$\begin{aligned} \mathbb {P}\left( {\overline{M}}_{\gamma }^L(0, r) > t \right) = e^{\frac{2d}{\gamma }(Q-\gamma )L}\frac{{\overline{C}}_{\gamma , d} }{t^{\frac{2d}{\gamma ^2}-1}} + o(t^{-\frac{2d}{\gamma ^2} + 1}). \end{aligned}$$
  2. (ii)

    Let X be the log-correlated field in Theorem 1, and \(A \subset D\) be a fixed, non-trivial open set. Then there exists some \(C > 0\) independent of \(v \in A\) such that

    $$\begin{aligned} \mathbb {P}\left( M_{\gamma , g}(v, A)> t \right) \le \frac{C}{t^{\frac{2d}{\gamma ^2} - 1}} \qquad \forall t > 0. \end{aligned}$$

Remark 5

The tail (47) suggests how \(\mathbb {P}\left( M_{\gamma , g}(v, A) > t\right) \) should behave asymptotically as \(t \rightarrow \infty \). As we shall see in the proof, we can pick any \(r > 0\) such that \(B(v, r) \subset A\) and consider instead \(\mathbb {P}\left( M_{\gamma , g}(v, r) > t\right) \) without changing the asymptotic behaviour. When r is small, the covariance structure of X looks like \(-\log |x-y| + f(v, v) = K_{f(v, v)}(x, y)\) locally in B(vr) and we should expect

$$\begin{aligned} \mathbb {P}\left( M_{\gamma , g}(v, r) > t \right) \sim e^{\frac{2d}{\gamma }(Q-\gamma ) f(v, v)} g(v)^{\frac{2d}{\gamma ^2} - 1}\frac{{\overline{C}}_{\gamma , d}}{t^{\frac{2d}{\gamma ^2}-1}}. \end{aligned}$$

It is not hard to verify this claim when f is the covariance of some continuous Gaussian field, but the situation becomes trickier under the setting of Theorem 1 where we only assume that \(f = f_+ - f_-\) is the difference of two such covariance kernels. We shall therefore not attempt to prove (49) here.

Proof of Corollary 6

For convenience, let \(q = \frac{2d}{\gamma ^2}-1 = \frac{2}{\gamma }(Q-\gamma )\).

  1. (i)

    Based on the argument at the end of the proof of Lemma 11, we may thus assume \(r> 0\) to be as small as we like (but independent of t) without loss of generality.

    If \(L \ge 0\), we may interpret \(K_L(x, y) = K_0(x, y) + L\) as the sum of the exact kernel and the variance of an independent random variable \( {\mathcal N }_{L} \sim {\mathcal N }(0, L)\), and hence

    $$\begin{aligned} \mathbb {P}\left( {\overline{M}}_{\gamma }^L(0, r)> t\right) = \mathbb {P}\left( e^{\gamma ^2 L} e^{\gamma {\mathcal N }_L - \frac{\gamma ^2}{2} L}{\overline{M}}_{\gamma }(0, r) > t\right) \sim \frac{{\overline{C}}_{\gamma , d}\mathbb {E}\left[ \left( e^{\gamma ^2 L + \gamma {\mathcal N }_L - \frac{\gamma ^2}{2} L}\right) ^{q}\right] }{t^q} \end{aligned}$$

    by Lemma 8, where the factor \(e^{\gamma ^2 L}\) comes from the kernel weight \(e^{\gamma ^2 K_L(x, 0)} = |x|^{-\gamma ^2} e^{\gamma ^2 L}\) in the definition of \({\overline{M}}_{\gamma }^L(0, r)\), and \(\mathbb {E}\left[ \left( e^{\gamma ^2 L + \gamma {\mathcal N }_L - \frac{\gamma ^2}{2} L}\right) ^{q}\right] = e^{\frac{\gamma ^2}{2}q(q+1)L} = e^{\frac{2d}{\gamma }(Q-\gamma )L}\).

    If \(L < 0\), we instead interpret \(K_L(x, y) = - \log \left| e^{-L}(x-y)\right| \) as the exact kernel with coordinates scaled by \(e^{-L}\). If we restrict ourselves to \(e^{-L}x, e^{-L}y \in B(0, r_d)\), or equivalently \(r \in (0, e^{L}r_d]\), then

    $$\begin{aligned} \mathbb {P}\left( {\overline{M}}_{\gamma }^L(0, r)> t \right)&= \mathbb {P}\left( \int _{|x| \le r} |e^{-L}x|^{-\gamma ^2} e^{\gamma Y(e^{-L}x) - \frac{\gamma ^2}{2} \mathbb {E}\left[ Y(e^{-L}x )^2\right] } dx> t \right) \\&= \mathbb {P}\left( e^{dL}{\overline{M}}_{\gamma }(0, e^{-L} r) > t\right) \sim \frac{{\overline{C}}_{\gamma , d}e^{dqL}}{t^q} \end{aligned}$$

    where \(e^{dqL} = e^{\frac{2d}{\gamma }(Q-\gamma )L}\) as expected.

  2. (ii)

    Let \(r = r_d\). Then

    $$\begin{aligned} \mathbb {P}\left( M_{\gamma , g}(v, A)> t\right)&\le \mathbb {P}\left( M_{\gamma , g}(v, B(v,r) \cap D)> \frac{t}{2}\right) \\&\quad + \mathbb {P}\left( |r|^{-\gamma ^2} e^{\gamma ^2 \sup _{x, y} |f(x, y)|}M_{\gamma , g}(D) > \frac{t}{2}\right) . \end{aligned}$$

    Since \(\mathbb {E}\left[ M_{\gamma , g}(D)^q\right] < \infty \) by Lemma 5, Markov’s inequality implies that we only need to verify \(\mathbb {P}\left( M_{\gamma , g}(v, B(v,r) \cap D) > t\right) \le Ct^{-q}\) uniformly in v.

    By (i), let \(C > 0\) be such that

    $$\begin{aligned} \mathbb {P}\left( {\overline{M}}_{\gamma }(0, r)> t\right) \le \frac{C}{t^q} \qquad \forall t > 0. \end{aligned}$$

    To go beyond exact kernels, we utilise the decomposition condition of f. Let \(G_\pm (\cdot )\) be independent continuous Gaussian fields on \({\overline{D}}\) with covariance \(f_{\pm }\), and introduce the random variables

    $$\begin{aligned} R_+&= e^{\gamma \sup _{x \in D} G_+(x) + \gamma ^2 \sup _{y, z \in D} |f(y, z)|}, \qquad \\ R_-&= e^{\gamma \inf _{x \in D} G_-(x) - \frac{\gamma ^2}{2} \sup _{y \in D} |f_-(y, y)|} \end{aligned}$$

    which possess moments of all orders by Lemma 1. Let \(a > 0\) be such that

    $$\begin{aligned} P_{R_-} := \mathbb {P}(R_-> a) > 0. \end{aligned}$$

    Since \(\mathbb {E}[X(x)X(y)] + f_-(x, y) = K_0(x-v, y-v) + f_+(x, y)\) and

    $$\begin{aligned} \{M_{\gamma , g}(v, B(v, r) \cap D)> t\} \cap \{R_-> a\} \subset \{ R_- M_{\gamma , g}(v, B(v, r) \cap D) > at\}, \end{aligned}$$

    we have

    $$\begin{aligned}&P_{R_-}\mathbb {P}(M_{\gamma , g}(v, B(v, r) \cap D)> t) \le \mathbb {P}( R_- M_{\gamma , g}(v, B(v, r) \cap D)> at)\\&\quad \le \mathbb {P}\left( \int _{B(v, r) \cap D} \frac{e^{\gamma ^2 f(x, v)}e^{\gamma G_-(x) - \frac{\gamma ^2}{2} \mathbb {E}[G_-(x)^2]}}{|x-v|^{\gamma ^2}} M_{\gamma }(dx)> \frac{at}{||g||_\infty }\right) \\&\quad = \mathbb {P}\left( \int _{B(0, r) \cap (D - v)} e^{\gamma ^2 f(x+v, v)}e^{\gamma G_+(x+v) - \frac{\gamma ^2}{2} \mathbb {E}[G_+(x+v)^2]}\frac{{\overline{M}}_{\gamma }(dx)}{|x|^{\gamma ^2}}> \frac{at}{||g||_\infty }\right) \\&\quad \le \mathbb {P}\left( R_+ {\overline{M}}_{\gamma }(0, r)> \frac{at}{||g||_\infty }\right) \\&\quad \le \mathbb {E}\left[ \mathbb {P}\left( {\overline{M}}_{\gamma }(0, r) > \frac{at}{||g||_\infty R_+} \Big | R_+\right) \right] \le \frac{ C (||g||_\infty /a)^q \mathbb {E}\left[ R_+^q \right] }{t^q}. \end{aligned}$$

    The value of \(P_{R_-}^{-1} C (||g||_\infty /a)^{q} \mathbb {E}\left[ R_+^q \right] < \infty \) is independent of v so we are done.

\(\square \)
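As a quick consistency check on the exponents appearing in (i): with \(q = \frac{2d}{\gamma ^2} - 1\) and \(Q = \frac{\gamma }{2} + \frac{d}{\gamma }\),

$$\begin{aligned} dq = d\left( \frac{2d}{\gamma ^2} - 1\right) = \frac{2d}{\gamma }\left( \frac{d}{\gamma } - \frac{\gamma }{2}\right) = \frac{2d}{\gamma }(Q - \gamma ), \end{aligned}$$

so the expressions \(e^{dqL}\) and \(e^{\frac{2d}{\gamma }(Q-\gamma )L}\) obtained in the cases \(L \ge 0\) and \(L < 0\) indeed agree.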

The extrapolation principle

In this subsection we show the existence of the limit

$$\begin{aligned} \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1} e^{-\lambda / M_{\gamma , g}(v, A)}\right] \end{aligned}$$

and establish a formula for it.

Step 1: removal of the non-singular part. We first show the following.

Lemma 12

For any \(r > 0\) such that \(B(v, r) \subset A\),

$$\begin{aligned} \mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1} e^{-\lambda / M_{\gamma , g}(v, A)}\right] \overset{\lambda \rightarrow \infty }{=} \mathbb {E}\left[ M_{\gamma , g}(v, r)^{-1} e^{-\lambda / M_{\gamma , g}(v, r)}\right] + o(\lambda ^{-\frac{2d}{\gamma ^2}}). \end{aligned}$$

We emphasise that the error in (50) need not be uniform in v or r.


Proof

Starting with the localisation inequality (39), we know by the uniform bound (48) from Corollary 6 that

$$\begin{aligned} \mathbb {P}(M_{\gamma , g}(A) > t) \le \int _A \frac{1}{t} \mathbb {P}\left( M_{\gamma , g}(v, A) \ge t\right) g(v)dv \le \frac{C\int _A g(v)dv}{t^{\frac{2d}{\gamma ^2}}} \end{aligned}$$

for all \(t > 0\). In particular

$$\begin{aligned} \mathbb {P}\left( M_{\gamma , g}(v, A {\setminus } B(v, r))> t\right) \le \mathbb {P}\left( |r|^{-\gamma ^2} e^{\gamma ^2 \sup _{x, y} |f(x, y)|} M_{\gamma , g}(A)> t\right) \le \frac{C_{r,g}}{t^{\frac{2d}{\gamma ^2}}} \qquad \forall t > 0 \end{aligned}$$

for some \(C_{r, g} > 0\).

To finish our proof we only need to show matching upper/lower bounds for (50). For the lower bound, pick \(\delta \in (0, 1)\); then

$$\begin{aligned}&\mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1} e^{-\lambda / M_{\gamma , g}(v, A)}\right] \ge \mathbb {E}\left[ \frac{e^{-\lambda / M_{\gamma , g}(v, r)}}{M_{\gamma , g}(v, r) + M_{\gamma , g}(v, A \setminus B(v, r))} \right] \\&\quad \ge \mathbb {E}\left[ \frac{e^{-\lambda / M_{\gamma , g}(v, r)}}{M_{\gamma , g}(v, r)}\left( 1 + \frac{\lambda ^{1-\delta }}{M_{\gamma , g}(v, r)}\right) ^{-1} 1_{\{M_{\gamma , g}(v, r) \ge \lambda ^{1 - \frac{\delta }{4}}, M_{\gamma , g}(v, A \setminus B(v, r)) \le \lambda ^{1-\delta }\}}\right] \\&\quad \ge \left( 1 - \lambda ^{-\frac{3\delta }{4}}\right) \mathbb {E}\left[ \frac{e^{-\lambda / M_{\gamma , g}(v, r)}}{M_{\gamma , g}(v, r)} 1_{\{M_{\gamma , g}(v, r) \ge \lambda ^{1 - \frac{\delta }{4}}, M_{\gamma , g}(v, A \setminus B(v, r)) \le \lambda ^{1-\delta }\}}\right] \\&\quad = \left( 1 - \lambda ^{-\frac{3\delta }{4}}\right) \left\{ \mathbb {E}\left[ \frac{e^{-\lambda / M_{\gamma , g}(v, r)}}{M_{\gamma , g}(v, r)} \right] -\mathbb {E}\left[ \frac{e^{-\lambda / M_{\gamma , g}(v, r)}}{M_{\gamma , g}(v, r)} 1_{\{M_{\gamma , g}(v, r)\le \lambda ^{1 - \frac{\delta }{4}}\}}\right] \right. \\&\quad \qquad \qquad \qquad \qquad \left. - \mathbb {E}\left[ \frac{e^{-\lambda / M_{\gamma , g}(v, r)}}{M_{\gamma , g}(v, r)}1_{\{M_{\gamma , g}(v, r) \ge \lambda ^{1 - \frac{\delta }{4}}, M_{\gamma , g}(v, A {\setminus } B(v, r)) \ge \lambda ^{1-\delta }\}}\right] \right\} \\ \end{aligned}$$

The two subtracted expectations are negligible:
$$\begin{aligned} \mathbb {E}\left[ \frac{e^{-\lambda / M_{\gamma , g}(v, r)}}{M_{\gamma , g}(v, r)} 1_{\{M_{\gamma , g}(v, r)\le \lambda ^{1 - \frac{\delta }{4}}\}}\right] \le \lambda ^{-(1-\delta /4)} e^{-\lambda ^{\delta / 4}}&= o(\lambda ^{-\frac{2d}{\gamma ^2}}) \end{aligned}$$

while
$$\begin{aligned}&\mathbb {E}\left[ \frac{e^{-\lambda / M_{\gamma , g}(v, r)}}{M_{\gamma , g}(v, r)}1_{\{M_{\gamma , g}(v, r) \ge \lambda ^{1 - \frac{\delta }{4}}, M_{\gamma , g}(v, A \setminus B(v, r)) \ge \lambda ^{1-\delta }\}}\right] \\&\qquad \le \lambda ^{-(1-\delta /4)} \mathbb {P}\left( M_{\gamma , g}(v, A {\setminus } B(v, r)) \ge \lambda ^{1-\delta }\right) \le C_r \lambda ^{-(1-\delta )\left( \frac{2d}{\gamma ^2}+1\right) } \end{aligned}$$

and so we just pick \(\delta > 0\) small enough satisfying \((1-\delta )\left( \frac{2d}{\gamma ^2} + 1\right) > \frac{2d}{\gamma ^2}\) for our desired lower bound.
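For concreteness, the constraint on \(\delta \) solves to

$$\begin{aligned} (1-\delta )\left( \frac{2d}{\gamma ^2} + 1\right)> \frac{2d}{\gamma ^2} \quad \Longleftrightarrow \quad \delta \left( \frac{2d}{\gamma ^2} + 1\right)< 1 \quad \Longleftrightarrow \quad \delta < \frac{\gamma ^2}{2d + \gamma ^2}, \end{aligned}$$

and any such choice makes both error terms above \(o(\lambda ^{-\frac{2d}{\gamma ^2}})\).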

As for the upper bound,

$$\begin{aligned}&\mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1} e^{-\lambda / M_{\gamma , g}(v, A)}\right] = \mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1} e^{-\frac{\lambda }{M_{\gamma , g}(v, r)}\left( 1 + \frac{M_{\gamma , g}(v, A \setminus B(v,r))}{M_{\gamma , g}(v, r)}\right) ^{-1}}\right] \\&\qquad \le \mathbb {E}\left[ M_{\gamma , g}(v, r)^{-1} e^{-\frac{\lambda }{M_{\gamma , g}(v, r)}\left( 1 + \lambda ^{-\frac{3\delta }{4}}\right) ^{-1}} 1_{\{M_{\gamma , g}(v, r) \ge \lambda ^{1-\frac{\delta }{4}}, M_{\gamma , g}(v, A \setminus B(v,r)) \le \lambda ^{1 - \delta }\}} \right] \\&\qquad \qquad + \underbrace{e^{-\frac{\lambda ^{\delta /4}}{2}}\mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1}\right] + \lambda ^{-(1-\delta )}\mathbb {P}\left( M_{\gamma , g}(v, A \setminus B(v,r)) > \lambda ^{1 - \delta }\right) }_{ = o(\lambda ^{-2d/\gamma ^2})} \end{aligned}$$

The first term on the RHS satisfies
$$\begin{aligned}&\mathbb {E}\left[ M_{\gamma , g}(v, r)^{-1} e^{-\frac{\lambda }{M_{\gamma , g}(v, r)}\left( 1 + \lambda ^{-\frac{3\delta }{4}}\right) ^{-1}} 1_{\{M_{\gamma , g}(v, r) \ge \lambda ^{1-\frac{\delta }{4}}, M_{\gamma , g}(v, A \setminus B(v,r)) \le \lambda ^{1 - \delta }\}} \right] \\&\qquad \le \underbrace{e^{\lambda ^{-\delta /2}}}_{= 1+o(1)}\mathbb {E}\Bigg [M_{\gamma , g}(v, r)^{-1} e^{-\frac{\lambda }{M_{\gamma , g}(v, r)}} 1_{\{M_{\gamma , g}(v, r) \ge \lambda ^{1-\frac{\delta }{4}}, M_{\gamma , g}(v, A \setminus B(v,r)) \le \lambda ^{1 - \delta }\}} \Bigg ]\\&\qquad \le (1+o(1))\mathbb {E}\Bigg [M_{\gamma , g}(v, r)^{-1} e^{-\frac{\lambda }{M_{\gamma , g}(v, r)}}\Bigg ] + o(\lambda ^{-\frac{2d}{\gamma ^2}}) \end{aligned}$$

where the last inequality follows from calculations similar to those in the proof of the lower bound. This concludes the proof of (50). \(\square \)

Step 2: extrapolation. For \(s \in [0,1]\), define \(Z_s(x) = \sqrt{s}X(x) + \sqrt{1-s} Y_{f(v, v)}(x - v)\), \(M_{\gamma }^s(dx) = e^{\gamma Z_s(x) - \frac{\gamma ^2}{2} \mathbb {E}[Z_s(x)^2]}dx\) and

$$\begin{aligned} M_{\gamma , g}^s(v, r)&:= \int _{B(v, r)}\frac{e^{\gamma ^2 f(v,v)} g(v) M_{\gamma }^s(dx)}{|x-v|^{\gamma ^2}}, \qquad \nonumber \\ \varphi (s)&:= \mathbb {E}\left[ \frac{1}{M_{\gamma , g}^s(v, r)} e^{-\lambda / M_{\gamma , g}^s(v, r)}\right] \end{aligned}$$

where \(r \in (0, r_d(f(v,v))]\). Our goal is to prove the following extrapolation result.

Lemma 13

Suppose \(v \in D\) satisfies \(g(v) > 0\). Then

$$\begin{aligned} \nonumber \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \varphi (1)&= \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \varphi (0)\\&= \varGamma \left( 1+\frac{2d}{\gamma ^2}\right) e^{\frac{2d}{\gamma }(Q-\gamma ) f(v, v)} g(v)^{\frac{2d}{\gamma ^2}-1} \frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma ) + 1} {\overline{C}}_{\gamma , d}. \end{aligned}$$

In particular,
$$\begin{aligned} \nonumber&\lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \mathbb {E}\left[ M_{\gamma , g}(v, r)^{-1} e^{-\lambda / M_{\gamma , g}(v, r)}\right] \\&\qquad = \varGamma \left( 1+\frac{2d}{\gamma ^2}\right) e^{\frac{2d}{\gamma }(Q-\gamma ) f(v, v)} g(v)^{\frac{2d}{\gamma ^2}-1}\frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma ) + 1} {\overline{C}}_{\gamma , d}. \end{aligned}$$


Proof

We first recall that the definition of \(\varphi (s)\) depends on r, but the limits in (52), if they exist, do not, by Lemma 12. Also

$$\begin{aligned} \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \varphi (0)&= \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \mathbb {E}\left[ \left( g(v){\overline{M}}_{\gamma }^{f(v,v)}(v, r)\right) ^{-1} e^{-\lambda /\left( g(v){\overline{M}}_{\gamma }^{f(v,v)}(v, r)\right) }\right] \\&= \varGamma \left( 1+\frac{2d}{\gamma ^2}\right) e^{\frac{2d}{\gamma }(Q-\gamma ) f(v, v)} g(v)^{\frac{2d}{\gamma ^2}-1} \frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma ) + 1} {\overline{C}}_{\gamma , d} \end{aligned}$$

by combining Corollary 6 (\(L = f(v, v)\)) with Lemma 6. From now on we shall focus on the equality of the two limits (52).
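Before doing so, let us record where the prefactor comes from, by a formal computation. If \(\mathbb {P}(M > t) \sim {\overline{C}} t^{-q}\) with \(q = \frac{2}{\gamma }(Q-\gamma )\), then writing \(T = 1/M\), so that \(\mathbb {P}(T \le t) \sim {\overline{C}} t^{q}\) as \(t \rightarrow 0^+\), we expect

$$\begin{aligned} \mathbb {E}\left[ T e^{-\lambda T}\right] \approx \int _0^\infty t e^{-\lambda t} \, q {\overline{C}} t^{q-1} dt = \frac{q {\overline{C}} \, \varGamma (q+1)}{\lambda ^{q+1}} = \varGamma (q+2) \frac{q}{q+1} \frac{{\overline{C}}}{\lambda ^{q+1}}, \end{aligned}$$

and \(q+1 = \frac{2d}{\gamma ^2}\), which matches the factor \(\varGamma (1+\frac{2d}{\gamma ^2}) \frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma )+1}\) appearing in (52); this heuristic is consistent with the role played by Lemma 6 above.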

For any \(\epsilon > 0\) there exists some \(r = r(\epsilon ) \in (0, r_d(f(v, v))]\) such that

$$\begin{aligned} \left| f(x, y) - f(v, v)\right| \le \epsilon \end{aligned}$$

for all \(x, y \in B(v, r)\) by continuity. If we write \(F(x) = x^{-1} e^{-\lambda / x}\), then \(F''(x) = e^{-\lambda / x}\left( \frac{2}{x^{3}} - \frac{4\lambda }{x^4} + \frac{\lambda ^2}{x^5}\right) \), and Corollary 3 yields

$$\begin{aligned}&|\varphi (1) - \varphi (0)| \nonumber \\&\quad \le \frac{\epsilon }{2} \int _0^1 \mathbb {E}\left[ e^{-\lambda / M_{\gamma , g}^s(v, r)} \left( \frac{2}{M_{\gamma , g}^s(v, r)} + \frac{4\lambda }{M_{\gamma , g}^s(v, r)^2} + \frac{\lambda ^2}{M_{\gamma , g}^s(v, r)^3}\right) \right] ds. \end{aligned}$$
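The formula for \(F''\) quoted above follows from a direct computation:

$$\begin{aligned} F'(x) = e^{-\lambda /x}\left( \frac{\lambda }{x^{3}} - \frac{1}{x^{2}}\right) , \qquad F''(x) = e^{-\lambda /x}\left( \frac{2}{x^{3}} - \frac{4\lambda }{x^{4}} + \frac{\lambda ^2}{x^{5}}\right) , \end{aligned}$$

so that in particular \(|F''(x)| \le e^{-\lambda /x}\left( \frac{2}{x^{3}} + \frac{4\lambda }{x^{4}} + \frac{\lambda ^2}{x^{5}}\right) \) for all \(x > 0\).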

Going through the proof of Corollary 6 (ii) again, the same argument shows that there exists some \(C > 0\) independent of \(s \in [0,1]\) and \(v \in D\) such that

$$\begin{aligned} \mathbb {P}\big (M_{\gamma , g}^s(v, r)> t\big ) \le \frac{C}{t^{\frac{2d}{\gamma ^2}- 1}} \qquad \forall t > 0. \end{aligned}$$

By Lemma 6, the integrand in (55) is uniformly bounded by \(C' \lambda ^{-\frac{2d}{\gamma ^2}}\) for some \(C' > 0\) which means that

$$\begin{aligned} \limsup _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} |\varphi (1) - \varphi (0)| \le \frac{C' \epsilon }{2}. \end{aligned}$$

Since \(\epsilon > 0\) is arbitrary, we have \(\lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \varphi (1) = \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \varphi (0)\).

Finally, let \(\epsilon , r > 0\) be chosen according to (54) and the additional constraint that

$$\begin{aligned} \left| \frac{g(x)}{g(v)} - 1\right| \le \epsilon \qquad \forall x \in B(v, r) \end{aligned}$$

which is possible because \(g(v) > 0\) and g is continuous. Then

$$\begin{aligned}&\liminf _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \mathbb {E}\left[ M_{\gamma , g}(v, r)^{-1} e^{-\lambda / M_{\gamma , g}(v, r)}\right] \\&\qquad \ge \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} (1+\epsilon )^{-1} e^{-\gamma ^2 \epsilon }\mathbb {E}\left[ M_{\gamma , g}^1(v, r)^{-1} e^{-\lambda (1+\epsilon ) e^{\gamma ^2 \epsilon } / M_{\gamma , g}^1(v, r)}\right] \\&\qquad = \left( (1+\epsilon )e^{\gamma ^2 \epsilon }\right) ^{-\left( 1 + \frac{2d}{\gamma ^2}\right) } \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \varphi (1), \\&\limsup _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \mathbb {E}\left[ M_{\gamma , g}(v, r)^{-1} e^{-\lambda / M_{\gamma , g}(v, r)}\right] \\&\qquad \le \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} (1+\epsilon ) e^{\gamma ^2 \epsilon }\mathbb {E}\left[ M_{\gamma , g}^1(v, r)^{-1} e^{-\lambda (1+\epsilon )^{-1} e^{-\gamma ^2 \epsilon } / M_{\gamma , g}^1(v, r)}\right] \\&\qquad = \left( (1+\epsilon )e^{\gamma ^2 \epsilon }\right) ^{\left( 1 + \frac{2d}{\gamma ^2}\right) } \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \varphi (1). \end{aligned}$$

Given that the \(\liminf \)/\(\limsup \) do not depend on r by Lemma 12, \(\epsilon \) can be made arbitrarily small and the claim (53) follows. \(\square \)

Proof of Theorem 1

By Corollary 6 (ii) and Lemma 6, we see that

$$\begin{aligned} \lambda ^{\frac{2d}{\gamma ^2}} \mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1} e^{-\lambda / M_{\gamma , g}(v, A)}\right] \le C' \qquad \forall v \in A. \end{aligned}$$

With an application of dominated convergence, the localisation identity (38) yields

$$\begin{aligned}&\lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \mathbb {E}\left[ e^{-\lambda / M_{\gamma , g}(A)}\right] \\&\qquad = \int _A \left( \lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}}\mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1} e^{-\lambda / M_{\gamma , g}(v, A)} \right] \right) g(v)dv. \end{aligned}$$

We substitute the pointwise limit of the integrand from Lemma 13 and obtain

$$\begin{aligned}&\lim _{\lambda \rightarrow \infty } \lambda ^{\frac{2d}{\gamma ^2}} \mathbb {E}\left[ e^{-\lambda / M_{\gamma , g}(A)}\right] \\&\qquad = \varGamma \left( 1+ \frac{2d}{\gamma ^2}\right) \left( \int _A e^{\frac{2d}{\gamma }(Q-\gamma ) f(v, v)} g(v)^{\frac{2d}{\gamma ^2}} dv\right) \frac{\frac{2}{\gamma }(Q-\gamma )}{\frac{2}{\gamma }(Q-\gamma ) + 1} {\overline{C}}_{\gamma , d}. \end{aligned}$$

The tail asymptotics of \(M_{\gamma , g}(A)\) then follows immediately from Corollary 4. \(\square \)

Remark 6

In this section we explain how Theorem 1 may be extended to treat densities (6). For simplicity we treat the case with one singularity, i.e.

$$\begin{aligned} g(x) = |x|^{-\gamma \alpha } {\overline{g}}(x), \qquad \alpha \in \left( 0, \frac{\gamma }{2}\right) \end{aligned}$$

where \(0 \in {\overline{D}} \subset B(0, 1)\) without loss of generality.Footnote 11 We still follow the localisation trick as in Lemma 10, i.e.

$$\begin{aligned} \mathbb {E}\left[ e^{-\lambda / M_{\gamma , g}(A)}\right] = \int _A \mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1} e^{-\lambda / M_{\gamma , g}(v, A)}\right] g(v) dv \end{aligned}$$

where
$$\begin{aligned} M_{\gamma , g}(v, A) := \int _A \frac{e^{\gamma ^2 f(x, v)} {\overline{g}}(x) M_{\gamma }(dx)}{|x|^{\gamma \alpha }|x-v|^{\gamma ^2}}. \end{aligned}$$

We now need to derive an estimate analogous to (48) when g has a singularity. If \(x = v+u \in B(v, \frac{1}{2}|v|)^c\) (equivalently \(|u| \ge \frac{1}{2}|v|\)), note that

$$\begin{aligned} \frac{|x|^{\gamma ^2} |x-v|^{\gamma \alpha }}{|x|^{\gamma \alpha } |x-v|^{\gamma ^2}} = \left( \frac{|x|}{|x-v|}\right) ^{\gamma (\gamma - \alpha )} = \left| \frac{v}{|u|} + \frac{u}{|u|}\right| ^{\gamma (\gamma - \alpha )} \le 3^{\gamma (\gamma - \alpha )} \end{aligned}$$

since \(|v| / |u| \le 2\) and \(\gamma > \alpha \).

In other words, we have

$$\begin{aligned} M_{\gamma , g}(v, A)&\le \int _{A \cap B(v, \frac{1}{2}|v|)} \frac{e^{\gamma ^2 f(x, v)} {\overline{g}}(x) M_{\gamma }(dx)}{|x|^{\gamma \alpha }|x-v|^{\gamma ^2}}\\&\quad + 3^{\gamma (\gamma - \alpha )} \int _{A \cap B(v, \frac{1}{2}|v|)^c} \frac{e^{\gamma ^2 f(x, v)} {\overline{g}}(x) M_{\gamma }(dx)}{|x|^{\gamma ^2}|x-v|^{\gamma \alpha }}\\&\le C |v|^{-\gamma \alpha } \left[ M_{\gamma , {\overline{g}}}(v, A \cap B(v, 1)) + M_{\gamma , {\overline{g}}}(0, A)\right] \end{aligned}$$

and hence

$$\begin{aligned}&\mathbb {P}\left( M_{\gamma , g}(v, A)> t\right) \le \mathbb {P}\left( M_{\gamma , {\overline{g}}}(v, A \cap B(v, 1)) + M_{\gamma , {\overline{g}}}(0, A)> C^{-1} |v|^{\gamma \alpha }t\right) \\&\qquad \le \mathbb {P}\left( M_{\gamma , {\overline{g}}}(v, A \cap B(v, 1))> \frac{|v|^{\gamma \alpha }t}{2C}\right) + \mathbb {P}\left( M_{\gamma , {\overline{g}}}(0, A)> \frac{|v|^{\gamma \alpha }t}{2C}\right) \\&\qquad \le \frac{C'}{\left( |v|^{\gamma \alpha } t\right) ^{\frac{2d}{\gamma ^2}-1}} \end{aligned}$$

for some \(C' > 0\) independent of t and v by Corollary 6 (ii). Performing computations similar to those in the proof of Lemma 6, one can check that there exists some constant \(C'' > 0\) independent of \(\lambda > 0\) and \(v \in A\) such that

$$\begin{aligned} \lambda ^{\frac{2d}{\gamma ^2}} \mathbb {E}\left[ M_{\gamma , g}(v, A)^{-1} e^{-\lambda / M_{\gamma , g}(v, A)}\right] \le C'' |v|^{-\gamma \alpha \left( \frac{2d}{\gamma ^2}-1\right) } \end{aligned}$$

which is integrable against g(v)dv since \(\alpha < \frac{\gamma }{2}\) and \(\gamma \alpha (\frac{2d}{\gamma ^2}-1) + \gamma \alpha = \gamma \alpha \frac{2d}{\gamma ^2} < d\). The rest of the analysis in Sect. 3.2 is essentially identical,Footnote 12 and we arrive at the same asymptotics by dominated convergence as before.
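The integrability claim can be checked directly: assuming, as in the setting of (6), that \({\overline{g}}\) is bounded on \({\overline{D}}\), and recalling that \({\overline{D}} \subset B(0,1)\),

$$\begin{aligned} \int _A |v|^{-\gamma \alpha \left( \frac{2d}{\gamma ^2}-1\right) } g(v) dv = \int _A |v|^{-\frac{2d\alpha }{\gamma }} {\overline{g}}(v) dv \le ||{\overline{g}}||_\infty \int _A |v|^{-\frac{2d\alpha }{\gamma }} dv < \infty , \end{aligned}$$

because \(\frac{2d\alpha }{\gamma } < d\) precisely when \(\alpha < \frac{\gamma }{2}\).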


  1. 1.

    The existence of density was proved in [30] for scale invariant kernels but may be extended to other fields as pointed out by one of the referees. Indeed, by diagonalising the covariance kernel and forcing the inclusion of a constant function by a Gram-Schmidt procedure, we may pull out a global Gaussian random variable of the field and hence \(M_{\gamma }(A)\) may be seen as the product of a lognormal and an independent variable (the GMC associated to the residual log-correlated field) which must have a probability density.

  2. 2.

    In the sense that \(\int _A g(x) dx > 0\). In particular A has non-trivial Lebesgue measure.

  3. 3.

    The statement is also reproduced as Lemma 2 in our Sect. 2.2.

  4. 4.

    Evaluated at \(\gamma \); see the general definition of \({\overline{C}}_{\gamma , d}(\alpha )\) in Appendix A.

  5. 5.

    The theorem of Belyaev actually concerns stationary kernels in \(d=1\), but this implies the statement in higher dimension because we may view \(G_\pm \), with \(d-1\) coordinates fixed, as Gaussian fields in 1 dimension.

  6. 6.

    Our definition differs from the usual one by the factor \(\sqrt{\pi /2}\) for aesthetic purpose.

  7. 7.

    This was first proved in \(d=2\), for GFF with Dirichlet boundary conditions in [1], and subsequently extended in [21] to log-correlated fields (3) with \(f \in H_{\mathrm {loc}}^{d + \epsilon }\) in dimension \(d = 2\).

  8. 8.

    We say a function \(L: \mathbb {R}_+ \rightarrow \mathbb {R}_+\) is slowly varying at the origin if we have \(\lim _{x \rightarrow 0^+} L(tx)/L(x) = 1\) for any \(t > 0\).

  9. 9.

    The distribution of a real-valued random variable M is arithmetic if \(M \in h \mathbb {Z}\) almost surely for some \(h > 0\), and non-arithmetic otherwise.

  10. 10.

    Indeed (39) is an equality if the distribution \(M_{\gamma , g}(v, A)\) is continuous, but this is only proved in the special case when the covariance kernel is exact. We are happy with the inequality here because we only need the estimate \(\mathbb {P}(M_{\gamma , g}(A) > t) \le t^{-1} \int _A \mathbb {P}(M_{\gamma , g}(v, A) \ge t) dv\) later.

  11. 11.

    We can always shift and rescale our domain and the Gaussian field on the new domain will still be log-correlated.

  12. 12.

    The only change is that the formulae in Lemma 13 do not apply when \(v = 0\), which is still sufficient for an application of dominated convergence.

  13. 13.

    We only focus on \(d=2\); for \(d=1\) a similar proof shows that \({\overline{C}}_{\gamma , 1}\) coincides with the boundary unit volume reflection coefficient, see [28, Sect. 4.3].


  1. 1.

    Aru, J., Powell, E., Sepúlveda, A.: Critical Liouville measure as a limit of subcritical measures. Preprint arXiv:1802.08433

  2. 2.

    Barral, J., Jin, X.: On exact scaling log-infinitely divisible cascades. Probab. Theory Relat. Fields 160(3–4), 521–565 (2014)

  3. 3.

    Barral, J., Kupiainen, A., Nikula, M., Saksman, E., Webb, C.: Basic properties of critical lognormal multiplicative chaos. Ann. Probab. 43(5), 2205–2249 (2015)

  4. 4.

    Baverez, G., Wong, M.D.: Fusion asymptotics for Liouville correlation functions. Preprint arXiv:1807.10207

  5. 5.

    Belyaev, Y.K.: Continuity and Hölder’s conditions for sample functions of stationary Gaussian processes. In: Proceedings of Fourth Berkeley Symposium Mathematical Statistics and Probability (Berkeley, CA, 1960), vol. 2, University of California Press, Berkeley, pp. 23–33

  6. 6.

    Berestycki, N.: An elementary approach to Gaussian multiplicative chaos. Electron. Commun. Probab. 22(27), 1–12 (2017)

  7. 7.

    Berestycki, N., Webb, C., Wong, M.D.: Random Hermitian matrices and Gaussian multiplicative chaos. Probab. Theory Relat. Fields 172, 103–189 (2018). https://doi.org/10.1007/s00440-017-0806-9

  8. 8.

    Biskup, M., Louidor, O.: Conformal symmetries in the extremal process of two-dimensional discrete Gaussian Free Field. Preprint arXiv:1410.4676v2

  9. 9.

    Biskup, M., Louidor, O.: Extreme local extrema of two-dimensional discrete Gaussian free field. Commun. Math. Phys. 345, 271–304 (2016)

  10. 10.

    Biskup, M., Louidor, O.: Full extremal process, cluster law and freezing for the two-dimensional discrete Gaussian Free Field. Adv. Math. 330, 589–687 (2018)

  11. 11.

    Bramson, M., Zeitouni, O.: Tightness of the recentered maximum of the two-dimensional discrete Gaussian free field. Commun. Pure Appl. Math. 65, 1–20 (2011)

  12. 12.

    David, F., Kupiainen, A., Rhodes, R., Vargas, V.: Renormalizability of Liouville quantum field theory at the Seiberg bound. Electron. J. Probab. (2017) https://doi.org/10.1214/17-EJP113

  13. 13.

    David, F., Kupiainen, A., Rhodes, R., Vargas, V.: Liouville quantum gravity on the Riemann sphere. Commun. Math. Phys. 342, 869 (2016). https://doi.org/10.1007/s00220-016-2572-4

  14. 14.

    Duchon, J., Robert, R., Vargas, V.: Forecasting volatility with the multifractal random walk model. Math. Finance 22(1), 83–108 (2012)

  15. 15.

    Duplantier, B., Miller, J., Sheffield, S.: Liouville quantum gravity as a mating of trees. Preprint arXiv:1409.7055

  16. 16.

    Duplantier, B., Sheffield, S.: Liouville quantum gravity and KPZ. Invent. Math. 185(2), 333–393 (2011)

  17. 17.

    Duplantier, B., Rhodes, R., Sheffield, S., Vargas, V.: Renormalization of critical Gaussian multiplicative chaos and KPZ relation. Commun. Math. Phys. 330, 283–330 (2014)

  18. 18.

Feller, W.: An Introduction to Probability Theory and Its Applications, vol. II. Wiley, New York (1971)

  19. 19.

    Fyodorov, Y., Bouchaud, J.-P.: Freezing and extreme value statistics in a Random Energy Model with logarithmically correlated potential. J. Phys. A: Math. Theor. 41, 372001 (2008)

  20. 20.

    Goldie, C.M.: Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab. 1(1), 126–166 (1991)

  21. 21.

    Junnila, J., Saksman, E., Webb, C.: Decompositions of log-correlated fields with applications. Preprint arXiv:1808.06838

  22. 22.

    Kahane, J.-P.: Sur le chaos multiplicatif. Ann. Sci. Math. Québec 9(2), 105–150 (1985)

  23. 23.

    Kupiainen, A., Rhodes, R., Vargas, V.: Integrability of Liouville theory: proof of the DOZZ formula. Preprint arXiv:1707.08785

  24. 24.

    Lambert, G., Ostrovsky, D., Simm, N.: Subcritical multiplicative chaos for regularized counting statistics from random matrix theory. Commun. Math. Phys. 360, 1 (2018). https://doi.org/10.1007/s00220-018-3130-z

  25. 25.

    Nikula, M., Saksman, E., Webb, C.: Multiplicative chaos and the characteristic polynomial of the CUE: the \(L^1\)-phase. Preprint arXiv:1806.01831

  26. 26.

    Remy, G., Zhu, T.: The distribution of Gaussian multiplicative chaos on the unit interval. Preprint arXiv:1804.02942

  27. 27.

    Remy, G.: The Fyodorov–Bouchaud formula and Liouville conformal field theory. Preprint arXiv:1710.06897

  28. 28.

Rhodes, R., Vargas, V.: The tail expansion of Gaussian multiplicative chaos and the Liouville reflection coefficient. Preprint arXiv:1710.02096

  29. 29.

    Rhodes, R., Vargas, V.: Multidimensional multifractal random measures. Electron. J. Probab. 15(9), 241–258 (2010)

  30. 30.

    Robert, R., Vargas, V.: Gaussian multiplicative chaos revisited. Ann. Probab. 38(2), 605–631 (2010)

  31. 31.

    Saksman, E., Webb, C.: The Riemann zeta function and Gaussian multiplicative chaos: statistics on the critical line. Preprint arXiv:1609.00027

  32. 32.

    Webb, C.: The characteristic polynomial of a random unitary matrix and Gaussian multiplicative chaos: the \(L^2\)-phase. Electron. J. Probab. 20, paper no. 104, 21 pp (2015)

  33. 33.

    Wong, M.D.: Tail universality of critical Gaussian multiplicative chaos. Preprint arXiv:1912.02755



The author would like to thank Rémi Rhodes and Vincent Vargas for suggesting the problem, and Nathanaël Berestycki for useful discussions. He would also like to thank the two anonymous referees for suggestions on the presentation.

Author information

Correspondence to Mo Dick Wong.


The author was a PhD student at Cambridge Centre for Analysis, supported by the Croucher Foundation Scholarship and EPSRC grant EP/L016516/1. He would also like to acknowledge the support from Universität Wien where most of the work was carried out.

Reflection coefficient of GMC


In this appendix we explain why \({\overline{C}}_{\gamma , d}\) may be seen as a natural d-dimensional analogue of the Liouville reflection coefficients evaluated at \(\gamma \). To commence with, we define \({\overline{C}}_{\gamma , d}(\alpha )\), which we call the reflection coefficient of GMC, for each \(\alpha \in (\frac{\gamma }{2}, Q)\) as follows.

Proposition 1

Let \({\overline{M}}_{\gamma , \alpha } (0, r) = \int _{|x| \le r} |x|^{-\gamma \alpha } {\overline{M}}_{\gamma }(dx)\) for \(\alpha \in (\frac{\gamma }{2}, Q)\). Then there exists some constant \({\overline{C}}_{\gamma , d}(\alpha ) > 0\) independent of \(r \in (0, r_d)\) such that

$$\begin{aligned} {\overline{C}}_{\gamma , d}(\alpha ) \nonumber&= \lim _{t \rightarrow \infty } t^{\frac{2}{\gamma }(Q - \alpha )} \mathbb {P}\left( {\overline{M}}_{\gamma , \alpha }(0, r) > t \right) \\&= \lim _{\lambda \rightarrow 0^+} \frac{1}{\frac{2}{\gamma }(Q - \alpha )} \frac{\mathbb {E}\left[ {\overline{M}}_{\gamma , \alpha }(0, r)^{\frac{2}{\gamma }(Q-\alpha )} e^{-\lambda {\overline{M}}_{\gamma , \alpha }(0, r)}\right] }{-\log \lambda }. \end{aligned}$$


The first equality can be obtained by a straightforward adaptation of the proof of Lemma 11, and the second equality follows from Lemma 7. \(\square \)
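The second equality reflects the Tauberian mechanism underlying the proof. As a heuristic sketch (not the actual argument of Lemma 7): write \(p = \frac{2}{\gamma }(Q-\alpha )\) and suppose \(\mathbb {P}(Y > t) \sim C t^{-p}\) as \(t \rightarrow \infty \). Integrating by parts,

```latex
% Heuristic Tauberian computation: a power-law tail with exponent p
% makes the damped p-th moment diverge logarithmically as lambda -> 0+.
\begin{aligned}
\mathbb{E}\big[Y^{p} e^{-\lambda Y}\big]
  &= \int_0^\infty \big(p t^{p-1} - \lambda t^{p}\big) e^{-\lambda t}\,
     \mathbb{P}(Y > t)\, dt \\
  &\approx C \int_1^{\infty} \Big(\frac{p}{t} - \lambda\Big) e^{-\lambda t}\, dt
   = C\, p \log \frac{1}{\lambda} + O(1),
\end{aligned}
```

so dividing by \(-p \log \lambda \) recovers the tail constant \(C\). With \(Y = {\overline{M}}_{\gamma , \alpha }(0, r)\), this is why the two characterisations in the proposition agree.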

We now show that \({\overline{C}}_{\gamma , d}(\alpha )\) coincides with the Liouville reflection coefficients.

Proposition 2

When \(d=2\), the reflection coefficient \({\overline{C}}_{\gamma , 2}(\alpha )\) of GMC coincides with the unit volume Liouville reflection coefficient \({\overline{R}}(\alpha )\) defined in [28].


Using the notation of [28], we can write

$$\begin{aligned} {\overline{M}}_{\gamma , \alpha } (0, 1) \overset{d}{=} e^{\gamma M} \int _{-L_{-M}}^\infty e^{\gamma {\mathcal B }_s^{\alpha }} Z_s ds =: e^{\gamma M} {\mathcal I }(L_{-M}) \end{aligned}$$


where

  • \(Z_s ds\) is the GMC associated with the lateral noise of the GFF;

  • \(( {\mathcal B }_s^{\alpha })_{s \in \mathbb {R}}\) is an independent two-sided Brownian motion with negative drift \(\alpha - Q\) conditioned to stay non-positive;

  • M is an independent \(\mathrm {Exp}(2(Q-\alpha ))\) random variable; and

  • \(L_{-M}\) is the last time \(( {\mathcal B }_s^\alpha )_{s \ge 0}\) hits \(-M\).
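For later use, note that \(e^{\gamma M}\) is Pareto-distributed; this is a one-line computation from the exponential law of M:

```latex
% Tail of e^{gamma M} for M ~ Exp(2(Q - alpha)), valid for t >= 1.
\mathbb{P}\big(e^{\gamma M} > t\big)
  = \mathbb{P}\Big(M > \tfrac{1}{\gamma}\log t\Big)
  = e^{-\frac{2(Q-\alpha)}{\gamma}\log t}
  = t^{-\frac{2}{\gamma}(Q-\alpha)},
```

a pure power-law tail with exactly the exponent appearing in Proposition 1.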

Applying (56) and the decomposition above, we have

$$\begin{aligned} {\overline{C}}_{\gamma , 2}(\alpha ) = \lim _{\lambda \rightarrow 0^+} \frac{1}{\frac{2}{\gamma }(Q - \alpha )} \mathbb {E}\left[ {\mathcal I }(L_{-M})^{\frac{2}{\gamma }(Q- \alpha )} \left( \frac{ (e^{\gamma M})^{\frac{2}{\gamma }(Q-\alpha )} e^{-\lambda e^{\gamma M} {\mathcal I }(L_{-M})}}{-\log \lambda }\right) \right] . \end{aligned}$$

When \(\lambda \rightarrow 0^+\), the above expectation is dominated by the event that the exponential variable M is large, in which case \(L_{-M}\) is very large and \( {\mathcal I }(L_{-M})\) behaves like \( {\mathcal I }(\infty )\), which does not depend on M. To make this rigorous, we prove matching upper and lower bounds. Since \(\mathbb {P}(e^{\gamma M} > t) = t^{-\frac{2}{\gamma }(Q - \alpha )}\) for \(t \ge 1\), a straightforward computation shows that

$$\begin{aligned} \mathbb {E}\left[ \left( e^{\gamma M}\right) ^{\frac{2}{\gamma }(Q-\alpha )} e^{-\lambda e^{\gamma M}}\right] = -\frac{2}{\gamma }(Q-\alpha ) e^{-\lambda } \log \lambda + O(1) \end{aligned}$$
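This logarithmic divergence is easy to check numerically. The sketch below is not part of the paper: the exponent \(p\) stands in for \(\frac{2}{\gamma }(Q-\alpha )\) and its value is arbitrary. For a Pareto variable U with \(\mathbb {P}(U > t) = t^{-p}\), \(t \ge 1\), the expectation reduces to \(\mathbb {E}[U^p e^{-\lambda U}] = p \int _1^\infty e^{-\lambda u} u^{-1}\, du\), which we evaluate by quadrature and compare against \(-p \log \lambda \):

```python
import math

def damped_moment_ratio(lam: float, p: float = 1.5, step: float = 1e-3) -> float:
    """Ratio E[U^p e^{-lam*U}] / (-p*log(lam)) for U Pareto with P(U > t) = t^{-p}.

    U has density p*u^{-p-1} on [1, inf), so the expectation equals
    p * int_1^inf e^{-lam*u} u^{-1} du; substituting u = e^v turns this into
    p * int_0^inf exp(-lam * e^v) dv, evaluated here by the trapezoidal rule.
    Note that p cancels in the ratio, so the result does not depend on it.
    """
    # The integrand is ~1 for v << log(1/lam) and decays double-exponentially
    # afterwards, so truncating at log(1/lam) + 10 loses a negligible amount.
    n = int((math.log(1.0 / lam) + 10.0) / step)
    total = 0.5 * (math.exp(-lam) + math.exp(-lam * math.exp(n * step)))
    for k in range(1, n):
        total += math.exp(-lam * math.exp(k * step))
    expectation = p * total * step
    return expectation / (-p * math.log(lam))

# The ratio approaches 1 as lam -> 0+; the O(1) term fades like 1/log(1/lam).
print(damped_moment_ratio(1e-6), damped_moment_ratio(1e-12))
```

The discrepancy from 1 shrinks like \(1/\log (1/\lambda )\), matching the O(1) error term in the display above.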

where the error O(1) is bounded independently of \(\lambda > 0\). Using the fact that \( {\mathcal I }(\infty )\) has moments of all orders smaller than \(\frac{4}{\gamma ^2}\) ([23, Lemma 2.8]), we deduce that

$$\begin{aligned}&{\overline{C}}_{\gamma , 2}(\alpha )\\&\quad \le \lim _{\lambda \rightarrow 0^+} \frac{1}{\frac{2}{\gamma }(Q - \alpha )} \mathbb {E}\left[ {\mathcal I }(\infty )^{\frac{2}{\gamma }(Q- \alpha )} \mathbb {E}\left[ \left( \frac{ (e^{\gamma M})^{\frac{2}{\gamma }(Q-\alpha )} e^{-\lambda e^{\gamma M} {\mathcal I }(0)}}{-\log \lambda }\right) \bigg | {\mathcal I }(0)\right] \right] \\&\quad = \mathbb {E}\left[ {\mathcal I }(\infty )^{\frac{2}{\gamma }(Q-\alpha )}\right] \end{aligned}$$

which is the desired upper bound. Now fix any \(T>0\); we have

$$\begin{aligned}&{\overline{C}}_{\gamma , 2}(\alpha )\\&\quad \ge \lim _{\lambda \rightarrow 0^+} \frac{1}{\frac{2}{\gamma }(Q - \alpha )} \mathbb {E}\left[ {\mathcal I }(L_{-T})^{\frac{2}{\gamma }(Q- \alpha )} \mathbb {E}\left[ \left( \frac{ (e^{\gamma M})^{\frac{2}{\gamma }(Q-\alpha )} e^{-\lambda e^{\gamma M} {\mathcal I }(\infty )}}{-\log \lambda }\right) \bigg | {\mathcal I }(\infty )\right] \right] \\&\qquad - \lim _{\lambda \rightarrow 0^+} \frac{1}{\frac{2}{\gamma }(Q - \alpha )} \mathbb {E}\left[ {\mathcal I }(\infty )^{\frac{2}{\gamma }(Q- \alpha )}\left( \frac{ (e^{\gamma M})^{\frac{2}{\gamma }(Q-\alpha )} e^{-\lambda e^{\gamma M} {\mathcal I }(\infty )}}{-\log \lambda }\right) 1_{\{M \le T\}}\right] \\&\quad = \mathbb {E}\left[ {\mathcal I }(L_{-T})^{\frac{2}{\gamma }(Q-\alpha )}\right] . \end{aligned}$$

Since T is arbitrary, we may send \(T \rightarrow \infty \) so that \(L_{-T} \rightarrow \infty \) and obtain \({\overline{C}}_{\gamma , 2}(\alpha ) \ge \mathbb {E}\left[ {\mathcal I }(\infty )^{\frac{2}{\gamma }(Q-\alpha )}\right] \). This matches our upper bound and is precisely the probabilistic definition of the Liouville reflection coefficient \({\overline{R}}(\alpha )\) in [28, equation (1.10)]. \(\square \)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wong, M.D. Universal tail profile of Gaussian multiplicative chaos. Probab. Theory Relat. Fields (2020). https://doi.org/10.1007/s00440-020-00960-3



Keywords

  • Gaussian multiplicative chaos
  • Log-correlated Gaussian fields

Mathematics Subject Classification

  • Primary 60G57
  • Secondary 60G15
  • 28A80