1 Introduction

The Yang–Mills measure, associated to a (two-dimensional) surface \(\varSigma \) and to a compact Lie group G, is a probability measure on (generalized) connections of principal G-bundles over \(\varSigma \). It was introduced in a series of works by Gross, King and Sengupta  [29], Fine [21], Driver [17], Witten [52, 53], Sengupta [48] and Lévy [38], as a mathematical version of Euclidean Yang–Mills field theory. See also [11] for recent progress in higher dimensions. In this paper, we will consider the Yang–Mills measure in the case where the surface \(\varSigma \) is fixed and the group G is a classical matrix group of high dimension. The interest of such a set-up from the viewpoint of random matrix theory was first raised in the mathematics literature by Singer  [50], who made several conjectures, based on earlier work in physics [26, 27, 34, 35]. The high-dimensional limit of the Yang–Mills measure when \(\varSigma \) is the whole plane has since been studied by Xu [54], Sengupta [49], Lévy [40], Anshelevich and Sengupta [1], Dahlqvist [13] and others  [8, 24].

We focus here on the case where the surface \(\varSigma \) is a sphere. This case has received particular attention in the physics literature [6, 14, 28, 47] as it displays a third-order phase transition, named after Douglas and Kazakov [15]. A corresponding mathematical analysis of the partition function was achieved by Boutet de Monvel and Shcherbina [7] and Lévy and Maïda [43]. The main result of the present work, Theorem 2.2, confirms a conjecture of Singer [50], showing that, under the Yang–Mills measure on the sphere for the unitary group U(N), the traces of loop holonomies converge as \(N\rightarrow \infty \) to a deterministic limit. We characterize this limit analytically and derive some further properties. Following the physics literature, the limit is called the master field on the sphere.

As a by-product of our main result, we show that the Brownian loop in U(N), that is to say, the Brownian bridge starting and ending at the identity, converges in non-commutative distribution as \(N\rightarrow \infty \) to a certain non-commutative process, which we call the free unitary Brownian loop. The notion of free unitary Brownian motion was first defined by Biane [4], using free stochastic calculus, and was moreover identified as a limit of the Brownian motion on U(N). This latter limit was further studied in Lévy [39], Lévy and Maïda [42] and Collins, Dahlqvist and Kemp [12]. Our work may be considered as a first instance of the free unitary Brownian loop arising as the limit of a matrix-valued process. Defining it directly in the setting of free probability is an interesting open problem, which cannot so far be handled by the classical tools of free stochastic calculus, such as those introduced in [4, 5].

There is a system of relations, discovered by Makeenko and Migdal [45], indexed by families of embedded loops, between the expectations under the Yang–Mills measure of polynomials in the traces of loop holonomies. These have now been proved for the whole plane by Lévy [40] and Dahlqvist [13] and for any compact surface by Driver, Gabriel, Hall and Kemp [18]. They belong to the class of Schwinger–Dyson equations, a family of equations obtained by generalizing integration-by-parts formulas to the setting of functional integrals. See for example [30] and [9, 10], where these equations are proved and used in different models of random unitary matrices and for a lattice version of the Yang–Mills measure. For the Yang–Mills functional integral, this heuristic derivation has been justified recently by Driver [16]. The Makeenko–Migdal equations suggest a line of argument for proving convergence of the Yang–Mills measure as \(N\rightarrow \infty \): show a suitable concentration estimate for the holonomy traces, pass to the limit in the equations, and show that the limit equations determine a unique limit object. In the whole-plane case, moment estimates for unitary Brownian motion provide the needed concentration, and the Makeenko–Migdal equations may be augmented by a further equation, such that the whole system of equations then characterizes the limit field. So the programme has been completed in that case [13, 40]. However, as noted in [18], the concentration and characterization problems have remained open in general.

In this paper, we will establish two key points. First, for simple loops, we show in Proposition 3.1 that expectations and covariances of the holonomy traces can be represented by functionals of a discrete \(\beta \)-ensemble. This representation allows us to identify the limit in probability of these traces as \(N\rightarrow \infty \), following the work of Guionnet and Maïda [30], Johansson [32] and Féral [20] on discrete \(\beta \)-ensembles. This amounts to a rigorous version of ideas explained by Boulatov [6] and Douglas and Kazakov [15]. The second point, shown in Sect. 4 using the Makeenko–Migdal equations, is that the convergence of the holonomy traces to a deterministic limit for simple loops forces the same to hold for a more general class of loops. Then, by adapting some estimates of Lévy [40], we are eventually able to treat all loops of finite length, which allows us to express certain key properties of the master field in a natural way.

An alternative line of argument for the first point, which we shall discuss elsewhere, would be to use the fact that the process of eigenvalues of the marginals of the Brownian loop is known to have the same law as a Dyson Brownian motion on the circle, starting from 1 and conditioned to return to 1. Indeed, several scaling limits of this conditioned process have recently been understood by Liechty and Wang [44]. This link was first observed in the physics literature by Forrester, Majumdar and Schehr  [22, 23]. Section 3 gives another way to obtain macroscopic results on the empirical distribution of this process.

The paper is organized as follows. Section 2 introduces the model and our results. Section 3 shows convergence and concentration of holonomy traces for simple loops, using a duality relation with a discrete \(\beta \)-ensemble. Section 4 explains how the Makeenko–Migdal equations can be used to extend this convergence to a general class of regular loops. Then, in Sect. 5, we make a final extension to all loops of finite length. Section 6 presents some further properties of the master field, including a relation with the free Hermitian Brownian loop in the subcritical regime, and a formula for the evaluation of the master field on a large class of loops.

Subject to certain modifications, to be explained in a future work, the argument presented here applies to other series of compact groups, and also to the case where the projective plane replaces the sphere.

2 Setting and Statement of the Main Results

We review the notion of a Yang–Mills holonomy field over a compact Riemann surface. Then we discuss its relation, in the case of the sphere, to the Brownian loop in a Lie group. Next, we state our main results on convergence of Yang–Mills holonomy in U(N) over the sphere to the master field, and on analytic characterization of the master field. The proof of these main results has three steps, which are outlined in Sect. 2.5. Then we discuss some consequences of our results, for the convergence of spectral measures of loop holonomies, and for the high-dimensional limit of the Brownian loop in U(N). Finally, we discuss how the master field can be considered as a natural family of infinite-dimensional unitary transport operators, following up some suggestions of Singer  [50].

2.1 Yang–Mills measure on a compact Riemann surface

We recall in this subsection the notion of Yang–Mills measure in two dimensions, following the formulation of Lévy  [38], as a field of holonomies indexed by paths of finite length. Let \(\varSigma \) be a closed two-dimensional Riemannian manifold and let G be a compact Lie group. Fix an area measure on \(\varSigma \) having a continuous positive density with respect to Lebesgue measure in each coordinate chart. Write T for the total area of \(\varSigma \) and denote by 1 the unit element of G. Fix a bi-invariant Riemannian metric on G and denote the associated heat kernel by \(p=(p_t(g):t\in (0,\infty ),g\in G)\). Thus p is the unique \(C^\infty \) positive function on \((0,\infty )\times G\) such that

$$\begin{aligned} \frac{\partial p}{\partial t}=\frac{1}{2}\varDelta p \end{aligned}$$

and, for all continuous functions f on G, in the limit \(t\rightarrow 0\),

$$\begin{aligned} \int _Gf(g)p_t(g)dg\rightarrow f(1). \end{aligned}$$

Here we have written \(\varDelta \) for the Laplace–Beltrami operator and dg for the normalized Haar measure on G.
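
For example, if \(G=U(1)\), with the bi-invariant metric for which the circle has total length \(2\pi \), the heat kernel with respect to normalized Haar measure is given by

$$\begin{aligned} p_t(e^{i\theta })=\sum _{n\in {\mathbb {Z}}}e^{-n^2t/2}e^{in\theta } \end{aligned}$$

which indeed satisfies \(\partial p/\partial t=\tfrac{1}{2}\partial ^2p/\partial \theta ^2\) and converges, as a measure, to the unit mass at 1 as \(t\rightarrow 0\).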

We specialize in later sections to the case where \(\varSigma \) is the Euclidean sphere \({\mathbb {S}}_T\) of total area T

$$\begin{aligned} {\mathbb {S}}_T=\{x\in {\mathbb {R}}^3:4\pi |x|^2=T\} \end{aligned}$$

and where G is the group U(N) of unitary \(N\times N\) matrices. The Lie algebra of U(N) is the space of skew-Hermitian matrices \({\mathfrak {u}}(N)\). We specify a metric on U(N) by the following choice of inner product on \({\mathfrak {u}}(N)\)

$$\begin{aligned} \langle g_1,g_2\rangle =N\mathrm {Tr}(g_1g_2^*) \end{aligned}$$
(1)

where \(\mathrm {Tr}(g)=\sum _{i=1}^Ng_{ii}\). This dependence of the metric on N, which is standard in random matrix theory, is chosen so that the objects of interest to us have a non-trivial scaling limit as \(N\rightarrow \infty \).

By an oriented path in \(\varSigma \) we mean a continuous map \([0,1]\rightarrow \varSigma \). Write \({\text {Path}}(\varSigma )\) for the set of oriented paths of finite length in \(\varSigma \), parametrized by [0, 1] at constant speed. Denote the length of a path \(\gamma \in {\text {Path}}(\varSigma )\) by \(\ell (\gamma )\). We consider \({\text {Path}}(\varSigma )\) as a metric space, with the length metric

$$\begin{aligned} d(\gamma ,\gamma ')=|\ell (\gamma )-\ell (\gamma ')|+\inf _\tau \sup _{t\in [0,1]}d(\gamma _{\tau (t)},\gamma '_t) \end{aligned}$$
(2)

where the infimum is taken over homeomorphisms \(\tau \) of [0, 1]. Each path \(\gamma \) has a starting point \({{\underline{\gamma }}}\) and a terminal point \({{\overline{\gamma }}}\). Write \(\gamma ^{-1}\) for the reversal of \(\gamma \), that is, the path of reverse orientation from \({{\overline{\gamma }}}\) to \({{\underline{\gamma }}}\). For paths \(\gamma _1,\gamma _2\) such that \({{\overline{\gamma }}}_1={{\underline{\gamma }}}_2\), we write \(\gamma _1\gamma _2\) for the path obtained by their concatenation (and reparametrization by [0, 1] at constant speed). Write \({\text {Loop}}(\varSigma )\) for the set of loops of finite length in \(\varSigma \). Thus

$$\begin{aligned} {\text {Loop}}(\varSigma )=\{\gamma \in {\text {Path}}(\varSigma ):{{\underline{\gamma }}}={{\overline{\gamma }}}\}. \end{aligned}$$

Write also \({\text {Path}}_{x,y}({\mathbb {S}}_T)\) for the set of paths from x to y, and \({\text {Loop}}_x({\mathbb {S}}_T)\) for the set of loops based at x. Given paths \(\gamma ,\gamma _0\), we say that \(\gamma _0\) is a simple reduction of \(\gamma \) if we can write \(\gamma \) and \(\gamma _0\) as concatenations

$$\begin{aligned} \gamma =\gamma _1\gamma _*\gamma _*^{-1}\gamma _2,\quad \gamma _0=\gamma _1\gamma _2 \end{aligned}$$

for some paths \(\gamma _1,\gamma _2,\gamma _*\). More generally, we say that \(\gamma _0\) is a reduction of \(\gamma \) if there is a sequence of paths \((\gamma _1,\dots ,\gamma _n)\) such that \(\gamma _{i-1}\) is a simple reduction of \(\gamma _i\) for all i and \(\gamma _n=\gamma \). Given paths \(\gamma _1,\gamma _2\), we write \(\gamma _1\sim \gamma _2\) if there is a path \(\gamma _0\) which is a reduction of both \(\gamma _1\) and \(\gamma _2\).

Given a subset \(\varGamma \) of \({\text {Path}}(\varSigma )\) which is closed under reversal and concatenation, we call a function \(h:\varGamma \rightarrow G\) multiplicative if

$$\begin{aligned} h_{\gamma ^{-1}}=h^{-1}_\gamma ,\quad h_{\gamma _1\gamma _2}=h_{\gamma _2}h_{\gamma _1} \end{aligned}$$

for all \(\gamma \) and for all \(\gamma _1,\gamma _2\) with \({\overline{\gamma }}_1={\underline{\gamma }}_2\). We denote the set of such multiplicative functions by \({\text {Mult}}(\varGamma ,G)\). Note that, for any such function h, we have \(h_{\gamma _1}=h_{\gamma _2}\) whenever \(\gamma _1\sim \gamma _2\).

A path is simple if it is injective on [0, 1], while a loop is simple if it is injective as a map on the circle. We say that a finite subset \({\mathbb {G}}=\{e_1,\dots ,e_m\}\subseteq {\text {Path}}(\varSigma )\) is an embedded graph in \(\varSigma \) if each path \(e_j\) is non-constant, is either simple or a simple loop, and meets other paths \(e_k\) only at its endpoints. Then we refer to the sequence \((e_1,\dots ,e_m)\) as a labelled embedded graph. We will sometimes write abusively \({\mathbb {G}}=(V,E,F)\) to mean that V is the set of endpoints of paths in \({\mathbb {G}}\), \(E={\mathbb {G}}\) and F is the set of connected components of \(\varSigma \setminus \bigcup _{e\in {\mathbb {G}}}e^*\). Here

$$\begin{aligned} e^*=\{e(t):t\in [0,1]\}. \end{aligned}$$

We say that an embedded graph \({\mathbb {G}}\) is a discretization of \(\varSigma \) if each face \(f\in F\) is a simply connected domain in \(\varSigma \). Write \({\text {Path}}({\mathbb {G}})\) for the subset of \({\text {Path}}(\varSigma )\) obtained by concatenations of the paths in \({\mathbb {G}}\) and their reversals.

A random process \(H=(H_\gamma :\gamma \in {\text {Path}}(\varSigma ))\) (on some probability space \((\varOmega ,{\mathcal {F}},{\mathbb {P}})\)) taking values in G is a Yang–Mills holonomy field if

  1. (a)

    H is multiplicative, that is, \(H(\omega )\in {\text {Mult}}({\text {Path}}(\varSigma ),G)\) for all \(\omega \in \varOmega \),

  2. (b)

    for any discretization \({\mathbb {G}}=(V,E,F)\) of \(\varSigma \) and all \(h\in {\text {Mult}}({\text {Path}}({\mathbb {G}}),G)\),

    $$\begin{aligned} {\mathbb {P}}(H_e\in dh_e\text { for all }e\in E)=p_T(1)^{-1}\prod _{f\in F}p_{{\text {area}}(f)}(h_f)\prod _{e\in E}dh_e \end{aligned}$$
    (3)
  3. (c)

    for any convergent sequence \(\gamma (n)\rightarrow \gamma \) in \({\text {Path}}(\varSigma )\) with fixed endpoints,

    $$\begin{aligned} H_{\gamma (n)}\rightarrow H_\gamma \quad \text {in probability.} \end{aligned}$$
    (4)

The equation (3) specifies certain finite-dimensional distributions of H, considered as probability measures on \(G^E\). The volume element \(\prod _{e\in E}dh_e\) is the product of normalized Haar measures on G. For each face f, we have chosen a simple loop \(\gamma (f)\in {\text {Loop}}({\mathbb {G}})\) whose range is the boundary of f and set \(h_f=h_{\gamma (f)}\). The invariance properties of Haar measure and the heat kernel under inversion and conjugation guarantee that the expression (3) depends neither on the orientations of the edges nor on the choice of loops bounding the faces.
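
For example, when \(\varSigma \) is a sphere and the discretization consists of a single simple loop l whose complementary components have areas a and \(T-a\), formula (3) reads

$$\begin{aligned} {\mathbb {P}}(H_l\in dh)=p_T(1)^{-1}p_a(h)p_{T-a}(h^{-1})dh=p_T(1)^{-1}p_a(h)p_{T-a}(h)dh \end{aligned}$$

using the invariance of the heat kernel under inversion. This is the marginal, at area a, of the Brownian loop in G of lifetime T considered in Sect. 2.2 below.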

For each \(\gamma \in {\text {Path}}(\varSigma )\), we can define a coordinate function \(H_\gamma :{\text {Mult}}({\text {Path}}(\varSigma ),G)\rightarrow G\) by \(H_\gamma (h)=h_\gamma \). We define a \(\sigma \)-algebra \({\mathcal {C}}\) on \({\text {Mult}}({\text {Path}}(\varSigma ),G)\) by

$$\begin{aligned} {\mathcal {C}}=\sigma (H_\gamma :\gamma \in {\text {Path}}(\varSigma )). \end{aligned}$$

Then \((H_\gamma :\gamma \in {\text {Path}}(\varSigma ))\) is a multiplicative random process on \(({\text {Mult}}({\text {Path}}(\varSigma ),G),{\mathcal {C}})\). We use the same notation \((H_\gamma :\gamma \in {\text {Path}}(\varSigma ))\) both for this canonical coordinate process and also, more generally, for any multiplicative random process.

Our basic object of study is the Yang–Mills measure provided by the following theorem of Lévy [38, Theorem 2.62], building on earlier work of Driver [17] and Sengupta  [48].

Theorem 2.1

There is a unique probability measure on \(({\text {Mult}}({\text {Path}}(\varSigma ),G),{\mathcal {C}})\) under which the coordinate process \((H_\gamma :\gamma \in {\text {Path}}(\varSigma ))\) is a Yang–Mills holonomy field.

Let \(H=(H_\gamma :\gamma \in {\text {Path}}(\varSigma ))\) be a Yang–Mills holonomy field in G. We note the following properties of gauge invariance and invariance under area-preserving diffeomorphisms, which follow from invariance properties of (3) and the uniqueness statement of the theorem. Let \(s:\varSigma \rightarrow G\) be any function and let \(\psi :\varSigma \rightarrow \varSigma \) be an area-preserving diffeomorphism. Consider the processes

$$\begin{aligned} H^s=(s({{\overline{\gamma }}})H_\gamma s({{\underline{\gamma }}})^{-1}:\gamma \in {\text {Path}}(\varSigma )),\quad H^\psi =(H_{\psi \circ \gamma }:\gamma \in {\text {Path}}(\varSigma )). \end{aligned}$$

Then \(H^s\) and \(H^\psi \) have the same law as H. In particular, the relevant data from \(\varSigma \) are just its genus and the total area T (Fig. 1).

Fig. 1  A family of simple loops obtained by intersecting the sphere with a rotating plane, as described in Sect. 2.2

2.2 Embedded Brownian loops

We specialize now to the case where the surface \(\varSigma \) is the sphere \({\mathbb {S}}_T\) of area T. In each Yang–Mills holonomy field \(H=(H_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\), there are many embedded Brownian loops in G based at 1 and parametrized by [0, T], as we now show. Recall that a random process \(B=(B_t:t\in [0,T])\) taking values in G is a Brownian loop based at 1 if

  1. (a)

    B is continuous, that is, \(B(\omega )\in C([0,T],G)\) for all \(\omega \in \varOmega \),

  2. (b)

    for all \(n\in {\mathbb {N}}\), all \(g_1,\dots ,g_{n-1}\in G\) and all increasing sequences \((t_1,\dots ,t_{n-1})\) in (0, T), setting \(g_0=g_n=1\), \(t_0=0\) and \(t_n=T\), and writing \(s_i=t_i-t_{i-1}\) for \(i=1,\dots ,n\),

    $$\begin{aligned} {\mathbb {P}}(B_{t_k}\in dg_k\text { for }k=1,\dots ,n-1)={p_T(1)}^{-1}\prod _{i=1}^np_{s_i}(g_ig_{i-1}^{-1})\prod _{k=1}^{n-1}dg_k. \end{aligned}$$

Choose a point x in \({\mathbb {S}}_T\) and let P be a tangent plane to \({\mathbb {S}}_T\) at x, considered as embedded in \({\mathbb {R}}^3\). Choose a line L in P through x and rotate P once around L. The resulting intersections of P with \({\mathbb {S}}_T\), which are a nested family of circles, may be given a consistent orientation and then considered as a family in \({\text {Loop}}({\mathbb {S}}_T)\), all based at x. We can parametrize this family of loops as \((l(t):t\in [0,T])\) so that the domain inside l(t) has area t for all t. Then, for all \(n\in {\mathbb {N}}\) and all sequences \((t_1,\dots ,t_{n-1})\) of distinct points in (0, T), the loops \(l(t_1),\dots ,l(t_{n-1})\) are the edges of a discretization of \({\mathbb {S}}_T\). Define a random process \(\beta =(\beta _t:t\in [0,T])\) in G by

$$\begin{aligned} \beta _t=H_{l(t)}. \end{aligned}$$

It is straightforward to deduce from property (b) of the Yang–Mills holonomy field that the finite-dimensional distributions of \(\beta \) satisfy condition (b) for the Brownian loop. Hence, by standard arguments, \(\beta \) has a continuous version, B say, which is a Brownian loop in G based at 1. The reader will see many ways to vary this construction while still obtaining a Brownian loop.
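
To spell out the simplest instance of this deduction, for \(0<t_1<t_2<T\) the loops \(l(t_1),l(t_2)\) are the edges of a discretization of \({\mathbb {S}}_T\) whose three faces have areas \(t_1\), \(t_2-t_1\) and \(T-t_2\), and are bounded (up to orientation) by the loops \(l(t_1)\), \(l(t_1)^{-1}l(t_2)\) and \(l(t_2)\). Since \(h_{l(t_1)^{-1}l(t_2)}=h_{l(t_2)}h_{l(t_1)}^{-1}\), formula (3) gives

$$\begin{aligned} {\mathbb {P}}(\beta _{t_1}\in dg_1,\beta _{t_2}\in dg_2)=p_T(1)^{-1}p_{t_1}(g_1)p_{t_2-t_1}(g_2g_1^{-1})p_{T-t_2}(g_2^{-1})dg_1dg_2 \end{aligned}$$

which is condition (b) for the Brownian loop with \(n=3\).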

2.3 Convergence to the master field on the sphere

We specialize now to the case where the structure group G is the group of \(N\times N\) unitary matrices U(N). Let \(H^N=(H^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) be a Yang–Mills holonomy field in U(N) over the sphere \({\mathbb {S}}_T\) of area T. Our main results establish a law of large numbers for this random field in the limit \(N\rightarrow \infty \), which we express for now in terms of the normalized trace

$$\begin{aligned} \mathrm {tr}_N(g)=\mathrm {tr}(g)=N^{-1}\sum _{i=1}^Ng_{ii}. \end{aligned}$$

Here is our first main result.

Theorem 2.2

There exists a function on loops

$$\begin{aligned} \varPhi _T:{\text {Loop}}({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\end{aligned}$$

such that, for all \(l\in {\text {Loop}}({\mathbb {S}}_T)\),

$$\begin{aligned} \mathrm {tr}_N(H^N_l)\rightarrow \varPhi _T(l)\quad \hbox { in probability as}\ N\rightarrow \infty . \end{aligned}$$

The function \(\varPhi _T\) is known in the physics literature as the master field on the sphere. Until we have proved Theorem 2.2, it will be convenient provisionally to define \(\varPhi _T\) by

$$\begin{aligned} \varPhi _T(l)= {\left\{ \begin{array}{ll}\lim \nolimits _{N\rightarrow \infty }{\mathbb {E}}(\mathrm {tr}_N(H^N_l)),&{}\text {if this limit exists},\\ 0,&{}\text {otherwise}.\end{array}\right. } \end{aligned}$$

Note that, since \(|\mathrm {tr}_N(H^N_l)|\le 1\), by bounded convergence, as soon as we show that \(\mathrm {tr}_N(H^N_l)\) converges in probability with deterministic limit, it will follow that \({\mathbb {E}}(\mathrm {tr}_N(H^N_l))\) converges with the same limit, so that this limit must equal \(\varPhi _T(l)\) as provisionally defined.

Given Theorem 2.2, the master field inherits certain properties from its finite-N approximations \({\mathbb {E}}(\mathrm {tr}_N(H^N_l))\), as the reader may easily check.

Proposition 2.3

The master field \(\varPhi _T\) has the following properties:

  1. (a)

    \(\varPhi _T=1\) on constant loops and \(\varPhi _T(l)=\varPhi _T(l^{-1})\in [-1,1]\) for all loops l,

  2. (b)

    \(\varPhi _T(\gamma _1\gamma _2)=\varPhi _T(\gamma _2\gamma _1)\) for all pairs of paths \(\gamma _1,\gamma _2\) such that \(\gamma _1\gamma _2\) is a loop,

  3. (c)

    \(\varPhi _T(l_1)=\varPhi _T(l_2)\) whenever \(l_1\sim l_2\),

  4. (d)

    for all \(x,y\in {\mathbb {S}}_T\), all \(n\in {\mathbb {N}}\), all \(a_1,\dots ,a_n\in {\mathbb {C}}\) and all \(\gamma _1,\dots ,\gamma _n\in {\text {Path}}_{x,y}({\mathbb {S}}_T)\),

    $$\begin{aligned} \sum _{i,j=1}^n a_i\overline{a_j}\varPhi _T(\gamma _i\gamma _j^{-1})\ge 0 \end{aligned}$$
  5. (e)

    for all loops l and any area-preserving diffeomorphism \(\psi \) of \({\mathbb {S}}_T\),

    $$\begin{aligned} \varPhi _T(\psi (l))=\varPhi _T(l). \end{aligned}$$

2.4 Characterization of the master field on the sphere

Our second main result is an analytic characterization of the master field. This will require some associated notions which we now introduce. Consider the following variational problem: minimize the functional

$$\begin{aligned} {\mathcal {I}}_T(\mu )=\int _{{\mathbb {R}}^2}\left\{ \tfrac{1}{2}(x^2+y^2)T-2\log |x-y|\right\} \mu (dx)\mu (dy) \end{aligned}$$
(5)

over the set of probability measures \(\mu \) on \({\mathbb {R}}\) such that, for all intervals \([a,b]\),

$$\begin{aligned} \mu ([a,b])\le b-a. \end{aligned}$$

We note for later use some statements concerning this problem, proofs of which may be found in Lévy and Maïda [43]. First, the functional \({\mathcal {I}}_T\) is well-defined on the given set of probability measures, with values in \((-\infty ,\infty ]\), and has a unique minimizer, which we denote by \(\mu _T\). Then \(\mu _T\) has a continuous density function \(\rho _T\) with respect to Lebesgue measure, with \(0\le \rho _T(x)\le 1\) for all x. In the case \(T\in (0,\pi ^2]\), \(\rho _T\) is the semi-circle density of variance 1/T, given by

$$\begin{aligned} \rho _T(x)=\frac{T}{2\pi }\sqrt{\frac{4}{T}-x^2},\quad |x|\le \frac{2}{\sqrt{T}}. \end{aligned}$$
(6)

Note that the right-hand side in (6) exceeds 1 when \(x=0\) for \(T>\pi ^2\).

For \(T\in (\pi ^2,\infty )\), there is a unique \(k\in (0,1)\) such that

$$\begin{aligned} T=8EK-4(1-k^2)K^2 \end{aligned}$$

where \(K=K(k)\) and \(E=E(k)\) are, respectively, the complete elliptic integrals of the first and second kind. See for example  [37, Chapter 3, equations (3.1.3) and (3.5.4)]. Set

$$\begin{aligned} \alpha =4kK/T,\quad \beta =4K/T. \end{aligned}$$
(7)

Then the minimizing density \(\rho _T\) is identically 1 on \([-\alpha ,\alpha ]\), is supported on \([-\beta ,\beta ]\), and satisfies, for \(|x|\in (\alpha ,\beta )\),

$$\begin{aligned} \rho _T(x)=\frac{2\sqrt{(x^2-\alpha ^2)(\beta ^2-x^2)}}{\pi \beta |x|}\int _0^1\frac{ds}{(1-\alpha ^2s^2/x^2)\sqrt{(1-s^2)(1-\alpha ^2s^2/\beta ^2)}}. \end{aligned}$$
(8)

See [43, Lemma 4.7, equation (4.14)]. See also  [43, Figure 7] for an informative plot of the family of densities \((\rho _T:T\in (0,\infty ))\).
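
For numerical purposes, \(\rho _T\) can be evaluated directly from these formulas. The following minimal sketch (in Python, using SciPy; the function names modulus and rho_T are ours) solves the equation above for k, forms \(\alpha \) and \(\beta \) as in (7), and evaluates the integral in (8) by quadrature.

```python
import numpy as np
from scipy.special import ellipk, ellipe
from scipy.optimize import brentq
from scipy.integrate import quad

def modulus(T):
    # Solve T = 8*E*K - 4*(1-k^2)*K^2 for k in (0,1); SciPy's ellipk/ellipe take m = k^2.
    # The bracket below suffices for moderate T > pi^2 (K(k) grows only logarithmically as k -> 1).
    f = lambda k: 8 * ellipe(k**2) * ellipk(k**2) - 4 * (1 - k**2) * ellipk(k**2)**2 - T
    return brentq(f, 1e-12, 1 - 1e-15)

def rho_T(x, T):
    # Minimizing density for T > pi^2: identically 1 on [-alpha, alpha], given by (8) on (alpha, beta),
    # and 0 outside [-beta, beta], with alpha, beta as in (7).
    k = modulus(T)
    K = ellipk(k**2)
    alpha, beta = 4 * k * K / T, 4 * K / T
    x = abs(x)
    if x <= alpha:
        return 1.0
    if x >= beta:
        return 0.0
    integrand = lambda s: 1.0 / ((1 - alpha**2 * s**2 / x**2)
                                 * np.sqrt((1 - s**2) * (1 - alpha**2 * s**2 / beta**2)))
    value, _ = quad(integrand, 0.0, 1.0)
    return 2 * np.sqrt((x**2 - alpha**2) * (beta**2 - x**2)) / (np.pi * beta * x) * value
```

As a consistency check, as \(T\downarrow \pi ^2\) one has \(k\rightarrow 0\) and \(\alpha \rightarrow 0\), and the expression above reduces to the semi-circle density (6).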

Let us say that \(l\in {\text {Loop}}({\mathbb {S}}_T)\) is a regular loop if there is a labelled embedded graph \({\mathbb {G}}_l=(e_1,\dots ,e_m)\) in \({\text {Path}}({\mathbb {S}}_T)\) such that l is given by the concatenation \(e_1\dots e_m\), in which \({\underline{e}}_1\) has degree 2 and in which \({\underline{e}}_2,\dots ,{\underline{e}}_m\) have degree 4 and are transverse self-intersections of l. Here, we say that a self-intersection of l at a vertex v of degree 4 is transverse if, as l passes through v, it arrives and leaves by opposite edges. Note that \({\mathbb {G}}_l\) is then uniquely determined by l.

Given a regular loop l and a point v of self-intersection of l, there are two regular loops \(l_v\) and \({{\hat{l}}}_v\) starting from v, obtained by splitting l at v, that is, by following l on its first and second exit from v, respectively, until it first returns to v. Note that both \(l_v\) and \({{\hat{l}}}_v\) have fewer self-intersections than l. See Fig. 2 for an example. Denote by \(f_1\) the face of \({\mathbb {G}}\) which is adjacent to the two outgoing strands of l at v, and denote by \(f_1,f_2,f_3,f_4\) the faces of \({\mathbb {G}}\) found on making a small circuit around v in the positive sense, starting from \(f_1\). Note that these faces may not all be distinct. For each face f of \({\mathbb {G}}\), define

$$\begin{aligned} {\text {sgn}}_v(f)=\sum _{k=1}^4(-1)^{k+1}1_{\{f_k\}}(f). \end{aligned}$$

In the case where the faces \(f_1,f_2,f_3,f_4\) are distinct, we have \({\text {sgn}}_v(f_1)={\text {sgn}}_v(f_3)=1\) and \({\text {sgn}}_v(f_2)={\text {sgn}}_v(f_4)=-1\). Since \({\mathbb {G}}\) is embedded in the sphere, the only other possibility is that \(f_1=f_3\not =f_2\not =f_4\not =f_1\), in which case \({\text {sgn}}_v(f_1)=2\) and \({\text {sgn}}_v(f_2)={\text {sgn}}_v(f_4)=-1\). See Fig. 3. For \(\eta >0\), we say that a \(C^\infty \) map

$$\begin{aligned} \theta :[0,\eta )\times {\mathbb {S}}_T\rightarrow {\mathbb {S}}_T \end{aligned}$$

is a Makeenko–Migdal flow at \((l,v)\) if

  1. (a)

    \(\theta (0,x)=x\) for all x,

  2. (b)

    \(\theta (t,.)\) is a diffeomorphism of \({\mathbb {S}}_T\) for all t,

  3. (c)

    for any face f of the embedded graph \({\mathbb {G}}\),

    $$\begin{aligned} \frac{d}{dt}{\text {area}}(\theta (t,f))={\text {sgn}}_v(f). \end{aligned}$$
    (9)
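
Note that \(\sum _f{\text {sgn}}_v(f)=\sum _{k=1}^4(-1)^{k+1}=0\), which is consistent with (9): since \(\theta (t,.)\) is a diffeomorphism of \({\mathbb {S}}_T\), the areas \({\text {area}}(\theta (t,f))\) sum to T for all t, so their derivatives must sum to 0.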

We can now state our analytic characterization of the master field.

Fig. 2  The splitting of the loop \(e_1e_2\ldots e_7\) on the left-hand side yields the two loops \(e_2e_3e_4\) and \(e_5e_6e_7e_1\), represented respectively with plain and dashed strands

Fig. 3  The value of \({\text {sgn}}_v\), where v is denoted by a dot, is printed on each face of the embedded graph for two different loops

Theorem 2.4

The master field \(\varPhi _T:{\text {Loop}}({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) has the following properties, which together characterize it uniquely:

  1. (a)

    \(\varPhi _T\) is continuous with respect to the length metric on \({\text {Loop}}({\mathbb {S}}_T)\),

  2. (b)

    \(\varPhi _T\) is invariant under reduction: for all pairs of loops \(l_1,l_2\) with \(l_1\sim l_2\),

    $$\begin{aligned} \varPhi _T(l_1)=\varPhi _T(l_2) \end{aligned}$$
  3. (c)

    \(\varPhi _T\) is invariant under area-preserving homeomorphisms: for all regular loops l and any area-preserving homeomorphism \(\xi \) of \({\mathbb {S}}_T\) such that \(\xi (l)\in {\text {Loop}}({\mathbb {S}}_T)\),

    $$\begin{aligned} \varPhi _T(\xi (l))=\varPhi _T(l) \end{aligned}$$
  4. (d)

    \(\varPhi _T\) satisfies the Makeenko–Migdal equations: for all regular loops l, all points v of self-intersection of l, and any Makeenko–Migdal flow \(\theta \) at \((l,v)\), \(\varPhi _T(\theta (t,l))\) is differentiable in t at 0 with

    $$\begin{aligned} \left. \frac{d}{dt}\right| _{t=0} \varPhi _T(\theta (t,l))=\varPhi _T(l_v)\varPhi _T({{\hat{l}}}_v) \end{aligned}$$
    (10)
  5. (e)

    for all simple loops l and all \(n\in {\mathbb {N}}\),

    $$\begin{aligned} \varPhi _T(l^n)=\frac{2}{n\pi }\int _0^\infty \cosh \left\{ (a_1-a_2)nx/2\right\} \sin \{n\pi \rho _T(x)\}dx \end{aligned}$$
    (11)

    where \(a_1\) and \(a_2\) are the areas of the connected components of \({\mathbb {S}}_T{\setminus } l^*\).

Note that the integrand in (11) vanishes whenever \(\rho _T(x)=0\) or \(\rho _T(x)=1\). In property (e), we have written \(l^n\) for the n-fold concatenation of l with itself. In fact, it suffices for uniqueness that property (e) hold in the case \(n=1\), as we show in Sect. 6.4.
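
As an illustration, in the subcritical regime \(T\le \pi ^2\), where \(\rho _T\) is the explicit semi-circle density (6), the integral in (11) runs effectively over \([0,2/\sqrt{T}]\) and is easy to evaluate numerically. Here is a minimal sketch in Python (the function name phi_simple is ours); for \(n=1\), its values can be checked against the moments of the semi-circle distribution of variance \(a_1a_2/T\), in accordance with Corollary 2.8 below.

```python
import numpy as np
from scipy.integrate import quad

def phi_simple(n, a1, a2):
    # Right-hand side of (11) for a simple loop enclosing areas a1 and a2,
    # in the subcritical case T = a1 + a2 <= pi^2, where rho_T is the
    # semicircle density (6); the integrand vanishes for x > 2/sqrt(T).
    T = a1 + a2
    rho = lambda x: (T / (2 * np.pi)) * np.sqrt(max(4.0 / T - x**2, 0.0))
    integrand = lambda x: np.cosh((a1 - a2) * n * x / 2) * np.sin(n * np.pi * rho(x))
    value, _ = quad(integrand, 0.0, 2.0 / np.sqrt(T))
    return 2.0 / (n * np.pi) * value
```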

2.5 Outline of the main argument

We now outline the main steps in our proof of Theorems 2.2 and 2.4. We build progressively an understanding of the limit, first for simple loops, then regular loops, and finally for all loops of finite length. First, we prove in Sect. 3.5 the following statement for simple loops. The argument uses harmonic analysis in U(N) to express means and covariances of \(\mathrm {tr}_N(H^N_{l^n})\) in terms of a discrete \(\beta \)-ensemble, whose asymptotics as \(N\rightarrow \infty \) we can compute. Write \({\text {Loop}}_0({\mathbb {S}}_T)\) for the set of simple loops in \({\text {Loop}}({\mathbb {S}}_T)\). Let \(l\in {\text {Loop}}_0({\mathbb {S}}_T)\) and recall that we write \(l^*\) for the range of l. Then \({\mathbb {S}}_T{\setminus } l^*\) has two connected components. Write \(a_1(l)\) for the area of the component on the left of l and \(a_2(l)\) for the area of the component on the right. Then \(a_1(l),a_2(l)>0\) and \(a_1(l)+a_2(l)=T\). Set

$$\begin{aligned} \phi _T(n,a_1,a_2)=\frac{2}{n\pi }\int _0^\infty \cosh \left\{ (a_1-a_2)nx/2\right\} \sin \{n\pi \rho _T(x)\}dx. \end{aligned}$$
(12)

Proposition 2.5

For all \(n\in {\mathbb {N}}\),

$$\begin{aligned} \mathrm {tr}_N(H^N_{l^n})\rightarrow \varPhi _T(l^n)=\phi _T(n,a_1(l),a_2(l)) \end{aligned}$$

uniformly in \(l\in {\text {Loop}}_0({\mathbb {S}}_T)\) in \(L^2({\mathbb {P}})\) as \(N\rightarrow \infty \).

The next step is the following proposition, which is proved in Sect. 4.5. The argument is based on the Makeenko–Migdal equations for Wilson loops, which will be discussed in Sect. 4.3. Write \({\text {Loop}}_n({\mathbb {S}}_T)\) for the set of regular loops having at most n self-intersections.

In order to state the proposition, we will need to introduce certain quantities associated to a regular loop \(l\in {\text {Loop}}_n({\mathbb {S}}_T)\). There is a winding number function \(n_l:{\mathbb {S}}_T{\setminus } l^*\rightarrow {\mathbb {Z}}\), which we fix uniquely by requiring its minimal value to be 0. The winding number is discussed in more detail in Sect. 4.4. The function \(n_l\) is constant on the faces of the associated embedded graph, which are the connected components of \({\mathbb {S}}_T{\setminus } l^*\). The notion of continuity in area is defined at the end of Sect. 4.1.

Proposition 2.6

For all \(n\in {\mathbb {N}}\),

$$\begin{aligned} \mathrm {tr}_N(H^N_l)\rightarrow \varPhi _T(l) \end{aligned}$$

uniformly in \(l\in {\text {Loop}}_n({\mathbb {S}}_T)\) in \(L^2({\mathbb {P}})\) as \(N\rightarrow \infty \). Moreover, the restriction of the master field \(\varPhi _T\) to \({\text {Loop}}_n({\mathbb {S}}_T)\) is the unique continuous function \({\text {Loop}}_n({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) with the following properties: it is invariant under area-preserving homeomorphisms and uniformly continuous in area, it satisfies the Makeenko–Migdal equations (10), and satisfies, for some constant \(C_n<\infty \) and all loops \(l\in {\text {Loop}}_n({\mathbb {S}}_T)\),

$$\begin{aligned} |\varPhi _T(l)-\phi _T(n_*,a_0,a_*)|\le C_n(T-a_{k_0}-a_{k_*}) \end{aligned}$$
(13)

where \(n_*\) is the maximum of the winding number function \(n_l\), where \(k_0\) and \(k_*\) are the indices of faces of minimal and maximal winding number, and where \(a_0\) and \(a_*\) are determined by

$$\begin{aligned} a_0+a_*=T,\quad a_*n_*=\sum _{k=1}^pa_kn_k \end{aligned}$$

where \(a_k\) and \(n_k\), for \(k=1,\dots ,p\), are respectively the area and the winding number of the face of index k.
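
For example, if l is a simple loop, then \(p=2\) and \(n_*=1\), \(a_*\) is the area of the face of winding number 1 and \(a_0=T-a_*\), while the right-hand side of (13) vanishes; so (13) then reduces to the exact formula of Proposition 2.5 with \(n=1\) (note that \(\phi _T(n,a_1,a_2)\) is symmetric in \(a_1\) and \(a_2\)).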

Finally, we extend to all loops of finite length in the following proposition, which combines the statements of Theorems 2.2 and 2.4. The proof is given in Sect. 5, using approximation by piecewise geodesics, and by adapting some general arguments of Lévy [40].

Proposition 2.7

For all \(l\in {\text {Loop}}({\mathbb {S}}_T)\),

$$\begin{aligned} \mathrm {tr}_N(H^N_l)\rightarrow \varPhi _T(l) \end{aligned}$$

in probability as \(N\rightarrow \infty \). Moreover, the master field \(\varPhi _T\) is the unique continuous function \({\text {Loop}}({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) with the following properties: it is invariant under reduction, invariant under area-preserving homeomorphisms, satisfies the Makeenko–Migdal equations (10) on regular loops, and satisfies (11) for simple loops.

2.6 Convergence of spectral measures

Let \((H^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) be a Yang–Mills holonomy field in U(N). For \(l\in {\text {Loop}}({\mathbb {S}}_T)\), consider the empirical eigenvalue distribution on the unit circle \({\mathbb {U}}\), given by

$$\begin{aligned} \nu _T^N(l)=\frac{1}{N}\sum _{i=1}^N\delta _{\lambda _i} \end{aligned}$$

where \(\lambda _1,\dots ,\lambda _N\) are the eigenvalues of \(H^N_l\) enumerated with multiplicity. Write \({\mathcal {M}}_1({\mathbb {U}})\) for the set of Borel probability measures on \({\mathbb {U}}\).

Corollary 2.8

There is a function \(\nu _T:{\text {Loop}}({\mathbb {S}}_T)\rightarrow {\mathcal {M}}_1({\mathbb {U}})\) such that, for all \(l\in {\text {Loop}}({\mathbb {S}}_T)\),

$$\begin{aligned} \nu _T^N(l)\rightarrow \nu _T(l) \end{aligned}$$

weakly in probability on \({\mathbb {U}}\) as \(N\rightarrow \infty \). Moreover, for all simple loops l and all \(n\in {\mathbb {N}}\),

$$\begin{aligned} \int _{\mathbb {U}}\omega ^n\nu _T(l)(d\omega ) =\frac{2}{n\pi }\int _0^\infty \cosh \left\{ (a_1(l)-a_2(l))nx/2\right\} \sin \{n\pi \rho _T(x)\}dx. \end{aligned}$$

Moreover, for \(T\in (0,\pi ^2]\), all simple loops l, and all bounded Borel functions f,

$$\begin{aligned} \int _{\mathbb {U}}f(\omega )\nu _T(l)(d\omega )=\int _{-\pi }^\pi f(e^{i\theta })s_{a_1a_2/T}(\theta )d\theta \end{aligned}$$
(14)

where \(s_t\) is the semi-circle density of variance t, given by

$$\begin{aligned} s_t(x)=\frac{1}{2\pi t}\sqrt{4t-x^2},\quad |x|\le 2\sqrt{t}. \end{aligned}$$
(15)

Proof

By Theorem 2.2, for \(l\in {\text {Loop}}({\mathbb {S}}_T)\) and all \(n\in {\mathbb {N}}\), we have

$$\begin{aligned} \int _{\mathbb {U}}\omega ^n\nu _T^N(l)(d\omega )=\mathrm {tr}(H^N_{l^n})\rightarrow \varPhi _T(l^n) \end{aligned}$$

in probability as \(N\rightarrow \infty \). Since \({\mathbb {U}}\) is compact, by a standard tightness argument, it follows that there exists a probability measure \(\nu _T(l)\) on \({\mathbb {U}}\) such that

$$\begin{aligned} \int _{\mathbb {U}}\omega ^n\nu _T(l)(d\omega )=\varPhi _T(l^n) \end{aligned}$$

for all \(n\in {\mathbb {N}}\) and such that \(\nu _T^N(l)\rightarrow \nu _T(l)\) weakly in probability as \(N\rightarrow \infty \). By Theorem 2.4, \(\varPhi _T(l^n)\) is given by (11) for all simple loops l. Finally, we will show in Sect. 3.4 that, for all \(T\in (0,\pi ^2]\) and all \(n\in {\mathbb {N}}\),

$$\begin{aligned} \varPhi _T(l^n)=\int _{-\pi }^\pi e^{in\theta }s_{a_1a_2/T}(\theta )d\theta \end{aligned}$$

so (14) holds for polynomials, and so it holds in general. \(\quad \square \)

Thus, for \(T\in (0,\pi ^2]\) and for simple loops l, the limiting spectral measure \(\nu _T(l)\) has a semi-circle density on \({\mathbb {U}}\), with

$$\begin{aligned} \mathrm {supp}(\nu _T(l))=\{e^{i\theta }:|\theta |\le 2\sqrt{a_1a_2/T}\}. \end{aligned}$$

The maximal support is then \(\{e^{i\theta }:|\theta |\le \sqrt{T}\}\), achieved when \(a_1=a_2=T/2\). Note that, in the critical case \(T=\pi ^2\), the two endpoints of the maximal support meet at \(\theta =\pm \pi \).

2.7 Free unitary Brownian loop

As a corollary of Theorem 2.2, we show that the Brownian loop in U(N) based at 1 of lifetime T converges in non-commutative distribution as \(N\rightarrow \infty \). Moreover, we identify the limiting empirical distribution of eigenvalues at each time \(t\in [0,T]\).

Consider the free unital \(*\)-algebra \({\mathcal {A}}_T\) of polynomials over \({\mathbb {C}}\) in the variables \((X_t:t\in [0,T])\) and their inverses. Each element \(Q\in {\mathcal {A}}_T\) is a finite linear combination over \({\mathbb {C}}\) of monomials of the form

$$\begin{aligned} X_{t_1}^{\varepsilon _1}\dots X_{t_n}^{\varepsilon _n} \end{aligned}$$

where \(t_1,\dots ,t_n\in [0,T]\) and \(\varepsilon _1,\dots ,\varepsilon _n\in \{-1,1\}\). Thus each \(Q\in {\mathcal {A}}_T\) may be written as a non-commutative polynomial

$$\begin{aligned} Q=q(X_t,X_t^{-1}:t\in [0,T]) \end{aligned}$$

with coefficients in \({\mathbb {C}}\). The operation \(*\) is the unique conjugate-linear, anti-multiplicative involution on \({\mathcal {A}}_T\) such that

$$\begin{aligned} X_t^*=X_t^{-1}. \end{aligned}$$

For each \(N\in {\mathbb {N}}\), there exists a Brownian loop \(B^N=(B^N_t:t\in [0,T])\) in U(N) based at 1 of lifetime T. Define a random non-negative unit trace on \({\mathcal {A}}_T\) by setting

$$\begin{aligned} \tau _N(Q)=\mathrm {tr}_N(q(B^N_t,(B^N_t)^{-1}:t\in [0,T])). \end{aligned}$$

Theorem 2.9

There is a non-negative unit trace \(\tau _\infty \) on \({\mathcal {A}}_T\) such that, for all \(Q\in {\mathcal {A}}_T\),

$$\begin{aligned} \tau _N(Q)\rightarrow \tau _\infty (Q)\quad \text {in probability as }N\rightarrow \infty . \end{aligned}$$

Proof

It will suffice to consider the case where \(B^N\) is constructed from a Yang–Mills holonomy field \((H^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) in U(N), as in Sect. 2.2. Then, for some \(x\in {\mathbb {S}}_T\) and some family of loops \(l(t)\in {\text {Loop}}({\mathbb {S}}_T)\) based at x, we have

$$\begin{aligned} B^N_t=H_{l(t)}^N\quad \text {almost surely, for all }t\in [0,T]. \end{aligned}$$

Consider first the case of a monomial \(Q=X_{t_1}^{\varepsilon _1}\dots X_{t_n}^{\varepsilon _n}\) with \(\varepsilon _1,\dots ,\varepsilon _n\in \{-1,1\}\) and set \(l_Q=l(t_n)^{\varepsilon _n}\dots l(t_1)^{\varepsilon _1}\). Then, by Theorem 2.2,

$$\begin{aligned} \tau _N(Q)=\mathrm {tr}((B_{t_1}^N)^{\varepsilon _1}\dots (B_{t_n}^N)^{\varepsilon _n})=\mathrm {tr}((H^N_{l(t_1)})^{\varepsilon _1}\dots (H^N_{l(t_n)})^{\varepsilon _n})=\mathrm {tr}(H^N_{l_Q})\rightarrow \varPhi _T(l_Q) \end{aligned}$$

in probability as \(N\rightarrow \infty \). Define \(\tau _\infty (Q)=\varPhi _T(l_Q)\) for all monomials Q and extend \(\tau _\infty \) linearly to \({\mathcal {A}}_T\). Then \(\tau _N(Q)\rightarrow \tau _\infty (Q)\) in probability as \(N\rightarrow \infty \), for all \(Q\in {\mathcal {A}}_T\), and \(\tau _\infty \) inherits the property of being a non-negative unit trace from its random approximations \(\tau _N\). \(\quad \square \)

Given a non-commutative random process \(x=(x_t:t\in [0,T])\) in a non-commutative probability space \(({\mathcal {A}},\tau )\), let us say that x is a free unitary Brownian loop if, for all n, all \(t_1,\dots ,t_n\in [0,T]\) and all \((y_{t_k},Y_{t_k})\in \{(x_{t_k},X_{t_k}),(x^*_{t_k},X^*_{t_k})\}\),

$$\begin{aligned} \tau (y_{t_1}\dots y_{t_n})=\tau _\infty (Y_{t_1}\dots Y_{t_n}). \end{aligned}$$

In particular, the canonical process \((X_t:t\in [0,T])\) is a free unitary Brownian loop in \(({\mathcal {A}}_T,\tau _\infty )\). We shall see in Sect. 6 that, in the subcritical regime \(T\le \pi ^2\), a free unitary Brownian loop x has the same marginal distributions as \(e^{ib}\), where b is a free Brownian loop with the same lifetime. Thus the spectral measure of each marginal of a free unitary Brownian loop is the push-forward of a Wigner law by the exponential mapping to the circle. However, we shall also see that the full non-commutative distributions of x and \(e^{ib}\) are different.

2.8 The master field as a holonomy in \(U(\infty )\)

We will carry out a suggestion of Singer [50]: to use a variation of the Gelfand–Naimark–Segal construction to obtain from the master field a family of Hilbert spaces, indexed by \({\mathbb {S}}_T\) and equipped with a canonical connection, viewed as a family of unitary transport operators indexed by \({\text {Path}}({\mathbb {S}}_T)\). First, in order to clarify and motivate this construction, we will make an analogous construction for finite N, showing its relation to the notion of Yang–Mills holonomy field in U(N). Conditional on a certain non-degeneracy property for the master field, we will further exhibit the finite-N holonomy measures as recoverable by restriction of the limit holonomy field to certain invariant random subspaces.

We have presented the Yang–Mills holonomy field as a process \((H^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) with values in U(N). However, the property of gauge-invariance allows us to think of it as follows. Suppose we are given a family of complex vector spaces \(V=(V_x:x\in {\mathbb {S}}_T)\), each equipped with a Hermitian inner product and having dimension N. Choose, for each \(x\in {\mathbb {S}}_T\), a complex linear isometry \(s(x):{\mathbb {C}}^N\rightarrow V_x\). Given a Yang–Mills holonomy field \((H^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) in U(N), for each \(\gamma \in {\text {Path}}_{x,y}({\mathbb {S}}_T)\), we can define a complex linear isometry \(T_\gamma :V_x\rightarrow V_y\) by

$$\begin{aligned} T_\gamma =s(y)H^N_\gamma s(x)^{-1}. \end{aligned}$$

Then, by gauge invariance, the law of the process \((T_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) does not depend on the choice of the family of isometries \((s(x):x\in {\mathbb {S}}_T)\). We call any process with this law a Yang–Mills holonomy field in \({\text {Isom}}(V)\). The original holonomy field \((H^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) then corresponds to the case where \(V_x={\mathbb {C}}^N\) for all x. Moreover, given any Yang–Mills holonomy field \((T_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) in \({\text {Isom}}(V)\) and any choice of a family of complex linear isometries \(s(x):{\mathbb {C}}^N\rightarrow V_x\), we obtain a Yang–Mills holonomy field \((H^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) in U(N) by setting

$$\begin{aligned} H^N_\gamma =s(y)^{-1}T_\gamma s(x). \end{aligned}$$

Proposition 2.10

Let \((H^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) be a Yang–Mills holonomy field in U(N) and let E be an independent uniformly random unit vector in \({\mathbb {C}}^N\). Define, for \(l\in {\text {Loop}}({\mathbb {S}}_T)\),

$$\begin{aligned} \tau _T^N(l)=\langle E,H^N_l E\rangle . \end{aligned}$$

Then \(\tau _T^N(l)\rightarrow \varPhi _T(l)\) in probability as \(N\rightarrow \infty \) for all l.

We already know, writing \(\varPhi _T^N(l)=\mathrm {tr}_N(H^N_l)\), that

$$\begin{aligned} {\mathbb {E}}(\tau _T^N(l)|H^N)=\varPhi _T^N(l)\rightarrow \varPhi _T(l) \end{aligned}$$

in probability as \(N\rightarrow \infty \). The proposition thus shows that the same convergence in probability holds without taking the expectation over the random vector E. The extra randomness present in \(\tau _T^N\) makes it a more natural object than \(\varPhi _T^N\) in certain constructions below.

Proof

Let \(Z_1,\dots ,Z_N\) be independent complex Gaussian random variables, independent of \(H^N\), with \(Z_k=X_k+iY_k\) for independent standard normal random variables \(X_k,Y_k\). Set

$$\begin{aligned} S_N=\sum _{k=1}^N|Z_k|^2,\quad {{\tilde{E}}}=(Z_1,\dots ,Z_N)/\sqrt{S_N}. \end{aligned}$$

Then \({{\tilde{E}}}\) is a uniform random unit vector in \({\mathbb {C}}^N\). Define, for \(l\in {\text {Loop}}({\mathbb {S}}_T)\),

$$\begin{aligned} {{\tilde{\tau }}}_T^N(l) =\sum _{k=1}^N\lambda _k|{{\tilde{E}}}_k|^2 =\sum _{k=1}^N\lambda _k|Z_k|^2/S_N \end{aligned}$$

where \(\lambda _1,\dots ,\lambda _N\) is an enumeration of the eigenvalues of \(H^N_l\). Since \(H_l^N\) can be diagonalized by a unitary conjugation, and since the conditional law of E given \(H^N\) is that of a uniform random unit vector, which is invariant under unitary transformations, it follows that \({{\tilde{\tau }}}_T^N(l)\) has the same distribution as \(\tau _T^N(l)\). By Corollary 2.8, the empirical distribution of eigenvalues

$$\begin{aligned} \nu _T^N(l)=\frac{1}{N}\sum _{k=1}^N\delta _{\lambda _k} \end{aligned}$$

converges weakly in probability on the unit circle \({\mathbb {U}}\) with deterministic limit \(\nu _T(l)\) satisfying

$$\begin{aligned} \varPhi _T(l)=\int _{\mathbb {U}}\lambda \nu _T(l)(d\lambda ). \end{aligned}$$

But \({\mathbb {E}}(|Z_k|^2)=2\), so we obtain the following limits in \(L^2\)

$$\begin{aligned} \frac{1}{N}\sum _{k=1}^N\lambda _k|Z_k|^2\rightarrow 2\varPhi _T(l),\quad S_N/N\rightarrow 2. \end{aligned}$$

Hence \({{\tilde{\tau }}}_T^N(l)\rightarrow \varPhi _T(l)\) in probability as \(N\rightarrow \infty \). \(\quad \square \)

Fix a reference point \(r\in {\mathbb {S}}_T\) and consider for each \(x\in {\mathbb {S}}_T\) the vector space \({\mathcal {V}}_x\) of complex functions on \({\text {Path}}_{r,x}({\mathbb {S}}_T)\) of finite support. Thus, each \(v\in {\mathcal {V}}_x\) has the form

$$\begin{aligned} v=\sum _{i=1}^na_i\delta _{\gamma _i} \end{aligned}$$

for some \(n\ge 0\), with \(a_i\in {\mathbb {C}}\) and \(\gamma _i\in {\text {Path}}_{r,x}({\mathbb {S}}_T)\) for all i. There are unique Hermitian forms \(\langle .,.\rangle ^N_x\) and \(\langle .,.\rangle _x\) on \({\mathcal {V}}_x\) such that, for all \(\gamma _1,\gamma _2\in {\text {Path}}_{r,x}({\mathbb {S}}_T)\),

$$\begin{aligned} \langle \delta _{\gamma _1},\delta _{\gamma _2}\rangle ^N_x=\tau ^N_T(\gamma _1\gamma _2^{-1}),\quad \langle \delta _{\gamma _1},\delta _{\gamma _2}\rangle _x=\varPhi _T(\gamma _1\gamma _2^{-1}). \end{aligned}$$

By Proposition 2.10, \(\langle v,v'\rangle ^N_x\rightarrow \langle v,v'\rangle _x\) in probability for all \(v,v'\). The form \(\langle .,.\rangle ^N_x\) is non-negative definite for all N, so \(\langle .,.\rangle _x\) is also non-negative definite, as we observed in Proposition 2.3. For \(x,y\in {\mathbb {S}}_T\) and \(\gamma \in {\text {Path}}_{x,y}({\mathbb {S}}_T)\), there is a unique complex linear map \(T_\gamma :{\mathcal {V}}_x\rightarrow {\mathcal {V}}_y\) such that, for all \(\gamma _0\in {\text {Path}}_{r,x}({\mathbb {S}}_T)\),

$$\begin{aligned} T_\gamma \delta _{\gamma _0}=\delta _{\gamma _0\gamma }. \end{aligned}$$

Note that, for \(\gamma _1,\gamma _2\in {\text {Path}}_{r,x}({\mathbb {S}}_T)\),

$$\begin{aligned} \langle T_\gamma \delta _{\gamma _1},T_\gamma \delta _{\gamma _2}\rangle _y^N=\langle \delta _{\gamma _1\gamma },\delta _{\gamma _2\gamma }\rangle _y^N=\tau _T^N(\gamma _1\gamma \gamma ^{-1}\gamma _2^{-1}) =\tau _T^N(\gamma _1\gamma _2^{-1})=\langle \delta _{\gamma _1},\delta _{\gamma _2}\rangle _x^N. \end{aligned}$$

It follows that \(\langle T_\gamma v_1,T_\gamma v_2\rangle _y^N=\langle v_1,v_2\rangle _x^N\) for all \(v_1,v_2\in {\mathcal {V}}_x\). Similarly, \(T_\gamma \) preserves the form \(\langle .,.\rangle \).

For each \(x\in {\mathbb {S}}_T\), write \(V_x^N\) for the quotient of the vector space \({\mathcal {V}}_x\) by the kernel

$$\begin{aligned} {\mathcal {K}}_x^N=\{v\in {\mathcal {V}}_x:\langle v,v\rangle _x^N=0\}. \end{aligned}$$

Write \([v]^N=v+{\mathcal {K}}_x^N\). It is straightforward to check that, for \(\gamma _1,\gamma _2\in {\text {Path}}_{r,x}({\mathbb {S}}_T)\) with \(\gamma _1\sim \gamma _2\), we have \([\delta _{\gamma _1}]^N=[\delta _{\gamma _2}]^N\) in \(V_x^N\).

Proposition 2.11

Almost surely, for all \(x\in {\mathbb {S}}_T\), the random vector space \(V_x^N\) has finite dimension N.

Proof

Since \(T_\gamma ^N\) is a linear isometry for all paths \(\gamma \in {\text {Path}}({\mathbb {S}}_T)\), it will suffice to consider the case \(x=r\). Let \((l(t):t\in [0,T])\) be a family of loops in \({\text {Loop}}_x({\mathbb {S}}_T)\) such as considered in Sect. 2.2, now with \(x=r\). For \(k=1,\dots ,N\) set \(l_k=l((k-1)T/N)\). Write \(\varOmega _0\) for the event that the random vectors \(H_{l_1}E=E,H_{l_2}E,\dots ,H_{l_N}E\) are linearly independent in \({\mathbb {C}}^N\). The joint law of \(H_{l_2},\dots ,H_{l_N}\) has a density \(\rho \) with respect to the product of normalized Haar measures on \(U(N)^{N-1}\) given by

$$\begin{aligned} \rho (h_2,\dots ,h_N)=\frac{p_{T/N}(h_2)p_{T/N}(h_N)}{p_T(1)}\prod _{k=3}^Np_{T/N}(h_kh_{k-1}^{-1}). \end{aligned}$$

Consider the equivalent probability measure \({{\tilde{{\mathbb {P}}}}}\) given by

$$\begin{aligned} d{{\tilde{{\mathbb {P}}}}}/d{\mathbb {P}}=\rho (H_{l_2},\dots ,H_{l_N})^{-1}. \end{aligned}$$

Under \({{\tilde{{\mathbb {P}}}}}\), the random vectors \(H_{l_2}E,\dots ,H_{l_N}E\) are independent and uniformly distributed on the unit sphere in \({\mathbb {C}}^N\). Hence

$$\begin{aligned} {\mathbb {P}}(\varOmega _0)={{\tilde{{\mathbb {P}}}}}(\varOmega _0)=1. \end{aligned}$$

Now, for

$$\begin{aligned} v=\sum _{k=1}^Na_k\delta _{l_k} \end{aligned}$$

we have

$$\begin{aligned} \langle v,v\rangle _r^N=\left| \sum _{k=1}^Na_kH_{l_k}E\right| ^2. \end{aligned}$$

So, on the event \(\varOmega _0\), \(v\in {\mathcal {K}}^N_r\) only if \(a_1=\dots =a_N=0\), that is to say, \(\delta _{l_1}+{\mathcal {K}}_r^N,\dots ,\delta _{l_N}+{\mathcal {K}}_r^N\) are linearly independent in \(V_r^N\). On the other hand, for any loop \(l\in {\text {Loop}}_r({\mathbb {S}}_T)\), on the same event \(\varOmega _0\), \(H_lE\) is a linear combination of \(H_{l_1}E,\dots ,H_{l_N}E\), so \(\delta _l+{\mathcal {K}}_r^N\) lies in the linear span of \(\delta _{l_1}+{\mathcal {K}}_r^N,\dots ,\delta _{l_N}+{\mathcal {K}}_r^N\). Hence, on \(\varOmega _0\), the space \(V_r^N\) has dimension N. \(\quad \square \)

The form \(\langle .,.\rangle _x^N\) induces a random Hermitian inner product on \(V_x^N\), which we will denote also by \(\langle .,.\rangle ^N_x\). Moreover, for all \(x,y\in {\mathbb {S}}_T\) and all \(\gamma \in {\text {Path}}_{x,y}({\mathbb {S}}_T)\), we can define a random linear isometry \(T_\gamma ^N:V_x^N\rightarrow V_y^N\) by

$$\begin{aligned} T_\gamma ^N(v+{\mathcal {K}}^N_x)=T_\gamma v+{\mathcal {K}}_y^N. \end{aligned}$$

The family of isometries \((T^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) inherits the following properties from \((T_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\):

$$\begin{aligned} T^N_x=1_x,\quad T^N_{\gamma _1\gamma _2}=T^N_{\gamma _2}T^N_{\gamma _1}. \end{aligned}$$

Here, we have written x for the constant loop at x and \(1_x\) for the identity map on \(V_x^N\), and the second identity is valid whenever the concatenation \(\gamma _1\gamma _2\) is possible. Moreover, if \(\gamma _1\sim \gamma _2\) then \(T^N_{\gamma _1}=T^N_{\gamma _2}\). Hence, since \(\gamma \gamma ^{-1}\sim x\), we have

$$\begin{aligned} T^N_{\gamma ^{-1}}T^N_\gamma =T^N_{\gamma \gamma ^{-1}}=T^N_x=1_x. \end{aligned}$$

In fact, the family of isometries \((T^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) may be considered as a Yang–Mills holonomy field in \({\text {Isom}}(V^N)\), in a sense made precise in the following proposition.

Proposition 2.12

Conditional on \((V^N_x:x\in {\mathbb {S}}_T)\), choose a family of independent uniform random isometries \((s(x):x\in {\mathbb {S}}_T)\) with \(s(x):{\mathbb {C}}^N\rightarrow V_x^N\) for all x, and set

$$\begin{aligned} {{\tilde{H}}}_\gamma ^N=s(y)^{-1}T_\gamma ^Ns(x). \end{aligned}$$

Then \(({{\tilde{H}}}_\gamma ^N:\gamma \in {\text {Path}}({\mathbb {S}}_T))\) is a Yang–Mills holonomy field in U(N).

Proof

Consider, for each \(x\in {\mathbb {S}}_T\), the unique linear map \(\pi ^N_x:{\mathcal {V}}_x\rightarrow {\mathbb {C}}^N\) such that \(\pi _x^N(\delta _\gamma )=H^N_\gamma E\) for all \(\gamma \in {\text {Path}}_{r,x}({\mathbb {S}}_T)\). Then \(\pi ^N_x\) has kernel \({\mathcal {K}}^N_x\) and the quotient map \(V_x^N\rightarrow {\mathbb {C}}^N\), which we denote abusively also by \(\pi ^N_x\), is an isometry, by definition of \(\langle .,.\rangle _x^N\). For \(\gamma _0\in {\text {Path}}_{r,x}({\mathbb {S}}_T)\) and \(\gamma \in {\text {Path}}_{x,y}({\mathbb {S}}_T)\), we have

$$\begin{aligned} \pi _y^N(T_\gamma \delta _{\gamma _0}) =\pi _y^N(\delta _{\gamma _0\gamma }) =H^N_{\gamma _0\gamma }E =H^N_\gamma H^N_{\gamma _0}E =H^N_\gamma \pi _x^N(\delta _{\gamma _0}) \end{aligned}$$

so the quotient maps satisfy, for all \(v\in V^N_x\),

$$\begin{aligned} \pi _y^N(T^N_\gamma v)=H^N_\gamma \pi _x^N(v). \end{aligned}$$

Since Haar measure is invariant under multiplication, the random variables \((\pi _x^Ns(x):x\in {\mathbb {S}}_T)\) in U(N) are independent, uniformly distributed, and independent of \(H^N\). Now

$$\begin{aligned} {{\tilde{H}}}_\gamma ^N =(\pi _y^Ns(y))^{-1}H_\gamma ^N(\pi _x^Ns(x)) \end{aligned}$$

so \(({{\tilde{H}}}_\gamma ^N:\gamma \in {\text {Path}}({\mathbb {S}}_T))\) is a Yang–Mills holonomy field in U(N) by gauge invariance.

\(\square \)

For each \(x\in {\mathbb {S}}_T\), given a path \(\gamma \in {\text {Path}}_{r,x}({\mathbb {S}}_T)\), we can define a state \(\tau _\gamma ^N\) on the set of bounded linear operators on \(V^N_x\) by

$$\begin{aligned} \tau _\gamma ^N(A)=\langle [\delta _\gamma ]^N,A[\delta _\gamma ]^N\rangle _x^N. \end{aligned}$$

Then, for all \(l\in {\text {Loop}}_x({\mathbb {S}}_T)\),

$$\begin{aligned} \tau _\gamma ^N(T_l^N)=\langle \delta _\gamma ,T_l\delta _\gamma \rangle _x^N=\langle \delta _\gamma ,\delta _{\gamma l}\rangle _x^N=\tau _T^N(\gamma l^{-1}\gamma ^{-1})=\tau _T^N(l). \end{aligned}$$

Then, on restricting \(\tau _\gamma ^N\) to the von Neumann algebra \({\mathcal {A}}_x^N\) generated by \((T^N_l:l\in {\text {Loop}}_x({\mathbb {S}}_T))\), we obtain a non-negative unit trace \(\tau _x^N\) on \({\mathcal {A}}_x^N\), which does not depend on the choice of path \(\gamma \). This construction has been done starting from the random Hermitian forms \(\langle .,.\rangle ^N_x\) for \(x\in {\mathbb {S}}_T\). We now explore the analogous construction starting from \(\langle .,.\rangle _x\).

Consider the kernel

$$\begin{aligned} {\mathcal {K}}_x=\{v\in {\mathcal {V}}_x:\langle v,v\rangle _x=0\} \end{aligned}$$

and write \(V_x\) for the Hilbert space obtained by completing \({\mathcal {V}}_x/{\mathcal {K}}_x\) with respect to \(\langle .,.\rangle _x\). Write \([v]=v+{\mathcal {K}}_x\). Then \([\delta _{\gamma _1}]=[\delta _{\gamma _2}]\) whenever \(\gamma _1\sim \gamma _2\). For \(x,y\in {\mathbb {S}}_T\) and \(\gamma \in {\text {Path}}_{x,y}({\mathbb {S}}_T)\), there is a unique Hilbert space isometry \({{\tilde{T}}}_\gamma :V_x\rightarrow V_y\) such that, for all \(v\in {\mathcal {V}}_x\),

$$\begin{aligned} {{\tilde{T}}}_\gamma (v+{\mathcal {K}}_x)=T_\gamma v+{\mathcal {K}}_y. \end{aligned}$$

The family of isometries \(({{\tilde{T}}}_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) then has the following properties:

$$\begin{aligned} {{\tilde{T}}}_x=1_x,\quad {{\tilde{T}}}_{\gamma ^{-1}}=({{\tilde{T}}}_\gamma )^{-1},\quad {{\tilde{T}}}_{\gamma _1\gamma _2}={{\tilde{T}}}_{\gamma _2}{{\tilde{T}}}_{\gamma _1} \end{aligned}$$

where the last identity holds whenever the concatenation \(\gamma _1\gamma _2\) is possible.

For each \(x\in {\mathbb {S}}_T\), given a path \(\gamma \in {\text {Path}}_{r,x}({\mathbb {S}}_T)\), we can define a state \(\tau _\gamma \) on the set of bounded linear operators on \(V_x\) by

$$\begin{aligned} \tau _\gamma (A)=\langle [\delta _\gamma ],A[\delta _\gamma ]\rangle _x. \end{aligned}$$

Then, for all \(l\in {\text {Loop}}_x({\mathbb {S}}_T)\),

$$\begin{aligned} \tau _\gamma ({{\tilde{T}}}_l)=\langle \delta _\gamma ,T_l\delta _\gamma \rangle _x=\langle \delta _\gamma ,\delta _{\gamma l}\rangle _x=\varPhi _T(\gamma l^{-1}\gamma ^{-1})=\varPhi _T(l). \end{aligned}$$

Recall from Proposition 2.3 that \(\varPhi _T(x)=1\) and \(\varPhi _T(l_1l_2)=\varPhi _T(l_2l_1)\). Then, on restricting \(\tau _\gamma \) to the von Neumann algebra \({\mathcal {A}}_x\) generated by \((\tilde{T}_l:l\in {\text {Loop}}_x({\mathbb {S}}_T))\), we obtain a non-negative unit trace \(\tau _x\) on \({\mathcal {A}}_x\), which does not depend on the choice of path \(\gamma \).

We note some further properties of \(({\mathcal {A}}_x,\tau _x)\). First, for all integers n, and all \(l\in {\text {Loop}}_x({\mathbb {S}}_T)\),

$$\begin{aligned} \tau _x({{\tilde{T}}}_l^n)=\varPhi _T(l^n)=\int _{\mathbb {U}}\omega ^n\nu _T(l)(d\omega ) \end{aligned}$$

where \(\nu _T(l)\) is the limit spectral measure obtained in Sect. 2.6. So \(\nu _T(l)\) is the spectral measure of \({{\tilde{T}}}_l\). (Here T refers to the area of the sphere, while \({{\tilde{T}}}_l^n\) is the nth power of the transport operator \(\tilde{T}_l\) defined above.) Second, since the master field is invariant under area-preserving diffeomorphisms of \({\mathbb {S}}_T\), the choice of such a diffeomorphism \(\psi \) gives an isomorphism \(({\mathcal {A}}_x,\tau _x)\rightarrow ({\mathcal {A}}_y,\tau _y)\) whenever \(\psi (x)=y\).

Singer [50] conjectured, without explicit construction, that the von Neumann algebras \({\mathcal {A}}_x\) were factors, that is to say, their centres were trivial. We remark that, if this conjecture holds, then, since the spectral measures \(\nu _T(l)\) are absolutely continuous, at least for simple loops separating the sphere into components of equal area, as follows from Proposition 6.2, and since \(\tau _x\) is a finite normalized trace, we see that

$$\begin{aligned} \{\tau _x(p):p\in {\mathcal {A}}_x, p^2=p\}=[0,1] \end{aligned}$$

and \({\mathcal {A}}_x\) must be of type \(\text {II}_1\), with unique tracial state \(\tau _x\).

It is an open question whether in fact \({\mathcal {K}}_x\) is spanned, for all x, by vectors of the form \(\delta _\gamma -\delta _{\gamma _0}\), where \(\gamma ,\gamma _0\in {\text {Path}}_{r,x}({\mathbb {S}}_T)\) and, for some sequence \((\gamma _n:n\in {\mathbb {N}})\) in \({\text {Path}}_{r,x}({\mathbb {S}}_T)\) with \(\gamma _n\sim \gamma _0\) for all n, we have \(\gamma _n\rightarrow \gamma \) in length. Such vectors are known to lie in \({\mathcal {K}}^N_x\) for all N, so a positive answer would allow us to identify \(V_x^N\) with the orthogonal complement of \({\mathcal {K}}^N_x/{\mathcal {K}}_x\) in \(V_x\). Then the Yang–Mills holonomy field \((T^N_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) in \({\text {Isom}}(V^N)\) would be obtained by restricting the family of isometries \((\tilde{T}_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) to the N-dimensional random subspaces \((V_x^N:x\in {\mathbb {S}}_T)\).

3 Harmonic Analysis in U(N) and a Discrete \(\beta \)-ensemble

3.1 A representation formula

Let \(H=(H_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) be a Yang–Mills holonomy field in U(N). Here, and from now on, we suppress mention of N in the notation for H. On the other hand, for \(l\in {\text {Loop}}({\mathbb {S}}_T)\) and \(n\in {\mathbb {Z}}\), we will write \(H^n_l\) for the nth power of the matrix \(H_l\) which, by the multiplicative property, is also given by \(H_{l^n}\). We obtain in this subsection a key formula for the moments of the holonomy \(H_l\) of a simple loop l in terms of a certain discrete \(\beta \)-ensemble, with \(\beta =2\). Set

$$\begin{aligned} {\mathbb {Z}}_\mathrm {sym}={\left\{ \begin{array}{ll}{\mathbb {Z}},&{}\text { if }N\text { is odd},\\ {\mathbb {Z}}+1/2,&{} \text { if }N\text { is even.}\end{array}\right. } \end{aligned}$$

Consider the discrete \(\beta \)-ensemble \(\varLambda \) in \(N^{-1}{\mathbb {Z}}_\mathrm {sym}\) given by

$$\begin{aligned} {\mathbb {P}}\left( \varLambda =\lambda \right) \propto \prod _{\begin{array}{c} j,k=1\\ j<k \end{array}}^N(\lambda _j-\lambda _k)^2\prod _{i=1}^Ne^{-N\lambda _i^2T/2} \end{aligned}$$
(16)

where \(\lambda \) runs over decreasing sequences \((\lambda _1,\dots ,\lambda _N)\) in \(N^{-1}{\mathbb {Z}}_\mathrm {sym}\). For \(\alpha \in {\mathbb {R}}{\setminus }\{0\}\) and for \(z\in {\mathbb {C}}\) with \(|\alpha ||z-\lambda _j|>1\) for all j, set

$$\begin{aligned} G^\alpha _\lambda (z)=\frac{\alpha }{N}\sum _{j=1}^N{\text {Log}}\left( 1+\frac{1}{\alpha (z-\lambda _j)}\right) \end{aligned}$$

where \({\text {Log}}\) denotes the principal value of the logarithm. Then, for \(a\in (0,T)\), set \(I_0^a(\lambda )=1\) and define for \(n\in {\mathbb {Z}}{\setminus }\{0\}\)

$$\begin{aligned} I_n^a(\lambda )=\frac{e^{-an^2/(2N)}}{2\pi in}\int _\gamma \exp \{-n(az-G^{N/n}_\lambda (z))\}dz \end{aligned}$$

where \(\gamma \) is any positively oriented simple loop around the set

$$\begin{aligned} {[}\lambda _N,\lambda _1]+\{z\in {\mathbb {C}}:|z|\le |n|/N\}. \end{aligned}$$

It is straightforward to check that \(I_n^a(\lambda )\) does not depend on the choice of \(\gamma \).
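As an illustration (not used in the arguments below), \(I_n^a(\lambda )\) can be evaluated numerically by discretizing the contour. In the following sketch the toy configuration, the area a and the two radii are arbitrary choices, made so that the constraint \(|\alpha ||z-\lambda _j|>1\) with \(\alpha =N/n\) holds on both circles; the two printed values should coincide up to discretization error, illustrating the contour independence.

```python
import numpy as np

def G(z, lam, alpha):
    # G^alpha_lambda(z) = (alpha/N) * sum_j Log(1 + 1/(alpha*(z - lambda_j))), principal branch
    return (alpha / len(lam)) * np.sum(np.log(1.0 + 1.0 / (alpha * (z - lam))))

def I(n, a, lam, radius, num_points=4000):
    # e^{-a n^2/(2N)} / (2 pi i n) times the contour integral of exp{-n(a z - G^{N/n}_lambda(z))}
    # over a circle of the given radius centred at the midpoint of [lambda_N, lambda_1]
    N = len(lam)
    centre = 0.5 * (lam.min() + lam.max())
    theta = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    z = centre + radius * np.exp(1j * theta)
    vals = np.array([np.exp(-n * (a * zz - G(zz, lam, N / n))) for zz in z])
    dz = 1j * (z - centre) * (2.0 * np.pi / num_points)
    return np.exp(-a * n**2 / (2.0 * N)) / (2.0j * np.pi * n) * np.sum(vals * dz)

# a toy decreasing configuration in N^{-1} Z_sym for N = 4 (so Z_sym = Z + 1/2)
lam = np.array([1.5, 0.5, -0.5, -1.5]) / 4
for radius in (1.0, 2.0):   # both circles enclose [lambda_N, lambda_1] + {|z| <= n/N}
    print(I(n=2, a=0.7, lam=lam, radius=radius))
```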

Proposition 3.1

Let \(l\in {\text {Loop}}({\mathbb {S}}_T)\) be a simple loop which divides \({\mathbb {S}}_T\) into components of areas a and b. Then, for all \(m,n\in {\mathbb {Z}}\),

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(H_l^{-m})\mathrm {tr}(H_l^n))={\mathbb {E}}(I_m^a(\varLambda )I_n^b(\varLambda )). \end{aligned}$$

Here and from now on, we suppress the N in our notation for the normalized trace on U(N). This formula allows us to prove the convergence of the random variables \(\mathrm {tr}(H^n_l)\) for simple loops l, as will be explained in Sect. 3.2.

To prove Proposition 3.1, we will use the decomposition of the heat kernel as a sum over the characters of U(N). The results we use may be found for example in [36]. For \(\lambda \in ({\mathbb {Z}}_\mathrm {sym})^N\), set

$$\begin{aligned} \Vert \lambda \Vert ^2=\frac{1}{N}\sum _{j=1}^N\lambda _j^2. \end{aligned}$$

Write \(\rho =(\rho _1,\dots ,\rho _N)\) for the unique minimizer of \(\Vert .\Vert \) among decreasing sequences in \(({\mathbb {Z}}_\mathrm {sym})^N\), which is given by

$$\begin{aligned} \rho _j=\frac{1}{2}(N+1)-j. \end{aligned}$$

For \(\lambda \in {\mathbb {Z}}^N\), there is a unique continuous function \(\chi _\lambda :U(N)\rightarrow {\mathbb {C}}\) given by the Weyl character formula

$$\begin{aligned} \chi _\lambda (g)\det (e^{i\theta _j\rho _k})_{j,k=1}^N=\det (e^{i\theta _j(\lambda _k+\rho _k)})_{j,k=1}^N,\quad g\in U(N) \end{aligned}$$
(17)

where \(e^{i\theta _1},\dots ,e^{i\theta _N}\) are the eigenvalues of g. Then

$$\begin{aligned} (\chi _\lambda :\lambda \in {\mathbb {Z}}^N,\lambda _1\ge \dots \ge \lambda _N) \end{aligned}$$

is a parametrization of the set of characters of irreducible representations of U(N). For characters \(\chi _\lambda \) and \(\chi _\mu \), we have

$$\begin{aligned} \int _{U(N)}\chi _\lambda (g)\overline{\chi _\mu (g)}dg=\int _{U(N)}\chi _\lambda (g)\chi _\mu (g^{-1})dg=\delta _{\lambda ,\mu } \end{aligned}$$
(18)

and

$$\begin{aligned} \varDelta \chi _\lambda =-(\Vert \lambda +\rho \Vert ^2-\Vert \rho \Vert ^2)\chi _\lambda . \end{aligned}$$
(19)

Moreover, the heat kernel \((p_t(g):t\in (0,\infty ),g\in U(N))\) is given by the following absolutely convergent sum

$$\begin{aligned} p_t(g)=e^{\Vert \rho \Vert ^2t/2}\sum _\lambda \chi _\lambda (1)\chi _\lambda (g)e^{-\Vert \lambda +\rho \Vert ^2t/2}. \end{aligned}$$
(20)

The character values at the identity are given by the Weyl dimension formula

$$\begin{aligned} \chi _\lambda (1)=\prod _{\begin{array}{c} j,k=1\\ j<k \end{array}}^N\frac{\lambda _j+\rho _j-\lambda _k-\rho _k}{\rho _j-\rho _k}. \end{aligned}$$
(21)

The change of variable \(\mu =\lambda +\rho \) gives a convenient reparametrization of the set of characters by

$$\begin{aligned} W=\{\mu \in ({\mathbb {Z}}_\mathrm {sym})^N:\mu _1>\dots >\mu _N\}. \end{aligned}$$

For \(x\in ({\mathbb {Z}}_\mathrm {sym})^N\) with all components distinct, we will write [x] for the decreasing rearrangement of x. From (17), we see that,

$$\begin{aligned} \chi _{x-\rho }=\varepsilon (x)\chi _{[x]-\rho } \end{aligned}$$

where

$$\begin{aligned} \varepsilon (x)={\left\{ \begin{array}{ll}{\text {sgn}}(\sigma ),&{} \text {if }x\text { has all components distinct},\\ 0,&{}\text {otherwise},\end{array}\right. } \end{aligned}$$

where \(\sigma \) is the unique permutation such that \([x]_j=x_{\sigma (j)}\) for all j. Then the orthogonality relation (18) extends to all \(x,y\in ({\mathbb {Z}}_\mathrm {sym})^N\) in the form

$$\begin{aligned} \int _{U(N)}\chi _{x-\rho }(g)\chi _{y-\rho }(g^{-1})dg=\varepsilon (x)\varepsilon (y)\delta _{[x],[y]}. \end{aligned}$$
(22)

To compute the desired moments of holonomy traces, we shall need the following product formula, which may be obtained from (17) by a straightforward computation. For all \(n\in {\mathbb {Z}}\), we have

$$\begin{aligned} \chi _\lambda (g)\mathrm {Tr}(g^n) =\sum _{j=1}^N\chi _{\lambda +n\omega ^j}(g) \end{aligned}$$
(23)

where \(\omega ^j\) is the jth elementary vector in \({\mathbb {Z}}^N\).
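As a sanity check on (17) and (23), not needed for the proofs, one can evaluate \(\chi _\lambda \) directly as a ratio of determinants at a point with prescribed eigenvalue angles; the extension of (17) to arbitrary integer sequences, including the sign and vanishing conventions encoded by \(\varepsilon \), is automatic in this form. The weight and the angles in the sketch below are arbitrary.

```python
import numpy as np

def chi(lam, thetas):
    # Weyl character formula (17) as a ratio of determinants, at eigenvalue angles thetas
    N = len(thetas)
    rho = 0.5 * (N + 1) - np.arange(1, N + 1)
    num = np.linalg.det(np.exp(1j * np.outer(thetas, np.asarray(lam, dtype=float) + rho)))
    den = np.linalg.det(np.exp(1j * np.outer(thetas, rho)))
    return num / den

N, n = 3, 2
lam = np.array([2.0, 1.0, 0.0])              # a decreasing weight for U(3)
thetas = np.array([0.3, 1.1, -0.7])          # eigenvalue angles of some g in U(3)
lhs = chi(lam, thetas) * np.sum(np.exp(1j * n * thetas))           # chi_lambda(g) * Tr(g^n)
rhs = sum(chi(lam + n * np.eye(N)[j], thetas) for j in range(N))   # sum of chi_{lambda + n w^j}(g)
print(np.allclose(lhs, rhs))                 # True
```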

Proof of Proposition 3.1

From the definition of the Yang–Mills measure, we have

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(H_l^{-m})\mathrm {tr}(H_l^{n}))\propto \int _{U(N)}p_a(g)\mathrm {tr}(g^{-m})\mathrm {tr}(g^{n})p_b(g^{-1})dg \end{aligned}$$

where \(\propto \) signifies equality up to a constant independent of m and n. We expand the heat kernel in characters to obtain

$$\begin{aligned}&\int _{U(N)}p_a(g)\mathrm {tr}(g^{-m})\mathrm {tr}(g^{n})p_b(g^{-1})dg\\&\quad \propto \sum _{\lambda ,\mu \in W}e^{-\Vert \lambda \Vert ^2a/2-\Vert \mu \Vert ^2b/2}\chi _{\lambda -\rho }(1)\chi _{\mu -\rho }(1)\\&\qquad \int _{U(N)}\chi _{\lambda -\rho }(g)\mathrm {tr}(g^{-m})\mathrm {tr}(g^{n})\chi _{\mu -\rho }(g^{-1})dg. \end{aligned}$$

The interchange of summation and integration here is valid because \(a,b>0\) which ensures absolute convergence. By orthogonality of characters (22) and the product rule (23), for all \(\lambda ,\mu \in W\),

$$\begin{aligned}&\int _{U(N)}\chi _{\lambda -\rho }(g)\mathrm {tr}(g^{-m})\mathrm {tr}(g^{n})\chi _{\mu -\rho }(g^{-1})dg\\&\quad \quad =\frac{1}{N^2}\sum _{j,k=1}^N\varepsilon (\lambda -m\omega ^j)\varepsilon (\mu -n\omega ^k)\delta _{[\lambda -m\omega ^j],[\mu -n\omega ^k]}. \end{aligned}$$

Now, for \(\nu \in W\), we have \([\lambda -m\omega ^j]=[\mu -n\omega ^k]=\nu \) for some \(j,k\in \{1,\dots ,N\}\) if and only if \(\lambda =[\nu +m\omega ^{j'}]\) and \(\mu =[\nu +n\omega ^{k'}]\) for some \(j',k'\in \{1,\dots ,N\}\), and then

$$\begin{aligned} N\Vert \lambda \Vert ^2=N\Vert \nu \Vert ^2+2m\nu _{j'}+m^2,\quad N\Vert \mu \Vert ^2=N\Vert \nu \Vert ^2+2n\nu _{k'}+n^2 \end{aligned}$$

and

$$\begin{aligned} \varepsilon (\lambda -m\omega ^j)=\varepsilon (\nu +m\omega ^{j'}),\quad \varepsilon (\mu -n\omega ^k)=\varepsilon (\nu +n\omega ^{k'}) \end{aligned}$$

so, using the dimension formula (21),

$$\begin{aligned} \chi _{\lambda -\rho }(1)\varepsilon (\lambda -m\omega ^j)&=\chi _{\nu +m\omega ^{j'}-\rho }(1)=\chi _{\nu -\rho }(1)\prod _{i\not =j'}\frac{\nu _{j'}+m-\nu _i}{\nu _{j'}-\nu _i},\\ \chi _{\mu -\rho }(1)\varepsilon (\mu -n\omega ^k)&=\chi _{\nu +n\omega ^{k'}-\rho }(1)=\chi _{\nu -\rho }(1)\prod _{i\not =k'}\frac{\nu _{k'}+n-\nu _i}{\nu _{k'}-\nu _i}. \end{aligned}$$

Hence

$$\begin{aligned}&\int _{U(N)}p_a(g)\mathrm {tr}(g^{-m})\mathrm {tr}(g^{n})p_b(g^{-1})dg\\&\quad \quad \propto \sum _{\nu \in W}\prod ^N_{\begin{array}{c} j,k=1\\ j<k \end{array}}(\nu _j-\nu _k)^2e^{-\Vert \nu \Vert ^2T/2}J(\nu ,m,a)J(\nu ,n,b) \end{aligned}$$

where

$$\begin{aligned} J(\nu ,m,a)=e^{-m^2a/(2N)}\frac{1}{N}\sum _{j=1}^Ne^{-ma\nu _j/N}\prod _{i\not =j}\frac{\nu _j+m-\nu _i}{\nu _j-\nu _i}. \end{aligned}$$

Note that \(J(\nu ,0,a)=1=I^a_0(\nu /N)\) and, for \(|m|\ge 1\),

$$\begin{aligned} J(\nu ,m,a)&=\frac{e^{-m^2a/(2N)}}{2\pi imN}\int _{N\gamma (\nu )}\prod _{j=1}^N\left( 1+\frac{m}{z-\nu _j}\right) e^{-maz/N}dz\\&=\frac{e^{-m^2a/(2N)}}{2\pi im}\int _{\gamma (\nu )}\exp \{-m(az-G_{\nu /N}^{N/m}(z))\}dz=I_m^a(\nu /N) \end{aligned}$$

where \(\gamma (\nu )\) is a positively oriented simple loop around

$$\begin{aligned} {[}\nu _N/N,\nu _1/N]+\{z\in {\mathbb {C}}:|z|\le |m|/N\}. \end{aligned}$$

So we obtain

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(H_l^{-m})\mathrm {tr}(H_l^{n}))&\propto \sum _{\nu \in W}\prod _{\begin{array}{c} j,k=1\\ j<k \end{array}}^N(\nu _j-\nu _k)^2e^{-\Vert \nu \Vert ^2T/2}I_m^a(\nu /N)I_{n}^b(\nu /N)\\&\propto \sum _{N\lambda \in W}\prod _{\begin{array}{c} j,k=1\\ j<k \end{array}}^N(\lambda _j-\lambda _k)^2\prod _{i=1}^Ne^{-N\lambda ^2_iT/2}I_m^a(\lambda )I_{n}^b(\lambda )\\&\propto {\mathbb {E}}(I^a_m(\varLambda )I_{n}^b(\varLambda )). \end{aligned}$$

The constant of proportionality here does not depend on m or n and, since both sides of the identity \({\mathbb {E}}(\mathrm {tr}(H_l^{-m})\mathrm {tr}(H_l^{n}))={\mathbb {E}}(I^a_m(\varLambda )I_{n}^b(\varLambda ))\) equal 1 when \(m=n=0\), the constant must be 1. Hence the identity holds for all m and n. \(\quad \square \)

The first part of the above proof follows ideas from the physics literature [6, 14]. The use of contour integrals in writing the function J and in the formulation of Proposition 3.1 is new and provides us with a route to make rigorous the asymptotics performed in  [6, 14].

3.2 Concentration for the discrete \(\beta \)-ensemble and tightness of the support

We shall need two facts about the discrete \(\beta \)-ensemble \(\varLambda \) defined in equation (16). Denote by \(\pi _N\) the law on \({\mathcal {M}}_1({\mathbb {R}})\) of the normalized empirical distribution

$$\begin{aligned} \mu _\varLambda =\frac{1}{N}\sum _{i=1}^N\delta _{\varLambda _i}. \end{aligned}$$

Recall from (5) the functional

$$\begin{aligned} {\mathcal {I}}_T(\mu )=\int _{{\mathbb {R}}^2}\left\{ \tfrac{1}{2}(x^2+y^2)T-2\log |x-y|\right\} \mu (dx)\mu (dy) \end{aligned}$$

defined for probability measures \(\mu \) on \({\mathbb {R}}\) such that \(\mu ([a,b])\le b-a\) for all intervals \([a,b]\). We extend \({\mathcal {I}}_T\) to \({\mathcal {M}}_1({\mathbb {R}})\) by setting \({\mathcal {I}}_T(\mu )=\infty \) if \(\mu \) does not satisfy this constraint. Guionnet and Maïda [30] showed the following large deviation principle.

Theorem 3.2

The sequence of probability measures \((\pi _N:N\in {\mathbb {N}})\) satisfies a large deviation principle on \({\mathcal {M}}_1({\mathbb {R}})\) with rate function \({\mathcal {I}}_T\) and speed \(N^2\).

Let us remark that this result also allows us to prove, for all \(T\in (0,\infty )\), the existence of the limit

$$\begin{aligned} F(T)=\lim _{N\rightarrow \infty }N^{-2}\log (p^N_T(1)) \end{aligned}$$
(24)

where \(p^N_T(1)\) denotes the heat kernel of U(N) on the diagonal at time T. This approach was followed by Lévy and Maïda, who obtained in [43, Proposition 5.2] an exact formula for F. They showed moreover that F is \(C^2\) on \((0,\infty )\) and \(C^\infty \) on \((0,\pi ^2)\cup (\pi ^2,\infty )\), but that the third derivative has a discontinuity at \(\pi ^2\). In doing so, they gave a rigorous proof of the Douglas–Kazakov phase transition [15] and of the fact that it is of third order. See also [7] for another approach using tools of statistical mechanics. We call \(T\in (0,\pi ^2)\) the subcritical regime and \(T\in (\pi ^2,\infty )\) the supercritical regime. We shall see in Sects. 6.1 and 6.2 that, in the limit \(N\rightarrow \infty \), the behaviour of the eigenvalues of the unitary Brownian loop of length T differs markedly between the two regimes.
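To get a concrete feel for the ensemble (16) and its equilibrium measure, one can simulate it directly. The sketch below runs a crude Metropolis chain on configurations in \(N^{-1}{\mathbb {Z}}_\mathrm {sym}\); the chain, the parameters and the number of steps are ad hoc choices made only for illustration, and nothing in the sequel depends on it.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_weight(lam, N, T):
    # log of the unnormalized probability (16) of a configuration lam
    diffs = lam[:, None] - lam[None, :]
    iu = np.triu_indices(N, k=1)
    return 2.0 * np.sum(np.log(np.abs(diffs[iu]))) - 0.5 * N * T * np.sum(lam**2)

def sample(N=30, T=4.0, steps=200_000):
    # crude Metropolis chain on strictly decreasing configurations in N^{-1} Z_sym
    shift = 0.5 if N % 2 == 0 else 0.0          # Z_sym = Z + 1/2 when N is even
    k = np.arange(N) - N // 2                    # integer lattice coordinates, all distinct
    lam = (k + shift) / N
    lw = log_weight(lam, N, T)
    for _ in range(steps):
        i = rng.integers(N)
        k_prop = k.copy()
        k_prop[i] += rng.choice([-1, 1])
        if np.any(np.delete(k_prop, i) == k_prop[i]):   # particles must stay distinct
            continue
        lam_prop = (k_prop + shift) / N
        lw_prop = log_weight(lam_prop, N, T)
        if np.log(rng.random()) < lw_prop - lw:
            k, lam, lw = k_prop, lam_prop, lw_prop
    return np.sort(lam)

lam = sample()
# for T < pi^2 the empirical measure should be close to the semicircle law of variance 1/T,
# whose support is [-2/sqrt(T), 2/sqrt(T)]; here T = 4, so the support is roughly [-1, 1]
print(lam.min(), lam.max())
```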

We need also a tightness result for the positions \(\varLambda _N\) and \(\varLambda _1\) of the leftmost and rightmost particles, which is obtained by a variation on ideas of Johansson [32]. See also Féral [20], who adapts to the discrete case some arguments of Ben Arous, Dembo and Guionnet [3, Section 6] for eigenvalues of GOE matrices.

Lemma 3.3

Set

$$\begin{aligned} \varLambda ^*=\max \{|\varLambda _1|,|\varLambda _N|\}. \end{aligned}$$

For all \(a\in [0,\infty )\), there are constants \(C,R<\infty \) depending only on a and T such that

$$\begin{aligned} {\mathbb {E}}\left( e^{a\varLambda ^*}1_{\{\varLambda ^*\ge R\}}\right) \le Ce^{-N}. \end{aligned}$$

Proof

It will be convenient in this proof to label the particle positions in increasing order, where before we labelled them in decreasing order, so \(\varLambda _N\) now denotes the position of the rightmost particle. Then, by symmetry, it will suffice to show that, for all \(a\in [0,\infty )\), there are constants \(C,R<\infty \) depending only on a and T such that

$$\begin{aligned} {\mathbb {E}}\left( e^{a\varLambda _N}1_{\{\varLambda ^*=\varLambda _N\ge R\}}\right) \le Ce^{-N}. \end{aligned}$$

Fix N and, for \(M\in {\mathbb {N}}\), set

$$\begin{aligned} Z_M=\sum _\lambda \prod _{\begin{array}{c} j,k=1\\ j<k \end{array}}^M(\lambda _j-\lambda _k)^2\prod _{i=1}^Me^{-N\lambda _i^2T/2} \end{aligned}$$

where the sum is taken over the set \(S_M\) of increasing sequences \(\lambda =(\lambda _1,\dots ,\lambda _M)\) in \(N^{-1}{\mathbb {Z}}_\mathrm {sym}\). Only the cases \(M=N-1\) and \(M=N\) will be considered further. In the following calculation, we write the possible values of \(\varLambda =(\varLambda _1,\dots ,\varLambda _N)\) in the form \((\lambda ,\lambda _N)\), where \(\lambda =(\lambda _1,\dots ,\lambda _{N-1})\in S_{N-1}\) and \(\lambda _N\in N^{-1}{\mathbb {Z}}_\mathrm {sym}\), and we write \(\lambda ^*\) for \(\max \{|\lambda _1|,|\lambda _{N-1}|\}\). We have

$$\begin{aligned}&{\mathbb {E}}\left( e^{a\varLambda _N}1_{\{\varLambda ^*=\varLambda _N\ge R\}}\right) \\&\quad \quad =\frac{1}{Z_N}\sum _{\lambda _N}\sum _\lambda e^{a\lambda _N}1_{\{\lambda _N\ge R,\lambda ^*\le \lambda _N\}}\prod _{\begin{array}{c} j,k=1\\ j<k \end{array}}^N(\lambda _j-\lambda _k)^2\prod _{i=1}^Ne^{-N\lambda _i^2T/2}\\&\quad \quad \le \frac{1}{Z_N} \sum _s \sum _\lambda e^{as} 1_{\{s\ge R,\lambda ^*\le s\}} \prod _{\begin{array}{c} j,k=1\\ j<k \end{array}}^{N-1}(\lambda _j-\lambda _k)^2 \prod _{i=1}^{N-1} (s-\lambda _i)^2 \prod _{i=1}^{N-1} e^{-N\lambda _i^2T/2} e^{-Ns^2T/2} \\&\quad \quad \le \frac{Z_{N-1}}{Z_N}\sum _se^{as-Ns^2T/2}1_{\{s\ge R\}}(4s^2)^{N-1} \end{aligned}$$

where s and \(\lambda _N\) are summed over \(N^{-1}{\mathbb {Z}}_\mathrm {sym}\) and \(\lambda \) is summed over \(S_{N-1}\). We will show in Lemma 3.4 that there is a constant \(c\in (0,\infty )\), depending only on T, such that

$$\begin{aligned} Z_{N-1}/Z_N\le e^{cN}. \end{aligned}$$

We can now choose \(C,R\in (0,\infty )\), depending only on a, c and T, so that

$$\begin{aligned} \sum _s e^{as-Ns^2T/2}1_{\{s\ge R\}} (4s^2)^{N-1} \le Ce^{-(c+1)N} \end{aligned}$$

for all N, so obtaining the desired estimate

$$\begin{aligned} {\mathbb {E}}\left( e^{a\varLambda _N}1_{\{\varLambda ^*=\varLambda _N\ge R\}}\right) \le Ce^{-N}. \end{aligned}$$

\(\square \)

It remains to prove the following estimate, which limits the rate of decay of \(Z_N\) as \(N\rightarrow \infty \).

Lemma 3.4

There exists \(c\in (0,\infty )\), depending only on T, such that, for all \(N\ge 2\),

$$\begin{aligned} Z_{N-1}/Z_N\le e^{cN}. \end{aligned}$$

Proof

Let us consider again the set \(S_{N-1}\) of increasing sequences \(\lambda =(\lambda _1,\dots ,\lambda _{N-1})\) in \(N^{-1}{\mathbb {Z}}_\mathrm {sym}\). For \(\lambda \in S_{N-1}\), set

$$\begin{aligned} E(\lambda )=\prod _{j<k}(\lambda _j-\lambda _k)^2\prod _ie^{-(N-1)T\lambda _i^2/2} \end{aligned}$$

and set

$$\begin{aligned} A_N=\frac{1}{N}\sum _{s\in N^{-1}{\mathbb {Z}}_\mathrm {sym}}e^{-Ts^2/2},\quad E_N=\sup _{\lambda \in S_{N-1}}E(\lambda ). \end{aligned}$$

Note that

$$\begin{aligned} A_N\le \frac{1}{N}+\int _{\mathbb {R}}e^{-Tx^2/2}dx\le 1+\sqrt{\frac{2\pi }{T}}. \end{aligned}$$

We will show that, for \(r=1+4/T\), there exists \(\lambda (N)\in S_{N-1}\) with \(\lambda (N)^*\le r\) such that

$$\begin{aligned} E_N=E(\lambda (N)). \end{aligned}$$
(25)

Now

$$\begin{aligned} Z_{N-1}&=\sum _{\lambda \in S_{N-1}}\prod _{j<k}(\lambda _j-\lambda _k)^2\prod _ie^{-NT\lambda _i^2/2} =\sum _{\lambda \in S_{N-1}}E(\lambda )\prod _ie^{-T\lambda _i^2/2}\\&\le \sum _{\lambda \in S_{N-1}}E_N\prod _ie^{-T\lambda _i^2/2} =(N^N/N!)E_NA_N^{N-1} \le e^NE_NA_N^{N-1}. \end{aligned}$$

On the other hand, there exists \(s\in N^{-1}{\mathbb {Z}}_\mathrm {sym}\) with \(s\in [2r,3r]\) so, by considering the single term in the sum where \(\lambda =(\lambda (N),s)\),

$$\begin{aligned} Z_N&= \sum _{\lambda \in S_N} \prod _{j<k}(\lambda _j-\lambda _k)^2 \prod _ie^{-NT\lambda _i^2/2} \\&\ge \prod _{j<k}(\lambda _j(N)-\lambda _k(N))^2 \prod _i(\lambda _i(N)-s)^2 \prod _ie^{-NT\lambda _i(N)^2/2}e^{-NTs^2/2}\\&= E_N \prod _i(\lambda _i(N)-s)^2 \prod _ie^{-T\lambda _i(N)^2/2}e^{-NTs^2/2}\\&\ge E_N e^{-(N-1)Tr^2/2}e^{-9NTr^2/2}. \end{aligned}$$

Hence

$$\begin{aligned} Z_{N-1}/Z_N\le e^NA_N^{N-1}e^{(N-1)Tr^2/2}e^{9NTr^2/2} \end{aligned}$$

which is a bound of the desired form.

It remains to show (25). To see this, given \(\lambda \in S_{N-1}\) with \(\lambda _{N-1}=\lambda ^*=t\ge 1+4/T\), we can choose \(s\in N^{-1}{\mathbb {Z}}_\mathrm {sym}{\setminus }\{\lambda _1,\dots ,\lambda _{N-2}\}\) with \(|s|\le 1\) and consider the increasing rearrangement \({{\tilde{\lambda }}}\) of \((\lambda _1,\dots ,\lambda _{N-2},s)\). Note that

$$\begin{aligned} \sum _{i=1}^{N-2}\log |s-\lambda _i|\ge \sum _{x\in N^{-1}{\mathbb {Z}}_\mathrm {sym}{\setminus }\{s\}:|x-s|\le 1}\log |s-x|\ge 2N\int _0^1\log xdx=-2N \end{aligned}$$

whereas, since \(\log t\le t-1\) and \(t\ge 1+4/T\),

$$\begin{aligned} T(t^2-1)/2-2\log t-4\ge T(t^2-1)/2-2(t-1)-4\ge 0. \end{aligned}$$

Then

$$\begin{aligned} E(\lambda )/E({{\tilde{\lambda }}})\le & {} t^{2N}\prod _{i=1}^{N-2}(s-\lambda _i)^{-2}e^{-NT(t^2-1)/2}\\\le & {} \exp \{-N(T(t^2-1)/2-2\log t-4)\}\le 1. \end{aligned}$$

A similar argument applies if \(\lambda _1=-\lambda ^*\le -1-4/T\). By iterating this procedure, we can find \(\mu \in S_{N-1}\) with \(\mu ^*\le 1+4/T\) and \(E(\mu )\ge E(\lambda )\). Since there are only finitely many sequences \(\mu \in S_{N-1}\) with \(\mu ^*\le 1+4/T\), this establishes the claim. \(\quad \square \)

3.3 Dimension-free continuity estimate for the holonomy of a simple loop

The following estimate will be needed for the proof of Proposition 2.7.

Lemma 3.5

There is a constant \(K_T\in (0,\infty )\), depending only on T and in particular independent of N, such that, for any simple loop \(l\in {\text {Loop}}({\mathbb {S}}_T)\) dividing \({\mathbb {S}}_T\) into components of areas a and b,

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(I-H_l))\le K_T\min (a,b). \end{aligned}$$

From the following proof, it should be possible to show that \(K_T\) is bounded as a function of T. We shall not use this fact and will not prove it here.

Proof

By symmetry, it suffices to consider the case \(b\le a\). Define, for each decreasing sequence \(\lambda =(\lambda _1,\dots ,\lambda _N)\) in \(N^{-1}{\mathbb {Z}}_\mathrm {sym}\),

$$\begin{aligned} D(b,\lambda )=\frac{1}{2\pi i}\int _\gamma e^{-b(z+1/(2N))}\exp \{G^N_\lambda (z)\}dz \end{aligned}$$

where \(\gamma \) is a positively oriented simple loop around the set \([\lambda _N,\lambda _1]+\{z\in {\mathbb {C}}:|z|\le 1/N\}\). We use the residue theorem to compute

$$\begin{aligned} D(b,\lambda )=\frac{1}{N}\sum _{j=1}^Ne^{-b(\lambda _j+1/(2N))}\prod _{i\not =j}\frac{\lambda _j+N^{-1}-\lambda _i}{\lambda _j-\lambda _i}. \end{aligned}$$

Note the identity

$$\begin{aligned} \frac{1}{N}\sum _{j=1}^N\prod _{i\not =j}\frac{\lambda _j+N^{-1}-\lambda _i}{\lambda _j-\lambda _i}=1. \end{aligned}$$
(26)

This may be seen, for example, by evaluating both sides of the product rule (23) for \(\chi _{\rho +N\lambda }(g)\mathrm {Tr}(g)\) at \(g=1\) using the dimension formula (21). Moreover, since \(\lambda \in N^{-1}{\mathbb {Z}}_\mathrm {sym}\), all the terms in the sum (26) are non-negative. Now, for all j,

$$\begin{aligned} 1-e^{-b(\lambda _j+\frac{1}{2N})}\le b|\lambda _j+1/(2N)|\le b(\lambda ^*+1/(2N)) \end{aligned}$$

so

$$\begin{aligned} 1-D(b,\lambda ) =\frac{1}{N}\sum _{j=1}^N(1-e^{-b(\lambda _j+1/(2N))})\prod _{i\not =j}\frac{\lambda _j+N^{-1}-\lambda _i}{\lambda _j-\lambda _i} \le b(\lambda ^*+1/(2N)). \end{aligned}$$

But, by Proposition 3.1, we have

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(H_l))={\mathbb {E}}(D(b,\varLambda )) \end{aligned}$$

so

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(I-H_l))={\mathbb {E}}(1-D(b,\varLambda ))\le b\left( {\mathbb {E}}(\varLambda ^*)+\frac{1}{2N}\right) \end{aligned}$$

as required, since, by Lemma 3.3, \({\mathbb {E}}(\varLambda ^*)\) is bounded uniformly in N. \(\quad \square \)
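As an aside, the identity (26) used above is easy to test numerically; the following sketch, with an arbitrary admissible configuration, is included only as a sanity check.

```python
import numpy as np

def lhs_of_26(lam):
    # (1/N) * sum_j prod_{i != j} (lambda_j + 1/N - lambda_i) / (lambda_j - lambda_i)
    N = len(lam)
    total = 0.0
    for j in range(N):
        num = np.prod([lam[j] + 1.0 / N - lam[i] for i in range(N) if i != j])
        den = np.prod([lam[j] - lam[i] for i in range(N) if i != j])
        total += num / den
    return total / N

N = 5                                             # N odd, so Z_sym = Z
lam = np.array([3.0, 1.0, 0.0, -2.0, -4.0]) / N   # a strictly decreasing sequence in N^{-1} Z
print(lhs_of_26(lam))                             # 1.0 up to rounding
```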

3.4 Evaluation of some contour integrals

In passing from the limit particle density \(\rho _T\) for the \(\beta \)-ensemble, as given in (8), to the evaluation of the master field on simple loops, we will need to evaluate certain contour integrals expressed in terms of the Stieltjes transform

$$\begin{aligned} G_T(z)=\int _{\mathbb {R}}\frac{\rho _T(x)dx}{z-x}. \end{aligned}$$

The following calculation is taken from [6, 14].

Proposition 3.6

Let \(T\in (0,\pi ^2]\) and let \(a\in (0,T)\). Let \(\gamma \) be a positively oriented closed curve around the set \([-2/\sqrt{T},2/\sqrt{T}]\). Then, for all \(n\in (0,\infty )\),

$$\begin{aligned} \frac{1}{2\pi in}\int _\gamma \exp \{-n(az-G_T(z))\}dz=\int _{\mathbb {R}}e^{inx}s_{a(T-a)/T}(x)dx \end{aligned}$$

where \(s_t\) is the semi-circle density (15) of variance t.

Proof

Since \(T\in (0,\pi ^2]\), we have \(\rho _T=s_{1/T}\). Then \(\rho _T(x)=\sqrt{T}\rho _1(\sqrt{T}x)\) so, by a scaling argument, it will suffice to consider the case \(T=1\). A standard calculation of the Stieltjes transform gives

$$\begin{aligned} G_1(z)=\int _{\mathbb {R}}\frac{\rho _1(x)dx}{z-x}=\frac{z-\sqrt{z^2-4}}{2}. \end{aligned}$$

Note that \(G_1\) maps \({\mathbb {C}}\setminus [-2,2]\) conformally to the punctured unit disc \({\mathbb {D}}\setminus \{0\}\) with inverse \(z+1/z\). Also, \(G_1(\gamma )\) is a negatively oriented closed curve around \(\{0\}\). Write \(b=1-a\). We make the change of variable \(w=G_1(z)\) to obtain

$$\begin{aligned}&\frac{1}{2\pi in}\int _{\gamma }\exp \{-n(az-G_1(z))\}dz\\&\quad =\frac{1}{2\pi in}\int _{G_1(\gamma )}\exp \{n(bw-aw^{-1})\}(1-w^{-2})dw\\&\quad =\frac{1}{2\pi n}\int _0^{2\pi }\exp \{n(be^{-i\theta }-ae^{i\theta })\}(e^{i\theta }-e^{-i\theta })d\theta \\&\quad =\frac{1}{2\pi n}\sum _{k=0}^\infty \frac{n^k}{k!}\sum _{j=0}^k\left( {\begin{array}{c}k\\ j\end{array}}\right) \int _0^{2\pi }(be^{-i\theta })^j(-ae^{i\theta })^{k-j}(e^{i\theta }-e^{-i\theta })d\theta \\&\quad =\sum _{m=0}^\infty \frac{(-n^2ab)^m}{m!(m+1)!}=\int _{\mathbb {R}}e^{inx}s_{ab}(x)dx \end{aligned}$$

where we used in the last equality the moment formula

$$\begin{aligned} \int _{\mathbb {R}}x^{2m}s_t(x)dx=\frac{(2m)!\,t^m}{m!(m+1)!}. \end{aligned}$$

\(\square \)
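The identity of Proposition 3.6, in the case \(T=1\), is easy to test numerically. The following sketch, not part of the argument, compares a discretized contour integral with the series \(\sum _m(-n^2ab)^m/(m!(m+1)!)\) obtained at the end of the proof; the branch of the square root is chosen so that \(G_1(z)\sim 1/z\) at infinity, and the numerical parameters are arbitrary.

```python
import numpy as np
from math import factorial

def G1(z):
    # Stieltjes transform of the semicircle law of variance 1, with G1(z) ~ 1/z at infinity
    return 0.5 * (z - np.sqrt(z - 2.0) * np.sqrt(z + 2.0))

def contour_side(n, a, radius=3.0, num_points=8000):
    # (1/(2 pi i n)) * integral over a circle of the given radius of exp{-n(a z - G1(z))}
    theta = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    z = radius * np.exp(1j * theta)
    dz = 1j * z * (2.0 * np.pi / num_points)
    return np.sum(np.exp(-n * (a * z - G1(z))) * dz) / (2.0j * np.pi * n)

def series_side(n, a, terms=40):
    # sum_m (-n^2 a b)^m / (m! (m+1)!) with b = 1 - a, as in the last line of the proof
    b = 1.0 - a
    return sum((-(n**2) * a * b) ** m / (factorial(m) * factorial(m + 1)) for m in range(terms))

print(contour_side(2.0, 0.3))   # the two printed values should agree up to discretization error
print(series_side(2.0, 0.3))
```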

More generally, for all \(T\in (0,\infty )\), the following is obtained in [43, equation (4.12)]

$$\begin{aligned} G_T(z)=\frac{zT}{2}-\frac{2}{\beta z}\sqrt{(z^2-\alpha ^2)(z^2-\beta ^2)}\int _0^1\frac{ds}{(1-\alpha ^2s^2/z^2)\sqrt{(1-s^2)(1- k^2s^2)}} \end{aligned}$$
(27)

where \(k=\alpha /\beta \in (0,1)\) and \(\alpha ,\beta \) are as defined in (7). Moreover, for \(|x|\in [\alpha ,\beta ]\), in the limit \(z\rightarrow x\) with \(z\not \in {\mathbb {R}}\), we have

$$\begin{aligned} \mathrm {Re}(G_T(z))\rightarrow \lim _{\varepsilon \rightarrow 0}\int _{|y-x|\ge \varepsilon }\frac{\rho _T(y)dy}{x-y}=\frac{xT}{2}. \end{aligned}$$
(28)

Proposition 3.7

Let \(T\in (0,\infty )\) and let \(a,b\in (0,T)\) with \(a+b=T\). Let \(\gamma \) be a positively oriented closed curve around the set \([-\beta ,\beta ]\). Then, for all \(n\in {\mathbb {N}}\),

$$\begin{aligned} \frac{1}{2\pi in}\int _\gamma \exp \{-n(az-G_T(z))\}dz&=\frac{2}{n\pi }\int _0^\infty \cosh \left\{ (a-b)nx/2\right\} \sin \{n\pi \rho _T(x)\}dx\\&=\frac{1}{2\pi in } \int _{\gamma ^{-1}}\exp \{n(bz-G_T(z))\}dz. \end{aligned}$$

Proof

Since the integrand of the left-hand side is holomorphic in \({\mathbb {C}}{\setminus }[-\beta ,\beta ]\), we can take \(\gamma \) to be the anti-clockwise boundary of \([-\beta -\varepsilon ,\beta +\varepsilon ]\times [-\varepsilon ,\varepsilon ]\) for any \(\varepsilon >0\). Now, as \(\rho _T\) is Hölder continuous, by the Plemelj–Sokhotskyi formula [25], \(G_T\) can be continuously extended, as \(G_{+}\) and \(G_{-}\) say, on \({\overline{{\mathbb {H}}}}=\{z\in {\mathbb {C}}:\mathrm {Im}(z)\ge 0\}\) and \(-{\overline{{\mathbb {H}}}}\), with

$$\begin{aligned} G_\pm (x)=\lim _{\varepsilon \rightarrow 0}\int _{|y-x|\ge \varepsilon }\frac{\rho _T(y)dy}{x-y}\mp i\pi \rho _T(x)=\frac{xT}{2}\mp i\pi \rho _T(x) \end{aligned}$$

for any \(x\in {\mathbb {R}}\). We can take the limit \(\varepsilon \rightarrow 0\) in the contour integrals along \(\gamma \) and \(\gamma ^{-1}\), using the dominated convergence theorem, to obtain

$$\begin{aligned} \frac{1}{n\pi }\int _{\mathbb {R}}\exp \left\{ (a-b)nx/2\right\} \sin \{n\pi \rho _T(x)\}dx. \end{aligned}$$

Since \(\rho _T\) is symmetric, this gives the claimed identity. \(\quad \square \)

3.5 Proof of Proposition 2.5

Consider the discrete \(\beta \)-ensemble \(\varLambda \) defined by (16). By Theorem 3.2,

$$\begin{aligned} \mu _\varLambda \rightarrow \mu _T\quad \text {weakly in probability on }{\mathbb {R}}\text { as } N\rightarrow \infty . \end{aligned}$$
(29)

Fix \(n\in {\mathbb {N}}\). By Lemma 3.3, there exist \(C,R\in (0,\infty )\), independent of N, such that

$$\begin{aligned} {\mathbb {E}}(e^{2nT\varLambda ^*}1_{\varOmega _R^c})\le Ce^{-N} \end{aligned}$$
(30)

where

$$\begin{aligned} \varLambda ^*=\max \{|\varLambda _1|,|\varLambda _N|\},\quad \varOmega _R=\{\mathrm {supp}(\mu _\varLambda )\subseteq [-R,R]\}=\{\varLambda ^*\le R\}. \end{aligned}$$

We increase the value of R if necessary so that

$$\begin{aligned} \mathrm {supp}(\mu _T)\subseteq [-R,R]. \end{aligned}$$

Denote by \(\gamma _R\) the positively oriented boundary of the set

$$\begin{aligned} {[}-R,R]+\{z\in {\mathbb {C}}:|z|\le 1\}. \end{aligned}$$

Recall that, for \(\alpha \in (0,\infty )\) and \({\text {dist}}(z,\mathrm {supp}(\mu _\varLambda ))>1/\alpha \), we set

$$\begin{aligned} G_\varLambda ^\alpha (z)=\alpha \int _{\mathbb {R}}{\text {Log}}\left( 1+\frac{1}{\alpha (z-x)}\right) \mu _\varLambda (dx). \end{aligned}$$

For \(N\ge n+1\), the contour \(\gamma _{R\vee \varLambda ^*}\) encloses the set

$$\begin{aligned} \mathrm {supp}(\mu _\varLambda )+\{z\in {\mathbb {C}}:|z|\le n/N\} \end{aligned}$$

so we can write, for \(a\in (0,T)\),

$$\begin{aligned} I_n^a(\varLambda )=\frac{e^{-an^2/(2N)}}{2\pi in}\int _{\gamma _{R\vee \varLambda ^*}}\exp \{-n(az-G_\varLambda ^{N/n}(z))\}dz. \end{aligned}$$

Recall also that we set

$$\begin{aligned} G_T(z)=\int _{\mathbb {R}}\frac{\mu _T(dx)}{z-x} \end{aligned}$$

and, for \(a,b>0\) with \(a+b=T\),

$$\begin{aligned} I_n^a=I_n^b=\frac{2}{n\pi }\int _0^\infty \cosh \left\{ (a-b)nx/2\right\} \sin \{n\pi \rho _T(x)\}dx \end{aligned}$$

and that, by Proposition 3.7,

$$\begin{aligned} I_n^a=\frac{1}{2\pi in}\int _{\gamma _R}\exp \{-n(az-G_T(z))\}dz. \end{aligned}$$

In Proposition 3.1 we showed that, for any simple loop \(l\in {\text {Loop}}({\mathbb {S}}_T)\) which divides \({\mathbb {S}}_T\) into components of areas a and b,

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(H_l^n))={\mathbb {E}}(I_n^a(\varLambda ))={\mathbb {E}}(I_n^b(\varLambda )) \end{aligned}$$

and

$$\begin{aligned} {\mathbb {E}}(|\mathrm {tr}(H_l^n)|^2)={\mathbb {E}}(\mathrm {tr}(H_l^{-n})\mathrm {tr}(H_l^n))={\mathbb {E}}(I_n^a(\varLambda )I_n^b(\varLambda )). \end{aligned}$$

We will show that, for all \(n\in {\mathbb {N}}\), in the limit \(N\rightarrow \infty \), uniformly in \(a\in (0,T)\),

$$\begin{aligned} {\mathbb {E}}(I_n^a(\varLambda ))\rightarrow I_n^a,\quad {\mathbb {E}}(I_n^a(\varLambda )I_n^b(\varLambda ))\rightarrow I_n^aI_n^b. \end{aligned}$$
(31)

Then

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(H_l^n))\rightarrow I_n^a,\quad {\mathbb {E}}(|\mathrm {tr}(H_l^n)|^2)\rightarrow |I_n^a|^2 \end{aligned}$$

so

$$\begin{aligned} {\mathbb {E}}(|\mathrm {tr}(H_l^n)-I_n^a|^2)={\mathbb {E}}(|\mathrm {tr}(H_l^n)|^2)-2{\mathbb {E}}(\mathrm {tr}(H_l^n))I_n^a+|I_n^a|^2\rightarrow 0 \end{aligned}$$

as required.

The following estimates hold for \(|w|\le 1/2\)

$$\begin{aligned} \left| {\text {Log}}(1+w)\right| \le 2|w|,\quad \left| {\text {Log}}(1+w)-w\right| \le |w|^2. \end{aligned}$$

We apply these estimates with \(w=n/(N(z-x))\), for \(N\ge 2n\) and for points z on the contour \(\gamma _{R\vee \varLambda ^*}\) and x in the support of \(\mu _\varLambda \), to obtain

$$\begin{aligned} |G_\varLambda ^{N/n}(z)|\le 2,\quad |G_\varLambda ^{N/n}(z)-G_\varLambda (z)|\le n/N \end{aligned}$$

where

$$\begin{aligned} G_\varLambda (z)=\int _{\mathbb {R}}\frac{\mu _\varLambda (dx)}{z-x}. \end{aligned}$$

Note that \(\gamma _R\) has length \(4R+2\pi \). By some straightforward estimation, on \(\varOmega _R^c\),

$$\begin{aligned} |I_n^a(\varLambda )|\le \frac{1}{2\pi n}(4\varLambda ^*+2\pi )e^{nT(\varLambda ^*+1)+2n} \end{aligned}$$

while, on \(\varOmega _R\),

$$\begin{aligned} |I_n^a(\varLambda )|\le \frac{1}{2\pi n}(4R+2\pi )e^{nT(R+1)+2n}. \end{aligned}$$

Then, by the estimate (30), uniformly in \(a\in (0,T)\),

$$\begin{aligned} {\mathbb {E}}(|I_n^a(\varLambda )|1_{\varOmega _R^c})\rightarrow 0,\quad {\mathbb {E}}(|I_n^a(\varLambda )I_n^b(\varLambda )|1_{\varOmega _R^c})\rightarrow 0. \end{aligned}$$

On the other hand, on the event \(\varOmega _R\), we have

$$\begin{aligned} I_n^a(\varLambda )=\frac{e^{-an^2/(2N)}}{2\pi in}\int _{\gamma _R}\exp \{-n(az-G_\varLambda ^{N/n}(z))\}dz. \end{aligned}$$

Hence, the weak limit (29) implies that, uniformly in \(a\in (0,T)\),

$$\begin{aligned} I_n^a(\varLambda )1_{\varOmega _R} \rightarrow \frac{1}{2\pi in}\int _{\gamma _R}\exp \{-n(az-G_T(z))\}dz=I_n^a \end{aligned}$$

in probability, and so

$$\begin{aligned} {\mathbb {E}}(I_n^a(\varLambda )1_{\varOmega _R})\rightarrow I_n^a,\quad {\mathbb {E}}(I_n^a(\varLambda )I^b_n(\varLambda )1_{\varOmega _R})\rightarrow I_n^aI_n^b. \end{aligned}$$

The desired limits (31) now follow. \(\quad \square \)

4 Makeenko–Migdal Equations

Our aim in this section is to prove Proposition 2.6. For this, our main tool will be the Makeenko–Migdal equations. In order to formulate these precisely, we first give a description of the set of regular loops modulo area-preserving homeomorphisms of \({\mathbb {S}}_T\). This allows us to reduce our analysis to a series of finite-dimensional simplices, each representing the possible vectors of face-areas for a given combinatorial graph. We show that the Makeenko–Migdal equations allow us to move area between faces of a regular loop provided only that the total area and the total winding number are conserved. This finally allows an inductive scheme to bootstrap the convergence we have shown for simple loops to all regular loops.

4.1 Combinatorial planar graphs and loops

Recall from Sect. 2.1 the notion of a labelled embedded graph. Given two labelled embedded graphs \({\mathbb {G}}=(e_1,\dots ,e_m)\) and \({\mathbb {G}}'=(e_1',\dots ,e_m')\), let us write \({\mathbb {G}}\sim {\mathbb {G}}'\) if there is an orientation-preserving homeomorphism \(\theta \) of \({\mathbb {S}}_T\) such that \(e_j'=\theta \circ e_j\) for all j. Further, let us write \({\mathbb {G}}\approx {\mathbb {G}}'\) if \(\theta \) may be chosen to be area-preserving. Then \(\sim \) and \(\approx \) are equivalence relations on the set of labelled embedded graphs. We will call the equivalence class of \({\mathbb {G}}\) under \(\sim \) the combinatorial graph associated to \({\mathbb {G}}\).

Fig. 4 A labelled embedded graph of a regular loop (see definition on page 8), with the standard labelling written on vertices and faces

We define a standard labelling of the vertices and faces of \({\mathbb {G}}\) as follows. Consider the sequence of vertices \((\underline{e}_1,{{\overline{e}}}_1,\dots ,{\underline{e}}_m,{{\overline{e}}}_m)\) and write \(V=(v_1,\dots ,v_q)\) for the subsequence obtained by dropping any vertex which has already appeared. Similarly consider the sequence of faces \((l(e_1),r(e_1),\dots ,l(e_m),r(e_m))\), where \(l(e_j)\) and \(r(e_j)\) are the connected components of \({\mathbb {S}}_T{\setminus }\{e_1^*,\dots ,e_m^*\}\) to the left and right of \(e_j\). Then write \(F=(f_1,\dots ,f_p)\) for the subsequence obtained by dropping any face which has already appeared. See Fig. 4 for an example. Set

$$\begin{aligned} {\mathcal {V}}=\{1,\dots ,q\},\quad {\mathcal {E}}=\{1,\dots ,m\},\quad {\mathcal {F}}=\{1,\dots ,p\}. \end{aligned}$$

The combinatorial graph associated to \({\mathbb {G}}\) is then characterized by the integers q, m, p and the functions \(s,t:{\mathcal {E}}\rightarrow {\mathcal {V}}\) and \(l,r:{\mathcal {E}}\rightarrow {\mathcal {F}}\) given by

  (a) \(s(j)=i\) if \(v_i\) is the starting point of \(e_j\),

  (b) \(t(j)=i\) if \(v_i\) is the terminal point of \(e_j\),

  (c) \(l(j)=k\) if \(f_k\) is the face to the left of \(e_j\),

  (d) \(r(j)=k\) if \(f_k\) is the face to the right of \(e_j\).

We call any quadruple \({\mathcal {G}}=(s,t,l,r)\) which arises in this way a combinatorial planar graph. We freely identify \({\mathcal {G}}\) with the corresponding equivalence class of labelled embedded graphs.

Given a combinatorial planar graph \({\mathcal {G}}\), consider the simplex

$$\begin{aligned} \varDelta _{\mathcal {G}}(T)=\{(a_1,\dots ,a_p):a_k>0\text { for all }k\text { and }a_1+\dots +a_p=T\}. \end{aligned}$$

Given a labelled embedded graph \({\mathbb {G}}\in {\mathcal {G}}\), define the face-area vector \(a({\mathbb {G}})=(a_1,\dots ,a_p)\) by

$$\begin{aligned} a_k={\text {area}}(f_k). \end{aligned}$$

Then \(a({\mathbb {G}})\in \varDelta _{\mathcal {G}}(T)\). For \(a\in \varDelta _{\mathcal {G}}(T)\), set

$$\begin{aligned} {\mathcal {G}}(a)=\{{\mathbb {G}}\in {\mathcal {G}}:a({\mathbb {G}})=a\}. \end{aligned}$$

The sets \({\mathcal {G}}(a)\) are then the equivalence classes of the relation \(\approx \). We call a sequence \({\mathfrak {l}}_0=((j_1,\varepsilon _1),\dots ,(j_r,\varepsilon _r))\) in \({\mathcal {E}}\times \{-1,1\}\) a loop in \({\mathcal {G}}\) if

$$\begin{aligned} t(j_k,\varepsilon _k)=s(j_{k+1},\varepsilon _{k+1}) \end{aligned}$$
(32)

for \(k=1,\dots ,r\), where \(j_{r+1}=j_1\) and \(\varepsilon _{r+1}=\varepsilon _1\) and where

$$\begin{aligned} s(j,\varepsilon )=t(j,-\varepsilon )={\left\{ \begin{array}{ll}s(j),&{}\text {if }\varepsilon =1,\\ t(j),&{}\text {if }\varepsilon =-1.\end{array}\right. } \end{aligned}$$

The condition (32) means that, in any labelled embedded graph \({\mathbb {G}}=(e_1,\dots ,e_m)\in {\mathcal {G}}\), we can concatenate the sequence of edges \((e_{j_1}^{\varepsilon _1},e_{j_2}^{\varepsilon _2},\dots ,e_{j_r}^{\varepsilon _r})\) to form a loop

$$\begin{aligned} l_0=e_{j_1}^{\varepsilon _1}e_{j_2}^{\varepsilon _2}\dots e_{j_r}^{\varepsilon _r}. \end{aligned}$$

We call the loop \(l_0\) so obtained the drawing of \({\mathfrak {l}}_0\) in \({\mathbb {G}}\). Note that the sequence

$$\begin{aligned} {\mathfrak {l}}_0^{-1}=((j_r,-\varepsilon _r),\dots ,(j_1,-\varepsilon _1)) \end{aligned}$$

is then also a loop in \({\mathcal {G}}\), whose drawing in \({\mathbb {G}}\) is the reversal \(l_0^{-1}\) of \(l_0\). Note also the obvious notion of concatenation for loops in \({\mathcal {G}}\).
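For readers who find a concrete data structure helpful, here is a hypothetical encoding (not taken from the paper) of a combinatorial planar graph as the quadruple of maps (s, t, l, r) on edge labels, together with the concatenation check expressed by condition (32).

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class CombinatorialGraph:
    # a hypothetical encoding of the quadruple (s, t, l, r); labels are 1-based as in the text
    s: Dict[int, int]   # s(j): label of the starting vertex of edge j
    t: Dict[int, int]   # t(j): label of the terminal vertex of edge j
    l: Dict[int, int]   # l(j): label of the face to the left of edge j
    r: Dict[int, int]   # r(j): label of the face to the right of edge j

def endpoint(g: CombinatorialGraph, j: int, eps: int, which: str) -> int:
    # s(j, eps) and t(j, eps) as in the text: reversing an edge (eps = -1) swaps its endpoints
    if which == "s":
        return g.s[j] if eps == 1 else g.t[j]
    return g.t[j] if eps == 1 else g.s[j]

def is_loop(g: CombinatorialGraph, word: List[Tuple[int, int]]) -> bool:
    # condition (32): consecutive steps, taken cyclically, must concatenate
    r = len(word)
    return all(
        endpoint(g, *word[k], which="t") == endpoint(g, *word[(k + 1) % r], which="s")
        for k in range(r)
    )

# a simple loop on the sphere: one edge from vertex 1 back to itself, with faces 1 and 2;
# Euler's relation q - m + p = 1 - 1 + 2 = 2 holds
g = CombinatorialGraph(s={1: 1}, t={1: 1}, l={1: 1}, r={1: 2})
print(is_loop(g, [(1, 1)]), is_loop(g, [(1, 1), (1, -1)]))   # both True
```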

In the case of interest to us, \({\mathcal {G}}\) will be the combinatorial graph of the labelled embedded graph \({\mathbb {G}}=(e_1,\dots ,e_m)\) of a regular loop l. We then write \(a(l)\) for \(a({\mathbb {G}})\). If l has n self-intersections, we have \(q=n+1\), \(m=2n+1\) and, by Euler’s relation, \(p=n+2\). Note that the set of self-intersections is given in the standard labelling by \(\{v_i:i\in {\mathcal {I}}\}\), where \({\mathcal {I}}=\{2,3,\dots ,n+1\}\). We recover l as the drawing in \({\mathbb {G}}\) of the loop

$$\begin{aligned} {\mathfrak {l}}=((1,1),\dots ,(2n+1,1)) \end{aligned}$$

in \({\mathcal {G}}\). We call the pair \(({\mathcal {G}},{\mathfrak {l}})\) a combinatorial planar loop. For each \(n\ge 0\), there are only finitely many combinatorial loops with n self-intersections. We will write abusively \({\mathfrak {l}}\) for \(({\mathcal {G}},{\mathfrak {l}})\), \(\varDelta _{\mathfrak {l}}(T)\) for \(\varDelta _{\mathcal {G}}(T)\) and \({\mathfrak {l}}(a)\) for \({\mathcal {G}}(a)\). Given a loop \({\mathfrak {l}}_0\) in \({\mathcal {G}}\), it may be that the drawing \(l_0\) of \({\mathfrak {l}}_0\) in \({\mathbb {G}}\) is a regular loop. We could then consider the combinatorial loop associated to \(l_0\), without reference to its relation to \({\mathcal {G}}\). We will therefore need to make clear when such a combinatorial loop is to be considered in the context of a larger combinatorial graph. We shall also write \(l_0\in {\mathfrak {l}}(a)\) whenever \(l_0\in {\text {Loop}}({\mathbb {S}}_T)\) is a drawing of \({\mathfrak {l}}\) in a graph belonging to \({\mathfrak {l}}(a)\), for some area vector a. Given \(n\ge 0\) and a function \(\varPhi :{\text {Loop}}_n({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) which is invariant under area-preserving homeomorphisms, for any combinatorial planar loop \({\mathfrak {l}}\) having n self-intersections, we can define a quotient map\(\phi _{\mathfrak {l}}:\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {C}}\) by setting

$$\begin{aligned} \phi _{\mathfrak {l}}(a)=\varPhi (l) \end{aligned}$$

where l is any loop in \({\mathfrak {l}}(a)\). We say that \(\varPhi \) is uniformly continuous in area if the map \(\phi _{\mathfrak {l}}:\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {C}}\) is uniformly continuous for all such combinatorial loops \({\mathfrak {l}}\).

4.2 Generalized Makeenko–Migdal equations

Let \({\mathfrak {l}}\) be a combinatorial planar loop. Write m and p for the numbers of edges and faces in the associated combinatorial graph. Let \(H=(H_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) be a Yang–Mills holonomy field in U(N).

Proposition 4.1

Let \(f:U(N)^m\rightarrow {\mathbb {C}}\) be a continuous bounded function. Then we can define a function \(E(f):\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {C}}\) by

$$\begin{aligned} E(f)(a)=\frac{1}{p_T(1)}\int _{U(N)^m}f(g_1,\dots ,g_m)\prod _{k=1}^p p_{a_k}({{\tilde{g}}}_k)\prod _{j=1}^mdg_j \end{aligned}$$

where \({{\tilde{g}}}_k\) is the holonomy around \(f_k\) obtained from the edge holonomies \(g_1,\dots ,g_m\). Moreover E(f) is uniformly continuous on \(\varDelta _{\mathfrak {l}}(T)\) and

$$\begin{aligned} E(f)(a)={\mathbb {E}}(f(H_{e_1},\dots ,H_{e_m})) \end{aligned}$$

for \(a\in \varDelta _{\mathfrak {l}}(T)\), whenever \({\mathbb {G}}=(e_1,\dots ,e_m)\) is a labelled embedded graph with \({\mathbb {G}}\in {\mathfrak {l}}(a)\).

Proof

The function E(f) is well defined because \(p_a(g)=p_a(g^{-1})=p_a(hgh^{-1})\), which ensures that the right-hand side does not depend on the choices of starting point and direction for the loop holonomy \({{\tilde{g}}}_k\). It will suffice to show uniform continuity on each of the sets

$$\begin{aligned} \varDelta _k=\{a\in \varDelta _{\mathfrak {l}}(T):a_k\ge T/p\}. \end{aligned}$$

Then, by symmetry, it will suffice to consider the case \(k=p\). Given a family of edge holonomies \(g=(g_1,\dots ,g_m)\in U(N)^m\), for each face \(f_k\), choose an adjacent edge and denote by \({{\tilde{g}}}_k\) the holonomy around \(f_k\) starting from that edge. Fix also edge labels \(i_1,\dots ,i_{m-p+1}\) such that the edges \(e_{i_1},\dots ,e_{i_{m-p+1}}\) form a spanning tree of the associated graph. Define, for \(k=1,\dots ,p-1\) and \(j=1,\dots ,m-p+1\),

$$\begin{aligned} b_k={{\tilde{g}}}_k,\quad h_j=g_{i_j}. \end{aligned}$$

Then the map \(g\mapsto (b,h):U(N)^m\rightarrow U(N)^m\) preserves the m-fold product of Haar measure. Moreover, the edge holonomies \(g_1,\dots ,g_m\) and the loop holonomy \({{\tilde{g}}}_p\) for \(f_p\) are given by finite products of \(b_1,\dots ,b_{p-1}\) and \(h_1,\dots ,h_{m-p+1}\) and their inverses. See [40, Proposition 2.4.2]. Hence we have

$$\begin{aligned} E(f)(a)&=\frac{1}{p_T(1)}\int _{U(N)^m}f(g_1(b,h),\dots ,g_m(b,h))p_{a_p}(\tilde{g}_p(b,h))\nonumber \\&\quad \prod _{k=1}^{p-1}p_{a_k}(b_k)db_k\prod _{j=1}^{m-p+1}dh_j\nonumber \\&=\frac{1}{p_T(1)}{\mathbb {E}}\int _{U(N)^{m-p+1}}f(g_1(B,h),\dots ,g_m(B,h))p_{a_p}({{\tilde{g}}}_p(B,h))\prod _{j=1}^{m-p+1}dh_j \end{aligned}$$
(33)

where \(B=(B^1_{a_1},\dots ,B^{p-1}_{a_{p-1}})\) and \(B^1,\dots ,B^{p-1}\) are independent Brownian motions in G starting from 1. Now, on \(\varDelta _p\), we have \(a_p\ge T/p\), so the claimed uniform continuity follows from standard continuity estimates for Brownian motion and the heat kernel in G. \(\quad \square \)

For \(i\in \{1,\dots ,m\}\) and \(g\in U(N)\), define maps \(R_{i,g}\) and \({{\hat{R}}}_{i,g}\) on \(U(N)^m\) by

$$\begin{aligned} R_{i,g}(h_1,\dots ,h_m)&=(h_1,\dots ,h_ig,\dots ,h_m),\\ {{\hat{R}}}_{i,g}(h_1,\dots ,h_m)&=(h_1,\dots ,g^{-1}h_i,\dots ,h_m). \end{aligned}$$

For \(i\in \{1,\dots ,m\}\) and \(X\in {\mathfrak {u}}(N)\), define a differential operator \({\mathcal {L}}^i_X\) on \(U(N)^m\) by

$$\begin{aligned} {\mathcal {L}}^i_X(f)=\left. \frac{d}{dt}\right| _{t=0}f\circ R_{i,e^{tX}}. \end{aligned}$$

Choose an orthonormal basis \((X_n:n=1,\dots ,N^2)\) for \({\mathfrak {u}}(N)\) (with inner product (1)) and, for \(i,j\in \{1,\dots ,m\}\), define

$$\begin{aligned} \varDelta _{i,j}(f)=\sum _n{\mathcal {L}}^i_{X_n}\circ {\mathcal {L}}^j_{X_n}(f). \end{aligned}$$

The operator \(\varDelta _{i,j}\) does not depend on the choice of orthonormal basis.

A function \(f:U(N)^m\rightarrow {\mathbb {C}}\) is said to have extended gauge invariance if, for all \(g\in U(N)\) and for \(i=1,\dots ,m-1\),

$$\begin{aligned} f\circ {{\hat{R}}}_{i,g}\circ R_{i+1,g}=f. \end{aligned}$$

Thus we require

$$\begin{aligned} f(h_1,\dots ,g^{-1}h_i,h_{i+1}g,\dots ,h_m)=f(h_1,\dots ,h_i,h_{i+1},\dots ,h_m). \end{aligned}$$

Recall that

$$\begin{aligned} \varDelta _{\mathfrak {l}}(T)=\{(a_1,\dots ,a_{n+2}):a_k>0\text { for all }k\text { and } a_1+\dots +a_{n+2}=T\}. \end{aligned}$$

Write \({\mathcal {I}}\) for the set of intersection labels and \({\mathcal {F}}\) for the set of face labels in the combinatorial graph \({\mathcal {G}}\) of \({\mathfrak {l}}\), as usual. For \(i\in {\mathcal {I}}\), define a (constant) vector field \(\varXi _i\) on \(\varDelta _{\mathfrak {l}}(T)\) as follows. Choose \({\mathbb {G}}\in {\mathcal {G}}\) and write l for the drawing of \({\mathfrak {l}}\) in \({\mathbb {G}}\). In the standard labelling of \({\mathbb {G}}\), the vertex \(v_i\) is a self-intersection of l. Write \((k_1,k_2,k_3,k_4)\) for the labels of the faces found on making a small anti-clockwise circuit around \(v_i\), starting in the face \(f_{k_1}\) adjacent to two outgoing edges, and in the corner adjacent to those edges. Note that the case \(k_1=k_3\) can arise, but the condition that we start in the corner adjacent to the outgoing edges allows us to specify the sequence \((k_1,k_2,k_3,k_4)\) uniquely in any case. In the example of Fig. 4, if \(i=2\), then \(k_1=k_3=1\), \(k_2=3\) and \(k_4=2\). This sequence does not depend on the choice of \({\mathbb {G}}\). Set

$$\begin{aligned} \varXi _i=\partial _{k_1}-\partial _{k_2}+\partial _{k_3}-\partial _{k_4} \end{aligned}$$
(34)

where \(\partial _k=\partial /\partial a_k\) denotes the elementary vector field in direction k. Note that \(\varXi _i\) is tangent to the simplex \(\varDelta _{\mathfrak {l}}(T)\).
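For illustration only, the vector (34) can be assembled directly from the labels \((k_1,k_2,k_3,k_4)\). The sketch below uses the values quoted above for the vertex \(i=2\) of Fig. 4, with an assumed total of \(p=4\) faces; the precise value of p plays no role in the check that the coordinates of \(\varXi _i\) sum to zero.

```python
import numpy as np

def xi(p, k1, k2, k3, k4):
    # the Makeenko-Migdal vector (34) as an element of R^p (1-based face labels)
    v = np.zeros(p)
    for k, sign in ((k1, 1), (k2, -1), (k3, 1), (k4, -1)):
        v[k - 1] += sign
    return v

# values quoted above for i = 2 in Fig. 4: (k1, k2, k3, k4) = (1, 3, 1, 2); p = 4 is assumed
v = xi(4, 1, 3, 1, 2)
print(v, v.sum())   # [ 2. -1. -1.  0.]  and  1^T Xi_i = 0
```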

The following theorem is a specialization of a result of Driver, Gabriel, Hall and Kemp [18, Theorem 2], which generalizes a formulation of Lévy [41].

Theorem 4.2

Let \(f:U(N)^m\rightarrow {\mathbb {C}}\) be a \(C^\infty \) function having extended gauge invariance. Then, for all \(i\in {\mathcal {I}}\), the function E(f) has a directional derivative on \(\varDelta _{\mathfrak {l}}(T)\) in direction \(\varXi _i\) given by

$$\begin{aligned} \varXi _iE(f)=-E(\varDelta _{j_1,j_2}(f)) \end{aligned}$$

where \(j_1,j_2\) are determined by \(s(j_1)=s(j_2)=i\).

4.3 Makeenko–Migdal equations for Wilson loops

Given a loop \({\mathfrak {l}}_0=((j_1,\varepsilon _1),\dots ,(j_r,\varepsilon _r))\) in \({\mathcal {G}}\), we can define a continuous bounded function \(W_{{\mathfrak {l}}_0}:U(N)^m\rightarrow {\mathbb {C}}\) by

$$\begin{aligned} W_{{\mathfrak {l}}_0}(h_1,\dots ,h_m)=\mathrm {tr}(h_{j_r}^{\varepsilon _r}\dots h_{j_1}^{\varepsilon _1}). \end{aligned}$$

Given a sequence of loops \(({\mathfrak {l}}_1,\dots ,{\mathfrak {l}}_k)\) in \({\mathcal {G}}\), define the Wilson loop function

$$\begin{aligned} \phi ^N_{{\mathfrak {l}}_1,\dots ,{\mathfrak {l}}_k}:\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {C}}\end{aligned}$$

by

$$\begin{aligned} \phi ^N_{{\mathfrak {l}}_1,\dots ,{\mathfrak {l}}_k}=E(W_{{\mathfrak {l}}_1}\dots W_{{\mathfrak {l}}_k}). \end{aligned}$$

Then \(\phi ^N_{{\mathfrak {l}}_1,\dots ,{\mathfrak {l}}_k}\) is uniformly continuous and, for all \(a\in \varDelta _{\mathfrak {l}}(T)\) and all \({\mathbb {G}}\in {\mathfrak {l}}(a)\),

$$\begin{aligned} \phi ^N_{{\mathfrak {l}}_1,\dots ,{\mathfrak {l}}_k}(a)={\mathbb {E}}(\mathrm {tr}(H_{l_1})\dots \mathrm {tr}(H_{l_k})) \end{aligned}$$
(35)

where \(l_1,\dots ,l_k\) are the drawings of \({\mathfrak {l}}_1,\dots ,{\mathfrak {l}}_k\) in \({\mathbb {G}}\).
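For concreteness, the function \(W_{{\mathfrak {l}}_0}\) can be transcribed as follows. This is a sketch with hypothetical edge holonomies, not code from the paper; the check at the end uses only unitarity, namely that the Wilson loop of the reversed word is the complex conjugate.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(N):
    # Haar-ish unitary via QR of a complex Gaussian matrix (illustration only)
    z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    q, _ = np.linalg.qr(z)
    return q

def wilson(word, h):
    # W_{l_0}(h_1, ..., h_m) = tr(h_{j_r}^{eps_r} ... h_{j_1}^{eps_1}), with tr the normalized trace
    N = next(iter(h.values())).shape[0]
    M = np.eye(N, dtype=complex)
    for j, eps in word:                  # left-multiplying builds the product in the stated order
        M = (h[j] if eps == 1 else h[j].conj().T) @ M
    return np.trace(M) / N

N = 4
h = {j: random_unitary(N) for j in (1, 2, 3)}
word = [(1, 1), (2, -1), (3, 1)]
reversed_word = [(j, -e) for j, e in reversed(word)]
print(np.isclose(wilson(word, h), np.conj(wilson(reversed_word, h))))   # True
```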

For \(i\in {\mathcal {I}}\), we obtain two regular loops \(l_i\) and \({{\hat{l}}}_i\) by splitting l at \(v_i\), that is, by following the two outgoing strands of l from \(v_i\) until their first return to \(v_i\). In one case we will pass through the endpoint of l and begin another circuit of l until we reach \(v_i\). Write \({\mathfrak {l}}_i\) and \({{\hat{{\mathfrak {l}}}}}_i\) for the loops in \({\mathcal {G}}\) whose drawings in \({\mathbb {G}}\) are \(l_i\) and \({{\hat{l}}}_i\), which do not depend on the choice of \({\mathbb {G}}\). Then set

$$\begin{aligned} {[}{\mathfrak {l}}]_i={\mathfrak {l}}_i{{\hat{{\mathfrak {l}}}}}_i{\mathfrak {l}}_i^{-1}{{\hat{{\mathfrak {l}}}}}_i^{-1},\quad {[}{{\hat{{\mathfrak {l}}}}}]_i={{\hat{{\mathfrak {l}}}}}_i{\mathfrak {l}}_i{{\hat{{\mathfrak {l}}}}}_i^{-1}{\mathfrak {l}}_i^{-1} \end{aligned}$$

where \({\mathfrak {l}}_i^{-1},{{\hat{{\mathfrak {l}}}}}_i^{-1}\) denote the reversals of \({\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i\) and the right-hand sides are understood as concatenations.

Proposition 4.3

(Makeenko–Migdal equations for Wilson loops). The functions \(\phi ^N_{\mathfrak {l}}\) and \(\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}^{-1}}\) have directional derivatives in \(\varDelta _{\mathfrak {l}}(T)\) in direction \(\varXi _i\) given by

$$\begin{aligned} \varXi _i\phi ^N_{\mathfrak {l}}=\phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i},\quad \varXi _i\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}^{-1}}=\phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i,{\mathfrak {l}}^{-1}}+\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}_i^{-1},{{\hat{{\mathfrak {l}}}}}_i^{-1}}-N^{-2}(\phi ^N_{[{\mathfrak {l}}]_i}+\phi ^N_{[{{\hat{{\mathfrak {l}}}}}]_i}). \end{aligned}$$

Proof

We give details only for \(\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}^{-1}}\). The simpler argument for \(\phi ^N_{\mathfrak {l}}\) will then be obvious. The argument for \(\phi ^N_{\mathfrak {l}}\) already appeared after Theorem 2.6 in [19] and in Section 9.2 of [41]. Given \({\mathbb {G}}=(e_1,\dots ,e_m)\in {\mathcal {G}}\), set \(l=e_1\dots e_m\), so l is the drawing of \({\mathfrak {l}}\) in \({\mathbb {G}}\). Given \(h=(h_1,\dots ,h_m)\in U(N)^m\), there is a unique multiplicative function

$$\begin{aligned} (h_\gamma :\gamma \in {\text {Path}}({\mathbb {G}}))\in {\text {Mult}}({\text {Path}}({\mathbb {G}}),U(N)) \end{aligned}$$

such that \(h_{e_j}=h_j\) for all j. Then \(\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}^{-1}}=E(f)\), where \(f=|W_{\mathfrak {l}}|^2\) and

$$\begin{aligned} W_{\mathfrak {l}}(h_1,\dots ,h_m)=\mathrm {tr}(h_l)=\mathrm {tr}(h_m\dots h_1). \end{aligned}$$

Note that \(W_{\mathfrak {l}}\) has extended gauge invariance and so also does f. We can write \(l_i=e\gamma \) and \({{\hat{l}}}_i={{\hat{e}}}{{\hat{\gamma }}}\), where \(e=e_{j_1},{{\hat{e}}}=e_{j_2}\), \(s(j_1)=s(j_2)=i\) and \(\gamma ,{{\hat{\gamma }}}\in {\text {Path}}({\mathbb {G}})\). Then

$$\begin{aligned} f(h)=\mathrm {tr}(h_l)\mathrm {tr}(h_l^{-1})=\mathrm {tr}(h_{{{\hat{\gamma }}}}h_{{{\hat{e}}}}h_\gamma h_e)\mathrm {tr}(h_e^{-1}h_\gamma ^{-1}h_{{{\hat{e}}}}^{-1}h_{{{\hat{\gamma }}}}^{-1}). \end{aligned}$$

For \(X\in {\mathfrak {u}}(N)\),

$$\begin{aligned} {\mathcal {L}}_X^{j_1}\circ {\mathcal {L}}_X^{j_2}(f)(h)&=\mathrm {tr}(h_{{{\hat{\gamma }}}}h_{{{\hat{e}}}}Xh_\gamma h_eX)\mathrm {tr}(h_l^{-1})+\mathrm {tr}(h_l)\mathrm {tr}(Xh_e^{-1}h_\gamma ^{-1}Xh_{{{\hat{e}}}}^{-1}h_{{{\hat{\gamma }}}}^{-1})\\&\quad -\mathrm {tr}(h_{{{\hat{\gamma }}}}h_{{{\hat{e}}}}h_\gamma h_eX)\mathrm {tr}(h_e^{-1}h_\gamma ^{-1}X h_{{{\hat{e}}}}^{-1}h_{{{\hat{\gamma }}}}^{-1})\\&\quad -\mathrm {tr}(h_{{{\hat{\gamma }}}}h_{{{\hat{e}}}}Xh_\gamma h_e)\mathrm {tr}(Xh_e^{-1}h_\gamma ^{-1}h_{{{\hat{e}}}}^{-1}h_{{{\hat{\gamma }}}}^{-1}). \end{aligned}$$

Write \(E_{j,k}\) for the elementary matrix with a 1 in the (j, k)-entry. Set

$$\begin{aligned} X_{j,j}=iE_{j,j}/{\sqrt{N}},\quad X_{j,k}={\left\{ \begin{array}{ll} (E_{j,k}-E_{k,j})/{\sqrt{2N}},&{}\text { for }j<k,\\ i(E_{j,k}+E_{k,j})/{\sqrt{2N}},&{}\text { for }j>k. \end{array}\right. } \end{aligned}$$

Then \(\{X_{j,k}:j,k=1,\dots ,N\}\) is an orthonormal basis in \({\mathfrak {u}}(N)\). A simple calculation gives the standard identity

$$\begin{aligned} \sum _{j,k=1}^NX_{j,k}\otimes X_{j,k}=-\frac{1}{N}\sum _{j,k=1}^NE_{j,k}\otimes E_{k,j}. \end{aligned}$$

We sum to obtain

$$\begin{aligned} -\varDelta _{j_1,j_2}(f)(h)&=\mathrm {tr}(h_{{{\hat{\gamma }}}}h_{{{\hat{e}}}})\mathrm {tr}(h_\gamma h_e)\mathrm {tr}(h_l^{-1})+\mathrm {tr}(h_l)\mathrm {tr}(h_e^{-1}h_\gamma ^{-1})\mathrm {tr}(h_{{{\hat{e}}}}^{-1}h_{{{\hat{\gamma }}}}^{-1})\\&\quad -\frac{1}{N^2}\mathrm {tr}(h_{{{\hat{\gamma }}}}h_{{{\hat{e}}}}h_\gamma h_eh_{{{\hat{e}}}}^{-1}h_{{{\hat{\gamma }}}}^{-1}h_e^{-1}h_\gamma ^{-1})\\&\quad -\frac{1}{N^2}\mathrm {tr}(h_\gamma h_eh_{{{\hat{\gamma }}}}h_{{{\hat{e}}}}h_e^{-1}h_\gamma ^{-1}h_{{{\hat{e}}}}^{-1}h_{{{\hat{\gamma }}}}^{-1}) \end{aligned}$$

and hence, by Theorem 4.2,

$$\begin{aligned} \varXi _i\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}^{-1}}=-E(\varDelta _{j_1,j_2}(f))=\phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i,{\mathfrak {l}}^{-1}}+\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}_i^{-1},{{\hat{{\mathfrak {l}}}}}_i^{-1}}-N^{-2}(\phi ^N_{[{\mathfrak {l}}]_i}+\phi ^N_{[{{\hat{{\mathfrak {l}}}}}]_i}). \end{aligned}$$

\(\square \)
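The orthonormal basis and the tensor identity used in the proof above are easy to verify numerically. The following sketch, not part of the argument, checks the identity \(\sum _{j,k}X_{j,k}\otimes X_{j,k}=-N^{-1}\sum _{j,k}E_{j,k}\otimes E_{k,j}\) for N=3.

```python
import numpy as np

def E(N, j, k):
    m = np.zeros((N, N), dtype=complex)
    m[j, k] = 1.0
    return m

def u_basis(N):
    # the orthonormal basis {X_{j,k}} of u(N) written above (0-based indices here)
    basis = []
    for j in range(N):
        for k in range(N):
            if j == k:
                basis.append(1j * E(N, j, j) / np.sqrt(N))
            elif j < k:
                basis.append((E(N, j, k) - E(N, k, j)) / np.sqrt(2 * N))
            else:
                basis.append(1j * (E(N, j, k) + E(N, k, j)) / np.sqrt(2 * N))
    return basis

N = 3
lhs = sum(np.kron(X, X) for X in u_basis(N))
rhs = -sum(np.kron(E(N, j, k), E(N, k, j)) for j in range(N) for k in range(N)) / N
print(np.allclose(lhs, rhs))   # True
```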

4.4 Makeenko–Migdal vectors and the winding number

Let \(l\in {\text {Loop}}({\mathbb {S}}_T)\) be a regular loop and let \({\mathbb {G}}=(V,E,F)\) be the associated labelled embedded graph. For any pair of faces \(f_0,f_*\in F\) and points \(x_0\in f_0\), \(x_*\in f_*\), the set \({\mathbb {S}}_T{\setminus }\{x_0,x_*\}\) can be retracted to a simple closed curve s in \({\mathbb {S}}_T\) which winds positively around \(x_*\) and negatively around \(x_0\). Furthermore, there is a unique \(n_l(f_0,f_*)\in {\mathbb {Z}}\) such that l is homotopic within \({\mathbb {S}}_T{\setminus }\{x_0,x_*\}\) to \(s^{n_l(f_0,f_*)}\). The integer \(n_l(f_0,f_*)\) does not depend on the choice of \(x_0,x_*\) but only on \(f_0,f_*\). Setting \(n_l(f,f)=0\) for any \(f\in F\), this defines a skew-symmetric function \(n_l:F^2\rightarrow {\mathbb {Z}}\). Fixing an orientation-preserving homeomorphism from \({\mathbb {S}}_T{\setminus }\{x_0\}\) to \({\mathbb {R}}^2\), for each face \(f\in F\), \(n_l(f_0,f)\) is the winding number of the image of l around the image of f in the plane \({\mathbb {R}}^2\). This number can be computed as follows. Given a track from \(f_0\) to f, comprising edges \(e_1,\dots ,e_k\) and faces \(f_1,\dots ,f_k\) such that \(f_k=f\) and \(e_j\) is adjacent to both \(f_{j-1}\) and \(f_j\) for all j, we have

$$\begin{aligned} n_l(f_0,f)=L(f)-R(f) \end{aligned}$$

where L(f) and R(f) are the numbers of edges \(e_j\) with \(f_j\) on the left and right respectively. (The notation here does not refer to the standard labelling of \({\mathbb {G}}\).) This construction yields the following observation: for any \(f_0,f_*\in F\),

$$\begin{aligned} n_l(f_0,f)+n_l(f,f_*)=n_l(f_0,f_*)=-n_l(f_*,f_0). \end{aligned}$$

It follows that the function \(f\mapsto n_l(f_0,f)\) depends on the choice of face \(f_0\) only through the addition of a constant. Abusing notation, we shall also denote it by \(n_l\) and call it the winding number function of l. See Fig. 5 for an example.

Fig. 5 A track between two faces \(f_0\) and f is drawn with dashed lines. The value of the winding number for the choice of \(f_0\) is printed on each face
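The formula \(n_l(f_0,f)=L(f)-R(f)\) is easy to implement from the left/right face maps of Sect. 4.1. The following is a hypothetical helper, not code from the paper, applied to the simplest possible example.

```python
def winding_along_track(l_map, r_map, track):
    # n_l(f_0, f) = L(f) - R(f) for a track ((e_1, f_1), ..., (e_k, f_k)) ending at f = f_k;
    # l_map[j] and r_map[j] give the faces to the left and right of edge j (hypothetical helper)
    L = sum(1 for j, f in track if l_map[j] == f)
    R = sum(1 for j, f in track if r_map[j] == f)
    return L - R

# toy example: a single positively oriented simple loop, edge 1, inside face 2, outside face 1;
# the track from f_0 = 1 to f = 2 crosses edge 1 once, giving winding number 1 around the inside
print(winding_along_track({1: 2}, {1: 1}, [(1, 2)]))
```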

The winding number is invariant under orientation-preserving homeomorphisms of \({\mathbb {S}}_T\), so, using the notations introduced in Sect. 4.1, we also obtain a function

$$\begin{aligned} n_{\mathfrak {l}}:{\mathcal {F}}\rightarrow {\mathbb {Z}}\end{aligned}$$

determined by the associated combinatorial loop \({\mathfrak {l}}\), also defined up to an additive constant, by setting

$$\begin{aligned} n_{\mathfrak {l}}(k)=n_l(f) \end{aligned}$$

where f is the kth face in the standard labelling of \({\mathbb {G}}\).

The following lemma is a reformulation of a lemma of Lévy [41, Lemma 6.28]. See also Dahlqvist [13, Lemma 21]. We give a slightly different proof, relying on properties of the winding number in place of a dimension-counting argument. The prior results were stated for the whole plane, while ours applies to the sphere, but this makes little difference to the argument.

Lemma 4.4

There is an orthogonal direct sum decomposition

$$\begin{aligned} {\mathbb {R}}^{\mathcal {F}}={\mathfrak {m}}_{\mathfrak {l}}\oplus {\mathfrak {n}}_{\mathfrak {l}}\end{aligned}$$

where

$$\begin{aligned} {\mathfrak {m}}_{\mathfrak {l}}={\text {span}}\{\varXi _i:i\in {\mathcal {I}}\},\quad {\mathfrak {n}}_{\mathfrak {l}}={\text {span}}\{1,n_{\mathfrak {l}}\}. \end{aligned}$$

Proof

Note first that \(1^T\varXi _i=1-1+1-1=0\) for all i. Let \(i\in {\mathcal {I}}\). Write \(k_1,k_2,k_3,k_4\) for the faces at i, listed anticlockwise starting from the face \(k_1\) adjacent to both outgoing edges. Then the values of \(n_{\mathfrak {l}}\) at \(k_1,k_2,k_3,k_4\) are given respectively by \(n,n+1,n,n-1\) for some n, so

$$\begin{aligned} n_{\mathfrak {l}}^T\varXi _i=n_{\mathfrak {l}}(k_1)-n_{\mathfrak {l}}(k_2)+n_{\mathfrak {l}}(k_3)-n_{\mathfrak {l}}(k_4)=0. \end{aligned}$$

Hence, if \(\alpha \in {\mathfrak {m}}_{\mathfrak {l}}\), then \(1^T\alpha =0\) and \(n_{\mathfrak {l}}^T\alpha =0\).

Suppose on the other hand that \(\alpha \in {\mathfrak {m}}_{\mathfrak {l}}^\perp \). Consider the 1-forms (on the dual graph) \(d\alpha \) and \(dn_{\mathfrak {l}}\), given by

$$\begin{aligned} d\alpha (j)=\alpha (l(j))-\alpha (r(j)),\quad dn_{\mathfrak {l}}(j)=n_{\mathfrak {l}}(l(j))-n_{\mathfrak {l}}(r(j)),\quad j\in {\mathcal {E}}. \end{aligned}$$

Then \(dn_{\mathfrak {l}}(j)=1\) for all j. On the other hand, for \(j=1,\dots ,m-1\), there is an \(i_j\in {\mathcal {I}}\) such that \(t(j)=i_j=s(j+1)\), so

$$\begin{aligned} d\alpha (j)-d\alpha (j+1)=\pm \varXi _{i_j}^T\alpha =0. \end{aligned}$$

Hence \(d\alpha =c_1dn_{\mathfrak {l}}\) and so \(\alpha =c_1n_{\mathfrak {l}}+c_2\) for some constants \(c_1,c_2\). \(\quad \square \)

Note that \(\varDelta _{\mathfrak {l}}(T)\) is convex, and that, by counting dimensions, the vectors \(\{\varXi _i:i\in {\mathcal {I}}\}\) are linearly independent. From these facts and the preceding lemma, we deduce the following proposition. Write \(\overline{\varDelta _{\mathfrak {l}}(T)}\) for the closure of \(\varDelta _{\mathfrak {l}}(T)\) in \({\mathbb {R}}^p\).

Proposition 4.5

Let \(a\in \varDelta _{\mathfrak {l}}(T)\) and \(a'\in \overline{\varDelta _{\mathfrak {l}}(T)}\). Set \(w=a'-a\). Then \(a+tw\in \varDelta _{\mathfrak {l}}(T)\) for all \(t\in [0,1)\). Moreover, there exists \(\alpha \in {\mathbb {R}}^{\mathcal {I}}\) such that

$$\begin{aligned} w=\sum _{i\in {\mathcal {I}}}\alpha _i\varXi _i \end{aligned}$$

if and only if

$$\begin{aligned} \sum _{k\in {\mathcal {F}}}a_kn_{\mathfrak {l}}(k)=\sum _{k\in {\mathcal {F}}}a'_kn_{\mathfrak {l}}(k). \end{aligned}$$

Moreover, in this case, \(\alpha \) is uniquely determined by w and

$$\begin{aligned} \sum _{i\in {\mathcal {I}}}|\alpha _i|\le C_{\mathfrak {l}}\sum _{k\in {\mathcal {F}}}|a_k-a'_k| \end{aligned}$$

for some constant \(C_{\mathfrak {l}}<\infty \) depending only on \({\mathfrak {l}}\).

4.5 Proof of Proposition 2.6

We will show inductively that the following statements hold for all \(n\ge 0\). Firstly, for all combinatorial planar loops \({\mathfrak {l}}\) with no more than n self-intersections, there is a continuous function

$$\begin{aligned} \phi _{\mathfrak {l}}:\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {R}}\end{aligned}$$

such that, uniformly on \(\varDelta _{\mathfrak {l}}(T)\) as \(N\rightarrow \infty \),

$$\begin{aligned} \phi _{\mathfrak {l}}^N\rightarrow \phi _{\mathfrak {l}},\quad \phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N\rightarrow (\phi _{\mathfrak {l}})^2. \end{aligned}$$

Secondly, the restriction of the master field \(\varPhi _T\) to \({\text {Loop}}_n({\mathbb {S}}_T)\) is the unique function \({\text {Loop}}_n({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) with the following properties: it is invariant under area-preserving diffeomorphisms; for any combinatorial planar loop \({\mathfrak {l}}\) having at most n self-intersections, the quotient map \(\phi _{\mathfrak {l}}:\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {R}}\) is uniformly continuous; and \(\varPhi _T\) satisfies the Makeenko–Migdal equations (10) and the estimate (13).

For \(a\in \varDelta _{\mathfrak {l}}(T)\) and \(l\in {\mathfrak {l}}(a)\),

$$\begin{aligned} {\mathbb {E}}(|\mathrm {tr}(H_l)-\phi _{\mathfrak {l}}(a)|^2)=\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N(a)-\phi ^N_{\mathfrak {l}}(a)^2+(\phi _{\mathfrak {l}}^N(a)-\phi _{\mathfrak {l}}(a))^2 \end{aligned}$$

so the first statement implies that, as \(N\rightarrow \infty \),

$$\begin{aligned} \mathrm {tr}(H_l)\rightarrow \phi _{\mathfrak {l}}(a)=\varPhi _T(l) \end{aligned}$$

in \(L^2\), uniformly in \(l\in {\text {Loop}}_n({\mathbb {S}}_T)\). So the two statements suffice to prove Proposition 2.6.
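For the record, the identity above is the usual bias–variance decomposition: writing \(X=\mathrm {tr}(H_l)\) and \(c=\phi _{\mathfrak {l}}(a)\), and using that \(\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N(a)={\mathbb {E}}(|X|^2)\) and that \(\phi ^N_{\mathfrak {l}}(a)={\mathbb {E}}(X)\) is real, it reads

$$\begin{aligned} {\mathbb {E}}(|X-c|^2)={\mathbb {E}}(|X|^2)-{\mathbb {E}}(X)^2+({\mathbb {E}}(X)-c)^2. \end{aligned}$$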

For the simple combinatorial loop \({\mathfrak {s}}\), set

$$\begin{aligned} \phi _{\mathfrak {s}}(a,b)=\phi _T(1,a,b) \end{aligned}$$

then \(\phi _{\mathfrak {s}}\) is continuous on \(\varDelta _{\mathfrak {s}}(T)\) and, by Proposition 2.5, \(\phi ^N_{\mathfrak {s}}\rightarrow \phi _{\mathfrak {s}}\) and \(\phi _{{\mathfrak {s}},{\mathfrak {s}}^{-1}}^N\rightarrow (\phi _{\mathfrak {s}})^2\) uniformly on \(\varDelta _{\mathfrak {s}}(T)\). There are no self-intersections, so no Makeenko–Migdal equations. For \((a,b)\in \varDelta _{\mathfrak {s}}(T)\) and \(s\in {\mathfrak {s}}(a,b)\),

$$\begin{aligned} \varPhi _T(s)=\phi _{\mathfrak {s}}(a,b)=\phi _T(1,a,b). \end{aligned}$$

Hence the desired statements hold for \(n=0\).

Let \(n\ge 1\) and suppose inductively that the desired statements hold for \(n-1\). Let \({\mathfrak {l}}\) be a combinatorial planar loop with n self-intersections. Choose faces \(k_0\) and \(k_*\) of minimal and maximal winding number and set

$$\begin{aligned} n_*=n_{\mathfrak {l}}(k_*)-n_{\mathfrak {l}}(k_0). \end{aligned}$$

Let \(a=(a_1,\dots ,a_{n+2})\in \varDelta _{\mathfrak {l}}(T)\). Recall that \(a_0,a_*\in [0,T]\) are determined by

$$\begin{aligned} a_0+a_*=T,\quad a_0n_{\mathfrak {l}}(k_0)+a_*n_{\mathfrak {l}}(k_*)=\sum _{k=1}^{n+2}a_kn_{\mathfrak {l}}(k). \end{aligned}$$
(36)
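Explicitly, since \(n_{\mathfrak {l}}(k_*)>n_{\mathfrak {l}}(k_0)\), the linear system (36) has the unique solution

$$\begin{aligned} a_*=\frac{\sum _{k=1}^{n+2}a_kn_{\mathfrak {l}}(k)-Tn_{\mathfrak {l}}(k_0)}{n_{\mathfrak {l}}(k_*)-n_{\mathfrak {l}}(k_0)},\quad a_0=T-a_*, \end{aligned}$$

and both values lie in [0, T] because \(n_{\mathfrak {l}}(k_0)\le n_{\mathfrak {l}}(k)\le n_{\mathfrak {l}}(k_*)\) for all k and \(\sum _ka_k=T\).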

Then, by Proposition 4.5, there exists a unique \(\alpha \in {\mathbb {R}}^{\mathcal {I}}\), with

$$\begin{aligned} \sum _{i\in {\mathcal {I}}}|\alpha _i|\le 2C_{\mathfrak {l}}(T-a_{k_0}-a_{k_*}) \end{aligned}$$
(37)

such that, for

$$\begin{aligned} a(t)=a+t\sum _{i\in {\mathcal {I}}}\alpha _i\varXi _i \end{aligned}$$
(38)

we have \(a(t)\in \varDelta _{\mathfrak {l}}(T)\) for all \(t\in [0,1)\) and

$$\begin{aligned} a_{k_0}(1)=a_0,\quad a_{k_*}(1)=a_*. \end{aligned}$$

By Proposition 4.3, the maps

$$\begin{aligned} t\mapsto \phi _{\mathfrak {l}}^N(a(t)),\quad t\mapsto \phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N(a(t)) \end{aligned}$$

are differentiable on [0, 1), with

$$\begin{aligned} \frac{d}{dt}\phi _{\mathfrak {l}}^N(a(t))=\sum _{i\in {\mathcal {I}}}\alpha _i\phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i}(a(t)) \end{aligned}$$

and

$$\begin{aligned} \frac{d}{dt}\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N(a(t))=\sum _{i\in {\mathcal {I}}}\alpha _i\left( \phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i,{\mathfrak {l}}^{-1}}+\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}_i^{-1},{{\hat{{\mathfrak {l}}}}}_i^{-1}}-N^{-2}(\phi ^N_{[{\mathfrak {l}}]_i}+\phi ^N_{[{{\hat{{\mathfrak {l}}}}}]_i})\right) (a(t)). \end{aligned}$$

Here we have used the fact that the directional derivatives given by Proposition 4.3 are continuous on \(\varDelta _{\mathfrak {l}}(T)\) to guarantee differentiability in any linear combination of those directions. We integrate to obtain, for all \(t\in [0,1)\),

$$\begin{aligned} \phi _{\mathfrak {l}}^N(a(t))=\phi _{\mathfrak {l}}^N(a)+\sum _{i\in {\mathcal {I}}}\int _0^t\alpha _i\phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i}(a(s))ds \end{aligned}$$
(39)

and

$$\begin{aligned}&\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N(a(t))\nonumber \\&\quad =\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N(a) +\sum _{i\in {\mathcal {I}}}\alpha _i\int _0^t\left( \phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i,{\mathfrak {l}}^{-1}}+\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}_i^{-1},{{\hat{{\mathfrak {l}}}}}_i^{-1}} -N^{-2}(\phi ^N_{[{\mathfrak {l}}]_i}+\phi ^N_{[{{\hat{{\mathfrak {l}}}}}]_i})\right) (a(s))ds. \end{aligned}$$
(40)

Since \(\phi _{\mathfrak {l}}^N\) and \(\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N\) extend continuously to \(\overline{\varDelta _{\mathfrak {l}}(T)}\) and the integrands on the right are bounded, these equations hold also for \(t=1\).

We shall now prove the following key identities

$$\begin{aligned} \phi _{{\mathfrak {l}}}^N(a(1))=\phi ^N_{{\mathfrak {s}}^{n_*}}(a_0,a_*),\quad \phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N(a(1))=\phi ^N_{{\mathfrak {s}}^{n_*},{\mathfrak {s}}^{-n_*}}(a_0,a_*). \end{aligned}$$
(41)

Choose a loop \(l\in {\mathfrak {l}}(a)\) such that the faces \(f_{k_0}\) and \(f_{k_*}\) of the associated embedded graph have a \(C^1\) boundary. We will use a deformation \((l_t)_{t\in [0,1]}\) of l constructed using a diffeomorphism from \({\mathbb {S}}_T{\setminus }(f_{k_0}\cup f_{k_*})\) to a cylinder and then contracting the cylinder to a circle. Since \(k_*\) and \(k_0\) have maximal and minimal winding number, the pair \((a_0,a_*)\) defined by (36) satisfies \(a_0\ge a_{k_0}\) and \(a_*\ge a_{k_*}\). Hence, there is an area-preserving \(C^1\) diffeomorphism

$$\begin{aligned} F:{\mathbb {S}}_T{\setminus }(f_{k_0}\cup f_{k_*})\rightarrow ({\mathbb {R}}/{\mathbb {Z}})\times [-a_0+a_{k_0},a_*-a_{k_*}] \end{aligned}$$

where the right-hand side is endowed with Lebesgue measure. By re-basing the loop l if necessary, we may assume that the starting point l(0) is not adjacent to \(f_{k_0}\) or \(f_{k_*}\). Write \(F(l(\tau ))=(\theta (\tau ),y(\tau ))\). We can and do choose l and F so that \(F(l(0))=(0,0)\) and so that \({{\dot{\theta }}}(\tau )\) makes only finitely many changes of sign. For \(t\in [0,1]\) and \((\theta ,y)\in ({\mathbb {R}}/{\mathbb {Z}})\times [-a_0+a_{k_0},a_*-a_{k_*}]\), define

$$\begin{aligned} C_t(\theta ,y)=(\theta ,(1-t)y) \end{aligned}$$

and define a family \((l_t:t\in [0,1])\) in \({\text {Loop}}({\mathbb {S}}_T)\) by

$$\begin{aligned} l_t(\tau )=F^{-1}\circ C_t\circ F(l(\tau )). \end{aligned}$$
(42)

Then \(l_0=l\) and \((l_t:t\in [0,1])\) is continuous in length with fixed endpoints. Define

$$\begin{aligned} s(\tau )=F^{-1}(\tau ,0). \end{aligned}$$

Then \(s\in {\text {Loop}}_0({\mathbb {S}}_T)\) and, since \(F\circ l_1(\tau )=(\theta (\tau ),0)\), we have \(l_1\sim s^{n_*}\), where \(n_*=\theta (1)\in {\mathbb {Z}}\) is the winding number of l. Then, by continuity in probability and invariance under reduction of the holonomy field,

$$\begin{aligned} \begin{aligned} \phi ^N_{{\mathfrak {l}}}(a(1))&=\lim _{t\uparrow 1}\phi ^N_{{\mathfrak {l}}}(a(t))=\lim _{t\uparrow 1}{\mathbb {E}}(\mathrm {tr}(H_{l_t} ))\\&={\mathbb {E}}(\mathrm {tr}(H_{l_1}))={\mathbb {E}}(\mathrm {tr}(H_{s^{n_*}}))=\phi ^N_{{\mathfrak {s}}^{n_*}}(a_0,a_*). \end{aligned} \end{aligned}$$

We see similarly that \(\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}^N(a(1))=\phi ^N_{{\mathfrak {s}}^{n_*},{\mathfrak {s}}^{-n_*}}(a_0,a_*)\).

Now, given (41), by Proposition 2.5,

$$\begin{aligned} \phi ^N_{{\mathfrak {s}}^{n_*}}(a_1,a_2)\rightarrow \phi _T(n_*,a_1,a_2) \end{aligned}$$

uniformly in \((a_1,a_2)\in \varDelta _{\mathfrak {s}}(T)\). Write \(l_i\) and \({{\hat{l}}}_i\) for the drawings of \({\mathfrak {l}}_i\) and \({{\hat{{\mathfrak {l}}}}}_i\) in \({\mathbb {G}}\) for some \({\mathbb {G}}\in {\mathfrak {l}}(a)\). Then

$$\begin{aligned} \phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i}(a)={\mathbb {E}}(\mathrm {tr}(H_{l_i})\mathrm {tr}(H_{{{\hat{l}}}_i})). \end{aligned}$$

Since both \(l_i\) and \({{\hat{l}}}_i\) have no more than \(n-1\) self-intersections, by the inductive hypothesis,

$$\begin{aligned} \mathrm {tr}(H_{l_i})\rightarrow \phi _{{\mathfrak {l}}_i}(a),\quad \mathrm {tr}(H_{{{\hat{l}}}_i})\rightarrow \phi _{{{\hat{{\mathfrak {l}}}}}_i}(a) \end{aligned}$$
(43)

in \(L^2\), uniformly in \(a\in \varDelta _{\mathfrak {l}}(T)\). Hence

$$\begin{aligned} \phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i}\rightarrow \phi _{{\mathfrak {l}}_i}\phi _{{{\hat{{\mathfrak {l}}}}}_i} \end{aligned}$$

uniformly on \(\varDelta _{\mathfrak {l}}(T)\). Here we used the obvious submersions \(\varDelta _{\mathfrak {l}}(T)\rightarrow \varDelta _{{\mathfrak {l}}_i}(T)\) and \(\varDelta _{\mathfrak {l}}(T)\rightarrow \varDelta _{{{\hat{{\mathfrak {l}}}}}_i}(T)\) in evaluating \(\phi _{{\mathfrak {l}}_i}\) and \(\phi _{{{\hat{{\mathfrak {l}}}}}_i}\) on \(\varDelta _{\mathfrak {l}}(T)\). We let \(N\rightarrow \infty \) in (39), first in the case \(t=1\) and then for \(t\in (0,1)\) to see that \(\phi _{\mathfrak {l}}^N\) converges uniformly on \(\overline{\varDelta _{\mathfrak {l}}(T)}\) with continuous limit, \(\phi _{\mathfrak {l}}\) say, satisfying, for all \(t\in [0,1]\),

$$\begin{aligned} \phi _{\mathfrak {l}}(a(t))=\phi _{\mathfrak {l}}(a)+\sum _{i\in {\mathcal {I}}}\int _0^t\alpha _i\phi _{{\mathfrak {l}}_i}(a(s))\phi _{{{\hat{{\mathfrak {l}}}}}_i}(a(s))ds. \end{aligned}$$
(44)

Using again (41) and Proposition 2.5, for \(s\in {\mathfrak {s}}(a_1,a_2)\),

$$\begin{aligned} \phi ^N_{{\mathfrak {s}}^n,{\mathfrak {s}}^{-n}}(a_1,a_2)={\mathbb {E}}(|\mathrm {tr}(H^n_s)|^2)\rightarrow \phi _T(n,a_1,a_2)^2 \end{aligned}$$

uniformly in \((a_1,a_2)\in \varDelta _{\mathfrak {s}}(T)\) as \(N\rightarrow \infty \). We have

$$\begin{aligned} \phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i,{\mathfrak {l}}^{-1}}(a)=\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}_i^{-1},{{\hat{{\mathfrak {l}}}}}_i^{-1}}(a) ={\mathbb {E}}(\mathrm {tr}(H_{l_i})\mathrm {tr}(H_{{{\hat{l}}}_i})\mathrm {tr}(H_{l^{-1}})) \end{aligned}$$

and we have just shown that

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(H_{l^{-1}}))={\mathbb {E}}(\mathrm {tr}(H_l))\rightarrow \phi _{\mathfrak {l}}(a) \end{aligned}$$

uniformly in \(a\in \varDelta _{\mathfrak {l}}(T)\). In combination with (43), we deduce that, uniformly on \(\varDelta _{\mathfrak {l}}(T)\),

$$\begin{aligned} \phi ^N_{{\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i,{\mathfrak {l}}^{-1}}\rightarrow \phi _{{\mathfrak {l}}_i}\phi _{{{\hat{{\mathfrak {l}}}}}_i}\phi _{\mathfrak {l}}. \end{aligned}$$

Hence, on letting \(N\rightarrow \infty \) in (40), first in the case \(t=1\) and then for \(t\in (0,1)\), we see that \(\phi ^N_{{\mathfrak {l}},{\mathfrak {l}}^{-1}}\) converges uniformly on \(\overline{\varDelta _{\mathfrak {l}}(T)}\) with continuous limit, \(\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}\) say, satisfying, for all \(t\in [0,1]\),

$$\begin{aligned} \phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}(a(t))=\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}(a) +2\sum _{i\in {\mathcal {I}}}\alpha _i\int _0^t\phi _{{\mathfrak {l}}_i}(a(s))\phi _{{{\hat{{\mathfrak {l}}}}}_i}(a(s))\phi _{\mathfrak {l}}(a(s))ds. \end{aligned}$$
(45)

By differentiating (44) and (45), and noting that the derivative of (45) is exactly \(2\phi _{\mathfrak {l}}(a(t))\) times that of (44), we see that

$$\begin{aligned} \frac{d}{dt}\left( \phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}(a(t))-\phi _{\mathfrak {l}}(a(t))^2\right) =0 \end{aligned}$$

so

$$\begin{aligned} \phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}(a)-\phi _{\mathfrak {l}}(a)^2=\phi _{{\mathfrak {l}},{\mathfrak {l}}^{-1}}(a(1))-\phi _{\mathfrak {l}}(a(1))^2=0. \end{aligned}$$

Thus the first of the desired statements holds for n.

We turn to the second statement. First we will show the claimed properties of the master field \(\varPhi _T\) on \({\text {Loop}}_n({\mathbb {S}}_T)\). By the first statement, for all \(a\in \varDelta _{\mathfrak {l}}(T)\) and \(l\in {\mathfrak {l}}(a)\),

$$\begin{aligned} \varPhi _T(l)=\phi _{\mathfrak {l}}(a). \end{aligned}$$

Hence \(\varPhi _T\) is invariant under area-preserving homeomorphisms. We take \(t=1\) in (44) and use the estimate (37) to see that

$$\begin{aligned} |\varPhi _T(l)-\phi _T(n_*,a_0,a_*)|\le 2C_{\mathfrak {l}}(T-a_{k_0}-a_{k_*}). \end{aligned}$$

It remains to show that \(\varPhi _T\) satisfies the Makeenko–Migdal equations (10) on \({\text {Loop}}_n({\mathbb {S}}_T)\). Let l be a regular loop with n self-intersections. Let \(i\in {\mathcal {I}}\) and let \(\theta :[0,\eta )\times {\mathbb {S}}_T\rightarrow {\mathbb {S}}_T\) be a Makeenko–Migdal flow at \((l,v_i)\). Write \(a_\theta (t)\) for the face-area vector of \(l(t)=\theta (t,l)\). Then

$$\begin{aligned} a_\theta (t)=a+t\varXi _i \end{aligned}$$

so, by the argument leading to (44),

$$\begin{aligned} {\mathbb {E}}(\mathrm {tr}(H_{l(t)}))={\mathbb {E}}(\mathrm {tr}(H_l))+\int _0^t{\mathbb {E}}(\mathrm {tr}(H_{l_i(s)})\mathrm {tr}(H_{{{\hat{l}}}_i(s)}))ds. \end{aligned}$$

By bounded convergence, on letting \(N\rightarrow \infty \), we obtain

$$\begin{aligned} \varPhi _T(l(t))=\varPhi _T(l)+\int _0^t\varPhi _T(l_i(s))\varPhi _T({{\hat{l}}}_i(s))ds \end{aligned}$$

as required.

Suppose finally that \(\varPsi :{\text {Loop}}_n({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) is another function with the same properties. We have to show that \(\varPsi =\varPhi _T\) on \({\text {Loop}}_n({\mathbb {S}}_T)\). Let \({\mathfrak {l}}\) be a combinatorial planar loop with at most n self-intersections. Then, for all \(a\in \varDelta _{\mathfrak {l}}(T)\), the set of embedded loops \({\mathfrak {l}}(a)\) is non-empty, and \(\varPsi \) takes a constant value on \({\mathfrak {l}}(a)\). So there is a unique function

$$\begin{aligned} \psi _{\mathfrak {l}}:\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {C}}\end{aligned}$$

such that \(\psi _{\mathfrak {l}}(a)=\varPsi (l)\) for all \(l\in {\mathfrak {l}}(a)\). Let i be a self-intersection of \({\mathfrak {l}}\) and let \(l\in {\mathfrak {l}}(a)\). Then there exists a Makeenko–Migdal flow \(\theta \) at \((l,v_i)\). Thus, for t sufficiently small, we have \(\theta (t,l)\in {\mathfrak {l}}(a+t\varXi _i)\) and hence

$$\begin{aligned} \psi _{\mathfrak {l}}(a+t\varXi _i)=\varPsi (\theta (t,l)). \end{aligned}$$

Since \(\varPsi \) satisfies the Makeenko–Migdal equations, it follows that \(\psi _{\mathfrak {l}}\) has a directional derivative at a given by

$$\begin{aligned} \varXi _i\psi _{\mathfrak {l}}(a)=\left. \frac{d}{dt}\right| _{t=0}\varPsi (\theta (t,l))=\varPsi (l_i)\varPsi ({{\hat{l}}}_i)=\psi _{{\mathfrak {l}}_i}(a)\psi _{{{\hat{{\mathfrak {l}}}}}_i}(a) \end{aligned}$$

where \(l_i,{{\hat{l}}}_i\) are the loops obtained by splitting l at \(v_i\), and \({\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i\) are the associated combinatorial loops.

Given \(a\in \varDelta _{\mathfrak {l}}(T)\), define a(t) as at (38). Then, by the argument leading to (39), for all \(t\in [0,1)\),

$$\begin{aligned} \psi _{\mathfrak {l}}(a(t))=\psi _{\mathfrak {l}}(a)+\sum _{i\in {\mathcal {I}}}\alpha _i\int _0^t\psi _{{\mathfrak {l}}_i}(a(s))\psi _{{{\hat{{\mathfrak {l}}}}}_i}(a(s))ds. \end{aligned}$$

Since \(\varPsi \) satisfies (13), in the limit \(t\rightarrow 1\), we have

$$\begin{aligned} |\psi _{\mathfrak {l}}(a(t))-\phi _T(n_*,a_0,a_*)|\le C_n(T-a_{k_0}(t)-a_{k_*}(t))\rightarrow 0. \end{aligned}$$

By the inductive hypothesis, since \({\mathfrak {l}}_i\) and \({{\hat{{\mathfrak {l}}}}}_i\) have no more than \(n-1\) self-intersections,

$$\begin{aligned} \psi _{{\mathfrak {l}}_i}(a(s))=\phi _{{\mathfrak {l}}_i}(a(s)),\quad \psi _{{{\hat{{\mathfrak {l}}}}}_i}(a(s))=\phi _{{{\hat{{\mathfrak {l}}}}}_i}(a(s)) \end{aligned}$$

for all \(s\in [0,1)\). Hence \(\varPsi (l)=\psi _{\mathfrak {l}}(a)=\phi _{\mathfrak {l}}(a)=\varPhi _T(l)\), showing that \(\varPsi =\varPhi _T\) on \({\text {Loop}}_n({\mathbb {S}}_T)\), as required. Hence both statements hold for n and the induction proceeds.

\(\square \)

5 Extension to Loops of Finite Length

5.1 Some estimates for piecewise geodesic loops

Our aim in this section is to prove Proposition 2.7, which is the final step in the proof of our main result Theorem 2.2. We follow a line of argument adapted from [8], where estimates are given for Wilson loops in the plane, which are uniform in N. Instead of using explicit formulas for expectations of Wilson loops, the idea is to revisit certain estimates which were used in the construction of the Yang-Mills measure [40, Section 3.3], and to show that, when applied to suitable functions, these estimates are uniform in N. For clarity, we shall give a more detailed account of the argument of [8, Theorem 4.1], reproducing part of the proof of [40, Section 3.3].

Write \({\text {Path}}_*({\mathbb {S}}_T)\) and \({\text {Loop}}_*({\mathbb {S}}_T)\) for the sets of piecewise geodesic paths and loops in \({\mathbb {S}}_T\). Set \(\kappa =\sqrt{\pi T}/2\) and note that \(\kappa \) is the length of a great-circle arc joining two antipodal points of \({\mathbb {S}}_T\). For \(\alpha \in {\text {Path}}({\mathbb {S}}_T)\), write \(n_0(\alpha )\) for the smallest integer such that \(2^{-n_0(\alpha )}\ell (\alpha )<\kappa \). For \(n\ge n_0(\alpha )\), we define \(D_n(\alpha )\in {\text {Path}}_*({\mathbb {S}}_T)\) by parametrizing \(\alpha \) by [0, 1] at constant speed and then interpolating the points \((\alpha (k2^{-n}):k=0,1,\dots ,2^n)\) by geodesics. Then \(D_n(\alpha )\rightarrow \alpha \) in length as \(n\rightarrow \infty \), so \({\text {Path}}_*({\mathbb {S}}_T)\) is dense in \({\text {Path}}({\mathbb {S}}_T)\) for the topology of convergence in length with fixed endpoints. Note in particular that, when \(\ell (\alpha )<\kappa \), we use the notation \(D_0(\alpha )\) for the unique geodesic with the same endpoints as \(\alpha \). Define, for \(\alpha \in {\text {Loop}}({\mathbb {S}}_T)\),

$$\begin{aligned} \varPsi _N(\alpha )=\sqrt{{\mathbb {E}}(\mathrm {tr}(I-H_\alpha ))}=\sqrt{1-\varPhi _T^N(\alpha )},\quad \varPhi _T^N(\alpha )={\mathbb {E}}(\mathrm {tr}(H_\alpha )) \end{aligned}$$

where \(H=(H_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) is a Yang–Mills holonomy field in U(N). By Lemma 3.5, there is a \(K_1>0\) such that, for all N and all \(a\in [0,T]\), for all simple loops \(\alpha \) bounding a domain of area a,

$$\begin{aligned} \varPsi _N(\alpha )\le K_1\sqrt{a}. \end{aligned}$$
(46)
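As an aside, the dyadic approximation \(D_n\) introduced above is straightforward to realize numerically. The following minimal sketch (illustrative only; the sphere is embedded as the round sphere of radius \(\sqrt{T/(4\pi )}\) in \({\mathbb {R}}^3\), and the function names are ours) joins the sample points \(\alpha (k2^{-n})\) by great-circle arcs.

```python
import numpy as np

def slerp(p, q, s):
    """Point at fraction s along the shorter great-circle arc from p to q
    (p, q on a sphere centred at the origin, not antipodal)."""
    r = np.linalg.norm(p)
    omega = np.arccos(np.clip(np.dot(p, q) / r**2, -1.0, 1.0))
    if omega < 1e-12:
        return p.copy()
    return (np.sin((1.0 - s) * omega) * p + np.sin(s * omega) * q) / np.sin(omega)

def dyadic_approximation(alpha, n, pts_per_arc=50):
    """Sample points of D_n(alpha): the points alpha(k 2^{-n}) of a
    constant-speed parametrization, joined by great-circle arcs."""
    arc = []
    for k in range(2 ** n):
        p, q = alpha(k / 2.0 ** n), alpha((k + 1) / 2.0 ** n)
        arc += [slerp(p, q, s) for s in np.linspace(0.0, 1.0, pts_per_arc, endpoint=False)]
    arc.append(alpha(1.0))
    return np.array(arc)

# Illustration: a circle of latitude on the sphere of total area T,
# parametrized at constant speed.
T = 10.0
r = np.sqrt(T / (4 * np.pi))
alpha = lambda t: r * np.array([np.sin(1.0) * np.cos(2 * np.pi * t),
                                np.sin(1.0) * np.sin(2 * np.pi * t),
                                np.cos(1.0)])
curve = dyadic_approximation(alpha, n=3)
```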

The function \(\varPsi _N\) inherits from H the following properties: for all \(\alpha ,\beta \in {\text {Loop}}({\mathbb {S}}_T)\) with \(\alpha \sim \beta \),

$$\begin{aligned} \varPsi _N(\alpha )=\varPsi _N(\alpha ^{-1}),\quad \varPsi _N(\alpha )=\varPsi _N(\beta ) \end{aligned}$$
(47)

and, for all pairs of paths \(\gamma _1,\gamma _2\in {\text {Path}}({\mathbb {S}}_T)\) which concatenate to form a loop,

$$\begin{aligned} \varPsi _N(\gamma _1\gamma _2)=\varPsi _N(\gamma _2\gamma _1). \end{aligned}$$
(48)

Moreover, using the identity

$$\begin{aligned} 2\varPsi _N(\alpha \beta )^2={\mathbb {E}}\left( \mathrm {tr}((H_\alpha -H_\beta ^*)(H_{\alpha }^*-H_\beta ))\right) \end{aligned}$$

and Cauchy–Schwarz, we obtain

$$\begin{aligned} \varPsi _N(\alpha \beta )\le \varPsi _N(\alpha )+\varPsi _N(\beta ) \end{aligned}$$
(49)

whenever \(\alpha \) and \(\beta \) have the same base point.
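Spelled out (a routine verification): write \(\Vert A\Vert ={\mathbb {E}}(\mathrm {tr}(AA^*))^{1/2}\) and note that \({\mathbb {E}}(\mathrm {tr}(I-H_\gamma ))\) is real for every loop \(\gamma \), as implicit in the definition of \(\varPsi _N\). Then the identity above reads \(2\varPsi _N(\alpha \beta )^2=\Vert H_\alpha -H_\beta ^*\Vert ^2\), while \(2\varPsi _N(\alpha )^2=\Vert H_\alpha -I\Vert ^2\) and \(2\varPsi _N(\beta )^2=\Vert I-H_\beta ^*\Vert ^2\), so the triangle inequality for \(\Vert \cdot \Vert \), which follows from Cauchy–Schwarz, gives

$$\begin{aligned} \sqrt{2}\,\varPsi _N(\alpha \beta )=\Vert H_\alpha -H_\beta ^*\Vert \le \Vert H_\alpha -I\Vert +\Vert I-H_\beta ^*\Vert =\sqrt{2}\left( \varPsi _N(\alpha )+\varPsi _N(\beta )\right) . \end{aligned}$$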

Lemma 5.1

There is a function \(\varPsi :{\text {Loop}}_*({\mathbb {S}}_T)\rightarrow [0,\infty )\) satisfying (46), (47), (48) and (49) and such that \(\varPsi _N(\alpha )\rightarrow \varPsi (\alpha )\) as \(N\rightarrow \infty \) for all \(\alpha \in {\text {Loop}}_*({\mathbb {S}}_T)\).

Proof

Let us first argue that, for any \(\alpha \in {\text {Loop}}_*({\mathbb {S}}_T)\), there is an \(n\ge 0\), a combinatorial loop \({\mathfrak {l}}\) having at most n self-intersections, and a sequence \((\alpha _k:k\in {\mathbb {N}})\) in \({\text {Loop}}_n({\mathbb {S}}_T)\) such that \(\alpha _k\) is a drawing of \({\mathfrak {l}}\) for all k and \(\alpha _k\rightarrow \alpha \) in length as \(k\rightarrow \infty \). A loop \(\alpha \in {\text {Loop}}_*({\mathbb {S}}_T)\) is a finite concatenation \(\gamma _1\dots \gamma _m\) of segments of great circles, each of which we may assume to have length less than \(\kappa \). Set \(n=m(m-1)/2\). Consider the parametrized family in \({\text {Loop}}_*({\mathbb {S}}_T)\) obtained by small perturbations of the segment endpoints. Since any two distinct segments \(\gamma _i\) and \(\gamma _j\) either intersect in at most one point, or are contained in the same great circle, the set of loops in this family which are not in \({\text {Loop}}_n({\mathbb {S}}_T)\) is of measure zero for a random choice of endpoints. Hence there exists a sequence \((\alpha _k:k\in {\mathbb {N}})\) in \({\text {Loop}}_n({\mathbb {S}}_T)\) such that \(\alpha _k\rightarrow \alpha \) in length with fixed endpoints.

Since the set of combinatorial planar loops with n self-intersections is finite, we can assume without loss of generality that there is a combinatorial planar loop \({\mathfrak {l}}\) such that \(\alpha _k\) is a drawing of \({\mathfrak {l}}\) for all \(k\ge 1\) and such that \(a(\alpha _k)\) converges in \(\overline{\varDelta _{\mathfrak {l}}(T)}\) as \(k\rightarrow \infty \), with limit a say. Consider the Wilson loop function \(\phi _{\mathfrak {l}}^N:\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {C}}\). By Proposition 2.6, we know that \(\phi _{\mathfrak {l}}^N\) is uniformly continuous on \(\varDelta _{\mathfrak {l}}(T)\) for all N and \(\phi _{\mathfrak {l}}^N\rightarrow \phi _{\mathfrak {l}}\) uniformly on \(\varDelta _{\mathfrak {l}}(T)\) as \(N\rightarrow \infty \). Hence \(\phi _{\mathfrak {l}}\) has a continuous extension to \(\overline{\varDelta _{\mathfrak {l}}(T)}\), which we denote also by \(\phi _{\mathfrak {l}}\), and \(\phi _{\mathfrak {l}}^N\rightarrow \phi _{\mathfrak {l}}\) uniformly on \(\overline{\varDelta _{\mathfrak {l}}(T)}\). Now, as \(N\rightarrow \infty \),

$$\begin{aligned} \varPhi _T^N(\alpha )=\phi ^N_{\mathfrak {l}}(a)\rightarrow \phi _{\mathfrak {l}}(a) \end{aligned}$$

so we can conclude that the following limit is well-defined

$$\begin{aligned} \varPhi _T(\alpha )=\lim _{N\rightarrow \infty }\varPhi _T^N(\alpha ). \end{aligned}$$

Let us set

$$\begin{aligned} \varPsi (\alpha )=\sqrt{1-\varPhi _T(\alpha )}. \end{aligned}$$

Then \(\varPsi (\alpha )=\lim _{N\rightarrow \infty }\varPsi _N(\alpha )\) and the properties (46), (47), (48), (49) extend to \(\varPsi \) on taking the limit \(N\rightarrow \infty \). \(\quad \square \)

We note for later use a further inequality which follows from (47), (48), (49): for all \(\alpha ,\beta \in {\text {Loop}}_*({\mathbb {S}}_T)\) having the same base point,

$$\begin{aligned} |\varPsi (\alpha )-\varPsi (\beta )|\le \varPsi (\alpha \beta ^{-1}). \end{aligned}$$
(50)

The following isoperimetric inequality is shown in [40, Lemma 3.3.5]: there is a constant \(K_2\in [\kappa ^{-1},\infty )\) such that, for all \(\alpha \in {\text {Path}}_*({\mathbb {S}}_T)\) of length \(\ell (\alpha )<K_2^{-1}\) and such that the loop \(s=\alpha ^{-1}D_0(\alpha )\) is simple, we have

$$\begin{aligned} \sqrt{a}\le K_2\ell (\alpha )^{3/4}(\ell (\alpha )-\ell (D_0(\alpha )))^{1/4} \end{aligned}$$
(51)

where a is the smaller of the areas of the connected components of \({\mathbb {S}}_T{\setminus } s^*\). The next proposition follows a line of argument similar to [40, Lemma 3.3.4], but reformulated in a simpler way and taking care to obtain a constant K independent of N.

Proposition 5.1

There is a constant \(K\in [\kappa ^{-1},\infty )\) such that, for all \(N\in {\mathbb {N}}\), all \(n\ge 0\) and all \(\alpha \in {\text {Path}}({\mathbb {S}}_T)\) with \(2^{-n}\ell (\alpha )<K^{-1}\), we have

$$\begin{aligned} \varPsi _N(\alpha D_n(\alpha )^{-1})\le K\ell (\alpha )^{3/4}(\ell (\alpha )-\ell (D_n(\alpha )))^{1/4}. \end{aligned}$$

Moreover the same estimate holds for \(\varPsi \) whenever \(\alpha \in {\text {Path}}_*({\mathbb {S}}_T)\).

Proof

The argument relies only on the properties (46), (47), (48), (49) which hold for both \(\varPsi _N\) and \(\varPsi \), and the continuity of \(\varPsi _N\) on \({\text {Loop}}({\mathbb {S}}_T)\), which allows us to reduce to the case \(\alpha \in {\text {Path}}_*({\mathbb {S}}_T)\). We will write it out for \(\varPsi \). Consider first the case where \(\alpha \in {\text {Path}}_*({\mathbb {S}}_T)\), and \(\alpha \) is injective with \(\ell (\alpha )<\kappa \). Then (see [40, Proposition 3.3.6] and Fig. 2 therein) we can write \(\alpha \) as the concatenation \(\alpha _1\dots \alpha _p\) of its excursions away from, or along, \(D_0(\alpha )\) to obtain a lasso decomposition

$$\begin{aligned} \alpha D_0(\alpha )^{-1}\sim l_1\dots l_p,\quad l_i=\gamma _is_i\gamma _i^{-1},\quad s_i=\alpha _iD_0(\alpha _i)^{-1} \end{aligned}$$

where \(s_i\in {\text {Loop}}_*({\mathbb {S}}_T)\) and \(\gamma _i\in {\text {Path}}_*({\mathbb {S}}_T)\) for all i, and where either \(s_i\) is simple or \(\alpha _i=D_0(\alpha _i)\), and such that

$$\begin{aligned} \ell (\alpha )=\ell (\alpha _1)+\dots +\ell (\alpha _p),\quad \ell (D_0(\alpha ))=\ell (D_0(\alpha _1))+\dots +\ell (D_0(\alpha _p)). \end{aligned}$$

Write \(a_i\) for the smaller of the areas of the connected components of \({\mathbb {S}}_T{\setminus } s_i^*\). In the case \(\alpha _i=D_0(\alpha _i)\), when there is only one such component, set \(a_i=0\). Note that \(\ell (s_i)\le 2\ell (\alpha _i)\le 2\ell (\alpha )\). Take \(K=\max \{K_1K_2,2K_2\}\) and suppose that \(\ell (\alpha )<K^{-1}\). Then

$$\begin{aligned} \varPsi (\alpha D_0(\alpha )^{-1})&=\varPsi (l_1\dots l_p)\le \varPsi (l_1)+\dots +\varPsi (l_p)=\varPsi (s_1)+\dots +\varPsi (s_p)\\&\le K_1(\sqrt{a_1}+\dots +\sqrt{a_p})\\&\le K_1K_2\sum _i\ell (\alpha _i)^{3/4}(\ell (\alpha _i)-\ell (D_0(\alpha _i)))^{1/4}\\&\le K\ell (\alpha )^{3/4}(\ell (\alpha )-\ell (D_0(\alpha )))^{1/4} \end{aligned}$$

where we used Hölder’s inequality for the last step.

Now, for general \(\alpha \in {\text {Path}}_*({\mathbb {S}}_T)\) with \(\ell (\alpha )<K^{-1}\), according to [40, Proposition 1.4.9], we can write \(\alpha \) as a concatenation \(\alpha _1\dots \alpha _p\gamma \), where each \(\alpha _i\) is run up to the end of its first interval of self-intersection. So we obtain a lasso decomposition

$$\begin{aligned} \alpha \sim l_1\dots l_p\gamma ,\quad l_i=\gamma _i s_i\gamma _i^{-1},\quad \ell (\alpha )=\ell (s_1)+\dots +\ell (s_p)+\ell (\gamma ) \end{aligned}$$

where \(s_i\in {\text {Loop}}_*({\mathbb {S}}_T)\) is simple and \(\gamma _i\in {\text {Path}}_*({\mathbb {S}}_T)\) for all i, and where \(\gamma \in {\text {Path}}_*({\mathbb {S}}_T)\) is injective. Write \(a_i\) for the smaller of the areas of the connected components of \({\mathbb {S}}_T{\setminus } s_i^*\). Then

$$\begin{aligned} \varPsi (l_i)=\varPsi (s_i)\le K_1\sqrt{a_i}\le K_1K_2\ell (s_i) \end{aligned}$$

so

$$\begin{aligned} \varPsi (l_1)+\dots +\varPsi (l_p)\le K(\ell (s_1)+\dots +\ell (s_p))=K(\ell (\alpha )-\ell (\gamma )). \end{aligned}$$

On the other hand, by the first part,

$$\begin{aligned} \varPsi (\gamma D_0(\gamma )^{-1})\le K\ell (\gamma )^{3/4}(\ell (\gamma )-\ell (D_0(\gamma )))^{1/4}. \end{aligned}$$

But \(D_0(\gamma )=D_0(\alpha )\), so

$$\begin{aligned} \varPsi (\alpha D_0(\alpha )^{-1})&=\varPsi (l_1\dots l_p\gamma D_0(\gamma )^{-1})\le \varPsi (l_1)+\dots +\varPsi (l_p)+\varPsi (\gamma D_0(\gamma )^{-1})\\&\le K(\ell (\alpha )-\ell (\gamma ))+K\ell (\gamma )^{3/4}(\ell (\gamma )-\ell (D_0(\gamma )))^{1/4}\\&\le K\ell (\alpha )^{3/4}(\ell (\alpha )-\ell (D_0(\alpha )))^{1/4}. \end{aligned}$$

Finally, for \(n\ge 0\) and \(\alpha \in {\text {Path}}_*({\mathbb {S}}_T)\) with \(2^{-n}\ell (\alpha )<K^{-1}\), we can write \(\alpha \) as a concatenation \(\alpha _1\dots \alpha _{2^n}\) such that

$$\begin{aligned} D_n(\alpha )=D_0(\alpha _1)\dots D_0(\alpha _{2^n}),\quad \ell (\alpha _i)=2^{-n}\ell (\alpha ). \end{aligned}$$

Then there is a lasso decomposition

$$\begin{aligned} \alpha D_n(\alpha )^{-1}\sim l_1\dots l_{2^n},\quad l_i=\gamma _i\alpha _i D_0(\alpha _i)^{-1}\gamma _i^{-1} \end{aligned}$$

where \(\gamma _i\in {\text {Path}}_*({\mathbb {S}}_T)\) for all i. Then, by the second part,

$$\begin{aligned} \varPsi (\alpha D_n(\alpha )^{-1})&=\varPsi (l_1\dots l_{2^n})\le \sum _i\varPsi (\alpha _iD_0(\alpha _i)^{-1})\\&\le \sum _iK\ell (\alpha _i)^{3/4}(\ell (\alpha _i)-\ell (D_0(\alpha _i)))^{1/4}\\&\le K\ell (\alpha )^{3/4}(\ell (\alpha )-\ell (D_n(\alpha )))^{1/4}. \end{aligned}$$

\(\square \)

5.2 Proof of Proposition 2.7

Let \((H_\gamma :\gamma \in {\text {Path}}({\mathbb {S}}_T))\) be a Yang–Mills holonomy field in U(N). We have to show that \(\mathrm {tr}(H_l)\) converges in probability as \(N\rightarrow \infty \) for all \(l\in {\text {Loop}}({\mathbb {S}}_T)\). Further, we have to show that the master field

$$\begin{aligned} \varPhi _T(l)=\lim _{N\rightarrow \infty }{\mathbb {E}}(\mathrm {tr}(H_l)) \end{aligned}$$

is the unique continuous function \({\text {Loop}}({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) which is invariant under reduction and under area-preserving homeomorphisms, satisfies the Makeenko–Migdal equations (10) on regular loops, and satisfies (11) for simple loops.

Let \(l\in {\text {Loop}}({\mathbb {S}}_T)\) and set \(l_n=D_n(l)\). Note that \(D_n(l_m)=l_n\) when \(m\ge n\). By (50) and Proposition 5.1, for \(n\in {\mathbb {N}}\) sufficiently large and \(m\ge n\),

$$\begin{aligned} |\varPsi (l_m)-\varPsi (l_n)|\le \varPsi (l_ml_n^{-1})\le K\ell (l)^{3/4}(\ell (l)-\ell (l_n))^{1/4}. \end{aligned}$$

Also

$$\begin{aligned} {\mathbb {E}}(|\mathrm {tr}(H_{l_n})-\mathrm {tr}(H_l)|^2)\le {\mathbb {E}}(\mathrm {tr}((H_{l_n}-H_l)(H_{l_n}-H_l)^*))=2\varPsi _N(ll_n^{-1})^2 \end{aligned}$$

so

$$\begin{aligned} \Vert \mathrm {tr}(H_{l_n})-\mathrm {tr}(H_l)\Vert _2\le \sqrt{2}\varPsi _N(ll_n^{-1})\le \sqrt{2}K\ell (l)^{3/4}(\ell (l)-\ell (l_n))^{1/4}. \end{aligned}$$

Since \(\ell (l_n)\rightarrow \ell (l)\) as \(n\rightarrow \infty \), we see that \(\varPsi (l_n)\) and \(\varPhi _T(l_n)=1-\varPsi (l_n)^2\) must converge as \(n\rightarrow \infty \). Define

$$\begin{aligned} {{\tilde{\varPhi }}}(l)=\lim _{n\rightarrow \infty }\varPhi _T(l_n). \end{aligned}$$

Let \(n\rightarrow \infty \) and then \(N\rightarrow \infty \) in the inequality

$$\begin{aligned} \Vert \mathrm {tr}(H_l)-{{\tilde{\varPhi }}}(l)\Vert _1\le \Vert \mathrm {tr}(H_l)-\mathrm {tr}(H_{l_n})\Vert _1+\Vert \mathrm {tr}(H_{l_n})-\varPhi _T(l_n)\Vert _1+|\varPhi _T(l_n)-{{\tilde{\varPhi }}}(l)| \end{aligned}$$

to see that \(\mathrm {tr}(H_l)\rightarrow {{\tilde{\varPhi }}}(l)\) in probability and \(\varPhi _T(l)=\lim _{N\rightarrow \infty }{\mathbb {E}}(\mathrm {tr}(H_l))={{\tilde{\varPhi }}}(l)\).

The invariance of \(\varPhi _T\) on \({\text {Loop}}({\mathbb {S}}_T)\) under reduction and area-preserving homeomorphisms follows from the corresponding invariance properties of \(\varPhi _T^N\). The claimed properties of \(\varPhi _T\) on simple and regular loops were shown in Propositions 2.5 and 2.6. We now show that \(\varPhi _T\) is continuous on \({\text {Loop}}({\mathbb {S}}_T)\). For this, we translate to our context the argument of [40, Proposition 3.3.9]. Let \(\alpha \in {\text {Loop}}({\mathbb {S}}_T)\) and let \((\alpha _n:n\in {\mathbb {N}})\) be a sequence in \({\text {Loop}}({\mathbb {S}}_T)\) which converges to \(\alpha \) in length. We have to show that \(\varPhi _T(\alpha _n)\rightarrow \varPhi _T(\alpha )\). There exist area-preserving homeomorphisms \(\theta _n\) on \({\mathbb {S}}_T\) such that \(\theta _n(\alpha _n)\) converges to \(\alpha \) in length with fixed endpoints. We have \(\varPhi _T(\alpha )=1-\varPsi (\alpha )^2\) and we know that \(\varPsi (D_m(\alpha _n))\rightarrow \varPsi (\alpha _n)\) as \(m\rightarrow \infty \). Hence it will suffice to consider the case where \(\alpha _n\) is piecewise geodesic for all n and \(\alpha _n\) converges to \(\alpha \) in length with fixed endpoints, and to show then that \(\varPsi (\alpha _n)\rightarrow \varPsi (\alpha )\) as \(n\rightarrow \infty \). Parametrize \(\alpha \) at constant speed and choose parametrizations for the loops \(\alpha _n\) so that

$$\begin{aligned} \Vert \alpha _n-\alpha \Vert _\infty =\sup _{t\in [0,1]}|\alpha _n(t)-\alpha (t)|\rightarrow 0. \end{aligned}$$

Fix \(m\ge 0\) and write \(D_m(\alpha )\) and \(\alpha _n\) as concatenations

$$\begin{aligned} D_m(\alpha )=\sigma _1\dots \sigma _{2^m},\quad \alpha _n=\alpha _{n,1}\dots \alpha _{n,2^m} \end{aligned}$$

where \(\sigma _i\) is the geodesic from \(\alpha ((i-1)2^{-m})\) to \(\alpha (i2^{-m})\) and \(\alpha _{n,i}\) is the restriction of \(\alpha _n\) to \([(i-1)2^{-m},i2^{-m}]\). For \(i=0,1,\dots ,2^m\), denote by \(\eta _{n,i}\) the geodesic from \(\alpha (i2^{-m})\) to \(\alpha _n(i2^{-m})\). Then \(\ell (\eta _{n,0})=\ell (\eta _{n,2^m})=0\) and, for \(i=1,\dots ,2^m-1\),

$$\begin{aligned} \ell (\eta _{n,i})\le \Vert \alpha _n-\alpha \Vert _\infty . \end{aligned}$$

Set

$$\begin{aligned} \beta _n=\beta _{n,1}\dots \beta _{n,2^m},\quad \beta _{n,i}=\eta _{n,i-1}\alpha _{n,i}\eta _{n,i}^{-1}. \end{aligned}$$

Then \(\alpha _n\sim \beta _n\) and \(D_0(\beta _{n,i})=\sigma _i\) for all i. So

$$\begin{aligned} \varPsi (\alpha _nD_m(\alpha )^{-1})=\varPsi (\beta _nD_m(\alpha )^{-1}) \end{aligned}$$

and, by the argument used in the last part of the proof of Proposition 5.1,

$$\begin{aligned} \varPsi (\beta _nD_m(\alpha )^{-1})\le K\ell (\beta _n)^{3/4}(\ell (\beta _n)-\ell (D_m(\alpha )))^{1/4}. \end{aligned}$$

Now

$$\begin{aligned} \ell (\beta _n)\le \ell (\alpha _n)+2^{m+1}\Vert \alpha _n-\alpha \Vert _\infty \end{aligned}$$

so

$$\begin{aligned}&|\varPsi (\alpha _n)-\varPsi (\alpha )|\le \varPsi (\alpha _nD_m(\alpha )^{-1}) +|\varPsi (D_m(\alpha ))-\varPsi (\alpha )|\\&\quad \le K(\ell (\alpha _n)+2^{m+1}\Vert \alpha _n-\alpha \Vert _\infty )^{3/4}(\ell (\alpha _n)-\ell (D_m(\alpha ))+2^{m+1}\Vert \alpha _n-\alpha \Vert _\infty )^{1/4}\\&\quad \quad +|\varPsi (D_m(\alpha ))-\varPsi (\alpha )|. \end{aligned}$$

On letting first \(n\rightarrow \infty \) and then \(m\rightarrow \infty \), we see that \(\varPsi (\alpha _n)\rightarrow \varPsi (\alpha )\) as required.

Finally, suppose that \(\varPsi :{\text {Loop}}({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) is another function with the same properties. Then \(\varPsi =\varPhi _T\) on \({\text {Loop}}_0({\mathbb {S}}_T)\). Suppose inductively for \(n\ge 1\), that \(\varPsi =\varPhi _T\) on \({\text {Loop}}_{n-1}({\mathbb {S}}_T)\), and let \(l\in {\text {Loop}}_n({\mathbb {S}}_T)\). We follow the argument at the end of the proof of Proposition 2.6, except that, in place of the estimate (13), we use the cylinder-based loop deformation \(l_t\) defined at (42) and the continuity of \(\varPsi \) to see that

$$\begin{aligned} \psi _{\mathfrak {l}}(a(t))=\varPsi (l_t)\rightarrow \varPsi (l_1)=\varPsi (s^{n_*})=\phi _T(n_*,a_0,a_*). \end{aligned}$$

Then we can conclude as before that \(\varPsi =\varPhi _T\) on \({\text {Loop}}_n({\mathbb {S}}_T)\). Hence, by induction, \(\varPsi =\varPhi _T\) on all regular loops. But these are dense in \({\text {Loop}}({\mathbb {S}}_T)\) and \(\varPsi \) and \(\varPhi _T\) are continuous, so \(\varPsi =\varPhi _T\) on \({\text {Loop}}({\mathbb {S}}_T)\). \(\quad \square \)

6 Further Properties of the Master Field

6.1 Relation with the Hermitian Brownian loop

Let \(W=(W_t:t\ge 0)\) be a Brownian motion in the set of \(N\times N\) Hermitian matrices H(N) equipped with the inner product

$$\begin{aligned} \langle h_1,h_2\rangle =N\mathrm {Tr}(h_1h_2^*). \end{aligned}$$

Let \(w=(w_t:t\ge 0)\) be a free Brownian motion, defined on some non-commutative probability space \(({\mathcal {A}},\tau )\). The inner product is scaled with N so that W converges in non-commutative distribution (in probability) to w, that is to say, for all \(n\in {\mathbb {N}}\) and all \(t_1,\dots ,t_n\ge 0\),

$$\begin{aligned} \mathrm {tr}(W_{t_1}\dots W_{t_n})\rightarrow \tau (w_{t_1}\dots w_{t_n}) \end{aligned}$$

in probability as \(N\rightarrow \infty \). Fix \(T\in (0,\infty )\) and define for \(t\in [0,T]\)

$$\begin{aligned} B_t=W_t-\tfrac{t}{T}W_T,\quad b^T_t=w_t-\tfrac{t}{T}w_T. \end{aligned}$$

Then \(B=(B_t:t\in [0,T])\) is a Hermitian Brownian loop in H(N) and B converges in non-commutative distribution to \(b^T=(b^T_t:t\in [0,T])\). The non-commutative process \(b^T\) is called the free Hermitian Brownian loop. We will write simply b for \(b^1\).
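A quick numerical sketch (illustrative only; the sampling routine and the parameter values are ours): with the normalization above, the marginal \(b^T_t\) is semi-circular with variance \(t(T-t)/T\), and the empirical spectral distribution of \(B_t\) approaches this law for large N.

```python
import numpy as np

rng = np.random.default_rng(0)

def hermitian_gaussian(n, var):
    """n x n Hermitian matrix whose empirical spectral distribution is close,
    for large n, to the semicircle law of variance `var`."""
    a = rng.normal(scale=np.sqrt(var / (2 * n)), size=(n, n))
    b = rng.normal(scale=np.sqrt(var / (2 * n)), size=(n, n))
    g = a + 1j * b
    return (g + g.conj().T) / np.sqrt(2)

def brownian_loop_marginal(n, t, T):
    """Sample B_t = W_t - (t/T) W_T, built from the independent increments
    W_t and W_T - W_t of the Hermitian Brownian motion."""
    w_t = hermitian_gaussian(n, t)
    increment = hermitian_gaussian(n, T - t)
    return w_t - (t / T) * (w_t + increment)

N, T, t = 400, 2.0, 0.7
eigenvalues = np.linalg.eigvalsh(brownian_loop_marginal(N, t, T))
print(np.mean(eigenvalues**2), t * (T - t) / T)   # both close to t(T-t)/T
```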

Let \(x=(x_t:t\in [0,T])\) be a free unitary Brownian loop in \(({\mathcal {A}},\tau )\), as defined in Sect. 2.7.

Proposition 6.1

Suppose that \(T\in (0,\pi ^2]\). Then, for all \(t\in [0,T]\) and all \(n\in {\mathbb {Z}}\),

$$\begin{aligned} \tau (x_t^n)=\int _{\mathbb {R}}e^{inx}s_{\sqrt{t(T-t)/T}}(x)dx=\tau (e^{inb^T_t}) \end{aligned}$$

where \(s_t\) is the semi-circle density (15) of variance t. On the other hand, for almost all T and almost all \(s,t\in (0,T)\) with \(s<t\),

$$\begin{aligned} \tau (x^*_sx_t)\not =\tau (e^{-ib^T_s}e^{ib^T_t}) \end{aligned}$$

so \((e^{ib^T_t}:t\in [0,T])\) is not a free unitary Brownian loop.

Proof

The first assertion is the content of Proposition 3.6. We turn to the second assertion. Let \((X_t:t\in [0,T])\) be a Brownian loop in U(N) based at 1. Then, since Brownian motion in U(N) is a Lévy process whose heat kernel is invariant under conjugation and inversion, \(X_s^{-1}X_t\) has the same law as \(X_{t-s}\). On letting \(N\rightarrow \infty \), we deduce that

$$\begin{aligned} \tau (x_s^*x_t)=\tau (x_{t-s})=\tau (e^{ib^T_{t-s}})=\tau (e^{i(b^T_t-b^T_s)}) \end{aligned}$$

where we used free independence and stationarity of the increments of free Brownian motion for the last equality. Hence, by the scaling properties of free Brownian motion,

$$\begin{aligned} \tau (e^{-ib^T_s}e^{ib^T_t})-\tau (x^*_sx_t)=\tau (e^{-ib^T_s}e^{ib^T_t}-e^{i(b^T_t-b^T_s)})=F_{s/T,t/T}(\sqrt{T}) \end{aligned}$$

where

$$\begin{aligned} F_{s,t}(\sigma )=\tau (e^{-i\sigma b_s}e^{i\sigma b_t}-e^{i\sigma (b_t-b_s)}). \end{aligned}$$

By Fubini’s theorem, it will suffice to show, for all \(s,t\in (0,1)\) with \(s<t\), that \(F_{s,t}(\sigma )\not =0\) for almost all \(\sigma \in (0,\pi ]\). We expand the exponential function up to fourth order and use scale invariance of free Brownian motion to obtain

$$\begin{aligned} F_{s,t}(0)=F_{s,t}'(0)=F_{s,t}''(0)=F_{s,t}'''(0)=0,\quad F_{s,t}''''(0)=2\tau (b_s^2b_t^2-b_sb_tb_sb_t). \end{aligned}$$

The variables \((b_t:t\in [0,1])\) form a semi-circular family, so all free cumulants of order greater than two vanish (see for example [46, equation 11.4]). Hence, using the decomposition of moments into free cumulants (see [46, equation 11.8]),

$$\begin{aligned} \tau (b_s^2b_t^2-b_sb_tb_sb_t)=\tau (b_s^2)\tau (b_t^2)-\tau (b_sb_t)^2=s(t-s)(1-t)>0. \end{aligned}$$
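For completeness, the last equality follows from the covariance structure of \(b=b^1\): for \(s\le t\),

$$\begin{aligned} \tau (b_sb_t)=\tau (w_sw_t)-t\,\tau (w_sw_1)-s\,\tau (w_tw_1)+st\,\tau (w_1^2)=s-st=s(1-t), \end{aligned}$$

so \(\tau (b_s^2)=s(1-s)\), \(\tau (b_t^2)=t(1-t)\) and \(\tau (b_s^2)\tau (b_t^2)-\tau (b_sb_t)^2=s(1-t)\left( t(1-s)-s(1-t)\right) =s(t-s)(1-t)\).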

Since \(F_{s,t}\) is analytic in \(\sigma \) on \((0,\pi ]\), this implies that it has at most finitely many zeros.

\(\square \)

6.2 Duality at the midpoint of the loop

Recall from (6) and (8) the form of \(\rho _T\). It will be convenient to set \(\alpha =0\) and \(\beta =2/\sqrt{T}\) in the subcritical case \(T\in (0,\pi ^2]\). Denote by \({\mathbb {H}}\) the open upper half-plane. The following relation appeared first in the physics literature [28, equation 1.2], without a mathematical proof.

Proposition 6.2

Let \((x_t:t\in [0,T])\) be a free unitary Brownian loop. Then, for all \(T>0\), the spectral measure of \(x_{T/2}\) has a density \(\rho _T^*\) with respect to Lebesgue measure on \({\mathbb {U}}\) (of mass \(2\pi \)), which is invariant under complex conjugation and is such that

$$\begin{aligned} \pi \rho ^*_{T}:{\mathbb {U}}\cap {\mathbb {H}}\rightarrow (\alpha ,\beta ) \end{aligned}$$

is the inverse mapping of

$$\begin{aligned} e^{i\pi \rho _T}:(\alpha ,\beta )\rightarrow {\mathbb {U}}\cap {\mathbb {H}}. \end{aligned}$$

Proof

We write the proof for the supercritical case \(T>\pi ^2\), leaving the minor adjustments needed when \(T\le \pi ^2\) to the reader. The function \(\rho _T:(\alpha ,\beta )\rightarrow (0,1)\) is continuous and strictly decreasing, with \(\rho _T(\alpha )=1\) and \(\rho _T(\beta )=0\). Indeed, according to formula (8) and an elementary computation (see for example [44, equation 150]), for \(x\in (\alpha ,\beta )\),

$$\begin{aligned} \frac{\pi \alpha }{2}\sqrt{(x^2-\alpha ^2)(\beta ^2-x^2)}\rho _T'(x)=\int _0^1\frac{\alpha ^2s^2-x^2}{\beta ^2\sqrt{(1-s^2)(1-k^2s^2)}}ds<0. \end{aligned}$$

Write \(\psi \) for the inverse of the bijection \(\pi \rho _T:(\alpha ,\beta )\rightarrow (0,\pi )\). For all \(n\in {\mathbb {Z}}{\setminus }\{0\}\), by Lemma 3.7,

$$\begin{aligned} \tau (x_{T/2}^n)=\frac{2}{n\pi }\int _\alpha ^\beta \sin \{n\pi \rho _T(x)\}dx=-\frac{2}{n\pi }\int _0^\pi \sin (n\theta )\psi '(\theta )d\theta . \end{aligned}$$

We integrate by parts to obtain

$$\begin{aligned} \tau (x_{T/2}^n)=\frac{2}{\pi }\int _0^\pi \cos (n\theta )\psi (\theta )d\theta . \end{aligned}$$

Hence the spectral measure of \(x_{T/2}\) has a density \(\rho ^*_T\) with respect to Lebesgue measure on \({\mathbb {U}}\) given by

$$\begin{aligned} \rho ^*_T(e^{i\theta })=\psi (|\theta |)/\pi ,\quad |\theta |\le \pi . \end{aligned}$$

\(\square \)

6.3 Convergence to the planar master field

We now investigate the behaviour of the master field \(\varPhi _T\) as \(T\rightarrow \infty \). For \(T>0\), \(n\in {\mathbb {N}}\) and \(t\in [0,T]\), set

$$\begin{aligned} m_T(n,t)=\varPhi _T(l^n) \end{aligned}$$

where l is a simple loop which divides \({\mathbb {S}}_T\) into components of areas t and \(T-t\). Recall that \(m_T(n,t)\) does not depend on the choice of l.

Proposition 6.3

We have

$$\begin{aligned} m_T(n,t)\rightarrow \frac{e^{-nt/2}}{2\pi in}\int _\gamma \left( 1+\frac{1}{z}\right) ^ne^{-ntz}dz =e^{-nt/2}\sum _{k=0}^{n-1}\frac{(-t)^k}{k!}\left( {\begin{array}{c}n\\ k+1\end{array}}\right) n^{k-1} \end{aligned}$$

uniformly in \(t\in [0,T]\) as \(T\rightarrow \infty \), where \(\gamma \) is any positively oriented loop in \({\mathbb {C}}\) winding once around 0.

Proof

Since the complete elliptic integral of the second kind E(k) is bounded and that of the first kind K(k) is bounded on compacts in [0, 1), the relation

$$\begin{aligned} T=8EK-4(1-k^2)K^2 \end{aligned}$$

forces \(k\rightarrow 1\) as \(T\rightarrow \infty \). Since \(\alpha =k\beta \le 1/2\) and \(\beta \ge 1/2\) for all T, this implies \(\alpha ,\beta \rightarrow 1/2\) as \(T\rightarrow \infty \). Hence \(\rho _T(x)\rightarrow 1\) for \(|x|<1/2\) and \(\rho _T(x)\rightarrow 0\) for \(|x|>1/2\), so

$$\begin{aligned} G_T(z)=\int _{\mathbb {R}}\frac{\rho _T(x)}{z-x}dx\rightarrow \int _{-1/2}^{1/2}\frac{dx}{z-x}={\text {Log}}\left( \frac{z+1/2}{z-1/2}\right) \end{aligned}$$

uniformly on compacts in \(\{z\in {\mathbb {C}}:|z|>1/2\}\). By Proposition 3.7, for \(R>1/2\) and T sufficiently large, uniformly in \(t\in [0,T]\),

$$\begin{aligned} m_T(n,t)&=\frac{1}{2\pi in}\int _{\gamma _R}\exp \{-n(tz-G_T(z))\}dz\\&\rightarrow \frac{1}{2\pi in}\int _{\gamma _R}e^{-ntz}\left( \frac{z+1/2}{z-1/2}\right) ^ndz =\frac{e^{-nt/2}}{2\pi in}\int _\gamma \left( 1+\frac{1}{z}\right) ^ne^{-ntz}dz \end{aligned}$$

where \(\gamma _R\) is the circle \(\{z\in {\mathbb {C}}:|z|=R\}\), positively oriented. \(\quad \square \)
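As a numerical sanity check (an illustrative sketch only; the function names and parameter values are ours), the contour integral and the combinatorial sum in the statement of Proposition 6.3 can be compared directly by discretizing \(\gamma \) as a circle around the origin.

```python
import numpy as np
from math import comb, factorial

def moment_contour(n, t, radius=1.0, m=4096):
    """e^{-nt/2}/(2*pi*i*n) * integral over |z|=radius of (1+1/z)^n e^{-ntz} dz,
    evaluated by the trapezoidal rule on the circle."""
    theta = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
    z = radius * np.exp(1j * theta)
    dz = 1j * z * (2 * np.pi / m)          # derivative of the parametrization
    integrand = (1 + 1 / z) ** n * np.exp(-n * t * z)
    return np.exp(-n * t / 2) / (2j * np.pi * n) * np.sum(integrand * dz)

def moment_sum(n, t):
    """e^{-nt/2} * sum_{k=0}^{n-1} (-t)^k/k! * binom(n, k+1) * n^(k-1)."""
    s = sum((-t) ** k / factorial(k) * comb(n, k + 1) * n ** (k - 1)
            for k in range(n))
    return np.exp(-n * t / 2) * s

for n, t in [(1, 0.3), (2, 1.0), (5, 2.5)]:
    print(n, t, moment_contour(n, t).real, moment_sum(n, t))
```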

We remark that it is known from Biane [4] that the free unitary Brownian motion has spectral measure \(\nu _t\) on the circle \({\mathbb {U}}\), with moments given, for all \(n\in {\mathbb {Z}}{\setminus }\{0\}\), by

$$\begin{aligned} \int _{{\mathbb {U}}}\omega ^n\nu _t(d\omega )=e^{-|n|t/2}\sum _{k=0}^{|n|-1}\frac{(-t)^k}{k!}\left( {\begin{array}{c}|n|\\ k+1\end{array}}\right) |n|^{k-1} \end{aligned}$$

whereas the spectral measure \(\nu _{t}^N\) of a Brownian motion on U(N) at time t satisfies

$$\begin{aligned} \nu _t^N\longrightarrow \nu _t \end{aligned}$$

weakly in probability on \({\mathbb {U}}\) as \(N\rightarrow \infty \). See also Lévy [39] for another proof using Schur–Weyl duality. Let us write \(\nu _{t,T}^N\) and \(\nu _{t,T}\) for the spectral measures of a marginal of the unitary Brownian loop on U(N) and of the free unitary Brownian loop (defined in Sect. 2.7) respectively, where both processes have lifetime T and the marginal is taken at time t. We can now deduce that the following diagram commutes

$$\begin{aligned} \begin{array}{ccc} \nu _{t,T}^N&{}\longrightarrow &{}\nu _{t,T}\\ \big \downarrow &{}&{}\big \downarrow \\ \nu _t^N&{}\longrightarrow &{}\nu _t \end{array} \end{aligned}$$

where the horizontal arrows denote weak limits on \({\mathbb {U}}\) as \(N\rightarrow \infty \), which follow from Theorem 2.9 and Corollary 2.8, along with [4], whereas the left vertical arrow denotes a limit in law on \({\mathcal {M}}_1({\mathbb {U}})\) and the right vertical arrow denotes the weak limit induced by Proposition 6.3. The next proposition enables us to show that such a commutative diagram holds true for the non-commutative distributions. We will not give details here on this latter point.

Denote by \({\text {Loop}}({\mathbb {R}}^2)\) the set of loops of finite length in \({\mathbb {R}}^2\) and let

$$\begin{aligned} \varPhi :{\text {Loop}}({\mathbb {R}}^2)\rightarrow [-1,1] \end{aligned}$$

be the planar master field as defined in [41].

Proposition 6.4

For each \(T>0\), fix a point \(y_T\in {\mathbb {S}}_T\) and denote by \(p_T\) the inverse map of the stereographic projection \({\mathbb {S}}_T{\setminus }\{y_T\}\rightarrow {\mathbb {C}}\). Then, for all \(l\in {\text {Loop}}({\mathbb {R}}^2)\),

$$\begin{aligned} \varPhi _T(p_T(l))\rightarrow \varPhi (l)\quad \text {as }T\rightarrow \infty . \end{aligned}$$

Proof

Let l be a simple loop in \({\text {Loop}}({\mathbb {R}}^2)\) and denote by a the finite area enclosed by l. Then \(p_T(l)\) is a simple loop in \({\text {Loop}}({\mathbb {S}}_T)\) which divides \({\mathbb {S}}_T\) into two components and does not pass through \(y_T\). Denote by \(a_T\) the area of the component which does not contain \(y_T\). Then \(a_T\rightarrow a\) as \(T\rightarrow \infty \). By Proposition 6.3, this implies

$$\begin{aligned} \varPhi _T(p_T(l^n))\rightarrow e^{-na/2}\sum _{k=0}^{n-1}\frac{(-a)^k}{k!}\left( {\begin{array}{c}n\\ k+1\end{array}}\right) n^{k-1}=\varPhi (l^n) \end{aligned}$$

as \(T\rightarrow \infty \), where we used [41, equation (2)] for the last equality.

Now \(\varPhi \) also satisfies the Makeenko–Migdal equations  [41]. By a variation of the argument used to prove Proposition 2.6, we can extend convergence from powers of simple loops to all regular loops. We sketch the small change which is needed. There is now a face, \(k_\infty \) say, of infinite area. So we work in the orthant

$$\begin{aligned} Y_{\mathfrak {l}}=\{(a_k:k\in {\mathcal {F}}_{\mathfrak {l}}):a_{k_\infty }=\infty \text { and }a_k\in (0,\infty )\text { for all }k\not =k_\infty \}. \end{aligned}$$

Set

$$\begin{aligned} {\overline{a}}=\sum _{k\not =k_\infty }a_k. \end{aligned}$$

Write \(k_0,k_*\), as before, for the faces of minimal and maximal winding number, now choosing the additive constant so that \(n_{\mathfrak {l}}(k_\infty )=0\). Given \(a\in Y_{\mathfrak {l}}\), either \(\langle a,n_{\mathfrak {l}}\rangle \ge 0\), or \(\langle a,n_{\mathfrak {l}}\rangle <0\). (We use here the convention that \(\infty \times 0=0\).) In the first case, \(k_*\not = k_\infty \) and there exists \(a'\in \overline{Y_{\mathfrak {l}}}\) with \(a'_k=0\) for \(k\not =k_*,k_\infty \) such that

$$\begin{aligned} \langle a',n_{\mathfrak {l}}\rangle =a_{k_*}'n_{\mathfrak {l}}(k_*)=\langle a,n_{\mathfrak {l}}\rangle . \end{aligned}$$

Set

$$\begin{aligned} v_k= {\left\{ \begin{array}{ll} a'_k-a_k,&{}\text {if }k\not =k_\infty ,\\ {\overline{a}}-\overline{a'},&{}\text {if }k=k_\infty . \end{array}\right. } \end{aligned}$$

and set \(a(t)=a+tv\). Then \(a'=a(1)\) and \(a(t)\in Y_{\mathfrak {l}}\) for all \(t\in [0,1)\), and \(v\in {\mathfrak {m}}_{\mathfrak {l}}\) by Proposition 4.5. An analogous argument holds in the second case. We can then proceed as in Sect. 4.5. The arguments of Sect. 5 also carry over to extend the limit

$$\begin{aligned} \varPhi _T(p_T(l))\rightarrow \varPhi (l) \end{aligned}$$

to all \(l\in {\text {Loop}}({\mathbb {R}}^2)\). We omit the details. \(\quad \square \)

6.4 Uniqueness of the master field

In Theorem 2.4, we showed that the master field is characterized by certain properties. In fact, there is some redundancy in this characterization, as the following result shows: property (11) may be replaced by (52), which is the case \(n=1\) of (11).

Proposition 6.5

Let \(\varPhi :{\text {Loop}}({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) be a continuous function, which is invariant under reduction and under area-preserving, orientation-preserving homeomorphisms, satisfies the Makeenko–Migdal equations on regular loops, and is given on simple loops l by

$$\begin{aligned} \varPhi (l)=\frac{2}{\pi }\int _0^\infty \cosh \left\{ (a-b)x/2\right\} \sin \{\pi \rho _T(x)\}dx \end{aligned}$$
(52)

where a and b are the areas of the connected components of \({\mathbb {S}}_T{\setminus }\{l^*\}\). Then \(\varPhi \) is the master field \(\varPhi _T\).

Furthermore, we shall see in the next section that, given the value (52) of \(\varPhi _T(l)\) for all simple loops l, it is possible to obtain new explicit formulas for \(\varPhi _T(l^n)\) using the Makeenko–Migdal equations (see (55)), matching with (11). The proof of Proposition 6.5 will be based on an argument for a special class of loops which we now introduce. Informally, for \(n\ge 1\), fix an initial point \(x_1\) and draw an inward anticlockwise spiral which winds n times around another point o, crossing the line \(ox_1\) at points \(x_2,\dots ,x_n\), then, on hitting \(ox_1\) for the nth time, returning to \(x_1\) along \(ox_1\). Thus we obtain a combinatorial planar loop \({\mathfrak {l}}_n\) whose combinatorial graph is given as follows:

$$\begin{aligned} {\mathcal {V}}=\{1,\dots ,n\},\quad {\mathcal {E}}=\{1,\dots ,n\}\cup \{1',\dots ,(n-1)'\},\quad {\mathcal {F}}=\{0,1,\dots ,n\} \end{aligned}$$

where, for \(j=1,\dots ,n-1\),

$$\begin{aligned} s(j)=j,\quad t(j)=j+1,\quad s(j')=j+1,\quad t(j')=j \end{aligned}$$

and

$$\begin{aligned} l(j)=l(j')=j,\quad r(j)=r(j')=j-1 \end{aligned}$$

while

$$\begin{aligned} s(n)=t(n)=n,\quad l(n)=n,\quad r(n)=n-1. \end{aligned}$$

See Fig. 6. Here, we have used a non-standard labelling for the edges and faces which is adapted to the structure of the graph. Note that the self-intersections of \({\mathfrak {l}}_n\) are labelled by \(\{2,\dots ,n\}\). If we fix the additive constant for the winding number so that \(n_{{\mathfrak {l}}_n}(0)=0\), then \(n_{{\mathfrak {l}}_n}(n)=n\). For \(n\ge 1\) and for any combinatorial planar loop \({\mathfrak {l}}\) with \(n-1\) self-intersections, we have

$$\begin{aligned} n_*=\max \{n_{\mathfrak {l}}(k)-n_{\mathfrak {l}}(k'):k,k'\in {\mathcal {F}}\}\le n. \end{aligned}$$

We call \({\mathfrak {l}}_n\), and any associated regular embedded loop l, and any rerooting of l, a maximally winding loop.

Fig. 6: A drawing of the maximally winding loop \({\mathfrak {l}}_4\)

Proof of Proposition 6.5

Let \(n\ge 1\) and suppose inductively that \(\varPhi (l^m)=\varPhi _T(l^m)\) for all \(m\le n\) and all simple loops l. A comparison of equations (11) and (52) shows that this is true for \(n=1\). Let l be a simple loop which divides \({\mathbb {S}}_T\) into components of areas \(a_0,a_*\in (0,T)\). We can find \((\alpha _1,\alpha _2,\dots ,\alpha _{n+2})\) such that

$$\begin{aligned} \alpha _1=0,\quad \alpha _2=a_0,\quad \alpha _{n+1}=a_*,\quad \alpha _{n+2}=0 \end{aligned}$$

and, for \(m=2,\dots ,n+1\),

$$\begin{aligned} \alpha _{m-1}-2\alpha _m+\alpha _{m+1}<0. \end{aligned}$$

Consider the (constant) vector field v on \(\varDelta _{{\mathfrak {l}}_{n+1}}(T)\) given by

$$\begin{aligned} v=\sum _{i=2}^{n+1}\alpha _i\varXi _i. \end{aligned}$$

Then \(v_0=-a_0\) and \(v_{n+1}=-a_*\) and \(v_k>0\) for \(k=1,\dots ,n\). Set

$$\begin{aligned} a(t)=(a_0,0,\dots ,0,a_*)+tv \end{aligned}$$

then \(a(t)\in \varDelta _{{\mathfrak {l}}_{n+1}}(T)\) for all \(t\in (0,1)\) and \(a_0(1)=a_{n+1}(1)=0\). There exists a continuous family of loops \((l(t):t\in [0,1])\), with a common basepoint, such that \(l(0)=l^{n+1}\), \(l(t)\in {\mathcal {G}}_{{\mathfrak {l}}_{n+1}}(a(t))\) for all \(t\in (0,1)\), and l(1) is a maximally winding loop with \(n-2\) self-intersections. Then, by the arguments used in the proof of Proposition 2.6,

$$\begin{aligned} \varPhi (l(1))=\varPhi (l^{n+1})+\sum _{i=2}^{n+1}\alpha _i\int _0^1\varPhi (l_i(s))\varPhi ({{\hat{l}}}_i(s))ds \end{aligned}$$

where \(l_i(s)\) and \({{\hat{l}}}_i(s)\) are maximally winding loops having \(i-2\) and \(n+1-i\) self-intersections. But the same equation holds for \(\varPhi _T\) and the inductive hypothesis, combined with the argument of the proof of Proposition 2.6, implies that

$$\begin{aligned} \varPhi (l(1))=\varPhi _T(l(1)),\quad \varPhi (l_i(s))=\varPhi _T(l_i(s)),\quad \varPhi ({{\hat{l}}}_i(s))=\varPhi _T({{\hat{l}}}_i(s)). \end{aligned}$$

Hence \(\varPhi (l^{n+1})=\varPhi _T(l^{n+1})\) and the induction proceeds. Finally, by Proposition 2.6, it follows that \(\varPhi (l)=\varPhi _T(l)\) for all \(l\in {\text {Loop}}({\mathbb {S}}_T)\). \(\quad \square \)

On the other hand, condition (52) is not redundant in Proposition 6.5, as we now show. Each loop \(l\in {\text {Loop}}({\mathbb {S}}_T)\) has a winding number function

$$\begin{aligned} n_l:{\mathbb {S}}_T{\setminus } l^*\rightarrow {\mathbb {Z}}\end{aligned}$$

which is unique up to an additive constant. By the Banchoff–Pohl inequality [2], we know that \(n_l\in L^2({\mathbb {S}}_T)\) so \(n_l\) has a well-defined average value \(\langle n_l\rangle \) with respect to the uniform distribution on \({\mathbb {S}}_T\), up to the same additive constant. Hence, we can define a unique function \(\varPsi :{\text {Loop}}({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) by

$$\begin{aligned} \varPsi (l)=e^{2\pi i\langle n_l\rangle }. \end{aligned}$$

For loops \(l_1,l_2\) based at the same point, we have \(n_{l_1l_2}=n_{l_1}+n_{l_2}\), so

$$\begin{aligned} \varPsi (l_1l_2)=\varPsi (l_1)\varPsi (l_2). \end{aligned}$$

Moreover, \(\varPsi \) is invariant under any Makeenko–Migdal flow: the four faces adjacent to a self-intersection point have winding numbers with vanishing alternating sum, while the flow changes their areas at rates \(+1,-1,+1,-1\) in cyclic order, so \(\langle n_l\rangle \) is unchanged. Consider, for \(n\in {\mathbb {Z}}\), the twisted master field \(\varPhi _T^{(n)}:{\text {Loop}}({\mathbb {S}}_T)\rightarrow {\mathbb {C}}\) given by

$$\begin{aligned} \varPhi ^{(n)}_T(l)=\varPsi (l)^n\varPhi _T(l). \end{aligned}$$

Then \(\varPhi _T^{(n)}\) is continuous, invariant under reduction and under area-preserving, orientation-preserving homeomorphisms, and satisfies the Makeenko–Migdal equations on regular loops. However, for a simple loop l which winds positively around a domain of area a, we have \(n_l=1\) on that domain and \(n_l=0\) elsewhere, hence \(\langle n_l\rangle =a/T\) and

$$\begin{aligned} \varPsi (l)=e^{2\pi ia/T} \end{aligned}$$

so, for \(n\not =0\), \(\varPhi _T^{(n)}\) is not the master field. Hence, by Proposition 6.5, or by inspection, \(\varPhi _T^{(n)}\) does not satisfy (52).
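The invariance of \(\varPsi \) under the Makeenko–Migdal flow noted above can be illustrated numerically. The sketch below uses the loop \({\mathfrak {l}}_2\), whose faces 0, 1, 2 carry winding numbers 0, 1, 2 and appear in cyclic order (0, 1, 2, 1) around its unique self-intersection; this face incidence, and the alternating rates \((+1,-1,+1,-1)\) of the flow, are read off from the picture and are assumptions of the sketch.

```python
import numpy as np

# Illustration (not part of the argument): <n_l>, hence Psi(l) = exp(2*pi*i*<n_l>),
# is unchanged by a Makeenko-Migdal deformation.  Example: the loop l_2, faces
# 0, 1, 2 with winding numbers 0, 1, 2; around its unique crossing the faces
# appear in cyclic order (0, 1, 2, 1), so the deformation changes the areas by
# (a_0, a_1, a_2) -> (a_0 + t, a_1 - 2t, a_2 + t).

T = 5.0
winding = np.array([0.0, 1.0, 2.0])           # n_l on faces 0, 1, 2
areas = np.array([2.0, 2.0, 1.0])             # face areas, summing to T

def psi(a):
    return np.exp(2j * np.pi * np.dot(winding, a) / T)

for t in [0.0, 0.3, 0.7]:
    deformed = areas + t * np.array([1.0, -2.0, 1.0])
    print(t, psi(deformed))                   # the same value for every t
```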

For \(n\not =0\), the twisted field \(\varPhi _T^{(n)}\) from the preceding paragraph also fails to be invariant under orientation-reversing homeomorphisms. We do not know whether this stronger invariance condition would allow one to dispense with (52) in Proposition 6.5.

6.5 Combinatorial formulas for the master field

Rusakov [47] proposed, without proof, that there should be a closed formula for the value of the master field for any regular loop on the sphere. We now prove a formula for a restricted class of loops introduced in [34], which agrees with (42) in [47]. We have not been able to prove or disprove the latter formula in the general case and leave this question open. Let us say that a combinatorial planar loop \({\mathfrak {l}}\) is splittable if, for every self-intersection point i of \({\mathfrak {l}}\), the two loops \({\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i\), obtained by following the outgoing strands of \({\mathfrak {l}}\) starting from i, intersect only at i (Fig. 7).

Fig. 7

A splittable combinatorial planar loop with its family of simple loops \({\mathcal {S}}_{\mathfrak {l}}\) drawn in dashed lines, next to a combinatorial representation of the tree structure of \({\mathcal {S}}_{\mathfrak {l}}\)

Let \({\mathfrak {l}}\) be a splittable combinatorial planar loop with n points of intersection. On splitting \({\mathfrak {l}}\) at all points of intersection, we obtain a family of simple combinatorial loops \({\mathcal {S}}_{\mathfrak {l}}=\{{\mathfrak {s}}_1,\dots ,{\mathfrak {s}}_{n+1}\}\) in \({\mathfrak {l}}\), which has the structure of a tree, in which \({\mathfrak {s}}_j\) and \({\mathfrak {s}}_{j'}\) are adjacent if they share a point of intersection of \({\mathfrak {l}}\). We choose the sequence \(({\mathfrak {s}}_1,\dots ,{\mathfrak {s}}_{n+1})\) to be an adapted labelling of \({\mathcal {S}}_{\mathfrak {l}}\), meaning that \({\mathfrak {s}}_{k+1}\) is adjacent to at least one of \({\mathfrak {s}}_1,\dots ,{\mathfrak {s}}_k\) for all \(k\in \{1,\dots ,n\}\). Given \(T\in (0,\infty )\), a distinguished face \(k\in {\mathcal {F}}_{\mathfrak {l}}\) and an adapted labelling \(({\mathfrak {s}}_1,\dots ,{\mathfrak {s}}_{n+1})\) of \({\mathcal {S}}_{\mathfrak {l}}\), let us say that a sequence \((\gamma _1,\dots ,\gamma _{n+1})\) of disjoint simple loops in \({\mathbb {C}}\) around \([-\beta ,\beta ]\) is admissible if

(a) \(\gamma _{j+1}\) lies in the infinite component of \({\mathbb {C}}{\setminus }\gamma _j^*\) for all \(j\le n\),

(b) \(\gamma _j\) has the same orientation in \({\mathbb {C}}\) as \({\mathfrak {s}}_j\) has around k for all j.
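An adapted labelling of \({\mathcal {S}}_{\mathfrak {l}}\) always exists: any breadth-first (or depth-first) traversal of the tree produces one. The following sketch illustrates this on a small tree; the adjacency dictionary is hypothetical and stands in for the tree structure read off from a picture such as Fig. 7.

```python
from collections import deque

# Sketch: an adapted labelling of the tree S_l is any ordering in which every
# loop after the first is adjacent to an earlier one; a breadth-first traversal
# of the tree produces such an ordering.  The tree below is hypothetical.

def adapted_labelling(adjacency, root):
    order, seen, queue = [], {root}, deque([root])
    while queue:
        v = queue.popleft()
        order.append(v)                 # each v after the root is adjacent to an earlier loop
        for w in adjacency[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

# Hypothetical tree: s1 meets s2 and s3 at crossings; s3 meets s4.
tree = {"s1": ["s2", "s3"], "s2": ["s1"], "s3": ["s1", "s4"], "s4": ["s3"]}
print(adapted_labelling(tree, "s1"))    # ['s1', 's2', 's3', 's4']
```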

For any self-intersection point i of \({\mathfrak {l}}\), we write \({\mathfrak {l}}_{i,l}\) and \({\mathfrak {l}}_{i,r}\) for whichever of the loops \({\mathfrak {l}}_i,{{\hat{{\mathfrak {l}}}}}_i\) uses the left and the right outgoing edge at i, respectively. The loops \({\mathfrak {l}}_{i,l}\) and \({\mathfrak {l}}_{i,r}\) are also splittable, and the pair \(\{{\mathcal {S}}_{{\mathfrak {l}}_{i,l}},{\mathcal {S}}_{{\mathfrak {l}}_{i,r}}\}\) is a partition of \({\mathcal {S}}_{\mathfrak {l}}\). Write \(j(i,l)\) and \(j(i,r)\) for the loop labels in \({\mathcal {S}}_{\mathfrak {l}}\) such that \({\mathfrak {s}}_{j(i,l)}\) and \({\mathfrak {s}}_{j(i,r)}\) use the left and right outgoing edges at i respectively. Let \(n_{\mathfrak {l}}\) be the winding number function of \({\mathfrak {l}}\), where the additive constant is chosen so that \(n_{\mathfrak {l}}(k)=0\), and let \(n_{{\mathfrak {s}}_j}\) be defined similarly for each simple loop \({\mathfrak {s}}_j\), so that \(n_{\mathfrak {l}}=\sum _jn_{{\mathfrak {s}}_j}\). Set \(\varepsilon _j=-1\) or \(\varepsilon _j=1\) according as \({\mathfrak {s}}_j\) winds positively or negatively around k. Set

$$\begin{aligned} {\mathcal {O}}_{\mathfrak {l}}=\{(z_1,\dots ,z_{n+1})\in {\mathbb {C}}^{n+1}:z_j\not =z_{j'}\text { for all }j,j'\text { distinct}\} \end{aligned}$$

and, for \(a\in \varDelta _{\mathfrak {l}}(T)\) and \(z\in {\mathcal {O}}_{\mathfrak {l}}\), define

$$\begin{aligned} Q_{{\mathfrak {l}},k}(a,z)=\frac{\prod _{j=1}^{n+1}\exp \{\langle n_{{\mathfrak {s}}_j},a\rangle z_j+\varepsilon _jG_T(z_j)\}}{\prod _{i\in {\mathcal {I}}}(z_{j(i,r)}-z_{j(i,l)})}, \end{aligned}$$

where \({\mathcal {I}}\) denotes, as in Sect. 4.1, the set of points of intersection of \({\mathfrak {l}}\). Recall from Sect. 4.5 that, for all combinatorial planar loops \({\mathfrak {l}}\), there is a uniformly continuous map

$$\begin{aligned} \phi _{\mathfrak {l}}:\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {R}}\end{aligned}$$

such that \(\varPhi _T(l)=\phi _{\mathfrak {l}}(a)\) for all \(a\in \varDelta _{\mathfrak {l}}(T)\) and all \(l\in {\mathfrak {l}}(a)\).

Proposition 6.6

For all \(T\in (0,\infty )\), all splittable combinatorial planar loops \({\mathfrak {l}}\) with n self-intersections and equipped with a distinguished face k, all adapted labellings \(({\mathfrak {s}}_1,\dots ,{\mathfrak {s}}_{n+1})\) of \({\mathcal {S}}_{\mathfrak {l}}\), and all admissible sequences of closed loops \((\gamma _1,\dots ,\gamma _{n+1})\), we have, for all \(a\in \varDelta _{\mathfrak {l}}(T)\),

$$\begin{aligned} \phi _{\mathfrak {l}}(a)=\left( \frac{1}{2\pi i}\right) ^{n+1}\int _{\gamma _1}dz_1\dots \int _{\gamma _{n+1}}dz_{n+1}\frac{\prod _{j=1}^{n+1}\exp \{\langle n_{{\mathfrak {s}}_j},a\rangle z_j+\varepsilon _jG_T(z_j)\}}{\prod _{i\in {\mathcal {I}}}(z_{j(i,r)}-z_{j(i,l)})}. \end{aligned}$$
(53)

Before proving this proposition, let us give an example with maximally winding loops, as defined in Sect. 6.4. Any maximally winding loop \({\mathfrak {l}}_n\), with \(n\ge 1\), is splittable. According to Proposition 6.6, using the labelling of Sect. 6.4, as in Fig. 6, and choosing 0 as the distinguished face, for any \((a_0,\ldots , a_n)\in \varDelta _{{\mathfrak {l}}_{n}}(T),\)

$$\begin{aligned} \phi _{{\mathfrak {l}}_n}(a_0,a_1,\ldots , a_{n})= \left( \frac{1}{2\pi i}\right) ^{n}\int _{\gamma _1}dz_1\dots \int _{\gamma _{n}}dz_{n}\frac{\prod _{j=1}^{n}\exp \{ (\sum _{i=j}^na_i)z_{j} -G_T(z_j)\}}{(z_1-z_2)(z_2-z_3)\ldots (z_{n-1}-z_{n}) } \end{aligned}$$
(54)

where \((\gamma _1,\dots ,\gamma _{n})\) are nested clockwise-oriented simple loops in \({\mathbb {C}}\) around \([-\beta ,\beta ]\) such that \(\gamma _{j+1}\) lies in the infinite component of \({\mathbb {C}}\setminus \gamma _j^*\) for all \(j\le n-1\). Recall that if l is a simple loop which divides \({\mathbb {S}}_T\) into components of areas a and b, then \(\varPhi _T(l^n)= \phi _{{\mathfrak {l}}_n}(a,0,\ldots ,0,b ).\) Therefore, for such a simple loop l, the last formula yields the identity

$$\begin{aligned} \varPhi _T(l^n)= \left( \frac{1}{2\pi i}\right) ^{n}\int _{\gamma _1}dz_1\dots \int _{\gamma _{n}}dz_{n}\frac{\prod _{j=1}^{n}\exp \{b z_j -G_T(z_j)\}}{(z_1-z_2)(z_2-z_3)\ldots (z_{n-1}-z_{n}) }. \end{aligned}$$
(55)

Thanks to the following lemma and Proposition 3.7, this new formula agrees with the first formula we obtained, (11).
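Indeed, for \(n\ge 2\), the function \(F(z_1,\ldots ,z_n)=\prod _{j=1}^{n}\exp \{bz_j-G_T(z_j)\}\) is symmetric and holomorphic on \(({\mathbb {C}}{\setminus }[-\beta ,\beta ])^n\), so Lemma 6.7 below, applied to (55), collapses the n-fold integral to a single contour integral,

$$\begin{aligned} \varPhi _T(l^n)=\frac{1}{2\pi i n}\int _{\gamma _1}\exp \{n(bz-G_T(z))\}dz, \end{aligned}$$

and it is this single-contour expression that is compared with (11) by means of Proposition 3.7.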

Lemma 6.7

For \(n\ge 2\) and \(F:({\mathbb {C}}{\setminus } [-\beta ,\beta ])^n\rightarrow {\mathbb {C}}\) a holomorphic symmetric function,

$$\begin{aligned}&\left( \frac{1}{2\pi i}\right) ^{n}\int _{\gamma _1}dz_1\dots \int _{\gamma _{n}}dz_{n}\frac{F(z_1,\ldots ,z_n)}{(z_1-z_2)(z_2-z_3)\ldots (z_{n-1}-z_{n}) }\\&\quad = \frac{1}{2i\pi n}\int _{\gamma _1}F(z,\ldots ,z) dz \end{aligned}$$

where \((\gamma _1,\ldots ,\gamma _n)\) are nested contours as in (54).

Proof

For \(n=2,\) the formula follows from the residue theorem. The result can then be proved by induction on n,  using the identity

$$\begin{aligned} \sum _{m=0}^{n-1}\frac{1}{\prod _{k=1}^{n-1}(z_{\sigma ^m(k)}-z_{\sigma ^{m}(k+1)}) } =0 \end{aligned}$$

where \(\sigma \) is the full cycle \((1\,2\ldots n)\); one changes variables, applies the residue theorem and finally uses the induction hypothesis. The details are left to the reader. \(\quad \square \)
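As a quick numerical sanity check (not part of the proof), the case \(n=2\) of Lemma 6.7 can be tested with the illustrative choices \(\beta =1\), \(F(z_1,z_2)=\exp (1/z_1+1/z_2)\) and nested clockwise circles of radii 2 and 3; both sides come out close to \(-1\).

```python
import numpy as np

# Numerical sanity check of Lemma 6.7 for n = 2 (illustrative choices only):
# beta = 1, F(z1, z2) = exp(1/z1 + 1/z2), symmetric and holomorphic on
# (C \ [-1, 1])^2, and nested clockwise circles gamma_1, gamma_2 of radii 2, 3.

N = 2000
theta = 2 * np.pi * np.arange(N) / N          # periodic trapezoidal rule

def clockwise_circle(radius):
    z = radius * np.exp(-1j * theta)          # clockwise parametrisation
    dz = -1j * radius * np.exp(-1j * theta) * (2 * np.pi / N)
    return z, dz

z1, dz1 = clockwise_circle(2.0)               # gamma_1 (inner)
z2, dz2 = clockwise_circle(3.0)               # gamma_2 (outer)

def F(a, b):
    return np.exp(1.0 / a + 1.0 / b)

# Left-hand side: (1/(2*pi*i))^2 times the double contour integral of F/(z1 - z2).
Z1, Z2 = np.meshgrid(z1, z2, indexing="ij")
lhs = np.sum(F(Z1, Z2) / (Z1 - Z2) * np.outer(dz1, dz2)) / (2j * np.pi) ** 2

# Right-hand side: 1/(2*pi*i*n) times the single contour integral of F(z, z), n = 2.
rhs = np.sum(F(z1, z1) * dz1) / (2j * np.pi * 2)

print(lhs, rhs)   # both close to -1 = -(1/2) * Res_{z=0} exp(2/z), clockwise orientation
```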

To prove Proposition 6.6, we will need the following technical lemma. Set

$$\begin{aligned} \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)=\left\{ (a_k:k\in {\mathcal {F}}_{\mathfrak {l}}):a_k\in {\mathbb {C}},\,\sum _ka_k=T\right\} . \end{aligned}$$

Lemma 6.8

The map \(\phi _{\mathfrak {l}}\) has an analytic extension \(\varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\rightarrow {\mathbb {C}}\).

Proof

The following formula is the case \(t=1\) of (44):

$$\begin{aligned} \phi _T(n,a_0,a_*)=\phi _{\mathfrak {l}}(a)+\sum _{i\in {\mathcal {I}}}\alpha _i\int _0^1\phi _{{\mathfrak {l}}_i}(a(s))\phi _{{{\hat{{\mathfrak {l}}}}}_i}(a(s))ds \end{aligned}$$
(56)

where the left-hand side is defined by (12). We see from (12) that \(\phi _T(n,\cdot ,\cdot )\) has an analytic extension to \(\varDelta _{{\mathfrak {s}},{\mathbb {C}}}(T)\). Also, the real linear maps

$$\begin{aligned} a\mapsto \alpha :\varDelta _{\mathfrak {l}}(T)\rightarrow {\mathbb {R}}^{\mathcal {I}},\quad a\mapsto (a_0,a_*):\varDelta _{\mathfrak {l}}(T)\rightarrow \varDelta _{\mathfrak {s}}(T) \end{aligned}$$

extend to complex linear maps \(\varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\rightarrow {\mathbb {C}}^{\mathcal {I}}\) and \(\varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\rightarrow \varDelta _{{\mathfrak {s}},{\mathbb {C}}}(T)\). We can therefore use (56) recursively to construct the desired analytic extension of \(\phi _{\mathfrak {l}}\). \(\quad \square \)

Proof of Proposition 6.6

Since \(Q_{{\mathfrak {l}},k}(a,z)\) is continuous in \(z=(z_1,\dots ,z_{n+1})\) on \({\mathcal {O}}_{\mathfrak {l}}\), analytic in a, and uniformly bounded on compacts in \(\varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\), the right-hand side of (53) is a well-defined multiple contour integral, does not depend on the order of integration, does not depend on the choice of admissible family \((\gamma _1,\dots ,\gamma _{n+1})\), and defines an analytic function \(\psi _{\mathfrak {l}}\) on \(\varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\). Set

$$\begin{aligned} \delta _{\mathfrak {l}}(a)=\phi _{\mathfrak {l}}(a)-\psi _{\mathfrak {l}}(a). \end{aligned}$$

Then \(\delta _{\mathfrak {l}}\) is also analytic on \(\varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\), by Lemma 6.8. We will show by induction on n that \(\delta _{\mathfrak {l}}(a)=0\).

For \(n=0\), this follows from Proposition 2.5. Suppose inductively that the statement holds for \(n-1\) and let \({\mathfrak {l}}\) be a splittable combinatorial planar loop with n intersections. Fix \(i\in {\mathcal {I}}_{\mathfrak {l}}\), to be chosen later, and write \(k_l\) and \(k_r\) for the labels in \({\mathfrak {l}}_{i,l}\) and \({\mathfrak {l}}_{i,r}\) of the faces containing the face k in \({\mathfrak {l}}\). For \(a\in \varDelta _{\mathfrak {l}}(T)\), write \(a_l\) and \(a_r\) for the images of a under the natural submersions

$$\begin{aligned} \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\rightarrow \varDelta _{{\mathfrak {l}}_{i,l},{\mathbb {C}}}(T),\quad \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\rightarrow \varDelta _{{\mathfrak {l}}_{i,r},{\mathbb {C}}}(T). \end{aligned}$$

For \((z_1,\dots ,z_{n+1})\in {\mathbb {C}}^{n+1}\), set

$$\begin{aligned} z_l=(z_j:{\mathfrak {s}}_j\in {\mathcal {S}}_{{\mathfrak {l}}_{i,l}}),\quad z_r=(z_j:{\mathfrak {s}}_j\in {\mathcal {S}}_{{\mathfrak {l}}_{i,r}}). \end{aligned}$$

Then, for \(a\in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\) and \({\mathfrak {s}}\in {\mathcal {S}}\), we have

$$\begin{aligned} \varXi _i\langle n_{\mathfrak {s}},a\rangle = {\left\{ \begin{array}{ll} 1,&{}\text {if }{\mathfrak {s}}\text { uses the right outgoing edge at }i,\\ -1,&{}\text {if }{\mathfrak {s}}\text { uses the left outgoing edge at }i,\\ 0,&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

Hence

$$\begin{aligned} \varXi _iQ_{{\mathfrak {l}},k}(a,z)=Q_{{\mathfrak {l}}_{i,l},k_l}(a_l,z_l)Q_{{\mathfrak {l}}_{i,r},k_r}(a_r,z_r). \end{aligned}$$
(57)

Since \(n\ge 1\), the tree \({\mathcal {S}}_{\mathfrak {l}}\) has at least two leaves, and one of them, say \({\mathfrak {s}}_m\), is not the boundary of the distinguished face k. Since the labelling is adapted, there exists \(p\le m-1\) such that \({\mathfrak {s}}_p\) is adjacent to \({\mathfrak {s}}_m\). Denote by \(k_c\) the component of the complement of \({\mathfrak {s}}_m\) which does not contain \(k_\infty \), and let \(i\in {\mathcal {I}}_{{\mathfrak {l}}}\) be the intersection point shared by \({\mathfrak {s}}_m\) and \({\mathfrak {s}}_p\). The sequence \(({\mathfrak {s}}_1,\dots ,{\mathfrak {s}}_{m-1},{\mathfrak {s}}_{m+1},\dots ,{\mathfrak {s}}_{n+1})\) is an adapted labelling of \({\mathfrak {l}}_{i,l}\) and the family of loops \((\gamma _1,\dots ,\gamma _{m-1},\gamma _{m+1},\dots ,\gamma _{n+1})\) is admissible for this sequence and for the distinguished face \(k_l\). Also, \(({\mathfrak {s}}_m)\) is an adapted labelling of \({\mathfrak {l}}_{i,r}\), with admissible loop \(\gamma _m\). Since the right-hand side of (57) is uniformly bounded on any compact subset of \(\varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\times {\mathcal {O}}_n\), we deduce that, for all \(a\in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\),

$$\begin{aligned} \varXi _i\psi _{\mathfrak {l}}(a)=\psi _{{\mathfrak {l}}_{i,l}}(a_l)\psi _{{\mathfrak {l}}_{i,r}}(a_r). \end{aligned}$$

On the other hand, since \(\varPhi _T\) satisfies the Makeenko–Migdal equations, for all \(a\in \varDelta _{\mathfrak {l}}(T)\),

$$\begin{aligned} \varXi _i\phi _{\mathfrak {l}}(a)=\phi _{{\mathfrak {l}}_{i,l}}(a_l)\phi _{{\mathfrak {l}}_{i,r}}(a_r) \end{aligned}$$

and this extends to \(a\in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\) by analyticity. But \({\mathfrak {l}}_{i,l}\) and \({\mathfrak {l}}_{i,r}\) are splittable and have no more than \(n-1\) points of intersection. So we have shown that, for all \(a\in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\),

$$\begin{aligned} \varXi _i\delta _{\mathfrak {l}}(a)=0. \end{aligned}$$
(58)

We now check the boundary condition for this equation. Since \({\mathfrak {l}}\) is splittable, there is a splittable loop \({{\tilde{{\mathfrak {l}}}}}\), with exactly \(n-1\) intersections, an affine map

$$\begin{aligned} \iota _c:\varDelta _{\mathfrak {l}}(T)\cap \{a: a_{k_c}=0\}\rightarrow \varDelta _{{{\tilde{{\mathfrak {l}}}}}}(T) \end{aligned}$$

and a distinguished face \({{\tilde{k}}}\in {\mathcal {F}}_{{{\tilde{{\mathfrak {l}}}}}}\) such that, for any \(a\in \varDelta _{\mathfrak {l}}(T)\) with \(a_{k_c}=0\),

$$\begin{aligned} {\mathfrak {l}}(a)\cap {{\tilde{{\mathfrak {l}}}}}(\iota _c(a))\not =\emptyset \end{aligned}$$

and \(\iota _c(a)_{{{\tilde{k}}}}=0\) if and only if \(a_k=0\). Moreover, for all \(a\in \varDelta _{\mathfrak {l}}(T)\) with \(a_{k_c}=0\),

$$\begin{aligned} \phi _{\mathfrak {l}}(a)=\phi _{{{\tilde{{\mathfrak {l}}}}}}(\iota _c(a)). \end{aligned}$$
(59)

Furthermore, by analyticity of \(\phi _{\mathfrak {l}}\) and \(\phi _{{{\tilde{{\mathfrak {l}}}}}}\), this equality holds true for all \(a\in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\) with \(a_{k_c}=0\). Let \(\nu \in {\mathbb {Z}}^{{\mathcal {F}}_{\mathfrak {l}}}\) be the vector with \(\nu _{k_c}=1\) which is proportional to \(\varXi _i\), viewed as an element of \(\{1_{{\mathcal {F}}_{\mathfrak {l}}}\}^\bot \cap {\mathbb {R}}^{{\mathcal {F}}_{\mathfrak {l}}}\), where i is the only vertex adjacent to \(k_c\). Then, by (58), for all \(a\in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\),

$$\begin{aligned} \delta _{\mathfrak {l}}(a)=\delta _{\mathfrak {l}}(a-a_{k_c}\nu ). \end{aligned}$$

As \(a-a_{k_c}\nu \in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\cap \{a:a_{k_c}=0\}\), by (59), in order to conclude, it is sufficient to show that, for all \(a\in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\) with \(a_{k_c}=0\),

$$\begin{aligned} \psi _{\mathfrak {l}}(a)=\phi _{{{\tilde{{\mathfrak {l}}}}}}(\iota _c(a)). \end{aligned}$$

For such a vector a and for \(z\in {\mathcal {O}}_n\), set \(\tilde{z}=(z_j:j\not =m)\). Then

$$\begin{aligned} Q_{{\mathfrak {l}},k}(a,z)=Q_{{{\tilde{{\mathfrak {l}}}}},{{\tilde{k}}}}(a,\tilde{z})\frac{\varepsilon _{m}e^{\varepsilon _{m}G_T(z_m)}}{z_m-z_p}. \end{aligned}$$

For \(a\in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\), the only singularity of \(z_m\in {\mathbb {C}}{\setminus }[-\beta ,\beta ]\mapsto Q_{{\mathfrak {l}},k}(a,z)\) is at \(z_p\). Since the family of loops \((\gamma _1,\dots ,\gamma _{n+1})\) is admissible, by deforming \(\gamma _m\), we can assume that the bounded component of \({\mathbb {C}}{\setminus }\gamma _{m}^*\) contains all \(\gamma _j\) with \(j\not =m\). Then, for all \({{\tilde{z}}}\in {\mathcal {O}}_{n-1}\),

$$\begin{aligned} \frac{1}{2\pi i}\int _{\gamma _{m}}Q_{{\mathfrak {l}},k}(a,z)dz_{m} =Q_{{{\tilde{{\mathfrak {l}}}}},{{\tilde{k}}}}(a,{{\tilde{z}}}) \frac{1}{2\pi i}\int _C\frac{e^{\varepsilon _mG_T(z_m)}}{z_m-z_p}dz_m \end{aligned}$$

with C an anticlockwise circle with centre 0, whose interior contains all contours \((\gamma _j:j\not =m)\). Since \(G_T(z)\sim 1/z\) as \(z\rightarrow \infty \), it follows that

$$\begin{aligned} \frac{1}{2\pi i}\int _C\frac{e^{\varepsilon _mG_T(z_m)}}{z_m-z_p}dz_m =-\frac{1}{2\pi i}\int _{1/C}\frac{e^{\varepsilon _mG_T(1/y)}}{y(1-yz_p)}dy=1. \end{aligned}$$

Therefore, performing the integration in \(\psi _{\mathfrak {l}}(a)\) first with respect to \(z_m\), and then using the inductive hypothesis for \({{\tilde{{\mathfrak {l}}}}}\), we obtain, when \(a\in \varDelta _{{\mathfrak {l}},{\mathbb {C}}}(T)\) with \(a_{k_c}=0\),

$$\begin{aligned} \psi _{\mathfrak {l}}(a)&=\left( \frac{1}{2\pi i}\right) ^n\int _{z_j\in \gamma _j,\text { for }j\not =m} Q_{{{\tilde{{\mathfrak {l}}}}},{{\tilde{k}}}}(a,(z_{\mathfrak {s}})_{{\mathfrak {s}}\in {\mathcal {S}}_{{\mathfrak {l}}}\setminus \{{\mathfrak {s}}_m\}})\prod _{j\not =m}dz_j\\&=\phi _{{{\tilde{{\mathfrak {l}}}}}}(\iota _c(a)). \end{aligned}$$

\(\square \)