Correction to: Letters in Mathematical Physics (2021) 111:67 https://doi.org/10.1007/s11005-021-01396-z
1 Introduction
Denote by \({\mathcal {H}}_N(I)\) the set of Hermitian matrices of size \(N=1,2,\dots \) with eigenvalues in the (closed) interval \(I\subseteq {\mathbb {R}}\), endowed with the probability measure
Here \(C_N:=\int _{{\mathcal {H}}_N(I)}\exp \textrm{tr}\,V(X)\mathrm dX\) and \(V=V(x)\) is a smooth function of \(x\in I^\circ \) (the interior of I) so that V(X) is defined for all \(X\in {\mathcal {H}}_N(I)\) by the spectral theorem. We assume that V satisfies the following decay assumptions: there exists \(\varepsilon >0\) such that \(\exp V(x)={\mathcal {O}}\left( |x-x_0|^{-1+\varepsilon }\right) \) as \(x\in I^\circ \) approaches a finite endpoint \(x_0\) of I; if I extends to \(\pm \infty \) we assume that \(V(x)\rightarrow -\infty \) fast enough as \(x\rightarrow \pm \infty \) in order for the measure (1.1) to have finite moments of all orders. In particular, the family \(\bigl (P_n(x)\bigr )_{n\ge 0}\) of orthogonal polynomials associated with the measure \(\exp (V(x))\mathrm dx\) on I exists (and is unique):
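Namely, the orthogonality condition (1.2) in question can be sketched as follows (monic normalization, with norming constants \(h_n\) as recalled in Section 2.1 below):

```latex
% Monic orthogonal polynomials for the weight e^{V(x)} dx on I, cf. (1.2):
% P_n(x) = x^n + (lower-order terms),
\int_I P_n(x)\,P_m(x)\,\textrm{e}^{V(x)}\,\mathrm{d}x \;=\; h_n\,\delta_{nm},
\qquad h_n\neq 0,\quad n,m\ge 0.
```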
The generating functions of moments
are analytic functions of \(z_1,\dots ,z_\ell \in {\mathbb {C}}\setminus I\), symmetric in the variables \(z_1,\dots ,z_\ell \). The generating functions for cumulants (or connected moments) are
Introduce the \(2\times 2\) matrix
which is the well-known solution to the Riemann–Hilbert problem for orthogonal polynomials [4]; it is an analytic function of \(z\in {\mathbb {C}}\setminus I\).
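In the standard Fokas–Its–Kitaev normalization [4] (which we assume agrees with (1.5) up to conventions), the matrix is built from \(P_N\), \(P_{N-1}\) and their Cauchy transforms; a sketch:

```latex
Y_N(z) \;=\;
\begin{pmatrix}
 P_N(z) & \displaystyle\frac{1}{2\pi\textrm{i}}\int_I\frac{P_N(x)\,\textrm{e}^{V(x)}}{x-z}\,\mathrm{d}x \\[1ex]
 \displaystyle-\frac{2\pi\textrm{i}}{h_{N-1}}\,P_{N-1}(z) & \displaystyle-\frac{1}{h_{N-1}}\int_I\frac{P_{N-1}(x)\,\textrm{e}^{V(x)}}{x-z}\,\mathrm{d}x
\end{pmatrix}.
```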
Theorem 1.1
(Theorem 1.5 in [6]) Let
with \(Y_N(z)\) as in (1.5). Then the cumulant generating functions (1.4) are given by
where \('\) is the derivative with respect to z and \(\textrm{cyc}((\ell ))\) is the set of \(\ell \)-cycles in the symmetric group \({\mathfrak {S}}_\ell \).
The proof originally presented in [6] is correct only for \(\ell =1\). This is because the proof is based on formula (3.5) in [6] for the correlators that holds true only when \(\ell =1\); the correct formula is
the sum on the right-hand side running over all distinct (unordered) set partitions of \(\lbrace 1,\dots ,\ell \rbrace \), i.e. unordered collections of non-empty pairwise disjoint subsets of \(\lbrace 1,\dots ,\ell \rbrace \) whose union is \(\lbrace 1,\dots ,\ell \rbrace \). For example,
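at \(\ell =2\) the two set partitions are \(\lbrace \lbrace 1\rbrace ,\lbrace 2\rbrace \rbrace \) and \(\lbrace \lbrace 1,2\rbrace \rbrace \), so the formula specializes as follows (a sketch written for generic test functions \(f_1,f_2\); in the case at hand \(f_i(x)=(z_i-x)^{-1}\), an assumption on the precise normalization of the correlators):

```latex
\Bigl\langle \textrm{tr}\,f_1(X)\;\textrm{tr}\,f_2(X) \Bigr\rangle
 = \int_{I^2}\rho_2(x_1,x_2)\,f_1(x_1)\,f_2(x_2)\,\mathrm{d}x_1\,\mathrm{d}x_2
 + \int_I\rho_1(x)\,f_1(x)\,f_2(x)\,\mathrm{d}x.
```

The first term accounts for the two traces evaluated at distinct eigenvalues, the second for coinciding ones.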
Here, \(\rho _k(x_1,\dots ,x_k)\) is the k-point correlation function of eigenvalues, given by the well-known determinantal formula [3], \(\rho _k(x_1,\dots ,x_k)=\det \bigl (K_N(x_i,x_j)\bigr )_{i,j=1}^{k}\),
in terms of the Christoffel–Darboux kernel \(K_N(x,y)\), cf. (2.7) below. Therefore, the analysis at [6, page 24] (for \(\ell =2\)) and at [6, page 27] (for \(\ell >2\)) is incorrect. For this reason, here we provide a proof (for all \(\ell \ge 1\)) which follows a different strategy, closer in spirit to an analogous proof in [1, 5].
2 Proof of Theorem 1.1
The strategy of the proof is based on the observation that setting
we have
Here and below it is assumed that \(z_i\not \in I\).
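A sketch of this observation, under the assumption that (2.1)–(2.2) take the natural form: the deformed measure is obtained by shifting the potential, so that

```latex
% Deformed partition function (assumed normalization):
\mathscr{Z}_N(\mathbf{t},\mathbf{z})
 := \frac{1}{C_N}\int_{\mathcal{H}_N(I)}
    \exp\textrm{tr}\Bigl(V(X)+\sum_{i=1}^{\ell}t_i\,(z_i-X)^{-1}\Bigr)\,\mathrm{d}X,
% and the cumulant generating functions (1.4) are recovered as
\partial_{t_1}\cdots\partial_{t_\ell}\Big|_{\mathbf{t}=0}\log\mathscr{Z}_N(\mathbf{t},\mathbf{z}).
```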
2.1 Orthogonal polynomials on the real line and unitary-invariant ensembles
We denote by \(P_\ell \) the monic orthogonal polynomials, \(h_\ell =\int _I P^2_\ell (x)\textrm{e}^{V(x)}\mathrm dx\), see (1.2), and
their Cauchy transforms. The matrix
introduced in (1.5), is an analytic function of \(\zeta \in {\mathbb {C}}\setminus I\). It satisfies the jump condition
where \(Y_{N,\pm }(x):=\lim _{\epsilon \rightarrow 0_+}Y_N(x\pm \textrm{i}\epsilon )\), \(x\in I^\circ \) (\(I^\circ \) is the interior of the interval I). As \(\zeta \rightarrow \infty \), we have
where we denote \({\textbf{1}}=\begin{pmatrix} 1&{} 0 \\ 0 &{} 1 \end{pmatrix}\) and \(\sigma _3=\begin{pmatrix} 1&{} 0 \\ 0 &{} -1 \end{pmatrix}\). Lastly, we recall the Christoffel–Darboux kernel
The last identity is known as the Christoffel–Darboux identity and allows one to rewrite the Christoffel–Darboux kernel in terms of the matrix \(Y_N\) in (2.4) as
which is independent of the choice of boundary value of \(Y_N\) because of (2.5).
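In the weight-dressed normalization (an assumption on conventions, consistent with the independence just noted), the kernel and its rewriting in terms of \(Y_N\) can be sketched as:

```latex
K_N(x,y)
 = \textrm{e}^{\frac{V(x)+V(y)}{2}}\sum_{i=0}^{N-1}\frac{P_i(x)\,P_i(y)}{h_i}
 = \textrm{e}^{\frac{V(x)+V(y)}{2}}\,
   \frac{P_N(x)\,P_{N-1}(y)-P_{N-1}(x)\,P_N(y)}{h_{N-1}\,(x-y)},
% equivalently, via the matrix Y_N of (2.4):
K_N(x,y)
 = \frac{\textrm{e}^{\frac{V(x)+V(y)}{2}}}{2\pi\textrm{i}\,(x-y)}
   \begin{pmatrix}0&1\end{pmatrix}
   Y_{N,\pm}^{-1}(y)\,Y_{N,\pm}(x)
   \begin{pmatrix}1\\0\end{pmatrix}.
```

Indeed, the jump condition (2.5) leaves the row vector \(\begin{pmatrix}0&1\end{pmatrix}Y_{N,\pm }^{-1}(y)\) and the column vector \(Y_{N,\pm }(x)\begin{pmatrix}1&0\end{pmatrix}^{\!\top }\) unchanged when switching boundary values.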
Next, we need to recall the connection of orthogonal polynomials to the theory of unitary-invariant ensembles of random matrices. The main point which is relevant for our present purposes is that [3]
where it is convenient to explicitly express the dependence of \(P_\ell =P_\ell ^V\) and \(h_\ell =h_\ell ^V\) on the potential V. Therefore, introducing the modified potential
we have
Lemma 2.1
We have
Proof
We have \(h_i^{V_{{\textbf{t}},{\textbf{z}}}}=\int _{I}\bigl (P_i^{V_{{\textbf{t}},{\textbf{z}}}}(x)\bigr )^2\textrm{e}^{V_{{\textbf{t}},{\textbf{z}}}(x)}\mathrm dx\) hence
but the first term vanishes by orthogonality because \(P_i^{V_{{\textbf{t}},{\textbf{z}}}}(x)\) are normalized to be monic and, therefore, \(\partial _{t_j}P_i^{V_{{\textbf{t}},{\textbf{z}}}}(x)\) is a polynomial of degree strictly less than i. \(\square \)
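Since \(\partial _{t_j}V_{{\textbf{t}},{\textbf{z}}}(x)=(z_j-x)^{-1}\), the computation in the proof yields the statement of the Lemma in the form (a sketch consistent with the argument above):

```latex
\partial_{t_j}\log h_i^{V_{\mathbf{t},\mathbf{z}}}
 = \frac{1}{h_i^{V_{\mathbf{t},\mathbf{z}}}}
   \int_I\bigl(P_i^{V_{\mathbf{t},\mathbf{z}}}(x)\bigr)^2\,
   \frac{\textrm{e}^{V_{\mathbf{t},\mathbf{z}}(x)}}{z_j-x}\,\mathrm{d}x.
```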
Remark 2.2
We allow the potential V to be complex-valued and, accordingly, slightly abuse the standard terminology by still referring to (1.2) as an orthogonality condition on the real line, even though one usually considers only real-valued potentials in this context. This caveat plays a role, however, in enabling us to consider the analytic generating function \({\mathscr {Z}}_N({\textbf{t}},{\textbf{z}})\).
More importantly, existence of the monic “orthogonal” polynomial as in (1.2) with respect to a complex-valued potential is not ensured for all values of the parameters \({\textbf{t}},{\textbf{z}}\). However, the condition for existence (non-vanishing of the Hankel determinants of the moments) is open in the space of parameters \({\textbf{t}},{\textbf{z}}\), and contains the subspace \({\textbf{t}}=0\) by standard arguments pertaining to the classical real-valued case (in which the Hankel matrices are positive-definite). Therefore, for sufficiently small \({\textbf{t}}\), the existence of \(P_i^{V_{{\textbf{t}},{\textbf{z}}}}(x)\) is not an issue and we will restrict to sufficiently small \({\textbf{t}}\) without further mention in what follows, as in the end we are interested in the quantities (2.2).
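The existence condition invoked here can be sketched via Hankel determinants of the moments of the (possibly complex-valued) weight:

```latex
\mu_k := \int_I x^k\,\textrm{e}^{V_{\mathbf{t},\mathbf{z}}(x)}\,\mathrm{d}x,
\qquad
D_i := \det\bigl(\mu_{a+b}\bigr)_{a,b=0}^{i-1};
% the monic polynomial P_i exists and is unique if and only if D_i \neq 0,
% and D_i > 0 at t = 0 when V is real-valued (positive-definite Hankel matrices).
```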
2.2 Case \(\ell =1\)
It follows from (2.8) that
In the following we shall use the notation
for the jump of a function f across I, namely \(f_\pm (x):=\lim _{\epsilon \rightarrow 0_+}f(x\pm \textrm{i}\epsilon )\). The next lemma is well known, see e.g. [2], and we re-prove it here for the reader’s convenience.
Lemma 2.3
We have
Proof
Let us denote \(':=\partial _x\). It follows from the jump condition (2.5) for \(Y_N\) that
Therefore we compute
The last term vanishes and so, by the cyclic property of the trace, we have
which is easily seen to be equivalent to (2.14). \(\square \)
We are ready for the proof of the case \(\ell =1\), in which \({\textbf{t}}=t\), \({\textbf{z}}=z\), and \(V_{{\textbf{t}},{\textbf{z}}}(x)=V(x)+t/(z-x)\). By (2.11), Lemma 2.1, and (2.7), we have
where we denote explicitly the dependence of the Christoffel–Darboux kernel on the potential. Let \(\Gamma \) be an oriented contour in the complex plane which surrounds I in counterclockwise sense (i.e., I lies on the left of \(\Gamma \)) and leaves z outside (i.e., z lies to the right of \(\Gamma \)). Then, using Lemma 2.3, we get
where \(Y_N(\cdot ;{\textbf{t}},{\textbf{z}})\) is the matrix (2.4) for the potential \(V_{{\textbf{t}},{\textbf{z}}}\). The last contour integral can be evaluated by a residue computation as
Remark 2.4
In the last expression (and similarly below), the residue at infinity is a formal residue, namely the limit of the integrals over the upper and lower semicircles \(|\zeta |=R\), \(\pm \textrm{Im}\,\zeta > 0\), as \(R\rightarrow +\infty \). Even though the integrand may be discontinuous across the real axis, the limit is nevertheless given by (minus) the coefficient of the term \(\zeta ^{-1}\) in the asymptotic expansion at \(\zeta =\infty \), since the integrand has the same asymptotic expansion as \(\zeta \rightarrow \infty \) in both sectors.
It can be checked from (2.6) that the residue at \(\zeta =\infty \) vanishes. Therefore
Evaluating this identity at \(t=0\), taking into account (2.2), we obtain exactly (1.7).
2.3 Case \(\ell =2\)
Let us first formulate a result that will be needed for all \(\ell \ge 2\). Let
where, again, \(Y_N(\cdot ;{\textbf{t}},{\textbf{z}})\) is the matrix (2.4) for the potential \(V_{{\textbf{t}},{\textbf{z}}}\).
Lemma 2.5
Let \(\ell \ge 2\), \({\textbf{t}}=(t_1,\dots ,t_\ell )\), \({\textbf{z}}=(z_1,\dots ,z_\ell )\), and \(V_{{\textbf{t}},{\textbf{z}}}(x)=V(x)+\sum _{i=1}^\ell \tfrac{t_i}{z_i-x}\). For all \(1\le j\le \ell \), we have
Proof
Let us denote by \(\Omega _j(\zeta ;{\textbf{t}},{\textbf{z}})\) the left-hand side of (2.25). Using (2.5) we get the identities, for \(x\in I^\circ \),
from which we readily ascertain that \(\Delta \Omega _j(x;{\textbf{t}},{\textbf{z}})=0\) for all \(x\in I^\circ \). Hence, \(\Omega _j(\zeta ;{\textbf{t}},{\textbf{z}})\) is a meromorphic function of \(\zeta \) with a single simple pole at \(\zeta =z_j\) and which vanishes at \(\zeta =\infty \), because of (2.6), and so the statement follows. (Singularities at the endpoints of I are ruled out by our assumptions on V.) \(\square \)
Let us consider the case \(\ell =2\), in which \({\textbf{t}}=(t_1,t_2)\), \({\textbf{z}}=(z_1,z_2)\), and \(V_{{\textbf{t}},{\textbf{z}}}(x)=V(x)+\tfrac{t_1}{z_1-x}+\tfrac{t_2}{z_2-x}\). By the argument used for \(\ell =1\), cf. (2.23), we obtain
Next we have to take a derivative in \(t_2\): omitting the explicit dependence on \({\textbf{t}},{\textbf{z}}\), we have
We use (2.25) to rewrite the first term inside the trace in the right-hand side as
and the second term as
where \([A,B]:=AB-BA\) is the commutator. The term in the last row exactly cancels with (2.30), and so, rearranging terms,
Since \(\textrm{tr}\,([A,B]B)=\textrm{tr}\,([AB,B])=0\), the proof of the case \(\ell =2\) is completed by setting \(t_1=t_2=0\).
2.4 Case \(\ell \ge 3\)
As usual, let \({\textbf{t}}=(t_1,\dots ,t_\ell )\) and \({\textbf{z}}=(z_1,\dots ,z_\ell )\). Let us denote, for \(\ell \ge 2\),
where the sum extends over cyclic permutations of \(\lbrace 1,\dots ,\ell \rbrace \). We aim to prove that
where \(Y_N(x;{\textbf{t}},{\textbf{z}})\), and so \(R(x;{\textbf{t}},{\textbf{z}})\), are computed for the potential \(V_{{\textbf{t}},{\textbf{z}}}(x)=V(x)+\sum _{i=1}^\ell \frac{t_i}{z_i-x}\). Then, (1.9) follows by taking \(t_i=0\).
The proof of (2.34) is by induction on \(\ell \ge 2\) and it is similar in spirit to that in [1]. Hence, let us assume (2.34) for some \(\ell \ge 2\) and let us prove it for \(\ell +1\). Since the potential V is arbitrary, by replacing V with \(V+t_{\ell +1}/(z_{\ell +1}-x)\) we can assume (2.34) holds true for \(V_{{\textbf{t}},{\textbf{z}}}(x)=V(x)+\sum _{j=1}^{\ell +1}\frac{t_{j}}{z_{j}-x}\), and so we just have to show that \(\partial _{t_{\ell +1}}S_\ell (z_1,\dots ,z_\ell ;{\textbf{t}})\) is equal to \(S_{\ell +1}(z_1,\dots ,z_\ell ,z_{\ell +1};{\textbf{t}})\). To this end we first observe that by (2.25), we have
Therefore,
Expanding \([R(z_{\ell +1};{\textbf{t}},{\textbf{z}}),R(z_{i_j};{\textbf{t}},{\textbf{z}})]=R(z_{\ell +1};{\textbf{t}},{\textbf{z}})R(z_{i_j};{\textbf{t}},{\textbf{z}})-R(z_{i_j};{\textbf{t}},{\textbf{z}}) R(z_{\ell +1};{\textbf{t}},{\textbf{z}})\), we note that in the previous sum, each term involving the expression
appears twice, but with different denominators. Collecting such terms yields
where we set \(i_0:=i_\ell \) in the internal summation. The proof is complete.
References
Bertola, M., Dubrovin, B., Yang, D.: Correlation functions of the KdV hierarchy and applications to intersection numbers over \(\overline{{\mathcal {M}}}_{g,n}\). Phys. D 327, 30–57 (2016)
Claeys, T., Grava, T., McLaughlin, K.D.T.-R.: Asymptotics for the partition function in two-cut random matrix models. Commun. Math. Phys. 339(2), 513–587 (2015)
Deift, P.: Orthogonal Polynomials and Random Matrices: a Riemann–Hilbert Approach. American Mathematical Society, Providence (1999)
Fokas, A.S., Its, A.R., Kitaev, A.V.: The isomonodromy approach to matrix models in 2D quantum gravity. Commun. Math. Phys. 147(2), 395–430 (1992)
Gisonni, M., Grava, T., Ruzza, G.: Laguerre Ensemble: Correlators, Hurwitz Numbers and Hodge Integrals. Ann. Henri Poincaré 21(10), 3285–3339 (2020)
Gisonni, M., Grava, T., Ruzza, G.: Jacobi Ensemble, Hurwitz numbers and Wilson polynomials. Lett. Math. Phys. 111(3), Paper No. 67, 38 pp (2021)
Acknowledgements
This project has received funding from the European Union’s H2020 research and innovation programme under the Marie Skłodowska–Curie grant No. 778010 IPaDEGAN. TG acknowledges the support of INdAM/GNFM and the research project Mathematical Methods in Non Linear Physics (MMNLP), Gruppo 4-Fisica Teorica of INFN. GR acknowledges the support of the FCT grant 2022.07810.CEECIND.
Gisonni, M., Grava, T. & Ruzza, G. Correction To: Jacobi Ensemble, Hurwitz Numbers and Wilson Polynomials. Lett Math Phys 113, 86 (2023). https://doi.org/10.1007/s11005-023-01707-6