
Proof of a McKean conjecture on the rate of convergence of Boltzmann-equation solutions


The present work provides a definitive answer to the problem of quantifying the relaxation to equilibrium of the solution of the spatially homogeneous Boltzmann equation for Maxwellian molecules. Under mild conditions on the initial datum and a weak, physically consistent, angular cutoff hypothesis, our main result (Theorem 1) gives the first precise statement that the total variation distance between the solution and the limiting Maxwellian distribution admits an upper bound of the form \(C e^{\varLambda _b t}\), where \(\varLambda _b\) is the least negative eigenvalue of the linearized collision operator and \(C\) is a constant depending only on the initial datum. The validity of this quantification was conjectured about fifty years ago by Henry P. McKean. The proof takes as its point of reference an analogy, highlighted by McKean, between the problem of convergence to equilibrium and the central limit theorem of probability theory.



1. It should be noted that condition (2) is tantamount to assuming that the counterpart of \(b\) in the \({\varvec{\sigma }}\)-representation is an even function.


1. Arkeryd, L.: Intermolecular forces of infinite range and the Boltzmann equation. Arch. Ration. Mech. Anal. 77, 11–21 (1981)

2. Aubin, T.: Nonlinear Analysis on Manifolds. Monge-Ampère Equations. Springer, New York (1982)

3. Bassetti, F., Ladelli, L.: Self-similar solutions in one-dimensional kinetic models: a probabilistic view. Ann. Appl. Probab. 22, 1928–1961 (2012)

4. Bassetti, F., Ladelli, L., Matthes, D.: Central limit theorem for a class of one-dimensional kinetic equations. Probab. Theory Relat. Fields 150, 77–109 (2010)

5. Bassetti, F., Ladelli, L., Regazzini, E.: Probabilistic study of the speed of approach to equilibrium for an inelastic Kac model. J. Stat. Phys. 133, 683–710 (2008)

6. Beurling, A.: Sur les intégrales de Fourier absolument convergentes et leur application à une transformation fonctionnelle. In: 9th Congr. Math. Scandinaves, Tryekeri, Helsinki, 1938, pp. 199–210, Helsinki (1939) [See also: The Collected Works of Arne Beurling, vol. 2. Harmonic Analysis (L. Carleson, P. Malliavin, V. Neuberger and J. Wermer, eds.). Birkhäuser, Boston (1989)]

7. Bhattacharya, R.N., Rao, R.R.: Normal Approximation and Asymptotic Expansions. Wiley, New York (1976)

8. Bobylev, A.V.: The theory of the nonlinear spatially uniform Boltzmann equation for Maxwell molecules. Math. Phys. Rev. 7, 111–233 (1988)

9. Bobylev, A.V., Cercignani, C.: On the rate of entropy production for the Boltzmann equation. J. Stat. Phys. 94, 603–618 (1999)

10. Carleman, T.: Sur la théorie de l'équation intégrodifférentielle de Boltzmann. Acta Math. 60, 91–146 (1932)

11. Carlen, E.A., Carvalho, M.C.: Strict entropy production bounds and stability of the rate of convergence to equilibrium for the Boltzmann equation. J. Stat. Phys. 67, 575–608 (1992)

12. Carlen, E.A., Carvalho, M.C.: Entropy production estimates for Boltzmann equation with physically realistic collision kernels. J. Stat. Phys. 74, 743–782 (1994)

13. Carlen, E.A., Carvalho, M.C., Gabetta, E.: Central limit theorem for Maxwellian molecules and truncation of the Wild expansion. Commun. Pure Appl. Math. 53, 370–397 (2000)

14. Carlen, E.A., Carvalho, M.C., Gabetta, E.: On the relation between rates of relaxation and convergence of Wild sums for solutions of the Kac equation. J. Funct. Anal. 220, 362–387 (2005)

15. Carlen, E.A., Carvalho, M.C., Loss, M.: Determination of the spectral gap for Kac's master equation and related stochastic evolution. Acta Math. 191, 1–54 (2003)

16. Carlen, E.A., Gabetta, E., Regazzini, E.: On the rate of explosion for infinite energy solutions of the spatially homogeneous Boltzmann equation. J. Stat. Phys. 129, 699–723 (2007)

17. Carlen, E.A., Gabetta, E., Regazzini, E.: Probabilistic investigation on the explosion of solutions of the Kac equation with infinite energy initial distribution. J. Appl. Probab. 45, 95–106 (2008)

18. Carlen, E.A., Gabetta, E., Toscani, G.: Propagation of smoothness and the rate of exponential convergence to equilibrium for a spatially homogeneous Maxwellian gas. Commun. Math. Phys. 199, 521–546 (1999)

19. Carlen, E.A., Geronimo, J.S., Loss, M.: Determination of the spectral gap in the Kac model for physical momentum and energy-conserving collisions. SIAM J. Math. Anal. 40, 327–364 (2008)

20. Carlen, E.A., Lu, X.: Fast and slow convergence to equilibrium for Maxwellian molecules via Wild sums. J. Stat. Phys. 112, 59–134 (2003)

21. do Carmo, M.P.: Riemannian Geometry. Birkhäuser, Boston (1992)

22. Cercignani, C.: The Boltzmann Equation and Its Applications. Springer, New York (1988)

23. Cercignani, C., Illner, R., Pulvirenti, M.: The Mathematical Theory of Dilute Gases. Springer, New York (1994)

24. Chow, Y.S., Teicher, H.: Probability Theory. Independence, Interchangeability, Martingales, 3rd edn. Springer, New York (1997)

25. Constantine, G.M., Savits, T.H.: A multivariate Faà di Bruno formula with applications. Trans. Am. Math. Soc. 348, 503–520 (1996)

26. Desvillettes, L., Villani, C.: On the trend to global equilibrium for spatially inhomogeneous kinetic systems: the Boltzmann equation. Invent. Math. 159, 245–316 (2005)

27. Dolera, E.: Rapidity of convergence to equilibrium of the solution of the Boltzmann equation for Maxwellian molecules. Ph.D. thesis, Università degli Studi di Pavia (2010)

28. Dolera, E.: On the computation of the spectrum of the linearized Boltzmann collision operator for Maxwellian molecules. Boll. Unione Mat. Ital. (9) 4, 47–68 (2011)

29. Dolera, E.: Spatially homogeneous Maxwellian molecules in a neighborhood of the equilibrium. Ist. Lombardo Accad. Sci. Lett. Rend. A 145 (2011). arXiv:1206.3425

30. Dolera, E.: Estimates of the approximation of weighted sums of conditionally independent random variables by the normal law. J. Inequal. Appl. 2013, 320 (2013)

31. Dolera, E.: Mathematical treatment of the homogeneous Boltzmann equation for Maxwellian molecules in the presence of singular kernels. arXiv:1306.5133

32. Dolera, E., Gabetta, E., Regazzini, E.: Reaching the best possible rate of convergence to equilibrium for solutions of Kac's equation via central limit theorem. Ann. Appl. Probab. 19, 186–209 (2009)

33. Dolera, E., Regazzini, E.: The role of the central limit theorem in discovering sharp rates of convergence to equilibrium for the solution of the Kac equation. Ann. Appl. Probab. 20, 430–461 (2010)

34. Drmota, M.: Random Trees. An Interplay Between Combinatorics and Probability. Springer, Wien (2009)

35. Fortini, S., Ladelli, L., Regazzini, E.: A central limit problem for partially exchangeable random variables. Theory Probab. Appl. 41, 224–246 (1996)

36. Fristedt, B., Gray, L.: A Modern Approach to Probability Theory. Birkhäuser, Boston (1997)

37. Gabetta, E., Regazzini, E.: Some new results for McKean's graphs with applications to Kac's equation. J. Stat. Phys. 125, 947–974 (2006)

38. Gabetta, E., Regazzini, E.: Central limit theorem for the solution of the Kac equation. Ann. Appl. Probab. 18, 2320–2336 (2008)

39. Gabetta, E., Regazzini, E.: Central limit theorems for the solutions of the Kac equation: speed of approach to equilibrium in weak metrics. Probab. Theory Relat. Fields 146, 451–480 (2010)

40. Gabetta, E., Toscani, G., Wennberg, B.: Metrics for probability distributions and the trend to equilibrium for solutions of the Boltzmann equation. J. Stat. Phys. 81, 901–934 (1995)

41. Grigoryan, A.: Heat Kernel and Analysis on Manifolds. American Mathematical Society, Providence (2009)

42. Grünbaum, F.A.: Linearization for the Boltzmann equation. Trans. Am. Math. Soc. 165, 425–449 (1972)

43. Hardy, G.H., Littlewood, J.E., Pólya, G.: Inequalities, 2nd edn. Cambridge University Press, Cambridge (1952) (reprinted in 1994)

44. Hilbert, D.: Begründung der kinetischen Gastheorie. Math. Ann. 72, 562–577 (1912)

45. Hirsch, M.W.: Differential Topology. Springer, New York (1976)

46. Ikenberry, E., Truesdell, C.: On the pressures and the flux of energy in a gas according to Maxwell's kinetic theory. I. J. Ration. Mech. Anal. 5, 1–54 (1956)

47. Kac, M.: Foundations of kinetic theory. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, vol. 3, pp. 171–197. University of California Press, Berkeley (1956)

48. Kallenberg, O.: Foundations of Modern Probability, 2nd edn. Springer, New York (2002)

49. Maxwell, J.C.: On the dynamical theory of gases. Philos. Trans. R. Soc. Lond. Ser. A 157, 49–88 (1867)

50. McKean Jr., H.P.: Speed of approach to equilibrium for Kac's caricature of a Maxwellian gas. Arch. Ration. Mech. Anal. 21, 343–367 (1966)

51. McKean Jr., H.P.: An exponential formula for solving Boltzmann's equation for a Maxwellian gas. J. Combin. Theory 2, 358–382 (1967)

52. Merris, R.: Combinatorics, 2nd edn. Wiley, New York (2003)

53. Morgenstern, D.: General existence and uniqueness proof for the spatially homogeneous solutions of the Maxwell–Boltzmann equation in the case of Maxwellian molecules. Proc. Natl. Acad. Sci. USA 40, 719–721 (1954)

54. Morgenstern, D.: Analytical studies related to the Maxwell–Boltzmann equation. J. Ration. Mech. Anal. 4, 533–555 (1955)

55. Mouhot, C.: Rate of convergence to equilibrium for the spatially homogeneous Boltzmann equation with hard potentials. Commun. Math. Phys. 261, 629–672 (2006)

56. Murata, H., Tanaka, H.: An inequality for certain functional of multidimensional probability distributions. Hiroshima Math. J. 4, 75–81 (1974)

57. Parthasarathy, K.R.: Probability Measures on Metric Spaces. Academic Press, New York (1967) (reprinted in 2005 by AMS Chelsea, Providence)

58. Petrov, V.V.: Limit Theorems of Probability Theory. Sequences of Independent Random Variables. The Clarendon Press, Oxford University Press, New York (1995)

59. Sansone, G.: Orthogonal Functions. Interscience Publishers, New York (1959) (reprinted in 1991 by Dover Publications, New York)

60. Stroock, D.W.: Probability Theory. An Analytic View, 2nd edn. Cambridge University Press, Cambridge (2011)

61. Tanaka, H.: Probabilistic treatment of the Boltzmann equation of Maxwellian molecules. Z. Wahrsch. Verw. Gebiete 46, 67–105 (1978)

62. Toscani, G., Villani, C.: Probability metrics and uniqueness of the solution of the Boltzmann equation for a Maxwell gas. J. Stat. Phys. 94, 619–637 (1999)

63. Truesdell, C., Muncaster, R.: Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas. Academic Press, New York (1980)

64. Villani, C.: Fisher information estimates for Boltzmann's collision operator. J. Math. Pures Appl. 77, 821–837 (1998)

65. Villani, C.: A review of mathematical topics in collisional kinetic theory. In: Friedlander, S., Serre, D. (eds.) Handbook of Mathematical Fluid Dynamics, vol. 1, pp. 71–305. North-Holland, Amsterdam (2002)

66. Villani, C.: Cercignani's conjecture is sometimes true and always almost true. Commun. Math. Phys. 234, 455–490 (2003)

67. Wild, E.: On Boltzmann's equation in kinetic theory of gases. Proc. Camb. Philos. Soc. 47, 602–609 (1951)



The authors would like to thank Professor Eric Carlen for his constructive and helpful comments and constant encouragement. They also acknowledge the advice of Professor Francesco Bonsante.

Author information

Correspondence to Eugenio Regazzini.

Additional information

Work partially supported by MIUR-2008MK3AFZ.

Appendix A


Gathered here are the proofs of the propositions and formulas stated without proof in Sects. 1 and 2.

A.1 Proof of (24), (106) and (114)

Fix \(s > 0\) and define

$$\begin{aligned} \text {A}_{1}^{(s)}(\nu , \tau _{\nu })&:= {\mathsf{E}}_t\left( \sum _{j = 1}^{\nu } |\pi _{j, \nu }|^s\ \ \big | \ \nu , \tau _{\nu } \right) \\ \text {A}_2(\nu , \tau _{\nu })&:= {\mathsf{E}}_t\left( \sum _{j = 1}^{\nu } \pi _{j, \nu }^{2} |\zeta _{j, \nu }|\ \ \big | \ \nu , \tau _{\nu } \right) \\ \text {A}_3(\nu , \tau _{\nu })&:= {\mathsf{E}}_t\left( \sum _{j = 1}^{\nu }|\pi _{j, \nu }^{3} \eta _{j, \nu }|\ \ \big | \ \nu , \tau _{\nu } \right) . \end{aligned}$$

These functions satisfy the relations

$$\begin{aligned} \begin{aligned} \text {A}(1, {\mathfrak {t}}_1)&= 1 \\ \text {A}(n, {\mathfrak {t}}_n)&= \alpha [\text {A}(n_l, {\mathfrak {t}}_n^l) + \text {A}(n_r, {\mathfrak {t}}_n^r)]\ \text {if}\ n \ge 2 \end{aligned} \end{aligned}$$

for every \(n\) in \({\mathbb {N}}\), \({\mathfrak {t}}_n\) in \({\mathbb {T}}(n)\) and for some suitable constant \(\alpha \). This claim is verified for each of the three functions by a common line of reasoning. First, \(\text {A}_{1}^{(s)}(1, {\mathfrak {t}}_1) = \text {A}_2(1, {\mathfrak {t}}_1) = \text {A}_3(1, {\mathfrak {t}}_1) = 1\) holds by definition. Then, to obtain the latter identity in (137) for \(\text {A}_{1}^{(s)}\), use (22) in the equality

$$\begin{aligned} \text {A}_{1}^{(s)}(n, {\mathfrak {t}}_n) = \int \limits _{[0, \pi ]^{n-1}} \sum _{j = 1}^{n} |\pi _{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi })|^s \beta ^{\otimes _{n-1}}(\text {d}\varvec{\varphi }). \end{aligned}$$

Thus, \(\alpha = \int _{0}^{\pi } |\cos \varphi |^s \beta (\text {d}\varphi ) = \int _{0}^{\pi } |\sin \varphi |^s \beta (\text {d}\varphi ) = l_s(b)\), where the validity of the exchange of \(\cos \) with \(\sin \) is a consequence of (2). As to \(\text {A}_2\), use (22) and (105) in

$$\begin{aligned} \text {A}_2(n, {\mathfrak {t}}_n) = \int \limits _{[0, \pi ]^{n-1}} \sum _{j = 1}^{n} |\pi _{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi })|^2 |\zeta _{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi })| \beta ^{\otimes _{n-1}}(\text {d}\varvec{\varphi }) \end{aligned}$$

to show that \(\text {A}_2\) satisfies the latter identity in (137) with \(\alpha = f(b)\). Passing to \(\text {A}_3\), consider (22) and (113) in conjunction with

$$\begin{aligned} \text {A}_3(n, {\mathfrak {t}}_n) = \int \limits _{[0, \pi ]^{n-1}} \sum _{j = 1}^{n} |\pi _{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi })|^3 |\eta _{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi })| \beta ^{\otimes _{n-1}}(\text {d}\varvec{\varphi }) \end{aligned}$$

to verify that \(\text {A}_3\) meets the latter identity in (137) with \(\alpha = g(b)\).

At this stage, since \(\delta _j({\mathfrak {t}}_n^l) + 1 = \delta _j({\mathfrak {t}}_n)\) for \(j = 1, \dots , n_l\) and \(\delta _j({\mathfrak {t}}_n^r) + 1 = \delta _{j+n_l}({\mathfrak {t}}_n)\) for \(j = 1, \dots , n_r\), an induction argument yields \(\text {A}(n, {\mathfrak {t}}_n) = \sum _{j = 1}^{n} \alpha ^{\delta _j({\mathfrak {t}}_n)}\), where \(\delta _j\) is the depth defined in Sect. 1.5. By the concept of germination explained in Sect. 1.5, \(\delta _j({\mathfrak {t}}_{n, k}) = \delta _j({\mathfrak {t}}_n) + \delta _{j, k} + \delta _{j, k+1}\) for \(j = 1, \dots , k+1\), with \(\delta _{r, s}\) standing for the Kronecker delta, and \(\delta _j({\mathfrak {t}}_{n, k}) = \delta _{j+1}({\mathfrak {t}}_n)\) for \(j = k+2, \dots , n\). Then, the specific form of \(\text {A}(n, {\mathfrak {t}}_n)\) shows that

$$\begin{aligned} \frac{1}{n} \sum _{k = 1}^{n} \text {A}(n+1, {\mathfrak {t}}_{n, k}) = \left( 1 + \frac{2\alpha - 1}{n}\right) \text {A}(n, {\mathfrak {t}}_n) \end{aligned}$$

holds for every \(n\) in \({\mathbb {N}}\) and \({\mathfrak {t}}_n\) in \({\mathbb {T}}(n)\). Now, since \({\mathsf{E}}_t[\text {A}(n+1, \tau _{n+1})\ | \ \tau _n = {\mathfrak {t}}_n] = \sum _{k = 1}^{n}\text {A}(n+1, {\mathfrak {t}}_{n, k}){\mathsf{P}}_t[\tau _{n+1} = {\mathfrak {t}}_{n, k}\ | \ \tau _n = {\mathfrak {t}}_n]\), (19) and (138) imply that \(a_n := {\mathsf{E}}_t[\text {A}(n, \tau _n)]\) satisfies \(a_1 = 1\) and \(a_{n+1} = \left( 1 + \frac{2\alpha - 1}{n}\right) a_n\) for every \(n\) in \({\mathbb {N}}\). Hence, if \((1 - 2\alpha )\) does not belong to \({\mathbb {N}}\), \(a_n = \frac{\varGamma (n + 2\alpha - 1)}{\varGamma (n) \varGamma (2\alpha )}\) for every \(n\) in \({\mathbb {N}}\). Otherwise, if \((1 - 2\alpha ) = m\), then \(a_n = (-1)^{n+1} \left( {\begin{array}{c}m-1\\ n-1\end{array}}\right) \) for \(n = 1, \dots , m\) and \(a_n = 0\) for \(n > m\). Finally, note that the expectations in (24), (106) and (114) coincide with \({\mathsf{E}}_t[\text {A}_{1}^{(s)}]\), \({\mathsf{E}}_t[\text {A}_2]\) and \({\mathsf{E}}_t[\text {A}_3]\) respectively, and that \({\mathsf{E}}_t[\text {A}(\nu , \tau _{\nu })\ | \ \nu ] = a_{\nu }\), in view of the stochastic independence of \(\nu \) and \(\{\tau _n\}_{n \ge 1}\). Therefore, conclude by observing that \({\mathsf{E}}_t[a_{\nu }] = \sum _{n = 1}^{\infty } a_n e^{-t} (1 - e^{-t})^{n-1} = e^{-(1 - 2\alpha )t}\).

A.2 Probability law of \(\{\tau _n\}_{n \ge 1}\)

The aim is to show that the coefficient \(p_n({\mathfrak {t}}_n)\) in the Wild-McKean sum is equal to \({\mathsf{P}}_t[\tau _n = {\mathfrak {t}}_n]\) for every \(n\). Proceeding by mathematical induction, observe that the assertion is trivially true for \(n = 1, 2\). To treat the case \(n \ge 3\), introduce the symbol \({\mathbb {P}}({\mathfrak {t}}_n)\) for the subset of \({\mathbb {T}}(n-1)\) consisting of the trees that can produce \({\mathfrak {t}}_n\) by germination. Whence,

$$\begin{aligned} {\mathsf{P}}_t[\tau _n = {\mathfrak {t}}_n]&= \sum _{{\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)} {\mathsf{P}}_t[\tau _n = {\mathfrak {t}}_n\ | \ \tau _{n-1} = {\mathfrak {s}}_{n-1}] \ {\mathsf{P}}_t[\tau _{n-1} = {\mathfrak {s}}_{n-1}]\\&= \frac{1}{n-1} \sum _{{\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)} {\mathsf{P}}_t[\tau _{n-1} = {\mathfrak {s}}_{n-1}] = \frac{1}{n-1} \sum _{{\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)} p_{n-1}({\mathfrak {s}}_{n-1}) \end{aligned}$$

the last equality being valid thanks to the inductive hypothesis. Now,

$$\begin{aligned} \sum _{{\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)} p_{n-1}({\mathfrak {s}}_{n-1}) = \sum _{\begin{array}{c} {\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)\\ {\mathfrak {s}}_{n-1}^l = {\mathfrak {t}}_n^l \end{array}} p_{n-1}({\mathfrak {s}}_{n-1}) \ + \sum _{\begin{array}{c} {\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)\\ {\mathfrak {s}}_{n-1}^r = {\mathfrak {t}}_n^r \end{array}} p_{n-1}({\mathfrak {s}}_{n-1}) \end{aligned}$$

and, by (33), the RHS turns out to be equal to

$$\begin{aligned} \frac{1}{n-2} \left[ p_{n_l}({\mathfrak {t}}_n^l) \sum _{\begin{array}{c} {\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)\\ {\mathfrak {s}}_{n-1}^l = {\mathfrak {t}}_n^l \end{array}} p_{n_r - 1}({\mathfrak {s}}_{n-1}^r) \ + \ p_{n_r}({\mathfrak {t}}_n^r) \sum _{\begin{array}{c} {\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)\\ {\mathfrak {s}}_{n-1}^r = {\mathfrak {t}}_n^r \end{array}} p_{n_l - 1}({\mathfrak {s}}_{n-1}^l) \right] . \end{aligned}$$

At this stage, observe that

$$\begin{aligned}&\sum _{\begin{array}{c} {\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)\\ {\mathfrak {s}}_{n-1}^l = {\mathfrak {t}}_n^l \end{array}} p_{n_r - 1}({\mathfrak {s}}_{n-1}^r) = \sum _{{\mathfrak {s}}_{n_r - 1} \in {\mathbb {P}}({\mathfrak {t}}_n^r)} p_{n_r - 1}({\mathfrak {s}}_{n_r - 1})\\&\qquad = (n_r - 1) \sum _{{\mathfrak {s}}_{n_r - 1} \in {\mathbb {P}}({\mathfrak {t}}_n^r)} {\mathsf{P}}_t[\tau _{n_r - 1} = {\mathfrak {s}}_{n_r - 1}] {\mathsf{P}}_t[\tau _{n_r} = {\mathfrak {t}}_n^r \ | \ \tau _{n_r - 1} = {\mathfrak {s}}_{n_r - 1}] \\&\qquad = (n_r - 1) {\mathsf{P}}_t[\tau _{n_r} = {\mathfrak {t}}_n^r] = (n_r - 1) p_{n_r}({\mathfrak {t}}_n^r) \end{aligned}$$

and that the same procedure yields \(\sum _{\begin{array}{c} {\mathfrak {s}}_{n-1}\in {\mathbb {P}}({\mathfrak {t}}_n)\\ {\mathfrak {s}}_{n-1}^r = {\mathfrak {t}}_n^r \end{array}} p_{n_l - 1}({\mathfrak {s}}_{n-1}^l) = (n_l - 1) p_{n_l}({\mathfrak {t}}_n^l)\). To complete the proof it is enough to combine the previous equations and to recall that \(n = n_l + n_r\).
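The identity proved above can be corroborated by exact enumeration for small \(n\): the law of the germination chain \(\{\tau _n\}_{n \ge 1}\) (germinate a uniformly chosen leaf at each step) coincides with the Wild-McKean coefficients \(p_1 = 1\), \(p_n({\mathfrak {t}}_n) = \frac{1}{n-1} p_{n_l}({\mathfrak {t}}_n^l) p_{n_r}({\mathfrak {t}}_n^r)\). A minimal sketch, with the caveat that the nested-tuple encoding of binary trees is an assumption of this illustration and not the paper's notation:

```python
from fractions import Fraction
from collections import defaultdict

LEAF = "o"  # a single leaf; an internal node is a pair (left, right)

def leaves(t):
    return 1 if t == LEAF else leaves(t[0]) + leaves(t[1])

def germinations(t):
    """All trees obtained by replacing one leaf of t with a cherry."""
    if t == LEAF:
        return [(LEAF, LEAF)]
    return ([(s, t[1]) for s in germinations(t[0])] +
            [(t[0], s) for s in germinations(t[1])])

def chain_law(n):
    """Exact law of tau_n: at step m, each of the m leaves germinates w.p. 1/m."""
    dist = {LEAF: Fraction(1)}
    for m in range(1, n):
        new = defaultdict(Fraction)
        for t, p in dist.items():
            for s in germinations(t):
                new[s] += p / m
        dist = dict(new)
    return dist

def wild(t):
    """Wild-McKean coefficient: p_1 = 1, p_n(t) = p(t^l) p(t^r) / (n - 1)."""
    if t == LEAF:
        return Fraction(1)
    return wild(t[0]) * wild(t[1]) / (leaves(t) - 1)

for n in range(2, 7):
    law = chain_law(n)
    assert sum(law.values()) == 1
    assert all(p == wild(t) for t, p in law.items())
```

Exact rational arithmetic (via Fraction) makes the agreement an identity rather than a floating-point coincidence.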

A.3 A few interesting characteristics of \({\mathcal {C}}[\zeta , \eta ; \varphi ]\)

The first point concerns the invariance of (38) w.r.t. the choice of \(\{\mathbf{a}(\mathbf{u}), \mathbf{b}(\mathbf{u}), \mathbf{u}\}\). Fix \(\varvec{\xi }\ne \mathbf{0}\) and let \(\{\mathbf{a}(\mathbf{u}), \mathbf{b}(\mathbf{u}), \mathbf{u}\}\) and \(\{\mathbf{a}^{'}(\mathbf{u}), \mathbf{b}^{'}(\mathbf{u}), \mathbf{u}\}\) be distinct positive bases. Then, write \(\varvec{\psi }^l\) and \(\varvec{\psi }^r\) in (37) with \(\{\mathbf{a}^{'}(\mathbf{u}), \mathbf{b}^{'}(\mathbf{u}), \mathbf{u}\}\) in the place of \(\{\mathbf{a}(\mathbf{u}), \mathbf{b}(\mathbf{u}), \mathbf{u}\}\). Since there exists some \(\theta ^{*}\) in \([0, 2\pi )\) such that \(\mathbf{a}^{'} = \cos \theta ^{*} \mathbf{a} - \sin \theta ^{*} \mathbf{b}\) and \(\mathbf{b}^{'} = \sin \theta ^{*} \mathbf{a} + \cos \theta ^{*} \mathbf{b}\), the change of basis gives

$$\begin{aligned} \begin{array}{lll} \varvec{\psi }^l(\varphi , \theta , \mathbf{u}) &{}= \cos (\theta - \theta ^{*}) \sin \varphi \mathbf{a}(\mathbf{u}) + \sin (\theta - \theta ^{*}) \sin \varphi \mathbf{b}(\mathbf{u}) + \cos \varphi \mathbf{u}\\ \varvec{\psi }^r(\varphi , \theta , \mathbf{u}) &{}= -\cos (\theta - \theta ^{*}) \cos \varphi \mathbf{a}(\mathbf{u}) - \sin (\theta - \theta ^{*}) \cos \varphi \mathbf{b}(\mathbf{u}) + \sin \varphi \mathbf{u}. \end{array} \end{aligned}$$

After substituting these expressions in (38), the desired conclusion follows from an obvious change of variable.

To prove the measurability of \((\varvec{\xi }, \varphi ) \mapsto I(\varvec{\xi }, \varphi )\), resort to Proposition 9 in Section 9.3 of [36], so that it is enough to verify the continuity of \(\varphi \mapsto I(\varvec{\xi }, \varphi )\) for each fixed \(\varvec{\xi }\) and the measurability of \(\varvec{\xi }\mapsto I(\varvec{\xi }, \varphi )\) for each fixed \(\varphi \). The former claim follows from the form of the dependence on \(\varphi \) in (37)–(38). To verify the latter, one can show that also \(\varvec{\xi }\mapsto I(\varvec{\xi }, \varphi )\) is continuous for each fixed \(\varphi \). Continuity at \(\varvec{\xi }= \mathbf{0}\) can be derived from the relation \(|\varvec{\psi }^l| = |\varvec{\psi }^r| = 1\) and an ensuing application of the dominated convergence theorem. To check continuity at \(\varvec{\xi }^{*} \ne \mathbf{0}\), take a sequence \(\{\varvec{\xi }_n\}_{n \ge 1}\) converging to \(\varvec{\xi }^{*}\) and observe that \(|\varvec{\xi }_n| \rightarrow |\varvec{\xi }^{*}|\) and \(\mathbf{u}_n := \varvec{\xi }_n/|\varvec{\xi }_n| \rightarrow \mathbf{u}^{*} := \varvec{\xi }^{*}/|\varvec{\xi }^{*}|\). Fix a small open neighborhood \(\Omega (\mathbf{u}^{*}) \subset S^2\) of \(\mathbf{u}^{*}\) in such a way that \(S^2 {\setminus }\overline{\Omega (\mathbf{u}^{*})}\) contains at least two antipodal points. In view of the first part of this appendix, choose a distinguished basis in such a way that the restrictions of \(\mathbf{u}\mapsto \mathbf{a}(\mathbf{u})\) and \(\mathbf{u}\mapsto \mathbf{b}(\mathbf{u})\) to \(\Omega (\mathbf{u}^{*})\) vary with continuity. 
As a consequence, \(\varvec{\psi }^l(\varphi , \theta , \mathbf{u}_n)\) converges to \(\varvec{\psi }^l(\varphi , \theta , \mathbf{u}^{*})\) and \(\varvec{\psi }^r(\varphi , \theta , \mathbf{u}_n)\) converges to \(\varvec{\psi }^r(\varphi , \theta , \mathbf{u}^{*})\) for every \(\varphi \) in \([0, \pi ]\) and \(\theta \) in \((0, 2\pi )\), and the convergence of \(I(\varvec{\xi }_n, \varphi )\) to \(I(\varvec{\xi }^{*}, \varphi )\) follows again from the dominated convergence theorem. To show that \(\varvec{\xi }\mapsto I(\varvec{\xi }, \varphi )\) is a c.f. for every \(\varphi \) in \([0, \pi ]\), resort to the multivariate version of the Bochner characterization. See Exercise 3.1.9 in [60]. The only point that requires some care is positivity. If this property were not in force, one could find a positive integer \(N\), two \(N\)-vectors \((\omega _1, \dots , \omega _N)\) and \((\varvec{\xi }_1, \dots , \varvec{\xi }_N)\) in \({\mathbb {C}}^N\) and \(({\mathbb {R}}^3)^N\) respectively, and some \(\varphi ^{*}\) in \([0, \pi ]\) in such a way that \(\sum _{j = 1}^{N} \sum _{k = 1}^{N} \omega _j \overline{\omega }_k I(\varvec{\xi }_j - \varvec{\xi }_k, \varphi ^{*}) < 0\). Note that the LHS of this inequality is a real number since \(I(-\varvec{\xi }, \varphi ) = \overline{I(\varvec{\xi }, \varphi )}\) for any \(\varvec{\xi }\) and \(\varphi \). Hence, by continuity of \(\varphi \mapsto I(\varvec{\xi }, \varphi )\), there exists an open interval \(J\) in \([0, \pi ]\) containing \(\varphi ^{*}\) such that \(\varphi \mapsto \sum _{j = 1}^{N} \sum _{k = 1}^{N} \omega _j \overline{\omega }_k I(\varvec{\xi }_j - \varvec{\xi }_k, \varphi )\) is strictly negative on \(J\). Now, choose a specific \(b_{*}\) for which the resulting p.m. in (20), say \(\beta _{*}\), is supported by \(\overline{J}\). 
By construction, \(L := \int _{0}^{\pi } \sum _{j = 1}^{N} \sum _{k = 1}^{N} \omega _j \overline{\omega }_k I(\varvec{\xi }_j - \varvec{\xi }_k, \varphi ) \beta _{*}(\text {d}\varphi )\) is a strictly negative number, a fact which immediately leads to a contradiction. Indeed, denote by \({\mathcal {Q}}_{*}[\zeta , \eta ]\) the RHS of (35) when \(b_{*}\) replaces \(b\) in the definition of \(Q[p, q]\). Observe that \({\mathcal {Q}}_{*}[\zeta , \eta ]\) is in any case a p.m. even if \(b_{*}\) does not meet (2). Now, \(L\) must be equal to \(\sum _{j = 1}^{N} \sum _{k = 1}^{N} \omega _j \overline{\omega }_k \hat{\mathcal {Q}}_{*}[\zeta , \eta ](\varvec{\xi }_j - \varvec{\xi }_k)\), thanks to (36), and this quantity must be non-negative, from the Bochner criterion again.
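The Bochner positivity invoked twice above can be illustrated concretely: for any characteristic function \(\phi \) and points \(\varvec{\xi }_1, \dots , \varvec{\xi }_N\), the matrix \([\phi (\varvec{\xi }_j - \varvec{\xi }_k)]_{j,k}\) is Hermitian and its quadratic form is non-negative. A minimal sketch, in which the characteristic function of a Gaussian law on \({\mathbb {R}}^3\) stands in for \(\varvec{\xi }\mapsto I(\varvec{\xi }, \varphi )\) (the choice of c.f., the mean vector and the sample points are assumptions of this illustration):

```python
import cmath
import random

def phi(xi):
    # c.f. of a Gaussian law on R^3 with mean mu and identity covariance;
    # the nonzero mu makes phi genuinely complex-valued.
    mu = (0.5, -1.0, 2.0)
    return cmath.exp(1j * sum(a * b for a, b in zip(xi, mu))
                     - 0.5 * sum(a * a for a in xi))

random.seed(0)
N = 8
xis = [tuple(random.gauss(0.0, 1.0) for _ in range(3)) for _ in range(N)]
G = [[phi(tuple(a - b for a, b in zip(x, y))) for y in xis] for x in xis]

# Hermitian symmetry: phi(-xi) equals the conjugate of phi(xi)
assert all(abs(G[j][k] - G[k][j].conjugate()) < 1e-12
           for j in range(N) for k in range(N))

# Non-negativity of sum_{j,k} w_j conj(w_k) phi(xi_j - xi_k)
for _ in range(200):
    w = [complex(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(N)]
    q = sum(w[j] * w[k].conjugate() * G[j][k]
            for j in range(N) for k in range(N))
    assert abs(q.imag) < 1e-9 and q.real > -1e-9
```

The contradiction argument in the text exploits exactly the failure of this non-negativity for a hypothetical non-positive-definite \(I(\cdot , \varphi ^{*})\).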

To prove (39), start by verifying that \(\varphi \mapsto {\mathcal {C}}[\zeta , \eta ; \varphi ]\) is measurable, which is tantamount to checking that \(\varphi \mapsto {\mathcal {C}}[\zeta , \eta ; \varphi ](K)\) is measurable for every \(K = \times _{i=1}^{3} (-\infty , x_i]\), in view of Lemma 1.40 of [48]. To this aim, fix such a \(K\) and use the Fubini theorem to show that

$$\begin{aligned} (\mathbf{a}, \mathbf{b}, c, \varphi ) \mapsto \left( \frac{1}{2\pi }\right) ^3 \int \limits _{-c}^{c} \int \limits _{-c}^{c} \int \limits _{-c}^{c} \left[ \prod _{m=1}^{3} \frac{e^{-i\xi _m a_m} - e^{-i\xi _m b_m}}{i\xi _m} \right] \hat{\mathcal {C}}[\zeta , \eta ; \varphi ](\varvec{\xi }) \text {d} \varvec{\xi }\end{aligned}$$

is measurable, since \((\varvec{\xi }, \varphi ) \mapsto \hat{\mathcal {C}}[\zeta , \eta ; \varphi ](\varvec{\xi })\) does. To complete the argument, invoke the inversion formula and note that \({\mathcal {C}}[\zeta , \eta ; \varphi ](K)\) is equal to the limit of the above expression as \(c \uparrow +\infty \), \(a_m \downarrow -\infty \) and \(b_m \downarrow x_m\) for \(m = 1, 2, 3\). This paves the way for writing the integral in (39), and the equality therein follows from (36) and (38) in view of the injectivity of the Fourier-transform operator.

A.4 Proof of (40)

The first step is to show that \(\varvec{\varphi } \mapsto {\mathcal {C}}_{{\mathfrak {t}}_n}[\mu _0; \varvec{\varphi }]\) is measurable as a map from \([0, \pi ]^{n-1}\) into \(\mathcal {P}({\mathbb {R}}^3)\). Mimicking the argument in the last part of A.3, it suffices to verify the measurability of \((\varvec{\xi }, \varvec{\varphi }) \mapsto \hat{\mathcal {C}}_{{\mathfrak {t}}_n}[\mu _0; \varvec{\varphi }](\varvec{\xi })\) by means of Proposition 9 in Section 9.3 of [36]. On the one hand, the function \(\varvec{\xi }\mapsto \hat{\mathcal {C}}_{{\mathfrak {t}}_n}[\mu _0; \varvec{\varphi }](\varvec{\xi })\) is continuous for every fixed \(\varvec{\varphi }\). On the other hand, measurability of \(\varvec{\varphi } \mapsto \hat{\mathcal {C}}_{{\mathfrak {t}}_n}[\mu _0; \varvec{\varphi }](\varvec{\xi })\), for every fixed \(\varvec{\xi }\), can be proved by induction. When \(n = 1\), \(\hat{\mathcal {C}}_{{\mathfrak {t}}_1}[\mu _0; \emptyset ](\varvec{\xi })\) is independent of \(\varvec{\varphi }\) and the claim is obvious. When \(n \ge 2\), it suffices to recall (44) and to exploit the inductive hypothesis. To conclude, the equality

$$\begin{aligned} \hat{\mathcal {Q}}_{{\mathfrak {t}}_n}[\mu _0](\varvec{\xi }) = \int \limits _{[0, \pi ]^{n-1}} \hat{\mathcal {C}}_{{\mathfrak {t}}_n}[\mu _0; \varvec{\varphi }](\varvec{\xi }) \beta ^{\otimes _{n-1}}(\text {d}\varvec{\varphi }) \end{aligned}$$

for \(n = 2, 3, \dots \) will be proved by mathematical induction. First, when \(n = 2\), (139) is valid since it coincides with (36). When \(n \ge 3\), combine the definition of \({\mathcal {Q}}_{{\mathfrak {t}}_n}\) with (36) to obtain

$$\begin{aligned} \hat{\mathcal {Q}}_{{\mathfrak {t}}_n}[\mu _0](\varvec{\xi }) = \int \limits _{0}^{\pi }\int \limits _{0}^{2\pi } \hat{\mathcal {Q}}_{{\mathfrak {t}}_n^l}[\mu _0](\rho \cos \varphi \varvec{\psi }^l) \hat{\mathcal {Q}}_{{\mathfrak {t}}_n^r}[\mu _0](\rho \sin \varphi \varvec{\psi }^r) u_{(0, 2\pi )}(\text {d}\theta ) \beta (\text {d}\varphi ) \end{aligned}$$

and the argument is completed by invoking the inductive hypothesis, the definition of \({\mathcal {C}}_{{\mathfrak {t}}_n}\) and (36). Therefore, (139) entails (40) in view of the injectivity of the Fourier-transform operator.

A.5 Proof of Proposition 4

Put \(k := \lceil 2/p \rceil \) with \(p\) as in (15) and consider the random vector \(\mathbf{S} = (S_1, S_2, S_3) := \sum _{j=1}^{2k} (-1)^{j} \mathbf{V}_j\), whose c.f. \(\phi \) is given by \(\phi (\varvec{\xi }) = |\hat{\mu }_{0}(\varvec{\xi })|^{2k}\). The assumptions (47)–(48) plainly entail \({\mathsf{E}}_t\left[ \mathbf{S}\right] = \mathbf{0}\), \({\mathsf{E}}_t\left[ S_i S_j\right] = 0\) for \(i \ne j\), and \({\mathsf{E}}_t\left[ S_{i}^{2}\right] = 2k \sigma _{i}^{2}\) for \(i = 1, 2, 3\). Note also that \(\sigma _{i}^{2} > 0\) for \(i = 1, 2, 3\) as a consequence of (15). Moreover, thanks to the Lyapunov inequality, \({\mathsf{E}}_t\left[ |\mathbf{S}|^3\right] \le (2k)^3 {\mathfrak {m}}_3\). Now, standard arguments explained, e.g., in Section 8.4 of [24] show that

$$\begin{aligned} \phi (\varvec{\xi }) \le 1 - k\sigma _{*}^{2} |\varvec{\xi }|^2 + \frac{(2k)^3 {\mathfrak {m}}_3}{6} |\varvec{\xi }|^3 \end{aligned}$$

with \(\sigma _{*}^{2} := \min \{\sigma _{1}^{2}, \sigma _{2}^{2}, \sigma _{3}^{2}\}\). Thus, \(\phi (\varvec{\xi }) \le 1 - \frac{k}{2} \sigma _{*}^{2} |\varvec{\xi }|^2\) whenever \(|\varvec{\xi }| \le (3 \sigma _{*}^{2})/(8 k^2 {\mathfrak {m}}_3)\), and elementary algebra entails \(1 - \frac{k}{2} \sigma _{*}^{2} |\varvec{\xi }|^2 \le \frac{\lambda ^2}{\lambda ^2 + |\varvec{\xi }|^2}\) for every \(\varvec{\xi }\), provided that \(\lambda ^2 \ge 2/(k\sigma _{*}^{2})\). Now, (15) gives \(|\phi (\varvec{\xi })| \le L|\varvec{\xi }|^{-4}\) for every \(\varvec{\xi }\ne \mathbf{0}\), with \(L := (\sup _{\varvec{\xi }\in {\mathbb {R}}^3} |\varvec{\xi }|^p |\hat{\mu }_0(\varvec{\xi })|)^{4/p}\), and again some algebra entails \(L|\varvec{\xi }|^{-4} \le \frac{\lambda ^2}{\lambda ^2 + |\varvec{\xi }|^2}\) if \(|\varvec{\xi }|^2 \ge B(\lambda ) := (L + \sqrt{L^2 + 4L\lambda ^4})/(2\lambda ^2)\). Note that \(B(\lambda ) \le 2\sqrt{L}\) holds true when \(\lambda ^2 \ge (2\sqrt{L})/3\). At this stage, choosing any \(\lambda \) satisfying \(\lambda ^2 \ge \max \{2/(k\sigma _{*}^{2}), (2\sqrt{L})/3\}\) yields

$$\begin{aligned} |\phi (\varvec{\xi })| \le \frac{\lambda ^2}{\lambda ^2 + |\varvec{\xi }|^2} \end{aligned}$$

for every \(\varvec{\xi }\) such that either \(|\varvec{\xi }| \le (3 \sigma _{*}^{2})/(8 k^2 {\mathfrak {m}}_3)\) or \(|\varvec{\xi }|^2 \ge 2\sqrt{L}\). Therefore, the proof is completed if \((4L)^{1/4} \le (3 \sigma _{*}^{2})/(8 k^2 {\mathfrak {m}}_3)\). Otherwise, if \((4L)^{1/4} > (3 \sigma _{*}^{2})/(8 k^2 {\mathfrak {m}}_3)\), define

$$\begin{aligned} M := \sup _{\{(3 \sigma _{*}^{2})/(8 k^2 {\mathfrak {m}}_3) \le |\varvec{\xi }| \le (4L)^{1/4}\}} |\phi (\varvec{\xi })| \end{aligned}$$

and resort to Corollary 2 in Section 8.4 of [24] to state that \(M < 1\). Then, (140) holds true also when \((3 \sigma _{*}^{2})/(8 k^2 {\mathfrak {m}}_3) \le |\varvec{\xi }| \le (4L)^{1/4}\) if \(M \le \inf _{(3 \sigma _{*}^{2})/(8 k^2 {\mathfrak {m}}_3) \le |\varvec{\xi }| \le (4L)^{1/4}}\left( \frac{\lambda ^2}{\lambda ^2 + |\varvec{\xi }|^2}\right) \), the last inequality being equivalent to \(\lambda ^2 \ge 2\sqrt{L} M/(1 - M)\). In conclusion, taking

$$\begin{aligned} \lambda ^2 := \max \{2/(k\sigma _{*}^{2}), (2\sqrt{L})/3, 2\sqrt{L} M/(1 - M)\} \end{aligned}$$

leads to state that (140) is valid for every \(\varvec{\xi }\), and (49) follows.
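Both "elementary algebra" steps above reduce to sign conditions on quadratics in \(|\varvec{\xi }|^2\), and they are easy to probe numerically. The following plain-Python sketch (with arbitrary illustrative values of \(k\), \(\sigma _{*}^{2}\) and \(L\), not taken from the paper) sweeps a grid and checks both claims, together with \(B(\lambda ) \le 2\sqrt{L}\) at \(\lambda ^2 = (2\sqrt{L})/3\):

```python
import math

# Arbitrary illustrative values (not taken from the paper).
k, sigma2, L = 2, 0.7, 5.0

# Claim 1: 1 - (k/2)*sigma2*t^2 <= lam2/(lam2 + t^2) for every t,
# provided lam2 >= 2/(k*sigma2); equivalently 1 <= (k/2)*sigma2*(lam2 + t^2).
lam2 = 2.0 / (k * sigma2)
for i in range(2001):
    t = 0.01 * i
    assert 1 - 0.5 * k * sigma2 * t ** 2 <= lam2 / (lam2 + t ** 2) + 1e-12

# Claim 2: L*t^{-4} <= lam2/(lam2 + t^2) as soon as t^2 >= B(lam), with
# B(lam) = (L + sqrt(L^2 + 4*L*lam2^2))/(2*lam2), and B(lam) <= 2*sqrt(L)
# whenever lam2 >= (2*sqrt(L))/3 (with equality at that lam2).
lam2 = 2.0 * math.sqrt(L) / 3.0
B = (L + math.sqrt(L ** 2 + 4 * L * lam2 ** 2)) / (2 * lam2)
assert B <= 2 * math.sqrt(L) + 1e-9
for i in range(1, 2001):
    t2 = B + 0.01 * i  # t2 plays the role of |xi|^2
    assert L / t2 ** 2 <= lam2 / (lam2 + t2) + 1e-12
```

The grid and tolerances are of course arbitrary; the point is only that the sign analysis behind \(B(\lambda )\) behaves as stated.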

A.6 Proof of Proposition 5

Suppose first that \(\chi (\text {d} \mathbf{x}) = f(\mathbf{x}) \text {d} \mathbf{x}\) for some \(f\) in \(\text {L}^1({\mathbb {R}}^3)\). Then \(\varDelta \hat{\chi }(\varvec{\xi }) = -\int _{{\mathbb {R}}^3}|\mathbf{x}|^2 f(\mathbf{x}) e^{i \mathbf{x}\cdot \varvec{\xi }} \text {d} \mathbf{x}\) and, by the Plancherel identity,

$$\begin{aligned} \int \limits _{{\mathbb {R}}^3}|f(\mathbf{x})|^2 (1 + |\mathbf{x}|^4) \text {d} \mathbf{x}= \left( \frac{1}{2\pi }\right) ^3 \int \limits _{{\mathbb {R}}^3}\left[ |\hat{\chi }(\varvec{\xi })|^2 + |\varDelta \hat{\chi }(\varvec{\xi })|^2\right] \text {d} \varvec{\xi }. \end{aligned}$$

Now, note that \(|\chi |({\mathbb {R}}^3) = \int _{{\mathbb {R}}^3}|f(\mathbf{x})| \text {d} \mathbf{x}\) and apply the Cauchy–Schwarz inequality to get

$$\begin{aligned} \int \limits _{{\mathbb {R}}^3}|f(\mathbf{x})| \text {d} \mathbf{x}\ \le \ \left( \,\,\int \limits _{{\mathbb {R}}^3}\frac{\text {d} \mathbf{x}}{1 + |\mathbf{x}|^4}\right) ^{1/2} \cdot \left( \,\,\int \limits _{{\mathbb {R}}^3}|f(\mathbf{x})|^2 (1 + |\mathbf{x}|^4)\,\text {d} \mathbf{x}\right) ^{1/2} \end{aligned}$$

where \(\int _{{\mathbb {R}}^3}\frac{\text {d} \mathbf{x}}{1 + |\mathbf{x}|^4} = \sqrt{2}\pi ^2\). For a general \(\chi \), consider the convolution \(\chi _{\epsilon }\) of \(\chi \) with the Gaussian distribution of zero mean and covariance matrix \(\epsilon ^2 \text {I}\). Since \(\chi _{\epsilon }\) is absolutely continuous, the first part of the proof gives

$$\begin{aligned} |\chi _{\epsilon }|({\mathbb {R}}^3)\ \le 2^{-5/4}\pi ^{-1/2} \left( \,\,\int \limits _{{\mathbb {R}}^3}\left[ |\hat{\chi }_{\epsilon }(\varvec{\xi })|^2 + |\varDelta \hat{\chi }_{\epsilon }(\varvec{\xi })|^2\right] \text {d} \varvec{\xi }\right) ^{1/2} \end{aligned}$$

and thereby, taking account of \(|\chi |({\mathbb {R}}^3) \le \liminf _{\epsilon \downarrow 0} |\chi _{\epsilon }|({\mathbb {R}}^3)\) and letting \(\epsilon \downarrow 0\),

$$\begin{aligned} |\chi |({\mathbb {R}}^3)\ \le 2^{-5/4}\pi ^{-1/2} \left( \,\,\int \limits _{{\mathbb {R}}^3}\left[ |\hat{\chi }(\varvec{\xi })|^2 + |\varDelta \hat{\chi }(\varvec{\xi })|^2\right] \text {d} \varvec{\xi }\right) ^{1/2}. \end{aligned}$$

To complete the argument, observe that \(\sup _{B \in {\fancyscript{B}}({{\mathbb {R}}}^3)} |\chi (B)| \le |\chi |({\mathbb {R}}^3)\).
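The constant \(2^{-5/4}\pi ^{-1/2}\) above is just \((\sqrt{2}\pi ^2)^{1/2}(2\pi )^{-3/2}\), i.e. the Cauchy–Schwarz constant combined with the Plancherel factor. Both this identity and the value \(\int _{{\mathbb {R}}^3}(1 + |\mathbf{x}|^4)^{-1}\text {d}\mathbf{x}= \sqrt{2}\pi ^2\) can be verified numerically with the sketch below (plain Python, composite Simpson rule on the radial integral, with the truncation tail approximated by \(\int _{R}^{\infty }r^{-2}\text {d}r = 1/R\)):

```python
import math

# Spherical coordinates: int_{R^3} dx/(1+|x|^4) = 4*pi * int_0^inf r^2/(1+r^4) dr.
def f(r):
    return r * r / (1.0 + r ** 4)

# Composite Simpson rule on [0, R_cut]; since r^2/(1+r^4) ~ r^{-2} at infinity,
# the tail beyond R_cut is approximated by int_{R_cut}^inf r^{-2} dr = 1/R_cut.
R_cut, n = 200.0, 200000  # n must be even
h = R_cut / n
s = f(0.0) + f(R_cut)
for i in range(1, n):
    s += (4 if i % 2 else 2) * f(i * h)
radial = s * h / 3.0 + 1.0 / R_cut

integral = 4 * math.pi * radial
assert abs(integral - math.sqrt(2) * math.pi ** 2) < 1e-3

# Constant in the final bound: (sqrt(2)*pi^2)^(1/2) * (2*pi)^(-3/2) = 2^(-5/4)*pi^(-1/2).
const = math.sqrt(math.sqrt(2) * math.pi ** 2) * (2 * math.pi) ** -1.5
assert abs(const - 2 ** -1.25 * math.pi ** -0.5) < 1e-12
```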

A.7 Proof of Proposition 6

Taking account of (47), note that

$$\begin{aligned} {\mathsf{E}}_t\left[ (S(\mathbf{u}))^2 \ | \ {\fancyscript{H}} \right] = \sum _{j=1}^{\nu } \pi _{j, \nu }^2 {\mathsf{E}}_t\left[ (\mathbf{V}_j \cdot \varvec{\psi }_{j, \nu }(\mathbf{u}))^2 \ | \ {\fancyscript{H}} \right] \le {\mathfrak {m}}_2 \end{aligned}$$

with \({\mathfrak {m}}_2 = 3\). The equality follows from the stochastic independence of the \(\mathbf{V}_j\)’s, while the inequality is obtained by combining the Cauchy–Schwarz inequality with (23) and the identity \(|\varvec{\psi }_{j, \nu }(\mathbf{u})| = 1\). Thus, (55) holds true for \(h = 2\) with \(g_2 = {\mathfrak {m}}_2\). The case \(h = 1\) can be derived from the case \(h = 2\) thanks to the conditional Lyapunov inequality, after putting \(g_1 = \sqrt{g_2}\). When \(h \ge 3\), an inequality due to Rosenthal (see Section 2.3 in [58]) yields

$$\begin{aligned} {\mathsf{E}}_t\left[ |S(\mathbf{u})|^h \ | \ {\fancyscript{H}} \right]&\le c(h) \left\{ \sum _{j = 1}^{\nu } {\mathsf{E}}_t\left[ |\pi _{j, \nu } \mathbf{V}_j \cdot \varvec{\psi }_{j, \nu }(\mathbf{u})|^h \ | \ {\fancyscript{H}} \right] \right. \\&\left. + \left( \sum _{j = 1}^{\nu } {\mathsf{E}}_t\left[ |\pi _{j, \nu } \mathbf{V}_j \cdot \varvec{\psi }_{j, \nu }(\mathbf{u})|^2\ | \ {\fancyscript{H}} \right] \right) ^{h/2} \right\} \end{aligned}$$

where \(c(h)\) is a positive constant depending only on \(h\). An additional application of the Cauchy–Schwarz inequality, combined with (23) and \(|\varvec{\psi }_{j, \nu }(\mathbf{u})| = 1\), gives

$$\begin{aligned} {\mathsf{E}}_t\left[ |S(\mathbf{u})|^h \ | \ {\fancyscript{H}} \right] \!\le \! c(h) \left\{ {\mathfrak {m}}_h \sum _{j = 1}^{\nu } |\pi _{j, \nu }|^h \!+\! \left( {\mathfrak {m}}_2 \sum _{j = 1}^{\nu } \pi _{j, \nu }^2 \right) ^{h/2}\right\} \!\le \! c(h) \left\{ {\mathfrak {m}}_h \!+\! {\mathfrak {m}}_{2}^{h/2} \right\} \end{aligned}$$

which entails (55) with \(g_h = c(h) \{{\mathfrak {m}}_h + {\mathfrak {m}}_{2}^{h/2}\}\). Now, \(\frac{\partial ^h}{\partial \rho ^h}\hat{{\mathcal {N}}}(\rho ; \mathbf{u})\) exists and is uniformly bounded by \(g_h\) for \(h = 1, \dots , 2k\). Then, since \(\hat{{\mathcal {M}}}(\rho \mathbf{u}) = {\mathsf{E}}_t[\hat{{\mathcal {N}}}(\rho ; \mathbf{u}) \ | \ {\fancyscript{G}} ]\) and the interchange of derivative and expectation is valid here, one gets (56). Finally, taking \(\mathbf{u}= \mathbf{e}_i\) in (56) yields \(\int _{{\mathbb {R}}^3}v_{i}^{2k}{\mathcal {M}}(\text {d}\mathbf{v}) < +\infty \) for \(i = 1, 2, 3\) which, in turn, entails (57).

A.8 Proof of Proposition 7

The definition of the \(k\)-th Hermite polynomial shows that \(\frac{\text {d}^k}{\text {d}\rho ^k} e^{-\rho ^2/2} = 2^{-k/2} H_k\left( \frac{\rho }{\sqrt{2}}\right) e^{-\rho ^2/2}\) for every \(k\) in \({\mathbb {N}}_0\) and \(\rho \) in \({\mathbb {R}}\). See, for example, (1) in Section 2.IV of [59]. Moreover, according to (\(9_2\)) therein,

$$\begin{aligned} H_k\left( \frac{\rho }{\sqrt{2}}\right) = k! \sum _{h =0}^{[k/2]} \frac{(-1)^{k+h}}{h! (k - 2h)!} (\sqrt{2} \rho )^{k - 2h} \end{aligned}$$

where \([n]\) stands for the integral part of \(n\), and hence

$$\begin{aligned} \int \limits _{x}^{+\infty } \left( \frac{\text {d}^k}{\text {d}\rho ^k} e^{-\rho ^2/2} \right) ^2 \rho ^m \text {d}\rho = \sum _{h =0}^{[k/2]} \sum _{l =0}^{[k/2]} \gamma _{k, h, l} \int \limits _{x}^{+\infty } \rho ^{m + 2(k - h -l)} e^{-\rho ^2} \text {d}\rho \end{aligned}$$

with \(\gamma _{k, h, l} := \frac{(-2)^{-h -l}(k!)^2}{h! l! (k - 2h)! (k - 2l)!}\). Now, take account of the following elementary inequalities: \(\int _{x}^{+\infty } e^{- \rho ^2/2} \text {d}\rho \le \ \frac{1}{x} e^{- x^2/2}\) for \(x > 0\), and \(\rho ^t e^{-\rho ^2/2} \le \ (t/e)^{t/2}\) for \(\rho \ge 0\) and \(t \ge 0\), with the proviso that \(0^0 := 1\) when \(t = 0\). Whence,

$$\begin{aligned}&\int \limits _{x}^{+\infty } \left( \frac{\text {d}^k}{\text {d}\rho ^k} e^{-\rho ^2/2} \right) ^2 \rho ^m \text {d}\rho \\&\quad \le \sum _{h =0}^{[k/2]} \sum _{l =0}^{[k/2]} |\gamma _{k, h, l}| \left( \frac{m + 2(k - h -l)}{e}\right) ^{m/2 + k - h -l} \frac{1}{x} e^{- x^2/2} \\&\quad \le c(m, s, k) x^{-s} \end{aligned}$$

with \(c(m, s, k) := \sum _{h =0}^{[k/2]} \sum _{l =0}^{[k/2]} |\gamma _{k, h, l}| \left( \frac{m + 2(k - h -l)}{e}\right) ^{m/2 + k - h -l} \left( \frac{s - 1}{e}\right) ^{(s - 1)/2}\).
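As a quick numerical cross-check of the two Hermite identities above (in the convention of [59], where the factor \((-1)^k\) is absorbed into the sum via \((-1)^{k+h}\)), one can compare the displayed formula, say for \(k = 3\), with the derivative \(\frac{\text {d}^3}{\text {d}\rho ^3} e^{-\rho ^2/2} = (3\rho - \rho ^3)e^{-\rho ^2/2}\) obtained by direct differentiation:

```python
import math

def hermite_term(k, rho):
    # The displayed sum: k! * sum_h (-1)^(k+h)/(h!(k-2h)!) * (sqrt(2)*rho)^(k-2h),
    # i.e. H_k(rho/sqrt(2)) in the convention of [59].
    total = 0.0
    for h in range(k // 2 + 1):
        total += ((-1) ** (k + h)
                  / (math.factorial(h) * math.factorial(k - 2 * h))
                  ) * (math.sqrt(2) * rho) ** (k - 2 * h)
    return math.factorial(k) * total

def d3_gauss(rho):
    # Direct differentiation: f = exp(-rho^2/2), f' = -rho*f,
    # f'' = (rho^2 - 1)*f, f''' = (3*rho - rho^3)*f.
    return (3 * rho - rho ** 3) * math.exp(-rho ** 2 / 2)

for i in range(-30, 31):
    rho = 0.1 * i
    lhs = 2 ** -1.5 * hermite_term(3, rho) * math.exp(-rho ** 2 / 2)
    assert abs(lhs - d3_gauss(rho)) < 1e-10
```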

A.9 Proof of Propositions 8 and 10

The main task is to prove (64), (66) and (93). The remaining inequalities (65) and (67) can be derived by interchanging derivative with expectation in the equality \(\hat{{\mathcal {M}}}(\rho \mathbf{u}) = {\mathsf{E}}_t[\hat{{\mathcal {N}}}(\rho ; \mathbf{u}) \ | \ {\fancyscript{G}}]\), since \(\varPsi (\rho )\) is a \({\fancyscript{G}}\)-measurable random variable for every fixed \(\rho \). To start, (64) follows from the combination of (32), (49) and (63), upon recalling that \(|\varvec{\psi }_{j, \nu }| = 1\). With a view to proving (66) and (93), it is worth noting that \(0 \le \rho \le \text {R}\) entails \(\sup _{\mathbf{u}\in S^2} \big |\hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{u})) - 1\big | \le 19/128\) for \(j = 1, \dots , \nu \) and for every choice of \(\text {B}\) in (29), as shown in [30]. This paves the way for considering the principal value of the logarithm and then for writing

$$\begin{aligned} \hat{{\mathcal {N}}}(\rho ; \mathbf{u}) = \exp \left\{ \sum _{j=1}^{\nu } \text {Log}[ \hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{u}))]\right\} . \end{aligned}$$

The next step concerns the computation of certain derivatives of \(\hat{{\mathcal {N}}}(\rho ; \mathbf{u})\) by means of the above identity. To this aim, the system of coordinates introduced in Sect. 2.2.2 now proves useful. Then, for \(k = 1, \dots , 4\),

$$\begin{aligned} \frac{\partial }{\partial x} \hat{{\mathcal {N}}}(\rho ; \mathbf{h}_k(u, v))&= \hat{{\mathcal {N}}}(\rho ; \mathbf{h}_k(u, v)) \sum _{j=1}^{\nu } \frac{\partial }{\partial x} \text {Log}[ \hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{h}_k(u, v)))] \end{aligned}$$
$$\begin{aligned} \frac{\partial ^2}{\partial x^2} \hat{{\mathcal {N}}}(\rho ; \mathbf{h}_k(u, v)) \!&= \! \hat{{\mathcal {N}}}(\rho ; \mathbf{h}_k(u, v))\left\{ \left( \sum _{j=1}^{\nu } \frac{\partial }{\partial x} \text {Log}[ \hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{h}_k(u, v)))]\right) ^2 \right. \nonumber \\&\left. + \sum _{j=1}^{\nu } \frac{\partial ^2}{\partial x^2} \text {Log}[ \hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{h}_k(u, v)))]\right\} \end{aligned}$$

where \(x\) can be \(\rho \), \(u\) or \(v\). To bound each of these products, use (64) as far as \(\hat{{\mathcal {N}}}(\rho ; \mathbf{h}_k)\) is concerned, and proceed with the detailed computation of bounds for the derivatives of the logarithms. As a starting point for all these calculations, consider the following equalities from [30]:

$$\begin{aligned} \hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{u}))&= 1 - \frac{1}{2} \rho ^2\pi _{j, \nu }^2 \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^2 \mu _0(\text {d}\mathbf{v}) \nonumber \\&- \frac{i}{3!}\rho ^3\pi _{j, \nu }^3 \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^3 \mu _0(\text {d}\mathbf{v}) + R_j(\rho , \mathbf{u}) \quad \quad \ \end{aligned}$$


$$\begin{aligned}&\text {Log}[ \hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{u}))] = - \frac{1}{2} \rho ^2\pi _{j, \nu }^2 \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^2 \mu _0(\text {d}\mathbf{v}) \nonumber \\&\quad - \frac{i}{3!}\rho ^3\pi _{j, \nu }^3 \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^3 \mu _0(\text {d}\mathbf{v}) + R_j(\rho , \mathbf{u}) - \Phi (w_j(\rho , \mathbf{u})) w_{j}^{2}(\rho , \mathbf{u}). \nonumber \\ \end{aligned}$$

Here, \(w_j(\rho , \mathbf{u}) := \hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{u})) - 1\),

$$\begin{aligned} \Phi (z) := \frac{z - \text {Log}(1 + z)}{z^2} = \int \limits _{0}^{+\infty }\left( \,\,\int \limits _{x}^{+\infty } \frac{s - x}{s} e^{-s} \text {d}s\right) e^{-z x} \text {d}x \quad (\mathfrak {R}z > -1) \end{aligned}$$

and the remainder \(R_j(\rho , \mathbf{u})\) can assume one of the following forms:

$$\begin{aligned}&\frac{1}{3!} \rho ^4 \pi _{j, \nu }^4 \int \limits _{{\mathbb {R}}^3}\int \limits _{0}^{1} [\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^4 (1 - s)^3 e^{i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}] s} \text {d}s \mu _0(\text {d}\mathbf{v}) \\&\quad = -\frac{i}{2} \rho ^3 \pi _{j, \nu }^3 \int \limits _{{\mathbb {R}}^3}\int \limits _{0}^{1} [\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^3 (1 - s)^2 (e^{i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}] s} - 1) \text {d}s \mu _0(\text {d}\mathbf{v}) \\&\quad = -\rho ^2 \pi _{j, \nu }^2 \int \limits _{{\mathbb {R}}^3}\int \limits _{0}^{1} [\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^2 (1 - s) \\&\qquad \times \left( e^{i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}] s} - 1 - i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}] s \right) \text {d}s \mu _0(\text {d}\mathbf{v}). \end{aligned}$$
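For later use (the bound \(|\Phi ^{(l)}(w_j)| \le |\Phi ^{(l)}(-\frac{19}{128})|\) below rests on the complete monotonicity of \(\Phi \)), the behaviour of \(\Phi \) can be probed numerically from its closed form: on the real interval \([-\frac{19}{128}, \frac{19}{128}]\) it should be positive, decreasing and convex. A rough finite-difference sketch in plain Python (grid and tolerances arbitrary):

```python
import math

def phi(z):
    # Phi(z) = (z - Log(1+z))/z^2, extended by continuity at z = 0,
    # where Phi(0) = 1/2 (leading terms of the expansion: 1/2 - z/3 + ...).
    if abs(z) < 1e-6:
        return 0.5 - z / 3.0
    return (z - math.log1p(z)) / z ** 2  # log1p avoids cancellation near 0

a = 19.0 / 128.0
h = 1e-4
for i in range(201):
    z = -a + 2 * a * i / 200
    d1 = (phi(z + h) - phi(z - h)) / (2 * h)              # ~ Phi'(z)
    d2 = (phi(z + h) - 2 * phi(z) + phi(z - h)) / h ** 2  # ~ Phi''(z)
    assert phi(z) > 0 and d1 < 0 and d2 > 0
```

In particular \(\Phi \), \(|\Phi ^{'}|\) and \(|\Phi ^{''}|\) are all maximal at the left endpoint \(-\frac{19}{128}\), which is exactly how they are bounded in the estimates that follow.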

The aim is now to show that \(\sum _{j=1}^{\nu }\frac{\partial ^l}{\partial x^l} \text {Log}[ \hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu })]\) admits, for \(l = 1, 2\), an upper bound expressible as a non-random polynomial in \(\rho \), independent of \(\mathbf{u}\).

As far as the derivatives w.r.t. \(\rho \) are concerned, for the first two terms on the RHS of (145) one gets

$$\begin{aligned}&\sup _{\mathbf{u}\in S^2} \left| \frac{\partial ^l}{\partial \rho ^l}\left[ - \frac{1}{2} \rho ^2\pi _{j, \nu }^2 \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu } \cdot \mathbf{v}]^2 \text {d}\mu _0 - \frac{i}{3!}\rho ^3\pi _{j, \nu }^3 \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu } \cdot \mathbf{v}]^3 \text {d}\mu _0 \right] \right| \nonumber \\&\quad \le \frac{1}{(2-l)!} {\mathfrak {m}}_2 \pi _{j, \nu }^2 \rho ^{2-l} + \frac{1}{(3-l)!} {\mathfrak {m}}_3|\pi _{j, \nu }|^3 \rho ^{3-l} \end{aligned}$$

for \(l = 0, 1, 2\), thanks to the fact that \(|\varvec{\psi }_{j, \nu }| = 1\). Moreover, recall that \({\mathfrak {m}}_2 = 3\) in view of (47). Standard manipulations of the above expressions of \(R_j(\rho , \mathbf{u})\) lead to

$$\begin{aligned} \sup _{\mathbf{u}\in S^2} \left| \frac{\partial ^l}{\partial \rho ^l} R_j(\rho , \mathbf{u})\right| \le c_l(R) {\mathfrak {m}}_4\pi _{j, \nu }^{4} \rho ^{4-l} \end{aligned}$$

for \(l = 0, 1, 2\), with \(c_0(R) = 1/24\), \(c_1(R) = 1/6\) and \(c_2(R) = 1/3\). See [30] for the details. After recalling (60), this last inequality plainly entails

$$\begin{aligned} \sup _{\begin{array}{c} \rho \in [0, \text {R}] \\ \mathbf{u}\in S^2 \end{array}} \sum _{j=1}^{\nu } \left| \frac{\partial ^l}{\partial \rho ^l} R_j(\rho , \mathbf{u})\right| \le \left( \frac{1}{2}\right) ^{4-l} c_l(R) {\mathfrak {m}}_4^{l/4}. \end{aligned}$$

As to the last term in (145), one has

$$\begin{aligned} \left| \frac{\partial }{\partial x} \left( \Phi (w_j) w_{j}^{2}\right) \right| \le |\Phi ^{'}(w_j)| \cdot \left| \frac{\partial }{\partial x} w_j \right| \cdot |w_j|^2 + 2|\Phi (w_j)| \cdot \left| \frac{\partial }{\partial x} w_j \right| \cdot |w_j|\qquad \quad \end{aligned}$$


$$\begin{aligned}&\left| \frac{\partial ^2}{\partial x^2} \left( \Phi (w_j) w_{j}^{2}\right) \right| \le |\Phi ^{''}(w_j)| \cdot \left| \frac{\partial }{\partial x} w_j \right| ^2 \cdot |w_j|^2 \nonumber \\&\quad +\, |\Phi ^{'}(w_j)| \left( 4\left| \frac{\partial }{\partial x} w_j \right| ^2 \cdot |w_j| + \left| \frac{\partial ^2}{\partial x^2} w_j \right| \cdot |w_j|^2\right) \nonumber \\&\quad +\, |\Phi (w_j)| \left( 2 \left| \frac{\partial ^2}{\partial x^2} w_j \right| \cdot |w_j| + 2 \left| \frac{\partial }{\partial x} w_j \right| ^2\right) . \end{aligned}$$

Since \(\Phi \) is completely monotone, \(|w_j| \le \frac{19}{128}\) yields \(|\Phi ^{(l)}(w_j)| \le |\Phi ^{(l)}(-\frac{19}{128})|\) for every \(l\) in \({\mathbb {N}}\). Then, combining (144) with (146)–(147) gives

$$\begin{aligned} \sup _{\mathbf{u}\in S^2} \left| \frac{\partial ^l}{\partial \rho ^l} w_j(\rho , \mathbf{u}) \right|&\le \frac{1}{(2-l)!}{\mathfrak {m}}_{2} \pi _{j, \nu }^2 \rho ^{2-l} + \frac{1}{(3-l)!} {\mathfrak {m}}_{3}|\pi _{j, \nu }|^3 \rho ^{3-l} \nonumber \\&+\, c_{l}(R) {\mathfrak {m}}_4\pi _{j, \nu }^{4} \rho ^{4-l} \end{aligned}$$
$$\begin{aligned} \sup _{\mathbf{u}\in S^2} \left| \frac{\partial ^l}{\partial \rho ^l} w_j(\rho , \mathbf{u}) \right| ^2&\le \frac{3}{[(2-l)!]^2}{\mathfrak {m}}_{2}^{2} \pi _{j, \nu }^4 \rho ^{4-2l} + \frac{3}{[(3-l)!]^2} {\mathfrak {m}}_{3}^{2}\pi _{j, \nu }^6 \rho ^{6-2l} \nonumber \\&+\, 3c^{2}_{l}(R) {\mathfrak {m}}_4^2 \pi _{j, \nu }^{8} \rho ^{8-2l} \end{aligned}$$

for \(l = 0, 1, 2\). By virtue of the Lyapunov inequality and Theorem 19 in [43], (152) entails

$$\begin{aligned} \sup _{\begin{array}{c} \rho \in [0, \text {R}] \\ \mathbf{u}\in S^2 \end{array}} \sum _{j=1}^{\nu } \left| \frac{\partial ^l}{\partial \rho ^l} w_j(\rho , \mathbf{u})\right| ^2 \le k_l(w) {\mathfrak {m}}_4^{l/2} \end{aligned}$$

for \(l = 0, 1, 2\), with

$$\begin{aligned} k_l(w) := 4^{l-2}\left[ \frac{3}{[(2-l)!]^2} + \frac{3}{4[(3-l)!]^2} + \frac{3}{16} c^{2}_{l}(R)\right] . \end{aligned}$$

Thus, starting from (142)–(143) and utilizing (146)–(148), (149)–(150) and (153) with \(x = \rho \), one can define the \(\wp _k\)’s in (66)–(67) as follows:

$$\begin{aligned} \wp _1(\rho )&= \frac{1}{2} {\mathfrak {m}}_3 \rho ^2 + {\mathfrak {m}}_2 \rho + \frac{1}{8} c_1(R) {\mathfrak {m}}_4^{1/4} + \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| k_0(w) \sqrt{k_1(w)} {\mathfrak {m}}_4^{1/4} \\&+ \left| \Phi \left( -\frac{19}{128}\right) \right| \cdot \left[ k_0(w) + k_1(w){\mathfrak {m}}_4^{1/2}\right] \end{aligned}$$


$$\begin{aligned} \wp _2(\rho )&= \wp _{1}^{2}(\rho ) + {\mathfrak {m}}_3 \rho + {\mathfrak {m}}_2 + \frac{1}{4}c_2(R) {\mathfrak {m}}_4^{1/2} + \left| \Phi ^{''}\left( -\frac{19}{128}\right) \right| k_0(w) k_1(w) {\mathfrak {m}}_4^{1/2}\\&+ \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| \cdot \left[ 4 \sqrt{k_0(w)} k_1(w){\mathfrak {m}}_4^{1/2} + k_0(w)\sqrt{k_2(w)} {\mathfrak {m}}_4^{1/2}\right] \\&+ \left| \Phi \left( -\frac{19}{128}\right) \right| \cdot \left[ k_0(w) + 2k_1(w){\mathfrak {m}}_4^{1/2} + k_2(w){\mathfrak {m}}_4\right] . \end{aligned}$$

This completes the proof of (66), showing also that the bound therein is independent of the choice of \(\text {B}\) in (29).

To prove (93), one begins by considering \((u, v)\) in \(D_k\) and taking \(\text {B}\) in (29) equal to \(\text {B}_k\) according to (90)–(91). In this way, every map \(\varvec{\psi }_{j, \nu ; k} : (u, v) \mapsto \varvec{\psi }_{j, \nu }(\mathbf{h}_k(u, v))\), and hence the map \(\hat{{\mathcal {N}}}_k : (u, v) \mapsto \hat{{\mathcal {N}}}(\rho ; \mathbf{h}_k(u, v))\), turns out to belong to \(\text {C}^4(D_k)\) for \(k = 1, \dots , 4\). Then, one resorts to (142)–(143), with \(x\) standing either for \(u\) or \(v\), and uses (64) to bound the common factor \(\hat{{\mathcal {N}}}_k\). As to the derivatives w.r.t. \(x\), one evaluates the expression of \(\frac{\partial ^l}{\partial x^l} [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]^m\) for \(l = 1, 2\), and applies the Cauchy–Schwarz inequality. Whence, after recalling (29) and introducing the \(\text {L}^2\) norm \(\Vert \cdot \Vert _{*}\) of matrices, one gets

$$\begin{aligned} \Big | \frac{\partial ^l}{\partial x^l}[\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]^m \Big | \le |\mathbf{v}|^m \sum _{h = 1}^{l} \frac{m!}{(m-h)!} \left| \left| \frac{\partial ^{l-h+1}}{\partial x^{l-h+1}} \text {B}_k \right| \right| _{*}^{h} \end{aligned}$$

when \(l = 1, 2\) and \(m \ge l\). Since \(\Vert \frac{\partial ^s}{\partial x^s} \text {B}_k \Vert _{*} \le \sqrt{3}\) for every \(s\) in \({\mathbb {N}}\), one has

$$\begin{aligned}&\sup _{(u, v) \in D_k} \left| \frac{\partial ^l}{\partial x^l}\left[ - \frac{1}{2} \rho ^2\pi _{j, \nu }^2 \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]^2 \text {d}\mu _0 - \frac{i}{3!}\rho ^3\pi _{j, \nu }^3 \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]^3 \text {d}\mu _0 \right] \right| \nonumber \\&\quad \le \left( \sum _{h = 1}^{l} 3^{h/2}\right) {\mathfrak {m}}_2\pi _{j, \nu }^2\rho ^2 + \left( \sum _{h = 1}^{l} \frac{3^{h/2}}{(3-h)!} \right) {\mathfrak {m}}_3|\pi _{j, \nu }|^3 \rho ^3 \end{aligned}$$

for \(l = 1, 2\). Then, one proceeds with the study of the derivatives of the third term in the RHS of (145). As far as the first order derivative is concerned, one resorts to the second of the expressions of \(R_j\), given in the first part of this Appendix, to write

$$\begin{aligned} \frac{\partial }{\partial x} R_j(\rho , \mathbf{h}_k(u, v))&= -\frac{i}{2} \rho ^3 \pi _{j, \nu }^3 \int \limits _{{\mathbb {R}}^3}\,\,\int \limits _{0}^{1} (1 - s)^2 \left\{ (e^{i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] s} - 1)\right. \\&\left. \times \left( \frac{\partial }{\partial x} [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]^3 \right) + [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]^3 i \rho \pi _{j, \nu } s\right. \\&\quad \left. \times \left( \frac{\partial }{\partial x} [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] \right) e^{i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] s}\right\} \text {d}s \mu _0(\text {d}\mathbf{v}). \end{aligned}$$

By virtue of (154), one gets

$$\begin{aligned} \sup _{(u, v) \in D_k} \left| \frac{\partial }{\partial x} R_j(\rho , \mathbf{h}_k(u, v)) \right| \le \frac{\sqrt{3}}{6} {\mathfrak {m}}_4\pi _{j, \nu }^4 \rho ^4 \end{aligned}$$

which, recalling (60), entails

$$\begin{aligned} \sup _{\begin{array}{c} \rho \in [0, \text {R}] \\ (u, v) \in D_k \end{array}} \sum _{j = 1}^{\nu } \left| \frac{\partial }{\partial x} R_j(\rho , \mathbf{h}_k(u, v)) \right| \le \frac{\sqrt{3}}{96}. \end{aligned}$$

To compute the second order derivatives of \(R_j\) one employs the third of its expressions to write

$$\begin{aligned} \frac{\partial ^2}{\partial x^2} R_j(\rho , \mathbf{h}_k(u, v))&= -\pi _{j, \nu }^{2} \rho ^2 \int \limits _{{\mathbb {R}}^3}\,\,\int \limits _{0}^{1} (1 - s) \left\{ (e^{i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] s}\right. \\&- 1- i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] s) \left( \frac{\partial ^2}{\partial x^2} [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]^2 \right) \\&+ 2i \rho \pi _{j, \nu } s\ (e^{i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] s} - 1) \cdot \left( \frac{\partial }{\partial x} [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]^2 \right) \\&\times \left( \frac{\partial }{\partial x} [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] \right) + i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]^2 s\ \\&\times \left[ (e^{i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] s} - 1)\cdot \left( \frac{\partial ^2}{\partial x^2} [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] \right) \right. \\&\left. \left. + i \rho \pi _{j, \nu } s\ e^{i \rho \pi _{j, \nu } [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}] s}\ \left( \frac{\partial }{\partial x} [\varvec{\psi }_{j, \nu ; k} \cdot \mathbf{v}]\right) ^2 \right] \right\} \text {d}s \mu _0(\text {d}\mathbf{v}). \end{aligned}$$

From (154) and the inequality \(|e^{i x} - \sum _{r = 0}^{N - 1} (i x)^r/r!| \le |x|^N/N!\) one obtains the bound

$$\begin{aligned} \sup _{(u, v) \in D_k} \left| \frac{\partial ^2}{\partial x^2} R_j(\rho , \mathbf{h}_k(u, v)) \right| \le \frac{\sqrt{3} + 9}{6} {\mathfrak {m}}_4\pi _{j, \nu }^4 \rho ^4 \end{aligned}$$

which, taking account of (60), becomes

$$\begin{aligned} \sup _{\begin{array}{c} \rho \in [0, \text {R}] \\ (u, v) \in D_k \end{array}} \sum _{j = 1}^{\nu } \left| \frac{\partial ^2}{\partial x^2} R_j(\rho , \mathbf{h}_k(u, v)) \right| \le \frac{\sqrt{3} + 9}{96}. \end{aligned}$$

Finally, as to the remaining term in the RHS of (145), one utilizes (149)–(150) with \(x = u, v\). Then, combining (144) with (155) and (156) gives

$$\begin{aligned} \sup _{(u, v) \in D_k} \left| \frac{\partial }{\partial x} w_j \right|&\le \sqrt{3} {\mathfrak {m}}_2 \pi _{j, \nu }^2 \rho ^2 + \frac{\sqrt{3}}{2} {\mathfrak {m}}_3|\pi _{j, \nu }|^3 \rho ^3 + \frac{\sqrt{3}}{6} {\mathfrak {m}}_4\pi _{j, \nu }^{4} \rho ^4 \end{aligned}$$
$$\begin{aligned} \sup _{(u, v) \in D_k} \left| \frac{\partial }{\partial x} w_j \right| ^2&\le 9 {\mathfrak {m}}_{2}^{2} \pi _{j, \nu }^4 \rho ^4 + \frac{9}{4} {\mathfrak {m}}_{3}^{2}\pi _{j, \nu }^6 \rho ^6 + \frac{1}{4} {\mathfrak {m}}_4^2 \pi _{j, \nu }^{8} \rho ^8 . \end{aligned}$$

By virtue of the Lyapunov inequality and Theorem 19 in [43], (161) yields

$$\begin{aligned} \sup _{\begin{array}{c} \rho \in [0, \text {R}] \\ (u, v) \in D_k \end{array}} \sum _{j=1}^{\nu } \left| \frac{\partial }{\partial x} w_j(\rho , \mathbf{h}_k(u, v)) \right| ^2 \le \frac{613}{1024}. \end{aligned}$$

As for the second order derivatives, from the combination of (144) with (155) and (158) one gets

$$\begin{aligned}&\sup _{(u, v) \in D_k} \left| \frac{\partial ^2}{\partial x^2} w_j(\rho , \mathbf{h}_k(u, v)) \right| ^2 \le 3\left( \sum _{h = 1}^{2} 3^{h/2}\right) ^2 {\mathfrak {m}}_{2}^{2} \pi _{j, \nu }^4 \rho ^4 \nonumber \\&\quad + 3\left( \sum _{h = 1}^{2} \frac{3^{h/2}}{(3-h)!} \right) ^2 {\mathfrak {m}}_{3}^{2}\pi _{j, \nu }^6 \rho ^6 + 3\left( \frac{\sqrt{3} + 9}{6}\right) ^2 {\mathfrak {m}}_4^2 \pi _{j, \nu }^{8} \rho ^8 \end{aligned}$$

and hence

$$\begin{aligned} \sup _{\begin{array}{c} \rho \in [0, \text {R}] \\ (u, v) \in D_k \end{array}} \sum _{j=1}^{\nu } \left| \frac{\partial ^2}{\partial x^2} w_j(\rho , \mathbf{h}_k(u, v)) \right| ^2 \le W_{2}^{*} \end{aligned}$$


$$\begin{aligned} W_{2}^{*} := \frac{3}{16}\left( \,\sum _{h = 1}^{2} 3^{h/2}\right) ^2 + \frac{3}{64}\left( \,\sum _{h = 1}^{2} \frac{3^{h/2}}{(3-h)!} \right) ^2 + \frac{3}{256}\left( \frac{\sqrt{3} + 9}{6}\right) ^2. \end{aligned}$$

In view of (149), (151), (153), and (162),

$$\begin{aligned}&\sup _{(u, v) \in D_k} \sum _{j=1}^{\nu } \Big | \frac{\partial }{\partial x} \left( \Phi (w_j) w_{j}^{2}\right) \Big | \le \sqrt{\frac{613}{1024}} \left( \frac{1}{2}{\mathfrak {m}}_2 \rho ^2 + \frac{1}{6} {\mathfrak {m}}_3 |\rho |^3 + \frac{1}{24} {\mathfrak {m}}_4\rho ^4\right) \nonumber \\&\quad \quad \times \left( \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| \sqrt{k_0(w)} + 2\left| \Phi \left( -\frac{19}{128}\right) \right| \right) \end{aligned}$$

and, utilizing (145), (155), (156) and (165), one concludes that

$$\begin{aligned}&\sup _{(u, v) \in D_k} \sum _{j=1}^{\nu } \left| \frac{\partial }{\partial x} \text {Log}[ \hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{h}_k(u, v)))] \right| \le \sqrt{3}{\mathfrak {m}}_2\rho ^2 + \frac{\sqrt{3}}{2}{\mathfrak {m}}_3\rho ^3 \nonumber \\&\quad + \frac{\sqrt{3}}{6} {\mathfrak {m}}_4\rho ^4 + \sqrt{\frac{613}{1024}} \left( \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| \sqrt{k_0(w)} + 2\left| \Phi \left( -\frac{19}{128}\right) \right| \right) \nonumber \\&\quad \times \left( \frac{1}{2}{\mathfrak {m}}_2 \rho ^2 + \frac{1}{6} {\mathfrak {m}}_3 \rho ^3 + \frac{1}{24} {\mathfrak {m}}_4\rho ^4\right) \end{aligned}$$

for \(\rho \) in \([0, \text {R}]\). To obtain a bound of the same type for the second derivative, one can first combine (150), (151), (153) and (161)–(164) to get

$$\begin{aligned}&\sup _{(u, v) \in D_k} \sum _{j=1}^{\nu } \left| \frac{\partial ^2}{\partial x^2} \left( \Phi (w_j) w_{j}^{2}\right) \right| \nonumber \\&\quad \le \left[ \left| \Phi ^{''}\left( -\frac{19}{128}\right) \right| \frac{613}{1024}\sqrt{k_0(w)} + \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| \left( \frac{613}{256} + \sqrt{W_{2}^{*}k_0(w)}\right) \right. \nonumber \\&\qquad \left. + 2\left| \Phi \left( -\frac{19}{128}\right) \right| \sqrt{W_{2}^{*}}\right] \cdot \left( \frac{1}{2}{\mathfrak {m}}_2 \rho ^2 + \frac{1}{6} {\mathfrak {m}}_3 \rho ^3 + \frac{1}{24} {\mathfrak {m}}_4\rho ^4\right) \nonumber \\&\qquad + 2\left| \Phi (-\frac{19}{128})\right| \left( 9 {\mathfrak {m}}_{2}^{2} \rho ^4 + \frac{9}{4} {\mathfrak {m}}_{3}^{2}\rho ^6 + \frac{1}{4} {\mathfrak {m}}_4^2 \rho ^8\right) \end{aligned}$$

and, then, utilize (145), (155), (158) and (167), to conclude that

$$\begin{aligned}&\sup _{(u, v) \in D_k} \sum _{j=1}^{\nu } \left| \frac{\partial ^2}{\partial x^2} \text {Log}[\hat{\mu }_0(\rho \pi _{j, \nu } \varvec{\psi }_{j, \nu }(\mathbf{h}_k(u, v)))] \right| \nonumber \\&\quad \le \left( \sum _{h = 1}^{2} 3^{h/2}\right) {\mathfrak {m}}_2\rho ^2 + \left( \sum _{h = 1}^{2} \frac{3^{h/2}}{(3-h)!} \right) {\mathfrak {m}}_3 \rho ^3 + \frac{\sqrt{3} + 9}{6} {\mathfrak {m}}_4\rho ^4 \nonumber \\&\qquad + \left[ \left| \Phi ^{''}\left( -\frac{19}{128}\right) \right| \frac{613}{1024}\sqrt{k_0(w)} + \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| \left( \frac{613}{256} + \sqrt{W_{2}^{*}k_0(w)}\right) \right. \nonumber \\&\left. \qquad + 2\left| \Phi \left( -\frac{19}{128}\right) \right| \sqrt{W_{2}^{*}}\right] \cdot \left( \frac{1}{2}{\mathfrak {m}}_2 \rho ^2 + \frac{1}{6} {\mathfrak {m}}_3 \rho ^3 + \frac{1}{24} {\mathfrak {m}}_4\rho ^4\right) \nonumber \\&\qquad + 2\left| \Phi \left( -\frac{19}{128}\right) \right| \left( 9 {\mathfrak {m}}_{2}^{2} \rho ^4 + \frac{9}{4} {\mathfrak {m}}_{3}^{2}\rho ^6 + \frac{1}{4} {\mathfrak {m}}_4^2 \rho ^8\right) . \end{aligned}$$

At this stage, one observes that the RHSs of (166) and (168) can be written as \(\rho ^2 \wp _{L, 1}(\rho )\) and \(\rho ^2 \wp _{L, 2}(\rho )\), respectively, for specific non-random polynomials \(\wp _{L, 1}\) and \(\wp _{L, 2}\) with positive coefficients. As the final step of the proof, expressing \(\varDelta _{S^2}\) in local coordinates leads to

$$\begin{aligned}&\sup _{\mathbf{u}\in \Omega _k} |\varDelta _{S^2}\hat{{\mathcal {N}}}_k(\rho ; \mathbf{u})| \le 4(2 + \sqrt{3}) \\&\quad \times \sup _{(u, v) \in D_k} \left( \left| \frac{\partial ^2}{\partial u^2}\hat{{\mathcal {N}}}(\rho ; \mathbf{h}_k(u,v))\right| \!+\! \left| \frac{\partial }{\partial u}\hat{{\mathcal {N}}}(\rho ; \mathbf{h}_k(u, v))\right| \!+\! \left| \frac{\partial ^2}{\partial v^2}\hat{{\mathcal {N}}}(\rho ; \mathbf{h}_k(u, v))\right| \right) \end{aligned}$$

where \(4(2 + \sqrt{3}) = \max _{u \in [\frac{1}{12}\pi , \frac{11}{12}\pi ]} \max \{|\cot u|, 1/\sin ^2u\}\) and hence

$$\begin{aligned} \wp _L(\rho ) = 4(2 + \sqrt{3}) [2\rho ^2\wp _{L, 1}^2(\rho ) + \wp _{L, 1}(\rho ) + 2\wp _{L, 2}(\rho )]. \end{aligned}$$
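The numerical constant \(4(2 + \sqrt{3})\) can be double-checked directly: on \([\frac{1}{12}\pi , \frac{11}{12}\pi ]\) both \(|\cot u|\) and \(1/\sin ^2 u\) attain their maxima at the endpoints, where \(1/\sin ^2(\pi /12) = 4/(2 - \sqrt{3}) = 4(2 + \sqrt{3})\). A plain-Python grid check:

```python
import math

lo, hi = math.pi / 12, 11 * math.pi / 12
grid = [lo + (hi - lo) * i / 100000 for i in range(100001)]
# Maximum over the interval of max{|cot u|, 1/sin^2 u}.
m = max(max(abs(math.cos(u) / math.sin(u)), 1.0 / math.sin(u) ** 2) for u in grid)

# Closed form of the endpoint value: 1/sin^2(pi/12) = 4/(2 - sqrt(3)) = 4*(2 + sqrt(3)).
target = 4 * (2 + math.sqrt(3))
assert abs(1.0 / math.sin(math.pi / 12) ** 2 - target) < 1e-9
assert abs(m - target) < 1e-6
```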

A.10 Proof of Proposition 9

Fix the sample point \(\omega \) in \(U^c\) and denote by \(n\) the value of \(\nu \) at \(\omega \). Then, designate the values of \(\pi _{1, \nu }^{2}, \dots , \pi _{\nu , \nu }^{2}\) at \(\omega \) by \(a_1, \dots , a_n\) respectively, so that each \(a_j\) belongs to \([0, 1]\) and \(\sum _{j = 1}^{n} a_j = 1\) in view of (23). The argument continues by resorting to the following combinatorial tools:

(i)

    The \(k\)-th elementary symmetric function \(S_k(a_1, \dots , a_n)\) defined by

    $$\begin{aligned} S_k(a_1, \dots , a_n) := \sum _{1 \le i_1 < \dots < i_k \le n} a_{i_1} \dots a_{i_k} \end{aligned}$$

    for \(k\) in \(\{1, \dots , n\}\).

(ii)

    The \(k\)-th Newton symmetric function given by

    $$\begin{aligned} N_k(a_1, \dots , a_n) := \sum _{j = 1}^{n} a_{j}^{k}. \end{aligned}$$
(iii)

    The group of relations, known as Newton’s identities, which read

    $$\begin{aligned} k S_k = \sum _{j = 1}^{k} (-1)^{j + 1} N_j S_{k - j} \end{aligned}$$

    for \(k\) in \(\{1, \dots , n\}\), with the proviso that \(S_0(a_1, \dots , a_n) := 1\).

See Section 1.9 of [52] for details. The way is now paved to prove that, if \(a_{*}\in (0, 1)\), \(N_1 = 1\) and \(N_2 \le a_{*}\), then

$$\begin{aligned} S_k \ge 1/k! - 2^{k-1}a_{*}\end{aligned}$$

holds for each \(k\) in \(\{1, \dots , n\}\). Proceeding by mathematical induction, when \(k = 1\), one has \(S_1 = N_1 = 1\) and (169) follows. When \(k \ge 2\), combine the Newton identities with the inductive hypothesis to get

$$\begin{aligned} S_k \ge \frac{1}{k}S_{k-1} - \frac{1}{k} \sum _{j = 2}^{k} N_j S_{k - j} \ge \frac{1}{k!} - \frac{1}{k} \left( 2^{k-2}a_{*}+ \sum _{j = 2}^{k} N_j S_{k - j}\right) . \end{aligned}$$

At this stage, note that \(N_j \le a_{*}\) for each \(j\) in \(\{2, \dots , k\}\). Moreover, thanks to the multinomial identity (see, e.g., 1.7.2 in [52]), \(N_1 = 1\) entails \(S_m \le 1/m! \le 1\) for each \(m\) in \(\{0, \dots , n\}\). Hence,

$$\begin{aligned} S_k \ge \frac{1}{k!} - \frac{1}{k}[2^{k-2} + (k-1)] a_{*}\end{aligned}$$

which concludes the proof of (169), after noting that \(2^{k-2} + (k-1) \le k2^{k-1}\) for every \(k\) in \({\mathbb {N}}\). To complete the proof of (68), observe that \(\omega \in U^c\) entails \(n \ge r\) by virtue of (51), so that

$$\begin{aligned} \prod _{j = 1}^{n} (1 + a_j x^2) = \sum _{k = 0}^{n} S_k(a_1, \dots , a_n) x^{2k} \ge S_r(a_1, \dots , a_n) x^{2r}. \end{aligned}$$

Finally, recall the relation between \(r\) and \(a_{*}\) given by (51) and apply (169) to obtain

$$\begin{aligned} S_r \ge \frac{1}{r!} - 2^{r - 1}a_{*}= \frac{1}{2 r!} = \epsilon . \end{aligned}$$

To prove (69), the obvious change of variable \(\rho = \lambda y\) yields

$$\begin{aligned} \int \limits _{x}^{+\infty } \varPsi ^s(\rho ) \rho ^m \text {d}\rho = \lambda ^{m+1} \int \limits _{x/\lambda }^{+\infty } \left[ \frac{1}{\prod _{j=1}^{\nu } \left( {1 + \pi _{j, \nu }^{2} y^2}\right) }\right] ^{sq} y^m \text {d}y \end{aligned}$$

and conclude by using (68).
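Since the argument above rests entirely on Newton’s identities, the multinomial bound \(S_k \le 1/k!\) and the lower bound (169), all three are easy to sanity-check numerically. A minimal sketch with arbitrary sample weights \(a_j\) normalized so that \(N_1 = 1\) (the helper `elementary_symmetric` is ad hoc):

```python
import math
import random

def elementary_symmetric(a):
    """Coefficients S_0, ..., S_n of the expansion prod_j (1 + a_j x) = sum_k S_k x^k."""
    S = [1.0]
    for aj in a:
        S = [1.0] + [S[k] + aj * S[k - 1] for k in range(1, len(S))] + [aj * S[-1]]
    return S

rng = random.Random(0)
n = 8
a = [rng.random() for _ in range(n)]
total = sum(a)
a = [aj / total for aj in a]                        # enforce N_1 = sum_j a_j = 1
N = [sum(aj ** k for aj in a) for k in range(n + 1)]
S = elementary_symmetric(a)

for k in range(1, n + 1):
    # Newton's identities: k S_k = sum_{j=1}^k (-1)^{j+1} N_j S_{k-j}.
    rhs = sum((-1) ** (j + 1) * N[j] * S[k - j] for j in range(1, k + 1))
    assert math.isclose(k * S[k], rhs, abs_tol=1e-12)
    # Lower bound (169) with a_* = N_2, and the multinomial bound S_k <= 1/k!.
    assert 1.0 / math.factorial(k) - 2 ** (k - 1) * N[2] - 1e-12 <= S[k]
    assert S[k] <= 1.0 / math.factorial(k) + 1e-12
```

The helper builds the \(S_k\) by repeated polynomial multiplication, i.e. directly from the product expansion \(\prod _j (1 + a_j x) = \sum _k S_k x^k\) used in the proof of (68).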

A.11 Proof of Proposition 11

The analysis to be developed concerns each of the charts \(\Omega _1, \dots , \Omega _4\), but it is of the same kind for all of them. Therefore, even if the notation agrees with that introduced in A.9, the subscript \(k\) referring to the \(k\)-th chart will be dropped. The computation of the Laplacian in (141) yields

$$\begin{aligned} \varDelta _{S^2} \hat{{\mathcal {N}}}(\rho ; \mathbf{u}) = \hat{{\mathcal {N}}}(\rho ; \mathbf{u}) \Big \{\varDelta _{S^2} \text {Log}\hat{{\mathcal {N}}}(\rho ; \mathbf{u}) + \Big |\Big | \!\nabla _{S^2}\text {Log}\hat{{\mathcal {N}}}(\rho ; \mathbf{u})\! \Big |\Big |_{S^2}^2\Big \} \end{aligned}$$

for every fixed \(\rho \). It is worth mentioning that here any differential operator \({\fancyscript{D}}\), applied to the complex-valued function \(h = f + ig\), is to be understood as \({\fancyscript{D}}h := {\fancyscript{D}}f + i {\fancyscript{D}}g\), and that the scalar product \(\langle U_1 + i U_2, V_1 + i V_2\rangle \) is defined, by linearity, as \(\langle U_1 , V_1\rangle - \langle U_2 , V_2\rangle + i \langle U_1 , V_2\rangle + i \langle U_2 , V_1\rangle \) for the vector fields \(U_1, U_2, V_1, V_2\). Now, after observing that \(1\!\!1_{T^c} = 1 - 1\!\!1_{T}\), one gets

$$\begin{aligned}&\left| {\mathsf{E}}_t\left[ \left( \varDelta _{S^2}\hat{{\mathcal {N}}}\right) 1\!\!1_{T^{c}} \ | \ {\fancyscript{G}} \right] \right| \ \le \ {\mathsf{E}}_t\left[ |\hat{{\mathcal {N}}}| \cdot \Big |\ \Big |\Big | \!\nabla _{S^2}\text {Log}\hat{{\mathcal {N}}}\! \Big |\Big |_{S^2}^2 \Big | 1\!\!1_{T^{c}} \ | \ {\fancyscript{G}} \right] \nonumber \\&\quad +\, e^{-\rho ^2/2} \Big |{\mathsf{E}}_t\left[ \varDelta _{S^2} \text {Log}\hat{{\mathcal {N}}}\ | \ {\fancyscript{G}} \right] \Big | + e^{-\rho ^2/2} {\mathsf{E}}_t\left[ |\varDelta _{S^2} \text {Log}\hat{{\mathcal {N}}}| 1\!\!1_T\ | \ {\fancyscript{G}}\right] \nonumber \\&\quad +\, {\mathsf{E}}_t\left[ \big |\hat{{\mathcal {N}}} - e^{-\rho ^2/2}\big | \cdot \big | \varDelta _{S^2} \text {Log}\hat{{\mathcal {N}}}\big | 1\!\!1_{T^c}\ | \ {\fancyscript{G}}\right] \end{aligned}$$

which represents the starting point for the proof at issue. To bound the term \(|\hat{{\mathcal {N}}}|\), use (141), (145), (80), (148) and (153) to write

$$\begin{aligned} \big |\hat{{\mathcal {N}}}(\rho ; \mathbf{u})\big | 1\!\!1_{T(\mathbf{u})^c} \le c(N) e^{-\rho ^2/6} \end{aligned}$$

for \(\rho \) in \([0, \text {R}]\), with \(c(N) := \exp \{\frac{1}{16}c_0(R) + |\Phi (-\frac{19}{128})| k_0(w)\}\). As to the term containing the gradient, an application of the triangle inequality in (145) shows that

$$\begin{aligned}&{\mathsf{E}}_t\left[ \left| \ \left| \left| \nabla _{S^2}\text {Log}\hat{{\mathcal {N}}}(\rho ; \mathbf{u})\right| \right| ^{2}_{S^{2}}\right| \ | \ {\fancyscript{G}}\right] \le \rho ^4 \text {Z}_G(\mathbf{u}) + \frac{1}{9}\rho ^6 \nonumber \\&\quad \times {\mathsf{E}}_t\left[ \left| \left| \sum _{j = 1}^{\nu }\pi _{j, \nu }^{3} \nabla _{S^2}\int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^3 \mu _0(\text {d}\mathbf{v})\right| \right| ^{2}_{S^{2}}\ | \ {\fancyscript{G}} \right] + 4E_1 + 4E_2 \qquad \end{aligned}$$

for \((\rho , \mathbf{u})\) in \([0, \text {R}] \times \Omega \), where \(E_1\) and \(E_2\) are conditional expectations, to be specified below, involving the remainders \(R_j\) and \(\Phi (w_j)w_{j}^2\) respectively. As to the second summand, the convexity of the square of the Riemannian length entails

$$\begin{aligned}&\left| \left| \sum _{j = 1}^{\nu }\pi _{j, \nu }^{2} \left( \pi _{j, \nu } \nabla _{S^2}\int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu } \cdot \mathbf{v}]^3 \text {d}\mu _0\right) \right| \right| ^2_{S^2} \\&\quad \le \text {W}\sup _{j, \mathbf{u}} \left| \left| \nabla _{S^2}\int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu } \cdot \mathbf{v}]^3 \text {d}\mu _0\right| \right| ^{2}_{S^{2}} \le \text {W}\left( \,\,\int \limits _{{\mathbb {R}}^3}\sup _{j, \mathbf{u}} \mid \mid \!\nabla _{S^2}[\varvec{\psi }_{j, \nu } \cdot \mathbf{v}]^3\! \mid \mid _{S^2} \text {d}\mu _0\right) ^2. \end{aligned}$$

To evaluate the last integral, recall that \(\varvec{\psi }_{j, \nu }\) is given by (29) with the proper choice of \(\text {B}\) as in (90)–(91), which makes \(\varvec{\psi }_{j, \nu } : \Omega \rightarrow S^2\) smooth. Writing the gradient in coordinates yields

$$\begin{aligned}&\sup _{j, \mathbf{u}} \mid \mid \!\nabla _{S^2}[\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^3\! \mid \mid _{S^2}\\&\quad = \sup _{j, (u,v)} \left[ (\partial _u [\varvec{\psi }_{j, \nu }(\mathbf{h}(u, v)) \cdot \mathbf{v}]^3)^2 + \frac{1}{\sin ^2 u} (\partial _v [\varvec{\psi }_{j, \nu }(\mathbf{h}(u, v)) \cdot \mathbf{v}]^3)^2\right] ^{1/2}. \end{aligned}$$

Since \(1/\sin ^2 u \le 4(2 + \sqrt{3})\) for every \((u, v)\) in \(D\), (154) leads to

$$\begin{aligned} \left( \,\,\int \limits _{{\mathbb {R}}^3}\sup _{\begin{array}{c} j = 1, \dots , \nu \\ \mathbf{u}\in \Omega \end{array}} \mid \mid \!\nabla _{S^2}[\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^3\! \mid \mid _{S^2} \mu _0(\text {d}\mathbf{v})\right) ^2 \le 27(9 + 4\sqrt{3}) {\mathfrak {m}}_{3}^{2}. \end{aligned}$$

The terms \(E_1\) and \(E_2\) in (172) can be derived as uniform bounds w.r.t. \(\mathbf{u}\) by writing \(\Big |\Big | \!\nabla _{S^2}\text {Log}\hat{{\mathcal {N}}}(\rho ; \mathbf{u})\! \Big |\Big |_{S^2}^2\) in coordinates, according to

$$\begin{aligned} E_1 \!&:= \! \sup _{(u, v)} {\mathsf{E}}_t\left[ \left( \sum _{j = 1}^{\nu } |\partial _u R_j(\rho , \mathbf{h}(u, v))|\right) ^2 \!+\! \frac{1}{\sin ^2 u}\left( \sum _{j = 1}^{\nu } |\partial _v R_j(\rho , \mathbf{h}(u, v))|\right) ^2\ | \ {\fancyscript{G}}\right] \\ E_2&:= \sup _{(u, v)} {\mathsf{E}}_t\left[ \left( \sum _{j = 1}^{\nu } |\partial _u \Phi (w_j(\rho , \mathbf{h}(u, v))) w_{j}^{2}(\rho , \mathbf{h}(u, v))|\right) ^2 \right. \\&\left. + \frac{1}{\sin ^2 u} \left( \sum _{j = 1}^{\nu } |\partial _v\Phi (w_j(\rho , \mathbf{h}(u, v))) w_{j}^{2}(\rho , \mathbf{h}(u, v))|\right) ^2\ | \ {\fancyscript{G}}\right] . \end{aligned}$$

As for \(E_1\), (156) and (157) give

$$\begin{aligned} \sup _{(u, v)} \left[ \left( \sum _{j = 1}^{\nu } |\partial _u R_j|\right) ^2 + \frac{1}{\sin ^2 u}\left( \sum _{j = 1}^{\nu } |\partial _v R_j|\right) ^2\right] \le \frac{9 + 4\sqrt{3}}{192} {\mathfrak {m}}_4\text {W} \rho ^4 \end{aligned}$$

for every \(\rho \) in \([0, \text {R}]\), the RHS being a \({\fancyscript{G}}\)-measurable function. Apropos of \(E_2\), start from (149) and notice that, in view of (153) and the complete monotonicity of \(\Phi \),

$$\begin{aligned} (|\Phi ^{'}(w_j)| \cdot |w_j| + 2|\Phi (w_j)|) \le \left( \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| \sqrt{k_0(w)} + 2\left| \Phi \left( -\frac{19}{128}\right) \right| \right) \end{aligned}$$

holds for every \(j = 1, \dots , \nu \), and \((\rho , \mathbf{u})\) in \([0, \text {R}] \times \Omega \). Then, by virtue of Lyapunov’s inequality and Theorem 19 in [43], inequalities (152), with \(l= 0\), (161) and (163) become

$$\begin{aligned} \sup _{(u, v) \in D} \sum _{j = 1}^{\nu } |w_j(\rho , \mathbf{h}(u, v))|^2&\le 16k_0(w) {\mathfrak {m}}_4\text {W}\rho ^4 \end{aligned}$$
$$\begin{aligned} \sup _{(u, v) \in D} \sum _{j = 1}^{\nu } \left| \frac{\partial }{\partial x} w_j(\rho , \mathbf{h}(u, v)) \right| ^2&\le \frac{613}{64} {\mathfrak {m}}_4\text {W}\rho ^4 \end{aligned}$$
$$\begin{aligned} \sup _{(u, v) \in D} \sum _{j = 1}^{\nu } \left| \frac{\partial ^2}{\partial x^2} w_j(\rho , \mathbf{h}(u, v)) \right| ^2&\le 16 W_{2}^{*} {\mathfrak {m}}_4\text {W}\rho ^4 \end{aligned}$$

respectively. These last inequalities, in combination with (153) and (162), entail

$$\begin{aligned} \sup _{(u, v)} \left[ \left( \sum _{j = 1}^{\nu } |\partial _u \Phi (w_j) w_{j}^{2}|\right) ^2 + \frac{1}{\sin ^2 u}\left( \sum _{j = 1}^{\nu } |\partial _v \Phi (w_j) w_{j}^{2}|\right) ^2\right] \le \overline{E}_2 {\mathfrak {m}}_4\text {W} \rho ^4 \end{aligned}$$

for every \(\rho \) in \([0, \text {R}]\) with

$$\begin{aligned} \overline{E}_2 := 4(9 + 4\sqrt{3}) \left( \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| \sqrt{k_0(w)} + 2\left| \Phi \left( -\frac{19}{128}\right) \right| \right) ^2 \left( k_0(w) + \frac{613}{64}\right) ^2. \end{aligned}$$

This concludes the analysis of the first summand in the RHS of (170), after noting that the upper bound provided by (177) is \({\fancyscript{G}}\)-measurable. To proceed, for notational simplicity put

$$\begin{aligned} A&:= -\frac{1}{2} \sum _{j = 1}^{\nu } \pi _{j, \nu }^2 \left( \,\, \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^2 \mu _0(\text {d}\mathbf{v}) - 1 \right) \\ B&:= -\frac{1}{3!} \sum _{j = 1}^{\nu } \pi _{j, \nu }^3 \int \limits _{{\mathbb {R}}^3}[\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^3 \mu _0(\text {d}\mathbf{v}) \\ H&:= \sum _{j = 1}^{\nu } R_j(\rho , \mathbf{u}) + \sum _{j = 1}^{\nu } \Phi (w_j(\rho , \mathbf{u})) w_{j}^{2}(\rho , \mathbf{u}) \end{aligned}$$

so that (141) can be re-written as \(\hat{{\mathcal {N}}} = \exp \{-\rho ^2/2 + A\rho ^2 + iB\rho ^3 + H\}\). Observe that \({\mathsf{E}}_t[A^2\ | \ {\fancyscript{G}}] = \frac{1}{4} \text {Z}(\mathbf{u})\) and \({\mathsf{E}}_t[(\varDelta _{S^2} A)^2\ | \ {\fancyscript{G}}] = \frac{1}{4} \text {Z}_L(\mathbf{u})\) hold by definition, and invoke (95)–(96) to write

$$\begin{aligned} |{\mathsf{E}}_t[\varDelta _{S^2} A\ | \ {\fancyscript{G}}]|&\le \frac{1}{2} \text {X}_L(\mathbf{u}) \\ |{\mathsf{E}}_t[\varDelta _{S^2} B\ | \ {\fancyscript{G}}]|&\le \frac{1}{6} \text {Y}_L(\mathbf{u}) \end{aligned}$$

for every \(\mathbf{u}\) in \(\Omega \). To deal with the Laplacian of \(H\), start from the sum of the \(R_j\)’s. After writing the Laplacian in coordinates, the combination of (156) with (158) gives

$$\begin{aligned} \sup _{\mathbf{u}\in \Omega } \sum _{j = 1}^{\nu } |\varDelta _{S^2} R_j(\rho , \mathbf{u})| \le \frac{105 + 53\sqrt{3}}{6}{\mathfrak {m}}_4\text {W}\rho ^4. \end{aligned}$$

Analogously, combining (174)–(176) with (153), (162) and (164) yields

$$\begin{aligned} \sup _{\mathbf{u}} \sum _{j = 1}^{\nu } |\varDelta _{S^2} \Phi (w_j(\rho , \mathbf{u})) w_{j}^{2}(\rho , \mathbf{u})| \le \overline{\Phi }(\varDelta ) {\mathfrak {m}}_4\text {W}\rho ^4 \end{aligned}$$

with

$$\begin{aligned} \overline{\Phi }(\varDelta )&:= 4(2 + \sqrt{3})\left[ 16\sqrt{\frac{613}{1024}}\ \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| \ k_0(w) + \frac{613}{1024}\ \left| \Phi \left( -\frac{19}{128}\right) \right| \right. \\&\left. + 16\left| \Phi \left( -\frac{19}{128}\right) \right| \ k_0(w) \right] + (9 + 4\sqrt{3}) \left[ \frac{613}{64}\ \left| \Phi ^{''}\left( -\frac{19}{128}\right) \right| \ k_0(w) \right. \\&+ \left| \Phi ^{'}\left( -\frac{19}{128}\right) \right| \cdot \left( \frac{613}{16}\sqrt{k_0(w)} + 16 k_0(w) \sqrt{W_{2}^{*}}\right) \\&\left. + \left| \Phi \left( -\frac{19}{128}\right) \right| \cdot \left( 16W_{2}^{*} + 16k_0(w) + \frac{613}{32}\right) \right] . \end{aligned}$$

Hence, the second summand in the RHS of (170) admits the following bound:

$$\begin{aligned}&e^{-\rho ^2/2} \left| {\mathsf{E}}_t\left[ \varDelta _{S^2} \text {Log}\hat{{\mathcal {N}}}(\rho ; \mathbf{u})\ | \ {\fancyscript{G}} \right] \right| \le \frac{1}{2} \rho ^2 e^{-\rho ^2/2} \text {X}_L(\mathbf{u}) \nonumber \\&\quad + \frac{1}{6} \rho ^3 e^{-\rho ^2/2} \text {Y}_L(\mathbf{u}) + \left( \frac{105 + 53\sqrt{3}}{6} + \overline{\Phi }(\varDelta )\right) \rho ^4e^{-\rho ^2/2} {\mathfrak {m}}_4\text {W} \end{aligned}$$

for every \((\rho , \mathbf{u})\) in \([0, \text {R}] \times \Omega \). Apropos of the third summand in the RHS of (170), recall from A.9 that \(\sup _{\mathbf{u}} |\varDelta _{S^2} \text {Log}\hat{{\mathcal {N}}}| \le \rho ^2 \wp _L(\rho )\) holds for every \(\rho \) in \([0, \text {R}]\). Therefore, in view of (86), one has

$$\begin{aligned} e^{-\rho ^2/2} {\mathsf{E}}_t\left[ |\varDelta _{S^2} \text {Log}\hat{{\mathcal {N}}}(\rho ; \mathbf{u})| 1\!\!1_T\ \ | \ {\fancyscript{G}}\right] \le 9 e^{-\rho ^2/2} \rho ^2 \wp _L(\rho ) \text {Z}(\mathbf{u}). \end{aligned}$$

To deal with the last summand in the RHS of (170), a bound for \(\big |\hat{{\mathcal {N}}} - e^{-\rho ^2/2}\big |\) can be derived from the elementary inequalities \(|e^{i x} - 1| \le |x|\) and \(|e^z - 1| \le |z|e^{|z|}\), valid for every \(x\) in \({\mathbb {R}}\) and \(z\) in \({\mathbb {C}}\), respectively. Whence, one gets

$$\begin{aligned} \big |\hat{{\mathcal {N}}} - e^{-\rho ^2/2}\big |1\!\!1_{T(\mathbf{u})^c} \le e^{-\rho ^2/6} \left( |H|e^{|H|} + |B|\rho ^3 + |A|\rho ^2\right) \end{aligned}$$

which, in turn, yields

$$\begin{aligned}&\big |\hat{{\mathcal {N}}} - e^{-\rho ^2/2}\big | \cdot \big | \varDelta _{S^2} \text {Log}\hat{{\mathcal {N}}}\big | 1\!\!1_{T(\mathbf{u})^c} \le e^{-\rho ^2/6} \Big (|H|e^{|H|}\rho ^2 \wp _L(\rho ) + |B \varDelta _{S^2} A| \rho ^5 \nonumber \\&\quad + |B \varDelta _{S^2} B| \rho ^6 + |B \varDelta _{S^2} H| \rho ^3 + |A \varDelta _{S^2} A| \rho ^4 + |A \varDelta _{S^2} B| \rho ^5 + |A \varDelta _{S^2} H| \rho ^2\Big ) .\nonumber \\ \end{aligned}$$
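The two elementary inequalities invoked above, \(|e^{ix} - 1| \le |x|\) and \(|e^z - 1| \le |z|e^{|z|}\), can be checked on random samples; a quick sketch (the sampling ranges are arbitrary):

```python
import cmath
import math
import random

rng = random.Random(1)
for _ in range(1000):
    x = rng.uniform(-10.0, 10.0)
    z = complex(rng.uniform(-3.0, 3.0), rng.uniform(-3.0, 3.0))
    # |e^{ix} - 1| = 2|sin(x/2)| <= |x| for real x.
    assert abs(cmath.exp(1j * x) - 1) <= abs(x) + 1e-12
    # |e^z - 1| <= |z| e^{|z|} for complex z.
    assert abs(cmath.exp(z) - 1) <= abs(z) * math.exp(abs(z)) + 1e-12
```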

At this stage, note that \(\sup _{\mathbf{u}} |A| \le \frac{1}{2}(1 + {\mathfrak {m}}_2)\), \(\sup _{\mathbf{u}} |B| \le \frac{1}{6}{\mathfrak {m}}_3\) and \(\sup _{\mathbf{u}} e^{|H|} \le c(N)\) for every \(\rho \) in \([0, \text {R}]\), in view of (148) and (153). Thus, taking account of (147), (174) and (178)–(179) gives

$$\begin{aligned} \left( |H|e^{|H|}\rho ^2 \wp _L(\rho ) + |B \varDelta _{S^2} H| \rho ^3 + |A \varDelta _{S^2} H| \rho ^2\right) \le \rho ^2 \wp _H(\rho ) {\mathfrak {m}}_4\text {W}\end{aligned}$$

with

$$\begin{aligned} \wp _H(\rho )&{:=} c(N) \left( c_0(R) + 16 \left| \Phi \left( -\frac{19}{128}\right) \right| k_0(w)\right) \rho ^4 \wp _L(\rho )\\&+ \left( \frac{105 + 53\sqrt{3}}{6} + \overline{\Phi }(\varDelta )\right) \cdot \left( \frac{1}{6}{\mathfrak {m}}_3\rho ^5 + \frac{1}{2}(1 + {\mathfrak {m}}_2) \rho ^4\right) . \end{aligned}$$

For the remaining terms in (182), take the conditional expectation and write

$$\begin{aligned}&{\mathsf{E}}_t\left[ |B \varDelta _{S^2} A| \rho ^5 + |B \varDelta _{S^2} B| \rho ^6 + |A \varDelta _{S^2} A| \rho ^4 + |A \varDelta _{S^2} B| \rho ^5\ | \ {\fancyscript{G}}\right] \nonumber \\&\quad \le \frac{1}{2} \left\{ {\mathsf{E}}_t[(B^2 + (\varDelta _{S^2} B)^2)\ | \ {\fancyscript{G}}] \cdot (\rho ^5 + \rho ^6) + \frac{1}{4} (\text {Z}(\mathbf{u}) + \text {Z}_L(\mathbf{u})) \cdot (\rho ^4 + \rho ^5) \right\} . \nonumber \\ \end{aligned}$$

Then, an application of the Lyapunov inequality shows that

$$\begin{aligned} B^2&\le \frac{1}{36}{\mathfrak {m}}_{3}^{2} \text {W} \end{aligned}$$
$$\begin{aligned} (\varDelta _{S^2} B)^2&\le \frac{1}{36} \left( \,\,\int \limits _{{\mathbb {R}}^3}\sup _{\begin{array}{c} j = 1, \dots , \nu \\ \mathbf{u}\in \Omega \end{array}} |\varDelta _{S^2} [\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^3| \mu _0(\text {d}\mathbf{v})\right) ^2 \text {W} \end{aligned}$$

the two RHSs being \({\fancyscript{G}}\)-measurable. To evaluate the integral in (186), it is enough to write the Laplacian w.r.t. the coordinates \((u, v)\) and to recall that \(\max \{1/\sin ^2 u, |\cot u|\} \le 4(2 + \sqrt{3})\) for every \((u, v)\) in \(D\), so that (154) leads to

$$\begin{aligned} \int \limits _{{\mathbb {R}}^3}\sup _{\begin{array}{c} j = 1, \dots , \nu \\ \mathbf{u}\in \Omega \end{array}} |\varDelta _{S^2} [\varvec{\psi }_{j, \nu }(\mathbf{u}) \cdot \mathbf{v}]^3| \mu _0(\text {d}\mathbf{v}) \le (234 + 123\sqrt{3}) {\mathfrak {m}}_3. \end{aligned}$$

All the elements are now in place to complete the proof of Proposition 11 by setting, in view of (170)–(173), (177) and (180)–(186),

$$\begin{aligned} z_1(\rho )&:= c(N)e^{-\rho ^2/6}\left[ 3(9 + 4\sqrt{3}){\mathfrak {m}}_{3}^{2}\rho ^4 + \frac{9 + 4\sqrt{3}}{48}{\mathfrak {m}}_4\rho ^2 + 4 \overline{E}_2 {\mathfrak {m}}_4\rho ^2\right] \\&+ \left( \frac{105 + 53\sqrt{3}}{6} + \overline{\Phi }(\varDelta )\right) {\mathfrak {m}}_4\rho ^2e^{-\rho ^2/2} + {\mathfrak {m}}_4\wp _H(\rho )e^{-\rho ^2/6} \\&+ \frac{1 + (234 + 123\sqrt{3})^2}{72} {\mathfrak {m}}_{3}^{2} (\rho ^3 + \rho ^4) e^{-\rho ^2/6} \end{aligned}$$

and

$$\begin{aligned} z_2(\rho )&:= \frac{1}{2}e^{-\rho ^2/2} \\ z_3(\rho )&:= \frac{1}{6}\rho e^{-\rho ^2/2}\\ z_4(\rho )&:= 9 \wp _L(\rho ) e^{-\rho ^2/2} + \frac{1}{8}(\rho ^2 + \rho ^3) e^{-\rho ^2/6} \\ z_5(\rho )&:= c(N)\rho ^2 e^{-\rho ^2/6} \\ z_6(\rho )&:= \frac{1}{8}(\rho ^2 + \rho ^3) e^{-\rho ^2/6}. \end{aligned}$$

A.12 Proof of (103) and (110)–(111)

The identities at issue are proved by induction on \(n\). They hold true for \(n = 1\) in view of the following remarks. As to (103), it suffices to observe that \(\zeta _{1, 1} \equiv 1\), \(\varvec{\psi }_{1, 1}(\mathbf{u}) = \mathbf{u}\) and to exploit (47)–(48). Identity (110) holds thanks to \(\eta _{1, 1} \equiv 1\), \(\varvec{\psi }_{1, 1}(\mathbf{u}) = \mathbf{u}\) and the definition of \(l_3(\mathbf{u})\). As far as (111) is concerned, it is enough to observe that \(\pi _{1, 1} \equiv 1\) and \(\varvec{\psi }_{1, 1}(\mathbf{u}) = \mathbf{u}\). When \(n \ge 2\), one has to verify the identities

$$\begin{aligned}&\int \limits _{(0, 2\pi )^{n - 1}} \left[ \left( \text {B}(\mathbf{u}) \text {O}_{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi }, \varvec{\theta })\mathbf{e}_3 \cdot \mathbf{e}_s\right) ^2 - \frac{1}{3} \right] u_{(0, 2\pi )}^{\otimes _{n - 1}}(\text {d}\varvec{\theta }) = \left( (\mathbf{u}\cdot \mathbf{e}_s)^{2} - \frac{1}{3}\right) \zeta _{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi }),\\&\int \limits _{(0, 2\pi )^{n - 1}} \,\,\int \limits _{{\mathbb {R}}^3}\left[ \left( \text {B}(\mathbf{u}) \text {O}_{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi }, \varvec{\theta })\mathbf{e}_3 \cdot \mathbf{v}\right) ^3 - \frac{3}{5}|\mathbf{v}|^2 \left( \text {B}(\mathbf{u}) \text {O}_{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi }, \varvec{\theta })\mathbf{e}_3 \cdot \mathbf{v}\right) \right] \\&\quad u_{(0, 2\pi )}^{\otimes _{n - 1}}(\text {d}\varvec{\theta }) \mu _0(\text {d}\mathbf{v}) = l_3(\mathbf{u}) \eta _{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi }),\\&\int \limits _{(0, 2\pi )^{n - 1}} \left( \text {B}(\mathbf{u}) \text {O}_{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi }, \varvec{\theta })\mathbf{e}_3 \cdot \mathbf{e}_s\right) u_{(0, 2\pi )}^{\otimes _{n - 1}}(\text {d}\varvec{\theta }) = (\mathbf{u}\cdot \mathbf{e}_s) \pi _{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi }) \end{aligned}$$

for every \(s = 1, 2, 3\), \({\mathfrak {t}}_n\) in \({\mathbb {T}}(n)\), \(\varvec{\varphi }\) in \([0, \pi ]^{n-1}\), \(\mathbf{u}\) in \(S^2\) and every choice of \(\text {B}\) as in (29). After recalling the definition of the \(k\)-th Legendre polynomial \(P_k\), all the above equalities can be deduced from the common formula

$$\begin{aligned} \int \limits _{(0, 2\pi )^{n - 1}} P_k\Big ( \text {B}(\mathbf{u}) \text {O}_{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi }, \varvec{\theta })\mathbf{e}_3 \cdot \varvec{\xi }\Big )u_{(0, 2\pi )}^{\otimes _{n - 1}}(\text {d}\varvec{\theta }) = P_k(\mathbf{u}\cdot \varvec{\xi }) f_{j,n}^{(k)}({\mathfrak {t}}_n, \varvec{\varphi }). \end{aligned}$$

Here, \(\varvec{\xi }\) denotes any unit vector while, for any \(k\) in \({\mathbb {N}}\), \(f_{1,1}^{(k)}({\mathfrak {t}}_1, \emptyset ) \equiv 1\) and

$$\begin{aligned} f_{j, n}^{(k)}({\mathfrak {t}}_n, \varvec{\varphi }) := \left\{ \begin{array}{l@{\quad }l} f_{j, n_l}^{(k)}({\mathfrak {t}}_{n}^{l}, \varvec{\varphi }^l) P_k(\cos \varphi _{n-1}) &{} \text {for} \ j = 1, \dots , n_l \\ f_{j - n_l, n_r}^{(k)}({\mathfrak {t}}_{n}^{r}, \varvec{\varphi }^r) P_k(\sin \varphi _{n-1}) &{} \text {for} \ j = n_l + 1, \dots , n. \end{array} \right. \end{aligned}$$

It is worth noting that \(f_{j, n}^{(1)} = \pi _{j, n}^{*}\), \(f_{j, n}^{(2)} = \zeta _{j, n}^{*}\) and \(f_{j, n}^{(3)} = \eta _{j, n}^{*}\). Now, in view of the same argument used in Sect. 2.1 to verify that (41) and (42) are equal, one gets

$$\begin{aligned} \int \limits _{(0, 2\pi )^{n - 1}} \left[ \text {B}(\mathbf{u}) \text {O}_{j, n}^{*}({\mathfrak {t}}_n, \varvec{\varphi }, \varvec{\theta })\mathbf{e}_3 \cdot \varvec{\xi }\right] ^m \text {d}\varvec{\theta } = \int \limits _{(0, 2\pi )^{n - 1}} \left[ \mathbf{q}_{j, n}({\mathfrak {t}}_n, \varvec{\varphi }, \varvec{\theta }, \mathbf{u}) \cdot \varvec{\xi }\right] ^m \text {d}\varvec{\theta } \end{aligned}$$

for any unit vector \(\varvec{\xi }\) and \(m\) in \({\mathbb {N}}\), which implies that the LHS of (187) can be written as \(\int _{(0, 2\pi )^{n - 1}} P_k(\mathbf{q}_{j, n}({\mathfrak {t}}_n, \varvec{\varphi }, \varvec{\theta }, \mathbf{u}) \cdot \varvec{\xi }) u_{(0, 2\pi )}^{\otimes _{n - 1}}(\text {d}\varvec{\theta })\). Taking \(j\) in \(\{1, \dots , n_l\}\), (43) and the inductive hypothesis yield

$$\begin{aligned}&\int \limits _{(0, 2\pi )^{n - 1}} P_k(\mathbf{q}_{j, n}({\mathfrak {t}}_n, \varvec{\varphi }, \varvec{\theta }, \mathbf{u}) \cdot \varvec{\xi }) u_{(0, 2\pi )}^{\otimes _{n - 1}}(\text {d}\varvec{\theta }) \\&\quad = \int \limits _{(0, 2\pi )} \int \limits _{(0, 2\pi )^{n_l - 1}} P_k(\mathbf{q}_{j, n_l}({\mathfrak {t}}_n^l, \varvec{\varphi }^l, \varvec{\theta }^l, \varvec{\psi }^l(\varphi _{n-1}, \theta _{n-1}, \mathbf{u}))\cdot \varvec{\xi }) \\&\quad \quad u_{(0, 2\pi )}^{\otimes _{n_l - 1}}(\text {d}\varvec{\theta }^l) u_{(0, 2\pi )}(\text {d}\theta _{n-1})\\&\quad = f_{j, n_l}^{(k)}({\mathfrak {t}}_{n}^{l}, \varvec{\varphi }^l) \int \limits _{(0, 2\pi )} P_k(\varvec{\psi }^l(\varphi _{n-1}, \theta _{n-1}, \mathbf{u}) \cdot \varvec{\xi }) u_{(0, 2\pi )}(\text {d}\theta _{n-1}). \end{aligned}$$

Next, one can write \(\varvec{\xi }\) as \(\cos \beta \sin \alpha \, \mathbf{a}(\mathbf{u}) + \sin \beta \sin \alpha \, \mathbf{b}(\mathbf{u}) + \cos \alpha \, \mathbf{u}\) for a suitable \((\alpha , \beta )\) in \([0, \pi ] \times [0, 2\pi )\), so that \(\varvec{\psi }^l \cdot \varvec{\xi }= \sin \varphi \sin \alpha \cos (\theta - \beta ) + \cos \varphi \cos \alpha \). Then, in view of the well-known addition theorem for the Legendre polynomials (see, e.g., (VII’) on page 268 of [59]),

$$\begin{aligned} \int \limits _{(0, 2\pi )} P_k(\varvec{\psi }^l(\varphi _{n-1}, \theta _{n-1}, \mathbf{u}) \cdot \varvec{\xi }) u_{(0, 2\pi )}(\text {d}\theta _{n-1}) = P_k(\mathbf{u}\cdot \varvec{\xi })P_k(\cos \varphi _{n-1}) \end{aligned}$$

which completes the proof of (187) for \(j \le n_l\), thanks to the definition of \(f_{j, n}^{(k)}\). The proof is completed by applying, mutatis mutandis, this very same argument to the case \(j > n_l\).
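The averaged form of the addition theorem used in the last step, namely \(\int _0^{2\pi } P_k(\cos \varphi \cos \alpha + \sin \varphi \sin \alpha \cos \theta )\, \frac{\text {d}\theta }{2\pi } = P_k(\cos \varphi ) P_k(\cos \alpha )\), is easy to confirm numerically; a sketch using NumPy’s Legendre-series evaluation (the sample values of \(k\), \(\varphi \), \(\alpha \) are arbitrary):

```python
import numpy as np
from numpy.polynomial import legendre

def P(k, x):
    """k-th Legendre polynomial, via a Legendre series with a single coefficient."""
    return legendre.legval(x, [0.0] * k + [1.0])

# Uniform grid on [0, 2*pi); the plain average integrates trigonometric
# polynomials of degree < len(theta) exactly against dtheta/(2*pi).
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

for k in (1, 2, 3):
    for phi, alpha in ((0.3, 1.1), (1.4, 2.0)):
        lhs = P(k, np.cos(phi) * np.cos(alpha)
                   + np.sin(phi) * np.sin(alpha) * np.cos(theta)).mean()
        assert np.isclose(lhs, P(k, np.cos(phi)) * P(k, np.cos(alpha)))
```

The integrand is a trigonometric polynomial of degree at most \(k\) in \(\theta \), so the equispaced average is exact up to rounding.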

A.13 Proof of (124)

The main aim is to find a recursive inequality—reminiscent of (138)—for the conditional expectation

$$\begin{aligned} \text {A}_{\lambda }(\nu , \tau _{\nu }; k, s) := {\mathsf{E}}_t\left[ \left( \,\,\int \limits _{\Omega _k} \big |{\fancyscript{D}}^{'} \text {S}_{k, s}\big |^2u_{S^2}(\text {d}\mathbf{u})- \lambda \sum _{j=1}^{\nu } \pi _{j, \nu }^4\right) \ \big | \ \nu , \tau _{\nu }\right] \end{aligned}$$

where \(\lambda \) is a positive parameter. For the sake of notational simplicity, the following devices will be adopted: omission of the asterisks appearing in (22) and (27); removal of the indices \((k, s)\) in \(\text {A}_{\lambda }(\nu , \tau _{\nu }; k, s)\) and of the subscript \(k\) in \(\Omega _k\) and \(\text {B}_k\); introduction of the symbols \(\varvec{\varphi }, \varvec{\theta }, \overline{\varvec{\varphi }}, \overline{\varvec{\theta }}\) to indicate \((\varphi _1, \dots , \varphi _n), (\theta _1, \dots , \theta _n), (\varphi _1, \dots , \varphi _{n-1}), (\theta _1, \dots , \theta _{n-1})\), respectively. In this notation one can write

$$\begin{aligned}&\text {A}_{\lambda }(n+1, {\mathfrak {t}}_{n, k}) = \int \limits _{[0, \pi ]^n}\int \limits _{(0, 2\pi )^n}\int \limits _{\Omega } \Bigg |{\fancyscript{D}}^{'} \sum _{j = 1}^{n+1} \pi _{j, n+1}^{2}({\mathfrak {t}}_{n, k}, \varvec{\varphi }) \Bigg [3 \Bigg (\text {B}(\mathbf{u})\nonumber \\&\quad \times \text {O}_{j, n+1}({\mathfrak {t}}_{n, k}, \varvec{\varphi }, \varvec{\theta })\mathbf{e}_3 \cdot \mathbf{e}_s\Bigg )^2 - 1 \Bigg ] \Bigg |^2 u_{S^2}(\text {d}\mathbf{u})u_{(0, 2\pi )}^{\otimes _n}(\text {d}\varvec{\theta }) \beta ^{\otimes _n}(\text {d}\varvec{\varphi })\nonumber \\&\quad - \lambda \int \limits _{[0, \pi ]^n} \left[ \sum _{j = 1}^{n+1} \pi _{j, n+1}^{4}({\mathfrak {t}}_{n, k}, \varvec{\varphi })\right] \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) . \end{aligned}$$

The concept of germination explained in Sect. 1.5 is used to express the \(\pi \)’s and the \(\text {O}\)’s relative to \({\mathfrak {t}}_{n, k}\) in terms of the \(\pi \)’s and the \(\text {O}\)’s associated with \({\mathfrak {t}}_n\) according to

$$\begin{aligned} \pi _{j, n+1}({\mathfrak {t}}_{n, k}, \varvec{\varphi }) = \left\{ \begin{array}{l@{\quad }l} \pi _{j, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) &{} \text {for} \ j < k \\ \pi _{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }})\cos \varphi _{h} &{} \text {for} \ j = k \\ \pi _{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }})\sin \varphi _{h} &{} \text {for} \ j = k + 1 \\ \pi _{j-1, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) &{} \text {for} \ j > k + 1 \end{array} \right. \end{aligned}$$


$$\begin{aligned} \text {O}_{j, n+1}({\mathfrak {t}}_{n, k}, \varvec{\varphi }, \varvec{\theta }) = \left\{ \begin{array}{l@{\quad }l} \text {O}_{j, n}\big ({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }}\big ) &{} \text {for} \ j < k \\ \text {O}_{k, n}\big ({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }}\big ) \ \text {M}^l(\varphi _h, \theta _h) &{} \text {for} \ j = k \\ \text {O}_{k, n}\big ({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }}\big ) \ \text {M}^r(\varphi _h, \theta _h) &{} \text {for} \ j = k + 1 \\ \text {O}_{j-1, n}\big ({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }}\big ) &{} \text {for} \ j > k + 1 \end{array} \right. \nonumber \\ \end{aligned}$$

where \(\Sigma [{\mathfrak {t}}_n, k] : \{1, \dots , n-1\} \rightarrow \{1, \dots , n\}\) is an injection depending on \({\mathfrak {t}}_n\) and \(k\), while \(h\) is the element of \(\{1, \dots , n\}\) excluded from the range of \(\Sigma [{\mathfrak {t}}_n, k]\). If \(k = 1\) (\(n\), respectively) the first line (the last line, respectively) in (189)–(190) must be omitted. Therefore, the terms in (188) become

$$\begin{aligned}&\sum _{j = 1}^{n+1} \pi _{j, n+1}^{2}({\mathfrak {t}}_{n, k}, \varvec{\varphi }) \Big [3 \Big (\text {B}(\mathbf{u})\text {O}_{j, n+1}({\mathfrak {t}}_{n, k}, \varvec{\varphi }, \varvec{\theta })\mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 - 1 \Big ] \nonumber \\&\quad = \sum _{j = 1}^{n} \pi _{j, n}^{2}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) \Big [3 \Big (\text {B}(\mathbf{u})\text {O}_{j, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k]\overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 - 1 \Big ] \nonumber \\&\quad \quad - 3\pi _{k, n}^{2}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }})\Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 \nonumber \\&\quad \quad + 3\cos ^2\varphi _h \pi _{k, n}^{2}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }})\Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\nonumber \\&\quad \quad \times \text {M}^l(\varphi _h, \theta _h) \mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 + 3\sin ^2\varphi _h \pi _{k, n}^{2}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }})\nonumber \\&\quad \quad \times \Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\text {M}^r(\varphi _h, \theta _h) \mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 \end{aligned}$$


$$\begin{aligned} \sum _{j = 1}^{n+1} \pi _{j, n+1}^{4}({\mathfrak {t}}_{n, k}, \varvec{\varphi })&= \sum _{j = 1}^{n} \pi _{j, n}^{4}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }})\nonumber \\&- 2\cos ^2\varphi _h \sin ^2\varphi _h \pi _{k, n}^{4}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}). \end{aligned}$$
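The quartic-sum identity (192) reduces to the elementary fact that splitting one weight \(\pi _k\) into \((\pi _k \cos \varphi _h, \pi _k \sin \varphi _h)\) decreases the sum of fourth powers by \(2\cos ^2\varphi _h \sin ^2\varphi _h\, \pi _k^4\) while preserving the sum of squares; a minimal numeric check with arbitrary weights:

```python
import math
import random

rng = random.Random(2)
n, k = 5, 3                       # arbitrary: n weights, the k-th one germinates
pi = [rng.random() for _ in range(n)]
phi = 1.234                       # arbitrary splitting angle phi_h

# Germination: the k-th weight splits into (pi_k cos(phi), pi_k sin(phi)).
new_pi = pi[:k - 1] + [pi[k - 1] * math.cos(phi), pi[k - 1] * math.sin(phi)] + pi[k:]

lhs = sum(p ** 4 for p in new_pi)
rhs = sum(p ** 4 for p in pi) - 2 * (math.cos(phi) * math.sin(phi)) ** 2 * pi[k - 1] ** 4
assert math.isclose(lhs, rhs)
```

Since \(\cos ^4\varphi + \sin ^4\varphi = 1 - 2\cos ^2\varphi \sin ^2\varphi \), the split term contributes exactly the stated deficit, and \(\cos ^2\varphi + \sin ^2\varphi = 1\) keeps the quadratic constraint (23) intact.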

At this stage, apply the operator \({\fancyscript{D}}^{'}\) to the RHS of (191) and then consider the square of the corresponding norm. With a view to this application, it is useful to introduce the symbol \(\bullet \) to indicate: the product when \({\fancyscript{D}}^{'}\) is either \(\text {Id}\) or \(\varDelta _{S^2}\); the scalar product when \({\fancyscript{D}}^{'}\) is either \(\nabla _{S^2}\) or \(\nabla _{S^2}\varDelta _{S^2}\); and, when \({\fancyscript{D}}^{'}\) is the Hessian, for any pair of symmetric 2-forms \((\omega _1, \omega _2)\), \(\omega _1 \bullet \omega _2\) stands for \(\sum _{ij} \omega _1(V_i, V_j) \omega _2(V_i, V_j)\), where \(\{V_1, V_2\}\) is any orthonormal basis of vector fields. This procedure leads to the sum of the following three terms:

$$\begin{aligned} T1 \!&:= \! \Big |\sum _{j = 1}^{n} \pi _{j, n}^{2}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) {\fancyscript{D}}^{'}\Big [3 \Big (\text {B}(\mathbf{u})\text {O}_{j, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k]\overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k]\overline{\varvec{\theta }})\mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 \!- \!1 \Big ]\Big |^2 \nonumber \\ T2 \!&:= \! 6 \Bigg \{\!\sum _{j = 1}^{n} \pi _{j, n}^{2}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) {\fancyscript{D}}^{'}\Big [3 \Big (\text {B}(\mathbf{u})\text {O}_{j, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k]\overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 \!-\! 1\! \Big ]\Bigg \}\nonumber \\&\bullet \pi _{k, n}^{2}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) \cdot \Big \{ - {\fancyscript{D}}^{'}\Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 \nonumber \\&+ \cos ^2\varphi _h {\fancyscript{D}}^{'}\Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\text {M}^l(\varphi _h, \theta _h) \mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 \nonumber \\&+ \sin ^2\varphi _h {\fancyscript{D}}^{'} \Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\text {M}^r(\varphi _h, \theta _h) \mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2\Big \} \nonumber \\ T3&:= \pi _{k, n}^{4}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) \cdot \Big | - {\fancyscript{D}}^{'}\Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 \nonumber \\&+ \cos ^2\varphi _h {\fancyscript{D}}^{'}\Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\text {M}^l(\varphi _h, \theta _h) \mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 \nonumber \\&+ \sin ^2\varphi _h {\fancyscript{D}}^{'} \Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\text {M}^r(\varphi _h, \theta _h) \mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2\Big |^2. \nonumber \end{aligned}$$

Following (188), one has to consider the integrals \(\int _{[0, \pi ]^n} \int _{(0, 2\pi )^n} \int _{\Omega } \) of \(T1\), \(T2\) and \(T3\), as well as the integral \(\int _{[0, \pi ]^n}\) of the RHS of (192). First, observe that

$$\begin{aligned}&\int \limits _{[0, \pi ]^n} \int \limits _{(0, 2\pi )^n}\int \limits _{\Omega } (T1) u_{S^2}(\text {d}\mathbf{u})u_{(0, 2\pi )}^{\otimes _n}(\text {d}\varvec{\theta }) \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) \nonumber \\&\quad - \lambda \int \limits _{[0, \pi ]^n} \left[ \sum _{j = 1}^{n} \pi _{j, n}^{4}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }})\right] \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) = \text {A}_{\lambda }(n, {\mathfrak {t}}_n) \quad \quad \end{aligned}$$

holds since the measures \(u_{(0, 2\pi )}^{\otimes _n}\) and \(\beta ^{\otimes _n}\) are exchangeable, i.e., invariant under permutations of the coordinates. As to the integral of \(T2\), note that \(T2\) depends on \((\varphi _h, \theta _h)\) only through \(\cos ^2\varphi _h\), \(\sin ^2\varphi _h\), \(\text {M}^{l}(\varphi _h, \theta _h)\) and \(\text {M}^{r}(\varphi _h, \theta _h)\). Therefore, since \(\bullet \) behaves like a scalar product, one is led to consider the integral

$$\begin{aligned} \int \limits _{0}^{2\pi } \Big (\text {B}(\mathbf{u})\text {O}_{k, n}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }})\text {M}^e(\varphi _h, \theta _h) \mathbf{e}_3 \cdot \mathbf{e}_s\Big )^2 u_{(0, 2\pi )}(\text {d}\theta _h) \end{aligned}$$

which, after putting \(\varvec{\xi }:= (\text {B}\text {O}_{k, n})^{t}\ \mathbf{e}_s\), becomes

$$\begin{aligned} \int \limits _{0}^{2\pi } \left( \text {M}^l(\varphi _h, \theta _h) \ \mathbf{e}_3 \cdot \varvec{\xi }\right) ^2 u_{(0, 2\pi )}(\text {d}\theta _h) = \frac{1}{2}\sin ^2\varphi _h + \left( 1 - \frac{3}{2}\sin ^2\varphi _h\right) \xi _{3}^{2} \end{aligned}$$

when \(e = l\) and

$$\begin{aligned} \int \limits _{0}^{2\pi } \left( \text {M}^r(\varphi _h, \theta _h) \ \mathbf{e}_3 \cdot \varvec{\xi }\right) ^2 u_{(0, 2\pi )}(\text {d}\theta _h) = \frac{1}{2}\cos ^2\varphi _h + \left( 1 - \frac{3}{2}\cos ^2\varphi _h\right) \xi _{3}^{2} \end{aligned}$$

when \(e = r\). At this stage, the identities

$$\begin{aligned}&- \xi _{3}^{2} + \cos ^2\varphi _h \sin ^2\varphi _h + \cos ^2\varphi _h \left( 1 - \frac{3}{2}\sin ^2\varphi _h\right) \xi _{3}^{2} \\&\quad + \sin ^2\varphi _h \left( 1 - \frac{3}{2}\cos ^2\varphi _h\right) \xi _{3}^{2} = - \cos ^2\varphi _h \sin ^2\varphi _h (3\xi _{3}^{2} - 1) \end{aligned}$$

and \(\int _{0}^{\pi } (-6 \cos ^2\varphi _h \sin ^2\varphi _h) \beta (\text {d}\varphi _h) = 3 \varLambda _b\) show that

$$\begin{aligned}&\sum _{k=1}^{n} \int \limits _{[0, \pi ]^n}\,\,\int \limits _{(0, 2\pi )^n}\,\,\int \limits _{\Omega } (T2) u_{S^2}(\text {d}\mathbf{u})u_{(0, 2\pi )}^{\otimes _n}(\text {d}\varvec{\theta }) \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) \\&\quad = 3 \varLambda _b \int \limits _{[0, \pi ]^n}\,\,\int \limits _{(0, 2\pi )^n}\,\,\int \limits _{\Omega } (T1) u_{S^2}(\text {d}\mathbf{u})u_{(0, 2\pi )}^{\otimes _n}(\text {d}\varvec{\theta }) \beta ^{\otimes _n}(\text {d}\varvec{\varphi }). \end{aligned}$$

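For completeness, the first of the two identities above follows by collecting the \(\xi _{3}^{2}\) terms and using \(\cos ^2\varphi _h + \sin ^2\varphi _h = 1\):

$$\begin{aligned}&- \xi _{3}^{2} + \cos ^2\varphi _h \sin ^2\varphi _h + \left( \cos ^2\varphi _h + \sin ^2\varphi _h - 3 \cos ^2\varphi _h \sin ^2\varphi _h\right) \xi _{3}^{2} \\&\quad = \cos ^2\varphi _h \sin ^2\varphi _h - 3 \cos ^2\varphi _h \sin ^2\varphi _h \, \xi _{3}^{2} = - \cos ^2\varphi _h \sin ^2\varphi _h (3\xi _{3}^{2} - 1), \end{aligned}$$

while the second is equivalent to \(\int _{0}^{\pi } \cos ^2\varphi _h \sin ^2\varphi _h \beta (\text {d}\varphi _h) = -\frac{1}{2}\varLambda _b\), the same fact invoked in the computation of \(\lambda _0\) below.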
Whence, thanks to (193), one gets

$$\begin{aligned}&\sum \limits _{k=1}^{n}\,\, \int \limits _{[0, \pi ]^n}\int \limits _{(0, 2\pi )^n}\int \limits _{\Omega } (T2) u_{S^2}(\text {d}\mathbf{u})u_{(0, 2\pi )}^{\otimes _n}(\text {d}\varvec{\theta }) \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) \nonumber \\&\quad = 3 \varLambda _b \text {A}_{\lambda }(n, {\mathfrak {t}}_n) + 3 \lambda \varLambda _b \int \limits _{[0, \pi ]^{n-1}} \left[ \sum \limits _{j = 1}^{n} \pi _{j, n}^{4}({\mathfrak {t}}_n, \overline{\varvec{\varphi }})\right] \beta ^{\otimes _{n-1}}(\text {d}\overline{\varvec{\varphi }}). \quad \quad \end{aligned}$$

Moreover, for the term \(- 2\cos ^2\varphi _h \sin ^2\varphi _h \pi _{k, n}^{4}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }})\) in (192), one has

$$\begin{aligned}&- \lambda \sum \limits _{k=1}^{n}\,\, \int \limits _{[0, \pi ]^n} (- 2\cos ^2\varphi _h \sin ^2\varphi _h) \pi _{k, n}^{4}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) \nonumber \\&\quad = - \lambda \varLambda _b \int \limits _{[0, \pi ]^{n-1}} \left[ \sum _{j = 1}^{n} \pi _{j, n}^{4}({\mathfrak {t}}_n, \overline{\varvec{\varphi }})\right] \beta ^{\otimes _{n-1}}(\text {d}\overline{\varvec{\varphi }}). \end{aligned}$$

Combining (193)–(195) yields

$$\begin{aligned}&\frac{1}{n} \sum \limits _{k = 1}^{n} \text {A}_{\lambda }(n \!+\! 1, {\mathfrak {t}}_{n, k})\!=\! \left( 1 \!+\! \frac{3 \varLambda _b}{n}\right) \text {A}_{\lambda }(n, {\mathfrak {t}}_n) \!+\! \frac{2 \lambda \varLambda _b}{n} \int \limits _{[0, \pi ]^{n-1}} \left[ \sum \limits _{j = 1}^{n} \pi _{j, n}^{4}({\mathfrak {t}}_n, \overline{\varvec{\varphi }})\right] \beta ^{\otimes _{n-1}}(\text {d}\overline{\varvec{\varphi }}) \nonumber \\&\qquad + \frac{1}{n} \sum \limits _{k = 1}^{n}\,\int \limits _{[0, \pi ]^n}\int \limits _{(0, 2\pi )^n}\int \limits _{\Omega }(T3) u_{S^2}(\text {d}\mathbf{u})u_{(0, 2\pi )}^{\otimes _n}(\text {d}\varvec{\theta }) \beta ^{\otimes _n}(\text {d}\varvec{\varphi }). \end{aligned}$$

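For bookkeeping, the coefficient \(\frac{2 \lambda \varLambda _b}{n}\) in the last display results from adding the two \(\pi _{j, n}^{4}\)-integrals appearing in the previous displays: the \(T2\)-term contributes the factor \(3 \lambda \varLambda _b\), while the term coming from (192) contributes \(- \lambda \varLambda _b\), and

$$\begin{aligned} 3 \lambda \varLambda _b - \lambda \varLambda _b = 2 \lambda \varLambda _b. \end{aligned}$$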
It remains to handle the term containing \(T3\), by showing that there exists a value \(\lambda _0 = \lambda _0({\fancyscript{D}}^{'})\) such that

$$\begin{aligned}&2 \lambda \varLambda _b \int \limits _{[0, \pi ]^{n-1}} \left[ \sum _{j = 1}^{n} \pi _{j, n}^{4}({\mathfrak {t}}_n, \overline{\varvec{\varphi }})\right] \beta ^{\otimes _{n-1}}(\text {d}\overline{\varvec{\varphi }}) \nonumber \\&\quad + \sum _{k = 1}^{n}\,\, \int \limits _{[0, \pi ]^{n}}\int \limits _{(0, 2\pi )^n}\int \limits _{\Omega }(T3) u_{S^2}(\text {d}\mathbf{u})u_{(0, 2\pi )}^{\otimes _n}(\text {d}\varvec{\theta }) \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) \le 0 \end{aligned}$$

for every \(\lambda \ge \lambda _0\), \(n \ge 2\), \({\mathfrak {t}}_n\) in \({\mathbb {T}}(n)\) and \(s = 1, 2, 3\). In fact, the LHS can be written as

$$\begin{aligned}&\sum _{k = 1}^{n} \int \limits _{[0, \pi ]^n} \pi _{k, n}^{4}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) \Bigg \{ 2 \lambda \varLambda _b + \int \limits _{(0, 2\pi )^n}\int \limits _{\Omega } \\&\quad \cos ^2\varphi _h \sin ^2\varphi _h \Big | {\fancyscript{D}}^{'} \big [ \mathbf{e}_{s}^{t} \text {B}(\mathbf{u}) \text {O}_{k, n}\big ({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }}\big ) \ \text {K}(\varphi _h, \theta _h) \\&\quad \times \text {O}_{k, n}\big ({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }}\big )^t \text {B}(\mathbf{u})^t \mathbf{e}_s \big ]\Big |^2u_{S^2}(\text {d}\mathbf{u})u_{(0, 2\pi )}^{\otimes _n}(\text {d}\varvec{\theta }) \Bigg \} \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) \end{aligned}$$

where the \(3 \times 3\) matrix \(\text {K}\) is defined by
$$\begin{aligned} \text {K}(\varphi , \theta ) \!:=\! \left( \begin{array}{c@{\quad }c@{\quad }c} 2 \cos ^2\theta \cos \varphi \sin \varphi &{} 2 \cos \theta \sin \theta \cos \varphi \sin \varphi &{} \cos \theta (\cos ^2\varphi - \sin ^2\varphi ) \\ 2 \cos \theta \sin \theta \cos \varphi \sin \varphi &{} 2 \sin ^2\theta \cos \varphi \sin \varphi &{} \sin \theta (\cos ^2\varphi - \sin ^2\varphi ) \\ \cos \theta (\cos ^2\varphi - \sin ^2\varphi ) &{} \sin \theta (\cos ^2\varphi - \sin ^2\varphi ) &{} -2 \cos \varphi \sin \varphi \\ \end{array} \right) . \end{aligned}$$

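As a quick sanity check (a hypothetical transcription of the display above into code, not part of the proof), one can verify numerically that \(\text {K}(\varphi , \theta )\) is symmetric, traceless and has all entries bounded by \(1\) in absolute value, since every entry is a product of \(\sin 2\varphi \) or \(\cos 2\varphi \) with sines and cosines of \(\theta \):

```python
import math

def K(phi, theta):
    """Entry-by-entry transcription of the matrix K(phi, theta) above."""
    c, s = math.cos(phi), math.sin(phi)
    ct, st = math.cos(theta), math.sin(theta)
    c2 = c * c - s * s  # cos^2(phi) - sin^2(phi) = cos(2 phi)
    return [
        [2 * ct * ct * c * s, 2 * ct * st * c * s, ct * c2],
        [2 * ct * st * c * s, 2 * st * st * c * s, st * c2],
        [ct * c2,             st * c2,             -2 * c * s],
    ]

# Symmetry, zero trace and entrywise bound, checked on a grid of angles.
for i in range(20):
    phi, theta = 0.1 + 0.15 * i, 0.2 + 0.3 * i
    M = K(phi, theta)
    assert all(abs(M[a][b] - M[b][a]) < 1e-12 for a in range(3) for b in range(3))
    assert abs(M[0][0] + M[1][1] + M[2][2]) < 1e-12
    assert all(abs(M[a][b]) <= 1 + 1e-12 for a in range(3) for b in range(3))
```

These elementary properties are consistent with the uniform bound on \(\max _{ij} |r_{ij}|^2\) used a few lines below.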
Then, after putting \(R = (r_{ij})_{ij} := \text {O}_{k, n} \text {K} \text {O}_{k, n}^t\) and \(f_{ij}^{(s)}(\mathbf{u}) := \big (\mathbf{e}_s \cdot \text {B}(\mathbf{u})\mathbf{e}_i\big ) \big (\mathbf{e}_s \cdot \text {B}(\mathbf{u})\mathbf{e}_j\big )\), one notes that

$$\begin{aligned} \left| {\fancyscript{D}}^{'} \sum _{ij} r_{ij} f_{ij}^{(s)}(\mathbf{u})\right| ^2 \le 9 \sum _{ij} |r_{ij}|^2 \big | {\fancyscript{D}}^{'} f_{ij}^{(s)}(\mathbf{u}) \big |^2 \end{aligned}$$

and that \(\max _{ij} |r_{ij}|^2 \le \Vert \text {O}_{k, n} \Vert ^{4}_{*} \cdot \Vert \text {K} \Vert ^{2}_{*} \le 36\). Whence,

$$\begin{aligned}&\sum _{k = 1}^{n} \int \limits _{[0, \pi ]^n} \pi _{k, n}^{4}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) \left\{ 2 \lambda \varLambda _b + \int \limits _{(0, 2\pi )^n}\int \limits _{\Omega }\right. \\&\qquad \left. \cos ^2\varphi _h \sin ^2\varphi _h \Big | {\fancyscript{D}}^{'} \big [ \mathbf{e}_{s}^{t} \text {B}(\mathbf{u}) \text {O}_{k, n}\big ({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }}\big ) \ \text {K}(\varphi _h, \theta _h) \right. \\&\qquad \left. \times \, \text {O}_{k, n}\big ({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\theta }}\big )^t \text {B}(\mathbf{u})^t \mathbf{e}_s \big ]\Big |^2u_{S^2}(\text {d}\mathbf{u})u_{(0, 2\pi )}^{\otimes _n}(\text {d}\varvec{\theta }) \right\} \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) \\&\quad \le \sum _{k = 1}^{n} \int \limits _{[0, \pi ]^n} \pi _{k, n}^{4}({\mathfrak {t}}_n, \Sigma [{\mathfrak {t}}_n, k] \overline{\varvec{\varphi }}) \\&\qquad \times \left\{ 2 \lambda \varLambda _b + 324 \cos ^2\varphi _h \sin ^2\varphi _h \int \limits _{\Omega } \sum _{ij} \big | {\fancyscript{D}}^{'} f_{ij}^{(s)}(\mathbf{u}) \big |^2 u_{S^2}(\text {d}\mathbf{u})\right\} \beta ^{\otimes _n}(\text {d}\varvec{\varphi }) \end{aligned}$$

and the RHS is zero when \(\lambda = \lambda _0({\fancyscript{D}}^{'}) := 81\int _{\Omega } \sum _{ij} \big | {\fancyscript{D}}^{'} f_{ij}^{(s)}(\mathbf{u}) \big |^2 u_{S^2}(\text {d}\mathbf{u})\): indeed, since \(\int _{0}^{\pi } \cos ^2\varphi _h \sin ^2\varphi _h \beta (\text {d}\varphi _h) = -\frac{1}{2}\varLambda _b\), the expression in curly brackets integrates to \(2 \lambda \varLambda _b - 162 \varLambda _b \int _{\Omega } \sum _{ij} \big | {\fancyscript{D}}^{'} f_{ij}^{(s)}(\mathbf{u}) \big |^2 u_{S^2}(\text {d}\mathbf{u})\), which vanishes exactly at \(\lambda = \lambda _0\). Therefore, in view of (196)–(197),

$$\begin{aligned} \frac{1}{n} \sum _{k= 1}^{n} \text {A}_{\lambda _0}(n+1, {\mathfrak {t}}_{n, k}) \le \left( 1 + \frac{3 \varLambda _b}{n}\right) \text {A}_{\lambda _0}(n, {\mathfrak {t}}_n) \end{aligned}$$

holds for any \(n \ge 2\). This inequality entails \(a_{\lambda _0}(n+1) \le (1 + \frac{3 \varLambda _b}{n}) a_{\lambda _0}(n)\), where \(a_{\lambda }(\nu ) := {\mathsf{E}}_t\Big [\text {A}_{\lambda }(\nu , \tau _{\nu })\ \big | \ \nu \Big ]\). At this stage, the same argument developed in A.1 shows that

$$\begin{aligned} a_{\lambda _0}(n) \le \frac{2 a_{\lambda _0}(2)}{2 + 3\varLambda _b} \cdot \frac{\varGamma (n + 3\varLambda _b)}{\varGamma (n) \varGamma (2 + 3\varLambda _b)} \end{aligned}$$

holds for every \(n \ge 2\), since \(2 + 3\varLambda _b > 0\) for any choice of \(b\) satisfying (2)–(3). Whence,

$$\begin{aligned} {\mathsf{E}}_t\left[ \int \limits _{\Omega _k} \big |{\fancyscript{D}}^{'} \text {S}_{k, s}\big |^2u_{S^2}(\text {d}\mathbf{u})\right] \le a_{\lambda _0}(1) e^{-t} + a_{\lambda _0}(2) e^{-t} \frac{e^{(1 + 3\varLambda _b)t} - 1}{1 + 3\varLambda _b} + \lambda _0 e^{\varLambda _b t} \end{aligned}$$

is valid for every \(t \ge 0\) with the proviso that \(\frac{e^{(1 + 3\varLambda _b)t} - 1}{1 + 3\varLambda _b} := t\) when \(\varLambda _b = -1/3\), concluding the proof.
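As a numerical sanity check of the last steps (not part of the proof), one can iterate the recursion \(a_{\lambda _0}(n+1) \le (1 + \frac{3 \varLambda _b}{n}) a_{\lambda _0}(n)\) with equality and compare the result against the Gamma-ratio bound; the value \(\varLambda _b = -0.4\) below is a hypothetical stand-in for the actual eigenvalue, chosen only so that \(2 + 3\varLambda _b > 0\):

```python
import math

Lam = -0.4   # hypothetical stand-in for Lambda_b, with 2 + 3*Lam > 0
a2 = 1.0     # normalized value of a(2)

def bound(n, Lam=Lam, a2=a2):
    """Gamma-ratio bound: 2 a(2)/(2+3 Lam) * Gamma(n+3 Lam)/(Gamma(n) Gamma(2+3 Lam))."""
    return (2 * a2 / (2 + 3 * Lam)
            * math.gamma(n + 3 * Lam) / (math.gamma(n) * math.gamma(2 + 3 * Lam)))

# Iterate the recursion with equality, the worst case allowed by the inequality,
# and check that it never exceeds the Gamma-ratio bound.
a, n = a2, 2
while n < 50:
    assert a <= bound(n) + 1e-12, (n, a, bound(n))
    a *= 1 + 3 * Lam / n
    n += 1
```

Since the equality iterate equals \(\varGamma (n + 3\varLambda _b) / (\varGamma (n) \varGamma (2 + 3\varLambda _b))\) exactly, the bound holds with the extra factor \(2/(2 + 3\varLambda _b) > 1\) to spare, mirroring the slack in the statement.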

Cite this article

Dolera, E., Regazzini, E. Proof of a McKean conjecture on the rate of convergence of Boltzmann-equation solutions. Probab. Theory Relat. Fields 160, 315–389 (2014).

Keywords

  • Berry–Esseen inequalities
  • Central limit theorem
  • Global analysis on \(S^2\)
  • Maxwellian molecules
  • Random measure
  • Wild–McKean sum

Mathematics Subject Classification (2000)

  • 60F05
  • 60G57
  • 82C40