Journal of Statistical Physics, Volume 172, Issue 3, pp 781–794

# Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model


## Abstract

We consider Hermitian random band matrices H in $$d \geqslant 1$$ dimensions. The matrix elements $$H_{xy},$$ indexed by $$x, y \in \varLambda \subset \mathbb {Z}^d,$$ are independent, uniformly distributed random variables if $$|x-y|$$ is less than the band width W, and zero otherwise. We upgrade the previous result on the quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size $$|\varLambda |$$ of the matrix.

## Keywords

Quantum dynamics · Random matrix theory · Anderson model

## 1 Introduction

Random band matrices $$H=\left( H_{xy}\right) _{x,y \in \varGamma }$$ represent systems on a large finite graph with a metric. They are natural intermediate models for studying quantum propagation in disordered systems, as they interpolate between Wigner matrices and random Schrödinger operators. The elements $$H_{xy}$$ are independent random variables with variance $$\sigma _{xy}^2=\mathbb {E}|H_{xy}|^2$$ depending on the distance between the two sites. The variance decays with the distance on the scale W, called the band width of the matrix H. This terminology comes from the simplest model, in which the graph is a path on N vertices labelled by $$\varGamma =\{1,2,\ldots , N\},$$ and the matrix elements $$H_{xy}$$ are zero if $$|x-y| \geqslant W.$$ If $$W=O(1)$$ we obtain a one-dimensional Anderson-type model (see [1]) and if $$W=N$$ we recover the Wigner matrix. In the general Anderson model, introduced in [1], a random on-site potential V is added to a deterministic Laplacian on a graph that is typically a regular box in $$\mathbb {Z}^d\,.$$ For higher-dimensional models in which the graph $$\varGamma$$ is a box in $$\mathbb {Z}^d$$, see [2].

In [3] it was proved that the quantum dynamics of a d-dimensional band matrix is given by a superposition of heat kernels up to time scales $$t \ll W^{d/3}\,.$$ Note that diffusion is expected to hold for $$t \sim W^{2}$$ for $$d=1$$ and up to any time for $$d \geqslant 3$$ when the thermodynamic limit is taken. The threshold d/3 on the exponent is due to technical estimates on Feynman graphs.

The approach of this paper is similar to the one in [3]. We normalize the entries of the matrix so that the rate of quantum jumps is of order one. In contrast with [3], in this paper double-rooted Feynman graphs are used to estimate the variance of the quantum diffusion. The main result of this paper upgrades the previous result on the convergence of the expectation of the quantum diffusion from [3] to convergence in high probability.

## 2 Model and Main Result

Let $$\mathbb {Z}^d$$ be the infinite lattice with the Euclidean norm $$|\cdot |_{\mathbb {Z}^d}$$ and let M be the number of points situated at distance at least 1 and at most W (with $$W \geqslant 2$$) from the origin, i.e.
\begin{aligned} M=M(W)=\big |\big \{ x \in \mathbb {Z}^d: 1\leqslant |x|_{\mathbb {Z}^d} \leqslant W\big \}\big |\,. \end{aligned}
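For small d and W the quantity M(W) can be computed by direct enumeration (note that $$M \sim C_d W^d$$ as $$W \rightarrow \infty$$). The following brute-force Python sketch is our own illustration, not part of the paper's argument:

```python
import itertools

def lattice_ball_count(d, W):
    # Count x in Z^d with 1 <= |x| <= W (Euclidean norm), by
    # enumerating the cube [-W, W]^d; since |x|^2 is an integer,
    # the condition is equivalent to 1 <= |x|^2 <= W^2.
    count = 0
    for x in itertools.product(range(-W, W + 1), repeat=d):
        if 1 <= sum(c * c for c in x) <= W * W:
            count += 1
    return count
```

For example, in one dimension this returns 2W, the points $$\pm 1, \ldots , \pm W$$.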
For simplicity, we avoid working directly on an infinite lattice.
Throughout our proof, we consider a d-dimensional finite periodic lattice $$\varLambda _N \subset \mathbb {Z}^d$$ ($$d\geqslant 2$$) of linear size N equipped with the Euclidean norm $$|\cdot |_{\mathbb {Z}^d}$$. Specifically, we take $$\varLambda _N$$ to be a cube centered at the origin with side length N, i.e.
\begin{aligned} \varLambda _N:=( [-N/2,N/2) \cap \mathbb {Z})^d\,. \end{aligned}
We regard $$\varLambda _N$$ as periodic, i.e. we equip it with periodic addition and the periodic distance
\begin{aligned} |x|:=\inf \{|x+N\nu |_{\mathbb {Z}^d}: \nu \in \mathbb {Z}^d \}\,. \end{aligned}
We analyze random matrices H with band width W and with elements $$H_{xy}$$, where x and y are indices of points in $$\varLambda _N$$. To introduce H, we first define the matrix
\begin{aligned} S_{xy}:=\frac{\varvec{\mathrm {1}}(1\leqslant |x-y|\leqslant W)}{(M-1)}\,. \end{aligned}
We consider $$A=A^*=(A_{xy})$$ a Hermitian random matrix whose upper triangular entries ($$A_{xy}:x \leqslant y$$) are independent random variables uniformly distributed on the unit circle $$\mathbb {S}^1 \subset \mathbb {C}$$ . We define the random band matrix $$(H_{xy})$$ through
\begin{aligned} H_{xy}&:=\sqrt{S_{xy}}A_{xy}\,. \end{aligned}
Note that H is Hermitian and $$|H_{xy}|^{2}=S_{xy}$$ .

Throughout our investigation we will use the simplified notation $$\sum \limits _{y_1}$$ for $$\sum \limits _{y_1 \in \varLambda _N}$$.

Our main quantity is
\begin{aligned} P(t,x)=|(e^{-itH/2})_{0x}|^{2}\,. \end{aligned}
The function P(t, x) describes the quantum transition probability of a particle starting at the origin and ending up at position x after time t.
Let $$\kappa >0$$ . We introduce the macroscopic time and space coordinates T and X, which are independent of W, and consider the microscopic time and space coordinates
\begin{aligned} t\;&=\; W^{d\kappa }T\,, \\ x\;&=\; W^{1+d \kappa /2}X\,. \end{aligned}
Using the definition of the quantum probability and the scaling that we have introduced before, we define the random variable that we are going to investigate by
\begin{aligned} Y_{T,\kappa , W}(\phi )\equiv Y_T(\phi ):=\sum _{x}P(W^{d\kappa }T, x)\phi \left( \frac{x}{W^{1+d\kappa /2}}\right) \,, \end{aligned}
(2.1)
where $$\phi \in C_b(\mathbb {R}^d)$$ is a test function in $$\mathbb {R}^d$$ .
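For small systems the observable (2.1) can be simulated directly. The following Python sketch is our own illustration (all function and variable names are ours); it samples one realization of $$Y_T(\phi )$$ in dimension $$d=1$$ by diagonalizing H:

```python
import numpy as np

def sample_Y(N, W, T, kappa, phi, rng):
    # One realization of Y_{T,kappa,W}(phi) from (2.1), for d = 1.
    x = np.arange(N)
    diff = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(diff, N - diff)           # periodic distance on the cycle
    band = (dist >= 1) & (dist <= W)
    M = band[0].sum()                           # M = 2W for d = 1, N > 2W + 1
    S = band / (M - 1)                          # S_xy = 1(1 <= |x-y| <= W)/(M-1)
    # Hermitian A with entries uniform on the unit circle
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(N, N))
    A = np.triu(np.exp(1j * theta), 1)
    A = A + A.conj().T
    H = np.sqrt(S) * A                          # H_xy = sqrt(S_xy) A_xy
    t = W ** kappa * T                          # microscopic time scale, d = 1
    ev, V = np.linalg.eigh(H)                   # e^{-itH/2} via spectral calculus
    U = (V * np.exp(-1j * t * ev / 2.0)) @ V.conj().T
    P = np.abs(U[0, :]) ** 2                    # P(t, x) = |(e^{-itH/2})_{0x}|^2
    x_signed = np.where(x <= N // 2, x, x - N)  # coordinates centred at 0
    return np.sum(P * phi(x_signed / W ** (1 + kappa / 2)))
```

With $$\phi \equiv 1$$, unitarity of $$e^{-itH/2}$$ forces $$Y_T(1)=\sum _{x}P(t,x)=1$$, which is a useful sanity check for the sketch.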

Our main result gives an estimate for the variance of the random variable $$Y_T(\phi )$$ up to time scales $$t=O(W^{d\kappa })$$ if $$\kappa <1/3$$ .

### Theorem 1

Fix $$T_0>0$$ and $$\kappa$$ such that $$0< \kappa <1/3\,.$$ Choose a real number $$\beta$$ satisfying $$0<\beta <2/3-2\kappa \,.$$ Then there exist $$C \geqslant 0$$ and $$W_0 \geqslant 0$$ depending only on $$T_0$$, $$\kappa$$ and $$\beta$$ such that for all $$T \in [0,T_0]\,,$$ $$W \geqslant W_0$$ and $$N \geqslant W^{1+\frac{d}{6}}$$ we have
\begin{aligned} Var(Y_T(\phi )) \leqslant \frac{C ||\phi ||^{2}_\infty }{W^{d\beta }}\,. \end{aligned}

### Remark 1

Using the estimate obtained in Theorem 1 and Chebyshev's inequality for the second moment, we obtain the convergence in high probability of the random variable $$Y_{T}(\phi )\,.$$ We expect that the same technique can be implemented for a graphical representation with 2p directed chains, $$p \in \mathbb {N}\,.$$ This approach should give similar estimates on the 2p-th moment of our random variable, which can then be used in Chebyshev's inequality to reach the desired conclusion.
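The Chebyshev step can be made explicit: for any $$\varepsilon >0$$,
\begin{aligned} \mathbb {P}\big (|Y_T(\phi )-\mathbb {E}Y_T(\phi )|\geqslant \varepsilon \big )\;\leqslant \;\frac{{{\mathrm{Var}}}(Y_T(\phi ))}{\varepsilon ^{2}}\;\leqslant \;\frac{C ||\phi ||^{2}_{\infty }}{\varepsilon ^{2}W^{d\beta }}\,, \end{aligned}
and the right-hand side tends to zero as $$W \rightarrow \infty$$ for every fixed $$\varepsilon$$.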

## 3 Graphical Representation

In this section we give the exact formula for the quantity of our analysis and we motivate the graphical representation that we will use to compute the upper bound.

### 3.1 Expansion in Non-backtracking Powers

First, as in [3] we define $$H^{(n)}_{x_0x_n}$$ by
\begin{aligned} H^{(0)}&:=\mathbb {Id}\,,\nonumber \\ H^{(1)}&:= H\,,\nonumber \\ H^{(n)}_{x_0 x_n}&:=\sum \limits _{x_1,\ldots {},x_{{n}-1}}\left( \prod _{i=0}^{n-2}\varvec{\mathrm {1}}(x_{i} \ne x_{i+2})\right) H_{x_0x_1}\cdots H_{x_{{n}-1}x_{n}} \;\;(n \geqslant 2)\,. \end{aligned}
(3.1)
The following result is proved in [3].

### Lemma 1

Let $$U_k$$ be the kth Chebyshev polynomial of the second kind and let
\begin{aligned} \alpha _k(t)\;:=\; \frac{2}{\pi } \int \limits _{-1}^{1}\sqrt{1-\zeta ^2}e^{-it\zeta } U_k(\zeta )d\zeta \,. \end{aligned}
We define the quantity $$a_m(t)\;:=\;\sum \limits _{k \geqslant 0}\frac{\alpha _{m+2k}(t)}{(M-1)^{k}}\,.$$ We have that
\begin{aligned} e^{-itH/2}\;=\;\sum \limits _{m\geqslant 0}a_m(t)H^{(m)}\,. \end{aligned}
(3.2)
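Lemma 1 rests on expanding the propagator in Chebyshev polynomials of the second kind: since the $$U_k$$ are orthogonal on $$[-1,1]$$ with respect to the weight $$\sqrt{1-\zeta ^2}$$, the coefficients $$\alpha _k(t)$$ satisfy $$e^{-it\zeta }=\sum _{k\geqslant 0}\alpha _k(t)U_k(\zeta )$$ for $$\zeta \in [-1,1]$$. A small numerical check of this expansion (Python, our own illustration):

```python
import numpy as np

def chebyshev_U(k, z):
    # U_k(z) via the three-term recurrence U_{k+1} = 2 z U_k - U_{k-1}.
    u_prev, u = np.ones_like(z), 2.0 * z
    if k == 0:
        return u_prev
    for _ in range(k - 1):
        u_prev, u = u, 2.0 * z * u - u_prev
    return u

def alpha(k, t, n=20_001):
    # alpha_k(t) = (2/pi) * int_{-1}^{1} sqrt(1-z^2) e^{-itz} U_k(z) dz,
    # by the trapezoidal rule (the integrand vanishes at z = +-1).
    z = np.linspace(-1.0, 1.0, n)
    dz = z[1] - z[0]
    f = np.sqrt(1.0 - z ** 2) * np.exp(-1j * t * z) * chebyshev_U(k, z)
    return (2.0 / np.pi) * np.sum(f) * dz
```

The partial sums $$\sum _{k \leqslant K}\alpha _k(t)U_k(\zeta )$$ converge to $$e^{-it\zeta }$$ rapidly, since $$|\alpha _k(t)|$$ decays essentially factorially in k (compare the bound on $$|a_n(t)|$$ in Lemma 9 below).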
We will use also the abbreviation
\begin{aligned} \langle X ; Y \rangle \;:=\; \mathbb {E}XY-\mathbb {E}X\mathbb {E}Y\,. \end{aligned}
Plugging in the definition of $$Y_T(\phi )$$ we have
\begin{aligned} {{\mathrm{Var}}}(Y_T(\phi ))\;=\;\langle Y_T(\phi ) ; Y_T(\phi )\rangle \;&=\;\sum _{y_1,y_2}\phi \left( \frac{y_1}{W^{1+d\kappa /2}}\right) \phi \left( \frac{y_2}{W^{1+d\kappa /2}}\right) \langle P(t,y_1) ; P(t,y_2)\rangle \\ \;&\leqslant \;||\phi ||^{2}_{\infty }\sum _{y_1}\sum _{y_2}|\langle P(t,y_1) ; P(t,y_2)\rangle |\,. \end{aligned}
Moreover,
\begin{aligned}&\langle P(t,y_1) ; P(t,y_2)\rangle \\&\quad \;=\;\sum \limits _{n_{11},n_{12}\geqslant 0}\sum \limits _{n_{21},n_{22} \geqslant 0}a_{n_{11}}(t)\overline{a_{n_{12}}(t)}a_{n_{21}}(t)\overline{a_{n_{22}}(t)}\big \langle H_{0y_1}^{(n_{11})}H_{y_10}^{(n_{12})} ; H_{0y_2}^{(n_{21})}H_{y_20}^{(n_{22})} \big \rangle \,. \end{aligned}
We summarize the graphical representation of $$\langle H_{0y_1}^{(n_{11})}H_{y_10}^{(n_{12})} ; H_{0y_2}^{(n_{21})}H_{y_20}^{(n_{22})} \rangle \,.$$

### 3.2 Graphical Representation

We define a graph $$\mathcal {L}$$ which consists of two rooted directed chains $$\mathcal {L}_1$$ and $$\mathcal {L}_2$$ by
\begin{aligned} \mathcal {L}(n_{11},n_{12},n_{21},n_{22})\;\equiv \; \mathcal {L} \;:=\;\mathcal {L}_1(n_{11},n_{12})\sqcup \mathcal {L}_2(n_{21},n_{22})\,, \end{aligned}
where $$\mathcal {L}_k(n_{k1},n_{k2})$$ is a rooted directed chain of length $$n_{k1}+n_{k2} \geqslant 1$$ for $$k \in \{1,2 \}.$$ We denote the set of vertices of the graph $$\mathcal {L}$$ by $$V(\mathcal {L})$$ and the set of edges by $$E(\mathcal {L})$$. Each of the rooted directed chains contains two distinguished vertices, denoted by $$r(\mathcal {L}_k)$$ ($$\textit{root}$$) and $$s(\mathcal {L}_k)$$ ($$\textit{summit}$$), the summit being the unique vertex such that the path $$r(\mathcal {L}_k)\rightarrow s(\mathcal {L}_k)$$ has length $$n_{k1}$$. Note that if $$n_{k1}=0$$ or $$n_{k2}=0$$ then $$r(\mathcal {L}_k)=s(\mathcal {L}_k)$$ . Using the orientation of the edges, for each $$e \in E(\mathcal {L})$$ we call the vertex $$a(e) \in V(\mathcal {L})$$ the predecessor and the vertex $$b(e) \in V(\mathcal {L})$$ the successor (see Fig. 1). Similarly, for each vertex $$i \in V(\mathcal {L})$$ , we denote the adjacent vertices a(i) and b(i) as the predecessor and the successor of i (see Fig. 2). The root and the summit are drawn using white dots and all other vertices using black dots. Hence, the set of vertices can be split as $$V(\mathcal {L})=V_{w}(\mathcal {L})\sqcup V_{b}(\mathcal {L}),$$ where the subscript w stands for the white vertices and b for the black vertices.
Each vertex $$i \in V(\mathcal {L})$$ carries a $$\textit{label}$$ $$x_i \in \varLambda _N$$ . The labels $$\varvec{\mathrm {x}}=(x_i)_{i \in V(\mathcal {L})}$$ can be split according to the needs, e.g. $$\varvec{\mathrm {x}}=(\varvec{\mathrm {x}}_1, \varvec{\mathrm {x}}_2),$$ where $$\varvec{\mathrm {x}}_k:=(\varvec{\mathrm {x}}_i)_{i \in V(\mathcal {L}_k)},\; k \in \{1,2 \},$$ or $$\varvec{\mathrm {x}}=(\varvec{\mathrm {x}}_b,\varvec{\mathrm {x}}_w),\; \varvec{\mathrm {x}}_b:=(x_i)_{i \in V_b(\mathcal {L})}$$ and $$\varvec{\mathrm {x}}_w:=(x_i)_{i \in V_w(\mathcal {L})}\,.$$
For each configuration of labels $$\varvec{\mathrm {x}}$$ we assign a lumping $$\varGamma =\varGamma (\varvec{\mathrm {x}})$$ of the set of edges $$E(\mathcal {L})$$ as in [3] . A lumping is an equivalence relation on $$E(\mathcal {L})$$ . We use the notation $$\varGamma =\{\gamma \}_{\gamma \in \varGamma }$$ where $$\gamma \in \varGamma$$ is a lump, i.e. an equivalence class of $$\varGamma$$ . The lumping $$\varGamma = \varGamma (\varvec{\mathrm {x}})$$ associated with the labels $$\varvec{\mathrm {x}}$$ is given by the equivalence relation
\begin{aligned} e \;\sim \; e' \;\Leftrightarrow \; \{x_{a(e)}, x_{b(e)} \}\;=\;\{x_{a(e')}, x_{b(e')} \}\,. \end{aligned}
(3.3)
The summation over $$\varvec{\mathrm {x}}$$ is performed with respect to the indicator function
\begin{aligned}&Q_{y_1,y_2}(\varvec{\mathrm {x}})\\&\quad \;:=\;\varvec{\mathrm {1}}(x_{r(\mathcal {L}_1)}=0)\varvec{\mathrm {1}}(x_{r(\mathcal {L}_2)}=0)\varvec{\mathrm {1}}(x_{s(\mathcal {L}_1)}=y_1)\varvec{\mathrm {1}}(x_{s(\mathcal {L}_2)}=y_2)\prod \limits _{i \in V_b(\mathcal {L})}\varvec{\mathrm {1}}(x_{a(i)}\ne x_{b(i)})\,. \end{aligned}
Throughout the proof we will use the notation
\begin{aligned} x_{e}\;=\;(x_{a(e)}, x_{b(e)})\,. \end{aligned}
Using the graph $$\mathcal {L}$$ we may now write the covariance as
\begin{aligned} \big \langle H_{0y_1}^{(n_{11})}H_{y_10}^{(n_{12})} ; H_{0y_2}^{(n_{21})}H_{y_20}^{(n_{22})}\big \rangle \;=\;\sum \limits _{\varvec{\mathrm {x}} \in \varLambda _N^{V(\mathcal {L})}}Q_{y_1,y_2}(\varvec{\mathrm {x}})A(\varvec{\mathrm {x}})\,, \end{aligned}
(3.4)
where
\begin{aligned} A(\varvec{\mathrm {x}})\;=\;\mathbb {E}\prod \limits _{e \in E(\mathcal {L})}H_{x_e}-\mathbb {E}\prod \limits _{e \in E(\mathcal {L}_1)}H_{x_e}\mathbb {E}\prod \limits _{e \in E(\mathcal {L}_2)}H_{x_e}\,. \end{aligned}
(3.5)
We further define the $$\textit{value}$$ of the lumping $$\varGamma$$ by
\begin{aligned} V_{y_1, y_2}(\varGamma )\;:=\;\sum \limits _{\varvec{\mathrm {x}}}\varvec{\mathrm {1}}(\varGamma (\varvec{\mathrm {x}})\;=\;\varGamma )Q_{y_1,y_2}(\varvec{\mathrm {x}})A(\varvec{\mathrm {x}})\,. \end{aligned}
Let $$\mathfrak {P}_{c}(E(\mathcal {L}))$$ be the set of connected even lumpings, i.e. the set of all lumpings $$\varGamma$$ for which each lump $$\gamma \in \varGamma$$ has even size and there exists $$\gamma \in \varGamma$$ such that $$\gamma \cap E(\mathcal {L}_k) \ne \emptyset \,,$$ for $$k \in \{1,2\}\,.$$

Using that $$\mathbb {E}H_{xy}=0$$ , it is not hard to see that the graphical representation of the variance yields the following result (for further details, see [4]).

### Lemma 2

We have that
\begin{aligned} \big \langle H_{0y_1}^{(n_{11})}H_{y_10}^{(n_{12})} ; H_{0y_2}^{(n_{21})}H_{y_20}^{(n_{22})}\big \rangle \;=\; \sum \limits _{\varGamma \in \mathfrak {P}_{c}(E(\mathcal {L}))}V_{y_1,y_2}(\varGamma )\,. \end{aligned}
We define the set of all connected pairings
\begin{aligned} \mathfrak {M}_c\;:=\;\bigsqcup \limits _{n_{11}, n_{12}, n_{21}, n_{22}}\{ \varPi \in \mathfrak {P}_c(E(\mathcal {L}(n_{11}, n_{12}, n_{21}, n_{22})))\; : \; |\pi |=2\,, \forall \pi \in \varPi \}\,. \end{aligned}
We call the lumps $$\pi \in \varPi$$ of a pairing $$\varPi$$ $$\textit{bridges}$$. Moreover, with each pairing $$\varPi \in \mathfrak {M}_c$$ we associate its underlying graph $$\mathcal {L}(\varPi )$$, and regard $$n_{11}(\varPi )$$ and $$n_{12}(\varPi )$$, $$n_{21}(\varPi )$$ and $$n_{22}(\varPi )$$ as functions on $$\mathfrak {M}_c$$ in self-explanatory notation. We abbreviate $$V(\varPi )=V(\mathcal {L}(\varPi ))$$ and $$E(\varPi )=E(\mathcal {L}(\varPi ))$$. We refer to $$V(\varPi )$$ as the set of vertices of $$\varPi$$ and to $$E(\varPi )$$ as the set of edges of $$\varPi$$ .
Let us define the indicator function
\begin{aligned} J_{ \{ e,e' \} }(\varvec{\mathrm {x}})\;:=\;\varvec{\mathrm {1}}(x_{a(e)}=x_{b(e')})\varvec{\mathrm {1}}(x_{a(e')}=x_{b(e)})\,. \end{aligned}
(3.6)
Using the same reasoning as in Section 4 of [4] and Equation 4.14 of [4], we obtain the following bound.

### Lemma 3

We have
\begin{aligned}&|\langle P(t,y_1); P(t,y_2) \rangle |\nonumber \\&\quad \leqslant \;\sum \limits _{\varPi \in \mathfrak {M}_c}\big |a_{n_{11}(\varPi )}(t)\overline{a_{n_{12}(\varPi )}(t)}a_{n_{21}(\varPi )}(t)\overline{a_{n_{22}(\varPi )}(t)}\big |\sum _{\varvec{\mathrm {x}}}Q_{y_1,y_2}(\varvec{\mathrm {x}})\prod \limits _{\{e,e'\} \in \varPi }S_{x_e}\prod \limits _{\pi \in \varPi }J_{\pi }(\varvec{\mathrm {x}})\,. \end{aligned}
(3.7)

### 3.3 Collapsing of Parallel Bridges

We further construct as in [4] the skeleton $$\varSigma =S(\varPi )$$ of a pairing $$\varPi \in \mathfrak {M}_c$$ by collapsing all parallel bridges of $$\varPi$$ . By definition the bridges $$\{e_1, e'_1\}$$ and $$\{e_2, e'_2 \}$$ are $$\textit{parallel}$$ if $$b(e_1)=a(e_2)\in V_b(\varPi )$$ and $$b(e'_2)=a(e'_1)\in V_b(\varPi )$$ . To each $$\varPi \in \mathfrak {M}_c$$ we associate a couple $$(\varSigma , l_{\varSigma })$$, where $$\varSigma \in \mathfrak {M}_c$$ has no parallel bridges and $$l_{\varSigma }:=(l_\sigma )_{\sigma \in \varSigma } \in \mathbb {N}^{\varSigma }$$ . The integer $$l_{\sigma }$$ denotes the number of parallel bridges of $$\varPi$$ that were collapsed into the bridge $$\sigma$$ of $$\varSigma \,.$$ Conversely, for any given couple $$(\varSigma , l_{\varSigma }),$$ where $$\varSigma \in \mathfrak {M}_c$$ has no parallel bridges and $$l_\varSigma \in \mathbb {N}^{\varSigma }$$, we define $$\varPi =G_{l_{\varSigma }}(\varSigma )$$ as the pairing obtained from $$\varSigma$$ by replacing for each bridge $$\sigma \in \varSigma$$ , the bridge $$\sigma$$ with $$l_{\sigma }$$ parallel bridges (Fig. 3). This construction gives a bijective mapping $$\varPi \longleftrightarrow (\varSigma , l_{\varSigma })\,.$$ We further define the set of admissible skeletons as
\begin{aligned} \mathfrak {G}\;:=\;\{S(\varPi ): \varPi \in \mathfrak {M}_c \}\,. \end{aligned}
(3.8)
Note that all $$\varSigma \in \mathfrak {G}$$ are connected.

The following result is straightforward to check from the definition of $$\mathfrak {G}$$; see Lemma 7.4 (ii) in [3].

### Lemma 4

Let $$\{e, e'\} \in \varSigma$$. Then e and $$e'$$ are adjacent only if $$e\cap e' \in V_w(\varSigma )\,.$$

In the following we rewrite the right-hand side of (3.7) using the summation over skeleton pairings $$\varSigma =S(\varPi )$$, followed by the different ways of expanding the bridges of $$\varSigma$$ . For this, let $$\varPi =G_{l_\varSigma }(\varSigma )$$ . We further define $$|l_{\varSigma }|:=\sum _{\sigma \in \varSigma }l_{\sigma }$$ for $$\varSigma \in \mathfrak {G}$$ and $$l_{\varSigma } \in \mathbb {N}^{\varSigma }$$. For the skeleton $$\varSigma \in \mathfrak {G}$$ of the pairing $$\varPi =G_{l_{\varSigma }}(\varSigma )$$ we use the notation $$n_{ij}(\varSigma , l_{\varSigma })$$ for $$n_{ij}(\varPi )$$, for all $$i, j \in \{1,2\}$$.

Parametrising $$\varPi$$ using $$\varSigma$$ and $$l_{\varSigma }$$ and neglecting the non-backtracking condition in the definition of $$Q_{y_1,y_2}(\varvec{\mathrm {x}})$$, we obtain the following upper bound (for full details see Lemma 7.6 in [3]).
\begin{aligned}&\sum \limits _{\varvec{\mathrm {x}} \in \varLambda _N^{V(\varSigma )}}Q_{y_1, y_2}(\varvec{\mathrm {x}})\prod \limits _{\{e,e'\} \in \varSigma }\left( S^{l_{ \{ e,e' \} } }\right) _{x_{e}}\prod \limits _{\sigma \in \varSigma }J_{\sigma }(\varvec{\mathrm {x}})\nonumber \\&\quad \leqslant \;\sum \limits _{\varvec{\mathrm {x}} \in \varLambda _N^{V(\varSigma )}}\varvec{\mathrm {1}}(0=x_{r(\mathcal {L}_1(\varSigma ))})\varvec{\mathrm {1}}(0=x_{r(\mathcal {L}_2(\varSigma ))})\varvec{\mathrm {1}}(y_1=x_{s(\mathcal {L}_1(\varSigma ))}) \varvec{\mathrm {1}}(y_2=x_{s(\mathcal {L}_2(\varSigma ))})\nonumber \\&\quad \prod \limits _{\{e,e'\} \in \varSigma }\left( S^{l_{ \{ e,e' \} } }\right) _{x_{e}}\prod \limits _{\sigma \in \varSigma }J_{\sigma }(\varvec{\mathrm {x}})\,. \end{aligned}
(3.9)
Let us define
\begin{aligned} R(\varSigma )\;:=\;\sum \limits _{\varvec{\mathrm {x}}\in \varLambda _N^{V(\varSigma )}}\varvec{\mathrm {1}}(x_{r(\mathcal {L}_1(\varSigma ))}=0)\varvec{\mathrm {1}}(x_{r(\mathcal {L}_2(\varSigma ))}=0)\prod \limits _{\{e,e'\} \in \varSigma }\left( S^{l_{ \{ e,e' \} } }\right) _{x_{e}}\prod \limits _{\sigma \in \varSigma }J_{\sigma }(\varvec{\mathrm {x}})\,. \end{aligned}
The following result is obtained using (3.9).

### Lemma 5

We have that
\begin{aligned}&\sum \limits _{y_1}\sum \limits _{y_2}\langle P(t,y_1) ; P(t,y_2)\rangle \\&\quad \leqslant \sum \limits _{\varSigma \in \mathfrak {G}}\sum \limits _{l_{\varSigma }}\big |a_{n_{11}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{12}(\varSigma , l_{\varSigma })}(t)}a_{n_{21}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{22}(\varSigma , l_{\varSigma })}(t)}\big |R(\varSigma )\,. \end{aligned}

The following result follows easily from the definition of $$S_{xy}$$.

### Lemma 6

Let $$l \in \mathbb {N}$$ . For each $$x,y \in \varLambda _N$$ we have
1. (i)

$$\sum \limits _{y}(S^l)_{xy}=(\frac{M}{M-1})^{l}\,.$$

2. (ii)

$$(S^l)_{xy}\;\leqslant \;(\frac{M}{M-1})^{l-1}\frac{1}{M-1}\,.$$
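Both parts of Lemma 6 can be checked numerically for the one-dimensional periodic model. The sketch below is our own illustration (names are ours); it builds S on a cycle of length N and lets one verify the row-sum identity (i) and the entrywise bound (ii) for small powers:

```python
import numpy as np

def step_profile(N, W):
    # The matrix S of Section 2 on the periodic cycle of length N (d = 1):
    # S_xy = 1(1 <= |x - y| <= W) / (M - 1), with M = 2W provided N > 2W + 1.
    x = np.arange(N)
    diff = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(diff, N - diff)
    M = 2 * W
    S = ((dist >= 1) & (dist <= W)) / (M - 1)
    return S, M
```

Each row of S has exactly M nonzero entries of size $$1/(M-1)$$, so the row sums of $$S^l$$ (computed, e.g., with `np.linalg.matrix_power`) equal $$(M/(M-1))^{l}$$, and every entry of $$S^l$$ is at most $$(M/(M-1))^{l-1}/(M-1)$$.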

### 3.4 Orbits of Vertices

Let us fix $$\varSigma \in \mathfrak {G}$$ . On the set of vertices $$V(\varSigma )$$ we construct the $$\textit{orbits of vertices}$$ as in [3] . We define $$\tau : V(\varSigma ) \rightarrow V(\varSigma )$$ as follows. Let $$i \in V(\varSigma )$$ and let e be the unique edge such that $$\{\{i, b(i)\}, e\} \in \varSigma$$ . Then, for any vertex i of $$\varSigma \in \mathfrak {G}$$ we define $$\tau i :=b(e)$$. We denote the orbit of the vertex $$i \in \varSigma$$ by $$[i]\;:=\; \{ \tau ^n i : n \in \mathbb {N}\}$$ .

We order the edges of $$\varSigma$$ in some arbitrary fashion and denote this order by <. Each bridge $$\sigma \in \varSigma$$ “sits between” the orbits $$\zeta _1(\sigma )$$ and $$\zeta _2(\sigma )$$. More precisely, let $$\sigma =\{e , e'\}$$ with $$e <e'$$ and $$e=\{ i, b(i)\}$$ . Then, $$\zeta _1(\sigma ):=[i]$$ and $$\zeta _2(\sigma ):=[b(i)]$$.

Let $$Z(\varSigma ):=\{[i] : i \in V(\varSigma )\}$$ be the set of orbits of $$\varSigma$$ . This set contains four distinguished orbits $$\{ [r(\mathcal {L}_1)],[r(\mathcal {L}_2)],[s(\mathcal {L}_1)],[s(\mathcal {L}_2)] \}$$ which need not be distinct (Fig. 4). Let $$|\varSigma |$$ be the number of bridges of the skeleton $$\varSigma \in \mathfrak {G}$$ and let $$L(\varSigma )=|Z^*(\varSigma )|$$ with $$Z^*(\varSigma ):=Z(\varSigma )\setminus \{[r(\mathcal {L}_1)],[r(\mathcal {L}_2)]\}$$ .

The following result is an adaptation of the $$\textit{2/3-rule}$$ introduced in Lemma 7.7 of [3] .

### Lemma 7

We have the inequality
\begin{aligned} L(\varSigma )\;\leqslant \;\frac{2|\varSigma |}{3}+\frac{2}{3}\,. \end{aligned}

### Proof

Let $$Z'(\varSigma ):=Z(\varSigma )\setminus \{ [r(\mathcal {L}_1)], [r(\mathcal {L}_2)], [s(\mathcal {L}_1)], [s(\mathcal {L}_2)]\}$$ . Using the same reasoning as in the proof of the $$\textit{2/3 rule}$$ in [3] we obtain that each orbit in $$Z'(\varSigma )$$ contains at least 3 vertices.

The total number of vertices of $$\varSigma$$ not including $$\{r(\mathcal {L}_1),r(\mathcal {L}_2),s(\mathcal {L}_1),s(\mathcal {L}_2) \}$$ is $$2|\varSigma |-4$$ . It follows that $$3|Z'(\varSigma )|\;\leqslant \; 2|\varSigma |-4 \Leftrightarrow |Z'(\varSigma )|\;\leqslant \; 2|\varSigma |/3-4/3$$.

Using that $$|Z^*(\varSigma )|\; \leqslant \; |Z'(\varSigma )|+2$$, we obtain $$|Z^*(\varSigma )|\;\leqslant \; 2|\varSigma |/3+2/3$$ . $$\square$$

We remark that Lemma 7 is sharp in the sense that there exists $$\varSigma \in \mathfrak {G}$$ for which the estimate of Lemma 7 saturates.

## 4 The Case $$|\varSigma |\;\geqslant \; 3$$

Using Lemma 7 and the same argument as in Section 7.5 of [3] we obtain the following result.

### Lemma 8

Let $$\varSigma \in \mathfrak {G}$$ and $$l_{\varSigma } \in \mathbb {N}^{\varSigma }$$. We have that
\begin{aligned} R(\varSigma )\;\leqslant \; C \left( \frac{M}{M-1}\right) ^{|l_{\varSigma }|}M^{-|\varSigma |/3+ 2/3}\,. \end{aligned}

### 4.1 Estimation of the Variance for $$|l_{\varSigma }| \ll M^{1/3}$$

Let $$\mu < \frac{1}{3}$$ . In the summation of Lemma 5 we introduce a cut-off at $$|l_{\varSigma }|\leqslant M^{\mu }$$ . We define
\begin{aligned} E^{\leqslant } \;:=\; \sum \limits _{\varSigma \in \mathfrak {G}}\sum \limits _{|l_{\varSigma }|\leqslant M^{\mu }}\big |a_{n_{11}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{12}(\varSigma , l_{\varSigma })}(t)}a_{n_{21}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{22}(\varSigma , l_{\varSigma })}(t)}\big |R(\varSigma )\,. \end{aligned}
The following result is proved in Lemma 7.9 of [3].

### Lemma 9

1. (i)

For any time t and for any $$n \in \mathbb {N}$$ we have $$|a_n(t)|\leqslant \frac{Ct^n}{n!}\,,$$ for some constant $$C\,.$$

2. (ii)

We have $$\sum \limits _{n\geqslant 0}|a_n(t)|^2=1+O(M^{-1})\,,$$ uniformly in $$t \in \mathbb {R}$$ .

A new estimate on $$\sum _{l_{\varSigma }}\varvec{\mathrm {1}}(|l_{\varSigma }|\;\leqslant \; M^{\mu })|a_{n_{11}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{12}(\varSigma , l_{\varSigma })}(t)}a_{n_{21}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{22}(\varSigma , l_{\varSigma })}(t)}|$$ is established in the following lemma. The new technique is based on splitting the summation according to the bridges that touch the rooted directed chains (Fig. 5).

### Lemma 10

For any $$\varSigma \in \mathfrak {G}$$ with $$|\varSigma |\geqslant 3$$ we have
\begin{aligned} \sum \limits _{l_{\varSigma }}\varvec{\mathrm {1}}(|l_{\varSigma }|\;\leqslant \; M^{\mu })|a_{n_{11}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{12}(\varSigma , l_{\varSigma })}(t)}a_{n_{21}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{22}(\varSigma , l_{\varSigma })}(t)}| \leqslant \frac{CM^{\mu (|\varSigma |-2)}}{(|\varSigma |-3)!}\,. \end{aligned}

### Proof

Let $$\varSigma \in \mathfrak {G}$$ . We denote each path by $$\mathcal {S}_1 \equiv r(\mathcal {L}_1(\varSigma ))\rightarrow s(\mathcal {L}_1(\varSigma ))$$, $$\mathcal {S}_2 \equiv s(\mathcal {L}_1(\varSigma ))\rightarrow r(\mathcal {L}_1(\varSigma ))$$, $$\mathcal {S}_3 \equiv r(\mathcal {L}_2(\varSigma ))\rightarrow s(\mathcal {L}_2(\varSigma ))$$ and $$\mathcal {S}_4 \equiv s(\mathcal {L}_2(\varSigma ))\rightarrow r(\mathcal {L}_2(\varSigma ))$$ . There always exists a bridge connecting $$\mathcal {S}_{i}$$ and $$\mathcal {S}_{j}$$ , for $$i \ne j\,.$$ Without loss of generality we choose $$\sigma _1$$ connecting $$\mathcal {S}_1$$ and $$\mathcal {S}_2$$ .

We have the following cases :

(i) There is a bridge $$\sigma _2 \in \varSigma$$ between $$\mathcal {S}_3$$ and $$\mathcal {S}_4$$ .

Let $$\bar{\varSigma }\; :=\; \varSigma \setminus \{\sigma _1, \sigma _2 \}$$ . There exist functions $$f_1(l_{\bar{\varSigma }})$$, $$f_2(l_{\bar{\varSigma }})$$, $$f_3(l_{\bar{\varSigma }})$$ and $$f_4(l_{\bar{\varSigma }})$$ such that $$n_{11}(\varSigma , l_{\varSigma })=f_1(l_{\bar{\varSigma }})+l_{\sigma _1}$$ and $$n_{12}(\varSigma , l_{\varSigma })=f_2(l_{\bar{\varSigma }})+l_{\sigma _1}$$, $$n_{21}(\varSigma , l_{\varSigma })=f_3(l_{\bar{\varSigma }})+l_{\sigma _2}$$ and $$n_{22}(\varSigma , l_{\varSigma })=f_4(l_{\bar{\varSigma }})+l_{\sigma _2}$$ . Note that $$n_{11}(\varSigma , l_{\varSigma })$$ and $$n_{21}(\varSigma , l_{\varSigma })$$ do not represent the same linear combination of elements of $$l_\varSigma$$ .

We get
\begin{aligned}&\sum \limits _{l_{\varSigma }}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })|a_{n_{11}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{12}(\varSigma , l_{\varSigma })}(t)}a_{n_{21}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{22}(\varSigma , l_{\varSigma })}(t)}|\\&\quad \;=\;\sum \limits _{l_{\bar{\varSigma }}}\sum \limits _{l_{\sigma _1}}\sum \limits _{l_{\sigma _2}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })|a_{f_1(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)\overline{a_{f_2(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)}a_{f_3(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)\overline{a_{f_4(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)}|\,. \end{aligned}
Using the elementary inequality $$|abcd| \leqslant |a|^{2}|c|^{2}+|b|^{2}|d|^{2}$$ we obtain
\begin{aligned}&\sum \limits _{l_{\bar{\varSigma }}}\sum \limits _{l_{\sigma _1}}\sum \limits _{l_{\sigma _2}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })|a_{f_1(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)\overline{a_{f_2(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)}a_{f_3(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)\overline{a_{f_4(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)}|\nonumber \\&\quad \leqslant \;\sum \limits _{l_{\bar{\varSigma }}}\sum \limits _{l_{\sigma _1}}\sum \limits _{l_{\sigma _2}} \varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })(|a_{f_1(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)|^{2}|a_{f_3(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)|^2\nonumber \\&\qquad +|a_{f_2(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)|^2|a_{f_4(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)|^2)\,. \end{aligned}
(4.1)
Using the inequality between the indicator functions $$\sum \limits _{l_{\bar{\varSigma }}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })\leqslant \sum \limits _{l_{\bar{\varSigma }}}\varvec{\mathrm {1}}(|l_{\bar{\varSigma }}|\leqslant M^{\mu })$$ and Lemma 9 (ii) we obtain that
\begin{aligned}&\sum \limits _{l_{\bar{\varSigma }}}\sum \limits _{l_{\sigma _1}}\sum \limits _{l_{\sigma _2}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })|a_{f_1(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)|^{2}|a_{f_3(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)|^2\\&=\sum \limits _{{l_{\bar{\varSigma }}}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })\sum \limits _{l_{\sigma _1}}|a_{f_1(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)|^{2} \sum \limits _{l_{\sigma _2}}|a_{f_3(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)|^2\\&\leqslant \; \sum \limits _{{l_{\bar{\varSigma }}}} \varvec{\mathrm {1}}(|l_{\bar{\varSigma }}|\leqslant M^{\mu })2\,. \end{aligned}
Note that the same argument holds for $$|a_{f_2(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)|^2|a_{f_4(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)|^2$$.
Using that $$|\bar{\varSigma }|=|\varSigma |-2$$ we obtain that
\begin{aligned} \sum \limits _{l_1+l_2+\cdots {} +l_{|\bar{\varSigma }|}\leqslant M^{\mu }}2&\;\leqslant \; \sum \limits _{l_1=1}^{M^{\mu }}\sum \limits _{l_2+\cdots {}+l_{|\bar{\varSigma }|}\leqslant M^{\mu }-l_1}2\nonumber \\&\;\leqslant \; \sum \limits _{l_1=1}^{M^{\mu }}2{M^{\mu }-l_{1}-1 \atopwithdelims (){|\varSigma |-3}}\nonumber \\&\;\leqslant \; C\frac{M^{\mu (|\varSigma |-2)}}{(|\varSigma |-3)!}\,. \end{aligned}
(4.2)
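The counting behind (4.2) is the standard stars-and-bars fact that the number of k-tuples of positive integers with sum at most L equals $$\binom{L}{k}$$. A brute-force check of this closely related counting fact (our own illustration):

```python
from itertools import product
from math import comb

def count_bounded_compositions(k, L):
    # Number of tuples (l_1, ..., l_k) with every l_i >= 1 and
    # l_1 + ... + l_k <= L, by exhaustive enumeration (small k, L only).
    return sum(1 for ls in product(range(1, L + 1), repeat=k)
               if sum(ls) <= L)
```

The identity follows from the bijection sending $$(l_1,\ldots ,l_k)$$ to the k-subset of partial sums $$\{l_1, l_1+l_2, \ldots \} \subset \{1,\ldots ,L\}$$.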
(ii) There is no bridge connecting $$\mathcal {S}_3$$ and $$\mathcal {S}_4$$ . In this case we consider two bridges $$\sigma _3$$ and $$\sigma _4$$ that are touching $$\mathcal {S}_3$$ and $$\mathcal {S}_4$$ respectively. We further define $$\bar{\varSigma }\;:=\;\varSigma \setminus \{ \sigma _1, \sigma _3, \sigma _4 \}$$ .
We have that
\begin{aligned}&\sum \limits _{l_{{\varSigma }}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })|a_{n_{11}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{12}(\varSigma , l_{\varSigma })}(t)}a_{n_{21}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{22}(\varSigma , l_{\varSigma })}(t)}|\\&\quad =\sum \limits _{l_{\bar{\varSigma }}}\sum \limits _{l_{\sigma _1}}\sum \limits _{l_{\sigma _3}}\sum \limits _{l_{\sigma _4}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })| a_{f_1(\bar{l},l_{\sigma _3},l_{\sigma _4})+l_{\sigma _1}}(t)\overline{a_{f_2(\bar{l},l_{\sigma _3},l_{\sigma _4})+l_{\sigma _1}}(t)}a_{f_3(\bar{l})+\eta _3 l_{\sigma _3}}(t)\overline{a_{f_4(\bar{l})+\eta _4 l_{\sigma _4}}(t)}|\,, \end{aligned}
where $$\eta _3\,, \eta _4 \in \{1, 2\}\,.$$ Using the same inequality as in (4.1) and that $$l_{\sigma _3}$$ and $$l_{\sigma _1}$$ are distinct variables, we obtain that
\begin{aligned}&\sum \limits _{l_{\bar{\varSigma }}}\sum \limits _{l_{\sigma _1}}\sum \limits _{l_{\sigma _3}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })|a_{f_1(\bar{l},l_{\sigma _3},l_{\sigma _4})+l_{\sigma _1}}(t)|^{2}|a_{f_3(\bar{l})+\eta _3 l_{\sigma _3}}(t)|^{2}\nonumber \\&\quad \leqslant \; \sum \limits _{l_{\bar{\varSigma }}}\sum \limits _{l_{\sigma _1}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })| a_{f_1(\bar{l},l_{\sigma _3},l_{\sigma _4})+l_{\sigma _1}}(t)|^{2}\sum \limits _{l_{\sigma _3}}|a_{f_3(\bar{l})+\eta _3 l_{\sigma _3}}(t)|^{2}\,. \end{aligned}
(4.3)
The same holds for $$\overline{a_{f_2(\bar{l},l_{\sigma _3},l_{\sigma _4})+l_{\sigma _1}}(t)}$$ and $$\overline{a_{f_4(\bar{l})+\eta _4 l_{\sigma _4}}(t)}$$ .

Now the claim follows like in (i). $$\square$$

Let $$m=|\varSigma |$$ . Using Lemma 10 and the same reasoning as in Section 7.6 of [3] we obtain
\begin{aligned} E^{\leqslant }&\;\leqslant \; C \sum \limits _{m \leqslant M^{\mu }}\frac{M^{\mu (m-2)}}{(m-3)!}\frac{M^{2/3}}{M^{m/3}}\left( \frac{M}{M-1}\right) ^{M^{\mu }} \end{aligned}
(4.4)
It is easy to see, as in Section 7.6 of [3] , that $$|\{\varSigma : |\varSigma |=m\}|\;\leqslant \; 2^{m}m!$$ .
Finally, since the summand decreases geometrically in $$m$$ for large M, the $$m=3$$ term dominates the sum and we obtain that
\begin{aligned} E^{\leqslant }&\;\leqslant \;C \sum \limits _{m=3}^{M^{\mu }}2^{m}m!\frac{M^{\mu (m-2)}}{(m-3)!}\frac{M^{2/3}}{M^{m/3}}\left( \frac{M}{M-1}\right) ^{M^{\mu }}\nonumber \\&\;\leqslant \; CM^{\mu (3-2)}M^{2/3-3/3}\nonumber \\&\;\leqslant \;CM^{\mu -1/3}\,. \end{aligned}
(4.5)
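As a sanity check (not part of the proof), the bound in (4.5) can be probed numerically. The values of M and $$\mu$$ below are our own illustrative choice, not parameters fixed by the paper, and the constant 1000 is a generous stand-in for C.

```python
from math import factorial

# Illustrative numerical check of (4.5): for a sample M and mu (our choice),
# the sum over m is of order M^(mu - 1/3) up to a moderate constant.
M, mu = 10**8, 0.2
top = int(M**mu)  # summation limit M^mu

def term(m):
    # summand of (4.5): 2^m * m!/(m-3)! * M^{mu(m-2)} * M^{2/3 - m/3} * (M/(M-1))^{M^mu}
    return (2**m * factorial(m) / factorial(m - 3)
            * M**(mu * (m - 2)) * M**(2/3 - m/3) * (M / (M - 1))**top)

total = sum(term(m) for m in range(3, top + 1))
bound = M**(mu - 1/3)
assert term(3) <= total <= 1000 * bound
print(f"total = {total:.3e}, M^(mu - 1/3) = {bound:.3e}")
```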

### 4.2 Estimation of the Variance for $$|l_{\varSigma }| \geqslant M^{\mu }$$

Let us define
\begin{aligned} E^{>} \;:=\; \sum \limits _{\varSigma \in \mathfrak {G}}\sum \limits _{|l_{\varSigma }| > M^{\mu }}|a_{n_{11}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{12}(\varSigma , l_{\varSigma })}(t)}a_{n_{21}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{22}(\varSigma , l_{\varSigma })}(t)}|R(\varSigma )\,. \end{aligned}
(4.6)
We also define the new set of variables
\begin{aligned} p_1\equiv p_1(\varSigma , l_{\varSigma })&\;=\;\frac{n_{11}(\varSigma , l_{\varSigma })+n_{12}(\varSigma , l_{\varSigma })}{2}\,,\nonumber \\ p_2 \equiv p_2(\varSigma , l_{\varSigma })&\;=\;\frac{n_{21}(\varSigma , l_{\varSigma })+n_{22}(\varSigma , l_{\varSigma })}{2}\,; \end{aligned}
(4.7)
\begin{aligned} q_1 \equiv q_1(\varSigma , l_{\varSigma })&\;=\;\frac{n_{11}(\varSigma , l_{\varSigma })-n_{12}(\varSigma , l_{\varSigma })}{2}\,,\nonumber \\ q_2 \equiv q_2(\varSigma , l_{\varSigma })&\;=\;\frac{n_{21}(\varSigma , l_{\varSigma })-n_{22}(\varSigma , l_{\varSigma })}{2}\,. \end{aligned}
(4.8)
Note that $$p_1+p_2=|l_{\varSigma }|$$ .
As in [3], using Lemma 3.2 (i) and the inequality $$\frac{p!}{(p-q)!} \leqslant \frac{(p+q)!}{p!}$$ we obtain
\begin{aligned} |a_{p_1+q_1}(t)a_{p_1-q_1}(t)a_{p_2+q_2}(t)a_{p_2-q_2}(t)|\;\leqslant \; C\frac{t^{2(p_1+p_2)}}{p_1!p_1!p_2!p_2!}\,. \end{aligned}
(4.9)
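The inequality $$\frac{p!}{(p-q)!} \leqslant \frac{(p+q)!}{p!}$$ holds because the left side is a product of $$q$$ factors each at most $$p$$, while the right side is a product of $$q$$ factors each at least $$p+1$$. A quick exhaustive confirmation (illustrative only):

```python
from math import factorial

# Exhaustive check of p!/(p-q)! <= (p+q)!/p! for all 0 <= q <= p < 40.
# The left side is p(p-1)...(p-q+1); the right side is (p+q)(p+q-1)...(p+1),
# a product of the same number of larger factors.
def falling(p, q):
    return factorial(p) // factorial(p - q)    # p!/(p-q)!

def rising(p, q):
    return factorial(p + q) // factorial(p)    # (p+q)!/p!

assert all(falling(p, q) <= rising(p, q)
           for p in range(40) for q in range(p + 1))
print("p!/(p-q)! <= (p+q)!/p! verified for 0 <= q <= p < 40")
```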
Using the time scale $$t \sim CM^{\kappa }$$ we obtain that
\begin{aligned} E^{>}\;\leqslant \;\sum \limits _{|l_{\varSigma }|>M^{\mu }}\frac{(CM^{\kappa }T)^{2(p_1+p_2)}}{p_1!p_1!p_2!p_2!}M^{2/3}\sum \limits _{m=0}^{p_1+p_2}\left( \frac{C(p_1+p_2)}{M^{1/3}}\right) ^{m}\,. \end{aligned}
(4.10)
As in Section 7.7 of [3], using that $$p_1!\,p_1!\,p_2!\,p_2!\;\geqslant \; C^{-(p_1+p_2)}p_1^{2p_1}p_2^{2p_2}$$ for some constant C, we obtain that
\begin{aligned} E^{>}&\;\leqslant \; \sum \limits _{|l_{\varSigma }|>M^{\mu }}M^{2/3}\left( \frac{CM^{\kappa }T}{p_1+p_2}\right) ^{2(p_1+p_2)} +\sum \limits _{|l_{\varSigma }|>M^{\mu }} M^{1/3}\left( \frac{CM^{2\kappa }T^{2}}{(p_1+p_2)M^{1/3}}\right) ^{p_1+p_2}\nonumber \\&\;\leqslant \; \sum \limits _{|l_{\varSigma }|>M^{\mu }}M^{2/3}\left( CM^{\kappa -\mu }T\right) ^{2(p_1+p_2)} +\sum \limits _{|l_{\varSigma }|>M^{\mu }} M^{1/3}\left( CM^{2\kappa -1/3-\mu }T^{2}\right) ^{p_1+p_2}\nonumber \\&\;\leqslant \;(CM^{\kappa -\mu +\frac{1}{3M^{\mu }}}T)^{2M^{\mu }}+(CM^{2\kappa -1/3-\mu + \frac{2}{3M^{\mu }}}T)^{M^{\mu }}\,. \end{aligned}
(4.11)
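The factorial lower bound used above is a consequence of Stirling's inequality $$p! \geqslant (p/\mathrm {e})^p$$, with the constant entering as a factor $$C^{p_1+p_2}$$ that is absorbed inside the parentheses. A numerical check of the Stirling bound (illustrative only):

```python
import math

# Check Stirling's lower bound p! >= (p/e)^p, which yields
# (p1! p2!)^2 >= e^{-2(p1+p2)} p1^{2 p1} p2^{2 p2} as used above.
def stirling_lower_ok(p):
    return math.factorial(p) >= (p / math.e) ** p

assert all(stirling_lower_ok(p) for p in range(1, 60))
print("p! >= (p/e)^p verified for 1 <= p < 60")
```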
Choosing $$\mu \;=\;1/3-\beta$$ with the condition $$1/3-\kappa>\mu -\kappa \;>\; \frac{1}{3M^{\mu }}\;>\;\frac{1}{3M^{1/3}}$$ (where we have $$0 \;<\;\beta \;<\; 2/3-2\kappa -\frac{2}{3M^{\mu }} \;\leqslant \; 2/3-2\kappa$$) completes the proof of Theorem 1 in the case $$|\varSigma |\;\geqslant \; 3\,.$$
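To see that this choice of $$\mu$$ indeed makes both error terms in (4.11) vanish as M grows, one can check the signs of the exponents numerically. The values of $$\kappa$$ and $$\beta$$ below are our own illustrative choice, not values fixed by the paper.

```python
# Sign check of the exponents in (4.11) for mu = 1/3 - beta.
# kappa and beta are sample values chosen for illustration only.
kappa, beta = 0.1, 0.05
mu = 1/3 - beta

def exponents(M):
    e1 = kappa - mu + 1 / (3 * M**mu)            # exponent of M in the first term
    e2 = 2*kappa - 1/3 - mu + 2 / (3 * M**mu)    # exponent of M in the second term
    return e1, e2

for M in (10**4, 10**6, 10**8):
    e1, e2 = exponents(M)
    assert e1 < 0 and e2 < 0                     # both error terms decay in M
    assert 0 < beta < 2/3 - 2*kappa - 2 / (3 * M**mu)  # admissibility of beta
print("both exponents negative for all sampled M")
```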

## 5 Estimation for the Variance in the Case $$|\varSigma |\;\leqslant \;2$$

### 5.1 Estimation of the Variance in the Cases $$|\varSigma |\;=\;0$$ and $$|\varSigma |\;=\;1$$

For $$|\varSigma |=0$$ we obtain that $$\langle H_{00} ; H_{00}\rangle$$ vanishes. Also, for $$|\varSigma |=1$$ we estimate terms of the form
\begin{aligned} \langle \delta _{0y_1}H_{0y_1};\delta _{0y_2}H_{0y_2} \rangle \;=\;\delta _{0y_1}\delta _{0y_2}\langle H_{00};H_{00} \rangle \,. \end{aligned}
Given that $$\langle H_{00}; H_{00} \rangle =0$$ it follows that in the cases $$|\varSigma |=0$$ and $$|\varSigma |=1$$ the quantity of interest is deterministic.

### 5.2 Estimation of the Variance in the Case $$|\varSigma |\;=\;2$$

Given that the two rooted directed chains are connected, the graph with $$l_{\sigma _1}$$ bridges that touch $$\mathcal {S}_1$$ and $$\mathcal {S}_2$$ and $$l_{\sigma _2}$$ bridges that touch $$\mathcal {S}_3$$ and $$\mathcal {S}_4$$ gives no contribution to the variance. Moreover, up to permutations, there are four possible configurations, and in all four cases $$y_1=y_2$$.

We have that
\begin{aligned}&\sum \limits _{l_{\sigma _1},\, l_{\sigma _2}}|a_{n_{11}(\varSigma , l_{\varSigma })}(t)|^{2}|a_{n_{12}(\varSigma , l_{\varSigma })}(t)|^{2}R(\varSigma )\nonumber \\&\qquad =\;\sum \limits _{l_{\sigma _1}+l_{\sigma _2} \leqslant M^{\mu }}|a_{n_{11}(\varSigma , l_{\varSigma })}(t)|^{2}|a_{n_{12}(\varSigma , l_{\varSigma })}(t)|^{2}R(\varSigma )\nonumber \\&\qquad \quad +\sum \limits _{l_{\sigma _1} +l_{\sigma _2} > M^{\mu }}|a_{n_{11}(\varSigma , l_{\varSigma })}(t)|^{2}|a_{n_{12}(\varSigma , l_{\varSigma })}(t)|^{2}R(\varSigma )\,. \end{aligned}
(5.1)
Using Lemma 3.2 (ii) twice and Lemma 2.6 (ii) and (i) we obtain that
\begin{aligned}&\sum \limits _{l_{\sigma _1}+l_{\sigma _2} \leqslant M^{\mu }}|a_{n_{11}(\varSigma , l_{\varSigma })}(t)|^{2}|a_{n_{12}(\varSigma , l_{\varSigma })}(t)|^{2}R(\varSigma )\nonumber \\&\qquad \leqslant \;2C \sum \limits _{l_{\sigma _1}+l_{\sigma _2}\leqslant M^{\mu }}\sum \limits _{\varvec{\mathrm {x}} \in \varLambda _N^{V(\varSigma )}}(S^{l_{\sigma _1}})_{x_{e}}(S^{l_{\sigma _2}})_{x_{e}}\nonumber \\&\qquad \leqslant \; 2C\frac{1}{M-1}\left( \frac{M}{M-1}\right) ^{l_{\sigma _1}-1}\sum \limits _{l_{\sigma _1}+l_{\sigma _2} \leqslant M^{\mu }}\sum \limits _{\varvec{\mathrm {x}} \in \varLambda _N^{V(\varSigma )}}(S^{l_{\sigma _2}})_{x_{e}}\nonumber \\&\qquad \leqslant \;2C \frac{M^{\mu }}{M-1}\,. \end{aligned}
(5.2)
Using again the time scale $$t \sim CM^{\kappa }$$ we obtain that
\begin{aligned}&\sum \limits _{l_{\sigma _1}+l_{\sigma _2} > M^{\mu }}|a_{n_{11}(\varSigma , l_{\varSigma })}(t)|^{2}|a_{n_{12}(\varSigma , l_{\varSigma })}(t)|^{2}R(\varSigma )\\&\qquad \leqslant \; \sum \limits _{l_{\sigma _1}+l_{\sigma _2} > M^{\mu }}M^{2/3}\left( \frac{CM^{\kappa }T}{l_{\sigma _1}+l_{\sigma _2}}\right) ^{2(l_{\sigma _1}+l_{\sigma _2})}\nonumber \\&\qquad \quad +\sum \limits _{l_{\sigma _1}+l_{\sigma _2} > M^{\mu }} M^{1/3}\left( \frac{CM^{2\kappa }T^{2}}{(l_{\sigma _1}+l_{\sigma _2})M^{1/3}}\right) ^{l_{\sigma _1}+l_{\sigma _2}}\,. \end{aligned}
As in (4.11) we obtain that
\begin{aligned}&\sum \limits _{l_{\sigma _1}+l_{\sigma _2} > M^{\mu }}|a_{n_{11}(\varSigma , l_{\varSigma })}(t)|^{2}|a_{n_{12}(\varSigma , l_{\varSigma })}(t)|^{2}R(\varSigma )\;\nonumber \\&\qquad \leqslant \;(CM^{\kappa -\mu +\frac{1}{3M^{\mu }}}T)^{2M^{\mu }}+(CM^{2\kappa -1/3-\mu + \frac{2}{3M^{\mu }}}T)^{M^{\mu }}\,. \end{aligned}
(5.3)
As before, we choose $$\mu \;=\;1/3-\beta$$ . This completes the proof of Theorem 1.


### Acknowledgements

This result is based on a semester project at ETH Zürich under the supervision of Prof. Dr. Antti Knowles. The author is grateful to Prof. Antti Knowles for his careful guidance in understanding the problem.

## References

1. Anderson, P.W.: Absence of diffusion in certain random lattices. Phys. Rev. 109, 1492–1505 (1958)
2. Spencer, T.: Random banded and sparse matrices (Chapter 23). In: Akemann, G., Baik, J., Di Francesco, P. (eds.) The Oxford Handbook of Random Matrix Theory. Oxford University Press, Oxford (2011)
3. Erdős, L., Knowles, A.: Quantum diffusion and eigenfunction delocalization in a random band matrix model. Commun. Math. Phys. 303, 509–554 (2011)
4. Erdős, L., Knowles, A.: The Altshuler–Shklovskii formulas for random band matrices II: the general case. Preprint arXiv:1309.5107, to appear in Ann. Henri Poincaré