Our main goal in this chapter is to remove the bounded degrees assumption in Theorem 5.2 and replace it with the assumption that the degree of the root has an exponential tail.

Theorem 6.1 ([31])

Let \(G_n\) be a sequence of (possibly random) planar graphs such that \(G_n \xrightarrow {\mathrm {loc}} (U,\rho )\) and there exist C, c > 0 such that \(\mathbb {P}(\deg (\rho ) \geq k) \leq Ce^{-ck}\) for every k. Then U is almost surely recurrent.

As discussed in Sect. 1.2, the last theorem is immediately applicable in the setting of random planar maps. It is well known that the degree of the root in the UIPT and the UIPQ has an exponential tail. See [5, Lemma 4.1 and 4.2] or [26] for the UIPT and [8, Proposition 9] for the UIPQ.

Corollary 6.2 ([31])

The UIPT/UIPQ are almost surely recurrent.

6.1 Star-Tree Transform

We present here a transformation which transforms any planar map \(G\) to a planar map \(G^*\) with maximal degree 4. We call this transformation \(G \mapsto G^*\) the star-tree transform. Recall that a balanced rooted tree is a finite rooted tree in which every non-leaf vertex has precisely two children and the distances of the leaves from the root differ by at most 1. The transformation is performed as follows.

  1.

    Subdivide each edge \(e\) by adding a new vertex \(w_e\) of degree two in the “middle”. See Fig. 6.1b. Denote the resulting graph by \(G'\).

  2.

    For every vertex \(v \in V(G)\), replace all edges incident to \(v\) in \(G'\) by a balanced binary tree rooted at \(v\), whose leaves are the neighbors of \(v\) in \(G'\). We perform this in a fashion which preserves the cyclic order of these neighbors and thus preserves planarity. Furthermore, add two extra vertices and attach them to the root. Denote this tree by \(T_v\). See Fig. 6.1d and the sketch following it.

Fig. 6.1 The star-tree transform. (a) An original edge of \(G\). (b) Subdividing an edge. (c) The “star” of a vertex in \(G'\). (d) Transforming the star of \(v\) into a tree \(T_v\)
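To make the two steps concrete, the following is a minimal sketch in Python of the star-tree transform for a finite simple map encoded by its adjacency lists. It ignores the planar-embedding bookkeeping and the uniformly random choice of tree discussed in Remark 6.3 below (it always splits the leaves in one fixed balanced way); the encoding and function names are ours, not part of [31].

```python
import itertools

def attach_balanced_tree(root, leaves, edges, fresh):
    """Attach to `root` a balanced binary tree whose leaves are `leaves`,
    appending its edges to `edges`; `fresh` yields labels for internal vertices."""
    if not leaves:
        return
    if len(leaves) == 1:
        edges.append((root, leaves[0]))
        return
    mid = (len(leaves) + 1) // 2
    for half in (leaves[:mid], leaves[mid:]):
        if len(half) == 1:
            edges.append((root, half[0]))
        else:
            child = next(fresh)
            edges.append((root, child))
            attach_balanced_tree(child, half, edges, fresh)

def star_tree_transform(adj):
    """Star-tree transform of a finite simple map given as
    {v: [neighbours in cyclic order]}; returns the edge list of G*."""
    fresh = (("aux", i) for i in itertools.count())
    edges = []
    # Step 1: one subdivision vertex w_e for every edge e = {u, v} of G.
    w = {}
    for v in adj:
        for u in adj[v]:
            w.setdefault(frozenset((u, v)), ("w", len(w)))
    # Step 2: replace the star of each vertex v by a balanced binary tree T_v
    # rooted at v whose leaves are the subdivision vertices around v, and
    # attach two extra leaves to the root.
    for v, nbrs in adj.items():
        leaves = [w[frozenset((v, u))] for u in nbrs]
        attach_balanced_tree(v, leaves, edges, fresh)
        edges.append((v, next(fresh)))
        edges.append((v, next(fresh)))
    return edges

# Example: the triangle; every vertex of the output has degree at most 4.
print(star_tree_transform({0: [1, 2], 1: [2, 0], 2: [0, 1]}))
```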

Remark 6.3

The careful reader will notice that we have not specified precisely what \(T_v\) is (if \(\deg_G(v)\) is not a power of 2 there may be several balanced binary trees with \(\deg(v)\) leaves) and in which way precisely we identify the leaves of \(T_v\) with the neighbors of \(v\) in \(G'\) (we may rotate the tree and get a different identification while still preserving planarity). This is a subtle yet important issue, and our convention is that the choice of tree and identification are performed uniformly at random from all the possible choices. This will be crucially used in Claim 6.13.

Lemma 6.4

Let \(G\) be a planar map and \(G^*\) its star-tree transform. We set edge resistances on \(G^*\) by putting \(R_e = 1/d_G(v)\), where \(v\) is the vertex of \(G\) for which \(e \in T_v\) and \(d_G(v)\) is the degree of \(v\) in \(G\). If the network \((G^*, R_e)\) is recurrent, then \(G\) is recurrent as well.

Proof

It is clear that from the point of view of recurrence versus transience, the two edges leading to the two “extra” neighbors of each root do not matter and can be removed. Hence for the rest of the proof we write \(T_v\) for the previously defined tree with these two edges removed. The purpose of these extra edges will become apparent later in the proof of Theorem 6.1.

Assume \(G\) is transient and let \(a \in V(G)\) be some vertex. There is a flow \(\theta\) from \(a\) to \(\infty\) such that \(\mathcal {E}(\theta ) < \infty \). We will construct a flow \(\theta^*\) on \((G^*, R_e)\) from \(a\) to \(\infty\) with finite energy, showing that \((G^*, R_e)\) is transient and giving the lemma. First we define a flow \(\theta'\) from \(a\) to infinity in \(G'\) in the natural manner: for each edge \(e = (x, y)\) of \(G\) we set \(\theta'(x, w_e) = \theta'(w_e, y) = \theta(x, y)\). Obviously \(\mathcal {E}(\theta ')=2 \mathcal {E}(\theta )\).

Next we provide some notation. We denote by \(A\) the set of vertices that were added to form \(G'\) in the first step of the star-tree transform, that is, the white vertices in Fig. 6.1. Each vertex \(w \in A\) is a leaf of precisely two trees \(T_u\) and \(T_v\), where \(\{u, v\}\) was the edge of \(G\) that \(w\) divided. We call \(u\) and \(v\) the tree roots of \(w\). We denote by \(B\) the set of vertices that were added in the second step of the star-tree transform to form \(G^*\), that is, the gray vertices in Fig. 6.1d. The vertices of \(V(G)\) are the black discs in Fig. 6.1. Each vertex \(x \in V(G) \cup B\) is a member of a single tree \(T_v\); we call \(v\) the tree root of \(x\). Lastly, for any \(x \in V(G) \cup B\) we denote by \(C_x \subset A\) the set of leaves of \(T_v\), where \(v\) is the tree root of \(x\), for which the path from the leaf to the root of \(T_v\) goes through \(x\); in other words, \(C_x\) is the set of leaves of \(T_v\) which are the “descendants” of \(x\). If \(x \in A\), then we set \(C_x = \{x\}\).

To define \(\theta^*\), let \(e = (x, y)\) be an edge of \(T_v\). Assume that \(x\) is closer to the root of \(T_v\) than \(y\) in graph distance. We set

$$\displaystyle \begin{aligned}\theta^*(e) = \sum_{w \in C_y} \theta'(v, w)\, .\end{aligned}$$

The construction of \(\theta^*\) is depicted in Fig. 6.2 and in the small worked example below.

Fig. 6.2 The construction of the flow \(\theta^*\) from \(\theta\). (a) An original edge of \(G\) which has flow \(\theta_1\). (b) The flow passes through the divided edge. (c) The flow going out from a vertex of \(G\) in \(G'\). (d) The division of the flow in \(T_v\)
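For concreteness, here is a small worked instance of the last display (our own example, one possible reading of Fig. 6.2). Suppose \(\deg_G(v)=3\), write \(w_1, w_2, w_3\) for the leaves of \(T_v\) and \(\theta_i = \theta'(v,w_i)\), and suppose \(T_v\) has a single internal vertex \(x\) whose children are \(w_1\) and \(w_2\), while \(w_3\) is a child of the root \(v\). Then

$$\displaystyle \begin{aligned}\theta^*(v,x) = \theta_1 + \theta_2\, , \qquad \theta^*(x,w_1) = \theta_1\, , \qquad \theta^*(x,w_2) = \theta_2\, , \qquad \theta^*(v,w_3) = \theta_3 \, ,\end{aligned}$$

so the flow entering \(T_v\) at its root splits along the tree until each leaf \(w_i\) carries exactly the flow \(\theta'(v,w_i)\) it carried in \(G'\).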

We will now show that \(\mathcal {E}(\theta ^*) \leq 2 \mathcal {E}(\theta ')\) where the energy of \(\theta^*\) is taken in the network \((G^*, R_e)\). Let \(v \in V(G)\) and write \(h\) for the height of \(T_v\), that is, \(h\) is the maximal graph distance from a leaf of \(T_v\) to its root. Note that since the tree is balanced, the distances from the leaves to the root vary by at most 1. Let \(e = (x, y)\) be an edge of \(T_v\) and assume that \(x\) is closer than \(y\) to the root of \(T_v\). By the construction of \(\theta^*\), the contribution of \(e\) to \(\mathcal {E}(\theta ^*)\) is

$$\displaystyle \begin{aligned}R_e\theta^*(e)^2 = \frac{1}{d_G(v)} \left(\sum_{w\in C_y}\theta'(v,w)\right)^2.\end{aligned}$$

If the graph distance of \(y\) from the root is \(\ell \in \{1, \ldots, h\}\), then \(|C_y| \leq 2^{h-\ell}\). Hence by Cauchy-Schwarz

$$\displaystyle \begin{aligned}R_e\theta^*(e)^2 \leq \frac{2^{h-\ell}}{d_G(v)}\sum_{w\in C_y}\theta'(v,w)^2 \, .\end{aligned}$$

Summing over all edges in \(T_v\) at distance \(\ell\) from the root, we go over each leaf of \(T_v\) at most once. Thus,

$$\displaystyle \begin{aligned}\sum_{\substack{e=(x,y)\in T_v \\ d_{G^*}(y,v) = \ell}} R_e\theta^*(e)^2 \leq \frac{2^{h-\ell}}{d_G(v)} \sum_{w\in C_v}\theta'(v,w)^2 \, .\end{aligned}$$

We now sum over all edges in \(T_v\) by summing over \(\ell \in \{1, \ldots, h\}\). We get

$$\displaystyle \begin{aligned}\sum_{e\in T_v} R_e\theta^*(e)^2 \leq \frac{2^h}{d_G(v)}\sum_{w\in C_v}\theta'(v,w)^2 \leq 2 \sum_{w\in C_v}\theta'(v,w)^2 \, ,\end{aligned}$$

since \(h \leq \log_2(d_G(v)) + 1\). Lastly, we sum this over all \(v \in V(G)\) to obtain that

$$\displaystyle \begin{aligned}\mathcal{E}(\theta^*) \leq 2\mathcal{E}(\theta') = 4\mathcal{E}(\theta) \, ,\end{aligned}$$

concluding our proof. □

6.2 Stationary Random Graphs and Markings

6.2.1 Stationary Random Graphs

Recall that Theorem 6.1 and the entire setup of Chap. 5 are adapted to the case when \(G_n\) is itself random. The reason is that in Definition 5.1 we consider the graph distance ball \(B_{G_n}(\rho _n,r)\) as a random variable in the probability space \((\mathcal {G}_{\bullet },d_{\mathrm {loc}})\), where \(\rho_n\) conditioned on \(G_n\) is a uniformly chosen random vertex.

Let us emphasize that this is not the same as drawing a sample of \(\{G_n\}\) and claiming that almost surely \(G_n\xrightarrow {\mathrm {loc}}(U,\rho )\). For example, let \(G_n\) be a path of length n with probability 1∕2 and an n × n square grid with probability 1∕2, independently for all n. In this case \(G_n \xrightarrow {\mathrm {loc}} (U,\rho )\) where \(U=\mathbb {Z}\) with probability 1∕2 and \(U=\mathbb {Z}^2\) with probability 1∕2, however, almost surely on the sequence \(\{G_n\}\), the local limit of \(G_n\) does not exist.

In many cases it is useful to take a random root drawn from the stationary distribution on \(G_n\), that is, the probability distribution on vertices giving each vertex \(v\) probability \(\deg _{G_n}(v)/2|E(G_n)|\). In a similar fashion to Definition 5.1, we define this type of local convergence.

Definition 6.5

Let \(\{G_n\}\) be a sequence of (possibly random) finite graphs with non-empty sets of edges. We say that \(G_n \xrightarrow [\pi ]{\mathrm {loc}} (U,\rho )\), where (U, ρ) is a random rooted graph, if for every integer r ≥ 1,

$$\displaystyle \begin{aligned} B_{G_n}(\rho_n,r) \stackrel{d}{\longrightarrow} B_U(\rho,r), \end{aligned}$$

where \(\rho_n\) is a randomly chosen vertex from \(G_n\) with distribution proportional to the vertex degrees. We call such a limit a stationary local limit.
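In simulations, drawing the root \(\rho_n\) of Definition 6.5 is just degree-biased sampling. Here is a minimal sketch (our own illustration, with finite graphs encoded as adjacency dicts):

```python
import random

def stationary_root(adj):
    """Sample a vertex of the finite graph `adj` (dict: vertex -> list of
    neighbours) with probability proportional to its degree, i.e. from the
    stationary distribution of the simple random walk."""
    vertices = list(adj)
    degrees = [len(adj[v]) for v in vertices]
    return random.choices(vertices, weights=degrees, k=1)[0]

# Example: on the path 0-1-2 each endpoint is half as likely as the middle vertex.
print(stationary_root({0: [1], 1: [0, 2], 2: [1]}))
```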

Let us remark that \(G_n \xrightarrow {\mathrm {loc}} (U,\rho )\) does not imply that \(G_n \xrightarrow [\pi ]{\mathrm {loc}} (U',\rho ')\) for some (U′, ρ′). Indeed, let \(G_n\) be a path of length n attached to a complete graph on \(\sqrt {n}\) vertices. Then the local limit of \(G_n\) is \(\mathbb {Z}\), however the limit according to a stationary random root does not exist: with probability bounded away from zero the degree-biased root falls in the complete graph, where the ball of radius 1 already contains \(\sqrt{n}\) vertices.

The reason for taking the \(\xrightarrow [\pi ]{\mathrm {loc}}\) limit rather than the uniform limit as before is that the random walk on the limit (U, ρ) starting from ρ is then stationary.

Claim 6.6

Assume that \(G_n \xrightarrow [\pi ]{\mathrm {loc}} (U,\rho )\). Conditioned on (U, ρ), let \(X_1\) be a uniformly chosen neighbor of ρ. Then \((U, X_1)\) is equal in law to (U, ρ). Similarly, if \(\{X_n\}_{n \geq 0}\) is the simple random walk on (U, ρ), then for each n ≥ 0 the law of \((U, X_n)\) coincides with the law of (U, ρ).

Proof

If H is a finite graph and v is a vertex chosen with probability proportional to its degree, then it is immediate that a uniformly chosen random neighbor of v is distributed according to the stationary distribution. Thus for any fixed r > 0 the ball \(B_{G_n}(\rho _n,r)\) has the same distribution as \(B_{G_n}(X_1,r)\) where \(\rho_n\) is drawn from the stationary distribution on \(G_n\) and \(X_1\) is a uniform neighbor of \(\rho_n\). The claim follows now by definition. □
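The first sentence of this proof can be checked exactly on any fixed finite graph; here is a small sanity check using rational arithmetic (our own illustration, not part of the argument):

```python
from fractions import Fraction

def one_step(pi, adj):
    """Push the distribution `pi` through one step of the simple random walk."""
    out = {v: Fraction(0) for v in adj}
    for v, nbrs in adj.items():
        for u in nbrs:
            out[u] += pi[v] / len(nbrs)
    return out

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}   # a small test graph
two_E = sum(len(nbrs) for nbrs in adj.values())
pi = {v: Fraction(len(adj[v]), two_E) for v in adj}  # degree-proportional law
assert one_step(pi, adj) == pi                       # preserved exactly by one step
```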

Definition 6.7

A random rooted graph (G, ρ) is called a stationary random graph if \((G, X_1)\) has the same distribution as (G, ρ), where the vertex \(X_1\) is a uniform neighbor of ρ conditioned on (G, ρ).

We would like to develop a simple abstract framework that will allow us to comfortably move from \(\xrightarrow {\mathrm {loc}}\) convergence to \(\xrightarrow [\pi ]{\mathrm {loc}}\) convergence and vice versa. This is straightforward when \(\{G_n\}\) is a sequence of deterministic graphs with uniformly bounded average degree, but is less obvious when the \(G_n\) themselves are random. For this we need to degree bias our random graphs.

Definition 6.8

Denote by \(\mathbb {P}\) the law of a random rooted graph (G, ρ) and assume that \(\mathbb {E} \deg (\rho ) \in (0,\infty )\). The probability measure μ on \((\mathcal {G}_{\bullet }, d_{\mathrm {loc}})\) defined by

$$\displaystyle \begin{aligned}\mu(\mathcal{A}) := {1 \over \mathbb{E} \deg(\rho)} \sum_{k \geq 1} k \,\,\mathbb{P}(\mathcal{A} \cap \{\deg(\rho) = k\}) \, ,\end{aligned}$$

for any event \(\mathcal {A} \subset (\mathcal {G}_{\bullet }, d_{\mathrm {loc}})\) is called the degree biasing of \(\mathbb {P}\). Similarly, if we assume that almost surely ρ is not an isolated vertex, then the probability measure ν defined by

$$\displaystyle \begin{aligned}\nu(\mathcal{A}) = {1 \over \mathbb{E}[\deg(\rho)^{-1}]} \sum_{k \geq 1} {\mathbb{P}(\mathcal{A} \cap \{\deg(\rho) = k\}) \over k} \, ,\end{aligned}$$

is called the degree unbiasing of \(\mathbb {P}\).
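For a law supported on finitely many rooted graphs, Definition 6.8 is just a reweighting. A sketch of both operations (our own encoding: a law is a list of ((adjacency dict, root), probability) pairs):

```python
def degree_bias(law):
    """Reweight each atom ((adj, root), p) by deg(root) and renormalise."""
    weights = [p * len(adj[root]) for (adj, root), p in law]
    total = sum(weights)
    return [(rooted, w / total) for (rooted, _), w in zip(law, weights)]

def degree_unbias(law):
    """Reweight each atom by 1/deg(root) and renormalise (no isolated roots)."""
    weights = [p / len(adj[root]) for (adj, root), p in law]
    total = sum(weights)
    return [(rooted, w / total) for (rooted, _), w in zip(law, weights)]

# Example: degree biasing a uniform root on the path 0-1-2 gives the middle
# vertex probability 1/2 and each endpoint 1/4, as in Lemma 6.9 below.
path = {0: [1], 1: [0, 2], 2: [1]}
uniform = [((path, v), 1 / 3) for v in path]
print(degree_bias(uniform))
```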

Lemma 6.9

Assume that (G, ρ) is a random rooted graph such that G is almost surely finite, that the distribution of ρ given G is uniform and that \(\mathbb {E} \deg (\rho ) \in (0,\infty )\) . Then the degree biasing of (G, ρ) is a stationary random graph.

Conversely, assume that \((G^\pi, \rho^\pi)\) is a stationary random graph such that \(G^\pi\) is almost surely finite and has no isolated vertices. Then its degree unbiasing (G, ρ) is such that G is almost surely finite and ρ conditioned on G is uniformly distributed.

Proof

We will prove only the first statement and the second is similar. Denote by \((G^\pi, \rho^\pi)\) a random variable drawn according to the degree biasing of (G, ρ). Let H be a fixed finite graph and denote by \(\deg_H(v)\) the degree of a vertex v in H. By definition we have that

$$\displaystyle \begin{aligned} \mathbb{P} ( (G^\pi,\rho^\pi) = (H,v) ) = {\deg_H(v) \cdot \mathbb{P}( (G,\rho)=(H,v) ) \over \mathbb{E} \deg(\rho)} \, .\end{aligned} $$
(6.1)

Let \(X_1\) be a uniformly chosen neighbor of \(\rho^\pi\). Then by (6.1), for any vertex u of H,

$$\displaystyle \begin{aligned}\mathbb{P} ( (G^\pi, X_1) = (H,u) ) = \sum_{v : \{u,v\} \in E(H)} {\mathbb{P} ( (G^\pi,\rho^\pi) = (H,v) ) \over \deg_H(v)} = \sum_{v : \{u,v\} \in E(H)} {\mathbb{P} ( (G,\rho) = (H,v) ) \over \mathbb{E} \deg(\rho)} \, .\end{aligned}$$

Since ρ is uniformly distributed given G, the quantity \(\mathbb {P} ( (G,\rho ) = (H,v) )\) is the same for all v. So

$$\displaystyle \begin{aligned}\mathbb{P} ( (G^\pi, X_1) = (H,u)) = {\deg_H(u) \mathbb{P} ( (G,\rho) = (H,u) ) \over \mathbb{E} \deg(\rho)}\end{aligned}$$

so by (6.1) the required assertion follows. □

Corollary 6.10

Assume that \(\{G_n\}\) is a sequence of random graphs that are almost surely finite and that \(\mathbb {E} \deg (\rho _n) \in (0,\infty )\) where \(\rho_n\) is a uniformly chosen vertex of \(G_n\). Let \((G_n^\pi ,\rho _n^\pi )\) be the degree biasing of \((G_n, \rho_n)\). Assume that \(G_n \xrightarrow {\mathrm {loc}} (U,\rho )\) and that \(\mathbb {E} \deg (\rho ) < \infty \) and that \(\mathbb {E} \deg (\rho _n) \to \mathbb {E} \deg (\rho )\). Then \(G_n^\pi \xrightarrow [\pi ]{\mathrm {loc}} (U^\pi ,\rho ^\pi )\) where \((U^\pi, \rho^\pi)\) is the degree biasing of (U, ρ). Furthermore, (U, ρ) and \((U^\pi, \rho^\pi)\) are absolutely continuous with respect to each other.

Conversely, assume that \(\{G_n^\pi \}\) is a sequence of random graphs that are almost surely finite and have no isolated vertices. Denote by \(\rho _n^\pi \) a random vertex of \(G_n^\pi\) drawn with probability proportional to the vertex degrees and by \((G_n, \rho_n)\) the degree unbiasing of \((G_n^\pi ,\rho _n^\pi )\). If \(G_n^\pi \xrightarrow [\pi ]{\mathrm {loc}} (U^\pi ,\rho ^\pi )\), then \(G_n \xrightarrow {\mathrm {loc}} (U,\rho )\) where (U, ρ) is the degree unbiasing of \((U^\pi, \rho^\pi)\). Furthermore, (U, ρ) and \((U^\pi, \rho^\pi)\) are absolutely continuous with respect to each other.

Proof

We start by proving the first assertion. Let (H, v) be a finite rooted graph and r > 0 a fixed integer. Then by Definition 6.8

$$\displaystyle \begin{aligned}\mathbb{P} ( B_{G_n^\pi}(\rho_n^\pi, r) = (H,v) ) = { \deg_H(v) \mathbb{P} (B_{G_n}(\rho_n,r) = (H,v)) \over \mathbb{E} \deg (\rho_n)} \, .\end{aligned}$$

Since \(G_n \xrightarrow {\mathrm {loc}} (U,\rho )\) and \(\mathbb {E} \deg (\rho _n) \to \mathbb {E} \deg (\rho )\) we obtain that

$$\displaystyle \begin{aligned} \lim_{n \to \infty} \mathbb{P} ( B_{G_n^\pi}(\rho_n^\pi, r) = (H,v) ) = {\deg_H(v) \mathbb{P} (B_{U}(\rho,r) = (H,v)) \over \mathbb{E} \deg (\rho)}= \mathbb{P} ( B_{U^\pi}(\rho^\pi,r) = (H,v)) , \end{aligned}$$

where the last equality is also by Definition 6.8. The absolute continuity of (U, ρ) and \((U^\pi, \rho^\pi)\) follows immediately from the definition.

The second statement follows by the same proof. Note that \(\mathbb {E} [\deg (\rho _n^\pi )^{-1}] \to \mathbb {E} [ \deg (\rho ^\pi )^{-1}]\) since \(B_{G_n^\pi }(\rho _n^\pi ,1)\) converges in distribution to \(B_{U^\pi }(\rho ^\pi ,1)\) and the function \(f((G, \rho)) = \deg(\rho)^{-1}\) is a bounded continuous function on \(\mathcal {G}_\bullet \). □

We end this subsection by addressing the somewhat technical issue of verifying the condition \(\mathbb {E} \deg (\rho _n) \to \mathbb {E}\deg (\rho )\) in Corollary 6.10. It is not guaranteed just by requiring \(\sup _n \mathbb {E} \deg (\rho _n) <\infty \) as can be seen in the example of a path of length n where we choose \(\sqrt {n}\) arbitrary vertices and add \(\sqrt {n}\) loops to each one; in this example deg(ρ) = 2 almost surely, and \(\mathbb {E} \deg (\rho _n) = 4 + o(1)\). However, we now show that it is always possible to “truncate” the finite graphs G n by removing edges touching vertices of large degrees so that the limit is unchanged and the average degrees converge to the expected degree of the limit. Given a finite graph G and an integer k ≥ 1 we denote by G ∧ k the graph obtained from G by erasing all the edges touching vertices of degree at least k. We note that even when G is connected, G ∧ k may be disconnected and may have isolated vertices. As we defined in Sect. 5.1, by (G ∧ k, ρ) we mean (G ∧ k[ρ], ρ) where G ∧ k[ρ] is the connected component of ρ in G ∧ k, hence it is a member of \(\mathcal {G}_\bullet \) even when it is disconnected. All statements in this chapter, most importantly Corollary 6.10, do not assume the graphs involved are connected.
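A sketch of this truncation operation (our own helper; the vertex set is kept, so isolated vertices may indeed appear):

```python
def truncate(adj, k):
    """Return G ∧ k: delete every edge having an endpoint of degree >= k.
    The vertex set is unchanged, so the result may be disconnected."""
    big = {v for v, nbrs in adj.items() if len(nbrs) >= k}
    return {v: [] if v in big else [u for u in nbrs if u not in big]
            for v, nbrs in adj.items()}

# Example: a star with 5 leaves; its centre has degree 5 >= 3, so G ∧ 3
# consists of 6 isolated vertices.
star = {0: [1, 2, 3, 4, 5], **{i: [0] for i in range(1, 6)}}
print(truncate(star, 3))
```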

Lemma 6.11

Let \(\{G_n\}\) be a sequence of random finite graphs such that \(G_n \xrightarrow {\mathrm {loc}} (U,\rho )\) and \(\mathbb {E} \deg (\rho ) < \infty \). Then there exists a sequence \(k(n) \to \infty\) such that

$$\displaystyle \begin{aligned}G_n \wedge k(n) \xrightarrow{\mathrm{loc}} (U,\rho) \, .\end{aligned}$$

Furthermore, if we set \(G_n^{\prime } = G_n \wedge k(n)\) , then

$$\displaystyle \begin{aligned}\mathbb{E} \deg_{G_n^{\prime}} (\rho_n) \to \mathbb{E} \deg(\rho) \, ,\end{aligned}$$

where \(\rho_n\) is a uniformly chosen vertex of \(G_n^{\prime }\).

Proof

We first show that for any sequence \(k(n) \to \infty\) we have that \(G_n \wedge k(n) \xrightarrow {\mathrm {loc}} (U,\rho )\). Indeed, since \(G_n \xrightarrow {\mathrm {loc}} (U,\rho )\) we have that for any fixed integer r ≥ 1

$$\displaystyle \begin{aligned}\mathbb{P} \Big ( \max \big \{ \deg (v) : v \in B_{G_n}(\rho_n, r) \big \} \geq k(n) \Big ) \to 0 \, .\end{aligned}$$

If \(\max \{ \deg (v) : v \in B_{G_n}(\rho _n, r+1) \} < k(n)\), then \(B_{G_n}(\rho _n,r) = B_{G_n\wedge k(n)}(\rho _n,r)\). Since \(G_n\) and \(G_n \wedge k(n)\) have the same set of vertices we deduce that for any fixed r ≥ 1 and any rooted graph (H, v)

$$\displaystyle \begin{aligned}\mathbb{P} \big ( B_{G_n\wedge k(n)}(\rho_n,r) =(H,v)) \to \mathbb{P}( B_U(\rho,r) = (H,v)) \, .\end{aligned}$$

Secondly, since \(\deg(\rho_n)\) converges in distribution to deg(ρ) we have that there exists a sequence \(k(n) \to \infty\) such that \(\mathbb {E} [\deg (\rho _n)\wedge k(n)] \to \mathbb {E} \deg (\rho )\). Indeed, by dominated convergence we have that \(\displaystyle \mathbb {E} [\deg (\rho ) \wedge k] \to _{k \to \infty } \mathbb {E} \deg (\rho )\). Furthermore, for any fixed k the function \(f((G, \rho)) = \deg(\rho) \wedge k\) is bounded and continuous on \(\mathcal {G}_\bullet \), thus \(\mathbb {E} [\deg (\rho _n) \wedge k] \to _{n \to \infty } \mathbb {E} [\deg (\rho ) \wedge k]\). Hence for any ε > 0 there exist k and N such that for all n ≥ N we have that \(|\mathbb {E}[\deg (\rho _n) \wedge k] - \mathbb {E} \deg (\rho )| \leq \varepsilon \). It is an exercise (a standard diagonal argument) that this implies the existence of k(n).

Lastly, \(\limsup \mathbb {E} \deg _{G_n^{\prime }}(\rho _n) \leq \mathbb {E} \deg (\rho )\) since \(\deg _{G_n^{\prime }}(\rho _n) \leq \deg _{G_n}(\rho _n) \wedge k(n)\). We also have that \(\deg _{G_n^{\prime }}(\rho _n) \stackrel {d}{\longrightarrow } \deg (\rho )\), hence by Fatou’s lemma \(\liminf \mathbb {E} \deg _{G_n^{\prime }}(\rho _n) \geq \mathbb {E} \deg (\rho )\), and hence the second assertion follows. □

6.2.2 Markings

Given a locally convergent sequence of (possibly random) graphs \(G_n\), we wish to apply the star-tree transform on them to create a sequence \(G_n^*\) and take its local limit while “remembering”, in light of Lemma 6.4, the original degrees of \(G_n\). The approach is a rather straightforward extension of the abstract setting of Sect. 5.1, see also [2]. We consider the space of triples (G, ρ, M) where G = (V, E) is a graph, ρ ∈ V is a vertex and \(M:E\to \mathbb {R}\) is a function assigning real values to the edges. We endow the space with a metric by setting the distance between \((G_1, \rho_1, M_1)\) and \((G_2, \rho_2, M_2)\) to be \(2^{-R}\), where \(R\) is the maximal value such that there exists a rooted graph isomorphism φ between \(B_{G_1}(\rho _1,R)\) and \(B_{G_2}(\rho _2,R)\) such that \(|M_1(e) - M_2(\varphi(e))| \leq R^{-1}\) for all edges \(e \in E(G_1)\) both of whose end points are in \(B_{G_1}(\rho _1,R)\). It is easy to check that this space is again a Polish space, so again we may define convergence in distribution of random variables taking values in this space.

We say that such a random triplet (U, ρ, M) is stationary if conditioned on (U, ρ, M) a uniformly chosen random neighbor \(X_1\) of ρ satisfies that (U, ρ, M) has the same law as \((U, X_1, M)\) in the space of isomorphism classes of rooted graphs with markings (that is, rooted isomorphisms that preserve the markings). Given a marking M we extend it to \(M:E(U)\cup V(U)\to \mathbb {R}\) by setting \(M(v) = \max_{e : v \in e} M(e)\) for any v ∈ V (U). We say that (U, ρ, M) has an exponential tail if for some \(A < \infty\) and β > 0 we have that \(\mathbb {P}(M(\rho )\geq s)\leq A e^{-\beta s}\) for all s ≥ 0.

In the following lemma we consider a stationary triplet (U, ρ, M) that has an exponential tail and compare the hitting probabilities of certain sets when we endow the graphs with two sets of edge resistances: the first are the usual unit resistances, and in the second we may change the edge resistances arbitrarily but only on edges with high M values. We tailored the lemma this way in order to show that \((G^*, R_e)\) from Lemma 6.4 is recurrent.

Lemma 6.12

Let (U, ρ, M) be a stationary, bounded degree rooted random graph with markings which has an exponential tail. Conditioned on (U, ρ, M) and given some finite set \(B \subset U\), let \(\mathbf{P}_\rho\) denote the unit-resistance random walk on U starting from ρ and let \({\mathbf {P}}^{\prime }_{\rho }\) denote the random walk on U with edge resistances \(R^{\prime }_e\) satisfying that \(R^{\prime }_e=1\) whenever \(M(e)\leq 21 \beta ^{-1} \log |B|\). Then almost surely on (U, ρ, M) there exists K < ∞ such that for any finite subset \(B \subset U\) with |B|≥ K we have

$$\displaystyle \begin{aligned}\big | {\mathbf{P}}_\rho(\tau_{U\setminus B} < \tau^+_\rho) - {\mathbf{P}}^{\prime}_\rho(\tau_{U \setminus B} < \tau^+_\rho) \big | \leq {1 \over |B|} \, .\end{aligned}$$

Proof

For every pair of integers T, s ≥ 1 we set

$$\displaystyle \begin{aligned} \mathcal{A}_{T,s} = \left\{ {\mathbf{P}}_\rho(\exists t< T : M(X_t)\ge s) \le T^3 e^{-\beta s/2} \right\} \, . \end{aligned}$$

Since (U, ρ, M) is stationary and has an exponential tail for any t ≥ 0 we have

$$\displaystyle \begin{aligned} \mathbb{E}\big [{\mathbf{P}}_\rho(M(X_t)\ge s) \big ] \le A e^{-\beta s} \, , \end{aligned}$$

hence by the union bound

$$\displaystyle \begin{aligned} \mathbb{E}\big [{\mathbf{P}}_\rho(\exists t < T : M(X_t)\ge s) \big ] \le ATe^{-\beta s} \, . \end{aligned}$$

Thus by Markov’s inequality

$$\displaystyle \begin{aligned} \mathbb{P}\left(\mathcal{A}_{T,s}^c\right) \le A T^{-2} e^{-\beta s/2} \, . \end{aligned}$$

By Borel-Cantelli we deduce that almost surely \(\mathcal {A}_{T,s}\) occurs for all but finitely many pairs T, s. Conditioned on (U, ρ, M), we may consider only finite subsets B ⊂ U which contain ρ, since otherwise both probabilities in the statement of the lemma are 1. Let B be such a subset. By the commute time identity Lemma 2.26, and since the maximum degree of U is bounded,

$$\displaystyle \begin{aligned} {\mathbf{E}}_\rho(\tau_{U \setminus B}) \le C \mathcal{R}_{\mathrm{eff}}(\rho \leftrightarrow U\setminus B) |B| \le C |B|{}^2 \, , \end{aligned}$$

for some constant C > 0. The last inequality holds since the effective resistance is at most |B|, as there is a path of length at most |B| from ρ to U ∖ B. By Markov’s inequality,

$$\displaystyle \begin{aligned} {\mathbf{P}}_\rho(\tau_{U \setminus B} \ge T) \leq {C|B|{}^2 \over T} \, . \end{aligned}$$

Write S = {v ∈ U : M(v) ≥ s}. For every T, s for which \(\mathcal {A}_{T,s}\) occurs we have

$$\displaystyle \begin{aligned} {\mathbf{P}}_\rho\left(\tau_S < \tau^+_{\{\rho\}\cup U\setminus B}\right) \le {\mathbf{P}}_\rho(\tau_{U\setminus B}\ge T)~+~{\mathbf{P}}_\rho(\exists t < T:M(X_t)\ge s) \le {C|B|{}^2 \over T}~+~T^3 e^{-\beta s/2}. \end{aligned}$$

We now choose \(T = 2C|B|^3\) and \(s=21\beta ^{-1}\log |B|\) so that the right hand side of the last inequality is at most \(|B|^{-1}\) when |B| is sufficiently large; indeed, with these choices \(C|B|^2/T = 1/(2|B|)\) and \(T^3 e^{-\beta s/2} = 8C^3|B|^{9-21/2} = 8C^3 |B|^{-3/2}\). It is clear that we can couple two random walks starting from ρ, one walking on U with unit resistances and the other on \((U, R^{\prime}_e)\), so that they remain together until they visit a vertex of S. Hence, when |B| is large enough so that the chosen T, s are such that \(\mathcal {A}_{T,s}\) holds we deduce from the last inequality that with probability at least \(1 - |B|^{-1}\) the simple random walk on U visits {ρ}∪ U ∖ B before visiting S, concluding our proof. □

6.3 Proof of Theorem 6.1

We now proceed to wrap up the proof of Theorem 6.1. Recall that we have a sequence of finite planar graphs \(\{G_n\}\) such that \(G_n \xrightarrow {\mathrm {loc}} (U,\rho )\) and with \(\mathbb {P} ( \deg (\rho ) \geq k) \leq Ce^{-ck}\). Our goal is to prove that (U, ρ) is almost surely recurrent.

Let us explain how we use Lemma 6.11 and Corollary 6.10 to truncate and degree bias \(G_n\) and (U, ρ) so that we may assume without loss of generality that \(G_n \xrightarrow [\pi ]{\mathrm {loc}} (U,\rho )\). Indeed, if it does not hold that \(\mathbb {E} \deg (\rho _n) \to \mathbb {E} \deg (\rho )\) we consider the graphs \(G_n \wedge k(n)\) of Lemma 6.11, which have the same limit (U, ρ). Since \(k(n) \to \infty\), the graphs \(G_n \wedge k(n)\) have non-empty sets of edges (we assume that the \(G_n\) have non-empty sets of edges, otherwise (U, ρ) is an isolated vertex), and thus we may apply Corollary 6.10 and deduce that the degree biasing of \((G_n \wedge k(n), \rho_n)\) converges to the degree biasing of (U, ρ), which is absolutely continuous with respect to (U, ρ); in particular, it is recurrent almost surely if and only if (U, ρ) is. We also erase from \(G_n \wedge k(n)\) all isolated vertices that may have been created in the truncation, since these are drawn with probability 0 after the degree bias. This will be important for us later when we unbias the graphs. Lastly, it is an easy computation using Definition 6.8 that we still have \(\mathbb {P} ( \deg (\rho ) \geq k) \leq Ce^{-ck}\) (possibly for some other positive constants C, c). Thus, from now on we assume without loss of generality that \(G_n \xrightarrow [\pi ]{\mathrm {loc}} (U,\rho )\), that deg(ρ) has an exponential tail and that the \(G_n\) have no isolated vertices almost surely.

Recall now the definitions and notations of Sect. 6.1. Consider the star-tree transform \(G_n^*\) of \(G_n\) and let \(\rho _n^*\) be a random vertex of \(T_{\rho _n}\) drawn according to the stationary distribution of \(T_{\rho _n}\). Similarly, conditioned on (U, ρ), let \(U^*\) be the star-tree transform of U and \(\rho^*\) be a random vertex of \(T_\rho\) drawn according to the stationary distribution of \(T_\rho\). Furthermore, we put markings on \(G_n^*\) and \(U^*\) by marking each edge e of \(G_n^*\) or \(U^*\) with deg(v) whenever e is in the tree \(T_v\) and deg(v) is the degree of v in \(G_n\) or U, respectively. Denote these markings by \(M_n\) and M, respectively.
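In terms of data, the marking is just a lookup of the original degree. A sketch (our own bookkeeping: `owner` is assumed to map each edge e of the transformed graph to the original vertex v with \(e \in T_v\), which the star-tree transform can record as it builds the trees):

```python
def markings_and_resistances(owner, deg_G):
    """Given owner[e] = the original vertex v whose tree T_v contains the
    transformed edge e, and deg_G[v] = the degree of v in the original graph,
    return the marking M of this section and the resistances 1/M(e) of Lemma 6.4."""
    M = {e: deg_G[owner[e]] for e in owner}
    R_mark = {e: 1.0 / M[e] for e in M}
    return M, R_mark
```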

Claim 6.13

We have that \((G_n^*,\rho _n^*, M_n)\) for each n and \((U^*, \rho^*, M)\) are stationary, and

$$\displaystyle \begin{aligned}(G_n^*, \rho_n^*, M_n) \stackrel{d}{\longrightarrow} (U^*, \rho^*, M) \, .\end{aligned}$$

Proof

For any fixed integer r > 0, the laws of \(B_{G_n^*}(\rho _n^*,r)\) and \(B_{U^*}(\rho ^*,r)\) are determined by those of \(B_{G_n}(\rho _n,r)\) and \(B_U(\rho, r)\), respectively (see Remark 6.3). We obtain that

$$\displaystyle \begin{aligned}(G_n^*,\rho_n^*, M_n) \stackrel{d}{\longrightarrow} (U^*,\rho^*, M) \, .\end{aligned}$$

Secondly, it is immediate to check that for each \(v \in G_n\) we have that the number of edges in \(T_v\) is precisely \(2\deg _{G_n}(v)\). This is the reason why we added the two “extra” neighbors to the root of \(T_v\) in the star-tree transform described in Sect. 6.1. Thus, conditioned on \(G_n\), for any \(x\in G_n^*\) such that \(x \in T_v\) for some \(v \in G_n\) we have that

$$\displaystyle \begin{aligned}\mathbb{P} ( \rho_n^*=x \mid G_n) = {\deg_{G_n}(v) \over 2|E(G_n)|} \cdot {\deg_{T_v}(x) \over 2 |E(T_v)|} = {\deg_{T_v}(x) \over 2|E(G_n^*)|} \, ,\end{aligned}$$

or in other words, \((G_n^*,\rho _n^*, M_n)\) is a stationary random graph and since it converges to \((U^*, \rho^*, M)\), the latter is also stationary. □

Lemma 6.14

The triplet \((U^*, \rho^*, M)\) has an exponential tail.

Proof

We observe that \(M(\rho^*) = \deg(v)\) where v is either ρ or one of its neighbors. Hence it suffices to show that if (U, ρ) is a stationary local limit such that deg(ρ) has an exponential tail, then the random variable \(D(\rho) = \max_{v : \{\rho, v\} \in E(U)} \deg(v)\) has an exponential tail. We have

$$\displaystyle \begin{aligned} \mathbb{P}(D(\rho)\geq k) \leq \mathbb{P}(\deg(\rho)\geq k) + \mathbb{P}(\deg(\rho)\leq k \text{ and } D(\rho)\geq k) \, .\end{aligned} $$
(6.2)

The first term on the right hand side decays exponentially in k due to our assumption on (U, ρ). Conditioned on (U, ρ), let \(X_1\) be a uniformly chosen random neighbor of ρ. Then clearly

$$\displaystyle \begin{aligned}\mathbb{P}(\deg(X_1)\geq k \, \mid \, \deg(\rho)\leq k \text{ and } D(\rho)\geq k ) \geq k^{-1} \, .\end{aligned}$$

However, by stationarity \(\mathbb {P}(\deg (X_1) \geq k) = \mathbb {P}(\deg (\rho ) \geq k)\), which decays exponentially. We conclude that the second term on the right hand side of (6.2) decays exponentially as well. □

Consider the stationary random graph \((U^*, \rho^*, M)\). By Lemma 6.14 it has an exponential tail. Consider the edge resistances

$$\displaystyle \begin{aligned} R^{\mathrm{unit}}_e \equiv 1 \, ,\qquad R^{\mathrm{mark}}_e = {1 \over M(e)} \, . \end{aligned}$$

In view of Lemma 6.4, it suffices to show that the network \((U^*, R^{\mathrm{mark}})\) is almost surely recurrent, for then it will follow that U is almost surely recurrent. To prove the former, we apply the second assertion of Corollary 6.10 which allows us to assume without loss of generality that \((U^*, \rho^*)\) is a local limit of finite planar maps (rather than a stationary local limit). In the beginning of the proof we assumed that almost surely the \(G_n\) have no isolated vertices (they were erased after the degree biasing), hence the same holds for \(G_n^*\) and we may use Corollary 6.10. Since \((U^*, \rho^*)\) is now a local limit of finite planar maps with degrees bounded by 4 we may apply Theorem 5.8 to obtain an almost sure constant c > 0 and a sequence of sets \(B_k \subset U^*\) such that

  1.

    \(ck \leq |B_k| \leq c^{-1} k\), and

  2.

    \(\mathcal {R}_{\mathrm {eff}}(\rho ^{*} \leftrightarrow U^* \setminus B_k \,\, ; \,\, \{R^{\mathrm {unit}}_e\}) \geq c \log k\),

where we added to the conclusion of Theorem 5.8 the requirement that \(|B_k| \geq ck\); this is harmless since adding vertices to \(B_k\) only improves the lower bound on the resistance.

We now define one extra set of edge resistances on \(U^*\) which will allow us to interpolate between the edge resistances \(R^{\mathrm{unit}}\) and \(R^{\mathrm{mark}}\). For each integer k ≥ 1 we define

$$\displaystyle \begin{aligned} R^{\mathrm{mid}}_e = \begin{cases} 1 & M(e) \le C\log k,\\ M^{-1}(e) & \text{otherwise} \, , \end{cases} \end{aligned}$$

where C > 0 is some large constant that will be chosen later. We will use \(\mathbf{P}\), \(\mathbf{P}^{\mathrm{mark}}\) and \(\mathbf{P}^{\mathrm{mid}}\) to denote the probability measures, conditioned on \((U^*, \rho^*, M)\), of random walks on \(U^*\) with edge resistances \(\{R^{\mathrm {unit}}_e\}\), \(\{R^{\mathrm {mark}}_e\}\) and \(\{R^{\mathrm {mid}}_e\}\), respectively.

Lemma 6.15

For some other constant c > 0 we have

$$\displaystyle \begin{aligned}\mathcal{R}_{\mathrm{eff}}(\rho^{*}\leftrightarrow U^*\setminus B_k \,\, ; \{R^{\mathrm{mid}}_e\}) \geq c\log k \, .\end{aligned} $$

Proof

We may assume k is large enough so that \(M(e)\le C\log k\) for every edge e incident to \(\rho^*\). By Claim 2.22 we have

$$\displaystyle \begin{aligned}\mathcal{R}_{\mathrm{eff}}(\rho^{*} \leftrightarrow U^* \setminus B_k \,\, ;\,\, \{R^{\mathrm{unit}}_e\}) \leq {1 \over {\mathbf{P}}_{\rho^{*}} ( \tau_{U^*\setminus B_k} < \tau^+_{\rho^{*}} ) } \, ,\end{aligned} $$

hence

$$\displaystyle \begin{aligned}{\mathbf{P}}_{\rho^{*}} ( \tau_{U^*\setminus B_k} < \tau^+_{\rho^{*}} ) \leq {1 \over c \log k} \, ,\end{aligned} $$

by our assumption on \(B_k\) above. By Lemma 6.12 it follows that

$$\displaystyle \begin{aligned}{\mathbf{P}}^{\mathrm{mid}}_{\rho^{*}} ( \tau_{U^*\setminus B_k} < \tau^+_{\rho^{*}} ) \leq {2 \over c \log k} \, ,\end{aligned} $$

when k is large enough and the constant C > 0 in the definition of \(\{R^{\mathrm {mid}}_e\}\) is chosen large enough with respect to β. Using Claim 2.22 again and the fact that \(U^*\) has degrees bounded by 4 concludes the proof. □

We need yet another easy general fact about electric networks.

Claim 6.16

Consider a finite network G in which all resistances are bounded above by 1. Then for any integer m ≥ 1 and any two vertices a ≠ z we have

$$\displaystyle \begin{aligned}\mathcal{R}_{\mathrm{eff}} ( B_G(a,m) \leftrightarrow z) \geq \mathcal{R}_{\mathrm{eff}}(a \leftrightarrow z) - m \, .\end{aligned} $$

Proof

Let \(\theta^m\) be the unit current flow from B(a, m) to z. For a vertex v ∈ B(a, m) denote

$$\displaystyle \begin{aligned}\alpha_v = \sum_{u \not \in B(a,m): u \sim v}\theta^m(vu)\end{aligned} $$

so that \(\alpha_v \geq 0\) for all v ∈ B(a, m) and \(\sum_{v \in B(a,m)} \alpha_v = 1\). For a vertex v ∈ B(a, m) let \(\theta^{a,v}\) be a unit flow putting flow 1 on some shortest path from a to v in B(a, m). Set

$$\displaystyle \begin{aligned} \theta = \sum_{v\in B(a,m)}\alpha_v(\theta^m + \theta^{a,v}) \, . \end{aligned}$$

By Thomson’s principle (Theorem 2.28), Jensen’s inequality and since \(\sum_v \alpha_v = 1\) we have

$$\displaystyle \begin{aligned} \mathcal{R}_{\mathrm{eff}}(a\leftrightarrow z) &\le \mathcal{E}(\theta) = \mathcal{E}(\theta^m) + \sum_e r_e \big [ \sum_{v\in B(a,m)} \alpha_v \theta^{a,v}(e) \big ]^2 \le \mathcal{E}(\theta^m) + \sum_{v\in B(a,m)} \alpha_v \sum_e r_e \left(\theta^{a,v}(e)\right)^2 \\ &\leq \mathcal{E}(\theta^m) + \sum_{v\in B(a,m)} \alpha_v \cdot m = \mathcal{R}_{\mathrm{eff}}(B(a,m)\leftrightarrow z) + m \, . \end{aligned} $$

□

We are finally ready to conclude the proof of the main theorem of this chapter.

Proof of Theorem 6.1

By Lemma 6.15 and Claim 6.16 we have that the sets \(B_k\) obtained earlier satisfy that for any m ≥ 0

$$\displaystyle \begin{aligned} \mathcal{R}_{\mathrm{eff}}(B_{U^*}(\rho^{*},m) \leftrightarrow U^*\setminus B_k \,\, ; \{R^{\mathrm{mid}}_e\}) \ge c\log k-m \, . \end{aligned}$$

Moreover, for every edge e,

$$\displaystyle \begin{aligned} R^{\mathrm{mark}}_e \geq {R^{\mathrm{mid}}_e \over C \log k} \, , \end{aligned}$$

hence

$$\displaystyle \begin{aligned} \mathcal{R}_{\mathrm{eff}}(B_{U^*}(\rho^{*},m) \leftrightarrow U^*\setminus B_k \,\, ; \{R^{\mathrm{mark}}_e\}) \ge {c \over C} - {m \over C \log k} \, . \end{aligned}$$

By taking \(k \to \infty\) we deduce that there exists c > 0 such that for any m ≥ 1

$$\displaystyle \begin{aligned} \mathcal{R}_{\mathrm{eff}}(B_{U^*}(\rho^{*},m) \leftrightarrow \infty;\{R^{\mathrm{mark}}_e\})\ge c \, . \end{aligned}$$

Consider the unit current flow from \(\rho^*\) to \(\infty\) in \((U^*, \{R^{\mathrm {mark}}_e\})\). If this flow had finite energy, then for any ε > 0 there would exist m ≥ 1 such that \(\mathcal {R}_{\mathrm {eff}}(B_{U^*}(\rho ^{*},m) \leftrightarrow \infty ;\{R^{\mathrm {mark}}_e\})\leq \varepsilon \), which is a contradiction to the above. Hence

$$\displaystyle \begin{aligned} \mathcal{R}_{\mathrm{eff}}(\rho^{*} \leftrightarrow \infty;\{R^{\mathrm{mark}}_e\})=\infty \, , \end{aligned}$$

that is, \((U^*, \{R^{\mathrm {mark}}_e\})\) is almost surely recurrent. The theorem now follows by Lemma 6.4. □