# Mixing time for random walk on supercritical dynamical percolation

## Abstract

We consider dynamical percolation on the *d*-dimensional discrete torus \({\mathbb {Z}}_n^d\) of side length *n*, where each edge refreshes its status at rate \(\mu =\mu _n\le 1/2\) to be open with probability *p*. We study random walk on the torus, where the walker moves at rate 1 / (2*d*) along each open edge. In earlier work of two of the authors with A. Stauffer, it was shown that in the subcritical case \(p<p_c({\mathbb {Z}}^d)\), the (annealed) mixing time of the walk is \(\Theta (n^2/\mu )\), and it was conjectured that in the supercritical case \(p>p_c({\mathbb {Z}}^d)\), the mixing time is \(\Theta (n^2+1/\mu )\); here the implied constants depend only on *d* and *p*. We prove a quenched (and hence annealed) version of this conjecture up to a poly-logarithmic factor under the assumption \(\theta (p)>1/2\). When \(\theta (p)>0\), we prove a version of this conjecture for an alternative notion of mixing time involving randomised stopping times. The latter implies sharp (up to poly-logarithmic factors) upper bounds on exit times of large balls throughout the supercritical regime. Our proofs are based on percolation results (e.g., the Grimmett–Marstrand Theorem) and an analysis of the volume-biased evolving set process; the key point is that typically, the evolving set has a substantial intersection with the giant percolation cluster at many times. This allows us to use precise isoperimetric properties of the cluster (due to G. Pete) to infer rapid growth of the evolving set, which in turn yields the upper bound on the mixing time.

## Keywords

Dynamical percolation · Random walk · Mixing times · Stopping times

## Mathematics Subject Classification

Primary 60K35 · 60K37

## 1 Introduction

This paper studies random walk on dynamical percolation on the torus \({\mathbb {Z}}^d_n\). The edges refresh at rate \(\mu \le 1/2\) and switch to open with probability *p* and closed with probability \(1-p\) where \(p>p_c({\mathbb {Z}}^d)\) with \(p_c({\mathbb {Z}}^d)\) being the critical probability for bond percolation on \({\mathbb {Z}}^d\). The random walk moves at rate 1. When its exponential clock rings, the walk chooses one of the 2*d* adjacent edges with equal probability. If the bond is open, then it makes the jump, otherwise it stays in place.
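For concreteness, the dynamics just described can be sketched as a small discrete-event simulation: the walk clock rings at rate 1 and each edge clock at rate \(\mu \), so the next event is an exponential race. Everything below (function names, parameter values) is an illustrative sketch, not code from the paper.

```python
import random

def simulate(n=5, d=2, p=0.6, mu=0.1, t_max=50.0, seed=0):
    """Gillespie-style sketch of random walk on dynamical percolation
    on the d-dimensional torus Z_n^d (illustrative, hypothetical parameters)."""
    rng = random.Random(seed)
    vertices = [tuple(v) for v in _all_vertices(n, d)]
    # One edge per (vertex, coordinate direction): the edge from v to v + e_i.
    edges = [(v, i) for v in vertices for i in range(d)]
    # Initial environment: each edge open with probability p (the product law pi_p).
    eta = {e: rng.random() < p for e in edges}
    x = vertices[0]          # walker starts at the origin
    t = 0.0
    total_rate = 1.0 + mu * len(edges)   # walk clock + all edge clocks
    while t < t_max:
        t += rng.expovariate(total_rate)
        if rng.random() < 1.0 / total_rate:
            # Walk clock rings: choose one of the 2d incident edges uniformly.
            i = rng.randrange(d)
            sign = rng.choice([-1, 1])
            y = _shift(x, i, sign, n)
            # The edge in direction +i sits at x; in direction -i it sits at y.
            e = (x, i) if sign == 1 else (y, i)
            if eta[e]:
                x = y            # jump only if the chosen edge is open
        else:
            # An edge clock rings: refresh its status to open with probability p.
            e = rng.choice(edges)
            eta[e] = rng.random() < p
    return x, eta

def _all_vertices(n, d):
    if d == 0:
        return [()]
    return [(k,) + v for k in range(n) for v in _all_vertices(n, d - 1)]

def _shift(x, i, sign, n):
    y = list(x)
    y[i] = (y[i] + sign) % n
    return tuple(y)
```

Note that the pair (walker position, environment) is what the simulation tracks, mirroring the Markovian pair \((X_t,\eta _t)\) above.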

We represent the state of the system by \((X_t,\eta _t)\), where \(X_t\in {\mathbb {Z}}_n^d\) is the location of the walk at time *t* and \(\eta _t\in \{0,1\}^{E({\mathbb {Z}}_n^d)}\) is the configuration of edges at time *t*, where \(E({\mathbb {Z}}_n^d)\) stands for the edges of the torus. We emphasise at this point that \((X_t,\eta _t)\) is Markovian, while the location of the walker \((X_t)\) is not.

One easily checks that \(\pi \times \pi _p\) is the unique stationary distribution and that the process is reversible; here \(\pi \) is uniform distribution and \(\pi _p\) is product measure with density *p* on the edges. Moreover, if the environment \(\{\eta _t\}\) is fixed, then \(\pi \) is a stationary distribution for the resulting time inhomogeneous Markov process.

Throughout, *d* and *p* are considered fixed, while *n* and \(\mu =\mu _n\) are the two parameters that vary. The focus of [9] was to study the total variation mixing time of \((X,\eta )\).

### Theorem 1.1

The authors of [9] also established the same mixing time when one looks at the walk alone and averages over the environment.

In the present paper we focus on the supercritical regime. We study both the full system and the quenched mixing times. We start by defining the different notions of mixing that we will be using. First of all we write \({\mathbb {P}}_{x,\eta }\left( \cdot \right) \) for the probability measure of the walk, when the environment process is conditioned to be \(\eta = (\eta _t)_{t\ge 0}\) and the walk starts from *x*. We write \({\mathcal {P}}\) for the distribution of the environment which is dynamical percolation on the torus, a measure on càdlàg paths \([0,\infty ) \mapsto \{0,1\}^{E({\mathbb {Z}}^d_{n})}\). We write \({\mathcal {P}}_{\eta _0}\) to denote the measure \({\mathcal {P}}\) when the starting environment is \(\eta _0\). Abusing notation we write \({\mathbb {P}}_{x,\eta _0}\left( \cdot \right) \) to mean the law of the full system when the walk starts from *x* and the initial configuration of the environment is \(\eta _0\). To distinguish it from the quenched law, we always write \(\eta _0\) in the subscript as opposed to \(\eta \).

We first mention the result from the companion paper [8] which is an upper bound on the quenched mixing time and the hitting time of large sets for all values of *p*. We write \(\tau _A\) for the first time *X* hits the set *A*.

### Theorem 1.2

For all *n* and for all \(\varepsilon \), random walk in dynamical percolation on \({\mathbb {Z}}_n^d\) with parameters *p* and \(\mu \) satisfies for all *x* and *k* we have

Our first result concerns the quenched mixing time in the case when \(\theta (p)>1/2\).

### Theorem 1.3

There exists a constant (depending on *d* and *p*) so that for all *n* sufficiently large we have

### Remark 1.4

We note that when \(1/\mu <(\log n)^b\) for some \(b>0\), then Theorem 1.3 follows from Theorem 1.2. (Take \(\varepsilon =n^{-3d}\) in (1.1) and do a union bound over *x*.) So we are going to prove Theorem 1.3 in the regime when \(1/\mu > (\log n)^{d+2}\).

Our second result concerns the mixing time at a stopping time in the quenched regime for all values of \(p>p_c({\mathbb {Z}}^d)\). We first give the definition of this notion of mixing time that we are using.

### Definition 1.5

### Theorem 1.6

There exists a constant (depending on *d* and *p*) so that for all *n* sufficiently large we have

Finally we give a consequence for random walk on dynamical percolation on all of \({\mathbb {Z}}^d\). This is defined analogously to the process on the torus \({\mathbb {Z}}_n^d\) above, where the edges refresh at rate \(\mu \).

### Corollary 1.7

Let *X* be the random walk on dynamical percolation on \({\mathbb {Z}}^d\) and for \(r>0\) let

There exists a constant (depending on *d* and *p*) so that for all *r*

*Notation* For positive functions *f*, *g* we write \(f\sim g\) if \(f(n)/g(n)\rightarrow 1\) as \(n\rightarrow \infty \). We also write \(f(n) \lesssim g(n)\) if there exists a constant \(c \in (0,\infty )\) such that \(f(n) \le c g(n)\) for all *n*, and \(f(n) \gtrsim g(n)\) if \(g(n) \lesssim f(n)\). Finally, we use the notation \(f(n) \asymp g(n)\) if both \(f(n) \lesssim g(n)\) and \(f(n) \gtrsim g(n)\).

*Related work* Various references to random walk on dynamical percolation are provided in [9]. In a direction different from ours, Andres et al. [1] have recently obtained a quenched invariance principle for random walks with time-dependent ergodic degenerate weights that are assumed to take strictly positive values. More recently, Biskup and Rodriguez [2] were able to handle the case where the weights can be zero, and hence the dynamical percolation case.

### 1.1 Overview of the proof

In this subsection we explain the high level idea of the proof and also give the structure of the paper. First we note that when we fix the environment to be \(\eta \), we obtain a time inhomogeneous Markov chain. To study its mixing time, we use the theory of evolving sets developed by Morris and Peres adapted to the inhomogeneous setting, which was done in [8]. We recall this in Sect. 2. In particular we state a theorem by Diaconis and Fill that gives a coupling of the chain with the Doob transform of the evolving set. (Diaconis and Fill proved it in the time homogeneous setting, but the adaptation to the time inhomogeneous setting is straightforward.) The importance of the coupling is that conditional on the Doob transform of the evolving set up to time *t*, the random walk at time *t* is uniform on the Doob transform at the same time. This property of the coupling is going to be crucial for us in the proofs of Theorems 1.3 and 1.6.

The size of the Doob transform of the evolving set in the inhomogeneous setting is again a submartingale, as in the homogeneous case. The crucial quantity we want to control is the amount by which its size increases. This increase will be large only at *good times*, i.e. when the intersection of the Doob transform of the evolving set with the giant cluster is a substantial proportion of the evolving set. Hence we want to ensure that there are enough *good times*. We would like to emphasise that in this case we are using the random walk to infer properties of the evolving set. More specifically, in Sect. 4 we give an upper bound on the time it takes the random walk to hit the giant component. Using this and the coupling of the walk with the evolving set, in Sect. 5 we establish that there are enough good times. We then employ a result of Gábor Pete which states that the isoperimetric profile of a set in a supercritical percolation cluster coincides with its lattice profile. We apply this result to the sequence of good times, and hence obtain a good drift for the size of the evolving set at these times.

We conclude Sect. 5 by constructing a stopping time upper bounded by \((n^2+1/\mu )(\log n)^a\) with high probability so that at this time the Doob transform of the evolving set has size at least \((1-\delta ) (\theta (p)-\delta ) n^d\). In the case when \(\theta (p)>1/2\) we can take \(\delta >0\) sufficiently small so that \((1-\delta ) (\theta (p)-\delta )>1/2\). Using the uniformity of the walk on the Doob transform of the evolving set again, we deduce that at this stopping time the walk is close to the uniform distribution in total variation with high probability. This yields Theorem 1.3.

To finish the proof of Theorem 1.6 the idea is to repeat the above procedure to obtain \(k=k(\varepsilon )\) sets whose union covers at least \(1-\delta \) of the whole space for a sufficiently small \(\delta \). Then we define \(\tau \) by choosing one of these times uniformly at random. At time \(\tau \) the random walk will be uniform on a set with measure at least \(1-\delta \), and hence this means that the total variation from the uniform distribution at this time is going to be small. Since this time is with high probability smaller than *k* times the mixing time, this finishes the proof.
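The final step of this outline uses only the following elementary total-variation estimate (recorded here as a sketch; it is not quoted verbatim from the paper). If \(X_\tau \) is uniform on a set \(B\) with \(\pi (B)\ge 1-\delta \), where \(\pi \) is uniform on \({\mathbb {Z}}_n^d\), then

$$\begin{aligned} \Vert {\mathbb {P}}\left( X_\tau \in \cdot \right) - \pi \Vert _{TV} = \max _{A} \bigl (\pi (A) - {\mathbb {P}}\left( X_\tau \in A\right) \bigr ) \le \pi (B^c) + \max _{A} \bigl (\pi (A\cap B) - {\mathbb {P}}\left( X_\tau \in A\cap B\right) \bigr ) \le \delta , \end{aligned}$$

since \({\mathbb {P}}\left( X_\tau \in A\cap B\right) = |A\cap B|/|B| \ge |A\cap B|/n^d = \pi (A\cap B)\).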

## 2 Evolving sets for inhomogeneous Markov chains

In this section we give the definition of the evolving set process for a discrete time inhomogeneous Markov chain.

Consider a Markov chain on a finite state space \(\Omega \) with transition probabilities *p*(*x*, *y*) and stationary distribution \(\pi \). For \(S\subseteq \Omega \) and \(y\in \Omega \) write \(Q(S,y)=\sum _{x\in S}\pi (x)p(x,y)\). The evolving-set process \(\{S_n\}_{n\ge 0}\) is a Markov chain on subsets of \(\Omega \) whose transitions are described as follows. Let *U* be a uniform random variable on [0, 1]. If \(S\subseteq \Omega \) is the present state, we let the next state \(\widetilde{S}\) be defined by

$$\begin{aligned} \widetilde{S} = \{y\in \Omega : Q(S,y)/\pi (y)\ge U\}. \end{aligned}$$

Note that \(Q(S,y)/\pi (y)\) is the probability, under the reversed chain started from *y*, that it is in *S* after one step. Note that \(\Omega \) and \(\varnothing \) are absorbing states and it is immediate to check that \(\pi (S_n)\) is a martingale. Moreover, since the same uniform variable *U* is used for all *y*, the events \(\{y\in \widetilde{S}\}\), for different *y*, are maximally coupled.

Given a transition matrix *p* with stationary distribution \(\pi \), we define for *S* with \(\pi (S)> 0\) the quantities \(\psi _p(S)\) and \(\varphi _p(S)\), measuring the growth and the conductance of *S* when the transition probability for the Markov chain is *p* and, as always, the stationary distribution is \(\pi \).

For \(r\in [\min _x\pi (x), 1/2]\) we define \(\psi _p(r) :=\inf \{\psi _p(S): \pi (S)\le r\}\) and \(\psi _p(r)=\psi _p(1/2)\) for \(r\ge 1/2\). We define \(\varphi _p(r)\) analogously. We now recall a lemma from Morris and Peres [7] that will be useful later.
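The transition rule described above can be made concrete in a few lines. The following is an illustrative sketch of one evolving-set transition for a finite chain; the function and the example chain are ours, not from the paper.

```python
import random

def evolving_set_step(S, P, pi, rng=random):
    """One transition of the evolving-set process (illustrative sketch).

    S  : current set, as a Python set of states
    P  : transition matrix, dict state -> dict state -> probability
    pi : stationary distribution, dict state -> probability
    The next set is {y : Q(S, y) / pi(y) >= U} for U uniform on [0, 1],
    where Q(S, y) = sum over x in S of pi(x) * P(x, y).
    """
    u = rng.random()
    next_S = set()
    for y in pi:
        q = sum(pi[x] * P[x].get(y, 0.0) for x in S)
        # Omega is absorbing (ratio is 1 for every y); the empty set is
        # absorbing up to the null event U = 0.
        if pi[y] > 0 and q / pi[y] >= u:
            next_S.add(y)
    return next_S
```

For instance, for the lazy walk on a triangle with uniform \(\pi \) and \(S=\{0\}\), the next state is the whole space, \(\{0\}\), or \(\varnothing \) according to whether \(U\le 1/4\), \(U\le 1/2\) or \(U>1/2\).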

### Lemma 2.1

Let *p* be a transition matrix on the finite state space \(\Omega \) with \(p(x,x)\ge \gamma \) for all *x*. Let \(\pi \) be a stationary distribution. Then for all sets \(S\subseteq \Omega \) with \(\pi (S)>0\) we have

Now suppose that the transition probability from time *k* to time \(k+1\) is given by \(p_{k+1}(x,y)\), where we assume that the probability measure \(\pi \) is a stationary distribution for each \(p_k\). In this case, we say that \(\pi \) is a stationary distribution for the inhomogeneous Markov chain. Let \(Q_k= Q_{p_k}\) be as defined in (2.1). We then obtain a time inhomogeneous Markov chain \(S_0,S_1,\ldots \) on subsets of \({\mathcal {S}}\) generated by

$$\begin{aligned} S_{k+1} = \{y\in \Omega : Q_{k+1}(S_k,y)/\pi (y)\ge U_{k+1}\}, \end{aligned}$$

where each \(U_{k+1}\) is, as before, a uniform random variable on [0, 1], sampled independently at each step. We call this the evolving set process with respect to \(p_1,p_2,\ldots \) and stationary distribution \(\pi \).

Given the evolving set process associated to *p*, we define the Doob transform with respect to being absorbed at \(\Omega \) via

### Theorem 2.2

Let *X* be a time inhomogeneous Markov chain. Then there exists a Markovian coupling of *X* and the Doob transform \((S_t)\) of the associated evolving sets so that for all starting points *x* and all times *t* we have \(X_0=x\), \(S_0=\{x\}\) and for all *w*

$$\begin{aligned} {\mathbb {P}}\left( X_t=w\;\vert \;S_0,\ldots ,S_t\right) = \frac{\pi (w)\mathbf {1}(w\in S_t)}{\pi (S_t)}. \end{aligned}$$

We write \(\varphi _n = \varphi _{p_n}\) and \(\psi _n=\psi _{p_n}\), where \(p_n\) is the transition matrix at time *n*.

### Lemma 2.3

Let *S* be the Doob transform with respect to absorption at \(\Omega \) of the evolving set process associated to a time inhomogeneous Markov chain *X* with \({\mathbb {P}}\left( X_{n+1}=x\;\vert \;X_n=x\right) \ge \gamma \) for all *n* and *x*, where \(0<\gamma \le 1/2\). Then for all *n* and all \(S_0\ne \varnothing \) we have

### Proof

## 3 Preliminaries on supercritical percolation

In this section we collect some standard results for supercritical percolation on \({\mathbb {Z}}_n^d\) that will be used throughout the paper. We write \({\mathcal {B}}(x,r)\) for the box in \({\mathbb {Z}}^d\) centred at *x* of side length *r*. We also use \({\mathcal {B}}(x,r)\) to denote the obvious subset of \({\mathbb {Z}}_n^d\) whenever \(r<n\). We denote by \(\partial {\mathcal {B}}(x,r)\) the inner vertex boundary of the ball.

The following lemma might follow from known results but as we could not find a reference, we include its proof.

### Lemma 3.1

There exists a positive constant *c* depending on \(\varepsilon , d, p, \alpha \) so that for all *n*

### Proof

We consider events requiring points to percolate to distance \(n^\beta /2\). More precisely, we let \(A(x) = \{x \leftrightarrow \partial {\mathcal {B}}(x,n^\beta )\}\). We will first show that for all sets \(D\subseteq {\mathbb {Z}}_n^d\) with \(|D|=\gamma n^d\), where \(\gamma \in (0,1]\), and for all \(\varepsilon \in (0,\theta (p))\) there exists \(c>0\) depending on \(\varepsilon , d, p, \gamma \) so that for all *n*

where *c* is a positive constant depending only on *d* and *p*. We now fix a lattice \({\mathcal {L}}\). So for all *n* large enough we can upper bound the probability appearing in (3.2) by

We now note that for points \(x\in {\mathcal {L}}\cap D\) the events *A*(*x*) are independent. Using a concentration inequality for sums of i.i.d. random variables and the fact that \(|{\mathcal {L}}\cap D|\le n^{d(1-\beta )}\) we obtain

where *c* is a positive constant depending on \(\gamma \) and \(\varepsilon \). Plugging this back into (3.2) gives the desired bound with a suitable positive constant *c*.

There exists a positive constant *c* (depending on \(\delta \), *d* and *p*) so that for large *n* and for all \(x,y\in {\mathcal {B}}(0,n(1-\delta ))\)

Using this and a union bound we now get

### Corollary 3.2

There exists a positive constant *c* so that

### Proof

For all *k* we obtain

For each *i*, since the percolation clusters are independent, by conditioning on \({\mathcal {G}}_1,\ldots , {\mathcal {G}}_{i-1}\) and using Lemma 3.1 we get

We perform percolation in \({\mathbb {Z}}_n^d\) with parameter \(p>p_c\). Let \({{\mathcal {C}}}_1, {{\mathcal {C}}}_2, \ldots \) be the clusters in decreasing order of their size. We write \({{\mathcal {C}}}(x)\) for the cluster containing the vertex \(x\in {\mathbb {Z}}_n^d\). For any \(A\subseteq {\mathbb {Z}}_n^d\), we denote by \({\mathrm{diam}}(A)\) the diameter of *A*.

### Proposition 3.3

There exists a positive constant *c* so that for all *r* and for all *n* we have

### Proof

Consider the box of side length *r* centred at 0. Then we have

Using the standard coupling between bond percolation on \({\mathbb {Z}}_n^d\) and bond percolation on \({\mathbb {Z}}^d\) and [4, Theorems 8.18 and 8.21] we obtain

Lemma 3.1 now gives us that

### Corollary 3.4

Consider dynamical percolation at a given time *t*. Then for all \(k\in {\mathbb {N}}\), there exists a positive constant *c* so that for all \(\varepsilon <\theta (p)\) we have as \(n\rightarrow \infty \)

### Remark 3.5

## 4 Hitting the giant component

In this section we give an upper bound on the time it takes the random walk to hit the giant component. From now on we fix \(d\ge 2\) and \(p>p_c({\mathbb {Z}}^d)\), and as before *X* is the random walk on the dynamical percolation process where the edges refresh at rate \(\mu \).

*Notation* For every \(t>0\) we denote by \({\mathcal {G}}_t\) the giant component of the dynamical percolation process \((\eta _t)\) breaking ties randomly. (As we saw in Corollary 3.4 with high probability there are no ties in the time interval that we consider.)

### Proposition 4.1

There exist a constant \(\alpha >0\) and a stopping time \(\sigma \) so that the following hold:

- (i)
\(\min _{x,\eta _0}{\mathbb {P}}_{x,\eta _0}\left( \frac{11d\log n}{\mu }\le \sigma \le \frac{(\log n)^{3d+8}}{\mu }\right) =1-o(1)\) as \(n\rightarrow \infty \) and

- (ii)
\(\min _{x,\eta _0}{\mathbb {P}}_{x,\eta _0}\left( X_{\sigma } \in {\mathcal {G}}_\sigma \right) \ge \alpha \).

### Proof

Let \(\tau \) be the first time that *X* hits the giant component, i.e. \(\tau =\inf \{t\ge 0 : X_t\in {\mathcal {G}}_t\}\). We also define, for a constant *c* to be determined, \(T_0=0\) and inductively for all \(i\ge 0\)

*Proof of* (i). By the strong Markov property we obtain for all *n* large enough and all \(x,\eta _1\)

at time *t* with constant probability \(c_1\). Hence the same is true for the process *X* on \({\mathbb {Z}}_n^d\) for all starting states \(x_0\) and configurations \(\eta _0\).

*Proof of*(ii). We fix \(x,\eta _0\) and we consider two cases:

- (1)
\({\mathbb {P}}_{x,\eta _0}\left( \tau <T_1\right) >\frac{1}{(\log n)^{d+2}}\) or

- (2)
\({\mathbb {P}}_{x,\eta _0}\left( \tau < T_1\right) \le \frac{1}{(\log n)^{d+2}}\).

For the constant *c* in the definition of *r* satisfying \(c>50d^2\) we have

at least *n*/3. It then follows from Corollary 3.4 that the first term on the right-hand side of the last display is bounded below by a positive constant.

Let \({{\mathcal {C}}}_t\) denote the cluster of the walker at time *t*, i.e. it is the connected component of the percolation configuration such that \(X_t\in {{\mathcal {C}}}_t\). Next we define inductively a sequence of stopping times \(S_i\) as follows: \(S_0=11d\log n/\mu \) and for \(i\ge 0\) we let \(S_{i+1}\) be the first time after time \(S_i\) that an edge opens on the boundary of \({{\mathcal {C}}}_{S_i}\). For all \(i\ge 0\) we define

On the event *A* we have \(T_1\ge S_{(c\log n)^{d+1}}\), since \(r=2(c\log n)^{d+2}\) and by the triangle inequality we have for all \(i\le (c\log n)^{d+1} -1\)

Taking *c* sufficiently large, by Proposition 3.3 and large deviations for a Poisson random variable we get for *n* sufficiently large

- (1)
\(Y_{(c\log n)^{d+1}} \ge E_{(c\log n)^{d+1}}\) on \(A_{(c\log n)^{d+1}-1}\) and

- (2)
\(E_{(c\log n)^{d+1}}\) is independent of \(\{A_0,\ldots ,A_{(c\log n)^{d+1}-1},Y_1\ldots Y_{(c\log n)^{d+1}-1}\}\). Therefore we deduce

More generally, for each *i*, one can define an exponential random variable \(E_i\) with parameter \(2d(c\log n)^d\mu \) such that (1) \(Y_i \ge E_i\) on \(A_{i-1}\) and (2) \(E_i\) is independent of \(\{A_0,\ldots ,A_{i-1},Y_1,\ldots , Y_{i-1},E_{i+1}, \ldots , E_{(c\log n)^{d+1}}\}\). We therefore obtain

We now state and prove a lemma that will be used later on in the paper.

### Lemma 4.2

### Proof

where the constant *c* comes from Corollary 3.4. We now define an event *A* as follows

and we define *B* to be the event that there exists a time \(t\in [\sigma , \sigma + 1/((\log n)^{d+1}\mu )]\) and an edge *e* such that \(d(X_t,e)>c\log n\), the edge *e* updates at time *t* and this update disconnects \(X_t\) from \({\mathcal {G}}_t\). Then we have

Let \(\tau _i\) be the *i*-th time after time \(\sigma \) that either *X* attempts a jump or an edge within distance \(c\log n\) from *X* refreshes. Then \(\tau _i \sim \text {Exp}(1+c_1(\log n)^d\mu )\) for a positive constant \(c_1\) and they are independent. These times define a Poisson process of rate \(1+c_1(\log n)^d \mu \). Using basic properties of exponential variables, the probability that at a point of this Poisson process an edge is refreshed is \(c_1(\log n)^d\mu /(1+c_1(\log n)^d\mu )\). By thinning, the times at which edges within distance \(c\log n\) of *X* refresh constitute a Poisson process \({\mathcal {N}}\) of rate \(c_1 (\log n)^d\mu \). So we now obtain as \(n\rightarrow \infty \)
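The superposition-and-thinning step used here is a standard Poisson fact: in the superposition of two independent Poisson processes, each point belongs to the first process with probability equal to the ratio of its rate to the total rate. It can be checked numerically; the snippet below is an illustration with made-up rates, not code from the paper.

```python
import random

def merged_fraction(rate1, rate2, t_max, seed=0):
    """Simulate two independent Poisson processes on [0, t_max] separately,
    merge their points, and return the fraction of merged points coming from
    the first process (a numerical check of the superposition fact)."""
    rng = random.Random(seed)

    def arrivals(rate):
        pts, t = [], 0.0
        while True:
            t += rng.expovariate(rate)   # i.i.d. exponential inter-arrivals
            if t > t_max:
                return pts
            pts.append(t)

    pts1, pts2 = arrivals(rate1), arrivals(rate2)
    total = len(pts1) + len(pts2)
    return len(pts1) / total if total else 0.0
```

With rates 1 and 3 the empirical fraction concentrates near \(1/(1+3)=1/4\), matching the ratio used in the proof above.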

## 5 Good and excellent times

As we already noted in Remark 1.4 we are going to consider the case where \(1/\mu > (\log n)^{d+2}\).

We consider the walk *X* at integer times. When we fix the environment at all times to be \(\eta \), then we obtain a discrete time Markov chain with time inhomogeneous transition probabilities

If *G* is a subgraph of \({\mathbb {Z}}_n^d\) and \(S\subseteq V(G)\), we write \(\partial _G S\) for the edge boundary of *S* in *G*, i.e. the set of edges of *G* with one endpoint in *S* and the other one in \(V(G){\setminus } S\).

We note that for every *t*, \(\eta _t\) is a subgraph of \({\mathbb {Z}}_n^d\) with vertex set \({\mathbb {Z}}_n^d\).

### Definition 5.1

We call an integer time *t* *good* if \(|S_t\cap {\mathcal {G}}_t| \ge \tfrac{|S_t|}{(\log n)^{4d+12}}\). We call a good time *t* *excellent* if

Let *G*(*a*) and \(G_e(a)\) be the sets of good and excellent times *t* respectively with \(0\le t\le (\log n)^{a}\left( n^2+\tfrac{1}{\mu }\right) \).

As we already explained in the Introduction, we will obtain a strong drift for the size of the evolving set at excellent times. So we need to ensure that there are enough excellent times. We start by showing that there is a large number of good times. More formally we have the following:

### Lemma 5.2

### Proof

For every *t* we let \({\mathcal {F}}_t\) be the \(\sigma \)-algebra generated by the evolving set and the environment at integer times up to time *t*.

Moreover, for every *t* in this interval \(X_t\in {\mathcal {G}}_t\). Note that since \(1/\mu > (\log n)^{d+2}\), this interval has length larger than 1. This establishes (5.1).

Hence for every *i* almost surely

for all *n* sufficiently large and all \(x,\eta _0\)

Next we show that there are enough excellent times.

### Lemma 5.3

### Proof

For almost every environment, there is an infinite number of good times that we denote by \(t_1,t_2,\ldots \). For every good time *t* we define \(I_t\) to be the indicator that *t* is excellent.

Again to simplify notation we write \(G=G(8d+26+\gamma )\) and \(G_e= G_e(8d+26+\gamma )\). Note that if *t* is good and at least half of the edges of \(\partial _{\eta _t}S_t\) do not refresh during \([t,t+1]\), then *t* is an excellent time (note that if \(\partial _{\eta _t}S_t=\varnothing \), then *t* is automatically excellent). Let \(E_1,\ldots , E_{|\partial _{\eta _t}S_t|}\) be the first times at which the edges on the boundary \(\partial _{\eta _t}S_t\) refresh. They are independent exponential random variables with parameter \(\mu \).
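The criterion above can be illustrated numerically: each boundary edge survives the unit interval with probability \(e^{-\mu }\ge e^{-1/2}>1/2\) (using \(\mu \le 1/2\)), so at least half of them survive with probability bounded away from zero, and tending to 1 as the boundary grows. The snippet below is a Monte Carlo sketch with hypothetical parameters, not code from the paper.

```python
import random

def half_survive_prob(m, mu, trials=5000, seed=0):
    """Monte Carlo estimate of P(at least half of m boundary edges do not
    refresh during a unit time interval), with i.i.d. Exp(mu) refresh times.
    Illustrative sketch; m, mu, trials are hypothetical parameters."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # An edge "survives" if its first refresh time exceeds 1.
        survivors = sum(1 for _ in range(m) if rng.expovariate(mu) > 1.0)
        if 2 * survivors >= m:
            hits += 1
    return hits / trials
```

For example, with \(m=100\) boundary edges and \(\mu =1/2\) the per-edge survival probability is \(e^{-1/2}\approx 0.61\), so the estimate is close to 1.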

We condition on the information up to time *s*. Then for all *t*, on the event \(\{t\in G\}\) we have

Since \({\mathbb {P}}_{x,\eta _0}\left( E_i>1\right) = e^{-\mu }\) and \(\mu \le 1/2\), there exists \(n_0\) so that for all \(n\ge n_0\) we have for all \(x, \eta _0\)

Let \(A=\{ |G|\ge (\log n)^{\gamma } \cdot n^2\}\). By Lemma 5.2 we get \({\mathbb {P}}_{x,\eta _0}\left( A^c\right) \le 1/n^\alpha \) for all \(n\ge n_0\) and all \(x,\eta _0\). Let \(G=\{t_1,\ldots , t_{|G|}\}\). On the event *A* we have

for all large *n*, and this concludes the proof. \(\square \)

Let \(\tau _1, \tau _2,\ldots \) be the sequence of excellent times. Then the previous lemma immediately gives

### Corollary 5.4

## 6 Mixing times

In this section we prove Theorems 1.3, 1.6 and Corollary 1.7. From now on \(d\ge 2\), \(p>p_c({\mathbb {Z}}^d)\) and \(\frac{1}{\mu } >(\log n)^{d+2}\).

### 6.1 Good environments and growth of the evolving set

The first step is to obtain the growth of the Doob transform of the evolving set at excellent times. We will use the following theorem by Pete [10] which shows that the isoperimetric profile of the giant cluster basically coincides with the profile of the original lattice.

For a subset \(S\subseteq {\mathbb {Z}}_n^d\) we write \(S\subseteq {\mathcal {G}}\) to denote \(S\subseteq V({\mathcal {G}})\) and we also write \(|{\mathcal {G}}| = |V({\mathcal {G}})|\).

### Theorem 6.1

For all *n* sufficiently large

### Remark 6.2

Pete [10] only states that the probability appearing above tends to 1 as \(n\rightarrow \infty \), but a close inspection of the proof actually gives the polynomial decay. Mathieu and Remy [6] have obtained similar results.

### Corollary 6.3

For all *n* sufficiently large

### Proof

We only need to prove the statement for all *S* that are disconnected, since the other case is covered by Theorem 6.1. Let *A* be the event appearing in the probability of Theorem 6.1.

Let *S* be a disconnected set satisfying \(S\subseteq {\mathcal {G}}\) and \(c (\log n)^{\frac{d}{d-1}} \le |S|\le (1-\delta )|{\mathcal {G}}|\). Let \(S=S_1\cup \cdots \cup S_k\) be the decomposition of *S* into its connected components. Then we claim that on the event *A* we have for all \(i\le k\)

Indeed, there are two cases: (i) \(|S_i|\ge c(\log n)^{d/(d-1)}\), in which case the inequality holds on the event *A*; (ii) \(|S_i|<c(\log n)^{d/(d-1)}\), in which case the inequality is trivially true by taking \(\alpha \) small in Theorem 6.1, since the boundary contains at least one vertex. Therefore we deduce

Recall that for a fixed environment \(\eta \) we write *S* for the Doob transform of the evolving set process associated to *X*; the times \(\tau _1, \tau _2, \ldots \) are the excellent times as in Definition 5.1, and we set \(\tau _0=0\).

### Definition 6.4

We call \(\eta \) a \(\delta \)-*good* environment if the following conditions hold:

- (1)
for all \(\frac{11d\log n}{\mu }\le t\le t(n)\log n\) the giant cluster \({\mathcal {G}}_t\) has size \(|{\mathcal {G}}_t|\in ((1-\delta )\theta (p) n^d, (1+\delta )\theta (p) n^d)\),

- (2)for all \(\frac{11d\log n}{\mu }\le t\le t(n)\log n, \, \forall \, S\subseteq {\mathcal {G}}_t\) with$$\begin{aligned} c_1(\log n)^{\frac{d}{d-1}}\le |S| \le (1-\delta )|{\mathcal {G}}_t| \quad \text { we have } \quad |\partial _{\eta _{t}}S|\ge \frac{c_2 |S|^{1-1/d}}{(\log n)}, \end{aligned}$$
- (3)
\({\mathbb {P}}_{x,\eta }\left( \tau _N\le t(n)\right) \ge 1-\frac{1}{n^{10d}}\) for all *x*,

- (4)
\({\mathbb {P}}_{x,\eta }\left( \tau _N<\infty \right) =1\) for all *x*.

To be more precise we should have defined a \((\delta , c_1, c_2)\)-good environment. But we drop the dependence on \(c_1\) and \(c_2\) to simplify the notation.

### Lemma 6.5

### Proof

For all *n* sufficiently large and all \(\eta _0\)

which for suitable *c* and \(\alpha \) prove (6.1). Corollary 5.4, Markov’s inequality and a union bound over all *x* immediately imply

### Proposition 6.6

There exists a positive constant *c* so that the following holds: for all *n*, if \(\eta \) is a \(\delta \)-good environment, then for all starting points *x* we have

The control of the growth of *Z* using the isoperimetric profile will be crucial in the proof of Proposition 6.6.

### Lemma 6.7

For all *n* sufficiently large and for all \(1\le i\le N\) (recall Definition 6.4) we have almost surely

where \({\mathcal {F}}_t\) is the \(\sigma \)-algebra generated by the evolving set up to time *t*, \((\tau _i)\) is the sequence of excellent times associated to the environment \(\eta \), and \(\varphi \) is defined as

with *c* a positive constant.

### Proof

Recall that *Z* is a positive supermartingale and since \(\eta \) is a \(\delta \)-good environment, we have \(\tau _N<\infty \) \({\mathbb {P}}_\eta \)-almost surely. We thus get for all \(0\le i\le N-1\)

for all *n* sufficiently large

Here we used the requirement for *t* to be an excellent time, and hence we get from Definition 5.1

where *c* is a positive constant and for the second inequality we used (6.8) again.

with *c* a positive constant and \(\beta = 4d+9-12/d\). We now note that if \(\pi (S_t)\le 1/2\), then \(Z_t = (\pi (S_t))^{-1/2}\). If \(\pi (S_t) >1/2\), then \(Z_t = \sqrt{\pi (S_t^c)}/\pi (S_t)\le \sqrt{2}\). Since \(\varphi (r)=\varphi (1/2)\) for all \(r>1/2\), we get that in all cases

### Proof of Proposition 6.6

The function *f* is increasing, and hence we can apply [7, Lemma 11 (iii)] to deduce that for all \(\varepsilon >0\), if

From the definition of *Z* we get

The first event appearing on the right-hand side of (6.14) implies that \(|S_{\tau _N}^c|\ge |S_{\tau _N}^c\cap {\mathcal {G}}_{\tau _N}| \ge \delta |{\mathcal {G}}_{\tau _N}|\). Since \(\eta \) is a \(\delta \)-good environment, by (1) of Definition 6.4 we have that \(|{\mathcal {G}}_{\tau _N}|\ge (1-\delta )\theta (p) n^d\). Therefore we obtain

By Markov’s inequality and the two inclusions above we now conclude

where *c* is a positive constant and in the last inequality we used (6.13). Since \(\eta \) is a \(\delta \)-good environment, this now implies that

### 6.2 Proof of Theorem 1.3

In this section we prove Theorem 1.3. First recall the definition of the stopping time \(\tau _\delta \) as the first time *t* that \(|S_t\cap {\mathcal {G}}_t|\ge (1-\delta )|{\mathcal {G}}_t|\).

### Lemma 6.8

Let *p* be such that \(\theta (p)>1/2\). There exist \(n_0\) and \(\delta >0\) so that for all \(n\ge n_0\), if \(\eta \) is a \(\delta \)-good environment, then for all *x*

### Proof

Using the coupling of the evolving set with the walk *X* when the environment is \(\eta \), we obtain

### Corollary 6.9

Let *p* be such that \(\theta (p)>1/2\). Then there exist \(\delta \in (0,1)\) and \(n_0\) such that for all \(n\ge n_0\) and all starting environments \(\eta _0\) we have

### Proof

For all *x* and *y* we have

and hence for all *x* and *y*

The following lemma will be applied later in the case where *R* is a constant or a uniform random variable.

### Lemma 6.10

Let *R* be a random time independent of *X* and such that the following holds: there exists \(\delta \in (0,1)\) such that for all starting environments \(\eta _0\) we have

If this holds for *R*, then for all \(n\ge n_0\), all *x*, *y* and \(\eta _0\)

### Proof

We fix \(x_0, y_0\) and let *X*, *Y* be two walks moving in the same environment \(\eta \) and started from \(x_0\) and \(y_0\) respectively. We now present a coupling of *X* and *Y*. We divide time into rounds of length \(R_1, R_2,\ldots \) and we describe the coupling for every round.

If *X* and *Y* did not couple after \(R_1\) steps, then they have reached some locations \(X_{R_1}=x_1\) and \(Y_{R_1} = y_1\). In the second round we couple them using again the corresponding optimal coupling.

By the assumption on *R*, i.e. the bound on the probability given in the statement of the lemma is uniform over all starting points *x* and *y* and the initial environment, we get that for all \(\eta _0\)

for a constant *c* to be determined. Let *E* denote the number of good environments in the first *k* rounds. We now get

Since we can dominate *E* from below by \({\mathrm{Bin}}(k,1-\delta )\), the first probability decays exponentially in *k*. For the second probability, on the event that there are enough good environments, since the probability of not coupling in each round is at most \(1-\delta \), by successive conditioning we get

Hence for all *n* sufficiently large

and thus for all *n* sufficiently large
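In outline, the round structure gives the usual geometric decay (a sketch with constants suppressed, not a quotation from the paper): each round fails to couple the two walks with probability at most \(1-\delta \), uniformly in their positions and in the environment, so after *k* rounds

$$\begin{aligned} {\mathbb {P}}\left( X_{R_1+\cdots +R_k}\ne Y_{R_1+\cdots +R_k}\right) \le (1-\delta )^k, \end{aligned}$$

and since the total variation distance between the laws of the two walks at time *t* is bounded by \({\mathbb {P}}\left( X_t\ne Y_t\right) \) under any coupling, taking \(k\asymp \log (1/\varepsilon )\) rounds drives the distance below \(\varepsilon \).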

### Proof of Theorem 1.3

The time *R* satisfies the condition of Lemma 6.10 for \(n\ge n_0\). So applying Lemma 6.10 we get for all *n* sufficiently large and all \(x_0,y_0\) and \(\eta _0\)

and hence for all *n* sufficiently large

### 6.3 Proof of Theorem 1.6

### Proof of Theorem 1.6

We couple the evolving set with the walk *X* using the Diaconis–Fill coupling. We define inductively \(\xi _{i+1}\) as the first time after \(\xi _i+t(n)\) that all edges refresh at least once. In order to now define \(\tau _{i+1}\), we start a new evolving set process which at time \(\xi _{i+1}\) is the singleton \(\{X_{\xi _{i+1}}\}\). (This new restart does not affect the definition of the earlier \(\tau _j\)’s.) To simplify notation, we call this process again \(S_t\) and we couple it with the walk *X* using the Diaconis–Fill coupling. Next we define

We call \(\eta \) a *good* environment if \(\eta \) is a \(\delta \)-good environment and \(\xi _k\le 2kt(n)\). Lemma 6.5 and the definition of the \(\xi _i\)’s give for all \(\eta _0\)

There exists a constant *c* so that if \(\eta \) is a good environment, then for all \(x_0\) and for all \(1\le i\le k\) we have

for all *n* sufficiently large and for all \(x_0, \eta _0\)

This and Markov’s inequality now give that for all *n* sufficiently large

The number of edge updates up to time *t* is stochastically bounded by a Poisson random variable of parameter \(\mu \cdot t\cdot dn^d\). Therefore, the number of possible percolation configurations in the interval \([\xi _i, \xi _i+t(n)]\) is dominated by a Poisson variable \(N_i\) of parameter \(\mu \cdot t(n)\cdot dn^d\). By the concentration of the Poisson distribution, we obtain for all *n* sufficiently large

and for all *n* sufficiently large and all \(\eta _0\)

We call a time *good* if it satisfies the following for all \(x_0\) and *x*. Since \(\eta \) is a good environment, for some \(\delta '<\varepsilon /50\) we have for all *n* sufficiently large

We repeat this procedure at most *ck* times. More specifically, when \(X_0=x_0\), we let \(\sigma _1= \tau (x_0)\wedge (\log n)t(n)\). Then, since \(\eta \) is a good environment, we obtain

We now consider the position of *X* at time \(\sigma _1\). If \(X_{\sigma _1}=x\in A_1\), then at this time we stop with probability

for *n* sufficiently large. Therefore, summing over all \(x\in A_1\) we get that

We proceed in the same way in the *i*-th round. Suppose we have not stopped up to the \((i-1)\)-st round. We define the set of good points for the *i*-th round via

The probability of stopping in the *i*-th round is

If we have not stopped by the *ck*-th round, then we set \(T=\sigma _{ck+1}\). Notice, however, that

### Proof of Corollary 1.7

We compare with *X* being a random walk on dynamical percolation on \({\mathbb {Z}}_n^d\). From Theorem 1.6 there exists *a* so that for all *n* large enough and all *x* and \(\eta _0\)

## Notes

### Acknowledgements

We thank Sam Thomas and the referee for a careful reading of the manuscript and providing a number of useful comments. We also thank Microsoft Research for its hospitality where parts of this work were completed. The third author also acknowledges the support of the Swedish Research Council and the Knut and Alice Wallenberg Foundation.

## References

- 1. Andres, S., Chiarini, A., Deuschel, J.-D., Slowik, M.: Quenched invariance principle for random walks with time-dependent ergodic degenerate weights. Ann. Probab. **46**(1), 302–336 (2018)
- 2. Biskup, M., Rodriguez, P.-F.: Limit theory for random walks in degenerate time-dependent random environments. J. Funct. Anal. **274**(4), 985–1046 (2018)
- 3. Diaconis, P., Fill, J.A.: Strong stationary times via a new form of duality. Ann. Probab. **18**(4), 1483–1522 (1990)
- 4. Grimmett, G.: Percolation, Grundlehren der Mathematischen Wissenschaften, vol. 321, 2nd edn. Springer, Berlin (1999)
- 5. Levin, D.A., Peres, Y., Wilmer, E.L.: Markov Chains and Mixing Times. American Mathematical Society, Providence, RI (2009). (With a chapter by James G. Propp and David B. Wilson)
- 6. Mathieu, P., Remy, E.: Isoperimetry and heat kernel decay on percolation clusters. Ann. Probab. **32**(1A), 100–128 (2004)
- 7. Morris, B., Peres, Y.: Evolving sets, mixing and heat kernel bounds. Probab. Theory Relat. Fields **133**(2), 245–266 (2005)
- 8. Peres, Y., Sousi, P., Steif, J.E.: Quenched exit times for random walk on dynamical percolation. Markov Process. Relat. Fields **24**, 715–731 (2018)
- 9. Peres, Y., Stauffer, A., Steif, J.E.: Random walks on dynamical percolation: mixing times, mean squared displacement and hitting times. Probab. Theory Relat. Fields **162**(3–4), 487–530 (2015)
- 10. Pete, G.: A note on percolation on \(\mathbb{Z}^d\): isoperimetric profile via exponential cluster repulsion. Electron. Commun. Probab. **13**, 377–392 (2008)

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.