## Abstract

We define a dynamical simple symmetric random walk in one dimension, and show that there almost surely exist exceptional times at which the walk tends to infinity. This is in contrast to the usual dynamical simple symmetric random walk in one dimension, for which such exceptional times are known not to exist. In fact we show that the set of exceptional times has Hausdorff dimension 1/2 almost surely, and give bounds on the rate at which the walk diverges at such times. We also show noise sensitivity of the event that our random walk is positive after *n* steps. In fact this event is maximally noise sensitive, in the sense that it is quantitatively noise sensitive for any sequence \(\varepsilon _n\) such that \(n\varepsilon _n\rightarrow \infty \). This is again in contrast to the usual random walk, for which the corresponding event is known to be noise stable.

## Introduction and results

Consider two simple symmetric random walks in one dimension. The first, at each step independently, jumps upwards with probability 1/2 or downwards with probability 1/2. The second begins facing upwards and, at each step independently, either takes a step in the direction it is facing, with probability 1/2, or switches direction and takes a step the other way, with probability 1/2.

We call the first of these two random walks the *compass* random walk, as it has an in-built sense of direction, and the second the *switch* random walk, as it only decides whether or not to switch directions. These two random walks have exactly the same distribution—they are simple symmetric random walks—although, as we will see when we define them rigorously, they are different functions of the underlying randomness. This means that when we talk about noise sensitivity or dynamical sensitivity of the two walks, they may (and do) have very different properties.

We now define carefully the objects of interest. Let \(X_1,X_2,\ldots \) be independent random variables satisfying

\[
\mathbb{P}(X_i = 1) = \mathbb{P}(X_i = -1) = \tfrac{1}{2}
\]

for each \(i\in \mathbb {N}\). Define, for each \(n\ge 0\),

\[
Y_n = \sum_{i=1}^{n} X_i
\]

and

\[
Z_n = \sum_{i=1}^{n} \prod_{j=1}^{i} X_j,
\]
where we take the empty sum to be zero, so \(Y_0=Z_0=0\). We call \(Y = (Y_n,\, n\ge 0)\) the compass random walk, and \(Z = (Z_n,\, n\ge 0)\) the switch random walk. We can think of \(Y = Y(X)\) and \(Z=Z(X)\) as functions of the sequence of random variables \(X=(X_1,X_2,\ldots )\). It is easy to see that, although they are different functions, the two walks *Y* and *Z* have the same distribution. Indeed, the written descriptions at the beginning of this section make clear that each of the two walks is a natural one-dimensional interpretation of the “ant in the labyrinth” or the “drunkard’s walk”. However, *Z* is more sensitive than *Y* to changes in the sequence *X*, in a sense that we will make precise below.
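These two constructions can be compared directly in a short simulation. The following sketch (an illustration of ours; the function names are not from the paper) builds both walks from the same sign sequence, notes that they are different functions of the bits, and checks by exhaustive enumeration that the resulting paths have the same distribution:

```python
import itertools
from collections import Counter

def compass_walk(xs):
    """Y: partial sums Y_n = x_1 + ... + x_n, with Y_0 = 0."""
    path = [0]
    for x in xs:
        path.append(path[-1] + x)
    return path

def switch_walk(xs):
    """Z: start facing upwards; x_i = -1 means switch direction before
    stepping, so the n-th increment is the prefix product x_1 * ... * x_n."""
    path, direction = [0], 1
    for x in xs:
        direction *= x
        path.append(path[-1] + direction)
    return path

# Different functions of the same bits...
assert compass_walk([1, 1, -1]) == [0, 1, 2, 1]
assert switch_walk([1, 1, -1]) == [0, 1, 2, 1]
assert compass_walk([1, -1, 1]) != switch_walk([1, -1, 1])

# ...but identical distributions: enumerate all 2^n sign sequences.
# (xs -> prefix products is a bijection on {-1,1}^n.)
n = 8
dist_Y = Counter(tuple(compass_walk(xs)) for xs in itertools.product([1, -1], repeat=n))
dist_Z = Counter(tuple(switch_walk(xs)) for xs in itertools.product([1, -1], repeat=n))
assert dist_Y == dist_Z
```

The final assertion compares the full path distributions, not just the endpoints: the map from bits to switch-walk increments is a bijection of \(\{-1,1\}^n\), so both walks put uniform mass on all simple paths.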

We now introduce dynamical versions of our random walks *Y* and *Z*. For each \(j\ge 1\), let \((N_j(t), t\ge 0)\) be an independent Poisson process of rate 1, and for each \(i\ge 0\), let \(X_j^i\) be an independent random variable with \(\mathbb {P}(X_j^i = 1) = \mathbb {P}(X_j^i = -1) = 1/2\). Then define

\[
X_j(t) = X_j^{N_j(t)}.
\]
In words, \(X_j(t)\) has the same distribution as \(X_j\) and rerandomises itself at the times of the Poisson process \(N_j(t)\). Write \(Y(t) = Y(X(t))\) and \(Z(t) = Z(X(t))\), or more explicitly

\[
Y_n(t) = \sum_{i=1}^{n} X_i(t) \qquad \text{and} \qquad Z_n(t) = \sum_{i=1}^{n} \prod_{j=1}^{i} X_j(t)
\]
for each \(n\ge 0\).
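For a fixed time \(t\), a coordinate has rerandomised if and only if its Poisson clock has rung by time \(t\), which happens with probability \(1-e^{-t}\) independently across coordinates. A quick simulation (our own helper, not from the paper) checks that \(X_j(t)\) keeps the symmetric \(\pm 1\) marginal and has correlation \(e^{-t}\) with \(X_j(0)\):

```python
import math
import random

def rerandomise(xs, t, rng):
    """Sample X(t) given X(0) = xs: each bit is replaced by a fresh
    uniform +/-1 with probability 1 - exp(-t) (its clock has rung by
    time t), and otherwise kept unchanged."""
    p = 1 - math.exp(-t)
    return [rng.choice([1, -1]) if rng.random() < p else x for x in xs]

rng = random.Random(0)
t, N = 0.5, 200_000
x0 = [rng.choice([1, -1]) for _ in range(N)]
xt = rerandomise(x0, t, rng)

corr = sum(a * b for a, b in zip(x0, xt)) / N
assert abs(corr - math.exp(-t)) < 0.02   # E[X_j(0) X_j(t)] = e^{-t}
assert abs(sum(xt) / N) < 0.02           # X_j(t) is still symmetric
```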

For each fixed \(t\ge 0\), the sequences \(Y(t) = (Y_0(t), Y_1(t),\ldots )\) and \(Z(t) = (Z_0(t), Z_1(t),\ldots )\) are simple symmetric random walks and therefore recurrent almost surely, in that \(Y_n(t)=0\) for infinitely many values of *n* almost surely, and similarly for \(Z_n(t)\). Benjamini et al. [4, Corollary 1.10] showed that recurrence for *Y* is *dynamically stable* in that

\[
\mathbb{P}\bigl(Y_n(t) = 0 \text{ for infinitely many } n, \text{ for all } t\ge 0\bigr) = 1.
\]
Our main result is that, in contrast, recurrence for *Z* is dynamically sensitive. Define

\[
\mathcal{E} = \{t\ge 0 : Z_n(t)\rightarrow \infty \text{ as } n\rightarrow \infty \},
\]

and more generally for \(\alpha \ge 0\),

\[
\mathcal{E}_\alpha = \{t\ge 0 : Z_n(t)\ge n^\alpha \text{ for all sufficiently large } n\}.
\]
### Theorem 1

There exist exceptional times of transience for the switch random walk: \(\mathcal {E}\) is non-empty almost surely. In fact, the Hausdorff dimension of \(\mathcal {E}_\alpha \) equals 1/2 almost surely for any \(\alpha \in [0,1/2)\). On the other hand, \(\mathcal {E}_\alpha \) is empty almost surely for any \(\alpha >1/2\).

It is an interesting question as to whether \(\mathcal {E}_{1/2}\) is empty or not. It is possible that the methods that we use to prove Theorem 1 could be extended to investigate this more delicate case, but this would require more detailed analysis of random walk sample paths that is beyond the scope of this paper.

We also show that the event that \(Z_n\) is positive is noise sensitive. In fact we prove a stronger quantitative noise sensitivity result.

### Theorem 2

Let \((\varepsilon _n, n\ge 1)\) be any sequence in (0, 1) such that \(n\varepsilon _n\rightarrow \infty \). The sequence of events \((\{Z_n>0\}, n\ge 1)\) is quantitatively noise sensitive with respect to the sequence \((\varepsilon _n, n\ge 1)\), by which we mean that

\[
\mathbb{P}\bigl(Z_n(0)>0,\ Z_n(\varepsilon_n)>0\bigr) - \mathbb{P}\bigl(Z_n(0)>0\bigr)^2 \rightarrow 0
\]

as \(n\rightarrow \infty \).

We note that the usual definition of (quantitative) noise sensitivity uses \(-\log (1-\varepsilon _n)\) in place of \(\varepsilon _n\) above, but since \(\varepsilon _n\in (0,1)\), this is equivalent to our statement.

We observe that if \(\liminf n\varepsilon _n <\infty \), then for arbitrarily large values of *n*, with probability bounded away from zero, none of the first *n* bits is rerandomised by time \(\varepsilon _n\), and therefore one cannot expect the events \(\{Z_n(0)>0\}\) and \(\{Z_n(\varepsilon _n)>0\}\) to decorrelate. In this sense Theorem 2 is as strong as it possibly could be; we say that the events \((\{Z_n>0\}, n\ge 1)\) are *maximally noise sensitive*.
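The contrast between Theorems 1–2 and the stability of the compass walk can be seen numerically. The sketch below (a Monte Carlo illustration under parameter choices of our own, not part of the proof) estimates \(\mathbb{P}(Y_n(0)>0,\,Y_n(\varepsilon)>0)\) and \(\mathbb{P}(Z_n(0)>0,\,Z_n(\varepsilon)>0)\) for a small \(\varepsilon\) with \(n\varepsilon\) large:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, N = 1000, 0.05, 10_000       # n * eps = 50: many bits rerandomise

X0 = rng.choice([1, -1], size=(N, n))
mask = rng.random((N, n)) < 1 - np.exp(-eps)
X1 = np.where(mask, rng.choice([1, -1], size=(N, n)), X0)

def Y_final(X):
    return X.sum(axis=1)                        # compass walk: sum of bits

def Z_final(X):
    return np.cumprod(X, axis=1).sum(axis=1)    # switch walk: prefix products

joint_Y = np.mean((Y_final(X0) > 0) & (Y_final(X1) > 0))
joint_Z = np.mean((Z_final(X0) > 0) & (Z_final(X1) > 0))

# Compass: still strongly correlated (noise stable), well above 1/4.
# Switch: already decorrelated, close to P(Z_n > 0)^2, i.e. about 1/4.
assert joint_Y > 0.38
assert joint_Z < 0.32
```

With these parameters only about \(n(1-e^{-\varepsilon})/2\approx 24\) bits differ between the two times, yet that is already enough to decorrelate the switch walk while barely moving the compass walk.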

Again, Theorem 2 is in stark contrast to the corresponding statement for the compass random walk. In fact, the event that \(Y_n\) is positive is known to be noise *stable* [5], in that

\[
\lim_{\varepsilon \downarrow 0}\ \sup_{n\ge 1}\ \mathbb{P}\bigl(\mathbb{1}_{\{Y_n(0)>0\}} \ne \mathbb{1}_{\{Y_n(\varepsilon)>0\}}\bigr) = 0.
\]
## Background and notation

### Motivation and existing results

Noise sensitivity and dynamical sensitivity have been active areas of research in probability at least since the papers of Häggström et al. [14] and Benjamini et al. [5]. One of the highlights of the subject is the proof that the existence of an infinite component in critical percolation in two dimensions is dynamically sensitive [12, 20]. The survey of Steif [21] and book by Garban and Steif [13] provide further background and references.

Benjamini et al. [4] considered many properties of a quite general dynamical sequence of random variables, incorporating results on what we call the compass random walk *Y*. They showed that for the compass random walk, as well as the dynamical stability of recurrence that we mentioned before Theorem 1, the strong law of large numbers and the law of the iterated logarithm are also dynamically stable: almost surely there are no exceptional times at which either of these laws does not hold for *Y*(*t*). The paper [4] provided the initial motivation for our project, as we wished to know more about the sensitivity to dynamics of random walks, in particular whether there exist one-dimensional random walks for which recurrence is dynamically sensitive.

It is not too difficult to check that the strong law of large numbers is dynamically stable for the switch random walk as well as the compass random walk, but it follows from our results that the law of the iterated logarithm is dynamically sensitive for the switch walk; indeed, by symmetry, Theorem 1 implies that there almost surely exist times *t* at which \(Z_n(t)\) is negative for all large *n*.

Benjamini et al. [4] also considered random walks in higher dimensions. They showed that in \(\mathbb {Z}^d\), transience for the higher-dimensional analogue of the compass random walk is dynamically stable when \(d\ge 5\). For \(d\in \{3,4\}\) they showed that transience is dynamically sensitive and the set of exceptional times almost surely has Hausdorff dimension \((4-d)/2\). They conjectured that for \(d=2\) recurrence should be dynamically sensitive, which was proven by Hoffman [15], who also showed that the Hausdorff dimension of the set of exceptional times of transience is 1 almost surely. Hoffman and Amir [2] then showed that almost surely there exist times at which the origin is the only position visited finitely many times. Further properties of dynamical random walks were investigated by Khoshnevisan et al. [16, 17].

The sequences \(\{Y_n>0\}\) and \(\{Z_n>0\}\) have exactly the same distribution—as sequences—and yet one is noise stable and one is noise sensitive. Warren [23], inspired by work of Tsirelson [22], gave a similar example of such a pair: writing

the process \((W_n, n\ge 0)\) is also a simple symmetric random walk, and therefore has the same distribution as \((Y_n, n\ge 0)\), yet the events \(\{W_n>0\}\) are noise sensitive.

The object that we refer to as the switch random walk is also known by other names. It has been called the *coin-turning* random walk by Engländer and Volkov who introduced more general versions in [9], and these were further studied by Engländer et al. [10]. It has also been called the *bootstrap* random walk by Collevecchio, Hamza and Shi, who studied the pair (*Y*, *Z*) in [8]; Collevecchio, Hamza and Liu gave a further generalisation in [7].

### Layout of paper

This paper is organised as follows. In Sect. 2.3 we introduce some notation and outline some well-known facts about random walks that will be used extensively in our proofs. In Sect. 3 we give a rough sketch of the proofs of Theorems 1 and 2 that should give the reader an idea of the main arguments involved. We then carry out the proof of Theorem 2 in Sect. 4. The proof of Theorem 1 is substantially more complex, and we give an outline in Sect. 5, which reduces the bulk of the task to proving two propositions, Proposition 1 for the lower bound on the Hausdorff dimension and Proposition 2 for the upper bound, together with several technical lemmas. The proof of Proposition 1 is the most interesting part of the paper and substantially different from existing proofs of related results. Rather than relying on the methods detailed in [13] such as randomised algorithms or the spectral sample, it instead uses more hands-on methods, leaning heavily on the independence of increments of random walks. We carry this out in Sect. 6. Then in Sect. 7 we prove Proposition 2, which mainly consists of elementary but intricate approximations. Finally, in Sect. 8 we prove the technical lemmas required to complete the proof of Theorem 1.

### Notation and preparatory results

Throughout, we write \(f(n)\lesssim g(n)\) if there exists a constant \(c\in (0,\infty )\) such that \(f(n)\le c g(n)\) for all large *n*, and \(f(n)\asymp g(n)\) if both \(f(n)\lesssim g(n)\) and \(g(n)\lesssim f(n)\). We use \(\approx \) only in heuristics to mean “is roughly equal to”. We write \(\mathbb {P}_x\) for the probability measure under which our random walks begin from *x*, rather than 0. To be precise, we mean that under \(\mathbb {P}_x\),

\[
Z_n = x + \sum_{i=1}^{n}\prod_{j=1}^{i} X_j,
\]
and similarly for \(Z_n(t)\), \(Y_n\) and \(Y_n(t)\).

We will use the Fortuin–Kasteleyn–Ginibre (FKG) inequality [11] using the partial order on \(\{-1,1\}^\mathbb {N}\) given by setting \((x_1,x_2,\ldots )\le (y_1,y_2,\ldots )\) if \(x_i\le y_i\) for all \(i\in \mathbb {N}\). This says that if *f* and *g* are either both increasing functions or both decreasing functions with respect to this partial order, then

\[
\mathbb{E}[f(X)g(X)] \ge \mathbb{E}[f(X)]\,\mathbb{E}[g(X)],
\]

and if *f* is increasing but *g* is decreasing, then

\[
\mathbb{E}[f(X)g(X)] \le \mathbb{E}[f(X)]\,\mathbb{E}[g(X)].
\]
We gather here some useful and well-known facts about simple symmetric random walks.

### Lemma 1

Suppose that \(j\ge 2\). If \(|z|\le j^{3/4}\) and \(z\equiv j\) (mod 2), then

\[
\mathbb{P}(Z_j = z) \asymp j^{-1/2} e^{-z^2/(2j)}.
\]
If \(z\not \equiv j\) (mod 2) then \(\mathbb {P}(Z_j=z)=0\).

### Proof

This is simply a version of the local central limit theorem: see for example [18, Proposition 2.5.3 and Corollary 2.5.4]. \(\square \)

### Lemma 2

For any \(j\ge 2\) and \(x>0\),

\[
\mathbb{P}(Z_j \ge x) \le e^{-x^2/(2j)}.
\]
### Proof

This is an application of a simple Chernoff-style bound. For any \(\lambda >0\),

\[
\mathbb{P}(Z_j \ge x) \le e^{-\lambda x}\,\mathbb{E}[e^{\lambda Z_j}] = e^{-\lambda x}\Bigl(\frac{e^{\lambda}+e^{-\lambda}}{2}\Bigr)^{j}.
\]

Noting that

\[
\frac{e^{\lambda}+e^{-\lambda}}{2} \le e^{\lambda^2/2},
\]

we get

\[
\mathbb{P}(Z_j \ge x) \le e^{-\lambda x + j\lambda^2/2},
\]

and choosing \(\lambda = x/j\) gives the result. \(\square \)
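Assuming the bound takes the standard Hoeffding form \(\mathbb{P}(Z_j\ge x)\le e^{-x^2/(2j)}\), it can be verified against the exact binomial tail for moderate \(j\) (a sanity sketch of ours):

```python
import math

def tail(j, x):
    """Exact P(Z_j >= x) for a simple symmetric random walk:
    Z_j = 2*B - j with B ~ Binomial(j, 1/2)."""
    k0 = math.ceil((j + x) / 2)
    return sum(math.comb(j, k) for k in range(k0, j + 1)) / 2 ** j

# The Chernoff argument with lambda = x/j gives exp(-x^2 / (2j));
# check that it dominates the exact tail for every integer x.
for j in (10, 50, 200):
    for x in range(1, j + 1):
        assert tail(j, x) <= math.exp(-x ** 2 / (2 * j))
```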

### Lemma 3

For any \(z,j\in \mathbb {N}\),

\[
\mathbb{P}\Bigl(\min_{1\le i\le j} Z_i > -z\Bigr) = \mathbb{P}\bigl(Z_j \in (-z,z]\bigr).
\]
### Proof

This is a version of the reflection principle. Note that

\[
\mathbb{P}\Bigl(\min_{1\le i\le j} Z_i > -z\Bigr) = \mathbb{P}(Z_j > -z) - \mathbb{P}\Bigl(\min_{1\le i\le j} Z_i \le -z,\ Z_j > -z\Bigr).
\]

Now by reflecting the random walk at the first hitting time of \(-z\) (applying the strong Markov property), we have

\[
\mathbb{P}\Bigl(\min_{1\le i\le j} Z_i \le -z,\ Z_j > -z\Bigr) = \mathbb{P}(Z_j < -z) = \mathbb{P}(Z_j > z),
\]
which establishes the result. \(\square \)

### Corollary 1

For any \(n\ge 1\),

\[
\mathbb{P}(Z_i > 0 \text{ for all } 1\le i\le n) \asymp n^{-1/2}.
\]
### Proof

We have

\[
\mathbb{P}(Z_i > 0 \text{ for all } 1\le i\le n) = \tfrac{1}{2}\,\mathbb{P}_1(Z_i > 0 \text{ for all } 1\le i\le n-1).
\]
Applying Lemma 3, the above equals \(\frac{1}{2}\mathbb {P}_0(Z_{n-1}\in [0,1])\), and by Lemma 1 this is of order \(n^{-1/2}\). \(\square \)
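The \(n^{-1/2}\) order can be sanity-checked by exhaustive enumeration for small *n* (our own illustration), using the classical ballot-type identity \(\mathbb{P}(Z_i>0 \text{ for all } i\le n) = \binom{n}{n/2}/2^{n+1}\) for even *n*:

```python
import itertools
import math

def stay_positive_prob(n):
    """Exact P(Z_i > 0 for all 1 <= i <= n), enumerating all 2^n sign paths."""
    count = 0
    for xs in itertools.product([1, -1], repeat=n):
        s, ok = 0, True
        for x in xs:
            s += x
            if s <= 0:
                ok = False
                break
        if ok:
            count += 1
    return count / 2 ** n

for n in (2, 4, 8, 12, 16):
    p = stay_positive_prob(n)
    assert p == math.comb(n, n // 2) / 2 ** (n + 1)   # exact formula, n even
    assert 0.3 <= p * math.sqrt(n) <= 0.5             # p is of order n^{-1/2}
```

(Asymptotically \(p\,\sqrt{n}\rightarrow 1/\sqrt{2\pi}\approx 0.399\), comfortably inside the asserted window.)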

## Sketch proofs

For \(t\ge 0\) let \(I_0(t) = 0\), and for \(k\ge 1\) define

\[
I_k(t) = \inf\{i > I_{k-1}(t) : X_i(t) \ne X_i(0)\},
\]
the *k*th index for which our Bernoulli random variables disagree at times 0 and *t*. We think of *t* being small, so that for many indices *i* we have \(X_i(t) = X_i(0)\), and we call \(I_k(t)\) the “*k*th change” (at time *t* relative to time 0). We call the steps of the random walk between \(0=I_0(t)\) and \(I_1(t)-1\) the *first period*, the steps between \(I_1(t)\) and \(I_2(t)-1\) the *second period*, and so on. For each *k* we let \(J_k(t) = I_k(t)-I_{k-1}(t)\) be the length of the *k*th period.

Our first key observation is that the increments of \(Z_n(0)\) and \(Z_n(t)\) are equal during odd periods (that is, for \(n\in [I_{2k}(t),I_{2k+1}(t)-1]\)); and the increments of \(Z_n(0)\) and \(-Z_n(t)\) are equal during even periods (that is, for \(n\in [I_{2k+1}(t),I_{2k+2}(t)-1]\)). See Fig. 1.
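This observation is purely deterministic, so it can be checked directly: flipping a set of bits negates the prefix product (hence the increment of *Z*) at exactly those steps *n* preceded by an odd number of changes. A small sketch with change positions of our own choosing:

```python
def switch_increments(xs):
    """Increments of the switch walk: the prefix products of xs."""
    out, d = [], 1
    for x in xs:
        d *= x
        out.append(d)
    return out

x0 = [1, -1, 1, 1, -1, 1, -1, 1, 1, -1]
flips = [3, 6, 9]                   # 1-based indices where X(t) differs from X(0)
xt = list(x0)
for i in flips:
    xt[i - 1] *= -1

inc0, inct = switch_increments(x0), switch_increments(xt)
for n in range(1, len(x0) + 1):
    parity = sum(1 for c in flips if c <= n) % 2
    if parity == 0:                 # odd period: increments agree
        assert inct[n - 1] == inc0[n - 1]
    else:                           # even period: increments are mirrored
        assert inct[n - 1] == -inc0[n - 1]
```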

To see why Theorem 2 is true, let \(t=\varepsilon \in (0,1)\) and run the random walks up to step *n*. Let \(U_n(t)\) be the sum of the increments of \(Z_n(0)\) over odd periods up to step *n*, and \(V_n(t)\) be the sum of the increments over even periods up to step *n*. Then clearly

\[
Z_n(0) = U_n(t) + V_n(t).
\]
(Note that \(U_n(t)\) and \(V_n(t)\) depend on *t* because the periods depend on *t*, even though \(Z_n(0)\) itself does not depend on *t*.) Of course, we can also write \(Z_n(t)\) as the sum of its increments over odd periods, plus the sum of its increments over even periods. But the increments of \(Z_n(t)\) over odd periods are equal to the increments of \(Z_n(0)\) over odd periods, and the increments of \(Z_n(t)\) over even periods are precisely *minus* the increments of \(Z_n(0)\) over even periods. Thus

\[
Z_n(t) = U_n(t) - V_n(t).
\]
As a result,

\[
\mathbb{P}\bigl(Z_n(0)>0,\ Z_n(t)>0\bigr) = \mathbb{P}\bigl(U_n(t) > |V_n(t)|\bigr).
\]
Now we note that—as long as \(t\gg 1/n\), so that there are many periods by step *n*—the quantities \(U_n(t)\) and \(V_n(t)\) have *almost* the same distribution when *n* is large, and are *almost* independent. They are also symmetric and have small probability of being equal or equalling zero. If *U* and *V* are independent symmetric continuous random variables, then \(\mathbb {P}(U>|V|)=1/4\). Approximating this statement with \(U_n(t)\) and \(V_n(t)\) in place of *U* and *V* gives that

\[
\mathbb{P}\bigl(Z_n(0)>0,\ Z_n(t)>0\bigr) \rightarrow \frac{1}{4}
\]
as \(n\rightarrow \infty \), which is what is needed to prove Theorem 2 since clearly \(\mathbb {P}(Z_n(0)>0)^2\rightarrow 1/4\).
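The decomposition behind this argument is again deterministic and easy to verify on a fixed example (bits and change set chosen arbitrarily by us): summing the increments of *Z*(0) over odd and even periods gives *U* and *V* with \(Z_n(0)=U+V\) and \(Z_n(t)=U-V\), so \(\{Z_n(0)>0\}\cap \{Z_n(t)>0\}=\{U>|V|\}\).

```python
def switch_increments(xs):
    out, d = [], 1
    for x in xs:
        d *= x
        out.append(d)
    return out

x0 = [1, 1, -1, 1, -1, -1, 1, 1, -1, 1, 1, -1]
flips = [4, 7, 11]                  # 1-based positions where the two times differ
xt = list(x0)
for i in flips:
    xt[i - 1] *= -1

inc0, inct = switch_increments(x0), switch_increments(xt)
U = sum(inc0[n] for n in range(len(x0))
        if sum(c <= n + 1 for c in flips) % 2 == 0)    # odd periods
V = sum(inc0[n] for n in range(len(x0))
        if sum(c <= n + 1 for c in flips) % 2 == 1)    # even periods

assert sum(inc0) == U + V           # Z_n(0) = U + V
assert sum(inct) == U - V           # Z_n(t) = U - V
assert (sum(inc0) > 0 and sum(inct) > 0) == (U > abs(V))
```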

Theorem 1 is significantly more difficult to prove. We give a sketch of a proof of the existence of exceptional times, whose main ideas are also the key to the most difficult part of calculating the Hausdorff dimension of the set of such times. There will be a much more detailed proof outline in Sect. 5.

It is simpler to deal with \(\mathcal {E}_0\) rather than \(\mathcal {E}\) or \(\mathcal {E}_\alpha \) for much of the proof. We define the event

\[
P_n(t) = \{Z_i(t) > 0 \text{ for all } 1\le i\le n\},
\]
that the random walk *Z*(*t*) is positive for its first *n* steps, and consider

\[
\kappa_n = \int_0^1 \mathbb{1}_{P_n(t)} \mathop{}\mathrm{d}t,
\]
the Lebesgue amount of time in [0, 1] during which *Z*(*t*) stays positive for its first *n* steps. To show the existence of exceptional times, ignoring some technical issues, it essentially suffices to show that

\[
\mathbb{E}[\kappa_n^2] \le C\, \mathbb{E}[\kappa_n]^2
\]
for some finite constant *C*, from which we can deduce that \(\mathbb {P}(\kappa _n > 0) \ge 1/C\) and let \(n\rightarrow \infty \).
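For completeness, the step from the second-moment bound to \(\mathbb {P}(\kappa _n > 0) \ge 1/C\) is the usual Cauchy–Schwarz (Paley–Zygmund) argument:

\[
\mathbb{E}[\kappa_n]^2 = \mathbb{E}\bigl[\kappa_n \mathbb{1}_{\{\kappa_n>0\}}\bigr]^2 \le \mathbb{E}[\kappa_n^2]\,\mathbb{P}(\kappa_n>0) \le C\,\mathbb{E}[\kappa_n]^2\,\mathbb{P}(\kappa_n>0).
\]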

For the first moment, by Fubini’s theorem and stationarity,

\[
\mathbb{E}[\kappa_n] = \int_0^1 \mathbb{P}(P_n(t)) \mathop{}\mathrm{d}t = \mathbb{P}(P_n(0)).
\]
Corollary 1 tells us that \(\mathbb {P}(P_n(0))\asymp n^{-1/2}\).

For the second moment, a simple argument (using Fubini’s theorem and stationarity, which we will give in full later) gives

\[
\mathbb{E}[\kappa_n^2] \le 2\int_0^1 \mathbb{P}\bigl(P_n(0)\cap P_n(t)\bigr) \mathop{}\mathrm{d}t.
\]
Our task is therefore to show that \(\int _0^1 \mathbb {P}(P_n(0)\cap P_n(t)) \mathop {}\mathrm {d}t \lesssim \mathbb {P}(P_n(0))^2 \asymp n^{-1}\).

During the even periods, the increments of *Z*(0) and *Z*(*t*) are mirrored. One can use this to show that the probability that both *Z*(0) and *Z*(*t*) remain positive over an even period is smaller than the square of the probability that *Z*(0) stays positive over the same period. The total length of the even periods is roughly *n*/2 provided *t* is not too small, and so (skipping over several important details) we might hope that, at least when *t* is not too small,

The details required to show this involve sewing together the increments over the even periods to create one random walk path of length roughly *n*/2. It is possible to do this in a very simple and natural way, except for one remaining issue: we cannot ignore the first period, on which the two random walks *Z*(0) and *Z*(*t*) are equal. On this period clearly the best upper bound we can get on the probability that both random walks stay positive is simply \(\mathbb {P}(Z_{I_1(t)-1}(0)>0)\), rather than this quantity squared. A more reasonable overall upper bound is therefore

This does indeed hold, and since \(I_1(t)\approx 2/t\), we have \(\mathbb {P}(Z_{I_1(t)-1}(0)>0)\asymp (2/t)^{-1/2}\), so that

as required. One may further note that an extra factor of \(t^{-\gamma }\) in the integral would not make any difference to the calculation provided that \(\gamma <1/2\), which combined with Frostman’s lemma essentially gives us the lower bound of 1/2 on the Hausdorff dimension.

## Proof of Theorem 2: noise sensitivity for \(\{Z_n>0\}\)

Fix a sequence \((\varepsilon _n, n\ge 1)\) with \(\varepsilon _n\in (0,1)\) for all *n* and \(n\varepsilon _n\rightarrow \infty \). Many of the definitions in this section will depend implicitly on \(\varepsilon _n\). Recall that for \(t\ge 0\) we defined \(I_0(t) = 0\), and for \(k\ge 1\),

\[
I_k(t) = \inf\{i > I_{k-1}(t) : X_i(t) \ne X_i(0)\},
\]
the start of the \((k+1)\)th period. Let

\[
K(n) = 2\bigl\lfloor n(1-e^{-\varepsilon_n})/4 \bigr\rfloor.
\]
We note that, since each \(X_i\) has rerandomised by time \(\varepsilon _n\) with probability \(1-e^{-\varepsilon _n}\), the period length \(I_k(\varepsilon _n)-I_{k-1}(\varepsilon _n)\) is a Geometric random variable of parameter \((1-e^{-\varepsilon _n})/2\). Thus by the law of large numbers we have \(I_{K(n)}(\varepsilon _n) \approx n\).
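As a sanity check, the concentration of \(I_{K(n)}(\varepsilon _n)\) around *n* is easy to simulate, taking \(K(n)=2\lfloor n(1-e^{-\varepsilon _n})/4\rfloor \) as above and sampling the Geometric period lengths by inversion (an illustrative sketch of ours):

```python
import math
import random

rng = random.Random(1)
n, eps = 200_000, 0.1
p = (1 - math.exp(-eps)) / 2                 # parameter of each period length
K = 2 * int(n * (1 - math.exp(-eps)) / 4)    # K(n), forced to be even

def geometric(p, rng):
    """Geometric(p) on {1, 2, ...} by inversion of the CDF."""
    return int(math.log(1 - rng.random()) / math.log(1 - p)) + 1

I_K = sum(geometric(p, rng) for _ in range(K))
# E[I_K] = K / p is approximately n; fluctuations are O(sqrt(n/p)) << n.
assert abs(I_K - n) / n < 0.05
```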

There will be three main parts to this proof. In the first part, we show that the probability that the sum of the increments of a random walk on the odd periods is larger than the modulus of the sum of the increments on the even periods converges to 1/4. In the second part, we will prove Theorem 2 but with \(I_{K(n)}(\varepsilon _n)\) in place of *n*. Finally, in the third part, we will transfer from using \(I_{K(n)}(\varepsilon _n)\) to *n*.

**Part 1: The probability that the sum of increments on odd periods exceeds the modulus of the sum of increments on even periods converges to 1/4.**

Define

and

In words, \(U_n\) is the sum of the increments of a simple symmetric random walk (in fact *Y*, though this is not important) over the odd periods up to step roughly *n*, and \(V_n\) is the sum over the even periods up to step roughly *n*. This is, of course, not quite true, since \(I_{K(n)}(\varepsilon _n)\) is unlikely to be exactly *n*. On the positive side, this gives \(U_n\) and \(V_n\) some nice properties: in particular, they are identically distributed.

We claim that

To see this, we observe that

The first eight terms are all equal, and the last tends to 0 as \(n\rightarrow \infty \). Thus

as claimed.

**Part 2: Proving Theorem 2 but with \(I_{K(n)}(\varepsilon _n)\) in place of *n*.**

Noting that *K*(*n*) is even, we now let

and

Clearly we have \(Z_{I_{K(n)}(\varepsilon _n)}(0) = U'_n + V'_n\). Moreover, since the increments of \(Z(\varepsilon _n)\) and *Z*(0) are equal on odd periods and mirrored on even periods, we have

\[
Z_{I_{K(n)}(\varepsilon_n)}(\varepsilon_n) = U'_n - V'_n.
\]
Thirdly, note that (again recalling that *K*(*n*) is even) \(U'_n\) and \(V'_n\) have the same joint distribution as \(U_n\) and \(V_n\). Thus we have

\[
\mathbb{P}\bigl(Z_{I_{K(n)}(\varepsilon_n)}(0)>0,\ Z_{I_{K(n)}(\varepsilon_n)}(\varepsilon_n)>0\bigr) = \mathbb{P}\bigl(U'_n > |V'_n|\bigr) = \mathbb{P}\bigl(U_n > |V_n|\bigr),
\]

which we have just shown (in Part 1) converges to 1/4 as \(n\rightarrow \infty \). Thus

\[
\mathbb{P}\bigl(Z_{I_{K(n)}(\varepsilon_n)}(0)>0,\ Z_{I_{K(n)}(\varepsilon_n)}(\varepsilon_n)>0\bigr) - \mathbb{P}\bigl(Z_{I_{K(n)}(\varepsilon_n)}(0)>0\bigr)^2 \rightarrow 0,
\]
establishing the theorem with \(I_{K(n)}(\varepsilon _n)\) in place of *n*.

We remark here that so far, the proof works for any value of \(\varepsilon _n\in (0,1)\). However, if \(\varepsilon _n\) is too small, then the value of *K*(*n*) is not large, which will cause problems in the following.

**Part 3: Transferring from \(I_{K(n)}(\varepsilon _n)\) to *n*.**

We claim that

\[
\mathbb{P}\bigl(Z_n(0)>0,\ Z_n(\varepsilon_n)>0\bigr) - \mathbb{P}\bigl(Z_{I_{K(n)}(\varepsilon_n)}(0)>0,\ Z_{I_{K(n)}(\varepsilon_n)}(\varepsilon_n)>0\bigr) \rightarrow 0. \tag{3}
\]
We will use the elementary bounds, for any events *A*, *B*, \(A'\) and \(B'\),

and

For the upper bound, using the first fact above,

and for the lower bound, using the second fact above,

We will show that

the three other similar terms can be dealt with similarly. To do this, we first note that for any \(x_n,y_n>0\),

We first consider \(\mathbb {P}( |I_{K(n)}(\varepsilon _n) - n| > x_n)\). We use Markov’s inequality to see that

and using the fact that \(I_{K(n)}(\varepsilon _n)\) is a sum of *K*(*n*) independent Geometric random variables of parameter \((1-e^{-\varepsilon _n})/2\), we have

Recalling that \(K(n) = 2\lfloor n(1-e^{-\varepsilon _n})/4 \rfloor \), the above is at most

Thus

Choosing the value \(x_n = n^{5/8}/(1-e^{-\varepsilon _n})^{3/8}\), we have

by our assumption that \(n\varepsilon _n\rightarrow \infty \).

We now move on to the second term on the right-hand side of (4). Choosing \(y_n = n^{3/8}/\varepsilon _n^{1/8}\), since \((Z_j(0), j\ge 0)\) is a simple symmetric random walk and \(y_n\ll n^{1/2}\), by the central limit theorem we have

For the final term in (4), by the strong Markov property and Lemma 3,

Since \(x_n = n^{5/8}/(1-e^{-\varepsilon _n})^{3/8} \ll n^{6/8}/\varepsilon _n^{2/8} = y_n^2\), the central limit theorem tells us that the above also converges to zero as \(n\rightarrow \infty \). Combining this with (5) and (6), we see from (4) that

This, together with very similar bounds on the other three terms mentioned above, establishes (3). In Part 2 we showed that

and clearly \(\mathbb {P}(Z_n(0)>0)\rightarrow 1/2\), so the proof of Theorem 2 is complete.

## Outline of the proof of Theorem 1: Hausdorff dimension of exceptional times is 1/2

We now outline the main steps in turning the heuristic in Sect. 3 into a rigorous proof that the Hausdorff dimension of

is 1/2 almost surely for any \(\alpha \in [0,1/2)\). Since \(\mathcal {E}_\alpha \subset \mathcal {E}_0\) for any \(\alpha \ge 0\), it suffices to give an upper bound on the dimension of \(\mathcal {E}_0\) and a lower bound on the dimension of \(\mathcal {E}_\alpha \) for \(\alpha \in (0,1/2)\). This also, of course, implies that \(\mathcal {E}\) is non-empty almost surely and therefore that there exist exceptional times of transience. We will proceed by stating a series of results, whose proofs we delay until later sections.

### Lower bound on Hausdorff dimension of \(\mathcal {E}_\alpha \)

As in the sketch proof, we define the event

\[
P_n(t) = \{Z_i(t) > 0 \text{ for all } 1\le i\le n\}
\]

that the random walk *Z*(*t*) is positive up to step *n*, and similarly

\[
P_n = \{Z_i > 0 \text{ for all } 1\le i\le n\}.
\]
We will use these events for much of the proof. However, to consider \(\mathcal {E}_\alpha \) for \(\alpha >0\), we will also need the more complicated events

\[
P^\alpha_n(t) = \{Z_i(t) \ge i^\alpha \text{ for all } 1\le i\le n\}
\]

that the random walk *Z*(*t*) remains above the curve \(i^\alpha \) for all steps \(i\le n\), and similarly for \(P^\alpha _n\). Here we could consider any \(\alpha \ge 0\), though we will mostly think of \(\alpha \in [0,1/2)\). Note that \(P^0_n(t)=P_n(t)\).

Let

\[
T^\alpha_n = \{t\in [0,1] : P^\alpha_n(t) \text{ occurs}\}
\]
be the set of times at which the random walk stays above the curve \(i^\alpha \) up to step *n*. We write \(\bar{T}^\alpha _n\) for the closure of \(T^\alpha _n\) and \(T^\alpha =\bigcap _n T^\alpha _n\). Finally define, for \(\gamma \in [0,1)\),

Our lower bound on the Hausdorff dimension of \(\mathcal {E}_\alpha \) will be based on the following corollary of [20, Lemma 6.2], which in turn is an application of Frostman’s lemma.

### Lemma 4

Suppose that for some \(\alpha \ge 0\) and \(\gamma \in (0,1)\) we have

Then the Hausdorff dimension of \(\bigcap _n \bar{T}^\alpha _n\) is at least \(\gamma \) with strictly positive probability.

Given Lemma 4, which we will prove in Sect. 8, our main task in proving the lower bound becomes to show that \(\mathbb {E}[\Phi ^\alpha _n(\gamma )]\) is bounded above for each \(\alpha ,\gamma <1/2\). This will be the most difficult (and most novel) part of our proof, and will be carried out in Sect. 6.

### Proposition 1

For any \(\alpha ,\gamma \in [0,1/2)\),

Combining Lemma 4 and Proposition 1 tells us that for any \(\alpha ,\gamma \in [0,1/2)\), the Hausdorff dimension of \(\bigcap _n \bar{T}^\alpha _n\) is at least \(\gamma \) with strictly positive probability. This is not quite what was promised in Theorem 1, which in fact says that the Hausdorff dimension of \(\mathcal {E}_\alpha \) is 1/2 almost surely for any \(\alpha \in [0,1/2)\). Moving from \(\bigcap _n \bar{T}^\alpha _n\) to \(T^\alpha \) is a technicality that can be handled in basically the same way as [14, Lemma 3.2]; and of course \(T^\alpha \subset \mathcal {E}_\alpha \). Finally, showing that the Hausdorff dimension of \(\mathcal {E}_\alpha \) is at least 1/2 almost surely, rather than with positive probability, follows from standard ergodicity arguments (of course this cannot hold for \(T^\alpha \), since with positive probability \(Z_2(t)=0\) for all \(t\in [0,1]\)). The following lemmas take care of these steps. We will prove them in Sect. 8.

### Lemma 5

For any \(\alpha \ge 0\), we have

almost surely.

### Lemma 6

For each \(\alpha \ge 0\), the Hausdorff dimension of \(\mathcal {E}_\alpha \) is a constant (possibly depending on \(\alpha \)) almost surely.

### Upper bound on Hausdorff dimension of \(\mathcal {E}_0\)

The following definitions are more or less standard in the noise sensitivity literature. For a function \(f:\{-1,1\}^\mathbb {N}\rightarrow \mathbb {R}\) and random variables \(X_1,X_2,\ldots \) taking values in \(\{-1,1\}\), we say that \(m\in \mathbb {N}\) is *pivotal* for *f* if

\[
f(X_1,\ldots ,X_{m-1},X_m,X_{m+1},\ldots ) \ne f(X_1,\ldots ,X_{m-1},-X_m,X_{m+1},\ldots ).
\]

Of course this definition depends on the realisation of \(X_1,X_2,\ldots \), although we note that it is independent of the value of \(X_m\in \{-1,1\}\). For an event *E*, we say that *m* is pivotal for *E* if *m* is pivotal for the indicator function of *E*. We define the *influence* of the *m*th bit (on *E*) to be

\[
I_m(E) = \mathbb{P}(m \text{ is pivotal for } E),
\]

and the *total influence* of *E* to be

\[
I(E) = \sum_{m} I_m(E).
\]

For technical reasons, we will need the following generalisations of \(P_n\) and *T*. For \(k\in 2\mathbb {Z}_+\), define the event

\[
P_{k,n} = \{Z_k = 0\} \cap \{Z_i > 0 \text{ for all } k+1\le i\le k+n\},
\]

that *Z* is zero at step *k* and positive for the next *n* steps, and let

\[
T'_k = \{t\ge 0 : Z_k(t) = 0 \text{ and } Z_i(t) > 0 \text{ for all } i\ge k+1\}
\]
be the set of times at which \(Z_k(t)\) is zero and \(Z_i(t)\) is strictly positive from step \(k+1\) onwards.
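A concrete way to see which bits are pivotal for \(\{Z_n>0\}\): for the switch walk, flipping bit *m* negates every increment from step *m* onwards, so the endpoint becomes \(2Z_{m-1}-Z_n\) (the tail of the path is reflected). The exhaustive check below is our own illustration of the pivotality and influence definitions, not part of the proof:

```python
import itertools

def switch_path(xs):
    path, d = [0], 1
    for x in xs:
        d *= x
        path.append(path[-1] + d)
    return path

n = 8
influence = [0] * n
for xs in itertools.product([1, -1], repeat=n):
    z = switch_path(xs)
    for m in range(1, n + 1):
        flipped = list(xs)
        flipped[m - 1] *= -1
        zf = switch_path(flipped)
        assert zf[-1] == 2 * z[m - 1] - z[-1]   # reflection identity
        if (z[-1] > 0) != (zf[-1] > 0):         # m pivotal for {Z_n > 0}
            influence[m - 1] += 1
influence = [c / 2 ** n for c in influence]     # I_m({Z_n > 0})
assert all(i > 0 for i in influence)
```

Note that whether *m* is pivotal does not depend on the value of \(X_m\) itself, consistent with the remark above.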

Our next lemma is just a rephrasing of [20, Theorem 8.1] into our setting, and gives us a condition for bounding the Hausdorff dimension of \(T'_k\) in terms of the total influence of \(P_{k,n}\).

### Lemma 7

The Hausdorff dimension of \(T'_k\) is almost surely at most

### Proof

This is almost exactly the second part of the statement of [20, Theorem 8.1] translated into our notation. There is an extra condition that the events \(P_{k,n}\) must depend only on finitely many random variables, but this is clearly satisfied since \(P_{k,n}\) depends only on \(X_1,\ldots ,X_{n+k}\). \(\square \)

To implement Lemma 7 we now need an upper bound on the influences of \(P_n\).

### Proposition 2

For any \(m=1,2,\ldots ,n\), we have

This result will be proved in Sect. 7. Combining Proposition 2 with Lemma 7 will give us the upper bound of 1/2 on the Hausdorff dimension of \(T^0\) and hence \(\mathcal {E}\). We carry out the details in Sect. 5.4.

### \(\mathcal {E}_\alpha \) is empty for \(\alpha >1/2\)

The final part of Theorem 1 says that \(\mathcal {E}_\alpha \) is empty almost surely when \(\alpha >1/2\). The proof of this fact follows a fairly standard argument. For \(\alpha ,t\ge 0\) and \(n\in \mathbb {N}\) define the event \(L^\alpha _n(t) = \{Z_n(t) \ge n^\alpha \}\), and for \(k\in \mathbb {N}\) let \(\mathcal L^\alpha _n(k) = \int _0^k \mathbb {1}_{L^\alpha _n(t)} \mathop {}\mathrm {d}t\). Note that

By Fubini’s theorem and stationarity,

\[
\mathbb{E}\bigl[\mathcal{L}^\alpha_n(k)\bigr] = k\,\mathbb{P}\bigl(Z_n \ge n^\alpha\bigr).
\]
By Markov’s inequality, for any \(\lambda >0\),

\[
\mathbb{P}\bigl(Z_n \ge n^\alpha\bigr) \le e^{-\lambda n^\alpha}\,\mathbb{E}[e^{\lambda Z_n}].
\]
Since \(Z_n\) is a sum of *n* independent and identically distributed random variables,

\[
\mathbb{E}[e^{\lambda Z_n}] = \Bigl(\frac{e^{\lambda}+e^{-\lambda}}{2}\Bigr)^{n}.
\]
When \(\lambda \le 1\) we have \(e^\lambda /2 + e^{-\lambda }/2 \le 1+3\lambda ^2/4\), so fixing \(\alpha \in (1/2,1]\) and choosing \(\lambda = n^{\alpha -1}\), we have

\[
\mathbb{P}\bigl(Z_n \ge n^\alpha\bigr) \le e^{-n^{2\alpha-1}}\bigl(1+3n^{2\alpha-2}/4\bigr)^{n} \le e^{-n^{2\alpha-1}/4}.
\]
Thus, again with \(\alpha \in (1/2,1]\) and \(\lambda = n^{\alpha -1}\),

\[
\mathbb{E}\bigl[\mathcal{L}^\alpha_n(k)\bigr] \le k\, e^{-n^{2\alpha-1}/4}.
\]
On the other hand, letting \(T = \inf \{t\ge 0 : Z_n(t) \ge n^\alpha \}\), we have

Let \(T' = \inf \{t\ge T : \text {one of the first } n \text{ steps rerandomises}\}\). Then clearly, provided \(T<\infty \),

However, by the strong Markov property, \(T'-T\) is exponentially distributed with parameter *n*. Thus

and therefore

Combining this with (7) and (8), for any \(\alpha \in (1/2,1]\) we have

By the Borel–Cantelli lemma, for any \(\alpha \in (1/2,1]\), the probability that for infinitely many *n*, there exists a time in [0, 1] such that \(L^\alpha _n(t)\) occurs, is zero. Thus \(\mathcal {E}_\alpha \) is empty almost surely for \(\alpha \in (1/2,1]\). The same is trivially true for \(\alpha >1\).
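The elementary bound used above, \(e^\lambda /2 + e^{-\lambda }/2 \le 1+3\lambda ^2/4\) for \(\lambda \le 1\), can be checked numerically (a quick sanity sketch):

```python
import math

# cosh(lambda) = 1 + lambda^2/2 + lambda^4/24 + ...; for lambda <= 1 the
# higher-order terms are dominated by an extra lambda^2/4.
for k in range(1, 1001):
    lam = k / 1000
    assert math.cosh(lam) <= 1 + 3 * lam ** 2 / 4
```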

### Completing the proof of Theorem 1

We now tie together the results from Sects. 5.1, 5.2 and 5.3 to complete the proof of Theorem 1.

### Proof of Theorem 1

We showed in Sect. 5.3 that \(\mathcal {E}_\alpha \) is empty almost surely for \(\alpha >1/2\), so it remains to show that the Hausdorff dimension of \(\mathcal {E}_\alpha \) is 1/2 for any \(\alpha \in [0,1/2)\). As stated at the beginning of Sect. 5, it suffices to show that the Hausdorff dimension of \(\mathcal {E}_\alpha \) is at least 1/2 for \(\alpha >0\) and the Hausdorff dimension of \(\mathcal {E}_0\) is at most 1/2.

By Lemma 4 and Proposition 1, we know that for any \(\alpha ,\gamma \in [0,1/2)\), the Hausdorff dimension of \(\bigcap _n \bar{T}^\alpha _n\) is at least \(\gamma \) with strictly positive probability. By Lemma 5, the same holds for \(T^\alpha \), and since \(T^\alpha \subset \mathcal {E}_\alpha \), the same holds for \(\mathcal {E}_\alpha \). Lemma 6 then tells us that the Hausdorff dimension of \(\mathcal {E}_\alpha \) must be at least 1/2 almost surely.

Moving on to the upper bound, take \(k\in 2\mathbb {Z}_+\) and \(m\in \{k+1,k+2,\ldots ,k+n\}\). If \(Z_k\ne 0\) then *m* cannot be pivotal for \(P_{k,n}\), so

But by the Markov property,

Thus

and so, applying Proposition 2,

By the Markov property

and by Corollary 1 we have \(\mathbb {P}(P_n)\asymp n^{-1/2}\). Combining this with (9), we see that there exist constants \(c,c'\in (0,\infty )\) such that

which converges to 1 as \(n\rightarrow \infty \) for each fixed *k*. From Lemma 7 we obtain that the Hausdorff dimension of \(T_k'\) is almost surely at most \((1+1)^{-1} = 1/2\).

Finally,

\[
\mathcal{E}_0 \subset \bigcup_{k\in 2\mathbb{Z}_+} T'_k,
\]
which as a countable union of sets of Hausdorff dimension at most 1/2 almost surely, itself has Hausdorff dimension at most 1/2 almost surely. This completes the proof. \(\square \)

## Proof of Proposition 1: bounding \(\mathbb {E}[\Phi ^\alpha _n(\gamma )]\) from above

First note that, by Fubini’s theorem,

By stationarity, this is bounded above by

and since \(P_n^\alpha (u) \subset P_n(u)\) for any \(\alpha ,u\ge 0\), this is at most

The following lemma says that the probability of \(P_n^\alpha \) is of the same order as that of \(P_n\). It is a simple application of [19, Theorem 2] and we will prove it later in this section.

### Lemma 8

For any \(\alpha <1/2\),

We now want to bound \(\mathbb {P}(P_n(0) \cap P_n(t))\). As suggested in the sketch proof in Sect. 3, the main idea is that on even periods two mirrored random walks (representing the walk at time 0 and time *t*) must both be larger than 0. The difficulty is in handling the dependencies between periods, and for this we need some more definitions. We recall first that \(I_0(t)=0\) and for \(j\ge 1\)

\[
I_j(t) = \inf\{i > I_{j-1}(t) : X_i(t) \ne X_i(0)\},
\]
the *j*th index for which our Bernoulli random variables disagree at times 0 and *t*. We call the steps between \(I_{j-1}(t)\) and \(I_j(t)-1\) the “*j*th period”, and let \(J_j(t) = I_j(t)-I_{j-1}(t)\) be the length of the *j*th period.

For each \(j\ge 1\), define the event

which says that our dynamical random walk is positive throughout the *j*th period at both time 0 and time *t*. For each \(i\ge 0\), let

the average of the two walks *Z*(0) and *Z*(*t*). Note that, for each *t*, during odd periods the increments of \(W_i(t)\) are equal to the increments of \(Z_i(0)\); and during even periods, \(W_i(t)\) is constant. (When we talk about increments we mean as *i* changes, keeping *t* fixed.)
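This increment structure is easy to check with a minimal simulation. In the sketch below the `switch_walk` helper, the 0/1 switching convention, and the choice of disagreement indices are all ours: *W* moves with *Z*(0) on odd periods and is flat on even periods.

```python
import random

def switch_walk(bits):
    """Switch walk: starts facing up; a 1-bit switches the direction,
    a 0-bit keeps it.  (The 0/1 convention is ours, for illustration.)"""
    d, z, path = 1, 0, [0]
    for b in bits:
        if b == 1:
            d = -d
        z += d
        path.append(z)
    return path

rng = random.Random(0)
n = 200
bits0 = [rng.randint(0, 1) for _ in range(n)]
I = [0, 30, 75, 110, 160]      # I_0 = 0 plus four chosen disagreement indices
bits_t = list(bits0)
for idx in I[1:]:
    bits_t[idx - 1] ^= 1       # flip bit I_j: the j-th disagreement

Z0, Zt = switch_walk(bits0), switch_walk(bits_t)
W = [(a + b) / 2 for a, b in zip(Z0, Zt)]

for j in range(1, len(I)):     # the j-th period covers steps I_{j-1}..I_j - 1
    for i in range(max(I[j - 1], 1), I[j]):
        if j % 2 == 1:         # odd period: W has the increments of Z(0)
            assert W[i] - W[i - 1] == Z0[i] - Z0[i - 1]
        else:                  # even period: W is constant
            assert W[i] == W[i - 1]
print("increment structure verified")
```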

When *j* is odd, define the event

that *W*(*t*) is positive throughout the *j*th period. Note that, since \(W_i(t)\) is the average of \(Z_i(0)\) and \(Z_i(t)\), if both of these are positive, then so is \(W_i(t)\). That is, if *j* is odd, then \(A_j(t) \subset A'_j(t)\).

Making the same comparison when *j* is even would not be useful since *W* is constant. Instead, when *j* is even, let \(B^{(j)}_i(t)\), \(i\ge 0\) be an independent simple random walk started from \(W_{I_{j-1}(t)-1}(t)\) and define

Figure 2 shows a realisation of *Z*(0), *Z*(*t*), *W*(*t*), \(B^{(2)}(t)\) and \(B^{(4)}(t)\).

We need to rule out some unlikely events. Let

which we think of as the event that the odd periods (not including the first) are not too short,

which we think of as the event that the even periods are not too short,

the event that both the odd and even periods are not too short, and

the event that we have at least \(2\lfloor nt/8\rfloor +1\) periods before step *n*.

We note that for each *j*, when *t* is small \(J_j(t)\) has expectation roughly 2/*t*, so when *n* is large the above events should all occur with probability close to 1. The following lemma, which we prove later in the section, quantifies this more precisely.
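This heuristic can be checked numerically. The sketch below (function names ours) uses only the marginal law at lag *t*: each bit disagrees with its time-0 value with probability \((1-e^{-t})/2\), independently over bits, so the period lengths are i.i.d. geometric with mean \(2/(1-e^{-t})\approx 2/t\).

```python
import math
import random

def period_lengths(num_bits, t, rng):
    """Simulate the period lengths J_j between disagreements of X(0), X(t).

    Each bit rerandomises at rate 1, so at lag t it disagrees with its
    time-0 value with probability (1 - e^{-t})/2, independently over bits;
    the gaps between disagreement indices are then i.i.d. geometric.
    """
    p = (1 - math.exp(-t)) / 2
    disagreements = [i + 1 for i in range(num_bits) if rng.random() < p]
    indices = [0] + disagreements          # I_0 = 0, then I_1 < I_2 < ...
    return [b - a for a, b in zip(indices, indices[1:])]

rng = random.Random(1)
t = 0.05
lengths = []
while len(lengths) < 20_000:
    lengths += period_lengths(10_000, t, rng)
mean = sum(lengths) / len(lengths)
print(mean)  # close to 2/(1 - e^{-t}), roughly 2/t = 40 for this small t
```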

### Lemma 9

There exists a constant \(\delta >0\) such that for any \(t\in [0,1]\) and \(n\in \mathbb {N}\),

For now we will work on the event \(E_n(t)\). Also define, for \(k\in \mathbb {N}\),

Our next result translates the probability that we want to bound, which is that of \(V_k(t)\), into probabilities of events involving *W*(*t*) and \(B^{(j)}(t)\). The probabilities on the right are squared, reflecting the fact that we have two random walks (one at time 0 and another at time *t*) that must both stay positive. Apart from the first period, which is important to retain separately, only the even periods are included, since they are the ones on which the two random walks are mirrored.

### Proposition 3

For any \(k,n\in \mathbb {N}\) with \(n\ge 2k\) and any \(t\in [0,1]\),

The proof of this result involves carefully separating out as much independence as possible between the different periods and applying the FKG inequality. Again we postpone the proof to later in the section in order to continue with our overarching proof of Proposition 1.

Next observe that since \(B^{(j)}(t)\) is simply an independent random walk started from \(W_{I_{j-1}(t)-1}(t)\), it has the same distribution as *W* itself over the \((j+1)\)th period. This inspires our next proposition, which allows us to telescope the product from Proposition 3 back into a statement only about *W*.

### Proposition 4

For any \(k,n\in \mathbb {N}\) with \(n\ge 2k\) and any \(t\in [0,1]\),

Combining Propositions 3 and 4, and then using elementary bounds, allows us to prove the following.

### Proposition 5

Suppose that \(t\in [0,1]\) and \(n\in \mathbb {N}\). Then for any \(k\ge nt/4\), we have

Leaving the proof of Proposition 5 until later, we now observe that

where the last equality used the independence of *Z*(0) and the lengths of the periods at time *t*. By Proposition 5, the first term on the last line above is at most a constant times \(1/(nt^{1/2})\), and by Corollary 1 and Lemma 9, the second term is at most a constant times \(n^{-1/2}\exp (-\delta nt)\) for some constant \(\delta >0\). Thus

and so

For \(\gamma <1/2\), the first integral on the right-hand side above is finite and the second integral (which can be approximated by integrating separately over (0, 1/*n*] and (1/*n*, 1)) is of order \(n^{\gamma -1}\). Therefore, for \(\gamma <1/2\),

Recalling from the start of the section that

and from Lemma 8 that for any \(\alpha <1/2\),

we have for \(\alpha ,\gamma <1/2\) that

This completes the proof of Proposition 1, subject to proving all of the intermediary results above.

Before we begin to prove these results, we will need another elementary lemma as an ingredient in the proof of Proposition 3.

### Lemma 10

If \((S_i,\, i\ge 0)\) is a simple symmetric random walk, then for any \(x,y,k\in \mathbb {N}\),

This is easily proved by induction. We include a proof later, but now proceed with the much more interesting proofs of Propositions 3 and 4. These proofs contain the main ideas of our paper.

### Proof of Proposition 3

Our first step is to move from \(A_j(t)\) to \(A'_j(t)\). To do so, we go via a third collection of events which we call \(\tilde{A}_j(t)\). When *j* is odd, let \(\tilde{A}_j(t) = A'_j(t)\). We have already mentioned that if *j* is odd, then

When *j* is even, define the event

We claim that when *j* is even, we also have \(A_j(t)\subset \tilde{A}_j(t)\). Indeed, suppose that *j* is even. We show that if \(\omega \not \in \tilde{A}_j(t)\) then \(\omega \not \in A_j(t)\). If \(\omega \not \in \tilde{A}_j(t)\) then there exists \(i\in [I_{j-1}(t),I_j(t)-1]\) such that either \(Z_i(0)\le 0\), in which case clearly \(\omega \not \in A_j(t)\), or

Then

so since the increments of \(Z_i(t)\) are the negative of the increments of \(Z_i(0)\) during even periods,

and therefore \(Z_i(t)\le 0\). Thus \(\omega \not \in A_j(t)\), establishing our claim. We deduce that, for any \(k\in \mathbb {N}\),

Note that the increments of \(Z_i(0)\) on even periods are independent of the whole process \(W_i(t)\). Combining this fact with Lemma 10, we have

for any \(k\in \mathbb {N}\), where \(\mathcal {F}_{I(t)} = \sigma (I_j(t),j\ge 0)\). Combining (10) and (11) and taking expectations to remove the conditioning, for any \(k\in \mathbb {N}\) we have

Applying Bayes’ formula and then ignoring the odd terms for \(j\ge 3\), we have

We now apply the FKG inequality (2). Recalling that

and noting that the two events above are increasing and decreasing respectively, we get that

where the inequality comes from (2) and the equality follows from symmetry about \(W_{I_{2j-1}(t)-1}(t)\) (recalling that \(B^{(2j)}_0(t) = W_{I_{2j-1}(t)-1}(t)\)). Substituting this into (12), we have shown that

as required. \(\square \)

### Proof of Proposition 4

We work by induction on *k*. For \(k=1\), we have

On the event \(A'_1(t)\cap E_n(t)\), the law of \((B^{(2)}_i(t))_{i\in [1,J_2(t)]}\) is identical to that of \((W_{I_2(t)-1+i}(t))_{i\in [1,J_3(t)]}\), and therefore

establishing the claim in the case \(k=1\). The general case is very similar: assuming that the claim holds for \(k-1\), we have

Considering the last term on the right-hand side above, we note that \(B^{(2k)}(t)\) is independent of \(A'_{2j}(t)\) given \(A'_{2j-1}(t)\) for all \(j<k\), and therefore the above equals

Provided that \(2k\le n\), on the event \(\bigcap _{j=1}^k A'_{2j-1}(t)\cap E_n(t)\), the law of \((B^{(2k)}_i(t))_{i\in [1,J_{2k}(t)]}\) is identical to that of \((W_{I_{2k}(t)-1+i}(t))_{i\in [1,J_{2k+1}(t)]}\), and therefore

which establishes the claim for *k*, completing the proof. \(\square \)

The proof of our third proposition in this section, Proposition 5, does not contain any major ideas; it simply combines the results above with some elementary approximations.

### Proof of Proposition 5

Combining Propositions 3 and 4, we have

Recalling that \(A'_{2j-1}(t)\) requires that \(W_i(t)\) is positive on the \((2j-1)\)th period, whereas \(W_i(t)\) is constant on even periods, we note that

and therefore

Now, \(W_i(t)\) is simply a simple symmetric random walk during odd periods, and constant on even periods. Thus the probability that it stays positive up to step \(I_{2\lfloor k/2\rfloor +1}(t)-1\) is exactly the probability that a simple symmetric random walk stays positive up to step \(J_1(t) + J_3(t) + \cdots + J_{2\lfloor k/2\rfloor +1}(t)-1\). We deduce that

On the event \(E_n(t)\subset E^{\text {odd}}_n(t)\), we have

and therefore for any \(k\ge nt/4\),

where the equality holds by stationarity of *Z*(*t*) and the independence of \(A'_1(t)\) and \(E_n(t)\) (since \(E_n(t)\) only involves periods 2 and later). We know from Corollary 1 that

and we claim that

To see this, note that \(I_1(t)\) is independent of *Z*(0), so

But by Markov’s inequality

and by Corollary 1,

which establishes the claim. Substituting our approximations into (13), we have shown that for any \(k\ge nt/4\),

as required. \(\square \)

We now proceed with the proofs of our minor lemmas.

### Proof of Lemma 8

Recalling that

we use the fact that \(\mathbb {P}(P_n^\alpha ) = \mathbb {P}(P_n^\alpha \mid P_n)\mathbb {P}(P_n)\), which holds since \(P_n^\alpha \subset P_n\). From Corollary 1 we know that \(\mathbb {P}(P_n)\asymp n^{-1/2}\). It therefore suffices to show that \(\mathbb {P}(P_n^\alpha ) \asymp \mathbb {P}(P_n)\) for any \(\alpha <1/2\). Fix \(\alpha '\in (\alpha ,1/2)\). We apply [19, Theorem 2], which says that we may choose \(\delta >0\) such that

Choose *k* such that \(\delta i^{\alpha '} \ge i^\alpha \) for all \(i\ge k\). Then

which completes the proof. \(\square \)

### Proof of Lemma 9

We begin by considering \(E^{\text {odd}}_n(t)\). In order for \(E^{\text {odd}}_n(t)^c\) to occur, the sum of \(\lfloor nt/8\rfloor \) independent geometric random variables of parameter \((1-e^{-t})/2\) must be smaller than *n*/8, which is equivalent to a binomial random variable of parameters \((\lceil n/8\rceil , (1-e^{-t})/2)\) being larger than \(\lfloor nt/8\rfloor \). Letting *Y* be such a random variable, we have

so

This proves the required decay for \(\mathbb {P}(E^{\text {odd}}_n(t)^c)\), and \(\mathbb {P}(E^{\text {even}}_n(t))=\mathbb {P}(E^{\text {odd}}_n(t))\). The proof for \(\mathbb {P}(E'_n(t)^c)\) uses a very similar Chernoff bound, noting that \(I_j(t)\) is a sum of *j* independent geometric random variables of parameter \((1-e^{-t})/2\). \(\square \)
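The equivalence between the geometric sum and the binomial variable invoked above is the standard negative-binomial/binomial duality, which can be verified exactly (function names ours):

```python
from fractions import Fraction
from math import comb

def binom_tail(n, k, p):
    """P(Binomial(n, p) >= k), computed exactly."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def geom_sum_cdf(n, k, p):
    """P(sum of k independent geometric(p) variables <= n): equivalently,
    the k-th success in a sequence of Bernoulli(p) trials occurs by trial n."""
    return sum(comb(m - 1, k - 1) * p**k * (1 - p)**(m - k)
               for m in range(k, n + 1))

# The two probabilities agree exactly for every n, k.
p = Fraction(1, 3)
for n in range(1, 15):
    for k in range(1, n + 1):
        assert geom_sum_cdf(n, k, p) == binom_tail(n, k, p)
print("geometric/binomial duality verified")
```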

### Proof of Lemma 10

Fix \(y\in \mathbb {N}\) and let

We claim, by induction on *k*, that \(p_{x,k}\) is non-decreasing in *x* for \(x\le y\). By symmetry this is enough to prove the lemma. Clearly the claim holds for \(k=0\). For general *k*, if \(x=y\) then by symmetry

which is larger than \(p_{y-1,k+1}\) by definition. On the other hand if \(x<y\), then by the induction hypothesis,

This completes the proof of our final lemma in this section, and therefore the proof of Proposition 1. \(\square \)

## Proof of Proposition 2: influences of \(P_n\)

In this section we give estimates on the influence of each bit \(m=1,2,\ldots ,n\) on the event \(P_n\). Proposition 2 stated that for \(m=1,\ldots ,n\),

where \(\mathcal I_m(P_n)\) is the probability that the *m*th bit is pivotal for \(P_n\), and it will be our aim to prove this. We will keep *n* fixed and say “*m* is pivotal” as shorthand for “*m* is pivotal for \(P_n\)”.

### Translating \(\mathcal I_m(P_n)\) into elementary properties of the random walk

To reduce the amount of work we will take advantage of the fact that

which holds since the event that *m* is pivotal is independent of the value of \(X_m\):

We now write down an explicit condition for the event \(\{m \text { is pivotal}\}\cap P_n\) to occur. We claim that for \(m=1,2,\ldots ,n\),

In words, *m* is pivotal and \(P_n\) holds if and only if *Z* stays positive for the first *n* steps, and hits \(2Z_{m-1}\) between steps *m* and *n*.

To see why this is true, call the path of *Z* up to step \(m-1\) the *first portion* of the walk, and the path from step *m* to step *n* the *second portion*. Of course \(P_n\) entails that both portions remain positive. In order for *m* to be pivotal, we also need that when we change the sign of the *m*th bit, and therefore reflect the second portion of the path about \(Z_{m-1}\), the second portion no longer remains positive. This holds if and only if the second portion (before reflection) hits \(2Z_{m-1}\). See Fig. 3.
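This reflection argument can be verified by brute force for small *n* and \(m\ge 2\) (the case \(m=1\) is treated separately in the text). The helper names and the 0/1 switching convention below are ours:

```python
from itertools import product

def switch_walk(bits):
    """Switch walk driven by bits: a 1-bit switches the current direction.
    Flipping bit m reflects the path from step m onwards about Z_{m-1}."""
    d, z, path = 1, 0, [0]
    for b in bits:
        if b == 1:
            d = -d
        z += d
        path.append(z)
    return path

def positive(path, n):
    return all(path[i] > 0 for i in range(1, n + 1))

n = 10
for bits in product((0, 1), repeat=n):
    path = switch_walk(bits)
    for m in range(2, n + 1):
        flipped = list(bits)
        flipped[m - 1] ^= 1
        pivotal = positive(path, n) != positive(switch_walk(flipped), n)
        hits = any(path[i] == 2 * path[m - 1] for i in range(m, n + 1))
        # {m pivotal} ∩ P_n  =  {positive for n steps} ∩ {hits 2 Z_{m-1}}
        assert (pivotal and positive(path, n)) == (positive(path, n) and hits)
print("pivotality characterisation verified for n =", n)
```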

If \(m=1\) then trivially \(Z_{m-1}=0\), so (15) reduces to

Thus, by Corollary 1, \(\mathbb {P}(\{1 \text { is pivotal}\}\cap P_n)\) is of order \(n^{-1/2}\). Proposition 2 therefore holds for \(m=1\) and we may assume that from now on \(m\ge 2\).

Returning to (15) in the case \(m\ge 2\), the next step is to split the event that *m* is pivotal over the possible values of \(Z_{m-1}\). Writing \(\mathbb {P}_z\) for the probability measure under which our walk starts from *z* instead of 0, by (14) and (15)

By the ballot theorem [3] (or see [1] for a thorough introduction), the probability that a simple symmetric random walk starting from 0 stays positive up to step \(m-1\) and finishes at *z* is \(z/(m-1)\) times the probability that the random walk finishes at *z*; thus
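The ballot-theorem identity just quoted can be checked exactly by enumeration (helper name ours):

```python
from fractions import Fraction
from itertools import product

def ballot_check(N, z):
    """Check, by full enumeration of N-step walks, that
    P(positive throughout and S_N = z) = (z/N) * P(S_N = z)."""
    end = stay = 0
    for steps in product((-1, 1), repeat=N):
        s, ok = 0, True
        for step in steps:
            s += step
            ok = ok and s > 0
        if s == z:
            end += 1
            stay += ok          # ok is True only if the walk stayed positive
    return Fraction(stay, 2**N) == Fraction(z, N) * Fraction(end, 2**N)

assert all(ballot_check(9, z) for z in range(1, 10))
print("ballot theorem verified for 9-step walks")
```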

### A lower bound on the influences of \(P_n\)

Define the events

Let

We want to bound \(\mathbb {P}_z(L\cap U)\) from below when \(z\le l(m,n)\). The following corollary of Lemmas 1 and 3 will be useful.

### Corollary 2

If \(0\le z \le \sqrt{n-m+1}\) then

and if \(0\le z \le l(m,n)\) then

### Proof

From Lemma 3,

and by Lemma 1, this is of order

The first part of the result now follows from the fact that \(z\le \sqrt{n-m+1}\). The second part is very similar: using Lemmas 3 and 1,

and clearly \(\mathbb {P}_z(U)\le 1\) so the proof is complete. \(\square \)

### Lemma 11

For \(z\in [0,l(m,n)]\), we have

### Proof

We would like to use the FKG inequality. Unfortunately, neither *L* nor *U* is either increasing or decreasing as a function of *X*. However, if we replace the switch random walk *Z* with the compass random walk *Y*, setting

then \(L'\) and \(U'\) are both increasing. Thus the FKG inequality (1) tells us that

and since *Y* and *Z* have the same distribution,

The result now follows from Corollary 2. \(\square \)

Substituting the result of Lemma 11 into (16) gives that

Applying Lemma 1 again tells us that for \(z\in [1,l(m,n)]\), we have \(\mathbb {P}_0(Z_{m-1}=z)\asymp (m-1)^{-1/2}\); so

If \(m\le n/2\), then the right-hand side above is of order \(n^{-1/2}\), and if \(m>n/2\), it is of order \((n-m+1)/n^{3/2}\). In either case this completes the proof of the lower bound in Proposition 2.

### An upper bound on the influences of \(P_n\)

We will now bound (16) from above. This direction is far more involved as we need to consider the entire sum; for the lower bound we could restrict to just the values of *z* that gave the biggest contribution. We recall the definitions of *L* and *U* from (17). As part of our proof we will have to bound several sums of the following form.

### Lemma 12

If \(c\in \mathbb {N}\) and \(r\ge 0\) then

### Proof

Letting \(C = \lceil \sqrt{c}\rceil \), we have

\(\square \)

Let \(M=\lfloor (m-1)^{3/4}\rfloor \). We begin our upper bound on (16) by splitting the sum depending on whether *z* is larger or smaller than *M*: from (16),

We label the two sums in (18) by (18 i) and (18 ii).

Addressing the second sum first, we note that \(\mathbb {P}_z(L)\) is increasing in *z*, so

By Lemma 2 with \(x=M\), we have

If \(m-1 > (n-m+1)^{1/2}\) then we use the trivial bound \(\mathbb {P}_{m-1}(L)\le 1\), or if \(m-1\le (n-m+1)^{1/2}\) then we apply Corollary 2 to obtain

Putting these estimates together, we have shown that

By considering the two cases \(m<\sqrt{n}\) and \(m\ge \sqrt{n}\) separately, one can check that in either case the above is at most a constant times \((n-m+1)n^{-3/2}\), as required. It thus remains to bound (18 i).

To do this we split it again depending on whether *z* exceeds \(\lfloor (n-m+1)^{1/2}\rfloor \). If it does not, we bound \(\mathbb {P}_z(L\cap U)\) above by \(\mathbb {P}_z(L)\) and apply Lemma 1 and Corollary 2. Letting \(M' = M\wedge \lfloor (n-m+1)^{1/2}\rfloor \), we obtain

If \(m\le n/2\), then by Lemma 12,

whereas if \(m>n/2\), then

Applying these two bounds to (19) gives that

as required.

When \(z>(n-m+1)^{1/2}\) then we bound \(\mathbb {P}_z(L\cap U)\) above by \(\mathbb {P}_z(U)\) instead of \(\mathbb {P}_z(L)\). Applying Lemma 1, we have

Thus

If \(m>n/2\), then the above is at most

and by Lemma 12, this is of order at most \((n-m+1)/n^{3/2}\). On the other hand, if \(m<n/2\) and \(M'\le M\), then

and by Lemma 12, this is of order at most

Since \(e^{-x/2} \le x^{-1/2}\) for all \(x>0\), this is bounded above by \((n-m+1)^{-1/2}\). Thus we have shown that when \(M'\le M\),

and of course when \(M'>M\) the sum is empty and (21)\(=0\). Combining this with (20), we have shown that

which completes the proof of Proposition 2.

## Proofs of Lemmas 4, 5 and 6

To complete our proof of the lower bound on the Hausdorff dimension of \(\mathcal {E}\) outlined in Sect. 5, we need several technical lemmas. In this section we prove those results, beginning with Lemma 4, which is based on [20, Lemma 6.2].

### Proof of Lemma 4

If we let \(\mu _n^\alpha \) be the measure on [0, 1] given by

then noting that \(\mu _n^\alpha \) is supported on \(\bar{T}_n^\alpha \), [20, Lemma 6.2] gives a sufficient condition for the Hausdorff dimension of \(\bigcap _n \bar{T}_n^\alpha \) to be at least \(\gamma \). This condition is that there exists a finite constant *c* such that for infinitely many *n*,

In order to prove our lemma it therefore suffices to show that this condition holds with positive probability for \(\alpha <1/2\).

We start by bounding \(\mu _n^\alpha ([0,1])\) from below. By the Paley-Zygmund inequality,
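For reference, the general form of the Paley–Zygmund inequality reads as follows (the choice of threshold \(\theta \) used in the application here is a detail we leave aside):

```latex
% Paley–Zygmund: for Z >= 0 with 0 < E[Z^2] < infinity, and any theta in [0,1):
\mathbb{P}\bigl(Z > \theta\,\mathbb{E}[Z]\bigr)
  \;\ge\; (1-\theta)^2\,\frac{\mathbb{E}[Z]^2}{\mathbb{E}[Z^2]}.
```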

By Fubini’s theorem and stationarity,

Also, for any \(\gamma \in [0,1)\),

Substituting these estimates into (22), we have

so fixing \(\gamma \) to take the value in the statement of the lemma and letting \(S = \sup _n \mathbb {E}[\Phi _n^\alpha (\gamma )]\), we have

Now note that

so the second part of our desired condition requires us to show that \(\Phi _n^\alpha (\gamma )\le c\) for some constant *c* and infinitely many *n*. By Markov’s inequality,

and therefore

By Fatou’s lemma we deduce that

and the proof is complete. \(\square \)

Our proof of Lemma 5 is based on the equivalent result for percolation by Häggström, Peres and Steif [14, Lemma 3.2].

### Proof of Lemma 5

Recall that for each *j*, \((N_j(t), t\ge 0)\) is a Poisson process of rate 1 that decides when \(X_j\) rerandomises. For \(i\ge 0\), let \(\tau ^{(i)}_j = \inf \{t\ge 0 : N_j(t)=i\}\), the time of the *i*th rerandomisation of \(X_j\).
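The rerandomisation mechanism can be sketched in a few lines (names ours). The simulation below estimates the agreement probability \(\mathbb {P}(X_j(0)=X_j(t))=(1+e^{-t})/2\), which holds since the bit keeps its value if the clock does not ring by time *t*, and is uniform otherwise.

```python
import math
import random

def bit_pair(t, rng):
    """Sample (X_j(0), X_j(t)): the bit rerandomises at the arrival
    times of a rate-1 Poisson process, as recalled above."""
    b0 = rng.randint(0, 1)
    b, s = b0, rng.expovariate(1.0)
    while s <= t:
        b = rng.randint(0, 1)      # rerandomisation at time s
        s += rng.expovariate(1.0)  # next arrival of the Poisson clock
    return b0, b

rng = random.Random(42)
t, trials = 0.7, 200_000
agree = sum(b0 == bt for b0, bt in (bit_pair(t, rng) for _ in range(trials)))
# P(X_j(0) = X_j(t)) = e^{-t} + (1 - e^{-t})/2 = (1 + e^{-t})/2
print(agree / trials, (1 + math.exp(-t)) / 2)
```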

Fix *i* and *j*. Since each step of the random walk evolves (in time) independently, almost surely at time \(\tau ^{(i)}_j\) the random walk hits both 0 and \(2Z_{j-1}(\tau ^{(i)}_j)\) after step *j*; thus for large enough *n*, the random walk hits 0 before step *n* regardless of the state of step *j*. The random walk therefore also falls below the line \(i\mapsto i^\alpha \) before step *n* (for large enough *n*), regardless of the state of step *j*. That is, almost surely, \(\tau ^{(i)}_j \not \in \bar{T}_n^\alpha \setminus T_n^\alpha \) for all large *n*.

However, since the system only changes when one of the \(X_j\) rerandomises, for each \(\alpha \ge 0\) and \(n\in \mathbb {N}\) we have

Thus for each *N* we have

However, since the \(T_n^\alpha \) are nested,

so the left-hand side is also empty almost surely, as required. \(\square \)

Finally, Lemma 6 is a standard application of the ergodic theorem.

### Proof of Lemma 6

To apply the ergodic theorem (see for example [6, Theorem 24.1] and the surrounding chapter for further details), we should formally construct our probability space. For each \(i\in \{0,1,2,\ldots \}\) and \(j\in \mathbb {N}\) we take a Bernoulli random variable \(B^{(i)}_j\) and an exponential random variable \(E^{(i)}_j\) of parameter 1. We view our space \(\Omega \) as the set of sequences \((((B^{(i)}_j, E^{(i)}_j)_{i\ge 0})_{j\ge 1})\), with the product \(\sigma \)-algebra. We can then define \(X_j(t)\) to take the value \(B^{(i)}_j\) whenever \(\sum _{k<i}E^{(k)}_j \le t < \sum _{k\le i} E^{(k)}_j\). We have the shift map \(\theta : \Omega \rightarrow \Omega \) which maps \((((B^{(i)}_j, E^{(i)}_j)_{i\ge 0})_{j\ge 1})\) to \((((B^{(i)}_j, E^{(i)}_j)_{i\ge 0})_{j\ge 2})\); in practical terms, \(\theta \) deletes \(X_1(t)\) and builds our (dynamical) random walks from \((X_2(t),X_3(t),\ldots )\) instead. Standard methods show that \(\theta \) is ergodic. Define

For any \(\alpha \ge 0\), the Hausdorff dimension of \(\mathcal {E}_\alpha \cup \mathcal {E}'_\alpha \) is invariant under \(\theta \), and therefore constant almost surely by the ergodic theorem. By symmetry, the Hausdorff dimension of \(\mathcal {E}_\alpha \) equals that of \(\mathcal {E}'_\alpha \). Since the Hausdorff dimension of the union of two sets is the maximum of their Hausdorff dimensions, the Hausdorff dimension of \(\mathcal {E}_\alpha \) must therefore equal that of \(\mathcal {E}_\alpha \cup \mathcal {E}'_\alpha \), and thus be constant almost surely. \(\square \)

## References

- 1. Addario-Berry, L., Reed, B.A.: Ballot theorems, old and new. In: Győri, E., Katona, G.O.H., Lovász, L., Sági, G. (eds.) Horizons of Combinatorics, pp. 9–35. Springer, Berlin (2008)
- 2. Amir, G., Hoffman, C.: A special set of exceptional times for dynamical random walk on \(\mathbb{Z}^{2}\). Electron. J. Probab. **13**(1), 1927–1951 (2008)
- 3. André, D.: Solution directe du problème résolu par M. Bertrand. CR Acad. Sci. Paris **105**(436), 7 (1887)
- 4. Benjamini, I., Häggström, O., Peres, Y., Steif, J.E.: Which properties of a random sequence are dynamically sensitive? Ann. Probab. **31**(1), 1–34 (2003)
- 5. Benjamini, I., Kalai, G., Schramm, O.: Noise sensitivity of Boolean functions and applications to percolation. Publications Mathématiques de l'Institut des Hautes Études Scientifiques **90**(1), 5–43 (1999)
- 6. Billingsley, P.: Probability and Measure. Wiley, New York (2008)
- 7. Collevecchio, A., Hamza, K., Liu, Y.: Invariance principle for biased bootstrap random walks. Stochast. Process. Appl. **129**(3), 860–877 (2019)
- 8. Collevecchio, A., Hamza, K., Shi, M.: Bootstrap random walks. Stochast. Process. Appl. **126**(6), 1744–1760 (2016)
- 9. Engländer, J., Volkov, S.: Turning a coin over instead of tossing it. J. Theor. Probab. **31**(2), 1097–1118 (2018)
- 10. Engländer, J., Volkov, S., Wang, Z.: The coin-turning walk and its scaling limit. Electron. J. Probab. **25**(3), 69–106 (2020)
- 11. Fortuin, C.M., Kasteleyn, P.W., Ginibre, J.: Correlation inequalities on some partially ordered sets. Commun. Math. Phys. **22**(2), 89–103 (1971)
- 12. Garban, C., Pete, G., Schramm, O.: The Fourier spectrum of critical percolation. Acta Mathematica **205**(1), 19–104 (2010)
- 13. Garban, C., Steif, J.E.: Noise Sensitivity of Boolean Functions and Percolation. Cambridge University Press, Cambridge (2014)
- 14. Häggström, O., Peres, Y., Steif, J.E.: Dynamical percolation. Annales de l'Institut Henri Poincaré (B) Probability and Statistics **33**(4), 497–528 (1997)
- 15. Hoffman, C.: Recurrence of simple random walk on \(\mathbb{Z}^{2}\) is dynamically sensitive. ALEA **1**(1), 35–45 (2005)
- 16. Khoshnevisan, D., Levin, D.A., Méndez-Hernández, P.J.: On dynamical Gaussian random walks. Ann. Probab. **33**(4), 1452–1478 (2005)
- 17. Khoshnevisan, D., Levin, D.A., Méndez-Hernández, P.J.: Exceptional times and invariance for dynamical random walks. Probab. Theory Related Fields **134**(3), 383–416 (2006)
- 18. Lawler, G.F., Limic, V.: Random Walk: A Modern Introduction, vol. 123. Cambridge University Press, Cambridge (2010)
- 19. Ritter, G.A.: Growth of random walks conditioned to stay positive. Ann. Probab. **9**(4), 699–704 (1981)
- 20. Schramm, O., Steif, J.E.: Quantitative noise sensitivity and exceptional times for percolation. Ann. Math. (2) **171**(2), 619–672 (2010)
- 21. Steif, J.E.: A survey of dynamical percolation. In: Bandt, C., Zähle, M., Mörters, P. (eds.) Fractal Geometry and Stochastics IV. Progress in Probability, vol. 61. Birkhäuser, Basel (2009)
- 22. Tsirelson, B.: Triple points: from non-Brownian filtrations to harmonic measures. Geom. Funct. Anal. **7**(6), 1096–1142 (1997)
- 23. Warren, J.: Splitting: Tanaka's SDE revisited. Preprint, arXiv:math.PR/9911115 (1999)

## Acknowledgements

MR would like to thank Emily Atkinson, who spent a portion of her summer internship exploring an earlier unsuccessful method to attempt to prove Theorem 2, and Jon Warren for pointing out the example in [23]. He would also like to thank the Royal Society for funding his University Research Fellowship. MP would like to thank the University of Bath for his University Research Scholarship.


## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Prigent, M., Roberts, M.I. Noise sensitivity and exceptional times of transience for a simple symmetric random walk in one dimension.
*Probab. Theory Relat. Fields* (2020). https://doi.org/10.1007/s00440-020-00978-7


### Keywords

- Random walk
- Dynamical sensitivity
- Exceptional times
- Noise sensitivity
- Hausdorff dimension

### Mathematics Subject Classification

- 60G50
- 82C41
- 28A78