A Brownian Particle in a Microscopic Periodic Potential
Abstract
We study a model for a massive test particle in a microscopic periodic potential and interacting with a reservoir of light particles. In the regime considered, the fluctuations in the test particle’s momentum resulting from collisions typically outweigh the shifts in momentum generated by the periodic force, so the force is effectively a perturbative contribution. The mathematical starting point is an idealized reduced dynamics for the test particle given by a linear Boltzmann equation. In the limit that the mass ratio of a single reservoir particle to the test particle tends to zero, we show that there is convergence to the Ornstein–Uhlenbeck process under the standard normalizations for the test particle variables. Our analysis is primarily directed towards bounding the perturbative effect of the periodic potential on the particle’s momentum.
Keywords
Brownian limit · Linear Boltzmann equation · Ornstein–Uhlenbeck process · Nummelin splitting

1 Introduction
The Ornstein–Uhlenbeck process offers a homogenized picture for the motion of a massive particle interacting with a gas of lightweight particles at fixed temperature [33]. In this description, the spatial degrees of freedom are driven ballistically by momentum variables which are themselves governed by a diffusion equation that includes a drift term corresponding to the drag felt by the massive particle as it accumulates speed and has more frequent collisions with the gas. Under diffusive rescaling, the spatial variables converge in law to a Brownian motion. This result follows by an elementary analysis of the closed formulas available for the Ornstein–Uhlenbeck process [26]. The Brownian motion description for the test particle transport is effectively “more macroscopic” than the Ornstein–Uhlenbeck model since the fluctuations in the particle’s momentum are integrated into infinitesimal spatial “jumps” for the Brownian particle.
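The two-level picture above, a Langevin equation for the momentum driving the position ballistically, is easy to simulate. The following is a minimal numerical sketch, not a computation from this article: the friction \(\gamma \), the noise strength \(\sigma \), and all discretization parameters are arbitrary illustrative choices.

```python
import numpy as np

# Euler-Maruyama sketch of the Ornstein-Uhlenbeck picture: momentum obeys
# dp = -gamma*p*dt + sigma*dW (drag plus collision noise), and the position
# is driven ballistically by the momentum.  All parameters are illustrative.
rng = np.random.default_rng(0)
gamma, sigma = 1.0, 1.0
dt, n_steps, n_paths = 0.01, 1000, 2000

p = np.zeros(n_paths)   # momentum variables
x = np.zeros(n_paths)   # position variables, driven by p
for _ in range(n_steps):
    # Langevin step: drift (drag) plus Gaussian fluctuations from collisions
    p += -gamma * p * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    x += p * dt

var_p = p.var()  # should relax to the stationary value sigma**2 / (2*gamma)
```

With \(\gamma =\sigma =1\) the sampled momentum variance relaxes to the stationary value \(\sigma ^{2}/(2\gamma )=0.5\), while the position variance grows linearly in time, which is the diffusive behavior recovered under rescaling.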
In the other direction, we may consider derivations of the Ornstein–Uhlenbeck process from models that are “more microscopic”. These relatively microscopic descriptions may merely be more complicated stochastic models for the test particle, such as a linear Boltzmann equation, or, more fundamentally, a reduced dynamics for the test particle beginning from a full microscopic model that includes the evolution of the degrees of freedom for the gas. The stochastic model in the former case should be regarded as an intermediary picture between the Ornstein–Uhlenbeck and the Hamiltonian dynamics arising in some limit; see [30] for a discussion of the low density limit. In the Boltzmann models, the test particle undergoes a Markovian dynamics, whereas for the Hamiltonian model including the gas, the randomness is only in the initial configuration, and the resulting dynamics for the test particle given by integrating out the gas is non-Markovian. On the other hand, the contrast between the Ornstein–Uhlenbeck and the Boltzmann-type dynamics is that the momentum in the Boltzmann case makes discrete jumps, which are individually small in the Brownian limit, corresponding to collisions with gas particles, rather than evolving with continuous trajectories according to a Langevin equation as in the Ornstein–Uhlenbeck case. We refer to the book [26] for a discussion of these various levels of description for a Brownian particle.
Rigorous mathematical derivations of the Ornstein–Uhlenbeck process were achieved in [3, 15] from stochastic models giving an effective description of the test particle as it receives collisions from particles in a background gas. For models that begin with a full mechanical Hamiltonian model including the test particle and the gas, derivations of the Ornstein–Uhlenbeck process from the reduced dynamics of the test particle were obtained in [10, 16, 31].
In this article we consider the Brownian regime for a stochastic model in which a one-dimensional test particle makes jumps in momentum, interpreted as collisions with a background gas, and is acted upon by a force from an external, spatially periodic potential field. With the presence of the field, the momentum process is no longer Markovian since it drifts at a rate depending on the particle’s position. The momentum of the particle has two contributions: the total displacement in momentum generated by the field, which is given by a time integral of the force, and the sum of the momentum jumps from collisions. As a result of the specific scaling regime considered, which includes the period length of the potential, the force field typically makes a smaller-scale contribution to the test particle’s momentum than the fluctuations in momentum due to the jumps identified with “collisions”. The vanishing of the force contribution is an averaged effect driven by the frequent rate at which the test particle typically passes through the period cells of the potential field. The Brownian limit of the model to first order thus yields the same Ornstein–Uhlenbeck process as if the force were set to zero. Our analysis is focused on obtaining a sharp upper bound for the influence of the external potential on the momentum of the particle, and our techniques improve those applied to a related model in [9]. Ultimately, the main contributions to the total drift in momentum due to the forcing are made during “rare” time periods at which the test particle’s momentum returns to “small” values. The results of this article are extended in [7] to prove that the integral of the force, or net displacement in momentum due to the potential, converges in law to a fractional diffusion whose rate depends on the amount of time that the limiting Ornstein–Uhlenbeck process spends at zero momentum, i.e., the local time at zero.
Our model is a linear Boltzmann dynamics for a one-dimensional particle making elastic collisions with the gas and including a spatially periodic potential. The jump rate kernel is the one-dimensional case of the formula appearing in [30, Chap. 8.6], which corresponds to a hard-rod interaction between the test particle and a single reservoir particle. However, since the model is one-dimensional, it cannot be derived from a mechanical microscopic dynamics in the Boltzmann–Grad limit. We thus regard our model as phenomenological, and we argue that the resulting behavior that we find is qualitatively the same as what should be expected in an analogous three-dimensional model for a Brownian particle in a one-dimensional periodic potential.
We think of our model as corresponding to an experimental situation for a large atom or molecule in a periodic standing-wave light field and interacting with a dilute background gas. A periodic optical force on an atom can be produced experimentally by counterpropagating lasers; see, for instance, [22] or the reviews [1, 25]. A classical treatment of the atom is reasonable in the regime where the potential is effectively weak because the test particle is typically not constrained by the potential and the coherent quantum effects for the test particle will be suppressed by interactions with the gas.
1.1 Model and Results
The technical assumptions for our main results are the following:
List 1.1
 1.
The potential \(V(x)\) is nonnegative, has period \(a>0\), and is continuously differentiable.
 2.
The probability measure \(\mu \) on \({\mathbb R}^{2}\) for the initial location in phase space \((X_{0},P_{0})\) has finite moments.
The following theorems are the main results of this article. Theorem 1.3 states that as \(\lambda \searrow 0\) the momentum process \(P_{\frac{t}{\lambda }}\) rescaled by a factor \(\lambda ^{\frac{1}{2}}\) converges to an Ornstein–Uhlenbeck process. Theorem 1.2 bounds the cumulative drift from the periodic force, although only the weaker limit result (1.6) is required for the proof of Theorem 1.3. The estimates developed to prove (1.5) are extended in [7] to prove that the process \((\lambda ^{\frac{1}{4}}D_{\frac{t}{\lambda } },t\in [0,T])\) converges in law to a time-fractional diffusion as \(\lambda \searrow 0\). In particular, the exponent \(\iota =\frac{1}{4}\) is the smallest possible such that the expectation of \(\lambda ^{\iota }\big |D_{\frac{t}{\lambda }}\big |\) is uniformly bounded for small \(\lambda >0\) and \(\limsup _{\lambda \rightarrow 0} \mathbb {E}^{(\lambda )}\big [\sup _{0\le t\le T}\big | \lambda ^{\frac{1}{4}}D_{\frac{t}{\lambda }}\big | \big ]>0\).
Theorem 1.2
Theorem 1.3
In the limit \(\lambda \searrow 0\), there is convergence in law of the process \(\lambda ^{\frac{1}{2}}P_{\frac{t}{\lambda }}\) to the Ornstein–Uhlenbeck process \(\mathfrak {p}_{t}\) over the interval \(t\in [0,\,T]\). The convergence is with respect to the uniform metric on paths.
Since the position process \(X_{t}=X_{0}+\frac{\lambda }{m}\int _{0}^{t}drP_{r} \) is driven by the momentum process, it follows from Theorem 1.3 that \(\lambda ^{\frac{1}{2}}X_{\frac{t}{\lambda }}\) converges in law as \(\lambda \searrow 0\) to the process \(\mathfrak {q}_{t}\) defined in (1.4).
1.2 Further Discussion
This article concerns the dynamics of a Brownian particle that feels a force from a one-dimensional periodic potential. We focus on a regime in which the potential is “microscopic”. By “microscopic”, we mean that the potential has an amplitude \(\sup _{x,x'}\big (V(x)-V(x')\big )\) that is much smaller than the typical kinetic energy \(\frac{M}{\beta }=\lambda ^{-1}\frac{m}{\beta }\) of the test particle at equilibrium with the heat bath, and that the period \(a\) is small enough so that the typical rate \((a^{2}M\beta )^{-\frac{1}{2}}\) at which the particle passes through the period cells is much faster than the rate of energy relaxation \(\approx \lambda \gamma \) for the test particle.
For our mathematical analysis, the force \(F(x)=-\frac{dV}{dx}\big (\frac{x}{\lambda }\big )\) is taken to have a period \(a\lambda \) which scales proportionally to the mass ratio \(\lambda =\frac{m}{M}\). This is not essential to the results, and only the broad features described above are critical. The same can be said about the amplitude of the potential.
Theorem 1.3 states that to first approximation under Brownian rescaling, the momentum is an Ornstein–Uhlenbeck process with no dependence on the potential. This classical treatment of the particle allows for comparisons with quantum models. A similar model for a one-dimensional quantum particle was studied in [6], for which the potential is a periodic \(\delta \)-potential. In that case, the singular potential makes a first-order change to the dynamics characterized by spatial subdiffusion caused by quantum reflections, even though the periodic potential is “microscopic” in a similar sense as described above. See [4, 13, 19] for examples of experimental investigations of quantum reflections of atoms from potentials generated through laser light. Analogous quantum models with smoother potentials will behave more like their classical counterparts.
A three-dimensional linear Boltzmann dynamics for a particle in a gas of hard spheres and under the influence of a one-dimensional periodic potential will have the same limit result, up to the constants, as in Theorem 1.3 for the degree of freedom in the direction of the potential. Although the momentum for a single spatial degree of freedom is not Markovian in the linear Boltzmann description, it becomes “more Markovian” in the Brownian limit, as is seen in the limiting three-dimensional Ornstein–Uhlenbeck process. The rates (1.2) can then be replaced by the effective rates that emerge for a single degree of freedom in the three-dimensional case, which have the same qualitative features for our purposes.
1.2.1 Features of the Model
1.2.2 Rough Picture of the Behavior in the Brownian Regime \(\lambda \ll 1\)
The above arguments motivate that \(D_{\frac{t}{\lambda }}\) spends the greater portion of the time interval \(t\in [0,T]\) behaving as a constant, or, said differently, its larger fluctuations are typically concentrated on a small fraction of the interval \([0,T]\). Let us consider the order of the contributions to \(D_{t}\) that are likely to occur during the periods of time when \(P_{r}\) returns to the region around the origin, that is, \(|P_{r}|= O (1)\). If \(P_{r}\) is behaving roughly as a random walk with some very weak friction for \(r\in [0,\frac{T}{\lambda }]\), then we expect that \(P_{r}\) spends on the order of \(\lambda ^{-\frac{1}{2}}\) time in the vicinity of the origin. If there are central limit theorem-like cancellations between the increments \(\int _{ t_{n-1} }^{t_{n}}dr\,\frac{dV}{dx}(X_{r})\) in those time periods, then \(D_{\frac{t}{\lambda }}\) should be expected to be on the scale \(\lambda ^{-\frac{1}{4}}\).
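The occupation-time heuristic in the preceding paragraph can be checked in a toy setting. The sketch below, with arbitrary illustrative parameters, estimates the time a symmetric random walk of \(N\) steps spends near the origin and confirms the \(\sqrt{N}\) growth; over a horizon of length \(T/\lambda \) this gives an occupation time of order \(\lambda ^{-1/2}\).

```python
import numpy as np

# Occupation-time heuristic: a symmetric random walk of N steps spends on
# the order of sqrt(N) steps near the origin.  Doubling N twice should
# therefore roughly double the occupation time.  Parameters are illustrative.
rng = np.random.default_rng(1)

def mean_occupation(n_steps, n_walks=400, window=2):
    steps = rng.choice([-1, 1], size=(n_walks, n_steps))
    paths = np.cumsum(steps, axis=1)
    # average (over walks) number of steps spent with |P| <= window
    return (np.abs(paths) <= window).sum(axis=1).mean()

occ_n = mean_occupation(2500)
occ_4n = mean_occupation(10000)
ratio = occ_4n / occ_n   # should be near sqrt(4) = 2 if occupation ~ sqrt(N)
```

The ratio between the two horizons is close to \(2=\sqrt{4}\), consistent with the square-root scaling invoked above.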
1.2.3 Techniques and Strategy of the Proof
The main difficulty in showing that \(\lambda ^{\frac{1}{2}}P_{\frac{t}{\lambda }}\) converges in law to the Ornstein–Uhlenbeck process \(\mathfrak {p}_{t}\) is to show that the component \( D_{\frac{t}{\lambda }}\) of the momentum is typically \( o (\lambda ^{-\frac{1}{2}})\) for \(t\in [0,T]\). As indicated by the heuristics of Sect. 1.2.2, we should expect, in fact, that typically \(\sup _{0\le t\le T} \big |D_{\frac{t}{\lambda }}\big |\) is \( O (\lambda ^{-\frac{1}{4}})\).
One of the main ingredients in our analysis is a splitting technique that consists in introducing an artificial “atom” into the state space by embedding the original process as a component of a process with an enlarged state space. In principle, the benefit of having an extended state space with an atom is that the trajectories for the process \(S_{t}\) can be decomposed into a series of i.i.d. parts, i.e., life cycles, corresponding to time intervals \([R_{n},R_{n+1})\), where \(R_{n}\) are the return times to the atom. This would allow the integral functional \(D_{t}\) to be written as a pair of boundary terms plus a sum of i.i.d. random variables with a random number of terms. For Markov chains, such a technique for embedding an atom was developed independently in [28] and [2] and is referred to as Nummelin splitting or merely splitting. When it comes to splitting a Markov process, there are different schemes available. In [17] a sequence of split processes is constructed whose marginal processes are arbitrarily close to the original process. The construction in [21] involves a larger state space \(\Sigma \times [0,1]\times \Sigma \), although an exact copy of the original process is embedded as a marginal. The idea that splitting constructions could be used as a tool to prove certain limit theorems for Markov processes was suggested in an unpublished paper [32].
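To make the splitting idea concrete, here is a toy sketch for a Markov chain on three states satisfying a minorization condition \(P(s,s')\ge h(s)\nu (s')\). The chain, \(h\), and \(\nu \) below are invented for illustration and are unrelated to the dynamics studied in this article.

```python
import numpy as np

# Toy Nummelin splitting for a Markov chain on {0,1,2} under the
# minorization P[s, s'] >= h(s) * nu(s').  All choices are illustrative.
rng = np.random.default_rng(2)

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
h = 0.3                       # constant h(s); minorization holds: min(P) = 0.2 >= 0.3/3
nu = np.full(3, 1.0 / 3.0)    # regeneration measure

# Residual kernel: law of the next step when the nu-coin fails.
residual = (P - h * nu) / (1.0 - h)

def split_step(s):
    """One step of the split chain (S, Z); Z = 1 marks a visit to the atom."""
    z = rng.random() < h                    # Bernoulli(h(s)) coin
    if z:
        s_next = rng.choice(3, p=nu)        # regenerate from nu
    else:
        s_next = rng.choice(3, p=residual[s])
    return s_next, z

s, regenerations = 0, 0
for _ in range(5000):
    s, z = split_step(s)
    regenerations += z                      # each z = 1 starts a new life cycle
```

The point of the construction is visible in the simulation: each time the coin lands \(z=1\), the next state is drawn from \(\nu \) regardless of the current state, so the trajectory decomposes into blocks between regeneration events, mirroring the life-cycle structure used in the article.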
1.2.4 The Unit Conventions and Organization of the Article
Throughout the remainder of the article, we will remove units by setting \(\beta =a=m=1\), and picking \(\eta \) such that \( \gamma =\frac{1}{2}\); recall that \(\gamma \) is defined below (1.4). We assume List 1.1 in all theorems, lemmas, etc. unless otherwise stated.

Section 2 presents the splitting structure that allows us to decompose the dynamics into a series of life cycles as sketched in Sect. 1.2.3.

Section 3 is directed towards gaining control over the frequency and duration of life cycles in the limit \(\lambda \searrow 0\).

Section 4 demonstrates how to bound the fluctuations of the integral functional \(\int _{0}^{t}dr\frac{dV}{dx}(X_{r})\) over the time period of a single life cycle.

Sections 5 and 6 contain the proofs respectively for Theorems 1.2 and 1.3.

Various proofs are placed in Sect. 7 to avoid diverting the reader from the main points in earlier sections.
2 Nummelin Splitting
The split process that we define here is a truncated version of that in [21]. In the context of a larger probability space, the drift in momentum \(D_{t}=\int _{0}^{t}dr\,\frac{dV}{dx}(X_{r})\) may be viewed as a martingale plus a few small “boundary” terms. This allows us to apply martingale techniques. For those familiar with the terminology related to Nummelin splitting, we outline the extension of the process as follows: we introduce a resolvent chain embedded in the original process, we split the chain using Nummelin’s technique, and we extend the split chain to a non-Markovian process which contains an embedded version of the original process.
 1.
\(0=\tilde{\tau }_{0}\), \(\tilde{\tau }_{n}\le \tilde{\tau }_{n+1}\), and \(\tilde{\tau }_{n}\rightarrow \infty \) almost surely.
 2.
The chain \((\tilde{S}_{\tilde{\tau }_{n}})\) has the same law as \((\tilde{\sigma }_{n})\).
 3.
For \(t\in [\tilde{\tau }_{n},\tilde{\tau }_{n+1})\), \(Z_{t}=Z_{\tilde{\tau }_{n}}\).
 4.
Conditioned on the information known up to time \(\tilde{\tau }_{n}\), i.e., \(\tilde{S}_{t}\) for \(t\in [0,\tilde{\tau }_{n}]\) and \(\tilde{\tau }_{m}\) for \(m\le n\), and also on the value \(\tilde{S}_{\tilde{\tau }_{n+1}}\), the law for the trajectories \(S_{t}\), \(t\in [\tilde{\tau }_{n},\tilde{\tau }_{n+1}]\) (which refers also to the length \(\tilde{\tau }_{n+1}-\tilde{\tau }_{n}\)) agrees with the law for the original process conditioned on knowing the values \(S_{\tilde{\tau }_{n}}\) and \(S_{\tilde{\tau }_{n+1}}\).
Now that we have defined the split process \(\tilde{S}_{t}\), we can proceed to define the “life cycles”. Let \(R_{m}'\) be the value \(\tau _{\tilde{n}_{m}}\) for \(\tilde{n}_{m}=\min \big \{ n\in \mathbb {N}\,\big |\,\sum _{k=0}^{n}\chi (Z_{\tau _k}= 1) =m \big \} \). In other words, \(R_{m}'\) is the \(m\)th partition time to visit the atom set \(\Sigma \times \{1\}\), and we use the convention that \(R_{0}'=0\). Define \(R_{m}\), \(m\ge 1\), to be the partition time following \(R_{m}'\). The \(m\)th life cycle is the time interval \([R_{m},R_{m+1})\). Intuitively, it may at first seem more natural to define \(S_{R_{m}'}\) as the beginning of the life cycle. However, the distribution for \(R_{1}'\) will depend on the initial distribution of \(\tilde{S}_{0}\). It is better to consider the beginning of the life cycle to be the partition time \(R_{m}\) following \(R_{m}'\), which has distribution \(\tilde{\nu }\) with respect to information known up to time \(R_{m}'\). Although the conditional distribution for \(\tilde{S}_{R_{m}}\) is independent of the value \(\tilde{S}_{R_{m}'}\in \Sigma \times \{1\}\), successive life cycles \([R_{n-1},R_{n})\), \([R_{n},R_{n+1})\) are obviously not independent since, for instance, there is almost sure convergence \(\lim _{t\nearrow R_{n}} S_{t}=S_{R_{n}}\). Let \(d\mathbf {N}_{t}\) be the counting measure on \({\mathbb R}^{+}\) such that \(\int _{(t_{1},t_{2}]} d\mathbf {N}_{r}=\mathbf {N}_{t_{2}}-\mathbf {N}_{t_{1}}\) for \(0\le t_{1}<t_{2}\), i.e., the number of partition times over the interval \((t_{1},t_{2}]\). The following proposition lists some independence properties that follow closely from the construction of the split process. The measure \(\nu \) in the statement of Proposition 2.1 can be regarded as a generic normalized measure satisfying (2.1) for some \(h:\Sigma \rightarrow [0,1]\), although we will choose it to be of the specific form in Convention 2.2 later in the text.
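In the notation just introduced, the decomposition of the drift functional promised in Sect. 1.2.3 can be written schematically as follows (an informal rendering; the precise treatment of the boundary terms is carried out in the proofs):

```latex
D_{t} \;=\; \underbrace{\int_{0}^{R_{1}} dr\, \frac{dV}{dx}(X_{r})}_{\text{boundary term}}
\;+\; \sum_{n=1}^{\tilde{N}_{t}-1} \int_{R_{n}}^{R_{n+1}} dr\, \frac{dV}{dx}(X_{r})
\;+\; \underbrace{\int_{R_{\tilde{N}_{t}}}^{t} dr\, \frac{dV}{dx}(X_{r})}_{\text{boundary term}} .
```

By Proposition 2.1, the cycle increments are identically distributed and are independent whenever they are separated by at least one intervening cycle, which is the form of independence available for the moment estimates.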
Proposition 2.1
 1.
The distribution for \(\tilde{S}_{R_{n}}\) is \(\tilde{\nu }\) when conditioned on all information known up to time \(R_{n}'\): \(\tilde{\mathcal {F}}_{R_{n}'}\).
 2.
The sequence of trajectories \(\big (S_{t},\, d\mathbf {N}_{t} : \, t\in [R_{n},R_{n+1}'] \big ) \) are i.i.d. for \(n\ge 1\), and \(\big (S_{t},\, d\mathbf {N}_{t} : \, t\in [R_{n},R_{n+1}'] \big ) \) is independent of \(\big (\tilde{S}_{t},\,d\mathbf {N}_{t}: \, t\notin (R_{n}',R_{n+1}) \big ) \).
 3.
The trajectory \(\big (\tilde{S}_{t},\,d\mathbf {N}_{t}: \, t\in [R_{n},R_{n+1}] \big ) \) is independent of \(\big (\tilde{S}_{t},\, d\mathbf {N}_{t}: \, t\notin (R_{n}',R_{n+2}) \big ) \). In particular, \(\big (\tilde{S}_{t},\, d\mathbf {N}_{t}: \, t\in [R_{n},R_{n+1}] \big ) \) is independent of \(\big (\tilde{S}_{t},\, d\mathbf {N}_{t}: \, t\in [R_{m},R_{m+1}] \big ) \) for \(|n-m|\ge 2\).
Proof
Statement (1), which is given in [21, Prop. 2.13], follows immediately from the construction. Statements (2) and (3) follow from Part (1), the strong Markov property at the times \(R_{n}\), and the independence of the partition times from the past [21, Prop. 2.6]. For instance, \(\tilde{S}_{R_{n}}\) has distribution \(\tilde{\nu }\) independently of \(\big (S_{t},\, d\mathbf {N}_{t} : \, t\in [0,R_{n}'] \big ) \) by Part (1). By the strong Markov property for \(\tilde{S}_{t}\) at the time \(R_{n}\), the trajectory \(\big (S_{t} : \, t\in [R_{n},R_{n+1}'] \big ) \) is independent of \(\big (\tilde{S}_{t},\, d\mathbf {N}_{t} : \, t\in [0,R_{n}'] \big ) \) when given the state \(\tilde{S}_{R_{n}}\) and has the same law as \(\big (S_{t}: \, t\in [0 ,R_{1}'] \big ) \) when \(\tilde{S}_{0}\) has distribution \(\tilde{\nu }\). The partition times \(\tau _{m}\) over the interval \([R_{n},R_{n+1}']\), encoded by \(\int _{R_{n}}^{t} d\mathbf {N}_{r}\) for \(t\in [R_{n},R_{n+1}'] \), are independent of \(\big (S_{t},\, d\mathbf {N}_{t} : \, t\in [0,R_{n}'] \big ) \) by [21, Prop. 2.6] (and are also independent of the process \(S_{t}\) for all \(t\in {\mathbb R}^{+}\)). \(\square \)
 \(\tilde{S}_{t}=(S_{t},Z_{t})\): state of the split process at time \(t\)
 \(\tau _{m}\in {\mathbb R}^{+}\): \(m\)th partition time
 \(\tilde{\sigma }_{m}= \tilde{S}_{\tau _{m}}\): \(m\)th state of the split chain
 \((\sigma _{m} , \zeta _{m}) =\tilde{\sigma }_{m}\): \(\sigma _{m}\) and \(\zeta _{m}\) are the state and binary components, respectively, of \(\tilde{\sigma }_{m}\)
 \(\mathbf {N}_{t}\in \mathbb {N}\): number of partition times \(\tau _{m}\), \(m\ge 1\), to occur up to time \(t\)
 \(R_{m}' \in {\mathbb R}^{+}\): \(m\)th partition time visiting the set \(\Sigma \times \{1\}\)
 \(R_{m} \in {\mathbb R}^{+}\): partition time succeeding \(R_{m}'\) and the beginning of the \(m\)th life cycle
 \(\tilde{N}_{t} \in \mathbb {N}\): number of returns to the atom up to time \(t\)
 \(\tilde{n}_{m} \in \mathbb {N}\): number of partition times in the interval \((0,R_{m}]\)
 \(\mu \rightarrow \tilde{\mu }\): the splitting of a measure \(\mu \) on \(\Sigma \) as defined in (2.2)
 \(\mathcal {F}_{t}\): information up to time \(t\) for the original process \(S_{r}\) and the \(\tau _{m}\)
 \(\tilde{\mathcal {F}}_{t}\): information up to time \(t\) for the split process \(\tilde{S}_{r}\) and the \(\tau _{m}\)
 \(\tilde{\mathcal {F}}_{t}'\): information for \(\tilde{S}_{t}\) and the \(\tau _{m}\) before time \(R_{n+1}\), where \(R_{n}'\le t<R_{n+1}'\), plus knowledge of the time \(R_{n+1}\) itself
We will henceforth attach the subscript \(\lambda \) to the transition map \( \mathcal {T}\) to emphasize the dependence of the dynamics on this parameter. There is some flexibility in the choice of \(\nu \) and \(h\) in the criterion (2.1), although choosing them to be independent of \(\lambda >0\) adds a little extra constraint. By Part (1) of Proposition 2.3, we can select a pair \(\nu \), \(h\) that is independent of \(\lambda \), and where both are functions of the energy. We will use the symbol \(\nu \) for both the measure and the corresponding density.
Convention 2.2
The compact support of \(h:\Sigma \rightarrow [0,1]\) implies that the extended state space for the split dynamics is effectively \( \big (\Sigma \times \{0\}\big ) \cup \big ({{\mathrm{supp}}}(h) \times \{1\}\big ) \subset \tilde{\Sigma } \) since other states in \( \tilde{\Sigma }=\Sigma \times \{0,1\} \) will not be visited. Any supremum, minimum, etc. over \(\tilde{\Sigma }\) refers to this contracted set. Parts (2) and (3) of the proposition below are elementary consequences of the splitting structure defined above, and the proof is contained in Sect. 7.1.
Proposition 2.3
 1.There is a constant \(\mathbf {u} >0\) such that the \(h\) and \(\nu \) in Convention 2.2 satisfy \(\mathcal {T}_{\lambda }(s,ds' )\ge h(s)\nu (ds')\,\) for all \(s,s'\in \Sigma \) and \(\lambda <1\). Also, the transition measures \(\mathcal {T}_{\lambda }(s,ds')\) have densities over the domains \(\{s'\in \Sigma \,\big |\, H(s')\ne H(s)\}\), which satisfy the following bound:$$\begin{aligned} \sup _{\lambda \le 1}\mathop {{\mathrm {ess}}\,{\mathrm {sup}}}\limits _{ \begin{array}{c} H(s)>l\\ H(s)\ne H(s') \end{array}}\frac{{\mathcal {T}}_{\lambda }(s,ds^{\prime })}{ds^{\prime }}<\infty . \end{aligned}$$
 2.The invariant state of both the split chain \((\tilde{\sigma }_{n})\) and the split process \((\tilde{S}_{t})\) is the splitting of the invariant state of the original process, i.e.,$$\begin{aligned} \tilde{\Psi }_{\infty ,\lambda }(s,0)= \big (1-h(s)\big )\Psi _{\infty ,\lambda }(s)\quad \text {and}\quad \tilde{\Psi }_{\infty ,\lambda }(s,1)= h(s)\Psi _{\infty ,\lambda }(s). \end{aligned}$$Thus, the “atom” has measure \( \int _{\Sigma }ds\, h(s)\Psi _{\infty ,\lambda }(s) >0 \).
 3.If \(\mathbf {t}\) is a partition time, the distribution for \(\tilde{S}_{\mathbf {t}}\) conditioned on \(\tilde{\mathcal {F}}_{\mathbf {t}^{-}}\) is the splitting of the \(\delta \)-distribution at \(S_{\mathbf {t}}\):$$\begin{aligned} \tilde{\delta }_{ S_{\mathbf {t}} }(s,z)=\delta (s-S_{\mathbf {t}})\big (\chi (z=0)\big (1-h(S_{\mathbf {t}})\big )+\chi (z=1)h(S_{\mathbf {t}})\big ). \end{aligned}$$In particular, \(\tilde{\mathbb {P}}^{(\lambda )}\big [Z_{\mathbf {t} }= 1 \,\big | \,\tilde{\mathcal {F}}_{\mathbf {t}^{-}}\big ]=h(S_{\mathbf {t}})\). The strong Markov property at the time \(\mathbf {t}\) and stationarity give us that$$\begin{aligned} \mathcal {L}\big ((\tilde{S}_{\mathbf {t}+r})\,\big |\, \tilde{\mathcal {F}}_{\mathbf {t}^{-}}\big )= \mathcal {L}_{\tilde{\delta }_{ S_{\mathbf {t}}}}\big ((\tilde{S}_{r }) \big ),\quad r\in {\mathbb R}^{+}, \end{aligned}$$where \(\mathcal {L}_{\mu }\) refers to the law starting from the distribution \(\mu \).
Proposition 2.4
 1.For \(g\in L^{\infty }(\tilde{\Sigma })\),$$\begin{aligned} \tilde{\mathbb {E}}_{\tilde{\nu }}^{ (\lambda )} \Big [\sum _{m=0}^{\tilde{n}_{1}} g(\tilde{\sigma }_{m}) \Big ]=\tilde{\mathbb {E}}_{\tilde{\nu }}^{ (\lambda )} \Big [\sum _{m=1}^{\tilde{n}_{1}+1} g(\tilde{\sigma }_{m}) \Big ]=\frac{ \int _{\tilde{\Sigma }}d\tilde{s}\,\tilde{\Psi }^{(\lambda )}_{\infty }(\tilde{s}) g(\tilde{s})}{\int _{\Sigma }ds\,\Psi ^{(\lambda )}_{\infty }(s) h(s) }. \end{aligned}$$In particular, if \(g\in L^{\infty }(\Sigma )\) does not depend on the binary variable, then the numerator on the right side above is equal to \(\int _{\tilde{\Sigma }}d\tilde{s}\,\tilde{\Psi }^{(\lambda )}_{\infty }(\tilde{s}) g(\tilde{s})=\int _{\Sigma }ds\, \Psi _{\infty ,\lambda }(s) g(s)\).
 2.For \(g\in L^{\infty }(\Sigma )\),$$\begin{aligned} \tilde{\mathbb {E}}_{\tilde{\nu }}^{ (\lambda )} \Big [\int \limits _{0}^{R_{1}}dr g(S_{r}) \Big ]=\frac{ \int _{\Sigma }ds\Psi _{\infty ,\lambda }(s) g(s) }{\int _{\Sigma }ds\Psi _{\infty ,\lambda }(s) h(s)}. \end{aligned}$$
 3.For \(g\in L^{\infty }(\Sigma )\) with \(\Psi _{\infty ,\lambda }(g)=0\) and \(s_{1},s_{2}\in \Sigma \),$$\begin{aligned} \tilde{\mathbb {E}}_{\tilde{\delta }_{s_{1}}}^{ (\lambda )} \Big [\int \limits _{0}^{R_{1}}dr\, g(S_{r}) \Big ] - \tilde{\mathbb {E}}_{\tilde{\delta }_{s_{2}}}^{ (\lambda )} \Big [\int \limits _{0}^{R_{1}}dr\, g(S_{r}) \Big ] =\big (\mathfrak {R}^{(\lambda )}g\big )(s_{1})-\big (\mathfrak {R}^{(\lambda )}g\big )(s_{2}), \end{aligned}$$where \(\tilde{\delta }_{s}\) is the splitting of the \(\delta \)-measure at \(s\in \Sigma \).
 4.For \(g\in L^{\infty }(\Sigma )\) with \(\Psi _{\infty ,\lambda }(g)=0\),$$\begin{aligned} \tilde{\mathbb {E}}_{\tilde{\nu }}^{ (\lambda )} \Big [\int \limits _{0}^{R_{1}}dr g(S_{r}) \int \limits _{r}^{R_{2}}dr'g(S_{r'}) \Big ]= \frac{ \int _{\Sigma }ds\Psi _{\infty ,\lambda }(s) g(s)\big (\mathfrak {R}^{(\lambda )}g\big )(s)}{ \int _{\Sigma }ds\Psi _{\infty ,\lambda }(s)h(s)}. \end{aligned}$$
Proof
The following proposition lists a few martingales related to the number \(\tilde{N}_{t}\) of returns to the atom up to time \(t\in {\mathbb R}^{+}\).
Proposition 2.5
Proof
3 The Frequency of Returns to the Atom
Sections 3.1 and 3.2 effectively bound the frequency of returns to the atom from above and below, respectively.
3.1 Bounding the Number of Returns to the Atom
Recall that \(\tilde{N}_{t}\) is defined for the split process as the number of returns to the atom set up to time \(t\in {\mathbb R}^{+}\). We will now focus on bounding the expectation of \(\tilde{N}_{t}\) for \(t=\frac{T}{\lambda }\) in the limit of small \(\lambda \). By Proposition 2.5 the expectation of \( \tilde{N}_{t}\) with respect to the split statistics is equal to the expectation of \(\int _{0}^{t}dr h(S_{r})\) with respect to the original statistics. The time integral of the process \( h(S_{t})\) keeps track of the amount of time that \(S_{t}\) loiters in the low momentum region where \(h:\Sigma \rightarrow {\mathbb R}^+\) has support and the life cycles regenerate. However, it is useful to work with a process that serves the same purpose as \(\int _{0}^{t}dr h(S_{r})\) but that is easier to handle. A convenient option is the increasing part of the drift \(\mathbf {A}_{t}^{+}\) in the semimartingale decomposition for \( \mathbf {Q}_{t}:=(2H_{t})^{\frac{1}{2}} \), which increases at a decaying rate away from the low momentum region; see the discussion below and Part (2) of Proposition 3.1. Functions of the energy \(H(x,p)=\frac{1}{2}p^{2}+V(x)\) have the advantage of being invariant under the Hamiltonian evolution, which makes energy related quantities a desirable starting point for gaining some control over the typical behavior of the dynamics.
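For orientation, the semimartingale decomposition referred to here (stated precisely as (3.2) in the article) has the schematic form

```latex
\mathbf{Q}_{t} \;=\; \mathbf{Q}_{0} \;+\; \mathbf{M}_{t} \;+\; \mathbf{A}_{t}^{+} \;-\; \mathbf{A}_{t}^{-},
```

where \(\mathbf {M}_{t}\) is a martingale and \(\mathbf {A}_{t}^{\pm }\) are increasing processes. Since the rate of increase of \(\mathbf {A}_{t}^{+}\) decays quadratically in the momentum (Part (2) of Proposition 3.1), the process \(\mathbf {A}_{t}^{+}\) accumulates essentially only while the particle is in the low-momentum region, which is what makes it a workable substitute for \(\int _{0}^{t}dr\, h(S_{r})\).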
The following proposition states some basic facts for the functions \(\mathcal {A}_{\lambda }^{\pm } \), \( \mathcal {V}_{\lambda ,n} \), \(\mathcal {V}_{\lambda ,n}^{+}\), and \(\mathcal {K}_{\lambda ,n}\). The proofs of Parts 1–4 of Proposition 3.1 are placed in Sect. 7.2, and we do not include the proofs of Parts 5–7, which require similar calculus-based arguments. The function \(\mathcal {D}_\lambda :{\mathbb R}\rightarrow {\mathbb R}\) in Part (1) of Proposition 3.1 is the drift rate in momentum due to collisions: \(\mathcal {D}_{\lambda }(p)=\int _{{\mathbb R}}dp^{\prime }\,(p^{\prime }-p) {\mathcal {J}}_{\lambda }(p,p^{\prime })\).
Proposition 3.1
 1.
For all \((x,p)\in \Sigma \), \(\mathcal {A}_{\lambda }^{-}(x,p)\le \mathcal {D}_\lambda (p)\). In particular, \(\mathcal {A}_{\lambda }^{-}(x,p)\le C(\lambda p+\lambda ^{2}p^{2})\).
 2.
For all \((x,p)\in \Sigma \), \(\mathcal {A}_{\lambda }^{+}(x,p)\le \frac{C}{1+p^{2}}\).
 3.
As \(\lambda \rightarrow 0\), we have \(\int _{\Sigma }ds\mathcal {A}_{\lambda }^{+}(s)= 1+ O (\lambda ^{\frac{1}{2}})\).
 4.
For all \((x,p)\in \Sigma \), \(\mathcal {K}_{\lambda ,n}(x,p)\le C_n(1+\lambda p)\).
 5.
For all \((x,p)\in \Sigma \), \(\mathcal {V}_{\lambda ,n}(x,p)\le C(1+\lambda p)^{n+1}\).
 6.
For all \((x,p)\in \Sigma \), \(\mathcal {V}_{\lambda ,n}^{+}(x,p)\le C_n\).
 7.
For all \((x,p)\in \Sigma \), \(\mathcal {V}_{\lambda }(x,p)\ge c\).
Lemma 3.2 states that the energy process \(H_{t}:=H(X_{t},P_{t})\) typically does not go above the scale \(\lambda ^{-1}\) over the time interval \([0,\frac{T}{\lambda }]\). The proof is based on martingale analysis and the bounds in Proposition 3.1 and does not involve the Nummelin splitting structure.
Lemma 3.2
Proof
We will work with the process \(\mathbf {Q}_{t}:=(2 H_{t})^{\frac{1}{2}}\). The reader should think of \(\mathbf {Q}_{t}\) as being roughly the absolute value of the momentum \(P_{t}\). If \(P_{t}\) were a symmetric random walk making steps every unit of time, then the result would follow by Doob’s maximal inequality with \(\mathbf {Q}_{t}\) replaced by \(P_{t}\) (supposing that the tail distribution of the jumps decays sufficiently fast). The situation for our jump rates should, in principle, be even more accommodating since the jump rates (1.2) tend to drag a momentum with large absolute value down to a momentum with smaller absolute value. However, for the purposes of this lemma, it is useful to discard the term associated with these large downward jumps in the decomposition (3.2) of \(\mathbf {Q}_{t}\) because it is analytically unwieldy and it is not helpful on the time scales \(\frac{T}{\lambda }\) for \(T\) fixed and \(\lambda \ll 1\).
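The random-walk heuristic invoked above can be checked numerically. The following toy computation, with illustrative parameters only, compares \(\mathbb {E}\big [\sup _{k\le n}|M_{k}|^{2}\big ]\) for a symmetric random walk against the Doob \(L^{2}\) maximal bound \(4\,\mathbb {E}[M_{n}^{2}]=4n\).

```python
import numpy as np

# Doob's L^2 maximal inequality for a symmetric random walk M_k:
#   E[ sup_{k<=n} |M_k|^2 ] <= 4 E[ M_n^2 ] = 4n.
# Sample sizes below are illustrative.
rng = np.random.default_rng(3)
n_steps, n_paths = 1000, 2000
walks = np.cumsum(rng.choice([-1, 1], size=(n_paths, n_steps)), axis=1)

# Monte Carlo estimate of E[ sup_k |M_k|^2 ]
lhs = (np.abs(walks).max(axis=1) ** 2).mean()
# Doob bound on the right side
rhs = 4.0 * n_steps
```

The estimated left side lands strictly between \(n\) (the trivial lower bound \(\mathbb {E}[M_{n}^{2}]\)) and the Doob bound \(4n\), as expected.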
The last term in (3.4) is bounded similarly to \(\mathbb {E}^{(\lambda )}\big [\sup _{0\le t\le \frac{T}{\lambda } }\big |\mathbf {m}_{t}\big |^{n} \big ]^{\frac{1}{n}}\). \(\square \)
The following lemma bounds the expected number of returns to the atom up to time \(\frac{T}{\lambda }\) for \(\lambda \ll 1\).
Lemma 3.3
Proof
3.2 Fractional Moments for the Duration of Life Cycles
Proposition 3.4
 1.There is a \(C>0\) such that for \(\lambda <1\),$$\begin{aligned} \tilde{\mathbb {E}}^{(\lambda )}_{\tilde{\nu }}\big [\tilde{n}_{1}\big ]\le C \lambda ^{-\frac{1}{2}}\quad \text {and} \quad \tilde{\mathbb {E}}^{(\lambda )}_{\tilde{\nu }}\big [R_{1}\big ]\le C \lambda ^{-\frac{1}{2}}. \end{aligned}$$
 2.Each fractional moment \(0<\alpha <\frac{1}{2}\) is uniformly bounded for \(\lambda <1\),$$\begin{aligned} \sup _{\lambda <1}\tilde{\mathbb {E}}^{(\lambda )}_{\tilde{\nu }}\big [\tilde{n}_{1}^{\alpha }\big ]<\infty \quad \text {and} \quad \sup _{\lambda <1}\tilde{\mathbb {E}}^{(\lambda )}_{\tilde{\nu }}\big [R_{1}^{\alpha }\big ]<\infty . \end{aligned}$$
Before beginning the proof of Proposition 3.4, we must establish Lemmas 3.5 and 3.6 below. The following trivial lemma bounds the length of time up to the first partition time \(\tau _{1}\) independently of the initial state \(\tilde{s}\in \tilde{S}\). Although the time intervals between partition times are not exponentially distributed, there is still an exponential bound on their densities.
Lemma 3.5
Proof
Lemma 3.6
Proof
The positive-valued, increasing process \( \mathbf {A}_{t}^{+}\) is difficult to analyze directly, so our strategy will be to write it using the other terms in the semimartingale decomposition of \(\mathbf {Q}_{t}\) as we did before at the end of the proof of Lemma 3.3: \( \mathbf {A}_{t}^{+}= \mathbf {Q}_{t}-\mathbf {Q}_{0}-\mathbf {M}_{t}+\mathbf {A}_{t}^{-} \). In fact, we can immediately throw away the positive terms \( \mathbf {Q}_{t}\) and \(\mathbf {A}_{t}^{-}\) in this expression for \(\mathbf {A}_{t}^{+}\) since we are looking for a lower bound; see (3.11). Our analysis will rely on applications of Proposition 3.1 and Lemma 3.2 to bound the remaining martingale term.
Proof of Proposition 3.4
 (i).
\(\gamma <\lambda \),
 (ii).
\(\lambda \le \gamma \) and \(\gamma \) sufficiently small.
4 Bounding Integral Functionals Over a Life Cycle
In this section we prove Proposition 4.1, which effectively bounds the expected fluctuations for the momentum drift \(D_{t}=\int \limits _{0}^{t}dr\frac{dV}{dx}(X_{r})\) over the period of a single life cycle.
Proposition 4.1
 1.For any \(m\in \mathbb {N}\), there is a \(C>0\) such that$$\begin{aligned} \sup _{\lambda \le 1} \tilde{\mathbb {E}}_{\tilde{\nu }}^{ (\lambda )} \Big [\sup _{0\le t\le R_{1}}\Big (\int \limits _{0}^{t}dr \frac{dV}{dx}(X_{r}) \Big )^{2m} \Big ]< C. \end{aligned}$$
 2.There is a \(C>0\) such that for all \((x,p,z)\in \tilde{\Sigma }\),$$\begin{aligned} \sup _{\lambda \le 1} \tilde{\mathbb {E}}_{(x,p,z)}^{ (\lambda )} \Big [\Big |\int \limits _{0}^{R_{1}}dr \frac{dV}{dx}(X_{r}) \Big | \Big ]< C\big (1+\log (1+|p|)\big ). \end{aligned}$$
Theorem 4.2
The analysis in the proof of Proposition 4.1 also applies to Proposition 4.3, which is in fact easier, because the “velocity function” \(g(x,p)=\frac{dV}{dx}(x)\) of Proposition 4.1 does not have explicit decay for \(p\gg 1\); its decay at high momentum only emerges as a time-averaged effect, which is exposed in Lemma 4.7.
Proposition 4.3
 1.For any \(m\in \mathbb {N}\), there is a \(C>0\) such that$$\begin{aligned} \sup _{\lambda \le 1} \tilde{\mathbb {E}}_{\tilde{\nu }}^{ (\lambda )} \Big [\sup _{0\le t\le R_{1}}\Big (\int \limits _{0}^{t}dr g(S_{r}) \Big )^{2m} \Big ]< C. \end{aligned}$$
 2.There is a \(C>0\) such that for all \((x,p,z)\in \tilde{\Sigma }\),$$\begin{aligned} \sup _{\lambda \le 1} \tilde{\mathbb {E}}_{(x,p,z)}^{ (\lambda )} \Big [\Big |\int \limits _{0}^{R_{1}}dr g(S_{r}) \Big | \Big ]< C\big (1+\log (1+|p|)\big ). \end{aligned}$$
4.1 An Inequality for Summation Functionals Over a Life Cycle
Recall that \(\sigma _{n}=S_{\tau _{n}}\) denotes the resolvent chain and that \(\tilde{\delta }_{s}=\chi (z=0) (1-h(s))\delta _{s}+\chi (z=1)h(s)\delta _{s} \) is the splitting of the \(\delta \)-distribution at \(s\in \Sigma \). The following lemma states that the generalized resolvent \((U^{(\lambda )}g)(s)\) can be used to bound the expression \(\tilde{\mathbb {E}}_{\tilde{\delta }_{s}}^{(\lambda )}\big [\sum _{n=1}^{\tilde{n}_{1}} g(\sigma _{n}) \big ]\).
Lemma 4.4
Proof
The following lemma states that an additive functional of the resolvent chain \(\sum _{n}g_{\lambda }(\sigma _{n})\) has arbitrary finite moments when the summation is over a single life cycle and \(g_{\lambda }\ge 0\) has sufficient decay at large momentum. In other words, not much typically happens over a single life cycle.
Lemma 4.5
Proof
4.2 Inequalities for the Momentum Drift
The first two parts in the lemma below follow from the conservation of energy and the quadratic formula and do not depend on the potential being periodic. The third part of Lemma 4.6 is a statement about mixing on the torus. If the particle begins with a high momentum \(P_{0}\gg 1\) and is stopped at a random exponential time \(\tau \), then the distribution on the torus \(\mathbb {T}=[0,1)\) at the stopping time will be roughly uniform, even in the presence of the bounded periodic potential \(V(x)\).
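Before stating the lemma, here is a toy numerical check of the mixing mechanism in its third part, in the simplified free case \(V\equiv 0\) (a simplification made only for this sketch): a particle \(X_{\tau }=(X_{0}+P_{0}\tau ) \bmod 1\) stopped at an independent exponential time \(\tau \) is nearly uniformly distributed on \(\mathbb {T}\) when \(P_{0}\) is large, with a bias of order \(P_{0}^{-1}\).

```python
import math
import random

def torus_bias(p0, x0=0.3, rate=1.0, n=200000, seed=1):
    """Monte Carlo estimate of E[F(X_tau)] for the free motion
    X_tau = (x0 + p0*tau) mod 1 with tau ~ Exp(rate), where
    F(x) = cos(2*pi*x) has torus average 0; the bias is O(1/p0)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        tau = rng.expovariate(rate)
        acc += math.cos(2.0 * math.pi * ((x0 + p0 * tau) % 1.0))
    return acc / n

slow, fast = abs(torus_bias(5.0)), abs(torus_bias(500.0))
# the bias shrinks roughly like 1/p0 as the momentum grows
```

For \(F(x)=\cos (2\pi x)\) the bias can even be computed in closed form here, \(\mathbb {E}[F(X_{\tau })]=\mathrm {Re}\big [e^{2\pi i x_{0}}\,\mathbf {r}/(\mathbf {r}-2\pi i P_{0})\big ]\), which is consistent with the \(\mathbf {r}\Vert F\Vert _{\infty }P_{0}^{-1}\) bound of the lemma.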
Lemma 4.6
 1.
\(\sup _{t\in {\mathbb R}^{+}} \big |\int \limits _{0}^{t}dr\frac{dV}{dx}(X_{r}) \big |\le 2\sup _{x}V(x)\, P_{0}^{-1} \), and
 2.
\(\Big | \int \limits _{0}^{t}dr\frac{dV}{dx}(X_{r})-\frac{ V(X_{t})-V(X_{0})}{ P_{0}}\Big |\le 2t\sup _{x}\big |\frac{dV}{dx}(x)\big | \sup _{x}V(x)\, P_{0}^{-2} \).
 3.Suppose further that \(V(x)\) has period one. If \(\tau \) is exponentially distributed with mean \(\mathbf {r}^{-1}\) and \(F:\mathbb {T}\rightarrow {\mathbb R}\) is a bounded function on the torus, then$$\begin{aligned} \Big |\mathbb {E}_{(X_{0},P_{0})}\big [F(X_{\tau })\big ]-\int \limits _{\mathbb {T}}dx F(x)\Big |\le \mathbf {r}\Vert F\Vert _{\infty }P_{0}^{-1}+ O (P_{0}^{-2}). \end{aligned}$$
Proof
Lemma 4.7
 1.
\(\big | \mathbf {C}^{(\lambda )}_{n}(x,p) \big | \le C\max \big (\frac{1}{1+p^{2n}}, \lambda ^{2n} \big ) \),
 2.
\(\big | \mathbf {C}^{(\lambda )}_{0}(x,p)\big |\le C\max \big (\frac{1}{1+p^{2}}, \lambda \big )\).
Proof
 (i).
arbitrary \(p\),
 (ii).
\(1\ll p\le \lambda ^{-1}\),
 (iii).
\( \lambda ^{-1}<p\).
 (i).For arbitrary \(s\in \Sigma \), we have$$\begin{aligned} \mathbb {E}^{(\lambda )}_{s}\Big [\Big |\int \limits _{0}^{\tau _{1}}dr \frac{dV}{dx}(X_{r})\Big |^{v} \Big ] \le \sup _{x\in \mathbb {T}}\big |\frac{dV}{dx}(x)\big |^{v} \mathbb {E}\big [\tau _{1}^{v}\big ] \le v!\,\sup _{x}\big |\frac{dV}{dx}(x)\big |^{v}, \end{aligned}$$(4.19)since \(\tau _{1}\) is a mean one exponential.
 (ii).Next we consider \(s=(x,p)\) for the regime \(1\ll p<\lambda ^{-1}\). As long as the momentum stays below \(2\lambda ^{-1}\) over the time interval \([0,\tau _{1}]\), the collisions will occur with Poisson rate smaller than \(\mathcal {E}_{\lambda }(2\lambda ^{-1})\), which is uniformly finite by (4.17). Thus, in that case, the expected number of collisions up to time \(\tau _{1}\) is uniformly finite for \(\lambda <1\), and as a consequence the momentum of the particle will not fluctuate significantly from its initial value \(p\). To show that the momentum typically stays well below \(2\lambda ^{-1}\), let us bound the probability of the event that \(P_{r}\notin \big [\frac{1}{2}p,\frac{3}{2}p\big ]\) for some \(r\le \tau _{1}\):$$\begin{aligned}&\mathbb {P}_{s}^{(\lambda )}\Big [P_{r}\notin \big [\frac{1}{2}p,\frac{3}{2}p\big ] \text { for some } r\le \tau _{1} \Big ]\nonumber \\&\quad \le \left( \frac{2}{p}\right) ^{w}\mathbb {E}^{(\lambda )}_{s}\Big [\sup _{0\le r\le \varsigma \wedge \tau _{1}}\big | P_{r}-p\big |^{w} \Big ] \nonumber \\&\quad \le \left( \frac{4}{p}\right) ^{w}\sup _{\begin{array}{c} (x,p)\\ p\le \lambda ^{-1} \end{array}}\mathbb {E}^{(\lambda )}_{(x,p)}\Big [\sup _{0\le r\le \varsigma \wedge \tau _{1}}\big | J_{r}\big |^{w} \Big ] + w!\,\left( \frac{4}{p}\right) ^{w}\sup _{x}\left| \frac{dV}{dx}(x)\right| ^{w}, \end{aligned}$$(4.20)where \(w\ge 1\), \(\varsigma \) is the first jump time such that \(P_{r}\) leaves \( \big [\frac{1}{2}p,\frac{3}{2}p\big ]\), and \(J_{r}=P_{r}-p+\int _{0}^{r}du\, \frac{dV}{dx}(X_{u})\) is the sum of the momentum jumps up to time \(r\). The first inequality in (4.20) is Chebyshev’s, and for the second inequality, we have used \((x+y)^{w}\le 2^{w}(x^{w}+y^{w})\) and (4.19) to bound the contribution of the potential drift. The probability densities of individual momentum jumps conditioned to jump from momentum \(\hat{p}\), \(d_{\hat{p}}(p')=\frac{ \mathcal {J}_{\lambda }(\hat{p},p')}{ \int _{{\mathbb R}}dp''\, \mathcal {J}_{\lambda }(\hat{p},p'')} \), have uniformly controlled Gaussian tails for \(\hat{p}\le 2\lambda ^{-1} \) and occur with Poisson rate \(\mathcal {E}_{\lambda }(P_{r})\le \mathcal {E}_{\lambda }(2\lambda ^{-1})\) for \(r\le \varsigma \). Thus the expectation of \(\sup _{0\le r\le \varsigma \wedge \tau _{1}}\big | J_{r}\big |^{w}\) above is uniformly finite. Since \(w\ge 1\) is arbitrary, it follows that the probability on the first line of (4.20) decays superpolynomially quickly for \(p\gg 1\). Now we bound \(\mathbb {E}_{s}^{(\lambda )}\big [\big |\int _{0}^{\tau _{1}}dr\,\frac{dV}{dx}(X_{r}) \big |^{v} \big ]\). 
Define the times \(t_{n}^{\prime }=t_{n}\wedge \tau _{1}\wedge \varsigma \), where \(t_{n}\) is the time of the \(n\)th momentum jump. By writing$$\begin{aligned} \int \limits _{0}^{\tau _{1}}dr\,\frac{dV}{dx}(X_{r}) = \sum _{n=0}^{\infty } \int \limits _{t_{n}^{\prime }}^{t_{n+1}^{\prime }}dr\frac{dV}{dx}(X_{r})+\chi (\varsigma \le \tau _{1}) \int \limits _{\varsigma }^{\tau _{1}}dr\,\frac{dV}{dx}(X_{r}), \end{aligned}$$we can apply the triangle inequality to get$$\begin{aligned}&\mathbb {E}_{s^{\prime }}^{(\lambda )}\Big [\Big |\int \limits _{0}^{\tau _{1}}dr \frac{dV}{dx}(X_{r}) \Big |^{v} \Big ]^{\frac{1}{v}} \nonumber \\&\le \mathbb {E}_{s^{\prime }}^{(\lambda )}\Big [\Big (\sum _{n=0}^{\infty } \Big |\int \limits _{t_{n}^{\prime }}^{t_{n+1}^{\prime }}dr \frac{dV}{dx}(X_{r}) \Big |\Big )^{v}\Big ]^{\frac{1}{v}}+\mathbb {E}_{s^{\prime }}^{(\lambda )}\Big [\chi (\varsigma \le \tau _{1}) \Big | \int \limits _{\varsigma }^{\tau _{1}}dr \frac{dV}{dx}(X_{r}) \Big |^{v} \Big ]^{\frac{1}{v}} \nonumber \\&\le \mathbb {E}_{s^{\prime }}^{(\lambda )}\Big [\Big (\sum _{n=0}^{\infty } \Big |\int \limits _{t_{n}^{\prime }}^{t_{n+1}^{\prime }}dr\frac{dV}{dx}(X_{r}) \Big |\Big )^{v}\Big ]^{\frac{1}{v}}+\Big ((2v)!\,\sup _{x\in \mathbb {T}}\big |\frac{dV}{dx}(x)\big |^{2v} \Big )^{\frac{1}{2v}}\mathbb {P}_{s^{\prime }}^{(\lambda )}\big [\varsigma \le \tau _{1} \big ]^{\frac{1}{2v}},\nonumber \\ \end{aligned}$$(4.21)where the second inequality follows by Cauchy–Schwarz and because \(\tau _{1}\) is a mean one exponential. The probability \(\mathbb {P}_{s^{\prime }}^{(\lambda )}\big [\varsigma \le \tau _{1} \big ]\) decays faster than any polynomial by (4.20). 
The first term on the right side of (4.21) has the bound$$\begin{aligned} \mathbb {E}_{s^{\prime }}^{(\lambda )}\Big [\Big (\sum _{n=0}^{\infty } \Big |\int \limits _{t_{n}^{\prime }}^{t_{n+1}^{\prime }}dr \frac{dV}{dx}(X_{r}) \Big |\Big )^{v}\Big ]\le \Big (\frac{4\sup _{x\in \mathbb {T}}V(x)}{p}\Big )^{v}\mathbb {E}_{s^{\prime }}^{(\lambda )}\big [\mathcal {N}_{\varsigma }^{v} \big ], \end{aligned}$$(4.22)where \(\mathcal {N}_{t}\) is the number of collisions up to time \(t\). The above inequality uses the definition of the \(t_{n}^{\prime }\)’s to conclude that for each \(n\), either \(t_{n}^{\prime }=t_{n+1}^{\prime }\) so that \(\int _{t_{n}^{\prime }}^{t_{n+1}^{\prime }}dr\frac{dV}{dx}(X_{r})=0 \), or \( P_{t_{n}^{\prime }} \ge \frac{1}{2}p\) so that we can apply Part (1) of Lemma 4.6 to bound \(\big |\int _{t_{n}^{\prime }}^{t_{n+1}^{\prime }}dr\frac{dV}{dx}(X_{r})\big |\). The counting process \(\mathcal {N}_{t}\) has Poisson rate \(\mathcal {E}_{\lambda }(P_{t})\) at time \(t\). 
For times \(t<\varsigma \), we have that \(\mathcal {E}_{\lambda }(P_{t})\le \sup _{\lambda <1} \mathcal {E}_{\lambda }(2\lambda ^{-1}):=\mathbf {r}\) and$$\begin{aligned} \mathbb {E}_{s^{\prime }}^{(\lambda )}\big [\mathcal {N}_{\varsigma }^{v} \big ]\le \mathbb {E}\big [(N^{\prime }_{\tau })^{v} \big ]=\frac{1}{1+\mathbf {r}}\sum _{n=0}^{\infty }n^{v}\left( \frac{\mathbf {r}}{1+\mathbf {r}} \right) ^{n}<\infty , \end{aligned}$$where \(N_{t}^{\prime }\) is a Poisson process with rate \(\mathbf {r}\) and the random variable \(\tau \) is mean one, exponentially distributed, and independent of \(N_{t}^{\prime }\). The first inequality can be seen by a construction \(N^{\prime }_{\tau }\approx \mathcal {N}_{\varsigma }+\mathcal {N}_{\tau }^{\prime }\) for a jump process \(\mathcal {N}^{\prime }_{r}\) with Poisson jump rate \( \mathbf {r}-\mathcal {E}_{\lambda }(P_{t})\) for \(t\le \varsigma \) and rate \(\mathbf {r}\) for \(t>\varsigma \) whose jumps are decided independently of the jumps of \(\mathcal {N}_{r}\).
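The final identity above rests on the elementary fact that the number of arrivals of a rate-\(\mathbf {r}\) Poisson process before an independent mean one exponential time is geometrically distributed with parameter \(\frac{\mathbf {r}}{1+\mathbf {r}}\), so in particular its mean is \(\mathbf {r}\) and all of its moments are finite. A small simulation (illustrative only, with a numeric rate standing in for \(\mathbf {r}\)) confirms this:

```python
import random

def mean_arrivals_before_exp(rate, n=100000, seed=3):
    """Average of N'_tau over n samples, where N' is a rate-`rate`
    Poisson process and tau ~ Exp(1) is independent; N'_tau is
    geometric with P[N'_tau = k] = (1/(1+rate)) * (rate/(1+rate))**k,
    hence E[N'_tau] = rate."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        tau = rng.expovariate(1.0)
        t, count = 0.0, 0
        while True:
            t += rng.expovariate(rate)
            if t > tau:
                break
            count += 1
        total += count
    return total / n

m = mean_arrivals_before_exp(2.0)  # close to 2.0
```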
 (iii).For the regime \(p>\lambda ^{-1}\), our analysis must treat the possibility that many collisions occur over the time interval \([\tau _{1},\tau _{2}]\) (specifically, when \(p\gg \lambda ^{-1}\)). Let \(\vartheta =\tau _{1}\wedge \vartheta ^{\prime } \) where \(\vartheta ^{\prime }\) is the hitting time at which the absolute value of the momentum \(P_{t}\) jumps below \(\lambda ^{-1}\). The hitting time \(\vartheta ^{\prime } \) is finite, and, in fact, has an expectation that is bounded by a multiple of \(\lambda ^{-1}\) independently of the initial momentum \(p>\lambda ^{-1}\). However, the details for these points do not matter for this proof. Let \(\varphi _{s}\) be the distribution on \(\mathbb {T}\times [-\lambda ^{-1},\lambda ^{-1}]\) for \((X_{\vartheta ^{\prime }},P_{\vartheta ^{\prime }})\) starting from \(s\in \Sigma \). By the triangle inequality and the strong Markov property$$\begin{aligned} \mathbb {E}_{s}^{(\lambda )}\Big [\Big |\int \limits _{0}^{\tau _{1}}dr \frac{dV}{dx}(X_{r})\Big |^{v}\Big ]^{\frac{1}{v}}&\le \mathbb {E}_{s}^{(\lambda )}\Big [\Big |\int \limits _{0}^{\vartheta }dr\frac{dV}{dx}(X_{r})\Big |^{v}\Big ]^{\frac{1}{v}}\nonumber \\&\quad + \mathbb {E}_{\varphi _{s}}^{(\lambda )}\Big [\Big |\int \limits _{0}^{\tau _{1}}dr \frac{dV}{dx}(X_{r})\Big |^{v}\Big ]^{\frac{1}{v}}. \end{aligned}$$(4.23)
Part (2): We now seek to take full advantage of the averaging that results from integrating \( \frac{dV}{dx}(X_{r}) \) between two random times \(r\in [\tau _{1},\tau _{2}]\). If only the upper limit of integration were random, such as for the expression \( \big | \mathbb {E}_{s}^{(\lambda )}\big [\int _{0}^{\tau _{1}}dr\frac{dV}{dx}(X_{r}) \big ] \big |\), then we would only have an upper bound proportional to \(\max \big (\frac{1}{1+|p|}, \lambda \big )\). The bound for \( \mathbf {C}^{(\lambda )}_{0}(s) \) in the region \(|p|\ge \lambda ^{-1}\) follows from Part (1), so we will focus our analysis on the regime \(1 \ll |p| < \lambda ^{-1}\). We will proceed by approximating the quantity \( \mathbf {C}^{(\lambda )}_{0}(s) \) by expressions that are progressively easier to analyze.
We can reconstruct the counting process \(\mathcal {N}_{t}\) for the number of collisions up to time \(\varsigma \) as follows. Let \(N^{\prime }\) be a Poisson clock with rate \(\mathbf {r}=\mathcal {E}_{\lambda }(2\lambda ^{-1})\) as in Part (1). The Poisson rate of jumps \(\mathcal {E}_{\lambda }(P_{t})\) for the process \(\mathcal {N}_{t}\) satisfies \(\mathcal {E}_{\lambda }(P_{t})\le \mathbf {r}\) for times \(t\le \varsigma \). At each jump time \(r_{n}\le \varsigma \) for the Poisson process \(N^{\prime }\), we then flip an independent coin with weight \(\mathbf {r}^{-1} \mathcal {E}_{\lambda }(P_{r_{n}})\) to determine if a jump for \(\mathcal {N}_{t}\) (i.e. a collision) occurred at time \(r_{n}\). This construction recovers the statistics for \(\mathcal {N}_{t}\). We then define \(r_{n}^{\prime }=r_{n}\wedge \tau _{1}\) for \(n\le N^{\prime }_{\varsigma \wedge \tau _{1}}\). Conditioned on the past \(\mathcal {F}_{r^{\prime }_{n}}\) and the event \(\tau _{1}>r_{n}^{\prime }\), the increment \(r_{n+1}^{\prime }-r_{n}^{\prime }\) is exponentially distributed with mean \((1+\mathbf {r})^{-1}\). When conditioned on the event \(\tau _{1}=r_{n+1}^{\prime }\), the increment \(r_{n+1}^{\prime }-r_{n}^{\prime }\) is exponential with mean \(1\).
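The thinning construction just described can be sketched in code (illustrative only; the constant rate function below is an arbitrary stand-in for \(\mathcal {E}_{\lambda }(P_{t})\), which would require simulating the momentum process):

```python
import random

def thinned_jump_times(rate_fn, r_max, t_end, rng):
    """Ring a dominating Poisson clock of rate r_max and accept each
    ring at time t with probability rate_fn(t) / r_max (a coin flip).
    Provided rate_fn <= r_max, the accepted times form a Poisson
    process with time-dependent intensity rate_fn."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(r_max)
        if t > t_end:
            return times
        if rng.random() < rate_fn(t) / r_max:
            times.append(t)

rng = random.Random(11)
# constant intensity 0.5 dominated by r_max = 2.0: about 500 accepted
# jumps are expected on the interval [0, 1000]
jumps = thinned_jump_times(lambda t: 0.5, 2.0, 1000.0, rng)
```

The rejected rings play the role of the complementary process with rate \(\mathbf {r}-\mathcal {E}_{\lambda }(P_{t})\) appearing in the proof above.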
4.3 Proof of Proposition 4.1
Proof of Proposition 4.1
5 Bounds for the Cumulative Potential Forcing
In this section we prove Theorem 1.2.
5.1 The Martingale Approximating the Potential Drift Process
Lemma 5.1
Proof
Define \(\upsilon _{\lambda }:=\int _{\Sigma }d\nu (s) \check{\upsilon }_{\lambda }(s)\).
Lemma 5.2
The value \(\upsilon _{\lambda }\in {\mathbb R}^{+}\) is uniformly bounded for \(\lambda <1\), and \(\upsilon _{\lambda }\) depends continuously on the parameter \(\lambda \).
Proof
The following lemma relates the predictable quadratic variation of \( \tilde{M}_{t}\) to the counting process \(\tilde{N}_{t}\) and is somewhat stronger than we require.
Lemma 5.3
Proof
5.2 Proof of Theorem 1.2
Proof of Theorem 1.2
6 Convergence to the Ornstein–Uhlenbeck Process
In this section we prove Theorem 1.3. As a preliminary, Sect. 6.1 characterizes the martingale and drift components in the semimartingale decomposition for the jump process \(J_{t}\) defined in (1.3). The main ingredient for the study of \(J_{t}\) is the bound in Lemma 3.2 on the typical energy of the particle obtained over the time interval \([0,\frac{T}{\lambda }]\) for small \(\lambda \).
6.1 Limiting Behavior for the Jump Process
Proposition 6.1
 1.
\(\frac{1}{8(\lambda +1)} \le \mathcal {E}_\lambda (p)\le \frac{1}{8(\lambda +1)}\left( 1+C\lambda |p|\right) \) and \(\lambda |p| \le C\mathcal {E}_\lambda (p),\)
 2.
\( \big |\mathcal {D}_{\lambda }(p) +\frac{\lambda p}{2}\big |\le C\lambda ^2 (|p|+p^{2})\),
 3.
\( \left| \mathcal {Q}_{\lambda }(p)-1 \right| \le C\lambda +C\lambda |p|+C\lambda ^{3}|p|^3\),
 4.
\( \Pi _{\lambda ,2m}(p)\le C_m(1+\lambda p)^{2m+1} \).
Lemma 6.2
Proof
The following lemma gives a central limit theorem for the martingale \(M_{t}^{(\lambda )}=\lambda ^{\frac{1}{2}}M_{\frac{t}{\lambda }}\).
Lemma 6.3
As \(\lambda \searrow 0\) the martingale \(M^{(\lambda )}_{t}=\lambda ^{\frac{1}{2}} M_{\frac{t}{\lambda }}\) converges in law with respect to the uniform metric to a standard Brownian motion \(\mathbf {B}\) over the interval \(t\in [0,T]\).
Proof
 (i).
For each \(t\in {\mathbb R}^{+}\), the predictable quadratic variation process \(\langle M^{(\lambda )}\rangle _{t}\) converges in probability to \(t\) as \(\lambda \searrow 0\).
 (ii).For any \(\epsilon >0\), as \(\lambda \rightarrow 0\)$$\begin{aligned} \mathbb {P}^{(\lambda )}\Big [\sup _{0\le r\le \frac{T}{\lambda }}\big (M_{r}-M_{r^{-}}\big )^{2}> \frac{\epsilon }{\lambda } \Big ]\longrightarrow 0. \end{aligned}$$
 (i).We prove a somewhat stronger statement. Note that$$\begin{aligned} \langle M^{(\lambda )}\rangle _{t}-t= \lambda \int \limits _{0}^{\frac{t}{\lambda }}dr\big (\mathcal {Q}_{\lambda }(P_{r})-1\big ). \end{aligned}$$For the expectation of the supremum of the difference between \(\langle M^{(\lambda )}\rangle _{t}\) and \(t\) over the interval \([0,T]\), we have$$\begin{aligned} \mathbb {E}^{(\lambda )}\Big [\sup _{t\in [0,T]} \Big | \langle M^{(\lambda )}\rangle _{t}-t \Big |\Big ]&\le \lambda \mathbb {E}^{(\lambda )}\Big [\int \limits _{0}^{\frac{T}{\lambda }}dr\big |\mathcal {Q}_{\lambda }(P_{r})-1\big |\Big ]\\&\le CT\lambda ^{\frac{1}{2}}\Big (\lambda ^{\frac{1}{2}}+ \mathbb {E}^{(\lambda )}\Big [\sup _{0\le r\le \frac{T}{\lambda }} \lambda ^{\frac{1}{2}} |P_{r}|\Big ]\\&\quad +\,\lambda \mathbb {E}^{(\lambda )}\Big [\sup _{0\le r\le \frac{T}{\lambda }} \lambda ^{\frac{3}{2}} |P_{r}|^{3}\Big ]\Big )\\&= O (\lambda ^{\frac{1}{2}}), \end{aligned}$$where the second inequality holds for some \(C>0\) by Part (3) of Proposition 6.1. The expectations in the second line above are bounded uniformly for \(\lambda <1\) by Lemma 3.2 since \(|P_{r}|\le 2^{\frac{1}{2}}H_{r}^{\frac{1}{2}}\). The above implies that \(\langle M^{(\lambda )}\rangle _{t}\) converges in probability to \(t\) as \(\lambda \searrow 0\).
 (ii).Recall that \({\mathcal {N}}_{t}\) is the number of collisions over the time interval \([0,t]\) and that \(t_{1},\ldots , t_{{\mathcal {N}}_{t}}\) are the corresponding jump times. The probability has the following bounds:$$\begin{aligned}&\mathbb {P}^{(\lambda )}\Big [\sup _{0\le r\le \frac{T}{\lambda }}\big (M_{r}-M_{r^{-}}\big )^{2}> \frac{\epsilon }{\lambda } \Big ]\nonumber \\&\quad \le \frac{\lambda }{\epsilon }\mathbb {E}^{(\lambda )}\Big [\Big (\sum _{n=1}^{{\mathcal {N}}_{\frac{T}{\lambda }}} \big (M_{t_{n}}-M_{t_{n}^{-}}\big )^{4}\Big )^{\frac{1}{2}}\Big ] \le \frac{\lambda }{\epsilon }\mathbb {E}^{(\lambda )}\Big [\sum _{n=1}^{{\mathcal {N}}_{\frac{T}{\lambda }}} \big (M_{t_{n}}-M_{t_{n}^{-}}\big )^{4}\Big ]^{\frac{1}{2}} \nonumber \\&\quad = \frac{\lambda }{\epsilon }\mathbb {E}^{(\lambda )}\Big [\sum _{n=1}^{{\mathcal {N}}_{\frac{T}{\lambda }}} \mathbb {E}^{(\lambda )}\Big [\big (M_{r}-M_{r^{-}}\big )^{4}\,\big |\,P_{r^{-}},\,{\mathcal {N}}_{r}={\mathcal {N}}_{r^{-}}+1 \Big ]\Big |_{r=t_{n}}\Big ]^{\frac{1}{2}}\nonumber \\&\quad = \frac{\lambda }{\epsilon }\mathbb {E}^{(\lambda )}\Big [\sum _{n=1}^{{\mathcal {N}}_{\frac{T}{\lambda }}} \frac{\Pi _{\lambda ,4}(P_{t_{n}^{-}})}{\mathcal {E}_{\lambda }(P_{t_{n}^{-}})} \Big ]^{\frac{1}{2}}= \frac{\lambda }{\epsilon }\mathbb {E}^{(\lambda )}\Big [\int \limits _{0}^{\frac{T}{\lambda }}dr \Pi _{\lambda , 4}(P_{r}) \Big ]^{\frac{1}{2}}. \end{aligned}$$(6.3)The second inequality is Jensen’s, and the first inequality is Chebyshev’s followed by the elementary relation$$\begin{aligned} \sup _{1\le m\le n} a_{m}\le \Big (\sum _{m=1}^{n} a_{m}^{2}\Big )^{\frac{1}{2}}, \qquad a_{m}\ge 0. \end{aligned}$$The first equality in (6.3) holds since the process$$\begin{aligned} \sum _{n=1}^{{\mathcal {N}}_{t}}\Big (\big (M_{t_{n}}-M_{t_{n}^{-}}\big )^{4}- \mathbb {E}^{(\lambda )}\Big [\big (M_{r}-M_{r^{-}}\big )^{4}\,\big |\,P_{r^{-}},\,{\mathcal {N}}_{r}={\mathcal {N}}_{r^{-}}+1 \Big ]\Big |_{r=t_{n}} \Big ) \end{aligned}$$is a mean zero martingale. 
The second equality uses that a jump for \(M_{r}\) is a jump for \(P_{r}\) (since they differ by a continuous process) and that the conditional expectation for \(\big (P_{r}-P_{r^{-}}\big )^{4}\) given the value \(P_{r^{-}}\) and the information that \(r\in {\mathbb R}^{+}\) is a jump time is given by the ratio of \(\Pi _{\lambda ,4}(P_{r^{-}})\) to \(\mathcal {E}_{\lambda }(P_{r^{-}})\):$$\begin{aligned}&\mathbb {E}^{(\lambda )}\Big [\big (M_{r}-M_{r^{-}}\big )^{4}\,\big |\,P_{r^{-}},\,{\mathcal {N}}_{r}={\mathcal {N}}_{r^{-}}+1 \Big ]\\&\quad = \mathbb {E}^{(\lambda )}\Big [\big (P_{r}-P_{r^{-}}\big )^{4}\,\big |\,P_{r^{-}},\,{\mathcal {N}}_{r}={\mathcal {N}}_{r^{-}}+1 \Big ] =\frac{\Pi _{\lambda ,4}(P_{r^{-}})}{\mathcal {E}_{\lambda }(P_{r^{-}})}. \end{aligned}$$The last equality follows because the jump times \(t_{n}\) occur with Poisson rate \(\mathcal {E}_{\lambda }(P_{r})\).
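As a toy illustration of the martingale central limit mechanism in Lemma 6.3, consider the compensated Poisson martingale \(M_{t}=N_{t}-t\) for a unit rate Poisson process \(N\) (a drastic simplification of the model’s jump martingale, used here only to exhibit the scaling): the rescaled variable \(\lambda ^{\frac{1}{2}}M_{\frac{t}{\lambda }}\) has variance approximately \(t\), consistent with the Brownian limit.

```python
import random

def rescaled_variance(lam, t, n=4000, seed=5):
    """Sample second moment of lam**0.5 * (N_{t/lam} - t/lam) over n
    independent runs, where N is a unit-rate Poisson process; as
    lam -> 0 this approaches t, the Brownian variance."""
    rng = random.Random(seed)
    horizon = t / lam
    acc = 0.0
    for _ in range(n):
        s, count = 0.0, 0
        while True:
            s += rng.expovariate(1.0)
            if s > horizon:
                break
            count += 1
        acc += lam * (count - horizon) ** 2
    return acc / n

v = rescaled_variance(lam=0.01, t=1.0)  # close to 1.0
```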
6.2 Proof of Theorem 1.3
Proof of Theorem 1.3
7 Miscellaneous Proofs
7.1 Proofs for Sect. 2
Proof of Proposition 2.3
Part (2): This follows easily from the construction of the split process.
Part (3): It is almost surely true that a collision will not occur at the partition time \(\mathbf {t}\). Since the trajectory \(S_{t}\) is continuous between collision times, \(\lim _{t\nearrow \mathbf {t}}S_{t}=S_{\mathbf {t}}\). Thus, information about the state \(S_{\mathbf {t}}\) will be contained in the \(\sigma \)-algebra \(\tilde{\mathcal {F}}_{\mathbf {t}^{-}}\). We claim that the probability of the binary component \( Z_{\mathbf {t}}\) being \(1\) given \(\tilde{\mathcal {F}}_{\mathbf {t}^{-}}\) is \(h(S_{\mathbf {t}})\), which would imply that \(\tilde{S}_{\mathbf {t}}\) has distribution \(\tilde{\delta }_{S_{\mathbf {t}}}\) given \(\tilde{\mathcal {F}}_{\mathbf {t}^{-}}\).
7.2 Proofs for Propositions 3.1 and 6.1
We will begin with the proof of Proposition 6.1 since the proof of Part (1) of Proposition 3.1 depends on it.
Proof of Proposition 6.1
To ease notation, we denote \(g(q)=(2\pi )^{-\frac{1}{2}}e^{-\frac{q^2}{2}}\). Since \(\mathcal {E}_\lambda (p) \), \(\mathcal {D}_\lambda (p) \), and \(\Pi _\lambda ^{(2m)}(p)\) are even functions, we will assume without loss of generality that \(p>0\).
Part (4): Finally, reasoning as above, it is easy to produce an upper bound for \(\Pi _\lambda ^{(2m)}(p)\) which is a polynomial of degree \(2m+1\) in \(\lambda p\). \(\square \)
Proof of Proposition 3.1
Our first observation is that \(\mathcal {A}_\lambda (x,p)\), \(\mathcal {K}_{\lambda ,n}(x,p)\), and \(\mathcal {V}_\lambda (x,p)\) are symmetric in \(p\in {\mathbb R}\). Hence we can assume without loss of generality that \(p>0\).
7.3 Proofs for Sect. 5
Proof of Lemma 5.3
Footnotes
 1.
These velocities refer to the original length scale, before stretching by a factor of \(\lambda ^{-1}\).
Notes
Acknowledgments
We are grateful to Professor Höpfner for sending a copy of Touati’s unpublished paper and giving helpful comments. We also thank an anonymous referee for offering many useful suggestions towards improving the presentation of this article. This work is supported by the European Research Council Grant No. 227772 and NSF Grant DMS08446325.
References
 1.Adams, C.S., Sigel, M., Mlynek, J.: Atom optics. Phys. Rep. 240, 143–210 (1994)
 2.Athreya, K.B., Ney, P.: A new approach to the limit theory of recurrent Markov chains. Trans. Am. Math. Soc. 245, 493–501 (1978)
 3.Brunnschweiler, A.: A connection between the Boltzmann equation and the Ornstein–Uhlenbeck process. Arch. Ration. Mech. Anal. 76, 247–263 (1981)
 4.Birkl, G., Gatzke, M., Deutsch, I.H., Rolston, S.L., Phillips, W.D.: Bragg scattering from atoms in optical lattices. Phys. Rev. Lett. 75, 2823–2827 (1998)
 5.Chung, K.L.: A Course in Probability Theory. Academic Press, New York (1976)
 6.Clark, J.T.: Suppressed dispersion for a randomly kicked quantum particle in a Dirac comb. J. Stat. Phys. 150, 940–1015 (2013)
 7.Clark, J.T.: A limit theorem to a time-fractional diffusion. Lat. Am. J. Probab. Math. Stat. 10(1), 117–156 (2013)
 8.Clark, J., Dubois, L.: Bounds for the state-modulated resolvent of a linear Boltzmann generator. J. Phys. A 45, 225207 (2012)
 9.Clark, J., Maes, C.: Diffusive behavior for randomly kicked Newtonian particles in a periodic medium. Commun. Math. Phys. 301, 229–283 (2011)
 10.Dürr, D., Goldstein, S., Lebowitz, J.L.: A mechanical model for a Brownian motion. Commun. Math. Phys. 78, 507–530 (1981)
 11.Dzhaparidze, K., Valkeila, E.: On the Hellinger type distances for filtered experiments. Probab. Theory Relat. Fields 85, 105–117 (1990)
 12.Freidlin, M.I., Wentzell, A.D.: Random perturbations of Hamiltonian systems. Mem. Am. Math. Soc. 109(523) (1994)
 13.Friedman, N., Ozeri, R., Davidson, N.: Quantum reflection of atoms from a periodic dipole potential. J. Opt. Soc. Am. B 15, 1749–1755 (1998)
 14.Hall, P., Heyde, C.C.: Martingale Limit Theory and Its Application. Academic Press, New York (1980)
 15.Hennion, H.: Sur le mouvement d'une particule lourde soumise à des collisions dans un système infini de particules légères. Z. Wahrscheinlichkeitstheorie Verw. Geb. 25, 123–154 (1973)
 16.Holley, R.: The motion of a heavy particle in a one dimensional gas of hard spheres. Probab. Theory Relat. Fields 17, 181–219 (1971)
 17.Höpfner, R., Löcherbach, E.: Limit theorems for null recurrent Markov processes. Mem. Am. Math. Soc. 161 (2003)
 18.Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes. Springer, Berlin (1987)
 19.Kunze, S., Dürr, S., Rempe, G.: Bragg scattering of slow atoms from a standing light wave. Europhys. Lett. 34, 343–348 (1996)
 20.Komorowski, T., Landim, C., Olla, S.: Fluctuations in Markov Processes. Springer, Berlin (2012)
 21.Löcherbach, E., Loukianova, D.: On Nummelin splitting for continuous time Harris recurrent Markov processes and application to kernel estimation for multidimensional diffusions. Stoch. Processes Appl. 118, 1301–1321 (2008)
 22.McClelland, J.J.: Atom-optical properties of a standing-wave light field. J. Opt. Soc. Am. B 12, 1761–1768 (1995)
 23.Meyn, S.P., Tweedie, R.L.: Generalized resolvents and Harris recurrence of Markov processes. Contemp. Math. 149, 227–250 (1993)
 24.Montroll, E.W., Weiss, G.H.: Random walks on lattices, II. J. Math. Phys. 6, 167–181 (1965)
 25.Morsch, O.: Dynamics of Bose–Einstein condensates in optical lattices. Rev. Mod. Phys. 78, 179–215 (2006)
 26.Nelson, E.: Dynamical Theories of Brownian Motion. Princeton University Press, Princeton (1967)
 27.Neveu, J.: Potentiel Markovien récurrent des chaînes de Harris. Ann. Inst. Fourier 22, 7–130 (1972)
 28.Nummelin, E.: A splitting technique for Harris recurrent Markov chains. Z. Wahrscheinlichkeitstheorie Verw. Geb. 43, 309–318 (1978)
 29.Pollard, D.: Convergence of Stochastic Processes. Springer, New York (1984)
 30.Spohn, H.: Large Scale Dynamics of Interacting Particles. Springer, Berlin (1991)
 31.Szász, D., Tóth, B.: Towards a unified dynamical theory of the Brownian particle in an ideal gas. Commun. Math. Phys. 111, 41–62 (1987)
 32.Touati, A.: Théorèmes limites pour les processus de Markov récurrents. C. R. Acad. Sci. Paris Sér. I Math. 305(19), 841–844 (1987)
 33.Uhlenbeck, G.E., Ornstein, L.S.: On the theory of Brownian motion. Phys. Rev. 36, 823–841 (1930)