Existence of Periodic Solutions in Distribution for Stochastic Newtonian Systems


Periodic phenomena such as oscillations have been studied for many years. In this paper, we verify a stochastic version of Levinson’s conjecture, which asserts the existence of periodic solutions for dissipative second order Newtonian systems. First, we present a stochastic Duffing equation to illustrate our result. Then, we apply the Wong–Zakai approximation method and Lyapunov’s method to stochastic second order Newtonian systems driven by Brownian motions. With the help of Horn’s fixed point theorem, we prove that such systems are stochastically dissipative and admit periodic solutions in distribution.




  1. Adler, P.M., Mityushev, V.V.: Resurgence flows in three-dimensional periodic porous media. Phys. Rev. E 82(1), 016317 (2010)

  2. Bally, V., Millet, A., Sanz-Solé, M.: Approximation and support theorem in Hölder norm for parabolic stochastic partial differential equations. Ann. Probab. 23, 178–222 (1995)

  3. Billingsley, P.: Convergence of Probability Measures. Wiley Series in Probability and Statistics, 2nd edn. Wiley, New York (1999)

  4. Burton, T.A.: Stability and Periodic Solutions of Ordinary and Functional-Differential Equations. Mathematics in Science and Engineering, vol. 178. Academic Press, Orlando (1985)

  5. Burton, T.A., Zhang, B.: Uniform ultimate boundedness and periodicity in functional-differential equations. Tohoku Math. J. 42, 93–100 (1990)

  6. Burton, T.A., Zhang, S.: Unified boundedness, periodicity, and stability in ordinary and functional-differential equations. Ann. Mat. Pura Appl. 145, 129–158 (1986)

  7. Clarke, F.H., Ledyaev, Y.S., Stern, R.J.: Asymptotic stability and smooth Lyapunov functions. J. Differ. Equ. 149(1), 69–114 (1998)

  8. Chen, Z., Lin, W.: Square-mean pseudo almost automorphic process and its application to stochastic evolution equations. J. Funct. Anal. 261(1), 69–89 (2011)

  9. Chen, F., Han, Y., Li, Y., Yang, X.: Periodic solutions of Fokker–Planck equations. J. Differ. Equ. 263, 285–298 (2017)

  10. Dragomir, S.S.: Some Gronwall Type Inequalities and Applications. Nova Science Publishers, New York (2003)

  11. Friedman, A.: Stochastic Differential Equations and Applications, vols. 1 and 2. Academic Press, Cambridge (1975)

  12. Feng, C., Wu, Y., Zhao, H.: Anticipating random periodic solutions-I. SDEs with multiplicative linear noise. J. Funct. Anal. 271(2), 365–417 (2016)

  13. Hairer, M., Pardoux, E.: A Wong–Zakai theorem for stochastic PDEs. J. Math. Soc. Jpn. 67(4), 1551–1604 (2015)

  14. Horn, W.A.: Some fixed point theorems for compact maps and flows in Banach spaces. Trans. Am. Math. Soc. 149, 391–404 (1970)

  15. Ikeda, N., Nakao, S., Yamato, Y.: A class of approximations of Brownian motion. Publ. RIMS Kyoto Univ. 13, 285–300 (1977)

  16. Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes, 2nd edn. North-Holland, Amsterdam (1989)

  17. Ji, M., Qi, W., Shen, Z., Yi, Y.: Existence of periodic probability solutions to Fokker–Planck equations with applications. J. Funct. Anal. 277(11), 108281 (2019)

  18. Jiang, X., Yang, X., Li, Y.: Affine periodic solutions in distribution of stochastic differential equations. arXiv:1908.11499 (2019)

  19. Kelly, D., Melbourne, I.: Smooth approximation of stochastic differential equations. Ann. Probab. 44, 479–520 (2016)

  20. Khasminskii, R.: Stochastic Stability of Differential Equations, completely revised and enlarged 2nd edn. With contributions by G.N. Milstein and M.B. Nevelson. Stoch. Model. Appl. Probab., vol. 66. Springer, Heidelberg (2012)

  21. Küpper, T., Li, Y., Zhang, B.: Periodic solutions for dissipative-repulsive systems. Tohoku Math. J. 52, 321–329 (2000)

  22. Levinson, N.: Transformation theory of non-linear differential equations of the second order. Ann. Math. 45, 723–737 (1944)

  23. Li, Y., Wang, H., Yang, X.: Fink type conjecture on affine-periodic solutions and Levinson’s conjecture to Newtonian systems. Discret. Contin. Dyn. Syst. Ser. B 23(6), 2607–2623 (2018)

  24. Liu, Z., Wang, W.: Favard separation method for almost periodic stochastic differential equations. J. Differ. Equ. 260, 8109–8136 (2016)

  25. Nakatsugawa, K., Fujii, T., Saxena, A., Tanda, S.: Time operators and time crystals: self-adjointness by topology change. J. Phys. A 53(2), 025301 (2020)

  26. Poincaré, H.: Les méthodes nouvelles de la mécanique céleste, vol. I. Gauthier-Villars, Paris (1892)

  27. Poincaré, H.: Les méthodes nouvelles de la mécanique céleste, vol. II. Gauthier-Villars, Paris (1893)

  28. Poincaré, H.: Les méthodes nouvelles de la mécanique céleste, vol. III. Gauthier-Villars, Paris (1899)

  29. Prodan, E., Nordlander, P.: On the Kohn–Sham equations with periodic background potentials. J. Stat. Phys. 111(3–4), 967–992 (2003)

  30. Rao, M.M.: Conditional Measures and Applications, vol. 2. CRC Press, New York (2005)

  31. Shen, J., Lu, K.: Wong–Zakai approximations and center manifolds of stochastic differential equations. J. Differ. Equ. 263(8), 4929–4977 (2017)

  32. Shen, J., Zhao, J., Lu, K., Wang, B.: The Wong–Zakai approximations of invariant manifolds and foliations for stochastic evolution equations. J. Differ. Equ. 266(8), 4568–4623 (2019)

  33. Stroock, D.W.: Probability Theory: An Analytic View, 2nd edn. Cambridge University Press, Cambridge (2011)

  34. Stroock, D.W., Varadhan, S.R.S.: On the support of diffusion processes with applications to the strong maximum principle. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, vol. 3, pp. 333–359 (1972)

  35. Sussmann, H.J.: An interpretation of stochastic differential equations as ordinary differential equations which depend on the sample point. Bull. Am. Math. Soc. 83, 296–298 (1977)

  36. Sussmann, H.J.: On the gap between deterministic and stochastic ordinary differential equations. Ann. Probab. 6, 19–41 (1978)

  37. Twardowska, K.: An approximation theorem of Wong–Zakai type for nonlinear stochastic partial differential equations. Stoch. Anal. Appl. 13(5), 601–626 (1995)

  38. Ur Rahman, A., Khalid, M., Naeem, S.N., Elghmaz, E.A., El-Tantawy, S.A., El-Sherif, L.S.: Periodic and localized structures in a degenerate Thomas–Fermi plasma. Phys. Lett. A 10, 126257 (2020)

  39. Wang, K., Fan, A.: Uniform persistence and periodic solution of chemostat-type model with antibiotic. Discret. Contin. Dyn. Syst. Ser. B 4(3), 789 (2004)

  40. Wong, E., Zakai, M.: On the relation between ordinary and stochastic differential equations. Int. J. Eng. Sci. 3, 213–229 (1965)

  41. Wong, E., Zakai, M.: On the convergence of ordinary integrals to stochastic integrals. Ann. Math. Stat. 36, 1560–1564 (1965)

  42. Yoshizawa, T.: Stability Theory and the Existence of Periodic Solutions and Almost Periodic Solutions. Applied Mathematical Sciences, vol. 14. Springer, New York (1975)

  43. Zabreiko, P., Krasnosel’skii, M.: Iteration of operators and fixed points. Dokl. Akad. Nauk SSSR 196, 1006–1009 (1971)

  44. Zaouch, F.: Time-periodic solutions of the time-dependent Ginzburg–Landau equations of superconductivity. Z. Angew. Math. Phys. 54(6), 905–918 (2003)

  45. Zhao, H., Zheng, Z.: Random periodic solutions of random dynamic systems. J. Differ. Equ. 246(5), 2020–2038 (2009)



This work was supported by the National Basic Research Program of China (Grant No. 2013CB834100) and the National Natural Science Foundation of China (Grant Nos. 11901231, 11901080, 11571065 and 11171132). We thank the anonymous referees for their valuable suggestions and comments.

Author information



Corresponding author

Correspondence to Xiaomeng Jiang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Communicated by Jorge Kurchan.


Appendix A: Gronwall’s Inequalities and Convergence of Variables and Stochastic Processes

To obtain our results, Gronwall-type inequalities are necessary. We list two of them below, cited from [10].

Lemma A.1

Let u, \(\Psi \) and \(\chi \) be real continuous functions on \([a,b]\), with \(\chi \) nonnegative. Suppose that on \([a,b]\) we have

$$\begin{aligned} u(t)\le \Psi (t)+\int ^t_a\chi (s)u(s)ds. \end{aligned}$$

Then the following results hold.

  1. 1.

    ([10, Theorem 1]). On \([a,b]\),

    $$\begin{aligned} u(t)\le \Psi (t)+\int ^t_a\chi (s)\Psi (s)\exp \left[ \int ^t_s\chi (r)dr\right] ds. \end{aligned}$$
  2. 2.

    ([10, Corollary 2]). Moreover if \(\Psi \) is differentiable, then

    $$\begin{aligned} u(t)\le \Psi (a)\exp \left( \int ^t_a\chi (r)dr\right) +\int ^t_a\exp \left( \int ^t_s\chi (r)dr\right) \Psi '(s)ds. \end{aligned}$$
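As a numerical sanity check of part 1 (our sketch, not taken from [10]), take \(\chi \equiv 1\) and \(\Psi \equiv c\); the hypothesis is then saturated by \(u(t)=ce^{t-a}\), and the conclusion should hold with equality up to quadrature error:

```python
import math

# Check of the Gronwall bound (Lemma A.1, part 1) for chi(t) = 1, Psi(t) = c:
# the hypothesis u(t) <= c + int_a^t u(s) ds is saturated by u(t) = c*exp(t-a),
# and the conclusion's right-hand side,
#   c + int_a^t c * exp(t - s) ds = c * exp(t - a),
# then coincides with u(t).

def gronwall_rhs(t, a, c, n=10_000):
    """Right-hand side of the conclusion, computed by the trapezoid rule."""
    h = (t - a) / n if n else 0.0
    integral = 0.0
    for i in range(n):
        s0, s1 = a + i * h, a + (i + 1) * h
        integral += 0.5 * h * (c * math.exp(t - s0) + c * math.exp(t - s1))
    return c + integral

a, c = 0.0, 3.0
for t in [0.5, 1.0, 2.0]:
    u = c * math.exp(t - a)
    bound = gronwall_rhs(t, a, c)
    # trapezoid overestimates the convex integrand, so bound >= u up to rounding
    assert u <= bound + 1e-6, (t, u, bound)
```

The trapezoid rule overestimates the convex integrand \(e^{t-s}\), so the computed bound can only exceed the exact one.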

We also need some important results on the convergence of random variables, which are listed below. The first is Skorokhod’s representation theorem.

Lemma A.2

([3, Theorem 6.7]) Suppose that \(\{p_n:p_n\in \mathcal {P}(\mathbb {R}^l)\}_{n=1}^\infty \) converges weakly to \(p_0\in \mathcal {P}(\mathbb {R}^l)\). Then there exist random variables \(\{x_n\}_{n=1}^\infty \) and \(x_0\), defined on a common probability space \((\hat{\Omega },\hat{\mathcal {F}},\hat{\mathbb {P}})\), such that

$$\begin{aligned} p_{x_n}=p_n,~n=0,1,\ldots \end{aligned}$$


and

$$\begin{aligned} x_n\xrightarrow {a.s.}x_0 \end{aligned}$$

as \(n\rightarrow \infty \).
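A minimal illustration of this representation (our construction; we assume the concrete family \(p_n=N(1/n,1)\), which converges weakly to \(p_0=N(0,1)\)) uses the classical quantile coupling on the common probability space \(((0,1),\mathcal {B},\text {Leb})\):

```python
from statistics import NormalDist

# Quantile coupling: on ((0,1), Borel, Lebesgue) the random variable
# x_n(u) = F_n^{-1}(u) has law p_n = N(1/n, 1), and since the quantile
# functions converge pointwise, x_n -> x_0 almost surely, exactly as in
# Lemma A.2 (with n = 0 standing for the limit law N(0, 1)).

def x_n(n, u):
    """Quantile-coupled representative of p_n = N(1/n, 1); n = 0 gives N(0, 1)."""
    mu = 0.0 if n == 0 else 1.0 / n
    return NormalDist(mu, 1.0).inv_cdf(u)

for u in [0.1, 0.5, 0.9]:
    gaps = [abs(x_n(n, u) - x_n(0, u)) for n in (1, 10, 100)]
    # here the pointwise gap is exactly 1/n, so it decreases to 0
    assert gaps[0] > gaps[1] > gaps[2]
```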

Although \(L^2(\mathbb {P},\mathbb {R}^{m})\) lacks compactness, closed balls in \(L^2(\mathbb {P},\mathbb {R}^{m})\) admit subsequences that converge in distribution, thanks to Prohorov’s theorem (see [3, Theorems 5.1 and 5.2]).

Lemma A.3

Suppose that there is a positive constant \(R>0\) and random variables \(\{x_n\}_{n=1}^\infty \) in \(L^2(\mathbb {P},\mathbb {R}^{m})\) satisfying

$$\begin{aligned} \Vert x_n\Vert _2\le R,~n=1,2,\ldots . \end{aligned}$$

Then this sequence admits a subsequence \(\{x_{n_k}\}_{k=1}^\infty \) and a random variable \(x_0\in L^2(\mathbb {P},\mathbb {R}^{m})\) such that

$$\begin{aligned} x_{n_k}\xrightarrow {~d~}x_0 \end{aligned}$$

as \(k\rightarrow \infty \).


Proof This is a direct corollary of [24, Theorem 3.1]. \(\square \)

Appendix B: Some Results on Conditional Expectations and Martingales

We also need some results on conditional expectations and discrete martingales. Suppose that \(\{\mathcal {V}_k\}_{k=1}^\infty \) is an increasing family of \(\sigma \)-algebras and that all conditional expectations mentioned below exist. We denote the conditional expectation of a random variable x with respect to a \(\sigma \)-algebra \(\mathcal {H}\) by \(\mathbb {E}[x|\mathcal {H}]\) (which is itself a random variable). For more details on conditional expectations, we recommend [30].

Lemma B.1

For any random variable \(x\in L^2(\mathbb {P},\mathbb {R}^{m})\), the conditional expectation admits the following properties.

  1. i.

    ([30, Proposition 1 (ii)]) For any \(k_1\le k_2\),

    $$\begin{aligned} \mathbb {E}[\mathbb {E}[x|\mathcal {V}_{k_1}]|\mathcal {V}_{k_2}]=\mathbb {E}[\mathbb {E}[x|\mathcal {V}_{k_2}]|\mathcal {V}_{k_1}]=\mathbb {E}[x|\mathcal {V}_{k_1}]~a.s. \end{aligned}$$

    and in particular,

    $$\begin{aligned} \mathbb {E}[\mathbb {E}[x|\mathcal {V}_{k_1}]]=\mathbb {E}[x]. \end{aligned}$$
  2. ii.

    ([30, Theorem 4]) For any \(k\in \mathbb {N}\) and \(q\in [1,\infty ]\),

    $$\begin{aligned} \varpi \left( \mathbb {E}[x|\mathcal {V}_k]\right) \le \mathbb {E}\left[ \varpi (x)|\mathcal {V}_k\right] \end{aligned}$$

    whenever \(\varpi :\mathbb {R}\rightarrow \mathbb {R}^+\) is a continuous convex function; moreover, \(\mathbb {E}[\cdot |\mathcal {V}_k]:L^q(\mathbb {P},\mathbb {R}^{m})\rightarrow L^q(\mathbb {P},\mathbb {R}^{m})\) is a positive linear contraction on every \(L^q(\mathbb {P},\mathbb {R}^{m})\). Namely,

    $$\begin{aligned} \Vert \mathbb {E}[x|\mathcal {V}_k]\Vert _q\le \Vert x\Vert _q. \end{aligned}$$
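On a finite uniform probability space, \(\mathbb {E}[x|\mathcal {V}]\) is simply the average of x over each cell of the partition generating \(\mathcal {V}\), so both properties can be checked directly; a small sketch with toy data of our choosing:

```python
import math

# Conditional expectation on a finite uniform space Omega = {0,...,7}:
# E[x | sigma(cells)] averages x over each cell. We check the tower
# property (Lemma B.1 i) and the L^2 contraction (Lemma B.1 ii) for
# two nested dyadic partitions.

x = [1.0, -2.0, 0.5, 3.0, -1.0, 4.0, 0.0, 2.5]

def cond_exp(vals, cells):
    """E[vals | sigma(cells)] as a function on Omega (uniform probability)."""
    out = [0.0] * len(vals)
    for cell in cells:
        avg = sum(vals[w] for w in cell) / len(cell)
        for w in cell:
            out[w] = avg
    return out

V1 = [[0, 1, 2, 3], [4, 5, 6, 7]]          # coarse partition
V2 = [[0, 1], [2, 3], [4, 5], [6, 7]]      # finer partition, sigma(V1) in sigma(V2)

e1, e2 = cond_exp(x, V1), cond_exp(x, V2)

# tower property: E[E[x|V2]|V1] = E[E[x|V1]|V2] = E[x|V1]
assert all(abs(a - b) < 1e-12 for a, b in zip(cond_exp(e2, V1), e1))
assert all(abs(a - b) < 1e-12 for a, b in zip(cond_exp(e1, V2), e1))

def l2_norm(vals):
    return math.sqrt(sum(v * v for v in vals) / len(vals))

# contraction: ||E[x|V]||_2 <= ||x||_2
assert l2_norm(e1) <= l2_norm(x) + 1e-12
assert l2_norm(e2) <= l2_norm(x) + 1e-12
```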

It is clear that \(\{\mathbb {E}[x|\mathcal {V}_k]\}_{k\in \mathbb {N}}\) is a discrete martingale with respect to \(\{\mathcal {V}_k\}_{k\in \mathbb {N}}\). For the convergence of discrete martingales, Doob’s martingale convergence theorem and Lévy’s zero-one law are helpful tools. We state a version from [33] as follows.

Lemma B.2

([33, Corollary 5.2.4]) For any \(x\in L^q(\mathbb {P},\mathbb {R}^{m})\) with \(q\in [1,\infty )\),

$$\begin{aligned} \mathbb {E}[x|\mathcal {V}_k]\xrightarrow {a.s.,~q}\mathbb {E}\left[ x\Bigg |\bigvee ^\infty _{k=0}\mathcal {V}_k\right] ~as~k\rightarrow \infty . \end{aligned}$$

In particular if x is \(\bigvee \limits ^\infty _{k=0}\mathcal {V}_k\)-measurable,

$$\begin{aligned} \mathbb {E}[x|\mathcal {V}_k]\xrightarrow {a.s.}x~as~k\rightarrow \infty . \end{aligned}$$


and

$$\begin{aligned} \mathbb {E}[x|\mathcal {V}_k]\xrightarrow {~q~}x~as~k\rightarrow \infty \end{aligned}$$

uniformly with respect to \(x\in \bar{S}_R\) for any fixed \(R>0\).


Proof The convergences (B.22)–(B.24) are proved in [33, Corollary 5.2.4]. The uniform convergence in (B.24) follows from two facts. The first is stated in Lemma B.1 ii. The second is that for any \(x\in S_R\) and \(k_1\ge k_2\ge 0\),

$$\begin{aligned} \mathbb {E}\left[ |\mathbb {E}[x|\mathcal {V}_{k_1}]|^q\Big |\mathcal {V}_{k_2}\right]&\ge \left| \mathbb {E}[\mathbb {E}[x|\mathcal {V}_{k_1}]|\mathcal {V}_{k_2}]\right| ^q\\&=|\mathbb {E}[x|\mathcal {V}_{k_2}]|^q. \end{aligned}$$

That is, \(\{|\mathbb {E}[x|\mathcal {V}_k]|^q\}\) is a submartingale. Following an argument similar to that of Corollary 5.2.4 of [33], the sequence \(\{\mathbb {E}[x|\mathcal {V}_k]\}\) converges to x uniformly on \(S_R\) in mean square. \(\square \)
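The convergence in Lemma B.2 can be made concrete on \(([0,1),\mathcal {B},\text {Leb})\) with \(\mathcal {V}_k\) generated by the dyadic intervals \([j/2^k,(j+1)/2^k)\); for the illustrative choice \(x(\omega )=\omega \) (ours, not the paper's), \(\mathbb {E}[x|\mathcal {V}_k]\) is the midpoint of the containing interval, so the sup-error is \(2^{-(k+1)}\rightarrow 0\):

```python
# Illustration of Lemma B.2 on ([0,1), Borel, Lebesgue): for x(w) = w and
# V_k generated by the dyadic intervals of length 2^-k, E[x|V_k](w) is the
# midpoint of the interval containing w, and
#   sup_w |E[x|V_k](w) - x(w)| = 2^{-(k+1)} -> 0  as  k -> infinity.

def dyadic_cond_exp(w, k):
    """E[x | V_k](w) for x(w) = w: midpoint of the containing dyadic interval."""
    j = int(w * 2**k)            # index of the dyadic interval containing w
    return (j + 0.5) / 2**k

for k in range(1, 12):
    worst = max(abs(dyadic_cond_exp(w / 1000.0, k) - w / 1000.0)
                for w in range(1000))
    assert worst <= 2.0**-(k + 1) + 1e-12
```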

Appendix C: Proof of Lemma 3.1

We verify the dissipativity of (10) via Lyapunov’s method. First, let

$$\begin{aligned} U(x,y)=\lambda |y|^2+|x+y|^2+(2\lambda +2)V(x). \end{aligned}$$

One can verify that there exist \(\mu ,\nu >0\) such that

$$\begin{aligned} \nu (|x|^2+|y|^2)\le U(x,y)\le \mu (|x|^2+|y|^2). \end{aligned}$$

One can verify that conditions (H1)–(H3) hold for f and g. For any smooth enough function \(\mathcal {U}:\mathbb {R}^+\times \mathbb {R}^l\times \mathbb {R}^l\rightarrow \mathbb {R}\), define operator \(\mathcal {L}\) as

$$\begin{aligned} \mathcal {L}\mathcal {U}(t,x,y)=\partial _t\mathcal {U}(t,x,y)+(\nabla \mathcal {U}^\top f)(t,x,y)+\frac{1}{2}{{\,\mathrm{\mathbf{tr}}\,}}\left[ g^\top {{\,\mathrm{\mathbf{Hess}}\,}}\mathcal {U}g\right] (t,x,y), \end{aligned}$$

where \({{\,\mathrm{\mathbf{tr}}\,}}\) denotes trace of matrix and \({{\,\mathrm{\mathbf{Hess}}\,}}\) denotes the Hessian matrix.

According to Itô’s formula, we have

$$\begin{aligned}&dU(x(t),y(t))\\&\quad =\mathcal {L}U(t,x(t),y(t))dt+(\nabla U^\top g)(t,x(t),y(t))d\xi (t)\\&\quad =\Big [2x^\top +2y^\top +(2\lambda +2)\nabla V^\top (x),2x^\top +(2\lambda +2)y^\top \Big ](t)\\&\quad \quad \times \left( \begin{matrix} y\\ -A(t,x)y-\nabla V(x)+e(t) \end{matrix} \right) (t)dt\\&\qquad +\frac{1}{2}{{\,\mathrm{\mathbf{tr}}\,}}\left[ \left( \begin{matrix} 0&{}0\\ 0&{}I_l \end{matrix} \right) \left( \begin{matrix} 2I_l+(2\lambda +2){{\,\mathrm{\mathbf{Hess}}\,}}V(x)&{}2I_l\\ 2I_l&{}(2\lambda +2)I_l \end{matrix} \right) \left( \begin{matrix} 0&{}0\\ 0&{}I_l \end{matrix} \right) \right] (t)dt\\&\qquad +\Big [2x^\top +2y^\top +(2\lambda +2)\nabla V^\top (x),2x^\top +(2\lambda +2)y^\top \Big ](t)\cdot \left( \begin{matrix} 0&{}0\\ 0&{}I_l \end{matrix} \right) d\xi (t)\\&\quad =\Big [2x\cdot y-2x^\top A(\cdot ,x)y+2|y|^2-(2\lambda +2)y^\top A(\cdot ,x)y-2x^\top \nabla V(x)\\&\qquad +2x^\top e+(2\lambda +2)y^\top e+l(\lambda +1)\Big ](t)dt\\&\qquad +\Big [2x^\top +(2\lambda +2)y^\top \Big ](t)d\xi (t). \end{aligned}$$

Now take a nonnegative function \(h:\mathbb {R}\rightarrow \mathbb {R}^+\) such that

  1. (a)

    \(h(t)\le \frac{1}{\lambda ^2}\;\forall \;t\ge 0\);

  2. (b)

    there exist a constant \(\sigma >0\) and a sequence \(\{t_k\}^{+\infty }_{k=1}\) satisfying \(\lim \limits _{k\rightarrow +\infty }t_k=+\infty \) such that

    $$\begin{aligned} \int ^{t_{k+1}}_{t_{k}}h(s)ds\ge \sigma \;\forall \;k\in \mathbb {N}_+; \end{aligned}$$
  3. (c)

    there exists \(M>0\) such that

    $$\begin{aligned} \int ^t_0e^{-\int ^t_sh(r)dr}ds\le M\;\forall \;t\ge 0. \end{aligned}$$
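For concreteness, one admissible choice of h satisfying (a)–(c) (our example; the proof only needs existence) is the constant function \(h\equiv 1/\lambda ^2\) with \(t_k=(k-1)\lambda ^2\), for which

```latex
% (a) holds with equality; for (b) and (c):
h(t)\equiv\frac{1}{\lambda^2},\qquad
\int^{t_{k+1}}_{t_k}h(s)\,ds=\frac{t_{k+1}-t_k}{\lambda^2}=1=:\sigma,\qquad
\int^t_0 e^{-\int^t_s h(r)\,dr}\,ds
=\lambda^2\left(1-e^{-t/\lambda^2}\right)\le\lambda^2=:M.
```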

Therefore by Itô’s formula, we have

$$\begin{aligned} \mathbb {E}&\left[ e^{\int ^t_0h(s)ds}U(x(t),y(t))\right] -\mathbb {E}[U(x_0,y_0)]\\&=\mathbb {E}\left[ \int ^t_0e^{\int ^s_0h(r)dr}\Big (h(s)U(x(s),y(s))+\mathcal {L}U(x(s),y(s))\Big )ds\right] . \end{aligned}$$

Since e is continuous and \(\mathcal {T}\)-periodic with respect to t, there exists a constant \(\kappa >0\) such that

$$\begin{aligned} |e(t)|\le \kappa \;\forall \;~t\ge 0. \end{aligned}$$

Note that

$$\begin{aligned}&h(t)U(x,y)+\mathcal {L}U(x,y)\\&\quad =h(t)\Big [\lambda |y|^2+|x+y|^2+(2\lambda +2)V(x)\Big ]\\&\quad \quad -(\lambda +2)y^\top Ay-2x^\top \nabla V(x)+l(\lambda +1)+2x^\top e(t)+(2\lambda +2)y^\top e(t)\\&\quad \quad -\lambda \Bigg [\left| \sqrt{A}y\right| ^2+\frac{2}{\lambda }\left( \sqrt{A}y\right) ^\top \sqrt{A}x\\&\quad \quad -\frac{2}{\lambda }\left( \sqrt{A}y\right) ^\top \left( \sqrt{A}\right) ^{-1}x-\frac{2}{\lambda }\left( \sqrt{A}y\right) ^\top \left( \sqrt{A}\right) ^{-1}y\Bigg ]\\&\quad =h(t)\Big [\lambda |y|^2+|x+y|^2+(2\lambda +2)V(x)\Big ]\\&\quad \quad -(\lambda +2)y^\top Ay-2x^\top \nabla V(x)+l(\lambda +1)+2x^\top e(t)+(2\lambda +2)y^\top e(t)\\&\quad \quad -\lambda \left| \sqrt{A}y+\frac{\sqrt{A}}{\lambda }x-\left( \lambda \sqrt{A}\right) ^{-1}x-\left( \lambda \sqrt{A}\right) ^{-1}y\right| ^2\\&\quad \quad +\lambda \left| \frac{\sqrt{A}}{\lambda }x-\left( \lambda \sqrt{A}\right) ^{-1}x-\left( \lambda \sqrt{A}\right) ^{-1}y\right| ^2\\&\quad \le \left[ \frac{2\beta +2}{\lambda ^2}+\frac{2\beta }{\lambda }+\frac{K}{\alpha \lambda }+1-\eta \right] |x|^2\\&\quad \quad +\left[ \frac{2}{\lambda ^2}+\frac{1}{\lambda }+\frac{K}{\alpha \lambda }+1-\alpha (\lambda +2)\right] |y|^2\\&\quad \quad +\left[ \frac{K}{\gamma \lambda }-1\right] x^\top \nabla V(x)+(l+1)\lambda +(\lambda +2)\kappa ^2, \end{aligned}$$

where K is a positive constant. When \(\lambda \) is large enough, there exist \(K_1,K_2>0\) such that

$$\begin{aligned} h(t)U(x,y)+\mathcal {L}U(x,y)\le -K_1\left( |x|^2+|y|^2\right) +K_2. \end{aligned}$$


Hence

$$\begin{aligned} \mathbb {E}&\left[ e^{\int ^t_0h(s)ds}U(x(t),y(t))\right] -\mathbb {E}[U(x_0,y_0)]\\&\le \mathbb {E}\left[ \int ^t_0e^{\int ^s_0h(r)dr}\Big (-K_1(|x|^2+|y|^2)+K_2\Big )ds\right] \\&\le K_2\int ^t_0e^{\int ^s_0h(r)dr}ds. \end{aligned}$$

Therefore by (C.25) and conditions (b) and (c), we have

$$\begin{aligned} \mathbb {E}&[|x(t)|^2+|y(t)|^2]\\&\le \frac{1}{\nu }\mathbb {E}[U(x(t),y(t))]\\&\le \frac{1}{\nu }e^{-\int ^{t}_0h(s)ds}\mathbb {E}[U(x_0,y_0)]+\frac{K_2}{\nu }\int ^t_0e^{-{\int ^t_sh(r)dr}}ds\\&\le \frac{\mu }{\nu }e^{-\int ^{t}_0h(s)ds}\mathbb {E}[|x_0|^2+|y_0|^2]+\frac{MK_2}{\nu }. \end{aligned}$$

Without loss of generality, let \(t_1=0\) in \(\{t_k\}^{+\infty }_{k=1}\). Condition (b) then yields

$$\begin{aligned}&e^{-\int ^t_0h(s)ds}\\&=e^{-\int ^{t}_{t_k}h(s)ds}\prod ^{k-1}_{i=1}e^{-\int ^{t_{i+1}}_{t_i}h(s)ds}\\&\le e^{-(k-1)\sigma } \end{aligned}$$

for all \(t\ge t_k\). Therefore, we have

$$\begin{aligned}&\Vert (x(t),y(t))\Vert _2\\&=\Big [\mathbb {E}\left( |x(t)|^2+|y(t)|^2\right) \Big ]^{\frac{1}{2}}\\&\le \left[ \frac{\mu }{\nu }e^{-\int ^{t}_0h(s)ds}\mathbb {E}[|x_0|^2+|y_0|^2]+\frac{MK_2}{\nu }\right] ^{\frac{1}{2}}\\&\le \left[ \frac{\mu }{\nu }e^{-(k-1)\sigma }\Vert (x_0,y_0)\Vert _2^2+\frac{MK_2}{\nu }\right] ^{\frac{1}{2}}. \end{aligned}$$


Let

$$\begin{aligned} B_0=\left( 1+\frac{MK_2}{\nu }\right) ^{\frac{1}{2}}. \end{aligned}$$

For any \(B_1>0\), when \(\Vert (x_0,y_0)\Vert _2<B_1\), there exists \(k_0\in \mathbb {N}_+\) such that for all \(t\ge t_{k_0}\),

$$\begin{aligned} \Vert (x(t),y(t))\Vert _2\le B_0, \end{aligned}$$

which means that system (10) is \(B_0\)-dissipative. Moreover, since (x(t), y(t)) is continuous, there exists a constant \(M_0\ge B_0\) such that for all \(t\in \mathbb {R}^+\),

$$\begin{aligned} \Vert (x(t),y(t))\Vert _2\le M_0. \end{aligned}$$

\(\square \)
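To see the proved dissipativity numerically, one can simulate a one-dimensional Duffing-type instance of (10) (an illustrative choice of data on our part, not the paper's exact example): \(A=1\), \(V(x)=x^4/4\), \(e(t)=\cos t\), so that \(dx=y\,dt\), \(dy=(-y-x^3+\cos t)\,dt+dW\). A tamed Euler scheme is used below because the drift is superlinear:

```python
import math, random

# Monte Carlo illustration of mean-square dissipativity for the
# Duffing-type system dx = y dt, dy = (-y - x^3 + cos t) dt + dW.
# The drift in y is tamed (divided by 1 + dt*|drift|) so that the
# explicit scheme stays stable despite the cubic nonlinearity.

def mean_square(t_end, x0, y0, paths=300, dt=0.005, seed=7):
    """Estimate E[|x(t_end)|^2 + |y(t_end)|^2] over independent paths."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(paths):
        x, y, t = x0, y0, 0.0
        while t < t_end:
            drift_y = -y - x**3 + math.cos(t)
            dW = rng.gauss(0.0, math.sqrt(dt))
            x += y * dt
            y += drift_y * dt / (1.0 + dt * abs(drift_y)) + dW
            t += dt
        total += x * x + y * y
    return total / paths

# Started far from the origin, the second moment settles well below its
# initial value |x_0|^2 + |y_0|^2 = 18, consistent with B_0-dissipativity.
assert mean_square(10.0, 3.0, 3.0) < 18.0
```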

Appendix D: Proof of Lemma 3.2

Since system (11) is \(B_0\)-dissipative, for any \(B_1>0\) there exists \(T_{B_1}>0\) such that for every \(x_0\in L^2(\mathbb {P},\mathbb {R}^{m})\) satisfying \(\Vert x_0\Vert _2\le B_1\),

$$\begin{aligned} \Vert X(t,\omega ,x_0)\Vert _2\le B_0 \end{aligned}$$

for all \(t\ge T_{B_1}\).

On the other hand, according to Theorem 2.1, for every fixed \(N\in \mathbb {N}\) satisfying \(N\mathcal {T}\ge T_{B_1}\), \(X_\epsilon \) converges to X uniformly on \([0,N\mathcal {T}]\) as \(\epsilon \rightarrow 0\). Thus for every \(\nu >0\), there exists an \(\epsilon _{\nu ,N}>0\) such that

$$\begin{aligned} \Vert X_\epsilon (t,\omega ,x_0)-X(t,\omega ,x_0)\Vert _2\le \nu \end{aligned}$$

for all \(0<\epsilon <\epsilon _{\nu ,N}\), \(t\in [0,N\mathcal {T}]\) and \(\Vert x_0\Vert _2<\infty \).

Therefore when \(\Vert x_0\Vert _2\le B_1\) and \(t\in [T_{B_1},N\mathcal {T}]\), we have

$$\begin{aligned} \Vert X_\epsilon (t,\omega ,x_0)\Vert _2&\le \Vert X(t,\omega ,x_0)\Vert _2+\Vert X(t,\omega ,x_0)-X_\epsilon (t,\omega ,x_0)\Vert _2\\&\le B_0+\nu . \end{aligned}$$

This ends the proof.\(\square \)

In fact, we can estimate \(X_\epsilon \) by Theorem 2.1 and Gronwall’s inequality.

Lemma D.1

Suppose that conditions (H1)–(H2) hold. Then for any fixed \(T\in \mathbb {R}^+\), any \(x_0\) satisfying \(\Vert x_0\Vert _2\le B_0\) and any \(\nu >0\), there exist \(K>0\) and \(\varepsilon >0\) such that for all \(0<\epsilon <\varepsilon \) and \(t\in [0,T]\),

$$\begin{aligned} \mathbb {E}|X_\epsilon (t,\omega )|^2\le Kt+\nu . \end{aligned}$$


Proof Rewriting (11) in integral Itô form, we have

$$\begin{aligned} X^i(t,\omega )&=x^i_0+\int ^t_0f^i(s,X(s,\omega ))ds\\&\quad +\frac{1}{2}\sum ^m_{j=1}\sum ^{m}_{k=1}\int ^t_0g^{kj}(s,X(s,\omega ))\partial _kg^{ij}(s,X(s,\omega ))ds\\&\quad +\sum ^m_{j=1}\int ^t_0g^{ij}(s,X(s,\omega ))dW^j(s,\omega )\\&=:x^i_0+\int ^t_0F^i(s,X(s,\omega ))ds+\sum ^m_{j=1}\int ^t_0g^{ij}(s,X(s,\omega ))dW^j(s,\omega ) \end{aligned}$$

for \(i=1,\ldots ,m\). Here \(F(t,x)=(F^1(t,x),\ldots ,F^{m}(t,x))\) also satisfies conditions (H1)–(H2). Thus we have

$$\begin{aligned} \mathbb {E}\left| X(t,\omega )\right| ^2\le \mathbb {E}|x_0|^2+K(1+t)\int ^t_0(1+\mathbb {E}|X(s,\omega )|^2)ds. \end{aligned}$$

Therefore by Gronwall’s inequality, we have

$$\begin{aligned} \mathbb {E}|X(t,\omega )|^2\le K(\Vert x_0\Vert _2,T)t. \end{aligned}$$

Then, similarly to Lemma 3.2, for any \(x_0\) satisfying \(\Vert x_0\Vert _2\le B_0\), \(t\in [0,T]\) and \(\nu >0\), there exist \(K(B_0,T)>0\) and \(\varepsilon >0\) such that for \(0<\epsilon <\varepsilon \),

$$\begin{aligned} \mathbb {E}|X_\epsilon (t,\omega )|^2\le K(B_0,T)t+\nu . \end{aligned}$$

\(\square \)

Appendix E: Proof of Lemma 3.3

When proving the existence of periodic solutions for stochastic differential equations, we will apply Horn’s fixed point theorem.

Lemma E.1

([14, Theorem 6]) Suppose that \(S_0\subset S_1\subset S_2\) are convex subsets of a Banach space \(\mathcal {H}\), where \(S_0\), \(S_2\) are compact and \(S_1\) is open relative to \(S_2\). Suppose that \(\Gamma :S_2\rightarrow \mathcal {H}\) is a continuous mapping and there is an integer \(M_0\in \mathbb {N}\) such that

$$\begin{aligned}&\Gamma ^k(S_1)\subset S_2,~1\le k\le M_0-1,\\&\Gamma ^k(S_1)\subset S_0,~M_0\le k\le 2M_0-1. \end{aligned}$$

Then \(\Gamma \) admits a fixed point in \(S_0\).

Now we can prove Lemma 3.3.

Proof of Lemma 3.3

Throughout this proof, we adopt the following notation:

$$\begin{aligned} S_R:=\{x\in L^2(\mathbb {P},\mathbb {R}^{m}):\Vert x\Vert _2< R\}. \end{aligned}$$

Step 1: approximation of the initial state

Let \(w:\Omega \rightarrow \mathbb {R}\) be a one-dimensional random variable with standard normal distribution N(0, 1), defined on \((\Omega ,\mathcal {F},\mathbb {P})\) and independent of \(x_0\) and W. The density function and distribution function of w are

$$\begin{aligned} f_w(u)&=\frac{1}{\sqrt{2\pi }}e^{-\frac{u^2}{2}},\\ F_w(u)&=\int ^{u}_{-\infty }f_w(v)dv. \end{aligned}$$

Since \(F_w:\mathbb {R}\cup \{\pm \infty \}\rightarrow [0,1]\) is strictly increasing, \(F_w^{-1}: [0,1] \rightarrow \mathbb {R}\cup \{\pm \infty \}\) is well-defined. For any \(k\in \mathbb {N}_+\), let

$$\begin{aligned} U_{k,1}&=\left\{ \omega \in \Omega :w(\omega )\le F_w^{-1}\left( \frac{1}{2^k}\right) \right\} ,\\ U_{k,j+1}&=\left\{ \omega \in \Omega :w(\omega )\le F_w^{-1}\left( \frac{j+1}{2^k}\right) \right\} \Big \backslash \bigcup ^{j}_{i=1}U_{k,i},~j=1,2,\ldots ,2^k-1. \end{aligned}$$

Thus, for any \(j\in \{1,2,\ldots ,2^k\}\),

$$\begin{aligned} \mathbb {P}[\omega \in U_{k,j}]=\frac{1}{2^k}. \end{aligned}$$
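This equal-probability property can be checked directly from \(F_w\); a small sketch of the construction above, where Python's `statistics.NormalDist` plays the role of \(F_w\) and \(F_w^{-1}\):

```python
from statistics import NormalDist

# The sets U_{k,j} cut Omega into 2^k cells where w lands between
# consecutive 2^k-quantiles of N(0,1), so each cell has probability 2^-k.

def cell_probability(k, j):
    """P[w in U_{k,j}] for the standard normal w, 1 <= j <= 2^k."""
    N = NormalDist()
    upper = N.inv_cdf(j / 2**k) if j < 2**k else float("inf")
    lower = N.inv_cdf((j - 1) / 2**k) if j > 1 else float("-inf")
    hi = 1.0 if upper == float("inf") else N.cdf(upper)
    lo = 0.0 if lower == float("-inf") else N.cdf(lower)
    return hi - lo

for k in (1, 2, 5):
    for j in range(1, 2**k + 1):
        assert abs(cell_probability(k, j) - 2.0**-k) < 1e-7
```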

For any fixed \(x_0\in S_{B_1}\) denoted as \(x_0=(x_0^1,\ldots ,x_0^{m})\), let

$$\begin{aligned} \Omega _{k,0}&=\{\omega \in \Omega :\exists i=1,\ldots ,m~ s.t.~ |x_0^i(\omega )|\ge k^3\}\\ \Omega _{k,1}&=\{\omega \in \Omega :\forall i=1,\ldots ,m, |x_0^i(\omega )|<k^3\}. \end{aligned}$$

It is easy to see that \(\Omega _{k,0}\cup \Omega _{k,1}=\Omega \) and \(\Omega _{k,0}\cap \Omega _{k,1}=\emptyset \). By Chebyshev’s inequality,

$$\begin{aligned} \mathbb {P}[\omega \in \Omega _{k,0}]&\le \mathbb {P}[\omega \in \{\omega \in \Omega :|x_0|\ge k^3\}]\\&\le \frac{B_1^2}{k^6}. \end{aligned}$$

Let \(Z_k\) be the set of m-dimensional integer vectors \(\varvec{\lambda }=(\lambda ^1,\ldots ,\lambda ^{m})\) in the block \([-k^32^k,k^32^k)^{m}\). Take a set-valued map \(\Lambda _k:Z_k\rightarrow \sigma \left( \mathbb {R}^{m}\right) \) as

$$\begin{aligned} \Lambda _k({\varvec{\lambda }})=\Bigg [\frac{\lambda ^1}{2^k},\frac{\lambda ^1+1}{2^k}\Bigg )\times \cdots \times \Bigg [\frac{\lambda ^{m}}{2^k},\frac{\lambda ^{m}+1}{2^k}\Bigg )\cap (-k^3,k^3)^{m} \end{aligned}$$

for all \({\varvec{\lambda }}\in Z_k\). Let

$$\begin{aligned} V_{k,\varvec{\lambda }}:=\{\omega :x_0(\omega )\in \Lambda _k(\varvec{\lambda })\}. \end{aligned}$$


Define

$$\begin{aligned} \Omega _{k,j,\varvec{\lambda }}=U_{k,j}\cap V_{k,\varvec{\lambda }}. \end{aligned}$$

Then for any \(j_1\ne j_2\) and \(\varvec{\lambda }_1\ne \varvec{\lambda }_2\),

$$\begin{aligned} \Omega _{k,j_1,\varvec{\lambda }_1}\cap \Omega _{k,j_2,\varvec{\lambda }_2}=\emptyset . \end{aligned}$$


Moreover,

$$\begin{aligned} \bigcup _{\begin{array}{c} 1\le j\le 2^k\\ \varvec{\lambda }\in Z_k \end{array}}\Omega _{k,j,\varvec{\lambda }}=\Omega _{k,1}, \end{aligned}$$


and

$$\begin{aligned} \mathbb {P}[\omega \in \Omega _{k,j,\varvec{\lambda }}]\le \mathbb {P}[\omega \in U_{k,j}]\le \frac{1}{2^k}. \end{aligned}$$


Define the characteristic functions

$$\begin{aligned} \chi _{k,0}(\omega ):=\chi _{\Omega _{k,0}}(\omega )=\left\{ {\begin{matrix} 1,&{}\quad \omega \in \Omega _{k,0},\\ 0,&{}\quad \omega \notin \Omega _{k,0}, \end{matrix}} \right. \end{aligned}$$
$$\begin{aligned} \chi _{k,j,\varvec{\lambda }}(\omega ):=\chi _{\Omega _{k,j,\varvec{\lambda }}}(\omega )=\left\{ {\begin{matrix} 1,&{}\quad \omega \in \Omega _{k,j,\varvec{\lambda }},\\ 0,&{}\quad \omega \notin \Omega _{k,j,\varvec{\lambda }}. \end{matrix}} \right. \end{aligned}$$

Then \(\{\chi _{k,0}\}\cup \{\chi _{k,j,\varvec{\lambda }}\}_{1\le j\le 2^k}^{\varvec{\lambda }\in Z_k}\) spans a subspace of \(L^2(\mathbb {P},\mathbb {R}^{m})\) with finite dimension. Let

$$\begin{aligned} x_k=\sum _{\begin{array}{c} 1\le j\le 2^k\\ \varvec{\lambda }\in Z_k \end{array}}\left( \frac{2\lambda ^1+1}{2^{k+1}},\ldots ,\frac{2\lambda ^{m}+1}{2^{k+1}}\right) ^\top \chi _{k,j,\varvec{\lambda }}. \end{aligned}$$

It is clear that \(x_k\in L^2(\mathbb {P},\mathbb {R}^{m})\) for all \(k\in \mathbb {N}_+\). Then for any \(k\in \mathbb {N}_+\) and almost every \(\omega \in \Omega _{k,1}\),

$$\begin{aligned} |x_0(\omega )-x_k(\omega )|\le \frac{m^{\frac{1}{2}}}{2^{k+1}}. \end{aligned}$$
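The bound is just the rounding error of replacing a point by the center of its dyadic cell of side \(2^{-k}\); a direct check with a toy vector of our choosing (m = 3):

```python
import math

# x_k replaces x_0(omega) by the center of the dyadic cell of side 2^-k
# containing it, so on Omega_{k,1} the pointwise error is at most
# sqrt(m) / 2^{k+1}: each coordinate is off by at most half a cell side.

def dyadic_center(v, k):
    """Center of the dyadic cell [lambda/2^k, (lambda+1)/2^k)^m containing v."""
    return [(math.floor(c * 2**k) + 0.5) / 2**k for c in v]

m, k = 3, 4
v = [0.731, -2.055, 1.414]
err = math.dist(v, dyadic_center(v, k))
assert err <= math.sqrt(m) / 2**(k + 1) + 1e-12
```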


Moreover,

$$\begin{aligned} \mathbb {P}[\omega \in \Omega _{k,1}]&=1-\mathbb {P}[\omega \in \Omega _{k,0}]\\&\ge 1-\frac{B_1^2}{k^6}\\&\rightarrow 1\;as\;k\rightarrow \infty . \end{aligned}$$


Thus

$$\begin{aligned} x_k\xrightarrow {~\text {a.s.}~} x_0 ~ as~ k\rightarrow \infty . \end{aligned}$$


Furthermore,

$$\begin{aligned} \Vert x_{k+1}-x_{k}\Vert _2^2&\le \int _{\Omega _{k,1}}\frac{m}{2^{2k+2}}d\mathbb {P}+\int _{\Omega _{k+1,1}\backslash \Omega _{k,1}}(k+1)^3d\mathbb {P}\\&\le \frac{m}{2^{2k+2}}+(k+1)^3\mathbb {P}[\omega \in \Omega _{k,0}]\\&\le \frac{m}{2^{2k+2}}+\frac{B_1^2(k+1)^3}{k^6}\\&\le \frac{m}{2^{2k+2}}+\frac{8B_1^2}{k^3}. \end{aligned}$$

Then for any \(k_1<k_2\),

$$\begin{aligned} \Vert x_{k_2}-x_{k_1}\Vert _2&\le \sum ^{k_2-1}_{k=k_1}\Vert x_{k+1}-x_k\Vert _2\\&\le \sum ^{k_2-1}_{k=k_1}\left( \frac{m}{2^{2k+2}}+\frac{8B_1^2}{k^3}\right) ^{\frac{1}{2}}\\&\le m^{\frac{1}{2}}\sum ^{k_2-1}_{k=k_1}\frac{1}{2^{k+1}}+2\sqrt{2}B_1\sum ^{k_2-1}_{k=k_1}\frac{1}{k^{\frac{3}{2}}}\\&\le \frac{m^{\frac{1}{2}}}{2^{k_1}}+\frac{2\sqrt{2}B_1}{k_1^{\frac{1}{2}}}\\&\rightarrow 0~as~k_1,k_2\rightarrow \infty . \end{aligned}$$

Thus, \(\{x_k\}\) is a Cauchy sequence in \(L^2(\mathbb {P},\mathbb {R}^{m})\). Therefore,

$$\begin{aligned} x_k\xrightarrow {~2~} x_0~as~k\rightarrow \infty . \end{aligned}$$

In the rest of this proof, we adopt some further notation. For \(k\in \mathbb {N}\),

$$\begin{aligned} L^2_k(\mathbb {P},\mathbb {R}^{m})&:=\text {span}\Big (\{\chi _{k,0}\}\cup \{\chi _{k,j,\varvec{\lambda }}:1\le j\le 2^k, \varvec{\lambda }\in Z_k\}\Big ),\\ \sigma _k&:=\sigma \Big (\{\Omega _{k,0}\}\cup \{\Omega _{k,j,\varvec{\lambda }}:1\le j\le 2^k, \varvec{\lambda }\in Z_k\}\Big ). \end{aligned}$$

Then for any fixed \(k\in \mathbb {N}\), \(L^2_k(\mathbb {P},\mathbb {R}^{m})\) is a finite-dimensional subspace of \(L^2(\mathbb {P},\mathbb {R}^{m})\), which can be identified with \(\mathbb {R}^{m_k}\), where

$$\begin{aligned} m_k=k^{3m}2^{mk+k+m}+1. \end{aligned}$$

\(\{\sigma _k\}_{k\in \mathbb {N}}\) is an increasing family of sub-\(\sigma \)-algebras of \(\mathcal {F}\). Moreover,

$$\begin{aligned} \bigvee _{k=0}^\infty \sigma _k=\mathcal {F}. \end{aligned}$$

Step 2 estimate of \({\varvec{X_\epsilon (t,\omega ,x_k)}}\)

Since (14) is \(B_0\)-dissipative,

$$\begin{aligned} \Vert X_\epsilon (t,\omega ,x_0)\Vert _2\le B_0,~t\in [T_{B_1},N\mathcal {T}]. \end{aligned}$$

Since \(\lim \limits _{k\rightarrow \infty }\Vert x_k-x_0\Vert _2=0\), for any \(\varrho >0\) there exists a \(k_{\varrho }\in \mathbb {N}_+\) such that for any \(k>k_{\varrho }\)

$$\begin{aligned} \Vert x_k-x_0\Vert _2<\varrho . \end{aligned}$$

Thus, we have

$$\begin{aligned} \Vert X_\epsilon (t,\omega ,x_k)\Vert _2&\le \Vert X_\epsilon (t,\omega ,x_k)-X(t,\omega ,x_k)\Vert _2+\Vert X(t,\omega ,x_k)-X(t,\omega ,x_0)\Vert _2\\&\quad +\Vert X(t,\omega ,x_0)-X_\epsilon (t,\omega ,x_0)\Vert _2+\Vert X_\epsilon (t,\omega ,x_0)\Vert _2\\&\le [K(B_1+\varrho ,N\mathcal {T})+K(B_1,N\mathcal {T})]o(1)+B_0\\&\quad +\Vert X(t,\omega ,x_k)-X(t,\omega ,x_0)\Vert _2. \end{aligned}$$

By (H1)–(H2),

$$\begin{aligned}&\mathbb {E}|X(t,\omega ,x_k)-X(t,\omega ,x_0)|^2\\&\le K(l)\mathbb {E}|x_k-x_0|^2\\&\quad +K(m)\sum ^{m}_{i=1}\mathbb {E}\left| \int ^t_0(f^i(s,X(s,\omega ,x_k))-f^i(s,X(s,\omega ,x_0)))ds\right| ^2\\&\quad +K(m)\sum ^{m}_{i=1}\mathbb {E}\left| \sum ^m_{j=1}\int ^t_0(g^{ij}(s,X(s,\omega ,x_k))-g^{ij}(s,X(s,\omega ,x_0)))dW^j(s,\omega )\right| ^2\\&\quad +K(m)\mathbb {E}\left| \sum ^m_{j=1}\sum ^{m}_{\alpha =1}\int ^t_0(g^{\alpha j}\partial _\alpha g^{ij}(s,X(s,\omega ,x_k))-g^{\alpha j}\partial _\alpha g^{ij}(s,X(s,\omega ,x_0)))ds\right| ^2\\&\le K(m)\Vert x_k-x_0\Vert ^2_2+K(m,m)(1+N\mathcal {T})\int ^t_0\mathbb {E}|X(s,\omega ,x_k)-X(s,\omega ,x_0)|^2ds. \end{aligned}$$

Then by Gronwall’s inequality,

$$\begin{aligned} \mathbb {E}|X(t,\omega ,x_k)-X(t,\omega ,x_0)|^2&\le \Vert x_k-x_0\Vert _2^2K(m,m)(1+N\mathcal {T})t\\&\le K(m,m)(1+N\mathcal {T})^2\varrho ^2. \end{aligned}$$

Hence,
$$\begin{aligned} \Vert X(t,\omega ,x_k)-X(t,\omega ,x_0)\Vert _2\le K(m,m)(1+N\mathcal {T})\varrho . \end{aligned}$$

Therefore,
$$\begin{aligned} \Vert X_\epsilon (t,\omega ,x_k)\Vert _2\le K(B_1+\varrho ,N\mathcal {T})o(1)+K(m,m)(1+N\mathcal {T})\varrho +B_0. \end{aligned}$$

Then for any \(\nu >0\), choose \(\varrho ,\epsilon _{B_1,\varrho ,N}>0\) small enough that

$$\begin{aligned} \varrho \le \frac{\nu }{2K(m,m)(1+N\mathcal {T})}, \end{aligned}$$

and
$$\begin{aligned} K(B_1+\varrho ,N\mathcal {T})o(1)<\frac{\nu }{2}. \end{aligned}$$

Then by Lemma B.1 ii, for \(k>k_\varrho \), \(0<\epsilon <\min \{\epsilon _{\nu ,N},\epsilon _{B_1,\varrho ,N}\}\), \(n\in \mathbb {N}\) and \(t\in [T_{B_1},N\mathcal {T}]\),

$$\begin{aligned} \Vert X_{\epsilon ,n}(t,\omega ,x_k)\Vert _2\le \Vert X_\epsilon (t,\omega ,x_k)\Vert _2\le B_0+\nu . \end{aligned}$$

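Both estimates in this step rest on Gronwall's inequality. As a sanity check (our own toy computation, with arbitrary constants `a`, `b`, `h` that appear nowhere in the paper), the discrete analogue below saturates the integral inequality and still stays under the exponential bound:

```python
import numpy as np

# Discrete Gronwall: if u_n <= a + b*h*sum_{j<n} u_j, then u_n <= a*exp(b*h*n).
a, b, h, N = 2.0, 1.5, 0.01, 500
u = np.empty(N)
u[0] = a
for n in range(1, N):
    u[n] = a + b * h * u[:n].sum()  # worst case: the inequality holds with equality
bound = a * np.exp(b * h * np.arange(N))
assert np.all(u <= bound)  # the Gronwall bound dominates the worst-case sequence
```

The worst-case sequence is exactly \(a(1+bh)^n\), which sits below \(ae^{bhn}\) because \(1+x\le e^x\).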
Step 3 construction of Poincaré map

Without loss of generality, assume that for any \(k\in \mathbb {N}\), \(1\le j\le 2^k\) and \(\varvec{\lambda }\in Z_k\), \(\Omega _{k,0}\) and \(\Omega _{k,j,\varvec{\lambda }}\) are not null sets. By Lemma 2.1 (1) and (3), we know that for all \(t\in \mathbb {R}_+\) and \(x_0\in L^2(\mathbb {P},\mathbb {R}^{m})\), \(X_\epsilon (t,\omega ,x_0)\) is in \(L^2(\mathbb {P},\mathbb {R}^{m})\) and is \(\mathcal {F}^{t+\epsilon }_0\)-measurable. It follows that \(\mathbb {E}[X_\epsilon (t,\omega ,x_{k_1})|\sigma _{k_2}]\) exists for any \(k_1\le k_2\). Let

$$\begin{aligned} X_{\epsilon ,n}(t,\omega ,x_k):=\mathbb {E}[X_\epsilon (t,\omega ,x_{k})|\sigma _n] \end{aligned}$$

for \(k\le n\). Since \(\sigma _k\) is generated by finitely many subsets of \(\Omega \), \(X_{\epsilon ,n}\) can be represented as

$$\begin{aligned} \mathbb {E}[X_\epsilon (t,\omega ,x_k)|\Omega _{k,0}]\chi _{k,0}+\sum _{\begin{array}{c} 1\le j\le 2^k\\ \varvec{\lambda }\in Z_k \end{array}}\mathbb {E}[X_\epsilon (t,\omega ,x_k)|\Omega _{k,j,\varvec{\lambda }}]\chi _{k,j,\varvec{\lambda }}. \end{aligned}$$

Thus, \(X_{\epsilon ,n}(t,\omega ,x_k)\in L^2_k(\mathbb {P},\mathbb {R}^{m})\). Since \(L^2_k(\mathbb {P},\mathbb {R}^{m})\subset L^2_n(\mathbb {P},\mathbb {R}^{m})\) for \(k\le n\), define Poincaré maps \(P_n:S_R\cap L^2_n(\mathbb {P},\mathbb {R}^{m})\rightarrow P_n(S_R\cap L^2_n(\mathbb {P},\mathbb {R}^{m}))\) as

$$\begin{aligned} P_n(x_k):=X_{\epsilon ,n}(\mathcal {T},\omega ,x_k). \end{aligned}$$

Now we need to verify two assertions below:

  1. (a)

    \(P_n(x_k)\xrightarrow {~2~}X_\epsilon (\mathcal {T},\omega ,x_k)\) as \(n\rightarrow \infty \);

  2. (b)

    \(\lim \limits _{n\rightarrow \infty }d_{BL}\Big (p_{P_n^\zeta (x_k)},p_{X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k)}\Big )=0\) for any fixed \(k\in \mathbb {N}\) and \(\zeta =2,3,\ldots .\)

We first deal with Assertion (a). Since \(\{\sigma _k\}\) is an increasing family of \(\sigma \)-algebras, by Lemma B.1 i we have

$$\begin{aligned} \mathbb {E}[P_{n_2}(x_k)|\sigma _{n_1}]&=\mathbb {E}[X_{\epsilon ,n_2}(\mathcal {T},\omega ,x_k)|\sigma _{n_1}]\\&=\mathbb {E}[\mathbb {E}[X_\epsilon (\mathcal {T},\omega ,x_k)|\sigma _{n_2}]|\sigma _{n_1}]\\&=\mathbb {E}[X_\epsilon (\mathcal {T},\omega ,x_k)|\sigma _{n_1}]\\&=P_{n_1}(x_k) \end{aligned}$$

for all \(n_1\le n_2\). Thus, \(\{P_n(x_k)\}_{n=1}^\infty \) is a discrete martingale for all \(k\in \mathbb {N}\) with respect to \(\{\sigma _n\}_{n=1}^\infty \). Then by Lemma B.2, \(\{P_n(x_k)\}\) converges both a.s. and in \(L^2(\mathbb {P},\mathbb {R}^{m})\). Moreover,

$$\begin{aligned} \lim _{n\rightarrow \infty }P_n(x_k)&=\mathbb {E}\left[ X_\epsilon (\mathcal {T},\omega ,x_k)\Bigg |\bigvee ^{\infty }_{n=1}\sigma _n\right] \\&=\mathbb {E}[X_\epsilon (\mathcal {T},\omega ,x_k)|\mathcal {F}]\\&=X_\epsilon (\mathcal {T},\omega ,x_k) \end{aligned}$$

a.s. and in \(L^2(\mathbb {P},\mathbb {R}^{m})\).
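The convergence of \(\{P_n(x_k)\}\) is an instance of Lévy's upward theorem: conditioning on an increasing family of \(\sigma \)-algebras whose join is the full \(\sigma \)-algebra recovers the random variable in \(L^2\). A toy version (our own example, with \(\sigma _n\) generated by the \(2^n\) dyadic cells of \([0,1)\), not the paper's \(\sigma _n\)) is:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=200_000)
y = np.sin(2 * np.pi * u)  # target L^2 random variable, measurable w.r.t. the join

def cond_exp(y, u, n):
    # Empirical E[y | sigma_n], sigma_n generated by the 2**n dyadic cells of [0, 1)
    cells = np.floor(u * 2**n).astype(int)
    counts = np.bincount(cells, minlength=2**n)
    means = np.bincount(cells, weights=y, minlength=2**n) / counts
    return means[cells]

l2_errs = [np.sqrt(np.mean((cond_exp(y, u, n) - y) ** 2)) for n in (1, 3, 5, 7)]
assert all(e2 < e1 for e1, e2 in zip(l2_errs, l2_errs[1:]))  # martingale closes in L^2
```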

For Assertion (b), we use induction. Consider first the case \(\zeta =2\). Note that

$$\begin{aligned} P^2_n(x_k)-X_{\epsilon ,n}(2\mathcal {T},\omega ,x_k)&=X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon ,n}(\mathcal {T},\omega ,x_k))-X_{\epsilon ,n}(2\mathcal {T},\omega ,x_k)\\&=X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon ,n}(\mathcal {T},\omega ,x_k))-X_\epsilon (\mathcal {T},\omega ,X_{\epsilon ,n}(\mathcal {T},\omega ,x_k))\\&\quad +X_\epsilon (\mathcal {T},\omega ,X_{\epsilon ,n}(\mathcal {T},\omega ,x_k))-X_\epsilon (\mathcal {T},\omega ,X_\epsilon (\mathcal {T},\omega ,x_k))\\&\quad +X_\epsilon (\mathcal {T},\omega ,X_\epsilon (\mathcal {T},\omega ,x_k))-X_\epsilon (2\mathcal {T},\omega ,x_k)\\&\quad +X_\epsilon (2\mathcal {T},\omega ,x_k)-X_{\epsilon ,n}(2\mathcal {T},\omega ,x_k)\\&=:\Delta _1+\Delta _2+\Delta _3+\Delta _4. \end{aligned}$$

Since \(X_\epsilon (\cdot ,\cdot ,\cdot )\) generates a continuous cocycle and \(\mathbb {P}\) is invariant under \(\theta _{\cdot }\), we have

$$\begin{aligned} X_\epsilon (\mathcal {T},\omega ,X_\epsilon (\mathcal {T},\omega ,x_k))\mathop {=\!=\!=\!=}\limits ^{d}&X_\epsilon (\mathcal {T},\theta _{\mathcal {T}}\omega ,X_\epsilon (\mathcal {T},\omega ,x_k))\\ {\mathop {=\!=\!=\!=}\limits ^{a.s.}}&X_\epsilon (2\mathcal {T},\omega ,x_k). \end{aligned}$$

Thus, \(\Delta _3\,\mathop {=\!=\!=\!=}\limits ^{d}\, 0\). Following the discussion of Assertion (a), we have \(\Delta _4\xrightarrow {~2~} 0\) as \(n\rightarrow \infty \). For \(\Delta _2\), we have

$$\begin{aligned} X_{\epsilon ,n}(\mathcal {T},\omega ,x_k)\rightarrow X_\epsilon (\mathcal {T},\omega ,x_k)~as~n\rightarrow \infty \end{aligned}$$

a.s. and in \(L^2(\mathbb {P},\mathbb {R}^{m})\), as for \(\Delta _4\). Therefore, by a standard argument similar to the uniqueness proof for random and ordinary differential equations under Lipschitz conditions, we have

$$\begin{aligned} X_\epsilon (\mathcal {T},\omega ,X_{\epsilon ,n}(\mathcal {T},\omega ,x_k))\rightarrow X_\epsilon (\mathcal {T},\omega ,X_\epsilon (\mathcal {T},\omega ,x_k))~as~n\rightarrow \infty \end{aligned}$$

a.s. and in \(L^2(\mathbb {P},\mathbb {R}^{m})\). Thus, \(\Delta _2\xrightarrow {~2~} 0\) as \(n\rightarrow \infty \). For \(\Delta _1\), note that

$$\begin{aligned} 0&\le \Vert \Delta _1\Vert _2\\&\le \Vert X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon ,n}(\mathcal {T},\omega ,x_k))-X_{\epsilon ,n}(\mathcal {T},\omega ,X_\epsilon (\mathcal {T},\omega ,x_k))\Vert _2\\&\quad +\Vert X_{\epsilon ,n}(\mathcal {T},\omega ,X_\epsilon (\mathcal {T},\omega ,x_k))-X_{\epsilon }(\mathcal {T},\omega ,X_\epsilon (\mathcal {T},\omega ,x_k))\Vert _2\\&\quad +\Vert X_{\epsilon }(\mathcal {T},\omega ,X_\epsilon (\mathcal {T},\omega ,x_k))-X_{\epsilon }(\mathcal {T},\omega ,X_{\epsilon ,n}(\mathcal {T},\omega ,x_k))\Vert _2\\&=:\Delta _{11}+\Delta _{12}+\Delta _{13}. \end{aligned}$$

Following the discussion about \(\Delta _2\) and \(\Delta _4\), we have

$$\begin{aligned}&\Delta _{12}\xrightarrow {~2~} 0,\\&\Delta _{13}\xrightarrow {~2~} 0 \end{aligned}$$

as \(n\rightarrow \infty \). Moreover, by Lemma B.1 ii, we have \(\Vert \Delta _{11}\Vert _2\rightarrow 0\) as \(n\rightarrow \infty \). Therefore, \(\Delta _1\xrightarrow {~2~} 0\) as \(n\rightarrow \infty \). Hence,

$$\begin{aligned} \lim _{n\rightarrow \infty }d_{BL}\Big (p_{P_n^\zeta (x_k)}, p_{X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k)}\Big )=0 \end{aligned}$$

for any fixed \(k\in \mathbb {N}\). This means that Assertion (b) holds for \(\zeta =2\).

Assume that Assertion (b) holds for all \(2\le \tilde{\zeta }\le \zeta \), where \(\zeta \ge 2\). We only need to verify that

$$\begin{aligned} \lim _{n\rightarrow \infty }d_{BL}\Big (p_{P_n^{\zeta +1}(x_k)}, p_{X_{\epsilon ,n}((\zeta +1)\mathcal {T},\omega ,x_k)}\Big )=0. \end{aligned}$$

Note that

$$\begin{aligned} P^{\zeta +1}_n(x_k)&=P_n\circ P^{\zeta }_n(x_k)\\&=X_{\epsilon ,n}(\mathcal {T},\omega ,P^\zeta _n(x_k)). \end{aligned}$$

Then
$$\begin{aligned}&P^{\zeta +1}_n(x_k)-X_{\epsilon ,n}((\zeta +1)\mathcal {T},\omega ,x_k)\\&=X_{\epsilon ,n}(\mathcal {T},\omega ,P^\zeta _n(x_k))-X_{\epsilon ,n}((\zeta +1)\mathcal {T},\omega ,x_k)\\&=X_{\epsilon ,n}(\mathcal {T},\omega ,P^\zeta _n(x_k))-X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k))\\&\quad +X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k))-X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k))\\&\quad +X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k))-X_{\epsilon ,n}((\zeta +1)\mathcal {T},\omega ,x_k)\\&=:\Delta ^{\zeta }_1+\Delta ^\zeta _2+\Delta ^\zeta _3. \end{aligned}$$

For \(\Delta ^\zeta _1\), observe that

$$\begin{aligned} P^\zeta _n(x_k)-X_\epsilon (\zeta \mathcal {T},\omega ,x_k)&=P^\zeta _n(x_k)-X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k)\\&\quad +X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k)-X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k). \end{aligned}$$

By the induction hypothesis,

$$\begin{aligned} \lim _{n\rightarrow \infty }d_{BL}\Big (p_{P^\zeta _n(x_k)},p_{X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k)}\Big )=0. \end{aligned}$$

Moreover,
$$\begin{aligned} X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k)\xrightarrow {~2~}X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k). \end{aligned}$$

Therefore,
$$\begin{aligned} \lim _{n\rightarrow \infty }d_{BL}\Big (p_{P^\zeta _n(x_k)},p_{X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k)}\Big )=0. \end{aligned}$$

This leads to

$$\begin{aligned} \lim _{n\rightarrow \infty }\Big (\Vert P^\zeta _n(x_k)\Vert _2-\Vert X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k)\Vert _2\Big )=0. \end{aligned}$$

Then by Skorokhod’s representation theorem (Lemma A.2), there exist another probability space \((\tilde{\Omega },\tilde{\mathcal {F}},\tilde{\mathbb {P}})\) and random variables \(X^\zeta _k\) and \(\{P^\zeta _{k,n}\}\) on it such that

$$\begin{aligned} p_{X^\zeta _k}\Big |_{\tilde{\mathbb {P}}}&=p_{X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k)},\\ p_{P^\zeta _{k,n}}\Big |_{\tilde{\mathbb {P}}}&=p_{P^\zeta _n(x_k)} \end{aligned}$$

and
$$\begin{aligned} P^\zeta _{k,n}\xrightarrow {a.s.}X^\zeta _k \end{aligned}$$

as \(n\rightarrow \infty \). Furthermore, by (E.34) we have

$$\begin{aligned} \Vert P^\zeta _{k,n}\Vert _2-\Vert X^\zeta _k\Vert _2&=\Vert P^\zeta _n(x_k)\Vert _2-\Vert X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k)\Vert _2\\&\le \Vert P^\zeta _n(x_k)\Vert _2-\Vert X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k)\Vert _2\\&\quad +\Vert X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k)-X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k)\Vert _2\\&\rightarrow 0~as~n\rightarrow \infty . \end{aligned}$$

Therefore, for any \(\varphi >0\) there exists \(n_0>0\) such that for \(n\ge n_0\),

$$\begin{aligned} \Vert P^\zeta _{k,n}\Vert _2\le \Vert X^\zeta _k\Vert _2+\varphi . \end{aligned}$$

Together with (E.35), there is a subsequence of \(\{P^\zeta _{k,n}\}\) (still denoted by itself without loss of generality) such that

$$\begin{aligned} P^\zeta _{k,n}\xrightarrow {~2~}X^\zeta _k \end{aligned}$$

as \(n\rightarrow \infty \). Moreover, by an argument similar to that for \(\Delta _{11}\),

$$\begin{aligned} \Vert X_{\epsilon ,n}(\mathcal {T},\omega ,P^\zeta _{k,n})-X_{\epsilon ,n}(\mathcal {T},\omega ,X^\zeta _k)\Vert _2\rightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \). Thus,

$$\begin{aligned}&\lim _{n\rightarrow \infty }d_{BL}\Big (p_{X_{\epsilon ,n}(\mathcal {T},\omega ,P^\zeta _n(x_k))},p_{X_{\epsilon ,n}(\mathcal {T},\omega ,X_\epsilon (\zeta \mathcal {T},\omega ,x_k))}\Big )\\&=\lim _{n\rightarrow \infty }d_{BL}\Big (p_{X_{\epsilon ,n}(\mathcal {T},\omega ,P^\zeta _{k,n})},p_{X_{\epsilon ,n}(\mathcal {T},\omega ,X^\zeta _k)}\Big )\\&=0. \end{aligned}$$

By uniqueness of solutions for (14), we have \(\Vert \Delta ^\zeta _2\Vert _2\rightarrow 0\) as \(n\rightarrow \infty \). For \(\Delta _3^\zeta \), note that

$$\begin{aligned} \Delta ^\zeta _3&=X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon ,n}(\zeta \mathcal {T},\omega ,x_k))-X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k))\\&\quad +X_{\epsilon ,n}(\mathcal {T},\omega ,X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k))-X_{\epsilon }(\mathcal {T},\omega ,X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k))\\&\quad +X_{\epsilon }(\mathcal {T},\omega ,X_{\epsilon }(\zeta \mathcal {T},\omega ,x_k))-X_\epsilon ((\zeta +1)\mathcal {T},\omega ,x_k)\\&\quad +X_\epsilon ((\zeta +1)\mathcal {T},\omega ,x_k)-X_{\epsilon ,n}((\zeta +1)\mathcal {T},\omega ,x_k). \end{aligned}$$

By an argument similar to that for \(\Delta _3\), we get that \(\Delta ^\zeta _3\xrightarrow {~d~}0\) as \(n\rightarrow \infty \). Therefore,

$$\begin{aligned} \lim _{n\rightarrow \infty }d_{BL}\Big (p_{P^{\zeta +1}_n(x_k)},p_{X_{\epsilon ,n}((\zeta +1)\mathcal {T},\omega ,x_k)}\Big )=0. \end{aligned}$$

Hence by induction, Assertion (b) holds for all \(k\in \mathbb {N}\) and \(\zeta =2,3,\ldots \).

Now for any \(x\in S_R\), let

$$\begin{aligned} \bar{x}_k:=\mathbb {E}[x|\sigma _k],~k\in \mathbb {N}. \end{aligned}$$

Then \(\bar{x}_k\in L^2_k(\mathbb {P},\mathbb {R}^{m})\). Define the Poincaré map \(P_\epsilon :S_R\rightarrow L^2(\mathbb {P},\mathbb {R}^{m})\) as

$$\begin{aligned} P_\epsilon (x):=X_\epsilon (\mathcal {T},\omega ,x). \end{aligned}$$

Then
$$\begin{aligned} \lim _{k\rightarrow \infty }\lim _{n\rightarrow \infty }P_{n}(\bar{x}_k)=P_\epsilon (x) \end{aligned}$$

a.s. and in \(L^2(\mathbb {P},\mathbb {R}^{m})\). Thus, \(P_\epsilon \) is weakly continuous and weakly compact. Moreover, by Assertion (b),

$$\begin{aligned} p_{P^\zeta _\epsilon (x)}=p_{X_\epsilon (\zeta \mathcal {T},\omega ,x)} \end{aligned}$$

for all \(\zeta =2,3,\ldots \).

Step 4 existence of periodic solution in distribution

Let \(\nu =\frac{1}{2}\) in Step 2. Let \(B_1>B_0+1>0\) be the constants in Definition 2.2. Let

$$\begin{aligned} S_{R,n}:=L^2_n(\mathbb {P},\mathbb {R}^{m})\cap S_{R}. \end{aligned}$$

It can be verified that

$$\begin{aligned} \bar{S}_{R,n}&=L^2_n(\mathbb {P},\mathbb {R}^{m})\cap \bar{S}_R\\&=\{x\in L^2_n(\mathbb {P},\mathbb {R}^{m}):\Vert x\Vert _2\le R\}. \end{aligned}$$

By Step 2 and Lemma B.1 ii,
$$\begin{aligned} \Vert X_{\epsilon ,n}(t,\omega ,x_0)\Vert _2&\le \Vert X_\epsilon (t,\omega ,x_0)\Vert _2\le B_0,\\ \Vert X_{\epsilon ,n}(t,\omega ,x_k)\Vert _2&\le \Vert X_\epsilon (t,\omega ,x_k)\Vert _2\le B_0+\frac{1}{2} \end{aligned}$$

for all \(n\in \mathbb {N}\), \(0<\epsilon <\epsilon _{\nu ,N}\) and \(t\in [T_{B_1},N\mathcal {T}]\). Thus by (E.38), there exists \(N_1\in \mathbb {N}\) such that \(N_1\mathcal {T}\ge T_{B_1}>(N_1-1)\mathcal {T}\), \(N_1<N\) and

$$\begin{aligned} \Vert P^\zeta _\epsilon (x_0)\Vert _2=\Vert X_\epsilon (\zeta \mathcal {T},\omega ,x_0)\Vert _2\le B_0 \end{aligned}$$

for all \(\zeta \ge N_1\) and all \(x_0\in S_{B_1}\). Moreover, by the definition of \(P_\epsilon \), we have

$$\begin{aligned} \lim _{k\rightarrow \infty }\lim _{n\rightarrow \infty }P_n(x_k)=P_\epsilon (x_0) \end{aligned}$$

a.s. and in mean square. On the one hand, for all \(k\in \mathbb {N}\), \(\Vert x_k\Vert _2\le \Vert x_0\Vert _2\le B_1\). Thus, \(x_k\) converges to \(x_0\) in mean square uniformly on \(S_{B_1}\). That is, for any \(\vartheta >0\) there exists \(k_0\in \mathbb {N}\) such that for all \(k\ge k_0\) and \(x_0\in S_{B_1}\),

$$\begin{aligned} \Vert x_k-x_0\Vert _2\le \vartheta . \end{aligned}$$

Hence, for all \(t\in [0,T]\),

$$\begin{aligned}&\Vert X_\epsilon (t,\omega ,x_k)-X_\epsilon (t,\omega ,x_0)\Vert _2^2\\&\le K\Vert x_k-x_0\Vert ^2_2+K_\epsilon \int ^t_0\Vert X_\epsilon (s,\omega ,x_k)-X_\epsilon (s,\omega ,x_0)\Vert _2^2ds, \end{aligned}$$

where K and \(K_\epsilon \) are independent of k and \(x_0\). Therefore, by Gronwall’s inequality,

$$\begin{aligned} \Vert X_\epsilon (t,\omega ,x_k)-X_\epsilon (t,\omega ,x_0)\Vert _2^2&\le K_\epsilon \Vert x_k-x_0\Vert _2^2\\&\le K_\epsilon \vartheta ^2. \end{aligned}$$

That is, \(X_\epsilon (t,\omega ,x_k)\) converges to \(X_\epsilon (t,\omega ,x_0)\) in mean square uniformly with respect to \(t\in [0,T]\) and \(x_0\in S_{B_1}\) as \(k\rightarrow \infty \). On the other hand, note that

$$\begin{aligned} \Vert X_{\epsilon ,n}(t,\omega ,x_k)\Vert ^2_2&\le \Vert X_{\epsilon }(t,\omega ,x_k)\Vert ^2_2\\&\le K\Vert x_k\Vert ^2_2+K_\epsilon +K\int ^t_0\Vert X_{\epsilon }(s,\omega ,x_k)\Vert ^2_2ds\\&\le K(B_1,\epsilon )+K\int ^t_0\Vert X_{\epsilon }(s,\omega ,x_k)\Vert ^2_2ds. \end{aligned}$$

Hence by Gronwall’s inequality,

$$\begin{aligned} \Vert X_{\epsilon ,n}(t,\omega ,x_k)\Vert ^2_2\le K(B_1,\epsilon ). \end{aligned}$$

Thus by Lemma B.2, \(X_{\epsilon ,n}(t,\omega ,x_k)\) converges to \(X_\epsilon (t,\omega ,x_k)\) in mean square uniformly with respect to \(x_0\in S_{B_1}\) and \(t\in [0,T]\). Therefore for any fixed \(\zeta \), \(P^\zeta _n(x_k)\) converges to \(P^\zeta _\epsilon (x_0)\) uniformly on \(S_{B_1}\). Hence for any fixed \(\zeta \) such that \(\zeta \mathcal {T}\ge T_{B_1}\), there exist \(\tilde{k}_0\) and \(n_0\) such that for all \(x_0\in S_{B_1}\), \(k\ge \tilde{k}_0\) and \(n\ge n_0\),

$$\begin{aligned} \Vert P^\zeta _n(x_k)\Vert _2&\le \Vert P^\zeta _\epsilon (x_0)\Vert _2+\Vert P^\zeta _\epsilon (x_0)-P^\zeta _n(x_k)\Vert _2\\&\le \Vert X_\epsilon (\zeta \mathcal {T},\omega ,x_0)\Vert _2+\frac{1}{2}\\&\le B_0+\frac{1}{2}. \end{aligned}$$

Meanwhile, we also have, for all \(\zeta \) such that \(\zeta \mathcal {T}\le T_{B_1}\), \(\epsilon \) small enough and all \(x_0\in S_{B_1}\),

$$\begin{aligned} \Vert P^\zeta _n(x_k)\Vert _2&\le \Vert X_\epsilon (\zeta \mathcal {T},\omega ,x_0)\Vert _2+\Vert P_\epsilon ^\zeta (x_0)-P^\zeta _n(x_k)\Vert _2\\&\le K(B_1,T_{B_1})+\frac{1}{2}, \end{aligned}$$

by (D.29).

Let \(B_2=\max \{K(B_1,T_{B_1})+\frac{1}{2},B_1\}\). Then \(\bar{S}_{B_0+\frac{1}{2},n}\subset S_{B_1,n}\subset \bar{S}_{B_2,n}\) are convex subsets of \(L^2_n(\mathbb {P},\mathbb {R}^{m})\), with \(\bar{S}_{B_0+\frac{1}{2},n}\) and \(\bar{S}_{B_2,n}\) compact and \(S_{B_1,n}\) open relative to \(\bar{S}_{B_2,n}\). Meanwhile, \(\bar{S}_{B_0+\frac{1}{2},n}\) and \(\bar{S}_{B_2,n}\) are compact with respect to the topology of \(\mathcal {P}(\mathbb {R}^{m})\). Moreover,

$$\begin{aligned}&P^\zeta _\epsilon (S_{B_1,n})\subset \bar{S}_{B_2,n},~1\le \zeta \le N_1-1,\\&P^\zeta _\epsilon (S_{B_1,n})\subset \bar{S}_{B_0+\frac{1}{2},n},~N_1\le \zeta \le N. \end{aligned}$$

Then by Horn’s fixed point theorem (Lemma E.1), there exists a random variable \(x_{\epsilon ,\mathcal {T},n}\in \bar{S}_{B_0+\frac{1}{2},n}\) such that

$$\begin{aligned} X_{\epsilon ,n}(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n})=P_n(x_{\epsilon ,\mathcal {T},n})\mathop {=\!=\!=\!=}\limits ^{d}x_{\epsilon ,\mathcal {T},n}. \end{aligned}$$

Since \(\{x_{\epsilon ,\mathcal {T},n}\}_{n=1}^\infty \subset \bar{S}_{B_0+\frac{1}{2}}\), according to Lemma A.3 there exists a subsequence \(\{x_{\epsilon ,\mathcal {T},n_k}\}\) and a random variable \(x_{\epsilon ,\mathcal {T}}\in \bar{S}_{B_0+\frac{1}{2}}\) such that

$$\begin{aligned} x_{\epsilon ,\mathcal {T},n_k}\xrightarrow {~d~}x_{\epsilon ,\mathcal {T}}~as~k\rightarrow \infty . \end{aligned}$$

Thus by weak uniqueness of solutions for (14),

$$\begin{aligned} X_\epsilon (\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n_k})\xrightarrow {~d~}X_\epsilon (\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T}})~as~k\rightarrow \infty . \end{aligned}$$

Since \(X_{\epsilon ,n}(t,\omega ,x)\) converges to \(X_{\epsilon }(t,\omega ,x)\) as \(n\rightarrow \infty \) in mean square uniformly with respect to \(t\in [0,T]\) and \(x\in S_{B_0+\frac{1}{2}}\), we also obtain uniform convergence of this sequence in distribution. Therefore, for any \(\vartheta >0\) there exist an \(\tilde{n}_0\) and a \(\tilde{k}_1\) such that for all \(n\ge \tilde{n}_0\) and \(k\ge \tilde{k}_1\) we have

$$\begin{aligned} d_{BL}\left( p_{X_{\epsilon ,n}(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n_k})},p_{X_{\epsilon }(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n_k})}\right)&\le \frac{\vartheta }{2},\\ d_{BL}\left( p_{X_{\epsilon }(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n_k})},p_{X_{\epsilon }(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T}})}\right)&\le \frac{\vartheta }{2}. \end{aligned}$$

Hence, there exists \(K_0\ge \tilde{k}_1\) satisfying \(n_{K_0}\ge \tilde{n}_0\) such that for all \(k\ge K_0\),

$$\begin{aligned}&d_{BL}\left( p_{X_{\epsilon ,n_k}(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n_k})},p_{X_{\epsilon }(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T}})}\right) \\&\le d_{BL}\left( p_{X_{\epsilon ,n_k}(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n_k})},p_{X_{\epsilon }(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n_k})}\right) \\&\quad +d_{BL}\left( p_{X_{\epsilon }(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n_k})},p_{X_{\epsilon }(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T}})}\right) \\&\le \vartheta . \end{aligned}$$

That is,

$$\begin{aligned} x_{\epsilon ,\mathcal {T}}&\xleftarrow {~d~} x_{\epsilon ,\mathcal {T},n_k}\\&\mathop {=\!=\!=\!=}\limits ^{d}X_{\epsilon ,n_k}(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T},n_k})\\&\xrightarrow {~d~}X_{\epsilon }(\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T}}). \end{aligned}$$

Therefore,
$$\begin{aligned} X_\epsilon (\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T}})\mathop {=\!=\!=\!=}\limits ^{d}x_{\epsilon ,\mathcal {T}}. \end{aligned}$$

Since \(X_\epsilon (t,\omega ,x_{\epsilon ,\mathcal {T}})\) is a solution of (14) starting from \(x_{\epsilon ,\mathcal {T}}\), we have

$$\begin{aligned} X_\epsilon (t+\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T}}) {\mathop {=\!=\!=\!=}\limits ^{a.s.}}&X_\epsilon (t,\theta _{\mathcal {T}}\omega ,X_\epsilon (\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T}}))\\ \mathop {=\!=\!=\!=}\limits ^{d}&X_\epsilon (t,\omega ,X_\epsilon (\mathcal {T},\omega ,x_{\epsilon ,\mathcal {T}}))\\ \mathop {=\!=\!=\!=}\limits ^{d}&X_\epsilon (t,\omega ,x_{\epsilon ,\mathcal {T}}). \end{aligned}$$

Then, \(\{X_\epsilon (t,\omega ,x_{\epsilon ,\mathcal {T}})\}_{t\in \mathbb {R}_+}\) is a periodic solution of (14) in distribution. \(\square \)
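The fixed-point conclusion can be visualized on a scalar toy model (our own example, not system (14)): for \(dX=(-X+\sin 2\pi t)\,dt+\frac{1}{2}\,dW\) with period \(\mathcal {T}=1\), after a transient one more application of the Poincaré map leaves the empirical law essentially unchanged, which is periodicity in distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps_per_period, n_paths = 1e-3, 1000, 20_000

def flow_one_period(x):
    # Euler-Maruyama for dX = (-X + sin(2*pi*t)) dt + 0.5 dW over one period T = 1
    t = 0.0
    for _ in range(steps_per_period):
        noise = rng.standard_normal(x.shape)
        x = x + (-x + np.sin(2 * np.pi * t)) * dt + 0.5 * np.sqrt(dt) * noise
        t += dt
    return x

x = np.zeros(n_paths)
for _ in range(10):             # transient: approach the periodic law
    x = flow_one_period(x)
y = flow_one_period(x)          # one application of the Poincare map
# periodicity in distribution: the laws at t and t + T agree up to Monte Carlo error
assert abs(x.mean() - y.mean()) < 0.02
assert abs(x.std() - y.std()) < 0.02
```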

Appendix F: Proof of Lemma 3.4

According to the hypotheses, there exists \(N_0\in \mathbb {N}\) such that for all \(n\ge N_0\), \(\frac{1}{n}\le \varepsilon \) and

$$\begin{aligned} \Vert x_{\frac{1}{n},\mathcal {T}}\Vert _2\le R. \end{aligned}$$

Hence, \(\{x_{\frac{1}{n},\mathcal {T}}\}_{n=N_0}^{\infty }\) is a bounded sequence in \(L^2(\mathbb {P},\mathbb {R}^{m})\). By Lemma A.3, there exist a random variable \(x_{\mathcal {T}}\in S_{R}\) and a subsequence of \(\{x_{\frac{1}{n},\mathcal {T}}\}\) (still denoted by itself) such that

$$\begin{aligned} x_{\frac{1}{n},\mathcal {T}}\xrightarrow {~d~}x_\mathcal {T} \end{aligned}$$

as \(n\rightarrow \infty \). Then by Lemma A.2, there exist a common probability space \((\bar{\Omega },\bar{\mathcal {F}},\bar{\mathbb {P}})\) and random variables \(\bar{x}_{\mathcal {T}}\) and \(\{\bar{x}_{n,\mathcal {T}}\}\) on it such that

$$\begin{aligned}&\bar{x}_{\mathcal {T}}\mathop {=\!=\!=\!=}\limits ^{d}x_{\mathcal {T}},\\&\bar{x}_{n,\mathcal {T}}\mathop {=\!=\!=\!=}\limits ^{d}x_{\frac{1}{n},\mathcal {T}}, \end{aligned}$$

and
$$\begin{aligned} \bar{x}_{n,\mathcal {T}}\xrightarrow {~2~}\bar{x}_{\mathcal {T}} \end{aligned}$$

as \(n\rightarrow \infty \). It follows that

$$\begin{aligned} X_{\frac{1}{n}}(\mathcal {T},\omega ,\bar{x}_{n,\mathcal {T}})\mathop {=\!=\!=\!=}\limits ^{d}&X_{\frac{1}{n}}(\mathcal {T},\omega ,x_{\frac{1}{n},\mathcal {T}})\\ \mathop {=\!=\!=\!=}\limits ^{d}&x_{\frac{1}{n},\mathcal {T}}\\ \mathop {=\!=\!=\!=}\limits ^{d}&\bar{x}_{{n},\mathcal {T}}\\ \xrightarrow {~d~}&\bar{x}_{\mathcal {T}}. \end{aligned}$$

By Theorem 2.1, \(X_\epsilon (t,\omega ,x)\xrightarrow {~2~}X(t,\omega ,x)\) uniformly on \([0,N\mathcal {T}]\) and uniformly with respect to \(x\in S_{R}\). Thus for any \(\nu >0\), there exists a constant \(\tilde{N}_1\in \mathbb {N}\) such that for all \(n_1\ge \tilde{N}_1\) and all \(n_2\in \mathbb {N}\),

$$\begin{aligned} \left\| X_{\frac{1}{n_1}}(\mathcal {T},\omega ,\bar{x}_{n_2,\mathcal {T}})-X(\mathcal {T},\omega ,\bar{x}_{n_2,\mathcal {T}})\right\| _2\le \frac{\nu }{2}. \end{aligned}$$

On the other hand, according to the uniqueness of solutions for stochastic differential equations, there exists a constant \(\tilde{N}_2\in \mathbb {N}\) such that for all \(n_2\ge \tilde{N}_2\),

$$\begin{aligned} \left\| X(\mathcal {T},\omega ,\bar{x}_{n_2,\mathcal {T}})-X(\mathcal {T},\omega ,\bar{x}_{\mathcal {T}})\right\| _2\le \frac{\nu }{2}. \end{aligned}$$

Therefore, when \(n\ge \max \{\tilde{N}_1,\tilde{N}_2\}\), we have

$$\begin{aligned} \left\| X_{\frac{1}{n}}(\mathcal {T},\omega ,\bar{x}_{n,\mathcal {T}})-X(\mathcal {T},\omega ,\bar{x}_{\mathcal {T}})\right\| _2&\le \left\| X_{\frac{1}{n}}(\mathcal {T},\omega ,\bar{x}_{n,\mathcal {T}})-X(\mathcal {T},\omega ,\bar{x}_{n,\mathcal {T}})\right\| _2\\&\quad +\left\| X(\mathcal {T},\omega ,\bar{x}_{n,\mathcal {T}})-X(\mathcal {T},\omega ,\bar{x}_{\mathcal {T}})\right\| _2\\&\le \nu . \end{aligned}$$

That is,

$$\begin{aligned} X_{\frac{1}{n}}(\mathcal {T},\omega ,\bar{x}_{n,\mathcal {T}})\xrightarrow {~2~}X(\mathcal {T},\omega ,\bar{x}_{\mathcal {T}}) \end{aligned}$$

as \(n\rightarrow \infty \). Together with (F.39), we have

$$\begin{aligned} x_\mathcal {T}\mathop {=\!=\!=\!=}\limits ^{d}\bar{x}_{\mathcal {T}}\mathop {=\!=\!=\!=}\limits ^{d}X(\mathcal {T},\omega ,\bar{x}_{\mathcal {T}})\mathop {=\!=\!=\!=}\limits ^{d}X(\mathcal {T},\omega ,x_\mathcal {T}). \end{aligned}$$

By a discussion similar to that for \(X_\epsilon (t,\omega ,x_{\epsilon ,\mathcal {T}})\), \(\{X(t,\omega ,x_{\mathcal {T}})\}_{t\in \mathbb {R}_+}\) is a periodic solution of (11) in distribution. \(\square \)
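The Wong–Zakai mechanism invoked through Theorem 2.1 can be illustrated on the scalar equation \(dX=X\circ dW\) (our own example), whose Stratonovich solution is \(X_t=e^{W_t}\): the ODE driven by the piecewise-linear interpolant \(w_n\) of \(W\) integrates exactly to \(e^{w_n(t)}\), which converges pathwise to the Stratonovich (not the Itô) solution as the mesh refines.

```python
import numpy as np

rng = np.random.default_rng(2)
T, fine = 1.0, 2**12
t = np.linspace(0.0, T, fine + 1)
dW = rng.standard_normal(fine) * np.sqrt(T / fine)
W = np.concatenate([[0.0], np.cumsum(dW)])  # one Brownian path on a fine grid

def wong_zakai_solution(n):
    # Piecewise-linear interpolant w_n of W on 2**n + 1 nodes; along each linear
    # piece the ODE x' = x * w_n'(t) integrates exactly to x = exp(w_n).
    nodes = np.arange(0, fine + 1, fine // 2**n)
    w_n = np.interp(t, t[nodes], W[nodes])
    return np.exp(w_n)

exact = np.exp(W)  # Stratonovich solution of dX = X o dW, X_0 = 1
errs = [np.max(np.abs(wong_zakai_solution(n) - exact)) for n in (2, 5, 8)]
assert errs[2] < errs[0]  # the approximations converge along this path
```

Had the approximations converged to the Itô solution \(e^{W_t-t/2}\) instead, the sup error would stall at a fixed positive level; this is exactly the Stratonovich correction that the \(g^{\alpha j}\partial _\alpha g^{ij}\) terms account for.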


Cite this article

Jiang, X., Li, Y. & Yang, X. Existence of Periodic Solutions in Distribution for Stochastic Newtonian Systems. J Stat Phys (2020). https://doi.org/10.1007/s10955-020-02583-3



  • Levinson’s conjecture
  • Stochastic Newtonian systems
  • Wong–Zakai approximations
  • Periodic solutions in distribution
  • Lyapunov’s method