# Law of large numbers and central limit theorem under nonlinear expectations

Shige Peng

## Abstract

The main achievement of this paper is the discovery and proof of a central limit theorem (CLT, see Theorem 2) under the framework of sublinear expectation. Roughly speaking, under some reasonable assumptions, the random sequence $$\{(X_{1}+\cdots +X_{n})/\sqrt {n}\}_{n=1}^{\infty }$$ converges in law to a nonlinear normal distribution, called the G-normal distribution, where $$\{X_{i}\}_{i=1}^{\infty }$$ is an i.i.d. sequence under the sublinear expectation. It is known that the framework of sublinear expectation plays an important role in situations where the probability measure itself has non-negligible uncertainties. In such situations, this new CLT plays a role similar to that of the classical CLT. The classical CLT can also be obtained directly from this new CLT, since a linear expectation is a special case of a sublinear expectation. A deep regularity estimate for second-order fully nonlinear parabolic PDEs is applied in the proof of the CLT. This paper originally appeared on arXiv (math.PR/0702358v1).

## Keywords

Central limit theorem · Nonlinear expectation · Probability measure uncertainty

## 1 Introduction

The law of large numbers (LLN) and the central limit theorem (CLT) have a long history and are widely known as two fundamental results in probability theory and statistical analysis.

Recently, problems of model uncertainties in statistics, measures of risk, and superhedging in finance motivated us to introduce, in (Peng 2007) and (Peng 2008) (see also (Peng 2004), (Peng 2005), and the references therein), a new notion of sublinear expectation, called “G-expectation”, and the related “G-normal distribution” (see Def. 5), from which we were able to define G-Brownian motion as well as the corresponding stochastic calculus. The notion of G-normal distribution plays the same important role in the theory of sublinear expectation as that of normal distribution in classical probability theory. It is then natural and interesting to ask if we have the corresponding LLN and CLT under a sublinear expectation and, in particular, if the corresponding limit distribution of the CLT is a G-normal distribution. This paper gives an affirmative answer. The proof of our CLT is short since we borrow a deep interior estimate for fully nonlinear PDEs from (Wang 1992), which extended a profound result of (Caffarelli 1989) (see also (Cabre and Caffarelli 1997)) to parabolic PDEs. The assumptions of our LLN and CLT can still be improved, but the phenomenon discovered here plays the same important role in the theory of nonlinear expectation as the classical LLN and CLT do in classical probability theory.

## 2 Sublinear expectations

Let Ω be a given set and let $$\mathcal {H}$$ be a linear space of real functions defined on Ω such that if $$X_{1},\cdots,X_{n}\in \mathcal {H}$$, then $$\varphi (X_{1},\cdots,X_{n})\in \mathcal {H}$$ for each $$\varphi \in C_{poly}(\mathbb {R}^{n})$$, where $$C_{poly}(\mathbb {R}^{n})$$ denotes the space of continuous functions with polynomial growth, i.e., functions for which there exist constants C>0 and k≥0 such that |φ(x)|≤C(1+|x|k). $$\mathcal {H}$$ is considered as a space of “random variables”.

Here, we use $$C_{poly}(\mathbb {R}^{n})$$ in our framework only for technical reasons. In general, it can be replaced by $$C_{b}(\mathbb {R}^{n})$$, the space of bounded continuous functions; by $$lip_{b}(\mathbb {R}^{n})$$, the space of bounded Lipschitz-continuous functions; or by $$L^{0} (\mathbb {R}^{n})$$, the space of Borel-measurable functions.

### Definition 1

A sublinear expectation $$\mathbb {E}$$ on $$\mathcal {H}$$ is a functional $$\mathbb {E}:\mathcal {H}\rightarrow \lbrack -\infty,\infty ]$$ satisfying the following properties: for all $$X,Y\in \mathcal {H}$$ such that $$\mathbb {E}[|X|], \mathbb {E}[|Y|]<\infty$$, we have (a) Monotonicity: if $$X\geq Y$$, then $$\mathbb {E}[X]\geq \mathbb {E}[Y]$$. (b) Sub-additivity (or self-dominated property):
$$\mathbb{E}[X]-\mathbb{E}[Y]\leq \mathbb{E}[X-Y].$$
(c) Positive homogeneity: $$\mathbb {E}[\lambda X]=\lambda \mathbb {E}[X],\ \ \forall \lambda \geq 0$$. (d) Constant translatability: $$\mathbb {E}[X+c]=\mathbb {E}[X]+c$$ for each constant $$c\in \mathbb {R}$$.
For each given p≥1, we denote by $$\mathcal {H}_{p}$$ the collection of $$X\in \mathcal {H}$$ such that $$\mathbb {E}[|X|^{p}]<\infty$$. It can be checked (see (Peng 2007) and (Peng 2008)) that
$$\mathbb{E}\left[|X+Y|^{p}\right]^{1/p}\leq \mathbb{E}\left[|X|^{p}\right]^{1/p}+\mathbb{E} \left[|Y|^{p}\right]^{1/p}.$$
We also have $$\mathcal {H}_{q}\subseteq \mathcal {H}_{p}$$ for $$1\leq p\leq q<\infty$$, and if $$\frac {1}{p}+\frac {1}{q}=1$$, then for each $$X\in \mathcal {H}_{p}$$ and $$Y\in \mathcal {H}_{q}$$, we have $$X\cdot Y\in \mathcal {H} _{1}$$ and
$$\mathbb{E}[|X\cdot Y|]\leq \mathbb{E}\left[|X|^{p}\right]^{1/p}\mathbb{E}\left[|Y|^{q}\right]^{1/q}.\$$
It follows that $$\mathcal {H}_{p}$$ is a linear space and the sublinear expectation $$\mathbb {E}[\cdot ]$$ naturally induces a norm $$\left \Vert X\right \Vert _{p}:=\mathbb {E}[|X|^{p}]^{1/p}$$ on $$\mathcal {H}_{p}$$. The completion of $$\mathcal {H}_{p}$$ under this norm forms a Banach space. The expectation $$\mathbb {E}[\cdot ]$$ can be extended to this Banach space as well. This extended $$\mathbb {E}[\cdot ]$$ still satisfies the above (a)–(d). But in this paper only the pre-Banach space $$\mathcal {H}_{p}$$ is involved.
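A canonical example of a sublinear expectation is the upper expectation $$\mathbb {E}[X]=\sup _{P\in \mathcal {P}}E_{P}[X]$$ over a family $$\mathcal {P}$$ of probability measures. The following sketch (the finite scenario family and all names are illustrative assumptions, not from the paper) builds such a functional on a finite sample space, where properties (a)–(d) can be checked directly.

```python
def make_sublinear_E(measures, outcomes):
    """Upper expectation over a finite family of probability measures.

    measures: list of probability vectors, one per measure, aligned with
    `outcomes`.  A supremum of linear functionals is automatically
    monotone, sub-additive, positively homogeneous, and translatable."""
    def E(X):
        # X is a real function on the finite sample space `outcomes`
        return max(sum(p * X(w) for p, w in zip(probs, outcomes))
                   for probs in measures)
    return E

# Mean uncertainty: two measures on {-1, +1} disagree about the mean.
E = make_sublinear_E([[0.4, 0.6], [0.6, 0.4]], [-1, 1])
X = lambda w: w
# E[X] = E[-X] = 0.2: both X and -X have positive upper mean, which is
# impossible under a single linear expectation.
```

The additivity in Proposition 1 below corresponds to a Y with no such mean uncertainty, i.e., one on which all measures in the family agree.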

### Proposition 1

Let $$X,Y\in \mathcal {H}_{1}$$ be such that $$\mathbb {E} [Y]=-\mathbb {E}[-Y]$$. Then, we have
$$\mathbb{E}[X+Y]=\mathbb{E}[X]+\mathbb{E}[Y].$$
In particular, if $$\mathbb {E}[Y]=\mathbb {E}[-Y]=0$$, then $$\mathbb {E} [X+Y]=\mathbb {E}[X]$$.

### proof

It is simply because we have $$\mathbb {E}[X+Y]\leq \mathbb {E}[X]+\mathbb {E}[Y]$$ and
$$\mathbb{E}[X+Y]\geq \mathbb{E}[X]-\mathbb{E}[-Y]=\mathbb{E}[X]+\mathbb{E} [Y]\text{.}$$

## 3 Law of large numbers

### Theorem 1

(Law of Large Numbers) Let a sequence X1,X2,⋯ in $$\mathcal {H} _{2}$$ be such that
$$\mathbb{E}\left[X_{i}^{2}\right]=\overline{\sigma}^{2},\ \ \ \mathbb{E}[X_{i} X_{i+j}]=\mathbb{E}[-X_{i}X_{i+j}]=0,\ \ i,j=1,2,\cdots,$$
(1)
where $$\overline {\sigma }\in (0,\infty)$$ is a fixed number. Then, the sum
$$S_{n}=X_{1}+\cdots+X_{n}$$
(2)
satisfies the following law of large numbers:
$$\ {\lim}_{n\rightarrow \infty}\left \Vert \frac{S_{n}}{n}\right \Vert_{2}^{2} ={\lim}_{n\rightarrow \infty}\mathbb{E}\left[\left|\frac{S_{n}}{n}\right|^{2}\right]=0.$$
Moreover, the convergence rate is dominated by
$$\mathbb{E}\left[\left|\frac{S_{n}}{n}\right|^{2}\right]\leq \frac{\overline{\sigma}^{2}}{n}.\$$

### proof

By a simple calculation, we have, using Proposition 1,
\begin{aligned} \mathbb{E}\left[\left|\frac{S_{n}}{n}\right|^{2}\right] & =\frac{1}{n^{2}}\mathbb{E}\left[S_{n} ^{2}\right]=\frac{1}{n^{2}}\mathbb{E}\left[S_{n-1}^{2}+2S_{n-1}X_{n}+X_{n}^{2}\right]\\ & =\frac{1}{n^{2}}\mathbb{E}\left[S_{n-1}^{2}+X_{n}^{2}\right]\leq \frac{1}{n^{2}}\left\{ \mathbb{E}\left[S_{n-1}^{2}\right]+\mathbb{E}\left[X_{n}^{2}\right]\right\} \\ & \leq \cdots=\frac{1}{n^{2}}n\mathbb{E}\left[X_{1}^{2}\right]=\frac{\overline{\sigma }^{2}}{n}. \end{aligned}
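As a quick numerical sanity check (a classical Monte Carlo sketch, not part of the proof): under the upper-expectation interpretation, the worst case in the bound above is attained at the maximal variance $$\overline {\sigma }^{2}$$, so simulating classical i.i.d. noise at that variance should reproduce the rate $$\overline {\sigma }^{2}/n$$. The Gaussian choice, sample sizes, and seed below are assumptions made for illustration.

```python
import random

def mean_square_of_average(n, sigma_bar, trials=5000, seed=0):
    # Monte Carlo estimate of E[|S_n/n|^2] for centered i.i.d. noise at
    # the extreme variance sigma_bar^2 (the worst case in the LLN bound).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(rng.gauss(0.0, sigma_bar) for _ in range(n))
        total += (s / n) ** 2
    return total / trials

# For n = 100 and sigma_bar = 2, the estimate should be close to
# sigma_bar^2 / n = 0.04.
est = mean_square_of_average(100, 2.0)
```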

### Remark 1

The above condition (1) is easily extended to the situation $$\mathbb {E}[(X_{i}-\mu)^{2}]=\overline {\sigma }^{2}$$ and $$\mathbb {E}[(X_{i} -\mu)(X_{i+j}-\mu)]=\mathbb {E}[-(X_{i}-\mu)(X_{i+j}-\mu)]=0$$, for i,j=1,2,⋯. In this case, we have
$${\lim}_{n\rightarrow \infty}\mathbb{E}\left[\left|\frac{S_{n}}{n}-\mu\right|^{2}\right]=0.$$

## 4 Central limit theorem

We now consider a generalization of the notion of the distribution of a random variable under $$\mathbb {E}$$. To this purpose, we introduce a set $$\widetilde {\Omega }$$, a linear space $$\widetilde {\mathcal {H} }$$ of real functions defined on $$\widetilde {\Omega }$$, and a sublinear expectation $$\widetilde {\mathbb {E}}[\cdot ]$$ in exactly the same way as $$\Omega, \mathcal {H}$$, and $$\mathbb {E}$$ were defined in Section 2. We can similarly define $$\widetilde {\mathcal {H}}_{p}$$ for p≥1.

### Definition 2

Two random variables, $$X\in \mathcal {H}$$, under $$\mathbb {E}[\cdot ]$$ and $$Y\in \widetilde {\mathcal {H}}$$ under $$\widetilde {\mathbb {E}}[\cdot ]$$, are said to be identically distributed if, for each $$\varphi \in C_{poly}(\mathbb {R})$$ such that $$\varphi (X)\in \mathcal {H}_{1}$$, we have $$\varphi (Y)\in \widetilde {\mathcal {H}}_{1}$$ and
$$\mathbb{E}[\varphi(X)]=\widetilde{\mathbb{E}}[\varphi(Y)].\ \$$

### Definition 3

A random variable $$X\in \mathcal {H}$$ is said to be independent under $$\mathbb {E}[\cdot ]$$ from $$Y=(Y_{1},\cdots,Y_{n})\in \mathcal {H}^{n}$$ if for each test function $$\varphi \in C_{poly}(\mathbb {R}^{n+1})$$ such that $$\varphi (X,Y)\in \mathcal {H}_{1}$$, we have $$\varphi (X,y)\in \mathcal {H}_{1}$$ for each $$y\in \mathbb {R}^{n}$$ and, with $$\overline {\varphi }(y):=\mathbb {E} [\varphi (X,y)]$$, we have
$$\mathbb{E}[\varphi(X,Y)]=\mathbb{E}[\overline{\varphi}(Y)].$$
A random variable $$X\in \mathcal {H}_{2}$$ is said to be weakly independent of Y if the above test functions φ are taken, instead of in $$C_{poly}(\mathbb {R}^{n+1})$$, only among functions of the form
$$\varphi(x,y)=\psi_{0}(y)+\psi_{1}(y)x+\psi_{2}(y)x^{2},\ \ \psi_{i}\in C_{b}\left(\mathbb{R}^{n}\right),\ \ i=0,1,2.$$

### Remark 2

In the case of linear expectation, this notion is just the classical independence. Note that, under sublinear expectations, “X is independent from Y” does not automatically imply that “Y is independent from X”.
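This asymmetry can be made concrete with a finite-scenario upper expectation (the three-point distribution and all names below are illustrative assumptions, not from the paper). Take identically distributed coordinates with mean zero and variance uncertainty $$[\underline {\sigma }^{2},\overline {\sigma }^{2}]=[0.5,1]$$, and evaluate $$\mathbb {E}[XY^{2}]$$ under the two orders of independence:

```python
OUTCOMES = (-1, 0, 1)
VARIANCES = (0.5, 1.0)  # sigma_lo^2, sigma_hi^2

def measure(s):
    # P(+-1) = s/2, P(0) = 1 - s: mean zero and variance exactly s.
    return {-1: s / 2, 0: 1 - s, 1: s / 2}

def E_pair(phi):
    """Upper expectation of phi(first, second) when the *second*
    coordinate is independent from the first: for each realized value
    of the first coordinate, the measure of the second is re-chosen."""
    def inner(x):
        return max(sum(p[y] * phi(x, y) for y in OUTCOMES)
                   for p in map(measure, VARIANCES))
    return max(sum(p[x] * inner(x) for x in OUTCOMES)
               for p in map(measure, VARIANCES))

phi = lambda x, y: x * y * y
a = E_pair(phi)                       # E[X Y^2] when Y is independent from X
b = E_pair(lambda y, x: phi(x, y))    # E[X Y^2] when X is independent from Y
# a = 0.25 while b = 0: the order of independence matters.
```

When Y is independent from X, the adversary may match the variance of Y to the sign of X, producing a strictly positive upper expectation; in the reverse order the inner expectation of X vanishes identically.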

### Remark 3

If we assume, in the above law of large numbers, that the random variables X1,X2,⋯ are identically distributed and that each Xi+1 is independent from (X1,⋯,Xi), with $$\mathbb {E}[X_{1}]=\mathbb {E}[-X_{1}]=0$$ and $$\mathbb {E}\left [X_{1} ^{2}\right ]<\infty$$, then the LLN still holds.

We denote by $$lip_{b}(\mathbb {R})$$ the collection of all uniformly Lipschitz and bounded real functions on $$\mathbb {R}$$. It is a linear space.

### Definition 4

A sequence of random variables $$\left \{ \eta _{i}\right \}_{i=1}^{\infty }$$ in $$\mathcal {H}$$ is said to converge in distribution under $$\mathbb {E}$$ if for each $$\varphi \in lip_{b}(\mathbb {R}), \left \{ \mathbb {E}[\varphi (\eta _{i})]\right \}_{i=1}^{\infty }$$ converges.

### Definition 5

A random variable $$\xi \in \widetilde {\mathcal {H}}$$ is called G-normal distributed under $$\widetilde {\mathbb {E}}$$, if for each $$\varphi \in lip_{b}(\mathbb {R})$$, the following function defined by
$$u(t,x):=\widetilde{\mathbb{E}}\left[\varphi(x+\sqrt{t}\xi)\right],\ (t,x)\in \lbrack0,\infty)\times \mathbb{R}$$
is the unique (bounded and continuous) viscosity solution of the following parabolic PDE defined on $$[0,\infty)\times \mathbb {R}$$:
$$\partial_{t}u-G(\partial_{xx}^{2}u)=0,\ \ u|_{t=0}=\varphi,$$
(3)
where $$G=G_{\underline {\sigma },\overline {\sigma }}$$ is the following sublinear function, parameterized by $$\underline {\sigma }$$ and $$\overline {\sigma }$$ with $$0\leq \underline {\sigma }\leq \overline {\sigma }$$:
$$G(\alpha)=\frac{1}{2}\left(\overline{\sigma}^{2}\alpha^{+}-\underline{\sigma} ^{2}\alpha^{-}\right),\ \ \alpha \in \mathbb{R}.$$
Here, we denote $$\alpha^{+}:=\max\{0,\alpha\}$$ and $$\alpha^{-}:=(-\alpha)^{+}$$.

### Remark 4

A simple construction of a G-normal distributed random variable ξ is to take $$\widetilde {\Omega }=\mathbb {R}$$ and $$\widetilde {\mathcal {H}}=C_{poly} (\mathbb {R})$$. The expectation $$\widetilde {\mathbb {E}}$$ is defined by $$\widetilde {\mathbb {E}}[\varphi ]:=u^{\varphi }(1,0)$$, where u=uφ is the unique continuous viscosity solution of polynomial growth of (3) with initial condition $$\varphi \in C_{poly}(\mathbb {R})=\widetilde {\mathcal {H}}_{1}$$. The G-normal distributed random variable is then $$\xi (\omega)\equiv \omega$$, $$\omega \in \widetilde {\Omega }=\mathbb {R}$$.
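The construction in this remark can be sketched numerically: a minimal explicit monotone finite-difference scheme for (3) (the grid sizes, frozen-boundary treatment, and test functions below are illustrative assumptions, not from the paper) approximates $$u^{\varphi }(1,0)=\widetilde {\mathbb {E}}[\varphi (\xi)]$$.

```python
import math

def g_heat_value(phi, sig_lo=1.0, sig_hi=2.0, T=1.0, L=10.0, nx=201):
    # Explicit monotone finite-difference scheme for the G-heat equation
    #   du/dt = G(d^2u/dx^2),  u(0, .) = phi,
    # with G(a) = (sig_hi^2 * a^+ - sig_lo^2 * a^-) / 2.  Monotone schemes
    # of this kind converge to the viscosity solution; boundary values are
    # frozen at phi, which is harmless at x = 0 when L is several standard
    # deviations wide.
    dx = 2.0 * L / (nx - 1)
    dt = 0.5 * dx * dx / sig_hi ** 2      # CFL bound keeps the scheme monotone
    steps = int(math.ceil(T / dt))
    dt = T / steps
    def G(a):
        return 0.5 * (sig_hi ** 2 * max(a, 0.0) - sig_lo ** 2 * max(-a, 0.0))
    u = [phi(-L + j * dx) for j in range(nx)]
    for _ in range(steps):
        u = ([u[0]]
             + [u[j] + dt * G((u[j + 1] - 2.0 * u[j] + u[j - 1]) / dx ** 2)
                for j in range(1, nx - 1)]
             + [u[-1]])
    return u[(nx - 1) // 2]               # u(T, 0)
```

For convex φ only the $$\overline {\sigma }$$-branch of G is active, so the value agrees with the classical heat equation at volatility $$\overline {\sigma }$$: with $$\underline {\sigma }=1$$, $$\overline {\sigma }=2$$, the choice φ(x)=x+ gives u(1,0)≈2/√(2π)≈0.798, while the concave φ(x)= min(x,0) gives ≈−1/√(2π)≈−0.399.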

Our main result is:

### Theorem 2

(Central Limit Theorem) Let a sequence $$\left \{ X_{i}\right \}_{i=1}^{\infty }$$ in $$\mathcal {H}_{3}$$ be identically distributed with each other. We also assume that each Xn+1 is independent (or weakly independent) from (X1,⋯,Xn) for n=1,2,⋯. We assume, furthermore, that
$$\mathbb{E}[X_{1}]=\mathbb{E}[-X_{1}]=0\text{,\ \ }\mathbb{E}\left[X_{1} ^{2}\right]=\overline{\sigma}^{2},\ -\mathbb{E}\left[-X_{1}^{2}\right]=\underline{\sigma}^{2},$$
for some fixed numbers $$0<\underline {\sigma }\leq \overline {\sigma }<\infty$$. Then, the sequence $$\left \{ S_{n}/\sqrt {n}\right \}_{n=1}^{\infty }$$ converges in law to the G-normal distribution: for each $$\varphi \in lip_{b}(\mathbb {R})$$,
$${\lim}_{n\rightarrow \infty}\mathbb{E}\left[\varphi\left(\frac{S_{n}}{\sqrt{n} }\right)\right]=\widetilde{\mathbb{E}}[\varphi(\xi)],$$
(4)

where ξ is G-normal distributed under $$\widetilde {\mathbb {E}}$$.

### proof

For a function $$\varphi \in lip_{b}(\mathbb {R})$$ and a small but fixed h>0, let V be the unique viscosity solution of
$$\partial_{t}V+G(\partial_{xx}^{2}V)=0,\ (t,x)\in \lbrack0,1+h]\times \mathbb{R}\text{,}\ \ V|_{t=1+h}=\varphi.$$
(5)
We have, according to the definition of G-normal distribution,
$$V(t,x)=\widetilde{\mathbb{E}}\left[\varphi\left(x+\sqrt{1+h-t}\xi\right)\right].$$
Particularly,
$$V(h,0)=\widetilde{\mathbb{E}}[\varphi(\xi)],\ \ V(1+h,x)=\varphi (x).$$
(6)
Since (5) is a uniformly parabolic PDE and G is a convex function, by the interior regularity of V (see Wang (Wang 1992), Theorem 4.13), we have
$$\left \Vert V\right \Vert_{C^{1+\alpha/2,2+\alpha}([0,1]\times \mathbb{R})}<\infty,\ \text{for some }\alpha \in(0,1).$$
We set $$\delta =\frac {1}{n}$$ and S0=0. Then,
\begin{aligned} V\left(1,\sqrt{\delta}S_{n}\right)-V(0,0)&=\sum\limits_{i=0}^{n-1}\left\{V\left((i+1)\delta,\sqrt {\delta}S_{i+1}\right)-V\left(i\delta,\sqrt{\delta}S_{i}\right)\right\} \\ & =\sum\limits_{i=0}^{n-1}\left\{ \left[V\left((i+1)\delta,\sqrt{\delta}S_{i+1} \right)-V\left(i\delta,\sqrt{\delta}S_{i+1}\right)\right]\right.\\&\left.+\left[V\left(i\delta,\sqrt{\delta}S_{i+1} \right)-V\left(i\delta,\sqrt{\delta}S_{i}\right)\right]\right\} \\ & =\sum\limits_{i=0}^{n-1}\left\{ \partial_{t}V\left(i\delta,\sqrt{\delta}S_{i} \right)\delta+\frac{1}{2}\partial_{xx}^{2}V\left(i\delta,\sqrt{\delta}S_{i}\right)X_{i+1} ^{2}\delta\right.\\&\quad \left.+\partial_{x}V\left(i\delta,\sqrt{\delta}S_{i}\right)X_{i+1}\sqrt{\delta }+I_{\delta}^{i}\right\} \end{aligned}
with, by Taylor’s expansion,
\begin{aligned} & I_{\delta}^{i}=\int_{0}^{1}\left[\partial_{t}V\left((i+\beta)\delta,\sqrt{\delta }S_{i+1}\right)-\partial_{t}V\left(i\delta,\sqrt{\delta}S_{i+1}\right)\right]d\beta \delta \\ & +\left[\partial_{t}V\left(i\delta,\sqrt{\delta}S_{i+1}\right)-\partial_{t}V\left(i\delta,\sqrt{\delta}S_{i}\right)\right]\delta \\ & +\int_{0}^{1}\int_{0}^{1}\left[\partial_{xx}^{2}V\left(i\delta,\sqrt{\delta} S_{i}+\gamma \beta X_{i+1}\sqrt{\delta}\right)-\partial_{xx}^{2}V\left(i\delta,\sqrt{\delta}S_{i}\right)\right]\beta d\beta d\gamma X_{i+1}^{2}\delta. \end{aligned}
Thus,
\begin{aligned} & \mathbb{E}\left[\sum\limits_{i=0}^{n-1}\partial_{t}V\left(i\delta,\sqrt{\delta}S_{i} \right)\delta+\frac{1}{2}\partial_{xx}^{2}V\left(i\delta,\sqrt{\delta}S_{i}\right)X_{i+1} ^{2}\delta+\partial_{x}V\left(i\delta,\sqrt{\delta}S_{i}\right)X_{i+1}\sqrt{\delta }\right]-\mathbb{E}\left[-\sum\limits_{i=0}^{n-1}I_{\delta}^{i}\right]\\ & \leq \mathbb{E}\left[V\left(1,\sqrt{\delta}S_{n}\right)\right]-V(0,0)\\ & \leq \mathbb{E}\left[\sum\limits_{i=0}^{n-1}\partial_{t}V\left(i\delta,\sqrt{\delta} S_{i}\right)\delta+\frac{1}{2}\partial_{xx}^{2}V\left(i\delta,\sqrt{\delta}S_{i} \right)X_{i+1}^{2}\delta+\partial_{x}V\left(i\delta,\sqrt{\delta}S_{i}\right)X_{i+1} \sqrt{\delta}\right]+\mathbb{E}\left[\sum\limits_{i=0}^{n-1}I_{\delta}^{i}\right]. \end{aligned}
Since $$\mathbb {E}\left [\partial _{x}V\left (i\delta,\sqrt {\delta }S_{i}\right)X_{i+1}\sqrt {\delta }\right ]=\mathbb {E}\left [-\partial _{x}V\left (i\delta,\sqrt {\delta }S_{i}\right)X_{i+1} \sqrt {\delta }\right ]=0$$ and
$$\mathbb{E}\left[\frac{1}{2}\partial_{xx}^{2}V\left(i\delta,\sqrt{\delta}S_{i} \right)X_{i+1}^{2}\delta\right]=\mathbb{E}\left[G\left(\partial_{xx}^{2}V\left(i\delta,\sqrt{\delta} S_{i}\right)\right)\delta\right],$$
we have, by applying the equation $$\partial _{t}V\left (i\delta,\sqrt {\delta }S_{i}\right)+G\left(\partial _{xx}^{2}V\left (i\delta,\sqrt {\delta }S_{i}\right)\right)=0$$,
\begin{aligned} &\mathbb{E}\left[\sum\limits_{i=0}^{n-1}\partial_{t}V\left(i\delta,\sqrt{\delta}S_{i} \right)\delta+\frac{1}{2}\partial_{xx}^{2}V\left(i\delta,\sqrt{\delta}S_{i}\right)X_{i+1} ^{2}\delta+\partial_{x}V\left(i\delta,\sqrt{\delta}S_{i}\right)X_{i+1}\sqrt{\delta}\right]\\&=0. \end{aligned}
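The identity $$\mathbb {E}\left [\frac {1}{2}\partial _{xx}^{2}V\,X_{i+1}^{2}\right ]=G\left(\partial _{xx}^{2}V\right)$$ used above is exactly where the moment assumptions on X1 enter: for any constant $$a\in \mathbb {R}$$, positive homogeneity gives
$$\mathbb{E}\left[\tfrac{1}{2}aX_{i+1}^{2}\right]=\begin{cases}\frac{a}{2}\,\mathbb{E}\left[X_{i+1}^{2}\right]=\frac{1}{2}\overline{\sigma}^{2}a, & a\geq0,\\ \frac{|a|}{2}\,\mathbb{E}\left[-X_{i+1}^{2}\right]=\frac{1}{2}\underline{\sigma}^{2}a, & a<0,\end{cases}$$
so that $$\mathbb {E}\left [\frac {1}{2}aX_{i+1}^{2}\right ]=G(a)$$; the independence of Xi+1 from (X1,⋯,Xi) then allows the substitution $$a=\partial _{xx}^{2}V\left (i\delta,\sqrt {\delta }S_{i}\right)$$.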
It then follows that
$$-\mathbb{E}\left[-\sum\limits_{i=0}^{n-1}I_{\delta}^{i}\right]\leq \mathbb{E}\left[V\left(1,\sqrt{\delta }S_{n}\right)\right]-V(0,0)\leq \mathbb{E}\left[\sum\limits_{i=0}^{n-1}I_{\delta}^{i}\right].$$
But since both tV and $$\partial _{xx}^{2}V$$ are uniformly α-Hölder continuous in x and $$\frac {\alpha }{2}$$-Hölder continuous in t on $$[0,1]\times \mathbb {R}$$, we then have $$|I_{\delta }^{i}|\leq C\delta ^{1+\alpha /2}\left [1+|X_{i+1}|^{\alpha }+|X_{i+1}|^{2+\alpha }\right ]$$. It follows that
$$\mathbb{E}\left[|I_{\delta}^{i}|\right]\leq C\delta^{1+\alpha/2}\left(1+\mathbb{E} \left[|X_{1}|^{\alpha}\right]+\mathbb{E}\left[|X_{1}|^{2+\alpha}\right]\right).$$
Thus,
\begin{aligned} -C\left(\frac{1}{n}\right)^{\alpha/2}\left(1+\mathbb{E}\left[|X_{1}|^{\alpha}+|X_{1}|^{2+\alpha}\right]\right) & \leq \mathbb{E}\left[V\left(1,\sqrt{\delta}S_{n}\right)\right]-V(0,0)\\ & \leq C\left(\frac{1}{n}\right)^{\alpha/2}\left(1+\mathbb{E}\left[|X_{1}|^{\alpha}+|X_{1} |^{2+\alpha}\right]\right). \end{aligned}
As $$n\rightarrow \infty$$, we thus have
$${\lim}_{n\rightarrow \infty}\mathbb{E}\left[V\left(1,\sqrt{\delta}S_{n} \right)\right]=V(0,0).$$
(7)
On the other hand, we have, for each $$t,t^{\prime }\in [0,1+h]$$ and $$x\in \mathbb {R}$$,
\begin{aligned} |V(t,x)-V(t^{\prime},x)| & =\left|\widetilde{\mathbb{E}}\left[\varphi\left(x+\sqrt{1+h-t} \xi\right)\right]-\widetilde{\mathbb{E}}\left[\varphi\left(x+\sqrt{1+h-t^{\prime}}\xi\right)\right]\right|\\ & \leq\left|\widetilde{\mathbb{E}}\left[\varphi\left(x+\sqrt{1+h-t}\xi\right)-\varphi \left(x+\sqrt{1+h-t^{\prime}}\xi\right)\right]\right|\\ & \leq k_{\varphi}\left|\sqrt{1+h-t}-\sqrt{1+h-t^{\prime}}\right|\,\widetilde {\mathbb{E}}[|\xi|]\\ & \leq C\sqrt{|t-t^{\prime}|}, \end{aligned}
where kφ denotes the Lipschitz constant of φ. Thus, $$|V(0,0)-V(h,0)|\leq C\sqrt {h}$$ and, by (6),
\begin{aligned} & |\mathbb{E}\left[V\left(1,\sqrt{\delta}S_{n}\right)\right]-\mathbb{E}\left[\varphi\left(\sqrt{\delta} S_{n}\right)\right]|\\ & =|\mathbb{E}\left[V\left(1,\sqrt{\delta}S_{n}\right)\right]-\mathbb{E}\left[V\left(1+h,\sqrt{\delta} S_{n}\right)\right]|\leq C\sqrt{h}. \end{aligned}
It follows from (7) and (6) that
$$\limsup_{n\rightarrow \infty}|\mathbb{E}\left[\varphi\left(\frac{S_{n}}{\sqrt{n} }\right)\right]-\widetilde{\mathbb{E}}[\varphi(\xi)]|\leq2C\sqrt{h}.$$
Since h can be arbitrarily small, we thus have
$${\lim}_{n\rightarrow \infty}\mathbb{E}\left[\varphi\left(\frac{S_{n}}{\sqrt{n} }\right)\right]=\widetilde{\mathbb{E}}[\varphi(\xi)].$$
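The theorem can be probed numerically. Under assumptions chosen purely for illustration (not from the paper), let each Xi take the values ±σ with σ picked adversarially from $$\{\underline {\sigma },\overline {\sigma }\}$$ at every step; the resulting upper expectation of $$\varphi (S_{n}/\sqrt {n})$$ is a discrete sublinear expectation satisfying the moment conditions of the theorem, and it can be computed exactly by backward dynamic programming on a lattice:

```python
import math

def g_clt_value(phi, n, sig_lo=1, sig_hi=2):
    # Backward dynamic programming for the discrete sublinear expectation
    # E[phi(S_n / sqrt(n))]: at each step the adversary picks a symmetric
    # two-point increment +-sigma/sqrt(n) with sigma in {sig_lo, sig_hi}
    # (integers, so that the lattice x = j/sqrt(n) recombines).
    m = sig_hi * n                              # largest reachable offset
    s = 1.0 / math.sqrt(n)
    V = [phi(j * s) for j in range(-m, m + 1)]  # terminal condition
    for i in range(1, n + 1):
        r = m - sig_hi * i                      # lattice radius still needed
        W = V[:]
        for j in range(-r, r + 1):
            k = j + m
            W[k] = max(0.5 * (V[k + sig] + V[k - sig])
                       for sig in (sig_lo, sig_hi))
        V = W
    return V[m]                                 # value at the origin
```

With $$\underline {\sigma }=1$$ and $$\overline {\sigma }=2$$, the convex choice φ(x)=x+ gives values approaching the classical Gaussian quantity at $$\overline {\sigma }$$, namely 2/√(2π)≈0.798, while the concave φ(x)=min(x,0) approaches −1/√(2π)≈−0.399 — both in agreement with the G-normal limit.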

### Corollary 1

The convergence (4) holds for the case where φ is a bounded and uniformly continuous function.

### proof

We can find a sequence $$\left \{ \varphi _{k}\right \}_{k=1}^{\infty }$$ in $$lip_{b}(\mathbb {R})$$ such that φkφ uniformly on $$\mathbb {R}$$. By the estimate
\begin{aligned} |\mathbb{E}\left[\varphi\left(\frac{S_{n}}{\sqrt{n}}\right)\right]-\widetilde{\mathbb{E}} [\varphi(\xi)]| & \leq|\mathbb{E}\left[\varphi\left(\frac{S_{n}}{\sqrt{n} }\right)\right]-\mathbb{E}\left[\varphi_{k}\left(\frac{S_{n}}{\sqrt{n}}\right)\right]|\\ & +|\widetilde{\mathbb{E}}[\varphi(\xi)]-\widetilde{\mathbb{E}}[\varphi_{k}(\xi)]|+|\mathbb{E}\left[\varphi_{k}\left(\frac{S_{n}}{\sqrt{n}}\right)\right]-\widetilde {\mathbb{E}}[\varphi_{k}(\xi)]|, \end{aligned}
we can easily check that (4) holds. □

## Notes

### Funding

The author received no specific funding for this work.

### Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

### Authors’ contributions

The author read and approved the final manuscript.

### Competing interests

The author declares that he has no competing interest.

## References

1. Cabre, X., Caffarelli, L.A.: Fully nonlinear elliptic partial differential equations. American Mathematical Society (1997).
2. Caffarelli, L.A.: Interior estimates for fully nonlinear equations. Ann. of Math. 130, 189–213 (1989).
3. Peng, S.: Filtration consistent nonlinear expectations and evaluations of contingent claims. Acta Mathematicae Applicatae Sinica, Engl. Ser. 20(2), 1–24 (2004).
4. Peng, S.: Nonlinear expectations and nonlinear Markov chains. Chin. Ann. Math. 26B(2), 159–184 (2005).
5. Peng, S.: G-expectation, G-Brownian motion and related stochastic calculus of Itô type. In: Benth et al. (eds.) Stochastic Analysis and Applications, The Abel Symposium 2005, Abel Symposia 2, pp. 541–567. Springer-Verlag (2007).
6. Peng, S.: Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stochastic Processes and their Applications 118(12), 2223–2253 (2008).
7. Wang, L.: On the regularity of fully nonlinear parabolic equations: II. Comm. Pure Appl. Math. 45, 141–178 (1992).