1 Introduction

The growing interest in fractional calculus stems from its wide applicability to many real-world phenomena, such as anomalous diffusion [1, 2], random and disordered media [3, 4], finance [5,6,7], electrical circuits [8], automatic control systems [9], etc. In contrast to classical calculus, the tools of fractional calculus characterize evolution processes more precisely and give rise to more realistic mathematical models of physical problems.

Fractional neural networks are now considered powerful tools, as they can model simple systems [10, 11] or produce a content-addressable memory using the collective properties of the network [12]. In order to enhance the essential performance of neural activity, the existence and stability of solutions of the neural networks is the first prerequisite. In the last few years, several results on this topic have been obtained, including finite-time stability [13], asymptotic stability [9, 14,15,16], exponential stability [9, 17], and Mittag-Leffler stability [18,19,20,21]. The general approach to stability analysis is based on Lyapunov's method (including the first and second methods of Lyapunov) together with other mathematical techniques.

Multi-scale stochastic fractional differential systems have recently received considerable attention; see, for instance, [6, 9, 22]. In a recent article, Ding and Nieto [23] obtained the analytical solution of multi-time scale fractional stochastic differential equations driven by fractional Brownian noise. In recent years, many researchers have investigated stochastic systems; for some important results on the existence and uniqueness of solutions to such systems, we refer the reader to the articles [24,25,26,27].

In this paper, we investigate neural networks modeled by the following multi-time scale fractional stochastic differential system:

$$\begin{aligned} \begin{gathered}{{d}}Y(t)+d\mathcal{I}^{1-\alpha}_{0^{+}}\bigl( \mathcal{A}_{1}Y(t)-Y_{0} \bigr)= \bigl( \mathcal{A}_{2} Y(t)+f\bigl(Y(t)\bigr) \bigr)\,dt+\varPi(t)\,d \mathcal{A}_{2}(t),\\ Y(0)=Y_{0},\end{gathered} \end{aligned}$$
(1.1)

where \(\mathcal{I}^{1-\alpha}_{0^{+}}\) is the Riemann–Liouville fractional integral operator, \(\frac{1}{2}<\alpha\leq1\), \(\mathcal{A}_{1}, \mathcal{A}_{2} \in\mathbb{R}^{n\times n}\), \(f: \mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a nonlinear function, \(\varPi(t)\) is a matrix describing the intensity of the perturbation, \(\mathcal{A}_{2}(t)\) denotes Brownian noise on \([0,T]\) (see [23]), and \(Y_{0}\) is a real-valued random variable on a complete probability space \((\varOmega, \mathcal{F}, \mathbb{P})\). If \(\varPi\equiv0\) and \(Y_{0}\) is constant, then system (1.1) becomes deterministic and reduces to a multi-time scale fractional differential system.

We arrange the rest of this paper as follows. In Sect. 2, we recall some preliminary concepts of Brownian noise and fractional calculus related to our work. Section 3 contains the main results. An example illustrating the obtained theory is presented in Sect. 4. Concluding remarks are given in Sect. 5.

2 Preliminaries

In this section, we outline some preliminary concepts of fractional calculus [28] and Brownian noise [29, 30] related to our work.

Definition 2.1

Let \(\alpha>0\) and \(f: (0,\infty)\rightarrow{\mathbb {R}}\) be integrable. Then the Riemann–Liouville fractional integral of order α for the function f is defined as

$$\begin{aligned} \mathcal{I}^{\alpha}_{0^{+}}f(t)=\frac{1}{\varGamma(\alpha)} \int^{t}_{0}(t-\tau )^{\alpha-1}f(\tau)\,d\tau,\quad t>0, \end{aligned}$$

where \(\varGamma(\cdot)\) is the gamma function.

It is well known that the following properties hold for the Riemann–Liouville fractional integral operators [19, 31]:

  1. (i)

    \(\mathcal{I}^{\alpha}_{0^{+}}\) is nondecreasing with respect to f, that is, \(f\geq g\) implies \(\mathcal{I}^{\alpha}_{0^{+}}f\geq\mathcal{I}^{\alpha}_{0^{+}}g\);

  2. (ii)

    \(\mathcal{I}^{\alpha}_{0^{+}}\) is compact, and \(\sigma (\mathcal{I}^{\alpha}_{0^{+}})=\{0\}\), where σ is the spectral set of the operator \(\mathcal{I}^{\alpha}_{0^{+}}\);

  3. (iii)

    \(\mathcal{I}^{\alpha}_{0^{+}}\mathcal{I}^{\beta }_{0^{+}}=\mathcal{I}^{\beta}_{0^{+}}\mathcal{I}^{\alpha}_{0^{+}}=\mathcal {I}^{\alpha+\beta}_{0^{+}}\);

  4. (iv)

    for the real-valued continuous function f,

    $$\begin{aligned} \bigl\Vert \mathcal{I}^{\alpha}_{0^{+}}f \bigr\Vert \leq \mathcal{I}^{\alpha}_{0^{+}} \Vert f \Vert , \end{aligned}$$
    (2.1)

    where \(\alpha, \beta>0\) and \(\|\cdot\|\) denotes an arbitrary norm.
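The definition and the semigroup property (iii) can be spot-checked numerically. The following sketch is illustrative only: the function name, step counts, and the midpoint quadrature after the substitution \(u=(t-\tau)^{\alpha}\) (which removes the kernel singularity at \(\tau=t\)) are our own choices. It verifies \(\mathcal{I}^{1/2}_{0^{+}}t = t^{3/2}/\varGamma(5/2)\) and compares \(\mathcal{I}^{0.3}_{0^{+}}\mathcal{I}^{0.4}_{0^{+}}1\) against \(\mathcal{I}^{0.7}_{0^{+}}1\) at \(t=1\):

```python
import math

def rl_integral(f, alpha, t, n=2000):
    # I^alpha f(t) computed after the substitution u = (t - tau)^alpha,
    # which removes the kernel singularity at tau = t:
    #   I^alpha f(t) = 1/(alpha*Gamma(alpha)) * int_0^{t^alpha} f(t - u^(1/alpha)) du
    upper = t ** alpha
    h = upper / n
    total = sum(f(t - ((i + 0.5) * h) ** (1.0 / alpha)) for i in range(n))
    return h * total / (alpha * math.gamma(alpha))

# Definition check: I^{1/2} applied to f(t) = t gives t^{3/2} / Gamma(5/2)
approx = rl_integral(lambda s: s, 0.5, 1.0)
exact = 1.0 / math.gamma(2.5)

# Property (iii): I^{0.3}(I^{0.4} 1)(1) should equal (I^{0.7} 1)(1) = 1/Gamma(1.7)
inner = lambda s: rl_integral(lambda r: 1.0, 0.4, s, n=400)
nested = rl_integral(inner, 0.3, 1.0, n=400)
direct = 1.0 / math.gamma(1.7)
```

Because the substitution removes the singularity, a plain midpoint rule converges quickly even for this weakly singular kernel.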

Definition 2.2

The Riemann–Liouville fractional derivative of order \(\alpha\in (m-1, m]\), \(m\in\mathbb{N}^{+}\) for a function \(f\in C([0,T])\) is defined as

$$\begin{aligned} D^{\alpha}_{0^{+}}f(t)=\frac{1}{\varGamma(m-\alpha)}\frac{d^{m}}{dt^{m}} \int ^{t}_{0}(t-\tau)^{m-\alpha-1}f(\tau)\,d\tau,\quad t>0, \end{aligned}$$

while the Caputo fractional derivative \(({}^{C}D^{\alpha}_{0^{+}}f)(t)\) of order \(\alpha>0\) is defined by

$$\begin{aligned} \bigl({}^{C}D^{\alpha}_{0^{+}}f\bigr) (t)=D^{\alpha}_{0^{+}} \Biggl(f(t)-\sum _{i=0}^{m-1}\frac {f^{(i)}(0)}{i!} t^{i} \Biggr),\quad t>0. \end{aligned}$$
(2.2)

Note that if \(f^{(i)}(0)=0\), \(i=0,1,\ldots, m-1\), then \(({}^{C}D^{\alpha}_{0^{+}}f)(t)\) coincides with \((D^{\alpha}_{0^{+}}f)(t)\). Moreover, the Riemann–Liouville fractional derivative is unsuitable for some physical problems, as it requires initial data involving noninteger order derivatives of the function at \(t=0^{+}\). This issue does not arise in applications of the Caputo fractional derivative.

If f is continuously differentiable up to order m, then the Caputo fractional derivative can be defined as

$$\bigl({}^{C}D^{\alpha}_{0^{+}}f\bigr) (t)= \frac{1}{\varGamma(m-\alpha)} \int^{t}_{0}(t-\tau )^{m-\alpha-1}f^{(m)}( \tau)\,d\tau,\quad t>0, m-1< \alpha\leq m, m\in\mathbb{N}^{+}, $$

which is known as a smooth fractional derivative.
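The smooth form can be spot-checked numerically. In the sketch below (the function name, step count, and the singularity-removing substitution \(u=(t-\tau)^{m-\alpha}\) are illustrative choices, valid for non-integer α), we verify \({}^{C}D^{1/2}_{0^{+}}t^{2}=\varGamma(3)t^{3/2}/\varGamma(5/2)\) at \(t=1\):

```python
import math

def caputo_derivative(f_m, alpha, t, n=2000):
    # Caputo derivative for NON-integer alpha in (m-1, m), taking f_m = f^(m);
    # the substitution u = (t - tau)^(m - alpha) removes the kernel singularity.
    m = math.ceil(alpha)
    beta = m - alpha
    upper = t ** beta
    h = upper / n
    total = sum(f_m(t - ((i + 0.5) * h) ** (1.0 / beta)) for i in range(n))
    return h * total / (beta * math.gamma(beta))

# For f(t) = t^2 and alpha = 1/2 (so m = 1 and f'(s) = 2s):
#   C D^{1/2} t^2 = Gamma(3)/Gamma(5/2) * t^{3/2}
approx = caputo_derivative(lambda s: 2.0 * s, 0.5, 1.0)
exact = math.gamma(3) / math.gamma(2.5)
```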

Property 2.1

Let \(m-1<\alpha\leq m\), where \(m\in\mathbb{N}^{+}\). Then the following formulae hold:

$$\begin{aligned} \bigl({D^{\alpha}_{0^{+}}}\mathcal{I}^{\alpha}_{0^{+}}f \bigr) (t)=f(t), \qquad\bigl(\mathcal {I}^{\alpha}_{0^{+}}{D^{\alpha}_{0^{+}}}f \bigr) (t)=f(t)-\sum_{k=1}^{m} \frac {(\mathcal{I}^{m-\alpha}_{0^{+}}f)^{(m-k)}(0^{+})}{\varGamma(\alpha -k+1)}t^{\alpha-k},\quad t>0. \end{aligned}$$

The Laplace transforms of the Riemann–Liouville fractional derivative and Caputo derivative are

$$\begin{gathered} \bigl(\mathcal{L}D^{\alpha}_{0^{+}}f\bigr) (s)=s^{\alpha}( \mathcal{L}f) (s)-\sum_{i=0}^{m-1}s^{i} \bigl(D^{\alpha-i-1}_{0^{+}}f\bigr) \bigl(0^{+}\bigr),\quad m-1< \alpha\leq m, m\in\mathbb{N}^{+}, \\ \bigl(\mathcal{L}{}^{C}D^{\alpha}_{0^{+}}f\bigr) (s)=s^{\alpha}(\mathcal{L}f) (s)-\sum_{i=0}^{m-1}s^{\alpha-i-1}f^{(i)} \bigl(0^{+}\bigr),\quad m-1< \alpha\leq m, m\in \mathbb{N}^{+}.\end{gathered} $$

Contrary to the Riemann–Liouville fractional derivative, one can notice that only integer order derivatives of function f appear in the Laplace transform of the Caputo fractional derivative.

In relation to the Brownian noise, let us recall the Itô formula.

Lemma 2.3

(Itô formula)

Let \(Y(t)\) be such that \(dY(t)=u(t)\,dt+v(t)\,d\mathcal{A}_{2}(t)\), where u, v are given functions. Furthermore, assume that \(f^{\prime}(Y)\) and \(f^{\prime\prime}(Y)\) exist and are continuous for \(Y\in \mathbb{R}\). Then

$$\begin{aligned} df\bigl(Y(t)\bigr)= \biggl(f^{\prime}\bigl(Y(t)\bigr)u(t)+ \frac{1}{2}f^{\prime\prime }\bigl(Y(t)\bigr)v^{2}(t) \biggr) \,dt+f^{\prime}\bigl(Y(t)\bigr)v(t)\,d\mathcal{A}_{2}(t). \end{aligned}$$
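The formula can be illustrated with a pathwise Monte Carlo sketch for \(f(Y)=Y^{2}\), \(u\equiv0\), \(v\equiv1\) (path and step counts are illustrative choices): the Itô expansion \(df=dt+2Y\,d\mathcal{A}_{2}\), accumulated along each discretized Brownian path, should reproduce the terminal value \(B(T)^{2}\) up to \(O(\sqrt{\Delta t})\) discretization noise.

```python
import math
import random

random.seed(7)
T, n_steps, n_paths = 1.0, 1000, 500   # illustrative discretization
dt = T / n_steps

# For f(Y) = Y^2 with dY = dB (u = 0, v = 1), the Ito formula gives
# df = dt + 2Y dB; compare the accumulated right-hand side with B(T)^2.
errors = []
for _ in range(n_paths):
    b = 0.0          # current Brownian value
    rhs = 0.0        # accumulated (f'' v^2 / 2) dt + f' v dB
    for _ in range(n_steps):
        db = random.gauss(0.0, math.sqrt(dt))
        rhs += dt + 2.0 * b * db
        b += db
    errors.append(b * b - rhs)

mean_error = sum(errors) / n_paths
```

The per-path discrepancy is exactly \(\sum_i (\Delta B_i^2 - \Delta t)\), which averages to zero; without the Itô correction term dt, the comparison would be off by T on average.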

Now we give a generalized form of the Itô formula [24] and an integral inequality with singular kernel [32].

Lemma 2.4

Let \(\frac{1}{2}<\alpha<1\), and \(Y(t)\) satisfy

$$\begin{aligned} dY(t)=b(t,Y)\,dt+\sigma_{1}(t,Y)\,d\mathcal{A}_{2}(t)+ \sigma_{2}(t,Y) (dt)^{\alpha}. \end{aligned}$$

Furthermore, let \(V\in C(\mathbb{R}_{+}\times\mathbb{R}^{n}, \mathbb{R}^{m})\) be such that \(V_{t}\), \(V_{Y}\), \(V_{YY}\) exist and are continuous for \((t,Y)\in\mathbb{R}_{+}\times\mathbb{R}^{n}\), where \(V_{Y}\) is the \(m\times n\) Jacobian matrix of \(V(t,Y)\) and \(V_{YY}\) is the \(n\times n\) Hessian matrix whose elements are m-dimensional vectors. Then

$$\begin{aligned} dV(t,Y) =& \biggl(V_{t}(t,Y)+V_{Y}(t,Y)b(t,Y)+ \frac{1}{2}\sigma_{1}(t,Y)^{\mathrm{T}}V_{YY}(t,Y) \sigma_{1}(t,Y) \biggr)\,dt \\ &{}+V_{Y}(t,Y)\sigma_{1}(t,Y)\,d\mathcal{A}_{2}(t)+V_{Y}(t,Y) \sigma _{2}(t,Y) (dt)^{\alpha}. \end{aligned}$$
(2.3)

Lemma 2.5

Let \(0<\beta<1\), and consider the time interval \([0, T )\), where \(T<\infty\). Assume that a is a nonnegative locally integrable function on \([0, T )\), and b and g are nonnegative nondecreasing continuous functions defined on \([0,T)\), with both bounded by a positive constant M. If \(v(t)\) is nonnegative and locally integrable on \([0, T )\) satisfying

$$\begin{aligned} v(t)\leq a(t)+b(t) \int_{0}^{t}v(\tau)\,d\tau+g(t) \int_{0}^{t}(t-\tau)^{\beta -1}v(\tau)\,d\tau, \end{aligned}$$

then

$$\begin{aligned} v(t)\leq a(t)+\sum_{n=1}^{\infty}\sum _{i=0}^{n}\left ( \textstyle\begin{array}{c} n\\ i \end{array}\displaystyle \right )b^{n-i}(t)g^{i}(t)\frac{(\varGamma(\beta))^{i}}{\varGamma(i\beta +n-i)} \int_{0}^{t}(t-\tau)^{i\beta+n-i-1}a(\tau)\,d\tau. \end{aligned}$$
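For the special case \(a\equiv1\), \(b\equiv0\), and constant g, the double sum collapses to the classical Mittag-Leffler bound \(v(t)\leq E_{\beta}(g\varGamma(\beta)t^{\beta})\), and for the Volterra equation \(v(t)=1+g\int_{0}^{t}(t-\tau)^{\beta-1}v(\tau)\,d\tau\) (the equality case) the bound is attained. A numerical sketch, with step count and the product-integration scheme as illustrative choices:

```python
import math

beta, g0, T, n = 0.5, 1.0, 1.0, 800   # illustrative values
h = T / n

# Solve v(t) = 1 + g0 * int_0^t (t - s)^(beta - 1) v(s) ds by product
# integration: v is frozen at its left value on each step and the singular
# kernel is integrated exactly. Since v is increasing, this underestimates v.
v = [1.0]
for k in range(1, n + 1):
    tk = k * h
    conv = sum(v[j] * ((tk - j * h) ** beta - (tk - (j + 1) * h) ** beta) / beta
               for j in range(k))
    v.append(1.0 + g0 * conv)

# With a = 1 and b = 0, Lemma 2.5 collapses to the Mittag-Leffler bound
# v(t) <= E_beta(g0 * Gamma(beta) * t^beta), computed here as a truncated series.
x = g0 * math.gamma(beta) * T ** beta
ml_bound = sum(x ** k / math.gamma(beta * k + 1) for k in range(80))
```

The numerically computed \(v(T)\) stays just below the Mittag-Leffler bound, consistent with the lemma.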

3 Main results

3.1 Existence and uniqueness of solutions for FNN

Let \(C((0,T],L^{2}(\varOmega;\mathbb{R}^{n}))=C((0,T],L^{2}(\varOmega,\mathcal {F},\mathbb{P};\mathbb{R}^{n}))\) denote the Banach space of all continuous functions from \((0,T]\) into \(L^{2}(\varOmega;\mathbb{R}^{n})\) equipped with the sup norm. In our analysis, \(\mathbb{E}\) stands for the mathematical expectation.

Now we state the assumption needed in the sequel.

Condition 3.1

Let \(f(Y(t))\) be a real-valued continuous function, and there exist positive constants L, M such that

$$\bigl\Vert f\bigl(Y_{1}(t)\bigr)-f\bigl(Y_{2}(t)\bigr) \bigr\Vert \leq L \bigl\Vert Y_{1}(t)-Y_{2}(t) \bigr\Vert \quad\forall Y_{1}, Y_{2}\in \mathbb{R}^{n}, $$

and

$$\bigl\Vert f\bigl(Y(t)\bigr) \bigr\Vert ^{2}\leq M \bigl\Vert Y(t) \bigr\Vert ^{2}. $$
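For later reference, the activation \(f=\tanh\) used in Sect. 4 satisfies Condition 3.1 with \(L=M=1\), since its derivative \(\operatorname{sech}^{2}\) is bounded by 1 and \(|\tanh x|\leq|x|\). A quick numerical spot-check on a grid (grid range and spacing are illustrative choices):

```python
import math

# Spot-check Condition 3.1 for f = tanh: Lipschitz constant L = 1 and
# growth constant M = 1, verified on a grid of sample points.
grid = [0.1 * i - 5.0 for i in range(101)]
lipschitz_ok = all(
    abs(math.tanh(a) - math.tanh(b)) <= abs(a - b) + 1e-12
    for a in grid for b in grid
)
growth_ok = all(math.tanh(x) ** 2 <= x * x + 1e-12 for x in grid)
```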

Theorem 3.1

Let \(f(Y)\) satisfy Condition 3.1, and let \(\lim_{T\rightarrow \infty}\mathbb{E}\int_{0}^{T}\|\varPi(t)\|^{2}\,dt<\infty\). Then, for any \(Y_{0}\in L^{2}(\varOmega;\mathbb{R}^{n})\), system (1.1) has a unique solution.

Proof

Note that system (1.1) is equivalent to the integral system

$$\begin{aligned} Y(t) =&Y_{0} \biggl(1+\frac{t^{1-\alpha}}{\varGamma(2-\alpha)} \biggr)- \frac {1}{\varGamma(1-\alpha)} \int_{0}^{t}(t-\tau)^{-\alpha} \mathcal{A}_{1} Y(\tau)\, d\tau \\ &{}+ \int_{0}^{t} \bigl(\mathcal{A}_{2} Y( \tau)+f\bigl(Y(\tau)\bigr) \bigr)\,d\tau+ \int_{0}^{t}\varPi(\tau)\,d\mathcal{A}_{2}( \tau). \end{aligned}$$
(3.1)

Thus we only need to prove that system (3.1) has a unique solution in the space \(C((0,T],L^{2}(\varOmega;\mathbb{R}^{n}))\).

Define an operator \(\mathcal{R}\) on the space \(C((0,T],L^{2}(\varOmega ;\mathbb{R}^{n}))\) as

$$\begin{aligned} (\mathcal{R}Y) (t) =&Y_{0} \biggl(1+\frac{t^{1-\alpha}}{\varGamma(2-\alpha)} \biggr)- \frac{1}{\varGamma(1-\alpha)} \int_{0}^{t}(t-\tau)^{-\alpha} \mathcal{A}_{1} Y(\tau)\,d\tau \\ &{}+ \int_{0}^{t} \bigl(\mathcal{A}_{2} Y( \tau)+f\bigl(Y(\tau)\bigr) \bigr)\,d\tau+ \int_{0}^{t}\varPi(\tau)\,d\mathcal{A}_{2}( \tau). \end{aligned}$$
(3.2)

Step 1. We first show that \(\mathcal{R}\) maps \(C((0,T],L^{2}(\varOmega;\mathbb{R}^{n}))\) into itself. For sufficiently small \(\delta>0\), we apply the inequality \(|a+b|^{2}\leq 2|a|^{2}+2|b|^{2}\) together with Hölder’s inequality to obtain

$$\begin{aligned} &\mathbb{E} \biggl\Vert \int_{0}^{t+\delta}(t+\delta-s)^{\alpha-1}\mathcal {A}_{1} Y(s)\,ds- \int_{0}^{t}(t-s)^{\alpha-1} \mathcal{A}_{1} Y(s)\,ds \biggr\Vert ^{2} \\ &\quad\leq2 \int_{0}^{t} \bigl\Vert (t+\delta-s)^{\alpha-1}-(t-s)^{\alpha-1} \bigr\Vert ^{2}\,ds\cdot\mathbb{E} \int_{0}^{t} \bigl\Vert \mathcal{A}_{1} Y(s) \bigr\Vert ^{2}\,ds \\ &\qquad{}+2 \int_{t}^{t+\delta} \bigl\Vert (t+\delta-s)^{\alpha-1} \bigr\Vert ^{2}\,ds\cdot \mathbb{E} \int_{t}^{t+\delta} \bigl\Vert \mathcal{A}_{1} Y(s) \bigr\Vert ^{2}\,ds\\ &\quad=:I_{1}+I_{2}. \end{aligned}$$

For \(I_{1}\), note that \(\sup_{t\in(0,T]}\mathbb{E}\|Y(t)\|^{2}\) is bounded since \(Y\in C((0,T],L^{2}(\varOmega;\mathbb{R}^{n}))\). As \(t^{\alpha-1}\in L^{2}((0,T],\mathbb{R})\), we have \(I_{1}\rightarrow0\) as \(\delta\rightarrow0\). Similarly, for \(I_{2}\), we have

$$\begin{aligned} & \int_{t}^{t+\delta}(t+\delta-s)^{2\alpha-2}\,ds\cdot \mathbb {E} \int_{t}^{t+\delta} \bigl\Vert \mathcal{A}_{1} Y(s) \bigr\Vert ^{2}\,ds \leq\sup_{s\in(0,T]}\mathbb{E} \bigl( \bigl\Vert Y(s) \bigr\Vert ^{2}\bigr) \Vert \mathcal{A}_{1} \Vert ^{2}\frac {\delta^{2\alpha}}{2\alpha-1}. \end{aligned}$$

Since \(\frac{1}{2}<\alpha<1\) and \(\sup_{t\in(0,T]}\mathbb{E}\|Y(t)\| ^{2}\) is bounded, we have \(I_{2}\rightarrow0\) as \(\delta\rightarrow0\). Therefore, \(\mathcal{R}Y\) is a continuous stochastic process on \((0,T]\) in the sense of mean square.

On the other hand, by Hölder’s inequality, we obtain the estimate

$$\begin{aligned} \bigl\Vert (\mathcal{R}Y) (t) \bigr\Vert ^{2} \leq& 4 \biggl( \biggl(\frac{\varGamma(2-\alpha )+t^{1-\alpha}}{\varGamma(2-\alpha)} \biggr)^{2} \Vert Y_{0} \Vert ^{2}+\frac{1}{\varGamma (1-\alpha)} \biggl\Vert \int_{0}^{t}(t-s)^{-\alpha} \mathcal{A}_{1} Y(s)\,ds \biggr\Vert ^{2} \\ &{}+ \biggl\Vert \int_{0}^{t}\bigl(\mathcal{A}_{2} Y(s)+f \bigl(Y(s)\bigr)\bigr)\,ds \biggr\Vert ^{2}+ \biggl\Vert \int _{0}^{t}\varPi(s)\,d\mathcal{A}_{2}(s) \biggr\Vert ^{2} \biggr) \\ \leq&4 \biggl( \biggl(\frac{\varGamma(2-\alpha)+t^{1-\alpha}}{\varGamma(2-\alpha )} \biggr)^{2} \Vert Y_{0} \Vert ^{2}+\frac{1}{\varGamma(1-\alpha)} \biggl( \int_{0}^{t}(t-s)^{-\alpha } \bigl\Vert \mathcal{A}_{1} Y(s) \bigr\Vert \,ds \biggr)^{2} \\ &{}+ \biggl( \int_{0}^{t} \bigl\Vert \mathcal{A}_{2} Y(s)+f\bigl(Y(s)\bigr) \bigr\Vert \,ds \biggr)^{2}+ \biggl\Vert \int _{0}^{t}\varPi(s)\,d\mathcal{A}_{2}(s) \biggr\Vert ^{2} \biggr) \\ \leq&4 \biggl( \biggl(\frac{\varGamma(2-\alpha)+t^{1-\alpha}}{\varGamma(2-\alpha )} \biggr)^{2} \Vert Y_{0} \Vert ^{2} \\ &{}+ \biggl(\frac{t^{2(1-\alpha)}}{\varGamma(2-\alpha)} \Vert \mathcal{A}_{1} \Vert ^{2} +\bigl( \Vert \mathcal{A}_{2} \Vert ^{2}+M\bigr)t \biggr)\sup _{0\leq s\leq t} \bigl\Vert Y(s) \bigr\Vert ^{2}\\ &{}+ \biggl\Vert \int_{0}^{t}\varPi(s)\,d\mathcal{A}_{2}(s) \biggr\Vert ^{2} \biggr). \end{aligned}$$

Taking the expectation of the both sides of the above inequality and using Itô’s isometry, we get

$$\begin{aligned} \mathbb{E} \bigl\Vert (\mathcal{R}Y) (t) \bigr\Vert ^{2} \leq&4 \biggl( \biggl(\frac{\varGamma(2-\alpha)+t^{1-\alpha}}{\varGamma(2-\alpha )} \biggr)^{2}\mathbb{E} \Vert Y_{0} \Vert ^{2}\\ &{}+ \biggl(\frac{t^{2(1-\alpha)}}{\varGamma(2-\alpha )} \Vert \mathcal{A}_{1} \Vert ^{2}+\bigl( \Vert \mathcal{A}_{2} \Vert ^{2}+M\bigr)t \biggr)\mathbb{E}\sup _{0\leq s\leq t} \bigl\Vert Y(s) \bigr\Vert ^{2} \\ &{}+\mathbb{E} \int_{0}^{t} \bigl\Vert \varPi(s) \bigr\Vert ^{2}\,ds \biggr). \end{aligned}$$

As \(\frac{1}{2}<\alpha<1\), \(\sup_{t\in(0,T]}\mathbb{E}\|(\mathcal {R}Y)(t)\|^{2}<\infty\) for any \(Y\in C((0,T],L^{2}(\varOmega;\mathbb{R}^{n}))\). So the operator \(\mathcal{R}\) maps \(C((0,T],L^{2}(\varOmega;\mathbb{R}^{n}))\) into itself.

Step 2. We show that the sequence \(\{Y^{(k)}\}\) is a Cauchy sequence with

$$\begin{aligned} Y^{(k+1)}(t) =& Y_{0} \biggl(1+\frac{t^{1-\alpha}}{\varGamma(2-\alpha)} \biggr)- \frac{1}{\varGamma(1-\alpha)} \int_{0}^{t}(t-\tau)^{-\alpha}{ \mathcal{A}_{1}} Y^{(k)}(\tau)\,d\tau \\ &{}+ \int_{0}^{t} \bigl({\mathcal{A}_{2}} Y^{(k)}(\tau)+f\bigl(Y^{(k)}(\tau)\bigr) \bigr)\,d\tau+ \int_{0}^{t}\varPi(\tau)\,d{\mathcal {A}_{2}}(\tau),\quad k=0,1,2,\ldots. \end{aligned}$$

Letting \(Y^{(0)}(t)\equiv Y_{0}\) and using Condition 3.1 and Hölder’s inequality, we obtain

$$\begin{aligned} \bigl\Vert Y^{(k+1)}(t)-Y^{(k)}(t) \bigr\Vert ^{2} \leq&\frac{3}{\varGamma(1-\alpha)} \biggl\Vert \int_{0}^{t}(t-s)^{-\alpha}\mathcal {A}_{1} \bigl(Y^{(k-1)}(s)-Y^{(k)}(s) \bigr)\,ds \biggr\Vert ^{2} \\ &{}+3 \biggl\Vert \int_{0}^{t} \mathcal{A}_{2} \bigl(Y^{(k)}(s)-Y^{(k-1)}(s) \bigr)\,ds \biggr\Vert ^{2} \\ &{}+3 \biggl\Vert \int_{0}^{t} \bigl(f\bigl(Y^{(k)}(s) \bigr)-f\bigl(Y^{(k-1)}(s)\bigr) \bigr)\,ds \biggr\Vert ^{2} \\ \leq&\frac{3 \Vert \mathcal{A}_{1} \Vert ^{2}t^{1-\alpha}}{\varGamma(2-\alpha)} \int _{0}^{t}(t-s)^{-\alpha} \bigl\Vert Y^{(k-1)}(s)-Y^{(k)}(s) \bigr\Vert ^{2}\,ds \\ &{}+3\bigl( \Vert \mathcal{A}_{2} \Vert ^{2}+L^{2} \bigr)t \int_{0}^{t} \bigl\Vert Y^{(k-1)}(s)-Y^{(k)}(s) \bigr\Vert ^{2}\,ds. \end{aligned}$$
(3.3)

For convenience, we set

$$\begin{aligned} \epsilon^{(k+1)}(t)=\mathbb{E} \bigl\Vert Y^{(k+1)}(t)-Y^{(k)}(t) \bigr\Vert ^{2} \end{aligned}$$

and define two operators \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) as

$$\begin{aligned}& (\mathcal{J}_{1}\varphi) (t)=\frac{3 \Vert \mathcal{A}_{1} \Vert ^{2}T^{1-\alpha }}{\varGamma(2-\alpha)} \int_{0}^{t}(t-s)^{-\alpha}\varphi(s)\,ds, \\& (\mathcal{J}_{2}\varphi) (t)=3\bigl( \Vert \mathcal{A}_{2} \Vert ^{2}+L^{2}\bigr)T \int_{0}^{t}\varphi(s)\,ds. \end{aligned}$$

Then inequality (3.3) can be rewritten compactly as

$$\begin{aligned} \epsilon^{(k+1)}(t)\leq\bigl((\mathcal{J}_{1}+ \mathcal{J}_{2})\epsilon ^{(k)}\bigr) (t), \quad t\in(0,T]. \end{aligned}$$

Since \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) are nondecreasing with respect to \(\varphi\in C((0,T],\mathbb{R})\) (cf. property (i) of the Riemann–Liouville integral operators), iterating the above inequality yields

$$\begin{aligned} \epsilon^{(k+1)}(t)\leq\bigl((\mathcal{J}_{1}+ \mathcal{J}_{2})^{k}\epsilon ^{(0)}\bigr) (t), \quad k=1,2,\ldots. \end{aligned}$$

From the fact that the operators \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) commute and are compact on \(C([0,T],\mathbb{R})\), it follows that \(\sigma(\mathcal{J}_{1})=\sigma(\mathcal{J}_{2})=\{0\}\), where \(\sigma(\cdot)\) represents the spectral set of the operator. Thus the sequence \(\{Y^{(k)}\}\) is a Cauchy sequence, and the limit Y of \(\{Y^{(k)}\}\) corresponds to a solution of system (1.1).

Finally, we establish the uniqueness of solutions. Let \(Y_{1}\), \(Y_{2}\) be two solutions to system (1.1). In view of the elementary inequality \(|a+b+c|^{2}\leq 3|a|^{2}+3|b|^{2}+3|c|^{2}\), Condition 3.1, and Hölder’s inequality, we find that

$$\begin{aligned} \mathbb{E} \bigl\Vert Y_{1}(t)-Y_{2}(t) \bigr\Vert ^{2} \leq&\frac{3 \Vert \mathcal{A}_{1} \Vert ^{2}t^{1-\alpha}}{\varGamma(2-\alpha)} \int _{0}^{t}(t-s)^{-\alpha}\mathbb{E} \bigl\Vert Y_{1}(s)-Y_{2}(s) \bigr\Vert ^{2}\,ds \\ &{}+3\bigl( \Vert \mathcal{A}_{2} \Vert ^{2}+L^{2} \bigr)t \int_{0}^{t}\mathbb{E} \bigl\Vert Y_{1}(s)-Y_{2}(s) \bigr\Vert ^{2}\,ds, \end{aligned}$$

which, by Lemma 2.5, leads to

$$\begin{aligned} \mathbb{E} \bigl( \bigl\Vert Y_{1}(t)-Y_{2}(t) \bigr\Vert ^{2} \bigr)=0, \quad t\in[0,T]. \end{aligned}$$
(3.4)

In consequence, we get \(Y_{1}(t)=Y_{2}(t)\) on \([0,T]\) in the sense of mean square. Hence, system (1.1) has a unique solution in the sense of mean square. This completes the proof. □
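The successive approximations in Step 2 can be observed numerically on a scalar, deterministic specialization of (3.1) with \(\varPi\equiv0\). In the sketch below all parameter values, the grid size, and the product-integration quadrature for the singular kernel are illustrative choices; the sup-norm differences between consecutive Picard iterates should shrink rapidly:

```python
import math

alpha, a1, a2, y0 = 0.75, 0.5, -0.5, 1.0    # illustrative scalar parameters
T, n = 0.5, 200
h = T / n
t = [k * h for k in range(n + 1)]

def picard_map(y):
    # One application of the operator R from (3.2) with Pi = 0 (scalar,
    # deterministic case); the singular kernel (t - s)^(-alpha) is
    # integrated exactly over each step, the drift uses left endpoints.
    out = [y0]
    for k in range(1, n + 1):
        conv = sum(y[j] * ((t[k] - t[j]) ** (1 - alpha)
                           - (t[k] - t[j + 1]) ** (1 - alpha))
                   for j in range(k)) / (1 - alpha)
        drift = h * sum(a2 * y[j] + math.tanh(y[j]) for j in range(k))
        out.append(y0 * (1 + t[k] ** (1 - alpha) / math.gamma(2 - alpha))
                   - a1 * conv / math.gamma(1 - alpha) + drift)
    return out

y = [y0] * (n + 1)            # Y^(0) = Y_0
diffs = []
for _ in range(8):
    y_next = picard_map(y)
    diffs.append(max(abs(u - w) for u, w in zip(y_next, y)))
    y = y_next
```

The rapid decay of the differences reflects the Volterra structure: the iterated operator norms decay factorially, as in the spectral argument of Step 2.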

3.2 Asymptotic stability analysis

We analyze the asymptotic stability of system (1.1) via the Lyapunov functional method. Let us first define the asymptotic stability.

Definition 3.2

The neural network driven by Brownian noise is said to be asymptotically stable in the mean square sense provided that the solution \(Y(t,Y_{0})\) satisfies

$$\begin{aligned} \lim_{T\rightarrow\infty}\mathbb{E} \int_{0}^{T} \bigl\Vert Y(t,Y_{0}) \bigr\Vert ^{2}\,dt< \infty. \end{aligned}$$

In the following we study asymptotic stability for the case \(Y_{0}=0\). There is no loss of generality, since for a nonzero initial state \(Y_{0}\neq0\) the change of variable \(\widehat{Y}=Y-Y_{0}\) gives \(\widehat{Y}_{0}=0\), so the transformed system has a zero initial state.

Theorem 3.3

Let \(\mathcal{A}_{1}\) be a positive definite matrix and \(Y_{0}=0\). If there exist a positive diagonal matrix P and a constant \(\mu>0\) such that \(f(Y(t))\leq PY(t)\) and \(-\mu P^{\mathrm{T}}P-2\mathcal{A}_{2}-2P^{\mathrm{T}}\) is positive definite, then system (1.1) is asymptotically stable in the mean square sense.

Proof

Choose a Lyapunov functional given by

$$\begin{aligned}[b] V\bigl(t,Y(t)\bigr)={}&Y^{\mathrm{T}}(t)Y(t)+\frac{2}{\varGamma(1-\alpha)} \int_{0}^{t}(t-\tau )^{-\alpha}Y^{\mathrm{T}}( \tau)\mathcal{A}_{1} Y(\tau)\,d\tau\\&+\mu \int _{0}^{t}f^{\mathrm{T}}\bigl(Y(\tau)\bigr)f \bigl(Y(\tau)\bigr)\,d\tau,\end{aligned} $$
(3.5)

where Y is the solution of system (1.1). Observe that V is positive definite.

Applying the generalized Itô formula in Lemma 2.4 to V, we obtain

$$\begin{aligned} dV\bigl(t,Y(t)\bigr) =& \biggl(\mu f^{\mathrm{T}}\bigl(Y(t)\bigr)f \bigl(Y(t)\bigr)-\frac{2\alpha}{\varGamma (1-\alpha)} \int_{0}^{t}(t-\tau)^{-1-\alpha}Y^{\mathrm{T}}( \tau)\mathcal{A}_{1} Y(\tau)\,d\tau \\ &{}+2Y^{\mathrm{T}}(t) \mathcal{A}_{2} Y(t)+2Y^{\mathrm{T}}(t)f\bigl(Y(t)\bigr)+\frac{1}{2} \varPi^{\mathrm{T}}(t)\varPi(t) \biggr)\, dt \\ &{}+2Y^{\mathrm{T}}(t)\varPi(t)\,d\mathcal{A}_{2}(t)- \frac{2}{\varGamma(2-\alpha )}Y^{\mathrm{T}}(t)\mathcal{A}_{1}Y(t) (dt)^{1-\alpha}. \end{aligned}$$
(3.6)

Furthermore, we have

$$\begin{aligned} V\bigl(t,Y(t)\bigr) =&V(0,Y_{0})+ \int_{0}^{t} \bigl(\mu f^{\mathrm{T}}\bigl(Y( \tau)\bigr)f\bigl(Y(\tau )\bigr)+2Y^{\mathrm{T}}(\tau)\mathcal{A}_{2} Y(\tau)+2Y^{\mathrm{T}}(\tau)f\bigl(Y(\tau )\bigr) \bigr)\,d\tau \\ &{}+\frac{2}{\varGamma(1-\alpha)} \int_{0}^{t}(t-\tau)^{-\alpha}Y^{\mathrm{T}}(\tau)\mathcal{A}_{1} Y(\tau)\,d\tau+\frac{1}{2} \int_{0}^{t}\varPi^{\mathrm{T}}(\tau)\varPi( \tau)\,d\tau \\ &{}+2 \int_{0}^{t} Y^{\mathrm{T}}(\tau)\varPi(\tau) \,d\mathcal{A}_{2}(\tau) -\frac{2}{\varGamma(1-\alpha)} \int_{0}^{t}(t-\tau)^{-\alpha}Y^{\mathrm{T}}( \tau )\mathcal{A}_{1} Y(\tau)\,d\tau \\ =&V(0,Y_{0})+ \int_{0}^{t} \bigl(\mu f^{\mathrm{T}}\bigl(Y( \tau)\bigr)f\bigl(Y(\tau)\bigr)+2Y^{\mathrm{T}}(\tau)\mathcal{A}_{2} Y(\tau)+2Y^{\mathrm{T}}(\tau)f\bigl(Y(\tau)\bigr) \bigr)\,d\tau \\ &{}+\frac{1}{2} \int_{0}^{t}\varPi^{\mathrm{T}}(\tau)\varPi( \tau)\,d\tau +2 \int_{0}^{t}Y^{\mathrm{T}}(\tau)\varPi(\tau)\,d \mathcal{A}_{2}(\tau) \\ \leq&V(0,Y_{0})+ \int_{0}^{t}Y^{\mathrm{T}}(\tau) \bigl(\mu P^{\mathrm{T}}P+2\mathcal {A}_{2}+2P^{\mathrm{T}}\bigr)Y( \tau)\,d\tau+\frac{1}{2} \int_{0}^{t}\varPi^{\mathrm{T}}(\tau)\varPi( \tau)\,d\tau \\ &{}+2 \int_{0}^{t}Y^{\mathrm{T}}(\tau)\varPi(\tau)\,d \mathcal{A}_{2}(\tau). \end{aligned}$$
(3.7)

Note that \(\mathbb{E}\int_{0}^{t}Y^{\mathrm{T}}(\tau)\varPi(\tau)\,d\mathcal{A}_{2}(\tau)=0\). Taking the expectation of both sides of the above inequality leads to

$$ \begin{aligned}[b]\mathbb{E}V\bigl(t,Y(t)\bigr) \leq{}&\mathbb{E}V(0,Y_{0})+ \mathbb{E} \int_{0}^{t}Y^{\mathrm{T}}(\tau) \bigl(\mu P^{\mathrm{T}}P+2\mathcal{A}_{2}+2P^{\mathrm{T}}\bigr)Y(\tau) \,d\tau\\&+\frac {1}{2}\mathbb{E} \int_{0}^{t}\varPi^{\mathrm{T}}(\tau)\varPi( \tau)\,d\tau.\end{aligned} $$
(3.8)

Observe that

$$\begin{aligned} Y^{\mathrm{T}}(t) \bigl(\mu P^{\mathrm{T}}P+2 \mathcal{A}_{2}+2P^{\mathrm{T}}\bigr)Y(t)\leq - \lambda_{\min}(Q)Y^{\mathrm{T}}(t)Y(t), \end{aligned}$$
(3.9)

where \(Q=-\mu P^{\mathrm{T}}P-2\mathcal{A}_{2}-2P^{\mathrm{T}}\) is a positive definite matrix, and \(\lambda_{\min}(Q)\) stands for the smallest eigenvalue of Q.

From inequalities (3.8) and (3.9), we obtain

$$\begin{aligned} \mathbb{E} \int_{0}^{T} \bigl\Vert Y(t) \bigr\Vert ^{2}\,dt \leq&\frac{\mathbb{E} \Vert Y_{0} \Vert ^{2}+\frac {1}{2}\int_{0}^{T}\mathbb{E} \Vert \varPi(\tau) \Vert ^{2}\,d\tau-\mathbb {E}V(T,Y(T))}{\lambda_{\min}(Q)} \\ \leq&\frac{\mathbb{E} \Vert Y_{0} \Vert ^{2}+\frac{1}{2}\int_{0}^{T}\mathbb{E} \Vert \varPi(\tau ) \Vert ^{2}\,d\tau}{\lambda_{\min}(Q)}< \infty, \end{aligned}$$
(3.10)

which shows that system (1.1) is asymptotically stable in the mean square sense. The proof is finished. □

4 Numerical simulation

In this section, we give two examples to illustrate the effectiveness of the obtained stability result.
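Before presenting the examples, note that the integral form (3.1) suggests a direct simulation scheme. The following is a minimal scalar sketch only: all parameter values, the step count, and the product-integration/left-endpoint quadrature are our own illustrative choices, not necessarily the scheme used for the figures. It also produces a Monte Carlo estimate of \(\mathbb{E}\int_{0}^{T}\|Y(t)\|^{2}\,dt\) (cf. Definition 3.2):

```python
import math
import random

random.seed(1)
alpha, a1, a2, y0 = 0.75, 1.0, -3.0, 0.5   # illustrative scalar parameters
T, n, n_paths = 2.0, 200, 10
h = T / n

def pi_intensity(s):
    return 0.3 * math.exp(-s)               # illustrative noise intensity

def sample_path():
    # One path of the integral form (3.1), scalar case: the singular kernel
    # (t - s)^(-alpha) is integrated exactly over each step (product
    # integration), the drift uses left endpoints, and the stochastic term
    # uses Gaussian Brownian increments.
    db = [random.gauss(0.0, math.sqrt(h)) for _ in range(n)]
    y = [y0]
    for k in range(1, n + 1):
        tk = k * h
        conv = sum(y[j] * ((tk - j * h) ** (1 - alpha)
                           - (tk - (j + 1) * h) ** (1 - alpha))
                   for j in range(k)) / (1 - alpha)
        drift = h * sum(a2 * y[j] + math.tanh(y[j]) for j in range(k))
        noise = sum(pi_intensity(j * h) * db[j] for j in range(k))
        y.append(y0 * (1 + tk ** (1 - alpha) / math.gamma(2 - alpha))
                 - a1 * conv / math.gamma(1 - alpha) + drift + noise)
    return y

# Monte Carlo estimate of E int_0^T ||Y||^2 dt (cf. Definition 3.2)
paths = [sample_path() for _ in range(n_paths)]
ms_integral = sum(h * sum(v * v for v in p) for p in paths) / n_paths
```

With the strongly damping drift chosen here, the sample paths remain bounded and the mean-square integral stays finite, in line with the stability result.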

Example 4.1

Consider a two-neuron multi-scale neural network with Brownian noise. The network parameters are chosen as follows:

$$\begin{aligned} \mathcal{A}_{1}=\left [ \textstyle\begin{array}{c@{\quad}c} 1 & 1\\ 1 & 3 \end{array}\displaystyle \right ], \qquad \mathcal{A}_{2}=\left [ \textstyle\begin{array}{c@{\quad}c} -3 & -1\\ -1 & -10 \end{array}\displaystyle \right ], \qquad\varPi(t)=\left [ \textstyle\begin{array}{c@{\quad}c} \frac{\sqrt{3}}{2} & -\frac{1}{2}\\ \frac{1}{2} & \frac{\sqrt{3}}{2} \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{c} \mathrm{e}^{-t}\\ \mathrm{e}^{-t} \end{array}\displaystyle \right ], \end{aligned}$$
(4.1)

and \(f(Y(t))=\tanh(Y(t))=\frac{\mathrm{e}^{Y(t)}-\mathrm {e}^{-Y(t)}}{\mathrm{e}^{Y(t)}+\mathrm{e}^{-Y(t)}}\) applied componentwise, where \(Y(t)=[Y_{1}(t),Y_{2}(t)]^{\mathrm{T}}\). We set the initial state \(Y_{0}=[0,0]^{\mathrm{T}}\).

Using the given data, one can verify that \(\mathcal{A}_{1}\) is positive definite and that, with \(P=\operatorname{diag}[1, 3]\) and \(\mu=1\), the matrix \(-\mu P^{\mathrm{T}}P-2\mathcal{A}_{2}-2P^{\mathrm{T}}\) is positive definite. Then, according to Theorem 3.3, system (1.1) is asymptotically stable in the mean square sense. To illustrate the stability result, we plot Figs. 1–2: Fig. 1 presents the standard Brownian noise, and Fig. 2 shows the solution of system (1.1). From Fig. 2, one can observe that the numerical result agrees with the obtained stability result.
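The matrix hypotheses of Theorem 3.3 for this example can be checked in a few lines of pure Python; here `A1`, `A2` denote the two network matrices, \(\mu=1\) is an assumed choice (the theorem only requires some \(\mu>0\)), and positive definiteness of a possibly nonsymmetric matrix is tested via Sylvester's criterion on its symmetric part:

```python
# Network matrices of Example 4.1 and the diagonal matrix P from the text.
A1 = [[1.0, 1.0], [1.0, 3.0]]
A2 = [[-3.0, -1.0], [-1.0, -10.0]]
P = [[1.0, 0.0], [0.0, 3.0]]
mu = 1.0   # assumed choice of mu > 0

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def det(m):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_positive_definite(m):
    # x^T M x > 0 for all x != 0 depends only on the symmetric part of M;
    # test it via Sylvester's criterion (all leading principal minors > 0).
    k = len(m)
    s = [[(m[i][j] + m[j][i]) / 2.0 for j in range(k)] for i in range(k)]
    return all(det([row[:r] for row in s[:r]]) > 0 for r in range(1, k + 1))

ptp = matmul(transpose(P), P)
pt = transpose(P)
Q = [[-mu * ptp[i][j] - 2.0 * A2[i][j] - 2.0 * pt[i][j]
      for j in range(2)] for i in range(2)]   # evaluates to [[3, 2], [2, 5]]
```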

Figure 1

Standard Brownian noise

Figure 2

Stability of system with standard Brownian noise

Example 4.2

Consider a three-neuron multi-scale neural network with Brownian noise. The network parameters are chosen as follows:

$$ \mathcal{A}_{1}=\left [ \textstyle\begin{array}{c@{\quad}c@{\quad}c} 1 & 1 & 0\\ 1 & 2 & 1\\ 1 & 1 & 5 \end{array}\displaystyle \right ], \qquad \mathcal{A}_{2}=\left [ \textstyle\begin{array}{c@{\quad}c@{\quad}c} -2 & -1 & -1\\ -1 & -12 & 0\\ -1 & -3 & -5 \end{array}\displaystyle \right ], \qquad\varPi(t)=\left [ \textstyle\begin{array}{c@{\quad}c@{\quad}c} \frac{\sqrt{2}}{2} & -\frac{1}{2} & -\frac{1}{2}\\ \frac{1}{2} & \frac{\sqrt{2}}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & \frac{\sqrt{2}}{2} \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{c} \mathrm{e}^{-t}\\ \mathrm{e}^{-t}\\ \mathrm{e}^{-t} \end{array}\displaystyle \right ], $$
(4.2)

and \(f(Y(t))=\tanh(Y(t))=\frac{\mathrm{e}^{Y(t)}-\mathrm {e}^{-Y(t)}}{\mathrm{e}^{Y(t)}+\mathrm{e}^{-Y(t)}}\) applied componentwise, where \(Y(t)=[Y_{1}(t),Y_{2}(t),Y_{3}(t)]^{\mathrm{T}}\). We set the initial state \(Y_{0}=[0,0,0]^{\mathrm{T}}\).

Using the given data, one can verify that \(\mathcal{A}_{1}\) is positive definite and that, with \(P=\operatorname{diag}[1, 3, 2]\) and a suitably small \(\mu>0\) (e.g., \(\mu=0.1\)), the matrix \(-\mu P^{\mathrm{T}}P-2\mathcal{A}_{2}-2P^{\mathrm{T}}\) is positive definite. Then, according to Theorem 3.3, system (1.1) is asymptotically stable in the mean square sense. The solution of system (1.1) is shown in Fig. 3. From Fig. 3, one can observe that the numerical result agrees with the obtained stability result.
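The matrix condition of Theorem 3.3 for this three-neuron example can likewise be checked in pure Python; `A1`, `A2` denote the two network matrices, the value \(\mu=0.1\) is an assumed illustrative choice, and positive definiteness of a nonsymmetric matrix is tested via Sylvester's criterion on its symmetric part:

```python
# Network matrices of Example 4.2 and the diagonal matrix P from the text.
A1 = [[1.0, 1.0, 0.0], [1.0, 2.0, 1.0], [1.0, 1.0, 5.0]]
A2 = [[-2.0, -1.0, -1.0], [-1.0, -12.0, 0.0], [-1.0, -3.0, -5.0]]
P = [[1.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 2.0]]
mu = 0.1   # assumed small choice of mu > 0

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def det(m):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_positive_definite(m):
    # x^T M x > 0 for all x != 0 depends only on the symmetric part of M;
    # test it via Sylvester's criterion (all leading principal minors > 0).
    k = len(m)
    s = [[(m[i][j] + m[j][i]) / 2.0 for j in range(k)] for i in range(k)]
    return all(det([row[:r] for row in s[:r]]) > 0 for r in range(1, k + 1))

ptp = matmul(transpose(P), P)
pt = transpose(P)
Q = [[-mu * ptp[i][j] - 2.0 * A2[i][j] - 2.0 * pt[i][j]
      for j in range(3)] for i in range(3)]
```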

Figure 3

Stability of system with standard Brownian noise

5 Conclusions

In this work, we applied operator theory and a fixed-point (successive approximation) technique to obtain the existence and uniqueness of solutions for a multi-scale stochastic fractional neural network under some mild conditions. We then analyzed the asymptotic stability of the network by means of the Lyapunov functional (direct) method. The feasibility and effectiveness of the obtained stability result are verified by numerical simulation.

It is well known that Mittag-Leffler stability and exponential stability provide faster convergence rates than asymptotic stability near the origin. In our future work, we plan to investigate the stability of solutions to stochastic systems involving Caputo–Fabrizio type fractional derivatives with the aid of the Lyapunov method and integral inequalities. For some recent results on Caputo–Fabrizio differential equations, we refer the reader to the work of Baleanu and co-workers [33,34,35,36,37,38,39].