
# Numerical algorithm for nonlinear delayed differential systems of nth order

• Josef Rebenda
• Zdeněk Šmarda

## Abstract

The purpose of this paper is to propose a semi-analytical technique convenient for numerical approximation of solutions of the initial value problem for p-dimensional delayed and neutral differential systems with constant, proportional and time-varying delays. The algorithm is based on a combination of the method of steps and the differential transformation. Convergence analysis of the presented method is given as well. Applicability of the presented approach is demonstrated in two examples: a system of pantograph-type differential equations and a system of neutral functional differential equations with three types of delays. The accuracy of the results is compared to that obtained by the Laplace decomposition algorithm, the residual power series method and the Matlab package DDENSD. A comparison of computing times is also presented, showing the reliability and efficiency of the proposed technique.

## Keywords

Differential transformation; Method of steps; Delayed differential system; Multiple delays

## MSC

34K28; 34K07; 34K40; 65L03

## 1 Introduction

Systems of functional differential equations (FDEs), in particular delayed or neutral differential equations, are often used to model processes in the real world. To give some examples, we mention models in population dynamics [1], neuromechanics [2], machine tool vibrations [3], etc. Further models and details can be found, for instance, in monographs [4] and [5].

Semi-analytical methods expressing solutions to problems with delays in a series form have been studied in the last two decades. Methods such as the variational iteration method (VIM) [6], Adomian decomposition method (ADM) [7], homotopy perturbation method (HPM) [8], homotopy analysis method (HAM) [9] and also methods based on the Taylor theorem such as the differential transformation (DT) [10], Taylor collocation method [11] and Taylor polynomial method [12] have been developed to approximate solutions to different problems for FDEs. Other ways to use the series approach in solving FDEs are, e.g., the method of polynomial quasisolutions [13, 14], finite difference methods [15, 16], and the functional analytic technique (FAT) [17, 18].

The main aim of the work is to apply a combination of the method of steps and DT as a convenient tool for finding an approximate solution to the initial value problem for functional differential systems used in dynamical models. Convergence analysis and error estimates of the method are investigated as well. We give some experimental results in Sect. 4 to show that the algorithm produces reliable results with the same or better efficiency than the reference methods.

## 2 Methods

The main idea of our approach is to combine the differential transformation and general method of steps.

The differential transformation has been, and still is, an active research topic in recent years. As examples of recently published results, we mention the research papers [19, 20, 21, 22, 23]. These papers, among other publications, contain new algorithms and their applications to solving various problems involving differential equations.

### Definition 1

The differential transformation of a real function $$u(t)$$ at a point $$t_{0} \in \mathbb{R}$$ is $$\mathcal{D} \{ u(t) \} [t_{0}] = \{ U(k)[t _{0}] \}_{k=0}^{\infty }$$, where the kth component $$U(k)[t_{0}]$$ of the differential transformation of the function $$u(t)$$ at $$t_{0}$$ is defined as
$$U(k)[t_{0}] = \frac{1}{k!} \biggl[ \frac{d^{k}u(t)}{dt^{k}} \biggr] _{t=t_{0}},$$
(1)
assuming that the original function $$u(t)$$ is analytic.

### Definition 2

The inverse differential transformation of $$\{ U(k)[t_{0}] \}_{k=0} ^{\infty }$$ at $$t_{0}$$ is defined as
$$u(t) = \mathcal{D}^{-1} \bigl\{ \bigl\{ U(k)[t_{0}] \bigr\} _{k=0}^{\infty } \bigr\} [t_{0}]= \sum_{k=0}^{\infty }U(k)[t_{0}] (t-t_{0})^{k}.$$
(2)
In applications, the function $$u(t)$$ is usually expressed in the form of finite series
$$u(t) = \sum_{k=0}^{N}U(k)[t_{0}] (t-t_{0})^{k}.$$
(3)
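As a quick illustration (ours, not part of the original paper), formulas (1) and (3) can be exercised for the analytic function $$u(t)=\operatorname{e}^{t}$$, whose DT components at $$t_{0}=0$$ are $$U(k)=1/k!$$; the finite series (3) then recovers $$u(t)$$ to high accuracy:

```python
from math import exp, factorial

# DT components of u(t) = e^t at t0 = 0, obtained from definition (1):
# U(k) = u^(k)(0) / k! = 1 / k!
t0 = 0.0
U = [1 / factorial(k) for k in range(15)]

def u_approx(t, N):
    # truncated inverse transformation (3)
    return sum(U[k] * (t - t0) ** k for k in range(N + 1))
```

For instance, `u_approx(1.0, 14)` agrees with `exp(1.0)` to about ten decimal places, reflecting the factorial decay of the coefficients.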
In Sect. 4, we use the following transformation formulas, which are derived from (1) and (2) and proved in [24].

### Lemma 1

Assume that $$W(k)$$, $$U(k)$$ and $$U_{i} (k)$$ are the kth components of the differential transformations of functions $$w(t)$$, $$u(t)$$ and $$u_{i} (t)$$, $$i=1,2$$, at $$t_{0} \in \mathbb{R}$$, respectively, and let $$q, q_{j} \in (0,1)$$, $$j=1,2$$. Moreover, assume that $$t_{0}=0$$. Denote $$\mathbb{N}_{0} = \mathbb{N} \cup \{ 0 \}$$.
\begin{aligned}& \textit{If}\quad w(t) = { \frac{d^{n}u(t)}{dt^{n}}}, \quad \textit{then } W(k) = { \frac{(k+n)!}{k!}}U(k+n). \\& \textit{If} \quad w(t) = u_{1} (t) u_{2} (t),\quad \textit{then } W(k) = \sum_{l=0}^{k} U_{1} (l) U_{2} (k-l). \\& \textit{If}\quad w(t) = u(qt), \quad \textit{then } W(k) =q^{k} U(k). \\& \textit{If}\quad w(t) = u_{1}(q_{1}t)u_{2}(q_{2}t), \quad \textit{then } W(k) = \sum_{l=0}^{k} q_{1}^{l}q_{2}^{k-l}U_{1}(l)U_{2}(k-l). \\& \textit{If}\quad w(t) = { \frac{d^{m}u(qt)}{d(qt)^{m}}} = \frac{d^{m} u(t)}{dt ^{m}} \bigg\vert _{t=qt} = u^{(m)}(qt), \quad \textit{then } W(k) = { \frac{(k+m)!}{k!}}q^{k} U(k+m). \\& \textit{If}\quad w(t) = t^{n}, \quad \textit{then } W(k) =\delta (k-n), \textit{where } \delta ( k - n ) = \delta _{kn} \textit{ (Kronecker delta)}. \\& \textit{If}\quad w(t) = \operatorname {e}^{\lambda t}, \quad \textit{then } W(k) = { \frac{\lambda ^{k}}{k!}}. \\& \textit{If}\quad w(t) = \cos t, \quad \textit{then } {W(k) := C(k)} = \textstyle\begin{cases} (-1)^{\frac{k}{2}} \frac{1}{k!} & \textit{if } k=2n, n \in \mathbb{N} _{0}, \\ 0 & \textit{if } k=2n+1, n \in \mathbb{N}_{0}. \end{cases}\displaystyle \\& \textit{If}\quad w(t) = \sin t, \quad \textit{then } {W(k):= S(k)} = \textstyle\begin{cases} (-1)^{\frac{k-1}{2}} \frac{1}{k!} & \textit{if } k=2n+1, n \in \mathbb{N}_{0}, \\ 0 & \textit{if } k=2n, n \in \mathbb{N}_{0}. \end{cases}\displaystyle \end{aligned}
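As a numerical sanity check of the product formula (our own illustration, not from the paper), take $$w(t)=\operatorname{e}^{t}\sin t$$. Since $$w(t)=\operatorname{Im}\operatorname{e}^{(1+i)t}$$, its exact Taylor coefficients are $$\operatorname{Im}(1+i)^{k}/k!$$, which the Cauchy-product formula of Lemma 1 must reproduce:

```python
from math import factorial

def S(k):
    # DT components of sin t at 0 (last formula of Lemma 1)
    return (-1) ** ((k - 1) // 2) / factorial(k) if k >= 1 and k % 2 == 1 else 0.0

def W_product(k):
    # product formula of Lemma 1 for w(t) = e^t * sin t:
    # U1(l) = 1/l! (components of e^t), U2 = S
    return sum(S(k - l) / factorial(l) for l in range(k + 1))

def W_exact(k):
    # e^t sin t = Im e^{(1+i)t}, so the kth Taylor coefficient is Im((1+i)^k) / k!
    return ((1 + 1j) ** k).imag / factorial(k)
```

Both formulas agree to machine precision for all small k.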

### Remark 1

Transformation formulas for shifted arguments $$w(t)=u(t-a)$$ are often proved and applied in papers. However, using these formulas when solving initial value problems for delayed differential equations is not convenient since the uniqueness of solutions is violated. The reason is that the values of the initial vector function for $$t < 0$$ are not taken into account.

One of the drawbacks of the common approach to the differential transformation is that there are no direct transformation formulas for equations with nonlinear terms containing the unknown function $$u(t)$$, for instance, $$f(u)= \operatorname {e}^{\cos {u}}$$ or $$f(u) = \sqrt{1+u^{4}}$$.

Fortunately, the corresponding transformations can be calculated using the Adomian polynomials $$A_{n}$$, in which each solution $$u_{i}$$ is replaced by the corresponding components $$U_{i} (k)$$ of the differential transformation $$\{ U_{i} (k) \}_{k=0}^{\infty }$$, see [25]. Suppose that $$F(k)$$ is the kth component of the differential transformation of a nonlinear term $$f(u)$$, then
\begin{aligned} F(k) &= \sum_{n=0}^{\infty } A_{n} \bigl(U(0),U(1),\ldots, U(n)\bigr)\delta (k-n) = A_{k}\bigl(U(0),U(1), \ldots, U(k)\bigr) \\ &= \frac{1}{k!} \frac{d^{k}}{dt^{k}} \Biggl[ f \Biggl( \sum _{l=0}^{ \infty } U(l) t^{l} \Biggr) \Biggr]_{t=0}, \quad k \geq 0. \end{aligned}
(4)
Recently, it turned out that there is another way to work with nonlinearities in DT [26].
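For illustration (ours, not taken from the paper), formula (4) says that $$F(k)$$ is simply the kth Taylor coefficient of $$f$$ applied to the series of u, so it can be evaluated by symbolic series expansion. With the nonlinear term $$f(u)=\sqrt[3]{u^{2}}$$ and hypothetical components $$U(0)=1$$, $$U(1)=1$$, $$U(2)=\frac{1}{2}$$, one obtains $$F(0)=1$$, $$F(1)=\frac{2}{3}$$, $$F(2)=\frac{2}{9}$$:

```python
import sympy as sp

t = sp.symbols('t')
U = [1, 1, sp.Rational(1, 2)]                    # hypothetical DT components U(0), U(1), U(2)
u_trunc = sum(c * t**k for k, c in enumerate(U))
f = u_trunc ** sp.Rational(2, 3)                 # nonlinear term f(u) = u^(2/3) = cbrt(u^2)

# By (4), F(k) is the kth Taylor coefficient of f(sum U(l) t^l) at t = 0
F = [sp.series(f, t, 0, 3).removeO().coeff(t, k) for k in range(3)]
```

This matches the Adomian-polynomial values $$A_{0}=f(U(0))$$, $$A_{1}=f'(U(0))U(1)$$, $$A_{2}=f'(U(0))U(2)+\frac{1}{2}f''(U(0))U(1)^{2}$$.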

The second method, namely the method of steps, enables us to replace the terms involving constant or time-dependent delays by the initial vector function and its derivatives. Then the original initial value problem for a system of delayed or neutral differential equations is simplified to the initial problem for a system of ordinary differential equations. Details on the method of steps can be found, e.g., in monographs [4, 5, 27].

## 3 Results

The subject of our interest is a system of p functional differential equations of nth order with multiple delays $$\alpha _{1} (t),\ldots, \alpha _{r} (t)$$ in the following form:
\begin{aligned} {\mathbf{u}}^{(n)}(t) = {}&\mathbf{f}\bigl(t, \mathbf{u}(t), \mathbf{u}'(t),\ldots, \mathbf{u}^{(n-1)}(t), \mathbf{u}_{1}\bigl( \alpha _{1}(t)\bigr), \mathbf{u}_{2}\bigl(\alpha _{2}(t)\bigr),\ldots, \mathbf{u}_{r}\bigl(\alpha _{r}(t)\bigr) \bigr), \end{aligned}
(5)
where $$\mathbf{u}^{(n)} (t) =(u_{1}^{(n)}(t),\ldots, u_{p}^{(n)}(t))^{T}$$, $${\mathbf{u}^{(k)} (t) = (u_{1}^{(k)}(t),\ldots, u_{p}^{(k)}(t))^{T}}$$, $$k=0,1,\ldots, n-1$$ and $$\mathbf{f} = (f_{1},\ldots, f_{p})^{T}$$ are p-dimensional vector functions, $$\mathbf{u} _{i}(\alpha _{i}(t))= (\mathbf{u}(\alpha _{i}(t)),\mathbf{u}'(\alpha _{i}(t)),\ldots, \mathbf{u}^{(m_{i})}(\alpha _{i}(t)))$$ are $$(m_{i} \cdot p)$$-dimensional vector functions, $$m_{i} \leq n$$, $$i=1,2,\ldots,r$$, $$r \in \mathbb{N}$$ and $$f_{j} \colon [0,\infty ) \times \mathbb{R}^{np} \times \mathbb{R}^{\omega p} \to \mathbb{R}$$ are continuous real functions for $$j=1,2,\ldots,p$$, where $$\omega = \sum_{i=1}^{r} m_{i}$$.
We consider three types of delays $$\alpha _{i}$$:

1. $$\alpha _{i}(t) = q_{i} t$$, where $$q_{i} \in (0,1)$$ (proportional delay).
2. $$\alpha _{i}(t) = t-\tau _{i}$$, where $$\tau _{i}>0$$ is a real constant (constant delay).
3. $$\alpha _{i}(t)=t-\tau _{i}(t)$$, where $$\tau _{i}(t) \geq \tau _{i0}>0$$ for $$t>0$$ is a real function (time-dependent or time-varying delay).

Let $$t^{*}= \min_{1\leq i \leq r} \{\inf_{t>0} (\alpha _{i}(t) ) \} \leq 0$$, $$m= \max \{m_{1},m_{2},\ldots,m_{r}\} \leq n$$. In the case $$m=n$$, system (5) is a neutral system, otherwise it is a delayed differential system.

If $$t^{*}<0$$, an initial vector function $$\varPhi (t) = (\phi _{1}(t),\ldots,\phi _{p}(t))^{T}$$ must be assigned to system (5) on the interval $$[t^{*}, 0]$$. Moreover, we assume that $$\phi _{j}(t) \in C ^{n}([t^{*},0],\mathbb{R})$$ for $$j=1,\ldots,p$$.

We look for a solution of system (5) with the following initial conditions:
$$\mathbf{u}(0)=\mathbf{v}_{0},\quad\quad \mathbf{u}'(0)= \mathbf{v}_{1},\quad\quad \ldots, \quad\quad \mathbf{u}^{(n-1)}(0) = \mathbf{v}_{n-1} ,$$
(6)
and the initial vector function $$\varPhi (t)$$ on interval $$[t^{*}, 0]$$ satisfying
$$\varPhi (0) = \mathbf{u}(0),\quad\quad \ldots, \quad\quad \varPhi ^{(n-1)}(0)= \mathbf{u}^{(n-1)}(0).$$
(7)
We solve initial value problem (5), (6) and (7) subject to the following hypotheses:

- (H1) The functions $$f_{j}$$, $$j=1,\ldots, p$$, are analytic in $$[0,T^{*}] \times \mathbb{R}^{np} \times \mathbb{R}^{\omega p}$$.
- (H2) The initial value problem (5), (6) and (7) has a unique solution on some interval $$[0, T^{*}]$$.

### Remark 2

Hypothesis (H2) is valid, for example, if the delay functions $$\alpha _{i}$$ are Lipschitz continuous on $$[0,T^{*}]$$, the functions $$\phi _{j}, \phi '_{j},\ldots, \phi _{j}^{(n)}$$ are Lipschitz continuous on $$[t^{*},0]$$, and the functions $$f_{j}$$ are continuous with respect to t on $$[0,T^{*}]$$ and Lipschitz continuous with respect to the rest of the variables on $$\mathbb{R}^{np} \times \mathbb{R}^{\omega p}$$. More details and other types of sufficient conditions for existence of a unique solution can be found in [5, Sects. 3.2 and 3.3], or [27, Sect. 2.2].

We start with the method of steps. We substitute the initial vector function $$\varPhi (t)$$ and its derivatives in all places where the unknown functions and their derivatives appear with constant or time-dependent delays. This turns the delayed system (5) into a system of ordinary differential equations, or, if system (5) contains proportional delays, into a system of differential equations with proportional delays.

For example, if $$\alpha _{1} (t) = t-\tau _{1}$$, $$\alpha _{2} (t) = t- \tau _{2}$$, $$\alpha _{3} (t) = q_{3} t$$ and $$\alpha _{4} (t) = t-\tau _{4}(t)$$, applying the method of steps changes (5) into the system
$$\mathbf{u}^{(n)}(t) = \mathbf{f}\bigl(t,\mathbf{u}(t), \ldots,\mathbf{u} ^{(n-1)}(t),\mathbf{\varPhi }_{1} (t-\tau _{1}),\mathbf{\varPhi }_{2} (t-\tau _{2}), \mathbf{u}_{3}(q_{3} t), \mathbf{\varPhi }_{4} \bigl(t-\tau _{4}(t)\bigr) \bigr),$$
(8)
where
\begin{aligned} &\mathbf{\varPhi }_{i}(t-\tau _{i} ) = \bigl(\varPhi (t- \tau _{i}),\varPhi '(t-\tau _{i}),\ldots, \varPhi ^{(m_{i})}(t-\tau _{i})\bigr), \quad i=1,2, \\ & \mathbf{u}_{3} (q_{3} t) = \bigl(\mathbf{u}(q_{3} t), \mathbf{u}'(q_{3} t),\ldots, \mathbf{u}^{(m_{3})} (q_{3} t)\bigr), \\ &\mathbf{\varPhi }_{4}\bigl(t-\tau _{4}(t) \bigr)= \bigl( \varPhi \bigl(t-\tau _{4}(t)\bigr),\varPhi '\bigl(t- \tau _{4}(t)\bigr),\ldots,\varPhi ^{(m_{4})}\bigl(t-\tau _{4}(t)\bigr)\bigr), \end{aligned}
and $$m_{l} \leq n$$ for $$l=1,2,3,4$$. Then we transform the initial conditions (6). Formula (1) gives
$$\mathbf{U}(k) = \frac{1}{k!} \mathbf{u}^{(k)} (0).$$
After applying the differential transformation, the initial value problem for a system of FDEs is reduced to a system of recurrence algebraic relations
\begin{aligned} \mathbf{U}(k+n) = \mathcal{F} \bigl( k, \mathbf{U}(k), \mathbf{U}(k+1),\ldots, \mathbf{U}(k+n-1) \bigr). \end{aligned}
(9)
Solving this recurrence and then using the inverse transformation (2), we get an approximate solution of system (5) in the series form
$${\mathbf{u}(t) = \sum_{k=0}^{N} \mathbf{U}(k)t^{k}}.$$
If $$t^{*}<0$$, we denote $$t_{\alpha _{i}} = \inf \{ t: \alpha _{i} (t)>0 \}$$ and $$t_{\alpha } =\min_{1 \leq i \leq r} \{ t_{\alpha _{i}}: t_{\alpha _{i}} \neq 0 \}$$. Then the approximate solution $$\mathbf{u}(t)$$ is valid on the intersection of its convergence interval and the interval $$[0,T^{*}] \cap [0, t_{\alpha }]$$, whereas $$\mathbf{u}(t) = \varPhi (t)$$ on the interval $$[t^{*},0]$$. If $$t^{*}=0$$, the approximate solution $$\mathbf{u}(t)$$ is valid on the intersection of its convergence interval with $$[0,T^{*}]$$.

Now we formulate and prove two theorems on convergence and an error estimate of the approximate solution to the studied problem obtained using the differential transformation.

### Theorem 1

Let hypotheses (H1) and (H2) be valid and denote $$\mathbf{F}_{k}(t) = \mathbf{U}(k)t^{k}$$. If there exist a constant δ, $$0<\delta < 1$$, and $$k_{0} \in \mathbb{N}$$ such that $$\Vert \mathbf{F}_{k+1}(t)\Vert \leq \delta \Vert \mathbf{F}_{k}(t)\Vert$$ for all $$k \geq k_{0}$$, then the series $$\sum_{k=0}^{\infty } \mathbf{F}_{k}(t)$$ converges to a unique solution on the interval $$J=[0,\gamma ]$$, $$\gamma \leq T^{*}$$.

### Proof

Denote $$C^{n}(J)$$ the Banach space of vector-valued functions $$\mathbf{h}(t) = (h_{1}(t),h_{2}(t),\ldots, h_{p}(t))^{T}$$ with continuous derivatives up to order n and norm
$$\bigl\Vert \mathbf{h}(t) \bigr\Vert = \max_{i=1,\ldots, p} \max _{j=0,\ldots, n} \max_{t \in J} \bigl\vert h_{i}^{(j)} (t) \bigr\vert .$$
Denote
$$\mathbf{S}_{l} = \sum_{k=0}^{l} \mathbf{F}_{k}(t).$$
Now it is sufficient to prove that sequence $$\{\mathbf{S}_{l} \}$$ is a Cauchy sequence in the Banach space $$C^{n}(J)$$. Due to
$$\Vert \mathbf{S}_{l+1}-\mathbf{S}_{l} \Vert = \bigl\Vert \mathbf{F}_{l+1}(t) \bigr\Vert \leq \delta \bigl\Vert \mathbf{F}_{l}(t) \bigr\Vert \leq \cdots \leq \delta ^{l-k_{0}+1} \bigl\Vert \mathbf{F}_{k_{0}}(t) \bigr\Vert ,$$
for every $$l,m \in \mathbb{N}$$, $$l \geq m >k_{0}$$, we get
\begin{aligned} \Vert \mathbf{S}_{l}-\mathbf{S}_{m} \Vert &= \Biggl\Vert \sum_{j=m}^{l-1} ( \mathbf{S} _{j+1}-\mathbf{S}_{j} ) \Biggr\Vert \leq \sum _{j=m}^{l-1} \Vert \mathbf{S}_{j+1}- \mathbf{S}_{j} \Vert \leq \sum_{j=m}^{l-1} \delta ^{j-k_{0}+1} \bigl\Vert \mathbf{F} _{k_{0}}(t) \bigr\Vert \\ &= \delta ^{m-k_{0}+1}\bigl(1+\delta +\delta ^{2} + \cdots + \delta ^{l-m-1} \bigr) \bigl\Vert \mathbf{F}_{k_{0}}(t) \bigr\Vert \\ & =\frac{1-\delta ^{l-m}}{1-\delta } \delta ^{m-k_{0}+1} \bigl\Vert \mathbf{F} _{k_{0}}(t) \bigr\Vert . \end{aligned}
(10)
Since $$0<\delta <1$$, it follows that
$$\lim_{l,m \rightarrow \infty } \Vert \mathbf{S}_{l}- \mathbf{S}_{m} \Vert = 0.$$
Therefore, $$\{\mathbf{S}_{l} \}$$ is a Cauchy sequence in the Banach space $$C^{n}(J)$$ and the proof is complete. □

### Theorem 2

Suppose that the assumptions of Theorem 1 are valid. Then for the truncated series $$\sum_{k=0}^{m}\mathbf{F}_{k}(t)$$ the following error estimate holds:
$$\Biggl\Vert \mathbf{u}(t) - \sum_{k=0}^{m} \mathbf{F}_{k}(t) \Biggr\Vert \leq \frac{1}{1- \delta } \delta ^{m-m_{0}+1} \max_{i=1,\ldots, p} \max_{j=0,\ldots, n} \biggl\vert \frac{m_{0}!}{(m_{0} -j)!} {U}_{i}(m_{0}) \gamma ^{m_{0}-j} \biggr\vert$$
for any $$m_{0} \geq 0$$, $$m \geq m_{0}$$.

### Proof

Without loss of generality, we can choose $$m_{0} \geq n$$, where n is the order of system (5). From inequality (10) we have
\begin{aligned} \Vert \mathbf{S}_{l} -\mathbf{S}_{m} \Vert & \leq \frac{1-\delta ^{l-m}}{1- \delta } \delta ^{m-m_{0}+1} \bigl\Vert \mathbf{F}_{m_{0}}(t) \bigr\Vert \\ &= \frac{1-\delta ^{l-m}}{1-\delta } \delta ^{m-m_{0}+1} \max_{i=1,\ldots, p} \max _{j=0,\ldots, n} \biggl\vert \frac{m_{0}!}{(m _{0} -j)!} {U}_{i}(m_{0}) \gamma ^{m_{0}-j} \biggr\vert , \end{aligned}
(11)
for $$l \geq m \geq m_{0}$$. From $$0 <\delta <1$$ it follows $$(1- \delta ^{l-m})< 1$$. Hence inequality (11) can be reduced to
$$\Vert \mathbf{S}_{l}-\mathbf{S}_{m} \Vert \leq \frac{1}{1-\delta } \delta ^{m-m_{0}+1} \max_{i=1,\ldots, p} \max _{j=0,\ldots, n} \biggl\vert \frac{m_{0}!}{(m_{0} -j)!} {U}_{i}(m_{0}) \gamma ^{m_{0}-j} \biggr\vert .$$
Here we use the fact that for $$l \rightarrow \infty$$, $$\mathbf{S} _{l} \rightarrow \mathbf{u}(t)$$, and so the proof is complete. □
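The geometric-tail character of this estimate can be sanity-checked on a model series where the ratio hypothesis of Theorem 1 holds with equality, say $$\Vert \mathbf{F}_{k}(t)\Vert = \delta ^{k}$$ (an illustration of ours, not from the paper):

```python
# Model series with ||F_k|| = delta**k, so the ratio hypothesis of
# Theorem 1 holds with delta = 0.5 and k0 = 0.
delta, m0 = 0.5, 0
total = sum(delta**k for k in range(200))   # numerically the full sum 1/(1 - delta)

def tail(m):
    # actual remainder left after the truncated series S_m
    return total - sum(delta**k for k in range(m + 1))

def bound(m):
    # error estimate of Theorem 2 with ||F_{m0}|| = delta**m0
    return delta ** (m - m0 + 1) / (1 - delta) * delta ** m0
```

For this model the bound is attained exactly, confirming that the estimate is sharp for purely geometric decay.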

### Remark 3

Recent results on error estimates and convergence of Taylor series can be found, e.g., in [28].

## 4 Applications and discussion

As the first application, we have chosen an initial value problem which was solved in [29] using the Laplace decomposition method (LDM) and in [30] using the residual power series method (RPSM).

### Example 1

We are looking for a solution of a 3-dimensional system of pantograph equations
\begin{aligned}& u_{1}' (t) = 2 u_{2} \biggl( \frac{t}{2} \biggr) + u_{3} (t) - t \cos \biggl( \frac{t}{2} \biggr), \\& u_{2}' (t) = 1 - t \sin (t) -2 u_{3}^{2} \biggl( \frac{t}{2} \biggr), \\& u_{3}' (t) = u_{2} (t) - u_{1} (t) - t \cos (t), \end{aligned}
(12)
subject to the initial conditions
$$u_{1} (0) = -1,\quad\quad u_{2}(0)=0, \quad \quad u_{3}(0)= 0.$$
(13)
Since system (12) contains only proportional delays, we do not need to use the method of steps. Applying the DT formulas of Lemma 1 to (12), we get the system of recurrence relations
\begin{aligned}& (k+1) U_{1} (k+1) = 2 \frac{1}{2^{k}} U_{2} (k) + U_{3} (k) - \frac{1}{2^{k-1}} C(k-1), \\ & (k+1) U_{2} (k+1) = \delta (k) - S(k-1) -2 \sum _{l=0}^{k} \frac{1}{2^{k}} U_{3} (l) U_{3} (k-l), \\ & (k+1) U_{3} (k+1) = U_{2} (k) - U_{1} (k) - C(k-1). \end{aligned}
(14)
From the initial conditions we have $$U_{1}(0)=-1$$, $$U_{2}(0)=0$$, $$U_{3}(0)=0$$. Solving system (14), we get
\begin{aligned}& (k=0)\quad U_{1}(1) = 2 U_{2}(0)+ U_{3} (0) =0, \\ & \hphantom{(k=0)\quad }U_{2}(1) = \delta (0) - 2 \bigl(U_{3} (0) \bigr)^{2}= 1, \\ & \hphantom{(k=0)\quad }U_{3}(1) = U_{2}(0) - U_{1}(0) =1, \\ & (k=1) \quad U_{1}(2) = \frac{1}{2} \biggl( 2 \frac{1}{2} U_{2}(1)+U _{3}(1) - C(0) \biggr)= \frac{1}{2}, \\ & \hphantom{(k=1)\quad }U_{2}(2) = \frac{1}{2} \biggl( \delta (1) - 2 \frac{1}{2} \bigl(U_{3} (0) U_{3} (1) + U_{3} (1) U_{3} (0)\bigr) \biggr) = 0, \\ & \hphantom{(k=1)\quad }U_{3}(2) = \frac{1}{2} \bigl( U_{2}(1) - U_{1}(1) - C(0) \bigr)= 0. \end{aligned}
For $$k \geq 2$$, we find
\begin{aligned}& U_{1}(3)=0, \quad \quad \quad U_{1}(4)=- \frac{1}{4!}, \quad \quad \quad U_{1}(5)=0, \quad \quad \quad \ldots, \\ & U_{2}(3) =-\frac{1}{2}, \hphantom{U_{1}(4)}U_{2}(4) = 0, \hphantom{U_{1}(5)=0} U_{2}(5)= \frac{1}{4!}, \quad \quad \quad \ldots, \\ & U_{3}(3)= -\frac{1}{3!}, \hphantom{U_{1}(4)}U_{3}(4)= 0, \hphantom{U_{1}(5)=0} U_{3}(5)= \frac{1}{5!}, \quad \quad \quad \ldots. \end{aligned}
Application of the inverse differential transformation (2) gives a solution to (12), (13) in the form
\begin{aligned}& u_{1}(t) = -1 + \frac{1}{2} t^{2} - \frac{1}{4!} t^{4} + \cdots = - \sum _{k=0}^{N} (-1)^{k} \frac{t^{2k}}{(2k)!}, \\ & u_{2}(t) = t - \frac{1}{2} t^{3} + \frac{1}{4!} t^{5} - \cdots = \sum_{k=0}^{N} (-1)^{k} \frac{t^{2k+1}}{(2k)!}, \\ & u_{3}(t) = t - \frac{1}{3!} t^{3} + \frac{1}{5!} t^{5} - \cdots = \sum_{k=0}^{N} (-1)^{k} \frac{t^{2k+1}}{(2k+1)!}. \end{aligned}
When $$N \rightarrow \infty$$, the series converge to the Taylor expansions of the closed-form solutions
$$u_{1}(t)= -\cos t, \quad\quad u_{2}(t) = t \cos t, \quad \quad u_{3}(t) = \sin t.$$
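The whole computation in Example 1 amounts to iterating the recurrence (14) from the transformed initial conditions; a minimal Python sketch (our own illustration, not the authors' code) reproduces the coefficients above and converges to the closed-form solutions:

```python
from math import cos, factorial, sin

def C(k):   # DT components of cos t at 0 (returns 0 for k < 0)
    return (-1) ** (k // 2) / factorial(k) if k >= 0 and k % 2 == 0 else 0.0

def S(k):   # DT components of sin t at 0
    return (-1) ** ((k - 1) // 2) / factorial(k) if k >= 1 and k % 2 == 1 else 0.0

def delta(k):
    return 1.0 if k == 0 else 0.0

N = 20
U1, U2, U3 = [-1.0], [0.0], [0.0]       # transformed initial conditions (13)
for k in range(N):
    # recurrence relations (14); note 1/2**(k-1) = 2/2**k
    U1.append((2 / 2**k * U2[k] + U3[k] - 2 / 2**k * C(k - 1)) / (k + 1))
    conv = sum(U3[l] * U3[k - l] for l in range(k + 1))
    U2.append((delta(k) - S(k - 1) - 2 / 2**k * conv) / (k + 1))
    U3.append((U2[k] - U1[k] - C(k - 1)) / (k + 1))

def series(U, t):
    # inverse transformation (2), truncated at N
    return sum(U[k] * t**k for k in range(len(U)))
```

With N = 20 the partial sums agree with $$-\cos t$$, $$t\cos t$$ and $$\sin t$$ to roughly machine precision on $$[0,1]$$.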
A comparison of absolute errors of the presented DT technique with LDM and RPSM for $$N=2$$ is given in Tables 1, 2 and 3. We see that DT and RPSM produce the same results, which are close to the values of the closed-form solutions, whereas LDM shows significant deviations. We obtain similar results when comparing computing times, see Tables 4, 5 and 6.
Table 1

Error analysis of $$u_{1}$$ on $$[0,1]$$

| t | Exact solution $$-\cos t$$ | DT $$u_{1}$$ | Abs. errors DT | Abs. errors LDM | Abs. errors RPSM |
|---|---|---|---|---|---|
| 0.2 | −0.9800665 | −0.9800666 | 1.0E−7 | 8.904E−5 | 1.0E−7 |
| 0.4 | −0.9210609 | −0.9210666 | 5.7E−6 | 1.511E−3 | 5.7E−6 |
| 0.6 | −0.8253335 | −0.8254000 | 6.65E−5 | 8.051E−3 | 6.65E−5 |
| 0.8 | −0.6967067 | −0.6970666 | 3.599E−4 | 2.665E−2 | 3.599E−4 |
| 1.0 | −0.5403023 | −0.5416666 | 1.3642E−3 | 6.766E−2 | 1.3642E−3 |

Table 2

Error analysis of $$u_{2}$$ on $$[0,1]$$

| t | Exact solution $$t\cos t$$ | DT $$u_{2}$$ | Abs. errors DT | Abs. errors LDM | Abs. errors RPSM |
|---|---|---|---|---|---|
| 0.2 | 0.1960133 | 0.1960133 | 0.0 | 5.496E−6 | 0.0 |
| 0.4 | 0.3684243 | 0.3684266 | 2.3E−6 | 1.808E−4 | 2.3E−6 |
| 0.6 | 0.4952013 | 0.4952400 | 3.87E−5 | 1.408E−3 | 3.87E−5 |
| 0.8 | 0.5573653 | 0.5576533 | 2.89E−4 | 6.069E−3 | 2.89E−4 |
| 1.0 | 0.5403023 | 0.5416666 | 1.3643E−3 | 1.890E−2 | 1.3643E−3 |

Table 3

Error analysis of $$u_{3}$$ on $$[0,1]$$

| t | Exact solution $$\sin t$$ | DT $$u_{3}$$ | Abs. errors DT | Abs. errors LDM | Abs. errors RPSM |
|---|---|---|---|---|---|
| 0.2 | 0.1986693 | 0.1986693 | 0.0 | 6.4558E−5 | 0.0 |
| 0.4 | 0.3894183 | 0.3894186 | 3.0E−7 | 9.9595E−4 | 3.0E−7 |
| 0.6 | 0.5646424 | 0.5646480 | 5.60E−6 | 4.8397E−3 | 5.60E−6 |
| 0.8 | 0.7173561 | 0.7173973 | 4.12E−5 | 1.4613E−2 | 4.12E−5 |
| 1.0 | 0.8414709 | 0.8416666 | 1.957E−3 | 3.3917E−2 | 1.957E−3 |

Table 4

Comparison of computing time for $$u_{1}$$

| t | DT | LDM | RPSM |
|---|---|---|---|
| 0.2 | 6.3E−5 | 8.7E−4 | 6.3E−5 |
| 0.4 | 6.5E−5 | 6.7E−4 | 6.5E−5 |
| 0.6 | 6.5E−5 | 6.9E−4 | 6.5E−5 |
| 0.8 | 6.4E−5 | 7.2E−4 | 6.4E−5 |
| 1.0 | 6.6E−5 | 8.8E−4 | 6.6E−5 |

Table 5

Comparison of computing time for $$u_{2}$$

| t | DT | LDM | RPSM |
|---|---|---|---|
| 0.2 | 6.6E−5 | 6.7E−4 | 6.6E−5 |
| 0.4 | 6.4E−5 | 6.7E−4 | 6.4E−5 |
| 0.6 | 6.7E−5 | 6.7E−4 | 6.7E−5 |
| 0.8 | 6.5E−5 | 8.5E−4 | 6.5E−5 |
| 1.0 | 6.5E−5 | 8.3E−4 | 6.5E−5 |

Table 6

Comparison of computing time for $$u_{3}$$

| t | DT | LDM | RPSM |
|---|---|---|---|
| 0.2 | 6.6E−5 | 7.7E−4 | 6.6E−5 |
| 0.4 | 6.7E−5 | 6.6E−4 | 6.7E−5 |
| 0.6 | 6.6E−5 | 6.7E−4 | 6.6E−5 |
| 0.8 | 6.6E−5 | 8.3E−4 | 6.6E−5 |
| 1.0 | 6.7E−5 | 8.5E−4 | 6.7E−5 |

### Remark 4

In [29], the authors used LDM and obtained only approximate solutions of the initial value problem (12), (13). Applying RPSM, the authors of [30] were able to find closed-form solutions. However, the calculations are rather involved, and the residual functions (RPSM) and initial guesses (LDM) contain analytical forms of the functions sin and cos, which makes these methods inconvenient for use in purely numerical software.

As the second application, we have chosen a system with all three types of delays considered, to show the reliability and efficiency of the proposed approach in solving difficult tasks.

### Example 2

Let us solve a nonlinear system of neutral delayed differential equations
\begin{aligned} \begin{gathered} u'''_{1} = u_{1}'''(t-2)u_{1} \biggl(\frac{t}{3} \biggr)+\sqrt[3]{u _{1}^{2}}+ u'_{2} \biggl(t-\frac{1}{2} \operatorname {e}^{-t} \biggr), \\ u'''_{2} = \frac{1}{2}u_{2}''' \biggl(\frac{t}{2} \biggr) +u'_{2}(t-1)u _{1} \biggl(\frac{t}{3} \biggr) \end{gathered} \end{aligned}
(15)
with initial functions
\begin{aligned} \begin{gathered} \phi _{1}(t) = \operatorname {e}^{t}, \\ \phi _{2}(t) =t^{2} \end{gathered} \end{aligned}
(16)
for $$t \in [-2,0]$$, and initial conditions
\begin{aligned} \begin{gathered} u_{1}(0) = 1, \quad\quad u'_{1}(0)= 1, \quad\quad u_{1}''(0)=1, \\ u_{2}(0) = 0, \quad\quad u'_{2}(0)= 0, \quad \quad u''_{2}(0)=2. \end{gathered} \end{aligned}
(17)
Following the method of steps, we get
\begin{aligned} \begin{gathered} u'''_{1} = \operatorname {e}^{(t-2)}u_{1} \biggl(\frac{t}{3} \biggr)+ \sqrt[3]{u _{1}^{2}}+ 2t- \operatorname {e}^{-t}, \\ u'''_{2} = \frac{1}{2}u_{2}''' \biggl(\frac{t}{2} \biggr) +2(t-1)u _{1} \biggl( \frac{t}{3} \biggr). \end{gathered} \end{aligned}
(18)
System (18) cannot be solved by the classical DT approach because of the nonlinear term $$f(u) = \sqrt[3]{u_{1}^{2}}$$, hence we apply the modified Adomian formula (4) for the differential transformation components of the nonlinear term $$f(u)$$. Applying DT to (18), we get the system
\begin{aligned}& \begin{aligned}[b] (k+1) (k+2) (k+3)U_{1}(k+3) &= \operatorname {e}^{-2} \sum_{l=0}^{k} \frac{1}{l!} \biggl( \frac{1}{3} \biggr) ^{k-l}U_{1}(k-l) +F_{1}(k) \\ &\quad{} +2\delta (k) -\frac{(-1)^{k}}{k!}, \end{aligned} \end{aligned}
(19)
\begin{aligned}& \begin{aligned}[b] (k+1) (k+2) (k+3) \biggl(1 -\frac{1}{2^{k+1}} \biggr) U_{2}(k+3) &= 2 \sum_{l=0}^{k} \delta (l-1) \biggl(\frac{1}{3} \biggr)^{k-l}U_{1}(k-l) \\ &\quad{} - \frac{2}{3^{k}}U_{1}(k), \end{aligned} \end{aligned}
(20)
where $$F_{1}(k)$$ is the kth component of the transformed function $$f(u) = \sqrt[3]{u_{1}^{2}}$$. Applying formula (4) and the transformed initial conditions
\begin{aligned}& U_{1}(0) =1, \quad\quad U_{1}(1)=1, \quad\quad U_{1}(2)=\frac{1}{2}, \\& U_{2}(0) =0, \quad\quad U_{2}(1)=0, \quad\quad U_{2}(2)=1, \end{aligned}
we obtain
\begin{aligned}& F_{1}(0) = \sqrt[3]{U_{1}^{2}(0)} = 1, \\& F_{1}(1) = \frac{2}{3} \frac{U_{1}(1)}{\sqrt[3]{U_{1}(0)}} = \frac{2}{3}, \\& F_{1}(2) = \frac{2}{3} \frac{U_{1}(2)}{\sqrt[3]{U_{1}(0)}} - \frac{1}{9} \frac{U_{1}^{2}(1)}{\sqrt[3]{U_{1}^{4}(0)}}= \frac{2}{9}, \\& \vdots \end{aligned}
Solving the system of recurrence relations (19)–(20), we get
\begin{aligned}& \begin{aligned} k=0: \quad &U_{1}(3) = \frac{1}{6} \bigl( \operatorname {e}^{-2}U_{1}(0) +F_{1}(0) + 1 \bigr)= \frac{2+\operatorname {e}^{-2}}{6}, \\ &U_{2}(3) = \frac{1}{3} \bigl( -2U_{1}(0) \bigr)=- \frac{2}{3}, \end{aligned} \\& \begin{aligned} k=1: \quad &U_{1}(4) = \frac{\operatorname {e}^{-2}}{24} \biggl( \frac{1}{3}U_{1}(1) + U _{1}(0) \biggr)+ \frac{1}{24} \bigl( F_{1}(1) + 1 \bigr) = \frac{4\operatorname {e}^{-2}+5}{72}, \\ &U_{2}(4) = \frac{1}{18} \biggl(2U_{1}(0)- \frac{2}{3}U_{1}(1) \biggr)= \frac{2}{27}, \end{aligned} \\& \begin{aligned} k=2: \quad &U_{1}(5) = \frac{\operatorname {e}^{-2}}{60} \biggl( \frac{1}{9}U_{1}(2) + \frac{1}{3}U_{1}(1) + \frac{1}{2}U_{1}(0) \biggr) + \frac{1}{60} \biggl( F_{1}(2) - \frac{1}{2} \biggr) = \frac{16\operatorname {e}^{-2}-5}{1080}, \\ &U_{2}(5) = \frac{2}{105} \biggl(\frac{2}{3}U_{1}(1) -\frac{2}{9}U_{1}(2) \biggr) =\frac{2}{189}. \end{aligned} \end{aligned}
Applying the inverse differential transformation, we obtain an approximate solution to the initial value problem (15)–(17):
\begin{aligned}& u_{1}(t) = 1+t +\frac{1}{2} t^{2}+ \frac{2+\operatorname {e}^{-2}}{6}t^{3}+ \frac{4\operatorname {e}^{-2}+5}{72}t^{4}+ \frac{16\operatorname {e}^{-2}-5}{1080}t^{5} +\cdots , \\& u_{2}(t) = t^{2}- \frac{2}{3}t^{3} + \frac{2}{27}t^{4} + \frac{2}{189}t^{5} + \cdots . \end{aligned}
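The second recurrence (20) involves only the already-known components of $$U_{1}$$, so the coefficients of $$u_{2}$$ can be checked independently of the Adomian computation; a small sketch (our own, not the authors' code):

```python
# Transformed initial conditions for u1: U1(0) = 1, U1(1) = 1, U1(2) = 1/2
U1 = [1.0, 1.0, 0.5]

def U2_from_recurrence(k):
    # relation (20): (k+1)(k+2)(k+3)(1 - 1/2^{k+1}) U2(k+3) = rhs
    lhs_coeff = (k + 1) * (k + 2) * (k + 3) * (1 - 1 / 2 ** (k + 1))
    rhs = 2 * sum((1.0 if l == 1 else 0.0) * (1 / 3) ** (k - l) * U1[k - l]
                  for l in range(k + 1)) - 2 / 3 ** k * U1[k]
    return rhs / lhs_coeff
```

For k = 0, 1, 2 this reproduces $$U_{2}(3)=-\frac{2}{3}$$, $$U_{2}(4)=\frac{2}{27}$$ and $$U_{2}(5)=\frac{2}{189}$$, the coefficients appearing in the series for $$u_{2}(t)$$ above.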
As we do not know the exact solution of the given problem, we are limited to comparing approximate solutions. A comparison of the values obtained by the proposed approach with those obtained by the Matlab package DDENSD in Table 7 shows good correspondence between the results. Comparing computing times in Table 8, we see that the presented method produces reliable results much faster than the Matlab package DDENSD.
Table 7

Comparison of values of solution components obtained by DT and Matlab

| t | DT $$u_{1}$$ | DT $$u_{2}$$ | Matlab $$u_{1}$$ | Matlab $$u_{2}$$ |
|---|---|---|---|---|
| 0.00 | 1.0000 | 0.0000 | 1.0000 | 0.0000 |
| 0.05 | 1.0513 | 0.0024 | 1.0513 | 0.0024 |
| 0.10 | 1.1051 | 0.0093 | 1.1050 | 0.0093 |
| 0.15 | 1.1618 | 0.0203 | 1.1614 | 0.0203 |
| 0.20 | 1.2209 | 0.0348 | 1.2204 | 0.0348 |
| 0.25 | 1.2832 | 0.0524 | 1.2822 | 0.0524 |
| 0.30 | 1.3481 | 0.0726 | 1.3469 | 0.0726 |
| 0.35 | 1.4160 | 0.0951 | 1.4146 | 0.0951 |

Table 8

Comparison of computing time

| t | DT $$u_{1}$$ | DT $$u_{2}$$ | Matlab $$u_{1}$$ | Matlab $$u_{2}$$ |
|---|---|---|---|---|
| 0.05 | 7.9E−5 | 6.5E−5 | 6.2E−2 | 6.2E−2 |
| 0.10 | 6.9E−5 | 7.0E−5 | 8.1E−2 | 6.2E−2 |
| 0.15 | 7.1E−5 | 6.8E−5 | 6.2E−2 | 6.2E−2 |
| 0.20 | 7.0E−5 | 6.9E−5 | 6.3E−2 | 6.2E−2 |
| 0.25 | 7.0E−5 | 7.0E−5 | 6.2E−2 | 6.2E−2 |
| 0.30 | 6.3E−5 | 6.9E−5 | 6.3E−2 | 6.3E−2 |
| 0.35 | 6.9E−5 | 6.8E−5 | 6.2E−2 | 6.3E−2 |

### Remark 5

System (15) contains all three types of delays considered in this paper. Moreover, it contains a term which is nonlinear (nonpolynomial) in the dependent variable $$u_{1}$$. In this sense, the present paper treats more complicated systems in its applications than papers on other semi-analytical methods such as VIM [6], ADM [7] or HPM [8].

## 5 Conclusions

The approach presented in this paper is an effective semi-analytical technique, convenient for numerical approximation of the unique solution to the initial value problem for systems of functional differential equations, in particular delayed and neutral differential equations. Considering systems of equations with three types of delays brings a generalization with respect to the problem studied in [20]. The results were compared against the Laplace decomposition method, the residual power series method and the Matlab package DDENSD. The amount of computational work is reduced compared to the other methods, and the differential transformation algorithm gives an approximate solution which is in good agreement with the reference results produced by Matlab. Under certain circumstances, it is possible to identify the unique solution to the initial value problem in closed form. Further steps can be taken in the development of the presented technique for systems with distributed and state-dependent delays.

## Notes

### Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

### Funding

The first author was supported by the Grant CEITEC 2020 (LQ1601) with financial support from the Ministry of Education, Youth and Sports of the Czech Republic under the National Sustainability Programme II. The work of the second author was supported by the Grant FEKT-S-17-4225 of Faculty of Electrical Engineering and Communication, Brno University of Technology.

### Competing interests

The authors declare that they have no competing interests.

## References

1. Györi, I.: Oscillation and comparison results in neutral differential equations and their applications to the delay logistic equation. Comput. Math. Appl. 18(10–11), 893–906 (1989)
2. Insperger, T., Milton, J., Stepan, G.: Semidiscretization for time delayed neural balance control. SIAM J. Appl. Dyn. Syst. 14(3), 1258–1277 (2015)
3. Kalmar-Nagy, T., Stepan, G., Moon, F.C.: Subcritical Hopf bifurcation in the delay equation model for machine tool vibrations. Nonlinear Dyn. 26, 121–142 (2011)
4. Hale, J.K., Verduyn Lunel, S.M.: Introduction to Functional Differential Equations. Springer, New York (1993)
5. Kolmanovskii, V., Myshkis, A.: Introduction to the Theory and Applications of Functional Differential Equations. Kluwer Academic, Dordrecht (1999)
6. Chen, X., Wang, L.: The variational iteration method for solving a neutral functional-differential equation with proportional delays. Comput. Math. Appl. 59, 2696–2702 (2010)
7. Blanco-Cocom, L., Estrella, A.G., Avila-Vales, E.: Solving delay differential systems with history functions by the Adomian decomposition method. Appl. Math. Comput. 218, 5994–6011 (2013)
8. Shakeri, F., Dehghan, M.: Solution of delay differential equations via a homotopy perturbation method. Math. Comput. Model. 48, 486–498 (2008)
9. Duarte, J., Januario, C., Martins, N.: Analytical solutions of an economic model by the homotopy analysis method. Appl. Math. Sci. 10(49), 2483–2490 (2016)
10. Rebenda, J., Šmarda, Z.: A semi-analytical approach for solving nonlinear systems of functional differential equations with delay. In: Simos, T.E. (ed.) 14th International Conference of Numerical Analysis and Applied Mathematics (ICNAAM 2016). AIP Conference Proceedings, vol. 1863, p. 530003. AIP Publishing, Melville (2017)
11. Bellour, A., Bousselsal, M.: Numerical solution of delay integro-differential equations by using Taylor collocation method. Math. Methods Appl. Sci. 37, 1491–1506 (2014)
12. Sezer, M., Akyuz-Dascioglu, A.: Taylor polynomial solutions of general linear differential-difference equations with variable coefficients. Appl. Math. Comput. 174, 753–765 (2006)
13. Cherepennikov, V.B., Ermolaeva, P.G.: Smooth solutions of an initial-value problem for some differential difference equations. Numer. Anal. Appl. 3, 174–185 (2010)
14. Cherepennikov, V.B.: Numerical analytical method of studying some linear functional differential equations. Numer. Anal. Appl. 6, 236–246 (2013)
15. Jain, R.K., Agarwal, R.P.: Finite difference method for second order functional differential equations. J. Math. Phys. Sci. 7(3), 301–3016 (1973)
16. Agarwal, R.P., Chow, Y.M.: Finite-difference methods for boundary-value problems of differential equations with deviating arguments. Comput. Math. Appl. 12A(11), 1143–1153 (1986)
17. Petropoulou, E.N., Siafarikas, P.D., Tzirtzilakis, E.E.: A “discretization” technique for the solution of ODEs. J. Math. Anal. Appl. 331, 279–296 (2007)
18. Petropoulou, E.N., Siafarikas, P.D., Tzirtzilakis, E.E.: A “discretization” technique for the solution of ODEs II. Numer. Funct. Anal. Optim. 30, 613–631 (2009)
19. Šamajová, H., Li, T.: Oscillators near Hopf bifurcation. Komunikácie (Žilina) 17, 83–87 (2015)
20. 20.
Rebenda, J., Šmarda, Z.: A differential transformation approach for solving functional differential equations with multiple delays. Commun. Nonlinear Sci. Numer. Simul. 48, 246–257 (2017)
21. 21.
Yang, X.-J., Tenreiro Machado, J.A., Srivastava, H.M.: A new numerical technique for solving the local fractional diffusion equation: two-dimensional extended differential transform approach. Appl. Math. Comput. 274, 143–151 (2016)
22. 22.
Šamajová, H.: Semi-analytical approach to initial problems for systems of nonlinear partial differential equations with constant delay. In: Mikula, K., Sevcovic, D., Urban, J. (eds.) Proceedings of EQUADIFF 2017 Conference, pp. 163–172. Spektrum STU Publishing, Bratislava (2017) Google Scholar
23. 23.
Rebenda, J., Šmarda, Z., Khan, Y.: A new semi-analytical approach for numerical solving of Cauchy problem for differential equations with delay. Filomat 31(15), 4725–4733 (2017)
24. 24.
Šmarda, Z., Diblík, J., Khan, Y.: Extension of the differential transformation method to nonlinear differential and integro-differential equations with proportional delays. Adv. Differ. Equ. 2013, 69 (2013)
25. 25.
Šmarda, Z., Khan, Y.: An efficient computational approach to solving singular initial value problems for Lane–Emden type equations. J. Comput. Appl. Math. 290, 65–73 (2015)
26. 26.
Rebenda, J.: An application of Bell polynomials in numerical solving of nonlinear differential equations. In: 17th Conference on Applied Mathematics, APLIMAT 2018—Proceedings, pp. 891–900. Spektrum STU, Bratislava (2018) Google Scholar
27. 27.
Bellen, A., Zennaro, M.: Numerical Methods for Delay Differential Equations. Oxford University Press, Oxford (2003)
28. 28.
Warne, P.G., Polignone Warne, D.A., Sochacki, J.S., Parker, G.E., Carothers, D.C.: Explicit a-priori error bounds and adaptive error control for approximation of nonlinear initial value differential systems. Comput. Math. Appl. 52, 1695–1710 (2006)
29. 29.
Widatalla, S., Koroma, M.A.: Approximation algorithm for a system of pantograph equations. J. Appl. Math. 9, Article ID 714681 (2012)
30. 30.
Komashynska, I., Al-Smadi, M., Al-Habahbeh, A., Ateiwi, A.: Analytical approximate solutions of systems of multipantograph delay differential equations using residual power-series method. Aust. J. Basic Appl. Sci. 8(10), 664–675 (2014) Google Scholar