# Jacobi Collocation Methods for Solving Generalized Space-Fractional Burgers’ Equations

• Qingqing Wu
• Xiaoyan Zeng
Original Paper

## Abstract

The aim of this paper is to obtain the numerical solutions of generalized space-fractional Burgers’ equations with initial-boundary conditions by the Jacobi spectral collocation method using the shifted Jacobi–Gauss–Lobatto collocation points. By means of the simplified Jacobi operational matrix, we produce the differentiation matrix and transform the space-fractional Burgers’ equation into a system of ordinary differential equations that can be solved by the fourth-order Runge–Kutta method. The numerical simulations indicate that the Jacobi spectral collocation method is highly accurate and rapidly convergent for the generalized space-fractional Burgers’ equation.

## Keywords

Generalized space-fractional Burgers’ equations · Jacobi spectral collocation methods · Differentiation matrix · Shifted Jacobi–Gauss–Lobatto collocation points

Mathematics Subject Classification: 65M70 · 35R11

## 1 Introduction

The one-dimensional generalized space-fractional Burgers’ equation is
\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u +\epsilon u \partial _x u - \mu \partial _x^2 u + \eta \partial _x^\nu u= g(x,t), \quad (x,t) \in (0,1) \times (0,T), \\ u(0,t) = 0, \quad u(1,t) =0, \quad t \in (0,T),\\ u(x, 0) = u_0(x), \quad x \in (0, 1), \end{array}\right. } \end{aligned}
(1)
where $$u = u(x,t)$$, $$\epsilon$$, $$\mu$$ and $$\eta$$ are arbitrary positive constants and $$\nu \in (0,1)$$ is the fractional order of the left-sided Caputo derivative [20] in the space direction, i.e.,
\begin{aligned} \partial _x^\nu u (x,t)={}_0^{\rm c} D_x^{\nu } u(x,t) := \frac{1}{\Gamma (1 - \nu )} \int _0^x (x - y)^{ - \nu } \frac{{\partial }u(y,t)}{{\partial }y} {\mathrm {d}}y. \end{aligned}
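As a concrete check of this definition, the integral can be evaluated numerically. The sketch below is our own illustration (names and parameters ours, not part of the paper): the substitution $$u=(x-y)^{1-\nu }$$ removes the endpoint singularity of the kernel, so standard Gauss–Legendre quadrature converges quickly.

```python
import math
import numpy as np

def caputo(f_prime, x, nu, n=64):
    """Left Caputo derivative of order nu in (0, 1) at x, computed from the
    definition. Substituting u = (x - y)^(1 - nu) removes the kernel's
    endpoint singularity, so Gauss-Legendre quadrature applies directly."""
    t, w = np.polynomial.legendre.leggauss(n)
    b = x**(1 - nu)                       # upper limit after substitution
    u = b * (t + 1) / 2                   # quadrature nodes on [0, x^(1-nu)]
    vals = f_prime(x - u**(1.0 / (1 - nu)))
    return (b / 2) * np.dot(w, vals) / (math.gamma(1 - nu) * (1 - nu))
```

For $$u(y)=y^2$$ this reproduces the power rule $$\partial _x^\nu x^2 = \frac{\Gamma (3)}{\Gamma (3-\nu )}x^{2-\nu }$$, which is used later when constructing the source terms.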

The generalized space-fractional Burgers’ equation describes the physical process of a weakly nonlinear sound wave propagating in one direction through a gas-filled tube [23]. By the above definition of the Caputo derivative, we can see that the value of the fractional derivative at $$x'$$ depends on the function values at $$x <x'$$. The fractional derivative in the equation thus can depict the accumulation (memory) effect of wall friction when the sound wave passes through the boundary layer [5, 14]. Similar phenomena occur in other physical processes, such as shallow water waves [13] and waves in bubbly liquids [18]. In 1998, the existence, uniqueness and self-similarity of solutions of the Cauchy problem for the multidimensional space-fractional Burgers’ equation were discussed in [3]. Two methods are commonly used to deduce analytical solutions: Adomian decomposition methods [7] and variational iteration methods [12, 19]. To solve fractional differential equations numerically, the finite difference method and the spectral method are often used (see, e.g., [8, 16, 23] for finite difference methods and [1, 2, 6, 15, 25, 26] for spectral methods).

Spectral methods are well-known highly accurate methods and have been widely used to numerically solve partial differential equations (PDEs) for several decades [4, 9, 10, 11, 17, 21, 22, 24, 27]. In practical applications, spectral methods are normally implemented via collocation, Galerkin, or tau approaches. In this paper, we focus on how to use the Jacobi spectral collocation method to numerically solve the generalized space-fractional Burgers’ equation defined in (1) using the fractional Jacobi differentiation matrix.

This paper is arranged as follows. The next section introduces preliminaries for standard Jacobi spectral collocation methods. In Sect. 3, we introduce the shifted Jacobi polynomials, simplify the expression of the Jacobi operational matrix originally deduced in [2, 6], and present the formula of the fractional Jacobi differentiation matrix at the shifted Jacobi–Gauss–Lobatto collocation (JGLC) points. In Sect. 4, we transform the space-fractional Burgers’ equation into a system of ODEs. Two examples of space-fractional Burgers’ equations are then provided, and the numerical results obtained by the proposed method are presented in the tables and Fig. 1. Finally, conclusions are drawn in the last section.

## 2 Differentiation Matrix and Jacobi Polynomials

Let u be a function defined on $$\varXi$$, $$x_j\in \varXi , j=0,1,\cdots ,N$$ be the collocation points, and $$h_j(x), j=0,1,\cdots ,N$$ be Lagrange basis polynomials. Then, the approximation of u is
\begin{aligned} u_N(x)=\sum _{j=0}^N u(x_j) h_j(x) . \end{aligned}
(2)
We hence have
\begin{aligned} \partial _x^{k} u_N(x)=\sum _{j=0}^N u(x_j) \partial _x^{k} h_j(x). \end{aligned}
The vector of values of the $$k\hbox {th}$$ derivative at the collocation points can then be written as the matrix $$\varvec{D}^{(k)}$$ multiplying the vector of function values:
\begin{aligned} \varvec{u}^{(k)} = \varvec{D}^{(k)} \varvec{u}, \end{aligned}
(3)
where
\left\{ \begin{aligned} &\varvec{u}= \left[ u(x_0), u(x_1),\cdots , u(x_N) \right] ^{\text {T}}, \\& \varvec{u}^{(k)}= \left[ \partial _x^{k} u(x_0), \partial _x^{k} u(x_1),\cdots , \partial _x^{k} u(x_N) \right] ^{\text {T}}, \\& \varvec{D}^{(k)}= (d_{ij}^{(k)})= ( \partial _x^{k}h_j(x_i)). \end{aligned} \right.
The $$(N+1)\times (N+1)$$ matrix $$\varvec{D}^{(k)}$$ is the $$k$$th-order differentiation matrix. In practice, we can use $$(\varvec{D}^{(1)})^k$$ to obtain $$\varvec{D}^{(k)}$$.
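For illustration, a first-order differentiation matrix on arbitrary distinct nodes can be assembled from the barycentric weights of the Lagrange basis. This is a generic sketch of the construction (our own names, not the paper's Jacobi-specific formulas):

```python
import numpy as np

def diff_matrix(x):
    """First-order differentiation matrix D^(1) on arbitrary distinct nodes x,
    built from the barycentric weights w_i = 1 / prod_{j != i} (x_i - x_j)."""
    n = len(x)
    w = np.array([1.0 / np.prod([x[i] - x[j] for j in range(n) if j != i])
                  for i in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i])   # rows sum to zero: differentiation kills constants
    return D
```

Applying the matrix twice to the nodal values of $$x^3$$ recovers $$6x$$, illustrating that $$(\varvec{D}^{(1)})^2$$ acts as $$\varvec{D}^{(2)}$$.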
The standard Jacobi polynomials of degree n, denoted by $$P^{\alpha ,\beta }_{n}$$, for $$\alpha >-1$$, $$\beta >-1$$, are orthogonal with respect to the weight function $$\omega ^{\alpha ,\beta }=(1-x)^{\alpha }(1+x)^\beta$$ over $$\varLambda :=[-1,1].$$ From [22], $$P^{\alpha ,\beta }_{n}$$ can be written as
\begin{aligned} P^{\alpha ,\beta }_{n}(x)=\frac{\Gamma (n+\alpha +1)}{n! \Gamma (n+\alpha +\beta +1)}\sum _{k=0}^n (-1)^k \left( {\begin{array}{c}n\\ k\end{array}}\right) \frac{\Gamma (n+k+\alpha +\beta +1)}{\Gamma (k+\alpha +1)}\left( \frac{1-x}{2}\right) ^k. \end{aligned}
(4)
With the symmetry relation $$P^{\alpha ,\beta }_{n}(-x)=(-1)^n P^{\beta ,\alpha }_{n}(x)$$, $$P^{\alpha ,\beta }_{n}$$ hence can also be written as
\begin{aligned} P^{\alpha ,\beta }_{n}(x) =\frac{(-1)^{n}\Gamma (n+\beta +1)}{n! \Gamma (n+\alpha +\beta +1)}\sum _{k=0}^n (-1)^k \left( {\begin{array}{c}n\\ k\end{array}}\right) \frac{\Gamma (n+k+\alpha +\beta +1)}{\Gamma (k+\beta +1)}\left( \frac{1+x}{2}\right) ^k. \end{aligned}
(5)
If Jacobi–Gauss–Lobatto (JGL) points, zeros of $$(1-x^2)P^{\alpha +1,\beta +1}_{N-1}(x)$$, are used as collocation points in (2), the Lagrange basis polynomials $$h_j(x)$$ can be formulated by the Jacobi polynomials as
\begin{aligned} h_j(x)=\sum _{i=0}^N t_{ij} P_{i}^{\alpha ,\beta }(x), \end{aligned}
(6)
where
\begin{aligned} t_{ij}& = \frac{\omega _j^{\alpha ,\beta }}{\tilde{\gamma }_i^{\alpha ,\beta }} P_{i}^{\alpha ,\beta }(x_j), \end{aligned}
(7)
\begin{aligned} \tilde{\gamma }_i^{\alpha ,\beta }& = \left\{ \begin{aligned}&\frac{2^{\alpha +\beta +1}\Gamma (i+\alpha +1)\Gamma (i+\beta +1)}{(2i+\alpha +\beta +1)i!\Gamma (i+\alpha +\beta +1)},\quad i=0,\cdots ,N-1,\\&\left(2+\frac{\alpha +\beta +1}{N}\right)\frac{2^{\alpha +\beta +1}\Gamma (N+\alpha +1)\Gamma (N+\beta +1)}{(2N+\alpha +\beta +1)N!\Gamma (N+\alpha +\beta +1)},\quad i=N, \end{aligned} \right. \end{aligned}
(8)
and
\begin{aligned} {\omega }_i^{\alpha ,\beta }=\left\{ \begin{aligned}&\frac{2^{\alpha +\beta +1}(\beta +1)\Gamma ^2(\beta +1)\Gamma (N)\Gamma (N+\alpha +1)}{\Gamma (N+\beta +1)\Gamma (N+\alpha +\beta +2)},\quad i=0,\\&\frac{\tilde{G}_{N-2}^{\alpha +1,\beta +1}}{(1-x_i^2)^2(\partial _x P_{N-1}^{\alpha +1,\beta +1}(x_i))^2}, \quad i=1,\cdots ,N-1,\\&\frac{2^{\alpha +\beta +1}(\alpha +1)\Gamma ^2(\alpha +1)\Gamma (N)\Gamma (N+\beta +1)}{\Gamma (N+\alpha +1)\Gamma (N+\alpha +\beta +2)}, \quad i=N. \end{aligned} \right. \end{aligned}
(9)
Here $$\tilde{G}_{N-2}^{\alpha +1,\beta +1}=\frac{2^{\alpha +\beta +3}\Gamma (N+\alpha +1)\Gamma (N+\beta +1)}{(N-1)!\Gamma (N+\alpha +\beta +2)}$$. The corresponding differentiation matrix for JGLC points can be found in [22].

## 3 Fractional Jacobi Differentiation Matrix

In this section, we derive the Caputo fractional differentiation matrix at shifted Jacobi collocation points, which is later used to solve fractional PDEs. The shifted Jacobi polynomials, $$P_{0,L,i}^{\alpha ,\beta }(x)$$, which are defined on [0, L], may be obtained from $$P_{0,L,i}^{\alpha ,\beta }(x)=P_{i}^{\alpha ,\beta }(\frac{2x}{L}-1)$$.

The shifted Jacobi polynomials satisfy the following orthogonality condition:
\begin{aligned} \int _{0}^L P_{0,L,i}^{\alpha ,\beta }(x) P_{0,L,j}^{\alpha ,\beta }(x)\omega _{0,L}^{\alpha ,\beta } {\text {d}}x =\delta _{i,j}\gamma _{0,L,i}^{\alpha ,\beta }, \end{aligned}
where $$\omega _{0,L}^{\alpha ,\beta }(x)=(L-x)^\alpha x^\beta$$ and
\begin{aligned} \gamma _{0,L,i}^{\alpha ,\beta }=\frac{L^{\alpha +\beta +1}\Gamma (i+\alpha +1)\Gamma (i+\beta +1)}{(2i+\alpha +\beta +1)i!\Gamma (i+\alpha +\beta +1)}. \end{aligned}
(10)
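The orthogonality relation and the constants in (10) are easy to verify numerically in the Legendre case $$\alpha =\beta =0$$, where the weight is identically 1 and (10) reduces to $$\gamma _{0,L,i}^{0,0}=L/(2i+1)$$. A small sketch (variable names ours), using Gauss–Legendre quadrature, which is exact for the polynomial integrands involved:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

L = 2.0
t, w = leggauss(32)                   # Gauss-Legendre rule on [-1, 1]
x = L * (t + 1) / 2                   # mapped quadrature nodes on [0, L]
V = legvander(2 * x / L - 1, 5)       # V[m, i] = P_{0,L,i}(x_m), i = 0..5
G = (L / 2) * V.T @ (w[:, None] * V)  # Gram matrix of the shifted basis
```

The computed Gram matrix is diagonal to machine precision, with diagonal entries $$L/(2i+1)$$.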
The shifted Jacobi–Gauss–Lobatto collocation points are $$y_j=\frac{ L}{2}(x_j +1),j=0,1,\cdots ,N.$$ As in the case of standard Jacobi polynomials, the approximation $$u_N$$ of a function u defined on [0, L] can be written as
\begin{aligned} u_N(x)=\sum _{j=0}^N u(y_j) \sum _{i=0}^N t_{ij} P_{0,L,i}^{\alpha ,\beta }(x). \end{aligned}
(11)
When the shifted Jacobi polynomials are used, the differentiation matrix corresponding to the first derivative becomes $$\varvec{D}_s^{(1)}=\frac{2}{L}\varvec{D}^{(1)}$$.
For the Caputo derivative, we have
\begin{aligned} \partial _x^\nu u_N(x)=\sum _{j=0}^N u(y_j) \sum _{i=0}^N t_{ij} \,\partial _x^\nu P_{0,L,i}^{\alpha ,\beta }(x). \end{aligned}
According to [2, 6], the fractional derivative of a shifted Jacobi polynomial can be approximated by
\begin{aligned} \partial _x^\nu P_{0,L,i}^{\alpha ,\beta }(x)\approx \sum _{j = 0}^N \varDelta _{\nu }(i,j)P_{0,L,j}^{\alpha ,\beta }(x), \end{aligned}
(12)
where
\begin{aligned} \varDelta _{\nu }(i,j)=\left\{ \begin{array}{ll} 0, &{} i=0,\cdots ,\lceil \nu \rceil -1,\\ \sum\limits_{k = \lceil \nu \rceil }^i \delta _{i,j,k},&{} i=\lceil \nu \rceil ,\cdots ,N, \end{array} \right. \end{aligned}
(13)
and
\begin{aligned} \begin{aligned} \delta _{i,j,k}&= \frac{(-1)^{i-k} L^{\alpha + \beta - \nu + 1} \Gamma (j + \beta + 1) \Gamma (i + \beta + 1) \Gamma (i + k + \alpha + \beta + 1)}{\gamma _{0,L,j}^{\alpha ,\beta } \Gamma (j + \alpha + \beta + 1) \Gamma (k + \beta + 1) \Gamma (i + \alpha + \beta + 1) \Gamma (k - \nu + 1) (i - k)!} \\&\quad \times \sum _{l = 0}^j \frac{(-1)^{j-l} \Gamma (j + l + \alpha + \beta + 1) \Gamma (\alpha + 1) \Gamma (l + k + \beta - \nu + 1)}{\Gamma (l + \beta + 1) \Gamma (l + k + \alpha + \beta - \nu + 2) (j - l)! l!}. \end{aligned} \end{aligned}
(14)
Let $$\varvec{T} : =(t_{ij})$$, $$\varvec{\varDelta }_{\nu }:= (\varDelta _{\nu }(i,j))$$ and $$\varvec{ J}:= (P_{0,L,i}^{\alpha ,\beta }(y_j))= (P_{i}^{\alpha ,\beta }(x_j))$$. We then can define the fractional Jacobi differentiation matrix as
\begin{aligned} ^{\rm c}\varvec{D}^{(\nu )}_s :=\varvec{J}' \varvec{\varDelta }_{\nu }' \varvec{T} , \end{aligned}
where the prime denotes the matrix transpose.
Note that the matrix $$\varvec{\varDelta }_{\nu }$$ is the so-called Jacobi operational matrix in [6], and the formula of $$\varDelta _{\nu }(i,j)$$ can be simplified further. The simplified Jacobi operational matrix will be provided later in this section.
From (5) and the definition of shifted Jacobi polynomials, we have the analytic form of shifted Jacobi polynomials as
\begin{aligned} P^{\alpha ,\beta }_{0,L,i}(x)& = P^{\alpha ,\beta }_{i}\left( \frac{2x}{L}-1\right) \\& = \sum _{k=0}^i (-1)^{i-k}\frac{\Gamma (i+\beta +1)\Gamma (i+k+\alpha +\beta +1)}{k!(i-k)!\Gamma (i+\alpha +\beta +1)\Gamma (k+\beta +1)}\left( \frac{1}{L}\right) ^k x^k. \end{aligned}
According to [20], $$\partial _x^\nu x^k =0$$ for $$k < \nu$$ and $$\partial _x^\nu x^k= \frac{\Gamma (k+1)}{\Gamma (k+1-\nu )} x^{k-\nu }$$ for $$k \ge \nu$$; it follows that
\begin{aligned}&\partial _x^\nu P^{\alpha ,\beta }_{0,L,i}(x)\\& = \sum _{k=\lceil \nu \rceil }^i \frac{(-1)^{i-k}\Gamma (i+\beta +1)\Gamma (i+k+\alpha +\beta +1)}{(i-k)!\Gamma (k+1-\nu )\Gamma (i+\alpha +\beta +1)\Gamma (k+\beta +1)}\left( \frac{1}{L}\right) ^k x^{k-\nu }. \end{aligned}
An approximation of $$x^{\mu }$$ can be calculated as
\begin{aligned} x^{\mu } \approx \sum _{j=0}^N b_{j,\mu } P_{0,L,j}^{\alpha ,\beta }(x), \end{aligned}
where
\begin{aligned} b_{j,\mu }:=\frac{1}{\gamma _{0,L,j}^{\alpha ,\beta }}\int _0^L x^{\mu }P_{0,L,j}^{\alpha ,\beta }(x)\omega _{0,L}^{\alpha ,\beta }{\text {d}}x . \end{aligned}
(15)
Therefore
\begin{aligned} \partial _x^\nu P^{\alpha ,\beta }_{0,L,i}(x)\approx & {} \sum _{k=\lceil \nu \rceil }^i L^{-k} \frac{(-1)^{i-k}\Gamma (i+\beta +1)\Gamma (i+k+\alpha +\beta +1)}{(i-k)!\Gamma (k+1-\nu )\Gamma (i+\alpha +\beta +1)\Gamma (k+\beta +1)} \nonumber \\&\times \sum _{j=0}^N b_{j,k-\nu } P_{0,L,j}^{\alpha ,\beta }(x). \end{aligned}
(16)

The exact formula of the coefficient $$b_{j,\mu }$$ is provided in the following lemma.

### Lemma 1

If $$b_{j,\mu }$$ is defined as in (15), then
\begin{aligned} b_{j,\mu }=\frac{L^{\alpha +\beta +\mu +1}}{\gamma _{0,L,j}^{\alpha ,\beta }}\frac{\Gamma (j+\alpha +1)\Gamma (\beta +\mu +1)}{j! \Gamma (j+\alpha +\beta +1)}\sum _{l=0}^j (-1)^{l} \left( {\begin{array}{c}j\\ l\end{array}}\right) \frac{\Gamma (j+l+\alpha +\beta +1)}{\Gamma (l+\alpha +\beta +\mu +2)}. \end{aligned}

### Proof

\begin{aligned} \int _0^L x^{\mu } P^{\alpha ,\beta }_{0,L,j}(x)\omega _{0,L}^{\alpha ,\beta }(x){\text {d}}x& = \left( \frac{L}{2}\right) ^{\alpha +\beta +\mu +1}\int _{-1}^1 (x+1)^{\mu } P^{\alpha ,\beta }_{j}(x)\omega ^{\alpha ,\beta }(x){\text {d}}x\\& = \left( \frac{L}{2}\right) ^{\alpha +\beta +\mu +1}\int _{-1}^1 P^{\alpha ,\beta }_{j}(x)\omega ^{\alpha ,\beta +\mu }(x){\text {d}}x. \end{aligned}
According to (4), we have
\begin{aligned}&\int _{-1}^1 P^{\alpha ,\beta }_{j}(x)\omega ^{\alpha ,\beta +\mu }(x){\text {d}}x\nonumber \\& = \frac{\Gamma (j+\alpha +1)}{j! \Gamma (j+\alpha +\beta +1)}\sum _{l=0}^j \left( {\begin{array}{c}j\\ l\end{array}}\right) \frac{\Gamma (j+l+\alpha +\beta +1)}{(-1)^{l}2^l\Gamma (l+\alpha +1)}\int _{-1}^{1}\omega ^{\alpha +l,\beta +\mu }(x){\text {d}}x\nonumber \\& =2^{\alpha +\beta +\mu +1}\frac{\Gamma (j+\alpha +1)\Gamma (\beta +\mu +1)}{j! \Gamma (j+\alpha +\beta +1)}\sum _{l=0}^j (-1)^{l} \left( {\begin{array}{c}j\\ l\end{array}}\right) \frac{\Gamma (j+l+\alpha +\beta +1)}{\Gamma (l+\alpha +\beta +\mu +2)}. \end{aligned}
(17)
The last equality in the above equations holds because
\begin{aligned} \int _{-1}^{1}\omega ^{\alpha +l,\beta +\mu }(x){\text {d}}x =2^{\alpha +\beta +l+\mu +1}\frac{\Gamma (\alpha +l+1)\Gamma (\beta +\mu +1)}{\Gamma (\alpha +\beta +l+\mu +2)}. \end{aligned}
Dividing the integral by $$\gamma _{0,L,j}^{\alpha ,\beta }$$ as in (15) completes the proof.
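Lemma 1 can be sanity-checked numerically. The sketch below is ours and takes the Legendre case $$\alpha =\beta =0$$, $$L=1$$ for simplicity; it compares the closed form, including the $$1/\gamma _{0,L,j}^{\alpha ,\beta }$$ normalization from (15), against direct Gauss–Legendre quadrature of the defining integral.

```python
import math
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def b_coeff(j, mu, L=1.0):
    """b_{j,mu} of Lemma 1 in the Legendre case alpha = beta = 0,
    with the 1/gamma_{0,L,j} normalization of (15) included."""
    G = math.gamma
    s = sum((-1)**l * math.comb(j, l) * G(j + l + 1) / G(l + mu + 2)
            for l in range(j + 1))
    return L**mu * (2 * j + 1) * G(mu + 1) / G(j + 1) * s

def b_quad(j, mu, L=1.0, n=200):
    """Same coefficient from the defining integral (15), by quadrature."""
    t, w = leggauss(n)
    x = L * (t + 1) / 2
    Pj = legval(2 * x / L - 1, [0] * j + [1])   # shifted Legendre P_{0,L,j}
    integral = (L / 2) * np.sum(w * x**mu * Pj)
    gamma_j = L / (2 * j + 1)                   # Eq. (10) with alpha = beta = 0
    return integral / gamma_j
```

Both routines agree to quadrature accuracy even for non-integer $$\mu$$, where the integrand is not a polynomial.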

If we modify the proof of the above lemma using (5) instead of (4) to compute (17), we can verify the results in (13) and (14).

### Lemma 2

For arbitrary numbers $$\mu , \gamma ,$$
\begin{aligned} \sum _{k=0}^i \left( {\begin{array}{c}-\gamma \\ k\end{array}}\right) \left( {\begin{array}{c}\mu +\gamma \\ i-k\end{array}}\right) =\left( {\begin{array}{c}\mu \\ i\end{array}}\right) . \end{aligned}
(18)

### Proof

It is true that
\begin{aligned} \sum _{i=0}^\infty \left( {\begin{array}{c}\mu \\ i\end{array}}\right) x^i=(1+x)^{\mu }=(1+x)^{-\gamma }(1+x)^{\mu +\gamma } =\sum _{k=0}^\infty \left( {\begin{array}{c}-\gamma \\ k\end{array}}\right) x^k \sum _{j=0}^\infty \left( {\begin{array}{c}\mu +\gamma \\ j\end{array}}\right) x^{j}. \end{aligned}
(19)
Comparing the coefficients of $$x^i$$ on the leftmost and rightmost sides of the above equation completes the proof.

### Lemma 3

If $$j$$ is a nonnegative integer, then
\begin{aligned}&\frac{1}{j! \Gamma (j+\alpha +\beta +1)}\sum _{l=0}^j (-1)^l \left( {\begin{array}{c}j\\ l\end{array}}\right) \frac{\Gamma (j+l+\alpha +\beta +1)}{\Gamma (l+\mu +\alpha +\beta +2)} \\& =\left( {\begin{array}{c}\mu \\ j\end{array}}\right) \frac{1}{\Gamma (j+\mu +\alpha +\beta +2)}. \end{aligned}

### Proof

According to the properties of the gamma function, we have
\begin{aligned} \Gamma (j+l+\alpha +\beta +1)= \Gamma (j+\alpha +\beta +1)\prod _{m=0}^{l-1}(m+j+\alpha +\beta +1), \end{aligned}
and
\begin{aligned} \Gamma (j+\alpha +\beta +\mu +2)= \Gamma (l+\alpha +\beta +\mu +2)\prod _{m=l+1}^{j}(m+\alpha +\beta +\mu +1). \end{aligned}
By the definition of the binomial coefficients, we have
\begin{aligned}&\frac{(-1)^l\prod\limits _{m=0}^{l-1}(m+j+\alpha +\beta +1)}{l!}\\&\quad =\prod _{m=0}^{l-1}\frac{-j-\alpha -\beta -1-m}{m+1}=\left( {\begin{array}{c}-j-\alpha -\beta -1\\ l\end{array}}\right) , \end{aligned}
and
\begin{aligned}&\frac{\prod\limits _{m=l+1}^{j}(m+\alpha +\beta +\mu +1)}{(j-l)!}\\& =\prod _{m=0}^{j-l-1}\frac{j+\alpha +\beta +\mu +1-m}{m+1}=\left( {\begin{array}{c}j+\alpha +\beta +\mu +1\\ j-l\end{array}}\right) . \end{aligned}
Hence
\begin{aligned}&\frac{1}{j! \Gamma (j+\alpha +\beta +1)}\sum _{l=0}^j (-1)^l \left( {\begin{array}{c}j\\ l\end{array}}\right) \frac{\Gamma (j+l+\alpha +\beta +1)}{\Gamma (l+\mu +\alpha +\beta +2)} \\& =\frac{1}{\Gamma (j+\mu +\alpha +\beta +2)}\sum _{l=0}^j \left( {\begin{array}{c}-j-\alpha -\beta -1\\ l\end{array}}\right) \left( {\begin{array}{c}j+\alpha +\beta +\mu +1\\ j-l\end{array}}\right) . \end{aligned}
Applying Lemma 2 to the above equation then completes the proof.
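Since Lemma 3 is a purely algebraic identity, it can be spot-checked numerically for non-integer parameters; a small sketch of ours, with the generalized binomial coefficient expressed through gamma functions:

```python
import math

def lemma3_lhs(j, alpha, beta, mu):
    # left-hand side: the alternating gamma-function sum
    G = math.gamma
    s = sum((-1)**l * math.comb(j, l) * G(j + l + alpha + beta + 1)
            / G(l + mu + alpha + beta + 2)
            for l in range(j + 1))
    return s / (math.factorial(j) * G(j + alpha + beta + 1))

def lemma3_rhs(j, alpha, beta, mu):
    # right-hand side: binom(mu, j) / Gamma(j + mu + alpha + beta + 2),
    # with binom(mu, j) = Gamma(mu+1) / (Gamma(j+1) Gamma(mu-j+1))
    G = math.gamma
    binom = G(mu + 1) / (G(j + 1) * G(mu - j + 1))
    return binom / G(j + mu + alpha + beta + 2)
```

Note that `math.gamma` handles the negative non-integer arguments that arise in $$\Gamma (\mu -j+1)$$ when $$j>\mu$$.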
According to the above lemmas, the coefficients of the approximation of $$x^{\mu }$$ can be written as
\begin{aligned} b_{j,\mu }=\frac{L^{\alpha +\beta +\mu +1}}{\gamma _{0,L,j}^{\alpha ,\beta }} \left( {\begin{array}{c}\mu \\ j\end{array}}\right) \frac{\Gamma (\alpha +1+j)\Gamma (\beta +1+\mu )}{\Gamma (j+\alpha +\beta +\mu +2)}. \end{aligned}
Because
\begin{aligned} \left( {\begin{array}{c}x\\ y\end{array}}\right) =\frac{\Gamma (x+1)}{\Gamma (y+1)\Gamma (x-y+1)}, \end{aligned}
the Caputo fractional derivative of shifted Jacobi polynomials can be computed by
\begin{aligned} \partial _x^\nu P^{\alpha ,\beta }_{0,L,i}(x) =& L^{\alpha +\beta +1-\nu }\sum _{k=\lceil \nu \rceil }^i \frac{(-1)^{i-k}\Gamma (i+\beta +1)\Gamma (i+k+\alpha +\beta +1)}{(i-k)!\Gamma (i+\alpha +\beta +1)\Gamma (k+\beta +1)} \\&\times \sum _{j=0}^N \frac{\Gamma (j+\alpha +1)\Gamma (\beta +k-\nu +1)}{\gamma _{0,L,j}^{\alpha ,\beta }j!\Gamma (k-\nu -j+1)\Gamma (j+\alpha +\beta +k-\nu +2)} P_{0,L,j}^{\alpha ,\beta }(x). \end{aligned}
To sum up, we present the following theorem.

### Theorem 1

The differentiation matrix for the Caputo fractional derivative of $$u_N$$ defined in (11) using shifted Jacobi polynomials can be calculated by
\begin{aligned} ^{\rm c}\varvec{D}^{(\nu )}_s :=\varvec{J}' \varvec{\varDelta }_{\nu }' \varvec{T} , \end{aligned}
where $$\varvec{T }: =(t_{ij})$$ with $$t_{ij}$$ defined in (7)–(9), $$\varvec{\varDelta }_{\nu }:= \left( \varDelta _{\nu }(i,j)\right)$$ with
\begin{aligned}&\varDelta _{\nu }(i,j)=\left\{ \begin{array}{ll} 0, &{} i=0,\cdots ,\lceil \nu \rceil -1,\\ \sum\limits_{k = \lceil \nu \rceil }^i \tilde{\delta }_{i,j,k},&{} i=\lceil \nu \rceil ,\cdots ,N, \end{array} \right. \end{aligned}
(20)
\begin{aligned}&\begin{aligned} \tilde{\delta }_{i,j,k}=&\frac{L^{\alpha +\beta +1-\nu }(-1)^{i-k}\Gamma (i+\beta +1)\Gamma (i+k+\alpha +\beta +1)}{(i-k)!\Gamma (i+\alpha +\beta +1)\Gamma (k+\beta +1)} \\&\times \frac{1}{\gamma _{0,L,j}^{\alpha ,\beta }} \frac{\Gamma (j+\alpha +1)\Gamma (\beta +k-\nu +1)}{j!\Gamma (k-\nu -j+1)\Gamma (j+\alpha +\beta +k-\nu +2)}, \end{aligned} \end{aligned}
(21)
and $$\varvec{J}:= (P_{i}^{\alpha ,\beta }(x_j))$$ with $$x_j,j=0,\cdots ,N$$, being the standard Jacobi–Gauss–Lobatto collocation points.

Obviously, the formula for $$\varDelta _{\nu }(i,j)$$ in the above theorem is much simpler than that in (13)–(14). It should be mentioned that, due to (12), multiplying the above fractional Jacobi differentiation matrix by a vector of function values yields only an approximation of the values of the corresponding fractional derivative at the Jacobi collocation points.

When $$\alpha =\beta =-1/2$$, the Jacobi polynomials are
\begin{aligned} P^{-\frac{1}{2},-\frac{1}{2}}_i(x)=T_i(x)\frac{\Gamma (i+\frac{1}{2})}{\Gamma (i+1)\Gamma (\frac{1}{2})}, \end{aligned}
where $$T_i(x)$$ is the Chebyshev polynomial of degree i. Hence, we can derive the following corollary.

### Corollary 1

The differentiation matrix for the Caputo fractional derivative of $$u_N$$ defined in (11) using shifted Chebyshev polynomials can be calculated by
\begin{aligned} ^{\rm c}\varvec{D}^{(\nu )}_s :=\varvec{J}' \varvec{\varDelta }_{\nu }' \varvec{\widetilde{T}} , \end{aligned}
where the matrix $$\varvec{\widetilde{T} }: =\left( (-1)^i\frac{2\cos (ij\pi /N)}{\tilde{c}_i\tilde{c}_j N} \right)$$, $$\tilde{c}_0=\tilde{c}_N=2,$$ $$\tilde{c}_j=1,j=1,\cdots ,N-1$$, $$\varvec{\varDelta }_{\nu }:= \left( \varDelta _{\nu }(i,j)\right)$$ with
\begin{aligned}&\varDelta _{\nu }(i,j)=\left\{ \begin{array}{ll} 0, &{} i=0,\cdots ,\lceil \nu \rceil -1,\\ \sum\limits_{k = \lceil \nu \rceil }^i \tilde{\delta }_{i,j,k},&{} i=\lceil \nu \rceil ,\cdots ,N, \end{array} \right. \end{aligned}
(22)
\begin{aligned}&\begin{aligned} \tilde{\delta }_{i,j,k}=&\frac{L^{-\nu }(-1)^{i-k} i \Gamma (i+k)}{(i-k)!\Gamma (k+1/2)}\times \frac{1}{c_j} \frac{\Gamma (k-\nu +1/2)}{\Gamma (k-\nu -j+1)\Gamma (j+k-\nu +1)}, \end{aligned} \end{aligned}
(23)
$${c}_0=1,$$ $${c}_j=1/2,j=1,\cdots ,N$$, and $$\varvec{J}:= ((-1)^i\cos (ij\pi /N))$$.

When $$\alpha =\beta =0,$$ the Jacobi polynomials $$P^{0,0}_i(x)$$ become Legendre polynomials $$L_i(x)$$. Hence, for shifted Legendre polynomials, we have the following corollary.

### Corollary 2

The differentiation matrix for the Caputo fractional derivative of $$u_N$$ defined in (11) using shifted Legendre polynomials can be calculated by
\begin{aligned} ^{\rm c}\varvec{D}^{(\nu )}_s :=\varvec{J}' \varvec{\varDelta }_{\nu }' \varvec{{T}} , \end{aligned}
where $$\varvec{{T} }: =({t}_{ij})$$ with
\begin{aligned} t_{ij}=\left\{ \begin{aligned}&\frac{ 2i+1}{N(N+1)L^2_N(x_j)}L_i(x_j), \quad i=0,1,\cdots ,N-1,\\&\frac{ 1}{(N+1)L_N(x_j)}, \quad i=N, \end{aligned} \right. \end{aligned}
$$\varvec{\varDelta }_{\nu }:= \left( \varDelta _{\nu }(i,j)\right)$$ with
\begin{aligned}&\varDelta _{\nu }(i,j)=\left\{ \begin{array}{ll} 0, &{} i=0,\cdots ,\lceil \nu \rceil -1,\\ \sum\limits_{k = \lceil \nu \rceil }^i \tilde{\delta }_{i,j,k},&{} i=\lceil \nu \rceil ,\cdots ,N, \end{array} \right. \end{aligned}
(24)
\begin{aligned}&\begin{aligned} \tilde{\delta }_{i,j,k}& = L^{-\nu } (-1)^{i-k}\frac{\Gamma (i+k+1)}{(i-k)!\Gamma (k+1)} \frac{(2j+1) \Gamma (k-\nu +1)}{\Gamma (k-\nu -j+1)\Gamma (j+k-\nu +2)}, \end{aligned} \end{aligned}
(25)
and $$\varvec{J}:= \left( L_i(x_j)\right)$$ with $$x_j,j=0,\cdots ,N$$, being the standard Legendre–Gauss–Lobatto collocation points.
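To illustrate the construction in the Legendre setting of Corollary 2, the whole pipeline (LGL nodes, the transform $$\varvec{T}$$, the operational matrix $$\varvec{\varDelta }_\nu$$, and the product $$\varvec{J}'\varvec{\varDelta }_{\nu }'\varvec{T}$$) can be sketched in a few lines. This is our own illustrative implementation under the stated formulas, restricted to $$\nu \in (0,1)$$ so that $$\lceil \nu \rceil =1$$; per (12) the result is only an approximation, and the alternating gamma-function sums limit the usable N in double precision.

```python
import math
import numpy as np
from numpy.polynomial import legendre as leg

def frac_diff_matrix(N, nu, L=1.0):
    """Caputo fractional differentiation matrix of Corollary 2 (shifted
    Legendre, order nu in (0, 1)). Returns the matrix and the shifted
    Legendre-Gauss-Lobatto nodes on [0, L]."""
    cN = np.zeros(N + 1); cN[N] = 1.0
    # LGL nodes: +-1 together with the roots of L_N'(x)
    x = np.sort(np.concatenate(([-1.0], leg.legroots(leg.legder(cN)), [1.0])))
    J = leg.legvander(x, N).T                  # J[i, j] = L_i(x_j)
    LN = J[N]
    T = np.empty((N + 1, N + 1))               # nodal values -> coefficients
    for i in range(N):
        T[i] = (2 * i + 1) / (N * (N + 1) * LN**2) * J[i]
    T[N] = 1.0 / ((N + 1) * LN)
    G = math.gamma
    Delta = np.zeros((N + 1, N + 1))           # operational matrix, (24)-(25)
    for i in range(1, N + 1):                  # row 0 stays zero (ceil(nu) = 1)
        for j in range(N + 1):
            Delta[i, j] = L**(-nu) * sum(
                (-1)**(i - k) * G(i + k + 1)
                / (math.factorial(i - k) * G(k + 1))
                * (2 * j + 1) * G(k - nu + 1)
                / (G(k - nu - j + 1) * G(j + k - nu + 2))
                for k in range(1, i + 1))
    return J.T @ Delta.T @ T, L / 2 * (x + 1)
```

Applying the matrix to the nodal values of $$x^2$$ on [0, 1] approximately reproduces $$\frac{\Gamma (3)}{\Gamma (3-\nu )}x^{2-\nu }$$, up to the truncation error of the expansion (12).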

## 4 Algorithms for 1D Generalized Space-Fractional Burgers’ Equations and Numerical Examples

In this section, we are interested in using the Jacobi spectral collocation method to solve the one-dimensional generalized space-fractional Burgers’ equations (1).

Firstly, let $$\varLambda _N = \{x_{N,i}^{\alpha ,\beta }, i=0,1,\cdots ,N\}$$ be a set of shifted JGLC points. The shifted Jacobi spectral collocation method for (1) is to find $$u_N(\cdot ,t) \in P_N(\varLambda )$$, such that
\begin{aligned} {\left\{ \begin{array}{ll} \partial _t {u_N} \left( x_{N,i}^{\alpha ,\beta },t\right) =- \epsilon {u_N} \left( x_{N,i}^{\alpha ,\beta },t\right) \partial _x {u_N} \left( x_{N,i}^{\alpha ,\beta },t\right) + \mu \partial _x^2 {u_N} \left( x_{N,i}^{\alpha ,\beta },t\right) \\ \;\quad \quad \quad \quad \qquad - \eta \partial _x^\nu {u_N} \left( x_{N,i}^{\alpha ,\beta },t\right) + g(x_{N,i}^{\alpha ,\beta },t),\quad i = 1,2,\cdots ,N-1, t \in (0,T),\\ {u}_N \left( x_{N,0}^{\alpha ,\beta },t \right) = 0, \quad {u}_N \left( x_{N,N}^{\alpha ,\beta },t \right) = 0, \quad t \in (0,T),\\ {u}_N \left( x_{N,i}^{\alpha ,\beta },0 \right) = u_0 \left( x_{N,i}^{\alpha ,\beta } \right) , \quad i =0,1,\cdots ,N. \end{array}\right. } \end{aligned}
(26)
Suppose that $$\varvec{D}^{(k)}_{1:N-1}$$ and $$^{\rm c}\varvec{D}^{(\nu )}_{1:N-1}$$ are the square matrices obtained by deleting the first and last rows and columns of $$\varvec{D}^{(k)}_s$$ and $$^{\rm c}\varvec{D}^{(\nu )}_s$$, respectively. According to (3), we then can rewrite (26) approximately in the following matrix–vector form:
\begin{aligned} {\left\{ \begin{array}{ll} \partial _t \varvec{{u}}(t) = - \epsilon \varvec{u}(t)\cdot \left( \varvec{D}^{(1)}_{1:N-1}\varvec{u}(t)\right) + \mu \varvec{D}^{(2)}_{1:N-1}\varvec{u}(t) - \eta \,^{\rm c}\varvec{D}^{(\nu )}_{1:N-1}\varvec{u}(t) + \varvec{g}(t), \\ \varvec{u}(0) = \varvec{u}_0, \end{array}\right. } \end{aligned}
(27)
where
\begin{aligned} \begin{aligned} \partial _t \varvec{{u}}(t)&= \left[ \partial _t {u}_N \left( x_{N,1}^{\alpha ,\beta },t\right) ,\partial _t {u}_N \left( x_{N,2}^{\alpha ,\beta },t\right) ,\cdots ,\partial _t{u}_N\left( x_{N,N-1}^{\alpha ,\beta },t\right) \right] ^{\text {T}}, \\ {\varvec{u}}(t)&= \left[ {u}_N \left( x_{N,1}^{\alpha ,\beta },t\right) ,{u}_N \left( x_{N,2}^{\alpha ,\beta },t\right) ,\cdots ,{u}_N\left( x_{N,N-1}^{\alpha ,\beta },t\right) \right] ^{\text {T}}, \\ \varvec{u}_0&= \left[ {u}_0 \left( x_{N,1}^{\alpha ,\beta } \right) , {u}_0 \left( x_{N,2}^{\alpha ,\beta } \right) , \cdots , {u}_0 \left( x_{N,N - 1}^{\alpha ,\beta } \right) \right] ^{\text {T}},\\ {\varvec{g}}(t)&= \left[ {g} \left( x_{N,1}^{\alpha ,\beta },t\right) ,{g} \left( x_{N,2}^{\alpha ,\beta },t\right) ,\cdots ,{g}\left( x_{N,N-1}^{\alpha ,\beta },t\right) \right] ^{\text {T}}. \end{aligned} \end{aligned}
The system of ODEs (27) can be solved by the fourth-order Runge–Kutta method numerically.
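For completeness, the time integration can be sketched as follows. The names are ours, and the right-hand side assembly assumes the trimmed matrices $$\varvec{D}^{(1)}_{1:N-1}$$, $$\varvec{D}^{(2)}_{1:N-1}$$, $$^{\rm c}\varvec{D}^{(\nu )}_{1:N-1}$$ and the source vector $$\varvec{g}(t)$$ have already been computed.

```python
import numpy as np

def rk4(f, u0, t0, t1, num_steps):
    """Classical fourth-order Runge-Kutta integrator for u'(t) = f(t, u)."""
    u = np.array(u0, dtype=float)
    t, h = t0, (t1 - t0) / num_steps
    for _ in range(num_steps):
        k1 = f(t, u)
        k2 = f(t + h/2, u + h/2 * k1)
        k3 = f(t + h/2, u + h/2 * k2)
        k4 = f(t + h, u + h * k3)
        u = u + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return u

def make_rhs(D1, D2, Dnu, g, eps=1.0, mu=1.0, eta=1.0):
    """Right-hand side of the ODE system (27); '*' is the pointwise product."""
    return lambda t, u: -eps * u * (D1 @ u) + mu * (D2 @ u) - eta * (Dnu @ u) + g(t)
```

A call such as `rk4(make_rhs(D1, D2, Dnu, g), u0, 0.0, T, num_steps)` then advances the collocation values from the initial condition to time T.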

To demonstrate the algorithm above, two examples are considered.

### Example 1

The first numerical example is modified from the example in [25] by changing the domain from $$[-1,1]$$ to [0, 1] using a linear transformation. In this example, the function g(x, t) in (1) is defined by the following equations:
\left\{\begin{aligned} g(x,t)& = g_1(x,t)+g_2(x,t)+g_3(x,t)+g_4(x,t),\nonumber \\ g_1(x,t)& = -8{\text {e}}^{-t}(x^3-x^2),\nonumber \\ g_2(x,t)& = 64\varepsilon {\text {e}}^{-2t}(x^3-x^2)(3x^2-2x),\nonumber \\ g_3(x,t)& = -8\mu {\text {e}}^{-t}(6x-2),\nonumber \\ g_4(x,t)& = 8\eta {\text {e}}^{-t}\left( \frac{\Gamma (4)}{\Gamma (4-\nu )}x^{3-\nu }-\frac{\Gamma (3)}{\Gamma (3-\nu )}x^{2-\nu }\right) . \end{aligned}\right.
(28)
We can see from the above equations that the smoothness of the function g(x, t) is affected by the fractional order $$\nu$$. The initial condition is chosen to be
\begin{aligned} u_0=8(x^3-x^2). \end{aligned}
(29)
The exact solution of (1) then is $$8{\text {e}}^{-t}(x^3-x^2)$$.

### Example 2

For the second example, we select g(x, t) to be defined by
\left\{\begin{aligned} g(x,t)& = g_1(x,t)+g_2(x,t)+g_3(x,t)+g_4(x,t),\nonumber \\ g_1(x,t)& = -\left( {\text {e}}^{x}-({\text e} -1)x-1\right) {\text {e}}^{-t},\nonumber \\ g_2(x,t)& = \varepsilon \left( {\text {e}}^{ x}-({\text {e}}-1)x-1\right) \left( {\text {e}}^{ x}-{\text e}+1\right) {\text {e}}^{-2t},\nonumber \\ g_3(x,t)& = -\mu {\text {e}}^{-t+ x},\nonumber \\ g_4(x,t)& = \eta \left( x^{\lceil \nu \rceil -\nu } E_{1,\lceil \nu \rceil +1-\nu }( x) -\frac{\Gamma (2)}{\Gamma (2-\nu )}({\text {e}}-1)x^{1-\nu }\right) {\text {e}}^{-t}, \end{aligned}\right.
(30)
where $$E_{a,b}(x)$$ is the two-parameter Mittag–Leffler function. The initial condition is taken as
\begin{aligned} u_0={\text {e}}^{ x}-({\text {e}}-1)x-1. \end{aligned}
(31)
The exact solution of (1) hence is $${\text {e}}^{-t}\left( {\text {e}}^{ x}-({\text {e}}-1)x-1\right)$$.

To numerically solve the system of ODEs (27), we use the fourth-order Runge–Kutta method, which is a highly accurate algorithm. For the first example, we choose $$\Delta t = 1.0{\text {E}}-5$$ and list the root mean square errors (RMSE) of the numerical solution at the simulation time $$T = 1$$ for the fractional orders 0.3, 0.6 and 0.9 in Table 1. For the second example, we choose $$\Delta t = 1.0{\text {E}}-4$$ and show the results at simulation time $$T = 1$$ for the fractional order 0.9 in Table 2. These results are also plotted in log–log scale in Fig. 1.

The numerical results show the fast convergence of the method and indicate that, for these examples, the accuracy decreases as $$\nu$$ increases.
Table 1

Root mean square errors at $$T=1$$ of solutions of (1) with g(x, t) defined in (28) and $$u_0(x)$$ in (29) when $$\epsilon =\mu =\eta =1,$$ $$\tau =1.0{\text {E}}-5$$

| $$\alpha =\beta$$ | N | $$\nu =0.3$$ | $$\nu =0.6$$ | $$\nu =0.9$$ |
|---|---|---|---|---|
| $$-\frac{1}{2}$$ | 4 | 3.3534E−05 | 8.6183E−05 | 6.8635E−05 |
| | 8 | 2.0555E−07 | 7.6260E−07 | 8.5219E−07 |
| | 12 | 1.4797E−08 | 6.9671E−08 | 9.7623E−08 |
| | 16 | 2.3905E−09 | 1.3312E−08 | 2.1953E−08 |
| $$0$$ | 4 | 2.0678E−05 | 4.8839E−05 | 3.5343E−05 |
| | 8 | 1.3963E−07 | 5.0975E−07 | 5.5243E−07 |
| | 12 | 8.2319E−09 | 3.7446E−08 | 5.0091E−08 |
| | 16 | 1.1022E−09 | 5.7775E−09 | 8.8060E−09 |
| $$\frac{1}{2}$$ | 4 | 1.4418E−05 | 3.1914E−05 | 2.1346E−05 |
| | 8 | 1.4461E−07 | 5.0111E−07 | 5.1132E−07 |
| | 12 | 8.6483E−09 | 3.7411E−08 | 4.7141E−08 |
| | 16 | 9.2602E−10 | 4.5149E−09 | 6.2164E−09 |

Table 2

Root mean square errors at $$T=1$$ of solutions of (1) with g(x, t) defined in (30) and $$u_0(x)$$ in (31) when $$\epsilon =\mu =\eta =1$$, $$\nu =0.9$$ and $$\tau =1.0{\text {E}}-4$$

| N | $$\alpha =\beta =-\frac{1}{2}$$ | $$\alpha =\beta =0$$ | $$\alpha =\beta =\frac{1}{2}$$ |
|---|---|---|---|
| 4 | 7.3769E−06 | 1.3179E−06 | 4.6224E−06 |
| 8 | 2.2951E−08 | 1.5012E−08 | 1.4056E−08 |
| 12 | 2.6463E−09 | 1.4505E−09 | 1.4894E−09 |
| 16 | 5.9443E−10 | 2.7922E−10 | 2.8997E−10 |

## 5 Conclusion

In summary, this paper proposes a Jacobi spectral collocation method to numerically solve generalized space-fractional Burgers’ equations with initial-boundary conditions. We simplify the formula of the operational matrix introduced in [6] and derive the fractional differentiation matrix for Caputo fractional derivatives. We discretize the equation in the space direction at shifted Jacobi–Gauss–Lobatto collocation points and transform it into a system of ODEs. We then employ the fourth-order Runge–Kutta method to solve the system. The numerical results of two examples agree well with the exact solutions and indicate that this approach solves such problems effectively.

## Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant Nos. 11701358, 11774218). The authors wish to thank Professor Heping Ma and Professor Changpin Li for their valuable discussions.

## References

1. Afsane, S., Mahmoud, B.: Application Jacobi spectral method for solving the time-fractional differential equation. J. Comput. Appl. Math. 399, 49–68 (2018)
2. Bhrawy, A.H., Taha, M.T., Machado, J.A.T.: A review of operational matrices and spectral techniques for fractional calculus. Nonlinear Dyn. 81, 1023–1052 (2015)
3. Biler, P., Funaki, T., Woyczynski, W.A.: Fractal Burgers equations. J. Differ. Equ. 148(1), 9–46 (1998)
4. Canuto, C.G., Hussaini, M.Y., Quarteroni, A., Zang, T.A.: Spectral Methods: Fundamentals in Single Domains. Springer, New York (2010)
5. Chester, W.N.: Resonant oscillations in closed tubes. J. Fluid Mech. 18(1), 44–64 (1964)
6. Doha, E.H., Bhrawy, A.H., Ezz-Eldien, S.S.: A new Jacobi operational matrix: an application for solving fractional differential equations. Appl. Math. Model. 36, 4931–4943 (2012)
7. El-Shahed, M.: Adomian decomposition method for solving Burgers equation with fractional derivative. J. Fract. Calc. 24, 23–28 (2003)
8. Esen, A., Bulut, F., Oruç, Ö.: A unified approach for the numerical solution of time fractional Burgers’ type equations. Eur. Phys. J. Plus 131(4), 1–13 (2016)
9. Guo, B.Y.: Spectral Methods and Their Applications. World Scientific, Singapore (1998)
10. Guo, B.Y., Wang, L.L.: Jacobi interpolation approximations and their applications to singular differential equations. Adv. Comput. Math. 14, 227–276 (2001)
11. Guo, B.Y., Wang, L.L.: Jacobi approximations in non-uniformly Jacobi-weighted Sobolev spaces. J. Approx. Theory 128, 1–41 (2004)
12. Inc, M.: The approximate and exact solutions of the space- and time-fractional Burgers equations with initial conditions by variational iteration method. J. Math. Anal. Appl. 345(1), 476–484 (2008)
13. Kakutani, T., Matsuuchi, K.: Effect of viscosity on long gravity waves. J. Phys. Soc. Jpn. 39, 237–246 (1975)
14. Keller, J.J.: Propagation of simple non-linear waves in gas filled tubes with friction. Z. Angew. Math. Phys. 32(2), 170–181 (1982)
15. Khater, A.H., Temsah, R.S., Hassan, M.M.: A Chebyshev spectral collocation method for solving Burgers’-type equations. J. Comput. Appl. Math. 222, 333–350 (2008)
16. Li, D., Zhang, C., Ran, M.: A linear finite difference scheme for generalized time fractional Burgers equation. Appl. Math. Model. 40(11), 6069–6081 (2016)
17. Ma, H.P., Sun, W.W.: Optimal error estimates of the Legendre–Petrov–Galerkin method for the Korteweg-de Vries equation. SIAM J. Numer. Anal. 39, 1380–1394 (2001)
18. Miksis, M.J., Ting, L.: Effective equations for multiphase flows-waves in a bubbly liquid. Adv. Appl. Mech. 28, 141–260 (1991)
19. Momani, S.: Non-perturbative analytical solutions of the space- and time-fractional Burgers equations. Chaos Soliton. Fract. 28(4), 930–937 (2006)
20.
21. Shen, J., Tang, T.: Spectral and High-Order Methods with Applications. Science Press, Beijing (2006)
22. Shen, J., Tang, T., Wang, L.L.: Spectral Methods: Algorithms, Analysis and Applications. Springer Series in Computational Mathematics. Springer, Berlin (2011)
23. Sugimoto, N.: Burgers equation with a fractional derivative; hereditary effects on nonlinear acoustic waves. J. Fluid Mech. 225, 631–653 (1991)
24. Wu, H., Ma, H.P., Li, H.Y.: Optimal error estimates of the Chebyshev–Legendre method for solving the generalized Burgers equation. SIAM J. Numer. Anal. 41, 659–672 (2003)
25. Yang, Y.B., Ma, H.P.: The Legendre Galerkin–Chebyshev collocation method for generalized space-fractional Burgers equations. J. Numer. Meth. Comput. Appl. 38(3), 236–244 (2017)
26. Zayernouri, M., Karniadakis, G.E.: Fractional spectral collocation method. SIAM J. Sci. Comput. 36(1), A40–A62 (2014)
27. Zhao, T.G., Wu, Y.J., Ma, H.P.: Error analysis of Chebyshev–Legendre pseudo-spectral method for a class of nonclassical parabolic equation. J. Sci. Comput. 52, 588–602 (2012)