1 Introduction

Shortest path (SP) problems (or path planning problems) are modern and fruitful optimization problems with many practical engineering applications, especially in the robotics industry, mobile object tracking, military operations, unmanned underwater vehicles, and surgery planning [1, 12, 23]. In the literature, there are many different methods for solving SP problems. For instance, some novel techniques for path planning in the presence of obstacles were proposed by Latombe [10]. Wang et al. [23] considered the SP problem as a semi-infinite constrained optimization problem. Zamirian et al. [25] proposed a new method based on the parametrization method and fuzzy aggregation for solving these problems for a single rigid, freely moving object in two- and three-dimensional spaces in the presence of obstacles. Tohidi and Samadi [21] utilized the Legendre spectral collocation method for solving the SP problem with boundary and interior barriers. More recently, using Haar wavelets, Mortezaee and Nazemi [13] proposed an approximation method for solving these problems. For other methods for solving shortest path problems, we refer the reader to references [11, 18], among others.

However, most of the existing methods [1, 24, 25] rely on traditional and classical techniques, such as wavelet collocation or measure-theoretical approaches, to solve these problems. These methods usually transform the basic problem into an optimal control one. Such transformations increase the dimension of the associated problem and also weaken the obtained approximate solution. For example, Mortezaee and Nazemi [13] considered the shortest path problem as an optimization problem. Then, by defining some artificial controls, they converted the problem into an optimal control problem. They expressed the control variables and the derivatives of the state variables in terms of Haar wavelets with unknown coefficients. Using properties of Haar wavelets, they obtained a nonlinear programming problem. However, defining the artificial control functions can increase the dimension of the associated problem.

Motivated by the aforementioned reasons, in this paper, we propose the Chebyshev pseudospectral (CPS) method [4, 5, 7, 14, 22] to solve a SP problem. Applying the Chebyshev–Gauss–Lobatto (CGL) nodes, we convert the shortest path problem into a nonlinear programming (NLP) problem. The proposed approach is implemented on some numerical examples and the accuracy of the method is compared with that of some other approaches. The obtained results show that the method is more accurate than the other methods.

The structure of the remainder of the paper is as follows. In Sect. 2, we apply the CPS method to a SP problem. In Sect. 3, we prove the convergence of the method. In Sect. 4, we apply the presented method to some SP problems and compare the results with those of some other methods. Finally, in Sect. 5, we present the conclusions of the paper.

2 CPS method for SP problem

2.1 General form of SP problem

Solving a SP problem means finding an optimal path with the lowest cost from the initial state to the final state. The decision maker can define the cost as distance travelled, energy expended, elapsed time, etc. A general form of an optimal shortest path problem with boundary barriers \(f_1(.)\) and \(f_2(.)\) for a path x(.) can be modelled by the following optimization problem:

$$\begin{aligned} \textit{Minimize}\quad J(x(.))=\int _{0}^{T}h({\dot{x}}(t))dt\nonumber \\ \textit{subject}~\textit{to}~{\left\{ \begin{array}{ll} f_1(t)\le x(t)\le f_2(t),\\ g(x(t))\le 0,\\ 0\le t\le T,\\ x(0)=\alpha ,~x(T)=\beta , \end{array}\right. } \end{aligned}$$
(2.1)

where \(x(.)=(x_1(.),x_2(.),\dots ,x_n(.))\) is a path with continuous derivatives, h(.),  \(f_1(.),\) \(f_2(.),\) and g(.) are continuously differentiable functions and, \(\alpha \) and \(\beta \) are the initial and final states, respectively.

2.2 CPS method

Lagrange interpolation at the CGL nodes is important in approximation theory and especially in the CPS method. The resulting interpolating polynomial provides an approximation that is close to the best polynomial approximation of a continuous function in the maximum norm.

Here, we interpolate the optimal path at the CGL points to gain the best accuracy. The derivatives of these interpolating polynomials at these points are given exactly by a differentiation matrix. A similar approach was utilized in the works [4, 7, 14, 15, 16]. To utilize the CGL nodes, defined on the interval \([-1,1]\), the transformation \(t=\frac{T}{2}(\tau +1)\) must be used. Moreover, we must define

$$\begin{aligned} \begin{aligned} {\left\{ \begin{array}{ll} X(\tau )=x\left( \frac{T(\tau +1)}{2}\right) =x(t),\\ 0\le t\le T, \\ -1\le \tau \le 1. \end{array}\right. } \end{aligned} \end{aligned}$$
(2.2)

So, \({\dot{x}}(t)=\frac{2}{T}{\dot{X}}(\tau ).\) By this transformation, system (2.1) can be converted to the following equivalent problem:

$$\begin{aligned} \begin{aligned} \textit{Minimize}\quad J(X(.))=\frac{T}{2}\int _{-1}^{1}h\left( \frac{2}{T}{\dot{X}}(\tau )\right) d\tau ~~~~~~~~~~~~~~\\ \textit{subject}~\textit{to}~{\left\{ \begin{array}{ll} f_1\left( \frac{T}{2}(\tau +1)\right) \le X(\tau )\le f_2\left( \frac{T}{2}(\tau +1)\right) ,\\ g\left( X(\tau )\right) \le 0,~-1\le \tau \le 1,\\ X(-1)=\alpha , ~X(1)=\beta . \end{array}\right. } \end{aligned} \end{aligned}$$
(2.3)

The CGL nodes on \([-1,1]\) are selected as follows:

$$\begin{aligned} \tau _k=\mathrm{cos}\left( \frac{N-k}{N}\pi \right) ,\quad k=0,1,\ldots ,N, \end{aligned}$$
(2.4)

These nodes are the roots of \((1-\tau ^2)\frac{d}{d\tau }T_N(\tau ),\) where \(T_{N}(\tau )=\mathrm{cos}(N\mathrm{cos}^{-1}(\tau )),~\tau \in [-1,1]\) is the Chebyshev polynomial of degree N. For interpolation, the following Lagrange polynomials are utilized:

$$\begin{aligned} L_k(\tau )=\prod _{j=0,~{j\ne k}}^N\left( \frac{\tau -\tau _j}{\tau _k-\tau _j}\right) ,\quad k=0,1,2,\ldots ,N,~\tau \in [-1,1]. \end{aligned}$$
(2.5)

Note that \(L_{k}(\tau _{k})=1,~k=0,1,\ldots ,N\) and \(L_{k}(\tau _{j})=0,\) for all \(k\ne j.\)
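As a quick numerical illustration (ours, not part of the original presentation), the nodes (2.4) and the cardinal property of the Lagrange polynomials (2.5) can be checked directly; the order \(N=8\) below is an arbitrary choice:

```python
import numpy as np

def cgl_nodes(N):
    # Chebyshev-Gauss-Lobatto nodes of Eq. (2.4): tau_k = cos((N - k) * pi / N)
    k = np.arange(N + 1)
    return np.cos((N - k) * np.pi / N)

def lagrange_basis(tau, k, t):
    # the k-th Lagrange polynomial of Eq. (2.5), evaluated at t
    terms = [(t - tau[j]) / (tau[k] - tau[j])
             for j in range(len(tau)) if j != k]
    return np.prod(terms, axis=0)

N = 8                           # arbitrary illustrative order
tau = cgl_nodes(N)
# the CGL nodes are the extrema of T_N on [-1, 1], so |T_N(tau_k)| = 1
TN = np.cos(N * np.arccos(np.clip(tau, -1.0, 1.0)))
print(np.allclose(np.abs(TN), 1.0))           # True
# cardinal property: L_k(tau_j) equals 1 if j = k and 0 otherwise
L3 = lagrange_basis(tau, 3, tau)
print(np.allclose(L3, np.eye(N + 1)[3]))      # True
```

The second check is exact up to rounding: for \(j\ne k\) one factor of the product in (2.5) vanishes identically.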

Now, the Lagrange interpolation for the optimal solution of problem (2.3) can be defined as follows:

$$\begin{aligned} X(\tau )\simeq X^N(\tau )=\sum _{l=0}^{N}{\bar{x}}_l L_{l}(\tau ), \end{aligned}$$
(2.6)

where N is a sufficiently large integer. Note that

$$\begin{aligned} X(\tau _{k})\simeq X^N(\tau _k)={\bar{x}}_k. \end{aligned}$$
(2.7)

Also,

$$\begin{aligned} X^\prime (\tau _{k})\simeq \sum _{l=0}^{N}{\bar{x}}_l D_{kl},\quad k=0,1,\ldots ,N, \end{aligned}$$
(2.8)

where

$$\begin{aligned} D_{kl}=L^{\prime }_{l}(\tau _{k})={\left\{ \begin{array}{ll} \frac{\mu _{k}}{\mu _{l}}(-1)^{k+l}\frac{1}{\tau _k-\tau _l},&{}\quad \textit{if}~k\ne l\\ -\frac{\tau _{k}}{2-2\tau _{k}^{2}},&{}\quad \textit{if}~1\le k=l\le N-1\\ -\frac{\left( 2N^2+1\right) }{6},&{}\quad \textit{if}~k=l=0\\ \frac{2N^2+1}{6},&{} \quad \textit{if} ~k=l=N, \end{array}\right. } \end{aligned}$$
(2.9)

and \(\mu _0 = \mu _N = 2\) and \(\mu _k=1\) for \(k=1,2,\ldots ,N-1\) (for details of the above relations, we refer to [4, 5, 20]). To approximate the integral in the objective function of problem (2.3), we use the Clenshaw–Curtis quadrature formula (see [3, 17]) which is as follows:

$$\begin{aligned} \int _{-1}^{1}H(\tau )d\tau \simeq \sum _{j=0}^N w_jH(\tau _j), \end{aligned}$$
(2.10)

where \(w_j,~j=0,1,\ldots ,N\) are the quadrature weights; for even N, they are

$$\begin{aligned} {\left\{ \begin{array}{ll}w_0=w_N=\frac{1}{N^2-1},\\ w_s=w_{N-s}=\frac{4}{N}{\sum \limits _{j=0}^{N/2}}''\left( \frac{1}{1-4j^2}\right) \mathrm{cos}\left( \frac{2\pi js}{N}\right) , \quad s=1,\ldots ,\frac{N}{2}, \end{array}\right. } \end{aligned}$$
(2.11)

and, for odd N:

$$\begin{aligned} {\left\{ \begin{array}{ll}w_0=w_N=\frac{1}{N^2},\\ w_s=w_{N-s}=\frac{4}{N}{\sum \limits _{j=0}^{(N-1)/2}}''\left( \frac{1}{1-4j^2}\right) \mathrm{cos}\left( \frac{2\pi js}{N}\right) , \quad s=1,2,\ldots ,\frac{N-1}{2}. \end{array}\right. } \end{aligned}$$
(2.12)

In relations (2.11) and (2.12), the double prime means that the first and last terms of the summation are halved.
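The matrix (2.9) and the weights (2.11) can be assembled and sanity-checked in a few lines. The following self-contained sketch (ours, with the arbitrary even order \(N=8\)) verifies that D differentiates polynomials of degree at most N exactly and that the weights reproduce exact integrals of low-degree polynomials:

```python
import numpy as np

def cgl_nodes(N):
    k = np.arange(N + 1)
    return np.cos((N - k) * np.pi / N)      # Eq. (2.4), increasing on [-1, 1]

def cheb_diff_matrix(N):
    # differentiation matrix of Eq. (2.9) for the node ordering (2.4)
    tau = cgl_nodes(N)
    mu = np.ones(N + 1); mu[0] = mu[N] = 2.0
    D = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        for l in range(N + 1):
            if k != l:
                D[k, l] = (mu[k] / mu[l]) * (-1.0) ** (k + l) / (tau[k] - tau[l])
    D[0, 0] = -(2 * N**2 + 1) / 6.0
    D[N, N] = (2 * N**2 + 1) / 6.0
    for k in range(1, N):
        D[k, k] = -tau[k] / (2.0 * (1.0 - tau[k] ** 2))
    return tau, D

def cc_weights(N):
    # Clenshaw-Curtis weights of Eq. (2.11); this sketch assumes N even
    w = np.zeros(N + 1)
    w[0] = w[N] = 1.0 / (N**2 - 1)
    for s in range(1, N // 2 + 1):
        terms = np.array([np.cos(2 * np.pi * j * s / N) / (1 - 4 * j**2)
                          for j in range(N // 2 + 1)])
        terms[0] *= 0.5; terms[-1] *= 0.5    # the "double prime" convention
        w[s] = w[N - s] = (4.0 / N) * terms.sum()
    return w

N = 8
tau, D = cheb_diff_matrix(N)
w = cc_weights(N)
# D is exact on polynomials of degree <= N: d/dtau (tau^3) = 3 tau^2
print(np.allclose(D @ tau**3, 3 * tau**2))   # True
# the weights integrate low-degree polynomials exactly: int_{-1}^{1} tau^2 = 2/3
print(np.isclose(w @ tau**2, 2.0 / 3.0))     # True
print(np.isclose(w.sum(), 2.0))              # True
```

The last check (the weights sum to the length of \([-1,1]\)) is the quadrature applied to a constant.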

Lemma 2.1

[3, 17] Let \(\tau _0,\tau _1,\ldots ,\tau _N\) be the CGL nodes, and \(w_k,~k=0,1,2,\ldots ,N\) be defined by relation (2.11) (or (2.12)). Suppose that H(.) is a continuous function. Then

$$\begin{aligned} \int _{-1}^1 H(\tau )d\tau =\lim _{N\rightarrow \infty }\sum _{k=0}^Nw_kH(\tau _k). \end{aligned}$$

Now, using relations (2.7), (2.8) and (2.10), we can approximate the SP problem (2.3) by the following NLP problem:

$$\begin{aligned} \begin{aligned} \textit{Minimize}\quad J_N=\frac{T}{2}\sum _{k=0}^{N}w_kh\left( \frac{2}{T}\sum _{l=0}^N\bar{x}_lD_{kl}\right) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \textit{subject}~\textit{to}~{\left\{ \begin{array}{ll} f_1\left( \frac{T}{2}(\tau _k +1)\right) \le \bar{x}_k\le f_2\left( \frac{T}{2}(\tau _k+1)\right) ,\quad k=0,1,\ldots ,N,\\ g(\bar{x}_k)\le 0,\quad k=0,1,\ldots ,N,\\ \bar{x}_0=\alpha ,~\bar{x}_N=\beta . \end{array}\right. } \end{aligned} \end{aligned}$$
(2.13)

By solving the NLP problem (2.13), we obtain a pointwise approximation of the optimal path as:

$$\begin{aligned} x^*(t_k)\simeq \bar{x}^*_k,\quad k=0,1,\ldots ,N \end{aligned}$$
(2.14)

where \(t_k=\frac{T}{2}(\tau _k+1).\) Also, we have a continuous approximation as

$$\begin{aligned} x^*(t)\simeq \sum _{k=0}^N\bar{x}^*_kL_k\left( \frac{2}{T}t-1\right) ,\quad t\in [0,T]. \end{aligned}$$
(2.15)
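To make the discretization concrete, the following sketch assembles the NLP (2.13) for a toy one-dimensional instance and solves it with a generic NLP solver. The instance (arc-length cost \(h(v)=\sqrt{1+v^2}\), \(T=1\), endpoints \(\alpha =0\), \(\beta =1\), lower barrier \(f_1(t)=3.6t(1-t)\), no g constraint, \(N=12\)) is our own illustrative choice, not an example from the paper, and SciPy's SLSQP stands in for the MATLAB fmincon routine used in Sect. 4:

```python
import numpy as np
from scipy.optimize import minimize

def cgl_setup(N):
    # CGL nodes (2.4), differentiation matrix (2.9), Clenshaw-Curtis weights (2.11)
    k = np.arange(N + 1)
    tau = np.cos((N - k) * np.pi / N)
    mu = np.ones(N + 1); mu[0] = mu[N] = 2.0
    D = np.zeros((N + 1, N + 1))
    for a in range(N + 1):
        for b in range(N + 1):
            if a != b:
                D[a, b] = (mu[a] / mu[b]) * (-1.0) ** (a + b) / (tau[a] - tau[b])
    D[0, 0] = -(2 * N**2 + 1) / 6.0
    D[N, N] = (2 * N**2 + 1) / 6.0
    for a in range(1, N):
        D[a, a] = -tau[a] / (2.0 * (1.0 - tau[a] ** 2))
    w = np.zeros(N + 1)
    w[0] = w[N] = 1.0 / (N**2 - 1)
    for s in range(1, N // 2 + 1):
        terms = np.array([np.cos(2 * np.pi * j * s / N) / (1 - 4 * j**2)
                          for j in range(N // 2 + 1)])
        terms[0] *= 0.5; terms[-1] *= 0.5
        w[s] = w[N - s] = (4.0 / N) * terms.sum()
    return tau, D, w

N, T, alpha, beta = 12, 1.0, 0.0, 1.0
tau, D, w = cgl_setup(N)
t = T * (tau + 1.0) / 2.0
f1 = 3.6 * t * (1.0 - t)            # lower barrier, height 0.9 at t = 1/2

def J(interior):
    x = np.concatenate(([alpha], interior, [beta]))   # x_0 = alpha, x_N = beta
    xdot = (2.0 / T) * (D @ x)                        # relation (2.8)
    return (T / 2.0) * np.sum(w * np.sqrt(1.0 + xdot**2))  # objective of (2.13)

bounds = [(f1[k], None) for k in range(1, N)]          # f1(t_k) <= x_k
x0 = np.linspace(alpha, beta, N + 1)[1:-1]             # straight-line guess
res = minimize(J, x0, bounds=bounds, method="SLSQP")
print(round(res.fun, 4))
```

Since any feasible path must rise above the straight line joining the endpoints, the computed length exceeds the unconstrained value \(\sqrt{2}\); the continuous approximation (2.15) is then recovered by interpolating the optimal nodal values.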

The feasibility of the NLP problem (2.13) and the convergence of the approximate optimal path are given in the next section.

3 The feasibility and convergence analysis

In this section, we analyze the feasibility of the NLP problem (2.13) and the convergence of the obtained approximate optimal path.

Here, assume that \(w^{m,p}\) is the Sobolev space on \([-1,1]\), which consists of all functions \(\phi :[-1,1]\rightarrow {\mathbb {R}}^n\) such that \(\phi ^{(j)}(.),~j=0,1,2,\ldots ,m\) lie in the \(L^p\) space, with the following norm:

$$\begin{aligned} \Vert \phi \Vert _{w^{m,p}}=\sum _{j=0}^m\left( \int _{-1}^1\left\| \phi ^{(j)}(t)\right\| ^pdt\right) ^{\frac{1}{p}}. \end{aligned}$$

In this section, we need the following lemma on Sobolev spaces.

Lemma 3.1

[2] For any given function \(\phi \in w^{m,\infty }\) there is a polynomial \(p_N(.)\) of degree N or less, such that

$$\begin{aligned}\Vert \phi (\tau )-p_{_N}(\tau )\Vert \le cc_0N^{-m},\end{aligned}$$

where c is a constant independent of N and \(c_0=\Vert \phi \Vert _{w^{m,\infty }}.\)

Remark 3.2

We note that, for any function \(\phi (.)\) with finite \(L^\infty \) norm, the polynomial \(p_{_N}(.)\) of degree at most N that minimizes \(\Vert \phi (.)-p_{_N}(.)\Vert _{L^{\infty }}\) is called the \(N\mathrm{th}\)-order best polynomial approximation of \(\phi (.).\)

Now, we rewrite the shortest path problem (2.3) as follows:

$$\begin{aligned} \begin{aligned} \textit{Minimize}\quad J(X(.))=\frac{T}{2}\int _{-1}^{1}h\left( \frac{2}{T}{\dot{X}}(\tau )\right) d\tau \\ \textit{subject}~\textit{to}~{\left\{ \begin{array}{ll} G\left( X(\tau )\right) \le 0,\quad -1\le \tau \le 1,\\ X(-1)=\alpha ,~X(1)=\beta , \end{array}\right. } \end{aligned} \end{aligned}$$
(3.1)

where

$$\begin{aligned}G(X(\tau ))=\left( f_1\left( \frac{T}{2}(\tau +1)\right) - X(\tau ), X(\tau )- f_2\left( \frac{T}{2}(\tau +1)\right) , g(X(\tau ))\right) .\end{aligned}$$

Also, we rewrite the NLP problem (2.13) as follows:

$$\begin{aligned} \begin{aligned} \textit{Minimize}\quad J_N(\bar{x})=\frac{T}{2}\sum _{k=0}^{N}w_kh\left( \frac{2}{T}\sum _{l=0}^N\bar{x}_lD_{kl}\right) \\ \textit{subject}~\textit{to}~{\left\{ \begin{array}{ll} G(\bar{x}_k)\le 0,\quad k=0,1,\ldots ,N,\\ \bar{x}_0=\alpha ,~\bar{x}_N=\beta , \end{array}\right. } \end{aligned} \end{aligned}$$
(3.2)

where \(\bar{x}=(\bar{x}_0,\bar{x}_1,\ldots ,\bar{x}_N).\) To guarantee feasibility of NLP problem (3.2), we must relax its constraints and rewrite them as follows:

$$\begin{aligned} \begin{aligned} \textit{Minimize}\quad J_N(\bar{x})=\frac{T}{2}\sum _{k=0}^{N}w_kh\left( \frac{2}{T}\sum _{l=0}^N\bar{x}_lD_{kl}\right) \\ \textit{subject}~\textit{to}~{\left\{ \begin{array}{ll} G(\bar{x}_k)\le (N-1)^{\frac{3}{2}-m}\cdot \mathbf{1},\quad k=0,1,\ldots ,N,\\ \left\| \bar{x}_0-\alpha \right\| \le (N-1)^{\frac{3}{2}-m},\\ \left\| \bar{x}_N-\beta \right\| \le (N-1)^{\frac{3}{2}-m}, \end{array}\right. } \end{aligned} \end{aligned}$$
(3.3)

where \(m\ge 2\) is given, \(\mathbf{1} = (1, 1,\ldots , 1)\) and the dot denotes scalar multiplication. The above relaxation is based on Polak's theory of consistent approximation (see [19]). We note that, as N tends to infinity, the constraints of problem (3.3) approach those of problem (3.2).

Remark 3.3

Since any feasible solution X(.) of problem (3.1) has a continuous derivative, there are compact sets \(\Omega _1\subseteq {\mathbb {R}}^n\) and \(\Omega _2\subseteq {\mathbb {R}}^n\) such that

$$\begin{aligned}X(\tau )\in \Omega _1,~{\dot{X}}(\tau )\in \Omega _2,~~\tau \in [-1,1].\end{aligned}$$

Moreover, since the functions h(.) and G(.) are continuously differentiable, there are constants \(M_1\) and \(M_2\) such that for all such \({\tilde{X}}(.)\) and \(\bar{X}(.):\)

$$\begin{aligned} \left| h\left( \frac{2}{T}\dot{{\tilde{X}}}(\tau )\right) -h\left( \frac{2}{T}\dot{\bar{X}}(\tau )\right) \right|\le & {} \frac{2M_1}{T}\left\| \dot{{\tilde{X}}}(\tau )-\dot{\bar{X}}(\tau )\right\| ,\end{aligned}$$
(3.4)
$$\begin{aligned} \left\| G\left( {\tilde{X}}(\tau )\right) -G\left( \bar{X}(\tau )\right) \right\|\le & {} M_2\left\| {\tilde{X}}(\tau )-\bar{X}(\tau )\right\| .\end{aligned}$$
(3.5)

Theorem 3.4

(Feasibility) Let \(X^*(.)\) be an optimal solution of the SP problem (3.1). Then, there is a positive integer K such that for any \(N\ge K,\) the NLP problem (3.3) has a feasible solution \(\bar{x}=(\bar{x}_0,\bar{x}_1,\ldots ,\bar{x}_N).\) Moreover, the feasible solution satisfies

$$\begin{aligned} \left\| X^*(\tau _k)-\bar{x}_k\right\| _{\infty }\le L(N-1)^{1-m},\quad k=0,1,2,\ldots ,N\end{aligned}$$
(3.6)

where \(\tau _k, k=0,1,\ldots ,N\) are the CGL nodes and L is a positive constant independent of N.

Proof

Let p(.) be the \((N-1)\mathrm{th}\) order best approximation of \({\dot{X}}^*(.)\) in the norm of \(L^\infty .\) By Lemma 3.1, there is a constant \(c_1\) independent of N such that

$$\begin{aligned} \left\| \dot{X}^*(\tau )-p(\tau )\right\| _{L^\infty }\le c_1(N-1)^{1-m}. \end{aligned}$$
(3.7)

Define

$$\begin{aligned} X^N(\tau )=X^*(-1)+\int _{-1}^\tau p(s)ds, \end{aligned}$$
(3.8)

and

$$\begin{aligned} \bar{x}_k=X^N(\tau _k),\quad k=0,1,\ldots ,N. \end{aligned}$$
(3.9)

We show that \(\bar{x}=(\bar{x}_0,\bar{x}_1,\ldots ,\bar{x}_N)\) is a feasible solution for problem (3.3). By (3.7)–(3.9), for all \(\tau \in [-1,1],\) we have:

$$\begin{aligned} \left\| X^*(\tau )-X^N(\tau )\right\|= & {} \left\| \int _{-1}^\tau \left( {\dot{X}}^*(s)-p(s)\right) ds\right\| \nonumber \\\le & {} \int _{-1}^{\tau }\left\| {\dot{X}}^*(s)-p(s)\right\| ds\nonumber \\\le & {} c_1(N-1)^{1-m}\int _{-1}^{\tau }ds\nonumber \\\le & {} (\tau +1)c_1(N-1)^{1-m}\le 2c_1(N-1)^{1-m}. \end{aligned}$$
(3.10)

Now, by relation (3.5) (i.e., the Lipschitz property of the function G(.)) and (3.10), we get

$$\begin{aligned} \left\| G\left( X^*(\tau )\right) -G\left( X^{N}(\tau )\right) \right\| \le M_2\left\| X^*(\tau )-X^{N}(\tau )\right\| \le 2c_1M_2(N-1)^{1-m},\nonumber \\ \end{aligned}$$
(3.11)

where \(M_2\) is the Lipschitz constant of function G(.) which is independent of N. Hence

$$\begin{aligned} G(\bar{x}_k)=G\left( X^N(\tau _k)\right) \le G\left( X^*(\tau _k)\right) +2c_1M_2(N-1)^{1-m}\cdot \mathbf{1}\le 2c_1M_2(N-1)^{1-m}\cdot \mathbf{1},\nonumber \\ \end{aligned}$$
(3.12)

where \(\mathbf{1}=(1,1,\ldots ,1)\) and the dot denotes scalar multiplication. Also, from the initial and final point conditions of problem (3.3) we have:

$$\begin{aligned}\left\| \left( X^*(-1)-\alpha \right) -\left( \bar{x}_0-\alpha \right) \right\| =\left\| X^*(-1)-\bar{x}_0\right\| \le 2c_1(N-1)^{1-m}.\end{aligned}$$

Hence

$$\begin{aligned} \left\| \bar{x}_0-\alpha \right\| \le \left\| X^*(-1)-\alpha \right\| +2c_1(N-1)^{1-m}=2c_1(N-1)^{1-m}. \end{aligned}$$
(3.13)

By a similar procedure we obtain

$$\begin{aligned} \left\| \bar{x}_N-\beta \right\| \le 2c_1(N-1)^{1-m}. \end{aligned}$$
(3.14)

So, if we select K such that \(\max \{2c_1,2c_1M_2\}\le (N-1)^{\frac{1}{2}}\) for all \(N\ge K,\) then by (3.12)–(3.14) we achieve

$$\begin{aligned} {\left\{ \begin{array}{ll} G\left( \bar{x}_k\right) \le (N-1)^{\frac{3}{2}-m}\cdot \mathbf{1},\quad k=0,1,2,\ldots ,N\\ \left\| \bar{x}_0-\alpha \right\| \le (N-1)^{\frac{3}{2}-m}\\ \Vert \bar{x}_N-\beta \Vert \le (N-1)^{\frac{3}{2}-m}. \end{array}\right. } \end{aligned}$$

Hence, \(\bar{x}=(\bar{x}_0,\bar{x}_1,\ldots ,\bar{x}_N)\) is a feasible solution for the \(\textit{NLP}\) problem (3.3). Finally, by selecting \(L=2c_1,\) relation (3.6) follows from (3.10).\(\square \)

Remark 3.5

From relation (3.10), both \(X^*(\tau _k)\) and \(\bar{x}_k\) (for \(k=0,1,2,\ldots ,N)\) are contained in some compact set. Hence, the feasible set of the NLP problem (3.3) is compact. Therefore, the existence of an optimal solution of the NLP problem (3.3) is guaranteed by the continuity of the cost function \(J_N(.).\)

Now, we show that the sequence of optimal solutions of the NLP problem (3.3) converges to an optimal solution of the SP problem (3.1). The result follows [8, 9] and is based on Polak's theory of consistent approximation [19]. Let \(\bar{x}^*=(\bar{x}_0^*,\bar{x}_1^*,\ldots ,\bar{x}_N^*)\) be an optimal solution to the problem (3.3). Define

$$\begin{aligned}X_N^*(\tau )=\sum _{k=0}^N\bar{x}_k^*L_k(\tau ),\quad \tau \in [-1,1],\end{aligned}$$

where \(L_k(.),~k=0,1,\ldots ,N\) are the Lagrange interpolating polynomials. Hence, we have a sequence of direct solutions \(\{\bar{x}^*=(\bar{x}_0^*,\bar{x}_1^*,\ldots ,\bar{x}_N^*)\}_{N=K}^{\infty }\) and their sequence of interpolating polynomials \(\{X_N^*(.)\}_{N=K}^\infty .\)

Assumption I It is assumed that the sequence \(\{(\bar{x}_0^*,{\dot{X}}_N^*(.))\}_{N=K}^{\infty }\) has a subsequence that uniformly converges to \((x_0^\infty ,q(.))\) where q(.) is a continuous function.

Theorem 3.6

(Convergence) Let \(\{\bar{x}^*=(\bar{x}_0^*,\bar{x}_1^*,\ldots ,\bar{x}_N^*)\}_{N=K}^{\infty }\) be a sequence of optimal solutions of the NLP problem (3.3) and let \(\{X_N^{*}(.)\}_{N=K}^\infty \) be their interpolating polynomial sequence satisfying Assumption I. Then,

$$\begin{aligned} X^*(\tau )=x_0^\infty +\int _{-1}^{\tau }q(s)ds,~-1\le \tau \le 1, \end{aligned}$$
(3.15)

is an optimal solution to the SP problem (3.1).

Proof

By Assumption I, there is a subsequence \(\{{\dot{X}}_{N_i}^*(.)\}_{i=1}^\infty \) of sequence \(\{\dot{X}_N^*(.)\}_{N=K}^\infty \) such that \(\lim _{i\rightarrow \infty }N_i=\infty \) and

$$\begin{aligned} \lim _{i\rightarrow \infty }{\dot{X}}_{N_i}^*(.)=q(.). \end{aligned}$$
(3.16)

Hence, by (3.15) and (3.16), we get

$$\begin{aligned} \lim _{i\rightarrow \infty }X_{N_i}^*(.)=X^*(.). \end{aligned}$$
(3.17)

The remaining part of the proof has three steps. In Step 1, we show that \(X^*(.)\) is a feasible solution to problem (3.1). In Step 2, we prove the convergence of the cost function \(J_{N_i}(\bar{x}^*)\) to the continuous function \(J(X^*(.)).\) Finally, in Step 3, we show that \(X^*(.)\) is an optimal solution of problem (3.1).

Step 1 We show that \(X^*(.)\) satisfies the constraints of problem (3.1). Assume that \(X^*\) does not satisfy the first constraint. Then, there is a time \({\bar{t}}\in [-1,1]\) such that

$$\begin{aligned} G\left( X^*(\bar{t})\right) >0. \end{aligned}$$
(3.18)

Since the CGL nodes \(\tau _k, k=0,1,\ldots \) are dense in \([-1,1],\) i.e. the closure of \(\{\tau _k\}_{k=0}^\infty \) is \([-1,1]\) (see [6]), there exists a sequence \(\{k_{N_i}\}_{i=1}^\infty \) such that \(0<k_{N_i}<N_i\) and \(\lim _{i\rightarrow \infty }\tau _{k_{N_i}}=\bar{t}.\) Thus by continuity of function G(.), we get

$$\begin{aligned} \lim _{i\rightarrow \infty }G\left( X_{N_i}^* \left( \tau _{k_{N_i}}\right) \right) =G \left( \lim _{i\rightarrow \infty }X_{N_i}^*\left( \tau _{k_{N_i}}\right) \right) =G\left( X^*(\bar{t})\right) >0. \end{aligned}$$
(3.19)

Now, since \(X_{N_i}^*\) is the interpolating polynomial, we have

$$\begin{aligned} X_{N_i}^*\left( \tau _{k_{N_i}}\right) =\sum _{j=0}^{N_i}\bar{x}_j^*L_j\left( \tau _{k_{N_i}}\right) , \end{aligned}$$
(3.20)

and hence by the first constraint of (3.3), the following holds:

$$\begin{aligned} \lim _{i\rightarrow \infty }G\left( X^*_{N_i}\left( \tau _{k_{N_i}}\right) \right) \le \lim _{i\rightarrow \infty }(N_i-1)^{\frac{3}{2}-m}=0. \end{aligned}$$
(3.21)

The inequality (3.21) contradicts inequality (3.19). Thus \(X^*(.)\) satisfies the first constraint of problem (3.1). Now, for the initial and final conditions, by the constraints of problem (3.3) we have

$$\begin{aligned} 0\le \left\| X^*(-1)-\alpha \right\|= & {} \left\| \lim _{i\rightarrow \infty }\left( X_{N_i}^*(-1)-\alpha \right) \right\| =\lim _{i\rightarrow \infty }\left\| \left( X_{N_i}^*(-1)-\alpha \right) \right\| \\= & {} \lim _{i\rightarrow \infty }\left\| \bar{x}_0^*-\alpha \right\| \le \lim _{i\rightarrow \infty }(N_i-1)^{\frac{3}{2}-m}=0. \end{aligned}$$

Thus \(X^*(-1)=\alpha .\) In a similar manner, we can show that \(X^*(1)=\beta .\) Hence \(X^*(.)\) satisfies the constraints of problem (3.1).

Step 2 In this step we show that

$$\begin{aligned} \lim _{i\rightarrow \infty }J_{N_i}\left( \bar{x}^*\right) =J\left( X^*(.)\right) , \end{aligned}$$
(3.22)

where

$$\begin{aligned} J_{N_i}\left( \bar{x}^*\right) =\frac{T}{2}\sum _{k=0}^{N_i}w_k h\left( \frac{2}{T}\sum _{l=0}^{N_i}\bar{x}_l^*D_{kl}\right) , \end{aligned}$$

and

$$\begin{aligned} J\left( X^*(.)\right) =\frac{T}{2}\int _{-1}^1h\left( \frac{2}{T}{\dot{X}}^*(\tau )\right) d\tau . \end{aligned}$$

By relations (3.15)–(3.17), we get

$$\begin{aligned} \lim _{i\rightarrow \infty }\left\| {\dot{X}}_{N_i}^*(\tau _k)-{\dot{X}}^*(\tau _k)\right\| =\left\| \lim _{i\rightarrow \infty }\left( {\dot{X}}_{N_i}^*(\tau _k)-{\dot{X}}^*(\tau _k)\right) \right\| =\left\| q(\tau _k)-q(\tau _k)\right\| =0.\nonumber \\ \end{aligned}$$
(3.23)

Also, by relation (3.4), we have:

$$\begin{aligned} \left\| h\left( \frac{2}{T}\dot{X}^*(\tau )\right) -h\left( \frac{2}{T}\sum _{l=0}^{N_i}\bar{x}_l^* D_{kl}\right) \right\| _{\infty }\le \frac{2M_1}{T}\left\| {\dot{X}}^*(\tau ) -\sum _{l=0}^{N_i}\bar{x}_l^* D_{kl}\right\| _\infty .\qquad \quad \end{aligned}$$
(3.24)

Furthermore, \(h(\frac{2}{T}{\dot{X}}^*(.))\) is continuous on \([-1,1]\). Thus, by Lemma 2.1 we have

$$\begin{aligned} \int _{-1}^1h\left( \frac{2}{T}{\dot{X}}^*(\tau )\right) d\tau =\lim _{i\rightarrow \infty }\sum _{k=0}^{N_i}w_kh\left( \frac{2}{T}{\dot{X}}^*(\tau _k)\right) . \end{aligned}$$
(3.25)

Therefore,

$$\begin{aligned}&\frac{T}{2}\int _{-1}^1h\left( \frac{2}{T}{\dot{X}}^*(\tau )\right) d\tau \nonumber \\&\quad =\frac{T}{2}\lim _{i\rightarrow \infty }\left( \sum _{k=0}^{N_i}w_k h\left( \frac{2}{T}\sum _{l=0}^{N_i}\bar{x}_l^* D_{kl}\right) +\sum _{k=0}^{N_i}w_k\left( h\left( \frac{2}{T}{\dot{X}}^*(\tau _k)\right) -h\left( \frac{2}{T}\sum _{l=0}^{N_i}\bar{x}_l^* D_{kl}\right) \right) \right) .\nonumber \\ \end{aligned}$$
(3.26)

From the uniform convergence in (3.23) and (3.24) and the properties of the weights \(w_k\), defined by (2.11)–(2.12), we obtain

$$\begin{aligned} \begin{aligned}&\lim _{i\rightarrow \infty }\left\| \sum _{k=0}^{N_i}w_k\left( h\left( \frac{2}{T}{\dot{X}}^*(\tau _k)\right) -h\left( \frac{2}{T}\sum _{l=0}^{N_i}\bar{x}_l^* D_{kl}\right) \right) \right\| \\&\quad \le \lim _{i\rightarrow \infty }\frac{2M_1}{T} \sum _{k=0}^{N_i}w_k\left\| {\dot{X}}^*(\tau _k)-\sum _{l=0}^{N_i}\bar{x}_l^* D_{kl}\right\| \\&\quad = \lim _{i\rightarrow \infty }\frac{2M_1}{T}\sum _{k=0}^{N_i}w_k\left\| {\dot{X}}^*(\tau _k)-{\dot{X}}_{N_i}^*(\tau _k)\right\| =0. \end{aligned} \end{aligned}$$
(3.27)

By (3.26) and (3.27) we achieve equation (3.22).

Step 3 Let \(X^{**}(.)\) be an optimal solution of problem (3.1) with the property \(X^{**}(.)\in w^{m,\infty },~m\ge 2.\) By the same discussion as in Step 2 and Theorem 3.4, there exists a sequence of feasible solutions \(\{{\tilde{x}}=({\tilde{x}}_0,{\tilde{x}}_1,\ldots ,{\tilde{x}}_N)\}_{N=K}^\infty \) of the NLP problem (3.3) such that

$$\begin{aligned} \lim _{N\rightarrow \infty }J_{N}({\tilde{x}})=J\left( X^{**}(.)\right) . \end{aligned}$$
(3.28)

Now, from optimality of \(X^{**}(.)\) and \(\bar{x}^*=(\bar{x}_0^*,\bar{x}_1^*,\ldots ,\bar{x}_N^*),\) we get

$$\begin{aligned} J\left( X^{**}(.)\right) \le J\left( X^*(.)\right)= & {} \lim _{i\rightarrow \infty }J_{N_i}\left( \bar{x}^*\right) \le \lim _{i\rightarrow \infty }J_{N_i}\left( {\tilde{x}}\right) \nonumber \\= & {} \lim _{N\rightarrow \infty }J_N({\tilde{x}})=J\left( X^{**}(.)\right) . \end{aligned}$$
(3.29)

Hence,

$$\begin{aligned}J\left( X^{**}(.)\right) =J\left( X^*(.)\right) .\end{aligned}$$

Therefore, \(X^*(.)\) is an optimal solution of the SP problem (3.1).\(\square \)

Table 1 Comparison of the presented approach with other approaches, for Example 4.1

4 Examples

In this section, we apply our proposed method to some SP problems to illustrate its efficiency compared with the other methods. The obtained NLP problems are solved in MATLAB by using the fmincon function.

Example 4.1

As the first example, we consider the following one-dimensional SP problem with lower and upper boundary barriers:

Assuming different values of N, we solve the corresponding NLP problem (2.13) for this problem. In Table 1, the approximate objective values \(J_N^*,\) for \(N=20,22,24,\) are given. Moreover, in this table the optimal objective values of this problem obtained by the approach of Zamirian et al. [24] and by the Legendre spectral method [21] are given. As one can see, for \(N=24,\) we reached \(J^*_N=3.27691,\) while the optimal value \(J^*\) of the measure approach is 3.4191 and the optimal value of the Legendre approach for \(N=25\) is \(J^*_N=3.2772.\) These results confirm that our approach finds a shorter path than the two other mentioned approaches. In Fig. 1, the graph of this optimal path for \(N=20\) is given.

Fig. 1
figure 1

The graph of optimal path in Example 4.1 for \(N=20\)

Example 4.2

As the second example, we consider the problem of finding a SP in the presence of five stationary circular obstacles in the Euclidean space \({\mathbb {R}}^2\):

Let us, at first, assume that the centers and radii of the obstacles are as follows:

$$\begin{aligned} \begin{aligned} (\alpha _1,\beta _1)=(0.5,0.5),\quad R_1=\frac{1}{8}\\ (\alpha _2,\beta _2)=(0.7,0.8),\quad R_2=\frac{1}{8}\\ (\alpha _3,\beta _3)=(0.45,0.2),\quad R_3=\frac{1}{8}\\ (\alpha _4,\beta _4)=(0.3,0.7),\quad R_4=\frac{1}{8}\\ (\alpha _5,\beta _5)=(0.75,0.2),\quad R_5=\frac{1}{8} \end{aligned} \end{aligned}$$

We solve the corresponding nonlinear programming problem (2.13) of this problem for \(N=40.\) In Table 2, the approximate objective value of this problem is compared with the undetermined coefficient (UC) approach [23] and the measure approach [1]. From this table, it is obvious that our method is more accurate. Also, in Fig. 2, one can see the approximate optimal path for \(N=40.\)

Table 2 Comparison of the presented approach with other approaches, for Example 4.2
Fig. 2
figure 2

Approximate optimal path for Example 4.2 with \(N=40\) (case 1)
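A minimal sketch of this case can be reproduced with a generic solver. The obstacle data below are those listed above, while the endpoints (0, 0) and (1, 1), the horizon \(T=1\), the arc-length cost, and the coarser resolution \(N=16\) are our own assumptions (the paper's exact problem statement and \(N=40\) run are not repeated here). Each circular obstacle enters (2.13) through a constraint of the form \(g_i(x)=R_i^2-\Vert x-c_i\Vert ^2\le 0\) at every node:

```python
import numpy as np
from scipy.optimize import minimize

def cgl_setup(N):
    # CGL nodes (2.4), differentiation matrix (2.9), Clenshaw-Curtis weights (2.11)
    k = np.arange(N + 1)
    tau = np.cos((N - k) * np.pi / N)
    mu = np.ones(N + 1); mu[0] = mu[N] = 2.0
    D = np.zeros((N + 1, N + 1))
    for a in range(N + 1):
        for b in range(N + 1):
            if a != b:
                D[a, b] = (mu[a] / mu[b]) * (-1.0) ** (a + b) / (tau[a] - tau[b])
    D[0, 0] = -(2 * N**2 + 1) / 6.0
    D[N, N] = (2 * N**2 + 1) / 6.0
    for a in range(1, N):
        D[a, a] = -tau[a] / (2.0 * (1.0 - tau[a] ** 2))
    w = np.zeros(N + 1)
    w[0] = w[N] = 1.0 / (N**2 - 1)
    for s in range(1, N // 2 + 1):
        terms = np.array([np.cos(2 * np.pi * j * s / N) / (1 - 4 * j**2)
                          for j in range(N // 2 + 1)])
        terms[0] *= 0.5; terms[-1] *= 0.5
        w[s] = w[N - s] = (4.0 / N) * terms.sum()
    return tau, D, w

centers = np.array([[0.50, 0.50], [0.70, 0.80], [0.45, 0.20],
                    [0.30, 0.70], [0.75, 0.20]])   # case-1 obstacle centres
R = 1.0 / 8.0
N, T = 16, 1.0
tau, D, w = cgl_setup(N)
A, B = np.zeros(2), np.ones(2)                     # assumed start/end states

def full_path(z):
    # decision variables are the interior nodal values in R^2
    return np.vstack([A, z.reshape(N - 1, 2), B])

def J(z):
    V = (2.0 / T) * (D @ full_path(z))             # nodal derivatives (2.8)
    return (T / 2.0) * np.sum(w * np.linalg.norm(V, axis=1))

def g(z):
    # >= 0 means the node lies outside every circular obstacle
    X = full_path(z)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return (d2 - R**2).ravel()

lin = np.linspace(0.0, 1.0, N + 1)[1:-1]
z0 = np.column_stack([lin, lin]).ravel()           # straight-line initial guess
res = minimize(J, z0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": g}],
               options={"maxiter": 300})
print(round(res.fun, 3))
```

The printed length should be slightly above the straight-line distance \(\sqrt{2}\approx 1.414\), reflecting the detour around the obstacles that block the direct route; the values reported in Table 2 for \(N=40\) remain the authoritative ones.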

Let us now change the centers of the obstacles as follows:

$$\begin{aligned} \begin{aligned} (\alpha _1,\beta _1)=(0.5,0.5), \quad R_1=\frac{1}{8}\\ (\alpha _2,\beta _2)=(0.7,0.85),\quad R_2=\frac{1}{8}\\ (\alpha _3,\beta _3)=(0.2,0.2),\quad R_3=\frac{1}{8}\\ (\alpha _4,\beta _4)=(0.3,0.8),\quad R_4=\frac{1}{8}\\ (\alpha _5,\beta _5)=(0.75,0.2),\quad R_5=\frac{1}{8} \end{aligned} \end{aligned}$$

The approximate optimal solution of our method (for \(N=40\)) in comparison with three other methods [1, 23, 25] is given in Table 3. Also, the approximate optimal path of the proposed approach for \(N=40\) is given in Fig. 3.

Table 3 Comparison of the presented approach with other approaches, for Example 4.2
Fig. 3
figure 3

Approximate optimal path for Example 4.2 with \(N=40\) (case 2)

Example 4.3

As the third example, we consider the problem of finding a SP in the presence of five stationary spherical obstacles in the Euclidean space \({\mathbb {R}}^3.\)

At first, we assume that the centers and radii of the obstacles are as follows:

$$\begin{aligned} \begin{aligned} (\alpha _1,\beta _1,\gamma _1)=(0.5,0.5,0.5),\quad R_1=\frac{1}{6},\quad r_1=0,\\ (\alpha _2,\beta _2,\gamma _2)=(0.3,0.3,0.1),\quad R_2=\frac{1}{5},\quad r_2=0,\\ (\alpha _3,\beta _3,\gamma _3)=(0.8,0.2,0.6),\quad R_3=\frac{1}{5},\quad r_3=0,\\ (\alpha _4,\beta _4,\gamma _4)=(0.2,0.2,0.8),\quad R_4=\frac{1}{5},\quad r_4=0,\\ (\alpha _5,\beta _5,\gamma _5)=(0.2,0.8,0.4),\quad R_5=\frac{1}{5},\quad r_5=0,\\ \end{aligned} \end{aligned}$$

We solve the corresponding nonlinear programming problem (2.13) of this problem for \(N=40.\) In Table 4, the approximate objective value of this problem is compared with the UC approach [23], the measure approach [1] and the approach of Zamirian et al. [25]. From this table, it is obvious that our method is more accurate than the other mentioned methods. Also, in Fig. 4, one can see the approximate optimal trajectory for \(N=40.\)

Table 4 Comparison of the presented approach with other approaches, in Example 4.3
Fig. 4
figure 4

Approximate optimal path for Example 4.3 with \(N=40\) (case 1)

Now, we assume that \(r_1=r_2=r_3=r_4=r_5=0.01.\) Then, the approximate objective value of our method is compared with two other methods [12, 25] in Table 5.

Table 5 Comparison of the presented approach with other approaches, in Example 4.3

5 Conclusions

The general form of the shortest path problem was solved by using the Chebyshev pseudospectral method. The Chebyshev–Gauss–Lobatto points were used to convert the problem into a discrete form. By solving the obtained nonlinear programming problem, we found an approximate path for the original shortest path problem. We established the feasibility and convergence of the method. Several illustrative examples were given to show the applicability and efficiency of the proposed approach. The results were compared with those of other recent methods for solving shortest path problems.