Abstract
The Chebyshev pseudo-spectral method is one of the most efficient methods for solving continuous-time optimization problems. In this paper, we utilize this method to solve the general form of the shortest path problem. The main problem is converted into a nonlinear programming problem, by solving which we obtain an approximate shortest path. The feasibility of the nonlinear programming problem and the convergence of the method are established. Finally, some numerical examples are considered to show the efficiency of the presented method over other methods.
1 Introduction
Shortest path (SP) problems (or path planning problems) are modern and fruitful optimization problems with many practical engineering applications, especially in the robotics industry, mobile objects, the military, unmanned underwater vehicles and surgery planning [1, 12, 23]. In the literature, there are many different methods for solving SP problems. For instance, some novel techniques in the presence of obstacles were proposed by Latombe [10]. Wang et al. [23] considered the SP problem as a semi-infinite constrained optimization problem. Zamirian et al. [25] proposed a new method based on the parametrization method and fuzzy aggregation for solving these problems for a single rigid, freely moving object in two- and three-dimensional spaces in the presence of obstacles. Tohidi and Samadi [21] utilized the Legendre spectral collocation method for solving the SP problem with boundary and interior barriers. More recently, using Haar wavelets, Mortezaee and Nazemi [13] proposed an approximation method for solving these problems. For other methods for solving shortest path problems, we refer the reader to references [11, 18], among others.
However, most of the existing methods [1, 24, 25] use traditional and classical techniques, such as wavelet collocation or measure-theoretical approaches, to solve these problems. These methods usually transform the basic problem into an optimal control problem. This transformation increases the dimension of the associated problem and weakens the obtained approximate solution. For example, Mortezaee and Nazemi [13] considered the shortest path problem as an optimization problem. Then, by defining some artificial controls, they converted the problem into an optimal control problem. They expressed the control variables and the derivatives of the state variables in terms of Haar wavelets and unknown coefficients. Using properties of Haar wavelets, they obtained a nonlinear programming problem. However, defining the artificial control functions can increase the dimension of the associated problem.
Motivated by the aforementioned reasons, in this paper we propose the Chebyshev pseudo-spectral (CPS) method [4, 5, 7, 14, 22] to solve the SP problem. Applying the Chebyshev–Gauss–Lobatto (CGL) nodes, we convert the shortest path problem into a nonlinear programming (NLP) problem. The proposed approach is implemented on some numerical examples and its accuracy is compared with that of other approaches. The obtained results show the higher accuracy of the method compared with the other methods.
The structure of the remainder of the paper is as follows. In Sect. 2, we apply the CPS method to a SP problem. In Sect. 3, we establish the convergence of the method. In Sect. 4, we apply the presented method to some SP problems and compare the results with those of other methods. Finally, in Sect. 5, we present the conclusions of the paper.
2 CPS method for SP problem
2.1 General form of SP problem
Solving a SP problem means finding an optimal path with the lowest cost from the initial state to the final state. The decision maker can define the cost as distance travelled, energy expended, time elapsed, etc. A general form of an optimal shortest path problem with boundary barriers \(f_1(x)\) and \(f_2(x)\) for a path x(.) can be modelled by the following optimization problem:
where \(x(.)=(x_1(.),x_2(.),\dots ,x_n(.))\) is a path with continuous derivatives, h(.), \(f_1(.),\) \(f_2(.),\) and g(.) are continuously differentiable functions, and \(\alpha \) and \(\beta \) are the initial and final states, respectively.
2.2 CPS method
Lagrange interpolation at the CGL nodes is important in approximation theory and especially in the CPS method. The resulting interpolating polynomial provides an approximation that is close to the best polynomial approximation of a continuous function under the maximum norm.
Here, we interpolate the optimal path at the CGL points to gain the best accuracy. The derivatives of these interpolating polynomials at these points are given exactly by a differentiation matrix. A similar approach was utilized in the works [4, 7, 14–16]. To utilize the CGL nodes, defined on the interval \([-1,1]\), the transformation \(t=\frac{T}{2}(\tau +1)\) must be used. Moreover, we must define
Since \(\frac{dt}{d\tau }=\frac{T}{2},\) the chain rule gives \({\dot{x}}(t)=\frac{2}{T}{\dot{X}}(\tau ).\) Under this transformation, problem (2.1) can be converted to the following equivalent problem:
The CGL nodes on \([-1,1]\) are selected as follows:
which are the roots of \((1-\tau ^2)\frac{d}{d\tau }T_N(\tau ),\) where \(T_{N}(\tau )=\cos (N\cos ^{-1}(\tau )),~\tau \in [-1,1],\) is the Chebyshev polynomial of degree N. For the interpolation, the following Lagrange polynomials are utilized:
Note that \(L_{k}(\tau _{k})=1,~k=0,1,\ldots ,N\) and \(L_{k}(\tau _{j})=0,\) for all \(k\ne j.\)
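As a concrete illustration (a sketch, not the authors' code, and assuming the common ascending ordering \(\tau _k=-\cos (k\pi /N)\), which the extracted formula does not show), the CGL nodes and the cardinal property of the Lagrange polynomials can be checked numerically:

```python
import numpy as np

def cgl_nodes(N):
    """CGL nodes on [-1, 1] in ascending order: tau_k = -cos(k*pi/N),
    i.e. the roots of (1 - tau^2) * T_N'(tau)."""
    return -np.cos(np.pi * np.arange(N + 1) / N)

def lagrange_basis(taus, k, tau):
    """Evaluate the k-th Lagrange cardinal polynomial L_k at tau."""
    val = 1.0
    for j, tj in enumerate(taus):
        if j != k:
            val *= (tau - tj) / (taus[k] - tj)
    return val

taus = cgl_nodes(8)
# The nodes annihilate (1 - tau^2) * T_N'(tau):
TN = np.zeros(9); TN[8] = 1.0               # Chebyshev coefficients of T_8
dTN = np.polynomial.chebyshev.chebder(TN)
assert np.allclose((1 - taus**2) * np.polynomial.chebyshev.chebval(taus, dTN), 0)
# Cardinal property: L_k(tau_j) = delta_kj
for k in range(9):
    for j in range(9):
        assert abs(lagrange_basis(taus, k, taus[j]) - (k == j)) < 1e-10
```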
Now, the Lagrange interpolation for the optimal solution of problem (2.3) can be defined as follows:
where N is a sufficiently large number. Note that
Also,
where
and \(\mu _0 = \mu _N = 2\) and \(\mu _k=1\) for \(k=1,2,\ldots ,N-1\) (for details of the above relations, we refer to [4, 5, 20]). To approximate the integral in the objective function of problem (2.3), we use the Clenshaw–Curtis quadrature formula (see [3, 17]) which is as follows:
where \(w_j, j=0,1,\ldots ,N\) are the weights of numerical approximation and for N even, are
and, for N odd:
In relations (2.11) and (2.12), the double prime means that the first and the last elements of summations have to be halved.
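A hedged numerical sketch of the Clenshaw–Curtis weights of (2.11)–(2.12) follows; the cosine-series form used below is the standard one, with the double-prime halving of the first and last terms implemented by the 0.5 factors (the node ordering does not matter, since the weights are symmetric):

```python
import numpy as np

def clenshaw_curtis(N):
    """CGL nodes tau_k = -cos(k*pi/N) and Clenshaw-Curtis weights w_k,
    so that sum(w * H(tau)) approximates the integral of H over [-1, 1].
    The 0.5 factors implement the double-prime halving of (2.11)/(2.12)."""
    tau = -np.cos(np.pi * np.arange(N + 1) / N)
    w = np.empty(N + 1)
    for i in range(N + 1):
        s = sum((0.5 if (j == 0 or 2 * j == N) else 1.0)
                * 2.0 / (1.0 - 4.0 * j * j) * np.cos(2.0 * j * i * np.pi / N)
                for j in range(N // 2 + 1))
        w[i] = 2.0 / N * s
    w[0] *= 0.5
    w[N] *= 0.5
    return tau, w

tau, w = clenshaw_curtis(12)
assert abs(w.sum() - 2.0) < 1e-12                      # integrates 1 exactly
assert abs(w @ np.exp(tau) - (np.e - 1 / np.e)) < 1e-10  # smooth integrand
```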
Lemma 2.1
[3, 17] Let \(\tau _0,\tau _1,\ldots ,\tau _N\) be the CGL nodes, and \(w_k,~k=0,1,2,\ldots ,N,\) be defined by relation (2.11) (or (2.12)). Suppose that H(.) is a continuous function. Then \(\lim _{N\rightarrow \infty }\sum _{k=0}^{N}w_{k}H(\tau _{k})=\int _{-1}^{1}H(\tau )\,d\tau .\)
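The differentiation matrix mentioned above, with the \(\mu _k\) convention \(\mu _0=\mu _N=2,\) \(\mu _k=1\) otherwise, admits the standard construction below (a sketch under the ascending-node assumption, following the well-known formula in [22], not the authors' code):

```python
import numpy as np

def cgl_diff_matrix(N):
    """Standard CGL differentiation matrix on the ascending nodes
    tau_k = -cos(k*pi/N): D @ f(tau) returns the exact derivative of
    the degree-N interpolant of f at the nodes."""
    tau = -np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0                        # the mu_k factors
    c *= (-1.0) ** np.arange(N + 1)
    diff = np.subtract.outer(tau, tau) + np.eye(N + 1)
    D = np.outer(c, 1.0 / c) / diff
    D -= np.diag(D.sum(axis=1))              # negative-row-sum diagonal trick
    return D, tau

# Differentiation is exact for polynomials of degree <= N:
D, tau = cgl_diff_matrix(8)
assert np.allclose(D @ (tau**5 - tau), 5 * tau**4 - 1)
```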
Now, using relations (2.7), (2.8) and (2.10), we can approximate the SP problem (2.3) by the following NLP problem:
By solving the NLP problem (2.13), we can obtain a pointwise approximation for the optimal path as:
where \(t_k=\frac{T}{2}(\tau _k+1).\) Also, we have a continuous approximation as
The feasibility of NLP problem (2.13) and the convergence of the approximate optimal path are established in the next section.
3 The feasibility and convergence analysis
In this section, we analyze the feasibility of NLP problem (2.13) and the convergence of the obtained approximate optimal path.
Here, assume that \(w^{m,p}\) is the Sobolev space on \([-1,1]\) consisting of all functions \(\phi :[-1,1]\rightarrow {\mathbb {R}}^n\) such that \(\phi ^{(j)}(.),~j=0,1,2,\ldots ,m,\) lie in the \(L^p\) space, equipped with the norm \(\Vert \phi \Vert _{w^{m,p}}=\big (\sum _{j=0}^{m}\Vert \phi ^{(j)}\Vert _{L^p}^p\big )^{1/p}\) for \(1\le p<\infty \) and \(\Vert \phi \Vert _{w^{m,\infty }}=\max _{0\le j\le m}\Vert \phi ^{(j)}\Vert _{L^\infty }.\)
In this section, we need the following lemma in Sobolev spaces.
Lemma 3.1
[2] For any given function \(\phi \in w^{m,\infty }\) there is a polynomial \(p_N(.)\) of degree N or less, such that
where c is a constant independent of N and \(c_0=\Vert \phi \Vert _{w^{m,\infty }}.\)
Remark 3.2
We note that, for any function \(\phi (.),\) the polynomial \(p_{_N}(.)\) of degree at most N that minimizes \(\Vert \phi (.)-p_{_N}(.)\Vert _{L^{\infty }}\) is called the \(N\mathrm{th}\)-order best polynomial approximation of \(\phi (.)\) in the \(L^\infty \) norm.
Now, we rewrite the shortest path problem (2.3) as follows:
where
Also, we rewrite the NLP problem (2.13) as follows:
where \(\bar{x}=(\bar{x}_0,\bar{x}_1,\ldots ,\bar{x}_N).\) To guarantee feasibility of NLP problem (3.2), we must relax its constraints and rewrite them as follows:
where \(m\ge 2\) is given, \(\mathbf{1} = (1, 1,\ldots , 1)\) and the dot denotes the inner product. The above relaxation is based on Polak's theory of consistent approximation (see [19]). We note that, as N tends to infinity, the difference between the constraints of problems (3.2) and (3.3) vanishes.
Remark 3.3
Since the feasible solution X(.) of problem (3.1) has a continuous derivative, there are compact sets \(\Omega _1\subseteq {\mathbb {R}}^n\) and \(\Omega _2\subseteq {\mathbb {R}}^n\) such that
Moreover, since the functions h(.) and G(.) are continuously differentiable, there are constants \(M_1\) and \(M_2\) such that for all \({\tilde{X}}(.)\) and \(\bar{X}(.):\)
Theorem 3.4
(Feasibility) Let \(X^*(.)\) be an optimal solution of the SP problem (3.1). Then, there is a positive integer K such that for any \(N\ge K,\) the NLP problem (3.3) has a feasible solution \(\bar{x}=(\bar{x}_0,\bar{x}_1,\ldots ,\bar{x}_N).\) Moreover, the feasible solution satisfies
where \(\tau _k, k=0,1,\ldots ,N\) are the CGL nodes and L is a positive constant independent of N.
Proof
Let p(.) be the \((N-1)\mathrm{th}\) order best approximation of \({\dot{X}}^*(.)\) in the norm of \(L^\infty .\) By Lemma 3.1, there is a constant \(c_1\) independent of N such that
Define
and
We show that \(\bar{x}=(\bar{x}_0,\bar{x}_1,\ldots ,\bar{x}_N)\) is a feasible solution for problem (3.3). By (3.7)–(3.9), for all \(\tau \in [-1,1],\) we have:
Now, by relation (3.5) (i.e., the Lipschitz property of the function G(.)) and relation (3.10), we get
where \(M_2\) is the Lipschitz constant of function G(.) which is independent of N. Hence
where \(\mathbf{1}=(1,1,\ldots ,1)\) and the dot denotes the inner product. Also, from the initial and final point conditions of problem (3.3), we have:
Hence
By a similar procedure we obtain
So, if we select K such that \(\max \{2c_1,2c_1M_2\}\le (N-1)^{\frac{1}{2}}\) for all \(N\ge K,\) then by (3.12)–(3.14) we achieve
Hence, \(\bar{x}=(\bar{x}_0,\bar{x}_1,\ldots ,\bar{x}_N)\) is a feasible solution for the \(\textit{NLP}\) problem (3.3). Finally, by selecting \(L=2c_1,\) relation (3.6) follows from (3.10).\(\square \)
Remark 3.5
From relation (3.10), both \(X^*(\tau _k)\) and \(\bar{x}_k\) (for \(k=0,1,2,\ldots ,N)\) are contained in some compact set. Hence, the feasible set of NLP problem (3.3) is compact. Therefore, the existence of an optimal solution of NLP problem (3.3) is guaranteed by the continuity of the cost function \(J_N(.).\)
Now, we show that the sequence of optimal solutions of NLP problem (3.3) converges to the optimal solution of the SP problem (3.1). This result follows from [8, 9] and is based on Polak's theory of consistent approximation [19]. Let \(\bar{x}^*=(\bar{x}_0^*,\bar{x}_1^*,\ldots ,\bar{x}_N^*)\) be an optimal solution of problem (3.3). Define
where \(L_k(.),~k=0,1,\ldots ,N\) are the Lagrange interpolating polynomials. Hence, we have a sequence of direct solutions \(\{\bar{x}^*=(\bar{x}_0^*,\bar{x}_1^*,\ldots ,\bar{x}_N^*)\}_{N=K}^{\infty }\) and their sequence of interpolating polynomials \(\{X_N^*(.)\}_{N=K}^\infty .\)
Assumption I It is assumed that the sequence \(\{(\bar{x}_0^*,{\dot{X}}_N^*(.))\}_{N=K}^{\infty }\) has a subsequence that uniformly converges to \((x_0^\infty ,q(.))\) where q(.) is a continuous function.
Theorem 3.6
(Convergence) Let \(\{\bar{x}^*=(\bar{x}_0^*,\bar{x}_1^*,\ldots ,\bar{x}_N^*)\}_{N=K}^{\infty }\) be a sequence of optimal solutions of the NLP problem (3.3) and let \(\{X_N^{*}(.)\}_{N=K}^\infty \) be their interpolating polynomial sequence satisfying Assumption I. Then,
is an optimal solution to the SP problem (3.1).
Proof
By Assumption I, there is a subsequence \(\{{\dot{X}}_{N_i}^*(.)\}_{i=1}^\infty \) of sequence \(\{\dot{X}_N^*(.)\}_{N=K}^\infty \) such that \(\lim _{i\rightarrow \infty }N_i=\infty \) and
Hence, by (3.15) and (3.16), we get
The remaining part of the proof has three steps. In Step 1, we show that \(X^*(.)\) is a feasible solution of problem (3.1). In Step 2, we prove the convergence of the discrete cost \(J_{N_i}(\bar{x}^*)\) to the continuous cost \(J(X^*(.)).\) Finally, in Step 3, we show that \(X^*(.)\) is an optimal solution of problem (3.1).
Step 1 We show that \(X^*(.)\) satisfies the constraints of problem (3.1). Assume, to the contrary, that \(X^*\) does not satisfy the first constraint. Then, there is a time \({\bar{t}}\in [-1,1]\) such that
Since the CGL nodes \(\tau _k, k=0,1,\ldots \) are dense in \([-1,1],\) i.e. the closure of \(\{\tau _k\}_{k=0}^\infty \) is \([-1,1]\) (see [6]), there exists a sequence \(\{k_{N_i}\}_{i=1}^\infty \) such that \(0<k_{N_i}<N_i\) and \(\lim _{i\rightarrow \infty }\tau _{k_{N_i}}=\bar{t}.\) Thus by continuity of function G(.), we get
Now, since \(X_{N_i}^*\) is the interpolating polynomial, we have
and hence by the first constraint of (3.3), the following holds:
Inequality (3.21) contradicts inequality (3.19). Thus \(X^*(.)\) satisfies the first constraint of problem (3.1). Now, for the initial and final conditions of problem (3.3), we have
Thus \(X^*(-1)=\alpha .\) In a similar manner, we can show that \(X^*(1)=\beta .\) Hence \(X^*(.)\) satisfies the constraints of problem (3.1).
Step 2 In this step we show that
where
and
By relations (3.15)–(3.17), we get
Also, by relation (3.4), we have:
Furthermore, \(h(\frac{T}{2}{\dot{X}}^*(.))\) is continuous on \([-1,1]\). Thus, by Lemma 2.1 we have
Therefore,
From the uniform convergence of (3.23) and (3.24) and the properties of the weights \(w_k,\) defined by (2.11)–(2.12), we obtain
By (3.26) and (3.27), we arrive at equation (3.22).
Step 3 Let \(X^{**}(.)\) be an optimal solution of problem (3.1) with \(X^{**}(.)\in w^{m,\infty },~m\ge 2.\) By the same arguments as in Step 2 and Theorem 3.4, there exists a sequence of feasible solutions \(\{{\tilde{x}}=({\tilde{x}}_0,{\tilde{x}}_1,\ldots ,{\tilde{x}}_N)\}_{N=K}^\infty \) of NLP problem (3.3) such that
Now, from optimality of \(X^{**}(.)\) and \(\bar{x}^*=(\bar{x}_0^*,\bar{x}_1^*,\ldots ,\bar{x}_N^*),\) we get
Hence,
Therefore, \(X^*(.)\) is an optimal solution of the SP problem (3.1).\(\square \)
4 Examples
In this section, we apply the proposed method to some SP problems to illustrate its efficiency compared with other methods. The resulting NLP problems are solved in MATLAB using the FMINCON function.
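The same pipeline can be sketched outside MATLAB, for instance with SciPy's minimize (SLSQP) in place of FMINCON. The instance below is hypothetical and not one of the paper's examples: a one-dimensional arc-length objective with a single sinusoidal lower barrier, with standard formulas assumed for the nodes, differentiation matrix and Clenshaw–Curtis weights:

```python
import numpy as np
from scipy.optimize import minimize

def cgl_setup(N):
    """Ascending CGL nodes, differentiation matrix and Clenshaw-Curtis
    weights on [-1, 1] (standard formulas)."""
    tau = -np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1 / c) / (np.subtract.outer(tau, tau) + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    w = np.empty(N + 1)
    for i in range(N + 1):
        s = sum((0.5 if (j == 0 or 2 * j == N) else 1.0)
                * 2.0 / (1.0 - 4.0 * j * j) * np.cos(2.0 * j * i * np.pi / N)
                for j in range(N // 2 + 1))
        w[i] = 2.0 / N * s
    w[0] *= 0.5
    w[N] *= 0.5
    return tau, D, w

# Hypothetical instance: minimize the arc length of x(.) on [0, T] with
# x(0) = alpha, x(T) = beta, above the barrier x(t) >= 0.5*sin(pi*t/T).
N, T, alpha, beta = 20, 1.0, 0.0, 1.0
tau, D, w = cgl_setup(N)
t = T / 2 * (tau + 1)
barrier = 0.5 * np.sin(np.pi * t / T)

def J(xbar):
    xdot = 2 / T * (D @ xbar)                   # x'(t) = (2/T) X'(tau)
    return T / 2 * (w @ np.sqrt(1 + xdot**2))   # Clenshaw-Curtis quadrature

cons = [{'type': 'eq', 'fun': lambda x: x[0] - alpha},
        {'type': 'eq', 'fun': lambda x: x[-1] - beta},
        {'type': 'ineq', 'fun': lambda x: x - barrier}]
res = minimize(J, np.linspace(alpha, beta, N + 1), constraints=cons)
# Any feasible path is at least as long as the straight line:
assert res.success and res.fun >= np.hypot(T, beta - alpha) - 1e-6
```

Here the barrier forces the path to bow above the straight line, so the optimal discrete cost exceeds \(\sqrt{2}\); the variable names and the barrier itself are illustrative assumptions.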
Example 4.1
As the first example, we consider the following one dimensional SP problem with lower and upper boundary barriers:
Solving the corresponding NLP problem (2.13) for different values of N, we report the approximate objective values \(J_N^*\) for \(N=20,22,24\) in Table 1, together with the optimal objective values obtained by the approach of Zamirian et al. [24] and by the Legendre spectral method [21]. As one can see, for \(N=24\) we reached \(J^*_N=3.27691,\) while the optimal value \(J^*\) of the measure approach is 3.4191 and that of the Legendre approach for \(N=25\) is \(J^*_N=3.2772.\) These results confirm that our approach finds a shorter path than the two other mentioned approaches. In Fig. 1, the graph of this optimal path for \(N=20\) is given.
Example 4.2
As the second example, we consider the problem of finding the SP in the presence of five stationary circular obstacles in the Euclidean space \({\mathbb {R}}^2\):
Let us first assume the centers and radii of the obstacles are as follows:
We solve the corresponding nonlinear programming problem (2.13) for \(N=40.\) In Table 2, the approximate objective value is compared with those of the undetermined coefficient (UC) approach [23] and the measure approach [1]. From this table, it is obvious that our method has a higher order of accuracy. Also, in Fig. 2, one can see the approximate optimal path for \(N=40.\)
Let us now change the centers of the obstacles as follows:
The approximate optimal solution of our method (for \(N=40\)) in comparison with three other methods [1, 23, 25] is given in Table 3. Also, the approximate optimal path of the proposed approach for \(N=40\) is given in Fig. 3.
Example 4.3
As the third example, we consider the problem of finding the SP in the presence of three stationary spherical obstacles in the Euclidean space \({\mathbb {R}}^3.\)
At first, we assume the centers and radii of the obstacles are as follows:
We solve the corresponding nonlinear programming problem (2.13) for \(N=40.\) In Table 4, the approximate objective value is compared with those of the UC approach [23], the measure approach [1] and the approach of Zamirian et al. [25]. From this table, it is obvious that our method has a higher order of accuracy than the other mentioned methods. Also, in Fig. 4, one can see the approximate optimal trajectory for \(N=40.\)
Now, we assume that \(r_1=r_2=r_3=r_4=r_5=0.01.\) Then, the approximate objective value of our method compared with two other methods [12, 25] is given in Table 5.
5 Conclusions
The general form of the shortest path problem was solved by using the Chebyshev pseudo-spectral method. The Chebyshev–Gauss–Lobatto points were used to convert the problem into a discrete form. By solving the resulting nonlinear programming problem, we found an approximate path for the original shortest path problem. We established the feasibility and convergence of the method. Several illustrative examples were given to show the applicability and efficiency of the proposed approach, and the results were compared with those of other recent methods for solving shortest path problems.
References
Borzabadi, A.H., Kamyad, A.V., Farahi, M.H., Mehne, H.H.: Solving some optimal path planning problems using an approach based on measure theory. Appl. Math. Comput. 170, 1418–1435 (2005)
Canuto, C., Hussaini, Y., Quarteroni, A., Zang, T.A.: Spectral Methods in Fluid Dynamics (Scientific Computation). Springer, New York (1987)
Clenshaw, C.W., Curtis, A.R.: A method for numerical integration on an automatic computer. Numer. Math. 2, 197–205 (1960)
Fahroo, F., Ross, I.M.: Costate estimation by a legendre pseudospectral method. J. Guid. Control Dyn. 24(2), 270–277 (2001)
Fahroo, F., Ross, I.M.: Direct trajectory optimization by a Chebyshev pseudospectral method. J. Guid. Control Dyn. 25(1), 160–166 (2002)
Freud, G.: Orthogonal Polynomials. Pergamon Press, Elmsford (1971)
Ghaznavi, M., Noori Skandari, M.H.: An efficient pseudo-spectral method for nonsmooth dynamical systems. Iran. J. Sci. Technol. Trans. A Sci. (2016). https://doi.org/10.1007/s40995-016-0040-9
Gong, Q., Kang, W., Ross, I.M.: A pseudospectral method for the optimal control of constrained feedback linearizable systems. IEEE Trans. Autom. Control 51(7), 1115–1129 (2006)
Gong, Q., Ross, I.M., Kang, W., Fahroo, F.: Connections between the covector mapping theorem and convergence of pseudospectral methods for optimal control. Comput. Optim. Appl. 41(3), 307–335 (2008)
Latombe, J.C.: Robot Motion Planning. Kluwer Academic, Norwell, MA (1991)
Lu, Y., Yi, S., Liu, Y., Ji, Y.: A novel path planning method for biomimetic robot based on deep learning. Assem. Autom. 36(2), 186–191 (2016)
Ma, Y., Zamirian, M., Yang, Y., Xu, Y., Zhang, J.: Path planning for mobile objects in four-dimension based on particle swarm optimization method with penalty function. Math. Probl. Eng. 2013, 1–9 (2013)
Mortezaee, M., Nazemi, A.: A wavelet collocation scheme for solving some optimal path planning problems. ANZIAM J. 57, 461–481 (2015)
Noori Skandari, M.H., Ghaznavi, M.: Chebyshev pseudo-spectral method for Bratu’s problem. Iran. J. Sci. Technol. Trans. A Sci. 41, 913–921 (2017)
Noori Skandari, M.H., Kamyad, A.V., Effati, S.: Generalized Euler–Lagrange equation for nonsmooth calculus of variations. Nonlinear Dyn. 75(1–2), 85–100 (2014)
Noori Skandari, M.H., Kamyad, A.V., Effati, S.: Smoothing approach for a class of nonsmooth optimal control problems. Appl. Math. Model. 40(2), 886–903 (2016)
O’Hara, H., Smith, F.J.: Error estimation in the Clenshaw–Curtis quadrature formula. Comput. J. 11, 213–219 (1968)
Otte, M., Correll, N.: C-FOREST: parallel shortest path planning with superlinear speedup. IEEE Robot. Autom. Soc. 29(3), 798–806 (2013)
Polak, E.: Optimization: Algorithms and Consistent Approximations. Springer, Heidelberg (1997)
Shen, J., Tang, T., Wang, L.L.: Spectral Methods: Algorithm, Analysis and Applications. Springer, Berlin (2011)
Tohidi, E., Samadi, O.R.N.: Legendre spectral collocation method for approximating the solution of shortest path problems. Syst. Sci. Control Eng. 3, 62–68 (2015)
Trefethen, L.N.: Spectral Methods in MATLAB. Society for Industrial and Applied Mathematics, Philadelphia (2000)
Wang, Y., Lane, D.M., Falconer, G.J.: Two novel approaches for unmanned under water vehicle path planning: constrained optimisation and semi-infinite constrained optimisation. Robotica 18, 123–142 (2000)
Zamirian, M., Farahi, M.H., Nazemi, A.R.: An applicable method for solving the shortest path problems. Appl. Math. Comput. 190, 1479–1486 (2007)
Zamirian, M., Kamyad, A.V., Farahi, M.H.: A novel algorithm for solving optimal path planning problems based on parametrization method and fuzzy aggregation. Phys. Lett. A 373(38), 3439–3449 (2009)
Noori Skandari, M.H., Ghaznavi, M. A numerical method for solving shortest path problems. Calcolo 55, 14 (2018). https://doi.org/10.1007/s10092-018-0256-5