4.1 Introduction

Fuzzy control has found extensive application in the modeling and control of nonlinear systems over the past decade. Based on the universal approximation theorem for fuzzy logic systems (FLSs) [1,2,3,4,5,6], researchers have proposed many approximation-based adaptive fuzzy control design methods for nonlinear systems (see, e.g., [7,8,9,10,11,12] and the references therein).

It has been proved that the adaptive backstepping technique is a powerful tool for solving tracking or regulation control problems of unknown nonlinear systems in, or transformable to, parameter strict-feedback form [13]. For such systems, many adaptive fuzzy backstepping controllers have been developed (see, e.g., [14,15,16,17,18,19] and the references therein), where FLSs or neural networks are used to approximate unknown smooth nonlinear functions. It is well known, however, that the standard backstepping design procedure requires the analytic computation of the first derivatives of the virtual control signals \({{\alpha }_{i}}\) \((i=1,2,\ldots ,n-1)\), i.e., \({{\dot{\alpha }}_{i}}\). Note that the computation of \({{\dot{\alpha }}_{i}}\) in turn involves the derivatives of \({{\alpha }_{j}}\), \(j=0,1,\ldots ,i-1\). Obviously, as the system dimension n increases, the computation of \({{\dot{\alpha }}_{i}}\) becomes increasingly complicated, which limits the practical applicability of the theoretical results. Hence, reducing the computation of \({{\dot{\alpha }}_{i}}\) is a crucial issue in controller design, and it is one motivation of this chapter. In addition, the aforementioned approaches require that the desired trajectory \({{y}_{d}}(t)\) and its first n derivatives, i.e., \(y_{d}^{(i)}(t),i=1,2,\ldots ,n\), be available. It is important to note that in some important applications (e.g., land vehicles or aircraft) the desired trajectory may be generated by a planner, an outer loop, or a user input device that does not provide higher derivatives. Relaxing this assumption is the second motivation for this work.

On the other hand, actuators, sensors or other system components in practical engineering fail frequently, which can cause system performance deterioration and lead to instability that can further produce catastrophic accidents. Thus, many effective fault tolerant control (FTC) approaches have been proposed to improve system reliability and to guarantee system stability in all situations [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39].

In this chapter, a bank of command filters (see, e.g., [40, 41] and the references therein) is introduced to generate the first derivatives of the desired trajectory and of the virtual control signals, respectively. Then, by using the backstepping technique, a robust adaptive fuzzy controller is proposed to guarantee that the tracking error converges to a neighborhood of the origin, where FLSs are utilized to approximate the unknown functions. The contributions of this work can be summarized as follows:

  1. (1)

    The control scheme presented in this chapter requires only the desired trajectory and its first derivative, which is more reasonable in practical applications. The theoretical results of this chapter are thus applicable to a wide range of practical systems;

  2. (2)

    Compared with the existing literature on standard backstepping design, the control scheme presented in this chapter does not need to compute the higher derivatives of the virtual control signals in the backstepping design procedure, which decreases the computational complexity;

  3. (3)

    Different from some results in the literature where all system functions are known, the system functions considered in this chapter are unknown. In particular, the signs of the control gain functions are also unknown;

  4. (4)

    The actuator fault model presented in this chapter integrates not only unknown gain faults but also unknown bias faults, where both faults depend on the system state and will be approximated by FLSs.

The rest of this chapter is organized as follows. Section 4.2 formulates the problem under investigation; the Nussbaum-type gain and the mathematical description of FLSs are also provided, together with some basic assumptions and preliminary results. In Sect. 4.3, the main technical results of this chapter are presented: the command filters and the adaptive fuzzy controller are designed, and the stability analysis of the closed-loop system is developed. A numerical example is presented in Sect. 4.4, where simulation results demonstrate the effectiveness of the proposed technique. Finally, Sect. 4.5 draws the conclusion.

4.2 Problem Statement and Preliminaries

4.2.1 Problem Statement

Consider the following uncertain nonlinear system:

$$\begin{aligned} \left\{ \begin{array}{l} {{\dot{x}}_i} = {f_i}({{\bar{x}}_i}) + {g_i}({{\bar{x}}_i}){x_{i + 1}} + {d_i}({{\bar{x}}_{i + 1}},t),~~i = 1,2, \ldots ,n - 1; \\ {{\dot{x}}_n} = {f_n}({{\bar{x}}_n}) + {g_n}({{\bar{x}}_n})u(t) + {d_n}({{\bar{x}}_n},t); \\ y~ = {x_1} \\ \end{array} \right. \end{aligned}$$
(4.1)

where \({{\bar{x}}_{i}}={{({{x}_{1}},\ldots ,{{x}_{i}})}^{T}}\in {{R}^{i}},i=1,\ldots ,n\), is the state vector; y denotes the output; \({{u}}\in R\) is the control input; \(f_{i}(\cdot )\in R\) and \(g_{i}(\cdot )\in R\), \(i=1,\ldots ,n\), are unknown smooth functions; and \(d_{i}(\cdot ,t)\), \(i=1,\ldots ,n\), denote the unknown dynamic disturbances.
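For concreteness, the system class (4.1) with \(n=2\) can be simulated by forward Euler integration. The particular \(f_i\), \(g_i\), \(d_i\), the stabilizing input, and the initial condition below are illustrative assumptions for this sketch, not the chapter's example.

```python
import numpy as np

# Forward-Euler simulation of a 2nd-order strict-feedback system of form (4.1).
# f_i, g_i, d_i are hypothetical smooth choices used only for illustration.
def simulate(u_func, x0=(0.1, 0.0), dt=1e-3, T=2.0):
    x1, x2 = x0
    traj = []
    for k in range(int(T / dt)):
        t = k * dt
        d1 = 0.1 * np.sin(t) * x2                            # disturbance d_1(x̄_2, t)
        d2 = 0.1 * np.cos(t) * x2                            # disturbance d_2(x̄_2, t)
        f1, g1 = 0.5 * np.sin(x1), 2.0 + 0.5 * np.cos(x1)    # |g_1| in [1.5, 2.5]
        f2, g2 = -x1 * x2, 1.5 + 0.5 * np.sin(x1 * x2)       # |g_2| in [1.0, 2.0]
        u = u_func(t, x1, x2)
        dx1 = f1 + g1 * x2 + d1
        dx2 = f2 + g2 * u + d2
        x1 += dt * dx1
        x2 += dt * dx2
        traj.append((x1, x2))
    return np.array(traj)

# A simple (hypothetical) stabilizing input keeps the states bounded.
traj = simulate(lambda t, x1, x2: -2.0 * x2 - x1)
assert np.all(np.isfinite(traj))
```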

In practical applications, actuators may fail. The fault model considered in this chapter can be described as follows:

$$\begin{aligned} u^{f}=g_f(\bar{x}_n)u+b_f(\bar{x}_n), t>t_F \end{aligned}$$
(4.2)

where \(g_f(\bar{x}_n)\) and \(b_f(\bar{x}_n)\) are unknown smooth functions, which denote the gain fault and the bias fault, respectively, and \(t_F\) is an unknown fault occurrence time.
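The fault model (4.2) can be sketched in code as follows; the fault time, the constant gain fault (a 40% loss of effectiveness), and the sinusoidal state-dependent bias fault are hypothetical choices for illustration only.

```python
import numpy as np

# Sketch of the actuator fault model (4.2): after the unknown fault time t_F,
# the applied input becomes u^f = g_f(x̄_n)*u + b_f(x̄_n).
def faulty_input(u, x_bar_n, t, t_F=5.0):
    if t <= t_F:                      # fault-free before the fault time
        return u
    g_f = 0.6                         # gain fault: 60% remaining effectiveness (assumed)
    b_f = 0.2 * np.sin(x_bar_n[0])    # bias fault depending on the state (assumed)
    return g_f * u + b_f

x = np.array([0.5, -1.0])
assert faulty_input(1.0, x, t=1.0) == 1.0                        # before t_F
assert np.isclose(faulty_input(1.0, x, t=6.0), 0.6 + 0.2 * np.sin(0.5))
```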

The control objective is to design an adaptive fuzzy controller, via backstepping with command filters, for system (4.1) such that the output y tracks the desired trajectory \({{y}_{d}}\) as accurately as possible in spite of actuator faults and unknown dynamic disturbances.

To design an appropriate controller, the following lemma and assumptions are given.

Lemma 4.1

([42]) For any \(x\in R\) and any \(\delta >0\), \(|x|-x\tanh (x/\delta )\le 0.2785\delta \).
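The constant 0.2785 arises because \(\sup _{u}(|u|-u\tanh u)\approx 0.2785\), attained near \(|u|\approx 0.64\); scaling \(u=x/\delta \) gives the stated bound. A quick numerical check (a sketch over a finite grid and sample values of \(\delta \)) is:

```python
import numpy as np

# Numerical check of Lemma 4.1: |x| - x*tanh(x/delta) <= 0.2785*delta.
def tanh_gap(x, delta):
    return np.abs(x) - x * np.tanh(x / delta)

for delta in (0.1, 0.5, 2.0):
    x = np.linspace(-10.0, 10.0, 200001)
    gap = tanh_gap(x, delta)
    # The maximum of the gap scales linearly with delta and stays below 0.2785*delta.
    assert gap.max() <= 0.2785 * delta
```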

Assumption 4.1

There exist known constants \(g_{i0}>0\) and \(g_{i1}>0\) such that \({g_{i1}} \geqslant \left| {{g_i}({{\bar{x}}_i})} \right| \geqslant {g_{i0}} > 0,\forall {\bar{x}_i} \in {R^i},i = 1,2, \ldots ,n\).

Assumption 4.2

There exist an unknown constant \(p^*_i\) and a known smooth positive function \(\varphi _i(\bar{x}_i)\) such that \(|d_i(\bar{x}_{i+1},t)|\le p^*_i \varphi _i(\bar{x}_i)\), \(i=1,\ldots,n\).

Assumption 4.3

The desired trajectory \({{y}_{d}}(t)\) and its first derivative are bounded and available.

Assumption 4.4

\(g_f(\bar{x}_n)\) is bounded and bounded away from zero, i.e., there exist known constants \(g_{f0}>0\) and \(g_{f1}>0\) such that \(g_{f1}\ge |g_f(\bar{x}_n)|\ge g_{f0}\).

Remark 4.1

In the literature, existing results concerning the trajectory tracking problem for strict-feedback systems require the classical assumption that the desired trajectory \({{y}_{d}}(t)\) and its first n derivatives, i.e., \(y_{d}^{(i)}(t),i=0,1,\ldots ,n\), be available. As stated in the Introduction, in some important applications (e.g., land vehicles or aircraft) the desired trajectory may be generated by a planner, an outer loop, or a user input device that does not provide higher derivatives. In such cases, these results do not apply. Assumption 4.3 in this chapter is thus more reasonable in practical applications.

4.2.2 Nussbaum Type Gain

Any continuous function \(N(s):R\rightarrow R\) is a function of Nussbaum type if it has the following properties:

  1. (1)

    \(\underset{s\rightarrow +\infty }{\mathop {\lim }}\,\sup \frac{1}{s}\int _{0}^{s}{N(\varsigma )d\varsigma }=+\infty \);

  2. (2)

    \(\underset{s\rightarrow -\infty }{\mathop {\lim }}\,\inf \frac{1}{s}\int _{0}^{s}{N(\varsigma )d\varsigma }=-\infty \)

For example, the continuous functions \({{\varsigma }^{2}}\cos (\varsigma )\), \({{\varsigma }^{2}}\sin (\varsigma )\), and \({{e}^{{{\varsigma }^{2}}}}\cos ((\pi /2)\varsigma )\) verify the above properties and are thus Nussbaum-type functions [43]. The even Nussbaum function \({{e}^{{{\varsigma }^{2}}}}\cos ((\pi /2)\varsigma )\) is used throughout this chapter.
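A numerical illustration (not a proof) of the Nussbaum property for \(N(\varsigma )={{\varsigma }^{2}}\cos (\varsigma )\): the running mean \(\frac{1}{s}\int _{0}^{s}{N(\varsigma )d\varsigma }\) swings to arbitrarily large positive and negative values as s grows.

```python
import numpy as np

# F(s) = (1/s) * ∫_0^s ς²cos(ς) dς, computed with a trapezoidal sum.
def nussbaum_mean(s, num=400001):
    grid = np.linspace(0.0, s, num)
    vals = grid**2 * np.cos(grid)
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))
    return integral / s

# Closed form: ∫_0^s ς²cos ς dς = s²sin s + 2s cos s - 2 sin s, so
# F(s) ≈ s*sin(s) for large s: positive peaks near s=(2k+1/2)π and
# negative peaks near s=(2k+3/2)π, both growing without bound.
assert nussbaum_mean(20.5 * np.pi) > 50.0   # sin(s) = +1 here
assert nussbaum_mean(21.5 * np.pi) < -50.0  # sin(s) = -1 here
```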

Lemma 4.2

([44]) Let \(V(\cdot )\) and \(\varsigma (\cdot )\) be smooth functions defined on \([0,{{t}_{f}})\) with \(V(t)\ge 0,\forall t\in [0,{{t}_{f}})\), and \(N(\cdot )\) be an even smooth Nussbaum-type function. If the following inequality holds:

$$V(t)\le {{c}_{0}}+\int _{0}^{t}{(\underline{g}N(\varsigma )+1)\dot{\varsigma }d\tau },\forall t\in [0,{{t}_{f}})$$

where \(\underline{g}\ne 0\) is a constant, and \({{c}_{0}}\) represents a suitable constant, then \(V(t),\varsigma (t)\) and \(\int _{0}^{t}{\underline{g}N(\varsigma )\dot{\varsigma }}d\tau \) must be bounded on \([0,{{t}_{f}})\).

Lemma 4.3

([45]) Let \(V(\cdot )\) and \(\varsigma (\cdot )\) be smooth functions defined on \([0,{{t}_{f}})\) with \(V(t)\ge 0,\forall t\in [0,{{t}_{f}})\), and \(N(\cdot )\) be an even smooth Nussbaum-type function. For \(\forall t\in [0,{{t}_{f}})\), if the following inequality holds,

$$V(t)\le {{c}_{0}}+{{e}^{-{{c}_{1}}t}}\int _{0}^{t}{\underline{g}(\tau )N(\varsigma )\dot{\varsigma }\text {}{{e}^{{{c}_{1}}\tau }}}d\tau +{{e}^{-{{c}_{1}}t}}\int _{\text {}0}^{\text {}t}{\dot{\varsigma }{{e}^{{{c}_{1}}\tau }}d\tau }$$

where the constant \({{c}_{1}}>0\), \(\underline{g}(\cdot )\) is a time-varying parameter taking values in an unknown closed interval \(I:=[{{l}^{-}},{{l}^{+}}]\) with \(0\notin I\), and \({{c}_{0}}\) represents a suitable constant, then V(t), \(\varsigma (t)\) and \(\int _{0}^{t}{\underline{g}(\tau )N(\varsigma )\dot{\varsigma }}d\tau \) must be bounded on \([0,{{t}_{f}})\).

4.2.3 Mathematical Description of Fuzzy Logic Systems

A fuzzy logic system consists of four parts: the knowledge base, the fuzzifier, the fuzzy inference engine working on fuzzy rules, and the defuzzifier. The knowledge base for FLS comprises a collection of fuzzy if-then rules of the following form:

$$\begin{aligned} \begin{aligned} {{R}^{l}}: ~&if ~{{x}_{1}} ~is~ A_{1}^{l} ~and~ {{x}_{2}} ~is~ A_{2}^{l}~\ldots ~and~ {{x}_{n}} ~is~ A_{n}^{l}, \\&then~ ~ y ~is~ {{B}^{l}},~\ l=1,2,\ldots ,M \\ \end{aligned} \end{aligned}$$

where \(\underline{x}={{\left[ {{x}_{1}},\ldots ,{{x}_{n}} \right] }^{T}}\subset {{R}^{n}}\) and y are the FLS input and output, respectively. Fuzzy sets \(A_{i}^{l}\) and \({{B}^{l}}\) are associated with the membership functions \({{\mu }_{A_{i}^{l}}}({{x}_{i}})=\exp (-{{(\frac{{{x}_{i}}-a_{i}^{l}}{b_{i}^{l}})}^{2}})\) and \({{\mu }_{{{B}^{l}}}}({{y}^{l}})=1\), respectively, and M is the number of rules. Through the singleton fuzzifier, center-average defuzzification and product inference, the FLS can be expressed as:

$$\begin{aligned} y(x)={\sum \limits _{l=1}^{M}{{{{\bar{y}}}^{l}}}\left( \prod \limits _{i=1}^{n}{{{\mu }_{A_{i}^{l}}}\left( {{x}_{i}} \right) } \right) }/{\sum \limits _{l=1}^{M}{\left( \prod \limits _{i=1}^{n}{{{\mu }_{A_{i}^{l}}}\left( {{x}_{i}} \right) } \right) }} \end{aligned}$$

where \({{\bar{y}}^{l}}={{\max }_{y\in R}}\,{{\mu }_{{{B}^{l}}}}(y)\). Define the fuzzy basis functions as:

$$\begin{aligned} {{\xi }_{l}}(x)={\prod \limits _{i=1}^{n}{{{\mu }_{A_{i}^{l}}}\left( {{x}_{i}} \right) }}/{\sum \limits _{l=1}^{M}{\left( \prod \limits _{i=1}^{n}{{{\mu }_{A_{i}^{l}}}\left( {{x}_{i}} \right) } \right) }} \end{aligned}$$

and define \({{{\theta } }^{T}}{=}\,[{{\bar{y}}^{1}},{{\bar{y}}^{2}},\ldots ,{{\bar{y}}^{M}}]\,{=}\,[{{{\theta } }_{1}},{{{\theta } }_{2}},\ldots ,{{{\theta } }_{M}}]\) and \(\xi (x)\,{=}\,{{[{{\xi }_{^{1}}}(x),\ldots ,{{\xi }_{^{M}}}(x)]}^{T}}\), then the above FLS can be rewritten as:

$$\begin{aligned} y(x)={{{\theta } }^{T}}\xi (x) \end{aligned}$$
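The construction above can be sketched as follows; the rule count, Gaussian centers, widths, and weight vector are arbitrary illustrative values, not parameters from the chapter.

```python
import numpy as np

# Minimal sketch of the FLS y(x) = θᵀξ(x) with Gaussian memberships.
def fuzzy_basis(x, centers, widths):
    """ξ_l(x) = Π_i μ_{A_i^l}(x_i) / Σ_l Π_i μ_{A_i^l}(x_i)."""
    mu = np.exp(-((x[None, :] - centers) / widths) ** 2)  # shape (M rules, n inputs)
    w = np.prod(mu, axis=1)                               # rule firing strengths
    return w / np.sum(w)

def fls_output(x, centers, widths, theta):
    return theta @ fuzzy_basis(x, centers, widths)

M, n = 5, 2                      # illustrative sizes
rng = np.random.default_rng(0)
centers = rng.uniform(-2, 2, size=(M, n))
widths = np.full((M, n), 1.0)
theta = rng.uniform(-1, 1, size=M)

x = np.array([0.3, -0.7])
xi = fuzzy_basis(x, centers, widths)
assert np.isclose(xi.sum(), 1.0)                         # basis functions sum to 1
y = fls_output(x, centers, widths, theta)
assert theta.min() - 1e-12 <= y <= theta.max() + 1e-12   # convex combination of θ_l
```

Because the basis functions are nonnegative and sum to one, the FLS output is always a convex combination of the consequent weights.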

Lemma 4.4

([5, 6]) Let f(x) be a continuous function defined on a compact set \(\varOmega \). Then for any constant \(\varepsilon >0\), there exists an FLS such that

$$\underset{x\in \varOmega }{\mathop {\sup }}\,|f(x)-{{{\theta } }^{T}}\xi (x)|\le \varepsilon $$

By Lemma 4.4, an FLS can approximate any continuous function on a compact set. Due to this approximation capability, the nonlinear function f(x) can be represented by the FLS as

$$\begin{aligned} f(x,{\theta } )={{{\theta } }^{T}}\xi (x) \end{aligned}$$

Define the optimal parameter vector \( {{{\theta } }^{*}}\) as

$${{{\theta } }^{*}}=\arg \underset{{\theta } \in \varOmega }{\mathop {\min }}\,[\underset{x\in U}{\mathop {\sup }}\,|f(x)-f(x,{\theta })|]$$

where \(\varOmega \) and U are compact regions for \({\theta } \) and x, respectively. Also the FLS minimum approximation error is defined as:

$$\varepsilon =f(x)-{{{\theta } }^{*T}}\xi (x)$$

From Lemma 4.4, the following assumption is made.

Assumption 4.5

There exists an unknown constant \(\varepsilon ^*>0\) such that \(|\varepsilon |\le \varepsilon ^*\) on the compact sets \(\varOmega \) and U.

In this chapter, we use the above FLS to approximate the unknown functions \(h_i(z_i)\) \((i=1,\ldots ,n)\), which will be defined later; namely, there exist \({\theta } _{i}^{*}\) and \({{\varepsilon }_{i}}\) such that

$${{h}_{i}}({{z}_{i}})={\theta } _{i}^{*T}{{\xi }_{i}}({{z}_{i}})+{{\varepsilon }_{i}}$$

From Assumption 4.5, there exists an unknown positive constant \(\varepsilon ^*_i\) such that \(|\varepsilon _i|\le \varepsilon ^*_i\).

For notational simplicity, we use \(\bullet \) to denote \(\bullet (\cdot )\). For example, \(f_{i}\) is the abbreviation of \(f_{i}({{\bar{x}}_{i}})\).

4.3 Design of Adaptive Fuzzy Controller and Stability Analysis

Define

$$\begin{aligned} {{z}_{i}}={{x}_{i}}-{{\alpha }_{i-1}},~~i=1,2,\ldots ,n \end{aligned}$$
(4.3)

where \({{\alpha }_{0}}={{y}_{d}}\), \({{\alpha }_{i-1}}\) (\(i=2,\ldots ,n\)) is a virtual control which will be designed at each step, and \({{\alpha }_{n}}=u\) is the actual control input. The recursive design procedure contains n steps. From Step 1 to Step \(n-1\), \(\alpha _i~(i=1,\ldots ,n-1)\) is designed at each step. Finally, the overall control law \(u=\alpha _n\) is constructed at Step n.

In order to generate estimates of \(\alpha _{i-1}\) (\(i=1,2,\ldots , n\)) and of their first derivatives (recall that \(\alpha _0=y_d\)), define the following command filters

$$\begin{aligned} \dot{\omega }_{i}=-\eta _\omega (\omega _{i}-\alpha _{i-1}),~~i=1,2,\ldots , n \end{aligned}$$
(4.4)

where \(\eta _\omega >0\) is a design parameter. Let us define the estimation error signal \(v_i\) as

$$\begin{aligned} v_i=\omega _i-\alpha _{i-1}, ~~i=1,2,\ldots , n \end{aligned}$$

Remark 4.2

The command filter (4.4) is constructed to avoid the computation of the higher derivatives of \(\alpha _{i-1}\), \(i=2,\ldots ,n\). It should be pointed out that the error \(v_i\) will be compensated at Step n in this chapter.
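The behavior of the command filter (4.4) can be illustrated with a simple Euler simulation: the filter state \(\omega \) tracks the input signal, and \(\dot{\omega }=-\eta _\omega (\omega -\alpha )\) serves as a surrogate for \(\dot{\alpha }\) without analytic differentiation. The filter gain, step size, and test signal \(\alpha (t)=\sin t\) below are illustrative choices.

```python
import numpy as np

# Euler simulation of the first-order command filter ω̇ = -η_ω(ω - α).
def command_filter(alpha, dt, eta=50.0, omega0=0.0):
    omega = omega0
    out = np.empty_like(alpha)
    for k, a in enumerate(alpha):
        omega += dt * (-eta * (omega - a))   # forward Euler step
        out[k] = omega
    return out

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
alpha = np.sin(t)                            # illustrative virtual-control signal
omega = command_filter(alpha, dt)

# After a short transient, the filtering error v = ω - α stays O(1/η_ω):
err = np.abs(omega - alpha)[t > 1.0]
assert err.max() < 0.05
```

For a sinusoid of unit frequency, the steady-state error amplitude is roughly \(1/\eta _\omega \), which is why increasing the filter gain shrinks \(v_i\).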

Step 1:

Now, consider the \({{z}_{1}}\)-subsystem: \({{z}_{1}}={{x}_{1}}-{{\alpha }_{0}}\). From (4.1) and (4.3), one has

$$\begin{aligned} \begin{aligned} {{\dot{z}}_1}&= {f_1}({{\bar{x}}_1}) + {g_1}({{\bar{x}}_1}){x_2} + {d_1}({{\bar{x}}_2},t) - {{\dot{y}}_d} \\&= {f_1}({{\bar{x}}_1}) + {g_1}({{\bar{x}}_1}){z_2} + {g_1}({{\bar{x}}_1}){\alpha _1} + {d_1}({{\bar{x}}_2},t) - {{\dot{y}}_d} \\ \end{aligned} \end{aligned}$$
(4.5)

Define the following function

$$\begin{aligned} {V_{z1}} = \int _{{\text { }}0}^{{\text { }}{z_1}} {\frac{\sigma }{{\left| {{g_1}(\sigma + {y_d})} \right| }}d\sigma } \end{aligned}$$
(4.6)

From the integral-type mean value theorem, there exists a constant \({\lambda _1} \in (0,1)\) such that \({V_{{z_1}}} = {z_1}^2/(2\left| {g_1}({\lambda _1}{z_1} + {y_d})\right| )\). Hence, from Assumption 4.1, we have

$$\frac{{{z_1}^2}}{{2{g_{10}}}} \geqslant {V_{{z_1}}} \geqslant \frac{{{z_1}^2}}{{2{g_{11}}}} > 0$$

which means that \(V_{{z_1}}\) is a positive definite function of the variable \({z_1}\).
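The sandwich bound above can be checked numerically for a sample gain; the choice \(g_1(x)=2+\sin x\) (so that \(g_{10}=1\), \(g_{11}=3\)) and the value of \(y_d\) are illustrative assumptions.

```python
import numpy as np

# Numerical check of z₁²/(2g₁₀) ≥ V_{z1} ≥ z₁²/(2g₁₁) for the integral
# Lyapunov function (4.6), with the illustrative gain g₁(x) = 2 + sin(x).
def V_z1(z1, y_d, num=100001):
    sigma = np.linspace(0.0, z1, num)
    vals = sigma / np.abs(2.0 + np.sin(sigma + y_d))
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(sigma))  # trapezoid sum

g10, g11 = 1.0, 3.0
for z1 in (-2.0, 0.5, 1.7):
    V = V_z1(z1, y_d=0.3)
    assert z1**2 / (2 * g11) <= V <= z1**2 / (2 * g10)
```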

Since \(\frac{{\partial \left| {g_1^{ - 1}(\sigma + {y_d})} \right| }}{{\partial {y_d}}} =\frac{{\partial \left| {g_1^{ - 1}(\sigma + {y_d})} \right| }}{{\partial \sigma }}\), we can obtain

$$\begin{aligned} \begin{aligned} {{\dot{V}}_{{z_1}}}&= \frac{{{z_1}}}{{\left| {{g_1}({x_1})} \right| }}{{\dot{z}}_1} + \int _{{\text { }}0}^{{\text { }}{z_1}} {\sigma \left[ {\frac{{\partial \left| {g_1^{ - 1}(\sigma + {y_d})} \right| }}{{\partial {y_d}}}{{\dot{y}}_d}} \right] } d\sigma \\&= \frac{{{z_1}}}{{\left| {{g_1}({x_1})} \right| }}{{\dot{z}}_1} + {{\dot{y}}_d}\left[ {\frac{{{z_1}}}{{\left| {{g_1}({x_1})} \right| }} - \int _{{\text { }}0}^{{\text { }}{z_1}} {\frac{1}{{\left| {{g_1}(\sigma + {y_d})} \right| }}d\sigma } } \right] \\&= \frac{{{z_1}}}{{\left| {{g_1}({x_1})} \right| }}[{f_1}({{\bar{x}}_1}) + {g_1}({{\bar{x}}_1}){z_2} + {g_1}({{\bar{x}}_1}){\alpha _1} + {d_1}({{\bar{x}}_2},t) - {{\dot{y}}_d}] +\\&~~~~{{\dot{y}}_d}\left[ {\frac{{{z_1}}}{{\left| {{g_1}({x_1})} \right| }} - \int _{{\text { }}0}^{{\text { }}{z_1}} {\frac{1}{{\left| {{g_1}(\sigma + {y_d})} \right| }}d\sigma } } \right] \\ \end{aligned} \end{aligned}$$
(4.7)

Let \({\bar{z}_1} = {({x_1},{\omega _1},{\dot{\omega }_1})^\mathrm{T}}\) and

$$\begin{aligned} {h_1}({\bar{z}_1}) = \frac{{{f_1}({x_1})}}{{\left| {{g_1}({x_1})} \right| }} - \frac{{{{\dot{\omega }}_1}}}{{{z_1}}}\int _{{\text { }}0}^{{\text { }}{z_1}} {\frac{1}{{\left| {{g_1}(\sigma + {\omega _1})} \right| }}d\sigma } \end{aligned}$$
(4.8)
$$\begin{aligned} \varDelta _1(\bar{z}_1,\alpha _0,\dot{\alpha }_0,\omega _1,\dot{\omega }_1) = - {{\dot{y}}_d}\int _{{\text { }}0}^{{\text { }}{z_1}} {\frac{1}{{\left| {{g_1}(\sigma + {y_d})} \right| }}d\sigma }+ {{\dot{\omega }}_1}\int _{{\text { }}0}^{{\text { }}{z_1}} {\frac{1}{{\left| {{g_1}(\sigma + {\omega _1})} \right| }}d\sigma } \end{aligned}$$
(4.9)

Note that \(h_1(\bar{z}_1)\) will be approximated by an FLS on a compact set \(\varOmega _{z_1}\) as \(h_1(\bar z_1)=\theta _1^{*T}{\xi _1}({\bar{z}_1})+\varepsilon _1(\bar{z}_1)\). From Assumption 4.5, there exists an unknown constant \(\varepsilon _1^*\) such that \(|\varepsilon _1(\bar{z}_1) |\le \varepsilon _1^*\).

Then, we have

$$\begin{aligned} \begin{aligned} {{\dot{V}}_{{z_1}}}&= {z_1}[\frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}{z_2} + \frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}{\alpha _1} + \frac{{{d_1}({{\bar{x}}_2},t)}}{{\left| {{g_1}({x_1})} \right| }} + {h_1}({{\bar{z}}_1})]+\varDelta _1(\bar{z}_1,\alpha _0,\dot{\alpha }_0,\omega _1,\dot{\omega }_1) \\ \end{aligned} \end{aligned}$$
(4.10)

Virtual control \(\alpha _1\) is defined as follows:

$$\begin{aligned} {\alpha _1} = N({\varsigma _1})[{k_1}{z_1} + {h_1}({z_1},{\hat{\theta }_1}) + {\hat{b}_1}{\bar{\varphi }_1}({x_1})\tanh (\frac{{{z_1}{{\bar{\varphi }}_1}({{\bar{x}}_1})}}{{{\eta _1}}})] \end{aligned}$$
(4.11)
$$\begin{aligned} {\dot{\varsigma }_1} = {k_1}z_{_1}^2 + {h_1}({z_1},{\hat{\theta }_1}){z_1} + {\hat{b}_1}{\bar{\varphi }_1}({x_1}){z_1}\tanh (\frac{{{z_1}{{\bar{\varphi }}_1}({{\bar{x}}_1})}}{{{\eta _1}}}) \end{aligned}$$
(4.12)

where \(k_1>1\) is a design parameter; \(h_1(z_1,\hat{\theta }_1)=\hat{\theta }^T_1\xi _1(\bar{z}_1)\) is the FLS estimate of \(\theta _1^{*T}{\xi _1}({\bar{z}_1})\), with \(\hat{\theta }_1\) the estimate of \({\theta }^*_1\); \({\hat{b}_1}\) is an estimate of \(b_{1}^*=\max \{ \varepsilon _{1}^*,\frac{{p_1^*}}{{{g_{10}}}}\}\); and \({\bar{\varphi }_1}({\bar{x}_1}) = 1 + {\varphi _1}({\bar{x}_1})\).

Hence, from Lemma 4.1 and Assumptions 4.1 and 4.2, (4.7) can be further developed as follows:

$$\begin{aligned} \begin{aligned} {{\dot{V}}_{{z_1}}}&\leqslant \frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}{z_1}{z_2} + \frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}N({\varsigma _1}){{\dot{\varsigma }}_1} + {{\dot{\varsigma }}_1} - {{\dot{\varsigma }}_1} + \frac{{p_1^*{\varphi _1}({{\bar{x}}_1})}}{{{g_{10}}}}|{z_1}| + {h_1}({{\bar{z}}_1}){z_1} + \varDelta _1\\&= - {k_1}z_{_1}^2 + \frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}{z_1}{z_2} + \frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}N({\varsigma _1}){{\dot{\varsigma }}_1} + {{\dot{\varsigma }}_1} + {h_1}({{\bar{z}}_1}){z_1} - {h_1}({z_1},{{\hat{\theta }}_1}){z_1} -\\&~~~~{{\hat{b}}_1}{{\bar{\varphi }}_1}({{\bar{x}}_1}){z_1}\tanh (\frac{{{z_1}{{\bar{\varphi }}_1}({{\bar{x}}_1})}}{{{\eta _1}}}) + \frac{{p_1^*{\varphi _1}({{\bar{x}}_1})}}{{{g_{10}}}}|{z_1}| + \varDelta _1\\&\leqslant - ({k_1} - 1)z_{_1}^2 + \frac{1}{4}z_2^2 + \frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}N({\varsigma _1}){{\dot{\varsigma }}_1} + {{\dot{\varsigma }}_1} + \tilde{\theta }_1^T{\xi _1}({{\bar{z}}_1}){z_1} + b_1^*[\left| {{z_1}} \right| {{\bar{\varphi }}_1}({{\bar{x}}_1}) - \\&~~~~{z_1}{{\bar{\varphi }}_1}({{\bar{x}}_1})\tanh (\frac{{{z_1}{{\bar{\varphi }}_1}({{\bar{x}}_1})}}{{{\eta _1}}})] + {{\tilde{b}}_1}{{\bar{\varphi }}_1}({{\bar{x}}_1}){z_1}\tanh (\frac{{{z_1}{{\bar{\varphi }}_1}({{\bar{x}}_1})}}{{{\eta _1}}})+\varDelta _1 \\ \end{aligned} \end{aligned}$$
(4.13)

where \({\tilde{\theta }_1} = \theta _1^* - {\hat{\theta }_1}\) and \({\tilde{b}_1} = b_1^* - {\hat{b}_1}\).

Consider the following function

$$\begin{aligned} {V_1}(t) = {V_{z{}_1}} + {\text { }}\frac{1}{2}{\tilde{\theta }_1}^T\varGamma _1^{ - 1}{\tilde{\theta }_1} + \frac{1}{{2{\lambda _1}}}\tilde{b}_{_1}^2 \end{aligned}$$
(4.14)

Adaptive laws are defined as follows:

$$\begin{aligned} {\dot{ \hat{ \theta }} _1} = {\varGamma _1}[{z_1}{\xi _1}({\bar{z}_1}) - {\sigma _1}{\hat{\theta }_1}] \end{aligned}$$
(4.15)
$$\begin{aligned} {\dot{ \hat{ b}}_1} = {\lambda _1}[{z_1}{\bar{\varphi }_1}({\bar{x}_1})\tanh (\frac{{{z_1}\bar{\varphi }{}_1({{\bar{x}}_1})}}{{{\eta _1}}}) - {\sigma _{b1}}{\hat{b}_1}] \end{aligned}$$
(4.16)

where \(\varGamma _1\) is a positive definite matrix with appropriate dimensions, and \({\sigma _1} > 0\), \({\sigma _{b1}} > 0\), \({\eta _1} > 0\) and \({\lambda _1} > 0\) are design parameters.

Differentiating \(V_1\) with respect to time t and considering (4.9)–(4.12), we have

$$\begin{aligned} \begin{aligned} {\dot{V}_1} \leqslant&- ({k_1} - 1)z_{_1}^2 + \frac{1}{4}z_2^2 + \frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}N({\varsigma _1}){\dot{\varsigma }_1} + {\dot{\varsigma }_1} +\\&0.2785{\eta _1}b_1^* + {\sigma _1}\tilde{\theta }_{_1}^T{\hat{\theta }_1} + {\sigma _{b1}}{\tilde{b}_1}{\hat{b}_1}+\varDelta _1 \end{aligned} \end{aligned}$$
(4.17)

where Lemma 4.1 is used, namely, \(0 \leqslant \left| x \right| - x\tanh (\frac{x}{\varepsilon }) \leqslant 0.2785\varepsilon ,{\text { }}\forall \varepsilon > 0, \forall x\in R\).

Since

$$\begin{aligned} {\sigma _1}\tilde{\theta }_1^\mathrm{T}{\hat{\theta }_1} \leqslant - \frac{{{\sigma _1}{{\left\| {{{\tilde{\theta }}_1}} \right\| }^2}}}{2} + \frac{{{\sigma _1}{{\left\| {\theta _1^ * } \right\| }^2}}}{2},~~ {\sigma _{b1}}{\tilde{b}_1}{\hat{b}_1} \leqslant - \frac{{{\sigma _{b1}}{{\tilde{b}}_1}^2}}{2} + \frac{{{\sigma _{b1}}b{{_1^ * }^2}}}{2} \end{aligned}$$
(4.18)
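For completeness, the first bound in (4.18) follows from \(\hat{\theta }_1=\theta _1^*-\tilde{\theta }_1\) and Young's inequality \(\tilde{\theta }_1^\mathrm{T}\theta _1^*\leqslant \frac{1}{2}{\Vert \tilde{\theta }_1\Vert ^2}+\frac{1}{2}{\Vert \theta _1^*\Vert ^2}\):

$$\begin{aligned} {\sigma _1}\tilde{\theta }_1^\mathrm{T}{\hat{\theta }_1} = {\sigma _1}\tilde{\theta }_1^\mathrm{T}\theta _1^* - {\sigma _1}{\left\| {{\tilde{\theta }}_1} \right\| ^2} \leqslant \frac{{{\sigma _1}{{\left\| {{{\tilde{\theta }}_1}} \right\| }^2}}}{2} + \frac{{{\sigma _1}{{\left\| {\theta _1^ * } \right\| }^2}}}{2} - {\sigma _1}{\left\| {{\tilde{\theta }}_1} \right\| ^2} = - \frac{{{\sigma _1}{{\left\| {{{\tilde{\theta }}_1}} \right\| }^2}}}{2} + \frac{{{\sigma _1}{{\left\| {\theta _1^ * } \right\| }^2}}}{2} \end{aligned}$$

The second bound is obtained in exactly the same way with \(\hat{b}_1=b_1^*-\tilde{b}_1\).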

then (4.17) can be derived as

$$\begin{aligned} {\dot{V}_1} \leqslant - {c_1}{V_1} + \frac{1}{4}z_2^2 + \frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}N({\varsigma _1}){\dot{\varsigma }_1} + \dot{\varsigma }_1+ {c_{\varepsilon 1}}+\varDelta _1 \end{aligned}$$
(4.19)

where

$$\begin{aligned} {c_{\varepsilon 1}} = 0.2785{\eta _1}b_{_1}^* + \frac{{{\sigma _1}{{\left\| {\theta _1^ * } \right\| }^2}}}{2} + \frac{{{\sigma _{b1}}b{{_1^ * }^2}}}{2} \end{aligned}$$
$$\begin{aligned} {c_1} = \min \{ 2({k_1} - 1){g_{10}},\frac{{{\sigma _1}}}{{{\lambda _{\min }}(\varGamma _1^{ - 1})}},\frac{{{\sigma _{b1}}}}{{{\lambda _1}}}\} \end{aligned}$$

Further, we have

$$\begin{aligned} \frac{d}{{dt}}({V_1}(t){e^{{c_1}t}}) \leqslant \frac{1}{4}{e^{{c_1}t}}z_{_2}^2 + \frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}N({\varsigma _1}){\dot{\varsigma }_1}{e^{{c_1}t}} + {\dot{\varsigma }_1}{e^{{c_1}t}} + {c_{\varepsilon 1}}{e^{{c_1}t}}+\varDelta _1{e^{{c_1}t}} \end{aligned}$$
(4.20)

Let \( {\rho _1} = {c_{\varepsilon 1}}/{c_1}\), and integrating both the sides of the above inequality (4.20), it yields

$$\begin{aligned} \begin{aligned} {V_1}(t)&\leqslant {\rho _1} + [{V_1}(0) - {\rho _1}]{e^{ - {c_1}t}} + {e^{ - {c_1}t}}\int _0^t {\frac{1}{4}{e^{{c_1}\tau }}z_{_2}^2d\tau } +\\&~~~~{e^{ - {c_1}t}}\int _0^t {(\frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}N({\varsigma _1}) + 1){e^{{c_1}\tau }}} {{\dot{\varsigma }}_1}d\tau + {e^{ - {c_1}t}}\int _0^t {{e^{{c_1}\tau }}\varDelta _1d\tau }\\&\leqslant {\rho _1} + {V_1}(0) + {e^{ - {c_1}t}}\int _0^t {\frac{1}{4}{e^{{c_1}\tau }}z_{_2}^2d\tau } + \\&~~~~{e^{ - {c_1}t}}\int _0^t {(\frac{{{g_1}({{\bar{x}}_1})}}{{\left| {{g_1}({x_1})} \right| }}N({\varsigma _1}) + 1){e^{{c_1}\tau }}} {{\dot{\varsigma }}_1}d\tau +{e^{ - {c_1}t}}\int _0^t {{e^{{c_1}\tau }}\varDelta _1d\tau } \\ \end{aligned} \end{aligned}$$
(4.21)

Obviously, if the terms \({e^{ - {c_1}t}}\int _0^t {\frac{1}{4}{e^{{c_1}\tau }}z_{_2}^2d\tau }\) and \({e^{ - {c_1}t}}\int _0^t {{e^{{c_1}\tau }}\varDelta _1d\tau }\) were absent from (4.21), then, from Lemmas 4.2 and 4.3, it could be concluded that \({V_1}(t),{\varsigma _1},{\hat{\theta }_1},{\hat{b}_1}\) are bounded on \([0,{{t}_{f}})\). On the other hand, if it can be proved that \(z_2(t)\) is bounded on \([0,t_f)\), then from the following inequality

$$\begin{aligned} {e^{ - {c_1}t}}\int _0^t {\frac{1}{4}{e^{{c_1}\tau }}z_{_2}^2d\tau } \leqslant \frac{1}{4}{e^{ - {c_1}t}}\mathop {\sup }\limits _{\tau \in [0,t]} [z_2^2(\tau )]\int _0^t {{e^{{c_1}\tau }}d\tau \leqslant } \frac{1}{{4{c_1}}}\mathop {\sup }\limits _{\tau \in [0,t]} [z_2^2(\tau )] \end{aligned}$$
(4.22)

we can conclude that \({e^{ - {c_1}t}}\int _0^t {\frac{1}{4}{e^{{c_1}\tau }}z_{_2}^2d\tau }\) is bounded. From Lemmas 4.2 and 4.3, we further obtain that \({V_1}(t),{\varsigma _1},{\hat{\theta }_1},{\hat{b}_1}\) are also bounded on \([0,t_f)\).

Furthermore, from [43], the same results can be obtained when \({{t}_{f}}=+\infty \).

Notice that the boundedness of \({{z}_{2}}\) will be established in the next step, and the error term \({e^{ - {c_1}t}}\int _0^t {{e^{{c_1}\tau }}\varDelta _1d\tau }\) will be compensated at Step n.

Remark 4.3

In [41], the error between \(\omega _1\) and \(\alpha _0\) is not considered in the stability analysis of the overall closed-loop system. Since there exists a difference between them, the effect of this error should be taken into account in the closed-loop stability analysis; otherwise, the analysis is not complete.

Remark 4.4

It is worth pointing out that the signs of the control gain functions considered in this chapter are unknown, as are the control gain functions themselves, which means that the system model is more general; the results obtained in this chapter are thus significant both in theory and in practical applications.

Step i (\(i=2, 3, \ldots , n-1\)):

In this step, consider the subsystem: \(z_i=x_i-\alpha _{i-1}\). From (4.1) and (4.3), we have

$$\begin{aligned} \begin{aligned} {{\dot{z}}_i}&= {f_i}({{\bar{x}}_i}) + {g_i}({{\bar{x}}_i}){z_{i + 1}} + {g_i}({{\bar{x}}_i}){\alpha _i} + {d_i}({{\bar{x}}_{i+1}},t) - {{\dot{\alpha }}_{i - 1}} \\ \end{aligned} \end{aligned}$$
(4.23)

Define the following Lyapunov function

$$\begin{aligned} {V_{{z_i}}} = \int _{{\text { }}0}^{{\text { }}{z_i}} {\frac{\sigma }{{\left| {{g_i}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}d\sigma } \end{aligned}$$
(4.24)

Similar to the analysis in the first step, it can be easily seen that \(V_{{z_i}}\) is a positive definite function of \(z_i\). Since

$$\begin{aligned} \frac{{\partial \left| {g_{^i}^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}{{\partial {\alpha _{i - 1}}}} = \frac{{\partial \left| {g_{^i}^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}{{\partial \sigma }} \end{aligned}$$
(4.25)

and from the chain rule, we have

$$\begin{aligned} {{\dot{V}}_{{z_i}}}&= \frac{{{z_i}}}{{\left| {{g_i}({{\bar{x}}_i})} \right| }}{{\dot{z}}_i} + \nonumber \\&~~~~\int _{{\text { }}0}^{{z_i}{\text { }}} {\sigma \left[ {\frac{{\partial \left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}{{\partial {{\bar{x}}_{i - 1}}}}{{\dot{ \bar{ x}}}_{i - 1}} + \frac{{\partial \left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}{{\partial {\alpha _{i - 1}}}}{{\dot{\alpha }}_{i - 1}}} \right] } d\sigma \nonumber \\&= \frac{{{z_i}}}{{\left| {{g_i}({{\bar{x}}_i})} \right| }}{{\dot{z}}_i} + \int _{{\text { }}0}^{{z_i}{\text { }}} {\sigma \frac{{\partial \left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}{{\partial {{\bar{x}}_{i - 1}}}}{{\dot{ \bar{ x}}}_{i - 1}}d\sigma } + \nonumber \\&~~~~{{\dot{\alpha }}_{i - 1}}\int _{{\text { }}0}^{{z_i}{\text { }}} {\sigma \frac{{\partial \left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}{{\partial \sigma }}d\sigma } \nonumber \\&= \frac{{{z_i}}}{{\left| {{g_i}({{\bar{x}}_i})} \right| }}{{\dot{z}}_i} + \int _{{\text { }}0}^{{z_i}{\text { }}} {\sigma \frac{{\partial \left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}{{\partial {{\bar{x}}_{i - 1}}}}{{\dot{ \bar{ x}}}_{i - 1}}d\sigma } + \nonumber \\&~~~~\frac{{{{\dot{\alpha }}_{i - 1}}{z_i}}}{{\left| {{g_i}({{\bar{x}}_i})} \right| }} - {{\dot{\alpha }}_{i - 1}}\int _{{\text { }}0}^{{\text { }}{z_i}} {\frac{1}{{\left| {{g_i}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}d\sigma } \nonumber \\ \end{aligned}$$
(4.26)

From the definition of the error between the command filter's state and the virtual control, we know \(\alpha _{i-1}=\omega _i-v_i\). Replacing \(\alpha _{i-1}\) in (4.26) by \(\omega _i-v_i\) and using (4.1), we have

$$\begin{aligned} \begin{aligned} {{\dot{V}}_{{z_i}}}&= \frac{{{z_i}}}{{\left| {{g_i}({{\bar{x}}_i})} \right| }}({f_i}({{\bar{x}}_i}) + {g_i}({{\bar{x}}_i}){z_{i + 1}} + {g_i}({{\bar{x}}_i}){\alpha _i} + {d_i}({{\bar{x}}_{i+1}},t) - {{\dot{\alpha }}_{i - 1}}) + \\&~~~~\int _{{\text { }}0}^{{z_i}{\text { }}} {\sigma \frac{{\partial \left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}{{\partial {{\bar{x}}_{i - 1}}}}{{\dot{ \bar{ x}}}_{i - 1}}d\sigma } + \frac{{{{\dot{\alpha }}_{i - 1}}{z_i}}}{{\left| g_i(\bar{x}_i) \right| }} - \\&~~~~{{\dot{\alpha }}_{i - 1}}\int _{{\text { }}0}^{{\text { }}{z_i}} {\frac{1}{{\left| {{g_i}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}d\sigma } \\&= \frac{{{z_i}}}{{\left| {{g_i}({{\bar{x}}_i})} \right| }}({g_i}({{\bar{x}}_i}){z_{i + 1}} + {g_i}({{\bar{x}}_i}){\alpha _i} + {d_i}({{\bar{x}}_{i+1}},t)) + {h_i}({{\bar{z}}_i}){z_i}+\varDelta _i \\ \end{aligned} \end{aligned}$$
(4.27)

where \({\bar{z}_i} = {(\bar{x}_{_i}^T,{\omega _{i}},{\dot{\omega }_i})^\mathrm{T}} \in {\varOmega _{{{\bar{z}}_i}}} \subset {R^{i + 2}}\),

$$\begin{aligned} \begin{aligned} {h_i}({\bar{z}_i}) =&\frac{{{f_i}({{\bar{x}}_i})}}{{\left| {{g_i}({{\bar{x}}_i})} \right| }} + \frac{1}{{{z_i}}}\int _{{\text { }}0}^{{z_i}{\text { }}} {\sigma \left[ {\frac{{\partial \left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\omega _{i}})} \right| }}{{\partial {{\bar{x}}_{i - 1}}}}{{\dot{ \bar{ x}}}_{i - 1}}d\sigma } \right] } + \\&\frac{{{{\dot{\omega }}_{i }}}}{{{z_i}}}\int _{{\text { }}0}^{{\text { }}{z_i}} {\frac{1}{{\left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\omega _{i }})} \right| }}d\sigma }\\ \end{aligned} \end{aligned}$$
(4.28)
$$\begin{aligned} \begin{aligned} \varDelta _i=&\int _{{\text { }}0}^{{z_i}{\text { }}} {\sigma \left[ {\frac{{\partial \left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}{{\partial {{\bar{x}}_{i - 1}}}}{{\dot{ \bar{ x}}}_{i - 1}}d\sigma } \right] }+\\&{{\dot{\alpha }}_{i - 1}}\int _{{\text { }}0}^{{\text { }}{z_i}} {\frac{1}{{\left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}d\sigma }-\\&\frac{1}{{{z_i}}}\int _{{\text { }}0}^{{z_i}{\text { }}} {\sigma \left[ {\frac{{\partial \left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\omega _{i}})} \right| }}{{\partial {{\bar{x}}_{i - 1}}}}{{\dot{ \bar{ x}}}_{i - 1}}d\sigma } \right] } - \frac{{{{\dot{\omega }}_{i }}}}{{{z_i}}}\int _{{\text { }}0}^{{\text { }}{z_i}} {\frac{1}{{\left| {g_i^{ - 1}({{\bar{x}}_{i - 1}},\sigma + {\omega _{i }})} \right| }}d\sigma }\\ \end{aligned} \end{aligned}$$
(4.29)

Note that \(h_i(\bar{z}_i)\) will be approximated by FLSs on a compact set \(\varOmega _{\bar{z}_i}\) as \(h_i(\bar{z}_i)=\theta _i^{*T}{\xi _i}({\bar{z}_i})+\varepsilon _i(\bar{z}_i)\). From Assumption 4.5, we know that there exists an unknown constant \(\varepsilon _i^*\) such that \(|\varepsilon _i(\bar{z}_i) |\le \varepsilon _i^*\).

The virtual control is designed as follows:

$$\begin{aligned} {\alpha _i} = N({\varsigma _i})[{k_i}{z_i} + {h_i}({\bar{z}_i},{\hat{\theta }_i}) + {\hat{b}_i}\bar{\varphi }({\bar{x}_i})\tanh (\frac{{{z_i}\bar{\varphi }({{\bar{x}}_i})}}{{{\eta _i}}})] \end{aligned}$$
(4.30)
$$\begin{aligned} {\dot{\varsigma }_i} = {k_i}z_i^2 + {h_i}({\bar{z}_i},{\hat{\theta }_i}){z_i} + {\hat{b}_i}\bar{\varphi }({\bar{x}_i}){z_i}\tanh (\frac{{{z_i}\bar{\varphi }({{\bar{x}}_i})}}{{{\eta _i}}}) \end{aligned}$$
(4.31)

where \(k_i>\frac{5}{4}\) is a design parameter; \({h_i}({\bar{z}_i},{\hat{\theta }_i}) = \hat{\theta }_i^T{\xi _i}({\bar{z}_i})\) is an estimate of \(\theta _i^{*T}{\xi _i}({\bar{z}_i})\); \(\hat{b}_i\) is an estimate of \(b_i^*\), \(b_i^* = \max \{ \varepsilon _i^*,\frac{{p_i^*}}{{{g_{i0}}}}\}\); \({\bar{\varphi }_i}({\bar{x}_i}) = 1 + {\varphi _i}({\bar{x}_i})\).

Remark 4.5

It may seem strange that \(k_i\) is required to satisfy \(k_i>\frac{5}{4}\). The extra “\(\frac{1}{4}\)” compensates for the term \(\frac{1}{4}z^2_i\) carried over from the previous step.
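As a concrete illustration, the virtual control (4.30) and the Nussbaum argument update (4.31) can be evaluated numerically. The sketch below assumes \(N(\varsigma)=\varsigma^2\cos(\varsigma)\) as the Nussbaum function and a Gaussian basis for \(\xi_i\); both are common choices rather than prescriptions of this chapter, and all names and parameter values are illustrative.

```python
import numpy as np

def nussbaum(s):
    # N(s) = s^2 cos(s): a smooth, even Nussbaum-type gain (assumed choice)
    return s**2 * np.cos(s)

def fls_basis(zbar, centers, width=1.0):
    # Hypothetical Gaussian fuzzy-basis vector xi_i(zbar);
    # centers has shape (m, len(zbar))
    d = centers - zbar
    return np.exp(-np.sum(d**2, axis=1) / width**2)

def virtual_control(z, zbar, varsigma, theta_hat, b_hat, centers,
                    k=1.5, eta=0.01, phi_bar=1.0):
    # (4.30): alpha_i = N(varsigma_i)[k_i z_i + h_i(zbar_i, theta_hat_i)
    #                   + b_hat_i * phibar_i * tanh(z_i * phibar_i / eta_i)]
    xi = fls_basis(zbar, centers)
    h = theta_hat @ xi                      # h_i(zbar_i, theta_hat_i)
    robust = b_hat * phi_bar * np.tanh(z * phi_bar / eta)
    bracket = k * z + h + robust
    alpha = nussbaum(varsigma) * bracket
    varsigma_dot = z * bracket              # (4.31): z_i times the bracket
    return alpha, varsigma_dot
```

Note that (4.31) is exactly \(z_i\) times the bracketed term of (4.30), which is what the sketch computes.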

Similar to (4.13), substituting (4.30) and (4.31) into (4.27) and re-arranging it, we have

$$\begin{aligned} \begin{aligned} {{\dot{V}}_{{z_i}}} \leqslant&- ({k_i} - 1)z_i^2 + \frac{1}{4}z_{i + 1}^2 + \frac{{{g_i}({{\bar{x}}_i})}}{{\left| {{g_i}({{\bar{x}}_i})} \right| }}{z_i}N({\varsigma _i}){{\dot{\varsigma }}_i} + {{\dot{\varsigma }}_i} - \tilde{\theta }_i^T{\xi _i}({{\bar{z}}_i}){z_i} + \\&b_i^*[\left| {{z_i}} \right| {{\bar{\varphi }}_i}({{\bar{x}}_i}) - {z_i}{{\bar{\varphi }}_i}({{\bar{x}}_i})\tanh (\frac{{{z_i}{{\bar{\varphi }}_i}({{\bar{x}}_i})}}{{{\eta _i}}})] - {{\tilde{b}}_i}{{\bar{\varphi }}_i}({{\bar{x}}_i}){z_i}\tanh (\frac{{{z_i}{{\bar{\varphi }}_i}({{\bar{x}}_i})}}{{{\eta _i}}})+\varDelta _i \\ \end{aligned} \end{aligned}$$
(4.32)

where \({\tilde{\theta }_i} = \theta _{_i}^* - {\hat{\theta } _i}\) and \({\tilde{b} _i} = b _{_i}^* - {\hat{b} _i}\).

Consider the following Lyapunov function

$$\begin{aligned} {V_i}(t) = V_{i-1}+{V_{z{}_i}} + {\text { }}\frac{1}{2}{\tilde{\theta }_i}^T\varGamma _i^{ - 1}{\tilde{\theta }_i} + \frac{1}{{2{\lambda _i}}}\tilde{b}_i^2 \end{aligned}$$
(4.33)

The adaptive laws are designed as follows:

$$\begin{aligned} {\dot{\hat{ \theta }} _i} = {\varGamma _i}[{z_i}{\xi _i}({\bar{z}_i}) - {\sigma _i}{\hat{\theta }_i}] \end{aligned}$$
(4.34)
$$\begin{aligned} {\dot{ \hat{ b}}_i} = {\lambda _i}[{z_i}{\bar{\varphi }_i}({\bar{x}_i})\tanh (\frac{{{z_i}{{\bar{\varphi }}_i}({{\bar{x}}_i})}}{{{\eta _i}}}) - {\sigma _{bi}}{\hat{b}_i}] \end{aligned}$$
(4.35)

where \({\varGamma _i}\) is a positive definite matrix, and \({\eta _i}> 0,{\sigma _i}> 0,{\sigma _{bi}} > 0\) and \({\lambda _i} > 0\) are design parameters.
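In a discrete-time implementation the adaptive laws (4.34) and (4.35) are typically integrated with a simple forward-Euler step. A minimal sketch, assuming a step size `dt` (an implementation choice, not part of the continuous-time design):

```python
import numpy as np

def adapt_step(theta_hat, b_hat, z, xi, phi_bar, dt,
               Gamma, lam, sigma, sigma_b, eta):
    # One Euler step of (4.34)-(4.35).
    theta_dot = Gamma @ (z * xi - sigma * theta_hat)          # (4.34)
    b_dot = lam * (z * phi_bar * np.tanh(z * phi_bar / eta)   # (4.35)
                   - sigma_b * b_hat)
    return theta_hat + dt * theta_dot, b_hat + dt * b_dot
```

The \(-\sigma_i\hat{\theta}_i\) and \(-\sigma_{bi}\hat{b}_i\) terms act as leakage: when \(z_i\) stays small, the estimates decay rather than drift.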

Similar to Step 1, differentiating \(V_i\) with respect to time t, considering (4.34) and (4.35), and using Lemma 4.1, one has

$$\begin{aligned} \begin{aligned} {\dot{V}_i} \leqslant&\dot{V}_{i-1}- ({k_i} - \frac{5}{4})z_i^2 + \frac{1}{4}z_{i + 1}^2 + \frac{{{g_i}({{\bar{x}}_i})}}{{\left| {{g_i}({{\bar{x}}_i})} \right| }}{z_i}N({\varsigma _i}){\dot{\varsigma }_i} + {\dot{\varsigma }_i} + \\&0.2785{\eta _i}b_i^* - {\sigma _i}\tilde{\theta }_i^T{\hat{\theta }_i} - {\text { }}{\sigma _{bi}}{\tilde{b}_i}{\hat{b}_i}+\varDelta _i\\ \end{aligned} \end{aligned}$$
(4.36)

Since \({\sigma _i}\tilde{\theta }_i^\mathrm{T}{\hat{\theta }_i} \leqslant - \frac{{{\sigma _i}{{\left\| {{{\tilde{\theta }}_i}} \right\| }^2}}}{2} + \frac{{{\sigma _i}{{\left\| {\theta _i^ * } \right\| }^2}}}{2}\) and \({\sigma _{bi}}{\tilde{b}_i}{\hat{b}_i} \leqslant - \frac{{{\sigma _{bi}}{{\tilde{b}}_i}^2}}{2} + \frac{{{\sigma _{bi}}b{{_i^ * }^2}}}{2}\), let \({c_{\varepsilon i}} = 0.2785{\eta _i}b_i^* + \frac{{{\sigma _i}{{\left\| {\theta _i^ * } \right\| }^2}}}{2} + \frac{{{\sigma _{bi}}b{{_i^ * }^2}}}{2}\) and \({c_i} = \min \{ 2({k_i} - \frac{5}{4}){g_{i0}},\frac{{{\sigma _i}}}{{{\lambda _{\min }}(\varGamma _i^{ - 1})}},\frac{{{\sigma _{bi}}}}{{{\lambda _i}}}\}\). Then, considering (4.17), (4.36) can be developed as follows:

$$\begin{aligned} {\dot{V}_i} \leqslant \sum \nolimits ^i_{j=1}{(- {c_j}{V_j} +\frac{{{g_j}({{\bar{x}}_j})}}{{\left| {{g_j}({{\bar{x}}_j})} \right| }}{z_j}N({\varsigma _j}){\dot{\varsigma }_j} + {\dot{\varsigma }_j} + {c_{\varepsilon j}})} +\sum \nolimits ^i_{j=1}{\varDelta _j} \end{aligned}$$
(4.37)

Further, we have

$$\begin{aligned} \frac{d}{{dt}}({V_i}(t){e^{{c_i}t}}) \leqslant \frac{1}{4}{e^{{c_i}t}}z_{_{i + 1}}^2 +[ \sum \nolimits ^i_{j=1}{( \frac{{{g_j}({{\bar{x}}_j})}}{{\left| {{g_j}({{\bar{x}}_j})} \right| }}{z_j}N({\varsigma _j}){\dot{\varsigma }_j} + {\dot{\varsigma }_j} + {c_{\varepsilon j}})}]{e^{{c_i}t}} + \sum \nolimits ^i_{j=1}{\varDelta _j}{e^{{c_i}t}} \end{aligned}$$
(4.38)

As in the first step, integrating both sides of (4.38), we have

$$\begin{aligned} \begin{aligned} {V_i}(t) \leqslant&{\rho _i} + {V_i}(0) + {e^{ - {c_i}t}}\int _0^t {\frac{1}{4}{e^{{c_i}t}}z_{_{i + 1}}^2d\tau } +\\&{e^{ - {c_i}t}}\sum \nolimits ^i_{j=1}\int _0^t {(\frac{{{g_j}({{\bar{x}}_j})}}{{\left| {{g_j}({{\bar{x}}_j})} \right| }}N({\varsigma _j}) + 1){e^{{c_i}t}}} {{\dot{\varsigma }}_j}d\tau + {e^{ - {c_i}t}}\sum \nolimits ^i_{j=1}\int _0^t {{e^{{c_i}t}}{{\varDelta _j}}d\tau } \\ \end{aligned} \end{aligned}$$
(4.39)

where \({\rho _i} = \frac{\sum \nolimits ^i_{j=1}{c_{\varepsilon j}}}{{c_i}}\).

Similar to Step 1, if \(z_{i+1}\) is proved to be bounded and \(\sum \nolimits ^i_{j=1}{\varDelta _j}=0\), then it follows from Lemmas 4.2 and 4.3 that \({e^{ - {c_i}t}}\int _0^t {\frac{1}{4}{e^{{c_i}t}}z_{_{i + 1}}^2d\tau }\) is bounded, and further that \({V_i}(t),{\varsigma _i},{\hat{\theta }_i},{\hat{b}_i}\) are bounded in \([0,+\infty )\).

Note that the boundedness of \({{z}_{i+1}}\) will be established in the next step, while the terms \(\sum \nolimits ^i_{j=1}{\varDelta _j}\) will be compensated in the last step.

Remark 4.6

From the aforementioned analysis, it is easily seen that the virtual control laws \({{\alpha }_{i}}\) are continuous functions of the variables \({{\bar{x}}_{i}}\), \({{\bar{z}}_{i}}\), \(\omega _i\), \(\dot{\omega }_i\) and \(\hat{\theta }_i\). Since these variables are available, the first derivative of \({{\alpha }_{i}}\), i.e., \({{\dot{\alpha }}_{i}}\), can be obtained by analytical computation. However, as stated in the Introduction, the computation of the higher derivatives of \({{{\alpha }}_{i}}\) becomes increasingly complicated as the system dimension n increases. In this chapter, by using command filter (4.4), only the first derivative is utilized, which reduces the computational complexity.
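The command filter itself is inexpensive to implement. The sketch below uses a standard second-order filter (an assumed form; the exact filter (4.4) is not repeated here) whose two states directly provide \(\omega_i\) and \(\dot{\omega}_i\) without differentiating \(\alpha_{i-1}\) analytically; the bandwidth `wn` and damping `zeta` are illustrative values.

```python
def command_filter_step(q1, q2, alpha, dt, wn=50.0, zeta=0.9):
    # One forward-Euler step of a second-order command filter:
    #   dot(q1) = q2,  dot(q2) = -2*zeta*wn*q2 - wn^2*(q1 - alpha).
    # The output omega = q1 tracks the virtual control alpha, and
    # dot(omega) = q2 is available without computing dot(alpha).
    q1_new = q1 + dt * q2
    q2_new = q2 + dt * (-2.0 * zeta * wn * q2 - wn**2 * (q1 - alpha))
    return q1_new, q2_new
```

Feeding a constant \(\alpha\), the filter states settle to \((\alpha,0)\), i.e., \(\omega \to \alpha\) and \(\dot{\omega} \to 0\).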

Step n:

Now, consider the \({{z}_{n}}\)-subsystem: \({{z}_{n}}={{x}_{n}}-{{\alpha }_{n-1}}\). From (4.1)–(4.3), one has

$$\begin{aligned} \begin{aligned} {\dot{z}_n}&= {f_n}({\bar{x}_n}) +{g_n}({\bar{x}_n}){g_f}({\bar{x}_n})u+{g_n}({\bar{x}_n})b_f({\bar{x}_n})- {\dot{\alpha }_{n - 1}}\\&=\bar{f}_n({\bar{x}_n})+{\bar{g}_n}({\bar{x}_n})u - {\dot{\alpha }_{n - 1}}\\ \end{aligned} \end{aligned}$$
(4.40)

where \(\bar{f}_n({\bar{x}_n})={f_n}({\bar{x}_n}) + {g_n}({\bar{x}_n})b_f({\bar{x}_n})\) and \({\bar{g}_n}({\bar{x}_n})={g_n}({\bar{x}_n}){g_f}({\bar{x}_n})\).

Define the following Lyapunov function

$$\begin{aligned} {V_{{z_n}}} = \int _{{\text { }}0}^{{\text { }}{z_n}} {\frac{\sigma }{{\left| {{\bar{g}_n}({{\bar{x}}_{n - 1}},\sigma + {\alpha _{n - 1}})} \right| }}d\sigma } \end{aligned}$$
(4.41)

From the analysis in the previous step, \(V_{z_n}\) is a positive definite function of \(z_n\).

Similar to the previous steps, differentiating \({{V}_{{{z}_{n}}}}\) with respect to time t, one has

$$\begin{aligned} \begin{aligned} {{\dot{V}}_{{z_n}}}&\leqslant \frac{{{z_n}}}{{\left| {{\bar{g}_n}({{\bar{x}}_n})} \right| }}({\bar{g}_n}({{\bar{x}}_n})u + {d_n}({{\bar{x}}_n},t)) + {h'_n}({{\bar{z}}_n}){z_n}+\varDelta _n \\ \end{aligned} \end{aligned}$$
(4.42)

where

$$\begin{aligned} \begin{aligned} {h'_n}({\bar{z}_n}) =&\frac{{{\bar{f}_n}({{\bar{x}}_n})}}{{\left| {{\bar{g}_n}({{\bar{x}}_n})} \right| }} + \frac{1}{{{z_n}}}\int _{{\text { }}0}^{{z_n}{\text { }}} {\sigma \left[ {\frac{{\partial \left| {\bar{g}_n^{ - 1}({{\bar{x}}_{n}},\sigma + {\omega _{n}})} \right| }}{{\partial {{\bar{x}}_{n}}}}{{\dot{ \bar{ x}}}_{n }}d\sigma } \right] } + \\&\frac{{{{\dot{\omega }}_{n }}}}{{{z_n}}}\int _{{\text { }}0}^{{\text { }}{z_n}} {\frac{1}{{\left| {\bar{g}_n^{ - 1}({{\bar{x}}_{n}},\sigma + {\omega _{n }})} \right| }}d\sigma }\\ \end{aligned} \end{aligned}$$
(4.43)
$$\begin{aligned} \begin{aligned} \varDelta _n=&\int _{{\text { }}0}^{{z_n}{\text { }}} {\sigma \left[ {\frac{{\partial \left| {\bar{g}_n^{ - 1}({{\bar{x}}_{n}},\sigma + {\alpha _{n - 1}})} \right| }}{{\partial {{\bar{x}}_{n}}}}{{\dot{ \bar{ x}}}_{n}}d\sigma } \right] }+{{\dot{\alpha }}_{n-1}}\int _{{\text { }}0}^{{\text { }}{z_n}} {\frac{1}{{\left| {\bar{g}_n^{ - 1}({{\bar{x}}_{n}},\sigma + {\alpha _{n - 1}})} \right| }}d\sigma }-\\&\frac{1}{{{z_n}}}\int _{{\text { }}0}^{{z_n}{\text { }}} {\sigma \left[ {\frac{{\partial \left| {\bar{g}_n^{ - 1}({{\bar{x}}_{n}},\sigma + {\omega _{n}})} \right| }}{{\partial {{\bar{x}}_{n}}}}{{\dot{ \bar{ x}}}_{n}}d\sigma } \right] } - \frac{{{{\dot{\omega }}_{n}}}}{{{z_n}}}\int _{{\text { }}0}^{{\text { }}{z_n}} {\frac{1}{{\left| {\bar{g}_n^{ - 1}({{\bar{x}}_{n}},\sigma + {\omega _{n }})} \right| }}d\sigma }\\ \end{aligned} \end{aligned}$$
(4.44)

Adding and subtracting \(\sum \nolimits ^{n-1}_{j=1}{\varDelta _j}\) on the right-hand side of (4.42), we have

$$\begin{aligned} \begin{aligned} {{\dot{V}}_{{z_n}}}&\leqslant \frac{{{z_n}{\bar{g}_n}({{\bar{x}}_n})}}{{\left| {{\bar{g}_n}({{\bar{x}}_n})} \right| }}u + \left| {{z_n}} \right| {\rho ^ * } + \frac{{\left| {{z_n}} \right| }}{{{g_{n0}}}}p_n^*{\varphi _n}({x_n}) + {h'_n}({{\bar{z}}_n}){z_n}+\sum \nolimits ^n_{j=1}{\varDelta _j}-\sum \nolimits ^{n-1}_{j=1}{\varDelta _j} \\ \end{aligned} \end{aligned}$$
(4.45)

Remark 4.7

The purpose of adding and subtracting \(\sum \nolimits ^{n-1}_{j=1}{\varDelta _j}\) is to remove the error terms \(\sum \nolimits ^{n-1}_{j=1}{\varDelta _j}\) in (4.37), which are introduced by command filter (4.4) in the previous \(n-1\) steps.

It is easily seen that \(\varDelta _j(j=1,\ldots ,n)\) is a function of variables \(\bar{x}_j\), \(\bar{z}_j\), \(\bar{\alpha }_j\), \(\dot{\bar{ \alpha }}_j\), \(\bar{\omega }_j\) and \(\dot{\bar{\omega }}_j\), where \(\bar{x}_j=(x_1,\ldots ,x_j)^T\), \(\bar{z}_j=(z_1,\ldots ,z_j)^T\), \(\bar{\alpha }_j=(\alpha _0,\ldots ,\alpha _{j-1})^T\), \(\dot{\bar{\alpha }}_j=(\dot{\alpha }_0,\ldots ,\dot{\alpha }_{j-1})^T\), \(\bar{\omega }_j=(\omega _1,\ldots ,\omega _j)^T\), \(\dot{\bar{\omega }}_j=(\dot{\omega }_1,\ldots ,\dot{\omega }_j)^T\). Let

$$h_n(\bar{Z}_n)=h'_n(\bar{Z}_n)+\sum \nolimits ^{n-1}_{j=1}{\varDelta _j}$$

where \(\bar{Z}_n=(\bar{x}_n^T, \bar{z}_n^T, \bar{\alpha }_n^T, \dot{\bar{ \alpha }}_n^T, \bar{\omega }_n^T, \dot{\bar{\omega }}_n^T )^T\).

From the previous analysis, it is seen that \(h'_n(\bar{Z}_n)\) and \(\varDelta _j\) are smooth, which means that \(h_n(\bar{Z}_n)\) is also smooth. Hence, FLSs can be utilized to approximate it in the form \(h_n(\bar{Z}_n)=\theta ^{*T}_n\xi _n(\bar{Z}_n)+\varepsilon _n(\bar{Z}_n)\). From Assumption 4.5, we know that there exists an unknown constant \(\varepsilon _n^*\) such that \(|\varepsilon _n(\bar{Z}_n)|\le \varepsilon _n^*\).

The actual control is defined as follows:

$$\begin{aligned} u = N({\varsigma _n})[{k_n}{z_n} + {h_n}({\bar{Z}_n},{\hat{\theta }_n}) + {\hat{b}_n}\bar{\varphi }({\bar{x}_n})\tanh (\frac{{{z_n}\bar{\varphi }({{\bar{x}}_n})}}{{{\eta _n}}})] \end{aligned}$$
(4.46)
$$\begin{aligned} {\dot{\varsigma }_n} = {k_n}z_n^2 + {h_n}({\bar{Z}_n},{\hat{\theta }_n}){z_n} + {\hat{b}_n}\bar{\varphi }({\bar{x}_n}){z_n}\tanh (\frac{{{z_n}\bar{\varphi }({{\bar{x}}_n})}}{{{\eta _n}}}) \end{aligned}$$
(4.47)

where \(k_n>\frac{1}{4}\) is a design parameter; \(h_n(\bar{Z}_n,\hat{\theta }_n)=\hat{\theta }_n^T\xi _n(\bar{Z}_n)\) is an estimate of \(\theta _n^{*T}\xi _n(\bar{Z}_n)\); \(\hat{b}_n\) is an estimate of \(b_n^* = \max \{ \varepsilon _n^*,\frac{{p_n^*}}{{{g_{n0}}}}\}\); \({\bar{\varphi }_n}({\bar{x}_n}) = 1 + {\varphi _n}({\bar{x}_n})\).

Substituting (4.46) and (4.47) into (4.45) yields

$$\begin{aligned} \begin{aligned} {{\dot{V}}_{{z_n}}} \leqslant&- {k_n}z_n^2 + \frac{{{\bar{g}_n}({{\bar{x}}_n})}}{{\left| {{\bar{g}_n}({{\bar{x}}_n})} \right| }}{z_n}N({\varsigma _n}){{\dot{\varsigma }}_n} + {{\dot{\varsigma }}_n} - \tilde{\theta }_n^T{\xi _n}({{\bar{Z}}_n}){z_n}-\sum \nolimits ^{n-1}_{j=1}{\varDelta _j} + \\&b_n^*[\left| {{z_n}} \right| {{\bar{\varphi }}_n}({{\bar{x}}_n}) - {z_n}{{\bar{\varphi }}_n}({{\bar{x}}_n})\tanh (\frac{{{z_n}{{\bar{\varphi }}_n}({{\bar{x}}_n})}}{{{\eta _n}}})] - {{\tilde{b}}_n}{{\bar{\varphi }}_n}({{\bar{x}}_n}){z_n}\tanh (\frac{{{z_n}{{\bar{\varphi }}_n}({{\bar{x}}_n})}}{{{\eta _n}}}) \\ \end{aligned} \end{aligned}$$
(4.48)

where \(\tilde{\theta }_n=\theta ^*_n-\hat{\theta }_n\) and \(\tilde{b}_n=b^*_n-\hat{b}_n\).

Define the following Lyapunov function

$$\begin{aligned} {V_n}(t) =V_{n-1}+ {V_{z{}_n}} + {\text { }}\frac{1}{2}{\tilde{\theta }_n}^T\varGamma _n^{ - 1}{\tilde{\theta }_n} + \frac{1}{{2{\lambda _n}}}\tilde{b}_n^2 \end{aligned}$$
(4.49)

The following adaptive laws are defined as:

$$\begin{aligned} {\dot{ \hat{ \theta }} _n} = {\varGamma _n}[{z_n}{\xi _n}({\bar{Z}_n}) - {\sigma _n}{\hat{\theta }_n}] \end{aligned}$$
(4.50)
$$\begin{aligned} {\dot{ \hat{ b}}_n} = {\lambda _n}[{z_n}{\bar{\varphi }_n}({\bar{x}_n})\tanh (\frac{{{z_n}{{\bar{\varphi }}_n}({{\bar{x}}_n})}}{{{\eta _n}}}) - {\sigma _{bn}}{\hat{b}_n}] \end{aligned}$$
(4.51)

where \(\varGamma _n\) is a positive definite matrix, \({\eta _n}> 0,{\sigma _n}> 0,{\sigma _{bn}} > 0\) and \({\lambda _n} > 0\) are design parameters.

Differentiating \({{V}_{{{n}}}}\) with respect to time t and considering (4.50), (4.51) and Lemma 4.1, similar to the previous steps, one has

$$\begin{aligned} {\dot{V}_n} \leqslant \dot{V}_{n-1} - {k_n}z_{_n}^2 + \frac{{{\bar{g}_n}({{\bar{x}}_n})}}{{\left| {{\bar{g}_n}({{\bar{x}}_n})} \right| }}N({\varsigma _n}){\dot{\varsigma }_n} + {\dot{\varsigma }_n} + 0.2785{\eta _n}b_n^* - {\sigma _n}\tilde{\theta }_n^T{\hat{\theta }_n} - {\text { }}{\sigma _{bn}}{\tilde{b}_n}{\hat{b}_n} \end{aligned}$$
(4.52)

From Young’s inequality, we have

$$\begin{aligned} {\sigma _n}\tilde{\theta }_n^\mathrm{T}{\hat{\theta }_n} \leqslant - \frac{{{\sigma _n}{{\left\| {{{\tilde{\theta }}_n}} \right\| }^2}}}{2} + \frac{{{\sigma _n}{{\left\| {\theta _n^ * } \right\| }^2}}}{2},{\sigma _{bn}}{\tilde{b}_n}{\hat{b}_n} \leqslant - \frac{{{\sigma _{bn}}{{\tilde{b}}_n}^2}}{2} + \frac{{{\sigma _{bn}}b{{_n^ * }^2}}}{2} \end{aligned}$$
(4.53)
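Both bounds in (4.53) are instances of the completion of squares \(\tilde{\theta }^\mathrm{T}\hat{\theta } = \frac{1}{2}(\Vert \theta ^*\Vert ^2 - \Vert \tilde{\theta }\Vert ^2 - \Vert \hat{\theta }\Vert ^2) \leqslant \frac{1}{2}(\Vert \theta ^*\Vert ^2 - \Vert \tilde{\theta }\Vert ^2)\). A quick numerical spot-check (illustrative only, not a proof):

```python
import numpy as np

# Verify theta_tilde^T theta_hat <= (||theta_star||^2 - ||theta_tilde||^2)/2
# on random samples, where theta_tilde = theta_star - theta_hat.
rng = np.random.default_rng(0)
for _ in range(1000):
    theta_star = rng.normal(size=5)
    theta_hat = rng.normal(size=5)
    theta_tilde = theta_star - theta_hat
    lhs = theta_tilde @ theta_hat
    rhs = (theta_star @ theta_star - theta_tilde @ theta_tilde) / 2.0
    assert lhs <= rhs + 1e-12
print("bound holds on all samples")
```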

Let \({c_{\varepsilon n}} = 0.2785{\eta _n}b_n^* + \frac{{{\sigma _n}{{\left\| {\theta _n^ * } \right\| }^2}}}{2} + \frac{{{\sigma _{bn}}b{{_n^ * }^2}}}{2}\), then (4.52) can be derived as

$$\begin{aligned} \begin{aligned} {\dot{V}_n} \leqslant&\dot{V}_{n-1} - 2{k_n}{\bar{g}_{n0}}{V_{{z_n}}} + \frac{{{\bar{g}_n}({{\bar{x}}_n})}}{{\left| {{\bar{g}_n}({{\bar{x}}_n})} \right| }}N({\varsigma _n}){\dot{\varsigma }_n} + {\dot{\varsigma }_n} + \\&{c_{\varepsilon n}} - \frac{{{\sigma _n}{{\left\| {{{\tilde{\theta }}_n}} \right\| }^2}}}{2} - \frac{{{\text { }}{\sigma _{bn}}{{\tilde{b}}_n^2}}}{2}\\ \end{aligned} \end{aligned}$$
(4.54)

Let

$${c_n} = \min \{ 2{k_n}{\bar{g}_{n0}},\frac{{{\sigma _n}}}{{{\lambda _{\min }}(\varGamma _n^{ - 1})}},\frac{{{\sigma _{bn}}}}{{{\lambda _n}}}\}$$

Then, from the analysis in the previous steps, (4.54) can be further developed as follows:

$$\begin{aligned} {\dot{V}_n} \leqslant - {c_n}{V_n} + \sum ^n_{i=1}[\frac{{{\bar{g}_i}({{\bar{x}}_i})}}{{\left| {{\bar{g}_i}({{\bar{x}}_i})} \right| }}N({\varsigma _i}){\dot{\varsigma }_i} + {\dot{\varsigma }_i} + {c_{\varepsilon i}}] \end{aligned}$$
(4.55)

Further, we have

$$\begin{aligned} \frac{d}{{dt}}({V_n}(t){e^{{c_n}t}}) \leqslant {e^{{c_n}t}}\sum ^n_{i=1}[\frac{{{\bar{g}_i}({{\bar{x}}_i})}}{{\left| {{\bar{g}_i}({{\bar{x}}_i})} \right| }}N({\varsigma _i}){\dot{\varsigma }_i} + {\dot{\varsigma }_i} + {c_{\varepsilon i}}] \end{aligned}$$
(4.56)

where \(\bar{g}_i(\cdot )=g_i(\cdot )\), \(i=1,\ldots , n-1\).

Let \({\rho _n} = \frac{\sum \nolimits ^n_{j=1}{c_{\varepsilon j}}}{c_n}\). Similar to the previous steps, integrating both sides of the above inequality, we have

$$\begin{aligned} \begin{aligned} {V_n}(t)&\leqslant {\rho _n} + [{V_n}(0) - {\rho _n}]{e^{ - {c_n}t}} + {e^{ - {c_n}t}}\int _0^t [{e^{{c_n}t}}\sum \nolimits ^n_{i=1} {(\frac{{{\bar{g}_i}({{\bar{x}}_i})}}{{\left| {{\bar{g}_i}({{\bar{x}}_i})} \right| }}N({\varsigma _i}) + 1)} {{\dot{\varsigma }}_i}]d\tau \\&\leqslant {\rho _n} + {V_n}(0) + {e^{ - {c_n}t}}\int _0^t [{e^{{c_n}t}}\sum \nolimits ^n_{i=1} {(\frac{{{\bar{g}_i}({{\bar{x}}_i})}}{{\left| {{\bar{g}_i}({{\bar{x}}_i})} \right| }}N({\varsigma _i}) + 1)} {{\dot{\varsigma }}_i}]d\tau \\ \end{aligned} \end{aligned}$$
(4.57)

From Lemmas 4.2 and 4.3, it is easily seen that \({V_n}(t),{\varsigma _n},{\hat{\theta }_n},{\hat{b}_n}\) are bounded in \([0,t_f)\). From [43], the same results can be obtained in \([0,+\infty )\). Thus, \(z_n\) is bounded in \([0,+\infty )\), which means that \(z_{n-1}\) in the \((n-1)\)th step is bounded. By the same reasoning, we finally obtain that all \({z_i}(t), i = 1,2, \ldots, n\), are bounded.

From the definitions of \(V_{z_{i}}\) and \(V_i\), \(i=1,\ldots ,n\), we know

$$\begin{aligned} {V_n}(t) =\sum \nolimits ^n_{i=1}[ {V_{z{}_i}} + \frac{1}{2}{\tilde{\theta }_i}^T\varGamma _i^{ - 1}{\tilde{\theta }_i} + \frac{1}{{2{\lambda _i}}}\tilde{b}_i^2] \end{aligned}$$
(4.58)

From the previous analysis, we have

$$\begin{aligned} \frac{z^2_i}{2g_{i1}} \le {V_{{z_i}}} = \int _{{\text { }}0}^{{\text { }}{z_i}} {\frac{\sigma }{{\left| {{g_i}({{\bar{x}}_{i - 1}},\sigma + {\alpha _{i - 1}})} \right| }}d\sigma }\le \frac{z^2_i}{2g_{i0}} \end{aligned}$$
(4.59)

Hence, from (4.57)–(4.59), we have

$$|{z_i}| \leqslant \sqrt{{\mu }},~~ {\left\| {\tilde{\theta }{}_i} \right\| ^2} \leqslant \frac{{{\mu }}}{{{\lambda _{\min }}(\varGamma _i^{ - 1})}},~~ \tilde{b}_i^2 \leqslant {\lambda _i}\mu ,~~i = 1,2, \ldots ,n,~~\forall t \geqslant 0$$

where \({\mu } = 2{\bar{g}_{\max }}(\rho {}_n + {V_n}(0) + {N_n})\), \({\bar{g}_{\max }} = \mathop {\max }\limits _{1 \leqslant i \leqslant n} {\bar{g}_{i1}} > 0\), \(\bar{g}_{i1}={g}_{i1}\), \(i=1,\ldots ,n-1\), \(\bar{g}_{n1}={g}_{n1}g_{f1}\),

$$\begin{aligned} {N_n} = \mathop {\lim }\limits _{t \rightarrow + \infty } \sum \nolimits ^n_{i=1}\left[ {{e^{ - {c_n}t}}\int _0^t {(\frac{{{\bar{g}_i}({{\bar{x}}_i})}}{{\left| {{\bar{g}_i}({{\bar{x}}_i})} \right| }}N({\varsigma _i}) + 1){e^{{c_n}t}}} {{\dot{\varsigma }}_i}d\tau } \right] \end{aligned}$$
(4.60)

The above design procedures and analysis are summarized in the following theorem.

Theorem 4.1

Consider system (4.1) with fault (4.2). If Assumptions 4.1–4.5 hold, and the command filter (4.4), the actual control defined by (4.46) and (4.47), and the adaptation laws (4.15), (4.16), (4.34), (4.35), (4.50) and (4.51) are employed, then the closed-loop system is asymptotically bounded with the tracking error converging to a neighborhood of the origin.

Proof

From the aforementioned analysis, it is easy to obtain the conclusion. The detailed proof is omitted here.

Fig. 4.1

The time profiles of system output y and desired signal \(y_d\)

Fig. 4.2

The time profiles of the tracking error

4.4 Illustrative Example

In this example, the considered nonlinear system is described as follows:

$$\begin{aligned} \left\{ \begin{aligned}&{{{\dot{x}}}_{1}}=x_1+(1+0.5\sin (x_1^2))x_2+0.2x_1\sin (x_2t) \\&{{{\dot{x}}}_{2}}= x_1x_2+(3-\cos (x_1x_2))u+0.1\cos (0.5x_2t)\\&y={{x}_{1}} \\ \end{aligned} \right. \end{aligned}$$
(4.61)

From (4.61), it is easily seen that \(g_{10}=0.5\), \(g_{11}=1.5\), \(g_{20}=2\), \(g_{21}=4\), \(p^*_{1}=0.2\), \(\varphi _{1}=x_1\), \(p^*_{2}=0.1\) and \(\varphi _{2}=1\), which means that Assumptions 4.1 and 4.2 hold. In this work, the desired trajectory is \({{y}_{d}}=0.1\sin (t)\); obviously, Assumption 4.3 holds. The actuator fault considered in this simulation study is described as follows:

$$u^f=(1-0.5\sin (x_2))u+\cos (x_1x_2)$$

Obviously, \(g_{f0}=0.5\) and \(g_{f1}=1.5\), which means that Assumption 4.4 holds.
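The quoted gain bounds can be spot-checked numerically on a state grid (a sanity check, not a proof; the grid range is an arbitrary choice):

```python
import numpy as np

# Grid check of the gain bounds quoted for (4.61) and the fault model:
#   g_1 = 1 + 0.5 sin(x_1^2)  in [0.5, 1.5]
#   g_2 = 3 - cos(x_1 x_2)    in [2, 4]
#   g_f = 1 - 0.5 sin(x_2)    in [0.5, 1.5]
x = np.linspace(-5.0, 5.0, 401)
X1, X2 = np.meshgrid(x, x)
g1 = 1.0 + 0.5 * np.sin(X1**2)
g2 = 3.0 - np.cos(X1 * X2)
gf = 1.0 - 0.5 * np.sin(X2)
print(g1.min() >= 0.5 and g1.max() <= 1.5)   # True
print(g2.min() >= 2.0 and g2.max() <= 4.0)   # True
print(gf.min() >= 0.5 and gf.max() <= 1.5)   # True
```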

The control objective is to construct an adaptive state feedback controller for nonlinear system (4.61) such that the system output y tracks the desired reference signal \({{y}_{d}}\) with all the signals in the resulting closed-loop system being asymptotically bounded.

For this work, the following parameters are given: \(k_1=k_2=3\), \(\varGamma _1=\varGamma _2=\mathrm{diag}\{1,1,1,1,1,1,1,1,1,1\}\), \(\lambda _1=\lambda _2=1\), \({{\eta }_{1 }}={{\eta }_{2}}=0.01\), \({{\sigma }_{b1}}={{\sigma }_{b2}}=0.1\); the initial values of \({{\hat{\theta }}_{i}}\in {{R}^{10}}\), \(i=1,2\), are taken randomly in the interval (0,1]. The initial state x(0) is set as \({{(0.2,0.1)}^{T}}\). The sample time is 0.08 s.
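For readers who wish to reproduce a simulation of this kind, the harness below sketches the Euler integration loop for (4.61). To keep it short it replaces the adaptive fuzzy/Nussbaum design with a certainty-equivalence backstepping law that uses the nominal gains of (4.61), uses a first-order stand-in for the command filter, and runs fault-free; the step size, filter constant, and horizon are illustrative choices, not the chapter's settings.

```python
import numpy as np

dt, T = 0.001, 20.0            # integration step (finer than the 0.08 s sample time)
tau = 0.02                     # filter time constant (assumed)
k1, k2 = 3.0, 3.0              # gains as in the text
x1, x2, omega = 0.2, 0.1, 0.0  # x(0) = (0.2, 0.1)^T; filter state
for n in range(int(T / dt)):
    t = n * dt
    yd, yd_dot = 0.1 * np.sin(t), 0.1 * np.cos(t)
    g1 = 1.0 + 0.5 * np.sin(x1**2)
    g2 = 3.0 - np.cos(x1 * x2)
    z1 = x1 - yd
    alpha1 = (-k1 * z1 - x1 + yd_dot) / g1     # virtual control (nominal gains)
    omega_dot = (alpha1 - omega) / tau         # first-order command-filter stand-in
    z2 = x2 - omega
    u = (-k2 * z2 - x1 * x2 + omega_dot) / g2  # actual control (nominal gains)
    d1 = 0.2 * x1 * np.sin(x2 * t)             # disturbances of (4.61)
    d2 = 0.1 * np.cos(0.5 * x2 * t)
    x1, x2 = (x1 + dt * (x1 + g1 * x2 + d1),
              x2 + dt * (x1 * x2 + g2 * u + d2))
    omega += dt * omega_dot
print(abs(x1 - 0.1 * np.sin(T)))  # residual tracking error stays small
```

With known gains the \(z_2\)-dynamics reduce exactly to \(\dot{z}_2=-k_2z_2+d_2\), so the residual error is set by the bounded disturbances and the filter lag.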

Simulation results are shown in Figs. 4.1, 4.2 and 4.3. From Fig. 4.1, we can find that system (4.61) has good tracking performance. Figure 4.2 shows that the tracking error converges to a neighborhood of the origin. Meanwhile, the boundedness of control input signal is shown in Fig. 4.3.

Fig. 4.3

The time profiles of the control input signal

4.5 Conclusions

In this chapter, an adaptive fuzzy tracking fault-tolerant control problem of a class of uncertain strict-feedback nonlinear systems with actuator fault has been investigated. FLSs are used to approximate the unknown nonlinear functions. By applying adaptive command filtered backstepping recursive design, integral-type Lyapunov function method and Nussbaum-type gain technique, an adaptive fuzzy control scheme is proposed to guarantee that the closed-loop system is asymptotically bounded with the tracking error converging to a neighborhood of the origin.