
Output feedback fault-tolerant control for a class of nonlinear systems via dynamic gain and neural network

  • Xiaoye Xi
  • Tingzhang Liu
  • Jianfei Zhao
  • Limin Yan
Open Access
ATCI 2019

Abstract

In this paper, by combining the dynamic gain and the self-adaptive neural network, an output feedback fault-tolerant control method was proposed for a class of nonlinear uncertain systems with actuator faults. First, the dynamic gain was introduced and a coordinate transformation of the state variables of the system was performed to design the corresponding state observers. Then, the observer-based output feedback controller was designed through the back-stepping method. The output feedback control method based on the dynamic gain can solve the adaptive fault-tolerant control problem when the system contains simple nonlinear functions with uncertain parameters. For the more complex uncertain nonlinear functions in the system, a single hidden layer neural network was used for compensation, and fault-tolerant control was realized by combining it with the dynamic gain. Finally, the altitude and attitude control system of an unmanned aerial vehicle with actuator faults was taken as an example to verify the effectiveness of the proposed method.

Keywords

Fault-tolerant control · Dynamic gain · Neural network · Output feedback

1 Introduction

Fault-tolerant control of nonlinear systems has long been a concern in the control field. Since nonlinear systems usually have complex structures and more uncertainties, faults are more difficult to compensate once they occur. At the current stage, there have been many achievements in the research on fault-tolerant control of nonlinear systems, but many problems still need to be solved.

The main obstacles to achieving fault-tolerant control of nonlinear systems are the nonlinear functions in the system, uncertain parameters, and unknown fault signals. Dynamic feedback is a good method to compensate for the uncertainties in the system [1, 2]. When the nonlinear functions and the uncertainties satisfy certain conditions, dynamic feedback can effectively perform adaptive compensation. Time-varying feedback was introduced for a class of nonlinear systems with time-varying uncertain parameters in [3], where the existence of the time-varying feedback was analyzed and all state variables of the system were stabilized. However, the time-varying gain could not form a closed loop with the original system and was an unbounded signal. Such a gain is convenient and intuitive in theoretical analysis, but is greatly limited in practical applications.

The time-varying dynamic gain that can form a closed loop with the system has long been a hot topic in adaptive control research. Applying the dynamic gain, an output feedback stabilization method for a class of uncertain nonlinear systems with control functions was designed in [4]. The dynamic equation of the gain was related to the output of the system and the state of the observer, so it could form a closed loop with the system, and the stability of the whole closed-loop system was demonstrated. On this basis, stabilization based on the dynamic gain for systems with stronger nonlinearity and uncertainty was studied in [5, 6], which gave different dynamic gain designs and different proofs of closed-loop stability. The paper [7] realized error tracking of nonlinear systems by using the dynamic gain, so that all variables of the closed-loop system were uniformly bounded and the output signal could track the reference signal with a set accuracy parameter.

When a fault occurs in the system, whether it is an effectiveness loss fault of the actuator or an unknown stuck fault, it can be regarded as an uncertainty of the system and adaptively compensated by the dynamic gain. In [8], fault-tolerant control of actuators with dead zones in nonlinear systems was considered and the dead-zone faults were compensated by introducing the dynamic gain. The paper [9] introduced a switching mechanism into the dynamic gain fault-tolerant control method, which accelerated the fault compensation. When applying the dynamic gain to solve nonlinear control problems, it is usually necessary to assume that the nonlinear function satisfies certain conditions, such as Lipschitz or Lipschitz-like properties. The dynamic gain cannot be used when the nonlinear function is so complex that it cannot satisfy such conditions.

Neural networks are widely used in the adaptive control of various nonlinear systems [10]. Compensation of the nonlinear functions is achieved through adaptive weights, which further solves the control problem [11]. In [12], comprehensive faults in nonlinear systems were considered and a fault-tolerant control method using the radial basis function (RBF) neural network was studied. The designed observer was only used for fault information extraction, not for output feedback; the controller was state feedback.

The paper [13] compensated the nonlinear functions of a class of nonlinear interconnected systems by the RBF neural network, and a state feedback fault-tolerant controller was designed by using the back-stepping method and exploiting the interconnected characteristics of the system. Since state feedback requires all variables of the system to be measurable, this method cannot be well applied in practice. The paper [14] used the RBF neural network to design an output feedback fault-tolerant controller for a class of nonlinear systems in which both the effectiveness loss fault of the actuator and the unknown stuck fault were considered. However, in [14], the effectiveness loss fault was not considered when designing the state observer, and the effectiveness of the proposed method was demonstrated only by simulation results, without a theoretical basis. In [15], an output feedback controller was designed for the fault-free system and the compensation effect of the controller on faults was demonstrated only in simulation. On the other hand, faults were considered in designing the output feedback controller in [16]; however, proving the stability of the system theoretically required harsh assumptions, so this method cannot be further generalized. When there is a fault in the system, especially an effectiveness loss fault, output feedback based on the neural network introduces new, difficult-to-handle nonlinear terms in the design process due to the actuator effectiveness loss. Therefore, most studies first design the controller for the fault-free case, and then demonstrate the compensation effect on faults through simulation results, which lacks a theoretical basis.

Most of the neural networks used in the literature had no hidden layer, or had hidden layers whose weights were set artificially rather than updated self-adaptively. Networks with hidden layers have rarely been used in the control of nonlinear systems. A single hidden layer neural network was combined with a filter to design an output feedback stabilization method for a class of nonlinear systems in [17]. This method was difficult to generalize since it placed harsh requirements on the system. The paper [18] adopted the single hidden layer neural network to compensate the nonlinear functions in a quad-rotor UAV system and designed an output feedback trajectory tracking controller, which was useful for the tracking control of the UAV. Although the single hidden layer network was adopted in [18], the weights of the hidden layers were artificially set constants rather than self-adaptively updated. When a network with hidden layers is applied in the system and the weights are adaptively updated, the system becomes more complex, more parameters that are difficult to handle emerge, and the stability is affected as well.

At the current stage, various methods for the fault-tolerant control of nonlinear systems have emerged, each with its own advantages and disadvantages. Combining fault-tolerant control methods to solve the fault compensation problem for systems with stronger nonlinearity and uncertainty is still a problem that needs to be studied and solved.

In this paper, the dynamic gain was combined with the adaptive neural network. The simple nonlinearities, uncertainties and faults were adaptively compensated through the dynamic gain. The more complex nonlinear functions were approximated by the dynamic single hidden layer neural network, and the compensation was completed by combining it with the dynamic gain. Combining the dynamic gain with the neural network in this way allows the adaptive single hidden layer network to be successfully applied in the fault-tolerant control of nonlinear systems.

2 Problem formulation

Consider a class of nonlinear systems described by
$$\left\{ {\begin{array}{*{20}l} {\dot{\xi}_{1} = A_{1} (t)\xi_{2} + \Delta_{1} \left( {\xi_{1} } \right)} \hfill \\ \vdots \hfill \\ {\dot{\xi}_{N - 1} = A_{N - 1} (t)\xi_{N} + \Delta_{N - 1} \left( {\xi_{1} ,\ldots,\xi_{N - 1} } \right)} \hfill \\ {\dot{\xi}_{N} = B(t)u^{F} (t) + f\left( {\xi_{1} ,\ldots,\xi_{N} } \right) + \Delta_{N} \left( {\xi_{1} ,\ldots,\xi_{N} } \right)} \hfill \\ \end{array} } \right.$$
(1)
where \(\xi_{i} \in R^{n}\), \(i = 1,\ldots,N\), are the system state vectors; \(u^{F} (t) \in R^{n}\) is the input vector under actuator faults; \(A_{i} (t) \in R^{n \times n}\), \(i = 1,\ldots,N - 1\), are unknown time-varying matrices; \(B(t) \in R^{n \times n}\) is a known time-varying matrix which is continuously differentiable with respect to \(t\); \(\Delta_{i} \left( {\xi_{1} ,\ldots,\xi_{i} } \right) \in R^{n}\), \(i = 1,\ldots,N\), are uncertain nonlinear functions; \(f\left( {\xi_{1} ,\ldots,\xi_{N} } \right) \in R^{n}\) is a more complex uncertain nonlinear function, which does not necessarily satisfy the Lipschitz properties and may have a complex and unknown structure; only \(\xi_{1}\) is measurable among all the state variables.

The faults considered in this paper are \(u^{F} (t) = \rho (t)u(t) + \psi (t)\), where \(\rho (t) = {\text{diag}}\left\{ {\rho_{1} (t),\ldots,\rho_{n} (t)} \right\}\) contains the effectiveness factors of the actuators, which represent the effectiveness loss of the actuators, such as rotor damage of a UAV; \(\psi (t) \in R^{n}\) is the unknown stuck fault, such as an unknown intense disturbance acting on the UAV system. Before starting to study the fault-tolerant control method of system (1), the following assumptions are necessary.

Assumption 1

There exist known positive constants \(\overline{A}\), \(\underline{A}\), \(\overline{B}\) and \(\underline{B}\), such that
$$\underline{B} \le \left\| {B(t)} \right\| \le \overline{B} ,\quad \underline{A} \le \left\| {A_{i} (t)} \right\| \le \overline{A} ,\quad i = 1, \ldots ,N - 1$$

Assumption 2

There exists a known matrix \(\hat{A}(t)\), such that
$$A_{1} (t)..A_{N - 1} (t)B(t)\rho (t)\hat{A}(t) + \left( {A_{1} (t)..A_{N - 1} (t)B(t)\rho (t)\hat{A}(t)} \right)^{\rm T} \ge \lambda_{0} I,$$
where \(\lambda_{0}\) is a known positive constant.

Assumption 3

There exist unknown positive constants \(\theta_{i1}\) and \(\theta_{i2}\), such that
$$||\Delta_{i} || \le \theta_{i1} \sum\limits_{j = 1}^{i} {\left\| {\xi_{j} } \right\|} + \theta_{i2} ,\quad i = 1, \ldots ,N$$

Assumption 4

There exist unknown positive constants \(\underline{\psi }\) and \(\overline{\psi }\), such that
$$\underline{\psi } \le ||\psi || \le \overline{\psi }$$

Assumption 5

There exists a known positive constant \(\underline{\rho }\), such that
$$\underline{\rho } \le |\rho_{i} (t)| \le 1,\quad i = 1,\ldots,n$$

Remark 1

The system state variables \(\xi_{i} \in R^{n}\) studied in this paper are multidimensional, and the system contains the more complex nonlinear function \(f\left( {\xi_{1} ,\ldots,\xi_{N} } \right)\). This is true of the dynamic models of various rigid bodies in reality, such as the rotor unmanned aerial vehicle (UAV). Fault compensation for such systems cannot be realized by the dynamic gain alone. The assumptions in this paper only concern the Lipschitz-like nature of the simple nonlinear functions and the boundedness of the uncertain parameters and faults, so they are general.
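As a concrete numerical illustration of the fault model \(u^{F} (t) = \rho (t)u(t) + \psi (t)\) introduced above, the sketch below applies an effectiveness loss and a stuck fault to a commanded input. The function name and all values are our own hypothetical choices, not taken from the paper.

```python
import numpy as np

# Sketch of the actuator fault model u^F = rho(t) u + psi(t): rho holds the
# effectiveness factors (Assumption 5: rho_i in [rho_min, 1]) and psi is the
# bounded stuck fault (Assumption 4). All values below are illustrative.

def faulty_input(u, rho_diag, psi):
    """Apply effectiveness loss and a stuck fault to the commanded input u."""
    return np.diag(rho_diag) @ u + psi

u = np.array([1.0, -2.0])         # commanded input
rho_diag = np.array([0.6, 1.0])   # actuator 1 has lost 40% effectiveness
psi = np.array([0.1, 0.0])        # small stuck/bias fault on actuator 1

print(faulty_input(u, rho_diag, psi))  # [ 0.7 -2. ]
```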

3 Dynamic gain-based fault-tolerant control design

3.1 Observer design

At first, we define \(\overline{A}_{i} (t) = A_{i} (t) \cdots A_{N - 1} (t)B(t)\rho (t)\hat{A}(t)\), \(i = 1,\ldots,N\), and introduce the transformation \(\eta_{i} = \overline{A}_{i}^{ - 1} (t)\left( {\xi_{i} - \xi_{ir} } \right)\), \(i = 1,\ldots,N\), where \(\xi_{ir}\), \(i = 1,\ldots,N\), are the reference signals, which are known and bounded.

System (1) can be converted into,
$$\left\{ {\begin{array}{*{20}l} {\dot{\eta}_{i} = \begin{array}{*{20}c} {\eta_{i + 1} + \Delta_{i}^{\prime} \left( {\xi_{1} ,\ldots,\xi_{i} ,\xi_{ir} ,\dot{\xi}_{ir} ,\rho_{1} ,\ldots,\rho_{n} } \right),} & {i = 1,\ldots,N - 1} \\ \end{array} } \hfill \\ {\dot{\eta}_{N} = \hat{A}^{ - 1} (t)u(t) + \Delta_{N}^{\prime} \left( {\xi_{1} ,\ldots,\xi_{N} ,\xi_{Nr} ,\dot{\xi}_{Nr} ,\rho_{1} ,\ldots,\rho_{n} } \right)} \hfill \\ {\quad\quad + \hat{A}^{ - 1} (t)f^{\prime} (\xi_{1} ,\ldots,\xi_{N} ,\rho_{1} ,\ldots,\rho_{n} ) + \psi^{\prime} (t)} \hfill \\ \end{array} } \right.,$$
(2)
where
$$\begin{aligned} & \Delta_{i}^{\prime} \left( {\xi_{1} ,\ldots,\xi_{i} ,\xi_{ir} ,\dot{\xi}_{ir} ,\rho_{1} ,\ldots,\rho_{n} } \right) \\ & \quad = \left( {A_{i} (t) \cdots A_{N - 1} (t)B(t)\rho (t)} \right)^{ - 1} \Delta_{i} \left( {\xi_{1} ,\ldots,\xi_{i} } \right) \\ & \quad \quad- \left( {A_{i} (t) \cdots A_{N - 1} (t)B(t)\rho (t)\hat{A}(t)} \right)^{ - 1} \dot{\xi}_{ir} \\ & \quad \quad + \frac{d}{dt}\left( {A_{i} (t) \cdots A_{N - 1} (t)B(t)\rho (t)\hat{A}(t)} \right)^{ - 1} \left( {\xi_{i} - \xi_{ir} } \right),\quad i = 1,\ldots,N \\ & f^{\prime} (\xi_{1} ,\ldots,\xi_{N} ,\rho_{1} ,\ldots,\rho_{n} ) = \left( {A_{1} (t) \cdots A_{N - 1} (t)B(t)\rho (t)} \right)^{ - 1} f\left( {\xi_{1} ,\ldots,\xi_{N} } \right), \\ & \psi^{\prime} (t) = \left( {A_{1} (t) \cdots A_{N - 1} (t)B(t)\rho (t)\hat{A}(t)} \right)^{ - 1} \psi (t). \\ \end{aligned}$$
According to Assumptions 1, 2, and 3, we can obtain that \(\left\| {\Delta_{i}^{\prime} } \right\| \le \theta_{i1}^{\prime} + \theta_{i2}^{\prime} \sum\nolimits_{j = 1}^{i} {\left\| {\eta_{j} } \right\|}\) and \(\underline{{\psi^{\prime} }} \le \left\| {\psi^{\prime} (t)} \right\| \le \overline{{\psi^{\prime} }}\), where \(\theta_{i1}^{\prime}\), \(\theta_{i2}^{\prime}\), \(i = 1,\ldots,N\), \(\underline{{\psi^{\prime} }}\) and \(\overline{{\psi^{\prime} }}\) are unknown constants. Then, we introduce the dynamic gain \(L(t)\) and the following transformation,
$$\left\{ \begin{array}{l} \begin{array}{*{20}c} {x_{i} = \frac{{\eta_{i} }}{{L^{b + i - 1} (t)}},} & {i = 1,\ldots,N} \\ \end{array} \hfill \\ v(t) = \frac{u(t)}{{L^{b + N} (t)}} \hfill \\ \end{array} \right.$$
(3)
where \(b\) is a parameter to be designed.
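As a numerical aside (our own sketch, not the paper's), the transformation (3) divides each \(\eta_{i}\) by \(L^{b + i - 1}\), so a large gain shrinks the scaled states; this is what the later Lyapunov analysis exploits. The helper and all values below are purely illustrative.

```python
import numpy as np

# Illustrative sketch of the dynamic-gain scaling (3): x_i = eta_i / L**(b+i-1).
# The states and gain below are made-up numbers chosen so every scaled state
# comes out equal, which makes the effect of the exponent easy to see.

def scale_states(eta_list, L, b):
    """Return x_i = eta_i / L**(b + i - 1) for the 1-based index i."""
    return [eta / L ** (b + i) for i, eta in enumerate(eta_list)]  # i = 0 -> exponent b

eta = [np.array([2.0]), np.array([4.0]), np.array([8.0])]
x = scale_states(eta, L=2.0, b=1.0)
print(x)  # exponents 1, 2, 3: each entry scales to [1.]
```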
By (2) and (3), it can be obtained that
$$\left\{ {\begin{array}{*{20}l} {\dot{x}_{i} = \begin{array}{*{20}c} {L(t)x_{i + 1} + \frac{{\Delta_{i}^{\prime} }}{{L^{b + i - 1} }} - (b + i - 1)\frac{{\dot{L} }}{L}x_{i} ,} & {i = 1,\ldots,N - 1} \\ \end{array} } \hfill \\ {\dot{x}_{N} = L(t)\hat{A}^{ - 1} (t)v(t) + \frac{{\hat{A}^{ - 1} (t)f^{\prime} }}{{L^{b + N - 1} }} + \frac{{\Delta_{N}^{\prime} }}{{L^{b + N - 1} }}} \hfill \\ {\quad\quad - (b + N - 1)\frac{{\dot{L} }}{L}x_{N} + \frac{{\psi^{\prime} (t)}}{{L^{b + N - 1} }}} \hfill \\ \end{array} } \right.,$$
(4)
Then, we design the following observer for system (2)
$$\left\{ \begin{array}{l} \begin{array}{*{20}c} {\dot{\hat{\eta }}}_{i} = \hat{\eta }_{i + 1} - L^{i} a_{i} \hat{\eta }_{1} \quad {i = 1, \ldots ,N - 1} \\ \end{array} \hfill \\ {\dot{\hat{\eta }}}_{N} = \hat{A}^{ - 1} (t)u(t) + \hat{A}^{ - 1} (t)\hat{{f}}^{\prime}(\hat{\xi }_{1} ,\ldots,\hat{\xi}_{N} ,\hat{\rho }_{1} ,\ldots,\hat{\rho }_{n} ) - L^{N} a_{N} \hat{\eta }_{1} \hfill \\ \end{array} \right..$$
(5)
By the similar transformation,
$$\hat{x}_{i} = \frac{{\hat{\eta }_{i} }}{{L^{b + i - 1} (t)}},\quad i = 1,\ldots,N,$$
(6)
observer (5) can be converted into
$$\left\{ {\begin{array}{*{20}l} {{\dot{\hat{x}}}_{i} = L(t)\hat{x}_{i + 1} - La_{i} \hat{x}_{1} - (b + i - 1)\frac{{\dot{L} }}{L}\hat{x}_{i} ,\quad i = 1, \ldots ,N - 1} \hfill \\ {{\dot{\hat{x}}}_{N} = L(t)\hat{A}^{ - 1} (t)v(t) + \frac{1}{{L^{b + N - 1} }}\hat{A}^{ - 1} (t)\hat{f}^{\prime} (\hat{\xi }_{1} ,\ldots,\hat{\xi }_{N} ,\hat{\rho }_{1} ,\ldots,\hat{\rho }_{n} )} \hfill \\ {\quad\quad - La_{N} \hat{x}_{1} - (b + N - 1)\frac{{\dot{L} }}{L}\hat{x}_{N} } \hfill \\ \end{array} } \right..$$
(7)
We define \(e_{i} = x_{i} - \hat{x}_{i}\), \(i = 1,\ldots,N\), which gives the following error dynamics
$$\left\{ \begin{array}{l} \begin{array}{*{20}c} {\dot{e}_{i} = L(t)e_{i + 1} + La_{i} \hat{x}_{1} + \frac{{\Delta_{i}^{\prime} }}{{L^{b + i - 1} }} - (b + i - 1)\frac{{\dot{L} }}{L}e_{i} ,} & {i = 1, \ldots ,N - 1} \\ \end{array} \hfill \\ \dot{e}_{N} = \hat{A}^{ - 1} (t)\frac{{f^{\prime} - \hat{f}^{\prime} }}{{L^{b + N - 1} }} + \frac{{\Delta_{N}^{\prime} }}{{L^{b + N - 1} }} + La_{N} \hat{x}_{1} - (b + N - 1)\frac{{\dot{L} }}{L}e_{N} + \frac{{\psi^{\prime} (t)}}{{L^{b + N - 1} }} \hfill \\ \end{array} \right.,$$
(8)
and (8) can be expressed as
$$\begin{aligned} \dot{e} & = L(t)\left( {A \otimes I_{n} } \right)e + L(t)\left( {a \otimes I_{n} } \right)x_{1} + \tilde{\Delta } + \frac{1}{{L^{b + N - 1} }}\tilde{I}\psi^{\prime} (t) \\ & \quad - \frac{{\dot{L} }}{L}\left( {D \otimes I_{n} } \right)e + \frac{1}{{L^{b + N - 1} }}\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{{f^{\prime} }}} \right), \\ \end{aligned}$$
(9)
where \(\tilde{I} = \left( {0_{n} ,\ldots,0_{n} ,I_{n} } \right)^{\rm T}\), \(e = (e_{1}^{\rm T} ,\ldots,e_{N}^{\rm T} )^{\rm T}\), \(\tilde{\Delta } = \left( {\frac{1}{{L^{b} }}\Delta_{1}^{\prime{\rm T}} , \ldots ,\frac{1}{{L^{b + N - 1} }}\Delta_{N}^{\prime{\rm T}} } \right)^{\rm T}\), and \(a = (a_{1} ,\ldots,a_{N} )^{\rm T}\); \(\otimes\) is the Kronecker product.
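To see the role of the gain in (9) numerically, consider the unperturbed error dynamics \(\dot{e} = L\left( {A \otimes I_{n} } \right)e\) with \(A\) Hurwitz: a larger constant gain contracts the error faster. The matrix, gains and time grid below are our own illustrative choices (for \(n = 1\), \(N = 2\)), not values from the paper.

```python
import numpy as np

# Toy illustration of the unperturbed error dynamics from (9): e' = L*A*e
# with A Hurwitz (first column -a_i, identity superdiagonal). Larger L means
# faster exponential decay of the estimation error.
A = np.array([[-3.0, 1.0],
              [-2.0, 0.0]])   # a_1 = 3, a_2 = 2: eigenvalues -1, -2 (Hurwitz)

def simulate_error_norm(L, e0, dt=1e-3, steps=2000):
    """Forward-Euler simulation of e' = L*A*e; returns the final error norm."""
    e = np.array(e0, dtype=float)
    for _ in range(steps):
        e = e + dt * L * (A @ e)
    return np.linalg.norm(e)

slow = simulate_error_norm(L=1.0, e0=[1.0, 1.0])
fast = simulate_error_norm(L=5.0, e0=[1.0, 1.0])
print(fast < slow)  # True: a larger gain contracts the error faster
```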

According to the paper [19], the following Lemma 1 holds, by which appropriate observer parameters \(a_{i}\), \(i = 1,\ldots,N\), can be selected.

Lemma 1

[17] There exist \(a_{i}\), \(i = 1,\ldots,N\), which make \(A\) a Hurwitz matrix, and there exist positive constants \(\mu\), \(\mu_{1}\), \(\mu_{2}\) and a positive-definite matrix \(P \in R^{N \times N}\) such that
$$\begin{aligned} & PA + A^{\rm T} P \le - \mu I_{N} \\ & \mu_{1} I_{N} \le PD + DP \le \mu_{2} I_{N} , \\ \end{aligned}$$
where\(D = {\text{diag}}\left\{ {\begin{array}{*{20}c} b & \ldots & {b + N - 1} \\ \end{array} } \right\}\).
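Lemma 1 can be checked numerically for a small example. The sketch below (our own illustrative choice of \(N = 2\), \(a_{1} = 3\), \(a_{2} = 2\), \(b = 1\), not values from the paper) builds \(A\) with \(-a_{i}\) in its first column and an identity superdiagonal, solves a Lyapunov equation for \(P\), and verifies both inequalities.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Numerical sanity check of Lemma 1 for N = 2, b = 1 (illustrative values).
a = np.array([3.0, 2.0])
A = np.array([[-a[0], 1.0],
              [-a[1], 0.0]])   # characteristic polynomial s^2 + 3s + 2

# Solve A^T P + P A = -I, so PA + A^T P <= -mu*I holds with mu = 1.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

b = 1.0
D = np.diag([b, b + 1.0])
M = P @ D + D @ P              # symmetric: D diagonal, P symmetric

mu1, mu2 = np.linalg.eigvalsh(M)[[0, -1]]
print(np.all(np.linalg.eigvals(A).real < 0))  # True: A is Hurwitz
print(mu1 > 0)  # True: mu1*I <= PD + DP <= mu2*I with mu1 positive
```

Here `mu1` and `mu2` are the extreme eigenvalues of \(PD + DP\), so the second condition of Lemma 1 holds with those constants.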
After selecting the parameters of the observer, we first construct the following Lyapunov function:
$$U = e^{\rm T} \left( {P \otimes I_{n} } \right)e.$$
(10)
By taking the derivatives of \(U\) with respect to time, we get
$$\begin{aligned} \dot{U} & = Le^{\rm T} \left( {PA + A^{\rm T} P} \right) \otimes I_{n} e + 2L(t)e^{\rm T} \left( {Pa \otimes I_{n} } \right)x_{1} \\ & \quad + 2e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{\Delta } + \frac{2}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\psi^{\prime} (t) \\ & \quad - \frac{{\dot{L} }}{L}e^{\rm T} \left( {PD + DP} \right) \otimes I_{n} e \\ & \quad + \frac{2}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{{f^{\prime} }}} \right) \\ & \le - \mu L(t)||e||^{2} + 2L(t)e^{\rm T} \left( {Pa \otimes I_{n} } \right)x_{1} + 2e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{\Delta } \\ & \quad + \frac{2}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\psi^{\prime} (t) - \frac{{\dot{L} }}{L}e^{\rm T} \left( {PD + DP} \right) \otimes I_{n} e \\ & \quad+ \frac{2}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{{f^{\prime} }}} \right) \\ \end{aligned}$$
(11)
In (11), the following inequality can be obtained,
$$2L(t)e^{\rm T} \left( {Pa \otimes I_{n} } \right)x_{1} \le L(t)\frac{\mu }{2}||e||^{2} + \frac{2}{\mu }\left\| {Pa \otimes I_{n} } \right\|^{2} \left\| {x_{1} } \right\|^{2} .$$
(12)
Then, by
$$\begin{aligned} x_{1} & = \frac{{\eta_{1} }}{{L^{b} (t)}} \\ & = \frac{1}{{L^{b} }}\left( {A_{1} (t) \cdots A_{N - 1} (t)B(t)\rho (t)\hat{A}(t)} \right)^{ - 1} \left( {\xi_{1} - \xi_{1r} } \right) \\ & = \frac{1}{{L^{b} }}\overline{A}_{1}^{ - 1} (t)\left( {\xi_{1} - \xi_{1r} } \right), \\ \end{aligned}$$
(13)
we can deduce that
$$2L(t)e^{\rm T} \left( {Pa \otimes I_{n} } \right)x_{1} \le L(t)\frac{\mu }{2}||e||^{2} + \frac{2}{{\mu L^{2b} }}\left\| {Pa \otimes I_{n} } \right\|^{2} \left\| {\overline{A}_{1}^{ - 1} } \right\|^{2} \left\| {\xi_{1} - \xi_{1r} } \right\|^{2}$$
(14)
So, we get
$$\begin{aligned} \dot{U} & \le - \frac{\mu }{2}L(t)||e||^{2} + \frac{2}{{\mu L^{2b} }}\left\| {Pa \otimes I_{n} } \right\|^{2} \left\| {\overline{A}_{1}^{ - 1} } \right\|^{2} \left\| {\xi_{1} - \xi_{1r} } \right\|^{2} \\ & \quad + 2e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{\Delta } + \frac{2}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\psi^{\prime} (t) \\ & \quad - \frac{{\dot{L} }}{L}e^{\rm T} \left( {PD + DP} \right) \otimes I_{n} e + \frac{2}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{{f^{\prime} }}} \right) \\ \end{aligned}$$
(15)

3.2 Design of output feedback fault-tolerant control

The design of output feedback fault-tolerant control is realized by back-stepping in this paper. At first, we define
$$z_{1} = \frac{{\xi_{1} - \xi_{1r} }}{{L^{b} }} = \frac{{\overline{A}_{1}^{{}} (t)\eta_{1} }}{{L^{b} }} = \overline{A}_{1} (t)x_{1} .$$
(16)
By taking the derivatives of \(z_{1}\) with respect to time, we get
$$\dot{z}_{1} = \frac{1}{{L^{b} }}\left( {A_{1} (t)\xi_{2} + \Delta_{1} } \right) - \frac{{\dot{\xi}_{1r} }}{{L^{b} }} - b\frac{{\dot{L} }}{L}z_{1} ,$$
(17)
where \(A_{1} (t)\xi_{2} = A_{1} (t)\left( {\xi_{2} - \xi_{2r} } \right) + A_{1} (t)\xi_{2r} = A_{1} (t)\overline{A}_{2} (t)\eta_{2} + A_{1} (t)\xi_{2r}\), and \(\overline{A}_{1} (t) = A_{1} (t)\overline{A}_{2} (t)\). So, (17) can be written as
$$\dot{z}_{1} = \frac{1}{{L^{b} }}A_{1} (t)\overline{A}_{2}^{{}} (t)\eta_{2} + \frac{1}{{L^{b} }}A_{1} (t)\xi_{2r} + \frac{1}{{L^{b} }}\Delta_{1} - \frac{{\dot{\xi}_{1r} }}{{L^{b} }} - b\frac{{\dot{L} }}{L}z_{1} ,$$
(18)
further, (18) can be written as
$$\dot{z}_{1} = L(t)\overline{A}_{1}^{{}} (t)\left( {\hat{x}_{2} + e_{2} } \right) + \frac{1}{{L^{b} }}A_{1} (t)\xi_{2r} + \frac{1}{{L^{b} }}\Delta_{1} - \frac{{\dot{\xi}_{1r} }}{{L^{b} }} - b\frac{{\dot{L} }}{L}z_{1} .$$
(19)
Then, the following Lyapunov function can be constructed for \(z_{1}\),
$$V_{0} = \frac{1}{2}z_{1}^{\rm T} z_{1}$$
(20)
Taking the derivatives of \(V_{0}\) with respect to time, we get
$$\begin{aligned} \dot{V}_{0} & = L(t)z_{1}^{\rm T} \overline{A}_{1} (t)\hat{x}_{2} + L(t)z_{1}^{\rm T} \overline{A}_{1} (t)e_{2} + \frac{1}{{L^{b} }}z_{1}^{\rm T} A_{1} \xi_{2r} \\ & \quad + \frac{1}{{L^{b} }}z_{1}^{\rm T} \Delta_{1} - \frac{1}{{L^{b} }}z_{1}^{\rm T} \dot{\xi}_{1r} - b\frac{{\dot{L} }}{L}z_{1}^{\rm T} z_{1} \\ & \le L(t)z_{1}^{\rm T} \overline{A}_{1} (t)\left( {\hat{x}_{2} - \alpha_{1} } \right) + L(t)z_{1}^{\rm T} \overline{A}_{1} (t)\alpha_{1} - b\frac{{\dot{L} }}{L}z_{1}^{\rm T} z_{1} \\ & \quad + L(t)\delta_{11} \left\| {e_{2} } \right\|^{2} + L(t)\delta_{11}^{\prime} \left\| {z_{1} } \right\|^{2} + \frac{1}{{L^{b} }}z_{1}^{\rm T} A_{1} \xi_{2r} \\ & \quad + \frac{1}{{L^{b} }}z_{1}^{\rm T} \Delta_{1} - \frac{1}{{L^{b} }}z_{1}^{\rm T} \dot{\xi}_{1r} , \\ \end{aligned}$$
(21)
where \(\delta_{11}\) is a positive constant to be designed; \(\delta_{11}^{\prime} = \delta_{11}^{\prime} \left( {\delta_{11}^{{}} } \right)\) is a positive constant dependent on \(\delta_{11}\), which can be compensated by the control designed in the next steps; \(\alpha_{1}\) is the virtual control to be designed.
Next, we define \(V_{1} = U + V_{0}\) and take the derivatives of \(V_{1}\) with respect to time, we can deduce that
$$\begin{aligned} \dot{V}_{1} & \le - L(t)\left( {\frac{\mu }{2} - \delta_{11} } \right)\left\| e \right\|^{2} + \frac{2}{\mu }\left\| {Pa \otimes I_{n} } \right\|^{2} \left\| {\overline{A}_{1}^{ - 1} } \right\|^{2} \left\| {z_{1} } \right\|^{2} \\ & \quad + L(t)\delta_{11}^{\prime} \left\| {z_{1} } \right\|^{2} + L(t)z_{1}^{\rm T} \overline{A}_{1} (t)\left( {\hat{x}_{2} - \alpha_{1} } \right) + L(t)z_{1}^{\rm T} \overline{A}_{1} (t)\alpha_{1} \\ & \quad - b\frac{{\dot{L} }}{L}z_{1}^{\rm T} z_{1} - \frac{{\dot{L} }}{L}e^{\rm T} \left( {PD + DP} \right) \otimes I_{n} e + 2e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{\Delta } \\ & \quad + \frac{2}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\psi^{\prime} (t) + \frac{2}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{f}^{\prime} } \right) \\ & \quad + \frac{1}{{L^{b} }}z_{1}^{\rm T} A_{1} \xi_{2r} + \frac{1}{{L^{b} }}z_{1}^{\rm T} \Delta_{1} - \frac{1}{{L^{b} }}z_{1}^{\rm T} \dot{\xi}_{1r} . \\ \end{aligned}$$
(22)
Then, we select \(\delta_{11} < \frac{\mu }{2}\), and design the virtual control \(\alpha_{1} = - q_{1} z_{1}\), where \(q_{1} > \frac{{2\delta_{11}^{\prime} }}{{\lambda_{0} }}\). By defining \(c_{1} = \frac{\mu }{2} - \delta_{11}\) and \(d_{11} = \frac{ 1}{ 2}q_{1} \lambda_{0} - \delta_{11}^{\prime}\), it can be deduced that
$$\begin{aligned} \dot{V}_{1} & \le - L(t)c_{1} \left\| e \right\|^{2} - L(t)d_{11} \left\| {z_{1} } \right\|^{2} - \frac{{\dot{L} }}{L}\mu_{1} \left\| e \right\|^{2} - b\frac{{\dot{L} }}{L}\left\| {z_{1} } \right\|^{2} \\ & \quad + \frac{{\overline{\theta }_{11} }}{{L^{2b} }} + \overline{\theta }_{12} \left( {\left\| e \right\|^{2} + \left\| z \right\|^{2} } \right) + \frac{2}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{f}^{\prime} } \right) \\ & \quad + L(t)z_{1}^{\rm T} \overline{A}_{1} (t)\left( {\hat{x}_{2} - \alpha_{1} } \right), \\ \end{aligned}$$
(23)
where \(d_{11}\) is a known positive constant; \(\overline{\theta }_{11}\) and \(\overline{\theta }_{12}\) are unknown positive constants.
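A quick numeric sanity check of the parameter conditions around (23) can be made (all numbers below are our own illustration, not values from the paper): choosing \(\delta_{11} < \mu /2\) and \(q_{1} > 2\delta_{11}^{\prime} /\lambda_{0}\) indeed makes \(c_{1}\) and \(d_{11}\) positive.

```python
# Illustrative check of the gain conditions after (23); mu, lambda_0 and the
# delta constants are made-up values.
mu, lam0 = 2.0, 1.0
delta11, delta11p = 0.5, 0.8     # requires delta11 < mu/2 = 1.0
q1 = 2.0                         # requires q1 > 2*delta11p/lam0 = 1.6

c1 = mu / 2 - delta11            # c_1 = mu/2 - delta_11
d11 = 0.5 * q1 * lam0 - delta11p  # d_11 = q_1*lambda_0/2 - delta'_11
print(c1 > 0 and d11 > 0)  # True
```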
Following the above steps, assume that step \(k\), \(k \ge 1\), has been completed, i.e., the virtual controls \(\alpha_{j} = - q_{j} z_{j}\), \(j = 1,\ldots,k\), have been designed such that (24) holds
$$\begin{aligned} {\dot{V}_{k} } & \le - L(t)c_{k} \left\| e \right\|^{2} - L(t)\sum\limits_{j = 1}^{k} {d_{kj} \left\| {z_{j} } \right\|^{2} } - \frac{{\dot{L} }}{L}c_{Lk} \left\| e \right\|^{2} \\ & \quad - \frac{{\dot{L} }}{L}\sum\limits_{j = 1}^{k} {d_{Lkj} \left\| {z_{j} } \right\|^{2} } + L(t)z_{k}^{\rm T} \overline{A}_{k} \left( {\hat{x}_{k + 1} - \alpha_{k} } \right) + \frac{{\overline{\theta }_{k1} }}{{L^{2b} }} \\ & \quad + \overline{\theta }_{k2} \left( {\left\| e \right\|^{2} + \left\| z \right\|^{2} } \right) + \frac{{2\overline{\sigma }_{k - 1} }}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{{f^{\prime} }}} \right), \\ \end{aligned}$$
(24)
where \(c_{k}\), \(c_{Lk}\), \(d_{kj}\), \(d_{Lkj}\), \(j = 1,\ldots,k\) and \(\overline{\sigma }_{k - 1}\) are known positive constants; \(\overline{\sigma }_{0} = 1\); \(\overline{\theta }_{k1}\) and \(\overline{\theta }_{k2}\) are unknown positive constants.
Next, for step \(k + 1\), we define \(z_{k + 1} = \hat{x}_{k + 1} - \alpha_{k}\) and construct the Lyapunov function as \(V_{k + 1} = \sigma_{k} V_{k} + \frac{1}{2}z_{k + 1}^{\rm T} z_{k + 1}\), where \(\sigma_{k}\) is a positive constant to be designed. By taking the derivative of \(z_{k + 1}\) with respect to time, we can obtain that
$$\begin{aligned} \dot{z}_{k + 1} & = {\dot{\hat{x}}}_{k + 1} - \dot{\alpha}_{k} \\ & = \sum\limits_{j = 2}^{k + 1} {( - 1)^{k - j + 1} \mathop \varPi \limits_{l = j}^{k} q_{l} {\dot{\hat{x}}}_{j} } + ( - 1)^{k} \mathop \varPi \limits_{l = 1}^{k} q_{l} {\dot{z}_{1} } \\ & = \sum\limits_{j = 2}^{k + 1} {( - 1)^{k - j + 1} \mathop \varPi \limits_{l = j}^{k} q_{l} \left( {L\hat{x}_{j + 1} - La_{j} \hat{x}_{1} - (b + j - 1)\frac{{\dot{L} }}{L}\hat{x}_{j} } \right)} \\ & \quad + ( - 1)^{k} \mathop \varPi \limits_{l = 1}^{k} q_{l} \left( {L\overline{A}_{1}^{{}} \left( {\hat{x}_{2} + e_{2} } \right) + \frac{1}{{L^{b} }}A_{1} (t)\xi_{2r} } \right. \\ & \quad \left. { + \frac{1}{{L^{b} }}\Delta_{1} - \frac{{\dot{\xi}_{1r} }}{{L^{b} }} - b\frac{{\dot{L} }}{L}z_{1} } \right), \\ \end{aligned}$$
(25)
so, according to (25), we get
$$\begin{aligned} \dot{V}_{k + 1} & \le - L\sigma_{k} c_{k} \left\| e \right\|^{2} - L\sigma_{k} \sum\limits_{j = 1}^{k} {d_{kj} \left\| {z_{j} } \right\|^{2} } - \sigma_{k} c_{Lk} \frac{{\dot{L} }}{L}\left\| e \right\|^{2} \\ & \quad - \frac{{\dot{L} }}{L}\sigma_{k} \sum\limits_{j = 1}^{k} {d_{Lkj} \left\| {z_{j} } \right\|^{2} } + L(t)\sigma_{k} z_{k}^{\rm T} \overline{A}_{k} \left( {\hat{x}_{k + 1} - \alpha_{k} } \right) + \frac{{\sigma_{k} \overline{\theta }_{k1} }}{{L^{2b} }} \\ & \quad + \sigma_{k} \overline{\theta }_{k2} \left( {\left\| e \right\|^{2} + \left\| z \right\|^{2} } \right) + \frac{{2\sigma_{k} \overline{\sigma }_{k - 1} }}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{f}^{\prime} } \right) \\ & \quad + z_{k + 1}^{\rm T} \left\{ {\sum\limits_{j = 2}^{k} {( - 1)^{k - j + 1} \mathop \varPi \limits_{l = j}^{k} q_{l} \left( {L\hat{x}_{j + 1} - La_{j} \hat{x}_{1} - (b + j - 1)\frac{{\dot{L} }}{L}\hat{x}_{j} } \right)} } \right. \\ & \quad + ( - 1)^{k} \mathop \varPi \limits_{l = 1}^{k} q_{l} \left( {L\overline{A}_{1}^{{}} \left( {\hat{x}_{2} + e_{2} } \right) + \frac{1}{{L^{b} }}A_{1} (t)\xi_{2r} + \frac{1}{{L^{b} }}\Delta_{1} - \frac{{\dot{\xi}_{1r} }}{{L^{b} }} - b\frac{{\dot{L} }}{L}z_{1} } \right) \\ & \quad \left. { + L\hat{x}_{k + 2} - La_{k + 1} \hat{x}_{1} - (b + k)\frac{{\dot{L} }}{L}\hat{x}_{k + 1} } \right\} \\ \end{aligned}$$
(26)
By direct derivation, the following three inequalities are established in (26):
$$\begin{aligned} & L(t)z_{k + 1}^{\rm T} \sum\limits_{j = 2}^{k} {( - 1)^{k - j + 1} \mathop \varPi \limits_{l = j}^{k} q_{l} \hat{x}_{j + 1} } - L(t)a_{k + 1} z_{k + 1}^{\rm T} \hat{x}_{1} \\ & \quad + L(t)( - 1)^{k} \mathop \varPi \limits_{l = 1}^{k} q_{l} z_{k + 1}^{\rm T} \overline{A}_{1} (t)(\hat{x}_{2} + e_{2} ) \\ & \quad \quad \le L(t)\sum\limits_{j = 1}^{k} {\delta_{k + 1,1,j} \left\| {z_{j} } \right\|^{2} } + L(t)\delta_{k + 1,2} \left\| e \right\|^{2} + L(t)\delta_{k + 1,1}^{\prime} \left\| {z_{k + 1} } \right\|^{2} , \\ \end{aligned}$$
(27)
$$\begin{aligned} & - z_{k + 1}^{\rm T} \sum\limits_{j = 2}^{k} {( - 1)^{k - j + 1} \left( {b + j - 1} \right)\frac{{\dot{L} }}{L}\hat{x}_{j} } + (b + k)q_{k} \frac{{\dot{L} }}{L}z_{k + 1}^{\rm T} z_{k} \\ & \quad - ( - 1)^{k} b\frac{{\dot{L} }}{L}\mathop \varPi \limits_{l = 1}^{k} q_{l} z_{k + 1}^{\rm T} z_{1} \\ & \quad \quad \le \frac{{\dot{L} }}{L}\lambda_{k + 1} \left\| {z_{k + 1} } \right\|^{2} + \frac{{\dot{L} }}{L}\sum\limits_{j = 1}^{k} {\lambda_{k + 1,j}^{\prime} \left\| {z_{j} } \right\|^{2} } , \\ \end{aligned}$$
(28)
and
$$L(t)\sigma_{k} z_{k}^{\rm T} \overline{A}_{k}^{{}} z_{k + 1} \le L(t)\delta_{k + 1,3} \left\| {z_{k} } \right\|^{2} + L(t)\delta_{k + 1,3}^{\prime} \left\| {z_{k + 1} } \right\|^{2} ,$$
(29)
where \(\delta_{k + 1,1,j}\), \(j = 1,\ldots,k\), and \(\delta_{k + 1,2}\) are positive constants to be designed; \(\delta_{k + 1,1}^{\prime} = \delta_{k + 1,1}^{\prime} \left( {\delta_{k + 1,1,j} ,\delta_{k + 1,2} } \right)\) is a positive constant dependent on \(\delta_{k + 1,1,j}\), \(\delta_{k + 1,2}\), \(j = 1,\ldots,k\); \(\lambda_{k + 1}\) is a positive constant to be designed; \(\lambda_{k + 1,j}^{\prime} = \lambda_{k + 1,j}^{\prime} \left( {\lambda_{k + 1} } \right)\), \(j = 1,\ldots,k\), are positive constants dependent on \(\lambda_{k + 1}\); \(\delta_{k + 1,3}\) is a positive constant to be designed; \(\delta_{k + 1,3}^{\prime} = \delta_{k + 1,3}^{\prime} \left( {\delta_{k + 1,3} ,\sigma_{k} } \right)\) is a positive constant dependent on \(\delta_{k + 1,3}\) and \(\sigma_{k}\).
Therefore, according to (27), (28), and (29), the following can be obtained
$$\begin{aligned} \dot{V}_{k + 1} & \le - L\sigma_{k} c_{k} \left\| e \right\|^{2} - L\sigma_{k} \sum\limits_{j = 1}^{k} {d_{kj} \left\| {z_{j} } \right\|^{2} } - \sigma_{k} c_{Lk} \frac{{\dot{L} }}{L}\left\| e \right\|^{2} \\ & - \sigma_{k} \frac{{\dot{L} }}{L}\sum\limits_{j = 1}^{k} {d_{Lkj} \left\| {z_{j} } \right\|^{2} } + \frac{{2\overline{\sigma }_{k} }}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\left( {f^{\prime} - \hat{{f^{\prime} }}} \right) \\ & + L\sum\limits_{j = 1}^{k} {\left( {\delta_{k + 1,1,j} + \delta_{k + 1,3} } \right)\left\| {z_{j} } \right\|^{2} } + L\delta_{k + 1,2} \left\| e \right\|^{2} + L\left( {\delta_{k + 1,1}^{\prime} + \delta_{k + 1,3}^{\prime} } \right)\left\| {z_{k + 1} } \right\|^{2} \\ & + \frac{{\dot{L} }}{L}\sum\limits_{j = 1}^{k} {\lambda_{k + 1,j}^{\prime} \left\| {z_{j} } \right\|^{2} } + \frac{{\dot{L} }}{L}\lambda_{k + 1} \left\| {z_{k + 1} } \right\|^{2} - \frac{{\dot{L} }}{L}(b + 1)\left\| {z_{k + 1} } \right\|^{2} \\ & + Lz_{k + 1}^{\rm T} \left( {\hat{x}_{k + 2} - \alpha_{k + 1} } \right) + Lz_{k + 1}^{\rm T} \alpha_{k + 1} + \frac{{\sigma_{k} \overline{\theta }_{k1} }}{{L^{2b} }} + \sigma_{k} \overline{\theta }_{k2} \left( {\left\| e \right\|^{2} + \left\| z \right\|^{2} } \right) \\ & + \left( { - 1} \right)^{k} \mathop \varPi \limits_{l = 1}^{k} q_{l} \frac{1}{{L^{b} }}z_{k + 1}^{\rm T} \left( {A_{1} (t)\xi_{2r} + \Delta_{1} - \dot{\xi}_{1r} } \right). \\ \end{aligned}$$
(30)
Because
$$\left( { - 1} \right)^{k} \mathop \varPi \limits_{l = 1}^{k} q_{l} \frac{1}{{L^{b} }}z_{k + 1}^{\rm T} \left( {A_{1} (t)\xi_{2r} + \Delta_{1} - \dot{\xi}_{1r} } \right) \le \frac{{\delta_{\theta ,k + 1,1} }}{{L^{2b} }} + \delta_{\theta ,k + 1,2} \left( {\left\| e \right\|^{2} + \left\| z \right\|^{2} } \right),$$
where \(\delta_{\theta ,k + 1,1}\) and \(\delta_{\theta ,k + 1,2}\) are unknown positive constants, we can select \(\lambda_{k + 1} < b + 1\), and \(\sigma_{k} > \hbox{max} \left\{ {\frac{{\lambda_{k + 1,j}^{\prime} }}{{d_{Lkj} }},j = 1,\ldots,k} \right\}\). Then, we select \(\delta_{k + 1,1,j} + \delta_{k + 1,3} < \sigma_{k} d_{kj}\) and \(\delta_{k + 1,2} < \sigma_{k} c_{k}\), and define \(\lambda_{k + 1} = b + 1 - d_{L,k + 1,k + 1}\), \(d_{Lk + 1j} = \sigma_{k} d_{Lkj} - \lambda_{k + 1,j}^{\prime}\), \(d_{k + 1j} = \sigma_{k} d_{kj} - \left( {\delta_{k + 1,1,j} + \delta_{k + 1,3} } \right)\), \(c_{k + 1} = \sigma_{k} c_{k} - \delta_{k + 1,2}\). Finally, the virtual control can be designed as \(\alpha_{k + 1} = - q_{k + 1} z_{k + 1}\), where \(q_{k + 1} = \delta_{k + 1,1}^{\prime} + \delta_{k + 1,3}^{\prime} + d_{k + 1,k + 1}\). After the above design, we have
$$\begin{aligned} \dot{V}_{k + 1} & \le - Lc_{k + 1} \left\| e \right\|^{2} - L\sum\limits_{j = 1}^{k + 1} {d_{k + 1,j} \left\| {z_{j} } \right\|^{2} } - \frac{{\dot{L} }}{L}\sum\limits_{j = 1}^{k + 1} {d_{Lk + 1,j} \left\| {z_{j} } \right\|^{2} } \\ & \quad + \frac{{2\overline{\sigma }_{k} }}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{{f^{\prime} }}} \right) + Lz_{k + 1}^{\rm T} \left( {\hat{x}_{k + 2} - \alpha_{k + 1} } \right) \\ & \quad + \frac{{\overline{\theta }_{k + 1,1} }}{{L^{2b} }} + \overline{\theta }_{k + 1,2} \left( {\left\| e \right\|^{2} + \left\| z \right\|^{2} } \right), \\ \end{aligned}$$
(31)
where \(c_{k + 1}\), \(c_{Lk + 1}\), \(d_{k + 1,j}\), \(d_{Lk + 1,j}\), \(j = 1,\ldots,k + 1\) and \(\overline{\sigma }_{k} = \overline{\sigma }_{k - 1} \sigma_{k}\) are known positive constants, and \(\overline{\theta }_{k + 1,1}\) and \(\overline{\theta }_{k + 1,2}\) are unknown positive constants.
Thus, we design the real control as
$$v(t) = - q_{N} \hat{A}(t)z_{N} - \frac{{\hat{{f^{\prime} }}}}{{L^{b + N - 1} }},$$
(32)
such that
$$\begin{aligned} \dot{V}_{N} & \le - Lc_{N} \left\| e \right\|^{2} - L\sum\limits_{j = 1}^{N} {d_{N,j} \left\| {z_{j} } \right\|^{2} } - \frac{{\dot{L} }}{L}\sum\limits_{j = 1}^{N} {d_{LN,j} \left\| {z_{j} } \right\|^{2} } \\ & \quad + \frac{{2\overline{\sigma }_{N - 1} }}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{{f^{\prime} }}} \right) \\ & \quad + \frac{{\overline{\theta }_{N,1} }}{{L^{2b} }} + \overline{\theta }_{N,2} \left( {\left\| e \right\|^{2} + \left\| z \right\|^{2} } \right). \\ \end{aligned}$$
(33)
Then, we select the initial value of \(L(t)\) as \(L(0) = 1\), and design the following dynamic update rate:
$$\dot{L} (t) = \hbox{max} \left\{ {\sum\limits_{i = 1}^{N} {\hat{x}_{i}^{\rm T} \hat{x}_{i} } + z_{1}^{\rm T} z_{1} - \frac{{\epsilon^{2} }}{{L^{2b} }},0} \right\}$$
(34)
where \(\epsilon > 0\) is a design parameter that specifies the tracking accuracy.

Remark 2

It can be seen from (34) that the update rate of the dynamic gain \(L(t)\) depends only on the measurable variables \(z_{1}\) and \(\hat{x}_{i}\), \(i = 1,\ldots,N\), so the dynamic gain forms a closed loop with the system. Its role is to vary with the measurable state variables so as to adaptively compensate the nonlinear functions with uncertain parameters and the faults in the system. Besides, the dynamic gain enables the output to track the reference signal with a prescribed accuracy. Qualitatively, (34) means that \(L(t)\) keeps adjusting until the variables \(z_{1}\) and \(\hat{x}_{i}\), \(i = 1,\ldots,N\) satisfy the tracking condition. Because the dynamic gain forms a closed loop with the system, it can be proved to be uniformly bounded together with all other variables.
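For illustration, the update rate (34) can be integrated numerically. The following is a minimal sketch using forward-Euler integration; the state values, the step size, and the choices \(b = 1\) and \(\epsilon = 0.2\) below are purely illustrative.

```python
import numpy as np

# Sketch of the dynamic update rate (34): the gain grows only while the
# tracking condition is violated, and its derivative is clamped to zero once
# sum ||x_hat_i||^2 + ||z1||^2 <= eps^2 / L^(2b).
def L_dot(x_hat, z1, L, eps, b):
    drive = sum(float(xi @ xi) for xi in x_hat) + float(z1 @ z1)
    return max(drive - eps**2 / L**(2 * b), 0.0)

# Forward-Euler integration with L(0) = 1, as selected in the text;
# the observer states below are illustrative constants.
L, dt, eps, b = 1.0, 1e-3, 0.2, 1.0
x_hat = [np.array([0.5]), np.array([0.3])]
z1 = np.array([0.2])
for _ in range(100):
    L += dt * L_dot(x_hat, z1, L, eps, b)
# L is non-decreasing by construction, since L_dot is never negative
```

Because the right-hand side of (34) is non-negative, \(L(t)\) is monotonically non-decreasing and stops growing exactly when the tracking condition is met.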

4 Compensation of nonlinear function by dynamic neural network

It can be seen from (33) that the term \(\frac{{2\overline{\sigma }_{N - 1} }}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{{f^{\prime} }}} \right)\) contains a more complex nonlinear function that the dynamic gain alone cannot compensate effectively. In the following, a dynamic neural network is combined with the dynamic gain to compensate this nonlinear function and realize fault-tolerant control of the system. The main idea is that the residual left after compensation by the neural network has the Lipschitz property, and the dynamic gain can effectively compensate nonlinear functions with the Lipschitz property.

At first, we express the nonlinear function \(f^{\prime} \left( {\xi_{1} ,\ldots,\xi_{N} ,\rho_{1} ,\ldots,\rho_{n} } \right)\) by a single hidden layer neural network as follows
$$f^{\prime} \left( {\xi_{1} ,\ldots,\xi_{N} ,\rho_{1} ,\ldots,\rho_{n} } \right) = f^{\prime} \left( \zeta \right) = W_{0}^{\rm T} \varphi \left( {W_{h}^{\rm T} \zeta } \right) + \omega ,$$
(35)
where
$$\begin{aligned} \zeta & = \left( {\xi_{1}^{\rm T} ,\ldots,\xi_{N}^{\rm T} ,\rho_{1} ,\ldots,\rho_{n} } \right)^{\rm T} \\ & = \left( {\xi_{1}^{\rm T} ,\eta_{2}^{\rm T} \overline{A}_{2}^{ - T} (t) + \xi_{2r}^{\rm T} , \ldots ,\eta_{N}^{\rm T} \overline{A}_{N}^{ - T} (t) + \xi_{Nr}^{\rm T} ,\rho_{1} ,\ldots,\rho_{n} } \right)^{\rm T} ; \\ \end{aligned}$$
\(W_{0}^{\rm T} \in R^{{n \times l_{h} }}\) is the expected weight of the input layer of the neural network; \(W_{h}^{\rm T} \in R^{{l_{h} \times \left( {Nn + n} \right)}}\) is the expected weight of the hidden layer; \(l_{h}\) is the number of hidden layer nodes; \(\varphi \left( \cdot \right)\) is the activation function; \(\omega \in R^{n}\) is the approximation error, which satisfies \(\left\| \omega \right\| \le \overline{\omega }\).
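The single hidden layer structure in (35) can be sketched as follows, assuming the sigmoid activation used later in the simulation section; all dimensions and the random weights below are illustrative only.

```python
import numpy as np

# Minimal sketch of the single hidden layer approximator W0^T * phi(Wh^T * zeta).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nn_output(W0, Wh, zeta):
    # hidden layer first (phi applied elementwise), then the linear output layer
    return W0.T @ sigmoid(Wh.T @ zeta)

# Illustrative dimensions: l_h hidden nodes, n_out outputs, n_in = N*n + n inputs.
rng = np.random.default_rng(0)
l_h, n_out, n_in = 16, 4, 16
W0 = rng.standard_normal((l_h, n_out))   # so W0^T is n_out x l_h, as in the text
Wh = rng.standard_normal((n_in, l_h))    # so Wh^T is l_h x n_in, as in the text
zeta = rng.standard_normal(n_in)
y = nn_output(W0, Wh, zeta)              # a vector in R^4
```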
Select the compensation of \(f^{\prime}\) as
$$\hat{{f^{\prime} }}\left( {\xi_{1} ,\hat{\xi }_{2} ,\ldots,\hat{\xi }_{N} ,\hat{\rho }_{1} ,\ldots,\hat{\rho }_{n} } \right) = \hat{W}_{0}^{\rm T} \varphi \left( {\hat{W}_{h}^{\rm T} \hat{\zeta }} \right),$$
(36)
where \(\hat{\zeta } = \left( {\xi_{1}^{\rm T} ,\hat{\eta }_{2}^{\rm T} + \xi_{2r}^{\rm T} ,\ldots,\hat{\eta }_{N}^{\rm T} + \xi_{Nr}^{\rm T} ,\hat{\rho }_{1} ,\ldots,\hat{\rho }_{n} } \right)^{\rm T}\); \(\hat{W}_{0}^{\rm T} \in R^{{n \times l_{h} }}\) and \(\hat{W}_{h}^{\rm T} \in R^{{l_{h} \times \left( {Nn + n} \right)}}\) are the adaptive weights of the input layer and the hidden layer, respectively. Then, we have the following proposition.

Proposition 1

There exist constants \(C_{a}\), \(\theta_{W}\), \(\theta_{\rho }\), and \(\theta_{\omega }\), such that
$$\begin{aligned} & \frac{{2\overline{\sigma }_{N - 1} }}{{L^{b + N - 1} }}e^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\left( {f^{\prime} - \hat{{f^{\prime} }}} \right) \\ & \quad \le C_{a} \left( {\left\| {\hat{W}_{0} } \right\|^{2} + \left\| {\hat{W}_{h} } \right\|^{2} + \theta_{W} } \right)\left( {\left\| z \right\|^{2} + \left\| e \right\|^{2} + \frac{1}{{L^{2b} }}\left\| {\tilde{\rho }} \right\|^{2} + \frac{1}{{L^{2b} }}\theta_{\rho } } \right) \\ & \quad \quad + \frac{1}{{L^{2b} }}\theta_{\omega } + \frac{{2\overline{\sigma }_{N - 1} }}{{L^{b + N - 1} }}z^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\tilde{W}_{0}^{\rm T} \varphi \left( {\hat{W}_{h}^{\rm T} \hat{\zeta }} \right) \\ & \quad \quad + \frac{{2\overline{\sigma }_{N - 1} }}{{L^{b + N - 1} }}z^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\hat{W}_{0}^{\rm T} \varphi^{\prime} \left( {\hat{W}_{h}^{\rm T} \hat{\zeta }} \right)\tilde{W}_{h}^{\rm T} \hat{\zeta } \\ & \quad \quad + \frac{{2\overline{\sigma }_{N - 1} }}{{L^{b + N - 1} }}z^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\hat{W}_{0}^{\rm T} \varphi^{\prime} \left( {\hat{W}_{h}^{\rm T} \hat{\zeta }} \right)\hat{W}_{h}^{\rm T} \tilde{I}\tilde{\rho }, \\ \end{aligned}$$
(37)
where \(\tilde{W}_{0} = W_{0} - \hat{W}_{0}\); \(\tilde{W}_{h} = W_{h} - \hat{W}_{h}\); \(\tilde{\rho } = \rho - \hat{\rho }\); \(\varphi^{\prime}\) is the derivative of \(\varphi\).

Remark 3

Inequality (37) in Proposition 1 gives an upper bound on the mismatch of the nonlinear function after compensation by the neural network. The dynamic gain can adaptively compensate the first two terms on the right-hand side of (37), and the last three terms can be compensated by the dynamic update rates of the unknown parameters.

By Proposition 1, we can select the following dynamic update rates for the weights of the neural network and the adaptive update rate for the effectiveness factors.
$$\dot{\hat{W}}_{0} = \left\{ {\begin{array}{*{20}l} {\varGamma_{0} \left( {\frac{{2\overline{\sigma }_{N - 1} }}{{L^{b + N - 1} }}\varphi \left( {\hat{W}_{h}^{\rm T} \hat{\zeta }} \right)z^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t) - F_{0} \hat{W}_{0}^{{}} } \right)} \hfill & {\left\| {\hat{W}_{0} } \right\| < \varOmega_{0} } \hfill \\ 0 \hfill & {\left\| {\hat{W}_{0} } \right\| \ge \varOmega_{0} } \hfill \\ \end{array} } \right.$$
(38)
$$\dot{\hat{W}}_{h} = \left\{ {\begin{array}{*{20}l} {\varGamma_{h} \left( {\frac{{2\overline{\sigma }_{N - 1} }}{{L^{b + N - 1} }}\hat{\zeta }z^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\hat{W}_{0}^{\rm T} \varphi^{\prime} \left( {\hat{W}_{h}^{\rm T} \hat{\zeta }} \right) - F_{h} \hat{W}_{h}^{{}} } \right)} \hfill & {\left\| {\hat{W}_{h} } \right\|^{2} < \varOmega_{h} } \hfill \\ 0 \hfill & {\left\| {\hat{W}_{h} } \right\|^{2} \ge \varOmega_{h} } \hfill \\ \end{array} } \right.$$
(39)
$$\dot{\hat{\rho }} = \left\{ {\begin{array}{*{20}l} {\gamma \left( {\frac{{2\overline{\sigma }_{N - 1} }}{{L^{b + N - 1} }}z^{\rm T} \left( {P \otimes I_{n} } \right)\tilde{I}\hat{A}^{ - 1} (t)\hat{W}_{0}^{\rm T} \varphi^{\prime} \left( {\hat{W}_{h}^{\rm T} \hat{\zeta }} \right)\hat{W}_{h}^{\rm T} \tilde{I} - F_{\rho } \hat{\rho }} \right)} \hfill & {\left\| {\hat{\rho }} \right\|^{2} < \varOmega_{\rho } } \hfill \\ 0 \hfill & {\left\| {\hat{\rho }} \right\|^{2} \ge \varOmega_{\rho } } \hfill \\ \end{array} } \right.$$
(40)
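The switching logic shared by the update laws (38)–(40) can be sketched as follows. This is a minimal sketch in which a generic drive term `G` stands in for the full gradient-like expressions, and `Gamma`, `F`, `Omega`, and `dt` are placeholder design values, not the values used in the paper.

```python
import numpy as np

# Sketch of the gating common to (38)-(40): the estimate moves along a drive
# term G with a damping term -F*W, and the update is frozen once the norm of
# the estimate reaches the design bound Omega.
def gated_step(W, G, Gamma, F, Omega, dt):
    if np.linalg.norm(W) < Omega:
        return W + dt * Gamma * (G - F * W)
    return W  # W_dot = 0 outside the admissible ball, as in (38)-(40)

W = np.zeros((3, 2))
G = np.ones((3, 2))
for _ in range(1000):
    W = gated_step(W, G, Gamma=1.0, F=1.0, Omega=30.0, dt=0.01)
# with F = 1 the estimate settles near G, well inside the bound Omega
```

The gate guarantees that the weight estimates stay bounded by design, which is what allows them to enter the Lyapunov analysis below.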
To describe the stability of the system, we construct the following Lyapunov function.
$$T = V_{N} + \tilde{V},$$
(41)
where \(\tilde{V} = \frac{1}{2}tr\left\{ {\tilde{W}_{0}^{\rm T} \varGamma_{0}^{ - 1} \tilde{W}_{0} } \right\} + \frac{1}{2}tr\left\{ {\tilde{W}_{h}^{\rm T} \varGamma_{h}^{ - 1} \tilde{W}_{h} } \right\} + \frac{1}{2}\gamma^{ - 1} \tilde{\rho }^{\rm T} \tilde{\rho }\), and \(tr\left\{ \cdot \right\}\) denotes the trace of a matrix.
By taking the derivative of \(T\) with respect to time and according to (33), (38)–(40), and Proposition 1, we can obtain that
$$\dot{T}\le - \left( {L(t)\tilde{C} - \tilde{\theta }} \right)\left( {\left\| e \right\|^{2} + \left\| z \right\|^{2} } \right) - D\left( {\left\| {\tilde{W}_{0} } \right\|^{2} + \left\| {\tilde{W}_{h} } \right\|^{2} + \left\| {\tilde{\rho }} \right\|^{2} } \right) + \frac{1}{{L^{2b} }}\tilde{\theta }_{0} ,$$
(42)
where \(\tilde{C}\) and \(D\) are known positive constants; \(\tilde{\theta }\) and \(\tilde{\theta }_{0}\) are unknown positive constants.

5 Stability analysis

We have the following Propositions 2 and 3 for the stability of the system. These two propositions show that the closed-loop system is globally uniformly bounded and that the output signal can track the reference signal with accuracy \(\epsilon\).

Proposition 2

For the closed-loop system consisting of (7), (8), (32), (34), (36), (38), (39), and (40), the dynamic gain \(L(t)\) and the state vectors \(\hat{x}\) and \(e\) are globally bounded on \(\left[ {0,T_{f} } \right)\).

Proposition 3

For the system (1) with actuator faults, which satisfies Assumptions 1–4, if the output feedback fault-tolerant control is given by (32), then for any given initial condition, all variables in the closed-loop system of (1), (5), (32), (34), (36), (38), (39), and (40) are bounded on \(\left[ {0, + \infty } \right)\).

Proof

By Proposition 2, all variables in the closed-loop system consisting of (1), (5), (32), (34), (36), (38), (39), and (40) are bounded. Then, we can deduce that \(\mathop {\lim }\nolimits_{t \to \infty } \dot{L} (t) = 0\). Therefore, for any initial condition there exists a finite time \(t_{\epsilon}\) such that \(\frac{{\left\| {\xi_{1} - \xi_{1r} } \right\|^{2} }}{{L^{2b} }} - \frac{{\epsilon^{2} }}{{2L^{2b} }} \le \dot{L} (t) \le \frac{{\epsilon^{2} }}{{2L^{2b} }}\), \(\forall t > t_{\epsilon}\). Thus, we can obtain that \(\left\| {\xi_{1} - \xi_{1r} } \right\| \le \epsilon\).□

6 Simulation results

To verify the effectiveness of the proposed method, the following height and attitude control systems of the UAV with actuator faults are considered [18].

Height control system of the UAV:
$$\left\{ \begin{array}{l} \dot{\xi}_{11} = \xi_{21} \hfill \\ \dot{\xi}_{21} = \frac{{\cos \xi_{12} \cos \xi_{13} }}{m}\tilde{u}_{1}^{F} - \frac{{k_{0} }}{m}\xi_{21} - g \hfill \\ \end{array} \right.$$
(43)
Posture control system of the UAV:
$$\left\{ \begin{array}{l} \dot{\xi}_{12} = \xi_{22} \hfill \\ \dot{\xi}_{22} = \frac{{J_{y} - J_{z} }}{{J_{x} }}\xi_{23} \xi_{24} + \frac{l}{{J_{x} }}\tilde{u}_{2}^{F} + k_{1} \xi_{23} \hfill \\ \dot{\xi}_{13} = \xi_{23} \hfill \\ \dot{\xi}_{23} = \frac{{J_{z} - J_{x} }}{{J_{y} }}\xi_{22} \xi_{24} + \frac{l}{{J_{y} }}\tilde{u}_{3}^{F} + k_{2} \xi_{22} \hfill \\ \dot{\xi}_{14} = \xi_{24} \hfill \\ \dot{\xi}_{24} = \frac{{J_{x} - J_{y} }}{{J_{z} }}\xi_{22} \xi_{23} + \frac{c}{{J_{z} }}\tilde{u}_{4}^{F} \hfill \\ \end{array} \right.,$$
(44)
where \(\xi_{11}\) is the height of the UAV (m); \(\xi_{21}\) is the vertical speed of the UAV (m/s); \(\xi_{12}\), \(\xi_{13}\), \(\xi_{14}\) are the posture angles of the UAV (rad); \(\xi_{22}\), \(\xi_{23}\), \(\xi_{24}\) are the corresponding angular rates (rad/s); \(m = 1.2\,{\text{kg}}\) is the mass of the UAV; \(g = 9.8\,{\text{m/s}}^{2}\) is the gravity acceleration; \(l = 0.2\,{\text{m}}\) is the distance from the rotor to the center of gravity of the UAV; \(J_{x} = 0.3\,{\text{kg}}\,{\text{m}}^{2}\), \(J_{y} = 0.4\,{\text{kg}}\,{\text{m}}^{2}\), and \(J_{z} = 0.6\,{\text{kg}}\,{\text{m}}^{2}\) are the moments of inertia about the three axes of the UAV body coordinate system; \(c = 0.79\) is the ratio of the anti-torque coefficient of the rotor motor to the lift coefficient corresponding to the motor speed; \(k_{0}\) is an unknown parameter which indicates the air resistance; \(k_{1}\) and \(k_{2}\) are unknown parameters which indicate the disturbance caused by the motors. We select \(k_{0} = 0.03\), \(k_{1} = 0.1\), and \(k_{2} = - 0.1\) in the simulation.
In the above height and attitude control systems (43) and (44), \(\tilde{u}_{i}^{F}\), \(i = 1,\ldots,4\) are
$$\begin{aligned} \left( {\begin{array}{*{20}c} {\tilde{u}_{1}^{F} } \\ {\tilde{u}_{2}^{F} } \\ {\tilde{u}_{3}^{F} } \\ {\tilde{u}_{4}^{F} } \\ \end{array} } \right) & = \left( {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & { - 1} \\ 1 & 0 & { - 1} & 0 \\ 1 & { - 1} & 1 & { - 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\rho_{1} (t)} & 0 & 0 & 0 \\ 0 & {\rho_{2} (t)} & 0 & 0 \\ 0 & 0 & {\rho_{3} (t)} & 0 \\ 0 & 0 & 0 & {\rho_{4} (t)} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {u_{1} } \\ {u_{2} } \\ {u_{3} } \\ {u_{4} } \\ \end{array} } \right) \\ & \quad + \left( {\begin{array}{*{20}c} {\left( {\psi_{1} + \psi_{2} + \psi_{3} + \psi_{4} } \right)} \\ {\left( {\psi_{2} - \psi_{4} } \right)} \\ {\left( {\psi_{1} - \psi_{3} } \right)} \\ {\left( {\psi_{1} - \psi_{2} + \psi_{3} - \psi_{4} } \right)} \\ \end{array} } \right) \\ \end{aligned}$$
where \(u_{i}\), \(i = 1,\ldots,4\) are the lift forces generated by the rotor motors on the propellers; \(\rho_{i} (t)\), \(i = 1,\ldots,4\) are the effectiveness factors of the actuators; and \(\psi_{i}\), \(i = 1,\ldots,4\) are the unknown stuck faults.
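The actuator fault model above can be sketched numerically as follows: the commanded lifts are scaled by the effectiveness factors, mixed by the fixed allocation matrix, and offset by the stuck-fault terms. The function name and the numeric inputs below are illustrative only.

```python
import numpy as np

# Fixed allocation matrix from the fault model above.
M = np.array([[1.0,  1.0,  1.0,  1.0],
              [0.0,  1.0,  0.0, -1.0],
              [1.0,  0.0, -1.0,  0.0],
              [1.0, -1.0,  1.0, -1.0]])

def faulty_input(u, rho, psi):
    u_eff = M @ (rho * u)   # loss-of-effectiveness part: rho_i scales u_i
    stuck = np.array([psi[0] + psi[1] + psi[2] + psi[3],
                      psi[1] - psi[3],
                      psi[0] - psi[2],
                      psi[0] - psi[1] + psi[2] - psi[3]])
    return u_eff + stuck

# healthy actuators (rho = 1, psi = 0) recover the nominal mixing M @ u
u_tilde = faulty_input(np.array([1.0, 2.0, 3.0, 4.0]), np.ones(4), np.zeros(4))
```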
Then, (43) and (44) can be expressed as
$$\left\{ {\begin{array}{*{20}l} {\dot{\xi}_{1} = \xi_{2} } \hfill \\ {\dot{\xi}_{2} = B(t)\rho (t)u(t) + B(t)\psi (t) + \Delta_{2} (\xi_{21} ,\xi_{22} ,\xi_{23} )} \hfill \\ {\quad\quad + f\left( {\xi_{22} ,\xi_{23} ,\xi_{24} } \right)} \hfill \\ \end{array} } \right.,$$
(45)
where
$$\begin{aligned} B(t) & = \left( {\begin{array}{*{20}c} {\frac{{\cos \xi_{ 1 2} \cos \xi_{ 1 3} }}{m}} & {\frac{{\cos \xi_{ 1 2} \cos \xi_{ 1 3} }}{m}} & {\frac{{\cos \xi_{ 1 2} \cos \xi_{ 1 3} }}{m}} & {\frac{{\cos \xi_{ 1 2} \cos \xi_{ 1 3} }}{m}} \\ 0 & {\frac{l}{{J_{x} }}} & 0 & { - \frac{l}{{J_{x} }}} \\ {\frac{l}{{J_{y} }}} & 0 & { - \frac{l}{{J_{y} }}} & 0 \\ {\frac{c}{{J_{z} }}} & { - \frac{c}{{J_{z} }}} & {\frac{c}{{J_{z} }}} & { - \frac{c}{{J_{z} }}} \\ \end{array} } \right), \\ \Delta_{2} & = \left( {\begin{array}{*{20}c} { - \frac{{k_{0} }}{m}\xi_{21} } & {k_{1} \xi_{23} } & {k_{2} \xi_{22} } & 0 \\ \end{array} } \right)^{\rm T} \\ f & = \left( {\begin{array}{*{20}c} 0 & {\frac{{J_{y} - J_{z} }}{{J_{x} }}\xi_{23} \xi_{24} } & {\frac{{J_{z} - J_{x} }}{{J_{y} }}\xi_{22} \xi_{24} } & {\frac{{J_{x} - J_{y} }}{{J_{z} }}\xi_{22} \xi_{23} } \\ \end{array} } \right)^{\rm T} . \\ \end{aligned}$$
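The input matrix \(B(t)\) defined above can be sketched with the parameter values from the text (\(m = 1.2\,{\text{kg}}\), \(l = 0.2\,{\text{m}}\), \(c = 0.79\), \(J_{x} = 0.3\), \(J_{y} = 0.4\), \(J_{z} = 0.6\,{\text{kg}}\,{\text{m}}^{2}\)); the function name is illustrative.

```python
import numpy as np

# Sketch of B(t): only the first row depends on the attitude, through
# cos(xi12)*cos(xi13)/m; the torque rows are constant.
m, l, c = 1.2, 0.2, 0.79
Jx, Jy, Jz = 0.3, 0.4, 0.6

def B(xi12, xi13):
    cc = np.cos(xi12) * np.cos(xi13) / m
    return np.array([[cc,      cc,      cc,      cc],
                     [0.0,     l / Jx,  0.0,    -l / Jx],
                     [l / Jy,  0.0,    -l / Jy,  0.0],
                     [c / Jz, -c / Jz,  c / Jz, -c / Jz]])

# at level attitude (both angles zero) the first row reduces to 1/m
B0 = B(0.0, 0.0)
```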
According to the method described above, we introduce the transformation \(\eta_{1} = \left( {B(t)\rho (t)\hat{A}(t)} \right)^{ - 1} \xi_{1}\), \(\eta_{2} = \left( {B(t)\rho (t)\hat{A}(t)} \right)^{ - 1} \xi_{2}\), and design the following observer:
$$\left\{ \begin{array}{l} \dot{\hat{\eta }}_{1} = \hat{\eta }_{2} - La_{1} \hat{\eta }_{1} \hfill \\ \dot{\hat{\eta }}_{2} = \hat{A}^{ - 1} (t)u(t) + \hat{{f^{\prime} }}\left( {\hat{\xi }_{22} ,\hat{\xi }_{23} ,\hat{\xi }_{24} ,\hat{\rho }_{1} ,\ldots,\hat{\rho }_{4} } \right) - L^{2} a_{2} \hat{\eta }_{1} \hfill \\ \end{array} \right.,$$
(46)
then, make the further transformation \(\hat{x}_{1} = \frac{{\hat{\eta }_{1} }}{{L^{b} }}\), \(\hat{x}_{2} = \frac{{\hat{\eta }_{2} }}{{L^{b + 1} }}\), we get
$$\left\{ \begin{array}{l} {\dot{\hat{x}}_{1} } = L\hat{x}_{2} - La_{1} \hat{x}_{1} - b\frac{{\dot{L} }}{L}\hat{x}_{1} \\ \dot{\hat{x}_{2} } = L\hat{A}^{ - 1} (t)v + \frac{1}{{L^{b + 1} }}\hat{{f^{\prime} }}\left( {\hat{\xi }_{22} ,\hat{\xi }_{23} ,\hat{\xi }_{24} ,\hat{\rho }_{1} ,\ldots,\hat{\rho }_{4} } \right) \\ - La_{2} \hat{x}_{1} - \left( {b + 1} \right)\frac{{\dot{L} }}{L}\hat{x}_{2} \\ \end{array} \right..$$
(47)

Thus, we can design the control \(v(t) = - q_{2} \hat{A}(t)z_{2} - \frac{1}{{L^{b} }}\hat{A}(t)\hat{{f^{\prime} }}\left( {\hat{\xi }_{22} ,\hat{\xi }_{23} ,\hat{\xi }_{24} ,\hat{\rho }_{1} ,\ldots,\hat{\rho }_{4} } \right)\), where the adaptive dynamic of \(L\) is given by (34), and \(\hat{A}(t) = B^{\rm T} (t)\), \(q_{1} = 9.3\), \(q_{2} = 15.1\), \(\epsilon = 0.2\).

We select the faults as follows:
$$\begin{aligned} & \rho_{1} = \left\{ {\begin{array}{*{20}l} 1 \hfill & {0 < t < 8} \hfill \\ {0.7 + 0.3e^{ - 8(t - 8)} } \hfill & {t \ge 8} \hfill \\ \end{array} } \right., \\ & \rho_{2} = \left\{ {\begin{array}{*{20}l} 1 \hfill & {0 < t < 8} \hfill \\ {0.8 + 0.2e^{ - 8(t - 8)} } \hfill & {t \ge 8} \hfill \\ \end{array} } \right., \\ & \rho_{3} = \rho_{4} = 1, \\ & \psi_{1} (t) = 0.5\sin 10t\cos 12t, \\ & \psi_{2} (t) = 0.7\sin^{2} 12t\cos 5t, \\ & \psi_{3} = \psi_{4} = 0. \\ \end{aligned}$$
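The fault profiles above can be sketched as time functions: actuators 1 and 2 lose effectiveness exponentially after \(t = 8\,{\rm s}\), and two sinusoidal stuck-fault components act on the system. The function names below are illustrative.

```python
import numpy as np

# Time profiles of the simulated faults selected above.
def rho1(t): return 1.0 if t < 8 else 0.7 + 0.3 * np.exp(-8 * (t - 8))
def rho2(t): return 1.0 if t < 8 else 0.8 + 0.2 * np.exp(-8 * (t - 8))
def psi1(t): return 0.5 * np.sin(10 * t) * np.cos(12 * t)
def psi2(t): return 0.7 * np.sin(12 * t) ** 2 * np.cos(5 * t)
# rho1 decays from 1 toward its post-fault floor of 0.7 for t >= 8
```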
Then, we design the compensation for the nonlinear functions by the neural network: \(\hat{{f^{\prime} }}\left( {\hat{\xi }_{22} ,\hat{\xi }_{23} ,\hat{\xi }_{24} ,\hat{\rho }_{1} ,\ldots,\hat{\rho }_{4} } \right) = \hat{W}_{0}^{\rm T} \varphi \left( {\hat{W}_{h}^{\rm T} \hat{\zeta }} \right)\), \(\hat{W}_{0}^{\rm T} \in R^{{4 \times l_{h} }}\), \(\hat{W}_{h}^{\rm T} \in R^{{l_{h} \times 16}}\). The number of hidden layer nodes is 16. We select the sigmoid function as the activation function. The adaptive dynamics of \(\hat{W}_{0}\), \(\hat{W}_{h}\), and \(\hat{\rho }\) are given by (38)–(40). We select \(\varOmega_{0} = \varOmega_{h} = \varOmega_{\rho } = 30\) and \(F_{0} = F_{h} = F_{\rho } = 1\). Thus, (45), (47), and the dynamics of \(L\), \(\hat{W}_{0}\), \(\hat{W}_{h}\), and \(\hat{\rho }\) form a closed-loop system.
In the simulation, the initial values of the variables are \(\xi_{11} (0) = \xi_{12} (0) = \xi_{13} (0) = \xi_{14} (0) = 0\), \(\hat{x}_{1} (0) = \hat{x}_{2} (0) = 0\), \(\hat{W}_{0} (0) = \hat{W}_{h} (0) = 0\), and \(\hat{\rho }(0) = 1\). The reference signals are \(\xi_{11r} = 3.5\,{\text{m}}\), \(\xi_{12r} = 10\deg\), \(\xi_{13r} = 15\deg\), and \(\xi_{14r} = 20\deg\). The simulation results are shown in Figs. 1, 2 and 3.
Fig. 1

a The height of UAV without faults, b the posture angles of UAV without faults, c the dynamic gain without faults

Fig. 2

a Height of UAV with faults and b posture angle of UAV with faults

Fig. 3

a Height of UAV with faults, b the posture angles of UAV with faults, c the dynamic gain with faults, d the input layer weight \(\left\| {\hat{W}_{0} } \right\|\) of the neural network, and e the hidden layer weight \(\left\| {\hat{W}_{h} } \right\|\) of the neural network

As shown in Fig. 1a, b, when there is no fault in the system, the control method proposed in this paper can make the system track the reference signals effectively. Figure 1c is the variation process of dynamic gain. By the adjustment of the dynamic gain, the uncertainties in the system can be compensated adaptively. Finally, the system can track the reference signals stably and the dynamic gain is bounded.

It can be seen from Fig. 2a, b that without fault-tolerant control (the gain is fixed at \(L = 4.9\), its converged value in Fig. 1c), the system responds well before the faults occur (\(t < 8\,{\rm s}\)). However, after the faults occur (\(t \ge 8\,{\rm s}\)), neither the height nor the posture angles of the UAV can track the reference signals properly.

In Fig. 3, we can see that the fault-tolerant control method proposed in this paper compensates the faults effectively. It can be seen from Fig. 3a–c that when the faults occur, the dynamic gain departs from its original steady value, and the faults are compensated by its adaptive adjustment. From Fig. 3d, e, we can see that the weights of the neural network first tend to converge (\(t < 8\,{\rm s}\)). Then, as the faults occur, the weights oscillate considerably during the fault compensation. After the faults have been compensated, the dynamic gain and the network weights become stable, and the height and posture angles of the UAV track the reference signals effectively.

7 Conclusions

In this paper, by combining the dynamic gain and the neural network, the output feedback fault-tolerant control problem of a class of nonlinear uncertain systems was solved. The compensation of the faults is achieved through the adaptive adjustment of the dynamic gain. Meanwhile, the dynamic gain can also compensate for the simple nonlinear uncertain functions of the system. For the more complex nonlinear functions, a single hidden layer neural network was adopted for approximation and combined with the dynamic gain to achieve the compensation. Taking the height and posture angle control system of the quad-rotor UAV as an example, the effectiveness of the proposed method was verified. Based on this work, further questions remain. First, the fault-tolerant control problem remains open when the lower bound of the effectiveness factors is unknown and a full loss of actuator effectiveness is allowed. Second, the condition in Assumption 2 may be restrictive, and whether it can be further relaxed deserves study.


Acknowledgements

This work was supported by the National Science Foundation of China (Grant No. 61273190). The authors would like to thank the editor and reviewers for the valuable comments and constructive suggestions to improve the paper.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no competing interests.

References

  1. Krishnamurthy P, Khorrami F (2016) Global output-feedback control of systems with dynamic nonlinear input uncertainties through singular perturbation-based dynamic scaling. Int J Adapt Control Signal Process 30(5):690–714
  2. Liu YG (2014) Global output-feedback tracking for nonlinear systems with unknown polynomial-of-output growth rate. J Control Theory Appl 31:921–933
  3. Kaliora G, Astolfi A (2004) Nonlinear control of feedforward systems with bounded signals. IEEE Trans Autom Control 49(11):1975–1990
  4. Krishnamurthy P, Khorrami F (2008) Dual high-gain-based adaptive output-feedback control for a class of nonlinear systems. Int J Adapt Control Signal Process 22(1):23–42
  5. Zhai J, Qian C (2012) Global control of nonlinear systems with uncertain output function using homogeneous domination approach. Int J Robust Nonlinear Control 22(14):1543–1561
  6. BenAbdallah A, Khalifa T, Mabrouk M (2015) Adaptive practical output tracking control for a class of uncertain nonlinear systems. Int J Syst Sci 46(8):1421–1431
  7. Jin S, Liu Y, Li F (2016) Further results on global practical tracking via adaptive output feedback for uncertain nonlinear systems. Int J Control 89(2):368–379
  8. Ma HJ, Yang GH (2010) Adaptive output control of uncertain nonlinear systems with non-symmetric dead-zone input. Automatica 46(2):413–420
  9. Ma HJ, Yang GH (2011) Adaptive logic-based switching fault-tolerant controller design for nonlinear uncertain systems. Int J Robust Nonlinear Control 21(4):404–428
  10. Zouari F, Boulkroune A, Ibeas A (2017) Observer-based adaptive neural network control for a class of MIMO uncertain nonlinear time-delay non-integer-order systems with asymmetric actuator saturation. Neural Comput Appl 28(1):993–1010
  11. Zhou S, Chen M, Ong CJ (2016) Adaptive neural network control of uncertain MIMO nonlinear systems with input saturation. Neural Comput Appl 27(5):1317–1325
  12. Zhang X, Polycarpou MM, Parisini T (2010) Fault diagnosis of a class of nonlinear uncertain systems with Lipschitz nonlinearities using adaptive estimation. Automatica 46(2):290–299
  13. Li XJ, Yang GH (2018) Neural-network-based adaptive decentralized fault-tolerant control for a class of interconnected nonlinear systems. IEEE Trans Neural Netw Learn Syst 29(1):144–155
  14. Mao Z, Jiang B, Shi P (2010) Observer based fault-tolerant control for a class of nonlinear networked control systems. J Frankl Inst 347(6):940–956
  15. Tong S, Huo B, Li Y (2014) Observer-based adaptive decentralized fuzzy fault-tolerant control of nonlinear large-scale systems with actuator failures. IEEE Trans Fuzzy Syst 22(1):1–15
  16. Zhou Q, Shi P, Liu H (2012) Neural-network-based decentralized adaptive output-feedback control for large-scale stochastic nonlinear systems. IEEE Trans Syst Man Cybern Part B Cybern 42(6):1608–1619
  17. Dinh HT, Kamalapurkar R, Bhasin S (2014) Dynamic neural network-based robust observers for uncertain nonlinear systems. Neural Netw 60:44–52
  18. Dierks T, Jagannathan S (2010) Output feedback control of a quadrotor UAV using neural networks. IEEE Trans Neural Netw 21(1):50–66
  19. Krishnamurthy P, Khorrami F, Jiang ZP (2002) Global output feedback tracking for nonlinear systems in generalized output-feedback canonical form. IEEE Trans Autom Control 47(5):814–819

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Xiaoye Xi (1)
  • Tingzhang Liu (1) (email author)
  • Jianfei Zhao (1)
  • Limin Yan (2)
  1. Department of Automation, Shanghai University, Shanghai, China
  2. Microelectronics R&D Center, Shanghai University, Shanghai, China
