Stability and Control of Linear Systems, pp. 111–138
Stabilization
Abstract
In this chapter, the two main topics studied in this book (stability and control) encounter each other. We address the stabilization problem, that is, the problem of improving the stability performance of a system by applying a suitable feedback law. We consider several approaches: static state feedback, static output feedback and dynamic feedback. Finally, we revisit in this framework the classical PID control method.
Keywords
Feedback Law · Static Output Feedback · Stabilization Problem · Asymptotic Controllability · Internal Stability Properties · Lyapunov Matrix Equation

As already pointed out in Chap. 1, the behavior of a system can be regulated, without need of radical changes in its internal plant, by the construction of a suitable device which interacts with the system by means of a feedback connection. The action of such a device may have a static nature (and, in this case, it can be mathematically represented as a function) or a dynamic one (in which case it can be interpreted as an auxiliary system). The feedback connection allows us to exert the control action in an automatic way (i.e., without need of the presence of a human operator), and requires the installation of sensors and actuators.
When all the state variables can be monitored and measured at each time, and all the information about their evolution can be used by the control device, we speak about state feedback. On the contrary, when the information about the state is only partially available (being, for instance, obtained by means of an observation function), we speak about output feedback.
7.1 Static State Feedback
We stress that by virtue of the particular structure of the control (7.2), the transformed system is still of the form (7.1), with the matrix B unchanged, and the matrix A replaced by the new matrix \(\tilde{A}=A+BF\). We also notice that the transformation induced by (7.2) is invertible; indeed, if we apply the feedback law \(v=-Fx+u\) to (7.3) we recover the form (7.1) of the system. Thus, the transformation (7.2) defines an equivalence relation on the set of all the systems of the form (7.1); this fact can be formalized by the following definition.
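As a quick numerical check of this invertibility (a sketch, not from the text; the matrices A, B, F below are arbitrary illustrative data, NumPy assumed):

```python
import numpy as np

# Hypothetical data for a system x' = Ax + Bu of the form (7.1).
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
F = np.array([[-3.0, -2.0]])   # an arbitrary feedback gain

# The feedback u = Fx + v replaces A by A + BF (system (7.3)) ...
A_tilde = A + B @ F
# ... and the inverse feedback v = -Fx + u recovers A exactly.
A_recovered = A_tilde + B @ (-F)
print(np.allclose(A_recovered, A))   # True
```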
Definition 7.1
In this perspective, we can formulate the following problem pattern: assume that we are interested in a certain property, and that this property is not satisfied by the given system (7.1). We wonder whether the property is satisfied by system (7.3), for a suitable choice of the matrix F. More precisely, we want to find conditions under which the qualitative behavior of the given system can be modified in the desired way by means of a convenient feedback connection.
7.1.1 Controllability
As a first example, we ask whether a system can achieve the complete controllability property by means of a feedback transformation (stated differently, whether the same feedback equivalence class may contain systems whose reachable spaces have different dimensions). The answer is negative; indeed, the following theorem holds.
Theorem 7.1
Proof
In conclusion, \(\mathrm{R}_{(7.3)} \subseteq \mathrm{R}_{(7.1)}\), since all the vectors of \(\mathrm{R}_{(7.3)}\) are linear combinations of vectors of \(\mathrm{R}_{(7.1)}\).
The opposite inclusion can be achieved by exchanging the roles of the systems (recall that (7.1) can be recovered from (7.3) by the inverse feedback transformation \(v=-Fx+u\)). \(\blacksquare \)
In other words, Theorem 7.1 states that the complete controllability property is invariant under feedback equivalence.
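This invariance is easy to observe numerically: the rank of the reachability matrix \([B, AB, \ldots , A^{n-1}B]\) does not change when A is replaced by \(A+BF\). A minimal sketch with illustrative data (NumPy assumed):

```python
import numpy as np

def reachability_matrix(A, B):
    """Stack [B, AB, ..., A^{n-1}B]; its column space is the reachable space."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[-1.0, -2.0]])          # any feedback gain

r_open   = np.linalg.matrix_rank(reachability_matrix(A, B))
r_closed = np.linalg.matrix_rank(reachability_matrix(A + B @ F, B))
print(r_open, r_closed)               # the two ranks coincide
```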
7.1.2 Stability
In the previous chapter we tried to characterize those systems of the form (7.1) which enjoy the external stability property. We noticed that this property is intimately linked to the internal stability properties of the system (Hurwitz property). This motivates the effort to elaborate models for which the eigenvalues of the system matrix A lie in the open left half of the complex plane and, in case this condition is not fulfilled, the interest in devising appropriate corrections.
The main purpose of this chapter is to show that feedback connections represent a convenient tool in order to improve the internal stability properties of a system.
7.1.3 Systems with Scalar Input
Consider first the case of a system with scalar input (i.e., with \(m=1\) and B reduced to a column vector b). Our approach is based on the following theorem.
Theorem 7.2
Proof
Recall that the companion form characterizes the system representation of scalar linear differential equations. Theorem 7.2 states therefore that any completely controllable linear system with single input and state space dimension n is linearly equivalent to a system represented by a single linear differential equation of order n. We emphasize that the proof of Theorem 7.2 supplies an explicit expression for the matrix P which determines the similarity between A and its companion form \(A_0\). Indeed, it is immediately seen that \(P=RQ^{-1}\).
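The formula \(P=RQ^{-1}\) (with R and Q the reachability matrices of the given pair and of its companion realization, respectively) can be checked numerically. The sketch below, with hypothetical data, builds a bottom-row companion form from the characteristic polynomial and verifies the similarity:

```python
import numpy as np

def reach(A, b):
    n = A.shape[0]
    cols = [b]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[1.0], [0.0]])           # (A, b) is completely controllable

# Companion realization of the same characteristic polynomial
# (here chi(s) = s^2 - 5s - 2, bottom-row convention).
coeffs = np.poly(A)                    # [1, -5, -2]
n = A.shape[0]
A0 = np.zeros((n, n))
A0[:-1, 1:] = np.eye(n - 1)
A0[-1, :] = -coeffs[:0:-1]             # last row: [-a_0, -a_1]
b0 = np.zeros((n, 1)); b0[-1, 0] = 1.0

R = reach(A, b)
Q = reach(A0, b0)
P = R @ np.linalg.inv(Q)               # the matrix P = R Q^{-1} of the text
print(np.allclose(np.linalg.inv(P) @ A @ P, A0))   # True: similarity holds
```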
Definition 7.2
 (1)

\(1\le k\le n\);
 (2)

\(\lambda _1, \ldots , \lambda _k\) are distinct complex numbers;
 (3)

\(\mu _1, \ldots , \mu _k\) are (not necessarily distinct) positive integers such that \(\mu _1+\cdots +\mu _k=n\);
 (4)

for each \(i\in \{1,\ldots , k\}\) there exists \(j\in \{1,\ldots , k\}\) such that \(\lambda _j=\overline{\lambda _i}\) (the conjugate of \(\lambda _i\)) and \(\mu _i=\mu _j\).
In fact, we have proven something more. For any preassigned real \(n\times n\) matrix M, a completely controllable system with scalar input can always be transformed into a new system, such that the eigenvalues of the matrix of the new system coincide exactly with those of M.
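For scalar-input systems this construction is usually packaged as Ackermann's formula, \(F=-e_n^{\mathbf{t}}R^{-1}p(A)\), where R is the reachability matrix and p the desired characteristic polynomial; this is equivalent in spirit to the companion-form argument, though the text does not name it. A sketch on the double integrator (NumPy assumed):

```python
import numpy as np

def ackermann(A, b, poles):
    """Gain F with eig(A + bF) = poles, for a controllable single-input pair
    (Ackermann's formula)."""
    n = A.shape[0]
    R = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    p = np.poly(poles)                  # desired characteristic polynomial
    pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(p))
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0
    return -(e_n @ np.linalg.inv(R) @ pA)

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double integrator
b = np.array([[0.0], [1.0]])
F = ackermann(A, b, [-1.0, -2.0])
print(np.sort(np.linalg.eigvals(A + b @ F)))   # [-2., -1.]
```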
7.1.3.1 Systems with Multiple Inputs
The discussion of the previous section motivates the following general definitions.
Definition 7.3
We say that (7.1) is stabilizable if there exists a static state feedback \(u=Fx\) such that all the eigenvalues of the matrix \((A+BF)\) have negative real part.
We say that (7.1) is superstabilizable if for each \(\alpha >0\) there exists a static state feedback \(u=Fx\) (with F dependent on \(\alpha \)) such that the real part of each eigenvalue of the matrix \((A+BF)\) is less than \(-\alpha \).
We say that (7.1) has the pole assignment property if for each given consistent 2k-tuple there exists a static state feedback \(u=Fx\) such that the eigenvalues of \(A+BF\) are exactly the numbers \(\lambda _1, \ldots , \lambda _k\), with respective multiplicities \(\mu _1,\ldots ,\mu _k\).
Systems which are superstabilizable are particularly interesting for applications. Indeed for these systems, it is not only possible to construct stabilizing feedback laws, but also to assign an arbitrary decay rate.
We already know that any completely controllable system with a scalar input possesses the pole assignment property, and hence it is stabilizable and superstabilizable. This result can be extended, with some technical complications in the proof, to systems with multiple inputs.
Theorem 7.3
 (i)

complete controllability
 (ii)

pole assignment
 (iii)

superstabilizability.
The reader interested in the full proof of Theorem 7.3 is referred, for instance, to [11], p. 145 or [28], p. 58. It follows in particular from Theorem 7.3 that for any system in the general form (7.1), complete controllability implies stabilizability by static state feedback. We give below an independent and direct proof of this fact.
Proposition 7.1
If (7.1) is completely controllable, then it is stabilizable.
Proof
To finish the proof, we need therefore to try another way. We will resort directly to Theorem 3.1. More precisely, we will show that all the eigenvalues of \((A-BB^{\mathbf{t}}\Gamma ^{-1})^{\mathbf{t}}\) have strictly negative real part. To this end, we take advantage of the previous computations.
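The computations leading to \(\Gamma \) are not reproduced in this excerpt; a classical choice with the same structure (often attributed to Kleinman) is the finite-horizon Gramian \(\Gamma =\int _0^T e^{-As}BB^{\mathbf{t}}e^{-A^{\mathbf{t}}s}\,ds\), which is invertible for a completely controllable pair and makes \(A-BB^{\mathbf{t}}\Gamma ^{-1}\) Hurwitz. A numerical sketch under this assumption (NumPy/SciPy assumed):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: not Hurwitz
B = np.array([[0.0], [1.0]])
T, steps = 1.0, 2000

# Gamma = integral_0^T exp(-As) B B^t exp(-A^t s) ds  (trapezoidal rule)
dt = T / steps
Gamma = np.zeros((2, 2))
for k in range(steps + 1):
    t = k * dt
    E = expm(-A * t) @ B
    w = 0.5 if k in (0, steps) else 1.0
    Gamma += w * dt * (E @ E.T)

F = -B.T @ np.linalg.inv(Gamma)          # the feedback u = -B^t Gamma^{-1} x
closed_eigs = np.linalg.eigvals(A + B @ F)
print(closed_eigs.real)                  # all strictly negative
```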
7.1.4 Stabilizability
The stabilizability property is actually weaker than complete controllability; this can be easily realized by looking at a system for which A is Hurwitz and \(B=0\). In this section we aim to characterize the stabilizability property by means of suitable and easy-to-check conditions.
Theorem 7.4
System (7.19) is stabilizable if and only if the matrix \(A_{22}\) is Hurwitz.
Proof
The set of the eigenvalues of a matrix \(A=\begin{pmatrix}A_{11}&A_{12}\\ 0&A_{22}\end{pmatrix}\) is the union of the sets of the eigenvalues of the matrices \(A_{11}\) and \(A_{22}\). By means of a feedback, there is no way to modify the eigenvalues of the matrix \(A_{22}\). Hence, if the system is stabilizable, \(A_{22}\) must be Hurwitz.
Vice versa, since the subsystem corresponding to the components \(z_1\) is completely controllable, we can construct a feedback \(u=F_1z_1\) (for instance, by the method illustrated in the proof of Proposition 7.1) in such a way that the matrix \(A_{11}+B_1F_1\) is Hurwitz. The matrix \(A_{22}\) is Hurwitz by hypothesis. Hence, the closed-loop matrix, which is block triangular with diagonal blocks \(A_{11}+B_1F_1\) and \(A_{22}\), is Hurwitz as well.\(\blacksquare \)
Next we present (without proof) other necessary and sufficient conditions for stabilization.
Theorem 7.5
Let V be the subspace of \(\mathbf{C}^n\) generated by all the eigenvectors (including the generalized ones) associated to all the eigenvalues \(\lambda \) of A having nonnegative real part. Moreover, let U be the subspace of \(\mathbf{R}^n\) generated by all the vectors of the form \(\mathrm{Re}\,v\) and \(\mathrm{Im}\,v\), with \(v\in V\). System (7.1) is stabilizable if and only if U is contained in its reachable space \(\mathrm{R}\).
Theorem 7.6
It is interesting to compare Theorem 7.6 and Hautus’ controllability criterion (Theorem 5.4).
Theorem 7.7
On the contrary, proving that the same condition is also necessary for stabilizability is more difficult (a proof can be found in [11], p. 133).
Equation (7.20) is called the algebraic Riccati matrix equation associated to system (7.1). We emphasize that (7.20) is nonlinear with respect to the entries of the unknown matrix P. We emphasize also that, once a solution of (7.20) has been found, the feedback law provided by Theorem 7.7 is explicit and simpler than the feedback provided in the proof of Proposition 7.1.
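Equation (7.20) is not reproduced in this excerpt; assuming it is the standard algebraic Riccati equation \(A^{\mathbf{t}}P+PA-PBB^{\mathbf{t}}P+Q=0\) with associated feedback \(u=-B^{\mathbf{t}}Px\), the explicit character of the feedback can be illustrated with SciPy's ARE solver (the data are hypothetical):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [3.0, -1.0]])    # unstable open loop
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

# Solve A^t P + P A - P B B^t P + Q = 0 (taking r = I in SciPy's form).
P = solve_continuous_are(A, B, Q, np.eye(1))

F = -B.T @ P                               # the explicit feedback u = -B^t P x
print(np.linalg.eigvals(A + B @ F).real)   # all strictly negative
```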
Corollary 7.1
On the other hand, if system (7.1) is stabilizable, then for each symmetric, positive definite matrix Q there exists a symmetric, positive definite solution P of the matrix equation (7.22).
Proof
7.1.5 Asymptotic Controllability
Definition 7.4
We say that the system (7.1) is asymptotically controllable if for each \(x_0\in \mathbf{R}^n\) there exists an input map \(u_{x_0}(t)\) such that the corresponding solution x(t) of the problem (7.23) approaches the origin for \(t\rightarrow +\infty \).
The previous reasoning shows that a stabilizable system is asymptotically controllable. But the converse is true as well. Indeed, by an argument similar to that used in the proof of Theorem 7.4, it is not difficult to see that if system (7.1) is asymptotically controllable, then the uncontrollable part of the associated canonical controllability form must be asymptotically stable. Then, the conclusion follows by virtue of Theorem 7.4. We can therefore state a further necessary and sufficient condition for stabilizability.
Theorem 7.8
System (7.1) is stabilizable if and only if it is asymptotically controllable.
7.2 Static Output Feedback
To become familiar with these new difficulties, we examine some simple examples.
Example 7.1
As already suggested, the impossibility of implementing a feedback which uses all the state variables typically arises when we have an observation function. In this example, the feedback \(u=kx_1\) can be interpreted as an output feedback, if we assume an observation function \(y=c{}^\mathbf{t}x\) with \(c=(1 \ 0)\). We emphasize that the system, with respect to this observation function, is completely observable, as well; nevertheless, the system is not stabilizable by an output feedback. \(\blacksquare \)
Example 7.2
Notice that again in this example, the system is completely controllable and completely observable.\(\blacksquare \)
Of course, if a system is stabilizable by a static output feedback, it is stabilizable also by a static state feedback. Hence, all the static state feedback stabilizability conditions listed in the previous section can be viewed as necessary (but no longer sufficient) conditions for static output feedback stabilizability.
7.2.1 Reduction of Dimension
In this section we present a theorem which allows us to simplify, under particular conditions, the study of the static output feedback stabilization problem.
Theorem 7.9
The overall system (7.24) is stabilizable by static output feedback if and only if the following conditions are both satisfied:
 (1)

the matrices \(A_{11}\), \(A_{33}\) and \(A_{44}\) have all the eigenvalues with negative real part;
 (2)
the completely controllable and completely observable part of the system, that is the part corresponding to the subsystem
$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{z}_2= A_{22}z_2+B_2u\\ y =C_2z_2\\ \end{array}\right. } \end{aligned}$$
(7.27)
is stabilizable by static output feedback.
Proof
By virtue of the triangular block form of (7.28), it is clear that the feedback \(u=Ky\) stabilizes the system if and only if the matrices \(A_{11}\), \(\tilde{A}_{22}\), \(A_{33}\) and \(A_{44}\) have all their eigenvalues with negative real part. Taking condition (1) into account, this actually happens if and only if the feedback \(u = Ky= KC_2z_2\) stabilizes the reduced order system (7.27). \(\blacksquare \)
In view of Theorem 7.9, as far as we are interested in the static output feedback stabilization problem, it is not restrictive to assume that the system at hand is completely controllable as well as completely observable. Then, the following sufficient condition may be of some help.
Proposition 7.2
Let the system (7.24) be given. Assume that it is completely controllable, and that the matrix C is invertible. Then, the system is stabilizable by a static output feedback.
Proof
By the complete controllability hypothesis, there exists a matrix K such that the system is stabilizable by a static state feedback \(u=Kx\). We can write \(u=KC^{-1}Cx= KC^{-1}y\). We obtain in this way a static output feedback \(u=Fy\) with \(F=KC^{-1}\) whose effect on the system is the desired one.\(\blacksquare \)
In the previous statement, the assumption that C is invertible implies of course that \(p=n\) and that the system is completely observable, as well.
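A numerical sketch of Proposition 7.2, using scipy.signal.place_poles to produce the state feedback gain (the data are hypothetical; place_poles returns K with \(\mathrm{eig}(A-BK)\) at the requested locations, so the state feedback is \(u=-Kx\)):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [2.0, 0.0]])     # unstable, completely controllable
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0], [1.0, -1.0]])    # invertible, so p = n

K = place_poles(A, B, [-1.0, -2.0]).gain_matrix   # state feedback u = -K x
F = -K @ np.linalg.inv(C)                         # output feedback u = F y
print(np.linalg.eigvals(A + B @ F @ C).real)      # all strictly negative
```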
Example 7.3
As suggested by the previous example, once the reduction of dimension has been performed, if the dimension of the completely controllable and observable part turns out to be small, the existence of static output stabilizers can be checked by direct computation. Another example is given below.
Example 7.4
The following statement is a dual version of Proposition 7.2.
Proposition 7.3
Let the system (7.24) with \(m=n\) be given. Assume that it is completely observable, and that the matrix B is invertible. Then, the system is stabilizable by a static output feedback.
Proof
7.2.2 Systems with Stable Zero Dynamics
Corollary 7.2
Let the dimension of the observable but not controllable part of the system (7.24) be zero. Assume in addition that the matrix \(C_2\) is invertible. Then, the system is stabilizable by static output feedback if and only if the origin is asymptotically stable for the system of the zero dynamics.
7.2.3 A Generalized Matrix Equation
Other sufficient conditions for static output stabilization can be obtained by suitable generalizations of the Riccati matrix equation (7.20). Next we present one such generalization.
Theorem 7.10
Notice that (7.30) reduces to (7.20) when \(C=I\).
Example 7.5
Note that assuming \(Q=I\) in (7.30) would be restrictive, contrary to what happens in the case of the Lyapunov matrix equation (see Theorem 3.3 and Corollary 3.1) and in the case of the Riccati matrix equation. For instance, it is not difficult to check that in the previous example there are no solutions of (7.30) if we set \(Q=I\).
Example 7.6
In conclusion, the system is stabilizable by an output feedback, but the coefficient of the feedback cannot be determined on the basis of Theorem 7.10. \(\blacksquare \)
7.2.4 A Necessary and Sufficient Condition
The static output feedback stabilization problem is sometimes referred to in the literature as an unsolved problem. From a practical point of view, a numerical solution of a nonlinear matrix equation like (7.30) can indeed be very hard to find. On the other hand, theoretical characterizations of systems admitting static output stabilizing feedbacks, expressed in the form of nonlinear generalized Riccati equations, can actually be found in the existing literature. For instance, the following theorem appears in [29].
Theorem 7.11
Proof
Remark 7.1
In [29], condition (7.33) is written in a different, but equivalent, way: indeed, the authors do not use the formalism of generalized inverse.\(\blacksquare \)
Remark 7.2
It is not difficult to see that (7.30) implies (7.33), setting \(M=B{}^\mathbf{t}P\), and using the matrix identity (7.31). However, we notice that with respect to (7.30), the matrix equation (7.33) contains the additional unknown M.\(\blacksquare \)
Remark 7.3
7.3 Dynamic Output Feedback
The practical difficulties encountered in the static output stabilization problem can be overcome by resorting to a different approach, provided that the system is, in principle, stabilizable by means of a static state feedback law and a suitable (but natural) technical condition is met. The new approach we are going to describe in this section is dynamic output feedback.
Definition 7.5
In the figure above, \(\Sigma _1\) and \(\Sigma _2\) denote respectively the differential parts of (7.24) and (7.35).
Example 7.7
The remaining part of this section is devoted to illustrate how the stabilizing compensator can be constructed in practice, for a general system of the form (7.24).
7.3.1 Construction of an Asymptotic Observer
Definition 7.6
We say that system (7.24) has the detectability property (or that it is detectable) if there exists a matrix K of appropriate dimensions such that the matrix \(L^{\mathbf{t}}=A^{\mathbf{t}}-C^{\mathbf{t}}K^{\mathbf{t}}\) is Hurwitz.
A system possesses the detectability property if and only if its dual system is stabilizable by static state feedback. In particular, each completely observable system is detectable.
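The duality remark translates directly into a computation: a gain K making \(A-KC\) Hurwitz can be obtained by placing the poles of the dual pair \((A^{\mathbf{t}}, C^{\mathbf{t}})\). A sketch with illustrative data (scipy.signal.place_poles assumed):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [1.0, 0.0]])    # unstable: eigenvalues +1, -1
C = np.array([[1.0, 0.0]])                # (A, C) completely observable

# Duality: stabilize the dual pair (A^t, C^t) by state feedback ...
G = place_poles(A.T, C.T, [-2.0, -3.0]).gain_matrix
K = G.T
# ... so that L = A - K C is Hurwitz, i.e. the system is detectable.
print(np.linalg.eigvals(A - K @ C).real)  # all strictly negative
```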
Proposition 7.4
Proof
Remark 7.4
Notice that the input of (7.36) is the sum of the same external input received by the given system and the output of the given system. Proposition 7.4 states that, regardless of the initialization of the two systems and assuming that the external input is the same, the solutions of (7.36) asymptotically approximate the solutions of the given system. For this reason, system (7.36) is called an asymptotic observer, and the quantity e(t) introduced in the previous proof is called the error between the true state x(t) and the observed state z(t).\(\blacksquare \)
7.3.2 Construction of the Dynamic Stabilizer
Now assume that system (7.24) is stabilizable by static state feedback, as well as detectable. Under this additional hypothesis, we may find a matrix H such that the matrix \((A+BH)\) is Hurwitz.
If the full state vector is measurable and available for control purposes, we could directly apply the feedback \(u=Hx\) and solve in this way the stabilization problem. Otherwise, it is natural to try the control law \(u=Hz\), where z is the approximation of x provided by the asymptotic observer (7.36).
Proof
Theorem 7.12
If system (7.24) is stabilizable by static state feedback, and if it is detectable, then it is stabilizable by a dynamic output feedback, as well.
Proof
At first glance, Theorem 7.12 seems to suggest that the method of dynamic feedback is more general than the method of static state feedback. As a matter of fact, these two methods are (theoretically but, recall, not practically) equivalent.
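The separation of eigenvalues behind Theorem 7.12 can be observed numerically: in the coordinates \((x, x-z)\) the closed loop is block triangular, so its spectrum is the union of the spectra of \(A+BH\) and \(A-KC\). A sketch with hypothetical data, gains computed via scipy.signal.place_poles:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

H = -place_poles(A, B, [-1.0, -2.0]).gain_matrix        # A + BH Hurwitz
K = place_poles(A.T, C.T, [-3.0, -4.0]).gain_matrix.T   # A - KC Hurwitz

# Closed loop of plant + observer-based compensator, state (x, z):
#   x' = A x + B H z,   z' = (A - K C + B H) z + K C x
M = np.block([[A,     B @ H],
              [K @ C, A - K @ C + B @ H]])
print(np.sort(np.linalg.eigvals(M).real))
# the eigenvalues of A + BH together with those of A - KC
```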
Theorem 7.13
Let the system (7.24) be given, and assume that it is stabilizable by means of a dynamic output feedback (7.35). Then, the system is stabilizable by means of a static state feedback, as well.
Proof
7.4 PID Control
 1.
A feedback proportional to the main variable, that is
$$\begin{aligned} u=k_0\xi \ . \end{aligned}$$
(7.44)
 2.
A feedback proportional to the derivative of the main variable, that is
$$\begin{aligned} u=k_1\dot{\xi }\ . \end{aligned}$$
(7.45)
 3.
An input proportional to the integral of \(\xi (t)\), that is
$$\begin{aligned} u=k_2\int _0^t \xi (\tau )\ d\tau \ . \end{aligned}$$
(7.46)
Here, \(k_0\), \(k_1\), \(k_2\) are suitable real constants, often referred to as the gains. The feedback (7.44) is called a P control. It can be viewed as a static output feedback, assuming that (7.43) is associated to the observation function \(y=\xi \).
The main variable \(\xi =\pi -\theta \) represents the angle with respect to the upward oriented vertical line (see the figure). The control u is exerted by a torque applied to the pivot.
Assume for simplicity that \({{L}\over {g}}=1\). The free system (i.e., with \(u=0\)) is clearly unstable. The system can be stabilized by means of a P control with gain \(k_0<-1\). However, by means of such a control the decay rate cannot be improved, since it depends on the coefficient of the derivative \(\dot{\xi }\), which is not affected by a P control.
Finally, we can easily see that the system is superstabilizable if a PID control is used. Unfortunately, feedbacks involving a D control are not easy to implement, because measuring the derivative of a variable is usually in practice a hard task. Nevertheless, even today PID control is very popular in industrial applications.
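A concluding sketch for the pendulum (assuming the linearization about the upright position reads \(\ddot{\xi }=\xi +u\) with \(L/g=1\); this model is inferred from the discussion, not quoted): a P control with \(k_0<-1\) yields purely imaginary closed-loop eigenvalues (no decay), while adding a D term moves both poles strictly into the left half plane.

```python
import numpy as np

# Assumed linearization about the upright position (L/g = 1):
# xi'' = xi + u, with state (xi, xi').
A = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([[0.0], [1.0]])

# P control alone, u = k0*xi with k0 < -1: purely imaginary eigenvalues,
# so the oscillation never decays.
k0 = -5.0
print(np.linalg.eigvals(A + b @ np.array([[k0, 0.0]])))   # +-2j

# Adding the D term, u = k0*xi + k1*xi', places both poles in Re < 0.
k1 = -4.0
pd = A + b @ np.array([[k0, k1]])
print(np.linalg.eigvals(pd).real)     # both strictly negative
```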
Footnotes
 1.
One such example can be obtained taking \(A=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\), \(b=\begin{pmatrix}0\\ 1\end{pmatrix}\).