Encyclopedia of Systems and Control

Living Edition
| Editors: John Baillieul, Tariq Samad

Linear State Feedback

  • Panos Antsaklis
  • Alessandro Astolfi
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_196-1


Feedback is a fundamental mechanism in nature and central in the control of systems. The state contains important system information, and applying a control law that uses state information is a very powerful control policy. To illustrate the effect of feedback in linear systems, continuous-time and discrete-time state variable descriptions are used: these allow one to write explicitly the resulting closed-loop descriptions and to study the effect of feedback on the eigenvalues of the closed-loop system. The eigenvalue assignment problem is also discussed.


Keywords: Linear systems; State variables; Feedback; State feedback


Feedback is a fundamental mechanism arising in nature. Feedback is also common in engineered systems and is essential in the automatic control of dynamic processes with uncertainties in their model descriptions and in their interactions with the environment. When feedback is used, the actual values of the system variables are sensed, fed back, and used to control the system. That is, a control law decision process is based not only on predictions of the system behavior derived from a process model (as in open-loop control) but also on information about the actual behavior (closed-loop feedback control).

Linear Continuous-Time Systems

Consider, to begin with, time-invariant systems described by the state variable description
$$\dot{x} = Ax + Bu,\qquad y = Cx + Du, \tag{1}$$
in which \(x(t) \in{\mathbb{R}}^{n}\) is the state, \(u(t) \in{\mathbb{R}}^{m}\) is the input, \(y(t) \in{\mathbb{R}}^{p}\) is the output, and \(A \in{\mathbb{R}}^{n\times n}\), \(B \in{\mathbb{R}}^{n\times m}\), \(C \in{\mathbb{R}}^{p\times n}\), \(D \in{\mathbb{R}}^{p\times m}\) are constant matrices. In this case, the linear state feedback (lsf) control law is selected as
$$u(t) = Fx(t) + r(t), \tag{2}$$
where \(F \in{\mathbb{R}}^{m\times n}\) is the constant gain matrix and \(r(t) \in{\mathbb{R}}^{m}\) is a new external input.
Substituting (2) into (1) yields the closed-loop state variable description, namely,
$$\displaystyle\begin{array}{rcl} \dot{x}& =& (A + BF)x + Br, \\ y& =& (C + DF)x + Dr.\end{array} \tag{3}$$
By appropriately selecting F, primarily to modify A + BF, one shapes and improves the behavior of the closed-loop system.
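As a numerical sketch of this effect, the following fragment (using NumPy; the matrices A, B and the gain F are illustrative assumptions, not taken from the text) compares the eigenvalues of A with those of A + BF:

```python
import numpy as np

# Illustrative (assumed) second-order system: the open loop is unstable.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
F = np.array([[-4.0, -2.0]])  # hand-picked gain for this example

open_loop = np.linalg.eigvals(A)            # eigenvalues of A: 1 and -2
closed_loop = np.linalg.eigvals(A + B @ F)  # eigenvalues of A + BF: -1 and -2

print(sorted(open_loop.real))    # one eigenvalue in Re(s) > 0
print(sorted(closed_loop.real))  # both eigenvalues in Re(s) < 0
```

Here the open-loop system has an eigenvalue at s = 1 and is therefore unstable, while the chosen F moves the closed-loop eigenvalues to −1 and −2.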
A number of comments are in order:
  1. Feeding back the information contained in the state x of the system is, as one would expect, an effective way to alter the system behavior. This is because knowledge of the (initial) state and the input uniquely determines the system’s future behavior; intuitively, using the state information should therefore be a good way to control the system, i.e., to modify its behavior.

  2. In a state feedback control law, the input u can be any function of the state, u = f(x, r), not necessarily linear with constant gain F as in (2). Typically, given (1), the linear state feedback (2) is selected primarily because the resulting closed-loop description (3) is also a linear time-invariant system. Depending on the application needs, however, the state feedback control law can be more complex.

  3. Although the Eqs. (3) that describe the closed-loop behavior are different from Eq. (1), this does not imply that the system parameters have changed. Feedback control acts not by actually changing the system parameters A, B, C, D but by changing u so that the closed-loop system behaves as if the parameters had changed. When one applies, say, a step via r(t) in the closed-loop system, u(t) in (2) is modified appropriately so that the system behaves in the desired way.

  4. It is possible to implement u in (2) as an open-loop control law, namely,
    $$\hat{u}(s) = F{[sI - (A + BF)]}^{-1}x(0) + {[I - F{(sI - A)}^{-1}B]}^{-1}\hat{r}(s), \tag{4}$$
    where Laplace transforms have been used for notational convenience. Equation (4) produces exactly the same input as Eq. (2), but it has the serious disadvantage that it is based exclusively on prior knowledge of the system (notably x(0) and the parameters A, B). As a result, when there are uncertainties (and there always are), the open-loop control law (4) may fail where the closed-loop control law (2) succeeds.
  5. Analogous definitions exist for continuous-time, time-varying systems described by the equations
    $$\dot{x} = A(t)x + B(t)u,\quad y = C(t)x + D(t)u.$$
    In this framework, the control law is described by
    $$u = F(t)x + r,$$
    and the resulting closed-loop system is
    $$\displaystyle\begin{array}{rcl} \dot{x} = [A(t) + B(t)F(t)]x + B(t)r,& & \\ y = [C(t) + D(t)F(t)]x + D(t)r.& & \end{array}$$

Linear Discrete-Time Systems

For the discrete-time, time-invariant case, the system description is
$$x(k + 1) = Ax(k) + Bu(k),\quad y(k) = Cx(k) + Du(k), \tag{8}$$
the linear state feedback control law is defined as
$$u(k) = Fx(k) + r(k),$$
and the closed-loop system is described by
$$\displaystyle\begin{array}{rcl} x(k + 1) = (A + BF)x(k) + Br(k),& & \\ y(k) = (C + DF)x(k) + Dr(k).& &\end{array}$$
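The discrete-time closed loop above is easy to simulate directly. The following sketch (NumPy; the matrices are illustrative assumptions) iterates x(k+1) = Ax(k) + Bu(k) with u(k) = Fx(k) and r(k) = 0:

```python
import numpy as np

# Illustrative (assumed) discrete-time system with an eigenvalue outside
# the unit circle; the gain F moves it to 0.5.
A = np.array([[1.2, 0.0],
              [0.5, 0.9]])
B = np.array([[1.0],
              [0.0]])
F = np.array([[-0.7, 0.0]])

x = np.array([1.0, 1.0])
for k in range(50):              # r(k) = 0: pure state feedback
    u = F @ x                    # u(k) = F x(k)
    x = A @ x + B @ u            # x(k+1) = A x(k) + B u(k)

print(np.abs(np.linalg.eigvals(A + B @ F)))  # magnitudes 0.5 and 0.9, both < 1
print(np.linalg.norm(x))                     # state has decayed toward zero
```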
Similarly, for the discrete-time, time-varying case
$$\displaystyle\begin{array}{rcl} x(k + 1) = A(k)x(k) + B(k)u(k),& & \\ y(k) = C(k)x(k) + D(k)u(k),& &\end{array}$$
the control law is defined as
$$u(k) = F(k)x(k) + r(k),$$
and the resulting closed-loop system is
$$\displaystyle\begin{array}{rcl} x(k + 1) = [A(k) + B(k)F(k)]x(k) + B(k)r(k),& & \\ y(k) = [C(k) + D(k)F(k)]x(k) + D(k)r(k).& &\end{array}$$

Selecting the Gain F

The gain F (or F(t)) is selected so that the closed-loop system has certain desirable properties, stability being of major importance. Many control problems are addressed using linear state feedback, including tracking and regulation, diagonal decoupling, and disturbance rejection; here we focus on stability, which can be achieved under appropriate controllability assumptions. In the time-varying case, one way to determine a stabilizing F(t) (or F(k)) is to use results from optimal linear quadratic regulator (LQR) theory, which yields the “best” F(t) (or F(k)) in a well-defined sense.

In the time-invariant case, one can also use an LQR formulation, but here stabilization is equivalent to the problem of assigning the n eigenvalues of A + BF to the stable region of the complex plane. If \(\lambda_{i}\), i = 1, …, n, are the eigenvalues of A + BF, then F should be chosen so that, for all i = 1, …, n, \(\mathrm{Re}(\lambda_{i}) < 0\) in the continuous-time case and \(\left \vert \lambda _{i}\right \vert < 1\) in the discrete-time case. Eigenvalue assignment is therefore an important problem, which is discussed hereafter.
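As an illustration of the LQR route to a stabilizing gain, the following sketch (SciPy; the double-integrator example and the weights Q, R are assumptions, not from the text) solves the continuous-time algebraic Riccati equation and forms F = −R⁻¹BᵀP:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed example: double integrator with identity/unit weights.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weight (a design choice)
R = np.array([[1.0]])    # input weight (a design choice)

P = solve_continuous_are(A, B, Q, R)  # stabilizing ARE solution
F = -np.linalg.solve(R, B.T @ P)      # u = Fx with F = -R^{-1} B^T P

eigs = np.linalg.eigvals(A + B @ F)
print(eigs.real)  # both strictly negative: the closed loop is stable
```

For this example the well-known solution is F = [−1, −√3], placing the closed-loop eigenvalues at \((-\sqrt{3} \pm j)/2\).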

Eigenvalue Assignment Problem

For continuous-time and discrete-time, time-invariant systems, the eigenvalue assignment problem can be posed as follows. Given matrices \(A \in{\mathbb{R}}^{n\times n}\) and \(B \in{\mathbb{R}}^{n\times m}\), find \(F \in{\mathbb{R}}^{m\times n}\) such that the eigenvalues of A + BF are assigned to arbitrary locations, subject only to the constraint that complex locations appear in conjugate pairs. Indeed, the characteristic polynomial of A + BF, namely, \(\mathrm{det}\left (sI - (A + BF)\right )\), has real coefficients, which implies that any complex eigenvalue is part of a pair of complex conjugate eigenvalues.

Theorem 1

The eigenvalue assignment problem has a solution if and only if the pair (A,B) is reachable.

For single-input systems, that is, for systems with m = 1, the eigenvalue assignment problem has a simple solution, as illustrated in the following statement:

Proposition 1

Consider system (1) or (8). Let m = 1. Assume that
$$\text{rank}\ R = n,$$
$$R = [B, AB, \ldots, {A}^{n-1}B],$$
that is, the system is reachable. Let p(s) be a desired monic polynomial of degree n. Then there is a (unique) linear state feedback gain matrix F such that the characteristic polynomial of A + BF is equal to p(s). Such linear state feedback gain matrix F is given by
$$F = -\left [\begin{array}{cccc} 0&\cdots &0&1 \end{array} \right ]{R}^{-1}p(A). \tag{14}$$

Proposition 1 provides a constructive way to assign the characteristic polynomial, hence the eigenvalues, of the matrix A + BF. Equation (14) is known as Ackermann’s formula. Note that, for low-order systems, i.e., if n = 2 or n = 3, it may be convenient to compute the characteristic polynomial of A + BF directly and then obtain F by matching coefficients, i.e., F should be such that the coefficients of the polynomials det(sI − (A + BF)) and p(s) coincide.
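A minimal NumPy implementation of Ackermann’s formula, with an illustrative second-order example (the function name `acker` and the test matrices are assumptions introduced here for illustration):

```python
import numpy as np

def acker(A, B, poles):
    """Ackermann's formula: returns F with eig(A + B F) at `poles` (m = 1)."""
    n = A.shape[0]
    # Reachability matrix R = [B, AB, ..., A^{n-1}B]
    R = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    if np.linalg.matrix_rank(R) < n:
        raise ValueError("the pair (A, B) is not reachable")
    # Evaluate the desired monic polynomial p at A (Horner's scheme)
    pA = np.zeros_like(A)
    for c in np.poly(poles):
        pA = pA @ A + c * np.eye(n)
    e = np.zeros((1, n))
    e[0, -1] = 1.0                      # selector [0 ... 0 1]
    return -e @ np.linalg.solve(R, pA)  # F = -[0 ... 0 1] R^{-1} p(A)

# Illustrative (assumed) reachable pair; place the eigenvalues at -1 and -2.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
F = acker(A, B, [-1.0, -2.0])
print(F)  # [[-4. -2.]]
print(sorted(np.linalg.eigvals(A + B @ F).real))
```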

The result summarized in Proposition 1 can be extended to multi-input systems.

Proposition 2

Consider system (1) or (8). Suppose
$$\text{rank}\ R = n,$$
that is, the system is reachable. Let p(s) be a monic polynomial of degree n. Then there is a linear state feedback gain matrix F such that the characteristic polynomial of A + BF is equal to p(s).

Note that in the case m > 1 the linear state feedback gain matrix F assigning the characteristic polynomial of the matrix A + BF is not unique. To compute such a gain matrix, one may exploit the following fact:

Lemma 1

Consider system (1). Suppose
$$\text{rank}\ R = n,$$
that is, the system is reachable. Let \(b_{i}\) be a nonzero column of the matrix B. Then there is a matrix G such that the single-input system
$$\dot{x} = (A + BG)x + b_{i}v \tag{15}$$
is reachable. Similar results are true for discrete-time systems (8).
Exploiting Lemma 1, it is possible to design a matrix F such that the characteristic polynomial of A + BF equals some monic polynomial p(s) of degree n in two steps. First, we compute a matrix G such that the system (15) is reachable, and then we use Ackermann’s formula to compute a linear state feedback gain matrix F such that the characteristic polynomial of
$$A + BG + b_{i}F$$
is p(s). Note also that if (A, B) is reachable, then, under mild conditions on A, there exists a vector g such that (A, Bg) is reachable.
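The two-step procedure based on Lemma 1 can be sketched as follows (NumPy; the matrices A, B, the column b_1, and the coupling matrix G are illustrative assumptions, chosen so that (A, b_1) alone is not reachable but (A + BG, b_1) is):

```python
import numpy as np

def acker(A, b, poles):
    # Single-input Ackermann formula, as in Proposition 1.
    n = A.shape[0]
    R = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    pA = np.zeros_like(A)
    for c in np.poly(poles):
        pA = pA @ A + c * np.eye(n)
    e = np.zeros((1, n))
    e[0, -1] = 1.0
    return -e @ np.linalg.solve(R, pA)

# Assumed two-input example: (A, b_1) alone is not reachable.
A = np.diag([1.0, 2.0, 3.0])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b1 = B[:, [0]]

# Step 1: a G (here chosen by inspection) making (A + BG, b_1) reachable.
G = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])

# Step 2: single-input assignment on (A + BG, b_1), then combine the gains:
# u = Gx + e_1 v with v = f x gives the overall gain F = G + e_1 f.
f = acker(A + B @ G, b1, [-1.0, -2.0, -3.0])
F = G + np.array([[1.0], [0.0]]) @ f

print(sorted(np.linalg.eigvals(A + B @ F).real))  # close to [-3, -2, -1]
```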

There are many other methods to assign the eigenvalues which may be found in the references below.

Transfer Functions

If H F (s) is the transfer function matrix of the closed-loop system (3), it is of interest to find its relation to the open-loop transfer function H(s) of (1). It can be shown that
$$\displaystyle\begin{array}{rcl} H_{F}(s)& =& H(s)[I - F{(sI - A)}^{-1}B{]}^{-1} \\ & =& H(s)[F{(sI - (A + BF))}^{-1}B + I]. \\ \end{array}$$

In the single-input, single-output case, it can be readily shown that the linear state feedback control law (2) changes only the coefficients of the denominator polynomial of the transfer function (this result also holds in the multi-input, multi-output case). Therefore, if any of the (stable) zeros of H(s) need to be changed, the only way to accomplish this via linear state feedback is by pole-zero cancellation (assigning closed-loop poles at the open-loop zero locations; in the MIMO case, closed-loop eigenvalue directions also need to be assigned for cancellations to take place). Note that the unstable zeros of H(s) cannot be changed while preserving stability, since they would have to be canceled with unstable poles.
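The invariance of the numerator under state feedback can be checked numerically. The following sketch (using SciPy’s `ss2tf`; all matrices are illustrative assumptions) compares the open-loop and closed-loop transfer functions of a SISO example:

```python
import numpy as np
from scipy.signal import ss2tf

# Assumed SISO example.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
F = np.array([[-4.0, -2.0]])

num_ol, den_ol = ss2tf(A, B, C, D)                  # open loop:   H(s)
num_cl, den_cl = ss2tf(A + B @ F, B, C + D @ F, D)  # closed loop: H_F(s)

print(np.allclose(num_ol, num_cl))  # True:  numerator (zeros) unchanged
print(np.allclose(den_ol, den_cl))  # False: denominator (poles) moved
```

Here H(s) = (s + 1)/(s² + s − 2), and the feedback moves the poles to −1 and −2 while the zero at −1 stays put.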

Observer-Based Dynamic Controllers

When the state x is not available for feedback, an asymptotic estimator (a Luenberger observer) is typically used to estimate the state. The estimate \(\tilde{x}\) of the state, instead of the actual x, is then used in (2) to control the system, in what is known as the certainty equivalence architecture.
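A minimal discrete-time sketch of the certainty equivalence architecture follows (NumPy; all matrices and gains are illustrative assumptions, with the feedback gain placing the closed-loop eigenvalues at 0.5 and the observer gain placing the estimation-error eigenvalues at 0.2):

```python
import numpy as np

# Assumed discrete-time plant; F places eig(A + BF) at 0.5 (twice),
# L places eig(A - LC) at 0.2 (twice).
A = np.array([[1.1, 0.2],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[-1.8, -1.0]])
L = np.array([[1.6], [2.45]])

x = np.array([[1.0], [-1.0]])   # true state (unknown to the controller)
x_hat = np.zeros((2, 1))        # observer state, initialized at zero

for k in range(200):
    u = F @ x_hat                                    # certainty equivalence
    y = C @ x                                        # measured output
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)  # Luenberger observer
    x = A @ x + B @ u                                # plant update

print(np.linalg.norm(x), np.linalg.norm(x - x_hat))  # both decay toward zero
```

By the separation principle, the feedback and observer eigenvalues can be assigned independently, so both the state and the estimation error converge to zero.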


The notion of state feedback for linear systems has been discussed. It has been shown that state feedback modifies the closed-loop behavior. The related problem of eigenvalue assignment has been discussed, and its connection with the reachability (controllability) properties of the system has been highlighted. The class of feedback laws considered is the simplest possible one. If additional constraints on the input signal or on the closed-loop performance are imposed, for example, if the input signal is bounded in amplitude or rate, then one may have to resort to nonlinear state feedback. If requirements such as decoupling of the system into m noninteracting subsystems or tracking with asymptotic stability are imposed, then dynamic state feedback may be necessary.



  1. Antsaklis PJ, Michel AN (2006) Linear systems. Birkhäuser, Boston
  2. Chen CT (1984) Linear system theory and design. Holt, Rinehart and Winston, New York
  3. DeCarlo RA (1989) Linear systems. Prentice-Hall, Englewood Cliffs
  4. Hespanha JP (2009) Linear systems theory. Princeton University Press, Princeton
  5. Kailath T (1980) Linear systems. Prentice-Hall, Englewood Cliffs
  6. Rugh WJ (1996) Linear systems theory, 2nd edn. Prentice-Hall, Englewood Cliffs
  7. Wonham WM (1967) On pole assignment in multi-input controllable linear systems. IEEE Trans Autom Control AC-12:660–665
  8. Wonham WM (1985) Linear multivariable control: a geometric approach, 3rd edn. Springer, New York

Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

  1. Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN, USA
  2. Department of Electrical and Electronic Engineering, Imperial College London, London, UK
  3. Dipartimento di Ingegneria Civile e Ingegneria Informatica, Università di Roma Tor Vergata, Rome, Italy