# Encyclopedia of Systems and Control

Living Edition
| Editors: John Baillieul, Tariq Samad

# Linear Systems: Discrete-Time, Time-Invariant State Variable Descriptions

Panos J. Antsaklis
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_187-1

## Abstract

Discrete-time processes that can be modeled by linear difference equations with constant coefficients can also be described in a systematic way in terms of state variable descriptions of the form $$x(k + 1) = Ax(k) + Bu(k),\ y(k) = Cx(k) + Du(k)$$. The response of such systems due to a given input and subject to initial conditions is derived. Equivalence of state variable descriptions is also discussed.

## Keywords

Linear systems · Discrete-time · Time-invariant · State variable descriptions

## Introduction

Discrete-time systems arise in a variety of ways in the modeling process. Some systems are inherently defined only at discrete points in time; examples include digital devices, inventory systems, and economic systems such as banking, where interest is calculated and added to savings accounts at discrete time intervals. There are also descriptions of continuous-time systems at discrete points in time; examples include simulations of continuous processes on digital computers and feedback control systems that employ digital controllers and give rise to sampled-data systems.

Linear, discrete-time, time-invariant systems can be modeled via state variable equations, namely,
$$\begin{array}{l} x(k + 1) = Ax(k) + Bu(k);\quad x(0) = x_{0} \\ y(k) = Cx(k) + Du(k)\\ \end{array}$$
(1)
where $$k \in \mathbb{Z}$$, the set of integers; the state vector $$x \in {\mathbb{R}}^{n}$$ is an n-dimensional column vector; $$A \in {\mathbb{R}}^{n\times n}$$, $$B \in {\mathbb{R}}^{n\times m}$$, $$C \in {\mathbb{R}}^{p\times n}$$, and $$D \in {\mathbb{R}}^{p\times m}$$ are real matrices; and $$y(k) \in {\mathbb{R}}^{p}$$ and $$u(k) \in {\mathbb{R}}^{m}$$ are the output and the input, respectively. The vector difference equation in (1) is the state equation, and the algebraic equation is the output equation.

Note that (1) could equivalently have been written as $$x(l) = Ax(l - 1) + Bu(l - 1)$$, where $$l = k + 1$$ and x(l − 1) is an easily visualized delayed version of x(l); this form is more common in signal processing (where a two-sided or bilateral z-transform is used). In control, where we assume a known initial condition at time zero (and a one-sided or unilateral z-transform is taken), the form in (1) is common.

Similar to the continuous-time case, (1) can be derived from a set of high-order difference equations by introducing the state variables $$x(k) = {[x_{1}(k),\ldots ,x_{n}(k)]}^{T}$$. Description (1) can also be derived from continuous-time system descriptions by sampling (see “Sampled Data Systems,” Panos J. Antsaklis and H. L. Trentelman).

The advantage of the above state variable description is that, given any input u(k) and initial condition x(0), its solution (state trajectory or motion) can be conveniently and systematically characterized. This is done below. We first consider the solutions of the homogeneous equation $$x(k + 1) = Ax(k)$$.

## Solving x(k + 1) = Ax(k);  x(0) = x0

Consider the homogeneous equation
$$x(k + 1) = Ax(k);\quad x(0) = x_{0}$$
(2)
where $$k \in {\mathbb{Z}}^{+}$$ is a nonnegative integer, $$x(k) = {[x_{1}(k),\ldots ,x_{n}(k)]}^{T}$$ is the state column vector of dimension n, and A is an n × n matrix with real entries (i.e., $$A \in {\mathbb{R}}^{n\times n}$$).
Write (2) for $$k = 0,1,2,\ldots$$, namely, x(1) = Ax(0), $$x(2) = Ax(1) = {A}^{2}x(0),\ldots$$ to derive the solution
$$x(k) = {A}^{k}x(0),\quad k \geq 0$$
(3)
This result can be shown formally by induction. Note that $$A^{0} = I$$ by convention, so (3) also satisfies the initial condition.
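As a quick numerical check (a minimal NumPy sketch; the matrix A and initial state x0 below are arbitrary illustrative choices), iterating the recursion (2) step by step agrees with the closed-form solution (3):

```python
import numpy as np

# Arbitrary illustrative system matrix and initial state
A = np.array([[0.5, 1.0],
              [0.0, 0.8]])
x0 = np.array([1.0, -2.0])

# Iterate x(k+1) = A x(k) for k = 0, 1, ..., 9
x = x0.copy()
for _ in range(10):
    x = A @ x

# Closed form (3): x(10) = A^10 x(0)
x_closed = np.linalg.matrix_power(A, 10) @ x0

assert np.allclose(x, x_closed)
```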
If the initial time were some integer $$k_{0}$$ instead of zero, then the solution would be
$$x(k) = {A}^{k-k_{0} }x(k_{0}),\quad k \geq k_{0}$$
(4)
The solution can be written as
$$\displaystyle\begin{array}{rcl} x(k)& =& \Phi (k,k_{0})x(k_{0}) \\ & =& \Phi (k - k_{0},0)x(k_{0}),\quad k \geq k_{0}\end{array}$$
(5)
where $$\Phi (k,k_{0})$$ is the state transition matrix and equals $$\Phi (k,k_{0}) = {A}^{k-k_{0}}$$. Note that for time-invariant systems, the initial time $$k_{0}$$ can always be taken to be zero without loss of generality; this is because the behavior depends only on the time elapsed ($$k - k_{0}$$) and not on the actual initial time $$k_{0}$$.

In view of (3), it is clear that $$A^{k}$$ plays an important role in the solutions of the difference state equations that describe linear, discrete-time, time-invariant systems; it is analogous to the role $$e^{At}$$ plays in the solutions of the linear differential state equations that describe linear, continuous-time, time-invariant systems.

Notice that in (3), $$k \geq 0$$. This is because $$A^{k}$$ for k < 0 may not exist; this is the case, for example, when A is a singular matrix, i.e., it has at least one eigenvalue at the origin. In contrast, $$e^{At}$$ exists for any t, positive or negative. The implication is that in discrete-time systems we may not be able to determine uniquely the past initial state x(0) from a current state value x(k); in contrast, in continuous-time systems it is always possible to go backwards in time.
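The loss of information under a singular A can be seen numerically (a small sketch; the matrix and states are arbitrary illustrative choices): two different initial states are mapped to the same x(1), so x(0) cannot be recovered from x(1).

```python
import numpy as np

# Illustrative singular A (one eigenvalue at the origin)
A = np.array([[1.0, 0.0],
              [1.0, 0.0]])

# Two different initial states...
xa = np.array([2.0, 5.0])
xb = np.array([2.0, -3.0])

# ...produce the identical next state x(1) = A x(0)
assert np.allclose(A @ xa, A @ xb)

# A is not invertible, so there is no way to step backwards in time
assert abs(np.linalg.det(A)) < 1e-12
```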

There are several methods to calculate $$A^{k}$$ that mirror the methods to calculate $$e^{At}$$. One could, for example, use similarity transformations or the z-transform. When all eigenvectors of A are linearly independent (this is the case, e.g., when all eigenvalues $$\lambda_{i}$$ of A are distinct), then a similarity transformation exists so that
$$PA{P}^{-1} =\tilde{ A} = \text{diag}[\lambda _{ i}].$$
Then
$${A}^{k} = {P}^{-1}\tilde{{A}}^{k}P = {P}^{-1}\left [\begin{array}{*{20}{c}} \lambda _{1}^{k}&& \\ &\ddots & \\ &&\lambda _{n}^{k} \end{array} \right ]P.$$
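The diagonalization route can be sketched with NumPy (the matrix A below is an arbitrary illustrative choice with distinct eigenvalues; `np.linalg.eig` returns the right eigenvectors as columns of V, so V plays the role of $$P^{-1}$$ above):

```python
import numpy as np

# Illustrative A with distinct eigenvalues 0.2 and 0.4
A = np.array([[0.0, 1.0],
              [-0.08, 0.6]])

lam, V = np.linalg.eig(A)   # columns of V are right eigenvectors
# With P = V^{-1}: P A P^{-1} = diag(lam), hence A^k = P^{-1} diag(lam^k) P
k = 7
Ak = V @ np.diag(lam**k) @ np.linalg.inv(V)

assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```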
Alternatively, using the z-transform, $${A}^{k} = {\mathcal{Z}}^{-1}\{z{(zI - A)}^{-1}\}$$. Also, when the eigenvalues $$\lambda_{i}$$ of A are distinct, then
$${A}^{k} =\displaystyle\sum \limits _{ i=1}^{n}A_{ i}\lambda _{i}^{k},$$
where $$A_{i} = v_{i}\tilde{v}_{i}$$, with $$v_{i}$$ and $$\tilde{v}_{i}$$ the right and left eigenvectors of A that correspond to $$\lambda_{i}$$. Note that
$$\left [\begin{array}{*{20}{c}} \tilde{v}_{1}\\ \vdots \\ \tilde{v}_{n} \end{array} \right ] ={ \left [\begin{array}{*{20}{c}} v_{1} & \cdots &v_{n} \end{array} \right ]}^{-1},$$
The terms $$A_{i}\lambda _{i}^{k}$$ are the modes of the system. One could also use the Cayley-Hamilton theorem to determine $$A^{k}$$.
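The modal expansion can also be checked numerically (a minimal sketch; the matrix A is an arbitrary illustrative choice with distinct eigenvalues). The rows of $$V^{-1}$$ supply the left eigenvectors, matching the relation between $$\tilde{v}_{i}$$ and $$v_{i}$$ displayed above:

```python
import numpy as np

# Illustrative A with distinct eigenvalues 0.2 and 0.4
A = np.array([[0.0, 1.0],
              [-0.08, 0.6]])

lam, V = np.linalg.eig(A)   # right eigenvectors v_i as columns of V
W = np.linalg.inv(V)        # rows of V^{-1} are the left eigenvectors

k = 5
# A^k as a sum of modes A_i * lam_i^k, with A_i = v_i (outer) w_i
Ak = sum(lam[i]**k * np.outer(V[:, i], W[i, :]) for i in range(len(lam)))

assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```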

## System Response

Consider the description (1). The response can be easily derived by writing the equation for $$k = 0,1,2,\ldots$$ and substituting or formally by induction. It is
$$x(k) = {A}^{k}x(0) +\displaystyle\sum \limits _{ j=0}^{k-1}{A}^{k-(j+1)}Bu(j),\quad k > 0$$
(6)
and
$$y(k) = C{A}^{k}x(0) +\displaystyle\sum \limits _{ j=0}^{k-1}C{A}^{k-(j+1)}Bu(j) + Du(k),\quad k > 0;\qquad y(0) = Cx(0) + Du(0).$$
(7)
Note that (6) can also be written as
$$x(k) = {A}^{k}x(0)+[B,AB,\cdots \,,{A}^{k-1}B]\left [\begin{array}{*{20}{c}} u(k - 1)\\ \vdots \\ u(0) \end{array} \right ].$$
(8)
Clearly the response is the sum of two components, one due to the initial condition (state response) and one due to the input (input response). This illustrates the linear system principle of superposition.
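The two components can be exhibited numerically (a minimal sketch; the system matrices, initial state, and step input below are arbitrary illustrative choices): simulating (1) directly agrees with the closed-form state response (6).

```python
import numpy as np

# Illustrative single-input system
A = np.array([[0.5, 1.0],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([[1.0],
               [0.0]])
u = [np.array([[1.0]]) for _ in range(5)]   # step input u(k) = 1

# Simulate the state equation in (1) directly
x = x0
for j in range(5):
    x = A @ x + B @ u[j]

# Closed form (6): x(5) = A^5 x(0) + sum_{j=0}^{4} A^{5-(j+1)} B u(j)
k = 5
x_closed = np.linalg.matrix_power(A, k) @ x0 + sum(
    np.linalg.matrix_power(A, k - (j + 1)) @ B @ u[j] for j in range(k))

assert np.allclose(x, x_closed)
```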
If the initial time is k 0 and (4) is used, then
$$y(k) = C{A}^{k-k_{0} }x(k_{0}) +\displaystyle\sum \limits _{ j=k_{0}}^{k-1}C{A}^{k-(j+1)}Bu(j) + Du(k),\quad k > k_{0};\qquad y(k_{0}) = Cx(k_{0}) + Du(k_{0}).$$
(9)

## Equivalence of State Variable Descriptions

Given description (1), consider the new state vector $$\tilde{x}$$ where
$$\tilde{x}(k) = Px(k)$$
with $$P \in {\mathbb{R}}^{n\times n}$$ a real nonsingular matrix.
Substituting $$x = {P}^{-1}\tilde{x}$$ in (1), we obtain
$$\begin{array}{l} \tilde{x}(k + 1) =\tilde{ A}\tilde{x}(k) +\tilde{ B}u(k) \\ y(k) =\tilde{ C}\tilde{x}(k) +\tilde{ D}u(k) \end{array}$$
(10)
where
$$\tilde{A} = PA{P}^{-1},\quad \tilde{B} = PB,\quad \tilde{C} = C{P}^{-1},\quad \tilde{D} = D$$

The state variable descriptions (1) and (10) are called equivalent, and P is the equivalence transformation matrix. This transformation corresponds to a change of basis in the state space, which is a vector space. By appropriately selecting P, one can simplify the structure of $$\tilde{A}\ (= PA{P}^{-1})$$. It can easily be shown that equivalent descriptions give rise to the same discrete impulse response and transfer function.
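The invariance of the impulse response can be verified numerically (a minimal sketch; the system matrices and the nonsingular P below are arbitrary illustrative choices): the Markov parameters D, CB, CAB, ... are unchanged by the transformation.

```python
import numpy as np

# Illustrative system and a nonsingular change of basis P
A = np.array([[0.5, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.2]])
P = np.array([[1.0, 2.0], [0.0, 1.0]])
Pinv = np.linalg.inv(P)

# Equivalent description: A~ = P A P^{-1}, B~ = P B, C~ = C P^{-1}, D~ = D
At, Bt, Ct, Dt = P @ A @ Pinv, P @ B, C @ Pinv, D

def markov(A, B, C, D, n):
    """First n+1 Markov parameters (discrete impulse response)."""
    return [D] + [C @ np.linalg.matrix_power(A, k) @ B for k in range(n)]

# Both descriptions produce the same discrete impulse response
for h, ht in zip(markov(A, B, C, D, 5), markov(At, Bt, Ct, Dt, 5)):
    assert np.allclose(h, ht)
```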

## Summary

State variable descriptions for discrete-time, time-invariant systems were introduced and the state and output responses to inputs and initial conditions were derived. The equivalence of state variable representations was also discussed.

## Recommended Reading

The state variable descriptions received wide acceptance in systems theory beginning in the late 1950s. This was primarily due to the work of R.E. Kalman. For historical comments and extensive references, see Kailath (1980). The use of state variable descriptions in systems and control opened the way for the systematic study of systems with multi-inputs and multi-outputs.

## Bibliography

1. Antsaklis PJ, Michel AN (2006) Linear systems. Birkhäuser, Boston
2. Franklin GF, Powell DJ, Workman ML (1998) Digital control of dynamic systems, 3rd edn. Addison-Wesley Longman, Menlo Park
3. Kailath T (1980) Linear systems. Prentice-Hall, Englewood Cliffs
4. Rugh WJ (1996) Linear systems theory, 2nd edn. Prentice-Hall, Englewood Cliffs