# Encyclopedia of Systems and Control

Living Edition
| Editors: John Baillieul, Tariq Samad

# Linear Systems: Discrete-Time, Time-Varying, State Variable Descriptions

• P. J. Antsaklis
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_191-1

## Abstract

Discrete-time processes that can be modeled by linear difference equations with time-varying coefficients can be written in terms of state variable descriptions of the form $$x(k + 1) = A(k)x(k) + B(k)u(k),\ y(k) = C(k)x(k) + D(k)u(k)$$. The response of such systems due to a given input and initial conditions is derived. Equivalence of state variable descriptions is also discussed.

## Keywords

Linear systems · Discrete-time · Time-varying · State variable descriptions

## Introduction

Discrete-time systems arise in a variety of ways in the modeling process. There are systems that are inherently defined only at discrete points in time; examples include digital devices, inventory systems, and economic systems such as banking, where interest is calculated and added to savings accounts at discrete time intervals. There are also models that describe continuous-time systems at discrete points in time; examples include simulations of continuous processes on digital computers and feedback control systems that employ digital controllers and give rise to sampled-data systems.

Dynamical processes that can be described or approximated by linear difference equations with time-varying coefficients can also be described, via a change of variables, by state variable descriptions of the form
$$\begin{array}{c} x(k + 1) = A(k)x(k) + B(k)u(k);\quad x(k_{0}) = x_{0} \\ y(k) = C(k)x(k) + D(k)u(k).\end{array}$$
(1)
Above, the state vector x(k) ($$k \in \mathbb{Z}$$, the set of integers) is a column vector of dimension n ($$x(k) \in {\mathbb{R}}^{n}$$); the output is $$y(k) \in {\mathbb{R}}^{p}$$ and the input is $$u(k) \in {\mathbb{R}}^{m}$$. A(k), B(k), C(k), and D(k) are matrices whose entries are functions of time k, $$A(k) = [a_{ij}(k)]$$ with $$a_{ij}(k) : \mathbb{Z} \rightarrow \mathbb{R}$$ ($$A(k) \in {\mathbb{R}}^{n\times n}$$, $$B(k) \in {\mathbb{R}}^{n\times m}$$, $$C(k) \in {\mathbb{R}}^{p\times n}$$, $$D(k) \in {\mathbb{R}}^{p\times m}$$). The vector difference equation in (1) is the state equation, while the algebraic equation is the output equation. Note that in the time-invariant case, A(k) = A, B(k) = B, C(k) = C, and D(k) = D.
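As a concrete illustration, the recursion in (1) can be simulated directly. The sketch below assumes the matrices A(k), B(k), C(k), D(k) and the input u(k) are supplied as Python callables of k; this interface and the function name are choices made here for illustration, not prescribed by the text:

```python
import numpy as np

def simulate(A, B, C, D, x0, u, k0, N):
    """Propagate x(k+1) = A(k)x(k) + B(k)u(k), y(k) = C(k)x(k) + D(k)u(k)
    for N steps starting from x(k0) = x0.  A, B, C, D, u are callables of k."""
    x = np.asarray(x0, dtype=float)
    xs = [x]                               # x(k0), x(k0+1), ..., x(k0+N)
    ys = [C(k0) @ x + D(k0) @ u(k0)]       # y(k0), y(k0+1), ..., y(k0+N)
    for k in range(k0, k0 + N):
        x = A(k) @ x + B(k) @ u(k)         # state equation
        xs.append(x)
        ys.append(C(k + 1) @ x + D(k + 1) @ u(k + 1))  # output equation
    return xs, ys
```

For instance, for a scalar system with A(k) = 0.5, B(k) = C(k) = 1, D(k) = 0, a unit-step input, and x(0) = 0, the recursion gives x(1) = 1, x(2) = 1.5, and so on.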

The advantage of the state variable description (1) is that given an input u(k), $$k \geq k_{0}$$, and an initial condition $$x(k_{0}) = x_{0}$$, the state trajectories or motions for $$k \geq k_{0}$$ can be conveniently characterized. To determine the expressions, we first consider the homogeneous state equation and the corresponding initial value problem.

## Solving x(k + 1) = A(k)x(k); x(k₀) = x₀

Consider the homogeneous equation
$$x(k + 1) = A(k)x(k);\quad x(k_{0}) = x_{0}$$
(2)
Note that
$$\displaystyle\begin{array}{rcl} x(k_{0} + 1)& =& A(k_{0})x(k_{0}) \\ x(k_{0} + 2)& =& A(k_{0} + 1)A(k_{0})x(k_{0}) \\ & \vdots & \\ x(k)& =& A(k - 1)A(k - 2)\cdots A(k_{0})x(k_{0}) \\ & =& \displaystyle\prod _{j=k_{0}}^{k-1}A(j)\,x(k_{0}),\quad k > k_{0} \\ \end{array}$$
This result can be shown formally by induction; the product is understood as ordered, with the factor $$A(j)$$ of larger index j appearing to the left. The solution of (2) is then
$$x(k) = \Phi (k,k_{0})x(k_{0}),$$
(3)
where $$\Phi (k,k_{0})$$ is the state transition matrix of (2) given by
$$\Phi (k,k_{0}) = \displaystyle\prod _{j=k_{0}}^{k-1}A(j),\quad k > k_{0};\quad \Phi (k_{0},k_{0}) = I.$$
(4)
Note that in the time-invariant case, $$\Phi (k,k_{0}) = {A}^{k-k_{0}}$$.
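The ordered product in (4) translates directly into code. Below is a minimal sketch; the function name and the callable interface for A are illustrative assumptions:

```python
import numpy as np

def phi(A, k, k0):
    """State transition matrix Phi(k, k0) = A(k-1) A(k-2) ... A(k0),
    with Phi(k0, k0) = I.  A is a callable returning the n x n matrix at time j."""
    n = A(k0).shape[0]
    P = np.eye(n)
    for j in range(k0, k):
        P = A(j) @ P      # each new factor multiplies from the left
    return P
```

In the time-invariant case this reduces to the matrix power: `phi(lambda k: A0, k, k0)` agrees with `np.linalg.matrix_power(A0, k - k0)`.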

## System Response

Consider now the state equation in (1). It can be easily shown that the solution is
$$x(k) = \Phi (k,k_{0})x(k_{0}) +\displaystyle\sum _{j=k_{0}}^{k-1}\Phi (k,j + 1)B(j)u(j),\quad k > k_{0},$$
(5)
and the response y(k) of (1) is
$$y(k) = C(k)\Phi (k,k_{0})x(k_{0})+C(k)\displaystyle\sum _{j=k_{0}}^{k-1}\Phi (k,j+1)B(j)u(j)+D(k)u(k),\quad k > k_{0},$$
(6)
and
$$y(k_{0}) = C(k_{0})x(k_{0}) + D(k_{0})u(k_{0}).$$
Equation (5) is the sum of two parts: the state response (when $$u(k) = 0$$ and the system is driven only by the initial state conditions) and the input response (when $$x(k_{0}) = 0$$ and the system is driven only by the input u(k)); this illustrates the principle of superposition for linear systems.
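Formula (5) can be checked numerically against the direct recursion. The helper below is a sketch with assumed names and a callable interface for the system matrices:

```python
import numpy as np

def x_formula(A, B, x0, u, k, k0):
    """Evaluate (5): x(k) = Phi(k,k0) x0 + sum_{j=k0}^{k-1} Phi(k,j+1) B(j) u(j)."""
    n = len(x0)
    def phi(kk, jj):                 # Phi(kk, jj) as the ordered product (4)
        P = np.eye(n)
        for i in range(jj, kk):
            P = A(i) @ P
        return P
    x = phi(k, k0) @ np.asarray(x0, dtype=float)
    for j in range(k0, k):           # input contribution, weighted by Phi(k, j+1)
        x = x + phi(k, j + 1) @ B(j) @ u(j)
    return x
```

Running the recursion x(k + 1) = A(k)x(k) + B(k)u(k) step by step from x₀ yields the same x(k), which confirms (5) in any particular instance.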

## Equivalence of State Variable Descriptions

Given (1), consider the new state vector $$\tilde{x}$$ defined by
$$\tilde{x}(k) = P(k)x(k),$$
where P(k) is nonsingular for every k, so that $${P}^{-1}(k)$$ exists. Then
$$\begin{array}{c} \tilde{x}(k + 1) =\tilde{ A}(k)\tilde{x}(k) +\tilde{ B}(k)u(k) \\ y(k) =\tilde{ C}(k)\tilde{x}(k) +\tilde{ D}(k)u(k) \end{array}$$
(7)
where
$$\displaystyle\begin{array}{rcl} \tilde{A}(k)& =& P(k + 1)A(k){P}^{-1}(k), \\ \tilde{B}(k)& =& P(k + 1)B(k), \\ \tilde{C}(k)& =& C(k){P}^{-1}(k), \\ \tilde{D}(k)& =& D(k) \\ \end{array}$$
Description (7), with these matrices, is equivalent to (1). It can be easily shown that equivalent descriptions give rise to the same discrete-time impulse response.
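The equivalence can be verified numerically: build (7) from (1) and check that both descriptions produce the same output for the same input, once the initial states are matched via $$\tilde{x}(k_{0}) = P(k_{0})x_{0}$$. The function below is a sketch with an assumed callable interface:

```python
import numpy as np

def transform(A, B, C, D, P, Pinv):
    """Form the equivalent description (7) under the change of state
    x~(k) = P(k) x(k); P and Pinv are callables with Pinv(k) = P(k)^{-1}."""
    At = lambda k: P(k + 1) @ A(k) @ Pinv(k)
    Bt = lambda k: P(k + 1) @ B(k)
    Ct = lambda k: C(k) @ Pinv(k)
    Dt = lambda k: D(k)
    return At, Bt, Ct, Dt
```

Stepping both recursions forward with the same input then produces identical outputs y(k), as the equivalence predicts.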

## Summary

State variable descriptions for linear discrete-time time-varying systems were introduced and the state and output responses to inputs and initial conditions were derived. The equivalence of state variable representations was also discussed.

## Historical Notes

State variable descriptions gained wide acceptance in systems theory beginning in the late 1950s, primarily due to the work of R.E. Kalman. For historical comments and extensive references, see Kailath (1980). The use of state variable descriptions in systems and control opened the way for the systematic study of systems with multiple inputs and outputs.

## Bibliography

1. Antsaklis PJ, Michel AN (2006) Linear systems. Birkhauser, Boston
2. Astrom KJ, Wittenmark B (1997) Computer-controlled systems: theory and design, 3rd edn. Prentice Hall, Upper Saddle River
3. Franklin GF, Powell DJ, Workman ML (1998) Digital control of dynamic systems, 3rd edn. Addison-Wesley, Menlo Park
4. Jury EI (1958) Sampled-data control systems. Wiley, New York
5. Kailath T (1980) Linear systems. Prentice-Hall, Englewood Cliffs
6. Ragazzini JR, Franklin GF (1958) Sampled-data control systems. McGraw-Hill, New York
7. Rugh WJ (1996) Linear systems theory, 2nd edn. Prentice-Hall, Englewood Cliffs