
In this chapter, an introduction to the basics of continuous-time feedback systems is given. For more detailed treatments, the reader is referred to textbooks such as [15]. A simple amplitude control loop serves as an example in the following sections. The concepts presented here, however, may also be applied to more advanced control loops (cf. [6, 7]). The RF control loops are often called low-level RF (LLRF) systems to distinguish them from the high-power parts.

7.1 Basics of Continuous-Time Feedback Systems

Since many discrete feedback systems may be treated as quasicontinuous if the sampling time is small enough, discrete-time systems are not covered in the following. The analysis of discrete-time systems is, however, possible in an analogous way to continuous-time systems with the \(\mathcal{Z}\)-transform instead of the Laplace transform [8]. Most feedback analysis and design methods may then be used for discrete systems in a very similar way.

7.1.1 Linear Time-Invariant Systems

The systems under consideration are assumed to be linear and time-invariant (they are so-called LTI systems). Assume a general dynamic system

$$\displaystyle{y(t) =\varphi \{ x(t)\}}$$

that maps the input signal x(t) to the output signal y(t). If the system is time-invariant, a time shift at the input will lead to the shifted output

$$\displaystyle{ \varphi \{x(t - t_{0})\} = y(t - t_{0}). }$$
(7.1)

In case of a linear system, a linear combination of two input signals x1(t) and x2(t) will lead to the same linear combination of their corresponding outputs \(y_{1}(t) =\varphi \{ x_{1}(t)\}\) and \(y_{2}(t) =\varphi \{ x_{2}(t)\}\), i.e.,

$$\displaystyle{ \varphi \{a_{1}x_{1}(t) + a_{2}x_{2}(t)\} = a_{1}y_{1}(t) + a_{2}y_{2}(t) }$$
(7.2)

holds for arbitrary constants a1 and a2.

A consequence of properties (7.1) and (7.2) is that the output of LTI systems can be calculated in the Laplace domain as

$$\displaystyle{ Y (s) = H(s)\ X(s), }$$
(7.3)

where the transfer function H(s) corresponds to the impulse response h(t) of the system as defined in Sect. 2.3, and X(s) is the Laplace transform of the system input x(t). This fact is of particular importance, because it enables the analysis and design of feedback systems in the Laplace domain. For a demonstration of the fact that Eq. (7.3) holds for any LTI system, we follow [9] and approximate the input signal x(t) by the step function

$$\displaystyle{x_{\mathrm{step}}(t) =\sum _{ \nu =0}^{\infty }x(\tau _{\nu })\ \Big(\Theta (t -\tau _{\nu }) - \Theta (t -\tau _{\nu +1})\Big) \approx x(t),}$$

where \(\tau _{\nu } =\nu \Delta \tau\) are discrete sampling times with distance \(\Delta \tau\) and \(\Theta (t)\) is the Heaviside step function. It is assumed that x(t) is zero for t < 0, as introduced in Sect. 2.2, for all functions for which the one-sided Laplace transform is used. The step response of the system, i.e., the output for \(x(t) = \Theta (t)\), will be denoted by \(y_{\Theta }(t)\) in the following. For the input xstep(t), the LTI properties then lead to the output response

$$\displaystyle{y_{\mathrm{step}}(t) =\sum _{ \nu =0}^{\infty }x(\tau _{\nu })\ \Big(y_{ \Theta }(t -\tau _{\nu }) - y_{\Theta }(t -\tau _{\nu +1})\Big).}$$

The continuous output response y(t) is obtained for the limit \(\Delta \tau \rightarrow 0\):

$$\displaystyle\begin{array}{rcl} y(t)& =& \lim _{\Delta \tau \rightarrow 0}\sum _{\nu =0}^{\infty }x(\tau _{\nu })\ \frac{y_{\Theta }(t -\tau _{\nu }) - y_{\Theta }(t -\tau _{\nu +1})} {\Delta \tau } \ \Delta \tau {}\\ & =& \int _{0}^{\infty }x(\tau )\ \dot{y}_{ \Theta }(t-\tau )\ \mathrm{d}\tau. {}\\ \end{array}$$

If the derivative \(\dot{y}_{\Theta }(t)\) is denoted by the function h(t), this is a convolution integral, and Eq. (2.27),

$$\displaystyle{y(t) =\int _{0}^{\infty }x(\tau )\ h(t-\tau )\ \mathrm{d}\tau,}$$

holds. The choice of h(t) is indeed not coincidental, because a comparison with Sect. 2.3 shows that due to \(\dot{\Theta }(t) =\delta (t)\), this is the already defined impulse response, and the relation

$$\displaystyle{h(t) =\dot{ y}_{\Theta }(t)}$$

holds for t > 0, i.e., the impulse response h(t) is the derivative of the step response with respect to time. Conversely, it can easily be shown that systems defined by Eq. (7.3) are linear, because in the Laplace domain, the output Y (s) results from a simple multiplication of the input X(s) and the transfer function [10]. In addition, they are time-invariant, because the shifted input \(x(t - t_{0})\) with the Laplace transform \(X(s)\ e^{-st_{0}}\) leads to the output

$$\displaystyle{Y (s)\ e^{-st_{0}} = H(s)\ X(s)\ e^{-st_{0}},}$$

which corresponds to the shifted time-domain output \(y(t - t_{0})\).

In summary, we can conclude that the definition of LTI systems by the properties (7.1) and (7.2) is equivalent to Definition (7.3).

In many cases, the transfer function H(s) has the form

$$\displaystyle{ H(s) = \frac{b_{0} + b_{1}s +\ldots +b_{m}s^{m}} {a_{0} + a_{1}s +\ldots +a_{n}s^{n}} }$$
(7.4)

with real coefficients bν and aν and nonzero coefficients bm ≠ 0 and an ≠ 0. This is a rational transfer function, and the system (7.3) is then represented in the time domain by the linear ODE

$$\displaystyle{ a_{0}y(t) + a_{1}\dot{y}(t) +\ldots +a_{n} \frac{\mathrm{d}^{n}} {\mathrm{d}t^{n}}y(t) = b_{0}x(t) + b_{1}\dot{x}(t) +\ldots +b_{m} \frac{\mathrm{d}^{m}} {\mathrm{d}t^{m}}x(t) }$$
(7.5)

with constant coefficients. A transfer function (7.4) is called proper if m ≤ n and strictly proper if m < n. In the latter case, H(s) tends to zero as | s | → ∞.

It is sometimes more convenient to use the zero-pole-gain representation

$$\displaystyle{ H(s) = \frac{K(s - z_{1})(s - z_{2})\ldots (s - z_{m})} {s^{N}(s - p_{1})\ldots (s - p_{n-N})}. }$$
(7.6)

The zeros zν are those values for which H(s) becomes zero, whereas the poles pν ≠ 0 are singularities of H(s). In case a pole and a zero are exactly equal, they cancel and do not influence the input–output behavior of the system. The gain can also be expressed as \(K = b_{m}/a_{n}\).

As will be discussed in the following, the system represented by H(s) is called stable if all poles have a negative real part, i.e., Re{pν} < 0 and N = 0. In this case, all poles lie in the open left half of the complex s-plane, which is referred to as the OLHP. The abbreviations ORHP (open right half-plane), LHP (left half-plane), RHP (right half-plane) follow accordingly. If at least one pole has a positive real part, the system is unstable.
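
The conversion between the rational form (7.4) and the zero-pole-gain form (7.6), together with the pole-based stability check, can be reproduced numerically. The following is a minimal Python sketch, assuming numpy and scipy are available; the coefficients are arbitrary illustration values, not taken from the text.

```python
# Minimal sketch: converting a rational transfer function (7.4) into the
# zero-pole-gain form (7.6) and inspecting the pole locations.
import numpy as np
from scipy import signal

num = [2.0, 6.0]                # 2s + 6          (b_1 s + b_0)
den = [1.0, 3.0, 7.0, 5.0]      # s^3 + 3s^2 + 7s + 5

z, p, k = signal.tf2zpk(num, den)
print("zeros:", z)
print("poles:", p)
print("gain K:", k)             # equals b_m / a_n

# Stable in the sense used above: all poles in the open left half-plane (OLHP).
print("all poles in OLHP:", np.all(p.real < 0))
```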

7.1.2 State-Space Representation

The higher-order ODE (7.5) can be rewritten as a system of ODEs of first order. Consider the transfer function H(s) with input U and output Y, as shown in Fig. 7.1. The input variable U(s) corresponds to X(s) in the previous section. The notation is changed here to be consistent with the standard notation in the control system literature. Without loss of generality, it is assumed that H(s) has the form (7.4) but with an = 1, i.e., the coefficients of H(s) are normalized by an ≠ 0. By splitting H(s) into two blocks containing its denominator and numerator, a new variable X(s) may be defined as shown in Fig. 7.1.

Fig. 7.1 Derivation of the state-space representation

In the time domain, the following ODEs can be derived from this block diagram:

$$\displaystyle\begin{array}{rcl} u(t)& =& a_{0}x(t) + a_{1}\dot{x}(t) +\ldots +a_{n-1} \frac{\mathrm{d}^{n-1}} {\mathrm{d}t^{n-1}}x(t) + \frac{\mathrm{d}^{n}} {\mathrm{d}t^{n}}x(t), {}\\ y(t)& =& b_{0}x(t) + b_{1}\dot{x}(t) +\ldots +b_{m} \frac{\mathrm{d}^{m}} {\mathrm{d}t^{m}}x(t). {}\\ \end{array}$$

Defining the states (see also Sect. 2.8.1)

$$\displaystyle{ x_{1}:= x,\quad x_{2}:=\dot{ x},\quad \ldots,\quad x_{n}:= \frac{\mathrm{d}^{n-1}} {\mathrm{d}t^{n-1}}x(t), }$$
(7.7)

leads to the system of equations

$$\displaystyle\begin{array}{rcl} \dot{x}_{1}(t)& =& x_{2}(t), {}\\ \dot{x}_{2}(t)& =& x_{3}(t), {}\\ & \vdots & {}\\ \dot{x}_{n-1}(t)& =& x_{n}(t), {}\\ \dot{x}_{n}(t)& =& -a_{0}x_{1}(t) -\ldots -a_{n-1}x_{n}(t) + u(t), {}\\ y(t)& =& b_{0}x_{1}(t) +\ldots +b_{m}x_{m+1}(t). {}\\ \end{array}$$

With the definition of the state vector

$$\displaystyle{\vec{x}(t):= \left [\begin{array}{*{10}c} x_{1}(t)&x_{2}(t)&\ldots &x_{n}(t) \end{array} \right ]^{\mathrm{T}},}$$

the matrix representation

$$\displaystyle\begin{array}{rcl} \frac{\mathrm{d}\vec{x}(t)} {\mathrm{d}t} & =& \left [\begin{array}{*{10}c} 0 & 1 &0&0& \ldots \\ 0 & 0 &1 &0 &\ldots \\ & & & \ddots & & \\ & & & & 1\\ -a_{ 0} & -a_{1} & \ldots & & -a_{n-1} \end{array} \right ] \cdot \vec{ x}(t) + \left [\begin{array}{*{10}c} 0\\ 0\\ \vdots \\ 0\\ 1 \end{array} \right ] \cdot u(t), {}\\ y(t)& =& \left [\begin{array}{*{10}c} b_{0} & \ldots & b_{m}&0&\ldots &0 \end{array} \right ] \cdot \vec{ x}(t), {}\\ \end{array}$$

is obtained, which is called the controllable canonical form and is a special case of a state-space representation. Different choices of the states (7.7) lead to different representations, but these have the general form

$$\displaystyle\begin{array}{rcl} \frac{\mathrm{d}\vec{x}(t)} {\mathrm{d}t} & =& A \cdot \vec{ x}(t) + B \cdot \vec{ u}(t), {}\\ \vec{y}(t)& =& C \cdot \vec{ x}(t), {}\\ \end{array}$$

with the state vector \(\vec{x}\) of dimension n, the input vector \(\vec{u}\) of dimension p, the output vector \(\vec{y}\) of dimension q, the n × n system matrix A, the n × p input matrix B, and the q × n output matrix C. All matrices are assumed to have constant and real elements. A feedthrough matrix for a direct influence of \(\vec{u}\) on \(\vec{y}\) can be avoided in most practical cases. The Laplace transform of these equations yields

$$\displaystyle{s\,\vec{X}(s) -\vec{ x}(0) = A\,\vec{X}(s) + B\,\vec{U}(s),}$$

or

$$\displaystyle{ \vec{X}(s) = \left (s\,I - A\right )^{-1}\left (B\,\vec{U}(s) +\vec{ x}(0)\right ), }$$
(7.8)

where I denotes the n × n identity matrix. The Laplace transform for the output \(\vec{y}\) leads to

$$\displaystyle{\vec{Y }(s) = C\left (s\,I - A\right )^{-1}\left (B\,\vec{U}(s) +\vec{ x}(0)\right ).}$$

In case of a system with a single input and a single output (SISO system), the transfer function H(s) is obtained as

$$\displaystyle{H(s) = C(s\,I - A)^{-1}B,}$$

where C is a row vector and B a column vector (p = 1, q = 1).
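
As an illustration of the equivalence between the two descriptions, the following sketch (assuming scipy; the transfer function is an arbitrary example) builds a state-space realization and spot-checks H(s) = C(sI − A)<sup>−1</sup>B at one test point. Note that scipy's tf2ss may order the states differently from the controllable canonical form derived above.

```python
# Sketch: state-space realization of a SISO transfer function and a spot
# check of H(s) = C (sI - A)^{-1} B at a single test point.
import numpy as np
from scipy import signal

num = [2.0, 6.0]
den = [1.0, 3.0, 7.0, 5.0]

A, B, C, D = signal.tf2ss(num, den)      # D = 0 here (strictly proper)

s0 = 1.0 + 2.0j                          # arbitrary test point in the s-plane
n = A.shape[0]
H_ss = (C @ np.linalg.inv(s0 * np.eye(n) - A) @ B + D).item()
H_tf = np.polyval(num, s0) / np.polyval(den, s0)
print(H_ss, H_tf)                        # agree up to rounding
```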

7.1.3 Linearization of Nonlinear Systems

Every practical system contains nonlinearities. Examples are nonlinear friction and constraints on the input that lead to saturation. Fortunately, in many cases, the considered nonlinear system behaves similarly to a linear system in the vicinity of its operating point. Consider a nonlinear system described by

$$\displaystyle{\frac{\mathrm{d}\vec{x}(t)} {\mathrm{d}t} =\vec{ v}(\vec{x}(t),\vec{u}(t))}$$

with the analytic vector function \(\vec{v}\). Suppose that \(\vec{x} =\vec{ x}_{\mathrm{F}}\) and \(\vec{u} =\vec{ u}_{\mathrm{F}}\) constitute a constant equilibrium point, i.e.,

$$\displaystyle{\vec{v}(\vec{x}_{\mathrm{F}},\vec{u}_{\mathrm{F}}) = 0.}$$

With the use of the Jacobian matrix

$$\displaystyle{\frac{\partial \vec{v}} {\partial \vec{x}} = \left (\begin{array}{*{10}c} \frac{\partial v_{1}} {\partial x_{1}} & \frac{\partial v_{1}} {\partial x_{2}} & \ldots & \frac{\partial v_{1}} {\partial x_{n}} \\ \frac{\partial v_{2}} {\partial x_{1}} & \frac{\partial v_{2}} {\partial x_{2}} & \ldots & \frac{\partial v_{2}} {\partial x_{n}}\\ \ldots & \ldots & \ldots & \ldots \\ \frac{\partial v_{n}} {\partial x_{1}} & \frac{\partial v_{n}} {\partial x_{2}} & \ldots & \frac{\partial v_{n}} {\partial x_{n}} \end{array} \right ),}$$

the Taylor series expansion around the equilibrium can be written as

$$\displaystyle{\frac{\mathrm{d}\vec{x}(t)} {\mathrm{d}t} =\vec{ v}(\vec{x}_{\mathrm{F}},\vec{u}_{\mathrm{F}}) + \frac{\partial \vec{v}} {\partial \vec{x}}\Big\vert _{\mathrm{F}} \cdot (\vec{x} -\vec{ x}_{\mathrm{F}}) + \frac{\partial \vec{v}} {\partial \vec{u}}\Big\vert _{\mathrm{F}} \cdot (\vec{u} -\vec{ u}_{\mathrm{F}}) +\vec{ v}_{\mathrm{ho}}(\vec{x} -\vec{ x}_{\mathrm{F}},\vec{u} -\vec{ u}_{\mathrm{F}}),}$$

where \(\Big\vert _{\mathrm{F}}\) denotes the value at the equilibrium and \(\vec{v}_{\mathrm{ho}}\) are higher-order terms. For small deviations

$$\displaystyle{\Delta \vec{x}(t) =\vec{ x}(t) -\vec{ x}_{\mathrm{F}},\quad \Delta \vec{u}(t) =\vec{ u}(t) -\vec{ u}_{\mathrm{F}}}$$

from equilibrium, the higher-order terms may be neglected, and the linear system

$$\displaystyle{\frac{\mathrm{d}\Delta \vec{x}(t)} {\mathrm{d}t} = A \cdot \Delta \vec{x}(t) + B \cdot \Delta \vec{u}(t)}$$

with

$$\displaystyle{A = \frac{\partial \vec{v}} {\partial \vec{x}}\Big\vert _{\mathrm{F}},\quad B = \frac{\partial \vec{v}} {\partial \vec{u}}\Big\vert _{\mathrm{F}},}$$

can be used as a linearization of the nonlinear system.
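
A numerical version of this linearization can be obtained by finite differences. The sketch below uses a pendulum-like system as an illustrative assumption (it is not a system from this chapter), and the finite-difference step is chosen ad hoc.

```python
# Sketch: numerical linearization of dx/dt = v(x, u) around an equilibrium
# (x_F, u_F) via finite-difference Jacobians A = dv/dx|_F and B = dv/du|_F.
import numpy as np

def v(x, u):
    # illustrative pendulum-like dynamics: x = [angle, angular velocity]
    return np.array([x[1], -9.81 * np.sin(x[0]) - 0.1 * x[1] + u[0]])

def linearize(f, x_F, u_F, eps=1e-6):
    f0 = f(x_F, u_F)
    A = np.column_stack([(f(x_F + eps * e, u_F) - f0) / eps
                         for e in np.eye(len(x_F))])
    B = np.column_stack([(f(x_F, u_F + eps * e) - f0) / eps
                         for e in np.eye(len(u_F))])
    return A, B

x_F = np.array([0.0, 0.0])   # hanging position, zero velocity
u_F = np.array([0.0])        # zero torque; v(x_F, u_F) = 0, so this is an equilibrium
A, B = linearize(v, x_F, u_F)
print(A)                     # approx [[0, 1], [-9.81, -0.1]]
print(B)                     # approx [[0], [1]]
```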

7.1.4 Dynamic Response of LTI Systems

The output of an LTI system depends on its transfer function H(s) and on the input signal u(t). In the following, the response of a general LTI system with respect to important test signals is discussed. This prepares the definition of stability. It is assumed that the poles and zeros of H(s) are all distinct, apart from N poles at s = 0. In most cases, this is a valid assumption. The calculations for the case with poles or zeros of higher multiplicity are similar but more intricate. Because the coefficients in Eq. (7.4) are real, nonreal poles p or zeros z are always accompanied by their complex conjugate counterparts p∗ and z∗. The complex conjugate operator commutes with every holomorphic function f(x) on its domain of definition if f(x) is real for real x. Thus, in this case, f(x∗) equals (f(x))∗. In particular, this applies to every polynomial and rational function with real coefficients.

According to Eq. (7.6), the considered transfer function can be written as

$$\displaystyle{ H(s) = \frac{K\prod _{\nu =1}^{m_{1}}(s - z_{\mathrm{r},\nu })\prod _{\nu =1}^{m_{2}}(s - z_{\mathrm{c},\nu })(s - z_{\mathrm{c},\nu }^{{\ast}})} {s^{N}\prod _{\nu =1}^{n_{1}}(s - p_{\mathrm{r},\nu })\prod _{\nu =1}^{n_{2}}(s - p_{\mathrm{c},\nu })(s - p_{\mathrm{c},\nu }^{{\ast}})}, }$$
(7.9)

where the zr, ν and pr, ν are the nonzero real zeros and poles, zc, ν and pc, ν are the nonzero complex zeros and poles, and K is the real gain. The total polynomial degree equals \(m = m_{1} + 2m_{2}\) for the numerator and \(n = N + n_{1} + 2n_{2}\) for the denominator. For a proper transfer function, n ≥ m holds.

7.1.4.1 Impulse Response

The impulse response is of practical interest for the study of pulse-shaped disturbances that may act on the feedback loop. In addition, this case is equivalent to the response of the state-space representation with zero input and certain nonzero initial conditions \(\vec{x}(t = 0)\neq 0\).

According to Eq. (7.3), the excitation of the system with the Dirac function

$$\displaystyle{u(t) =\delta (t),\quad U(s) = 1,}$$

yields

$$\displaystyle{Y (s) = H(s) \cdot 1 = H(s)}$$

in the Laplace domain. To calculate the response in the time domain, the partial fraction decomposition

$$\displaystyle{ Y (s) = H(s)\stackrel{!}{=}\sum _{\nu =0}^{N}\frac{K_{0,\nu }} {s^{\nu }} +\sum _{ \nu =1}^{n_{1} } \frac{K_{\mathrm{r},\nu }} {s - p_{\mathrm{r},\nu }} +\sum _{ \nu =1}^{n_{2} }\left ( \frac{K_{\mathrm{c}1,\nu }} {s - p_{\mathrm{c},\nu }} + \frac{K_{\mathrm{c}2,\nu }} {s - p_{\mathrm{c},\nu }^{{\ast}}}\right )\quad }$$
(7.10)

is used. Here one assumes that the transfer function H(s) is proper, i.e., n ≥ m. The constants Kr, ν can be calculated as follows. Multiplying Eqs. (7.9) and (7.10) by \((s - p_{\mathrm{r},i})\) for a specific i = 1, …, n1 and setting s = pr, i leads to

$$\displaystyle\begin{array}{rcl} K_{\mathrm{r},i}& =& \left [H(s)\ (s - p_{\mathrm{r},i})\right ]_{s=p_{\mathrm{r},i}} {}\\ & =& \frac{K\prod _{\nu =1}^{m_{1}}(p_{\mathrm{r},i} - z_{\mathrm{r},\nu })\prod _{\nu =1}^{m_{2}}(p_{\mathrm{r},i} - z_{\mathrm{c},\nu })(p_{\mathrm{r},i} - z_{\mathrm{c},\nu }^{{\ast}})} {p_{\mathrm{r},i}^{N}\prod _{\nu =1,\nu \neq i}^{n_{1}}(p_{\mathrm{r},i} - p_{\mathrm{r},\nu })\prod _{\nu =1}^{n_{2}}(p_{\mathrm{r},i} - p_{\mathrm{c},\nu })(p_{\mathrm{r},i} - p_{\mathrm{c},\nu }^{{\ast}})}. {}\\ \end{array}$$

The constants Kr, ν are always real, because in the denominator, the expression

$$\displaystyle\begin{array}{rcl} (p_{\mathrm{r},i} - p_{\mathrm{c},\nu })(p_{\mathrm{r},i} - p_{\mathrm{c},\nu }^{{\ast}}) = \left (p_{\mathrm{ r},i} -\mathrm{ Re}\{p_{\mathrm{c},\nu }\}\right )^{2} + \left (\mathrm{Im}\{p_{\mathrm{ c},\nu }\}\right )^{2}& & {}\\ \end{array}$$

is real, and the same applies to the numerator. A similar calculation yields the constants

$$\displaystyle\begin{array}{rcl} K_{\mathrm{c}1,i}& =& \left [H(s)\ (s - p_{\mathrm{c},i})\right ]_{s=p_{\mathrm{c},i}}, {}\\ K_{\mathrm{c}2,i}& =& \left [H(s)\ (s - p_{\mathrm{c},i}^{{\ast}})\right ]_{ s=p_{\mathrm{c},i}^{{\ast}}}, {}\\ \end{array}$$

and using the above-mentioned commutability property of the complex conjugate operator leads to

$$\displaystyle{K_{\mathrm{c}2,i} = K_{\mathrm{c}1,i}^{{\ast}}.}$$

The constant K0, N is obtained by multiplying by \(s^{N}\); it reads

$$\displaystyle\begin{array}{rcl} K_{0,N}& =& \left [H(s)\ s^{N}\right ]_{ s=0} = \frac{K\prod _{\nu =1}^{m_{1}}(-z_{\mathrm{r},\nu })\prod _{\nu =1}^{m_{2}}\vert z_{\mathrm{c},\nu }\vert ^{2}} {\prod _{\nu =1}^{n_{1}}(-p_{\mathrm{r},\nu })\prod _{\nu =1}^{n_{2}}\vert p_{\mathrm{c},\nu }\vert ^{2}}. {}\\ \end{array}$$

For the remaining constants K0, i, a system of N linear equations is obtained by evaluating Eqs. (7.9) and (7.10) at N points s = si that are different from the zeros and poles of the system. The constant K0, 0 is zero for strictly proper transfer functions H(s), i.e., for n > m.

The transformation of Eq. (7.10) into the time domain yields the impulse response

$$\displaystyle\begin{array}{rcl} h(t)& =& K_{0,0}\,\delta (t) +\sum _{ \nu =1}^{N}K_{ 0,\nu } \frac{t^{\nu -1}} {(\nu -1)!} +\sum _{ \nu =1}^{n_{1} }K_{\mathrm{r},\nu }\,e^{p_{\mathrm{r},\nu }t} {}\\ & & \quad +\sum _{ \nu =1}^{n_{2} }\left (K_{\mathrm{c}1,\nu }\,e^{p_{\mathrm{c},\nu }t} + K_{\mathrm{ c}1,\nu }^{{\ast}}\,e^{p_{\mathrm{c},\nu }^{{\ast}}t }\right ), {}\\ \end{array}$$

as Table A.4 shows (\(\Theta (t)\) is omitted for the sake of simplicity). The elements of the last sum can be rewritten as

$$\displaystyle{e^{\mathrm{Re}\{p_{\mathrm{c},\nu }\}t}\left [K_{\mathrm{ c}1,\nu }\,e^{j\mathrm{Im}\{p_{\mathrm{c},\nu }\}t} + K_{\mathrm{ c}1,\nu }^{{\ast}}\,e^{-j\mathrm{Im}\{p_{\mathrm{c},\nu }\}t}\right ],}$$

and the term of this expression in square brackets is equal to

$$\displaystyle{\vert K_{\mathrm{c}1,\nu }\vert \left (e^{j\left (\mathrm{Im}\{p_{\mathrm{c},\nu }\}t+\measuredangle K_{\mathrm{c}1,\nu }\right )} + e^{-j\left (\mathrm{Im}\{p_{\mathrm{c},\nu }\}t+\measuredangle K_{\mathrm{c}1,\nu }\right )}\right ),}$$

where | K | and \(\measuredangle K\) are the amplitude and phase of the complex number K, respectively. Altogether, the impulse response for t ≥ 0 is

$$\displaystyle{ \begin{array}{l} h(t) = K_{0,0}\,\delta (t) +\sum _{ \nu =1}^{N}K_{0,\nu } \frac{t^{\nu -1}} {(\nu -1)!} +\sum _{ \nu =1}^{n_{1}}K_{\mathrm{ r},\nu }\,e^{p_{\mathrm{r},\nu }t}+ \\ + 2\sum _{\nu =1}^{n_{2}}\vert K_{\mathrm{c}1,\nu }\vert \,e^{\mathrm{Re}\{p_{\mathrm{c},\nu }\}t}\,\cos \left (\mathrm{Im}\{p_{\mathrm{c},\nu }\}t + \measuredangle K_{\mathrm{c}1,\nu }\right ),\end{array} }$$
(7.11)

and it tends to zero as t → ∞ if all poles have negative real parts.
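
The partial-fraction constants of Eq. (7.10) and the resulting impulse response can also be checked numerically; the following is a minimal sketch with scipy (the transfer function is an arbitrary illustration with distinct poles and one pole at s = 0, i.e., N = 1).

```python
# Sketch: partial-fraction constants of Eq. (7.10) via scipy and a cross-check
# of the reassembled impulse response against a direct simulation.
import numpy as np
from scipy import signal

num = [1.0, 3.0]
den = [1.0, 4.0, 13.0, 0.0]      # poles: 0 and -2 +/- 3j

K, p, direct = signal.residue(num, den)     # residues, poles, direct terms
print("poles   :", p)
print("residues:", K)

# Reassemble h(t) = sum_i K_i exp(p_i t) (valid here since all poles are
# distinct) and compare with scipy's impulse-response simulation.
t = np.linspace(0.0, 5.0, 500)
h_pf = sum(Ki * np.exp(pi * t) for Ki, pi in zip(K, p)).real
_, h_sim = signal.impulse((num, den), T=t)
print("max deviation:", np.max(np.abs(h_pf - h_sim)))
```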

7.1.4.2 Step Response

The response \(y(t) = y_{\Theta }(t)\) to a step command

$$\displaystyle{u(t) = \Theta (t)}$$

can be calculated in an analogous way with \(U(s) = 1/s\). An alternative is the use of the convolution integral

$$\displaystyle{ y_{\Theta }(t) =\int _{ 0}^{t}h(\tau )\,\Theta (t-\tau )\ \mathrm{d}\tau =\int _{ 0}^{t}h(\tau )\ \mathrm{d}\tau. }$$
(7.12)

With

$$\displaystyle\begin{array}{rcl} \int _{0}^{t}e^{\delta \tau }\,\cos (\omega \tau +\varphi )\ \mathrm{d}\tau & =& \frac{\delta } {\delta ^{2} +\omega ^{2}}\left (e^{\delta t}\,\cos (\omega t+\varphi )-\cos \varphi \right ) {}\\ & & \quad + \frac{\omega } {\delta ^{2} +\omega ^{2}}\left (e^{\delta t}\,\sin (\omega t+\varphi )-\sin \varphi \right ) {}\\ & =& \frac{1} {\sqrt{\delta ^{2 } +\omega ^{2}}}\left (e^{\delta t}\,\cos \left (\omega t +\varphi -\arctan \frac{\omega } {\delta }\right )\right. {}\\ & & \quad \left.-\cos \left (\varphi -\arctan \frac{\omega } {\delta }\right )\right ), {}\\ \end{array}$$

the integration of (7.11) for t ≥ 0 yields

$$\displaystyle\begin{array}{rcl} y_{\Theta }(t)& =& K_{0,0} +\sum _{ \nu =1}^{N}K_{ 0,\nu }\frac{t^{\nu }} {\nu !} +\sum _{ \nu =1}^{n_{1} } \frac{K_{\mathrm{r},\nu }} {p_{\mathrm{r},\nu }} \left (e^{p_{\mathrm{r},\nu }t} - 1\right ) + \\ & & +\sum _{\nu =1}^{n_{2} } \frac{2\vert K_{\mathrm{c}1,\nu }\vert } {\vert p_{\mathrm{c},\nu }\vert } \,\left (e^{\mathrm{Re}\{p_{\mathrm{c},\nu }\}t}\,\cos (\mathrm{Im}\{p_{\mathrm{ c},\nu }\}t + \measuredangle K_{\mathrm{c}1,\nu } -\measuredangle p_{\mathrm{c},\nu })\right. \\ & & \quad \left.-\cos (\measuredangle K_{\mathrm{c}1,\nu } -\measuredangle p_{\mathrm{c},\nu })\right ). {}\end{array}$$
(7.13)

This calculation shows that the step response \(y_{\Theta }\) will approach a finite value for large times t if and only if the conditions

$$\displaystyle{N = 0,\quad p_{\mathrm{r},\nu } < 0,\quad \mathrm{Re}\{p_{\mathrm{c},\nu }\} < 0}$$

are satisfied, i.e., all poles have negative real parts. Because the limit \(\lim _{t\rightarrow \infty }y_{\Theta }(t)\) is then finite, the final value theorem can be applied:

$$\displaystyle{\lim _{t\rightarrow \infty }y_{\Theta }(t) =\lim _{s\rightarrow 0}s\,Y _{\Theta }(s) = H(0).}$$

The initial value theorem leads to

$$\displaystyle{\lim _{t\rightarrow +0}y_{\Theta }(t) = K_{0,0} =\lim _{s\rightarrow \infty }s\,Y _{\Theta }(s) =\lim _{s\rightarrow \infty }H(s) = \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &\mbox{ if $n > m$,}\\ K\quad &\mbox{ if $n = m$}. \end{array} \right.}$$

For this reason, systems with n = m are also said to have direct feedthrough. In contrast, strictly proper transfer functions with n > m have a continuous output response at t = 0.
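
A short numerical check of the final value theorem for an arbitrary stable example system (assuming scipy) is sketched below.

```python
# Sketch: step response of a stable example system and the final value
# theorem lim_{t->inf} y_step(t) = H(0); the coefficients are arbitrary.
import numpy as np
from scipy import signal

num = [2.0, 6.0]
den = [1.0, 3.0, 7.0, 5.0]        # poles -1 and -1 +/- 2j, all in the OLHP

t, y = signal.step((num, den), T=np.linspace(0.0, 20.0, 2000))
print("simulated final value:", y[-1])               # approx 1.2
print("H(0) = b0/a0         :", num[-1] / den[-1])   # 6/5 = 1.2
```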

7.1.4.3 Frequency Response

An important test signal is the harmonic excitation

$$\displaystyle{u(t) =\sin (\omega t)\qquad \mbox{ with }\qquad U(s) = \frac{\omega } {s^{2} +\omega ^{2}}.}$$

The amplitude of the test signal may be chosen arbitrarily because of the linearity property (7.2). If it is assumed that none of the poles of H(s) is equal to ± j ω, the decomposition of the output response in the Laplace domain can be written as

$$\displaystyle{Y (s) = H(s)\ \frac{\omega } {s^{2} +\omega ^{2}} = \frac{K_{\omega }} {s - j\omega } + \frac{K_{\omega }^{{\ast}}} {s + j\omega } + Y _{\mathrm{trans}}(s),}$$

where Ytrans has the same structure as the expression in Eq. (7.10), but with K0, 0 = 0. A multiplication by (sj ω) and the evaluation at s = j ω leads to

$$\displaystyle{K_{\omega } = \left [H(s)\ \frac{\omega (s - j\omega )} {s^{2} +\omega ^{2}} \right ]_{s=j\omega } = \left [H(s)\ \frac{\omega } {s + j\omega }\right ]_{s=j\omega } = \frac{1} {2j}H(j\omega ).}$$

In the time domain, the output response reads

$$\displaystyle\begin{array}{rcl} y(t)& =& K_{\omega }e^{j\omega t} + K_{\omega }^{{\ast}}e^{-j\omega t} + y_{\mathrm{ trans}}(t) {}\\ & =& H(j\omega )\frac{e^{j\omega t}} {2j} - H^{{\ast}}(j\omega )\frac{e^{-j\omega t}} {2j} + y_{\mathrm{trans}}(t) {}\\ & =& \vert H(j\omega )\vert \,\sin (\omega t + \measuredangle H(j\omega )) + y_{\mathrm{trans}}(t). {}\\ \end{array}$$

If the transfer function H(s) has only poles with negative real parts, the transient response ytrans will tend to zero, and y(t) tends to a constant oscillation. The amplitude and phase of this oscillation with respect to the excitation u(t) are determined by H(j ω), i.e., the value of the transfer function at s = j ω. Because of the linearity property, this also applies to any shifted or scaled sinusoidal excitation. For this reason, the function H(j ω) depending on the frequency ω is called the frequency response of the system H(s) and is obtained by introducing s = j ω into H(s).

There are two main reasons why the frequency response is important for feedback systems. First, H(j ω) can easily be measured by exciting the system with different frequencies ω, even if the transfer function H(s) of the physical system is not known. Second, H(j ω) can be used for the stability analysis of the closed feedback loop with the Nyquist criterion (see Sect. 7.4.2).

So far, it has been assumed that j ω is not a pole of H(s). Without further calculation, it can be reasoned that if j ω is a pole, H(s) has a singularity at s = j ω, and the excitation with frequencies close to ω will lead to very large amplitudes. If the chosen frequency is exactly ω, this will result in a perfect resonance, and the oscillation at the output will grow without bound, although the input is a bounded signal.
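
The interpretation of H(j ω) as the steady-state gain and phase shift can be verified by simulation; the following sketch uses an arbitrary stable example system and assumes scipy is available.

```python
# Sketch: H(j*omega) predicts the steady-state amplitude (and phase) of the
# response to a sinusoidal input; arbitrary stable example system.
import numpy as np
from scipy import signal

num = [2.0, 6.0]
den = [1.0, 3.0, 7.0, 5.0]
omega = 2.0

H = np.polyval(num, 1j * omega) / np.polyval(den, 1j * omega)
print("predicted amplitude:", abs(H), " phase:", np.angle(H))

# time-domain check after the transient y_trans has died out
t = np.linspace(0.0, 40.0, 8000)
_, y, _ = signal.lsim((num, den), U=np.sin(omega * t), T=t)
print("simulated amplitude:", np.max(np.abs(y[t > 30.0])))
```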

7.1.4.4 General Input Function

In the previous sections, the Laplace transform was used to calculate specific output responses for SISO systems. In case of general input functions, multiple-input and multiple-output (MIMO) systems, or initial values, it is often more convenient to consider the state-space representation. In Sect. 2.8.6, it was shown that autonomous linear systems of differential equations

$$\displaystyle{\frac{\mathrm{d}\vec{r}} {\mathrm{d}t} = A \cdot \vec{ r}\qquad \vec{r}(0) =\vec{ r}_{0}}$$

have the solution (2.99),

$$\displaystyle{\vec{r}(t) = e^{\mathit{tA}}\,\vec{r}_{ 0},}$$

where etA is the matrix exponential function. In the presence of an input vector \(\vec{u}(t)\), the system is no longer autonomous in general. The input may be a control effort or a disturbance such as a noise signal. In Sect. 7.1.2, the Laplace domain solution of a system with inputs was given by Eq. (7.8) as

$$\displaystyle{\vec{X}(s) = (\mathit{sI} - A)^{-1}\vec{x}(0) + (\mathit{sI} - A)^{-1}B\vec{U}(s).}$$

Comparing this with the solution \(\vec{r}(t)\) of the autonomous system, it is apparent that

$$\displaystyle{\mathcal{L}\left \{e^{\mathit{tA}}\right \} = \left (\mathit{sI} - A\right )^{-1}}$$

must hold, i.e., we have found the Laplace transform of the matrix exponential function. Transforming \(\vec{X}(s)\) into the time domain thus leads to

$$\displaystyle{\vec{x}(t) = e^{\mathit{tA}}\vec{x}(0) +\int _{ 0}^{t}e^{(t-\tau )A}B\vec{u}(\tau )\ \mathrm{d}\tau.}$$

It can be shown that the matrix exponential function has the following properties, similar to those of an ordinary exponential function (cf. [11]):

  • series representation: \(e^{\mathit{tA}} = I +\sum _{ \nu =1}^{\infty }A^{\nu }\frac{t^{\nu }} {\nu !}\)

  • inverse: \(\left (e^{\mathit{tA}}\right )^{-1} = e^{-\mathit{tA}}\)

  • multiplication: \(e^{t_{2}A}e^{t_{1}A} = e^{(t_{2}+t_{1})A}\)
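
These properties are easy to verify numerically; a brief sketch with scipy.linalg.expm and an arbitrary example matrix:

```python
# Sketch: numerical check of the matrix exponential properties listed above.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # arbitrary illustration matrix
t1, t2 = 0.3, 0.7

# multiplication property: e^{t2 A} e^{t1 A} = e^{(t2 + t1) A}
print(np.allclose(expm(t2 * A) @ expm(t1 * A), expm((t2 + t1) * A)))   # True

# inverse property: (e^{tA})^{-1} = e^{-tA}
print(np.allclose(np.linalg.inv(expm(t1 * A)), expm(-t1 * A)))         # True
```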

7.1.5 Stability

In Sects. 2.8.6 and 2.8.10, it was shown that a linear autonomous system is asymptotically stable if and only if all eigenvalues of the system matrix A have negative real parts, i.e., are situated in the OLHP. Equivalently, the same holds for the roots of the characteristic equation. Asymptotic stability for autonomous systems implies that a trajectory that starts at some initial value will tend to a fixed point.

For a system with nonzero inputs \(\vec{u}(t)\), this definition may not be sufficient. The input can be a persistent disturbance with a certain amplitude that prevents the system from approaching the fixed point. For a feedback system, it is, however, necessary that the states \(\vec{x}(t)\) or the output \(\vec{y}(t)\) remain bounded. This motivates the following definition:

Definition 7.1.

A dynamical system

$$\displaystyle{\frac{\mathrm{d}\vec{x}(t)} {\mathrm{d}t} =\vec{ v}_{1}(\vec{x}(t),\vec{u}(t)),\quad \vec{y}(t) =\vec{ v}_{2}(\vec{x}(t),\vec{u}(t))}$$

with input \(\vec{u}(t)\), states \(\vec{x}(t)\), and output \(\vec{y}(t)\) is assumed to be in equilibrium for t = t0 with arbitrary real t0, i.e., \(\vec{x}(t_{0}) =\vec{ x}_{\mathrm{F}}\), where \(\vec{x}_{\mathrm{F}}\) is a fixed point. This fixed point is said to be bounded-input bounded-output (BIBO) stable if for every finite c1 with \(\|\vec{u}(t)\| < c_{1}\) for t ≥ t0, there exists a finite c2 such that \(\|\vec{y}(t)\| \leq c_{2}\) for t ≥ t0.

(See, e.g., Ludyk [11, Definition 3.37, p. 159].)

The step response (7.13) shows that \(y_{\Theta }(t)\) is bounded if all poles of H(s) have negative real parts. Because of Eq. (7.12), this is also true if

$$\displaystyle{\vert y_{\Theta }(t)\vert = \left \vert \int _{0}^{t}h(\tau )\ \mathrm{d}\tau \right \vert \leq \int _{ 0}^{t}\vert h(\tau )\vert \ \mathrm{d}\tau \leq \int _{ 0}^{\infty }\vert h(\tau )\vert \ \mathrm{d}\tau \leq c_{ 2} < \infty }$$

holds, i.e., if the impulse response h(t) is absolutely integrable.

In general, the following theorem holds.

Theorem 7.2.

An LTI SISO system is BIBO stable if and only if the following (equivalent) conditions are satisfied:

  • The transfer function H(s) has only poles with negative real parts.

  • The impulse response h(t) is absolutely integrable.

(See, e.g., Ludyk [11, Theorems 3.39 and 3.40, p. 160].)

In addition, there is a close relationship between BIBO and asymptotic stability. The transfer function H(s) can be written as

$$\displaystyle{H(s) = C(\mathit{sI} - A)^{-1}B = \frac{C\,\mathrm{adj}(\mathit{sI} - A)\,B} {\det (\mathit{sI} - A)},}$$

where adj(A) denotes the adjugate matrix of A. Thus, the poles of H(s) are obtained by calculating the roots of the characteristic equation

$$\displaystyle{\det (\mathit{sI} - A) = 0,}$$

and these are identical to the eigenvalues of A. However, due to pole–zero cancelations, the poles are, in general, a subset of the eigenvalues of A, i.e., not every eigenvalue is a pole of H(s). If A has only eigenvalues with negative real parts, the system is asymptotically stable, and this always implies that the poles have negative real parts. This consideration leads to the following theorem:

Theorem 7.3.

An LTI system that is asymptotically stable is also BIBO stable, but a BIBO stable system is not always asymptotically stable.

(See, e.g., Ludyk [11, Theorem 3.41, p. 160].)
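
The distinction between the two stability notions can be illustrated with a small constructed example in which the unstable eigenvalue of A is neither controllable nor observable and is therefore canceled in H(s); the matrices below are an ad hoc illustration, not taken from the text.

```python
# Sketch: BIBO stable but not asymptotically stable.
import numpy as np
from scipy import signal

A = np.array([[-1.0, 0.0],
              [0.0, 1.0]])          # eigenvalues -1 and +1
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

print(np.linalg.eigvals(A))         # [-1.  1.] -> not asymptotically stable

num, den = signal.ss2tf(A, B, C, D)
print(num, den)                     # num ~ (s - 1), den ~ (s - 1)(s + 1)
# After canceling the common factor (s - 1), H(s) = 1/(s + 1); its only pole
# is -1, so the system is BIBO stable despite the eigenvalue +1 of A.
```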

7.2 Standard Closed Loop

The block diagram in Fig. 7.2 is called the standard feedback loop. It has one input and one output and is thus also called a single-input single-output (SISO) system.

Fig. 7.2 Standard feedback loop: transfer functions of process Hp(s), controller Hc(s), and measurement Hm(s). The signals are reference Yr(s), control error Xe(s), input U(s), output Y (s), and disturbances Xd1(s), Xd2(s), and Xd3(s)

The feedback system can be described by the following equations:

$$\displaystyle\begin{array}{rcl} Y (s)& =& X_{\mathrm{d2}}(s) + H_{\mathrm{p}}(s)\Big[X_{\mathrm{d1}}(s) + U(s)\Big], {}\\ U(s)& =& H_{\mathrm{c}}(s)X_{\mathrm{e}}(s), {}\\ X_{\mathrm{e}}(s)& =& Y _{\mathrm{r}}(s) - H_{\mathrm{m}}(s)\Big[X_{\mathrm{d3}}(s) + Y (s)\Big]. {}\\ \end{array}$$

Solving these equations for the output Y (s) leads to

$$\displaystyle{ Y (s) = H_{\mathrm{ry}}(s)\Big[Y _{\mathrm{r}}(s) - H_{\mathrm{m}}(s)X_{\mathrm{d3}}(s)\Big] + H_{\mathrm{dy}}(s)\Big[H_{\mathrm{p}}(s)X_{\mathrm{d1}}(s) + X_{\mathrm{d2}}(s)\Big]\quad }$$
(7.14)

with the reference to output transfer function

$$\displaystyle{ H_{\mathrm{ry}}(s) = \frac{H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)} {1 + H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)H_{\mathrm{m}}(s)} }$$
(7.15)

and the disturbance to output transfer function

$$\displaystyle{ H_{\mathrm{dy}}(s) = \frac{1} {1 + H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)H_{\mathrm{m}}(s)}. }$$
(7.16)

A unity feedback system has Hm(s) = 1, and in this case, the disturbance to output transfer function

$$\displaystyle{H_{\mathrm{dy}}(s) = \frac{1} {1 + H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)}}$$

is also called the sensitivity function, and the reference to output transfer function

$$\displaystyle{H_{\mathrm{ry}}(s) = \frac{H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)} {1 + H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)}}$$

is the complementary sensitivity function. Note that \(H_{\mathrm{dy}}(s) + H_{\mathrm{ry}}(s) = 1\).

Usually, the process transfer function Hp(s) has to be determined in a separate modeling step before the analysis or the design of the feedback loop. The modeling can be based on analytical equations if the underlying physical principles are well known. If this is not the case, measurements may be used for a system identification. In both cases, modeling assumptions have to be made to limit the complexity of the system. Often, nonlinearities in the feedback loop are linearized, and high-frequency dynamics are omitted.
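
The closed-loop expressions (7.15) and (7.16) are easy to reproduce symbolically; the sketch below (assuming sympy is available) uses a first-order process and a proportional controller as arbitrary illustration blocks.

```python
# Sketch: closed-loop transfer functions (7.15) and (7.16) for an assumed
# first-order process, a proportional controller, and unity feedback.
import sympy as sp

s, K_c, K_p, T = sp.symbols('s K_c K_p T', positive=True)

H_p = K_p / (T * s + 1)        # process (illustrative assumption)
H_c = K_c                      # proportional controller
H_m = 1                        # unity feedback, H_m(s) = 1

H_ry = sp.simplify(H_p * H_c / (1 + H_p * H_c * H_m))   # reference -> output
H_dy = sp.simplify(1 / (1 + H_p * H_c * H_m))           # disturbance -> output

print(H_ry)                          # K_c*K_p / (T*s + 1 + K_c*K_p)
print(H_dy)                          # (T*s + 1) / (T*s + 1 + K_c*K_p)
print(sp.simplify(H_ry + H_dy))      # 1, as noted above for unity feedback
```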

7.3 Example: Amplitude Feedback

As a realistic example of a feedback loop, the amplitude feedback control of a ferrite-loaded cavity will be considered. The feedback is needed to hold the amplitude \(\hat{V }_{\mathrm{gap}}\) of the RF voltage close to a given reference value \(\hat{V }_{\mathrm{gap,ref}}\). In our example, the cavity feedback loop behaves highly nonlinearly with respect to the RF frequency fRF and the reference amplitude \(\hat{V }_{\mathrm{ref}}\). In the following, the operating point

$$\displaystyle{f_{\mathrm{RF}} = 3\,\mathrm{MHz},\quad \hat{V }_{\mathrm{gap,ref}} = 2\,\mathrm{kV},}$$

will be considered. A model of the feedback loop was obtained in [12] based on measurements, and the corresponding block diagram is shown in Fig. 7.3. In the following, only amplitudes of RF signals are used, not the RF signals themselves.

Fig. 7.3 Model of the amplitude feedback loop

The feedback loop consists of the following subcomponents:

  • The cavity is driven by the anode current with the amplitude \(\hat{I}_{\mathrm{a}}\). The amplitude of the resulting gap voltage \(\hat{V }_{\mathrm{gap}}\) acts approximately as a first-order system (PT1) with respect to \(\hat{I}_{\mathrm{a}}\) (see also Appendix A.12.1). The “gain” is equal to \(R_{\mathrm{p}} \approx 2700\,\mathrm{\Omega }\), and the time constant is \(T_{\mathrm{cav}} \approx 4\,\upmu\ \mathrm{s}\). The set points are \(\hat{V }_{\mathrm{gap}} = 2\,\mathrm{kV}\) and \(\hat{I}_{\mathrm{a}} = 0.75\,\mathrm{A}\).

  • A capacitive divider is used to scale down the voltage of one-half of the gap by a factor of 1000. This has no significant influence on the time constants in the loop. With respect to the total gap voltage, the scaling is \(K_{\mathrm{cd}} = 1/2000\).

  • An amplitude detector with time constant \(T_{\mathrm{det}} = 5\,\upmu\ \mathrm{s}\) is used to obtain the amplitude \(\hat{V }_{\mathrm{gap,det}}\). This amplitude is then compared to the reference \(\hat{V }_{\mathrm{ref}}\). The set points are \(\hat{V }_{\mathrm{gap,det}} = 1\,\mathrm{V}\) and \(\hat{V }_{\mathrm{ref}} = 1.04\,\mathrm{V}\).

  • The parameters of the controller are Kc = 14.9, \(T_{\mathrm{c1}} = 17.2\,\upmu\ \mathrm{s}\), and \(T_{\mathrm{c2}} = 487.2\,\upmu\ \mathrm{s}\). A saturation block sat follows that limits the control output to \(\pm 7.23\,\mathrm{V}\). The offset voltage is \(\hat{V }_{\mathrm{c,off}} = 0.2\,\mathrm{V}\). In the feedforward loop, the gain is Kff = 0.6. According to these values, the set point of the control effort is \(\hat{V }_{\mathrm{c}} = 1.02\,\mathrm{V}\).

  • The (amplitude) modulator produces a sinusoidal signal modulated with \(\hat{V }_{\mathrm{c}}\). The sinusoidal signal with initial amplitude \(0.316\,\mathrm{V}\) (0 dBm) is attenuated by \(12.2\,\mathrm{dB}\); this corresponds to a factor of 0.245 for the voltage amplitude. Altogether, the modulator can be modeled as a gain Kmod = 0.316 ⋅ 0.245. Hence, the set point of the driving voltage is \(\hat{V }_{\mathrm{dr}} = 79\,\mathrm{mV}\).

  • The gains of the driver and tetrode amplifiers depend on the RF frequency and the amplitude of the gap voltage. For the chosen setting, we have \(G_{\mathrm{Vgain}} \approx 27\,\mathrm{S}\) and \(K_{\mathrm{Vgain}} \approx 0.35\).

Signal time delays with a magnitude of about \(1\,\upmu\ \mathrm{s}\) are neglected in the following. However, they would be important for larger feedback gains.

The given set-point values were obtained by choosing \(\hat{V }_{\mathrm{gap}} = 2\,\mathrm{kV}\). Because the stationary gain of the cavity transfer function is Rp, the necessary anode current amplitude equals \(\hat{I}_{\mathrm{a}} =\hat{ V }_{\mathrm{gap}}/R_{\mathrm{p}}\). All other set-point values in the feedback loop follow accordingly. This results in a reference \(\hat{V }_{\mathrm{ref}}\) that is slightly higher than \(\hat{V }_{\mathrm{gap,det}}\) and thus in a stationary control error \(\hat{V }_{\mathrm{e}} = 40\,\mathrm{mV}\). This steady-state error could be avoided by introducing an integral controller in the loop. However, it is also possible to adjust the reference in such a way that the desired value \(\hat{V }_{\mathrm{gap}}\) is reached, as has been done in this case.

The system is nonlinear due to the saturation function. This function and the offset values \(\hat{V }_{\mathrm{ref}}\) and \(\hat{V }_{\mathrm{c,off}}\) can be neglected if only small deviations with respect to the set point are considered. This leads to the linearized feedback loop in standard notation, as shown in Fig. 7.4 with amplitude error \(\Delta \hat{V }_{\mathrm{gap}} =\hat{ V }_{\mathrm{gap}} -\hat{ V }_{\mathrm{gap,ref}}\) and reference \(\Delta \hat{V }_{\mathrm{ref}} = 0\). Similarly, all other values are defined relative to their set-point values, e.g., the relative control effort is \(\Delta \hat{V }_{\mathrm{c}} =\hat{ V }_{\mathrm{c}} - 1.02\,\mathrm{V}\).

Fig. 7.4 Small-signal model of the amplitude feedback loop

A calculation of the reference to output transfer function according to (7.15) yields

$$\displaystyle{H_{\mathrm{ry}}(s) = \frac{K_{\mathrm{c}}K_{\mathrm{mod}}G_{\mathrm{Vgain}}K_{\mathrm{Vgain}}R_{\mathrm{p}}(\mathit{sT}_{\mathrm{c1}} + 1)(\mathit{sT}_{\mathrm{det}} + 1)} {(\mathit{sT}_{\mathrm{det}} + 1)(\mathit{sT}_{\mathrm{c2}} + 1)(\mathit{sT}_{\mathrm{cav}} + 1) + K_{\mathrm{c}}K_{\mathrm{mod}}G_{\mathrm{Vgain}}K_{\mathrm{Vgain}}R_{\mathrm{p}}(\mathit{sT}_{\mathrm{c1}} + 1)K_{\mathrm{cd}}}.}$$

A zero-pole-gain representation of this transfer function can be obtained by a numerical calculation of the poles and zeros. The gain is equal to the ratio of the coefficients of the highest powers of s in the numerator and denominator. For the amplitude loop, these powers are \(s^{2}\) and \(s^{3}\), respectively, and the gain is

$$\displaystyle{K = \frac{K_{\mathrm{c}}K_{\mathrm{mod}}G_{\mathrm{Vgain}}K_{\mathrm{Vgain}}R_{\mathrm{p}}T_{\mathrm{c1}}T_{\mathrm{det}}} {T_{\mathrm{det}}T_{\mathrm{c2}}T_{\mathrm{cav}}} = 2.6 \cdot 10^{8}\,\mathrm{s^{-1}}.}$$

The resulting zero-pole-gain representation is

$$\displaystyle{H_{\mathrm{ry}}(s) = \frac{\Delta \hat{V }_{\mathrm{gap}}(s)} {\Delta \hat{V }_{\mathrm{ref}}(s)} = 2.6 \cdot 10^{8}\,\mathrm{s^{-1}}\; \frac{(s - z_{1})(s - z_{2})} {(s - p_{1})(s - p_{2})(s - p_{3})}}$$

with zeros

$$\displaystyle{z_{1} = -5.81 \cdot 10^{4}\,\mathrm{s^{-1}},\quad z_{ 2} = -2 \cdot 10^{5}\,\mathrm{s^{-1}},}$$

and poles

$$\displaystyle{p_{1} = -2.42 \cdot 10^{4}\,\mathrm{s^{-1}},\quad p_{ 2,3} = -(2.14 \pm j\;1.44) \cdot 10^{5}\,\mathrm{s^{-1}}.}$$

Thus, the closed-loop system is BIBO stable. The pole p1 is closest to the imaginary axis and dominates the dynamics of the feedback. The dominating pole corresponds to a closed-loop bandwidth and a time constant of

$$\displaystyle{\omega _{\mathrm{ry}} = -p_{1} = 2.42 \cdot 10^{4}\,\mathrm{s^{-1}}\quad \Rightarrow \quad T_{\mathrm{ ry}} = -\frac{1} {p_{1}} \approx 40\,\upmu\ \mathrm{s}.}$$

The absolute values of the remaining poles are larger by an order of magnitude. They are thus negligible for a first rough evaluation of the closed-loop dynamics.
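
The quoted zero-pole-gain data can be reproduced numerically from the block parameters listed above; the following is a sketch with scipy (the variable names are mine).

```python
# Sketch: reproducing the closed-loop zeros, poles, and gain of the amplitude
# loop from the parameter values given in Sect. 7.3.
import numpy as np
from scipy import signal

K_c, K_mod = 14.9, 0.316 * 0.245
G_Vgain, K_Vgain = 27.0, 0.35
R_p, K_cd = 2700.0, 1.0 / 2000.0
T_c1, T_c2, T_det, T_cav = 17.2e-6, 487.2e-6, 5e-6, 4e-6

K_fwd = K_c * K_mod * G_Vgain * K_Vgain * R_p      # lumped forward-path gain

# H_ry(s) as given above:
# numerator   K_fwd (s T_c1 + 1)(s T_det + 1)
# denominator (s T_det + 1)(s T_c2 + 1)(s T_cav + 1) + K_fwd K_cd (s T_c1 + 1)
num = K_fwd * np.polymul([T_c1, 1.0], [T_det, 1.0])
den = np.polyadd(
    np.polymul(np.polymul([T_det, 1.0], [T_c2, 1.0]), [T_cav, 1.0]),
    K_fwd * K_cd * np.array([T_c1, 1.0]))

z, p, k = signal.tf2zpk(num, den)
print("zeros:", z)     # approx -5.8e4 and -2.0e5 (1/s)
print("poles:", p)     # approx -2.4e4 and -(2.1 +/- 1.4j)e5 (1/s)
print("gain :", k)     # approx 2.6e8 (1/s)
```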

7.4 Analysis and Stability

The closed-loop transfer function

$$\displaystyle{H_{\mathrm{ry}}(s) = \frac{b_{0} + b_{1}s + b_{2}s^{2}} {a_{0} + a_{1}s + a_{2}s^{2} + s^{3}}}$$

can be obtained from the given open-loop transfer function using only basic manipulations. The calculation of the poles pi from the characteristic equation

$$\displaystyle{0 = a_{0} + a_{1}s + a_{2}s^{2} + s^{3}}$$

is a more complex task, and numerical computations are necessary for higher-order systems in general. For a stability analysis, one may, however, not be interested in the exact values of the poles, but only in the decision whether all poles have negative real parts. There are several stability criteria that can be applied without solving the characteristic equation directly. The Hurwitz and Nyquist criteria will be presented in the next sections.

7.4.1 Routh–Hurwitz Stability Criterion

The Routh–Hurwitz criterion is a necessary and sufficient condition for the roots of the polynomial

$$\displaystyle{ a_{0} + a_{1}s +\ldots +a_{n-1}s^{n-1} + s^{n} }$$
(7.17)

to have only negative real parts, in which case the polynomial is then called a Hurwitz polynomial. The criterion is of particular interest if the coefficients ai contain undetermined parameters. An example of such a parameter is the controller gain in the feedback loop. With the Routh–Hurwitz criterion, inequalities in these parameters can then be obtained for the closed loop to be stable.

A first necessary condition is given by the following theorem:

Theorem 7.4.

If the polynomial  (7.17) is Hurwitz, then it has only positive coefficients ai > 0, \(i = 0,1,\ldots,n - 1\).

(See, e.g., Ludyk [11, Theorem 3.43, p. 161].)

This enables a first simple test whether a polynomial can be Hurwitz. If any of the coefficients is missing, i.e., ai = 0, or any ai is negative, there will be roots with nonnegative real part, and the polynomial is not Hurwitz.

A necessary and sufficient condition is presented by the Hurwitz criterion. It uses the ν ×ν Hurwitz determinants

$$\displaystyle{ H_{\nu }:=\det \left (\begin{array}{*{10}c} a_{n-1} & a_{n-3} & a_{n-5} & \ldots & a_{n-2\nu +1} \\ 1 &a_{n-2} & a_{n-4} & \ldots & a_{n-2\nu +2} \\ 0 &a_{n-1} & a_{n-3} & \ldots & a_{n-2\nu +3} \\ 0 & 1 &a_{n-2} & \ldots & a_{n-2\nu +4} \\ 0 & 0 &a_{n-1} & \ldots & a_{n-2\nu +5}\\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 &\ldots & a_{n-\nu } \end{array} \right ), }$$
(7.18)

where the coefficients ai in the matrix with an index i < 0 are set to zero. As an example, the first three determinants for a polynomial with degree n ≥ 5 are

$$\displaystyle\begin{array}{rcl} H_{1}&:=& a_{n-1}, {}\\ H_{2}&:=& \det \left (\begin{array}{*{10}c} a_{n-1} & a_{n-3} \\ 1 &a_{n-2} \end{array} \right ), {}\\ H_{3}&:=& \det \left (\begin{array}{*{10}c} a_{n-1} & a_{n-3} & a_{n-5} \\ 1 &a_{n-2} & a_{n-4} \\ 0 &a_{n-1} & a_{n-3} \end{array} \right ).{}\\ \end{array}$$

The Hurwitz criterion is given by the following theorem:

Theorem 7.5.

The polynomial  (7.17) is Hurwitz if and only if the Hurwitz determinants Hν defined by  (7.18) are positive for ν = 1,…,n.

(See, e.g., Gantmacher [13].)

A simplified version of this theorem needs only half the determinants:

Theorem 7.6.

Suppose that all the coefficients of the polynomial  (7.17) are positive. For odd n, the polynomial is Hurwitz if and only if the Hurwitz determinants H2, H4, …, Hn−1 are positive. For even n, the polynomial is Hurwitz if and only if the Hurwitz determinants H3, H5, …, Hn−1 are positive.

(See, e.g., Gantmacher [13].)

Consider as an example the amplitude feedback introduced in Sect. 7.3. The denominator of the closed-loop transfer function reads

$$\displaystyle{a_{0} + a_{1}s + a_{2}s^{2} + s^{3}}$$

with

$$\displaystyle{a_{0} = 1.6129 \cdot 10^{15}\,\mathrm{s^{-3}},\quad a_{ 1} = 7.6901 \cdot 10^{10}\,\mathrm{s^{-2}},\quad a_{ 2} = 4.5205 \cdot 10^{5}\,\mathrm{s^{-1}}.}$$

In the following, the physical units of these coefficients will be ignored to avoid confusion with the Laplace variable s. Since all coefficients are positive, this polynomial with n = 3 is Hurwitz, because

$$\displaystyle{H_{2} =\det \left (\begin{array}{*{10}c} a_{2} & a_{0} \\ 1 &a_{1} \end{array} \right ) = a_{2}a_{1}-a_{0} = 3.3150\cdot 10^{16} > 0.}$$
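
For higher polynomial orders, it is convenient to evaluate the determinants (7.18) programmatically; the following generic sketch (the helper function is mine, assuming numpy) is applied to the coefficients given above.

```python
# Sketch: evaluating the Hurwitz determinants (7.18) numerically for a monic
# polynomial a_0 + a_1 s + ... + a_{n-1} s^{n-1} + s^n.
import numpy as np

def hurwitz_determinants(a):
    """a = [a_0, a_1, ..., a_{n-1}]; the leading coefficient a_n = 1."""
    n = len(a)
    coeff = {i: (1.0 if i == n else a[i]) for i in range(n + 1)}
    dets = []
    for nu in range(1, n + 1):
        M = np.zeros((nu, nu))
        for row in range(nu):
            for col in range(nu):
                idx = n - 1 - 2 * col + row   # entry a_{n - 2(col+1) + (row+1)}
                if 0 <= idx <= n:
                    M[row, col] = coeff[idx]
        dets.append(np.linalg.det(M))
    return dets

a = [1.6129e15, 7.6901e10, 4.5205e5]     # a_0, a_1, a_2 of the amplitude loop
print(hurwitz_determinants(a))           # all positive -> Hurwitz polynomial
```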

Now assume that the feedback gain Kc in the loop of Fig. 7.4 is a free parameter. As a consequence, the coefficients a0 and a1 become parameter-dependent:

$$\displaystyle{a_{0} = 1.0136 \cdot 10^{14}K_{\mathrm{ c}} + 1.0263 \cdot 10^{14},\quad a_{ 1} = 1.7435 \cdot 10^{9}K_{\mathrm{ c}} + 5.0924 \cdot 10^{10}.}$$

The Hurwitz criterion now leads to the conditions

$$\displaystyle\begin{array}{rcl} a_{0} > 0\quad & \Rightarrow & \quad K_{\mathrm{c}} > -1.01, {}\\ a_{1} > 0\quad & \Rightarrow & \quad K_{\mathrm{c}} > -29.21, {}\\ H_{2} > 0\quad & \Rightarrow & \quad K_{\mathrm{c}} > -33.37. {}\\ \end{array}$$
Fig. 7.5 Root locus of the amplitude feedback. The poles p1, p2, and p3 are obtained for Kc = 14.9

Thus, the feedback loop is stable for Kc > −1.01. Due to the stability of the open-loop system, the closed-loop system obviously remains stable even if the feedback gain is slightly negative. A positive feedback gain Kc, however, is the typical case for the amplitude control. Figure 7.5 shows the closed-loop poles in the complex s-plane as a function of the positive gain Kc > 0. This type of diagram is also referred to as the root locus. For Kc = 0, the closed-loop poles are equal to the open-loop poles

$$\displaystyle{p_{1}(0) = - \frac{1} {T_{\mathrm{c2}}},\quad p_{2}(0) = - \frac{1} {T_{\mathrm{det}}},\quad p_{3}(0) = - \frac{1} {T_{\mathrm{cav}}}}$$

that are obtained from the open-loop transfer function (cf. Fig. 7.4)

$$\displaystyle{H_{\mathrm{open}}(s) = \frac{\Delta \hat{V }_{\mathrm{gap,det}}(s)} {\Delta \hat{V }_{\mathrm{e}}(s)}.}$$

For increasing Kc, the closed-loop pole p1 moves to the left toward the open-loop zero

$$\displaystyle{z_{1}(0) = - \frac{1} {T_{\mathrm{c1}}}}$$

of Hopen(s), whereas the poles p2 and p3 approach each other, and for a certain Kc between 0 and 14.9, a complex conjugate pole pair arises. The root locus indicates that the closed loop remains stable also for higher gains Kc → ∞, because all three branches of the root locus remain in the OLHP. Since the branches of the root locus are the positions of the closed-loop poles, the closed loop is stable. This is in agreement with the result of the Hurwitz criterion.

Please note that for a practical implementation, very large feedback gains Kc would not be recommendable for several reasons:

  • For sufficiently large gains, the complex pair p2, 3 dominates the dynamics of the loop, resulting in an unacceptable oscillatory behavior.

  • Large gains may amplify disturbances, especially measurement noise.

  • The feedback of the real system may become unstable for very large gains due to unmodeled high-frequency dynamics and delays.

7.4.2 Bode Plots and Nyquist Criterion

The Hurwitz stability criterion is based on the characteristic equation, i.e., on the denominator polynomial of the closed-loop transfer function. The Bode plots and the Nyquist criterion are approaches that are different in the sense that they rely on the open-loop transfer function

$$\displaystyle{H_{\mathrm{open}}(s):= H_{\mathrm{c}}(s)\ H_{\mathrm{p}}(s)\ H_{\mathrm{m}}(s)}$$

of the standard feedback loop; cf. Fig. 7.2. Consider as an example the system

$$\displaystyle{ H_{\mathrm{open}}(s) = \frac{K\left (1 - \frac{s} {z_{1}} \right )} {s^{N}\left (1 - \frac{s} {p_{1}} \right )\left (1 - \frac{s} {p_{2}} \right )\left (1 - \frac{s} {p_{2}^{{\ast}}}\right )}. }$$
(7.19)

This system is assumed to have a real zero z1 ≠ 0, a real pole p1 ≠ 0, a complex pole pair p2 and p2∗, and N poles at s = 0. The frequency response of Hopen(s) is given by

$$\displaystyle{H_{\mathrm{open}}(j\omega ) = \frac{K\left (1 - \frac{j\omega } {z_{1}} \right )} {(j\omega )^{N}\left (1 - \frac{j\omega } {p_{1}} \right )\left (1 - \frac{j\omega } {p_{2}} \right )\left (1 - \frac{j\omega } {p_{2}^{{\ast}}}\right )}.}$$

The complex pole pair can also be written as

$$\displaystyle{\left (1 - \frac{j\omega } {p_{2}}\right )\left (1 - \frac{j\omega } {p_{2}^{{\ast}}}\right ) = 1 - \frac{\omega ^{2}} {p_{2}p_{2}^{{\ast}}} - j\omega \left ( \frac{1} {p_{2}} + \frac{1} {p_{2}^{{\ast}}}\right ) = 1 - \frac{\omega ^{2}} {\vert p_{2}\vert ^{2}} - j\omega \frac{2\mathrm{Re}\{p_{2}\}} {\vert p_{2}\vert ^{2}}.}$$

In a Bode diagram, the amplitude and phase of Hopen are plotted versus the frequency ω > 0. A logarithmic scale is used, which has the advantage that the multiplication of two transfer functions is equivalent to the sum of their Bode diagrams. The amplitude of Hopen in decibels (dB) is calculated as

$$\displaystyle{\vert H_{\mathrm{open}}(j\omega )\vert _{\mathrm{dB}}:= 20\log _{10}\vert H_{\mathrm{open}}(j\omega )\vert.}$$

In our example, using the properties of the logarithmic function leads to

$$\displaystyle\begin{array}{rcl} \vert H_{\mathrm{open}}(j\omega )\vert _{\mathrm{dB}}& =& 20\log _{10}\vert K\vert \ +\ 20\log _{10}\sqrt{1 + \frac{\omega ^{2 } } {z_{1}^{2}}}\ -\ 20N\log _{10}\omega \ - {}\\ & &-\ 20\log _{10}\sqrt{1 + \frac{\omega ^{2 } } {p_{1}^{2}}}\ -\ 20\log _{10}\sqrt{\left (1 - \frac{\omega ^{2 } } {\vert p_{2}\vert ^{2}}\right )^{2} + \left (\frac{2\omega \mathrm{Re}\{p_{2}\}} {\vert p_{2}\vert ^{2}} \right )^{2}}. {}\\ \end{array}$$

This expression is the sum of five components. The first is the constant

$$\displaystyle{\vert H_{1}(j\omega )\vert _{\mathrm{dB}}:= 20\log _{10}\vert K\vert.}$$

The second function is due to the zero and can be approximated by two asymptotes:

$$\displaystyle{ \vert H_{2}(j\omega )\vert _{\mathrm{dB}}:= 20\log _{10}\sqrt{1 + \frac{\omega ^{2 } } {z_{1}^{2}}} \approx \left \{\begin{array}{@{}l@{\quad }l@{}} 0\,\mathrm{dB} \quad &\mbox{ for $\omega \ll \vert z_{1}\vert $}, \\ 3\,\mathrm{dB} \quad &\mbox{ for $\omega = \vert z_{1}\vert $}, \\ 20\log _{10}\omega - 20\log _{10}\vert z_{1}\vert \quad &\mbox{ for $\omega \gg \vert z_{1}\vert.$} \end{array} \right. }$$
(7.20)

The N-fold integrator leads to

$$\displaystyle{\vert H_{3}(j\omega )\vert _{\mathrm{dB}}:= -20N\log _{10}\omega.}$$

For the pole p1, the result is similar to the case of zero z1, but with opposite signs:

$$\displaystyle{\vert H_{4}(j\omega )\vert _{\mathrm{dB}}:= -20\log _{10}\sqrt{1 + \frac{\omega ^{2 } } {p_{1}^{2}}} \approx \left \{\begin{array}{@{}l@{\quad }l@{}} 0\,\mathrm{dB} \quad &\mbox{ for $\omega \ll \vert p_{1}\vert $}, \\ -3\,\mathrm{dB} \quad &\mbox{ for $\omega = \vert p_{1}\vert $}, \\ -20\log _{10}\omega + 20\log _{10}\vert p_{1}\vert \quad &\mbox{ for $\omega \gg \vert p_{1}\vert.$} \end{array} \right.}$$

Finally, the pole pair has the following asymptotes:

$$\displaystyle\begin{array}{rcl} \vert H_{5}(j\omega )\vert _{\mathrm{dB}}&:=& -20\log _{10}\sqrt{\left (1 - \frac{\omega ^{2 } } {\vert p_{2}\vert ^{2}}\right )^{2} + \left (\frac{2\omega \mathrm{Re}\{p_{2}\}} {\vert p_{2}\vert ^{2}} \right )^{2}} {}\\ & \approx & \left \{\begin{array}{@{}l@{\quad }l@{}} 0\,\mathrm{dB} \quad &\mbox{ for $\omega \ll \vert p_{2}\vert $}, \\ -40\log _{10}\omega + 40\log _{10}\vert p_{2}\vert \quad &\mbox{ for $\omega \gg \vert p_{2}\vert.$} \end{array} \right.{}\\ \end{array}$$

The phase of Hopen is given by

$$\displaystyle{\measuredangle H_{\mathrm{open}}(j\omega ) =\sum _{ i=1}^{5}\measuredangle H_{ i}(j\omega ).}$$

The phases \(\measuredangle H_{i}(j\omega )\) can be approximated by asymptotes in a similar way as shown for the amplitudes. For example, the zero leads to the phase

$$\displaystyle{\measuredangle H_{2}(j\omega ) = \measuredangle \left (1 - \frac{j\omega } {z_{1}}\right ) = \left \{\begin{array}{@{}l@{\quad }l@{}} \approx 0 \quad &\mbox{ for $\omega \rightarrow 0$}, \\ + \frac{\pi }{4} \quad &\mbox{ for $\omega = \vert z_{1}\vert $ and $z_{1} < 0$}, \\ \approx + \frac{\pi }{2}\quad &\mbox{ for $\omega \rightarrow \infty $ and $z_{1} < 0,$} \\ -\frac{\pi }{4} \quad &\mbox{ for $\omega = \vert z_{1}\vert $ and $z_{1} > 0,$} \\ \approx -\frac{\pi }{2}\quad &\mbox{ for $\omega \rightarrow \infty $ and $z_{1} > 0$}. \end{array} \right.}$$

Figure 7.6 shows the Bode plots of the transfer functions Hi(j ω) with their asymptotes for a system with N = 1, positive gain K, and with the zero and poles in the OLHP, i.e., a stable system. The following observations can be made:

  • The gain H1(j ω) = K leads to an amplitude shift of the open-loop transfer function Hopen.

  • The zero z1 < 0 raises the amplitude and phase; cf. H2(j ω). At the frequency ω =  | z1 | , the amplitude is close to \(3\,\mathrm{dB}\), and the phase equals π∕4. For large frequencies, the amplitude increases with \(20\,\mathrm{dB}\) per (frequency) decade and the phase approaches π∕2.

  • The amplitude of the integrator H3(j ω) tends to infinity for small frequencies. This fact enables steady-state accuracy for the closed loop with regard to stepwise disturbances. However, the phase of \(-\pi /2\) may lead to stability problems in some cases. This can be shown with the Nyquist stability criterion, which will be presented below.

  • The pole p1 has the opposite effect to that of the zero z1. For large frequencies, the amplitude slope is \(-20\,\mathrm{dB}\) per decade, and the phase approaches \(-\pi /2\).

  • For small or large frequencies, the complex pole pair acts as a double pole at ω =  | p2 | . However, for frequencies close to | p2 | , a resonance may occur. This means that | H5(j ω) | may become considerably larger than 1. The frequency at which the maximum of | H5(j ω) | occurs can be calculated analytically, and it reads

    $$\displaystyle{\omega _{\mathrm{res}} = \sqrt{\mathrm{Im }\{p_{2 } \}^{2 } -\mathrm{ Re }\{p_{2 } \}^{2}} \approx 1.94,}$$

    i.e., |Im{p2}| > |Re{p2}| is a necessary condition for a resonance. Disturbances or input signals with frequencies close to ωres will be amplified significantly in the open loop. A resonance in the open loop may be one reason why feedback is necessary. Feedback can provide additional damping, so that the resonance is not present in the closed-loop frequency response.

Fig. 7.6 Bode plots of the open-loop transfer function (7.19) for N = 1, K = 2, \(z_{1} = -1\), \(p_{1} = -1\), and \(p_{2} = -\frac{1}{2} + 2j\)

For the Bode plot of the system with the transfer function Hopen, the Bode plots of the subsystems Hi have to be combined. As already shown, this simply corresponds to the sum of the amplitude and phase plots due to the use of a logarithmic scale. This also applies to the asymptotes. To sketch the asymptotes of the Bode plot of Hopen, it is therefore possible to proceed as follows. First, the break points are calculated as the absolute value of the zeros and poles, i.e., ω =  | zi(0) | and ω =  | pi(0) | . The argument 0 for both zi and pi emphasizes that the open-loop zeros and poles are used. Next, one begins with the asymptote of the N-fold integrator H3. This asymptote is a line with slope \(-20N\,\mathrm{dB}\) per decade (of the frequency ω) that crosses the point with amplitude 20log10(K) at \(\omega = 1\,\mathrm{s^{-1}}\). For N = 0, the Bode plot begins with a horizontal asymptote. One then proceeds to higher frequencies, changing the slope of the asymptote at every break point. For a single pole, the slope changes by \(-20\,\mathrm{dB}\) per decade; for a single zero, by \(20\,\mathrm{dB}\) per decade; and for multiple poles or zeros, accordingly with the multiple of these slopes. For the phase plot, one begins with a horizontal asymptote of \(-N \frac{\pi } {2}\). At the break points, the asymptote is changed stepwise with \(\frac{-\pi } {2}\) for a single pole, \(\frac{\pi }{2}\) for a zero, and a multiple of \(\frac{\pi }{2}\) for multiple poles or zeros. For the amplitude feedback, this procedure leads to the asymptotes as shown in Fig. 7.7 for Kc = 1. The exact Bode plot is shown as a solid black curve. The static open-loop gain equals

$$\displaystyle{H_{\mathrm{open}}(j\omega = 0) = K_{\mathrm{mod}}G_{\mathrm{Vgain}}K_{\mathrm{Vgain}}R_{\mathrm{p}}K_{\mathrm{cd}} = 0.99}$$

for Kc = 1. At ω = | p1(0) | , the first pole leads to a negative slope of \(-20\,\mathrm{dB}\) per decade. Next, the zero z1(0) raises the slope to zero, before the two remaining poles finally lead to a slope or cutoff rate of \(-40\,\mathrm{dB}\) per decade. The phase begins at zero and drops to

$$\displaystyle{-\frac{\pi } {2} + \frac{\pi } {2} - \frac{\pi } {2} - \frac{\pi } {2} = -\pi }$$

for large frequencies.

Fig. 7.7 Bode plot of the amplitude feedback example

The frequency at which the amplitude drops to \(-3\,\mathrm{dB}\) is called the cutoff frequency. It is denoted by \(\omega _{\mathrm{c}} = 2004\,\mathrm{\frac{1} {s} }\) in Fig. 7.7 and is also called the bandwidth of the open-loop transfer function [1].
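
The Bode data of the amplitude-loop example can be reproduced numerically from the parameters in Sect. 7.3; the following sketch (assuming scipy, plotting omitted) evaluates the open-loop frequency response for Kc = 1.

```python
# Sketch: Bode data of the open-loop transfer function of the amplitude loop
# for Kc = 1, using the parameter values from Sect. 7.3.
import numpy as np
from scipy import signal

K_stat = 0.316 * 0.245 * 27.0 * 0.35 * 2700.0 / 2000.0   # approx 0.99 for Kc = 1
T_c1, T_c2, T_det, T_cav = 17.2e-6, 487.2e-6, 5e-6, 4e-6

# H_open(s) = K_stat (s T_c1 + 1) / ((s T_c2 + 1)(s T_det + 1)(s T_cav + 1))
num = K_stat * np.array([T_c1, 1.0])
den = np.polymul(np.polymul([T_c2, 1.0], [T_det, 1.0]), [T_cav, 1.0])

w = np.logspace(2, 7, 2000)                   # rad/s
w, mag_dB, phase_deg = signal.bode((num, den), w=w)

print("static gain:", mag_dB[0], "dB")                    # approx 0 dB (gain 0.99)
print("phase at high frequency:", phase_deg[-1], "deg")   # approaches -180 deg

# frequency where the amplitude first drops to -3 dB (cutoff frequency)
w_c = w[np.argmax(mag_dB < -3.0)]
print("cutoff frequency:", w_c, "1/s")        # on the order of 2e3 1/s
```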

Because the Bode plot contains all information about the open loop, there is a unique correspondence between this diagram and the transfer function Hopen(s). If the open loop is stable, the Bode plot can be obtained by measuring the frequency response Hopen(j ω). An equivalent diagram that is very useful for determining the stability of the closed loop is the Nyquist plot. It is obtained by plotting the curve

$$\displaystyle{H_{\mathrm{open}}(j\omega ) =\mathrm{ Re}\{H_{\mathrm{open}}(j\omega )\}\ +\ j\;\mathrm{Im}\{H_{\mathrm{open}}(j\omega )\}}$$

in the complex plane for \(\omega \in \mathbb{R}\). The Nyquist plot of the amplitude feedback example is shown in Fig. 7.8.

Fig. 7.8

Nyquist plot of the amplitude feedback example

Due to

$$\displaystyle{H_{\mathrm{open}}(-j\omega ) = H_{\mathrm{open}}^{{\ast}}(j\omega ),}$$

the part of the Nyquist plot for negative frequencies ω is always symmetric, with respect to the real axis, to the part for positive frequencies. For this reason, the Nyquist plot is usually analyzed for positive frequencies only. From the discussion of the Bode plot, it is already known that the Nyquist plot begins at Hopen(j0) = 0.99 and approaches the origin for large ω. Also, the phase approaches −π, as can be observed from the closeup view in Fig. 7.8. The vector

$$\displaystyle{1 + H_{\mathrm{open}}(j\omega )}$$

points from \(-1 + j0\) to the Nyquist plot, as shown in Fig. 7.8. Its behavior is essential for the stability of the closed loop. If we follow this vector from ω = 0 to ω → ∞, we can define the change of its argument as

$$\displaystyle{ \Delta \varphi _{\mathrm{Nyquist}}:=\lim _{\omega \rightarrow \infty }\measuredangle \left (1 + H_{\mathrm{open}}(j\omega )\right )\ -\ \measuredangle \left (1 + H_{\mathrm{open}}(j0)\right ). }$$
(7.21)

The general Nyquist stability criterion can now be used to determine the stability of the closed loop:

Theorem 7.7.

The closed loop is asymptotically stable if and only if the continuous change of the argument as defined in Eq.  (7.21) is equal to

$$\displaystyle{\Delta \varphi _{\mathrm{Nyquist}} = n_{\mathrm{unstable}}\pi + n_{\mathrm{critical}} \frac{\pi } {2},}$$

where nunstable is the number of (unstable) open-loop poles in the ORHP and ncritical is the number of open-loop poles on the imaginary axis.

(See, e.g., Unbehauen [14, p. 156].)

Only the continuous change in the argument is considered. If, for example, the Nyquist plot consists of several branches due to open-loop poles on the imaginary axis, then \(\Delta \varphi _{\mathrm{Nyquist}}\) can be determined for each branch separately, and the total change is the sum of these results.

Since the amplitude feedback system in our example contains only stable open-loop poles, a necessary and sufficient condition for stability is

$$\displaystyle{\Delta \varphi _{\mathrm{Nyquist}} = 0,}$$

as is the case for Kc = 1 in Fig. 7.8. Changing the gain Kc will only scale the Nyquist plot, as shown in Fig. 7.9. For positive gains Kc > 0, the closed loop will always be stable, because \(\Delta \varphi _{\mathrm{Nyquist}} = 0\). In the case of negative Kc, the Nyquist plot is also rotated by 180°, and the critical point \(-1 + j0\) is crossed for

$$\displaystyle{K_{\mathrm{c}} = - \frac{1} {H_{\mathrm{open}}(j0)} = -1.01}$$

and the change in the argument is \(\Delta \varphi _{\mathrm{Nyquist}} = +\pi\). Thus, the closed loop is unstable for Kc < −1.01, a result already obtained with the Hurwitz criterion.
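
The argument change (7.21) can also be evaluated numerically from samples of the frequency response. The sketch below uses a simplified stand-in for Hopen(s) (static gain 0.99 with two assumed first-order lags) merely to illustrate the procedure; it does not reproduce the exact Nyquist plots of Figs. 7.8 and 7.9.

```python
import numpy as np

# Numerical evaluation of the argument change (7.21).  The transfer function
# below is only an illustrative stand-in with static gain 0.99 and two assumed
# first-order lags; the procedure is identical for any H_open(j*omega).
def H_open(w, Kc):
    s = 1j * w
    return Kc * 0.99 / ((1.0 + 4e-6 * s) * (1.0 + 5e-6 * s))

def delta_phi_nyquist(Kc):
    w = np.logspace(0, 9, 100000)        # omega from (almost) 0 to "infinity"
    vec = 1.0 + H_open(w, Kc)            # vector from -1 + j0 to the Nyquist plot
    phi = np.unwrap(np.angle(vec))       # continuous argument
    return phi[-1] - phi[0]

for Kc in (1.0, 14.9):
    print(f"Kc = {Kc:5.1f}:  delta_phi / pi = {delta_phi_nyquist(Kc) / np.pi:+.3f}")
# Both cases yield approximately 0, i.e., the closed loop is stable; for
# Kc < -1/0.99 the argument change becomes nonzero and indicates instability.
```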

Fig. 7.9

Nyquist plot for different gains Kc

7.4.3 Time Delay

If the feedback loop contains a considerable time delay Td, this can be taken into account in the Laplace transform of the open loop Hopen(s). If, for example, the measurement of the output y(t) is delayed, this leads to

$$\displaystyle{y_{\mathrm{delay}}(t) = y(t - T_{\mathrm{d}}).}$$

Due to the shift theorem of the Laplace transform, every open loop with a single delay can be expressed by

$$\displaystyle{H_{\mathrm{open,delay}}(s) = H_{\mathrm{open}}(s)\ e^{-T_{\mathrm{d}}s}.}$$

The consequence of the exponential function is that the characteristic equation of the closed loop is no longer an algebraic equation, but a transcendental one. The number of poles becomes infinite, and the stability analysis is thus more involved. Fortunately, the Nyquist criterion can still be applied [15]. For the frequency response,

$$\displaystyle\begin{array}{rcl} \vert H_{\mathrm{open,delay}}(j\omega )\vert & =& \vert H_{\mathrm{open}}(j\omega )\vert {}\\ \measuredangle H_{\mathrm{open,delay}}& =& \measuredangle H_{\mathrm{open}} - T_{\mathrm{d}}\omega {}\\ \end{array}$$
Fig. 7.10

Nyquist plot (left) and closeup (right) of the amplitude feedback with delay \(T_{\mathrm{d}} = 5\,\upmu\ \mathrm{s}\) and definition of the amplitude margin (AM) and phase margin (PM)

holds, i.e., the delay leads to a faster decrease of the phase, but does not affect the amplitude. Figure 7.10 shows the Nyquist plot of the amplitude feedback with the nominal feedback gain of Kc = 14.9 and an additional time delay of \(T_{\mathrm{d}} = 5\,\upmu\ \mathrm{s}\). This time delay is a worst-case scenario for signal transit times due to a distance of about \(100\,\mathrm{m}\) between the cavity and the LLRF unit [12]. The closeup shows that the closed loop is still stable, but not for arbitrary Kc > 0. The Nyquist plot crosses the horizontal axis at −0.237. Increasing the gain Kc by a factor of

$$\displaystyle{ \mathit{AM} = 20\log \left ( \frac{1} {0.237}\right ) = 12.5\,\mathrm{dB} }$$

will therefore lead to a crossing of the critical point \(-1 + j0\) and to instability. This factor is called the amplitude margin and is a measure of the variations in the amplitude of the process transfer function that can be tolerated. For larger amplitude margins, the feedback is more robust against such variations. In addition, Fig. 7.10 shows that the Nyquist plot crosses the unit circle at an angle of about −83°. The frequency of this crossing is \(\omega = 34.2 \cdot 10^{3}\,\mathrm{s^{-1}}\). The phase margin

$$\displaystyle{ \mathit{PM} = 180^{\circ }- 83^{\circ } = 97^{\circ } }$$

is defined as the distance to the critical point in terms of the phase, i.e., the tolerable variation in the phase of the process transfer function. A simple estimateFootnote 6 shows that an additional time delay of \(T_{\mathrm{d}} = 50\,\upmu\ \mathrm{s}\) would lead to a phase decrease of

$$\displaystyle{ \omega T_{\mathrm{d}} \approx 34.2 \cdot 10^{3}\,\mathrm{s^{-1}} \cdot 50\,\upmu\ \mathrm{s} \approx 98^{\circ }, }$$

i.e., the feedback will remain stable for time delays up to this order of magnitude.
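
Amplitude and phase margins can be determined numerically from samples of Hopen,delay(jω) by locating the phase and gain crossover frequencies. The following sketch shows the generic procedure; the transfer function H_demo is an assumed placeholder, not the amplitude feedback loop of Fig. 7.10.

```python
import numpy as np

# Generic determination of amplitude margin (AM) and phase margin (PM) from
# samples of an open-loop frequency response.  H_demo is an assumed placeholder;
# for a real application the measured or modeled H_open,delay(j*omega) would be
# inserted instead.
Td = 5e-6   # assumed delay in s

def H_demo(w):
    # placeholder loop: gain 2, two first-order lags, delay factor exp(-j*w*Td)
    return 2.0 * np.exp(-1j * w * Td) / ((1 + 1j * 1e-4 * w) * (1 + 1j * 2e-5 * w))

def margins(H_of_w, w):
    H = H_of_w(w)
    mag = np.abs(H)
    phase = np.unwrap(np.angle(H))               # continuous phase in rad
    i = np.argmin(np.abs(phase + np.pi))         # phase crossover (phase = -180 deg)
    am_db = -20.0 * np.log10(mag[i])
    k = np.argmin(np.abs(mag - 1.0))             # gain crossover (|H| = 1)
    pm_deg = 180.0 + np.degrees(phase[k])
    return am_db, pm_deg, w[k]

w = np.logspace(2, 7, 100000)
am, pm, w_c = margins(H_demo, w)
print(f"AM = {am:.1f} dB, PM = {pm:.1f} deg at gain crossover w = {w_c:.3g} 1/s")
```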

7.4.4 Steady-State Accuracy

The standard closed loop in Fig. 7.2 on p. 341 is said to have no steady-state error if

$$\displaystyle{ x_{\mathrm{e}}(\infty ):=\lim _{t\rightarrow \infty }x_{\mathrm{e}}(t) =\lim _{t\rightarrow \infty }\left (y_{\mathrm{r}}(t) - y_{\mathrm{m}}(t)\right ) = 0 }$$

is guaranteed, i.e., if the measured value converges to the reference value. From Fig. 7.2, the following expression for the steady-state error can be obtained:

$$\displaystyle\begin{array}{rcl} X_{\mathrm{e}}(s)& =& \ \frac{1} {1 + H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)H_{\mathrm{m}}(s)}\ Y _{\mathrm{r}}(s) - \frac{H_{\mathrm{p}}(s)H_{\mathrm{m}}(s)} {1 + H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)H_{\mathrm{m}}(s)}\ X_{\mathrm{d1}}(s) \\ & &-\ \frac{H_{\mathrm{m}}(s)} {1 + H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)H_{\mathrm{m}}(s)}\ (X_{\mathrm{d2}}(s) + X_{\mathrm{d3}}(s)). {}\end{array}$$
(7.22)

In the following, it is assumed that all transfer functions in this expression are stable, i.e., have only poles in the OLHP. In this case, we can use the final-value theorem for Laplace transforms (cf. Sect. 2.2). Without disturbances, this leads to

$$\displaystyle{ x_{\mathrm{e}}(\infty )\,=\,\lim _{s\rightarrow 0}\big(s\ X_{\mathrm{e}}(s)\big)\,=\,\lim _{s\rightarrow 0}\left ( \frac{s\ Y _{\mathrm{r}}(s)} {1 + H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)H_{\mathrm{m}}(s)}\right )=\lim _{s\rightarrow 0}\left ( \frac{s\ Y _{\mathrm{r}}(s)} {1 + H_{\mathrm{open}}(s)}\right ). }$$

It is now particularly important which type of reference signal yr(t) is assumed. For a step function, we have \(Y _{\mathrm{r}}(s) = K/s\) andFootnote 7

$$\displaystyle{ x_{\mathrm{e}}(\infty ) = \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{K} {1+H_{\mathrm{open}}(0)}\quad &\mbox{ for }\vert H_{\mathrm{open}}(0)\vert < \infty, \\ 0 \quad &\mbox{ otherwise}. \end{array} \right. }$$

This shows that an integrator (1∕s) in the feedback loop (in the controller, the process, or the measurement transfer function) is sufficient for a vanishing steady-state error. For other reference signals, this may not be sufficient. For example, a ramp signal (1∕s²) requires at least two integrators in the transfer functions of the feedback loop. However, too many integrators may lead to stability problems, because each integrator lowers the phase of the open-loop transfer function by \(\pi /2\).
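
The effect of an integrator on the steady-state error can be verified symbolically with the final-value theorem. The following sketch (using SymPy) compares an illustrative loop with finite static gain to one containing an integrator; both transfer functions are assumed stand-ins.

```python
import sympy as sp

# Final-value theorem for a step reference Y_r(s) = K/s.  The two loops below
# are illustrative stand-ins: one with finite static gain H_open(0) = H0, one
# with an additional integrator.
s, K, H0, T = sp.symbols('s K H0 T', positive=True)

H_without_integrator = H0 / (T * s + 1)
H_with_integrator = H0 / (s * (T * s + 1))

e_without = sp.limit(s * (K / s) / (1 + H_without_integrator), s, 0)
e_with = sp.limit(s * (K / s) / (1 + H_with_integrator), s, 0)

print(e_without)   # K/(H0 + 1): finite steady-state error
print(e_with)      # 0: the integrator removes the steady-state error
```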

If significant disturbances are present, it is usually necessary that the integrator be contained in the controller, as can be seen from the other transfer functions in Eq. (7.22). Assuming that the process and measurement transfer functions have no integrator, Hp(0) and Hm(0) are finite, and an integral controller will lead to xe(∞) = 0 for stepwise disturbances.

7.5 Feedback Design

7.5.1 Tradeoff Between Performance and Robustness

The transfer function Hp(s) in Fig. 7.2 on p. 341 usually describes the physical behavior of the real process only approximately. Reasons for model errors can be nonlinearities, dependence on time or operating conditions, and unmodeled high-frequency dynamics. In many cases, the model errors may be described by parameter variations in the numerator and denominator of the transfer function Hp(s). These variations will lead to a change in performance of the closed-loop control. To estimate this effect, the sensitivity function

$$\displaystyle{ H_{\mathrm{s}}:= \frac{\partial H_{\mathrm{ry}}} {\partial H_{\mathrm{p}}} \ \frac{H_{\mathrm{p}}} {H_{\mathrm{ry}}} }$$

is defined as the relative change of the closed-loop transfer function Hry(s) with respect to variations of the process transfer function Hp(s). With Eq. (7.15), this leads to

$$\displaystyle{ H_{\mathrm{s}} = \frac{\partial \left ( \frac{H_{\mathrm{p}}H_{\mathrm{c}}} {1+H_{\mathrm{p}}H_{\mathrm{c}}H_{\mathrm{m}}} \right )} {\partial H_{\mathrm{p}}} \ \frac{H_{\mathrm{p}}} {H_{\mathrm{ry}}} = \frac{H_{\mathrm{c}}(1 + H_{\mathrm{p}}H_{\mathrm{c}}H_{\mathrm{m}}) - H_{\mathrm{p}}H_{\mathrm{c}}^{2}H_{\mathrm{m}}} {(1 + H_{\mathrm{p}}H_{\mathrm{c}}H_{\mathrm{m}})^{2}} \ \frac{H_{\mathrm{p}}(1 + H_{\mathrm{p}}H_{\mathrm{c}}H_{\mathrm{m}})} {H_{\mathrm{p}}H_{\mathrm{c}}} }$$

and finally to the sensitivity function

$$\displaystyle{ H_{\mathrm{s}}(s) = \frac{1} {1 + H_{\mathrm{p}}(s)H_{\mathrm{c}}(s)H_{\mathrm{m}}(s)}. }$$

This is exactly the disturbance-to-output transfer function Hdy(s) (cf. Eq. (7.16)) that was derived from Fig. 7.2. It is apparent that a sufficiently large feedback gain | Hc | will lead to both a small sensitivity | Hs | and good disturbance rejection. However, a large feedback gain decreases the amplitude margin AM in many cases and may lead to instability. This shows that a tradeoff between performance and robustness specifications is usually necessary. Please note that for the open-loop system, Hc = 0, and the sensitivity equals 1. For the closed-loop system, | Hs | also approaches 1 for large frequencies, because for most practical cases, | HpHcHm | tends to zero.

For our amplitude feedback example, the sensitivity function is equal to

$$\displaystyle{ H_{\mathrm{s}}(s) = \frac{(s - z_{1})(s - z_{2})(s - z_{3})} {(s - p_{1})(s - p_{2})(s - p_{3})} }$$

with

$$\displaystyle\begin{array}{rcl} z_{1}& =& -2.5 \cdot 10^{5}\,\mathrm{s^{-1}},\quad z_{ 2} = -2 \cdot 10^{5}\,\mathrm{s^{-1}},\quad z_{ 3} = -2.05 \cdot 10^{3}\,\mathrm{s^{-1}}, {}\\ p_{1}& =& -2.42 \cdot 10^{4}\,\mathrm{s^{-1}},\quad p_{ 2,3} = (-2.14 \pm j\;1.44) \cdot 10^{5}\,\mathrm{s^{-1}}. {}\\ \end{array}$$

Its amplitude | Hs(j ω) | is shown in Fig. 7.11. In contrast to | Hopen,delay(j ω) | , the amplitude of the sensitivity function depends on the time delay.

Fig. 7.11

Sensitivity function of the amplitude feedback (Td = 0)

The sensitivity shows that the amplitude feedback rejects disturbances or noise with frequency components up to about \(10\,\mathrm{kHz}\). The closed loop is also less sensitive with respect to model variations than the open loop in this frequency range. However, the sensitivity is not zero for ω → 0. This implies that the closed loop does not reject DC offsets completely and may thus have a steady-state error. This can be shown as follows. From the standard feedback loop, the control error can be calculated as

$$\displaystyle\begin{array}{rcl} X_{\mathrm{e}}(s)& =& Y _{\mathrm{r}}(s) - H_{\mathrm{m}}(s) \cdot Y (s) {}\\ & =& Y _{\mathrm{r}}(s) - H_{\mathrm{m}}(s) \cdot \frac{H_{\mathrm{c}}(s)\ H_{\mathrm{p}}(s)} {1 + H_{\mathrm{m}}(s)\ H_{\mathrm{c}}(s)\ H_{\mathrm{p}}(s)} \cdot Y _{\mathrm{r}}(s) {}\\ & =& \frac{1} {1 + H_{\mathrm{m}}(s)\ H_{\mathrm{c}}(s)\ H_{\mathrm{p}}(s)} \cdot Y _{\mathrm{r}}(s). {}\\ \end{array}$$

If we assume that the closed loop is stable and the reference signal is equal to a unit step, i.e., \(Y _{\mathrm{r}} = 1/s\), then the final value of the control error is given by

$$\displaystyle\begin{array}{rcl} \lim _{t\rightarrow \infty }x_{\mathrm{e}}(t)& =& \lim _{s\rightarrow 0}\left (s \cdot \frac{1} {1 + H_{\mathrm{m}}(s)\ H_{\mathrm{c}}(s)\ H_{\mathrm{p}}(s)} \cdot \frac{1} {s}\right ) {}\\ & =& \frac{1} {1 + H_{\mathrm{m}}(0)\ H_{\mathrm{c}}(0)\ H_{\mathrm{p}}(0)}. {}\\ \end{array}$$

Thus, the value of the sensitivity function for ω = 0 is equal to the relative steady-state error of the closed-loop system. For the amplitude feedback loop, a value of 6.4%, or \(-23.9\,\mathrm{dB}\), is obtained. This steady-state error will also be apparent in the simulation results in the next section.
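
A short numerical check, using the zeros and poles of Hs(s) given above, confirms this value:

```python
import numpy as np

# DC value of the sensitivity function from its zeros and poles (values of the
# amplitude feedback example as given above).
z = np.array([-2.5e5, -2.0e5, -2.05e3])
p = np.array([-2.42e4, -2.14e5 + 1.44e5j, -2.14e5 - 1.44e5j])

Hs0 = np.prod(-z) / np.prod(-p)    # H_s(0) = prod(0 - z_i) / prod(0 - p_i)
print(f"H_s(0) = {Hs0.real:.4f}  ({20 * np.log10(abs(Hs0)):.1f} dB)")
# approximately 0.064, i.e., a relative steady-state error of about 6.4 %
```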

7.5.2 Design Goals and Specifications

The main design goals of feedback are stability, a fast dynamic response, disturbance rejection, a small tracking error, and robustness against parameter variations. In addition, the control effort should comply with the physical limitations of the process. There exist several parameters to describe these specifications quantitatively. In the time domain, the response to a step disturbance or reference signal is often considered, and the following quantities are used to describe the dynamic response:

  • Rise time: transit time from 10% to 90% of the final value, i.e., of the output step size.

  • Percentage of overshoot.

  • Settling time: time after which the output stays inside a ± 5% or ± 2% interval around the final value.

  • Steady-state error between the reference signal and the output.

The performance of the amplitude feedback example is shown in Fig. 7.12. The curve \(\hat{V }_{\mathrm{gap,det}}\) is obtained from a simulation model from [12], which is in good agreement with measurements. The reference signal \(\hat{V }_{\mathrm{ref}}\) is initially raised from zero to \(1\,\mathrm{V}\). Due to a prefilter with a time constant of \(43\,\upmu\ \mathrm{s}\), the reference signal is raised not stepwise, but smoothly. The simulation model includes not only the amplitude feedback, but also a resonance frequency feedback to ensure that the cavity is in resonance. At the beginning of the simulation, the resonance frequency feedback has to settle and has a strong coupling with \(\hat{V }_{\mathrm{gap}}\). At \(t \approx 3\,\mathrm{ms}\), both feedback loops have reached their equilibrium.

Fig. 7.12

Performance of the amplitude feedback

The amplitude feedback is excited at \(t = 3.5\,\mathrm{ms}\) with a stepwise disturbance of the measurement \(\hat{V }_{\mathrm{gap,det}}\). The dynamic response of the simulation model is compared to the response of the linear closed loop Hry(s) with Td = 0 (Fig. 7.12, bottom left). This shows that the transfer function Hry(s) describes the behavior very well for small deviations from equilibrium. From the simulation results, a rise time of \(73\,\upmu\ \mathrm{s}\), a 5% settling time of \(103\,\upmu\ \mathrm{s}\), and a steady-state error of 6.4% are obtained.
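
The time-domain figures of merit can be extracted from any simulated step response in a few lines. The following sketch uses an assumed second-order stand-in instead of the actual closed-loop transfer function Hry(s); only the evaluation of rise time, settling time, and steady-state error is of interest here.

```python
import numpy as np
from scipy import signal

# Extraction of the time-domain figures of merit from a simulated step
# response.  The transfer function below is an assumed stand-in (static gain
# 0.94, double pole at 2e4 1/s), not H_ry(s) of the amplitude loop.
sys = signal.TransferFunction([0.94], [2.5e-9, 1.0e-4, 1.0])
t = np.linspace(0.0, 2e-3, 20001)
t, y = signal.step(sys, T=t)

y_inf = y[-1]                                    # final value of the response
t10 = t[np.argmax(y >= 0.1 * y_inf)]             # 10 % crossing
t90 = t[np.argmax(y >= 0.9 * y_inf)]             # 90 % crossing
outside = np.abs(y - y_inf) > 0.05 * abs(y_inf)  # outside the 5 % band
t_settle = t[np.where(outside)[0][-1]]           # last time outside the band
print(f"rise time {1e6 * (t90 - t10):.0f} us, "
      f"5 % settling time {1e6 * t_settle:.0f} us, "
      f"steady-state error {100 * (1.0 - y_inf):.1f} %")
```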

At \(t = 4.5\,\mathrm{ms}\), the cavity is detuned, so that the gap voltage drops by about \(0.5\,\mathrm{kV}\). This time, the simulation model shows a different behavior due to the interaction of the resonance frequency feedback with the amplitude feedback. This demonstrates that nested control loops are dynamically coupled in general. If the coupling is strong, it is necessary to take this into account during the analysis and design of the feedback. Nested control loops can be described by MIMO or multivariable control systems [16].

In addition to the mentioned parameters, there also exist specifications in the frequency domain:

  • Resonant peak: the maximum of the closed-loop frequency response | Hry(j ω) | indicates relative stability and is recommended to be between 1.1 and 1.5 [1].

  • Bandwidth: the frequency at which | Hry(j ω) | has decreased by \(3\,\mathrm{dB}\) with respect to the zero-frequency value.

  • Cutoff rate: the slope of | Hry | at high frequencies.

  • Amplitude margin and phase margin (cf. Sect. 7.4.3): an AM larger than \(6\,\mathrm{dB}\) and a PM between 30° and 60° are regarded as a good tradeoff between robustness and performance [1].

In our example, the bandwidth of Hry(j ω) equals \(30.3 \cdot 10^{3}\,\mathrm{s^{-1}}\) (which corresponds to \(\Delta f = 4831\;\mathrm{Hz}\)), and the cutoff rate is \(-20\,\mathrm{dB/decade}\).

7.5.3 PID Control

A general proper PID control algorithm is given by

$$\displaystyle{ H_{\mathrm{c}}(s) = \frac{U(s)} {X_{\mathrm{e}}(s)} = K_{\mathrm{P}} + K_{\mathrm{I}}\frac{1} {s} + K_{\mathrm{D}} \frac{s} {T_{\mathrm{D}}s + 1}; }$$

it is a combination of a proportional, an integral, and a derivative controller. The transfer function can also be written as

$$\displaystyle{ H_{\mathrm{c}}(s) = \frac{(K_{\mathrm{P}}T_{\mathrm{D}} + K_{\mathrm{D}})s^{2} + (K_{\mathrm{P}} + K_{\mathrm{I}}T_{\mathrm{D}})s + K_{\mathrm{I}}} {s(T_{\mathrm{D}}s + 1)}; }$$
(7.23)

it has two zeros and two poles. A pure derivative is obtained for TD = 0. However, this leads to an improper transfer function. In the time domain, the controller is described by the differential equation

$$\displaystyle{ T_{\mathrm{D}}\dot{u}(t) + u(t) = (K_{\mathrm{P}}T_{\mathrm{D}} + K_{\mathrm{D}})\dot{x}_{e}(t) + (K_{\mathrm{P}} + K_{\mathrm{I}}T_{\mathrm{D}})x_{\mathrm{e}}(t) + K_{\mathrm{I}}\int _{0}^{t}x_{\mathrm{ e}}(\tau )\ \mathrm{d}\tau. }$$

In steady state, the control error xe must be zero due to the integration.
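
For a quasicontinuous realization with a sufficiently small sampling time Ts, the proper PID law can be implemented in discrete time, for example with an explicit Euler discretization. The following minimal sketch illustrates this; the gains and the sampling time in the usage example are placeholders, not the values of the amplitude feedback.

```python
# Minimal quasicontinuous (discrete-time) sketch of the proper PID law with a
# filtered derivative, obtained by an explicit Euler discretization with the
# sampling time Ts.  The gains and Ts in the usage example are placeholders.
class ProperPID:
    def __init__(self, K_P, K_I, K_D, T_D, Ts):
        self.K_P, self.K_I, self.K_D, self.T_D, self.Ts = K_P, K_I, K_D, T_D, Ts
        self.integral = 0.0       # running integral of the control error
        self.d_state = 0.0        # state d(t) of the filter T_D*d' + d = K_D*x_e'
        self.prev_error = 0.0

    def update(self, x_e):
        self.integral += self.Ts * x_e
        x_e_dot = (x_e - self.prev_error) / self.Ts
        self.d_state += self.Ts / self.T_D * (self.K_D * x_e_dot - self.d_state)
        self.prev_error = x_e
        return self.K_P * x_e + self.K_I * self.integral + self.d_state

# usage with placeholder values:
controller = ProperPID(K_P=1.8, K_I=2.5e5, K_D=1e-7, T_D=1e-6, Ts=1e-6)
u = controller.update(x_e=0.1)
```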

The controller of the amplitude feedback example is of PDT1 type. This can be shown as follows. A general PDT1 controller can be written as

$$\displaystyle{ H_{\mathrm{c}}(s) = K_{\mathrm{P}} + \frac{K_{\mathrm{D}}s} {T_{\mathrm{D}}s + 1} = K_{\mathrm{P}}\frac{s\left (T_{\mathrm{D}} + \frac{K_{\mathrm{D}}} {K_{\mathrm{P}}} \right ) + 1} {s\ T_{\mathrm{D}} + 1}. }$$

With

$$\displaystyle{ K_{c} = K_{\mathrm{P}},\quad T_{c1} = T_{\mathrm{D}} + \frac{K_{\mathrm{D}}} {K_{\mathrm{P}}},\quad T_{c2} = T_{\mathrm{D}}, }$$

we obtain the amplitude controller that is shown in Fig. 7.4.

To design a general PID controller, it is necessary to determine the four degrees of freedom KP, KD, KI, and TD so that the specifications are met. If the open-loop system is stable, the two zeros of Hc(s) may be used to compensate open-loop poles. The time constant TD should not be chosen too small, because that would amplify high-frequency noise.

Several so-called tuning rules exist for the design of PI and PID controllers [5]. A simple tuning rule is described in [16] that is based on the approximation of the process transfer function with a first-order model

$$\displaystyle{ H_{\mathrm{approx}}(s) = \frac{K} {T \cdot s + 1}\ e^{-T_{\mathrm{d}}\cdot s}, }$$

with the gain K, the time constant T, and a time delay Td. For a PI controller, the tuning rule is (cf. [16, p. 57])

$$\displaystyle{ K_{\mathrm{P}} = \frac{1} {K} \frac{T} {T_{\mathrm{tune}} + T_{\mathrm{d}}},\qquad K_{\mathrm{I}} = \frac{K_{\mathrm{P}}} {\min \left \{T,4(T_{\mathrm{tune}} + T_{\mathrm{d}})\right \}}, }$$

with a single tuning parameter Ttune. A small value of this parameter will lead to fast output performance, whereas a large value implies a high robustness and smaller values of the input. A typical tradeoff is the choice Ttune = Td.

This tuning rule can be applied to the amplitude feedback loop example. From Fig. 7.4, the open-loop transfer function

$$\displaystyle{ \frac{\Delta \hat{V }_{\mathrm{gap,det}}(s)} {\Delta \hat{V }_{\mathrm{c}}(s)} = \frac{K_{\mathrm{mod}}G_{\mathrm{Vgain}}K_{\mathrm{Vgain}}R_{\mathrm{p}}K_{\mathrm{cd}}} {(s\ T_{\mathrm{cav}} + 1)(s\ T_{\mathrm{det}} + 1)} }$$

is obtained. For this type of transfer function, the following first-order approximation may be used; cf. [16, p. 58]:

$$\displaystyle\begin{array}{rcl} K& =& K_{\mathrm{mod}}G_{\mathrm{Vgain}}K_{\mathrm{Vgain}}R_{\mathrm{p}}K_{\mathrm{cd}} = 0.9877,\qquad T = T_{\mathrm{det}} + \frac{1} {2}T_{\mathrm{cav}} = 7\,\upmu\ \mathrm{s}, {}\\ T_{\mathrm{d}}& =& \frac{1} {2}T_{\mathrm{cav}} = 2\,\upmu\ \mathrm{s}. {}\\ \end{array}$$

With Ttune = Td as the choice of the tuning parameter, the coefficients of the resulting PI controller are

$$\displaystyle{ K_{\mathrm{P}} = 1.7718,\qquad K_{\mathrm{I}} = 2.5312 \cdot 10^{5}\,\mathrm{\frac{1} {s} }. }$$
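
These values follow directly from the tuning rule, as the short calculation below confirms (parameter values as given above):

```python
# Numerical check of the PI tuning rule with the first-order approximation of
# the amplitude loop (parameter values as given in the text).
K      = 0.9877   # static gain
T      = 7e-6     # time constant in s
T_d    = 2e-6     # approximated time delay in s
T_tune = T_d      # chosen tradeoff between performance and robustness

K_P = (1.0 / K) * T / (T_tune + T_d)
K_I = K_P / min(T, 4 * (T_tune + T_d))
print(f"K_P = {K_P:.4f},  K_I = {K_I:.4e} 1/s")   # approx. 1.7718 and 2.531e5 1/s
```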

The settling time of the linear amplitude feedback with this controller is \(16.4\,\upmu\ \mathrm{s}\) for a 5% interval around the set point. This is considerably faster than with the PDT1 controller. Furthermore, the PI controller leads to a zero steady-state error. Note, however, that for the design in this section, we have neglected any interaction of the amplitude loop with the resonance frequency feedback loop.

For the practical implementation of a PID controller, some issues should be taken into account. If the process is stable, it is often sufficient to use a PI controller. Derivative action, i.e., KD ≠ 0, will lead to an increased sensitivity with respect to measurement noise. If the reference signal yr(t) contains steps and a derivative action is needed, it is usually better to use the measured output ym(t) as input of the derivative part of the controller instead of the control error xe(t); cf. [16, p. 56] and [5, p. 317]. One challenge for the integral action is the so-called integrator windup [5], a nonlinear effect.

We can illustrate this effect by means of Fig. 7.3. We assume that the controller has integral action and generates a value that exceeds the constraints of the subsequent saturation function. In this case, the output of the feedback will be a constant value as long as the saturation function is active. This may be interpreted as a feedback loop that is no longer closed, because the output of the controller does not depend on the control error. The integral controller will, however, continue to integrate the control error, and this may result in a poor overall feedback performance. Measures that prevent windup are known as antiwindup.
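
A simple anti-windup measure is conditional integration: the integrator is updated only while the actuator is not saturated or while the control error drives the output back into the admissible range. The following sketch illustrates this for a PI controller; the saturation limits, gains, and sampling time are placeholders.

```python
# Minimal anti-windup sketch (conditional integration) for a PI controller.
# The saturation models the actuator constraint of Fig. 7.3; gains, limits,
# and the sampling time Ts are placeholders.
def pi_step_with_antiwindup(x_e, integral, K_P, K_I, Ts, u_min, u_max):
    """One sampling step; returns the saturated output and the updated integral."""
    u_unsat = K_P * x_e + K_I * integral
    u = min(max(u_unsat, u_min), u_max)        # actuator saturation
    sat_high = u_unsat > u_max
    sat_low = u_unsat < u_min
    # integrate only while the loop is effectively closed, i.e., while the
    # actuator is not saturated or the error drives the output back into range
    if (not sat_high and not sat_low) or (sat_high and x_e < 0) or (sat_low and x_e > 0):
        integral += Ts * x_e
    return u, integral
```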

7.5.4 Stability Issues for Nonlinear Systems

As described in Sect. 7.1.3, almost every practical feedback system is, in fact, a nonlinear system

$$\displaystyle{ \frac{\mathrm{d}\vec{x}(t)} {\mathrm{d}t} =\vec{ v}_{1}(\vec{x}(t),\vec{u}(t)), }$$
(7.24a)
$$\displaystyle{ \vec{y}_{\mathrm{m}}(t) =\vec{ v}_{2}(\vec{x}(t)), }$$
(7.24b)

where \(\vec{y}_{\mathrm{m}}\) is the output vector with the measured quantities of the process. A common approach is to calculate the linearization

$$\displaystyle{ \frac{\mathrm{d}\Delta \vec{x}(t)} {\mathrm{d}t} = A \cdot \Delta \vec{x}(t) + B \cdot \Delta \vec{u}(t), }$$
(7.25a)
$$\displaystyle{ \Delta \vec{y}_{\mathrm{m}}(t) = C \cdot \Delta \vec{x}(t) }$$
(7.25b)

of the system for a certain equilibrium and to use it for the analysis or design of a linear controller so that the closed-loop behavior is stable. This approach has also been chosen in the previous sections. An important question that now arises is whether the linear controller will also be able to stabilize the nonlinear system. The stability theory of Lyapunov that was described in Sect. 2.8.5 is useful to obtain some conclusions concerning this question. In order to use the theory of Lyapunov, it is necessary to analyze the feedback loop in the time domain, because the frequency domain approach is in general not applicable to nonlinear systems.

Consider first a very general linear controller in state-space representation

$$\displaystyle{ \frac{\mathrm{d}\vec{x}_{\mathrm{c}}(t)} {\mathrm{d}t} = A_{\mathrm{c}} \cdot \vec{ x}_{\mathrm{c}}(t) + B_{\mathrm{c}} \cdot \vec{ x}_{\mathrm{e}}(t), }$$
(7.26a)
$$\displaystyle{ \vec{u}_{\mathrm{c}}(t) = C_{\mathrm{c}} \cdot \vec{ x}_{\mathrm{c}}(t) + D_{\mathrm{c}} \cdot \vec{ x}_{\mathrm{e}}(t), }$$
(7.26b)

where \(\vec{x}_{\mathrm{e}} = \Delta \vec{y}_{\mathrm{r}} - \Delta \vec{y}_{\mathrm{m}}\) denotes the vector with measured control errors, \(\vec{u}_{\mathrm{c}}\) is the actuator value that can be used as input to the process (i.e., \(\Delta \vec{u} =\vec{ u}_{\mathrm{c}}\)), and \(\vec{x}_{\mathrm{c}}\) contains the internal states of the controller. This type of controller is also known as a dynamic output feedback, because the controller has a dynamic structure and it uses the output vector \(\vec{y}_{\mathrm{m}}\) as the only information about the process. This type of controller also contains the PID controller as a special case: rewriting the transfer function (7.23) as the sum of a constant and a strictly proper rational function leads to

$$\displaystyle{ H_{\mathrm{c}}(s) = \frac{U_{\mathrm{c}}(s)} {X_{\mathrm{e}}(s)} = \left (K_{\mathrm{P}} + \frac{K_{\mathrm{D}}} {T_{\mathrm{D}}} \right ) + \frac{\left (K_{\mathrm{I}} - \frac{K_{\mathrm{D}}} {T_{\mathrm{D}}^{2}} \right )s + \frac{K_{\mathrm{I}}} {T_{\mathrm{D}}} } {s^{2} + \frac{1} {T_{\mathrm{D}}} s}. }$$

Using the results of Sect. 7.1.2 and taking the additional direct feedthrough into account leads to the following state-space representation of the controller:

$$\displaystyle\begin{array}{rcl} \frac{\mathrm{d}\vec{x}_{\mathrm{c}}(t)} {\mathrm{d}t} & =& \left [\begin{array}{*{10}c} 0& 1 \\ 0&- \frac{1} {T_{\mathrm{D}}} \end{array} \right ] \cdot \vec{ x}_{\mathrm{c}}(t) + \left [\begin{array}{*{10}c} 0\\ 1 \end{array} \right ] \cdot x_{\mathrm{e}}(t), {}\\ u_{\mathrm{c}}(t)& =& \left [\begin{array}{*{10}c} \frac{K_{\mathrm{I}}} {T_{\mathrm{D}}} & \left (K_{\mathrm{I}} - \frac{K_{\mathrm{D}}} {T_{\mathrm{D}}^{2}} \right ) \end{array} \right ] \cdot \vec{ x}_{\mathrm{c}}(t) + \left (K_{\mathrm{P}} + \frac{K_{\mathrm{D}}} {T_{\mathrm{D}}} \right ) \cdot x_{\mathrm{e}}(t). {}\\ \end{array}$$

This is a dynamic output feedback. Note that the case of a pure derivative controller (TD = 0) is not included in this representation.Footnote 8 Due to Eq. (7.26), the transfer function of the controller can be obtained by

$$\displaystyle{ H_{\mathrm{c}}(s) = C_{\mathrm{c}} \cdot \left (sI - A_{\mathrm{c}}\right )^{-1} \cdot B_{\mathrm{ c}} + D_{\mathrm{c}}. }$$
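
The state-space representation of the PID controller given above can be checked against this formula symbolically; the following SymPy sketch verifies that both descriptions yield the same transfer function:

```python
import sympy as sp

# Symbolic check that the state-space matrices of the PID controller reproduce
# H_c(s) = K_P + K_I/s + K_D*s/(T_D*s + 1).
s, K_P, K_I, K_D, T_D = sp.symbols('s K_P K_I K_D T_D', positive=True)

A_c = sp.Matrix([[0, 1], [0, -1 / T_D]])
B_c = sp.Matrix([0, 1])
C_c = sp.Matrix([[K_I / T_D, K_I - K_D / T_D**2]])
D_c = sp.Matrix([[K_P + K_D / T_D]])

H_c = (C_c * (s * sp.eye(2) - A_c).inv() * B_c + D_c)[0, 0]
print(sp.simplify(H_c - (K_P + K_I / s + K_D * s / (T_D * s + 1))))   # -> 0
```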

Connecting the controller (7.26) with system (7.25) (i.e., by \(\Delta \vec{u} =\vec{ u}_{\mathrm{c}}\)) leads directly to the following dynamics of the closed loop:

$$\displaystyle{ \frac{\mathrm{d}} {\mathrm{d}t}\left [\begin{array}{*{10}c} \Delta \vec{x}(t)\\ \vec{x}_{\mathrm{ c}}(t) \end{array} \right ] =\mathop{\underbrace{ \left [\begin{array}{*{10}c} A - B \cdot D_{\mathrm{c}} \cdot C &B \cdot C_{\mathrm{c}} \\ -B_{\mathrm{c}} \cdot C & A_{\mathrm{c}} \end{array} \right ]}}\limits _{A_{\mathrm{cl}}}\cdot \left [\begin{array}{*{10}c} \Delta \vec{x}(t)\\ \vec{x}_{\mathrm{ c}}(t) \end{array} \right ]+\left [\begin{array}{*{10}c} B \cdot D_{\mathrm{c}} \\ B_{\mathrm{c}} \end{array} \right ]\cdot \Delta \vec{y}_{\mathrm{r}}(t).\quad }$$
(7.27)

We assume that the controller is designed properly, so that the closed-loop dynamics are stable. According to the results of Sect. 7.1.5, this is the case if Acl has only eigenvalues with negative real parts.
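
In practice, this check amounts to computing the eigenvalues of Acl. The sketch below assembles Acl according to Eq. (7.27) for an assumed low-order plant and a PI-type dynamic output feedback; all numerical values are placeholders.

```python
import numpy as np

# Closed-loop stability check via the eigenvalues of A_cl in Eq. (7.27).
# The plant (A, B, C) and the controller matrices below are placeholders for a
# second-order process with a PI-type dynamic output feedback.
A  = np.array([[-1.0e4, 0.0], [1.0, -2.0e4]])
B  = np.array([[1.0e4], [0.0]])
C  = np.array([[0.0, 2.0e4]])
A_c = np.array([[0.0]])            # integrator state of the controller
B_c = np.array([[1.0]])
C_c = np.array([[5.0e3]])
D_c = np.array([[1.0]])

A_cl = np.block([[A - B @ D_c @ C, B @ C_c],
                 [-B_c @ C,        A_c    ]])
eigenvalues = np.linalg.eigvals(A_cl)
print("eigenvalues:", eigenvalues)
print("closed loop stable:", bool(np.all(eigenvalues.real < 0)))
```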

After the controller design, the controller will be connected to the real nonlinear process. One possible choice for the input of the nonlinear system (7.24) is then

$$\displaystyle{ \vec{u}(t) =\vec{ u}_{\mathrm{F}} +\vec{ u}_{\mathrm{c}}(t) =\vec{ u}_{\mathrm{F}} + C_{\mathrm{c}} \cdot \vec{ x}_{\mathrm{c}}(t) + D_{\mathrm{c}} \cdot \vec{ x}_{\mathrm{e}}(t), }$$

where \(\vec{u}_{\mathrm{F}}\) is a feedforward value that equals the input value at the equilibrium point \(\vec{x} =\vec{ x}_{\mathrm{F}}\). In other words,

$$\displaystyle{ \vec{v}_{1}(\vec{x}_{\mathrm{F}},\vec{u}_{\mathrm{F}}) = 0 }$$

is assumed, and the controller has only to correct deviations from the equilibrium. The control error is now given by

$$\displaystyle{ \vec{x}_{\mathrm{e}} =\vec{ y}_{\mathrm{r}} -\vec{ y}_{\mathrm{m}} =\vec{ y}_{\mathrm{r}} -\vec{ v}_{2}(\vec{x}). }$$

These choices of the closed-loop connection lead to the following dynamics:

$$\displaystyle{ \frac{\mathrm{d}} {\mathrm{d}t}\left [\begin{array}{*{10}c} \vec{x}(t)\\ \vec{x}_{\mathrm{ c}}(t) \end{array} \right ] = \left [\begin{array}{*{10}c} \vec{v}_{1}(\vec{x}(t),\vec{u}(t)) \\ A_{\mathrm{c}} \cdot \vec{ x}_{\mathrm{c}}(t) + B_{\mathrm{c}} \cdot (\vec{y}_{\mathrm{r}}(t) -\vec{ v}_{2}(\vec{x}(t))) \end{array} \right ]. }$$
(7.28)

A linearization around \(\vec{x} =\vec{ x}_{\mathrm{F}}\), \(\vec{u} =\vec{ u}_{\mathrm{F}}\), \(\vec{x}_{\mathrm{c}} = 0\), and \(\vec{y}_{\mathrm{r}} =\vec{ v}_{2}(\vec{x}_{\mathrm{F}})\) leads to the same linear dynamics as Eq. (7.27). This is reasonable, because it means that the same result is obtained either by linearizing the nonlinear closed-loop dynamics or by using the linearization (7.25) of the open-loop system (7.24) to obtain the linear closed-loop model (7.27).

We already assumed that Eq. (7.27) is stable, and we can now use Theorem 2.18. For \(\Delta \vec{y}_{\mathrm{r}} = 0\) and the previous assumption of a strictly stable matrix Acl (the real parts of all eigenvalues are negative), the theorem can be applied to Eq. (7.28), and the consequence is a stable equilibrium of the nonlinear setup. This is an important motivation for using linear control design in many cases, even for systems that are practically nonlinear.

Note, however, that the linear system (7.27) is asymptotically stable in the global sense, i.e., for arbitrary initial values, whereas in general, the asymptotic stability of the nonlinear system (7.28) is given only in a local neighborhood around the equilibrium. This neighborhood, also called a region of attraction, may be so small that from a practical point of view, the equilibrium is in fact unstable. The size of the region of attraction can be estimated using Lyapunov functions as defined in Sect. 2.8.5.

A nonzero reference value \(\Delta \vec{y}_{\mathrm{r}}\neq 0\) acts as an external excitation. As long as it is small enough that the state remains within the region of attraction, the closed loop will stay stable.

If further disturbances act on the system (7.24) or the model is inaccurate, this may lead to a steady-state error. In most cases, an integral controller will help to avoid such an error. A pure integral controller can be written as

$$\displaystyle\begin{array}{rcl} \frac{\mathrm{d}} {\mathrm{d}t}x_{\mathrm{c}}(t)& =& x_{\mathrm{e}}(t), {}\\ u_{\mathrm{c}}(t)& =& K_{\mathrm{I}}x_{\mathrm{c}}(t). {}\\ \end{array}$$

Therefore, Ac = 0, Bc = 1, Cc = KI, and Dc = 0. The closed-loop dynamics for a SISO system are then

$$\displaystyle{ \frac{\mathrm{d}} {\mathrm{d}t}\left [\begin{array}{*{10}c} \vec{x}(t)\\ x_{\mathrm{ c}}(t) \end{array} \right ] = \left [\begin{array}{*{10}c} \vec{v}_{1}(\vec{x}(t),u_{\mathrm{F}} + K_{\mathrm{I}}\ x_{\mathrm{c}}(t)) \\ y_{\mathrm{r}} - v_{2}(\vec{x}(t)) \end{array} \right ], }$$

and from the bottom row, we have the equilibrium

$$\displaystyle{ \frac{\mathrm{d}} {\mathrm{d}t}x_{\mathrm{c}} = 0\quad \Rightarrow \quad y_{\mathrm{r}} = v_{2}(\vec{x}) = y_{\mathrm{m}}, }$$

and the steady-state error will therefore tend to zero for stepwise reference signals.
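
This behavior can be illustrated with a short simulation of a scalar nonlinear process under pure integral control; the process model, gain, and reference value below are assumed for illustration only.

```python
# Pure integral control of an assumed scalar nonlinear process
# dx/dt = -x - 0.2*x**3 + u with y_m = x, simulated with explicit Euler.
# The model, the gain K_I, and the reference y_r are placeholders; the point is
# that y_m converges to y_r despite the nonlinearity.
K_I, y_r, dt = 2.0, 1.5, 1e-3
x, x_c = 0.0, 0.0                        # process state and integrator state
for _ in range(20000):
    u = K_I * x_c                        # u_c = K_I * x_c (feedforward u_F = 0 assumed)
    x += dt * (-x - 0.2 * x**3 + u)      # nonlinear process
    x_c += dt * (y_r - x)                # integrator: dx_c/dt = x_e = y_r - y_m
print(f"y_m = {x:.4f}   (reference y_r = {y_r})")
```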