Abstract
While the original classical parameter adaptive controllers do not handle noise or unmodelled dynamics well, redesigned versions have been proven to have some tolerance; however, exponential stabilization and a bounded gain on the noise are rarely proven. Here we consider a classical pole placement adaptive controller using the original projection algorithm rather than the commonly modified version; we impose the assumption that the plant parameters lie in a convex, compact set, although some progress has been made toward weakening the convexity requirement. We demonstrate that the closed-loop system exhibits a very desirable property: there are linear-like convolution bounds on the closed-loop behaviour, which confer exponential stability and a bounded noise gain, and which can be leveraged to prove tolerance to unmodelled dynamics and plant parameter variation. We emphasize that there is no persistent excitation requirement of any sort; the improved performance arises from the vigilant nature of the parameter estimator.
Notes
Since the closed-loop system is nonlinear, a bounded-noise bounded-state property does not automatically imply a bounded gain on the noise.
It is common to make this more general by letting \(\alpha \) be time-varying.
Here \(\beta >0\) is fixed, so this is equivalent to \({\varepsilon }\) being small enough.
We also implicitly use a pole placement procedure to obtain the controller parameters from the plant parameter estimates; this entails solving a linear equation.
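For concreteness, the linear equation in question is the pole placement Diophantine equation \(A L + B P = A^*\) for the controller polynomials (cf. the \({\hat{L}}\), \({\hat{P}}\) and \(A^*\) of the appendix), which can be solved via the associated Sylvester matrix. Below is a minimal sketch (our own illustration; the function name and coefficient conventions are ours, not the paper's), assuming a SISO plant with monic \(A\) of degree \(n\) and \(B\) of degree at most \(n-1\), all coefficients listed in ascending powers of the delay operator:

```python
import numpy as np

def solve_diophantine(a, b, a_star):
    """Solve A*L + B*P = A_star for L and P via the Sylvester matrix.

    a: coefficients of monic A (length n+1); b: coefficients of B
    (length n, i.e. degree <= n-1); a_star: desired closed-loop
    polynomial (length 2n, i.e. degree <= 2n-1).  A unique solution
    with deg L <= n-1, deg P <= n-1 exists when A and B are coprime.
    """
    n = len(a) - 1
    S = np.zeros((2 * n, 2 * n))
    for j in range(n):
        S[j:j + n + 1, j] = a      # column j: A shifted down by j
        S[j:j + n, n + j] = b      # column n+j: B shifted down by j
    x = np.linalg.solve(S, a_star)
    return x[:n], x[n:]            # coefficients of L and P
```

For example, with \(A = 1 - 1.5q^{-1} + 0.56q^{-2}\), \(B = q^{-1}\) and the deadbeat target \(A^* = 1\), the routine returns \(L = 1\) and \(P = 1.5 - 0.56 q^{-1}\).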
In addition, if we define \({\hat{\theta }} (t)\) and \({\hat{\theta }}^{\gamma } (t)\) in the natural way, then it is easy to prove that for \(\gamma \ne 0\) we have
$$\begin{aligned} {\hat{\theta }}^{\gamma } (t) = {\hat{\theta }} (t) , \; t \ge t_0 . \end{aligned}$$
Furthermore, in [37] it is assumed that \(\alpha _i \) and \(\beta _i \) are strictly greater than zero, but it is trivial to extend this to allow for zero as well.
The choice of N and the value of the switching signal \(\sigma (t)\) play no role.
References
Feuer A, Morse AS (1978) Adaptive control of single-input, single-output linear systems. IEEE Trans Autom Control 23(4):557–569
Morse AS (1980) Global stability of parameter-adaptive control systems. IEEE Trans Autom Control 25:433–439
Goodwin GC, Ramadge PJ, Caines PE (1980) Discrete time multivariable control. IEEE Trans Autom Control 25:449–456
Narendra KS, Lin YH (1980) Stable discrete adaptive control. IEEE Trans Autom Control 25(3):456–461
Narendra KS, Lin YH, Valavani LS (1980) Stable adaptive controller design, Part II: proof of stability. IEEE Trans Autom Control 25:440–448
Rohrs CE et al (1985) Robustness of continuous-time adaptive control algorithms in the presence of unmodelled dynamics. IEEE Trans Autom Control 30:881–889
Middleton RH, Goodwin GC (1988) Adaptive control of time-varying linear systems. IEEE Trans Autom Control 33(2):150–155
Middleton RH et al (1988) Design issues in adaptive control. IEEE Trans Autom Control 33(1):50–58
Tsakalis KS, Ioannou PA (1989) Adaptive control of linear time-varying plants: a new model reference controller structure. IEEE Trans Autom Control 34(10):1038–1046
Kreisselmeier G, Anderson BDO (1986) Robust model reference adaptive control. IEEE Trans Autom Control 31:127–133
Ioannou PA, Tsakalis KS (1986) A robust direct adaptive controller. IEEE Trans Autom Control 31(11):1033–1043
Ydstie BE (1989) Stability of discrete-time MRAC revisited. Syst Control Lett 13:429–439
Ydstie BE (1992) Transient performance and robustness of direct adaptive control. IEEE Trans Autom Control 37(8):1091–1105
Naik SM, Kumar PR, Ydstie BE (1992) Robust continuous-time adaptive control by parameter projection. IEEE Trans Autom Control 37:182–197
Wen C, Hill DJ (1992) Global boundedness of discrete-time adaptive control using parameter projection. Automatica 28(2):1143–1158
Wen C (1994) A robust adaptive controller with minimal modifications for discrete time-varying systems. IEEE Trans Autom Control 39(5):987–991
Li Y, Chen H-F (1996) Robust adaptive pole placement for linear time-varying systems. IEEE Trans Autom Control 41:714–719
Narendra KS, Annaswamy AM (1987) A new adaptive law for robust adaptation without persistent excitation. IEEE Trans Autom Control 32:134–145
Fu M, Barmish BR (1986) Adaptive stabilization of linear systems via switching control. IEEE Trans Autom Control AC–31:1097–1103
Miller DE, Davison EJ (1989) An adaptive controller which provides Lyapunov stability. IEEE Trans Autom Control 34:599–609
Morse AS (1996) Supervisory control of families of linear set-point controllers–Part 1: exact matching. IEEE Trans Autom Control 41:1413–1431
Morse AS (1997) Supervisory control of families of linear set-point controllers–Part 2: robustness. IEEE Trans Autom Control 42:1500–1515
Hespanha JP, Liberzon D, Morse AS (2003) Hysteresis-based switching algorithms for supervisory control of uncertain systems. Automatica 39:263–272
Vu L, Chatterjee D, Liberzon D (2007) Input-to-state stability of switched systems and switching adaptive control. Automatica 43:639–646
Hespanha JP, Liberzon D, Morse AS (2003) Overcoming the limitations of adaptive control by means of logic-based switching. Syst Control Lett 49(1):49–65
Morse AS (1998) A bound for the disturbance to tracking error gain of a supervised set-point control system. In: Normand-Cyrot D (ed) Perspectives in control. Springer, London
Vu L, Liberzon D (2011) Supervisory control of uncertain linear time-varying systems. IEEE Trans Autom Control 56(1):27–42
Zhivoglyadov PV, Middleton RH, Fu M (2001) Further results on localization-based switching adaptive control. Automatica 37:257–263
Miller DE (2003) A new approach to model reference adaptive control. IEEE Trans Autom Control 48:743–757
Miller DE (2006) Near optimal LQR performance for a compact set of plants. IEEE Trans Autom Control 51:1423–1439
Vale JR, Miller DE (2011) Step tracking in the presence of persistent plant changes. IEEE Trans Autom Control 56:43–58
Miller DE (2017) A parameter adaptive controller which provides exponential stability: the first order case. Syst Control Lett 103:23–31
Miller DE (2017) Classical discrete-time adaptive control revisited: exponential stabilization. In: 1st IEEE conference on control technology and applications, pp 1975–1980. The submitted version is posted at arXiv:1705.01494
Goodwin GC, Sin KS (1984) Adaptive filtering prediction and control. Prentice Hall, Englewood Cliffs
Jerbi A, Kamen EW, Dorsey J (1993) Construction of a robust adaptive regulator for time-varying discrete-time systems. Int J Adapt Control Signal Process 7:1–12
Praly L (1984) Towards a globally stable direct adaptive control scheme for not necessarily minimum phase system. IEEE Trans Autom Control 29:946–949
Kreisselmeier G (1986) Adaptive control of a class of slowly time-varying plants. Syst Control Lett 8:97–103
Kreisselmeier G, Smith MC (1986) Stable adaptive regulation of arbitrary \(n^{th}\)-order plants. IEEE Trans Autom Control 31:299–305
Zhou K, Doyle JC, Glover K (1995) Robust and optimal control. Prentice Hall, New Jersey
Desoer CA (1970) Slowly varying discrete time system \(x_{t+1} = A_t x_t\). Electron Lett 6(11):339–340
Shahab MT, Miller DE (2018) Multi-estimator based adaptive control which provides exponential stability: the first-order case. In: 2018 IEEE 57th conference on decision and control (to appear)
This research was supported by a grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada.
11 Appendix
Proof of Proposition 1:
Since projection does not make the parameter estimate worse, it follows from (7) that
so the first inequality holds.
We now turn to the energy analysis. We first define \(\tilde{{\check{\theta }}} (t):= {\check{\theta }} (t) - \theta ^*\) and \({\check{V}} (t) := \tilde{{\check{\theta }}}(t)^T \tilde{{\check{\theta }}}(t) \). Next, we subtract \(\theta ^*\) from each side of (7), yielding
Then
Now let us analyse the three terms on the RHS: the fact that \(W_1(t)^2= W_1(t)\) allows us to simplify the first term; the fact that \(W_1 (t) W_2 (t) = W_2 (t)\) means that the second term is zero; and the fact that \(W_2 (t)^T W_2 (t) = {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{1}{ \phi (t)^T \phi (t)}\) simplifies the third term. We end up with
Since projection never makes the estimate worse, it follows that
\(\square \)
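Both steps of the proof above lean on the same fact: projection onto a convex, compact set never increases the distance to any point of that set. This non-expansiveness property can be illustrated numerically; in the sketch below (our own, with a Euclidean ball standing in for the set \({{\mathcal {S}}}\); all names are ours) we verify that the projected point is never farther from a member of the set than the unprojected one:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, r=1.0):
    """Euclidean projection onto the ball ||x|| <= r (a convex, compact set)."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

# Non-expansiveness: ||Proj(x) - theta*|| <= ||x - theta*|| whenever
# theta* lies in the set -- i.e. projection never makes the estimate worse.
for _ in range(1000):
    x = rng.normal(size=3) * 5.0
    theta_star = proj_ball(rng.normal(size=3) * 5.0)  # a point of the set
    assert (np.linalg.norm(proj_ball(x) - theta_star)
            <= np.linalg.norm(x - theta_star) + 1e-12)
```

The same inequality, applied with \(\theta ^*\) the true parameter, is exactly what justifies the phrase "projection does not make the parameter estimate worse".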
Proof of Lemma 1:
Fix \(\delta \in ( 0 , \infty ]\) and \(\sigma \in ( {\underline{\lambda }} , 1) \). First of all, it is well known that the characteristic polynomial of \({\bar{A}} (t)\) is exactly \(z^{2n} A^* (z^{-1})\) for every \(t \ge t_0\). Furthermore, it is well known that the coefficients of \({\hat{L}} (t, z^{-1} )\) and \({\hat{P}} (t,z^{-1} )\) are the solution of a linear equation, and are analytic functions of \({\hat{\theta }} (t) \in {{\mathcal {S}}}\). Hence, there exists a constant \(\gamma _1\) so that, for every set of initial conditions, \(y^* \in {l_{\infty }}\) and \(d \in {l_{\infty }}\), we have \(\sup _{t \ge t_0} \Vert {\bar{A}} (t) \Vert \le \gamma _1\).
To prove the first bound, we invoke the argument of [40], which considered a more general time-varying situation but with more restrictions on \(\sigma \). By making a slight adjustment to the first part of the proof given there, we can show that, with \(\gamma _2 := \sigma \frac{ (\sigma + \gamma _1 )^{2n-1}}{ ( \sigma - {\underline{\lambda }} )^{2n}}\), we have \(\Vert {\bar{A}} (t) ^k \Vert \le \gamma _2 \sigma ^{k}\) for every \(t \ge t_0\) and \(k \ge 0 \), as desired.
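For a fixed matrix, the bound \(\Vert {\bar{A}}^k\Vert \le \gamma _2 \sigma ^k\) with this choice of \(\gamma _2\) can be spot-checked numerically. A small sketch (our own illustration; the specific companion matrix and the use of the spectral norm are assumptions for the example), with every eigenvalue of modulus at most \({\underline{\lambda }} < \sigma \):

```python
import numpy as np

lam, sigma = 0.4, 0.7
roots = [0.4, -0.3, 0.2 + 0.2j, 0.2 - 0.2j]     # all of modulus <= lam
p = np.poly(roots).real                          # monic characteristic polynomial
dim = len(roots)                                 # plays the role of 2n
A = np.diag(np.ones(dim - 1), -1)
A[0, :] = -p[1:]                                 # companion form
gamma1 = np.linalg.norm(A, 2)                    # a bound on ||A||
gamma2 = sigma * (sigma + gamma1) ** (dim - 1) / (sigma - lam) ** dim
# worst ratio of ||A^k|| against the claimed envelope gamma2 * sigma^k
worst = max(np.linalg.norm(np.linalg.matrix_power(A, k), 2) / (gamma2 * sigma ** k)
            for k in range(60))
```

Here `worst` stays below one, consistent with the resolvent-type argument behind \(\gamma _2\): the envelope absorbs the transient growth of \(\Vert {\bar{A}}^k\Vert \) before the geometric decay at rate \({\underline{\lambda }}\) takes over.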
Now we turn to the second bound. From Proposition 1 and the Cauchy–Schwarz inequality, we obtain
Now notice that
The fact that the coefficients of \({\hat{L}} (t, z^{-1} )\) and \({\hat{P}} (t,z^{-1} )\) are analytic functions of \({\hat{\theta }} (t) \in {{\mathcal {S}}}\) means that there exists a constant \(\gamma _3 \ge 1\) so that
so we conclude that the second bound holds as well. \(\square \)
In order to prove Theorem 4, we need some preliminary results. The first step is to extend Proposition 1 to the case when \(\theta ^*\) may not lie in \({{\mathcal {S}}}_{i}\).
Proof
Since projection does not make the parameter estimate worse, it follows from (48) and (49) that when \(\phi (t)\ne 0\),
and when \(\phi (t)=0\),
The result follows by iteration. \(\square \)
The next result produces a crude bound on the closed-loop behaviour.
Proposition 4
Consider the plant (1) and suppose that the controller consisting of the estimator (48), (49) and the control law (54) is applied. Then for every \(p\ge 0\), there exists a constant \({{\bar{c}}}\ge 1\) such that for every \(t_0\in \mathbf{Z}\), \(t\ge t_0\), \(\phi _0\in \mathbf{R}^{2n}\), \({\theta ^{*}\in {{\mathcal {S}}}}\), \({\hat{\theta _i}} (t_0)\in {{\mathcal {S}}}_{i}\) (\(i=1,2\)) and \(y^*,d \in {\varvec{\ell }}_{\infty }\):
Proof
Fix \(p\ge 0\). Let \(t_0\in \mathbf{Z}\), \(t\ge t_0\), \(\phi _0\in \mathbf{R}^{2n}\), \({\theta ^{*}\in {{\mathcal {S}}}}\), \({\hat{\theta _i}} (t_0)\in {{\mathcal {S}}}_{i}\) (\(i=1,2\)) and \(y^*,d \in {\varvec{\ell }}_{\infty }\) be arbitrary. From (1) we see that
From (54) and Assumption 2, we have that there exists a constant \(\gamma \) so that
From the definition of \(\Vert \phi (t+1)\Vert \), we have that
Combining these three bounds, we end up with
Solving iteratively, we have
Put \({{\bar{c}}}:={{\bar{a}}}^{p}\) to conclude the proof. \(\square \)
We now state a technical result which we used in [32] to analyse the first-order one-step-ahead adaptive control problem.
Proof of Theorem 4:
Fix \(\lambda \in (0,1)\) and \(N\ge 2n\). Let \(t_0\in \mathbf{Z}\), \({\phi _0 \in {\mathbf{R}}^{2n}}\), \(\sigma _{0}\in \{1,2\}\), \({\theta ^{*} \in {{\mathcal {S}}}}\), \({\hat{\theta _i}} (t_0)\in {{\mathcal {S}}}_{i}\) (\(i=1,2\)), and \(y^*,d \in \ell _{\infty }\) be arbitrary; as usual, we let \(i^*\) denote the smallest \(j\in \{1,2\}\) which satisfies \(\theta ^{*}\in {{\mathcal {S}}}_{j}\).
As mentioned at the beginning of Sect. 8, the proposed controller is based on the first-order one-step-ahead control setup of [41], although it is more complicated. The proof uses ideas similar to those of [41], but since our system is more complex, it should not be surprising that the argument differs significantly. Hence, before proceeding we provide a proof outline, using the definition of \({{\hat{t}}}_\ell \) given in Sect. 8.2:
1. first, we define a state-space equation describing \(\phi (t)\) which holds on intervals of the form \([{{\hat{t}}}_\ell ,{{\hat{t}}}_{\ell +1})\);
2. second, we analyse this equation, obtaining a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \) in terms of \(\Vert \phi ({{\hat{t}}}_{\ell })\Vert \) and the exogenous inputs;
3. third, we apply Lemma 2 and Proposition 4 to obtain a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \) in terms of \(\Vert \phi ({{\hat{t}}}_{\ell })\Vert \), i.e. we analyse two intervals at a time;
4. fourth, we analyse the associated difference inequality (relating \(\Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \) to \(\Vert \phi ({{\hat{t}}}_{\ell })\Vert \)) in a way similar (though not identical) to that used in [41].
Step 1: \(\underline{\hbox {Obtain a state-space model describing } \phi (t) \hbox { for } t\in [{{\hat{t}}}_{\ell },{{\hat{t}}}_{\ell +1}).}\)
By definition of the prediction error (47) and by the property of the switching signal (53) being constant on \([{{\hat{t}}}_{\ell },{{\hat{t}}}_{\ell +1})\), we have
From the control law (54) and the control gains (52), we have
We now derive a state-space equation for \(\phi (t)\) in much the same way as (13) was derived; we first define
then, in light of (61) and (62), the following holds:
notice the additional term \(\left[ {\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}(t)-{\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\right] \) on the right-hand side which (13) does not have.
Step 2: \(\underline{\hbox {Obtain a bound on } \Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \hbox { in terms of } \Vert \phi ({{\hat{t}}}_{\ell })\Vert .}\)
In (63), \({{\bar{A}}}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\in \mathbf{R}^{2n\times 2n}\) is a constant matrix with all eigenvalues equal to zero; since \(N\ge 2n\), clearly
So, solving (63) for \(\phi ({{\hat{t}}}_{\ell +1})\) yields
It follows from the compactness of \({{\mathcal {S}}}\) and the \({{\mathcal {S}}}_i\)'s that \(\left\| \left[ {{{\bar{A}}}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )}\right] ^j\right\| ,\; j=0,1,\ldots ,N-1\), is bounded above by a constant, which we label \(c_1\). Using this fact together with Proposition 3, which provides a bound on the difference between parameter estimates at two different points in time, we obtain
By definition of the prediction error, if \(\phi (j)=0\) then
and if \(\phi (j)\ne 0\), then
Incorporating this into the above inequality yields
Since \({{\hat{t}}}_{\ell +1}-{{\hat{t}}}_{\ell }=N\), it follows from Proposition 4 that there exists a constant \(c_2\) so that the following holds:
so, substituting (66) into (65) and using the definition of the performance signal \(J_{\sigma ({{\hat{t}}}_\ell )}(\cdot )\) given in (55) it follows that there exists a constant \(c_3\) so that
Step 3: \(\underline{\hbox {Apply Lemma}~2 \hbox { and Proposition}~4 \hbox { to obtain a bound on } \Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \hbox { in terms of }}\) \(\underline{\Vert \phi ({{\hat{t}}}_{\ell })\Vert .}\)
From Lemma 2 either
or
If (68) is true, then we can substitute this into (67) to obtain a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \) in terms of \(J_{i^*}({{\hat{t}}}_{\ell })\) and then apply Proposition 4 to get a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \) in terms of \(\Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \) and the exogenous inputs; it follows that there exists a constant \(c_4\) so that
On the other hand, if (69) is true, we can use (67) to get a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \) in terms of \(J_{i^*}({{\hat{t}}}_{\ell +1})\), and then apply Proposition 4 to get a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \) in terms of \(\Vert \phi ({{\hat{t}}}_{\ell })\Vert \); it follows that there exists a constant \(c_5\) so that
If we define \(\alpha ({{\hat{t}}}_\ell ):= \max \{J_{i^*}({{\hat{t}}}_\ell ),J_{i^*}({{\hat{t}}}_{\ell +1})\}\), then there exists a constant \(c_6\) so that (70) and (71) can be combined to yield
Step 4: Analyse the first-order difference inequality (72).
First, we change notation in (72) to facilitate analysis:
Next, we will analyse (73) to obtain a bound on the closed-loop behaviour; we consider two cases—one with noise and one without.
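In both cases, the engine of the analysis is the standard fact about first-order difference inequalities: if \(x_{j+1}\le a x_j + b\) with \(a\in (0,1)\) and \(b\ge 0\), then \(x_j \le a^j x_0 + \frac{b}{1-a}\), i.e. exponential decay of the initial-condition term plus a bounded noise-driven term. A generic numerical illustration (the constants here are ours, not the \(c_i\)'s above):

```python
def iterate(a, b, x0, steps):
    """Iterate x_{j+1} = a*x_j + b, the worst case of the
    difference inequality x_{j+1} <= a*x_j + b."""
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1] + b)
    return xs

a, b, x0 = 0.6, 0.5, 100.0      # a in (0,1): contraction; b: noise level
xs = iterate(a, b, x0, 50)
# closed form: x_j = a^j * x0 + b*(1 - a^j)/(1 - a) <= a^j * x0 + b/(1 - a),
# so the trajectory decays geometrically to the floor b/(1 - a)
```

With \(b=0\) (Case 1) the floor vanishes and only the exponential decay remains; with \(b>0\) (Case 2) the floor is the noise-driven term that ultimately yields the bounded gain on the noise.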
Case 1: \(d(t)=0\) for all \(t\ge t_0\).
From Proposition 1 and the definition of \(\alpha (\cdot )\), we have
If we use this bound in the second occurrence of \(\alpha ({{\hat{t}}}_{2j})\) in (73), we obtain
Since \(\lambda \in (0,1)\) and \(c_6\ge 1\), then it follows that \(\lambda _1:=\frac{\lambda ^{2N}}{c_6}\in (0,1)\). By Lemma 3(i) if we define \(c_9:=c_7^{\frac{c_7+1}{2}}(\frac{1}{\lambda _1})^{\frac{c_7}{\lambda _1^2}+1}\) and use the fact that \(\alpha ({{\hat{t}}}_{2j})\ge 0\), we see that
which, in turn, implies that
Solving (75) iteratively and using this bound, we obtain
Using Proposition 4 to obtain a bound on \(\phi (t)\) between \({{\hat{t}}}_{2j}\) and \({{\hat{t}}}_{2j+2}\), we conclude that there exists a constant \(c_{10}\) so that
Case 2: \(d(t)\ne 0\) for some \(t\ge t_0\).
We now analyse the case when there is noise entering the system. Here the analysis uses a similar (but not identical) approach to that of Case 2 in the proof of Theorem 1. Motivated by Case 1, in the following we will apply Lemma 3(ii) with a larger bound than in (74); define \( c_{11}:=8(1+N){\bar{\mathbf{{s}}}}^2\). As in Case 1, we set \(\lambda _{1}:=\frac{\lambda ^{2N}}{c_6}\) and define \(\nu := \left\lceil { \frac{ {\frac{c_{11}+1}{2}}\ln {(c_{11})} +(4{\frac{c_{11}}{\lambda _1^2}}+1)(\ln {(2)}-\ln {(\lambda _1)})}{\ln {(2)}}}\right\rceil .\)
We now partition the timeline into two parts: one in which the noise is small versus \(\phi \) and one where it is not. With \(\nu \) defined above, we define
clearly \(\{ j \in \mathbf{Z}: \;\; j \ge t_0 \} = S_{\mathrm{good}} \cup S_{\mathrm{bad}} \). We can then define a (possibly infinite) sequence of intervals of the form \([k_l,k_{l+1})\) which satisfy:
(i) \(k_{0}=t_0\) serves as the initial instant of the first interval;
(ii) \([k_l,k_{l+1})\) belongs either to \(S_{\mathrm{good}}\) or to \(S_{\mathrm{bad}}\); and
(iii) if \(k_{l+1}\ne \infty \) and \([k_l,k_{l+1})\) belongs to \(S_{\mathrm{good}}\), then \([k_{l+1},k_{l+2})\) belongs to \(S_{\mathrm{bad}}\), and vice versa.
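The interval construction in (i)–(iii) is simply the decomposition of the timeline into maximal runs on which membership in \(S_{\mathrm{good}}\) or \(S_{\mathrm{bad}}\) is constant. A minimal sketch of such a decomposition (function and variable names are ours):

```python
def partition_intervals(flags):
    """Split indices 0..len(flags)-1 into maximal runs of equal flag value.

    flags[j] is True when index j is 'good'; returns a list of
    (start, end, is_good) half-open intervals [start, end) that
    alternate, mirroring the S_good / S_bad decomposition.
    """
    intervals = []
    start = 0
    for j in range(1, len(flags)):
        if flags[j] != flags[start]:
            intervals.append((start, j, flags[start]))
            start = j
    if flags:
        intervals.append((start, len(flags), flags[start]))
    return intervals
```

By construction consecutive intervals carry opposite flags, which is exactly property (iii).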
Now we analyse the behaviour during each interval.
Sub-Case 2.1: \([k_l,k_{l+1})\) lies in \(S_{\mathrm{bad}}\).
Let \(j\in [k_l,k_{l+1})\) be arbitrary. In this case
in either case
Also, applying Proposition 4 for one step, there exists a constant \(c_{13}\) so that
Then for \(j\in [k_l,k_{l+1})\), we have
Sub-Case 2.2: \([k_l,k_{l+1})\) lies in \(S_{\mathrm{good}}\).
Let \(j\in [k_l,k_{l+1})\) be arbitrary. First, suppose that \(k_{l+1}-k_l\le 4N\). From Proposition 4 it can be easily proven that there exists a constant \(c_{14}\) so that
Now suppose that \(k_{l+1}-k_l>4N\). This means, in particular, that there exist \(j_1<j_2\) so that
To proceed, observe that \(\Vert \phi (j)\Vert \ne 0\) and
With \(0\le j_1<j_2\), it follows from Proposition 1 and the definition of \(\alpha (\cdot )\) that
using the bound given in (82) which holds on \([k_l,k_{l+1})\), this becomes
If \(j_2-j_1\le \nu \) then
so by Lemma 3(i), with \(\lambda _1\) defined above, if we define \(c_{15}:=c_{11}^{\frac{c_{11}+1}{2}}(\frac{2}{\lambda _1})^{\frac{4c_{11}}{\lambda _1^2}+1}\), then
If \(j_2-j_1>\nu \), then by Lemma 3(ii) and our choice of \(\nu \) we have that
as well.
Now we can proceed to solve (73). The first step is to use (85) to bound the second occurrence of \(\alpha ({{\hat{t}}}_{2j})\) in (73), yielding
If we solve this iteratively and use the bounds in (86) and (87), we see that
We can now use Proposition 4:
- to provide a bound on \(\Vert \phi (t)\Vert \) between consecutive \({{\hat{t}}}_{2j}\)'s;
- to provide a bound on \(\Vert \phi (t)\Vert \) on the beginning part of the interval \([k_l,k_{l+1})\) (until we get to the first admissible \({{\hat{t}}}_{2j}\)); and
- to provide a bound on \(\Vert \phi (t)\Vert \) on the last part of the interval \([k_l,k_{l+1})\) (after the last admissible \({{\hat{t}}}_{2j}\)).
We conclude that there exists a constant \(c_{17}\ge c_{14}\) so that
Now we combine Sub-Case 2.1 and Sub-Case 2.2 into a general bound on \(\phi \). The following analysis is almost identical to the one at the end of the proof of Theorem 1. Define \(c_{18}:=\max \{c_{17},c_{13}, c_{13}c_{17}\}\).
Claim
The following bound holds:
Proof of the Claim
If \([k_0,k_1)=[t_0,k_1)\subset S_{\mathrm{good}}\), then (91) is true for \(t\in [k_0,k_1]\) by (90). If \([k_0,k_1)\subset S_{\mathrm{bad}}\), then from (80) we obtain
which means that (91) holds on \([k_0,k_1]\) for this case as well.
We now use induction: suppose that (91) is true for \(t\in [k_0,k_l]\); we need to prove that it holds for \(t\in (k_l,k_{l+1}]\) as well. If \([k_l,k_{l+1}) \subset S_{\mathrm{bad}}\), then from (80) we see that
which means (91) holds on \((k_l,k_{l+1}]\). On the other hand, if \([k_l,k_{l+1}) \subset S_{\mathrm{good}}\), then \(k_{l}-1\in S_{\mathrm{bad}}\); from (80) we have that
Using (90) to analyse the behaviour on \([k_l,k_{l+1}]\), we have
which implies that (91) holds. \(\square \)
This concludes the proof. \(\square \)
Miller, D.E., Shahab, M.T. Classical pole placement adaptive control revisited: linear-like convolution bounds and exponential stability. Math. Control Signals Syst. 30, 19 (2018). https://doi.org/10.1007/s00498-018-0225-1