
Classical pole placement adaptive control revisited: linear-like convolution bounds and exponential stability

  • Original Article, published in Mathematics of Control, Signals, and Systems

Abstract

While the original classical parameter adaptive controllers do not handle noise or unmodelled dynamics well, redesigned versions have been proven to have some tolerance; however, exponential stabilization and a bounded gain on the noise are rarely proven. Here we consider a classical pole placement adaptive controller using the original projection algorithm rather than the commonly modified version; we impose the assumption that the plant parameters lie in a convex, compact set, although some progress has been made at weakening the convexity requirement. We demonstrate that the closed-loop system exhibits a very desirable property: there are linear-like convolution bounds on the closed-loop behaviour, which confers exponential stability and a bounded noise gain, and which can be leveraged to prove tolerance to unmodelled dynamics and plant parameter variation. We emphasize that there is no persistent excitation requirement of any sort; the improved performance arises from the vigilant nature of the parameter estimator.
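For readers who wish to experiment, the following is a minimal numerical sketch of the kind of estimator the paper studies: the ideal projection algorithm followed by projection onto a convex, compact parameter set. The ball-shaped set, the data, and all names here are our own illustrative choices, not from the paper.

```python
import math

def project_to_ball(theta, center, radius):
    """Euclidean projection onto the convex, compact set {x : ||x - center|| <= radius}."""
    diff = [t - c for t, c in zip(theta, center)]
    dist = math.sqrt(sum(d * d for d in diff))
    if dist <= radius:
        return list(theta)
    return [c + (radius / dist) * d for c, d in zip(center, diff)]

def projection_algorithm_step(theta_hat, phi, y):
    """Ideal (unmodified) projection algorithm update:
       theta_check = theta_hat + phi * e / (phi^T phi), with e = y - phi^T theta_hat."""
    denom = sum(p * p for p in phi)
    if denom == 0.0:              # no regressor information: leave the estimate alone
        return list(theta_hat)
    e = y - sum(p * t for p, t in zip(phi, theta_hat))
    return [t + p * e / denom for t, p in zip(theta_hat, phi)]

# Toy noise-free run: the true parameter lies in the ball B(0, 2).
theta_star = [1.0, -0.5]
theta_hat = [0.0, 0.0]
for phi in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]]:
    y = sum(p * t for p, t in zip(phi, theta_star))   # measurement with d(t) = 0
    theta_hat = project_to_ball(projection_algorithm_step(theta_hat, phi, y),
                                [0.0, 0.0], 2.0)
```

In this toy run the noise-free estimate lands on \(\theta ^*\) after the two orthonormal regressors; nothing here relies on a persistent excitation condition.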


Figs. 1–4 (not reproduced here)

Notes

  1. Since the closed-loop system is nonlinear, a bounded-noise bounded-state property does not automatically imply a bounded gain on the noise.

  2. An exception is the work of Ydstie [12, 13], who considers the ideal Projection Algorithm as a special case; however, a crisp bound on the effect of the initial condition is not proven and a minimum phase assumption is imposed.

  3. It is common to make this more general by letting \(\alpha \) be time-varying.

  4. Here \(\beta >0\) is fixed, so this is equivalent to \({\varepsilon }\) being small enough.

  5. We also implicitly use a pole placement procedure to obtain the controller parameters from the plant parameter estimates; this entails solving a linear equation.

  6. In addition, if we define \({\hat{\theta }} (t)\) and \({\hat{\theta }}^{\gamma } (t)\) in the natural way, then it is easy to prove that for \(\gamma \ne 0\) we have

    $$\begin{aligned} {\hat{\theta }}^{\gamma } (t) = {\hat{\theta }} (t) , \; t \ge t_0 . \end{aligned}$$
  7. Furthermore, in [37] it is assumed that \(\alpha _i \) and \(\beta _i \) are strictly greater than zero, but it is trivial to extend this to allow for zero as well.

  8. The choice of N and the value of the switching signal \(\sigma (t)\) play no role.

References

  1. Feuer A, Morse AS (1978) Adaptive control of single-input, single-output linear systems. IEEE Trans Autom Control 23(4):557–569


  2. Morse AS (1980) Global stability of parameter-adaptive control systems. IEEE Trans Autom Control 25:433–439


  3. Goodwin GC, Ramadge PJ, Caines PE (1980) Discrete time multivariable control. IEEE Trans Autom Control 25:449–456


  4. Narendra KS, Lin YH (1980) Stable discrete adaptive control. IEEE Trans Autom Control 25(3):456–461


  5. Narendra KS, Lin YH, Valavani LS (1980) Stable adaptive controller design, Part II: proof of stability. IEEE Trans Autom Control 25:440–448


  6. Rohrs CE et al (1985) Robustness of continuous-time adaptive control algorithms in the presence of unmodelled dynamics. IEEE Trans Autom Control 30:881–889


  7. Middleton RH, Goodwin GC (1988) Adaptive control of time-varying linear systems. IEEE Trans Autom Control 33(2):150–155


  8. Middleton RH et al (1988) Design issues in adaptive control. IEEE Trans Autom Control 33(1):50–58


  9. Tsakalis KS, Ioannou PA (1989) Adaptive control of linear time-varying plants: a new model reference controller structure. IEEE Trans Autom Control 34(10):1038–1046


  10. Kreisselmeier G, Anderson BDO (1986) Robust model reference adaptive control. IEEE Trans Autom Control 31:127–133


  11. Ioannou PA, Tsakalis KS (1986) A robust direct adaptive controller. IEEE Trans Autom Control 31(11):1033–1043


  12. Ydstie BE (1989) Stability of discrete-time MRAC revisited. Syst Control Lett 13:429–439


  13. Ydstie BE (1992) Transient performance and robustness of direct adaptive control. IEEE Trans Autom Control 37(8):1091–1105


  14. Naik SM, Kumar PR, Ydstie BE (1992) Robust continuous-time adaptive control by parameter projection. IEEE Trans Autom Control 37(2):182–197


  15. Wen C, Hill DJ (1992) Global boundedness of discrete-time adaptive control using parameter projection. Automatica 28(2):1143–1158


  16. Wen C (1994) A robust adaptive controller with minimal modifications for discrete time-varying systems. IEEE Trans Autom Control 39(5):987–991


  17. Li Y, Chen H-F (1996) Robust adaptive pole placement for linear time-varying systems. IEEE Trans Autom Control 41:714–719


  18. Narendra KS, Annaswamy AM (1987) A new adaptive law for robust adaptation without persistent excitation. IEEE Trans Autom Control 32:134–145


  19. Fu M, Barmish BR (1986) Adaptive stabilization of linear systems via switching control. IEEE Trans Autom Control AC–31:1097–1103


  20. Miller DE, Davison EJ (1989) An adaptive controller which provides Lyapunov stability. IEEE Trans Autom Control 34:599–609


  21. Morse AS (1996) Supervisory control of families of linear set-point controllers–Part 1: exact matching. IEEE Trans Autom Control 41:1413–1431


  22. Morse AS (1997) Supervisory control of families of linear set-point controllers–Part 2: robustness. IEEE Trans Autom Control 42:1500–1515


  23. Hespanha JP, Liberzon D, Morse AS (2003) Hysteresis-based switching algorithms for supervisory control of uncertain systems. Automatica 39:263–272


  24. Vu L, Chatterjee D, Liberzon D (2007) Input-to-state stability of switched systems and switching adaptive control. Automatica 43:639–646


  25. Hespanha JP, Liberzon D, Morse AS (2003) Overcoming the limitations of adaptive control by means of logic-based switching. Syst Control Lett 49(1):49–65


  26. Morse AS (1998) A bound for the disturbance to tracking error gain of a supervised set-point control system. In: Normand-Cyrot D (ed) Perspectives in control. Springer, London


  27. Vu L, Liberzon D (2011) Supervisory control of uncertain linear time-varying systems. IEEE Trans Autom Control 56(1):27–42


  28. Zhivoglyadov PV, Middleton RH, Fu M (2001) Further results on localization-based switching adaptive control. Automatica 37:257–263


  29. Miller DE (2003) A new approach to model reference adaptive control. IEEE Trans Autom Control 48:743–757


  30. Miller DE (2006) Near optimal LQR performance for a compact set of plants. IEEE Trans Autom Control 51:1423–1439


  31. Vale JR, Miller DE (2011) Step tracking in the presence of persistent plant changes. IEEE Trans Autom Control 56:43–58


  32. Miller DE (2017) A parameter adaptive controller which provides exponential stability: the first order case. Syst Control Lett 103:23–31


  33. Miller DE (2017) Classical discrete-time adaptive control revisited: exponential stabilization. In: 1st IEEE conference on control technology and applications, pp 1975–1980. The submitted version is posted at arXiv:1705.01494

  34. Goodwin GC, Sin KS (1984) Adaptive filtering prediction and control. Prentice Hall, Englewood Cliffs


  35. Jerbi A, Kamen EW, Dorsey J (1993) Construction of a robust adaptive regulator for time-varying discrete-time systems. Int J Adapt Control 7:1–12


  36. Praly L (1984) Towards a globally stable direct adaptive control scheme for not necessarily minimum phase system. IEEE Trans Autom Control 29:946–949


  37. Kreisselmeier G (1986) Adaptive control of a class of slowly time-varying plants. Syst Control Lett 8:97–103


  38. Kreisselmeier G, Smith MC (1986) Stable adaptive regulation of arbitrary \(n^{th}\)-order plants. IEEE Trans Autom Control 31:299–305


  39. Zhou K, Doyle JC, Glover K (1995) Robust and optimal control. Prentice Hall, New Jersey


  40. Desoer CA (1970) Slowly varying discrete time system \(x_{t+1} = A_t x_t\). Electron Lett 6(11):339–340


  41. Shahab MT, Miller DE (2018) Multi-estimator based adaptive control which provides exponential stability: the first-order case. In: 2018 IEEE 57th conference on decision and control (to appear)


Author information


Corresponding author

Correspondence to Daniel E. Miller.

Additional information


This research was supported by a grant from the Natural Sciences and Engineering Research Council of Canada.

11 Appendix


Proof of Proposition 1:

Since projection does not make the parameter estimate worse, it follows from (7) that

$$\begin{aligned} \Vert {\hat{\theta }} (t+1) - {\hat{\theta }} (t) \Vert \le \Vert {\check{\theta }} (t+1) - {\hat{\theta }} (t) \Vert&\le \left\| {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{ \phi (t)}{ \phi (t)^T \phi (t)}\, e(t+1) \right\| \\&\le {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{ |e(t+1)| }{ \Vert \phi (t) \Vert } , \quad t \ge t_0 , \end{aligned}$$

so the first inequality holds.

We now turn to the energy analysis. We first define \(\tilde{{\check{\theta }}} (t):= {\check{\theta }} (t) - \theta ^*\) and \({\check{V}} (t) := \tilde{{\check{\theta }}}(t)^T \tilde{{\check{\theta }}}(t) \). Next, we subtract \(\theta ^*\) from each side of (7), yielding

$$\begin{aligned} \tilde{{\check{\theta }}}(t+1)&= {\tilde{\theta }} (t) + {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{ \phi (t)}{ \phi (t)^T \phi (t)} \left[ -\phi (t)^T {\tilde{\theta }} (t) + d(t) \right] \\&= \left[ I - \underbrace{{\rho _{\delta } ( \phi (t) , e(t+1))}\frac{ \phi (t) \phi (t)^T}{ \phi (t)^T \phi (t)}}_{=: W_1 (t)} \right] {\tilde{\theta }} (t) + \underbrace{ {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{ \phi (t)}{ \phi (t)^T \phi (t)}}_{=: W_2 (t)} d(t) . \end{aligned}$$

Then

$$\begin{aligned} {\check{V}} (t+1)&= \left[ (I-W_1(t)) \tilde{\theta }(t) + W_2 (t) d(t) \right] ^T \left[ (I-W_1(t)) \tilde{\theta }(t) + W_2 (t) d(t) \right] \\&= \tilde{\theta }(t)^T [ I - W_1(t)] [ I - W_1(t)] \tilde{\theta }(t) \\&\quad + 2 \tilde{\theta }(t)^T [I - W_1(t)] W_2(t) d(t) + W_2(t)^T W_2(t) d(t)^2 . \end{aligned}$$

Now let us analyse the three terms on the RHS: the fact that \(W_1(t)^2= W_1(t)\) allows us to simplify the first term; the fact that \(W_1 (t) W_2 (t) = W_2 (t)\) means that the second term is zero; \(W_2 (t)^T W_2 (t) = {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{1}{ \phi (t)^T \phi (t)}\), which simplifies the third term. We end up with

$$\begin{aligned} {\check{V}} (t+1)&= \tilde{\theta }(t)^T [ I - W_1(t)] \tilde{\theta }(t) + {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{d(t)^2}{ \phi (t)^T \phi (t) } \\&= V(t) - {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{ [ \tilde{\theta }(t)^T \phi (t)]^2}{ \phi (t)^T \phi (t)} + {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{d(t)^2}{ \phi (t)^T \phi (t)} \\&= V(t) + {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{ d(t)^2 - [d (t)-e(t+1)]^2}{ \phi (t)^T \phi (t)} \\&\le V(t) + {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{ - \frac{1}{2} e(t+1)^2 + 2 d(t)^2}{\phi (t)^T \phi (t)} . \end{aligned}$$

Since projection never makes the estimate worse, it follows that

$$\begin{aligned} V(t+1) \le V(t) + {\rho _{\delta } ( \phi (t) , e(t+1))}\frac{ - \frac{1}{2} e(t+1)^2 + 2 d(t)^2}{\phi (t)^T \phi (t)} . \end{aligned}$$

\(\square \)
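Both inequalities of Proposition 1 are easy to spot-check numerically. The sketch below takes \(\rho _{\delta }\equiv 1\) (no cutoff) and omits the projection step, which by the argument above can only help; the plant data are arbitrary toy choices of ours.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def check_case(theta_star, theta_hat, phi, d):
    y = sum(p * t for p, t in zip(phi, theta_star)) + d   # y(t+1) = phi^T theta* + d
    e = y - sum(p * t for p, t in zip(phi, theta_hat))    # prediction error e(t+1)
    denom = sum(p * p for p in phi)
    theta_check = [t + p * e / denom for t, p in zip(theta_hat, phi)]
    # first inequality: the step size is bounded by |e| / ||phi||
    step_ok = (norm([a - b for a, b in zip(theta_check, theta_hat)])
               <= abs(e) / norm(phi) + 1e-12)
    # second inequality: V decreases up to the noise term
    V = sum((a - b) ** 2 for a, b in zip(theta_hat, theta_star))
    V_next = sum((a - b) ** 2 for a, b in zip(theta_check, theta_star))
    energy_ok = V_next <= V + (-0.5 * e * e + 2 * d * d) / denom + 1e-12
    return step_ok and energy_ok

cases = [([1.0, -1.0], [0.0, 0.0], [1.0, 1.0], 0.3),
         ([2.0, 0.5], [1.0, 1.0], [0.5, -2.0], -0.1),
         ([0.0, 3.0], [-1.0, 2.0], [4.0, 1.0], 0.0)]
results = [check_case(*c) for c in cases]
```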

Proof of Lemma 1:

Fix \(\delta \in ( 0 , \infty ]\) and \(\sigma \in ( {\underline{\lambda }} , 1) \). First of all, it is well known that the characteristic polynomial of \({\bar{A}} (t)\) is exactly \(z^{2n} A^* (z^{-1})\) for every \(t \ge t_0\). Furthermore, it is well known that the coefficients of \({\hat{L}} (t, z^{-1} )\) and \({\hat{P}} (t,z^{-1} )\) are the solution of a linear equation, and are analytic functions of \({\hat{\theta }} (t) \in {{\mathcal {S}}}\). Hence, there exists a constant \(\gamma _1\) so that, for every set of initial conditions, \(y^* \in {l_{\infty }}\) and \(d \in {l_{\infty }}\), we have \(\sup _{t \ge t_0} \Vert {\bar{A}} (t) \Vert \le \gamma _1\).

To prove the first bound, we invoke the argument of [40], which considers a more general time-varying situation but with more restrictions on \(\sigma \). By making a slight adjustment to the first part of the proof given there, we can prove that, with \(\gamma _2 := \sigma \frac{ (\sigma + \gamma _1 )^{2n-1}}{ ( \sigma - {\underline{\lambda }} )^{2n}}\), for every \(t \ge t_0\) we have \(\Vert {\bar{A}} (t) ^k \Vert \le \gamma _2 \sigma ^{k}\), \(k \ge 0 \), as desired.

Now we turn to the second bound. From Proposition 1 and the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned} \sum _{j=k}^{t-1} \Vert {\hat{\theta }} (j+1) - {\hat{\theta }} (j) \Vert&\le \sum _{j=k}^{t-1} {\rho _{\delta } ( \phi (j) , e(j+1))}\frac{| e(j+1) |}{\Vert \phi (j)\Vert } \\&\le \left[ \sum _{j=k}^{t-1} {\rho _{\delta } ( \phi (j) , e(j+1))}\frac{e(j+1)^2}{\Vert \phi (j)\Vert ^2} \right] ^{1/2} ( t-k)^{1/2} . \end{aligned}$$

Now notice that

$$\begin{aligned} \Vert {\bar{A}} (t+1) - {\bar{A}} (t) \Vert&\le \Vert {\hat{\theta }} (t+1) - {\hat{\theta }} (t) \Vert \\&\quad + \sum _{i=1}^n \left( | {\hat{l}}_i (t+1) - {\hat{l}}_i (t) | + | {\hat{p}}_i (t+1) - {\hat{p}}_i (t) | \right) . \end{aligned}$$

The fact that the coefficients of \({\hat{L}} (t, z^{-1} )\) and \({\hat{P}} (t,z^{-1} )\) are analytic functions of \({\hat{\theta }} (t) \in {{\mathcal {S}}}\) means that there exists a constant \(\gamma _3 \ge 1\) so that

$$\begin{aligned} \sum _{j=k}^{t-1} \Vert {\bar{A}} (j+1) - {\bar{A}} (j) \Vert \le \gamma _3 \sum _{j=k}^{t-1} \Vert {\hat{\theta }} (j+1) - {\hat{\theta }} (j) \Vert , \end{aligned}$$

so we conclude that the second bound holds as well. \(\Box \)
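The first bound above is a Desoer-type estimate: a matrix whose norm is at most \(\gamma _1\) and whose eigenvalues all have magnitude at most \({\underline{\lambda }} < \sigma \) satisfies \(\Vert {\bar{A}}^k \Vert \le \gamma _2 \sigma ^k\). Here is a toy time-invariant check; the matrix and constants are our own choices, and the Frobenius norm is used as a computable upper bound for the induced norm.

```python
import math

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def fro(A):
    """Frobenius norm, an upper bound on the induced 2-norm."""
    return math.sqrt(sum(x * x for row in A for x in row))

n = 1                                   # plant order, so A is 2n x 2n = 2 x 2
A = [[0.5, 1.0], [0.0, 0.5]]            # both eigenvalues at 0.5
lam_under, sigma = 0.5, 0.9             # spectral radius bound and decay rate
gamma1 = fro(A)                         # upper bound on the norm of A
gamma2 = sigma * (sigma + gamma1) ** (2 * n - 1) / (sigma - lam_under) ** (2 * n)

P = [[1.0, 0.0], [0.0, 1.0]]            # P = A^k, starting at k = 0
holds = True
for k in range(60):
    holds = holds and (fro(P) <= gamma2 * sigma ** k + 1e-9)
    P = matmul2(P, A)
```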

In order to prove Theorem 4, we need some preliminary results. The first step is to extend Proposition 1 to the case when \(\theta ^*\) may not lie in \({{\mathcal {S}}}_{i}\).

Proposition 3 (statement rendered only as an image in the original)

Proof

Since projection does not make the parameter estimate worse, it follows from (48) and (49) that when \(\phi (t)\ne 0\),

$$\begin{aligned} \Vert {\hat{\theta }}_i (t+1) - {\hat{\theta }}_i (t) \Vert \le \Vert {\check{\theta }}_i (t+1) - {\hat{\theta }}_i (t) \Vert \le \left\| \frac{\phi (t)e_i(t+1)}{\Vert \phi (t)\Vert ^2}\right\| \le \frac{ |e_i(t+1)| }{ \Vert \phi (t) \Vert }, \end{aligned}$$
(59)

and when \(\phi (t)=0\),

$$\begin{aligned} \Vert {\hat{\theta }}_i (t+1) - {\hat{\theta }}_i (t) \Vert =0. \end{aligned}$$

The result follows by iteration. \(\square \)

The next result produces a crude bound on the closed-loop behaviour.

Proposition 4

Consider the plant (1) and suppose that the controller consisting of the estimator (48), (49) and the control law (54) is applied (see footnote 8). Then for every \(p\ge 0\), there exists a constant \({{\bar{c}}}\ge 1\) such that for every \(t_0\in \mathbf{Z}\), \(t\ge t_0\), \(\phi _0\in \mathbf{R}^{2n}\), \({\theta ^{*}\in {{\mathcal {S}}}}\), \({\hat{\theta }}_i (t_0)\in {{\mathcal {S}}}_{i}\) (\(i=1,2\)) and \(y^*,d \in {\varvec{\ell }}_{\infty }\):

$$\begin{aligned} \Vert \phi (t+p)\Vert \le {{\bar{c}}}\Vert \phi (t)\Vert + {{\bar{c}}} \sum _{j=0}^{p-1} (|d(t+j)|+|r(t+j)|). \end{aligned}$$
(60)

Proof

Fix \(p\ge 0\). Let \(t_0\in \mathbf{Z}\), \(t\ge t_0\), \(\phi _0\in \mathbf{R}^{2n}\), \({\theta ^{*}\in {{\mathcal {S}}}}\), \({\hat{\theta _i}} (t_0)\in {{\mathcal {S}}}_{i}\) (\(i=1,2\)) and \(y^*,d \in {\varvec{\ell }}_{\infty }\) be arbitrary. From (1) we see that

$$\begin{aligned} |y(t+1)|\le \Vert {{\mathcal {S}}}\Vert \Vert \phi (t)\Vert +|d(t)|. \end{aligned}$$

From (54) and Assumption 2, we have that there exists a constant \(\gamma \) so that

$$\begin{aligned} |u(t+1)|\le \gamma \Vert \phi (t)\Vert +|r(t)|. \end{aligned}$$

From the definition of \(\Vert \phi (t+1)\Vert \), we have that

$$\begin{aligned} \Vert \phi (t+1)\Vert \le \Vert \phi (t)\Vert +|y(t+1)|+|u(t+1)|. \end{aligned}$$

Combining these three bounds, we end up with

$$\begin{aligned} \Vert \phi (t+1)\Vert \le \underbrace{\left( 1+\Vert {{\mathcal {S}}}\Vert +\gamma \right) }_{=:{{\bar{a}}}} \Vert \phi (t)\Vert +|r(t)|+|d(t)|. \end{aligned}$$

Solving iteratively, we have

$$\begin{aligned} \Vert \phi (t+p)\Vert&\le {{\bar{a}}}^{p}\Vert \phi (t)\Vert +\sum _{j=0}^{p-1} {{\bar{a}}}^{p-j-1} (|d(t+j)|+|r(t+j)|) \\&\le {{\bar{a}}}^{p}\Vert \phi (t)\Vert + {{\bar{a}}}^{p-1} \sum _{j=0}^{p-1} (|d(t+j)|+|r(t+j)|). \end{aligned}$$

Put \({{\bar{c}}}:={{\bar{a}}}^{p}\) to conclude the proof. \(\square \)
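The iteration at the end of this proof is the standard one for a first-order recursion. A scalar sketch (with toy numbers of our own, and the inequality met with equality at every step, i.e. the worst case):

```python
a_bar = 1.7           # stands in for 1 + ||S|| + gamma >= 1
x0 = 2.0              # stands in for ||phi(t)||
w = [0.3, 0.0, 1.2, 0.5, 0.7, 0.1]   # stands in for |d(t+j)| + |r(t+j)| >= 0
p = len(w)

x = x0
for wj in w:
    x = a_bar * x + wj               # worst case: the inequality holds with equality

# closed-form bound from the proof, with c_bar = a_bar^p
bound = a_bar ** p * x0 + a_bar ** (p - 1) * sum(w)
```

Since \({{\bar{a}}}\ge 1\) and the \(w\) terms are nonnegative, every weight \({{\bar{a}}}^{p-j-1}\) in the exact solution is dominated by \({{\bar{a}}}^{p-1}\), which is all the second inequality uses.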

We now state a technical result which we used in [32] to analyse the first-order one-step-ahead adaptive control problem.

Lemma 3 (statement rendered only as an image in the original)

Proof of Theorem 4:

Fix \(\lambda \in (0,1)\) and \(N\ge 2n\). Let \(t_0\in \mathbf{Z}\), \({\phi _0 \in {\mathbf{R}}^{2n}}\), \(\sigma _{0}\in \{1,2\}\), \({\theta ^{*} \in {{\mathcal {S}}}}\), \({\hat{\theta _i}} (t_0)\in {{\mathcal {S}}}_{i}\) (\(i=1,2\)), and \(y^*,d \in \ell _{\infty }\) be arbitrary; as usual, we let \(i^*\) denote the smallest \(j\in \{1,2\}\) which satisfies \(\theta ^{*}\in {{\mathcal {S}}}_{j}\).

As mentioned at the beginning of Sect. 8, the proposed controller is based on the first-order one-step-ahead control setup of [41], although it is more involved. The proof uses ideas similar to those of [41], but since our system is more complex, it should not be surprising that the argument differs significantly. Hence, before proceeding we provide a proof outline, using the definition of \({{\hat{t}}}_\ell \) given in Sect. 8.2:

  1. first, we define a state-space equation describing \(\phi (t)\) which holds on intervals of the form \([{{\hat{t}}}_\ell ,{{\hat{t}}}_{\ell +1})\);

  2. second, we analyse this equation, obtaining a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \) in terms of \(\Vert \phi ({{\hat{t}}}_{\ell })\Vert \) and the exogenous inputs;

  3. third, we apply Lemma 2 and Proposition 4 to obtain a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \) in terms of \(\Vert \phi ({{\hat{t}}}_{\ell })\Vert \), i.e. we analyse two intervals at a time;

  4. fourth, we analyse the associated difference inequality (relating \(\Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \) to \(\Vert \phi ({{\hat{t}}}_{\ell })\Vert \)) in a way similar (though not identical) to that used in [41].

Step 1: \(\underline{\hbox {Obtain a state-space model describing } \phi (t) \hbox { for } t\in [{{\hat{t}}}_{\ell },{{\hat{t}}}_{\ell +1}).}\)

By the definition of the prediction error (47), and since the switching signal (53) is constant on \([{{\hat{t}}}_{\ell },{{\hat{t}}}_{\ell +1})\), we have

$$\begin{aligned} y(t+1)&= \phi (t)^T{\hat{\theta }}_{\sigma (t)}(t)+e_{\sigma (t)}(t+1) \nonumber \\&= \phi (t)^T{\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}(t)+e_{\sigma ({{\hat{t}}}_\ell )}(t+1) + \phi (t)^T{\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )-\phi (t)^T{\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell ) \nonumber \\&= {\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )^T\phi (t)+\left[ {\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}(t)-{\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\right] ^T\phi (t)\nonumber \\&\quad +e_{\sigma ({{\hat{t}}}_\ell )}(t+1),\quad t\in [{{\hat{t}}}_\ell ,{{\hat{t}}}_{\ell +1}). \end{aligned}$$
(61)

From the control law (54) and the control gains (52), we have

$$\begin{aligned} u(t+1)&= K_{\sigma (t)}(t)^T\phi (t)+r(t) \nonumber \\&= K_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )^T\phi (t)+r(t),\qquad t\in [{{\hat{t}}}_\ell ,{{\hat{t}}}_{\ell +1}). \end{aligned}$$
(62)

We now derive a state-space equation for \(\phi (t)\) in much the same way as (13) was derived; we first define

$$\begin{aligned} {{{\bar{A}}}_{\sigma (j)}(j)} := \begin{bmatrix} -{\hat{a}}_{\sigma (j),1}(j) & -{\hat{a}}_{\sigma (j),2}(j) & \cdots & -{\hat{a}}_{\sigma (j),n}(j) & {\hat{b}}_{\sigma (j),1}(j) & \cdots & {\hat{b}}_{\sigma (j),n}(j) \\ 1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ & \ddots & & \vdots & \vdots & & \vdots \\ & & 1 & 0 & 0 & \cdots & 0 \\ -{\hat{p}}_{\sigma (j),1}(j) & -{\hat{p}}_{\sigma (j),2}(j) & \cdots & -{\hat{p}}_{\sigma (j),n}(j) & -{\hat{l}}_{\sigma (j),1}(j) & \cdots & -{\hat{l}}_{\sigma (j),n}(j) \\ 0 & \cdots & \cdots & 0 & 1 & \cdots & 0 \\ \vdots & & & \vdots & & \ddots & \vdots \\ 0 & \cdots & \cdots & 0 & & 1 & 0 \end{bmatrix} ; \end{aligned}$$

then, in light of (61) and (62), the following holds:

$$\begin{aligned} \phi (t+1)&={{\bar{A}}}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\phi (t) + B_1 \left( \left[ {\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}(t)-{\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\right] ^T\phi (t) + e_{\sigma ({{\hat{t}}}_\ell )}(t+1)\right) +B_2 r(t),\nonumber \\&\qquad t\in [{{\hat{t}}}_\ell ,{{\hat{t}}}_{\ell +1}),\; \ell \in \mathbf{Z}^+; \end{aligned}$$
(63)

notice the additional term \(\left[ {\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}(t)-{\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\right] \) on the right-hand side which (13) does not have.

Step 2: \(\underline{\hbox {Obtain a bound on } \Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \hbox { in terms of } \Vert \phi ({{\hat{t}}}_{\ell })\Vert .}\)

In (63), \({{\bar{A}}}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\in \mathbf{R}^{2n\times 2n}\) is a constant matrix with all eigenvalues equal to zero; since \(N\ge 2n\), clearly

$$\begin{aligned} \left[ {{{\bar{A}}}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )}\right] ^{{{\hat{t}}}_{\ell +1}-{{\hat{t}}}_{\ell }}=\left[ {{{\bar{A}}}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )}\right] ^N=0. \end{aligned}$$

So, solving (63) for \(\phi ({{\hat{t}}}_{\ell +1})\) yields

$$\begin{aligned} \phi ({{\hat{t}}}_{\ell +1}) =\sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} \left[ {{{\bar{A}}}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )}\right] ^{{{\hat{t}}}_{\ell +1}-j-1} \left( {B}_1 \left( \left[ {\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}(j)-{\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\right] ^T\phi (j)+ e_{\sigma ({{\hat{t}}}_\ell )}(j+1)\right) +B_2 r(j)\right) . \end{aligned}$$
(64)

It follows from the compactness of \({{\mathcal {S}}}\) and the \({{\mathcal {S}}}_i\)'s that \(\left\| \left[ {{{\bar{A}}}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )}\right] ^j\right\| \), \(j=0,1,\ldots ,N-1\), is bounded above by a constant which we label \(c_1\). Using this fact together with Proposition 3, which provides a bound on the difference between parameter estimates at two different points in time, we obtain

$$\begin{aligned} \Vert \phi ({{\hat{t}}}_{\ell +1})\Vert&\le c_1 \sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} \left( \left\| {\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}(j)-{\hat{\theta }}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\right\| \Vert \phi (j)\Vert +|e_{\sigma ({{\hat{t}}}_\ell )}(j+1)|+|r(j)| \right) \\&\le c_1 \sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} \left( \left[ \sum _{q={{\hat{t}}}_\ell ,\phi (q)\ne 0}^{j-1}\frac{|e_{\sigma ({{\hat{t}}}_\ell )}(q+1)|}{\Vert \phi (q)\Vert } \right] \Vert \phi (j)\Vert +|e_{\sigma ({{\hat{t}}}_\ell )}(j+1)|+|r(j)| \right) . \end{aligned}$$

By definition of the prediction error, if \(\phi (j)=0\) then

$$\begin{aligned} |e_i(j+1)|=|d(j)|, \end{aligned}$$

and if \(\phi (j)\ne 0\), then

$$\begin{aligned} |e_i(j+1)|=\frac{|e_i(j+1)|}{\Vert \phi (j)\Vert }\Vert \phi (j)\Vert . \end{aligned}$$

Incorporating this into the above inequality yields

$$\begin{aligned} \Vert \phi ({{\hat{t}}}_{\ell +1})\Vert&\le c_1 \sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} \left( \left[ \sum _{q={{\hat{t}}}_\ell ,\phi (q)\ne 0}^{j}\frac{|e_{\sigma ({{\hat{t}}}_\ell )}(q+1)|}{\Vert \phi (q)\Vert } \right] \Vert \phi (j)\Vert +|d(j)|+|r(j)| \right) \nonumber \\&\le c_1 \sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} \left( \left[ \sum _{q={{\hat{t}}}_\ell ,\phi (q)\ne 0}^{{{\hat{t}}}_{\ell +1}-1}\frac{|e_{\sigma ({{\hat{t}}}_\ell )}(q+1)|}{\Vert \phi (q)\Vert } \right] \Vert \phi (j)\Vert +|d(j)|+|r(j)| \right) \nonumber \\&= c_1\left[ \sum _{q={{\hat{t}}}_\ell ,\phi (q)\ne 0}^{{{\hat{t}}}_{\ell +1}-1}\frac{|e_{\sigma ({{\hat{t}}}_\ell )}(q+1)|}{\Vert \phi (q)\Vert } \right] \sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} \Vert \phi (j)\Vert +c_1\sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1}(|d(j)|+|r(j)|) \nonumber \\&\le c_1({{\hat{t}}}_{\ell +1}-{{\hat{t}}}_{\ell }) \left[ \max _{j\in [{{\hat{t}}}_\ell ,{{\hat{t}}}_{\ell +1}), \phi (j)\ne 0} \frac{|e_{\sigma ({{\hat{t}}}_\ell )}(j+1)|}{\Vert \phi (j)\Vert } \right] \sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} \Vert \phi (j)\Vert \nonumber \\&\quad +c_1\sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1}(|d(j)|+|r(j)|). \end{aligned}$$
(65)

Since \({{\hat{t}}}_{\ell +1}-{{\hat{t}}}_{\ell }=N\), it follows from Proposition 4 that there exists a constant \(c_2\) so that the following holds:

$$\begin{aligned} \sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} \Vert \phi (j)\Vert \le c_2\Vert \phi ({{\hat{t}}}_\ell )\Vert +c_2\sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-2} (|d(j)|+|r(j)|); \end{aligned}$$
(66)

so, substituting (66) into (65) and using the definition of the performance signal \(J_{\sigma ({{\hat{t}}}_\ell )}(\cdot )\) given in (55) it follows that there exists a constant \(c_3\) so that

$$\begin{aligned} \Vert \phi ({{\hat{t}}}_{\ell +1})\Vert&\le c_1N J_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell ) \left( c_2\Vert \phi ({{\hat{t}}}_\ell )\Vert +c_2\sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-2} (|d(j)|+|r(j)|) \right) \nonumber \\&\quad + c_1\sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1}(|d(j)|+|r(j)|) \nonumber \\&\le c_3 J_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell ) \Vert \phi ({{\hat{t}}}_\ell )\Vert +c_3\left( 1+ J_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\right) \sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} (|d(j)|+|r(j)|). \end{aligned}$$
(67)
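The key fact opening Step 2, namely that \({{\bar{A}}}_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_\ell )\) has all eigenvalues at zero and hence its \(N\)th power vanishes for \(N\ge 2n\), can be checked on a toy instance of the matrix structure above (here \(n=1\); the coefficient values are our own, chosen by hand so that the characteristic polynomial is \(z^2\)):

```python
def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# n = 1 instance of the 2n x 2n matrix: [[-a1, b1], [-p1, -l1]]
a1, b1, p1, l1 = -1.0, 1.0, 1.0, 1.0   # trace = -a1 - l1 = 0, det = a1*l1 + b1*p1 = 0
A = [[-a1, b1], [-p1, -l1]]            # = [[1, 1], [-1, -1]], both eigenvalues 0

A2 = matmul2(A, A)                     # A^{2n} = A^2 should be the zero matrix
```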

Step 3: \(\underline{\hbox {Apply Lemma}~2 \hbox { and Proposition}~4 \hbox { to obtain a bound on } \Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \hbox { in terms of }}\) \(\underline{\Vert \phi ({{\hat{t}}}_{\ell })\Vert .}\)

From Lemma 2 either

$$\begin{aligned} J_{\sigma ({{\hat{t}}}_\ell )}({{\hat{t}}}_{\ell })\le J_{i^*}({{\hat{t}}}_{\ell }) \end{aligned}$$
(68)

or

$$\begin{aligned} J_{\sigma ({{\hat{t}}}_{\ell +1})}({{\hat{t}}}_{\ell +1})\le J_{i^*}({{\hat{t}}}_{\ell +1}). \end{aligned}$$
(69)

If (68) is true, then we can substitute this into (67) to obtain a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \) in terms of \(J_{i^*}({{\hat{t}}}_{\ell })\) and then apply Proposition 4 to get a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \) in terms of \(\Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \) and the exogenous inputs; it follows that there exists a constant \(c_4\) so that

$$\begin{aligned}&\Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \le c_3c_4 J_{i^*}({{{\hat{t}}}_\ell }) \Vert \phi ({{\hat{t}}}_\ell )\Vert \nonumber \\&\quad + c_3c_4\left( 1+ J_{i^*}({{{\hat{t}}}_\ell })\right) \sum _{j={{\hat{t}}}_\ell }^{{{\hat{t}}}_{\ell +1}-1} (|d(j)|+|r(j)| )+c_4 \sum _{j={{\hat{t}}}_{\ell +1}}^{{{\hat{t}}}_{\ell +2}-1} (|d(j)|+|r(j)|). \end{aligned}$$
(70)

On the other hand, if (69) is true, we can use (67) to get a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \) in terms of \(J_{i^*}({{\hat{t}}}_{\ell +1})\), and then apply Proposition 4 to get a bound on \(\Vert \phi ({{\hat{t}}}_{\ell +1})\Vert \) in terms of \(\Vert \phi ({{\hat{t}}}_{\ell })\Vert \); it follows that there exists a constant \(c_5\) so that

$$\begin{aligned} \Vert \phi ({{\hat{t}}}_{\ell +2})\Vert&\le c_3 c_5 J_{i^*}({{\hat{t}}}_{\ell +1}) \Vert \phi ({{\hat{t}}}_\ell )\Vert + c_3 c_5 J_{i^*}({{\hat{t}}}_{\ell +1}) \sum _{j={{\hat{t}}}_{\ell }}^{{{\hat{t}}}_{\ell +1}-1} (|d(j)|+|r(j)|)\nonumber \\&\quad + c_3\left( 1+ J_{i^*}({{\hat{t}}}_{\ell +1})\right) \sum _{j={{\hat{t}}}_{\ell +1}}^{{{\hat{t}}}_{\ell +2}-1} (|d(j)|+|r(j)|). \end{aligned}$$
(71)

If we define \(\alpha ({{\hat{t}}}_\ell ):= \max \{J_{i^*}({{\hat{t}}}_\ell ),J_{i^*}({{\hat{t}}}_{\ell +1})\}\), then there exists a constant \(c_6\) so that (70) and (71) can be combined to yield

$$\begin{aligned} \Vert \phi ({{\hat{t}}}_{\ell +2})\Vert \le c_6 \alpha ({{\hat{t}}}_{\ell }) \Vert \phi ({{\hat{t}}}_\ell )\Vert + c_6\left( 1+\alpha ({{\hat{t}}}_{\ell })\right) \sum _{j={{\hat{t}}}_{\ell }}^{{{\hat{t}}}_{\ell +2}-1}( |d(j)|+|r(j)|),\qquad \ell \in \mathbf{Z}^+. \end{aligned}$$
(72)

Step 4: Analyse the first-order difference inequality (72).

First, we change notation in (72) to facilitate analysis:

$$\begin{aligned} \Vert \phi ({{\hat{t}}}_{2j+2})\Vert \le c_6 \alpha ({{\hat{t}}}_{2j}) \Vert \phi ({{\hat{t}}}_{2j})\Vert + c_6\left( 1+\alpha ({{\hat{t}}}_{2j})\right) \sum _{q={{\hat{t}}}_{2j}}^{{{\hat{t}}}_{2j+2}-1} (|d(q)|+|r(q)|),\qquad j\in \mathbf{Z}^+ . \end{aligned}$$
(73)

Next, we will analyse (73) to obtain a bound on the closed-loop behaviour; we consider two cases—one with noise and one without.

Case 1: \(d(t)=0\) for all \(t\ge t_0\).

From Proposition 1 and the definition of \(\alpha (\cdot )\), we have

$$\begin{aligned} \sum _{q=0}^{j-1} \alpha ({{\hat{t}}}_{2q})^2&\le \sum _{p=t_0,\phi (p)\ne 0}^{t_0+jN-1} \frac{|e_{i^*}(p+1)|^2}{\Vert \phi (p)\Vert ^2}\nonumber \\&\le 2[V(t_0)-V(t_0+jN)] \le 2V( t_0) \le 8\Vert {{\mathcal {S}}}_{i^*}\Vert ^2 \le 8{\bar{\mathbf{{s}}}}^2 =:c_7, \quad j\ge 1. \end{aligned}$$
(74)

If we use this bound in the second occurrence of \(\alpha ({{\hat{t}}}_{2j})\) in (73), we obtain

$$\begin{aligned} \Vert \phi ({{\hat{t}}}_{2j+2})\Vert \le c_6 \alpha ({{\hat{t}}}_{2j}) \Vert \phi ({{\hat{t}}}_{2j})\Vert + \underbrace{c_6(1+\sqrt{c_7})}_{=:c_8} \underbrace{\sum _{q={{\hat{t}}}_{2j}}^{{{\hat{t}}}_{2j+2}-1} |r(q)|}_{=:{{\bar{r}}}(j)},\qquad j\in \mathbf{Z}^+ . \end{aligned}$$
(75)

Since \(\lambda \in (0,1)\) and \(c_6\ge 1\), it follows that \(\lambda _1:=\frac{\lambda ^{2N}}{c_6}\in (0,1)\). By Lemma 3(i), if we define \(c_9:=c_7^{\frac{c_7+1}{2}}(\frac{1}{\lambda _1})^{\frac{c_7}{\lambda _1^2}+1}\) and use the fact that \(\alpha ({{\hat{t}}}_{2j})\ge 0\), we see that

$$\begin{aligned} {\prod }_{q=0}^{j-1} \alpha ({{\hat{t}}}_{2q})\le c_9 \lambda _1^{j},\;\; j\in \mathbf{Z}^+, \end{aligned}$$
(76)

which, in turn, implies that

$$\begin{aligned} {\prod }_{q=0}^{j-1} [c_6 \alpha ({{\hat{t}}}_{2q})] \le c_9 \lambda ^{2j N},\;\; j\in \mathbf{Z}^+. \end{aligned}$$
(77)

Solving (75) iteratively and using this bound, we obtain

$$\begin{aligned} \Vert \phi ({{\hat{t}}}_{2j})\Vert \le c_9 \lambda ^{2jN} \Vert \phi ({{\hat{t}}}_0)\Vert + \sum _{q=0}^{j-1} c_9c_8 \left( \lambda ^{2N} \right) ^{j-1-q} {{\bar{r}}}(q),\;\; j\in \mathbf{Z}^+. \end{aligned}$$
(78)
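The passage from (75) and (77) to (78) is the standard iterative solution of a first-order difference inequality. The following numerical sketch of that mechanism is purely illustrative: the value of \(\lambda \), the gains and the driving terms are hypothetical, and the constants \(c_8,c_9\) are taken to be \(1\) for simplicity.

```python
import random

random.seed(0)
lam = 0.9                 # stands in for lambda^{2N}; hypothetical value
J = 30
# gains a_j <= lam, so every partial product of the a_j's is <= lam^(number of factors)
a = [random.uniform(0.0, lam) for _ in range(J)]
# nonnegative driving terms, standing in for c_8 * rbar(j)
b = [random.uniform(0.0, 1.0) for _ in range(J)]

# iterate x_{j+1} = a_j x_j + b_j, the equality version of (75)
x = [5.0]
for j in range(J):
    x.append(a[j] * x[j] + b[j])

# convolution-type bound analogous to (78):
#   x_j <= lam^j x_0 + sum_{q<j} lam^(j-1-q) b_q
for j in range(J + 1):
    bound = lam**j * x[0] + sum(lam**(j - 1 - q) * b[q] for q in range(j))
    assert x[j] <= bound + 1e-12
print("bound of the form (78) holds at every step")
```

The induction behind the assertion is exactly the one used in the proof: the right-hand side satisfies the same recursion with \(a_j\) replaced by its upper bound \(\lambda \), so it dominates the iterates term by term.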

Using Proposition 4 to obtain a bound on \(\phi (t)\) between \({{\hat{t}}}_{2j}\) and \({{\hat{t}}}_{2j+2}\), we conclude that there exists a constant \(c_{10}\) so that

$$\begin{aligned} \Vert \phi (t)\Vert \le c_{10} \lambda ^{t-t_0} \Vert \phi (t_0)\Vert +\sum _{j=t_0}^{t-1}c_{10}\lambda ^{t-j-1}|r(j)|,\qquad t\ge t_0. \end{aligned}$$
(79)
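In particular, since \(\lambda \in (0,1)\), summing the geometric series in (79) shows that the noise-free closed loop has a bounded gain from the reference to the state:

$$\begin{aligned} \Vert \phi (t)\Vert \le c_{10} \Vert \phi (t_0)\Vert + \frac{c_{10}}{1-\lambda }\, \sup _{j\ge t_0}|r(j)|,\qquad t\ge t_0. \end{aligned}$$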

Case 2: \(d(t)\ne 0\) for some \(t\ge t_0\).

We now analyse the case when there is noise entering the system; the analysis uses a similar (but not identical) approach to that of Case 2 in the proof of Theorem 1. Motivated by Case 1, in the following we will apply Lemma 3(ii) with a larger bound than in (74): define \( c_{11}:=8(1+N){\bar{\mathbf{{s}}}}^2\). With \(\lambda _{1}=\frac{\lambda ^{2N}}{c_6}\) as in Case 1, we also define \(\nu := \left\lceil { \frac{ {\frac{c_{11}+1}{2}}\ln {(c_{11})} +(4{\frac{c_{11}}{\lambda _1^2}}+1)(\ln {(2)}-\ln {(\lambda _1)})}{\ln {(2)}}}\right\rceil .\)

We now partition the timeline into two parts: one on which the noise is small relative to \(\phi \) and one on which it is not. With \(\nu \) defined above, we define

$$\begin{aligned} S_{\mathrm{good}}= & {} \left\{ j\ge t_0:\phi (j)\ne 0 \text { and } \tfrac{|d(j)|^2}{\Vert \phi (j)\Vert ^2}<\tfrac{{\bar{\mathbf{{s}}}}^2}{\nu }\right\} ,\\ S_{\mathrm{bad}}= & {} \left\{ j\ge t_0:\phi (j)=0 \text { or } \tfrac{|d(j)|^2}{\Vert \phi (j)\Vert ^2}\ge \tfrac{{\bar{\mathbf{{s}}}}^2}{\nu }\right\} ; \end{aligned}$$

clearly \(\{ j \in \mathbf{Z}: \;\; j \ge t_0 \} = S_{\mathrm{good}} \cup S_{\mathrm{bad}} \). We can then define a (possibly infinite) sequence of intervals of the form \([k_l,k_{l+1})\) which satisfy:

(i) \(k_{0}=t_0\) serves as the initial instant of the first interval;

(ii) \([k_l,k_{l+1})\) belongs either to \(S_{\mathrm{good}}\) or to \(S_{\mathrm{bad}}\); and

(iii) if \(k_{l+1}\ne \infty \) and \([k_l,k_{l+1})\) belongs to \(S_{\mathrm{good}}\), then \([k_{l+1},k_{l+2})\) belongs to \(S_{\mathrm{bad}}\), and vice versa.
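The interval construction in (i)-(iii) is purely combinatorial: group consecutive times by membership in \(S_{\mathrm{good}}\). A sketch of this grouping is below; the boolean sequence `good` is a hypothetical stand-in for the test \(\frac{|d(j)|^2}{\Vert \phi (j)\Vert ^2}<\frac{{\bar{\mathbf{{s}}}}^2}{\nu }\), and the horizon is truncated to keep the example finite.

```python
def partition_intervals(good, t0):
    """Return maximal intervals (k_l, k_{l+1}, in_good) on which 'good' is constant.
    good[i] says whether time t0+i lies in S_good."""
    intervals = []
    k = t0
    for i in range(1, len(good)):
        if good[i] != good[i - 1]:
            intervals.append((k, t0 + i, good[i - 1]))
            k = t0 + i
    intervals.append((k, t0 + len(good), good[-1]))
    return intervals

# hypothetical noise pattern: True = "good" (noise small relative to phi)
good = [True, True, False, True, True, True, False, False, True]
ivals = partition_intervals(good, t0=10)

# property (i): the first interval starts at t0
assert ivals[0][0] == 10
# properties (ii)-(iii): consecutive intervals tile the timeline and alternate type
for (a, b, g), (a2, b2, g2) in zip(ivals, ivals[1:]):
    assert b == a2 and g != g2
print(ivals)
```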

Now we analyse the behaviour during each interval.

Sub-Case 2.1: \([k_l,k_{l+1})\) lies in \(S_{\mathrm{bad}}\).

Let \(j\in [k_l,k_{l+1})\) be arbitrary. In this case

$$\begin{aligned} \frac{|d(j)|^2}{\Vert \phi (j)\Vert ^2} \ge \frac{{\bar{\mathbf{{s}}}}^2}{\nu } \qquad \text {or}\qquad \Vert \phi (j)\Vert =0; \end{aligned}$$

in either case

$$\begin{aligned} \Vert \phi (j)\Vert \le \underbrace{\frac{\nu ^{\frac{1}{2}}}{{\bar{\mathbf{{s}}}}}}_{=:c_{12}}|d(j)|. \end{aligned}$$

Also, applying Proposition 4 for one step, there exists a constant \(c_{13}\) so that

$$\begin{aligned} \Vert \phi (j)\Vert \le c_{13} (|d(j-1)|+|r(j-1)|). \end{aligned}$$

Then for \(j\in [k_l,k_{l+1})\), we have

$$\begin{aligned} \Vert \phi (j)\Vert \le \left\{ \begin{array}{ll} c_{12}|d(j)| &{} { j=k_l } \\ c_{13}(|d(j-1)|+|r(j-1)|) &{} j=k_l+1,k_l+2,\ldots ,k_{l+1}. \end{array} \right. \end{aligned}$$
(80)

Sub-Case 2.2: \([k_l,k_{l+1})\) lies in \(S_{\mathrm{good}}\).

Let \(j\in [k_l,k_{l+1})\) be arbitrary. First, suppose that \(k_{l+1}-k_l\le 4N\). From Proposition 4 it can be easily proven that there exists a constant \(c_{14}\) so that

$$\begin{aligned} \Vert \phi (t)\Vert \le c_{14} \lambda ^{t-k_l} \Vert \phi (k_l)\Vert + c_{14} \sum _{j=k_{l}}^{t-1} \lambda ^{t-j-1} (|d(j)|+|r(j)|), \qquad t\in [k_l,k_{l+1}]. \end{aligned}$$
(81)

Now suppose that \(k_{l+1}-k_l>4N\). This means, in particular, that there exist \(j_1<j_2\) so that

$$\begin{aligned} k_l\le {{\hat{t}}}_{2j_1} \le {{\hat{t}}}_{2j_2}\le k_{l+1}. \end{aligned}$$

To proceed, observe that \(\Vert \phi (j)\Vert \ne 0\) and

$$\begin{aligned} \frac{|d(j)|^2}{\Vert \phi (j)\Vert ^2} < \frac{{\bar{\mathbf{{s}}}}^2}{\nu }, \; j \in [k_l,k_{l+1}) . \end{aligned}$$
(82)

With \(0\le j_1<j_2\), it follows from Proposition 1 and the definition of \(\alpha (\cdot )\) that

$$\begin{aligned} \sum _{q=j_1}^{j_2-1} \alpha ({{\hat{t}}}_{2q})^2\le & {} \sum _{p=t_0+2j_1N,\phi (p)\ne 0}^{t_0+2j_2N-1} \frac{|e_{i^*}(p+1)|^2}{\Vert \phi (p)\Vert ^2}\nonumber \\\le & {} 2V({{\hat{t}}}_{2j_1})+4\sum _{p={{\hat{t}}}_{2j_1},\phi (p)\ne 0}^{{{\hat{t}}}_{2j_2}-1}{\frac{|d(p)|^2}{\Vert \phi (p)\Vert ^2}}; \end{aligned}$$
(83)

using the bound given in (82) which holds on \([k_l,k_{l+1})\), this becomes

$$\begin{aligned}&\sum _{q=j_1}^{j_2-1} \alpha ({{\hat{t}}}_{2q})^2\nonumber \\&\le 8{\bar{\mathbf{{s}}}}^2+8N(j_2-j_1)\frac{{\bar{\mathbf{{s}}}}^2}{\nu }, \;\;\text { for all }j_1,j_2\in {\mathbf{Z}}^+\text { s.t. } k_l\le {{\hat{t}}}_{2j_1} < {{\hat{t}}}_{2j_2} \le k_{l+1}. \end{aligned}$$
(84)

If \(j_2-j_1\le \nu \) then

$$\begin{aligned} \sum _{q=j_1}^{j_2-1} \alpha ({{\hat{t}}}_{2q})^2 \le 8{\bar{\mathbf{{s}}}}^2+8N\nu \frac{{\bar{\mathbf{{s}}}}^2}{\nu }= (8+8N){\bar{\mathbf{{s}}}}^2=c_{11}; \end{aligned}$$
(85)

so by Lemma 3(i), with \(\lambda _1\) defined above, if we define \(c_{15}:=c_{11}^{\frac{c_{11}+1}{2}}(\frac{2}{\lambda _1})^{\frac{4c_{11}}{\lambda _1^2}+1}\), then

$$\begin{aligned} {\prod }_{q=j_1}^{j_2-1} [c_6 \alpha ({{\hat{t}}}_{2q})] \le c_{15} \lambda ^{2N(j_2-j_1)},\;\;\text { for all } j_1,j_2\in {\mathbf{Z}}^+\;\text {s.t. }k_l\le {{\hat{t}}}_{2j_1} < {{\hat{t}}}_{2j_2}\le k_{l+1}. \end{aligned}$$
(86)

If \(j_2-j_1>\nu \), then by Lemma 3(ii) and our choice of \(\nu \) we have that

$$\begin{aligned} {\prod }_{q=j_1}^{j_2-1} [c_6 \alpha ({{\hat{t}}}_{2q})] \le c_{15} \lambda ^{2N(j_2-j_1)},\;\;\text { for all } j_1,j_2\in {\mathbf{Z}}^+\;\text {s.t. }k_l\le {{\hat{t}}}_{2j_1} < {{\hat{t}}}_{2j_2}\le k_{l+1}, \end{aligned}$$
(87)

as well.
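The mechanism behind Lemma 3, used in both (86) and (87), is essentially the AM-GM inequality: a fixed budget on \(\sum \alpha ^2\) forces the product \(\prod \alpha \) to decay once the number of factors is large, since the geometric mean of the \(\alpha _q^2\) cannot exceed their arithmetic mean. A numerical illustration follows; the budget \(S\) and the gains are hypothetical, and the bound shown is the elementary AM-GM one, not the lemma's exact constant.

```python
import random
random.seed(1)

n = 40
S = 8.0                  # hypothetical sum-of-squares budget, playing the role of c_7 or c_11
# random nonnegative gains, rescaled so that sum(alpha_q^2) equals S exactly
alpha = [random.uniform(0.0, 1.0) for _ in range(n)]
scale = (S / sum(t * t for t in alpha)) ** 0.5
alpha = [scale * t for t in alpha]

prod = 1.0
for t in alpha:
    prod *= t

# AM-GM on the squares: (prod alpha_q)^(2/n) <= (1/n) sum alpha_q^2,
# so prod alpha_q <= (S/n)^(n/2), which is tiny once n >> S
assert prod <= (S / n) ** (n / 2) + 1e-12
print(prod, (S / n) ** (n / 2))
```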

Now we can proceed to solve (73). The first step is to use (85) to bound the second occurrence of \(\alpha ({{\hat{t}}}_{2j})\) in (73), yielding

$$\begin{aligned} \Vert \phi ({{\hat{t}}}_{2j+2})\Vert \le c_6 \alpha ({{\hat{t}}}_{2j}) \Vert \phi ({{\hat{t}}}_{2j})\Vert + \underbrace{c_6(1+\sqrt{c_{11}})}_{=:c_{16}} \underbrace{\sum _{q={{\hat{t}}}_{2j}}^{{{\hat{t}}}_{2j+2}-1} (|r(q)|+|d(q)|)}_{=:{{\bar{w}}}(j)} . \end{aligned}$$
(88)

If we solve this iteratively and use the bounds in (86) and (87), we see that

$$\begin{aligned}&\Vert \phi ({{\hat{t}}}_{2j_2})\Vert \le c_{15} \lambda ^{2N(j_2-j_1)} \Vert \phi ({{\hat{t}}}_{2j_1})\Vert + \sum _{q=j_1}^{j_2-1} c_{16} c_{15} \left( \lambda ^{2N} \right) ^{j_2-1-q} {{\bar{w}}}(q),\nonumber \\&\text { for all } j_1,j_2\in {\mathbf{Z}}^+\;\text {s.t. }k_l\le {{\hat{t}}}_{2j_1} < {{\hat{t}}}_{2j_2}\le k_{l+1}. \end{aligned}$$
(89)

We can now use Proposition 4:

  • to provide a bound on \(\Vert \phi (t)\Vert \) between consecutive \({{\hat{t}}}_{2j}\)’s;

  • to provide a bound on \(\Vert \phi (t)\Vert \) on the beginning part of the interval \([k_l,k_{l+1})\) (until we get to the first admissible \({{\hat{t}}}_{2j}\));

  • to provide a bound on \(\Vert \phi (t)\Vert \) on the last part of the interval \([k_l,k_{l+1})\) (after the last admissible \({{\hat{t}}}_{2j}\)).

We conclude that there exists a constant \(c_{17}\ge c_{14}\) so that

$$\begin{aligned} \Vert \phi (t)\Vert \le c_{17} \lambda ^{t-k_l} \Vert \phi (k_l)\Vert + c_{17} \sum _{j=k_{l}}^{t-1} \lambda ^{t-j-1} (|d(j)|+|r(j)|), \qquad t\in [k_l,k_{l+1}]. \end{aligned}$$
(90)

Now we combine Sub-Case 2.1 and Sub-Case 2.2 into a general bound on \(\phi \). The following analysis is almost identical to the one at the end of the proof of Theorem 1. Define \(c_{18}:=\max \{c_{17},c_{13}, c_{13}c_{17}\}\).

Claim

The following bound holds:

$$\begin{aligned} \Vert \phi (t)\Vert \le c_{18}\lambda ^{t-t_0}\Vert \phi (t_0)\Vert +\sum _{j=t_0}^{t-1} c_{18} \lambda ^{t-j-1}(|d(j)|+|r(j)|), \qquad t\ge t_0. \end{aligned}$$
(91)

Proof of the Claim

If \([k_0,k_1)=[t_0,k_1)\subset S_{\mathrm{good}}\), then (91) is true for \(t\in [k_0,k_1]\) by (90). If \([k_0,k_1)\subset S_{\mathrm{bad}}\), then from (80) we obtain

$$\begin{aligned} \Vert \phi (j)\Vert \le \left\{ \begin{array}{ll} \Vert \phi (k_0)\Vert =\Vert \phi (t_0)\Vert &{} { j=k_0=t_0 } \\ c_{13}(|d(j-1)|+|r(j-1)|) &{} j=k_0+1,k_0+2,\ldots ,k_{1}, \end{array} \right. \end{aligned}$$

which means that (91) holds on \([k_0,k_1]\) for this case as well.

We now use induction: suppose that (91) is true for \(t\in [k_0,k_l]\); we need to prove that it holds for \(t\in (k_l,k_{l+1}]\) as well. If \([k_l,k_{l+1}) \subset S_{\mathrm{bad}}\), then from (80) we see that

$$\begin{aligned} \Vert \phi (j)\Vert \le c_{13}(|d(j-1)|+|r(j-1)|), \; j=k_l+1,k_l+2,\ldots ,k_{l+1}, \end{aligned}$$

which means (91) holds on \((k_l,k_{l+1}]\). On the other hand, if \([k_l,k_{l+1}) \subset S_{\mathrm{good}}\), then \(k_{l}-1\in S_{\mathrm{bad}}\); from (80) we have that

$$\begin{aligned} \Vert \phi (k_l)\Vert \le c_{13}(|d(k_l-1)|+|r(k_l-1)|). \end{aligned}$$

Using (90) to analyse the behaviour on \([k_l,k_{l+1}]\), we have

$$\begin{aligned} \Vert \phi (k)\Vert\le & {} c_{17}\lambda ^{k-k_l}[c_{13}(|d(k_l-1)|+|r(k_l-1)|)]+\sum _{j=k_l}^{k-1} c_{17} \lambda ^{k-j-1}(|d(j)|+|r(j)|)\nonumber \\\le & {} c_{18}\sum _{j=k_l-1}^{k-1} \lambda ^{k-j-1}(|d(j)|+|r(j)|),\qquad k\in [k_l,k_{l+1}], \end{aligned}$$
(92)

which implies that (91) holds. \(\square \)

This concludes the proof. \(\square \)

Miller, D.E., Shahab, M.T. Classical pole placement adaptive control revisited: linear-like convolution bounds and exponential stability. Math. Control Signals Syst. 30, 19 (2018). https://doi.org/10.1007/s00498-018-0225-1
