Abstract
Nearly every model predictive control (MPC) algorithm is premised on knowledge of the system’s state. As a result, state estimation is vital to good MPC performance. Moving horizon estimation (MHE) is an optimization-based state estimation algorithm. Similar to MPC, it relies on the minimization of a sum of stage costs subject to a dynamic model. Unlike MPC, however, conditions under which MHE is robustly stable have been slow to emerge. Recently, several results have appeared about the robust stability of MHE. We generalize the result on the robust stability of MHE without a max-term presented in Müller (Automatica 79:306–314, 2017), using assumptions inspired by the result in Hu (Robust stability of optimization-based state estimation under bounded disturbances. ArXiv e-prints, 2017). Furthermore, we show that all systems that are covered by the assumptions used in those previous works satisfy a certain form of exponential incremental input/output-to-state stability.
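As a concrete, purely illustrative instance of the optimization described above, the following sketch sets up a least-squares estimation problem for a scalar linear system \(x^+ = ax + w\), \(y = x + v\), with the initial state and process disturbances as decision variables. All values here (system parameter, horizon, prior weight) are assumed for the example and are not taken from the chapter.

```python
import numpy as np

# Illustrative least-squares state estimation for x(k+1) = a x(k) + w(k),
# y(k) = x(k) + v(k), over a horizon of N measurements.
# Decision variables: z = [x0, w(0), ..., w(N-1)].
a, N, q_prior = 0.9, 10, 0.01          # q_prior weights a (deliberately wrong) prior xbar = 0
x_true0 = 1.0
y = np.array([x_true0 * a**k for k in range(N)])   # noise-free measurements

# Stack the residuals r = A z - b:
#   sqrt(q_prior) * (x0 - 0),  w(k),  and  x(k) - y(k),
# with x(k) = a^k x0 + sum_{j<k} a^(k-1-j) w(j).
nz = 1 + N
A = np.zeros((1 + N + N, nz))
b = np.zeros(1 + N + N)
A[0, 0] = np.sqrt(q_prior)             # prior residual
A[1:1 + N, 1:] = np.eye(N)             # process-disturbance residuals
for k in range(N):                     # measurement residuals
    A[1 + N + k, 0] = a**k
    for j in range(k):
        A[1 + N + k, 1 + j] = a**(k - 1 - j)
    b[1 + N + k] = y[k]

z, *_ = np.linalg.lstsq(A, b, rcond=None)
x0_hat, w_hat = z[0], z[1:]
```

With perfect measurements and a lightly weighted prior, the estimate recovers the true initial state closely and attributes almost nothing to the process disturbances.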
Notes
- 1. Available at http://jbrwww.che.wisc.edu/home/jbraw/mpc/figures.html.
- 2. Available at www.casadi.org.
- 3. Available at https://bitbucket.org/rawlings-group/octave-mpctools.
References
Andersson, J.: A general-purpose software framework for dynamic optimization. Ph.D. thesis, Arenberg Doctoral School, KU Leuven (2013)
Findeisen, P.K.: Moving horizon state estimation of discrete time systems. Master’s thesis, University of Wisconsin-Madison (1997)
Haseltine, E.L., Rawlings, J.B.: Critical evaluation of extended Kalman filtering and moving horizon estimation. Ind. Eng. Chem. Res. 44(8), 2451–2460 (2005)
Hu, W.: Robust stability of optimization-based state estimation under bounded disturbances. ArXiv e-prints (2017)
Hu, W., Xie, L., You, K.: Optimization-based state estimation under bounded disturbances. In: 2015 54th IEEE Conference on Decision and Control (CDC), pp. 6597–6602 (2015). https://doi.org/10.1109/CDC.2015.7403258
Jazwinski, A.H.: Stochastic Processes and Filtering Theory. Academic Press, New York (1970)
Ji, L., Rawlings, J.B., Hu, W., Wynn, A., Diehl, M.M.: Robust stability of moving horizon estimation under bounded disturbances. IEEE Trans. Autom. Control 61(11), 3509–3514 (2016)
Julier, S.J., Uhlmann, J.K.: Unscented filtering and nonlinear estimation. Proc. IEEE 92(3), 401–422 (2004)
Kalman, R.E.: A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng. 82(1), 35–45 (1960)
Meadows, E.S., Muske, K.R., Rawlings, J.B.: Constrained state estimation and discontinuous feedback in model predictive control. In: Proceedings of the 1993 European Control Conference, pp. 2308–2312. European Automatic Control Council (1993)
Michalska, H., Mayne, D.Q.: Moving horizon observers and observer-based control. IEEE Trans. Autom. Control 40(6), 995–1006 (1995)
Müller, M.A.: Nonlinear moving horizon estimation for systems with bounded disturbances. In: 2016 American Control Conference (ACC), pp. 883–888 (2016). https://doi.org/10.1109/ACC.2016.7525026
Müller, M.A.: Nonlinear moving horizon estimation in the presence of bounded disturbances. Automatica 79, 306–314 (2017). https://doi.org/10.1016/j.automatica.2017.01.033
Muske, K.R., Rawlings, J.B., Lee, J.H.: Receding horizon recursive state estimation. In: Proceedings of the 1993 American Control Conference, pp. 900–904 (1993)
Pannocchia, G., Rawlings, J.B.: Disturbance models for offset-free MPC control. AIChE J. 49(2), 426–437 (2003)
Rajamani, R.: Observers for nonlinear systems: part 2: an overview of the special issue. IEEE Control Syst. Mag. 37(4), 30–32 (2017). https://doi.org/10.1109/MCS.2017.2696758
Rajamani, R.: Observers for nonlinear systems: introduction to part 1 of the special issue. IEEE Control Syst. Mag. 37(3), 22–24 (2017). https://doi.org/10.1109/MCS.2017.2674400
Rao, C.V.: Moving horizon strategies for the constrained monitoring and control of nonlinear discrete-time systems. Ph.D. thesis, University of Wisconsin-Madison (2000)
Rao, C.V., Rawlings, J.B., Mayne, D.Q.: Constrained state estimation for nonlinear discrete-time systems: stability and moving horizon approximations. IEEE Trans. Autom. Control 48(2), 246–258 (2003)
Rawlings, J.B., Ji, L.: Optimization-based state estimation: current status and some new results. J. Process Control 22, 1439–1444 (2012)
Rawlings, J.B., Mayne, D.Q.: Model Predictive Control: Theory and Design, 576 p. Nob Hill Publishing, Madison, WI (2009). ISBN 978-0-9759377-0-9
Rawlings, J.B., Risbeck, M.J.: On the equivalence between statements with epsilon-delta and K-functions. Technical Report 2015–01, TWCCC Technical Report (2015). http://jbrwww.che.wisc.edu/tech-reports/twccc-2015-01.pdf
Rawlings, J.B., Mayne, D.Q., Diehl, M.M.: Model Predictive Control: Theory, Design, and Computation, 2nd edn., 770 p. Nob Hill Publishing, Madison, WI (2017). ISBN 978-0-9759377-3-0
Romanenko, A., Castro, J.A.A.M.: The unscented filter as an alternative to the EKF for nonlinear state estimation: a simulation case study. Comput. Chem. Eng. 28(3), 347–355 (2004)
Romanenko, A., Santos, L.O., Afonso, P.A.F.N.A.: Unscented Kalman filtering of a simulated pH system. Ind. Eng. Chem. Res. 43, 7531–7538 (2004)
Sontag, E.D.: Smooth stabilization implies coprime factorization. IEEE Trans. Autom. Control 34(4), 435–443 (1989). https://doi.org/10.1109/9.28018
Sontag, E.D.: Mathematical Control Theory, 2nd edn. Springer, New York (1998)
Sontag, E.D., Wang, Y.: Output-to-state stability and detectability of nonlinear systems. Syst. Control Lett. 29, 279–290 (1997)
Tenny, M.J., Rawlings, J.B.: Efficient moving horizon estimation and nonlinear model predictive control. In: Proceedings of the American Control Conference, pp. 4475–4480. Anchorage, Alaska (2002)
Vachhani, P., Narasimhan, S., Rengaswamy, R.: Robust and reliable estimation via unscented recursive nonlinear dynamic data reconciliation. J. Process Control 16(10), 1075–1086 (2006)
Appendix
Proof (Proposition 1)
Without loss of generality, assume that \(x_0 = 0\) and that \(V(0) = 0\). Because \(V(\cdot)\) is Lipschitz continuous at the origin, there exist \(\delta > 0\) and \(L > 0\) such that if \(\left\vert x\right\vert \leq \delta\), then \(\left\vert V(x)\right\vert \leq L\left\vert x\right\vert\). Let \((\overline{s}(n))\) be a strictly increasing and unbounded sequence such that \(\overline{s}(n) > \delta\) for all \(n \in \mathbb{I}_{\geq 0}\). Define a sequence \((\tilde{M}(n))\) such that
for all \(n \in \mathbb{I}_{\geq 0}\). We have that \(\tilde{M}(n)\) is finite for all \(n \in \mathbb{I}_{\geq 0}\) because \(V(\cdot)\) is locally bounded. Define another sequence \((M(n))\) such that
Note that (M(n)) is a nondecreasing sequence. Now define a piecewise linear function
The function \(\tilde{\alpha }(s)\) is continuous, nondecreasing, and \(\tilde{\alpha }(0) = 0\). Furthermore, because it is piecewise-linear, it is locally Lipschitz. Because \(\tilde{\alpha }(\overline{s}(n - 1)) = M(n)\), we also have that
We have a similar bound for \(\left\vert x\right\vert \in (\delta, \overline{s}(0)]\). From the Lipschitz bound, we have that \(\left\vert V(x)\right\vert \leq \tilde{\alpha}(\left\vert x\right\vert)\) if \(\left\vert x\right\vert \leq \delta\), and thus for all \(x \in \mathcal{C}\). Finally, let \(\alpha(s) := s + \tilde{\alpha}(s)\). We have that \(\alpha(\cdot)\) is strictly increasing, continuous, zero at the origin, and unbounded. Thus \(\alpha(\cdot) \in \mathcal{K}_{\infty}\). Furthermore, \(\alpha(\cdot)\) is piecewise linear and thus locally Lipschitz, and is therefore the required bound.
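The envelope idea in this proof can be sketched numerically. The example function \(V\) below is assumed for illustration (locally bounded, Lipschitz at the origin); a running maximum over a grid stands in for the sequence \((M(n))\), and adding the identity term yields a strictly increasing upper bound, as in the construction of \(\alpha(\cdot)\). A full construction would interpolate piecewise-linearly between knots, which this discrete sketch omits.

```python
import numpy as np

# Assumed example function, locally bounded and Lipschitz at the origin:
# |V(x)| <= 3|x| for all x.
def V(x):
    return np.abs(x) * (2.0 + np.sin(5.0 * x))

xs = np.linspace(0.0, 10.0, 1001)     # grid of radii standing in for the balls |x| <= s
vals = np.abs(V(xs))
env = np.maximum.accumulate(vals)     # nondecreasing envelope, like the sequence (M(n))
alpha_vals = xs + env                 # adding s makes the bound strictly increasing and zero at 0
```

At every grid point the bound dominates \(\left\vert V\right\vert\), starts at zero, and is strictly increasing, mirroring the properties claimed for \(\alpha(\cdot)\).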
Proof (Proposition 2)
We prove this proposition by showing first that Statement 1 implies Statement 2, next that Statement 2 implies Statement 3, and then finally that Statement 3 implies Statement 4. Because Statement 4 is a restatement of Statement 1 with a particular form of the \(\mathcal{L}\) function, the proof is then complete.
Proof (Statement 1 Implies Statement 2 (adapted from [1], Lemma 6))
Fix \(\eta \in (0,1)\) and \(\overline{s} > 0\). We seek \(T\) such that \(\sigma_x(s)\phi(T) \leq \eta s\) for all \(s \leq \overline{s}\). This condition is equivalent to \(\phi(T) \leq \eta s/\sigma_x(s)\) for all \(s \in (0, \overline{s}]\) and \(\phi(T) \leq \eta \lim_{s \downarrow 0} s/\sigma_x(s)\). Because \(\sigma_x(s)\) is Lipschitz continuous at zero, there exist \(L > 0\) and \(\delta > 0\) such that \(\sigma_x(s) \leq Ls\) for \(0 \leq s \leq \delta\). Thus, we have that \(s/\sigma_x(s) \geq s/(Ls) = 1/L > 0\) for \(s \leq \delta\). Furthermore, \(s/\sigma_x(s) > 0\) and is continuous for all \(s > 0\). Thus we have that \(\inf_{s \leq \overline{s}} s/\sigma_x(s) =: \zeta > 0\). We then require \(T\) such that \(\phi(T) \leq \eta\zeta \leq \eta s/\sigma_x(s)\) for all \(s \leq \overline{s}\). Because \(\eta, \zeta > 0\) and \(\phi(\cdot) \in \mathcal{L}\), there exists a \(T\) that fulfills this condition. Finally, because \(\sigma_x(s)\) is Lipschitz at the origin, we have that \(\beta(s, 0)\) is Lipschitz at \(s = 0\).
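The search for \(T\) in this argument can be illustrated numerically. The functions below are assumed examples (not from the chapter): \(\sigma_x(s) = 2s + s^2\) is Lipschitz at the origin and \(\phi(T) = 2^{-T}\) is a decreasing \(\mathcal{L}\)-function, so \(\zeta = 1/(2 + \overline{s})\) and the smallest admissible \(T\) is found by direct search.

```python
# Assumed example functions (not from the chapter): sigma_x is Lipschitz at the
# origin, phi is a decreasing L-function; eta and sbar are fixed as in the proof.
eta, sbar = 0.5, 3.0
sigma_x = lambda s: 2.0 * s + s * s
phi = lambda T: 2.0 ** (-T)
zeta = 1.0 / (2.0 + sbar)          # inf of s / sigma_x(s) = 1 / (2 + s) over (0, sbar]

T = 0
while phi(T) > eta * zeta:         # smallest T with phi(T) <= eta * zeta
    T += 1

# verify the target inequality sigma_x(s) * phi(T) <= eta * s on a grid of (0, sbar]
for i in range(1, 301):
    s = sbar * i / 300.0
    assert sigma_x(s) * phi(T) <= eta * s + 1e-12
```

For these example functions, \(\eta\zeta = 0.1\) and the search stops at \(T = 4\), where \(\phi(4) = 0.0625\).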
Remark 3
Because the only place where \(s/\sigma_x(s)\) might equal zero is at \(s = 0\), this proof also implies that if there exist a single \(\overline{s} > 0\), \(\eta \in (0,1)\), and \(T > 0\) such that \(\sigma_x(s)\phi(T) \leq \eta s\) for all \(s \in [0, \overline{s}]\), then we can find such a \(T\) for every \(\overline{s} > 0\).
Proof (Statement 2 Implies Statement 3)
For brevity, we define \(\varDelta x(k) := x_1(k) - x_2(k)\), \(\varDelta\mathbf{w} := \mathbf{w}_1 - \mathbf{w}_2\), and \(\varDelta\mathbf{y} := \mathbf{y}_1 - \mathbf{y}_2\). Fix \(\overline{s} > 0\) and choose \(T\) such that \(\beta(s, T) \leq \eta s\) for all \(s \in [0, \overline{s}]\). First, we prove by induction on \(n\) that
for all \(k \geq 0\) and \(n \geq 1\) if \(\left\vert \varDelta x(0)\right\vert \leq \overline{s}\).
Base Case Suppose that \(\left\vert \varDelta x(0)\right\vert \leq \overline{s}\). Then we have that
by the fact that \(\beta(\cdot)\) is nonincreasing in its second argument and by Statement 2.
Inductive Case Suppose that (4) holds for some \(n \in \mathbb{I}_{\geq 1}\). We have that
for all k ≥ 0. Suppose further that
We then apply the original i-IOSS bound to obtain
Furthermore, we have that
Thus the \(\mathcal{KL}\) function bound is unnecessary, and therefore we have that
trivially. Now suppose that
Because we have that \(\eta ^{n}\left \vert \varDelta x(0)\right \vert \leq \overline{s}\), we have that \(\left \vert \varDelta x(k + nT)\right \vert \leq \overline{s}\) so we can apply the i-IOSS bound from \(\varDelta x(k + nT)\) to obtain
Thus we have that (4) for n implies (4) for n + 1, completing the proof by induction.
Now we bound \(\left \vert \varDelta x(k)\right \vert\) for \(k \in \mathbb{I}_{0:T-1}\). We have that
because \(\beta(\cdot)\) is nonincreasing in its second argument. Because \(\beta(s, 0)\) is Lipschitz at \(s = 0\), by Proposition 1 there exists some locally Lipschitz function \(\bar{\alpha}(\cdot) \in \mathcal{K}_{\infty}\) such that \(\beta(s, 0) \leq \bar{\alpha}(s)\) for all \(s \in \mathbb{R}_{\geq 0}\). Because \(\bar{\alpha}(\cdot)\) is locally Lipschitz and \(\beta(s, 0) \geq s\), there exists some \(K \geq 1\) such that for all \(s \in [0, \overline{s}]\), we have that \(\bar{\alpha}(s) \leq Ks\). Thus we have that
for all \(\varDelta x(0)\) such that \(\left \vert \varDelta x(0)\right \vert \leq \overline{s}\). Because we have that (4) holds for n ≥ 1 and k ≥ 0 and that (5) holds for n = 0 and k ≥ 0, we can combine these equations to write
for all \(\varDelta x(0)\) such that \(\left\vert \varDelta x(0)\right\vert \leq \overline{s}\). We can then eliminate the index \(n\) using the floor function \(\lfloor\cdot\rfloor\) to obtain the bound
Note that \(\eta ^{\lfloor k/T\rfloor }\leq (1/\eta )\eta ^{k/T}\), so we have that
Finally, let \(C := K/\eta\) and \(\lambda := \eta^{1/T}\). We have that
for all \(\varDelta x(0)\) such that \(\left \vert \varDelta x(0)\right \vert \leq \overline{s}\).
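The last two steps of this argument, absorbing the floor function into the constant, can be checked numerically. The constants below are assumed example values (none come from the chapter); the check is that \(K\eta^{\lfloor k/T\rfloor} \leq C\lambda^k\) with \(C := K/\eta\) and \(\lambda := \eta^{1/T}\).

```python
# Assumed example constants (not from the chapter): eta in (0,1), the horizon T
# from Statement 2, and K >= 1 from the Lipschitz bound on beta(s, 0).
eta, T, K = 0.5, 4, 2.0
C, lam = K / eta, eta ** (1.0 / T)    # C := K/eta, lambda := eta^(1/T)

# check K * eta^floor(k/T) <= C * lambda^k, i.e. the floor is absorbed into C
for k in range(200):
    assert K * eta ** (k // T) <= C * lam ** k + 1e-12
```

The inequality holds because \(\lfloor k/T\rfloor > k/T - 1\), so \(\eta^{\lfloor k/T\rfloor} \leq \eta^{k/T - 1} = (1/\eta)\eta^{k/T}\), exactly the step used above.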
Proof (Statement 3 Implies Statement 4)
We prove this statement in two steps. We first show that although both λ and C in Statement 3 depend on \(\overline{s}\), the dependence of λ on \(\overline{s}\) can be removed by increasing \(C(\overline{s})\). We can remove this dependence because the term dependent on the initial conditions decays to be less than some smaller \(\overline{s}\) within finite time. We then turn the function \(C(\overline{s})s\) into a \(\mathcal{K}_{\infty }\) function.
First, let \((\overline{s}(n))\) be a strictly increasing and unbounded sequence such that \(\overline{s}(n) > 0\) for all \(n \in \mathbb{I}_{1:\infty}\). By Statement 3, there exist sequences \((C(n))\) and \((\lambda(n))\) such that \(C(n) \geq 1\), \(\lambda(n) \in (0,1)\), and
for all \(\varDelta x(0)\) such that \(\left \vert \varDelta x(0)\right \vert \leq \overline{s}(n)\) for all \(n \in \mathbb{I}_{1:\infty }\). Without loss of generality, assume that (C(n)) and that (λ(n)) are nondecreasing. Let \(\underline{\lambda }\,:=\,\lambda (1)\). We prove by induction that there exists a sequence \( (\tilde{C}(n)) \) such that
for all \(\varDelta x(0)\) such that \(\left \vert \varDelta x(0)\right \vert \leq \overline{s}(n)\) and all \(n \in \mathbb{I}_{1:\infty }\). The base case is given by Statement 3 for \(\overline{s}(1)\).
Inductive Case Suppose for some n we have that
for all \(\varDelta x(0)\) such that \(\left \vert \varDelta x(0)\right \vert \leq \overline{s}(n)\). We also have that
for all \(\varDelta x(0)\) such that \(\left \vert \varDelta x(0)\right \vert \leq \overline{s}(n + 1)\). Let
in which \(\lceil\cdot\rceil\) is the ceiling function. For all \(\varDelta x(0)\) such that \(\left\vert \varDelta x(0)\right\vert \leq \overline{s}(n+1)\), we have that \(C(n+1)\lambda^{N}\left\vert \varDelta x(0)\right\vert \leq \overline{s}(n)\). Suppose that \(\gamma_w(\|\varDelta\mathbf{w}\|_{0:k-1}) \oplus \gamma_y(\|\varDelta\mathbf{y}\|_{0:k-1}) < C(n+1)\lambda^{N}\left\vert \varDelta x(0)\right\vert\). Then we have that \(\left\vert \varDelta x(k)\right\vert \leq C(n+1)\lambda^{N}\left\vert \varDelta x(0)\right\vert\) alone, and thus we can apply the bound for all \(\left\vert \varDelta x(0)\right\vert \leq \overline{s}(n)\) to obtain, for all \(k \geq N\),
Define
and we have the required bound. Now suppose that \(\gamma _{w}(\|\varDelta \mathbf{w}\|_{0:k-1}) \oplus \gamma _{y}(\|\varDelta \mathbf{y}\|_{0:k-1}) \geq C(n + 1)\lambda ^{N}\left \vert \varDelta x(0)\right \vert\). Then, because the \(\mathcal{KL}\) function is nonincreasing, we have that
which is the required bound. Both of these bounds apply for k ≥ N. For k < N, note that
because \(\underline{\lambda }\leq \lambda (n + 1)\). Therefore we also have that
for k < N and thus for all \(k \in \mathbb{I}_{\geq 0}\). Thus the statement is proven for n + 1, and the first part of the proof is complete.
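The key step of this part, choosing \(N\) so that the decaying term drops below the smaller bound \(\overline{s}(n)\), can be checked with concrete numbers. The values below are assumed examples (the chapter's displayed definition of \(N\) is not reproduced above); we simply take the smallest integer \(N\) with \(C(n+1)\lambda^{N}\overline{s}(n+1) \leq \overline{s}(n)\) and verify that it works.

```python
import math

# Assumed example values (not from the chapter): C(n+1), lambda(n+1), and two
# bounds sbar(n) < sbar(n+1).
C_next, lam_next, s_n, s_np1 = 4.0, 0.5, 1.5, 2.0

# smallest integer N with C(n+1) * lambda^N * sbar(n+1) <= sbar(n)
N = math.ceil(math.log(s_n / (C_next * s_np1)) / math.log(lam_next))
assert C_next * lam_next ** N * s_np1 <= s_n
```

Here \(N = \lceil \log(0.1875)/\log(0.5)\rceil = 3\), and indeed \(4 \cdot 0.5^3 \cdot 2 = 1 \leq 1.5\).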
Next, define
Note that this function is Lipschitz continuous at the origin and locally bounded. Furthermore, note that by construction we have that
for all \(\varDelta x(0)\). By Proposition 1, there exists a locally Lipschitz function \(\alpha (\cdot ) \in \mathcal{ K}_{\infty }\) such that \(\tilde{\alpha }(s) \leq \alpha (s)\) for all \(s \in \mathbb{R}_{\geq 0}\). Thus we have that
for all \(\varDelta x(0)\), and so the result is established.
Proof (Proposition 3)
By Assumptions 2 and 3, the MHE problem has a solution \((\hat{x}(0),\widehat{\mathbf{d}})\). Denote the estimated state at time k within the MHE problem as \(\hat{x}(k)\). By Assumption 3, we have that
in which \(\hat{e}_{p} := \overline{x} - \hat{x}(0)\). Furthermore, by optimality, we have that
Combining these bounds and rearranging, we obtain the following bounds
From the system’s i-IOSS bound, we have that
We next substitute (6) and (7) into this expression.
Note that because \(\underline{\gamma }_{p}(s) \leq \overline{\gamma }_{p}(s) \leq 2\overline{\gamma }_{p}(s)\), we have that \(s \leq \underline{\gamma }_{p}^{-1}(2\overline{\gamma }_{p}(s))\). A similar argument follows for \(\underline{\gamma }_{s}(\cdot )\) and \(\overline{\gamma }_{s}(\cdot )\). Thus we have that the term \(\beta (2\left \vert e_{p}\right \vert,k) \leq \beta (2\underline{\gamma }_{p}^{-1}(2\overline{\gamma }_{p}(\left \vert e_{p}\right \vert )),k)\) and the term \(\gamma _{d}(2\|\mathbf{d}\|_{0:N-1}) \leq \gamma _{d}(\underline{\gamma }_{s}^{-1}(2N\overline{\gamma }_{s}(\|\mathbf{d}\|_{0:N-1})))\), so we can eliminate them from the maximization. Thus we have
for all k ≥ 0, which is the desired result.
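The inversion step used just above, \(s \leq \underline{\gamma}_p^{-1}(2\overline{\gamma}_p(s))\), can be sanity-checked numerically. The gains below are assumed examples (not from the chapter) satisfying \(\underline{\gamma}_p(s) \leq \overline{\gamma}_p(s)\).

```python
import numpy as np

# Assumed example gains (not from the chapter):
# gamma_lower(s) = s^2 and gamma_upper(s) = 2 s^2, so gamma_lower <= gamma_upper.
g_lo_inv = lambda t: np.sqrt(t)        # inverse of gamma_lower(s) = s^2
g_up = lambda s: 2.0 * s ** 2

s = np.linspace(0.0, 5.0, 501)
# the inversion step: s <= gamma_lower^{-1}(2 * gamma_upper(s)), which equals 2 s here
assert np.all(s <= g_lo_inv(2.0 * g_up(s)) + 1e-9)
```

For these gains the right-hand side evaluates to \(2s\), so the inequality holds with room to spare, matching the monotonicity argument in the proof.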
Copyright information
© 2019 Springer International Publishing AG, part of Springer Nature
About this chapter
Allan, D.A., Rawlings, J.B. (2019). Moving Horizon Estimation. In: Raković, S., Levine, W. (eds) Handbook of Model Predictive Control. Control Engineering. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-77489-3_5
Print ISBN: 978-3-319-77488-6
Online ISBN: 978-3-319-77489-3