
Constrained Model Predictive Control of Processes with Uncertain Structure Modeled by Jump Markov Linear Systems

Chapter in the book Variable-Structure Approaches, part of the book series Mathematical Engineering (MATHENGIN).

Abstract

Linear systems with abrupt changes in their structure, e.g. caused by component failures in a production system, can be modeled as jump Markov linear systems (JMLS). This chapter proposes a finite-horizon model predictive control (MPC) approach for discrete-time JMLS that considers input constraints as well as constraints on the expected value of the state trajectory. Recursive prediction schemes are formulated for the expected value of the state and for a quadratic cost criterion; these schemes account for the dependencies on the input trajectory explicitly. Owing to the proposed prediction scheme, the MPC problem can be formulated as a quadratic program (QP) with low computational effort compared to existing approaches. The resulting properties concerning stability and computational complexity are investigated and demonstrated in illustrative simulation studies.


Notes

  1. The following equations differ from other work, like [1], due to the different definition of the conditional expectations \(\bar{x}_i\llbracket j\rrbracket \) (cf. Remark 1).

  2. Only the dependencies on \(n_\theta \) and N are stated, since these statements are merely meant to demonstrate the improvement over the case in which all \(n_{\theta }^{N+1}\) Markov state trajectories are calculated.

References

  1. Costa OLV, Fragoso MD, Marques RP (2005) Discrete-time Markov jump linear systems. Probability and its applications. Springer, New York

  2. Maciejowski JM (2002) Predictive control with constraints. Prentice-Hall, New Jersey

  3. Costa OLV, Filho EOA (1996) Discrete-time constrained quadratic control of Markovian jump linear systems. In: Conference on decision and control, vol 2, pp 1763–1764

  4. Costa OLV, Filho EOA, Boukas EK, Marques RP (1999) Constrained quadratic state feedback control of discrete-time Markovian jump linear systems. Automatica 35(4):617–626

  5. do Val JBR, Başar T (1999) Receding horizon control of jump linear systems and a macroeconomic policy problem. J Econ Dyn Control 23(8):1099–1131

  6. Vargas AN, do Val JBR, Costa EF (2004) Receding horizon control of Markov jump linear systems subject to noise and unobserved state chain. In: Conference on decision and control, vol 4, pp 4381–4386

  7. Park B-G, Kwon WH (2002) Robust one-step receding horizon control of discrete-time Markovian jump uncertain systems. Automatica 38(7):1229–1235

  8. Vargas AN, Furloni W, do Val JBR (2006) Constrained model predictive control of jump linear systems with noise and non-observed Markov state. In: American control conference

  9. Vargas AN, Furloni W, do Val JBR (2007) Control of Markov jump linear systems with state and input constraints: a necessary optimality condition. In: 3rd IFAC symposium on system, structure and control, vol 3, pp 250–255

  10. Vargas AN, Furloni W, do Val JBR (2013) Second moment constraints and the control problem of Markov jump linear systems. Numer Linear Algebra Appl 20(2):357–368

  11. Blackmore L, Bektassov A, Ono M, Williams BC (2007) Robust, optimal predictive control of jump Markov linear systems using particles. In: Hybrid systems: computation and control. Lecture notes in computer science, vol 4416. Springer, New York, pp 104–117

  12. Blackmore L, Ono M, Bektassov A, Williams BC (2010) A probabilistic particle-control approximation of chance-constrained stochastic predictive control. IEEE Trans Robot 26(3):502–517

  13. Yin Y, Shi Y, Liu F (2013) Constrained model predictive control on convex polyhedron stochastic linear parameter varying systems. Int J Innov Comput Inf Control 9(10):4193–4204

  14. Yin Y, Liu Y, Karimi HR (2014) A simplified predictive control of constrained Markov jump system with mixed uncertainties. Abstr Appl Anal, Special Issue:1–7

  15. Lu J, Li D, Xi Y (2012) Constrained MPC of uncertain discrete-time Markovian jump linear systems. In: 31st Chinese control conference, pp 4131–4136

  16. Song Y, Liu S, Wei G (2015) Constrained robust distributed model predictive control for uncertain discrete-time Markovian jump linear system. J Frankl Inst 352(1):73–92

  17. Dombrovskii VV, Dombrovskii DV, Lyashenko EA (2005) Predictive control of random-parameter systems with multiplicative noise. Application to investment portfolio optimization. Autom Remote Control 66(4):583–595

  18. Dombrovskii VV, Ob"edko TYu (2011) Predictive control of systems with Markovian jumps under constraints and its application to the investment portfolio optimization. Autom Remote Control 72(5):989–1003

  19. Yan Z, Wang J (2013) Stochastic model predictive control of Markov jump linear systems based on a two-layer recurrent neural network. In: IEEE international conference on information and automation, pp 564–569

  20. Bernardini D, Bemporad A (2012) Stabilizing model predictive control of stochastic constrained linear systems. IEEE Trans Autom Control 57(6):1468–1480

  21. Patrinos P, Sopasakis P, Sarimveis H, Bemporad A (2014) Stochastic model predictive control for constrained discrete-time Markovian switching systems. Automatica 50(10):2504–2514

  22. Lu J, Xi Y, Li D, Cen L (2014) Probabilistic constrained stochastic model predictive control for Markovian jump linear systems with additive disturbance. In: 19th IFAC world congress, vol 19, pp 10469–10474

  23. Chitraganti S, Aberkane S, Aubrun C, Valencia-Palomo G, Dragan V (2014) On control of discrete-time state-dependent jump linear systems with probabilistic constraints: a receding horizon approach. Syst Control Lett 74:81–89

  24. Tonne J, Jilg M, Stursberg O (2015) Constrained model predictive control of high dimensional jump Markov linear systems. In: American control conference, pp 2993–2998

  25. Jerez JL, Kerrigan EC, Constantinides GA (2011) A condensed and sparse QP formulation for predictive control. In: 50th IEEE conference on decision and control, pp 5217–5222

  26. Seber GAF, Lee AJ (2003) Linear regression analysis, 2nd edn. Wiley series in probability and statistics. Wiley, New York

Author information

Correspondence to Jens Tonne.

Appendices

Appendix A—Proof of Theorem 2

This appendix states the proof of Theorem 2. For the sake of brevity, a trajectory of the Markov chain \(\mathscr {M}\) defined by \((\theta \llbracket 0\rrbracket = \theta _0, \dots , \theta \llbracket j\rrbracket = \theta _j)\) is denoted by \(({\theta _0,\dots ,\theta _j})\) in this proof. The corresponding realization probability is given by:

$$\begin{aligned} p_{(\theta _0, \dots , \theta _j)}:= \mathbf {P}\left( \theta \llbracket 0\rrbracket = \theta _0, \dots , \theta \llbracket j\rrbracket = \theta _j\right) = \mu _{\theta _0}\!(k) \cdot \prod _{l=0}^{j-1} p_{\theta _{l+1},\theta _l} . \end{aligned}$$
(49)

Let \(\Lambda _j\) denote the set of all possible Markov state trajectories with j transitions.
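For illustration, (49) can be evaluated directly: the probability of a trajectory is the initial probability of \(\theta _0\) multiplied by the transition probabilities along the trajectory, and the probabilities of all trajectories in \(\Lambda _j\) sum to one. A minimal Python sketch, in which the two-state chain, the matrix P, and the distribution mu0 are hypothetical (P is stored row-stochastically, i.e. P[i, m] plays the role of \(p_{m,i}\)):

```python
import numpy as np
from itertools import product

# Hypothetical two-state Markov chain:
# P[i, m] = P(theta[l+1] = m | theta[l] = i), the row-stochastic
# counterpart of the chapter's p_{theta_{l+1}, theta_l}.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
mu0 = np.array([0.5, 0.5])   # initial distribution mu(k)

def trajectory_probability(traj, P, mu0):
    """Realization probability of a Markov state trajectory, cf. (49)."""
    p = mu0[traj[0]]
    for l in range(len(traj) - 1):
        p *= P[traj[l], traj[l + 1]]
    return p

# Summing over all trajectories in Lambda_j (j transitions, hence
# n_theta**(j+1) trajectories) must give probability one.
j = 3
total = sum(trajectory_probability(t, P, mu0)
            for t in product(range(2), repeat=j + 1))
```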

By applying the system dynamics (1) j times recursively and consecutively expanding the products, one obtains for the expected cost at time step \(k+j\):

$$\begin{aligned}&\text {E}\left[ x^{\intercal }\llbracket j\rrbracket \,Q_{\theta _j} \,x\llbracket j\rrbracket \right] \nonumber \\&\quad = \text {E}\left[ \left( A_{\theta _{j-1}}\,x\llbracket j-1 \rrbracket + B_{\theta _{j-1}}\, u\llbracket j-1 \rrbracket + G_{\theta _{j-1}}\, w\llbracket j-1 \rrbracket \right) ^{\intercal }\,Q_{\theta _j}\cdot \ldots \right. \nonumber \\&\qquad \quad \left. \ldots \cdot \left( A_{\theta _{j-1}}\,x\llbracket j-1 \rrbracket + B_{\theta _{j-1}}\, u\llbracket j-1 \rrbracket + G_{\theta _{j-1}}\, w\llbracket j-1 \rrbracket \right) \right]&\\&\qquad \qquad \qquad \qquad \qquad \qquad \vdots&\nonumber \\&\quad = \sum _{\Lambda _j} p_{(\theta _0,\dots ,\theta _{j})}\bigg ( 2\sum _{l=0}^{j-1} x^{\intercal }(k)\prod _{c=0}^{j-1} A^{\intercal }_{\theta _{c}}\,Q_{\theta _j}\,\prod _{c=1}^{j-l-1} A_{\theta _{j-c}} \, B_{\theta _{l}} \,u\llbracket l\rrbracket&\nonumber \\&\qquad +2\sum _{l_1=0}^{j-1}\sum _{l_2=0}^{j-1} \bar{w}^{\intercal }\llbracket l_1\rrbracket \,G^{\intercal }_{\theta _{l_1}}\,\prod _{c=l_1+1}^{j-1} A^{\intercal }_{\theta _{c}}\,Q_{\theta _j}\,\prod _{c=1}^{j-l_2-1} A_{\theta _{j-c}} \, B_{\theta _{l_2}} \,u\llbracket l_2\rrbracket \nonumber \\&\qquad + \sum _{l_1=0}^{j-1}\sum _{l_2=0}^{j-1} u^{\intercal }\llbracket l_1\rrbracket \,B^{\intercal }_{\theta _{l_1}}\,\prod _{c=l_1+1}^{j-1} A^{\intercal }_{\theta _{c}}\,Q_{\theta _j}\,\prod _{c=1}^{j-l_2-1} A_{\theta _{j-c}} \, B_{\theta _{l_2}} \,u\llbracket l_2\rrbracket \bigg ) + \varPsi \nonumber \\&\quad = \sum _{\Lambda _{j-1}} p_{(\theta _0,\dots ,\theta _{j-1})}\Bigg ( 2\sum _{l=0}^{j-1} x^{\intercal }(k)\prod _{c=0}^{j-1} A^{\intercal }_{\theta _{c}}\left( \sum _{\theta _j = 1}^{n_{\theta }} p_{\theta _{j},\theta _{j-1}} Q_{\theta _j}\right) \prod _{c=1}^{j-l-1} A_{\theta _{j-c}} \, B_{\theta _{l}} \,u\llbracket l\rrbracket \nonumber \\&\qquad + 2\sum _{l_1=0}^{j-1}\sum _{l_2=0}^{j-1} \bar{w}^{\intercal }\llbracket l_1\rrbracket \,G^{\intercal }_{\theta 
_{l_1}}\,\prod _{c=l_1+1}^{j-1} A^{\intercal }_{\theta _{c}}\left( \sum _{\theta _j = 1}^{n_{\theta }} p_{\theta _{j},\theta _{j-1}} Q_{\theta _j}\right) \prod _{c=1}^{j-l_2-1} A_{\theta _{j-c}} \, B_{\theta _{l_2}} \,u\llbracket l_2\rrbracket&\nonumber \\&\qquad \left. +\!\sum _{l_1=0}^{j-1}\sum _{l_2=0}^{j-1} u^{\intercal }\llbracket l_1\rrbracket \,B^{\intercal }_{\theta _{l_1}}\!\prod _{c=l_1+1}^{j-1}\! A^{\intercal }_{\theta _{c}}\! \left( \!\sum _{\theta _j = 1}^{n_{\theta }} p_{\theta _{j},\theta _{j-1}} Q_{\theta _j}\!\right) \!\prod _{c=1}^{j-l_2-1}\! A_{\theta _{j-c}} \, B_{\theta _{l_2}} \,u\llbracket l_2\rrbracket \! \right) + \varPsi . \nonumber \end{aligned}$$
(50)

Here, the variable \(\varPsi \) collects all cost terms that cannot be influenced by the inputs. The sums over the cost matrices \(Q_{\theta _j}\) can be replaced by \(\mathscr {T}_{\theta _{j-1}}(Q)\), as in (23). To express the costs as a function of \(\mathbf {u}(k)\), the sums over \(l\), \(l_1\), and \(l_2\) are reformulated as matrix multiplications:

$$\begin{aligned}&\text {E}\left( x^{\intercal }\llbracket j\rrbracket \,Q_{\theta _j} \,x\llbracket j\rrbracket \right) - \varPsi&\nonumber \\&=\!\!\!\mathop \sum _{\Lambda _{j-1}} p_{(\theta _0,\dots ,\theta _{j-1})}\!\left( 2\, x^{\intercal }(k)\, A^{\intercal }_{\theta _0}\cdot \ldots \cdot A^{\intercal }_{\theta _{j-1}} \begin{bmatrix} \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \end{bmatrix}\! \begin{bmatrix} A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{1}}\cdot B_{\theta _{0}} u\llbracket 0\rrbracket \\ A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{2}}\cdot B_{\theta _{1}} u\llbracket 1\rrbracket \\ \vdots \\ B_{\theta _{j-1}} u\llbracket j-1\rrbracket \end{bmatrix}\right. \nonumber \\&\,\, + 2\begin{bmatrix} A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{1}}\cdot G_{\theta _{0}} \bar{w}\llbracket 0\rrbracket \\ A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{2}}\cdot G_{\theta _{1}} \bar{w}\llbracket 1\rrbracket \\ \vdots \\ G_{\theta _{j-1}} \bar{w}\llbracket j-1\rrbracket \end{bmatrix}^{\intercal } \!\! \begin{bmatrix} \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \\ \vdots&\ddots&\vdots \\ \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \end{bmatrix} \begin{bmatrix} A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{1}}\cdot B_{\theta _{0}} u\llbracket 0\rrbracket \\ A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{2}}\cdot B_{\theta _{1}} u\llbracket 1\rrbracket \\ \vdots \\ B_{\theta _{j-1}} u\llbracket j-1\rrbracket \end{bmatrix} \nonumber \\&\,\, \left. + \begin{bmatrix} A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{1}}\cdot B_{\theta _{0}} u\llbracket 0\rrbracket \\ A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{2}}\cdot B_{\theta _{1}} u\llbracket 1\rrbracket \\ \vdots \\ B_{\theta _{j-1}} u\llbracket j-1\rrbracket \end{bmatrix}^{\intercal } \!\! 
\begin{bmatrix} \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \\ \vdots&\ddots&\vdots \\ \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \end{bmatrix} \begin{bmatrix} A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{1}}\cdot B_{\theta _{0}} u\llbracket 0\rrbracket \\ A_{\theta _{j-1}}\cdot \ldots \cdot A_{\theta _{2}}\cdot B_{\theta _{1}} u\llbracket 1\rrbracket \\ \vdots \\ B_{\theta _{j-1}} u\llbracket j-1\rrbracket \end{bmatrix}\right) \qquad \end{aligned}$$
(51)
$$\begin{aligned}&{=\!\mathop \sum \limits _{\Lambda _{j-1}} p_{(\theta _0,\dots ,\theta _{j-1})}\left( 2\, x^{\intercal }(k)\,\,A^{\intercal }_{\theta _0}\cdot \ldots \cdot A^{\intercal }_{\theta _{j-1}}\,\, \begin{bmatrix} \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \end{bmatrix} \begin{bmatrix} A_{\theta _{j-1}}&\mathbf {0}&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\ddots&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&A_{\theta _{j-1}}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&\mathbf {0}&B_{\theta _{j-1}} \end{bmatrix}\cdot \cdots \right. }\nonumber \\&\qquad \quad \, {\ldots \cdot \begin{bmatrix} A_{\theta _{j-2}}&\mathbf {0}&\mathbf {0}&\mathbf {0}&\mathbf {0}\\ \mathbf {0}&\ddots&\mathbf {0}&\mathbf {0}&\mathbf {0}\\ \mathbf {0}&\mathbf {0}&A_{\theta _{j-2}}&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&\mathbf {0}&B_{\theta _{j-2}}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&\mathbf {0}&\mathbf {0}&I_{ n_{\text {u}}} \end{bmatrix} \cdot \ldots \cdot \begin{bmatrix} A_{\theta _{1}}&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&B_{\theta _{1}}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&I_{(j-2)\cdot n_{\text {u}}} \end{bmatrix} \begin{bmatrix} B_{\theta _{0}}&\mathbf {0} \\ \mathbf {0}&I_{(j-1)\cdot n_{\text {u}}} \end{bmatrix} \begin{bmatrix} u\llbracket 0\rrbracket \\ \vdots \\ u\llbracket j-1\rrbracket \end{bmatrix}} \nonumber \\&{\qquad \quad +2\begin{bmatrix} \bar{w}^{\intercal }\llbracket 0\rrbracket&\ldots&\bar{w}^{\intercal }\llbracket j-1\rrbracket \end{bmatrix} \begin{bmatrix} G^{\intercal }_{\theta _{0}}&\mathbf {0} \\ \mathbf {0}&I_{(j-1)\cdot n_{\text {w}}} \end{bmatrix} \begin{bmatrix} A_{\theta _{1}}^{\intercal }&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&G_{\theta _{1}}^{\intercal }&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&I_{(j-2)\cdot n_{\text {w}}} \end{bmatrix} \cdot \ldots \cdot \begin{bmatrix} A^{\intercal }_{\theta _{j-1}}&\mathbf {0}&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\ddots&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\mathbf 
{0}&A^{\intercal }_{\theta _{j-1}}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&\mathbf {0}&G_{\theta _{j-1}} \end{bmatrix} \ \ \ } \nonumber \\&{\qquad \quad \cdot \begin{bmatrix} \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \\ \vdots&\ddots&\vdots \\ \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \end{bmatrix}\!\! \begin{bmatrix} A_{\theta _{j-1}}&\mathbf {0}&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\ddots&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&A_{\theta _{j-1}}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&\mathbf {0}&B_{\theta _{j-1}} \end{bmatrix} \cdot \ldots \cdot \begin{bmatrix} B^{\intercal }_{\theta _{0}}&\mathbf {0} \\ \mathbf {0}&I_{(j-1)\cdot n_{\text {u}}} \end{bmatrix}\!\! \begin{bmatrix} u\llbracket 0\rrbracket \\ \vdots \\ u\llbracket j-1\rrbracket \end{bmatrix}} \nonumber \\&{\qquad \quad +\begin{bmatrix} u^{\intercal }\llbracket 0\rrbracket&\ldots&u^{\intercal }\llbracket j-1\rrbracket \end{bmatrix} \begin{bmatrix} B^{\intercal }_{\theta _{0}}&\mathbf {0} \\ \mathbf {0}&I_{(j-1)\cdot n_{\text {u}}} \end{bmatrix} \begin{bmatrix} A_{\theta _{1}}^{\intercal }&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&B_{\theta _{1}}^{\intercal }&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&I_{(j-2)\cdot n_{\text {u}}} \end{bmatrix} \cdot \ldots \cdot \begin{bmatrix} A^{\intercal }_{\theta _{j-1}}&\mathbf {0}&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\ddots&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&A^{\intercal }_{\theta _{j-1}}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&\mathbf {0}&B_{\theta _{j-1}} \end{bmatrix} \ \ \ } \nonumber \\&{\qquad \quad \left. \cdot \begin{bmatrix} \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \\ \vdots&\ddots&\vdots \\ \mathscr {T}_{\theta _{j-1}}(Q)&\ldots&\mathscr {T}_{\theta _{j-1}}(Q) \end{bmatrix}\!\! 
\begin{bmatrix} A_{\theta _{j-1}}&\mathbf {0}&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\ddots&\mathbf {0}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&A_{\theta _{j-1}}&\mathbf {0} \\ \mathbf {0}&\mathbf {0}&\mathbf {0}&B_{\theta _{j-1}} \end{bmatrix} \cdot \ldots \cdot \begin{bmatrix} B^{\intercal }_{\theta _{0}}&\mathbf {0} \\ \mathbf {0}&I_{(j-1)\cdot n_{\text {u}}} \end{bmatrix}\!\! \begin{bmatrix} u\llbracket 0\rrbracket \\ \vdots \\ u\llbracket j-1\rrbracket \end{bmatrix}\!\right) ,} \nonumber \end{aligned}$$

where I and \(\mathbf {0}\) denote identity and zero matrices of appropriate dimensions. With the matrices defined in (30), Eq. (51) can be written as a function of \(\mathbf {u}(k)\):

$$\begin{aligned}&\text {E}\left( x^{\intercal }\llbracket j\rrbracket \,Q_{\theta _j} \,x\llbracket j\rrbracket \right) - \varPsi \nonumber \\&= \sum _{\Lambda _{j-1}} p_{(\theta _0,\ldots \theta _{j-1})}\Big ( 2x^{\intercal }(k)\,A^{\intercal }_{\theta _0} \cdot \ldots \cdot A^{\intercal }_{\theta _{j-1}}\,\, \hat{Q}_{\text {q}_{\text {x}},\theta _{j-1}}[j]\,\, \hat{B}_{\theta _{j-1}}[j]\cdot \ldots \cdot \hat{B}_{\theta _0}[1]\, \mathbf {u}(k) \nonumber \\&\qquad \qquad \quad + 2\bar{\mathbf {w}}^{\intercal }(k)\, \hat{G}^{\intercal }_{\theta _0}[1] \cdot \ldots \cdot \hat{G}^{\intercal }_{\theta _{j-1}}[j]\,\, \hat{Q}_{\text {q}_{\text {x}},\theta _{j-1}}[j]\,\,\, \hat{B}_{\theta _{j-1}}[j]\cdot \ldots \cdot \hat{B}_{\theta _0}[1]\, \mathbf {u}(k) \nonumber \\&\qquad \qquad \quad + \mathbf {u}^{\intercal }(k) \, \hat{B}^{\intercal }_{\theta _0}[1] \cdot \ldots \cdot \hat{B}^{\intercal }_{\theta _{j-1}}[j]\,\, \hat{Q}_{\text {W},\theta _{j-1}}[j] \,\, \hat{B}_{\theta _{j-1}}[j]\cdot \ldots \cdot \hat{B}_{\theta _0}[1]\, \mathbf {u}(k)\Big ).&\end{aligned}$$
(52)

Thus, one obtains for the cost prediction matrices:

$$\begin{aligned} q_{\text {x}}\llbracket j\rrbracket&:= {2x^{\intercal }(k)\!\!\mathop \sum _{\Lambda _{j-1}} p_{(\theta _0,\ldots ,\theta _{j-1})}\,\,A^{\intercal }_{\theta _0} \cdot \ldots \cdot A^{\intercal }_{\theta _{j-1}}\,\, \hat{Q}_{\text {q}_{\text {x}},\theta _{j-1}}[j]\,\, \hat{B}_{\theta _{j-1}}[j]\cdot \ldots \cdot \hat{B}_{\theta _0}[1],}&\end{aligned}$$
(53)
$$\begin{aligned} q_{\text {w}}\llbracket j\rrbracket&:= {2\bar{\mathbf {w}}^{\intercal }(k)\mathop \sum _{\Lambda _{j-1}} p_{(\theta _0,\ldots \theta _{j-1})}\,\, \hat{G}^{\intercal }_{\theta _0}[1] \cdot \ldots \cdot \hat{G}^{\intercal }_{\theta _{j-1}}[j]\,\, \hat{Q}_{\text {q}_{\text {w}},\theta _{j-1}}[j] \,\, \hat{B}_{\theta _{j-1}}[j]\cdot \ldots \cdot \hat{B}_{\theta _0}[1],}&\end{aligned}$$
(54)
$$\begin{aligned} W'\llbracket j\rrbracket&:= {\mathop \sum _{\Lambda _{j-1}} p_{(\theta _0,\ldots \theta _{j-1})}\,\, \hat{B}^{\intercal }_{\theta _0}[1] \cdot \ldots \cdot \hat{B}^{\intercal }_{\theta _{j-1}}[j]\,\, \hat{Q}_{\text {W},\theta _{j-1}}[j] \,\, \hat{B}_{\theta _{j-1}}[j]\cdot \ldots \cdot \hat{B}_{\theta _0}[1].} \end{aligned}$$
(55)

These equations describe a way to calculate \(q_{\text {x}}\llbracket j\rrbracket \), \(q_{\text {w}}\llbracket j\rrbracket \), and \(W'\llbracket j\rrbracket \). However, in this form the summation over all possible Markov trajectories is still required, so the computational effort still depends exponentially on \(n_{\theta }\) and N. To reduce this effort, each sum is restricted to the parts that depend on its summation variable. This yields a nested sum which can be evaluated recursively:

$$\begin{aligned}&q_{\text {x}}\llbracket j\rrbracket = 2x^{\intercal }(k)\sum _{\theta _0=1}^{n_{\theta }}\ldots \sum _{\theta _{j-1}=1}^{n_{\theta }}p_{(\theta _0,\ldots ,\theta _{j-1})} A^{\intercal }_{\theta _0} \cdot \ldots \cdot A^{\intercal }_{\theta _{j-1}} \hat{Q}_{q _{x }}[j] \hat{B}_{\theta _{j-1}}[j]\cdot \ldots \cdot \hat{B}_{\theta _0}[1]&\nonumber \\&\qquad \quad = 2x^{\intercal }\!(k)\!\sum _{\theta _0=1}^{n_{\theta }}\cdots \sum _{\theta _{j-2}=1}^{n_{\theta }}p_{(\theta _0,\ldots ,\theta _{j-2})} \cdot A^{\intercal }_{\theta _0} \cdot \ldots \cdot A^{\intercal }_{\theta _{j-2}}&\nonumber \\&\qquad \qquad \quad \cdot \Bigg (\sum _{\theta _{j-1}=1}^{n_{\theta }}p_{\theta _{j-1},\theta _{j-2}} \underbrace{A^{\intercal }_{\theta _{j-1}} \hat{Q}_{q _{x }}[j] \hat{B}_{\theta _{j-1}}[j]}_{=: \chi _{\theta _{j-1}}^{(1)}}\Bigg ) \hat{B}_{\theta _{j-2}}[j-1]\cdot \ldots \cdot \hat{B}_{\theta _0}[1]&\nonumber \\&\qquad \quad {= 2x^{\intercal }(k) \!\! \mathop \sum _{\theta _0=1}^{n_{\theta }}\!\!\cdots \!\!\!\!\mathop \sum _{\theta _{j-2}=1}^{n_{\theta }}\!p_{(\theta _0,\ldots ,\theta _{j-2})} \cdot A^{\intercal }_{\theta _0} \cdot \ldots \cdot \underbrace{A^{\intercal }_{\theta _{j-2}} \mathscr {T}_{\theta _{j-2}}\!\left( \chi ^{(1)}\!\right) \hat{B}_{\theta _{j-2}}[j-1]}_{=: \chi _{\theta _{j-2}}^{(2)}}\cdot \ldots \cdot \hat{B}_{\theta _0}[1]}&\nonumber \\&\qquad \quad {= 2x^{\intercal }(k) \!\! 
\mathop \sum _{\theta _0=1}^{n_{\theta }}\!\!\cdots \!\!\!\!\mathop \sum _{\theta _{j-3}=1}^{n_{\theta }}\!p_{(\theta _0,\ldots ,\theta _{j-3})} \cdot A^{\intercal }_{\theta _0} \cdot \ldots \cdot \underbrace{A^{\intercal }_{\theta _{j-3}} \mathscr {T}_{\theta _{j-3}}\!\left( \chi ^{(2)}\!\right) \hat{B}_{\theta _{j-3}}[j-2]}_{=: \chi _{\theta _{j-3}}^{(3)}}\cdot \ldots \cdot \hat{B}_{\theta _0}[1]}&\nonumber \\&\qquad \qquad \qquad \qquad \qquad \quad \qquad \qquad \vdots&\nonumber \\&\qquad \quad = 2x^{\intercal }(k) \sum _{\theta _0=1}^{n_{\theta }}\mu _{\theta _0}\!(k) \underbrace{A^{\intercal }_{\theta _{0}} \mathscr {T}_{\theta _{0}}\left( \chi ^{(j-1)} \right) \hat{B}_{\theta _{0}}[1]}_{=: \chi _{\theta _{0}}^{(j)}} = 2x^{\intercal }(k) \sum _{\theta _0=1}^{n_{\theta }}\mu _{\theta _0}\!(k) \chi _{\theta _{0}}^{(j)}. \end{aligned}$$
(56)

These transformations correspond to the steps defined in Theorem 2. An analogous procedure for \(W'\llbracket j\rrbracket \) leads to:

$$\begin{aligned}&W'\llbracket j\rrbracket = \sum _{\theta _0=1}^{n_{\theta }}\cdots \sum _{\theta _{j-1}=1}^{n_{\theta }}p_{(\theta _0,\ldots ,\theta _{j-1})} \cdot \hat{B}^{\intercal }_{\theta _0}[1] \cdot \ldots \cdot \hat{B}^{\intercal }_{\theta _{j-1}}[j] \hat{Q}_{W }[j] \hat{B}_{\theta _{j-1}}[j]\cdot \ldots \cdot \hat{B}_{\theta _0}[1]&\nonumber \\&\qquad \quad = \sum _{\theta _0=1}^{n_{\theta }}\cdots \sum _{\theta _{j-2}=1}^{n_{\theta }}p_{(\theta _0,\ldots ,\theta _{j-2})} \cdot \hat{B}^{\intercal }_{\theta _0}[1] \cdot \ldots \cdot \hat{B}^{\intercal }_{\theta _{j-2}}[j-1]\cdot \ldots&\nonumber \\&\qquad \qquad \ldots \cdot \bigg (\sum _{\theta _{j-1}=1}^{n_{\theta }}p_{\theta _{j-1},\theta _{j-2}} \underbrace{\hat{B}^{\intercal }_{\theta _{j-1}}[j] \hat{Q}_{W }[j] \hat{B}_{\theta _{j-1}}[j]}_{=: \kappa _{\theta _{j-1}}^{(1)}}\bigg ) \hat{B}_{\theta _{j-2}}[j-1]\cdot \ldots \cdot \hat{B}_{\theta _0}[1]&\nonumber \\&\qquad \quad {= \!\! \mathop \sum _{\theta _0=1}^{n_{\theta }}\!\!\cdots \!\!\!\!\mathop \sum _{\theta _{j-2}=1}^{n_{\theta }}\!p_{(\theta _0,\ldots ,\theta _{j-2})} \cdot \hat{B}^{\intercal }_{\theta _0}[1] \cdot \ldots \cdot \underbrace{\hat{B}^{\intercal }_{\theta _{j-2}}[j-1] \mathscr {T}_{\theta _{j-2}}\left( \kappa ^{(1)}\right) \hat{B}_{\theta _{j-2}}[j-1]}_{=: \kappa _{\theta _{j-2}}^{(2)}}\cdot \ldots \cdot \hat{B}_{\theta _0}[1]}&\nonumber \\&\qquad \qquad \quad \qquad \qquad \vdots&\nonumber \\&\qquad \quad = \sum _{\theta _0=1}^{n_{\theta }}\mu _{\theta _0}\!(k) \underbrace{\hat{B}^{\intercal }_{\theta _{0}}[1] \mathscr {T}_{\theta _{0}}\left( \kappa ^{(j-1)} \right) \hat{B}_{\theta _{0}}[1]}_{=: \kappa _{\theta _{0}}^{(j)}} = \sum _{\theta _0=1}^{n_{\theta }}\mu _{\theta _0}\!(k) \kappa _{\theta _{0}}^{(j)}.&\end{aligned}$$
(57)

For \(q_{\text {w}}\llbracket j\rrbracket \), the same procedure yields a recursive algorithm. These derivations prove that the algorithm in Theorem 2 calculates the cost prediction matrices \(q_{\text {x}}\llbracket j\rrbracket \), \(q_{\text {w}}\llbracket j\rrbracket \), and \(W'\llbracket j\rrbracket \). \(\blacksquare \)
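The complexity reduction behind recursions of the form (56) and (57) can be reproduced numerically: evaluating the trajectory sum by enumeration requires on the order of \(n_{\theta }^{j}\) matrix products, whereas the nested recursion needs only on the order of \(n_{\theta }^2\, j\) applications of the operator \(\mathscr {T}\). The Python sketch below compares both evaluations for the simplified quantity \(\sum _{\Lambda _{j-1}} p_{(\theta _0,\dots ,\theta _{j-1})}\, A^{\intercal }_{\theta _0}\cdots A^{\intercal }_{\theta _{j-1}}\); the matrices \(\hat{Q}\) and \(\hat{B}\) of the full expressions are omitted, and all numerical values are hypothetical:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_theta, n_x, j = 2, 3, 4
A = [rng.standard_normal((n_x, n_x)) for _ in range(n_theta)]
P = np.array([[0.9, 0.1],            # row-stochastic: P[i, m] = p_{m,i}
              [0.3, 0.7]])
mu0 = np.array([0.6, 0.4])           # initial distribution mu(k)

# Brute force: enumerate all n_theta**j trajectories (theta_0..theta_{j-1}).
S_enum = np.zeros((n_x, n_x))
for traj in product(range(n_theta), repeat=j):
    p = mu0[traj[0]] * np.prod([P[traj[l], traj[l + 1]] for l in range(j - 1)])
    M = np.eye(n_x)
    for th in traj:
        M = M @ A[th].T
    S_enum += p * M

# Recursive scheme, analogous to (56): chi^{(1)}_th = A_th^T, then
# chi^{(i+1)}_th = A_th^T * T_th(chi^{(i)}) with T_th(chi) = sum_m p_{m,th} chi_m.
chi = [A[th].T for th in range(n_theta)]
for _ in range(j - 1):
    chi = [A[th].T @ sum(P[th, m] * chi[m] for m in range(n_theta))
           for th in range(n_theta)]
S_rec = sum(mu0[th] * chi[th] for th in range(n_theta))
```

Both evaluations agree, but the recursion touches each mode only \(n_{\theta }\) times per prediction step instead of once per trajectory.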

Appendix B—MPC Approach Proposed in [24]

This section gives a very brief description of the MPC approach given in [24]. The following optimization problem is solved to determine the input trajectory:

$$\begin{aligned} \min \limits _{\mathbf {u}(k)} \quad&\mathop \sum _{j = 1}^N\Big ( \bar{x}^{\intercal }\llbracket j \rrbracket \, Q_j \, \bar{x}\llbracket j \rrbracket + u^{\intercal }\llbracket j-1 \rrbracket \, R_{j-1}\, u\llbracket j -1 \rrbracket \Big ) \nonumber \\ \text {s.t.} \quad&x_{\text {min},j}\ \le \ \bar{x}\llbracket j \rrbracket \ \le \ x_{\text {max},j}, \ \ \ \ \ u_{\text {min},j}\ \le \ u\llbracket j -1\rrbracket \ \le \ u_{\text {max},j} \ \ \ \ \ \ \forall \ j \in \{1,\dots ,N\}. \end{aligned}$$
(58)

Here, the cost matrices do not depend on the Markov state, but on the prediction step. With (35) and the results from Sect. 3, (58) can be formulated as a QP [24].
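To illustrate how a problem of the form (58) is handled numerically, the sketch below solves a finite-horizon problem with input box constraints for a scalar system. It uses a single averaged dynamics \(\bar{x}\llbracket j+1\rrbracket = \bar{A}\,\bar{x}\llbracket j\rrbracket + \bar{B}\,u\llbracket j\rrbracket \) as a stand-in for the full expected-value prediction of the JMLS, the state constraints are dropped, and all numerical values are hypothetical; it sketches the structure of the optimization, not the implementation from [24]:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar example of (58) with input box constraints only.
A_bar, B_bar = 1.2, 1.0          # averaged (expected-value) dynamics
Q, R, N = 1.0, 0.1, 5            # stage weights and horizon length
x0, u_min, u_max = 3.0, -1.0, 1.0

def cost(u):
    """Quadratic cost on the predicted expected state and the input."""
    x, J = x0, 0.0
    for j in range(N):
        x = A_bar * x + B_bar * u[j]     # expected-state prediction
        J += Q * x**2 + R * u[j]**2
    return J

# Box constraints on the input trajectory; a bound-constrained solver
# (L-BFGS-B is SciPy's default when only bounds are given) suffices here.
res = minimize(cost, np.zeros(N), bounds=[(u_min, u_max)] * N)
u_opt = res.x
```

In the chapter's setting the cost is an explicit quadratic function of \(\mathbf {u}(k)\), so a dedicated QP solver can replace the generic minimizer used in this sketch.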


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Tonne, J., Stursberg, O. (2016). Constrained Model Predictive Control of Processes with Uncertain Structure Modeled by Jump Markov Linear Systems. In: Rauh, A., Senkel, L. (eds) Variable-Structure Approaches. Mathematical Engineering. Springer, Cham. https://doi.org/10.1007/978-3-319-31539-3_12

  • DOI: https://doi.org/10.1007/978-3-319-31539-3_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-31537-9

  • Online ISBN: 978-3-319-31539-3

  • eBook Packages: Engineering (R0)
