Abstract
In this chapter, we study the state estimation and optimal control (i.e., linear quadratic Gaussian (LQG) control) problems for Quasi-TCP-like networked control systems, i.e., systems in which control inputs, observations, and packet acknowledgments may be randomly lost.
References
Anderson, B.D.O., Moore, J.B.: Optimal Filtering. Prentice-Hall, Englewood Cliffs (1979)
Jazwinski, A.H.: Stochastic Processes and Filtering Theory. Academic Press, New York (1970)
Sinopoli, B., Schenato, L., Franceschetti, M., Poolla, K., Jordan, M.I., Sastry, S.S.: Kalman filtering with intermittent observations. IEEE Trans. Autom. Control 49(9), 1453–1464 (2004)
Schenato, L., Sinopoli, B., Franceschetti, M., Poolla, K., Sastry, S.S.: Foundations of control and estimation over lossy networks. Proc. IEEE 95(1), 163–187 (2007)
Wei, Q., Wang, F.-Y., Liu, D., Yang, X.: Finite-approximation-error-based discrete-time iterative adaptive dynamic programming. IEEE Trans. Cybern. 44(12), 2820–2833 (2014)
Zhang, H., Cui, L., Luo, Y.: Near-optimal control for nonzero-sum differential games of continuous-time nonlinear systems using single-network ADP. IEEE Trans. Cybern. 43(1), 206–216 (2013)
Zhang, H., Shi, Y., Wang, J.: On energy-to-peak filtering for nonuniformly sampled nonlinear systems: a Markovian jump system approach. IEEE Trans. Fuzzy Syst. 22(1), 212–222 (2014)
Lin, H., Su, H., Shu, Z., Wu, Z.-G., Xu, Y.: Optimal estimation for networked control systems with intermittent inputs without acknowledgement. In: Proceedings of the 19th IFAC World Congress, pp. 5017–5022 (2014)
Maybeck, P.S.: Stochastic Models, Estimation, and Control. Academic Press, Cambridge (1982)
Sinopoli, B., Schenato, L., Franceschetti, M., Poolla, K., Sastry, S.: Optimal linear LQG control over lossy networks without packet acknowledgment. Asian J. Control 10(1), 3–13 (2008)
Lin, H., Su, H., Shi, P., Lu, R., Wu, Z.-G.: LQG control for networked control systems over packet drop links without packet acknowledgment. J. Frankl. Inst. 352(11), 5042–5060 (2015)
Moayedi, M., Foo, Y.K., Soh, Y.C.: Networked LQG control over unreliable channels. Int. J. Robust Nonlinear Control 23(2), 167–189 (2013)
Imer, O.C., Yüksel, S., Başar, T.: Optimal control of LTI systems over unreliable communication links. Automatica 42(9), 1429–1439 (2006)
Bitmead, R.R., Gevers, M.: Riccati difference and differential equations: Convergence, monotonicity and stability. In: The Riccati Equation, pp. 263–291. Springer, Heidelberg (1991)
Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (2012)
Appendix
Proof of Lemma 8.4
Proof
We prove Lemma 8.4 by mathematical induction.
Step 1: Consider the case \(k=1\). Then \(x_1=Ax_0+\nu _0Bu_0 + \omega _0\).
-
If \(\tau _0=1\), then the value of \(\nu _0\) is known and \(n_1=0\). From (8.2b) in Lemma 8.2, it follows that \(p(x_1)=\mathcal {N}_{x_1}(\bar{x}_1,\bar{P}_1)\), where \(\bar{x}_1=A\bar{x}_0+\nu _0Bu_0\) and \(\bar{P}_1 = AP_0A^{\prime }+Q\). By computing (8.6) and (8.7) with \(k=1\), we can obtain \(\bar{\alpha }^{[1]}_{1}\), \(\bar{m}^{[1]}_{1}\), and \(\bar{M}_1\). Substituting them into (8.5a) yields \(p(x_1)=\mathcal {N}_{x_1}(\bar{x}_1,\bar{P}_1)\). Thus, (8.5a), (8.6), and (8.7) hold for \(k=1\) and \(\tau _0=1\).
-
If \(\tau _0=0\), then the value of \(\nu _0\) is unknown and \(n_1=1, N_1=2\). By the total probability law, we have
$$\begin{aligned} p(x_1)&= p(x_1|\{\nu _{0}=0\})p(\{\nu _{0}=0\}) + p(x_1|\{\nu _{0}=1\})p(\{\nu _{0}=1\}). \end{aligned}$$(8.32)In \(p(x_1|\{\nu _{0}=0\})\), \(\nu _0\) takes the value 0 and is a deterministic quantity. By (8.2b), \(p(x_1|\{\nu _{0}=0\})=\mathcal {N}_{x_1}(A\bar{x}_0,\bar{M}_1)\) where \(\bar{M}_1=AP_0A^{\prime }+Q\). Similarly, by using (8.2b) again, \(p(x_1|\{\nu _{0}=1\})=\mathcal {N}_{x_1}(A\bar{x}_0+Bu_0,\bar{M}_1)\). If we set \(\bar{\alpha }^{[1]}_{1}=\bar{\nu }\), \(\bar{\alpha }^{[2]}_{1}=\nu \), \(\bar{m}^{[1]}_{1}=A\bar{x}_0\), and \(\bar{m}^{[2]}_{1}=A\bar{x}_0+Bu_0\), then (8.32) can be rewritten as
$$\begin{aligned} p(x_1|\mathcal {I}_{0}) = \bar{\alpha }^{[1]}_{1}\mathcal {N}_{x_1}(\bar{m}^{[1]}_{1} ,\bar{M}_1) + \bar{\alpha }^{[2]}_{1}\mathcal {N}_{x_1}(\bar{m}^{[2]}_{1},\bar{M}_1). \end{aligned}$$(8.33)It is easy to verify that \(p(x_1)\) computed by (8.5a), (8.6), and (8.8) with \(k=1\) is equal to (8.33). Hence, (8.5a), (8.6), and (8.8) hold for \(k=1\) and \(\tau _0=0\).
Consequently, (8.5a), (8.6), (8.7), and (8.8) hold for \(k=1\).
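The component-doubling mechanism of Step 1 can be sketched numerically. The following Python snippet is a sketch under our own notation, not the chapter's code: the function name, the toy scalar matrices, and the convention \(P(\nu _k=1)=\nu =1-\bar{\nu }\) are assumptions. It propagates a Gaussian-mixture prior through one time update; when the acknowledgment is lost (\(\tau =0\)), every component splits in two, weighted by \(\bar{\nu }\) and \(\nu \), as in (8.32)–(8.33):

```python
import numpy as np

def time_update(components, A, B, Q, u, tau, nu_val=None, nu_bar=None):
    """One time update of a Gaussian-mixture prior.

    components: list of (weight, mean, cov) triples.
    tau = 1: the control arrival nu is acknowledged, so its value nu_val is known;
    tau = 0: no acknowledgment, so each component splits on nu in {0, 1}.
    """
    out = []
    for (a, m, M) in components:
        M_new = A @ M @ A.T + Q                 # shared predicted covariance
        if tau == 1:                            # nu known: single shifted component
            out.append((a, A @ m + nu_val * (B @ u), M_new))
        else:                                   # nu unknown: split in two
            out.append((a * nu_bar, A @ m, M_new))                  # nu = 0, prob nu_bar
            out.append((a * (1 - nu_bar), A @ m + B @ u, M_new))    # nu = 1, prob 1 - nu_bar
    return out

# hypothetical 1-D example: one lost acknowledgment doubles the mixture
A = np.array([[1.2]]); B = np.array([[1.0]]); Q = np.array([[0.1]])
prior = [(1.0, np.array([0.0]), np.array([[1.0]]))]
mix = time_update(prior, A, B, Q, np.array([0.5]), tau=0, nu_bar=0.3)
print(len(mix))   # 2
```

Repeating the update with \(\tau =0\) doubles the component count each step, which is exactly the \(N_{k+1}=2N_k\) growth noted in Step 3.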
Step 2: In Step 1, we have proved that (8.5a) holds at \(k=1\), that is,
$$\begin{aligned} p(x_1) = \sum _{i=1}^{2^{n_{1}}} \bar{\alpha }^{[i]}_{1}\mathcal {N}_{x_{1}}(\bar{m}^{[i]}_{1},\bar{M}_{1}). \end{aligned}$$(8.34)
-
If \(\gamma _1=0\), there is no observation \(y_1\) and thus \(p(x_1|\mathcal {I}_{1})=p(x_1)\). Let \(p(x_1|\mathcal {I}_{1})\) take the form
$$\begin{aligned} p(x_1|\mathcal {I}_{1}) = \sum _{i=1}^{2^{n_{1}}} \alpha ^{[i]}_{1}\mathcal {N}_{x_{1}}(m^{[i]}_{1},M_{1}). \end{aligned}$$(8.35)It is evident that \(\alpha ^{[i]}_{1}=\bar{\alpha }^{[i]}_{1}, m^{[i]}_{1}=\bar{m}^{[i]}_{1}\), and \(M_{1}=\bar{M}_{1}\), since \(p(x_1|\mathcal {I}_{1})=p(x_1)\). Hence, (8.5b), (8.9), (8.10), and (8.11) hold at \(k=1\) and \(\gamma _1=0\).
-
If \(\gamma _1=1\), with the observation \(y_1\), \(p(x_1|\mathcal {I}_{1})\) can be directly obtained from \(p(x_1)\) in (8.34) by using Lemma 8.3 (ii). We still let \(p(x_1|y_1)\) take the same form as in (8.35). It is easy to check that \(p(x_1|\mathcal {I}_{1})\) and the parameters \(\{\alpha ^{[i]}_{1}, m^{[i]}_{1}, M_{1}\}\), obtained from \(p(x_1)\) in (8.34) by using Lemma 8.3 (ii), are completely identical to those computed by (8.5b), (8.9), (8.10), and (8.11) at \(k=1\) and \(\gamma _1=1\).
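The measurement update of Lemma 8.3 (ii) is the standard conditioning of a Gaussian mixture on a linear observation. A minimal numerical sketch (function name and toy matrices are ours, not the chapter's) makes the shared-covariance structure explicit: since all components share one covariance, a single Kalman gain updates every mean, and the weights are reweighted by the component likelihoods of \(y\):

```python
import numpy as np

def measurement_update(components, C, R, y):
    """Condition a mixture sum_i a_i N(m_i, M) on y = C x + v, v ~ N(0, R).
    All components share the covariance M, so the gain K and the updated
    covariance are computed once; means shift and weights are reweighted."""
    M = components[0][2]
    S = C @ M @ C.T + R                        # innovation covariance
    K = M @ C.T @ np.linalg.inv(S)             # shared Kalman gain
    M_new = (np.eye(M.shape[0]) - K @ C) @ M
    updated = []
    for a, m, _ in components:
        innov = y - C @ m
        # component likelihood N(y; C m, S), up to a constant shared by all components
        lik = np.exp(-0.5 * innov @ np.linalg.solve(S, innov))
        updated.append((a * lik, m + K @ innov, M_new))
    total = sum(a for a, _, _ in updated)
    return [(a / total, m, Mn) for a, m, Mn in updated]

# toy example: two equally weighted components; the one nearer y gains weight
prior = [(0.5, np.array([0.0]), np.array([[1.0]])),
         (0.5, np.array([2.0]), np.array([[1.0]]))]
post = measurement_update(prior, np.array([[1.0]]), np.array([[0.5]]), np.array([2.0]))
```

Note that, as in (8.35), the posterior keeps a single covariance \(M_1\) for all components; only the weights and means distinguish them.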
From Steps 1 and 2, it follows that Eqs. (8.5)–(8.11) hold at \(k=1\). Suppose that they hold for \(k=1,\ldots ,n\). We check the case \(k=n+1\) as follows.
Step 3: For \(k=n+1\), \(x_{n+1}=Ax_n+\nu _nBu_n+\omega _n\).
-
If \(\tau _n=1\), then the value of \(\nu _n\) is known and \(n_{n+1}=n_{n}\). \(p(x_{n+1}|\mathcal {I}_{n})\) can be obtained from \(p(x_{n}|\mathcal {I}_{n})\) by using Lemma 8.3 (i). It is easy to verify that the \(p(x_{n+1}|\mathcal {I}_{n})\) obtained is equal to the \(p(x_{n+1}|\mathcal {I}_{n})\) computed by (8.5a), (8.6), and (8.7) with \(k=n+1\). Thus, (8.5a), (8.6), and (8.7) hold at \(k=n+1\) and \(\tau _n=1\).
-
If \(\tau _n=0\), then the value of \(\nu _n\) is unknown to the estimator, and \(n_{n+1}=n_{n}+1\), \(N_{n+1}=2N_{n}\). By using the total probability law,
$$\begin{aligned} p(x_{n+1}|\mathcal {I}_{n}) {}&= p(x_{n+1}|\mathcal {I}_{n},\{\nu _{n}=0\})p(\{\nu _{n}=0\}) \nonumber \\ {}&+ p(x_{n+1}|\mathcal {I}_{n},\{\nu _{n}=1\})p(\{\nu _{n}=1\}). \end{aligned}$$(8.36)By applying Lemma 8.3 (i) to \(p(x_{n+1}|\mathcal {I}_{n},\{\nu _{n}=0\})\) and \(p(x_{n+1}|\mathcal {I}_{n},\{\nu _{n}=1\})\), we have
$$\begin{aligned} {}&p(x_{n+1}|\mathcal {I}_{n},\{\nu _{n}=0\}) \nonumber \\ = {}&\sum _{i=1}^{2^{n_n}} \alpha ^{[i]}_{n} \mathcal {N}_{x_{n+1}}(\bar{m}^{[i]}_{n+1},\bar{M}_{n+1}) \end{aligned}$$(8.37)where \(\bar{m}^{[i]}_{n+1}=Am^{[i]}_{n}\) and \(\bar{M}_{n+1}=A M_n A^{\prime }+Q\), for \(1 \le i \le 2^{n_n}\); and
$$\begin{aligned} {}&p(x_{n+1}|\mathcal {I}_{n},\{\nu _{n}=1\}) \nonumber \\ = {}&\sum _{i=1}^{2^{n_n}} \alpha ^{[i]}_{n} \mathcal {N}_{x_{n+1}}(\bar{m}^{[i+2^{n_n}]}_{n+1},\bar{M}_{n+1}) \end{aligned}$$(8.38)where \(\bar{m}^{[i+2^{n_n}]}_{n+1}=Am^{[i]}_{n} + Bu_{n}\), for \(1 \le i \le 2^{n_n}\). By substituting (8.37) and (8.38) into (8.36), \(p(x_{n+1}|\mathcal {I}_{n})\) can be rewritten as:
$$\begin{aligned} p(x_{n+1}|\mathcal {I}_{n}) = \sum _{i=1}^{2^{n_{n+1}}} \bar{\alpha }^{[i]}_{n+1}\mathcal {N}_{x_{n+1}}(\bar{m}^{[i]}_{n+1},\bar{M}_{n+1}) \end{aligned}$$where \(\{\bar{m}^{[i]}_{n+1}\), \(\bar{\alpha }^{[i]}_{n+1}\), \(\bar{M}_{n+1}\}\) are equal to (8.6) and (8.8) with \(k=n+1\), which means that (8.5a), (8.6), and (8.8) hold for \(k=n+1\).
Step 4: By using Lemma 8.3 (ii) and following the same line of argument as in Step 2, it is easy to verify that (8.5b), (8.9), (8.10), and (8.11) hold at \(k=n+1\). To save space, the details are omitted here.
From Steps 3 and 4, it follows that Eqs. (8.5)–(8.11) hold at \(k=n+1\), which completes the proof.
Proof of Part (i) of Theorem 8.18
Proof
Let \(\mathcal {K}_{k}=(I-\gamma _{k}K_{k}C)\). We start by calculating \(x_k\) and \(e_k\). By substituting \(u_k=L\hat{x}_{k}\) into (8.1),
By combining (8.24) and (8.25), we have
Then, the homogeneous parts of (8.39) and (8.40) are the following:
Since \(\mathbb {E}[|\!|\omega _k|\!|^2]=\mathrm {tr}(Q)\) and \(\mathbb {E}[|\!|\upsilon _{k+1}|\!|^2]=\mathrm {tr}(R)\) in (8.39) and (8.40) are bounded, it was pointed out in [13] that if the homogeneous parts of (8.39) and (8.40) are asymptotically stable, then the systems (8.39) and (8.40) are mean square stable.
To study the asymptotic stability of (8.41) and (8.42), we follow a similar line of argument to that developed in [13], which requires the calculation of \(x_k^{\prime }Z_kx_k+e_k^{\prime }H_ke_k\). However, it would be cumbersome to compute this quantity directly via (8.41) and (8.42), as can be seen in [13]. In fact, most of the derivations needed to compute this quantity have already been performed in calculating \(V_k(x_k)\) in Lemma 8.14. Therefore, in the following we employ the results on \(V_k(x_k)\) to compute this quantity.
Denote the optimal control by \(u^{*}_k\). From (8.13), we have
According to the definition of mean square stability, it is \(\mathbb {E}[|\!|x_k|\!|^2]\), not \(\mathbb {E}[|\!|x_k|\!|^2|\mathcal {I}_{k}]\), that is of interest. Thus, taking the expectation over all information \(\mathcal {I}_{k}\) yields
From (8.26) and by noting that \(\mathbb {E}[e^{\prime }_k H_k e_k]=\mathrm {tr}(H_k P_k)\), we obtain
Then,
where the last equality is obtained by (8.43) and (8.27e).
In Lemma 8.14, \(x_k\) and \(e_k\) are determined by (8.39) and (8.40), whereas here we consider their homogeneous parts, i.e., (8.43) and (8.44), which contain no noise; this is equivalent to setting \(Q=R=0\) in \(V_k(x_k)\). Therefore, for the homogeneous parts (8.43) and (8.44), letting \(Q=R=0\) in (8.45) gives
Summing this equality from \(k=0\) to \(n-1\) yields
Since \(\mathbb {E}[x^{\prime }_{n} Z_{n} x_{n} + e^{\prime }_{n} H_{n} e_{n}]\ge 0\), we have
By the hypothesis that \(\{Z_k\}\) and \(\{G_k\}\) are bounded, we have \(\bar{Z}\ge Z_0\) and \(\bar{G}\ge G_0 = Z_0+H_0 \ge H_0\). Then
The boundedness of the series \(\sum ^{n-1}_{k=0} \mathbb {E}[x^{\prime }_k W x_k]\) implies \(\lim _{k\rightarrow \infty }\mathbb {E}[x^{\prime }_k W x_k]=0\). Since \(W>0\), \(\mathbb {E}[x^{\prime }_k x_k] = \mathbb {E}[|\!|x_k|\!|^2]\rightarrow 0\). Since \(\mathbb {E}[x^{\prime }_k W x_k] = \mathbb {E}[\hat{x}^{\prime }_k W \hat{x}_k] + \mathbb {E}[e^{\prime }_k W e_k]\), we have \(\lim _{k\rightarrow \infty }\mathbb {E}[e^{\prime }_k W e_k]=0\), i.e., \(\mathbb {E}[|\!|e_k|\!|^2]\rightarrow 0\), which implies the asymptotic stability of (8.41) and (8.42). Hence, (8.39) and (8.40) are mean square stable. The proof of part (i) is completed.
Proof of Part (ii) of Theorem 8.18
Before proving part (ii) of Theorem 8.18, we introduce some useful preliminaries and lemmas.
To study the boundedness of \(Z_k\) and \(G_k\), we reverse the time index in (8.27) and then rewrite (8.27) as follows:
with \(Z_0=W\) and \(H_0=0\), where
Define two operators as follows:
Lemma 8.19
Some results on g(1, X), \(\varPhi _X\), \(\varPhi _Z\), and \(\varPhi _G\) are formulated as follows ([4, p. 182] and [14, Theorems 10.6 and 10.7]):
-
(i)
g(1, X), \(\varPhi _X\), \(\varPhi _Z\), and \(\varPhi _G\) are monotonically increasing functions. Namely, if \(Z_1\ge Z_2\) and \(Y_1\ge Y_2\), then
$$\begin{aligned} g(1,Z_1) \ge {}&g(1,Z_2)\\ \varPhi _X(Z_1,Y_1)\ge {}&\varPhi _X(Z_2,Y_2)\\ \varPhi _Z(Z_1,Y_1,\rho )\ge {}&\varPhi _Z(Z_2,Y_2,\rho )\\ \varPhi _G(Z_1,Y_1,\eta )\ge {}&\varPhi _G(Z_2,Y_2,\eta ). \end{aligned}$$ -
(ii)
If Condition 1 is satisfied, then a necessary and sufficient condition for the convergences of \(Z_{k+1}=\varPhi _Z(Z_k,G_k,\rho )\) and \(G_{k+1}=\varPhi _G(Z_k,G_k,\eta )\) is
$$\begin{aligned} \lambda _A^2(\eta + \nu - 2\eta \nu ) < (\eta + \nu - \eta \nu ). \end{aligned}$$ -
(iii)
If \(S_0 \ge S_\infty \), then \(S_0 \ge S_k \ge S_\infty \).
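The scalar condition in Lemma 8.19 (ii) is easy to evaluate numerically. The sketch below uses our own reading of the notation (that \(\lambda _A\) is the largest open-loop eigenvalue magnitude and \(\eta \), \(\nu \) are packet arrival probabilities); it shows how moderate loss rates satisfy the condition for a mildly unstable plant while heavy losses violate it:

```python
def converges(lam_A, eta, nu):
    """Necessary and sufficient convergence condition of Lemma 8.19 (ii):
    lam_A**2 * (eta + nu - 2*eta*nu) < (eta + nu - eta*nu).
    lam_A: largest open-loop eigenvalue magnitude (assumed interpretation);
    eta, nu: observation- and control-packet arrival probabilities."""
    return lam_A**2 * (eta + nu - 2 * eta * nu) < (eta + nu - eta * nu)

print(converges(1.2, 0.9, 0.9))   # True: reliable channels suffice
print(converges(1.2, 0.1, 0.1))   # False: losses too frequent for this plant
```

Fixing \(\lambda _A\) and sweeping \(\eta ,\nu \) over \([0,1]^2\) with this predicate traces out the stabilizable region of loss rates.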
Lemma 8.20
Let \(X >0\) and \(Y \ge 0\), and let C be a matrix of compatible dimensions. Then
-
(i)
([15], Theorem 7.7.3 and Corollary 7.7.4) The following three inequalities are equivalent:
$$\begin{aligned} \lambda (YX^{-1})<1 \Leftrightarrow X> Y \Leftrightarrow Y^{-1}>X^{-1}. \end{aligned}$$ -
(ii)
([9], p. 213) The matrix inversion lemma:
$$\begin{aligned} XC^{\prime }(CXC^{\prime }+Y)^{-1} = (X^{-1}+C^{\prime }Y^{-1}C)^{-1}C^{\prime }Y^{-1}. \end{aligned}$$
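The identity in Lemma 8.20 (ii) can be spot-checked numerically. This sketch (the dimensions and random seed are arbitrary choices of ours) builds random positive definite X and Y and confirms the two sides agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 2
G1 = rng.standard_normal((n, n)); X = G1 @ G1.T + n * np.eye(n)   # X > 0
G2 = rng.standard_normal((p, p)); Y = G2 @ G2.T + p * np.eye(p)   # Y > 0
C = rng.standard_normal((p, n))

# left- and right-hand sides of the matrix inversion lemma
lhs = X @ C.T @ np.linalg.inv(C @ X @ C.T + Y)
rhs = np.linalg.inv(np.linalg.inv(X) + C.T @ np.linalg.inv(Y) @ C) @ C.T @ np.linalg.inv(Y)
print(np.allclose(lhs, rhs))   # True
```

This is the form in which the lemma is used below to rewrite the Kalman gain as \(h(\bar{M}_k)=K_kC\).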
In the sequel, we assume that Conditions 1, 2, and 3 are satisfied.
Lemma 8.21
Let \(\bar{M}_0 = P_0 \ge S_\infty \). The following facts hold.
-
(i)
Let \(S_{k+1} = g(1,S_k)\) with \(S_0 = \bar{M}_0 = P_0\). Then \(S_\infty \le \bar{M}_k\).
-
(ii)
\(F(\bar{M}_k)=\mathbb {K}_{k}^{\prime }H_{k}\mathbb {K}_{k}\) is monotonically decreasing, and thus \(\mathbb {K}_{k}^{\prime }H_{k}\mathbb {K}_{k} \le (\lambda _{\mathbb {K}})^{2} H_{k}\).
Proof
-
(i)
We prove this lemma by mathematical induction. For \(k=0\), this lemma holds. Suppose that it holds for \(0,\ldots ,n\). We check the case \(k=n+1\) as follows. By the hypothesis that \(S_n \le \bar{M}_n\) and Lemma 8.19,
$$\begin{aligned} S_{n+1} = {}&g(1,S_n) \\ \le {}&g(1,\bar{M}_n) \\ \le {}&g(\gamma _{n+1},\bar{M}_n) = \bar{M}_{n+1}. \end{aligned}$$Consequently, we have \(S_k \le \bar{M}_k\). From Lemma 8.19 (iii), it follows that \(S_\infty \le S_k \le \bar{M}_k\). The proof is completed.
-
(ii)
Define three functions as follows:
$$\begin{aligned} h(S)\triangleq {}&(S^{-1} + C^{\prime }R^{-1}C)^{-1}C^{\prime }R^{-1}C\\ f(S)\triangleq {}&I-h(S)\\ F(S)\triangleq {}&f(S)^{\prime }H_kf(S). \end{aligned}$$By Lemma 8.20 (ii), we have \(h(\bar{M}_k)=K_kC\). Thus, \(f(\bar{M}_k)=\mathbb {K}_{k}\) and \(F(\bar{M}_k)=\mathbb {K}_{k}^{\prime }H_{k}\mathbb {K}_{k}\).
Suppose that \(S_1>S_2\). By Lemma 8.20 (i), we have \(S_1^{-1} < S_2^{-1}\) and thus
$$\begin{aligned} S_1^{-1} + C^{\prime }R^{-1}C < S_2^{-1} + C^{\prime }R^{-1}C. \end{aligned}$$Let \(Y=C^{\prime }R^{-1}C\); \(Y^{-1}\) exists by virtue of the assumption that C is of full column rank. By using Lemma 8.20 (i) again, we have
$$\begin{aligned} {}&(S_1^{-1} + Y)^{-1}> (S_2^{-1} + Y)^{-1}\\ \overset{(a)}{\Rightarrow } {}&\lambda ((S_2^{-1} + Y)^{-1}YY^{-1}(S_1^{-1} + Y))<1\\ \Rightarrow {}&\lambda (h(S_2)h(S_1)^{-1})<1 \\ \overset{(b)}{\Rightarrow } {}&h(S_1)>h(S_2)\\ \overset{(c)}{\Rightarrow } {}&f(S_1)<f(S_2)\\ \overset{(d)}{\Rightarrow } {}&\lambda (f(S_1)f(S_2)^{-1})<1, \end{aligned}$$where the implications \(\overset{(a)}{\Rightarrow }\), \(\overset{(b)}{\Rightarrow }\), and \(\overset{(d)}{\Rightarrow }\) follow from Lemma 8.20 (i), and \(\overset{(c)}{\Rightarrow }\) follows from \(f(S)=I-h(S)\).
To compare \(f(S_1)^{\prime }H_{k}f(S_1)\) with \(f(S_2)^{\prime }H_{k}f(S_2)\), we consider the following inequalities.
$$\begin{aligned} {}&\lambda ((f(S_1)f(S_2)^{-1})^{\prime }H_{k}(f(S_1)f(S_2)^{-1})H_{k}^{-1})\\ \le {}&\lambda ((f(S_1)f(S_2)^{-1})^{\prime })\lambda (H_{k}(f(S_1)f(S_2)^{-1})H_{k}^{-1})\\ ={}&(\lambda (f(S_1)f(S_2)^{-1}))^2 < 1. \end{aligned}$$From Lemma 8.20 (i), it follows that
$$\begin{aligned} (f(S_1)f(S_2)^{-1})^{\prime }H_{k}(f(S_1)f(S_2)^{-1}) < H_{k}, \end{aligned}$$which means that \(f(S_1)^{\prime }H_{k}f(S_1) < f(S_2)^{\prime }H_{k}f(S_2)\), i.e., \(F(S_1) < F(S_2)\). From the result in part (i), we have
$$\begin{aligned} \mathbb {K}_{k}^{\prime }H_{k}\mathbb {K}_{k} = {}&F(\bar{M}_k) \\ \le {}&F(S_\infty ) \\ = {}&\mathbb {K}^{\prime }H_{k}\mathbb {K}\le (\lambda _{\mathbb {K}})^{2} H_{k}. \end{aligned}$$
The proof is completed.
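The monotone decrease of \(F(S)=f(S)^{\prime }H_kf(S)\) established in part (ii) of Lemma 8.21 can be illustrated with a scalar example (the toy values of C, R, and \(H_k\) are ours): enlarging S shrinks \(F(S)\).

```python
import numpy as np

def F(S, C, R, H):
    """F(S) = f(S)' H f(S) with f(S) = I - (S^{-1} + C'R^{-1}C)^{-1} C'R^{-1}C."""
    Y = C.T @ np.linalg.inv(R) @ C
    h = np.linalg.inv(np.linalg.inv(S) + Y) @ Y   # h(S), via the matrix inversion lemma form
    f = np.eye(S.shape[0]) - h
    return f.T @ H @ f

C = np.array([[1.0]]); R = np.array([[0.5]]); H = np.array([[2.0]])
F_big = F(np.array([[3.0]]), C, R, H)     # larger S -> smaller F(S)
F_small = F(np.array([[1.0]]), C, R, H)
print(F_big[0, 0] < F_small[0, 0])        # True
```

In the scalar case this is visible in closed form: with \(Y=2\) and \(H_k=2\), \(F(S)=2/(1+2S)^2\), which is decreasing in S.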
Lemma 8.22
Define two sequences as follows:
with \(\bar{Z}_0=Z_0\) and \(\bar{G}_0=G_0\). Then
Proof
From (8.46c) and by using Lemma 8.21, we have
From (8.27b),
By Lemma 8.19 (i),
We prove this lemma by mathematical induction. It is clear that (8.47) holds for \(k=0\). Suppose that it holds for \(0,\ldots , n\). We check the case \(k=n+1\) as follows. From (8.48), (8.49), and Lemma 8.19 (i), we have
and
The proof is completed.
Proof of Part (ii) of Theorem 8.18
Proof
From Lemma 8.19 (ii), it follows that if Condition 1 is satisfied and the inequality \(\lambda _A^2(\eta + \nu - 2\eta \nu ) < (\eta + \nu - \eta \nu )\) holds, then \(\bar{Z}_k\) and \(\bar{G}_k\) are convergent and thus are bounded. By Lemma 8.22, \(Z_k\) and \(G_k\) are bounded as well.
© 2017 Springer International Publishing Switzerland

Lin, H., Su, H., Shi, P., Shu, Z., Wu, Z.-G.: Estimation and Control for Quasi-TCP-Like Systems. In: Estimation and Control for Networked Systems with Packet Losses without Acknowledgement. Studies in Systems, Decision and Control, vol. 77. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-44212-9_8