Linear Matrix Inequality Techniques in Optimal Control
Abstract
LMI (linear matrix inequality) techniques offer more flexibility in the design of dynamic linear systems than techniques that minimize a scalar functional for optimization. For linear state space models, multiple goals (performance bounds) can be characterized in terms of LMIs, and these can serve as the basis for controller optimization via finite-dimensional convex feasibility problems. LMI formulations of various standard control problems are described in this article, including dynamic feedback stabilization, covariance control, LQR, H _{ ∞ } control, L _{ ∞ } control, and information architecture design.
Keywords
Matrix inequalities; Control system design; Covariance control; LQR/LQG; H _{∞} control; L _{∞} control; Sensor/actuator design
Early Optimization History
Hamilton invented state space models of nonlinear dynamic systems with his generalized momenta work in the 1800s (Hamilton 1834, 1835), but at that time the lack of computational tools prevented broad acceptance of the first-order form of dynamic equations. With the rapid development of computers in the 1960s, state space models evoked a formal control theory for minimizing a scalar function of control and state, propelled by the calculus of variations and Pontryagin’s maximum principle. Optimal control has been a pillar of control theory for the last 50 years. In fact, all of the problems discussed in this article can perhaps be solved by minimizing a scalar functional, but a search is required to find the right functional. Globally convergent algorithms are available to do just that for quadratic functionals, but more direct methods are now available.
Since the early 1990s, the focus for linear system design has been to pose control problems as feasibility problems, to satisfy multiple constraints. Since then, feasibility approaches have dominated design decisions, and such feasibility problems may be convex or not. If the problem can be reduced to a set of linear matrix inequalities (LMIs) to solve, then convexity is proven. However, failure to find such LMI formulations of the problem does not mean it is not convex, and computer-assisted methods for convex problems are available to avoid the search for LMIs (see Camino et al. 2003).
In the case of linear dynamic models of stochastic processes, optimization methods led to the popularization of linear quadratic Gaussian (LQG) optimal control, which had globally optimal solutions (see Skelton 1988). The first two moments of the stochastic process (the mean and the covariance) can be controlled with these methods, even if the distribution of the random variables involved is not Gaussian. Hence, LQG became just an acronym for the solution of quadratic functionals of control and state variables, even when the stochastic processes were not Gaussian. The label LQG was often used even for deterministic problems, where a time integral, rather than an expectation operator, was minimized, with given initial conditions or impulse excitations. These were formally called LQR (linear quadratic regulator) problems. Later the book (Skelton 1988) gave the formal conditions under which the LQG and the LQR answers were numerically identical, and this particular version of LQR was called the deterministic LQG.
It was always recognized that the quadratic form of the state and control in the LQG problem was an artificial goal. The real control goals usually involved prespecified performance bounds on each of the outputs and bounds on each channel of control. This leads to matrix inequalities (MIs) rather than scalar minimizations. While it was known early that any stabilizing linear controller could be obtained by some choice of weights in an LQG optimization problem (see Chap. 6 and references in Skelton 1988), it was not known until the 1980s what particular choice of weights in LQG would yield a solution to the matrix inequality (MI) problem. See early attempts in Skelton (1988), and see Zhu and Skelton (1992) and Zhu et al. (1997) for a globally convergent algorithm to find such LQG weights when the MI problem has a solution. Since then, rather than stating a minimization problem for a meaningless sum of outputs and inputs, linear control problems can now be stated simply in terms of norm bounds on each input vector and/or each output vector of the system (L _{2} bounds, L _{ ∞ } bounds, or variance bounds and covariance bounds). These feasibility problems are convex for state feedback or full-order output feedback controllers (the focus of this elementary introduction), and these can be solved using linear matrix inequalities (LMIs), as illustrated in this article. However, the earliest approach to these MI problems was iterative LQG solutions (to find the correct weights to use in the quadratic penalty of the state), as in Skelton (1988), Zhu and Skelton (1992), and Zhu et al. (1997).
Matrix Inequalities
Let Q be any square matrix. The linear matrix inequality (LMI) “Q > 0” is just a shorthand notation to represent a certain scalar inequality. That is, the matrix notation “Q > 0” means “the scalar x ^{ T } Qx is positive for all values of x, except x = 0.” Obviously this is a property of Q, not x, hence the abbreviated matrix notation Q > 0. This is called a linear matrix inequality (LMI), since the matrix unknown Q appears linearly in the inequality Q > 0. Note also that any square matrix Q can be written as the sum of a symmetric matrix \(\mathbf{Q}_{s} = \frac{1}{2}(\mathbf{Q} + {\mathbf{Q}}^{\mathbf{T}})\) and a skew-symmetric matrix \(\mathbf{Q}_{k} = \frac{1}{2}(\mathbf{Q} - {\mathbf{Q}}^{\mathbf{T}})\), but x ^{ T } Q _{ k } x = 0, so only the symmetric part of the matrix Q affects the scalar x ^{ T } Qx. We assume hereafter without loss of generality that Q is symmetric. The notation “Q ≥ 0” means “the scalar x ^{ T } Qx cannot be negative for any x.”
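As a minimal numerical illustration of these definitions (the matrix and vector below are hypothetical examples), the sign of x ^{T} Qx depends only on the symmetric part of Q, and definiteness can be checked through the eigenvalues of that symmetric part:

```python
import numpy as np

# Hypothetical non-symmetric Q; "Q > 0" is a property of its symmetric part.
Q = np.array([[2.0, 1.0],
              [-1.0, 3.0]])
Qs = 0.5 * (Q + Q.T)          # symmetric part
Qk = 0.5 * (Q - Q.T)          # skew-symmetric part

x = np.array([0.7, -1.3])
# x^T Qk x is always zero, so x^T Q x = x^T Qs x.
assert abs(x @ Qk @ x) < 1e-12
# Q > 0 holds iff every eigenvalue of the symmetric part is positive.
print(np.all(np.linalg.eigvalsh(Qs) > 0))
```

For this Q the symmetric part is diag(2, 3), so the check prints True.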
The following four statements are equivalent:
 1.
For any initial condition x(0) of the system \(\dot{\mathbf{x}} = \mathbf{Ax}\), the state x(t) converges to zero.
 2.
All eigenvalues of A lie in the open left half plane.
 3.
There exists a matrix Q with the two properties Q > 0 and QA + A ^{ T } Q < 0.
 4.
The set of all quadratic Lyapunov functions that can be used to prove the stability or instability of the null solution of \(\dot{\mathbf{x}} = \mathbf{Ax}\) is given by x ^{ T } Q ^{− 1} x, where Q is any square matrix with the two properties of item 3 above.
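The equivalence of items 2 and 3 can be checked numerically. A sketch, for a hypothetical stable A, using SciPy's standard Lyapunov solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical example matrix with eigenvalues -1 and -2 (open left half plane).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# solve_continuous_lyapunov solves a X + X a^T = q; passing a = A^T gives
# the form of item 3:  Q A + A^T Q = -I < 0.
Q = solve_continuous_lyapunov(A.T, -np.eye(2))

assert np.all(np.linalg.eigvalsh(Q) > 0)      # Q > 0, as item 3 requires
assert np.allclose(Q @ A + A.T @ Q, -np.eye(2))
```

Since A is stable, the solution Q exists and is positive definite, certifying stability without computing eigenvalues of A.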
LMIs are prevalent throughout the fundamental concepts of control theory, such as controllability and observability. For the linear system example \(\dot{\mathbf{x}} = \mathbf{Ax} + \mathbf{Bu}\), y = Cx, the “Observability Gramian” is the infinite integral \(\mathbf{Q} = \int_{0}^{\infty}{e}^{{\mathbf{A}}^{\mathbf{T}}t}{\mathbf{C}}^{\mathbf{T}}\mathbf{C}{e}^{\mathbf{A}t}\,dt\). Furthermore Q > 0 if and only if (A, C) is an observable pair, and Q is bounded only if the observable modes are asymptotically stable. When it exists, the solution of \(\mathbf{QA} + {\mathbf{A}}^{\mathbf{T}}\mathbf{Q} + {\mathbf{C}}^{\mathbf{T}}\mathbf{C} = \mathbf{0}\) satisfies Q > 0 if and only if the matrix pair (A, C) is observable.
Likewise the “Controllability Gramian” \(\mathbf{X} = \int_{0}^{\infty}{e}^{\mathbf{A}t}{\mathbf{BB}}^{\mathbf{T}}{e}^{{\mathbf{A}}^{\mathbf{T}}t}\,dt > 0\) if and only if the pair (A, B) is controllable. If X exists, it satisfies \({\mathbf{XA}}^{\mathbf{T}} + \mathbf{AX} + {\mathbf{BB}}^{\mathbf{T}} = \mathbf{0}\), and X > 0 if and only if (A, B) is a controllable pair. Note also that the matrix pair (A, B) is controllable for any A if BB ^{ T } > 0, and the matrix pair (A, C) is observable for any A if C ^{ T } C > 0. Hence, the existence of Q > 0 or X > 0 satisfying either (QA + A ^{ T } Q < 0) or (AX + XA ^{ T } < 0) is equivalent to the statement that “all eigenvalues of A lie in the open left half plane.”
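Both Gramians can be obtained from their Lyapunov equations rather than by integration. A sketch with a hypothetical stable triple (A, B, C):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable system data.
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability Gramian:  A X + X A^T + B B^T = 0
X = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian:    Q A + A^T Q + C^T C = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# (A, B) controllable and (A, C) observable iff the Gramians are > 0.
print(np.all(np.linalg.eigvalsh(X) > 0), np.all(np.linalg.eigvalsh(Q) > 0))
```

For this pair both tests print True, since rank [B, AB] = 2 and rank [C; CA] = 2.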
It should now be clear that the set of all stabilizing state feedback controllers, u = Gx, is parametrized by the inequalities Q > 0, \(\mathbf{Q}(\mathbf{A} + \mathbf{BG}) + {(\mathbf{A} + \mathbf{BG})}^{\mathbf{T}}\mathbf{Q} < \mathbf{0}\). The difficulty in this MI is the appearance of the product of the two unknowns Q and G, so more work is required to show how to use LMIs to solve this problem.
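A standard remedy from the LMI literature (not derived in this article) is the change of variables X = Q^{-1}, L = GX, which turns the bilinear condition into one that is linear in (X, L): AX + XA ^{T} + BL + L ^{T} B ^{T} < 0, with G recovered as G = LX^{-1}. The sketch below verifies this identity numerically for a hypothetical plant and stabilizing gain, building a feasible pair (X, L) from a Lyapunov equation rather than from an LMI solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical unstable plant with a known stabilizing gain.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
G = np.array([[-3.0, -3.0]])      # A + BG has eigenvalues -1 and -2

# Build a certificate: solve (A+BG) X + X (A+BG)^T = -I, then set L = G X.
Acl = A + B @ G
X = solve_continuous_lyapunov(Acl, -np.eye(2))
L = G @ X

# The linearized inequality A X + X A^T + B L + L^T B^T < 0 holds:
lin = A @ X + X @ A.T + B @ L + L.T @ B.T     # equals -I by construction
assert np.allclose(lin, -np.eye(2))
assert np.all(np.linalg.eigvalsh(X) > 0)
# Any feasible (X, L) yields the gain back as G = L X^{-1}.
assert np.allclose(L @ np.linalg.inv(X), G)
```

The point of the substitution is that an LMI solver can search over (X, L) directly, avoiding the product of unknowns Q and G.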
In the sequel some techniques are borrowed from linear algebra, where a linear matrix equality (LME) \(\boldsymbol{\Gamma}\mathbf{G}\boldsymbol{\Lambda} = \boldsymbol{\Theta}\) may or may not have a solution G. For LMEs there are two separate questions to answer. The first question is “Does there exist a solution?” and the answer is “if and only if \(\boldsymbol{\Gamma}{\boldsymbol{\Gamma}}^{+}\boldsymbol{\Theta}{\boldsymbol{\Lambda}}^{+}\boldsymbol{\Lambda} = \boldsymbol{\Theta}\).” The second question is “What is the set of all solutions?” and the answer is “\(\mathbf{G} = {\boldsymbol{\Gamma}}^{+}\boldsymbol{\Theta}{\boldsymbol{\Lambda}}^{+} + \mathbf{Z} - {\boldsymbol{\Gamma}}^{+}\boldsymbol{\Gamma}\mathbf{Z}\boldsymbol{\Lambda}{\boldsymbol{\Lambda}}^{+}\), where Z is arbitrary, and the + symbol denotes the Moore-Penrose pseudo-inverse.” LMI approaches employ the same two questions by formulating the necessary and sufficient conditions for the existence of an LMI solution and then parametrizing all solutions.
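Both the existence test and the full solution set can be verified numerically with pseudo-inverses. A sketch with hypothetical random matrices, constructed so that the LME is consistent:

```python
import numpy as np

# Solvability and solution set of the LME  Gamma G Lam = Theta.
rng = np.random.default_rng(0)
Gamma = rng.standard_normal((2, 3))
Lam = rng.standard_normal((4, 2))
G_true = rng.standard_normal((3, 4))
Theta = Gamma @ G_true @ Lam          # consistent by construction

Gp, Lp = np.linalg.pinv(Gamma), np.linalg.pinv(Lam)

# Existence test:  Gamma Gamma^+ Theta Lam^+ Lam = Theta.
assert np.allclose(Gamma @ Gp @ Theta @ Lp @ Lam, Theta)

# All solutions:  G = Gamma^+ Theta Lam^+ + Z - Gamma^+ Gamma Z Lam Lam^+.
Z = rng.standard_normal((3, 4))       # arbitrary free parameter
G = Gp @ Theta @ Lp + Z - Gp @ Gamma @ Z @ Lam @ Lp
assert np.allclose(Gamma @ G @ Lam, Theta)
```

Substituting the parametrized G back in, the Z-dependent terms cancel because \(\boldsymbol{\Gamma}\boldsymbol{\Gamma}^{+}\boldsymbol{\Gamma} = \boldsymbol{\Gamma}\) and \(\boldsymbol{\Lambda}\boldsymbol{\Lambda}^{+}\boldsymbol{\Lambda} = \boldsymbol{\Lambda}\), so every choice of Z yields a valid solution.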
Perhaps the earliest book on LMI control methods was Boyd et al. (1994), but the results and notations used herein are taken from Skelton et al. (1998). Other important LMI papers and books can give the reader a broader background, including Iwasaki and Skelton (1994), Gahinet and Apkarian (1994), de Oliveira et al. (2002), Li et al. (2008), de Oliveira and Skelton (2001), Camino et al. (2001, 2003), Boyd and Vandenberghe (2004), Iwasaki et al. (2000), Khargonekar and Rotea (1991), Vandenberghe and Boyd (1996), Scherer (1995), Scherer et al. (1997), Balakrishnan et al. (1994), Gahinet et al. (1995), and Dullerud and Paganini (2000).
Control Design Using LMIs
Often it is of interest to characterize the set of all controllers that can satisfy performance bounds on both the outputs and inputs, \(E[{\mathbf{yy}}^{\mathbf{T}}] \leq \bar{\mathbf{Y}}\) and \(E[{\mathbf{uu}}^{\mathbf{T}}] \leq \bar{\mathbf{U}}\), and we call these covariance control problems. But without prespecified performance bounds \(\bar{\mathbf{Y}},\bar{\mathbf{U}}\), one can require stability only. Such examples are given below.
Many Control Problems Reduce to the Same LMI
Stabilizing Control
Covariance Upper Bound Control

The following two statements are equivalent:
 There exists a controller G that solves the covariance upper bound control problem \(\mathbf{Y} < \overline{\mathbf{Y}}\).
 There exists a matrix X > 0 such that \(\mathbf{Y} = \mathbf{C}\mathbf{X}{\mathbf{C}}^{\mathbf{T}} < \overline{\mathbf{Y}}\) and (7) and (8) hold, where the matrices are defined by (\(\boldsymbol{\Theta}\) occupies the last two columns):
$$\left[\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \boldsymbol{\Gamma} & {\boldsymbol{\Lambda}}^{\mathbf{T}} & \boldsymbol{\Theta}\end{array}\right] = \left[\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} \mathbf{B} & {\mathbf{XM}}^{\mathbf{T}} & \mathbf{AX} + {\mathbf{XA}}^{\mathbf{T}} & \mathbf{D} \\ \mathbf{0} & {\mathbf{E}}^{\mathbf{T}} & {\mathbf{D}}^{\mathbf{T}} & -\mathbf{I}\end{array}\right]\quad(12)$$
Proof is provided by Theorem 9.1.2 in Skelton et al. (1998).
Linear Quadratic Regulator

The following two statements are equivalent:
 There exists a controller G that solves the LQR problem.
 There exists a matrix Y > 0 such that \({\mathbf{D}}^{\mathbf{T}}\mathbf{Y}\mathbf{D} < {\gamma}^{2}\) and (7) and (8) hold, where the matrices are defined by:
$$\left[\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \boldsymbol{\Gamma} & {\boldsymbol{\Lambda}}^{\mathbf{T}} & \boldsymbol{\Theta}\end{array}\right] = \left[\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} \mathbf{YB} & {\mathbf{M}}^{\mathbf{T}} & \mathbf{YA} + {\mathbf{A}}^{\mathbf{T}}\mathbf{Y} & {\mathbf{M}}^{\mathbf{T}} \\ \mathbf{H} & \mathbf{0} & \mathbf{M} & -\mathbf{I}\end{array}\right].\quad(13)$$
Proof is provided by Theorem 9.1.3 in Skelton et al. (1998).
H _{ ∞ } Control
LMI techniques produced the first solutions of the general H _{ ∞ } problem, without any restrictions on the plant. See Iwasaki and Skelton (1994) and Gahinet and Apkarian (1994).
Let a performance bound γ > 0 be given. Determine whether or not there exists a controller G in (1) which asymptotically stabilizes the system and yields the closed-loop transfer matrix (14) such that the peak value of the frequency response is less than γ. That is, \(\|\mathbf{T}\|_{H_{\infty}} = \sup_{\omega}\|\mathbf{T}(j\omega)\| < \gamma\).

The following two statements are equivalent:
 A controller G solves the H _{ ∞ } control problem.
 There exists a matrix X > 0 such that (7) and (8) hold, where (\(\boldsymbol{\Theta}\) occupies the last three columns):
$$\left[\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \boldsymbol{\Gamma} & {\boldsymbol{\Lambda}}^{\mathbf{T}} & \boldsymbol{\Theta}\end{array}\right] = \left[\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \mathbf{B} & {\mathbf{XM}}^{\mathbf{T}} & \mathbf{AX} + \mathbf{X}{\mathbf{A}}^{\mathbf{T}} & {\mathbf{XC}}^{\mathbf{T}} & \mathbf{D} \\ \mathbf{H} & \mathbf{0} & \mathbf{CX} & -\gamma\mathbf{I} & \mathbf{F} \\ \mathbf{0} & {\mathbf{E}}^{\mathbf{T}} & {\mathbf{D}}^{\mathbf{T}} & {\mathbf{F}}^{\mathbf{T}} & -\gamma\mathbf{I}\end{array}\right]\quad(15)$$
Proof is provided by Theorem 9.1.5 in Skelton et al. (1998).
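The peak frequency response bounded by the H _{ ∞ } controller can be estimated directly by a grid sweep over frequency. A minimal sketch with hypothetical state-space data (the function name peak_gain and the lightly damped example are illustrative, not from the source):

```python
import numpy as np

# Estimate sup over omega of the largest singular value of
# T(j*omega) = C (j*omega*I - A)^{-1} B + F  on a frequency grid.
A = np.array([[0.0, 1.0],
              [-1.0, -0.4]])     # lightly damped: resonance near 1 rad/s
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.zeros((1, 1))

def peak_gain(A, B, C, F, omegas):
    n = A.shape[0]
    worst = 0.0
    for w in omegas:
        T = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + F
        worst = max(worst, np.linalg.svd(T, compute_uv=False)[0])
    return worst

gamma = peak_gain(A, B, C, F, np.linspace(0.01, 10.0, 2000))
# The resonance pushes the peak well above the DC gain of 1.
assert gamma > 1.0
```

A grid sweep only lower-bounds the true norm; the LMI condition (15) certifies the bound exactly, without gridding.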
L _{ ∞ } Control
The peak value of the frequency response is controlled by the above H _{ ∞ } controller. A similar theorem can be written to control the peak in the time domain.
Information Architecture in Estimation and Control Problems
In the typical “control problem” that occupies most research literature, the sensors and actuators have already been selected. Yet the selection of sensors and actuators and their locations greatly affect the ability of the control system to do its job efficiently. Perhaps in one location a highprecision sensor is needed, and in another location high precision is not needed, and paying for high precision in that location would therefore be a waste of resources. These decisions must be influenced by the control dynamics which are yet to be designed. How does one know where to effectively spend money to improve the system? To answer this question, we must optimize the information architecture jointly with the control law.
Let us consider the problem of selecting the control law jointly with the selection of the precision (defined here as the inverse of the noise intensity) of each actuator/sensor, subject to the constraint of specified upper bounds on the covariance of output error and control signals, and specified upper bounds on the sensor/actuator cost. We assume the cost of these devices is proportional to their precision (i.e., the cost is equal to the price per unit of precision, times the precision). Traditionally, with full-order controllers and prespecified sensor/actuator instruments (with specified precisions), this is a well-known solved convex problem (which means it can be converted to an LMI problem if desired); see Chap. 6 of Skelton et al. (1998). If we enlarge the design freedom to include sensor/actuator precisions, it is not obvious whether the feasibility problem is convex or not. The following shows that this problem of including the sensor/actuator precisions within the control design problem is indeed convex and therefore completely solved. The proof is provided in Li et al. (2008).
Consider the linear control system (1)–(5). Assume that the cost of sensors and actuators is proportional to their precision, which we herein define to be the inverse of the noise intensity (or variance, in the discrete-time case). So if the price per unit of precision of the ith sensor/actuator is P _{ii}, and if the variance (or intensity) of the noise associated with the ith sensor/actuator is W _{ ii }, then the total cost of all sensors and actuators is \(\sum_{i} P_{ii}W_{ii}^{-1}\), or simply tr( PW ^{−1}), where P = diag(P _{ ii }) and \({\mathbf{W}}^{-1} = \mathrm{diag}(W_{ii}^{-1})\).
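A one-line check of the cost expression tr( PW ^{−1}), with hypothetical prices and noise intensities:

```python
import numpy as np

# Hypothetical instrument data: three sensors/actuators.
P = np.diag([4.0, 2.0, 1.0])            # price per unit of precision
W = np.diag([0.5, 1.0, 4.0])            # noise intensities
cost = np.trace(P @ np.linalg.inv(W))   # = 4/0.5 + 2/1 + 1/4
print(cost)  # 10.25
```

Because P and W are diagonal, the trace reduces to the scalar sum \(\sum_{i} P_{ii}W_{ii}^{-1}\), as stated above.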
To emphasize the theme of this article, to relate optimization to LMIs, we note that three optimization problems present themselves in the above problem with three constraints: control effort \(\bar{\mathbf{U}}\), output performance \(\bar{\mathbf{Y}}\), and instrument costs \(\bar{\$}\). To solve optimization problems, one can fix any two of these prespecified upper bounds and iteratively reduce the level set value of the third “constraint” until feasibility is lost. This process minimizes the resource expressed by the third constraint, while enforcing the other two constraints.
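The level-set reduction described above can be sketched as a bisection loop. Here `feasible` stands in for the LMI solver call; the threshold test below is a hypothetical toy, purely for illustration:

```python
# Shrink the bound gamma between feasibility checks until feasibility
# is (nearly) lost; the smallest feasible gamma is the optimum.

def minimize_bound(feasible, lo, hi, tol=1e-6):
    """Smallest gamma in [lo, hi] with feasible(gamma) True, by bisection.

    Assumes feasibility is monotone in gamma (true for level sets of a
    convex feasibility problem)."""
    assert feasible(hi) and not feasible(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid          # still feasible: tighten the bound
        else:
            lo = mid          # infeasible: back off
    return hi

# Toy stand-in: pretend the LMI problem is feasible exactly when gamma >= 2.5.
best = minimize_bound(lambda g: g >= 2.5, 0.0, 10.0)
print(round(best, 3))
```

In practice each `feasible(mid)` call solves the full LMI problem with the other two bounds held fixed, so the loop minimizes one resource while enforcing the other constraints.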
As an example, if cost is not a concern, one can always set large limits for \(\bar{\$}\) and discover the best assignment of sensor/actuator precisions for the specified performance requirements. These precisions produced by the algorithm are the values W _{ ii } ^{− 1}, produced from the solution (18)–(20), where the observed rankings \(W_{ii}^{-1} > W_{jj}^{-1} > W_{kk}^{-1} > \ldots\) indicate which sensors or actuators are most critical to the required performance goals \((\bar{\mathbf{U}},\bar{\mathbf{Y}},\bar{\$})\). If any precision W _{ nn } ^{− 1} is essentially zero, compared to other required precisions, then the math is asserting that the information from this sensor (n) is not important for the control objectives specified, or the control signals through this actuator channel (n) are ineffective in controlling the system to these specifications. This information leads us to a technique for choosing the best sensors/actuators and their locations.
The previous discussion provides the precisions (W _{ ii } ^{− 1}) required of each sensor and each actuator in the system. Our final application of this theory locates sensors and actuators in a large-scale system, by discarding the least effective ones. Suppose we solve any of the above feasibility problems, by starting with the entire admissible set of sensors and actuators (without regard to cost). For example, in a flexible structure control problem we might not know whether to place a rate sensor or a displacement sensor at a given location, so we add both. We might not know whether to use torque or force actuators, so we add both. We fill up the system with all the possibilities we might want to consider, and let the above precision rankings (available after the above LMI problem is solved) reveal how much precision is needed at each location and at each sensor/actuator. If there is a large gap in the precisions required (say \(W_{11}^{-1} > W_{22}^{-1} > W_{33}^{-1} \gg \ldots \gg W_{nn}^{-1}\)), then delete the sensor/actuator n and repeat the LMI problem with one less sensor or actuator. Continue deleting sensors/actuators in this manner until feasibility of the problem is lost. Then this algorithm, stopping at the previous iteration, has selected the best distribution of sensors/actuators for solving the specific problem specified by the allowable bounds (\(\bar{\$},\bar{\mathbf{U}},\bar{\mathbf{Y}}\)). The most important contribution of the above algorithm has been to extend control theory to solve system design problems that involve more than just designing control gains. This enlarges the set of solved linear control problems, from solutions of linear controllers with sensors/actuators prespecified to solutions which specify the sensor/actuator requirements jointly with the control solution.
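The deletion loop above can be sketched as follows. Here `solve_precisions` stands in for the LMI feasibility problem: it returns the required precisions W _{ ii } ^{− 1} for the active devices, or None if infeasible. Both the function and the toy data below are hypothetical stand-ins, not the actual solver:

```python
# Greedy sensor/actuator pruning driven by the precision rankings.

def prune(devices, solve_precisions):
    active = list(devices)
    while len(active) > 1:
        prec = solve_precisions(active)
        if prec is None:
            break                     # current set already infeasible
        # Delete the device whose required precision is smallest, i.e.,
        # the one the math says contributes least to the objectives.
        weakest = min(active, key=lambda d: prec[d])
        trial = [d for d in active if d != weakest]
        if solve_precisions(trial) is None:
            break                     # deleting it would lose feasibility
        active = trial
    return active

# Toy stand-in: "feasible" while at least two of the useful devices remain.
USEFUL = {"s1", "s2", "a1"}
def toy_solver(active):
    if len(USEFUL & set(active)) < 2:
        return None
    return {d: (10.0 if d in USEFUL else 0.1) for d in active}

print(sorted(prune(["s1", "s2", "s3", "a1", "a2"], toy_solver)))
```

With the toy solver, the low-precision devices s3 and a2 are discarded first, and the loop stops once any further deletion would lose feasibility.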
Summary
LMI techniques provide more powerful tools for designing dynamic linear systems than techniques that minimize a scalar functional for optimization, since multiple goals (bounds) can be achieved for each of the outputs and inputs. Optimal control has been a pillar of control theory for the last 50 years. In fact, all of the problems discussed in this article can perhaps be solved by minimizing a scalar functional, but a search is required to find the right functional. Globally convergent algorithms are available to do just that for quadratic functionals. But more direct methods are now available (since the early 1990s) for satisfying multiple constraints. Since then, feasibility approaches have dominated design decisions (at least for linear systems), and such feasibility problems may be convex or not. If the problem can be reduced to a set of LMIs to solve, then convexity is proven. However, failure to find such LMI formulations of the problem does not mean it is not convex, and computer-assisted methods for convex problems are available to avoid the search for LMIs (see Camino et al. 2003). Optimization can also be achieved with LMI methods by reducing the level set for one of the bounds, while maintaining all the other bounds. This level set is reduced iteratively, between convex (LMI) solutions, until feasibility is lost. A most amazing fact is that most of the common linear control design problems all reduce to exactly the same matrix inequality problem (6). The set of such equivalent problems includes LQR, the set of all stabilizing controllers, the set of all H _{ ∞ } controllers, and the set of all L _{ ∞ } controllers. The discrete and robust versions of these problems are also included in this equivalent set; 17 control problems have been found to be equivalent to LMI problems.
LMI techniques extend the range of solvable system design problems beyond just control design. By integrating information architecture and control design, one can simultaneously choose the control gains and the precision required of all sensor/actuators to satisfy the closedloop performance constraints. These techniques can be used to select the information (with precision requirements) required to solve a control or estimation problem, using the best economic solution (minimal precision). For a more complete discussion of LMI problems in control, read Dullerud and Paganini (2000), de Oliveira et al. (2002), Li et al. (2008), de Oliveira and Skelton (2001), Gahinet and Apkarian (1994), Iwasaki and Skelton (1994), Camino et al. (2001, 2003), Skelton et al. (1998), Boyd and Vandenberghe (2004), Boyd et al. (1994), Iwasaki et al. (2000), Khargonekar and Rotea (1991), Vandenberghe and Boyd (1996), Scherer (1995), Scherer et al. (1997), Balakrishnan et al. (1994), and Gahinet et al. (1995).
Cross-References
H _{ ∞ } Control, Keith Glover
H _{2} Optimal Control, Ben Chen
Linear Quadratic Optimal Control, Harry Trentelman
LMI Approach to Robust Control, Kang-Zhi Liu
Stochastic Linear-Quadratic Control, Shanjian Tang
Bibliography
 Balakrishnan V, Huang Y, Packard A, Doyle JC (1994) Linear matrix inequalities in analysis with multipliers. In: Proceedings of the 1994 American control conference, Baltimore, vol 2, pp 1228–1232
 Boyd SP, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cambridge
 Boyd SP, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. SIAM, Philadelphia
 Camino JF, Helton JW, Skelton RE, Ye J (2001) Analysing matrix inequalities systematically: how to get Schur complements out of your life. In: Proceedings of the 5th SIAM conference on control & its applications, San Diego
 Camino JF, Helton JW, Skelton RE, Ye J (2003) Matrix inequalities: a symbolic procedure to determine convexity automatically. Integral Equ Oper Theory 46(4):399–454
 de Oliveira MC, Skelton RE (2001) Stability tests for constrained linear systems. In: Reza Moheimani SO (ed) Perspectives in robust control. Lecture notes in control and information sciences. Springer, New York, pp 241–257. ISBN:1852334525
 de Oliveira MC, Geromel JC, Bernussou J (2002) Extended H _{2} and H _{∞} norm characterizations and controller parametrizations for discrete-time systems. Int J Control 75(9):666–679
 Dullerud G, Paganini F (2000) A course in robust control theory: a convex approach. Texts in applied mathematics. Springer, New York
 Gahinet P, Apkarian P (1994) A linear matrix inequality approach to H _{∞} control. Int J Robust Nonlinear Control 4(4):421–448
 Gahinet P, Nemirovskii A, Laub AJ, Chilali M (1995) LMI control toolbox user’s guide. The MathWorks Inc., Natick
 Hamilton WR (1834) On a general method in dynamics; by which the study of the motions of all free systems of attracting or repelling points is reduced to the search and differentiation of one central relation, or characteristic function. Philos Trans R Soc (part II):247–308
 Hamilton WR (1835) Second essay on a general method in dynamics. Philos Trans R Soc (part I):95–144
 Iwasaki T, Skelton RE (1994) All controllers for the general H _{∞} control problem – LMI existence conditions and state-space formulas. Automatica 30(8):1307–1317
 Iwasaki T, Meinsma G, Fu M (2000) Generalized S-procedure and finite frequency KYP lemma. Math Probl Eng 6:305–320
 Khargonekar PP, Rotea MA (1991) Mixed H _{2} ∕ H _{∞} control: a convex optimization approach. IEEE Trans Autom Control 39:824–837
 Li F, de Oliveira MC, Skelton RE (2008) Integrating information architecture and control or estimation design. SICE J Control Meas Syst Integr 1(2):120–128
 Scherer CW (1995) Mixed H _{2} ∕ H _{∞} control. In: Isidori A (ed) Trends in control: a European perspective. Springer, Berlin, pp 173–216
 Scherer CW, Gahinet P, Chilali M (1997) Multiobjective output-feedback control via LMI optimization. IEEE Trans Autom Control 42(7):896–911
 Skelton RE (1988) Dynamic systems control: linear systems analysis and synthesis. Wiley, New York
 Skelton RE, Iwasaki T, Grigoriadis K (1998) A unified algebraic approach to control design. Taylor & Francis, London
 Vandenberghe L, Boyd SP (1996) Semidefinite programming. SIAM Rev 38:49–95
 Youla DC, Bongiorno JJ, Jabr HA (1976) Modern Wiener-Hopf design of optimal controllers, part II: the multivariable case. IEEE Trans Autom Control 21:319–338
 Zhu G, Skelton R (1992) A two-Riccati, feasible algorithm for guaranteeing output l _{∞} constraints. J Dyn Syst Meas Control 114(3):329–338
 Zhu G, Rotea M, Skelton R (1997) A convergent algorithm for the output covariance constraint control problem. SIAM J Control Optim 35(1):341–361