# Encyclopedia of Systems and Control

Living Edition
| Editors: John Baillieul, Tariq Samad

# Optimization-Based Robust Control

Living reference work entry

DOI: https://doi.org/10.1007/978-1-4471-5102-9_159-2

## Abstract

This entry describes the basic setup of linear robust control and the difficulties typically encountered when designing optimization algorithms to cope with robust stability and performance specifications.

## Keywords

Linear systems · Optimization · Robust control

## Linear Robust Control

Robust control allows dealing with uncertainty affecting a dynamical system and its environment. In this section, we assume that we have a mathematical model of the dynamical system without uncertainty (the so-called nominal system) jointly with a mathematical model of the uncertainty. We restrict ourselves to linear systems: if the dynamical system we want to control has some nonlinear components (e.g., input saturation), they must be embedded in the uncertainty model. Similarly, we assume that the control system is relatively small scale (low number of states): higher-order dynamics (e.g., highly oscillatory but low energy components) are embedded in the uncertainty model. Finally, for conciseness, we focus exclusively on continuous-time systems, even though most of the techniques described in this section can be transposed readily to discrete-time systems.

Our control system is described by the first-order ordinary differential equation
\displaystyle \begin{aligned} \begin{array}{l} \dot {x}=A\left( \delta \right)x+B\left( \delta \right)u \\ y=C\left( \delta \right)x \\ \end{array} \end{aligned}
where as usual $$x \in {\mathbb R}^{n}$$ denotes the states, $$u \in {\mathbb R}^{m}$$ denotes the controlled inputs, and $$y \in {\mathbb R}^{p}$$ denotes the measured outputs, all depending on time t, with $$\dot {x}$$ denoting the time derivative of x. The system is subject to uncertainty, and this is reflected by the dependence of the matrices A, B, and C on the uncertain parameter δ, which is typically time varying and restricted to some bounded set
\displaystyle \begin{aligned} \delta \in \Delta \subset {\mathbb R}^{q}. \end{aligned}
A linear control law
\displaystyle \begin{aligned} u = Ky \end{aligned}
modeled by a matrix $$K \in {\mathbb R}^{m\times p}$$ must be designed to overcome the effect of the uncertainty while optimizing some performance criterion (e.g., pole placement, disturbance rejection, H2 or H∞ norm minimization). Sometimes, a relevant performance criterion is that the control should be stabilizing for the largest possible uncertainty (measured, e.g., by some norm on Δ). In this section, for conciseness, we restrict our attention to static output feedback control laws, but most of the results can be extended to dynamic output feedback control laws, where the control signal u is the output of a controller (a linear system to be designed) whose input is y.
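Under the static output feedback law above, the closed-loop dynamics are $$\dot{x}=(A(\delta)+B(\delta)KC(\delta))x$$, so stability amounts to the closed-loop matrix being Hurwitz. A minimal numerical sketch of this check, with hypothetical system matrices chosen purely for illustration:

```python
import numpy as np

def closed_loop(A, B, C, K):
    # Closed-loop state matrix A + B K C under static output feedback u = K y
    return A + B @ K @ C

def is_stable(A):
    # Continuous-time stability: all eigenvalues strictly in the open left half-plane
    return bool(np.max(np.linalg.eigvals(A).real) < 0.0)

# Hypothetical data: n = 2 states, m = p = 1
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])   # open loop is unstable (eigenvalues 1 and -2)
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-5.0]])        # one stabilizing static gain

Acl = closed_loop(A, B, C, K)
print(is_stable(A), is_stable(Acl))  # False True
```
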

## Uncertainty Models

Among the simplest possible uncertainty models, we can find the following:
• Unstructured uncertainty, also called norm-bounded uncertainty, where

\displaystyle \begin{aligned} \Delta = \{\delta \in {\mathbb R}^{q} : \vert \vert \delta \vert \vert \le 1\} \end{aligned}
and the given norm can be a standard vector norm or a more complicated matrix norm if δ is interpreted as a vector obtained by stacking the columns of a matrix
• Structured uncertainty, also called polytopic uncertainty, where

\displaystyle \begin{aligned} \Delta =\mathrm{conv}\,\{\delta _{i},\ i = 1,\ldots, N\} \end{aligned}
is a polytope modeled as the convex combination of a finite number of given vertices $$\delta _{i} \in {\mathbb R}^{q}, i = 1,\ldots ,N$$

We can find more complicated uncertainty models (e.g., combinations of the two above: see Zhou et al. 1996), but to keep the developments elementary, they are not discussed here.
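For the polytopic (structured) model, a simple necessary condition for robust stability is that every vertex matrix be Hurwitz; this vertex check is a relaxation and is not sufficient in general, in particular for time-varying uncertainty. A sketch with two hypothetical vertex matrices:

```python
import numpy as np

def spectral_abscissa(A):
    # alpha(A): maximum real part of the eigenvalues of A
    return float(np.max(np.linalg.eigvals(A).real))

def vertices_stable(vertex_matrices):
    # Necessary condition for robust stability of a polytopic family:
    # every vertex matrix must be Hurwitz (alpha < 0). Not sufficient
    # in general for time-varying uncertainty.
    return max(spectral_abscissa(A) for A in vertex_matrices) < 0.0

# Two hypothetical vertices of a polytopic uncertainty set
A1 = np.array([[-1.0, 0.5],
               [0.0, -2.0]])   # Hurwitz (eigenvalues -1 and -2)
A2 = np.array([[-1.0, 2.0],
               [0.5, -0.5]])   # not Hurwitz (negative determinant)
print(vertices_stable([A1, A2]))  # False
```
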

## Nonconvex Nonsmooth Robust Optimization

The main difficulties faced when seeking a feedback matrix K are as follows:
• Nonconvexity: The stability conditions are typically nonconvex in K.

• Nondifferentiability: The performance criterion to be optimized is typically a non-differentiable function of K.

• Robustness: Stability and performance should be ensured for every possible instance of the uncertainty.

So if we are to formulate the robust control problem as an optimization problem, we should be ready to develop and use techniques from nonconvex, nondifferentiable, robust optimization.
Let us first elaborate on the first difficulty faced by optimization-based robust control, namely, the nonconvexity of the stability conditions. In continuous time, stability of a linear system $$\dot {x}=Ax$$ is equivalent to negativity of the spectral abscissa, which is defined as the maximum real part of the eigenvalues of A:
\displaystyle \begin{aligned} \alpha (A)=\max \{\mathrm{Re}\,\lambda : \lambda \ \text{is an eigenvalue of}\ A\}. \end{aligned}
It turns out that the open cone of matrices $$A \in {\mathbb R}^{n\times n}$$ such that α(A) < 0 is nonconvex (Ackermann, 1993). This is illustrated in Fig. 1 where we represent the set of vectors $$K=(k_{1}, k_{2}, k_{3}) \in {\mathbb R}^{3}$$ such that $$k_1^2 +k_2^2 +k_3^2 < 1$$ and α(A(K)) < 0 for
\displaystyle \begin{aligned} A(K)=\left( \begin{array}{cc} -1 & k_1 \\ k_2 & k_3 \end{array} \right). \end{aligned}
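The nonconvexity can be verified numerically on this family: the two parameter vectors below both give α < 0, while their midpoint does not. The parameter values are chosen for convenience (they are not restricted to the unit ball of Fig. 1, but the same cone of Hurwitz matrices is involved):

```python
import numpy as np

def spectral_abscissa(A):
    # alpha(A): maximum real part of the eigenvalues of A
    return float(np.max(np.linalg.eigvals(A).real))

def A_of_K(k1, k2, k3):
    # The parametrized matrix family from the text
    return np.array([[-1.0, k1],
                     [k2, k3]])

# Two stabilizing parameter vectors whose midpoint is not stabilizing
Ka = (2.0, -2.0, 0.0)
Kb = (-2.0, 2.0, 0.0)
Kmid = tuple(0.5 * (a + b) for a, b in zip(Ka, Kb))  # (0, 0, 0)

print(spectral_abscissa(A_of_K(*Ka)))   # negative
print(spectral_abscissa(A_of_K(*Kb)))   # negative
print(spectral_abscissa(A_of_K(*Kmid))) # zero: the midpoint is not Hurwitz
```
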

There exist various approaches to handling nonconvexity. One possibility consists of building convex inner approximations of the stability region in the parameter space. The approximations can be polytopes, balls, ellipsoids, or more complicated convex objects described by linear matrix inequalities (LMIs). The resulting stability conditions are convex but conservative, in the sense that they are only sufficient for stability, not necessary. Another approach to handling nonconvexity consists of formulating the stability conditions algebraically (e.g., via the Routh-Hurwitz stability criterion or its symmetric version by Hermite) and using converging hierarchies of LMI relaxations to solve the resulting nonconvex polynomial optimization problem: see, e.g., Henrion and Lasserre (2004) and Chesi (2010).

Fig. 3 The graph of the negative spectral abscissa for some randomly generated matrix parametrizations
The second difficulty characteristic of optimization-based robust control is the potential nondifferentiability of the objective function. Consider for illustration one of the simplest optimization problems which consists of minimizing the spectral abscissa α(A(K)) of a matrix A(K) depending linearly on a matrix K. Such a minimization makes sense since negativity of the spectral abscissa is equivalent to system stability. Then typically, α(A(K)) is a continuous but non-Lipschitz function of K, which means that its gradient can be unbounded locally. In Fig. 2, we plot the spectral abscissa α(A(K)) for
\displaystyle \begin{aligned} A(K)=\left( \begin{array}{cc} 0 & 1 \\ K & -K \end{array} \right) \end{aligned}
and $$K \in {\mathbb R}$$. The function is non-Lipschitz at K = 0, at which the global minimum α(A(0)) = 0 is achieved. Nonconvexity of the function is also apparent in this example. The lack of convexity and smoothness of the spectral abscissa and other similar performance criteria renders optimization of such functions particularly difficult (Burke et al., 2001, 2006b). In Fig. 3, we represent graphs of the spectral abscissa (with flipped vertical axis for better visualization) of some small-size matrices depending on two real parameters, with randomly generated parametrization. We observe the typical nonconvexity and lack of smoothness around local and global optima.
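The non-Lipschitz behavior can be observed numerically: near K = 0 the abscissa of this family grows like $$\sqrt {K}$$, so the difference quotient α(A(K))∕K blows up as K → 0+. A short sketch (NumPy assumed available):

```python
import numpy as np

def alpha(K):
    # Spectral abscissa of A(K) = [[0, 1], [K, -K]]
    A = np.array([[0.0, 1.0],
                  [K, -K]])
    return float(np.max(np.linalg.eigvals(A).real))

# For small K > 0, alpha(K) is close to sqrt(K), so the slope
# alpha(K) / K grows without bound: the function is non-Lipschitz at 0.
for K in (1e-2, 1e-4, 1e-6):
    print(K, alpha(K), alpha(K) / K)
```
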

The third difficulty for optimization-based robust control is the uncertainty. As explained above, optimization of a performance criterion with respect to controller parameters is already a potentially difficult problem for a nominal system (i.e., when the uncertainty parameter is equal to zero). This becomes even more difficult when this optimization must be carried out for all possible instances of the uncertainty δ in Δ. This is where the above assumption that the uncertainty set Δ has a simple description proves useful. If the uncertainty δ is unstructured and not time varying, then it can be handled with the complex stability radius (Ackermann, 1993), the pseudospectral abscissa (Trefethen and Embree, 2005), or via an H∞ norm constraint (Zhou et al., 1996). If the uncertainty δ is structured, then we can try to optimize a performance criterion at every vertex in the polytopic description (which is a relaxation of the problem of stabilizing the whole polytope). An example is the problem of simultaneous stabilization, where a controller K must be found such that the maximum spectral abscissa of several matrices Ai(K), i = 1, …, N is negative (Blondel, 1994). Finally, if the uncertainty δ is time varying, then performance and stability guarantees can still be achieved with the help of Lyapunov certificates or potentially conservative convex LMI conditions: see, e.g., Boyd et al. (1994) and Scherer et al. (1997).

A unified approach to addressing conflicting performance criteria and uncertainty consists of searching for locally optimal solutions of a nonsmooth optimization problem that is built to incorporate minimization objectives and constraints for multiple plants. This is called (linear robust) multiobjective control, and formally, it can be expressed as the following optimization problem
$$\displaystyle \begin{gathered} \min\nolimits_{K}\ \max\nolimits_{i=1,\ldots,N}\{g_{i}(K) : \beta_{i}=\infty \}\\ \mathrm{s.t.}\,g_{i}(K) \le \beta_{i}, i = 1,\ldots, N, \end{gathered}$$
where each gi(K) is a function of the closed-loop matrix Ai(K) (e.g., a spectral abscissa or an H∞ norm) and the scalars βi are given and such that if βi = ∞ for some i, then gi appears in the objective function and not in a constraint: see Gumussoy et al. (2009) for details. In the above problem, the objective function, a maximum of nonsmooth and nonconvex functions, is typically also nonsmooth and nonconvex. Moreover, a sparsity pattern can easily be imposed on the controller matrix K to account for structural constraints (e.g., a low-order decentralized controller).
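The structure of this optimization problem can be sketched numerically, taking the spectral abscissa as every gi (one of the choices named above). The two scalar-gain plants and the choice β = (∞, −0.1) below are hypothetical, for illustration only:

```python
import numpy as np

def spectral_abscissa(A):
    # alpha(A): maximum real part of the eigenvalues of A
    return float(np.max(np.linalg.eigvals(A).real))

def multiobjective(K, plants, betas):
    # Evaluate the multiobjective problem at a candidate gain K:
    # plants[i] maps K to a closed-loop matrix A_i(K); g_i is taken here
    # as the spectral abscissa. Entries with beta_i = inf enter the
    # objective (at least one is assumed); the others are constraints.
    g = [spectral_abscissa(plant(K)) for plant in plants]
    objective = max(gi for gi, bi in zip(g, betas) if np.isinf(bi))
    feasible = all(gi <= bi for gi, bi in zip(g, betas) if not np.isinf(bi))
    return objective, feasible

# Hypothetical example: two plants sharing one scalar gain K
plants = [
    lambda K: np.array([[0.0, 1.0], [-2.0, K]]),
    lambda K: np.array([[K, 1.0], [0.0, -1.0]]),
]
betas = [np.inf, -0.1]  # minimize g_0 subject to g_1 <= -0.1

print(multiobjective(-1.0, plants, betas))  # objective near -0.5, feasible
```

A local nonsmooth optimizer (such as those underlying HIFOO) would then decrease the objective over feasible K.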

## Software Packages

Algorithms for nonconvex nonsmooth optimization have been developed and interfaced for linear robust multiobjective control in the public domain Matlab package HIFOO released in 2006 (Burke et al., 2006a) and updated in 2009 (Gumussoy et al., 2009) and based on the theory described in Burke et al. (2006b). In 2011, The MathWorks released HINFSTRUCT, a commercial implementation of these techniques based on the theory described in Apkarian and Noll (2006).

## Bibliography

1. Ackermann J (1993) Robust control – systems with uncertain physical parameters. Springer, Berlin
2. Apkarian P, Noll D (2006) Nonsmooth H-infinity synthesis. IEEE Trans Autom Control 51(1):71–86
3. Blondel VD (1994) Simultaneous stabilization of linear systems. Springer, Heidelberg
4. Boyd S, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. SIAM, Philadelphia
5. Burke JV, Lewis AS, Overton ML (2001) Optimizing matrix stability. Proc AMS 129:1635–1642
6. Burke JV, Henrion D, Lewis AS, Overton ML (2006a) HIFOO – a Matlab package for fixed-order controller design and H-infinity optimization. In: Proceedings of the IFAC symposium robust control design, Toulouse
7. Burke JV, Henrion D, Lewis AS, Overton ML (2006b) Stabilization via nonsmooth, nonconvex optimization. IEEE Trans Autom Control 51(11):1760–1769
8. Chesi G (2010) LMI techniques for optimization over polynomials in control: a survey. IEEE Trans Autom Control 55(11):2500–2510
9. Gumussoy S, Henrion D, Millstone M, Overton ML (2009) Multiobjective robust control with HIFOO 2.0. In: Proceedings of the IFAC symposium on robust control design (ROCOND 2009), Haifa
10. Henrion D, Lasserre JB (2004) Solving nonconvex optimization problems – how GloptiPoly is applied to problems in robust and nonlinear control. IEEE Control Syst Mag 24(3):72–83
11. Scherer CW, Gahinet P, Chilali M (1997) Multi-objective output feedback control via LMI optimization. IEEE Trans Autom Control 42(7):896–911
12. Trefethen LN, Embree M (2005) Spectra and pseudospectra: the behavior of nonnormal matrices and operators. Princeton University Press, Princeton
13. Zhou K, Doyle JC, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle River