# Residual-based iterations for the generalized Lyapunov equation

## Abstract

This paper treats iterative solution methods for the generalized Lyapunov equation. Specifically, a residual-based generalized rational-Krylov-type subspace is proposed. Furthermore, the existing theoretical justification for the alternating linear scheme (ALS) is extended from the stable Lyapunov equation to the stable generalized Lyapunov equation. Further insights are gained by connecting the energy-norm minimization in ALS to the theory of H2-optimality of an associated bilinear control system. Moreover it is shown that the ALS-based iteration can be understood as iteratively constructing rank-1 model reduction subspaces for bilinear control systems associated with the residual. Similar to the ALS-based iteration, the fixed-point iteration can also be seen as a residual-based method minimizing an upper bound of the associated energy norm.

## Keywords

Generalized Lyapunov equation · H2-optimal model reduction · Bilinear control systems · Alternating linear scheme · Projection methods · Matrix equations · Rational Krylov

## Mathematics Subject Classification

65F10 · 58E25 · 65F30 · 65F35

## 1 Introduction

Consider the *generalized Lyapunov equation*,

\({{\,\mathrm{{\mathscr {L}}}\,}}(X) + \varPi (X) + BB^T = 0, \quad (1)\)

where

\({{\,\mathrm{{\mathscr {L}}}\,}}(X) := AX + XA^T \quad (2)\)

is the *Lyapunov operator*, and

\(\varPi (X) := \sum _{i=1}^m N_i X N_i^T \quad (3)\)

is sometimes called a *correction*. We further assume that *A* is *stable*, i.e., *A* has all its eigenvalues in the left-half plane, which implies that \({{\,\mathrm{{\mathscr {L}}}\,}}\) is invertible [23, Theorem 4.4.6]. Moreover, we assume that \(\rho ({{\,\mathrm{{\mathscr {L}}}\,}}^{-1}\varPi )<1\), where \(\rho \) denotes the (operator) spectral radius. The assumption on the spectral radius implies that (1) has a unique solution [24, Theorem 2.1]. Furthermore, the definition of \(\varPi \) in (3) implies that it is non-negative, in the sense that \(\varPi (X)\) is positive semidefinite when *X* is positive semidefinite. Thus one can assert that, for all positive definite right-hand-sides, the unique solution *X* is indeed positive definite [9, Theorem 3.9], [12, Theorem 4.1]. Under these assumptions we prove that the alternating linear scheme (ALS) presented by Kressner and Sirković in [25] computes search directions which at each step fulfill a first-order necessary condition for being \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-optimal. Moreover, we show an equivalence between the bilinear iterative rational Krylov (BIRKA) method [5, 19] and the ALS-iteration for the generalized Lyapunov equation. The established equivalence implies that the ALS-iteration for the generalized Lyapunov equation can be understood as iteratively computing model reduction spaces of dimension 1 for a sequence of bilinear control systems associated with the residual of the generalized Lyapunov equation (Sect. 3). We also present a residual-based generalized rational-Krylov-type subspace adapted for solving the generalized Lyapunov equation (Sect. 5). A further result regards the fixed-point iteration, a residual-based iteration which we show minimizes an upper bound of the energy norm (Sect. 4).
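For small problems the setting above can be made concrete by vectorization. The following sketch (all matrices are made up for illustration; a single correction term \(m=1\) is used) represents \({{\,\mathrm{{\mathscr {L}}}\,}}\) and \(\varPi \) as Kronecker-product matrices, checks the spectral-radius assumption, and solves (1) directly — an approach that is only feasible for small *n*.

```python
import numpy as np

# Hypothetical small symmetric test problem (illustrative data only).
rng = np.random.default_rng(0)
n = 6
A = -np.diag(np.arange(1.0, n + 1))   # stable: eigenvalues in the left-half plane
M = rng.standard_normal((n, n))
N = 0.05 * (M + M.T)                  # small symmetric correction matrix
B = rng.standard_normal((n, 1))

I = np.eye(n)
L = np.kron(I, A) + np.kron(A, I)     # vec(A X + X A^T) = L vec(X)
Pi = np.kron(N, N)                    # vec(N X N^T)     = Pi vec(X)

# Spectral-radius assumption rho(L^{-1} Pi) < 1 for existence/uniqueness.
rho = max(abs(np.linalg.eigvals(np.linalg.solve(L, Pi))))
assert rho < 1

# Solve L(X) + Pi(X) + B B^T = 0 by vectorization (column-major vec).
x = np.linalg.solve(L + Pi, -(B @ B.T).ravel(order="F"))
X = x.reshape((n, n), order="F")

residual = A @ X + X @ A.T + N @ X @ N.T + B @ B.T
print(np.linalg.norm(residual))       # ~ machine precision
```

The column-major (`order="F"`) convention matches the identity \(\text {vec}(AXB) = (B^T \otimes A)\text {vec}(X)\) used to build the Kronecker representations.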

The standard Lyapunov equation, \(AX+XA^T+BB^T = 0\), has been well studied for a long time and considerable research effort has been, and still is, put into finding efficient algorithms for computing the solution and approximations thereof. For large and sparse problems it is typical to look for low-rank approximations, since algorithms can be adapted to exploit the low-rank format, reducing computational effort and storage requirements. One such algorithm is the Riemannian optimization method from [40], which computes a low-rank approximation by minimizing an associated cost function over the manifold of rank-*k* matrices, where \(k\ll n\). The Lyapunov equation has a close connection to control theory. Hence methods such as the iterative rational Krylov algorithm (IRKA) [18, 21], which computes subspaces for locally \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-optimal reduced-order linear systems, provide good approximation spaces for low-rank approximations. Related research is presented in a series of papers [13, 14, 15], where Druskin and co-authors develop a strategy to choose shifts for the rational Krylov subspace for efficient subspace reduction when solving PDEs [13, 14], as well as for model reduction of linear single-input-single-output (SISO) systems and solutions to Lyapunov equations [15]. Instead of computing full spaces iteratively with a method such as IRKA, the idea is to construct an infinite sequence of shifts with asymptotically optimal convergence speed [13]. Then the subspace can be dynamically extended as needed, until the required precision is achieved. The idea has also been further developed by using tangential directions, proving especially useful for situations where the right-hand-side is not of particularly low rank [16], e.g., multiple-input-multiple-output (MIMO) systems. For a more complete overview of results and techniques for Lyapunov equations see the review article [38].
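The low-rank approximability mentioned above can be observed directly on a small example. The sketch below (made-up data; a dense Kronecker solve, viable only for small *n*) solves a standard Lyapunov equation with a rank-1 right-hand-side and inspects the singular value decay of the solution.

```python
import numpy as np

# Stable symmetric A with moderately spread spectrum, rank-1 right-hand side.
n = 50
A = -np.diag(np.linspace(1.0, 100.0, n))
B = np.ones((n, 1))

# Solve A X + X A^T + B B^T = 0 by vectorization (small n only).
I = np.eye(n)
x = np.linalg.solve(np.kron(I, A) + np.kron(A, I),
                    -(B @ B.T).ravel(order="F"))
X = x.reshape((n, n), order="F")

# The singular values decay rapidly, so a rank-k truncation with k << n
# already gives an accurate approximation of X.
s = np.linalg.svd(X, compute_uv=False)
print(s[:5] / s[0])
```

The rapid decay is what makes low-rank formats attractive for large sparse problems, where the dense solve used here must be replaced by, e.g., rational Krylov or ADI methods.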

The generalized Lyapunov equation has received increased attention over the last decade. Results on low-rank approximability have emerged [6, 24]. More precisely, similarly to the standard Lyapunov equation one can in certain cases when the right-hand-side *B* is of low rank, \(r\ll n\), expect the singular values of the solution to decay rapidly even for the generalized Lyapunov equation. The result [6, Theorem 1] is applicable when the matrices \(N_i\) for \(i=1,\dots ,m\) have low rank, and the result [24, Theorem 2] when \(\rho ({{\,\mathrm{{\mathscr {L}}}\,}}^{-1}\varPi )<1\). Examples of algorithms exploiting low-rank structures are a Bilinear ADI method [6], specializations of Krylov methods for matrix equations [24], as well as greedy low-rank methods [25], and exploitations of the fixed-point iteration [37]. Through the connection with bilinear control systems there is an extension of IRKA, known as bilinear iterative rational Krylov (BIRKA) [5, 19]. There are also methods based on Lyapunov and ADI-preconditioned GMRES and BICGSTAB [12], and in general for problems with tensor product structure [26]. In the context of stochastic steady-state diffusion equations, rational Krylov subspace methods for generalized Sylvester equations have also been analyzed in [32]. The suggested search space is based on a union of rational Krylov subspaces, as well as combinations of rational functions, generated by the coefficient matrices defining the generalized Sylvester operator. We also mention that for the case when the correction \(\varPi \) has low operator-rank, there is a specialization of the Sherman-Morrison-Woodbury formula to the linear matrix equation; see [33] or [12, Section 3]. The result has been exploited in works such as [6, 28, 34]. Recently, the generalized Lyapunov equation has also been considered on an infinite-dimensional Hilbert space, see [4]. 
In particular, the authors show ([4, Proposition 1.1]) that the Gramians solving the generalized linear operator equations can be approximated by truncated Gramians that are associated to a sequence of standard operator Lyapunov equations.
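The truncated-Gramian idea from [4] can be sketched on a small matrix example: the generalized equation is approximated by a sequence of *standard* Lyapunov equations, which converges under the spectral-radius assumption. All data below is made up, and the dense Kronecker-based Lyapunov solve is a stand-in for a proper low-rank solver.

```python
import numpy as np

# Hypothetical small symmetric problem with rho(L^{-1} Pi) < 1.
rng = np.random.default_rng(1)
n = 5
A = -np.diag(np.arange(1.0, n + 1))
M = rng.standard_normal((n, n))
N = 0.05 * (M + M.T)
B = rng.standard_normal((n, 1))

I = np.eye(n)
L = np.kron(I, A) + np.kron(A, I)

def solve_lyap(rhs):
    """Solve the standard Lyapunov equation A P + P A^T + rhs = 0."""
    p = np.linalg.solve(L, -rhs.ravel(order="F"))
    return p.reshape((n, n), order="F")

# P_1 is the first truncated Gramian; each further step feeds N P N^T back in.
P = solve_lyap(B @ B.T)
for _ in range(30):
    P = solve_lyap(B @ B.T + N @ P @ N.T)

# At convergence P solves the *generalized* Lyapunov equation.
res = A @ P + P @ A.T + N @ P @ N.T + B @ B.T
print(np.linalg.norm(res))
```

Each sweep solves only a standard Lyapunov equation, which is the structural point of the truncated-Gramian approximation.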

## 2 Preliminaries

### 2.1 Generalized matrix equations and approximations

Throughout, \({{\hat{X}}}_k\) denotes an approximation of the solution to (1), where *k* is typically an iteration count. Connected with an approximation \({{\hat{X}}}_k\) is the corresponding *error*

\(X_k^\text {e} := X - {{\hat{X}}}_k, \quad (4)\)

where *X* is the exact solution to (1), and the *residual*,

\({\mathscr {R}}_k := {{\,\mathrm{{\mathscr {L}}}\,}}({{\hat{X}}}_k) + \varPi ({{\hat{X}}}_k) + BB^T. \quad (5)\)

### Definition 1

(*The Galerkin approximation*) Let \({{\,\mathrm{{\mathscr {K}}}\,}}_k\subseteq {\mathbb {R}}^n\) be an \(n_k\le n\) dimensional subspace for \(k=0,1,\dots \), and let \(V_k\in {\mathbb {R}}^{n\times n_k}\) be a matrix containing an orthogonal basis of \({{\,\mathrm{{\mathscr {K}}}\,}}_k\). We call \({{\hat{X}}}_k\) the *Galerkin approximation* to (1), in \({{\,\mathrm{{\mathscr {K}}}\,}}_k\), if \({{\hat{X}}}_k = V_k Y_k V_k^T\) and \(Y_k\) is determined by the condition

\(V_k^T\left( {{\,\mathrm{{\mathscr {L}}}\,}}({{\hat{X}}}_k) + \varPi ({{\hat{X}}}_k) + BB^T\right) V_k = 0. \quad (6)\)

For the generalized Lyapunov equation there are certain sufficient conditions for the Galerkin approximation to exist and be unique, e.g., the criteria in [9, Theorem 3.9], [12, Theorem 4.1] or [24, Proposition 3.2]. Related to the Galerkin approximation there is also the (standard) definition of the Galerkin residual.

### Definition 2

(*The Galerkin residual*) We call \({\mathscr {R}}_k\) from (5) the *Galerkin residual* if \({{\hat{X}}}_k\) is the Galerkin approximation.

The condition (6) is known as both the *projected problem* and the *Galerkin condition*, and it states that \(V_k^T{\mathscr {R}}_kV_k = 0\) for the Galerkin residual. Some of the results and arguments presented below are valid for a (generic) residual and others, more specialized, only for the Galerkin residual. However, it will be clear from the context which is intended, and the Galerkin residual will always be referenced as such.
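The Galerkin construction of Definition 1 can be sketched concretely for a small symmetric generalized Lyapunov equation: project, solve the small projected problem, and verify the Galerkin condition (6). All matrices and the subspace below are made up for illustration.

```python
import numpy as np

# Hypothetical small symmetric problem and a random 3-dimensional subspace.
rng = np.random.default_rng(2)
n, k = 8, 3
A = -np.diag(np.arange(1.0, n + 1))
M = rng.standard_normal((n, n))
N = 0.05 * (M + M.T)
B = rng.standard_normal((n, 1))
V, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthogonal basis of K_k

# Projected coefficients; the projected problem is again a generalized
# Lyapunov equation, now of size k x k.
Ah, Nh, Bh = V.T @ A @ V, V.T @ N @ V, V.T @ B
Ik = np.eye(k)
y = np.linalg.solve(np.kron(Ik, Ah) + np.kron(Ah, Ik) + np.kron(Nh, Nh),
                    -(Bh @ Bh.T).ravel(order="F"))
Y = y.reshape((k, k), order="F")

# Galerkin approximation and Galerkin residual.
Xh = V @ Y @ V.T
R = A @ Xh + Xh @ A.T + N @ Xh @ N.T + B @ B.T
print(np.linalg.norm(V.T @ R @ V))   # Galerkin condition: ~ 0
```

Note that, because \({{\hat{X}}}_k = V Y V^T\), the projected correction term reduces exactly to \((V^TNV)Y(V^TNV)^T\), so the small problem has the same structure as (1).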

The following fundamental result from linear algebra will be important for us. The specialization for the Lyapunov equation was presented already by Smith in [39]. For generalized matrix equations cf. [12, Section 4.2], and [25, Algorithm 2]; and an analogy for the algebraic Riccati equation in [29].

### Proposition 3

The error \(X_k^\text {e} = X - {{\hat{X}}}_k\) satisfies the generalized Lyapunov equation

\({{\,\mathrm{{\mathscr {L}}}\,}}(X_k^\text {e}) + \varPi (X_k^\text {e}) + {\mathscr {R}}_k = 0.\)

One strategy for computing updates to the current approximation is to compute approximations of the error. Proposition 3 allows such iterations by connecting the error with the known, or computable, quantities \({{\,\mathrm{{\mathscr {L}}}\,}}\), \(\varPi \) and \({\mathscr {R}}_k\). The idea is well established in the literature and is, e.g., analogous to the defect correction method [29] and the RADI method [8] for the algebraic Riccati equation, as well as the iterative improvement [20, Section 3.5.3] for a general linear system. For future reference we also need the following basic definition.

### Definition 4

(*Symmetric generalized Lyapunov equation*) The generalized Lyapunov equation (1) is called *symmetric* if \(A = A^T\) and \(N_i=N_i^T\) for \(i=1,\dots ,m\), i.e., if it has the form

\(AX + XA + \sum _{i=1}^m N_i X N_i + BB^T = 0. \quad (7)\)

### 2.2 Bilinear systems

The generalized Lyapunov equation is closely connected to *bilinear control systems* of the form

\(\varSigma : \quad {\dot{x}}(t) = Ax(t) + \sum _{i=1}^m N_i x(t) w_i(t) + Bu(t), \qquad y(t) = Cx(t). \quad (8)\)

### Remark 5

Note that the bilinear system (8) differs from the notation frequently used in the literature, e.g., [1, 2, 5, 9, 12, 19, 41]. The formulation (8) is convenient since it allows for \(m\ne r\). However, the system \(\varSigma \) can be put into the usual form by considering the input vector \(\begin{bmatrix}w(t)^T,&u(t)^T\end{bmatrix}^T\), adding *m* zero-columns to the beginning of *B*, i.e., \(\begin{bmatrix} 0,&B \end{bmatrix}\), and considering the matrices \(N_{m+1}=0,\dots ,N_{m+r}=0\). The system \(\varSigma \) can also be compared to systems from applications, e.g., [30, Equation (2)].

As in [2], for a MIMO bilinear system (8), we define the controllability and observability Gramians as follows.

### Definition 6

(*Bilinear Gramians*) Consider the bilinear system (8) and let *A* be stable. Moreover, let \(P_1(t_1) := e^{At_1}B\), \(P_j(t_1,\dots ,t_j) := e^{At_j}[N_1P_{j-1}, \dots , N_mP_{j-1}]\) for \(j=2,3,\dots \), \(Q_1(t_1) := Ce^{At_1}\), and \(Q_j(t_1,\dots ,t_j) := [N_1^TQ_{j-1}^T,\dots ,N_m^TQ_{j-1}^T]^T e^{At_j}\) for \(j=2,3,\dots \). We define the controllability and observability Gramian respectively as

\(P := \sum _{j=1}^{\infty }\int _0^{\infty }\!\!\cdots \int _0^{\infty } P_j P_j^T \,dt_1\cdots dt_j, \qquad Q := \sum _{j=1}^{\infty }\int _0^{\infty }\!\!\cdots \int _0^{\infty } Q_j^T Q_j \,dt_1\cdots dt_j. \quad (9)\)

### Proposition 7

Consider a symmetric generalized Lyapunov equation (7). Let \({{\hat{X}}}_k\) be an approximation such that the residual \({\mathscr {R}}_k = {\mathscr {R}}_k^T \succeq 0\). Then one can choose \(B_{{\mathscr {R}}_k}=C_{{\mathscr {R}}_k}^T\) and the error \(X_k^\text {e}\) is the controllability and observability Gramian of the system \(\varSigma ^\text {e}\).

For what follows, we recall the definition of the \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-norm for bilinear systems that was introduced by Zhang and Lam in [41].

### Definition 8

(*Bilinear* \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)*-norm*) Consider the bilinear system \(\varSigma \) from (8). We define the \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-norm of \(\varSigma \) as

\(\Vert \varSigma \Vert _{{{\,\mathrm{{\mathscr {H}}_2}\,}}}^2 := {{\,\mathrm{trace}\,}}\left( CPC^T\right) ,\)

where *P* is the controllability Gramian from Definition 6.
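A small sketch of how this norm can be evaluated through the Gramian: the controllability Gramian *P* of (8) solves the generalized Lyapunov equation with right-hand-side \(BB^T\), and then \(\Vert \varSigma \Vert _{{{\,\mathrm{{\mathscr {H}}_2}\,}}}^2 = {{\,\mathrm{trace}\,}}(CPC^T)\). The data below is made up, with \(C = B^T\) as in the symmetric setting of Sect. 3.

```python
import numpy as np

# Hypothetical small symmetric bilinear system with C = B^T.
rng = np.random.default_rng(3)
n = 5
A = -np.diag(np.arange(1.0, n + 1))
M = rng.standard_normal((n, n))
N = 0.05 * (M + M.T)
B = rng.standard_normal((n, 1))
C = B.T

# Controllability Gramian: A P + P A^T + N P N^T + B B^T = 0.
I = np.eye(n)
p = np.linalg.solve(np.kron(I, A) + np.kron(A, I) + np.kron(N, N),
                    -(B @ B.T).ravel(order="F"))
P = p.reshape((n, n), order="F")

# Squared bilinear H2-norm via the trace formula.
h2_sq = np.trace(C @ P @ C.T)
print(h2_sq)   # nonnegative; equals trace(P B B^T) here since C = B^T
```

The identity \({{\,\mathrm{trace}\,}}(CPC^T) = {{\,\mathrm{trace}\,}}(PBB^T)\) for \(C=B^T\) is the one used in the proofs of Sect. 3.2.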

## 3 ALS and \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-optimal model reduction for bilinear systems

In this section we consider the symmetric generalized Lyapunov equation (7). In this setting the operator \({\mathscr {M}} := -({{\,\mathrm{{\mathscr {L}}}\,}}+\varPi )\) is symmetric positive definite, and hence it induces an inner product \(\langle X, Y \rangle _{{\mathscr {M}}} := {{\,\mathrm{trace}\,}}(X^T {\mathscr {M}}(Y))\) and the corresponding *energy norm*, \(\Vert X \Vert _{{\mathscr {M}}} := \sqrt{\langle X, X \rangle _{{\mathscr {M}}}}\).

### 3.1 ALS for the generalized Lyapunov equation

Assume that *X* is a solution to the symmetric generalized Lyapunov equation (7), i.e., \( AX + XA + \sum _{i=1}^m N_i X N_i + BB^T=0. \) Given an approximation \({\hat{X}}_k\), we consider the minimization problem

\(\min _{v,w\in {\mathbb {R}}^n} J(v,w), \qquad J(v,w) := \Vert X - ({\hat{X}}_k + vw^T)\Vert _{{\mathscr {M}}}^2, \quad (10)\)

which can be treated with an *alternating linear scheme* (ALS). The main step is to fix one of the two vectors, e.g., *v*, and then minimize the strictly convex objective function to obtain an update for *w*. A pseudocode is given in Algorithm 1.

In view of Proposition 3 the ALS-based approach for computing new subspace extensions can be seen as searching for an approximation to \(X_k^\text {e}\) of the form \(v_kw_k^T\) by iterating \(({{\,\mathrm{{\mathscr {L}}}\,}}(v_kw_k^T) + \varPi (v_kw_k^T) + {\mathscr {R}}_k)w_k = 0\) when determining \(v_k\) and \(v_k^T({{\,\mathrm{{\mathscr {L}}}\,}}(v_kw_k^T) + \varPi (v_kw_k^T) + {\mathscr {R}}_k) = 0\) when determining \(w_k\). This is to say that the error is approximated by a rank-1 matrix, and at convergence this would result in the new residual, \({\mathscr {R}}_{k+1}\), being left-orthogonal to \(v_{k}\) and right-orthogonal to \(w_{k}\). In the symmetric case, local minimizers of (10) are necessarily symmetric positive semidefinite. This yields the following extension of [25, Lemma 2.3].
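A minimal sketch of such an alternating sweep, assuming a single symmetric correction matrix and made-up data (this is a simplification of Algorithm 1, not a reproduction of it): fixing one factor of \(vw^T\) in the stationarity conditions above yields a linear system for the other factor.

```python
import numpy as np

# Hypothetical small symmetric problem; initial guess X_0 = 0, so R_0 = B B^T.
rng = np.random.default_rng(4)
n = 6
A = -np.diag(np.arange(1.0, n + 1))
M = rng.standard_normal((n, n))
N = 0.05 * (M + M.T)
B = rng.standard_normal((n, 1))

R = B @ B.T
I = np.eye(n)
v = w = B[:, 0]
for _ in range(200):
    # Fix w: (L(v w^T) + Pi(v w^T) + R) w = 0 gives
    # ((w'w) A + (w'Aw) I + (w'Nw) N) v = -R w.
    v = np.linalg.solve((w @ w) * A + (w @ A @ w) * I + (w @ N @ w) * N, -R @ w)
    # Fix v: the analogous condition gives the update for w.
    w = np.linalg.solve((v @ v) * A + (v @ A @ v) * I + (v @ N @ v) * N, -R @ v)

# Rank-1 approximation of the error; at convergence the new residual is
# right-orthogonal to w, as discussed above.
X1 = np.outer(v, w)
R1 = A @ X1 + X1 @ A.T + N @ X1 @ N.T + R
print(np.linalg.norm(R1 @ w))
```

The two linear systems are the first-order conditions obtained by substituting the rank-1 ansatz into the residual equation of Proposition 3 and multiplying by \(w\) from the right, respectively by \(v^T\) from the left.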

### Lemma 9

Consider the symmetric generalized Lyapunov equation (7) and assume that \(A\prec 0\), \(\rho ({{\,\mathrm{{\mathscr {L}}}\,}}^{-1}\varPi )<1\), and \({\mathscr {R}}_k={\mathscr {R}}_k^T\succeq 0\). Let *J* be as in (10). Then every local minimum \((v_*,w_*)\) of *J* is such that \(v_*w_*^T\) is symmetric positive semidefinite.

### Proof

Since \(J(v_*,w)\) is strictly convex in *w* and \(J(v,w_*)\) is strictly convex in *v*, it follows that the first-order conditions hold at \((v_*,w_*)\). Without loss of generality we consider a single *N*-matrix, since the following argument can be applied to all terms in the sum independently. We observe that

Algorithm 1 and the argument in Lemma 9 are based on a residual. However, if \({{\hat{X}}}_k = 0\), then \({\mathscr {R}}_k = BB^T\), and hence the result is applicable directly to any symmetric generalized Lyapunov equation. The focus on the residual in the previous results is natural since it leads to the following extension of [25, Theorem 2.4] to the case of the symmetric generalized Lyapunov equation.

### Theorem 10

### Proof

We show the assertion by induction. It clearly holds that \({\mathscr {R}}_{0}={\mathscr {R}}_{0}^T\succeq 0\). Now assume that this is the case for some *k*. By Lemma 9 the local minimizers of (10) are symmetric, and hence \({{\hat{X}}}_{k+1}\) is well defined in (11). Moreover, since \({{\hat{X}}}_{k+1}\) and the operators in (1) are symmetric it follows that \({\mathscr {R}}_{k+1}\) is symmetric. Thus what is left to show is that \({\mathscr {R}}_{k+1} \succeq 0\), which is true if and only if \(y^T{\mathscr {R}}_{k+1}y\ge 0\) for all \(y\in {\mathbb {R}}^{n}\). Hence take an arbitrary \(y\in {\mathbb {R}}^{n}\) and consider \(y^T{\mathscr {R}}_{k+1}y\). We derive properties similar to [25, equations (12)–(14)]:

Since \((v_{k+1}, w_{k+1})\) is a local minimizer of *J*(*v*, *w*), it also follows that \(v_{k+1}\) is a (global) minimizer of the (convex) cost function obtained by fixing \(w = w_{k+1}\), and the minimizing *v* is given by

### Corollary 11

The iteration (11) produces an increasing sequence of approximations \(0={{\hat{X}}}_0 \preceq {{\hat{X}}}_1\preceq \cdots \preceq X\).

### 3.2 \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-optimal model reduction for symmetric state space systems

### Proposition 12

If \(\sigma ({\mathscr {M}})=-\sigma ({{\,\mathrm{{\mathscr {L}}}\,}}+\varPi )\subset {\mathbb {C}}_+\) then \(\sigma (\widetilde{{\mathscr {M}}})\subset {\mathbb {C}}_+\) and \(\sigma (\widehat{{\mathscr {M}}})\subset {\mathbb {C}}_+\).

### Proof

Since *A* and \(N_i\) are assumed to be symmetric, we conclude that \({\mathbf {M}}={\mathbf {M}}^T\succ 0\). Let us then define the orthogonal matrix \({\mathbf {V}}= { V{{\,\mathrm{\otimes }\,}}I}\). It follows that \(\widetilde{{\mathbf {M}}}={\mathbf {V}}^T {\mathbf {M}} {\mathbf {V}}\) and, consequently, \(\widetilde{{\mathbf {M}}} =\widetilde{{\mathbf {M}}}^T\succ 0\). A similar argument with \({\mathbf {V}}=V{{\,\mathrm{\otimes }\,}}V\) can be applied to show the second assertion. \(\square \)

Given a reduced bilinear system, we naturally obtain an approximate solution to the generalized Lyapunov equation. Moreover, the error with respect to the \({\mathscr {M}}\)-inner product is given by the \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-norms of the original and reduced system, respectively.

### Proposition 13

Let *X* be the solution to \({\mathscr {M}}( X) = BB^T\), and let \({\hat{X}}\) be the solution to \(\widehat{{\mathscr {M}}}({{\hat{X}}}) = {\hat{B}}{\hat{B}}^T\). Then

### Proof

By assumption the solutions *X* and \({{\hat{X}}}\) exist. We observe that \(\Vert X\Vert _{{\mathscr {M}}}^2 = {{\,\mathrm{trace}\,}}(XBB^T)=\Vert \varSigma \Vert _{{{\,\mathrm{{\mathscr {H}}_2}\,}}}^2\) and that \(\langle V{{\hat{X}}} V^T, X \rangle _{{\mathscr {M}}} = {{\,\mathrm{trace}\,}}(V {{\hat{X}}} V^T BB^T)=\Vert {\widehat{\varSigma }}\Vert _{{{\,\mathrm{{\mathscr {H}}_2}\,}}}^2\). Moreover, for the reduced system we obtain

Extending the results from [7], we obtain a lower bound for the previous terms by the \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-norm of the error system \(\varSigma -{\widehat{\varSigma }}\).

### Proposition 14

### Proof

Let *Y* and \({\hat{X}}\) be the solutions of \(\widetilde{{\mathscr {M}}}(Y)=B{\hat{B}}^T\) and \(\widehat{{\mathscr {M}}}({\hat{X}})={\hat{B}}{\hat{B}}^T\), respectively. With the operators introduced in (15) and (16), we obtain

As a consequence of Propositions 13 and 14, we obtain the following result.

### Theorem 15

Let \(\varSigma \) denote a bilinear system (8) and let \(A=A^T\prec 0,N_i=N_i^T\) for \(i=1,\dots ,m\) and \(B=C^T\). Assume that \(\rho ({{\,\mathrm{{\mathscr {L}}}\,}}^{-1}\varPi )<1\). Given an orthogonal \(V\in {\mathbb {R}}^{n \times k},k< n,\) define \({\widehat{\varSigma }}\), the reduced bilinear system (14), via \({\hat{A}}=V^T A V, {\hat{N}}_i=V^TN_i V\) and \({\hat{B}}=V^TB={\hat{C}}^T.\) Assume that \({\hat{X}}\) solves \(\widehat{{\mathscr {M}}}({{\hat{X}}}) = {\hat{B}}{\hat{B}}^T\). If \({\widehat{\varSigma }}\) is locally \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-optimal, then \(V{\hat{X}}V^T\) is locally optimal with respect to the \({\mathscr {M}}\)-norm.

### 3.3 Equivalence of ALS and rank-1 BIRKA

So far we have shown that a subspace producing a locally \({{\,\mathrm{{\mathscr {H}}_2}\,}}\)-optimal model reduction is also a subspace for which the Galerkin approximation is locally optimal in the \({\mathscr {M}}\)-norm. In this part we, algorithmically, establish an equivalence between BIRKA and ALS. More precisely, for the symmetric case the equivalence is between BIRKA applied with the target model reduction subspace of dimension 1 for (8), and ALS applied to (1). The proof is based on the following lemmas.

### Lemma 16

Consider using BIRKA (Algorithm 2) with \(k=1\), i.e., both the initial guesses and the output are vectors. Then \(\tilde{A}\in {\mathbb {R}}\) is a scalar and hence we can take \({{\tilde{\varLambda }}} = \tilde{A}\) and \(R=1\) in Step 2. Thus \({{\hat{B}}} = \tilde{B}\), \({{\hat{C}}} = {{\tilde{C}}}\), \({{\hat{N}}}_1 = {{\tilde{N}}}_1, \dots , {{\hat{N}}}_m = {{\tilde{N}}}_m\), and hence Steps 2–3 are redundant. Moreover, since \({{\tilde{V}}}\) and \({{\tilde{W}}}\) are vectors, Step 6 is redundant.

### Proof

The result follows from direct computation. \(\square \)

When speaking about *redundant* steps and operations we mean that the entities assigned in that step are exactly equal to another, existing, entity. In such a situation the algorithm can be rewritten, by simply changing the notation, in a way that skips the redundant step and still produces the same result.

### Lemma 17

Consider a symmetric generalized Lyapunov equation (7) and let \(v,w\in {\mathbb {R}}^{n}\) be two given vectors. Let \(v_\textsc {birka},w_\textsc {birka}\in {\mathbb {R}}^{n}\) be the approximations obtained by applying BIRKA (Algorithm 2) to (1) with \(C = B^T\) and initial guesses *v* and *w*. If \(v=w\), then \(v_\textsc {birka} = w_\textsc {birka}\).

### Proof

The proof is by induction, and it suffices to show that if \({{\tilde{V}}} = {{\tilde{W}}}\) at the beginning of a loop, the same holds at the end of the loop. Thus assume \({{\tilde{V}}} = {{\tilde{W}}}\). Then \({{\tilde{N}}}_i = ({{\tilde{W}}}^T {{\tilde{V}}})^{-1} {{\tilde{W}}}^T N_i {{\tilde{V}}} = {{\tilde{V}}}^T N_i {{\tilde{V}}}/\Vert {{\tilde{V}}}\Vert ^2 = {{\tilde{V}}}^T N_i^T {{\tilde{V}}}/\Vert {{\tilde{V}}}\Vert ^2 = {{\tilde{N}}}_i^T\) for \(i=1,\dots ,m\), and \({{\tilde{C}}} = C {{\tilde{V}}} = B^T {{\tilde{W}}} = {{\tilde{B}}}^T\). By Lemma 16 we do not need to consider Steps 2–3. We can now conclude that Step 4 and Step 5 are equal, and thus at the end of the iteration we still have \({{\tilde{V}}} = {{\tilde{W}}}\). \(\square \)

### Lemma 18

Consider a symmetric generalized Lyapunov equation (7) and let \(v,w\in {\mathbb {R}}^{n}\) be two given vectors. Let \(v_\textsc {als},w_\textsc {als}\in {\mathbb {R}}^{n}\) be the approximations obtained by applying the ALS algorithm (Algorithm 1) to (1) with initial guesses *v* and *w*. If \(v=w\), then \(v_\textsc {als} = w_\textsc {als}\).

### Proof

Similar to the proof of Lemma 17 it is enough to show that if \(v = w\) at the beginning of a loop then it also holds at the end of the loop. Hence we assume that \(v = w\). Then \({{\hat{A}}}_1 = {{\hat{A}}}_2\) follows by direct calculations. Moreover, by assumption \({\mathscr {R}}_k={\mathscr {R}}_k^T\). Thus Step 3 and Step 6 are equal, and hence at the end of the iteration we still have that \(v=w\). \(\square \)

### Theorem 19

Consider a symmetric generalized Lyapunov equation (7) and let \(v\in {\mathbb {R}}^{n}\) be a given vector. Let \(v_\textsc {birka}\in {\mathbb {R}}^{n}\) be the approximation obtained by applying BIRKA (Algorithm 2) to (1) with \(C= B^T\) and initial guess *v*. Moreover, let \(v_\textsc {als}\in {\mathbb {R}}^n\) be the approximation obtained by applying the ALS algorithm (Algorithm 1) to (1) with initial guess *v*. Then \(v_\textsc {birka}= v_\textsc {als}\).

### Proof

First, Lemma 17 and Lemma 18 make it reasonable to assess the algorithms with only a single initial guess as well as a single output. Moreover, Step 5 in BIRKA as well as Steps 2–4 in ALS are redundant. Furthermore, it follows from Lemma 16 that in this situation Steps 2, 3, and 6 of BIRKA are also redundant. Hence we need to compare the procedure consisting of Steps 1 and 4 from BIRKA with the procedure consisting of Steps 1, 5, and 6 from ALS. It can be observed that the computations are equivalent, and thus the asserted equality holds if the methods stop after an equal number of iterations. We hence consider the stopping criteria and note that they are the same, since \((v^TA^Tv + v^TAv)/2\Vert v\Vert ^2 = v^TAv/\Vert v\Vert ^2 = {{\tilde{A}}} \in {\mathbb {R}}\). \(\square \)

### Corollary 20

Theorem 10 is applicable with ALS changed to BIRKA, using subspaces of dimension 1.

### Remark 21

Note that ALS can be generalized such that the optimization is computing rank-\(\ell \) corrections, see [25, Remark 2.2]. With similar arguments as above, one can show that for symmetric systems this can equivalently be achieved by BIRKA. From a theoretical point of view, this will yield more accurate approximations. However, the computational complexity increases quickly since each ALS or BIRKA step then requires solving a generalized Sylvester equation of dimension \(n\times {\ell }\).

## 4 Fixed-point iteration and approximative \({\mathscr {M}}\)-norm minimization

In the previous section we showed that the ALS-based iteration (11) locally minimizes the error in the \({\mathscr {M}}\)-norm with rank-1 updates. In contrast we here show that the fixed-point iteration minimizes an upper bound for the \({\mathscr {M}}\)-norm, but with no rank constraint on the minimizer.

### Theorem 22

Consider the symmetric generalized Lyapunov equation (7) with the additional assumptions that \(A\prec 0\) and \(\rho ({{\,\mathrm{{\mathscr {L}}}\,}}^{-1}\varPi )<1\). Moreover, consider the sequence of approximations constructed by (18) where \({\mathscr {R}}_{k}\) is the residual associated with \({{\hat{X}}}_k\). Then \({{\hat{X}}}_k={{\hat{X}}}_k^T \succeq 0\) and \({\mathscr {R}}_k={\mathscr {R}}_k^T \succeq 0\), for all \(k\ge 0\).

### Proof

We show the assertion by induction. The claim clearly holds for \(k=0\), since \({{\hat{X}}}_0 = 0\) and \({\mathscr {R}}_0 = BB^T\). Now assume that it holds for some *k*. Then \(\varDelta =-{{\,\mathrm{{\mathscr {L}}}\,}}^{-1}({\mathscr {R}}_k)\) is symmetric and positive semidefinite, and hence \({{\hat{X}}}_{k+1}\) is symmetric and positive semidefinite. Moreover, since \({{\hat{X}}}_{k+1}\) and the operators in (1) are symmetric it follows that \({\mathscr {R}}_{k+1}\) is symmetric. Thus what is left to show is \({\mathscr {R}}_{k+1} \succeq 0\), which is true if and only if \(y^T{\mathscr {R}}_{k+1}y\ge 0\) for all \(y\in {\mathbb {R}}^{n}\). Hence take an arbitrary \(y\in {\mathbb {R}}^{n}\) and consider

### Corollary 23

The fixed-point iteration (17) produces an increasing sequence of approximations \(0={{\hat{X}}}_0 \preceq {{\hat{X}}}_1\preceq \cdots \preceq X\).

### Remark 24

One could consider creating a subspace iteration from (18), by computing a few singular vectors of \({{\,\mathrm{{\mathscr {L}}}\,}}^{-1}({\mathscr {R}}_k)\) and adding these to the basis. The method seems to have nice convergence properties per iteration in the symmetric case, but not in the non-symmetric case. However, the (naïve) computations are prohibitively expensive. See [37] for a computationally more efficient way of exploiting the fixed-point iteration.
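The fixed-point iteration discussed in this section can be sketched as follows, with made-up data and a dense Kronecker-based Lyapunov solve standing in for an efficient solver: the increment \(\varDelta = -{{\,\mathrm{{\mathscr {L}}}\,}}^{-1}({\mathscr {R}}_k)\) is computed from a standard Lyapunov equation, and the increments are positive semidefinite, consistent with Corollary 23.

```python
import numpy as np

# Hypothetical small symmetric problem with rho(L^{-1} Pi) < 1.
rng = np.random.default_rng(5)
n = 5
A = -np.diag(np.arange(1.0, n + 1))
M = rng.standard_normal((n, n))
N = 0.05 * (M + M.T)
B = rng.standard_normal((n, 1))

I = np.eye(n)
L = np.kron(I, A) + np.kron(A, I)

X = np.zeros((n, n))
for _ in range(30):
    # Residual of the current iterate, then Delta = -L^{-1}(R).
    R = A @ X + X @ A.T + N @ X @ N.T + B @ B.T
    Delta = np.linalg.solve(L, -R.ravel(order="F")).reshape((n, n), order="F")
    # Increments are PSD (up to rounding), so the iterates increase monotonically.
    assert np.all(np.linalg.eigvalsh(Delta) > -1e-10)
    X = X + Delta

R = A @ X + X @ A.T + N @ X @ N.T + B @ B.T
print(np.linalg.norm(R))   # ~ 0 after convergence
```

Since the contraction factor is governed by \(\rho ({{\,\mathrm{{\mathscr {L}}}\,}}^{-1}\varPi )\), convergence is fast for this example, where the correction is small.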

## 5 A residual-based rational Krylov generalization

A viable technique for designing iterative methods for the generalized Lyapunov equation seems to be to work with the residual; see the discussion in connection to Proposition 3, and in Sects. 3 and 4. In [25, Section 4] it is suggested that so-called preconditioned residuals can be used to expand the search space. It is further suggested that one such preconditioner could be a one-step-ADI preconditioner \(P^{-1}_\text {ADI} = (A-\sigma I)^{-1}{{\,\mathrm{\otimes }\,}}(A-\sigma I)^{-1}\), for a suitable choice of the shift. We present a method along those lines, and show that it can be seen as a generalization of the rational Krylov subspace method.

### 5.1 Suggested search space

For a stable matrix *A* we let \(\sigma _\text {min}\) be the negative real part of the eigenvalue of *A* with largest real part (closest to 0). Correspondingly, we let \(\sigma _\text {max}\) be the negative real part of the eigenvalue of *A* with smallest real part. Equations (19) and (20) can be straightforwardly incorporated in a Galerkin method for the generalized Lyapunov equation; the pseudocode is presented in Algorithm 3.

### Remark 25

In practice the computation of the left singular vector can typically be done approximately in an iterative fashion. This would also remove the need to compute the approximative solution \({{\hat{X}}}_k\) in Step 5 and the residual in Step 6 explicitly, since the matrix-vector product can be implemented as \({\mathscr {R}}_k v = A V_k Y_k V_k^T v + V_k Y_k V_k^T A^T v + \sum _{i=1}^m N_i V_k Y_k V_k^T N_i^T v + BB^Tv\). However, such computations may introduce inexactness, which can present a difficulty in a subspace method.
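The expansion strategy can be sketched as follows. This is a deliberate simplification of Algorithm 3 on made-up data: a single fixed shift replaces the optimized choice (20), and the space is extended in each step by \((A-\sigma I)^{-1}u\), where *u* is the dominant left singular vector of the current Galerkin residual.

```python
import numpy as np

# Hypothetical small problem; symmetric for simplicity.
rng = np.random.default_rng(6)
n = 20
A = -np.diag(np.linspace(1.0, 50.0, n))
M = rng.standard_normal((n, n))
N = 0.02 * (M + M.T)
B = rng.standard_normal((n, 1))

def galerkin(V):
    """Galerkin approximation of the generalized Lyapunov equation in span(V)."""
    k = V.shape[1]
    Ah, Nh, Bh = V.T @ A @ V, V.T @ N @ V, V.T @ B
    Ik = np.eye(k)
    y = np.linalg.solve(np.kron(Ik, Ah) + np.kron(Ah, Ik) + np.kron(Nh, Nh),
                        -(Bh @ Bh.T).ravel(order="F"))
    return V @ y.reshape((k, k), order="F") @ V.T

V = np.linalg.qr(B)[0]
sigma = 5.0          # fixed illustrative shift; (20) would optimize this
norms = []
for _ in range(8):
    X = galerkin(V)
    R = A @ X + X @ A.T + N @ X @ N.T + B @ B.T
    norms.append(np.linalg.norm(R))
    u = np.linalg.svd(R)[0][:, 0]          # dominant direction of the residual
    new_dir = np.linalg.solve(A - sigma * np.eye(n), u)
    V = np.linalg.qr(np.column_stack([V, new_dir]))[0]

print(norms)   # residual norms decrease as the space grows
```

The dominant singular vector focuses the expansion on the part of the residual that is largest, which is the residual-based heuristic underlying the suggested search space.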

### Remark 26

Here *S* approximates the mirrored spectrum of *A*, \(\partial S\) is the boundary of *S*, and *S* is approximated at each step using the convex hull of the Ritz values of \({{\hat{A}}}_{k-1}\). This strategy has been observed to be efficient in experiments, since the maximization of (21) is computationally faster than that of (20). See Sect. 6 for a practical comparison of convergence properties.

### Remark 27

Steps 8–9 in Algorithm 3 can be replaced by a tangential-direction approach according to [16]. One practical, although heuristic, way is to do the shift search according to either (20) or (21), and then compute the principal direction(s) according to [16, Section 3], i.e., through a singular value decomposition of \({\mathscr {R}}_{k-1} - (A - \sigma _k I)V_{k-1}({{\hat{A}}}_{k-1} - \sigma _k I)^{-1}V_{k-1}^T {\mathscr {R}}_{k-1}\). It has been observed in experiments that such an approach tends to speed up the convergence in terms of computation time, since the computation of the residual is costly.

### Remark 28

It is (sometimes) desirable to allow for complex conjugate shifts \(\sigma _k\) and \(\bar{\sigma }_k\), although, for reasons of computation and model interpretation, one wants to keep the basis real. This goal is achievable using the same idea as in [35]. More precisely, one can utilize the relation \( {{\,\mathrm{Span}\,}}\left\{ (A-\sigma _k I)^{-1}{u}_{k-1},\,(A-\bar{\sigma }_kI)^{-1}{u}_{k-1}\right\} = {{\,\mathrm{Span}\,}}\left\{ {{\,\mathrm{Re}\,}}((A-\sigma _k I)^{-1}{u}_{k-1}),\,{{\,\mathrm{Im}\,}}((A-\sigma _k I)^{-1}{u}_{k-1})\right\} \). Note, however, that this requires the two shifts to be used together with the same vector \({u}_{k-1}\).
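The span identity above can be checked numerically. The sketch below (random made-up data, hypothetical shift) uses the fact that, for real *A* and *u*, \((A-\bar{\sigma }I)^{-1}u\) is the complex conjugate of \((A-\sigma I)^{-1}u\), and compares the column spans via ranks.

```python
import numpy as np

# Real matrix and vector, one complex shift of a conjugate pair.
rng = np.random.default_rng(7)
n = 8
A = rng.standard_normal((n, n))
u = rng.standard_normal(n)
s = 2.0 + 1.5j   # hypothetical shift with nonzero imaginary part

# For real A and u: (A - conj(s) I)^{-1} u = conj((A - s I)^{-1} u).
x = np.linalg.solve(A - s * np.eye(n), u)
complex_pair = np.column_stack([x, np.conj(x)])
real_pair = np.column_stack([x.real, x.imag])

# Equal spans: stacking the pairs together does not increase the rank.
rank = np.linalg.matrix_rank
r_c, r_r = rank(complex_pair), rank(real_pair)
combined = rank(np.column_stack([complex_pair, real_pair]))
print(r_c, r_r, combined)   # all equal, so the spans coincide
```

Working with the real and imaginary parts thus preserves the search space while keeping the basis, and all subsequent computations, real.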

### 5.2 Analogies to the linear case

In this section we draw analogies to the case of the standard *Lyapunov equation*,

\(AX + XA^T + BB^T = 0. \quad (22)\)

### Lemma 29

Let \(A\in {\mathbb {R}}^{n\times n}\) and \(\sigma _a\in {\mathbb {R}}\) be any scalar such that \((A-\sigma _{a}I)\) is nonsingular. Moreover, let \(V\in {\mathbb {R}}^{n\times k}\), \(k\le n\), be orthogonal, i.e., \(V^TV= I\), and let \({\mathscr {R}}\in {\mathbb {R}}^{n\times n}\) be such that \({{\,\mathrm{Range}\,}}((A-\sigma _a I)^{-1}{\mathscr {R}})\subseteq {{\,\mathrm{Span}\,}}(V)\). Then \({\mathscr {R}}= (A-\sigma _aI)V(V^T A V- \sigma _aI)^{-1}V^T{\mathscr {R}}\).

### Proof

### Theorem 30

Let \(A\in {\mathbb {R}}^{n\times n}\), \(B\in {\mathbb {R}}^{n\times r}\), and let \(\{\sigma _\ell \}_{\ell =1}^{k+1}\) be a sequence of shifts such that \(A-\sigma _\ell I\) is nonsingular for \(\ell =1,\dots ,k+1\). Define the space \({{\,\mathrm{{\mathscr {K}}}\,}}_k := {{\,\mathrm{Span}\,}}\{B,(A-\sigma _1 I)^{-1}B,\dots ,\prod _{\ell =1}^k(A-\sigma _\ell I)^{-1}B\}\), and \({{\,\mathrm{{\mathscr {K}}}\,}}_{k+1}\) analogously. Let \(V_k\) be an orthogonal basis of \({{\,\mathrm{{\mathscr {K}}}\,}}_k\), \(V_{k+1}\) an orthogonal basis of \({{\,\mathrm{{\mathscr {K}}}\,}}_{k+1}\), and let \(v_{k+1}\in {\mathbb {R}}^{n\times r}\) be such that \(V_{k+1} = \begin{bmatrix}V_k,&v_{k+1}\end{bmatrix}\).^{1} Moreover, let \({\mathscr {R}}_k\in {\mathbb {R}}^{n\times n}\) be the Galerkin residual with respect to (22). Then each column of \((A-\sigma _{k+1}I)^{-1} {\mathscr {R}}_k\) is in \({{\,\mathrm{Span}\,}}(V_{k+1})\), i.e., \({{\,\mathrm{Range}\,}}((A-\sigma _{k+1}I)^{-1} {\mathscr {R}}_k)\subseteq {{\,\mathrm{Span}\,}}(V_{k+1})\). Furthermore, if \({{\,\mathrm{Range}\,}}((A-\sigma _{k+1}I)^{-1} {\mathscr {R}}_k)\subseteq {{\,\mathrm{Span}\,}}(V_{k})\), then \({\mathscr {R}}_k = 0\).

### Proof

We introduce the notation \(S_{k+1}:=(A-\sigma _{k+1}I)\) and \({{\hat{S}}}_{k+1} := (V^TAV-\sigma _{k+1}I)\).

### Remark 31

The interpretation of Theorem 30 is easiest in the case when \(B=b\in {\mathbb {R}}^n\). Consider the two spaces \({{\,\mathrm{{\mathscr {K}}}\,}}_k := {{\,\mathrm{Span}\,}}\{b,(A-\sigma _1 I)^{-1}b,\dots ,\prod _{\ell =1}^k(A-\sigma _\ell I)^{-1}b\}\) and \({{\hat{{{\,\mathrm{{\mathscr {K}}}\,}}}}}_k := {{\,\mathrm{Span}\,}}\{{\mathscr {R}}_{-1},(A-\sigma _1 I)^{-1}{\mathscr {R}}_0,\dots ,(A-\sigma _kI)^{-1}{\mathscr {R}}_{k-1}\}\), where \({\mathscr {R}}_{-1} = b\) and \({\mathscr {R}}_j\) is the Galerkin residual in space \({{\,\mathrm{{\mathscr {K}}}\,}}_{j}\), with \(j=0,1,\dots ,k-1\). Then for all relevant cases, i.e., \({\mathscr {R}}_j\ne 0\) for \(j=-1,0,\dots ,k-1\), we have that \({{\,\mathrm{{\mathscr {K}}}\,}}_k = {{\hat{{{\,\mathrm{{\mathscr {K}}}\,}}}}}_k\). In this sense the suggested subspace in (19) can be seen as a natural generalization of a rational Krylov subspace for linear matrix equations.

## 6 Numerical examples

We now numerically compare different methods discussed in the paper. All algorithms are treated in a subspace fashion^{2} and we compare practically achieved approximation properties as a function of subspace dimension. Since the paper focuses on the symmetric problem we use Galerkin projection in the tested methods, except BIRKA. However, to (numerically) investigate the domain of application we test the methods on problems with varying degree of symmetry.

*relative error*, i.e.,

A: \({\mathscr {K}}_k\) as in (19), according to Algorithm 3

B: Algorithm 3 but with tangential directions according to Remark 27, though with shifts according to (20)

C: Algorithm 3 but with shifts according to (21)

D: Algorithm 3 but with tangential directions according to Remark 27 and shifts according to (21)

E: Standard rational Krylov. More precisely, similar to Algorithm 3, but instead of using \({u}_{k-1}\) we use the right-hand side *B* in both (19) and (20)

F: \({\mathscr {K}}_k\) as in (19), but with shifts prescribed beforehand, obtained by recycling the mirrored eigenvalues from a size-10 BIRKA run (convergence tolerance set to \(10^{-3}\)). The mirrored eigenvalues are potentially complex, have positive real part, and are taken in ascending order of their real parts.

The simulations were done in Matlab R2018a (9.4.0.813654) on a computer with four 1.6 GHz processors and 16 GB of RAM.

We test the algorithms on three different problems. All examples are bilinear control systems and we approximate the associated controllability Gramian, as in (9). The examples all have stable Lyapunov operators. The first example is symmetric, the second is non-symmetric but symmetrizable, and the third example is non-symmetric.
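For a problem small enough to vectorize, the relative residual norm used in the comparisons can be evaluated directly from the dense equation. A minimal sketch (random test data and all names are our own; the examples below are of course far too large for this Kronecker approach):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
M = rng.standard_normal((n, n))
A = -(M @ M.T) - n * np.eye(n)          # stable coefficient
N = 0.1 * rng.standard_normal((n, n))   # single bilinear coupling N_1 = N
B = rng.standard_normal((n, 1))

# Solve A X + X A^T + N X N^T + B B^T = 0 by vectorization:
# (I (x) A + A (x) I + N (x) N) vec(X) = -vec(B B^T), column-major vec.
L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n)) + np.kron(N, N)
X = np.linalg.solve(L, -(B @ B.T).ravel(order="F")).reshape((n, n), order="F")

# Relative residual norm of a candidate approximation (here the direct solve).
R = A @ X + X @ A.T + N @ X @ N.T + B @ B.T
rel_res = np.linalg.norm(R) / np.linalg.norm(B @ B.T)
print(rel_res)  # close to machine precision
```

The small coupling \(N\) keeps the spectral radius condition \(\rho ({{\,\mathrm{{\mathscr {L}}}\,}}^{-1}\varPi )<1\) satisfied, so the vectorized system is nonsingular.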

### 6.1 Heat equation

The state *w* models the evolution of a temperature and is described by a two-dimensional heat equation, where the control *u*(*t*) enters bilinearly from the left through a Robin condition.

We compare the different methods discussed in the paper, in terms of both the relative residual norm and the relative error. For readability the plots have been split over different figures: in Fig. 1 we compare across different classes of methods, and in Fig. 2 we compare between different flavors of the rational-Krylov-type methods. It can be observed, see Fig. 1, that for this example BIRKA performs extremely well, even outperforming the SVD in relative residual norm. Nevertheless, the larger BIRKA subspaces can be rather costly to compute. ALS also performs well compared to the rational-Krylov-type subspaces, and is rather cheap to compute. When comparing the different rational-Krylov-type methods, see Fig. 2, we see that the convergence of standard rational Krylov (E) stagnates. The methods A, C, and F have similar performance. In comparison, B and D are only slightly worse in terms of error per subspace dimension, but are in practice sometimes faster to compute.

### 6.2 1D Fokker–Planck

The system matrix *A* is not asymptotically stable, due to a simple zero eigenvalue associated with the stationary probability distribution. Using a projection-based decoupling, it is however possible to work with an asymptotically stable system of dimension \(n=4999\). Similar to the first example, the control variable is a scalar and, consequently, there is only a single bilinear coupling matrix \(N_1=N\). Since the system is non-symmetric, the operator \({\mathscr {M}}\) is generally indefinite and hence we make no comparisons in the \({\mathscr {M}}\)-norm.

The plots in Figs. 4 and 5 are analogous to the plots in Figs. 1 and 2, respectively. However, for this example the direct solver stagnated at a relative residual of about \(10^{-8}\), which can be seen in the stagnation of the SVD approximation in the left of Fig. 4. As a result, the comparisons of the relative error, in the right of Figs. 4 and 5, show an artificial stagnation: at a certain level the measure stagnates since it captures the discrepancy between the method approximations and the inexact reference solution, rather than the true error of the method approximations. Nevertheless we believe the comparisons to be fair more or less up to the point of stagnation, which is justified by the relative residual plots showing similar behavior. However, the relative residual indicates stagnation around \(10^{-8}\) for the other methods as well, although not quite as clearly as for the SVD.

From Fig. 4 we see that BIRKA performs well for this example. However, the subspaces of dimension 28 and 29 did not converge in 100 iterations, and hence for clarity these are left out of the plots; this illustrates a drawback of the method. The performance difference between ALS and the rational-Krylov-type methods is slightly smaller compared to the previous example. Among the rational-Krylov-type methods, A, B, and F seem to have similar performance, whereas C is clearly worse. Method E is competitive for about 10 iterations, after which the convergence is significantly slower, and method D ends up with an insufficient subspace.

### 6.3 Burgers’ equation

Here *u*(*t*) is an applied control input. The solution *w*(*x*, *t*) can be interpreted as a velocity, and the equation occurs in, e.g., the modeling of gas or traffic flow. The problem is discretized in space using centered finite differences with 71 uniformly distributed grid points. Using a second order Carleman bilinearization, we obtain a bilinear control system approximation with \(A,N\in {\mathbb {R}}^{5112\times 5112}\) and \(B\in {\mathbb {R}}^{5112}\); see [10] for further details. Note that in this case *A* is an asymptotically stable but non-symmetric matrix. To ensure the positive semidefiniteness of the Gramian, we scale the control matrices *N* and *B* with a factor \(\alpha =0.25\). We emphasize that the control law is scaled proportionally with \(\frac{1}{\alpha }\) so that the dynamics remain unchanged; for further discussion see [9, Section 3.4].

The comparison is similar to the previous examples, and Figs. 6 and 7 are analogous to Figs. 1 and 2, respectively. The problem is difficult in the sense that the singular values of the solution decay slowly. Moreover, the direct method stagnates at a relative residual norm of \(5\times 10^{-6}\). This is, however, less visible compared to the previous example since the convergence is in general slower.

For this example the performance of BIRKA is not significantly better than that of the other methods, which is not surprising since the theoretical justifications for the method are not valid. ALS shows faster convergence in relative residual norm but slower convergence in relative error, as well as indications of stagnation. However, the theoretical justifications for ALS are also not valid for this example, and the result is in line with the results in [25]. Regarding the rational-Krylov-type methods, it seems as if methods B and D have the best performance, whereas method E does not provide a useful subspace for this example.

### 6.4 Execution time experiment

In this situation, and for the chosen parameters, BIRKA is faster for the heat equation and slower for the Fokker–Planck equation. In the case of the Burgers' equation it seems as if BIRKA is faster. However, if we take the approximation properties into account we find, by looking at Fig. 6, that a fairer comparison with method A is to consider the latter only up to iteration 30. Moreover, fixing the subspace dimension, rather than the tolerance, is (likely) advantageous for BIRKA.

## 7 Conclusions and outlooks

We have proposed a rational-Krylov-type subspace for solving the generalized Lyapunov equation. Simulations indicate competitive performance, at least in the non-symmetric case, where the optimality statements for the other methods are no longer valid. Simulations show that methods A and F perform well for all three examples. The ALS iteration, as well as results from the literature, cf. [1], seems to indicate that subspaces of the type \((A-\sigma I -\mu N_i)^{-1}B\) could be useful, although we have not been able to exploit this efficiently. Another generalization of the rational Krylov subspace, for general linear matrix equations, is presented in [32], where it is suggested to use subspaces of the type \((A-\sigma I)^{-1}v\) and \((N_i-\sigma I)^{-1}v\), where *v* is a vector from the previous space. More research is needed to understand the theoretical aspects of the suggested, and related, spaces.

Common to all the methods studied is that they use the current residual in the iterations. Computing the residual can in itself be costly for a truly large-scale problem, although approximate dominant directions can be computed in an iterative fashion, resulting in an inner-outer-type iteration. However, more research is needed to understand the consequences of such inexact subspaces.
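To make the last point concrete, dominant directions of the residual can be approximated matrix-free. The following sketch (our own construction, with hypothetical sizes and random data) applies a Krylov eigensolver to the symmetric residual of a low-rank iterate \(X=ZZ^T\), using only matrix-vector products and never forming the residual explicitly:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(2)
n, p = 300, 5
M = rng.standard_normal((n, n))
A = -(M @ M.T) / n - np.eye(n)          # stable coefficient
N = 0.1 * rng.standard_normal((n, n))   # bilinear coupling
B = rng.standard_normal((n, 1))
Z = rng.standard_normal((n, p))         # low-rank iterate X = Z Z^T

def res_mv(v):
    """Apply R = A Z Z^T + Z Z^T A^T + N Z Z^T N^T + B B^T without forming R."""
    return (A @ (Z @ (Z.T @ v)) + Z @ (Z.T @ (A.T @ v))
            + N @ (Z @ (Z.T @ (N.T @ v))) + B @ (B.T @ v))

Rop = LinearOperator((n, n), matvec=res_mv, dtype=float)
vals, vecs = eigsh(Rop, k=3, which="LM")   # dominant eigenpairs of symmetric R

# Check against the dense residual for this small instance.
R = A @ Z @ Z.T + Z @ Z.T @ A.T + N @ Z @ Z.T @ N.T + B @ B.T
dense = np.sort(np.abs(np.linalg.eigvalsh(R)))[::-1][:3]
print(np.sort(np.abs(vals))[::-1] - dense)
```

Each application of the operator costs \(O(np)\) plus the matrix-vector products with \(A\), \(N\), and \(B\), so the dominant directions are obtained without the \(O(n^2)\) storage of the dense residual.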

## Footnotes

- 1.
Here we have, implicitly, assumed that the dimension of \({{\,\mathrm{{\mathscr {K}}}\,}}_{k+1}\) is \((k+2)r\), i.e., that all the columns in the definition of the space are linearly independent.

- 2.
The technique of turning an iterative method, such as, e.g., ALS, into a subspace method is known as *Galerkin acceleration*. The idea is nicely explained in [25, Section 3].

## Notes

### Acknowledgements

We wish to thank the anonymous referees, whose comments helped improve the manuscript. The authors also wish to thank Elias Jarlebring (KTH) for support and discussions. This research was initiated when the second author visited the first author at the Karl-Franzens-Universität in Graz; the kind hospitality was greatly appreciated. The visit was made possible by the generous support of the European Model Reduction Network (COST Action TD1307, STSM Grant 38025).

## References

- 1. Ahmad, M., Baur, U., Benner, P.: Implicit Volterra series interpolation for model reduction of bilinear systems. J. Comput. Appl. Math. **316**(Supplement C), 15–28 (2017)
- 2. Al-Baiyat, S.A., Bettayeb, M.: A new model reduction scheme for k-power bilinear systems. In: Proceedings of 32nd IEEE Conference on Decision and Control, vol. 1, pp. 22–27 (1993)
- 3. Baars, S., Viebahn, J., Mulder, T., Kuehn, C., Wubs, F., Dijkstra, H.: Continuation of probability density functions using a generalized Lyapunov approach. J. Comput. Phys. **336**, 627–643 (2017)
- 4. Becker, S., Hartmann, C.: Infinite-dimensional bilinear and stochastic balanced truncation with error bounds. Technical report. arXiv:1806.05322 (2018)
- 5. Benner, P., Breiten, T.: Interpolation-based \({\mathscr {H}}_2\)-model reduction of bilinear control systems. SIAM J. Matrix Anal. Appl. **33**(3), 859–885 (2012)
- 6. Benner, P., Breiten, T.: Low rank methods for a class of generalized Lyapunov equations and related issues. Numer. Math. **124**(3), 441–470 (2013)
- 7. Benner, P., Breiten, T.: On optimality of approximate low rank solutions of large-scale matrix equations. Syst. Control Lett. **67**, 55–64 (2014)
- 8. Benner, P., Bujanović, Z., Kürschner, P., Saak, J.: RADI: a low-rank ADI-type algorithm for large scale algebraic Riccati equations. Numer. Math. **138**(2), 301–330 (2018)
- 9. Benner, P., Damm, T.: Lyapunov equations, energy functionals, and model order reduction of bilinear and stochastic systems. SIAM J. Control Optim. **49**(2), 686–711 (2011)
- 10. Breiten, T., Damm, T.: Krylov subspace methods for model order reduction of bilinear control systems. Syst. Control Lett. **59**(8), 443–450 (2010)
- 11. Breiten, T., Kunisch, K., Pfeiffer, L.: Numerical study of polynomial feedback laws for a bilinear control problem. Math. Control Relat. Fields **8**(3&4), 557–582 (2018)
- 12. Damm, T.: Direct methods and ADI-preconditioned Krylov subspace methods for generalized Lyapunov equations. Numer. Linear Algebra Appl. **15**(9), 853–871 (2008)
- 13. Druskin, V., Knizhnerman, L., Zaslavsky, M.: Solution of large scale evolutionary problems using rational Krylov subspaces with optimized shifts. SIAM J. Sci. Comput. **31**(5), 3760–3780 (2009)
- 14. Druskin, V., Lieberman, C., Zaslavsky, M.: On adaptive choice of shifts in rational Krylov subspace reduction of evolutionary problems. SIAM J. Sci. Comput. **32**(5), 2485–2496 (2010)
- 15. Druskin, V., Simoncini, V.: Adaptive rational Krylov subspaces for large-scale dynamical systems. Syst. Control Lett. **60**(8), 546–560 (2011)
- 16. Druskin, V., Simoncini, V., Zaslavsky, M.: Adaptive tangential interpolation in rational Krylov subspaces for MIMO dynamical systems. SIAM J. Matrix Anal. Appl. **35**(2), 476–498 (2014)
- 17. Eppler, K., Tröltzsch, F.: Fast optimization methods in the selective cooling of steel. In: Grötschel, M., Krumke, S., Rambau, J. (eds.) Online Optimization of Large Scale Systems, pp. 185–204. Springer, Berlin (2001)
- 18. Flagg, G., Beattie, C., Gugercin, S.: Convergence of the iterative rational Krylov algorithm. Syst. Control Lett. **61**(6), 688–691 (2012)
- 19. Flagg, G., Gugercin, S.: Multipoint Volterra series interpolation and \({\mathscr {H}}_2\) optimal model reduction of bilinear systems. SIAM J. Matrix Anal. Appl. **36**(2), 549–579 (2015)
- 20. Golub, G., Van Loan, C.: Matrix Computations, 4th edn. The Johns Hopkins University Press, Baltimore (2013)
- 21. Gugercin, S., Antoulas, A., Beattie, C.: \({\mathscr {H}}_2\) model reduction for large-scale linear dynamical systems. SIAM J. Matrix Anal. Appl. **30**(2), 609–638 (2008)
- 22. Hartmann, C., Schäfer-Bung, B., Thöns-Zueva, A.: Balanced averaging of bilinear systems with applications to stochastic control. SIAM J. Control Optim. **51**(3), 2356–2378 (2013)
- 23. Horn, R., Johnson, C.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)
- 24. Jarlebring, E., Mele, G., Palitta, D., Ringh, E.: Krylov methods for low-rank commuting generalized Sylvester equations. Numer. Linear Algebra Appl. **25**(6), e2176 (2018)
- 25. Kressner, D., Sirković, P.: Truncated low-rank methods for solving general linear matrix equations. Numer. Linear Algebra Appl. **22**(3), 564–583 (2015)
- 26. Kressner, D., Tobler, C.: Krylov subspace methods for linear systems with tensor product structure. SIAM J. Matrix Anal. Appl. **31**(4), 1688–1714 (2010)
- 27. Lin, Y., Simoncini, V.: Minimal residual methods for large scale Lyapunov equations. Appl. Numer. Math. **72**, 52–71 (2013)
- 28. Massei, S., Palitta, D., Robol, L.: Solving rank-structured Sylvester and Lyapunov equations. SIAM J. Matrix Anal. Appl. **39**(4), 1564–1590 (2018)
- 29. Mehrmann, V., Tan, E.: Defect correction method for the solution of algebraic Riccati equations. IEEE Trans. Autom. Control **33**(7), 695–698 (1988)
- 30. Mohler, R.R., Kolodziej, W.J.: An overview of bilinear system theory and applications. IEEE Trans. Syst. Man Cybern. **10**(10), 683–688 (1980)
- 31. Neudecker, H.: A matrix trace inequality. J. Math. Anal. Appl. **166**(1), 302–303 (1992)
- 32. Powell, C.E., Silvester, D., Simoncini, V.: An efficient reduced basis solver for stochastic Galerkin matrix equations. SIAM J. Sci. Comput. **39**(1), A141–A163 (2017)
- 33. Richter, S., Davis, L.D., Collins Jr., E.G.: Efficient computation of the solutions to modified Lyapunov equations. SIAM J. Matrix Anal. Appl. **14**(2), 420–431 (1993)
- 34. Ringh, E., Mele, G., Karlsson, J., Jarlebring, E.: Sylvester-based preconditioning for the waveguide eigenvalue problem. Linear Algebra Appl. **542**, 441–463 (2018). Proceedings of the 20th ILAS Conference, 2016, Leuven, Belgium
- 35. Ruhe, A.: The rational Krylov algorithm for nonsymmetric eigenvalue problems. III: complex shifts for real matrices. BIT **34**(1), 165–176 (1994)
- 36. Shaker, H.R., Tahavori, M.: Control configuration selection for bilinear systems via generalised Hankel interaction index array. Int. J. Control **88**(1), 30–37 (2015)
- 37. Shank, S.D., Simoncini, V., Szyld, D.B.: Efficient low-rank solution of generalized Lyapunov equations. Numer. Math. **134**(2), 327–342 (2016)
- 38. Simoncini, V.: Computational methods for linear matrix equations. SIAM Rev. **58**(3), 377–441 (2016)
- 39. Smith, R.: Matrix equation \(XA + BX = C\). SIAM J. Appl. Math. **16**(1), 198–201 (1968)
- 40. Vandereycken, B., Vandewalle, S.: A Riemannian optimization approach for computing low-rank solutions of Lyapunov equations. SIAM J. Matrix Anal. Appl. **31**(5), 2553–2579 (2010)
- 41. Zhang, L., Lam, J.: On \(H_2\) model reduction of bilinear systems. Autom. J. IFAC **38**(2), 205–216 (2002)

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.