1 Introduction

In this work we continue the investigation of the value function associated with infinite-horizon optimal control problems for partial differential equations which we initiated in [15, 17]. We consider a stabilization problem for the two-dimensional Navier–Stokes equations and focus on the regularity of the value function and on its characterization as a solution to a Hamilton–Jacobi–Bellman (HJB) equation. This task has been the subject of intensive research for optimal control problems of a general structure, mostly associated with finite-dimensional dynamical systems. There, the notion of viscosity solutions makes it possible to deal with the low regularity of the value function. In the present paper, by contrast, we show that the value function is smooth and that the HJB equation is satisfied in the strict sense in a neighborhood of the steady state. Moreover, we show that the derivatives of the value function at the steady state are solutions to an algebraic Riccati equation (for the second-order derivative) and to linear equations, called generalized Lyapunov equations, for the higher orders. The main interest of these results is that polynomial feedback laws can be derived from Taylor approximations of the value function, and that their efficiency can be analyzed.

From a methodological point of view, we mainly follow the techniques that we laid out for bilinear optimal control problems (such as control problems for the Fokker–Planck equation) in [15, 17]. The Navier–Stokes control system considered here, however, requires a different functional analytic treatment. In particular, the nonlinear terms involved must be handled with different estimates in order to guarantee, for example, the well-posedness of the closed-loop system; they also lead to different generalized Lyapunov equations. Moreover, from the point of view of open-loop control of the Navier–Stokes equations, this paper contains results on infinite-horizon optimal control which are not readily available elsewhere.

Feedback stabilization of the Navier–Stokes equations has been and still is an active topic of research. Among the numerous works, we refer to, e.g., [6, 7, 10, 24, 38], and the references therein. For literature concerning open-loop optimal control of the Navier–Stokes equations, we can only cite a small selection [13, 18, 19, 21, 22, 27, 30, 42].

The technique of approximating the value function by a Taylor expansion dates back to [3, 35], where optimal control problems associated with finite-dimensional control systems were investigated. We also mention the follow-up work in, for instance, [2, 8, 36]. For infinite-dimensional problems, we are only aware of [15, 17]. In [16], the numerical solvability of the Lyapunov equations has been addressed; model reduction techniques based on balanced truncation are used in that reference to cope with the curse of dimensionality encountered when dealing with PDE-controlled systems.

Let us next specify the problem investigated in this paper. Throughout, \(\Omega \subset \mathbb R^2\) denotes a bounded domain with Lipschitz boundary \(\Gamma \). Given two vector-valued functions \(\varvec{\varphi }\) and \(\varvec{\psi }\), we consider a solution \(({\bar{{\mathbf {z}}}},{\bar{q}})\) of the stationary Navier–Stokes equations

$$\begin{aligned} \begin{aligned} -\nu \Delta {\bar{{\mathbf {z}}}} + ({\bar{{\mathbf {z}}}} \cdot \nabla ) {\bar{{\mathbf {z}}}} + \nabla {\bar{q}}&= \varvec{\varphi } \quad \text {in } \Omega , \\ {{\,\mathrm{div}\,}}{\bar{{\mathbf {z}}}}&= 0 \quad \text {in } \Omega , \\ {\bar{{\mathbf {z}}}}&=\varvec{\psi } \,\,\,\text {on } \Gamma . \end{aligned} \end{aligned}$$
(1)

Our goal is to find a control u such that the solution \(({\mathbf {z}},q)\) to the transient Navier–Stokes equations

$$\begin{aligned} \begin{aligned} \frac{\partial {\mathbf {z}}}{\partial t}&= \nu \Delta {\mathbf {z}}- ({\mathbf {z}}\cdot \nabla ){\mathbf {z}}- \nabla q +\varvec{\varphi } + {\tilde{B}} u&\text {in } \Omega \times (0,T), \\ {{\,\mathrm{div}\,}}{\mathbf {z}}&= 0&\text {in } \Omega \times (0,T), \\ {\mathbf {z}}&= \varvec{\psi }&\text {on } \Gamma \times (0,T), \\ {\mathbf {z}}(0)&= {\bar{{\mathbf {z}}}} + {\mathbf {y}}_0&\end{aligned} \end{aligned}$$
(2)

is stabilized around \({\bar{{\mathbf {z}}}},\) i.e., \(\lim \limits _{t\rightarrow \infty } {\mathbf {z}}(t) = {\bar{{\mathbf {z}}}}\) provided the initial perturbation \({\mathbf {y}}_0\) is small in an appropriate sense. The control operator \({\tilde{B}}\) will be defined below. Throughout this work, we assume that \({{\,\mathrm{div}\,}}{\mathbf {y}}_0=0\). Our results are concerned with feedback stabilization of (2) and for this purpose, we consider new state variables \(({\mathbf {y}},p):=({\mathbf {z}},q)-({\bar{{\mathbf {z}}}},{\bar{q}})\) which satisfy the following generalized Navier–Stokes equations

$$\begin{aligned} \begin{aligned} \frac{\partial {\mathbf {y}}}{\partial t}&= \nu \Delta {\mathbf {y}}- ({\mathbf {y}}\cdot \nabla ){\bar{{\mathbf {z}}}} - ({\bar{{\mathbf {z}}}} \cdot \nabla ){\mathbf {y}}-({\mathbf {y}}\cdot \nabla ){\mathbf {y}}- \nabla p + {\tilde{B}} u&\text {in } \Omega \times (0,T), \\ {{\,\mathrm{div}\,}}{\mathbf {y}}&= 0&\text {in } \Omega \times (0,T), \\ {\mathbf {y}}&= 0&\text {on } \Gamma \times (0,T), \\ {\mathbf {y}}(0)&= {\mathbf {y}}_0.&\end{aligned} \end{aligned}$$
(3)

The remaining sections are structured as follows. The problem statement and fundamental results on the state equation on the time interval \([0,\infty )\) are given in Sect. 2. Section 3 contains the existence theory for optimal controls, the adjoint equation, a sensitivity analysis, and the differentiability of the value function. The characterization of all higher-order derivatives of the value function as solutions to generalized Lyapunov equations is provided in Sect. 4. Section 5 contains the Taylor expansion of the value function and estimates of the convergence rates between the optimal solution and its approximations based on feedback laws obtained from derivatives of the value function. The paper closes with a very short outlook.

Notation For Hilbert spaces \(V\subset Y\) with dense and compact embedding, we consider the Gelfand triple \(V\subset Y \subset V'\), where \(V'\) denotes the topological dual of V with respect to the pivot space Y. Given \(T>0\), we consider the space

$$\begin{aligned} W(0,T) =\left\{ y \in L^2(0,T;V) \ \big | \ \frac{\mathrm {d}}{\mathrm {d}t} y \in L ^2(0,T;V') \right\} . \end{aligned}$$

For \(T=\infty \), the space W(0, T) will be denoted by \(W_\infty \). For vector-valued functions \({\mathbf {f}}\in (L^2(\Omega ))^2\), we use the notation \({\mathbf {f}}\in {\mathbb {L}}^2(\Omega )\). Elements \({\mathbf {f}}\in \mathbb L^{2}(\Omega )\) will be denoted in boldface and are distinguished from real-valued functions \(g \in L^2(\Omega )\). Similarly, we use \({\mathbb {H}}^2(\Omega )\) for the space \((H^2(\Omega ))^2\) and \({\mathbb {H}}^1_0(\Omega )\) for \((H^1_0(\Omega ))^2\). Given a closed, densely defined linear operator \((A,{\mathcal {D}}(A))\) in Y, its adjoint (again considered as an operator in Y) will be denoted with \((A^*,{\mathcal {D}}(A^*))\).

Let us introduce some notation that will be needed for the description of polynomial mappings. For \(\delta \ge 0\) and a Hilbert space Y, we denote by \(B_Y(\delta )\) the closed ball in Y with radius \(\delta \) and center 0. For \(k \ge 1\), we make use of the following norm:

$$\begin{aligned} \Vert (y_1,\dots ,y_k) \Vert _{Y^k}= \max _{i=1,\dots ,k} \Vert y_i \Vert _Y. \end{aligned}$$
(4)

Given a Hilbert space Z, we say that \({\mathcal {T}}:Y^k \rightarrow Z\) is a bounded multilinear mapping (or bounded multilinear form when \(Z= {\mathbb {R}}\)) if for all \(i \in \{ 1,\dots ,k \}\) and for all \((z_1,\dots ,z_{i-1},z_{i+1},\dots ,z_k) \in Y^{k-1}\), the mapping \(z \in Y \mapsto {\mathcal {T}}(z_1,\dots ,z_{i-1},z,z_{i+1},\dots ,z_k) \in Z\) is linear and

$$\begin{aligned} \Vert {\mathcal {T}} \Vert := \sup _{y \in B_{Y^k}(1)} \Vert {\mathcal {T}}(y) \Vert _Z < \infty . \end{aligned}$$
(5)

The set of bounded multilinear mappings on \(Y^k\) will be denoted by \({\mathcal {M}}(Y^k,Z)\). For all \({\mathcal {T}} \in {\mathcal {M}}(Y^k,Z)\) and for all \((z_1,\dots ,z_k) \in Y^k\),

$$\begin{aligned} \Vert {\mathcal {T}}(z_1,\dots ,z_k) \Vert _Z \le \Vert {\mathcal {T}} \Vert \, \prod _{i=1}^k \Vert z_i \Vert _Y. \end{aligned}$$

Given a bounded multilinear form \({\mathcal {T}}\) and \((z_2,\dots ,z_k) \in Y^{k-1}\), we denote by \({\mathcal {T}}(\cdot ,z_2,\dots ,z_k)\) the bounded linear form \(z_1 \in Y \mapsto {\mathcal {T}}(z_1,\dots ,z_k) \in {\mathbb {R}}\). It will very often be identified with its Riesz representative. Note that

$$\begin{aligned} \Vert {\mathcal {T}}(\cdot ,z_2,\dots ,z_k) \Vert _Y = \sup _{z_1 \in B_Y(1)} {\mathcal {T}}(z_1,\dots ,z_k) \le \Vert {\mathcal {T}} \Vert \prod _{i=2}^k \Vert z_i \Vert _Y. \end{aligned}$$
(6)

A bounded multilinear mapping \({\mathcal {T}} \in {\mathcal {M}}(Y^k,Z)\) is said to be symmetric if for all \((z_1,\dots ,z_k) \in Y^k\) and for all permutations \(\sigma \) of \(\{ 1,\dots ,k \}\),

$$\begin{aligned} {\mathcal {T}}(z_{\sigma (1)},\dots ,z_{\sigma (k)})= {\mathcal {T}}(z_1,\dots ,z_k). \end{aligned}$$

Finally, given two multilinear mappings \({\mathcal {T}}_1 \in {\mathcal {M}}(Y^k,Z)\) and \({\mathcal {T}}_2 \in {\mathcal {M}}(Y^{\ell },Z)\), we denote by \({\mathcal {T}}_1 \otimes {\mathcal {T}}_2\) the bounded multilinear form defined by

$$\begin{aligned} {\mathcal {T}}_1 \otimes {\mathcal {T}}_2(z_1,\dots ,z_{k+ \ell }) = \langle {\mathcal {T}}_1(z_1,\dots ,z_k), {\mathcal {T}}_2(z_{k+1},\dots ,z_{k+\ell }) \rangle _Z. \end{aligned}$$
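To make the notation above concrete, the following minimal sketch (in Python, not part of the paper) considers the finite-dimensional case \(Y={\mathbb {R}}^n\), \(Z={\mathbb {R}}\), \(k=2\): a bounded bilinear form is represented by a matrix, and the product bound and estimate (6) are checked numerically. The matrix and vectors are arbitrary illustrative data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Bounded bilinear form T(z1, z2) = z1^T M z2 on Y = R^n (k = 2, Z = R).
M = rng.standard_normal((n, n))
T = lambda z1, z2: z1 @ M @ z2
T_norm = np.linalg.norm(M, 2)            # sup over the unit ball = spectral norm of M

z1, z2 = rng.standard_normal(n), rng.standard_normal(n)

# |T(z1, z2)| <= ||T|| ||z1|| ||z2||
assert abs(T(z1, z2)) <= T_norm * np.linalg.norm(z1) * np.linalg.norm(z2) + 1e-12

# The Riesz representative of T(., z2) is M z2; estimate (6): ||T(., z2)|| <= ||T|| ||z2||
riesz = M @ z2
assert np.linalg.norm(riesz) <= T_norm * np.linalg.norm(z2) + 1e-12
```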

Throughout the manuscript, we use M as a generic constant whose value may change from line to line.

2 Problem Formulation

2.1 Abstract Cauchy Problem

In this section, we formulate system (3) as an abstract Cauchy problem on a suitable Hilbert space and, subsequently, define the stabilization problem of interest. This procedure is quite standard, see, for instance, [6, 7, 24, 38, 40] for details. We introduce the spaces

$$\begin{aligned} Y&:=\left\{ {\mathbf {y}}\in {\mathbb {L}}^2(\Omega )\, |\, {{\,\mathrm{div}\,}}{\mathbf {y}}=0,\, {\mathbf {y}}\cdot \vec {n}=0 \text { on } \Gamma \right\} , \\ V&:=\left\{ {\mathbf {y}}\in {\mathbb {H}}_0^1(\Omega )\, | \, {{\,\mathrm{div}\,}}{\mathbf {y}}=0 \right\} . \end{aligned}$$

It is well-known that Y is a closed subspace of \({\mathbb {L}}^2(\Omega )\). Moreover, we have the orthogonal decomposition

$$\begin{aligned} {\mathbb {L}}^2(\Omega ) = Y \oplus Y^\perp , \end{aligned}$$
(7)

where

$$\begin{aligned} Y^\perp = \left\{ {\mathbf {z}}=\nabla p \, | \, p \in H^1(\Omega ) \right\} , \end{aligned}$$
(8)

see, e.g., [40, p. 15]. By P we denote the Leray projector \(P:{\mathbb {L}}^2(\Omega ) \rightarrow Y\), i.e., the orthogonal projection in \({\mathbb {L}}^{2}(\Omega )\) onto Y. Following, e.g., [6], we define a trilinear form s by

$$\begin{aligned} s({\mathbf {u}},{\mathbf {v}},{\mathbf {w}}):=\int _{\Omega } \sum _{i,j=1}^2 u_i w_j \frac{\partial v_j}{\partial x_i} \, \text {d}x = \langle ({\mathbf {u}}\cdot \nabla ){\mathbf {v}},{\mathbf {w}}\rangle _{{\mathbb {L}}^{2}(\Omega )} , \ \ \forall {\mathbf {u}},{\mathbf {v}},{\mathbf {w}}\in V \end{aligned}$$
(9)

and a nonlinear operator \(F:V\rightarrow V'\) by

$$\begin{aligned} \langle F({\mathbf {y}}),{\mathbf {w}}\rangle _{V',V} := s({\mathbf {y}},{\mathbf {y}},{\mathbf {w}}), \ \ \forall {\mathbf {w}}\in V. \end{aligned}$$
(10)

For the bilinear mapping associated with the linearization of F, we introduce the operator

$$\begin{aligned} N:V\times V \rightarrow V', \ \ \langle N({\mathbf {y}},{\mathbf {z}}),{\mathbf {w}}\rangle _{V',V}:=s({\mathbf {y}},{\mathbf {z}},{\mathbf {w}}). \end{aligned}$$
(11)

The Oseen operator is then defined by

$$\begin{aligned} A_{0} :V\times V \rightarrow V', \ \ \langle A_{0}({\mathbf {y}},{\mathbf {z}}),{\mathbf {w}}\rangle _{V',V} := \langle N({\mathbf {y}},{\mathbf {z}})+N({\mathbf {z}},{\mathbf {y}}),{\mathbf {w}}\rangle _{V',V}. \end{aligned}$$
(12)

The following well-known results (see, e.g., [6, 40, Lemma III.3.4]) concerning s and N will be used frequently throughout the paper.

Proposition 1

The following properties hold for N and s:

(i) \(\Vert N({\mathbf {y}},{\mathbf {z}})\Vert _{V'}\le M \Vert {\mathbf {y}}\Vert ^{\frac{1}{2}}_{Y} \Vert {\mathbf {z}}\Vert ^{\frac{1}{2}}_{Y} \Vert {\mathbf {y}}\Vert ^{\frac{1}{2}}_{V} \Vert {\mathbf {z}}\Vert ^{\frac{1}{2}}_{V} \), for all \({\mathbf {y}}, {\mathbf {z}}\in V\),

(ii) \(s({\mathbf {y}},{\mathbf {z}},{\mathbf {w}})=-s({\mathbf {y}},{\mathbf {w}},{\mathbf {z}})\), for all \({\mathbf {y}},{\mathbf {z}},{\mathbf {w}}\in V\).
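As an illustration of property (ii), the following minimal Python sketch (not part of the paper) checks the skew-symmetry of s numerically. It uses a pseudo-spectral discretization on the \(2\pi \)-periodic torus as a surrogate for the no-slip setting, since the cancellation only requires \({{\,\mathrm{div}\,}}{\mathbf {u}}=0\) and the absence of boundary terms; all fields are arbitrary smooth test data.

```python
import numpy as np

n = 64
h = 2 * np.pi / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers on the 2*pi torus
KX, KY = np.meshgrid(k, k, indexing="ij")

def dx(f): return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(f)))
def dy(f): return np.real(np.fft.ifft2(1j * KY * np.fft.fft2(f)))
grad = (dx, dy)

def s(u, v, w):
    # s(u, v, w) = integral of sum_{i,j} u_i w_j d v_j / d x_i
    integrand = sum(u[i] * w[j] * grad[i](v[j]) for i in range(2) for j in range(2))
    return integrand.sum() * h * h

psi = np.sin(X) * np.cos(2 * Y)                   # stream function
u = (dy(psi), -dx(psi))                           # divergence-free velocity field
v = (np.cos(X + Y), np.sin(X) * np.cos(Y))
w = (np.sin(2 * X), np.cos(Y))

print(s(u, v, w) + s(u, w, v))                    # ~ 0: s(u, v, w) = -s(u, w, v)
```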

With the previous result, we obtain similar properties for time-varying functions \({\mathbf {y}},{\mathbf {z}},{\mathbf {w}}\).

Lemma 2

Let \(T \in (0,\infty ]\). For all \({\mathbf {y}}\in W(0,T)\), for all \({\mathbf {z}}\in W(0,T)\), and for all \({\mathbf {w}}\in L^2(0,T;V)\),

$$\begin{aligned}&\langle N({\mathbf {y}},{\mathbf {z}}),{\mathbf {w}}\rangle _{L^2(0,T;V'),L^2(0,T;V)} \\&\quad \le M \Vert {\mathbf {y}}\Vert _{L^\infty (0,T;Y)}^{\frac{1}{2}} \Vert {\mathbf {y}}\Vert _{L^2(0,T;V)}^{\frac{1}{2}} \Vert {\mathbf {z}}\Vert _{L^\infty (0,T;Y)}^{\frac{1}{2}} \Vert {\mathbf {z}}\Vert _{L^2(0,T;V)}^{\frac{1}{2}} \Vert {\mathbf {w}}\Vert _{L^2(0,T;V)}. \end{aligned}$$

Moreover, if \({\mathbf {w}}\in L^\infty (0,T;V)\),

$$\begin{aligned}&\langle N({\mathbf {y}},{\mathbf {z}}),{\mathbf {w}}\rangle _{L^2(0,T;V'),L^2(0,T;V)} \\&\quad \le M \Vert {\mathbf {y}}\Vert _{L^2(0,T;Y)}^{\frac{1}{2}} \Vert {\mathbf {y}}\Vert _{L^2(0,T;V)}^{\frac{1}{2}} \Vert {\mathbf {z}}\Vert _{L^2(0,T;Y)}^{\frac{1}{2}} \Vert {\mathbf {z}}\Vert _{L^2(0,T;V)}^{\frac{1}{2}} \Vert {\mathbf {w}}\Vert _{L^\infty (0,T;V)}, \end{aligned}$$

where M is the constant given by Proposition 1.

Proof

Using Proposition 1 and the Cauchy–Schwarz inequality (twice), we obtain that

$$\begin{aligned}&\langle N({\mathbf {y}},{\mathbf {z}}),{\mathbf {w}}\rangle _{L^{2}(0,T;V'),L^{2}(0,T;V)} \le \ M \int _0^T \Vert {\mathbf {y}}(t) \Vert _Y^{\frac{1}{2}} \Vert {\mathbf {y}}(t) \Vert _V^{\frac{1}{2}} \Vert {\mathbf {z}}(t) \Vert _Y^{\frac{1}{2}} \Vert {\mathbf {z}}(t) \Vert _V^{\frac{1}{2}} \Vert {\mathbf {w}}(t) \Vert _V \, \mathrm {d}t \\&\quad \le \ M \Vert {\mathbf {y}}\Vert _{L^2(0,T;V)}^{\frac{1}{2}} \Vert {\mathbf {z}}\Vert _{L^2(0,T;V)}^{\frac{1}{2}} \Big ( \int _0^T \Vert {\mathbf {y}}(t) \Vert _Y \Vert {\mathbf {z}}(t) \Vert _Y \Vert {\mathbf {w}}(t) \Vert _V^2 \, \mathrm {d}t \Big )^{\frac{1}{2}}. \end{aligned}$$

The two inequalities easily follow. \(\square \)

Corollary 3

There exists \(M>0\) such that for all \({\mathbf {y}}\) and \({\mathbf {z}}\in W_\infty \),

$$\begin{aligned} \Vert N({\mathbf {y}},{\mathbf {z}}) \Vert _{L^2(0,\infty ;V')} \le M \Vert {\mathbf {y}}\Vert _{W_\infty } \Vert {\mathbf {z}}\Vert _{W_\infty }. \end{aligned}$$

For \({\bar{{\mathbf {z}}}} \in V\), we further introduce the Stokes-Oseen operator A via

$$\begin{aligned} {\mathcal {D}}(A) = {\mathbb {H}}^2(\Omega )\cap V, \ \ A{\mathbf {y}}= P(\nu \Delta {\mathbf {y}}- ({\mathbf {y}}\cdot \nabla ){\bar{{\mathbf {z}}}} - ({\bar{{\mathbf {z}}}} \cdot \nabla ){\mathbf {y}}). \end{aligned}$$
(13)

Considered as an operator in \({\mathbb {L}}^2(\Omega )\), the adjoint \(A^*\) can be characterized by (see, e.g., [38])

$$\begin{aligned} {\mathcal {D}}(A^*) = {\mathbb {H}}^2(\Omega )\cap V, \ \ A^*{\mathbf {p}}= P(\nu \Delta {\mathbf {p}}- (\nabla {\bar{{\mathbf {z}}}})^T {\mathbf {p}}+ ({\bar{{\mathbf {z}}}} \cdot \nabla ){\mathbf {p}}). \end{aligned}$$
(14)

We note that as a consequence of Proposition 1, the operator A can be extended to a bounded linear operator from V to \(V'\) in the following manner:

$$\begin{aligned} \langle A {\mathbf {y}}, {\mathbf {w}}\rangle _{V',V} = - \nu \langle \nabla {\mathbf {y}}, \nabla {\mathbf {w}}\rangle _{{\mathbb {L}}^2(\Omega )} - \langle A_0({\bar{{\mathbf {z}}}}, {\mathbf {y}}), {\mathbf {w}}\rangle _{V',V}. \end{aligned}$$

Note that this extension is consistent, since by definition of the Leray projector P, we have \(\langle P {\mathbf {y}}, {\mathbf {w}}\rangle _Y = \langle {\mathbf {y}}, {\mathbf {w}}\rangle _Y\) for all \({\mathbf {y}}\in {\mathbb {L}}^2(\Omega )\) and for all \({\mathbf {w}}\in V\). Similarly, \(A^*\) can be extended to a bounded linear operator from V to \(V'\).

The control operator is chosen such that \({\tilde{B}}\in {\mathcal {L}}(U,{\mathbb {L}}^2(\Omega ))\). We further define \(B:=P\tilde{B} \in {\mathcal {L}}(U,Y)\). The controlled state equation (3) can now be formulated as the abstract control system

$$\begin{aligned} \begin{aligned} {\dot{{\mathbf {y}}}}(t)&= A{\mathbf {y}}- F({\mathbf {y}}) + Bu, \quad {\mathbf {y}}(0) ={\mathbf {y}}_0, \end{aligned} \end{aligned}$$
(15)

where the pressure p is eliminated. We can finally formulate the stabilization problem as an infinite-horizon optimal control problem:

$$\begin{aligned} \inf _{\begin{array}{c} {\mathbf {y}}\in W_\infty \\ u \in L^2(0,\infty ;U) \end{array}} J({\mathbf {y}},u), \quad \text {subject to: } e({\mathbf {y}},u)= (0,{\mathbf {y}}_0), \end{aligned}$$
(P)

where \(J :W_\infty \times L^2(0,\infty ;U) \rightarrow {\mathbb {R}}\) and \(e :W_\infty \times L^2(0,\infty ;U) \rightarrow L^2(0,\infty ;V') \times Y\) are defined by

$$\begin{aligned} J({\mathbf {y}},u)=&\frac{1}{2} \int _0^\infty \Vert {\mathbf {y}}(t)\Vert ^2_Y \, \text {d}t + \frac{\alpha }{2} \int _0^\infty \Vert u(t) \Vert _U^2 \, \text {d}t \end{aligned}$$
(16)
$$\begin{aligned} e({\mathbf {y}},u)=&\big ( {\dot{{\mathbf {y}}}}-(A{\mathbf {y}}- F({\mathbf {y}}) + Bu), {\mathbf {y}}(0) \big ). \end{aligned}$$
(17)

Let us note that \(e:W_\infty \times L^2(0,\infty ;U) \rightarrow L^2(0,\infty ;V') \times Y\) is well-defined by Corollary 3.

2.2 Assumptions and First Properties

Throughout the article we assume that the following assumptions hold true.

Assumption A1

The stationary solution satisfies \({\bar{{\mathbf {z}}}} \in V\).

Assumption A2

There exists an operator \(K \in {\mathcal {L}}(Y,U)\) such that the semigroup \(e^{(A-BK) t}\) is exponentially stable on Y.

Assumption A2, concerning the exponential feedback stabilizability of the Stokes–Oseen operator, has been thoroughly investigated. We refer, e.g., to [6], where finite-dimensional feedback operators are constructed on the basis of a spectral decomposition or, alternatively, by Riccati theory. In this case A2 can be satisfied with \(U={\mathbb {R}}^m\) for m appropriately large. Alternatively, we can rely on the exact controllability results obtained in [23], which imply that the finite cost criterion holds. Classical results, see, e.g., [37], then guarantee the existence of a stabilizing feedback operator.
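For readers who wish to experiment, the following minimal sketch (in Python, not part of the paper) illustrates the content of Assumption A2 on a finite-dimensional surrogate: for a matrix pair (A, B), playing the role of a Galerkin approximation of the Stokes–Oseen operator and of the control operator, a feedback gain K is computed by pole placement so that A − BK is exponentially stable. The matrices used here are random placeholders.

```python
import numpy as np
from scipy.signal import place_poles

rng = np.random.default_rng(1)
n, m = 6, 2
A = rng.standard_normal((n, n))                       # possibly unstable surrogate of A
B = rng.standard_normal((n, m))                       # surrogate of the control operator

K = place_poles(A, B, np.linspace(-1.0, -2.0, n)).gain_matrix
print(np.max(np.linalg.eigvals(A - B @ K).real))      # negative: A - BK is exponentially stable
```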

Let us discuss some important consequences of the above definitions and assumptions.

Consequence C1

There exist \(\lambda \ge 0\) and \(\theta >0\) such that

$$\begin{aligned} \langle (\lambda I-A){\mathbf {v}},{\mathbf {v}}\rangle _Y \ge \theta \Vert {\mathbf {v}}\Vert _V^2, \quad \text {for all } {\mathbf {v}}\in V. \end{aligned}$$
(18)

Hence, A generates an analytic semigroup \(e^{At}\) on Y, see [12, Part II, Chapter 1, Theorem 2.12].

Consequence C2

For all \({\mathbf {y}}_0 \in Y\), for all \({\mathbf {f}}\in L^2(0,\infty ;V')\), and for all \(T>0\), there exists a unique solution \({\mathbf {y}}\in W(0,T)\) to the system

$$\begin{aligned} {\dot{{\mathbf {y}}}}=A{\mathbf {y}}+ {\mathbf {f}}, \quad {\mathbf {y}}(0)={\mathbf {y}}_0. \end{aligned}$$

This solution satisfies

$$\begin{aligned} \Vert {\mathbf {y}}\Vert _{W(0,T)} \le c(T) (\Vert {\mathbf {y}}_0\Vert _Y + \Vert {\mathbf {f}}\Vert _{L^2(0,\infty ;V')} ) \end{aligned}$$

with a continuous function c. Assuming that \({\mathbf {y}}\in L^2(0,\infty ;Y)\), we consider the equivalent equation

$$\begin{aligned} {\dot{{\mathbf {y}}}}=\underbrace{(A- \lambda I)}_{A_\lambda } {\mathbf {y}}+ \underbrace{\lambda {\mathbf {y}}+ {\mathbf {f}}}_{{\mathbf {f}}_\lambda }, \quad {\mathbf {y}}(0)={\mathbf {y}}_0, \end{aligned}$$

where \({\mathbf {f}}_\lambda \in L^2(0,\infty ;V')\). By (18), the operator \(A_\lambda \) generates an analytic, exponentially stable semigroup on Y satisfying \(\Vert e^{A_\lambda t} \Vert _Y \le e^{- \delta t}\) for some \( \delta > 0\) independent of \(t \ge 0\), see [12, Theorem II.1.2.12]. It follows that \({\mathbf {y}}\in W_\infty \) and that there exists \(M_\lambda \) such that

$$\begin{aligned} \Vert {\mathbf {y}}\Vert _{W_\infty } \le M_{\lambda } ( \Vert {\mathbf {y}}_0 \Vert _Y + \Vert {\mathbf {f}}_{\lambda } \Vert _{L^2(0,\infty ;V')} ). \end{aligned}$$
(19)

This estimate is obtained by adapting [12, Corollary II.3.2.1] and [12, Theorem II.3.2.2] from the temporal domain (0, T) to \((0,\infty )\), which can be achieved using the exponential stability of \(e^{A_\lambda t}\).

Lemma 4

There exists a constant \(C>0\) such that for all \(\delta \in [0,1]\) and for all \({\mathbf {y}}\) and \({\mathbf {z}}\in W_\infty \) with \(\Vert {\mathbf {y}}\Vert _{W_\infty }\le \delta \) and \(\Vert {\mathbf {z}}\Vert _{W_\infty }\le \delta \), it holds that

$$\begin{aligned} \Vert F({\mathbf {y}})-F({\mathbf {z}})\Vert _{L^2(0,\infty ;V')} \le \delta C\Vert {\mathbf {y}}-{\mathbf {z}}\Vert _{W_\infty }. \end{aligned}$$

Proof

We have

$$\begin{aligned} \Vert F({\mathbf {y}})-F({\mathbf {z}})\Vert _{L^2(0,\infty ;V')}&= \Vert N({\mathbf {y}},{\mathbf {y}})-N({\mathbf {z}},{\mathbf {z}})\Vert _{L^2(0,\infty ;V')} \\&\le \Vert N({\mathbf {y}}-{\mathbf {z}},{\mathbf {y}})\Vert _{L^2(0,\infty ;V')} + \Vert N({\mathbf {z}},{\mathbf {y}}-{\mathbf {z}})\Vert _{L^2(0,\infty ;V')}. \end{aligned}$$

The assertion now easily follows from Corollary 3. \(\square \)

The following lemma is formulated for an abstract generator \(A_{s}\) of an analytic semigroup on Y. It will subsequently be used to address the asymptotic behavior of the nonlinear system (15). We point out that the statement is similar to [38, Theorem 6.1] which, since it addresses the boundary control case, assumes a slightly more regular initial condition \({\mathbf {y}}_0\in {\mathbb {H}}^\varepsilon (\Omega )\cap Y\).

Lemma 5

Let \(A_s\) be the generator of an exponentially stable analytic semigroup \(e^{A_s t}\) on Y such that (18) holds. Let C denote the constant from Lemma 4. Then there exists a constant \(M_s\) such that for all \({\mathbf {y}}_0 \in Y\) and \({\mathbf {f}}\in L^2(0,\infty ;V')\) with

$$\begin{aligned} \gamma := \Vert {\mathbf {y}}_0\Vert _Y + \Vert {\mathbf {f}}\Vert _{L^2(0,\infty ;V')} \le \frac{1}{4CM_s^2} \end{aligned}$$

the system

$$\begin{aligned} {\dot{{\mathbf {y}}}}= A_s{\mathbf {y}}- F({\mathbf {y}}) +{\mathbf {f}}, \quad {\mathbf {y}}(0)= {\mathbf {y}}_0 \end{aligned}$$
(20)

has a unique solution \({\mathbf {y}}\) in \(W_\infty \), which moreover satisfies

$$\begin{aligned} \Vert {\mathbf {y}}\Vert _{W_\infty } \le 2 M_s \gamma . \end{aligned}$$

Proof

We follow the line of argumentation provided in the proof of [38, Theorem 6.1]. Since the semigroup \(e^{A_s t}\) is exponentially stable on Y, it follows that for all \(({\mathbf {y}}_0,{\mathbf {g}})\in Y\times L^2(0,\infty ;V')\) the system

$$\begin{aligned} {\dot{{\mathbf {z}}}}=A_s {\mathbf {z}}+ {\mathbf {g}}, \quad {\mathbf {z}}(0)={\mathbf {y}}_0 \end{aligned}$$

has a unique solution \({\mathbf {z}}\in W_\infty \). Moreover, there exists a constant \(M_s\) such that

$$\begin{aligned} \Vert {\mathbf {z}}\Vert _{W_\infty } \le M_s (\Vert {\mathbf {y}}_0 \Vert _Y + \Vert {\mathbf {g}}\Vert _{L^2(0,\infty ;V')} ). \end{aligned}$$
(21)

Without loss of generality we can assume that \(M_s\ge \frac{1}{2C}\). We claim that the constant \(M_s\) is the one announced in the assertion. This will be shown by a fixed-point argument applied to the system (20). For this purpose, let us define \({\mathcal {M}}=\left\{ {\mathbf {y}}\in W_\infty \ | \ \Vert {\mathbf {y}}\Vert _{W_\infty } \le 2 M_s \gamma \right\} \) and let us define the mapping \({\mathcal {Z}}:{\mathcal {M}} \ni {\mathbf {y}}\mapsto {\mathbf {z}}={\mathcal {Z}}({\mathbf {y}})\in W_\infty \), where \({\mathbf {z}}\) is the unique solution of

$$\begin{aligned} {\dot{{\mathbf {z}}}}=A_s{\mathbf {z}}- F({\mathbf {y}}) + {\mathbf {f}}, \quad {\mathbf {z}}(0)={\mathbf {y}}_0. \end{aligned}$$

If \({\mathcal {Z}}\) has a unique fixed point, then it is the unique solution of (20) in \({\mathcal {M}}\). With C and \(M_s\) given, we shall use Lemma 4 with \(\delta = 2M_s\gamma \le \frac{1}{2CM_s}\le 1\). Together with (21), it follows that

$$\begin{aligned} \Vert {\mathbf {z}}\Vert _{W_\infty }&\le M_s ( \Vert F({\mathbf {y}}) \Vert _{L^2(0,\infty ;V')} + \Vert {\mathbf {f}}\Vert _{L^2(0,\infty ;V')} + \Vert {\mathbf {y}}_0 \Vert _Y ) \\&\le M_s \left( \frac{1}{2M_s} \Vert {\mathbf {y}}\Vert _{W_\infty }+ \gamma \right) \le 2 M_s \gamma . \end{aligned}$$

This implies \({\mathcal {Z}}({\mathcal {M}})\subseteq {\mathcal {M}}\). For \({\mathbf {y}}_1,{\mathbf {y}}_2\in {\mathcal {M}}\) consider now \({\mathbf {z}}={\mathcal {Z}}({\mathbf {y}}_1)-{\mathcal {Z}}({\mathbf {y}}_2)\) solving

$$\begin{aligned} {\dot{{\mathbf {z}}}}= A_s{\mathbf {z}}- F({\mathbf {y}}_1)+F({\mathbf {y}}_2), \quad {\mathbf {z}}(0)=0. \end{aligned}$$

Again by (21) and Lemma 4 we obtain

$$\begin{aligned} \Vert {\mathcal {Z}}({\mathbf {y}}_1)-{\mathcal {Z}}({\mathbf {y}}_2) \Vert _{W_\infty }&= \Vert {\mathbf {z}}\Vert _{W_\infty }\le M_s (\Vert F({\mathbf {y}}_1)-F({\mathbf {y}}_2) \Vert _{L^2(0,\infty ;V')} ) \\&\le M_s \delta C \Vert {\mathbf {y}}_1 - {\mathbf {y}}_2 \Vert _{W_\infty } \le \frac{1}{2} \Vert {\mathbf {y}}_1 -{\mathbf {y}}_2 \Vert _{W_\infty }. \end{aligned}$$

In other words, \({\mathcal {Z}}\) is a contraction in \({\mathcal {M}}\) and therefore there exists a unique \({\mathbf {y}}\in {\mathcal {M}}\) such that \({\mathcal {Z}}({\mathbf {y}})={\mathbf {y}}\). Regarding uniqueness in \(W_{\infty }\), consider two solutions \({\mathbf {y}},{\mathbf {z}}\in W_{\infty }\). The difference \({\mathbf {e}}:={\mathbf {y}}-{\mathbf {z}}\) then satisfies

$$\begin{aligned} {\dot{{\mathbf {e}}}}=A_{s} {\mathbf {e}}- F({\mathbf {y}})+F({\mathbf {z}}), \ \ {\mathbf {e}}(0)=0. \end{aligned}$$

Testing the equation with \({\mathbf {e}}\) yields

$$\begin{aligned} \frac{1}{2}\frac{\mathrm {d}}{\mathrm {d}t}\Vert {\mathbf {e}}\Vert _{Y}^{2} = \langle A_{s} {\mathbf {e}},{\mathbf {e}}\rangle _{Y} - \langle F({\mathbf {y}})-F({\mathbf {z}}),{\mathbf {e}}\rangle _{V',V}. \end{aligned}$$

Since \(A_{s}\) satisfies an inequality of the form (18), we have

$$\begin{aligned} \frac{1}{2}\frac{\mathrm {d}}{\mathrm {d}t} \Vert {\mathbf {e}}\Vert _{Y}^{2} \le \alpha \Vert {\mathbf {e}}\Vert _{Y}^{2} - \beta \Vert {\mathbf {e}}\Vert _{V}^{2} + \Vert F({\mathbf {y}})-F({\mathbf {z}})\Vert _{V'} \Vert {\mathbf {e}}\Vert _{V}, \end{aligned}$$

where \(\alpha \ge 0\) and \(\beta >0 \). Using Proposition 1 and Young’s inequality we further obtain

$$\begin{aligned} \frac{1}{2}\frac{\mathrm {d}}{\mathrm {d}t} \Vert {\mathbf {e}}\Vert _{Y}^{2}&\le \alpha \Vert {\mathbf {e}}\Vert _{Y}^{2} - \beta \Vert {\mathbf {e}}\Vert _{V}^{2} + M\left( \Vert {\mathbf {e}}\Vert _{Y}^{\frac{1}{2}}\Vert {\mathbf {y}}\Vert _{Y}^{\frac{1}{2}}\Vert {\mathbf {e}}\Vert _{V}^{\frac{1}{2}} \Vert {\mathbf {y}}\Vert _{V}^{\frac{1}{2}}\right. \\&\quad \left. +\,\, \Vert {\mathbf {e}}\Vert _{Y}^{\frac{1}{2}}\Vert {\mathbf {z}}\Vert _{Y}^{\frac{1}{2}}\Vert {\mathbf {e}}\Vert _{V}^{\frac{1}{2}} \Vert {\mathbf {z}}\Vert _{V}^{\frac{1}{2}} \right) \Vert {\mathbf {e}}\Vert _{V}\\&\le \alpha \Vert {\mathbf {e}}\Vert _{Y}^{2} - \beta \Vert {\mathbf {e}}\Vert _{V}^{2} + \frac{M}{ \iota }\Vert {\mathbf {e}}\Vert ^{2}_{V} + \frac{M\iota }{2}\Vert {\mathbf {e}}\Vert _{V} \left( \Vert {\mathbf {e}}\Vert _{Y}\Vert {\mathbf {y}}\Vert _{Y} \Vert {\mathbf {y}}\Vert _{V}\right. \\&\quad \left. + \,\,\Vert {\mathbf {e}}\Vert _{Y} \Vert {\mathbf {z}}\Vert _{Y} \Vert {\mathbf {z}}\Vert _{V}\right) \\&\le \alpha \Vert {\mathbf {e}}\Vert _{Y}^{2} - \beta \Vert {\mathbf {e}}\Vert _{V}^{2} + \frac{M}{ \iota }\Vert {\mathbf {e}}\Vert _{V} ^{2} + \frac{M\iota }{2\kappa } \Vert {\mathbf {e}}\Vert _{V}^{2}+ \frac{M\iota \kappa }{4} \left( \Vert {\mathbf {e}}\Vert _{Y}^{2}\Vert {\mathbf {y}}\Vert _{Y} ^{2}\Vert {\mathbf {y}}\Vert _{V}^{2}\right. \\&\quad \left. + \,\,\Vert {\mathbf {e}}\Vert _{Y}^{2}\Vert {\mathbf {z}}\Vert _{Y}^{2} \Vert {\mathbf {z}}\Vert _{V}^{2}\right) . \end{aligned}$$

Taking \(\iota \) and \(\kappa \) sufficiently large, it holds that

$$\begin{aligned} \frac{1}{2}\frac{\mathrm {d}}{\mathrm {d}t} \Vert {\mathbf {e}}\Vert _{Y}^{2}&\le \left( \alpha + \frac{M\iota \kappa }{4} \left( \Vert {\mathbf {y}}\Vert _{Y} ^{2}\Vert {\mathbf {y}}\Vert _{V}^{2}+ \Vert {\mathbf {z}}\Vert _{Y}^{2} \Vert {\mathbf {z}}\Vert _{V}^{2}\right) \right) \Vert {\mathbf {e}}\Vert _{Y}^{2}. \end{aligned}$$

Since \({\mathbf {y}},{\mathbf {z}}\in W_{\infty }\) and \({\mathbf {e}}(0)=0\), Gronwall's inequality implies that \({\mathbf {e}}(t)=0\) for all \(t\ge 0\). Hence \({\mathbf {y}}={\mathbf {z}}\), which shows the uniqueness of the solution in \(W_{\infty }\). \(\square \)
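The contraction argument in the proof can be mimicked numerically. The following minimal sketch (in Python, not part of the paper) uses a scalar surrogate of (20), \(\dot{y} = -y - y^2\) with a small initial datum, and iterates the map \({\mathcal {Z}}\): each iterate solves the linear problem with the nonlinearity frozen at the previous iterate, and the successive differences shrink geometrically.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar surrogate of (20): y' = A_s y - F(y) + f with A_s = -1, F(y) = y^2, f = 0.
y0, T = 0.2, 20.0
t = np.linspace(0.0, T, 400)

y = np.zeros_like(t)                                  # initial guess inside the ball M
for _ in range(8):
    # One application of Z: solve z' = -z - y_old^2, z(0) = y0 with y_old frozen.
    sol = solve_ivp(lambda s, z: -z - np.interp(s, t, y) ** 2,
                    (0.0, T), [y0], t_eval=t, rtol=1e-9, atol=1e-11)
    y_new = sol.y[0]
    print(np.max(np.abs(y_new - y)))                  # differences decrease geometrically
    y = y_new
```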

The following two corollaries are consequences of Lemmas 4 and 5. The constant C which is employed is the one given by Lemma 4.

Corollary 6

There exists a constant \(M_K>0\) such that for all \({\mathbf {y}}_0\in Y\) and for all \({\mathbf {f}}\in L^2(0,\infty ;V')\) with

$$\begin{aligned} \gamma :=\Vert {\mathbf {y}}_0 \Vert _Y + \Vert {\mathbf {f}}\Vert _{L^2(0,\infty ;V')} \le \frac{1}{4CM_K^2} \end{aligned}$$

there exists a control \(u \in L^2(0,\infty ;U)\) such that the system

$$\begin{aligned} {\dot{{\mathbf {y}}}}= A{\mathbf {y}}+ Bu - F({\mathbf {y}}) +{\mathbf {f}}, \quad {\mathbf {y}}(0)= {\mathbf {y}}_0 \end{aligned}$$
(22)

has a unique solution \({\mathbf {y}}\in W_\infty \) satisfying

$$\begin{aligned} \Vert {\mathbf {y}}\Vert _{W_\infty } \le 2M_K \gamma \quad \text {and} \quad \Vert u\Vert _{L^2(0,\infty ;U)}\le 2 \Vert K\Vert _{{\mathcal {L}}(Y)} M_K \gamma . \end{aligned}$$

Proof

By Assumption A2, there exists K such that \(A-BK\) generates an exponentially stable, analytic semigroup on Y. The result then follows by applying Lemma 5 to the system

$$\begin{aligned} {\dot{{\mathbf {y}}}}=(A-BK){\mathbf {y}}-F({\mathbf {y}})+{\mathbf {f}}, \quad {\mathbf {y}}(0)={\mathbf {y}}_0. \end{aligned}$$

and by defining \(u= -K{\mathbf {y}}\). \(\square \)

In the following corollary, we assume without loss of generality that the constant \(M_\lambda \) given by Consequence C2 is such that \(M_\lambda \ge \frac{1}{2C}\).

Corollary 7

Let \(({\mathbf {y}}_0,{\mathbf {f}})\in Y\times L^2(0,\infty ;V')\) and let \(u \in L^2(0,\infty ;U)\) be such that the system

$$\begin{aligned} {\dot{{\mathbf {y}}}} = A{\mathbf {y}}- F({\mathbf {y}}) + Bu + {\mathbf {f}}, \ \ {\mathbf {y}}(0)={\mathbf {y}}_0 \end{aligned}$$

has a solution \({\mathbf {y}}\in L^2(0,\infty ;Y)\). If

$$\begin{aligned} \gamma :=\Vert {\mathbf {y}}_0 \Vert _Y + \Vert {\mathbf {f}}+ \lambda {\mathbf {y}}+ Bu \Vert _{L^2(0,\infty ;V')} \le \frac{1}{4CM_\lambda ^2} , \end{aligned}$$

then \({\mathbf {y}}\in W_\infty \) and it holds that

$$\begin{aligned} \Vert {\mathbf {y}}\Vert _{W_\infty } \le 2M_{\lambda } \gamma . \end{aligned}$$

Proof

Since \({\mathbf {y}}\in L^2(0,\infty ;Y)\), we can apply Lemma 5 to the equivalent system

$$\begin{aligned} {\dot{{\mathbf {y}}}} = (A-\lambda I){\mathbf {y}}- F({\mathbf {y}}) +{\tilde{{\mathbf {f}}}}, \end{aligned}$$

where \({\tilde{{\mathbf {f}}}} = {\mathbf {f}}+ \lambda {\mathbf {y}}+ Bu\). This shows the assertion. \(\square \)

3 Differentiability of the Value Function

In this section we perform a sensitivity analysis for the stabilization problem. The main purpose is to analyze the dependence of solutions to (P) with respect to the initial condition \({\mathbf {y}}_{0}\) and to show the differentiability of the associated value function, defined by

$$\begin{aligned} {\mathcal {V}}({\mathbf {y}}_{0}) = \inf _{\begin{array}{c} {\mathbf {y}}\in W_\infty \\ u \in L^2(0,\infty ;U) \end{array}} J({\mathbf {y}},u), \quad \text {subject to: } e({\mathbf {y}},u)= (0,{\mathbf {y}}_0). \end{aligned}$$

3.1 Existence of a Solution and Optimality Conditions

In Lemma 8 we prove the existence of a solution \(({\bar{{\mathbf {y}}}},{\bar{u}})\) to problem (P), assuming that \(\Vert {\mathbf {y}}_0 \Vert _Y\) is sufficiently small. We then derive first-order necessary optimality conditions in Proposition 10.

Lemma 8

There exists \(\delta _{1} >0 \) such that for all \({\mathbf {y}}_0 \in B_Y(\delta _{1})\), problem (P) possesses a solution \(({\bar{{\mathbf {y}}}},{\bar{u}})\). Moreover, there exists a constant \(M> 0\) independent of \({\mathbf {y}}_{0}\) such that

$$\begin{aligned} \max ( \Vert {\bar{u}} \Vert _{L^2(0,\infty ;U)},\Vert {\bar{{\mathbf {y}}}}\Vert _{W_\infty }) \le M \Vert {\mathbf {y}}_{0}\Vert _{Y}. \end{aligned}$$
(23)

Proof

Let us set, for the moment, \(\delta _{1} = \frac{1}{4CM_K^2}\), where C is as in Lemma 4 and \(M_K\) denotes the constant from Corollary 6. Applying this corollary (with \({\mathbf {f}}=0\)), we obtain that for \({\mathbf {y}}_0 \in B_Y(\delta _1)\), there exists a control \(u \in L^2(0,\infty ;U)\) with associated state \({\mathbf {y}}\) satisfying

$$\begin{aligned} \max ( \Vert u \Vert _{L^2(0,\infty ;U)},\Vert {\mathbf {y}}\Vert _{W_\infty } ) \le M\Vert {\mathbf {y}}_{0} \Vert _{Y}, \end{aligned}$$

where \(M=2 M_K \max (1, \Vert K\Vert _{{\mathcal {L}}(Y)})\). We can thus consider a minimizing sequence \(({\mathbf {y}}_n,u_n)_{n \in \mathbb N}\) with \(J({\mathbf {y}}_n,u_n) \le { M^2 \Vert {\mathbf {y}}_0\Vert _Y^2 (1+\alpha )}\). We therefore have for all \(n \in {\mathbb {N}}\) that

$$\begin{aligned} \Vert {\mathbf {y}}_n \Vert _{L^2(0,\infty ;Y)} \le \sqrt{2}M \Vert {\mathbf {y}}_0\Vert _Y \sqrt{1+ \alpha } \quad \text {and} \quad \Vert u_n \Vert _{L^2(0,\infty ;U)} \le \sqrt{2} M \Vert {\mathbf {y}}_0\Vert _Y \frac{\sqrt{1+ \alpha }}{\sqrt{\alpha }}. \end{aligned}$$

After possibly further reducing \(\delta _{1}\), we obtain that

$$\begin{aligned} \Vert {\mathbf {y}}_0\Vert _Y + \Vert \lambda {\mathbf {y}}_n +Bu_n\Vert _{L^2(0,\infty ;Y)}&\le \left[ 1 + M \sqrt{2(1 + \alpha )} \left( \lambda + \frac{\Vert B \Vert _{{\mathcal {L}}(U,Y)}}{\sqrt{\alpha }} \right) \right] \delta _1\\&\le \frac{1}{4CM_\lambda ^2}, \end{aligned}$$

where \(M_\lambda \) is as in Corollary 7. It then follows that the sequence \(({\mathbf {y}}_n)_{n \in \mathbb N}\) is bounded in \(W_\infty \) with \(\sup _{n\in \mathbb N} \Vert {\mathbf {y}}_n \Vert _{W_{\infty }} \le 2M_\lambda \Vert {\mathbf {y}}_{0}\Vert _Y\). Extracting if necessary a subsequence, there exists \(({\bar{{\mathbf {y}}}},{\bar{u}}) \in W_\infty \times L^2(0,\infty ;U)\) such that \(({\mathbf {y}}_n,u_n) \rightharpoonup ({\bar{{\mathbf {y}}}},{\bar{u}}) \in W_\infty \times L^2(0,\infty ;U)\), and \(({\bar{{\mathbf {y}}}},{\bar{u}})\) satisfies (23).

Let us prove that \(({\bar{{\mathbf {y}}}},{\bar{u}})\) is feasible and optimal. For any \(T>0\) let us consider an arbitrary \({\mathbf {z}}\in H^{1}(0,T;V)\). For all \(n \in \mathbb N\), we have

$$\begin{aligned} \int _0^T \left\langle {\dot{{\mathbf {y}}}}_n(t),{\mathbf {z}}(t) \right\rangle _{V',V} \mathrm {d}t = \int _0^T \langle A{\mathbf {y}}_n (t) - F({\mathbf {y}}_n(t))+B u_n(t),{\mathbf {z}}(t) \rangle _{V',V}\, \mathrm {d}t. \end{aligned}$$
(24)

Since \({\dot{{\mathbf {y}}}}_n \rightharpoonup \dot{{\bar{{\mathbf {y}}}}}\) in \(L^2(0,T;V')\), we can pass to the limit on the left-hand side of the above equality. Moreover, since \(A{\mathbf {y}}_n \rightharpoonup A{\bar{{\mathbf {y}}}}\) in \(L^2(0,T;V')\),

$$\begin{aligned} \int _0^T \langle A{\mathbf {y}}_n (t), {\mathbf {z}}(t) \rangle _{V',V} \mathrm {d}t \underset{n \rightarrow \infty }{\longrightarrow }\int _0^T \langle A {\bar{{\mathbf {y}}}}(t),{\mathbf {z}}(t) \rangle _{V',V}\, \mathrm {d}t. \end{aligned}$$

Analogously, we obtain that

$$\begin{aligned} \int _0^T \langle Bu _n (t), {\mathbf {z}}(t) \rangle _{V',V} \mathrm {d}t \underset{n \rightarrow \infty }{\longrightarrow }\int _0^T \langle B{\bar{u}}(t),{\mathbf {z}}(t) \rangle _{V',V}\, \mathrm {d}t. \end{aligned}$$

We also have

$$\begin{aligned}&\left| \int _0^T \langle F({\mathbf {y}}_n(t))-F({\bar{{\mathbf {y}}}}(t)),{\mathbf {z}}(t) \rangle _{V',V} \, \mathrm {d}t \right| \\&\quad =\left| \int _0^T \langle N({\mathbf {y}}_n(t),{\mathbf {y}}_n(t))-N({\bar{{\mathbf {y}}}}(t),{\bar{{\mathbf {y}}}}(t)), {\mathbf {z}}(t) \rangle _{V',V} \, \mathrm {d}t \right| . \end{aligned}$$

By Lemma 2, it then follows that

$$\begin{aligned}&\left| \int _0^T \langle F({\mathbf {y}}_n(t))-F({\bar{{\mathbf {y}}}}(t)),{\mathbf {z}}(t) \rangle _{V',V} \, \mathrm {d}t \right| \\&\quad \le M \Vert {\mathbf {z}}\Vert _{L^{\infty }(0,T;V)} \Vert {\mathbf {y}}_n\Vert _{L^{2}(0,T;Y)}^{\frac{1}{2}} \, \Vert {\mathbf {y}}_n {-} {\bar{{\mathbf {y}}}} \Vert _{L^{2}(0,T;Y)}^{\frac{1}{2}}\, \Vert {\mathbf {y}}_n\Vert _{L^{2}(0,T;V)}^{\frac{1}{2}} \, \Vert {\mathbf {y}}_n {-} {\bar{{\mathbf {y}}}} \Vert _{L^{2}(0,T;V)}^{\frac{1}{2}} \\&\qquad +M \Vert {\mathbf {z}}\Vert _{L^{\infty }(0,T;V)} \Vert {\bar{{\mathbf {y}}}}\Vert _{L^{2}(0,T;Y)}^{\frac{1}{2}} \, \Vert {\mathbf {y}}_n - {\bar{{\mathbf {y}}}} \Vert _{L^{2}(0,T;Y)}^{\frac{1}{2}} \Vert {\bar{{\mathbf {y}}}}\Vert _{L^{2}(0,T;V)}^{\frac{1}{2}} \Vert {\mathbf {y}}_n - {\bar{{\mathbf {y}}}} \Vert _{L^{2}(0,T;V)}^{\frac{1}{2}}. \end{aligned}$$

Since V is compactly embedded in Y, the Aubin–Lions lemma yields \(\Vert {\mathbf {y}}_n -{\bar{{\mathbf {y}}}}\Vert _{L^2(0,T;Y)} \underset{n \rightarrow \infty }{\longrightarrow } 0\). We can thus pass to the limit in (24) and obtain

$$\begin{aligned} \int _0^T \left\langle \dot{{\bar{{\mathbf {y}}}}}(t),{\mathbf {z}}(t) \right\rangle _{V',V}\mathrm {d}t= \int _0^T \langle A{\bar{{\mathbf {y}}}} (t) - F({\bar{{\mathbf {y}}}}(t))+B {\bar{u}}(t),{\mathbf {z}}(t) \rangle _{V',V}\, \mathrm {d}t. \end{aligned}$$

Density of \(H^{1}(0,T;V)\) in \(L^{2}(0,T;V)\) implies that \(e({\bar{{\mathbf {y}}}},{\bar{u}})=(0,{\mathbf {y}}_0)\). Finally, by weak lower semi-continuity of norms it follows that \(J({\bar{{\mathbf {y}}}},{\bar{u}}) \le \liminf _{n \rightarrow \infty } J({\mathbf {y}}_n,u_n)\), which proves the optimality of \(({\bar{{\mathbf {y}}}},{\bar{u}})\).

Consider now an arbitrary solution \(({\tilde{{\mathbf {y}}}},{\tilde{u}})\) to (P). It then holds that \(J({\tilde{{\mathbf {y}}}},{\tilde{u}})\le M^2\Vert {\mathbf {y}}_0\Vert _Y^2 (1+\alpha )\) from which we obtain that

$$\begin{aligned} \Vert {\tilde{{\mathbf {y}}}} \Vert _{L^2(0,\infty ;Y)} \le \sqrt{2}M \Vert {\mathbf {y}}_0\Vert _Y \sqrt{1+\alpha } \quad \text {and} \quad \Vert {\tilde{u}} \Vert _{L^2(0,\infty ;U)} \le \sqrt{2}M\Vert {\mathbf {y}}_0\Vert _Y \frac{\sqrt{1+\alpha }}{\sqrt{\alpha }}. \end{aligned}$$

The estimate (23) for \(\Vert {\tilde{{\mathbf {y}}}}\Vert _{W_\infty }\) can now be shown by applying the same arguments as above. \(\square \)

For the derivation of the optimality system for (P) we need the following technical lemma.

Lemma 9

[15, Lemma 2.5] Let \(G \in {\mathcal {L}}(W_\infty ,L^2(0,\infty ;V'))\) be such that \(\Vert G\Vert < \frac{1}{M_K},\) where \(\Vert G\Vert \) denotes the operator norm of G. Then, for all \({\mathbf {f}}\in L^2(0,\infty ;V')\) and \({\mathbf {y}}_0\in Y\), there exists a unique solution to the following system:

$$\begin{aligned} {\dot{{\mathbf {y}}}}(t) = (A-BK){\mathbf {y}}(t) + (G{\mathbf {y}})(t) + {\mathbf {f}}(t), \ \ {\mathbf {y}}(0)={\mathbf {y}}_0. \end{aligned}$$

Moreover,

$$\begin{aligned} \Vert {\mathbf {y}}\Vert _{W_\infty } \le \frac{M_K}{1-M_K \Vert G\Vert } ( \Vert {\mathbf {f}}\Vert _{L^2(0,\infty ;V')} + \Vert {\mathbf {y}}_0 \Vert _Y). \end{aligned}$$

First-order optimality conditions for finite-horizon optimal control problems have been addressed several times in the literature; we mention, e.g., [1, 29, 30, 31]. The infinite-horizon case, and in particular the decay properties of the state, the costate, and the optimal control, requires an independent treatment, which we provide next. For an analysis of the linear infinite-horizon problem, we additionally refer to [38].

Proposition 10

There exists \(\delta _{2} \in (0,\delta _1]\) such that for all \({\mathbf {y}}_0 \in B_Y(\delta _2)\), for all solutions \(({\bar{{\mathbf {y}}}},{\bar{u}})\) of (P), there exists a unique costate \({\mathbf {p}}\in L^2(0,\infty ;V)\) satisfying

$$\begin{aligned} -{\dot{{\mathbf {p}}}} - A^{*}{\mathbf {p}}- ({\bar{{\mathbf {y}}}}\cdot \nabla ){\mathbf {p}}+(\nabla {\bar{{\mathbf {y}}}})^T {\mathbf {p}}= \,&{\bar{{\mathbf {y}}}} \quad \text {(in } (W^0_\infty )'), \end{aligned}$$
(25)
$$\begin{aligned} \alpha {\bar{u}} + B^{*} {\mathbf {p}}= \,&0. \end{aligned}$$
(26)

Moreover, there exists a constant \(M>0\), independent of \(({\bar{{\mathbf {y}}}},{\bar{u}})\), such that

$$\begin{aligned} \Vert {\mathbf {p}}\Vert _{L^2(0,\infty ;V)} \le M \Vert {\mathbf {y}}_0\Vert _Y. \end{aligned}$$
(27)

Remark 11

Note that (25) is a formal expression for

$$\begin{aligned} \begin{aligned}&\langle -{\dot{{\mathbf {p}}}} - A^{*}{\mathbf {p}}- ({\bar{{\mathbf {y}}}}\cdot \nabla ){\mathbf {p}}+(\nabla {\bar{{\mathbf {y}}}})^T {\mathbf {p}}- {\bar{{\mathbf {y}}}} , {\mathbf {z}} \rangle _{(W^0_\infty )',W^0_\infty } \\&\quad = \langle {\mathbf {p}},{\dot{{\mathbf {z}}}}-A{\mathbf {z}}+ ({\mathbf {z}}\cdot \nabla ){\bar{{\mathbf {y}}}} + ({\bar{{\mathbf {y}}}}\cdot \nabla ){\mathbf {z}}\rangle _{L^2(0,\infty ;V),L^2(0,\infty ;V')} \\&\qquad - \langle {\bar{{\mathbf {y}}}},{\mathbf {z}}\rangle _{L^2(0,\infty ;Y)} , \ \ \forall {\mathbf {z}}\in W_\infty ^0, \end{aligned} \end{aligned}$$
(28)

where \(W_\infty ^0:= \{ {\mathbf {z}}\in W_\infty \,|\, {\mathbf {z}}(0)= 0 \}\).

Proof of Proposition 10

Let us set \(\delta _2= \delta _1\). By Lemma 8, problem (P) has a solution \(({\bar{{\mathbf {y}}}},{\bar{u}})\). In the first part of the proof, we derive abstract optimality conditions, by proving that the mapping e (used for formulating the constraints) has a surjective derivative. For proving the differentiability of e, we only need to consider the nonlinear term. We have \(F({\mathbf {y}})= N({\mathbf {y}},{\mathbf {y}})\) and we know that N is a bounded bilinear mapping from \(W_\infty \times W_\infty \) to \(L^2(0,\infty ;V')\), by Lemma 2. Thus N and F are Fréchet differentiable, and so is e, with

$$\begin{aligned}&De({\mathbf {y}},u):W_\infty \times L^2(0,\infty ;U) \rightarrow L^2(0,\infty ;V')\times Y\\&De({\mathbf {y}},u)({\mathbf {z}},v)=({\dot{{\mathbf {z}}}}-(A{\mathbf {z}}-N({\mathbf {y}},{\mathbf {z}})-N({\mathbf {z}},{\mathbf {y}})+Bv),{\mathbf {z}}(0)). \end{aligned}$$

Let us show that \(De({\bar{{\mathbf {y}}}},{\bar{u}})\) is surjective if \(\delta _{2}\) is sufficiently small. Let \(({\mathbf {r}},{\mathbf {s}})\in L^2(0,\infty ;V') \times Y\) and consider the system

$$\begin{aligned} {\dot{{\mathbf {z}}}} - (A{\mathbf {z}}-N({\bar{{\mathbf {y}}}},{\mathbf {z}})-N({\mathbf {z}},{\bar{{\mathbf {y}}}})+Bv)&= {\mathbf {r}}, \ \ {\mathbf {z}}(0) = {\mathbf {s}}. \end{aligned}$$

Observe that by Corollary 3

$$\begin{aligned} \Vert N({\bar{{\mathbf {y}}}},{\mathbf {z}})+N({\mathbf {z}},{\bar{{\mathbf {y}}}})\Vert _{L^2(0,\infty ;V')}&\le M \Vert {\bar{{\mathbf {y}}}}\Vert _{W_{\infty }}\, \Vert {\mathbf {z}}\Vert _{W_{\infty }} . \end{aligned}$$

By Lemma 8, it further holds that

$$\begin{aligned} \Vert N({\bar{{\mathbf {y}}}},{\mathbf {z}})+N({\mathbf {z}},{\bar{{\mathbf {y}}}})\Vert _{L^2(0,\infty ;V')} \le M \delta _{2} \Vert {\mathbf {z}}\Vert _{W_\infty }. \end{aligned}$$
(29)

For sufficiently small \(\delta _{2}\), the operator \(G\in {\mathcal {L}}(W_\infty ,L^2(0,\infty ;V'))\) defined by

$$\begin{aligned} (G{\mathbf {z}})(t) := D F({{\bar{{\mathbf {y}}}}}(t))({\mathbf {z}}(t))= N({\bar{{\mathbf {y}}}}(t),{\mathbf {z}}(t))+N({\mathbf {z}}(t),{\bar{{\mathbf {y}}}}(t)) \end{aligned}$$
(30)

satisfies \(\Vert G\Vert \le \frac{1}{2M_K} < \frac{1}{M_K}\). By Lemma 9 there exists a unique solution \({\mathbf {z}}\in W_\infty \) to the system

$$\begin{aligned} {\dot{{\mathbf {z}}}}- (A-BK){\mathbf {z}}+N({\bar{{\mathbf {y}}}},{\mathbf {z}})+N({\mathbf {z}},{\bar{{\mathbf {y}}}})&= {\mathbf {r}}, \ \ {\mathbf {z}}(0) = {\mathbf {s}}. \end{aligned}$$

Setting \(v=-K{\mathbf {z}}\in L^2(0,\infty ;U)\) proves the surjectivity of \(De({\bar{{\mathbf {y}}}},{\bar{u}})\). Note that

$$\begin{aligned} \Vert {\mathbf {z}}\Vert _{W_\infty } \le M \big ( \Vert {\mathbf {r}} \Vert _{L^2(0,\infty ;V')} + \Vert {\mathbf {s}} \Vert _{Y} \big ), \end{aligned}$$
(31)

for some constant M independent of \(({\mathbf {r}},{\mathbf {s}})\) and \({\mathbf {y}}_0\).

From the surjectivity of \(De({\bar{{\mathbf {y}}}},{\bar{u}})\) and Lagrange multiplier theory it follows that there exists a unique pair \(({\mathbf {p}},\mu )\in L^2(0,\infty ;V)\times Y\) such that for all \(({\mathbf {z}},v)\in W_\infty \times L^2(0,\infty ;U)\),

$$\begin{aligned} DJ({\bar{{\mathbf {y}}}},{\bar{u}})({\mathbf {z}},v) - \langle ({\mathbf {p}},\mu ), De({\bar{{\mathbf {y}}}},{\bar{u}})({\mathbf {z}},v) \rangle _{L^2(0,\infty ;V)\times Y,L^2(0,\infty ;V')\times Y}=0. \end{aligned}$$
(32)

Using (32) we derive in the second part of the proof the costate equation (25) and relation (26). As can be easily verified, J is differentiable with

$$\begin{aligned} DJ({\bar{{\mathbf {y}}}},{\bar{u}})({\mathbf {z}},v) = \langle {\bar{{\mathbf {y}}}},{\mathbf {z}}\rangle _{L^2(0,\infty ;Y)} + \alpha \langle {\bar{u}},v \rangle _{L^2(0,\infty ;U)} \end{aligned}$$
(33)

Moreover, for all \(({\mathbf {z}},v)\in W_\infty \times L^2(0,\infty ;U)\)

$$\begin{aligned} \begin{aligned}&\langle ({\mathbf {p}},\mu ),De({\bar{{\mathbf {y}}}},{\bar{u}})({\mathbf {z}},v) \rangle _{L^2(0,\infty ;V)\times Y,L^2(0,\infty ;V')\times Y} \\&\quad = \langle {\mathbf {p}},{\dot{{\mathbf {z}}}}\rangle _{L^2(0,\infty ;V),L^{2}(0,\infty ;V')} - \langle {\mathbf {p}},A {\mathbf {z}}- G {\mathbf {z}}\rangle _{L^2(0,\infty ;V),L^{2}(0,\infty ;V')} \\&\qquad - \langle {\mathbf {p}},Bv \rangle _{L^2(0,\infty ;Y)} + \langle \mu , {\mathbf {z}}(0) \rangle _{Y}. \end{aligned} \end{aligned}$$
(34)

Taking \({\mathbf {z}}=0\) and letting v vary in \(L^2(0,\infty ;U)\), we deduce from (32), (33) and (34) that

$$\begin{aligned} \alpha {\bar{u}} + B^{*} {\mathbf {p}}= 0 \text { in } L^2(0,\infty ;U), \end{aligned}$$

which proves (26). Taking now \(v=0\), we obtain that

$$\begin{aligned} \langle {\mathbf {p}}, {\dot{{\mathbf {z}}}} \rangle _{L^2(0,\infty ;V),L^2(0,\infty ;V')}&= \langle {\mathbf {p}},A {\mathbf {z}}-G {\mathbf {z}}\rangle _{L^2(0,\infty ;V),L^{2}(0,\infty ;V')} \nonumber \\&\quad + \langle {{\bar{{\mathbf {y}}}}}, {\mathbf {z}}\rangle _{L^2(0,\infty ;Y)}, \ \ \forall {\mathbf {z}}\in W_\infty ^0. \end{aligned}$$
(35)

It remains to bound \({\mathbf {p}}\) in \(L^2(0,\infty ;V)\). Let \({\mathbf {r}} \in L^2(0,\infty ;V')\) and let \(({\mathbf {z}},v)\) satisfy \(De({\bar{{\mathbf {y}}}},{\bar{u}})({\mathbf {z}},v)= ({\mathbf {r}},0)\) and the bound (31) (with \({\mathbf {s}}= 0\)). Using the optimality condition (32), the expression (33) of \(DJ({\bar{{\mathbf {y}}}},{\bar{u}})\), estimate (31), and estimate (23) on \(({\bar{{\mathbf {y}}}},{\bar{u}})\), we obtain the following inequalities:

$$\begin{aligned}&\langle {\mathbf {p}}, {\mathbf {r}} \rangle _{L^2(0,\infty ;V),L^2(0,\infty ;V')} \\&\quad = \ \langle ({\mathbf {p}},\mu ), ({\mathbf {r}},0) \rangle _{L^2(0,\infty ;V) \times Y,L^2(0,\infty ;V') \times Y} \\&\quad = \ \langle De({\bar{{\mathbf {y}}}},{\bar{u}})'({\mathbf {p}},\mu ), ({\mathbf {z}},v) \rangle _{W_\infty ' \times L^2(0,\infty ;U),W_\infty \times L^2(0,\infty ;U)} \\&\quad = \ DJ({\bar{{\mathbf {y}}}},{\bar{u}})({\mathbf {z}},v) \\&\quad \le \ M\big ( \Vert {\bar{{\mathbf {y}}}} \Vert _{L^2(0,\infty ;Y)} + \Vert {\bar{u}} \Vert _{L^2(0,\infty ;U)} \big ) \big (\Vert {\mathbf {z}}\Vert _{L^2(0,\infty ;Y)} + \Vert v \Vert _{L^2(0,\infty ;U)} \big ) \\&\quad \le \ M \Vert {\mathbf {y}}_0 \Vert _{Y} \Vert {\mathbf {r}} \Vert _{L^2(0,\infty ;V')}. \end{aligned}$$

Since \({\mathbf {r}}\) was arbitrary and since M does not depend on \({\mathbf {r}}\), we obtain that \(\Vert {\mathbf {p}}\Vert _{L^2(0,\infty ;V)} \le M \Vert {\mathbf {y}}_0 \Vert _{Y}\). \(\square \)

3.2 Sensitivity Analysis

We define a mapping \(\Phi \) via

$$\begin{aligned} \begin{aligned} \Phi :W_\infty&\times L^2(0,\infty ;U) \times L^2(0,\infty ;V)\rightarrow Y \times L^2(0,\infty ;V') \times (W^0_\infty )' \\&\times L^2(0,\infty ;U)=:X, \\&\Phi ({\mathbf {y}},u,{\mathbf {p}}) = \begin{pmatrix} {\mathbf {y}}(0) \\ {\dot{{\mathbf {y}}}}- A{\mathbf {y}}+ F({\mathbf {y}})-Bu \\ - {\dot{{\mathbf {p}}}} - A^{*}{\mathbf {p}}-({\mathbf {y}}\cdot \nabla ){\mathbf {p}}+(\nabla {\mathbf {y}})^T {\mathbf {p}}- {\mathbf {y}}\\ \alpha u + B^{*}{\mathbf {p}}\end{pmatrix}, \end{aligned} \end{aligned}$$
(36)

where the third component again has to be understood formally, see Remark 11. We endow the space X with the \(l_\infty \)-product norm. The well-posedness of \(\Phi \) follows from the considerations on \(e({\mathbf {y}},u)\) and the costate equation (25) given in the proof of Proposition 10.

Lemma 12

There exist \(\delta _3 > 0\), \(\delta _3'>0\), and three \(C^\infty \)-mappings

$$\begin{aligned} {\mathbf {y}}_0 \in B_Y(\delta _3) \mapsto \big ( {\mathcal {Y}}({\mathbf {y}}_0),{\mathcal {U}}({\mathbf {y}}_0),{\mathcal {P}}({\mathbf {y}}_0) \big ) \in W_\infty \times L^2(0,\infty ;U) \times L^2(0,\infty ;V) \end{aligned}$$

such that for all \({\mathbf {y}}_0 \in B_Y(\delta _3)\), the triplet \(\big ( {\mathcal {Y}}({\mathbf {y}}_0),{\mathcal {U}}({\mathbf {y}}_0),{\mathcal {P}}({\mathbf {y}}_0) \big )\) is the unique solution to

$$\begin{aligned} \Phi ({\mathbf {y}},u,{\mathbf {p}})= ({\mathbf {y}}_0,0,0,0), \quad \max \big ( \Vert {\mathbf {y}}\Vert _{W_\infty }, \Vert u \Vert _{L^2(0,\infty ;U)}, \Vert {\mathbf {p}}\Vert _{L^2(0,\infty ;V)} \big ) \le \delta _3' \end{aligned}$$
(37)

in \(W_\infty \times L^2(0,\infty ;U) \times L^2(0,\infty ;V)\). Moreover, there exists a constant \(M>0\) such that for all \({\mathbf {y}}_0 \in B_Y(\delta _3)\),

$$\begin{aligned} \max \big ( \Vert {\mathcal {Y}}({\mathbf {y}}_0) \Vert _{W_\infty }, \Vert {\mathcal {U}}({\mathbf {y}}_0) \Vert _{L^2(0,\infty ;U)}, \Vert {\mathcal {P}}({\mathbf {y}}_0) \Vert _{L^2(0,\infty ;V)} \big ) \le M \Vert {\mathbf {y}}_0 \Vert _Y. \end{aligned}$$
(38)

Proof

The result is a consequence of the inverse function theorem. Since \(\Phi \) contains only linear terms and three bilinear terms, it is infinitely differentiable. We also have \(\Phi (0,0,0)= (0,0,0,0)\). It remains to prove that \(D\Phi (0,0,0)\) is an isomorphism. Let \(({\mathbf {w}}_1,{\mathbf {w}}_2,{\mathbf {w}}_3,w_4) \in X\) and let \(({\mathbf {y}},u,{\mathbf {p}}) \in W_\infty \times L^2(0,\infty ;U) \times L^2(0,\infty ;V)\). We have the following equivalence

$$\begin{aligned} D\Phi (0,0,0) ({\mathbf {y}},u,{\mathbf {p}})= ({\mathbf {w}}_1,{\mathbf {w}}_{2},{\mathbf {w}}_{3},w_4) \Longleftrightarrow {\left\{ \begin{array}{ll} \begin{array}{rcl} {\mathbf {y}}(0) &{} = &{} {\mathbf {w}}_1 \\ {\dot{{\mathbf {y}}}} - A {\mathbf {y}}- Bu &{} = &{} {\mathbf {w}}_2 \\ -{\dot{{\mathbf {p}}}} - A^{*} {\mathbf {p}}- {\mathbf {y}}&{} = &{} {\mathbf {w}}_3 \\ \alpha u + B^{*} {\mathbf {p}}&{} = &{} w_4. \end{array} \end{array}\right. } \end{aligned}$$
(39)

It can be proved with the same techniques as for [15, Proposition 3.1, Lemma 4.4] that the linear system in (39) has a unique solution \(({\mathbf {y}},u,{\mathbf {p}})\); moreover,

$$\begin{aligned} \Vert ({\mathbf {y}},u,{\mathbf {p}}) \Vert _{W_\infty \times L^2(0,\infty ;U) \times L^2(0,\infty ;V)} \le M \Vert ({\mathbf {w}}_1,{\mathbf {w}}_2,{\mathbf {w}}_3,w_4) \Vert _X . \end{aligned}$$

This proves that \(D\Phi (0,0,0)\) is an isomorphism. The inverse function theorem ensures the existence of \(\delta _3>0\), \(\delta _3'>0\), and \(C^\infty \)-mappings \({\mathcal {Y}}\), \({\mathcal {U}}\), and \({\mathcal {P}}\) with the properties announced in (37).

It remains to prove (38). Reducing \(\delta _3\) if necessary, we can assume that the norms of the derivatives of the three mappings are bounded on \(B_Y(\delta _3)\) by some constant \(M>0\). The three mappings are therefore Lipschitz continuous with modulus M. Estimate (38) follows, since \(\big ( {\mathcal {Y}}(0),{\mathcal {U}}(0),{\mathcal {P}}(0) \big )= (0,0,0)\). \(\square \)

Proposition 13

There exists \(\delta _4 \in (0,\min (\delta _2,\delta _3)]\) such that for all \({\mathbf {y}}_0 \in B_Y(\delta _4)\), the pair \(({\mathcal {Y}}({\mathbf {y}}_0),{\mathcal {U}}({\mathbf {y}}_0))\) is the unique solution to (P) with initial condition \({\mathbf {y}}_0\). Moreover, \({\mathcal {P}}({\mathbf {y}}_0)\) is the unique associated costate.

Proof

Let us set \(\delta _4= \min (\delta _2,\delta _3)\) for the moment. Let \({\mathbf {y}}_0 \in B_Y(\delta _4)\). By Lemma 8 and Proposition 10, there exists a solution \(({\bar{{\mathbf {y}}}},{\bar{u}})\) to (P) with associated costate \({\bar{{\mathbf {p}}}}\), which necessarily satisfies

$$\begin{aligned} \max (\Vert {\bar{{\mathbf {y}}}} \Vert _{W_{\infty }}, \Vert {\bar{u}} \Vert _{L^{2}(0,\infty ;U)}, \Vert {\bar{{\mathbf {p}}}}\Vert _{L^2(0,\infty ;V)} ) \le M \Vert {\mathbf {y}}_{0} \Vert _{Y}. \end{aligned}$$

By further reduction of \(\delta _{4}\), we obtain that

$$\begin{aligned} \max (\Vert {\bar{{\mathbf {y}}}} \Vert _{W_{\infty }}, \Vert {\bar{u}} \Vert _{L^{2}(0,\infty ;U)}, \Vert {\bar{{\mathbf {p}}}}\Vert _{L^2(0,\infty ;V)} ) \le \delta _3'. \end{aligned}$$

Since \(\Phi ({\bar{{\mathbf {y}}}},{\bar{u}},{\bar{{\mathbf {p}}}})= ({\mathbf {y}}_0,0,0,0)\), Lemma 12 implies that \(({\bar{{\mathbf {y}}}},{\bar{u}},{\bar{{\mathbf {p}}}})= ({\mathcal {Y}}({\mathbf {y}}_0),{\mathcal {U}}({\mathbf {y}}_0),{\mathcal {P}}({\mathbf {y}}_0))\). The proposition is proved. \(\square \)

Corollary 14

The value function \({\mathcal {V}}\) is infinitely differentiable on \(B_Y(\delta _4)\).

Proof

The cost function J is clearly infinitely differentiable. Since \({\mathcal {V}}({\mathbf {y}}_0)= J({\mathcal {Y}}({\mathbf {y}}_0),{\mathcal {U}}({\mathbf {y}}_0))\), \({\mathcal {V}}\) is then the composition of infinitely differentiable mappings, which shows the assertion. \(\square \)

3.3 Additional Regularity for \({\mathbf {p}}\)

We next show that for small initial data \({\mathbf {y}}_0\) the adjoint state enjoys more regularity than merely \({\mathbf {p}}\in L^2(0,\infty ;V)\). For this, we need more smoothness of the boundary \(\Gamma \).

Assumption A3

Let \(\Omega \subset \mathbb R^2\) denote a bounded domain with smooth boundary \(\Gamma \).

Proposition 15

There exists \({\tilde{\delta }}_{4}\in (0,\delta _4] \) such that for all \({\mathbf {y}}_0 \in B_Y({\tilde{\delta }}_4)\), for all solutions \(({\bar{{\mathbf {y}}}},{\bar{u}})\) of (P), there exists a unique costate \({\mathbf {p}}\in W_\infty \) satisfying

$$\begin{aligned} -{\dot{{\mathbf {p}}}} - A^{*}{\mathbf {p}}- ({\bar{{\mathbf {y}}}}\cdot \nabla ){\mathbf {p}}+(\nabla {\bar{{\mathbf {y}}}})^T {\mathbf {p}}= \,&{\bar{{\mathbf {y}}}} \quad \text {(in } L^2(0,\infty ;V')). \end{aligned}$$
(40)

Moreover, there exists a constant \(M>0\), independent of \(({\bar{{\mathbf {y}}}},{\bar{u}})\), such that

$$\begin{aligned} \Vert {\mathbf {p}}\Vert _{W_\infty } \le M \Vert {\mathbf {y}}_0\Vert _Y. \end{aligned}$$
(41)

The proof is given in the Appendix.

4 Derivatives of the Value Function

By standard arguments, we can derive a Hamilton-Jacobi-Bellman equation which provides an optimal feedback control based on the derivative of the value function.

Throughout this section, the first-order derivative \(D{\mathcal {V}}({\mathbf {y}}_0)\) is either seen as a linear form on Y or identified with its Riesz representative in Y. The identification is made, for example, in the term \(\Vert B^{*} D{\mathcal {V}}({\mathbf {y}}_0) \Vert _U^2\) appearing in the HJB equation below.

Proposition 16

There exists \(\delta _5 \in (0,\tilde{\delta _4}]\) such that for all \({\mathbf {y}}_0 \in B_{Y}(\delta _5) \cap {\mathcal {D}}(A)\), the following Hamilton-Jacobi-Bellman equation holds:

$$\begin{aligned} D{\mathcal {V}}({\mathbf {y}}_0) (A {\mathbf {y}}_0 - F({\mathbf {y}}_0) ) + \frac{1}{2} \Vert {\mathbf {y}}_0 \Vert _Y^2 - \frac{1}{2\alpha } \Vert B^{*} D{\mathcal {V}}({\mathbf {y}}_0) \Vert _U^2=0. \end{aligned}$$
(42)

Moreover,

$$\begin{aligned} {\bar{u}}(t)= -\frac{1}{\alpha } B^{*}D{\mathcal {V}}({\bar{{\mathbf {y}}}}(t)), \quad \text {for all }t \ge 0, \end{aligned}$$
(43)

where \(({\bar{{\mathbf {y}}}},{\bar{u}})= ({\mathcal {Y}}({\mathbf {y}}_0),{\mathcal {U}}({\mathbf {y}}_0)).\)

Remark 17

Note that by, e.g., [6, Proposition 1.7], we have that \(F:{\mathcal {D}}(A)\times {\mathcal {D}}(A) \rightarrow Y\) and, as a consequence, the term \(D{\mathcal {V}}({\mathbf {y}}_{0}) F({\mathbf {y}}_{0})\) is well-defined.

Proof

Let us set \(\delta _5= {\tilde{\delta }}_4\). Let \({\mathbf {y}}_0 \in B_Y(\delta _5) \cap {\mathcal {D}}(A)\). Let us consider the Hamiltonian of the system, defined by

$$\begin{aligned} H({\mathbf {y}},u,{\mathbf {p}})&= \frac{1}{2} \Vert {\mathbf {y}}\Vert _Y^2 + \frac{\alpha }{2} \Vert u \Vert _U^2 + \langle {\mathbf {p}}, A {\mathbf {y}}- F({\mathbf {y}}) \\&\quad +\, Bu \rangle _Y, \ \ \forall ({\mathbf {y}},u,{\mathbf {p}}) \in {\mathcal {D}}(A) \times U \times Y. \end{aligned}$$

Using the arguments provided in the proof of [17, Proposition 9], one can prove that

$$\begin{aligned} \min _{u \in U} H({\mathbf {y}}_0,u,D {\mathcal {V}}({\mathbf {y}}_0))= 0, \end{aligned}$$

from which (42) derives. One can also prove that

$$\begin{aligned} {\bar{u}}(0)= \text {arg min}_{u \in U} H({\mathbf {y}}_0,u,D{\mathcal {V}}({\mathbf {y}}_0)), \end{aligned}$$

which proves (43) for \(t=0\). Let us emphasize that the assumptions which are required in [17, Proposition 9] are satisfied. In particular, the optimality condition \({\bar{u}}(t)= -\frac{1}{\alpha } B^* {\bar{{\mathbf {p}}}}(t)\) which holds in \(L^2(0,\infty ;U)\) implies that \({\bar{u}}\) is almost everywhere equal to a continuous function. We can thus assume that \({\bar{u}}\) is continuous. For proving (43) for all \(t \ge 0\), one has first to reduce \(\delta _5\) so that \(\Vert {\bar{{\mathbf {y}}}}(t) \Vert _Y \le \delta _4\), for all \(t \ge 0\). For a given \(t \ge 0\), we have by dynamic programming that \(({\bar{{\mathbf {y}}}}(t+ \cdot ),{\bar{u}}(t+\cdot ))\) is the solution to (P) with initial condition \({\bar{{\mathbf {y}}}}(t)\) and thus (43) holds true at t. \(\square \)

For deriving a Taylor series expansion of \({\mathcal {V}}\), let us follow the approach from [3] and differentiate (42) in some direction \({\mathbf {z}}_{1} \in {\mathcal {D}}(A)\). To lighten the notation, we denote the variable \({\mathbf {y}}_0\) in (42) by \({\mathbf {y}}\). We then obtain

$$\begin{aligned}&D^{2}{\mathcal {V}}({\mathbf {y}})\left( A{\mathbf {y}}-F({\mathbf {y}}),{\mathbf {z}}_{1}\right) +D{\mathcal {V}}({\mathbf {y}})\left( A{\mathbf {z}}_{1}-A_{0}({\mathbf {y}},{\mathbf {z}}_{1})\right) + \langle {\mathbf {y}},{\mathbf {z}}_{1} \rangle _{Y}\\&\qquad - \frac{1}{\alpha } \langle B^{*}D^{2}{\mathcal {V}}({\mathbf {y}})(\cdot ,{\mathbf {z}}_{1}),B^{*}D{\mathcal {V}}({\mathbf {y}}) \rangle _{U}=0. \end{aligned}$$

A second differentiation in the directions \(({\mathbf {z}}_{1},{\mathbf {z}}_{2}) \in {\mathcal {D}}(A)^2\) yields the equation

$$\begin{aligned}&D^{3}{\mathcal {V}}({\mathbf {y}})\left( A{\mathbf {y}}-F({\mathbf {y}}),{\mathbf {z}}_{1},{\mathbf {z}}_{2}\right) +D^{2}{\mathcal {V}}({\mathbf {y}})\left( A{\mathbf {z}}_{2}-A_{0}({\mathbf {y}},{\mathbf {z}}_{2}),{\mathbf {z}}_{1}\right) \\&\quad +D^{2}{\mathcal {V}}({\mathbf {y}})\left( A{\mathbf {z}}_{1}-A_{0}({\mathbf {y}},{\mathbf {z}}_{1}),{\mathbf {z}}_{2}\right) \\&\quad -D{\mathcal {V}}({\mathbf {y}})\left( A_{0}({\mathbf {z}}_{2},{\mathbf {z}}_{1})\right) + \langle {\mathbf {z}}_{2},{\mathbf {z}}_{1} \rangle _{Y} -\frac{1}{\alpha }\langle B^{*} D^{3}{\mathcal {V}}({\mathbf {y}})\left( \cdot ,{\mathbf {z}}_{1},{\mathbf {z}}_{2}\right) ,B^{*} D {\mathcal {V}}({\mathbf {y}}) \rangle _{U}\\&\quad - \frac{1}{\alpha } \langle B^{*} D^{2}{\mathcal {V}}({\mathbf {y}})\left( \cdot ,{\mathbf {z}}_{1}\right) , B^{*} D^{2}{\mathcal {V}}({\mathbf {y}})\left( \cdot ,{\mathbf {z}}_{2}\right) \rangle _{U} = 0. \end{aligned}$$

Since \({\mathcal {V}}(0)=0\) and \({\mathcal {V}}({\mathbf {y}})\ge 0\) for all \({\mathbf {y}}\in Y\), it follows that \(D{\mathcal {V}}(0)=0\). We can thus evaluate the last equation for \({\mathbf {y}}=0\) to obtain

$$\begin{aligned} \begin{aligned}&D^{2}{\mathcal {V}}(0)\left( A{\mathbf {z}}_{2},{\mathbf {z}}_{1}\right) +D^{2}{\mathcal {V}}(0)\left( A{\mathbf {z}}_{1},{\mathbf {z}}_{2}\right) + \langle {\mathbf {z}}_{2},{\mathbf {z}}_{1} \rangle _{Y} \\&\quad - \frac{1}{\alpha } \langle B^{*} D^{2}{\mathcal {V}}(0)\left( \cdot ,{\mathbf {z}}_{1}\right) , B^{*} D^{2}{\mathcal {V}}(0)\left( \cdot ,{\mathbf {z}}_{2}\right) \rangle _{U} = 0. \end{aligned} \end{aligned}$$
(44)

We recall that \(D^{2}{\mathcal {V}}(0)\in {\mathcal {M}}(Y\times Y,\mathbb R)\) is a bounded and symmetric bilinear form on Y and thus can be represented (see, e.g., [32, Chapter 5, Section 2]) by an operator \(\Pi \in {\mathcal {L}}(Y)\) such that

$$\begin{aligned} D^{2}{\mathcal {V}}(0)({\mathbf {y}},{\mathbf {z}}) = \langle \Pi {\mathbf {y}},{\mathbf {z}}\rangle _{Y}, \ \ \text {for all } {\mathbf {y}},{\mathbf {z}}\in Y. \end{aligned}$$

As a consequence, we can formulate (44) as

$$\begin{aligned} \langle {\mathbf {z}}_{2},A^{*} \Pi {\mathbf {z}}_{1 } \rangle _{Y} + \langle \Pi A{\mathbf {z}}_{1},{\mathbf {z}}_{2}\rangle _{Y}+\langle {\mathbf {z}}_{2},{\mathbf {z}}_{1} \rangle _{Y} - \frac{1}{\alpha } \langle B^{*} \Pi {\mathbf {z}}_{1},B^{*}\Pi {\mathbf {z}}_{2} \rangle _{U} = 0. \end{aligned}$$
(45)

Equation (45) is the well-known algebraic operator Riccati equation which has been studied in detail in, e.g., [20, 33]. From the stabilizability Assumption A2, and the fact that the pair \((A,\mathrm {id})\) is exponentially detectable as a consequence of (18), we conclude that (45) has a unique stabilizing solution \(\Pi \in {\mathcal {L}}(Y)\). In the discussion below, we denote by

$$\begin{aligned} A_{\pi } := A-\frac{1}{\alpha }BB^{*}\Pi \end{aligned}$$

the closed-loop operator associated with the linearized stabilization problem. In particular, let us mention that \(A_{\pi }\) generates an analytic exponentially stable semigroup \(e^{A_{\pi }t}\) on Y. Hence, for trajectories of the form \({\tilde{{\mathbf {y}}}}=e^{A_{\pi }\cdot }{\mathbf {y}}\), \({\mathbf {y}}\in Y\), it follows that \({\tilde{{\mathbf {y}}}}\in W_{\infty }\).

For higher order derivatives of \({\mathcal {V}}\), we follow the exposition from [17]. For this purpose, let us briefly recall the symmetrization technique introduced there. For \(i,j \in {\mathbb {N}}\), consider

$$\begin{aligned} S_{i,j}= \big \{ \sigma \in S_{i+j} \,|\, \sigma (1)< \dots< \sigma (i) \text { and } \sigma (i+1)< \dots < \sigma (i+j) \big \}, \end{aligned}$$

where \(S_{i+j}\) is the set of permutations of \(\{ 1,\dots ,i+j \}\). A permutation \(\sigma \in S_{i,j}\) is uniquely defined by the subset \(\{\sigma (1),\dots ,\sigma (i) \}\), therefore, the cardinality of \(S_{i,j}\) is equal to the number of subsets of cardinality i of \(\{ 1,\dots ,i+j\}\), that is to say \(|S_{i,j}|= \left( {\begin{array}{c}i+j\\ i\end{array}}\right) \). For a multilinear mapping \({\mathcal {T}}\) of order \(i+j\), we set

$$\begin{aligned} \text {Sym}_{i,j}({\mathcal {T}})({\mathbf {z}}_1,\dots ,{\mathbf {z}}_{i+j}) = {\left( {\begin{array}{c}i+j\\ i\end{array}}\right) }^{-1} \left[ \sum _{\sigma \in S_{i,j}} {\mathcal {T}} ({\mathbf {z}}_{\sigma (1)},\dots ,{\mathbf {z}}_{\sigma (i+j)} ) \right] . \end{aligned}$$
(46)
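For finite-dimensional computations, the operator \({{\,\mathrm{Sym}\,}}_{i,j}\) can be evaluated directly from its definition (46). The following is a minimal Python sketch, under the assumption that the multilinear map \({\mathcal {T}}\) is available as a callable with \(i+j\) arguments; the function name and interface are illustrative only.

```python
from itertools import combinations
from math import comb

def sym(i, j, T, zs):
    """Evaluate Sym_{i,j}(T)(z_1, ..., z_{i+j}) as in (46); T is a multilinear map
    given as a Python callable with i+j arguments, zs is the tuple of arguments."""
    assert len(zs) == i + j
    total = 0.0
    for first in combinations(range(i + j), i):      # the set {sigma(1), ..., sigma(i)}
        rest = [k for k in range(i + j) if k not in first]
        total += T(*[zs[k] for k in list(first) + rest])
    return total / comb(i + j, i)

# Example: for the scalar multilinear map T(a, b, c) = a*b*c,
# sym(1, 2, lambda a, b, c: a*b*c, (2.0, 3.0, 5.0)) returns 30.0.
```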

The following proposition is a generalization of the Leibniz formula for the differentiation of the product of two functions.

Proposition 18

Let Z be a Hilbert space. Let \(f:Y\rightarrow Z\) and \(g:Y\rightarrow Z\) be two k-times continuously differentiable functions. Then, for all \(k\ge 1\), for all \({\mathbf {y}}\in Y\) and \(({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k)\in Y^k\),

$$\begin{aligned}&D^k [\langle f({\mathbf {y}}) , g({\mathbf {y}}) \rangle _Z]({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k) \\&\quad = \sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) {{\,\mathrm{Sym}\,}}_{i,k-i} (D^i f({\mathbf {y}})\otimes D^{k-i}g({\mathbf {y}}) )({\mathbf {z}}_{1},\dots ,{\mathbf {z}}_{k}). \end{aligned}$$

Proof

The proof is analogous to the one given in [17, Lemma 10] for \(Z= {\mathbb {R}}\). \(\square \)

Theorem 19

Let \(k \ge 3\). For all \({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k \in {\mathcal {D}}(A)\),

$$\begin{aligned} \sum _{i=1}^k D^k{\mathcal {V}}(0)({\mathbf {z}}_1,\dots ,{\mathbf {z}}_{i-1},A_{\pi }{\mathbf {z}}_i,{\mathbf {z}}_{i+1},\dots ,{\mathbf {z}}_k) = {\mathcal {R}}_k({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k), \end{aligned}$$
(47)

where the multilinear form \({\mathcal {R}}_{k}:{\mathcal {D}}(A)^k \rightarrow {\mathbb {R}}\) is given by

$$\begin{aligned} {\mathcal {R}}_{k}({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k)=&\frac{1}{2\alpha } \sum _{i=2}^{k-2} \begin{pmatrix} k \\ i \end{pmatrix} \mathop {{{\,\mathrm{Sym}\,}}}\limits _{i,k-i} \big ( {\mathcal {C}}_i \otimes {\mathcal {C}}_{k-i} \big )({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k) \\&+ \frac{k(k-1)}{2} \mathop {{{\,\mathrm{Sym}\,}}}\limits _{k-2,2}\big ( D^{k-1}{\mathcal {V}}(0)\otimes D^{2}F(0) \big )({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k) \end{aligned}$$

with \( {\mathcal {C}}_{i}({\mathbf {z}}_1,\dots ,{\mathbf {z}}_i) = \displaystyle {B^{*} D^{i+1} {\mathcal {V}}(0)(\cdot ,{\mathbf {z}}_1,\dots ,{\mathbf {z}}_i)} \) and \(D^{2}F(0)({\mathbf {z}}_{1},{\mathbf {z}}_{2})=A_{0}({\mathbf {z}}_{1},{\mathbf {z}}_{2})\).

Proof

The proof relies on successive differentiations of (42). For a bilinear control problem, a similar result has been obtained in [17, Theorem 12]. In particular, it was shown that

$$\begin{aligned} \left( D^k [{\mathcal {V}}({\mathbf {y}})(A{\mathbf {y}})]_{{\mathbf {y}}=0}\right) ({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k) = \sum _{i=1}^k D^k {\mathcal {V}}(0)({\mathbf {z}}_1,\dots ,{\mathbf {z}}_{i-1},A{\mathbf {z}}_i,{\mathbf {z}}_{i+1},\dots ,{\mathbf {z}}_k). \end{aligned}$$
(48)

Obviously, for \(k \ge 3\), we have \(D^k(\frac{1}{2}\Vert {\mathbf {y}}\Vert _Y^2)=0\). Let us discuss the structure of the derivatives of the remaining terms appearing in (42). Applying Proposition 18 to the term \(\Vert B^{*} D{\mathcal {V}}({\mathbf {y}}) \Vert _U^2 \), we obtain

$$\begin{aligned} D^k \Vert B^{*} D{\mathcal {V}}({\mathbf {y}}) \Vert _U^2 = \sum _{i=0}^k \begin{pmatrix} k \\ i \end{pmatrix} {{\,\mathrm{Sym}\,}}_{i,k-i} (D^i (B^{*} D{\mathcal {V}}({\mathbf {y}})) \otimes D^{k-i}(B^{*} D {\mathcal {V}}({\mathbf {y}}))) . \end{aligned}$$

Since \({\mathcal {V}}\) has a minimum at the origin, we have \(D{\mathcal {V}}(0)=0\) and the terms for \(i=0\) and \(i=k\) vanish when evaluated in \({\mathbf {y}}=0\). By definition of the \({{\,\mathrm{Sym}\,}}\)-operator, for \(i=1\) we obtain

$$\begin{aligned}&\left. \begin{pmatrix} k \\ 1 \end{pmatrix} {{\,\mathrm{Sym}\,}}_{1,k-1} (D (B^{*} D{\mathcal {V}}({\mathbf {y}})) \otimes D^{k-1}(B^{*} D {\mathcal {V}}({\mathbf {y}}))) ({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k)\right| _{{\mathbf {y}}=0} \\&\qquad =\sum _{\sigma \in S_{1,k-1}}\langle B^{*} D^2{\mathcal {V}}(0)(\cdot ,{\mathbf {z}}_{\sigma (1)}),B^{*} D^k {\mathcal {V}}(0)(\cdot ,{\mathbf {z}}_{\sigma (2)},\dots ,{\mathbf {z}}_{\sigma (k)}) \rangle _U \\&\qquad =\sum _{\sigma \in S_{1,k-1}}\langle BB^{*} D^2{\mathcal {V}}(0)(\cdot ,{\mathbf {z}}_{\sigma (1)}),D^k {\mathcal {V}}(0)(\cdot ,{\mathbf {z}}_{\sigma (2)},\dots ,{\mathbf {z}}_{\sigma (k)}) \rangle _Y \\&\qquad =\sum _{\sigma \in S_{1,k-1}} D^k {\mathcal {V}}(0)( BB^{*} D^2{\mathcal {V}}(0)(\cdot ,{\mathbf {z}}_{\sigma (1)}),{\mathbf {z}}_{\sigma (2)},\dots ,{\mathbf {z}}_{\sigma (k)}) \end{aligned}$$

As explained previously, we can represent \(D^2{\mathcal {V}}(0)\) in terms of the solution \(\Pi \) of the algebraic operator Riccati equation. This shows

$$\begin{aligned} \begin{aligned}&\left. \begin{pmatrix} k \\ 1 \end{pmatrix} {{\,\mathrm{Sym}\,}}_{1,k-1} (D (B^{*} D{\mathcal {V}}({\mathbf {y}})) \otimes D^{k-1}(B^{*} D {\mathcal {V}}({\mathbf {y}}))) ({\mathbf {z}}_1,\dots ,{\mathbf {z}}_k)\right| _{{\mathbf {y}}=0} \\&\qquad = \sum _{\sigma \in S_{1,k-1}} D^k {\mathcal {V}}(0)( BB^{*}\Pi {\mathbf {z}}_{\sigma (1)},{\mathbf {z}}_{\sigma (2)},\dots ,{\mathbf {z}}_{\sigma (k)}) \\&\qquad = \sum _{i=1}^k D^k {\mathcal {V}}(0)({\mathbf {z}}_1,\dots ,{\mathbf {z}}_{i-1},BB^{*}\Pi {\mathbf {z}}_{i},{\mathbf {z}}_{i+1},\dots ,{\mathbf {z}}_k). \end{aligned} \end{aligned}$$
(49)

A similar relation can be derived for \(i=k-1\). Finally we consider the term \(D^k(D{\mathcal {V}}({\mathbf {y}})F({\mathbf {y}}))\). By Proposition 18, we get

$$\begin{aligned}&D^{k} (D{\mathcal {V}}({\mathbf {y}})F({\mathbf {y}})) ({\mathbf {z}}_{1},\dots ,{\mathbf {z}}_{k}) \\&\quad = D^{k} \langle D{\mathcal {V}}({\mathbf {y}}),F({\mathbf {y}}) \rangle _{Y} ({\mathbf {z}}_{1},\dots ,{\mathbf {z}}_{k})\\&\quad =\sum _{i=0}^{k} \begin{pmatrix} k \\ i \end{pmatrix} \mathop {{{\,\mathrm{Sym}\,}}}\limits _{i,k-i} \left( D^{i+1}{\mathcal {V}}({\mathbf {y}}) \otimes D^{k-i}F({\mathbf {y}})\right) ({\mathbf {z}}_{1},\dots ,{\mathbf {z}}_{k}). \end{aligned}$$

Since \(D^{3+\ell }F({\mathbf {y}})=0\) for all \(\ell \ge 0\), the previous equation simplifies as follows

$$\begin{aligned}&D^{k} (D{\mathcal {V}}({\mathbf {y}})F({\mathbf {y}}))({\mathbf {z}}_{1},\dots ,{\mathbf {z}}_{k}) \\&\quad =\sum _{i=k-2}^{k} \begin{pmatrix} k \\ i \end{pmatrix} \mathop {{{\,\mathrm{Sym}\,}}}\limits _{i,k-i} \left( D^{i+1}{\mathcal {V}}({\mathbf {y}}) \otimes D^{k-i}F({\mathbf {y}})\right) ({\mathbf {z}}_{1},\dots ,{\mathbf {z}}_{k}). \end{aligned}$$

Evaluating the last expression in \({\mathbf {y}}=0\) yields

$$\begin{aligned}&\left( D^{k} [D{\mathcal {V}}({\mathbf {y}})F({\mathbf {y}})]_{{\mathbf {y}}=0}\right) ({\mathbf {z}}_{1},\dots ,{\mathbf {z}}_{k}) \nonumber \\&\quad =\frac{k(k-1)}{2} \mathop {{{\,\mathrm{Sym}\,}}}\limits _{k-2,2} \left( D^{k-1}{\mathcal {V}}(0) \otimes D^{2}F(0)\right) ({\mathbf {z}}_{1},\dots ,{\mathbf {z}}_{k}), \end{aligned}$$
(50)

since F(0) and DF(0) both vanish. Combining (48), (49) and (50) proves the assertion. \(\square \)

5 Polynomial Feedback Laws

5.1 Estimates for the Velocity

In this section we analyze the polynomial feedback law \(u_d\) derived from the Taylor series approximation of the value function

$$\begin{aligned} {\mathcal {V}}_d({\mathbf {y}}) := \sum _{k=2}^d \frac{1}{k!}D^{k}{\mathcal {V}}(0)({\mathbf {y}},\dots ,{\mathbf {y}}), \end{aligned}$$

for a given \(d \ge 2\). The feedback \(u_d :Y \rightarrow U\) is obtained by approximating \({\mathcal {V}}\) with \({\mathcal {V}}_d\) in formula (43), that is

$$\begin{aligned} u_{d}({\mathbf {y}}) = -\frac{1}{\alpha } B^{*}D{\mathcal {V}}_d({\mathbf {y}})= - \frac{1}{\alpha } \sum _{k=2}^{d} \frac{1}{(k-1)!} B^{*} D^{k}{\mathcal {V}}(0)(\cdot ,{\mathbf {y}},\dots ,{\mathbf {y}}). \end{aligned}$$
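When the derivatives \(D^{k}{\mathcal {V}}(0)\) are available as (symmetric) tensors of a spatially discretized problem, as in Sect. 6, the feedback \(u_d\) can be evaluated by repeated contraction. The following Python sketch assumes that \(B\) is stored as a matrix (so that \(B^{*}\) becomes \(B^T\)) and that the derivative tensors are given as numpy arrays; all names are illustrative.

```python
import numpy as np
from math import factorial

def poly_feedback(y, B, T, alpha):
    """Evaluate u_d(y) = -(1/alpha) * sum_k 1/(k-1)! * B^T D^kV(0)(., y, ..., y).
    T is a dict {k: numpy array of order k} standing in for the symmetric derivative
    tensors D^k V(0); B is an n-by-m input matrix, so B.T plays the role of B^*."""
    grad = np.zeros(y.shape[0])
    for k, Tk in T.items():
        v = Tk
        for _ in range(k - 1):          # contract k-1 slots with y, leaving one free slot
            v = v @ y
        grad += v / factorial(k - 1)
    return -(1.0 / alpha) * (B.T @ grad)

# e.g. with T = {2: Pi, 3: X3}, Pi of shape (n, n) and X3 of shape (n, n, n),
# this realizes the cubic feedback law u_3.
```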

The associated closed-loop system is given by

$$\begin{aligned} {\dot{{\mathbf {y}}}}_d = A {\mathbf {y}}_d - F({\mathbf {y}}_d) + Bu_{d}({\mathbf {y}}_d), \quad {\mathbf {y}}_d(0) = {\mathbf {y}}_0. \end{aligned}$$
(51)

Below we will also derive an estimate for the open-loop control, i.e., the function defined by

$$\begin{aligned} u_d:[0,\infty ) \rightarrow U, \ t\mapsto u_d(t):=u_d({\mathbf {y}}_d(t)) \end{aligned}$$
(52)

which is obtained here via the closed-loop dynamics. With slight abuse of notation, the open-loop control \(u_d(t)\) as well as its closed-loop interpretation \(u_d({\mathbf {y}}_d(t))\) will both be denoted by \(u_d\).

We begin with some local Lipschitz continuity estimates for the nonlinear part of the feedback law. For this purpose, we set

$$\begin{aligned} G_k({\mathbf {y}}):=-\frac{1}{\alpha (k-1)!}BB^{*}D^{k}{\mathcal {V}}(0)(\cdot ,{\mathbf {y}},\dots ,{\mathbf {y}}), \end{aligned}$$
(53)

for all \(k \ge 3\). The closed-loop system can be reformulated as follows:

$$\begin{aligned} {\dot{{\mathbf {y}}}}_d&=A_\pi {\mathbf {y}}_d - F({\mathbf {y}}_d) -\frac{1}{\alpha } \sum _{k=3}^d \frac{1}{(k-1)!}BB^{*}D^{k} {\mathcal {V}}(0)(\cdot ,{\mathbf {y}}_d,\dots ,{\mathbf {y}}_d) \nonumber \\&= A_{\pi } {\mathbf {y}}_d - F({\mathbf {y}}_d) + \sum _{k=3}^{d} G_{k}({\mathbf {y}}_d). \end{aligned}$$
(54)

Lemma 20

For all \(k \ge 3\), there exists a constant \(C(k)>0\) such that for all \({\mathbf {y}}\) and \({\mathbf {z}}\in Y\),

$$\begin{aligned} \Vert G_k({\mathbf {y}})- G_k({\mathbf {z}}) \Vert _Y \le C(k) \Vert {\mathbf {y}}- {\mathbf {z}}\Vert _Y \max ( \Vert {\mathbf {y}}\Vert _Y, \Vert {\mathbf {z}}\Vert _Y )^{k-2}. \end{aligned}$$

Moreover, for all \(\delta \in [0,1]\), for all \({\mathbf {y}}\) and \({\mathbf {z}}\in W_\infty \) such that \(\Vert {\mathbf {y}}\Vert _{W_\infty } \le \delta \) and \(\Vert {\mathbf {z}}\Vert _{W_\infty } \le \delta \),

$$\begin{aligned} \Vert G_k({\mathbf {y}})-G_k({\mathbf {z}}) \Vert _{L^2(0,\infty ;V')} \le C(k) \delta \Vert {\mathbf {y}}- {\mathbf {z}}\Vert _{W_\infty }. \end{aligned}$$

Proof

We have the identity

$$\begin{aligned}&D^k {\mathcal {V}}(0)(\cdot , {\mathbf {y}},\dots ,{\mathbf {y}}) - D^k {\mathcal {V}}(0)(\cdot , {\mathbf {z}},\dots ,{\mathbf {z}}) = D^k {\mathcal {V}}(0)(\cdot , {\mathbf {y}}- {\mathbf {z}}, {\mathbf {y}},\dots ,{\mathbf {y}}) \\&\quad + D^k {\mathcal {V}}(0)(\cdot , {\mathbf {z}}, {\mathbf {y}}- {\mathbf {z}},\dots ,{\mathbf {y}}) + \dots + D^k {\mathcal {V}}(0)(\cdot , {\mathbf {z}},\dots ,{\mathbf {z}}, {\mathbf {y}}- {\mathbf {z}}). \end{aligned}$$

The first inequality easily follows, with \(C(k)= \frac{1}{\alpha (k-2)!} \Vert B\Vert _{{\mathcal {L}}(U,Y)}^2 \Vert D^k {\mathcal {V}}(0) \Vert \) and \(\Vert D^k {\mathcal {V}}(0) \Vert \) as defined in (5). We also obtain that for all \({\mathbf {y}}\) and \({\mathbf {z}}\in W_\infty \),

$$\begin{aligned} \Vert G_k({\mathbf {y}})-G_k({\mathbf {z}}) \Vert _{L^2(0,\infty ;V')} \le C(k) \Vert {\mathbf {y}}- {\mathbf {z}}\Vert _{W_\infty } \max ( \Vert {\mathbf {y}}\Vert _{W_\infty }, \Vert {\mathbf {z}}\Vert _{W_\infty } )^{k-2}. \end{aligned}$$

The second inequality follows, since \(k \ge 3\) and \(\delta \le 1\). \(\square \)

The well-posedness of the closed-loop system can now be established with the same tools as those used in Lemma 5.

Theorem 21

Let \(d \ge 2\). Let C and C(k) denote the constants from Lemmas 4 and 20. There exists a constant \(M_{\mathrm {cls}}\) such that for all \({\mathbf {y}}_0 \in Y\) with

$$\begin{aligned} \Vert {\mathbf {y}}_0\Vert _Y \le \frac{1}{4(C+ \sum _{k=3}^d C(k))M_{\mathrm {cls}}^2}, \end{aligned}$$

the closed-loop system (51) has a unique solution \({\mathbf {y}}_d\) in \(W_\infty \), which satisfies

$$\begin{aligned} \Vert {\mathbf {y}}_d\Vert _{W_\infty } \le 2 M_{\mathrm {cls}} \Vert {\mathbf {y}}_0 \Vert _Y. \end{aligned}$$
(55)

Proof

The existence of a solution \({\mathbf {y}}\in W_\infty \), satisfying (55), can be obtained exactly as in Lemma 5. Thus we only discuss uniqueness. Let \({\mathbf {y}}\) and \({\mathbf {z}}\) denote two solutions to (51) in \(W_\infty \). Let us set \({\mathbf {e}}= {\mathbf {y}}- {\mathbf {z}}\). Arguing as in the proof of Lemma 5, one can prove the existence of \(M>0\) such that

$$\begin{aligned} \frac{1}{2} \frac{\text {d}}{\text {d} t} \Vert {\mathbf {e}} \Vert _Y^2 \le \,&M \Big ( 1 + \Vert {\mathbf {y}}\Vert _Y^2 \Vert {\mathbf {y}}\Vert _V^2 + \Vert {\mathbf {z}}\Vert _Y^2 \Vert {\mathbf {z}}\Vert _V^2 \\&+ \sum _{k=3}^d C(k)^2 \max ( \Vert {\mathbf {y}}\Vert _Y, \Vert {\mathbf {z}}\Vert _Y )^{2(k-2)} \Big ) \Vert {\mathbf {e}} \Vert _Y^2, \end{aligned}$$

for all \(t \ge 0\). Since \({\mathbf {y}}\) and \({\mathbf {z}}\in W_\infty \) and \({\mathbf {e}}(0)=0\), we obtain with Gronwall’s inequality that \({\mathbf {e}}=0\), which proves the uniqueness of the solution to the closed-loop system. \(\square \)

Theorem 22

Let \(d \ge 2\). There exist \(\delta _6 > 0\) and \(M > 0\) such that for all \({\mathbf {y}}_{0} \in B_Y(\delta _6)\), it holds that

$$\begin{aligned} \Vert {\bar{{\mathbf {y}}}}-{\mathbf {y}}_{d} \Vert _{W_{\infty }}&\le M \Vert {\mathbf {y}}_{0} \Vert _{Y}^{d}, \\ \max \left( \Vert {\bar{u}}-u_{d} \Vert _{L^{2}(0,\infty ;U)},\Vert {\bar{u}}-u_{d} \Vert _{L^{\infty }(0,\infty ;U)}\right)&\le M \Vert {\mathbf {y}}_{0} \Vert _{Y}^{d}, \end{aligned}$$

where \(({\bar{{\mathbf {y}}}},{\bar{u}})= ({\mathcal {Y}}({\mathbf {y}}_0),{\mathcal {U}}({\mathbf {y}}_0))\), \({\mathbf {y}}_{d}\) is the solution of the closed-loop system (51) with initial condition \({\mathbf {y}}_0\), and \(u_d\) is as defined in (52).

Proof

Let us fix \(\delta _6= \min \big ( \delta _5,(4(C+ \sum _{k=3}^d C(k)) M_{\text {cls}}^2)^{-1} \big )\), so that Proposition 16 and Theorem 21 apply for \({\mathbf {y}}_0 \in B_Y(\delta _6)\). By Taylor’s theorem, see, e.g., [43, Theorem 4A], there exists \(\delta > 0\) such that for all \({\mathbf {y}}\in B_Y(\delta )\),

$$\begin{aligned} D{\mathcal {V}}({{\mathbf {y}}}) = \sum _{k=2}^{d}\frac{1}{(k-1)!} D^{k} {\mathcal {V}}(0) (\cdot ,{{\mathbf {y}}},\dots ,{{\mathbf {y}}}) + R_{d} ({{\mathbf {y}}}), \end{aligned}$$
(56)

where the remainder term \(R_{d}\) satisfies

$$\begin{aligned} \Vert R_{d}({{\mathbf {y}}}) \Vert _{Y} \le M \Vert {{\mathbf {y}}} \Vert ^{d}_{Y}, \end{aligned}$$

for some constant M independent of \({\mathbf {y}}\). Reducing \(\delta _6\) if necessary, we have that \(\Vert {\bar{{\mathbf {y}}}}(t) \Vert _Y \le \delta \) for all \(t \ge 0\). Combining (43) and the Taylor expansion (56), we then obtain that

$$\begin{aligned} \dot{{\bar{{\mathbf {y}}}}} = A {\bar{{\mathbf {y}}}} - F({\bar{{\mathbf {y}}}}) - \frac{1}{\alpha } B B^* D {\mathcal {V}}({\bar{{\mathbf {y}}}}) = A_\pi {\bar{{\mathbf {y}}}} - F({\bar{{\mathbf {y}}}}) + \sum _{k=3}^d G_k({\bar{{\mathbf {y}}}}) - \frac{1}{\alpha } B B^* R_d({\bar{{\mathbf {y}}}}).\nonumber \\ \end{aligned}$$
(57)

Let us now consider the error dynamics \({\mathbf {e}}:= {\bar{{\mathbf {y}}}}-{\mathbf {y}}_{d}\). We have \({\mathbf {e}}(0)=0\); moreover, by (54) and (57),

$$\begin{aligned} \dot{ {\mathbf {e}} }&= A_\pi {\mathbf {e}} - F({\bar{{\mathbf {y}}}})+F({\mathbf {y}}_{d}) + \sum _{k=3}^{d} (G_{k}({\bar{{\mathbf {y}}}})-G_{k}({\mathbf {y}}_{d})) - \frac{1}{\alpha } B B^* R_d({\bar{{\mathbf {y}}}}). \end{aligned}$$

Alternatively, \({\mathbf {e}}\) can be expressed as the solution of the system

$$\begin{aligned} \dot{{\mathbf {e}}}= A_{\pi } {\mathbf {e}} + {\mathbf {f}}, \quad {\mathbf {e}}(0)=0, \end{aligned}$$
(58)

where the source term \({\mathbf {f}}\) is given by

$$\begin{aligned} {\mathbf {f}}= -F({\bar{{\mathbf {y}}}})+F({\mathbf {y}}_{d}) + \sum _{k=3}^{d} (G_{k}({\bar{{\mathbf {y}}}})-G_{k}({\mathbf {y}}_{d})) - \frac{1}{\alpha } B B^* R_d({\bar{{\mathbf {y}}}}). \end{aligned}$$

Consider \({\tilde{\delta }} \in (0,1]\). The precise value of \({\tilde{\delta }}\) will be fixed later. By Lemma 12 and Theorem 21, we can reduce \(\delta _6\) so that \(\max (\Vert {\bar{{\mathbf {y}}}} \Vert _{W_{\infty }},\Vert {\mathbf {y}}_{d} \Vert _{W_{\infty }} ) \le {\tilde{\delta }}\). We first observe that

$$\begin{aligned} \Big \Vert \frac{1}{\alpha } B B^* R_d({\bar{{\mathbf {y}}}}) \Big \Vert _{L^2(0,\infty ;V')} \le M \Vert {\bar{{\mathbf {y}}}} \Vert _{L^\infty (0,\infty ;Y)}^{d-1} \Vert {\bar{{\mathbf {y}}}} \Vert _{L^2(0,\infty ;Y)} \le M \Vert {\mathbf {y}}_0 \Vert _Y^d. \end{aligned}$$

Applying further Lemmas 4 and 20, we obtain

$$\begin{aligned} \Vert {\mathbf {f}}\Vert _{L^2(0,\infty ;V')}&\le M \Big (\Vert F({\bar{{\mathbf {y}}}})-F({\mathbf {y}}_{d})\Vert _{L^2(0,\infty ;V')} \\&\quad + \sum _{k=3}^{d} \Vert G_{k}({\bar{{\mathbf {y}}}})-G_{k}({\mathbf {y}}_{d})\Vert _{L^2(0,\infty ;V')} + \Vert {\mathbf {y}}_0 \Vert _Y^d \Big ) \\&\le M ({\tilde{\delta }} \Vert {\mathbf {e}} \Vert _{W_{\infty }} + \Vert {\mathbf {y}}_{0} \Vert _{Y}^{d} ). \end{aligned}$$

For the solution of system (58) we thus obtain the estimate

$$\begin{aligned} \Vert {\mathbf {e}} \Vert _{W_{\infty }} \le M \Vert {\mathbf {f}}\Vert _{L^2(0,\infty ;V')} \le M ({\tilde{\delta }} \Vert {\mathbf {e}} \Vert _{W_{\infty }} + \Vert {\mathbf {y}}_{0} \Vert _{Y}^{d} ). \end{aligned}$$

The constant \(M> 0\) in the above estimate is independent of \({\tilde{\delta }}\). We can now define \({\tilde{\delta }}= \min \big ( 1, \frac{1}{2M} \big )\). The first estimate on \(\Vert {\bar{{\mathbf {y}}}}-{\mathbf {y}}_{d} \Vert _{W_{\infty }}\) follows.

Let us estimate \({\bar{u}}-u_d\). By (43) and by definition of the generated open-loop control \(u_d\), we have that

$$\begin{aligned} {\bar{u}}- u_d = - \frac{1}{\alpha } B^* \big ( D {\mathcal {V}}({\bar{{\mathbf {y}}}})- D {\mathcal {V}}_d({\mathbf {y}}_d) \big ) = - \frac{1}{\alpha } B^* \big ( R_d({\bar{{\mathbf {y}}}}) + D {\mathcal {V}}_d({\bar{{\mathbf {y}}}})- D {\mathcal {V}}_d({\mathbf {y}}_d) \big ). \end{aligned}$$

Let us estimate the two terms of the right-hand side. It is easy to check that

$$\begin{aligned} \max \big ( \Vert R_d({\bar{{\mathbf {y}}}}) \Vert _{L^\infty (0,\infty ;Y)}, \Vert R_d({\bar{{\mathbf {y}}}}) \Vert _{L^2(0,\infty ;Y)} \big ) \le M \Vert {\mathbf {y}}_0 \Vert _Y^d. \end{aligned}$$

Using the techniques of Lemma 20 and the estimate on \(\Vert {\bar{{\mathbf {y}}}} - {\mathbf {y}}_d \Vert _{W_\infty }\), we also obtain that

$$\begin{aligned}&\max \big ( \Vert D {\mathcal {V}}_d({\bar{{\mathbf {y}}}})- D {\mathcal {V}}_d({\mathbf {y}}_d) \Vert _{L^2(0,\infty ;Y)}, \Vert D {\mathcal {V}}_d({\bar{{\mathbf {y}}}})- D {\mathcal {V}}_d({\mathbf {y}}_d) \Vert _{L^\infty (0,\infty ;Y)} \big ) \\&\quad \le M \Vert {\bar{{\mathbf {y}}}}- {\mathbf {y}}_d \Vert _{W_\infty } \le M \Vert {\mathbf {y}}_0 \Vert _Y^d. \end{aligned}$$

The second estimate on \({\bar{u}}-u_d\) follows. \(\square \)

5.2 Estimates for the Pressure

It is well-known that for \({\mathbf {y}}_0 \in Y\), the pressure term that can be associated with the Navier–Stokes equations is a distribution only (see, e.g., [39, 40, Chapter III-§3]). In the following, we reestablish this fact and argue that a result analogous to Theorem 22 also holds for the pressure, provided the latter is considered in \(W^{-1,\infty }(0,\infty ; L_0^2(\Omega )) = W^{1,1}_0(0,\infty ; L_0^2(\Omega ))'\) with

$$\begin{aligned} W^{1,1}_0(0,\infty ; L_0^2(\Omega )) = \left\{ v \in W^{1,1}(0,\infty ;L_0^2(\Omega )) \ | \ v(0) = 0 \right\} \end{aligned}$$

and

$$\begin{aligned} L_0^2(\Omega ) = \left\{ v \in L^2(\Omega ) \ | \ \int _\Omega {v(x)\, \mathrm {d}x = 0}\right\} . \end{aligned}$$

We define \(W_0^{1,1}(0,\infty ;{\mathbb {H}}_0^1(\Omega ))\) similarly. We recall here that \(W^{1,1}_0(0,\infty ; {\mathbb {H}}_0^1(\Omega ))\) embeds continuously into \(L^\infty (0,\infty ;{\mathbb {H}}_0^1(\Omega )) \cap L^2(0,\infty ;{\mathbb {H}}_0^1(\Omega ))\). Further, the elements \(\varvec{\phi }\) of \(W^{1,1}_0(0,\infty ; {\mathbb {H}}_0^1(\Omega ))\) can be identified a.e. with continuous functions on \([0,\infty )\) and satisfy \(\lim _{t\rightarrow \infty } \Vert \varvec{\phi }(t)\Vert _{{\mathbb {H}}_0^1(\Omega )}=0\). We use the properties of Banach-space valued functions as summarized in [14, Chapter II-§5].

Lemma 23

Let \(({\mathbf {y}},u) \in W_\infty \times L^2(0,\infty ;U)\) be such that \({\dot{{\mathbf {y}}}}= A {\mathbf {y}}- F({\mathbf {y}}) + Bu\). Then, there exists a unique \(p \in W^{-1,\infty }(0,\infty ;L_0^2(\Omega ))\) such that

$$\begin{aligned} {\dot{{\mathbf {y}}}}= A {\mathbf {y}}- F({\mathbf {y}}) + Bu - \nabla p \quad \text {in } W^{1,1}(0,\infty ;{\mathbb {H}}_0^1(\Omega ))', \end{aligned}$$

that is,

$$\begin{aligned} -\int _0^\infty \langle {\mathbf {y}}(t), {\dot{\varvec{\phi }}}(t) \rangle _Y \, \mathrm {d} t =&\ \int _0^\infty \big \langle A {\mathbf {y}}(t) - F({\mathbf {y}}(t)) + Bu(t), \varvec{\phi }(t) \big \rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} \, \mathrm {d} t \nonumber \\&+ \langle p, {{\,\mathrm{div}\,}}\varvec{\phi }\rangle _{W^{-1,\infty }(0,\infty ,L_0^2(\Omega )),W_0^{1,1}(0,\infty ;L_0^2(\Omega ))}, \end{aligned}$$
(59)

for all \(\varvec{\phi }\in W_0^{1,1}(0,\infty ;{\mathbb {H}}_0^1(\Omega ))\). Moreover,

$$\begin{aligned} \Vert p \Vert _{W^{-1,\infty }(0,\infty ,L_0^2(\Omega ))} \le M \big ( \Vert {\mathbf {y}}\Vert _{W_\infty } + \Vert {\mathbf {y}}\Vert _{W_\infty }^2 + \Vert u \Vert _{L^2(0,\infty ;U)} \big ), \end{aligned}$$
(60)

for a constant M independent of \(({\mathbf {y}},u)\).

Proof

We follow the technique of integrating the state equation in time, see, e.g., [14, Chapter V-§1], and introduce

$$\begin{aligned} {\mathbf {G}}(t)= {\mathbf {y}}(t) {- {\mathbf {y}}_0} + \int _0^t {\mathbf {g}}(s) \, \mathrm {d} s, \quad \text {with: } {\mathbf {g}}(s)= A {\mathbf {y}}(s) - F({\mathbf {y}}(s)) + Bu(s). \end{aligned}$$
(61)

It can be easily shown that \({\mathbf {g}}\in L^2(0,\infty ;{\mathbb {H}}^{-1}(\Omega ))\) and that there exists a constant \(M>0\) independent of \(({\mathbf {y}},u)\) such that

$$\begin{aligned} \Vert {\mathbf {g}}\Vert _{L^2(0,\infty ;{\mathbb {H}}^{-1}(\Omega ))} \le M \big ( \Vert {\mathbf {y}}\Vert _{W_\infty } + \Vert {\mathbf {y}}\Vert _{W_\infty }^2 + \Vert u \Vert _{L^2(0,\infty ;U)} \big ). \end{aligned}$$
(62)

This estimate can be obtained with the Cauchy-Schwarz inequality and Proposition 1(i), which also holds true in \({\mathbb {H}}^{-1}(\Omega )\) (in place of \(V'\)). Since \({\mathbf {y}}\in W_\infty \), it further follows that \({\mathbf {G}}\) is a continuous function of time with values in \({\mathbb {H}}^{-1}(\Omega )\). Moreover, \(\langle {\mathbf {G}}(t),\varvec{\psi } \rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} = 0\) for all \(t \in [0,\infty )\) and \(\varvec{\psi } \in V\). Hence for all \(t \in [0, \infty )\), there exists a unique \({\mathcal {P}}(t) \in L_0^2(\Omega )\) such that \({\mathbf {G}}(t) = -\nabla {\mathcal {P}}(t)\), see, e.g., [14, Theorem IV.2.3]. Let us prove that \({\mathcal {P}} \in C([0,\infty ), L^2_0(\Omega ))\). Recall first that there exists an operator \({\mathcal {K}} \in {\mathcal {L}}(L_0^2(\Omega ),{\mathbb {H}}_0^1(\Omega ))\) with the property that

$$\begin{aligned} {{\,\mathrm{div}\,}}({\mathcal {K}} \rho ) = \rho , \quad \forall \rho \in L_0^2(\Omega ), \end{aligned}$$

see [14, Theorem IV.3.1]. Let \(\rho \in L_0^2(\Omega )\) be arbitrary and let \(\varvec{\phi }= {\mathcal {K}} \rho \). For all t and \(\tau \) in \([0, \infty )\), we have

$$\begin{aligned} \langle {\mathcal {P}}(t) - {\mathcal {P}}(\tau ), \rho \rangle _{L_0^2(\Omega )}&= -\langle \nabla {\mathcal {P}}(t) - \nabla {\mathcal {P}}(\tau ), \varvec{\phi }\rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} \\&= \langle {\mathbf {G}}(t) - {\mathbf {G}}(\tau ), \varvec{\phi }\rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} \\&\le \Vert {\mathcal {K}} \Vert _{{\mathcal {L}}(L_0^2(\Omega ),{\mathbb {H}}_0^1(\Omega ))} \Vert {\mathbf {G}}(t) - {\mathbf {G}}(\tau ) \Vert _{{\mathbb {H}}^{-1}(\Omega )} \Vert \rho \Vert _{L_0^2(\Omega )}. \end{aligned}$$

It follows that \(\Vert {\mathcal {P}}(t)-{\mathcal {P}}(\tau ) \Vert _{L_0^2(\Omega )} \le M \Vert {\mathbf {G}}(t)-{\mathbf {G}}(\tau ) \Vert _{{\mathbb {H}}^{-1}(\Omega )}\), which concludes the proof of continuity of \({\mathcal {P}}\). We now introduce the distributional derivative \(p= \frac{\mathrm {d}}{\mathrm {d} t} {\mathcal {P}}\) and establish that \(p \in W^{-1,\infty }(0,\infty ;L_0^2(\Omega ))\). Let \(\rho \in {\mathcal {C}}_c^\infty (0,\infty ;L_0^2(\Omega ))\) be arbitrary and set \({\varvec{\phi }}(t) = {\mathcal {K}} \rho (t)\). Note that \({\varvec{\phi }} \in {\mathcal {C}}_c^\infty (0,\infty ;{\mathbb {H}}^{1}_0(\Omega ))\). We have

$$\begin{aligned} \langle p, \rho \rangle = \,&- \int _0^\infty \langle {\mathcal {P}}(t), {\dot{\rho }}(t) \rangle _{L_0^2(\Omega )} \, \mathrm {d} t = - \int _0^\infty \langle {\mathcal {P}}(t), {{\,\mathrm{div}\,}}{\dot{\varvec{\phi }}}(t) \rangle _{L_0^2(\Omega )} \, \mathrm {d} t \\ = \,&\int _0^\infty \langle \nabla {\mathcal {P}}(t), {\dot{\varvec{\phi }}}(t) \rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} \, \mathrm {d} t \\ = \,&- \int _0^\infty \langle {\mathbf {y}}(t) {- {\mathbf {y}}_0} - {\textstyle \int _0^t } {\mathbf {g}}(s)\, \mathrm {d} s, {\dot{\varvec{\phi }}}(t) \rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} \, \mathrm {d} t \\ = \,&-\int _0^\infty \langle {\mathbf {y}}(t) {- {\mathbf {y}}_0}, {\dot{\varvec{\phi }}}(t) \rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} + \langle {\mathbf {g}}(t), \varvec{\phi }(t) \rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} \, \mathrm {d} t. \end{aligned}$$

Recalling the embedding of \(W_0^{1,1}(0,\infty ;{\mathbb {H}}_0^1(\Omega ))\) in \(L^2(0,\infty ;{\mathbb {H}}_0^1(\Omega ))\), we deduce that

$$\begin{aligned} \langle p, \rho \rangle \le M \big ( \Vert {\mathbf {y}}\Vert _{L^\infty (0,\infty ;{\mathbb {H}}^{-1}(\Omega ))} + \Vert {\mathbf {g}}\Vert _{L^2(0,\infty ;{\mathbb {H}}^{-1}(\Omega ))} \big ) \Vert \varvec{\phi }\Vert _{W_0^{1,1}(0,\infty ;{\mathbb {H}}_0^1(\Omega ))}. \end{aligned}$$

Using then estimate (62), we obtain that p can be extended to an element of \(W^{-1,\infty }(0,\infty ;L_0^2(\Omega ))\) satisfying estimate (60).

With the same calculations as above, we can show that for all \(\varvec{\phi }\in W_0^{1,1}(0,\infty ;{\mathbb {H}}_0^1(\Omega ))\),

$$\begin{aligned} \langle p, {{\,\mathrm{div}\,}}\varvec{\phi }\rangle = - \int _0^\infty \langle {\mathbf {y}}(t), {\dot{\varvec{\phi }}}(t) \rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} + \langle {\mathbf {g}}(t), \varvec{\phi }(t) \rangle _{{\mathbb {H}}^{-1}(\Omega ),{\mathbb {H}}_0^1(\Omega )} \, \mathrm {d} t, \end{aligned}$$

which proves that p satisfies (59). Let us prove the uniqueness. Let \({\tilde{p}} \in W^{-1,\infty }(0,\infty ;L_0^2(\Omega ))\) satisfy (59). Let \(\rho \in W_0^{1,1}(0,\infty ;L_0^2(\Omega ))\) be arbitrary and let us set \(\varvec{\phi }= {\mathcal {K}} \rho \). Then, by (59), we have

$$\begin{aligned} \begin{aligned} 0&= \langle p- {\tilde{p}}, {{\,\mathrm{div}\,}}\varvec{\phi }\rangle _{W^{-1,\infty }(0,\infty ;L_0^2(\Omega )),W_0^{1,1}(0,\infty ;L_0^2(\Omega ))} \\&= \langle p- {\tilde{p}}, \rho \rangle _{W^{-1,\infty }(0,\infty ;L_0^2(\Omega )),W_0^{1,1}(0,\infty ;L_0^2(\Omega ))}, \end{aligned} \end{aligned}$$

which proves that \(p= {\tilde{p}}\) and concludes the proof. \(\square \)

We have the following result, extending Theorem 22.

Proposition 24

Let \(d \ge 2\). There exists \(M > 0\) such that for all \({\mathbf {y}}_0 \in Y\) with \(\Vert {\mathbf {y}}_0 \Vert _Y \le \delta _6\),

$$\begin{aligned} \Vert {\bar{p}} - p_d \Vert _{W^{-1,\infty }(0,\infty ;L_0^2(\Omega ))} \le M \Vert {\mathbf {y}}_0 \Vert _Y^d, \end{aligned}$$

where \({\bar{p}}\) and \(p_d\) denote the pressure terms associated with \(({\bar{{\mathbf {y}}}},{\bar{u}})\) and \(({\mathbf {y}}_d,u_d)\) respectively.

Proof

We have introduced in the proof of Lemma 23 the term \({\mathbf {g}}\) associated with a feasible pair \(({\mathbf {y}},u)\). Let us denote by \({\bar{{\mathbf {g}}}}\) and \({\mathbf {g}}_d\) the corresponding terms associated with \(({\bar{{\mathbf {y}}}},{\bar{u}})\) and \(({\mathbf {y}}_d,u_d)\). One can verify that, as a consequence of Theorem 22, \(\Vert {\bar{{\mathbf {g}}}} - {\mathbf {g}}_d \Vert _{L^2(0,\infty ;{\mathbb {H}}^{-1}(\Omega ))} \le M \Vert {\mathbf {y}}_0 \Vert _Y^d\). The assertion then follows with calculations similar to those performed in the proof of Lemma 23. \(\square \)

6 A Numerical Example

In this section, we present numerical simulations for the two-dimensional Navier–Stokes equations and computed feedback laws of order 2 and 3. The discretization procedure and the example setups are classical and are taken from [9]. The main purpose is to show that the computation of higher order feedback laws is possible and, depending on the chosen parameters, visible differences to a Riccati-based feedback law can be observed.

Fig. 1 Geometry and non-uniform grid

6.1 Setup and Discretization

We briefly summarize the numerical implementation provided in [9]. Therein a Taylor–Hood \(P_{2}\)-\(P_{1}\) finite element discretization for a two-dimensional wake behind a cylinder is discussed. The computational domain \(\Omega =(0,2.2)\times (0,0.41)\) as well as a non-uniform grid are shown in Fig. 1. For all simulations, we use the Reynolds number \(\mathrm {Re}:=\frac{1}{\nu }=90\) and the parabolic inflow profile discussed in [9]. On the upper and lower ends of the geometry, no-slip boundary conditions are employed. The outflow is modeled by do-nothing boundary conditions on the right end of the geometry. For the desired stabilization, we utilize a distributed, separable control acting in the control domain \(\Omega _{c}:= [0.27,0.32]\times [0.15,0.25]\). In particular, the control operator is of the form

$$\begin{aligned} Bu=\sum _{\ell =1}^3 \begin{bmatrix} 0 \\ w_{\ell }(x_2) \end{bmatrix} u_{\ell }(t) + \begin{bmatrix} w_{\ell }(x_2) \\ 0 \end{bmatrix} u_{\ell +3}(t) , \end{aligned}$$

where the control shape functions \(w_{1},w_{2}\) and \(w_{3}\) are piecewise linear functions which are constant along the \(x_1\)-direction.
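The exact shape functions are specified in [9] and are not reproduced here; purely for illustration, the following Python sketch realizes one possible (assumed) choice of hat functions in the \(x_2\)-variable, supported in \(\Omega _c\) and constant in \(x_1\), together with the evaluation of the resulting control field \(Bu\).

```python
import numpy as np

# Assumed (illustrative) shape functions: hat functions in x_2 on [0.15, 0.25],
# constant in x_1 and supported in Omega_c = [0.27, 0.32] x [0.15, 0.25].
# The exact functions used in [9] may differ.
x2_nodes = np.linspace(0.15, 0.25, 5)      # the three interior nodes carry w_1, w_2, w_3
h = x2_nodes[1] - x2_nodes[0]

def w(ell, x1, x2):
    """Hypothetical hat function w_ell, ell in {1, 2, 3}."""
    in_x1 = (0.27 <= x1) & (x1 <= 0.32)
    return in_x1 * np.maximum(0.0, 1.0 - np.abs(x2 - x2_nodes[ell]) / h)

def control_field(u, x1, x2):
    """Evaluate (Bu)(x): the inputs u_1, u_2, u_3 act on the x_2-component,
    u_4, u_5, u_6 on the x_1-component, as in the formula above."""
    v1 = sum(w(ell, x1, x2) * u[ell + 2] for ell in range(1, 4))   # x_1-component
    v2 = sum(w(ell, x1, x2) * u[ell - 1] for ell in range(1, 4))   # x_2-component
    return np.array([v1, v2])
```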

The finite element discretization is computed in FEniCS and the resulting matrices associated with the spatial semidiscretization are exported to MATLAB. As described in detail in [9], the (spatially) discrete system takes the form

$$\begin{aligned} \begin{aligned} E{\dot{z}}(t)&= -Kz(t) + H (z(t)\otimes z(t)) + Bu(t) + Gq(t)+ f_{z}, \\ 0&= G^{T}z(t) + f_{q}, \end{aligned} \end{aligned}$$
(63)

where \(E,K \in \mathbb R^{n_v \times n_{v}}\) are the mass and stiffness matrices, \(G^T \in \mathbb R^{n_p \times n_v}\) represents the discrete divergence operator, the tensor matricization \(H \in \mathbb R^{n_{v}\times n_{v}^2}\) represents the trilinear form (9) and \(B\in \mathbb R^{n_v \times 6}\) is the discrete control operator. Note that H can be constructed in such a way that \(H (z_{1}\otimes z_{2})=H(z_{2}\otimes z_{1})\) for any \(z_{1},z_{2} \in \mathbb R^{n_{v}}\). The time invariant vectors \(f_{z} \in \mathbb R^{n_v}\) and \(f_{q}\in \mathbb R^{n_p}\) are due to the elimination of the boundary nodes. The following results correspond to a discretization level with \(n_{v}=9356\) and \(n_{p}=1289\). The velocity profile of the unstable steady state solution \({\bar{z}}\) shown in Fig. 2 is obtained by a Picard iteration applied to the uncontrolled stationary system, i.e., system (63) with \({\dot{z}}(t)=0\) and \(u(t)=0\). To illustrate that the controller stabilizes this steady state solution, we start the transient simulations of the closed-loop systems from the slightly randomly perturbed steady state \(z(0)={\bar{z}}+\frac{\Vert {\bar{z}}\Vert _2}{2000}\cdot \texttt {randn}(n_v,1)\).
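For illustration, a schematic version of such a Picard iteration for the uncontrolled stationary system is sketched below in Python. The routine N(z), which returns the matrix \(w \mapsto H(z\otimes w)\), is a hypothetical stand-in for the matrix-free implementation actually used, and the formulation follows the sign conventions of (63).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def picard_steady_state(K, G, f_z, f_q, N, z0, tol=1e-10, maxit=50):
    """Schematic Picard iteration for the uncontrolled stationary version of (63):
         0 = -K z + H(z (x) z) + G q + f_z,     0 = G^T z + f_q.
    N(z) is a hypothetical routine returning the sparse matrix w -> H(z (x) w)."""
    n_v = G.shape[0]
    z, q = z0.copy(), None
    for _ in range(maxit):
        A11 = K - N(z)                                  # frozen-coefficient (Oseen-type) block
        S = sp.bmat([[A11, -G], [G.T, None]], format="csc")
        sol = spla.spsolve(S, np.concatenate([f_z, -f_q]))
        z_new, q = sol[:n_v], sol[n_v:]
        if np.linalg.norm(z_new - z) <= tol * np.linalg.norm(z_new):
            return z_new, q
        z = z_new
    return z, q

# The transient closed-loop simulations can then be started from the perturbed state
# z0 = z_bar + np.linalg.norm(z_bar) / 2000 * np.random.randn(n_v).
```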

Fig. 2 The steady state and a snapshot of the transient flow regime

6.2 Reformulation as an ODE System

System (63) is a system of differential-algebraic equations (DAEs) and hence the results from above are not readily applicable. While a thorough analysis in the framework of control of DAEs is certainly of interest, at this point we employ a reformulation initially proposed in [28] that allows us to rewrite the dynamics as a set of ODEs for the velocity vector z. As in (3), we consider the shifted variables \(y=z-{\bar{z}}\) and \(p=q-{\bar{q}}\), respectively. Consequently, we obtain

$$\begin{aligned} \begin{aligned} E{\dot{y}}(t)&= Ay(t) + H (y(t)\otimes y(t)) + Bu(t) + Gp(t), \\ 0&= G^{T}y(t), \end{aligned} \end{aligned}$$
(64)

where \(A=-K+H({\bar{z}}\otimes I+I\otimes {\bar{z}})\). Let us note that the second equation implies \(G^{T}{\dot{y}}(t)=0\). Following [28, Section 3], from the first equation, we thus obtain

$$\begin{aligned} 0=G^{T}{\dot{y}}(t) = G^{T}E^{-1}\left( A y(t) + H (y(t)\otimes y(t)) + Bu(t) + Gp(t) \right) . \end{aligned}$$

We can now eliminate the pressure from (64) using the relation

$$\begin{aligned} p(t)= - (G^{T}E^{-1}G)^{-1}G^{T}E^{-1} \left( Ay (t) + H(y(t)\otimes y(t)) + Bu(t)\right) . \end{aligned}$$

With the notation \(P=I- G(G^{T}E^{-1}G)^{-1}G^{T}E^{-1}\) this yields the system

$$\begin{aligned} E {\dot{y}}(t) = PAy(t) + P H(y(t)\otimes y(t)) + PBu(t). \end{aligned}$$

In fact, as has been discussed in [11], the matrix \(P=P^{2}\) can be interpreted as a discrete realization of the Leray projector. Since \(G^Ty=0\), we have \(P^{T}y(t)=y(t)\), so that we can multiply the last equation by P to obtain

$$\begin{aligned} (PEP^{T}) {\dot{y}}(t) = (PAP^{T})y(t) + \left( PH(P^{T}\otimes P^{T})\right) (y(t)\otimes y(t)) + (PB) u(t). \end{aligned}$$

Finally, by means of a decomposition \(P=\Theta _{\ell }\Theta _{r}^{T}\) with \(\Theta _{\ell }^{T} \Theta _{r}=I\) we can project onto the \(n_{v}-n_{p}\) dimensional subspace \(\mathrm {range}(P)\) and arrive at the ODE system

$$\begin{aligned} \underbrace{(\Theta _{r}^{T}E\Theta _{r})}_{{\widetilde{E}}} \dot{{\tilde{y}}}(t) = \underbrace{(\Theta _{r}^{T}A \Theta _{r})}_{{\widetilde{A}}} {\tilde{y}}(t) +\underbrace{(\Theta _{r}^{T} H (\Theta _{r}\otimes \Theta _{r}))}_{{\widetilde{H}}} ({\tilde{y}}(t)\otimes {\tilde{y}}(t)) + \underbrace{(\Theta _{r}^{T} B)}_{{\widetilde{B}}} u(t), \end{aligned}$$
(65)

where \({\tilde{y}}(t)=\Theta _{\ell }^{T} y(t)\). For the initialization, we use \({\tilde{y}}(0)=\Theta _{\ell }^{T} y_{0}\). At this point, we emphasize that the explicit formulas yield dense matrices and are thus rather a theoretical tool. In particular, an explicit computation of \({\widetilde{H}}\) is infeasible for the problem dimension considered here. As a remedy, we work with an implementation that applies the above operations whenever a matrix-vector multiplication is needed.
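A simple matrix-free sketch of the projector action \(y \mapsto Py\), with \(P=I- G(G^{T}E^{-1}G)^{-1}G^{T}E^{-1}\), is given below in Python (scipy). Assembling the small Schur complement \(G^{T}E^{-1}G\) explicitly is a simplification made here for illustration; it differs from the decomposition-based implementation described above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def leray_projector_action(E, G):
    """Return a callable y -> P y with P = I - G (G^T E^{-1} G)^{-1} G^T E^{-1}.
    Sketch only: the n_p x n_p Schur complement is assembled densely once."""
    E, G = sp.csc_matrix(E), sp.csc_matrix(G)
    solve_E = spla.factorized(E)
    EinvG = np.column_stack([solve_E(np.asarray(G[:, j].todense()).ravel())
                             for j in range(G.shape[1])])
    S = np.asarray(G.T @ EinvG)                 # Schur complement G^T E^{-1} G
    def apply_P(y):
        lam = np.linalg.solve(S, G.T @ solve_E(y))
        return y - G @ lam
    return apply_P
```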

6.3 Computing the Feedback Gain

With the previous considerations in mind, we focus on the stabilization problem

$$\begin{aligned} \inf _{\begin{array}{c} u \in L^2(0,\infty ;\mathbb R^{6}) \end{array}} J({\tilde{y}}_{u},u), \quad \text {subject to: } e({\tilde{y}}_{u},u)= (0,{\tilde{y}}_0) \end{aligned}$$
(66)

where

$$\begin{aligned} J({\tilde{y}}_{u},u)=\;&\frac{1}{2} \int _0^\infty \Vert \Theta _{r}{\tilde{y}}_{u}(t) \Vert ^2_{\mathbb R^{n_{v}}} \, \text {d}t + \frac{\alpha }{2} \int _0^\infty \Vert u(t) \Vert _{\mathbb R^{6}}^2 \, \text {d}t \\ e({\tilde{y}}_{u},u)=\;&\big ( {\widetilde{E}}\dot{{\tilde{y}}}_{u}-({\widetilde{A}}{\tilde{y}}_{u} + {\widetilde{H}}({\tilde{y}}_{u}\otimes {\tilde{y}}_{u}) + {\widetilde{B}}u), {\tilde{y}}(0) \big ). \end{aligned}$$

We illustrate the effect of higher order feedback laws by computing the first two nontrivial derivatives \(D^{2}{\mathcal {V}}(0)\) and \(D^{3}{\mathcal {V}}(0)\), respectively. For the computation of \(D^{2}{\mathcal {V}}(0)\equiv \Pi \in \mathbb R^{(n_{v}-{n_p})\times (n_{v}-n_{p})}\), we have to solve the algebraic matrix Riccati equation

$$\begin{aligned} {\widetilde{A}}^{T} \Pi {\widetilde{E}} + {\widetilde{E}}^{T}\Pi {\widetilde{A}} - \frac{1}{\alpha } {\widetilde{E}}^{T} \Pi {\widetilde{B}} {\widetilde{B}}^{T} \Pi {\widetilde{E}} + \Theta _{r}^{T} \Theta _{r} = 0, \end{aligned}$$

which in our case was done by means of the MATLAB function \(\texttt {care}\). For the third order tensor \(D^{3}{\mathcal {V}}(0)\equiv {\mathcal {X}} \in \mathbb R^{(n_{v}-n_{p})^{3}}\) we have to solve a linear system of the form \({\mathcal {A}}^{T}{\mathcal {X}} = {\mathcal {F}}\) where

$$\begin{aligned} \begin{aligned} {\mathcal {A}}&= {\widetilde{E}}\otimes {\widetilde{E}} \otimes {\widetilde{A}}_{\pi } + {\widetilde{E}} \otimes {\widetilde{A}}_{\pi }\otimes {\widetilde{E}}+ {\widetilde{A}}_{\pi }\otimes {\widetilde{E}} \otimes {\widetilde{E}}, \ \ \ {\widetilde{A}}_{\pi } = {\widetilde{A}}-\frac{1}{\alpha }{\widetilde{B}}{\widetilde{B}}^{T} \Pi {\widetilde{E}}, \\ {\mathcal {F}}&=-2\left( {\widetilde{H}}^{T}\otimes {\widetilde{E}}^{T} + {\widetilde{E}}^{T} \otimes {\widetilde{H}}^{T}+ (I\otimes {\mathcal {P}}^{T})({\widetilde{H}}^{T}\otimes {\widetilde{E}}^{T}) \right) \pi , \end{aligned} \end{aligned}$$
(67)

where \(\pi =\mathrm {vec}(\Pi )\) denotes the vectorization of \(\Pi \) and the permutation matrix \({\mathcal {P}}\) is given by

$$\begin{aligned} {\mathcal {P}}= \begin{bmatrix} I\otimes e_{1},\dots ,I\otimes e_{n_v-n_p} \end{bmatrix} \in \mathbb R^{(n_v-n_p)^{2} \times (n_v-n_p)^{2} }. \end{aligned}$$

Let us emphasize that \({\mathcal {F}}\) is the discrete realization of the term \({\mathcal {R}}_{3}\) in (47). In particular, the tensor \({\mathcal {F}}\) is symmetric. Note that computing a solution \({\mathcal {X}}\) to \({\mathcal {A}}^{T}{\mathcal {X}}={\mathcal {F}}\) is infeasible without using further tools such as model order reduction or tensor calculus as storing the vector \({\mathcal {X}}\in \mathbb R^{(n_v-n_p)^{3}}\) already requires more than 4 TB of data. As a remedy, we aim for a direct computation of the corresponding feedback gain

$$\begin{aligned} {\widetilde{K}} = ({\widetilde{E}}^{T} \otimes {\widetilde{E}}^{T} \otimes {\widetilde{B}}^{T}){\mathcal {X}} \end{aligned}$$
(68)

without explicitly computing \({\mathcal {X}}\). With this in mind, we proceed as in [16] and utilize a quadrature-based approximation that has been analyzed in [25]. From [25, Lemma 3], it follows that

$$\begin{aligned} {\mathcal {A}}^{-1} = -\int _{0}^{\infty } \left( e^{t {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }} {\widetilde{E}}^{-1} \right) \otimes \left( e^{t {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }} {\widetilde{E}}^{-1} \right) \otimes \left( e^{t {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }} {\widetilde{E}}^{-1} \right) \,\mathrm {d}t. \end{aligned}$$

As shown in [25, Theorem 9], the previous integral can be well approximated by a tensor sum of the form

$$\begin{aligned} {\mathcal {A}}^{-1} \approx - \sum _{j=-r}^{r} \frac{2 w_{j}}{\lambda } \left( e^{\frac{t_{j}}{\lambda } {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }} {\widetilde{E}}^{-1} \right) \otimes \left( e^{\frac{t_{j}}{\lambda } {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }} {\widetilde{E}}^{-1} \right) \otimes \left( e^{\frac{t_{j}}{\lambda } {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }} {\widetilde{E}}^{-1} \right) \end{aligned}$$
(69)

where \(t_{j}\) and \(w_{j}\) are suitable quadrature points and weights and \(\lambda \) denotes a constant determined by the spectrum of the matrix pencil \(({\widetilde{E}},{\widetilde{A}})\). Combining the representation in (67), (68) and (69), we obtain the following approximation formula for the feedback gain

$$\begin{aligned} {\widetilde{K}}&=-\sum _{j=-r}^{r} \frac{2w_{j}}{\lambda } \left( (e^{\frac{t_{j}}{\lambda } {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }})^{T}\right) \otimes \left( (e^{\frac{t_{j}}{\lambda } {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }})^{T} \right) \otimes \left( {\widetilde{B}}^{T} {\widetilde{E}}^{-T} (e^{\frac{t_{j}}{\lambda } {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }})^{T} \right) {\mathcal {F}} \\&= \sum _{j=-r}^{r} \frac{4w_{j}}{\lambda } \left( (e^{\frac{t_{j}}{\lambda } {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }})^{T}\right) \otimes \left( (e^{\frac{t_{j}}{\lambda } {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }})^{T} \right) \otimes \left( {\widetilde{B}}^{T} {\widetilde{E}}^{-T} (e^{\frac{t_{j}}{\lambda } {\widetilde{E}}^{-1} {\widetilde{A}}_{\pi }})^{T} \right) \\&\quad \times \left( {\widetilde{H}}^{T}\otimes {\widetilde{E}}^{T} + {\widetilde{E}}^{T} \otimes {\widetilde{H}}^{T}+ (I\otimes {\mathcal {P}}^{T})({\widetilde{H}}^{T}\otimes {\widetilde{E}}^{T}) \right) \pi , \end{aligned}$$

with \(r=30\) in the numerical examples. By use of algebraic manipulations such as reshaping and transposition of matrices, the computation of the permutation matrix \({\mathcal {P}}\) as well as computation of the dense matricization \({\widetilde{H}}\) can be avoided. As a consequence, we obtain an approximation of \({\widetilde{K}}\in \mathbb R^{6 (n_{v}-n_{p})^{2}}\) whose storage requires less than 4 GB of data. Let us point out that the above considerations do not fully break the curse of dimensionality but nevertheless allow us to compute a third order feedback law even for a spatially discretized PDE. For the simulation of the time-varying systems, we make use of the MATLAB function ode23 with the standard relative error tolerance \(10^{-3}\). In each time step, the control laws \(u_2({\tilde{y}})\) and \(u_{3}({\tilde{y}})\) are obtained via

$$\begin{aligned} u_2({\tilde{y}})&= -\frac{1}{\alpha } {\widetilde{B}}^T\Pi {\widetilde{E}}{\tilde{y}}, \\ u_{3}({\tilde{y}})&= -\frac{1}{\alpha } {\widetilde{B}}^T\Pi {\widetilde{E}}{\tilde{y}} -\frac{1}{\alpha } \left( I_{6} \otimes {\tilde{y}}^{T} \otimes {\tilde{y}}^{T}\right) {\widetilde{K}}, \end{aligned}$$

where \(I_{6}\) denotes the identity matrix for the control space \(\mathbb R^{6}\).
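For small problem dimensions, the Riccati-based part of this construction can be reproduced with standard tools. The following Python sketch solves the generalized Riccati equation (the analogue of the MATLAB call to care) and evaluates \(u_2\) and \(u_3\); the matrices are random stand-ins and the third-order gain tensor is a dummy, so the sketch only illustrates the data flow, not the actual computation reported here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Small dense stand-ins for E~, A~, B~, Theta_r; the actual problem is large and
# was solved with MATLAB's care, so this is only an illustrative analogue.
rng = np.random.default_rng(0)
n, m, alpha = 40, 6, 1.0
E_t = np.eye(n) + 0.01 * rng.standard_normal((n, n))
A_t = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B_t = rng.standard_normal((n, m))
Theta_r = np.eye(n)

# generalized CARE: A~^T Pi E~ + E~^T Pi A~ - (1/alpha) E~^T Pi B~ B~^T Pi E~ + Theta_r^T Theta_r = 0
Pi = solve_continuous_are(A_t, B_t, Theta_r.T @ Theta_r, alpha * np.eye(m), e=E_t)

def u2(y):
    return -(1.0 / alpha) * B_t.T @ Pi @ E_t @ y

# third-order correction; K3 is a dummy stand-in for K~ reshaped
# (under an assumed vectorization order) to shape (6, n, n)
K3 = 1e-3 * rng.standard_normal((m, n, n))
K3 = 0.5 * (K3 + K3.transpose(0, 2, 1))

def u3(y):
    return u2(y) - (1.0 / alpha) * np.einsum('ijk,j,k->i', K3, y, y)
```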

6.4 Results

Below, we present a numerical comparison for two different values of \(\alpha \). In Fig. 3, the control laws corresponding to (66) with \(\alpha =1\) are shown. We observe that both feedback laws \(u_{2}\) and \(u_{3}\) exhibit a similar behavior and create vortices which induce the desired control. Indeed, the control velocities in \(x_{1}\)-direction are of opposite sign (with the centered velocity field being negligible) while the control velocities in \(x_{2}\)-direction all have the same sign.

Fig. 3 Control laws in \(x_{2}\) (left) and \(x_{1}\)-direction (right) for \(\alpha =1\)

For \(\alpha =10^{-4}\), Fig. 4 shows more visible differences between the control laws.

Fig. 4 Control laws in \(x_{2}\) (left) and \(x_{1}\)-direction (right) for \(\alpha =10^{-4}\)

It would certainly be of interest to investigate the numerical convergence behavior as the order of the control laws increases. At the moment, however, this is out of reach; it could be pursued, based on model reduction techniques, in an independent numerical endeavor. In Fig. 4, we observe that the amplitudes of the \(u_{3}\) controls decay more rapidly than those of the \(u_{2}\) controls. This is consistent with Fig. 5, where we compare the dynamical behavior of \(\Vert u_{2}\Vert _{2}^{2}\) and \(\Vert u_{3}\Vert _{2}^{2}\). Let us emphasize that for \(\alpha =10^{-4}\), for all t, the norm of the control law \(u_{3}(t)\) is smaller than that of \(u_{2}(t)\). For the values of the cost functionals, we obtain

$$\begin{aligned} J({\tilde{y}}_{u_2},u_2)&=0.9546, ~~ J({\tilde{y}}_{u_3},u_3)=0.8432, \quad \text {for } \alpha =1, \\ J({\tilde{y}}_{u_2},u_2)&=0.0128, ~~ J({\tilde{y}}_{u_3},u_3)=0.0125, \quad \text {for } \alpha =10^{-4}, \end{aligned}$$

which indicates that higher order feedback laws can be of interest for feedback stabilization.
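The reported values can be obtained in post-processing by numerical quadrature of the two integrands along the simulated trajectories. A minimal Python sketch (not the routine used to produce the numbers above) reads:

```python
import numpy as np
from scipy.integrate import trapezoid

def cost_functional(t, y_tilde, u, Theta_r, alpha):
    """Approximate J by the trapezoidal rule from sampled trajectories; y_tilde holds
    one state sample per row, u one control sample per row. Post-processing sketch only."""
    state = trapezoid([np.linalg.norm(Theta_r @ y) ** 2 for y in y_tilde], t)
    ctrl = trapezoid([np.linalg.norm(ui) ** 2 for ui in u], t)
    return 0.5 * state + 0.5 * alpha * ctrl
```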

Fig. 5 Dynamical behavior of the control norms for \(\alpha =1\) and \(\alpha =10^{-4}\)

7 Outlook

In the present paper we demonstrated that the approach that we carried out for obtaining Taylor approximations to the value function of optimal control problems related to the Fokker–Planck equation is also applicable to optimal control of the Navier–Stokes equations in dimension two. The question arises to what extent analogous results can be obtained for dimension three and for boundary control problems. In dimension three the situation will be significantly different from that of the current paper. It will not be possible to work with weak variational solutions. Rather, one has to resort to strong variational solutions, and thus one can expect at best that the value function is smooth on V rather than on Y. This leads to difficulties for the operator representations of the derivatives of the value function. Alternatively, one can start by analyzing (47) as equations for abstract multilinear forms \(D^k{\mathcal {V}}(0)\), which are not necessarily obtained as derivatives of \({\mathcal {V}}\). This is an approach which we plan to follow.