1 Introduction

We consider an optimum design problem whose aim is to determine the best location of hollows γ in a given bounded subdomain ω surrounded by the exterior subdomain Ω in \(\mathbb {R}^{2}\), with smooth boundaries ∂ω and Γ = ∂Ω∖∂ω. In the interior subdomain ω the physical phenomena are described by a linear PDE, while in the exterior domain the processes are governed by a nonlinear PDE subject to some external function. This problem is taken from Szulc and Zochowski (2015), where more details are presented. The design problem can be considered as a two-level optimization problem. At the first level, an optimal control within a set of admissible controls is determined for a given location of the source. At the second level, an optimal location of the source, in terms of its characteristic function, is selected in such a way that the resulting value of the cost functional is the best possible within the set of admissible locations. The problem considered in this paper is for an elliptic equation. The problem at the second level is nonconvex, which leads to well-known difficulties with the solution procedure. In particular, such difficulties and possible relaxation procedures are discussed e.g. in Kohn and Strang (1986) and Murat and Tartar (1985), mainly from the point of view of existence of solutions in optimum design problems. The standard techniques in classical optimal control theory are based on the lower semicontinuity of some physical quantity (functional) with respect to the control and on the compactness of the set of admissible controls. In the optimum design problem the location of the source is optimized. Thus, the lower semicontinuity of the shape functional is required with respect to some family of sets. So, except for very particular cases, there is no optimal location in an optimum design problem (see Delfour and Zolesio 2001).
That is why many authors confine themselves to numerical analysis, as is done in Sokolowski and Zochowski (1999), applying the topological derivative, and in Szulc and Zochowski (2015). In the last decade some optimum design problems were investigated for time-dependent state equations, including the wave equation or the heat equation (see e.g. Hebrard and Henrot 2003; Münch 2008; Periago 2009; Nowakowski and Sokolowski 2012). Our aim is also to apply a particular variational method, widely used in classical optimal control theory, namely the dynamic programming technique. In this way, we need neither relaxation nor homogenization of the problem under investigation. We also develop a new computational technique, based on the dual dynamic method, to characterize the best approximate location of the source explicitly, which is useful for possible applications. In the paper we assume that the number of hollow voids γ is finite but not fixed and nonzero, i.e. γ = γ1 ∪ ... ∪ γn, n ∈ ℕ, γ open, \( \gamma \varsubsetneq \omega \), and such that 𝜖2 ≥ vol γ ≥ 𝜖1 > 0, with given 𝜖1, 𝜖2. Moreover we assume that all boundaries of γ are smooth. Put D = Ω ∪ ω.

Thus, consider the optimization of the shape and a location of the source γ in ωD in the following Problem (P) :

$$ \text{minimize }J(\gamma )=\frac{1}{2}\int\nolimits_{\Omega }(u(x)-z_{d}(x))^{2}dx $$
(1)

subject to

$$\begin{array}{@{}rcl@{}} &&-{\Delta} u(x)=F(x,u(x)),\ \ \text{ \ }x\in D\backslash \bar{\gamma }, \end{array} $$
(2)
$$\begin{array}{@{}rcl@{}} &&u(x)= 0,\text{ \ }x\in {\Gamma} , \end{array} $$
(3)
$$\begin{array}{@{}rcl@{}} &&u(x)=\varphi (x),\text{}\partial_{n}u(x)=\partial_{n}\varphi (x),\text{ } x\in \partial \omega , \end{array} $$
(4)
$$\begin{array}{@{}rcl@{}} &&\partial_{n}u(x)= 0\text{ \ on \ }\partial \gamma , \end{array} $$
(5)

where \(\omega \varsubsetneq D \) is an open set such that 𝜖2 ≥ vol ω ≥ 𝜖1 > 0. The numbers δ, 𝜖1, 𝜖2 are given; \(z_{d}:{\Omega } \mapsto \mathbb {R}\) is a given target function, \(\varphi :\omega \mapsto \mathbb {R}\) is a fixed function, and \(u:D \backslash \gamma \mapsto \mathbb {R}\) is the state,

$$F(x,u(x))=\left\{ \begin{array}{c} -u^{3}(x)+f(x),\text{ \ }x\in {\Omega} ,\\ \text{ \ }0,\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }x\in \omega \backslash \gamma , \end{array} \right. $$

where \(f:{\Omega } \mapsto \mathbb {R}\). We assume that the functions x ↦ zd(x), x ↦ f(x) belong to L2(Ω) and x ↦ φ(x) belongs to H1(ω). Let us observe that by standard elliptic theory, for each open set γ ⊂ ω with smooth boundary there exists a unique solution u ∈ H2(Ω) of (2)–(5). In the problem (P) we are interested not only in the shape of γ but also in the location of γ in ω.
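
To make the roles of (1)–(5) concrete, the following sketch discretizes a strongly simplified one-dimensional analogue of the state equation by finite differences and evaluates the cost functional. It is only an illustration under stated assumptions: the hollow γ and the interface conditions (4)–(5) are omitted, the domain is the unit interval with the homogeneous Dirichlet condition (3), and f, z_d are hypothetical test data, not taken from the paper.

```python
import numpy as np

def solve_state(f, n=101, iters=200, tol=1e-12):
    """Solve the 1-D analogue -u'' + u^3 = f, u(0) = u(1) = 0, by a
    fixed-point iteration: at each step the cubic term is linearized as
    u_old^2 * u_new and a tridiagonal linear system is solved."""
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    for _ in range(iters):
        main = 2.0 / h**2 + u[1:-1]**2          # diagonal of -u'' + u_old^2 u
        off = (-1.0 / h**2) * np.ones(n - 3)    # off-diagonals of -u''
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        u_new = np.zeros(n)
        u_new[1:-1] = np.linalg.solve(A, f[1:-1])
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return u

def cost(u, z_d):
    """Discrete analogue of J(gamma) = (1/2) * integral (u - z_d)^2, eq. (1)."""
    h = 1.0 / (len(u) - 1)
    return 0.5 * h * float(np.sum((u - z_d)**2))

x = np.linspace(0.0, 1.0, 101)
f = np.sin(np.pi * x)                # hypothetical source term
z_d = np.sin(np.pi * x) / np.pi**2   # hypothetical target, near the state
u = solve_state(f)
J = cost(u, z_d)
```

Since z_d here is the exact solution of the linearized problem, the computed J is close to zero; the small gap comes from the cubic term and the discretization error.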

Our main aim is to be able to characterize the optimal (ε-optimal) pair \((\bar {u}\), \(\bar {\gamma })\) and the ε-minimal value of J. Note that \(\bar {\gamma }\) is an optimal (or ε-optimal) open set in our notation. We call a pair (u(⋅), γ) "admissible" if it satisfies (2)–(5) and the conditions imposed on γ. Then the corresponding trajectory u(⋅) is said to be admissible and γ is an admissible set. The set of all admissible pairs is denoted by Ad.

Problems with an unknown subset γ appear in many papers. In Münch (2008) the optimum design for the two-dimensional wave equation is studied, in Münch (2009) an optimal location of the support of the control for the one-dimensional wave equation is determined, and in Hebrard and Henrot (2005) and Hebrard and Henrot (2003) the optimal geometry for controls in a stabilization problem is considered. In all the mentioned papers different approaches to the design problems have been investigated, and some numerical results are presented. The existence of an optimum design is essential if we do not have at hand any sufficient optimality conditions. From the beginning of the last century, under the strong influence of Hilbert, the existence issue became one of the fundamental questions in many branches of mathematics, especially in the calculus of variations as well as in its branch, optimal control theory. Of course, following the existence proof, the next step is the derivation of necessary optimality conditions and the evaluation of the minimum argument. However, it should be pointed out that for many variational problems the existence of a solution accompanied by some necessary optimality conditions is not sufficient to find the argument of the minimum in practice. On the other hand, having at hand a stronger result, i.e. sufficient optimality conditions for a minimum in a specific problem, replaces the requirement for existence.

In the present paper the framework of dynamic programming together with sufficient optimality conditions (the so-called verification theorem for a relative minimum) is proposed for a solution of the optimum design problem. Different approaches are given in Sokolowski and Zochowski (1999), Bednarczuk et al. (2000), Nowakowski and Sokolowski (2012), Fulmański et al. (2014), and Fulmański and Nowakowski (2014). In Sokolowski and Zochowski (1999), Bednarczuk et al. (2000), and Nowakowski and Sokolowski (2012) the shape problem is formulated in terms of characteristic functions which define the support of the control, while in Fulmański et al. (2014) and Fulmański and Nowakowski (2014) the framework of dynamic programming together with sufficient optimality conditions is also proposed for a solution of the optimum design problem. In Fulmański et al. (2014) the problem of a dividing tube is investigated by reduction of the shape optimization problem to a classical control problem. To do that the authors apply the level set approach to build deformations; following Zolésio (Sokolowski and Zolesio 1992) they introduce a field depending on the control, which defines the type of deformation, and formulate the control problem governed by an ordinary differential equation (defined by the field). Next the classical dynamic programming is developed for such a problem. In Fulmański and Nowakowski (2014) a somewhat similar approach via the level set method is applied to the optimization problem of deformable structures described by the Navier-Stokes equations, but then the dual dynamic programming is developed to derive the verification theorem (again for dynamics governed by an ordinary differential equation). In both papers completely different numerical algorithms are constructed, based on different approximate verification theorems. However, we should stress that the approaches based on the level set method described in Fulmański et al. (2014) and Fulmański and Nowakowski (2014) use a smaller set of admissible deformations (described by a field depending on the control), i.e. in our case a significantly smaller set of the sources γ. In the presentation (Kaźmierczak 2008) ideas similar to those of Fulmański et al. (2014) and Fulmański and Nowakowski (2014) are mentioned, but nothing is precisely formulated and proved. Our goal is not the standard analysis of the problem as e.g. in Delfour and Zolesio (2001) or Fulmański et al. (2014) and Fulmański and Nowakowski (2014), but the approximate solution by application of the sufficient ε-optimality conditions given by the dual dynamic programming directly to our problem (P). This means we admit the full set of admissible sources γ. This approach seems to be new and the result obtained is original, to the best of our knowledge.

We provide a dual dynamic programming approach to control problems (2)–(5). This approach allows us to obtain sufficient conditions for optimality in the problem considered. We believe that the conditions for problems of type (P) in terms of dual dynamic programming that we formulate here have not been provided earlier. There are two main difficulties that must be overcome in problems such as (P). The first one consists in the following observation: since the problem is considered in the fixed set D with boundary condition (5), we have no possibility to perform perturbations of the problem comparable to the one-dimensional case given in Bellman (1957) and Fleming and Rishel (1975). The second one is that we deal with an elliptic equation for the state and controls (2). The technique we apply is similar to the methods from Galewska and Nowakowski (2006) and Nowakowski (1992). The main idea of those methods is to carry over all objects used in dynamic programming to a dual space, the space of multipliers (similar to those which appear in the Pontryagin maximum principle). Next, instead of a classical value function (which for problem (P) makes no sense), we define an auxiliary function V (x, p) satisfying the second order partial differential equation of dual dynamic programming (compare Galewska and Nowakowski 2006). Investigation of the properties of this function leads to an appropriate verification theorem. We also introduce the concept of an optimal dual feedback control and provide new sufficient ε-optimality conditions determined within our framework. Using the approximate differential equation of dual dynamic programming (9) (see below) and an ε-optimal dual feedback control (Section 4), we are able to solve problem (P) completely from the approximate point of view, i.e. we find an approximate location of γε as well as an ε-optimal value of J.

2 Dual dynamic programming approach

In Galewska and Nowakowski (2006) a new approach to dynamic programming was developed using some ideas of Nowakowski (1992). In Nowakowski (1992), instead of considering notions of dynamic programming such as the value function S(t, s) or the Hamilton-Jacobi equation in the space (t, s), a new space, the dual space, is proposed and new notions of dual dynamic programming are defined: an auxiliary function, a dual optimal value and a dual Hamilton-Jacobi equation which the auxiliary function should satisfy. The dual space in Nowakowski (1992) is, in fact, defined by the conjugate (dual) functions (variables) which appear in the Pontryagin maximum principle. It turns out that this approach works also in control problems governed by elliptic equations (see Galewska and Nowakowski 2006). This means that in the dual approach to dynamic programming the perturbation of the optimal value is not needed; instead we deal with an auxiliary function. However, there is a price to be paid for that, as we have to impose on the auxiliary function an additional condition, called the transversality condition. Our aim is to apply the ideas from Galewska and Nowakowski (2006) to the optimum design problem (P). To this end we need to define dual notions in some dual space. Thus let \(P\subset \mathbb {R}^{2 + 2}\) be an open (dual) set of the variables (x, p) = (x, y0, y), x ∈ D, y0 < 0, |y| ≥ 0. We shall also use the sets

$$P_{{\Gamma} }=\left\{ (x,p):\text{ }x\in {\Gamma} \text{, }y^{0}<0,\text{ } \left\vert y\right\vert \geq 0\right\} , $$
$$P_{\omega }=\left\{ (x,p):\text{ }x\in \partial \omega \text{, }y^{0}<0, \text{ }\left\vert y\right\vert \geq 0\right\} , $$
$$P_{\gamma }=\left\{ (x,p):\text{ }x\in \partial \gamma \text{, }y^{0}<0, \text{ }\left\vert y\right\vert \geq 0\right\} . $$

Denote by W2:3(P) the specific Sobolev space of functions of the two variables (x, p) having weak or generalized derivatives (in the sense of distributions) up to the second order with respect to x and up to the third order with respect to the variable p. Our notation for the function space is used for functions depending on the primal variable x and the dual variable p; the primal and dual variables are independent, and the functions in the space W2:3(P) enjoy different properties with respect to x and p. Let V (x, p) ∈ W2:3(P) be an (auxiliary) function defined on P and satisfying the following condition:

$$ V(x,p)=y^{0}V_{y^{0}}(x,p)+yV_{y}(x,p)=pV_{p}(x,p), $$
(6)

for (x, p) ∈ P. Here, \(V_{y^{0}},V_{y}\), and V p denote the partial derivatives and the gradient with respect to the dual variables y0, y, and p = (y0, y), respectively.
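
Condition (6) is Euler's identity: it states that V (x, ⋅) is positively homogeneous of degree one in the dual variable p = (y0, y). As a quick numerical sanity check, consider a hypothetical V that is linear in p (linearity in p is the simplest way to satisfy (6); the particular functions of x below are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

def V(x, y0, y):
    # hypothetical auxiliary function, linear in p = (y0, y)
    return y0 * np.cos(x) + y * np.sin(x)

def transversality_gap(x, y0, y, eps=1e-6):
    """Return |V - (y0 * V_y0 + y * V_y)|, the defect in condition (6),
    with the partial derivatives approximated by central differences."""
    V_y0 = (V(x, y0 + eps, y) - V(x, y0 - eps, y)) / (2 * eps)
    V_y = (V(x, y0, y + eps) - V(x, y0, y - eps)) / (2 * eps)
    return abs(V(x, y0, y) - (y0 * V_y0 + y * V_y))

err = transversality_gap(x=0.7, y0=-2.0, y=1.3)
```

For this linear V the gap is zero up to floating-point roundoff.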

Now, we denote by p(x), x ∈ D∖γ, the dual trajectory, while u(x), x ∈ D∖γ, stands for the primal trajectory. Let us put

$$ \mathbf{u}(x,p)=-V_{y}(x,p)\text{, for }(x,p)\in P\text{.} $$
(7)

Using the function u it is possible to come back from the dual trajectories p(x), x ∈ D∖γ, lying in P to the primal functions u(x), x ∈ D∖γ. How to find V y is precisely described below. Further, we confine ourselves only to those admissible trajectories u(⋅) for which there exist functions p(x) = (y0, y(x)), (x, p(x)) ∈ P, y(⋅) ∈ L2(D), y(x) = 0, x ∈ γ, such that u(x) = u(x, p(x)) for x ∈ D∖γ. Thus denote

$$\begin{array}{@{}rcl@{}} Ad_{\mathbf{u}} &=&\left\{ (u(\cdot ),\gamma )\in Ad:\text{ there exist }\right.\\ &&p(x)=(y^{0},y(x)),\text{ }y(\cdot )\in L^{2}(D)\text{, } \text{ } \\ &&\left. \begin{array}{l} y(x)= 0\text{, }x\in \gamma ,\text{ }(x,p(x))\in P,\text{ }x\in D\text{ and }\\ \psi :\mathbb{R}^{2}\mapsto \mathbb{R}, \text{ }(x,y^{0},\psi (x))\in P_{b}\text{, }\\ \mathbf{u}(x,y^{0},\psi (x))= 0 \text{, }p(x)=(y^{0},\psi (x)), \end{array} \right. \\ &&x\in {\Gamma} \cup \partial \gamma ,\text{ }\mathbf{u} (x,y^{0},\psi (x))=\varphi (x)\text{, }\\ &&\partial_{n}\mathbf{u}(x,y^{0},\psi(x)) \text{ }=\partial_{n}\varphi (x),\text{ }x\in \partial \gamma \text{ and }\\ &&\left. u(x)=\mathbf{u}(x,p(x)),\text{ }x\in D\backslash \gamma \right\}. \end{array} $$

Actually, it means that we are going to study problem (P) possibly in some smaller set Adu, which is determined by the function (7).

Next, we define a dual optimal value \(S_{D}^{\mathbf {u}}\) for the problem (P) by the formula

$$ S_{D}^{\mathbf{u}}:=\inf_{\left( u,\gamma \right) \in Ad_{\mathbf{u} }}\left\{ -y^{0}\frac{1}{2}\int\nolimits_{\Omega }(u(x)-z_{d}(x))^{2}dx\right\}. $$
(8)
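
In a discrete setting (8) can be read as follows. A sketch with hypothetical grid data: since −y0 > 0, minimizing the dual value over a family of candidate states is equivalent to minimizing J up to the positive factor −y0.

```python
import numpy as np

def dual_value(candidates, z_d, h, y0):
    """Discrete analogue of (8): infimum of -y0 * (1/2) * integral (u - z_d)^2
    over a finite family of candidate states (rectangle-rule quadrature)."""
    assert y0 < 0
    return min(-y0 * 0.5 * h * float(np.sum((u - z_d) ** 2)) for u in candidates)

x = np.linspace(0.0, 1.0, 51)
z_d = np.sin(np.pi * x)                    # hypothetical target
candidates = [z_d, z_d + 0.1, 0.5 * z_d]   # hypothetical admissible states
S = dual_value(candidates, z_d, h=x[1] - x[0], y0=-2.0)
```

Here the candidate equal to z_d attains the infimum, so S is zero; in the paper the infimum runs over the (infinite) set Adu rather than a finite list.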

In order to prove the verification theorem we require that the function V (x, p) satisfies the second order partial differential equation in dynamic programming form:

$$ \begin{array}{l} {\Delta}_{x}V(x,p)=\\ \max \{yF(x,-V_{y}(x,p))+y^{0}\frac{1}{2} (-V_{y}(x,p)-z_{d}(x))^{2} : \\ \gamma \varsubsetneq \omega \text{, }\epsilon_{2}\geq vol\text{ }\gamma \geq \epsilon_{1}\}\text{, \ }(x,p)\in P \end{array} $$
(9)

with boundary conditions

$$ V_{y}(x,p)= 0\text{, }(x,p)\in P_{{\Gamma} }\text{,} $$
(10)
$$V_{y}(x,p)=\varphi (x)\text{, }\partial_{n}V_{y}(x,p)=\partial_{n}\varphi (x)\text{, }(x,p)\in P_{\omega }\text{,} $$
$$\partial_{n}V_{y}(x,p)= 0\text{, }(x,p)\in P_{\gamma }\text{.} $$

Let us note that in (9) the maximum is taken instead of the supremum since it is attained. We shall not discuss here the question of existence of a solution to (9) satisfying condition (6). We simply assume in the verification theorem (in the next section) that such a function exists.

2.1 A verification theorem

In this section we formulate and prove our main theorem, called the "verification theorem", which provides the sufficient optimality conditions for (P) in terms of a solution V (x, p) of the second order partial differential equation of dynamic programming (9). Let us fix \(\bar {y}^{0}<0\). Define the set

$$\begin{array}{@{}rcl@{}} \mathcal{P} &=&\left\{ p(x)=(\bar{y}^{0},y(x)),\text{ }x\in D;\text{ } (x,p(x))\in P,\right.\\ && y(\cdot )\in L^{2}(D),\text{ exists} (u(\cdot ),\gamma )\in Ad_{\mathbf{u}},\\ &&\left. u(x)=-V_{y}(x,p(x)) \text{, }x\in D,\text{ }y(x)= 0,\text{ }x\in \gamma \right\}. \end{array} $$

Theorem 1

Assume that there exists a W2:3(P) solution V (x, p) of (9) on P with boundary conditions (10) such that (6) holds, and let \(\bar {\mathbf {u}}(x,p)=-V_{y}(x,p)\), (x, p) ∈ P. Let \((\overline {u}(\cdot ),\bar {\gamma })\) \(\in Ad_{\bar {\mathbf {u}}}\), and let \(\overline {p}(x)=(\overline {y}^{0},\overline {y}(x))\), \(\bar {y}(\cdot )\in L^{2}(D)\), \(\overline {p}\in \mathcal {P}\), \(\bar {y}(x)= 0\), \(x\in \bar { \gamma }\), be a function such that \(\overline {u}(x)=-V_{y}(x,\overline {p}(x))\) for \(x\in D\backslash \bar {\gamma }\). Suppose that

$$ {\Delta}_{x}V(x,\overline{p}(x))=\bar{y}(x)F(x,-V_{y}(x,\overline{p}(x))) $$
(11)
$$ +\bar{y}^{0}\frac{1}{2}(-V_{y}(x,\overline{p}(x))-z_{d}(x))^{2}\text{, \ } x\in D\backslash \bar{\gamma}\text{.} $$

Then \((\overline {u}(\cdot ),\bar {\gamma })\) is an optimal pair relative to all \((u(\cdot ),\gamma )\in Ad_{\bar {\mathbf {u}}}\).

Proof

Our proof starts with the observation that from the transversality condition (6) we see that, for x ∈ D∖γ,

$$ {\Delta}_{x}V(x,p(x))=\overline{y}^{0}{\Delta}_{x}V_{y^{0}}(x,p(x))+y(x){\Delta}_{x}V_{y}(x,p(x)) $$
(12)

where (u(⋅), γ) \(\in Ad_{\bar {\mathbf {u}}}\) and \(p(x)=(\overline {y}^{0},y(x))\), y(⋅) ∈ L2(D), y(x) = 0, x ∈ γ, (x, p(x)) ∈ P, is such that u(x) = −V y(x, p(x)) for x ∈ D∖γ. Since u(x) = −V y(x, p(x)) for x ∈ D∖γ, (2) shows that for x ∈ D∖γ,

$$ {\Delta}_{x}V_{y}(x,p(x))=F(x,-V_{y}(x,p(x)))\text{.} $$
(13)

Now define a function \(D\backslash \gamma \ni x\mapsto W(x,p(x))\) by

$$\begin{array}{*{20}l} W(x,p(x))= \\ \overline{y}^{0}\left[ -{\Delta}_{x}V_{y^{0}}(x,p(x))+\frac{1}{2} (-V_{y}(x,p(x))-z_{d}(x))^{2}\right] \text{.} \end{array} $$
(14)

We conclude from (12)–(14) that

$$ \begin{array}{r} W(x,p(x))=-{\Delta}_{x}V(x,p(x))+y(x)F(x,-V_{y}(x,p(x))) \\ +\overline{y}^{0}\frac{1}{2}(-V_{y}(x,p(x))-z_{d}(x))^{2}\text{, }x\in D\backslash \gamma \text{. } \end{array} $$
(15)

Hence, (9) and (15) imply

$$ W(x,p(x))\leq 0\ \ \text{for \ }x\in D\backslash \gamma $$
(16)

and finally, after integrating (16) and applying (14), we have

$$\begin{array}{@{}rcl@{}} &&-\overline{y}^{0}\int\nolimits_{{\Omega} }div\nabla V_{y^{0}}(x,p(x))dx \\ &\leq &-\overline{y}^{0}\frac{1}{2}\int\nolimits_{\Omega }(-V_{y}(x,p(x))-z_{d}(x))^{2}dx. \end{array} $$
(17)

Thus from (17) and the Green formula it follows that

$$\begin{array}{@{}rcl@{}} &&-\overline{y}^{0}\int\nolimits_{\partial {\Omega} }\nabla V_{y^{0}}(s,\psi (s))\nu (s)ds \\ &\leq &-\overline{y}^{0}\frac{1}{2}\int\nolimits_{\Omega }(-V_{y}(x,p(x))-z_{d}(x))^{2}dx\text{,} \end{array} $$
(18)

where ν(⋅) is the exterior unit normal vector to Ω. In the same manner applying (11) and (14) we have

$$\begin{array}{@{}rcl@{}} &&-\overline{y}^{0}\int\nolimits_{\partial {\Omega} }\nabla V_{y^{0}}(s,\psi (s))\nu (s)ds \\ &=&-\overline{y}^{0}\frac{1}{2}\int\nolimits_{{\Omega} }(\bar{u} (x)-z_{d}(x))^{2}dx\text{.} \end{array} $$
(19)

Combining (18) with (19) gives

$$\begin{array}{@{}rcl@{}} &&-\overline{y}^{0}\frac{1}{2}\int\nolimits_{{\Omega} }(\bar{u} (x)-z_{d}(x))^{2}dx \\ &\leq &-\overline{y}^{0}\frac{1}{2}\int\nolimits_{\Omega }(u(x)-z_{d}(x))^{2}dx \end{array} $$

which completes the proof. □

2.2 Optimal dual feedback control

This section is devoted to the notion of an optimal dual feedback control. We present appropriate definitions and the sufficient conditions for optimality which follow from the verification theorem.

For each fixed x ∈ D define in P:

  • for \(\left \vert \frac {y}{y^{0}}\right \vert \geq 1\), a function χ(⋅,⋅) which to those (x, p) ∈ P assigns an admissible set γp ⊊ ω, x ∈ γp, such that χ(x, p) = 0,

  • for \(\left \vert \frac {y}{y^{0}}\right \vert <1\), a function χ(⋅,⋅) which to those (x, p) ∈ P assigns an admissible set γp ⊊ ω, x ∉ γp, such that χ(x, p) = 1.

Thus, for a given p only one γp is assigned. This means that χ(x, p) = 0 for \(\left \vert \frac {y}{y^{0}}\right \vert \geq 1\) and χ(x, p) = 1 for \(\left \vert \frac {y}{y^{0}}\right \vert <1\).
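
The case distinction above is a simple threshold rule in the dual variables. A minimal sketch (the assignment of the set γp itself is problem-dependent and is not modelled here):

```python
def chi(y0, y):
    """Dual feedback rule: chi = 0 when |y / y0| >= 1 (the point is assigned
    to a hollow gamma_p), chi = 1 when |y / y0| < 1."""
    assert y0 < 0, "the dual variable y0 is negative throughout the paper"
    return 0 if abs(y / y0) >= 1.0 else 1
```

For instance, chi(-1.0, 2.0) returns 0 while chi(-2.0, 1.0) returns 1.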

Let the function χ(⋅,⋅) defined above be given. A function χ is called a dual feedback control if there is a solution u(x, p), (x, p) ∈ P, of the partial differential equation

$$-{\Delta}_{x}\mathbf{u}=F(x,\mathbf{u}(x,p))\text{ in }D\backslash \gamma_{p} $$

defining a nonempty set Adu.

A dual feedback control \(\bar {\mathbf {\chi }} (x,p)\), (x, p) ∈ P, is called an optimal dual feedback control if the following conditions are satisfied: there exist a function \( \overline {\mathbf {u}}(x,p)\), (x, p) ∈ P, as in Definition 1, and a function \(\overline {p}(x)=(\overline {y}^{0},\overline {y}(x))\), \(\bar {y}(\cdot )\in L^{2}(D)\), \(\bar {y}(x)= 0\), \(x\in \bar {\gamma }\), \(\left (x,\bar {p}(x)\right ) \in P\), such that there exists a pair \((\bar {u} (\cdot ),\bar {\gamma })\in Ad_{\bar {\mathbf {u}}}\) satisfying

$$\bar{u}(x)=\overline{\mathbf{u}}(x,\bar{p}(x))\text{, }\bar{\mathbf{\chi}}(x, \bar{p}(x))= 0,\text{ }x\in \bar{\gamma}. $$

In addition, the optimal value \(S_{D}^{\overline {\mathbf {u}}}\) (see (8)) is defined by the pair \((\bar {u}(\cdot ),\bar { \gamma })\):

$$S_{D}^{\overline{\mathbf{u}}}=-\overline{y}^{0}\frac{1}{2} \int\nolimits_{{\Omega} }(\bar{u}(x)-z_{d}(x))^{2}dx $$

and there is V (x, p), (x, p) ∈ P belonging to W2:3(P), satisfying (6), such that \({\Delta }_{x}V_{y^{0}}(\cdot ,\overline {p}(\cdot ))\in L^{2}({\Omega } )\) and

$$\overline{y}^{0}{\int}_{{\Omega} }{\Delta}_{x}V_{y^{0}}(s,\overline{p} (s))ds=-S_{D}^{\overline{\mathbf{u}}}, $$

\(V_{y}(x,p)=-\overline {\mathbf {u}}(x,p)\).

Now, we formulate and prove the sufficient optimality conditions for the existence of an optimal dual feedback control, again in terms of the auxiliary function V (x, p).

Theorem 2

Let \(\bar {\mathbf {\chi }}(x,p)\) be a dual feedback control in P and \(\overline {\mathbf {u}}(x,p)\),(x, p) ∈ P, be defined according to Definition 2. Suppose that there exists a W2:3(P) solution V (x, p) of (9) on P such that (6) holds and that

$$ V_{y}(x,p)=-\overline{\mathbf{u}}(x,p)\text{ \ for \ }(x,p)\in P\text{. } $$
(20)

Let \(\overline {p}(x)=(\overline {y}^{0},\overline {y}(x))\), \(\bar {y}(\cdot )\in L^{2}(D)\), \(\bar {y}(x)= 0\), \(x\in \bar {\gamma }\), \(\left (x,\bar {p} (x)\right ) \in P\), be a function such that there is a pair \((\overline {u} (\cdot ),\bar {\gamma })\in Ad_{\bar {\mathbf {u}}}\) and \(\overline {u}(x)= \overline {\mathbf {u}}(x,\overline {p}(x))\), \(\bar {\mathbf {\chi }}(x,\bar {p} (x))= 0,\) \(x\in \bar {\gamma }\). Furthermore, assume that:

$$\begin{array}{@{}rcl@{}} &&\overline{y}^{0}\int\nolimits_{{\Omega} }div\nabla V_{y^{0}}(s,\overline{p} (s))ds \\ &=&\overline{y}^{0}\frac{1}{2}\int\nolimits_{{\Omega} }(\overline{\mathbf{u}} (x,\bar{p}(x))-z_{d}(x))^{2}dx\text{.} \end{array} $$
(21)

Then \(\bar {\mathbf {\chi }}(x,p)\), (x, p) ∈ P, is an optimal dual feedback control.

Proof

Take any function \(p(x)=(\overline {y}^{0},y(x))\), y(⋅) ∈ L2(D), y(x) = 0, x ∈ γ, (x, p(x)) ∈ P, such that there is a pair \((u(\cdot ),\gamma )\in Ad_{\bar {\mathbf {u}}}\) and \(u(x)=\overline { \mathbf {u}}(x,p(x))\), \(\bar {\mathbf {\chi }}(x,p(x))= 0,\) x ∈ γ. By (20), it follows that u(x) = −V y(x, p(x)) for x ∈ D∖γ. In the same way as in the proof of Theorem 1, Eq. 21 gives

$$ \begin{array}{c} -\overline{y}^{0}\frac{1}{2}{\int}_{{\Omega} }(\overline{\mathbf{u}}(x,\bar{p} (x))-z_{d}(x))^{2}dx \\ \leq -\overline{y}^{0}\frac{1}{2}{\int}_{{\Omega} }(\overline{\mathbf{u}} (x,p(x))-z_{d}(x))^{2}dx\text{.} \end{array} $$
(22)

We conclude from (22) that

$$ S_{D}^{\overline{\mathbf{u}}}=-\overline{y}^{0}\frac{1}{2} \int\nolimits_{{\Omega} }(\overline{\mathbf{u}}(x,\bar{p}(x))-z_{d}(x))^{2}dx $$
(23)

and this, by Theorem 1 and Definition 2, is sufficient to show that \(\bar {\mathbf {\chi }}(x,p)\) is an optimal dual feedback control. □

3 Approximate optimality for the problem (P)

In the last two subsections we discussed optimality conditions in terms of dual dynamic programming. In practice it is difficult (or even impossible) to solve the equations stated there in exact form. In fact, we solve such a system using different approximate (numerical) methods. Therefore what we can obtain is, at best, approximate optimality. This is why in this section we present the dual dynamic approach to sufficient conditions for approximate optimality (ε-optimality). The dual ε-optimality conditions are the basis for the construction of computational, approximate optimality. Let us recall that for given \(\overline {\mathbf {u}}\) and \( \overline {y}^{0}\) (see (8)) the optimal value is defined as

$$S_{D}^{\overline{\mathbf{u}},\overline{y}^{0}}:=\inf_{\left( u,\gamma \right) \in Ad_{\bar{\mathbf{u}}}}\left\{ -\bar{y}^{0}\frac{1}{2}\int\nolimits_{ {\Omega} }(u(x)-z_{d}(x))^{2}dx\right\} . $$

A dual ε-optimal value for problem (P) is any value \( S_{\varepsilon D}^{\mathbf {u,}\bar {y}_{\varepsilon }^{0}}\) satisfying the inequality

$$S_{D}^{\overline{\mathbf{u}}\mathbf{,}\bar{y}^{0}}\leq S_{\varepsilon D}^{ \mathbf{u,}\bar{y}_{\varepsilon }^{0}}\leq S_{D}^{\overline{\mathbf{u}} \mathbf{,}\bar{y}^{0}}-\varepsilon \bar{y}_{\varepsilon }^{0} $$
$$ \text{ for any fixed }\bar{y}_{\varepsilon }^{0}<0\text{ and }\varepsilon >0. $$
(24)
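
The two-sided bound (24) is straightforward to check numerically. A sketch (note that −ε·ȳε0 > 0, so the admissible interval extends upward from the exact dual value S_D):

```python
def is_eps_optimal(S_eps, S_D, eps, y0_eps):
    """Check the defining inequality (24): S_D <= S_eps <= S_D - eps * y0_eps."""
    assert eps > 0 and y0_eps < 0
    return S_D <= S_eps <= S_D - eps * y0_eps
```

For example, with eps = 0.1 and y0_eps = -1.0 the admissible interval is [S_D, S_D + 0.1].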

Let us fix m > 0 and put

$$\mathbf{P=}\left\{ (x,y^{0},y);\text{ }y\in \digamma \subset \mathbb{R} ^{2},y^{0}<0,\text{ }y>0,\right. $$
$$ \left. x\in D\text{, }\digamma \text{-open} \right\}, $$
(25)
$$\mathbf{P}_{{\Gamma} }=\left\{ (x,p):\text{ }x\in {\Gamma} \text{, }y^{0}<0,\text{ }y>0,\text{ }y\in \digamma \right\} , $$
$$\mathbf{P}_{\omega }=\left\{ (x,p):\text{ }x\in \partial \omega \text{, } y^{0}<0,\text{ }y>0,\text{ }y\in \digamma \right\} , $$
$$\mathbf{P}_{\gamma }=\left\{ (x,p):\text{ }x\in \partial \gamma \text{, } y^{0}<0,\text{ }y>0,\text{ }y\in \digamma \right\} . $$

Since the ε-optimal value is defined through an inequality, the expressions used to derive Theorem 1 should also satisfy suitable inequalities. Thus we shall use the following inequalities for the auxiliary function \(\tilde {V}\): the dual Hamilton-Jacobi inequality

$$ \begin{array}{l} 0\leq -{\Delta}_{x}\tilde{V}(x,p)+\max \{yF(x,-\tilde{V}_{y}(x,p))\\ +y^{0}\frac{1}{2}(-\tilde{V}_{y}(x,p)-z_{d}(x))^{2} : \\ \gamma \varsubsetneq \omega \text{, }\epsilon_{2}\geq vol\text{ }\gamma \geq \epsilon_{1}\}\leq -\varepsilon \bar{y}_{\varepsilon }^{0}\text{, \ } (x,p)\in P \end{array} $$
(26)

with boundary conditions

$$ \tilde{V}_{y}(x,p)= 0\text{, }(x,p)\in P_{{\Gamma} }\text{,} $$
(27)
$$\tilde{V}_{y}(x,p)=\varphi (x)\text{, }\partial_{n}\tilde{V} _{y}(x,p)=\partial_{n}\varphi (x)\text{, }(x,p)\in P_{\omega }\text{,} $$
$$\partial_{n}\tilde{V}_{y}(x,p)= 0\text{, }(x,p)\in P_{\gamma }\text{.} $$

Since we want to apply our theory to numerical solutions of (2)–(5), instead of the equation we shall deal with the inequality:

$$\begin{array}{@{}rcl@{}} 0\leq {\Delta} u(x)+F(x,u(x))\leq -\frac{\varepsilon }{m}\bar{y}_{\varepsilon }^{0},\ \ \text{ \ }x\in D\backslash \gamma , \\ u(x)= 0,\text{ \ }x\in {\Gamma} , \\ u(x)=\varphi (x),\text{ }\partial_{n}u(x)=\partial_{n}\varphi (x),\text{ } x\in \partial \omega , \\ \partial_{n}u(x)= 0\text{ \ on \ }\partial \gamma . \end{array} $$
(28)
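
Numerically, (28) replaces the exact state equation by a band of admissible residuals. The sketch below checks the interior inequality on a one-dimensional grid with hypothetical data; F is taken as −u³ + f, as in the definition of F on Ω, and the boundary and interface conditions of (28) are not checked.

```python
import numpy as np

def residual_ok(u, f, h, eps, m, y0_eps, tol=1e-10):
    """Check 0 <= Delta u + F(x, u) <= -(eps/m) * y0_eps at interior nodes,
    with Delta u approximated by the standard second difference."""
    assert eps > 0 and m > 0 and y0_eps < 0
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h ** 2   # discrete Laplacian
    r = lap + (-u[1:-1] ** 3 + f[1:-1])               # Delta u + F(x, u)
    bound = -(eps / m) * y0_eps                        # positive residual bound
    return bool(np.all(r >= -tol) and np.all(r <= bound + tol))

u = np.zeros(11)                  # hypothetical approximate state
f = np.full(11, 0.05)             # hypothetical source; residual is 0.05
ok = residual_ok(u, f, h=0.1, eps=0.1, m=1.0, y0_eps=-1.0)
```

With eps = 0.1, m = 1 and y0_eps = -1 the residual band is [0, 0.1], so the constant residual 0.05 is admissible while 0.5 would not be.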

Thus, in this section, we denote by Adε the set of admissible sets and states, i.e. those satisfying (28).

3.1 ε-optimality

In this subsection we describe the concept of an ε-optimal pair and formulate and prove sufficient ε-optimality conditions for problem (1), i.e. an ε-version of the verification theorem. Assume that there exists \(\tilde {V}\) satisfying (6) and (26)–(27). Then we define, similarly as in Section 2,

$$ \mathbf{u}_{\varepsilon }(x,p)=-\tilde{V}_{y}(x,p)\text{, for }(x,p)\in P. $$
(29)

For \(\bar {y}_{\varepsilon }^{0}\) and uε we define, similarly to Adu in Section 2,

$$\begin{array}{@{}rcl@{}} Ad_{\mathbf{u}_{\varepsilon }} &=&\left\{ (u(\cdot ),\gamma )\in Ad_{\varepsilon }:\right. \\ &&\text{there exist }p(x)=(y^{0},y(x)), \\ &&y(\cdot )\in L^{2}(D)\text{, } \text{ } y(x)= 0\text{, }x\in \gamma , \\ &&(x,p(x))\in P,\text{ }x\in D\text{ and } \psi :\mathbb{R}^{2}\mapsto \mathbb{R}, \\ &&(x,y^{0},\psi (x))\in P_{b}\text{,} \\ &&\mathbf{u}_{\varepsilon}(x,y^{0},\psi (x))= 0, \\ &&p(x)=(y^{0},\psi (x)), x\in {\Gamma} \cup \partial \gamma , \\ &&\mathbf{u}_{\varepsilon }(x,y^{0},\psi (x))=\varphi (x)\text{, } \\ &&\partial_{n}\mathbf{u}_{\varepsilon }(x,y^{0},\psi (x))=\partial_{n}\varphi (x), \\ &&\left. x\in \partial \gamma \text{ and } u(x)=\mathbf{u}_{\varepsilon }(x,p(x)),\text{ }x\in D\backslash \gamma \right\}. \end{array} $$
(30)

Now we are ready to define the notions of an ε-optimal dual feedback control χε(x, p) and of an ε-optimal pair \((\bar {u}_{\varepsilon }(\cdot ),\bar {\gamma }_{\varepsilon })\).

A dual feedback control \(\bar {\mathbf {\chi }}_{\varepsilon }(x,p)\) is called an ε-optimal dual feedback control if there exists a function \( \bar {\mathbf {u}}_{\varepsilon }(x,p)\) in P, a solution of the partial differential inequality

$$ 0\leq {\Delta}_{x}\bar{\mathbf{u}}_{\varepsilon }\mathbf{+}F(x,\bar{\mathbf{u} }_{\varepsilon }(x,p))\leq -\frac{\varepsilon }{m}\bar{y}_{\varepsilon }^{0} \text{ in }D\backslash \gamma_{p} $$
(31)

defining a nonempty set \(Ad_{\bar {\mathbf {u}}_{\varepsilon }}\), and a function \(\bar {p}_{\varepsilon }(\cdot )\in \mathcal {P}\) such that the pair defined by

$$\bar{\mathbf{\chi}}_{\varepsilon }(x,\bar{p}_{\varepsilon }(x))\rightarrow \bar{\gamma}_{\varepsilon },\text{ }x\in D, $$
$$ \bar{u}_{\varepsilon }(x)=\bar{\mathbf{u}}_{\varepsilon }(x,\bar{p} _{\varepsilon }(x))\text{, for }x\in D\backslash \bar{\gamma}_{\varepsilon } $$
(32)

belongs to \(Ad_{\bar {\mathbf {u}}_{\varepsilon }}\) and defines the ε-optimal value

$$S_{\varepsilon D}^{\bar{\mathbf{u}}_{\varepsilon },\bar{y}_{\varepsilon }^{0}}=-\bar{y}_{\varepsilon }^{0}\frac{1}{2}\int\nolimits_{{\Omega} }(\bar{u}_{\varepsilon }(x)-z_{d}(x))^{2}dx. $$

We need the set

$$\begin{array}{@{}rcl@{}} \mathcal{P}_{\varepsilon } &=&\left\{ p(x)=(\bar{y}_{\varepsilon }^{0},y(x)), \text{ }x\in D;(x,p(x))\in \mathbf{P},\right.\\ &&y(\cdot )\in L^{2}(D), \sup\limits_{x\in D}y(x)\leq m\text{,} \\ &&\text{ exists }(u(\cdot ),\gamma )\in Ad_{\mathbf{u}},u(x)=- \tilde{V}_{y}(x,p(x)),\\ &&x\left. \in D,y(x)= 0,x\in \gamma \right\}. \end{array} $$

For a given \(\tilde {V}\) satisfying (6) and (26)–(27), let \(\bar {\mathbf {u}}_{\varepsilon }(x,p)\) in P be defined by (31). Let \(\bar {p}_{\varepsilon }(\cdot )\in \mathcal {P}_{\varepsilon } \) and let \(\bar {u}_{\varepsilon }\) be defined by (32). Let \((\bar {u}_{\varepsilon }(\cdot ),\bar {\gamma }_{\varepsilon })\) belong to \(Ad_{\bar {\mathbf {u}}_{\varepsilon }}\). The pair \((\bar {u}_{\varepsilon }(\cdot ),\bar { \gamma }_{\varepsilon })\) is called an ε-optimal pair with respect to all pairs \((u(\cdot ),\gamma )\in Ad_{\bar {\mathbf {u}}_{\varepsilon }}\) if

$$-\bar{y}_{\varepsilon }^{0}\frac{1}{2}\int\nolimits_{{\Omega} }(\bar{u} _{\varepsilon }(x)-z_{d}(x))^{2}dx $$
$$ \leq -\bar{y}_{\varepsilon }^{0}\frac{1}{2} \int\nolimits_{{\Omega} }(u(x)-z_{d}(x))^{2}dx-(\frac{\varepsilon }{m} +\varepsilon )\bar{y}_{\varepsilon }^{0}. $$
(33)

With all the above notions in place we can formulate the verification theorem for ε-optimality.

Theorem 3

Assume that there exists \(\tilde {V}\) satisfying (6) and (26)–(27). Take \(\bar {p}_{\varepsilon }(\cdot )\in \mathcal {P}_{\varepsilon }\) and \((\bar {u}_{\varepsilon }(\cdot ),\bar {\gamma }_{\varepsilon })\in Ad_{\bar {\mathbf {u}}_{\varepsilon }}\) such that \(\bar {u}_{\varepsilon }(x)=-\tilde {V}_{y}(x,\bar {p}_{\varepsilon }(x))\). Moreover, assume that the pair \((\bar {\gamma }_{\varepsilon },\bar {p}_{\varepsilon }(\cdot ))\) satisfies

$$-{\Delta} \tilde{V}_{y}(x,\bar{p}_{\varepsilon }(x))+F(x,-\tilde{V}_{y}(x,\bar{ p}_{\varepsilon }(x))) $$
$$ \leq -\frac{\varepsilon }{m}\bar{y}_{\varepsilon }^{0}, \text{ }x\in D\backslash \bar{\gamma}_{\varepsilon }, $$
(34)
$$ 0\leq {\Delta}_{x}\tilde{V}(x,\bar{p}_{\varepsilon }(x))+\bar{y}_{\varepsilon }(x)F(x,-\tilde{V}_{y}(x,\bar{p}_{\varepsilon }(x))) $$
(35)
$$+\bar{y}_{\varepsilon }^{0}\frac{1}{2}(-\tilde{V}_{y}(x,\bar{p}_{\varepsilon }(x))-z_{d}(x))^{2}\leq -\varepsilon \bar{y}_{\varepsilon }^{0},\text{ }x\in D\backslash \bar{\gamma}_{\varepsilon }. $$

Then the pair \((\bar {u}_{\varepsilon }(\cdot ),\bar {\gamma }_{\varepsilon })\) is ε-optimal with respect to all \((u(\cdot ),\gamma )\in Ad_{\bar {u}_{\varepsilon }}\).

Proof

Take any \((u(\cdot ),\gamma )\in Ad_{\bar {u}_{\varepsilon }}\) and \(p(\cdot )\in \mathcal {P}_{\varepsilon }\) such that \(u(x)=-\tilde {V}_{y}(x,p(x))\). We proceed in the same way as in the proof of Theorem 1, i.e. from (6) we have, for x ∈ D ∖ γ,

$${\Delta}_{x}\tilde{V}(x,p(x))=\overline{y}_{\varepsilon }^{0}{\Delta}_{x}\tilde{V}_{y^{0}}(x,p(x))+y(x){\Delta}_{x}\tilde{V}_{y}(x,p(x)). $$

Similarly, we have by (28)

$$0\leq -{\Delta} \tilde{V}_{y}(x,p(x))+F(x,-\tilde{V}_{y}(x,p(x))),\ \ \text{ \ }x\in D\backslash \gamma , $$

and then, applying (26) (keeping in mind that y > 0), we get the inequality

$$\bar{y}_{\varepsilon }^{0}\left( \frac{1}{2}(-\tilde{V} _{y}(x,p(x))-z_{d}(x))^{2}-{\Delta}_{x}\tilde{V}_{y^{0}}(x,p(x))\right) $$
$$ \leq -\varepsilon \bar{y}_{\varepsilon }^{0} $$
(36)

and using inequality (34) we come to the inequality

$$\frac{\varepsilon }{m}\bar{y}_{\varepsilon }^{0} $$
$$ \leq \bar{y}_{\varepsilon }^{0}\left( \frac{1}{2}(-\tilde{V}_{y}(x,\bar{p}_{\varepsilon }(x))-z_{d}(x))^{2}-{\Delta}_{x}\tilde{V}_{y^{0}}(x,\bar{p}_{\varepsilon }(x))\right). $$
(37)

Integrating (36) and (37) over Ω and applying the boundary condition for p(x) and \(\bar {p}_{\varepsilon }(x)\) on the boundary of Ω (\( p(x)=\bar {p}_{\varepsilon }(x)=\psi (x)\)) we come to

$$-\bar{y}_{\varepsilon }^{0}\frac{1}{2}{\int}_{{\Omega} }(-\tilde{V}_{y}(x,\bar{p} _{\varepsilon }(x))-z_{d}(x))^{2}dx $$
$$\leq -\bar{y}_{\varepsilon }^{0}\frac{1}{2} {\int}_{{\Omega} }(-\tilde{V}_{y}(x,p(x))-z_{d}(x))^{2}dx-(\frac{\varepsilon }{m} +\varepsilon )\bar{y}_{\varepsilon }^{0}, $$

i.e.

$$-\bar{y}_{\varepsilon }^{0}\frac{1}{2}{\int}_{{\Omega} }(\bar{u}_{\varepsilon }(x)-z_{d}(x))^{2}dx $$
$$\leq -\bar{y}_{\varepsilon }^{0}\frac{1}{2}{\int}_{\Omega }(u(x)-z_{d}(x))^{2}dx-(\frac{\varepsilon }{m}+\varepsilon )\bar{y} _{\varepsilon }^{0}. $$

This is just the assertion of the theorem. □

3.2 Numerical algorithm

The verification theorem formulated in the former section allows us to build a numerical approach to calculate a suboptimal pair \((\bar {u}_{\varepsilon }(\cdot ),\bar {\gamma }_{\varepsilon })\) such that \(S_{\varepsilon D}^{\bar {\mathbf {u}}_{\varepsilon }\bar {y}_{\varepsilon }^{0}}\) satisfies (33). The algorithm presented below ensures that we find a suboptimal pair in a finite number of steps.

Algorithm:

  1.

    Fix m > 0, ε > 0 and calculate the auxiliary function \(\tilde {V}\) from (26)–(27).

  2.

    Form \(Ad_{\bar {u}_{\varepsilon }}\) as a finite family of N pairs (u(⋅), γ):

    (a)

      Define sets γn in ω, n = 1,..., N.

    (b)

      To calculate un, n = 1,..., N, solve inequality (28).

  3.

    Find the minimal value of J(γn), n = 1,..., N, and denote the corresponding pair by \((\hat {u}(\cdot ),\hat {\gamma })\).

  4.

    Assume any fixed \(\bar {y}_{\varepsilon }^{0}<0\) and determine \(\hat {y}(\cdot )\) from the relation

    $$\hat{u}(x)=-\tilde{V}_{y}(x,\bar{y}_{\varepsilon }^{0},\hat{y}(x)). $$
  5.

    For \(\tilde {V}\) and \((\hat {u}(\cdot ),\hat {\gamma })\) check the inequalities (34)–(35):

    (a)

      If \(\tilde {V}\) and \((\hat {u}(\cdot ),\hat {\gamma })\) satisfy (34)–(35), then \((\hat {u}(\cdot ),\hat {\gamma })\) is an ε-optimal pair and \(J(\hat {\gamma })\) is an ε-optimal value.

    (b)

      If \(\tilde {V}\) and \((\hat {u}(\cdot ),\hat {\gamma })\) do not satisfy (34)–(35), then go to step 2.
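The search in steps 2–3 can be sketched as a simple sampling loop. This is a minimal Python illustration rather than the Matlab code used later; `sample_gamma` and `J` are hypothetical stand-ins for the random generation of admissible hollows and for solving (28) followed by evaluating the cost.

```python
import random

def search_candidates(J, sample_gamma, N):
    # Steps 2-3: draw N admissible hollows, evaluate the cost of each,
    # and keep the pair with the smallest J(gamma).
    best_gamma, best_J = None, float("inf")
    for _ in range(N):
        gamma = sample_gamma()      # step 2(a): a random admissible hollow
        Jn = J(gamma)               # step 2(b) + 3: solve (28), evaluate J
        if Jn < best_J:
            best_gamma, best_J = gamma, Jn
    return best_gamma, best_J

# Toy stand-ins: gamma is represented by its centre and J by a smooth
# surrogate with minimum at (-0.2, 0.2), mimicking Example 1 below.
rng = random.Random(0)
sample = lambda: (rng.uniform(-0.4, 0.4), rng.uniform(-0.4, 0.4))
J_toy = lambda c: (c[0] + 0.2) ** 2 + (c[1] - 0.2) ** 2
center, value = search_candidates(J_toy, sample, N=500)
```

With N = 500 samples the loop terminates after a fixed number of evaluations, matching the finite-step guarantee claimed above.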

3.3 Scalability and convergence of the algorithm

First we would like to stress that using any package to solve a nonlinear differential equation we are only able to find an approximate solution, i.e. the numerical solutions do not satisfy in our case the Eq. (29). Therefore any sufficient optimality conditions for approximate optimality should take those circumstances into account. Our algorithm does so: we solve with any computational package the inequalities (28), (26) and check whether the calculated solutions satisfy (28), (26) with the given ε. The number N in steps 2 and 3 of the above algorithm may be arbitrarily large. We can then divide N into smaller parts, carry out the calculations for (28) and step 3 on those parts independently (on different processors), and find the minimal value of J as the minimum of the minimal values of J on those parts. If we are looking for γ being balls (or systems of balls), then we can cover the whole bounded domain ω with a finite number of balls of any positive fixed radius, and the convergence of the algorithm is obvious. However, if we are interested not only in the locations but also in the shapes of γ, then from the theoretical point of view the convergence of the algorithm is still almost obvious, as the closure of ω is a compact set, so there always exists a finite covering of the closure of ω by open sets γ. Hence we can, at least theoretically, calculate the best of them using the above algorithm, and any other set γ is, by continuity of our functional, near one of the sets from the covering family.
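The chunk-wise splitting described above can be sketched as follows. This is our own Python illustration, not the authors' implementation; threads stand in for separate processors, and a quadratic surrogate replaces the actual cost of solving (28).

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_minimum(J, gammas):
    # Minimal value of J over one chunk of candidate hollows.
    return min((J(g), g) for g in gammas)

def parallel_min(J, gammas, workers=4):
    # Divide the N candidates into chunks, minimise J on each chunk
    # independently, then take the minimum of the chunk minima.
    size = max(1, len(gammas) // workers)
    chunks = [gammas[i:i + size] for i in range(0, len(gammas), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return min(pool.map(lambda c: chunk_minimum(J, c), chunks))

# Toy illustration: gamma encoded by its centre, quadratic surrogate cost.
J_toy = lambda c: (c[0] - 0.1) ** 2 + (c[1] + 0.3) ** 2
grid = [(x / 10, y / 10) for x in range(-4, 5) for y in range(-4, 5)]
best_J, best_center = parallel_min(J_toy, grid)
```

The reduction step, min of chunk minima, is exactly the "minimum of minimal values of J on those parts" described in the text.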

3.4 The parameters

In the numerical simulations we take

$$D = [-1,1]\times [-1,1], $$
$$\omega = \{(x_{1},x_{2}) \in D: (x_{1})^{2} + (x_{2})^{2} \leq (0.5)^{2} \} $$

and

$${\Omega} = D \setminus \bar{\omega} $$

(Fig. 1).

Fig. 1 Domain Ω
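The geometry above can be encoded by simple indicator functions; a small Python sketch, with function names of our own choosing:

```python
def in_D(x1, x2):
    # Computational domain D = [-1,1] x [-1,1].
    return -1.0 <= x1 <= 1.0 and -1.0 <= x2 <= 1.0

def in_omega(x1, x2):
    # Interior subdomain omega: closed disc of radius 0.5 at the origin.
    return in_D(x1, x2) and x1 ** 2 + x2 ** 2 <= 0.25

def in_Omega(x1, x2):
    # Exterior subdomain Omega = D minus the closure of omega.
    return in_D(x1, x2) and x1 ** 2 + x2 ** 2 > 0.25
```

Such predicates are enough to mask grid points by subdomain in the computations that follow.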

As a first choice we take \(\bar {\varepsilon } = 0.08\) and we define the control γ as

  • one ball

    $$\gamma = \{(x_{1},x_{2}) \in \omega: (x_{1} - \bar x_{1})^{2} + (x_{2} - \bar x_{2})^{2} < (\bar{\varepsilon })^{2} \}, $$

where \((\bar x_{1}, \bar x_{2})\) is the center of γ.

  • two balls

    $$\gamma_{1} = \{(x_{1},x_{2}) \in \omega: (x_{1} - \bar {x_{1}^{1}})^{2} + (x_{2} - \bar {x_{2}^{1}})^{2} < (\bar{\varepsilon })^{2} \}, $$

where \((\bar {x_{1}^{1}}, \bar {x_{2}^{1}})\) is the center of γ1,

    $$\gamma_{2} = \{(x_{1},x_{2}) \in \omega: (x_{1} - \bar {x_{1}^{2}})^{2} + (x_{2} - \bar {x_{2}^{2}})^{2} < (\bar{\varepsilon })^{2} \}, $$

where \((\bar {x_{1}^{2}}, \bar {x_{2}^{2}})\) is the center of γ2, γ = γ1 ∪ γ2 and γ1 ∩ γ2 = ∅.

  • three balls

    $$\gamma_{1} = \{(x_{1},x_{2}) \in \omega: (x_{1} - \bar {x_{1}^{1}})^{2} + (x_{2} - \bar {x_{2}^{1}})^{2} < (\bar{\varepsilon })^{2} \}, $$

where \((\bar {x_{1}^{1}}, \bar {x_{2}^{1}})\) is the center of γ1,

    $$\gamma_{2} = \{(x_{1},x_{2}) \in \omega: (x_{1} - \bar {x_{1}^{2}})^{2} + (x_{2} - \bar {x_{2}^{2}})^{2} < (\bar{\varepsilon })^{2} \}, $$

where \((\bar {x_{1}^{2}}, \bar {x_{2}^{2}})\) is the center of γ2,

    $$\gamma_{3} = \{(x_{1},x_{2}) \in \omega: (x_{1} - \bar {x_{1}^{3}})^{2} + (x_{2} - \bar {x_{2}^{3}})^{2} < (\bar{\varepsilon })^{2} \}, $$

where \((\bar {x_{1}^{3}}, \bar {x_{2}^{3}})\) is the center of γ3, γ = γ1 ∪ γ2 ∪ γ3, γ1 ∩ γ2 = ∅, γ1 ∩ γ3 = ∅ and γ2 ∩ γ3 = ∅.

We assume that ωε = ω ∖ γ, ε = 0.1, \(\hat {\gamma }\) denotes the optimal γ, \(\bar {y}_{\varepsilon }^{0} = -0.005\) and N = 500.
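Under our reading of the constraints above (balls of radius \(\bar{\varepsilon}\) contained in ω, pairwise disjoint), a random admissible γ can be generated by rejection sampling. A hedged Python sketch, not the paper's generator:

```python
import random

EPS_BAR = 0.08  # ball radius, as chosen above

def random_center(rng, radius=EPS_BAR):
    # Draw a centre so that the whole ball stays inside the disc omega
    # of radius 0.5 (simple rejection sampling).
    while True:
        x1 = rng.uniform(-0.5, 0.5)
        x2 = rng.uniform(-0.5, 0.5)
        if (x1 ** 2 + x2 ** 2) ** 0.5 + radius < 0.5:
            return (x1, x2)

def random_gamma(rng, n_balls, radius=EPS_BAR):
    # n_balls pairwise-disjoint balls of radius EPS_BAR inside omega:
    # reject a centre if its ball would meet an already accepted one.
    centers = []
    while len(centers) < n_balls:
        c = random_center(rng, radius)
        if all(((c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2) ** 0.5
               >= 2 * radius for d in centers):
            centers.append(c)
    return centers
```

For up to three balls of radius 0.08 inside a disc of radius 0.5 the rejection loop terminates quickly, since the admissible region is large relative to the balls.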

For a randomly generated control, we calculate the value of J. For this purpose, we define the function zd (or zd1 and zd2, respectively) by defining the domain for each type of control. That is, we define

  • function zd on D ∖ γzd, where \(\gamma _{zd} = \{(x_{1},x_{2}) \in \omega : (x_{1} + 0.2)^{2} + (x_{2} - 0.2)^{2} < (\bar {\varepsilon })^{2} \}\) for the first case,

  • functions zd1 and zd2 on D ∖ γzd12, where \(\gamma _{zd12} = \{(x_{1},x_{2}) \in \omega : (x_{1} - 0.2)^{2} + (x_{2} + 0.1)^{2} < (\bar {\varepsilon })^{2} \} \cup \{(x_{1},x_{2}) \in \omega : (x_{1} + 0.1)^{2} + (x_{2} - 0.2)^{2} < (\bar {\varepsilon })^{2} \}\) for the second case,

  • functions zd1 and zd2 on D ∖ γzd12, where \(\gamma _{zd12} = \{(x_{1},x_{2}) \in \omega : (x_{1} - 0.2)^{2} + (x_{2} + 0.1)^{2} < (\bar {\varepsilon })^{2} \} \cup \{(x_{1},x_{2}) \in \omega : (x_{1} + 0.1)^{2} + (x_{2} - 0.2)^{2} < (\bar {\varepsilon })^{2} \} \cup \{(x_{1},x_{2}) \in \omega : (x_{1} + 0.2)^{2} + (x_{2} + 0.2)^{2} < (\bar {\varepsilon })^{2} \}\) for the third case.

We are looking for an optimal control in the form of a ball (or a system of balls) that generates the minimal value of J and satisfies (34)–(35).

3.5 Numerical calculations

In order to carry out the calculations we use Matlab R2016a and implement the following steps:

  1.

    Let ε = 0.1.

  2.

    Fix the number of balls (one, two or three).

  3.

    We calculate the function zd (or zd1 and zd2, respectively); zd is the solution of the following boundary value problem:

    $$-{\Delta} z_{d}(x) + \chi ({\Omega} ){z_{d}^{3}}(x)=\chi ({\Omega} ) f_{zd}(x),\ \ \text{ \ }x\in D\backslash \gamma_{zd}, $$
    $$z_{d}(x)= 0,\text{ \ }x\in {\Gamma} , $$
    $$\partial_{n} z_{d}(x)= 0\text{ \ on \ }\partial \gamma_{zd}, $$

    where fzd(x) = x1 for x ∈ D ∖ γzd when we consider one ball.

  4.

    If we consider two or three balls, we solve the above boundary value problem with the two functions fzd1(x) = x1 + x2 for x ∈ D ∖ γzd12 and fzd2(x) = x1 − x2 for x ∈ D ∖ γzd12.

  5.

    For given parameters we randomly generate γ for fixed \(\bar {\varepsilon }\).

  6.

    For given γ, ω and Ω we solve the following system of equations

    $$-{\Delta} u(x)=F(x,u(x)),\ \ \text{ \ }x\in D\backslash \gamma, $$
    $$u(x)= 0,\text{ \ }x\in {\Gamma} , $$
    $$u(x)=\varphi (x),\text{ }\partial_{n}u(x)=\partial_{n}\varphi (x),\text{ } x\in \partial \omega, $$
    $$\partial_{n}u(x)= 0\text{ \ on \ }\partial \gamma, $$

    where \(\omega \varsubsetneq D\).

    $$F(x,u(x))=\left\{ \begin{array}{c} -u^{3}(x)+f(x),\text{ \ }x\in {\Omega}, \\ \text{ \ }0,\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }x\in \omega \backslash \gamma, \end{array} \right. $$

    where \(f:{\Omega } \to \mathbb {R}\) is defined as

    $$f(x) = x_{1}+x_{2}, \text{ \ }x = (x_{1},x_{2})\in {\Omega}. $$
  7.

    We calculate the value of (1):

    $$ J(\gamma )=\frac{1}{2}\int\nolimits_{\Omega}(u(x)-z_{d}(x))^{2}dx. $$
    (38)

    for one ball and

    $$ J(\gamma )=\frac{1}{2}\int\nolimits_{\Omega}(u(x)-z_{d1}(x))^{2}dx+\frac{1}{2}\int\nolimits_{\Omega}(u(x)-z_{d2}(x))^{2}dx. $$
    (39)

    for two and three balls.

  8.

    We repeat steps 5–7 N = 500 times.

  9.

    We select the minimum among all values of (1), i.e. we find the minimal value of J(γn), n = 1,..., N, and denote the corresponding pair by \((\hat {u}(\cdot ),\hat {\gamma })\); min J for the optimal pair \((\hat {u}(\cdot ),\hat {\gamma })\) is determined by \(\hat {\gamma }\).

  10.

    We generate the set

    $$\boldsymbol{Y}=\{(y^{0},y)=p; \text{ there exists }\ (x,p)\in P \}:$$
    $$\boldsymbol{Y} = \{(-0.005, -0.110100), $$
    $$(-0.005, -0.060100 ), (-0.005, -0.010100), $$
    $$(-0.005, 0.039900), (-0.005, 0.089900),$$
    $$(-0.005, -0.001000), (-0.005, 0.001000) \}. $$
  11.

    We calculate the function \(\tilde {V}\) from (26)–(27).

  12.

    We assume \(\bar{y}_{\varepsilon }^{0} = -0.005\) and determine \(\hat {y}(\cdot )\) from the relation

    $$\hat{u}(x) = -\tilde{V}_{y}(x, -0.005, \hat{y}(x)). $$
  13.

    For \(\tilde {V}\) and \((\hat {u} (\cdot ) ,\hat {\gamma })\) we check the inequalities (34)–(35), and we obtain that \(\tilde {V}\) and \((\hat {u}(\cdot ),\hat {\gamma })\) satisfy them, so \((\hat {u}(\cdot ),\hat {\gamma })\) is an ε-optimal pair and \(J(\hat {\gamma })\) is an ε-optimal value with ε = 0.1.
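The evaluation of the cost in step 7 amounts to a quadrature over Ω. A minimal midpoint-rule sketch in Python, our own discretisation rather than the Matlab code used in the paper; the grid step `h` is an assumption:

```python
def J_value(u, z_d, h=0.02):
    # Approximate J = 1/2 * integral over Omega of (u - z_d)^2 dx by a
    # midpoint rule on a uniform grid over D = [-1,1]^2, keeping only
    # cells whose centre lies outside the disc omega of radius 0.5.
    total = 0.0
    n = int(round(2.0 / h))
    for i in range(n):
        for j in range(n):
            x1 = -1.0 + (i + 0.5) * h
            x2 = -1.0 + (j + 0.5) * h
            if x1 ** 2 + x2 ** 2 > 0.25:  # cell centre in Omega
                total += (u(x1, x2) - z_d(x1, x2)) ** 2 * h * h
    return 0.5 * total
```

As a sanity check, for u = zd the value is zero, and for u − zd ≡ 1 it approximates half the area of Ω, i.e. (4 − π/4)/2.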

3.6 Numerical examples

We present four examples, each with a differently defined control.

3.6.1 Example 1

Let γ be one ball. For the given parameters we perform the numerical calculations in the following way:

  • We fix ε = 0.1.

  • From step 2 we calculate the function zd (Fig. 2).

  • For the given parameters we repeat steps 3–5 N = 500 times. We select the minimum among all values of (1), i.e. we find the minimal value of J(γn), n = 1,..., N, and denote the corresponding pair by \((\hat {u}(\cdot ),\hat {\gamma })\) (Figs. 3, 4 and 5):

    $$\min J = 0.022086 $$

    for the optimal \((\hat {u}(\cdot ),\hat {\gamma })\) defined by the center of \(\hat {\gamma }\), where

    $$(\hat x_{1}, \hat x_{2}) = (-0.2012,0.1934). $$
  • We generate the set Y.

  • We calculate the function \(\tilde {V}\) from (26)–(27).

  • We assume \(\bar{y}_{\varepsilon }^{0} = -0.005\) and determine \(\hat {y}(\cdot )\) from the relation

    $$\hat{u}(x) = -\tilde{V}_{y}(x, -0.005, \hat{y}(x)). $$
  • For \(\tilde {V}\) and \((\hat {u} (\cdot ) ,\hat {\gamma })\) we check the inequalities (34)–(35), and we obtain that \(\tilde {V}\) and \((\hat {u}(\cdot ),\hat {\gamma })\) satisfy them, so \((\hat {u}(\cdot ),\hat {\gamma })\) is an ε-optimal pair and \(J(\hat {\gamma })\) is an ε-optimal value with ε = 0.1.

Fig. 2 Function zd

Fig. 3 Optimal domain \(\omega \setminus \hat {\gamma }\) in case of one hole

Fig. 4 Optimal solution \(\hat {u}\) in \(\omega \setminus \hat {\gamma }\) in case of one hole

Fig. 5 Optimal solution \(\hat {u}\) in Ω in case of one hole

3.6.2 Example 2

Let γ be two holes defined as two balls. For the given parameters we perform the numerical calculations in the same way as in Example 1. We obtain the following results:

  • Functions zd1 and zd2 (Figs. 6 and 7).

  • $$\min J = 0.041088 $$

    for the optimal \((\hat {u}(\cdot ),\hat {\gamma })\) (Fig. 8) defined by the centers of \(\hat {\gamma }\), where

    $$(\bar {x_{1}^{1}}, \bar {x_{2}^{1}}) = (0.2100, -0.1028), $$
    $$(\bar {x_{1}^{2}}, \bar {x_{2}^{2}}) = (-0.0982, 0.1863). $$
Fig. 6 Function zd1 in case of two holes

Fig. 7 Function zd2 in case of two holes

Fig. 8 Optimal solution \(\hat {u}\) in \(\omega \setminus \hat {\gamma }\) in case of two holes

3.6.3 Example 3

Let γ be three holes defined as three balls. For the given parameters we perform the numerical calculations in the same way as in Example 1. We obtain the following results:

  • Functions zd1 and zd2 (Figs. 9 and 10).

  • $$\min J = 0.022472 $$

    for the optimal \((\hat {u}(\cdot ),\hat {\gamma })\) defined by the centers of \(\hat {\gamma }\), where

    $$(\bar {x_{1}^{1}}, \bar {x_{2}^{1}}) = (0.2156, -0.1012), $$
    $$(\bar {x_{1}^{2}}, \bar {x_{2}^{2}}) = (-0.0862, 0.2243), $$
    $$(\bar {x_{1}^{3}}, \bar {x_{2}^{3}}) = (-0.1868, -0.2011). $$
Fig. 9 Function zd1 in case of three holes

Fig. 10 Function zd2 in case of three holes

3.6.4 Example 4

Let γ be one hole defined as a non-convex set: two balls with nonempty intersection.

For the given parameters we perform the numerical calculations in a similar way as in Example 1. We obtain the following results:

  • Function zd (Fig. 2).

  • $$\min J = 0.020753 $$

    for the optimal \((\hat {u}(\cdot ),\hat {\gamma })\) (Figs. 11 and 12) defined by the centers of the two balls forming \(\hat {\gamma }\), where

    $$(\bar {x_{1}^{1}}, \bar {x_{2}^{1}}) = (-0.2, 0.2), $$
    $$(\bar {x_{1}^{2}}, \bar {x_{2}^{2}}) = (-0.2983, 0.1986). $$
Fig. 11 Optimal domain \(\omega \setminus \hat {\gamma }\) for one non-convex hole

Fig. 12 Optimal solution \(\hat {u}\) in \(\omega \setminus \hat {\gamma }\) for one non-convex hole

Remark 1

In step 2 we took, for simplicity, balls as the sets γn in ω, since the results obtained in the final calculations turned out satisfactory and the computations were significantly shorter.

Remark 2

In Example 4 we considered the control as a non-convex hole. To calculate the function zd we defined the domain with one hole defined as a ball. For this case the verification theorem is satisfied.

Remark 3

The approximate value of our functional J is similar to the result in the numerical example presented in Szulc and Zochowski (2015). However, we know how far we are from inf J(γ), while in Szulc and Zochowski (2015) only some step of the calculation is known.

4 Conclusions

Dual dynamic programming by construction furnishes sufficient optimality conditions; therefore it is a powerful tool for solving optimization problems with special structure. This is especially important when we deal with approximate (numerical) solutions. In this paper, the dual dynamic programming and dual feedback notions, as well as their approximate counterparts, are developed for the optimum design problem. Next, the solutions of that problem are characterized in terms of verification theorems. The approximate verification theorem allows us not only to calculate an approximate solution but also to state how far we are from inf J(γ). In Szulc and Zochowski (2015) the numerically calculated value of the functional is almost the same (we took the same parameters in our example), but the authors can only assert that it is some step of the calculation of the minimal value. In a subsequent paper we are going to apply the dynamic programming technique to more general shape optimization problems.