
1 Introduction

Nowadays, many systems are composed of networks of control systems. These systems are highly critical, and formal verification is an essential element for their social acceptability. When the components of the system to model are distributed, delays are naturally introduced in the feedback loop. They may significantly alter the dynamics, and impact safety properties that we want to ensure for the system. The natural model for dynamical systems with such delays is Delay Differential Equations (DDE), in which time derivatives not only depend on the current state, but also on past states. Reachability analysis, which involves computing the set of states reached by the dynamics, is a fundamental tool for the verification of such systems. As the reachable sets are not exactly computable, approximations are used. In particular, outer (also called over)-approximating flowpipes are used to prove that error states will never be reached, whereas inner (also called under)-approximating flowpipes are used to prove that desired states will actually be reached, or to falsify properties. We propose in this article a method to compute both outer- and inner-approximating flowpipes for DDEs.

We concentrate on systems that can be modeled as parametric fixed-delay systems of DDEs, where both the initial condition and right-hand side of the system depend on uncertain parameters, but with a unique constant and exactly known delay:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{z}(t) = f(z(t), z(t-\tau ), \beta ) &{} \hbox { if } t \in [t_0+\tau ,T] \\ z(t) = z_0(t, \beta ) &{} \hbox { if } t \in [t_0,t_0+\tau ] \end{array}\right. } \end{aligned}$$
(1)

where the continuous vector of variables z belongs to a state-space domain \(\mathcal{D} \subseteq \mathbb {R}^n\), the (constant) vector of parameters \(\beta \) belongs to the domain \(\mathcal{B} \subseteq \mathbb {R}^m\), and \(f: \mathcal{D} \times \mathcal{D} \times \mathcal{B} \rightarrow \mathcal{D}\) is \(C^{\infty }\) and such that Eq. (1) admits a unique solution on the time interval \([t_0,T]\). The initial condition is defined on \(t \in [t_0,t_0+\tau ]\) by a function \(z_0: \mathbb {R}^+ \times \mathcal{B} \rightarrow \mathcal{D}\). The method introduced here also applies when the set of initial states is given as the solution of an uncertain system of ODEs instead of being defined by a function; only the initialization of the algorithm differs. When several constant delays occur in the system, the description becomes more involved, but the same method applies.

Example 1

We will exemplify our method throughout the paper on the system

$$\begin{aligned}{\left\{ \begin{array}{ll} \dot{x}(t) = -x(t)\cdot x(t-\tau ) =: f\left( x(t), x(t-\tau ),\beta \right) &{} t\in [0,T] \\ x(t) = x_0(t,\beta )=(1+\beta t)^2 &{} t\in [-\tau ,0] \end{array}\right. }\end{aligned}$$

We take \(\beta \in \left[ \tfrac{1}{3},1\right] \), which defines a family of initial functions, and we fix \(\tau =1\).

This system is a simple but not completely trivial example, for which we have an analytical solution on the first time steps, as detailed in Example 4.

Contributions and Outline. In this work, we extend the method introduced by Goubault and Putot [16] for ODEs to the computation of inner and outer flowpipes of systems of DDEs. We claim, and experimentally demonstrate with our prototype implementation, that the method we propose here for DDEs is both simple and efficient. Relying on outer-approximations and generalized interval computations, all computations can be safely rounded, so that the results are guaranteed to be sound. Finally, we can compute inner-approximating flowpipes combining existentially and universally quantified parameters, which offers strong potential for property verification, beyond falsification.

In Sect. 2, we first define the notions of inner and outer-approximating flowpipes, as well as robust inner-approximations, and state some preliminaries on generalized interval computations, which are instrumental in our inner flowpipes computations. We then present in Sect. 3 our method for outer-approximating solutions to DDEs. It is based on the combination of Taylor models in time with a space abstraction relying on zonotopes. Section 4 relies on this approach to compute outer-approximations of the Jacobian of the solution of the DDE with respect to the uncertain parameters, using variational equations. Inner-approximating tubes are obtained from these using a generalized mean-value theorem introduced in Sect. 2. We finally demonstrate our method in Sect. 5, using our C++ prototype implementation, and show its superiority in terms of accuracy and efficiency compared to the state of the art.

Related Work. Reachability analysis for systems described by ordinary differential equations, and its extension to hybrid systems, has been an active research topic over the last decades. Outer-approximations have been computed with ellipsoidal techniques [20], sub-polyhedral techniques such as zonotopes or support functions, and Taylor-model based methods, for both linear and non-linear systems [2, 4,5,6, 10, 14, 17, 26]. A number of corresponding implementations exist [1, 3, 7, 13, 22, 25, 29]. Far fewer methods have been proposed to address the more difficult problem of inner-approximation. The existing approaches use ellipsoids [21] or non-linear approximations [8, 16, 19, 31], but they are often computationally costly and imprecise. Recently, an interval-based method [24] was introduced for bracketing the positive invariant set of a system without relying on integration. However, it relies on space discretization and has, as far as we know, only been applied successfully to low-dimensional systems.

Taylor methods for outer-approximating reachable sets of DDEs have been used only recently, in [28, 32]. We will demonstrate that our approach improves the efficiency and accuracy over these interval-based Taylor methods.

The only previous work we know of for computing inner-approximations of solutions to DDEs is the method of Xue et al. [30], which extends the approach proposed for ODEs in [31]. Their method is based on a topological condition and a careful inspection of what happens at the boundary of the initial condition. In the section dedicated to experiments, we provide a comparison to the few experimental results given in [30].

2 Preliminaries on Outer and Inner Approximations

Notations and Definitions. Let us introduce some notations used throughout the paper. Set-valued quantities, scalar or vector valued, corresponding to uncertain inputs or parameters, are noted with bold letters, e.g. \(\varvec{x}\). When an approximation is introduced by computation, we add brackets: outer-approximating enclosures are noted in bold and enclosed within inward-facing brackets, e.g. \([\varvec{x}]\), and inner-approximations are noted in bold and enclosed within outward-facing brackets, e.g. \(]\varvec{x}[\).

An outer-approximating extension of a function \(f: \mathbb {R}^m \rightarrow \mathbb {R}^n\) is a function \([\varvec{f}]: \mathcal{P} (\mathbb {R}^m) \rightarrow \mathcal{P}(\mathbb {R}^n)\), such that for all \(\varvec{x}\) in \(\mathcal{P} (\mathbb {R}^m)\), \( \hbox {range}(f,\varvec{x})=\{f(x), x \in \varvec{x}\} \subseteq [\varvec{f}](\varvec{x})\). Dually, inner-approximations determine a set of values proved to belong to the range of the function over some input set. An inner-approximating extension of f is a function \(]\varvec{f}[: \mathcal{P} (\mathbb {R}^m) \rightarrow \mathcal{P}(\mathbb {R}^n)\), such that for all \(\varvec{x}\) in \(\mathcal{P} (\mathbb {R}^m)\), \( ]\varvec{f}[(\varvec{x}) \subseteq \hbox {range}(f,\varvec{x})\). Inner and outer approximations can be interpreted as quantified propositions: \({{\mathrm{range}}}(f,\varvec{x}) \subseteq [\varvec{z}]\) can be written \( (\forall x \in \varvec{x}) \, (\exists z \in [\varvec{z}]) \, (f(x)=z)\), while \(]\varvec{z}[ \, \subseteq {{\mathrm{range}}}(f,\varvec{x})\) can be written \( (\forall z \in \, ]\varvec{z}[) \, (\exists x \in \varvec{x}) \, (f(x)=z). \)
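These two quantified propositions can be checked empirically by sampling (an illustrative sketch only, not a proof; the function \(f(x)=x^2-x\) and the candidate intervals are chosen purely for illustration):

```python
# Sampling-based illustration of the two quantified propositions, using
# f(x) = x^2 - x over x = [2, 3] (exact range [2, 6]).
# Outer candidate [1, 7]: every sampled f(x) must lie inside it.
# Inner candidate [2.5, 5.5]: every sampled z must be (nearly) hit by some f(x).

def f(x):
    return x * x - x

xs = [2 + 1e-3 * k for k in range(1001)]      # samples of x in [2, 3]
images = [f(x) for x in xs]

# outer check: range(f, x) subset of [1, 7]
outer_ok = all(1 <= y <= 7 for y in images)

# inner check: every z in [2.5, 5.5] is close to some sampled f(x)
zs = [2.5 + 0.01 * k for k in range(301)]     # samples of z in [2.5, 5.5]
inner_ok = all(min(abs(y - z) for y in images) < 1e-2 for z in zs)

print(outer_ok, inner_ok)   # True True
```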

Let \(\varphi (t,\beta )\) for time \(t \ge t_0\) denote the time trajectory of the dynamical system (1) for a parameter value \(\beta \), and \(\varvec{z}(t,\varvec{\beta })=\{ \varphi (t,\beta ), \beta \in \varvec{\beta } \}\) the set of states reachable at time t for the set of parameter values \(\varvec{\beta }\). We extend the notion of outer and inner-approximations to the case where the function is the solution \(\varphi (t,\beta )\) of system (1) over the set \(\varvec{\beta }\). An outer-approximating flowpipe is given by an outer-approximation of the set of reachable states, for all t in a time interval:

Definition 1

(Outer-approximation). Given a vector of uncertain (constant) parameters or inputs \(\beta \in \varvec{\beta }\), an outer-approximation at time t of the reachable set of states is \([\varvec{z}](t,\varvec{\beta }) \supseteq \varvec{z}(t,\varvec{\beta })\), such that \((\forall \beta \in \varvec{\beta })\, (\exists z \in [\varvec{z}](t,\varvec{\beta })) \, (\varphi (t,\beta )=z)\).

Definition 2

(Inner-approximation). Given a vector of uncertain (constant) parameters or inputs \(\beta \in \varvec{\beta }\), an inner-approximation at time t of the reachable set is \(]\varvec{z}[(t,\varvec{\beta }) \subseteq \varvec{z}(t,\varvec{\beta })\), such that \((\forall z \in ]\varvec{z}[(t,\varvec{\beta }))\, (\exists \beta \in \varvec{\beta }) \, (\varphi (t,\beta )=z)\).

In words, any point of the inner flowpipe is the solution at time t of system (1), for some value of \(\beta \in \varvec{\beta }\). If the outer and inner approximations are computed accurately, they approximate with arbitrary precision the exact reachable set.

Our method will also solve the more general robust inner-approximation problem of finding an inner-approximation of the reachable set, robust to uncertainty on an uncontrollable subset \(\beta _\mathcal{A}\) of the vector of parameters \(\beta \):

Definition 3

(Robust inner-approximation). Given a vector of uncertain (constant) parameters or inputs \(\beta =(\beta _\mathcal{A},\beta _\mathcal{E}) \in \varvec{\beta }\), an inner-approximation of the reachable set \(\varvec{z}(t,\varvec{\beta })\) at time t, robust with respect to \(\beta _\mathcal{A}\), is a set \(]\varvec{z}[_\mathcal{A}(t,\varvec{\beta }_\mathcal{A},\varvec{\beta }_\mathcal{E})\) such that \((\forall z \in ]\varvec{z}[_\mathcal{A}(t,\varvec{\beta }_\mathcal{A},\varvec{\beta }_\mathcal{E}))\, (\forall \beta _\mathcal{A} \in \varvec{\beta }_\mathcal{A}) \, (\exists \beta _\mathcal{E} \in \varvec{\beta }_\mathcal{E}) \, (\varphi (t,\beta _\mathcal{A},\beta _\mathcal{E})=z)\).
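Definition 3 can likewise be made concrete with a small sampling check on a toy function (all names and bounds below are hypothetical, chosen only to exhibit the \(\forall \forall \exists \) alternation):

```python
# Toy illustration of robust inner-approximation (a sampling check, not a proof):
# f(bA, bE) = bA + bE, with adversarial bA in [-0.1, 0.1] and
# controllable bE in [-1, 1].  The set [-0.9, 0.9] is inner, robust to bA:
# whatever bA does, bE can be chosen so as to reach any z in [-0.9, 0.9].

def f(bA, bE):
    return bA + bE

zs = [-0.9 + 0.01 * k for k in range(181)]    # z  in [-0.9, 0.9]
bAs = [-0.1 + 0.01 * k for k in range(21)]    # bA in [-0.1, 0.1]

def reachable(z, bA):
    bE = z - bA                                # candidate witness for bE
    return abs(bE) <= 1 + 1e-9 and abs(f(bA, bE) - z) < 1e-9

robust_ok = all(reachable(z, bA) for z in zs for bA in bAs)
print(robust_ok)   # True
```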

Outer and Inner Interval Approximations. Classical intervals are used in many situations to rigorously compute with interval domains instead of reals, usually leading to outer-approximations of function ranges over boxes. We denote the set of classical intervals by \(\mathbb {IR}\). Intervals are non-relational abstractions, in the sense that they rigorously approximate each component of a vector function f independently. We thus consider in this section a function \(f: \mathbb {R}^m \rightarrow \mathbb {R}\). The natural interval extension consists in replacing real operations by their interval counterparts in the expression of the function. A generally more accurate extension relies on a linearization by the mean-value theorem. Suppose f is differentiable over the interval \(\varvec{x}\). Then, the mean-value theorem implies that \((\forall x_0 \in \varvec{x}) \, ( \forall x \in \varvec{x}) \, (\exists c \in \varvec{x}) \, (f(x) = f(x_0) + f'(c) (x-x_0))\). If we can bound the range of the gradient of f over \(\varvec{x}\) by \( [\varvec{f}'](\varvec{x})\), then we can derive the following interval enclosure, usually called the mean-value extension: for any \(x_0 \in \varvec{x}, \, \hbox {range}(f,\varvec{x}) \subseteq f(x_0) + [\varvec{f}'](\varvec{x}) (\varvec{x}- x_0). \)

Example 2

Consider \(f(x)=x^2-x\); its range over \(\varvec{x}=[2,3]\) is [2, 6]. The natural interval extension of f, evaluated on [2, 3], is \([\varvec{f}]([2,3])=[2,3]^2-[2,3]=[1,7]\). The mean-value extension gives \(f(2.5) + \) \([\varvec{f}']([2, 3]) ([2, 3] - 2.5)\) \( = [1.25, 6.25]\), using \(x_0=2.5\) and \([\varvec{f}'](\varvec{x})=2\varvec{x}-1\).
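For illustration, the two extensions of Example 2 can be reproduced with a minimal interval type (a float sketch that ignores outward rounding, so it is not validated arithmetic):

```python
# Natural vs. mean-value interval extension of f(x) = x^2 - x over [2, 3].
# Outward rounding is deliberately omitted to keep the sketch short.

class I:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(s, o): o = I.of(o); return I(s.lo + o.lo, s.hi + o.hi)
    def __sub__(s, o): o = I.of(o); return I(s.lo - o.hi, s.hi - o.lo)
    def __mul__(s, o):
        o = I.of(o)
        ps = [s.lo*o.lo, s.lo*o.hi, s.hi*o.lo, s.hi*o.hi]
        return I(min(ps), max(ps))
    @staticmethod
    def of(v): return v if isinstance(v, I) else I(v, v)
    def __repr__(s): return f"[{s.lo}, {s.hi}]"

x = I(2, 3)
natural = x * x - x                     # [1, 7]: dependency on x is lost

x0 = 2.5
grad = I(2, 2) * x - I(1, 1)            # f'([2,3]) = 2[2,3] - 1 = [3, 5]
mean_value = I.of(x0*x0 - x0) + grad * (x - x0)   # 3.75 + [3,5]*[-0.5,0.5]

print(natural, mean_value)              # [1, 7] [1.25, 6.25]
```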

Modal Intervals and Kaucher Arithmetic. The results introduced in this section are mostly based on the work of Goldsztejn et al. [15] on modal intervals. Let us first introduce generalized intervals, i.e., intervals whose bounds are not necessarily ordered, and the Kaucher arithmetic [18] on these intervals.

The set of generalized intervals is denoted by \(\mathbb {KR}\). Given two real numbers \( \underline{x}\) and \(\overline{x}\), with \( \underline{x} \le \overline{x}\), one can consider two generalized intervals, \([\underline{x},\overline{x}]\), which is called proper, and \([\overline{x},\underline{x}]\), which is called improper. We define \(\hbox {dual} ([a,b])=[b,a]\) and \(\text {pro }([a,b])=[\min (a,b),\max (a,b)]\).

Definition 4

([15]). Let \(f: \mathbb {R}^m \rightarrow \mathbb {R}\) be a continuous function and \(\varvec{x} \in \mathbb {KR}^m\), which we can decompose in a proper part \(\varvec{x}_\mathcal{A} \in \mathbb {IR}^p\) and an improper part \(\varvec{x}_\mathcal{E}\) with \(p+q=m\). A generalized interval \(\varvec{z} \in \mathbb {KR}\) is \((f,\varvec{x})\)-interpretable if

$$\begin{aligned} (\forall x_\mathcal{A} \in \varvec{x}_\mathcal{A})\, (Q_z z \in \text {pro }\varvec{z})\, (\exists x_\mathcal{E} \in \text {pro }\varvec{x}_\mathcal{E}) \, (f(x)=z) \end{aligned}$$
(2)

where \(Q_z = \exists \) if \((\varvec{z})\) is proper, and \(Q_z = \forall \) otherwise.

When all intervals in (2) are proper, we retrieve the interpretation of classical interval computation, which gives an outer-approximation of \({{\mathrm{range}}}(f,\varvec{x})\), or \( (\forall x \in \varvec{x}) \, (\exists z \in [\varvec{z}]) \, (f(x)=z). \) When all intervals are improper, (2) yields an inner-approximation of \({{\mathrm{range}}}(f,\varvec{x})\), or \( (\forall z \in \, ]\text {pro }\varvec{z}[) \, (\exists x \in \text {pro }\varvec{x}) \, (f(x)=z). \)

Kaucher arithmetic [18] provides a computation on generalized intervals that returns intervals that are interpretable as inner-approximations in some simple cases. Kaucher addition extends addition on classical intervals by \(\varvec{x}+\varvec{y}=[\underline{x} + \underline{y},\overline{x} + \overline{y}]\) and \(\varvec{x}-\varvec{y}=[\underline{x} - \overline{y},\overline{x} - \underline{y}]\). For multiplication, let us decompose \(\mathbb {KR}\) into \(\mathcal{P} = \{\varvec{x}=[\underline{x},\overline{x}], \; \underline{x} \geqslant 0 \wedge \overline{x} \geqslant 0\}\), \({- \mathcal P} = \{\varvec{x}=[\underline{x},\overline{x}], \; \underline{x} \leqslant 0 \wedge \overline{x} \leqslant 0\}\), \(\mathcal{Z} = \{\varvec{x}=[\underline{x},\overline{x}], \; \underline{x} \leqslant 0 \leqslant \overline{x}\}\), and \({\hbox {dual } \mathcal Z} = \{\varvec{x}=[\underline{x},\overline{x}], \; \underline{x} \geqslant 0 \geqslant \overline{x}\}\). Kaucher multiplication \(\varvec{x}\varvec{y}\) extends the classical multiplication to all possible combinations of \(\varvec{x}\) and \(\varvec{y}\) belonging to these sets; when restricted to proper intervals, it coincides with the classical interval multiplication. We refer to [18] for more details.
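A partial sketch of this arithmetic in code (floats, and only the multiplication cases used later in the paper; a full Kaucher implementation covers all combinations of the four classes):

```python
# Generalized intervals as pairs (lo, hi) with possibly lo > hi.
# Implemented: Kaucher addition, subtraction, dual, and multiplication for
# proper * proper and for (P | -P | Z) * (improper interval in dual Z).

class K:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(s, o): return K(s.lo + o.lo, s.hi + o.hi)
    def __sub__(s, o): return K(s.lo - o.hi, s.hi - o.lo)
    def dual(s): return K(s.hi, s.lo)
    def __mul__(s, o):
        proper = lambda v: v.lo <= v.hi
        if proper(s) and proper(o):                 # classical case
            ps = [s.lo*o.lo, s.lo*o.hi, s.hi*o.lo, s.hi*o.hi]
            return K(min(ps), max(ps))
        if proper(s) and o.lo >= 0 >= o.hi:         # o in dual Z
            if s.lo >= 0:                           # s in P
                return K(s.lo * o.lo, s.lo * o.hi)
            if s.hi <= 0:                           # s in -P
                return K(s.hi * o.hi, s.hi * o.lo)
            return K(0.0, 0.0)                      # s in Z
        raise NotImplementedError("case not needed in this sketch")
    def __repr__(s): return f"[{s.lo}, {s.hi}]"

# [3,5] * ([3,2] - 2.5) = [3,5] * [0.5,-0.5] = [1.5,-1.5]   (P x dual Z)
print(K(3, 5) * (K(3, 2) - K(2.5, 2.5)))
```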

Kaucher arithmetic defines a generalized interval natural extension (see [15]):

Proposition 1

Let \(f: \mathbb {R}^m \rightarrow \mathbb {R}\) be a function given by an arithmetic expression where each variable appears syntactically only once (and with degree 1). Then for \(\varvec{x} \in \mathbb {KR}^m\), \(f(\varvec{x})\), computed using Kaucher arithmetic, is \((f,\varvec{x})\)-interpretable.

In some cases, Kaucher arithmetic can thus be used to compute an inner-approximation of \({{\mathrm{range}}}(f,\varvec{x})\). But the restriction to functions f with single occurrences of variables, that is with no dependency, prevents a wide use. A generalized interval mean-value extension allows us to overcome this limitation:

Theorem 1

Let \(f: \mathbb {R}^m \rightarrow \mathbb {R}\) be differentiable, and \(\varvec{x} \in \mathbb {KR}^m\), which we can decompose in a proper part \(\varvec{x}_\mathcal{A} \in \mathbb {IR}^p\) and an improper part \(\varvec{x}_\mathcal{E}\) with \(p+q=m\). Suppose that for each \(i \in \{1,\ldots ,m\}\), we can compute \([\varvec{\varDelta }_i] \in \mathbb {IR}\) such that

$$\begin{aligned} \left\{ \frac{\partial f}{\partial x_i} (x), \; x \in \text {pro }\varvec{x}\right\} \subseteq [\varvec{\varDelta }_i]. \end{aligned}$$
(3)

Then, for any \(\tilde{x} \in \text {pro }\varvec{x}\), the following interval, evaluated with Kaucher arithmetic, is \((f,\varvec{x})\)-interpretable:

$$\begin{aligned} \tilde{f}(\varvec{x}) = f(\tilde{x}) + \sum \limits _{i=1}^{m} [\varvec{\varDelta }_i] (\varvec{x}_i - \tilde{x}_i). \end{aligned}$$
(4)

When using (4) for inner-approximation, we can only get the following subset of all possible cases in the Kaucher multiplication table: \((\varvec{x}\in \mathcal{P}) \times (\varvec{y}\in \text {dual }\mathcal{Z})=[\underline{x} \underline{y}, \underline{x} \overline{y}]\), \((\varvec{x}\in -\mathcal{P}) \times (\varvec{y}\in \text {dual }\mathcal{Z})=[\overline{x} \overline{y}, \overline{x} \underline{y}]\), and \((\varvec{x}\in \mathcal{Z}) \times (\varvec{y}\in \text {dual }\mathcal{Z})=0\). Indeed, for an improper \(\varvec{x}\), and \(\tilde{x} \in \text {pro }\varvec{x}\), it holds that \((\varvec{x}- \tilde{x})\) is in \(\text {dual }\mathcal{Z}\). The outer-approximation \([\varvec{\varDelta }_i] \) of the Jacobian is a proper interval, thus in \(\mathcal{P}\), \(-\mathcal{P}\) or \(\mathcal{Z}\), and we can deduce from the multiplication rules that the inner-approximation is non-empty only if \([\varvec{\varDelta }_i]\) does not contain 0.

Example 3

Let f be defined by \(f(x)=x^2-x\), for which we want to compute an inner-approximation of the range over [2, 3]. Due to the two occurrences of x, \(f(\hbox {dual}\, \varvec{x})\), computed with Kaucher arithmetic, is not \((f,\varvec{x})\)-interpretable. The interval \(\tilde{f}(\varvec{x}) = f(2.5) + \varvec{f}'([2,3]) (\varvec{x}- 2.5)=3.75 + [3,5](\varvec{x}- 2.5)\) given by its mean-value extension, computed with Kaucher arithmetic, is \((f,\varvec{x})\)-interpretable. For the improper \(\varvec{x}=\hbox {dual}\,[2,3]=[3,2]\), using the multiplication rule for \(\mathcal{P} \times \text {dual }\mathcal{Z}\), we get \(\tilde{f}(\varvec{x}) = 3.75 + [3,5]([3,2] - 2.5)= 3.75 + [3,5] [0.5,-0.5] = 3.75 + [1.5,-1.5] = [5.25,2.25]\), that can be interpreted as: \((\forall z \in [2.25,5.25]) \, (\exists x \in [2,3]) \, (z=f(x))\). Thus, [2.25, 5.25] is an inner-approximation of \({{\mathrm{range}}}(f,[2,3])\).
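This computation can be replayed mechanically; the sketch below hardcodes the single Kaucher multiplication rule it needs (\(\mathcal{P} \times \text {dual }\mathcal{Z}\)) and recovers [2.25, 5.25]:

```python
# Generalized mean-value extension of f(x) = x^2 - x over pro x = [2, 3],
# evaluated on the improper interval [3, 2] (float sketch, no rounding).

def mul_P_dualZ(p, z):
    # p = (plo, phi) proper with plo >= 0;  z = (zlo, zhi) with zlo >= 0 >= zhi
    return (p[0] * z[0], p[0] * z[1])

x_pro = (2.0, 3.0)
x_imp = (3.0, 2.0)                   # dual of [2, 3]
x_tilde = 2.5
f_tilde = x_tilde**2 - x_tilde       # f(2.5) = 3.75

delta = (2*x_pro[0] - 1, 2*x_pro[1] - 1)          # f'([2,3]) = [3, 5], proper
diff = (x_imp[0] - x_tilde, x_imp[1] - x_tilde)   # [0.5, -0.5], in dual Z
prod = mul_P_dualZ(delta, diff)                   # [1.5, -1.5]

inner = (f_tilde + prod[0], f_tilde + prod[1])    # [5.25, 2.25], improper
inner_pro = (min(inner), max(inner))              # pro -> [2.25, 5.25]
print(inner_pro)
```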

In Sect. 4, we will use Theorem 1 with f being each component (for an n-dimensional system) of the solution of the uncertain dynamical system (1): we need an outer enclosure of the solution of the system, and of its Jacobian with respect to the uncertain parameters. This is the objective of the next sections.

3 Taylor Method for Outer Flowpipes of DDEs

We now introduce a Taylor method to compute outer enclosures of the solution of system (1). The principle is to extend a Taylor method for the solution of ODEs to the case of DDEs, in a similar spirit to the existing work [28, 32]. This can be done by building a Taylor model version of the method of steps [27], a technique for solving DDEs that reduces these to a sequence of ODEs.

3.1 The Method of Steps for Solving DDEs

The principle of the method of steps is that on each time interval \([t_0+i\tau ,t_0+(i+1)\tau ]\), for \(i \ge 1\), the function \(z(t-\tau )\) is a known history function, already computed as the solution of the DDE on the previous time interval \([t_0+(i-1)\tau ,t_0+i\tau ]\). Plugging the solution of the previous ODE into the DDE yields a new ODE on the next time interval: we thus have an initial value problem for an ODE, with initial condition \(z(t_0+i\tau )\) defined by the previous ODE. This process is initialized with \(z_0(t)\) on the first time interval \([t_0,t_0+\tau ]\). The solution of the DDE can thus be obtained by solving a sequence of IVPs for ODEs. Generally, there is a discontinuity in the first derivative of the solution at \(t_0+\tau \). If this is the case, then because of the term \(z(t-\tau )\) in the DDE, a discontinuity (in derivatives of increasing order) will also appear at each \(t_0+i\tau \).

Example 4

Consider the DDE defined in Example 1. On \(t \in [0,\tau ]\) the solution of the DDE is solution of the ODE

$$\begin{aligned} \dot{x}(t) = f(x(t),x_0(t-\tau ,\beta )) = -x(t) (1 + \beta (t-\tau ))^2, \; t \in [0,\tau ] \end{aligned}$$

with initial value \(x(0) = x_0(0,\beta ) = 1\). It admits the analytical solution

$$\begin{aligned} x(t)=\exp \left( -\frac{1}{3\beta }\left( \left( 1+(t-1)\beta \right) ^3 - \left( 1-\beta \right) ^3\right) \right) , \; t \in [0,\tau ] \end{aligned}$$
(5)

The solution of the DDE on the time interval \([\tau ,2\tau ]\) is the solution of the ODE

$$\begin{aligned} \dot{x}(t) = -x(t) \exp \left( -\frac{1}{3\beta }\left( \left( 1+(t-\tau -1)\beta \right) ^3 - \left( 1-\beta \right) ^3\right) \right) , \; t \in [\tau ,2\tau ] \end{aligned}$$

with initial value \(x(\tau )\) given by (5). An analytical solution can be computed, using the transcendental lower incomplete gamma function \(\gamma \).
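The reduction to a sequence of ODEs can also be illustrated with a plain floating-point integration of Example 1 (no enclosures; Euler with an arbitrary small step is used only to keep the sketch short, and the result is checked against the analytical solution (5) at \(t=\tau \)):

```python
# Floating-point sketch of the method of steps on Example 1, beta = 0.5.
# The history is stored on a uniform grid with step h, so that x(t - tau)
# can be looked up; each delay window is then an ordinary IVP.

import math

beta, tau, h = 0.5, 1.0, 1e-4
n_per_tau = round(tau / h)

# grid values of the initial function x0(t) = (1 + beta*t)^2 on [-tau, 0]
hist = [(1 + beta * (-tau + k * h))**2 for k in range(n_per_tau + 1)]

T = 2.0                                    # integrate up to t = 2*tau
for k in range(n_per_tau, n_per_tau + round(T / h)):
    x, x_del = hist[k], hist[k - n_per_tau]
    hist.append(x + h * (-x * x_del))      # Euler step of x' = -x * x(t - tau)

def x_exact(t):                            # analytical solution (5) on [0, tau]
    return math.exp(-((1 + (t - 1) * beta)**3 - (1 - beta)**3) / (3 * beta))

err = abs(hist[2 * n_per_tau] - x_exact(1.0))   # compare at t = tau
print(err)                                      # small: Euler error is O(h)
```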

3.2 Finite Representation of Functions as Taylor Models

A sufficiently smooth function g (e.g. \(C^{\infty }\)) can be represented on a time interval \([t_0,t_0+h]\) by a Taylor expansion

$$\begin{aligned} g(t) = \sum _{i=0}^k (t-t_0)^i g^{[i]}(t_0) + (t-t_0)^{k+1} g^{[k+1]}(\xi ), \end{aligned}$$
(6)

with \(\xi \in [t_0,t_0+h]\), and using the notation \( g^{[i]}(t) := \frac{g^{(i)}(t)}{i!}\). We will use such Taylor expansions to represent the solution z(t) of the DDE on each time interval \([t_0+i\tau ,t_0+(i+1)\tau ]\), starting with the initial condition \(z_0(t,\beta )\) on \([t_0,t_0+\tau ]\). For more accuracy, we actually define these expansions piecewise on a finer time grid of fixed time step h. The function \(z_0(t,\beta )\) on time interval \([t_0,t_0+\tau ]\) is thus represented by \(p=\tau /h\) Taylor expansions. The \(l^{th}\) such Taylor expansion, valid on the time interval \([t_0+lh,t_0+(l+1)h]\) with \(l \in \{0,\ldots ,p-1\}\), is

$$\begin{aligned} z_0(t,\beta ) = \sum _{i=0}^k (t-t_0-lh)^i z^{[i]}(t_0+lh,\beta ) + (t-t_0-lh)^{k+1} z^{[k+1]}(\xi _l,\beta ), \end{aligned}$$
(7)

for a \(\xi _l \in [t_0+lh,t_0+(l+1)h]\).

3.3 An Abstract Taylor Model Representation

In a rigorous version of the expansion (7), the \(z^{[i]}(t_0+lh,\beta )\) as well as \(z^{[k+1]}(\xi _l,\beta )\) are set-valued, as the vector of parameters \(\beta \) is set-valued. The simplest way to account for these uncertainties is to use intervals. However, this approach suffers heavily from the wrapping effect, as these uncertainties accumulate with integration time. A more accurate alternative is to use a Taylor form in the parameters \(\beta \) for each \(z^{[i]}(t_0+lh,\beta )\). This is however very costly. We choose in this work to use a sub-polyhedric abstraction to parameterize the Taylor coefficients, expressing some sensitivity of the model to the uncertain parameters: we rely on affine forms [9]. The result can be seen as Taylor models of arbitrary order in time, and of order close to 1 in the parameter space.

The vector of uncertain parameters or inputs \(\beta \in \varvec{\beta }\) is thus defined as a vector of affine forms over m symbolic variables \(\varepsilon _i\in [-1,1]\): \( \varvec{\beta } = \alpha _{0}+\sum _{i=1}^{m}\alpha _{i}\varepsilon _i\), where the coefficients \(\alpha _i\) are vectors of real numbers. This abstraction describes the set of values of the parameters as given within a zonotope. In the sequel, we will use for zonotopes the same bold letter notation as for intervals, which accounts for set-valued quantities.

Example 5

In Example 1, \(\varvec{\beta } = [\frac{1}{3},1]\) can be represented by the centered form \(\varvec{\beta }=\frac{2}{3}+\frac{1}{3}\varepsilon _1\). The set of initial conditions \(\varvec{x}_0(t,\varvec{\beta })\) is abstracted as a function of the noise symbol \(\varepsilon _1\). For example, at \(t=-1\), \(\varvec{x}_0(-1,\varvec{\beta }) = (1-\varvec{\beta })^2 = (1 - \frac{2}{3} - \frac{1}{3}\varepsilon _1)^2 = \frac{1}{9} (1-\varepsilon _1)^2 \). The abstraction of affine arithmetic operators is computed componentwise on the noise symbols \(\varepsilon _i\), and does not introduce any over-approximation. The abstraction of non affine operations is conservative: an affine approximation of the result is computed, and a new noise term is added, that accounts for the approximation error. Here, using \(\varepsilon _1^2 \in [0,1]\), affine arithmetic [9] will yield \( [\varvec{x}_0](-1,\varvec{\beta }) = \frac{1}{9} ( 1 - 2 \varepsilon _1 + [0,1]) = \frac{1}{9} ( 1.5 - 2 \varepsilon _1 + 0.5 \varepsilon _2), \) with \(\varepsilon _2 \in [-1,1]\). We are now using notation \([\varvec{x}_0]\), denoting an outer-approximation. Indeed, the abstraction is conservative: \([\varvec{x}_0](-1,\varvec{\beta })\) takes its values in \(\frac{1}{9}[-1,4]\), while the exact range of \(\varvec{x}_0(-1,\beta )\) for \(\beta \in [\frac{1}{3},1]\) is \(\frac{1}{9}[0,4]\).
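The squaring step of this example can be replayed with a minimal affine-arithmetic sketch (exact rationals, and a single nonlinear rewriting rule; a real implementation such as the one described in [9] handles general operations and outward rounding):

```python
# Affine-arithmetic sketch of Example 5: compute [x0](-1, beta) = (1 - beta)^2
# with beta = 2/3 + 1/3 * eps1.  An affine form is a dict {0: center,
# i: coefficient of eps_i}; squaring rewrites eps1^2 in [0, 1] as
# 1/2 + 1/2 * eps_fresh, introducing a fresh noise symbol.

from fractions import Fraction as F

beta_c, beta_d = F(2, 3), F(1, 3)        # beta = 2/3 + 1/3 * eps1
c, d = 1 - beta_c, -beta_d               # 1 - beta = 1/3 - 1/3 * eps1

def square_1sym(c, d, fresh):
    # (c + d*eps1)^2 = c^2 + 2cd*eps1 + d^2*eps1^2,  eps1^2 -> 1/2 + eps'/2
    return {0: c * c + d * d * F(1, 2), 1: 2 * c * d, fresh: d * d * F(1, 2)}

x0 = square_1sym(c, d, fresh=2)          # [x0](-1, beta)

def rng(a):                              # interval concretization of the form
    rad = sum(abs(v) for k, v in a.items() if k != 0)
    return (a[0] - rad, a[0] + rad)

print(x0, rng(x0))   # center 1/6 = (1/9)*1.5; enclosure (-1/9, 4/9)
```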

Now, we can represent the initial solution for \(t \in [t_0,t_0+\tau ]\) of the DDE (1) as a Taylor model in time with zonotopic coefficients, by evaluating in affine arithmetic the coefficients of its Taylor model (7). Noting \(\varvec{r}_{0j}=[t_0+jh,t_0+(j+1)h]\), we write, for all \(j=0,\ldots ,p-1\),

$$\begin{aligned}{}[\varvec{z}](t) = \sum _{l=0}^{k-1} (t-t_0-jh)^l [\varvec{z}_{0j}]^{[l]} + (t-t_0-jh)^{k} [\overline{\varvec{z}}_{0j}]^{[k]}, \; t \in \varvec{r}_{0j} \end{aligned}$$
(8)

where the Taylor coefficients

$$\begin{aligned}{}[\varvec{z}_{0j}]^{[l]} := \frac{[\varvec{z}_0]^{(l)}(t_0+jh,\varvec{\beta })}{l!}, \;\; [\overline{\varvec{z}}_{0j}]^{[l]} := \frac{[\varvec{z}_0]^{(l)}(\varvec{r}_{0j},\varvec{\beta })}{l!} \end{aligned}$$
(9)

can be computed by differentiating the initial solution with respect to t (\([\varvec{z}_0]^{(l)}\) denotes the l-th time derivative), and evaluating the result in affine arithmetic.

Example 6

Suppose we want to build a Taylor model of order \(k=2\) for the initial condition in Example 1 on a grid of step size \(h=1/3\). Consider the Taylor model for the first step \([t_0,t_0+h]=[-1,-2/3]\): we need to evaluate \([\varvec{x}_{00}]^{[0]}=[\varvec{x}_0](-1,\varvec{\beta })\), which was done in Example 5.

We also need \([\varvec{x}_{00}]^{[1]}\) and \([\overline{\varvec{x}_{00}}]^{[2]}\). We compute \([\varvec{x}_{00}]^{[1]} = [\dot{x}_0](-1,\varvec{\beta }) = 2 \varvec{\beta } (1 - \varvec{\beta }) \) and \([\overline{\varvec{x}_{00}}]^{[2]} = [\varvec{x}_0]^{(2)}(\varvec{r}_{00}) / 2 = [\ddot{\varvec{x}}_0](\varvec{r}_{00}) / 2 = \varvec{\beta }^2\), with \(\varvec{\beta }=\frac{2}{3}+\frac{1}{3}\varepsilon _1\). We evaluate these coefficients with affine arithmetic, similarly to Example 5.
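Because \(x_0\) is quadratic in t, the order-2 model with remainder coefficient \(\beta ^2\) is in fact exact on this step, which is easy to confirm numerically:

```python
# Check that the order-2 Taylor model of Example 6, centered at t0 = -1 with
# remainder coefficient beta^2, reproduces x0(t, beta) = (1 + beta*t)^2
# exactly on [-1, -2/3] (up to float rounding), for sampled t and beta.

def x0(t, b):
    return (1 + b * t)**2

def taylor2(t, b, t0=-1.0):
    c0 = x0(t0, b)                 # [x00]^[0] = (1 - beta)^2
    c1 = 2 * b * (1 + b * t0)      # [x00]^[1] = 2 beta (1 - beta)
    c2 = b * b                     # [x00bar]^[2] = beta^2
    return c0 + c1 * (t - t0) + c2 * (t - t0)**2

max_err = max(abs(taylor2(-1 + k / 300, b) - x0(-1 + k / 300, b))
              for k in range(101)             # t in [-1, -2/3]
              for b in (1/3, 0.5, 0.75, 1.0)) # sampled beta in [1/3, 1]
print(max_err)   # zero up to rounding
```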

3.4 Constructing Flowpipes

The abstract Taylor models (8) introduced in Sect. 3.3 define piecewise outer-approximating flowpipes of the solution on \([t_0,t_0+\tau ]\). Using the method of steps, and plugging into (1) the solution computed on \([t_0+(i-1)\tau ,t_0+i\tau ]\), the solution of (1) can be computed by solving the sequence of ODEs

$$\begin{aligned} \dot{z}(t) = f(z(t), z(t-\tau ),\beta ), \hbox { for } t \in [t_0+i\tau ,t_0+(i+1)\tau ] \end{aligned}$$
(10)

where the initial condition \(z(t_0+i\tau )\), and \(z(t-\tau )\) for t in \([t_0+i\tau ,t_0+(i+1)\tau ]\), are fully defined by (8) when \(i=1\), and by the solution of (10) at the previous step when i is greater than 1.

Let the set of the solutions of (10) at time t and for the initial conditions \(z(t') \in \varvec{z}'\) at some initial time \(t' \ge t_0\) be denoted by \(\varvec{z}(t, t', \varvec{z}')\). Using a Taylor method for ODEs, we can compute flowpipes that are guaranteed to contain the reachable set of the solutions \(\varvec{z}(t, t_0+\tau , [\varvec{z}](t_0+\tau ))\) of (10), for all times t in \([t_0+\tau ,t_0+2\tau ]\), with \([\varvec{z}](t_0+\tau )\) given by the evaluation of the Taylor model (8). This can be iterated for further steps of length \(\tau \), solving (10) for \(i=1,\ldots ,T/\tau \), with an initial condition given by the evaluation of the Taylor model for (10) at the previous step.

We now detail the algorithm that results from this principle. Flowpipes are built using two levels of grids. At each step on the coarser grid with step size \(\tau \), we define a new ODE. We build the Taylor models for the solution of this ODE on the finer grid of integration step size \(h=\tau /p\). We note \(t_i = t_0+i\tau \) the points of the coarser grid, and \(t_{ij} = t_0+i\tau +jh\) the points of the finer grid. In order to compute the flowpipes in a piecewise manner on this grid, the Taylor method relies on Algorithm 1. All Taylor coefficients, as well as Taylor expansion evaluations, are computed in affine arithmetic.

[Algorithm 1: piecewise construction of the outer flowpipes on the two-level time grid (listing omitted)]

Step 1: Computing an a Priori Enclosure. We need an a priori enclosure \([\overline{\varvec{z}}_{ij}]\) of the solution z(t), valid on the time interval \([t_{ij},t_{i(j+1)}]\). This is done by a straightforward extension of the classical approach [26] for ODEs relying on the interval Picard-Lindelöf method, applied to Eq. (10) on \([t_{ij},t_{i(j+1)}]\) with initial condition \([\varvec{z}_{ij}]\). If \([\varvec{f}]\) is Lipschitz, the natural interval extension \([\varvec{F}]\) of the Picard-Lindelöf operator, defined by \([\varvec{F}](\varvec{z})=[\varvec{z}_{ij}]+[t_{ij},t_{i(j+1)}][\varvec{f}](\varvec{z},[\overline{\varvec{z}_{i(j-1)}}],\varvec{\beta })\), where the enclosure \([\overline{\varvec{z}_{i(j-1)}}]\) of the solution over \(\varvec{r}_{i(j-1)}=[t_{i(j-1)},t_{ij}]\) has already been computed, admits a unique fixpoint. A simple Jacobi-like iteration, \(\varvec{z}_0=[\varvec{z}_{ij}]\), \(\varvec{z}_{l+1}=[\varvec{F}](\varvec{z}_l)\) for all \(l \in \mathbb {N}\), suffices to reach this fixpoint, which yields \([\overline{\varvec{z}_{ij}}]\) and ensures the existence and uniqueness of a solution to (10) on \([t_{ij},t_{i(j+1)}]\). However, it may be necessary to reduce the step size.
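This iteration can be sketched with plain float interval arithmetic on the first integration step of Example 7 (no outward rounding, and the inflation heuristic is an arbitrary choice, so the sketch is illustrative rather than validated):

```python
# Interval Picard-Lindelof sketch for the first step of Example 7: find an
# a priori enclosure z on [0, 1/3] such that
#   F(z) = 1 + [0, h] * f(z, x(r00), beta)   with  f(x, x_del) = -x * x_del
# satisfies F(z) subset of z.  Intervals are pairs (lo, hi) of floats.

def imul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def inside(a, b): return b[0] <= a[0] and a[1] <= b[1]

beta, r00 = (1/3, 1.0), (-1.0, -2/3)
base = iadd((1.0, 1.0), imul(beta, r00))   # 1 + beta*r00  -> [0, 7/9]
x_del = imul(base, base)                   # x(t - tau) on r00 -> [0, 49/81]

def F(z):
    fz = imul((-1.0, -1.0), imul(z, x_del))       # f(z, x_del) = -z * x_del
    return iadd((1.0, 1.0), imul((0.0, 1/3), fz)) # 1 + [0, 1/3] * f

z = (1.0, 1.0)                             # start from the initial value
for _ in range(100):
    fz = F(z)
    if inside(fz, z):                      # post-fixpoint reached
        break
    w = 0.1 * (fz[1] - fz[0]) + 1e-6       # slight inflation heuristic
    z = (min(z[0], fz[0]) - w, max(z[1], fz[1]) + w)

print(z, inside(F(z), z))                  # an a priori enclosure on [0, 1/3]
```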

Step 2: Building the Taylor Model. A Taylor expansion of order k of the solution at \(t_{ij}\) which is valid on the time interval \([t_{ij},t_{i(j+1)}]\), for \(i \ge 1\), is

$$\begin{aligned}{}[\varvec{z}](t,t_{ij},[{\varvec{z}}_{ij}]) = [{\varvec{z}}_{ij}] + \sum _{l=1}^{k-1} (t-t_{ij})^l [\varvec{f}_{ij}]^{[l]} + (t-t_{ij})^k [\overline{\varvec{f}_{ij}}]^{[k]}. \end{aligned}$$
(11)

The Taylor coefficients are defined inductively, and can be computed by automatic differentiation, as follows:

$$\begin{aligned} \left[ \varvec{f}_{ij}\right] ^{[1]}= & {} \left[ \varvec{f}\right] \left( [{\varvec{z}}_{ij}],[{\varvec{z}}_{(i-1)j}],\varvec{\beta }\right) \end{aligned}$$
(12)
$$\begin{aligned} {\left[ \varvec{f}_{1j}\right] }^{[l+1]}= & {} \frac{1}{l+1} \left( \left[ \frac{\partial {\varvec{f}^{[l]}}}{\partial z}\right] \left[ \varvec{f}_{1j}\right] ^{[1]} + \left[ \frac{\partial {\varvec{f}^{[l]}}}{\partial z^{\tau }}\right] \left[ \varvec{z}_{0j}\right] ^{[1]}\right) \end{aligned}$$
(13)
$$\begin{aligned} {\left[ \varvec{f}_{ij}\right] }^{[l+1]}= & {} \frac{1}{l+1} \left( \left[ \frac{\partial {\varvec{f}^{[l]}}}{\partial z}\right] \left[ \varvec{f}_{ij}\right] ^{[1]} + \left[ \frac{\partial {\varvec{f}^{[l]}}}{\partial z^{\tau }}\right] \left[ \varvec{f}_{(i-1)j}\right] ^{[1]} \right) \;\; \hbox { if } i \ge 2 \end{aligned}$$
(14)

The Taylor coefficients for the remainder term are computed in a similar way, evaluating \([\varvec{f}]\) over the a priori enclosure of the solution on \(\varvec{r}_{ij}=[t_{ij},t_{i(j+1)}]\). For instance, \([\overline{\varvec{f}_{ij}}]^{[1]} = [\varvec{f}]([\overline{{\varvec{z}}_{ij}}],[\overline{{\varvec{z}}_{(i-1)j}}],\varvec{\beta })\). The derivatives can be discontinuous at \(t_{i0}\): the \({[\varvec{f}_{i0}]}^{[l]}\) coefficients correspond to the right-handed limit, at time \(t_{i0}^+\).

Let us detail the computation of the coefficients (12), (13) and (14). Let z(t) be the solution of (10). By definition, \(\frac{d z}{d t}(t) = f(z(t),z(t-\tau ),\beta ) = f^{[1]}(z(t),z(t-\tau ),\beta )\), from which we deduce the set valued version (12). We can prove (14) by induction on l. Let us denote by \({\partial z}\) the partial derivative with respect to z(t), and by \({\partial z^{\tau }}\) the partial derivative with respect to the delayed function \(z(t-\tau )\). We have

$$\begin{aligned} \begin{array}{rclcl} f^{[l+1]}(z(t),z(t-\tau ),\beta ) &{}=&{} \frac{1}{(l+1)!} \frac{d^{(l+1)}z}{dt^{(l+1)}}(t) = \frac{1}{l+1} \frac{d}{dt}\left( f^{[l]}(z(t),z(t-\tau ),\beta )\right) \\ &{} = &{} \frac{1}{l+1} \left( \dot{z}(t) \frac{\partial f^{[l]}}{\partial z} + \dot{z}(t-\tau ) \frac{\partial f^{[l]}}{\partial z^{\tau }} \right) \\ &{} = &{} \frac{1}{l+1} \left( f(z(t),z(t-\tau ),\beta ) \frac{\partial f^{[l]}}{\partial z} + \right. \\ &{}&{} \left. f(z(t-\tau ),z(t-2\tau ),\beta ) \frac{\partial f^{[l]}}{\partial z^{\tau }} \right) \end{array} \end{aligned}$$

from which we deduce the set-valued version (14). For \(t \in [t_0+\tau , t_0 + 2\tau ]\), the only difference is that \(\dot{z}(t-\tau )\) is obtained by differentiating the initial solution of the DDE on \([t_0,t_0+\tau ]\), which yields (13).
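As a concrete illustration of this recursion, truncated Taylor-series arithmetic can play the role of automatic differentiation. The sketch below (our own illustrative Python, not the paper's implementation; the order K and the right-hand side are our choices) builds the solution coefficients in the spirit of (12) and (14) for a scalar DDE of the form \(\dot{x}(t) = -x(t)x(t-\tau)\), assuming the series of the delayed solution on the previous delay interval is already known:

```python
# Truncated Taylor-series arithmetic as a stand-in for automatic differentiation.
# Series are lists [c0, c1, ..., c_{K-1}] of coefficients around t_ij.

K = 5  # truncation order (any small order works the same way)

def smul(a, b):
    # product of two series truncated at order K
    c = [0.0] * K
    for i in range(K):
        for j in range(K - i):
            c[i + j] += a[i] * b[j]
    return c

def f_series(z, ztau):
    # right-hand side lifted to series: f(z, z_tau) = -z * z_tau
    return [-c for c in smul(z, ztau)]

def taylor_step(z0, ztau):
    # Coefficient l of f_series(z, ztau) only involves z[0..l], so the solution
    # series can be built inductively: z[l+1] = f[l] / (l+1), as in (12)-(14).
    z = [z0] + [0.0] * (K - 1)
    for l in range(K - 1):
        z[l + 1] = f_series(z, ztau)[l] / (l + 1)
    return z

# sanity check: a constant delayed solution z_tau = 2 gives z' = -2z,
# whose series around 0 is z0 * exp(-2s), i.e. coefficients z0 * (-2)^l / l!
coeffs = taylor_step(1.0, [2.0, 0.0, 0.0, 0.0, 0.0])
print(coeffs)  # [1.0, -2.0, 2.0, -1.333..., 0.666...]
```

In the set-valued setting of the paper, the same recursion is run in interval or affine arithmetic instead of floating point.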

Example 7

As in Example 6, we build the first step of the Taylor model of order \(k=2\) on the system of Example 1. We consider \(t\in [t_0+\tau ,t_0+2\tau ]\), on a grid of step size \(h=1/3\). Let us build the Taylor model on \([t_0+\tau ,t_0+\tau +h]=[0,1/3]\): we need to evaluate \([\varvec{x}_{10}]\), \([\varvec{f}_{10}]^{[1]}\) and \([\overline{\varvec{f}}_{10}]^{[2]}\) in affine arithmetic.

Following Algorithm 1, \([\varvec{x}_{10}]=[\varvec{x}_0](t_{10},\varvec{\beta })=[\varvec{x}_0](t_0+\tau ,\varvec{\beta })=[\varvec{x}_0](0,\varvec{\beta })=1\). Using (12) and the computation of \([\varvec{x}_{00}]\) of Example 5, \([\varvec{f}_{10}]^{[1]} = [\varvec{f}]([\varvec{x}_{10}],[\varvec{x}_{00}]) = [\varvec{f}](1,\frac{1}{9}(1.5-2\varepsilon _1+0.5\varepsilon _2)) = - \frac{1}{9}(1.5-2\varepsilon _1+0.5\varepsilon _2)\). Finally, using (13), \([\overline{\varvec{f}}_{10}]^{[2]} = 0.5 \dot{f}(\varvec{r}_{10},\varvec{r}_{00})\), where \(\varvec{r}_{i0}\), for \(i=0,1\) (with \(\varvec{r}_{00}=\varvec{r}_{10}-\tau \)), is the time interval of width h equal to \([t_{i0},t_{i1}]=[-1+i,-1+i+1/3]\), and \(\dot{f}(t,t-\tau )= -\dot{x}(t) x(t-\tau ) - x(t) \dot{x}(t-\tau ) = -f(t,t-\tau ) x(t-\tau ) - x(t) \dot{x}_0(t-\tau ) = x(t)x(t-\tau )^2 - 2 x(t) \beta (1+\beta (t-\tau ))\). Thus, \([\overline{\varvec{f}}_{10}]^{[2]} = 0.5 [\varvec{x}(\varvec{r}_{10})] [\varvec{x}(\varvec{r}_{00})]^2 - [\varvec{x}(\varvec{r}_{10})] \varvec{\beta } (1+\varvec{\beta } \varvec{r}_{00})\). We need enclosures for \(\varvec{x}(\varvec{r}_{00})\) and \(\varvec{x}(\varvec{r}_{10})\) to compute this expression. Enclosure \([\varvec{x}(\varvec{r}_{00})]\) is directly obtained as \([\varvec{x}_0](\varvec{r}_{00})=(1+\varvec{\beta } \varvec{r}_{00})^2\), evaluated in affine arithmetic. Evaluating \([\varvec{x}(\varvec{r}_{10})]\) requires computing an a priori enclosure of the solution on the interval \(\varvec{r}_{10}\), following the approach described as Step 1 in Algorithm 1. The Picard-Lindelöf operator is \([\varvec{F}](\varvec{x})=[\varvec{x}_{10}]+[0,\frac{1}{3}] [\varvec{f}](\varvec{x},[\varvec{x}(\varvec{r}_{00})],\varvec{\beta })=1-[0,\frac{1}{3}] (1+ \varvec{\beta } \varvec{r}_{00})^2 \varvec{x}\). 
We evaluate it in interval rather than affine arithmetic for simplicity: \([\varvec{F}](\varvec{x})= 1 - [0,\frac{1}{3}] \left( 1+[\frac{1}{3},1] [-1,-\frac{2}{3}]\right) ^2 \varvec{x}= 1 + [-\frac{7^2}{3^5},0] \varvec{x}\). Starting with \(\varvec{x}_0=[\varvec{x}_{10}]=1\), we compute \(\varvec{x}_1=[\varvec{F}](1) = [1-\frac{7^2}{3^5},1]\), then \(\varvec{x}_2= [\varvec{F}](\varvec{x}_1)= [1-\frac{7^2}{3^5},1] = \varvec{x}_1\): the iteration reaches a fixpoint with \([\varvec{F}](\varvec{x}_1) \subseteq \varvec{x}_1\), which is thus a valid finite a priori enclosure.
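The a priori enclosure step is easy to reproduce numerically. Note that with \(f(x, x^\tau) = -x\,x^\tau\), the operator contracts downward, \([\varvec{F}](\varvec{x}) = 1 - [0,\frac{1}{3}](1+\varvec{\beta }\varvec{r}_{00})^2\varvec{x}\), and the iteration stops as soon as \([\varvec{F}](\varvec{x}) \subseteq \varvec{x}\). The sketch below is our own illustrative Python in naive floating-point interval arithmetic, without the outward rounding a sound implementation would need:

```python
# Picard-Lindelof iteration for the a priori enclosure of Example 7
# (illustrative sketch, no outward rounding).

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def ineg(a):
    return (-a[1], -a[0])

one  = (1.0, 1.0)
beta = (1/3, 1.0)        # uncertain parameter
r00  = (-1.0, -2/3)      # delayed time interval r_00 = r_10 - tau

g    = iadd(one, imul(beta, r00))     # 1 + beta*r00 = [0, 7/9]
coef = imul((0.0, 1/3), imul(g, g))   # [0, 1/3] (1 + beta*r00)^2 = [0, 7^2/3^5]

def picard(x):
    return iadd(one, ineg(imul(coef, x)))   # [F](x) = 1 - coef * x

x = one
for _ in range(100):
    y = picard(x)
    if x[0] <= y[0] and y[1] <= x[1]:   # [F](x) inside x: enclosure found
        break
    # hull plus a tiny epsilon-inflation so the inclusion test can succeed
    x = (min(x[0], y[0]) - 1e-12, max(x[1], y[1]) + 1e-12)

print(x)  # close to [1 - 7^2/3^5, 1] = [194/243, 1]
```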

Remark

A fixed step size yields a simpler algorithm. However, it is possible to use a variable step size, with an additional interpolation of the Taylor models.

4 Inner-Approximating Flowpipes

We will now use Theorem 1 in order to compute inner-approximating flowpipes from outer-approximating flowpipes, extending the work [16] for ODEs to the case of DDEs. The main idea is to instantiate in this theorem the function f as the solution \(z(t,\beta )\) of our uncertain system (1) for all t, and \(\varvec{x}\) as the range \(\varvec{\beta }\) of the uncertain parameters. For this, we need to compute an outer-approximation of \(z(t,\tilde{\beta })\) for some \(\tilde{\beta }\in \varvec{\beta }\), and of its Jacobian matrix with respect to \(\beta \) at any time t and over the range \(\varvec{\beta }\). We follow the approach described in Sect. 3.4.

Outer-Approximation of the Jacobian Matrix Coefficients. For the DDE (1) in arbitrary dimension \(n \in \mathbb {N}\) and with parameter dimension \(m\in \mathbb {N}\), the Jacobian matrix of the solution \(z=(z_1,\ldots ,z_n)\) of this system with respect to the parameters \(\beta =(\beta _1,\ldots ,\beta _m)\) is

$$J_{ij}(t)=\frac{\partial z_i}{\partial \beta _{j}}(t)$$

for i between 1 and n, j between 1 and m. Differentiating (1), we obtain that the coefficients of the Jacobian matrix of the flow satisfy

$$\begin{aligned} \dot{J}_{ij}(t) = \sum \limits _{k=1}^n \frac{\partial f_i}{\partial z_k}(t) J_{kj}(t)+ \sum \limits _{k=1}^n \frac{\partial f_i}{\partial z_k^{\tau }}(t) J_{kj}(t-\tau ) + \frac{\partial f_i}{\partial \beta _j}(t) \end{aligned}$$
(15)

with initial condition \(J_{ij}(t)=(J_{ij})_0(t,\beta )= \frac{\partial (z_i)_0}{\partial \beta _j}(t,\beta )\) for \(t \in [t_0,t_0+\tau ]\).

Example 8

The Jacobian matrix for Example 1 is a scalar since the DDE is real-valued and the parameter is scalar. We easily get \( \dot{J}_{11}(t)=-x(t-\tau )J_{11}(t)-x(t)J_{11}(t-\tau ) \) with initial condition \((J_{11})_0(t,\beta ) = 2t (1 + \beta t)\).

Equation (15) is a DDE of the same form as (1). We can thus use the method introduced in Sect. 3.4, and use Taylor models to compute outer-approximating flowpipes for the coefficients of the Jacobian matrix.
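Before switching to the set-valued setting, the variational equation of Example 8 can be checked pointwise: integrating the DDE together with Eq. (15) by a simple Euler scheme with a history buffer (an illustrative sketch of our own, with values we pick ourselves, not the paper's Taylor-model method), the resulting J(T) matches a central finite difference of the solution with respect to \(\beta\):

```python
# Pointwise sanity check of the variational DDE (15) on Example 8:
# x'(t) = -x(t) x(t - tau), J'(t) = -x(t-tau) J(t) - x(t) J(t-tau),
# with x_0(t) = (1 + beta t)^2 and J_0(t) = 2t(1 + beta t).

tau, beta, T = 1.0, 0.5, 2.0   # our own illustrative values
n_per_tau = 2000               # grid points per delay interval
dt = tau / n_per_tau

def solve(b):
    # history on [-tau, 0], sampled on the grid
    ts = [-tau + i * dt for i in range(n_per_tau + 1)]
    xs = [(1 + b * t) ** 2 for t in ts]
    Js = [2 * t * (1 + b * t) for t in ts]
    for _ in range(int(round(T / dt))):
        x, xd = xs[-1], xs[-1 - n_per_tau]   # current / delayed solution
        J, Jd = Js[-1], Js[-1 - n_per_tau]   # current / delayed Jacobian
        xs.append(x + dt * (-x * xd))             # Euler step on the DDE
        Js.append(J + dt * (-xd * J - x * Jd))    # Euler step on Eq. (15)
    return xs[-1], Js[-1]

x_T, J_T = solve(beta)
h = 1e-5
fd = (solve(beta + h)[0] - solve(beta - h)[0]) / (2 * h)
print(J_T, fd)  # the two values should agree closely
```

The agreement holds because the Euler scheme for J is exactly the derivative, with respect to \(\beta\), of the Euler scheme for x.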

Computing Inner-Approximating Flowpipes. As in the ODE case [16], the algorithm that computes inner-approximating flowpipes first uses Algorithm 1 to compute outer-approximations, on each time interval \([t_{ij},t_{i(j+1)}]\), of:

  1. the solution \(z(t,\tilde{\beta })\) of the system starting from the initialization function \(z_0(t,\tilde{\beta })\) defined by a given \(\tilde{\beta } \in \varvec{\beta }\);

  2. the Jacobian \(J(t,\beta )\) of the solution, for all \(\beta \in \varvec{\beta }\).

Then, we can deduce inner-approximating flowpipes by using Theorem 1. As in Definition 3, let \(\beta =(\beta _\mathcal{A},\beta _\mathcal{E})\), and denote by \(J_\mathcal{A}\) the matrix obtained by extracting the columns of the Jacobian corresponding to the partial derivatives with respect to \(\beta _\mathcal{A}\), and by \(J_\mathcal{E}\) the remaining columns. If the quantity defined by Eq. (16) for t in \([t_{ij},t_{i(j+1)}]\) is an improper interval

$$\begin{aligned} ]\varvec{z}[_\mathcal{A}(t,t_{ij},\varvec{\beta }_\mathcal{A},\varvec{\beta }_\mathcal{E}) = [\varvec{z}](t,t_{ij},[\tilde{\varvec{z}}_{ij}]) + [\varvec{J}]_\mathcal{A}(t,t_{ij},[{\varvec{J}}_{ij}])(\varvec{\beta }_\mathcal{A}-\tilde{\beta }_\mathcal{A}) \nonumber \\ + [\varvec{J}]_\mathcal{E}(t,t_{ij},[{\varvec{J}}_{ij}])(\text {dual }\varvec{\beta }_\mathcal{E}-\tilde{\beta }_\mathcal{E}) \end{aligned}$$
(16)

then the interval \(\left( \text {pro }]\varvec{z}[_\mathcal{A}(t,t_{ij},\varvec{\beta }_\mathcal{A},\varvec{\beta }_\mathcal{E})\right) \) is an inner-approximation of the reachable set \( \varvec{z}(t,\varvec{\beta })\), valid on the time interval \([t_{ij},t_{i(j+1)}]\), which is robust with respect to the parameters \(\beta _\mathcal{A}\) in the sense of Definition 3. Otherwise, the inner-approximation is empty. If all parameters are existentially quantified, that is, if the subset \(\beta _\mathcal{A}\) is empty, we obtain the classical inner-approximation of Definition 2. Note that a single computation of the center solution \([\tilde{\varvec{z}}]\) and of the Jacobian matrix \([\varvec{J}]\) can be reused to derive both classical and robust inner-approximations. With this computation, the robust inner flowpipes will always be included in the classical inner flowpipes.
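On a one-dimensional toy map (our own example, not from the paper), the mechanics of Eq. (16) reduce to a mean-value evaluation in Kaucher arithmetic. The sketch below implements only the two Kaucher product cases needed here, a proper interval not containing 0 times an improper one, and recovers an inner-approximation of the range of \(f(\beta) = \beta^2\) over \(\beta \in [1,2]\):

```python
# Mean-value inner-approximation in Kaucher arithmetic, in the spirit of (16),
# on the toy map f(beta) = beta^2 over beta in [1, 2] (illustrative example).

def kmul_proper_improper(x, y):
    # x = (x1, x2) proper with 0 not in x; y = (y1, y2) improper, y1 >= 0 >= y2
    x1, x2 = x
    y1, y2 = y
    if x1 > 0:                      # x positive: the smallest magnitude x1 rules
        return (x1 * y1, x1 * y2)
    return (x2 * y2, x2 * y1)       # x negative

beta = (1.0, 2.0)
bt = 1.5                                  # center value beta~
z_center = bt * bt                        # tight evaluation of f(beta~) = 2.25
J = (2.0, 4.0)                            # outer enclosure of f'([1,2]) = 2*beta
dual_dev = (beta[1] - bt, beta[0] - bt)   # dual(beta - beta~) = [0.5, -0.5]

prod = kmul_proper_improper(J, dual_dev)          # (1.0, -1.0), improper
inner = (z_center + prod[0], z_center + prod[1])  # (3.25, 1.25), still improper
assert inner[0] >= inner[1]    # improper result: the inner-approximation is nonempty
pro_inner = (inner[1], inner[0])
print(pro_inner)  # (1.25, 3.25), inside the true range [1, 4] of f
```

The wider the Jacobian enclosure, the smaller the magnitude of its lower endpoint, and thus the narrower the resulting inner-approximation, in line with the discussion below.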

The computation of the inner-approximations fully relies on the outer-approximations at each time step. A consequence is that we can soundly implement most of our approach using classical interval-based methods: outward rounding should be used for the outer approximations of flows and Jacobians. Only the final computation by Kaucher arithmetic of improper intervals should be done with inward rounding in order to get a sound computation of the inner-approximation.

Also, the wider the outer-approximations in Taylor models of the center and of the Jacobian, the tighter, and thus the less accurate, the inner-approximation. This can lead to an empty inner-approximation if the result of Eq. (16) in Kaucher arithmetic is not an improper interval. This can occur in two ways. Firstly, the Kaucher multiplication \([\varvec{J}]_\mathcal{E}(\text {dual }\varvec{\beta }_\mathcal{E}-\tilde{\beta }_\mathcal{E})\) in (16) yields a non-zero improper interval only if the Jacobian coefficients do not contain 0. Secondly, suppose that the Kaucher multiplication yields an improper interval. It is added to the proper interval \([\varvec{z}](t,t_{ij},[\tilde{\varvec{z}}_{ij}]) + [\varvec{J}]_\mathcal{A}(\varvec{\beta }_\mathcal{A}-\tilde{\beta }_\mathcal{A})\). The center solution \([\varvec{z}](t,t_{ij},[\tilde{\varvec{z}}_{ij}])\) can be tightly estimated, but the term \( [\varvec{J}]_\mathcal{A}(\varvec{\beta }_\mathcal{A}-\tilde{\beta }_\mathcal{A})\), which measures robustness with respect to the \(\beta _\mathcal{A}\) parameters, can lead to a wide enclosure. If this sum is wider than the improper interval resulting from the Kaucher multiplication, then the resulting Kaucher sum is proper and the inner-approximation is empty.
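The second failure mode can be seen on plain numbers (illustrative values of our own, not from the paper): Kaucher addition is componentwise on (lower, upper) pairs, so a wide proper robustness term can absorb the improper product and make the sum proper:

```python
# A wide proper robustness term makes the Kaucher sum proper, i.e. the robust
# inner-approximation empty (illustrative values).

def kadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

improper = (1.0, -1.0)    # [J]_E (dual beta_E - beta~_E): an improper product
narrow   = (-0.4, 0.4)    # [J]_A (beta_A - beta~_A) for a small beta_A range
wide     = (-1.5, 1.5)    # the same term for a larger beta_A range

ok    = kadd(improper, narrow)   # (0.6, -0.6): still improper, nonempty inner set
empty = kadd(improper, wide)     # (-0.5, 0.5): proper, empty inner-approximation
print(ok, empty)
```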

5 Implementation and Experiments

We have implemented our method using the FILIB++ C++ library [23] for interval computations, the FADBAD++Footnote 2 package for automatic differentiation, and (a slightly modified version of) the aaflibFootnote 3 library for affine arithmetic.

Let us first consider the running example, with order 2 Taylor models and an integration step size of 0.05. Figure 1 left presents the results until \(t=2\) (obtained in 0.03 s) compared to the analytical solution (dashed lines): the solid external lines represent the outer-approximating flowpipe, and the filled region represents the inner-approximating flowpipe. Until time \(t=0\), the DDE is in its initialization phase, and the conservativeness of the outer-approximation is due to the abstraction in affine arithmetic of the set of initialization functions. Using higher-order Taylor models or refining the time step improves the accuracy. However, for the inner-approximation there is a specific difficulty: the Jacobian contains 0 at \(t=-1\), so that the inner-approximation is reduced to a point. This case corresponds to the parameter value \(\beta =1\). To address this problem, we split the initial parameter set into two sub-intervals of equal width, compute the inner and outer flowpipes independently for these two parameter ranges, and then join the results to obtain Fig. 1 center. It is somewhat counter-intuitive that we can obtain in this way a larger, thus better quality, inner-approximating set: the inner-approximation corresponds to the property that there exists a value of \(\beta \) in the parameter set such that a point of the tube is definitely reached, so taking a larger \(\beta \) parameter set would intuitively lead to a larger such inner tube. Here, however, the improvement comes in particular from avoiding the zero in the Jacobian. More generally, such a subdivision yields a tighter outer-approximation of the Jacobian, and thus better accuracy when using the mean-value theorem.

Fig. 1.
figure 1

Running example (Taylor model order 2, step size 0.05)

In order to obtain an inner-approximation without holes, we can use a subdivision of the parameters with some overlap. This is the case for instance using 10 subdivisions with 10% of overlap. Results are then much tighter: Fig. 1 right represents a measure \(\gamma (x,t)\) of the quality of the approximations (computed in 45 s) for a time horizon \(T=15\), with Taylor models of order 3 and a step size of 0.02. This accuracy measure \(\gamma (x,t)\) is defined by \( \gamma (x,t)=\frac{\gamma _u(x)}{\gamma _o(x)} \), where \(\gamma _u(x)\) and \(\gamma _o(x)\) measure respectively the width of the inner-approximation and of the outer-approximation for state variable x. Intuitively, the larger the ratio (bounded by 1), the better the approximation. Here, \(\gamma (x,t)\) almost stabilizes after some time, at a high accuracy of 0.975. We noted that on this example, the order of the Taylor model, the step size and the number of initial subdivisions all have a notable impact on the stabilized value of \(\gamma \): the remaining gap to 1 can here be made arbitrarily small.

Example 9

Consider a basic PD-controller for a self-driving car, controlling the car’s position x and velocity v by adjusting its acceleration depending on the current distance to a reference position \(p_r\), chosen here as \(p_r=1\). We consider a delay \(\tau \) in the transfer of the input data to the controller, due to sensing, computation or transmission times. This leads, for \(t\ge 0\), to:

$$\begin{aligned} {\left\{ \begin{array}{ll} {x}'(t) = v(t) &{}\\ {v}'(t) = -K_p \big ( x(t-\tau )-p_r \big ) - K_d\,v(t-\tau ) &{} \end{array}\right. } \end{aligned}$$

Choosing \(K_p=2\) and \(K_d=3\) guarantees the asymptotic stability of the controlled system when there is no delay. The system is initialized to a constant function \((x,v) \in [-0.1,0.1] \times [0,0.1]\) on the time interval \([-\tau ,0]\).

This example demonstrates that even small delays can have a huge impact on the dynamics. We represent in the left subplot of Fig. 2 the inner and outer approximating flowpipes for the velocity and position, with delay \(\tau =0.35\), until time \(T=10\). They are obtained in 0.32 s, using Taylor models of order 3 and a time step of 0.03. The parameters were chosen such that the inner-approximation always remains non-empty. We now study the robustness of the behavior of the system to the parameters: \(K_p\) and \(K_d\) are time-invariant, but now uncertain and known to be bounded by \((K_p,K_d) \in [1.95,2.05] \times [2.95,3.05]\). The Jacobian matrix is now of dimension \(2 \times 4\). We choose a delay \(\tau =0.2\), sufficiently small not to induce oscillations. Thanks to the outer-approximation, we prove that the velocity never becomes negative, in contrast to the case of \(\tau =0.35\), where it is proved to oscillate. In Fig. 2 center, we represent, along with the outer-approximation, the inner-approximation and a robust inner-approximation. The inner-approximation, in the sense of Definition 2, contains only states for which it is proved that there exist an initialization of the state variables x and v in \([-0.1,0.1] \times [0,0.1]\) and values of \(K_p\) and \(K_d\) in \([1.95,2.05] \times [2.95,3.05]\) such that these states are solutions of the DDE. The inner-approximation which is robust with respect to the uncertainty in \(K_p\) and \(K_d\), in the sense of Definition 3, contains only states for which it is proved that, whatever the values of \(K_p\) and \(K_d\) in \([1.95,2.05] \times [2.95,3.05]\), there exists an initialization of x and v in \([-0.1,0.1] \times [0,0.1]\) such that these states are solutions of the DDE. These results are obtained in 0.24 s, with order 3 Taylor models and a time step of 0.04. The robust inner-approximation is naturally included in the inner-approximation.
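The qualitative influence of the delay is easy to observe on one sampled trajectory (initial values picked by us inside the given boxes; a plain Euler simulation with a history buffer, not the paper's guaranteed set-based method):

```python
# One sampled trajectory of the delayed PD controller.

def simulate(tau, x0=0.0, v0=0.05, Kp=2.0, Kd=3.0, pr=1.0, T=10.0, n_per_tau=1000):
    dt = tau / n_per_tau
    xs = [x0] * (n_per_tau + 1)   # constant initial function on [-tau, 0]
    vs = [v0] * (n_per_tau + 1)
    for _ in range(int(round(T / dt))):
        xd, vd = xs[-1 - n_per_tau], vs[-1 - n_per_tau]  # delayed position/velocity
        xs.append(xs[-1] + dt * vs[-1])
        vs.append(vs[-1] + dt * (-Kp * (xd - pr) - Kd * vd))
    return xs, vs

_, v_small = simulate(tau=0.2)
_, v_large = simulate(tau=0.35)
print(min(v_small), min(v_large))
```

For \(\tau=0.2\) the sampled velocity stays (numerically) nonnegative and the position settles at \(p_r = 1\), consistent with the property proved by the outer-approximation.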

Fig. 2.
figure 2

Left and center: velocity and position of controlled car (left \(\tau =0.35\), center \(\tau =0.2\)); Right: vehicles position in the platoon example

We now demonstrate the efficiency of our approach and its good scaling behavior with respect to the dimension of the state space, by comparing our results with those of [30] on their seven-dimensional Example 3:

Example 10

Let \( \dot{x}(t) = f(x(t),x(t-\tau )), \; t \in [\tau ,T], \) with \(\tau =0.01\), where \(f(x(t),x(t-\tau )) = (1.4x_3(t) - 0.9x_1(t-\tau ), 2.5x_5(t) - 1.5x_2(t), 0.6x_7(t) -0.8x_3(t)x_2(t), 2-1.3x_4(t)x_3(t),0.7x_1(t) -x_4(t)x_5(t),0.3x_1(t) - 3.1x_6(t), 1.8x_6(t) - 1.5x_7(t)x_2(t))\), and the initial function is constant on \([-\tau ,0]\) with values in a boxFootnote 4 \([1.0, 1.2] \times [0.95, 1.15] \times [1.4, 1.6] \times [2.3, 2.5] \times [0.9, 1.1] \times [0.0, 0.2] \times [0.35, 0.55]\). We compute outer and inner approximations of the reachable sets of the DDE until time \(t=0.1\), and compare, for our method and for [30], the quality measures \(\gamma (x_1),\ldots ,\gamma (x_7)\) of the projections of the approximations onto each variable \(x_1\) to \(x_7\). We obtain for our work the measures 0.998, 0.996, 0.978, 0.964, 0.97, 0.9997, 0.961, to be compared to 0.575, 0.525, 0.527, 0.543, 0.477, 0.366, 0.523 for [30]. The results, computed with order 2 Taylor models, are obtained in 0.13 s with our method, and in 505 s with [30]. Our implementation is thus both much faster and much more accurate. However, this comparison should only be taken as a rough indication, as it is unfair to [30] to compare their inner boxes to our projections on each component.

Example 11

Consider now the model, adapted from [11], of a platoon of n autonomous vehicles. Vehicle \(C_{i+1}\) is just after \(C_i\), for \(i=1\) to \(n-1\). Vehicle \(C_1\) is the leading vehicle. Sensors of \(C_{i+1}\) measure its current speed \(v_{i+1}\) as well as the speed \(v_i\) of the vehicle just in front of it. Their respective positions are \(x_{i+1}\) and \(x_i\). We take a simple model where each vehicle \(C_{i+1}\) accelerates so as to catch up with \(C_i\) if it measures that \(v_i > v_{i+1}\), and acts on its brakes if \(v_i < v_{i+1}\). Because of communication, accelerations are delayed by some time constant \(\tau \):

$$\begin{aligned} \begin{array}{rcll} \dot{x}_i(t) &{} = &{} v_i(t) &{} i=2,\cdots ,n \\ \dot{v}_{i+1}(t) &{} = &{} \alpha (v_i(t-\tau )-v_{i+1}(t-\tau )) &{} i=2,\cdots ,n-1 \end{array} \end{aligned}$$

We add an equation defining the way the leading car drives. We suppose it adapts its speed between 1 and 3, following a polynomial curve. The acceleration of vehicle \(C_2\) must be adapted accordingly:

$$\begin{aligned} \begin{array}{rcl} \dot{x}_1(t) &{} = &{} 2+(x_1(t)/5-1)(x_1(t)/5-2)(x_1(t)/5-3)/6 \\ \dot{v}_2(t) &{} = &{} \alpha (2+(x_1(t)/5-1)(x_1(t)/5-2)(x_1(t)/5-3)/6-v_2(t-\tau ) ) \end{array} \end{aligned}$$

We choose \(\tau =0.3\) and \(\alpha =2.5\). The initial position before time 0 of car \(C_i\) is slightly uncertain, taken as \(-(i-1)+[-0.2,0.2]\), and its speed is in [1.99, 2.01]. We represent in the right subplot of Fig. 2 the inner and outer approximations of the positions of the vehicles in a 5-vehicle platoon (9-dimensional system) until time T = 10, with a time step of 0.1 and order 3 Taylor models, computed in 2.13 s. As the inner-approximations of different vehicles intersect, there are some unsafe initial conditions for which the vehicles will collide. This example allows us to demonstrate the good scaling of our method: for 10 vehicles (19-dimensional system) and with the same parameters, results are obtained in 6.5 s.

6 Conclusion

We have shown how to compute, efficiently and accurately, outer and inner flowpipes for DDEs with constant delay, using Taylor models combined with an efficient space abstraction. We have also introduced a notion of robust inner-approximation, which can be computed by the same method. We would like to extend this work to fully general DDEs, including variable delays, as well as to study further the use of such computations for property verification on networked control systems. Indeed, while testing is a weaker alternative to inner-approximation for property falsification, we believe that robust inner-approximation provides new tools towards robust property verification or control synthesis.