If time were a graph, what would evolution equations look like?

Abstract

Linear evolution equations are usually considered with the time variable defined on an interval, where typically initial conditions or time periodicity of solutions are required to single out particular solutions. Here, we would like to make a point of allowing time to be defined on a metric graph or network, where coupling conditions are imposed at the branching points, so that time can have ramifications and even loops. This not only generalizes the classical setting and allows for more freedom in the modeling of coupled and interacting systems of evolution equations, but it also provides a unified framework for initial value and time-periodic problems. For these time-graph Cauchy problems, questions of well-posedness and regularity of solutions for parabolic problems are studied, along with the question of which time-graph Cauchy problems cannot be reduced to an iteratively solvable sequence of Cauchy problems on intervals. Based on two different approaches, namely an application of the Kalton–Weis theorem on the sum of closed operators and an explicit computation of a Green’s function, we present the main well-posedness and regularity results. We further study some qualitative properties of solutions. While we mainly focus on parabolic problems, we also explain how other Cauchy problems can be studied along the same lines. This is exemplified by discussing coupled systems with constraints that are non-local in time, akin to periodicity.

Introduction

Time has classically been considered as a linear phenomenon, especially in western cultures. This has been clearly mirrored in the physical description of the world, all the way from ancient Greek philosophy to modern partial differential equations of mathematical physics. Many real world phenomena can be—more or less naively—modeled as abstract Cauchy problems

$$\begin{aligned} \partial _t \psi (t) - A\psi (t)=f(t), \end{aligned}$$
(1.1)

such as the heat, transport or Schrödinger equation. Classically, the time variable t is taken in a finite interval [0, a] or a half-line \([0,\infty )\), and a solution is singled out uniquely only once an initial condition \(\psi (0)=g\) is imposed. Here, for simplicity, one may have in mind a sectorial operator \(-A\) in a Hilbert space X.

The western philosophy has, ever since Aristotle [10] and perhaps Heraclitus, most commonly regarded time as a linear instance that allows one to order events according to the notions of before and after: Similar ideas are also typical in western monotheistic religions. It is folklore that, throughout the world, different cultures have had diverging approaches to the interpretation of time: Some religions of Indian origin—most notably Hinduism and Jainism, unlike Buddhism [4, 11]—postulate that time consists of ages featuring repeating patterns, leading to a cyclic existence described by the Kālacakra; but also the cosmological implications of the Xiuhmolpilli (52-year cycles in the Aztec calendar) or the Bak’tun (144,000-day cycles in the Maya calendar) suggest a cyclic understanding of time [35], with such cycles conveniently clocking existence. This does not necessarily lead to mathematical clashes: Indeed, if the time variable t is cyclic and hence lives in a torus \(\mathbb {S}^1\) or the full real line \(\mathbb {R}\), then looking for solutions of (1.1) amounts to inquiring about the existence of periodic solutions.

In each case the time domain is an oriented one-dimensional manifold; thus, there is a clear direction at each point in time and a well-defined time before and after it. Going beyond this, there are different perceptions of time expressed for instance in the multiverse interpretation of quantum mechanics or in the discussions on closed time-like curves in general relativity. More recently, the theoretical physicist Carlo Rovelli has been advocating the necessity of giving up even the weakly ordered structure offered by Albert Einstein’s conception of time. He writes in [38, Chapter 6]:

None of the pieces that time has lost (singularity, direction, independence, the present, continuity) puts into question the fact that the world is a network of events. On the one hand, there was time, with its many determinations; on the other, the simple fact that nothing is: things happen.

The absence of the quantity “time” in the fundamental equations does not imply a world that is frozen and immobile. On the contrary, it implies a world in which change is ubiquitous, without being ordered by Father Time; without innumerable events being necessarily distributed in good order, or along the single Newtonian time line, or according to Einstein’s elegant geometry.

In this article, we would like to invite the reader to participate in a thought experiment and to assume time not to consist of a one-dimensional manifold, but rather of a metric graph or network. Such ramified structures consist—roughly speaking—of intervals glued together at their endpoints and allow for more freedom in the modeling of evolutionary systems in real and in some possibly hypothetical applications. The purpose of this note is to widen the scope of classical evolution equations and to show how graphs can be used to model time evolution. The main idea and recurrent motif is to consider initial conditions as boundary conditions in time: We will make this more precise in the following.

We notice in passing that there do exist classical settings where the notion of one-dimensional time is generalized: In the context of analytic semigroups time is allowed to be in a sector of the complex plane as sketched in Fig. 1d. This has a plethora of pleasant mathematical consequences, but it is not evident how to make sense of it physically. Instead, we reckon that allowing time to live on network-like structures may have a practical interpretation as will be discussed in terms of examples.

Fig. 1: Classical time domains for evolution equations

From initial conditions to boundary conditions in time

To begin with, considering the classical cases illustrated in Fig. 1a–c one first notices that for the real line or the torus there are no initial conditions, and in fact adding initial conditions would over-determine the system. For a bounded interval or the half-line, the initial value problem can be decomposed using linearity into two separate problems

$$\begin{aligned} \begin{aligned} \partial _t \psi _f(t) - A\psi _f(t)&=f(t), \quad \psi (0)=0, \quad \hbox { and } \\ \partial _t \psi _0(t) - A\psi _0(t)&=0, \quad \, \psi (0)=g. \end{aligned} \end{aligned}$$
(1.2)

Both equations can be analyzed in terms of semigroup theory: If A generates a \(C_0\)-semigroup, then the mild solutions to these equations are given by the variation of constants formula and the semigroup, i.e.,

$$\begin{aligned} \psi _f(t)=\int _0^t e^{A(t-s)}f(s) \mathrm{d}s \quad \hbox { and } \quad \psi _0(t)=e^{tA}g, \end{aligned}$$

where the solution to the inhomogeneous initial value problem is \(\psi =\psi _f +\psi _0\) and the solution space depends on the regularity of the data.
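When A is simply a matrix (a bounded generator), the two formulas above can be tested numerically. The following sketch, a stand-in with a \(2\times 2\) matrix and a constant forcing (all concrete values are our invention), verifies that \(\psi =\psi _f+\psi _0\) satisfies the initial condition and the differential equation:

```python
import numpy as np
from scipy.linalg import expm

# Toy generator and data; for a matrix A the semigroup is e^{tA} = expm(t*A).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])  # invertible, det = 1
g = np.array([1.0, 2.0])                  # initial datum
c = np.array([0.3, -0.7])                 # constant forcing f(t) = c

def psi(t):
    # Variation of constants: psi_f(t) = int_0^t e^{A(t-s)} c ds
    # = A^{-1}(e^{tA} - 1)c for constant forcing, plus psi_0(t) = e^{tA} g.
    E = expm(t * A)
    psi_f = np.linalg.solve(A, (E - np.eye(2)) @ c)
    psi_0 = E @ g
    return psi_f + psi_0

# Check the initial condition psi(0) = g ...
assert np.allclose(psi(0.0), g)

# ... and the equation psi'(t) = A psi(t) + c via a central difference.
t, h = 0.8, 1e-6
dpsi = (psi(t + h) - psi(t - h)) / (2 * h)
print(np.allclose(dpsi, A @ psi(t) + c, atol=1e-6))  # True
```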

The problem on an interval (0, a) with periodicity conditions exhibits similarities with the first equation in (1.2), and it can be written as

$$\begin{aligned} \partial _t \psi (t) - A\psi (t)=f(t), \quad \psi (0)-\psi (a)=0. \end{aligned}$$
(1.3)

This already indicates which possible ‘initial conditions’—or rather ‘inhomogeneous boundary conditions in time’—can be imposed; namely, one can solve

$$\begin{aligned} \partial _t \psi (t) - A\psi (t)=0, \quad \psi (0)-\psi (a)=g. \end{aligned}$$
(1.4)

This means there is no freedom left for initial conditions, but one is free to choose any fixed jump condition \(\psi (0)-\psi (a)=g\), and the solution can be expressed (provided \(\mathbb {1}- e^{a \, A}\) is invertible) as

$$\begin{aligned} \psi _0(t) = e^{tA}(\mathbb {1}- e^{a\, A})^{-1}g \quad \hbox {for}\,t \in (0,a), \end{aligned}$$
(1.5)

which solves (1.4) on (0, a). This solution can be extended to the full real line; it then solves

$$\begin{aligned} \partial _t \psi (t) - A\psi (t)=0, \quad \psi (na)-\psi ((n+1)a)= e^{naA} g \quad \hbox {for } n \in \mathbb {Z}. \end{aligned}$$

In particular, this extension does not lift to a solution on the torus. So, in order to interpret time as a loop, one has to consider the periodic extension of (1.5). This is in general a discontinuous periodic function on \(\mathbb {R}\) which then lifts to a function on the torus.
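For a matrix stand-in A generating an exponentially stable semigroup, so that \(\mathbb {1}- e^{a\, A}\) is invertible, formula (1.5) and the jump condition of (1.4) can be checked numerically; a minimal sketch (the matrix and data are our invention):

```python
import numpy as np
from scipy.linalg import expm

# Stand-in generator with spectrum in the left half-plane, so that
# 1 - e^{aA} is invertible.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
a = 1.0
g = np.array([1.0, -1.0])

E = expm(a * A)                          # e^{aA}
c = np.linalg.solve(np.eye(2) - E, g)    # (1 - e^{aA})^{-1} g

def psi0(t):
    return expm(t * A) @ c               # formula (1.5)

# The fixed jump condition psi(0) - psi(a) = g from (1.4) holds:
print(np.allclose(psi0(0.0) - psi0(a), g))  # True
```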

The regularity of \(\psi _0\) given in (1.5) clearly depends on the regularity of \((\mathbb {1}- e^{a\, A})^{-1}g\) and therefore on g, as well as on the mapping properties of \((\mathbb {1}- e^{a\, A})^{-1}\). The usual notions of mild, strong and classical solutions defined edgewise in the same way as, for example, in [13, 16] can be naturally extended to the setting of metric graphs in time, see also Sect. 6.5.

Considering only the first equation in (1.2), it can be solved, instead of using the variation of constants formula, by means of operator theory: one finds realizations of \(\partial _t\) with initial condition \(\psi (0)=0\) such that the sum of closed operators \(\partial _t -A\) is invertible. For \(L^p\)-spaces in time this approach has been carried out successfully; the essential ingredient is the theorem of Kalton and Weis on sums of closed operators. Similarly, Eq. (1.3) can be solved by considering a periodic realization of the time derivative.

Time-graph Cauchy problem

Consider again evolution equations whose time domains are intervals, like in Fig. 1a–c: Under both initial and periodicity conditions they can be split into a part with forcing term but homogeneous boundary condition in time, and a part without forcing term but with inhomogeneous boundary condition in time. We therefore consider finitely many inhomogeneous evolution equations

$$\begin{aligned} (\partial _t - A_i) \psi _i = f_i \quad \hbox {on} \quad (0,a_i)\quad \hbox {for each} \quad i=1, \ldots , n, \end{aligned}$$

on time intervals of length \(a_i>0\), \(i=1,\ldots ,n\), where we assume that \(A_i\) are generators of analytic semigroups in Hilbert spaces \(X_i\), \(f_i\in L^2(0,a_i;X_i)\) are given, and the coupling is defined by

$$\begin{aligned} \begin{bmatrix} \psi _1(0) \\ \vdots \\ \psi _n(0) \end{bmatrix} - \mathbb {B}\begin{bmatrix} \psi _1(a_1) \\ \vdots \\ \psi _n(a_n) \end{bmatrix}= \begin{bmatrix} g_1\\ \vdots \\ g_n \end{bmatrix}, \end{aligned}$$
(1.6)

where \(\mathbb {B}\) is a bounded operator in \(X_1\oplus \cdots \oplus X_n\) which encodes the geometry of the graph by means of transmission conditions, and \(g_i\in X_i\) are given ‘inhomogeneous boundary conditions in time’ in analogy to the fixed jump conditions for the periodic case. This class of time-graph Cauchy problems comprises the classical settings, where the classical initial value problem corresponds to \(\mathbb {B}=0\), and the time-periodic problem is given by \(\mathbb {B}=\mathbb {1}\) with \(g_i=0\) for \(i=1, \ldots , n\).
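For \(f_i=0\) and finite-dimensional stand-ins, each solution is \(\psi _i(t)=e^{tA_i}\psi _i(0)\), so that (1.6) becomes a finite linear system for the vector of initial values. A minimal numerical sketch, assuming scalar spaces \(X_i=\mathbb {R}\) and an invented coupling matrix \(\mathbb {B}\):

```python
import numpy as np

# Scalar stand-ins: X_i = R, A_i = lam_i, so e^{t A_i} = exp(t * lam_i);
# with f_i = 0 each solution is psi_i(t) = exp(t * lam_i) * psi_i(0).
lam = np.array([-1.0, -0.5, 0.2])   # generators on the three edges
a = np.array([1.0, 2.0, 0.5])       # edge lengths
B = np.array([[0.0, 0.4, 0.0],      # invented transmission conditions
              [1.0, 0.0, 0.0],
              [0.0, 0.3, 0.6]])
g = np.array([1.0, 0.0, -2.0])      # boundary conditions in time

# Condition (1.6) reads psi(0) - B @ psi(a) = g with psi_i(a_i) given by
# E psi(0), E = diag(e^{a_i A_i}); solve (1 - B E) psi(0) = g.
E = np.diag(np.exp(a * lam))
psi_at_0 = np.linalg.solve(np.eye(3) - B @ E, g)
psi_at_a = E @ psi_at_0

print(np.allclose(psi_at_0 - B @ psi_at_a, g))  # True
```

The classical initial value problem is recovered for B = 0 (then psi_at_0 = g), and time periodicity for B = identity with g = 0.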

We present two strategies to solve this problem: First, when all \(g_i=0\), one can apply the Kalton–Weis theorem on sums of closed operators for suitable time and space operators. Second, going beyond this, explicit formulae in terms of semigroups and transmission conditions as in (1.5) can be derived by a Green’s function Ansatz, interpreting the system \(\partial _t - A_i\) as a system of vector-valued ordinary differential equations in time into which the inhomogeneous boundary conditions in time are incorporated.

First examples, results and outlook

As a next step toward more non-standard examples, one can extend the time-periodic situation: Instead of pure periodicity, we may for instance impose a phase shift after one time period \(a>0\), i.e.,

$$\begin{aligned} \partial _t \psi (t) - A\psi (t)=f(t), \quad \psi (0) = \alpha \psi (a) \quad \hbox {for some} \quad \alpha \in \mathbb {C}, \end{aligned}$$

which corresponds to \(\mathbb {B}=\alpha \cdot \mathbb {1}\) and \(g=0\). As we will see later in Sect. 6, the solution to this problem is given—provided that A generates an analytic semigroup and the operator \(1-\alpha e^{aA}\) is invertible—by

$$\begin{aligned} \psi (t) = \int _0^t e^{(t-s)A} f(s) \mathrm{d}s + e^{tA}\int _0^a (1-\alpha e^{aA})^{-1} \alpha e^{(a-s)A} f(s) \mathrm{d}s. \end{aligned}$$
(1.7)

In particular, extending this to the real line, one has

$$\begin{aligned} \psi (na) - \alpha \psi ((n+1)a)&= \int _0^{na} e^{(na-s)A} f(s) \mathrm{d}s\\&\quad -\alpha \int _a^{(n+1)a} e^{((n+1)a-s)A} f(s) \mathrm{d}s, \quad n\in \mathbb {Z}, \end{aligned}$$

and the phase shift occurs only after the first time period starting at 0 and ending at a, i.e., for \(n=0\), as sketched in Fig. 2b. If instead one considers \(\psi \) as a function on [0, a) and extends it periodically to \(\mathbb {R}\) by setting \(\psi (t+an)=\psi (t)\) for \(n\in \mathbb {Z}\), then for \(\alpha \ne 1\) this \(\psi \) is a discontinuous periodic function with the additional property that

$$\begin{aligned} \psi (na)-\alpha \lim _{\varepsilon \rightarrow 0-}\psi ((n+1)a+\varepsilon )=0 \quad \hbox {for } n\in \mathbb {Z}, \end{aligned}$$

and this can be represented in Fig. 2a. Another model is sketched in Fig. 2c. Here, time is represented by the real line, a phase shift with phase \(\alpha _1\) occurs at time \(a_1\), and a second phase shift with phase \(\alpha _2\) occurs at time \(a_2\).
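With a scalar stand-in \(A=\lambda \) and a constant forcing \(f\equiv c\), the integrals in (1.7) have closed forms, and the phase-shift condition \(\psi (0)=\alpha \psi (a)\) can be checked numerically (all concrete values are our invention):

```python
import numpy as np

# Scalar stand-ins: A = lam (so e^{tA} = exp(lam * t)), constant forcing c.
lam, a, c, alpha = -0.7, 1.0, 2.0, 0.5 + 0.3j

def conv(t):
    # Closed form of int_0^t e^{(t-s)lam} c ds for constant c.
    return c * (np.exp(lam * t) - 1.0) / lam

# Second term of (1.7) at t = 0: (1 - alpha e^{a lam})^{-1} alpha * conv(a).
K = alpha * conv(a) / (1.0 - alpha * np.exp(lam * a))

def psi(t):
    return conv(t) + np.exp(lam * t) * K   # formula (1.7)

# The phase-shift condition psi(0) = alpha * psi(a) holds:
print(abs(psi(0.0) - alpha * psi(a)) < 1e-12)  # True
```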

Fig. 2: Phase shifts

To illustrate various features of time graphs one can consider the graphs depicted in Fig. 3. Building on the initial example of time periodicity, one can take its state at a certain time as input to a new system. This would correspond to the tadpole-like graph in Fig. 3a with matching of the type

$$\begin{aligned} \psi _1(a_1) = \psi _1(0), \quad \psi _2(0)=\psi _1(a_1), \quad \hbox {i.e.,}\quad \mathbb {B}= \begin{bmatrix} 1 &{}\quad 0 \\ 1 &{}\quad 0 \end{bmatrix}, \quad g=0, \end{aligned}$$

where \(\psi _1\) lives on the loop and \(\psi _2\) lives on the adjacent interval.

Fig. 3: Evolution equations on graphs

More generally, basic building blocks are the joining and the splitting of two systems—as depicted in Fig. 3b, c—which can be used to describe a system which splits into two non-interacting dynamics, or two systems which interact after some time by means of some superposition. These blocks can be assembled to form graphs with cycles, see Fig. 3d. Similarly, one may think of the interaction of various periodic systems with dynamics on the time line, see Fig. 3e, which shares some features with Fig. 3a, d.

Time-graph Cauchy problems can be understood as a system of Cauchy problems on intervals with possibly non-local constraints such as periodicity, fixed jump conditions, or certain symmetries. Since the map \(\mathbb {B}=(\mathbb {B}_{ij})_{1\le i,j\le n}\) is a block operator matrix with \(\mathbb {B}_{ij}{:}\,X_j \rightarrow X_i\), one can rewrite (1.6) as

$$\begin{aligned} \psi _j(0)- \mathbb {B}_{jj}\psi _j(a_j)=g_j +\sum _{i\ne j} \mathbb {B}_{ji}\psi _i(a_i), \quad j=1,\ldots ,n, \end{aligned}$$

that is, a Cauchy problem is assigned on each interval and their ‘jump conditions’ are interdependent. If \(\mathbb {B}_{jj}\ne 0\) the Cauchy problem on \((0,a_j)\) is non-local and resembles periodicity, and for \(\mathbb {B}_{jj}=0\) the Cauchy problem on \((0,a_j)\) is an initial value problem.
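This reading of the block structure can be turned into a simple acyclicity test: treat edge j as depending on edge \(i\ne j\) whenever the block coupling \(\psi _j(0)\) to \(\psi _i(a_i)\) is nonzero; the problem then splits into an iteratively solvable sequence precisely when this dependency digraph has no directed cycle. A sketch using Kahn's algorithm (the helper name and the examples are ours):

```python
import numpy as np

def iteratively_solvable(B, tol=0.0):
    # Edge j depends on edge i != j whenever the block B_{ji} is nonzero
    # (its jump condition then involves psi_i(a_i)).  The system reduces to
    # an iteratively solvable sequence of problems on intervals iff this
    # dependency digraph is acyclic; check via Kahn's algorithm.
    n = B.shape[0]
    dep = (np.abs(B) > tol) & ~np.eye(n, dtype=bool)  # off-diagonal pattern
    indeg = dep.sum(axis=1)                           # number of dependencies
    order = []
    ready = [j for j in range(n) if indeg[j] == 0]
    while ready:
        i = ready.pop()
        order.append(i)
        for j in range(n):
            if dep[j, i]:
                dep[j, i] = False
                indeg[j] -= 1
                if indeg[j] == 0:
                    ready.append(j)
    return len(order) == n  # all edges scheduled <=> no directed cycle

# Tadpole graph (loop followed by an interval): edge 1 is a periodic-type
# problem on its own, edge 2 then an initial value problem -> iterative.
B_tadpole = np.array([[1.0, 0.0], [1.0, 0.0]])
print(iteratively_solvable(B_tadpole))   # True

# Two edges feeding into each other form a cycle -> global solve needed.
B_cycle = np.array([[0.0, 1.0], [1.0, 0.0]])
print(iteratively_solvable(B_cycle))     # False
```

Note that a nonzero diagonal block \(\mathbb {B}_{jj}\) does not obstruct the iteration: it only makes the problem on the single edge j non-local, like the periodic problem.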

Time graphs with oriented loops can also be used to model closed-loop systems and other control-theoretic gadgets, cf. [30]. One can also think of signals that after a certain time are processed differently, as illustrated in Fig. 3i. This means that a system changes its character after a certain time. For instance, a heat equation may be followed after a certain time by a transport process which, after a further time, turns again into a heat equation, thus modeling time delays in a diffusive process. Moreover, couplings at the vertices of a time graph can be frequency dependent, and thus frequency-dependent dynamics can be modeled, too. Also, there are some more non-standard situations where time graphs come into play. A tree graph as depicted in Fig. 3f can serve as an illustration of the multiverse interpretation of quantum mechanics, where it is assumed that, in contrast to a probabilistic interpretation, each possible state is actually attained, but each in a separate universe. Figure 3g, h gives some possibilities of how one may represent time travel—independently of its actual physical possibility—using time graphs, see also Sect. 9.

Our main result states the well-posedness of such time-graph models, under some compatibility assumption on the matrix \(\mathbb {B}\), which encodes the transmission conditions in time, and the ‘spatial’ operators \(A_i\). In particular, a generalized variation of constants formula is obtained, allowing us to derive additional mapping properties.

The question of whether the time-graph Cauchy problem reduces to a sequence of Cauchy problems on intervals which can be solved iteratively is traced back to the block structure of \(\mathbb {B}\). It is pointed out that loops reflected by the transmission conditions \(\mathbb {B}\) prevent such iterative solvability; in such situations one therefore indeed needs tools for global solvability, as in the case of periodicity. The methods developed for the case of parabolic problems can be adapted also to some non-parabolic problems such as Schrödinger equations, wave equations, or even coupled dynamics of different types, such as first- and second-order Cauchy problems, as illustrated in Fig. 3j.

Organization of the paper

In the subsequent Sect. 2 we recapitulate key elements of the classical theory of evolution equations, some of which are necessary in order to develop our approach to time graphs. Thereafter, in Sect. 3, the notion of networks and of function spaces thereon is made precise. In Sect. 4 the Banach space-valued time-derivative operator on graphs with couplings and the spatial operator are studied. In Sect. 5 the time-graph problem for the case \(g=0\) is tackled, using the Kalton–Weis sum theorem on commuting operators applied to the time-derivative and the spatial operator, where some compatibility assumptions on the boundary conditions are required. Section 6 follows a more direct approach, computing the Green’s function for the time-graph problem explicitly. This gives our main result on the solvability of the time-graph Cauchy problem for g in a trace space under less restrictive compatibility conditions. Section 7 addresses the question under which conditions solutions to time-graph problems can be reduced to Cauchy problems on intervals. In Sect. 8 we discuss a few examples, focusing on specific instances of time graphs and broaching extensions to classes of non-parabolic evolution equations, including Schrödinger, wave and mixed-order equations.

Some of the suggested settings may look mostly motivated by science-fictional or hypothetical physical scenarios, as they may allow for loss of causality: In Sect. 9 we discuss these and further related aspects by commenting on tentative interpretations of evolution supported on network-type time structures.

Classical Cauchy problems

Many of the methods applied here make use of classical results on evolution equation theory and initial value problems. It is well established that the initial value problem

$$\begin{aligned} \partial _t \psi (t) - A\psi (t)=f(t), \quad \psi (0)=g \end{aligned}$$
(2.1)

with A being a closed linear operator on a Banach space X has for all \(g\in X\) a unique mild solution if and only if A generates a \(C_0\)-semigroup on X, cf. [5, Thm. 3.1.12], where at least \(f\in L^1(0,a;X)\) is admissible, \(a>0\). If X is a Hilbert space and \(g=0\), the stronger condition of maximal \(L^2\)-regularity amounts to requiring that there is for all \(f\in L^2(0,a;X)\) a unique solution \(\psi \) of (2.1) in the maximal \(L^2\)-regularity space, i.e.,

$$\begin{aligned} \psi \in L^2(0,a;D(A))\cap \{\psi \in W^{1,2}(0,a;X){:}\,\psi (0)=0\}, \end{aligned}$$

such that

$$\begin{aligned} \Vert \psi \Vert _{L^2(0,a;D(A))} + \Vert \psi \Vert _{W^{1,2}(0,a;X)} \le C \Vert f\Vert _{L^{2}(0,a;X)} \end{aligned}$$

for a constant \(C>0\) independent of f. Maximal \(L^2\)-regularity holds if and only if the semigroup generated by A on X is analytic. This is related to the notion of sectorial operators: considering sectors in the complex plane

$$\begin{aligned} \Sigma _\omega := \{\lambda \in \mathbb {C}{\setminus } \{0\}{:}\,|\hbox {arg}(\lambda )|<\omega \}, \quad \omega \in (0,\pi ), \end{aligned}$$

recall that a closed densely defined linear operator B is sectorial of angle \(\omega \in (0,\pi )\) if

  • \(\sigma (B)\subset \overline{\Sigma _\omega }\) and

  • \(\sup \{ \Vert \lambda (\lambda -B)^{-1}\Vert {:}\,\lambda \in \mathbb {C}{\setminus } \{0\}{,}\nu \le |\text{ arg }(\lambda )|\le \pi \}<\infty \) for all \(\nu \in (\omega , \pi )\),

cf. [34, Theorem 1.11 ff.]. Note that if \(B=-A\) is sectorial of angle smaller than \(\pi /2\), then A is the generator of an analytic semigroup. In the literature, there are several, slightly diverging definitions of sectorial operators. For example, in [16, Definition 4.1] it is the generator itself (rather than its negative) which is called ‘sectorial,’ while in [31, Chapter 3, §3.10] an operator on a Hilbert space is called ‘sectorial’ if its numerical range lies in a sector.
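For a positive definite diagonal matrix B, which is sectorial of every angle \(\omega \in (0,\pi )\), the uniform resolvent bound in the second condition can be observed numerically. The sketch below samples a ray \(|\hbox {arg}(\lambda )|=\nu \) and compares against the elementary bound \(1/\sin \nu \), valid for normal operators with positive spectrum (stated here as an assumption, not taken from the text):

```python
import numpy as np

# Positive definite diagonal B: spectrum in (0, infty), hence sectorial of
# every angle omega in (0, pi).  Sample the quantity
#   |lambda| * ||(lambda - B)^{-1}||   on the ray arg(lambda) = nu.
B = np.diag([1.0, 2.0, 5.0])
nu = 3 * np.pi / 4
sup = 0.0
for r in np.logspace(-6, 6, 200):
    lam = r * np.exp(1j * nu)
    res = np.linalg.inv(lam * np.eye(3) - B)     # resolvent at lambda
    sup = max(sup, abs(lam) * np.linalg.norm(res, 2))

# For normal B with positive spectrum one expects sup <= 1/sin(nu) = sqrt(2).
print(sup <= np.sqrt(2) + 1e-9)  # True
```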

For Banach spaces X of class UMD, maximal \(L^p\)-regularity can be characterized using the notions of \({\mathcal R}\)-sectoriality and \({\mathcal H}^{\infty }\)-calculus, where one implication follows from the Dore–Venni-type sum theorem of Kalton and Weis on commuting operators [33, Thm. 6.3], cited here in Theorem 2.1. The key idea in the original Dore–Venni Theorem and its generalizations is to look at evolution equations on a Banach space X as stationary equations on a Bochner space of X-valued functions.

Theorem 2.1

(Sum theorem of Kalton and Weis) Suppose that \(A\in {\mathcal H}^{\infty }(X)\) and \(B\in {\mathcal R}{\mathcal S}(X)\) are commuting operators such that \(\phi _A^{\infty }+\phi _B^R<\pi \). Then, \(A+B\) is closed with domain \(D(A+B)=D(A)\cap D(B)\), \(A+B\in {\mathcal R}{\mathcal S}(X)\) with \(\phi _{A+B}\le \max \{\phi _A^{\infty },\phi _B^R\}\), and for some constant \(C>0\)

$$\begin{aligned} \Vert Ax\Vert _X +\Vert Bx\Vert _X \le C\Vert (A+B)x\Vert _X, \quad x\in D(A)\cap D(B). \end{aligned}$$

The operator \(A+B\) is invertible if A or B is invertible.

In the following we will seldom use this result in its full generality, as we mostly restrict ourselves to the case of Hilbert spaces; we refer the interested reader to the classic monograph [13] by Denk, Hieber and Prüß, where all these notions are introduced. Theorem 2.1 is formulated for a Banach space X. If X is a Hilbert space, however, then the notions of \({\mathcal R}\)-sectoriality and sectoriality agree. We recall that whenever \(-A\) is sectorial, the solution \(\psi (t):=e^{tA}g\) lies in D(A) for all \(t>0\) and all initial data \(g\in X\); and that moreover, \(\psi \) lies for all \(p\in (1,\infty )\) in the maximal \(L^p\)-regularity space whenever the initial data belong to the trace space, i.e., \(g\in (X,D(A))_{1-1/p,p}\), given by the real interpolation functor \((\cdot ,\cdot )_{\theta ,p}\), cf. [37, § 3.4].

The Ansatz using the Kalton–Weis sum theorem has been applied successfully by Arendt and Bu [1,2,3]. In particular, the fact that both time domains \(\mathbb {R}\) and \({\mathbb {S}}^1\) are groups has allowed them to apply methods of harmonic analysis and to deliver a comprehensive theory of Cauchy problems with time-periodic boundary conditions. A general scheme for periodic and almost periodic solutions to semilinear equations has been proposed by Hieber and co-authors, cf. [18, 27] and also [17, 23, 26, 29], where in particular for applications in fluid mechanics semigroup theory plays an important role, cf. [22]. For a similar approach where the stationary part is treated separately, and in particular for applications to quasi- and semilinear problems, see also the works of Kyed and co-authors, cf. [9, 14, 32]. Existence of time-periodic solutions for (linear or even nonlinear) hyperbolic equations is well known for a large class of problems, cf. the comprehensive monograph [39].

Finite metric graphs

Finite graphs

A graph is a 4-tuple

$$\begin{aligned} {\mathcal G}= \left( {{\mathcal {V}}}, {\mathcal I},{\mathcal E}, \partial \right) , \end{aligned}$$

where \({{\mathcal {V}}}\) denotes the set of vertices, \({\mathcal I}\) the set of internal edges and \({\mathcal E}\) the set of external edges, with \({\mathcal E}\cap {\mathcal I}=\emptyset \). We refer to elements of the set \({\mathcal E}\cup {\mathcal I}\) collectively as edges. To avoid notational ambiguities, we also assume \({{\mathcal {V}}}\cap {\mathcal E}={{\mathcal {V}}}\cap {\mathcal I}=\emptyset \). In order to fix an orientation, one distinguishes incoming \({\mathcal E}_-\) and outgoing \({\mathcal E}_+\) external edges, where \({\mathcal E}={\mathcal E}_-\cup {\mathcal E}_+\) and \({\mathcal E}_-\cap {\mathcal E}_+=\emptyset \).

The structure of the graph is given by the boundary map \(\partial \). On one hand, it assigns to each internal edge \(i\in {\mathcal I}\) an ordered pair of vertices \(\partial (i)=\left( \partial _-(i),\partial _+(i)\right) \in {{\mathcal {V}}}\times {{\mathcal {V}}}\), where \(\partial _-(i)\) is called its initial vertex and \(\partial _+(i)\) its terminal vertex. On the other hand, each incoming external edge \(e_-\in {\mathcal E}_-\) and each outgoing external edge \(e_+\in {\mathcal E}_+\) is associated by means of \(\partial (e_-)=\partial _-(e_-)\) and \(\partial (e_+)=\partial _+(e_+)\) with a single vertex (its initial and terminal vertex, respectively). A graph is called balanced if \(|{\mathcal E}_-|=|{\mathcal E}_+|\). We will see that orientations play a role only when we study evolution equations of first (or, more generally, odd) order in time; for equations of even order in time, orientations are imposed only for the sake of a consistent parameterization. A graph is called finite if \(|{{\mathcal {V}}}|+|{\mathcal I}|+|{\mathcal E}|<\infty \) and a finite graph is called compact if \({\mathcal E}=\emptyset \).

The structure of the network is given by the outgoing and incoming \(|{{\mathcal {V}}}|\times |{\mathcal E}\cup {\mathcal I}|\) incidence matrices \(I^+:=(\iota ^+_{\mathsf {v}\mathsf {e}})\) and \(I^-:=(\iota ^-_{\mathsf {v}\mathsf {e}})\) defined by

$$\begin{aligned} \iota ^+_{\mathsf {v}\mathsf {e}}:=\left\{ \begin{array}{ll} 1, &{}\quad \hbox {if}\,\partial _-(\mathsf {e})=\mathsf {v},\\ 0, &{}\quad \hbox {otherwise}, \end{array} \right. \qquad \hbox {and}\qquad \iota ^-_{\mathsf {v}\mathsf {e}}:=\left\{ \begin{array}{ll} 1, &{}\quad \hbox {if}\,\partial _+(\mathsf {e})=\mathsf {v},\\ 0, &{}\quad \hbox {otherwise.} \end{array} \right. \end{aligned}$$
(3.1)

This encodes the structure of the graph and allows one to define directions on \({\mathcal G}\). The network \({\mathcal G}\) is the directed graph whose signed incidence matrix is \(I:=I^+-I^-\); we will occasionally need the underlying undirected graph, which is fully determined by the (signless) incidence matrix \(I^+ +I^-\). Roughly speaking, a directed graph is the version of the graph where one can move only along the prescribed directions, while on the undirected graph \({\mathcal G}\) one can move in both directions.
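The definitions in (3.1) are straightforward to implement; a minimal sketch for an invented directed 3-cycle, checking that each column of \(I^+\) and \(I^-\) has exactly one nonzero entry and that the columns of the signed incidence matrix sum to zero:

```python
import numpy as np

# Directed 3-cycle: vertices v0, v1, v2; edge e_k runs from v_k to v_{k+1 mod 3}.
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]           # (initial vertex, terminal vertex)

n_v, n_e = len(vertices), len(edges)
I_plus = np.zeros((n_v, n_e), dtype=int)   # iota^+_{ve} = 1 iff edge e leaves v
I_minus = np.zeros((n_v, n_e), dtype=int)  # iota^-_{ve} = 1 iff edge e enters v
for e, (v_init, v_term) in enumerate(edges):
    I_plus[v_init, e] = 1
    I_minus[v_term, e] = 1

I_signed = I_plus - I_minus    # signed incidence matrix of the directed graph
I_signless = I_plus + I_minus  # incidence matrix of the underlying undirected graph

# Each (internal) edge has exactly one initial and one terminal vertex ...
assert (I_plus.sum(axis=0) == 1).all() and (I_minus.sum(axis=0) == 1).all()
# ... so every column of the signed incidence matrix sums to zero.
print((I_signed.sum(axis=0) == 0).all())  # True
```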

Function spaces on metric graphs

A graph \({\mathcal G}\) is endowed with the following metric structure. Each internal edge \(i\in {\mathcal I}\) is associated with an interval \([0,a_i]\), with \(a_i>0\), such that its initial vertex corresponds to 0 and its terminal vertex to \(a_i\). Each external edge \(e\in {\mathcal E}_-\) and \(e\in {\mathcal E}_+\) is associated with a half-line \([0,\infty )\) and \((-\infty ,0]\), respectively, such that \(\partial (e)\) corresponds to 0. The numbers \(a_i\) are called lengths of the internal edges \(i\in {\mathcal I}\) and they are collected into the vector

$$\begin{aligned} \underline{a}=\{a_i\}_{i\in {\mathcal I}}\in (0,\infty )^{|{\mathcal I}|}. \end{aligned}$$

The pair consisting of a finite graph and a metric structure on it is called a metric graph \(({\mathcal G},\underline{a})\). The metric on the undirected metric graph \(({\mathcal G},\underline{a})\) is defined via minimal path lengths along connected vertices, while for the directed metric graph minimal path lengths are computed taking the directions into account.

Let, for each \(j\in {\mathcal I}\cup {\mathcal E}\), \(X_j\) be a complex Banach space with norm \(\Vert \cdot \Vert _{X_j}\). Then, any collection of functions

$$\begin{aligned} \psi _j{:}\,I_j \rightarrow X_j, \quad j\in {\mathcal I}\cup {\mathcal E}\hbox { with } I_j= {\left\{ \begin{array}{ll} (0,a_j), &{}\quad \text{ if } \ j\in {\mathcal I}, \\ (0,\infty ), &{}\quad \text{ if } \ j\in {\mathcal E}_-, \\ (-\infty ,0), &{}\quad \text{ if } \ j\in {\mathcal E}_+, \end{array}\right. } \end{aligned}$$

can be identified with a map

$$\begin{aligned} \psi {:}\,\bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} I_j \rightarrow \bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} X_j \quad \hbox {with} \quad \psi (t)=\psi _j(t) \quad \hbox {for } t\in I_j, \end{aligned}$$
(3.2)

where the notation for elements in

$$\begin{aligned} t_j=(t,j)\in \bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} I_j \quad \hbox {and} \quad \psi _j=(\psi ,j)\in \bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} X_j \end{aligned}$$

is shortened to t and \(\psi \), and occasionally we write, slightly redundantly, \(\psi _j(t)=\psi _j(t_j)\). The metric graph \(({\mathcal G},\underline{a})\) is identified with a quotient of \(\bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} \overline{I_j}\), and therefore, \(t\in ({\mathcal G},\underline{a})\) is identified with \(t=t_j\in \overline{I_j}\) for some \(j\in {\mathcal E}\cup {\mathcal I}\). Similarly, the maps \(\psi \) defined as in (3.2) can be identified with maps on \(({\mathcal G},\underline{a})\), where at the vertices in general a set of values can be attained, since the edgewise defined functions \(\psi _j\) on the different edges adjacent to a vertex may take different values there.

Equipping each edge of the oriented or non-oriented metric graph with the one-dimensional vector-valued Bochner–Lebesgue measure, one obtains a measure space. One defines

$$\begin{aligned} \int _{{\mathcal G}} \psi := \sum _{j\in {\mathcal I}\cup {\mathcal E}} \int _{I_j} \psi (t_j) \, \mathrm{d}t_j \end{aligned}$$

where \(\mathrm{d}t_{j}\) refers to integration with respect to the Bochner–Lebesgue measure on \(I_j\). We set

$$\begin{aligned} {\mathcal X}:= \bigoplus _{j\in {\mathcal I}\cup {\mathcal E}} X_j, \end{aligned}$$

and introduce, with a slight abuse of notation, several related spaces: For \(p\in (1,\infty )\) the space

$$\begin{aligned} L^p({\mathcal G},\underline{a};{\mathcal X}):= \bigoplus _{j\in {\mathcal I}\cup {\mathcal E}} L^p(I_j;X_j) \end{aligned}$$

defines a Banach space, and indeed a Hilbert space provided \(p=2\) and \(X_j\) are Hilbert spaces; the canonical norm and inner product are given by

$$\begin{aligned} \Vert \psi \Vert _{L^p}&:= \left( \sum _{j\in {\mathcal I}\cup {\mathcal E}} \int _{I_j} \Vert \psi (t_j)\Vert _{X_j}^p { \, \mathrm{d}t_j}\right) ^{1/p} \quad \hbox {and}\\ \langle \psi , \varphi \rangle _{L^2}&= \sum _{j\in {\mathcal I}\cup {\mathcal E}} \int _{I_j} \langle \psi _j(t_j), \varphi _j(t_j) \rangle _{X_j} \, \mathrm{d}t_j, \end{aligned}$$

respectively. The corresponding Sobolev spaces are defined for \(p\in [1,\infty )\) and \(m \in \mathbb {N}\) by

$$\begin{aligned} W^{m,p}({\mathcal G},\underline{a};{\mathcal X}):= \bigoplus _{j\in {\mathcal I}\cup {\mathcal E}} W^{m,p}(I_j;X_j). \end{aligned}$$

Recall that for \(\psi \in W^{m,p}({\mathcal G},\underline{a};{\mathcal X})\), \(m\in \mathbb {N}\), \(p\in [1,\infty )\) traces up to the order \(m-1\) are well defined, i.e.,

$$\begin{aligned} \psi ^{(n)}(\partial _{\pm }(j)) \in X_j, \quad j\in {\mathcal I}\cup {\mathcal E}_{\pm }, \hbox { where } 0\le n\le m-1. \end{aligned}$$

Also, using

$$\begin{aligned} W_0^{m,p}(I_j;X_j)= \{\psi _j\in W^{m,p}(I_j;X_j) {:}\,\psi _j^{(n)}\vert _{\partial I_j}=0, \quad 0\le n\le m-1 \} \end{aligned}$$

one sets

$$\begin{aligned} W^{m,p}_0({\mathcal G},\underline{a};{\mathcal X}):=\bigoplus _{j\in {\mathcal I}\cup {\mathcal E}} W_0^{m,p}(I_j;X_j). \end{aligned}$$
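These direct-sum spaces are straightforward to realize numerically. The following minimal sketch, assuming a hypothetical toy graph with two scalar-valued edges of lengths 1 and 2 and simple trapezoidal quadrature, computes the broken \(L^2\) norm defined above:

```python
import numpy as np

def edge_integral(x, y):
    """Trapezoidal rule for the integral of y over one edge sampled at x."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

# Hypothetical toy time graph: two internal edges of lengths a_1 = 1, a_2 = 2,
# with scalar state spaces X_1 = X_2 = R.
edges = [np.linspace(0.0, 1.0, 2001), np.linspace(0.0, 2.0, 2001)]

# An edgewise defined function psi: psi_1(t) = t, psi_2(t) = 1.
psi = [edges[0], np.ones_like(edges[1])]

# Broken L^2 norm (squared): sum over edges of the integral of |psi_j|^2.
norm_sq = sum(edge_integral(x, y**2) for x, y in zip(edges, psi))
print(norm_sq)  # exact value: 1/3 + 2 = 7/3
```

The Sobolev norms are computed in the same edgewise fashion, with the derivative terms added per edge.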

Operators on metric graphs

As a first step to study the motivating problem, i.e.,

$$\begin{aligned} \partial _t \psi (t) -A(t)\psi (t)=f(t), \qquad t\in ({\mathcal G},\underline{a}), \end{aligned}$$

the derivative operator with transmission conditions on graphs is analyzed.

Derivative operators on graphs

One considers the nth derivative operators \(D_n\) on graphs formally given by

$$\begin{aligned} (D_n\psi )_j = \partial _t^n \psi _j, \quad j\in {\mathcal I}\cup {\mathcal E}, \end{aligned}$$

where one can define minimal and maximal operators in \(L^{p}({\mathcal G},\underline{a};{\mathcal X})\) by

$$\begin{aligned} D(D_n^{\min }):=W_0^{n,p}({\mathcal G},\underline{a};{\mathcal X}) \subset D(D_n^{\max }):=W^{n,p}({\mathcal G},\underline{a};{\mathcal X}). \end{aligned}$$

These are closed linear operators. If \(p=2\) and each \(X_j\) is a Hilbert space, one has \((D_n^{\min })^* = (-1)^{n}D_n^{\max }\), and hence, \(D_n^{\min }\) is symmetric if n is even and skew-symmetric if n is odd. In this article, the focus lies on the first and second derivative operators, for which we use the notation

$$\begin{aligned} D_t:=D_1 \quad \hbox {and} \quad D_{tt}:=D_2. \end{aligned}$$

Accretive coupling conditions for the first derivative

When considering the first derivative operator, it is assumed that \({\mathcal G}\) is balanced, i.e., there are as many outgoing as incoming external edges. From now on, let \(X_j\) be Hilbert spaces. On \(L^2({\mathcal G},\underline{a};{\mathcal X})\) a class of m-accretive realizations of \(D_t\) defined by boundary conditions is studied, i.e., we consider operators \(D_t^{b.c}\) with

$$\begin{aligned} D_t^{\min } \subset D_t^{b.c} \subset D_t^{\max }, \end{aligned}$$

where \(\rho (D_t^{b.c})\ne \emptyset \), and

$$\begin{aligned} {{\,\mathrm{Re}\,}}\langle D_t^{b.c}\psi ,\psi \rangle \ge 0, \quad \hbox {for all } \psi \in D(D_t^{b.c}). \end{aligned}$$

Integrating by parts yields the following Lagrange identity for the first derivative operator

$$\begin{aligned} \int _{{\mathcal G}}\langle \psi ',\varphi \rangle _X + \int _{{\mathcal G}}\langle \psi , \varphi ' \rangle _X = [ \psi ,\varphi ]_{\partial {\mathcal G}, D_t}, \quad \psi ,\varphi \in D(D_t^{\max }), \end{aligned}$$
(4.1)

where

$$\begin{aligned} \left[ \psi ,\varphi \right] _{\partial {\mathcal G}, D_t}:= \sum _{e\in {\mathcal I}\cup {\mathcal E}_+}\langle \psi (\partial _+(e)),\varphi (\partial _+(e)) \rangle _X- \sum _{e\in {\mathcal I}\cup {\mathcal E}_-}\langle \psi (\partial _-(e)),\varphi (\partial _-(e)) \rangle _X. \end{aligned}$$

One introduces the space of boundary values

$$\begin{aligned} {\mathcal K}:= \bigoplus _{i\in {\mathcal I}} X_i \oplus \bigoplus _{e\in {\mathcal E}_-} X_e \simeq \bigoplus _{i\in {\mathcal I}} X_i \oplus \bigoplus _{e\in {\mathcal E}_+} X_e, \end{aligned}$$

where the claimed isomorphism holds because the graph is balanced, i.e., \(|{\mathcal E}_+|=|{\mathcal E}_-|\): The vectors of boundary values \(\underline{\psi }_-\in {\mathcal K}\) and \(\underline{\psi }_+\in {\mathcal K}\) are then defined by

$$\begin{aligned} \underline{\psi }_+:= \begin{bmatrix}\{ \psi (\partial _+(i))\}_{i\in {\mathcal I}} \\ \{ \psi (\partial _+(e))\}_{e\in {\mathcal E}_p} \end{bmatrix}, \quad \underline{\psi }_-:=\begin{bmatrix}\{\psi (\partial _-(i))\}_{i\in {\mathcal I}} \\ \{\psi (\partial _-(e))\}_{e\in {\mathcal E}_p} \end{bmatrix} \quad \hbox {and} \quad [\psi ]:= \begin{bmatrix}\underline{\psi }_+ \\ \underline{\psi }_-\end{bmatrix}\in {\mathcal K}^2, \end{aligned}$$
(4.2)

where for a fixed bijection

$$\begin{aligned} p{:}\,{\mathcal E}_+\rightarrow {\mathcal E}_- \quad \hbox {one sets}\quad {\mathcal E}_p=\{(e_+,p(e_+))\in {\mathcal E}_+\times {\mathcal E}_-{:}\,e_+\in {\mathcal E}_+\}, \end{aligned}$$

i.e., one orders the outgoing and incoming edges into pairs, and defines

$$\begin{aligned} \partial _+(e):=\partial _+(e_+), \quad \partial _-(e) :=\partial _-(p(e_+)), \hbox { where } e=(e_+,p(e_+))\in {\mathcal E}_p. \end{aligned}$$

Hence, one obtains

$$\begin{aligned} \left[ \psi ,\varphi \right] _{\partial {\mathcal G}, D_t}= \langle \underline{\psi }_+,\underline{\varphi }_+ \rangle _{{\mathcal K}} -\langle \underline{\psi }_-,\underline{\varphi }_- \rangle _{{\mathcal K}} = \langle [\psi ], J [\varphi ]\rangle _{{\mathcal K}^2}, \quad \hbox {where } J=\begin{bmatrix} \mathbb {1}_{{\mathcal K}} &{} 0 \\ 0 &{} -\mathbb {1}_{{\mathcal K}} \end{bmatrix}. \end{aligned}$$
(4.3)
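The Lagrange identity (4.1) underlying this boundary pairing can be checked numerically. A sketch for a hypothetical graph with two real scalar-valued internal edges (all functions chosen for illustration only); edge by edge, the left-hand side is the integral of \((\psi \varphi )'\):

```python
import numpy as np

def integral(x, y):
    # Trapezoidal rule on one edge.
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

# Hypothetical graph: two internal edges of lengths 1 and 2, real scalar values.
edges = [np.linspace(0.0, 1.0, 4001), np.linspace(0.0, 2.0, 4001)]
psi = [np.sin(edges[0]), edges[1] ** 2]
phi = [np.cos(edges[0]), np.exp(-edges[1])]

# Left-hand side of (4.1): int <psi', phi> + int <psi, phi'> over the graph.
lhs = sum(integral(x, np.gradient(p, x) * q + p * np.gradient(q, x))
          for x, p, q in zip(edges, psi, phi))

# Right-hand side: boundary pairing, outgoing endpoints minus incoming ones.
rhs = sum(p[-1] * q[-1] - p[0] * q[0] for p, q in zip(psi, phi))
print(abs(lhs - rhs))  # small: only quadrature and differentiation error
```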

For any subspace \({\mathcal M}\subset {\mathcal K}^2\) one can define a realization by

$$\begin{aligned} D_t({\mathcal M}) \psi :=D_t^{\max }\psi = \psi ', \quad D(D_t({\mathcal M})):=\left\{ \psi \in D(D_t^{\max }){:}\,[\psi ]\in {\mathcal M}\right\} \end{aligned}$$

with the extremal cases \(D_t({\mathcal K}^2)=D_t^{\max }\) and \(D_t(\{0\})=D_t^{\min }\), both of which are clearly edgewise decoupled; couplings can be implemented by means of boundary conditions.

Lemma 4.1

The operator \(D_t({\mathcal M})\) is closed if and only if \({\mathcal M}\subset {\mathcal K}^2\) is closed.

Proof

If \({\mathcal M}\subset {\mathcal K}^2\) is closed, let \(\psi _n\in D(D_t({\mathcal M}))\) with \(\psi _n\rightarrow \psi \) and \(\psi _n'\rightarrow \varphi \) in \(L^2({\mathcal G},\underline{a};{\mathcal X})\). First, by the closedness of \(D_t^{\max }\) one has \(\psi \in D(D_t^{\max })\) and \(\varphi =\psi '\). Second, by the boundedness of the trace operator one has \([\psi _n]\rightarrow [\psi ]\) in \({\mathcal K}^2\), and hence \([\psi ]\in {\mathcal M}\) since \({\mathcal M}\) is closed.

If \({\mathcal M}\subset {\mathcal K}^2\) is not closed, then there exists a sequence \((\underline{\psi _n})_{n\in \mathbb {N}}\subset {\mathcal M}\) with \(\underline{\psi _n}\rightarrow \underline{\psi }\notin {\mathcal M}\) in \({\mathcal K}^2\). Note that there exist smooth cutoff functions \(\eta _{j}^{\pm }{:}\,I_j \rightarrow [0,1]\) with \(\eta _{j}^{\pm }=1\) close to \(\partial _{\pm }(e_j)\) and zero around \(\partial _{\mp }(e_j)\). Then, \(\psi _{n,j}=\eta _j^{+} (\underline{\psi _n}_{+})_j +\eta _j^{-} (\underline{\psi _n}_{-})_j\) defines functions \(\psi _{n}\in D(D_t({\mathcal M}))\) with \(\psi _{n}\rightarrow \psi \) and \(\psi _{n}'\rightarrow \psi '\), where \(\psi _j:=\eta _j^{+} (\underline{\psi }_{+})_j +\eta _j^{-} (\underline{\psi }_{-})_j\), but \(\psi \notin D(D_t({\mathcal M}))\). \(\square \)

Here, the following type of boundary conditions is considered. Let \(\mathbb {B}\in \mathcal {L}({\mathcal K})\) be a bounded operator on \({\mathcal K}\). Note that \(\mathbb {B}\) is a block operator matrix with respect to the decomposition \({\mathcal K}\simeq \bigoplus _{i\in {\mathcal I}\cup {\mathcal E}_-} X_i\), i.e.,

$$\begin{aligned} \mathbb {B}= (\mathbb {B}_{ij})_{i,j\in {\mathcal I}\cup {\mathcal E}_-} \quad \hbox {with} \quad \mathbb {B}_{ij}\in {\mathcal L}(X_j,X_i). \end{aligned}$$

For such \(\mathbb {B}\in \mathcal {L}({\mathcal K})\) we consider the boundary conditions defined by

$$\begin{aligned} \underline{\psi }_- = \mathbb {B}\underline{\psi }_+. \end{aligned}$$
(4.4)

One defines the operator

$$\begin{aligned} D_t(\mathbb {B}):=D_t({\mathcal M}(\mathbb {B})), \quad \hbox {where } {\mathcal M}(\mathbb {B}) := \{[\psi ]\in {\mathcal K}^2{:}\,\underline{\psi }_- - \mathbb {B}\underline{\psi }_+=0\}. \end{aligned}$$

Under additional assumptions these boundary conditions force the numerical ranges of \(D_t(\mathbb {B})\) and \(D_t(\mathbb {B})^*\) to lie in a half-plane of the complex plane.

Lemma 4.2

(Adjoint operator and numerical range) Let \(\mathbb {B}\in \mathcal {L}({\mathcal K})\). Then, \(D_t(\mathbb {B})\) is closed, its Hilbert space adjoint in \(L^2({\mathcal G},\underline{a};{\mathcal X})\) is given by

$$\begin{aligned} D_t(\mathbb {B})^*\varphi = -\varphi ', \quad D(D_t(\mathbb {B})^*)=\left\{ \varphi \in D(D_t^{\max }){:}\,\mathbb {B}^*\underline{\varphi }_- - \underline{\varphi }_+=0 \right\} , \end{aligned}$$

and furthermore

$$\begin{aligned} {{\,\mathrm{Re}\,}}\langle D_t(\mathbb {B})\psi ,\psi \rangle&= \tfrac{1}{2}\langle (\mathbb {1} -\mathbb {B}^*\mathbb {B})\underline{\psi }_+, \underline{\psi }_+\rangle _{{\mathcal K}}, \quad \psi \in D(D_t(\mathbb {B})),\\ {{\,\mathrm{Re}\,}}\langle D_t(\mathbb {B})^*\varphi ,\varphi \rangle&= \tfrac{1}{2}\langle (\mathbb {1}-\mathbb {B}\mathbb {B}^*)\underline{\varphi }_-, \underline{\varphi }_-\rangle _{{\mathcal K}}, \quad \varphi \in D(D_t(\mathbb {B})^*). \end{aligned}$$

Proof

By Lemma 4.1, \(D_t(\mathbb {B})\) is closed since \({\mathcal M}(\mathbb {B})\) is closed. Note that from \(D_t^{\min }\subset D_t(\mathbb {B})\subset D_t^{\max }\) it follows by taking adjoints that \(-D_t^{\min }\subset D_t(\mathbb {B})^*\subset -D_t^{\max }\). Hence, it follows from (4.3) that

$$\begin{aligned} D(D_t(\mathbb {B})^*)=\left\{ \varphi \in D(D_t^{\max }){:}\,J[\varphi ]\in {\mathcal M}(\mathbb {B})^{\perp }\right\} . \end{aligned}$$

Note that

$$\begin{aligned} {\mathcal M}(\mathbb {B})=\ker \begin{bmatrix} -\mathbb {B}&\quad \mathbb {1} \end{bmatrix}, \quad {\mathcal M}(\mathbb {B})^{\perp } = {{\,\mathrm{Ran}\,}}\begin{bmatrix} -\mathbb {B}^* \\ \mathbb {1} \end{bmatrix} = \ker \begin{bmatrix} \mathbb {1}&\quad \mathbb {B}^* \end{bmatrix},\\ \quad \hbox {hence } J ({\mathcal M}(\mathbb {B})^{\perp }) =\ker \begin{bmatrix} \mathbb {1}&\quad -\mathbb {B}^* \end{bmatrix}. \end{aligned}$$

Moreover, for \(\psi \in D(D_t(\mathbb {B}))\) one obtains by integration by parts

$$\begin{aligned} {{\,\mathrm{Re}\,}}\langle D_t(\mathbb {B})\psi ,\psi \rangle&= \tfrac{1}{2} \left( \langle D_t(\mathbb {B})\psi ,\psi \rangle + \overline{\langle D_t(\mathbb {B})\psi ,\psi \rangle } \right) \\&= \tfrac{1}{2} \left( \langle \psi ',\psi \rangle + \langle \psi ,\psi ' \rangle \right) = \tfrac{1}{2}\left( \langle \underline{\psi }_+, \underline{\psi }_+\rangle _{{\mathcal K}} - \langle \underline{\psi }_-,\underline{\psi }_-\rangle _{{\mathcal K}} \right) \\&= \tfrac{1}{2} \left( \langle \underline{\psi }_+, \underline{\psi }_+\rangle _{{\mathcal K}} - \langle \mathbb {B}\underline{\psi }_+,\mathbb {B}\underline{\psi }_+\rangle _{{\mathcal K}} \right) = \tfrac{1}{2} \langle (\mathbb {1}-\mathbb {B}^*\mathbb {B})\underline{\psi }_+, \underline{\psi }_+\rangle _{{\mathcal K}}. \end{aligned}$$

A similar proof yields the claimed identity for \({{\,\mathrm{Re}\,}}\langle D_t(\mathbb {B})^*\psi ,\psi \rangle \). \(\square \)
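The first identity of Lemma 4.2 can be illustrated in the simplest nontrivial case: a single loop edge \([0,a]\) with scalar coupling \(\psi (0)=b\,\psi (a)\), i.e., \(\mathbb {B}=b\in \mathbb {R}\). The data below are hypothetical:

```python
import numpy as np

# Single loop edge [0, a] with scalar coupling psi(0) = b * psi(a).
a, b = 2.0, 0.5
t = np.linspace(0.0, a, 4001)

# A function satisfying the coupling: linear from psi(0) = b to psi(a) = 1.
psi = b + (1.0 - b) * t / a
dpsi = np.gradient(psi, t)

def integral(x, y):
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

lhs = integral(t, dpsi * psi)              # Re <D_t(B) psi, psi>
rhs = 0.5 * (1.0 - b ** 2) * psi[-1] ** 2  # (1/2) <(1 - B*B) psi_+, psi_+>
print(lhs, rhs)  # both equal 0.375 here
```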

Remark 4.3

(Spectral inclusion) Note that if \(\mathbb {B}\) is a contraction, then \(\sigma (D_t(\mathbb {B}))\subset \{z\in \mathbb {C}{:}\,{{\,\mathrm{Re}\,}}z \ge 0\}\) since the spectrum is contained in the closure of the numerical range. The case \(\sigma (D_t(\mathbb {B}))=\emptyset \) can occur: in particular, for a compact graph with \(\mathbb {B}=0\) (which corresponds to the boundary conditions \(\psi (\partial _-(i))=0\) for all \(i\in {\mathcal I}\)) one has \(\sigma (D_t(\mathbb {B}))=\emptyset \).

Proposition 4.4

(m-accretivity and invertibility of \(D_t(\mathbb {B})\)) Let \(\mathbb {B}\in \mathcal {L}({\mathcal K})\).

  1. (a)

    If \(\mathbb {B}\) is a contraction on \({\mathcal K}\), i.e., \(\Vert \mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}\le 1\), then \(D_t(\mathbb {B})\) is m-accretive in \(L^2({\mathcal G},\underline{a};{\mathcal X});\)

  2. (b)

If \(\mathbb {B}\) is a strict contraction, i.e., \(\Vert \mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}< 1\), then \(D_t(\mathbb {B})\) is boundedly invertible;

  3. (c)

If \(\mathbb {B}\) is unitary, i.e., \(\mathbb {B}^*\mathbb {B}=\mathbb {B}\mathbb {B}^*=\mathbb {1}\), then \(D_t(\mathbb {B})\) is skew-self-adjoint, i.e., \(D_t(\mathbb {B})^*=-D_t(\mathbb {B})\).

Proof

It is a direct consequence of Lemma 4.2 that if \(\Vert \mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}\le 1\), then

$$\begin{aligned} {{\,\mathrm{Re}\,}}\langle D_t(\mathbb {B})\psi ,\psi \rangle \ge 0 \quad \hbox {and} \quad {{\,\mathrm{Re}\,}}\langle D_t(\mathbb {B})^*\varphi ,\varphi \rangle \ge 0 \end{aligned}$$

for all \(\psi \in D(D_t(\mathbb {B}))\) and all \(\varphi \in D(D_t(\mathbb {B})^*)\), respectively, i.e., \(D_t(\mathbb {B})\) is m-accretive [16, Cor. 3.17]; this proves (a). Recall that one has for the operator norm in \({\mathcal K}\) that \(\Vert \mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}^2 =\Vert \mathbb {B}^*\mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}=\Vert \mathbb {B}\mathbb {B}^*\Vert _{\mathcal {L}({\mathcal K})}\). Hence, parts (b) and (c) also follow from the identities in Lemma 4.2, where part (b) uses in addition Remark 4.3. \(\square \)

By well-established results about operators with bounded \(H^{\infty }\)-calculus in Hilbert spaces, cf. [8, 5.2.2. Thm.] and also [34, Chapt. 11], [20, Cor. 7.1.8], the following holds; the notation \(\phi ^{\infty }\) for the angle of bounded \(H^\infty \)-calculus is introduced in [8, § 4.5].

Corollary 4.5

(Bounded \(H^{\infty }\)-calculus for \(D_t(\mathbb {B})\)) If \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) is a contraction, then \(D_t(\mathbb {B})\) has a bounded \(H^{\infty }\)-calculus in \(L^2({\mathcal G},\underline{a};{\mathcal X})\) of angle \(\phi _{D_t(\mathbb {B})}^{\infty }=\frac{\pi }{2}\).

Spatial operators

As before we assume that \(X_j\) are Hilbert spaces. For each edge \(j\in {\mathcal I}\cup {\mathcal E}\) let \(A_j\) be a given operator in \(X_j\) with \(D(A_j)\subset X_j\). We consider the abstract time-graph Cauchy problem

$$\begin{aligned} \left\{ \begin{aligned} (\partial _t - A_j)\psi _j(t_j)&=f_j(t_j), \qquad t_j\in \bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} I_j,\\ \underline{\psi }_- -\mathbb {B}\underline{\psi }_+&=0. \end{aligned} \right. \end{aligned}$$
(4.5)

Note that the operators \(A_j\) in \(X_j\) induce operators in \(L^2(I_j;X_j)\) which with a slight abuse of notation are also denoted by \(A_j\) and \(D(A_j)=L^2(I_j;D(A_j))\). Using this we define the operator \(A_{\mathcal E}\) in \(L^2({\mathcal G},\underline{a};{\mathcal X})\): It acts on functions supported on the time branches by

$$\begin{aligned} D(A_{\mathcal E}):=\bigoplus _{j\in {\mathcal I}\cup {\mathcal E}} L^2(I_j;D(A_j)),\quad (A_{\mathcal E}\psi )_j := A_j\psi _j, \end{aligned}$$
(4.6)

and with this the Cauchy problem (4.5) can be formulated as a maximal regularity problem

$$\begin{aligned} (D_t(\mathbb {B}) - A_{\mathcal E})\psi =f, \quad \hbox {where } \psi \in D(D_t(\mathbb {B}))\cap D(A_{\mathcal E}) \hbox { for } f\in L^2({\mathcal G},\underline{a};{\mathcal X}). \end{aligned}$$
(4.7)

Moreover, the operators \(A_j\) induce an operator \(A_{{\mathcal V}}\) in the space of boundary values \({\mathcal K}\) that acts on functions supported on the vertices by

$$\begin{aligned} D(A_{{\mathcal V}}):= \bigoplus _{j\in {\mathcal I}\cup {\mathcal E}} D(A_j), \quad (A_{{\mathcal V}}\, \underline{\psi })_j := A_j\underline{\psi }_j, \end{aligned}$$
(4.8)

which in turn induces an operator in \({\mathcal K}^2\) by

$$\begin{aligned} D(A_{{\mathcal V}^2}):= D(A_{{\mathcal V}})\oplus D(A_{{\mathcal V}}), \quad A_{{\mathcal V}^2}[\psi ]&:= \begin{bmatrix}A_{{\mathcal V}}\underline{\psi }_+ \\ A_{{\mathcal V}}\underline{\psi }_-\end{bmatrix}, \quad [\psi ]=\begin{bmatrix}\underline{\psi }_+ \\ \underline{\psi }_-\end{bmatrix}. \end{aligned}$$
(4.9)

The following lemma is straightforward.

Lemma 4.6

(Spectrum of induced operators) Let \(A_j\) be operators in \(X_j\) with domain \(D(A_j)\) for \(j\in {\mathcal I}\cup {\mathcal E}\). Then, for the induced operators \(A_{{\mathcal E}}\) in \(L^2({\mathcal G},\underline{a};{\mathcal X})\), \(A_{{\mathcal V}}\) in \({\mathcal K}\), and \(A_{{\mathcal V}^2}\) in \({\mathcal K}^2\) the following holds:

  1. (a)

    \(\sigma (A_{{\mathcal E}})=\sigma (A_{{\mathcal V}})= \sigma (A_{{\mathcal V}^2})=\bigcup _{j\in {\mathcal I}\cup {\mathcal E}}\sigma (A_j)\) as an equality of sets, i.e., without counting multiplicities;

  2. (b)

If \(-A_j\) are sectorial of angle \(\phi _{-A_j}\in [0,\pi )\), then \(-A_{{\mathcal E}},-A_{{\mathcal V}},-A_{{\mathcal V}^2}\) are sectorial with the same sectoriality angle

    $$\begin{aligned} \phi _{-A_{{\mathcal E}}}=\phi _{-A_{{\mathcal V}}}=\phi _{-A_{{\mathcal V}^2}}=\max _{j\in {\mathcal I}\cup {\mathcal E}}\phi _{-A_j}. \end{aligned}$$
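In finite dimensions part (a) is elementary to verify; a sketch with two hypothetical \(2\times 2\) blocks playing the roles of \(A_1\) and \(A_2\):

```python
import numpy as np

# Hypothetical edgewise generators on X_1 = X_2 = C^2.
A1 = np.array([[-1.0, 1.0], [0.0, -2.0]])
A2 = np.array([[-3.0, 0.0], [0.0, -4.0]])

# The induced operator A_V is block diagonal on K = X_1 (+) X_2.
AV = np.block([[A1, np.zeros((2, 2))], [np.zeros((2, 2)), A2]])

spec_union = set(np.round(np.linalg.eigvals(A1), 8)) | set(np.round(np.linalg.eigvals(A2), 8))
spec_AV = set(np.round(np.linalg.eigvals(AV), 8))
print(spec_AV == spec_union)  # True: the spectra agree as sets
```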

The Kalton and Weis sum theorem and the parabolic operator

Solvability of the inhomogeneous problem with homogeneous boundary conditions

Having specified time-derivative and spatial operators, one can now define the parabolic operator

$$\begin{aligned} P(\mathbb {B}):=D_t(\mathbb {B})-A_{\mathcal E}\quad \hbox {with} \quad D(P(\mathbb {B}))=D(D_t(\mathbb {B}))\cap D(A_{\mathcal E}). \end{aligned}$$

The Kalton–Weis sum theorem, formulated here in Theorem 2.1, can now be applied to \(D_t(\mathbb {B})\) and \(A_{\mathcal E}\) using Corollary 4.5, assuming that \(-A_{\mathcal E}\) is sectorial and that \(A_{\mathcal E}\) is resolvent commuting with \(D_t(\mathbb {B})\). This gives well-posedness for the time-graph Cauchy problem with homogeneous boundary conditions and inhomogeneous right-hand side.

Proposition 5.1

Let \({\mathcal G}\) be balanced and \(\mathbb {B}\) be a contraction in \({\mathcal K}\). Let \(X_j\) be Hilbert spaces and \(-A_j\) sectorial operators of angle \(\phi _{-A_j}<\pi /2\) in \(X_j\) for all \(j\in {\mathcal I}\cup {\mathcal E}\). Assume that \(D_t(\mathbb {B})\) and \(A_{\mathcal E}\) are resolvent commuting. Then, the operator \(P(\mathbb {B})\) is closed.

If furthermore \(A_{\mathcal E}\) or \(D_t(\mathbb {B})\) are boundedly invertible, then so is \(D_t(\mathbb {B})-A_{\mathcal E}\) and in this case there is a constant \(C=C(\mathbb {B},{\mathcal G},\underline{a})>0\) such that for any \(f\in L^2({\mathcal G},\underline{a};{\mathcal X})\) there is a unique solution \(\psi \) to (4.5) with

$$\begin{aligned} \psi \in D(D_t(\mathbb {B})) \cap D(A_{\mathcal E}) \hbox { and } \Vert \psi '\Vert _{L^2({\mathcal G},\underline{a};{\mathcal X})} + \Vert A_{\mathcal E}\psi \Vert _{L^2({\mathcal G},\underline{a};{\mathcal X})} \le C \Vert f\Vert _{L^2({\mathcal G},\underline{a};{\mathcal X})}. \end{aligned}$$

Remark 5.2

A criterion to assure that the operators \(D_t(\mathbb {B})\) and \(A_{\mathcal E}\) commute is that \((A_{{\mathcal V}}-\lambda )^{-1}\) and \(\mathbb {B}\) commute for \(\lambda \in \rho (A_{{\mathcal V}})\).

Trace spaces and the parabolic operator

The approach using the Kalton–Weis result on commuting operators allowed us to find a simple way to check solvability for the time-graph Cauchy problem with homogeneous boundary data. However, the condition that \(D_t(\mathbb {B})\) and \(A_{{\mathcal E}}\) commute seems too restrictive, since (4.5) makes sense without it; in fact, closedness of the parabolic operator can be ensured under weaker assumptions.

For notational simplicity we assume from now on that there are no external edges, i.e., the time graph is assumed to be compact. Considering the maximal parabolic operator

$$\begin{aligned} P^{\max }:=D_t^{\max } - A_{\mathcal E}, \end{aligned}$$

where \({\mathcal E}=\emptyset \), one defines the corresponding trace space

$$\begin{aligned} {\mathcal K}_A := [{\mathcal K},D(A_{\mathcal V})]_{1/2}= \bigoplus _{j\in {\mathcal I}} [X_j,D(A_j)]_{1/2}, \end{aligned}$$

where \([\cdot ,\cdot ]_\theta \) for \(\theta \in (0,1)\) denotes the complex interpolation functor. Recall that for sectorial \(-A_j\) one has the continuous embedding

$$\begin{aligned} D(D_t^{\max } -A_{\mathcal E}) \hookrightarrow \bigoplus _{j\in {\mathcal I}} BUC(I_j;[X_j,D(A_j)]_{1/2}), \end{aligned}$$
(5.1)

where BUC stands for the space of bounded uniformly continuous functions, cf. [37, Section 3.4] or [7, Theorem 4.10.2].

Definition 5.3

(Boundary conditions compatible with trace space) The operator \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) is said to be compatible with \({\mathcal K}_A\) if it restricts to an operator in \({\mathcal K}_A\), i.e., \(\mathbb {B}\vert _{{\mathcal K}_A}\in {\mathcal L}({\mathcal K}_A)\) holds.

Remark 5.4

  1. (a)

    The actual definition of trace spaces \(({\mathcal X},D(A_{\mathcal V}))_{1-1/p,p}\) uses the real interpolation functor for \(p\in (1,\infty )\), and here it is used that for \(p=2\) one has \([\cdot ,\cdot ]_{1/2}=(\cdot ,\cdot )_{1/2,2}\). If \(D(A_{\mathcal V})\subset {\mathcal X}\) is dense, then all interpolation spaces \(({\mathcal X},D(A_{\mathcal V}))_{\theta ,p},[ {\mathcal X},D(A_{\mathcal V})]_{\theta }\subset {\mathcal X}\) for \(\theta \in [0,1], p\in (1,\infty )\) are dense in \({\mathcal X}\).

  2. (b)

Note that for \(A_j\) injective and having bounded imaginary powers one has \([X_j,D(A_j)]_{1/2}=D(A_j^{1/2})\), and compatibility with the trace space \({\mathcal K}_A\) in the sense of Definition 5.3 holds provided \(A_{{\mathcal V}}^{1/2}\) and \(\mathbb {B}\) commute.
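In a finite-dimensional model the compatibility criterion in (b) is transparent: if \(\mathbb {B}\) commutes with a positive definite matrix \(A\), then it commutes with \(A^{1/2}\) defined by spectral calculus. A sketch with hypothetical data:

```python
import numpy as np

# Hypothetical model: A positive definite (mimicking A_V up to sign),
# and B commuting with A.
A = np.diag([1.0, 1.0, 4.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.5]])  # respects the eigenspaces of A, hence commutes
assert np.allclose(A @ B, B @ A)

# Square root via spectral calculus: A^{1/2} = V diag(sqrt(w)) V^T.
w, V = np.linalg.eigh(A)
A_half = (V * np.sqrt(w)) @ V.T

# B then also commutes with A^{1/2}, i.e., it is compatible with the trace space.
print(np.allclose(A_half @ B, B @ A_half))  # True
```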

Lemma 5.5

(Closedness of the parabolic operator) Let each \(-A_j\) be sectorial of angle smaller than \(\pi /2\), and let \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) be compatible with \({\mathcal K}_A\). Then, \(P(\mathbb {B})\) is a closed operator in \(L^2({\mathcal G},\underline{a};{\mathcal X})\).

Proof

One shows first that \(P^{\max }\) is closed. Note that \(P^{\max }\) decouples the edges, and hence it is sufficient to prove closedness for a graph consisting of a single interval [0, a]. Consider the operator \(P_{0,\delta }:=P(0)\), i.e., the realization with \(\mathbb {B}=0\), on the extended interval \([-\delta ,a]\) for \(\delta >0\). This operator is closed, and to trace this property back to \(P^{\max }\) one considers continuous extension and restriction operators

$$\begin{aligned} E{:}\,D(P^{\max }) \rightarrow D(P_{0,\delta })\hbox { and } R {:}\,D(P_{0,\delta }) \rightarrow D(P^{\max }) \end{aligned}$$

with \(R\circ E = \mathbb {1}_{D(P^{\max })}\), where the extension can be realized, for instance, by even reflection followed by multiplication with a cutoff function with value one on \([-\delta /2,a]\) and zero in a neighborhood of \(-\delta \). Then, \(P^{\max }=R \circ P_{0,\delta }\circ E\), and closedness follows in a straightforward manner.

Now, let \(\varphi _n\in D(P(\mathbb {B}))\) with

$$\begin{aligned} \varphi _n \rightarrow \varphi \quad \hbox {and} \quad P(\mathbb {B})\varphi _n \rightarrow \psi \quad \hbox {in } L^2({\mathcal G},\underline{a};{\mathcal X}). \end{aligned}$$

Then by closedness of \(P^{\max }\) and since \(P(\mathbb {B})\) is a restriction of \(P^{\max }\), \(\varphi \in D(P^{\max })\) and \(\psi =P^{\max }\varphi \). Using (5.1), it follows that \(\underline{\varphi _n}_{\pm } \rightarrow \underline{\varphi }_{\pm }\), and hence \(\varphi \in D(P(\mathbb {B}))\). \(\square \)

The parabolic operator and the Green’s functions approach

The operator theoretical consideration of the parabolic operator gives information on the solvability for homogeneous boundary data. However, it does not provide a solution formula, and it does not include the case of inhomogeneous boundary data. To address these issues we supplement our findings by computing explicitly the Green’s function for (4.5).

Green’s function for the parabolic problem

Now we are in the position to collect suitable assumptions for the time-graph Cauchy problem. We stress that the following assumptions are more general than those in Proposition 5.1; here \({\mathcal E}=\emptyset \) is assumed for notational simplicity only.

Assumption 6.1

Let \({\mathcal E}=\emptyset \) and \(\mathbb {B}\in {\mathcal L}({\mathcal K})\). Let \(X_j\) be a Hilbert space and \(-A_j\) a sectorial operator of angle \(\phi _{-A_j}<\pi /2\) on \(X_j\) for each \(j\in {\mathcal I}\).

In the following, a solution formula is derived that generalizes the variation of constants formula from semigroup theory. Note that square integrable maps

$$\begin{aligned} k_{ij}{:}\,I_i \times I_j\rightarrow {\mathcal L}(X_j;X_i), \quad i,j\in {\mathcal I}\end{aligned}$$

define integral operators acting on \(L^2({\mathcal G},\underline{a};{\mathcal X})\) via

$$\begin{aligned} \psi = \{\psi _i\}_{i\in {\mathcal I}} \mapsto \left\{ \sum _{j\in {\mathcal I}}\int _{I_j} k_{ij}(t_i,s_j) \psi _j(s_j) \, \mathrm{d}s_j\right\} _{i\in {\mathcal I}}. \end{aligned}$$

In this sense, the Green’s function for zero initial conditions, i.e., for \(\mathbb {B}=0\), is

$$\begin{aligned} \{r_0(t,s;A_{\mathcal E})\}_{j,l}:= {\left\{ \begin{array}{ll}e^{(t_j-s_j)A_j} &{}\quad \hbox {if }j=l \hbox { and } t_j\ge s_j, \\ 0 &{}\quad \hbox {otherwise}, \end{array}\right. } \quad t_j,s_l\in \bigsqcup _{j\in {\mathcal I}} I_j. \end{aligned}$$
(6.1)

Since each operator \(-A_j\) is sectorial of angle \(\phi _{-A_j}<\pi /2\), the operator \(A_j\) generates a bounded analytic \(C_0\)-semigroup; in particular, \(e^{t_j A_j}\) is a well-defined bounded linear operator on \(X_j\) for each \(t_j\) in the time branch \(I_j\). In the following we will adopt, for \(\underline{t}= \{t_j\}_{j\in {\mathcal I}}\), the notation

$$\begin{aligned} e^{\underline{t}A}{:}\,{\mathcal K}\rightarrow {\mathcal K}, \quad \{\underline{\psi }_{j}\}_{j\in {\mathcal I}} \mapsto \{e^{t_j A_j}\underline{\psi }_{j}\}_{j\in {\mathcal I}}, \end{aligned}$$

and hence \(e^{\underline{t}A}\in {\mathcal {L}}({\mathcal K})\) is a diagonal block operator matrix in \({\mathcal K}\).

Proposition 6.2

(Inhomogeneous problem with homogeneous boundary conditions) Under the Assumption 6.1, let \(\mathbb {B}\) be compatible with \({\mathcal K}_A\), and let \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}\) be boundedly invertible in \({\mathcal K}_A\). Then, \(P(\mathbb {B})\) is boundedly invertible, i.e., for each \(f\in L^2({\mathcal G},\underline{a};{\mathcal X})\) there exists a unique solution \(\psi \) to (4.5) in \(D(D_t(\mathbb {B}))\cap D(A_{\mathcal E})\). This \(\psi \) is given by

$$\begin{aligned} \psi =\int _{{\mathcal G}}r(\cdot ,s;\mathbb {B},A_{\mathcal E})f(s) \mathrm{d}s, \end{aligned}$$

where

$$\begin{aligned} r(t,s;\mathbb {B},A_{\mathcal E}):=r_0(t,s;A_{\mathcal E})+r_1(t,s;\mathbb {B},A_{\mathcal E}) \end{aligned}$$
(6.2)

with \(r_0(t,s;A_{\mathcal E})\) given by (6.1) and

$$\begin{aligned} r_1(t,s;\mathbb {B},A_{\mathcal E}): = e^{\underline{t}A} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}e^{(\underline{a}-\underline{s})A}. \end{aligned}$$

Remark 6.3

  1. (a)

    Note that \(e^{\underline{a}A}{:}\,{\mathcal K}_A \rightarrow D(A_{\mathcal V})\subset {\mathcal K}_A\), and therefore, if \(\mathbb {B}\) is compatible with \({\mathcal K}_A\), then also \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\) is compatible with \({\mathcal K}_A\).

  2. (b)

    Moreover if \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\in {\mathcal L}({\mathcal K})\) and \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}\in {\mathcal L}({\mathcal K}_A)\) hold, then \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}^{-1}\in {\mathcal L}({\mathcal K}_A)\) implies that \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\in {\mathcal L}({\mathcal K})\). To this end, knowing that \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}\) is closable in \({\mathcal K}\), it is sufficient to prove that \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}^{-1}\) is closable in \({\mathcal K}\), cf. [19, Lemma 2.28]. Now let

    $$\begin{aligned} (\underline{\psi }_n)_{n\in \mathbb {N}}\subset {\mathcal K}_A \quad \hbox {with } \underline{\psi }_n \rightarrow 0 \hbox { and } (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}^{-1}\underline{\psi }_n \rightarrow \underline{\varphi } \hbox { in } {\mathcal K}\hbox { as } n\rightarrow \infty . \end{aligned}$$

    Then since \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\in {\mathcal L}({\mathcal K})\) one has

    $$\begin{aligned} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}^{-1}\underline{\psi }_n=\underline{\psi }_n \rightarrow (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\underline{\varphi }, \end{aligned}$$

    and since \(\underline{\psi }_n \rightarrow 0\) the claim follows.
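One sufficient condition for the invertibility of \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\) is \(\Vert \mathbb {B}\, e^{\underline{a}A}\Vert <1\), in which case the inverse is given by a Neumann series. A numerical sketch with hypothetical scalar generators on two edges:

```python
import numpy as np

# Hypothetical data: scalar generators A_j = lam_j on two edges of lengths a_j,
# so that e^{aA} = diag(exp(a_j * lam_j)) on K = C^2, and a coupling B.
a = np.array([1.0, 2.0])
lam = np.array([-1.0, -0.5])
E = np.diag(np.exp(a * lam))            # e^{aA}
B = np.array([[0.0, 0.9], [0.9, 0.0]])  # contraction, ||B|| = 0.9

M = np.eye(2) - B @ E
# ||B e^{aA}|| <= ||B|| * ||e^{aA}|| < 1, so the Neumann series converges:
neumann = sum(np.linalg.matrix_power(B @ E, k) for k in range(200))
print(np.allclose(neumann, np.linalg.inv(M)))  # True
```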

Proof of Proposition 6.2

A solution to the Eq. (4.5) is on each edge \(i \in {\mathcal I}\) of the form

$$\begin{aligned} \psi _i(t_i)= e^{t_iA_i}c_i + \int _0^{t_i} e^{(t_i-s_i)A_i}f_i(s_i) \, \mathrm{d} s_i, \end{aligned}$$
(6.3)

for some vector \(c_i\in X_i\) that is ‘inherited’ from the final state in the preceding edges. Indeed, the boundary condition can be used to determine \(c_i\). Since

$$\begin{aligned} \underline{\psi }_-= \{c_i\}_{i\in {\mathcal I}} \quad \hbox {and} \quad \underline{\psi }_+= \left\{ e^{a_iA_i}c_i + \int _0^{a_i} e^{(a_i-s_i)A_i}f_i(s_i) \, \mathrm{d} s_i\right\} _{i\in {\mathcal I}}, \end{aligned}$$

recalling that \(a_i\) denotes the length of the edge i, the condition \(\underline{\psi }_-=\mathbb {B}\underline{\psi }_+\) gives

$$\begin{aligned} c_{i} = \sum _{j\in {\mathcal I}}\mathbb {B}_{ij} \left( e^{a_jA_j}c_j + \int _0^{a_j} e^{(a_j-s_j)A_j}f_j(s_j) \, \mathrm{d} s_j\right) . \end{aligned}$$

Hence, we obtain the vector-valued identity for \(c=\{c_i\}_{i\in {\mathcal I}}\)

$$\begin{aligned} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})c= \mathbb {B}\left\{ \int _0^{\underline{a}} e^{(\underline{a}-\underline{s})A}f(s) \mathrm{d} s\right\} , \end{aligned}$$

and because \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\) is assumed to be invertible

$$\begin{aligned} c= (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}\left\{ \int _0^{\underline{a}} e^{(\underline{a}-\underline{s})A}f(s) \mathrm{d} s\right\} , \end{aligned}$$

whence

$$\begin{aligned} \psi (t)= \int _0^{\underline{a}} e^{\underline{t}A}(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}e^{(\underline{a}-\underline{s})A}f(s) \, \mathrm{d}s + \int _0^{\underline{t}} e^{(\underline{t}-\underline{s})A}f(s) \, \mathrm{d}s. \end{aligned}$$
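For a single loop edge with scalar data the explicit solution formula above can be implemented directly. A sketch with hypothetical coefficients \(A=-1\), \(b=\tfrac{1}{2}\), \(f(t)=\cos t\):

```python
import numpy as np

# Scalar model on a loop edge [0, a]: psi' - A psi = f, psi(0) = b * psi(a).
a, A, b = 2.0, -1.0, 0.5
t = np.linspace(0.0, a, 2001)
f = np.cos(t)

def integral(x, y):
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

# c = (1 - b e^{aA})^{-1} b int_0^a e^{(a-s)A} f(s) ds   (the r_1 contribution)
Fa = integral(t, np.exp((a - t) * A) * f)
c = b * Fa / (1.0 - b * np.exp(a * A))

# psi(t) = e^{tA} c + int_0^t e^{(t-s)A} f(s) ds   (variation of constants, r_0 part)
conv = np.array([integral(t[:k + 1], np.exp((t[k] - t[:k + 1]) * A) * f[:k + 1])
                 for k in range(len(t))])
psi = np.exp(t * A) * c + conv

# The coupling condition psi_- = B psi_+ holds by construction:
print(abs(psi[0] - b * psi[-1]))
```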

Recall that the Green’s function is the integral kernel of the inverse operator, i.e., a function \(r(t,s):=r(t,s;\mathbb {B},A_{\mathcal E})\) such that it defines a left and a right inverse of \(P(\mathbb {B})\), i.e.,

  1. (a)

    \(\varphi (t)=(D_t(\mathbb {B})-A_{\mathcal E})\int _{{\mathcal G}} r(t,s)\varphi (s) \mathrm{d}s\) for \(\varphi \in L^2({\mathcal G},\underline{a};{\mathcal X})\),

  2. (b)

    \(\psi (t)=\int _{{\mathcal G}}r(t,s) (D_t(\mathbb {B})-A_{\mathcal E})\psi (s) \mathrm{d}s\) for \(\psi \in D(P(\mathbb {B}))\).

First, note that

$$\begin{aligned} \psi _{f}^0:= \int _{{\mathcal G}}r_0(\cdot ,s;A)f(s) \mathrm{d}s \quad \hbox {and}\quad \psi _{f}^1:= \int _{{\mathcal G}}r_{1}(\cdot ,s;\mathbb {B},A)f(s) \mathrm{d}s \end{aligned}$$
(6.4)

solve \((\partial _t-A_j)(\psi _{f}^0)_j=f_j\) and \((\partial _t-A_j)(\psi _{f}^1)_j=0\) on each edge \(j\in {\mathcal I}\), respectively, where one applies the classical variation of constants formula and the properties of the semigroups \(e^{t_jA_j}\). Hence, \(\psi =\psi _f^0+\psi _f^1\) solves \((\partial _t-A_j)\psi _j=f_j\) on each edge with \(\psi \in D(P^{\max })\). Here, \(\psi _f^1\) is the correction term for the variation of constants term \(\psi _f^0\) assuring that the boundary conditions are satisfied.

Secondly, one has to prove that \(\psi \) satisfies the boundary conditions, and indeed

$$\begin{aligned} \underline{\psi }_-&=\underline{\psi _1}_-= (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}\int _{{\mathcal G}} e^{(\underline{a}-\underline{s})A} f(s) \mathrm{d}s, \\ -\mathbb {B}\underline{\psi }_+&= -\mathbb {B}\int _{{\mathcal G}} e^{(\underline{a}-\underline{s})A} f(s) \mathrm{d}s - \mathbb {B}e^{\underline{a}A} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}\int _{{\mathcal G}} e^{(\underline{a}-\underline{s})A} f(s) \mathrm{d}s, \end{aligned}$$

hence \(\underline{\psi }_- - \mathbb {B}\underline{\psi }_+=0\). We conclude that \(\int _{{\mathcal G}} r(\cdot ,s;\mathbb {B},A)\cdot \mathrm{d}s\) is the right inverse of \(P(\mathbb {B})\).

Note that the adjoint kernels

$$\begin{aligned} \{r_0(t,s;A_{\mathcal E})^*\}_{j,l}:= {\left\{ \begin{array}{ll}e^{(s_j-t_j)A_j^*} &{} \hbox {if }j=l \hbox { and } t_j< s_j, \\ 0 &{}\hbox {otherwise}, \end{array}\right. } \quad t_j,s_l\in \bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} I_j \end{aligned}$$
(6.5)

consist of the Green’s function for the time-reversed problems

$$\begin{aligned} (-\partial _{t_j} -A_j^*) \psi _j =f_j, \quad \psi _j(a_j)=0,\quad j\in {\mathcal I}, \end{aligned}$$

and since \( (-\partial _{t_j} -A_j^*) e^{(a_j-t_j)A_j^*} =0\) one has

$$\begin{aligned} (-D_t -A^*)\int _{{\mathcal G}}r_1(s,t;\mathbb {B},A)^*f(s) \mathrm{d}s=0 \end{aligned}$$

with

$$\begin{aligned} r_1(s,t;\mathbb {B},A)^* = e^{(\underline{a}-\underline{t})A^*} \mathbb {B}^*(\mathbb {1}-e^{\underline{a}A^*} \, \mathbb {B}^*)^{-1} e^{\underline{s}A^*} \end{aligned}$$

and concerning the boundary conditions

$$\begin{aligned} \underline{\psi }_+&=\underline{\psi _1}_+= \mathbb {B}^*(\mathbb {1}-e^{\underline{a}A^*} \, \mathbb {B}^*)^{-1} \int _{{\mathcal G}} e^{\underline{s}A^*} f(s) \mathrm{d}s, \\ -\mathbb {B}^*\underline{\psi }_-&= -\mathbb {B}^* \int _{{\mathcal G}} e^{\underline{s}A^*} f(s) \mathrm{d}s - \mathbb {B}^* e^{\underline{a}A^*} \mathbb {B}^*(\mathbb {1}-e^{\underline{a}A^*} \, \mathbb {B}^*)^{-1} \int _{{\mathcal G}}e^{\underline{s}A^*} f(s) \mathrm{d}s, \end{aligned}$$

hence \(\underline{\psi }_+ - \mathbb {B}^*\underline{\psi }_-=0\).

To conclude, note that \(\mathbb {1}- e^{\underline{a}A^*}\,\mathbb {B}^*\) is invertible if and only if so is its adjoint \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\). We have thus proven that the adjoint of \(\int _{{\mathcal G}} r(\cdot ,s;\mathbb {B},A)\cdot \mathrm{d}s\) is a right inverse of \(P(\mathbb {B})^*\). Taking adjoints, we find \(\int _{{\mathcal G}} r(\cdot ,s;\mathbb {B},A)\cdot \mathrm{d}s \, P(\mathbb {B})= \mathbb {1}_{D(P(\mathbb {B}))}\), i.e., \(\int _{{\mathcal G}}r(\cdot ,s;\mathbb {B},A)\cdot \mathrm{d}s\) is also a left inverse of \(P(\mathbb {B})\). \(\square \)

Remark 6.4

Two sufficient conditions for invertibility of \(\mathbb {1}-\mathbb {B}e^{\underline{a}A}\) are that each \(A_i\) is m-dissipative and \(\mathbb {B}\) is a strict contraction; or that each \(A_i+\epsilon \) is m-dissipative for some \(\epsilon >0\) and \(\mathbb {B}\) is a contraction.
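The first of these criteria can be illustrated in finite dimensions. The following sketch is our own numerical illustration, not part of the theory: a skew-symmetric matrix stands in for an m-dissipative generator (so that \(\Vert e^{tA}\Vert =1\)), a scaled orthogonal matrix for the strict contraction \(\mathbb {B}\), and the helper `expmt` is an ad hoc matrix exponential.

```python
import numpy as np

def expmt(M, t):
    # e^{tM} via eigendecomposition (M is assumed diagonalizable,
    # which holds for the matrices below)
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real

rng = np.random.default_rng(2)
n, a = 5, 1.0
S = rng.standard_normal((n, n))
A = S - S.T                                   # skew-symmetric: ||e^{tA}|| = 1 (dissipative borderline case)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = 0.9 * Q                                   # strict contraction: ||B|| = 0.9 < 1

X = B @ expmt(A, a)
assert np.linalg.norm(X, 2) < 1.0             # hence I - X is invertible via a Neumann series

neumann = sum(np.linalg.matrix_power(X, k) for k in range(400))
assert np.allclose(neumann, np.linalg.inv(np.eye(n) - X), atol=1e-8)
print("invertibility criterion verified")
```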

For general \(\mathbb {B}\) the resolvent of \(D_t(\mathbb {B})\) can be obtained by applying Proposition 6.2 with \(A_j:=\lambda \mathbb {1}_{X_j}\), \(\lambda \in \mathbb {C}\), which induces the operator \(A_{{\mathcal E}}=\lambda _{{\mathcal E}}\).

Corollary 6.5

(Resolvent of \(D_t(\mathbb {B})\)) Let \({\mathcal E}=\emptyset \) and \(\mathbb {B}\in {\mathcal L}({\mathcal K})\). If \(\mathbb {1}-\mathbb {B}e^{\lambda \underline{a}}\) is invertible in \({\mathcal K}\), then \(\lambda \in \rho (D_t(\mathbb {B}))\) and the unique solution \(\psi \in D(D_t(\mathbb {B}))\) to

$$\begin{aligned} (D_t(\mathbb {B})-\lambda ) \psi =f\quad \hbox {for} \quad f\in L^2({\mathcal G},\underline{a};{\mathcal X}) \end{aligned}$$

is given by \( \psi =\int _{{\mathcal G}}r(\cdot ,s;\mathbb {B},\lambda _{{\mathcal E}})f(s) \mathrm{d}s\).

The inverse of the parabolic operator \(P(\mathbb {B})\) can be seen as being given by a functional calculus where the spectral parameter in Corollary 6.5 is replaced by the operator A. This is akin to the case of classical semigroups, where the solution operator of the ordinary differential equation \((\partial _t -\lambda )\psi =f, \quad \psi (0)=\psi _0,\) is considered, and semigroup theory—interpreted as functional calculus for the exponential functions—allows one to ‘replace \(\lambda \) by some generator A.’
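The solution formula for \(P(\mathbb {B})^{-1}\) can also be sanity-checked in a finite-dimensional surrogate. The sketch below is our own illustration under ad hoc choices (a matrix \(A\) with spectrum in the left half-plane, a small coupling matrix \(\mathbb {B}\), a single loop edge, constant \(f\), and a home-made exponential `expmt`): it evaluates the formula and verifies both the coupling condition \(\psi (0)=\mathbb {B}\psi (a)\) and the differential equation.

```python
import numpy as np

def expmt(M, t):
    # e^{tM} via eigendecomposition (M assumed diagonalizable)
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real

rng = np.random.default_rng(1)
n, a = 4, 1.0
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))  # spectrum in the left half-plane
B = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)        # small coupling: I - B e^{aA} invertible
f = rng.standard_normal(n)                                # constant inhomogeneity

I = np.eye(n)
conv = lambda t: np.linalg.inv(A) @ (expmt(A, t) - I) @ f  # int_0^t e^{(t-s)A} f ds, exact for constant f

# psi(t) = e^{tA}(I - B e^{aA})^{-1} B int_0^a e^{(a-s)A} f ds + int_0^t e^{(t-s)A} f ds
v = np.linalg.solve(I - B @ expmt(A, a), B @ conv(a))
psi = lambda t: expmt(A, t) @ v + conv(t)

assert np.allclose(psi(0.0), B @ psi(a), atol=1e-8)       # nonlocal coupling condition holds
h, t0 = 1e-6, 0.37
resid = (psi(t0 + h) - psi(t0 - h)) / (2 * h) - A @ psi(t0) - f
assert np.linalg.norm(resid) < 1e-4                       # ODE residual vanishes up to discretization
print("solution formula verified")
```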

Inhomogeneous boundary conditions

So far, we have implicitly focused on the case of zero boundary conditions imposed on the sources of the time graph, i.e., at the initial endpoints of those time branches that have no predecessors. This is clearly a relevant limitation: it would, for example, lead to identically vanishing solutions as soon as \(f\equiv 0\). Initial conditions can be incorporated by interpreting them as inhomogeneous boundary conditions with respect to time. Thus, one considers the problem

$$\begin{aligned} \left\{ \begin{aligned} (\partial _t - A_j)\psi _j(t_j)&=f_j(t_j), \qquad t_j\in \bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} I_j\\ \underline{\psi }_- - \mathbb {B}\underline{\psi }_+&=\underline{g}\end{aligned} \right. \end{aligned}$$
(6.6)

for given \(\underline{g}\in {\mathcal K}\) and \(f\in L^2({\mathcal G},\underline{a};{\mathcal X})\). For \(\mathbb {B}=0\) this corresponds to the usual initial condition \(\psi (0)=\underline{g}\).

The solution to this problem can be computed using the Green’s function, where—as for ordinary differential equations with inhomogeneous boundary conditions—the Lagrange identity (4.1) plays an important role. We start with a heuristic argument. Integration by parts yields

$$\begin{aligned} \int _{{\mathcal G}} \left[ (\partial _s r(t,s;A_{\mathcal E})) \psi (s) + r(t,s;A_{\mathcal E})\partial _s \psi (s)\right] \mathrm{d}s = [r(t,\cdot ;A_{\mathcal E}) \psi (\cdot )]_{\partial {\mathcal G}} \end{aligned}$$

where

$$\begin{aligned} {[}r(t,\cdot ;A_{\mathcal E}) \psi (\cdot )]_{\partial {\mathcal G}} = (r(t,a_j;A_{\mathcal E}) \psi (a_j) - r(t,0;A_{\mathcal E}) \psi (0))_{j\in {\mathcal I}}. \end{aligned}$$

Due to the properties of the Green’s function

$$\begin{aligned} \int _{{\mathcal G}} (\partial _s r(t,s;A_{\mathcal E})) \psi (s) \mathrm{d}s&= \int _{{\mathcal G}} r(t,s;A_{\mathcal E})(-A_{\mathcal E})\psi (s) \mathrm{d}s \\ \int _{{\mathcal G}} (\partial _t r(t,s;A_{\mathcal E})) \psi (s) \mathrm{d}s&= \int _{{\mathcal G}} A_{\mathcal E}r(t,s;A_{\mathcal E})\psi (s) \mathrm{d}s. \end{aligned}$$

Hence,

$$\begin{aligned}&\int _{{\mathcal G}} (\partial _s r(t,s;A_{\mathcal E})) \psi (s) + r(t,s;A_{\mathcal E})\partial _s \psi (s) \mathrm{d}s = \int _{{\mathcal G}} A_{\mathcal E}^{-1}((\partial _t-A_{\mathcal E}) r(t,s;A_{\mathcal E})) (-A_{\mathcal E}\psi (s)) \\&\quad \quad +\, r(t,s;A_{\mathcal E})(\partial _s -A_{\mathcal E}) \psi (s) - A_{\mathcal E}^{-1}A_{\mathcal E}r(t,s;A_{\mathcal E}) (-A_{\mathcal E}\psi (s)) - r(t,s;A_{\mathcal E})A_{\mathcal E}\psi (s)\,\mathrm{d}s \\&\quad = -\psi (t) + \int _{{\mathcal G}}r(t,s;A_{\mathcal E}) f(s) \mathrm{d}s, \end{aligned}$$

where one uses that

$$\begin{aligned} \int _{{\mathcal G}} (\partial _t -A_{\mathcal E})r(t,s;A_{\mathcal E}) \psi (s) \mathrm{d}s = \psi (t) \quad \hbox {and} \quad (\partial _s -A_{\mathcal E}) \psi (s) = f(s), \end{aligned}$$

assuming that \(\psi \) solves the Cauchy problem. Hence,

$$\begin{aligned} \psi (t)= \psi _{0}(t) +\int _{{\mathcal G}} r(t,s;A_{\mathcal E}) f(s) \mathrm{d}s, \quad \psi _0(t)=-[r(t,\cdot ;A_{\mathcal E})\psi (\cdot )]_{\partial {\mathcal G}}, \end{aligned}$$

where \( \psi _0\) is given more explicitly by

$$\begin{aligned} \psi _0(t) = e^{A\underline{t}} \underline{\psi }_- + e^{\underline{t}A} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}e^{\underline{a}A}\underline{\psi }_- - e^{\underline{t}A} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}\underline{\psi }_+. \end{aligned}$$

For \(\underline{g}:= \underline{\psi }_- -\mathbb {B}\underline{\psi }_+\) one obtains \(\psi _0(t) = e^{A\underline{t}} [\mathbb {1} + (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}e^{\underline{a}A}]\underline{g}= e^{\underline{t}A}(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\underline{g}\).
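The resulting formula \(\psi _0(t) = e^{\underline{t}A}(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\underline{g}\) can be checked in finite dimensions against the inhomogeneous coupling condition. The following is our own numerical sketch with arbitrary illustrative matrices and the ad hoc helper `expmt`.

```python
import numpy as np

def expmt(M, t):
    # e^{tM} via eigendecomposition (M assumed diagonalizable)
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real

rng = np.random.default_rng(3)
n, a = 4, 1.0
A = -1.5 * np.eye(n) + 0.2 * rng.standard_normal((n, n))
B = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
g = rng.standard_normal(n)

E = expmt(A, a)
w0 = np.linalg.solve(np.eye(n) - B @ E, g)     # (I - B e^{aA})^{-1} g
psi0 = lambda t: expmt(A, t) @ w0              # psi_0(t) = e^{tA}(I - B e^{aA})^{-1} g

# psi_0 satisfies the inhomogeneous coupling condition psi_0(0) - B psi_0(a) = g
assert np.allclose(psi0(0.0) - B @ psi0(a), g, atol=1e-9)
print("trace identity verified")
```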

Theorem 6.6

Let Assumption 6.1 be fulfilled and let \(\mathbb {B}\) be compatible with \({\mathcal K}_A\). If \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}\) is boundedly invertible in \({\mathcal K}_A\), then for any \(\underline{g}\in {\mathcal K}_A\) and \(f\in L^2({\mathcal G},\underline{a};{\mathcal X})\) there is a unique solution

$$\begin{aligned} \psi \in W^{1,2}({\mathcal G},\underline{a};{\mathcal X}) \cap L^2({\mathcal G},\underline{a};D(A_{\mathcal E})) \end{aligned}$$

to (6.6). The solution is given by

$$\begin{aligned} \psi (t) = \psi _0(t)+\int _{{\mathcal G}} r(t,s;\mathbb {B},A_{\mathcal E})f(s) \mathrm{d}s,\qquad t\in ({\mathcal G},\underline{a}) \end{aligned}$$
(6.7)

where the kernel \(r(\cdot ,\cdot ;\mathbb {B},A)\) is given in (6.2), and

$$\begin{aligned} \psi _0(t) := e^{\underline{t}A}(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\underline{g},\qquad t\in ({\mathcal G},\underline{a}). \end{aligned}$$

In particular there exists a constant C independent of f and \(\underline{g}\) such that

$$\begin{aligned} \Vert D_t\psi \Vert _{L^2({\mathcal G},\underline{a};{\mathcal X})} + \Vert A_{\mathcal E}\psi \Vert _{L^2({\mathcal G},\underline{a};{\mathcal X})} \le C (\Vert f\Vert _{L^2({\mathcal G},\underline{a};{\mathcal X})} + \Vert \underline{g}\Vert _{{\mathcal K}_A}). \end{aligned}$$

Proof

Note that \(\psi _0\in W^{1,2}({\mathcal G},\underline{a};{\mathcal X}) \cap L^2({\mathcal G},\underline{a};D(A_{\mathcal E}))\) since \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\underline{g}\in {\mathcal K}_A\) and all \(-A_j\) are sectorial, cf. [37, § 3.4] and in particular [37, Prop. 3.4.2], which can be adapted to finite intervals. Its traces satisfy

$$\begin{aligned} \underline{\psi _0}_-&= \underline{g}+ (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}e^{\underline{a}A}\underline{g},\\ \underline{\psi _0}_+&= e^{\underline{a}A} \underline{g}+ e^{ \underline{a}A} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}e^{\underline{a}A}\underline{g}, \end{aligned}$$

and hence \(\underline{\psi _0}_- -\mathbb {B}\underline{\psi _0}_+=\underline{g}\), and \((\psi _0)_j\) solves \((\partial _{t_j}-A_j)(\psi _0)_j=0\) on each edge \(j\in {\mathcal I}\).

To prove uniqueness, assume that there is another solution \(\psi '\) to (6.6) in the solution space, and consider the difference \( \psi _{\delta }:=\psi '-\psi \), which by linearity of the equation solves

$$\begin{aligned} \left\{ \begin{aligned} (\partial _t - A_j)(\psi _\delta )_j&=0,\qquad t_j\in \bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} I_j, \quad j\in {\mathcal I}\cup {\mathcal E},\\ \underline{\psi _\delta }_- - \mathbb {B}\underline{\psi _\delta }_+&=0. \end{aligned} \right. \end{aligned}$$

Because of the former equation, there exists \(\underline{g}'\in {\mathcal K}_A\) such that \(\psi _{\delta }=e^{\underline{t}A}\underline{g}'\), while the latter implies

$$\begin{aligned} (\mathbb {1}-\mathbb {B}e^{\underline{a}A})\underline{g}'=0, \end{aligned}$$

and by the invertibility of \(\mathbb {1}-\mathbb {B}e^{\underline{a}A}\) it follows that \(\psi _\delta \equiv 0\). Hence, the inhomogeneous boundary value problem is uniquely solvable with \(\psi _0\) in the maximal \(L^2\)-regularity class, and the rest of the statement follows from Proposition 6.2. \(\square \)

Remark 6.7

  1. (a)

The solution formula given in Theorem 6.6 is a generalization of the well-known variation of constants formula. Considering only one interval [0, a] with initial condition \(\psi (0)=\underline{g}\), i.e., \(\mathbb {B}=0\), we find that \(\psi _0(t) = e^{tA} \underline{g}\) and \(r(t,s;\mathbb {B},A_{\mathcal E})=r_0(t,s;A_{\mathcal E})\).

  2. (b)

Another classical case is that of periodic boundary conditions: one interval [0, a] with \(\psi (0)=\psi (a)\), i.e., \(\mathbb {B}=\mathbb {1}\).

  3. (c)

    The solution to the time-graph Cauchy problem certainly satisfies a semigroup law on each edge.

Mapping properties

Assume that \(X_j=L^2(S_j,\mu _j)\) for some measure space \((S_j,\Sigma _j,\mu _j)\) are spaces of complex valued functions, and denote by \(X_{j,\mathbb {R}}\) the cone of real valued functions. This induces spaces \({\mathcal K}_{\mathbb {R}}\) and \({\mathcal K}_{\mathbb {R}}^2\).

Proposition 6.8

Let the assumptions of Theorem 6.6 be satisfied and let \(X_j=L^2(S_j,\mu _j)\) be Hilbert spaces of complex valued functions.

  1. (a)

    If \(\mathbb {B}\) and the operator families \((e^{t_jA_j})_{t_j\in I_j}\) leave \({\mathcal K}_{\mathbb {R}}\) invariant, then the solution \(\psi \) in Theorem 6.6 is real for real data \(\underline{g}\) and f.

  2. (b)

If in addition to (a) the operators \(\mathbb {B}\), \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\) and the operator families \((e^{t_jA_j})_{t_j\in I_j}\) are positivity preserving, then the solution \(\psi \) in Theorem 6.6 is positive for all times \(t\in ({\mathcal G},\underline{a})\) provided the data \(\underline{g}\) and f are positive.

  3. (c)

    If the operator families \((e^{t_jA_j})_{t_j\in I_j}\) as well as \(\mathbb {B}\) and \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1} \) are \(L^{\infty }\)-bounded, then the solution operator in Theorem 6.6 is \(L^{\infty }\)-bounded. The solution operator defined by \(\psi _0\) is \(L^\infty \)-contractive whenever so are the operator families \((e^{t_jA_j})_{t_j\in I_j}\) and \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1} \). If additionally \(\mathbb {B}\) and \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1} \) are \(L^1\)-bounded, then the solution operator extrapolates to all \(L^p\)-spaces.

Proof

We have shown in Proposition 6.2 that the Green’s function is given by \(r_0(\cdot ,\cdot ;A)+r_1(\cdot ,\cdot ;\mathbb {B},A)\). The claimed properties of the solution to (6.6) follow as soon as corresponding properties hold for \(r_0(\cdot ,\cdot ;A)\), \(r_1(\cdot ,\cdot ;\mathbb {B},A)\), and \(\psi _0\), where the corresponding properties of \(r_0\) are covered by the classical theory. Now, \(r_1\) can be studied using its factorization into operators that each enjoy the corresponding properties. For the mapping properties of \(\psi _0\) analogous arguments apply. \(\square \)
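Part (b) can be illustrated in finite dimensions: Metzler matrices (nonnegative off-diagonal entries) are the standard stand-ins for generators of positivity preserving semigroups, and an entrywise nonnegative coupling of small norm makes the Neumann series of \((\mathbb {1}-\mathbb {B}e^{\underline{a}A})^{-1}\) nonnegative. This is our own sketch with illustrative numbers and the ad hoc helper `expmt`.

```python
import numpy as np

def expmt(M, t):
    # e^{tM} via eigendecomposition (M assumed diagonalizable)
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real

rng = np.random.default_rng(4)
n, a = 4, 1.0
A = rng.uniform(0, 1, (n, n))
np.fill_diagonal(A, -3.0)                     # Metzler matrix: e^{tA} is entrywise nonnegative
B = 0.1 * rng.uniform(0, 1, (n, n))           # entrywise nonnegative coupling of small norm
g = rng.uniform(0, 1, n)                      # positive boundary datum
f = rng.uniform(0, 1, n)                      # positive constant inhomogeneity

I, E = np.eye(n), expmt(A, a)
R = np.linalg.inv(I - B @ E)                  # Neumann series of nonnegative terms, hence nonnegative
conv = lambda t: np.linalg.inv(A) @ (expmt(A, t) - I) @ f   # int_0^t e^{(t-s)A} f ds >= 0
v = R @ (g + B @ conv(a))                     # combines psi_0 and the r_1-correction
psi = lambda t: expmt(A, t) @ v + conv(t)

# positive data yield a positive solution along the whole edge
assert all((psi(t) >= -1e-10).all() for t in np.linspace(0.0, a, 11))
print("positivity preserved")
```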

Maximal \(L^p\)-regularity

For notational and mathematical simplicity, we have focused on the Hilbert space case and on maximal \(L^2\)-regularity. In the case of evolution equations on \({\mathbb {R}}_+\), under the assumptions of Proposition 6.8(c) the semigroup \(e^{tA}\) extrapolates to a \(C_0\)-semigroup on all \(L^p\)-spaces, \(p\in [1,\infty )\); this semigroup is additionally analytic on \(L^p\), \(p\in [1,\infty )\), if \(e^{tA}\) satisfies Gaussian estimates. By a celebrated result in [28] this implies in turn \(L^p\)-maximal regularity for \(p\in (1,\infty )\), but our theory does not seem to allow us to discuss kernel estimates. However, the solution formulae (6.2) and (6.7) suggest a straightforward generalization to the general case of maximal \(L^p\)-regularity in Banach spaces.

To this end, let \(X_j\) be Banach spaces and \(p\in (1,\infty )\). Consider the trace space

$$\begin{aligned} {\mathcal K}_{A,p}:= (X,D(A_{\mathcal V}))_{1-1/p,p}= \bigoplus _{j\in {\mathcal I}} (X_j,D(A_j))_{1-1/p,p}, \end{aligned}$$

and collect the following assumptions.

Assumption 6.9

Assume that \({\mathcal E}=\emptyset \), that the \(X_j\) are Banach spaces of class UMD, and that \(p\in (1,\infty )\). Suppose that \(-A_j\) are \({\mathcal R}\)-sectorial operators in \(X_j\) of angle smaller than \(\pi /2\), and that \(\mathbb {B}\in {\mathcal L}({\mathcal K})\).

Proposition 6.10

(Maximal \(L^p\)-regularity) Let Assumption 6.9 be fulfilled and let \(\mathbb {B}\) be compatible with \({\mathcal K}_{A,p}\). If \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\) is boundedly invertible in \({\mathcal K}_{A,p}\), then for any \(\underline{g}\in {\mathcal K}_{A,p}\) and \(f\in L^p({\mathcal G},\underline{a};{\mathcal X})\) there is a unique solution

$$\begin{aligned} \psi \in W^{1,p}({\mathcal G},\underline{a};{\mathcal X}) \cap L^p({\mathcal G},\underline{a};D(A_{\mathcal E})) \end{aligned}$$

to (6.6), where the solution is given by the same formulae as in Theorem 6.6 and

$$\begin{aligned} \Vert D_t\psi \Vert _{L^p({\mathcal G},\underline{a};{\mathcal X})} + \Vert A_{\mathcal E}\psi \Vert _{L^p({\mathcal G},\underline{a};{\mathcal X})} \le C (\Vert f\Vert _{L^p({\mathcal G},\underline{a};{\mathcal X})} + \Vert \underline{g}\Vert _{{\mathcal K}_{A,p}}). \end{aligned}$$

Proof

The unperturbed part of the Green’s function \(r_0(\cdot ,\cdot ; A)\) defines a bounded operator

$$\begin{aligned} L^p({\mathcal G},\underline{a};{\mathcal X}) \rightarrow D(D_t(\mathbb {B}))\cap L^p({\mathcal G},\underline{a};D(A)), \quad \hbox {where }\mathbb {B}=0. \end{aligned}$$

It remains to verify that the correction terms have the same mapping properties. First,

$$\begin{aligned} \psi _0(t) = e^{tA}(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\underline{g}, \end{aligned}$$

and by assumption \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\underline{g}\in {\mathcal K}_{A,p}\), and hence \(\psi _0\) lies in the maximal regularity space, cf. [37, Prop. 3.4.2] which can be adapted to finite intervals. Secondly,

$$\begin{aligned} r_1(t,s;\mathbb {B},A) = e^{\underline{t}A} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}e^{(\underline{a}-\underline{s})A}. \end{aligned}$$

Using that \(\int _{{\mathcal G}}e^{(\underline{a}-\underline{s})A} f(s) \mathrm{d}s\in {\mathcal K}_{A,p}\), which follows from the classical variation of constants formula and maximal \(L^p\)-regularity of the initial value problem, one sees that \(\int _{{\mathcal G}}r_1(t,s;\mathbb {B},A) f(s)\,\mathrm{d}s\) lies in the maximal \(L^p\)-regularity space.

The proofs of Proposition 6.2 and Theorem 6.6 now carry over to the present situation. \(\square \)

Remark 6.11

(Transference principle) Transference principles relating maximal \(L^p\)-regularity for the initial value problem to maximal \(L^p\)-regularity for the time-periodic problem on the real line are well established. Here, if \({\mathcal E}=\emptyset \) and \(P(\mathbb {B})\) has maximal \(L^p\)-regularity for \(\mathbb {B}=0\), then \(P(\mathbb {B})\) has maximal \(L^p\)-regularity for any \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) satisfying the assumptions of Proposition 6.10.

Regularity and other notions of solutions

So far, we have focused on solvability in maximal regularity spaces, since these fit into a suitable functional analytic framework. Note that the solution formula from Theorem 6.6 can be made sense of even under milder assumptions. Consider the case where \({\mathcal E}=\emptyset \), \(X_j\) are Banach spaces, the operators \(A_i\) generate \(C_0\)-semigroups in \(X_i\), and \(\mathbb {B}\in {\mathcal L}({\mathcal K})\). Classical results from semigroup theory carry over as long as sufficient compatibility of \(\mathbb {B}\) is assumed. For instance, smoother inhomogeneous transmission data improve the regularity of solutions.

Mild solutions

Under the assumptions that

$$\begin{aligned} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\in {\mathcal L}({\mathcal K}), \quad \underline{g}\in {\mathcal K}, \quad \hbox {and } f\in L^1({\mathcal G},\underline{a};{\mathcal X}) \end{aligned}$$

the function defined by (6.7) is a mild solution on each edge, i.e., \(\psi _j\in C(\overline{I_j};X_j)\) for each \(j\in {\mathcal I}\), cf. [5, Prop. 1.3.4] or [16, Prop. VI.7.4 and Prop. VI.7.5], and the boundary conditions are attained in the larger space \({\mathcal K}\), while in Theorem 6.6 they were attained even in \({\mathcal K}_A\).

Classical solutions

For \(\mathbb {B}\) compatible with \(D(A_{\mathcal V})\), the conditions

$$\begin{aligned}&(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\in {\mathcal L}(D(A_{\mathcal V})), \quad \underline{g}\in D(A_{\mathcal V}), \quad \hbox {and } \\&\quad f=\{f_j\}_{j\in {\mathcal I}} \hbox { with } f_j\in W^{1,1}(I_j;{\mathcal X}_j) \hbox { or } f_j\in C^{0}(\overline{I_j};{\mathcal X}_j) \hbox { for } j\in {\mathcal I}, \end{aligned}$$

respectively, imply that the solution is classical on each edge, i.e., continuously differentiable with respect to time, cf. [16, Cor. VI.7.6] and [16, Cor. VI.7.8]. Of course there are many refinements of the classical semigroup theory which one can carry over to time graphs by assuming sufficient compatibility between \(\mathbb {B}\) and the inverse of \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\).

Iterative solvability

A time-graph Cauchy problem is iteratively solvable if it reduces to a finite sequence of initial value problems. This is made precise in the following definition.

Definition 7.1

(Iterative solvability) Assume that \({\mathcal E}=\emptyset \) and \(|{\mathcal I}|=n\), and that there exists an ordering of the edges \(i_1, \ldots , i_n\) such that the solution to (4.5) satisfies

$$\begin{aligned} \partial _t \psi _n -A_n \psi _n = f_n, \quad \psi _n(0)-\mathbb {B}_{nn}\psi _n(a_n)= g_n \end{aligned}$$

and for any \(1\le j\le n-1\) there exists some linear function \(\varphi _{j}\) such that

$$\begin{aligned} \partial _t \psi _{j} -A_{j} \psi _{j} = f_{j}, \quad \psi _{j}(0)-\mathbb {B}_{jj}\psi _j(a_j)= \varphi _{j}(\psi _{j+1}, \ldots , \psi _n,g). \end{aligned}$$

Then, we say that (4.5) is iteratively solvable as a sequence of Cauchy problems on intervals. If \(\mathbb {B}_{jj}=0\) for all \(j=1, \ldots ,n\), then (4.5) is iteratively solvable as a sequence of initial value problems.

Iterative solvability can be traced back to the block structure of \(\mathbb {B}\).

Proposition 7.2

(Characterization of iterative solvability) Let \({\mathcal E}=\emptyset \) and \(\mathbb {B}\in {\mathcal L}({\mathcal K}_A)\).

  1. (a)

The Cauchy problem (4.5) is iteratively solvable as a sequence of Cauchy problems on intervals if and only if, up to a permutation of the edges, \(\mathbb {B}\) is block upper triangular, i.e., there exists an ordering of the edges \(i_1, \ldots , i_n\) such that \(\mathbb {B}_{ij}=0\) for \(i>j\).

  2. (b)

The Cauchy problem (4.5) is iteratively solvable as a sequence of initial value problems if and only if, up to a permutation of the edges, \(\mathbb {B}\) is block upper triangular with zero diagonal blocks.

Proof

To prove (a) we start by assuming that \(\mathbb {B}\) is block upper triangular. Then,

$$\begin{aligned} \begin{bmatrix} \psi _1(0) \\ \vdots \\ \psi _n(0) \end{bmatrix} - \begin{bmatrix} \mathbb {B}_{11} &{}\quad \ldots &{}\quad \mathbb {B}_{1n}\\ 0 &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \mathbb {B}_{nn} \end{bmatrix} \begin{bmatrix} \psi _1(a_1) \\ \vdots \\ \psi _n(a_n) \end{bmatrix} = \begin{bmatrix} g_1 \\ \vdots \\ g_n \end{bmatrix}. \end{aligned}$$

Hence, \(\psi _n\) is the solution to the Cauchy problem on \(i_n\), and \(\psi _j\) for \(j=n-1, \ldots , 1\) solves the Cauchy problem on \(i_j\) with

$$\begin{aligned} \psi _j(0)-\mathbb {B}_{jj}\psi _j(a_j)=g_j-\sum _{i=j+1}^n \mathbb {B}_{ji}\psi _i(a_i)=:\varphi _j(\psi _{j+1}, \ldots , \psi _n,g). \end{aligned}$$

Conversely, if (4.5) is iteratively solvable as a sequence of Cauchy problems on intervals, then there exists an ordering of the edges such that \(\mathbb {B}_{nj}=0\) for \(j\ne n\), and since each \(\varphi _j\) depends only on \(\psi _{j+1}, \ldots , \psi _n\) and g, one concludes that \(\mathbb {B}_{ij}=0\) for \(i>j\).

For (b) notice that in each step one has an initial value problem if and only if \(\mathbb {B}_{jj}=0\) for all \(j\in {\mathcal I}\). \(\square \)

Remark 7.3

Invertibility of \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\) is automatically satisfied under the assumptions of Proposition 7.2(b), since

$$\begin{aligned} \mathbb {1}-\mathbb {B}\, e^{\underline{a}A} = \begin{bmatrix} \mathbb {1} &{}\quad -\mathbb {B}_{12}e^{a_2A_2} &{}\quad \ldots &{}\quad -\mathbb {B}_{1n}e^{a_nA_n} \\ 0 &{}\quad \ddots &{}\quad \ddots &{}\quad \vdots \\ \vdots &{}\quad \ddots &{}\quad \mathbb {1} &{}\quad -\mathbb {B}_{(n-1)n}e^{a_nA_n}\\ 0 &{}\quad \cdots &{}\quad 0 &{}\quad \mathbb {1} \end{bmatrix}. \end{aligned}$$

This was a crucial assumption in Theorem 6.6: The structure of the time graph already implies unique solvability. Under the weaker assumptions of Proposition 7.2(a), instead, invertibility of all \(\mathbb {1}-\mathbb {B}_{ii}e^{a_iA_i}\) has to be imposed additionally.
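The underlying linear-algebra fact can be sketched numerically: a strictly upper triangular matrix is nilpotent, so \(\mathbb {1}\) minus it is inverted by a finite Neumann series, with no smallness assumption on the couplings. This is our own illustration with an arbitrary random matrix standing in for \(\mathbb {B}e^{\underline{a}A}\).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
N = np.triu(rng.standard_normal((n, n)), k=1)   # surrogate for B e^{aA} with zero diagonal blocks

assert np.allclose(np.linalg.matrix_power(N, n), 0)          # strictly triangular => nilpotent
inv_neumann = sum(np.linalg.matrix_power(N, k) for k in range(n))
assert np.allclose(inv_neumann, np.linalg.inv(np.eye(n) - N))  # finite Neumann series inverts I - N
print("invertibility holds without smallness assumptions")
```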

An oriented graph \(({\mathcal G},\underline{a})\) contains a directed loop if there exists a sequence of edges \(i_{1}, \ldots , i_m\) such that

$$\begin{aligned} \partial _+i_{m} = \partial _-i_{1},\quad \partial _+i_{1} = \partial _-i_{2}, \quad \ldots ,\quad \partial _+i_{m-1} = \partial _-i_{m}. \end{aligned}$$

(We stress that this usage of the notion of loop is slightly different from that in the literature on metric graphs, in that we do not require the intersection of the loop’s closure and its complement’s closure (in the time graph) to be a singleton.) We say that a loop is reflected by the boundary conditions if \(\mathbb {B}_{i_j i_{j+1}}\ne 0\) for each \(i_j \in \{i_{1}, \ldots , i_m\}\), where one sets \(i_{m+1}:=i_1\).

Corollary 7.4

(Loops prevent iterative solvability) Let \(({\mathcal G},\underline{a})\) contain a directed loop which is reflected by the boundary conditions.

  1. (a)

    Then, (4.5) is not iteratively solvable as a sequence of initial value problems.

  2. (b)

    If the loop contains more than one edge, then (4.5) is not iteratively solvable as a sequence of Cauchy problems on intervals.

Proof

To prove (b): By assumption \(m\ge 2\), \(\mathbb {B}_{i_1i_2}, \ldots , \mathbb {B}_{i_{m-1}i_m}\ne 0\), and \(\mathbb {B}_{i_mi_1}\ne 0\). In particular, under any permutation \(\pi \) of the edges the cyclic pattern of non-zero couplings persists, and therefore \(\mathbb {B}\) cannot be block upper triangular; the claim follows from Proposition 7.2(a).

To prove (a), use Proposition 7.2(b) and note that if \(m=1\), then \(\mathbb {B}_{i_1i_1}\ne 0\), and if \(m\ge 2\) the claim follows already from (b). \(\square \)

Remark 7.5

(Graph symmetries and symmetries of solutions) Periodic functions on \(\mathbb {R}\) clearly induce functions on \({\mathbb {S}}^1\): Are there further symmetries which can be encoded into a time graph? Many graphs have a natural symmetry which corresponds to a group structure. Given a map

$$\begin{aligned} T{:}\,({\mathcal G},\underline{a}) \rightarrow ({\mathcal G},\underline{a}), \end{aligned}$$

this induces a map on function spaces

$$\begin{aligned} {\hat{T}}{:}\, \left\{ f{:}\,({\mathcal G},\underline{a}) \rightarrow X \right\} \rightarrow \left\{ f{:}\,({\mathcal G},\underline{a}) \rightarrow X \right\} , \quad f\mapsto f\circ T, \end{aligned}$$

see [36, § 8.2]. Let \(\Gamma \) be a group of mappings acting on \(({\mathcal G},\underline{a})\). Assume that

$$\begin{aligned} {\hat{G}} D(D_t(\mathbb {B}))= D(D_t(\mathbb {B})) \quad \hbox {and} \quad D_t(\mathbb {B}){\hat{G}} = {\hat{G}} D_t(\mathbb {B})\qquad \hbox { for all }G\in \Gamma . \end{aligned}$$
(7.1)

(For example, the shift on a loop satisfies (7.1) with respect to the derivative operator with periodic boundary conditions.) It seems that there are very few graphs whose automorphism group is an infinite Lie group: in most cases, the automorphism group is finite. Given (7.1), for any \(G\in \Gamma \), \({\hat{G}}\) commutes with the solution operator given in Theorem 6.6. Thus, the symmetry is reflected by the solution.

Examples and applications

The case of a loop with phase shift has been discussed already in the introduction as a small modification of the classical periodic case. Also, the tadpole-like graph has been discussed there. We now discuss some other cases depicted in Fig. 3.

Splitting of systems

Take the graph consisting of three internal edges as in Fig. 3b, and consider for sectorial spatial operators \(-A_i\) in Hilbert spaces \(X_i\), \(i=1,2,3\), the problem \(\partial _t\psi _i -A_i\psi _i =f_i\), \( i=1,2,3\), with the boundary conditions

$$\begin{aligned} \psi _1(0)=g_1, \quad \psi _2(0) = \mathbb {B}_{21} \psi _1(a_1), \quad \psi _3(0) = \mathbb {B}_{31} \psi _1(a_1). \end{aligned}$$

This corresponds to (6.6) for

$$\begin{aligned} \mathbb {B}= \begin{bmatrix} 0 &{}\quad 0 &{}\quad 0 \\ \mathbb {B}_{21} &{}\quad 0 &{}\quad 0 \\ \mathbb {B}_{31} &{}\quad 0 &{}\quad 0 \end{bmatrix} \quad \hbox {and}\quad g = \begin{bmatrix} g_1 \\ 0 \\ 0 \end{bmatrix}. \end{aligned}$$

If \(\mathbb {B}\) is compatible with the trace space \({\mathcal K}_A\), one observes that

$$\begin{aligned} \mathbb {1}-\mathbb {B}e^{\underline{a}\, A}= \begin{bmatrix} \mathbb {1} &{}\quad 0 &{}\quad 0\\ -\mathbb {B}_{21} e^{a_1 A_1} &{}\quad \mathbb {1} &{}\quad 0\\ -\mathbb {B}_{31} e^{a_1 A_1} &{}\quad 0 &{}\quad \mathbb {1}\\ \end{bmatrix} \end{aligned}$$

is invertible for any \(\mathbb {B}_{21}, \mathbb {B}_{31}\), cf. Remark 7.3. Therefore, by Theorem 6.6 a unique solution to this problem exists for all \(\underline{g}\in {\mathcal K}_A\); in particular, \(\underline{g}=(g_1,0,0)^T \in {\mathcal K}_A\) means that \(g_1\in [X_1,D(A_1)]_{1/2}\).

The tree graph given in Fig. 3f results from an iteration of such a splitting procedure, where as above any splitting condition is admissible as long as it is compatible with the trace space. That such splitting problems can be solved iteratively as a sequence of initial value problems is straightforward; it can also be seen more formally by applying Proposition 7.2.
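The equivalence of the iterative and the all-at-once viewpoints for the splitting can be sketched in finite dimensions (our own illustration with homogeneous \(f\equiv 0\), small random matrices, and the ad hoc helper `expmt`): solving edge 1 as an initial value problem and then splitting reproduces the traces obtained from the global block system.

```python
import numpy as np

def expmt(M, t):
    # e^{tM} via eigendecomposition (M assumed diagonalizable)
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real

rng = np.random.default_rng(7)
d = 3                                          # dimension of each surrogate space X_i
A = [-np.eye(d) + 0.2 * rng.standard_normal((d, d)) for _ in range(3)]
a = [1.0, 0.8, 1.2]
B21 = rng.standard_normal((d, d))
B31 = rng.standard_normal((d, d))
g1 = rng.standard_normal(d)

# iteratively: edge 1 is a plain initial value problem, then split
psi1_a = expmt(A[0], a[0]) @ g1
psi2_0 = B21 @ psi1_a
psi3_0 = B31 @ psi1_a

# all-at-once: block matrix B of the splitting, traces solve (I - B E) v = g
Z = np.zeros((d, d))
Bblock = np.block([[Z, Z, Z], [B21, Z, Z], [B31, Z, Z]])
Eblock = np.block([[expmt(A[i], a[i]) if i == j else Z for j in range(3)] for i in range(3)])
g = np.concatenate([g1, np.zeros(d), np.zeros(d)])
v = np.linalg.solve(np.eye(3 * d) - Bblock @ Eblock, g)   # traces (psi_1(0), psi_2(0), psi_3(0))

assert np.allclose(v, np.concatenate([g1, psi2_0, psi3_0]), atol=1e-9)
print("iterative and global solves agree")
```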

Superposition of systems

Analogously, take the graph consisting of three internal edges as in Fig. 3c, and consider for sectorial spatial operators \(-A_i\) in Hilbert spaces \(X_i\), \(i=1,2,3\), the problem \(\partial _t\psi _i -A_i\psi _i =f_i\), \( i=1,2,3\), with the boundary conditions

$$\begin{aligned} \psi _1(0)=g_1, \quad \psi _2(0) = g_2, \quad \psi _3(0)=\mathbb {B}_{31}\psi _1(a_1) + \mathbb {B}_{32}\psi _2(a_2), \end{aligned}$$

This corresponds to (6.6) for

$$\begin{aligned}&\mathbb {B}= \begin{bmatrix} 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \\ \mathbb {B}_{31} &{}\quad \mathbb {B}_{32} &{}\quad 0 \end{bmatrix}, \quad \underline{g}= \begin{bmatrix} g_1 \\ g_2 \\ 0 \end{bmatrix},\quad \\&\quad \hbox {where } \mathbb {1}-\mathbb {B}e^{\underline{a}\, A}=\begin{bmatrix} \mathbb {1} &{}\quad 0 &{}\quad 0\\ 0 &{}\quad \mathbb {1} &{}\quad 0\\ -\mathbb {B}_{31} e^{a_1 A_1} &{}\quad -\mathbb {B}_{32} e^{a_2 A_2} &{}\quad \mathbb {1}\\ \end{bmatrix}. \end{aligned}$$

Assuming that \(\mathbb {B}\) is compatible with \({\mathcal K}_A\) one observes that \(\mathbb {1}-\mathbb {B}e^{\underline{a}\, A}\) is invertible in \({\mathcal K}_A\), cf. Remark 7.3. So, Theorem 6.6 is applicable again, and there is a unique solution to this problem if \(g_1\in [X_1,D(A_1)]_{1/2}\) and \(g_2\in [X_2,D(A_2)]_{1/2}\). One observes that this is iteratively solvable as a sequence of initial value problems, too.

Tadpole graph

Consider two edges with

$$\begin{aligned} \psi _1(0)=\psi _1(a_1)\quad \hbox {and}\quad \psi _2(0)=\mathbb {B}_{21}\psi _1(a_1)\quad \hbox {corresponding to } \mathbb {B}= \begin{bmatrix} \mathbb {1} &{}\quad 0 \\ \mathbb {B}_{21}&{}\quad 0 \end{bmatrix}. \end{aligned}$$

Note that \(\mathbb {1}-\mathbb {B}e^{\underline{a}A}\) is invertible if and only if \(\mathbb {1}-e^{a_1 A_1}\) is invertible. So, solvability is assured if, for instance, \(\Vert e^{a_1 A_1}\Vert <1\), which holds, e.g., if the semigroup generated by \(A_1\) is contractive and exponentially decaying. This tadpole graph system can be interpreted as a time-periodic system whose output is used as initial datum for a second system. In the notion introduced in Sect. 7, this means the problem can be solved iteratively, first solving a time-periodic problem and then an initial value problem, the data of which depend on the solution obtained in the first step.
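The two-step procedure can be sketched numerically (our own finite-dimensional illustration with a decaying matrix \(A_1\), constant forcing on the loop edge, and the ad hoc helper `expmt`): first solve the periodic problem on the loop, then feed its endpoint value into an initial value problem on the pendant edge.

```python
import numpy as np

def expmt(M, t):
    # e^{tM} via eigendecomposition (M assumed diagonalizable)
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real

rng = np.random.default_rng(8)
d, a1, a2 = 3, 1.0, 0.7
A1 = -2.0 * np.eye(d) + 0.3 * rng.standard_normal((d, d))   # exponentially decaying case: ||e^{a1 A1}|| < 1
A2 = -np.eye(d) + 0.3 * rng.standard_normal((d, d))
B21 = rng.standard_normal((d, d))
f1 = rng.standard_normal(d)                                 # constant forcing on the loop edge

I = np.eye(d)
E1 = expmt(A1, a1)
c1 = np.linalg.inv(A1) @ (E1 - I) @ f1        # int_0^{a1} e^{(a1-s)A1} f1 ds for constant f1

# step 1: time-periodic problem psi1(0) = psi1(a1) on the loop edge
psi1_0 = np.linalg.solve(I - E1, c1)          # from psi1(a1) = E1 psi1(0) + c1
psi1_a = E1 @ psi1_0 + c1
assert np.allclose(psi1_0, psi1_a, atol=1e-9)

# step 2: initial value problem on the pendant edge with data from step 1
psi2 = lambda t: expmt(A2, t) @ (B21 @ psi1_a)
assert np.allclose(psi2(0.0), B21 @ psi1_a, atol=1e-9)
print("tadpole problem solved in two steps")
```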

Frequency-dependent couplings

Frequency-dependent transition conditions between time branches may also be considered. Assume for simplicity that \(A_1=A_2\) are positive self-adjoint operators with discrete spectrum \(k_1\le k_2\le \cdots \) (counted with multiplicities): Any element in \({\mathcal K}\) can thus be expanded in terms of eigenfunctions. For two edges one can, for example, consider the left shift operator \(S_-\) defined by

$$\begin{aligned} \psi = \sum _{n\in \mathbb {N}} a_n \psi _n \mapsto S_-\psi :=\sum _{n\in \mathbb {N}} a_{n+1} \psi _n, \end{aligned}$$

where \((\psi _n)\) is an orthonormal basis of eigenfunctions: This induces a map also in \(D(A^{1/2})\). Then,

$$\begin{aligned} \mathbb {B}= \begin{bmatrix} 0 &{}\quad 0 \\ 0 &{}\quad S_- \end{bmatrix} \end{aligned}$$

is an admissible transmission condition where the first row induces an initial condition on \(\psi _1(0)\) and the second is the frequency shift. One may also consider a projection \(P_I\) onto certain frequency ranges \(I\subset \mathbb {N}\),

$$\begin{aligned} P_{I}\psi := \sum _{n\in I} a_n \psi _n \end{aligned}$$

and one could have a splitting of the system

$$\begin{aligned}&\psi _1(0)=0, \quad \psi _2(0)=P_{I_2}\psi _1(a_1), \\&\quad \quad \psi _3(0)=P_{I_3}\psi _1(a_1), \quad \psi _4(0) = P_{I_4'}\psi _2(a_2) + P_{I_4''}\psi _3(a_3) \end{aligned}$$

corresponding to

$$\begin{aligned} \mathbb {B}= \begin{bmatrix} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ P_{I_2} &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ P_{I_3} &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad P_{I_4'} &{}\quad P_{I_4''} &{}\quad 0 \end{bmatrix}. \end{aligned}$$

This is iteratively solvable as a sequence of initial value problems and hence well posed.
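
The iterative solvability reflects the fact that the graph has no oriented loop, so \(\mathbb {B}e^{\underline{a}A}\) is nilpotent and the Neumann series for \((\mathbb {1}-\mathbb {B}e^{\underline{a}A})^{-1}\) terminates. A sketch in a truncated frequency space (assumptions: \(N=6\) eigenmodes, eigenvalues \(-k_n\) with \(k_n=n\), unit edge lengths, and hypothetical frequency ranges \(I_2,I_3,I_4',I_4''\)):

```python
import numpy as np

# Frequency-space toy model (assumption): truncate to N = 6 eigenmodes,
# A = diag(-k_n) with k_n = n, and unit edge lengths a_j = 1.
N = 6
k = np.arange(1, N + 1)
E1 = np.diag(np.exp(-1.0 * k))             # e^{a_j A} on each edge (diagonal)

def P(I):
    """Projection P_I onto the frequency range I (0/1 diagonal mask)."""
    d = np.zeros(N)
    d[list(I)] = 1.0
    return np.diag(d)

P2, P3 = P(range(0, 3)), P(range(3, 6))    # splitting of the frequencies
P4a, P4b = P(range(0, 2)), P(range(4, 6))  # hypothetical ranges I_4', I_4''

Z = np.zeros((N, N))
B = np.block([[Z, Z, Z, Z],
              [P2, Z, Z, Z],
              [P3, Z, Z, Z],
              [Z, P4a, P4b, Z]])
E = np.kron(np.eye(4), E1)                 # same propagator on all four edges
BE = B @ E

# No oriented loop: (BE)^3 = 0, so the Neumann series for (1 - BE)^{-1}
# terminates after three terms -- iterative solvability.
M_inv = np.linalg.inv(np.eye(4 * N) - BE)
```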

Lions' maximal \(L^2\)-regularity problem for non-autonomous Cauchy problems

Lions’ maximal regularity problem for non-autonomous Cauchy problems considers

$$\begin{aligned} (\partial _t - A(t))\psi (t)=f(t), \quad \psi (0)=\psi _0 \end{aligned}$$

for a Hilbert space X and \(f\in L^2(0,T;X)\) and asks if the solution satisfies \(\psi \in W^{1,2}(0,T;X)\); this would in turn imply that also \(A(\cdot )\psi \in L^2(0,T;X)\). This problem has a long history and much remarkable work has been devoted to it. More precisely, depending on \(A(\cdot )\) there are counterexamples as well as criteria which assure an affirmative answer: We refer the interested reader to [24] for an early study of maximal regularity for non-autonomous problems, and to [6] for further information and updated references.

The particular case of \(A(\cdot )\) being a (matrix-valued) step function with matching trace spaces

$$\begin{aligned} A(t)={\left\{ \begin{array}{ll} A_1, &{}\quad t \in (0,t_1), \\ \ldots &{}\quad \\ A_n, &{}\quad t \in (t_{n-1},t_n), \\ \end{array}\right. } \quad \hbox {with} \quad D(A_j^{1/2})=D(A_i^{1/2})\hbox { for all } 1\le i,j\le n. \end{aligned}$$

has already been used by [15] as a first step toward considering \(A(\cdot )\) of bounded variation. The time-graph approach does not give any additional information at this point, but it underlines the role of the compatibility assumption for the trace spaces. Using our approach we directly see that the corresponding abstract time-graph Cauchy problem can be studied by means of

$$\begin{aligned} \mathbb {B}= \begin{bmatrix} 0 &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad 0 \\ 1&{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \vdots \\ \vdots &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \vdots \\ 0&{}\quad \cdots &{}\quad 0 &{}\quad 1 &{}\quad 0 \end{bmatrix}, \end{aligned}$$

and hence, it is iteratively solvable by Proposition 7.2.
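
In matrix form, the subdiagonal \(\mathbb {B}\) simply passes the terminal value of one slab on as the initial value of the next; a finite-dimensional sketch (assumption: three slabs with \(2\times 2\) matrix generators, where the trace-space compatibility is trivial):

```python
import numpy as np
from scipy.linalg import expm

# Toy model (assumption): three time slabs with matrix generators A_j; in
# finite dimensions the trace-space compatibility is automatic.
rng = np.random.default_rng(3)
d = 2
As = [-np.eye(d) + 0.2 * rng.standard_normal((d, d)) for _ in range(3)]
lengths = [0.5, 1.0, 0.7]
psi0 = rng.standard_normal(d)

# Iterative solution: the subdiagonal B passes psi_j(a_j) on as the
# initial value of slab j+1.
x_iter = [psi0]
for A, a in zip(As, lengths):
    x_iter.append(expm(a * A) @ x_iter[-1])
x_iter = x_iter[:3]                        # traces psi_j(0), j = 1, 2, 3

# Global solve of (1 - B e^{aA}) x = (psi0, 0, 0)
Z = np.zeros((d, d))
B = np.block([[Z, Z, Z], [np.eye(d), Z, Z], [Z, np.eye(d), Z]])
E = np.zeros((3 * d, 3 * d))
for j, (A, a) in enumerate(zip(As, lengths)):
    E[j * d:(j + 1) * d, j * d:(j + 1) * d] = expm(a * A)
x = np.linalg.solve(np.eye(3 * d) - B @ E,
                    np.concatenate([psi0, np.zeros(2 * d)]))
```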

Outline on non-parabolic Cauchy problems

So far, the focus has been on parabolic Cauchy problems, but the Green’s function Ansatz makes sense also in some non-parabolic situations.

Schrödinger equation

Let us study the Schrödinger-type problem

$$\begin{aligned} \left\{ \begin{aligned} (\partial _t - iA_j)\psi _j(t_j)&=f_j(t_j),\qquad t_j\in \bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} I_j,\\ \underline{\psi }_- -\mathbb {B}\underline{\psi }_+&=\underline{g}. \end{aligned} \right. \end{aligned}$$
(8.1)

Provided that \(\mathbb {1}-\mathbb {B}\, e^{i\underline{a}A}\) is invertible in \({\mathcal K}\), the solution map

$$\begin{aligned} S_{\mathbb {B}}(t) = e^{itA}(\mathbb {1}-\mathbb {B}\, e^{i\underline{a}A})^{-1} \end{aligned}$$

given by \(\psi _0\) in Theorem 6.6 is well defined for all \(\underline{g}\in {\mathcal K}\) and defines a mild solution. (Notice that invertibility in \({\mathcal K}\) is sufficient since here we merely aim at mild solutions.)

Remark 8.1

While the time graph \({\mathcal G}\) need not display a group structure and, therefore, the issue of time reversal need generally not be well defined, we may still wonder whether the family of solution operators that govern (8.1) consists of unitary operators. Assume that A and \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) are self-adjoint, that \(e^{i\underline{a}A}\) and \(\mathbb {B}\) commute, and that \(\mathbb {1}-\mathbb {B}\, e^{i\underline{a}A}\) is invertible. Then, the solution operators to the time-graph Schrödinger equation (8.1) are unitary, i.e.,

$$\begin{aligned} S_{\mathbb {B}}(t) S_{\mathbb {B}}(t)^* = S_{\mathbb {B}}(t)^* S_{\mathbb {B}}(t) = \mathbb {1}, \end{aligned}$$

if and only if \(\mathbb {B}^2 = 2\mathbb {B}\cos (\underline{a}A)\). This follows, using that all operators commute, from

$$\begin{aligned} e^{itA}(\mathbb {1}-\mathbb {B}\, e^{i\underline{a}A})^{-1} (\mathbb {1}- e^{-i\underline{a}A}\mathbb {B})^{-1} e^{-itA}= (\mathbb {1}-\mathbb {B}\, e^{i\underline{a}A})^{-1} (\mathbb {1}- \mathbb {B}e^{-i\underline{a}A})^{-1}= \mathbb {1} \end{aligned}$$

which is in turn equivalent to \(\mathbb {B}^2 = 2\mathbb {B}\cos (\underline{a}A)\). If, in particular, \(\mathbb {B}\) is invertible, then the above condition amounts to \(\mathbb {B}=2 \cos ( \underline{a}A)\). Accordingly, there is a unitary solution operator for the fixed jump condition \(\mathbb {B}=\mathbb {1}\) if and only if \(\sigma (\underline{a}A) \subset \pm \pi /3+2\pi \mathbb {Z}\). So, the classical case \(\mathbb {B}=0\) is not the only case of a unitary solution operator. Note that time inversion is still possible even for a non-unitary solution operator, but the time-reversed dynamics differs from the time-forward dynamics, and going forth and back is not the same as staying at one time.
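
The unitarity criterion can be verified numerically in a diagonal toy model (assumptions: \(A\) self-adjoint with three chosen eigenvalues, unit edge length, and the jump condition \(\mathbb {B}=2\cos (\underline{a}A)\), which satisfies \(\mathbb {B}^2=2\mathbb {B}\cos (\underline{a}A)\) and commutes with \(e^{i\underline{a}A}\)):

```python
import numpy as np

# Diagonal toy model (assumption): A self-adjoint with eigenvalues lam,
# unit edge length a = 1, jump condition B = 2 cos(aA).
lam = np.array([np.pi / 3, np.pi / 2, 1.2])
a, t = 1.0, 0.8
B = 2.0 * np.diag(np.cos(a * lam))

E = np.diag(np.exp(1j * a * lam))          # e^{iaA}
# Solution operator S_B(t) = e^{itA} (1 - B e^{iaA})^{-1}
S = np.diag(np.exp(1j * t * lam)) @ np.linalg.inv(np.eye(3) - B @ E)

# In the scalar case, 1 - 2 cos(theta) e^{i theta} = -e^{2 i theta} has
# modulus 1, so S is unitary despite B being nonzero.
```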

Second-order Cauchy problem

The above setting can be generalized to different kinds of evolution equations. Let \(\mathbb {B}_1,\mathbb {B}_2\) be bounded linear operators on \({\mathcal {K}}^2\) and consider the second-order Cauchy problem

$$\begin{aligned} \left\{ \begin{aligned} (\partial ^2_t - A_j)\psi _j(t_j)&=f_j(t_j),\qquad t_j\in \bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} I_j,\\ \mathbb {B}_1 \underline{\psi }_+ -\underline{\psi }_-&=\underline{g}_1\\ \mathbb {B}_2 \underline{\psi _t}_+ -\underline{\psi _t}_-&=\underline{g}_{2}. \end{aligned} \right. \end{aligned}$$
(8.2)

The idea is to decompose this into

$$\begin{aligned} (\partial _{t}^2 -A)\psi = (D_t(\mathbb {B}_1) - i|A|^{1/2})(D_t(\mathbb {B}_2) + i|A|^{1/2})\psi , \end{aligned}$$

where A is skew-symmetric. Hence, one has to solve two first-order problems iteratively, where we assume, in addition to the assumptions of Theorem 6.6, that \(A_j\) for \(j\in {\mathcal I}\) are self-adjoint and boundedly invertible. Then for any \(\underline{g}_1, \underline{g}_2\in {\mathcal K}\) there is a unique solution to (8.2).
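
The factorization can be illustrated in a scalar toy model (assumptions: \(A=-\omega ^2<0\), homogeneous right-hand side, and a single interval), where \(\partial _t^2-A=(\partial _t-i\omega )(\partial _t+i\omega )\) reduces the second-order problem to two explicit first-order solves:

```python
import numpy as np

# Scalar toy model (assumption): A = -omega^2 < 0, so |A|^{1/2} = omega and
# d_t^2 - A = (d_t - i*omega)(d_t + i*omega).
omega = 2.0
psi0, dpsi0 = 1.0, 0.0                     # initial value and initial velocity
t = np.linspace(0.0, 3.0, 301)

# Step 1: v := (d_t + i*omega) psi solves v' - i*omega*v = 0,
# with v(0) = dpsi0 + i*omega*psi0.
v0 = dpsi0 + 1j * omega * psi0
v = v0 * np.exp(1j * omega * t)

# Step 2: psi' + i*omega*psi = v, solved by variation of constants.
psi = np.exp(-1j * omega * t) * (psi0 + v0 * (np.exp(2j * omega * t) - 1)
                                 / (2j * omega))

# For psi(0) = 1, psi'(0) = 0 this recovers psi(t) = cos(omega t).
```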

Outline on mixed-order systems

It is also possible to discuss evolution equations whose nature is different on each time branch; in particular, it is possible to define an operator on \(\mathbb {T}\) which agrees with a first derivative on a subset of \(\mathbb {T}\) and with a second derivative on the remaining time branches. Defining appropriate transition conditions is, however, less obvious: A thorough discussion of ‘well-behaved’ transition conditions can be found in [25]. Following these lines one can use the Kalton–Weis approach to solve a Cauchy problem of the type

$$\begin{aligned} (-\partial _t -A_1)\psi _1&=0, \quad \hbox {on} \quad [0,a_1], \\ (\partial _{tt} - A_2)\psi _2&=0, \quad \hbox {on} \quad [0,a_2]. \end{aligned}$$

Focusing on this example we consider operators \(A_1,A_2\) in Hilbert spaces \(X_1,X_2\) and couplings defined by

$$\begin{aligned}&(P+L)\underline{\psi } + P^{\perp } \underline{\psi }'=0, \quad \underline{\psi }=\begin{bmatrix} 2^{-1/2}(\psi _1(a_1) + \psi _1(0)) \\ \psi _2(a_2) \\ \psi _2(0) \\ \end{bmatrix}, \\&\quad \underline{\psi }'=\begin{bmatrix} 2^{-1/2}(-\psi _1(a_1) + \psi _1(0)) \\ -(\psi _2)_t(a_2) \\ (\psi _2)_t(0) \\ \end{bmatrix} \end{aligned}$$

for an orthogonal projection P in \({\mathcal K}:= X_1\oplus X_2\), \(P^{\perp }=\mathbb {1}-P\) and \(L\in {\mathcal L}({\mathcal K})\) with \(LP^{\perp }=P^{\perp }L\). With these couplings the operator \(\mathbb {T}_{P,L}\) defined on \(L^2(0,a_1;X_1)\oplus L^2(0,a_2;X_2)\) by

$$\begin{aligned} \begin{aligned}&\mathbb {T}_{P,L} \psi _1 = -\psi _1', \quad \mathbb {T}_{P,L} \psi _2 = \psi _2'', \\&D(\mathbb {T}_{P,L}):= \{\psi _1\in W^{1,2}(0,a_1;X_1), \psi _2\in W^{2,2}(0,a_2;X_2){:}\,(P+L)\underline{\psi } + P^{\perp } \underline{\psi }'=0 \} \end{aligned} \end{aligned}$$

is m-dissipative if \(-L\) is dissipative, see [25, Theorem 4.1], and similarly to Corollary 4.5 one concludes that it has a bounded \(H^\infty \)-calculus of angle \(\pi /2\). If the spatial operator \(-A\) is sectorial and commutes with the boundary conditions, then one can apply the Kalton–Weis theorem to obtain well-posedness in a maximal regularity space. A Green’s function approach could be pursued as well along the lines of the Green’s function from [25, Proposition 6.6].

A tentative interpretation of time graphs

Several convenient properties of semigroups depend decisively on the order structure of the underlying set, so it is conceivable to relax the standard approach to time evolution and study semigroup-like operator families that are indexed on posets different from \(\mathbb {R}_+\).

One particularly simple case is that of a tree-like time structure. More precisely, we allow for a time that looks like a rooted tree, see Fig. 3f. This seems to be conceptually very close to H. Everett’s many worlds interpretation of quantum mechanics [12], but our mathematical theory is not restricted to Schrödinger-type equations, see Sect. 6, and a precise analysis of similarities and differences with Everett’s interpretation goes far beyond the scope of this note. In a very simplified synopsis, the many worlds interpretation claims—in order to reconcile probabilistic and deterministic interpretations of quantum mechanics—that time splits at each point in time and each possible state is actually attained in one of the parallel universes.

Imagining parallel universes, one would have no evidence of them in the case of a tree graph. Only if there is some interaction between different ‘time branches’ can one know of the other, now non-parallel but interacting, universes. This leads to an interpretation of time travel in terms of time graphs, and it seems that time graphs are a convenient way of picturing time travel to oneself, independently of its actual physical meaning.

Note that in the case of tree graphs, an evolutionary system can be solved iteratively: starting with the first edge, where some initial condition is imposed, one determines the state at the end of the edge, which is then used as a new initial condition for the next level of the tree, etc. Interesting problems arise when oriented loops are allowed, as outlined in Sect. 7. Then, the time evolution can no longer be described iteratively and there is no clear direction distinguishing ‘before’ and ‘after.’ This is one of the problems occurring in the scientific interpretation of closed time-like curves in general relativity, cf. [21, Chapter 5] for a discussion of closed time-like curves occurring in exact solutions to the Einstein equations; much earlier, this has spurred the imagination of science fiction authors. Just to mention a few, there are H.G. Wells’ novel The Time Machine (1895) as well as The Man in the High Castle (1962) and further works written by Philip K. Dick between the 1950s and the 1970s, whose main theme is interacting parallel universes.
Variations of these themes, and in particular the so-called grandfather paradox—preventing one’s own birth during a journey back in time or violating causality in some other way—are at the origin of several mainstream movies, like the classic Back to the Future (1985) or the more recent Looper (2012), in whose plot a person is sent back from 2074 to 2044 so that he can meet and be killed by his younger self (see Footnote 1). A further, different narrative trick is that of time loops, illustrated for instance in the movies Groundhog Day (1993) or Miss Peregrine’s Home for Peculiar Children (2016), which tell the stories of Phil Connors—respectively, of a group of children and their guardian—who get trapped in a time loop around February 2, 1992—respectively, September 3, 1943—before eventually managing to escape. In mathematical terms, this does not mean that a certain function is periodic (the days spent by the main characters in either loop are not identical), but rather that it satisfies a certain identity condition at different instants (Phil Connors’ environment is reset every day at 6:00 am, and so is the environment of the peculiar children and their guardian, every day at 9:07 pm): This time development can be captured in Fig. 3e or 4d. This suggests a much more down-to-earth interpretation of evolution on branching time structures: Namely, it conveniently allows us to formalize the requirement that solutions at different time instants respect certain algebraic relations.

The question of whether such fictional situations can be reconciled with our deeply rooted perception of time as linear seems to us to be related to the problem of representing a time-graph Cauchy problem as a sequence of initial value problems which can be solved one after another. As we have pointed out in Sect. 7, this is closely related to the absence of loops inside the time graph; in the presence of loops the solution operator acts in a truly global (in time) fashion, as the solution at some point depends on all other times, including future-like times.

Time travel, multiverses and the grandfather paradox

A time-travel scenario can be depicted as in Fig. 3g with a link between some point in the future and a point which might be in the past. Considering such a graph one can, for example, impose the transmission conditions

$$\begin{aligned} \psi _1(0)=\underline{g}_1, \quad \psi _{2}(0) = \psi _{1}(a_1)+ \psi _{4}(a_4), \quad \psi _{3}(0) = \psi _{2}(a_2), \quad \psi _{4}(0) = \psi _{2}(a_2) \end{aligned}$$

that correspond to

$$\begin{aligned} \mathbb {B}= \begin{bmatrix} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 1 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ \end{bmatrix}, \quad g=\begin{bmatrix} \underline{g}_1 \\ 0 \\ 0 \\ 0 \end{bmatrix}. \end{aligned}$$

Therefore, this is not solvable as a sequence of Cauchy problems on intervals, but it is well posed in the sense of Theorem 6.6 under suitable assumptions on the spatial operators \(A_i\). In a sense, time travel occurs in this deterministic world, but there is no free will to cause a grandfather paradox: The system is forced to be contradiction free. This resembles the case of time-periodic solutions. Given a solution \(\psi \) to

$$\begin{aligned} \partial _t \psi - A\psi = f, \quad \psi (0)=\psi (1), \end{aligned}$$

one can compare this to the solution to

$$\begin{aligned} \partial _t \varphi - A\varphi = f, \quad \varphi (0)=\psi (0). \end{aligned}$$

Due to the uniqueness of solutions the system comes back to its original state, i.e., \(\psi =\varphi \) and \(\varphi (1)=\psi (1)=\psi (0)\). Living in a time-periodic world would thus locally be like living in a time-interval world with initial conditions, while globally it is nevertheless time periodic. Similarly, in the scenarios of Fig. 3g or 4a, solutions are subject to constraints that are non-local in time and are seen only on a global level.
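
The contradiction-free nature of the loop system can be made concrete in a scalar sketch (assumptions: \(A_j=-1\) and \(a_j=1\) on all four edges, so each propagator is the number \(q=e^{-1}\)): the loop prevents forward substitution, yet the global trace system has a unique, consistent solution.

```python
import numpy as np

# Scalar toy model (assumption): A_j = -1 and a_j = 1 on all four edges of
# the time-travel graph, so each edge propagator is q = e^{-1}.
q = np.exp(-1.0)
B = np.array([[0, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
g = np.array([1.0, 0.0, 0.0, 0.0])         # g_1 = 1

# Because of the loop (edge 4 feeds back into the initial vertex of edge 2),
# q*B is NOT nilpotent, so no forward substitution is possible; still, the
# global trace system (1 - B e^{aA}) x = g has a unique solution.
M = np.eye(4) - q * B
x = np.linalg.solve(M, g)

# The solution satisfies all transmission conditions simultaneously,
# including the loop condition psi_2(0) = psi_1(a_1) + psi_4(a_4).
```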

Considering the graph from Fig. 3h one can, for example, impose the transmission conditions

$$\begin{aligned}&\psi _1(0)=\underline{g}_1, \quad \psi _{2}(0) = \psi _{1}(a_1), \\&\quad \psi _{3}(0) = \psi _{2}(a_2), \quad \psi _{4}(0) = \psi _{2}(a_2), \quad \psi _{5}(0)= \psi _{4}(a_4)+\psi _{1}(a_1) \end{aligned}$$

which leads to

$$\begin{aligned} \mathbb {B}= \begin{bmatrix} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 1 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 0\\ \end{bmatrix} , \quad g=\begin{bmatrix} \underline{g}_1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}. \end{aligned}$$

This is solvable as a sequence of initial value problems because \(\mathbb {B}_{14}=0\): the loop in the graph is not reflected by the boundary conditions. A grandfather paradox does not occur because we actually have a sequence of initial value problems, and this seems to be the way we represent time travel in our thoughts when watching a science fiction movie: A time traveler reaches a past where the initial conditions are the actual state of the past plus the time traveler. This mixture gives new initial conditions which lead to new events, which actually do not affect the time the time traveler comes from. Traveling back to the present does not lead to any contradictions, since one has a simple superposition of two time branches, so that changes due to the altered time branch can be incorporated. This becomes more transparent when, as in Fig. 4c—compared to Fig. 4b—an auxiliary edge is inserted.

Fig. 4

Time travel with and without parallel universes, and time loop

Caught in a time loop

The scenario of being caught in a time loop, just as in Groundhog Day, can also be represented by time graphs. Given a time graph as in Fig. 4d, one can impose at the first vertex a splitting into several copies of the world. At each subsequent vertex there is a superposition of this original state with the time evolution along the incoming edge, where the superposition is such that only the main character is replaced, while all the rest goes back to the original state. Such conditions would allow for iterative solvability as a sequence of initial value problems.

Representing such a plot properly would, in addition, need a dynamical graph the development of which depends on the solution; one would also need to incorporate an end condition stating that if the solution reaches a certain state, then the time evolution proceeds as a usual time axis. This would be a nonlinear feature. Typically, such an end condition consists in the main character’s reaching a certain goal or a key insight into the meaning of life.

Summarizing, it seems that our thinking is predisposed to represent time as linear with well-specified past and future, and even when imagining science-fictional scenarios of time travel, time loops and parallel universes we search for an ordering, asking ‘what happened first?’, ‘what happened then?’, ‘\(\ldots \) and then?’, etc. So, in our language every science-fictional scenario needs a representation as an iteratively solvable sequence of initial value problems. Conversely, a properly time-periodic movie would be quite repetitive.

Notes

  1.

    At one point, he urges his younger self to refrain from theoretical considerations stating ‘I don’t want to talk about time travel [...]. If we start talking about it, we’re gonna be here all day. Making diagrams with straws’: a witty allusion to time as a graph.

References

  1. W. Arendt and S. Bu. The operator-valued Marcinkiewicz multiplier theorem and maximal regularity. Math. Z., 240:311–343, 2002.

  2. W. Arendt and S. Bu. Operator-valued Fourier multipliers on periodic Besov spaces and applications. Proc. Edinb. Math. Soc., 47:15–33, 2004.

  3. W. Arendt and S. Bu. Operator-valued multiplier theorems characterizing Hilbert spaces. J. Aust. Math. Soc., 77:175–184, 2004.

  4. M. Abe. Time in Buddhism. In S. Heine, editor, Zen and Comparative Studies, Library of Philosophy and Religion, pp. 163–169. MacMillan, London, 1997.

  5. W. Arendt, C.J.K. Batty, M. Hieber, and F. Neubrander. Vector-Valued Laplace Transforms and Cauchy Problems—Second Edition, volume 96 of Monographs in Mathematics. Birkhäuser, Basel, 2010.

  6. W. Arendt, D. Dier, and S. Fackler. J. L. Lions’ problem on maximal regularity. Arch. Math., 109:59–72, 2017.

  7. H. Amann. Linear and Quasilinear Parabolic Problems. Vol. 1: Abstract Linear Theory. Birkhäuser, Basel, 1995.

  8. W. Arendt. Semigroups and evolution equations: Functional calculus, regularity and kernel estimates. In C.M. Dafermos and E. Feireisl, editors, Handbook of Differential Equations: Evolutionary Equations—Vol. 1. North Holland, Amsterdam, 2004.

  9. A. Celik and M. Kyed. Nonlinear wave equation with damping: periodic forcing and non-resonant solutions to the Kuznetsov equation. ZAMM Z. Angew. Math. Mech., 98(3):412–430, 2018.

  10. U. Coope. Time for Aristotle: Physics IV. 10–14. Oxford Aristotle Studies. Oxford University Press, New York, 2005.

  11. H. Coward. Time in Hinduism. J. Hindu-Christian Stud., 12:22–27, 1999.

  12. B.S. DeWitt and N. Graham, editors. The Many Worlds Interpretation of Quantum Mechanics. Princeton University Press, Princeton, 2015.

  13. R. Denk, M. Hieber, and J. Prüss. \({{R}}\)-Boundedness, Fourier Multipliers and Problems of Elliptic and Parabolic Type, volume 788 of Mem. Amer. Math. Soc. Amer. Math. Soc., Providence, RI, 2003.

  14. T. Eiter and M. Kyed. Time-periodic linearized Navier–Stokes equations: an approach based on Fourier multipliers. In Particles in Flows, Adv. Math. Fluid Mech., pp. 77–137. Birkhäuser, Cham, 2017.

  15. O. El-Mennaoui and H. Laasri. On evolution equations governed by non-autonomous forms. Arch. Math., 107:43–57, 2016.

  16. K.-J. Engel and R. Nagel. One-Parameter Semigroups for Linear Evolution Equations, volume 194 of Graduate Texts in Mathematics. Springer, New York, 2000.

  17. G.P. Galdi, M. Hieber, and T. Kashiwabara. Strong time-periodic solutions to the 3D primitive equations subject to arbitrary large forces. Nonlinearity, 30(10):3979–3992, 2017.

  18. M. Geissert, M. Hieber, and Th. H. Nguyen. A general approach to time periodic incompressible viscous fluid flow problems. Arch. Ration. Mech. Anal., 220(3):1095–1118, 2016.

  19. D. M. Gitman, I. V. Tyutin, and B. L. Voronov. Self-adjoint Extensions in Quantum Mechanics: General Theory and Applications to Schrödinger and Dirac Equations with Singular Potentials, volume 62 of Progress in Mathematical Physics. Birkhäuser/Springer, New York, 2012.

  20. M. Haase. The Functional Calculus for Sectorial Operators, volume 169 of Oper. Theory Adv. Appl. Birkhäuser, Basel, 2006.

  21. S. W. Hawking and G. F. R. Ellis. The Large Scale Structure of Space-Time. Cambridge Monographs on Mathematical Physics, No. 1. Cambridge University Press, London–New York, 1973.

  22. M. Hieber. On operator semigroups arising in the study of incompressible viscous fluid flows. Philos. Trans. R. Soc. A, 378(2185):618–639, 2020.

  23. M. Hieber, N. Kajiwara, K. Kress, and P. Tolksdorf. The periodic version of the Da Prato–Grisvard theorem and applications to the bidomain equations with FitzHugh–Nagumo transport. Ann. Mat. Pura Appl. (4), 199(6):2435–2457, 2020.

  24. M. Hieber and S. Monniaux. Heat-kernels and maximal \({L}^p-{L}^q\)-estimates: the non-autonomous case. J. Fourier Anal. Appl., 6:467–481, 2000.

  25. A. Hussein and D. Mugnolo. Quantum graphs with mixed dynamics: the transport/diffusion case. J. Phys. A, 46:235202, 2013.

  26. M. Hieber, A. Mahalov, and R. Takada. Time periodic and almost time periodic solutions to rotating stratified fluids subject to large forces. J. Differ. Equ., 266(2–3):977–1002, 2019.

  27. M. Hieber, Th. H. Nguyen, and A. Seyfert. On periodic and almost periodic solutions to incompressible viscous fluid flow problems on the whole line. In Mathematics for Nonlinear Phenomena—Analysis and Computation, volume 215 of Springer Proc. Math. Stat., pp. 51–81. Springer, Cham, 2017.

  28. M. Hieber and J. Prüss. Heat kernels and maximal \({L}^p-{L}^q\) estimates for parabolic evolution equations. Comm. Partial Differ. Equ., 22:1647–1669, 1997.

  29. M. Hieber and C. Stinner. Strong time periodic solutions to Keller–Segel systems: an approach by the quasilinear Arendt–Bu theorem. J. Differ. Equ., 269(2):1636–1655, 2020.

  30. B. Jacob and H. Zwart. Linear Port-Hamiltonian Systems on Infinite-dimensional Spaces, volume 223 of Oper. Theory Adv. Appl. Birkhäuser, Basel, 2012.

  31. T. Kato. Perturbation Theory for Linear Operators, volume 132 of Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 1980.

  32. M. Kyed and J. Sauer. A method for obtaining time-periodic \(L^p\) estimates. J. Differ. Equ., 262:633–652, 2017.

  33. N.J. Kalton and L. Weis. The \({H}^\infty \)-calculus and sums of closed operators. Math. Ann., 321:319–345, 2001.

  34. P.C. Kunstmann and L. Weis. Maximal \(L_p\)-regularity for parabolic equations, Fourier multiplier theorems and \(H^\infty \)-functional calculus. In Functional Analytic Methods for Evolution Equations, volume 1855 of Lect. Notes Math., pp. 65–311. Springer, Berlin, 2004.

  35. S. Martin. Time, kingship, and the Maya universe. Expedition, 54:18–23, 2012.

  36. D. Mugnolo. Semigroup Methods for Evolution Equations on Networks. Underst. Compl. Syst. Springer, Berlin, 2014.

  37. J. Prüss and G. Simonett. Moving Interfaces and Quasilinear Parabolic Evolution Equations, volume 105 of Monographs in Mathematics. Birkhäuser/Springer, Cham, 2016.

  38. C. Rovelli. The Order of Time. Penguin Random House, New York, 2018.

  39. O. Vejvoda. Partial Differential Equations: Time-Periodic Solutions. Martinus Nijhoff Publishers, The Hague, 1982.

Author information

Correspondence to Amru Hussein.

Additional information

Dedicated to Matthias Hieber on the occasion of his 60th birthday and in recognition of his brilliant work pointing to the future

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Delio Mugnolo was partially supported by the Deutsche Forschungsgemeinschaft (Grant 397230547).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Hussein, A., Mugnolo, D. If time were a graph, what would evolution equations look like?. J. Evol. Equ. (2021). https://doi.org/10.1007/s00028-021-00672-8

Keywords

  • Evolution equations
  • Cauchy problems
  • Time evolution on graphs

Mathematics Subject Classification

  • Primary: 47D99
  • Secondary: 47D06
  • 35B10
  • 34G10