1 Introduction

Optimized Schwarz Methods are Domain Decomposition methods in which the boundary conditions on the artificial interfaces are of Robin type, i.e., they contain one or more parameters that can be optimized [1, 3, 4].

In our context, Asynchronous Schwarz methods are those where each subdomain solve is performed with whatever new information (to be used for the boundary conditions) has arrived from the neighboring subdomains since the last update, but without necessarily waiting for new information to arrive. For more details on asynchronous methods, see, e.g., [2] and references therein. See also Sect. 1.2 below.

In this paper we provide additional details of the convergence proof given in [5] for Asynchronous Optimized Schwarz (AOS) applied to the solution of a shifted Poisson equation in \(\mathbb {R}^2\). The results presented here complement those of [5].

1.1 Preliminaries

The aim is to provide a complete proof of the convergence of AOS for

$$\displaystyle \begin{aligned} \varDelta u -\eta u = f\;\;\; \text{ in }\; \mathbb{R}^2, {} \end{aligned} $$
(1)

with vanishing value of u at infinity, and η > 0. The space \(\mathbb {R}^2\) is divided into p overlapping infinite vertical strips. This means we have p − 1 vertical lines, say at coordinates \(x = \ell_1, \ldots, \ell_{p-1}\), and we assume for simplicity that we have the same overlap 2L between subdomains. We also assume, without loss of generality, that, except for the two unbounded subdomains, each strip has the same width, i.e., \(\ell_s - \ell_{s-1} = W\) for s = 2, …, p − 1, so that \(\ell_s = \ell_1 + (s-1)W\). The overlap satisfies 2L < W, and usually L ≪ W. Thus, we have \(\varOmega ^{(1)} = ]-\infty ; \ell _1 + L ] \times \mathbb {R}\), \(\varOmega ^{(s)} =[ \ell _{s-1}- L ; \ell _{s}+L ]\times \mathbb {R}\), s = 2, …, p − 1, and \(\varOmega ^{(p)} = [\ell _{p-1}- L;+\infty [\times \mathbb {R}\). In this context, the normal vector is in the x direction (with the appropriate sign).
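As a quick illustration of this geometry, the following Python sketch (with illustrative values for p, \(\ell_1\), W, and L; none taken from the paper) builds the strip boundaries and checks that neighboring subdomains overlap on an interval of length 2L.

```python
# Sketch of the strip decomposition of R^2 described above
# (illustrative values for p, l1, W, L).
p, l1, W, L = 5, 0.0, 1.0, 0.1
l = [l1 + (s - 1) * W for s in range(1, p)]        # l_1, ..., l_{p-1}

# Omega^(1), Omega^(s) for s = 2..p-1, and Omega^(p) as x-intervals
domains = ([(-float("inf"), l[0] + L)]
           + [(l[s - 2] - L, l[s - 1] + L) for s in range(2, p)]
           + [(l[p - 2] - L, float("inf"))])

# neighboring strips overlap on an interval of length exactly 2L
for (a1, b1), (a2, b2) in zip(domains, domains[1:]):
    assert abs(b1 - a2 - 2 * L) < 1e-12
```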

Let \(f^{(s)}\) and \(u^{n}_{s}\) denote the restrictions to \(\varOmega ^{(s)}\), s = 1, …, p, of f and of \(u^n\), the approximation to the solution at iteration n, respectively. Thus, \(u^{n}_{s} \in {V}^{(s)}\), a space of functions defined on \(\varOmega ^{(s)}\). We consider transmission conditions (on the artificial interfaces) composed of local operators. The local problems and the synchronous iteration process are described by the following equations

$$\displaystyle \begin{aligned} \begin{cases} (\varDelta-\eta) u^{n+1}_{1} = f^{(1)} &\text{ on }\ \varOmega^{(1)} , \\ \frac{\partial{u^{n+1}_{1}}}{\partial x}+\varLambda u^{n+1}_{1}=\frac{\partial{u^{n}_{2}}}{\partial x}+\varLambda u^{n}_{2} &\text{ for }\ x= \ell_1 + L, \\ \mbox{For } s=2,\ldots, p-1, \\ -\frac{\partial u^{n+1}_{s}}{\partial x}+\varLambda u^{n+1}_{s}=-\frac{\partial{u^{n}_{s-1}}}{\partial x}+\varLambda u^{n}_{s-1} &\text{ for }\ x= \ell_{s-1}-L, \\ (\varDelta-\eta) u^{n+1}_{s} = f^{(s)} &\text{ on }\ \varOmega^{(s)} , \\ \frac{\partial{u^{n+1}_{s}}}{\partial x}+\varLambda u^{n+1}_{s}=\frac{\partial{u^{n}_{s+1}}}{\partial x}+\varLambda u^{n}_{s+1} &\text{ for }\ x=\ell_{s} + L, \\ -\frac{\partial{u^{n+1}_{p}}}{\partial x}+\varLambda u^{n+1}_{p}=-\frac{\partial{u^{n}_{p-1}}}{\partial x}+\varLambda u^{n}_{p-1} &\text{ for }\ x=\ell_{p-1} -L, \\ (\varDelta-\eta) u^{n+1}_{p} = f^{(p)} &\text{ on }\ \varOmega^{(p)} , \\ \end{cases} {} \end{aligned} $$
(2)

where Λ is a local approximation of the Poincaré–Steklov operator by differential operators: e.g., Λ = α, with α a constant, for the OO0 family of artificial boundary conditions, and \(\varLambda =\alpha +\beta \frac {\partial ^2}{\partial \tau ^2}\) for the OO2 family, where \(\frac {\partial ^2}{\partial \tau ^2}\) is the second tangential derivative along the boundary and β is a constant; α and β are parameters whose values are chosen to optimize convergence properties, and thus minimize convergence bounds.

Using linearity, we obtain that the error of the synchronous iterative procedure is the solution of (2) with f = 0. The Fourier transform in the y direction of the error of local problem s at iteration n can then be written as (see [5])

$$\displaystyle \begin{aligned} \hat{u}^{n}_{s}(x,k)=A^{n}_{s}(k)e^{-\theta(k)\left|x-(l_{s-1}-L)\right|}+B^{n}_{s}(k)e^{-\theta(k)\left|x-(l_{s}+L)\right|} \end{aligned} $$
(3)

where \(\theta (k)=\sqrt {\eta +k^2}\). Let \(c(n)^T = (c_1(n), c_2(n), \ldots , c_{p-1}(n), c_p(n)) = (B^{n}_1, A^{n}_2,\) \(B^{n}_2, \ldots , A^{n}_{p-1}, B^{n}_{p-1}, A^{n}_p)\), where \(c_1=B^{n}_1\) and \(c_p=A^{n}_p\) are scalars, and \(c_s=(A^{n}_s,B^{n}_s)\) are ordered pairs for s = 2, …, p − 1. Plugging the expression (3) into (2) (with f = 0), we can write the iteration from u(n) to u(n + 1) in terms of the coefficients c(n) and c(n + 1), obtaining a (2p − 2) × (2p − 2) matrix \(\hat {T}\) such that \(c(n+1)=\hat {T}c(n)\); see [5] for more details. In that reference, it is shown that the operator \(\hat {T}\) is contracting in the max norm, and in this paper we continue the proof starting precisely from this result.
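To make the contraction concrete, the following Python sketch iterates the coefficient vector with a stand-in matrix whose max (infinity) norm is ρ < 1. The matrix is purely illustrative (it is not the actual \(\hat{T}\) derived in [5]); the point is only that contraction in the max norm forces geometric decay of the iterates.

```python
import numpy as np

# Illustrative stand-in (NOT the actual matrix from [5]): a random
# (2p-2) x (2p-2) matrix scaled so its max (infinity) norm equals
# rho < 1, driving the interface-coefficient iteration c(n+1) = T c(n).
rng = np.random.default_rng(0)
p = 5                        # number of subdomains
dim = 2 * p - 2              # coefficients B_1, (A_s, B_s), A_p
rho = 0.6                    # target contraction factor in max norm
T_hat = rng.uniform(-1, 1, (dim, dim))
T_hat *= rho / np.abs(T_hat).sum(axis=1).max()   # now ||T_hat||_inf = rho

c = rng.uniform(-1, 1, dim)  # initial coefficient vector c(0)
norms = [np.abs(c).max()]
for _ in range(30):
    c = T_hat @ c            # one synchronous sweep
    norms.append(np.abs(c).max())

# contraction gives ||c(n)||_inf <= rho^n ||c(0)||_inf
assert all(norms[n] <= rho**n * norms[0] + 1e-12 for n in range(len(norms)))
```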

1.2 Mathematical Model of Asynchronous Iterative Methods

Let \(X^{(1)},\ldots ,X^{(p)}\) be given sets and X be their Cartesian product, i.e., \(X = X^{(1)}\times \cdots \times X^{(p)}\). Thus x ∈ X implies \(x=\left (x^{(1)},\ldots ,x^{(p)}\right )\) with \(x^{(s)} \in X^{(s)}\) for s ∈{1, …, p}. Let \(T^{(s)} : X \rightarrow X^{(s)}\), where s ∈{1, …, p}, and let T : X → X be a vector-valued map (iteration map) given by \(T = (T^{(1)},\ldots ,T^{(p)})\) with a fixed point \(x^{*}\), i.e., \(x^{*} = T(x^{*})\). Let \(\{t_{n}\}_{n\in \mathbb {N}}\) be the sequence of time stamps at which at least one processor updates its associated component. Let \(\{\sigma (n)\}_{n\in \mathbb {N}}\) be a sequence with σ(n) ⊂{1, …, p} \(\forall n\in \mathbb {N}\). The set σ(n) consists of the labels (numbers) of the processors that update their associated components at the nth time stamp. Define, for s, q ∈{1, …, p}, a sequence of integers \(\{\tau ^{s}_{q}(n)\}_{n\in \mathbb {N}}\) representing the time-stamp index of the update of the data coming from processor q and available in processor s at the beginning of the computation of \(x^{(s)}(n)\), which ends at the nth time stamp. Let \(x(0)=\left (x^{(1)}(0),\ldots ,x^{(p)}(0)\right )\) be the initial approximation of the fixed point \(x^{*}\). Then, the new value computed by processor s at the nth time stamp is

$$\displaystyle \begin{aligned} x^{(s)}(n)=\left\{\begin{array}{ll} T^{(s)}\left(x^{(1)}(\tau^{s}_{1}(n)),\ldots,x^{(p)}(\tau^{s}_{p}(n))\right),& s\in\sigma(n),\\ x^{(s)}(n-1),& s\notin\sigma(n). \end{array}\right. \end{aligned}$$

It is assumed that the following three conditions (necessary for convergence) are satisfied:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall s,q\in\left\{ 1,\dots,p\right\},\ \forall n\in\mathbb{N}^{*},\ \tau^{s}_{q}(n) < n , {} \end{array} \end{aligned} $$
(4)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall s\in\left\{ 1,\dots,p \right\}, \operatorname{card}\left\{ n\in\mathbb{N}^{*}| s\in \sigma(n) \right\} = +\infty , {} \end{array} \end{aligned} $$
(5)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \forall s,q\in\left\{ 1,\dots,p \right\},\ \lim_{n\rightarrow+\infty} \tau^{s}_{q}(n) = +\infty . {} \end{array} \end{aligned} $$
(6)

Condition (4) indicates that data used at time \(t_n\) must have been produced before time \(t_n\), i.e., time does not flow backward. Condition (5) means that no process will ever stop updating its components. Condition (6) corresponds to the fact that new data will always be provided to the process. In other words, no process will have a piece of data that is never updated.
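The model above can be simulated directly. The following Python sketch (all names and parameter values are illustrative, not from the paper) applies asynchronous updates with bounded random delays to a contracting affine map T(x) = Ax + b; the update schedule satisfies conditions (4)–(6), and the iterates converge to the fixed point.

```python
import numpy as np

# Minimal sketch of the asynchronous model for a contracting affine
# map T(x) = A x + b (illustrative example, not the Schwarz operator).
# At each time stamp n, a subset sigma(n) of components is recomputed
# from possibly outdated copies of the other components; delays are
# bounded and every component is updated at least every p steps, so
# conditions (4)-(6) hold.
rng = np.random.default_rng(1)
p = 4
A = rng.uniform(-1, 1, (p, p))
A *= 0.5 / np.abs(A).sum(axis=1).max()       # ||A||_inf = 0.5 < 1
b = rng.uniform(-1, 1, p)
x_star = np.linalg.solve(np.eye(p) - A, b)   # fixed point x* = A x* + b

x = np.zeros(p)
history = [x.copy()]                         # x(0), x(1), ... (delayed reads)
max_delay = 3
for n in range(1, 200):
    # sigma(n): random nonempty set, plus a round-robin component so
    # that no component is starved (condition (5))
    sigma = set(np.flatnonzero(rng.random(p) < 0.5).tolist()) | {n % p}
    x = x.copy()
    for s in sigma:
        # tau_q^s(n) < n: read component q from a bounded-delay past state
        stale = np.array([history[max(0, n - 1 - rng.integers(max_delay))][q]
                          for q in range(p)])
        x[s] = A[s] @ stale + b[s]
    history.append(x.copy())

assert np.abs(x - x_star).max() < 1e-6       # asynchronous iterates converge
```

Bounded delays make condition (6) automatic; dropping the round-robin term and instead drawing σ(n) nonempty at random would still converge with probability one, but the bounded variant keeps the sketch deterministic.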

2 Convergence Proof for the Asynchronous Case

We now present the convergence proof of the asynchronous implementation of Optimized Schwarz with transmission conditions composed of local operators (as described in Sect. 1.1) when applied to (1). Note that the local problem of AOS is obtained by replacing, in (2), the iteration index n + 1 by the time of the new update, and n by the corresponding update times of the values of u received from the neighboring subdomains and available at the beginning of the computation of the new update. Let us define a time stamp as the instant of time at which at least one processor finishes its computation and produces a new update. Let \(t_m\) be the mth time stamp and \(u^{t_{m}}_{s}\) be the error of the local problem s at time \(t = t_m\). Note then that the asynchronous method converges if for any monotonically increasing sequence of time stamps \(\{t_{m}\}_{m\in \mathbb {N}}\) we have

$$\displaystyle \begin{aligned} \lim_{m\rightarrow\infty}u^{t_{m}}_{s}=0 \end{aligned} $$
(7)

Thus, in order to prove convergence of the asynchronous iterations, we just need to prove that (7) holds for any monotonically increasing sequence of time stamps \(\{t_{m}\}_{m\in \mathbb {N}}\), which is what we prove next.

Theorem 1

Let us define a time stamp \(t_m\) as the instant of time at which at least one processor finishes its computation and produces a new update. Let \(u^{t_{m}}_{s}(x,y)\) be the error of the local problem s (of the asynchronous version of (2)), s ∈{1, …, p}, and \(\hat {u}^{t_{m}}_{s}(x,k)\) be its corresponding Fourier transform in the y direction. Let \(S=\left \{l_{s-1}-L:s=2,\ldots ,p \right \}\cup \left \{l_{s}+L:s=1,\ldots ,p-1 \right \}\) (i.e., S is the set of the x coordinates of the artificial boundaries of all the local problems). Then, if \(\hat {u}^{0}_{s}(x,k)\) is uniformly bounded in \(k\in \mathbb {R}\) and x ∈ S, we have, for all s ∈{1, …, p}, \(\lim _{m\rightarrow \infty } u^{t_{m}}_{s}(x,y)=0\) in \(\varOmega ^{(s)}\) for any (monotonically increasing) sequence of time stamps \(\{t_m\}_{m\in \mathbb {N}}\).

Outline of the Proof

Note first that all the derivatives of \(u^{t_{m}}_{s}\) exist and are continuous. Then, if \(u^{t_{m}}_{s}\) converges to zero uniformly in \(\left [l-\epsilon ,l+\epsilon \right ]\times \mathbb {R}\) as \(m \rightarrow \infty \), for an interface location l ∈ S, and the first derivative and the other derivatives of \(u^{t_{m}}_{s}\) contained in Λ are continuous, it can be shown that \(\lim _{m\rightarrow \infty } \left (\frac {\partial u^{t_m}_{s}}{\partial x}+\varLambda u^{t_m}_s\right )(x,y)=0\) uniformly in \(\{l\}\times \mathbb {R}\).

We want to prove that for any sequence of time stamps \(\left \{ t_m\right \}_{m\in \mathbb {N}}\) and for every s ∈{1, …, p} we have \(\lim _{m\rightarrow \infty } |u^{t_m}_{s}(x,y)|=0\) in \(\varOmega ^{(s)}\). Note that, to prove this statement, by the argument given in the previous paragraph, with \(S_{\epsilon } = \bigcup _{z\in S}[z-\epsilon ,z+\epsilon ]\), we just need to prove that for every s ∈{1, …, p} it holds that \(\lim _{m\rightarrow \infty } |u^{t_m}_{s}(x,y)|=0\) uniformly in \(S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\times \mathbb {R}\), since this implies that the values of the boundary conditions of each local problem converge to zero, and consequently so does the solution of each local problem in the interior of its domain.

Observe that, if \(\lim _{m \rightarrow \infty }|\hat {u}^{t_m}_{s}(x,k)|=0\) and

$$\displaystyle \begin{aligned} \lim_{m\rightarrow\infty}\int_{-\infty}^{\infty}|\hat{u}^{t_{m}}_{s}(x,k)|dk=\int_{-\infty}^{\infty} \lim_{m\rightarrow\infty}|\hat{u}^{t_{m}}_{s}(x,k)|dk, \end{aligned} $$
(8)

we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{m\rightarrow\infty}|u^{t_{m}}_{s}(x,y)|&\displaystyle =&\displaystyle \lim_{m\rightarrow\infty}\left|\frac{1}{(2\pi)^2}\int_{-\infty}^{\infty}\hat{u}^{t_{m}}_{s}(x,k)e^{iyk}dk\right|\\ &\displaystyle \leq&\displaystyle \frac{1}{(2\pi)^2}\lim_{m\rightarrow\infty}\int_{-\infty}^{\infty}|\hat{u}^{t_{m}}_{s}(x,k)|dk=\frac{1}{(2\pi)^2}\int_{-\infty}^{\infty} \lim_{m\rightarrow\infty}|\hat{u}^{t_{m}}_{s}(x,k)|dk=0. \end{array} \end{aligned} $$

Thus, in order to prove that \(\lim _{m\rightarrow \infty } |u^{t_m}_{s}(x,y)|=0\) in Ω (s), it suffices to prove that, for every s ∈{1, …, p}, the following three statements hold:

  1. \(\lim _{m \rightarrow \infty } |A^{t_m}_{s}(k)|=0\) and \(\lim _{m\rightarrow \infty } |B^{t_m}_{s}(k)|=0\).

  2. \(\lim _{m \rightarrow \infty }|\hat {u}^{t_m}_{s}(x,k)|=0\) for all \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) and \(k\in \mathbb {R}\).

  3. For all \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\), (8) holds.

Item 3. means, in other words, that if \(|\hat {u}^{t_{m}}_{s}(x,.)|\) goes to zero as m goes to infinity, so will its integral over \(k\in \mathbb {R}\) and, in turn, the inverse Fourier transform of \( \hat {u}^{t_{m}}_{s}(x,.)\).

Proof of the Theorem

We first prove that \(||c(0)||_{\infty } < \infty \). For ease of notation, for each subdomain s, let \(p_{s}(k)\) denote the value of \(\hat {u}^{0}_{s}\) at the left artificial boundary and \(q_{s}(k)\) its value at the right artificial boundary. Thus, it follows from the expression (3) that, at \(x = l_{s-1} - L\),

$$\displaystyle \begin{aligned} \hat{u}^{0}_{s}(l_{s-1}-L,k)=A^{0}_{s}(k)+B^{0}_{s}(k)e^{\theta(k)(l_{s-1}-l_{s}-2L)}=A^{0}_{s}(k)+B^{0}_{s}(k)e^{-\theta(k)(W+2L)}=p_{s}(k) \end{aligned} $$
(9)

and at x = l s + L

$$\displaystyle \begin{aligned} \hat{u}^{0}_{s}(l_{s}+L,k)=A^{0}_{s}(k)e^{-\theta(k)(l_{s}-l_{s-1}+2L)}+B^{0}_{s}(k)=A^{0}_{s}(k)e^{-\theta(k)(W+2L)}+B^{0}_{s}(k)=q_{s}(k). \end{aligned} $$
(10)

From (10) we have \(B^{0}_{s}(k)=q_{s}(k)-A^{0}_{s}(k)e^{-\theta (k)(W+2L)}\). Then, plugging this expression of \(B^{0}_{s}(k)\) into (9) gives

$$\displaystyle \begin{aligned} \begin{array}{rcl} A^{0}_{s}(k)+\left[q_{s}(k)-A^{0}_{s}(k)e^{-\theta(k)(W+2L)}\right]e^{-\theta(k)(W+2L)}&\displaystyle =&\displaystyle p_{s}(k),\\ A^{0}_{s}(k) \left[1-e^{-2\theta(k)(W+2L)}\right]&\displaystyle =&\displaystyle p_{s}(k)-q_{s}(k)e^{-\theta(k)(W+2L)},\\ A^{0}_{s}(k) &\displaystyle =&\displaystyle \frac{p_{s}(k)-q_{s}(k)e^{-\theta(k)(W+2L)}}{1-e^{-2\theta(k)(W+2L)}}, \end{array} \end{aligned} $$
$$\displaystyle \begin{aligned}\begin{array}{rcl}{} |A^{0}_{s}(k)| &\displaystyle =&\displaystyle \frac{|p_{s}(k)-q_{s}(k)e^{-\theta(k)(W+2L)}|}{|1-e^{-2\theta(k)(W+2L)}|}\leq\frac{|p_{s}(k)|+|q_{s}(k)|e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}. \end{array} \end{aligned} $$

By a similar process we obtain

$$\displaystyle \begin{aligned} |B^{0}_{s}(k)|\leq\frac{|q_{s}(k)|+|p_{s}(k)|e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}. \end{aligned}$$

Let \(p^{*}(k)\) and \(q^{*}(k)\) be such that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \max_{s\in\{1,\ldots,p\}}\left\{ \max\left\{ \frac{|p_{s}(k)|+|q_{s}(k)|e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}\right.\right.&\displaystyle ,&\displaystyle \left.\left.\frac{|q_{s}(k)|+|p_{s}(k)|e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}\right\}\right\}\\&\displaystyle =&\displaystyle \frac{|p^{*}(k)|+|q^{*}(k)|e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}. \end{array} \end{aligned} $$

Then, we have that

$$\displaystyle \begin{aligned} ||c(0)||{}_{\infty}\leq\frac{|p^{*}(k)|+|q^{*}(k)|e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}. \end{aligned}$$

By hypothesis, \(\hat {u}^{0}_{s}\) is uniformly bounded in \(k\in \mathbb {R}\) and x ∈ S. Thus, there exists a number M > 0 such that \(|\hat {u}^{0}_{s}(x,k)|\leq M\) for any \(k\in \mathbb {R}\) and x ∈ S. Then, we have that |p s(k)|, |q s(k)|≤ M for any \(k\in \mathbb {R}\) and s ∈{1, …, p}. Then, necessarily, |p (k)|, |q (k)|≤ M, and consequently \(||c(0)||_{\infty } < \infty \).
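The computation of \(A^{0}_{s}\) and \(B^{0}_{s}\) above is just the solution of a 2 × 2 linear system. The following Python sketch (with illustrative values of η, W, L and sample boundary data, none taken from the paper) solves (9)–(10) numerically and checks both the closed-form expression and the bound derived above.

```python
import numpy as np

# Sketch: for sample boundary values p_s(k), q_s(k), solve the 2x2
# system (9)-(10) for A_s^0(k), B_s^0(k), then check the closed form
# and the max-norm bound.  eta, W, L, and the sample data are
# illustrative choices.
eta, W, L = 1.0, 1.0, 0.1
theta = lambda k: np.sqrt(eta + k**2)

def coefficients(p_s, q_s, k):
    E = np.exp(-theta(k) * (W + 2 * L))
    # [1  E] [A]   [p_s]        (equations (9) and (10))
    # [E  1] [B] = [q_s]
    A, B = np.linalg.solve(np.array([[1, E], [E, 1]]), np.array([p_s, q_s]))
    return A, B, E

k, p_s, q_s = 2.0, 0.7, -0.4
A, B, E = coefficients(p_s, q_s, k)
# closed form for A_s^0(k) obtained by elimination
assert np.isclose(A, (p_s - q_s * E) / (1 - E**2))
# bound |A_s^0(k)| <= (|p_s| + |q_s| E) / (1 - exp(-2 sqrt(eta)(W+2L)))
bound = (abs(p_s) + abs(q_s) * E) / (1 - np.exp(-2 * np.sqrt(eta) * (W + 2 * L)))
assert abs(A) <= bound
```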

Let {t m} be a monotonically increasing sequence of time stamps. As mentioned previously, in [5] it is proven that \(||\hat {T}(k)c(k)||{ }_{\infty }\leq \rho ||c(k)||{ }_{\infty }\), with ρ < 1. This implies that after one application of a local operator to an arbitrary vector c old(k) we have

$$\displaystyle \begin{aligned} |A^{\text{new}}_{s}|, |B^{\text{new}}_{s}|\leq ||\hat{T}^{s}(k)c_{\text{old}}(k)||{}_{\infty} \leq\rho||c_{\text{old}}(k)||{}_{\infty} \end{aligned}$$

and after all processes have updated their values at least once, say by time stamp \(t_{*}\), we have \(|A^{t_{*}}_{s}|, |B^{t_{*}}_{s}|\leq \rho ||c(0)||_{\infty }\). This implies, in turn, that given a monotonically increasing sequence \(\{t_{m}\}_{m\in \mathbb {N}}\), at time \(t_m\) we have

$$\displaystyle \begin{aligned} |A^{t_{m}}_{s}|, |B^{t_{m}}_{s}|\leq \rho^{\phi_{s}(m)}||c(0)||{}_{\infty}, \end{aligned}$$

where, for each s ∈{1, …, p}, \(\phi _{s}:\mathbb {N} \rightarrow \mathbb {N}\) is such that \(\phi _{s}(m) \rightarrow \infty \) as \(m \rightarrow \infty \). Then,

$$\displaystyle \begin{aligned} \lim_{m\rightarrow\infty}|A^{t_{m}}_{s}(k)|\leq \lim_{m\rightarrow\infty}\rho^{\phi_{s}(m)}||c(0)||{}_{\infty}=||c(0)||{}_{\infty}\lim_{m\rightarrow\infty} \rho^{\phi_{s}(m)}=||c(0)||{}_{\infty}\cdot 0=0. \end{aligned}$$

Similarly, \(\lim _{m\rightarrow \infty }|B^{t_{m}}_{s}(k)|=0\), and therefore

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{m\rightarrow\infty}|\hat{u}^{t_{m}}_{s}(x,k)|&\displaystyle =&\displaystyle \lim_{m\rightarrow\infty}\left|A^{t_{m}}_{s}(k)e^{-\theta(k)|x-(l_{s-1}-L)|}+B^{t_{m}}_{s}(k)e^{-\theta(k)|x-(l_{s}+L)|}\right| \\ &\displaystyle \leq&\displaystyle \lim_{m\rightarrow\infty}\left(\left|A^{t_{m}}_{s}(k)\right|e^{-\theta(k)|x-(l_{s-1}-L)|}+\left|B^{t_{m}}_{s}(k)\right|e^{-\theta(k)|x-(l_{s}+L)|} \right)\\ &\displaystyle =&\displaystyle \left(\lim_{m\rightarrow\infty}\left|A^{t_{m}}_{s}(k)\right|\right)e^{-\theta(k)|x-(l_{s-1}-L)|}+\left(\lim_{m\rightarrow\infty}\left|B^{t_{m}}_{s}(k)\right|\right)e^{-\theta(k)|x-(l_{s}+L)|}\\&\displaystyle =&\displaystyle 0. \end{array} \end{aligned} $$

To complete the proof, we need to show that (8) holds for \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) and \(y\in \mathbb {R}\). We show now that, for all \(m\in \mathbb {N}\), \(|\hat {u}^{t_{m}}_{s}(x,.)|\) is bounded by an \(L^{1}(\mathbb {R})\) function. To that end, we have that,

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} |\hat{u}^{t_{m}}_{s}(x,k)|&\displaystyle =&\displaystyle |A^{t_{m}}_{s}(k)e^{-\theta(k)\left|x-(l_{s-1}-L)\right|}+B^{t_{m}}_{s}(k)e^{-\theta(k)\left|x-(l_{s}+L)\right|}| \\ &\displaystyle \leq&\displaystyle |A^{t_{m}}_{s}(k)|e^{-\theta(k)\left|x-(l_{s-1}-L)\right|}+|B^{t_{m}}_{s}(k)|e^{-\theta(k)\left|x-(l_{s}+L)\right|} \\ &\displaystyle \leq&\displaystyle \rho^{\phi_{s}(m)}||c(0)||{}_{\infty}\left(e^{-\theta(k)\left|x-(l_{s-1}-L)\right|}+e^{-\theta(k)\left|x-(l_{s}+L)\right|}\right) \\ &\displaystyle \leq&\displaystyle \rho^{\phi_{s}(m)}\frac{|p^{*}(k)|+|q^{*}(k)|e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}\left(e^{-\theta(k)\left|x-(l_{s-1}-L)\right|}+e^{-\theta(k)\left|x-(l_{s}+L)\right|}\right). \end{array} \end{aligned} $$
(11)

Let

$$\displaystyle \begin{aligned} g(x,k)=\frac{|p^{*}(k)|+|q^{*}(k)|e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}\left(e^{-\theta(k)\left|x-(l_{s-1}-L)\right|}+e^{-\theta(k)\left|x-(l_{s}+L)\right|}\right). \end{aligned} $$
(12)

Thus, we have \(|\hat {u}^{t_{m}}_{s}(x,k)|\leq g(x,k)\) for any \(m\in \mathbb {N}\). We show next that \(g(x,.)\in L^{1}(\mathbb {R})\). Since |p (k)|, |q (k)|≤ M, we have

$$\displaystyle \begin{aligned} g(x,k)\leq M\frac{1+e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}\left(e^{-\theta(k)\left|x-(l_{s-1}-L)\right|}+e^{-\theta(k)\left|x-(l_{s}+L)\right|}\right) \end{aligned} $$
(13)

Thus,

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \int_{-\infty}^{\infty}|g(x,k)|dk&\displaystyle \leq&\displaystyle \int_{-\infty}^{\infty}M\frac{1+e^{-\theta(k)(W+2L)}}{1-e^{-2\sqrt{\eta}(W+2L)}}\left(e^{-\theta(k)\left|x-(l_{s-1}-L)\right|}+e^{-\theta(k)\left|x-(l_{s}+L)\right|}\right)dk\\ &\displaystyle =&\displaystyle \frac{M}{1-e^{-2\sqrt{\eta}(W+2L)}} \int_{-\infty}^{\infty} \left(e^{-\theta(k)\left|x-(l_{s-1}-L)\right|}+e^{-\theta(k)\left|x-(l_{s}+L)\right|}\right.\\ &\displaystyle &\displaystyle \left.+e^{-\theta(k)[W+2L+\left|x-(l_{s-1}-L)\right|]}+e^{-\theta(k)[W+2L+\left|x-(l_{s}+L)\right|]}\right)dk\\ &\displaystyle \leq&\displaystyle \frac{M}{1-e^{-2\sqrt{\eta}(W+2L)}} \left(\frac{2}{\left|x-(l_{s-1}-L)\right|}+\frac{2}{\left|x-(l_{s}+L)\right|}\right.\\ &\displaystyle &\displaystyle \left.+\frac{2}{W+2L+\left|x-(l_{s-1}-L)\right|}+\frac{2}{W+2L+\left|x-(l_{s}+L)\right|}\right). \end{array} \end{aligned} $$
(14)

Note that for \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) we have \(|x-(l_{s-1}-L)|, |x-(l_{s}+L)|\geq 2L-\epsilon \). Then, plugging these inequalities into (14), we obtain

$$\displaystyle \begin{aligned} \int_{-\infty}^{\infty}|g(x,k)|dk\leq\frac{4M(W+6L-2\epsilon)}{(1-e^{-2\sqrt{\eta}(W+2L)})(2L-\epsilon)(W+4L-\epsilon)}, \end{aligned} $$
(15)

i.e., \(g(x,.)\in L^{1}(\mathbb {R})\). Consequently, for any \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) there exists a \(g(x,.)\in L^{1}(\mathbb {R})\) such that \(|\hat {u}^{t_{m}}_{s}(x,k)|\leq g(x,k)\) for all \(m\in \mathbb {N}\), and by the Lebesgue Dominated Convergence Theorem we have then that (8) holds.
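As a numerical sanity check of (15) (with illustrative parameter values, not taken from the paper), one can take \(|p^{*}| = |q^{*}| = M\) in (12), so that g reduces to the right-hand side of (13), integrate over k numerically, and compare with the right-hand side of (15):

```python
import numpy as np

# Numerical sanity check of the L1 bound (15), with illustrative
# values of eta, W, L, M, eps; we take |p*| = |q*| = M, so g(x, .)
# equals the right-hand side of (13).
eta, W, L, M, eps = 1.0, 1.0, 0.1, 1.0, 0.05
theta = lambda k: np.sqrt(eta + k**2)

l_prev, l_s = 0.0, W               # l_{s-1}, l_s for one interior strip
x = l_prev + L - eps               # a point of S_eps inside [l_{s-1}, l_s]
d1, d2 = abs(x - (l_prev - L)), abs(x - (l_s + L))
assert min(d1, d2) >= 2 * L - eps  # the distance estimate used for (15)

k = np.linspace(-100, 100, 400001)
denom = 1 - np.exp(-2 * np.sqrt(eta) * (W + 2 * L))
g = (M * (1 + np.exp(-theta(k) * (W + 2 * L))) / denom
     * (np.exp(-theta(k) * d1) + np.exp(-theta(k) * d2)))
integral = g.sum() * (k[1] - k[0])          # simple Riemann sum

bound = (4 * M * (W + 6 * L - 2 * eps)
         / (denom * (2 * L - eps) * (W + 4 * L - eps)))
assert integral <= bound                    # the bound (15) holds
```

Since θ(k) ≥ |k|, the tail truncated at |k| = 100 contributes a negligible amount, so the comparison is meaningful.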

The above argument was for s = 2, …, p − 1. Using the same argument but with \(A^{t_{m}}_{1}=0\) and \(-\infty \) instead of \(l_{s-1}-L\), we can see that (8) holds for s = 1; and, using the same argument but with \(B^{t_{m}}_{p}=0\) and \(+\infty \) instead of \(l_{s}+L\), it can be shown that (8) holds for s = p.

Thus, from (11), (12), and (15) we have, for all \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) and \(y\in \mathbb {R}\), that

$$\displaystyle \begin{aligned} |u^{t_{m}}_{s}(x,y)|\leq\frac{1}{(2\pi)^2}\int_{-\infty}^{\infty}|\hat{u}^{t_{m}}_{s}(x,k)|dk\leq \frac{\rho^{\phi_{s}(m)}}{\pi^{2}}\frac{M(W+6L-2\epsilon)}{(1-e^{-2\sqrt{\eta}(W+2L)})(2L-\epsilon)(W+4L-\epsilon)}. \end{aligned}$$

Consequently, \(u^{t_{m}}_{s}\rightarrow 0\) uniformly in \(S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\times \mathbb {R}\) as \(m \rightarrow \infty \). Then, as explained in the outline of the proof, the values of the boundary conditions of each local problem go to zero as m goes to infinity, and therefore, for all s ∈{1, …, p}, we have \(u^{t_{m}}_{s}\rightarrow 0\) in \(\varOmega ^{(s)}\) as \(m \rightarrow \infty \). Given that the sequence of time stamps was arbitrary, the theorem is proven. □

Remark 1

Note that the condition that \(\hat {u}^{0}_{s}(x,k)\) is uniformly bounded in \(k\in \mathbb {R}\) and x ∈ S can be weakened to the condition that \(p^{*}\) and \(q^{*}\) be such that \(g(x,.)\in L^{1}(\mathbb {R})\).

Remark 2

Note that, for synchronous and asynchronous iterations, for a given \(t_m\), the value of \(\phi _{s}(m)\) is, in general, different for each s, but they have a common lower bound, i.e., \(\phi _{s}(m) \geq n_{\min }\), where \(n_{\min } = \min _{s\in \{1,\ldots ,p\}}\{n_{s}\}\) and \(n_s\) is the local update number of process s. Also, for any s, the value of \(\phi _{s}(m)\) can be much larger than \(n_s\). In the synchronous case, all the local update numbers are equal to the global iteration number; therefore, \(n_{\min }\) is just the (global) iteration number.

3 Conclusion

In [5], it was shown that the operator \(\hat T\), mapping the coefficients of the Fourier transform of the error at one iteration to those at the next iteration, is contracting in the max norm. In this paper, we used this result to complete a proof that, for the operator Δ − η, the asynchronous optimized Schwarz method converges for any initial approximation \(u^0\) whose initial error has a Fourier transform (in the y direction) uniformly bounded on each of the artificial interfaces.