Abstract
A convergence proof of Asynchronous Optimized Schwarz Methods applied to a shifted Laplacian problem, with negative shift, in \(\mathbb {R}^2\) is presented. Sufficient conditions for convergence, involving the initial approximation of the solution, are discussed.
1 Introduction
Optimized Schwarz Methods are Domain Decomposition methods in which the boundary conditions on the artificial interfaces are of Robin type, i.e., containing one or more parameters that can be optimized [1, 3, 4].
In our context, Asynchronous Schwarz methods are those where each subdomain solve is performed with whatever new information (to be used for the boundary conditions) has arrived from the neighboring subdomains since the last update, but without necessarily waiting for new information to arrive. For more details on asynchronous methods, see, e.g. [2] and references therein. See also Sect. 1.2 below.
In this paper we add more details to the convergence proof given in [5] of Asynchronous Optimized Schwarz (AOS) where it is used to solve Poisson’s equation in \(\mathbb {R}^2\). The results presented here complement those of [5].
1.1 Preliminaries
The aim is to provide a complete proof of the convergence of AOS for
with vanishing value of u at infinity, and η > 0. The space \(\mathbb {R}^2\) is divided into p overlapping infinite vertical strips. This means we have p − 1 vertical lines, say at coordinates \(x=\ell _1,\ldots ,\ell _{p-1}\); and we assume for simplicity that we have the same overlap 2L between subdomains. We also assume, without loss of generality, that, except for the two unbounded subdomains, each strip has the same width, i.e., \(\ell _s-\ell _{s-1}=W\) for s = 2, …, p − 1, so that \(\ell _s=\ell _1+(s-1)W\). We assume further that the overlap satisfies 2L < W, and usually L ≪ W. Thus, we have \(\varOmega ^{(1)} = ]-\infty ; \ell _1 + L ] \times \mathbb {R}\), \(\varOmega ^{(s)} =[ \ell _{s-1}- L ; \ell _{s}+L ]\times \mathbb {R}\), s = 2, …, p − 1, and \(\varOmega ^{(p)} = [\ell _{p-1}- L;+\infty [\times \mathbb {R}\). In this context, the normal vector is in the x direction (with the appropriate sign).
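As an illustration, the strip geometry above can be sketched numerically; the parameter values below (p = 4, \(\ell _1=0\), W = 1, L = 0.1) are hypothetical, chosen only to satisfy 2L < W:

```python
import math

def strips(p, ell1, W, L):
    """Return the x-intervals of the p overlapping vertical strips Omega^(s)."""
    ell = [ell1 + (s - 1) * W for s in range(1, p)]   # interface lines l_1..l_{p-1}
    doms = [(-math.inf, ell[0] + L)]                  # Omega^(1), unbounded to the left
    for s in range(2, p):                             # interior strips, width W + 2L
        doms.append((ell[s - 2] - L, ell[s - 1] + L))
    doms.append((ell[-1] - L, math.inf))              # Omega^(p), unbounded to the right
    return doms

doms = strips(p=4, ell1=0.0, W=1.0, L=0.1)
# consecutive subdomains overlap in an interval of length exactly 2L
overlaps = [doms[s][1] - doms[s + 1][0] for s in range(len(doms) - 1)]
```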
Let f (s) and \(u^{n}_{s}\) denote the restriction of f and u n, the approximation to the solution at the iteration n, to Ω (s), s = 1, …, p, respectively. Thus, \(u^{n}_{s} \in {V}^{(s)}\), a space of functions defined on Ω (s). We consider transmission conditions (on the artificial interfaces) composed of local operators. The local problems and the synchronous iteration process is described by the following equations
where Λ is a local approximation to the Poincaré-Steklov operator using differential operators (e.g., Λ = α, with α a constant, for artificial boundary conditions of the OO0 family, and \(\varLambda =\alpha +\beta \frac {\partial ^2}{\partial \tau ^2}\), with β a constant, for the OO2 family, where \(\frac {\partial ^2}{\partial \tau ^2}\) is the second derivative in the direction tangential to the boundary; α and β are parameters whose values are chosen to optimize the convergence properties and thus minimize the convergence bounds).
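In Fourier space (transform in the tangential direction y, with dual variable k), these operators act as multiplication by a symbol; a sketch, assuming the convention \(\partial ^2/\partial \tau ^2\mapsto -k^2\) (sign conventions may differ from those in [1, 3]):

```latex
% Fourier symbols of the OO0 and OO2 transmission operators (sketch):
\widehat{\Lambda}_{\mathrm{OO0}}(k) = \alpha , \qquad
\widehat{\Lambda}_{\mathrm{OO2}}(k) = \alpha - \beta k^{2} .
```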
Using linearity we obtain that the error of the synchronous iterative procedure is the solution of (2) with f = 0. The Fourier transform in the y direction of the error of the local problem s at iteration n then can be written as (see [5])
where \(\theta (k)=\sqrt {\eta +k^2}\). Let \(c(n)^T=(c_1(n), c_2(n), \ldots , c_{p-1}(n), c_p(n)) = (B^{n}_1, A^{n}_2, B^{n}_2, \ldots , A^{n}_{p-1}, B^{n}_{p-1}, A^{n}_p)\), where \(c_1=B^{n}_1\) and \(c_p=A^{n}_p\) are scalars, and \(c_s=(A^{n}_s,B^{n}_s)\) are ordered pairs for s = 2, …, p − 1. Plugging the expression (3) into (2) (with f = 0), we can write the iteration from u(n) to u(n + 1) in terms of the coefficients c(n) and c(n + 1), obtaining a (2p − 2) × (2p − 2) matrix \(\hat {T}\) such that \(c(n+1)=\hat {T}c(n)\); see [5] for more details. In that reference, it is shown that the operator \(\hat {T}\) is contracting in the max norm, and in this paper we continue the proof starting precisely from this result.
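The contraction property can be illustrated with a toy stand-in for \(\hat {T}\) (the matrix below is hypothetical, not the one derived in [5]): any fixed matrix with max norm ρ < 1 drives \(c(n+1)=\hat {T}c(n)\) to zero geometrically.

```python
def inf_norm_mat(T):
    """Max (infinity) norm of a matrix: the largest absolute row sum."""
    return max(sum(abs(a) for a in row) for row in T)

def apply_mat(T, c):
    return [sum(a * x for a, x in zip(row, c)) for row in T]

# hypothetical 4x4 stand-in for T-hat, with infinity norm rho = 0.7 < 1
T = [[0.0, 0.7, 0.0, 0.0],
     [0.3, 0.0, 0.4, 0.0],
     [0.0, 0.4, 0.0, 0.3],
     [0.0, 0.0, 0.7, 0.0]]
rho = inf_norm_mat(T)
c = [1.0, -2.0, 2.0, -1.0]
c0 = max(abs(x) for x in c)
for n in range(40):
    c = apply_mat(T, c)
# after n steps, ||c(n)||_inf <= rho**n * ||c(0)||_inf
```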
1.2 Mathematical Model of Asynchronous Iterative Methods
Let X (1), …, X (p) be given sets and X be their Cartesian product, i.e., X = X (1) ×⋯ × X (p). Thus x ∈ X implies \(x=\left (x^{(1)},\ldots ,x^{(p)}\right )\) with x (s) ∈ X (s) for s ∈{1, …, p}. Let T (s) : X → X (s) where s ∈{1, …, p}, and let T : X → X be a vector-valued map (iteration map) given by T = (T (1), …, T (p)) with a fixed point x ∗, i.e., x ∗ = T(x ∗). Let \(\{t_{n}\}_{n\in \mathbb {N}}\) be the sequence of time stamps at which at least one processor updates its associated component. Let \(\{\sigma (n)\}_{n\in \mathbb {N}}\) be a sequence with σ(n) ⊂{1, …, p} \(\forall n\in \mathbb {N}\). The set σ(n) consists of labels (numbers) of the processors that update their associated component at the nth time stamp. Define for s, q ∈{1, …, p}, \(\{\tau ^{s}_{q}(n)\}_{n\in \mathbb {N}}\) a sequence of integers, representing the time-stamp index of the update of the data coming from processor q and available in processor s at the beginning of the computation of x (s)(n) which ends at the nth time stamp. Let \(x(0)=\left (x^{(1)}(0),\ldots ,x^{(p)}(0)\right )\) be the initial approximation (of the fixed point x ∗). Then, the new computed value updated by processor s at the nth time stamp is
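A minimal simulation of this model (with a hypothetical contracting map T, one updating processor per time stamp, and bounded staleness) behaves as follows:

```python
# Hypothetical asynchronous iteration: p components, round-robin updates,
# each update reading possibly stale (delayed) copies of the other components.
p, rho = 4, 0.5

def T_s(s, x):
    """Component map of a contracting toy T with fixed point x* = 0:
    T(x)_s = rho * (average of the other components)."""
    others = [x[q] for q in range(p) if q != s]
    return rho * sum(others) / len(others)

x = [1.0, -1.0, 2.0, -2.0]           # x(0), the initial approximation
history = [list(x)]                  # history[n]: state after time stamp t_n
for n in range(200):
    s = n % p                        # sigma(n) = {s}: one updater per stamp
    delay = n % 3                    # bounded staleness: tau_q^s(n) >= n - 2
    stale = history[max(0, len(history) - 1 - delay)]
    x[s] = T_s(s, stale)
    history.append(list(x))
# despite the stale reads, x converges to the fixed point x* = 0
```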
It is assumed that the following three conditions (necessary for convergence) are satisfied:
Condition (4) indicates that data used at the time t n must have been produced before time t n, i.e., time does not flow backward. Condition (5) means that no process will ever stop updating its components. Condition (6) corresponds to the fact that new data will always be provided to the process. In other words, no process will have a piece of data that is never updated.
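For reference, the three conditions are classically written as follows (a reconstruction following [2]; the typesetting of (4)–(6) may differ):

```latex
% Classical conditions for convergence of asynchronous iterations (cf. [2]):
\tau^{s}_{q}(n) \le n - 1
  \quad \forall s, q \in \{1,\ldots,p\},\ \forall n \in \mathbb{N}, \\
\#\{\, n \in \mathbb{N} \,:\, s \in \sigma(n) \,\} = \infty
  \quad \forall s \in \{1,\ldots,p\}, \\
\lim_{n \to \infty} \tau^{s}_{q}(n) = \infty
  \quad \forall s, q \in \{1,\ldots,p\}.
```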
2 Convergence Proof for the Asynchronous Case
We now present the convergence proof of the asynchronous implementation of Optimized Schwarz with transmission conditions composed of local operators (as described in Sect. 1.1) when applied to (1). Note that the local problem of AOS is obtained by replacing, in (2), n + 1 by t new and n by the corresponding update times of the values of u received from the neighboring subdomains and available at the beginning of the computation of the new update. Let us define a time stamp as the instant of time at which at least one processor finishes its computation and produces a new update. Let t m be the mth time stamp and \(u^{t_{m}}_{s}\) be the error of the local problem s at time t = t m. Note then that the asynchronous method converges if for any monotonically increasing sequence of time stamps \(\{t_{m}\}_{m\in \mathbb {N}}\) we have
Thus, in order to prove convergence of the asynchronous iterations, we just need to prove that (7) holds for any monotonically increasing sequence of time stamps \(\{t_{m}\}_{m\in \mathbb {N}}\), which is what we prove next.
Theorem 1
Let us define a time stamp t m as the instant of time at which at least one processor finishes its computation and produces a new update. Let \(u^{t_{m}}_{s}(x,y)\) be the error of the local problem s (of the asynchronous version of (2)), s ∈{1, …, p}, and \(\hat {u}^{t_{m}}_{s}(x,k)\) be its corresponding Fourier transform in the y direction. Let \(S=\left \{\ell _{s-1}-L:s=2,\ldots ,p \right \}\cup \left \{\ell _{s}+L:s=1,\ldots ,p-1 \right \}\) (i.e., S is the set of the x-coordinates of the artificial boundaries of the local problems). Then, if \(\hat {u}^{0}_{s}(x,k)\) is uniformly bounded in \(k\in \mathbb {R}\) and x ∈ S, we have, ∀s ∈{1, …, p}, \(\lim _{m\rightarrow \infty } u^{t_{m}}_{s}(x,y)=0\) in Ω (s) for any (monotonically increasing) sequence of time stamps \(\{t_m\}_{m\in \mathbb {N}}\).
Outline of the Proof
Note first that all the derivatives of \(u^{t_{m}}_{s}\) exist and are continuous. Then, if \(u^{t_{m}}_{s}\) converges to zero uniformly in \(\left [l-\epsilon ,l+\epsilon \right ]\times \mathbb {R}\) as m →∞, for l ∈ S, and the first derivative of \(u^{t_{m}}_{s}\), as well as its derivatives of the other orders appearing in Λ, are continuous, it can be shown that \(\lim _{m\rightarrow \infty } \left (\frac {\partial u^{t_m}_{s}}{\partial x}+\varLambda u^{t_m}_s\right )(x,y)=0\) uniformly on \(\{l\}\times \mathbb {R}\).
We want to prove that for any sequence of time stamps \(\left \{ t_m\right \}_{m\in \mathbb {N}}\) and for every s ∈{1, …, p} we have \(\lim _{m\rightarrow \infty } |u^{t_m}_{s}(x,y)|=0\) in Ω (s). Note that, to prove this statement, by the argument given in the previous paragraph, with S 𝜖 = ∪z ∈ S[z − 𝜖, z + 𝜖], we just need to prove that for every s ∈{1, …, p} it holds that \(\lim _{m\rightarrow \infty } |u^{t_m}_{s}(x,y)|=0\) uniformly in \(S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\times \mathbb {R}\), since this implies that the values of the boundary conditions of each local problem converge to zero, and consequently so will the solution of each local problem in the interior of its domain.
Observe that, if \(\lim _{m \rightarrow \infty }|\hat {u}^{t_m}_{s}(x,k)|=0\) and
we have
Thus, in order to prove that \(\lim _{m\rightarrow \infty } |u^{t_m}_{s}(x,y)|=0\) in Ω (s), it suffices to prove that, for every s ∈{1, …, p}, the following three statements hold:
1. \(\lim _{m \rightarrow \infty } |A^{t_m}_{s}(k)|=0\) and \(\lim _{m\rightarrow \infty } |B^{t_m}_{s}(k)|=0\).
2. \(\lim _{m \rightarrow \infty }|\hat {u}^{t_m}_{s}(x,k)|=0\) for all \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) and \(k\in \mathbb {R}\).
3. For all \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) and \(y\in \mathbb {R}\), (8) holds.
Item 3 means, in other words, that if \(|\hat {u}^{t_{m}}_{s}(x,.)|\) goes to zero as m goes to infinity, so will its integral over \(k\in \mathbb {R}\), and, in turn, the inverse Fourier transform of \( \hat {u}^{t_{m}}_{s}(x,.)\).
Proof of the Theorem
We first prove that ||c(0)||∞ < ∞. For ease of notation, for each subdomain s, let the value of the left artificial boundary condition be \(p_s(k)\) and that of the right artificial boundary condition be \(q_s(k)\). Thus, it follows from the expression (3) that, at \(x=\ell _{s-1}-L\),
and at \(x=\ell _{s}+L\)
From (10) we have \(B^{0}_{s}(k)=q_{s}(k)-A^{0}_{s}(k)e^{-\theta (k)(W+2L)}\). Then, plugging this expression of \(B^{0}_{s}(k)\) into (9) gives
By a similar process we obtain
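For concreteness, under one consistent normalization of (3) (an assumption; the exact sign conventions in [5] may differ), and writing \(d=\theta (k)(W+2L)\), the two coefficients solve to:

```latex
% Hedged reconstruction of the solved 2x2 boundary system, d = \theta(k)(W+2L):
A^{0}_{s}(k) = \frac{p_{s}(k) - q_{s}(k)\, e^{-d}}{1 - e^{-2d}},
\qquad
B^{0}_{s}(k) = \frac{q_{s}(k) - p_{s}(k)\, e^{-d}}{1 - e^{-2d}} .
```

Note that this form is consistent with the relation \(B^{0}_{s}(k)=q_{s}(k)-A^{0}_{s}(k)e^{-\theta (k)(W+2L)}\) stated above, and that the denominator is bounded away from zero since η > 0.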
Let p ∗(k) and q ∗(k) be such that
Then, we have that
By hypothesis, \(\hat {u}^{0}_{s}\) is uniformly bounded in \(k\in \mathbb {R}\) and x ∈ S. Thus, there exists a number M > 0 such that \(|\hat {u}^{0}_{s}(x,k)|\leq M\) for any \(k\in \mathbb {R}\), x ∈ S, and s ∈ {1, …, p}. Then, we have that |p s(k)|, |q s(k)|≤ M for any \(k\in \mathbb {R}\) and s ∈ {1, …, p}. Then, necessarily, |p ∗(k)|, |q ∗(k)|≤ M, and consequently ||c(0)||∞ < ∞.
Let \(\{t_m\}\) be a monotonically increasing sequence of time stamps. As mentioned previously, in [5] it is proven that \(||\hat {T}(k)c(k)||{ }_{\infty }\leq \rho ||c(k)||{ }_{\infty }\), with ρ < 1. This implies that after one application of a local operator to an arbitrary vector \(c_{\mathrm {old}}(k)\) we have
and after all processes have updated their values at least once, say at time stamp t ∗, we have \(|A^{t_{*}}_{s}|, |B^{t_{*}}_{s}|\leq \rho ||c(0)||{ }_{\infty }\). This implies, in turn, that given a monotonically increasing sequence \(\{t_{m}\}_{m\in \mathbb {N}}\), at time t m we have
where, for each s ∈{1, …, p}, \(\phi _{s}:\mathbb {N} \rightarrow \mathbb {N}\) is such that \(\phi _s(m)\rightarrow \infty \) as m →∞. Then,
Similarly, \(\lim _{m\rightarrow \infty }|B^{t_{m}}_{s}(k)|=0\), and therefore
To complete the proof, we need to show that (8) holds for \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) and \(y\in \mathbb {R}\). We show now that, for all \(m\in \mathbb {N}\), \(|\hat {u}^{t_{m}}_{s}(x,.)|\) is bounded by an \(L^{1}(\mathbb {R})\) function. To that end, we have that,
Let
Thus, we have \(|\hat {u}^{t_{m}}_{s}(x,k)|\leq g(x,k)\) for any \(m\in \mathbb {N}\). We show next that \(g(x,.)\in L^{1}(\mathbb {R})\). Since |p ∗(k)|, |q ∗(k)|≤ M, we have
Thus,
Note that for \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) we have \(|x-(\ell _{s-1}-L)|, |x-(\ell _{s}+L)|\geq 2L-\epsilon \). Then, plugging these inequalities into (14), we obtain
i.e., \(g(x,.)\in L^{1}(\mathbb {R})\). Consequently, for any \(x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) there exists a \(g(x,.)\in L^{1}(\mathbb {R})\) such that \(|\hat {u}^{t_{m}}_{s}(x,k)|\leq g(x,k)\) for all \(m\in \mathbb {N}\), and by the Lebesgue Dominated Convergence Theorem we have then that (8) holds.
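The integrability step uses only the fact that θ(k) grows without bound in |k|; for instance, for \(\theta (k)=\sqrt {\eta +k^2}\) (so that θ(k) ≥|k|) one has:

```latex
% Dominating the tail: with \theta(k) = \sqrt{\eta + k^2} \ge |k|,
\int_{\mathbb{R}} e^{-\theta(k)(2L-\epsilon)}\, dk
  \;\le\; \int_{\mathbb{R}} e^{-|k|(2L-\epsilon)}\, dk
  \;=\; \frac{2}{\,2L-\epsilon\,} \;<\; \infty .
```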
The above argument was for s = 2, …, p − 1. Using the same argument but with \(A^{t_{m}}_{1}=0\) and −∞ instead of l s−1 − L, we can see that (8) holds for s = 1; and, using the same argument but with \(B^{t_{m}}_{p}=0\) and ∞ instead of l s + L, it can be shown that (8) holds for s = p.
Thus, from (11), (12), (15) we have \(\forall x\in S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\) and \(y\in \mathbb {R}\) that
Consequently, \(u^{t_{m}}_{s}\rightarrow 0\) uniformly in \(S_{\epsilon }\cap \left [\ell _{s-1},\ell _{s}\right ]\times \mathbb {R}\) as m →∞. Then, as explained in the outline of the proof, the values of the boundary conditions of each local problem go to zero as m goes to infinity, and therefore ∀s ∈{1, …, p} we have \(u^{t_{m}}_{s}\rightarrow 0\) in Ω (s) as m →∞. Given that the sequence of time stamps was arbitrary, the theorem is proven. □
Remark 1
Note that the condition that \(\hat {u}^{0}_{s}(x,k)\) be uniformly bounded in \(k\in \mathbb {R}\) and x ∈ S can be weakened to the condition that p ∗ and q ∗ be such that \(g(x,.)\in L^{1}(\mathbb {R})\).
Remark 2
Note that, for synchronous and asynchronous iterations, for a given t m, the value of \(\phi _s(t_m)\) is, in general, different for each s, but they have a common lower bound, i.e., \(\phi _s(t_m)\geq n_{\min }\), where \(n_{\min }=\min _{s\in \{1,\ldots ,p\}}\{n_s\}\) and \(n_s\) is the local update number of process s. Also, for any s, the value of \(\phi _s(t_m)\) can be much larger than \(n_s\). For the synchronous case all the local update numbers are equal to the global iteration number, therefore, \(n_{\min }\) is just the (global) iteration number.
3 Conclusion
In [5], it was shown that the operator \(\hat T\) mapping the coefficients of the Fourier transform of the error at one iteration to those at the next iteration is contracting in the max norm. In this paper, we use this result to complete a proof that, for the operator Δ − η, the asynchronous optimized Schwarz method converges for any initial approximation u 0 that gives an initial error with Fourier transform (along the y direction) uniformly bounded on each of the artificial interfaces.
References
V. Dolean, P. Jolivet, F. Nataf, An Introduction to Domain Decomposition Methods: Algorithms, Theory, and Parallel Implementation (SIAM, Philadelphia, 2015)
A. Frommer, D.B. Szyld, On asynchronous iterations. J. Comput. Appl. Math. 123, 201–216 (2000)
M.J. Gander, Optimized Schwarz methods. SIAM J. Numer. Anal. 44, 699–731 (2006)
F. Magoulès, A.-K.C. Ahamed, R. Putanowicz, Optimized Schwarz method without overlap for the gravitational potential equation on cluster of graphics processing unit. Int. J. Comput. Math. 93, 955–980 (2016)
F. Magoulès, D.B. Szyld, C. Venet, Asynchronous optimized Schwarz methods with and without overlap. Numer. Math. 137, 199–227 (2017)
Acknowledgements
J. C. Garay was supported in part by the U.S. Department of Energy under grant DE-SC0016578. D. B. Szyld was supported in part by the U.S. National Science Foundation under grant DMS-1418882 and the U.S. Department of Energy under grant DE-SC0016578.
© 2018 Springer International Publishing AG, part of Springer Nature
Garay, J.C., Magoulès, F., Szyld, D.B. (2018). Convergence of Asynchronous Optimized Schwarz Methods in the Plane. In: Bjørstad, P., et al. Domain Decomposition Methods in Science and Engineering XXIV . DD 2017. Lecture Notes in Computational Science and Engineering, vol 125. Springer, Cham. https://doi.org/10.1007/978-3-319-93873-8_31