Large Scale Acoustic Simulations on Clusters of SMPs

  • Luc Giraud
  • Martin B. van Gijzen


Finite element codes are usually parallelized either at a low level, by exploiting fine-grain loop parallelism, or at a much higher level, by exploiting the coarse-grain parallelism of a mesh partitioning in a domain decomposition approach. The advantage of the first technique is its simplicity, in particular if the code already exists and, even better, is already vectorized. This approach is usually preferred if the target machine is a computer with a global address space, on which the cost of communication between computing entities (commonly called threads in this setting) is relatively low. This is in particular the case if all the processors of the target computer physically share the same memory; this type of platform is usually referred to as a Symmetric Multi-Processor (SMP). The second strategy, based on mesh partitioning, is much more involved. Turning a sequential single-domain code into a parallel multi-domain code may require a complete redesign and, at the very least, requires adding new communication subroutines in many places in the existing code. This is, however, a necessary step to exploit parallelism on platforms where the computing entities (commonly called processes in this setting) do not share any address space, as is typically the case on distributed memory computers.
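To make the contrast concrete, the sketch below (not taken from the paper; a minimal illustration assuming a one-dimensional toy stencil with N_LOCAL points per process) combines the two models in the hybrid style suggested by the title: MPI processes own subdomains and exchange halo values with their neighbours (coarse grain), while OpenMP threads share the local update loop within each SMP node (fine grain).

/* Hypothetical hybrid sketch: MPI for the coarse-grain subdomain level,
   OpenMP for the fine-grain loop level. Compile with, e.g.:
   mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N_LOCAL 1000000   /* interior points owned by this process (assumed size) */

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* Request thread support: each process spawns OpenMP threads internally. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *u     = malloc((N_LOCAL + 2) * sizeof(double));  /* +2 halo cells */
    double *u_new = malloc((N_LOCAL + 2) * sizeof(double));
    for (int i = 0; i < N_LOCAL + 2; ++i) u[i] = (double)rank;

    /* Neighbours in a 1-D process chain; MPI_PROC_NULL makes the
       boundary exchanges harmless no-ops. */
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int step = 0; step < 10; ++step) {
        /* Coarse grain: exchange subdomain boundary values between processes. */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[N_LOCAL + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[N_LOCAL], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Fine grain: threads share the local update loop inside one SMP node. */
        #pragma omp parallel for
        for (int i = 1; i <= N_LOCAL; ++i)
            u_new[i] = 0.5 * (u[i - 1] + u[i + 1]);   /* toy stencil update */

        double *tmp = u; u = u_new; u_new = tmp;      /* swap time levels */
    }

    if (rank == 0)
        printf("done: %d processes x %d threads\n", size, omp_get_max_threads());
    free(u); free(u_new);
    MPI_Finalize();
    return 0;
}

MPI_THREAD_FUNNELED suffices here because all MPI calls are made outside the threaded region; a single-domain loop-parallel code would keep only the OpenMP pragma, while a pure distributed memory code would keep only the halo exchange.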


Keywords: Domain Decomposition, Finite Element Code, Cache Memory, Explicit Time Integration, Ocean Acoustics





Copyright information

© Springer Science+Business Media New York 2004
