Parallelizing a high resolution operational ocean model

  • Josef Schüle
  • Tomas Wilhelmsson
Track C1: (Industrial) End-user Applications of HPCN
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1593)

Abstract

The Swedish Meteorological and Hydrological Institute (SMHI) makes daily forecasts of temperature, salinity, water level, and ice conditions in the Baltic Sea. These forecasts are based on data from a High Resolution Operational Model for the Baltic (HIROMB). This application has been parallelized and ported from a CRAY C90 to a CRAY T3E.

Our parallelization strategy is based on a subdivision of the computational grid into a set of smaller rectangular grid blocks, which are distributed among the processors. The model runs at three grid resolutions, where each coarser grid produces boundary values for the next finer one. The linear equation systems for water level and ice dynamics are solved with a distributed multi-frontal solver.
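The block decomposition and distribution described above can be illustrated with a small sketch. The Python code below is a hypothetical illustration, not the HIROMB implementation: it cuts a sea/land mask into fixed-size rectangular blocks, discards blocks that contain only land, and assigns the remaining blocks to processors greedily by their number of wet points. All function and variable names are invented for the example.

```python
import numpy as np

def decompose(mask, block_shape):
    """Split a sea/land mask into rectangular blocks and record
    the number of wet (sea) points in each block."""
    ny, nx = mask.shape
    by, bx = block_shape
    blocks = []
    for j0 in range(0, ny, by):
        for i0 in range(0, nx, bx):
            sub = mask[j0:j0 + by, i0:i0 + bx]
            wet = int(sub.sum())
            if wet > 0:                      # skip blocks that are all land
                blocks.append(((j0, i0), sub.shape, wet))
    return blocks

def assign(blocks, nproc):
    """Greedy load balancing: give each block, largest workload first,
    to the processor with the least accumulated wet-point work."""
    load = [0] * nproc
    owner = {}
    for origin, shape, wet in sorted(blocks, key=lambda b: -b[2]):
        p = load.index(min(load))
        owner[origin] = p
        load[p] += wet
    return owner, load

# Toy example: a 60 x 90 basin where roughly half the points are sea.
rng = np.random.default_rng(0)
mask = rng.random((60, 90)) < 0.5
blocks = decompose(mask, (15, 15))
owner, load = assign(blocks, nproc=5)
print(len(blocks), "active blocks; per-processor wet-point load:", load)
```

In a message-passing setting, each block would additionally carry a halo of ghost points that is exchanged with neighbouring blocks every time step; that part is omitted from this sketch.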

We find that production of HIROMB forecasts can successfully be moved from the C90 to the T3E while increasing the resolution from 3 to 1 nautical mile. Five T3E processors are 2.2 times faster than a single C90 vector processor, although speedup and load balance still leave room for improvement.

Keywords

Message Passing Interface, Parallelization Strategy, Separator Level, Elimination Tree, Ghost Point

Copyright information

© Springer-Verlag 1999

Authors and Affiliations

  • Josef Schüle, Institute for Scientific Computing, Technical University Braunschweig, Braunschweig, Germany
  • Tomas Wilhelmsson, Department of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden
