Comparative Analysis of High Performance Solvers for 3D Elasticity Problems
We consider the numerical solution of the equations of 3D linear elasticity. The problem is described by a coupled system of second-order elliptic partial differential equations, which is discretized by conforming or nonconforming finite elements. The Finite Element Method (FEM) discretization yields a system of linear algebraic equations whose stiffness matrix is large, sparse, and symmetric positive definite. For large-scale systems with sparse symmetric positive definite matrices, the preconditioned conjugate gradient (PCG) method is a well-established and efficient solver, and we adopt it here. To construct a preconditioner, the displacement decomposition (DD) technique is first applied to obtain a decoupled block-diagonal approximation of the original matrix. Two preconditioners are then applied to the resulting block-diagonal matrix: the Modified Incomplete Cholesky factorization MIC(0) and the Circulant Block-Factorization (CBF) preconditioner.
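The DD-preconditioned PCG iteration outlined above can be sketched as follows. This is a minimal illustration, not the paper's implementation: small random SPD blocks stand in for the FEM stiffness blocks of the three displacement components, and exact block solves stand in for the MIC(0)/CBF approximate factorizations; all sizes and names are hypothetical.

```python
import numpy as np

def pcg(A, b, apply_prec, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradient for an SPD matrix A.
    apply_prec(r) applies the preconditioner solve to a residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Stand-in problem: three displacement components, weakly coupled blocks
# (in the paper these blocks come from the FEM stiffness matrix).
rng = np.random.default_rng(1)
n = 15  # unknowns per displacement component (hypothetical size)

def spd_block(n):
    Q = rng.standard_normal((n, n))
    return Q @ Q.T + n * np.eye(n)  # SPD by construction

blocks = [spd_block(n) for _ in range(3)]
B = np.zeros((3 * n, 3 * n))
for k, Ak in enumerate(blocks):
    B[k * n:(k + 1) * n, k * n:(k + 1) * n] = Ak
S = rng.standard_normal((3 * n, 3 * n))
A = B + 0.05 * (S + S.T)  # weak symmetric coupling keeps A SPD here

# DD-style preconditioner: decoupled solves with the diagonal blocks only
def dd_prec(r):
    return np.concatenate([np.linalg.solve(Ak, r[k * n:(k + 1) * n])
                           for k, Ak in enumerate(blocks)])

b = np.ones(3 * n)
x, iters = pcg(A, b, dd_prec)
```

Because the coupling between displacement components is discarded, the preconditioned system is close to the identity and PCG converges in far fewer iterations than the unpreconditioned method; the paper replaces the exact block solves with cheaper MIC(0) or CBF approximations.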
Concerning the parallel implementation of the proposed solution methods, we use the Message Passing Interface (MPI) communication library. The aim of our work is to compare the performance of the two proposed preconditioners, DD MIC(0) and DD CBF. The presented comparative analysis is based on the execution times of actual codes run on modern parallel computers. The numerical tests performed demonstrate the parallel efficiency and robustness of the proposed algorithms. Furthermore, we discuss the iteration counts obtained with each of the two preconditioners.
Keywords: Message Passing Interface; Elasticity Problem; Preconditioned Conjugate Gradient Method; Linear Elasticity Problem; Iterative Solution Method