
The Performance of Different Communication Mechanisms and Algorithms Used for Parallelization of Molecular Dynamics Code

  • Rafał Metkowski
  • Piotr Bała
  • Terry Clark
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2328)

Abstract

Communication performance appears to have the greatest influence on the parallelization efficiency of large scientific applications. Different communication algorithms and communication mechanisms were used in the parallelization of a molecular dynamics (MD) code. It is shown that with fast communication hardware a well-scaling algorithm must be used. The presented data show that the MD code can also run efficiently on a Pentium cluster, provided that a low-latency communication mechanism is used.
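The paper's code is not reproduced here, but a minimal C/MPI sketch may clarify the communication pattern the abstract refers to. It assumes a replicated-data decomposition, one common baseline for parallel MD (the paper's own algorithms may differ), with an illustrative system size NATOMS: each time step ends in a global force reduction, so the per-message latency is paid thousands of times over a run.

    /* A minimal sketch (not the authors' implementation) of the
     * replicated-data communication step found in many parallel MD
     * codes: every rank computes a partial force array and a global
     * sum combines the contributions once per time step. NATOMS and
     * the force arrays are illustrative assumptions. */
    #include <mpi.h>
    #include <stdlib.h>

    #define NATOMS 10000   /* hypothetical system size */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Per-rank partial forces and the globally summed result,
         * three Cartesian components per atom. */
        double *f_local = calloc(3 * NATOMS, sizeof(double));
        double *f_total = malloc(3 * NATOMS * sizeof(double));

        /* ... each rank accumulates its share of pair forces
         * into f_local here ... */

        /* One collective reduction per MD time step; over the
         * thousands of steps in a production run, the fixed
         * per-call latency of this operation dominates on
         * commodity clusters. */
        MPI_Allreduce(f_local, f_total, 3 * NATOMS, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);

        free(f_local);
        free(f_total);
        MPI_Finalize();
        return 0;
    }

Under the standard latency-bandwidth cost model, each such step costs roughly alpha + beta * n; a low-latency mechanism shrinks the alpha term, which matches the abstract's observation about commodity clusters, while on fast interconnects the bottleneck shifts to how well the algorithm itself scales.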

Keywords

Molecular dynamics · Communication mechanism · Communication library · Communication algorithm · Molecular dynamics code

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Rafał Metkowski (1)
  • Piotr Bała (1)
  • Terry Clark (2)
  1. Faculty of Mathematics and Computer Science, N. Copernicus University, Toruń, Poland
  2. Department of Computer Science, The University of Chicago and Computation Institute, Chicago, USA
