
Developing a Data-Parallel Application with DaParT

  • Cevat Şener
  • Yakup Paker
  • Ayşe Kiper
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2328)

Abstract

To simplify the task of developing parallel programs based on the data-parallel paradigm, this article presents a data-parallel programming tool: DaParT. With it, sequential solutions written by non-expert parallel programmers can be run in data-parallel fashion. The tool has been implemented on Helios, MPL, PVM, and MPI, and is portable to any parallel environment that supports the basic message-passing primitives. Its usage is illustrated with an example: a simulation of the Hopfield net. Finally, the collected benchmark results are discussed.
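The Hopfield-net example lends itself naturally to data-parallel execution: the weight matrix can be partitioned by rows, each worker computes its slice of the synchronous state update, and the slices are gathered into the new global state. The sketch below illustrates that decomposition only; it does not use DaParT's actual API (which is not shown in this abstract), and the workers and the gather phase are simulated sequentially instead of with real message passing.

```python
import numpy as np

def train_hebbian(patterns):
    """Build the Hopfield weight matrix with the Hebbian rule (zero diagonal)."""
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0.0)
    return w

def parallel_step(w, state, nworkers):
    """One synchronous update: each 'worker' owns a block of rows of w,
    computes its slice of the new state, and the slices are gathered."""
    n = len(state)
    bounds = np.linspace(0, n, nworkers + 1, dtype=int)  # row partition
    slices = []
    for k in range(nworkers):                 # simulated worker k
        lo, hi = bounds[k], bounds[k + 1]
        local = np.sign(w[lo:hi] @ state)     # worker k's partial update
        local[local == 0] = 1                 # break ties toward +1
        slices.append(local)
    return np.concatenate(slices)             # simulated gather phase

pattern = np.array([1, -1, 1, -1, 1, -1])
w = train_hebbian(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]                          # corrupt one bit
recalled = parallel_step(w, noisy, nworkers=3)
print(recalled.tolist())                      # recovers the stored pattern
```

Because each block of rows depends only on the full previous state, the same partitioning maps directly onto message-passing workers: broadcast the state, compute local slices, gather the results.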

Keywords

Message Passing Interface, Main Program, Sequential Solution, Benchmark Result, Parallel Environment



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Cevat Şener (1)
  • Yakup Paker (2)
  • Ayşe Kiper (1)
  1. Department of Computer Engineering, Middle East Technical University, Ankara, Turkey
  2. Queen Mary and Westfield College, University of London, London, England
