OpenMP Runtime Support for Clusters of Multiprocessors

  • Panagiotis E. Hadjidoukas
  • Eleftherios D. Polychronopoulos
  • Theodore S. Papatheodorou
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2716)

Abstract

This paper presents a prototype runtime system, providing backend support for the NANOS OpenMP compiler, that enables the execution of unmodified OpenMP Fortran programs on both SMPs and clusters of multiprocessors, either through the hybrid programming model (MPI+OpenMP) or directly on top of Software Distributed Shared Memory (SDSM). The latter is made feasible by adopting a share-everything approach for the code generated by the OpenMP compiler, which corresponds to the “default shared” philosophy of OpenMP. Specifically, the user-level thread stacks and the Fortran common blocks are allocated explicitly, though transparently to the programmer, in shared memory. The management of the internal runtime structures and of the fork-join multilevel parallelism is based on explicit communication, while still exploiting the shared-memory hardware of the available SMP nodes whenever possible. The modular design of the runtime system allows the integration of existing, unmodified SDSM libraries, despite their design for SPMD execution.
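The following minimal C sketch illustrates the share-everything idea described above; it is not the NANOS runtime interface, and the names sdsm_alloc, common_block_t, STACK_SIZE, and NTHREADS are hypothetical. A single shared region holds a Fortran-style common block followed by the user-level thread stacks, so data on any stack or in the common block is visible to all threads, matching OpenMP's default-shared semantics. In the actual runtime the region would be served by the SDSM library across nodes; here it is emulated with an anonymous shared mmap on one node.

    /* Sketch only: share-everything layout for stacks and a common block.
     * sdsm_alloc() is a hypothetical stand-in for an SDSM allocator. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stddef.h>
    #include <sys/mman.h>

    #define STACK_SIZE (64 * 1024)   /* per user-level thread stack */
    #define NTHREADS   4

    /* Stand-in for an SDSM shared-memory allocator. */
    static void *sdsm_alloc(size_t bytes) {
        return mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    }

    typedef struct {                  /* analogue of a Fortran COMMON block */
        double data[1024];
    } common_block_t;

    int main(void) {
        /* One shared region: common block first, then all thread stacks. */
        size_t total = sizeof(common_block_t) + (size_t)NTHREADS * STACK_SIZE;
        char *region = sdsm_alloc(total);
        if (region == MAP_FAILED)
            return 1;

        common_block_t *common = (common_block_t *)region;
        char *stacks = region + sizeof(common_block_t);

        /* Each user-level thread would be created with its stack carved out
         * of the shared region: stack i starts at stacks + i * STACK_SIZE. */
        for (int i = 0; i < NTHREADS; i++)
            printf("thread %d stack base: %p\n", i,
                   (void *)(stacks + (size_t)i * STACK_SIZE));

        common->data[0] = 42.0;   /* visible to every thread: "default shared" */
        return 0;
    }

Because both the stacks and the common block live in the shared region, variables declared in a common block or kept on a thread's stack need no per-variable annotation to be shared, which is why unmodified compiler-generated code can run on top of SDSM.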

Keywords

Shared Memory, Runtime System, Common Block, OpenMP Directive, Distributed Memory Machine

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Panagiotis E. Hadjidoukas (1)
  • Eleftherios D. Polychronopoulos (1)
  • Theodore S. Papatheodorou (1)
  1. High Performance Information Systems Laboratory (HPCLAB), Department of Computer Engineering and Informatics, University of Patras, Rio, Patras, Greece