Implementing OpenMP for Clusters on Top of MPI

  • Antonio J. Dorta
  • José M. Badía
  • Enrique S. Quintana
  • Francisco de Sande
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3666)


llc is a language designed to extend OpenMP to distributed memory systems. We present work in progress on a compiler that translates llc code and targets distributed memory platforms. Our approach generates the communication code directly on top of MPI. We report computational results for two benchmark applications on a PC-cluster platform. The results show that the llc-compiled versions achieve performance comparable to ad-hoc MPI implementations, even for applications with fine-grain parallelism.


Keywords: MPI · OpenMP · cluster computing · distributed memory · OpenMP compiler




References

  1. Message Passing Interface Forum: MPI: A Message-Passing Interface Standard. University of Tennessee, Knoxville, TN (1995)
  2. OpenMP Architecture Review Board: OpenMP Application Program Interface v. 2.5 (May 2005)
  3. Min, S.-J., Basumallik, A., Eigenmann, R.: Supporting realistic OpenMP applications on a commodity cluster of workstations. In: Voss, M.J. (ed.) WOMPAT 2003. LNCS, vol. 2716, pp. 170–179. Springer, Heidelberg (2003)
  4. Sato, M., Harada, H., Hasegawa, A.: Cluster-enabled OpenMP: An OpenMP compiler for the SCASH software distributed shared memory system. Scientific Programming, Special Issue: OpenMP 9(2–3), 123–130 (2001)
  5. Hu, Y.C., Lu, H., Cox, A.L., Zwaenepoel, W.: OpenMP for networks of SMPs. Journal of Parallel and Distributed Computing 60(12), 1512–1530 (2000)
  6. Huang, L., Chapman, B., Liu, Z.: Towards a more efficient implementation of OpenMP for clusters via translation to global arrays. Tech. Rep. UH-CS-04-05, Department of Computer Science, University of Houston (December 2004)
  7. Yonezawa, N., Wada, K., Ogura, T.: Quaver: OpenMP compiler for clusters based on array section descriptor. In: Proc. of the 23rd IASTED International Multi-Conference on Parallel and Distributed Computing and Networks, pp. 234–239. IASTED/Acta Press, Innsbruck (2005)
  8. Dorta, A.J., González, J.A., Rodríguez, C., de Sande, F.: llc: A parallel skeletal language. Parallel Processing Letters 13(3), 437–448 (2003)
  9. Bailey, D.H., et al.: The NAS parallel benchmarks. Technical Report RNR-94-007, NASA Ames Research Center, Moffett Field, CA, USA (October 1994)
  10. Gropp, W., Lusk, E., Doss, N., Skjellum, A.: A high-performance, portable implementation of the message passing interface standard. Parallel Computing 22(6), 789–828 (1996)
  11. Swope, W.C., Andersen, H.C., Berens, P.H., Wilson, K.R.: A computer simulation method for the calculation of equilibrium constants for the formation of physical clusters of molecules: Application to small water clusters. Journal of Chemical Physics 76, 637–649 (1982)
  12. OmpSCR: The OpenMP Source Code Repository
  13. Dorta, A.J., González-Escribano, A., Rodríguez, C., de Sande, F.: The OpenMP source code repository. In: Proc. of the 13th Euromicro Conference on Parallel, Distributed and Network-based Processing (PDP 2005), Lugano, Switzerland, pp. 244–250 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Antonio J. Dorta (1)
  • José M. Badía (2)
  • Enrique S. Quintana (2)
  • Francisco de Sande (1)

  1. Depto. de Estadística, Investigación Operativa y Computación, Universidad de La Laguna, La Laguna, Spain
  2. Depto. de Ingeniería y Ciencia de Computadores, Universidad Jaume I, Castellón, Spain
