Supporting Realistic OpenMP Applications on a Commodity Cluster of Workstations

  • Seung Jai Min
  • Ayon Basumallik
  • Rudolf Eigenmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2716)


In this paper, we present techniques for translating and optimizing realistic OpenMP applications on distributed systems. The goal of our project is to quantify the degree to which OpenMP can be extended to distributed systems and to develop the supporting compiler techniques. Our present compiler techniques translate OpenMP programs into a form suitable for execution on a Software DSM system. We have implemented a compiler that performs this basic translation, and we have proposed optimization techniques that improve the baseline performance of OpenMP applications on distributed computer systems. Our results show that, while kernel benchmarks can achieve high efficiency for OpenMP programs on distributed systems, full applications require careful consideration of shared-data access patterns. A naive translation (similar to the basic translation performed by OpenMP compilers for SMPs) yields acceptable performance in very few applications. We propose optimizations such as computation repartitioning, page-aware optimizations, and access privatization, which together yield an average performance improvement of 70% on the SPEC OMPM2001 benchmark applications.


Keywords: OpenMP applications, Software Distributed Shared Memory, benchmarks, performance characteristics, optimizations





Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Seung Jai Min (1)
  • Ayon Basumallik (1)
  • Rudolf Eigenmann (1)

  1. School of Electrical and Computer Engineering, Purdue University, West Lafayette
