Compiling for a Hybrid Programming Model Using the LMAD Representation

  • Jiajing Zhu
  • Jay Hoeflinger
  • David Padua
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2624)

Abstract

There are two typical ways for a compiler to generate parallel code for distributed-memory multiprocessors: generating explicit message-passing code, or generating code for a distributed shared memory software layer. In this paper, we propose a new compiler design that combines message passing and distributed shared memory within a single program, choosing between them according to how the data is accessed. Our compiler uses the Linear Memory Access Descriptor (LMAD) to represent data distribution and data accesses; the LMAD can represent complex distribution and access patterns accurately. We show how LMADs may be used to generate message-passing operations. Experimental results indicate that our technique is useful for programs with both regular and irregular access patterns.
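
To make the representation concrete, here is a minimal C sketch of an LMAD as a base offset plus a (stride, span) pair per dimension, together with a walker that enumerates the offsets the descriptor denotes. The struct layout and the names Lmad and lmad_visit are illustrative assumptions, not the paper's actual interface; in a compiler, the enumerated region would feed message packing for the explicit sends of the hybrid model.

    /* A minimal sketch of an LMAD: a base offset plus one
     * (stride, span) pair per dimension.  Field and function names
     * are illustrative, not the paper's exact interface. */
    #include <stdio.h>

    #define MAX_DIMS 4

    typedef struct {
        long base;              /* starting offset of the access region  */
        int  ndims;             /* number of dimensions                  */
        long stride[MAX_DIMS];  /* distance between consecutive accesses */
        long count[MAX_DIMS];   /* iterations per dimension; the span is
                                   span_d = (count_d - 1) * stride_d     */
    } Lmad;

    /* Enumerate every element offset the descriptor denotes.  A compiler
     * could walk the region this way to pack a contiguous message buffer
     * before issuing an explicit send (e.g., via MPI) for the
     * message-passing half of the hybrid model. */
    static void lmad_visit(const Lmad *d, int dim, long offset)
    {
        if (dim == d->ndims) {
            printf("%ld ", offset);
            return;
        }
        for (long i = 0; i < d->count[dim]; i++)
            lmad_visit(d, dim + 1, offset + i * d->stride[dim]);
    }

    int main(void)
    {
        /* The accesses of
         *   for (i = 0; i < 3; i++)
         *     for (j = 0; j < 4; j++)
         *       ... A[10 + 20*i + j] ...
         * i.e., strides [20,1], spans [40,3], base 10 in the
         * stride/span notation used for LMADs. */
        Lmad a = { .base = 10, .ndims = 2,
                   .stride = { 20, 1 }, .count = { 3, 4 } };
        lmad_visit(&a, 0, a.base);
        printf("\n");
        return 0;
    }

Running the example prints the twelve offsets 10..13, 30..33, and 50..53, exactly the elements a message for that access region would have to carry.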

Keywords

Shared Memory, Message Passing, Access Pattern, Parallel Loop, Distributed Shared Memory


Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Jiajing Zhu (1)
  • Jay Hoeflinger (2)
  • David Padua (1)
  1. University of Illinois at Urbana-Champaign, Urbana
  2. Intel Corporation, Champaign
