Controlling Distributed Shared Memory Consistency from High Level Programming Languages

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1800)

Abstract

One of the keys to the success of parallel processing is the availability of high-level programming languages for off-the-shelf parallel architectures. Explicit message-passing models allow efficient execution, but programming directly against these execution models forfeits the benefits of high-level programming in terms of software productivity and portability. HPF avoids the need for explicit message passing but still suffers from low performance when data accesses cannot be predicted with enough precision at compile time. OpenMP, by contrast, is defined on a shared-memory model. Distributed shared memory (DSM) has been shown to support high-level programming well in terms of productivity and debugging, but the cost of managing the consistency of the distributed memories limits performance. In this paper, we show that it is possible to control the consistency constraints of a DSM from compile-time analysis of the programs and thus to increase the efficiency of this execution model.
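
To make the idea concrete, the following minimal C sketch illustrates the kind of transformation such compile-time analysis enables. It is an illustration only, not code from the paper: the dsm_will_write and dsm_done_write runtime hooks are hypothetical names standing in for whatever consistency-control interface a DSM might expose. When the compiler can prove that each node writes a disjoint block of an array, it can declare that block to the runtime once, rather than letting the DSM discover the access pattern one page fault at a time.

    #include <stddef.h>

    /* Hypothetical DSM runtime hooks (invented for this sketch):
     * declare in advance which byte range a node will write, so the
     * runtime can fetch/flush the region in bulk instead of
     * detecting accesses through page faults. */
    void dsm_will_write(void *addr, size_t len);
    void dsm_done_write(void *addr, size_t len);

    /* Scale a block-distributed array: node `me` of `nprocs` owns one
     * contiguous chunk, a fact the compiler derives statically from
     * the loop bounds. */
    void scale(double *a, size_t n, double k, int me, int nprocs)
    {
        size_t chunk = (n + (size_t)nprocs - 1) / (size_t)nprocs;
        size_t lo = (size_t)me * chunk;
        size_t hi = lo + chunk < n ? lo + chunk : n;
        if (lo >= n)
            return;

        /* One bulk consistency declaration per node, emitted by the
         * compiler, replaces per-page fault handling inside the loop. */
        dsm_will_write(&a[lo], (hi - lo) * sizeof(double));
        for (size_t i = lo; i < hi; i++)
            a[i] *= k;
        dsm_done_write(&a[lo], (hi - lo) * sizeof(double));
    }

    /* Stub implementations so the sketch builds standalone; a real
     * DSM runtime would manage page state here. */
    void dsm_will_write(void *addr, size_t len) { (void)addr; (void)len; }
    void dsm_done_write(void *addr, size_t len) { (void)addr; (void)len; }

The point of the contrast: the runtime learns the whole write set up front, so no fault-driven invalidation or copy/diff machinery needs to run inside the loop.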

Keywords

Shared Memory · Parallel Loop · Page Fault · Distributed Shared Memory · Shared Memory System

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Yvon Jégou
  1. IRISA / INRIA, Rennes Cedex, France
