Predicting MPI Buffer Addresses

  • Felix Freitag
  • Montse Farreras
  • Toni Cortes
  • Jesus Labarta
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3036)

Abstract

Communication latencies have been identified as one of the performance limiting factors of message passing applications in clusters of workstations/multiprocessors. On the receiver side, message-copying operations contribute to these communication latencies. Recently, prediction of MPI messages has been proposed as part of the design of a zero message-copying mechanism. Until now, prediction was only evaluated for the next message. Predicting only the next message, however, may not be enough for real implementations, since messages do not arrive in the same order as they are requested. In this paper, we explore long-term prediction of MPI messages for the design of a zero message-copying mechanism. To achieve long-term prediction we evaluate two prediction schemes, the first based on graphs, and the second based on periodicity detection. Our experiments indicate that with both prediction schemes the buffer addresses and message sizes of several future MPI messages (up to +10) can be predicted successfully.
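The periodicity-based scheme can be illustrated with a minimal sketch (our own assumption for illustration, not the authors' implementation): if the sequence of receive-buffer addresses observed so far is periodic with period `p`, then the address of the message `d` steps ahead is simply the value at the matching phase of the cycle, which is what enables prediction beyond just the next message.

```python
def detect_period(history, max_period=None):
    """Return the smallest period p such that history[i] == history[i - p]
    for every i >= p, or None if no such period exists."""
    n = len(history)
    max_period = max_period or n // 2
    for p in range(1, max_period + 1):
        if all(history[i] == history[i - p] for i in range(p, n)):
            return p
    return None

def predict_ahead(history, distance):
    """Predict the buffer address 'distance' messages into the future
    (+1 = next message, +10 = ten messages ahead), assuming the
    observed sequence is periodic from its start."""
    p = detect_period(history)
    if p is None:
        return None  # no periodicity found; fall back to no prediction
    # history[i] == history[i % p], so extrapolate the cycle forward
    return history[(len(history) + distance - 1) % p]

# Example: three receive buffers reused in a fixed rotation
addresses = [0x1000, 0x2000, 0x3000] * 4
print(hex(predict_ahead(addresses, 1)))   # next message
print(hex(predict_ahead(addresses, 10)))  # ten messages ahead
```

The same mechanism applies to message sizes; predicting several steps ahead (rather than only +1) is what tolerates messages arriving out of request order.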

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Felix Freitag¹
  • Montse Farreras¹
  • Toni Cortes¹
  • Jesus Labarta¹

  1. Computer Architecture Department (DAC), European Center for Parallelism of Barcelona (CEPBA), Polytechnic University of Catalonia (UPC)