Multi-protocol Communications and High Speed Networks

  • Benoît Planquelle
  • Jean-François Méhaut
  • Nathalie Revol
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1685)


The heterogeneity of modern computing systems raises methodological and technological problems for the development of parallel applications. Heterogeneity occurs, for instance, at the architecture level, when processing units use different data representations or exhibit different performance characteristics. At the interconnection level, heterogeneity appears in the programming interfaces and in communication performance. In this paper, we focus on problems related to heterogeneity in the context of interconnected clusters of workstations. To help the programmer overcome these problems, we propose an extension to the multithreaded programming environment PM2 that facilitates the development of efficient parallel applications on such systems.


Parallel Application · Message Size · Data Conversion · High Speed Network · Message Copy



Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Benoît Planquelle (1)
  • Jean-François Méhaut (2)
  • Nathalie Revol (3)
  1. Université des Sciences et Technologies de Lille, LIFL, France
  2. École Normale Supérieure de Lyon, LIP, France
  3. Université des Sciences et Technologies de Lille, ANO, France
