Some Aspects of Message-Passing on Future Hybrid Systems (Extended Abstract)

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5205)

Abstract

In the future, most systems in high-performance computing (HPC) will have a hierarchical hardware design, e.g., a cluster of ccNUMA or shared-memory nodes, each node containing several multi-core CPUs. Parallel programming must combine distributed-memory parallelization across the node interconnect with shared-memory parallelization inside each node. Many mismatch problems arise between this hybrid hardware topology and the hybrid or homogeneous parallel programming models used on such hardware. Hybrid programming with a combination of MPI and OpenMP is often slower than pure MPI programming. Major opportunities arise from the load-balancing features of OpenMP and from a smaller memory footprint when the application duplicates some data on all MPI processes [1,2,3].


References

  1. Rabenseifner, R., Hager, G., Jost, G., Keller, R.: Hybrid MPI and OpenMP Parallel Programming. Half-day Tutorial No. S-10 at Supercomputing 2007 (SC 2007), Reno, Nevada, USA, November 10-16 (2007)

  2. Rabenseifner, R.: Hybrid Parallel Programming on HPC Platforms. In: Proceedings of the Fifth European Workshop on OpenMP, EWOMP 2003, Aachen, Germany, September 22-26, pp. 185–194 (2003), www.compunity.org

  3. Rabenseifner, R., Wellein, G.: Communication and Optimization Aspects of Parallel Programming Models on Hybrid Architectures. International Journal of High Performance Computing Applications 17(1), 49–62 (2003)

  4. Chapman, B.M., Huang, L., Jin, H., Jost, G., de Supinski, B.R.: Toward Enhancing OpenMP’s Work-Sharing Directives. In: Nagel, W.E., Walter, W.V., Lehner, W. (eds.) Euro-Par 2006. LNCS, vol. 4128, pp. 645–654. Springer, Heidelberg (2006)

  5. MPI-2 Journal of Development. The Message Passing Interface Forum, July 18 (1997), http://www.mpi-forum.org

  6. Co-Array Fortran, http://www.co-array.org/

  7. Unified Parallel C, http://www.gwu.edu/upc/

  8. Coarfa, C., Dotsenko, Y., Mellor-Crummey, J.M., Cantonnet, F., El-Ghazawi, T.A., Mohanti, A., Yao, Y., Chavarria-Miranda, D.G.: An Evaluation of Global Address Space Languages: Co-Array Fortran and Unified Parallel C. In: Proceedings of the 10th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2005), Chicago, Illinois (June 2005)

  9. Chapel: The Cascade High-Productivity Language, http://chapel.cs.washington.edu/

  10. Titanium, http://titanium.cs.berkeley.edu/

  11. Yelick, K.A., Semenzato, L., Pike, G., Miyamoto, C., Liblit, B., Krishnamurthy, A., Hilfinger, P.N., Graham, S.L., Gay, D., Colella, P., Aiken, A.: Titanium: A High-Performance Java Dialect. Concurrency: Practice and Experience 10(11-13) (September-November 1998)

  12. The Experimental Concurrent Programming Language (X10), http://x10.sf.net/

  13. Dongarra, J., Luszczek, P., Petitet, A.: The LINPACK Benchmark: past, present and future. Concurrency and Computation: Practice and Experience 15(9), 803–820 (2003)

  14. Luszczek, P., Dongarra, J.J., Koester, D., Rabenseifner, R., Lucas, B., Kepner, J., McCalpin, J., Bailey, D., Takahashi, D.: Introduction to the HPC Challenge Benchmark Suite (March 2005), http://icl.cs.utk.edu/hpcc/

  15. Saini, S., Ciotti, R., Gunney, B.T.N., Spelce, T.E., Koniges, A., Dossa, D., Adamidis, P., Rabenseifner, R., Tiyyagura, S.R., Mueller, M.: Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks. J. Comput. System Sci. (2007) (to be published) doi:10.1016/j.jcss.2007.07.002 (Special issue on Performance Analysis and Evaluation of Parallel, Cluster, and Grid Computing Systems)

  16. Rabenseifner, R., Koniges, A.E., Prost, J.-P., Hedges, R.: The Parallel Effective I/O Bandwidth Benchmark: b_eff_io. In: Cerin, C., Jin, H. (eds.) Parallel I/O for Cluster Computing, ch. 4, pp. 107–132. Kogan Page Ltd. (February 2004)

  17. Saini, S., Talcott, D., Thakur, R., Adamidis, P., Rabenseifner, R., Ciotti, R.: Parallel I/O Performance Characterization of Columbia and NEC SX-8 Superclusters. In: Proceedings of the IPDPS 2007 Conference, the 21st IEEE International Parallel & Distributed Processing Symposium, Workshop on Performance Modeling, Evaluation, and Optimization of Parallel and Distributed Systems (PMEO-PDS 2007), Long Beach, California, USA, March 26-30 (2007) (to be published)

  18. Message Passing Toolkit (MPT) 3.0 Man Pages, http://docs.cray.com/

Editor information

Alexey Lastovetsky, Tahar Kechadi, Jack Dongarra

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rabenseifner, R. (2008). Some Aspects of Message-Passing on Future Hybrid Systems (Extended Abstract). In: Lastovetsky, A., Kechadi, T., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2008. Lecture Notes in Computer Science, vol 5205. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87475-1_6

  • DOI: https://doi.org/10.1007/978-3-540-87475-1_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87474-4

  • Online ISBN: 978-3-540-87475-1

  • eBook Packages: Computer Science (R0)
