Implementing OpenSHMEM Using MPI-3 One-Sided Communication

  • Conference paper
OpenSHMEM and Related Technologies. Experiences, Implementations, and Tools (OpenSHMEM 2014)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 8356)

Abstract

This paper reports the design and implementation of OpenSHMEM over MPI using the new one-sided communication features in MPI-3, which include not only new functions (e.g., remote atomics) but also a new memory model that is consistent with that of SHMEM. We use a new, non-collective MPI communicator creation routine to allow SHMEM collectives to use their MPI counterparts. Finally, we leverage MPI shared-memory windows within a node, which allow direct (load-store) access. Performance evaluations are conducted for shared-memory and InfiniBand conduits using microbenchmarks.
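The mapping the abstract describes can be illustrated with a minimal, hypothetical sketch; this is not the authors' implementation. It assumes the symmetric heap is exposed as a single MPI-3 window opened in a passive-target epoch, so that a SHMEM-style put becomes MPI_Put and a quiet becomes MPI_Win_flush_all. The names my_putmem, my_quiet, and HEAP_BYTES are illustrative assumptions, not part of the paper.

/* Hypothetical sketch: a SHMEM-style put and quiet on top of MPI-3 RMA.
 * Every PE exposes one window over its "symmetric heap" and keeps it in a
 * passive-target access epoch for the lifetime of the program. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define HEAP_BYTES 4096

static MPI_Win heap_win;    /* window over the symmetric heap   */
static void   *heap_base;   /* local base of the symmetric heap */

/* SHMEM-like put: copy nbytes from src to offset dest_off on PE pe. */
static void my_putmem(size_t dest_off, const void *src, size_t nbytes, int pe)
{
    MPI_Put(src, (int)nbytes, MPI_BYTE,
            pe, (MPI_Aint)dest_off, (int)nbytes, MPI_BYTE, heap_win);
}

/* SHMEM-like quiet: complete all outstanding puts issued by this PE. */
static void my_quiet(void)
{
    MPI_Win_flush_all(heap_win);
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Allocate the "symmetric heap" and expose it as an RMA window. */
    MPI_Win_allocate(HEAP_BYTES, 1, MPI_INFO_NULL, MPI_COMM_WORLD,
                     &heap_base, &heap_win);
    memset(heap_base, 0, HEAP_BYTES);
    MPI_Win_lock_all(MPI_MODE_NOCHECK, heap_win);  /* passive-target epoch */
    MPI_Barrier(MPI_COMM_WORLD);

    /* PE 0 writes a message at offset 0 of PE 1's heap. */
    if (rank == 0 && size > 1) {
        const char msg[] = "hello from PE 0";
        my_putmem(0, msg, sizeof(msg), 1);
        my_quiet();
    }
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 1 && size > 1) {
        MPI_Win_sync(heap_win);  /* synchronize window copies before local read */
        printf("PE 1 received: %s\n", (char *)heap_base);
    }

    MPI_Win_unlock_all(heap_win);
    MPI_Win_free(&heap_win);
    MPI_Finalize();
    return 0;
}

Within a node, the same design could back the window with MPI_Win_allocate_shared to obtain the direct load-store access mentioned above, and collectives over arbitrary PE subsets could build on MPI_Comm_create_group, the non-collective communicator creation routine the abstract refers to.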

This manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (“Argonne”). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up, nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government.

Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Hammond, J.R., Ghosh, S., Chapman, B.M. (2014). Implementing OpenSHMEM Using MPI-3 One-Sided Communication. In: Poole, S., Hernandez, O., Shamis, P. (eds) OpenSHMEM and Related Technologies. Experiences, Implementations, and Tools. OpenSHMEM 2014. Lecture Notes in Computer Science, vol 8356. Springer, Cham. https://doi.org/10.1007/978-3-319-05215-1_4

  • DOI: https://doi.org/10.1007/978-3-319-05215-1_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-05214-4

  • Online ISBN: 978-3-319-05215-1
