
The GASPI API: A Failure Tolerant PGAS API for Asynchronous Dataflow on Heterogeneous Architectures

  • Conference paper
  • In: Sustained Simulation Performance 2014

Abstract

The Global Address Space Programming Interface (GASPI) is a Partitioned Global Address Space (PGAS) API specification focused on three key objectives: scalability, flexibility and fault tolerance. It offers a small yet powerful API composed of synchronization primitives, synchronous and asynchronous collectives, fine-grained control over one-sided read and write communication primitives, global atomics, passive receives, communication groups and communication queues. GASPI has been designed for one-sided, RDMA-driven communication in a PGAS environment. As such, GASPI aims to initiate a paradigm shift from bulk-synchronous two-sided communication patterns towards an asynchronous communication and execution model. To achieve its improved scaling behaviour, GASPI leverages request-based asynchronous dataflow with remote completion: a notification delivered at the target indicates, on a per-request basis, that the corresponding one-sided operation has completed at the target window. A fine-grained asynchronous dataflow model built on this mechanism can achieve substantially better scaling behaviour than bulk-synchronous MPI.
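To make the notified-write mechanism described above concrete, the following sketch shows a minimal ring exchange using the GASPI C API (as shipped, for instance, with the GPI-2 implementation and its GASPI.h header). It is an illustrative sketch, not code from the paper: the segment id, notification slot, segment size and the success_or_die helper are assumptions chosen for the example. gaspi_write_notify fuses a one-sided RDMA write with a notification, and gaspi_notify_waitsome lets the target wait for remote completion of exactly that request.

/*
 * Sketch: ring exchange with GASPI one-sided writes and remote completion.
 * Assumes the GASPI C API as provided by GPI-2 (GASPI.h); SEG_ID, NOTIF_ID,
 * the segment size and success_or_die are illustrative choices, not part of
 * the specification.
 */
#include <GASPI.h>
#include <stdlib.h>

#define SEG_ID   0          /* segment id used on every rank   */
#define SEG_SIZE (1 << 20)  /* 1 MiB communication segment     */
#define NOTIF_ID 0          /* notification slot for this step */

static void success_or_die(gaspi_return_t ret)
{
  if (ret != GASPI_SUCCESS)
    exit(EXIT_FAILURE);
}

int main(void)
{
  gaspi_rank_t rank, nranks;

  success_or_die(gaspi_proc_init(GASPI_BLOCK));
  success_or_die(gaspi_proc_rank(&rank));
  success_or_die(gaspi_proc_num(&nranks));

  /* One RDMA-capable segment per rank, registered with all ranks. */
  success_or_die(gaspi_segment_create(SEG_ID, SEG_SIZE, GASPI_GROUP_ALL,
                                      GASPI_BLOCK, GASPI_MEM_INITIALIZED));

  const gaspi_rank_t right = (rank + 1) % nranks;

  /* One-sided write of the first half of our segment into the second half
   * of the right neighbour's segment, fused with a notification that
   * signals remote completion of exactly this request. */
  success_or_die(gaspi_write_notify(SEG_ID, 0, right,
                                    SEG_ID, SEG_SIZE / 2, SEG_SIZE / 2,
                                    NOTIF_ID, 1 /* notification value */,
                                    0 /* queue */, GASPI_BLOCK));

  /* Dataflow style: wait only for the notification we depend on, then
   * reset it so the slot can be reused in the next iteration. */
  gaspi_notification_id_t first;
  gaspi_notification_t    old_val;
  success_or_die(gaspi_notify_waitsome(SEG_ID, NOTIF_ID, 1, &first,
                                       GASPI_BLOCK));
  success_or_die(gaspi_notify_reset(SEG_ID, first, &old_val));

  /* Local completion: flush the queue before reusing the send buffer. */
  success_or_die(gaspi_wait(0, GASPI_BLOCK));

  success_or_die(gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK));
  success_or_die(gaspi_proc_term(GASPI_BLOCK));
  return EXIT_SUCCESS;
}

The point of the sketch is that each receiver synchronizes on individual notifications rather than on a global barrier; this per-request remote completion is what enables the fine-grained asynchronous dataflow model referred to in the abstract.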



Acknowledgements

The authors would like to thank the German Ministry of Education and Science for funding the GASPI project (funding code 01IH11007A) within the program “ICT 2020 - research for innovation”. Furthermore, the authors are grateful to all project partners for the fruitful and constructive discussions.


Corresponding author

Correspondence to Christian Simmendinger.


Copyright information

© 2015 Springer International Publishing Switzerland


Cite this paper

Simmendinger, C., Rahn, M., Gruenewald, D. (2015). The GASPI API: A Failure Tolerant PGAS API for Asynchronous Dataflow on Heterogeneous Architectures. In: Resch, M., Bez, W., Focht, E., Kobayashi, H., Patel, N. (eds) Sustained Simulation Performance 2014. Springer, Cham. https://doi.org/10.1007/978-3-319-10626-7_2
