
Scaling Performance Tool MPI Communicator Management

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6960)

Abstract

The Scalasca toolset has successfully demonstrated measurement and analysis scalability on the largest computer systems; however, applications of growing complexity place increasing demands on performance tools. One such application is the PFLOTRAN code for simulating multiphase subsurface flow and reactive transport. While PFLOTRAN itself and Scalasca runtime summarization both scale well, MPI communicator management becomes critical for trace collection with tens of thousands of processes. We present the re-design and re-engineering of key components of the Scalasca measurement system, encompassing the representation of communicators, the tracking and unification of communicator definitions, and the translation of ranks recorded in event traces.
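
To make the rank-translation problem concrete, the sketch below shows in plain C how a rank recorded relative to a sub-communicator can be mapped back to its global rank in MPI_COMM_WORLD using standard MPI group operations. This is only an illustration of the kind of mapping a tracing tool must perform for communicator-local ranks stored in event records; it is not Scalasca's actual implementation, and the helper name local_to_global_rank is purely illustrative.

    /* Minimal sketch (not Scalasca's implementation): map a rank that was
     * recorded relative to a sub-communicator onto the corresponding global
     * rank in MPI_COMM_WORLD, using standard MPI group operations. */
    #include <mpi.h>
    #include <stdio.h>

    /* Translate a communicator-local rank into its MPI_COMM_WORLD rank. */
    static int local_to_global_rank(MPI_Comm comm, int local_rank)
    {
        MPI_Group comm_group, world_group;
        int global_rank;

        MPI_Comm_group(comm, &comm_group);
        MPI_Comm_group(MPI_COMM_WORLD, &world_group);
        MPI_Group_translate_ranks(comm_group, 1, &local_rank,
                                  world_group, &global_rank);
        MPI_Group_free(&comm_group);
        MPI_Group_free(&world_group);
        return global_rank;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Example sub-communicator: split the world into two halves. */
        MPI_Comm half;
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &half);

        int local_rank;
        MPI_Comm_rank(half, &local_rank);
        printf("local rank %d in sub-communicator is global rank %d\n",
               local_rank, local_to_global_rank(half, local_rank));

        MPI_Comm_free(&half);
        MPI_Finalize();
        return 0;
    }

Performing such translations per event at replay time requires access to the group of every recorded communicator, which is why a compact, unified representation of communicator definitions becomes critical at large scale.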




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Geimer, M., Hermanns, M.-A., Siebert, C., Wolf, F., Wylie, B.J.N. (2011). Scaling Performance Tool MPI Communicator Management. In: Cotronis, Y., Danalis, A., Nikolopoulos, D.S., Dongarra, J. (eds.) Recent Advances in the Message Passing Interface. EuroMPI 2011. Lecture Notes in Computer Science, vol. 6960. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24449-0_21


  • DOI: https://doi.org/10.1007/978-3-642-24449-0_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24448-3

  • Online ISBN: 978-3-642-24449-0

