Specification Techniques for Automatic Performance Analysis Tools

  • Michael Gerndt
  • Hans-Georg Eßer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1800)

Abstract

Performance analysis of parallel programs is a time-consuming task that requires considerable experience. The goal of the KOJAK project at the Research Centre Juelich is to develop an automatic performance analysis environment. A key requirement for the success of this new environment is that it integrate easily with the tools already available on the target platform. The design should yield tools that can be retargeted to different parallel machines on the basis of specification documents. This article outlines the features of the APART Specification Language designed for that purpose and demonstrates its applicability in the context of the KOJAK Cost Analyzer, a first prototype tool of KOJAK.
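The ASL itself is not reproduced on this page. As a rough illustration of the underlying idea only, the following Python sketch (all names hypothetical, not actual ASL syntax) mirrors the notion of a performance property: a named condition over measured performance data, paired with a severity measure that an automatic analyzer can use to rank findings such as barrier synchronization overhead.

    # Hypothetical sketch only: the real APART Specification Language (ASL)
    # has its own syntax; this Python analogue merely mirrors the idea that a
    # performance property is a named condition plus a severity measure
    # evaluated over summary performance data.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class RegionSummary:
        """Hypothetical summary data for one program region."""
        name: str
        execution_time: float  # total time spent in the region (seconds)
        barrier_time: float    # time spent in barrier synchronization (seconds)

    @dataclass
    class PerformanceProperty:
        """Holds when `condition` is true; findings are ranked by `severity`."""
        name: str
        condition: Callable[[RegionSummary, float], bool]
        severity: Callable[[RegionSummary, float], float]

    # Example property in the spirit of ASL: noticeable barrier overhead.
    barrier_overhead = PerformanceProperty(
        name="barrier_overhead",
        condition=lambda r, total: r.barrier_time > 0.1 * r.execution_time,
        severity=lambda r, total: r.barrier_time / total,
    )

    def analyze(regions: List[RegionSummary],
                props: List[PerformanceProperty]) -> List[Tuple[str, str, float]]:
        """Evaluate all properties on all regions; most severe findings first."""
        total = sum(r.execution_time for r in regions)
        findings = [(p.name, r.name, p.severity(r, total))
                    for p in props for r in regions if p.condition(r, total)]
        return sorted(findings, key=lambda f: f[2], reverse=True)

    if __name__ == "__main__":
        data = [RegionSummary("loop_1", 12.0, 3.5),
                RegionSummary("loop_2", 8.0, 0.2)]
        for prop, region, sev in analyze(data, [barrier_overhead]):
            print(f"{prop} in {region}: severity {sev:.2f}")

Separating the condition (does the property hold?) from the severity (how much does it matter?) is what lets such a tool be retargeted: only the property specifications change between platforms, not the analysis engine.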

Keywords

Performance Data, Performance Property, Proof Rule, Specification, Barrier Synchronization

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Michael Gerndt
  • Hans-Georg Eßer

  1. Central Institute for Applied Mathematics, Research Centre Juelich, Germany
