
Abstract

Understanding and explaining the execution of a parallel system is a great challenge for most computer scientists. Textual presentations of data describing the execution of a parallel program are inherently sequential and may be difficult to assimilate. Graphical visualization is a standard technique for facilitating human comprehension of complex phenomena and large volumes of data. The execution of parallel programs on advanced parallel architectures is often extremely complex, and hardware or software performance monitoring of such programs can generate vast quantities of data.




Copyright information

© 1999 Springer Science+Business Media New York

About this chapter

Cite this chapter

Wu, X. (1999). Parallel Performance Visualization. In: Performance Evaluation, Prediction and Visualization of Parallel Systems. The Kluwer International Series on Asian Studies in Computer and Information Science, vol 4. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-5147-8_6

  • DOI: https://doi.org/10.1007/978-1-4615-5147-8_6

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7343-8

  • Online ISBN: 978-1-4615-5147-8
