Compiler Synthesis of Task Graphs for Parallel Program Performance Prediction

  • Conference paper

Languages and Compilers for Parallel Computing (LCPC 2000)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2017)

Abstract

Task graphs and their equivalents have proved to be a valuable abstraction for representing the execution of parallel programs in a number of different applications. Perhaps the most widespread use of task graphs has been for performance modeling of parallel programs, including quantitative analytical models [3],[19],[25],[26],[27], theoretical and abstract analytical models [14], and program simulation [5],[13]. A second important use of task graphs is in parallel programming systems. Parallel programming environments such as PYRROS [28], CODE [24], HENCE [24], and Jade [20] have used task graphs at three different levels: as a programming notation for expressing parallelism, as an internal representation in the compiler for computation partitioning and communication generation, and as a runtime representation for scheduling and execution of parallel programs. Although the task graphs used in these systems differ in representation and semantics (e.g., whether task graph edges capture purely precedence constraints or also dataflow requirements), there are close similarities. Perhaps most importantly, they all capture the parallel structure of a program separately from the sequential computations, by breaking down the program into computational “tasks”, precedence relations between tasks, and (in some cases) explicit communication or synchronization operations between tasks.
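The decomposition described above (computational tasks, precedence relations between tasks, and optional communication costs on edges) maps naturally onto a small data structure. The following Python sketch is purely illustrative: the names (`TaskGraph`, `predicted_span`) and the unbounded-processor critical-path estimator are assumptions chosen for exposition, not the representation or prediction model used by the compiler described in this paper.

```python
from collections import defaultdict, deque

class TaskGraph:
    """Tasks with computation costs; directed precedence edges with
    optional communication costs. Hypothetical sketch, not the
    paper's internal representation."""

    def __init__(self):
        self.cost = {}                  # task name -> computation time
        self.succs = defaultdict(list)  # task -> [(successor, comm cost)]
        self.indeg = defaultdict(int)   # task -> number of predecessors

    def add_task(self, name, compute_cost):
        self.cost[name] = compute_cost

    def add_edge(self, src, dst, comm_cost=0.0):
        self.succs[src].append((dst, comm_cost))
        self.indeg[dst] += 1

    def predicted_span(self):
        """Critical-path length of the DAG: the predicted completion
        time assuming unbounded processors, computed with Kahn's
        topological traversal."""
        indeg = {t: self.indeg[t] for t in self.cost}
        earliest = {t: 0.0 for t in self.cost}  # earliest start times
        finish = {}
        ready = deque(t for t in self.cost if indeg[t] == 0)
        while ready:
            t = ready.popleft()
            finish[t] = earliest[t] + self.cost[t]
            for s, comm in self.succs[t]:
                # A successor may start only after its inputs arrive.
                earliest[s] = max(earliest[s], finish[t] + comm)
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
        return max(finish.values())

# Fork-join example: t0 spawns t1 and t2, which join at t3.
g = TaskGraph()
for name, cost in [("t0", 1.0), ("t1", 3.0), ("t2", 2.0), ("t3", 1.0)]:
    g.add_task(name, cost)
for src, dst in [("t0", "t1"), ("t0", "t2"), ("t1", "t3"), ("t2", "t3")]:
    g.add_edge(src, dst, comm_cost=0.5)
print(g.predicted_span())  # 1.0 + 0.5 + 3.0 + 0.5 + 1.0 = 6.0
```

In the fork-join example the longest compute-plus-communication chain is t0 → t1 → t3, giving a predicted span of 6.0 time units. Task-graph-based performance models such as those cited above refine this skeleton with processor counts, scheduling policies, and measured or analytical task costs.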

Acknowledgements

The authors would like to acknowledge the valuable input that several members of the POEMS project have provided to the development of the application representation. We are particularly grateful to the UCLA team and especially Ewa Deelman for her efforts in validating the simulations of the abstracted MPI codes generated by the compiler. This work was carried out while the authors were with the Computer Science Department at Rice University.

References

  1. V. Adve and J. Mellor-Crummey. Using Integer Sets for Data-Parallel Program Analysis and Optimization. In Proc. of the SIGPLAN ’98 Conference on Programming Language Design and Implementation, Montreal, Canada, June 1998.

  2. V. Adve and R. Sakellariou. Application Representations for Multi-Paradigm Performance Modeling of Large-Scale Parallel Scientific Codes. International Journal of High Performance Computing Applications, 14(4), 2000.

  3. V. Adve and M. K. Vernon. A Deterministic Model for Parallel Program Performance Evaluation. Technical Report CS-TR98-333, Computer Science Dept., Rice University, December 1998. Also available at http://www-sal.cs.uiuc.edu/~vadve/Papers/detmodel.ps.gz.

  4. V. S. Adve, R. Bagrodia, J. C. Browne, E. Deelman, A. Dube, E. Houstis, J. R. Rice, R. Sakellariou, D. Sundaram-Stukel, P. J. Teller, and M. K. Vernon. POEMS: End-to-End Performance Design of Large Parallel Adaptive Computational Systems. IEEE Trans. on Software Engineering, 26(11), November 2000.

  5. V. S. Adve, R. Bagrodia, E. Deelman, T. Phan, and R. Sakellariou. Compiler-Supported Simulation of Highly Scalable Parallel Applications. In Proceedings of Supercomputing ’99, Portland, Oregon, November 1999.

  6. V. Adve, G. Jin, J. Mellor-Crummey, and Q. Yi. High Performance Fortran Compilation Techniques for Parallelizing Scientific Codes. In Proceedings of SC98: High Performance Computing and Networking, Orlando, FL, November 1998.

  7. S. Amarasinghe and M. Lam. Communication optimization and code generation for distributed memory machines. In Proc. of the SIGPLAN ’93 Conference on Programming Language Design and Implementation, Albuquerque, NM, June 1993.

  8. C. Ancourt and F. Irigoin. Scanning polyhedra with do loops. In Proceedings of the Third ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Williamsburg, VA, April 1991.

  9. B. Armstrong and R. Eigenmann. Performance Forecasting: Towards a Methodology for Characterizing Large Computational Applications. In Proc. of the Int’l Conf. on Parallel Processing, pages 518–525, August 1998.

  10. R. Bagrodia, E. Deelman, S. Docy, and T. Phan. Performance prediction of large parallel applications using parallel simulation. In Proc. 7th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Atlanta, GA, 1999.

  11. P. Banerjee, J. Chandy, M. Gupta, E. Hodges, J. Holm, A. Lain, D. Palermo, S. Ramaswamy, and E. Su. The Paradigm compiler for distributed-memory multicomputers. IEEE Computer, 28(10):37–47, October 1995.

  12. M. Cosnard and M. Loi. Automatic Task Graph Generation Techniques. Parallel Processing Letters, 5(4):527–538, December 1995.

  13. M. Dikaiakos, A. Rogers, and K. Steiglitz. FAST: A Functional Algorithm Simulation Testbed. In International Workshop on Modelling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS ’94), 1994.

  14. D. L. Eager, J. Zahorjan, and E. D. Lazowska. Speedup versus Efficiency in Parallel Systems. IEEE Trans. on Computers, C-38(3):408–423, March 1989.

  15. T. Fahringer and H. Zima. A static parameter based performance prediction tool for parallel programs. In Proceedings of the 1993 ACM International Conference on Supercomputing, Tokyo, Japan, July 1993.

  16. P. Havlak. Interprocedural Symbolic Analysis. PhD thesis, Dept. of Computer Science, Rice University, May 1994. Also available as CRPC-TR94451 from the Center for Research on Parallel Computation and CS-TR94-228 from the Rice Department of Computer Science.

  17. S. Hiranandani, K. Kennedy, and C.-W. Tseng. Evaluation of compiler optimizations for Fortran D on MIMD distributed-memory machines. In Proc. of the 1992 ACM International Conference on Supercomputing, Washington, DC, July 1992.

  18. S. Horwitz, T. Reps, and D. Binkley. Interprocedural slicing using dependence graphs. ACM Trans. on Programming Languages and Systems, 12:26–60, 1990.

  19. A. Kapelnikov, R. R. Muntz, and M. D. Ercegovac. A Modeling Methodology for the Analysis of Concurrent Systems and Computations. Journal of Parallel and Distributed Computing, 6:568–597, 1989.

  20. M. S. Lam and M. Rinard. Coarse-Grain Parallel Programming in Jade. In Proc. Third ACM SIGPLAN Symposium on Principles and Practices of Parallel Programming, pages 94–105, Williamsburg, VA, April 1991.

  21. J. Li and M. Chen. Compiling communication-efficient programs for massively parallel machines. IEEE Trans. on Parallel and Distributed Systems, 2(3):361–376, July 1991.

  22. J. Mellor-Crummey and V. Adve. Simplifying control flow in compiler-generated parallel code. International Journal of Parallel Programming, 26(5), 1998.

  23. C. Mendes and D. Reed. Integrated Compilation and Scalability Analysis for Parallel Systems. In Proc. of the Int’l Conference on Parallel Architectures and Compilation Techniques, Paris, October 1998.

  24. P. Newton and J. C. Browne. The CODE 2.0 Graphical Parallel Programming Language. In Proceedings of the 1992 ACM International Conference on Supercomputing, Washington, DC, July 1992.

  25. E. Papaefstathiou, D. J. Kerbyson, G. R. Nudd, T. J. Atherton, and J. S. Harper. An Introduction to the Layered Characterization for High Performance Systems. Research Report 335, University of Warwick, December 1997.

  26. M. Parashar, S. Hariri, T. Haupt, and G. Fox. Interpreting the Performance of HPF/Fortran 90D. In Proceedings of Supercomputing ’94, Washington, D.C., November 1994.

  27. A. Thomasian and P. F. Bay. Analytic Queueing Network Models for Parallel Processing of Task Systems. IEEE Trans. on Computers, C-35(12):1045–1054, December 1986.

  28. T. Yang and A. Gerasoulis. PYRROS: Static task scheduling and code generation for message passing multiprocessors. In Proceedings of the 1992 ACM International Conference on Supercomputing, Washington, DC, July 1992.


Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Adve, V., Sakellariou, R. (2001). Compiler Synthesis of Task Graphs for Parallel Program Performance Prediction. In: Midkiff, S.P., et al. Languages and Compilers for Parallel Computing. LCPC 2000. Lecture Notes in Computer Science, vol 2017. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45574-4_14

  • DOI: https://doi.org/10.1007/3-540-45574-4_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42862-6

  • Online ISBN: 978-3-540-45574-5
