Compiler architectures for heterogeneous systems

  • Conference paper

Languages and Compilers for Parallel Computing (LCPC 1995)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1033)

Abstract

Heterogeneous parallel systems incorporate diverse models of parallelism within a single machine or across machines, and are therefore better suited to applications whose components require different kinds of parallelism [25, 30, 43]. These systems are already pervasive in industrial and academic settings and offer a wealth of underutilized resources for achieving high performance. Unfortunately, heterogeneity complicates software development. We believe that compilers can and should assist in handling this complexity. We identify four goals for extending compilers to manage heterogeneity: exploiting available resources, targeting changing resources, adjusting optimization to suit a target, and allowing programming models and languages to evolve. These goals do not require changes to the individual pieces of existing compilers so much as a restructuring of a compiler's software architecture to increase its flexibility. We examine six important parallelizing compilers to identify both existing solutions and areas where new technology is needed.

We are designing a new compiler architecture to meet the needs of heterogeneity. A companion report [27] gives a preliminary description of our design, along with an expanded version of the survey.
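
As a purely illustrative sketch (an editorial construction, not code from this paper or from any of the compilers surveyed), the following Python fragment shows one way a compiler driver could be structured so that optimization passes are selected per target rather than hard-wired, in the spirit of the goals "adjusting optimization to suit a target" and "targeting changing resources." All names here (Target, Pass, Pipeline) and the two toy passes are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, List

IR = List[str]  # stand-in intermediate representation: a list of abstract ops

@dataclass
class Target:
    name: str
    memory_model: str      # e.g. "shared" or "distributed"
    vector_width: int = 1  # >1 models a SIMD/vector machine

@dataclass
class Pass:
    name: str
    applies_to: Callable[[Target], bool]  # should this pass run on the target?
    run: Callable[[IR, Target], IR]       # the transformation itself

@dataclass
class Pipeline:
    passes: List[Pass] = field(default_factory=list)

    def compile(self, ir: IR, target: Target) -> IR:
        # Only passes whose predicate accepts the target are applied, so
        # retargeting means re-filtering the pass list, not rewriting passes.
        for p in self.passes:
            if p.applies_to(target):
                ir = p.run(ir, target)
        return ir

# Two toy passes: message insertion for distributed-memory targets and
# vectorization for wide-vector targets.
pipeline = Pipeline(passes=[
    Pass("insert-messaging",
         lambda t: t.memory_model == "distributed",
         lambda ir, t: ir + [f"send/recv scheduled for {t.name}"]),
    Pass("vectorize",
         lambda t: t.vector_width > 1,
         lambda ir, t: [f"vec{t.vector_width}({op})" for op in ir]),
])

cluster = Target("workstation-cluster", "distributed")
simd_array = Target("simd-array", "shared", vector_width=8)
print(pipeline.compile(["loop body"], cluster))     # messaging pass runs
print(pipeline.compile(["loop body"], simd_array))  # vectorize pass runs

Because each pass carries its own applicability predicate, adding a new target or a new optimization changes only the pass list, which is the kind of architectural flexibility the four goals call for.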

This work was supported in part by the Advanced Research Projects Agency under contract N00014-94-1-0742, monitored by the Office of Naval Research.

References

  1. S. Amarasinghe, J. Anderson, M. Lam, and A. Lim. An overview of a compiler for scalable parallel machines. In Proceedings of the Sixth Workshop on Languages and Compilers for Parallel Computing, Portland, OR, August 1993.

  2. R. Bixby, K. Kennedy, and U. Kremer. Automatic data layout using 0–1 integer programming. In International Conference on Parallel Architectures and Compilation Techniques (PACT), pages 111–122, Montreal, August 1994.

  3. W. Blume and R. Eigenmann. The range test: A dependence test for symbolic, non-linear expressions. Technical Report CSRD-1345, Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign, April 1994.

  4. W. Blume et al. Effective automatic parallelization with Polaris. International Journal of Parallel Programming, May 1995.

  5. F. Bodin et al. Distributed pC++: Basic ideas for an object parallel language. Scientific Programming, 2(3), Fall 1993.

  6. F. Bodin et al. Sage++: An object-oriented toolkit and class library for building Fortran and C++ restructuring tools. In Second Object-Oriented Numerics Conference, 1994.

  7. F. Bodin, T. Priol, P. Mehrotra, and D. Gannon. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++. Technical Report 94-54, ICASE, June 1994.

  8. D. Brown, S. Hackstadt, A. Malony, and B. Mohr. Program analysis environments for parallel language systems: the TAU environment. In Proceedings of the 2nd Workshop on Environments and Tools For Parallel Scientific Computing, pages 162–171, Townsend, Tennessee, May 1994.

  9. R. Butler and E. Lusk. Monitors, messages, and clusters: the p4 parallel programming system. Parallel Computing, 20(4):547–564, April 1994.

  10. B. Chapman, P. Mehrotra, and H. Zima. Programming in Vienna Fortran. Scientific Programming, 1(1):31–50, Fall 1992.

  11. B. Chapman, P. Mehrotra, and H. Zima. Vienna Fortran — a Fortran language extension for distributed memory multiprocessors. In J. Saltz and P. Mehrotra, editors, Languages, Compilers, and Run-Time Environments for Distributed Memory Machines. North-Holland, Amsterdam, 1992.

  12. K. Cooper et al. The ParaScope parallel programming environment. Proceedings of the IEEE, 81(2):244–263, February 1993.

  13. T. Fahringer. Using the P3T to guide the parallelization and optimization effort under the Vienna Fortran compilation system. In Proceedings of the 1994 Scalable High Performance Computing Conference, Knoxville, May 1994.

  14. T. Fahringer and H. Zima. A static parameter based performance prediction tool for parallel programs. In Proceedings of the 1993 ACM International Conference on Supercomputing, Tokyo, July 1993.

  15. K. Faigin et al. The Polaris internal representation. International Journal of Parallel Programming, 22(5):553–586, October 1994.

  16. S. Feldman, D. Gay, M. Maimone, and N. Schryer. A Fortran-to-C converter. Computing Science Technical Report 149, AT&T Bell Laboratories, March 1993.

  17. G. Fox et al. Fortran D language specification. Technical Report TR90-141, Rice University, December 1990.

  18. A. Ghafoor and J. Yang. A distributed heterogeneous supercomputing management system. Computer, 26(6):78–86, June 1993.

  19. M. B. Girkar and C. Polychronopoulos. The hierarchical task graph as a universal intermediate representation. International Journal of Parallel Programming, 22(5), 1994.

  20. M. Hall, B. Murphy, and S. Amarasinghe. Interprocedural analysis for parallelization. In Proceedings of the Eighth Workshop on Languages and Compilers for Parallel Computing, Columbus, OH, August 1995.

  21. S. Hiranandani, K. Kennedy, and C. Tseng. Compiler support for machine-independent parallel programming in Fortran D. Technical Report TR91-149, Rice University, Jan. 1991.

  22. S. Hiranandani, K. Kennedy, and C. Tseng. Compiling Fortran D for MIMD distributed-memory machines. Communications of the ACM, 35(8):66–80, August 1992.

  23. K. Kennedy, K. S. McKinley, and C. Tseng. Analysis and transformation in an interactive parallel programming tool. Concurrency: Practice & Experience, 5(7):575–602, October 1993.

  24. A. Khokhar, V. Prasanna, M. Shaaban, and C. Wang. Heterogeneous computing: Challenges and opportunities. Computer, 26(6):18–27, June 1993.

  25. A. E. Klietz, A. V. Malevsky, and K. Chin-Purcell. A case study in metacomputing: Distributed simulations of mixing in turbulent convection. In Workshop on Heterogeneous Processing, pages 101–106, April 1993.

  26. A. Malony et al. Performance analysis of pC++: A portable data-parallel programming system for scalable parallel computers. In Proceedings of the 8th International Parallel Processing Symposium, 1994.

  27. K. S. McKinley, S. Singhai, G. Weaver, and C. Weems. Compiling for heterogeneous systems: A survey and an approach. Technical Report TR-95-59, University of Massachusetts, July 1995. http://osl-www.cs.umass.edu/~oos/papers.html.

  28. Message Passing Interface Forum. MPI: A message-passing interface standard, v1.0. Technical report, University of Tennessee, May 1994.

  29. B. Mohr, D. Brown, and A. Malony. TAU: A portable parallel program analysis environment for pC++. In Proceedings of CONPAR 94 — VAPP VI, University of Linz, Austria, September 1994. LNCS 854.

  30. H. Nicholas et al. Distributing the comparison of DNA and protein sequences across heterogeneous supercomputers. In Proceedings of Supercomputing '91, pages 139–146, 1991.

  31. D. A. Padua et al. Polaris: A new-generation parallelizing compiler for MPPs. Technical Report CSRD-1306, Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign, June 1993.

  32. D. A. Padua. Private communication, September 1995.

  33. C. Polychronopoulos et al. Parafrase-2: An environment for parallelizing, partitioning, synchronizing, and scheduling programs on multiprocessors. International Journal of High Speed Computing, 1(1), 1989.

  34. W. Pottenger and R. Eigenmann. Idiom recognition in the Polaris parallelizing compiler. In Proceedings of the 1995 ACM International Conference on Supercomputing, Barcelona, July 1995.

  35. J. Saltz, K. Crowley, R. Mirchandaney, and H. Berryman. Run-time scheduling and execution of loops on message passing machines. Journal of Parallel and Distributed Computing, 8(2):303–312, 1990.

  36. L. Smarr and C. E. Catlett. Metacomputing. Communications of the ACM, 35(6):45–52, June 1992.

  37. Stanford Compiler Group. The SUIF library. Technical report, Stanford University, 1994.

  38. V.S. Sunderam, G.A. Geist, J. Dongarra, and P. Manchek. The PVM concurrent computing system: Evolution, experiences, and trends. Parallel Computing, 20(4):531–545, April 1994.

  39. C. Tseng. An Optimizing Fortran D Compiler for MIMD Distributed-Memory Machines. PhD thesis, Rice University, January 1993.

  40. L. H. Turcotte. A survey of software environments for exploiting networked computing resources. Technical Report MSSU-EIRS-ERC-93-2, NSF Engineering Research Center, Mississippi State University, February 1993.

  41. L. H. Turcotte. Cluster computing. In Albert Y. Zomaya, editor, Parallel and Distributed Computing Handbook, chapter 26. McGraw-Hill, October 1995.

  42. C. Weems et al. The image understanding architecture. International Journal of Computer Vision, 2(3):251–282, 1989.

  43. C. Weems et al. The DARPA image understanding benchmark for parallel processors. Journal of Parallel and Distributed Computing, 11:1–24, 1991.

  44. R. Wilson et al. The SUIF compiler system: A parallelizing and optimizing research compiler. ACM SIGPLAN Notices, 29(12), December 1994.

  45. M. E. Wolf and M. Lam. A loop transformation theory and an algorithm to maximize parallelism. IEEE Transactions on Parallel and Distributed Systems, 2(4):452–471, October 1991.

  46. S. Yang et al. High Performance Fortran interface to the parallel C++. In Proceedings of the 1994 Scalable High Performance Computing Conference, Knoxville, TN, May 1994.

  47. H. Zima. Private communication, September 1995.

  48. H. Zima and B. Chapman. Compiling for distributed-memory systems. Proceedings of the IEEE, 81(2):264–287, February 1993.

  49. H. Zima, B. Chapman, H. Moritsch, and P. Mehrotra. Dynamic data distributions in Vienna Fortran. In Proceedings of Supercomputing '93, Portland, OR, November 1993.

Editor information

Chua-Huang Huang, Ponnuswamy Sadayappan, Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

McKinley, K.S., Singhai, S.K., Weaver, G.E., Weems, C.C. (1996). Compiler architectures for heterogeneous systems. In: Huang, CH., Sadayappan, P., Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1995. Lecture Notes in Computer Science, vol 1033. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0014216

  • DOI: https://doi.org/10.1007/BFb0014216

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60765-6

  • Online ISBN: 978-3-540-49446-1
