The Design of the PROMIS Compiler

  • Hideki Saito
  • Nicholas Stavrakos
  • Steven Carroll
  • Constantine Polychronopoulos
  • Alex Nicolau
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1575)


PROMIS is a multilingual, parallelizing, and retargetable compiler with an integrated frontend and backend operating on a single unified/universal intermediate representation. This paper describes the organization and the major features of the PROMIS compiler.

PROMIS exploits multiple levels of static and dynamic parallelism, ranging from task- and loop-level parallelism to instruction-level parallelism, based on a target architecture description. The frontend and the backend are integrated through a unified internal representation common to the high-level, low-level, and instruction-level analyses and transformations. The unified internal representation propagates hard-to-compute dependence information from the semantically rich frontend through the backend down to the code generator. Based on conditional algebra, the symbolic analyzer provides control-sensitive and interprocedural information to the compiler. This information is used by other analysis and transformation passes to generate highly optimized code. Symbolic analysis also helps statically quantify the effectiveness of transformations. The graphical user interface assists compiler development as well as application performance tuning.


Keywords: Expression Tree, Dependence Information, Symbolic Analysis, Dynamic Parallelism, Static Single Assignment



Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Hideki Saito (1)
  • Nicholas Stavrakos (1)
  • Steven Carroll (1)
  • Constantine Polychronopoulos (1)
  • Alex Nicolau (2)
  1. Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign, Urbana, USA
  2. Department of Information and Computer Science, University of California at Irvine, Irvine, USA
