
Designing the Agassiz Compiler for Concurrent Multithreaded Architectures

  • B. Zheng
  • J. Y. Tsai
  • B. Y. Zang
  • T. Chen
  • B. Huang
  • J. H. Li
  • Y. H. Ding
  • J. Liang
  • Y. Zhen
  • P. C. Yew
  • C. Q. Zhu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1863)

Abstract

In this paper, we present the overall design of the Agassiz compiler [1]. The Agassiz compiler is an integrated compiler targeting concurrent multithreaded architectures [12,13]. These architectures can exploit both loop-level and instruction-level parallelism in general-purpose applications (such as those in the SPEC benchmarks). They also support various forms of control and data speculation, run-time data dependence checking, and fast synchronization and communication mechanisms. The Agassiz compiler has a loop-level parallelizing compiler as its front-end and an instruction-level optimizing compiler as its back-end to support such architectures. In this paper, we focus on the intermediate representation (IR) design of the Agassiz compiler and describe how it supports the front-end analyses, various optimization techniques, and source-to-source translation.
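
To make the flavor of such an IR concrete, the sketch below shows a minimal, hypothetical reference-counted IR node hierarchy in C++. The class and member-function names (IrNode, addRef, release, unparse, VarRef, BinaryOp) are illustrative assumptions and are not taken from the Agassiz compiler itself; the sketch only illustrates how reference-counting member functions let shared subtrees be reclaimed safely, and how an unparse method can back source-to-source translation.

    // Hypothetical sketch, not the actual Agassiz IR: a reference-counted
    // IR node hierarchy where member functions manage the counts so that
    // shared subtrees (e.g. symbol or expression objects) are freed only
    // after their last owner releases them.
    #include <iostream>
    #include <string>

    class IrNode {
    public:
        IrNode() : refCount_(0) {}
        virtual ~IrNode() = default;

        // Reference-counting member functions.
        void addRef() { ++refCount_; }
        void release() {
            if (--refCount_ == 0) delete this;
        }

        // Each node can print itself back as source text,
        // which is what enables source-to-source translation.
        virtual std::string unparse() const = 0;

    private:
        int refCount_;
    };

    class VarRef : public IrNode {
    public:
        explicit VarRef(std::string name) : name_(std::move(name)) {}
        std::string unparse() const override { return name_; }
    private:
        std::string name_;
    };

    class BinaryOp : public IrNode {
    public:
        BinaryOp(IrNode* lhs, char op, IrNode* rhs)
            : lhs_(lhs), op_(op), rhs_(rhs) {
            lhs_->addRef();
            rhs_->addRef();
        }
        ~BinaryOp() override {
            lhs_->release();
            rhs_->release();
        }
        std::string unparse() const override {
            return "(" + lhs_->unparse() + " " + op_ + " " + rhs_->unparse() + ")";
        }
    private:
        IrNode* lhs_;
        char op_;
        IrNode* rhs_;
    };

    int main() {
        // The variable node is shared by two expressions; reference
        // counting keeps it alive until every owner has released it.
        VarRef* i = new VarRef("i");
        i->addRef();                              // held by this scope
        BinaryOp* e1 = new BinaryOp(i, '+', new VarRef("1"));
        e1->addRef();
        BinaryOp* e2 = new BinaryOp(i, '*', new VarRef("2"));
        e2->addRef();

        std::cout << e1->unparse() << "\n" << e2->unparse() << "\n";

        e1->release();
        e2->release();
        i->release();                             // last owner; "i" is freed
        return 0;
    }

In a production compiler the counts would normally be hidden behind smart-pointer handles rather than managed by hand; the explicit calls here are only meant to expose the mechanism.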

Keywords

Dependence Graph · Intermediate Representation · Member Function · Reference Count · Data Flow Analysis

References

  1.
  2. W. Blume, R. Eigenmann, K. Faigin, J. Grout, J. Hoeflinger, D. Padua, P. Petersen, W. Pottenger, L. Rauchwerger, P. Tu, and S. Weatherford. Polaris: Improving the Effectiveness of Parallelizing Compilers. In Languages and Compilers for Parallel Computing, Lecture Notes in Computer Science 892, K. Pingali, U. Banerjee, D. Gelernter, A. Nicolau, and D. Padua (Eds.), pages 141–154. Springer-Verlag, 1994.
  3. Doug Burger and Todd M. Austin. The SimpleScalar Tool Set. Technical Report #1342, Computer Sciences Department, University of Wisconsin-Madison, 1997.
  4. S. Cho, J.-Y. Tsai, Y. Song, B. Zheng, S. J. Schwinn, X. Wang, Q. Zhao, Z. Li, D. J. Lilja, and P.-C. Yew. High-Level Information: An Approach for Integrating Front-End and Back-End Compilers. In Proceedings of the International Conference on Parallel Processing, pages 345–355, August 1998.
  5. Cliff Click and Keith D. Cooper. Combining Analyses, Combining Optimizations. ACM Transactions on Programming Languages and Systems, Vol. 17, No. 2, pages 181–196, March 1995.
  6. Ron Cytron. Limited Processor Scheduling of Doacross Loops. In Proceedings of the International Conference on Parallel Processing, pages 226–234, August 1987.
  7. Ron Cytron, Jeanne Ferrante, Barry K. Rosen, and Mark N. Wegman. Efficiently Computing Static Single Assignment Form and the Control Dependence Graph. ACM Transactions on Programming Languages and Systems, Vol. 13, No. 4, pages 451–490, October 1991.
  8. L. Hendren, C. Donawa, M. Emami, G. Gao, Justiani, and B. Sridharan. Designing the McCAT Compiler Based on a Family of Structured Intermediate Representations. In Proceedings of the 5th International Workshop on Languages and Compilers for Parallel Computing, August 1992.
  9. Bo Huang. Context-Sensitive Interprocedural Pointer Analysis. Ph.D. thesis, Computer Science Department, Fudan University, P.R. China, in preparation.
  10. Richard Jones and Rafael Lins. Garbage Collection. John Wiley & Sons Ltd, 1996.
  11. Richard C. Johnson. Efficient Program Analysis Using Dependence Flow Graphs. Ph.D. thesis, Computer Science, Cornell University, 1994.
  12. G. S. Sohi, S. Breach, and T. N. Vijaykumar. Multiscalar Processors. In Proceedings of the 22nd International Symposium on Computer Architecture (ISCA-22), 1995.
  13. J.-Y. Tsai and P.-C. Yew. The Superthreaded Architecture: Thread Pipelining with Run-Time Data Dependence Checking and Control Speculation. In Proceedings of the International Conference on Parallel Architectures and Compilation Techniques, October 1996.
  14. J.-Y. Tsai. Integrating Compilation Technology and Processor Architecture for Cost-Effective Concurrent. Ph.D. thesis, Computer Science, University of Illinois at Urbana-Champaign, April 1998.
  15. Robert P. Wilson, Robert S. French, Christopher S. Wilson, Saman P. Amarasinghe, Jennifer M. Anderson, Steve W. K. Tjiang, Shih-Wei Liao, Chau-Wen Tseng, Mary W. Hall, Monica S. Lam, and John L. Hennessy. SUIF: An Infrastructure for Research on Parallelizing and Optimizing Compilers. SUIF documentation, http://suif.stanford.edu/suif/suifl/.
  16. Michael R. Wolfe. High-Performance Compilers for Parallel Computing. Addison-Wesley, Redwood City, CA, 1996.
  17. Bixia Zheng and Pen-Chung Yew. A Hierarchical Approach to Context-Sensitive Interprocedural Alias Analysis. Technical Report 99-018, Computer Science Department, University of Minnesota, April 1999.

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • B. Zheng (1)
  • J. Y. Tsai (2)
  • B. Y. Zang (3)
  • T. Chen (1)
  • B. Huang (3)
  • J. H. Li (3)
  • Y. H. Ding (3)
  • J. Liang (1)
  • Y. Zhen (1)
  • P. C. Yew (1)
  • C. Q. Zhu (1)
  1. Computer Science and Engineering Department, University of Minnesota, Minneapolis
  2. Hewlett-Packard Company, Cupertino
  3. Institute of Parallel Processing, Fudan University, Shanghai, P.R. China
