
The Structure of a Compiler for Explicit and Implicit Parallelism

  • Seon Wook Kim
  • Rudolf Eigenmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2624)

Abstract

We describe the structure of a compilation system that generates code for processor architectures supporting both explicit and implicit parallel threads. Such architectures are small extensions of recently proposed speculative processors. They can extract parallelism speculatively from a sequential instruction stream (implicit threading), and they can execute explicit parallel code sections as a multiprocessor (explicit threading). Although the feasibility of such mixed execution modes is often tacitly assumed in discussions of speculative execution schemes, little experience exists with their performance and compilation issues. In prior work we proposed the Multiplex architecture [1], which supports such a scheme. The present paper describes the compilation system of Multiplex.

Our compilation system integrates the Polaris preprocessor with the GNU C code-generating compiler. We describe the major components involved in generating explicit and implicit threads, and we discuss in more detail two components that represent significant open issues. The first is the integration of the parallelizing preprocessor with the code generator. The second is the decision of when to generate explicit and when to generate implicit threads. Our compilation process is fully automated.

Keywords

Speculative Storage · Parallel Loop · Execution Mode · Innermost Loop · Automatic Parallelization
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  [2] William Blume, Ramon Doallo, Rudolf Eigenmann, John Grout, Jay Hoeflinger, Thomas Lawrence, Jaejin Lee, David Padua, Yunheung Paek, Bill Pottenger, Lawrence Rauchwerger, and Peng Tu. Parallel programming with Polaris. IEEE Computer, pages 78–82, December 1996.
  [3] M. W. Hall, J. M. Anderson, S. P. Amarasinghe, B. R. Murphy, S.-W. Liao, E. Bugnion, and M. S. Lam. Maximizing multiprocessor performance with the SUIF compiler. IEEE Computer, pages 84–89, December 1996.
  [4] Lawrence Rauchwerger and David Padua. The LRPD test: Speculative run-time parallelization of loops with privatization and reduction parallelization. In The ACM SIGPLAN '95 Conference on Programming Language Design and Implementation (PLDI '95), pages 218–232, June 1995.
  [5] Lawrence Rauchwerger and David Padua. The privatizing DOALL test: A run-time technique for DOALL loop identification and array privatization. In International Conference on Supercomputing (ICS '94), pages 33–43, 1994.
  [6] Gurindar S. Sohi, Scott E. Breach, and T. N. Vijaykumar. Multiscalar processors. In The 22nd International Symposium on Computer Architecture (ISCA-22), pages 414–425, June 1995.
  [7] Kunle Olukotun, Lance Hammond, and Mark Willey. Improving the performance of speculatively parallel applications on the Hydra CMP. In International Conference on Supercomputing (ICS '99), pages 21–30, 1999.
  [8] J. Gregory Steffan and Todd C. Mowry. The potential for thread-level data speculation in tightly-coupled multiprocessors. Technical Report CSRI-TR-350, University of Toronto, Department of Electrical and Computer Engineering, February 1997.
  [9] J.-Y. Tsai, Z. Jiang, Z. Li, D. J. Lilja, X. Wang, P.-C. Yew, B. Zheng, and S. Schwinn. Superthreading: Integrating compilation technology and processor architecture for cost-effective concurrent multithreading. Journal of Information Science and Engineering, March 1998.
  [10] Ye Zhang, Lawrence Rauchwerger, and Josep Torrellas. Hardware for speculative run-time parallelization in distributed shared-memory multiprocessors. In The Fourth International Symposium on High-Performance Computer Architecture (HPCA-4), pages 162–173, February 1998.
  [11] T. N. Vijaykumar and Gurindar S. Sohi. Task selection for a multiscalar processor. In The 31st International Symposium on Microarchitecture (MICRO-31), pages 81–92, December 1998.
  [12] Seon Wook Kim. Compiler Techniques for Speculative Execution. PhD thesis, Electrical and Computer Engineering, Purdue University, April 2001.
  [13] S. I. Feldman, David M. Gay, Mark W. Maimone, and N. L. Schryer. A Fortran-to-C converter. Technical Report Computing Science No. 149, AT&T Bell Laboratories, Murray Hill, NJ, 1995.
  [14] Richard M. Stallman. Using and Porting GNU GCC version 2.7.2, November 1995.
  [15] J.-Y. Tsai, Z. Jiang, and P.-C. Yew. Compiler techniques for the superthreaded architectures. International Journal of Parallel Programming, 27(1):1–19, February 1999.
  [16] Seon Wook Kim, Michael Voss, and Rudolf Eigenmann. Performance analysis of parallel compiler backends on shared-memory multiprocessors. In Compilers for Parallel Computers (CPC 2000), pages 305–320, January 2000.
  [17] J. Oplinger, D. Heine, and M. S. Lam. In search of speculative thread-level parallelism. In The 1999 International Conference on Parallel Architectures and Compilation Techniques (PACT '99), Newport Beach, CA, pages 303–313, October 1999.
  [18] Seon Wook Kim, Chong-Liang Ooi, Rudolf Eigenmann, Babak Falsafi, and T. N. Vijaykumar. Reference idempotency analysis: A framework for optimizing speculative execution. In ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '01), pages 2–11, June 2001.

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Seon Wook Kim¹
  • Rudolf Eigenmann¹

  1. School of Electrical and Computer Engineering, Purdue University, West Lafayette
