
The design of automatic parallelizers for symbolic and numeric programs

  • Williams Ludwell Harrison III
  • Zahira Ammarguellat
Part I: Parallel Lisp Languages and Programming Models
Part of the Lecture Notes in Computer Science book series (LNCS, volume 441)

Abstract

Parcel was arguably the first complete system for the automatic parallelization of Lisp programs. It was quite successful in several respects: it introduced a sharp interprocedural semantic analysis that computes the interprocedural visibility of side-effects, and allows the placement of objects in memory according to their lifetimes; it introduced several restructuring techniques tailored to the iterative and recursive control structures that arise in Lisp programs; and it made use of multiple procedure versions with a flexible microtasking mechanism for efficient parallelism at run-time. Parcel had several shortcomings, however: the intrinsic procedures of Scheme, and those added to Parcel for support of parallelism, were embedded in its interprocedural analysis, transformations, code generation and run-time system, making the system difficult to adapt to other source languages; its interprocedural analysis handled compound, mutable data only indirectly (by analogy to closures), making it less accurate and more expensive than necessary; and its representation of programs as general control-flow graphs made the implementation of complex transformations difficult.
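
As a rough sketch of the multiple-procedure-version idea only (Parcel's actual microtasking runtime is not reproduced here), a compiler might emit both a sequential and a parallel body for a procedure and guard calls with a run-time test. All names, the pool size, and the threshold below are hypothetical illustrations.

    # Hedged sketch of "multiple procedure versions": the compiler emits a
    # sequential and a parallel body and dispatches between them at run time.
    # Not Parcel's mechanism; names, pool size, and threshold are hypothetical.
    from concurrent.futures import ThreadPoolExecutor

    _POOL = ThreadPoolExecutor(max_workers=4)   # stand-in for a microtask pool
    _PARALLEL_THRESHOLD = 1024                  # arbitrary illustrative cutoff

    def sum_squares_seq(xs):
        # Sequential version: used for small inputs.
        return sum(x * x for x in xs)

    def sum_squares_par(xs):
        # Parallel version: split the work into chunks handled by the pool.
        chunk = max(1, len(xs) // 4)
        pieces = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
        return sum(_POOL.map(sum_squares_seq, pieces))

    def sum_squares(xs):
        # Dispatcher: the guard the compiler would wrap around call sites.
        if len(xs) >= _PARALLEL_THRESHOLD:
            return sum_squares_par(xs)
        return sum_squares_seq(xs)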

Miprac is a successor to Parcel in which we extend Parcel's techniques and apply them to a broad class of procedural languages. Miprac's interprocedural analysis includes a gcd test for independence among memory accesses that fall within a single block of storage; consequently, it may be used to analyze programs that create blocks of storage (structures, vectors) dynamically, and access them either by constant or computed offsets. Like Parcel's, this analysis computes the lifetimes of objects and the visibility of side-effects upon them, but also discerns properties of their structure.
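
For readers unfamiliar with it, the gcd test referred to here is the classical independence test: two accesses at offsets a*i + c1 and b*j + c2 within the same block can refer to the same location only if gcd(a, b) divides c2 - c1. The sketch below states that generic test; it is not Miprac's implementation, and the function name and interface are ours.

    from math import gcd

    def gcd_test_may_alias(a, c1, b, c2):
        # Classical gcd independence test for accesses a*i + c1 and b*j + c2
        # within one block of storage, with i and j ranging over the integers.
        # Returns False when the accesses provably never touch the same offset;
        # True means a dependence cannot be ruled out by this test alone.
        if a == 0 and b == 0:
            return c1 == c2              # both offsets are constants
        g = gcd(a, b)                    # note: gcd(a, 0) == abs(a)
        return (c2 - c1) % g == 0        # solvable iff gcd(a, b) | (c2 - c1)

    # Example: A[2*i] and A[2*j + 1] can never overlap, since gcd(2, 2) = 2
    # does not divide 1.
    assert gcd_test_may_alias(2, 0, 2, 1) is False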

Miprac's intermediate form is a compact language in which the concerns of control, memory, and values are made orthogonal. By a radical control-flow normalization, programs in the intermediate form are made highly structured so that transformations may be simply conceived and executed. The intrinsic procedures of the source language being compiled are expressed in this intermediate form, which allows the interprocedural analysis, transformations, and code generation to be written in a language-independent manner, so that retargeting the system to another source language or target machine entails only the implementation of the intermediate form itself.
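
The paper does not reproduce the intermediate form's syntax here, but the flavor of control-flow normalization can be suggested with a small example: unstructured, goto-like flow is rewritten into a single-entry, single-exit structured form (here, a tail-recursive equation over the loop-carried variables). The encoding below is our own illustration, not Miprac's intermediate language.

    # Before normalization (pseudocode with goto-like flow):
    #
    #   L0: i = 0; s = 0
    #   L1: if i >= n: goto L3
    #       s = s + a[i]
    #       i = i + 1
    #       goto L1
    #   L3: return s
    #
    # After normalization, the loop becomes one structured recursive equation
    # over the loop-carried variables (i, s); illustrative encoding only.

    def loop(i, s, a, n):
        if i >= n:                           # the loop's single exit test
            return s
        return loop(i + 1, s + a[i], a, n)   # the single back edge

    def summation(a):
        return loop(0, 0, a, len(a))

    assert summation([1, 2, 3, 4]) == 10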

Keywords

Intermediate Form, Sequential Version, Source Language, Input Program, Parallel Loop



Copyright information

© Springer-Verlag Berlin Heidelberg 1990

