Interprocedural array redistribution data-flow analysis

  • Compiling HPF
  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1239)

Abstract

In High Performance Fortran (HPF), array redistribution can be described explicitly, using directives (REDISTRIBUTE or REALIGN) that specify where new distributions become active, or implicitly, by calling functions that require data distributions different from those of the calling function. To compile an HPF program into an efficient form, however, both the redistribution operations and the possible distributions for the individual blocks of code must be known at compile time. In this paper, we present an interprocedural data-flow framework which takes into account both explicit and implicit redistribution to automatically: (1) determine which distributions hold over specific sections of a program; (2) optimize both the inter- and intraprocedural transitions between dynamic distributions while still maintaining the original semantics of the HPF program; (3) determine when the distribution pattern specified by an HPF program causes a given array to be assigned multiple distributions, due either to different redistribution operations on multiple paths within a function or to parameter aliasing (resulting in a non-conforming HPF program); and (4) convert (well-behaved) dynamic HPF programs into equivalent static forms through a process we refer to as static distribution assignment (SDA), which can be used to extend the capabilities of existing subset HPF compilers that support static data distributions. As the approach presented in this paper has already been implemented as part of the PARADIGM (PARAllelizing compiler for DIstributed-memory General-purpose Multicomputers) project at the University of Illinois, examples are also presented to demonstrate several applications of this framework.
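At the heart of the framework is the question of which distributions can reach a given point of the program once explicit REDISTRIBUTE/REALIGN directives and call-site effects are taken into account. The sketch below is a minimal, purely illustrative Python rendering of that idea, not the PARADIGM implementation: the node names, distribution labels, and the reaching_distributions helper are all hypothetical. It propagates sets of reaching distributions over a small control-flow graph to a fixed point and flags points reached by more than one distribution, the situation described in item (3) of the abstract.

    from collections import defaultdict

    def reaching_distributions(nodes, edges, redistributes, entry_dist):
        """Compute, for every node, the set of distributions that may reach it.

        nodes:         iterable of node identifiers (in program order)
        edges:         list of (src, dst) control-flow edges
        redistributes: dict mapping a node to the distribution it installs
                       (i.e. the node contains a REDISTRIBUTE for the array)
        entry_dist:    distribution holding for the array on entry
        """
        preds = defaultdict(list)
        for src, dst in edges:
            preds[dst].append(src)

        out = {n: set() for n in nodes}
        changed = True
        while changed:                      # iterate to a fixed point
            changed = False
            for n in nodes:
                if n in redistributes:
                    # A redistribution kills whatever reached this node and
                    # installs exactly one new distribution.
                    new = {redistributes[n]}
                else:
                    # Otherwise the reaching set is the union over predecessors;
                    # the entry node inherits the distribution holding on entry.
                    incoming = [out[p] for p in preds[n]] or [{entry_dist}]
                    new = set().union(*incoming)
                if new != out[n]:
                    out[n] = new
                    changed = True
        return out

    if __name__ == "__main__":
        # Diamond-shaped CFG: the two branches install different distributions,
        # so two distributions reach the join node.
        nodes = ["entry", "then", "else", "join"]
        edges = [("entry", "then"), ("entry", "else"),
                 ("then", "join"), ("else", "join")]
        redistributes = {"then": "BLOCK", "else": "CYCLIC"}
        result = reaching_distributions(nodes, edges, redistributes, "BLOCK")
        for n in nodes:
            flag = "  <-- multiple distributions reach here" if len(result[n]) > 1 else ""
            print(f"{n}: {sorted(result[n])}{flag}")

On the diamond-shaped graph in this example, both BLOCK and CYCLIC reach the join node; in the terms of the abstract, such an array has been assigned multiple distributions on different paths and the program is non-conforming, whereas a well-behaved program, in which a single distribution reaches each point, can be converted to a static form via SDA.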

This research, performed at the University of Illinois, was supported in part by the National Aeronautics and Space Administration under contract NASA NAG 1-613, in part by an Office of Naval Research Graduate Fellowship, and in part by the Advanced Research Projects Agency under contract DAA-H04-94-G-0273, administered by the Army Research Office.




Editor information

David Sehr, Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Palermo, D.J., Hodges, E.W., Banerjee, P. (1997). Interprocedural array redistribution data-flow analysis. In: Sehr, D., Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1996. Lecture Notes in Computer Science, vol 1239. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0017268


  • DOI: https://doi.org/10.1007/BFb0017268

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63091-3

  • Online ISBN: 978-3-540-69128-0
