Analyses for the Translation of OpenMP Codes into SPMD Style with Array Privatization

  • Zhenying Liu
  • Barbara Chapman
  • Yi Wen
  • Lei Huang
  • Tien-Hsiung Weng
  • Oscar Hernandez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2716)


A so-called SPMD style OpenMP program can achieve scalability on ccNUMA systems by means of array privatization, and earlier research has shown good performance under this approach. Since SPMD OpenMP code is hard to write by hand, our previous work presented a strategy for the automatic translation of many OpenMP constructs into SPMD style. In this paper, we first explain how to detect interprocedurally whether an OpenMP program schedules its parallel loops consistently. If the parallel loops are consistently scheduled, we may carry out array privatization according to OpenMP semantics. We then give two examples of code patterns that can be handled even though their loop schedules are not consistent, and whose translation differs from the straightforward approach that can otherwise be applied.


Keywords: Loop Nest, Call Graph, Parallel Loop, Loop Schedule, Array Section





Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Zhenying Liu (1)
  • Barbara Chapman (1)
  • Yi Wen (1)
  • Lei Huang (1)
  • Tien-Hsiung Weng (1)
  • Oscar Hernandez (1)
  1. Department of Computer Science, University of Houston, USA
