Synchronization issues in data-parallel languages

  • Conference paper
  • In: Languages and Compilers for Parallel Computing (LCPC 1993)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 768)

Abstract

Data-parallel programming has established itself as the preferred way of programming a large class of scientific applications. In this paper, we address the issue of reducing synchronization costs when implementing a data-parallel language on an asynchronous architecture. The synchronization issue is addressed from two perspectives: first, we describe language constructs that allow the programmer to specify that different parts of a data-parallel program be synchronized at different levels of granularity; second, we show how existing tools and algorithms for data dependence analysis can be used by the compiler both to reduce the number of barriers and to replace global barriers with cheaper clustered synchronizations. Although the techniques presented in the paper are general purpose, we describe them in the context of UC, a data-parallel language developed at UCLA. Reducing the number of barriers improves program execution time by reducing both synchronization time and processor stall times.

This research was partially supported under NSF PYI Award No. ASC-9157610, ONR Grant No. N00014-91-J-1605, and Rockwell International Award No. L911014.
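
To make the compiler-side idea concrete, here is a minimal sketch (not UC code, and not the authors' algorithm) of the kind of barrier elimination the abstract describes, written as an SPMD C program with POSIX threads. All names (worker, NTHREADS, bar) are illustrative assumptions. Dependence analysis would observe that in phase 2 each thread reads only the array block it wrote itself in phase 1, so the barrier between the two phases can be elided; the barrier before the final reduction must remain, because one thread then reads elements written by all of the others.

/* Hypothetical sketch: SPMD execution by NTHREADS pthreads over
 * block-distributed arrays.  Compile with: cc sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define N        1024
#define NTHREADS 4

static double a[N], b[N];
static pthread_barrier_t bar;

static void *worker(void *arg)
{
    int tid   = (int)(long)arg;
    int chunk = N / NTHREADS;
    int lo    = tid * chunk, hi = lo + chunk;

    /* Phase 1: each thread writes only its own block of a[]. */
    for (int i = lo; i < hi; i++)
        a[i] = (double)i;

    /* No barrier needed here: phase 2 reads only the block this
     * thread just wrote, so there is no cross-thread dependence. */

    /* Phase 2: each thread reads its own block of a[] and writes b[]. */
    for (int i = lo; i < hi; i++)
        b[i] = 2.0 * a[i];

    /* Barrier required: thread 0 is about to read all of b[]. */
    pthread_barrier_wait(&bar);

    if (tid == 0) {
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            sum += b[i];
        printf("sum = %f\n", sum);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    pthread_barrier_init(&bar, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&bar);
    return 0;
}

In the paper's setting, the analogous decision is made by the compiler from the data dependence graph rather than by hand, and a clustered synchronization among only the threads that actually exchange data could replace the remaining global barrier.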

Editor information

Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua

Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Prakash, S., Dhagat, M., Bagrodia, R. (1994). Synchronization issues in data-parallel languages. In: Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1993. Lecture Notes in Computer Science, vol 768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57659-2_5

  • DOI: https://doi.org/10.1007/3-540-57659-2_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57659-4

  • Online ISBN: 978-3-540-48308-3

  • eBook Packages: Springer Book Archive
