
Optimizing Data-Parallel Stencil Computations in a Portable Framework

Chapter in: Languages, Compilers and Run-Time Systems for Scalable Computers

Abstract

We have developed a communication optimizer that concentrates on stencil communication patterns. The optimizer was built in the context of the UNH C* compiler, which targets distributed-memory MIMD computers. Our work has two distinguishing features:

  • The compiler/optimizer is designed to be highly portable. We achieve this goal by providing efficient support for the optimizations in the run-time library.

  • As well as aggregating messages that share the same source and destination, we employ a specialized store-and-forward protocol that reduces the total number of messages initiated (a sketch of this idea appears below).
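
The sketch below is not taken from the chapter; it only illustrates, under stated assumptions, the kind of savings the second point describes. It is plain C using MPI (an assumption: the chapter's own work targets the UNH C* run-time library, not MPI; the grid sizes, tags, and overall structure here are hypothetical, while the MPI calls themselves are standard). Each process packs an entire boundary face into one buffer and sends it as a single message per neighbor (aggregation), and the corner values needed by diagonal neighbors are forwarded through the north/south messages instead of being sent directly, so a nine-point halo exchange needs four messages per process rather than eight.

/*
 * Minimal sketch, assuming MPI and a periodic 2D process grid.
 * Not the authors' implementation: it only illustrates (a) aggregating a
 * whole boundary face into one message per neighbor and (b) forwarding
 * corner values through the north/south messages so that no separate
 * diagonal messages are needed (a store-and-forward-style reduction).
 */
#include <mpi.h>
#include <stdlib.h>

#define NX 64   /* local interior size in x (hypothetical) */
#define NY 64   /* local interior size in y (hypothetical) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs, dims[2] = {0, 0}, periods[2] = {1, 1};
    MPI_Comm grid;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 2, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &grid);

    int west, east, north, south;
    MPI_Cart_shift(grid, 0, 1, &west, &east);    /* x-direction neighbors */
    MPI_Cart_shift(grid, 1, 1, &north, &south);  /* y-direction neighbors */

    /* Local array with a one-element ghost border: (NX+2) x (NY+2). */
    double *u = calloc((NX + 2) * (NY + 2), sizeof(double));
    #define U(i, j) u[(i) * (NY + 2) + (j)]

    /* Phase 1: exchange east/west ghost columns (interior rows only).
     * Each column is packed into one contiguous buffer, giving one
     * message per neighbor instead of one message per element. */
    double sendw[NY], sende[NY], recvw[NY], recve[NY];
    for (int j = 0; j < NY; j++) {
        sendw[j] = U(1, j + 1);   /* first interior column */
        sende[j] = U(NX, j + 1);  /* last interior column  */
    }
    MPI_Sendrecv(sendw, NY, MPI_DOUBLE, west, 0,
                 recve, NY, MPI_DOUBLE, east, 0, grid, MPI_STATUS_IGNORE);
    MPI_Sendrecv(sende, NY, MPI_DOUBLE, east, 1,
                 recvw, NY, MPI_DOUBLE, west, 1, grid, MPI_STATUS_IGNORE);
    for (int j = 0; j < NY; j++) {
        U(0, j + 1)      = recvw[j];
        U(NX + 1, j + 1) = recve[j];
    }

    /* Phase 2: exchange north/south ghost rows, *including* the ghost
     * columns just received.  The corner values therefore reach the
     * diagonal neighbors in two hops (east/west, then north/south),
     * so no separate diagonal messages are initiated. */
    double sendn[NX + 2], sends[NX + 2], recvn[NX + 2], recvs[NX + 2];
    for (int i = 0; i < NX + 2; i++) {
        sendn[i] = U(i, 1);   /* first interior row plus ghost ends */
        sends[i] = U(i, NY);  /* last interior row plus ghost ends  */
    }
    MPI_Sendrecv(sendn, NX + 2, MPI_DOUBLE, north, 2,
                 recvs, NX + 2, MPI_DOUBLE, south, 2, grid, MPI_STATUS_IGNORE);
    MPI_Sendrecv(sends, NX + 2, MPI_DOUBLE, south, 3,
                 recvn, NX + 2, MPI_DOUBLE, north, 3, grid, MPI_STATUS_IGNORE);
    for (int i = 0; i < NX + 2; i++) {
        U(i, 0)      = recvn[i];
        U(i, NY + 1) = recvs[i];
    }

    free(u);
    MPI_Finalize();
    return 0;
}

The trade-off in this sketch is one extra hop of latency for the corner values in return for fewer message start-ups, which is the quantity the chapter's protocol aims to reduce.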




Copyright information

© 1996 Springer Science+Business Media New York

About this chapter

Cite this chapter

Chappelow, S.W., Hatcher, P.J., Mason, J.R. (1996). Optimizing Data-Parallel Stencil Computations in a Portable Framework. In: Szymanski, B.K., Sinharoy, B. (eds) Languages, Compilers and Run-Time Systems for Scalable Computers. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-2315-4_4

  • DOI: https://doi.org/10.1007/978-1-4615-2315-4_4

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-5979-1

  • Online ISBN: 978-1-4615-2315-4

