Automatic Parallelizing Compiler for Distributed Memory Parallel Computers: New Algorithms to Improve the Performance of the Inspector/Executor

  • Chapter
Parallel Language and Compiler Research in Japan

Abstract

The SPMD (Single-Program Multiple-Data Stream) model has been widely adopted as the basis of parallelizing compilers and parallel programming languages for scientific programs [1]. This model works well not only for shared-memory machines but also for distributed-memory multicomputers, provided that:

  • data are allocated appropriately by the programmer and/or the compiler itself,

  • the compiler distributes parallel computations to processors so that interprocessor communication costs are minimized, and

  • code for communication is inserted, only when necessary, at the points best suited to minimizing communication latency (a minimal sketch of the inspector/executor pattern this condition leads to follows the list).
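When subscripts are known only at run time (e.g. x(i) = y(idx(i)) with y distributed), the third condition is commonly met with the inspector/executor technique named in the chapter title. The following single-process C sketch illustrates the general idea only, not the authors' new algorithms: an inspector pass scans the index array under the owner-computes rule to find off-processor references (from which a real compiler would build a communication schedule), and an executor pass then runs the loop body on local and prefetched data. The two-processor block distribution and all names (P, N, owner, inspector, executor) are illustrative assumptions.

/*
 * Minimal single-process sketch of the inspector/executor pattern for an
 * irregular loop  x[i] = y[idx[i]]  with y block-distributed over P
 * "processors".  Illustrative only; not the chapter's implementation.
 */
#include <stdio.h>

#define P 2            /* number of (simulated) processors      */
#define N 8            /* global size of the arrays x, y, idx   */
#define BLK (N / P)    /* block size of the distribution        */

/* Owner of global element g under a block distribution. */
static int owner(int g) { return g / BLK; }

/* Inspector: for processor p, scan the iterations it owns and record
 * which elements of y are off-processor.  Returns the number of
 * remote fetches recorded in fetch[]. */
static int inspector(int p, const int idx[N], int fetch[BLK])
{
    int nfetch = 0;
    for (int i = p * BLK; i < (p + 1) * BLK; i++)   /* owner-computes rule */
        if (owner(idx[i]) != p)
            fetch[nfetch++] = idx[i];               /* needs communication */
    return nfetch;
}

/* Executor: run the loop body; here remote values are simply read from
 * the global array, standing in for data prefetched by message passing. */
static void executor(int p, const int idx[N], const double y[N], double x[N])
{
    for (int i = p * BLK; i < (p + 1) * BLK; i++)
        x[i] = y[idx[i]];   /* a real compiler would read a receive buffer */
}

int main(void)
{
    int idx[N] = {7, 0, 3, 5, 2, 6, 1, 4};   /* irregular access pattern */
    double y[N], x[N];
    for (int g = 0; g < N; g++) y[g] = (double)g * 10.0;

    for (int p = 0; p < P; p++) {
        int fetch[BLK], nfetch = inspector(p, idx, fetch);
        printf("processor %d must fetch %d remote element(s)\n", p, nfetch);
        executor(p, idx, y, x);
    }
    for (int i = 0; i < N; i++) printf("x[%d] = %.1f\n", i, x[i]);
    return 0;
}

In practice the inspector's schedule is typically built once and reused over repeated executions of the loop, so its run-time cost is amortized; reducing that cost is the kind of inspector/executor overhead the chapter's new algorithms target.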


References

  1. Joel Saltz and Piyush Mehrotra (eds.), Languages, Compilers and Run-Time Environments for Distributed Memory Machines, Amsterdam: Elsevier Science Publishers B.V., 1992.

  2. Seema Hiranandani, Ken Kennedy, and Chau-Wen Tseng, "Compiler Optimizations for Fortran D on MIMD Distributed-Memory Machines", in Proc. Supercomputing '91, pp. 86–100, Albuquerque, NM, November 1991.

  3. Charles Koelbel and Piyush Mehrotra, "Compiling Global Name-space Parallel Loops for Distributed Execution", IEEE Trans. Parallel and Distributed Systems, 2(4):440–451, October 1991.

  4. Seema Hiranandani, Ken Kennedy, and Chau-Wen Tseng, "Compiler Support for Machine-independent Parallel Programming in Fortran D", in Joel Saltz and Piyush Mehrotra (eds.), Languages, Compilers and Run-Time Environments for Distributed Memory Machines, Amsterdam: Elsevier Science Publishers B.V., pp. 139–176, 1992.

  5. Hiroaki Ishihata et al., "Third Generation Message Passing Computer AP1000", in Proc. International Symposium on Supercomputer, pp. 46–55, Fukuoka, Japan, November 1991.



Copyright information

© 1995 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Kubota, A., Miyoshi, I., Ohno, K., Mori, Si., Nakashima, H., Tomita, S. (1995). Automatic Parallelizing Compiler for Distributed Memory Parallel Computers: New Algorithms to Improve the Performance of the Inspector/Executor. In: Bic, L.F., Nicolau, A., Sato, M. (eds) Parallel Language and Compiler Research in Japan. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-2269-0_13

  • DOI: https://doi.org/10.1007/978-1-4615-2269-0_13

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-5957-9

  • Online ISBN: 978-1-4615-2269-0

