PCASE: A Programming Environment for Parallel Supercomputers

Chapter in Parallel Language and Compiler Research in Japan

Abstract

Massively parallel distributed-memory systems have recently been the subject of extensive research and development, and are regarded as the most promising candidates for realizing a teraflops machine. To exploit this enormous computing power, however, users must parallelize their programs efficiently, taking into account not only the parallelism within a program but also the architecture of the target machine. The difficulty of using distributed memory efficiently makes parallel programming particularly complicated.
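
The chapter targets Fortran-based programming environments, but as a rough, hypothetical illustration of the bookkeeping the abstract alludes to, the sketch below block-distributes a one-dimensional array across processes by hand in C with MPI and exchanges halo elements before a simple three-point stencil update. None of the names, sizes, or the stencil come from the chapter; they are assumptions chosen only to make the example self-contained.

    /* Hypothetical sketch (not from the chapter): block-distribute a 1-D
       array across MPI processes by hand and exchange halo elements
       before a three-point stencil update. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1024                      /* assumed global problem size */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Owner-computes block distribution: each process owns a
           contiguous slice and keeps one halo element on each side. */
        int base = N / nprocs, rem = N % nprocs;
        int local_n = base + (rank < rem ? 1 : 0);
        double *a = calloc(local_n + 2, sizeof(double));
        double *b = calloc(local_n + 2, sizeof(double));
        for (int i = 1; i <= local_n; i++) a[i] = 1.0;

        int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

        /* Halo exchange: communication the programmer must otherwise
           derive from the chosen data distribution by hand. */
        MPI_Sendrecv(&a[1],           1, MPI_DOUBLE, left,  0,
                     &a[local_n + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&a[local_n],     1, MPI_DOUBLE, right, 1,
                     &a[0],           1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Local three-point stencil on the owned elements only. */
        for (int i = 1; i <= local_n; i++)
            b[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0;

        if (rank == 0) printf("stencil step done on %d processes\n", nprocs);
        free(a); free(b);
        MPI_Finalize();
        return 0;
    }

A programming environment such as PCASE presumably aims to derive this kind of distribution and communication from a higher-level specification; the manual version above is exactly the complexity the abstract identifies.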

Copyright information

© 1995 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Seo, Y., Kamachi, T., Kusano, K., Watanabe, Y., Shiroto, Y. (1995). PCASE: A Programming Environment for Parallel Supercomputers. In: Bic, L.F., Nicolau, A., Sato, M. (eds) Parallel Language and Compiler Research in Japan. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-2269-0_16

  • DOI: https://doi.org/10.1007/978-1-4615-2269-0_16

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-5957-9

  • Online ISBN: 978-1-4615-2269-0
