
High Performance Fortran: Status and prospects

  • Conference paper
Applied Parallel Computing Large Scale Scientific and Industrial Problems (PARA 1998)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1541)


Abstract

High Performance Fortran (HPF) is a data-parallel language that was designed to provide the user with a high-level interface for programming scientific applications, while delegating to the compiler the task of generating an explicitly parallel message-passing program. In this paper, we outline the developments that led to HPF, briefly explain its major features, and illustrate its use for irregular applications. The final part of the paper points out some classes of problems that are difficult to deal with efficiently within the HPF paradigm.

The work described in this paper was partially supported by the ESPRIT IV Long Term Research Project 21033 “HPF+” of the European Commission, by the Austrian Ministry for Science and Transportation under contract GZ 613.580/2-IV/9/95, and by the National Aeronautics and Space Administration under NASA Contract No. NAS1-19480, while the authors were in residence at ICASE, NASA Langley Research Center, Hampton, VA 23681.
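To make the directive-based, data-parallel style concrete, the fragment below is a minimal sketch rather than an example taken from the paper; the array names, problem size, and processor count are assumptions made for illustration. The programmer writes ordinary single-threaded Fortran plus HPF mapping directives, and the compiler is responsible for deriving the node program and the message passing it requires.

    ! Minimal illustrative HPF sketch; names, sizes, and the processor count
    ! are assumptions for this example, not taken from the paper.
    PROGRAM hpf_sketch
      INTEGER, PARAMETER :: n = 1000
      REAL :: a(n), b(n)
      INTEGER :: i
      ! Mapping directives: block-distribute a over four abstract processors
      ! and keep b co-located (aligned) with a.
    !HPF$ PROCESSORS procs(4)
    !HPF$ DISTRIBUTE a(BLOCK) ONTO procs
    !HPF$ ALIGN b(:) WITH a(:)

      b = 1.0
      ! Data-parallel update written against the global index space; the HPF
      ! compiler derives the per-processor loop bounds and any communication.
      FORALL (i = 2:n-1)
        a(i) = 0.5 * (b(i-1) + b(i+1))
      END FORALL
    END PROGRAM hpf_sketch

Under a BLOCK distribution, a stencil like the one above needs only nearest-neighbour exchanges at block boundaries, which is precisely the kind of communication an HPF compiler is expected to synthesize from the distribution and alignment directives.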



Author information

Authors: P. Mehrotra, J. Van Rosendale, H. Zima

Editor information

Bo Kågström, Jack Dongarra, Erik Elmroth, Jerzy Waśniewski


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mehrotra, P., Van Rosendale, J., Zima, H. (1998). High Performance Fortran: Status and prospects. In: Kågström, B., Dongarra, J., Elmroth, E., Waśniewski, J. (eds) Applied Parallel Computing Large Scale Scientific and Industrial Problems. PARA 1998. Lecture Notes in Computer Science, vol 1541. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0095356


  • DOI: https://doi.org/10.1007/BFb0095356

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65414-8

  • Online ISBN: 978-3-540-49261-0

  • eBook Packages: Springer Book Archive
