Abstract
High Performance Fortran (HPF) is a data-parallel language that was designed to provide the user with a high-level interface for programming scientific applications, while delegating to the compiler the task of generating an explicitly parallel message-passing program. In this paper, we give an outline of the developments that led to HPF, briefly explain its major features, and illustrate its use for irregular applications. The final part of the paper points out some classes of problems that are difficult to handle efficiently within the HPF paradigm.
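To make the paradigm concrete, the following sketch shows the flavor of HPF as described above: the programmer states how data are mapped with directives, and writes computation in ordinary data-parallel Fortran, leaving communication generation to the compiler. The array names and sizes here are illustrative, not taken from the paper.

```fortran
!HPF$ PROCESSORS P(4)
      REAL A(1000), B(1000)
!HPF$ DISTRIBUTE A(BLOCK) ONTO P       ! block-partition A across the 4 processors
!HPF$ ALIGN B(:) WITH A(:)             ! co-locate B with A to avoid communication

      FORALL (I = 2:999) A(I) = 0.5 * (B(I-1) + B(I+1))
```

The FORALL executes in parallel on the distributed data; the compiler, not the user, inserts the message passing needed for the B(I-1) and B(I+1) references at block boundaries.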
The work described in this paper was partially supported by the ESPRIT IV Long Term Research Project 21033 “HPF+” of the European Commission, by the Austrian Ministry for Science and Transportation under contract GZ 613.580/2-IV/9/95, and by the National Aeronautics and Space Administration under NASA Contract No. NAS1-19480, while the authors were in residence at ICASE, NASA Langley Research Center, Hampton, VA 23681.
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
Cite this paper
Mehrotra, P., Van Rosendale, J., Zima, H. (1998). High Performance Fortran: Status and prospects. In: Kågström, B., Dongarra, J., Elmroth, E., Waśniewski, J. (eds) Applied Parallel Computing. Large Scale Scientific and Industrial Problems. PARA 1998. Lecture Notes in Computer Science, vol 1541. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0095356
DOI: https://doi.org/10.1007/BFb0095356
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-65414-8
Online ISBN: 978-3-540-49261-0