Abstract
Although modern computers, from pipelined and superscalar processors to massively parallel machines, exploit parallelism, most programming languages still assume a sequential computation scheme. Explicitly describing procedural parallel execution and its mapping onto a particular architecture requires a high degree of skill. Implicit parallel programming languages, in which compilers and runtime systems exploit parallelism automatically, seem desirable in a world where parallel computers are widely available. The following are important features of promising parallel programming languages:
- The more parallelism hardware tends to exploit, the more promising implicit parallel programming languages become. It does not seem appropriate merely to extend a sequential computation model with a few explicit parallel constructs.
- The ability to write comprehensible and reusable programs is very important. Programming languages with a high degree of abstraction can hide architectural peculiarities and promote programming productivity.
- It should be relatively easy to reason about programs, both formally and informally, by both the programmer and the implementation.
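The contrast between implicit and explicit parallelism can be sketched in Haskell, a functional language in the same family as the chapter's Valid (this example is illustrative, not taken from the chapter): the definition below contains no parallel constructs at all, yet its two recursive calls are data-independent, so a compiler and runtime for an implicitly parallel language are free to evaluate them concurrently without the programmer mapping work onto processors.

```haskell
-- A purely declarative definition with no explicit parallel constructs.
-- The two recursive calls share no data dependence, so an implicitly
-- parallel implementation may evaluate them concurrently; a sequential
-- implementation evaluates them one after the other. The program text
-- is the same either way.
fib :: Int -> Integer
fib n
  | n < 2     = toInteger n
  | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (fib 20)  -- prints 6765
```

The point of the sketch is that the parallelism is a property of the dataflow in the program, not of any annotation: the same source can be compiled for a uniprocessor or a multiprocessor, which is what makes reasoning about such programs comparatively easy.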
© 1995 Springer Science+Business Media Dordrecht
Kusakabe, S., Takahashi, E., Taniguchi, Ri., Amamiya, M. (1995). Implementation of Parallel Functional Language on Conventional Multiprocessors. In: Bic, L.F., Nicolau, A., Sato, M. (eds) Parallel Language and Compiler Research in Japan. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-2269-0_6
Print ISBN: 978-1-4613-5957-9
Online ISBN: 978-1-4615-2269-0