MaRS, a combinator graph reduction multiprocessor

  • A. Contessa
  • E. Cousin
  • C. Coustet
  • M. Cubero-Castan
  • G. Durrieu
  • B. Lecussan
  • M. Lemaître
  • P. Ng
Submitted Presentations
Part of the Lecture Notes in Computer Science book series (LNCS, volume 365)

Abstract

The MaRS machine is an experimental, modular, distributed-control multiprocessor for parallel graph reduction. It uses a combinator machine language and is dedicated to the parallel execution of purely functional languages. A basic programming language for the machine, named MaRS Lisp, has been defined; its main features are call-by-need evaluation (the default), higher-order functions, and implicit currying of functions. It includes a simple mechanism for expressing parallelism, similar to the future construct of Multilisp [15]. A prototype of MaRS is currently being designed in 1.5-micron CMOS VLSI technology with two levels of metal, by means of a CAD system, and is scheduled to be completed by 1989. The machine uses specific processor types for Reduction, Memory and Communication. The Communication processor is the basic element of an Omega switching interconnection network. In addition to its routing function, the communication network is able to balance the activities in the machine, thanks to load information processed by each Communication processor and propagated along the network in the direction opposite to that of messages. This load information also serves to modify the execution model dynamically: execution proceeds sequentially when the parallelism is already high enough to keep each Reduction processor fully busy, and, conversely, new parallel processes may be created when the machine is not saturated. The machine architecture and its functional organization are described, as well as the execution model. Some expected performance figures, obtained by means of fine-grained simulations, are given.
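
To make the dynamic switching between parallel and sequential execution concrete, the sketch below (written in Python rather than MaRS Lisp, which is not shown in this abstract) imitates a Multilisp-style future that spawns a new parallel task only while a crude load estimate says the machine is unsaturated, and otherwise evaluates the expression sequentially in the caller. All names (future, touch, _MAX_ACTIVE, fib) are hypothetical illustrations and not part of MaRS; the sequential fallback here is eager rather than call-by-need, and the point is the throttling policy rather than actual speedup.

    import os
    import threading

    _MAX_ACTIVE = os.cpu_count() or 4   # stand-in for the propagated load information
    _active = 0
    _lock = threading.Lock()

    class Future:
        """Placeholder for a value that may still be computed in parallel."""
        def __init__(self, thunk, parallel):
            self._value = None
            self._thread = None
            if parallel:
                self._thread = threading.Thread(target=self._run, args=(thunk,))
                self._thread.start()
            else:
                # Machine "saturated": create no new process, evaluate sequentially.
                self._value = thunk()

        def _run(self, thunk):
            global _active
            try:
                self._value = thunk()
            finally:
                with _lock:
                    _active -= 1

        def touch(self):
            """Force the value; the consumer blocks until it is available."""
            if self._thread is not None:
                self._thread.join()
            return self._value

    def future(thunk):
        """Create a parallel task only while the load estimate allows it."""
        global _active
        with _lock:
            parallel = _active < _MAX_ACTIVE
            if parallel:
                _active += 1
        return Future(thunk, parallel)

    # Toy divide-and-conquer use: spawn one branch, compute the other locally.
    def fib(n):
        if n < 2:
            return n
        left = future(lambda: fib(n - 1))
        right = fib(n - 2)
        return left.touch() + right

    print(fib(20))   # 6765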

Keywords

Shared Memory, Interconnection Network, Execution Model, Functional Language, Object Code

References

  1. [1] S. K. Abdali. An abstraction algorithm for combinatory logic. Journal of Symbolic Logic, (1):31–49, March 1976.
  2. [2] D. I. Bevan, G. L. Burn, and R. J. Karia. Overview of a parallel reduction machine project. In PARLE — Parallel Architectures and Languages Europe, Eindhoven, The Netherlands, pages 394–413. Springer-Verlag, Lecture Notes in Computer Science 258, September 1987.
  3. [3] M. Castan, M.-H. Durand, G. Durrieu, B. Lécussan, and M. Lemaître. MaRS: a multiprocessor machine for parallel graph reduction. In 9th Annual Hawaii International Conference on System Sciences, January 1986.
  4. [4] M. Castan, M.-H. Durand, and M. Lemaître. A set of combinators for abstraction in linear time. Information Processing Letters, 24:183–188, February 1987.
  5. [5] M. Castan, G. Durrieu, B. Lécussan, M. Lemaître, A. Contessa, E. Cousin, and P. Ng. Toward the design of a parallel graph reduction machine: the MaRS project. In Proceedings of a Workshop on Graph Reduction, Santa Fe, USA, pages 160–180. Springer-Verlag, Lecture Notes in Computer Science 279, September 1986.
  6. [6] M. Cubero-Castan. Vers une définition méthodique d'architecture de calculateur pour l'exécution parallèle des langages fonctionnels. Thèse de Doctorat d'Etat, Université Paul Sabatier, Toulouse, September 1988.
  7. [7] H. B. Curry, R. Feys, and W. Craig. Combinatory Logic. North-Holland, 1968.
  8. [8] J. Darlington and M. Reeve. ALICE — a multiprocessor reduction machine for the parallel evaluation of applicative languages. In Proceedings of the ACM Symposium on Functional Languages and Computer Architecture, Portsmouth, pages 65–76, October 1981.
  9. [9] M.-H. Durand. Étude et évaluation du parallélisme dans les langages fonctionnels — une approche de la réduction de graphe par les combinateurs. Thèse de Docteur-Ingénieur, ENSAE, Toulouse, June 1986.
  10. [10] J. H. Fasel and R. M. Keller, editors. Graph Reduction, Proceedings of a Workshop, Santa Fe, New Mexico, USA, October 1986. Springer-Verlag, Lecture Notes in Computer Science 279.
  11. [11] R. P. Gabriel. Performance and Evaluation of Lisp Systems. The MIT Press, Cambridge, Massachusetts and London, England, 1985.
  12. [12] B. Goldberg. Buckwheat: Graph reduction on a shared memory multiprocessor. In Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pages 40–51, Snowbird, Utah, July 1988.
  13. [13] B. Goldberg and P. Hudak. Alfalfa: Distributed graph reduction on a hypercube multiprocessor. In Graph Reduction, pages 94–113. Springer-Verlag, Lecture Notes in Computer Science 279, October 1986.
  14. [14] R. H. Halstead. An assessment of Multilisp: Lessons from experience. International Journal of Parallel Programming, 15(6), December 1986.
  15. [15] R. H. Halstead, Jr. Multilisp: A language for concurrent symbolic computation. ACM Transactions on Programming Languages and Systems, 7(4):501–538, October 1985.
  16. [16] W. D. Hillis. The Connection Machine. The MIT Press, 1985.
  17. [17] P. Hudak and B. Goldberg. Distributed execution of functional programs using serial combinators. IEEE Transactions on Computers, C-34(10):881–891, October 1985.
  18. [18] P. Hudak and E. Mohr. Graphinators and the duality of SIMD and MIMD. In Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pages 224–234, Snowbird, Utah, July 1988.
  19. [19] S. L. Peyton Jones, C. D. Clack, and J. Salkild. GRIP — a high-performance architecture for parallel graph reduction. In Functional Programming Languages and Computer Architecture, Portland, Oregon, pages 98–112. Springer-Verlag, Lecture Notes in Computer Science 274, September 1987.
  20. [20] M. Lemaître, M. Castan, M.-H. Durand, G. Durrieu, and B. Lécussan. Mechanisms for efficient multiprocessor combinator reduction. In Proceedings of the 1986 ACM Conference on Lisp and Functional Programming, pages 113–121, Cambridge, Massachusetts, August 1986.
  21. [21] F. C. H. Lin and R. M. Keller. Simulated performance of a reduction-based multiprocessor computer. Computer, pages 70–82, July 1984.
  22. [22] M. Scheevel. NORMA: a graph reduction processor. In Proceedings of the 1986 ACM Conference on Lisp and Functional Programming, pages 212–219, Cambridge, Massachusetts, August 1986.
  23. [23] D. A. Turner. A new implementation technique for applicative languages. Software — Practice and Experience, 9(1):31–49, January 1979.
  24. [24] P. Watson and I. Watson. Evaluating functional programs on the Flagship machine. In Functional Programming Languages and Computer Architecture, Portland, Oregon, pages 80–97. Springer-Verlag, Lecture Notes in Computer Science 274, September 1987.

Copyright information

© Springer-Verlag Berlin Heidelberg 1989

Authors and Affiliations

  • A. Contessa (1)
  • E. Cousin (1)
  • C. Coustet (1)
  • M. Cubero-Castan (1)
  • G. Durrieu (1)
  • B. Lecussan (1)
  • M. Lemaître (1)
  • P. Ng (1)
  1. Centre d'Etudes et de Recherches de Toulouse, Toulouse Cedex, France
