Universal mechanisms for concurrency

  • William J. Dally
  • D. Scott Wills
Invited Lectures
Part of the Lecture Notes in Computer Science book series (LNCS, volume 365)


We propose a machine model consisting of a set of primitive mechanisms for communication, synchronization, and naming. These mechanisms were selected as a compromise between what can be implemented easily in hardware and what is required to support parallel models of computation. We sketch implementations of three models of parallel computation (actors, dataflow, and shared memory) in terms of this machine model, present the costs of the mechanisms on a particular parallel machine, and discuss the issues involved in implementing the model. Identifying a primitive set of mechanisms separates issues of programming models from issues of machine organization: problems are partitioned into implementing the primitive mechanisms on one hand, and implementing programming models and systems on top of those mechanisms on the other.
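The abstract's central idea, a small set of primitives (communication, synchronization, naming) on which several programming models are built, can be illustrated with a software sketch. The paper's actual primitives are not reproduced in this excerpt, so everything below (the names `send`, `run`, `join_add`, and the queue/heap simulation) is an invented, hypothetical analogue: one message-driven dispatch mechanism used to express both an actor-style handler and a dataflow-style two-input node.

```python
# Hypothetical sketch, NOT the paper's actual primitives: a single
# SEND/dispatch mechanism, simulated in software, supporting both an
# actor-style counter and a dataflow-style join on the same substrate.
from collections import deque

queue = deque()   # simulated message queue of one processing node
heap = {}         # simulated named storage (the naming mechanism)

def send(handler, *args):
    """Communication primitive: enqueue a message naming its handler."""
    queue.append((handler, args))

def run():
    """Dispatch loop: each arriving message creates and runs a short task."""
    while queue:
        handler, args = queue.popleft()
        handler(*args)

def counter_inc(name):
    """Actor-style use: state lives in the heap, behavior in handlers."""
    heap[name] = heap.get(name, 0) + 1

def join_add(name, value, dest):
    """Dataflow-style use: fire only when both operands have arrived
    (a software analogue of synchronization via presence bits)."""
    slot = heap.setdefault(name, [])
    slot.append(value)
    if len(slot) == 2:              # both tokens present: fire the node
        send(dest, name, sum(slot))

def report(name, total):
    heap[name + ".result"] = total

send(counter_inc, "c")
send(counter_inc, "c")
send(join_add, "x", 3, report)
send(join_add, "x", 4, report)
run()
print(heap["c"], heap["x.result"])   # prints: 2 7
```

The point of the sketch is the separation the abstract describes: the dispatch loop and queue play the role of the fixed machine mechanisms, while the actor and dataflow behaviors are layered on top without changing them.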


Keywords (machine-generated): Destination Node, Task Switch, Direct Memory Access, Task Creation, Storage Allocation





Copyright information

© Springer-Verlag Berlin Heidelberg 1989

Authors and Affiliations

  • William J. Dally (1)
  • D. Scott Wills (1)

  1. Artificial Intelligence Laboratory and Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA
