A combining mechanism for parallel computers

  • Leslie G. Valiant
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 678)


In a multiprocessor computer, communication among the components may be based either on a simple router, which delivers messages point-to-point like a mail service, or on a more elaborate combining network that, in return for a greater investment in hardware, can combine messages to the same address prior to delivery. This paper describes a mechanism for recirculating messages in a simple router so that the added functionality of a combining network, for arbitrary access patterns, can be achieved with reasonable efficiency. The method brings together messages with the same destination address in more than one stage, at a set of components that is determined by a hash function and decreases in number at each stage.
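The stage-by-stage combining idea in the abstract can be illustrated with a minimal sketch. The choice of hash function (SHA-256), the payload type, and the reduction operator (integer addition) below are illustrative assumptions, not the paper's specification; the point is only that messages for the same address meet at a hash-determined component at each stage, and the number of components shrinks stage by stage.

```python
import hashlib
from collections import defaultdict

def combine(messages):
    """Merge messages addressed to the same destination (here: by summing payloads)."""
    merged = defaultdict(int)
    for addr, value in messages:
        merged[addr] += value
    return list(merged.items())

def multistage_combine(messages, stage_sizes):
    """Recirculate messages through stages with shrinking component sets.

    At stage s with p components, a message for address `addr` is routed to
    component hash(s, addr) mod p; each component combines the messages it
    receives before they are recirculated to the next stage.
    """
    for stage, p in enumerate(stage_sizes):
        buckets = defaultdict(list)
        for addr, value in messages:
            # Hash on (stage, address): all messages for one address land
            # at the same component within this stage.
            h = int(hashlib.sha256(f"{stage}:{addr}".encode()).hexdigest(), 16) % p
            buckets[h].append((addr, value))
        # Each component combines locally, shrinking the message population.
        messages = []
        for received in buckets.values():
            messages.extend(combine(received))
    # Final delivery: at most one message per destination address remains.
    return combine(messages)
```

For example, eight messages to address `"x"` and four to `"y"`, recirculated through stages of 4 and then 2 components, are reduced to one combined message per address.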




Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Leslie G. Valiant
    Aiken Computation Laboratory, Harvard University, Cambridge