A Parallel Symbolic Computation Environment: Structures and Mechanics

  • ’Mantŝika Matooane
  • Arthur Norman
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1685)


We describe a set of representations for polynomials and sparse matrices suited to fine-grain parallelism on a distributed-memory multiprocessor system. Our aim is to support the use of supercomputers with this style of architecture to perform computations that would exceed the main-memory capacity of more traditional computers: although such systems have very high-performance communication networks, it is still essential to prevent any one part of the network from becoming a bottleneck. We use randomised data placement both to avoid hot-spots in the communication patterns and to balance, in a probabilistic sense, the memory load placed on each processing element. The expected application areas for such a system are those where intermediate expression swell means that the huge primary memory available on MPP systems is needed if the smaller final result is to be computed at all.
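The randomised placement the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the processor count `NUM_PES`, the `owner` function, and the representation of a term by its exponent vector are all assumptions made for the example. Hashing each term's exponent vector to a processing element scatters terms effectively at random, so in expectation every PE holds about the same number of terms and no PE becomes a communication hot-spot.

```python
import hashlib
from collections import Counter

NUM_PES = 64  # hypothetical number of processing elements

def owner(exponent_vector):
    """Map a polynomial term, identified by its exponent vector, to a
    processing element via a hash. Placement is effectively random, so
    memory load balances probabilistically across the PEs."""
    key = ",".join(map(str, exponent_vector)).encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_PES

# Distribute the 8000 terms of a dense cube of exponents in three variables:
terms = [(i, j, k) for i in range(20) for j in range(20) for k in range(20)]
load = Counter(owner(t) for t in terms)

# Each PE should hold roughly len(terms) / NUM_PES = 125 terms.
print(min(load.values()), max(load.values()))
```

Because the owner of a term is a pure function of the term itself, any PE can compute where a partner's term lives without a directory lookup, which is what keeps the scheme cheap on a distributed-memory machine.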





Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • ’Mantŝika Matooane (1)
  • Arthur Norman (1)
  1. University of Cambridge Computer Laboratory, Cambridge, UK
