Abstract
Space-efficient data structures for sparse matrices typically yield programs in which not all data dependencies can be determined at compile time. Automatic parallelization of such codes is usually performed at run time, e.g. by applying the inspector-executor technique, which incurs considerable overhead. Program comprehension techniques have been shown to improve automatic parallelization of dense matrix computations. We investigate how this approach can be generalized to sparse matrix codes and propose a speculative program comprehension and parallelization method. The placement of parallelized run-time tests is supported by a static data flow analysis framework.
For the full version of this paper see http://www.informatik.uni-trier.de/~kessler/sparamat
References
R. Barrett, M. Berry, T. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. van der Vorst. Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods. SIAM, 1994.
I. S. Duff. MA28: a set of Fortran subroutines for sparse unsymmetric linear equations. Tech. Rept. AERE R8730, HMSO, London. Sources at netlib [7], 1977.
R. Grimes. SPARSE-BLAS basic linear algebra subroutines for sparse matrices, written in Fortran77. Source code available via netlib [7], 1984.
C. W. Keßler. Pattern-driven Automatic Parallelization. Scientific Programming, 5:251–274, 1996.
K. Kundert. SPARSE 1.3 package of routines for sparse matrix LU factorization, written in C. Source code available via netlib [7], 1988.
R. Mirchandaney, J. Saltz, R. Smith, D. Nicol, and K. Crowley. Principles of runtime support for parallel processors. In Proc. 2nd ACM Int. Conf. on Supercomputing, pages 140–152. ACM Press, July 1988.
NETLIB. Collection of free scientific software. Accessible by anonymous ftp to netlib2.cs.utk.edu or netlib.no, or by e-mail "send index" to netlib@netlib.no.
L. Rauchwerger and D. Padua. The Privatizing DOALL Test: A Run-Time Technique for DOALL Loop Identification and Array Privatization. In Proc. 8th ACM Int. Conf. on Supercomputing, pages 33–43. ACM Press, July 1994.
M. Ujaldon, E. Zapata, S. Sharma, and J. Saltz. Parallelization Techniques for Sparse Matrix Applications. J. of Parallel and Distr. Computing, 38(2), 1996.
H. Zima and B. Chapman. Supercompilers for Parallel and Vector Computers. ACM Press Frontier Series. Addison-Wesley, 1990.
Z. Zlatev. Computational Methods for General Sparse Matrices. Kluwer, 1991.
Copyright information
© 1997 Springer-Verlag Berlin Heidelberg
Cite this paper
Keßler, C.W. (1997). Applicability of program comprehension to sparse matrix computations. In: Lengauer, C., Griebl, M., Gorlatch, S. (eds) Euro-Par'97 Parallel Processing. Euro-Par 1997. Lecture Notes in Computer Science, vol 1300. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0002755
DOI: https://doi.org/10.1007/BFb0002755
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-63440-9
Online ISBN: 978-3-540-69549-3