
Implementing Linear Algebra Routines on Multi-core Processors with Pipelining and a Look Ahead

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4699)

Abstract

Linear algebra algorithms commonly encapsulate their parallelism in the Basic Linear Algebra Subprograms (BLAS). This solution relies on the fork-join model of parallel execution, which may yield suboptimal performance on current and future generations of multi-core processors. To overcome the shortcomings of this approach, a pipelined model of parallel execution is presented, and the idea of look-ahead is employed to suppress the negative effects of the sequential formulation of the algorithms. Application to the one-sided matrix factorizations LU, Cholesky, and QR is described. A shared-memory implementation using POSIX threads is presented.




Author information

Authors: J. Kurzak, J. Dongarra

Editor information

Editors: Bo Kågström, Erik Elmroth, Jack Dongarra, Jerzy Waśniewski


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kurzak, J., Dongarra, J. (2007). Implementing Linear Algebra Routines on Multi-core Processors with Pipelining and a Look Ahead. In: Kågström, B., Elmroth, E., Dongarra, J., Waśniewski, J. (eds) Applied Parallel Computing. State of the Art in Scientific Computing. PARA 2006. Lecture Notes in Computer Science, vol 4699. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-75755-9_18


  • DOI: https://doi.org/10.1007/978-3-540-75755-9_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-75754-2

  • Online ISBN: 978-3-540-75755-9

  • eBook Packages: Computer Science (R0)
