
Exploiting Data Sparsity in Parallel Matrix Powers Computations

  • Conference paper
Parallel Processing and Applied Mathematics (PPAM 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8384)

Abstract

We derive a new parallel communication-avoiding matrix powers algorithm for matrices of the form \(A=D+USV^H\), where \(D\) is sparse and \(USV^H\) has low rank and is possibly dense. We demonstrate that, with respect to the cost of computing \(k\) sparse matrix-vector multiplications, our algorithm asymptotically reduces the parallel latency by a factor of \(O(k)\) for small additional bandwidth and computation costs. Using problems from real-world applications, our performance model predicts up to \(13\times \) speedups on petascale machines.
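To make the splitting concrete, the sketch below builds the monomial basis \([x, Ax, \ldots, A^kx]\) for \(A = D + USV^H\) by applying the sparse part \(D\) and the low-rank correction \(USV^H\) separately, so the dense term is only ever touched through small \(r\)-dimensional intermediates. This is an illustrative serial sketch of the data-sparse splitting described in the abstract, not the parallel communication-avoiding algorithm of the paper; all function names and test data below are placeholders.

```python
import numpy as np
import scipy.sparse as sp

def matrix_powers_split(D, U, S, V, x, k):
    """Return the n x (k+1) basis [x, A x, ..., A^k x] for A = D + U S V^H,
    applying A only through its data-sparse pieces: a sparse product with D
    plus a low-rank correction U (S (V^H v))."""
    vecs = [x]
    for _ in range(k):
        v = vecs[-1]
        correction = U @ (S @ (V.conj().T @ v))  # only r-dimensional intermediates
        vecs.append(D @ v + correction)          # sparse SpMV + low-rank update
    return np.column_stack(vecs)

# Toy usage with a random sparse D and a rank-2 term U S V^H (placeholder data).
n, r, k = 1000, 2, 5
rng = np.random.default_rng(0)
D = sp.random(n, n, density=1e-3, format="csr", random_state=rng) + sp.eye(n, format="csr")
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
S = np.eye(r)
x = rng.standard_normal(n)
K = matrix_powers_split(D, U, S, V, x, k)
print(K.shape)  # (1000, 6)
```

In a distributed setting, the point of this splitting is that the \(V^H v\) products are small reductions, which is what allows the paper's algorithm to avoid the latency cost of \(k\) separate dense communications.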




Author information

Correspondence to Nicholas Knight.


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Knight, N., Carson, E., Demmel, J. (2014). Exploiting Data Sparsity in Parallel Matrix Powers Computations. In: Wyrzykowski, R., Dongarra, J., Karczewski, K., Waśniewski, J. (eds) Parallel Processing and Applied Mathematics. PPAM 2013. Lecture Notes in Computer Science, vol 8384. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-55224-3_2


  • DOI: https://doi.org/10.1007/978-3-642-55224-3_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-55223-6

  • Online ISBN: 978-3-642-55224-3

  • eBook Packages: Computer Science, Computer Science (R0)
