
A Flexible Parallel Runtime for Large Scale Block-Based Matrix Multiplication

  • Conference paper
Web Technologies and Applications (APWeb 2012)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7234)


Abstract

Block-based matrix multiplication plays an important role in statistical computing, yet multiplying matrices at large scale remains difficult in data statistics and analysis. This paper proposes a flexible parallel runtime for large scale block-based matrix multiplication. Within the MapReduce framework, four parallel matrix multiplication methods are discussed: three use HDFS for storage, and one uses cloud storage. The parallel runtime selects the appropriate block-based multiplication method for a given workload. Experiments with large scale randomly generated data and a public matrix collection show that the proposed runtime is effective at selecting the best matrix multiplication strategy.
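The paper's four MapReduce methods are not reproduced on this page, but the general block-based scheme the abstract refers to can be sketched locally. The following Python sketch simulates the two phases in-process: a "map" phase that emits partial block products keyed by output block position, and a "reduce" phase that sums them. All function names (`split_blocks`, `block_matmul`, etc.) are illustrative, not from the paper, and a real deployment would distribute these phases over a Hadoop cluster rather than run them in one process.

```python
from collections import defaultdict
from functools import reduce

def split_blocks(M, rows, cols, bs):
    """Partition a dense row-major matrix into bs x bs blocks keyed by block index."""
    return {
        (bi // bs, bj // bs): [
            [M[i][j] for j in range(bj, min(bj + bs, cols))]
            for i in range(bi, min(bi + bs, rows))
        ]
        for bi in range(0, rows, bs)
        for bj in range(0, cols, bs)
    }

def dense_mul(X, Y):
    """Naive dense product of two small in-memory blocks."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def block_add(X, Y):
    """Element-wise sum of two equally shaped blocks (the reduce step)."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def block_matmul(A, B, n, m, p, bs):
    """Block-based product of an n x m matrix A and an m x p matrix B."""
    a_blocks = split_blocks(A, n, m, bs)
    b_blocks = split_blocks(B, m, p, bs)
    # "Map" phase: each pair of blocks sharing an inner index k emits a
    # partial product keyed by the output block position (i, j).
    partials = defaultdict(list)
    for (i, k), Ablk in a_blocks.items():
        for j in range((p + bs - 1) // bs):
            partials[(i, j)].append(dense_mul(Ablk, b_blocks[(k, j)]))
    # "Reduce" phase: sum the partial products for each output block,
    # then stitch the blocks back into a full n x p result matrix.
    C = [[0] * p for _ in range(n)]
    for (i, j), plist in partials.items():
        blk = reduce(block_add, plist)
        for r, row in enumerate(blk):
            for c, v in enumerate(row):
                C[i * bs + r][j * bs + c] = v
    return C
```

The block size `bs` is the tuning knob a runtime like the one proposed here would choose: larger blocks mean fewer map/reduce pairs but more memory per task, smaller blocks mean the reverse.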





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, K., Song, S., Zhou, N., Ma, Y. (2012). A Flexible Parallel Runtime for Large Scale Block-Based Matrix Multiplication. In: Wang, H., et al. Web Technologies and Applications. APWeb 2012. Lecture Notes in Computer Science, vol 7234. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29426-6_8


  • DOI: https://doi.org/10.1007/978-3-642-29426-6_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29425-9

  • Online ISBN: 978-3-642-29426-6

  • eBook Packages: Computer Science (R0)
