Abstract
Large-scale statistical computing is crucial for extracting useful information from huge amounts of data, both for large companies and for research scientists. For decades, the solutions developed by the high-performance computing community have been largely limited to dedicated clusters or high-end machines. Because the cost of maintaining such clusters is prohibitive, people have started to look at cloud computing, where a cluster can be rented on demand and paid for as it is used. In a cloud setting, system features such as fault tolerance and scalability become important. In this paper, we propose a simple and universal parallel execution model for large matrix workloads, and we implement it in the Hadoop MapReduce framework using map-only jobs. Experiments show that our Hadoop-based execution engine reduces the execution time of matrix multiplication by half compared with previous work.
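To make the map-only formulation concrete, the sketch below shows one way such an engine could be structured: each map task independently computes one output block of C = A x B by reading the blocks of A and B it needs from HDFS, so no shuffle or reduce phase is required. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the HDFS file layout (A_i_k, B_k_j), the block size BS, the block count K, and all class names are hypothetical.

// A minimal sketch of a map-only block matrix multiply on Hadoop.
// Assumptions (not from the paper): blocks of A and B are stored on HDFS
// as binary files of doubles named A_i_k and B_k_j; the job input is a
// text file with one "i j" output-block coordinate per line; all blocks
// are square of size BS; class and path names are hypothetical.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class BlockMultiply {
  static final int BS = 1024; // block size, an assumed tuning parameter
  static final int K  = 8;    // number of blocks along the shared dimension

  public static class BlockMapper
      extends Mapper<LongWritable, Text, NullWritable, NullWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      // Each input line names one output block C(i,j) this task owns.
      String[] t = value.toString().trim().split("\\s+");
      int i = Integer.parseInt(t[0]), j = Integer.parseInt(t[1]);
      FileSystem fs = FileSystem.get(ctx.getConfiguration());
      double[] c = new double[BS * BS]; // accumulator for block C(i,j)
      for (int k = 0; k < K; k++) {     // C(i,j) += A(i,k) * B(k,j)
        double[] a = readBlock(fs, new Path("A/A_" + i + "_" + k));
        double[] b = readBlock(fs, new Path("B/B_" + k + "_" + j));
        for (int r = 0; r < BS; r++)
          for (int m = 0; m < BS; m++) {
            double av = a[r * BS + m];
            for (int s = 0; s < BS; s++)
              c[r * BS + s] += av * b[m * BS + s];
          }
      }
      writeBlock(fs, new Path("C/C_" + i + "_" + j), c);
    }

    private static double[] readBlock(FileSystem fs, Path p) throws IOException {
      double[] blk = new double[BS * BS];
      try (FSDataInputStream in = fs.open(p)) {
        for (int x = 0; x < blk.length; x++) blk[x] = in.readDouble();
      }
      return blk;
    }

    private static void writeBlock(FileSystem fs, Path p, double[] blk)
        throws IOException {
      try (FSDataOutputStream out = fs.create(p, true)) {
        for (double v : blk) out.writeDouble(v);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "map-only block multiply");
    job.setJarByClass(BlockMultiply.class);
    job.setMapperClass(BlockMapper.class);
    job.setNumReduceTasks(0); // map-only: no shuffle, no reduce phase
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(NullOutputFormat.class);
    TextInputFormat.addInputPath(job, new Path(args[0]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Setting the reducer count to zero keeps every task independent, which is what makes fault tolerance simple in this style of engine: a failed map task can be re-executed in isolation without recomputing any other output block.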
Acknowledgement
Our research was supported by the Natural Science Foundation of China under Grant No. 61462037 and the Natural Science Foundation of Jiangxi under Grant No. 20142BAB217014.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Deng, S., Xu, X., Zhou, F., Weng, H., Luo, W.: General Parallel Execution Model for Large Matrix Workloads. In: Liu, Q., Mısır, M., Wang, X., Liu, W. (eds.) The 8th International Conference on Computer Engineering and Networks (CENet2018). Advances in Intelligent Systems and Computing, vol. 905. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-14680-1_3