
DistSNNMF: Solving Large-Scale Semantic Topic Model Problems on HPC for Streaming Texts

  • Fatma S. Gadelrab
  • Rowayda A. Sadek
  • Mohamed H. Haggag
Chapter
Part of the Studies in Systems, Decision and Control book series (SSDC, volume 295)

Abstract

Scalability requires reducing the time and space complexity of topic modeling algorithms. Although parallel algorithms on multi-processor architectures achieve low time and space complexity, the communication cost among processors leads to a serious scalability problem. Moreover, when topic modeling is applied to large-scale streaming data, it still suffers from limited model capacity. To tackle these problems, this chapter proposes DistSNNMF, a distributed version of our prior topic modeling algorithm, SNNMF. The training task is split into many sub-batch tasks distributed across multiple worker nodes, so that the whole training process is accelerated through cooperation on a data-parallel platform. Extensive experiments on real-world datasets demonstrate the usability and scalability of the proposed algorithm.
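The abstract describes the distribution scheme only at a high level. Below is a minimal sketch of the general sub-batch idea, assuming standard multiplicative NMF updates over a documents-by-terms matrix and simulating worker nodes as document sub-batches in plain NumPy; the function names, the update rule, and the aggregation step are illustrative assumptions, not the published DistSNNMF implementation.

import numpy as np

def partial_stats(V_b, W_b, H):
    """One worker's contribution to the shared H update: the partial
    numerator W_b^T V_b and denominator (W_b^T W_b) H, computed from
    its local document sub-batch only."""
    return W_b.T @ V_b, (W_b.T @ W_b) @ H

def distributed_nmf(V, k, n_workers=4, iters=50, eps=1e-9, seed=0):
    """Factorize the document-term matrix V (docs x terms) as W @ H with
    nonnegative factors, partitioning documents into sub-batches that
    stand in for worker nodes."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W, H = rng.random((n, k)), rng.random((k, m))
    batches = np.array_split(np.arange(n), n_workers)
    for _ in range(iters):
        # Local step: each worker updates only its own rows of the
        # document-topic matrix W (no cross-worker communication).
        for idx in batches:
            W[idx] *= (V[idx] @ H.T) / (W[idx] @ H @ H.T + eps)
        # Global step: reduce the partial statistics from all workers,
        # then apply one multiplicative update to the shared H.
        num = np.zeros((k, m))
        den = np.zeros((k, m))
        for idx in batches:
            p_num, p_den = partial_stats(V[idx], W[idx], H)
            num += p_num
            den += p_den
        H *= num / (den + eps)
    return W, H

if __name__ == "__main__":
    V = np.random.default_rng(1).random((200, 100))  # toy nonnegative counts
    W, H = distributed_nmf(V, k=10)
    print("Frobenius reconstruction error:", np.linalg.norm(V - W @ H))

In a genuinely distributed run, the per-batch loops would execute concurrently on separate nodes, with only the two k-by-m aggregate matrices exchanged per iteration; keeping that exchanged state small relative to the corpus is what keeps the communication cost, the bottleneck the abstract highlights, in check.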

Keywords

Distributed topic model · Scalability topic model · Parallel topic model · Scalable topic model

Acknowledgements

This work was supported by computational resources provided by the Bibliotheca Alexandrina on its High-Performance Computing (HPC) infrastructure (see hpc.bibalex.org).

We thank the anonymous reviewers for their constructive comments. We also thank the supercomputer unit at the Bibliotheca Alexandrina (BA) for supplying infrastructure and timely support for our experiments.

Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Department of Information Technology, Faculty of Computers and Artificial Intelligence, Helwan University, Cairo, Egypt
  2. Department of Computer Science, Faculty of Computers and Artificial Intelligence, Helwan University, Cairo, Egypt