
Assessing the Effectiveness of Topic Modeling Algorithms in Discovering Generic Label with Description

  • Shadikur Rahman
  • Syeda Sumbul Hossain (Email author)
  • Md. Shohel Arman
  • Lamisha Rawshan
  • Tapushe Rabaya Toma
  • Fatama Binta Rafiq
  • Khalid Been Md. Badruzzaman
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1130)

Abstract

Analyzing short texts or documents with topic modeling has become a popular solution for coping with the ever-increasing number of documents produced in everyday life. To handle large document collections, several topic modeling algorithms are used, e.g., LDA, LSI, pLSI, and NMF. In this study, we apply LDA, LSI, and NMF, and use the lexical database WordNet (synsets) to generate candidate labels for topic labeling. We then compare the effectiveness of these topic modeling algorithms on short documents. Among them, LDA gives the best results in terms of WUP similarity. This study helps in selecting a suitable algorithm for topic labeling, so that the meaning of topics can be identified more easily.
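The sketch below is a minimal illustration of the kind of pipeline the abstract describes, not the authors' exact implementation: it fits an LDA model with gensim, draws candidate labels from the WordNet synsets of each topic's top words via NLTK, and ranks candidates by Wu–Palmer (WUP) similarity. The toy corpus, topic count, and label-selection rule are assumptions made for the example.

```python
# Minimal sketch (assumed pipeline, not the paper's exact code): LDA topics via gensim,
# candidate labels from WordNet synsets, ranking by Wu-Palmer (WUP) similarity via NLTK.
# Requires: pip install gensim nltk; then nltk.download('wordnet').

from gensim import corpora, models
from nltk.corpus import wordnet as wn

# Toy pre-tokenized "short documents" (illustrative only).
docs = [
    ["topic", "model", "document", "label"],
    ["label", "wordnet", "similarity", "topic"],
    ["matrix", "factorization", "document", "term"],
]

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, random_state=0)


def wup(word_a, word_b):
    """WUP similarity between the first WordNet synsets of two words (0.0 if undefined)."""
    syns_a, syns_b = wn.synsets(word_a), wn.synsets(word_b)
    if not syns_a or not syns_b:
        return 0.0
    return syns_a[0].wup_similarity(syns_b[0]) or 0.0


for topic_id in range(lda.num_topics):
    top_words = [w for w, _ in lda.show_topic(topic_id, topn=4)]

    # Candidate labels: lemma names drawn from the synsets of the topic's top words.
    candidates = {
        lemma.name()
        for w in top_words
        for syn in wn.synsets(w)
        for lemma in syn.lemmas()
    }

    # Pick the candidate with the highest total WUP similarity to the topic's top words.
    best = max(candidates, key=lambda c: sum(wup(c, w) for w in top_words), default=None)
    print(f"Topic {topic_id}: {top_words} -> label: {best}")
```

To reproduce the comparison sketched in the abstract, the same labeling and WUP-scoring step would be repeated with LSI and NMF models (gensim provides LsiModel and Nmf) on the identical corpus.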

Keywords

Topic modeling · LDA · NMF · LSI · Topic labeling

References

  1. Aker, A., Paramita, M., Kurtic, E., Funk, A., Barker, E., Hepple, M., Gaizauskas, R.: Automatic label generation for news comment clusters. In: Proceedings of the 9th International Natural Language Generation Conference, pp. 61–69 (2016)
  2. Basave, A.E.C., He, Y., Xu, R.: Automatic labelling of topic models learned from twitter by summarisation. In: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 618–624 (2014)
  3. Bhatia, S., Lau, J.H., Baldwin, T.: Automatic labelling of topics with neural embeddings. arXiv preprint arXiv:1612.05340 (2016)
  4. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)
  5. Brown, P.F., Desouza, P.V., Mercer, R.L., Pietra, V.J.D., Lai, J.C.: Class-based n-gram models of natural language. Comput. Linguist. 18(4), 467–479 (1992)
  6. Chang, J., Gerrish, S., Wang, C., Boyd-Graber, J.L., Blei, D.M.: Reading tea leaves: how humans interpret topic models. In: Advances in Neural Information Processing Systems, pp. 288–296 (2009)
  7. Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., Harshman, R.: Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. 41(6), 391–407 (1990)
  8. Hofmann, T.: Probabilistic latent semantic analysis. In: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pp. 289–296. Morgan Kaufmann Publishers Inc. (1999)
  9. Hossain, S.S., Ul-Hassan, R., Rahman, S.: Polynomial topic distribution with topic modeling for generic labeling. In: Communications in Computer and Information Science, vol. 1046, pp. 413–419. Springer (2019)
  10. Hulpus, I., Hayes, C., Karnstedt, M., Greene, D.: Unsupervised graph-based topic labelling using DBpedia. In: Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, pp. 465–474. ACM (2013)
  11. Kou, W., Li, F., Baldwin, T.: Automatic labelling of topic models using word vectors and letter trigram vectors. In: AIRS, pp. 253–264. Springer (2015)
  12. Lau, J.H., Grieser, K., Newman, D., Baldwin, T.: Automatic labelling of topic models. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 1536–1545. Association for Computational Linguistics (2011)
  13. Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature 401(6755), 788 (1999)
  14. Mei, Q., Shen, X., Zhai, C.X.: Automatic labeling of multinomial topic models. In: Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 490–499. ACM (2007)
  15. Miller, G.A.: WordNet: a lexical database for English. Commun. ACM 38(11), 39–41 (1995)
  16. Niu, L., Dai, X., Zhang, J., Chen, J.: Topic2Vec: learning distributed representations of topics. In: 2015 International Conference on Asian Language Processing (IALP), pp. 193–196. IEEE (2015)
  17. Teh, Y.W., Jordan, M.I., Beal, M.J., Blei, D.M.: Sharing clusters among related groups: hierarchical Dirichlet processes. In: Advances in Neural Information Processing Systems, pp. 1385–1392 (2005)
  18. Wu, Z., Palmer, M.: Verbs semantics and lexical selection. In: Proceedings of the 32nd Annual Meeting on Association for Computational Linguistics, pp. 133–138. Association for Computational Linguistics (1994)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Shadikur Rahman (1)
  • Syeda Sumbul Hossain (1) (Email author)
  • Md. Shohel Arman (1)
  • Lamisha Rawshan (1)
  • Tapushe Rabaya Toma (1)
  • Fatama Binta Rafiq (1)
  • Khalid Been Md. Badruzzaman (1)

  1. Daffodil International University, Dhaka, Bangladesh
