On the Equivalence of Inductive Content Analysis and Topic Modeling

  • Aneesha Bakharia
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1112)

Abstract

Inductive content analysis is a research task in which a researcher manually reads text and identifies the categories or themes that emerge from a document corpus. It is usually performed as part of a formal qualitative research methodology such as Grounded Theory. Topic modeling algorithms discover the latent topics in a document corpus, and there has been a general assumption that topic modeling is therefore a suitable algorithmic aid for inductive content analysis. This short paper discusses the findings of a between-subjects experiment that evaluated the differences between the topics identified by manual coders and those produced by topic modeling algorithms. The findings show that the topic modeling algorithm was comparable to the human coders only for broad topics, and that topic modeling algorithms would require additional domain knowledge to identify more fine-grained topics. The paper also reports issues that impede the use of topic modeling within the quantitative ethnography process, such as topic interpretation and topic size quantification.
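
For context only, the sketch below is not the paper's experimental setup; it illustrates how a topic modeling algorithm such as non-negative matrix factorization can surface candidate topics from a document corpus. The choice of scikit-learn, the toy documents, and the topic count are assumptions made purely for illustration.

# A minimal, illustrative Python sketch (assumed setup, not the paper's
# experiment): fit a non-negative matrix factorization (NMF) topic model
# with scikit-learn and print the top terms of each discovered topic.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus; a real inductive content analysis would use the full document set.
documents = [
    "students discussed the assessment criteria for the group project",
    "the tutorial covered matrix factorization and topic models",
    "forum feedback focused on assignment deadlines and group work",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)   # document-term matrix

model = NMF(n_components=2, init="nndsvd", random_state=0)  # topic count assumed
doc_topic = model.fit_transform(X)        # document-topic weights
topic_term = model.components_            # topic-term weights

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(topic_term):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")

Interpreting such top-term lists, and deciding how much of the corpus each topic covers, are exactly the steps the paper identifies as problematic when topic models are used in place of manual coding.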

Keywords

Topic modeling · Inductive content analysis

Notes

Acknowledgement

The experiments described in this paper were conducted as part of my doctoral degree at Queensland University of Technology. I would like to thank and acknowledge my supervisors Peter Bruza, Jim Watters, Bhuva Narayan and Laurianne Sitbon.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. The University of Queensland, Brisbane, Australia
