Grouped Text Clustering Using Non-Parametric Gaussian Mixture Experts

  • Conference paper
  • First Online:
PRICAI 2016: Trends in Artificial Intelligence (PRICAI 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9810)

Abstract

Text clustering has many applications across a wide range of areas. In practice, texts have often already been grouped, or partially grouped, before they are clustered. Texts from the same group are related to each other and concentrate on a few topics, so this group information is valuable for clustering. In this paper, we propose a model called Non-Parametric Gaussian Mixture Experts that exploits group information to obtain better clustering results. After texts are converted to vectors by semantic embedding, our model automatically infers a proper number of clusters for every group and for the whole corpus. We develop an online variational inference algorithm that is scalable and can handle incremental datasets. We test the algorithm on several text datasets, and the results demonstrate that our model achieves significantly better cluster quality than a number of classical and recent text clustering methods.
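
The abstract's core idea, inferring the number of clusters per group rather than fixing it in advance, can be sketched with off-the-shelf tools. The snippet below is a minimal illustration under stated assumptions, not the authors' model: it uses TF-IDF plus truncated SVD as a stand-in for the semantic embeddings (e.g. paragraph vectors) and fits scikit-learn's variational Dirichlet-process Gaussian mixture separately to each group. The function name `cluster_grouped_texts` and parameters such as `embed_dim` and `max_clusters` are illustrative, not from the paper.

```python
# A minimal sketch, not the authors' model: per-group clustering with a
# truncated Dirichlet-process Gaussian mixture from scikit-learn, which
# infers an effective number of clusters by pruning low-weight components.
# TF-IDF + truncated SVD stands in for the paper's semantic embeddings;
# all names and parameters here are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.mixture import BayesianGaussianMixture


def cluster_grouped_texts(grouped_texts, embed_dim=50, max_clusters=20):
    """grouped_texts: dict mapping a group id to a list of raw documents."""
    # Embed the whole corpus once, so every group lives in the same space.
    groups, docs = [], []
    for g, texts in grouped_texts.items():
        groups += [g] * len(texts)
        docs += list(texts)
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    X = TruncatedSVD(n_components=embed_dim, random_state=0).fit_transform(tfidf)
    groups = np.array(groups)

    # Fit one variational DP mixture per group; components whose posterior
    # weight collapses toward zero are effectively unused, so each group
    # ends up with its own cluster count instead of a fixed K.
    labels = {}
    for g in grouped_texts:
        Xg = X[groups == g]
        dpgmm = BayesianGaussianMixture(
            n_components=min(max_clusters, len(Xg)),  # truncation level, not the final K
            weight_concentration_prior_type="dirichlet_process",
            covariance_type="diag",
            max_iter=500,
            random_state=0,
        ).fit(Xg)
        labels[g] = dpgmm.predict(Xg)
    return labels
```

Note that this sketch fits each group's mixture in batch and independently; the paper's contribution is an online variational inference scheme that scales to incremental datasets and ties the groups together through the mixture-of-experts structure, which the sketch does not reproduce.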

Notes

  1. https://archive.ics.uci.edu/ml/datasets/.

  2. http://news.google.com/. We obtained it from the author of [12].

  3. http://qwone.com/~jason/20Newsgroups/.

  4. http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html.
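
Footnote 3 points to the 20 Newsgroups corpus. Purely as an illustration (the paper's exact experimental grouping is not described here), grouped input for the sketch given after the abstract could be assembled by treating each newsgroup as a pre-existing group:

```python
# Hypothetical data preparation: treat each newsgroup as one pre-existing
# group, matching the setting where texts arrive already (partially) grouped.
from collections import defaultdict
from sklearn.datasets import fetch_20newsgroups

news = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
grouped_texts = defaultdict(list)
for text, label in zip(news.data, news.target):
    grouped_texts[news.target_names[label]].append(text)

labels = cluster_grouped_texts(grouped_texts)  # sketch defined earlier
```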

References

  1. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)

  2. Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems, pp. 3111–3119 (2013)

  3. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)

  4. Le, Q.V., Mikolov, T.: Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053 (2014)

  5. Pennington, J., Socher, R., Manning, C.D.: GloVe: global vectors for word representation. In: Proceedings of Empirical Methods in Natural Language Processing (EMNLP 2014), vol. 12 (2014)

  6. Rasmussen, C.E.: The infinite Gaussian mixture model. In: NIPS, vol. 12, pp. 554–560 (1999)

  7. Teh, Y.W., Jordan, M.I., Beal, M.J., Blei, D.M.: Hierarchical Dirichlet processes. J. Am. Stat. Assoc. 101(476), 1566–1581 (2006)

  8. Sethuraman, J.: A constructive definition of Dirichlet priors. Technical report, DTIC Document (1991)

  9. Hoffman, M.D., Blei, D.M., Wang, C., Paisley, J.: Stochastic variational inference. J. Mach. Learn. Res. 14(1), 1303–1347 (2013)

  10. Blei, D.M., Jordan, M.I.: Variational inference for Dirichlet process mixtures. Bayesian Anal. 1(1), 121–143 (2006)

  11. Amari, S.-I.: Natural gradient works efficiently in learning. Neural Comput. 10(2), 251–276 (1998)

  12. Yin, J., Wang, J.: A Dirichlet multinomial mixture model-based approach for short text clustering. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 233–242. ACM (2014)

  13. Kuang, D., Park, H.: Fast rank-2 nonnegative matrix factorization for hierarchical document clustering. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 739–747. ACM (2013)

  14. Blei, D.M., McAuliffe, J.D.: Supervised topic models. In: Advances in Neural Information Processing Systems (2007)

  15. Perotte, A.J., Wood, F., Elhadad, N., Bartlett, N.: Hierarchically supervised latent Dirichlet allocation. In: Advances in Neural Information Processing Systems, pp. 2609–2617 (2011)

Author information

Corresponding author

Correspondence to Yong Tian.

Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Tian, Y., Rong, Y., Yao, Y., Liu, W., Song, J. (2016). Grouped Text Clustering Using Non-Parametric Gaussian Mixture Experts. In: Booth, R., Zhang, ML. (eds) PRICAI 2016: Trends in Artificial Intelligence. PRICAI 2016. Lecture Notes in Computer Science, vol 9810. Springer, Cham. https://doi.org/10.1007/978-3-319-42911-3_42

  • DOI: https://doi.org/10.1007/978-3-319-42911-3_42

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-42910-6

  • Online ISBN: 978-3-319-42911-3

  • eBook Packages: Computer Science (R0)
