
Scientometrics, Volume 121, Issue 3, pp 1385–1406

Personal research idea recommendation using research trends and a hierarchical topic model

  • Hei-Chia Wang
  • Tzu-Ting Hsu
  • Yunita Sari

Abstract

In an era of rapid technological advance, keeping up with current trends is an important task for every researcher. How to efficiently find suitable research topics while the number of published papers grows rapidly is therefore worth exploring. To address this problem, some researchers have attempted to find research ideas with topic detection and tracking methods. However, these methods do not consider a user's background knowledge and preferences, and they express a topic with general keywords, which does not effectively help researchers develop new research ideas. Existing studies support that the title expresses a paper's research idea best. This study builds on that finding and proposes an automatic title generation method that combines personalized recommendation with topic trend analysis. First, it uses hierarchical latent tree analysis to discover the topic structure, and the representative keywords of each topic, hidden in existing research. Second, a hybrid recommendation method considers topic trends, topic popularity, and user preferences. Finally, a natural language generation algorithm suited to academic paper titles converts the recommended keywords into fluent title sentences tailored to the user. Experiments show that adding Google Trends indicators and personal factors improves the performance of topic recommendation. The automatic title generation method, which combines template-based and statistical-information approaches, performs well in both grammatical correctness and semantic expression. Moreover, the generated titles are indeed more inspirational than simple keyword lists for users developing new research ideas.
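The pipeline the abstract describes — score candidate topics by mixing a trend signal, overall popularity, and the user's preference, then slot the winning keywords into a title template — can be sketched as follows. This is a minimal illustration only: the weights, the field names (`trend`, `popularity`, `preference`), and the template are assumptions for the sketch, not values taken from the paper.

```python
# Hypothetical sketch of a hybrid topic-recommendation step followed by
# template-based title generation. Weights and fields are illustrative.

def hybrid_score(topic, w_trend=0.4, w_pop=0.3, w_pref=0.3):
    """Linearly combine trend, popularity, and user-preference signals."""
    return (w_trend * topic["trend"]
            + w_pop * topic["popularity"]
            + w_pref * topic["preference"])

def recommend(topics, k=1):
    """Return the k highest-scoring candidate topics."""
    return sorted(topics, key=hybrid_score, reverse=True)[:k]

def fill_template(template, keywords):
    """Template-based generation: slot keywords into a fixed title pattern."""
    return template.format(*keywords)

# Toy candidate topics with pre-normalized signals in [0, 1].
topics = [
    {"name": "topic modeling", "trend": 0.6, "popularity": 0.8, "preference": 0.9},
    {"name": "neural MT",      "trend": 0.9, "popularity": 0.7, "preference": 0.2},
]

best = recommend(topics, k=1)[0]  # "topic modeling" wins on the mixed score
title = fill_template("A {} approach to {} for {}",
                      ["hierarchical", "topic detection", "idea recommendation"])
```

A personal recommender would learn the weights and preference scores from the user's publication history rather than fixing them by hand; the sketch only shows how the three signals could be combined.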

Keywords

Hierarchical topic model · Personalized recommendation system · Automatic title generation

Acknowledgements

This research is based on work supported by the Taiwan Ministry of Science and Technology under Grant Nos. MOST 107-2410-H-006-040-MY3 and MOST 108-2511-H-006-009.


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2019

Authors and Affiliations

  1. Institute of Information Management, National Cheng Kung University, Tainan, Taiwan
  2. Department of Computer Sciences and Electronics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, Yogyakarta, Indonesia
